Cool readings and excerpts from this week:
Lots of news in January about media downsizing and layoffs. The news industry in the U.S. looks like it is in a precarious position, but I feel like it’s also been this way for a while now. In the meantime, Joshua Benton of Nieman Lab reminds us that while it can feel like the industry is in freefall, sometimes outlets simply collapse because they were ill-conceived, ego-driven projects that were terribly run. Good old bad-business-strategy! It’s a really informative read, and here’s my favourite quote:
… it is both comical and infuriating to see Jimmy Finkelstein blaming his failure on those “economic headwinds” and all the ills facing the broader news industry. No, Jimmy — this is on you. The broader economy is doing good-to-great, and there is little about the state of media economics today that wasn’t foreseeable a year ago.
Let’s be clear about this. The Messenger was Finkelstein’s self-conceived “legacy” project. It was born out of his bizarro nostalgia for a media universe long gone. It was his fundamental misunderstanding of web traffic that tied its strategies in knots. (Finkelstein loved to brag about The Messenger’s big pageview numbers — without recognizing the site’s overall web traffic was on par with mid-sized regional newspapers and niche sites with 15-person newsrooms.)
When Finkelstein says ruefully that he’d “exhausted every option available…to raise sufficient capital to reach profitability,” he’s also saying that he didn’t want to put any more capital into such an obvious money loser.
Sometimes, people just don’t know how to run news outlets ¯\_(ツ)_/¯ (yes, this is also about you, Condé Nast).
The Economic Advisory Council to the Prime Minister of India (EAC-PM) just put out a policy brief discussing its approach to regulating AI in the country. This is a timely release, given recent headlines about opaque and inaccurate AI-driven decision-making in governmental allocation schemes.
The policy brief homes in on AI systems as “complex adaptive systems”, meaning that they can be non-deterministic and evolve over time, and that their behaviour changes especially as they are scaled. I think this is a really interesting and fairly distinctive approach, because it moves away from ex-ante assessments of risk and benefit, toward policy that can stay agile and operate at a level abstract enough to deal with even unanticipated risks. A quote summarizing this approach:
As an effective approach, we consider AI to be a complex adaptive system (CAS) with unpredictable emergent behaviours arising from the interactions of multiple components like machine learning and neural networks. Just as small changes can cascade in unpredictable ways in CAS, small tweaks to AI systems can compound into disproportionate impacts that are hard to model long-term. AI development is an evolutionary process with continual interplay between variations in technical components. We are agnostic to risk assessment and predicting the AI evolution cycle. In addition to this, the burden of predictability of pattern and risk does not exist in this regulatory principle, thereby making it truly agile and independent of human-induced bias.
Agility is cool! Accounting for stochasticity is cool! At the same time, I can’t help but wonder why some consideration of ex-ante risk isn’t part of this approach. High-risk AI systems, such as those that denied food and pensions to people in the Al Jazeera reporting I read about, should face requirements that are more fine-grained and stringent than broad-strokes guidance about human oversight or agile regulatory bodies. And human-induced bias is hard to ignore in such circumstances, given that it underlies our interactions with technology whether we realise it or not.
Ultimately, there are some tough questions along those themes that I think such a policy does not account for. Is it okay that the use of AI systems meant officials were mandated to “check whether the algorithmic system approved the eligibility of the applicant [for certain benefits] before making their own decision”? Does this set employees up for over-reliance on algorithmic outputs? How can citizens get recourse when the system, or the human relying on it, messes up? What about accounting for recourse mechanisms in light of the shifting contexts of human stakeholders? Is there ever room to say that a certain application or use case should not be subjected to AI-assisted decision-making? Are any stakeholders – beyond developers – significantly empowered or accounted for in this framework?
This is a really interesting macro-level philosophy, but I also think it relies too heavily on technological determinism to solve the problems we encounter with AI. Technology-centric interventions (e.g. automated monitoring systems, alert mechanisms, databases of algorithms) will not fix technology’s problems. I believe you need to account for human biases among users, and equip stakeholders with actual agency, for AI-assisted decision-making to be truly accountable.
AI2 just put out OLMo, a fully open-source family of LLMs! They’ve shared access to the training data, training code, models, and the evaluation suite as well. This is a really cool move for open science! Nathan Lambert (one of the members of the team) also wrote a little about the significance of this move in the current AI ecosystem, and how it differs from other open models like Llama and Mistral:
OLMo represents the first time in a while (maybe before GPT2) that a state-of-the-art language model is fully transparent and open. While some communities may advocate for different behaviors, the release of the OLMo family represents the first time where many areas of study can be empowered to support a more well rounded discussion around the potential harms and benefits of LLMs. While many language models have been close to meeting the standards of “open”, and are perceived as open by the general public, such as Llama 2 and Mistral, they do not provide access to certain types of work that is needed to make clear arguments around the potential risks.
Very cool, AI2!
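Since the weights themselves are public, here’s a minimal sketch of what kicking the tires might look like via Hugging Face transformers. The model id and generation settings below are my assumptions rather than anything from AI2’s announcement, so check their release notes for the exact checkpoint names and any extra packages a given OLMo version needs:

```python
# Minimal sketch: loading an OLMo checkpoint with Hugging Face transformers.
# Assumptions: the "allenai/OLMo-7B-hf" model id and a transformers version
# with native OLMo support; consult AI2's release for the specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-hf"  # assumed id; substitute whatever AI2 publishes

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a toy prompt.
inputs = tokenizer("Open language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```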