In Silicon Valley speak, “kill zones” are market areas that large companies occupy by buying up the competition at scale, and Big Tech’s latest battleground is artificial intelligence. GAMMAN (a new acronym for Google, Apple, Microsoft, Meta, Amazon, and Nvidia) has an unprecedented appetite for AI startups, attracting the attention of regulators scrutinizing potential antitrust violations.
In August, Google struck a $2.7 billion deal with chatbot startup Character.AI, licensing its technology and bringing its founders back in-house. Amazon completed a $4 billion investment in Anthropic in March and announced another deal with Adept AI, while Microsoft “hired away the head of Inflection AI,” reports the Washington Post.
How will the market for AI technologies evolve if major players control AI research and development, cloud infrastructure, and data centers? What impact could it have on innovation and society?
The non-profit organization Future of Life Institute, founded in 2014 by MIT professor Max Tegmark, Skype co-founder Jaan Tallinn, and UCSC cosmologist Anthony Aguirre, addresses these and other questions surrounding AI. It is committed to ensuring that powerful technologies are developed safely and for the common good.
At Web Summit in Lisbon, Europe’s largest technology conference, the organization hosted an AI reception. There, I met Emilia Javorsky, Director of the Futures Program at FLI, to discuss the societal risks posed by potential big-tech AI monopolies. She breaks down the different dimensions of power concentration and explains what impact it could have on all our lives.
The Overview: In a podcast you mentioned the importance of addressing the risks of AI monopolies. To set the stage, could you break down the different areas of power concentration?
Emilia Javorsky: When we think about power concentration and the current dynamics at the cutting edge of AI, it’s important to delineate what type of AI we are talking about. For example, some smaller applications and tools are very democratic, accessible, and beneficial for different use cases in education or healthcare. Problem-solving in these areas is very doable with those narrower AI systems.
Then there are the large, generalized models developed by the main labs that strive for AGI (artificial general intelligence). Those companies pose a unique power-concentration risk, and that’s for a few different reasons: one is the financial piece.
These large general-purpose AI systems are incredibly expensive to train. They require a lot of capital, and they will generate a lot of capital. So they are systems that will further concentrate financial power in the hands of a few corporate actors.
Only these few companies have the hundreds of millions of dollars needed to train the models and the access to computing power, which makes it very hard for competitors. So basically all the state-of-the-art AI technology lies in the hands of the big technology companies. This is in contrast to previous technologies, which did not have such financial barriers to entry for the needed experimentation and research. There was a very competitive ecosystem and an open-source spirit that served as an effective counterbalance in software development. And that’s not the case with AI.
You are also referring to the data centers needed to train these large models. What do you think about companies like Amazon and other major players investing in nuclear reactors to meet their energy demands?
Emilia Javorsky: This would be another dimension of power dynamics. As someone who has been very much pro-nuclear energy, I think it’s been quite a sad story that we haven’t gotten it right, because it does have such incredible power: it’s clean and abundant. But we haven’t had that discussion as a society for 50 or 60 years. And then the big tech companies come in and all of a sudden it changes. So now it’s okay to use nuclear energy because big tech needs it? Where was this conversation when society needed it to mitigate climate change and foster economic development?
In addition to the financial dimension of power concentration, how much political influence do the big tech companies have?
Emilia Javorsky: Companies like Google and Microsoft have deep lobbying operations and much more influence over government stakeholders than a typical lobby. Their main argument against AI guardrails is the need to ensure geopolitical competitiveness. Through this narrative, they have quite a bit of influence over the trajectory of government regulation. But I think politics and money are two aspects that people are already more aware of. There are also sociocultural dimensions of power concentration that are important to address.
What exactly are the sociocultural dimensions?
Emilia Javorsky: These new large general-purpose systems concentrate people’s information, attention, and engagement, and they have a much richer dataset about their users and a far more sophisticated way of engaging with them than any previous technology, even social media. So if these major players control the flow of information, the question is: what values are represented in these systems? At the end of the day, you have the beliefs of a few people in Silicon Valley being broadcast to the world. No matter what side of the ideological spectrum people fall on, I think there should be a choice over what values you subscribe to.
So it’s crucial to preserve the diversity of cultures and ideas. I’m a biologist by training, and monocultures die. We need a landscape of ideas and values for society to grow, progress, and evolve. The notion that there is one Silicon Valley value system and narrative that the world will accept is not only morally problematic but also prevents humanity from flourishing.
Big tech companies already exploit our attention and personal data to maximize profit through ad revenue. The competition for our engagement has led to social media recommender algorithms that produce filter bubbles and tribalism. So big tech dominating the digital sphere is nothing new. Why is it even more harmful if the same companies also dominate AI?
Emilia Javorsky: The power of these models comes from our collective data, so there should also be a collective sense of ownership and responsibility for these systems to benefit us, because it has been our data that has given life to the progress in AI that we’ve seen.
Compared to social media, AI is at a whole other level, because it’s not just about what information you’re being served; it’s also hacking how you think and feel. You can already see the canary in the coal mine with the adoption of AI companions. This is not just hacking people’s attention to sell them products; it’s hacking their emotional landscape. There are already young people who have more AI friends than actual friends. Once you control someone’s emotions, you have a very powerful tool that beats every recommender algorithm. I think the magnitude of what lies ahead, combined with just a handful of companies controlling it, is quite a scary scenario.
As the director of the Futures Program at the Future of Life Institute, you also work on finding solutions to these problems. What are some promising approaches?
Emilia Javorsky: I think there’s a broad set of potential solutions, for example AI safety guardrails and robust privacy regulations. There’s also an opportunity for a social movement, for people to come together, collaborate, and demand a different path. It’s about engaging others and communicating the gravity of what’s happening in the AI landscape, both in terms of power concentration and the huge economic implications these systems could have for job displacement.
There is also a great opportunity for builders of new tools and applications, especially from the decentralized communities, to develop AI that benefits society instead of just capturing, in a very parasitic way, all the existing value that humans have created, which is what the companies are doing today. At the Future of Life Institute, we launched a request for proposals for a large funding round that supports projects working to combat power concentration through different strategies. And I’m super optimistic about our potential future with AI and how it can be used to solve problems. An example is the breakthrough of AlphaFold, whose developers just won the Nobel Prize in Chemistry. It’s an AI system developed to solve the protein-folding problem.
We need more tool-based examples like these. Many companies say how beneficial AI could someday become in solving the big problems of our time, but the technology to make it happen is already here. We could have that future today.
This interview has been edited and condensed for readability.
Interview by Eva Bolhoefer