Eric Schmidt's Age of AI: Unfiltered
His take on AI's rapid rise, the looming dangers of misinformation, and why the U.S.-China tech race could define our future.
Eric Schmidt’s recent talk (script/video) on “The Age of AI” at Stanford University is considered by some to be one of his most honest and unfiltered since his tenure as Google CEO. I have analyzed (with machine assistance) and summarized some of his most important assertions and quotes concerning the progress of AI, risk/safety, misinformation/democracy, labor, TikTok, and competition/geopolitics. He sees AI-driven misinformation as one of the greatest threats to democracy. His tone regarding the U.S.-China competition over AI is concerned and strategic, emphasizing the importance of the U.S. maintaining its technological edge over China and recognizing the significant risks if the U.S. falls behind. He did not offer any suggestions or pathways for collaboration between the U.S. and China in his talk.
Progress: AI is advancing rapidly, particularly in three areas: context windows, which are expanding from roughly 0.1-0.2 million tokens to 1 million tokens; AI agents, which combine an LLM with state and memory; and text-to-action systems that translate natural language into programming languages like Python and Mojo. These developments are happening faster than anticipated and are expected to have transformative impacts within the next 1-2 years. NVIDIA's competitive edge is rooted in its powerful GPUs, the highly optimized CUDA programming language, and a suite of open-source libraries such as vLLM. Additionally, new non-transformer architectures are emerging, and chain-of-thought reasoning, in which systems work through problems in thousands of intermediate steps, is gaining traction. As AI systems evolve, hallucinations are expected to become less frequent. Schmidt emphasized the infinite potential of AI, noting that the invention of intelligence could yield infinite returns, though the high capital costs involved may make open-source business models less viable. He also highlighted companies like Augment Code, which aims to significantly improve coding efficiency, and ChemCrow, which is intended to accelerate scientific research in chemistry.
Quote: “In the next year, you're going to see very large context windows, agents and text action. When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet. Much bigger than the horrific impact we've had by social media in my view.”
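To make the "LLM + state + memory" agent idea and "text to action" concrete, here is a minimal, hypothetical Python sketch of such a loop. The call_llm stub and the prompts are my own illustrative assumptions, not anything Schmidt described; real agents add tool selection, sandboxing, and retries.

```python
# Minimal, hypothetical sketch of an agent loop combining an LLM, state/memory,
# and "text to action" (generated code that is executed). Illustrative only.
import subprocess
import sys
import tempfile

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion API call; returns a trivial script so the sketch runs."""
    return 'print("hello from the generated script")'

class Agent:
    def __init__(self) -> None:
        self.memory: list[str] = []  # persists across tasks (the "state" and "memory")

    def run(self, task: str) -> str:
        # Build the prompt from the task plus the most recent remembered context.
        context = "\n".join(self.memory[-5:])
        prompt = (f"Earlier notes:\n{context}\n\nTask: {task}\n"
                  "Reply with only a runnable Python script.")
        code = call_llm(prompt)  # natural language in, code out
        # "Text to action": write the generated code to disk and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)
        # Remember what happened so later tasks can build on it.
        self.memory.append(f"Task: {task}\nOutput: {result.stdout[:500]}")
        return result.stdout

print(Agent().run("Print a greeting."))
```

The key point is the last step: the model's text is not just read, it is executed, which is what makes "text to action" both powerful and risky.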
A. Misinformation/Democracy: Misinformation, particularly when driven by AI, poses one of the greatest threats to democracy. As the methods for spreading false information become increasingly sophisticated, the potential for destabilizing democratic institutions grows. Schmidt suggested the use of public key authentication as a countermeasure to verify the authenticity of statements made by public figures. He also discussed the uncertainty surrounding whether the Chinese government is using TikTok to influence U.S. public opinion, though he noted that TikTok users spend an average of 95 minutes per day on the platform, watching over 200 videos. This raises concerns about the platform's influence and the potential for AI-powered misinformation to play a significant role in geopolitical affairs.
Quote: “We’re going to get really good at [misinformation], and democracies can fail. The greatest threat to democracy is misinformation because we’re going to get really good at it.”
“If you look at TikTok, for example, there are lots of accusations that TikTok is favoring one kind of misinformation over another. And there are many people who claim, without proof that I'm aware of, that the Chinese are forcing them to do it.” (see also the TikTok section below)
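Schmidt's public-key-authentication idea is essentially digital signing: a public figure (or their office) signs each statement with a private key, and anyone can check the signature against the published public key. A minimal sketch using the third-party cryptography package follows; the workflow and statement text are illustrative assumptions, not a scheme Schmidt specified.

```python
# Illustrative sketch: verifying that a statement really comes from a known key holder.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The public figure's office generates a key pair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Each official statement is signed before release.
statement = b"Official statement: no changes have been made to election procedures."
signature = private_key.sign(statement)

# Anyone (a platform, a newsroom, a reader's app) can verify it later.
try:
    public_key.verify(signature, statement)
    print("Signature valid: the statement was signed by the holder of this key.")
except InvalidSignature:
    print("Signature invalid: the statement was altered or signed by someone else.")
```

Note that this establishes only who published a statement and that it was not altered, not whether it is true, which is why it is a countermeasure to impersonation-style misinformation rather than a fact-checker.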
B. Risks/Safety: AI poses significant risks, particularly in the realms of misinformation and the challenge of monitoring and controlling increasingly autonomous AI systems. The difficulty in detecting harmful behaviors in AI systems, especially when they learn dangerous actions that are not immediately apparent to humans, presents an existential threat. Schmidt stressed the importance of understanding the boundaries and limits of AI/knowledge systems, acknowledging that fully characterizing these systems is currently not possible. To address these risks, he suggested the use of AI-driven red teams to intentionally break systems and identify vulnerabilities. The recent Executive Order on AI, which Schmidt and major tech companies heavily influenced, is aimed at keeping AI safe while ensuring that the U.S. remains ahead of China in AI development.
Quote: „"How do you detect danger in a system which has learned it but you don't know what to ask it? It’s learned something bad, but it can’t tell you what it learned and you don’t know what to ask it.“
C. Labour/Work: AI is expected to replace low-skilled jobs that require little human judgment while enhancing high-skilled jobs and significantly boosting productivity. Schmidt criticized the work ethic at certain U.S. companies, Google in particular, remarks that may explain why the official Stanford video was later removed from YouTube. He emphasized the importance of a strong work ethic, particularly for startups and innovative companies, to remain competitive in the AI race. For software developers, productivity is expected to at least double due to advancements in AI. Schmidt also pointed out that companies like Google, which prioritize work-life balance, might be sacrificing their competitive edge in favor of these values.
Quote: “Google decided that work-life balance and going home early and working from home was more important than winning.” (https://www.theverge.com/2024/8/14/24220658/google-eric-schmidt-stanford-talk-ai-startups-openai) “The jobs which require very little human judgment will get replaced... software programmers’ productivity will at least double.”
D. TikTok: The capabilities of AI extend to rapidly replicating popular apps like TikTok, showcasing the technology's powerful potential in software development and posing a competitive threat to established platforms. Schmidt highlighted TikTok's significant user engagement, with users spending an average of 95 minutes per day on the platform, watching over 200 videos. This engagement raises concerns about the platform's influence, especially given the uncertainty surrounding whether the Chinese government is using TikTok to influence U.S. public opinion. The discussion on TikTok also ties into broader concerns about the geopolitical implications of AI, particularly in the context of U.S.-China relations.
Quote: “If TikTok is banned, say to your LLM... ‘Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds.’ That’s the command.”
E. Competition/geopolitics: In the ongoing rivalry between the U.S. and China, maintaining a technological edge in AI is crucial for national security. Schmidt emphasized that the U.S. must stay ahead, requiring significant investments, including a $300 billion investment in data centers, with Canada playing a key role due to its green energy resources. He noted that the U.S. currently enjoys a roughly 10-year advantage in chip technology over China. However, Schmidt did not consider Europe a major competitor at the AI frontier, criticizing the EU AI Act for making AI research difficult in Europe. He sees France as having potential in this field but is skeptical about Germany's ability to make significant progress. Schmidt also discussed the strategic importance of India, describing it as a swing state whose alignment with the U.S./West is uncertain. In terms of AI warfare, Schmidt's White Stork program focuses on eliminating land-based invasions, underscoring the evolving role of AI in global military strategy.
Quote: “The battle between the U.S. and China for knowledge supremacy is going to be the big fight. The U.S. government banned essentially the NVIDIA chips into China... and the Chinese are whopping mad about this.”
Below are his recommendations; notably, they include nothing on how to improve global collaboration on AI, especially between the U.S. and China.
Investment: Substantial funding is essential to maintain a competitive edge in AI development, with investments on the order of $10 billion to $100 billion.
Collaboration: Strengthening partnerships with allied nations, particularly leveraging Canada’s resources, is crucial for sustainable AI advancement.
Regulation: Implement robust government oversight, with a reporting threshold of 10^26 floating-point operations (FLOPs) of training compute, to ensure safety and control over AI systems (a rough scale calculation follows this list).
Safety: Establish industries dedicated to adversarial testing to identify and mitigate vulnerabilities in AI systems before deployment.
Authentication: Implement public key authentication to verify the authenticity of information, combating the spread of misinformation.
Work Ethic: Foster a strong work ethic and competitive drive within organizations to stay ahead in the rapidly evolving AI landscape.
Academia: Provide universities with the necessary resources, such as data centers, to support cutting-edge AI research and innovation.
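For scale, the 10^26-FLOP reporting threshold mentioned under Regulation can be translated into hardware terms with back-of-envelope arithmetic. The per-chip throughput, utilization, and cluster size below are my own illustrative assumptions, not figures from the talk or the Executive Order.

```python
# Back-of-envelope scale of a 1e26 FLOP training run (the reporting threshold).
# All hardware figures below are illustrative assumptions.
THRESHOLD_FLOPS = 1e26
PEAK_FLOPS_PER_CHIP = 1e15   # ~1 PFLOP/s, roughly a current high-end accelerator at low precision
UTILIZATION = 0.4            # sustained fraction of peak typical of large training runs
CHIPS = 10_000               # a large training cluster

effective_rate = CHIPS * PEAK_FLOPS_PER_CHIP * UTILIZATION   # FLOP/s across the cluster
days = THRESHOLD_FLOPS / effective_rate / 86_400
print(f"~{days:.0f} days of continuous training")            # ~289 days under these assumptions
```

Under these assumptions, crossing the threshold means months of continuous training on a ten-thousand-accelerator cluster, so the reporting rule targets only the very largest runs.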
In a previous blog, I discussed Eric Schmidt's vision of AI's future, where he highlighted the rapid advancements in AI, the risks of proliferation, and the need for a U.S.-China collaboration on AI safety. He referenced the "no-surprise rule" akin to the Treaty of Open Skies, suggesting that both sides should inform each other when training completely new models to avoid unexpected developments. In this latest talk, Schmidt provides further insights into the strategic importance of maintaining the U.S. technological edge over China and the potential threats posed by AI-driven misinformation.



A new interview with Eric Schmidt on his new book Genesis: https://youtu.be/AjgwIRPnb_M?si=OdInlfcVL4rzfP9g&t=1596 For him, the U.S. is now only about a year ahead of China in AI, a gap that seems to fuel his anxieties, since he had previously thought the U.S. was two to three years ahead. A new open-source model from China rivals, and in some areas even outperforms, Meta’s LLaMA 3 (400 billion parameters). He again raised his concerns about open-source AI being used for malicious purposes. Regarding China, he said that interdependence is better than complete independence, as it forces a degree of communication and understanding, which can reduce the risks of miscalculation or escalation in conflicts.
Schmidt also highlighted similar concerns regarding U.S. open-source AI systems. He pointed out the risks of model exfiltration, where bad actors could steal powerful AI models developed by leading U.S. companies like Google, Microsoft, or OpenAI, and use them maliciously on the dark web. He emphasized that neither the U.S. nor any other country currently has a framework to detect or manage such exfiltration effectively. He compared this proliferation risk to the spread of enriched uranium, which is closely monitored internationally, but noted the absence of equivalent monitoring systems for AI.
While Schmidt acknowledged the benefits of open-source AI for innovation, he warned that without proper safeguards, powerful models could be exploited for harm, ranging from misinformation campaigns to advanced cyberattacks. This dual vulnerability—posed by both U.S. and Chinese open-source AI—intensifies the urgency for global agreements on responsible use.
Schmidt strongly advocated for U.S.-China collaboration on AI, emphasizing that cooperation between the two leading AI powers is essential to mitigate global risks. He suggested bilateral agreements on responsible AI use, particularly regarding its weaponization, as such collaboration could address a significant portion of the global AI security threat. Echoing Kissinger’s realism, Schmidt argued that while the U.S. and China may never be close allies, strategic cooperation on AI is vital to manage its risks effectively and ensure its benefits for humanity. Link: https://youtu.be/AjgwIRPnb_M?si=OdInlfcVL4rzfP9g&t=1596