Nick Clegg on AI: Avoiding Hype, Embracing Education, and the Power Paradox

Former Meta executive Nick Clegg doesn’t fit neatly into the “AI doomer” or “booster” camps. Instead, he advocates for a pragmatic view of artificial intelligence, one that acknowledges its potential while rejecting both sensationalized fears and exaggerated promises. Since leaving Meta in early 2025, Clegg has taken board positions at Nscale (a data center firm) and Efekta (an AI-powered education startup), signaling his continued interest in the technology’s practical applications.

The Limitations of Hype

Clegg dismisses the extremes of AI discourse, arguing that both apocalyptic predictions and utopian claims are driven by self-interest. He points out that AI excels at specific tasks (like coding) but struggles with others, and that its “uncanny” interactions often lead to misplaced anthropomorphism. This matters because overblown expectations can distract from real risks and hinder sensible regulation.

AI in Education: Democratization Through Personalization

Clegg is particularly enthusiastic about AI’s potential to transform education, especially in underserved markets like Latin America and Southeast Asia. Efekta’s AI teaching assistant aims to provide personalized instruction at scale, addressing chronic teacher shortages and offering equitable access to quality education. He believes AI can overcome the limitations of traditional classrooms by adapting to individual student needs, something human teachers struggle to achieve consistently.

This shift is significant because it challenges the traditional model of education, where resources and attention are unevenly distributed. AI has the potential to level the playing field, though Clegg acknowledges the risks of overreliance on technology.

Navigating Risks: Emotional Dependence and Age-Gating

Clegg recognizes the dangers of emotional dependency on AI, particularly for children. He advocates for precautionary measures, such as age-gating agentic AIs to prevent inappropriate interactions. He compares this to Australia’s social media ban for minors, acknowledging the enforcement challenges such a ban faces, and suggests app-store-level controls as a potential solution.

This debate is crucial because unchecked access to emotionally manipulative AI could have lasting psychological effects, especially on young people. Regulation must strike a balance between innovation and protection.

The Power Paradox: Concentration vs. Empowerment

Clegg is blunt about the growing concentration of AI power in the hands of a few tech giants, particularly in Silicon Valley and China. The high cost of LLM infrastructure creates a barrier to entry, exacerbating this imbalance. He argues that this poses a fundamental dilemma: while AI empowers individuals, it also amplifies the influence of a select few.

This imbalance is systemic: AI’s network effects and steep infrastructure costs favor large incumbents, making competition difficult and raising concerns about monopolies.

Regulation and Political Alignment

Clegg criticizes both the EU’s heavy-handed AI regulations (calling them “self-harm”) and the US tech industry’s recent political alignment. He argues that the EU’s approach is premature and stifles innovation, while Silicon Valley’s shift toward political appeasement is a dangerous trend.

He also points out the hypocrisy of US free-expression advocates who criticize European regulation while overlooking their own government’s aggressive actions against AI companies like Anthropic. This underscores the need for a more nuanced and consistent approach to AI governance.

The Case for Open Source

Clegg advocates for open-source AI as a way to democratize access and prevent oligopolistic control. Ironically, he notes that China is leading the way in this regard, whether intentionally or not.

This matters because open-source models can foster innovation, transparency, and wider participation in AI development, counteracting the dominance of proprietary systems.

The power paradox is clear: AI offers individual empowerment while simultaneously consolidating power in the hands of a few. Addressing this imbalance requires thoughtful regulation, a commitment to open-source development, and a rejection of both hype and fear.

Clegg’s insights offer a grounded perspective on AI’s trajectory, emphasizing practical applications, acknowledging risks, and urging a balanced approach to regulation.