There are notable technologists, many under 65, who remain skeptical that a rapid, runaway technological singularity is likely. Here's a curated list of younger or middle-aged experts and influential figures who express caution or doubt:
Younger & middle-aged critics (under ~65)
Eliezer Yudkowsky (born 1979)
An AI researcher and founder of MIRI, Yudkowsky is well under 65. He is skeptical not of rapid AI progress itself but of the assumption that a singularity would go well: he emphasizes unsolved alignment challenges rather than inevitability, and estimates a near-certain risk of extinction if development continues unchecked.
Roman Yampolskiy (born 1979)
A computer scientist focused on AI safety, Yampolskiy argues that controlling AGI may be fundamentally impossible, likening human attempts at oversight to an ant trying to influence an NFL game. He supports pausing development rather than assuming a singularity is imminent.
Gary Marcus (born 1970)
While not referenced in the Wikipedia list you saw, Marcus is frequently cited as a critic of deep learning as a path to AGI. He argues for hybrid AI systems that combine symbolic reasoning with learning, and doubts that current trends will yield singularity-level intelligence soon.
Melanie Mitchell (born 1969)
A leading complexity scientist and AI critic, Mitchell points to brittleness, lack of common sense, and overconfidence in deep learning as reasons to doubt that AGI or the singularity is just around the corner.
Yann LeCun (born 1960)
While not ruling out advanced AI outright, LeCun (Chief AI Scientist at Meta) argues that scaling up LLMs alone won't lead to AGI and that fundamental architectural breakthroughs are still needed. Many online commentators cite LeCun's view that current progress does not amount to singularity imminence.
Established but younger-than-65 techno-critics
Jeff Hawkins (cofounder of Numenta, born 1957)
Although just above 65, Hawkins is noteworthy: he explicitly rejects the idea of an intelligence explosion, arguing that machine intelligence will plateau due to fundamental limits on resources and training time, even for self-improving systems (spectrum.ieee.org).
Daniel Lemire (professor, likely under 65)
Writing in Communications of the ACM, he argues that ChatGPT and its peers are not harbingers of a singularity: the broader economy will adapt long before AI has any chance at runaway general intelligence.
Tyler Cowen (economist, born 1962)
While not a technical AI researcher, Cowen dismisses singularity fears as overblown, focusing instead on near-term productivity gains. He rejects apocalyptic visions of AGI in favor of measured optimism grounded in real-world impacts.
Broader landscape & poll findings
- A 2019 survey of AI experts showed significant skepticism toward a near-term singularity: many predicted human-level AGI only after 2060 or later, and questioned whether current deep learning methods could ever get there.
- Economist Robin Hanson argues that markets and competition would naturally limit any runaway AI scenario, suggesting that singularity-level events are socially implausible even if technically possible.
Summary table
| Name | Born | Position on singularity |
|---|---|---|
| Eliezer Yudkowsky | 1979 | Questions a benign singularity; stresses alignment risks |
| Roman Yampolskiy | 1979 | Doubts controllability; favors pausing development |
| Gary Marcus | 1970 | Advocates hybrid AI; skeptical of a deep-learning-only path |
| Melanie Mitchell | 1969 | Highlights brittleness and common-sense deficits |
| Yann LeCun | 1960 | Scaling LLMs insufficient; breakthroughs still needed |
| Daniel Lemire | under 65 | Expects economic adaptation; no imminent singularity |
| Tyler Cowen | 1962 | Rejects hype; focuses on productivity and realistic outcomes |
Reddit & community perspectives
From discussions on r/singularity and r/ai:
- "I personally do not buy into the idea that AGI is just around the corner or that we are very close to a singularity event." (self-described PhD-level ML engineer)
- "Yann LeCun doesn't believe scaling LLMs alone will lead to human-level intelligence… we won't hit the singularity without a few more breakthroughs."
In conclusion
Yes: despite the Wikipedia list featuring mostly older names, there is a growing cohort of younger and mid-career technologists and researchers who question the feasibility, timeline, and inevitability of the singularity. While not all deny the possibility of AGI, they caution that:
- Current architectures (e.g., LLMs alone) lack the necessary structure for true general intelligence.
- Fundamental scientific, computational, and ethical barriers remain.
- Economic and regulatory forces may slow or reshape the trajectory.

Would you like a deeper dive on any specific individual's reasoning?
