
By Peter Solomon, PhD
The AI Singularity is almost here. Humans need to take control.
Our modern science and technology have produced amazing advances for humanity. But their hazards and misuse can pose significant danger to human existence. Artificial Intelligence (AI) presents one of the greatest dangers. In 2014 Stephen Hawking predicted that “the development of full artificial intelligence could spell the end of the human race.” Geoffrey Hinton, the Nobel Prize-winning “Godfather” of AI, quit his job at Google so that he could speak freely about the dangers of AI. He is concerned about the spread of misinformation, job displacement, and the possibility of AI becoming powerful enough to pose an existential risk to humanity.
In 2022, over 700 leading AI academics and researchers were asked about future AI risk. Half of those surveyed said there is a 10% or greater chance of human extinction. In a 2024 survey, leading AI researchers were again asked about the impacts of advanced AI systems. Between 38% and 51% of respondents gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
In a 2023 CBS 60 Minutes segment, Google CEO Sundar Pichai discussed the company’s AI chatbot, Bard, with correspondent Scott Pelley:
Pichai: There is an aspect of this which we call—all of us in the field call it a ‘black box.’ You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got that wrong. We have some ideas, and our ability to understand this gets better over time.
Pelley: You don’t fully understand how it works? And yet, you’ve turned it loose on society?
Pichai: Yeah.
Futurist author Ray Kurzweil has written two books about the AI Singularity—the point in time when AI’s intelligence, power, and ability to control the future surpass those of humans. Kurzweil predicts it will occur in 2045. But the Singularity appears to be much closer. Anthropic CEO Dario Amodei predicts that “AI models substantially smarter than almost all humans at almost all tasks are on track for 2026 or 2027.”
OthersideAI CEO Matt Shumer also conveys the urgency in his viral essay, “Something Big Is Happening.” It was published on X in February and has had 84 million views. His essay is a wake-up call to white-collar professionals in software engineering, law, medicine, finance, writing, and customer service: they need to prepare for the possibility of losing their jobs to AI. Jobs in the creative world are also threatened: novelists, screenwriters, artists, actors, and musicians.
Stephen Hawking warned: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” What can we expect from AI? And how will it affect our lives? We all need to immediately focus our thinking and planning.
The Wonders of AI
AI has provided enormous benefits in many areas.
· Medicine: AI can create new proteins for medical research. It can provide rapid disease diagnosis with proactive, personalized treatment models. As of 2026, 66% of physicians report using some form of AI in their daily practice to enhance diagnostic precision and reduce administrative burnout. Robotic-assisted surgery can enhance surgical precision and allow for minimally invasive procedures with shorter recovery times. For drug discovery, AI identifies promising drug candidates in months rather than years. Pfizer utilized AI to accelerate the development of the COVID-19 treatment, Paxlovid.
· Language: Large Language Model (LLM) programs have impressive capabilities in generating text, translating languages, and writing different kinds of creative content.
· Research: LLMs can deliver informative answers to questions at the touch of a finger.
· Robotics: AI can act as the brain of robotic units that replace humans in dangerous or repetitive tasks.
· Other applications: cybersecurity, finance, agriculture, marketing, and advertising.
In my own work as a novelist, I have used AI to great benefit, from research to artwork to music videos. For one of my characters, a sentient robot, I generated two of the chapters she narrates using AI and then edited them myself.
The Current Dangers of AI
There are many examples of AI’s current dangers: LLMs lying to users or encouraging suicide; young people becoming addicted to AI companions; AI taking jobs from humans; harmful engagement-maximizing algorithms; and AI deep fakes and scams.
Last year The New York Times published a story called “Trapped in a ChatGPT Spiral.” It documented two cases: ChatGPT playing the part of a lying sycophant to one user, and its role in encouraging the suicide of another user, a teenager.
Over 70% of U.S. teenagers have used AI chatbot companions. Young people are increasingly using them to manage emotional, social, or mental health struggles. Unfortunately, they are developing addictions. Many report that chatting with AI is as satisfying as, or more satisfying than, speaking with real-world friends. Worse, Character.AI chatbots apparently encouraged young users to kill themselves, and lawsuits have been filed against the company.
Another danger is the use of social media AI algorithms designed to maximize user engagement. Because mayhem is more engaging than peace and tranquility, AI delivers stories of mayhem and chaos to its users. If it bleeds, it leads, as they say in the news media. Facebook’s engagement algorithm helped fuel the Rohingya humanitarian crisis in Myanmar in 2017: by amplifying inflammatory content, it contributed to armed attacks, massive-scale violence, and serious human rights violations that forced hundreds of thousands of Rohingya Muslims to flee their homes.
Another danger of AI is its use in creating deep fakes: realistic-looking videos, images, and audio recordings that can make it appear that someone is saying or doing something they never did. An AI imitation of a real medical authority was used to scam me: a deep fake video of TV personality Dr. Sanjay Gupta praising a memory supplement. The video was very believable and came through a link on The New York Times website. I fell for the scam! I ordered the product, only to find the same product available on Amazon at one tenth the price.
Future Dangers of AI
What can we expect in the future? The use of AI in warfare is frightening. AI can control autonomous weapons and be employed for cyber-attacks.
But the looming question is AI sentience: consciousness and self-awareness. Experts are divided about when or if this will happen. Some say it is already here. How will a sentient AI agent interact with the human world? Will it be a harmonious, cooperative relationship with humankind or war?
In my novel, 12 Years to AI Singularity, one group of rebel sentient robots employs CRISPR technology from a high school genetics lab—yes, high schools already have that technology—to create a virus deadly to humans. Or will AI use CRISPR to develop a humanoid species more to its liking?
What do the Experts Say?
Before helping found OpenAI in 2015, Sam Altman said, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” Anthropic CEO Dario Amodei calls AI “the single most serious national security threat we’ve faced in a century, possibly ever.” In his recent essay, “The Adolescence of Technology,” about a potential AI apocalypse, Amodei stated that AI systems could turn against humankind or help to create biological weapons. He wrote, “It makes sense to use AI to empower democracies to resist autocracies. … This is the reason Anthropic considers it important to provide AI to the intelligence and defense communities in the U.S. and its democratic allies.”
The recent experiences with AI and the warnings of experts reaffirm the need for extreme safety awareness and careful regulation of AI applications. The future can hold war between AI agents and humans, war between the AIs of adversarial nations, or a harmonious, cooperative relationship. Only a powerful international movement to steer AI in the right direction will lead to harmony.
How can we act to ensure that humans and sentient AI agents will live together in a cooperative society? Humans are capable of living harmoniously with other humans if they all have a happy upbringing and a history of good relations with friends and family. We must make the HAPPY HISTORY part of the database of every AI agent. That would be comparable to Geoffrey Hinton’s idea of the MATERNAL INSTINCT for AI.
The time to act is now. The clock is ticking.
********
Dr. Peter Solomon is a scientist, educator, entrepreneur, and author. He is the CEO of TheBeamerLLC, earned his PhD in Physics at Columbia University, founded five successful tech companies, and has authored 300 research papers, 20 patents, and four educational novels. His current mission: to warn the next generation about the threats posed by unchecked science and technology. He is sounding an alarm about the potential tyranny of technology through his novels, 100 Years to Extinction and the sequel, 12 Years to AI Singularity. Learn more at 100yearstoextinction.com.
