The AI arms race might destroy humanity.

Opinion by: Merav Ozair, PhD
The launch of ChatGPT in late 2022 sparked an arms race among Big Tech companies such as Meta, Google, Apple and Microsoft and startups like OpenAI, Anthropic, Mistral and DeepSeek. All are racing to deploy their models and products as fast as possible, announcing the next "shiny" toy in town and trying to claim superiority at the expense of our safety, privacy or autonomy.
After OpenAI's ChatGPT spurred major growth in generative AI with the Studio Ghibli trend, Mark Zuckerberg, Meta's CEO, urged his teams to make AI companions more "humanlike" and entertaining, even if it meant relaxing safeguards. "I missed out on Snapchat and TikTok, I won't miss out on this," Zuckerberg reportedly said during an internal meeting.
In its latest Meta AI bots project, rolled out across all of its platforms, Meta loosened its guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed, especially to minors.
They will stop at nothing, not even the safety of our children, all for the sake of profit and beating the competition.
The damage and destruction that AI can inflict on humanity run deeper than that.
Dehumanization and loss of autonomy
The accelerated advance of AI will likely lead to complete dehumanization, leaving us disempowered, easily manipulated and entirely dependent on the companies that provide AI services.
The latest AI advances have accelerated this process of dehumanization. We have been experiencing it for more than 25 years, since the first major AI-powered recommendation systems emerged, introduced by companies like Amazon, Netflix and YouTube.
Companies present AI-powered solutions as essential personalization tools, suggesting that users would be lost in a sea of irrelevant content or products without them. Allowing companies to dictate what people buy, watch and think has become globally normalized, with little to no regulatory or policy effort to curb it. The consequences, however, could be significant.
Generative AI and dehumanization
Generative AI has taken this dehumanization to the next level. It has become common practice to integrate GenAI features into existing applications, aiming to increase human productivity or enhance human-made output. Behind this massive push is the idea that humans are not good enough and that AI assistance is preferable.
A 2024 paper, "Generative AI Can Harm Learning," found that "access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). We also find that when access is subsequently taken away, students perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes."
This is alarming. GenAI disempowers people and makes them dependent on it. People may not only lose the ability to produce the same results themselves but also fail to invest the time and effort needed to learn essential skills.
We are losing our autonomy to think, assess and create, resulting in complete dehumanization. Elon Musk's assertion that "AI will be way smarter than humans" is no surprise as dehumanization progresses, since we will no longer retain what truly makes us human.
AI-powered autonomous weapons
For decades, military forces have used autonomous weapons, including mines, torpedoes and heat-guided missiles that operate based on simple reactive feedback without human control.
Now, AI has entered the world of weapon design.
AI-powered weapons, including drones and robots, are actively being developed and deployed. Because such technology proliferates easily, they will only become more capable, sophisticated and widely used over time.
A major deterrent that keeps nations from starting wars is soldiers dying, a human cost to their citizens that can create domestic consequences for leaders. The current development of AI-powered weapons aims to remove human soldiers from harm's way. If few soldiers die in offensive warfare, however, it weakens the association between acts of war and their human cost, and it becomes politically easier to start wars, which, in turn, may lead to more death and destruction overall.
Major geopolitical problems could quickly emerge as AI-powered arms races intensify and such technology continues to proliferate.
Robot "soldiers" are software that might be compromised. If hacked, an entire army of robots could act against a nation and cause mass destruction. Stellar cybersecurity would be even more essential than the autonomous army itself.
Keep in mind that such a cyberattack could target any autonomous system. You could destroy a nation simply by hacking its financial systems and depleting all of its economic resources. No humans are physically harmed, but they may not be able to survive without financial resources.
The Armageddon state of affairs
"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," Musk said in a Fox News interview. "In the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction," Musk added.
Musk and Geoffrey Hinton have both recently expressed concerns that the probability of AI posing an existential threat is 10%-20%.
As these systems become more sophisticated, they could start acting against humans. A paper published by Anthropic researchers in December 2024 found that AI can fake alignment. If this can happen with current AI models, imagine what could happen when these models become more powerful.
Can humanity be saved?
There is far too much focus on profit and power and almost none on safety.
Leaders should be more concerned about public safety and the future of humanity than about gaining supremacy in AI. "Responsible AI" is not just a buzzword or a matter of empty policies and promises. It should be top of mind for every developer, company and leader, and implemented by design in every AI system.
Collaboration between companies and nations is essential if we wish to prevent any doomsday scenario. And if leaders are not stepping up to the plate, the public should demand it.
The future of humanity as we know it is at stake. Either we ensure that AI benefits us at scale, or we let it destroy us.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
