08/23/2023 / By Kevin Hughes
A scientist has warned that getting too comfortable with artificial intelligence (AI) could pose a danger to humanity.
Indian scientist Shekhar Mande issued this warning during a lecture, saying that humanity should be prepared for AI to trigger viral outbreaks, nuclear war and even human extinction. According to Mande – the former director general of India’s Council of Scientific and Industrial Research – AI will be the principal cause of human extinction.
Experts in the field have ranked AI as the likeliest cause of humanity’s extinction, followed by nuclear war and viral outbreaks. Mande’s elucidation of these three threats invites reflection on the fine balance between progress, security and the preservation of humanity.
The Indian scientist is not the first person to think about the problems mankind faces with AI. While humans have made progress in science and technology by creating computers that think like people, some troubling thoughts are popping up as well. (Related: AI likely to WIPE OUT humanity, Oxford and Google researchers warn.)
This pivot toward AI is not in the best interest of humanity. Yuval Noah Harari, a close adviser to Klaus Schwab of the globalist World Economic Forum, stated that AI is going to perform the hard task of controlling the slave class and rendering it obsolete.
Harari’s argument centers on the ruling class employing this technology against the slave class. Once a critical mass of that population fully realizes its situation, the machines will do the tough job of keeping it in check for the sociopaths at the top.
Meanwhile, a top American cybersecurity official earlier warned that humanity could be at risk of an “extinction event” if tech companies fail to self-regulate and work with the government to rein in the power of AI. The warning came from Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA) under the U.S. Department of Homeland Security (DHS).
Easterly’s remarks followed the release of a May 2023 statement involving hundreds of tech leaders and public figures who compared the existential threat of AI to a pandemic or nuclear war. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the one-sentence statement issued by the San Francisco-based nonprofit Center for AI Safety (CAIS).
More than 300 individuals affixed their signatures to the statement, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. Public figures outside the tech industry also signed, including neuroscience author Sam Harris and musician Grimes.
In response to questions about the CAIS statement, Easterly asked the signatories to self-regulate and work with the government.
“I would ask these 350 people and the makers of AI – while we’re trying to put a regulatory framework in place – think about self-regulation, think about what you can do to slow this down, so we don’t cause an extinction event for humanity,” Easterly said.
“If you actually think that these capabilities can lead to [the] extinction of humanity, well, let’s come together and do something about it.”
For his part, Altman told senators during a hearing that he backs government regulation as a means of preventing the harmful effects of AI. Such regulatory steps include licensing and safety requirements for the operation of AI models.
“If this technology goes wrong, it can go quite wrong,” he said. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
Follow FutureTech.news for more news about AI.
Watch Yuval Noah Harari explain how AI can destroy humanity below.
This video is from the Thrivetime Show channel on Brighteon.com.