NORFOLK, Va. — According to a new report commissioned by the U.S. State Department, artificial intelligence (AI) poses an "extinction-level threat to the human species."
A growing group of thinkers believes that outcome could arrive far sooner than most people expect.
This week, artificial intelligence leader NVIDIA unveiled new AI hardware at the company’s global conference. The company showed off new chips it says will run five times faster than previous versions, pushing an already controversial industry even further ahead.
There is a lot of good coming out of AI advancements – models in healthcare that can help develop new, life-saving drugs for patients, and simulations that can predict natural disasters with accuracy.
But there are many people, some of the top minds in the world, who believe it’s all happening too fast, and that recent progress in artificial intelligence could be the beginning of the end for humans.
Last summer, 42 percent of the CEOs surveyed at the Yale CEO Summit said AI could potentially destroy humanity in the next five to ten years.
“I expect an actually smarter and uncaring entity will figure out strategies and technologies that could kill us quickly and reliably,” said AI theorist and researcher Eliezer Yudkowsky at a recent TED Talk.
The U.S. still doesn’t have laws regulating AI, but the federal government is taking steps to put safeguards in place.
On Thursday, the Biden Administration announced new guidance measures for AI.
Under the executive order, every federal agency must appoint a Chief Artificial Intelligence Officer and publish an annual inventory report detailing how it is using AI and the potential risks involved.
While it’s impossible to predict how the end of the world would happen, the U.S. State Department report mentions the potential for “high-impact cyberattacks capable of crippling critical infrastructure.”