The "technological elite" warns: AI could drive humanity to extinction

newsmeki Team

Three hundred fifty leading executives and experts in artificial intelligence (AI) have jointly signed a statement warning that AI puts humanity at risk of extinction.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the statement published by the Center for AI Safety (CAIS) in San Francisco, USA, posted on its website on May 30 (US time).

Bloomberg reports that the statement was signed by 350 leading AI executives and experts, including Sam Altman, CEO of OpenAI and "father" of ChatGPT.

Signatories also include leaders from Google DeepMind and Anthropic, but no one from Meta, which is also pursuing AI.

The sight of so many figures described as the "technological elite" jointly warning that AI is as dangerous as a pandemic or nuclear war comes amid growing concern about the technology.

In late March, more than 1,000 leading technology figures, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling on companies and organizations worldwide to pause the race to develop super-powerful AI for six months, in order to establish a shared set of rules for the technology.

About a month later, Google CEO Sundar Pichai admitted that AI had kept him awake at night "because it could be more dangerous than anything humans have ever seen."

"One day, AI will have capabilities beyond human imagination, and we cannot imagine the worst that could happen," Pichai said.

Billionaire Elon Musk (left) and OpenAI CEO Sam Altman

The OpenAI CEO has likewise said that while AI could bring great benefits, it also raises concerns about misinformation, economic shocks, or something "far beyond anything humans are prepared for." Sam Altman himself has admitted to a constant sense of anxiety about AI, and to being shocked by how popular ChatGPT has become.

Experts are also concerned about a paradigm beyond today's systems: artificial general intelligence (AGI). AGI is considered far more complex than generative AI because of its potential to be self-aware of what it says and does. In theory, such a technology could pose frightening risks in the future.

