AI is an abbreviation I hear many times a day, and it is used for what it actually stands for maybe 30% of the time. LLMs like ChatGPT and DeepSeek are a permanent fixture in the news, and there is constant talk of putting AI into everything, including our schools. It is easy to dismiss it all as a pop-culture phase, much as uranium fever once gripped the globe alongside nuclear anxiety.
The comparison between the atomic bomb and the arrival of AI may seem hyperbolic, but leading AI experts are now calling for a safety calculation modelled on the one carried out before the first nuclear weapons explosion, the Trinity test.
Max Tegmark, an MIT professor of physics and AI researcher, together with three of his students, has published a paper recommending a similar approach. In it, they call for a rigorous calculation of whether a highly capable AI could escape human control. The test is compared to the calculation Arthur Compton performed before the Trinity test was allowed to proceed, estimating the chance that the bomb would trigger a runaway atmospheric explosion.
Compton approved the test after putting the odds of such a runaway explosion at slightly less than one in three million. When Tegmark ran an analogous calculation, he arrived at a 90% probability that a highly advanced AI could pose an existential risk to humanity, odds of an entirely different order from Compton's. This level of theoretical AI is now commonly called artificial superintelligence, or ASI.
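To get a rough sense of that gap, here is a minimal back-of-the-envelope sketch. The two figures come from the article above; the side-by-side comparison is illustrative arithmetic, not a calculation from Tegmark's paper:

```python
# Illustrative comparison of the two probabilities mentioned above
# (my own arithmetic, not taken from Tegmark's paper).

compton_odds = 1 / 3_000_000  # "slightly less than one in three million"
tegmark_odds = 0.90           # Tegmark's estimated probability of losing control of ASI

ratio = tegmark_odds / compton_odds
print(f"Compton threshold: {compton_odds:.1e}")  # ~3.3e-07
print(f"Tegmark estimate:  {tegmark_odds:.2f}")
print(f"Ratio: ~{ratio:,.0f}x")                  # ~2,700,000x
```

Taken at face value, Tegmark's estimate sits more than six orders of magnitude above the threshold Compton was willing to accept.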
These calculations have convinced Tegmark that safety regimes are needed, and that companies have a responsibility to assess these potential risks. He also believes that a standard approach, agreed on and calculated by several companies, would create the political pressure needed to make companies comply.
“The companies also need to calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
This is not Tegmark's first push to get people thinking about the safety of new AIs. He is also a co-founder of the Future of Life Institute, a non-profit devoted to the development of safe AI. In 2023 the institute published an open letter calling for a pause in the development of powerful AIs, which drew attention and signatures from people like Elon Musk and Steve Wozniak.
Tegmark worked with researchers from OpenAI, Google, and DeepMind, as well as computer scientist Yoshua Bengio, on the Singapore Consensus on Global AI Safety Research Priorities report. It seems that if we ever do release an ASI on the world, we will at least know the exact odds it poses to us all.