The world’s leading AI scientists are calling on the world’s governments to work together to control the technology before it’s too late.
Three winners of the Turing Award – essentially the Nobel Prize of computer science – who have advanced AI research and development, together with a dozen top scientists from around the world, signed an open letter calling for stronger safeguards on the further development of AI.
The scientists argued that, given the rapid advancement of AI technology, any mistake or misuse could have serious consequences for humanity.
“The loss of human control or the malicious use of these AI systems could have catastrophic consequences for all of humanity,” the scientists wrote in the letter. They also warned that, given the pace of AI development, these “catastrophic consequences” could occur at any time.
To immediately address the danger of malicious use of AI, the scientists outlined the following steps:
Government AI safety bodies
Governments have to work together on AI safeguards. The scientists’ ideas included encouraging countries to establish special AI authorities to respond to AI “incidents” and risks within their borders. These authorities would ideally work together, and in the long term, a new international body should be created to prevent the development of AI models that pose risks to the world.
“This body would ensure that states adopt and implement a minimum set of effective safeguards, including model registration, disclosure and tripwires,” the letter said.
Safety pledges from AI developers
Another idea is to require developers to deliberately ensure the safety of their models and pledge not to cross red lines. Developers would vow not to create AI “that can autonomously reproduce, improve, seek power or deceive its creators, or that enables the construction of weapons of mass destruction and the conduct of cyberattacks,” according to a statement by leading scientists at a meeting in Beijing last year.
Independent research and technical checks on AI
Another proposal is to create a series of global AI safety and verification funds, financed by governments, philanthropists and corporations, to support independent research into better technological controls for AI.
Among the experts who called on governments to take action on AI safety were three Turing Award winners: Andrew Yao, the mentor of some of China’s most successful tech entrepreneurs; Yoshua Bengio, one of the most cited computer scientists in the world; and Geoffrey Hinton, who taught OpenAI co-founder and former chief scientist Ilya Sutskever and worked on machine learning at Google for a decade.
Cooperation and AI ethics
In the letter, the scientists praised existing international collaborations on AI, such as the May meeting in Geneva at which leaders of the United States and China discussed the risks of artificial intelligence. However, they stressed that more cooperation is needed.
The development of artificial intelligence should be accompanied by ethical standards for engineers similar to those that apply to doctors or lawyers, the scientists argue. Governments should view AI less as an exciting new technology and more as a global public good.
“Together, we must prepare to avert the associated catastrophic risks that could occur at any time,” the letter said.