In a significant development, seven leading artificial intelligence (AI) companies in the U.S. – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have agreed to adopt voluntary safeguards on the development of AI technology. The agreement, announced by the White House, signals the companies’ commitment to managing the potential risks of these new tools even amid fierce competition.
The commitment was formally announced during a meeting with President Joe Biden at the White House. The president emphasized the need to remain alert to the threats that emerging technologies can pose to democracy and societal values, and noted the significant responsibility the companies bear to ensure these technologies are used correctly, while also acknowledging their vast potential benefits.
These AI companies are racing to outdo one another in building advanced AI systems that can independently generate text, images, music, and video. This rapid progress, however, has raised concerns about the spread of disinformation and about potentially disastrous consequences as AI grows more sophisticated and human-like.
This initial step toward voluntary safeguards is part of a broader global effort to establish legal and regulatory frameworks for AI development. The commitments include provisions for security testing of products and the use of watermarks to help consumers identify AI-generated content.
However, legislative efforts to regulate rapidly evolving technologies like social media and AI have been challenging. President Biden assured that he would continue to enact executive orders to maintain America’s leadership in responsible innovation and work across party lines to develop suitable legislation and regulation.
Although the White House has not provided specifics on a forthcoming executive order aimed at controlling foreign competitors’ access to AI technology, particularly China’s, the order is expected to include new restrictions on exports of advanced semiconductors and large language models.
The voluntary commitments, while viewed as an important first step toward responsible AI guidelines, are not binding, leaving the advancement of AI technology largely unregulated. The companies have nonetheless pledged to conduct security testing, research bias and privacy issues, share information about risks, develop tools to address societal challenges, and be transparent in identifying AI-generated content.
While these agreed-upon standards signal a proactive and thoughtful approach by the AI companies, they amount to a lowest common denominator and remain open to interpretation by each company. Given the continuing risks, the agreement is unlikely to deter legislative and regulatory efforts around this emerging technology. Even as European regulators prepare to enact AI laws this year, U.S. lawmakers are grappling with how to regulate AI while maintaining a competitive edge, especially against rivals like China.