WASHINGTON (AP) — Amazon, Google, Meta, Microsoft and other companies at the forefront of artificial intelligence development have agreed to abide by a set of AI safeguards negotiated by President Joe Biden's administration.
The White House said Friday it has secured voluntary commitments from seven U.S. companies to ensure their AI products are safe before release. Some of the commitments call for third-party oversight of commercial AI systems, though they do not spell out who will audit the technology or hold the companies accountable.
A surge of commercial investment in generative AI tools that can write convincingly human-like text and produce new images and other media has sparked public fascination, as well as concerns about their ability to mislead people and spread disinformation, among other dangers.
The four tech giants, along with OpenAI, the maker of ChatGPT, and the startups Anthropic and Inflection, have committed to security testing “conducted in part by independent experts” to guard against major risks, such as those to biosecurity and cybersecurity, the White House said in a statement.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarks to help distinguish real images from AI-generated ones known as deepfakes.
They will also publicly report flaws and risks in their technology, including the effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way to address risks before a longer-term push to get Congress to pass laws regulating the technology.
Some AI regulation advocates said Biden’s move is a start, but more needs to be done to hold companies and their products accountable.
“History would indicate that many technology companies do not actually voluntarily commit to acting responsibly and supporting strong regulations,” James Steyer, founder and CEO of the nonprofit Common Sense Media, said in a statement.
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He has held several briefings with government officials to educate senators on an issue that attracts bipartisan interest.
Several tech executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
But some experts and emerging competitors worry that the kind of regulation being proposed could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are crowded out by the high cost of making their AI systems, known as large language models, adhere to regulatory constraints.
The BSA software trade group, which includes Microsoft as a member, said Friday it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.
“Enterprise software companies look forward to working with the administration and Congress to enact laws that address the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.
Several countries have been looking for ways to regulate AI, including European Union lawmakers who have been negotiating general AI rules for the 27-nation bloc.
UN Secretary-General Antonio Guterres recently called the United Nations “the ideal place” to adopt global standards and appointed a board that will report on options for global governance of AI by the end of the year.
The United Nations chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by models such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on voluntary commitments with several countries.