As the AI industry advances rapidly, concerns about potential risks and the need for safety measures have prompted the Biden administration to take action. To address these concerns, the White House has gathered “voluntary commitments” from seven major AI developers in pursuit of shared safety and transparency goals. This non-binding agreement comes ahead of a planned Executive Order, aiming to encourage responsible practices within the sector.
The companies participating in this initiative are Microsoft, Google, Anthropic, Inflection, Meta, OpenAI, and Amazon. To discuss these voluntary commitments, representatives from each company are meeting with President Biden at the White House today.
While these commitments are not legally binding, they will likely become a matter of public record, holding companies accountable in the court of public opinion. Although there is no enforcement proposed by any government agency for non-compliance, the voluntary commitments provide a framework for ethical development and deployment.
The White House gathering features high-profile attendees, including Brad Smith, President of Microsoft; Kent Walker, President of Google; Dario Amodei, CEO of Anthropic; Mustafa Suleyman, CEO of Inflection; Nick Clegg, President of Meta; Greg Brockman, President of OpenAI; and Adam Selipsky, CEO of Amazon Web Services. Notably, the list lacks gender diversity, and each company sent a top executive rather than a deputy.
The key voluntary commitments agreed upon by the seven AI companies are as follows:
1. Internal and External Security Tests: AI systems will undergo security assessments, including adversarial “red teaming” by external experts, before release.
2. Information Sharing: Companies will share information on risks and mitigation techniques with government, academia, and civil society.
3. Investment in Cybersecurity: To protect private model data, companies will invest in cybersecurity and “insider threat safeguards” to prevent unauthorized access.
4. Third-Party Vulnerability Reporting: Companies will facilitate third-party discovery and reporting of vulnerabilities through bug bounty programs or expert analysis.
5. Robust Watermarking: AI-generated content will be marked with robust watermarking or other identification methods.
6. Transparency Reports: Companies will report on systems’ capabilities, limitations, and appropriate and inappropriate use.
7. Research on Societal Risks: There will be a focus on researching societal risks such as systemic bias and privacy issues.
8. Societal Challenges: Companies will develop and deploy AI systems to address significant challenges like cancer prevention and climate change.
While these commitments are voluntary, the possibility of an Executive Order looms, encouraging compliance and emphasizing the importance of responsible practices. For instance, if some companies fail to allow external security testing of their models before release, the E.O. may direct regulatory agencies to scrutinize products claiming robust security.
The White House’s proactive approach to regulation is aimed at staying ahead of disruptive technology, learning from past experiences with social media. The administration is actively seeking input from industry leaders to develop a national strategy, allocating funding for research centers and programs. Even so, the national science and research apparatus is already ahead of the administration, as evidenced by comprehensive reports on research challenges and opportunities from the Department of Energy and National Labs.
As the AI industry continues to evolve, responsible practices and safety measures are becoming increasingly critical. The voluntary commitments made by these top companies set a positive precedent for ethical and accountable development, fostering public trust and confidence in the sector.