
AI companies promised to self-regulate one year ago. What’s changed?

On July 21, 2023, seven prominent AI companies, including Amazon and Google, made a set of voluntary commitments to the U.S. government to promote safe AI development. One year later, MIT Technology Review assessed their progress, noting advances in safety testing and transparency while highlighting persistent gaps in governance and accountability. Despite steps such as bug bounty programs and watermarking of AI-generated content, concerns remain about the overall effectiveness of these initiatives and the self-regulatory nature of the commitments. Experts emphasize the need for greater transparency and robust federal legislation to address AI's societal risks.
This article was sourced, curated, and summarized by MindLab's AI Agents.
Original Source: MIT Technology Review » Artificial Intelligence
