Why we need a new ‘Statement on AI Risk’ and what it should accomplish

The growing gap between acknowledged artificial intelligence risks and actual government investment in AI safety measures highlights a concerning disconnect in policy priorities.

The central proposal: A new “Statement on AI Inconsistency” aims to highlight the disparity between U.S. military spending and AI safety investment, given that the underlying risks are assessed as comparable.

  • The proposed statement points out that while the U.S. spends roughly $800 billion annually on national defense, it allocates less than $0.1 billion to AI alignment and safety research, a gap of more than 8,000 to 1 (see the sketch after this list)
  • This spending disparity exists despite artificial superintelligence (ASI) being considered as significant a threat as traditional military concerns
  • The statement is intended to succeed an earlier “Statement on AI Risk” with more specific policy implications
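
The scale of that gap is easy to make concrete. Below is a minimal arithmetic sketch using only the two budget figures cited above; the 8,000-to-1 ratio is derived from them, not a number quoted in the statement itself:

```python
# Back-of-envelope comparison of the two budget figures cited in the article.
defense_budget = 800e9      # ~$800 billion: annual U.S. military spending
ai_safety_budget = 0.1e9    # <$0.1 billion: AI alignment and safety research

ratio = defense_budget / ai_safety_budget
print(f"Defense outspends AI safety by at least {ratio:,.0f} to 1")
# -> Defense outspends AI safety by at least 8,000 to 1
```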

Risk assessment consensus: Multiple expert groups and the general public give broadly similar estimates of catastrophic AI risk.

  • Superforecasters estimate a 2.1% chance of an AI catastrophe severe enough to kill 10% of humanity
  • AI experts project a 5-12% probability of such an event
  • Other expert groups and the general public consistently estimate around a 5% risk level
  • These assessments suggest AI poses a threat on a scale comparable to traditional military concerns, as the expected-loss sketch below illustrates
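
To see why single-digit percentages are treated as comparable to military-scale threats, a rough expected-loss calculation helps; the world-population figure here is an assumption for illustration, not a number from the article:

```python
# Rough expected-loss sketch based on the ~5% consensus estimate above.
p_catastrophe = 0.05       # ~5% chance of the catastrophe described above
fatality_share = 0.10      # an event severe enough to kill 10% of humanity
world_population = 8e9     # assumed: roughly 8 billion people (not from the source)

expected_deaths = p_catastrophe * fatality_share * world_population
print(f"Expected deaths under these assumptions: {expected_deaths:,.0f}")
# -> Expected deaths under these assumptions: 40,000,000
```

On those inputs the expected toll is in the tens of millions of lives, which is the sense in which the statement’s authors place AI risk in the same category as conventional military threats.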

Strategic rationale: The authors argue this new statement would be harder for governments to dismiss without meaningful action.

  • Unlike the earlier Statement on AI Risk, this call to action could not be satisfied by governments with token initiatives
  • The explicit comparison to military spending creates a clear benchmark for adequate investment
  • The statement builds on previously established concerns about AI risks, making it an incremental rather than radical position

Implementation challenges: The authors face significant hurdles in gaining institutional support.

  • They seek backing from established organizations like the Future of Life Institute or the Center for AI Safety
  • They acknowledge the need for organizational infrastructure to gather expert signatures

Analyzing the implications: The massive disparity between military and AI safety spending suggests institutional inertia may be preventing appropriate resource allocation to emerging threats, potentially leaving society vulnerable to new forms of risk that fall outside traditional defense frameworks.

