Ronan Farrow on Instagram: "During the Biden administration, Sam Altman withdrew from a process in which he had been exploring getting a security clearance to join classified AI policy discussions. Staffers at the RAND Corporation, which helped coordinate the process, were skeptical he could get a clearance due to his extensive foreign entanglements—one noted that he’d been raising “hundreds of billions of dollars” from foreign governments.
Why? To build AI at the scale and speed he wanted, Altman needed money that few entities on earth possess—and the company considered extraordinary plans to achieve this.
According to internal documents, OpenAI once considered auctioning off access to advanced AI models to governments, deliberately playing world powers against each other. A junior researcher recalled thinking: “This is completely fucking insane.”
Watch to learn more about how AI companies are shaping the balance of power in the world.
And read the full story now in The New Yorker—link in bio.
#openai #samaltman #middleeast #geopolitics #artificialintelligence"
Source: Mastodon
Sam Altman, the chief executive of OpenAI, stepped back from a classified‑clearance process that would have placed him in a U.S. government AI policy forum, according to a new investigation published by The New Yorker and highlighted by journalist Ronan Farrow on Instagram. Internal documents obtained by the magazine show that the RAND Corporation, which was helping coordinate the clearance, doubted Altman’s eligibility because of “extensive foreign entanglements,” including his role in raising “hundreds of billions of dollars” from foreign governments.
The withdrawal matters because it reveals how closely the private AI sector is courting state power. Altman's push to scale AI models rapidly and massively required capital that only sovereign wealth funds and state-backed investors could supply. The same documents indicate that OpenAI once debated auctioning access to its most advanced models to governments, deliberately pitting world powers against one another to secure funding. A junior researcher involved in the discussions described the idea as "completely fucking insane," underscoring the ethical alarm the proposal provoked even inside the company.
If true, the episode signals a potential shift in the balance of AI governance: private firms may seek to leverage geopolitical rivalries to finance development, while governments scramble to embed themselves in the technology’s strategic core. It also raises questions about the adequacy of existing clearance mechanisms when industry leaders have deep, multinational financial ties.
Watch for reactions from U.S. officials, who have already begun granting Anthropic's Claude access to federal agencies, and from rival AI labs that may adopt similar funding models. Congressional oversight hearings on AI security clearances are likely to intensify, and any formal policy proposals to regulate corporate-government AI collaborations will become a focal point in the coming months.