OpenAI's newest fellowship includes up to $15,000 in AI compute a month
ai-safety anthropic openai
Source: Insider
OpenAI unveiled a new safety‑focused fellowship that will grant external researchers up to $15,000 worth of AI compute each month, alongside a modest stipend and mentorship from OpenAI staff. The pilot, slated to run from September 2026 through February 2027, targets work on alignment, robustness, privacy and misuse prevention. Applicants will be selected on the basis of technical merit and the potential impact of their proposals, with the first cohort expected to begin experiments later this year.
The announcement arrives just hours after a media report questioned CEO Sam Altman's commitment to AI safety, positioning the fellowship as a concrete response to growing scrutiny. By matching the structure of Anthropic's own safety fellowship, OpenAI signals a willingness to compete directly in the nascent ecosystem of corporate-funded safety research. The compute allocation, equivalent to roughly 1,200 GPU-hours per month, addresses a chronic bottleneck for independent labs that lack access to the scale required for modern foundation-model experiments.
If the program succeeds, it could accelerate breakthroughs in alignment techniques and provide a pipeline of vetted talent for OpenAI’s internal safety teams. It also sets a benchmark for other AI firms, many of which have announced similar initiatives but have yet to disclose comparable resource commitments. Observers will be watching how OpenAI balances the fellowship’s open‑research ethos with its proprietary model roadmap, especially as the company rolls out new products such as its text‑to‑speech API and a job‑matching platform later in 2026.
Key developments to monitor include the release of application guidelines, the composition of the inaugural cohort, and any early research outputs that demonstrate the compute grant’s practical value. The fellowship’s impact on the broader safety community—and whether it spurs a wave of corporate‑backed alignment projects—will shape the narrative around industry responsibility in the next wave of AI advancement.