Codex for (almost) everything
Tags: agents, openai
Source: Mastodon
OpenAI has rolled out a major upgrade to its desktop‑based Codex agent, branding the new version “Codex for (almost) everything”. The update, released on 16 April 2026 for macOS and Windows, expands the tool beyond code completion to full‑system interaction. Codex can now move the mouse, type in any application, launch and navigate a built‑in web browser, generate images on demand, retain preferences across sessions, and load third‑party plugins that automate repetitive tasks. In short, the AI has been turned into a development partner that can orchestrate the entire workflow from design mock‑ups to deployment scripts without the user leaving the IDE.
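Conceptually, this kind of full-system control works as an observe-and-act loop: the model emits structured actions (click here, type this, open that URL) and a local executor carries them out. The sketch below is purely illustrative — the action names, the `Action` type, and the `DesktopExecutor` class are hypothetical and are not OpenAI's actual Codex API; a mock executor records actions instead of driving a real mouse or keyboard.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    kind: str   # hypothetical action names: "click", "type", "open_url"
    args: dict


class DesktopExecutor:
    """Mock executor: logs each action rather than touching the real desktop."""

    def __init__(self) -> None:
        self.log: List[str] = []
        # Dispatch table mapping action kinds to handlers.
        self.handlers: Dict[str, Callable[[dict], None]] = {
            "click": lambda a: self.log.append(f"click at ({a['x']}, {a['y']})"),
            "type": lambda a: self.log.append(f"type {a['text']!r}"),
            "open_url": lambda a: self.log.append(f"browse to {a['url']}"),
        }

    def run(self, actions: List[Action]) -> List[str]:
        for action in actions:
            self.handlers[action.kind](action.args)  # route to matching handler
        return self.log


executor = DesktopExecutor()
log = executor.run([
    Action("open_url", {"url": "https://example.com"}),
    Action("click", {"x": 120, "y": 48}),
    Action("type", {"text": "hello"}),
])
print(log)
```

In a real agent, the model would also receive screenshots or accessibility-tree snapshots between actions so it can observe the effect of each step before deciding the next one; the permission and sandboxing questions raised above apply precisely to the executor side of this loop.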
The move matters because it pushes conversational agents into the same territory occupied by Anthropic’s Claude Code and emerging “super‑app” agents. By handling UI actions and visual assets, Codex reduces the context‑switching that has long slowed software teams, promising faster prototyping and tighter DevOps loops. At the same time, the ability to control a computer raises security and privacy questions that enterprises will need to address before granting the model broad permissions.
As we reported on 17 April 2026, an earlier Codex update had already introduced background computer use; this release adds browsing, image generation, memory, and a plugin framework, marking the first step toward a truly general-purpose coding assistant. The next milestones to watch are OpenAI's plans for Linux support, the pricing model for the expanded feature set, and the growth of the plugin marketplace. Equally important will be how quickly development teams adopt the tool versus entrenched alternatives such as GitHub Copilot and Claude Code, and whether regulators impose new safeguards on AI agents that can manipulate operating systems.