Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7
Source: Hacker News
Simon Willison’s latest blog post marks a striking shift in the AI‑generated‑art landscape: running the open‑source Qwen 3.6‑35B‑A3B model on a standard laptop produced a pelican illustration that he judged superior to the one rendered by Anthropic’s Claude Opus 4.7. The comparison, posted on 16 April 2026, pits Qwen’s multimodal capabilities, now fine‑tuned for image synthesis, against Claude’s newly released 4.7 version, which we covered in “What’s new in Claude Opus 4.7” (16 April 2026).
Willison’s experiment is more than a novelty. Qwen 3.6‑35B‑A3B, the latest entry in Alibaba’s Qwen series, can run on consumer‑grade GPUs thanks to aggressive quantisation and its mixture‑of‑experts design, which activates only about 3 billion of its 35 billion parameters per token (the “A3B” in its name). By contrast, Claude Opus 4.7 remains a cloud‑only service, charging per token and requiring an internet round‑trip for every request. The ability to generate high‑fidelity visuals locally reduces latency, eliminates data‑exfiltration risks, and cuts operating costs for developers and small studios.
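To make the cost argument concrete, here is a minimal back‑of‑the‑envelope comparison. All of the figures below (API price, GPU power draw, electricity rate, local throughput) are illustrative assumptions, not vendor quotes or measured numbers:

```python
# Rough cost comparison: metered cloud API vs. local inference.
# Every constant below is an assumed, illustrative value.

CLOUD_PRICE_PER_1K_TOKENS = 0.015   # assumed USD per 1K output tokens
LOCAL_POWER_WATTS = 80              # assumed laptop GPU draw under load
ELECTRICITY_PRICE_KWH = 0.30        # assumed USD per kWh
LOCAL_TOKENS_PER_SECOND = 25        # assumed throughput for a quantised 35B MoE


def cloud_cost(tokens: int) -> float:
    """Cost of generating `tokens` output tokens via a metered API."""
    return tokens / 1000 * CLOUD_PRICE_PER_1K_TOKENS


def local_cost(tokens: int) -> float:
    """Electricity cost of generating the same tokens on-device."""
    seconds = tokens / LOCAL_TOKENS_PER_SECOND
    kwh = LOCAL_POWER_WATTS / 1000 * seconds / 3600
    return kwh * ELECTRICITY_PRICE_KWH


if __name__ == "__main__":
    tokens = 2_000_000  # e.g. a month of moderate use
    print(f"cloud: ${cloud_cost(tokens):.2f}")   # $30.00 under these assumptions
    print(f"local: ${local_cost(tokens):.2f}")   # about $0.53 under these assumptions
```

Under these (assumed) numbers, local generation costs pennies where the API costs tens of dollars; the real gap depends on actual pricing and hardware, but the shape of the argument holds as long as electricity is cheaper than metered tokens.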
The result matters for the Nordic AI ecosystem, where many startups rely on tight budgets and data‑privacy regulations. If a 35‑billion‑parameter model can outperform a premium API on a laptop, the incentive to adopt open‑source alternatives grows. It also pressures proprietary providers to justify their pricing or accelerate feature releases.
What to watch next: Alibaba plans a Qwen 4.x series with larger vision‑language models, while the community is already integrating Qwen into frameworks such as Chartroom and Datasette, as indicated by recent package releases. Anthropic may respond with tighter integration of image generation or revised pricing tiers. Meanwhile, benchmark suites that compare multimodal output quality across open‑source and commercial models are likely to gain traction, giving developers concrete data for future migrations. The pelican test may be a small anecdote, but it foreshadows a broader rebalancing of power between cloud‑bound AI services and locally run, open‑source alternatives.
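A benchmark suite of the kind mentioned above can be sketched as a small harness: send the same prompt to every model and score each output side by side. The model callables and the element‑count scorer below are stand‑ins for illustration only, not real API clients or a real quality metric:

```python
# Minimal sketch of a multimodal-output benchmark harness.
# The two "models" are stubs, not real Qwen or Claude clients,
# and the scorer is a placeholder, not a real quality metric.
from typing import Callable, Dict

Model = Callable[[str], str]  # prompt in, SVG source out


def qwen_stub(prompt: str) -> str:
    return "<svg><circle r='5'/><!-- pelican from local Qwen (stub) --></svg>"


def claude_stub(prompt: str) -> str:
    return "<svg><!-- pelican from Claude API (stub) --></svg>"


def score_svg(svg: str) -> int:
    # Placeholder metric: count SVG elements. A real suite would use
    # human judging or a learned image-quality scorer instead.
    return svg.count("<")


def run_benchmark(models: Dict[str, Model], prompt: str) -> Dict[str, int]:
    """Run every model on the same prompt and score each output."""
    return {name: score_svg(model(prompt)) for name, model in models.items()}


results = run_benchmark(
    {"qwen-local": qwen_stub, "claude-cloud": claude_stub},
    "Generate an SVG of a pelican riding a bicycle",
)
```

The value of such a harness is less the scoring function than the fixed prompt and the uniform interface: once every model sits behind the same callable, swapping in new releases or new metrics becomes a one‑line change.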