98/100 Score on Claude Code: Top 0.1% of Sessions Worldwide
Source: Dev.to
A developer on the Claude Code community platform posted a new benchmark result on April 3: a 98 out of 100 score that places the session in the top 0.1 percent of all runs worldwide. The achievement, announced by French-speaking coder Franck Hlb, eclipses his previous best of 95/100 and beats the earlier Gemini CLI record, which sat at the top 1 percent. The score was generated with Anthropic's latest Claude Opus 4.6 model, rolled out last week, which already boasts a 90.2% BigLaw Bench score, the highest of any Claude variant.
The result matters because Claude Code is Anthropic's flagship tool for turning natural-language prompts into production-ready code, and real-world benchmarks are the clearest evidence of its readiness for enterprise adoption. A 98/100 rating suggests the model can complete complex programming tasks with minimal errors, a point highlighted in our April 4 coverage of AI agents' blind spots. The record also signals that Claude Code is now competitive with, and perhaps surpassing, rival code-generation systems such as Google's Gemini CLI, which has been a reference point for developers evaluating AI-assisted coding.
What to watch next is whether Anthropic will publish a formal leaderboard or fold these community scores into its product marketing. Analysts will be looking for follow-up data on error-rate reductions, especially in safety-critical domains such as legal tech, where Claude Opus 4.6 already shows strong reasoning. Expansion of Claude Code into more IDEs, along with tighter coupling to the newly announced usage-bundle credits, could accelerate uptake. If the trend holds, the model may become a default choice for developers seeking high-precision AI assistance, reshaping the competitive landscape of code-generation AI.