
Claude is Slow, Kimi is Go

3 min read · AI · Coding Agents · Claude · Opus 4.6 · Kimi 2.5 · Developer Tools · Productivity

Over the last few weeks, the landscape of AI coding agents has evolved rapidly. Some might remember when GitHub first previewed Copilot back in 2021. It’s impressive to see how we’ve gone from simple autocomplete inside an IDE to CLI tools that can build a CRUD API or an entire website from a single prompt.

After spending a considerable amount of time in the Anthropic ecosystem, using Opus 4.5 daily via Claude Pro and Claude Code, I said goodbye to Anthropic.

My Previous Setup

Until recently, my setup was as follows:

  • Cursor: My primary IDE and go-to for standard engineering tasks, initial planning, and guided coding. It remains an excellent tool for “everyday” development. The Plan and Ask modes are often a massive help for gathering context or architecting a feature.

  • Claude Opus 4.5: This was my sledgehammer and my scalpel. Whenever I faced complex architectural tasks or cumbersome refactorings, I reached for Opus. I would often plan a feature using Cursor’s dedicated mode, update the plan, and then hand it over to Claude Code for execution.

Opus 4.6: A Step Back

When Anthropic released Opus 4.6, I didn’t expect a massive leap, but it actually felt like a step back. The model feels considerably slower and is significantly more expensive to run.

Furthermore, Anthropic’s recent policy changes and general behavior have felt unprofessional and at times dishonest. For me, the 4.6 update, the pricing, the communication, and the constant “predictions” from their CEO were the final straw.

Switching to Kimi 2.5

I’ve used various open models in the past: Devstral 2 was my daily driver in Cursor back in January, and I still highly recommend GLM 4.5 Air (available via OpenRouter). However, with the release of Kimi 2.5 and their coding plans, I honestly see no reason to stay with Claude.

Being on the Allegretto plan, I almost never hit usage limits. The 5-hour reset periods are perfectly timed for a standard workday. I’ve only “hit the wall” once, and that was mostly due to my own experimentation rather than actual work constraints.

If you are using the Kimi CLI, I highly recommend adding the Playwright and Chrome DevTools MCPs. These make debugging and frontend testing significantly easier within the agent’s workflow:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
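Where exactly this config file lives depends on your Kimi CLI setup, so check its docs. Either way, a malformed config is a common source of silent MCP failures, and it’s cheap to validate the JSON before the CLI tries to load it. A minimal sketch, assuming the snippet above is saved as `mcp.json` in the working directory:

```shell
# Sanity-check the MCP config before the CLI loads it.
# Assumes the JSON above was saved as mcp.json in the current directory.
python3 -m json.tool mcp.json > /dev/null \
  && echo "mcp.json: valid JSON" \
  || echo "mcp.json: syntax error"
```

Both servers are fetched and run via `npx`, so a working Node.js install is the only other prerequisite.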

While no model is 100% accurate, Kimi 2.5 consistently gets closest to my intended results. Whether I’m working on the frontend or a complex backend feature, Kimi handles the task like any other frontier model on the market. It feels faster, and I personally prefer its logic and output style.

Final Thoughts

Kimi currently “lacks” a dedicated plan mode like Claude Code’s, but I prefer doing my actual planning inside my IDE anyway. That’s why I’m keeping my Cursor Pro subscription. Cursor’s “Ask” and “Plan” modes are still the gold standard for navigating a codebase, and having that context directly where I code just makes sense. It’s also a great playground for testing other models and for small to medium tasks, before committing a complex job to Kimi.

For the foreseeable future, I am sticking with Moonshot AI. Kimi 2.5 provides the performance I need for complex tasks without the quality degradation I’ve recently experienced with Anthropic.

Thanks for the coffee!

Every cup fuels another line of thoughtful code.
