
July 8, 2025

Cursor Pricing Changes, Claude Code = 100M+ ARR, Grok 4 Incoming

Cursor found itself on the wrong end of a PR fiasco this week, fending off a firestorm of social media criticism over poorly communicated pricing changes that rolled out over the last few weeks. The crux of the issue is that Cursor quietly transitioned from selling request limits to selling compute limits. With the change, a single large request can cost 10x+ the "credits" of another request.

Naturally, the power users most likely to make these "large requests" were caught in the middle and started receiving surprise overage bills. Cursor does offer an escape hatch - you get truly unlimited requests if you use their "Auto" mode - but that mode optimizes for cost and bandwidth, not capability, so relying on "Auto" likely means an unpredictably degraded development experience.

On the back of that, Cursor officially announced its $900 million raise, sporting $500 million in ARR. Meanwhile, some back-of-the-napkin math applied to Cursor's 4-month-old competitor, Claude Code, suggests that Anthropic is already pulling in somewhere in the neighborhood of $100 million per year from the product. The relationship between the two companies is increasingly tense: Cursor obviously relies heavily on Anthropic's models, but was recently bold enough to poach the engineer and product manager who created Claude Code to be its head of engineering and head of product, respectively.

It's clear that both Cursor and Anthropic are losing money chasing this market. And yet, it's all a testament to the product-market fit that coding with AI has: not only are users willing to pay billions of dollars for these hardly mature, months-old products, but the companies selling them are willing to take massive losses to retain market share in anticipation of just how big this market is going to be.

Elon Musk announced that xAI's Grok 4 will go live at 8PM PT on Wednesday. Its predecessor, Grok 3, was a much-hyped but ultimately disappointing launch, and the xAI team has not made much of a mark on the industry since. We'll see if this is their moment to turn the ship around: they just raised another $10 billion and have been shopping a valuation of up to $200 billion - a lofty price when you consider that Anthropic's raise in March landed at only $60 billion.

We at PulseMCP have released an interactive page where we're publishing MCP ecosystem statistics. MCP adoption by end-users continues to grow: June was an all-time high for MCP usage across the ecosystem. Our estimated download counts for local servers over the last four months: 4.1m (March), 6.8m (April), 6.1m (May), 7.4m (June). Want to see other stats or data? Shoot us a note!

Have questions or feedback for the Pulse team? Requests for changes on pulsemcp.com? Ideas for content/collaboration? Join our Discord community.

What's upcoming for MCP?

Elicitations - the ability for MCP servers to request structured information from end-users - are the current theme of bleeding edge MCP implementations. Den from Microsoft wrote up a great approachable piece describing their usefulness as support for it landed in VS Code.
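For a concrete sense of the shape, here's a sketch of an elicitation request as it might appear on the wire, based on our reading of the current spec revision. The message text and schema contents are illustrative, not from any real server:

```python
# Sketch of an MCP elicitation request (server -> client). The server
# asks the client to collect structured input from the end-user.
elicit_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        # Shown to the end-user to explain what is being asked and why.
        "message": "Which GitHub organization should I search?",
        # A restricted, flat JSON schema (primitive properties only),
        # so clients can render it as a simple form.
        "requestedSchema": {
            "type": "object",
            "properties": {
                "org": {"type": "string", "description": "Organization slug"},
            },
            "required": ["org"],
        },
    },
}
```

The client then responds with an action ("accept", "decline", or "cancel") and, on accept, the collected values - which is what makes this workable for the "Secure Elicitations" extension discussed below.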

→ Previously called "User Interactions" but now renamed to "Secure Elicitations", this work on the spec driven by Arcade.dev with input from AWS, Microsoft, Auth0, Stytch, and others has a lot of community buy-in and seems on track to make its way into the official spec. This will extend Elicitations to allow collecting sensitive information from end-users, like passwords or payment details.

→ Members of the Community Working Groups (CWG) have been pushing the idea of "asynchronous, long-running tool calls" for a while, and those proposals (8 of them!) have now officially been moved onto the formal Standards Track. The ability for MCP servers to run tool calls without blocking the caller's agentic loop is coming.

→ The CWG Discord has also spun up a new working group, themed around specification validation. As the specification grows in complexity and use cases proliferate, there is an obvious need for validation tools, reference implementations, and other testing frameworks. Work like Janix AI's mcp-validator has gotten the ball rolling - but there is plenty to flesh out here. If this is a topic of interest, join the Discord and contribute to this group's work.

→ A common ask we've noticed coming up again and again is for MCP servers to be able to communicate up-front "instructions" for how callers should use their tools. The catch is: this is already in MCP! We just need client apps to start supporting it. If it sounds useful to you, help out with adoption of this little-known MCP feature.
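For reference, the feature in question is the optional server-level `instructions` field returned during initialization. A minimal sketch of an initialize result carrying it - the field names follow the spec's `InitializeResult`, while the server name and guidance text are made up for illustration:

```python
# Sketch of an MCP initialize result with the optional "instructions"
# field: up-front guidance for how callers should use the server's
# tools. Values here are illustrative.
initialize_result = {
    "protocolVersion": "2025-06-18",
    "capabilities": {"tools": {"listChanged": True}},
    "serverInfo": {"name": "example-server", "version": "1.0.0"},
    "instructions": (
        "Always call `search` before `fetch`, and pass the `id` "
        "returned by `search` verbatim. Results are paginated; "
        "prefer small page sizes."
    ),
}
```

Client apps that surface this field to the model get the "up-front instructions" behavior for free - no spec change required.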

OpenCode (10k+ GitHub stars) MCP Client by SST
→ A direct competitor to Claude Code, OpenCode offers terminal-based agentic coding. Released less than a month ago, it has started drawing some very positive reviews, and the founder has hot takes on how Anthropic should be thinking about being a "platform" vs. investing in the tool that is Claude Code. We're disappointed that, as a product, OpenCode has fairly limited support for MCP - basic tool calling only - so we're not all that eager to try it ourselves; but we're keeping an eye on this leading ground-up open source player.

Fast-agent MCP Client: Elicitations Support by @evalstate
→ Our favorite agent-building framework now has support for elicitations in a very elegant terminal-based UI. The examples in the documentation page highlight some very compelling use cases for MCP server builders: throw up a newsletter subscription form; request a rating or review; and more.

LlamaCloud (#60 this week) MCP Server by LlamaIndex
→ One of the founders of LlamaIndex put out a nice demo of this server: use it to reliably extract structured data from complex documents like PDFs, images, and a lot more. Great for use cases like processing legal contracts, invoices, financial statements… any kind of unstructured data source you can dream up.

Remote Code Orchestrator (#61 this week) MCP Server by Systemprompt
→ Given the rise of remote-friendly coding, this server that "turns your workstation into a remotely accessible AI coding assistant" is worth highlighting. Pair it with the Systemprompt voice-native MCP client we featured last week, and you have yourself a home-rolled version of "Cursor on your phone" that works with Claude Code (or could be extended to work with any alternative).

Remote Control: macOS (@baryhuang, #40 this week) or Windows (CursorTouch, #27 this week) MCP Servers
→ Two sides of the same coin, these two MCP servers enable remotely controlling your Mac or PC. Unlike the prior featured server, these solutions are more generalist: do anything on the remote machine, not just coding. Practically speaking, they are probably most interesting for controlling remote cloud machines, like those rented from MacStadium, for tasks like testing apps before shipping them to production.

Browse all 300+ clients we've cataloged. See the most popular servers this week and other recently released servers.

Cloudflare certainly put a stake in the ground last week with its declaration of Content Independence Day. Reactions have been mixed: on one hand, a notable player is standing up for the long tail of content creators who otherwise might not have a voice against the multi-trillion dollar AI industry. On the other hand, it's accused of building its own walled garden, and risks limiting its customers' visibility by default. Our take: standards of the kind Cloudflare is championing are absolutely critical to the future of the open internet (see Cloudflare microsite: contentsignals.org). And holding AI crawlers to account (see microsite: goodaibots.com) is another necessary piece of the puzzle. Cloudflare's CEO is hanging his hat on the effort, claiming "we will get Google to [split out AI crawling vs. search engine crawling]."

→ While Cloudflare's work may be welcome news to established internet brands, here's another research report from Kevin Indig emphasizing, yet again, that building your brand is what matters for acquiring eyeballs in an AI-first world. His lens comes from the search marketing world (which is rapidly morphing into "AI answer engine visibility"), but we think it extends to any building you're doing in the AI and MCP space as well. No matter how good your product, there will be no magic bullet like a "search engine for MCP" that will get you users. Those MCP search engines will get optimized to reward one thing: your brand and its perception in the market. So make sure you're in those Reddit communities, interacting on Discord, tweeting on X.com, building your viral loops: that will be what separates a sticky product from one whose beautiful code withers away in a corner of the internet.

→ An extensive research paper has been making the social media rounds on the question "are we shooting ourselves in the foot by over-relying on AI?" The paper's overall message is that there is a "pressing matter of a likely decrease in learning skills based on the results of our study," which has prompted a lot of discourse worrying that using AI is making us "dumber" and will come back to bite us. But we think the real takeaway is buried further in the paper: there is "a clear distinction in how higher-competence and lower-competence learners utilized LLMs, which influenced their cognitive engagement and learning outcomes." Meaning: if you use AI thoughtfully, you maintain cognitive engagement and come out ahead, still getting the benefit that "LLM users are 60% more productive overall." It's the difference between blindly vibe-coding and architecting an agentic coding harness.

→ If you've been trying out Claude Code or another agentic coding tool and finding the results not nearly as mind-blowing as we'd all have you believe, you might benefit from Mario Zechner's deep dive on taming agentic engineering. He breaks down how to leverage AI thoughtfully when dealing with an extremely complex legacy codebase. The thinking harkens back to traditional software engineering principles as you reorient around the idea that "prompts are code, and .md files are state". It's a clever reframing - perhaps overkill for solo devs and vibe coders, but it offers a path to "scaling" agentic coding in a predictable, reliable way: "By treating LLMs as programmable computers rather than conversational partners, I've found a more reliable approach to complex software tasks. It's not a panacea and won't work for all problems, but it represents a step toward turning AI-assisted coding into an engineering discipline rather than a 'throwing shit at the wall' approach." This kind of thinking will likely become less necessary as models improve, but it might help you juice Claude Code for all it's worth while we wait for 2026 to bring us Claude 5.

→ A much simpler quick tip we liked, by way of Tobi from Shopify: don't sleep on what we'll call "meta" context engineering. Preparing a static slash command or MCP Prompt with context is an obvious context engineering tactic. But the bit we found interesting in his post is the idea of creating dynamic context: "Context: current git status (`git status`)...". The context you've prepared is not a literal snapshot of `git status` output - it's the instruction that the context this saved prompt needs is the dynamic result of running the `git status` CLI command at invocation time.
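To make the idea concrete, here's a minimal sketch of the mechanism in Python. The `{{cmd:...}}` placeholder syntax is our own invention for illustration, not any particular tool's, but the principle - expand shell commands at invocation time rather than baking stale output into the saved prompt - is the same:

```python
import subprocess


def build_prompt(template: str) -> str:
    """Expand {{cmd:...}} placeholders by running the named shell
    command when the prompt is invoked, so the saved prompt stores
    the *instruction* to fetch context, not a stale snapshot of it."""
    while "{{cmd:" in template:
        start = template.index("{{cmd:")
        end = template.index("}}", start)
        command = template[start + len("{{cmd:"):end]
        output = subprocess.run(
            command, shell=True, capture_output=True, text=True
        ).stdout.strip()
        template = template[:start] + output + template[end + 2:]
    return template


# A saved prompt that stays fresh no matter when it's invoked:
saved = "Context: current git status ({{cmd:git status --short}})."
```

Every invocation of `build_prompt(saved)` re-runs `git status --short`, so the model always sees the repository as it is right now.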

Amazon as a company might be trying to repeat its famous "API-first" transition by pushing a narrative of "MCP-first". Gergely Orosz reports that "Amazon is likely the global leader in adopting MCP servers at scale." Indeed, Amazon employees are claiming that they are often starting to consider building MCP servers before (or instead of) building a UI for a new service.

→ In a piece critical of MCP, Armin Ronacher makes the claim that most MCP servers today should just be code or CLI tools instead. He makes fair points: generating code and (re)using scripts can be the most efficient way to solve many engineering problems. However, we think the piece doesn't give enough credence to MCP's unique qualities. (1) MCP servers are shareable. Writing raw code instead is like keeping a personal script rather than deploying it as a web app with a thoughtful UX for others to use. (2) MCP servers are portable. Using an MCP server with another MCP client is generally easy - repurposing a shell script likely requires a bespoke process to lift it across. And (3) MCP increasingly has helpful, opinionated patterns and libraries/SDKs to solve cross-cutting concerns like secure authentication and various UX odds and ends (like elicitation). By all means, if you're facilitating a one-off process where you won't share, reuse, or need auth or UX: generate some code instead of an MCP server. But in the long run that'll be a minority of valuable use cases, and you'd just be missing out on the opportunity to familiarize yourself with MCP as you go.

→ This great writeup on MCP tool design is one we feel we could have written ourselves. If you're looking for MCP tool design 101, start here (and not by mapping tools to your REST API). It lays out good reasons for doing so, like the reminder that tool descriptions are for "context, not comprehension … if your tool descriptions use ambiguous terms or assume external knowledge, the model is essentially guessing", and it describes a design process that "starts with user intent, not API operations." The author gives good examples, and you can take a peek at our ongoing work in the pulsemcp/mcp-servers repository as another point of reference.
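As a tiny illustration of the "user intent, not API operations" point - our own hypothetical example, not taken from the linked post:

```python
# Anti-pattern: tools that mirror REST endpoints. To answer "what do
# I owe?", the model must know to chain these calls and filter the
# results itself - external knowledge the descriptions don't supply.
rest_mapped_tools = [
    {"name": "list_invoices", "description": "GET /invoices"},
    {"name": "get_invoice", "description": "GET /invoices/{id}"},
]

# Intent-first: one tool per user goal, with a description that
# carries context instead of assuming external knowledge.
intent_first_tools = [
    {
        "name": "find_unpaid_invoices",
        "description": (
            "Return invoices that are past their due date and not yet "
            "paid. Use when the user asks what they owe or what is "
            "overdue. Dates are ISO 8601; amounts are in USD cents."
        ),
    },
]
```

Fewer, intent-shaped tools with self-contained descriptions leave the model far less room to guess.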

Security research continues to be a theme in the MCP ecosystem. Similar to the Invariant Labs report on the GitHub MCP server some time ago, the Tramlines team showcases similar exploits on the Neon and Heroku MCP servers. In both cases, thoughtful application of the design patterns from Simon Willison we highlighted a few weeks ago would have been an effective mitigation.

Cheers,
Mike and Tadas

Sign up for the weekly Pulse newsletter

Weekly digest of new & trending MCP apps, use cases, servers, resources and community developments.



Tadas Antanavicius

Co-creator of Pulse MCP. Software engineer who loves to build things for the internet. Particularly passionate about helping other technologists bring their solutions to market and grow their adoption.