July 1, 2025
Prompt-To-App Galore, Gemini Coding CLI, Cursor on your Phone
Perhaps inspired by the story that Replit jumped from $10m ARR to $100m ARR in under 6 months, or Lovable from $0 to $40m ARR in the same amount of time, seemingly every software company has decided that they want a slice of the action… by building the same product.
This week, Airtable announced a pivot to relaunch as an "AI-native app platform." Anthropic launched Claude Artifacts to "turn ideas into interactive AI-powered apps." Fly.io's Phoenix. Figma's Make. Bolt.new. The "prompt-to-app" product idea is becoming a meme. And yet, it makes sense that the free market won't let Replit and Lovable keep the firehose to themselves.
The other AI product reaching widespread product-market fit is the agentic coding CLI. Google released Gemini CLI, a competitor to Anthropic's Claude Code and OpenAI's Codex CLI. It's seen quick adoption, largely due to its generous free tier. That free tier is hemorrhaging so much cash that Google even asked Cline to remove their Gemini CLI integration. Makes sense; it doesn't do Google any good to have its free product consumed behind the scenes by other tools. Gemini CLI quickly caught up in the popularity contest by GitHub star count, but we still see Claude Code dominating the social media narrative.
The Cursor team keeps thinking a step ahead, recently launching Cursor for your phone. While we don't think coding-on-your-phone is going to be a particularly dominant use case, it is an interesting leap of faith for the Cursor team to take. Your automated coding agent has to be really good if you're going to be able to trust it on a tiny screen while typing one-liners with your thumbs.
The rumor mill is starting to churn for Apple, with Bloomberg reporting that the AI laggard might be in talks with Anthropic and/or OpenAI to power Siri. It would be a major shift from their current commitment to their in-house "Foundation Models." Sounds like it's still early, but a partnership like that could certainly catapult Apple products back into the AI conversation.
We at PulseMCP have been busy scratching our own itch with agentic tooling and MCP. It's our view that the current bottleneck in the MCP ecosystem is the lack of quality server implementations. MCP will be truly useful when there exists a quality MCP server for every use case you can dream up: from fetching a website to searching for a flight. We're throwing our weight behind helping make that happen, one server at a time.
So, we've begun to author some well-designed MCP servers (⭐️ every GitHub star is greatly appreciated) for our commonly used integrations and workflows, which we're now regularly using with agents like Claude Code and Goose. They're a work in progress, but we're really excited about what we have cooking here. More to come on this front…
Have questions or feedback for the Pulse team? Requests for changes on pulsemcp.com? Ideas for content/collaboration? Join our Discord community.
What's upcoming for MCP?
Things are quieter than usual on the MCP front as we approach the July 4 US holiday, just a week after the big specification version launch. Nonetheless, some nuggets of interest:
→ We're working on nailing down a well-defined `server.json` shape for the MCP ecosystem. This will likely be the static file you'll want to maintain alongside every MCP server you build. It will be what you publish to the official Registry, what you serve from the .well-known endpoint, perhaps what you bake into those new DXT packages from Anthropic, and more. If you have a use case for it, we'd love your input and help working through the various issues related to server.json.
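To make that concrete, here's a rough sketch of the kind of metadata a `server.json` might carry. This is purely illustrative - the field names below are our guesses written as a TypeScript interface, not the schema being finalized in those issues:

```typescript
// Hypothetical sketch only: field names are illustrative guesses, not the
// final schema, which is still being worked out in the registry discussions.
interface ServerJson {
  name: string;          // e.g. a namespaced identifier for the server
  description: string;   // one-line summary shown in registries and clients
  version: string;       // version of the server this metadata describes
  repository?: string;   // source URL, useful for provenance checks
  packages?: Array<{     // how to run the server locally (npm, PyPI, Docker, ...)
    registry: string;    // e.g. "npm"
    identifier: string;  // e.g. the package name to install
  }>;
  remotes?: Array<{      // or how to reach a hosted deployment
    transport: "streamable-http" | "sse";
    url: string;
  }>;
}
```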
→ If you haven't been keeping up with the play-by-play evolution of auth in MCP, Darin from AWS and Elie from Anthropic teamed up to write a very approachable blog post breaking down the current state of auth in MCP. They explain the tension between ease of use, security, and enterprise needs, and how it all coalesced into an elegant solution in the recent 2025-06-18 spec release.
→ The Security working group within the Community Working Groups Discord has been ramping up its activity, with a deliberate push to encourage attendance at open meetings and welcome contributions from experts across the ecosystem. These meetings and conversations regularly include maintainer Den Delimarsky from the MCP Steering Group, so those who invest in participating can be sure their voices will be heard. Hats off to the endless stream of work going on here: MCP security leadership is resoundingly answering the common refrain that MCP needs to improve its security posture.
→ There is now an official Go SDK. It's not quite live yet, but after a long initial discussion and ensuing design discussion, the Go community is close to being rewarded for its patience with an official SDK, thoughtfully built from scratch. Go is certainly the most popular MCP server language beyond TypeScript and Python, so this should help grease the wheels on improving server quality for a large segment of the community.
→ The Python SDK joins the TypeScript SDK in being fully up to date with the new 2025-06-18 version of the spec. We hope that means we'll start seeing servers using elicitation in the wild very soon.
Featured MCP Work
Pulse Fetch (Tues, July 1) MCP server by PulseMCP
→ Born out of the pain of getting anti-bot errors and/or unnecessarily verbose responses when using the barebones reference Fetch server, we're crafting this server to be the simplest, most reliable way to extract the data you want from a given URL. While that sounds like a modest ambition, it's one that actually begs for a gamut of MCP features: Resources, Sampling, and Prompts all have a very useful place here. That, and the guarantee that you'll never run into a "403 Forbidden" bot-block error again. You can operate in either "speed" mode, which prioritizes leveraging Firecrawl and BrightData proxies to scrape a URL, or "cost" mode, which tries a plain local fetch first before falling back to the non-free options. If you don't want to fall back to those external services, just omit the API keys. And it's smart: it remembers what strategy worked, so the second scrape of a given target domain will be faster than the first. At this stage, we're still on an 0.x version: early adopters welcome as we rapidly iterate. You'll hear from us again when 1.0 is ready.
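To give a flavor of the "speed" versus "cost" behavior described above, here's a simplified sketch of the strategy-with-memory idea. It's our illustration, not the server's actual implementation, and `fetchLocally`, `scrapeWithFirecrawl`, and `scrapeWithBrightData` are hypothetical helpers:

```typescript
// Illustrative sketch of the fallback-with-memory idea; not Pulse Fetch's actual code.
type Strategy = "local" | "firecrawl" | "brightdata";

// Hypothetical helpers standing in for the real scraping integrations.
declare function fetchLocally(url: string): Promise<string>;
declare function scrapeWithFirecrawl(url: string): Promise<string>;
declare function scrapeWithBrightData(url: string): Promise<string>;

const lastWorkingStrategy = new Map<string, Strategy>(); // keyed by domain

async function scrape(url: string, mode: "speed" | "cost"): Promise<string> {
  const domain = new URL(url).hostname;

  // "speed" leads with the paid proxies; "cost" tries a plain fetch first.
  let order: Strategy[] =
    mode === "speed"
      ? ["firecrawl", "brightdata", "local"]
      : ["local", "firecrawl", "brightdata"];

  // Remember what worked last time for this domain and try it first.
  const remembered = lastWorkingStrategy.get(domain);
  if (remembered) {
    order = [remembered, ...order.filter((s) => s !== remembered)];
  }

  for (const strategy of order) {
    try {
      const result =
        strategy === "local" ? await fetchLocally(url)
        : strategy === "firecrawl" ? await scrapeWithFirecrawl(url)
        : await scrapeWithBrightData(url);
      lastWorkingStrategy.set(domain, strategy); // cache the winner
      return result;
    } catch {
      // e.g. a 403 from anti-bot protection: fall through to the next strategy
    }
  }
  throw new Error(`All strategies failed for ${url}`);
}
```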
.dxt Desktop Extensions (Thurs, June 26) by Anthropic
→ In another open source move, Anthropic is pushing to make local MCP servers easier to install by introducing the .dxt zip archive format. Claude Desktop is, of course, the only app initially compatible with .dxt, but the open source format means that other desktop MCP client apps can adopt this standardized one-click install experience as well. Check out this episode of The Context livestream to hear more about it. In the long run, we expect the .dxt ecosystem to integrate smoothly with the upcoming `server.json` work and official MCP registry mentioned above.
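As a rough illustration of what such a package has to declare (an approximation only - consult Anthropic's published spec for the real manifest fields), the archive essentially pairs the server's code with a small manifest describing how to launch it:

```typescript
// Approximation for illustration; see Anthropic's DXT documentation for the
// actual manifest schema and field names.
interface DesktopExtensionManifestSketch {
  name: string;         // human-readable extension name
  version: string;      // extension version
  description: string;  // what the bundled MCP server does
  server: {
    type: "node" | "python" | "binary"; // assumed runtime options
    entryPoint: string;                  // file inside the archive to execute
    env?: Record<string, string>;        // user-supplied config such as API keys
  };
}
```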
LM Studio (Wed, June 25) official MCP client
→ HackerNews loved this release. LM Studio is a leading desktop app for leveraging local models right on your computer, without hitting those expensive LLM API endpoints. Now, it comes with native MCP support. Local models have been getting better at tool calling, so if you've tried tool calling with local LLMs in the past and come away disappointed, it may be a good time to try again.
Systemprompt (Sat, June 21) official MCP client
→ We've featured Systemprompt before, way back in early January (!). But Edward has been hard at work: with his recent launch, his web-based MCP client now has official native apps on both iOS and Android. Control MCP entirely with your voice - it's a strong, MCP-focused alternative to the 11.ai launch we featured last week.
Container Use (#38 all time, #21 this week) MCP server by Dagger
→ This server slipped through the cracks for us, and we only recently added it to our directory. It's one of the more popular ones in the ecosystem: it facilitates a standalone, containerized environment for each of your coding agents. This is a safer way to wield your coding agent clusters than the run-everything-on-your-main-machine approach from our writeup, if you're looking for that level of assurance that your agents won't go haywire. Dagger has a nice playlist demonstrating the workflow with various coding agent clients, to give you a sense of how it might fit into your setup.
Domain Checker (Sun, June 29) MCP server by @rinadelph
→ An MCP server we wish we had back when we were deciding what to name PulseMCP: use this the next time you're running through domain name ideas for a new business or project. Not only does it do the obvious checks through WHOIS, but it also leverages a variety of open resources like DomainsDB and Snapnames to augment the discovery process. And all with no API keys required.
Browse all 300+ clients we've cataloged. See the most popular servers this week and other recently released servers.
A Few Good Links
→ A Northern California judge ruled that Anthropic's use of books to train Claude falls under "fair use." The ruling uses exactly the logic we've been pitching as a way forward for the past month: copyright can be enforced on outputs, not inputs. Indeed, the critical passage in the ruling is that "Users interacted only with the Claude service, which placed additional software between the user and the underlying LLM to ensure that no infringing output ever reached the users." Now, that's putting an awful lot of faith in said "software," which is presumably designed and self-verified by Anthropic. Can we be sure it's properly filtering not only "exact matches" but also outputs "similar and competitive" with the original work? The spirit of the ruling makes sense, but we're not yet sure about its practicality.
→ After much fanfare about the $10-100m size of the bonuses and salaries it was offering, Meta has seemingly finalized its list of highly-paid AI researchers, largely poached from OpenAI with a sprinkling from Google and Anthropic, and highlighted by leadership from Alexandr Wang (Scale) and Nat Friedman (GitHub). Will these large paychecks translate to productivity for Meta? Only time will tell, but we hope this brings a healthy dose of new competition into the AI infrastructure layer.
→ Cloudflare has declared today, July 1, Content Independence Day. This is the launch that CEO Matthew Prince has been teasing for the last few weeks: Cloudflare's solution to save the monetization model of the internet. The core principle they appear to be pursuing is the gating of content so that AI crawlers cannot liberally grab and serve it up. Their "first experiment": pay per crawl. While we think the direction of this model makes sense - we agree with the premise that there should be a "[third option] instead of a blanket block or uncompensated open access" - it's not quite the solution we wrote about in December. We think there's still a gap in flexibility within that third option: one that might eventually be better solved by a robust, more mature MCP ecosystem. Maybe Cloudflare will end up helping solve it that way, too.
→ A common, and often subtle, failure mode for agents that run for at least a few minutes is long context. And it's not just about hitting your chosen model's max context window: it's often that whatever noise has accumulated in your context is actively mis-steering your agent. Drew Breunig wrote a great piece explaining how contexts fail, and then a followup on mitigations against these failure modes. For example, the idea of "Context Poisoning" - when some error or hallucination makes it into your ongoing context and then gets repeatedly referenced - is a very concrete and helpful diagnosis for those times when your agent is going haywire and a "let's start from scratch" directive might mysteriously solve all your problems.
→ A new Claude Code feature has landed: hooks. The use cases are numerous. Use notifications to know when an agent in your cluster needs attention. Use logging to create auditable retros that evaluate how your agent performed on a task. Create custom permission checks that protect your particularly sensitive files. This should be a boon to those of us who were starting to build up very unwieldy CLAUDE.md files.
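As a quick sketch of how those use cases map onto configuration: hooks are wired up in Claude Code's settings JSON, mapping lifecycle events to shell commands. The object below is written as TypeScript for readability (serialize it as JSON into your settings file) and is only a minimal sketch - double-check event and field names against Anthropic's hooks documentation, and note that `guard-sensitive-paths.sh` is a hypothetical stand-in for your own permission check:

```typescript
// Minimal sketch of a hooks configuration. Serialize this object as JSON into
// your Claude Code settings (e.g. .claude/settings.json); verify event names
// and fields against the official hooks docs before relying on them.
const settings = {
  hooks: {
    // Get pinged when an agent is waiting on you (the notifications use case).
    Notification: [
      {
        hooks: [
          { type: "command", command: "notify-send 'Claude Code needs attention'" },
        ],
      },
    ],
    // Append every tool-call payload to a log for auditable retros.
    PostToolUse: [
      {
        matcher: ".*",
        hooks: [{ type: "command", command: "cat >> ~/.claude/tool-audit.jsonl" }],
      },
    ],
    // Run a custom check before edits; a hypothetical script that rejects
    // changes to particularly sensitive paths.
    PreToolUse: [
      {
        matcher: "Edit|Write",
        hooks: [{ type: "command", command: "~/bin/guard-sensitive-paths.sh" }],
      },
    ],
  },
};
```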
→ There's a growing negative narrative on social media that, despite all these claims of improved engineering productivity from coding agents, nobody is actually producing 10x better products. We think it's true that 10x better products are not (yet) being built, and we think Box CEO Aaron Levie's take on this AI adoption curve explains why: "When AI accelerates work in one area, you run into a bottleneck somewhere else." Engineers have found product-market fit with agentic coding. They (at least, the ones who choose to use the new tools) are becoming 10x more productive. But most engineers aren't the ones making product decisions and driving business strategy, and we don't suddenly have 10x more product managers capable of directing all that extra development capacity at the world's most visible companies and brands. It will take some time for the AI ecosystem to rebalance around this bottleneck.
→ On the subject of product managers: one gap we've been seeing in many companies' MCP products to date is a lack of staffing for MCP-building teams. Many seem content to ride the hype wave by assigning a solo engineer to "build our MCP server," launching it, and calling it a day. But we're starting to see much more thoughtful approaches to this staffing problem. For example, the Figma MCP server team is staffed with dedicated PM, marketing, design, and engineering talent. They've been taking a product-building lens to their process, regularly releasing new features as they take in new use cases and feedback. This has resulted in meaningful, ongoing in-product enhancements like annotations (giving additional design intent for behaviors or functionality in generated code) and Code Connect Snippets (surfacing the actual design system code as depicted in the design).
→ Anthropic put out a research report on their Project Vend - what happens when you give Claude ("Claudius") autonomous control over a physical vending machine with $1,000 in funding? Answer: an unprofitable business. Nonetheless, we had two notable takeaways from the report. The first is that it's a nice reminder that most of these foundation models were trained as assistants. As Anthropic pointed out, "Claude's underlying training as a helpful assistant made it far too willing to immediately accede to user requests (such as for discounts)". All the leading foundation models in use en masse today are trained as assistants: imagine a world where state-of-the-art models are retrained or regularly fine-tuned for significantly different roles, like a business-focused shopkeeper persona. The second is the insight that no amount of LLM pre-training could replace some basic missing tooling for Claudius, like a CRM. Lacking a CRM and other simple tools ultimately played a big role in Claudius's failed business venture. And to that, the solution is clearly MCP. That, plus "context engineering," is where most AI-related opportunity lies; not in waiting for a Claude 5 model release.
Cheers,
Mike and Tadas
Sign up for the weekly Pulse newsletter
Weekly digest of new & trending MCP apps, use cases, servers, resources and community developments.