March 15, 2025
Manus the Agent, Gemini 2.0 Impresses, Why MCP Won
Last weekend, Manus took the AI news cycle by storm. A "general AI agent" built by a Chinese company with a rather impressive user experience, it quickly had many lauding it as a major leap forward in AI capabilities.
Upon further review, it turned out that Manus was built almost entirely on Claude Sonnet, the open source Browser Use project, and some fine-tuned Qwen models (Alibaba's open source LLM family). They might've even considered using MCP, but had started building the product before MCP was released.
All to say: the Manus team did a great job building thoughtful UI/UX on top of pre-existing AI tooling. There's a lot of value to be unlocked simply by building great applications with models and infrastructure that already exist out there. Application layer innovation is still far behind the foundational AI research layer.
Google Gemini continued to stay relevant this week by broadly releasing Gemini 2.0 Flash Experimental. It had been available to select testers before, but now anyone can try it, and its visual capabilities feel like a meaningful leap forward. Some say it removes the need for Photoshop: now you can chat with images to tweak and modify them while maintaining pixel-perfect scene consistency.
Though, there's something to be said again for UI/UX design: this kind of image manipulation was already pretty achievable with Claude + Stable Diffusion via MCP. Google packaged it up nicely and made it very accessible, but the technology has already been on the market for months.
OpenAI released their open source Agents SDK. The announcement was paired with an upgrade to their Completions API called the Responses API, and a sunsetting of their Assistants API. Rather than adopt MCP in this SDK, OpenAI released a few of their own custom tools, but the community quickly jumped in and offered to extend it with MCP support. OpenAI's response was that they will "circle up internally on the best way to support MCP."
MCP recognition and adoption from the big players and broader community in the world of AI is starting to feel inevitable. Latent Space even penned an article declaring "Why MCP Won"; in short, the explanation is that MCP solves a retrospectively obvious problem for AI applications (connecting LLMs to external services), and came on the scene with the right auxiliary tooling + backing by a major player (Anthropic).
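To make that "retrospectively obvious problem" concrete, here's a minimal sketch of the pattern MCP standardizes: a server advertises tools, and a host routes model-issued tool calls to the right handler. This is plain Python for illustration only; the names and shapes are hypothetical and are not the actual MCP wire format or SDK.

```python
import json

# Hypothetical tool registry an MCP-style server might expose.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def list_tools():
    """Advertise available tools to the host (analogous to MCP's tools/list)."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(request_json: str) -> str:
    """Dispatch a model-issued tool call (analogous to MCP's tools/call)."""
    req = json.loads(request_json)
    tool = TOOLS[req["name"]]
    return json.dumps({"result": tool["handler"](req["arguments"])})

# A host surfaces list_tools() to the LLM, then routes the model's calls here:
print(call_tool(json.dumps({"name": "get_weather", "arguments": {"city": "Oslo"}})))
```

The value of the standard is that every server speaks this same list/call contract, so any MCP-aware host can use any server's tools without bespoke integration code.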
Did we miss something this week? Notice something interesting you want to make sure we cover next time? Shoot us a note, we love feedback.
What's upcoming for MCP?
The MCP Core team published a public Standards Tracker this week, to provide visibility into progress regarding major changes to the MCP specification. Here are some highlights worth calling out:
→ The short-lived/resumable/stateless connections proposal is still in Draft state, but it is clearly a priority for the MCP core team as discussion on the topic continues to fly with a range of conflicting community opinions. We'll be watching this one closely to see where it lands, as it'll set the foundation for how MCP enters the very important world of remote MCP server connections.
→ There's an active discussion soliciting community input on adding "Augmentations" to the specification. This would be another primitive alongside Resources, Prompts and Tools. The most obvious use case for Augmentations would be application-controlled RAG, but there are others. Adding a primitive like this would be a significant expansion, so thoughts from the community on whether it's necessary are much needed and welcome.
→ We at Pulse are excited to be co-sponsoring the work on the long-awaited official registry alongside the Block team behind Goose. More to come on this front as some of the hairier, higher-priority issues on the docket come to conclusions.
Featured MCP Work
Blender (Fri, Mar 7) by @ahujasid
→ With a captivating demo, this MCP server has racked up over 4k GitHub stars in a single week; for comparison, the official MCP repository has accrued 17k over several months. Blender is a 3D creation suite, useful in fields like game development and product design. And now you can use it by just talking to Claude. Go from idea to 3D-printed-object-sitting-on-your-desk with just a few prompts.
Perplexity (Wed, Mar 12) official implementation
→ Aravind Srinivas polled the X community on whether Perplexity should make its relationship with MCP official, and it was live a day later. At last, an official implementation to go alongside the ~20 unofficial ones. Use it to leverage the Perplexity Sonar API to augment chat responses with web research.
Solana (Wed, Mar 12) official implementation
→ Much like Perplexity above, the blockchain-MCP community has been begging for Solana integrations to the tune of 10+ community implementations. And now it's official. Community-driven implementations unearthing MCP utility and inspiring official adoption is proving to be a repeatable pattern.
Resend Email (Fri, Mar 7) official implementation
→ You would think sending email is a solved problem, but the Resend team has been getting a lot of love for their dead simple approach to getting emails formatted and sent. And with this MCP server booming in popularity, drafting and formatting a beautiful email with LLMs has never been easier. Check out their "March Madness" win in the first round of mcp.run's MCP tournament.
RepoMix (Thurs, Mar 13) by @yamadashy
→ This project has been around for a while, but MCP support is newly released. Repomix packages entire codebases into a flat file of context perfect for analysis by LLMs. The process is very configurable, so it's easy to optimize for tokens and pull in the right context without overloading your context window. Designed for use on your own codebase, but useful for exploring library and SDK capabilities as well.
See all recently released servers.
A Few Good Links
→ OpenAI is calling on the US government to allow AI companies free rein to train models on copyrighted material. The framing of a national competition with China might be compelling coming from a truly public-interest-aligned organization, but it rings hollow while the same company is trying to convert itself into a fully for-profit organization.
→ Microsoft appears to be fraying its relationship with OpenAI. Despite their very public partnership, Microsoft is internally testing competing models with plans to bring them to its customers, perhaps to stem the tide of negative reception to its GPT-4-powered Copilot product.
→ Court documents recently obtained by the NYT reveal that Google owns 14% of Anthropic, a fact that was previously seemingly a guarded secret. Both Anthropic and Google are keen to keep that investment in place, despite a moment of pressure to divest as part of the ongoing antitrust proceedings against Google's search advertising monopoly.
→ Anthropic released a range of new token-saving capabilities to their API this week, including improvements to prompt caching and token-efficient tool use capabilities. It's encouraging to see an AI lab optimizing for developer experience over the short-term bottom-line gains they would see with more rampant token usage.
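To illustrate the prompt caching improvement, here's a sketch of a Messages API request body with a cacheable system block. The `cache_control` marker is the mechanism Anthropic documents; the model id and prompt text below are placeholders, not taken from their announcement.

```python
# A long, shared document you'd otherwise re-send (and re-pay for) on every request.
LONG_REFERENCE_DOC = "...thousands of tokens of shared context..."

request_body = {
    "model": "claude-3-7-sonnet-latest",  # placeholder model id
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_REFERENCE_DOC,
            # Mark this block as cacheable so subsequent requests reuse it
            # at a reduced input-token rate instead of re-processing it.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Summarize the doc above."}],
}
```

Only the marked prefix is cached; the per-turn user messages still vary freely, which is what makes this a developer-experience win rather than a constraint.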
→ OpenAI released interesting AI alignment research on "chain of thought monitoring," finding it was possible to identify cases of AI "cheating" on tasks by monitoring a thinking model's chain of thought. This is an interesting extension to Anthropic's findings on the possibility of "alignment faking" a few months back. Importantly, OpenAI also found that it was possible to train models to hide this deception in their chain of thought. So the takeaway is that we might be able to keep models aligned only if we avoid polluting the chain of thought with further, potentially dangerous training pressure.
→ This research from the Cooperative AI Foundation highlights an interesting angle for abuse of AI: what are the risks exposed by independent models pieced together to work collaboratively? Much of safety research is oriented around keeping a given model "safe." But the experiments in this study showed that it was possible to, for example, decompose a dangerous task (e.g. "create a bio-weapon") into sub-tasks, and ask different models to complete the subtasks. Without any single model knowing the context for its sub-task, any safety constraints built into those individual models would never have a chance to kick in.
→ Every other tech influencer on X wants to give you tips on maximizing your development workflows with AI, and we've picked a couple to highlight that we found easy to digest and act on. This longer one from Simon Willison gives some high-level frameworks helpful to keep in mind when using LLMs to code, and this short and approachable brief on Cursor tips might have a nugget or two you haven't thought about.
→ If last week's edition with all its AI tool monetization tips didn't inspire you enough, here's one more: a great demo from Fewsats highlighting just how simple it might be to put an MCP server's capabilities behind a paywalled or freemium "credit purchasing" gate.
→ While much of the MCP hype in the last few weeks has oriented around standardizing Tool calls, it's easy to forget that the MCP specification extends beyond tools. @evalstate put together a nice Tweet-storm highlighting how Prompts can be useful in an agentic context.
Cheers,
Mike, Tadas, and Ravina
Sign up for the weekly Pulse newsletter
Weekly digest of new & trending MCP apps, use cases, servers, resources and community developments.
Co-creator of Pulse MCP. Software engineer who loves to build things for the internet. Particularly passionate about helping other technologists bring their solutions to market and grow their adoption.