Part III
Architect Your Agents
In our PulseMCP newsletter automation roadmap, we organized our work into six distinct buckets with clear boundaries between them. The Sourcer, Organizer, Drafter, Polisher, Publisher, and Sender can each operate independently of the others, each producing a specific output as long as it is provided with a specific set of inputs.
Here's how we envision them working together:
Sourcer
Inputs: List of GitHub sources we want to collate (modelcontextprotocol/registry Discussions, Issues, Pull Requests; modelcontextprotocol/inspector Pull Requests, etc.)
Outputs: Markdown files (one per GitHub page) covering the last week of updates, saved to a "data-dump" directory
Organizer
Inputs:
- Sourcer's Output
- Links the human saved throughout the week
Outputs: An organized markdown file, "links.md"
Drafter (Still Human)
Inputs: "links.md"
Outputs: "draft.md"
Polisher
Inputs: "draft.md"
Outputs: "draft.md" (polisher agent gives feedback without modifying; human might modify)
Publisher
Inputs: "draft.md"
Outputs: live post on pulsemcp.com
Sender (Still Human)
Inputs: "draft.md"
Outputs: draft campaign on email SaaS
Maximizing the number of explicit agent boundaries is key. As a human, you might do most of the "Sourcing" and "Organizing" at the same time: as you read through GitHub, you probably wouldn't copy/paste every page you read into a markdown file en route to creating "links.md". But in architecting our agents, we realized we could create a boundary by splitting the task in two. Sourcing results in markdown files; Organizing picks up where Sourcing left off. As a rule of thumb, smaller tasks with tighter boundaries beat longer, more complex ones. You'll see later how unnecessary complexity leads to frustration when things don't work quite right.
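To make the boundary idea concrete, here's a minimal sketch (in TypeScript, with illustrative names; this is not our actual implementation) of the pipeline as file-to-file contracts. The filesystem itself is the boundary: each agent only needs the files its predecessor left behind.

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Each agent is modeled as a function from files-on-disk to files-on-disk.
// The Organizer doesn't care how markdown landed in data-dump/, only
// that it's there.
type Agent = (workdir: string) => Promise<void>;

const sourcer: Agent = async (workdir) => {
  await fs.mkdir(path.join(workdir, "data-dump"), { recursive: true });
  // ...the agent writes one markdown file per GitHub page it reads...
};

const organizer: Agent = async (workdir) => {
  const dumps = await fs.readdir(path.join(workdir, "data-dump"));
  // ...the agent distills the dumps plus the human-saved links...
  await fs.writeFile(
    path.join(workdir, "links.md"),
    `<!-- organized from ${dumps.length} source files -->`
  );
};

// Running the pipeline is just sequencing the boundaries. The Drafter and
// Sender stay human, so the automation pauses at those hand-offs.
async function runWeekly(workdir: string) {
  await sourcer(workdir);
  await organizer(workdir);
}

runWeekly(process.cwd()).catch(console.error);
```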
At this point, each of your agents is still a black box. You may find that as you do this exercise, you'll want to revisit Part II and adjust your roadmap; some of the assumptions you made about inputs and outputs may not hold up as you get more specific here.
When you feel good about your agents' prospective inputs and outputs, you can dig into each "black box," one at a time. How exactly is each agent going to take its inputs and transform them into its outputs? This is where MCP comes in.
MCP Servers: The Connectors
MCP servers are the connectors that allow your Goose agents to interact with your computer and the internet. Here are the five servers we're using in our automation:
- Goose Developer MCP Server. Used by all four agents we built. Created by the Goose team, this is the server Goose users will reach for most often: it enables basic functionality like "read file", "search filesystem", and "save file to directory."
- Official GitHub MCP Server. Used by the Sourcer agent. We use this to pull down GitHub Discussions, Issues, and Pull Requests.
- Pulse Fetch MCP Server. Used by the Organizer agent. This is a free server we've built and maintain for the community that simply "fetches" the contents of a specific URL. It's capable of bypassing anti-bot technology and is optimized to save you LLM API costs by simplifying the content it returns.
- Official Tavily MCP Server. Used by the Organizer agent. This server helps your agent "search" the internet, much like a human would use a Google search.
- PulseMCP Admin CMS MCP Server. Used by the Publisher agent. This is an internal server we use for managing blog posts; you can think of it as our equivalent of a "WordPress CMS MCP Server".
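Under the hood, "attaching" one of these servers to an agent means spawning (or dialing) the server and exchanging JSON-RPC messages over the Model Context Protocol. Goose handles this for you, but here's a rough sketch of the mechanics using the TypeScript MCP SDK. The Docker invocation follows the official GitHub server's documentation as of this writing; check its README for the current command.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import {
  getDefaultEnvironment,
  StdioClientTransport,
} from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server as a subprocess and talk to it over stdio.
  const transport = new StdioClientTransport({
    command: "docker",
    args: [
      "run", "-i", "--rm",
      "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
      "ghcr.io/github/github-mcp-server",
    ],
    env: {
      ...getDefaultEnvironment(),
      GITHUB_PERSONAL_ACCESS_TOKEN:
        process.env.GITHUB_PERSONAL_ACCESS_TOKEN ?? "",
    },
  });

  const client = new Client({ name: "sourcer", version: "0.1.0" });
  await client.connect(transport); // the MCP initialize handshake happens here

  // After the handshake, the server's tools are available for the agent's
  // LLM to call; an MCP client like Goose manages that loop for you.
  console.log(client.getServerVersion());

  await client.close();
}

main().catch(console.error);
```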
Inside the Sourcer Agent
Sourcer: scours key data sources to catch what humans might miss
Input: List of GitHub sources we want to collate (modelcontextprotocol/registry Discussions, Issues, Pull Requests; modelcontextprotocol/inspector Pull Requests, etc.)
Process:
- Developer server creates a data-dump directory
- GitHub MCP server searches for the last week of updates in each source
- GitHub MCP server grabs the contents of each relevant update
- Developer server saves a summary of the contents and some metadata into a markdown file in data-dump
Output: Markdown files (one per GitHub page) covering the last week of updates, saved to the "data-dump" directory
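If you spelled the Process steps above out as explicit tool calls, one Sourcer pass might look roughly like this sketch. In practice the agent's LLM chooses the calls itself, and the tool names ("shell", "text_editor", "search_issues") and their arguments here are illustrative approximations of what the Developer and GitHub servers expose, not guaranteed signatures.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// One Sourcer pass, written out by hand. `developer` and `github` are
// already-connected MCP clients (see the connection sketch earlier).
async function sourcerPass(developer: Client, github: Client) {
  const since = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000)
    .toISOString()
    .slice(0, 10);

  // Step 1: Developer server creates the data-dump directory.
  await developer.callTool({
    name: "shell",
    arguments: { command: "mkdir -p data-dump" },
  });

  // Steps 2-3: GitHub server finds and fetches the last week of updates.
  const updates = await github.callTool({
    name: "search_issues",
    arguments: {
      query: `repo:modelcontextprotocol/registry updated:>=${since}`,
    },
  });

  // Step 4: Developer server saves a summary plus metadata per page.
  await developer.callTool({
    name: "text_editor",
    arguments: {
      command: "write",
      path: "data-dump/registry-issues.md",
      file_text:
        `# modelcontextprotocol/registry updates since ${since}\n\n` +
        JSON.stringify(updates, null, 2),
    },
  });
}
```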
Of course, the success of this operation hinges on the dependability of the Developer and GitHub MCP servers. That raises a natural question: how do I choose good MCP servers for my agents?
Choosing Quality MCP Servers
Tread carefully here. At the time of writing (August 2025), there are thousands of MCP servers available, but we estimate only hundreds are production-ready for serious workflows.
Here are some recommendations for how to select a good server as you search the directory of MCP servers. First, look for an official server for the connection you're trying to make. We consider an MCP server "official" if the company that runs the service you're connecting to is also on the hook for maintaining the server. So if you're trying to access GitHub data, the official GitHub MCP server is the way to go. There's no guarantee that an "official" server is the best-designed implementation at any given moment, but it usually is. More importantly, there is an economic incentive for the service to keep investing in its MCP server over time.
One particularly compelling reason to pick MCP servers whose maintainers are in it for the long haul: the agentic workflows you build will then self-improve over time, with no work needed on your part. MCP server design is still in its infancy.
Maintainers are constantly learning how to save you tokens, extend capabilities, fix bugs, and more. You can be sure that GitHub will still be around as a company in a few years, and will likely continue to produce a top-notch MCP server. The same can't be said of an indie developer whose MCP server is a hobby project abandoned a month ago.
In some cases, there is no "official" answer. For example, if you need a "server that fetches content from the internet" (i.e., Pulse Fetch), that's more a category of MCP server functionality than a connector to a specific service. For that, you'll have to rely on your own assessment of brand quality and reliability.
Next, check out estimated usage metrics. Across PulseMCP, we scrape data about every server to estimate its number of downloads. This is a pretty good heuristic for which MCP servers are popular. Although it favors established incumbents more than it rewards quality, you'll at least know a server is good enough to attract recurring usage if its download counts persist over time.
Lastly, consider assessing the server yourself, hands-on. This could mean simply reading its documentation and judging whether its Tools are well designed for your use case. It might mean checking how active the associated GitHub repository is. If worst comes to worst, you can fire it up, connect it to Goose or another MCP client, and see how it performs in a few test conversations.
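That last option is cheaper than it sounds. A short script can connect to a candidate server and dump its tool list, which tells you a lot about how thoughtfully it was designed. Here's a sketch, assuming the candidate ships as an npm package runnable via npx; the package name below is hypothetical, so substitute the server you're evaluating.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function smokeTest(command: string, args: string[]) {
  const client = new Client({ name: "smoke-test", version: "0.1.0" });
  await client.connect(new StdioClientTransport({ command, args }));

  // A well-designed server exposes a handful of clearly described tools,
  // not one thinly wrapped tool per REST endpoint.
  const { tools } = await client.listTools();
  for (const tool of tools) {
    console.log(`${tool.name}: ${tool.description ?? "(no description)"}`);
  }

  await client.close();
}

// Hypothetical package name; replace with the server you're evaluating.
smokeTest("npx", ["-y", "some-candidate-mcp-server"]).catch(console.error);
```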
The unfortunate reality is that for many use cases, there simply may not be a great server yet. We expect that to become uncommon within the next 6-12 months, but be aware of the limitation, and reconsider your roadmap if you get this far and realize a piece is missing.
If you're technical, consider building your own MCP servers to plug the gaps. That's what we've often done ourselves.
Part IV →