Part II
Create Your Automation Roadmap
As you move into planning your automations, remember this: your automation needs to meet your workflow where it's already at. Do not overhaul your workflow to make automation "easier." Do not replace software or ask stakeholders to change how they interact with you.
This matters because your existing workflow is already proven valuable. Small changes can create problems in surprising ways, or introduce friction that tanks your project. Later on you can change more variables; initially, your job is to keep every step the same and work surgically to delegate bits and pieces of it to your agents.
As far as your stakeholders are concerned, they shouldn't even know you've introduced automation.
So let's build a project plan: what are the specific steps you'll take to make progress on your automation, piece by piece?
The key here: sequence your roadmap so you are getting value at every step of the way.
Automation projects like this may take a long time. It's easy to get excited and bite off more than you can chew. Assume it will take you longer than you expect. Put yourself in a position to have small wins. Don't put yourself in a position where you're sitting on a half-done project you've spent 30 hours on, but have no productivity gains to show for it yet.
The simplest way to think about it is to sort each of your manual work items into one of three categories:
- Automate this now; it's not too complicated and would save me time
- Automate this later; it's either complicated or not particularly helpful to take off my plate
- Will never automate
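As a worked example, the triage above can be sketched as a small script that sorts work items into the three buckets. The task names, complexity flags, and time-savings thresholds here are hypothetical illustrations, not a prescribed rubric:

```python
# Triage manual work items into the three roadmap buckets.
# The tasks, flags, and 0.5-hour threshold below are made-up examples.

def triage(task):
    """Return 'now', 'later', or 'never' for a work item."""
    if task["core_value"]:  # human-connection / trust work stays human
        return "never"
    if task["complex"] or task["hours_saved_per_week"] < 0.5:
        return "later"
    return "now"

tasks = [
    {"name": "collate saved links", "core_value": False, "complex": False, "hours_saved_per_week": 1.0},
    {"name": "write blurbs",        "core_value": True,  "complex": True,  "hours_saved_per_week": 3.0},
    {"name": "automate X sourcing", "core_value": False, "complex": True,  "hours_saved_per_week": 2.0},
]

roadmap = {t["name"]: triage(t) for t in tasks}
print(roadmap)
```

The point of encoding it this way is that the "never" check comes first: if a task is part of your core value proposition, its complexity and time savings are irrelevant.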
The last category is tricky. Let's talk about it.
To automate or not to automate? It's about the human connection
Many specific functions we do in our jobs today will eventually be automated away by AI. But one type of function we expect to stand the test of time: those involving human connection.
We pay big money to watch NBA players entertain us, and it's important to us that they're not on performance enhancing drugs. We expect our doctors to have good bedside manner, and getting a cancer diagnosis via email will never be acceptable. Those physical world examples are fairly obvious. What's the equivalent for the digital world?
The answer: trust.
You don't believe everything you read online. Increasingly, you only trust what certain influencers have to say, or the writing of teams and companies with a storied history of reliable reporting and a track record you can trust is free of bias and misaligned incentives. It all boils down to trust: in an individual, or in a brand.
Examples of where trust manifests:
- Delivering a code review to your colleague that they trust is a representation of your best engineering capabilities
- Producing art that consumers trust was inspired by a lived human experience
- Writing newsletter content that your readership trusts is representative of human leaders and decision makers in your niche
Importantly, it's not "all or nothing." You can deliver great newsletter content by having AI collate candidate information, and then use your human expertise to filter and curate it. You can produce art by coming up with an idea, then using AI to put pen to paper.
But there are also some technical realities holding back AI agents today
While we believe the notion of human-to-human "trust" will endure indefinitely, the on-the-ground reality is that today there are a few more "exclusion criteria" for automation opportunities than there may be even a year or two from now.
Is precision important? LLMs are inherently nondeterministic. Even if they're right 99 times out of 100, is that good enough? When sourcing news, missing 1 out of 100 stories is acceptable. But publishing even 1 incorrect blurb out of 100? Not acceptable. That's why we won't be automating final proofreading anytime soon.
Where are there technical integration gatekeepers? Not every external software will take kindly to your attempt to insert an agent in your stead. DoorDash makes a lot of money off of running ads on their UI, so they won't want you programmatically dodging them. Using automation on Craigslist is a violation of their ToS, because that's how they combat spam. In our case, our SaaS email provider does not expose a REST API and has indicated to us that automating their web UI is a violation of their ToS. Rather than fight that, we're migrating over to an alternative where no such misalignment exists.
Who has a good MCP server available? Even if there is no gatekeeper per se, the unfortunate reality is that most of the thousands of MCP servers out there today are poorly designed. If you're a product builder, you can remedy this for yourself by implementing the gaps: we're doing so for several of our SaaS tools. If you're not a builder, then the reality is that you'll want to ensure there's a popular, production-ready MCP server out there for the system(s) your workflow requires you to integrate with.
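One lightweight way to vet an MCP server is to see which tools it actually exposes. Per the Model Context Protocol specification, clients discover tools with a JSON-RPC 2.0 `tools/list` request. This sketch only constructs and inspects the messages; the transport and the example tool name are assumptions, not a real server exchange:

```python
import json

# Per the MCP spec, tool discovery is a JSON-RPC 2.0 request with
# method "tools/list". Building the message is transport-agnostic;
# actually sending it (stdio, HTTP, etc.) depends on the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}
wire_message = json.dumps(request)

# A conforming server replies with a "tools" array; each entry carries
# a name, description, and a JSON Schema for its inputs. This response
# is a hand-written example, not output from a real server.
example_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_repositories",  # hypothetical tool name
                "description": "Search GitHub repositories",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            }
        ]
    },
}

tool_names = [t["name"] for t in example_response["result"]["tools"]]
print(tool_names)
```

Skimming the returned tool names and input schemas like this is a fast proxy for "is this server production-ready for my workflow," before you invest any roadmap time in it.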
Finalizing your Roadmap
Let's revisit our target categorizations:
- Automate this now; it's not too complicated and would save me time
- Automate this later; it's either complicated or not particularly helpful to take off my plate
- Will never automate; this is part of my work's core value proposition
Earlier, we presented what the automation opportunity looked like for us:
Sourcing + Sourcer Agent
Tadas still does:
- Save interesting links throughout the week across all platforms
Sourcer Agent does:
- Weekly platform reviews (GitHub complete; Discord, Reddit, HackerNews, X coming soon)
- Check Google News, PulseMCP rankings, new servers
- Read all stories and annotate what they are and why they're interesting
Organizing + Organizer Agent
Tadas still does:
- Cut less interesting stories
- Choose headliner topics
Organizer Agent does:
- Compile saved links into document
- Find canonical links, deduplicate, categorize
- Research context, review past editions for threads and prior mentions
Drafting + Drafting Agent
Tadas still does:
- Finalize story order/arc
- Write first draft of blurbs while re-reading links
Drafter Agent does:
- Pull in server/client stats (e.g. Top #X this week) and PulseMCP links
Polishing + Polishing Agent
Tadas still does:
- Add bold emphasis where appropriate
Polisher Agent does:
- Run typo check
- Verify all links are correct
Publishing + Publishing Agent
Tadas still does:
- (none)
Publisher Agent does:
- Generate og:image for newsletter
- Format and push to CMS
- Run quality checks
- Final proofread
Sending + Sender Agent
Tadas still does:
- Final proofread
Sender Agent does:
- Set up email campaign
- Format content
- Send preview
- Final checks and send/schedule
That's the end-state. More specifically, our "roadmap" initially looked like this:
1. Sourcer
- Extract interesting links from a second review of just GitHub (for now)
- Read all the stories & make notes about what they are and/or why they might be interesting
2. Organizer
- Pull together saved links into a document
- Identify at least one canonical link if we don't already have one
- Deduplicate / consolidate similar or related stories
- Decide which section(s) uncategorized items go into
Drafter: keep this human
3. Polisher
- Run a typo check
- Run a check on whether links are correct
4. Publisher
- Format for CMS
- Push to CMS
- Final proofread
Sender: keep this human
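To give a flavor of the Polisher's link check, here is a minimal offline sketch that flags malformed URLs in a markdown draft. The draft text and validation rules are simplified assumptions; a real check would also fetch each URL to confirm it resolves:

```python
import re
from urllib.parse import urlparse

def find_bad_links(markdown_text):
    """Return (label, url) pairs whose URLs look malformed.
    Offline structural check only; does not fetch anything."""
    bad = []
    # Match [label](url) pairs; a real parser handles more edge cases.
    for label, url in re.findall(r"\[([^\]]+)\]\(([^)]+)\)", markdown_text):
        parts = urlparse(url)
        if parts.scheme not in ("http", "https") or not parts.netloc:
            bad.append((label, url))
    return bad

draft = (
    "See [PulseMCP](https://www.pulsemcp.com) and "
    "[broken link](htp://example), plus [relative](/servers)."
)
print(find_bad_links(draft))
```

Because the LLM is nondeterministic, a deterministic check like this is a useful backstop: the agent can propose fixes, but the pass/fail signal comes from plain code.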
Let's explain a few decisions we made on the way to finalizing this initial roadmap.
We started with a slice of the Sourcer. Even though our manual process involves combing through Reddit, X.com, Discord, and other sources, we began by automating just the GitHub portion of it. We chose GitHub because we know there is a strong, officially supported GitHub MCP server; GitHub does not gate data access the way Reddit or X.com might; and GitHub is a data source where it is increasingly hard to filter signal from noise (unlike the social algorithms powering Reddit and X.com). GitHub should be one of the easier automations to crack, and we could then copy our solution and iterate on it to work for the other sources later.
We skipped implementing the Drafter. The drafting stage of our process is where our human-ness shines. The only part we could even conceivably automate is some simple formatting work we do as we finalize our features section every week, which doesn't take a whole lot of time. So we chose to skip investing in this for now.
Publisher had clear value; we just skipped some messy parts. The rote task of translating our markdown-formatted writing into our HTML-formatted CMS, and then again into our email sending system, was always ripe for streamlining. Some pieces of that process are deceptively tricky, though: our work in Figma to create a new post thumbnail for each edition takes only a few seconds, but is actually difficult to automate due to a lack of capable MCP servers. So we skipped that for now, but we have in mind an "HTML to Image" MCP server with which we'll bridge that gap in the future.
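The markdown-to-HTML translation step can be illustrated with a deliberately tiny converter. This is a toy handling only bold text and links; a real pipeline would use a full markdown library and the CMS's actual API, neither of which is shown here:

```python
import re

def md_to_html(md):
    """Convert a small subset of markdown (bold, links) to HTML.
    A toy illustration; production code would use a real markdown parser."""
    html = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", md)
    html = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', html)
    return "<p>" + html + "</p>"

blurb = "**New this week:** see [the rankings](https://www.pulsemcp.com)."
print(md_to_html(blurb))
```

Even this toy version shows why the step is "rote but ripe": the transformation is fully mechanical, so handing it to an agent changes nothing a reader would notice.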
Our newsletter software doesn't offer proper API access, so we're deferring the Sender. Although sending out the newsletter is a clunky process for us (some copy/pasting, rote clicking around an interface, and tedious reformatting), there wasn't a clear path to giving Goose access to the software we were using. Because it does not provide the right API access, we could not stand up a good MCP server. Our plan is to migrate to a more programmatic-friendly solution, but that nontrivial migration means we are deferring the Sender work from our initial roadmap.
We were able to make some of these decisions up front because we happen to be up-to-date with the state/quality of various MCP servers and related technical nuances. If you're not as in-tune with the ecosystem, there's no problem with just taking your full list from Part 1, proceeding to Part 3, and then coming back to Part 2 if you find that some of your assumptions about the availability of quality MCP servers aren't (yet) bearing out, or you find other logical gaps in how your process can be recreated by a series of agents.