Towards the end of 2025, I shifted from exclusively chatting with AI chatbots to fully delegating tasks to AI agents, once again transforming the way I work as a PM. My days have gone from context-switching chaos to a system that accumulates knowledge and runs workflows autonomously. Today, instead of frantically skimming Slack before my first meeting of the day begins, I start my day with a detailed briefing that already knows my action items, my calendar, and what happened while I was sleeping. This post describes the tools I’m using to achieve this.
The Chat Era
I was an early adopter of generative AI, incorporating ChatGPT into my daily workflows immediately after the product launched. This provided tremendous value in my day-to-day, but it also established habits that became difficult to break. For years, I used ChatGPT primarily in the browser, sometimes jumping to the desktop app for its Notion integration, or to the mobile app to check my daily pulse. It was annoying that each application had a different feature set, but it was workable. I had custom GPTs for specific purposes, and I tried to give them as much context as possible around the problem and the task to be performed.
This worked well enough for specific tasks. For example, I could give a PRD Generator GPT a product proposal doc, some meeting notes, and some simple instructions and have it spit back a pretty decent requirements document. The challenge was that each chat started from scratch. If I needed to make changes to a PRD based on new findings, or kick off a new project that continued where another had left off, I had to provide all of the context again, a time-consuming and often tedious task.
While I didn’t realize it at the time, these custom GPTs introduced a lot of overhead. It took a lot of preparation to get the results I needed, and I had to track the context of all of my work separately and keep it constantly updated so I could provide it on demand.
All of the effort that went into customizing my GPTs and tweaking my prompts all but locked me into ChatGPT. I had heard great things about Claude and Gemini in 2025, and while I tried them both, I wasn’t compelled to switch. That is, until I discovered Claude Code. Claude Code is Anthropic’s command-line interface for Claude. Unlike browser-based chatbots, it runs in your terminal and has direct access to your file system, allowing it to read, write, and execute code within any project directory.
Moving Beyond Chat
In the second half of 2025, Claude Code was gaining popularity online. For anyone following AI closely, it was becoming very hard to ignore. Sometime in October I decided to install it and give it a try. Using the out-of-the-box config, I ran Claude Code on a complicated backend repository and asked it to explain how a random feature worked. I specifically chose a feature I had discussed with an engineer just a week prior over the course of a one-hour meeting, so it was fresh in my mind and I understood the complexities and nuances. I was immediately impressed with how well Claude Code explained the feature, effectively summarizing the meeting I had just had with the engineer. I began thinking about how I could use this tool to better understand system constraints or to improve our API documentation. In other words, I saw Claude Code as a developer tool, not as general-purpose agent infrastructure.
That framing was wrong. What I failed to realize at the time was that Claude Code had all the components necessary to be a general-purpose agent harness. When I discovered Daniel Miessler’s Personal AI Infrastructure (PAI) project through the Unsupervised Learning newsletter, a lightbulb went off. It quickly became clear that Claude Code could be configured as a generalized digital assistant capable of handling specific, repetitive tasks while retaining all of the context of the problems I was trying to solve. For the first time since I began incorporating AI into my daily workflows, I experienced a massive leap forward in productivity, rather than the incremental improvements I’d been getting from chat interfaces.
How It Works
The main benefit of using Claude Code is that it is context-aware. It has access to all of the files in a given file tree, as well as the files you store in its default configuration folder. This allows you to create custom context files outlining your goals, challenges, domain knowledge, writing templates, specific rules or instructions, and anything else you want Claude to remember. I highly recommend you install the PAI project from Daniel Miessler and modify it to fit your needs. While the project is mostly geared towards security professionals, it was easy for me to tweak it to fit the needs of a PM.
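As a rough illustration, here’s the kind of layout I mean. None of these file names are required by Claude Code or PAI (the user-level folder is typically ~/.claude); they’re just how I happen to organize my own context:

```
~/.claude/
├── CLAUDE.md            # global instructions loaded at the start of every session
└── context/
    ├── goals.md         # current quarter goals and success metrics
    ├── domain.md        # product and industry background
    ├── rules.md         # hard rules, e.g. never write back to Linear or Slack
    └── templates/
        └── prd.md       # my preferred PRD structure
```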
The benefit of using PAI is that it provides templates for creating new skills and agents (if you’re not familiar with skills, I recommend reading about them here). In other words, you tell it what you want, and it will build those custom skills for you. It also includes guardrails that force Claude Code to review all relevant context before running any command, so you can prevent it from running destructive commands (like rm -rf) or, in my case, from ever pushing updates back to Linear or posting messages in Slack.
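To give a sense of what a skill actually is under the hood: it’s mostly a short markdown file with a bit of frontmatter telling Claude when to use it. The sketch below is illustrative rather than PAI’s exact template (the skill name, the steps, and the file location, something like .claude/skills/prd-generator/SKILL.md, are my own), but the shape is representative:

```
---
name: prd-generator
description: Draft or update a PRD from a proposal doc and recent meeting
  notes. Use when I ask for a new PRD or for changes to an existing one.
---

1. Read my PRD template and current goals from the context folder.
2. Pull relevant meeting transcripts from the last two weeks.
3. Write the draft to a local file and list any open questions at the top.
4. Never push updates to Linear or post to Slack; everything stays local.
```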
This system allows you to shift away from singular, task-oriented chats with AI chatbots to delegating multi-step workflows to an AI agent that runs them autonomously. The deep context, access to history, and layers of instructions reduce the need to repeat yourself or provide pages of background information each time you start a new conversation. The scaffolding provided by PAI makes it possible to define your preferences and the desired results in a repeatable way. And since this all runs through Claude Code, changing the setup, like adding new skills or workflows, is as simple as describing the desired result to the AI.
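In practice, “describing the desired result” is all there is to it. A new skill typically starts as a plain-language request in a Claude Code session, along these lines (the skill name here is hypothetical):

```
Create a new skill called weekly-status. Every Friday it should draft my
status update from this week's meeting transcripts and Linear activity,
following my status template. It must never post anywhere; just write the
draft to my drafts folder. Use the same structure as my existing skills.
```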
The Payoff
I have many use cases, and I’m adding new ones each day, so I’ll discuss several here that I come back to regularly. These are repetitive tasks that I often struggled to complete consistently before adopting this personal AI infrastructure.
Daily Brief
My team is mostly based in Europe, which means I start my day five hours behind most of them. This makes mornings challenging, often requiring me to quickly skim a lot of different tools to get a high-level overview of everything that has happened before I go into my first meeting. To address this, I created an agent that takes care of all of this for me. It checks my calendar to review my meeting schedule, reviews my recent meeting notes and flags my pending action items, connects to Linear to review all open tickets and retrieve new comments, status updates, and dependency changes, and finally checks several specific Slack channels, threads, and mentions. It then outputs a summary of what has changed since yesterday, my top priorities for the day, any action items that need to be completed or are overdue, and the Slack messages I should read first. It’s truly like having a personal assistant who hands me a daily briefing as I sit down at my desk. For this skill to work well, it requires connections to my notes (Notion MCP), my Linear projects (Linear MCP), my Slack workspace (Slack MCP), and the transcripts from all my recent meetings. This is where the next skill comes in.
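The skill behind this is essentially a checklist. Here’s a simplified sketch of its workflow section; the real one has more detail, and the exact tools it calls depend on which MCP servers you’ve connected:

```
Daily Brief workflow (read-only: never post or update anything)
1. Calendar: list today's meetings and flag anything that needs prep.
2. Notes (Notion MCP): scan recent meeting notes for my open action items.
3. Linear MCP: for my active projects, collect new comments, status updates,
   and dependency changes since yesterday.
4. Slack MCP: check the listed channels, plus my threads and mentions, since
   my last working day.
5. Output: what changed, top priorities for today, overdue action items, and
   the Slack messages to read first.
```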
Meeting Memory
I’ve been capturing meeting transcripts for quite some time now, but I’ve always found it somewhat cumbersome to go back through them and find what I need. To address this, I’ve created a skill that syncs all of my meeting transcripts into a folder, keeping the previous two weeks in /context and pushing those older than two weeks into /history. This allows Claude to load all of my recent meetings into context when I start a session, while keeping older transcripts available on demand. There are several benefits to having these transcripts accessible to my assistant. Not only can I be more present in the meeting, rather than frantically scratching down notes, but I can also revisit discussions and decisions without any manual overhead. Previously, I would have had to search through my notes to find a specific task or decision, and hope I had captured enough context. If not, I would need to find the transcript of that call and search through it. Now, I can simply ask my digital assistant “What did Alice say about XYZ last week in our sync?” and get an answer back immediately. Building on this, I’ve created a skill that extracts every decision made in a meeting and updates the corresponding decision records.
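The rotation between /context and /history is simple enough that Claude generated it in one pass. Here’s a minimal sketch in Python, assuming markdown transcripts and my own folder locations (the paths and the two-week cutoff are my choices, not anything PAI or Claude Code requires):

```python
from datetime import datetime, timedelta
from pathlib import Path
import shutil

# Illustrative paths: recent transcripts live where Claude loads them each
# session; older ones move to a history folder it can search on demand.
CONTEXT = Path.home() / ".claude" / "context" / "transcripts"
HISTORY = Path.home() / ".claude" / "history" / "transcripts"
CUTOFF = datetime.now() - timedelta(weeks=2)

def rotate_transcripts() -> None:
    """Move transcripts older than two weeks from context/ to history/."""
    HISTORY.mkdir(parents=True, exist_ok=True)
    for transcript in CONTEXT.glob("*.md"):
        modified = datetime.fromtimestamp(transcript.stat().st_mtime)
        if modified < CUTOFF:
            shutil.move(str(transcript), str(HISTORY / transcript.name))

if __name__ == "__main__":
    rotate_transcripts()
```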
Parallel Research
So far, I’ve only really discussed offloading process work to an agent to reduce manual, repetitive effort. This is by far the lowest-hanging fruit for me. It helps me be more efficient with my time, so it’s a reasonable place to start. But the power of agents really starts to show when you begin delegating research to them. Using MCP servers like BrightData, Playwright, and Apify, it becomes trivial to conduct research across multiple sources simultaneously. For example, I was recently working on a feature that needed to comply with specific regulations, but I only had a general understanding of what they specified. Instead of spending time finding the documentation and reading through pages of legalese to find the single article that applied to what I was building, I passed the task to my research agent. I was able to move on to other sections of the product requirements while the agent found the source documents and the specific article that applied.
Where It Goes
I’ve seen significant productivity gains in the last several months, and I’m only scratching the surface. Many people are discovering the potential of these tools through practical use cases, and adoption will only keep accelerating. Just last week, Anthropic released Claude Cowork, which is essentially Claude Code for those who don’t want to work in the terminal, and it’s been getting good reviews so far.
The biggest shift for me has been mental, not technical. I’ve stopped thinking about “how do I write a prompt to get the result I want?” and started asking “what work would I delegate if I had a capable assistant?”. This reframe has fundamentally changed the way I approach incorporating AI into my daily workflows.
Agentic AI is not going to book me a flight or order me lunch. It’s going to handle my tedious, time-consuming daily tasks, reducing the operational overhead that eats away at my time. And that’s far more valuable than a single clever prompt.
If you’re curious to try this yourself, start with Claude Code and Daniel Miessler’s PAI. You don’t need to build everything at once. Just pick one repetitive task that annoys you, automate it, and go from there.