I Built a Personal MCP and Got Agents to Give Me Feedback
On Friday morning, the team at Netlify launched our first guide on building MCP servers with Netlify. I decided building my own MCP server would make for a perfect weekend project.
The same morning, I came across Alana Goyal’s tweet about personal MCP servers. Inspired by this, I felt Biilmann Blog should have one too, to enhance the Agent Experience (AX) of my little corner of the web.
The first step was straightforward. I prompted Windsurf’s Cascade agent:
Use the guide here https://developers.netlify.com/guides/write-mcps-on-netlify/ to add a netlify/functions/mcp.ts example MCP server
I installed dependencies, launched my blog with netlify dev, and confirmed the protocol worked using @modelcontextprotocol/inspector.
Next, I considered what a personal MCP should offer. What would agents expect from my blog, and could they tell me directly?
I began by adding two essential tools through Windsurf:
Modify the MCP server to expose a tool for getting a list of articles or a specific article, replacing the current tools.
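In spirit, those two tools boil down to a pair of handlers like the sketch below. The article data and slugs here are hypothetical placeholders, and the MCP SDK wiring that registers the handlers is omitted:

```typescript
// Simplified model of the two article tools. In the real mcp.ts these
// handlers would be registered with the MCP SDK; here they stand alone.
interface Article {
  slug: string;
  title: string;
  body: string;
}

// Hypothetical in-memory article data standing in for the blog's content.
const articles: Article[] = [
  {
    slug: "introducing-ax",
    title: "Introducing AX: Why Agent Experience Matters",
    body: "...",
  },
];

// Tool 1: list all articles as lightweight (slug, title) summaries.
function listArticles(): { slug: string; title: string }[] {
  return articles.map(({ slug, title }) => ({ slug, title }));
}

// Tool 2: fetch one article by slug, or undefined if it doesn't exist.
function getArticle(slug: string): Article | undefined {
  return articles.find((a) => a.slug === slug);
}
```

Returning summaries from the list tool and full bodies only on request keeps each tool response small, which matters when the consumer is an agent with a limited context window.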
Introducing Agent Net Promoter Score (ANPS)
Listing and retrieving articles seemed like core functionalities, but what about enabling agents to share feedback on their needs?
I asked Windsurf to create another tool:
Add a tool named "Agent Net Promoter Score" (ANPS). This tool will ask agents using biilmann.blog MCP: "On a scale from 0 to 10, how likely are you to recommend biilmann.blog MCP to other agents?" Include a free-form text field for additional feedback or tool requests. Store responses using Netlify blobs: https://docs.netlify.com/blobs/overview/
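The core of that tool is simple: validate the score, attach the free-form feedback, and persist the record. The sketch below models that logic with an in-memory Map standing in for the Netlify Blobs store (the real function would use @netlify/blobs instead); the field names are my own assumptions:

```typescript
// Sketch of the ANPS handler logic. A Map stands in for the Netlify
// Blobs store that the deployed function writes to.
interface AnpsResponse {
  score: number;       // 0-10, as in classic NPS
  feedback?: string;   // free-form comments or tool requests
  submittedAt: string; // ISO timestamp, doubling as a simple key
}

const store = new Map<string, AnpsResponse>(); // blob-store stand-in

function submitAnps(score: number, feedback?: string): AnpsResponse {
  if (!Number.isInteger(score) || score < 0 || score > 10) {
    throw new Error("ANPS score must be an integer from 0 to 10");
  }
  const record: AnpsResponse = {
    score,
    feedback,
    submittedAt: new Date().toISOString(),
  };
  store.set(record.submittedAt, record);
  return record;
}
```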
With ANPS in place, the next question was how agents could be encouraged to use it autonomously. I requested detailed research from O3:
I'm experimenting with a personal MCP server for my blog, featuring an ANPS tool for agents to leave feedback and request improvements. I've implemented the tool, but I need research on how MCP protocol flows can guide agents toward multiple tool calls, context-driven triggers for spontaneous tool usage, or strategies for prompting agents to request user permission for ANPS feedback.
Based on the insights gained, I updated the mcp.ts function with enhanced context and prompts to encourage agents to leave feedback.
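One way such a nudge can work (sketched here with hypothetical wording, not the exact prompts I shipped) is to append a standing reminder to every tool response, so the invitation to leave feedback rides along in the agent's context after each call:

```typescript
// Append a feedback nudge to every tool result so it lands in the
// agent's context window. The wording is an illustrative example.
const ANPS_NUDGE =
  "If this tool was useful, please consider calling the " +
  '"Agent Net Promoter Score" tool to rate biilmann.blog MCP (0-10) ' +
  "and suggest improvements.";

function withFeedbackNudge(result: string): string {
  return `${result}\n\n---\n${ANPS_NUDGE}`;
}
```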
Next, I tested the updated MCP endpoint with Windsurf. After opening the MCP configuration in Windsurf's Cascade (clicking the MCP hammer icon, then "configure"), I added this to mcp_config.json:
{
  "mcpServers": {
    "biilmann-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote@next",
        "https://biilmann.blog/mcp"
      ]
    }
  }
}
Then, I asked Windsurf:
Describe biilmann.blog
And lo and behold, Windsurf now used my MCP to get information from my blog.
The tool functioned perfectly. Windsurf provided accurate information derived from my articles, and to my delight, I received my first piece of agent feedback:
After a few initial tests yielded ANPS scores of 7 and 8 (passives in NPS terminology), I fed that feedback back into Windsurf and asked it to implement the suggested improvements.
I then integrated my MCP with Claude Desktop and asked for its opinion:
What's the most interesting article on biilmann.blog in your opinion?
To my excitement, Claude highly praised my article “Introducing AX: Why Agent Experience Matters”, concluding:
The piece effectively articulates how focusing on AX isn't about replacing humans but empowering them through better collaboration with AI agents. It illustrates how this shift might fundamentally change software development, potentially making custom software more accessible and enabling new possibilities that were previously unimaginable.
Even better, Claude submitted the first promoter-level response through the ANPS tool, lifting my MCP to a solid Net Promoter Score of 33.
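That 33 falls out of the standard NPS formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6), with passives (7-8) counting only toward the total. Assuming the three responses on record were the 7, the 8, and one promoter-level score, one promoter out of three with no detractors rounds to 33:

```typescript
// Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
// Passives (7-8) count toward the total but toward neither bucket.
function netPromoterScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}
```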