<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Golang on my.pgnd.dev</title><link>https://my.pgnd.dev/tags/golang/</link><description>Recent content in Golang on my.pgnd.dev</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 22 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://my.pgnd.dev/tags/golang/feed.xml" rel="self" type="application/rss+xml"/><item><title>Winston</title><link>https://my.pgnd.dev/posts/winston/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://my.pgnd.dev/posts/winston/</guid><description>&lt;p&gt;It&amp;rsquo;s cheaper to buy more hardware than optimize your code. I have heard that a lot in my career. While I generally agree with it, it is also based on the premise that optimization and fine tuning are hard&amp;hellip; but they don&amp;rsquo;t always have to be, do they?&lt;/p&gt;
&lt;p&gt;I have definitely seen incidents, outages, and cost spikes caused by poorly configured deployments. While the value of having perfect resource requirements is often low, especially in the earlier stages of a company, misconfigurations tend to compound: a pod claims too much memory, often hiding a leak that memory limits could have caught early; a greedy pod hogs all the CPU; a critical pod gets evicted under resource pressure.&lt;/p&gt;
&lt;p&gt;Running my Raspberry Pis, I ran into similar issues. Nodes saturate and the poorly configured pods pay the price: an LLM hogs all the resources, the sword of eviction falls, and Kubernetes decides to evict my nginx servers&amp;hellip; tough luck.&lt;/p&gt;
&lt;p&gt;In a typical prod system, you have metrics and other systems that help flag bad resource usage. When I asked my agent to review my resource configuration, it definitely pointed out issues. It becomes a cycle of reviewing pod specs, finding resource usage trends, adjusting and validating. And as applications evolve, you have to repeat the exercise regularly to ensure the configs are still valid.&lt;/p&gt;
&lt;p&gt;There must be a better way.&lt;/p&gt;
&lt;p&gt;Facing bad resource usage, there are a few obvious checks: identifying which workloads are missing resource requests or limits, which are asking for too much, and which are constantly at the edge.&lt;/p&gt;
&lt;p&gt;I didn&amp;rsquo;t want to figure out how to feed my entire stack to my agent. I also didn&amp;rsquo;t want to blow up my usage by having the agent burn countless cycles figuring out how to query the data. And I didn&amp;rsquo;t want to deploy some heavyweight solution that does this but also a whole bunch of other things I don&amp;rsquo;t care about at the moment.&lt;/p&gt;
&lt;p&gt;So I wrote Winston: a small pod, written in Go, backed by a SQLite database persisted on a small PVC. It polls the resource metrics every minute, aggregates them, and runs checks to find exuberant pods:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;missing limits: no boundary set; pod can consume unbounded resources.&lt;/li&gt;
&lt;li&gt;missing request: no baseline declared; results in lower QoS and higher eviction risk.&lt;/li&gt;
&lt;li&gt;danger zone: running at 90% of its limits; pod is likely underprovisioned.&lt;/li&gt;
&lt;li&gt;over provisioned: running at less than 20% of requests; wasteful resource allocation.&lt;/li&gt;
&lt;li&gt;ghost limit: running at less than 10% of limits; limit provides little safety value.&lt;/li&gt;
&lt;/ul&gt;
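&lt;p&gt;As a sketch, the CPU side of the checks above fits in a few lines of Go. The &lt;code&gt;PodUsage&lt;/code&gt; struct and thresholds here are illustrative, not Winston&amp;rsquo;s actual types:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;// Illustrative thresholds for the checks above; not Winston&amp;#39;s actual values.
const (
    dangerZone      = 0.90 // usage vs limit
    ghostLimit      = 0.10 // usage vs limit
    overProvisioned = 0.20 // usage vs request
)

// PodUsage is a hypothetical aggregate; 0 means the request or limit is unset.
type PodUsage struct {
    Name       string
    CPUUsage   float64
    CPURequest float64
    CPULimit   float64
}

func checkCPU(p PodUsage) []string {
    var findings []string
    if p.CPULimit == 0 {
        findings = append(findings, &amp;#34;missing limits&amp;#34;)
    } else if r := p.CPUUsage / p.CPULimit; r &amp;gt;= dangerZone {
        findings = append(findings, &amp;#34;danger zone&amp;#34;)
    } else if r &amp;lt; ghostLimit {
        findings = append(findings, &amp;#34;ghost limit&amp;#34;)
    }
    if p.CPURequest == 0 {
        findings = append(findings, &amp;#34;missing request&amp;#34;)
    } else if p.CPUUsage/p.CPURequest &amp;lt; overProvisioned {
        findings = append(findings, &amp;#34;over provisioned&amp;#34;)
    }
    return findings
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The memory side is the same shape; the real work is the polling and aggregation that produces the numbers.&lt;/p&gt;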
&lt;p&gt;Winston comes with a &lt;code&gt;/metrics&lt;/code&gt; endpoint to feed your favorite OTel stack. There&amp;rsquo;s also an embedded static UI and an endpoint that returns a Markdown or JSON report of exuberant pods. Feed that report to an agent that has access to your Flux repo, and all that&amp;rsquo;s left to do is cherry-pick the actions. For example, here&amp;rsquo;s what my agent suggested:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;┌────────────────────┬─────────┬─────────┬─────────┬─────────┬─────────────────┬─────────────────┐
│ Workload           │ CPU avg │ CPU max │ Mem avg │ Mem max │ → CPU req/limit │ → Mem req/limit │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ alloy              │ 16m     │ 105m    │ 386Mi   │ 572Mi   │ 50m / 200m      │ 400Mi / 640Mi   │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ config-reloader    │ —       │ 1m      │ —       │ 28Mi    │ 5m / 10m        │ 32Mi / 48Mi     │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ grafana            │ 27m     │ 192m    │ 117Mi   │ 128Mi   │ 50m / 300m      │ 128Mi / 256Mi   │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ kube-state-metrics │ 3m      │ 8m      │ 25Mi    │ 28Mi    │ 10m / 50m       │ 32Mi / 64Mi     │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ node-exporter      │ 4m      │ 16m     │ 23Mi    │ 26Mi    │ 10m / 50m       │ 32Mi / 64Mi     │
├────────────────────┼─────────┼─────────┼─────────┼─────────┼─────────────────┼─────────────────┤
│ vm-operator        │ 3m      │ 6m      │ 56Mi    │ 63Mi    │ 10m / 100m      │ 64Mi / 128Mi    │
└────────────────────┴─────────┴─────────┴─────────┴─────────┴─────────────────┴─────────────────┘
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Winston closed the resource definition gaps quickly, giving me better metrics about effective capacity. Curious about your thoughts.&lt;/p&gt;
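&lt;p&gt;Applying a row of that table is just a matter of updating the container&amp;rsquo;s resource block. For example, the alloy suggestion maps to something like this (a sketch of the container spec, not my exact manifest):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;resources:
  requests:
    cpu: 50m
    memory: 400Mi
  limits:
    cpu: 200m
    memory: 640Mi
&lt;/code&gt;&lt;/pre&gt;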
&lt;blockquote&gt;
&lt;p&gt;I am Winston Wolfe, I solve problems. May I come in?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;github: &lt;a href="https://github.com/gosusnp/winston"&gt;https://github.com/gosusnp/winston&lt;/a&gt;&lt;/p&gt;</description></item><item><title>whackAmole, a task manager for my agents and myself</title><link>https://my.pgnd.dev/posts/whackamole-a-task-manager-for-agents-human-collaboration/</link><pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate><guid>https://my.pgnd.dev/posts/whackamole-a-task-manager-for-agents-human-collaboration/</guid><description>&lt;p&gt;I was setting up a side project, agents hard at work for me. In those early moments, changes often cover more ground than expected: the codebase is new and standards are not yet defined. I got into the habit of asking a different agent to review the ongoing work. Like most things, it&amp;rsquo;s not perfect, but it captures good signals.&lt;/p&gt;
&lt;p&gt;The issue: I&amp;rsquo;d often end up with a stream of 10 or more issues to fix. While a lot of it deserved attention, like any review, the list was a mixed bag: quick fixes, things needing iteration, items better left for later or tied to a bigger refactor. That&amp;rsquo;s where a single Claude or Gemini prompt started to fall short.&lt;/p&gt;
&lt;p&gt;The initial POC: a stdio MCP server backed by a table of tasks in SQLite. I reloaded the session that gave me my last review with 10+ comments and asked the agent to create tasks from the feedback. Now I had a filterable list with editable descriptions, easy to reorganize and hand off to different agents across sessions.&lt;/p&gt;
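&lt;p&gt;The storage side really can be that small. A plausible version of the task table (a guess; whackAmole&amp;rsquo;s actual schema may differ) is just:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;CREATE TABLE IF NOT EXISTS tasks (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    project     TEXT NOT NULL,
    title       TEXT NOT NULL,
    description TEXT,
    status      TEXT NOT NULL DEFAULT &amp;#39;TODO&amp;#39;,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
&lt;/code&gt;&lt;/pre&gt;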
&lt;p&gt;This could all go into regular issues, but it never felt worth it. Most of it needs to be addressed within the same PR&amp;hellip; suddenly you&amp;rsquo;ve got 10 issues linked to a single PR. I enjoy the local iteration, and it serves me well as short- to medium-term TODO tracking.&lt;/p&gt;
&lt;p&gt;After dogfooding this across a couple of projects, including whackAmole itself, my workflow matured alongside the features I added. Below are the ones that stood out for my day to day.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CLI vs MCP vs UI&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Maybe obvious: MCP for agents, CLI/UI for myself. The CLI was good for creating projects and adding tasks, but viewing started to feel limited, especially when agents began feeding tasks with 30 lines of description. That&amp;rsquo;s where the UI started to shine for me.&lt;/p&gt;
&lt;p&gt;Overall, they play well with each other. CLI enables me to quickly add tasks or update status when I am heads down. That&amp;rsquo;s how everything started.&lt;/p&gt;
&lt;p&gt;MCP was always a goal. Yes, the agent could use the CLI, but that means documenting in each project how to use it. The agent does a better job of picking up what to do from the tool name and description. Sadly, that&amp;rsquo;s also what removed some of the whimsicality of the project, where Projects were Yards full of Moles to whack. Conventional naming makes it easier to understand; what is true for humans is also true for agents.&lt;/p&gt;
&lt;p&gt;The UI became my command center. Whenever I see an agent producing a big review or plan, I will likely ask it to create tasks. I find it more comfortable to read and, for better or worse, easier for multitasking between projects. More and more, I find myself writing tasks in the UI rather than prompting the agent directly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Projects&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Projects enable an easy breakdown: agents publish, view and edit tasks within a single project. The trick is the scoping. I found it awkward to always have to pass the project to the CLI, so I added a &lt;code&gt;whack config set-local project &amp;lt;projectKey&amp;gt;&lt;/code&gt; command. This has been a lifesaver: each project root gets a config file with the project key, and I no longer have to add the key to any command.&lt;/p&gt;
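&lt;p&gt;Under the hood it&amp;rsquo;s nothing fancy: a tiny per-project file holding the key. The exact path and format are implementation details (this layout is a guess), but conceptually:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# written by: whack config set-local project whack
project: whack
&lt;/code&gt;&lt;/pre&gt;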
&lt;p&gt;For agent setup, the project key being a very local thing, it didn&amp;rsquo;t feel right to put it in &lt;code&gt;AGENTS.md&lt;/code&gt; or any global agent spec. I started leveraging &lt;code&gt;CLAUDE.local.md&lt;/code&gt; and &lt;code&gt;GEMINI.local.md&lt;/code&gt;. Those files contain a quick guide, typically:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;You have access to the whackAmole MCP server for task management.
whackAmole project key is whack
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;MCP.instructions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Claude picked up pretty naturally on the tasks, updating status. Gemini&amp;hellip; not as well.&lt;/p&gt;
&lt;p&gt;Since tasks could sometimes be more complex, adding a &lt;code&gt;supports Markdown&lt;/code&gt; hint to the description field was enough to get the agents to use Markdown. Now I get formatted tasks with headers, bullet points and even code snippets.&lt;/p&gt;
&lt;p&gt;For the flow, I initially started adding instructions to the &lt;code&gt;.local.md&lt;/code&gt; files, which quickly turned out to be a pain to manage. Eventually, I started leveraging the &lt;code&gt;instructions&lt;/code&gt; block of the MCP server. In the UI, there&amp;rsquo;s a config button, top right, that enables editing the instructions attached to each instance. An example best illustrates how I use it:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# MCP Protocol: Project Task Lifecycle
## 1. Task State Management
The agent must strictly follow the semantic state machine for the current Project.
* **Pre-Execution**: Before performing any code changes or terminal actions, the agent **MUST** update the task status to `INPROGRESS`.
* **Pre-Validation**: Before requesting user feedback or validation, the agent **MUST** update the task status to `REVIEW`.
* **Closure**: Only the user may move a task to `COMPLETED` or `CLOSED` terminal states. The agent is prohibited from performing these transitions.
## 2. Quality Gates (Non-Negotiable)
Before a task can be transitioned to `REVIEW`, the agent must ensure the code is production-ready across all relevant languages.
* **Linting**: All changed files must pass the project&amp;#39;s configured linter (e.g., `golangci-lint` for Go, `eslint` for Preact/TS).
* **Testing**: All relevant unit and integration tests must pass.
* **Protocol**: If linting or tests fail, the agent must remain in `INPROGRESS` and resolve the issues before attempting to move to `REVIEW`.
## 3. Interaction Constraints
* **Permissions**: Always request user approval before refactoring existing code or changing established coding patterns.
* **Project Context**: Ensure all actions are performed within the scope of the active Project Key.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Overall it works well; even Gemini does a better job of tracking its tasks. It&amp;rsquo;s not as strong as a prompt, but it&amp;rsquo;s also easier to maintain. A decent enough trade-off.&lt;/p&gt;
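&lt;p&gt;The lifecycle in those instructions is simple enough to sketch as a tiny state machine. This is my own framing of the rules, not whackAmole&amp;rsquo;s code; the &lt;code&gt;TODO&lt;/code&gt; state and the actor split are assumptions:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;type Actor int

const (
    AgentActor Actor = iota
    UserActor
)

// allowed lists the forward transitions from each state.
var allowed = map[string][]string{
    &amp;#34;TODO&amp;#34;:       {&amp;#34;INPROGRESS&amp;#34;},
    &amp;#34;INPROGRESS&amp;#34;: {&amp;#34;REVIEW&amp;#34;},
    &amp;#34;REVIEW&amp;#34;:     {&amp;#34;INPROGRESS&amp;#34;, &amp;#34;COMPLETED&amp;#34;, &amp;#34;CLOSED&amp;#34;},
}

func canTransition(actor Actor, from, to string) bool {
    // Closure into terminal states is reserved for the user.
    if actor == AgentActor &amp;amp;&amp;amp; (to == &amp;#34;COMPLETED&amp;#34; || to == &amp;#34;CLOSED&amp;#34;) {
        return false
    }
    for _, next := range allowed[from] {
        if next == to {
            return true
        }
    }
    return false
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Encoding it this way makes the &amp;ldquo;only the user may close&amp;rdquo; rule a hard check rather than something the agent is merely asked to respect.&lt;/p&gt;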
&lt;p&gt;&lt;strong&gt;UI&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I found myself using the UI more and more:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pretty printing Markdown&lt;/li&gt;
&lt;li&gt;Quick toggle groups for Type and Status&lt;/li&gt;
&lt;li&gt;A bit of color coding to quickly catch what&amp;rsquo;s happening&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Refreshing was getting more and more tedious, so I wired up a quick event log that the UI uses to refresh task statuses and, when tasks are added, to show which projects are getting updates.&lt;/p&gt;
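&lt;p&gt;The post doesn&amp;rsquo;t prescribe a transport, but server-sent events are a natural fit for this kind of one-way refresh signal, and a frame is just text. A sketch (&lt;code&gt;sseFrame&lt;/code&gt; is a hypothetical helper, not whackAmole&amp;rsquo;s API):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;import &amp;#34;fmt&amp;#34;

// sseFrame renders one server-sent event frame; the browser UI can consume
// the stream with an EventSource and refresh the matching project view.
func sseFrame(event, data string) string {
    return fmt.Sprintf(&amp;#34;event: %s\ndata: %s\n\n&amp;#34;, event, data)
}
&lt;/code&gt;&lt;/pre&gt;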
&lt;hr&gt;
&lt;p&gt;That&amp;rsquo;s my latest workflow improvement in the wild west of agents. Curious how others are pulling through these days.&lt;/p&gt;</description></item></channel></rss>