AI
Moral stand
I acknowledge the risks posed by AI development, especially its environmental cost and the fact that the main driver is corporate capital gain (as usual).
That said, I consider myself a realist, and I believe AI/LLMs/coding agents are just a tool like many others that came before, all morally questionable in their own specific ways.
Ultimately, only humans are supposed to take full responsibility for how these tools are used and deployed.
Pi Coding Agent
I’ve been toying around with a few AI coding agent harnesses (Claude Code, Codex, OpenCode) and settled on Pi, which follows a minimalistic approach that makes it very performant and easy to hack. It’s fun! Try it out and join the community on the official Discord channel; I’m Al3xFor there ✌️
I also maintain various Pi extensions and the Pi coding agent action, an open-source GitHub Action that turns any LLM supported by Pi into a first-class collaborator on any GitHub repository. The agent reads issue descriptions, reviews pull requests, and pushes commits, all triggered by a simple comment on a GitHub issue or PR.
How Pi works on this website
This very codebase uses Pi through the pi.yml GitHub Actions workflow. When a comment is left on an issue or pull request review, Pi kicks in, analyzes the context, and produces code changes as a new commit or pull request. I use it most frequently via the iOS GitHub application to create new issues on the go and have Pi work on them in the background.
The workflow configuration is minimal:
```yaml
jobs:
  pi-agent:
    uses: shaftoe/pi-coding-agent-action/.github/workflows/pi.yml@v2
    with:
      allowed_actor: shaftoe # I'm the only GH user allowed to trigger the agent
```
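For context, a caller workflow also needs `on:` triggers for the comment events described above. A complete file might look like the following sketch; note that the specific event types and the `secrets: inherit` line are my assumptions about a typical setup, not taken verbatim from the repository:

```yaml
# Hypothetical complete .github/workflows/pi.yml caller workflow.
# The trigger events below are assumptions; check the action's docs.
name: Pi agent

on:
  issue_comment:                  # comments on issues and PRs
    types: [created]
  pull_request_review_comment:    # comments on PR review diffs
    types: [created]

jobs:
  pi-agent:
    uses: shaftoe/pi-coding-agent-action/.github/workflows/pi.yml@v2
    with:
      allowed_actor: shaftoe      # only this GH user may trigger the agent
    secrets: inherit              # pass repo secrets (e.g. an LLM API key) down
```

`secrets: inherit` is the standard GitHub Actions way to forward the caller repository's secrets to a reusable workflow, which the agent presumably needs for model API credentials.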
If you are curious, you can browse the issues and pull requests to see my actual prompts (comments starting with /pi) and the PRs and reports generated from them.