The Shiny Object Trap
When you can do anything, what you don't do is the skill
Andrej Karpathy described exactly what I've been doing for the past few months. Here's a template to get started in five minutes.
Types as agent instructions, auto-generated verifiers, and experimental REPL mode.
Software UX has been built around making it easy to do things. It's shifting toward making it easy to say what you want.
One person and one AI built pretext in 27 days. Autoresearch applied to a real problem.
10 principles for building CLIs that AI agents can actually use, from a Cursor dev. Plus the missing piece: WHEN and WHEN NOT to use each command.
Anthropic's harness engineering article highlights what's missing from most AI coding: scrutiny and evaluation.
Why build a RAG pipeline when AI can already do most of the work for you?
I pointed Claude at Evan Miya probabilities and Yahoo public picks and told it to build a bracket optimizer. It mostly worked, except for the part where it started picking 16-seeds.
AI makes checking claims cheaper than making them. A penny-rounding simulation proves the point.
Why would you make 1M context the default at no extra cost unless something changed architecturally?
Two annoying problems, two TUIs
Automate the draining parts, focus on what gives you energy
Agents + CLIs + skills is the core of the future; nobody has figured out the right human interface for that yet.
No theory, just the setup — how I actually work with AI tools day to day
Everything else is CLIs and skills files
Building a more efficient bridge between AI assistants and your files with intelligent context management
A Reference and Hyperparameter Free Approach to Preference Alignment