Andy Decker

// Software Engineer
What AI experiments taught me

// Published On: Jan 17, 2026

#ai

I have spent the last couple of years running small experiments with AI tools — integrating them into workflows, building prototypes, and occasionally watching them fail in instructive ways. None of this has made me an expert. But it has given me a clearer picture of what these tools actually are, as opposed to what the surrounding hype suggests they are.

A few things stood out as genuinely surprising. Not surprising in the sense of “AI is magic,” but surprising in the sense of “I had the wrong mental model, and reality corrected me.”

The quality of your input matters more than you expect

The early instinct with a language model is to treat it like a search engine — type a short fragment, expect a useful result. This works poorly. The output you get is shaped heavily by the framing, context, and specificity of what you put in.

This is not a technical quirk. It reflects something real about how these systems work. They are pattern-completing machines, and the pattern you set up determines the space of completions they draw from. A vague prompt produces a vague, generic response. A prompt that includes context, constraints, and a clear sense of what good output looks like produces something far more useful.

The practical implication is that working well with these tools is a skill. It takes practice, and it transfers across models and applications once you develop it.
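The difference between a vague and a well-framed prompt can be made concrete. Below is a minimal sketch of a prompt-assembly helper — the `build_prompt` function is hypothetical, not part of any real SDK, but the shape (task plus context, constraints, and an example of good output) transfers to most chat or completion APIs.

```python
def build_prompt(task, context=None, constraints=None, example=None):
    """Assemble a prompt from a task plus optional framing sections.

    A hypothetical helper for illustration: each section narrows the
    space of completions the model draws from.
    """
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if example:
        parts.append(f"Example of good output:\n{example}")
    return "\n\n".join(parts)

# Search-engine style: a short fragment with no framing.
vague = build_prompt("error handling python")

# Framed: context, constraints, and a clear sense of what good looks like.
specific = build_prompt(
    task="Suggest an error-handling strategy for a CLI tool",
    context="Python CLI that reads user-supplied CSV files",
    constraints=[
        "fail with a clear message, not a raw traceback",
        "exit code 1 on bad input, 2 on internal errors",
    ],
    example="On a malformed row: 'line 14: expected 3 columns, got 2'",
)
```

The point is not this particular helper, but the habit it encodes: the second prompt sets up a much narrower pattern for the model to complete, which is where the quality difference comes from.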

They are better at breadth than depth

Where I have found AI tools most consistently useful is in covering ground quickly — drafting a first version of something, exploring an unfamiliar domain, generating options to react to rather than starting from a blank page. They compress the early, broad phase of work significantly.

Where they are less reliable is in deep, precise, domain-specific work where errors are costly and hard to spot. The output can look authoritative while being subtly wrong in ways that require genuine expertise to catch. This is not a reason to avoid using them. It is a reason to apply them thoughtfully and verify the things that matter.

Automation is the wrong frame for most use cases

A lot of early excitement about AI is framed around automation — the idea that tasks currently done by people will simply be handed off to models. For a narrow class of well-defined, high-volume, low-stakes tasks, this is accurate. But for most knowledge work, it misses the point.

The more useful frame is augmentation. Not “can this replace the human?” but “can this make the human faster, broader, or less blocked?” In that frame, the value is clearer and more honest. A tool that helps a developer navigate an unfamiliar codebase, or helps a writer break through a stuck draft, is genuinely valuable even if it is nowhere near replacing that developer or writer.