"AI" for computer programming
Origins
I've been programming computers professionally for 30 years.
May 2024: all these "AIs" (LLMs) were pretty much crap as far as I could tell. But I felt professional pressure to have one available to me, so apparently I've been paying OpenAI $20/month since May 2024. Just to have a browser tab available in case it happens to be useful occasionally. Sometimes to be able to say "ya, I tried that, but it sucked at that."
As a consultant, it appears to me you always have to be open to at least trying new tech. Blanket refusal to even try something is frowned upon. If my $20/month destroys the entire planet / humanity, that's my bad. I don't feel massively complicit (yet?), I believe they're losing money on my usage? (Burn that venture capital, burn. 🔥) If the price increases much (even enough to cover their costs?) I predict I'll drop my subscription.
Today
Today I'm often very impressed with ChatGPT 4o in specific situations. When I have no idea how to do X, ChatGPT is often excellent at quickly putting me in a functional ballpark of one way to start. Saving me 10-90 minutes of trying to find good docs / a single working example.
So I always have a ChatGPT 4o browser tab open. I go back and forth between command-line docs, web reference docs, Kagi searches, and asking ChatGPT 4o.
Very recently (May 2025), I've also been playing with Cursor for small hobby projects. It seems excellent at "writing things for me" and "explaining" what it's doing, how it works, and why to do those things. I've paid $0 for Cursor so far. The free tier is 125 whatevers/month, which I haven't hit yet.
e.g. It's been years since I've attempted any NLP (Natural Language Processing), so I had ChatGPT 4o sketch out some code for me in whatever language / libraries it recommended. I've had Cursor refining it for me. I'm still doing a lot of it manually myself, but from a tool-assisted initial scaffolding. I have to understand the new code myself before I accept anything the tools spit out. I commit each step to git manually after I understand what it did.
I've been very surprised that I can ask it questions about non-obvious, mildly janky code I wrote, and it explains what I wrote in 3 seconds in very well-written docs. If asked to document what I wrote, I probably would have written similar (or slightly worse?) docs, and it would have taken me 30 minutes.
I haven't attempted any AI integrations into large-scale real-world contexts. I assume that if, a few months ago, I had attempted to have Cursor help me with the ~30K lines of Terraform configuration I was working on, maybe it would have been useful? At the time I wasn't using Cursor yet, so I didn't try. Cursor is apparently super great at feeding your code context to it. It manipulates your code in-line inside a VSCode fork. Very fancy. I have no idea how well it scales.
I'm a baby "AI" tool user, not a power user (so far). Everything I've done is the intro stuff. I've been very impressed recently. It also kicks out total nightmare code occasionally. Or completely non-functional nonsense. But not very often in my recent experience.