"AI" for computer programming
Experience: 30 years of computer programming. In 2023 my career veered off of large scale back-end development (DBs, APIs, etc.) and into "DevOps" (gluing dataflow tools together w/o building large bespoke ones in house). My title is "Data Engineer" but I wouldn't say I'm doing much of that.
2026-01-19
It now appears that Claude Sonnet 4.5 (in VSCode) is stunningly stronger (faster) at many of the processes I've been doing manually for 30 years, with a low rate of mistakes / disastrous suggestions. I've gotten pretty good at harnessing Claude for my tasks.
Overall, in the domain I'm working in (AWS cloud monkey), I may or may not be more productive than other humans.
But it's becoming inarguable, it seems to me, that a human with my level of experience doing my specific job w/ Claude Sonnet 4.5 can be (note: is not guaranteed to be) more productive than a human w/o Claude or equivalent.
My professional experience now includes many months of harnessing "AI" tools.
Presumably in the next 1-5 years, the licensing costs of the tools my job is using will skyrocket as those companies eventually have to stop losing tens / hundreds of billions of dollars every year?
Side-note: For 25 years I'd use Perl to do little one-off things. They'd take me 20-90 minutes. Now Claude writes that thing for me from scratch in Python faster than I can write good prompts. This makes my old programmer brain sad. But my get shit done brain is happy the shit got done.
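For flavor, here's the kind of throwaway one-off task I mean. This is a hypothetical example (not one of my actual scripts, and the log format is invented for illustration): count error lines per hour in a log file.

```python
# Hypothetical one-off: count ERROR lines per hour in a log.
# Assumes lines look like "ISO-timestamp LEVEL message".
from collections import Counter

def errors_per_hour(lines):
    """Return {"YYYY-MM-DDTHH": count} for lines whose level is ERROR."""
    counts = Counter()
    for line in lines:
        parts = line.split(maxsplit=2)  # timestamp, level, rest
        if len(parts) >= 2 and parts[1] == "ERROR":
            counts[parts[0][:13]] += 1  # truncate timestamp to the hour
    return dict(counts)

log = [
    "2025-01-19T10:02:11 ERROR db timeout",
    "2025-01-19T10:15:40 INFO ok",
    "2025-01-19T11:01:05 ERROR retry failed",
]
print(errors_per_hour(log))  # {'2025-01-19T10': 1, '2025-01-19T11': 1}
```

Twenty lines of Python like this is exactly the 20-90 minute Perl-script territory that Claude now fills in seconds.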
On Mastodon, someone asked:
Oh FFS, why is it so hard for people to understand that LLMs/genAI don't THINK?! 💀
Doing my specific job, at this point, I don't know how a blind Turing test would / could prove I "think" about problems more than Claude Sonnet 4.5 does.
Could you detect it was Claude? Absolutely.
Claude isn't just a one-time word salad generator anymore. It generates deep trees of hypotheses and conclusions only partially based on LLM predictions. Its "logical" "grounding" in technical specs / docs is... I didn't think that was going to be possible for at least 10 more years.
Or I'm a sucker / stupid / tricked. I dropped out of my Philosophy major decades ago, so maybe my thinking about thinking is weak. 🙂
2025-07-09
I cancelled all my personal "AI" subscriptions because my new job is flooded with those tools. By policy I'm not supposed to use those tools for personal use. I'm not making enough time for my hobbies to "need" "AI" for them anyway. My hobbies are shifting from software to physical.
2025-05-11
I'm often very impressed with ChatGPT 4o in specific situations. When I have no idea how to do X, ChatGPT is often excellent at quickly putting me in a functional ballpark of one way to start. Saving me 10-90 minutes of trying to find good docs / a single working example.
So I always have a ChatGPT 4o browser tab open. I go back and forth between command-line docs, web reference docs, Kagi searches, and asking ChatGPT 4o.
I've also been playing with Cursor for small hobby projects. It seems excellent at "writing things for me" and "explaining" what it's doing, how it works, and why to do those things. I've paid $0 for Cursor so far. The free tier is 125 whatevers/month, which I haven't hit yet.
e.g. It's been years since I've attempted any NLP (Natural Language Processing), so I had ChatGPT 4o sketch out some code for me in whatever language / libraries it recommended. I've had Cursor refining it for me. I'm still doing a lot of it manually myself, but from a tool-assisted initial scaffolding. I have to understand the new code myself before I accept anything the tools spit out. I commit each step to git manually after I understand what it did.
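To give a sense of what "initial scaffolding" means here, this is a stripped-down, hypothetical sketch of that kind of starting point (stdlib only for the example; the real suggestions involved actual NLP libraries): tokenize text and rank the most frequent terms.

```python
# Hypothetical, simplified NLP scaffolding: crude tokenizer + term frequency.
import re
from collections import Counter

def top_terms(text, n=3, stopwords=frozenset({"the", "a", "and", "of"})):
    """Lowercase, tokenize on letters/apostrophes, drop stopwords, return top n."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

sample = "The cat sat on the mat and the cat slept."
print(top_terms(sample))  # [('cat', 2), ('sat', 1), ('on', 1)]
```

The value isn't that any one piece is hard; it's that the tool hands you a coherent, runnable starting shape to refine instead of a blank file.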
I've been very surprised that I can ask it questions about non-obvious, mildly janky code I wrote and it explains what I wrote in 3 seconds in very well written docs. If asked to document what I wrote, I probably would have written similar (or slightly worse?) docs of that thing, and it would have taken me 30 minutes.
I haven't attempted any AI integrations into large scale real-world contexts. I assume that if, a few months ago, I had attempted to have Cursor help me with the ~30K lines of Terraform configuration I was working on, it might have been useful? At the time I wasn't using Cursor yet, so I didn't try. Cursor is apparently super great at feeding your code context to it. It manipulates your code in-line inside a VSCode fork. Very fancy. I have no idea how well it scales.
I'm a baby "AI" tool user, not a power user (so far). Everything I've done is the intro stuff. I've been very impressed recently. It also kicks out total nightmare code occasionally. Or completely non-functional nonsense. But not very often in my recent experience.
2024 - early 2025
All these "AIs" (LLMs) are pretty much crap as far as I can tell. But I feel professional pressure to have one available to me, so apparently I've been personally paying OpenAI $20/month. Just to have a browser tab available in case it happens to be useful occasionally. Sometimes to be able to say "ya, I tried that, but it sucked at that."
As a consultant, it appears to me that you always have to be open to at least trying new tech. Blanket refusal to even try something is frowned upon. If my $20/month destroys the entire planet / humanity, that's my bad. I don't feel massively complicit (yet?); I believe they're losing money on my usage? (Burn that venture capital, burn. 🔥) If the price increases much (even enough to cover their costs?) I predict I'll drop my subscription.