Some of those are before your time, but: The only time you don't get pushed to use new technologies is when a) nothing is changing and the industry is stagnant, or b) you're ahead of the curve and already exploring the new technology on your own.
Otherwise, everyone always gets pushed into using the new thing when it's good.
(Likewise with CVS to svn: "you can rename files now? and branches aren't horrible? Great, how fast can we switch?" - no "pushing" because literally everyone could see how much better it was in very concrete cases, it was mostly just a matter of resource allocation.)
In the context of this discussion, it feels more like ipv6 :-)
and then there is AS/400 and all the COBOL still in use which AI doesn't want to touch.
But when you stop trying new stuff (“because you don’t want to”), it is a sign that your inner child got lost. (Or you have a depression or burnout.)
Docker has obvious benefits over bare metal.
Etc.
My own experiences with LLMs have shown them to be entertaining, and often entertainingly wrong. I haven't been working on a project I've felt comfortable handing over to Microsoft for them to train Copilot on, and the testimonials I've seen from people who've used it are mixed enough that I don't feel like it's worth the drawbacks to take that risk.
And...sure, some people have to be pushed into using new things. Some people still like using vim to write C code, and that's fine for them. But I never saw this level of resistance to git, Docker, unit tests, or CSS.
At least with other advancements in our field like git, Docker, etc., they're made with a local-first mindset (e.g. your git repos can live anywhere and same with your docker images)
If that is not interesting to you I think that’s a totally fine choice, but you’re getting a lot of pushback from people who have made a different choice.
That's probably true on some level for some evangelists, but it's probably just as true that some people who are a bit scared of AI read every positive post about it as some sort of propaganda trying to change their mind.
Sometimes it's fine to just let people talk about things they like. You don't know what camp someone is in so it's good to read their post as charitably as possible.
Also not all of us need to sell ourselves as high-speed AI-boosted developers, especially those with decades of experience. Investors might well choose to invest in artisanal coding, and many of us can act as our own investors as well. So the inevitability of agentism is still undecided IMHO.
Not doing so seems a bit like a farmer ploughing fields and harvesting crops by hand while seeking to remain competitive with modern machinery, surely?
Because your boss is going to want you capable of using these things effectively as soon as 1-2 years from now? If not them, then their boss.
TBH I'm fine with AI, but my main concern isn't any of these issues (even if they suck now, though supposedly Claude Code doesn't, they can get better in the future).
My main concern, by far, is control and availability. I don't mind using some AI, but I do mind using AI that runs on someone else's computer, isn't under my control, and isn't something I can (or at least have a chance at) understanding/tweaking/fixing. So all my AI use is done via inference engines written in C++ that I compiled myself and that run on my PC.
Of course the same logic applies to anything where that makes sense (i.e. all my software runs locally; the only things I use online/cloud versions for are things that are inherently about networking, e.g. chat, forums, etc., but even then I use, say, a desktop email client instead of webmail).
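For what it's worth, that kind of fully local setup can be quite small. A minimal sketch, assuming the llama-cpp-python bindings (which wrap the C++ llama.cpp engine) and a GGUF model file you've already downloaded; the model path and prompt are just placeholders:

```python
# Sketch of fully local inference: nothing leaves this machine.
# The model path and prompt are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(model_path="models/some-model.gguf", n_ctx=4096)

out = llm(
    "Write a short docstring for a function that parses RFC 3339 timestamps.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```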
If it produces value for you, you should use it. If not, don't.
Imagine if we had to suffer these posts, day in and day out, when React or Kubernetes or any other piece of technology got released. This kind of proselytizing is the very reason there is tribalism around AI.
I don't want to use it, just like I don't want to use many technologies that got released, while I have adopted others. Can we please move on, or do we have to suffer this kind of moaning until everybody has converted to the new religion?
Never in my 20 years in this career have I seen such maniacal obsession as there has been over the past few years: the never-ending hype that has transformed this forum into a place I do not recognise, into a career I don't recognise, where people you used to respect [1] have gone into a psychosis and dream of ferrets, and if you dare to be skeptical about any of it, you are bombarded with "I used to dislike AI, now I have seen the light and if you haven't I'm sorry for you. Please reconsider." stories like this one.
Jesus, live and let live. Stop trying to make AI a religion. It's posts like this one that create the sort of tribalism they rail against, turning it into a battle between the "enlightened few" and the silly Luddites.
If such a guy is slowly dipping his toes into AI and comes to the conclusion he just posted, you should take a step back and consider your position.
I know what its capabilities are. If I wanted to manage a set of enthusiastic junior engineers, I'd work with interns, which I love doing because they learn and get better. (And I still wouldn't want to be the manager.) AIs don't, not from your feedback anyway; they sporadically get better from a new billion dollar training run, where "better" has no particular correlation with your feedback.
There's no both-sides-ing of genAI. This is an issue akin to street narcotics, mass weapons of war, or forever chemicals. You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity. The OP is not a thoughtful moderate because that's not how any of this works.
For some people, that's picking up the tool and trying to figure out what it's good for (if anything) and how it works.
Many people are seeing this as an existential moment requiring careful navigation and planning, not just another language or browser or text editor war.
So that's how I think AI will be seen in 20 years: like the PC, the internet, and mobile phones. Tech that shapes society, for better or worse.
This is a tipping point, and most anti-AI advocates don't understand that other software developers who keep telling them to reevaluate their positions are often just trying to make sure no one is left behind.
So… humans are now doing the stuff that computers are supposed to do and be good at?
Where are the websites that are lightning fast, where speed and features and ads have been magically optimized by AI, and things feel fast, like 2001 google.com fast?
Why does customer service still SUCK?
What worries me about this is that it might end up putting up a barrier for those that can't afford it. What do things look like if models cost $1000 or more a month and genuinely provide 3x productivity improvements?
You can bootstrap something with just yourself and a friend, some hard work, and intelligence.
This is available to people all over the world, even those in countries where $1000 is a month's salary.
Microsoft and their employees will be fine, yeah. That's not who I'm thinking about
But there is an interesting point about what it does to hobby dev. If it takes real money just to screw around for fun on your own, it's kinda like going back to the old days when you needed to have an account on a big university system to do anything with Unix.
Small bootstrapped startups are more what I had in mind. Of course an established company can pay it. I don't like the idea of a world where all software is backed by big companies.
But yeah, I share your concern about open source and hobby projects. My hope would be that you get free tiers that are aimed at hobby/non-profit/etc stuff, but who knows.
AI has increased the sheer volume of code we are producing per hour (and probably also the amount of energy spent per unit of code). But, it hasn't spared me or anyone I know the cost of testing, reviewing or refining that code.
Speaking for myself, writing code was always the most fun part of the job. I get a dopamine hit when CI is green, sure, but my heart sinks a bit every time I'm assigned to review a 5K+ loc mountain of AI slop (and it has been happening a lot lately).
My medium term concern is that the tasks where we want a human in the loop (esp review) are predicated on skills that come from actually writing code. If LLMs stagnate, in a generation we’re not going to have anyone who grew up writing code.
1: not that LLMs write objectively bad code, but it doesn’t follow our standards and patterns. Like, we have an internal library of common UI components and CSS, but the LLM will pump out custom stuff.
There is some stuff that we can pick up with analysers and fail the build, but a lot of things just come down to taste and corporate knowledge.
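As a toy illustration of the "pick it up with analysers and fail the build" idea, here is a rough sketch of a CI script that flags patterns bypassing a hypothetical internal UI library; the banned patterns, file globs, and hints are all made up:

```python
# Toy build check: fail CI when code bypasses the (hypothetical) internal UI library.
import pathlib
import re
import sys

BANNED = {
    r"<button\b": "use <AppButton> from the internal UI library",
    r"style\s*=\s*\"": "use the shared CSS classes instead of inline styles",
}

def main() -> int:
    failures = []
    for path in pathlib.Path("src").rglob("*.tsx"):
        text = path.read_text(encoding="utf-8")
        for pattern, hint in BANNED.items():
            if re.search(pattern, text):
                failures.append(f"{path}: {hint}")
    for line in failures:
        print(line)
    return 1 if failures else 0  # non-zero exit fails the build

if __name__ == "__main__":
    sys.exit(main())
```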
I don't see why it doesn't help with reviewing, testing, or refining code either. One of the advantages I find is that an LLM "thinks" differently from me so it'll find issues that I don't notice or maybe even know about. I've certainly had it develop entire test harnesses to ensure pre/post refactoring results are the same.
That said, I have "held it wrong" and had it do the fun stuff instead, and that felt bad. So I just changed how I used it.
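For anyone curious what such a pre/post refactoring harness can look like, here is a minimal sketch; `legacy_parse` and `refactored_parse` are hypothetical stand-ins for the old and new implementations:

```python
# Minimal pre/post refactoring equivalence harness (illustrative only).
import pytest

def legacy_parse(raw: str) -> dict:        # placeholder for the old code path
    return dict(part.split("=", 1) for part in raw.split(";") if "=" in part)

def refactored_parse(raw: str) -> dict:    # placeholder for the rewritten code path
    return dict(part.split("=", 1) for part in raw.split(";") if "=" in part)

CASES = ["", "plain text", "key=value;other=2", "a=1;a=2", "unicode=héllo"]

@pytest.mark.parametrize("raw", CASES)
def test_refactor_preserves_behaviour(raw):
    # Both versions should agree on every input, including edge cases.
    assert refactored_parse(raw) == legacy_parse(raw)
```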
I read with awe anecdotes of teams that push through AI-driven changes as fast as possible. Surely their AIs are no more capable than the ones I'm familiar with.
I still think whether you see sustained value or not depends a lot on your workflow -- in what you choose to do or decide and what you let it choose to do or decide.
I agree with you that this idea of just pushing out AI code -- especially code written from scratch by an AI -- sounds like a disaster waiting to happen. But honestly a lot of organizations let a lot of crappy code into their codebase long before AI came along. Those organizations are just doing the same now at scale. AI didn't change the quality, it just changed the quantity.
You mean the knowledge that Claude has stolen from all of us and regurgitated into your projects without any copyright attributions?
> But I see a lot of my fellow developers burying their heads in the sand
That feeling is mutual.
You can't, and shouldn't be able to, copyright and hoard "knowledge".
You can twist this around as much as you like, but there are several studies showing that LLMs can and will happily reproduce content from their training data.
Correct. But if I read your code, produce a detailed specification of that code, and then give that specification to another team (that has never seen your code) and they create a similar product, then they haven't broken the law.
LLMs reproducing exact content from their training data is a symptom of overfitting and is an error that needs correcting. Memorizing specific training data means that the model is not generalizing enough.
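As a hedged illustration of what "memorization" means in practice (not any lab's actual safeguard), one crude way to flag verbatim reproduction is to look for long shared word n-grams between an output and a reference text:

```python
# Crude verbatim-reproduction check: a long shared word n-gram is unlikely by
# chance, so treat it as probable memorization. Thresholds are illustrative.
def ngrams(text: str, n: int) -> set[str]:
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shares_long_span(output: str, reference: str, n: int = 12) -> bool:
    return bool(ngrams(output, n) & ngrams(reference, n))

if __name__ == "__main__":
    ref = "the quick brown fox jumps over the lazy dog " * 3
    print(shares_long_span(ref, ref))                     # True: exact copy
    print(shares_long_span("unrelated text here", ref))   # False
```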
That costs significantly more and involves the creation of jobs. I see this as a great outcome. There seems to be a group of people who share the opposite of my views on this matter.
> and is an error that needs correcting
It's been known for years. They don't seem interested in doing that or they simply aren't capable. I presume because most of the value in their service _is_ the copyright whitewashing.
> Memorizing specific training data means that it is not generalizing enough.
Is that like a knob they can turn or is it something much more fundamental to the technology they've staked trillions on?
I don't see it that way. If whatever you're doing can now be automated then it's become a bullshit job. It no longer a benefit to humanity to have a human sit on their ass, stand on their feet, or break their back to do a job that can be automated. As a software developer, it's my job to take the dumb repetitive stuff that humans do and make it so that humans never have to do that job again.
If that's a problem for society, it's because society is messed up.
> It's been known for years. They don't seem interested in doing that or they simply aren't capable.
I don't find that to be a particularly big problem. Fundamentally, an AI isn't just compressing all human knowledge and decompressing it on demand; it's tweaking parameters in a giant matrix. I can reproduce the lyrics of songs that I've heard, but that doesn't mean there is a literal copy of that song in my brain that you could extract with a well-placed scalpel. It just means I've heard it a bunch of times and the giant matrix in my brain is tuned to be able to spit it out.
> Is that like a knob they can turn or is it something much more fundamental to the technology they've staked trillions on?
In a sense, it is a knob. It's not fundamental to the technology; if a model is reproducing something exactly, that likely means it's over-trained on that data. It's actually bad for the models (makes them more incorrect, more rigid, and more repetitive), so that is a knob they will turn.
And comes with a price tag paid to people who neither own nor generated that content. You don't think that shifts the ethical boundaries _significantly_?
I would very much like someone to give me the magic reproduction triple: a model trained on your code, a prompt you gave it to produce a program, and its output showing copyright infringement on the training material used. Specific examples are useful; my hypothesis is that this won't be possible using a "normal" prompt that's in general use, but rather a prompt containing a lot of directly quoted content from the training material, that then asks for more of the same. This was a problem for the NYT when they claimed OpenAI reproduced the content of their articles...they achieved this by prompting with large, unmodified sections of the article and then the LLM would spit out a handful of sentences. In their briefing to the court, they neglected to include their prompts for this reason. I think this is significant because it relates to what is really happening, rather than what people imagine is happening.
But I guess we'll get to see from the NYT trial, since OpenAI is retaining all user prompts and outputs and providing them to the NYT to sift through. So the ground-truth exists, I'm sure they'll be excited to cite all the cases where people were circumventing their paywall with OpenAI.
Then you have been misled:
https://arstechnica.com/features/2025/06/study-metas-llama-3...
> I would very much like someone to give me the magic reproduction triple
Here's how I saw it directly. Searched for "node http server example." Google's AI spit out an "answer." The first link was a Digital Ocean article with an example. Google's AI completely reproduced the DO example down to the content of the comments themselves.
So... I don't know what to tell you. How hard have you been looking yourself? Or are you just trying to maintain distance with the "show me" rubric? If you rely on these tools for commercial purposes then the onus was always on you.
> So the ground-truth exists
And you expect a civil trial to be the most reliable oracle of it? I think you know what I know but would rather _not_ know it.
Also: Over the past 20 years, I could count on one hand the number of times I was able to get away with an outright copy/paste from SO.
What about 10x more?
Edit: If I get a raise, I'd consider paying up to $25,000 per year for the aforementioned Claude automaton.
Many of these techniques can also work with Chinese LLMs like Qwen served by your inference provider of choice. It's about the harness that they work in, gated by a certain quality bar of LLM.
Taking a discussion about harnesses and stochastic token generators and forcing it into a discussion of American imperialism is making a topic political that is not inherently political, and is exactly the sort of aggressive, cussing tribalistic attitude the article is about.
I don't think everything is for certain though. I think it's 50/50 on whether Anthropic/whoever figures out how to turn them into more than a boilerplate generator.
The imprecision of LLMs is real, and a serious problem. And I think a lot of the engineering improvements (little s-curve gains or whatever) have caused more and more of these. Every step or improvement has some randomness/lossiness attached to it.
Context too small?:
- No worries, we'll compact (information loss)
- No problem, we'll fire off a bunch of agents, each with their own little context window and small task, to combat this - see the sketch after this list. (You're trusting the coordinator to do this perfectly, and cutting the sub-agent off from the whole picture.)
All of this is causing bugs/issues?:
- No worries, we'll have a review agent scan over the changes (They have the same issues though, not the full context, etc.)
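To make the sub-agent point concrete, here is a toy sketch (not any particular product's implementation) of the coordinator pattern: each sub-agent only ever sees the slice of context the coordinator hands it, which is exactly where the information loss creeps in:

```python
# Toy coordinator/sub-agent split: every name here is illustrative.
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    context: str  # only a slice of the full picture

def coordinator(full_context: str, task_descriptions: list[str]) -> list[SubTask]:
    # Information loss happens here: each sub-agent gets a truncated view.
    slice_size = max(1, len(full_context) // max(1, len(task_descriptions)))
    return [
        SubTask(desc, full_context[i * slice_size:(i + 1) * slice_size])
        for i, desc in enumerate(task_descriptions)
    ]

def run_subagent(task: SubTask) -> str:
    # Stand-in for an LLM call; it can only reason over task.context.
    return f"result for {task.description!r} from {len(task.context)} chars of context"

if __name__ == "__main__":
    tasks = coordinator("...entire repository and conversation history...",
                        ["refactor module A", "update tests", "review changes"])
    for t in tasks:
        print(run_subagent(t))
```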
Right now I think it's a fair opinion to say LLMs are poison and I don't want them to touch my codebase, because they produce more output than I can handle, and the mistakes they make are so subtle that I can't reliably catch them.
It's also fair to say that you don't care, and your work allows enough bugs/imprecision that you accept the risks. I do think there's a bit of an experience divide here, where people more experienced have been down the path of a codebase degrading until it's just too much to salvage – so I think that's part of why you see so much pushback. Others have worked in different environments, or projects of smaller scales where they haven't been bit by that before. But it's very easy to get to that place with SOTA LLMs today.
There's also the whole cost component to this. I think I disagree with the author about the value provided today. If costs were 5x what they are now, I think it would be a hard decision for me to decide if they are worth it. For prototypes, yes. But for serious work, where I need things to work right and be reasonably bug free, I don't know if the value works out.
I think everyone is right that we don't have the right architecture, and we're trying to fix layers of slop/imprecision by slapping on more layers of slop. Some of these issues/limitations seem fundamental and I don't know if little gains are going to change things much, but I'm really not sure and don't think I trust anyone working on the problem enough to tell me what the answer is. I guess we'll see in the next 6-12 months.
When I look back over my career to date there are so many examples of nightmare degraded codebases that I would love to have hit with a bunch of coding agents.
I remember the pain of upgrading a poorly-tested codebase from Python 2 to Python 3 - months of work that only happened because one brave engineer pulled a skunkworks project on it.
One of my favorite things about working with coding agents is that my tolerance for poorly tested, badly structured code has gone way down. I used to have to take on technical debt because I couldn't schedule the time to pay it down. Now I can use agents to eliminate that almost as soon as I spot it.
Overall I like using it still but I can also see my mental model of the codebase has significantly degraded which means I am no longer as effective in stopping it from doing silly things. That in itself is a serious problem I think.
It is another post that advocates for AI assisted coding without addressing the question of responsibility and trust. It makes claims without offering test data or even talking about testing.
> The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it), and we don’t need another breakthrough.
The costs should come down. I don’t know what costs this post refers to, but the cost of using Claude is almost definitely hiding the actual cost.
That said, I'm still hoping the public models out there work well enough with opencode or other options that my cost becomes transparent to me: what is added to my electric bill, rather than a subscription to Claude.
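For a sense of scale, a rough back-of-envelope under assumed numbers (a 400 W GPU, 4 hours of active inference a day, $0.15/kWh; all made up for illustration):

```python
# Back-of-envelope electricity cost for local inference (all figures assumed).
GPU_WATTS = 400
HOURS_PER_DAY = 4
PRICE_PER_KWH = 0.15

kwh_per_month = GPU_WATTS / 1000 * HOURS_PER_DAY * 30
print(f"{kwh_per_month:.0f} kWh/month ≈ ${kwh_per_month * PRICE_PER_KWH:.2f}/month")
# -> 48 kWh/month ≈ $7.20/month under these assumptions
```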
Choosing not to use AI agents is maybe the only tool position I feel I've had to defend or justify in over a decade of doing this, and it's so bizarre to me. It almost reeks of insecurity from the Agent Evangelists and I wonder if all the "fear" and "uncertainty" they talk about is just projecting.