Ask HN: Why isn't using AI in production considered stupid?

Just as the title says, isn't it stupid to run AI in a production environment before we have addressed some pretty major fucking problems with it?

Comments

nis0s · Mar 29, 2026, 11:40 AM
Where are all the production issues that have been created because of AI? Are there more incidents now than before? What’s the rate of production failures pre- and post-AI?

Only reason humans need to be in the loop is so there is someone to blame or hold accountable in a legal sense.

al_borland · Mar 29, 2026, 7:21 AM
It is stupid. Where I work, management has been pushing the idea of AI pretty hard, but they have repeatedly said it should not be used in production and a human needs to be in the loop.

I think the overall push, even when it doesn’t make sense, is also stupid, but at least it’s tempered with a little logic to keep production a bit safer.

I get great joy from reading stories of AI in prod gone wrong.

jqpabc123 · Mar 29, 2026, 7:57 AM
US corporate culture is overly focused on short-term effects. Why isn't this considered stupid?

Short term --- AI can generate code so let's fire those pesky, expensive developers.

Long term --- AI is terrible at maintaining the code it has generated. We need more human developers who can understand and fix this mess.

https://towardsdatascience.com/the-black-box-problem-why-ai-...

spl757 · Mar 29, 2026, 8:13 AM
I agree that that is all true. And exceedingly fucking stupid.
prohobo · Mar 29, 2026, 7:30 AM
Depends on the problem-space doesn't it? If you're making CRUD apps, then go wild IMO. If you're making rocket launch systems, maybe don't?
spl757 · Mar 29, 2026, 8:11 AM
The problem that I see is that no one and no company seems to be making that distinction.
serf · Mar 29, 2026, 7:55 AM
a hammer requires an operator, so it's rarely used wrong, and if something goes wrong the operator can intervene. sometimes a thumb will be struck, but usually that will result in a painful lesson that prevents future strikes.

the timed/automated hammer forging machine continues working regardless of whether or not an operator is at the helm. it will chop as many hands as you feed it.

we are at the point where a lot of value can be leveraged from AI by using it like a hand tool (a hammer), and in doing so one will avoid most of the chopped hands that a fully automatic factory has to offer.

spl757 · Mar 29, 2026, 8:12 AM
What if the hammer has a problem where the handle breaks off randomly? Same thing happens with AI. Sometimes it breaks, randomly, and without any way of predicting it.
beardyw · Mar 29, 2026, 9:17 AM
I think you have missed the analogy.

Another way to look at it is that the operator of the hammer has an immediate feedback loop and will not continue with a broken hammer. AI as it stands rarely has that feedback on the consequences of its decisions, and lacks the ability to react appropriately.

spl757 · Mar 29, 2026, 6:43 AM
Why do we find the unreliability and resulting hallucinations acceptable for AI in production? Can you imagine if Postgres, Apache, Nginx, hell, even the Linux kernel were allowed to be used in production if they occasionally went insane?
CrimsonRain · Mar 29, 2026, 7:08 AM
You can use the same logic for most humans, yet they've been in production since birth :)
spl757 · Mar 29, 2026, 8:03 AM
I don't think that is an apt comparison.
jeffreygoesto · Mar 29, 2026, 7:12 AM
Well, but agents today are pretty much like Fitzcarraldo...
drekipus · Mar 29, 2026, 7:37 AM
No one gets a newborn to configure nginx
vrighter · Mar 29, 2026, 6:47 AM
It is considered stupid by tons of people. And AI's problems are intrinsic and can't really be solved.
spl757 · Mar 29, 2026, 6:56 AM
Tons of people, apparently, aren't enough. I guess I'm just tired of seeing post after post on HN about people complaining that their use of AI in production isn't reliable.

It makes me want to pull out the hair I used to have and scream into the wilderness and eat a Twinkie.

hactually · Mar 29, 2026, 7:25 AM
what do you mean by "run AI"?

as in, providing self hosted models? or running Claude code/Codex? or using it for support? or what?

spl757 · Mar 29, 2026, 8:09 AM
AI is an umbrella term. All AI models can hallucinate, and there has been no solution to this problem. Until it is resolved, AI is, in my opinion, something that only an idiot would run in production. I read about a company whose whole codebase was wiped out because they gave an agent the access to do exactly that.
jaredsohn · Mar 29, 2026, 8:02 AM
This question is too vague to answer.
spl757 · Mar 29, 2026, 8:08 AM
No, it's not. The problem is that all AI hallucinates. Therefore, it is guaranteed to sometimes be confidently wrong. Until the problem of hallucinations is solved, anyone using AI in a production environment is an idiot. That is, of course, my personal opinion, but it seems pretty cut and dried to me.
jaredsohn · Mar 29, 2026, 8:15 AM
Your original post (and, I think, even this comment) was vague in that AI can be used in a lot of different ways in 'production': to generate code, to manage deployments and scripts, or as part of a feature that uses inference.

For example, if you're writing code with AI, you can still review it just like you would if a colleague wrote it. You can write tests (or have the AI do so) to prevent some hallucinations, too.
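To make that concrete, here's a minimal sketch of the "tests as a guardrail" idea. The `slugify` function below is a hypothetical stand-in for something an AI assistant might generate; the point is that plain assertions, written or reviewed by a human, catch confident-but-wrong output before it ships:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical AI-generated helper: turn a title into a URL slug."""
    # Replace every run of non-alphanumeric characters with a single dash.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    # Trim any leading/trailing dashes left by punctuation at the edges.
    return slug.strip("-")

# Human-owned tests pin down the intended behavior, exactly as we would
# for a colleague's code review. A hallucinated implementation fails here,
# not in production.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("") == ""
```

This doesn't stop the model from hallucinating; it just moves the failure to a place where a human sees it first.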

spl757 · Mar 29, 2026, 8:47 AM
Yes, AIs that hallucinate can all be used in different ways. But they can still all hallucinate, so I fail to see how what you are saying mitigates the fundamental, as-yet-unsolved problem of AI hallucinations.

edit to say, what is the point, after all, of artificial intelligence if it's not used to make decisions? That's what it does. But ALL AI HALLUCINATES. Therefore, it's unreliable.
