You might as well work on product marketing for AI, because that's where the client dollars are allocated.
If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle, if you can survive long enough. Just survive and wait for things to shake out.
I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.
What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a way to avoid learning something new while signaling how ethical they are, all while refusing to sacrifice the things they are already accustomed to once they learn more about how those are made. It all seems rather convenient.
The main issue I see talked about is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits clawed back so as to kill any perceived incentive to perform the actions, but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?
Maybe we should be targeting these AI companies if they are unethical: stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to.
Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get.
Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Turning the page is a valid choice though. Sometimes a clean slate is what you need.
> Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
> For example, should we let people die rather than use medical knowledge gained unethically?
Depends if you are doing it 'for their own good' or not.
Also, the ends do not justify the means in the world of morals we are discussing -- that is pragmatism/utilitarianism and belongs to the world of the material, not the ideal.
Finally: who determines what is ethical, beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines them matters more than the context or the particulars.
Lots of room for nuance here, but generally I'd say it's more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself you're going to influence something you have no control over while compromising your values. I want to emphasize the difference between something being an actual moral or ethical dilemma and something being a very deep personal preference or matter of identity/personal branding.
>Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
I agree, it is a real loss and I don't mean for it to be treated lightly but if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.
>Depends if you are doing it 'for their own good' or not.
what do you mean by this?
I am not posing a hypothetical. Modern medicine contains plenty of contributions from unethical sources. Should that information be stripped from medical textbooks, and should doctors who use it to inform their decisions be threatened with losing their licenses, until we find an ethical way to relearn it -- knowing this would likely leave large amounts of otherwise treatable suffering untreated? I am sincerely trying not to make this sound like a loaded question.
Also, this is not saying the means are justified. I want to reiterate that I am explicitly not justifying the means, and that the actors involved in them should be held maximally accountable.
I would think from your stance on the first bullet point you would agree here - as by removing the product from the process you are able to adapt it to your morals.
>Finally - Who determines what is ethical?
I agree that philosophically speaking all ethics are relative, and I was intending to make my point from the perspective of navigating these issues as an individual, not as a collective making rules to enforce on others. So you. You determine what is ethical to you.
However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where most everyone agrees some level of ethics is required. This is usually done through consensus and committees made up of experts in ethics and experts in the relevant field it's being applied to.
AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation, because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines.
You should tell that to all the failed businesses Jobs had or was ousted out of. Hell, Trump hasn't really had a single successful business in his life.
Nothing is final until you draw your last breath.
>Who determines what is ethical? beyond the 'golden rule'?
To be frank, you're probably not the audience being appealed to in this post if you have to suggest "ethics can be relative". This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations who have guidelines if you need some legalese sense of what "ethical" is here.
Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distribute the IP and profits to the public? If this process will be facilitated by government, I don’t have faith they’ll be able to allocate capital well enough to keep the current operation sustainable.
Is moving a bill around to have someone else pay for the hosting an issue? Is hosting an already built model itself an ethical concern?
That's very similar to other unethical processes (for example, child labour), and we see that government is often either too slow to move or simply not interested; that's why people try to influence the market by changing what they buy.
It's similar for AI, some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time signal to the companies that they wouldn't be future customers of the next model.
(I'm not necessarily in the group of people avoiding AI, but I can see their point)
The main difference is that for those devices, the people negatively affected by the operations are far away in another country, and we're already conditioned to accept their exploitation as "that's just how the world works" or "they're better off that way". With AI, the people affected - those whose work was used to train the models, and those who lose jobs because of it - are much closer. For software engineers in particular, these are often colleagues and friends.
As opposed to the ethical concerns hitting home and waking them up as a call to action. Even though it's a little unfortunate that they didn't care until it affected them, we could argue it is still a point of ethics, because they are now consistently applying the ethical standpoint across their life after their change of heart.
Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.
I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.
yes [0]
I believe that they are bringing up a moral argument, which I'm sympathetic to, having quit a job before because my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.
So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.
Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case -- by all means exit asap.
If it's perceived ethical conflict, the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules,' I mean the real one. If that conflicts with what you are doing, then also probably make an exit -- but many do not care, trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.
Have they done more harm than, say, Meta?
Yeah, that's why I took a guess at what they were trying to say.
>Is that supposed to intrinsically represent "immorality"?
What? The fact that they linked to Wikipedia, or specifically Raytheon?
Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.
>Have they done more harm than, say, Meta?
Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.
But if someone wants to make some blanket judgement, I am asking for a little more effort. For example, I wonder if they would think the same as a Ukrainian under the protection of Patriot missiles? (also produced by Raytheon)
https://www.bellingcat.com/news/middle-east/2018/04/27/ameri...
I know the part number of every airplane part I have ever designed by heart, and I would be horrified to see those part numbers in the news as evidence of a mass murder.
So, what is your moral justification for defending one of the world's largest and most despised weapons manufacturers? Are you paid to do it, or is it just pro-bono work?
Most if not all aerospace companies also produce military aircraft, right? Or is your reasoning that if your particular plane doesn't actually fire the bullets, then there's no moral dilemma?
If you think Raytheon is the apex evil corporation you are very mistaken. There is hardly any separation between mega corps and state above a certain level. The same people are in majority control of IBM, Procter & Gamble, Nike, and Boeing, Lockheed Martin, etc, etc.
Stop consuming marketing materials as gospel.
What you see as this or that atrocity on CNN or whatever is produced *propaganda*, made for you, and you are swallowing it blindly without thinking.
Also, the responsibility is of course down to individuals and their actions -- whether you know their names or not. Objects do not go to war on their own.
I've also worked in aerospace and aviation software but that doesn't preclude me from thinking clearly about whether I'm responsible for this or that thing on the news involving planes -- you might want to stop consuming that.
Everyone and everything has a website and an app already. Is the market becoming saturated?
The market is nowhere close to being saturated.
What are examples of these "new applications" that are needed every day? Do consumers really want them? Or are software and other companies just creating them because it benefits those companies?
I've worked (still do!) for engineering services companies. Other businesses pay us to build systems for them to either use in-house or resell downstream. I have to assume that if they're paying for it, they see profit potential.
It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that high -- neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:
- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/mo in bills and amortized hardware. These were a luxury 30 years ago.
- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
It's not clear that it's still possible to function in society today without a cell phone and a cell phone plan. Many things that were possible to do before without one now require it.
> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/mo in bills and amortized hardware. These were a luxury 30 years ago.
Maybe you can replace these with the cell phone + plan.
> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.
> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
Agreed.
> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.
> It's not clear that it's still possible to function in society today without a cell phone
Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
---
As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.
Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.
This idea that increased consumption over the past century has been irrelevant to quality of life is just absurd.
Past 50 years...meh.
I've been alive slightly longer than that. And can't say life today is definitively better than 50 years ago in the USA.
It was the tail end of one income affording a house and groceries for a family. So affording the same things now requires, for many families, almost double the labor.
A lot of new medical treatments, less smoking and drinking, overall longer life spans. But more recently, increases in longevity have plateaued, and an epidemic of obesity has offset a lot of the health care improvements. And the astronomical increase in health care costs means improvements in health care capabilities are not available to a lot of people, at least not without greatly reducing their standard of living elsewhere.
College and university costs have grown exponentially, with no discernible increase in the quality of learning.
Housing prices far outpacing inflation of other goods and services.
Fewer intact families, less in person interactions, and the heroin like addictiveness of screens, have ushered in an epidemic of mental illness that might be unprecedented.
Now AI scaring the shit out of everyone, that no matter how hard you study, how disciplined and responsible you are, there's a good chance you will not be gainfully employed.
I frankly think the quality of life in the world I grew up in is better than the one my kids have today.
But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.
Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)
It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.
(Not disagreeing with the rest of your post though)
>I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
Of course it's changed. The point is that
1. The necessities haven't changed, and they have gotten more expensive. People need healthcare, housing, food, and transport. All of them are up.
2. Modern-day expectations mean the necessities change. We can't walk into a business and shake someone's hand to get a job, so you "need" internet access to get a job. Recruiters also expect a consistent phone number to call, so good luck skipping a phone line (maybe VoIP can get around this).
These are society's fault, as it shifted to pleasing shareholders and outsourcing entire industries (and of course submitted to lobbying), so I don't like this blame being shifted to the individual for daring to consume to survive.
As an aside, every thread I see here has a comment by you lol, that's some good effort but maybe take a break from such strenuous commenting, I say this sincerely as I also used to get into all these back and forths on HN and then realized, much of the time, it's a waste of my own time.
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
Not really a "want" as much as "move where the jobs are". Remote jobs are shaky now, and being in the middle of nowhere only worsens your compensation prospects. Being able to live wherever you please is indeed a luxury. The suburban structure already traded high CoL for increased commute time to work.
I also do think that dismissing aspects of humanity like family, community and sense of purpose to "luxuries" is an extremely dangerous line of thinking.
If you think your life is a waste right now, do something with it. That's actually the number one thing people don't expect from being retired, how bored they get. They say in FIRE communities that all the money and time in the world won't help if you don't actually utilize it.
Edit: There is some research covering work time estimates for different ages.
We can live without a flat screen TV (which has gotten dirt cheap). We can't live without a decent house. Or worse, while we can live in some 500 sq ft shack we can't truly "live" if there's no other public amenities to gather and socialize without nickel-and-diming us.
There's more to consume than 50 years ago, but I don't see that trend continuing. We shifted phone bills to cell phone bills and added internet bills and a myriad of subscriptions. But that's really it: everything was "turn one-time purchases into subscriptions".
I don't see what will fundamentally shift that current consumption for the next 20-30 years. Just more conversion of ownership to renting. In entertainment we're already seeing revolts against this as piracy surges. I don't know how we're going to "consume a lot more" in this case.
Then the hospital takes the house to pay off the rest of the debts. Everybody wins!
But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.
And I believe they (and I) are suggesting that this is just a bad faith spin on the market, if you look at actual AI confidence and sentiment and don't ignore it as "ehh just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor are happy about it.
I don't think using the 2025 US government for a moral compass is helping your case either.
>If AI can make things 1000x more efficient
Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?
Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.
The goal of AI is NOT to be a tool. It's to replace human labor completely.
This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.
To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.
Not really. Or at least, "just be happy" isn't a good response to someone homeless and jobless.
Yes, absolutely.
Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.
Like this very website we're on, proving the parent's point in fact.
I don't know if HN 2025 has been a good example of "in a good way".
It won't be as hard as you think for HN to slip into the very thing it mocks Instagram for being today.
Yup, story of my life. I have in fact had a dozen different times where I chose not to jump off the cliff with my peers. How little I realized back then how rare that quality is.
But you got your answer, feel free to follow the crowd. I already have migrations ready. Again, not my first time.
How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).
I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products." But baby steps.
>AI as a technology is almost impossible to stop.
Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to pretend it is. It can indeed be regulated; we just need to first use our civic channels instead of fighting amongst ourselves.
Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.
It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?
AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.
I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, it's extremely important when it comes to risk considerations.
Machine learning isn't about developing anything intelligent at all; it's about optimizing well-defined problem spaces with algorithms defined by humans. Intelligence is much more self-guided and has almost nothing to do with finding the best approximate solution to a specific problem.
https://en.wikipedia.org/wiki/Machine_learning
You are free to define AI differently but don't be surprised if people don't share your unique definition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
Guess you missed the posts where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.
It's way past "some exceptions" at this point.
You don't see how a botched law case could cost someone their life? Let's not wait until more people die to rein this in.
>Someone could search on Google just the same and ignore their symptoms.
Yes, and it's not uncommon for websites or search engines to be sued. Millennia of laws exist for this exact purpose, so companies can't deflect bad outcomes back onto the people.
If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.
Removing all personal responsibility from this equation isn't going to solve anything.
That argument is rather naive, given that millennia of law are meant to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"
We've regulated public messages for decades, and for good reason. I'm not absolving them this time because they want to hide behind a chatbot. They have blood on their hands.
But you may indeed be vying against your best interests. Hope you can take some time to understand where you lie in life and if your society is really benefiting you.
For instance I work for one of the companies that produces some of the most popular LLMs in use today. And I certainly have a stake in them performing well and being useful.
But your line of reasoning would have us believe that Henry Ford is a mass murderer due to the number of vehicular deaths each year, or that the Wright brothers bear some responsibility for 9/11. They should have foreseen that people would fly their planes into buildings, of course.
If you want to blame someone for LLMs hurting people, we really need to go all the way back to Alan Turing -- without him these people would still be alive!
Okay, cool. Note that I never asked for your opinion and you decided to pop up in this chain as I was talking to someone else. Go about your day or be curious, but don't butt in then pretend 'well I don't care what you say' when you get a response back.
Nothing you said contradicted my main point. So this isn't really a conversation but simply more useless defense. Good day.
Didn't think that would go so cleanly over your head given you're all the way up there on your high horse of morality.
According to the logic of your argument, it's perfectly okay to drive a 360t BelAZ 75710 loaded to its full 450t capacity over that bridge just because it's a truck too.
Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?
Does it tho? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.
[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...
[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...
"Jeez, there are so many cynics! It cracks me up when I hear people call AI underwhelming."
ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking, and it wasn't on the horizon just a few years ago.
AI Transcription of Videos is now a really cool and helpful feature in MS Teams.
Segment Anything literally leapfrogged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings doing a shittier job daily than an LLM does.
2) if you had read the paper you wouldn’t use it as an example here.
A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT usage numbers, and the overall market cap of these companies. Not cherry-picked, misunderstood articles.
I bet a few Pets.com execs were also wondering why people weren't impressed with their website.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?
>Ok change my qualifier from interpretation to description if it helps.
I... really don't think AI is what's wrong with you.
And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?
I could obviously give you examples where LLMs have concrete use cases, but that's beside the larger point.
Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question about gambling or heavy drug use.
Just saying "I defer to their judgment" is a cop-out.
People find a benefit in smoking: a little kick, they feel cool, it’s a break from work, it’s socializing, maybe they feel rebellious.
The point is that people FEEL they benefit. THAT’S the market for many things. Not everything obv, but plenty of things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. That was what I was pushing against: this idea that since 800M people are using GPT, we should all be okay doing AI work because that's what the market is demanding.
When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.
Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.
Because there are plenty of examples of things enjoyed by a lot of people who are, as a whole, bad. And they might not be bad for the individuals who are doing them, they might enjoy them, and find pleasure in them. But that doesn't make them desirable and also doesn't mean we should see them as market opportunities.
Drugs and alcohol are the easy example:
> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)
Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.
–––
> my first instinct is not to decry football as a problem in society.
My first instinct is to decry nothing as a problem, nor to declare it a positive. My first instinct is to give ourselves time to figure out which of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)
If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).
[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
(Maybe no longer needing the skill of thinking would be fine! Maybe what happens then is that people who like thinking can go on thinking, and people who don't like thinking and were already pretty bad at it outsource their thinking to AI systems that do it better, and everything's OK. But don't you think it sounds like the sort of transformation where if someone described it and said "... what could possibly go wrong?" you would interpret that as sarcasm? It doesn't seem like the sort of future where we could confidently expect that it would all be fine.)
What became toxic was, arguably, the way in which it was monetized and never really regulated.
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect, people engaging in consumption through their own agency is in general preferable. You can of course bring counter examples but they are more of caveats against my larger truer point.
And today's "adapt or die" doesn't sound any less fascist than it did in the 1930s.
If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with this speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in authoritative tone for others, and redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
In that sense AI has been the biggest heist that has ever been perpetrated.
Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades. That was filler content long before AI showed up. Same for the people who do the subtitles on soap operas, or who create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s who suddenly saw their market for portraits evaporate because they were being replaced by a few seconds' exposure in front of a camera. A lot of very decent artwork was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
People claiming that backpropagation "steals" your material don't understand math or copyright.
You can hate generative tools all you want -- opinions are free -- but you're fundamentally wrong about the legality or morality at play.
See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc
Regardless of whether you think IP laws should prevent LLMs from training on works under copyright, I hardly think the situation is beyond dispute. Whether copyright itself should even exist is something many dispute.
If they had succeeded in regulating the machines and steering wealth back into the average factory worker's hands, with artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem required to form unions and regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of the people it affects.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most
There. Fixed!
Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.
No thanks, I'm good.
Well, you're just describing what ChatGPT is: one of the fastest-growing user bases in history.
As much as I agree with your statement, the real world doesn't respect that.
By selling a dollar of compute for 90 cents.
We've been here before, it doesn't end like you think it does.
Recently I had to learn SPARQL. What I did was create an MCP server connecting the AI to a graph database with SPARQL support, and then I asked the AI: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.
With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
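To give a flavor of the kind of side-by-side answer I'd ask for: here's a minimal sketch using Python's stdlib sqlite3 for the SQL half (the `people` table and names are made up for illustration), with the rough SPARQL analogue shown as a comment, since running that would need a triple store like the one behind the MCP server:

```python
import sqlite3

# SQL view of the world: rows in a table, filtered with WHERE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ada", 36), ("Grace", 29)])
rows = conn.execute(
    "SELECT name FROM people WHERE age > 30"
).fetchall()
print(rows)  # [('Ada',)]

# SPARQL view of the world: triples in a graph, matched by pattern.
# Against a graph store the same question looks roughly like this
# (illustrative only; `ex:` is a hypothetical prefix):
#
#   SELECT ?name WHERE {
#     ?person ex:name ?name ;
#             ex:age  ?age .
#     FILTER (?age > 30)
#   }
```

The useful part of the interactive back-and-forth is exactly this kind of mapping: table becomes graph, column becomes predicate, WHERE becomes a triple pattern plus FILTER.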
I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.
So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.
And the jury is still out on the effects of social media on children; why else are some countries banning social media for children?
Not everything that comes out of Silicon Valley is automatically good.
Same with Stack Overflow being down today: it seems like not everyone cares anymore. Back then it would have caused a total breakdown, because SO was vital.
Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
You and I both know we're probably headed for revolutionary times.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now, honest"; the latest news about Teslas is that they can't cope with leaves. I certainly *hope* that a decade from now we'll still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
And the LLMs can use the static analysis tools.
I did not say that.
That it can *also* use tools to help, doesn't mean it can *only* get there by using tools.
They can *also* just do a code review themselves.
As in, I cloned a repo of some of my old manually-written code, cd'd into it, ran `claude`, and gave it the prompt "code review" (or something close to that), and it told me a whole bunch of things wrong with it, in natural language, even though I didn't have the relevant static analysis tools for those languages installed.
Well sure, but was the result any better than that of installing and running the tools? If the AI can provide better or at least different (but accurate!) PR feedback from conventional tools, that's interesting. If it's just offering the same thing (which is not really "code review" as I'd define it, even if it is something that code reviewers in some contexts spend some of their time on) through a different interface, that's much less interesting.
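To make the comparison concrete: a conventional static check is mechanical pattern matching over the syntax tree, which you can sketch in a few lines with Python's stdlib `ast` module. This toy checker flags bare `except:` handlers (purely illustrative; real tools like pylint cover vastly more, and an LLM review operates at a different level entirely):

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` handlers in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # [4]
```

The interesting question is whether the LLM's feedback goes beyond what rule-based checks like this can express, e.g. "this function's name no longer matches what it does", which no AST pattern will catch.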
Someone with a name, an employment contract, and accountability is needed to sign off on decisions. Tools can be infinitely smart, but they cannot be responsible, so AI will shift how developers work, not whether they are needed.
If AI becomes powerful enough to generate entire systems, the person supervising and validating those systems is, functionally, a developer — because they must understand the technical details well enough to take responsibility for them.
Titles can shift, but the role doesn't disappear. Someone with deep technical judgment will still be required to translate intent into implementation and to sign off on the risks. You can call that person "developer", "AI engineer", or something else, but the core responsibility remains technical. PMs and QA do not fill that gap.
LLMs can already do that.
What they can't do is be legally responsible, which is a different thing.
> Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
Legal responsibility and technical responsibility are not always the same thing; technical responsibility is absolutely in the domain of PM and QA, legal responsibility ultimately stops with either a certified engineer (which software engineering famously isn't), the C-suite, the public liability insurance company, or the shareholders depending on specifics.
Ownership requires legal personhood, which isn't the same thing as philosophical personhood, which is why corporations themselves can be legal owners.
They are powerful tools but they are not engineers.
And it's not about legal responsibility at all. Developers don't go to jail for mistakes, but they are responsible within the engineering hierarchy. A pilot is not legally liable for Boeing's corporate decisions, and the plane can mostly fly on autopilot, but you still need a pilot in the cockpit.
AI cannot replace the human whose technical judgment is required to supervise, validate, and interpret AI-generated systems.
Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let it do everything itself, though of course that's a bad idea, because it's got all the skill and quality of result you'd expect if I said "endless horde of fresh grads unwilling to say 'no' except on ethical grounds".
Regulation is still very much a thing.
Musk's claims about what Teslas would be able to do weren't limited to just "a few locations"; it was "complete autonomy" and "you'll be able to summon your car from across the country"… by 2018.
And yet, 2025, leaves: https://news.ycombinator.com/item?id=46095867
A tech employee posted that he looked for a job for six months, found none, and joined a fast food shop flipping burgers.
That turned tech workers switching to "flipping burgers" into a meme.
I feel like there are a lot of people in school or recently graduated, though, who had FAANG dreams and never considered an alternative. This is going to be very difficult for them. I now feel, especially as tech has gone truly borderless with remote work, that this downturn is way worse than the .com bust. It has just dragged on for years now, with no real end in sight.
Covid overhiring + AI adoption = the most massive layoffs we've seen in decades.
There are still plenty of tech jobs these days, just less than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s are not an exaggeration.
History always repeats itself in the tech industry. The hype cycle for LLMs will probably peak within the next few years. (LLMs are legitimately useful for many things but some of the company valuations and employee compensation packages are totally irrational.)
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.
Instead of running a Google query or searching Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.
But the killer feature of an LLM is that it can synthesize something based on my exact ask, and does a great job of creating a PoC to prove something, and it's cheap from time investment point of view.
And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
Stack Overflow’s moderation is overbearing and all, but that’s nowhere near the same level as Experts Exchange’s bait-and-switch.
SO was never that bad, even with all their moderation policies, they had no paywalls.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road", so be it, but they should be prepared to accept the consequences of worse revenues.
Without knowing the future I cannot answer.
That collapsed during the covid lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects, even when mine (that they had already approved) would save them 400K a year, they did not care! Top down the word came to cut everyone -- so they did.
This trend is very much a top down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave and unlikely to respond to whatever pressures they like to apply.
Since then it's been more of the same as far as consulting.
I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.
An unexpected benefit of all the pain was I like making things again... but I am using claude code and gemini. Amazing tools if you have experience already and you know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.
You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. limit on savings vs. unlimited revenue growth) there's the psychological component of teams and management. I saw this in the energy sector where our company had two products: selling to the drilling side was focused on helping get more oil & gas; selling to the remediation side was fulfill their obligations as cheaply as possible. IT / dev at a non-software company is almost always a cost center.
The problem is that many places don't see the cost portions of revenue centers as investment, but still costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz(1) outsourcing their website reservation system to Accenture to comically bad results. The website/app is how people reserve cars - the most important part of the revenue generating system.
Best advice I got in school is, at least early in your career, to work in the main line of business for your company. So if you are in marketing, work for a marketing firm; if you're an accountant, work for an accounting firm; a video game designer, work for a video game developer.
Later you can have other roles but you make your mark doing the thing that company really depends on.
Related advice I got - work in the head office for your company if possible. Definitely turned out to be a good call in my case as the satellite offices closed one by one over time.
The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"
So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.
In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. Ex. Instead of having your website say "save X dollars on Y" try "earn X more dollars using Y". It's incredible how much impact simple messaging can have on your conversion rates.
This extends beyond just revenue. Focusing on revenue centers instead of cost centers is a great career advice as well.
Totally agree. This is a big reason I went into solutions consulting.
In that particular case I mentioned it was a massive risk management compliance solution which they had to have in place, but they were getting bled dry by the existing vendor, due to several architectural and implementation mistakes they had made way back before I ever got involved, that they were sort of stuck with.
I had a plan to unstuck them at 1/5 the annual operating cost and better performance. Presented it to executives, even Amazon who would have been the infr vendor, to rave reviews.
We had a verbal contract and I was waiting for paperwork to sign... and then Feb 2020... and then crickets.
A little earlier, very few suspected that our mobile phones were not only listening to our conversations and training some AI model, but that their gyroscopes were also being used to profile our daily routines (keeping the phone charging near our pillow, looking at it first thing in the morning).
Now that we are asked to use AI to write our code, I am quite anxious about what part of our lives we are selling now. Perhaps I am no longer their prime focus (50+), but who knows.
Going with the flow seems like bad advice. Going analog, as in iRobot, seems like the sanest thing.
I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.
It's not easy transferring miniDV film to my computer, but the standard resolution has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around to.
I'm in the same age range and couldn't fathom becoming a developer in the early aughts and being in the midst of a gold rush for developer talent to suddenly seeing the entire tech world contract almost over night.
Strange tides we're living in right now.
Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...
Exactly right about Github. Facebook is the same for training on photos and social relationships. etc etc
They needed to generate a large body of data to train our future robot overlords to enslave us.
We the 'experienced' are definitely not their target -- too much independence of thought.
To your point, I use an old flip phone and VoIP even though I have written iOS and Android apps. My home has no WiFi. I do not use Bluetooth. There are no cameras enabled on any device (except an actual camera).
LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then for core logic, especially learning or meta-programming patterns, I need to jump in.
Breaking tasks down to bite-sized pieces, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision, and not being afraid to throw whole days away (not too often tho!)
For reference, I have built stuff that goes way beyond a CRUD app with these tools in 1/10th of the time it previously took me, or less -- the key, though, is that I already knew how to do it and how to validate LLM outputs. I knew exactly what I wanted a priori.
Code generation has technically always 'replaced' junior devs and has been around for ages; the results of the generation are just a lot better now. In the past, doing code generation regularly was a mixed bag of benefits and hassles; now it works much better and the cost is much lower.
I started my career as a developer, and the main reasons I became a solutions/systems guy were money and that I hated the tedious boilerplate phase of all software projects over a certain scale. I never stopped coding, because I love it -- just not for large, soul-destroying enterprise software projects.
Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.
Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?
"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.
But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.
Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't exactly a model of best UX practices.
Other times, I very much care for things to be done in a very specific way. Sometimes due to regulatory constraints, others because of visual/code consistency, or some other reasons. In those cases, getting the AI to produce what I need specifically feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.
Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.
Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?
Sorry to anyone whose feelings this hurts.
Not everyone cares to be precise with their semantics.
No train, no gain.
I will say that being social and being in a scene at the right time helps a lot -- timing is indeed almost everything.
I concur with that and that's what I tell every single junior/young dev. that asks for advice: get out there and get noticed!
People who prefer to lead more private lives, or are more reserved in general, have far fewer opportunities coming their way; they're forced to take the hard path.
This is wildly condescending. Holy.
Also wtf did I just read. Op said he uses his network to find work. And you go on a rant about how you're rising and grinding to get that bread, and everything you have ever earned completely comes from you, no help from others? Jesus Christ dude, chill out.
>I'm not for/or against a particular style
... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)
>Ex-colleagues reach out to me and ask me to work with them
Never happened to me, that's the point I'm making.
1. I wish work just landed at my feet.
2. As that never happened and most likely was never going to happen, I had to learn another set of skills to overcome that.
3. That made me a much more resilient individual.
(4. This is not meant as criticism to @arthurfirst's style. I wish clients just called me and I didn't have to save all that money/time I spend taking care of that)
... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)
Because surrounding your extremely condescending take with "just my opinion"-style hedging still results in an extremely condescending take.
You only have to follow the market if you want to continue to stay relevant.
Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.
So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
Even in the linked post the author isn't complaining that it's not fair or whatever, they're simply stating that they are losing money as a result of their moral choice. I don't think they're deluded about the cause and effect.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
It doesn't have to be. Plenty of people are fulfilled by their jobs and make good money doing them.
We've always tolerated a certain portion of society who finds the situation unacceptable, but don't you suspect that things will change if that portion is most of us?
Maybe we're not there yet, idk, but the article is about the unease vs the data, and I think the unease comes from the awareness that that's where we're headed.
If you're only raised in a grifter's society, sure. Money is to be conquered and extracted.
But we can definitely shift back to a society where money is used to help keep the boat afloat so everyone can pursue their own interests, and not a losing game of Monopoly where the rich get richer.
Past that, simply look at the small actions in your life. These build and define your overall character. It's hard to vote for collective bargaining if you have trouble complimenting your family at the table. You need to appreciate and feel part of a community to really come together.
This all sounds like mumbo jumbo from the outside, but just take some time to reflect a bit. People don't wake up one day and simply think "you know, this really is all the immigrants' fault". That's the result of months or years of mindset.
> So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.
Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.
We shouldn't have to choose between principles and food on the table.
I am increasingly convinced that these are the only true kind of ethical decision. Painless/straightforward ethical decisions that you make every day - they probably don't even register on your radar. But a tough tradeoff does.
It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.
taking a moral stance isn't inherently ideological
You can also just outlive the irrationality. If we could stop beating around the bush and admit we're in a recession, that would explain a lot of things. You just gotta bear the storm.
It's way too late to jump on the AI train anyway. Maybe one more year, but I'd be surprised if that bubble doesn't pop by the end of 2027.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm going to share my photography, and not sharing it is on the table as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it, and the system is demanding to betray ourselves for its own benefit.
Life is more nuanced than that.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
(But only all of us simultaneously, otherwise won't count! ;))))
The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.
Not that innovative, but hey. If it lets someone pretend it is and fixes the problem, I'm all for it.
The people pushing AI aren't listening to the true demand for AI. Plus, it's not making its money back. That's why this market is broken and not likely to last.
Maybe it will. I'm still waiting for the utility. Right now it's just a big hype bubble, so wake me when it pops.
Following the market is also not cravenly amoral, AI or not.
What stance against AI? Image generation is not the same as code generation.
There are so many open source projects out there; it's a huge difference from taking all the images.
AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data from online, or are only big companies not allowed to?
A studio taking on temporary projects isn't investing in AI; they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort into general disdain for the neoliberal free market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law (in this case, copyright law, for example). AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO, over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.
A construction company would still be justified in saying no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. The bakery example feels even more relevant, seeing as a bakery refusing to bake a cake based on the owner's religious beliefs ended up in the US Supreme Court [1].
I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.
[1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.
That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".
Fucking this. What I tend to see is a petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
I'm all for it once we're all backed into a corner and forced to refuse, though.
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
AI is another set of tooling. It can be used well or not, but arguing the morality of a type of tooling (e.g. drills) vs. a specific company (e.g. Ryobi) seems an odd take to me.
I don't want to openly write about the financial side of things here but let's just say I don't have enough money to comfortably retire or stop working but course sales over the last 2-3 years have gotten to not even 5% of what it was in 2015-2021.
It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".
Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just that traffic has dropped to 1/20th of what it used to be. Traffic dictates sales, and that's how I think I arrived in this situation.
It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.
It skews back-end stats too. For example if someone buys a course, I hope they take it in full so they feel happy and fulfilled but in reality a good portion never start. It's like Steam games. Some people just like collecting digital goods knowing they exist and that gives comfort.
Massive course platforms are still thriving it seems so the market is there but it is way more saturated than 10 years ago. My first Docker course in 2015 was like maybe 1 out of 5 courses out there, but now there's 5,000 courses and Docker's documentation has gotten a lot better over time.
I haven't figured out how to make things work, I just know I love tech, solving real world problems, documenting my journey (either through blog posts or courses) and traveling. It would be amazing to be able to travel the world and make courses. Back when I started 10 years ago I didn't realize I like traveling so much so I squandered that extra time.
It sounds like you have a solid product, but you need to update your marketing channels.
Now the same thing happens, but there's 20x less sales per month.
I've posted almost 400 free videos on YouTube as well over the years, usually these videos go along with the blog post.
A few years back I also started a podcast and did 100 weekly episodes over 2 years. It didn't move the needle on course sales, even though it was on a topic quite related to app development and deployment, which partially aligns with my courses. Most episodes barely got ~100 listens, even though the show was rated 4.9 out of 5 on major podcast platforms and people emailed me saying it was their favorite show, that it had helped them so much, and that they hoped I'd never stop -- but the listener count never grew. I didn't have sponsors or ads, and I stopped the show because it took 1 full day a week to schedule + record + edit + publish a ~1-2 hour episode. It was super fun and I really enjoyed it, but it was another "invest 100 days, make $0" thing, which simply isn't sustainable.
> Now the same thing happens, but there's 20x less sales per month.
You’re a victim of the AI search results. There are lots of those.
I recommend something like social media ads where your target audience hangs out (maybe LinkedIn, possibly Google).
And yeah, I agree with the other responder that AI plus Google's own enshittification of search may have cost your site traffic.
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
You can either hope that this shift is not happening, or that you are one of the people who survive in your niche.
But the industry / world is shifting; you should start shifting with it.
I would call that being innovative, ahead etc.
Google has a ton of code internally.
And millions of people happily thumb up or down for their RL feedback.
The industry is still shifting. I use LLMs instead of StackOverflow.
You can be as dismissive as you want, but that doesn't change the fact that millions of people use AI tools every single day, and more people are starting to use AI-based tools.
The industry overall is therefore shifting money, goals, etc. in the direction of AI.
And the author has an issue because of that.
Feel free to tell me what your industry is so we can continue our discussion.
It’s tech, but not everyone in tech is a coder. The industry is bigger than big tech and SaaS.
I helped a colleague migrate to Germany; your job can easily be optimized away by an LLM.
The biggest issue he had was communication/translations. The second was finding the right information.
Try Perplexity and it will find good resources. Take pictures and use Google Lens for translation. Talk to ChatGPT to get explanations of how things work in Germany.
I also have no clue what you mean by 'connections and infrastructure'. You know the number of some Ausländerbehördenmitarbeiter?
In my eyes, that's when the grifters get out and innovators can actually create value.
That image generation is already disrupting industries and jobs today.
My mother (non-technical!) had a call with an AI support agent just a few months ago.
AI is also not a fad. LLMs are the best interface we've had so far. They will stay.
AI Coding helps non developers to develop a lot faster and better (think researchers who sometimes need a little bit of python or 'code').
I'm using AI to summarize meetings I missed, and I asked ChatGPT to summarize error logs (successfully).
AlphaFold solved protein folding.
Nearly all robots you see today are running on Nvidia's Isaac and GR00T.
The progress has not stopped at all. When Nano Banana came out, Seedream 4 came out a week later. Now we have Nano Banana Pro and Gemini 3.
After Gemini 3 came out, Opus 4.1 came out, and now DeepSeek v3.2. All of them got better, faster, and/or cheaper.
I'm not convinced. I've heard all the justifications and how it saved someone's marriage (too bad it ended that other relationship).
The numbers don't line up. The money from consumers isn't there, and the money isn't actually there in B2B either. It's not going to last. Regulations will catch up and strain things further once the US isn't in a grifter's administration and people get tired of not having jobs. It's a huge pincer attack on four fronts.
After the crash, when people need to put their money where their mouth is, let's see how much people truly value turning their brains off and consuming slop. There will be cool things from it, but not in this current economy.
Until then, the bubble will burst. This isn't the '10s anymore, and the US government doesn't have the money to bail out corporate America this time.
USA vs. China. If USA stops research on AI right now, China will leapfrog even further.
And companies like Google and Microsoft can easily afford the current AI spend.
If all of this "AI stuff" is a "no" for you, then I think you've just signed yourself out of working in most industries, to some important degree, going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of the new capital is turning up. What's the idea? You are a service provider, not a market maker. If you refuse to serve the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already, the industry where it is arguably the most "present" (forced in) is SWE, and it's proving to be quite disappointing... Where I work, the more senior you are, the less AI you use.
The big issue is AI isn't profitable. Streaming services are actually useful, but we'll see how that lasts.
Pretty sure the market doesn't want more AI slop.
Meanwhile, actual consumer sentiment is at all time lows for AI.
There are also absolutely very tasteful products that add value using LLMs and other recent advancements.
Both can exist at the same time.
They can. But I'm not digging in a swamp to find a gold nugget. Let me know when the swamp is drained. Hopefully the nugget isn't drained with it.
Try to focus on the argument next time instead of the person, especially if you're leaving so many replies you can't keep up with them. "I'm sorry for you" isn't convincing anyone and only weakens your point.
Demand for AI anything is incredibly high right now. AI providers are constantly bouncing off of capacity limits. AI apps in app stores are pulling incredible download numbers.
Data centres and the stock market are not humans. They are powered by humans whose incentives do not align with the betterment of humanity. Be careful looking down the green abyss.
Okay. Lemme know when they need to pay for it. A free app for a trillion dollar investment isn't the flex Altman wants to make it seem.
Are people that are not you making it profitable? That's the obvious issue.
The only thing that's going to change is the quality of the slop will get better by the year.
They sure aren't paying for it. It's telling how, on a business topic, we're not talking about the fact that market demand doesn't match the investment put into it.
"High quality AI slop" is a contradiction in terms. The relevant definitions[1] are "food waste (such as garbage) fed to animals", "a product of little or no value."
By definition, the best slop is only a little terrible.
I started TextQuery[1] with the same moralistic standing. Not with respect to using AI or not, but against the rot most of the software industry suffers from: placing more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco's thin documentation. However, I failed to make it a sustainable business. My motivation ran out, and what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.
I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
"When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through." - Steve Jobs
Didn't take long for people to abandon their principles, huh?
I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to a project involving AI equals a contribution to some imagined evil (oddly, without explicitly naming what they envision, which makes it harder to respond to). I have my personal qualms, but I run those through my internal ethics to see if there is conflict. Unless the author predicts a 'prime intellect' type of catastrophe, I think the note is either shifting blame or just justifying bad outcomes with a moralistic "I did the right thing" while not explaining the assumptions in place.
Do you "run them through" actual ethics, too?
With that said, mister ants in the pants, what does actual mean to you in this particular instance?
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
basically the ethics of anyone who does this
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
Bravo! Encore! Teach us, wise master!
If you genuinely are unaware of the issues, it's a very easy topic to research. Heck, just put "AI" into HN and half the articles will cover some part of the topic.
Should we try to guess which of the many objections belong to the author?
It's better to ask than to assume.
I didn't see that commenter truly asking. I can ask them, but I know the topic will have dried out by the time I get an answer.
"To run your business with your personal romance of how things should be versus how they are is literally the great vulnerability of business."
What about a functioning market is immoral?
Surely you would agree that making landmines simply because there are people who want to buy them would be an immoral choice.
your argument: but what about this hypothetical?
You do you, but let's not act in bad faith and dismiss others' dispositions.
I had hope during the NFT days, but I guess many here always wanted a bot that told them they were smart and correct. Alas.
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
I'm mostly blaming the rich.
>everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world.
Yeah we kind of are. So many chances to learn and push to reverse policy. Yet look how we voted.
>sometimes you just have to do what you have to do, and one of those things is pay your taxes.
If it's between being homeless and joining ICE... I'd rather inflict the pain on myself than on others. There are stances I will take, even if AI isn't the "line" for me personally. (But I'm not gonna optimize my portfolio towards it either.)
I mean, the Iraq War polled very well. Bush even won an election because of it, which allowed it to continue. Insofar as they have a semblance of democracy, yes, Americans are responsible. (And if their government is pathological, they're responsible for not stopping it.)
>But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
Two things. One, you don't have to pay taxes if you're rich. Two, tax protests are definitely a thing. You actually don't have to pay them. If enough people coordinated this, maybe we'd get somewhere.
Having the empathy to reject an endemic but poisonous trend is the opposite of narcissistic.
And we're making big assumptions about the author's finances. A bad year isn't literally a fatal year, depending on the business and its structure.
Models that are trained only on public domain material. For value add usage, not simply marketing or gamification gimmicks...
[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested
None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma 3 dataset, which is:
1. Common crawl
2. Github
3. Wikipedia, Wikibooks
4. Reddit (pre-2023)
5. Semantic Scholar
6. Project Gutenberg
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and the author is not saying they're a victim due to no doing of their own by disclosing it plainly. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: majority of American GDP is AI this or that, and majority of it has no real substance behind it.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more immoral to fully pay your federal taxes if you live in the USA, for example, considering a good chunk is ultimately used for war, the CIA, the NSA, etc, but nobody judges an average US resident for paying them.
They very well might be. Websites can be made to promote a variety of activity.
>This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them
That's not what "marketing" is. This is OpenAI coming to your firm and saying "I need you to make a poster saying AI is the best thing since Jesus Christ". That very much will reflect on you and the industry at large as you create something you don't believe in.
This is disingenuous and inflammatory, and a Manichaean attitude I very much see in rich western nations for some reason. I wrote about this in another comment: it sets people off on a moral crusade that is always against the players but rarely against the system. I wish more people in these countries would channel this discomfort into general disdain for the neoliberal free market of which we're all victims, not just specifically AI as one of many examples.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.
I fail to see how. Why would I not hold some personal responsibility for what I built?
It's actually pretty anti-western to have that mindset, since that's usually something that pops up in collectivist societies.
>it sets people off on a moral crusade that is always against the players but rarely against the system.
If you contribute to the system you are part of the system. You may not be "the problem" but you don't get guilt absolved for fanning the flames of a fire you didn't start.
I'm not suggesting any punishment for enablers. But guilt is inevitable in some people over this, especially those proud of their work.
>I wish more people in these countries would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims,
I can and do.
>The problem isn't AI. The problem is a system where new technology means millions fearing poverty.
Sure. Doesn't mean AI isn't also a problem. We're not single-threaded beings. We can criticize the symptoms and attack the source.
>over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment
I don't disagree. But the topic at hand is about AI, and talking about politics here is the only thing that gets nastier. I have other forums to cover that (since HN loves to flag politics here) and other IRL outlets to contribute to the community here.
Doesn't mean I also can't chastise how utterly sold out this community can be on AI.
Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
> Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: the US actually does have public health care, but eligibility is limited. Over a quarter of the US population is on Medicaid and another 20% are on Medicare (the program for older people). Private self-pay insurance is also subsidized on a sliding scale based on income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.
They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of UK, France, and Germany.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
https://finnish.andrew-quinn.me/
... But, no, it's still a very forbidding language.
However, salaries are atrocious and local jobs aren't really available to non-Mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you wanna build, there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
You drive such a hard bargain.
>there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
Tempting, but I think the last thing I need for what little work I can grab is to create a 14 hour time zone gap.
Applied to quite a few EU jobs via LinkedIn, but nothing came of it. I suspect they wanted people already in EU countries.
Both of us are US citizens, but we don't want to retire in the US; it seems to be becoming a s*hole, especially around healthcare.
I'm not sure the claim "we can use good devs" is true from the perspective of European corporations. But would love to learn otherwise?
And of course: where in Europe?
Well, which is it?
>Not to mention the higher taxes for this privilege.
Rampant tax cuts are how we got here to begin with. I don't think the EU wants someone with this mentality anyway.
Market has changed -> we disagree -> we still disagree -> business is bad.
It is indeed hard to swim against the current. People have different principles and I respect that; I just rarely have this much difficulty understanding them, or see such a clear impact on the bottom line.
Ok. They are not talking about AI broadly, but LLMs, which require insane amounts of energy and benefit from the unpaid labor of others.
Deciding not to enable a technology that is proving to be destructive except for the very few who benefit from it, is a fine stance to take.
I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.
I wish I had Walmart in my area, the grocery stores here suck.
People who believe it's a net detriment don't want to be a part of enabling that, even at cost to themselves, while those who think it's a net benefit or at least neutral, don't have a problem with it.
If your goal is to not contribute to community and leave when it dries up, sure. Walmart is great short term relief.
The industry joke is: What do you call AI that works? Machine Learning.
I started my career in AI, and it certainly didn't mean LLMs then. Some people were doing AI decades ago.
I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?
The immoral thing about gen-AI is how it's trained. Regardless of source code, images or audio; the disregard of licenses and considering everything fair-use and ingesting them is the most immoral part.
Then there comes the environmental cost, and how it's downplayed to be able to pump the hype.
I'm not worried about the change AI will bring, but the process of going there is highly immoral, esp. when things are licensed to prohibit that kind of use.
When AI industry says "we'll be dead if we obey the copyright and licenses", you know something is wrong. Maybe the whole industry shouldn't build a business model of grabbing whatever they can and running with it.
Because of these zealots, I'm not sharing my photos anymore and am considering not sharing the code I write either. Because I share these for the users, with appropriate licenses. Not for other developers or AI companies to fork, close, and do whatever they like with them.
I'm a strong believer in copyleft. I only share my code under GNU GPLv3+, no exceptions.
However, this doesn't allow AI companies to scrape it, remix it and sell it under access. This is what I'm against.
If scraping, closing, and selling GPLv3 or strongly copylefted material is fair use, then there's no use in having copyleft if it can't protect what's intended to be open.
Protecting copyleft requires protecting copyright, because copyleft is built upon the copyright mechanism itself.
While I'm not a fan of a big media company monopolizing something for a century, we need this framework to keep things open, as well. Copyright should be reformed, not abolished.
Yes, copyleft exists as a response to copyright, but it builds something completely different with respect to what copyright promises. While copyright protects creators, copyleft protects users. This part is generally widely misunderstood.
Deregulation to prevent regulatory capture is not a mechanism that works when there's money and a significant power imbalance. Media companies can always put barriers to the consumption of their products through contracts and other mechanisms. Signing a contract not to copy the thing you get to see can get out of hand in very grim ways. Consumers are very weak compared to the companies providing the content, because of the desirability of the content alone, even if you ignore all the monetary imbalance.
Moreover, copyleft doesn't only prevent that kind of exploitation; it actively protects the user by making it impossible to close the thing you get. Copyleft protects all the users of the thing in question. When the issue is viewed in the context of the software, it not only allows the code to propagate indefinitely but also allows it to be properly preserved for the long run.
Leaving things free-for-all again not only fails to protect the user but also profits the bigger companies, since they have the power to hoard, remix, refine, and sell this work, which they get for free. So it only carries water for the big companies. Moreover, even permissive licenses depend on the notion of copyright to attribute the artifact to its original creator.
Otherwise, even permissively licensed artifacts can be embedded in the works of larger companies and not credited, allowing companies to slightly derive the things they got for free and sell them to consumers on their own terms, without any guardrails.
So abolishing copyright not only will further un-democratize things, but it'll make crediting the creators of the building blocks the companies use to erect their empires impossible.
This is why I will always share my work under strong copyleft or non-commercial/share-alike (and no-derivatives, where it makes sense) licenses.
In short, I'm terribly sorry to tell you that you didn't convince me about abolishing copyright at all. The only thing you achieved was to make me think further about my stance, find the mental gaps in my train of thought, and fill them appropriately with more support for copyleft. Also, it looks like my decision not to share my photos anymore is getting more concrete.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.
No, but the companies have agency. LLMs lie, and they only get fixed when companies are sued. Close enough.
Not going to go back and forth on this, as you'll inevitably try to nitpick with "oh, but the chatbot didn't say to do that".
It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.
2. Open source models exist.
The only thing worse than intellectual property is a special exception for people rich enough to use it.
I have hope for open source models, I use them.
I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.
You can be in a swamp and say "but my corner is clean". This is the exact opposite of the rotten-barrel metaphor: you're trying to claim your sole apple is somehow not rotten compared to the fermenting barrel it came from.
Also, we clearly aren't prioritizing cancer research if Altman has shifted to producing slop videos. That's why sentiment is decreasing.
>Make it make sense.
I can't explain to one who doesn't want to understand.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.
Not everyone values that, but anyone who will say "just use an LLM instead" was never his audience to begin with.
But I digress.
Anyways, I can't speak for the content itself, but I can definitely tell from the trailer and description of the JavaScript course that they understand the industry and emphasize that this is aimed at those wanting a deep dive into the heart of the web, not just another "tutorial on how to use the newest framework". Very few tech courses really feel like "low level" fundamentals these days.
On the other topic, I do not agree, as you have just proven: you explain very well why you appreciate Picasso. You thought I (or anybody) needed to be told why I/they should appreciate Picasso/OP. I don't care about that. But I'm very much interested in other people's reasoning behind their appreciation, especially when I consider something - like HTML and CSS - to be neither very complicated, nor complex. On the other hand: that's what we love about Lumpito: simplicity. Right?
Nobody doubts the former is better, and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.
Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.
The only perk artisans enjoy then is uniqueness of the product as opposed to one-size fits all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine made.
This one. Inferred from context about this individual’s high quality above LLMs.
And no, I don't think people are seeking out AI website slop the way they do textiles. Standing out is a good way to get your product out there, compared to being yet another bloated website that takes 10 seconds to load, with autoplay video and generic landing text.
I'd liken it to Persona 5 in the gaming market. No one is playing a game for its UI. But a bespoke UI will make the game all the more memorable, and someone taking the time for that probably put care into the rest of the game as well (which you see in its gameplay, music, characters, and overall presentation).
And that market may be a good chunk of existing contract work.
Having the most well-tested backend and a beautiful frontend that works across all browsers and devices, and not just the main 3 browsers your customers use, isn't paying the bills.
Fact is, there are just fewer businesses forming, so there's less demand for landing sites or anything else. I don't see this as a sign that "good websites don't matter".
It's like equating being a craftsman with being someone who makes a very particular kind of shoe. If the market for that kind of shoe dries up, what then?
It's why I'm still constantly looking at and practicing linear algebra as an aspiring "graphics programmer". I'm no mathematician, but I should be able to breathe matrix operations as a graphics programmer. Someone who dismisses their role as "just optimizing GPU stacks" isn't approaching the problem as a craftsman.
And I'll just say that's also a valid approach and even an optimal one for career. But courses like that aren't tailored towards people who want to focus on "optimizing value" to companies.
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
Or it could mean that most websites are trash.
>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
Yeah, sorry. I will praise plenty of Amazon's scale, but not their deception, psychological manipulation, and engagement traps. That goes squarely in "trash website".
I put up with a lot, but the price jumps were finally the trigger I needed to cancel Prime this year. I don't miss it.
Which can easily be garbage. It only has to be not garbage enough to not cause enough customers to shift enough spending elsewhere.
I suspect it's the former.
In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.
I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.
It could still be trash, but they are setting all the right flags.
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
I have a family member that produces training courses for salespeople; she's doing fantastic.
This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.
Finally, if you're against AI, you might have missed how good a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck on video rails, is a huge time-saver.
I think courses like these are peak "DIY". These aren't courses teaching you to RTFM. It's teaching you how to think deeper and find the edge cases and develop philosophy. That's knowledge worth its weight in gold. Unlike React tutorial #32456 this is showing us how things really work "under the hood".
I'd happily pay for that. If I could.
>don't sell to musicians. They don't have any money
But programmers traditionally do have money?
>if you're against AI, you might have missed how good of a learning tool LLMs can be.
I don't think someone putting their business on the line over their stance needs yet another HN screed on why AI is actually good. Pretty sure they've thought deeply about this.
They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?
I'm sure the author has thought much longer on this than I, but I get the vibes here of "2025 was uniquely bad for reasons in and outside of AI". Not "2025 was the beginning of the end for my business as a whole".
I don't think demand for proper engineering is going away; people simply have less to spend. And investors have less to invest, or are all-in gambling on AI. It's a situation that will change for reasons outside the business itself.
We said that WordPress would kill front-end work, but years later people still employ developers to fix WordPress messes.
The same thing will happen with AI-generated websites.
I barely like fixing human code. I can't think of a worse job than fixing garbage in, garbage out in order to prop up billionaires pretending they don't need humans anymore. If that's the long term future then it's time for a career shift.
I'm still much more optimistic about prospects, fortunately.
Probably even moreso. I've seen the shit these things put out, it's unsustainable garbage. At least Wordpress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
I'd much rather see these kinds of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push based traffic. You get assigned work and you have x time to solve all this. If you don't, we'll have some extremely tedious reflection meeting that leads to nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it 3 times because customer B is more important
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.
People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.
I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.
The market is literally telling them what it wants and potential customers are asking them for work but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis"
This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
I wonder what their plan was before LLMs seemed promising?
These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.
Careful now, if they get their way, they’ll be both the market and the government.
I fundamentally disagree with this stance. Labeling a whole category of technologies because of some perceived immorality that exists within the process of training, regardless of how, seems irrational.
And it continued growing nonstop all the way through ~early Sep 2024, and has been slowing down ever since, by now coming to an almost complete stop. To the point that I even fired all my sales staff: they were treading water with no calls, let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return - custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it is, because it never worked; sooner or later people had to realise it. I may check again in 2-3 years to see how the market is doing, but I'm not at all hopeful.
Switched into miltech where demand is real.
"Moral" is mentioned 91 times at last count.
Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?
EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my question - morals is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talking about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?
From the client's perspective, it's their job to set the principles (or lack thereof) and your job to follow their instructions.
That doesn't mean it's the wrong thing to do though. Ethics are important, but recognise that it may just be for the sake of your "soul".
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
Par for the course
I hope things turn around for them it seems like they do good work
Can someone explain this?
* The environmental cost of inference in aggregate and training in specific is non-negligible
* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shoehorn AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
My experience with large companies (especially American Tech) is that they always try and deliver the product as cheap as possible, are usually evil and never cared about social impacts. And HN has been steadily complaining about the lowering of quality of search results for at least a decade.
I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
Please note, that there are some accounts downvoting any comment talking about downvoting by principle.
Should people not look for reasons to be concerned?
see the diversity of views.
I hope things with AI will settle soon, and there will be applications that actually make sense, and some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with AI.
All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.
No, that's the most important situation to consider the moral thing. My slightly younger peers years back were telling everyone to eat tide pods. That's a pretty important time to say "no, that's a really stupid idea", even if you don't get internet clout.
I'd hope the tech community of all people would know what it's like to resist peer pressure. But alas.
>But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I don't see that at all in the article. Quite the opposite here, actually. I just see a person being transparent about their business and morals, and commenters here using it to try and say "yea but I like AI". Nothing here attacked y'all for liking it. The author simply has his own lines.
What am I missing?
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
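To make point 2 above concrete, here's a minimal sketch of what "structuring unstructured data" looks like in practice. The `call_llm` function is a hypothetical stand-in for any chat-completion API, stubbed with a hard-coded response so the example runs offline; the prompt wording and record fields are illustrative, not any specific vendor's API.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    # A real model would read the free-form text in the prompt and
    # return structured JSON; here we hard-code a plausible response.
    return '{"name": "Ada Lovelace", "year": 1843, "topic": "Analytical Engine"}'

def extract_record(free_text: str) -> dict:
    # Ask the model to turn prose into machine-readable fields.
    prompt = (
        "Extract name, year, and topic from the text below. "
        "Reply with JSON only.\n\n" + free_text
    )
    return json.loads(call_llm(prompt))

record = extract_record(
    "In 1843 Ada Lovelace published her notes on the Analytical Engine."
)
print(record["name"])  # a structured field pulled from unstructured prose
```

The point of the pattern is that the only "parser" you write is `json.loads`; the model does the messy extraction, which is why this use case holds up even for people skeptical of the broader hype.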
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But that is exactly what the "is-ought problem" manifests, no? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally-defined goals. To you it's scary; to others it is the way it should be.
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and JavaScript, beautiful designs, and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.
The sad part is I'll probably still be accused of using AI. But I'll still do my best.
We are still nowhere near getting climate change under control. AI is adding fuel to the fire.
You will continue to lose business if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years who took stands on various things, like refusing to use JavaScript frameworks as they became popular, and the end result was less work and eventually being pushed out of the industry.
That's horrifying.
Sounds like a self-inflicted wound. No kids, I assume?
Two supposed fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
So why is it that those who survive are not the strongest preying on the weak, but rather the "fittest"?
Next year's developments in AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
User education, for example, can be done in ways that don't even feel like gen AI and can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to the user's use case.
If you won't even lean into things like this you're just leaving yourself behind.
Okay. When the hype cycle dies we can re-evaluate. Stances aren't set in stone.
>If you won't even lean into things like this
I'm sure Andy knows what kinds of businesses his clients are in and used that to inform his acceptance or rejection of projects. The article mentions web marketing, so it doesn't seem like much edtech crossed his path here.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you'll find out who's wrong, but the market can stay irrational for much longer than you can stay in business.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...