The grand, boneheaded naivete of failing to understand that middlemen are an emergent and intrinsic property of free markets in practice.
Back before the iPhone, I used to get into arguments with HCI specialists, saying that phones could be like butlers: with all the sensors they have, a phone should know when you put it in your bag and behave accordingly. I was told that was impossible then, but it seems more possible now. Had the world gone at all that way, we'd have a freakin' API to make a restaurant reservation and wouldn't have to go through multimodal hell.
What you call multimodal hell is what others call meaningful choice and market competition.
Factor in how many tools have weaponized their interfaces against their users; then the motivation isn't just cost, but usability.
George Hotz runs Comma AI, a self driving car company.
Most people only care about money because they have no choice. Absent a cash-first system, money has become a social construct. The valuation of a dollar is entirely ephemeral now.
I see that social reality becoming more fully realized, and the existing social system around money collapsing as generational churn attenuates its social significance.
Tech bros are little more than disciples of a dogma, acting as missionaries for it. There are other dogmas.
But the market is so unbelievably messed up that this is not what happens in practice.
The work on self-driving cars would not be done if there were not a way to profit from it. If it weren't expected to earn more than it costs, it wouldn't be done.
Now maybe it’s all being done because of expectations of a monopoly (this is a free market consequence, right?), but …
A couple of decades ago, people used to think that since anyone can build a website, internet businesses would never have a moat. In a hyper-capitalist system, the top players will always find a way to dig a moat.
92% of the world has electricity. 74% of the world has internet access. 58% of the world has a personal mobile internet device.
That means the median consumer can access the free AI chatbots from OpenAI, Anthropic, etc.
Moreover, if you compare this to the world in 2000, you'll see that things have only gotten better. ~25 years ago, only 78% of the world had access to electricity, and only 6% had access to the internet. Effectively 0% had access to a personal mobile internet device.
I think, if you look at actual statistics, it is easy to be optimistic.
Economic rent is the extra money you can charge for owning a scarce resource. ML models are not waterfront real estate, they are IP. Other people can make more models if they can/want to.
Now, whether IP should be legally protected is a totally separate question, and while we in the West tend to assume the answer is obvious, geohot would certainly not be the first person to suggest that broadly applying private property rights to information makes questionable sense.
network effects, distribution, proprietary data, systems of record
companies like opencode have none of the above
cursor's distribution has been faltering and they're hard pivoting to training their own models with their proprietary data to try to build their moat back
Agentic commerce will render Amazon and the rest of the rent-seeking marketplaces obsolete, given enough time, because LLMs can literally go straight to the seller and perform checkout, do market research to make sure the seller is legit, and the seller can sell for less than on the marketplace since they aren't paying a 15-20% cut.
First, I think there is value in the "rent seeking" Amazon marketplace, because how else would the models "go straight to the seller"? Through another centralized search engine? Why not just use Amazon's then?
Second, one of LLMs' big weaknesses is judgment about what to trust. I would not trust the judgment of an LLM to determine that "the seller is legit"… unless we outsource trust verification to a third-party marketplace (which will want a cut).
Finally, OpenAI has been aggressively pushing for this so they can take a cut of the transaction. So it’ll just be another middle man.
Exactly this, and not just another middleman, a middleman with an obscene burn rate that isn't close to profitability and is incentivized to ratchet up prices as soon as they can.
And then AI procurement has problems on the buyer side. Do I just blindly trust that the model is going to make the purchase as specified? Do I trust the model's search capabilities and objectivity of returning results? How do I know that OpenAI isn't running its own "marketplace", only showing me options to buy that they want me to see while filtering out less desirable options for them?
It's a fundamentally less transparent experience than Amazon.
The sellers and the marketplaces can spend more time on their LLMs because it's their livelihood. It's the same asymmetry with different tooling.
1. Demographics - Aging population needs transportation. God knows we certainly don't need really old people driving themselves. We got a taste of the future in West Portal in that regard not long ago.
2. Human Capital - The US has pretty much demonstrated that there is little desire to import low-skilled labor. Where do these theoretical taxi drivers come from? Or welders or plumbers? Labor is going to become increasingly expensive no matter how you slice the pie.
3. Younger US citizens are going to gravitate to non-manual-labor jobs. It is not just that everyone is being steered toward college. Physical labor (the trades) takes a toll on the body. I know - I have worked in them - and you quickly extrapolate what that will be like when you are 50.
0: https://www.census.gov/programs-surveys/geography/guidance/g...
Yes, this is what Tock is for. It's not clear to me that it's a bad thing. It replaces the old $20-in-a-handshake I used to do with the maître d' at the front of the restaurant. It democratizes opportunity and improves transparency.
https://hannahritchie.substack.com/p/ai-footprint-august-202...
Almost always nowadays lol. Shit I’ve gotten poorer over the past few years.
Congress: "Hold my beer and watch this"
The Chinese models are open source because they are not state of the art. Once they catch up or lead, they will likely be closed down by government mandate. Just like Meta was fine with Llama being open source, but once they started to get close to OpenAI/Google/Anthropic, they shifted their language to "maybe we won't keep doing that."
The idea that AI will end the "rent-seeking class" that has effectively existed for thousands of years is... not going to happen! The business model just adjusts. And if AI is going to be an economy-shaping super disruptor, the cloud-hosted models will continue evolving beyond what you could ever run at home under the desk.
I think geohot is burying the lede in his post with a lot of speculation.
It's not that these specific models will become closed; it's that the hardware/hosting vendors have an incentive to train models whose inference is custom-tuned to their chips' dimensions and VRAM.
The Chinese models do a great job of showing what's possible on consumer/prosumer hardware because of export restrictions, but anyone entering the hardware space has the same incentive to undercut the frontier labs so they can sell more hardware.
It's also not clear if being at the forefront of inference quality really matters. The open source models appear to be doing a fine enough job of keeping up even if they're a few months behind. So it seems like there's not much of a technology moat for these labs other than the capital costs of training/serving.
> A home for poorly researched ideas that I find myself repeating a lot anyway
This "rent-seeking class" is not a historical universal, regardless of how much college Marxists insist that it is. Leaders can be good or bad, and they hold power in different ways. In America today we have bad leaders (across the entirety of the political spectrum), and AI poses a lot of challenges to how they hold power. This is not to say Chinese leaders are any better, but the way they hold power is not challenged by AI. Business models will indeed adapt - but the condition is excellent, as they say.
> The era of purposefully frustrating humans is over. The Chinese open source model running on the box under my desk can pass the Turing Test. When you call, e-mail, text, or show me an ad, you’ll never know if it’s me or my model seeing it.
But at some point, you're going to want to do something, like, e.g., buy something. Then you're right back to the problem in the opening quote:
> things take time, patience runs out, brand familiarity substitutes for diligence, and most people are willing to accept a bad price to avoid more clicks.
& we're already seeing AI used to do this. E.g., Amazon listings where product photos are AI generated. (… not that many product photos weren't "bad photoshop of product onto hot sexy model who is obviously not using our product" before … but now it's AI!) Whereas before someone would have had to spend a modicum of time badly using Photoshop, now AI can just churn out the same fraudulent result in a fraction of the time.
Now, if I have a problem with a product, instead of just calling a number, browsing a phone tree, getting put on hold, and finally having to struggle to get some human to understand the basic logistics of "I paid for X, I did not get X, I demand X or refund", I get to do all that but with the extra step of "forced engagement with an AI that is incapable of actually solving my problem". (This somehow still manages to apply even when the problem is seemingly trivial enough that I find myself thinking "… this actually should be something an AI can do" but inevitably, no, the AI is "sorry", it cannot do that.)
And besides, calls, emails, etc. are already handled without AI: I (and everyone I really care about) have either allowlisted all inbound comms or abandoned the medium altogether. Moreover, any communications medium is useful only insofar as it is not infested with spam, and will eventually be destroyed by spam. At least until we grow laws for mediums like phone/email, maybe named things like "Do Not Call" or "CAN-SPAM", and those laws are enforced. But the GOP has no interest in enforcing any level of consumer protection, so here we are.
Right.
You'll just end up paying the 15-20% cut to the people who train the model and keep it updated and run the agents that you rent from them.
When part of every financial transaction I make goes to a layer at the top that exists solely because they leveraged some digit in a balance book, it is techno-feudalism, with an owning class being paid because they are the owning class. Only instead of the baron always getting his cut, it's invisible someones with even less accountability than the local feudal baron had.
YouTube destroyed Hollywood's monopolization of entertainment. Anyone with a smartphone now has a shot at becoming a full-time creator. Prior to this, it was gatekept by Hollywood execs.
Smartphones destroyed Microsoft's monopolization of apps.
Not a leap to believe this will happen to some extent with AI (and it's already happening to some degree).
Just look at the people whose success was broken by Youtube overnight. Youtube/BigTech won over Hollywood, not the people 100% beholden to Youtube.
In an LLM arms race, corporations will win bud, sorry.
I just think this analysis is wrong from the start. The "proper" pricing structure, the one tracking the actual costs involved, would be that you don't get to talk to a human being at all unless you pay for their time. Human frictions are what allow no-charge customer support to exist.
It's going to be your bots vs theirs. Theirs will have more resources. Net result? probably fewer jobs and wealthier companies yet again.
A free market that is "properly priced" is not a real state of existence.
Resource and information asymmetry, and the exploitation of those without resource and information privilege by those with it, has been present from the very beginning. A free market is just a tool (among many) for a society to achieve a goal.
For some, that goal is explicitly the concentration of wealth and welfare in very few hands. This is oligarchy.
For others, it's advantaging the welfare and dignity of their "tribe" at the cost of the welfare and dignity of perceived outsiders.
And for yet others, it is the advancement of universal welfare and dignity.
Neither a free market nor socialism gets you any of these. What gets you there are the shared narratives that utilize tools like free markets, regulation, and redistribution.
this will continue forever and no rugs, chinese or otherwise, will ever be pulled
we know that because the label on the rug says "open source"
Anyway, I remember that Google demo of making restaurant reservations. I believe it was scripted and had a human fallback. Little did we know that Google would drop the bag on the whole transformer thing that came out soon after. I wouldn't be surprised if it was some of the same people involved.
What the author is talking about isn't rent-seeking per se but a moat. The entire proposition of OpenAI is that they can build a moat and recoup the billions of investment. I'm not convinced that's true, which is part of the author's point, for some of the same reasons:
1. Cost of hardware and training and tokens keeps going down. We saw the same thing with Bitcoin mining. I wonder if we'll see ASICs enter the fray here too; and
2. China will make sure no one company owns this future. DeepSeek was a shot across the bow of OpenAI, Google and Anthropic. It is a national security issue for China.
Where I disagree is that this will be the end of the rent-seeking class. I think we're barreling toward a dystopian future of even more wealth concentration, where most people get displaced by automation and AI, which suppresses wages and ultimately leads to a situation where a handful of people have all the money and almost everyone else has none.
[1]: https://en.wikipedia.org/wiki/Inclosure_act
[2]: https://medium.com/@jrcoleman97/the-hidden-origins-of-capita...
It's being used in a more literal-meanings-of-the-words sense ("pursuing monopoly rents") rather than the narrow economic term-of-art sense of "pursuing monopoly rents through influence over public policy by means that do not create, or which inhibit the creation of, additional wealth" (the definition you seem to be complaining about it not adhering to without actually providing.)
But most of the usages would also be correct in the narrower sense, because virtually every actor referred to as rent-seeking in the broader sense is also rent-seeking in the narrow sense as part of that. (E.g., actively lobbying for "safety" regulation that would disproportionately impair non-incumbent new competitors.)
> What the author is talking about isn't rent-seeking per se but a moat.
Pursuing a moat is just another term for seeking monopoly rents by any means, including rent-seeking in the narrow sense.
(There's also obviously an ideological angle in creating the term "rent seeking" as a term of criticism to those seeking monopoly rents through means that the creators of the term disapprove of, excluding seeking the same kind of rents by other means from "rent seeking".)
Rent-seeking is fundamentally intermediation like a health insurer putting themselves between a patient and a healthcare provider or privatizing grazing lands or controlling water supplies from snowmelt (like the Resnicks) or privatizing trains in the UK.
Microsoft has a competitive advantage with Windows, Google with its search engine and the ad business that funds it, Amazon with AWS, and Nvidia with GPUs. There are alternatives to all of these things, but these companies maintain dominance with a combination of scale, cost, technology, and network effects. That's not rent-seeking in either a broad or narrow sense.
Rent-seeking would be Microsoft lobbying lawmakers to require schools and governments to purchase Windows, for example.
> Enter AI, the great equalizer of time.
I didn't read any further: this article is dumb. If a company has the capability to hire literal people to waste your time, they can deploy more AI than you to waste the time of your AI.
Or they just use price to limit access instead of time. Which means you're totally SOL if you have time but no money. Pay to win, that game everyone loves /s!
AI doesn't flatten asymmetries, it exacerbates them.
I don't totally buy OP's argument, but I think you're dismissing it unfairly.
His point is that in the pre-LLM world, if a company wants to waste your time, they can hire a call center employee overseas for US$4/hr and make you wait for an hour to talk to them for 30 minutes. If you value your time at $80/hr, then the 90-minute call cost you $120, but it only cost the company about $2, thus the asymmetry.
OP's claim is that now the asymmetry is gone. If both you and the company use AI, the company has less leverage to impose costs on you. They can deploy more AI to try to waste more of your time, but the asymmetry is now in the customer's favor, because stonewalling costs the company more than it costs the customer.
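The asymmetry in the pre-LLM scenario can be made concrete with a quick sketch. The dollar figures here are the hypothetical numbers from the comment above ($80/hr customer, $4/hr agent, 60-minute wait, 30-minute call), not real call-center economics:

```python
# Illustrative sketch of the support-call cost asymmetry described above.
# All rates and durations are the hypothetical figures from the comment.

def support_call_cost(wait_min: float, talk_min: float, hourly_rate: float) -> float:
    """Cost of a support interaction to one party, in dollars."""
    return (wait_min + talk_min) / 60 * hourly_rate

# The customer burns 90 minutes of time valued at $80/hr; the company
# pays a $4/hr agent only for the 30 minutes of actual talking.
customer_cost = support_call_cost(wait_min=60, talk_min=30, hourly_rate=80)
company_cost = support_call_cost(wait_min=0, talk_min=30, hourly_rate=4)

print(f"customer: ${customer_cost:.0f}, company: ${company_cost:.0f}")
# customer: $120, company: $2
```

A 60-to-1 cost ratio in the company's favor is the leverage being argued about; the claim is that agent-vs-agent interactions collapse that ratio.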
Anyhow ...
https://geohot.github.io/blog/jekyll/update/2024/10/28/ameri...
> If Trump wins and Elon has influence, I do think there is a path to fix America longer term. While the general Trump/Elon vibes are good, I do have concerns about Trump’s fiscal record and protectionist tendencies. [...]
> Of course if Kamala wins, I think I’ll be staying in Asia. Managed decline with a side of resentment and looting oligarchs isn’t for me.
(he would get Hong Kong citizenship anyhow)
This is an oxymoron.
2. Anthropic does not care about what models and hardware he is running under his desk.
3. When you look behind the cupboard—Anthropic is "rent seeking" on a level well above consumers.
4. I've got "AI safety" + "Capitalism" + "Military-industrial complex" bound together on my mental corkboard.
Giving functionally illiterate people computers with GUIs should be regarded as a mistake.
That is the market being free, George.