Like if I offered you $8 billion in soft serve ice cream so long as you keep bringing birthday parties to my bowling alley. The moment the music stops and the parents want their children back, it’s not like I’m out $8 billion.
Cloud provider gives credit to LLM provider in exchange for a part of the company.
These are really normal business deals.
The talent will move out naturally -- Amazon can just scoop it up with its bucket (*not S3)
Not only the models, but also the training data, model architecture, documentation, weights, and latest R&D experiments?
Take an instance -> Snapshot -> Investigate.
Unless they get caught, it is not illegal.
maybe the full array of options is: pass the hot potato, hold the buck, or drop it like a bag.
It's kind of funny: you can ask Rufus for stuff like "write a hello world in python for me" and it will do it, and also recommend some Python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an openrouter-esque service that can connect Claude Code to this network of chat widgets. There are enough of them to spread your messages across to easily cover an entire Claude Pro subscription.
I think if you run any kind of freely-accessible LLM, it is inevitable that someone is going to try to exploit it for their own profit. It's usually pretty obvious when they find it because your bill explodes.
From the perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer already expects and wants product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
I think it can be done right now with MCP servers in a way that you don't immediately hand over your data to the chatbot portal companies so that they can cut you out. (But, over time/traffic, they could quickly learn to mimic your MCP server, much like they mimic Web content and other training data, and at least appear to casual users to interact like you, even if twisted to push whatever company bid for the current user interaction. I haven't figured out what you do when they've trained on mimicking you with an evil twin; maybe you get acquired early, and then there are more resources to solve that next problem.)
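For what it's worth, here's roughly the shape of it, as a minimal sketch using the official Python MCP SDK's FastMCP interface (the server name, catalog, and tool are all hypothetical). The point is that the catalog stays on your infrastructure and the chatbot only ever sees individual tool results:

    # Minimal sketch, assuming the official Python MCP SDK (FastMCP).
    # Server name, catalog contents, and tool are hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("acme-catalog")

    # Hypothetical in-memory catalog; in practice this data never
    # leaves your servers wholesale -- only per-query answers do.
    CATALOG = {
        "espresso machine": {"sku": "ACME-001", "price_usd": 249.00},
        "burr grinder": {"sku": "ACME-002", "price_usd": 99.00},
    }

    @mcp.tool()
    def lookup_product(query: str) -> dict:
        """Return catalog info for a product name, or {} if unknown."""
        return CATALOG.get(query.lower(), {})

    if __name__ == "__main__":
        mcp.run()  # speaks MCP over stdio by default

Of course, that only holds until they've scraped enough of your answers to train the evil twin.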
I assume that if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. If their consumer-facing AI is using Claude at all, I assume it's a Sonnet or Haiku model from 1+ versions back, simply due to cost.
I work for Amazon; everyone is using Claude. Nova is a piece of crap; nobody is using it. It's literally useless.
I haven't tried the new versions that just came out though.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
EDIT: I then asked for a Fizzbuzz implementation and it kindly obliged. I then asked for a Rust Fizzbuzz implementation, but this time I asked in Spanish, and it said that it could not help me with Fizzbuzz in Rust, but that any other topic would be OK. Then I asked again in English, "Please do Rust now", and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrail prompt translated into the store's language?
The market is too new for AI.
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase, and the big research companies are pleading with businesses to adopt AI. The problem is that adoption doesn't work that way.
AI companies are asking you to "do AI", but they aren't telling you how, or what it can do. That isn't how things should be sold; the use case should be overwhelmingly obvious.
It'll take a decade for AI native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't be from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video is. Meanwhile actual users, like insanely popular VFX YouTube channels, are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane: their recent conference was on fire with UI+AI that makes sense for designers, not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but more than likely they're the WebVans whose carcasses pave the way.
Or, as a slight variation of that, they think the underlying technology will always be quickly commoditized and that no one will ever be able to maintain much of a moat.
It's a black box with text input/output; that's not a very good moat.
Especially given that DeepSeek-type events can happen, because you can just train off of your competitors' outputs.
I've tried out Gemini 2.5/3 and it generally seems to suck for some reason, with problems with lying/hallucinating and following instructions. But ever since Bard first came out, I thought Google would have the best chance of winning: they have their own TPUs, YouTube (insane video/visual/audio data), Search (indexed pages), and their own cloud/DCs, and they can stick it into Android/Search/Workspace.
Meanwhile OpenAI has no existing business; they only have API/subs as revenue, and they're relying on Nvidia/AMD.
I really wonder how things will look once this gold rush stabilizes
A few gold diggers will be worth 10x the shovel maker.
They could make more money by keeping control of the company, and they'd still have control.
I'd love to see evidence for such a thing, because it's not clear to me at all that this is the case.
I personally think they're the best of the model providers but not sure if any foundation model companies (pure play) have a path to profitability.
https://www.anthropic.com/news/anthropic-acquires-bun-as-cla...
Gemini could get much better tomorrow and their entire customer base could switch without issue.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
1. using Claude Code exclusively (back when it really was on another level from the competition) to
2. switching back and forth with CC using the Z.ai GLM 4.6 backend (very close to a drop-in replacement these days) after Anthropic massively cut down the quota on the Claude Pro plan, to
3. now primarily using OpenCode with the Claude Code backend, or Sonnet 4.5 Github Copilot backend, or Z.ai GLM 4.6 backend (in that order of priority)
OpenCode is so much faster than CC even when using Claude Sonnet as the model (at least on the cheap Claude Pro plan, can't speak for Max). But it can't be entirely due to the Claude plan rate limiting because it's way faster than CC even when using Claude Code itself as the backend in OC. I became so ridiculously sick of waiting around for CC just to like move a text field or something, it was like watching paint dry. OpenCode isn't perfect but very close these days and as previously stated, crazy fast in comparison to CC.
Now that I'm no longer afraid of losing the unique value proposition of CC, my brand loyalty to Anthropic is incredibly tenuous; if they cut rate limits or hurt my experience in the slightest way again, it will be an insta-cancel.
So the market situation is much different than in the early days of CC as a cutting-edge novel tool, and relying on that first-mover status forever is increasingly untenable in my opinion. The competition has had a long time to catch up, and both proprietary options like Codex and model-agnostic FOSS tools are in a very strong position now (except Gemini CLI is still frustrating to use, as much as I wish it wasn't; hopefully Google will fix the weird looping and other bugs ... eventually, because I really do like Gemini 3 and pay for it already via the AI Pro plan).
Model training, sure. But that will slow down at some point.
If Claude Code's revenue grows faster than its costs, it will become profitable.
No shit?
That was the differentiation. What makes you think AI companies can't find moats similar to Google's? The right UX, the right model and a winner can race past everyone.
It really was!
I remember the pre-Google days when AltaVista was the best search engine, just doing keyword matching, and of course you would therefore have to wade through pages of results to hopefully find something of interest.
Google was like night & day. PageRank meant that typically the most useful results would be on the first page.
Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).
Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating at a huge loss.
Just having far more user search queries and click data gives them a huge advantage.
Are they profitable? (no)
Is Claude Code even running at a marginal profit? (who knows)
Is the marginal profit large enough to pay for continued R&D to stay competitive? (no)
Does Claude Code have a sustainable advantage over what Amazon, Microsoft and Google can do in this space using their incumbency advantage and actual profits and using their own infrastructure?
They just sell tokens, essentially. Much like OpenAI, but very different from Google or Microsoft, who make their money elsewhere.
They're preparing for IPO?
> They could make more money by keeping control of the company, and they'd still have control.
It depends on how much they can sell for.
I guess they're taking to heart the old adage about selling picks and shovels while everyone else digs for gold.
There is no moat in being a frontier model developer. A week, a month, or a year later there will be an open source alternative that is about 95% as good for most tasks people care about.
I get the feeling Amazon wants to be the shovel seller for the AI rush rather than a frontier model lab.
I think this is simply wrong. Don't you think Amazon would love to be in Google's position of having Gemini 3 Pro? The shovel maker will never make more money than a few lucky gold diggers.
I am basing my observation on the noises they are making. They did put out a model called Nova, but they are not drumming it up at all. The model page makes no claims about benchmarks or performance. There are no signs of them poaching talent. Their CEO has not been in the press singing AI's praises, unlike every other big tech CEO.
Maybe they have a skunkworks team on it, but something tells me they are waiting for the dust to settle.
Hard to buy a company that is part-owned by your competitors.
Completely possible for me to do, but it saved me at least a couple hours of Googling.
1. Why buy the cow when you can get the milk for free?
2. Amazon doesn't appear interested in acquiring Anthropic _at its current valuation_. I would be surprised if it's not available for acquisition at 1/10th its current price in the next 3-5 years
AI isn't going anywhere, but "prop model + inference" is far from a proven business model.
Same w/ Perplexity.
> I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
> One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
Growing revenue and losing money is not “thriving”
But someone who invested in the hypothetical “YC Index Fund” wouldn’t be too happy.
Is the claim that coding agents can't be profitable?
Their margins are negative, and every increase in usage results in a bigger loss. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.
If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.
$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
This is always the pitch for money-losing IPOs. Occasionally, it is true.
It's not, even by his own citing: https://www.youtube.com/watch?v=iWs71LtxpTE
He said that this applies to "many teams" rather than "uniformly across the whole company".
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
Also, FWIW, I think healthy skepticism is great; but developers outright denying that this technology will be useful going forward are in for a rude awakening, IMO.
I kind of get it, especially if you are stuck on some shitty enterprise AI offering from 2024.
But overall it’s rather silly and immature.
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
I'm not an LLM luddite; they are useful tools. But people with vested interests make a lot of claims that, if true, would mean we should already be seeing the signs of a giant software renaissance... and I just haven't seen that. Like, at all.
I see a lot more blogging and influencer peddling about how AI is going to change everything than I do actual signs of AI changing much of anything.
That's the hype being sold. So where's the software...?
And again, I'm not anti-LLM. But I still think the hype around them is far, far greater than their real impact.
> AI will replace 90% of developers within 6 months
> The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
Let's review the original claim:
> AI will replace 90% of developers within 6 months
Notice that the original claim does not say "developers will remain the same amount, they will just be 10x more effective". It says the opposite of what you claim it says. The word "replace" very clearly implies loss of job.
> > AI will replace 90% of developers within 6 months
That’s not the original claim though; that’s a misrepresentative paraphrase of the original claim, which was that AI will be writing 90% of the code with a developer driving it.
I think it was also back in March, not a year ago
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
I think it's a silly and poorly defined claim.
edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
from https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn
(sorry have been responding quickly on my phone between things; misquotes like this annoy the fuck out of me)
Have you used Gemini 3?
The employees/VCs of companies that IPO'd in 1999 and early 2000 cashed out, leaving bag holders. The companies that IPO'd in 2000/2001 were a mixed bag. The six-month lockup had many employees salivating but unable to cash out; it all depended on the timing of the thing. This time around there are private markets that apparently are allowing employees to become liquid. Nevertheless, earlier is better for startups to IPO, particularly when the tide appears to be turning.
It's a hot take, I know :D
That said, Gemini is still very, very good at reviews, SQL, design, and smaller (relatively) edits; but today it is not at all obvious that Google is going to win it all. They're positioned very well, but their execution needs to be top notch.
I know that people here are myopically focussed on code, but that's not what the majority of people use AI for.
If Opus 4.5 is better than Gemini 3 for code, but the same or worse for most other uses (which seems to be the case according to benchmarks), that's great for us but terrible for Anthropic.
Claude still can't even draw basic pictures, for example.
It's an absolute workhorse.
It is so proactive in fixing blockers - 90% of the time for me, choosing the right path forward.
If anything they're far ahead of Google on the enshittification schedule (who still give out API keys for free Gemini usage and a free tier on Gemini CLI, although CLI is still pretty shaky unfortunately but that's a different issue).
It also doesn't help that CC will stop working literally in the middle of a task with zero heads-up; at best I get the 90% warning, and then 30 seconds later it stops working, claiming I hit 100% after about two additional messages during the same task. I'm truly baffled by how they've managed to make the warnings as useless and aggravating as possible in CC. It routinely shuts down while the repo is in a broken state, so I have to ask Codex to read the logs and piece things back together to continue working.
As to the size of the bump they'll get, there isn't a single rule of thumb, but larger-cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies, 4-7% for mid-level, and 6-12% for "small" companies under a $20 billion market cap.
Everybody who puts their retirement fund into an index fund is buying the index fund without relation to its price (aka price insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price sensitive. That is evidenced by companies falling out of the S&P 500 and even failing.
*specifically float-adjusted market capitalization
https://www.spglobal.com/spdji/en/documents/index-policies/m...
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
see also:
https://www.spglobal.com/spdji/en/methodology/article/sp-us-...
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
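To make the difference concrete, here's a toy comparison (company names and all figures are made up): cap weighting allocates by price, earnings weighting by profits, and the two portfolios can look wildly different:

    # Toy sketch, hypothetical companies and figures: cap-weighted vs.
    # earnings-weighted portfolio weights.
    companies = {
        # name: (market_cap_bn, annual_earnings_bn)
        "HypeTech":  (3000, 80),
        "SteadyCo":  (400, 40),
        "ValueCorp": (150, 25),
    }

    total_cap = sum(cap for cap, _ in companies.values())
    total_earn = sum(earn for _, earn in companies.values())

    for name, (cap, earn) in companies.items():
        print(f"{name:9s}  cap-weight {cap / total_cap:6.1%}"
              f"  earnings-weight {earn / total_earn:6.1%}")

With these made-up numbers, HypeTech is ~85% of the cap-weighted portfolio but only ~55% of the earnings-weighted one, which is exactly the kind of concentration I'm trying to get away from.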
This isn't a term used in economics. The typical terms used are positive price sensitivity and negative price sensitivity.
https://www.investopedia.com/terms/p/price-sensitivity.asp
While it is true that being added to the SP500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the SP500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you're already doing about the best you can in the US. Your options to diversify are to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
> The sum of the most recent four consecutive quarters’ Generally Accepted Accounting Principles (GAAP) earnings (net income excluding discontinued operations) should be positive as should the most recent quarter.
https://www.spglobal.com/spdji/en/documents/methodologies/me...
Google and Microsoft are pushing AI extremely heavily but they have other sources of revenue, whereas ChatGPT only has partnerships and subscribers. Are they profiting from their 200 USD tier? Because it seems quite obvious to me they are just bearing the brunt of free users by being loss leaders at the moment.
I have no doubt many of these companies will go bankrupt, but I wonder who will be first.
All companies go bankrupt given enough time. It's like the foundry business: the ever-increasing cost of moving to the next node requires ever-increasing scale, which naturally kills off competition.
Anyone would be bearish on Nvidia today if the share price implied a $10T valuation.
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me who are happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
https://www.wsj.com/tech/ai/big-techs-soaring-profits-have-a...
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
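As a stylized illustration of that claim (every figure here is invented; the 60% margin is just borrowed from the disclosure floor mentioned above): a training run pays for itself when lifetime inference margin exceeds its compute cost, even while total R&D spend keeps growing.

    # Stylized, invented figures: one R&D cycle's return on training compute.
    training_cost_bn = 2.0        # hypothetical compute cost of one run
    inference_revenue_bn = 6.0    # hypothetical lifetime revenue of the model
    gross_margin = 0.60           # assumed inference gross margin

    margin_bn = inference_revenue_bn * gross_margin   # 3.6
    print(f"returned ${margin_bn:.1f}bn margin on "
          f"${training_cost_bn:.1f}bn of training compute")

The headline P&L still shows a loss, because the next run costs several times more, but each individual cycle has paid.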
All that said, be cautious shorting these stocks when they go public.
If OpenAI IPOs first, it'd be huge; then when Anthropic follows, the AI IPO hype has sailed.
If Anthropic IPOs first, they get the AI IPO hype, and the OpenAI IPO is probably huge either way.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully), and drove the share price from $10 to $1000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
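Back-of-envelope version of what I'm asking about (numbers hypothetical): the market cap reprices every outstanding share at the marginal trade, even if only a sliver of the float actually changes hands.

    # Hypothetical numbers only.
    shares_outstanding = 10_000_000
    price_before, price_after = 10.0, 1_000.0

    cap_before = shares_outstanding * price_before   # $100M
    cap_after = shares_outstanding * price_after     # $10B, a 100x

    shares_traded = 50_000                           # 0.5% of the float
    cash_traded = shares_traded * price_after        # ~$50M

    print(f"market cap: ${cap_before:,.0f} -> ${cap_after:,.0f}")
    print(f"cash that actually traded near the top: ~${cash_traded:,.0f}")

So roughly $50M of turnover could, in principle, 'support' $9.9B of paper gains, which is why I'm curious what fraction of the float is really trading.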
> ETFs by definition need to participate
You meant to say "index funds". There are many different kinds of ETFs.
Genuinely asking.
Goldman puts out their retail reports weekly; they show retail is 20% of trading in a lot of names, and higher in a lot of the meme-stock names.
Retail used to be so tiny due to $50/trade fees, but with the advent of all the free money in the system since COVID, Gen Z feeling like real estate won't be their path to freedom, options trading for retail, and zero-commission trading, retail has a real voice in the markets.
You can easily look up the numbers you are asking for; the TLDR is that the volume in most stocks is high enough that you can't manipulate it much. If it's even 2x overpriced then there's $100m on the table for whoever spots this and shorts, i.e. enough money that plenty of smart people will be spending effort on modeling and valuation studies.
But that isn't relevant? If they trade a lot but own less than 10% of the shares they're still a small piece.
The institutional investors are likely not trading much; things like 401(k)s are all long-term investments.
This isn't going to end well, is it?
Modern IPOs are mainly dumping on retail and index investors.
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
And as for the rest (S&P 500 inclusion etc.), these companies are going to fake profits using some sort of financial engineering in order to be included.
They are going public.
They are expected to hit $9 billion in revenue by the end of the year, meaning the valuation multiple is only ~30x. Which is still steep, but at that growth rate not totally unreasonable.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
The problem as I see it is that neither of those things are significant moats. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs due to TPUs. Claude Code is neat but in the long run will definitely be replicated.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
I am old enough (> 1 year old) to remember when Cursor had won the developer market from the previous winner, Copilot.
Google or Apple should have locked down Anthropic.
It’s a fair point, but the counter-point is that back then these tools were ide plugins you could code up in a weekend. Ie closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations that you'd need to chase too, and long-term enterprise sales contracts you'd need to sell into. I.e., much more like an enterprise SaaS play.
I don't want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the work required in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can't structurally compete in multi-year enterprise SaaS.
Is there some sort of unlimited plan that people take advantage of?
It's a step up from copy-pasting from an LLM.
But Claude Code is on another level.
Google should be stomping everyone else, but its ad addiction in Search will hold it back. Innovator's dilemma...
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people do use Claude Code and Codex simultaneously in some cases.
The latter are locked in to whatever vendor(s) their corporate entity has subscribed to. In a perverse twist, this gives the approved[tm] vendors an incentive to add backend integrations to multiple different providers so that their actual end-users can - at least in theory - choose which models to use for their work.
what about Chinese models?..
When has anything been 'locked in'? When someone comes along with a better tool, people will switch.
Are you ... aware that OpenAI and Google have launched more recent models?
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure the revenue growth could stop but it hasn’t and there is no reason to think it will.
I hear this a lot. Do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him, but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
> The Information reports that Anthropic expects to generate as much as $70 billion in revenue and $17 billion in cash flow in 2028. The growth projections are fueled by rapid adoption of Anthropic’s business products, a person with knowledge of the company’s financials said.
> That said, the company expects its gross profit margin — which measures a company’s profitability after accounting for direct costs associated with producing goods and services — to reach 50% this year and 77% in 2028, up from negative 94% last year, per The Information.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
However, I'm still a little sceptical about this, as the cost to train new models is going up super-linearly (apparently), which means that the revenue from inference needs to go up alongside it.
Interesting to think about though, thanks for the source!
2. A $300bn IPO can mean actually raising $300bn by selling 100% of the company. But it could also mean selling 1% for $3bn, right? Which seems like a trivial amount for the market to absorb, no?
It would be so massively oversubscribed that it would become a $600bn company by the end of the day (which is a good tactic for future fundraising too).
I suspect if/when Anthropic does its next raise VCs will be buyers still not sellers.
If they get to be a memestock, they might even keep the grift going for a good while. See Tesla as a good example of this.
See page ~9 of https://www.spglobal.com/spdji/en/documents/methodologies/me...
It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders.
- Google cofounders Larry Page and Sergey Brin
Then came the dot-com bubble.
This is nonsense. Public companies are just as free as private companies to maximise whatever their shareholders want them to.
It seems that Amazon are playing this much like Microsoft: seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.