Roughly, what the Lean theorem (and the statement on the website) asks is this: take some numbers t_i, for each of them form all the powers t_i^j, then combine them all into a multiset T. Barring some necessary conditions, prove that you can take a subset of T summing to any number you want.
What the Erdős problem in the paper asks is to add one more step: arbitrarily cut off the beginnings of the t_i^j power sequences before merging. Erdős and co. conjectured that only a finite number of subset sums stop being possible.
"Subsets sum to any number" is an easy condition to check (that's why "olympiad level" gets mentioned in the discussion) - it's the "arbitrarily cut off" part that the original question is all about, while "only finitely many disappear" is hard to grasp formulaically.
So... overhyped: yes; not the actual Erdős problem proven: yes; usual math-olympiad-level problems being solvable by the current level of AI, as this year's IMO showed: also yes (just don't get caught by https://en.wikipedia.org/wiki/AI_effect on the backlash, since olympiads are haaard! really!).
"I also wonder whether this 'easy' version of the problem has actually appeared in some mathematical competition before now, which would of course pollute the training data if Aristotle [Ed.: the clanker's name] had seen this solution already written up somewhere."
I just feel like if we were genuinely on the cusp of an AI revolution like it is claimed, we wouldn't need to keep seeing this sort of thing. Like I feel like a lot of the industry is full of flim-flam men trying to scam people, and if the tech was as capable as we keep getting told it is there'd be no need for dishonesty or sleight of hand.
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then once you have trained the model you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing about terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).
I should add that it also makes sense why it would: just look at the volume of human knowledge (the training data). It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language and intellect, that does the heavy lifting.
To give a quick example vis-a-vis LLMs: I can reason and understand well enough without having to be 'trained' on nearly the entire corpus of human literature. LLMs of course do not reason or understand, and their output is determined by human input. That alone indicates our minds work differently from LLMs.
I wonder how ChatGPT would fare if it were trained on birdsong and then asked for a rhyming couplet?
Short AI and tech, and just hope you get the timing right.
That being said, I think AI has a lot more immediately useful cases than cryptocurrency. But it does feel a bit overhyped by people who stand to gain a tremendous amount of money.
I might get slammed/downvoted on HN for this, but really wondering how much of VC is filled with get-rich-quick cheerleading vs supporting products that will create strong and lasting growth.
The more interesting one is the closed door conversations. Earlier this year, for example, it seemed there was a pattern of VCs heavily invested in AI asking the other software companies they invested in to figure out how to make AI useful for them and report back. I.e. "we invested heavily in hype, tell us how to make it real."
If by this you refer to "Aristotle" in the parent post - it's not that Aristotle. This is "Aristotle AI" - the name of their product.
A reality where we are debating whether the problem solved by the automated model, with formal verification, was too easy.
Don't get me wrong, I am not saying any of this means we get AGI or something or even if we continue to see improvements. We can still appreciate things. It doesn't need to be a binary. What a time to be alive!
Can you be specific about the goal posts being shifted? Like the specific comments you're referring to here. Maybe I'm just falling for the bait, but non specific claims like this seem designed just to annoy while having nothing specific to converse about.
I got to the end of your comment and, counting all the claims you discounted, the only goalpost I see left is that people aren't using a sufficiently excited tone while sifting fact from hype? A lot of us follow this work pretty closely and don't feel the need to start every post with "there is no need for excitement to abate, still exciting! but...".
> I am not saying any of this means we get AGI or something or even if we continue to see improvements. We can still appreciate things. It doesn't need to be a binary.
You'll note, however, that the hype guys happily include statements like "Vibe proving is here" in their posts with no nuance, all binary. Why not call them out?
If the Turing test is a goal, then we passed it 60 years ago and AGI has been here since the LISP days. If the Turing test is not a goal (which is the correct interpretation), nobody should care what a random nobody thinks about an LLM "passing" it.
"LLMs pass the Turing test so they are intelligent (or whatever)" is not a valid argument, full stop, because "the Turing test" was never a real thing actually meant to tell the difference between human intelligence and artificial intelligence; it was never formalized and never evaluated for its ability to do so. The entire point of the Turing test was to be part of a conversation about thinking machines in a world where that was an interesting proposition.
The only people who ever took the Turing test as a "goal" were the misinformed public. Again, that interpretation of the Turing test has been passed by things like ELIZA and Markov-chain-based IRC bots.
It actually doesn't help me at all at anything at my job and I really wish it could.
That isn't really goal post moving as much as a very strange shape to the goal post.
Everything is amazing and nobody is happy: https://www.youtube.com/watch?v=nUBtKNzoKZ4
"How quickly the world owes him something, he knew existed only 10 seconds ago"
These people treat math research as if it is a math homework assignment. There needs to be an honest discussion about what the LLM is doing here. When you bang your head against a math problem you blindly try a bunch of dumb ideas that don't actually work. It wastes a lot of paper. The LLM automates a lot of that.
It is actually pretty cool that modern AI can help speed this up and waste less paper. It is very similar to how classical AI (symbolic methods) sped up math research and wasted less paper. But we need to have an honest discussion about how we are using the computer as a tool. Instead, malicious idiots like Vlad Tenev are making confident predictions about mathematical superintelligence. So depressing.
Problems like the one discussed also aren’t interesting to applied mathematicians, either, because of lack of applications.
But yes, if this kind of AI produces new materials, solves diseases, etc. they will be very useful. We wouldn’t care whether they arrived at those solutions through valid reasoning, though. A sloppy AI that has better ‘guesses/intuitions’ than humans or that can guess and check ‘guesses/intuitions' for correctness faster would be very useful.
GPT-5 solved Erdős problem #848 (combinatorial number theory):
https://cdn.openai.com/pdf/4a25f921-e4e0-479a-9b38-5367b47e8...
One of his requirements is "Borders don't exist" like jeeze
It's just lies. These people are all liars, or worse, so utterly divorced from reality that they don't even know what the words they say mean.
The person who runs that site claims to be an "AI expert", leads with his "Doctor" title despite nobody being willing to tell you what he got his doctorate in, and spends all his time writing pop-"sci" slop. One brag point from the institution that sells his speaking services is that his dissertation was adapted into a book about "smart people are naturally smart and better than you", and that it involved "Dr Rupert Sheldrake", the well-known pseudo-scientist.
>He is an AI consultant whose 2021-2022 experiments with Leta AI and Aurora AI have been viewed over 5 million times.
>Alan completed his Bachelor of Science (Computer Science, AI, and Psychology) at Edith Cowan University, 2004; studied Gifted Education at Flinders University, 2017; became a Fellow of the Institute of Coaching affiliated with Harvard Medical School, 2017; and received his doctorate from Emerson, 2021. Alan’s dissertation was adapted into a book featuring Dr Rupert Sheldrake, Connected: Intuition and Resonance in Smart People.
>Alan was a major contributor to human intelligence research and practice. As chairman for Mensa International’s gifted families committee, Alan served two consecutive terms sharing best practice among 54 countries, and his work on gifted education was referenced in the Department of Education’s High Potential policy.
Look at all that title inflation baby! Surely executive material!
I mean just come on I can't believe people who fall for this trash are able to breathe
Classic grifter trash.
https://arxiv.org/html/2510.19804v1#Thmtheorem3
> In over a dozen papers, beginning in 1976 and spanning two decades, Paul Erdős repeatedly posed one of his “favourite” conjectures: every finite Sidon set can be extended to a finite perfect difference set. We establish that {1, 2, 4, 8, 13} is a counterexample to this conjecture.
The AI was given a step-by-step, already-found proof and asked "please rewrite in Lean".
---
Here the AI did the proof itself.
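As an aside, the Sidon property of the {1, 2, 4, 8, 13} set quoted above is the trivial-to-check half (all pairwise sums are distinct); the content of the paper is showing that it cannot be extended to a finite perfect difference set. A quick sanity check of the easy half, assuming only the standard definition of a Sidon set (a throwaway script, not anything from the paper):

    from itertools import combinations_with_replacement

    def is_sidon(s):
        # Standard Sidon property: all pairwise sums a + b with a <= b are distinct.
        sums = [a + b for a, b in combinations_with_replacement(sorted(s), 2)]
        return len(sums) == len(set(sums))

    print(is_sidon({1, 2, 4, 8, 13}))  # True; the non-extendability is the hard part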
This is different from the situation you are talking about, where a problem genuinely appears difficult, attracts sustained attention, and is cited repeatedly as many people attempt partial results or variations. Then eventually someone discovers a surprisingly simple solution to the original problem, which basically makes all the previous papers look ridiculous in hindsight.
In those cases, the problem only looks “easy” in hindsight, and the solution is rightly celebrated because there is clear evidence that many competent mathematicians tried and failed before.
Is there any evidence that this problem was ever attempted by a serious mathematician?
In this case, the solution might have been missed before simply because the difference between the "easy" and "hard" formulations of the problem wasn't quite clear, including perhaps to Erdős, prior to it being restated (manually) as a Lean goal to be solved. So this is a success story for formalization as much as AI.
> My point is that basic ideas reappear at many places; humans often fail to realize that they apply in a different setting, while a machine doesn't have this problem! I remember seeing this problem before and thinking about it briefly. I admit that I haven't noticed this connection, which is only now quite obvious to me!
Doesn't this sound extremely familiar to all of us who were taught difficult/abstract topics? Looking at the problem, you don't have the slightest idea what it is about, but then someone comes along and explains the inner bits and it suddenly "clicks" for you.
So, yeah, this is exactly what I think is happening here. The solution was there, and it was simple, but nobody discovered it up until now. And now that we have an explanation for it we say "oh, it was really simple".
The bit that makes it very interesting is that this hadn't been discovered before, and now it has been discovered by the AI model.
Tao challenges this by hypothesizing that it actually was done before but never "released" officially, which is why the model was able to solve the problem. However, there's no evidence (yet) for his hypothesis.
What exactly would constitute evidence?
related article:
> Is Math the Path to Chatbots That Don’t Make Stuff Up?
https://www.nytimes.com/2024/09/23/technology/ai-chatbots-ch...
1. https://www.claymath.org/millennium-problems/
Having it go beyond that to do things that we can't do would be a nice bonus. But given the way AI has been going, I'd say there's a decent chance that it solves a Millennium Prize problem before it solves driving.
> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference. — BorisAlexeev
There we go, so there is hype to some degree.
Yeah, that's the crux of the matter. How did the AI do it? Using already existing math. If we need new math to prove Collatz, Goldbach or Riemann, LLMs are simply SOL. That's what's missing, and what the hype boys always avoid mentioning.
An unproved theorem now proved is by definition new math. Will LLMs get you to Collatz, Goldbach, or Riemann? Unclear.
But it's not like there's some magical, entirely unrelated to existing math, "new math" that was required to solve all the big conjectures of the past. They proceeded, as always, by proving new theorems one by one.
I think eventually LLMs will also be used as part of systems that come up with new, broadly useful definitions, but we're not there yet.
No. By _new math_ I mean new mathematical constructs and theories like (to mention the "newest" ones) category theory, information theory, homotopy type theory, etc. Something like Cantor inventing set theory, or Shannon with information theory, or Euler with graph theory.
AFAIK no new field of mathematics has been created _by_ AI. Feel free to correct me.
You seem to be smuggling in the assumption that this problem was "important".
We've had automated theorem proving since the 60s. What we need is automated theorem discovery. Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence. Instead intelligence is a little game AI plays with Lean.
AI researchers keep trying to reduce intelligence into something tiny and approachable, like automated theorem proving. It's easy: you write the theorem you want proven and hope you get a proof. It works or it doesn't. Nice and benchmarkable.
Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space? Or how to suggest an alternative to Turing machines / lambda calculus that expresses the same underlying idea?
I've been thinking mathematicians have fun doing math, making discoveries, crafting proofs.
Does the Tour de France & Co. make no sense now that small, lightweight and powerful e-bicycles have appeared?
Using the computer as a helper is like the bicycle, which is one thing; using LLMs seems more like the e-bicycle, which is something else entirely.
By that logic, we've had LLMs since the 60s!
> What we need is automated theorem discovery.
I don't see any reason you couldn't train a model to do this. You'd have to focus it on generating follow-up questions to ask after reading a corpus of literature, playing around with some toy examples in Python and making a conjecture out of it. This seems much easier than training it to actually complete an entire proof.
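As a purely hypothetical sketch of that loop (none of these names come from any real system), have the model enumerate small cases and look for a pattern in what fails, e.g. for the subset-sums-of-powers question discussed upthread:

    # Toy "experiment -> conjecture" loop: which n up to N are NOT a subset sum
    # of the powers {2^j, 3^j : j >= 1}?
    def reachable_sums(values, limit):
        # Standard subset-sum sweep: all totals <= limit reachable from `values`.
        reachable = {0}
        for v in values:
            reachable |= {s + v for s in reachable if s + v <= limit}
        return reachable

    N = 200
    powers = [b ** j for b in (2, 3) for j in range(1, 9) if b ** j <= N]
    missing = sorted(set(range(1, N + 1)) - reachable_sums(powers, N))
    print(missing)  # a conjecture generator (human or model) would look for a pattern here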
> Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence.
Who says they don't? I wouldn't be surprised if HarmonicMath, DeepMind, etc have also thought about this kind of thing.
> Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space?
That's a good question! It would be interesting to see if this is an emergent property of multimodal LLMs trained specifically on this kind of thing. You would need mathematical reasoning, visual information and language encoded into some shared embedding space where similar things are mapped right next to each other geometrically.
> By that logic, we've had LLMs since the 60s!
From a bit earlier[0], actually:
Progressing to the 1950s and 60s
We saw the development of the first language models.
Were those "large"? I'm sure at the time they were thought to be so.0 - https://ai-researchstudies.com/history-of-large-language-mod...
"On one hand, as the nice sketch provided below by tsaf confirms, the final proof is quite simple and elementary - indeed, if one was given this problem in a maths competition (so therefore expected a short simple solution existed) I'd guess that something like the below would be produced. On the other hand, if something like this worked, then surely the combined talents of Burr, Erdős, Graham, and Li would have spotted it.
Normally, this would make me suspicious of this short proof, in that there is overlooked subtlety. But (a) I can't see any and (b) the proof has been formalised in Lean, so clearly it just works!
Perhaps this shows what the real issue in the [BEGL96] conjecture is - namely the removal of 1 and the addition of the necessary gcd condition. (And perhaps at least some subset of the authors were aware of this argument for the easier version allowing 1, but this was overlooked later by Erdős in [Er97] and [Er97e], although if they were aware then one would hope they'd have included this in the paper as a remark.)
At the moment I'm minded to keep this as open, and add the gcd condition in the main statement, and note in the remarks that the easier (?) version allowing 1 and omitting the gcd condition, which was also asked independently by Erdős, has been solved."
The commentator is saying: "I can't believe this famous problem was solved so easily. I would have thought it was a fake proof, but the computer verified it. It turns out the solution works because it addresses a slightly different set of constraints (regarding the number 1) than what Erdős originally struggled with." (Generated by Gemini)
Please don't post generated comments here. We want HN to be a place for humans writing in their own voice.
He was cited here: https://the-decoder.com/leading-openai-researcher-announced-... where, a while back, OpenAI made a misleading claim about solving some of these problems.
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
Software engineering is more subtle since you actually care about the performance and semantics of a program. Unless you get really good at program specification, it would be difficult to fully automate.
But with math the only thing you care about is whether the moves you made were right.
I keep a list of ridiculously failed predictions, and this goes into it.
Can I ask you to be more concrete? What does "math is toast" mean to you?
Computers have proved stuff people couldn't since at least 1976 when Appel and Haken proved the 4-color theorem.
The 4 color theorem was proven because humans actually did the math and discovered a way to simplify the problem enough for them to write a program that proves it. The proof wasn't discovered by a computer. The proof was discovered by the people who wrote that program. There was no machine learning involved.
My claim is that in 15 years an AI system will be able to prove it from first principles for under 100 dollars. That would render us normal mathematicians toast.
Sure it suffers from amnesia and cannot get basic things right sometimes, but one is a design limitation that can be overcome and the other a possible problem caused by training (we’re discovering that overt focus during training on adherence to prompt somewhat lobotomizes their general competence).
For the 2nd: We are completely 100% sure this cannot be solved. This is not a training issue. This is the case for any statistical machine. No amount of training can ever solve this.
If you had enough paper and ink and the patience to go through it all you could take all the training data and manually step through and train the same model. Then once you have trained the model you could use more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process.
> This is a nice solution, and impressive to be found by AI, although the proof is (in hindsight) very simple, and the surprising thing is that Erdos missed it. But there is definitely precedent for Erdos missing easy solutions!
> Also this is not the problem as posed in that paper
> That paper asks a harder version of this problem. The problem which has been solved was asked by Erdos in a couple of later papers.
> One also needs to be careful about saying things like 'open for 30 years'. This does not mean it has resisted 30 years of efforts to solve it! Many Erdos problems (including this one) have just been forgotten about, and nobody has seriously tried to solve it.[1]
And, indeed, Boris Alexeev (who ran the problem) agrees:
> My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.
> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.[2]
Not to rain on the parade out of spite, it's just that this is neat, but not like, unusually neat compared to the last few months.
[1] https://twitter.com/thomasfbloom/status/1995083348201586965
[2] https://www.erdosproblems.com/forum/thread/124#post-1899