It also needs to be vertically integrated to make money, otherwise it's a handout to the materials science company. I can't see any of the AI companies stretching themselves that thin. So they give it away for goodwill or good PR.
That said, DeepMind has a spin-off making drugs: https://www.isomorphiclabs.com/
AI for science is not "marketed". It silently evolves under wraps and changes our lives step by step.
There are many AI systems already monitoring our ecosystem and predicting things as you read this comment.
I don't see how OpenAI or Google can profit from drug discovery. It's nearly pure consumer surplus (where the drug companies and patients are the consumers).
I am sure you can think of a few prominent examples.
Not sure what LLMs are supposed to do there.
The AI/AGI hype could, in my opinion, better be called ML-with-data-and-compute 'hype' (I don't like the word hype, as it doesn't fit very well).
Right now the generators aren’t effective but they are definitely stepping stones to something better in the future.
If that future thing produces video, movies and pictures better than anything humanity can produce at a rate faster than we can produce things… how is that a waste?
It can arguably be bad for society but definitely not a waste.
Education-style infographics and videos are OK.
I help people turn wire rolling shelf racks into the base of their home studio, and AI can now create a "how to attach something to a wire shelf rack" video without me having to handle all the space, rack, equipment, lighting, and video setup; it just takes a prompt. It's not close to perfect yet, but it's becoming useful.
compelling graphics take a long time to create. for education content creators, this can be too expensive as well. my high school physics teacher would hand draw figures on transparencies on an overhead projector. if he could have produced his drawings as animations cheap and fast using AI, it would have really brought his teaching style (he really tried to make it humorous) to another level. I think it would be effective for his audience.
imagine the stylized animations for things like the rebooted Cosmos, NOVA, or even 3Blue1Brown on YT. there is potential for small teams to punch above their weight class with genAI graphics
Stop talking about the status quo… we are talking about the projected trendline. What will AI be when it matures?
Second you’re just another demographic. Smaller than fans of Coldplay but equally generic and thus an equal target for generated art.
Here’s a prompt that will one day target you: “ChatGPT, create musical art that will target counter-culture posers who think they’re better than everyone just because they like something that isn’t mainstream. Make it so different they will worship that garbage like they worship Pearl Jam. Pretend that the art is by a human, so that when they finally figure out they fell for it hook, line, and sinker, they’ll realize their counter-culture tendencies are just another form of generic trash fandom, no different from people who love Coldplay or, dare I say it, Taylor Swift.”
What do you do then when this future comes to pass and all content even for posers is replicated in ways that are superior?
Nobody gives a fuck about what ChatGPT can currently do. It’s not interesting to talk about because it’s obvious. I don’t even understand why you’re just rehashing the obvious response. I’m talking about the future. The progression of LLMs is leading to a future where my prompt leads to a response that is superior to the same prompt given to a human.
I have a feeling that's already happened to me.
Draw the trendline into the future. What will happen when the content is indistinguishable and AI is so good it produces something that moves people to tears?
Most of it is used to fool people for engagement, scams, politics, or propaganda. It definitely is a huge waste of resources, time, brainpower, and compute. You have to be completely brainwashed by consumerism and tech-solutionism not to see it.
Take your favorite works of art, music and cinema. Imagine if content on that level can be generated by AI in seconds. I wouldn’t classify that as a “waste” at all. You’re obviously referring to bullshit content, I’m referring to content that is meaningful to you and most people. That is where the trendline is pointing. And my point, again is this:
We don’t know the consequence of such a future. But I wouldn’t call such content created by AI a waste if it is objectively superior to content created by humans.
We consume A LOT of entertainment every day. Our brains like that a lot.
It doesn't have to be just video; even people who don't watch TV at all entertain themselves through books, events, etc.
Life would be quite boring otherwise.
From 1:14:55-1:15:20, within the span of 25 seconds, the way Demis spoke about releasing all known sequences without a shred of doubt was amazing to see. There wasn't a single second where he worried about the business side of it (profits, earnings, shareholders, investors); he just knew it had to be open source for the betterment of the world. Gave me goosebumps. I watched that segment more than 10 times.
I think it's more about someone trying to do the most good that was possible at that time.
I doubt he cares much about prizes or money at this point.
He doesn't have to care much about prizes or money at this point: he won his prize and he gets all the hardware and talent he needs.
Still great of them to do, and as can be seen it's worth it as a marketing move.
One of the smart choices was omitting the whole potential discussion of LLMs (and VLMs) and the fact that that part of the AI revolution was not invented by that group; the film just shows them using and testing it.
One takeaway could be that you could be one of the world's most renowned AI geniuses and not invent the biggest breakthrough (like transformers). But also somewhat interesting is that even though he had been thinking about this for most of his life, the key technology (transformer-type architecture) was not invented until 2017. And they picked it up and adapted it within 3 years of it being invented.
Also, I am wondering if John Jumper and/or other members of the team should get a little more credit for adapting transformers into the Evoformer.
Both are fundamental to their followers.
So it's quite clear that you can't just say "it's DeepMind" while having a figure like Demis at the center of it.
They trust him to lead DeepMind.
I would love to see a real (i.e. outsider) filmmaker do this, e.g. an updated 'Lo and Behold' by Werner Herzog.
They do a great job capturing the "Move 37" moment: https://youtu.be/WXuK6gekU1Y?t=2993
There are a couple parts at the start and the end where a lady points her phone camera at stuff and asks an AI about what it sees. Must have been mind-blowing stuff when this section was recorded (2023), but now it's just the bare minimum people expect of their phones.
Crazy times we're living in.
They should have ended the movie on the success of AlphaFold.
It is interesting that Hassabis has had the same goal for almost 20 years now. He has a decent chance of hitting it too.
Moderators: Please change the link; feels kind of unethical to bait someone into paying for this now.
AI for science is much bigger than RL or generative AI in science. There are several classes of models, like operator learning, physics-informed neural networks, and Fourier neural operators, that perform magnificently well and have killer applications in various industrial settings. Do read the attached paper if you're curious about AI in science.
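To give a concrete flavor of the physics-informed idea, here is a toy sketch of my own (not from the attached paper, and using a polynomial basis instead of a neural net): fit u(x) so that the ODE residual u'(x) + u(x) vanishes at collocation points, with the boundary condition u(0) = 1 folded into the least-squares system. The exact solution is exp(-x).

```python
import numpy as np

deg = 6
xs = np.linspace(0.0, 1.0, 50)  # collocation points

def residual_basis(k, x):
    # contribution of basis function x^k to the ODE residual u'(x) + u(x)
    d = np.zeros_like(x) if k == 0 else k * x ** (k - 1)
    return d + x ** k

# One row per collocation point: sum_k c_k * residual_basis(k, x) should be 0.
A = np.stack([residual_basis(k, xs) for k in range(deg + 1)], axis=1)
b = np.zeros(len(xs))

# Append the boundary condition u(0) = 1 with a large weight.
bc = np.array([0.0 ** k for k in range(deg + 1)])  # [1, 0, 0, ...]
A = np.vstack([A, 100.0 * bc])
b = np.append(b, 100.0)

c, *_ = np.linalg.lstsq(A, b, rcond=None)
u = lambda x: sum(ck * x ** k for k, ck in enumerate(c))
print(u(1.0), np.exp(-1.0))  # the fit should closely track exp(-x)
```

A real PINN replaces the polynomial with a neural net and the least-squares solve with gradient descent on the same residual-plus-boundary loss, but the structure of the objective is the same.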
He is famously a North London lad, but the at-home shots are clearly shot from South London looking north (you can tell by the orientation of The Shard and Bishopsgate out of the window).
I thought that this might have been a "stage home" but it appears to be the same place in the background of various video conferences he is on too, so unless those were staged for the documentary (which seems like a lot of effort), then he lives near Crystal Palace and not Highgate?
As a Brit, I found it to be a really great documentary about the fact that you can be idealistic and still make it. There are, for sure, numerous reasons to give DeepMind shit: Alphabet, potential arms usage, "we're doing research, we're not responsible". The Oppenheimer aspect is not to be lost; we all have to take responsibility for wielding technology.
I was more anti-Deepmind than pro before this, but the truth is as I get older it's nicer to see someone embodying the aspiration of wanton benevolence (for whatever reason) based on scientific reasoning, than to not. To keep it away from the US and acknowledge the benefits of spreading the proverbial "love" to the benefit of all (US included) shows a level of consideration that should not be under-acknowledged.
I like this documentary. Does AGI and the search for it scare me? Hell yes. So do killer mutant spiders descending on earth post nuclear holocaust. It's all about probabilities. To be honest: disease X freaks me out more than a superintelligence built by an organisation willing to donate the research to solve the problems of disease X. Google are assbiscuits, but Deepmind point in the right direction (I know more about their weather and climate forecasting efforts). This at least gave me reason to think some heart is involved...
we can guarantee that whether its the birth of superintelligence or just a very powerful but fundamentally limited algorithm, it will not be used for the betterment of mankind, it will be exploited by the few at the top at the expense of the masses
because thats apparently who we are as a species
but seriously, its just more comfortable to type. apostrophes and capitals are generally superfluous, we'll and well the only edge case, theyve, theyll, wont, dont etc its just not necessary. theres no ambiguity
i only recently started using full stops for breaks. for years, I was only using commas, but full stops are trending among the right people. but only for breaks, not for closing
I’ll code switch depending on the venue, on HN i mostly Serious Post so my post history might demonstrate more care for the language than somewhere i consider more casual.
every technological advancement that made people more productive and should have led to them having to do less work, only led to people needing to do more work to survive. i just dont see AI being any different
The original comment and you agreeing, struck me as examples of the more open commentary one can see at the weekends.
Do you know how long it took us to get to this point? Massive compute, knowledge, algorithms, etc.
Why are you even on HN if the most modern and most impactful technology leads you to say "I couldn't care less where it's going"?
Just a few years ago there was no way at all to solve image generation, music generation, or chatbots that can actually respond reasonably to you, and in different languages at that.
AlphaFold already helps society today btw.
The illusion that agency 'emerges' from rules like games is fundamentally absurd.
This is the foundational illusion of mechanics. It's UFOlogy not science.
Anyways. I thought the documentary was inspiring. Deepmind are the only lab that has historically prioritized science over consumer-facing product (that's changing now, however). I think their work with AlphaFold is commendable.
Science is exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.
The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (ie they are models), and humans are in our adolescent stage using these toys.
Only analog correlation gets us to agency and thought.
Look around you, look at the absolute shit people are believing, the hope that we have any more agency than machines... to use the language of the kids, is cope.
I have never considered myself particularly intelligent, which, I feel, puts me at odds with much of the HN readership, but I do always try to surround myself with the smartest people I can.
The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency.
Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.
Edit to clarify my question: What useful techniques 1. Exist and are used now, and 2. Theoretically exist but have insurmountable engineering issues?
If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).
Getting even more out there, you could in principle imagine an extremely high fidelity simulation model of humans that gave you detailed explanations of why a drug works but has side effects, and which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large drug-like molecule libraries and just pick successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.
"Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.
So, in the meantime (or perhaps for ever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.
Of course, ANNs need massive amounts of data to "generalize" well, while protein folding had only a small amount available, given the months of effort needed to experimentally determine how any single protein folds. So DeepMind threw the kitchen sink at the problem, apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.
So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100% based on having solved chemistry / molecular geometry would be better.
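For what it's worth, the "determine large-scale structure then refine" idea can be illustrated with a toy annealed-refinement loop of my own devising (purely illustrative; this is not AlphaFold's actual diffusion module): start from very noisy 3-D coordinates and take corrective steps while the injected noise scale anneals to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(10, 3))             # stand-in "true" 3-D structure
x = rng.normal(scale=5.0, size=target.shape)  # very noisy initial guess

# Coarse-to-fine refinement: corrective steps plus injected noise whose
# scale anneals to zero, loosely mimicking a denoising schedule.
for scale in np.linspace(1.0, 0.0, 200):
    grad = x - target                         # gradient of 0.5 * ||x - target||^2
    x = x - 0.1 * grad + scale * 0.05 * rng.normal(size=x.shape)

print(np.abs(x - target).max())  # small after refinement
```

A real diffusion model obviously does not have access to the target; a learned network supplies the corrective step instead, but the coarse-then-fine schedule is the shared intuition.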
btw an excellent explanation, thank you.
"Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"
"Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"
"Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"
One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be)