For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.
Same for footage of wars, etc. The journalist taking it effectively signs the videos and vouches for their authenticity. If it turns out to be AI generated, we would lose trust in that person and stop using their material.
Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.
All of those have their issues.
These are valid approaches to the problem, but they are not necessary.
> or some kind of open hybrid - which doesn't exist.
PGP has existed for decades. It doesn't have a great UX, and it isn't used outside of its narrow niches, but it exists and does exactly this.
Can this problem be solved with better software?
I believe it can; it's just that the average person doesn't need PGP. There is no demand for software solving this problem, and therefore no software for it.
The problem can be solved with, say, a store of known PGP public keys together with their history (where each key was acquired) and a simple algorithm that computes trust in a key as the probability of it being valid (or whatever term cryptographers would use here).
You can start with PGP keys of people you know, getting them as QR codes offline and marking them as "high trust", and then pull the keys stored on their devices (lowering those trust levels along the way). There are some issues with calculating the probability: when we pull keys from different sources, we can't know whether their reported trust levels are independent variables. But I believe you can deal with that by pulling the whole chain of transfers of a key, starting from the key's owner and ending at your device.
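To make the idea concrete, here is a minimal sketch of the trust calculation, assuming (as noted above, the questionable part) that the observations really are independent. The function name and the example trust values are made up for illustration:

```python
# Hypothetical sketch: combine per-source trust estimates for a key
# observed through several channels. Under the independence assumption,
# the key is invalid only if every single source is wrong.

def combined_validity(observations):
    """observations: per-source probabilities that the key is valid."""
    p_all_wrong = 1.0
    for p_valid in observations:
        p_all_wrong *= (1.0 - p_valid)
    return 1.0 - p_all_wrong

# A key exchanged offline via QR code (0.99) plus copies pulled from
# two friends' devices (0.8 each, discounted for the extra hop):
print(combined_validity([0.99, 0.8, 0.8]))  # roughly 0.9996
```

The correlation problem the comment mentions is exactly why this naive product is too optimistic: if both friends got the key from the same compromised source, the two 0.8s are really one observation.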
This is just a rough idea of how it could be built. Maybe other solutions are possible. My point is: the ugliness of PGP is a result of PGP being made by nerds, for nerds. There is no demand for PGP-like solutions outside of nerd communities. But maybe LLM-induced corrosion of trust will create that demand?
The point of GP was that any such system will require a central authority; PGP shows that you don't need one. I didn't claim that PGP is perfect, or even good enough, just that it exists and works for some people.
> both of you are honest and can be trusted to act in good faith when not in person
I believe that is not strictly necessary for the scheme to work. It is a limitation of OpenPGP and other implementations that they do not allow converting multiple independent observations of a public key (finding it from different sources, or encountering it used to sign messages) into a measure of trust in the key.
It is not a silver bullet either, but it can alleviate the problem and make it tractable.
The only doubt I have is how this system would stand up against multiple actors trying to undermine it, but I still believe you could get something better than nothing, and probably better than a central authority.
In the interview scenario, generating an email signature is hardly beyond what an AI can do.
You have no prior knowledge of this person or their signature; it's not some government-issued ID. In essence it's just random data unless you know the person to be real.
You are assuming that only you can generate fake AI videos of yourself.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online hasn't been trustworthy for a double-digit number of years now.
> we already experience misleading articles today
Again, this has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
The necessity of inspecting the supply chain for snake oil has been a thing since at least Ea-Nasir (the ancient copper merchant, not Electronic Arts).
We may be dealing with the problem of spam, but the problems have already been there.
Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.
We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.
So it's a spam issue. And normally, while annoying, spam is possible to fight; but on these topics we have built structures that disable the very mechanisms allowing us to fight it. That's worrying.
The fact that someone can instruct their computer to astroturf their flight tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on tv, radio and press for decades and centuries. For a very long time the "traditional media" was aware that their ability to sell astroturfing capacity was hanging on their general trustworthiness. Then the internets rose to prominence, traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation that the internets might be spammed by astroturfers a bit too much, but the backup is broken already. Now that's truly frightening.
Welcome to the post-truth world, where objective references outside of your own village cannot exist.
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person, rich or a politician, gets fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
Just have Apple and Google digitally sign videos and photos recorded from phones and then have Google and Meta, etc display that they are authentic when shown on their platforms.
If the bits that make up the video as recorded by the camera don't match the hash anymore, then you know it was modified. That doesn't mean it's fake; it just means you should apply skepticism when viewing. On the other hand, the ones that have not been modified and still match can be trusted.
Are you saying Apple and Google can't put a secure hash into the output from their camera apps that apply after their internal processing is done?
As far as recording a monitor, I guess, but I feel like you can tell that someone is recording a monitor.
As far as editing, no, it won't work in those cases, but the point here is not to verify ALL videos, but to have an easy way for people to verify important ones. People will learn that if you edit a video it won't verify, so they will be less inclined to edit it if they want to make clear it's authentic. Think of people recording some event going down on the street, or recording a video message for family and friends.
If AI video generation is going to get that good, don't you think it would be a good idea to have a way to record provably authentic videos if we need? Like a police interaction or something. There is no real reason to need to edit that.
Also, could a video hash just be computed every X seconds, and give the user the choice to trim the video at each of those intervals?
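The per-interval idea above can be sketched roughly like this: hash the video in fixed-size chunks, have the device sign the list of digests, and allow trimming only at chunk boundaries. The chunk size and function names here are assumptions, and real signing (device keys, attestation) is omitted:

```python
import hashlib

def chunk_hashes(video_bytes, chunk_size):
    """Hash fixed-size chunks. The device would sign this digest list,
    so trimming at chunk boundaries leaves the rest verifiable."""
    return [
        hashlib.sha256(video_bytes[i:i + chunk_size]).hexdigest()
        for i in range(0, len(video_bytes), chunk_size)
    ]

def verify_trimmed(trimmed_bytes, signed_hashes, chunk_size, start_chunk):
    """Check a clip trimmed to whole chunks against the signed hash list."""
    observed = chunk_hashes(trimmed_bytes, chunk_size)
    expected = signed_hashes[start_chunk:start_chunk + len(observed)]
    return observed == expected
```

In this sketch, trimming the first N chunks off a video still verifies (you just point at a later `start_chunk`), while any re-encode or edit inside a chunk breaks the match.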
i've thought about this off and on, and how to implement it. Not easily, was my general takeaway.
or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken.
e.g. your certs gotta come from somewhere and stay protected, and how do you update and revoke them? Key management for every single camera on every phone, etc.
Or the opposite, where people attempt to get out of trouble by calling real evidence into question by calling it “AI”
Maybe Apple will be able to pull it off? Aka if you FaceTime me I know that you are a person
You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
AI has plateaued, it's not getting better!
Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws.
OP is still correct. No matter what, humans will remain the weakest link... it's in our nature to sympathize, and every one of us has distracted/weak moments. It's just a matter of time; look at the guy who runs haveibeenpwned... getting pwned.
Maybe a good startup idea would be "local verify": a client-side check of whether the online destination is real.
Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].
This sentiment is unpopular, but it's true. Prioritize true connections and experiences.
More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).
Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.
Though there is a nightmarish possibility that people just accept this and willingly interact purely with bots, giving up all real relationships for AI ones.
An identity service is not useful, because that person might be a real person who is just a pipe to an AI, like we see on LinkedIn.
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off.
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
The communication channel is what you trust. So you would call the person using that trusted channel.
It's just like when you get a scam email or popup from "Microsoft" saying your laptop is compromised and you need to call their number ASAP.
Instead, what we have now with AI is people exchanging merely the tokens and being content with the symbol in and of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.
Hmm, this guy may have been on to something
>Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.
--The Gutenberg Galaxy, 1962
Not GP, but there's a lot of damage that can be done with impersonation.
However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.
We're in deep shit.
“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
“Oh it really is you Johnny!”
We’re all going to have to start communicating this way. Best of luck.
I offer consulting services on the side to help professionals hone these skills. $250 / hour.
I do believe it's possible, but as far as I am aware, getting LLMs to say that sort of stuff is still pretty difficult.
There is a phenomenon common to many people. I don't remember its name, if it has one, but it goes like this:
Given enough time to reconsider options, people will flip-flop endlessly between them, grabbing onto various features over and over in a loop.
A summary, courtesy of chess dot com:
> The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.
> According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.
> After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
This is the downside of being a human being.
Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)
If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.
I have personally intervened in one of those when I heard someone reading off a 6 digit number.
That said, pig butchering scams have gotten popular, so manufactured urgency isn't the only way.
Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.
I'm sure I'm not the first to use this technique, but I don't know what it's called.
So at each stage in the loop they are always super convinced of the position.
Actions might include some continuous checks in them, like the famous plan, do, check, act.
Solipsism already tells us that the existence of anything beyond one's present experience is uncertain. So, to make any claim outside a metaphysical argument, almost everything has to be taken for granted as an act of faith.
Conversely, a type-2 moron favors the last thing he/she heard, readily allowing it to dislodge any prior beliefs, values or intentions no matter how well-founded. Here in the US, our current president can be cited as an example of a type-2 moron.
In reality, we all fall into one or both of these categories on occasion, so it's best not to indulge in excessive self-assurance.
It's not even close.
It's easy to "pass the Turing test" for 5 minutes. It's extremely hard if you try to hold a longer, continuous conversation. Anything longer than 10 minutes and the user will immediately know it's not human. Some problems you'll encounter:
- The bot needs to handle all situations, especially the nonsensical ones. This is when the user types "EEEEEEEEEEEEE...", or curse words, repeatedly.
- Who would've thought that it's extremely hard to decide when to stop talking?
- No matter how well you build the "persona" for the bot, they'll eventually converge to the same one, which is that of the llm itself.
- You'll notice that the bot is ignoring something obvious (e.g. it's not remembering past convo), and then give it some instructions to help with that. And then that'll be THE ONLY THING it does.
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up that it can fool our wetware now. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to make sure people were in the habit of comprehending digital signatures and authentication. It isn't though, and now something much more dangerous is out there.
It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.
Then came the time when I wanted to use it. They didn’t remember. Not the phrase. Nor that we ever talked about this in the first place.
Definitely more than 1yr and less than 10. I know that’s a wide range, sry.
Imagine your crying grandson who caused a traffic accident in Mexico and the police planted drugs in his car and now he needs money to pay them off. He is in pain and probably has a concussion (explanation why he can't remember what you are asking), the police are hassling him to get off the phone (time pressure, explanation why the quality of the call is terrible). Will you get hung up on some code word he asked you to memorise years ago and can't even remember anymore? And if you bring it up he just starts crying and tells you that you are his last chance to turn his life around. And you remember when he was a wee little kid and he fell and scraped his knee and you comforted him. Just the thought of pressing him on the code makes you feel like a terrible person. Or not. And then the scammer just finds someone more gullible. Theirs is a numbers game, after all.
> The solution the world's leading experts have landed on is one your grandparents could have come up with: codewords. You, your family, business partners and anyone else you communicate with about important subjects need to come up with a secret phrase that no-one else knows you can use in an emergency to verify each other's identities. Think of it like a convoluted form of the multi-factor authentication we all use to login online.
> "My wife and I have a codeword that we use if we ever get an unusual call," Farid says. "We haven't needed to use it yet, but sometimes I ask just to test her to make sure we don't forget it."
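The hard part of the codeword scheme is the human protocol, not the check itself, but for completeness, here is a minimal sketch of the comparison (the normalization rules are an assumption; `hmac.compare_digest` is overkill for a spoken phrase, but it is the idiomatic way to compare secrets in software):

```python
import hmac

def codeword_matches(heard: str, expected: str) -> bool:
    """Normalize casing and whitespace, then compare as secrets.
    compare_digest avoids timing leaks when this runs in software."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return hmac.compare_digest(norm(heard), norm(expected))

print(codeword_matches("  Blue  Heron ", "blue heron"))  # True
```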
How was this solved, actually? More training data, or was there more to it?
More training on fingers specifically.
Image VAEs (variational autoencoders) are functions that compress the latent (working) image down. The earlier VAEs would mess up fine details; at the most basic level, picture compression artifacts.
Training against bad previous work with six fingers.
Models working at 1024×1024 instead of 512×512 resolution.
https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...
It feels good to connect with humans that way.
The same I am trying to do with my (vibe coded!) site "jetzt" (German for "now"), to which I photo blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.
(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)
https://ars.electronica.art/panic/de/view/reverse-turing-tes...
(I.e. trying to hide the fact that you're human, among a group of AIs)
I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope. <shrug>
Somewhat related: over the last few weeks at work we've started having people calling our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in and the support person they talked to recognized by voice that it wasn't the same person they'd talked to in the past. Now we've had this happen to 3 different accounts, the first two times was people with thick Indian accents, the most recent one was suspected of being AI generated voice.
I truly believe that it is a crime against humanity
Mexed Missaging.
Though, if you believe that Netanyahu is dead, then it will look to you like an attempt to convince you otherwise, but I don't think that was the author's goal. Still, if you are in this situation, try running with the opposite hypothesis and think of ways Netanyahu could prove he is alive. Or, if that seems difficult, imagine any other prime minister who accidentally posted a six-fingered video of herself and now faces the problem of proving that she is alive. You'll get the idea of the article easily.
But at least for friends and family it should be possible to create some flow where every member has a key-combo and you trust them to only sign stuff they wrote etc. and have local mini-keysign parties.
You have far too much faith in humanity. The majority of my extended family members are not smart enough to resist continuous attacks and would eventually not only sign, but give away the key in question.
Simply put I think we are stretching humanity farther than intellectual ability allows in a lot of people.
The people you'd want to be wary of would be the ones that'd look legit.
e.g. "yes i guess i will send my son $400,000 in cash tonight because he's been kidnapped, and i know it's real because there's no AI watermark that all the nice US/EU companies use."
Necessity is the mother of invention.
It’s absolutely asinine that we’re still relying on paper birth certificates and social security numbers, and stupid tax systems. I’m interested in breaking everything we have to see what comes next.
Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it were empty, without any care.
What's the correct answer? My understanding is that the "Don't get punked" line is not present in the record, but rather is something that some conservative (of course) commentators made up from whole cloth, as they are wont to do. If this isn't correct, I'd appreciate a citation.
https://nypost.com/2013/06/28/trayvon-martins-girlfriend-adm...
But also, a really hilarious hill to die on. As if the jury trial wasn't enough, it's been long enough for you to not automatically take the media's narrative.
I still can find no trace of the "Don't get punked" line in any transcripts, though, so I'm still hoping for a pointer to that.
That's why it always falls back to the same tired formalistic clichés, like "Not this, but that", rampant baiting and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.
Tell us more about this axe you appear to need to grind.
But I am very critical of what passes as the modern liberal arts academic establishment. To avoid a very long text, let's say that my view is heavily influenced by Ortega y Gasset.
Because no frontier model is allowed to go against the popular narratives of the day.
But about deepfakes, these exist to re-add 6 fingers. Once you do this, you can claim the video was generated.
https://www.etsy.com/listing/1667241073/realistic-silicone-s...
You people should become more aware of the propaganda around you, you’re too easy to play up like fiddles.
You don't need to imply I'm uncritical because I take issue with one specific complaint.
Perhaps we need tamper-proof, authenticated cameras in all major cities worldwide that publish a livestream 24/7, and you can then stand in front of them to prove your human existence...
This could be something that notaries around the world could offer as a service.
The options I have seen so far were a) using our digital IDs, which is very handy or b) having a bank verify my identity in person with my ID, which is also pretty good.
There's nothing missing technology wise to achieving this but we, at this point, lack the collective will and the regulatory regime. I do foresee a future where this is the norm and that anything you listen to or watch you'll be able to trace back to the device that captured the data.
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.