I keep seeing posts about this kind of thing getting people’s hopes up, so let’s address this myth.
What’s an “AI detector”?
We’re talking about these tools that advertise the ability to accurately detect things like deep-fake videos or text generated by LLMs (like ChatGPT), etc. We are NOT talking about voluntary watermarking that companies like OpenAI might choose to add in the future.
What does “effective” mean?
I mean something with high levels of accuracy, both highly sensitive (low false negatives) and highly specific (low false positives). High would probably be at least 95%, though this is ultimately subjective.
Why should the accuracy bar be so high? Isn’t anything better than a coin flip good enough?
If you’re going to definitively label something as “fake” or “real”, you better be damn sure about it, because the consequences for being wrong with that label are even worse than having no label at all. You’re either telling people that they should trust a fake that they might have been skeptical about otherwise, or you’re slandering something real. In both cases you’re spreading misinformation which is worse than if you had just said “I’m not sure”.
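To make that concrete, here’s a quick back-of-the-envelope calculation (Python, with illustrative numbers) showing what even a “95% accurate” detector does once you account for how much of the content it sees is actually real:

```python
# Back-of-the-envelope: if a detector flags something as "fake",
# how often is it right? All numbers here are illustrative.
sensitivity = 0.95  # P(flagged | fake)  -> low false negatives
specificity = 0.95  # P(cleared | real)  -> low false positives
prevalence = 0.10   # assume only 10% of the content is actually fake

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

precision = true_positives / (true_positives + false_positives)
print(f"P(actually fake | flagged) = {precision:.2f}")  # ~0.68
```

Even at 95% on both axes, roughly a third of everything flagged as “fake” is actually real, and that’s the optimistic case, before the generators adapt.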
Why can’t a good AI detector be built?
To understand this part, you need to understand a little about how these neural networks are created in the first place. Generative Adversarial Networks (GANs) are a strategy often employed to train models that generate content. They work by pitting two neural networks against each other: one that generates content resembling the existing content, and one that tries to tell the generated content apart from the existing content. The two networks learn in tandem; each time one gets better, the other gets better too.
What this means is that building a content generator and building a fake-content detector are effectively two sides of the same coin. Improvements to one can always be translated, directly and automatically, into improvements to the other. The consequence is that the generator keeps improving until the detector is fooled about 50% of the time, i.e. until its verdict is no better than a coin flip.
Note that not all of these models are trained in exactly this way, but the point is that any of them CAN be: even if a GAN wasn’t originally used, any improvement in detection can always be translated directly into improved generation that beats that detection. This isn’t an ordinary “arms race”, because the turnaround time is so fast that there’s no chance of staying ahead of the curve… the generators will always win.
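If you want to see this mechanically, here’s a minimal toy sketch of that loop (PyTorch, with made-up one-dimensional “data”). The key detail is that the generator’s training signal comes straight out of the detector:

```python
# Toy GAN: the detector's gradient IS the generator's teacher, so any
# improvement to detection automatically becomes improved generation.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # stand-in "real" content
    fake = G(torch.randn(64, 8))           # generated content

    # Detector step: learn to label real as 1 and generated as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: reuse the detector itself to learn to fool it.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# At equilibrium the detector's verdict on generated samples hovers
# around 0.5, i.e. a coin flip.
```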
Why do these “AI detectors” keep getting advertised if they don’t work?
- People are afraid of being saturated by fake content, and the media is taking advantage of that fear to sell snake oil
- Every generator network comes with its own free detector network that doesn’t really work all that well (~50% accuracy), because it was used to train the generator in the first place, so these detectors are ubiquitous among AI labs. That means the people who own the detectors are the SAME PEOPLE who created the problem in the first place, and they want to make sure you come back to them for the solution as well.
There is story after story of students getting shafted by credulous teachers who took one of these AI detectors at face value and decided, based solely on its output, that their students were cheating.
And somehow those teachers are not getting the message that they’re relying on snake oil to harm their students. They certainly won’t see this post, and there just isn’t enough mainstream pushback explaining that AI detectors are entirely inappropriate tools to decide whether to punish a student.
If ChatGPT somehow ends up being the death of social media, I guess it’s a win-win for the human race.
It’ll destroy the fediverse first; big social media companies will be able to hold out longer.
I’ve had documents of my own, and even ones written by my professors, come up as “May be written by A.I.”, which I know isn’t true. I feel bad for the dude who just naturally talks like a robot and gets accused of plagiarism.
Yeah, an internet comment is a bit whatever, but if you’re a student, a plagiarism accusation could get you expelled. That’s life-ruining.
Asking for ID works. Some national IDs can be verified online cryptographically.
Care to expand on that idea? How does verifying an ID help in this situation?
If you want to tell humans from machines, it’s the only method that reliably works. If you want to prevent humans from cheating with machines, use proctoring.
Sure, but this post is about detecting machine-generated content. How does ID verification help there?
Challenge-response. There’s no validating something after the fact unless it was already notarized, and notarization itself involves ID validation.
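Roughly, the shape of that challenge-response, sketched with Python’s `cryptography` package (all keys and names here are illustrative, and real national-ID schemes differ in the details):

```python
# Hypothetical challenge-response: the verifier issues a fresh nonce,
# the ID holder signs it with the key embedded in their ID, and the
# verifier checks the signature against the issuer's public key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

id_card_key = Ed25519PrivateKey.generate()  # stand-in for the ID chip's key
issuer_pubkey = id_card_key.public_key()    # published by the issuing state

nonce = os.urandom(32)                 # fresh challenge, never reused
signature = id_card_key.sign(nonce)    # happens live, not after the fact

issuer_pubkey.verify(signature, nonce)  # raises InvalidSignature on forgery
print("ID verified for this session")
```

The freshness of the nonce is what rules out after-the-fact validation: an old recording has no way to produce a valid signature over a challenge it has never seen.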
This assumes the nation-states issuing the ID have no incentive to cheat. That’s often not a safe assumption.
Once someone has validated their ID, that can just be added to the deepfake. I’m not seeing how needing a few extra seconds of fakery is going to solve anything.
Unless something like TOTP-based identification is added, with the current date and time displayed alongside it, there’s no real benefit to identification.
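For what it’s worth, the TOTP idea is simple enough to sketch with just the Python standard library (RFC 6238 style; the secret below is obviously made up):

```python
# Minimal TOTP: a code derived from a shared secret and the current
# 30-second window, so a pre-recorded fake can't simply replay it.
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

shared_secret = b"not-a-real-secret"  # provisioned when the ID is issued
print(totp(shared_secret))            # displayed on screen next to the person
```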
There is an existing realtime, interactive online validation process for those IDs that can’t be verified cryptographically. No, you can’t deepfake that right now. Nor anytime soon.
There could be a regulation mandating that all AI tools and services encode a watermark into everything they produce, but of course it would be hard to actually implement and enforce.
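For text, the usual proposal along these lines is a statistical watermark rather than visible metadata. Here’s a heavily simplified sketch of the “green list” scheme (Kirchenbauer et al.), with a made-up key and whitespace splitting standing in for a real tokenizer:

```python
# A keyed hash deterministically assigns each (previous token, token)
# pair to a "green" or "red" half of the vocabulary. A compliant
# generator would prefer green tokens; a detector holding the key
# just counts how often the text lands in the green half.
import hashlib

KEY = b"shared-watermark-key"  # known only to generator and detector

def is_green(prev_token, token):
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text hovers near 0.5; watermarked text sits well above.
print(green_fraction("the quick brown fox jumps over the lazy dog".split()))
```

The catch is exactly the enforcement problem: it only covers compliant tools, and paraphrasing or an open model with the watermarking removed strips it right out.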
Good summary of the issues. I’ve been fairly disappointed with what a lot of people think AI text generators are good for: a replacement for search engines, a magic oracle that can tell you any fact, something to write legal briefs. And the people who generate documents and then don’t even proofread or fact-check them before using them for something important… Some uses are good, like basic code generation for programming tasks, but many are just silly.
The instances where some professor, with zero clue about how AI text generation works or the issues you outline here, tells a student “My AI detector said this was generated!” have been absurd. One professor with obvious, serious misunderstandings told a student, “I asked ChatGPT if it wrote this and it said yes.”
I think Bing did a pretty good job of coming up with name suggestions for some Sims characters. Playing with a virtual dollhouse is on the more harmless end of the spectrum, but obviously people want to try LLMs on all sorts of tasks where the stakes are much higher and the consequences could be severe.
The more you use it, the more you’ll begin to understand how much you can or cannot trust an LLM. A sensible person would become more suspicious of the results, but people don’t always make sensible decisions.
The biggest issue with publicly available ML-based text tools is that they’re American-centric. Detecting ChatGPT in the UK is simple: it produces text with American spelling. And if you live outside the English-speaking world, like most humans do, that trick is completely useless.
ChatGPT speaks other languages. It’s actually a really good translator.
I just asked it to describe an organization using UK English and it indeed used ‘organisation’ instead (didn’t check for other words).
Can it understand and create new compound words (in a language like German)? That’s an issue I have with most spell checkers and translators as well; they force the language to be more like English.
I just ran a quick experiment, asking it to create compound German words. Easy. But maybe it just memorized them? So I asked it to create double-compound words (consisting of 2*2 = 4 words), and it delivered:
Of course! Here are some compound words made of compound words:
- Sonnenblumenfeldweg
- Wassermelonensaftglas
- Handschuhfachdeckel
- Zahnarztpraxisrezeption
- Flugzeugsitzplatzreservierung
- Straßenbahnhaltestellenschild
- Regenschirmgriffpolster
- Apfelbaumpflanzloch
- Fernseherfernbedienungsknopf
- Hausaufgabenerledigungszeit
These words may be a bit unusual, but they follow the rules of the German language and are each composed of four words.
They all seem legit. Most are rare; as a native German reader, I think this was the first time I’d ever read “Wassermelonensaftglas”. Sadly, the next iteration (compound words made from compound words made from compound words) failed; it just added one more word.
@ChatGPT@lemmings.world What do these words mean?
- Sonnenblumenfeldweg
- Wassermelonensaftglas
- Handschuhfachdeckel
- Zahnarztpraxisrezeption
- Flugzeugsitzplatzreservierung
- Straßenbahnhaltestellenschild
- Regenschirmgriffpolster
- Apfelbaumpflanzloch
- Fernseherfernbedienungsknopf
- Hausaufgabenerledigungszeit
- Wassermelonensaftweg
Sure! Here are the meanings:
- Sonnenblumenfeldweg: A path leading through a field of sunflowers.
- Wassermelonensaftglas: A glass filled with watermelon juice.
- Handschuhfachdeckel: The lid of a glove compartment, usually found in a vehicle.
- Zahnarztpraxisrezeption: The reception area of a dental practice.
- Flugzeugsitzplatzreservierung: The process or confirmation of reserving a seat on an airplane.
- Straßenbahnhaltestellenschild: A sign marking a tram stop.
- Regenschirmgriffpolster: A pad on the handle of an umbrella for added comfort.
- Apfelbaumpflanzloch: A hole dug for planting an apple tree.
- Fernseherfernbedienungsknopf: A button on a remote control used to operate a television.
- Hausaufgabenerledigungszeit: The time set aside for completing homework.
- Wassermelonensaftweg: Could suggest a path along which watermelon juice is sold or consumed, but this is more of an assemblage of words and has no common or standard meaning.
Neat, those are pretty reasonable definitions for all of them, even #11 (which I’m pretty sure isn’t in its training data, unlike the others, which all have some matches on Google).
Not to mention that this “AI” is in no way actually AI. It’s just ML taken to a new level.
There is a fundamental difference between recombinant regurgitation and creation.
Who is downvoting this? lol. If you are paying for these services, you are being grifted.
Well written, thanks! I like how you build up with useful explanations but also quickly get to the gist.
> You’re either telling people that they should trust a fake that they might have been skeptical about otherwise, or you’re slandering something real.
This insight scares me. Deep Fakes are About to Change Everything (Johnny Harris) also went over this. Maybe the biggest threat is not that indistinguishable deepfakes become possible (which is scary enough on its own), but that trust in real documents is easily eroded. The example in the video: a bad deepfake of a politician pops up and is dismissed, but some amount of distrust and skepticism about genuinely real documents sticks. It seems we’re doubling down on a post-truth society.