6 points

Why tho, or are you trying to be vague on purpose?

72 points

Because you’re training a detector on something that is designed to emulate regular language as closely as possible, and human speech has so much incredible variability that it’s almost impossible to identify whether something has been written by an AI.

You can maybe detect your typical generic ChatGPT-type outputs, but you can customize a conversation with ChatGPT or any of the other much better local models (privacy and control are aspects that make them better), and after doing that you can get radically human-seeming outputs that are totally different from anything ChatGPT will output.

In short, given a static block of text, it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and even if you solve it, the solution will be immediately obsolete the next time someone fine-tunes their own model.
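
To make the steering point concrete, here’s a rough sketch of nudging a locally hosted model away from the default assistant voice through an OpenAI-compatible chat endpoint (the URL, port, and model name are placeholders; servers like llama.cpp and Ollama expose this style of API, but the details will vary):

```python
import json
import urllib.request

# Placeholder endpoint and model name; llama.cpp and Ollama both serve
# an OpenAI-compatible /v1/chat/completions route, but yours will differ.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder
    "messages": [
        {"role": "system",
         "content": "Write like a distracted human: contractions, uneven "
                    "sentence lengths, the occasional typo, never lists."},
        {"role": "user", "content": "Explain why the sky is blue."},
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

The point is that even a one-line system prompt already shifts the output distribution away from whatever the detector was trained on.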

6 points

Yeah, this makes a lot of sense considering the vastness of language and its imperfections (English, I’m mostly looking at you, ya inbred fuck)

Are there any other detection techniques that you know of? What about forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuning produced? It’d have to be not directly noticeable, but easy to calculate, in order to prevent any “distractions” for the users.

18 points

The output is pure text, so you would have to hide the signature in the response itself. On top of being useless, since most users slightly modify the text after receiving it, it would probably have a negative effect on output quality. It’s also insanely complicated to train that kind of behavior into an LLM.
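
For what it’s worth, the published research proposals in this space (statistical “green list” watermarking) hide the signature in word choice rather than in any visible marker. Here’s a toy sketch of the detection side; the hashing scheme and the 50/50 split are simplifications invented for illustration:

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Seeded hash splits the vocabulary 50/50 in every context; a
    # watermarked generator would prefer "green" words when sampling.
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Detection: count how often consecutive word pairs land on green.
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

A generator that preferentially picks green words scores well above the roughly 0.5 a human writer averages, but as noted above, light paraphrasing pushes the score right back to baseline.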

10 points

forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuning produced

Either AI remains entirely in the hands of fucks like OpenAI, or this is impossible and easily removed. AI should be a free, common-use tool, not an extension of corporate control.

22 points

Because AIs are (partly) trained by making AI detectors. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.

-2 points

I’m not sure I’m following your argument here: you keep switching between talking about AI and AI detectors. Each point below is numbered according to the sentences of your previous comment, in order:

  1. Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
  2. Agreed
  3. Right…

I’m having trouble finding support for your claim.

8 points

See Generative Adversarial Network (GAN). Basically, making new AI detectors will always be harder than beating current ones. AI detectors have to somehow find a new “tell”; the target AI need only train itself on the output of the detector to figure out how to trick it.
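
For a concrete picture, here is a minimal GAN loop in PyTorch on toy 1-D numbers standing in for text (the networks, sizes, and data are all illustrative assumptions, not anything from a real text model). Notice the generator update: it trains directly on the detector’s output, so any “tell” the detector finds becomes the exact gradient signal used to erase it:

```python
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
detector = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5               # "human" data: N(5, 2)
    fake = generator(torch.randn(64, latent_dim))   # generated data

    # Detector step: learn to tell real from generated.
    d_loss = (loss_fn(detector(real), real_label)
              + loss_fn(detector(fake.detach()), fake_label))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: train directly on the detector's output, adjusting
    # itself until the detector scores its fakes as "real".
    g_loss = loss_fn(detector(fake), real_label)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```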

7 points

At a very high level, training is something like:

  • generate some output
  • give the output a score based on how much it looks like real human text
  • adjust the parameters slightly to improve the score
  • repeat

Step #2 is also exactly what an “AI detector” does. If someone is able to write code that reliably distinguishes between AI and human text, then AI developers would plug it into that training step in order to improve their AI.

In other words, if some theoretical machine perfectly “knows” the difference between generated and human text, then the same machine can also be used to make text that is indistinguishable from human text.
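
As a toy version of that loop (random-search updates on a fake two-parameter “model”; human_score is a hypothetical placeholder where a real detector would be plugged in):

```python
import random

# Toy "model": two parameters control what it generates.
params = [0.0, 0.0]

def generate(params):
    # Step 1: generate some output (a number standing in for text).
    return params[0] + params[1] * random.gauss(0, 1)

def human_score(output):
    # Step 2: score the output. A real AI detector would be plugged in
    # here; this toy version just rewards outputs near 1.0.
    return -abs(output - 1.0)

def avg_score(params, n=50):
    # Average several samples so the comparison isn't pure noise.
    return sum(human_score(generate(params)) for _ in range(n)) / n

for _ in range(2000):
    # Step 3: nudge the parameters, keep the change if the score improves.
    candidate = [p + random.gauss(0, 0.05) for p in params]
    if avg_score(candidate) > avg_score(params):
        params = candidate
    # Step 4: repeat.

print(params)  # ends up generating whatever the "detector" calls human
```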

-1 points

Because generative neural networks always have some random noise. Read more about it here
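
For instance, here’s a toy sketch of where randomness enters generation (in a GPT-style model it’s sampling noise at decode time, rather than a GAN-style latent input; the words and logits are made up for illustration):

```python
import math
import random

# Toy next-word distribution; real models produce thousands of logits.
logits = {"the": 2.0, "a": 1.5, "this": 0.5}

def sample(logits, temperature=1.0):
    # Convert logits to weights and draw one word at random.
    weights = {w: math.exp(l / temperature) for w, l in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # floating-point edge case

print([sample(logits) for _ in range(8)])  # output varies on every run
```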

3 points

Isn’t that article about GANs?

Isn’t GPT not a GAN?

5 points

It almost certainly has some GAN-like pieces.

GANs are part of the NN toolbox, like CNNs and RNNs and such.

Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at it until things work well enough.

2 points

It’s not even about diffusion models. Adversarial networks are basically obsolete

