TikTok’s parent company, ByteDance, has been secretly using OpenAI’s technology to develop its own competing large language model (LLM). “This practice is generally considered a faux pas in the AI world,” writes The Verge’s Alex Heath. “It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used ‘to develop any artificial intelligence models that compete with our products and services.’”

70 points

Training one AI with the output of another AI will just make an even crappier AI.

36 points

Ever since that paper about “model decay,” this has been a common talking point, and it’s greatly misunderstood. Yes, if you just repeatedly cycle content through AI training over successive generations, you get AIs that lose fidelity. But that’s not what any real-world training regimen using synthetic data does. The helper AI is usually used to process input data. For example, if you’re training an AI to respond in a chat-like format, you could take raw non-conversational text (like a book) and have the helper AI create a conversation about that content for the new AI to learn from. Or, to take a real-world example, DALL-E 3 was trained by having a helper AI look at pictures and write detailed text descriptions of them, which were then used as the captions associated with the images during training.
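The “helper AI processes input data” pattern can be sketched in a few lines. This is a toy stand-in, not any lab’s actual pipeline: `helper_rewrite` is a trivial template where a real system would call a helper LLM, but the data flow (raw text in, chat-format training examples out) is the point.

```python
# Toy sketch of the "helper AI" pattern: a helper model turns raw text into
# a new format (chat-style pairs) for the target model to learn from.
# helper_rewrite is a hypothetical template stand-in for a real LLM call.

def helper_rewrite(passage: str) -> list[dict]:
    """Stand-in for a helper LLM that converts raw prose into a
    conversation-format training example."""
    return [
        {"role": "user", "content": f"What does this passage say? {passage}"},
        {"role": "assistant", "content": passage},
    ]

def build_chat_dataset(raw_texts: list[str]) -> list[list[dict]]:
    # Each raw passage becomes one synthetic conversation.
    return [helper_rewrite(t) for t in raw_texts]

book_chunks = ["Call me Ishmael.", "It was a dark and stormy night."]
dataset = build_chat_dataset(book_chunks)
print(len(dataset), dataset[0][0]["role"])  # 2 user
```

Note that the helper never trains the new model on its own free-running output; it only reshapes existing real data, which is why the “model decay” result doesn’t apply.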

OpenAI put these restrictions in its TOS to “pull up the ladder behind them,” preventing rivals from building AIs as good as the ones it already has. Fortunately, it’s not going to work: there are already open LLMs that can be used as “helpers” without needing OpenAI at all. ByteDance was likely just being lazy here.

19 points

There is actually a whole subfield of AI focused on training one model with the output of another, called knowledge distillation.
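The core idea of distillation fits in a short sketch. This is an assumed minimal setup, not any specific paper’s recipe: a “student” linear classifier is trained to match a “teacher’s” temperature-softened output distribution instead of hard labels.

```python
import numpy as np

# Minimal knowledge-distillation sketch: the student learns from the
# teacher's soft probability outputs rather than from ground-truth labels.

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 2 features, 3 classes; the "teacher" is a fixed linear model.
X = rng.normal(size=(200, 2))
W_teacher = np.array([[2.0, -1.0, 0.0], [0.0, 1.0, -2.0]])
T = 2.0  # temperature > 1 softens targets, exposing relative class scores
soft_targets = softmax(X @ W_teacher, T)

# Student starts from scratch, minimizing cross-entropy to the soft targets.
W_student = np.zeros((2, 3))
lr = 0.5
for _ in range(300):
    p = softmax(X @ W_student, T)
    grad = X.T @ (p - soft_targets) / len(X)  # soft cross-entropy gradient
    W_student -= lr * grad

agree = np.mean(
    softmax(X @ W_student).argmax(1) == softmax(X @ W_teacher).argmax(1)
)
print(f"student/teacher agreement: {agree:.2f}")
```

The soft targets carry more information per example than hard labels (how wrong each other class is), which is why distillation can work well rather than just degrading.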

11 points

Depends on how it’s done. GAN (Generative Adversarial Network) training works exactly that way: the networks train against each other, each improving the other over time.

9 points

Works kinda neat with Stable Diffusion tho

9 points

I’ve watched Multiplicity enough times to know you get a slightly less functional copy.

4 points

She touched my peppy, Steve.

-2 points

That wasn’t a documentary, and it wasn’t about machine learning.

7 points

Like photocopying a picture of a turd

2 points

Sounds like what you’d get if you ordered a ChatGPT off of Wish dot com. Cheap knock-offs that blatantly steal ideas/designs and somewhat work are kinda their thing.

2 points

Not necessarily: recent work indicates that filtering the outputs of fine-tuned LLMs greatly improves data efficiency (e.g. phi-1). Further, if you add human selection on top of LLM-generated content, you can get great results: the LLM generation acts as a soft curriculum, with the human selection biasing the data toward higher quality.
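The filter-then-train idea reduces to a simple select-the-best loop. Everything here is hypothetical scaffolding: the latent `quality` field and the noisy `score` function stand in for a learned quality classifier (in the spirit of phi-1’s filtering) or a human rater.

```python
import random

# Toy sketch of filtering synthetic data: score each generated sample with
# an imperfect quality judge, keep only the top slice, and the kept set's
# average quality beats the raw pool's.

random.seed(0)

# Pretend these are LLM-generated training samples with a latent quality.
samples = [{"text": f"sample {i}", "quality": random.random()}
           for i in range(1000)]

def score(sample):
    # Hypothetical stand-in for a learned quality filter or a human rater:
    # it reads the latent quality with some rating noise.
    return sample["quality"] + random.gauss(0, 0.05)

# Keep the top 10% by (noisy) score.
kept = sorted(samples, key=score, reverse=True)[: len(samples) // 10]

mean_all = sum(s["quality"] for s in samples) / len(samples)
mean_kept = sum(s["quality"] for s in kept) / len(kept)
print(f"mean quality: all={mean_all:.2f} kept={mean_kept:.2f}")
```

Even with a noisy judge, selection shifts the training distribution upward, which is the mechanism behind both the filtering and the human-curation variants described above.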
