But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks that disable Glaze’s protections, including attack methods exposed in June by security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”

34 points

Glaze has always been fundamentally flawed and a short-term bandage. There’s no way to make something appear correct to a human and incorrect to a computer over the long term; the researchers will simply retrain on the new data.

15 points

Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.

The only thing that would really help is GDPR-style fines (calculated as a percentage of revenue, not profit) for any company that trains or knowingly uses models trained on data without explicit consent from its creators.

10 points

That would “help” by basically introducing the concept of copyright for styles and ideas, which I think would likely have more devastating consequences for art than any AI could possibly inflict.

4 points

No, just the concept of getting a say in who can train AIs on your creations.

So yes, that leaves room for a loophole where a human could recreate your work (without making an outright copy), and someone could then train their model on that. It isn’t watertight. But it doesn’t need to be; it just needs to be better than what we have now.

28 points

Reminder that the author of Glaze, Ben Zhao, a University of Chicago professor, stole open-source code to make a closed-source tool that only targets open-source models. Glaze never even worked on Microsoft’s, Midjourney’s, or OpenAI’s models.

10 points

Easy: DDoS those who provide such services.

15 points

Setting aside the hypocrisy, there’s simply no “service” to DDoS here. There’s hardly even a tool. According to the article:

Hönig told Ars that breaking Glaze was “simple.” His team found that “low-effort and ‘off-the-shelf’ techniques”—such as image upscaling, “using a different finetuning script” when training AI on new data, or “adding Gaussian noise to the images before training”—“are sufficient to create robust mimicry methods that significantly degrade existing protections.”

So automatically running a couple of basic Photoshop tools on the image will do it.
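
For illustration, here’s roughly what that kind of “off-the-shelf” preprocessing could look like in Python. The filenames, scale factor, and noise level below are my own assumptions for the sketch, not values from Hönig’s paper:

```python
import numpy as np
from PIL import Image

def scrub(path: str, scale: float = 2.0, noise_std: float = 8.0) -> Image.Image:
    """Upscale an image and add Gaussian pixel noise: two of the
    'low-effort' transforms said to degrade Glaze's protections."""
    img = Image.open(path).convert("RGB")

    # Resampling with a smooth filter blurs away much of the
    # high-frequency adversarial perturbation Glaze embeds.
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)

    # Gaussian noise drowns out whatever signal survives the resize.
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, noise_std, size=arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

scrub("glazed_artwork.png").save("ready_for_training.png")
```

Nothing here is more exotic than what an image editor’s resize and noise filters already do, which is the point.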

I had to check the date on this article because I’m not sure why it’s suddenly news; these techniques for neutralizing Glaze have been mentioned since Glaze itself was first introduced. Maybe Hönig just formalized it?

9 points

Wasn’t Glaze almost immediately circumvented?

6 points

Your work should always include a QR code that leads to goatse.
