1 point

And what is the danger in this? At this point everyone knows AI can make realistic fake content. It's unlikely that someone in, say, a position of power would do anything rash after seeing an AI video, knowing the technology exists. No wars were started over photoshopped images.

4 points

The target isn't people in power; the target of these tools is the general population. Disinformation combined with a lack of critical thinking is already bad enough with just carefully cropped, edited, or out-of-context media. When the new tools can create realistic video with voice-matched audio, more people will be fooled, and plenty of them will happily believe whatever reinforces their existing position.

1 point

Also, if someone in power gets caught doing something bad, they could muddy the waters by claiming it was AI-generated fake content.

1 point

Imagine the president delivering a very important message to the people. Using content generation, a bad actor could insert minor alterations that change the meaning of a key sentence and then spread the altered video organically on social media. That could have dramatic implications for disinformation campaigns.

1 point

Could they not have done that 5-10 years ago?

1 point

Not nearly as convincingly as they can now. Frame generation and speech imitators get better every day. AI today is far better than it was 5 years ago, and 10 years ago models like ChatGPT and Stable Diffusion were things of science fiction.
