The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.
I can’t wait until we find out AI trained on military secrets is leaking military secrets.
I can’t wait until people find out that you don’t even need to train it on secrets for it to “leak” secrets.
Large language models are all about identifying patterns in how humans use words and copying them. Thing is, that’s also how people tend to do things a lot of the time. If you give the LLM enough tertiary data, it may be capable of ‘accidentally’ (read: randomly) outputting things you don’t want people to see.
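A toy sketch of that point, nothing like a real transformer, just the bare pattern-copying idea: a bigram Markov model “trained” on a corpus that happens to contain a made-up secret string (`ALPHA-7` is purely hypothetical, as is the whole corpus). Sampling from observed word-to-word patterns can reproduce the secret verbatim without anyone asking for it.

```python
import random

# Training text that happens to include a hypothetical sensitive string.
corpus = (
    "the quarterly report is due friday . "
    "the launch code is ALPHA-7 . "  # hypothetical secret in the data
    "the meeting is at noon ."
).split()

# Bigram table: word -> list of words that followed it in training.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def sample(seed, length=6):
    """Generate text purely by copying observed word-to-word patterns."""
    rng = random.Random(seed)
    word, out = "the", ["the"]
    for _ in range(length):
        word = rng.choice(table.get(word, ["."]))
        out.append(word)
    return " ".join(out)

# Some fraction of random generations regurgitate the secret verbatim.
leaks = [s for s in (sample(i) for i in range(200)) if "ALPHA-7" in s]
print(f"{len(leaks)} of 200 samples reproduced the secret")
```

Real LLMs are vastly bigger and smarter about generalizing, but documented memorization/extraction attacks work on essentially this principle: rare strings seen in training can come back out during generation.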
I mean, even with ChatGPT Enterprise you can prevent that.
It’s only the consumer versions that train on your data and submissions.
Otherwise no legal team in the world would consider ChatGPT or Copilot.