His post: https://discuss.tchncs.de/post/17400082
I’m just convinced all of y’all asking about this are in a huge circle jerk that never ends, but refuses to understand how it all works.
A model is a model. It’s a simplified way of narrowing down thresholds of confidences. It’s a pretty basic sorting algorithm that runs super fast on accelerated hardware.
You people seem to think it’s like fucking magic that steals your soul.
Don’t send information over the wire, and you’re golden. Learn how it works, and stop asking dumb questions like this is all brand new, PLEASE.
There is a difference between a general scare about the AI buzzword and legitimate distrust in online services which are closely connected to American spying institutions (regardless of whether they are AI or not)
If my calorie tracker app appointed a (former) NSA official to their board, I would be looking for alternatives too. This is not about AI, this is about a company with huge sets of private data being closely interconnected with American spy institutions.
Sad that you don’t seem to be able to distinguish between legitimate security questions and badly informed hype/scares as soon as a buzzword like AI occurs
Read the last part of my comment again. Seems I very clearly grasp the concern.
I did read that part, and while this is generally true, there are use cases for such large models. Some of them require the input of personal data (find bugs in my code, formalize this email, scan this picture for text and translate it, draw an anime version of this picture of my friend Tom)
So people being wary of the security implications of such large models are certainly not
in a huge circle jerk that never ends, but refuses to understand how it all works.
Sure, you can just call everyone dumb for using AI the mainstream way (putting in personal data) and attribute it to an unwillingness to understand, but this doesn’t match reality. Most people don’t even understand how an operating system functions, which components work online and which offline, and who can access which of their information, let alone know how “AI” works and what the security implications are.
So if people ask those questions, hoping there are alternatives they can use safely, your answer of “no, u just dumb, machine can’t harm you, its not magic, just don’t put data in”
is not only rude but also misses the point. Most useful/fun/mainstream uses DO, in fact, put in data.
You explaining basic models also doesn’t help, as the concern here is not mainly/only the model, but American spy institutions having access to all prompts you put in, maybe categorizing you into personality clusters depending on your usage of language, or assigning tags for which political stance a user has (and with entities like the NSA I could imagine far worse)
Also, “A model is a model” is not very accurate in such cases. When someone has control and secrecy over every aspect of the model, it would be entirely possible for entities like the NSA to manipulate the content the model puts out in arbitrary directions. A government controlling and manipulating the information the public receives is a red flag for a lot of people (rightfully so, IMHO)
How are people supposed to get better at digital privacy topics if you just tell them to shut up and insult them when they ask questions trying to learn? You acting like you’re in your ivory tower of genius isn’t helping anyone.