Z4rK@lemmy.world
Lol thank you autocorrect. Ollama.

Ok, I just don’t see the relevance to this post then. Sure, you’re free to rant about Apple in any thread you want to; it’s just not particularly relevant to AI, which was the technology in question here.

I hear good things about GrapheneOS but just stay away from it because of all the stranger. I love Olan’s.

  1. Security / privacy on device: Don’t use devices or an OS you don’t trust. I don’t see what difference on-device AI makes here at all. If you don’t trust your device / OS, then no functionality or data is safe.
  2. Security / privacy in the cloud: The take here is that Apple’s proposed implementation is better than 99% of the cloud services out there. Whether it’s AI or not isn’t really part of it. If you already don’t trust Apple, then this is moot. Don’t use cloud services from providers you don’t trust.

Security and privacy in 2024 is unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.

They have designed a very extensive solution for Private Cloud Compute: https://security.apple.com/blog/private-cloud-compute/

All I have seen from security researchers reviewing it is that it will probably be one of the best solutions of its kind - they basically do almost everything correctly, and extensively so.

The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, though it’s understandable that they didn’t.

To be honest, I’m not sure what we’re arguing - we both seem to have a sound understanding of what LLMs are and what they are not.

I’m not trying to defend or market LLMs; I’m just describing the usability of the current capabilities of typical LLMs.

It goes a tad bit beyond classical conditioning… LLMs provide a much better semantic experience than any previous technology and are great for relating input to meaningful content. Think of it as an improved search engine that gives you more relevant info / actions / tool suggestions based on where and how you are using it.

Here’s a great article that gives some insight into the knowledge features embedded into a larger model: https://transformer-circuits.pub/2024/scaling-monosemanticity/
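The “improved search engine” idea boils down to embedding similarity: text gets mapped to vectors, and nearby vectors mean related meaning, so a query can match a document that shares no keywords with it. Here’s a toy sketch of that mechanism - the 3-d vectors are completely made up stand-ins for what a real embedding model would produce:

```python
# Toy semantic retrieval via cosine similarity. The "embeddings" below are
# invented 3-d vectors, not output from any real model - this only
# illustrates the mechanism, not any particular product's implementation.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings.
docs = {
    "reset your password": [0.90, 0.10, 0.00],
    "update billing info": [0.10, 0.80, 0.20],
    "change login credentials": [0.85, 0.15, 0.05],
}

# Hypothetical embedding for the query "I forgot my password".
query = [0.88, 0.12, 0.02]

# Both password-related documents score high, even though "change login
# credentials" shares no words with the query - that's the semantic part.
best = max(docs, key=lambda d: cosine(query, docs[d]))
```

Keyword search would never connect “I forgot my password” to “change login credentials”; vector similarity does, which is what makes the relating-input-to-meaningful-content part work.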

That’s fair, but you are misunderstanding the technology if you’re bashing Apple’s AI for making macOS less secure. Most likely it will be just as secure as, for example, their password functionality, although we don’t have details yet. You either trust the OS or you don’t.

Microsoft Recall, on the other hand, was designed so badly that there’s no hope for it.

macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.

That’s why it’s at the OS level. For example, for text, it seems to work in any text app that uses the standard text input API, which Apple controls.

The user activates the “AI overlay” at the OS level, not in the app; the OS reads the selected text from the app and sends text suggestions back.

The app is (possibly) unaware that AI has been used or activated, and has not received any user information.

Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
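To make the speculation concrete, the flow would be something like this - again, pure guesswork on my part, since Apple hasn’t published any such API, and every name below is made up:

```python
# Speculative toy model of an OS-level AI overlay. None of these names
# correspond to a real Apple API - this only sketches the trust boundary:
# the OS reads the selection and calls the model; the app never does.

class TextField:
    """Stands in for any app text view using the standard text input API."""
    def __init__(self, text, selection):
        self.text = text
        self.selection = selection  # (start, end) indices into text

    def selected_text(self):
        start, end = self.selection
        return self.text[start:end]

def os_ai_overlay(field, suggest):
    # The OS, not the app, extracts the selection and invokes the model.
    # The app only ever exposes its selection through the standard API.
    selection = field.selected_text()
    return suggest(selection)

# Hypothetical "model" - here just a trivial typo fixer.
field = TextField("teh quick brown fox", (0, 3))
suggestion = os_ai_overlay(field, lambda s: s.replace("teh", "the"))
```

The point of the structure is that the app code contains no AI call at all; it only implements the standard text input API, and the overlay lives entirely on the OS side.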

Yes, definitely. Apple claimed that their privacy protections can be independently audited and verified; we will have to wait and see what’s actually behind that claim.
