masonlee

masonlee@lemmy.world
4 posts • 12 comments

Hmm. It would be very un-Apple-like (and thus un-Apollo-like) to choose an adjective for an app name.

Here, some big names are working on a standard for chaining digital signatures on media files: https://c2pa.org.

Their idea is that the first signature would come from the camera sensor itself, and every further modification adds to the signature chain.
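
For anyone curious what such a chain could look like, here’s a toy sketch of the concept. It is not the actual C2PA manifest format (real implementations use X.509 certificates and COSE signatures); the field names and keys below are made up, and HMAC stands in for real signing so the example runs with just the Python standard library:

```python
# Toy provenance chain in the spirit of C2PA (NOT the real manifest format).
# HMAC with shared secrets stands in for certificate-based signatures here.
import hashlib
import hmac
import json


def sign_step(prev_chain: list, actor: str, action: str, media_bytes: bytes, key: bytes) -> list:
    """Append one provenance entry covering the media and the chain so far."""
    payload = {
        "actor": actor,        # e.g. "camera-sensor", "photo-editor"
        "action": action,      # e.g. "capture", "crop"
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "prev_hash": hashlib.sha256(json.dumps(prev_chain, sort_keys=True).encode()).hexdigest(),
    }
    signature = hmac.new(key, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return prev_chain + [{"payload": payload, "signature": signature}]


# First entry comes from the camera itself; every later edit extends the chain.
camera_key, editor_key = b"camera-secret", b"editor-secret"  # hypothetical keys
raw = b"...sensor data..."
chain = sign_step([], "camera-sensor", "capture", raw, camera_key)
edited = raw + b" (cropped)"
chain = sign_step(chain, "photo-editor", "crop", edited, editor_key)
print(json.dumps(chain, indent=2))
```

Each entry hashes both the current media bytes and the previous entries, so any later tampering with the file or its history breaks the chain of signatures.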

It’s the further research being built on top of the breakthrough tech behind the chatbot applications that people are worried about. It’s basically big tech’s mission now to build Ultron, and they aren’t slowing down.

Possibly, due to selective pressure. For those interested in the topic, this excellent paper was written for a broad audience and offers a lot to think about: “Natural Selection Favors AIs over Humans” https://arxiv.org/abs/2303.16200 (find link to PDF in the sidebar)

Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)

100%. Autopoietic computronium would be a “best case” outcome, if Earth is lucky! More likely we don’t even get that before something fizzles. “The Vulnerable World Hypothesis” is a good paper to read.

Your worry at least has possible solutions, such as a global VAT funding UBI.

Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.

Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would most AI researchers. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough toward agentification will come, but given the funding now, we should expect it soon. Anyway, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also, “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; I just wouldn’t be quick to dismiss it. Cheers.
