Michał "rysiek" Woźniak · 🇺🇦
Hacker, activist, free-softie ◈ techie luddite ◈ formerly information security and infrastructure at https://isnic.is/ and https://occrp.org/ ◈ my opinions are my own etc.
(he/him)
profile image: drawing of a head and shoulders of a cat-person, in a space suit.
banner image: long-exposure photo of a large tent, brightly illuminated from inside, looking as if it is made of lava
#foss #libre #privacy #infosec #fedi22
(public toots CC BY-SA 4.0 if applicable)
🇪🇺 🇵🇱 · 🇧🇦 🇮🇸 · 🇺🇦
@ElectronSoup @borari @Spitfire that’s just mathwashing:
https://www.mathwashing.com/
The tool cannot be liable itself, obviously, but the creators of the tool and those who wield it absolutely can, depending on specific circumstances.
The “AI” does not “create independently”. Just like a script with some randomness built in does not “create independently”. Somebody designed and built the tool, somebody decided what training data to use, somebody decided to deploy it. These people are liable.
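To make the analogy concrete, here is a minimal, purely hypothetical sketch of a “script with some randomness built in” (Python, invented names): every word it can emit, and the decision to run it at all, came from a person.

```python
import random

# Hypothetical toy "generator": the randomness does not make it an
# independent creator. A human chose the vocabulary, wrote the rules,
# and decided to run it - and so remains responsible for the output.
SUBJECTS = ["The cat", "A hacker", "The federation"]
VERBS = ["builds", "breaks", "moderates"]
OBJECTS = ["an instance", "a tool", "the protocol"]

def generate_sentence() -> str:
    """Combine human-chosen words at random; nothing here 'creates independently'."""
    return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}."

if __name__ == "__main__":
    print(generate_sentence())
```

Swap the word lists for terabytes of training data and the string template for a neural network, and the chain of human decisions gets longer, but it does not disappear.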
@federico3 you can bet kbin.social is being hammered with insane traffic. Discussion of the Kbin migration got banned and then un-banned on Reddit, so the Streisand effect is at work.
There are other Kbin instances, though there are not many of them:
https://the-federation.info/platform/184
So people need to start setting up Kbin instances to spread the load. 🙂
@vfrmedia @technology yup. It’s ActivityPub all the way down.
@coldredlight @peyotecosmico interesting!
Do you have any thoughts on what kind of mod tooling the Threadiverse needs to make mods’ work easier?
@Thebazilly @VeeSilverball a “SheetCoin”, as it were
@Barbarian772 so? If the cookie tastes sweet, what do I care what sweetening agent is used inside?
@Barbarian772 I don’t have to. It’s the ChatGPT people making extremely strong claims about the equivalence of ChatGPT and human intelligence. I merely demand proof of that equivalence, which they are unable to provide; instead they use rhetoric, parlor tricks, and a lot of hand-waving to divert and distract from that fact.
@Barbarian772 no, GPT is not more “intelligent” than any human being, just like a calculator is not more “intelligent” than any human being — even if it can perform certain specific operations faster.
Since you used the term “intelligent”, though, I would ask for your definition of it. Ideally one that excludes calculators but includes human beings. Without such a clear definition, this is, again, just hand-waving.
I wrote about it in a bit longer form:
https://rys.io/en/165.html
@CorruptBuddha well technically, since we’re nit-picking, I did not make that claim, BobKerman3999 did.
And the claim was about how ChatGPT’s “intelligence” can be understood through the lens of the Chinese Room thought experiment.
Then I was asked to prove that human brains don’t work like Chinese rooms, and that’s a *different* thing. The broader claim in all of this, of course, is that ChatGPT “is intelligent” in the same sense as humans are, and that strong claim requires strong proof.
@Barbarian772 it was shown over and over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.
That’s partially because it does not have a model of the world, an ontology; it cannot *reason*. It just regurgitates text, probabilistically (see the toy sketch below).
So, glad we established that!
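To illustrate what “regurgitates text, probabilistically” means, here is a minimal, hypothetical Python sketch: a hand-written table of next-word probabilities stands in for learned weights. It has no ontology and no model of the world; it only knows which word tends to follow which.

```python
import random

# Hypothetical toy "language model": a hand-written table of next-word
# probabilities. No facts, no reasoning - just "which word tends to
# follow which", sampled at random according to those weights.
NEXT_WORD = {
    "the":   {"cat": 0.6, "room": 0.4},
    "cat":   {"sat": 0.7, "slept": 0.3},
    "room":  {"is": 1.0},
    "sat":   {"quietly.": 1.0},
    "slept": {"soundly.": 1.0},
    "is":    {"empty.": 1.0},
}

def continue_text(word: str, steps: int = 3) -> str:
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    out = [word]
    for _ in range(steps):
        choices = NEXT_WORD.get(out[-1].lower())
        if not choices:
            break
        words, probs = zip(*choices.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat quietly."
```

A real model’s probability tables are learned from training data and are vastly larger, but the sampling step at the end works on the same principle: pick a likely continuation, not a reasoned one.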