39 points

Totally is. Because it makes the AI look and feel much better than the smoke-and-mirrors it actually is.

29 points

The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t have to be robots with guns to screw people over. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT while trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

11 points

That is totally true, but it’s a different danger than the one in the marketing discussed above.

The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

That’s a very different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

3 points

This is my biggest concern. I’m in a position where (potentially in the near future) I see AI being used as an excuse to do work quicker so we can focus on other things, while still having to review the AI’s output before agreeing/signing off. In a strongly regulated field, reviewing for accuracy takes just as long as doing the work yourself, since it comes down to revisions and document numbers, much less making a sound argument that’s actually up to date with that documentation. So either I trust the AI shortcut and open myself up to errors, or I redo all the work for them. No gain in time efficiency, just shorter timelines. I’d rather make something myself and have the AI flag things for me to check, so I’m more sure of my own work. What I do shouldn’t be faster, but it can be more error-free. That would take a lot of training, and retraining with each iteration of documentation change. I could end up a slave to change, with more expectations and no actual improvement in the tools I have (in fact, more risk of issues with the tools being used).

7 points

Frankly, that stuff is already a huge problem and people should be louder about it. So many large companies make you wade through menus of AI chat bots, 30 layers deep, before they’ll let you talk to an actual human to get assistance with a service you pay for. It’s only going to get worse.

-1 points

That’s not a bad thing. Humans really aren’t good decision makers. A system with an incredible amount of input data will be able to draw better conclusions than a person.

Just look at cars.

4 points

Humans are good decision makers; we’re just not good at paying attention for long periods of time. Which is why I think self-driving cars will eventually be better, but they aren’t yet. And those are expert systems (I refuse to call them AI) trained on a well-curated, limited set of data for a limited and specific purpose, which is an important difference from the generalized generative models. More data does not make better systems, especially not more unsorted data.

But here’s another important difference: I can grab the wheel at any time and take over. If we are going to give these systems decision making authority there needs to be an obvious and intuitive override.

3 points

AI is only as unbiased as the data that’s put into it, and that data originates from humans who have their own biases, so humans will simply pass their biases on to the AI that makes the decisions.

I don’t think AI is a good idea.

It just exists as a replacement for the human mind, and with the whole population of Earth we already have more than enough minds to contribute unique ideas to humanity.

Creating AI would just be making some sort of copy of us.

An AI is similar to an impressionable child.

26 points

We thought we were getting Skynet, but instead we got Super Clippy and I Can’t Believe It’s Not Art Theft.

5 points

I for one am grateful it’s just Super Clippy (for now).

4 points

We thought we were getting Skynet, but instead it was “I Can’t Believe It’s Not Art Theft” that triggered the revolution and led us into WWIII.

3 points

Do you see any reason to think enough iterations of random nodes in a large enough network could result in emergent conscious intelligence?

Or are you more of a spiritualist than a materialist when it comes to the mind?

1 point

I can’t say anything about the spiritualist/materialist thing, but there are two things that are clear:

First: just as you’ll never get a work of Shakespeare by randomly stringing letters together in any reasonable time frame, you won’t get consciousness that way either. Even if it’s possible in principle, the number of incorrect permutations is so massive that random trial and error will never be enough in any realistic amount of time.
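The scale behind the “random Shakespeare” point can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch, assuming (illustratively, not from the thread) a 27-symbol alphabet and roughly 5 million characters for the complete works:

```python
import math

# Back-of-the-envelope numbers for the monkeys-and-typewriters argument.
# Assumptions (illustrative): a 27-symbol alphabet (a-z plus space) and
# Shakespeare's complete works at roughly 5 million characters.
ALPHABET = 27
LENGTH = 5_000_000

# The count of possible strings of that length is 27**5_000_000.
# We only compute how many decimal digits that number has, via log10,
# rather than the number itself.
digits = math.floor(LENGTH * math.log10(ALPHABET)) + 1
print(f"27**{LENGTH} is a number with about {digits:,} decimal digits")
```

That works out to a number with around 7.2 million digits, versus roughly 80 digits for the count of atoms in the observable universe, which is the sense in which “random trying will never be enough.”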

Second: transformer networks and all the other generative AI concepts we have today aren’t even trying to create consciousness. They are not the path to general AI.


Programmer Humor

!programmer_humor@programming.dev
