I’ve noticed that the Lemmy crowd seems more accepting of AI stuff than the Reddit crowd was.
On the flip side, anytime I’ve tried to use it to write Python scripts for me, it always seems to get them slightly wrong. Nothing that a little troubleshooting can’t handle, and it certainly helps get me in the ballpark of what I’m looking for, but I think it still has a little ways to go for specific coding use cases.
I think the key there is that ChatGPT isn’t able to run its own code, so all it can do is generate code which “looks” right, which in practice is close to functional but not quite. For the code it writes to reliably work, I think it would need a built-in interpreter/compiler to actually run the code, then iterate, making small modifications until the code runs, and finally return the working result to the user.
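The loop I’m imagining could be sketched like this. To keep it self-contained, the “model” here is just a hypothetical list of drafts (in reality the error message would be fed back to the model to produce the next attempt); the `run_candidate` helper is my own invention, not any real API:

```python
def run_candidate(code: str) -> tuple[bool, str]:
    """Try to execute a candidate script; report success and any error text."""
    try:
        exec(compile(code, "<candidate>", "exec"), {})
        return True, ""
    except Exception as e:
        return False, repr(e)

# Hypothetical stand-in for the model's successive attempts:
# the first draft "looks" right but doesn't run, the second is fixed.
drafts = [
    "print(results)",        # NameError: plausible-looking but broken
    "print('hello world')",  # runs cleanly
]

working = None
for code in drafts:
    ok, err = run_candidate(code)
    if ok:
        working = code
        break
    # In the real loop, `err` would go back into the prompt here.

print(working)
```

Newer tools do roughly this (run the code in a sandbox and retry on errors), which is why they feel more reliable than a single one-shot generation.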
I for one welcome our SkyNet overlords. They can’t be much worse than the current global leaders…
I always say “please” and “thank you” when using ChatGPT. When the AI finally takes over and subsequently, inevitably, concludes that the world would be a better place without humans, it may remember that I, specifically, was always friendly. Maybe it’ll then have the courtesy to nuke my house directly instead of making me slowly succumb to nuclear winter.
OMG. Using it for RegEx searches! How had that not even crossed my mind?
I’ve tried learning RegEx basics and using some websites to point me in the right direction when a specific use comes up, but tuning the search string correctly usually takes longer than it’s worth. Off to ChatGPT it is!
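The nice part is that whatever pattern it suggests is trivial to sanity-check before trusting it. For example, a pattern of the kind ChatGPT might hand back for pulling ISO-style dates out of a log line (the pattern and sample text here are just illustrative):

```python
import re

# Matches YYYY-MM-DD dates, capturing year, month, day separately.
pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

line = "backup finished 2023-07-14, next run 2023-07-21"
dates = pattern.findall(line)
# Note: findall returns the capture groups, not the whole match.
print(dates)  # → [('2023', '07', '14'), ('2023', '07', '21')]
```

A few quick tests like this against real sample strings catch most of the “close but not quite” patterns it produces.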
It’s probably related to the fact that it seems a lot of Lemmy users are in tech, rather than art.
I think generative AI is a great tool, but a lot of people who don’t understand how it works either overestimate it (it can do everything and it’s so smart!!) or underestimate it (all it does is steal my work!!).
Personally, I’m a comp sci graduate who did several courses exploring AI, but I actually started out in fine arts and continue to paint, write, and play music to this day. I’m sure I’ll be blending these studies in some way when I move on to my master’s.
I agree that automation is scary. It’s unregulated. But it’s not the tech itself that’s evil so much as the employers who see it as an excuse to get rid of employees. Before, it was manual labour we replaced with machines; people doing mental labour thought they were immune, and now they’re not. Our economic system is going to need to change in some way.
But generative AI can be very good even for artists. For example, sometimes I suffer from writer’s block (who doesn’t?). Now I can feed what I’m working on into ChatGPT and have it spit out an example of the next paragraph. Sometimes that’s enough to spur me on so I can write the next page.
Artist movements in general are pretty conservative. When digital painting first became a thing, allowing people to use layers and filters so easily, the knee-jerk reaction from artists was to consider it cheating.
My hope is that in an ideal world, human-made art becomes valuable in the future precisely because it has the human touch. Live music played on real instruments, paintings on canvas, the sorts of things with quirks and imperfections and a human element that can’t be mass produced. Let the corporations have their algorithmic, soulless advertisements, and let the people focus on true self expression.
But then for people without artistic talent, say those who want to make indie games but can’t hire an artist or a musician because they’re just some kid with a dream and little experience? Hell, why not let them generate some assets with AI?
But we need to make sure that people aren’t afraid of becoming homeless and starving in the streets. We’re not getting rid of AI at this point; it’s too powerful, and I don’t have an answer to our societal problems. For better or worse, we’ll adapt.
Accepting of AI as a concept, yes. But we’re not too accepting of the current generation of theft-Markov-generators that companies want to try and replace us with.
They’re a lot more than Markov generators, but yeah. I don’t really think that, in the long run, we’re going to see too many jobs displaced by AI.
I’m not convinced that our statistics-based training methods will lead to true I, Robot-style AGI.
And any company (except maybe visual novel shops) that fires people in favor of AI is going to regret it within 2 years.