
Wololo

Wololo@lemmy.world
0 posts • 9 comments

My first playthrough I sent Lae'zel straight to camp and never spoke to her again. At some random point, before Act 1 even ended, her romance dialogue came up.

My theory is the tadpoles ramp up sex drive to absurd levels, unless you're Shadowheart, apparently. Illithid orgies must be incredible.


It varies, for sure, but I'd say five to ten minutes is average. It's been longer, and it's been (much) faster, but it totally depends on context: energy levels, general excitement, consumption of booze or other intoxicants, etc.

My unfortunate observation is that the longer I'm with a partner, the more comfortable I slowly become, and the less time I'm able to last.


I've had similar experiences lately. Either that, or it decides to review and analyze my code unprompted when I'm trying to troubleshoot a particularly tricky line. I've had a few instances where it borderline tried to gaslight me into thinking it was right and I was wrong about certain solutions. It feels like it happened rather suddenly, too; it never used to do that, save for the odd exception.


Actually, I've had a slightly opposite experience. I found that while asking it general programming questions, it initially tried to explain to me how I could achieve what I was looking to do, and lately it has been jumping straight to writing example code (sometimes even asking for my existing code so that it can modify it).


When you end up resorting to saying things like “Wow, this is wonderful, but… it breaks my code into a million tiny pieces” or “For the love of God, do you have any idea what you're actually doing?”, it's a sign that perhaps Stack Overflow is still your best (and only) ally.


It's entirely possible! I remember listening to a podcast on AI where they mentioned someone once asked the question “Which mammal lays the largest eggs?”, to which the AI responded with elephants, then proceeded to argue with the user that it was right and he was wrong.

It has become a lot easier as I've learned how to coach it in the direction I want, pointing out obvious errors and showing it what I'm really looking to do.

AI is a great tool when it works. As the technology improves, I'm sure it will rapidly get better.


If I remember correctly, it would have been GPT-4, though of course there's always a chance it was 3.5.

Since then I've learned much better ways to steer it into answering my questions more precisely, and that seems to do the trick.


I literally broke down in tears doing this one night. I was running something that would take hours to complete and noticed an issue at maybe 11pm. I tried to troubleshoot and could not, for the life of me, figure it out. I thought to myself, surely ChatGPT can help me figure this out quickly. Fast forward to 3am, on a work night: “No, as stated several times prior, this will not resolve the issue; it causes it to X, Y, Z, when it should be A, B, C. Do you not understand the issue?”

“I apologize for any misunderstanding. You are correct, this will not cause the program to A, B, C. You should…” and then it inserted the same response it had been giving me for several hours.

It was at that moment that I realized these large language models might not currently be as advanced as people make them out to be.


It says someone was charged for returning fire? I'm curious about that: were they arrested for illegal firearm possession, or was it something else?
