magn418

magn418@lemmynsfw.com
7 posts • 25 comments

I’d say, for once, don’t push yourselves. You don’t have to do every sex technique just because other people do it. If neither of you likes it, just let it go and focus on things you do like. And if you want to do it, maybe take it slow. Let the person who is overwhelmed set the pace. Agree on some signals and cues. Don’t be disappointed; just stop and switch to something different. It’s alright if it only lasts for a short moment. Maybe you can work your way up. But don’t push. Sex is about enjoying it, not about doing something specific.

And if you like to play games:

https://bettymartin.org/videos/

That’s about learning to give and receive, about setting boundaries and learning each other’s level of comfort. Maybe it helps. She has a free game (PDF) further down on that page.

Sure, public enemy number one, inflatable edition.

Nice one.

They all have the same breasts. Need more variation 😆

https://lemmynsfw.com/post/4048137

I’d say try MythoMax-L2 first. I think it’s a pretty solid all-rounder. It does NSFW but also other things. Nothing special and not the newest anymore, but easy to get going without fiddling with the settings too much.

If you can’t run models with 13B parameters, I’d have to think about which of the 7B models is currently the thing. I think 7B is the size most people play around with, producing new finetunes, merges and whatnot. But I also can’t keep up with what the community does; every bit of information is kind of outdated after 2-4 weeks 😆
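
If you want something concrete to start from, here’s a minimal sketch using the llama-cpp-python bindings. The GGUF file name is a placeholder (use whichever MythoMax-L2 quant you actually download), and the Alpaca-style prompt format is what that model family usually expects:

```python
from llama_cpp import Llama

# Placeholder file name; use whatever MythoMax-L2 GGUF quant you downloaded.
llm = Llama(
    model_path="./mythomax-l2-13b.Q4_K_M.gguf",
    n_ctx=4096,      # context window
    n_gpu_layers=0,  # raise this to offload layers onto your GPU
)

out = llm(
    "### Instruction:\nWrite the opening scene of a flirty café encounter.\n\n### Response:\n",
    max_tokens=300,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```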

I assume (from your user handle) that you know about the allure of roleplaying and diving into fantasy scenarios. AI can do it to some degree. And -of course- people also do erotic roleplay. I think this has always taken place; people met online to do this kind of roleplay in text chats. And nowadays you can do it with AI. You just tell it to be your synthetic maid or office affair or waifu and it’ll pick up that role. People use it for companionship; it’ll listen to you, ask you questions, reassure you… Whatever you like. People also explore taboo scenarios… It’s certainly not for everyone. You need a good amount of imagination, since everything is just text chat. And the AI isn’t super smart. The intelligence of these models isn’t quite on the same level as the big commercial services like ChatGPT. Those can’t be used anyway, as they’ve all banned erotic roleplay and also refuse to write smutty stories.
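
To give an idea of what “telling it to pick up a role” looks like under the hood, here’s a rough sketch with llama-cpp-python’s chat API. The persona text and model path are made up, and most frontends just wrap something like this for you:

```python
from llama_cpp import Llama

llm = Llama(model_path="./some-roleplay-model.gguf", n_ctx=4096)  # placeholder path

# The system message is where you define the character and the scenario.
resp = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "You are Mara, a playful synthetic maid. Stay in character, "
                "narrate actions in third person, and keep replies short."
            ),
        },
        {"role": "user", "content": "*walks in after a long day* Anything to report, Mara?"},
    ],
    max_tokens=250,
)
print(resp["choices"][0]["message"]["content"])
```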

I agree with j4k3. It’s one of the use-cases for AI I keep coming back to. I like fantasy and imagination in connection with erotics. And it’s something that doesn’t require the AI to be factually correct, or as intelligent as it’d need to be to write computer programs. People have raised concerns that it’s addictive and/or makes people yet more lonely to live with just an AI companion… To me it’s more like a game. You need to pay attention not to get stuck in your fantasy worlds and sit in front of your computer all day. But I’m fine with that. And it makes me less reliant on AI than people who use it to sum up the news and believe the facts ChatGPT came up with…

Hehe. It’s fun. And a different experience every time 😆

I don’t know which models you got connected to. Some are a bit more intelligent, but they all have their limits. I also sometimes get that: I roleplay something happening in the kitchen and suddenly we’re in the living room instead. Or lying in bed.

And they definitely sometimes have the urge to mess with the pacing. For example, deciding that now is the time to wrap everything up in two sentences. It really depends on the exact model; some of them have a tendency to do so. It’s a bit annoying if it happens regularly. The ones trained more on stories and extensive smut scenes do better.

The comment you saw is definitely also something AI does. It has seen text with comments or summaries underneath. Or forum style conversations. Some of the amateur literature contains lines like ‘end of story’ or ‘end of part 1’ and then some commentary. But nice move that it decided to mock you 😂

Thanks for providing a comparison to human NSFW chats. I always wondered how that works (or turns out / feels). Are there dedicated platforms for that? Or do you look for people on Reddit, for example?

The LLMs use a lot of memory. So if you’re doing inference on a GPU, you’ll want one with enough VRAM, like 16GB or 24GB. I’ve heard lots of people like the NVidia 3090 Ti because that graphics card could (/can?) be bought used at a good price for something that has 24GB of VRAM. The 4060 Ti has 16GB of VRAM and (I think) is the newest generation. And AFAIK the 4090 is the newest consumer / gaming GPU with 24GB of VRAM. The gaming performance of those cards isn’t really the deciding factor; any of the somewhat newer models will do. It’s mostly the amount of VRAM on them that is important for AI. (And pay attention: an NVidia card with the same model name can have variants with different amounts of VRAM.)

I think the 7B / 13B parameter models run fine on a 16GB GPU. But at around 30B parameters, 16GB isn’t enough anymore; the software will start “offloading” layers to the CPU and it’ll get slow. With a 24GB card you can still load quantized models of that parameter count.
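
As a rough back-of-the-envelope check (no substitute for a proper calculator): the weights take roughly parameter count times bytes per weight, plus some headroom for the KV cache and buffers. The 20% overhead below is just my own assumption:

```python
def vram_estimate_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Weights only, plus ~20% assumed headroom for KV cache and buffers."""
    return params_billions * (bits_per_weight / 8) * overhead

for params in (7, 13, 30):
    for bits in (16, 8, 4):  # fp16, 8-bit and 4-bit quantization
        print(f"{params:>2}B @ {bits:>2}-bit: ~{vram_estimate_gb(params, bits):.0f} GB")
```

That matches the rule of thumb above: a 4-bit 13B model comes out around 8GB, while a 4-bit 30B model already wants around 18GB, which is why it spills out of a 16GB card but still fits in 24GB.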

(And NVidia’s professional equipment dedicated to AI includes cards with 40GB or 48GB or 80GB. But those aren’t sold for gaming and are also really expensive.)

Here is a VRAM calculator:

You can also buy an AMD graphics card in that range. But most of the machine learning stuff is designed around NVidia and their CUDA toolkit. So with AMD’s ROCm you’ll have to do some extra work, and it’s probably not that smooth to get everything running. And there are fewer tutorials and people around with that setup. But NVidia sometimes is a pain on Linux. If that’s of concern, have a look at ROCm and AMD before blindly buying NVidia.

With some video cards you can also put more than one into a computer, combine them and thus have more VRAM to run larger models.
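
With llama.cpp-based backends that’s mostly a configuration knob. A sketch with llama-cpp-python, assuming two cards (the file name and the even split are just examples):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./some-30b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,          # -1 = offload all layers to the GPUs
    tensor_split=[0.5, 0.5],  # fraction of the model per GPU, here split evenly
)
```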

The CPU doesn’t really matter too much in those scenarios, since the computation is done on the graphics card. But if you also want to do gaming on the machine, you should consider getting a proper CPU for that. And you want at least as much RAM as you have VRAM, so probably 32GB. But RAM is cheap anyway.

The Apple M2 and M3 are also liked by the llama.cpp community for their excellent speed. You could also get a MacBook or iMac. But buy one with enough RAM, 32GB or more.

It all depends on what you want to do with it, what size of models you want to run, how much you’re willing to quantize them. And your budget.

If you’re new to the hobby, I’d recommend trying it first. For example, kobold.cpp and text-generation-webui with the llama.cpp backend (and a few others) can do inference on the CPU (or on the CPU with some of it on the GPU). You can load a model on your current PC with that and see if you like it. Get a feeling for what kind of models you prefer and their size. It won’t be very fast, but it’ll do. Lots of people try chatbots and don’t really like them. Or it’s too complicated for them to set up. Or you’re like me and figure out you don’t mind waiting a bit for the response and your current PC is still somewhat fine.
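
For that kind of low-commitment test, CPU inference is just a parameter. A sketch with llama-cpp-python and a made-up 7B file:

```python
from llama_cpp import Llama

# n_gpu_layers=0 keeps everything on the CPU. Once you know your VRAM budget,
# raise it to offload part of the model and speed things up.
llm = Llama(model_path="./some-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=0)
print(llm("Once upon a time", max_tokens=64)["choices"][0]["text"])
```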

Can you tell us a bit more about it?

  • What kind of LLM do you use?
  • How do you do the prompting? One-shot? Would you like to share that?
  • Do you keep the generated stories private?