DudeWTF
DudeWTF@lemmynsfw.com
31 posts • 28 comments

Honestly, may I ask, how do you perceive this?

I have used images to help me learn how training works with AI. It is far easier to see that ass nipples are a mistake than it is to see that poor text training has resulted in a middle-aged woman with excessive hairiness and a passion for gardening now going by the name Harry Potter.

I may have a database of images and trained models that I have used to learn, not your content in particular, and not any particularly good results. I’ve mostly explored why labia come out so badly with Stable Diffusion, and scraped a couple of FTV galleries. I wouldn’t call myself a fan of anyone really. I’m certainly not a mark in this space. My real interest is in other AI applications. Posting trained models of people seems too gray an area for me. At the same time, this is becoming a super powerful tool that essentially expands exposure and likely attracts the type of person that would pay for more. For instance, the recent release of Open Dream makes it possible to do image layering for complex composition. I’m curious about a content creator’s take here.

I don’t want to encourage that. I otherwise thought the output was kind of an interesting perspective and cute. However, I am removing this to avoid pushing the grey area. I was just going to add my previous one and this one as a humorous “getting started with AI” kind of thing. These were the best of the first several dozen failures before they got better.

Yeah OCC participation is best, so long as they can be objective and unbiased. That can be hard for some people when they feel invested in the content.

My rule is to not take mod actions if I am involved in a conversation, and especially if I initiated an engagement with someone. Instead, I’ll send a message to the other mods, ask their opinion, and let them do whatever they feel is needed.

I don’t mind knocking down a bot posting threads in the wrong spot, removing a duplicate post if I see it, or resolving an open report, but I’m here mostly playing with Stable Diffusion and AI stuff. I’m nerding out because playing with the public NSFW SD stuff is much easier than when I first tried it as a tool for CAD design. I’m not interested in becoming a real CC in this space. It is interesting to find clever prompts on here that can be reproduced, and to see what kinds of things work and don’t, but that is all that interests me.

I’m not the type to help source new content or drive engagement here, really. I’m a solid upvote, maybe a comment. If on-topic AI posts were allowed here, I might add something in passing if it fit, but the category isn’t a very interesting thing to generate by itself. I would rather work on stuff like NSFW science fiction characters if I’m focused on a human figure, or look at the original images used in the training weights and try to prompt specific elements taken from those.

Don’t get me wrong, I like the tiny titty type. I’d challenge anyone to make it both naked and intellectually interesting, but otherwise I’m just here to add some level-headed help in the rare instance it’s needed. If that’s not needed, I can disappear into the æther at any time, no hard feelings.

Today’s the first and only time my main account has been useless. There have been minor issues, but nothing like this for me.

These are alternates for this post and notes for a few others that were not worth posting on their own. These notes and posts may seem silly if you’re hosting your own software, where iteration takes a few seconds. For someone online, on a rate-limited account, the perspective is very different.

This is what “tan lines” really does on this instance. It just doesn’t work and becomes clothing like tan bikini bottoms.

This is (very loosely) “leaning back with arms behind back resting on elbows” taken WAY out of context.

Crazy, broken looking results from trying to force a better labia than this instance is capable of producing.

(Loosely) “Fishnet stockings and heels only, with a pink pussy.”

They were the best of ~300 while I was trying different checkpoints and LoRAs. The first one was actually done by a Llama2 7B uncensored chat model in Oobabooga. I have no idea what final prompts it generated. This is one of the only ones without major errors, but the outputs were quite interesting. I’ll share more of what it generated shortly. I made a NSFW chat character that liked “traveling and exhibitionism.” It was a fun mix. Waiting for image outputs driven by offline text generation is like playing fax machine porn games.
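
If anyone wants to try that kind of setup, here is a rough sketch of the pipeline: hand an LLM-written prompt to a locally running A1111 instance over its API. This assumes A1111 was launched with --api on the default port, and the hard-coded string just stands in for whatever the chat model actually produced:

```python
import base64
import requests

# Stand-in for a prompt written by the chat model; in practice the text
# would come from the Llama2 7B character in Oobabooga, not a hard-coded string.
llm_prompt = "a traveler photographing herself on an empty beach at sunset"

# Automatic1111's built-in API (launch the webui with --api to enable it).
resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    json={"prompt": llm_prompt, "steps": 25, "width": 512, "height": 512},
    timeout=300,
)
resp.raise_for_status()

# The API returns generated images as base64 strings.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"llm_prompt_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```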

It isn’t too hard to read the way the scripts parse prompts. I haven’t gone into much detail when it comes to Stable Diffusion. The GUIs written in Gradio, like Oobabooga for text or Automatic1111 for images, are quite simple Python scripts. If you know the basics of code, like variables, functions, and branching, you can likely figure out how the text is parsed. This is the most technically correct way to figure this stuff out. Users tend to share a lot of bad information, especially in the visual arts space, and even more so if they use Windows.

Because the prompt parsing method is part of the script, it is hard to tell you what to do with certainty if we don’t know what software you are using. I think most are compatible, but I don’t know for sure. In the LLM text space, things like characters are parsed differently across various systems.
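
As a rough illustration of what I mean, this is the general shape of A1111-style emphasis parsing, where “(word:1.3)” bumps the weight of that chunk of the prompt. It is a simplified sketch of the idea, not the real parser, which also handles nesting, escapes, and more:

```python
import re

# Simplified sketch: "(text:1.3)" raises the weight of "text"; plain text stays at 1.0.
EMPHASIS = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_emphasis(prompt: str):
    """Return a list of (chunk, weight) pairs in prompt order."""
    chunks, pos = [], 0
    for m in EMPHASIS.finditer(prompt):
        if m.start() > pos:                      # plain text before the match
            chunks.append((prompt[pos:m.start()], 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):                        # trailing plain text
        chunks.append((prompt[pos:], 1.0))
    return chunks

print(parse_emphasis("sunset beach, (tan lines:1.3), film grain"))
# [('sunset beach, ', 1.0), ('tan lines', 1.3), (', film grain', 1.0)]
```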

With Automatic1111, on the text2img page, there is a small red icon under the image that opens a menu in the GUI and lists all the LoRAs you have placed in the appropriate LoRA folder on the host system where you installed A1111. Most of the LoRAs you download that show up on the text2img page will have a small circled “i” icon in one corner; this usually contains a list of the text data that was used to train the LoRA. That text data was associated with each training image, and these are the keywords that will trigger certain LoRA attributes.

With the LoRA menu open, clicking any of the entries automatically adds the tag used to set the strength of the LoRA’s influence on the prompt. This defaults to 1.0, which is always too high; most of the time 0.2–0.7 works okay. You also need the main keyword used to trigger the LoRA added somewhere in the prompt. This can be difficult to find unless you keep that information from wherever you downloaded the LoRA, so personally I rename all of my LoRAs to whatever the keyword is.

Also, you’re likely going to get a lot of LoRAs eventually. Get in the habit of putting an image showing what each LoRA does in the LoRA folder, named the same as the LoRA itself; A1111 will automatically use it as the thumbnail for that entry in the GUI menu. LoRAs are not hard to train either. Try it some time; if you can generate images, you can train LoRAs.
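
Since I rename my LoRAs to their trigger words anyway, a tiny helper like the sketch below can build the strength tag plus keyword and flag files that are missing a preview image. The folder path and file extensions are assumptions based on a typical install, so adjust them to wherever you put A1111:

```python
from pathlib import Path

# Assumed A1111 LoRA folder; point this at wherever you installed the webui.
LORA_DIR = Path("stable-diffusion-webui/models/Lora")

def lora_snippet(name: str, weight: float = 0.5) -> str:
    """Build the prompt fragment for a LoRA that is named after its trigger word."""
    # 1.0 is the default strength but is always too high; 0.2-0.7 is safer.
    return f"<lora:{name}:{weight}> {name}"

# Flag LoRAs without a preview image (same base name as the LoRA file),
# since that image becomes the thumbnail in the text2img menu.
for lora in sorted(LORA_DIR.glob("*.safetensors")):
    previews = [lora.with_suffix(ext) for ext in (".png", ".jpg", ".jpeg", ".webp")]
    if not any(p.exists() for p in previews):
        print(f"no preview image for {lora.stem}")

print(lora_snippet("film_grain", 0.4))   # "<lora:film_grain:0.4> film_grain"
```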

Oobabooga is the GUI for text generation (github). Hugging Face is where the uncensored open source LLMs are downloaded from. Automatic1111 (github) is the GUI for Stable Diffusion.
