
FactorSD

FactorSD@lemmy.dbzer0.com
1 post • 48 comments

Hey, if you can’t be consistent at least be honest.


There’s a weird modern-military turn-based strategy game where you fight invading orcs. It’s called Spellcross, and until recently it was only available through Home of the Underdogs. Great game, very XCOM, balls hard.


I’m not familiar with that site specifically, but in principle using a web/cloud tool means that being on mobile has zero impact on the output. You are feeding prompts to a more powerful computer, and it sends back the pictures it generates. So this isn’t a “mobile” problem.

There are two things to keep in mind though - Firstly, using SD is an art in itself. It’s not easy to get good outputs. I know it’s kinda presented as being “just type and awesome comes out”, but typically a lot of work goes into generating good AI artwork. There are a lot of parameters and a lot of possible tools, and you do need to spend some time learning how it all fits together. Secondly, running on someone else’s platform is always limiting in terms of what parameters you can fiddle with. A big chunk of getting good results is being able to use your own preferred embeddings, LORAs and models to get the results you want. SD can do photorealistic aliens and cartoon smut, but it can’t do both on the same settings, and if you can’t change them then you will always be limited.

You don’t necessarily need to move off of mobile, and at least while you are starting out I wouldn’t recommend spending lots of money, but you should think about SD as being a workflow and consider what is convenient. Personally, I would work on a laptop if at all possible, even if you are just using the various cloud versions (HuggingFace etc). That’s just because you are going to do a lot of copying and pasting and granular tweaking of settings. When you have a big prompt and you need to just change one value, having a trackpad is a lot easier than poking at tiny text on a small screen.

I do generally believe that running a personal instance of SD is the way forward in the long term. The real barrier is technical knowledge more than cash/GPU power; setting it up is not easy if you are someone who doesn’t know Python (like me). If you have any device with a mediocre GPU (I started on my laptop’s 3050 Ti) then SD will run slowly, but it will actually run. If you already have that device to use, it’s literally free and you get the benefits of a local instance immediately, like being able to do big X/Y plot runs (leave them overnight if they take too long) to help you learn how parameters work, and being able to try out models and LORAs to get where you want to be.
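
Just to make “running it yourself” a bit more concrete, here’s roughly what a bare-bones local run looks like using the Hugging Face diffusers library, sweeping a single parameter with a fixed seed. The web UIs wrap this kind of loop up for you in their X/Y plot scripts, and the model name and values here are only placeholders:

```python
# Minimal sketch: sweep CFG scale with a fixed seed so only one parameter
# changes between images. Model ID, prompt and values are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # any SD 1.5-class checkpoint
    torch_dtype=torch.float16,
)
pipe.enable_attention_slicing()            # helps a lot on low-VRAM laptop GPUs
pipe = pipe.to("cuda")

prompt = "a knight in armour, full figure, photo, detailed"
for cfg in (4.0, 7.0, 10.0, 13.0):         # the "X" axis of a crude X/Y plot
    image = pipe(
        prompt,
        guidance_scale=cfg,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    image.save(f"cfg_{cfg:.0f}.png")
```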

If you don’t have such a device, you can still dip your toe in without spending a lot of money. I do my SD work on a Shadow.tech cloud PC, and various other services are available. Yes, in this economy throwing 50 bucks around isn’t nothing, but you get a GPU with 20 gigs of VRAM and it runs 10x faster than what I had before (more like 40x, actually: 10 iterations per second instead of 4 seconds per iteration).

You can access any cloud instance via mobile if that’s your bag, although it does not work wonderfully on Shadow, because it’s so focused on giving you a Windows desktop rather than a mobile front end. You could however be a super cool guy and connect your phone to a USB-C hub, then connect it to a mouse, keyboard and monitor.

All just food for thought :D


It’s more complex than that - You aren’t wrong, but there’s a lot more going on. Almost anything made by an employee as part of their job belongs to the company. If Amazon licenses your work to make something based on it, that’s one thing, but if you are a jobbing writer who gets assigned to develop a new series, Amazon will own everything. You get paid in your salary, not in royalties. And, frankly, a lot of creatives are quite happy with that arrangement (since it’s so rare to make money at all).

And that’s why it’s… odd. Because the “creator” is some dude who has already been paid - he has literally received his salary. But the performance of his show does impact him, at least to some degree. Low ratings don’t mean he gets paid less, but they do mean he’s unlikely to earn more in future.


It’s true that SaaS does stop you from owning software… But what good does “owning” a piece of software do you if you can’t get updates anyway? Back in the pre-internet era we got used to software existing as discrete versions, but it hasn’t been like that for a LONG time. As soon as patching became a regular occurrence, “ownership” became a service contract with a CD attached. Then the CD vanished, and it just became a service.

While I do dislike needless “as a service” stuff, that model does genuinely suit a lot of people. It’s not a con job; companies offer this stuff because a lot of customers want it. Most of the companies selling you SaaS use SaaS tools in-house themselves.


I guess YMMV on whether focused is boring or not. I agree - I never really found stimulants to be super interesting, but that’s partly because it was too expensive to do coke just to work on whatever project was on my mind.


Knowledge does want to be free, but it’s a stretch to say Guardians 3 is a unit of “knowledge”. Creative works kinda don’t want to be free; Guardians is only desirable because of the cast and crew’s work, and you acting out the script yourself is not the same at all. We shouldn’t devalue creative labour, even as pirates.

Piracy cuts into the profits of studio investors, and that’s good, without impacting how much actors and crew are paid. Win/win.


There’s nuance in the pirate ranks my dude. Some people don’t really believe in property rights at all, some people think that piracy is acceptable when you can’t afford/obtain the original, some just like to try before they buy.


Most artists never make any money at all…


I too have just started on my LORA-making journey, and I too am interested in, ahem, specialist apparel. My experience is that most LORAs are made by non-enthusiasts who don’t necessarily know how enthusiasts refer to things, and to some degree non-enthusiasts want visual variety so they can churn out “dwarf in armour” and “elf in armour” prompts and get things that actually look different. That is fine for most people, who just want some nice pictures to go along with their D&D campaign or whatever. But if you are a discerning connoisseur then yes, you kinda do need to roll up your sleeves and make it yourself.

There are some guides out there for LORA making - As ever, they are a mix of helpful and not helpful, and you are going to end up having to work things out yourself. You are definitely going to end up wasting a lot of compute time on LORAs that just fail. That’s part of the process. You are going to see a lot of parameters which you don’t understand and that have seemingly absurd values.

Before I jump into the rest of this - I strongly advise you to start out with LORAs that do one specific thing, and only that thing. So, a LORA just for bucket helmets, using just images of bucket helmets. You can make more complex LORAs, but holy crap this is a complex process with a lot of moving parts, so start out with just one thing where you can easily tell whether it’s working and how well.

It is good to hear that you are mentally prepared to manually tag your own images, because this is utterly essential and you need to do a very thorough job of it. When training stuff, the rule is “garbage in, garbage out” and there is no shortcut here. I honestly haven’t found a good tagging methodology yet, but the advice I’m trying to follow is that you tag anything that you DON’T want baked into the trained term, and you don’t tag the things that you DO want included.

So, you have a picture of some reenactor in brigandine - You tag it as “brigandine” (the trigger term itself is the exception to the rule above, since you need something to prompt with), but you don’t tag “rivets” or “visible plates”. You would tag “black leather”, because brigandine could be any colour, so you call out the colour so the AI can see that the colour is separate from the armour design. You would also tag the trousers, the helmet, the person wearing the armour, the background, the lighting, the time of day, and then also add in “good quality” or “cropped” or other terms about the photograph itself. This sounds like overkill, but if you don’t do it right then the LORA will do all kinds of weird things that you didn’t intend.
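
To make that concrete, here’s roughly what the caption sidecar for that reenactor photo might look like in a kohya-style setup, where each image gets a .txt file with the same name (the tags here are obviously made up for illustration):

```python
# Hypothetical caption for "reenactor_01.jpg": the trigger term plus everything
# you want the AI to treat as separable. "Rivets" and "visible plates" are
# deliberately NOT tagged, so they get absorbed into the brigandine concept.
caption = ", ".join([
    "brigandine",            # the trigger term itself
    "black leather",         # colour called out so it doesn't get baked in
    "kettle helmet",
    "brown trousers",
    "middle-aged man",
    "grass field background",
    "overcast daylight",
    "full body shot",
    "good quality",
])
with open("reenactor_01.txt", "w") as f:   # same basename as the image
    f.write(caption)
```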

To give an example - On the first run of my first LORA I was actually kinda shocked that I was getting good results for the garment that I had trained on… But the LORA was also changing the skin tones and the white balance in the image. The training data was skewed towards very warm light and tanned skin tones, and I hadn’t tagged that, so the result was the AI also associated the training terms with olive skin and incandescent light and they couldn’t easily be separated. I had to go back, reprocess the images, retag them and then come back again.

Which brings me to the images - You want the largest, highest-quality images you can find. You want a range from long shots to close-ups, but don’t use anything too close up, because SD needs to see how the armour relates to the rest of the figure.

You don’t need to train on huge sets, but I strongly advise that you grab yourself 100+ images and then aggressively prune that collection down to around 30-50 of the best-quality ones. You should run all of them through some denoising, and for almost all of them you should fiddle with the colour balance. You don’t need to perfectly match colours or anything, but when you see that the light is a bit orange or the image is dark, just change the levels a bit so they are more neutral. You also want to manually crop the images, and (AFAIK) they do have to be 1:1 squares, which often means having to crop figures.
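
If you end up with a big folder to get through, a little script can do the dumb parts. This is just a rough first pass with Pillow (centre crop, resize, crude auto-levels); you’ll still want to hand-crop anything where the figure gets chopped, and the paths and sizes are placeholders:

```python
# Rough first pass over a folder of training images: centre-crop to 1:1,
# resize to 512x512 and apply a very crude levels fix. Hand-fix the rest.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("raw_images")
dst = Path("prepped_images")
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (512, 512), method=Image.LANCZOS)  # centre crop + resize
    img = ImageOps.autocontrast(img, cutoff=1)                 # mild levels correction
    img.save(dst / f"{path.stem}.png")
```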

As for the actual LORA settings - Don’t ask me what they do, I don’t know. I have been kinda kludging together suggestions from different guides and just seeing what happens. I know for certain that my LORAs are training too fast and typically are burning out by about epoch 6 or 7, but I have no idea how to fix that.

I would recommend that while you are learning you set the trainer to save a checkpoint and print a sample every epoch. This is partly because Kohya is fucking twitchy and can sometimes just hang during an 8-hour run, and partly so you can monitor the training in real time and see what is happening. I’ve never had a training session that actually needed to run to the end; the sample images showed training, then a good fit, then blurring and overfitting, and I quit out at that point. When you are saving every epoch you of course keep every version to test out, even if there is a crash or you quit, but you can also do a nice X/Y plot of every version of the LORA next to each other and find where the sweet spot is.
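
If you’d rather do that comparison outside the web UI, the rough shape of it with diffusers is “same prompt, same seed, one image per saved epoch”. The checkpoint names and paths below are just assumptions; use whatever your trainer actually wrote out:

```python
# Sketch: render the same prompt/seed once per saved LORA epoch so you can
# eyeball where the sweet spot is. File names below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a knight wearing brigandine, full body, photo"
for epoch in range(1, 11):
    # adjust to however your trainer names its per-epoch saves
    pipe.load_lora_weights("lora_out", weight_name=f"brigandine-{epoch:06d}.safetensors")
    image = pipe(
        prompt,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(1234),  # fixed seed
    ).images[0]
    image.save(f"epoch_{epoch:02d}.png")
    pipe.unload_lora_weights()   # clear it before loading the next epoch
```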

Also, if a LORA is just for personal use you don’t necessarily need perfection to get results that will work for you. The standard for most published LORAs is that they are very transparent, compose well with other LORAs and work on lots of models. That’s a lot of work, man, and if you aren’t using the LORA that way you won’t even benefit from it.

Instead, I have been using a combination of control nets, latent couples and composable LORA to apply my mediocre quality outputs to specific parts of specific images. It’s a faff, but you can generate a nice knightly figure, freeze his pose, mask off just the torso, specify brigandine, then generate an image that will probably be very good even with amateurish LORA creations. And that way you don’t have to worry too much about your LORA melting people’s faces and turning their hair pink.
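
The full ControlNet/latent couple setup is beyond a quick example, but the “mask off the torso, prompt the garment” step on its own looks roughly like this with diffusers’ inpainting pipeline (the base image, mask and LORA filename are all placeholders):

```python
# Sketch of just the masked repaint step: keep the generated knight, redo the
# torso region with the brigandine LORA applied. Files below are placeholders.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora_out", weight_name="brigandine-000006.safetensors")

base = Image.open("knight.png").convert("RGB")       # the figure you already generated
mask = Image.open("torso_mask.png").convert("RGB")   # white where the brigandine goes

result = pipe(
    prompt="brigandine, black leather",
    image=base,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("knight_brigandine.png")
```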

Here are some of the resources that I’ve used:

https://civitai.com/questions/158/guide-lora-style-training
https://rentry.co/lora_train
https://civitai.com/models/22530/guide-make-your-own-loras-easy-and-free

Godspeed, fellow garment enthusiast!
