Yes, let’s keep growing our group here! We’ve been getting new faces/bodies in the OC communities, and I’d like to hear from them here, too :-)
I'm thinking about and playing with ways of writing an LLM roleplaying story that prompts the user to choose between sending interstellar colony ships with 3k people, or being convinced to send a colony ship with only 325 genetically selected humans on 10-year contracts to grow the population past 3k. Then the player is forced to discover the social and moral implications: you can't form relationships, but must coexist in close confines with an AGI telling you who you must partner with for 10 years. I don't know if I can warp it into something mostly interesting and philosophical with a bit of fun; right now it's mostly fun, with the philosophy getting forgotten.
Interesting. Do you plan to ship the LLM with the game or dial up a remote server? Presumably you know how to make the model talk dirty?
Just doing stuff with offline open-source LLMs: Oobabooga Text-generation WebUI, KoboldCpp, Python scripts for a model loader (tokenizer), and models from huggingface.co. There are plenty of NSFW models, but you need to run the larger ones for complex fun. I use either a 70B or an 8×7B model, but you'll need enthusiast-level hardware for those: an i7 12th gen or better, a 16GB+ GPU, and 64GB+ of system memory. There are smaller models that run on lower specs, but they tend to really struggle with complexity and highly constrained stories like this.
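As a rough back-of-envelope for why those specs are needed: a quantized model's memory footprint is roughly parameter count × bits-per-weight ÷ 8, plus overhead for the KV cache and runtime. Here's a minimal sketch of that arithmetic (the flat 20% overhead margin and the ~47B total-parameter figure for an 8×7B mixture are my own assumptions, not exact numbers for any specific model):

```python
# Rough memory estimate for running a quantized local model.
# Assumption: weights dominate, with a flat 20% margin added
# for KV cache and runtime overhead (a simplification).

def model_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed to hold the model, in GB."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * 1.2 / 1e9  # +20% overhead margin

# A 70B model at 4-bit quantization:
print(round(model_size_gb(70, 4), 1))   # → 42.0 GB, well past a 16GB GPU alone
# An 8x7B mixture (assuming ~47B total params) at 4-bit:
print(round(model_size_gb(47, 4), 1))   # → 28.2 GB
```

That's why the usual setup is to offload some layers to a 16GB+ GPU and keep the rest in 64GB+ of system RAM, rather than expecting the whole model to fit in VRAM.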
Interesting. I managed to run a LLaMA model on the CPU of 3 different machines. It did OK, but it had 10% of the complexity of what you're talking about. I don't have enough faith in Moore's Law that we'll all have machines that can run your game, or the internet bandwidth to download it, any time soon. The best bet would be physical media and dedicated hardware like a PS5, but then players would expect something more than text. If you wanted to share this with the world, I suppose you would host a web UI somewhere.