Hi everyone!
I've seen that NixOS has been gaining popularity recently. I really have no idea why, or how this OS works. Can you guys help me understand all of this?
Thanks!
this comment reads suspiciously like it was written by an LLM (eg ChatGPT). was it? please don’t do that!
Do LLMs give citations?
(The citations in this comment appear to be all real links about NixOS, but they are not particularly relevant to the places in the comment where they’re cited.)
@dessalines@lemmy.ml @AgreeableLandscape@lemmy.ml @wazowski@lemmy.ml @k_o_t@lemmy.ml @nutomic@lemmy.ml @kixiQu@lemmy.ml an admin is telling me not to use LLMs. Is this the official stance of this instance? If so, please add it to the rules and let me know so I can find another instance; if not, please choose admins who actually enforce the instance rules instead of making them up.
I don’t know whether just using an LLM is a problem. But in your case I would say the problem is that you used one and didn’t indicate that you did. If you had indicated the answer came from an LLM, then each user could have weighted their trust in it accordingly.
That’s my opinion at any rate.
If OP wanted a response from an LLM, they would have typed their question into an LLM. The least you could do is label it as such.
I use an LLM to edit everything I write. Does this mean I have to label everything as LLM-generated? I am the one doing the job, but in the end, I’m just copy-pasting the output from the LLM.
thanks for clarifying. i’m deleting your generated comment per rule 4 (spamming) as well as two other generated comments you posted elsewhere; if another admin wants to undelete any of these i would be surprised.
please do not post LLM-generated comments without clearly labeling them as such. imo this is common sense and doesn’t need its own rule; rule 4 is sufficient.
Under the soon-to-be-enacted EU AI laws, such a bot would be a limited-risk application (it interacts with humans). The requirements for a text bot aren’t particularly high, but they are non-negotiable from a best-practice POV: state front and centre that the post is AI-generated. It’s also best practice to voluntarily fulfil the criteria required of high-risk systems; I bet the more of them you can fulfil, the less hostile people are going to be.
The Library of Congress has an executive summary of the thing.
(EU sources, alas, are a bit iffy at the moment: there’s the Commission version and the Parliament amendments, and I haven’t seen a consolidated version yet. When will politicians start using a proper VCS?)
The admins did not remove the comment; a community mod did. Mods can impose further restrictions on their communities on top of instance-wide rules (within reason, of course), including banning LLMs. Lemmy.ml at least does not have a blanket ban on LLMs, but generally it’s expected that (1) you don’t post LLM-generated content excessively, since we mainly want to host discussions by humans, and (2) you disclose that it came from an LLM and which one, and preferably add your own comments or analysis to what it says. If it’s a mix of LLM output and your own writing, say so at the start of the comment, but if the community directly disallows LLMs then you shouldn’t post it there at all.