Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

For fuck's sake, it's just an algorithm. It's not capable of becoming sentient.

Have I lost it or has everyone become an idiot?

30 points

I don’t know where everyone is getting these in depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Even if we find the limit to LLMs and figure out that sentience can’t arise (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produce sentience.

That’s not something that I’ve bought into yet.

44 points

so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here’s my philosophical examination of the issue.

the thing is, we don’t even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here’s the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can’t. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as ‘illusory’ - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this ‘something’ would be the ‘consciousness’ or ‘sentience’ or, to put it in your oh so smug terms, the ‘soul’ that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from ‘what are qualia’ to ‘what are those illusory, deceitful qualia deceiving’.

consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

Consider information processing, and the kinds of information processing that our brains/minds are capable of.

What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human’s normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term ‘philosophical zombie’ comes from) There is no reason to assume that an information processing system that contains information about itself would have to be ‘aware’ of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).

and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

so the options we are left with in terms of conclusions to draw are:

  1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
  2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
  3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)

And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.


here’s the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can’t.

Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness. But we can point to the results of observation and study, and to less complex systems whose workings we understand better, and say that the human brain most likely functions in the same way, and that these processes produce Qualia.

It’s not absolute proof, but there’s nothing wrong with just saying that from what we understand, this is the most likely explanation.

Unless I’m misunderstanding what you’re saying here, why is the idea that it can’t be done the takeaway rather than it will take a long time for us to be able to say whether or not it’s possible?

and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?

6 points

God damn what a good post


there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)

This is just wrong lol, there’s nothing magical about vertebrates in comparison to unicellular organisms. Maybe the depth of our emotions is greater, but obviously a paramecium also feels fear and happiness and anticipation, because these are necessary for it to eat and reproduce; it wouldn’t do these things if they didn’t feel good

The discrete dividing line is life and non-life (don’t @ me about viruses)

3 points

Your periodically hostile comments (“oh so smug terms the ‘soul’”) indicate that you have a disdain for my position, so I assume you think my position is your option 2, but I don’t ignore self-reports of sentience. I’m closer to option 1: I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.

The third position strikes me as at least as ridiculous as the second. Of course we don’t totally understand biological life, but just saying there’s something “special” is wild. We’re a configuration of non-sentient parts that produces sentience. Computers are also a configuration of non-sentient parts. To claim that there’s no configuration of silicon that could arrive at sentience, but that there is a configuration of carbon that could, is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.

2 points

23 points

I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (e.g. what changes if we determine that life is a simulation?). Like, they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff

I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress

19 points

I think it would be far less confusing to call them algorithmic statistical models rather than AI

12 points

Actually, yeah, you’re on it. These questions are epistemological. They’re also phenomenological. Testing AI is as much about seeing how it responds and reacts as it is about being. It’s silly. When it comes to AI right now, existing is measured by reaction, to see if it’s imitating a human intelligence. I’m pretty sure “I react therefore I am” was never coined by any great, old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make AI and test it believe that reaction measures intelligence.

4 points

Yeah, capitalists will use unreliable tech to replace workers. Even if GPT4 is the end all (there’s no indication that it is), that would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.

22 points

I don’t know where everyone is getting these in depth understandings of how and when sentience arises.

It’s exactly the fact that we don’t know how sentience forms that makes acting like fucking chatgpt is now on the brink of developing it so ludicrous. Neuroscientists don’t even know how it works, so why are these AI hypemen so sure they’ve got it figured out?

The only logical answer is that they don’t and it’s 100% marketing.

Hoping that computer algorithms made to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.

5 points

The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions because the entry level positions would be filled by GPT.

Capitalists don’t have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, it’s actually its capabilities that’ll make a material difference. Even if it never gets to the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.

So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.

4 points

The problem I have with this posture is that it dismisses AI as unimportant, simply because we don’t know what we mean when we say we might accidentally make it ‘sentient’ or whatever the fuck.

Seems like the only reason anyone is interested in the question of AI sentience is to determine how we should regard it in relation to ourselves, as if we’ve learned absolutely nothing from several millennia of bigotry and exceptionalism. Shit’s different.

Who the fuck cares if AI is sentient, it can be revolutionary or existential or entirely overrated independent of whether it has feelings or not.


You’re making a lot of assumptions about the human mind there.

8 points

What assumptions? I was careful to almost universally take a negative stance not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.

20 points

To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allows sentience to arise.

this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn’t mean we are close to anything.

Consider the complexity of a sentient, multicellular organism. That’s trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that’s still more happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity by only looking at the synaptic connections between neurons, ignoring everything else the cells are doing.
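
To make that reduction concrete, here's a minimal sketch (all weights made up for illustration) of what an artificial "neuron" actually is once everything but synaptic weighting is thrown away: a weighted sum pushed through a squashing function, and nothing else — no metabolism, no neurotransmitter chemistry, no glia.

```python
import math

# An artificial "neuron" in its entirety: multiply inputs by weights,
# add a bias, squash with a nonlinearity. This abstraction replaces
# all of a biological neuron's cellular machinery.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # logistic squashing

# Hypothetical weights; a real network learns billions of these.
print(neuron([0.5, -1.0, 2.0], weights=[0.1, 0.4, 0.2], bias=0.0))  # ≈ 0.5125
```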

16 points

I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don’t think something as general as consciousness can be accurately called an algorithm.


Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.

3 points

It seems you’re both implying here that consciousness is necessarily non-algorithmic because it’s non-finite, but then also admitting in another comment that all human experience is finite, which would necessarily include consciousness.

I don’t get what your point is here. Is all human experience finite? Are some parts of human experience “non-categorical”? I think you need to clarify here.


Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.

It just screams of a marketing scam. I’m not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense

2 points

Notice the distinction in my comments between an LLM and other algorithms, that’s a key point that you’re ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn’t have to be LLMs.


That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.

3 points

There’s a distinction between unfalsifiable and currently unknown. If we did someday know how sentience worked, my stance would be falsifiable. Currently it’s not, and it’s fine to admit we don’t know. You don’t need to take a stance when you lack information.


To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

How is that plausible? The human brain has more processing power than a snake’s. Which has more power than a bacterium’s (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests, will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written

chatGPT : freshman-year-“hello world”-program
human being : amoeba
(the : symbol means it’s being analogized to something)

a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us, our lung cells and nephrons and etc are basically little tiny specialized amoebas. ChatGPT doesn’t…do anything, it has no will

17 points

Have I lost it

Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”


Have I lost it

No you haven’t. I feel the same way, though, since the world has gone mad over it. Reporting on this is just more proof that journalism only exists to make capitalists money. Anything approaching the lib idea of a “free and independent press” would start every article by explaining that none of this is AI, that it is not capable of achieving consciousness, and that they are only saying this to create hype


Have I lost it or has everyone become an idiot?

Brainworms have been amplified and promoted by social media; I don’t think you have lost it. This is just the shitty capitalist world we live in.

67 points

They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn’t come here to read shit from AI.

49 points

I’ve seen this several times now; they’re treating the word-generating parrot like fucking Shalmaneser in Stand on Zanzibar. You literally see redd*tors posting comments that are basically “I asked ChatGPT what it thought about it and here…”.

Like it has remotely any value. It’s pathetic.

10 points

They think the people who want to hear from ChatGPT don’t know how to copy paste a post title on their own.

56 points

I said it at the time when chatGPT came along, and I’ll say it now and keep saying it until or unless the android army is built which executes me:

ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.

I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some “real” argument for different types and stages of AI, and my only preemptive response to them is basically “keep your industry-specific terminology inside your specific industries.” The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because… frankly, they’re full of shit and it’s annoying.


What I wanted:

What I got:

14 points

AI has been used to describe many other technologies; when those technologies became mature and useful in a domain, though, they stopped being called AI and were given a less vague name.

Also gamers use AI to refer to the logic operating NPCs and game master type stuff, no matter how basic it is. Nobody is confused about the infected in L4D being of Skynet level development, it was never sold as such.

The difference with this AI push is the amount of venture capital and public outreach. We are being propagandized. To think that wouldn’t be the case if they simply used a different word in their commercial ventures is a bit… Idk, silly? Consider the NFT grift, most people didn’t have any prior associations with the word nonfungible.

10 points
Deleted by creator

ChatGPT does no analysis. It spits words back out based on the prompt it receives based on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
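
The "spits words back out based on a giant set of data" point can be illustrated with a toy sketch — a bigram counter, a drastically simplified stand-in (not the actual GPT architecture) for statistical next-word prediction. It emits plausible continuations purely from co-occurrence counts, with no understanding anywhere:

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then sample from those counts. Real LLMs use learned
# weights over huge corpora, but the principle is the same: predict
# the next token from statistics of past text.
corpus = "the cat sat on the mat and the cat ate".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "the" is followed by "cat" twice and "mat" once in the corpus,
# so "cat" is sampled twice as often as "mat".
print(next_word("the"))
```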

The people that are into this and believe the hype have a lot of crossover with “Effective Altruism” shit. They’re all biased and are nerds that think Roko’s Basilisk is an actual threat.

As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they’re unleashing on the world is cool in their minds, but oh no we’ve done too many lines at work and it shit out something and now we’re all freaked out that maybe it’ll kill us. As long as this technology is used to serve the interests of capital, then the only results we’ll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we’re subjected to, they’ll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.

1 point
Deleted by creator
50 points

This tech is not less than a year old. The “tech” being used is literally decades old, the specific implementations marketed as LLMs are 3 years old.

People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.

23 points

Yeah, I have a friend who was a stat major; he talks about how transformers are new and have novel ideas and implementations, but much of the work was held back by limited compute power, and much of the math was worked out decades ago. Before AI or ML, the field was once called Statistical Learning; there were two or so other names as well, which were used to rebrand the discipline (I believe for funding, but don’t take my word for it).

It’s refreshing to see others talk about its history beyond the last few years. Sometimes I feel like history started yesterday.

20 points

Oh, I didn’t scroll down far enough to see that someone else had pointed out how ridiculous it is to say “this technology” is less than a year old. Well, I think I’ll leave my other comment, but yours is better! It’s kind of shocking to me that so few people seem to know anything about the history of machine learning. I guess it gets in the way of the marketing speak to point out how dead easy the mathematics are and that people have been studying this shit for decades.

“AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.

13 points

It’s pretty crazy to me how, 10 years ago when I was playing around with NLP and training some small neural nets, nobody I talked to knew anything about this stuff and few were actually interested. But now you see and hear about it everywhere, even on TV lol. It reminds me of how a lot of people today seem to think that NVidia invented ray tracing.

35 points

Where do you get the idea that this tech is less than a year old? Because that’s incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90’s. Any recent “breakthroughs” are more about computing power than a theoretical shift.
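
For a sense of how simple the core mathematics is, here's a toy sketch of gradient descent — the decades-old update rule at the heart of neural net training — fitting a one-parameter model to data drawn from y = 2x:

```python
# Gradient descent on a one-parameter model y = w * x: the same update
# rule that, at astronomically larger scale, trains modern networks.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0
lr = 0.05
for _ in range(200):
    # d/dw of the mean squared error (w*x - y)^2 is 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```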

I hate to tell you this, but I think you’ve bought into marketing hype.

1 point
Removed by mod
1 point

“Tech” includes hardware, though.

30 points

I never said that stuff like chatGPT is useless.

I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.

If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.

1 point
Deleted by creator
26 points

Perceptrons have existed since the 60s. Surprised you don’t know this; it’s part of the undergrad CS curriculum, or at least it is at any decent school.
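
For reference, Rosenblatt's perceptron learning rule from the late 1950s fits in a few lines; here's a sketch of the classic algorithm learning logical AND:

```python
# Rosenblatt's perceptron (1958): when the prediction is wrong,
# nudge the weights toward the correct answer. That's the whole rule.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron learns it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```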

26 points

LOL you are a muppet. The only people who think this shit is good are either clueless marks or have money in the game and a product to sell. Which are you? Don’t answer that, I can tell.

This tech is less than a year old, burning billions of dollars and desperately trying to find people that will pay for it. That is it. Once it becomes clear that it can’t make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won’t finance your search engine that talks back in the long term, and it can’t do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.

The only thing it’s been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.

18 points

AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.

So it really is just like us, heyo

1 point

The only thing you agreed with is the only thing they got wrong

This tech is less than a year old,

Not really.

The only people who think this shit is good are either clueless marks, or have money in the game and a product to sell

Third option, people who are able to use it to learn and improve their craft and are able to be more productive and work less hours because of it.

21 points

this tech is less than a year old

what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms, but it’s ahistorical to call this new tech.

1 point
Deleted by creator
47 points

I’m not really a computer guy but I understand the fundamentals of how they function and sentience just isn’t really in the cards here.

30 points

I feel like only silicon valley techbros think they understand consciousness and do not realize how reductive and stupid they sound


I don’t understand how we can even identify sentience.


Nobody does and anyone claiming otherwise should be taken with cautious scrutiny. There are compelling arguments which disprove common theses, but the field is essentially stuck in metaphysics and philosophy of science still. There are plenty of relevant discoveries from neighboring fields. Just nothing definitive about what consciousness is, how it works, or why it happens.

6 points

Nobody does, we might not even be. But it’s pretty easy to guess inorganic material on earth isn’t.


Personally I believe it’s possible that different types of sentiences could exist

however, if chatGPT has this divergent type of sentience, then so does every other computer program ever written, and they’d be like the computer-life-version of bacteria while chatGPT would be a mammal


Sapience isn’t, but all these things already respond to stimuli; sentience is a really low bar.

17 points

Sentience is not a “low bar” and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called “AI” is nowhere near either one.

3 points

I’m not here to defend the crazies predicting the rapture here, but I think using the word sentient at all is meaningless in this context.

Not only because I don’t think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also I think things like ‘qualia’ are impossible to translate in a meaningful way to begin with.

What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?

We might as well be arguing about whether a squirrel is going around a tree.


A piece of paper is sentient because it reacts to my pen

4 points

plenty of things respond to stimuli but aren’t sapient - hell, bacteria respond to stimuli.

47 points

Roko’s Basilisk, but it’s the snake from the Nokia dumb phone game.

22 points

… i liked the snake game :(


We all did…


the_dunk_tank

!the_dunk_tank@hexbear.net

It’s the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
