Even worse, Reddit itself has been getting infected with corporate AI-generated “recommendations”.
It’s really strange that it’s still at 47 dollars on the stock market. That thing is extremely overvalued.
It’s maybe 1 in 10 comments or so. Though it’s fun seeing posts that have obviously been edited to advertise Lemmy.
That’s why I use “site:reddit.com” instead of just adding “reddit”.
Don’t worry, I’m sure Google will disable that soon, the same way they’ve disabled all the other search syntax that used to make searching a simple and easy task.
Cool, pedant. Append “on google” to my comment then if you need to, since that’s clearly the context we’re talking about here. I’m aware there are other search engines, but context should have made what I was talking about pretty fucking obvious.
Here’s a tip:
site:reddit.com
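e.g. mechanical keyboard recommendations site:reddit.com (the operator just restricts results to that domain, and combines with any query the same way).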
Makes me sad to think that this will soon be about as useful as “site:facebook.com” with the way Reddit is going.
Do you think it will ever be possible to do that for all the Lemmy instances?
Pretty much all content gets federated to lemmy.world, so if you use site:lemmy.world, that’ll do it.
Nah. The best option we have, imo, is a service that indexes everything on one site so traditional search engines can find it. That requires someone to build it, and AFAIK that hasn’t happened.
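For the curious, a minimal sketch of what that indexer could look like, assuming Lemmy’s v3 HTTP API (the /api/v3/post/list endpoint; check the current API docs, since none of this is guaranteed to match your instance’s version):

import requests

INSTANCES = ["lemmy.world", "lemmy.ml"]  # whichever instances you want to mirror

def fetch_recent_posts(instance, limit=50):
    # Lemmy's public v3 API; no auth needed for public posts.
    resp = requests.get(
        f"https://{instance}/api/v3/post/list",
        params={"sort": "New", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["posts"]

for instance in INSTANCES:
    for item in fetch_recent_posts(instance):
        post = item["post"]
        # A real service would render each post as a static, crawlable
        # HTML page with a canonical link back to the source instance.
        print(post["name"], post.get("ap_id"))

The hard parts are deduplicating federated copies across instances and keeping up with edits and deletions, not the fetching itself.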
That or the search engines themselves implement their own fediverse instances just for the purpose of indexing results. At a certain point, if the platform becomes relevant enough, I think we could see that happen.
I think they’d probably prefer instances that they have control over to reduce the avenues for a third party to manipulate the results. Otherwise they have to trust whoever runs the search instances.
Lemmy’s built-in search barely works as it is, so unless some drastic changes happen, it’s a resounding no.
I look forward to Google being forced to down rank any sites with “reddit” in the H1.
I’ve spent a lot of time working in SEO.
Search results like this can drive people away from Google and toward other resources. Google likes money, and this is why they usually try to combat spammers that are gaming the system.
It’s a cat and mouse game that has been happening for years. Organic search spammers find a new thing, then Google tweaks the algorithm to downrank what they’re exploiting.
then Google tweaks the algorithm
Well you don’t have to read Cory’s newest column to understand that Google hasn’t been doing that, because they don’t have to. They do not care, at least not yet, because they have arguably become too big to care.
As useful as Mozilla/5.0; AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.3
Mine is Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0. The joke is, this is already the trimmed version (via the about:config Xorigin and trimming settings), and some pages already have problems with it. If you strip out the OS part, pages like google.com won’t work anymore. And that’s despite the fact that you’re not supposed to parse the UA string anyway…
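For anyone who wants to poke at the same settings, the about:config prefs I mean are (from memory, so double-check the exact names in your Firefox version):

general.useragent.override (replaces the UA string outright)
network.http.referer.XOriginPolicy / network.http.referer.XOriginTrimmingPolicy (the cross-origin trimming bits)
privacy.resistFingerprinting (spoofs the UA and a lot more, wholesale)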
The trick is I took out the actually useful parts, like Chrome, Firefox, Edge, etc., and the OS. All the agents these days include AppleWebKit and Mozilla just so old websites that look for them don’t downgrade the experience.
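For comparison, a stock Chrome UA looks something like this (exact version numbers vary, and Chrome has frozen most of them anyway):

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36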
Yeah, make your user agent absolutely unique. Too much entropy will surely confuse the shit out of server-side HTTP header tracking. 😬
Firefox doesn’t pretend to use AppleWebKit. It’s actually the only one which identifies itself correctly… mostly, at least:
Mozilla/5.0 (X11; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0
While about:support says “Window Protocol: wayland”, so the “X11” bit above isn’t quite accurate. But that’s OK; websites shouldn’t care anyway.
It’s the other browsers that send things like “like Gecko” to sneak past old browser-detection code.