I am working on a simple static website that gives visitors basic information about myself and the work I do. I want it as a way to introduce myself to potential clients, collaborators, etc., rather than relying solely on LinkedIn as my visiting card.

This may sound rather paradoxical given that I am literally going to place (some relevant) details about myself and my work on the internet, but I want to limit the website’s exposure to bots, web scraping and content collection for LLMs.

Is this a realistic expectation?

Also, any suggestions for privacy-respecting yet inexpensive domains I can purchase in Europe would be a great help.

30 points

Speaking from experience, be careful you don’t become over-zealous in your anti-scraping efforts.

I often buy parts and equipment from a particular online supplier. I also use custom inventory software to catalog my parts. In the past, I could use cURL to pull from their website, and my software would parse the website and save part specifications to my local database.

They have since enacted intense anti-scraping measures, to the point that cURL no longer works. I’ve had to resort to having the software launch Firefox to load the web page, then the software extracts the HTML from Firefox.

I doubt that their goal was to block customers from accessing data for items they purchased, but that’s exactly what they did in my case. I’ve bought thousands of dollars of product from them in the past, but this was enough of a pain in the ass to make me consider switching to a supplier with a decent API or at least a less restrictive website.

Simple rate limiting may have been a better choice.

9 points

Try using “curl -A” to specify a User-Agent string that matches Chrome or Firefox.
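A minimal sketch of that flag in use. The user-agent string is just one current Firefox UA taken as an example, and the command is demonstrated against a local file:// URL so it runs without network access; against the real site you would substitute the actual page URL:

```shell
# A typical Firefox user-agent string (illustrative; any current browser UA works)
UA='Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0'

# Against the real site you would run something like:
#   curl -A "$UA" -s https://supplier.example.com/part/12345 -o page.html
# Demonstrated here with a file:// URL so the command works offline:
echo '<html>part specs</html>' > /tmp/page.html
curl -A "$UA" -s 'file:///tmp/page.html'
```

Note this only changes the User-Agent header; sites that fingerprint TLS handshakes or require JavaScript will still detect curl.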

2 points

I probably should have specified that I’m using libcurl, but I did try the equivalent of what you suggested. I even tried setting a list of user agents and cycling through them. None of them worked. A lot of anti-scraping methods use much more complex schemes than just validating the user agent; in some cases, even a headless browser will be blocked.

1 point

Mouser?

24 points

I did this a while back for blocking LLMs, and there are more methods discussed in that thread’s comments.

https://lemmy.world/post/14767952

1 point
Removed by mod
9 points

Are you perhaps an LLM in disguise?

0 points
Removed by mod
1 point

Works for me.

3 points
Removed by mod
23 points

Scrape a bunch of Onion articles, link them together in an index, then post an invisible link from your home page that spiders will follow but humans can’t see.

Write a script to randomize the words in all the articles and link them in too. Then change the image tags to point to random Wikimedia files.
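The word-randomizing step could be as simple as this sketch, assuming GNU coreutils (`shuf`); `article.txt` is a stand-in for one scraped article:

```shell
# Shuffle the word order of an article to produce plausible-looking
# nonsense for crawlers. article.txt stands in for one scraped article.
printf 'man bites dog in local park\n' > article.txt

# one word per line -> shuffle lines -> back to a single space-separated line
tr -s ' ' '\n' < article.txt | shuf | tr '\n' ' ' > garbled.txt
cat garbled.txt
```

The output keeps exactly the original words, just in random order, which is enough to poison the text while still looking like English to a naive scraper.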

If there’s one thing we’ve learned, it’s that there’s very little quality control. Channel your inner Ken Kesey / Merry Prankster. Have fun.

1 point

You suggest luring them away? Did you implement this solution?

0 points

I could, but I personally feel anyone foolish enough to use my blathering deserves the unfortunate consequences.

My idea was for people who felt strongly about keeping their stuff away from the big maws of AI.

12 points

Why not add HTTP Basic Auth to the site? Most web hosts provide a simple way to password-protect a site or directory.

You can have a simple username and password for humans, and it will stop scrapers, since they won’t get past the auth challenge unless they know the details. I’m pretty sure you can even show the login details in the auth dialog, if you wanted to, rather than pre-sharing them.

5 points

As a user, if I saw this when visiting a personal web page, I would close the tab immediately.

3 points

I don’t expect my potential collaborators and clients to create an account with a username and password just to view my relevant details and work.

Or have I not understood your suggestion correctly?

2 points

With .htaccess everyone can use the same credentials, and you can put a message in the popup like “use username admin, password = what’s a duck? as the login”. The other option would be an actual captcha.
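A minimal sketch of such an `.htaccess`, assuming an Apache host that allows overrides; the file paths and credentials are placeholders:

```
# .htaccess in the site root
AuthType Basic
AuthName "Login: admin / answer to: what's a duck?"
AuthUserFile /home/youruser/.htpasswd
Require valid-user
```

The password file would be created with `htpasswd -c /home/youruser/.htpasswd admin`. One caveat: older browsers displayed the `AuthName` realm string in the login prompt, but many modern browsers no longer show it, so you may need to share the hint on the page linking to the site instead.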

10 points

https://en.wikipedia.org/wiki/Robots.txt

This should cover any polite web crawlers, but compliance is voluntary.

https://platform.openai.com/docs/gptbot
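For instance, a robots.txt (advisory only) that asks OpenAI’s GPTBot to stay out while allowing everything else:

```
# robots.txt at the site root
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```

Other AI crawlers publish their own user-agent tokens, so blocking them thoroughly means maintaining a list of such stanzas.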

You might have to put it behind a captcha or something similar to severely limit automated access.

It’s not realistic to assume it won’t get scraped eventually, for example by someone paying people to bypass captchas, or by web crawlers that don’t respect robots.txt. I also don’t know whether Google and Microsoft let you block their AI data collection without also removing your site from web search.


Privacy

!privacy@lemmy.ml


