The_Decryptor
They’re “file-like” in the sense that they’re exposed as an fd, but they’re not exposed via the filesystem at all (unlike e.g. Unix domain sockets), and the existing API is just mapped over the sockets one (i.e. write() instead of send(), read() instead of recv()). There’s also a difference in how you create them: you open() a file, but connect() a socket, etc.
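As a small sketch of the fd point: because a connected socket is just a file descriptor, the generic file calls work on it directly (a Unix-domain socketpair here, but the same holds for TCP):

```python
import os
import socket

# A connected pair of sockets; each one is backed by an ordinary fd.
a, b = socket.socketpair()

# The generic file API works on the fd, no socket-specific calls needed:
os.write(a.fileno(), b"hello")    # instead of a.send(b"hello")
data = os.read(b.fileno(), 5)     # instead of b.recv(5)
print(data)  # b'hello'

a.close()
b.close()
```

The asymmetry the comment describes is all in setup and teardown: you still need socket()/connect()/listen()/accept() to get the fd in the first place.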
(As an aside, it turns out Bash has its own virtual file-based wrapper around sockets, so you can do things like cat
a remote port with Bash, something you can do natively in Plan 9)
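The Bash feature in question is the virtual /dev/tcp/HOST/PORT path, which Bash intercepts itself (no such file exists on disk). A sketch of cat-ing a remote port this way; the host and port are just for illustration:

```shell
# Bash opens a TCP connection when you redirect to /dev/tcp/HOST/PORT.
exec 3<>/dev/tcp/example.com/80                         # fd 3: read/write socket
printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
cat <&3                                                 # read the response like a file
exec 3>&-                                               # close the fd
```

Note this is a Bash-only builtin behaviour; plain sh or dash will just report that the path doesn’t exist.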
Really it just shows that “everything is a file” didn’t stand up in practice: more stuff needs special treatment than doesn’t (e.g. interacting with TTYs also has special APIs). It makes more sense to have a dedicated, well-fitting API than a generic catch-all one.
- If your ISP doesn’t do IPv6, then you’re fine (But should look for a better ISP)
- If your ISP does do IPv6, then you should install the patch now (Unless you’re not using IPv6 on the LAN, in which case you’re fine but get a better router/sysadmin)
- If your ISP does do IPv6, but you can’t install the patch for whatever reason, only then should you disable IPv6
The problem is people recommend disabling IPv6 for random unrelated reasons (Like gamers claiming it decreases your IPv4 latency), so yeah MS is going to be insistent that users not fiddle with things they don’t understand because it’s really unlikely they’ll go back and restore that config when it doesn’t actually help.
Yep, our center-left government recently announced plans to keep using natural gas for at least another 25 years
But it’s ok, because we’ll work out carbon capture in the future! Which is the exact same notion that our previous right wing government based their policy on.
Ideally you don’t directly ship the code it outputs; you use it instead of rewriting from scratch, and then slowly clean it up.
Like Mozilla used it for the initial port of qcms (the colour management library they wrote for Firefox), then slowly edited the output into idiomatic Rust. Compare that to something like librsvg, which did a function-by-function port.
Then don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance, it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.
I have never understood why you can delegate a subdomain but not the root domain. I doubt it was a technical issue, because they recently added support for it via SVCB records (but maybe technical concerns actually were fixed in the decades since).
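For context, the apex restriction comes from CNAME records not being allowed to coexist with the SOA/NS records every zone root must carry; the SVCB-derived HTTPS record (RFC 9460) sidesteps this with an “alias mode”. A zone-file sketch, with illustrative names:

```
; A CNAME can't sit at the zone apex, because the apex must also hold
; SOA and NS records, and a CNAME can't coexist with other data:
;   example.com.     3600  IN  CNAME  edge.cdn-provider.example.   ; not allowed
;
; An HTTPS record in alias mode (SvcPriority 0) expresses the same
; "point my root domain at that host" intent legally:
example.com.      3600  IN  HTTPS  0 edge.cdn-provider.example.
www.example.com.  3600  IN  CNAME  edge.cdn-provider.example.     ; fine on a subdomain
```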
Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there’s benefits to adoption, if nothing else for archival and serving old stuff.
Funny thing is, there was talk on the Chrome bug tracker of using just this ability transparently at the HTTP layer (like gzip/brotli compression), but they’re so set on pushing their AVIF format that they backed away from it.
For a game I don’t think it’s the end of the world, but you could end up in a situation where the first check passed, then you go to use the file and that fails, so you end up having to handle the “can’t use file” case twice anyway. But for something like showing a “Continue” menu item you obviously need to check that there’s an existing save to begin with before loading it.
In general, checking first leads to a race condition known as “time-of-check to time-of-use” (TOCTOU); its pitfalls vary greatly, but realistically it isn’t a problem in a lot of cases.
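The two styles above can be sketched like this (the save-file path is hypothetical), showing why check-then-use ends up handling the failure case twice anyway:

```python
import os
import tempfile

# Hypothetical save file that doesn't exist yet.
path = os.path.join(tempfile.mkdtemp(), "save.dat")

# Check-then-use (racy): the file could vanish between the two calls,
# so open() can still fail and needs its own error handling.
if os.path.exists(path):
    with open(path) as f:
        data = f.read()

# Just attempt the operation and handle failure once:
try:
    with open(path) as f:
        data = f.read()
except FileNotFoundError:
    data = None  # the single "can't use file" case
print(data)  # None, since the save file was never created
```

For the “Continue” menu case, the up-front existence check is still fine: you’re deciding what UI to show, not racing to use the file.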
Plan 9 even extended the “everything is a file” philosophy to networking, unlike everybody else that used sockets instead.