This looks like one of those WireGuard-based solutions like Tailscale or NetBird, though I’m not sure they are using WireGuard here. They all rely on a public relay for NAT penetration as well as client discovery and, in some instances when NAT penetration fails, traffic relaying. From the usage, this seems to be the case here as well:

Share the local Minecraft server:

$ holesail --live 25565 --connector "holesailMCServer420"

On other computer(s):

$ holesail "holesailMCServer420"

So this would register “holesailMCServer420” on their relay server. The clients could then join this network just by knowing its name, and the relay would help them reach the host of the Minecraft server. I’m just extrapolating from the above commands though. They could be using a DHT for client discovery. But I expect they’d need some form of relay for NAT penetration at the very least.

As for exposing your local network securely, WireGuard-based solutions allow you to change the routing table of the peers, as well as the DNS server used, so that you can assign domain names to IPs that are only reachable from within another local network. In this instance, it works very much like a VPN, except that the connection to the VPN gateway is made through a P2P protocol rather than through a service directly exposed to the internet.
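
For a picture of what that looks like with plain WireGuard, here is a minimal wg-quick peer configuration (a sketch; the addresses, endpoint and key placeholder are made up):

[Interface]
Address = 10.8.0.2/32
# Use a resolver inside the remote network so its private names resolve
DNS = 192.168.1.1

[Peer]
PublicKey = <gateway public key>
Endpoint = gateway.example.com:51820
# Listing the remote subnet here is what adds the route on this peer
AllowedIPs = 192.168.1.0/24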

Though in the case of holesail, I have serious doubts about the “securely” part, as no authentication seems to be required to join a network: you just need to know its name. And there is no indication that choosing a fully random name is enough.

On the topic of exposing sequence numbers in APIs, this has been a security issue in the past. Here is one instance I remember: https://www.reuters.com/article/us-cyber-travel-idUSKBN14G1I6/

From the article:

Two of the three big booking systems - Amadeus and Travelport - assign booking codes sequentially, making brute-force computer guesswork easier. Of the three, Amadeus, through its web portal CheckMyTrip, is especially vulnerable, Nohl said.

PNRs (flight booking codes) have many more security issues, but at least nowadays their sequential aspect should no longer be exposed.

So that’s one more reason to be careful when exposing DB IDs in APIs, even when converted to a natural-looking key or at least something easier to remember.
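
One way around it, as a sketch: keep the sequential primary key internal and expose only a random, opaque identifier (hypothetical Node/TypeScript, Node 15+ for base64url; the function name is made up):

import { randomBytes } from "crypto";

// Store this alongside the sequential primary key and expose only this
// value in URLs and API responses; 9 random bytes = 72 bits of entropy,
// far beyond what brute-force guessing can enumerate.
function newPublicId(): string {
  return randomBytes(9).toString("base64url");
}

console.log(newPublicId()); // e.g. "3kQ9vV2p7LxG" (9 bytes -> 12 url-safe chars)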

I would not have expected anyone to go to ASUS’ office to press the issue. So, good on GN, and hopefully we will see some long-term results. But seeing how the company has a hard time acknowledging some issues, such as the ROG Ally SD card one, I would not hold my breath.

This is what was used by Benjamin to fix David’s issue with his XPPen stylus. Here is the mail thread where this was discussed.

It’s nice to see it documented now. Hopefully this will extend support for weird devices such as tablets and game controllers, since it should allow user space to “fix” them.

Ah, I see. I think you might need to provide your own pre-scaled textures for those then, by creating a StyleBoxTexture: as many as needed for all the disabled/hover/normal/… states, and use those in your theme. Which is not ideal, but that’s all I have.

Otherwise, if you want to automatically scale your UI, you can have a look at the viewport suggestion from @magikmw and make an auto-loaded node that does the necessary manipulations for you. Though it will scale everything, fonts and icons included.

If you are working with vector fonts, you can set some global settings that should help.

In Project Settings, tick Advanced Settings and then look for:

General -> Rendering -> Textures -> Canvas Textures -> Default Texture Filter: set to Nearest

General -> GUI -> Theme -> Default Theme Scale: set to the appropriate value, e.g., 4

Note that in this same panel you can set the default theme to your own. Then, as suggested, reload the project for the changes to apply.

If you are working with bitmap fonts, then yes, you have to manually scale the root Control node of all your scenes, while still setting the texture filter to Nearest. But hopefully there should be few of them.

Though, I’m not an expert, so there might be a better way.

The main problem is that dynamic linking is hard. It is not just that it is easier for language maintainers to ignore it: ignoring it removes an entire class of problems.

Dynamic linking does not even reliably work with C++, an “old” language with decades of tooling and experience on the matter. You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library. So you have to wait for a crash to be certain you got it wrong. Unless you control the compilation of your dependencies, it’s fairly hard to be certain you won’t encounter dynamic-linking-related issues. At which point you realize that, if the license allows it, you’re better off statically linking everything, including the C++ runtime itself: it makes things much more predictable, you’re not forcing an additional dependency on your users, and most of the UB is gone (especially the part about raising exceptions across DSO boundaries, which can happen behind your back unless you control the compilation of all your dependencies…).
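
For illustration, with GCC that boils down to two extra link flags (real GCC options; the file names are made up):

$ g++ -o app main.o -static-libstdc++ -static-libgcc
$ ldd ./app | grep libstdc++    # no output: the C++ runtime is baked into the binary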

That’s especially true if you are releasing a library when you do not know its runtime: it might be dynamically loaded via dlopen by a C++ binary that loads its own C++ runtime first, while some of your users are on a runtime stuck at C++14 and your codebase is in C++23. This can be solved by playing with LD_LIBRARY_PATH, but the application is already making use of it to load the C++ runtime it ships with instead of the one provided by the system (which only provides a C++11 runtime), and it completely ignores the initial state of the environment variable (how could it do otherwise? It would have to guess that the libstdc++ on that path is a newer version and not the older one provided by the system). Now imagine the same issue with your own transitive dependencies on top of that: it’s a nightmare.
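
You can watch this resolution happen with standard loader tooling (a sketch; the paths are made up, the mechanics are regular glibc dynamic-loader behavior):

$ ldd ./plugin.so | grep libstdc++
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x…)
$ LD_LIBRARY_PATH=/opt/host-app/lib ldd ./plugin.so | grep libstdc++
        libstdc++.so.6 => /opt/host-app/lib/libstdc++.so.6 (0x…)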

So dynamic linking never really worked, except maybe for C when you expect a single level of dependencies, all provided by the system. And even then, that’s mostly thanks to C’s simpler ABI and runtime.

So I expect that is the main reason newer languages do not bother with dynamic linking: it introduces way too many issues. Look at your average Rust program and how many versions of the same dependency it loads, transitively. How would you solve that problem so as to load different versions when it matters, but first and foremost load only one when possible? How would you be able to make the right call? By using semver? If nobody ever made a mistake, why not; but in practice you would be required to provide escape hatches that, much like LD_LIBRARY_PATH and LD_PRELOAD, will be misused. And by then, you have only “solved” the simplest problem.
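
You can see the scale of that duplication in any sizeable Rust project (real cargo subcommand; run it in a workspace and it lists every crate present in more than one version):

$ cargo tree --duplicates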

Nowadays, based on how applications are delivered on Windows and macOS, and with the advent of Docker, Flatpak/Snap and AppImage, I do not see a way back to dynamic linking anytime soon. It’s just too complicated a problem, especially as the number of dependencies grows.

They are fun, especially with the motorized arms, but they were crazy expensive last I checked. There are cheaper options for Arduino and the microbit, but even those are not cheap: they run at least $30, without the microbit or Arduino, for a couple of motors, an IR sensor and an ultrasonic sensor.

I’ve used Scratch to introduce kids (6~10) to programming. It works quite well IMO. They had a laptop with Windows. I recommend a touch screen if possible, especially for younger kids, though at 8~12yo that should not be as much of an issue.

I used it with the microbit from the BBC. While not required, a dedicated piece of hardware makes a basic introduction much more interactive and fun. For example, the microbit can be turned into a remote control for your characters in Scratch.

That said, kids get fond of the ability to create pre-programmed scenes that are not very logic-intensive, more like an animated movie. And since they can add their own drawings and voice, they can get very engaged on that basis alone. So the microbit is not required at all.

If you do want to use it, Microsoft has its own Scratch-like environment for the microbit that is more annoying to use IMO (you need to flash the program every time, which is not easy for younger kids who have trouble with the mouse), but it unlocks all the capabilities of the microbit for even more interactive applications. You can make several of them communicate through a basic protocol over 2.4GHz radio, or control LED strips or even robots, for example (though the robots are far from cheap for what they are 😕). See the radio sketch below.
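
As a taste of the radio part, a minimal sketch in MakeCode’s JavaScript/TypeScript mode (the group number and message are arbitrary):

// Both microbits join the same radio group (0-255)
radio.setGroup(42)

// Sender: broadcast a message when button A is pressed
input.onButtonPressed(Button.A, function () {
    radio.sendString("jump")
})

// Receiver: show whatever arrives on the 5x5 LED matrix
radio.onReceivedString(function (receivedString: string) {
    basic.showString(receivedString)
})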

Both Scratch and MakeCode (the tools mentioned above) have plenty of resources if you want to get a lab going. Personally, I would set my expectations fairly low and plan for many additional small features that really interested kids could implement on their own. In my experience, some kids will not be interested at all, at least not until they see a feature they want to interact with. Others will try to see what they can do by themselves before the lab even begins. But usually, the older they get, the less likely they are to experiment by themselves; they’d rather wait for instructions. Which is a shame, but that’s how it is I guess.

Also, try to make sure they can continue their work from home. Scratch is available on many platforms (though MakeCode sucks on Android last time I checked) and is trivial to get up and running. That said, importing a project is another matter for kids barely familiar with computers, which is why I would hand out a document aimed at their parents to get them set up.

The reason behind kernel-mode/user-mode separation is to require all user-land programs to go through the kernel for any modification to the system. In other words, were it not for syscalls, the only thing a user-land program could do would be to burn CPU cycles. And even then, the kernel can preempt it at any time to let other, potentially more important, programs run instead.
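
You can make this visible with strace, which logs the syscalls a program makes (real tool and flags; output abridged and may interleave differently):

$ strace -e trace=write /bin/echo hello
write(1, "hello\n", 6) = 6
hello
+++ exited with 0 +++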

So if a program can harm your system from user land, it’s because the kernel allowed it, every time. Which is why we currently see a slow move toward sandboxing everything. Basically, the idea of sandboxing is to give the kernel enough information about the running program that we can tailor which syscalls it can make and with which arguments. For example: you want to prevent an application from accessing the network? Prevent it from allocating sockets through the associated syscall.
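
As one concrete illustration on Linux, you can drop a process into an empty network namespace with unshare (standard util-linux flags; seccomp filters are the finer-grained per-syscall equivalent):

$ unshare -r -n ping -c 1 1.1.1.1
ping: connect: Network is unreachable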

The reason for this slow move is historical, really: introducing all those protections from the get-go would have required a lot of development time to start with, and they had to be built upon non-existent security layers without breaking every program in the process. CPUs were not even powerful enough to waste cycles on such concerns.

Now, to better understand user mode/kernel mode, you have to realize that there are actually more modes than these two. I can only speak for the ARM architecture because it’s the one I know, but x86 has similar mechanisms. Basically, from the CPU’s perspective, you have several privilege levels. On x86 those are called rings; on ARM, they’re called exception levels. On ARM, a CPU has up to four of those, EL3 to EL0. They also have names based on their purpose (inherited from ARMv7). So EL3 is the firmware level, EL2 is hypervisor, EL1 is system and EL0 is user. A kernel typically runs on EL2 and EL1. EL3 is reserved for the firmware/boot process to do the most basic setup, partly required by the other ELs. EL2 is called hypervisor because it allows having several virtual EL1s (and even EL2s). In other words, a kernel running at EL2 can run several other kernels at EL1: this is virtualization and how VMs are implemented. Then you have your kernel/user-land separation, with most of the kernel (and driver) logic running at EL1 and the user programs running at EL0.

Each level allocates resources for the level below, in the form of memory maps (which do not necessarily map to RAM: memory maps are also used to talk to devices). Should a level try to access a resource (memory address) it has no right to, an exception is raised to the level above, which then decides what to do: let it through, or terminate the program (the latter translates to a kernel panic/BSOD when the program in question is the kernel itself, or to a segmentation fault/bus error for user-land programs).

This mechanism is fairly easy to understand through swapping: the kernel allows your program to access some pages in memory when asked through brk or mmap (which malloc uses). But then, when the system is under memory pressure and it turns out your program has not used that memory region for a little while, the kernel swaps it out. Which means your program is now forbidden from accessing this memory. When the program tries to access that memory again, the kernel is informed of the action through an exception raised (unintentionally) by your program. The kernel then swaps the memory region back in from disk, allows your program to access it again, and lets the program resume at a state prior to the memory access (which it will then re-attempt without even realizing).

So basically, a level is fully responsible for what the level below it does. In theory, you could have no protection at all: EL1 (the kernel) could allow EL0 to modify all the memory EL1 has access to (again, those are memory maps, which can also map to devices, not necessarily RAM). In practice, the goal of EL1 is to let nothing through without being involved itself: the program wants to write something to disk: syscall; wants more memory: syscall; wants to draw something on the screen: syscall; use the network: syscall; talk to another program: syscall.

But the reason is not only security. It is also, and most importantly, abstraction. For example, when talking to a USB device, a user program does not have to know the USB protocol. This is implemented once in the kernel, and user-land programs can then use it without dealing with all the annoying details such as timings, buffers, interrupts and so on. That is what syscalls were initially designed for: build a library of functions all user programs can re-use without having to re-implement them, or worse, having to deal with the specifics of every device/vendor: that is the sole responsibility of the kernel.

So there you have it: a user program cannot harm the computer without going through the kernel first. But the kernel allows it nonetheless, because it was not initially designed as a security feature. The security concerns came afterward and were initially addressed with users, which are mostly enough for servers, and where root has nearly as many privileges as the kernel itself (because the kernel allows it). This is currently being improved in the form of sandboxes, work on which started a while ago, with every OS (and CPU architecture) having its own implementation. But we have only seen widespread adoption by user land on the desktop fairly recently, partly thanks to the push from smartphones, where application-level privileges (to access the camera, for example) were born AFAIK.

Nowadays, CPUs are powerful enough to even have security features that try to protect a user-land program from itself: from buffer overflows, return-address manipulation and the like. If you’re interested, I recommend looking at the concept of pointer authentication.
