https://fosstodon.org/@fedora/110821025948014034

TL;DR: Asahi Linux will be developed with the Fedora Linux distribution as the primary distribution moving forward. Fedora’s discourse forum will be the primary place to discuss Asahi Linux.

2 points

I set up a Fedora Asahi dual boot today. Really liking it so far. Once it has support for the speakers and microphone, I'm fully ditching macOS.

6 points

Nice, even more Fedora users.

18 points

Linux on M1 is mostly exciting for gaming once the GPU drivers progress, IMO. My M1 MacBook is primarily a work machine, but it holds up fine for mobile gaming. FFXIV is already a great experience on macOS, but broader game compatibility is exciting!

9 points

With Vulkan support and Wine, running Windows games on an M1 under Linux is already possible. How fast it will be is another question.

It would reduce the amount of work needed from developers: at minimum, all they'd have to do is build something that uses Vulkan, and it would run on M-series hardware thanks to Linux. For better support, they could compile their games for ARM64, which would remove the need for a translation layer (x86 -> ARM).

I'd say this might actually be what unlocks a real gaming experience on Apple hardware without relying on (DX ->) Vulkan -> Metal for graphics, x86 -> ARM translation, and Windows -> Darwin compatibility. Nothing extra would be needed for graphics (Vulkan), for x86 -> ARM (if the game is compiled for ARM64), or for Windows -> Linux (which already has tons of work behind it and major support from Steam, while Apple has only just started with its Wine fork).

It wouldn't surprise me if, in a year or two, an "Apple gaming" community had a pinned post saying "install Linux to game on Apple hardware".

-16 points

No way I'm running Linux on a Mac. That's what a Raspberry Pi is for.

1 point

No, Apple Silicon-based Linux is why VMware Fusion exists. 😉 You can already virtualize ARM-based distros, but if this new release makes better use of Apple's M-series implementation of ARM, I'll probably create a new VM and migrate what I need from the old one.

6 points

I got an email yesterday that, a solid year after signing up for notifications, I can finally buy a Pi 4, lol. It might have been a longer wait than my Steam Deck.

1 point

I’ve bought two on eBay in the last year. Got the last one for around $135 and it was a kit.

1 point

I've been wanting to get one or two for some projects, but because of the prices and scarcity, I ended up just buying an old mini PC. It was like $60, and I can do a lot more with it than with a Pi (at least for home server projects).

1 point

Yeah, there were ways to get them if you really wanted one, but I wasn't committed enough to keep checking and pay a premium. The reality is that I already have more older ones than I use, along with multiple other devices that could serve as servers if I had a use for them all.

It’s really just because I have a silly need to buy tech for no reason (and yes, I bought one when I got the email).

6 points

Old school. Orange Pi is the new new.

4 points

If it won’t work in a docker container, I need a real server anyway.

1 point

I keep seeing Docker mentioned everywhere, but I can't seem to figure out what it is.

Maybe I'm just dumb, or my Google-fu isn't as good as I thought, but can you offer an explanation? Is it just virtualization software?

1 point

Here's the difference between virtualisation and containerisation:

virtualisation –> virtualise/emulate an entire machine (including the hardware), meaning you're running a second, virtual computer (called the guest) within your own computer (called the host).

containerisation –> cordon off parts of your system for a group of processes, i.e. contain them to parts of your system.

Imagine you're in a factory where you have a group of workers that handle generic tasks (CPU), another that handles graphical tasks (GPU), a storage room (RAM), and an operator (the operating system).

Virtualisation is the equivalent of taking some generic workers, letting them build a separate factory within the existing factory, and having it act like another factory. They may even know how to translate instructions from the host factory into instructions understood only in the guest factory. They also occupy part of the storage room. And to top it off, they of course have their own operator that communicates with the host operator before doing virtually anything.

Containerisation is the equivalent of the operator starting processes that either do not know how large the storage room, generic worker pool, or graphical worker pool really are, or that only have access to a section of them. Basically, it confines them to their own view of the world with very little overhead: no new factory, no new operator, no generic workers pretending to be graphical workers or only understanding certain instructions.

Distribution

In terms of distribution, virtualisation is like passing around mini-factories to other factories (or, optimally, descriptions of the factories needed to execute the instructions within the file). Containers are really just a bunch of compressed directories, plus some metadata about which process should be started first (amongst other things), with that process and its subprocesses having a limited view of their world.

On Mac

Containerisation was popularised on Linux (even though BSD had it first, IIRC), which is where the operating-system primitives and concepts were built that enable what we now know as Docker. Since virtually all containers in existence these days require Linux, due to how they are created and the binaries they contain, running Docker (or anything else that supports containers) on macOS requires virtualising a Linux machine within which the containers run.

This comes with its own hurdles and is of course slower than on Linux.
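To make the "compressed directories plus metadata" idea concrete, here's a minimal sketch of a container recipe (a Dockerfile); the base image tag and app name are made up for illustration:

```dockerfile
# Hypothetical sketch of a minimal container recipe.
# Start from a pinned Linux userland (the "compressed directories").
FROM debian:12-slim

# Copy the application into the image's own filesystem view.
COPY ./myapp /usr/local/bin/myapp

# Metadata: which process gets started first inside the container.
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Building this produces an image (the stack of compressed directories), and running it starts the named process with its own limited view of the filesystem.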

4 points

Apparently not, because it's super easy to find: searching "docker" on Google returned it as the top result for me. It's a container platform. You have code, and it needs somewhere to run. That could be directly on your computer, but that's a poor way to handle package conflicts. So you run it in a container. This means you can install the specific versions of the dependencies the code needs, and you're far less likely to run into conflicts. You can also run multiple instances of a program, whether or not it would normally allow that, because each instance runs in its own container, blissfully unaware of the others.

2 points

Docker isn't virtualization. It's a way of packaging applications, their dependencies, and configuration. Docker containers can be run together or segregated based on configuration. It's essential in much modern software: no more "this dependency for x clashes with that dependency for y", "works on my machine", or "I can't install that version".

The containers share the host's Linux kernel (which is virtualized on non-Linux systems). Docker runs fine on ARM, but only with ARM containers. It's tricky to run x86_64 containers on an ARM host, especially with a different OS.

4 points

It's lighter than a VM but a bit heavier than running an application natively (with all the dependency and configuration hell that entails).

Basically, it's a convenient way to package and run applications with all their dependencies, regardless of which libraries and configurations exist in the host OS and in other containers.

If your application only works with up to version 42 of the Whatchamacallit library, you ship it with that version of Whatchamacallit; the underlying OS doesn't need to install it. Other containers running on the system that depend on that library don't break, since they're packaged with version 69, which works fine for them.
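In Docker terms, that pinning happens when the image is built. A hypothetical sketch (Whatchamacallit, the package name, and the app name are all invented for illustration):

```dockerfile
# Hypothetical sketch: this app needs Whatchamacallit <= 42,
# so its image ships with exactly that version baked in.
FROM debian:12-slim
RUN apt-get update \
    && apt-get install -y libwhatchamacallit=42.* \
    && rm -rf /var/lib/apt/lists/*
COPY ./myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Another container on the same host can be built from an image that ships version 69 instead; the two never see each other's copy of the library.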

5 points

If I understood correctly, Docker is software for maintaining containers. Containers are ready-to-go images that run on top of your base OS, like virtualisation but more direct, for example by sharing the kernel with the host OS, which makes it lighter and far more efficient than full virtualisation.


Apple

!apple_enthusiast@lemmy.world
