KingRandomGuy
As an added comment, I think INDIGO and Ain Imager are worth trying for anyone using INDI, especially if you’re using a Sony camera. Sony cameras on INDIGO have focuser support and in my experience the driver is much more stable. Furthermore, the architecture of distributed agents controlling different functions makes setup easier and more performant on low-power setups with potentially slow wireless links, like an ARM single-board computer.
To explain that a bit more, I image through Ain Imager on my Linux laptop. However, the imager program is more or less just a frontend where I select options; everything is actually executed by the agents on the ROCKPro64, including imaging and frame capture, guiding, and plate solving. This has some benefits compared to KStars + INDI. For example, you avoid the latency of transferring a guider image to a laptop and then sending a message back to the SBC with a guiding pulse (you can avoid this on INDI by running PHD2 separately, but then you have the overhead of running a whole desktop environment), and you don’t need to wait for frames to download before starting another exposure (they’re downloaded asynchronously instead).
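To illustrate the asynchronous-download idea, here's a rough Python sketch of overlapping each frame's download with the next exposure using a background thread. All function names here are hypothetical placeholders, not INDIGO's actual API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def expose(n, seconds=0.1):
    # Placeholder for a camera exposure (hypothetical, not INDIGO's API).
    time.sleep(seconds)
    return f"frame-{n}"

def download(frame, seconds=0.05):
    # Placeholder for transferring the frame off the camera.
    time.sleep(seconds)
    return frame

results = []
with ThreadPoolExecutor(max_workers=1) as pool:
    pending = None
    for n in range(3):
        frame = expose(n)                       # exposure runs in the foreground
        if pending is not None:
            results.append(pending.result())    # collect the previous download
        pending = pool.submit(download, frame)  # download overlaps next exposure
    results.append(pending.result())
```

The sequential version would take (exposure + download) per frame; here the downloads hide behind the exposures, which matters a lot more when "download" means pushing a full frame over slow wireless.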
(Disclaimer - I contribute to INDIGO occasionally but I’m not a member of the project, just an open source developer adding features to it)
Without addressing performance, the largest difference is going to be software support. The Jetsons have legitimate CUDA-compatible GPUs, which means they can be used directly with libraries like PyTorch without (oftentimes poorly supported) additional tooling to convert formats.
It’s unfortunately super clear from their Steam charts. When they had creator events and whatnot, the player count spiked, but other than that they only have about 1000 players active and I seriously doubt many people spend money on the game since it’s already rather F2P friendly.
It’s a shame, the game was a lot of fun and I still play with friends.
Great shot! Impressive that you can pull this much out from just 78 subs.
Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you just have inference access to a detector of some kind, but not the model's weights and architecture, you won't be able to backpropagate through it and therefore can't compute gradients to update your generator's weights.
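To make that concrete, here's a toy 1-D "GAN" with a one-parameter generator and discriminator (purely illustrative, nothing like a real GAN training loop). The key line is the one computing dd_dx: it uses the discriminator's weight, which you simply don't have with black-box inference access.

```python
import math

# Toy setup: generator g(z) = w * z, discriminator d(x) = sigmoid(v * x).
# Gradient descent on the generator loss L = -log d(g(z)) requires the
# chain rule THROUGH the discriminator:
#   dL/dw = dL/dd * dd/dx * dx/dw
# and dd/dx depends on the discriminator's weight v.

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def generator_grad(w, v, z):
    x = w * z                      # generator output
    d = sigmoid(v * x)             # discriminator score
    dL_dd = -1.0 / d               # derivative of -log d
    dd_dx = v * d * (1.0 - d)      # needs v: the discriminator's weights!
    dx_dw = z
    return dL_dd * dd_dx * dx_dw   # dL/dw via backprop through d

w, v, z, lr = 0.5, 1.0, 2.0, 0.1
for _ in range(100):
    w -= lr * generator_grad(w, v, z)  # generator climbs the score d(g(z))
```

With only d's outputs you could try black-box tricks (finite differences, score-based estimators), but those scale terribly with dimensionality, which is why the standard GAN formulation assumes full access to the discriminator.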
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
I’m curious what field you’re in. I’m in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don’t approve of using LLMs to write accuracy-critical sections (such as the background or results) beyond things like rewording.
I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still somewhat of a gray area in the US AFAIK.
I haven’t read the article myself, but it’s worth noting that in CS as a whole, and especially in ML/CV/NLP, selective conferences are generally seen as the gold standard for publications compared to journals. The top venues include NeurIPS, ICLR, and ICML for ML, CVPR for CV, and EMNLP for NLP.
It looks like the journal in question is a physical sciences journal as well, though I haven’t looked much into it.
The big thing you get with Frameworks is super simple repairability. That means service manuals, parts availability, and easy access to components like the battery, RAM, SSD, etc. Customizable ports are also a nice feature. You can even upgrade the motherboard later down the line instead of buying a whole new laptop.