Seeing nothing here so maybe we can get some conversation going
I have been using it in Proxmox, passing the GPU through to a Windows VM, and it's performing well; I'll likely try eliminating the Windows layer before too long and see if I can tell any difference.
Unfortunately my Jupyter server runs in Docker, which doesn't play well with non-CUDA GPUs, but I've been wanting to throw some AI/ML workloads at it.
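If it's an AMD card, my rough plan is to pass the GPU devices into the container (something like `--device=/dev/kfd --device=/dev/dri`) and use a ROCm build of PyTorch, then sanity-check from a notebook with something like this (just a sketch, assumes the ROCm PyTorch wheel is installed in the container):

```python
# Quick check from inside the Jupyter container that the passed-through GPU
# is actually visible. ROCm builds of PyTorch expose AMD GPUs through the
# same torch.cuda API, so this works even without an NVIDIA card.
import torch

print(torch.__version__)                  # ROCm builds show a "+rocm" suffix
print(torch.cuda.is_available())          # True if the device made it through Docker
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should print the card's name
```

No idea yet how well that holds up in practice, but it would at least tell me whether the passthrough itself is working before I blame the ML stack.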