Sorry if I’m not the first to bring this up. It seems like a simple enough solution.

9 points

You know that, I know that. Most people browsing here know that. Anyone you speak to who buys GPUs would probably also agree. Still not gonna change.

31 points

What other company besides AMD makes GPUs, and what other company makes GPUs that are supported by machine learning programs?

25 points

Exactly, Nvidia doesn’t have real competition. In gaming, sure, but no one is actually competing with CUDA.

8 points

AMD has ROCm, which tries to close the gap. I’ve been able to get some CUDA applications running on a 6700 XT, although they are noticeably slower than on a comparable Nvidia card. Maybe we’ll see more projects adding native ROCm support now that AMD is trying to cater to the enterprise market.
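
For anyone curious, here’s roughly what “getting a CUDA application running on a 6700 XT” looks like in practice with a ROCm build of PyTorch. Just a sketch: the HSA_OVERRIDE_GFX_VERSION value is the commonly reported community workaround for the 6700 XT (gfx1031 isn’t on ROCm’s official support list), not something AMD documents for this card.

```python
# Sketch only: assumes a ROCm build of PyTorch is installed.
# The env var below is the usual community workaround for unsupported RDNA2
# cards like the 6700 XT; it is normally exported in the shell, but setting it
# before torch initialises works too.
import os
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch

# ROCm builds reuse the torch.cuda namespace, which is why most "CUDA-only"
# Python code runs unmodified on AMD cards.
print("HIP runtime:", torch.version.hip)        # None on a CUDA build
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(4096, 4096, device="cuda")
    print((x @ x).sum().item())                 # matmul actually runs on the GPU
```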

3 points

I don’t own any Nvidia hardware out of principle, but ROCm is nowhere even close to CUDA as far as mindshare goes. At this point I’d rather just have a CUDA->ROCm shim I can use, the same way Proton handles DirectX->Vulkan. Trying to fight for mindshare sucks, so trying to get every dev to support it just feels like a massive uphill battle.

5 points

They kinda have that, yes. But it wasn’t supported on Windows until this year, and in general it’s not officially supported on consumer graphics cards.

Still hoping it will improve, because AMD ships more VRAM at the same price point, but ROCm feels kinda half-assed when you look at how much official support AMD actually invests in it.

6 points

No joke, probably Intel. The cards won’t hold a candle to a 4090, but they’re actually pretty decent for both gaming and ML tasks. AMD definitely needs to speed up the timeline on their new ML API, though.

6 points

Problem with Intel cards is that they’re a relatively recent release, and not very popular yet. It’s going to be a while before games optimize for them.

For example, the Arc cards aren’t officially supported by Starfield. They might run, but not as well as they could if Starfield had optimized for them too. But the cards have only been out a year.

3 points

The more people use Arc, the quicker it becomes mainstream and gets optimised for, but Arc is still considered “beta” and slow in people’s minds, even though there have been huge improvements and the old benchmarks don’t hold any value anymore. Chicken-and-egg problem. :/

Disclaimer: I have an Arc A770 16GB because every other sensible upgrade path would have cost 3x-4x more for the same performance uplift (and I’m not buying an 8GB card in 2023+), but now I’m starting to get really angry at people blaming Intel for “not supporting this new game”. All a GPU should have to support is the graphics API, to the letter of the specification; all this day-one patching and driver hotfixing to make games run decently is BS. Games need to feed the API, and GPUs need to process what the API tells them to, nothing more, nothing less. It’s a complex issue, and I think Nvidia held the monopoly for too long: everything is optimised for Nvidia at the cost of making it worse for everyone else.

10 points

My Intel Arc A750 works quite well at 1080p and is perfectly sufficient for me. If people need hyper refresh rates and resolutions and all the bells and whistles, well then, have fun paying for it. But if you need functional, competent gaming at US$200, Arc is nice.

11 points

AMD supports ML, it’s just that a lot of smaller projects are built with CUDA backends and don’t have the developers to switch from CUDA to OpenCL or similar.

Some of the major ML libraries that used to be built around CUDA, like TensorFlow, have already made non-CUDA branches, but that’s only because TensorFlow is open source, ubiquitous in the scene, and literally has Google behind it.

ML for more niche uses is basically in a chicken-and-egg situation. People won’t use other GPUs for ML because there’s no dev working on non-CUDA backends. No one’s working on non-CUDA backends because the devs end up buying Nvidia, which is basically what Nvidia wants.

There are a bunch of followers but a lack of leaders to move things toward a more open compute environment.
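
To make that concrete, here’s a minimal sketch of what a backend-agnostic device pick looks like in PyTorch (the device names are PyTorch’s own; a ROCm build deliberately reuses the “cuda” name, and “mps” is Apple’s Metal backend):

```python
# Minimal sketch: pick whichever accelerator is present instead of hard-coding CUDA.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # Nvidia CUDA, or AMD via a ROCm build
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")         # Apple GPUs
    return torch.device("cpu")             # fallback

device = pick_device()
model = torch.nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)
print(device, model(batch).shape)
```

Hard-coding device="cuda" everywhere is exactly the habit that keeps that chicken-and-egg loop going.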

2 points

Huh, my bad. I was operating off of old information. They’ve actually already released the SDK and APIs I was referring to.

1 point

Apple. Their own processors have both GPUs and AI accelerators. But for some reason, the industry refuses to use them.

1 point

I jumped to team red this build.
I have been very happy with my 7900XTX.
4K max settings / FPS on every game I’ve thrown at it.
I don’t play the latest games, so I guess I could hit a wall if I play the recent AAA releases, but many times they simply don’t interest me.

10 points
Deleted by creator
10 points

There are also games that don’t render a square mile of a city in photorealistic quality.

4 points

Graphical fidelity has not materially improved since the days of Crysis 1, 16 years ago. The only two meaningful changes to how demanding games should be to run in that time are that 1440p and 2160p have become more common, and raytracing. But consoles being content to run at dynamic resolutions and 30fps, combined with tools developed to make raytracing palatable (DLSS), have made developers complacent enough to let their games run like absolute garbage even on mid-spec hardware that should have no trouble with 1080p/60fps.

Destiny 2 was famously well optimized at launch. I was running an easy 1440p/120fps in pretty much all scenarios maxed out on a 1080 Ti. The more new zones come out, the worse performance seems to be in each, even though I now have a 3090.

I am loving BG3, but the entire city in Act 3 can barely run at 40fps on a 3090, and it is not an especially gorgeous-looking game. The only thing I can really imagine is that, maxed out, the character and armor models do look quite nice. But a lot of the environment art is extremely low-poly. I should not have to turn on DLSS to get playable framerates in a game like this with a Titan-class card.

Nvidia and AMD just keep cranking the power on the cards; they’re now 3+ slot behemoths to deal with all the heat, which also means cranking the price. They also seem to think 30fps is acceptable, which it just… is not. Especially not in first-person games.

2 points

W/r/t Baldur’s Gate 3, I don’t think the bottleneck is the GPU. Act 3 is incredibly ambitious in terms of NPC density, and AI is one of those things that’s still very hard to parallelize.

1 point

Graphical fidelity has not materially improved since the days of Crysis 1

I think you may have rose-tinted glasses on this point. The level of detail in environments and the accuracy of shading, especially of dynamic objects, have increased greatly. Material shading has also gotten insanely good compared to what we had then. Just peep the PBR materials on guns in modern FPS games; it’s incredible. Crysis just had normal and specular maps: all black or grey guns that are kinda shiny and normal-mapped. If you went inside a small building or whatever, there was hardly any shading or shadows to make it look right either.

Crysis was a very clever use of what was available to make it look good, but we can do a hell of a lot better now (without raytracing). At the time, shaders were getting really computationally cheap to implement, so those still look relatively good, but geometry and framebuffer size just did not keep pace at all. Tessellation was the next hotness after that, because it was supposed to help compensate for the limited geometry horsepower of contemporary cards by using their extremely powerful shader cores to do some of the heavy lifting. Just look at the rocks in Crysis compared to the foliage and it’s really obvious this was the case. Bad Company 2 is another good example of good shaders with crushingly limited geometry, though there are clever workarounds there that make it look pretty good still.

I could see the argument that the juice isn’t worth the squeeze to you, but graphics have very noticeably advanced in that time.

1 point

I’m currently part of the problem and this is so fucking true. Games have really stopped pushing the envelope because they either have to be cross-platform compatible or they’re not even PC-first.

3DMark is the only thing I could find that puts a dent in my 3060 Ti.

1 point

Why would datacenters be buying consumer-grade cards? Nvidia has the A-series cards for enterprise that are basically identical to the consumer ones, but with features useful for enterprise unlocked.

2 points

I think you mean their Tesla line of cards? The A (e.g. A100) stands for the generation name (e.g. Ada or Ampere, I don’t remember which one got the A), and that same name applies to both the consumer line (GeForce and Quadro) and the data centre cards.

The hardware isn’t identical either. I don’t know all the differences, but I know at least that the data centre cards have SXM connectors that greatly increase data throughput.

1 point
Deleted by creator
4 points

And as soon as they have any competitors, we might consider it.

1 point

As a console gamer, I don’t have to worry about it. Xbox Series X, PS5, Steam Deck are all AMD based. The Switch is Nvidia, but honestly, I can’t remember the last time I turned the Switch on.
