AdrianTheFrog
AdrianTheFrog@lemmy.world
12 posts • 407 comments

next up: Microsoft announces that development of Bethesda's next game will be largely outsourced

there's someone who uploaded this same meme before the Reddit thing, but instead of 2000 it said 20

edit: I was linked to the post maybe a few months ago, but I can't find it now

Ok, I guess it's just kinda similar to dynamic overclocking/underclocking, but with a dedicated NPU. I don't really see why a tiny $2 microcontroller, or just the CPU itself, couldn't accomplish the same task though.
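
To illustrate, here's a toy sketch of the kind of control loop I mean (made-up clock numbers, and the sensor/setter functions are hypothetical stand-ins since I don't know what API the real hardware exposes); nothing in it needs dedicated AI hardware:

```python
import time

# Toy governor loop with made-up numbers; read_gpu_load() and
# set_gpu_clock_mhz() are hypothetical stand-ins for whatever the
# vendor API actually exposes.
MIN_CLOCK, MAX_CLOCK, STEP = 1200, 2400, 50  # MHz

def governor_tick(clock_mhz: int, load: float) -> int:
    """Nudge the clock toward the load: classic dynamic under/overclocking."""
    if load > 0.90:   # saturated: give it headroom
        return min(clock_mhz + STEP, MAX_CLOCK)
    if load < 0.60:   # plenty of slack: save power
        return max(clock_mhz - STEP, MIN_CLOCK)
    return clock_mhz  # in the sweet spot: hold steady

# clock = 1800
# while True:
#     clock = governor_tick(clock, read_gpu_load())  # hypothetical sensor read
#     set_gpu_clock_mhz(clock)                       # hypothetical clock setter
#     time.sleep(0.005)
```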

System RAM is slower than GPU VRAM, but that extreme slowdown is due to the bottleneck of the PCIe bus the data has to cross to reach the GPU.
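
Some rough ballpark figures I'm assuming (not measurements), just to show where the bottleneck sits when model data spills out of VRAM:

```python
# Time to read a hypothetical 8 GB of weights once per pass, depending
# on where they live. Bandwidth numbers are assumed ballparks.
GB = 1e9
vram_bw = 900 * GB   # e.g. GDDR6X on a high-end card, ~900 GB/s
ram_bw  = 80 * GB    # dual-channel DDR5, ~80 GB/s
pcie_bw = 32 * GB    # PCIe 4.0 x16, ~32 GB/s per direction

weights = 8 * GB     # hypothetical 8 GB of weights read once per pass

for name, bw in [("VRAM", vram_bw), ("system RAM", ram_bw), ("over PCIe", pcie_bw)]:
    print(f"{name}: {weights / bw * 1000:.1f} ms per pass")

# VRAM: 8.9 ms per pass
# system RAM: 100.0 ms per pass
# over PCIe: 250.0 ms per pass
```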

there are some local genai music models, although I don't know how good they are yet as I haven't tried any myself (Stable Audio is one, but I'm sure there are others)

also, minor linguistic nitpick, but LLM stands for 'large language model' (you could maybe get away with it for PixArt and SD3, since they use T5, which is an LLM, for prompt encoding; I'm sure some audio models with lyrics use them too). The term you're looking for is probably 'generative'.

from the articles I've found, it sounds like they're comparing it to native…

Having to send full frames off of the GPU for extra processing has got to come with some extra latency/problems compared to just doing it directly on the GPU. And I'd be shocked if they have motion vectors and the other engine data DLSS uses, since that would require games to be specifically modified for this adaptation. IDK, but I don't think we have enough details about this to really judge whether it's useful or not, although I'm leaning toward 'not' for this particular implementation. They never showed any actual comparisons to DLSS either.
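
Just on the copy cost alone, a back-of-the-envelope with assumed numbers (PCIe 4.0 x16, uncompressed 4K RGBA frames):

```python
# A 4K frame has to cross the PCIe bus twice (GPU -> NPU, NPU -> GPU)
# before it can be scanned out. Bandwidth figure is an assumed ballpark.
frame_bytes = 3840 * 2160 * 4   # 4K RGBA8, ~33 MB
pcie_bw = 32e9                  # PCIe 4.0 x16, ~32 GB/s per direction

one_way_ms = frame_bytes / pcie_bw * 1000
print(f"one way: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
# one way: 1.04 ms, round trip: 2.07 ms
# ~2 ms of pure transfer is a big chunk of the ~4 ms frame budget at 250 fps,
# and that's before the NPU does any actual work on the frame.
```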

As a side note, I found another article on the same topic where they obviously didn't know what they were talking about and mixed up frame rates with power consumption. It's very entertaining to read:

The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor’s display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.

We have plenty of real uses for ray tracing right now, from Blender, to whatever that Avatar game was doing, to Lumen, to partial RT, to full path tracing. From what I've seen, you just can't do real-time GI with any semblance of fine detail without RT (although Lumen's SDF mode gets pretty close).

The RT cores themselves are more debatably useful, but they still give a decent performance boost over "software" RT most of the time.

Yeah, and you also have to deal with the latency of the cloud, which is a big problem for a lot of possible applications.

well, I think a lot of these CPUs come with a dedicated NPU; idk if it would be more efficient than the tensor cores on an Nvidia GPU, for example

edit: whatever NPU they put in does have the advantage of being able to access your full CPU RAM though, so I could see it being kinda useful for things other than custom Zoom background effects
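
A quick sketch of why that RAM access matters, with assumed sizes (a hypothetical 13B-parameter model and typical laptop memory configs):

```python
# A mid-size model's weights fit easily in system RAM but not
# necessarily in a typical laptop's VRAM. All sizes are assumptions.
GiB = 2**30
vram = 8 * GiB          # common laptop dGPU
system_ram = 32 * GiB   # common laptop config an integrated NPU could share

params = 13e9           # hypothetical 13B-parameter model
for bits, label in [(16, "fp16"), (4, "4-bit quantized")]:
    size = params * bits / 8
    print(f"{label}: {size / GiB:.1f} GiB, "
          f"fits in VRAM: {size < vram}, fits in RAM: {size < system_ram}")

# fp16: 24.2 GiB, fits in VRAM: False, fits in RAM: True
# 4-bit quantized: 6.1 GiB, fits in VRAM: True, fits in RAM: True
```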
