6 points

Opens with: Notepad

Does it, though?

1 point

Depends, how much RAM do you have?

33 points

Seems weird to critique “western game devs”

Developers of any region can be terrible.


I’ve never heard anyone accuse a game of being “western jank,” but I’ve heard plenty called “eurojank” or “slavjank.”

Doesn’t make 'em bad. Some of my favorite games are slavjank. Like STALKER.

9 points

“westjank” should mean always-online singleplayer experiences with kernel anti-cheats and 300 gb crash logs

1 point

I didn’t even know Ghost of Sushimi was western, I thought it was from the Dark Souls devs.

Apparently no, it’s from the Infamous devs.

6 points

Japanese game devs would NEVER. Because their bosses literally chain them to the desk until their code built from scratch works flawlessly.

(This belief may be out of date)

81 points

Ok, but the second tweet is a bit redundant

Like what else would a .log file be? A video file? A Word Document? An executable?

Do you really need to inspect the properties to be told: “This .log file is certainly containing text. Thank you for installing Windows 10. Save 5% on your Office 365 subscription with code ‘ILOVEMICROSOFT’”

21 points

Like what else would a .log file be? A video file? A Word Document? An executable?

I think their point is that a 200 GB text file is a wild size for a crash log, and there’s probably some binary data accidentally mixed into that log. There’s no way a crash log should exceed 2x the size of the game binary itself.

21 points

Could be a bug in their crash handler, just like, infinitely looping and printing something over and over.

3 points

Binary data is almost always more compact than text data

81 points

You should have rolling log files of limited size and limited quantity. The issue isn’t that it’s a text file, it’s that they’re not following pretty standard logging procedures to prevent this kind of thing and make logs more useful.

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, deleting the oldest if there are more log files than your configured limit.

This prevents runaway logging like this, and also lets you store more logging info than you can easily open and go through in one document. If you want to store 20 GB of logs, having all of that in one file will make it difficult to go through. Ten 2 GB log files are much easier. That’s not so much a consumer issue, but that’s the gist of it.
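As a rough illustration, here’s what that looks like with Python’s stdlib logging; the filename, the 2 GB cap, and the 10-file limit are just example values:

```python
import logging
from logging.handlers import RotatingFileHandler

# Rolling logs: when game.log hits maxBytes it's renamed to game.log.1,
# game.log.1 becomes game.log.2, and anything beyond backupCount is deleted.
handler = RotatingFileHandler(
    "game.log",            # example filename
    maxBytes=2 * 1024**3,  # roll over at ~2 GB
    backupCount=10,        # keep at most 10 old files
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("game")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.error("texture '1.png' not found")  # total disk usage stays bounded
```

Most logging libraries ship an equivalent (log4j’s RollingFileAppender, spdlog’s rotating file sink, etc.).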

13 points

As a sysadmin, there are few things that give me more problems than unbounded growth and timezones.

1 point

Printers. Desk phones. The WMI service crashing at full core lock under the guise of svchost.

14 points

Fully agree, but the way it’s worded makes it seem like the log being a text file is the issue. Maybe I’m just misinterpreting intent though.

28 points

200GB of a text log file IS weird. It’s one thing if you had a core dump or other huge info dump, which, granted, shouldn’t be generated on their own, but at least they have a reason for being big. 200GB of plain text logs is just silly

5 points

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, ~~deleting~~ archiving the oldest

FTFY

4 points

Sure! Best practices vary with your application. I’m a dev, so I’m used to configuring stuff for local env use. In prod, archiving is definitely nice so you can track back even through heavy logging. Though, tbh, if your application’s getting used by that many people, a db logging system is probably just straight up better
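If you’d rather archive than delete, the same stdlib handler has hooks for it. A rough sketch, gzipping rotated files instead of keeping them as plain text (the compression step and names are just one way to do it):

```python
import gzip
import os
import shutil
from logging.handlers import RotatingFileHandler

def namer(name: str) -> str:
    # rotated files get a .gz suffix, e.g. game.log.1.gz
    return name + ".gz"

def rotator(source: str, dest: str) -> None:
    # compress the just-rotated file instead of leaving it as plain text
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler = RotatingFileHandler("game.log", maxBytes=2 * 1024**3, backupCount=10)
handler.namer = namer      # stdlib hook: how rotated files are named
handler.rotator = rotator  # stdlib hook: how the rotation is performed
```

Note that backupCount still drops the oldest archive once you pass the limit; true keep-everything archiving usually means shipping the files somewhere else, or, as you say, a centralized/db logging setup.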

6 points

It could be XML or JSON with some embedded binary data (but to your point, Windows isn’t gonna figure that out from the extension)

46 points

I thought they were just trying to hammer home how wild it was for the file to get that big, since it’s just a text file.

37 points

Yeah, and also when they said “300gb of crash logs”, I assumed it was a folder with thousands of files, not all those GBs in a single text file. That’s wild.

27 points

if you assume the second post has an ulterior meaning, it could be that someone might not know what a crash log is, but most people who have interacted with computers at least once would be at least vaguely familiar with Windows’ file descriptions and understand that a text file icon + >200 gb size is not normal

this is, of course, a rather big assumption.
most people don’t put that much thought into a post, and expecting them to will make your online experience a confusing mess.

10 points

Most people have zero understanding of how programs work. I have slightly more understanding than the average person and I didn’t catch that a crash log would nearly always be a text file.

9 points

It could be a binary file, though that would probably make it smaller if anything.

I’m guessing the point was the developer didn’t invent some proprietary log that also contained a dump and other things that could conceivably be very large. That would also be terrible design, but managing to create hundreds of gigs of text in a game crash log is a special kind of terrible.

99 points

That can happen with any program, and should be a simple fix on the dev side

78 points

It is also something that can happen easily. Just program it to log an error, and then the error unexpectedly happens every frame.

68 points

So:

300 GB × 1024 × 1024 = 314,572,800 KB

Assuming something like 200 bytes per log line, that’s about 5 lines per KB:

× 5 = 1,572,864,000 log lines

Assuming this is your standard console port with a 60 fps frame rate lock:

÷ 60 fps ÷ 60 seconds ÷ 60 minutes ÷ 24 h = 303.407… days

You would need to play for nearly a year solid to generate that many logs at a rate of one per frame.

Given that’s probably not what’s happened, this is a particularly impressive rate of erroring
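For anyone who wants to sanity check that, here’s the same back-of-envelope math as a quick script (same assumptions: ~200 bytes per line, one line per frame at a locked 60 fps):

```python
# Back-of-envelope check of the numbers above.
size_bytes = 300 * 1024**3           # 300 GB, counted in binary gigabytes
bytes_per_line = 200                 # assumed average log line
lines = size_bytes / bytes_per_line  # ~1.6 billion lines

fps = 60
days = lines / (fps * 60 * 60 * 24)  # one log line per frame, around the clock
print(f"{lines:,.0f} lines -> {days:.1f} days of play")
# -> roughly 311 days, same ballpark as the ~303 above
#    (the small gap is just 1000-vs-1024 rounding in the per-KB step)
```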

47 points

Yeah, that does not add up, you are right. There must be several errors per frame, or it must include the stack trace or something.

8 points

If you’re getting a stack trace every frame you’d be there much sooner. Maybe like a week.

36 points

It’s a crash log, not an error log. It’s probably dumping the entire memory stack to text instead of a bin dump every time it crashes. I would also suspect the crash handler is appending to the log instead of deleting old crashes and just keeping the latest. At several dozen gigs of RAM, it would just take a couple of game crashes to fill up the 300GB.

24 points

To happen every frame without crashing the game, it’s more likely a warning ⚠️ “Warning, the texture is named 1.png instead of 1.PNG”

15 points

It happened to my cousin a while back with Photoshop. She’s a professional photographer and it shut her down for a few days. I found it pretty quickly, and an update stopped it from happening. It wasn’t removing temporary files and totally filled her drive up.

Poor thing was ready to buy a new hard drive.

4 points

I vaguely remember the Nvidia driver generating tons of log files, so many that they piled up over years and filled my drive

68 points

My log file in RimWorld after I add my 691st mod

16 points

Oh no, I should probably check that.
