This isn’t a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn’t log in.

Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled as some airlines’ systems have also gone down. Gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services are affected as well. (At least according to one of my local MSM outlets.)

Seems insane to me that one company’s messed-up update could cause so much global disruption and take so many systems down :/ This is exactly why centralisation of services, and large corporations gobbling up smaller companies to become behemoth services, is so dangerous.

184 points

The annoying aspect, from somebody with decades of IT experience, is this: what should happen is that CrowdStrike gets sued into oblivion, and the people responsible for buying that shit should have an epiphany and properly look at how they are doing their infra.

But what will happen is that they’ll just buy a new CrowdStrike product that promises to mitigate the fallout of them fucking up again.

93 points

decades of IT experience

Do any changes - especially upgrades - get tested in local test environments before being applied in production?

The scary bit is what most in the industry already know: critical systems are held together with duct tape and maintained by juniors ’cos they’re the cheapest Big Money can find. And even if not, “There’s no time” or “It’s too expensive” are probably the most common answers a PowerPoint manager will give to a serious technical issue being raised.

The Earth will keep turning.

34 points

Some years back I was the ‘Head’ of systems stuff at a national telco that provided the national telco infra. Part of my job was to manage the national systems upgrades. I had the stop/go decision to deploy, and indeed pushed the ‘enter’ button to do it. I was a complete PowerPoint Manager and had no clue what I was doing; it was total Accidental Empires, and I should not have been there. Luckily I got away with it for a few years. It was horrifically stressful and not the way to mitigate national risk. I feel for the CrowdStrike engineers. I wonder if the latest embargo on Russian oil sales is in any way connected?

18 points

I wonder if the latest embargo on Russian oil sales is in any way connected?

Doubt it, but it’s ironic that this happens shortly after Kaspersky gets banned.

30 points

Unfortunately Falcon self-updates, and it will not work properly if you don’t let it.

Also add “customer has rejected the maintenance window” to your list.

35 points

Turns out it doesn’t work properly if you do let it

7 points

Well, “don’t have self-upgrading shit on your production environment” also applies.

As in, “if you bought something like this, there’s a problem with you”.

25 points

Not OP, but that is how it used to be done. The issue is that the attacks we have seen over the years - ransomware attacks etc. - have made corps feel they need to patch and update instantly to avoid attacks. So they depend on the corp they pay for the software to test the rollout.

Auto-update is a two-edged sword. Without it, attackers will take advantage of the delays. With it… well, today.

15 points

I’d wager most ransomware relies on old vulnerabilities. Yes, keep your software updated but you don’t need the latest and greatest delivered right to production without any kind of test first.

2 points

I get the sentiment, but defense in depth is a methodology to live by in IT, and auto-updating via the Internet is not a good risk to take in general. For example, should Crowdstrike just disappear one day, your entire infrastructure shouldn’t be at enormous risk, nor should critical services. Even if it’s your anti-virus, a virus or ransomware shouldn’t be able to easily propagate through the enterprise. If it did, then it is doubtful something like Crowdstrike is going to be able to update and suddenly reverse course. If it can, then you’re just lucky that the ransomware that made it through didn’t do anything in defense of itself (disconnecting from the network, blocking CIDRs like Crowdstrike’s update servers, blocking processes, whatever). And frankly, you can still update those clients anyway from your own AV update server - a product you’d already be using if you aren’t allowing updates straight from the Internet, so you can roll them out in dev first, with phasing and/or schedules, from your own infrastructure.

Crowdstrike is just another lesson in that.
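To make the “from your own infrastructure” part concrete, here is a minimal Python sketch of what a phased policy might look like if you controlled the update source yourself. The group names, mirror URL and delays are hypothetical, not the interface of Crowdstrike or any real AV product.

```python
from datetime import date, timedelta

# Hypothetical setup: clients pull definitions from an internal mirror,
# and each group only becomes eligible some days after a release lands there.
UPDATE_SOURCE = "https://av-mirror.internal.example/definitions/"

ROLLOUT_POLICY = {
    "dev":        {"delay_days": 0},  # test boxes get it immediately
    "canary":     {"delay_days": 1},  # a small slice of production
    "production": {"delay_days": 3},  # everyone else, after a soak period
}

def eligible_groups(release_date: date, today: date | None = None) -> list[str]:
    """Return the groups allowed to receive a release published on release_date."""
    today = today or date.today()
    return [
        group
        for group, policy in ROLLOUT_POLICY.items()
        if today >= release_date + timedelta(days=policy["delay_days"])
    ]

# A release published on the 15th reaches dev the same day,
# canary on the 16th and production on the 18th.
print(eligible_groups(date(2024, 7, 15), today=date(2024, 7, 16)))  # ['dev', 'canary']
```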

131 points

It isn’t even a Linux vs Windows thing but a “competent at your job” vs “don’t know what the fuck you are doing” thing. Critical systems are immutable and isolated, or as close to that as reasonably possible. They don’t do live updates of third-party software, and certainly not of software that runs privileged and can crash the operating system.

I couldn’t face working in corporate IT with this sort of bullshit going on.

61 points

This is just like “what not to do in IT/dev/tech 101” right here. Ever since I’ve been in the industry - literally decades at this point - I was always told, even back in school: never test in production, never roll anything out to production on a Friday, and if you’re unsure, have someone senior do a code review. Crowdstrike failed to do all of the above. Even the most junior of junior devs should know better. So the fact that this update was allowed to go through… I mean, blame the juniors, the seniors, the PMs, the CTOs, everyone. If your shit is so critical that a couple of bad lines of poorly written code (which apparently is what it was) can cripple the majority of the world… yeah, Crowdstrike is done.

35 points

It’s incredible how an issue of this magnitude didn’t get discovered before they shipped it. It’s not exactly an issue that happens in some niche cases. It’s happening on all Windows computers!

This can only happen if they didn’t test their product at all before releasing to production. Or worse: maybe they did test, got the error, went “eh, it’s probably just something wrong with the test systems”, and shipped anyway.

This is just stupid.

5 points

Can you imagine being the person that hit that button today? Jesus.

28 points

It’s also a “don’t allow third-party proprietary shit into your kernel” issue. If the driver were open source it would actually go through public code review, and the issue would be more likely to get caught. Even if it did slip through, people would publicly have a fix by now with all the eyes on the code. It also wouldn’t get pushed to everyone simultaneously under the control of a single company; it would get tested and packaged by distributions before making it to end users.

5 points

It’s actually a “test things first and have a proper change control process” thing. Doesn’t matter if it’s open source, closed source scummy bullshit or even coded by God: you always test it first before hitting deploy.

12 points

And roll it out in a controlled fashion: 1% of machines, 10%, 25%…no issues? Do the rest.
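A rough Python sketch of that kind of staged rollout, assuming you control when each machine picks the update up; the fleet list and the two helpers are placeholders, not any vendor’s real API:

```python
import random

# Placeholder fleet and helpers; in reality these would talk to your
# deployment tooling and your monitoring/crash-report system.
FLEET = [f"host-{i:04d}" for i in range(10_000)]

def push_update(hosts: list[str]) -> None:
    print(f"pushing update to {len(hosts)} hosts")  # stand-in for the real push

def fleet_is_healthy(hosts: list[str]) -> bool:
    return True  # stand-in for "no boot loops or crash reports from these hosts"

def staged_rollout(hosts: list[str], stages=(0.01, 0.10, 0.25, 1.00)) -> None:
    """Push to 1%, then 10%, 25% and finally everyone, stopping at the first sign of trouble."""
    remaining = hosts[:]
    random.shuffle(remaining)  # don't always canary the same machines
    done: list[str] = []
    for fraction in stages:
        target = int(len(hosts) * fraction)
        batch, remaining = remaining[: target - len(done)], remaining[target - len(done):]
        push_update(batch)
        done.extend(batch)
        if not fleet_is_healthy(done):
            raise SystemExit(f"halting rollout at {fraction:.0%}: fleet unhealthy")
    print("rollout complete")

staged_rollout(FLEET)
```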

How this didn’t get caught by testing seems impossible to me.

The implementation/rollout strategy just seems bonkers. I feel bad for all of the field support guys who have had their next few weeks ruined, the sysadmins who won’t sleep for 3 days, and all of the innocent businesses that got roped into it.

A couple of local shops are fucked this morning. Kinda shocked they’d be running CrowdStrike, but then these aren’t big businesses; they’re probably using managed service providers who are now swamped, and who knows when they’ll get back online.

One was a bakery. They couldn’t sell all the bread they made this morning.

2 points

It’s not that clear-cut a problem. There seem to be two elements: the kernel driver had a memory safety bug, and a definitions file was deployed incorrectly, triggering the bug. The kernel driver definitely deserves a lot of scrutiny, and static analysis should have told them this bug existed. The live updates are a bit different, since this is a real-time response system. If malware starts actively exploiting a software vulnerability, they can’t wait for distribution maintainers to package their mitigation - it has to be deployed ASAP. They certainly should roll out definitions progressively and monitor for anything anomalous, but it has to be quick or the malware could beat them to it.

This is more a code safety issue than a CI/CD one. The bug was in the driver all along, but it had never been triggered before, so it passed the tests and got rolled out to everyone. Critical code like this ought to be written in a memory-safe language like Rust.
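To illustrate the definitions side of it, here is a minimal Python sketch with a made-up file format (CrowdStrike’s channel files are not documented like this): the point is just that content pushed to privileged code should be sanity-checked and rejected when malformed, rather than trusted and dereferenced.

```python
import struct
from pathlib import Path

# Hypothetical definitions-file layout: 4-byte magic, 4-byte record count,
# then fixed-size 32-byte records. Any real format would differ.
MAGIC = b"DEFS"
RECORD_SIZE = 32

def load_definitions(path: Path) -> list[bytes]:
    data = path.read_bytes()
    if len(data) < 8 or data[:4] != MAGIC:
        raise ValueError(f"{path}: not a definitions file, refusing to load")
    (count,) = struct.unpack_from("<I", data, 4)
    expected = 8 + count * RECORD_SIZE
    if len(data) != expected:
        # A truncated or zero-filled file fails here, instead of being parsed
        # into out-of-bounds reads further down the stack.
        raise ValueError(f"{path}: expected {expected} bytes, got {len(data)}")
    return [data[8 + i * RECORD_SIZE : 8 + (i + 1) * RECORD_SIZE] for i in range(count)]
```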

15 points

I couldn’t face working in corporate IT with this sort of bullshit going on.

I’m taking it you don’t work in IT anymore then?

6 points

There are state and government IT departments.

14 points

More generally: delegate anything critical to a 3rd party and you’ve just put your business at the mercy of the quality (or lack thereof) of their business processes, which you do not control. That is especially dangerous in the current era of “as cheap as possible” hiring practices.

Having been in IT for almost 3 decades, a lesson I learned long ago, and which I’ve also been applying to my own things (such as having my own domain for my e-mail address rather than using something like Google), is that you should avoid as much as possible having your mission-critical or hard-to-replace stuff depend on a 3rd party, especially if the dependency is live (i.e. actively connected, rather than just buying and installing their software).

I’ve managed to avoid quite a lot of the recent enshittification exactly because I’ve been playing it safe in this domain for 2 decades.

3 points
Deleted by creator
2 points

Our group got hit with this today. We don’t have a choice. If you want to run Windows, you have to install this software.

It’s why stuff like this is so crippling. Individual organizations within companies have to follow corporate mandates, even if they don’t agree.

-6 points

So it’s Linux vs Windows

25 points

No it’s Crowdstrike… we’re just seeing an issue with their Windows software, not their Linux software.

-7 points

That being said, Microsoft still did hire CrowdStrike and give them the keys to release an update like this.

The end result is still Windows having more issues than Linux.

83 points

Didn’t Crowdstrike have a bad update to Debian systems back in April this year that caused a lot of problems? I don’t think it was a big thing since not as many companies are using Crowdstrike on Debian.

Sounds like the issue here is Crowdstrike and not Windows.

42 points

They didn’t even bother to do a gradual rollout, like even small apps do.

The level of company-wide incompetence is astounding, but considering how organizations work and disregard technical people’s concerns, I’m never surprised when these things happen. It’s a social problem more than a technical one.

18 points

They didn’t even bother to test their stuff, must have pushed to prod

(Technically, test in prod)


Everyone has a test environment

Some are lucky enough to also have a separate production environment

17 points

A crowdstrike update killed a bunch of our Linux VMs that had a newer kernel a month or so ago.

3 points

*crowdstrike

79 points

While I don’t totally disagree with you, this has mostly nothing to do with Windows and everything to do with a piece of corporate spyware garbage that some IT manager decided to install. If tools like that existed for Linux, doing what they do to the OS, trust me, we would be seeing kernel panics as well.

63 points

Hate to break it to you, but CrowdStrike falcon is used on Linux too…

55 points

And if it was a kernel-level driver that failed, Linux machines would fail to boot too. The number of people seeing this and saying “MS bad” (which is true, but has nothing to do with this) instead of “how does an 83 billion dollar IT security firm push an update this fucked” is hilarious.

10 points

Falcon uses eBPF on Linux nowadays. It’s still an irritating piece of software, but it won’t make your boxen fail to boot.

edit: well, this is a bad take. I should avoid commenting on shit when I’m sleep deprived and filled with meeting dread.

-1 points

You’re asking the wrong question: why does a security nightmare need a 90 billion dollar company to unfuck it?

12 points

And Macs, we have it on all three OSs. But only Windows was affected by this.

32 points

Hate to break it to you, but most IT managers don’t care about Crowdstrike: they’re forced to choose some kind of EDR to complete audits. But yes, things like Crowdstrike, Huntress, SentinelOne, even Microsoft Defender all run on Linux too.

4 points

Yeah, you’re right.

24 points

I wouldn’t call Crowdstrike corporate spyware garbage. I work as a red teamer in cybersecurity, and EDRs are the bane of my existence - they are useful, and pretty good at what they do. In the last few years I’ve been struggling more and more with the engagements we do, because EDRs just get in the way and catch a lot of what would have passed undetected a month ago. Staying on top of them with our tooling is getting more and more difficult, and I would call that a good thing.

I’ve recently tested a company without an EDR, and boy was it a treat. Not defending Crowdstrike - to call this a major fuckup is a great understatement - but calling it “corporate spyware garbage” feels a little bit unfair. EDRs do make a difference, and this wasn’t an issue with their product in itself, but with the irresponsibility of their patch management.

2 points

Fair enough.

Still, this fiasco proved once again that the biggest threat to IT is sometimes on the inside. At the end of the day a bunch of people decided to buy Crowdstrike and got screwed over. Some of them actually had good reason to use a product like that; for others it was just paranoia and FOMO.

2 points

The problem is that managers view security as a product they can simply buy wholesale, instead of a service they need to hire a security guy (or a whole department) for.

2 points

Hmmm… but that goes up to the CEO level. People like to see everything as a product they can buy, because that carries fewer liabilities than hiring people… It also makes a lot more sense from an accounting perspective.

-2 points

How is it not a Windows problem?

19 points

Why should it be? A faulty software update from a 3rd party crashes the operating system. The exact same thing could happen to Linux hosts as well, given how much access these security programs usually get.

-16 points

But that patch is for Windows, not Linux. Not a hypothetical - this is happening.

15 points

The fault seems to be 90/10 CS, MS.

MS allegedly pushed a bad update. Ok, it happens. Crowdstrike’s initial statement seems to be blaming that.

CS software csagent.sys took exception to this and royally shit the bed, disabling the entire computer. I don’t think it should EVER do that, so the weight of blame must lie with them.

The really problematic part is, of course, the need to manually remediate these machines. I’ve just spent the morning of my day off doing just that. Thanks, Crowdstrike.

EDIT: Turns out it was 100% Crowdstrike, and the update was theirs. The initial press release from CS seemed to be blaming Microsoft for an update, but that now looks to be misleading.

3 points

It is in the sense that Windows admins are the ones who like to buy this kind of shit and use it. It’s not in the sense that Windows itself was broken somehow.

65 points

I’ve just spent the past 6 hours booting into safe mode and deleting crowd strike files on servers.

19 points

Feel you there. 4 hours here. All of them cloud instances where getting access to the actual console isn’t as easy as it should be, and trying to hit F8 to get the menu to boot into safe mode can take a very long time.

7 points

Ha! Yes. Same issue. Clicking Reset in vSphere and then quickly switching tabs to hold down F8 has been a ball ache to say the least!

2 points

Just go into settings and add a boot delay, then set it back when you’re done.
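If you have a lot of VMs to touch, that tweak can itself be scripted; a rough pyvmomi sketch, assuming you have vCenter credentials and have already looked up the vim.VirtualMachine object (hostnames and credentials here are placeholders):

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; certificate handling omitted for brevity.
si = SmartConnect(host="vcenter.example", user="administrator@vsphere.local", pwd="...")

def set_boot_delay(vm: vim.VirtualMachine, delay_ms: int) -> None:
    """Delay the BIOS/EFI handoff so there is time to hit F8 in the console."""
    spec = vim.vm.ConfigSpec(bootOptions=vim.vm.BootOptions(bootDelay=delay_ms))
    vm.ReconfigVM_Task(spec=spec)  # in real use, wait for the task to complete

# ... find the VM (e.g. via a container view), then:
# set_boot_delay(vm, 10000)  # 10 seconds; set it back to 0 when you're done

Disconnect(si)
```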

2 points

What I usually do is set the next boot to BIOS so I have time to get into the console and do whatever.

Also, instead of using a browser, I prefer to connect VMware Workstation to vCenter so all the consoles instantly open in their own tabs in the workspace.

1 point

Can’t you automate it?

10 points

Since it has to happen in Windows safe mode, it seems to be very hard to automate the process. I haven’t seen a solution yet.

5 points

Sadly not. Windows doesn’t boot. You can boot it into safe mode with networking, at which point maybe we could log in with Ansible to delete the file, but since it’s still manual work to get Windows into safe mode, there’s not much point.

7 points

It is theoretically automatable, but on bare metal it requires having hardware that’s not normally just sitting in every data centre, so it would still require someone to go and plug something into each machine.

On VMs it’s more feasible, but on those VMs most people are probably just mounting the disk images and deleting the bad file to begin with.
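For the mounted-image route, the cleanup itself is tiny. A sketch assuming the broken guest’s Windows volume is already mounted (the mount point is hypothetical; the C-00000291*.sys pattern is the one from CrowdStrike’s published workaround):

```python
from pathlib import Path

# Wherever the affected VM's Windows volume happens to be mounted (hypothetical path).
MOUNT = Path("/mnt/broken-vm")
DRIVER_DIR = MOUNT / "Windows" / "System32" / "drivers" / "CrowdStrike"

def remove_bad_channel_files(driver_dir: Path) -> int:
    """Delete the faulty channel file(s) so the guest can boot normally again."""
    removed = 0
    for f in driver_dir.glob("C-00000291*.sys"):
        print(f"removing {f}")
        f.unlink()
        removed += 1
    return removed

if __name__ == "__main__":
    count = remove_bad_channel_files(DRIVER_DIR)
    print(f"removed {count} file(s); unmount and boot the VM")
```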
