…according to a Twitter post by the Chief Information Security Officer of Grand Canyon Education.
So, does anyone else find it odd that the file that caused CrowdStrike to freak out, C-00000291-00000000-00000032.sys, was 42KB of blank/null values, while the replacement file C-00000291-00000000-00000033.sys was 35KB and looked like a normal, if somewhat obfuscated, sys/.conf file?
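For anyone curious, confirming that a file really is nothing but nulls is trivial. A rough Python sketch (the filename is just the one quoted above; the path would be wherever you copied the file to, not the real install location):

```python
# Hypothetical check: is this channel file entirely null bytes?
from pathlib import Path

def is_all_null(path: Path) -> bool:
    data = path.read_bytes()
    print(f"{path.name}: {len(data)} bytes")
    return len(data) > 0 and all(b == 0 for b in data)

channel_file = Path("C-00000291-00000000-00000032.sys")  # name as quoted above
if is_all_null(channel_file):
    print("Entirely null bytes -- nothing a parser could meaningfully act on.")
```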
Also, apparently CrowdStrike had at least 5 hours to work on the problem between the time it was discovered and the time it was fixed.
Every affected company should be extremely thankful that this was an accidental bug, because if CrowdStrike gets hacked, bad actors could basically ransom I don’t know how many millions of computers overnight.
Not to mention that CrowdStrike will now be a massive target for hackers trying to do exactly this.
You mean it’s going to cost corporations a pretty penny. Which means they’ll pass those “costs of operation” on to the rest of us. Fuck.
On Monday I will once again be raising the point of not automatically updating software. Just because it’s being updated does not mean it’s better and does not mean we should be running it on production servers.
Of course they won’t listen to me but at least it’s been brought up.
Thank God someone else said it. I was in a constant existential battle with IT at my last job over forced updates, many of which actually did break systems we relied on, because Apple loves introducing breaking changes in OS updates (like completely fucking up how dynamic libraries work).
Updates should be vetted. It’s a pain in the ass to do because companies never provide an easy way to roll back, but this really should be standard practice.
I thought it was a security definition download; as in, there’s nothing short of not connecting to the Internet that you can do about it.
Well, I haven’t looked into it for this piece of software, but essentially you can prevent automatic updates from reaching the network, usually because the network is behind a firewall that you can use to block the update until you decide you like it.
Also, a lot of companies recognize that businesses like to vet updates and so provide more streamlined ways of doing it. For instance, Apple has a dedicated update management system for iOS devices that only businesses have access to: if you decide you don’t want the latest iOS, you simply don’t enable it and it doesn’t happen.
Regardless of the method, what should happen is that you download the update to a few testing computers (preferably also physically isolated from the main network) and run some basic checks to see whether it works. In this case the testing computers would have blue-screened instantly, and you would have known this is not an update you want on your systems, although it usually takes a bit more investigation to pin down problems.
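Very roughly, something like this; the apply_update and is_healthy helpers here are stand-ins for whatever your management tooling actually exposes, so treat it as a sketch of the flow, not a real implementation:

```python
# Hypothetical canary flow: apply an update to a small pool of isolated test
# machines and only promote it if every canary stays healthy.
import time

CANARIES = ["canary-01", "canary-02", "canary-03"]

def apply_update(host: str, package: str) -> None:
    print(f"applying {package} to {host}")        # placeholder for real tooling

def is_healthy(host: str) -> bool:
    return True                                    # placeholder health probe

def canary_rollout(package: str, soak_minutes: int = 30) -> bool:
    for host in CANARIES:
        apply_update(host, package)
    time.sleep(soak_minutes * 60)                  # let the update soak on the canaries
    if all(is_healthy(h) for h in CANARIES):
        print("canaries healthy -- safe to promote to the rest of the fleet")
        return True
    print("canary failure -- block the update")
    return False
```

In this incident the "health probe" would have been as simple as noticing the canaries never came back up.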
I’ve got a feeling CrowdStrike won’t be as grand of a target anymore. They’re sure to lose a lot of clients… at least until they spin up a new name and erase all traces of “CrowdStrike”.
Third parties being able to push updates to production machines without them being tested first is a giant red flag for me. We’re human… we fuck up. I understand that. But that’s why you test things first.
I don’t trust myself without double-checking, so why would we trust a third party so completely?
Properly regulated capitalism breaks up monopolies so new players can enter the market. What you’re seeing is dysfunctional capitalism - an economy of monopolies.
Years ago I read a study about insurance companies and diversification of assets in Brazil. By regulation, each individual insurance company has to hold a diversified investment portfolio, but the insurance market as a whole does not. When you sum up every individual company’s “diversified” portfolio, the market as a whole turns out to be heavily exposed, and the researchers found, iirc, something like 3 banks that, if they failed, could trigger a chain reaction that would take out the entire insurance market.
Don’t know why, but your comment reminded me of that.
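With totally made-up numbers, the effect looks something like this: every insurer individually holds ten positions at 10% each, which looks diversified, but if they all share the same three banks, the aggregate market is anything but.

```python
# Illustrative only: individually diversified portfolios can still sum to a
# concentrated market if everyone leans on the same few banks.
from collections import Counter

aggregate = Counter()
for i in range(100):                       # 100 hypothetical insurers
    holdings = {"bank_A": 0.10, "bank_B": 0.10, "bank_C": 0.10}
    for j in range(7):                     # seven holdings unique to each insurer
        holdings[f"asset_{i}_{j}"] = 0.10
    aggregate.update(holdings)

total = sum(aggregate.values())
for asset, weight in aggregate.most_common(5):
    print(f"{asset}: {weight / total:.1%} of the whole market")
# The three shared banks end up carrying ~30% of the market between them,
# while every other asset is a rounding error.
```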
Ah, a classic off by 43,008 zeroes error.
If I had to bet money, I’d say a machine with corrupted memory pushed the file at the very final stage of the release.
The astonishing thing is that for security software I would expect all files to be verified against a signature (that would have prevented this issue and some kinds of attacks).
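For illustration only, a minimal "verify before load" check could look like the sketch below, using the Python cryptography package and an Ed25519 signature. The key handling and filenames are my own assumptions; a real endpoint agent would do this inside the driver/loader itself, but the idea is the same.

```python
# Sketch: refuse to use a content file unless its signature checks out.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_channel_file(data: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, data)   # raises on any tampering or corruption
        return True
    except InvalidSignature:
        return False

# A 42KB file of nulls would fail this check and simply be refused,
# instead of being handed to kernel-mode code.
```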
So here’s my uneducated question: Don’t huge software companies like this usually do updates in “rollouts” to a small portion of users (companies) at a time?
I mean yes, but one of the issues with “state of the art AV” is they are trying to roll out updates faster than bad actors can push out code to exploit discovered vulnerabilities.
The code/config/software push may have worked on some test systems but MS is always changing things too.
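For what it’s worth, a staged rollout doesn’t have to be slow. A rough sketch of a ring-based push, where the percentages and the crash-rate feed are made up for illustration:

```python
# Hypothetical ring rollout: ship to a small slice first, watch an error
# signal, then widen. Halt the moment the signal looks bad.
RINGS = [0.01, 0.05, 0.25, 1.00]           # fraction of the fleet per wave
MAX_CRASH_RATE = 0.001                      # abort if >0.1% of hosts crash

def crash_rate_for(fraction: float) -> float:
    return 0.0                              # placeholder telemetry query

def staged_rollout(package: str) -> None:
    for fraction in RINGS:
        print(f"pushing {package} to {fraction:.0%} of hosts")
        if crash_rate_for(fraction) > MAX_CRASH_RATE:
            print("crash rate exceeded threshold -- halting rollout")
            return
    print("rollout complete")
```

Even a first ring measured in minutes would have caught a file that blue-screens every machine it touches.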
Companies don’t like to be beta testers. Apparently the solution is to just not test anything and call it production ready.
Every company has a full-scale test environment. Some companies are just lucky enough to have a separate prod environment.
When I worked at a different enterprise IT company, we published updates like this to our customers and strongly recommended they all have a dedicated pool of canary machines to test the update in their own environment first.
I wonder if CRWD advised their customers to do the same, or soft-pedaled the practice because it’s an admission there could be bugs in the updates.
I know the suggestion of keeping a staging environment was off-putting to smaller customers.
Windows kernel drivers are signed by Microsoft. They must have rubber-stamped this for it to go through, though.
This was not the driver; it was a config file or something read by the driver. Now, having a kernel-space driver depend on a config file at a regular path is another fuck-up.
Not sure about Mac, but on Linux, they’re signed by the distro maintainer or with the computer’s secure boot key.
This file compresses so well. 🤏
If it had been all ones this could have been avoided.
Just needed to add 42k of ones to balance the data. Everyone knows that, like tires, you need to balance your data.
I mean, joking aside, isn’t that how parity calculations used to work? “Got more uppy bits than downy bits - that’s a paddlin’” or something.
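Roughly, yes: classic even parity just counts the one-bits and adds a bit so the count comes out even. A toy sketch, purely my own illustration and nothing to do with the channel file format:

```python
# Even parity in miniature: 1 means an extra one-bit is needed to even things out.
def even_parity_bit(data: bytes) -> int:
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

print(even_parity_bit(b"\x00" * 42 * 1024))   # all zeros -> parity bit 0
print(even_parity_bit(b"\x07"))               # three one-bits -> parity bit 1
```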
Assuming they were all calculations, which they won’t have been.
We will probably never know for sure, because the company will never actually release a postmortem, but I suspect the file was essentially treated as unreadable and didn’t actually do anything. The problem will have been that important bits of code that should have been in there no longer existed.
You would have thought they’d do some testing before releasing an update, wouldn’t you? I’m sure their software developers have a bright future at Boeing ahead of them. Although, in fairness to them, this will almost certainly have been a management decision.