TL;DR - What are you running as a means of “antivirus” on Linux servers?
I have a few small Debian 12 servers running my services and would like to enhance my security posture. Some services are exposed to the internet, and I’ve done quite a few things to protect the services and the hosts. When it comes to “antivirus”, I was looking at ClamAV as it seemed to be the most recommended. However, when I read the documentation, it stated that the recommended RAM was at least 2-4 gigs. Some of my servers have more power than others, but some do not meet this requirement. The lower-powered hosts are RPi 3s and some Lenovo Tinys.
When I searched for alternatives, I came across rkhunter and chkrootkit, but they seem to no longer be maintained, as their latest releases were several years ago.
If possible, I’d like to run the same software across all my servers for simplicity and uniformity.
If you have a similar setup, what are you running? Any other recommendations?
P.S. if you are of the mindset that Linux doesn’t need this kind of protection then fine, that’s your belief, not mine. So please just skip this post.
To be honest, antivirus software is just not really a security tool. If you’re at the point where malicious software is running on your server, you’ve already lost, and it’s hard to know what the extent of the damage will be. Having proper isolation is much more important (something which, tbh, Linux isn’t quite as great at as we’d like to think, at least not without additional effort… mobile operating systems seem to take the isolation of applications a lot more seriously). You could maybe argue that antivirus software is useful for monitoring, but I’d rather have some stronger guarantees that my application isn’t going to take my lunch money and private keys than get a notice a day later that something sketchy is on my machine… I won’t flat out say a virus scanner is completely useless, because of course you can contrive scenarios where one could be helpful, but they’re kind of dubious.
Also yeah, ClamAV afaik isn’t really used like a typical Windows antivirus. It’s mostly used on mail servers to scan email attachments. It’s not necessarily even looking for “Linux viruses”.
Okay, sure, same thing as Windows. If you aren’t reckless with the things you install and run, then you are likely fine, BUT there’s always a chance. All it takes is one slip-up. Same logic as having a lock on the door knob and a deadbolt. By your logic (and many others’), the lock on the door knob is sufficient, and that may be okay with you, BUT I’m gonna put a deadbolt on too, just in case.
We can argue about this all day long. You will have valid points and so will I.
But would you put a deadbolt on your garage door? Or on your fridge door? IMO, arguing by analogy here just obfuscates the point – your servers aren’t physical doorways with locks, and comparing them just confuses the issue.
Can you explain what added security an antivirus package would offer for a Linux server? I haven’t done much with Linux administration, mostly just using Docker images for stuff at work.
I’m not a super Linux expert or anything, but I do grok tech, and I’m curious about this topic.
The core problem with this approach is that antivirus scanning is generally based on signature recognition of malicious binaries. Behavior-based antivirus scanning mostly doesn’t work and tends to generate a lot of false positives. No freely available antivirus is going to have a signature library that is kept up to date enough to be worth the effort of running it on Linux - most vulnerabilities are going to be patched long before a free service gets around to creating a signature for malware that exploits those vulnerabilities, at which point the signature would be moot. If you want antivirus that is kept up to date on a weekly or better basis, you’re going to have to pay for a professional service.
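To make the signature point above concrete, here is a toy sketch of what hash-based signature matching boils down to (the payloads and names here are made up for illustration, not taken from any real AV database):

```python
import hashlib

# Toy "signature database": SHA-256 hashes of known-bad payloads.
# A real AV vendor maintains millions of these and must ship updates
# constantly; these entries are invented for the sketch.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"fake-malware-payload-v1").hexdigest(),
}

def is_known_bad(blob: bytes) -> bool:
    """Signature match: hash the blob and look it up in the database."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_BAD_SHA256

# An exact copy of a known payload is caught...
print(is_known_bad(b"fake-malware-payload-v1"))  # True
# ...but changing a single byte defeats the signature entirely.
print(is_known_bad(b"fake-malware-payload-v2"))  # False
```

Which is exactly why a stale or slowly updated signature database catches almost nothing: any trivially repacked variant hashes differently.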
That said, there are other, simpler (and probably more effective) options for hardening your systems:
- Firewall - if your servers are dedicated to specific services and you don’t plan on adding many more applications, you should be able to tighten up their firewalls to have only the ports they need open and nothing else. If network security is a priority, you should start with this.
- Application Whitelisting - prevent unrecognized applications from running. There are more options for this on Windows (including the built-in AppLocker), but there are some AWL options for Linux. It’s a lot easier to recognize the things that you do want to run than all of the things that you don’t want to run.
- Secure OS - I assume you’re using Debian because it’s familiar, but it is a general-purpose OS with a broad scope. Consider switching to a more stripped-down variant like Alpine Linux (it can be installed on a Pi).
The firewall point I just don’t get. When I set up a server, for every port I either run a service and it is open, or I don’t and it is closed. That’s it. What should the firewall block?
Malware might create a service which opens a previously closed port on your system. An independently configured firewall would keep the port closed, even if the service was running without your knowledge, hopefully blocking whatever activity the malware was trying to do.
Also, you can configure the firewall to drop packets coming in to closed ports, rather than responding to the sending device that the port is closed. This effectively black-holes the incoming traffic, so it looks like there’s just nothing there.
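As a minimal sketch of both points (the open ports here are placeholders; adjust to your actual services), an nftables ruleset with a silent default-drop policy looks roughly like this:

```
# /etc/nftables.conf -- sketch only; 22 and 443 are example ports
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport { 22, 443 } accept
        # anything else is silently dropped -- no "port closed"
        # reply is ever sent, and a port opened behind your back
        # by malware stays unreachable from outside
    }
}
```

With `policy drop`, unmatched packets are black-holed; you would have to add an explicit `reject` rule if you actually wanted to tell the sender the port is closed.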
You can set up an intrusion detection/prevention system that logs/blocks certain traffic. If you do have public services running, you could block access based on location, lists of known bad actors, etc. I guess you could argue that this is beyond the scope of a traditional firewall.
A modern firewall might also block connections to known bad sites, in case you do somehow get malware reaching out to a command & control server. Or it might identify malicious application traffic over a port that should be for a more trustworthy service.
But these are usually only a concern in places like businesses or schools where there are a lot more people, devices, etc. on the network, especially if there’s a guest network.
I think you’re about to find out that the “belief” that Linux doesn’t need antivirus isn’t just held by people in this community; it’s held by the whole Linux community. Hence there being no active projects in the space.
Heck, you almost don’t need any antivirus on Windows anymore. Just Windows Defender and half a brain when it comes to what you download.
Many security experts I know consider AV software to be snake oil, and I do too. It is so complex and needs such far-reaching permissions to be somewhat effective that it becomes the attack vector and/or a large risk factor for faulty behavior.
Add in lots of false positives and it just numbs the users to the alerts.
Nothing beats educating users and making sure the software in use isn’t braindead. For example, Microsoft programs hiding file extensions by default is a far bigger security problem than a missing AV tool. Or word processors that allow embedded scripts to perform shit outside the application. The list goes on…
I don’t really understand that belief. There is plenty of Linux malware, especially targeting servers; you just need to have an insecure service running to find that out.
I have been using Linux for almost two decades and have never seen a virus. And I have never heard of a colleague or friend who got one on Linux. That’s why no one has ever installed an antivirus: because, till now, the risk has been practically zero.
On Windows, on the other hand, I saw so many viruses on friends’ and relatives’ computers…
People install antiviruses based on their experience.
To be fair, we all know viruses exist on Linux, but it is objectively pretty difficult to get one. It is not worth installing an antivirus if one doesn’t actively install garbage from untrusted sources.
It’s not any more difficult to get a virus on Linux than Windows. It comes down to experience as you said. I’ve been using Windows for my entire life and haven’t gotten a virus since I was 8. But all it takes is one mistake on both Windows and Linux, you accidentally leave a docker endpoint or ssh server exposed and insufficiently protected on Linux and you’re going to get a virus the same as if you accidentally opened a .pdf.exe on Windows.
What happens in the Windows world: Microsoft is not capable of creating and distributing a patch in a timely manner. Or they wait for “patch day”, the made-up nonsense reason to delay patches for nothing. Also, since Windows has no sensible means of keeping software up to date, the user has to constantly update every single thing themselves, with varying diligence. Hence antivirus: there is so much time between a virus becoming known and actual patches landing on Windows that antivirus vendors can easily implement and distribute code that recognizes that virus in the meantime.
What happens in the Linux world: a patch is often delivered in a matter of hours, usually even before news outlets get around to reporting the vulnerability.
Zero days aren’t the only way you get viruses. Misconfiguration and social engineering are both vectors that are OS agnostic.
I’m a senior Linux/Kubernetes sysadmin, so I deal with system security a lot.
I don’t run ClamAV on any of my servers, and there are much more important ways to secure your server than looking for Windows viruses.
If you’re not already running your servers in Docker, you should. It’s extremely useful for automating deployment and updates, and it also sets a baseline for isolation and security that you should follow. By running all your services in Docker containers, you always know that all of your subcomponents are up to date, and you can update them much faster and more easily. You also get the peace of mind of knowing that even if one container is compromised by an attacker, it’s very hard for them to compromise the rest of the system.
OWASP has published a top 10 of security measures you can take once you’ve set up Docker.
https://github.com/OWASP/Docker-Security/blob/main/dist/owasp-docker-security.pdf
This list doesn’t seem like it’s been updated in the last few years, but it still holds true.
- Don’t run as root, even in containers
- Update regularly
- Segment your network services from each other and use a firewall
- Don’t run unnecessary components, and make sure everything is configured with security in mind
- Separate services by security level by running them on different hosts
- Store passwords and secrets in a secure way (usually this means not hardcoding them into the Docker container)
- Set resource limits so that one container can’t starve the entire host
- Make sure that the Docker images you use are trustworthy
- Set up containers with read-only file systems, only mounting r/w tmpfs dirs in specific locations
- Log everything to a remote server so that logs cannot be tampered with (I recommend opentelemetry collector (contrib) and Loki)
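Several of those points can be expressed directly in a compose file. A sketch (the service name, image, network, and paths are made up for illustration, not from the OWASP document):

```yaml
# compose.yaml -- illustrative only
services:
  myapp:
    image: myorg/myapp:1.2.3     # pin a tag from a registry you trust
    user: "1000:1000"            # don't run as root inside the container
    read_only: true              # read-only root filesystem...
    tmpfs:
      - /tmp                     # ...with a writable tmpfs only where needed
    env_file: ./myapp.env        # secrets kept out of the image itself
    mem_limit: 512m              # resource limits so one container
    cpus: 0.5                    #   can't starve the host
    networks:
      - myapp_net                # segmented network, not the default bridge
networks:
  myapp_net:
```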
The OWASP document goes into more detail on each of these.
Hey, kinda off topic but what’s the best way to get into a Linux/Kubernetes admin role? I’ve got a degree in networking, several years of helpdesk experience and I’m currently working as an implementation specialist.
Is that something I could simply upskill and slide into or are there specific certs that will blow the doors open for new opportunities?
Sure! I got my start with this sort of tech just running Docker containers on my home server for stuff like Nextcloud and game servers. I did tech support for a more traditional web-hosting MSP for a while, and then I ended up getting hired as a DevOps trainee for an internal platform team doing Kubernetes. I did some Kubernetes consulting after that and got really experienced with the tech.
I would say to try running some Docker containers and learn their pros and cons, and then start studying for the CKAD certification. The CKAD cert is pretty comprehensive, and it’ll show you how to run containers in production with Kubernetes. Kind is a great way to get a Kubernetes cluster running on your laptop. For longer-term clusters, you can play around with k3s on-prem, or otherwise I would recommend DigitalOcean’s managed Kubernetes. Look into ArgoCD once you want to get serious about running Kubernetes in production.
I think with a CKAD cert you can land a Kubernetes job pretty easily.
I would probably only recommend the CKA cert on the path to CKS. CKA gets into a lot of the nitty-gritty of running a Kubernetes cluster that I think most small-to-medium companies would probably skip in favor of a managed solution.
Kubernetes has a steep learning curve, since you need to understand Operations top-to-bottom to start using it, but once you have it in your tool belt, it gives you endless power and flexibility when it comes to IT Operations.
I don’t see how anything I said justifies you calling me names and calling me bad at my job. Chill out.
Containers allow for more defense-in-depth, along with their multiple other benefits to maintainability, updatability, reproducibility, etc. Sure, you can exploit the same RCE vuln on both traditional VMs and containers, but an attacker who gets shell access on a container is not going to be able to do much. There are no other processes or files that it can talk to or attack. There’s no obvious way for an attacker to gain persistence, since the file system is read-only, or at least everything will be deleted the next time the container is updated/moved. Containers eliminate a lot of options for attackers, and make it easier for admins to set up important security systems like firewalls and an update routine.
Obviously containers aren’t always going to be the best choice for every situation; architecting computer systems requires taking a lot of different variables into account. Maybe your application can never be restarted and needs a more durable VM solution. Maybe your application only runs on Windows. Maybe your team doesn’t have experience with Kubernetes. Maybe your vendor only supplies VM images. But running your applications as stateless containers in Kubernetes solves a lot of problems that we’ve historically had to deal with in IT Operations, both technically and organizationally.
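To make the least-privilege/no-persistence point concrete, here is a sketch (the image, user, and binary names are made up):

```dockerfile
# Dockerfile -- sketch only; image and binary names are invented
FROM debian:12-slim
RUN useradd --system --no-create-home appuser
COPY app /usr/local/bin/app
# Any shell an attacker pops lands as an unprivileged user, not root
USER appuser
ENTRYPOINT ["/usr/local/bin/app"]
```

Run it with something like `docker run --read-only --tmpfs /tmp myorg/app` and there’s nowhere writable to drop a persistent payload; anything written to /tmp vanishes with the container.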
Behavior-based antivirus is extremely difficult, failure-prone, and almost entirely unnecessary given how secure Linux is, so to my knowledge none exists for it. Signature-based antivirus is basically useless because any security holes exploited by a virus are patched upstream rather than waiting for an antivirus to block them. ClamAV focuses on Windows viruses, not Linux ones, so it can work as a signature-based antivirus; but not many people run an email server accessed by Windows devices, or other similar services that call for ClamAV, so not many people use it, and nobody has made any alternatives.
If you’re worried about security, focus on hardening and updates, not antiviruses.