Sunday, December 15, 2024

Why production servers shouldn’t have external interfaces

People sometimes want to use their application servers as firewalls. This seems attractive at first glance: slap in another network card, add some packet filtering, tighten the system down a bit and connect it to the outside world. Cheap and quick, but a very bad idea.

Production servers should never be firewalls. That doesn’t mean you should neglect security on these servers; treat them in every respect as though they were wide open to the big bad world, even though they never should be. Do run packet filters, TCP wrappers, and intrusion detection software. Eliminate unneeded services, and keep your systems up to date with security patches. But have a separate firewall.
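
As one hedged sketch of that hardening, TCP wrappers can deny everything by default and allow only what’s needed (the subnet shown is a hypothetical internal network):

    # /etc/hosts.deny -- refuse anything not explicitly allowed
    ALL: ALL

    # /etc/hosts.allow -- permit ssh only from the internal 192.168.1.x subnet
    sshd: 192.168.1.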

Why do I say this? There are many reasons.

More security is always better than less. Important resources should have better protection. It isn’t just that your data could be stolen; that may not be of great concern to you. Often more important is that your systems can be damaged or their performance severely affected by a security breach. See How secure do you want to be? for more on that.

It’s better to be protected by a different OS.

The more locks in front of something, the harder it is to get to. That’s just obvious. Unfortunately, “keys” to specific parts of certain operating systems turn up every now and then. If you have (for example) Linux as your firewall and UnixWare on your production machine, an exploit that lets someone into one may stall at the other. That can prevent or lessen the damage.

Internal servers are apt to lag behind in patches and OS updates simply because such things may affect the critical applications running on them. A firewall that does nothing but security won’t be crippled by that concern.

Often people hesitate to apply patches to production servers simply because the ordinary function is too important to lose. That hesitation may or may not be valid, but it is very apt to cause patches and updates not to be applied. A separate firewall can usually be updated without affecting production applications. Obviously, organizations heavily dependent upon remote access might have more concerns here, but in general those will still be less than on a production server.

Financial people don’t want to spend money on something that already works. It’s easier (and often cheaper, for various reasons) to keep a separate firewall up to date than an internal production server. For example, an OS update that affects security might cost much more on the server, which may require costly application updates, additional user licenses and so on, than the same fix applied to a dedicated firewall.

People hate to take down internal servers to do updates because it affects real work. Often you can live without the internet for a few hours but not without the production server, so updates get delayed. Whenever there are delays, security is compromised.

Internal servers have to allow much more legitimate access than a firewall requires. You might have hundreds of user accounts on a production server, but need only a handful on a firewall. Every account adds to your security concerns: the fewer accounts to manage, the easier.
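
As a quick illustration (assuming a Linux-style /etc/passwd), you can count the accounts that actually have a usable login shell:

    # list accounts whose shell isn't nologin or false, then count them
    awk -F: '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd | wc -l

On a typical production server that number is large; on a dedicated firewall it should be tiny.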

Internal servers are more subject to accidental security problems such as incorrect file permissions. This is often done in the interests of making applications easier to install or run. How many times has the advice “chmod 777” been given?
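
A sketch of the safer alternative (the group appgrp and the path are hypothetical):

    # The lazy "fix": readable and writable by everyone
    chmod 777 /var/app/data

    # Better: grant write access only to the group the application runs as
    chgrp appgrp /var/app/data
    chmod 770 /var/app/data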

Internal servers are quite apt to have dozens of accounts with weak passwords. It’s generally easier to enforce a strong password policy for external access. Such access can also be limited to only the accounts that really need it. Joe may have to log in two or three times if he’s coming in remotely, but he usually won’t object to that as much as to having a long internal password. And if he does object, it’s an easier battle to fight.
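
One common way to restrict external logins to just those accounts is sshd’s AllowUsers directive (a sketch; joe and mary are hypothetical users):

    # /etc/ssh/sshd_config -- only these accounts may log in via ssh
    # (restart sshd after editing)
    AllowUsers joe mary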

Internal servers are (obviously) already open to inside people, who can accidentally or deliberately open up more access by their actions. It’s often necessary or expedient to give relatively unsophisticated users some system-level access for routine maintenance. Such access is not necessary on a dedicated firewall.
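
Where inside users do need some system-level access, it can at least be confined with sudo rather than handing out the root password (a sketch; the lpc command is just an assumed example of routine maintenance):

    # /etc/sudoers (always edit with visudo)
    # joe may restart the print queues as root, and nothing else
    joe ALL = (root) /usr/sbin/lpc restart all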

Internal servers may need to advertise services that are dangerous on the Internet. Yes, you can and should filter those services, but it’s even better to never let them get near the outside world in the first place. If services are accidentally turned on, or local filter rules fail to account for the outside world, it won’t matter, because the firewall is rigorously blocking everything that is not explicitly allowed.
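
That “block everything not explicitly allowed” posture looks something like this with iptables (the interface name and allowed service are assumptions for illustration):

    # Default policies: drop anything not explicitly allowed
    iptables -P INPUT DROP
    iptables -P FORWARD DROP

    # Let replies to established connections back in
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Explicitly allow only what's needed, e.g. ssh from the internal side (eth1)
    iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT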

It’s also just that much more difficult to secure a service than to simply shut it off. For example, you may need ftp running on your local LAN, but not need it externally at all. Rules to allow internal ftp but block external ftp are plainly more complex (and therefore easier to get wrong) than just not running the ftp service at all.
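
A sketch of that comparison (192.168.1.0/24 is a hypothetical internal network):

    # "Internal ftp only" takes rules that must be written and maintained correctly:
    iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 21 -j ACCEPT
    iptables -A INPUT -p tcp --dport 21 -j DROP

    # Versus: no ftp daemon running at all -- nothing to filter, nothing to get wrong

And that ignores ftp’s separate data connections, which need still more rules; not running the service avoids all of it.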

When server applications malfunction, firewall rules are often the first thing turned off. The problem may have nothing whatsoever to do with whatever packet filtering is in place, but sooner or later somebody will flush the rules, out of desperation if nothing else. If that happens to “fix” the problem, the rules may be left modified, accidentally or unavoidably opening up other possible breach points.

Internal servers are more subject to tinkering for performance, to add new features or applications etc. Every change has the possibility of opening up new security problems. A dedicated firewall may never have anything new added to it at all, and if anything is, the security aspects are much more likely to be examined.

The potential for trouble is just too great. Have a separate firewall. Even better, run multiple levels of firewalls: hardware is very cheap today.

A.P. Lawrence provides SCO Unix and Linux consulting services http://www.pcunix.com
