Security is the discipline of using effective protection measures to safeguard important assets from abuse. In other words, "security" is about protecting important things. Protection involves not just mechanisms (such as locks and doors), but also the proper selection and use of those mechanisms.
Properly applied, the various disciplines of information security really come down to risk management that is not fundamentally different from risk management in other situations, such as finance and insurance. The key concepts are:
Value: how important the asset is
Threat: a potential kind of abuse
Risk: likelihood of threat leading to actual abuse
Cost (1): reduction in value of abused asset
Cost (2): amount of resources required to use security measures to protect an asset
Benefit: the value of a security measure
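Even rough, consistently applied guesses support useful arithmetic. A minimal sketch of the cost/benefit calculation implied by these terms (the asset value, likelihoods, and measure cost below are invented for illustration):

```python
# Illustrative risk-management arithmetic. All figures are invented
# examples, not data from the text.

def expected_loss(value, risk):
    """Expected loss: value of the asset times likelihood of abuse."""
    return value * risk

def net_benefit(value, risk_before, risk_after, measure_cost):
    """Benefit of a measure: reduction in expected loss, minus cost (2),
    the resources required to use the measure."""
    reduction = expected_loss(value, risk_before) - expected_loss(value, risk_after)
    return reduction - measure_cost

# Hypothetical asset worth 500,000 whose likelihood of abuse a 20,000
# security measure is estimated to cut from 30% to 5%:
benefit = net_benefit(500_000, 0.30, 0.05, 20_000)
print(round(benefit))  # → 105000
```

Consistent relative guesses are enough: comparing net benefit across candidate measures yields a priority ordering even when the absolute numbers are soft.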
It would be great if these terms (asset, value, threat, risk, cost, benefit) could be used scientifically, but when it comes to information systems, most of them are pretty squishy. Nevertheless, even a "best guess" is remarkably useful. If guesses about relative value and likelihood are consistently applied, then it is usually possible to decide on the priority of potential improvements in information security.
Cost becomes a matter of budget. Most people with authority over funds for security can, if properly informed, make good decisions about how to allocate the budget. In many instances, it is possible to analyze whether the incremental value of a higher budget would be significant.
There are several types of security issues: data security, computer security, system security, communication security, and network security. The term "information security" is often used to encompass all of them and to distinguish them from closely related and important issues (such as physical security, operational security, and personnel security) that do not rely primarily on computing technology.
Threats and Vulnerabilities
Computing is as risky as any other aspect of modern life, and in some sense more so because of the complexity of computing systems. Vulnerabilities exist at every level (network, operating system, middleware, and application) because all software has bugs, administration is error-prone, and users are unreliable.
It is virtually impossible to develop any significant system without some errors in it. We know how to build bridges whose imperfections are tolerable; that is, we can build bridges that do not collapse (if proper engineering methodology is followed), but we cannot build systems and applications that do not crash.
In computing systems, flaws are often bugs: repeatable situations in which the system behaves in an unintended manner. Each bug is also a potential security vulnerability, if it can be used in a way that allows a failure of security: either authorized users exceeding their privileges, or unauthorized users gaining access to systems. Furthermore, the complexities of modern computing systems make them difficult to manage.
Configuration and administrative errors also create security vulnerabilities. It can be difficult to determine whether a system is "properly" configured. For example, to "harden" Windows NT for use on the Internet, Microsoft recommends over a hundred specific configuration changes that effectively turn off many of the features that led people to want to use NT in the first place. Security experts have still further recommendations beyond those described by Microsoft.
Computing, like life, has many threats. But what are the risks? Given the wide range of threats, the sheer number of vulnerabilities, and the ever-increasing number of attackers, the risk is nearly 100 per cent that some incident will occur if information security is not addressed in a systematic manner.
There are many different avenues of attack. Inadequate data security can provide unauthorized users access to sensitive information. Inadequate computer security can result from the use of weak passwords and allow abuse of user accounts. Applications filled with "bugs" can allow unauthorized transactions. Inadequate system security can result from a mis-configured operating system and allow unintended network access. Eavesdropping and password reuse are examples of inadequate communication security which can result in impersonation of individuals. Inadequate network security can lead to unintended Internet access to private systems.
There are many examples of inadequate security, and it is often the on-line consumer who is hurt by the resulting attacks. Companies store information about their customers on corporate servers and networks. Sensitive information, such as credit card and social security numbers and other personal details, is stored on file servers. Any individual with knowledge of networking protocols can capture data flowing over the Internet via unsecured methods.
IT organizations' lack of knowledge has jeopardized the information that corporations are responsible for. The convenience of the Internet and of client/server systems contributes to this problem. If important and sensitive data is permitted to travel unprotected between computers, it is subject to theft and alteration. Sophisticated individuals (or corporations) can capture the data for illegal or malicious purposes.
Security for Internet-connected systems was not designed with dedicated attackers in mind. Most Internet-connected systems were variants of an operating system called Unix, and many of those variants were designed and implemented in, and for, an academic environment. The early attacks were oriented towards gaining privilege that could be abused: spying on sensitive information, maliciously disclosing or destroying information, etc.
As time has gone by, people have become more adept at automating attacks. The results of such automation are programs that do more damage than many of the perpetrators could do on their own: viruses, Trojan horses, etc. The basic vulnerabilities, however, are often the same; what changes is the human ingenuity applied to exploiting them.
Companies and people who are Internet-connected are not immune to the attacks and risks, some of which are described below.
"Finger" is a trivial Unix networking program that conveys information about the status of a user account (e.g. when the user last logged in). The finger "daemon" (or server program) would listen for requests over the network from anywhere. This program, "fingerd", was executed with "root" privilege, for reasons mostly derived from the "kitchen sink" integration of networking with the operating system (OS).
The software had a common bug: unexpectedly long messages could overfill the message buffers in the code and cause execution errors. In particular, the error in execution allowed a careful attacker to cause "fingerd" to execute any command with full administrative privilege. This bug and similar ones remain useful for attacking network applications of all kinds: buffer overflow attacks are still very common, and the range of potentially vulnerable server software gets wider all the time.
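The underlying bug class is a missing length check: a C daemon copies a network message into a fixed-size buffer without verifying that it fits. A sketch of the discipline whose absence made fingerd exploitable, in Python for brevity (the 512-byte limit is an assumption for illustration, not fingerd's actual buffer size):

```python
# Sketch of input length checking. MAX_REQUEST is an invented limit.
MAX_REQUEST = 512  # fixed-size buffer, as in a C network daemon

def read_request(data: bytes) -> bytes:
    """Reject, rather than silently overrun, oversized input."""
    if len(data) > MAX_REQUEST:
        raise ValueError("request too long: %d > %d" % (len(data), MAX_REQUEST))
    return data

print(read_request(b"alice\r\n"))   # a normal finger request is accepted
try:
    read_request(b"A" * 600)        # overflow-sized input is refused
except ValueError as exc:
    print("rejected:", exc)
```

In the vulnerable daemon there was no such check, so the 600 bytes overwrote adjacent memory, including the data that controlled what code ran next.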
Sendmail is an example of a program that is too valuable to turn off, yet too dangerous to expose to the Internet. The Morris worm was a particularly interesting case (aside from the fact that it crashed pretty much the entire Internet by accident) because it used not a bug, but a feature of sendmail.
The "debug mode" feature gave anybody who asked the ability to do pretty much anything on the host machine. This ability was a necessary side effect of being able to probe the sendmail program during execution in order to find out why some of sendmail's notoriously complex behavior was misbehaving. The necessity of this side effect was, again, related to the need for the sendmail server program to run with administrative privilege. While debug mode was no longer viewed as a good idea, few had disabled it, and many were hit by the Morris worm. The worm used debug mode to copy itself to another computer, and then to copy itself again and again, until it infested a great number of computers on the Internet.
The Morris worm turned out to be a blessing in disguise. It caused people to close off a very dangerous vulnerability before someone trying to cause serious and unrecoverable damage exploited it.
Enterprise client/server applications have application protocols of their own, and many operate beyond the boundaries of a traditional enterprise network (extranet features and Internet usage). Leaving aside a large number of potential security problems (from lowly password management on up), protocol implementations have bugs that can leave applications vulnerable.
To see how important applications are on the Internet (and vice versa), one only has to listen to Microsoft's anti-antitrust mantra ("the OS isn't the platform, the Internet is the platform") and to watch the scramble to embed applications into the OS, creating more unnecessary complexity and, with it, more vulnerabilities.
Application security consists of the features of an application that authenticate users, control their access, and audit (log) their actions. Each feature exists and works well, but each also has challenges. For authentication, the typical problem is too many user/password databases to manage and too many users with multiple passwords. For access control, there are simply too many things to be controlled, each with its own access rule (or access control list, ACL).
For audit, too many applications produce different kinds of log data that are practically impossible to analyze and correlate. In other words, the main challenges are in security management, where complexity creates significant practical problems and generates a different kind of risk: misconfigured applications can create security vulnerabilities.
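These three features can be sketched together. The users, password handling, and resources below are invented for illustration (a real application would use a salted key-derivation function, not a bare hash):

```python
import hashlib
import hmac

# Invented example data: one user, one protected resource.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
ACL = {"payments-db": {"alice"}}   # resource -> set of permitted users
AUDIT_LOG = []                     # every decision is logged

def authenticate(user, password):
    """Check credentials; log the attempt either way."""
    digest = hashlib.sha256(password).hexdigest()
    ok = user in USERS and hmac.compare_digest(USERS[user], digest)
    AUDIT_LOG.append(("login", user, ok))
    return ok

def authorize(user, resource):
    """Check the resource's ACL; log the decision."""
    ok = user in ACL.get(resource, set())
    AUDIT_LOG.append(("access", user, resource, ok))
    return ok

print(authenticate("alice", b"correct horse"))  # → True
print(authorize("mallory", "payments-db"))      # → False
```

Even in this toy version the management problems described above are visible: the user database, the ACL, and the audit log all grow per-application, and correlating many such logs is the hard part.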
Most recently, the news media picked up on a string of stories about theft of credit card numbers from e-commerce sites. In many cases, the vulnerability stems from mismanagement of the SQL server storing the payments database: the administrator account is left unsecured.
"Trojan horse" is a term used to describe a malicious program that users are tricked into executing. The term comes from the Greek legend of the Trojan War, in which the Achaeans tricked the Trojans into bringing inside their walls a large wooden horse in which Achaean warriors were concealed.
Probably the most common Trojan technique is sending an email attachment that is an executable file, which installs and/or executes some malicious software. Although many mail programs try to help people be careful about opening these "e-mail bombs," it still happens. Recent reports indicate that in some unlucky enterprises, as many as a quarter of workstations have been "trojaned" with a program called netbus.
Hackers are present on the net. For example, a user who was logged onto the Internet visited some IRC chat rooms frequented by hackers, and noted that his workstation was probed for the presence of netbus as soon as he entered the chat room. There are bad neighborhoods on the 'net, just as in the real world!
Perhaps better known than netbus is Back Orifice (the recent release is often referred to as BO2K) by the Cult of the Dead Cow. Like netbus, BO2K allows the host system to be remotely controlled over the network: any informed person can get a trojaned workstation to do anything it is asked to do. BO2K achieved some notoriety when the Cult of the Dead Cow presented it as a remote management and debugging tool. In fact, BO2K is reputedly pretty useful, and it is not fundamentally different in technique from "legitimate" products like PC Anywhere.
Perhaps the most ingenious Trojan horse was a freeware e-mail tool that really was a fully functional and quite popular program that thousands of people used daily. In addition to some very carefully thought out and well-implemented features, it also had some hidden features that allowed one's e-mail to be obtained by others without one's knowledge.
The main lesson of Trojan horses is simply that software should be untrusted by default and used only if obtained through legitimate channels. In corporate environments, this is most often addressed by security policies under which installation of programs is a privilege reserved for systems support staff, supported by security mechanisms designed to keep users out of situations in which they might forget their security awareness training and accidentally install software on their own.
A virus is a type of malicious software that takes advantage of a fundamental weakness of pre-NT Windows systems: there was, in effect, no operating system. That is, application programs had free rein of the system and were on the honor system not to do things like tamper with the file system, the operating system software, etc.
A virus does just that. When a virus-laden program is executed, the virus copies itself around the system so that even if the original program is deleted, the virus is still around. Further, it can copy itself so that any time the infected PC interacts with the outside world (e.g. copying files via floppy disk), it goes along for the ride.
Originally, viruses operated only on programs and propagated through the sharing of software. Before long, virus writers expanded their bag of tricks as part of an arms race in the anti-virus battle. Several clever and subtle self-copying techniques were invented, along with a never-ending series of schemes to hide the code. Virus writers' jobs were made much easier when data files started to contain a form of executable code called macros. From then on, virus propagation required only the kind of file sharing that happens all the time in work groups.
And of course, besides propagating themselves, viruses sometimes did malicious things like delete data.
Security Measures

The measures span all the areas of information security. At the network level, networks must be segmented from other networks; the most notable example is segmenting an enterprise network from the Internet using router filtering or firewalls. Communication of sensitive information over open networks (such as the Internet) often requires communication security services based on encryption techniques. For systems that communicate over open networks, rigorous system security is necessary to avoid vulnerability to network-based attacks. Both operating system and application security features must be properly configured to protect critical data, and these features must be used properly by end-users, including password management, virus checking, etc.
Data security measures include the encryption of data and key management. Computer security measures include authentication and access control lists. Application security measures include distributed authentication, directories, and authorizations. System security measures include application-specific lockdown of dedicated servers, anti-virus protection, and intrusion detection. Communication security measures include cryptographic protocols, key management, and the use of a public key infrastructure. Network security measures include network segmentation, firewalls, packet filters, and intrusion detection.
Each of these kinds of measures has its limits. In addition to examining security techniques (and how to use them as effective security measures), attention must be paid to those limits. Then security measures can be used effectively, in a way that makes sense in terms of budget and of risk management.
A security program is a business function that balances technology management, risk management, technology operation, and budget. In the real world, an organization has a finite budget to spend on security, and an obligation to spend it (both on continuing operations and on new acquisitions) in a way that is cost-effective. The best metric of effectiveness is risk reduction.
Running a strong security program is not easy because it depends on well-articulated security requirements and goals, a reasonable approach to analyzing risk, and hard-nosed analysis of cost and benefit. It also requires top-level management support to provide both budget and incentive for compliance from the full range of people: from end-users to technology management and support staff.
The people who run a security program must:

- formulate requirements
- weigh requirements and formulate policy
- plan and execute implementation

In practice, they:

- often disagree on details of exactly what to do
- will eventually make mistakes
In dealing with these realities, many organizations fly by the seat of the pants instead of taking a structured approach.
Effective security requires a security policy and an implementation plan that controls and coordinates the acquisition and use of security technology. Every change should be policy-driven, deliberate, and justified by a quantifiable improvement in the security posture. The alternative is to "fly by the seat of the pants" and hope that someone occasionally thinks about worst cases, costs, and the likelihood of a security "issue". Technology is precise; people are not.
Risk management is the core of any security program. If a company is not prepared to assess risks and base its actions on the results, then any kind of security program (other than seat-of-the-pants with worst-case checking) is probably not going to be rewarding.
Risk management is the way that a company gets information about priorities, values, costs, and benefits: all the things it needs to make informed choices about what security tools and techniques to use.
A different approach is to follow best practices: buy and use (to some extent) the products and services that others do, and hope that your intuitive sense of priorities helps you spend the budget reasonably well. This is actually preferable to trying to run a real security program in a requirements vacuum, but less preferable than filling that vacuum.
Security policy embodies both information about how costs and benefits should be considered, and information about how to enforce the priorities that result from cost/benefit analysis. A security policy states basic goals, and also elaborates them in the following three ways:
- Roles and responsibilities for management, IT, IT security, end users, etc.
- Issue-specific dos and don'ts on potentially dozens of issues
- Practices and procedures for operational staff with responsibility for IT and IT security
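One way to make such a policy enforceable is to capture it as data rather than prose, so that compliance can be checked mechanically. A sketch under that assumption (the roles, rules, and check below are invented examples, not a standard policy format):

```python
# Invented example of a security policy expressed as data.
POLICY = {
    "roles": {
        "end_user": ["report incidents", "follow password rules"],
        "it_staff": ["apply patches", "review firewall rules"],
    },
    "rules": {
        "passwords_expire_days": 90,
        "software_install_restricted": True,
    },
}

def check_compliance(observed):
    """Compare an observed configuration against the policy rules."""
    violations = []
    for rule, required in POLICY["rules"].items():
        if observed.get(rule) != required:
            violations.append(rule)
    return violations

# A host whose passwords never expire violates one rule:
print(check_compliance({"passwords_expire_days": 0,
                        "software_install_restricted": True}))
# → ['passwords_expire_days']
```

A compliance audit of the kind described next is then a matter of gathering the observed configurations and running the check, rather than re-reading prose.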
Security policy also maintains the value of the security program. A good policy is itself dynamic, with a well-defined and managed policy review process. The review process ensures that all bases are covered to an extent that was consciously chosen. In addition to policy reviews, compliance audits check that the required security measures are actually in place and are being used effectively.
All these aspects of security are required for the execution of a security program. Without a policy, a security program may or may not be accomplishing anything useful. It is the old GIGO rule: you may have people responsible for security, but unless there are stated goals and well-defined processes to achieve them, it is Garbage In, Garbage Out. Or, to be more precise, a company might be getting some value out of its security measures, but it would not have any way of knowing it.
Perimeter and Policy
Defining a network security perimeter is not always easy. Defining a policy is rarely easy. Implementing a policy is hard. If a policy is correctly implemented, then the only network communications that cross the perimeter are those that should be allowed. Then something changes: new hosts are added, the network topology changes, new applications are installed. Each of these can affect the implementation of the policy, making the implementation incorrect. And, of course, policies themselves also change.
If an organization is functioning well, then the implementation is audited for correctness, and rigorous change management is in place. Even then, and even if the policy is correctly implemented, there are security vulnerabilities (for example, a bug in the application software or a problem in the system administration) that allow systems to be exploited.
To construct a policy, the assumption should be: anything that is not explicitly permitted must be denied. This is a simple concept; the default is to "just say no." Unfortunately, most systems (from network components to operating systems to the most recent applications) are not built that way. They are built to provide service, and this takes priority over the ability to constrain how service is provided.
Perhaps the simplest instance of this rule can be seen in packet filtering, an essentially simple operation. Each packet of data passing through a filter is examined as it comes in. Unless there is a rule that says it should be sent on, the packet is dropped. Yet even this simple function is governed by configuration items that are subject to human error (and, alas, not infrequently) as the rules are updated to account for changes in the network environment.
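A default-deny packet filter can be sketched in a few lines; the rule table is an invented example (permitting only web and mail traffic):

```python
# Default-deny packet filter sketch. Rules are invented examples.
RULES = {
    ("tcp", 80),   # HTTP to the public web server
    ("tcp", 25),   # SMTP to the mail gateway
}

def filter_packet(protocol, dst_port):
    """Forward a packet only if a rule explicitly permits it."""
    return (protocol, dst_port) in RULES

print(filter_packet("tcp", 80))    # → True  (explicitly permitted)
print(filter_packet("tcp", 23))    # → False (telnet: no rule, denied)
print(filter_packet("udp", 2049))  # → False (NFS: no rule, denied)
```

The security-relevant property is the absent "allow everything else" branch: anything unmatched is dropped, so a mistake in the rule table tends to deny legitimate traffic (which gets noticed and reported) rather than silently admit attacks.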
In other words, security implementation is never easy. Lack of policy means that even when you think you have a correct implementation, you do not have a way of checking. It is the same as with any kind of engineering: if you do not have blueprints or requirements, you cannot know for sure when you are done. That is why, although defining policy can be real work, it is important work that must be done in order to ensure the value of using security measures.
Just as policy decisions are needed for network perimeter security, similar decisions are needed for extranets, intranets, system security measures, communication security measures, and so on.
Implementing a security program involves making choices about using security measures. There are always tradeoffs, and it is possible to bite off more than you can chew. A security measure is only as good as the use made of it. A good example is intrusion detection products that are deployed but barely used, in the sense that rarely does anyone examine the logs to determine whether a serious incident may have occurred. Similarly, a firewall is worse than useless if improperly configured, because of the false sense of security it provides.
On the other hand, a properly used firewall is a good tradeoff. For example, most firewalls will block some kinds of remote login functions of operating systems (e.g. implementations of "telnet"). They may or may not provide a more secure remote access mechanism, but they definitely block attempts from outside to telnet to inside computers. There may be hundreds of inside computers on which telnet would otherwise have to be disabled and frequently audited. But with a simple firewall rule against telnet, it becomes much less critical to ensure that telnet is disabled everywhere.
The same is true of services like the network file system (NFS) that are useful within enterprises but much too dangerous (because of protocol-level vulnerabilities) to share with others over the Internet. By blocking NFS traffic from the Internet, internal systems are free to use NFS without having to ensure that every system rejects NFS communication from the outside.
But every measure, even good tradeoffs like these in which modest effort saves a great deal of effort that would otherwise be required, is part of a complex system in which every change can have unexpected side effects. For example, it is easy to block NFS by blocking all Internet-based traffic using UDP (the transport protocol underlying NFS). This was once typical because of the security issues common to all UDP-based protocols. However, some UDP-based protocols are permitted, especially ones with relatively well-defined (or tunable) port usage. Therefore, it may be acceptable to allow UDP packets, for example, on the port used by RealAudio.
But suppose that a system has the NFS service turned on with unusual port usage. If that port usage includes the ports typical of RealAudio, then NFS may accidentally be shared with the Internet, and attackers might be able to attack all files that are shared over the corporate network. Sound farfetched? Recall buffer overflow attacks: RealAudio software was recently discovered to have a buffer overflow bug that was demonstrated (together with other common vulnerabilities) to allow attackers to gain control of the target system and turn on and/or reconfigure network services. Among those network services is, of course, NFS.
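The scenario can be made concrete by extending the packet-filtering idea. The port numbers are illustrative assumptions (a UDP range opened for the streaming client, with 2049 as NFS's customary port):

```python
# Invented example: a UDP port range opened for a streaming application.
ALLOWED_UDP_PORTS = range(6970, 7171)

def udp_permitted(dst_port):
    """The filter sees only ports, not which service answers them."""
    return dst_port in ALLOWED_UDP_PORTS

print(udp_permitted(2049))  # → False: NFS on its usual port stays blocked
print(udp_permitted(6970))  # → True: intended streaming traffic

# But if an NFS server is (mis)configured to listen on 6970, the same
# rule now exposes it to the Internet:
misconfigured_nfs_port = 6970
print(udp_permitted(misconfigured_nfs_port))  # → True: NFS leaks through
```

The filter is behaving exactly as configured; the vulnerability comes from the gap between what the rule was meant to permit and what actually listens on the permitted ports.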
This shows how a change to allow a new kind of application communication (RealAudio) also opened up a new vulnerability (buffer overflow) that allowed the new communication path to be exploited (NFS over the Internet).
Corporations must make tradeoffs between the usefulness and the security of technologies. To do this, they must make judgements about what risk is acceptable, and implement a default policy that denies everything unless it is explicitly permitted. This is a simple and critical concept, but it is not always easy to implement.
It should be clear by now that a security program is the set of people and activities in which knowledge of both needs and solutions comes together. An organization can decide what to do, assess its value, and monitor to ensure that the expected value is delivered. The only question is whether people in the organization will make the commitment to positive change, and have the will to follow through.
Security issues include technical issues, business issues, cost/benefit issues and budget issues. Policy and process should stay on target and provide the ability to assess whether the expected value is delivered.
Companies can live without a security program, but at some point concern over worst cases will dictate some kind of organized attention to security. In most organizations of any size, the "concern" part is well underway. The questions are about how to tackle the concerns constructively and when to start committing effort within the organization.