Means of Security
Creating secure software, and secure infrastructure around it, can prove intimidating. Reports of security breaches and substantial design flaws in frequently-used software have become so commonplace that merely tracking them all is a challenge. It is absolutely critical that designers and implementers have a firm understanding of the fundamental principles by which security is accomplished -- as well as their inherent weaknesses.
Information and the technology it is stored with do not exist in a vacuum. Protecting a computer system or a network first involves exerting significant control over the environment in which it exists.
It would be pointless to worry about the virtual security of a system if an attacker has a viable physical path to it. Any virtual lock can be circumvented by hardware control; physical access is the ultimate manual override. Operating system security can be defeated by booting from external devices such as removable media. BIOS restrictions on booting from alternative devices can be bypassed by shorting the correct motherboard pins, removing the battery that backs the settings memory, or simply swapping in a different hard disk. Even hardwired security chips can be rewired, replaced, or analyzed in place for flaws.
Most physical security mechanisms are reasonably obvious: isolated rooms, gates, locks, cameras, safes, and guards are all common. The primary issue with these measures is not that people fail to understand them, but that they judge them unnecessary for their purposes. Genuine paranoia aside, "unnecessary" most often means "too expensive". Furthermore, security checks interfere with normal work and aggravate people with a legitimate need for access. Together these pressures make it likely that an organization will fail to fully prepare for a physical breach.
No amount of physical security suffices against overwhelming force or an insider with thorough knowledge of the system's weaknesses, however. Addressing threats requires a proper assessment of the likelihood of various intruders and their methods. Many organizations feel that their physical security is "sound enough", at least against external threats. That conclusion should be re-examined on a frequent basis, and only demonstrated facts ought to be admitted as evidence.
Many of the most obvious ways to attack computer systems involve leveraging an advantage in computational power or in sheer numbers. One example is the brute force attack, which tests every possible password or key. For short passwords, it is feasible with commodity hardware to check every last combination. The essential defense is naturally to use longer keys: since the number of possibilities rises exponentially with the length of the key, enforcing sufficient length keeps the system well ahead of the attacker's ability to guess at random.
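The exponential growth is easy to see with a little arithmetic. The sketch below estimates the worst-case time to exhaust a password space; the guess rate is an assumption chosen for illustration, not a benchmark of any particular hardware.

```python
# Rough estimate of exhaustive-search time as password length grows.
ALPHABET = 94                  # printable ASCII characters
GUESSES_PER_SECOND = 10**10    # an assumed, generous guessing rate

def seconds_to_exhaust(length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return ALPHABET ** length / GUESSES_PER_SECOND

for n in (6, 8, 12, 16):
    print(f"{n:2d} chars: {seconds_to_exhaust(n):.3e} seconds")
```

A six-character password falls in about a minute at this rate, while each added character multiplies the cost by 94; by twelve characters the search takes geological time.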
Another example in the same strain is the denial of service (DoS) attack, especially a distributed denial of service. The assailant makes use of their ability to issue a disproportionately large number of requests compared to the background activity to temporarily consume all of a server's resources. Countermeasures involve eliminating the attack's numerical superiority, either by identifying and pre-emptively blocking the vast majority of the malicious requests or by reinforcing the host with sufficient hardware to handle the onslaught with resources to spare.
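One common way to strip an attacker of numerical superiority is per-source rate limiting. The token-bucket sketch below is a minimal illustration (the class and parameter names are mine, not from any particular library): each client earns request tokens at a steady rate and may burst only up to the bucket's capacity.

```python
import time

class TokenBucket:
    """Allow a steady request rate per client while absorbing short bursts."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over budget: drop or delay the request
```

A server would keep one bucket per client address; sources that exceed their budget are throttled before they can monopolize shared resources.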
A more direct form of denial of service is possible when users can run arbitrary software on a system, whether through local or remote access. Most systems allow a program to consume arbitrary amounts of CPU, RAM, I/O, or other resources by default. Trivially small applications can bring the system to a standstill under such conditions. There are even specialized attacks, such as the fork bomb, which can make it very difficult to restore the system state without a full reset. All of these issues are mitigated with user and/or application quotas. However, be wary of allowing the automatic or semi-automatic creation of too many user accounts, as the same attacks can be applied in a distributed manner.
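On POSIX systems, per-process quotas of this kind can be applied through resource limits. The sketch below uses Python's standard `resource` module; the function name and the particular limits chosen are illustrative assumptions, and real deployments would also bound memory, processes, and I/O.

```python
import resource

def cap_resources(max_cpu_seconds: int, max_open_files: int) -> None:
    """Apply per-process quotas before running untrusted code (POSIX only)."""
    # Exceeding the soft CPU limit raises SIGXCPU; the hard limit is absolute.
    resource.setrlimit(resource.RLIMIT_CPU,
                       (max_cpu_seconds, max_cpu_seconds))
    # Bound the number of file descriptors the process may hold open.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE,
                       (min(max_open_files, hard), hard))
```

A supervising process would call this (or set equivalent limits via `ulimit` or cgroups) before handing control to an untrusted program, so that a runaway loop or fork bomb exhausts only its own allowance.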
Deception operates by exploiting the target's preconceptions, lack of patience, and inability to calmly and thoroughly distinguish fact from fiction. Social engineering takes specific forms such as "phishing" or trojan horse executables. In the vast majority of cases, these attacks succeed not because of a particular flaw in certain software but because of misplaced trust or ignorance on the part of the user.
Although there are many unique cases, phishing attempts are commonly sent through e-mail, instant messaging, and private messaging services. The message itself is usually harmless; rather, it is what the user does with the contents that matters. Normally, the goal of the attacker is to lead the user to a pre-established website under their control. Often that website has been designed to mirror some popular organization's real site as closely as feasible. Those taking the bait will enter useful information such as passwords or credit card numbers into the false site, which the phisher can then turn around and use to impersonate the user.
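The mirror-site trick often hinges on hostnames that merely resemble the real one. A minimal sketch of the defensive check, using hypothetical domains (`example-bank.com` and the attacker's `.test` lookalike are inventions for illustration):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of the organization's genuine hostnames.
TRUSTED_HOSTS = {"example-bank.com", "www.example-bank.com"}

def is_trusted(url: str) -> bool:
    """Accept only exact hostname matches; resemblance is not enough."""
    return urlsplit(url).hostname in TRUSTED_HOSTS

print(is_trusted("https://example-bank.com/login"))            # True
# The real hostname here is the attacker's domain, not the bank's.
print(is_trusted("https://example-bank.com.evil.test/login"))  # False
```

Browsers and mail filters perform far more elaborate versions of this comparison; the point is that the trailing components of the hostname, not its familiar-looking prefix, determine who actually receives the data.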
Much as in the Greek myth, a trojan horse is a harmless-looking object presented as a free gift or favor. Typical 'trojans' are standalone executables distributed over the internet, purporting to have some legitimate function. However, they contain malicious code that, for example, installs spyware. As the modifications normally occur in the background, many users never notice the changes.
Protection against these forms of invasion involves establishing, as a matter of protocol, mechanisms that can accurately identify organizations and code. In the modern era this is typically done with cryptographic signing.
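Real code signing uses asymmetric signatures, but the verify-before-trust idea can be sketched with a symmetric authentication tag from Python's standard library. The shared key here is a placeholder; in practice keys come from a secure store, and distribution of the key is itself the hard problem.

```python
import hmac
import hashlib

SECRET = b"shared-secret-key"  # illustrative only; never hard-code real keys

def sign(message: bytes) -> bytes:
    """Produce an authentication tag bound to both the message and the key."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(message), tag)
```

A downloaded update whose contents have been altered in transit will no longer match its tag, so the tampering is detected before the code is ever run.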
Deception may also be wielded defensively. In the context of software services, a classic example is the use of a "honeypot": a fake service or system whose only purpose is to draw and identify would-be crackers. In very convincing cases, the honeypot may look and feel exactly like the real production system. However, it contains no data or resources of significant value, making it expendable even if wholly compromised.
This sort of deception improves security in several ways. Primarily, it draws the attention, energy, and time of intruders away from valuable targets. Some feeble attacks may never succeed even at gaining access to the honeypot. Others will gain access, but then fail to realize that they have cracked a false system.
There is further meaning behind this technique. Honeypots provide the capacity to study the enemy's behavior. Information gathered about the methods and sources of attacks proves quite useful in strengthening existing defenses. It is one of the few passive means by which it is possible to gather information about new and unknown exploits. Indeed, security audits are never complete until you have given the opposition a fair chance to break in.
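A honeypot need not be elaborate to gather such information. The sketch below is a deliberately minimal decoy listener of my own construction: it binds a port, presents a plausible-looking banner, and records the source of every probe while holding nothing of value.

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy port and log who connects; serve no real data."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))   # port 0 lets the OS pick a free port
    srv.listen(5)

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr)                     # record the probe's source
            conn.sendall(b"220 FTP ready\r\n")   # plausible-looking banner
            conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1], log
```

Production honeypots emulate whole services and file systems, but even a stub like this yields the addresses, timing, and first commands of whoever comes knocking.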
Many other commonly deployed deceptive measures are similar in nature: setting up a fake login screen or prompt in an obvious location, with the real one somewhere else entirely; renaming the root or administrative account and creating a new, minimally privileged account in its place; installing a commonly attacked piece of software as a decoy, locking down its privileges, and using a simpler, more secure package to provide the real functionality.
So we come to secrecy, quite probably the most frequently used -- and most frequently abused -- means of security. Secrecy, whether in computing or elsewhere, is ultimately about one's capacity to hide. To use secrecy is to put some non-trivial level of confidence in the belief that it is possible to hide the object or information to be protected in such a way that it is very unlikely to be discovered within any time frame in which it would be useful.
The reliance on secrecy permeates daily life, but its effectiveness is dubious. How many of us write down or print out our passwords and keys, assuming no one interested will walk in? How many throw away important documents without shredding or burning them? How many put spare keys under a mat, in a hidden crevice, or obscured by a plant? How many leave wireless routers open with poor or default access credentials? How many provide unique, valuable data on request even when the inquirer has neither demonstrated their identity nor explained why they must have the information?
Even protocols widely perceived to be secure rely on secrecy to function. The entire field of cryptography is founded on the basis of secrets; for every encrypted document that can be read there is a corresponding key. Symmetric cryptographic schemes use the same key for encryption and decryption. Should the key be discovered, the message will be decoded. Ultimately, this means that the key cannot be transported along the same channel as the message itself. After all, if the communications channel was already secure, why bother to encrypt the message?
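The symmetry can be illustrated with the simplest cipher of all, the one-time pad: XOR is its own inverse, so the very same key both encrypts and decrypts. This is a toy, not a usable scheme -- a real one-time pad demands a truly random key as long as the message, used exactly once, and it still leaves the key-transport problem described above untouched.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is self-inverse: applying the same key twice restores the data.
    return bytes(k ^ d for k, d in zip(key, data))

key = secrets.token_bytes(32)     # must be at least as long as the message
msg = b"meet at dawn"
ciphertext = xor_cipher(key, msg) # encrypt with the key...
plaintext = xor_cipher(key, ciphertext)  # ...and decrypt with the same key
```

Whoever holds the key reads every message it protects, which is exactly why the key cannot travel over the same insecure channel as the ciphertext.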
Cryptographers in the 1970s came up with a clever way around this issue; they named it public-key cryptography. Rather than one key, two keys are used -- a private key and a public key. The public key is used to encrypt the message, whereas the private key decrypts it. The security of the scheme relies on keeping the private key unknown. Notably, a single key pair only provides one-way confidential communication; two separate key pairs, one per correspondent, must exist to establish a bidirectional channel. When a message is to be sent, one party uses the other's public key to encrypt it. The ciphertext then travels over the insecure channel, and on receipt the recipient decrypts it in private using the corresponding private key. With this scheme, it is possible to avoid ever revealing the private keys themselves even when only insecure means of communication are available. This is the wonder of asymmetric cryptography.
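A textbook RSA key pair makes the mechanics concrete. The primes below are deliberately tiny (a classic teaching example) and the construction omits the padding that real RSA requires -- this is an illustration of the public/private split, never something to deploy.

```python
# Textbook RSA with toy primes -- for illustration only, trivially breakable.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120; must stay secret along with p and q
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

message = 65
ciphertext = pow(message, e, n)    # anyone may encrypt using (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
```

Publishing (e, n) lets anyone send a confidential message, while decryption requires d, which never leaves its owner.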
One might try to reveal encrypted communications by brute force. Computers, however, are good at working with very large numbers -- keys can easily be thousands of bits long. A kilobit (1024-bit) key has 2^1024 possible values; this is over 10^300. Comparing the number of elementary particles in the universe to that number is holding a candle up to the sun. Even if you incorporated every electron in the universe into one massive quantum computer that could test 10^250 possibilities every second, it would still take over 10^40 years to search the entire 1024-bit keyspace. For context, the age of the universe is roughly fourteen billion years -- on the order of 10^10. Yes, you read that right. Using the entire universe's particles and giving each an average computational throughput that no doubt violates the laws of physics, it would still take far longer than the age of the universe to uncover a 1024-bit key by brute force. Thus, in some instances it is simply not possible to muster the superior force necessary for a direct assault.
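Python's arbitrary-precision integers make these magnitudes easy to check directly; the sketch below reproduces the arithmetic behind the claims, with the fanciful universe-computer's rate as its stated assumption.

```python
keyspace = 2 ** 1024                  # possible values of a 1024-bit key
assert keyspace > 10 ** 300           # indeed over 10^300

guesses_per_second = 10 ** 250        # the assumed universe-sized computer
seconds = keyspace // guesses_per_second
years = seconds // (60 * 60 * 24 * 365)
print(len(str(years)))                # number of digits in the year count
```

Even at that impossible rate, the full search takes well over 10^40 years -- the point being that exhaustive search is not where real keys fail.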
However, there are much more efficient algorithms for searching the keyspace than attempting every possibility. This is a result of the fact that cryptographic algorithms -- ciphers -- have weaknesses. Remember how asymmetric cryptography uses two keys, one of which is publicly known? How is it possible to decrypt with the private key the information encrypted with the public key unless there is a derivable relationship between the two? Although the exact correspondence differs with algorithm, all asymmetric ciphers rely on computationally difficult problems. In the widespread cipher RSA, that problem is factoring very large integers. Specifically, while it is trivially easy to multiply two given prime numbers (creating a semi-prime product), it is rather difficult to factor a large semi-prime without any special information about it. However, the best factoring algorithms known are still dramatically more efficient than brute force. Furthermore, the RSA algorithm is not perfect -- it has additional weaknesses that permit further performance gains. As a result, that same 1024-bit RSA key which should take many trillions of years to crack in reality can be broken in reasonably short time by a very determined, very talented group with a great deal of computing power at their disposal.
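The dependence on factoring is easy to demonstrate on the toy scale. Naive trial division (sketched below; real attacks use far better algorithms such as the general number field sieve) recovers the private exponent from nothing but the public key, using the same classic small-prime example of n = 3233, e = 17:

```python
def factor(n: int) -> tuple:
    """Naive trial division -- enough to break a toy modulus instantly."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("no nontrivial factor: n is prime")

n, e = 3233, 17                   # the attacker knows only the public key
p, q = factor(n)                  # factoring the modulus...
d = pow(e, -1, (p - 1) * (q - 1)) # ...yields the private exponent
```

For a genuine 1024-bit modulus, trial division is hopeless, but the sub-brute-force algorithms alluded to above shrink the effective work enormously -- which is why key sizes must track the state of the art in factoring, not the size of the raw keyspace.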
Thus, no matter how sound the principles are mathematically, no cryptographic scheme is secure indefinitely. One might try to reduce the theoretical risk of breaches by changing keys on a regular basis, limiting the amount of information protected by a single key and thus making it a less likely target of attack. Counter-intuitively, this may make loss of confidentiality more probable.
The greatest weakness in cryptographic systems lies not in the strength of the keys or the ciphers. It is found in the fact that fallible human beings protect the secrets. People cannot remember keys consisting of thousands of bits; inevitably they record them somewhere. These records are a systemic vulnerability that cannot be avoided. To avoid widening this hole even further, secret keys should never be kept on systems directly connected to public networks, never stored in plain text, never given to others, and only used for genuinely important communications. These rules and others are casually broken on an all too frequent basis.
One key is difficult to protect. A hundred independent keys are much worse. The needless multiplication of entities complicates any cryptographic scheme and should worry any observer. Human error in the management of many keys is far more likely than in that of few. And this assumes that one does not make the foolish mistake of encoding and storing all the keys in the very same place under the very same seal. In that case, the keys are not truly independent, and if any one is discovered, all are discovered.
Use as few keys as sensible. Protect each key by an independent token or lock. Disclose keys to as few people as feasible. Remember that three may keep a secret, if two of them are dead.
No discussion of security is complete without mentioning careful attitudes, attention to detail, and a steady, watchful eye. Indeed, no manner or method of protection can succeed without vigilance. Force cannot be deployed against an unseen enemy. The careless will not realize that their own deceptions have failed -- or that the enemy has deceived them in turn. Uncovered secrets will be thought safe and sound until it is too late.
It is surprisingly difficult to remain alert at all times. Therefore, it is essential that there be more than one watcher. Preferably, each pair of eyes should see from a different approach and scan with a different methodology. This minimizes the risk of blind spots.
Automated eyes make for useful assistants, but do not lean on them. The bypass of cameras and alarms was mentioned earlier, but it is equally possible to circumvent logs, anti-virus, and network scanners. When a compromise is in progress, you should assume the cracker's first goals will be to disable any systems that might track or identify them. Logs will be erased or overwritten with fabrications. Anti-virus and other periodic scanners will be disabled, or more insidiously replaced with dysfunctional variants.
If an attacker ever gains unrestricted access to a machine, it should be assumed wholly tainted. With full administrative access it is possible to install a rootkit that not only evades detection itself, but actively forges data to make it appear that nothing is wrong. No rootkit has perfect stealth, and the activity of the hardware itself cannot be hidden, but it is dangerous to underestimate what clever foes will do with full control of a machine. Whenever in doubt, examine the software and data from an independent and isolated environment which is known good -- for example, by starting the machine from a read-only image of a secure OS. Only then can you trust your eyes. Or, if there is nothing of value to be saved on the compromised machine, erase the entirety of the software and data stored on it and start from scratch.
Take care not to rely too deeply on vigilance to remain secure. It is purely a means of identifying and dealing with threats as they occur. The best threat, however, is the one that is never noticed because it failed so thoroughly.