Security by obscurity
Security by obscurity is a controversial principle, and widely considered bad practice, in security engineering. It relies on secrecy to provide security: a system is (wrongly) regarded as secure, even though it has theoretical or actual security vulnerabilities, because its owners or designers believe that the flaws are not known and that attackers are unlikely to find them. Typically they believe they have ensured this by keeping the design of the system secret.
The principle of security by obscurity has been demonstrated to be flawed over and over again. This is because:
- secrets are hard to keep
- the people you trust with the secrets may actually be the people performing the attacks
- security flaws can be found without access to the secret design, as the sketch after this list shows
- the secret design can, in any case, be reverse engineered; merely making reverse engineering illegal is useless, because determined attackers are prepared to break the law
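The third point is easy to demonstrate. As a minimal, hypothetical sketch (the "proprietary" single-byte-XOR cipher, its key, and the scoring heuristic below are all invented for illustration), a ciphertext-only attacker can defeat a secret obfuscation scheme by brute force, without ever seeing its design:

```python
# A toy "proprietary" cipher: XOR every byte with one secret byte.
# Its design is meant to stay secret, but an attacker who only observes
# ciphertext can recover the key by brute force plus a crude text score.

def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    # XOR is its own inverse, so this function also decrypts.
    return bytes(b ^ key for b in plaintext)

def crack(ciphertext: bytes) -> tuple[int, bytes]:
    def score(candidate: bytes) -> float:
        # Fraction of bytes that look like lowercase English text.
        good = set(range(ord("a"), ord("z") + 1)) | {ord(" ")}
        return sum(c in good for c in candidate) / len(candidate)
    # Try all 256 keys and keep the most plausible decryption.
    best = max(range(256), key=lambda k: score(toy_encrypt(ciphertext, k)))
    return best, toy_encrypt(ciphertext, best)

ciphertext = toy_encrypt(b"attack at dawn", key=0x5A)
recovered_key, plaintext = crack(ciphertext)
print(hex(recovered_key), plaintext)  # 0x5a b'attack at dawn'
```

The weakness is structural, so keeping the implementation secret buys its operator nothing; the attacker simply tries every possibility and keeps whatever looks like plaintext.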
In addition, security by obscurity prevents peer review of security systems. Operators of systems that rely on it often keep secret the fact that their system has been broken, so as not to destroy confidence in their service or product. In some cases this may amount to fraudulent misrepresentation of the security of their products.
The reverse of security by obscurity is Kerckhoffs' principle, which states that system designers should assume the entire design of a security system is known to all attackers, with the exception of the cryptographic keys. The full disclosure movement goes further, suggesting that security flaws should be disclosed as soon as possible, withholding the information no longer than is needed to fix or work around the immediate threat.
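By way of contrast, here is a minimal sketch of Kerckhoffs' principle in practice, using the third-party Python `cryptography` package (the message is illustrative): the Fernet scheme's design is entirely public and peer-reviewed, and the only secret in the system is the key.

```python
# Kerckhoffs' principle in practice: Fernet's design (AES-128-CBC plus
# HMAC-SHA256) is fully public; the key is the single secret in the system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the only thing that must stay secret
cipher = Fernet(key)

token = cipher.encrypt(b"attack at dawn")
# An attacker may know every detail of the algorithm; without the key,
# decryption fails and any tampering with the token is detected.
assert cipher.decrypt(token) == b"attack at dawn"
```

Because nothing but the key needs to stay secret, the design can be published, reviewed, and attacked openly, and confidence in it rests on that scrutiny rather than on silence.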