A "Threat Model" is the enumeration the threats to a particular system's security. For example, if you keep your money locked up in a safe, your threat model might include the safe being forced open, the safe being cracked, somebody finding the code to the safe, somebody forcing the code from an employee, or an "insider job" by someone who knows the code.
There is no security without a threat model. Without one, you risk wasting time and effort implementing safeguards that do not address any realistic threat to the system. Or, just as dangerously, you risk concentrating your security measures on one threat while leaving yourself dangerously exposed to others.
From BoingBoing comes a link to a mailing-list post by Ian Grigg that challenges the threat model we use far too often for Internet security. To paraphrase, the threat model that informs SSL, the technology we throw like pixie dust over web servers to make them "safe", is this: "Assume the end-points of any transmission are trusted. Assume the network in between is under the complete control of some malicious attacker." As Grigg notes:
It's a strong model: the end nodes are secure and the middle is not. It's clean, it's simple, and we just happen to have a solution for it. Problem is, it's also wrong. The end systems are not secure, and the comms in the middle is actually remarkably safe.
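To see the model in code: here's a minimal sketch in modern Python (my choice of language, and the host name is just a placeholder) of a TLS client doing exactly what the model asks. Certificate and hostname verification defend against the hostile network in the middle; nothing here can tell you whether either end-point is secure.

```python
import socket
import ssl

# The SSL threat model in miniature: distrust the wire, trust the ends.
# Certificate and hostname checks guard against a hostile network; they
# cannot tell us whether the server itself has been compromised.
context = ssl.create_default_context()  # verifies certs and hostnames

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])  # whoever the CA says this is
```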
While a lot of the specific points Grigg makes in his email could be considered overstated for the sake of his thesis (for example, the interception of plain-text passwords was a real problem before SSH), he is largely correct. Almost all security breaches over the Internet occur not through the interception or alteration of secrets on the wire, but through the compromise of end-points: buffer overruns, backdoors, trojans, and, of course, social engineering.
We pay enormous attention to making sure web browsers communicate with web servers in unbreakable cryptographic magic, agonising over even the most theoretical attack against that cryptography. And meanwhile, nobody ever steals credit-card numbers on the wire: they hack into net-accessible databases, or set up dummy websites to grab them en masse. We can install a blast-proof, foot-thick concrete door, but that doesn't fix the flimsy bars on the window.
This isn't just because we've tightened up transport-level security so much that the end-points are the only thing left to attack. As Eric Rescorla notes in his 2003 Usenix presentation, The Internet is Too Secure Already, most of the products devised to protect messages in transit (SSL, S/MIME, IPsec, WAP) are either under-utilised, completely ignored, or simply don't work. SSH is really the only useful encrypted transport that has successfully replaced its clear-text counterpart.
Even when the transport protocol is broken, the result is underwhelming. Rescorla notes a timing attack against OpenSSL over which much fuss was made, but for which no exploit tool was ever developed. Far more telling, though: for years the worst-case scenario envisioned for SSL/TLS was the compromise of a trusted CA signing key, a breach that would call the certificate of every HTTPS website into question. When, in 2002, a bug in Internet Explorer made this a reality for every Windows user... nothing happened.
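That bug, for those keeping score, was Internet Explorer's failure to check the basicConstraints extension when validating a certificate chain, which let any ordinary site certificate sign certificates for any other site. Here's a rough sketch of the check IE skipped, written against the third-party Python cryptography library (the function name is my own invention):

```python
from cryptography import x509

def may_act_as_ca(cert: x509.Certificate) -> bool:
    """The check IE skipped: only certificates explicitly marked
    CA=TRUE in basicConstraints may sign other certificates."""
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
    except x509.ExtensionNotFound:
        return False  # no basicConstraints extension at all: not a CA
    return bc.value.ca
```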
Grigg points the finger at the demon of convenience:
Well, in a nutshell, we won't protect against the end system attack, because its really difficult. And we'll ignore DOS because that's too difficult too. But we'll cover the entire on-the-wire threats... because, as the book goes on to show, we can!
Once again, he's mostly correct. Protecting the end-points, especially with the human factors involved, is hard. The effort we pour into protecting the transport layer, while not useless, is excessive given the vulnerability of each end and the threats that actually exist in the real world.
Going back to Rescorla's presentation: many of the problems with deploying cryptographic transports could actually be solved by compromising on perfect end-to-end security, in exchange for making it easy for users to slot the secure protocol in as a drop-in replacement.
For example: when SSH was first introduced, it was criticised for the way it handled host authentication. The first time you connect to a new host with ssh, the host's public key is delivered to you.1 You then keep using that key from then on (and are warned if it ever changes). This scheme is open to a man-in-the-middle attack: if an attacker is in the right place the first time you connect to a host, they can substitute their own key for the host's, and own the connection (and any subsequent connections).
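The whole scheme boils down to a lookup table of key fingerprints. Here's a minimal sketch of the idea in Python (the storage file and function name are inventions of mine, not how OpenSSH actually manages its known_hosts file):

```python
import hashlib
import json
from pathlib import Path

KNOWN_HOSTS = Path.home() / ".tofu_known_hosts.json"  # hypothetical store

def check_host_key(host: str, host_public_key: bytes) -> str:
    """Trust-on-first-use: remember a host's key fingerprint the first
    time we see it, and complain loudly if it ever changes."""
    fingerprint = hashlib.sha256(host_public_key).hexdigest()
    known = json.loads(KNOWN_HOSTS.read_text()) if KNOWN_HOSTS.exists() else {}

    if host not in known:
        known[host] = fingerprint  # first contact: a leap of faith
        KNOWN_HOSTS.write_text(json.dumps(known))
        return "new host: key accepted and remembered"
    if known[host] == fingerprint:
        return "key matches the one we remembered"
    return "WARNING: HOST KEY HAS CHANGED -- possible man-in-the-middle!"
```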
This "flaw" in SSH's security helped its adoption. If you had to arrange some alternate secure channel for the delivery of a server key, this would make setting up SSH that much harder for both server and client.
If "Perfect" isn't going to be adopted, but "Pretty Good" is still worthwhile protection, (especially when considering the threat model as a whole) the latter becomes a worthwhile goal.
1 You are given the opportunity to check the key's fingerprint, but nobody ever does.