(Before I continue with this post: Go Katie! Our friend Katie has just gone into labor—it’s early, so she hasn’t left for the hospital yet. But push that sucker out! Good luck, and be safe in the snow!)
Paired with my interest in telecommunications law is electronic security. My favorite parts of Cory Doctorow’s wonderful novel Little Brother were the digressions about (un)secure networks, circles of trust for encryption keys, and encryption software.
So while I’ll frequently experiment with security tools for my own computers, one pitfall with each tool is the requirement that I depend on my own brain to remember a password….
A key feature of good security is plausible deniability: if my laptop were stolen or hacked, my sensitive data would be more secure if its data were not only encrypted but also if it were impossible to tell the encrypted data even exists. A spectrum showing the least secure file to most secure file would, presuming a secure network connection, look something like this:
- Unprotected file on a hard drive
- Password-protected/encrypted file on a hard drive
- Password-protected/encrypted file steganographically disguised as/in another file of a different type and size
- Password-protected/encrypted file steganographically disguised as/in another file of a different type and size, but with no evidence that the hard drive even contains a tool to disguise files
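As a toy illustration of that third rung (not anything I actually use), one classic steganographic trick is appending data after a JPEG's end-of-image marker (`FF D9`): image viewers stop reading at the marker, so the file still opens as an ordinary picture. The sketch below "encrypts" with a hash-based XOR keystream purely to keep it standard-library; real tools use a proper cipher such as AES.

```python
import hashlib
import os

def keystream(password: bytes, salt: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a password. Toy stream cipher --
    use a real authenticated cipher (e.g. AES-GCM) in practice."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(
            password + salt + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:n]

def hide_in_jpeg(carrier: bytes, secret: bytes, password: bytes) -> bytes:
    """Append the XOR-masked secret after the JPEG data.
    Viewers ignore everything past the FF D9 end-of-image marker."""
    salt = os.urandom(16)
    masked = bytes(a ^ b for a, b in
                   zip(secret, keystream(password, salt, len(secret))))
    return carrier + salt + masked

def recover(stego: bytes, secret_len: int, password: bytes) -> bytes:
    """Peel the trailer back off and unmask it with the same keystream."""
    trailer = stego[-(16 + secret_len):]
    salt, masked = trailer[:16], trailer[16:]
    return bytes(a ^ b for a, b in
                 zip(masked, keystream(password, salt, secret_len)))
```

Note the weakness the fourth rung addresses: the disguised picture looks innocent, but the hiding tool itself sitting on the drive gives the game away.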
That last bullet point provides full plausible deniability. Not only is your data hidden, but there’s no evidence you ever had the tool to hide sensitive data. It would be like sending a coded letter via U.P.S. without anyone being able to tell that U.P.S. ever came to your house or subsequently delivered your letter.
But my weak link—not for security per se but for making the whole thing practicable—is that I have little confidence in always remembering which file is disguised, or even what my tough password is.
Around the time my memory went in 2007, I had recently installed TrueCrypt on my work computer to encrypt a lot of my data. At the time I thought it prudent: I was working with research written by people in east Africa who, if they were identified, could be in some danger. (Not a likely occurrence that a Ugandan would hack my computer, but I considered it a best practice.) When it was clear that my health would keep me out of the office for several months, Tufts brought in a freelancer…who, of course, couldn’t access any of the files she needed. But because of my short-term memory loss, I couldn’t remember my TrueCrypt password. It was only when I felt well enough—a week or two out of the hospital—to go into the office and sit at my desk that my muscle memory (I guess?) recalled my password. I copied the files to the desktop, and I uninstalled TrueCrypt.
I learned three lessons:
- TrueCrypt, to its programmers’ credit, works exactly as advertised.
- For me to have had full plausible deniability, Tufts never should have been able to tell I’d encrypted anything. (It was easy to tell: every morning I had launched TrueCrypt to decrypt and mount my hidden file, so TrueCrypt was not only in my Programs folder but was in my frequently used programs menu.)
- But the key lesson, obvious as it sounds: Security is only as strong as your ability to store your password(s) in your own head.
And that’s where I’m stuck. Here’s a perfect example. I’m about to test out KeePass Password Safe to store the various passwords I use, since it’s less than ideal to use variations on one single password for everything you do. However, doing so still requires me to remember a lot. It’s not too big a deal to label a username/password combo as “Email” in KeePass and still know which webmail service I use. It’s a smidge more troublesome to label something “Banking,” as anyone seeing KeePass would then know I use online banking. But it becomes very problematic when I try to obscure, say, multiple financial accounts. It would be dumb to label them “Bank of America-Checking” and “Merrill Lynch-401(k),” of course. But what about two savings accounts with different institutions? To obscure the names of “Bank of America-Savings” and “ING-Savings,” you’d end up having to remember which non-descriptive nicknames refer to which accounts (“Savings account 1” and “Savings account 2”). It gets tougher for accounts that you rarely use—savings accounts are a good example, as many people set up a direct deposit with their employer and then don’t think about accessing that account for months.
Which takes us back to the fact that to obscure all the information about a password—the password itself but also which site or service that password unlocks—you need a program like KeePass to hide them all. Yet KeePass’s database of passwords is itself protected by a single master password:
> KeePass is a free open source password manager, which helps you to manage your passwords in a secure way. You can put all your passwords in one database, which is locked with one master key or a key file. So you only have to remember one single master password or select the key file to unlock the whole database.
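That “one master key” idea boils down to key stretching: the master password is run through a slow, salted key-derivation function, and the resulting key encrypts the whole database. (KeePass has its own specific scheme; the snippet below is just a rough standard-library sketch of the concept using PBKDF2, with a made-up verifier check.)

```python
import hashlib
import hmac
import os

def derive_master_key(master_password: str, salt: bytes,
                      rounds: int = 200_000) -> bytes:
    """Stretch the master password into a 256-bit key.
    Many iterations make brute-forcing each guess expensive."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, rounds)

# Unlocking: re-derive the key from the typed password and compare a
# stored verifier hash. The salt and verifier are saved; the key is not.
salt = os.urandom(16)
key = derive_master_key("correct horse battery staple", salt)
verifier = hashlib.sha256(key).digest()

attempt = derive_master_key("correct horse battery staple", salt)
unlocked = hmac.compare_digest(hashlib.sha256(attempt).digest(), verifier)
```

Which is exactly the rub: however strong the stretching, everything still hangs off that single password in your head.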
So we’re back to the beginning: using one password to control everything. If someone can acquire that one KeePass password—say, if they can successfully threaten you—they likely know which banks, webmail, etc. you use. That information isn’t too helpful separately, but together it tells a lot about a person. And KeePass itself, like TrueCrypt, isn’t hidden (the best thing to do is keep them on a USB drive, though that comes with similar problems), so there’s no plausible deniability: it’s evident you’re hiding something from someone.
Does anyone therefore know: is it possible to be 100% “secure in your person and effects” if you can’t trust your “person” to remember all your passwords?