Storing passwords in an app

I recently covered the topic of sending unfakeable messages from your app to a server, which requires the use of a private key – basically a password that users can’t access. In this case, and many others, you might want to include private data (such as a private key) in your app in a way that hackers can’t find it. The most obvious way to store a password string is as a string literal. But this is a bad idea, since anyone who purchases your app (or obtains it by more nefarious means) has access to the binary, and extracting string literals from a binary is pretty easy.

The problem

I’ll mention the bad news up front, which is that there is no really good solution to this problem. In security, it’s basically a war between hackers and researchers. (I like to think of these guys as two sides of research, but I’m not sure if others would agree.) When it comes to this specific problem, the hackers are winning. Or, to be more fair to researchers, the setup of the problem is so skewed in favor of hackers that it may be essentially impossible to defend against them completely.

Why do the hackers win this one?

A hacker can get your binary, and can run your binary. A clever hacker can disassemble, inspect, and even modify your code, although they don’t have access to the source, so it’s harder.

But, no matter how you implement things, your code will be sending some string to the server, and it has to compute the authentication code (the HMAC). In order to compute the HMAC, it has to pass around a private key. All the hacker has to do is locate the point in your code where you send out the network request, work backwards from there to find the origin of the HMAC, and from there find your private key and hash function. Once they locate these, they can execute your hash function with your private key to sign any message they like. They win.

However, going to all this work is a real pain, and requires a rare skill set for the hacker. So there are some very practical methods, which I’ll list below, that will make life harder for, and probably completely deter, many would-be hackers.

Good solutions we can’t use

Before saying what we can do in iOS, in my research for this post I learned about two techniques to help solve this problem in other contexts. I like these ideas because in these cases the researchers are winning, and I think each of these is a very interesting idea.

Non-iOS solution 1: Trusted platform module

A trusted platform module (TPM) is a small piece of hardware that can be included with a device to provide certain security functionality. The main feature we would care about is device integrity. Basically, a TPM keeps hashes computed from the device’s hardware configuration and its OS/low-level software. If any of those are altered (such as an iPhone being jailbroken), the TPM will detect it. So a platform can be created where no app can run without approval from the TPM. In this context, it becomes much more difficult for a hacker to extract anything from your code, because they can’t run it. Anyone implementing such a system might as well also prevent users from copying binaries, which would rule out disassembly too.

However, iOS devices do not have these protections.

Non-iOS solution 2: Secure password verification

This is a very cool idea. You can write a function BOOL PasswordIsCorrect(NSString *password) that will correctly return YES or NO depending on whether the input string matches a secret password. This much is easy. The cool part is that you can write this function in a way that keeps the password secret, even if a hacker has the source code for the function.

How can this work? Like this:

BOOL PasswordIsCorrect(NSString *candidatePassword) {
  // These values are constants pre-computed by the programmer.
  // If you use this, make sure your p is much larger.
  const long long int encryptedPasswordInt = 82303;
  const long long int r = 19392;
  const long long int p = 89237;
  long long int candidatePasswordInt = IntRepresentationOfString(candidatePassword);
  long long int encryptedCandidatePasswordInt = PowerModN(r, candidatePasswordInt, p);
  return encryptedCandidatePasswordInt == encryptedPasswordInt;
}

The setup is to choose any large prime p, then any value r < p, and then precompute encryptedPasswordInt = PowerModN(r, realPasswordInt, p). To use more standard mathematical notation, we’re doing something like:

y = r^x (mod p)   (x is secret, done ahead of time)
z = r^q (mod p)   (q is user-provided password)
y == z ?

They will be equal iff the password was correct. Here, x is the real password int, and q is the user-provided candidate password int. y is called encryptedPasswordInt, and z is called encryptedCandidatePasswordInt in the code.
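The snippet above leans on a PowerModN helper that isn’t shown. Here’s a minimal C sketch of it using square-and-multiply, the standard way to compute r^x mod p without overflowing; IntRepresentationOfString is left out, since any injective string-to-integer mapping will do.

```c
#include <stdint.h>

// Compute (base^exp) mod n by square-and-multiply.
// Intermediates are 64-bit, so n must fit in 32 bits to avoid overflow.
uint64_t PowerModN(uint64_t base, uint64_t exp, uint64_t n) {
    uint64_t result = 1;
    base %= n;
    while (exp > 0) {
        if (exp & 1) {
            result = (result * base) % n;  // multiply in the current bit
        }
        base = (base * base) % n;          // square for the next bit
        exp >>= 1;
    }
    return result;
}
```

For a real deployment p needs to be a much larger prime, which means swapping the 64-bit arithmetic for a big-integer library.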

This is considered secure because the discrete logarithm problem is considered hard. I say “considered” because it hasn’t been proven either way. For example, it would be great if someone could prove there did not exist a polynomial-time algorithm to solve the discrete logarithm. Of course, it would also be great if I were a billionaire with a yacht and I could solve the Riemann hypothesis in my sleep. Actually I don’t really care about yachts that much. That’s more of a “nice-to-have” one.

So, that technique works and has a bunch of research behind it. For example, check out these lecture notes or just search for “obfuscating point functions.”

Now for some methods you can actually use. Sorry it took me so long; I thought these detours were very interesting.

Method 1: Encrypt your private string

The first thing you should do is encrypt your private key string. At this point some readers will be thinking “Oh hey I know this great routine I can write called rot13. I love rot13.”

Do not use rot13.

Every time you use rot13, a kitten explodes. Rot13 is not an encryption method. It is a way to tell other programmers that you are a security newbie. It’s ok to be a newbie, unless of course you’re in charge of security at my bank.

In general, most encryption techniques that are easy to think of have been broken. If you want to be safe, use a community-accepted technique like AES. iOS comes with some powerful, though poorly-documented, security libraries that you can take advantage of here.

Method 2: Obfuscate your strings, independent of encryption

Now, you have your encrypted private key, and you need to decrypt it at runtime. So you also have a decryption key. How have we really improved the situation? A hacker can still figure things out from just your string literals. Well, now they have to guess which string is the encrypted key, which is the decryption key, and which decryption technique to apply, which is already much more work.

You can make things even harder by:

  • Use English phrases as the decryption key for your private key. This way, if a hacker sees a list of extracted string literals, it’s harder for them to guess which one is your decryption key.
  • Store a series of short, separate string literals in your code in a scrambled order, and reassemble them at runtime to get your decryption key. This makes it even harder to recover your decryption key from a list of string literals.
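A minimal C sketch of the fragment idea (the fragments, their order, and the resulting key are of course made up for illustration):

```c
#include <string.h>

// The key "correct horse battery" never appears whole in the binary;
// only these fragments do, stored out of order.  (Hypothetical key.)
static const char *fragments[] = { "battery", "correct ", "horse " };

// The reassembly order, kept apart from the fragments themselves.
static const int order[] = { 1, 2, 0 };

// Rebuild the decryption key at runtime by concatenating the
// fragments in their true order.
void BuildDecryptionKey(char *out, size_t outSize) {
    out[0] = '\0';
    for (int i = 0; i < 3; i++) {
        strncat(out, fragments[order[i]], outSize - strlen(out) - 1);
    }
}
```

You could push this further by deriving the order array at runtime instead of storing it as a literal table.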

Method 3: Obfuscate security symbol names

Next up, hackers can often extract symbols from your binary. Things like class names may be available to them. If you have a class named PrivateKeyHelper, you’re advertising to them where to look in your code. You can keep your code readable but place misleading symbols in your code by clever use of #define’s in your header files. For example, near the top of PrivateKeyHelper.h, do this:

#define PrivateKeyHelper ComponentNode

Your code will look the same, but the linker will never see the symbol PrivateKeyHelper, only ComponentNode (or any other boring non-security symbol you choose). Do this for all function/method and class names that reek of security.
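Concretely, the same trick applied to a C function (all names here are invented, and the function body is a placeholder):

```c
// In PrivateKeyHelper.h: rename the security-flavored symbol before the
// compiler proper ever sees it, so the binary's symbol table shows only
// the boring name.
#define DecryptPrivateKey UpdateLayoutCache

// Written and read as DecryptPrivateKey, but it compiles, links, and
// appears in `nm` output as UpdateLayoutCache.
int DecryptPrivateKey(int x) {
    return x ^ 0x2F;  // placeholder body, not real decryption
}
```

Both names refer to the same function in source, but only the decoy name survives into the binary.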

Method 4: Write your own key decryption function

The above method works whenever you have all the source, but fails when you’re using a library you didn’t build, including any third-party security library. There are a few very common decryption functions you are likely to use. A clever and determined hacker could look for calls from your code to these common functions. To fight against this, you could write your own decryption function. Again, don’t use rot13 (remember the kittens?), but something more serious.
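One middle ground: hand-implement a small published cipher rather than inventing your own. As a sketch, here is XTEA in C — it’s short enough to type in yourself, so your binary won’t contain the recognizable symbols of a common crypto library. (To be clear, a vetted AES library is still the stronger default; the point here is avoiding searchable symbols, not beating AES.)

```c
#include <stdint.h>

// XTEA block decryption.  A hand-rolled copy of a published cipher
// avoids calls to well-known library symbols a hacker could search for,
// while remaining far more serious than rot13.
void XTEADecryptBlock(uint32_t v[2], const uint32_t key[4]) {
    uint32_t v0 = v[0], v1 = v[1];
    const uint32_t delta = 0x9E3779B9;
    uint32_t sum = delta * 32;
    for (int i = 0; i < 32; i++) {
        v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
        sum -= delta;
        v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
    }
    v[0] = v0; v[1] = v1;
}

// Matching encryption, run offline to produce the ciphertext you ship.
void XTEAEncryptBlock(uint32_t v[2], const uint32_t key[4]) {
    uint32_t v0 = v[0], v1 = v[1];
    const uint32_t delta = 0x9E3779B9;
    uint32_t sum = 0;
    for (int i = 0; i < 32; i++) {
        v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
        sum += delta;
        v1 += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
    }
    v[0] = v0; v[1] = v1;
}
```

You’d encrypt the private key once on your own machine, ship only the ciphertext, and decrypt at runtime with this routine.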

Method 5: Use function pointers to make security calls

If hackers can follow a deterministic control flow through your code, they can still find your decryption function by working carefully backwards from your network request. But you can fight even this. If you don’t call your functions directly, but rather use a function pointer, then it will be harder for a hacker looking at disassembled code to follow the control flow. If you do this, try to fool the compiler into thinking that multiple values may be used for the function pointer, so it doesn’t optimize away the obscurity.
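For instance, in C you can route the call through a small table of function pointers indexed by a runtime value, so the compiler can’t fold the indirect call back into a direct one (all names below are invented, and the function bodies are placeholders):

```c
// A couple of decoy routines plus the real decryption routine, all
// sharing one signature.  Bodies are placeholders for illustration.
static void DecryptKey(char *buf)  { buf[0] ^= 0x5A; }
static void ResizeCache(char *buf) { buf[0] += 1; }
static void FlushLayout(char *buf) { buf[0] -= 1; }

typedef void (*WorkFn)(char *);

// A table of candidates, indexed by a value computed at runtime.
// Because the index isn't a compile-time constant, the compiler can't
// optimize the indirect call into a direct call to DecryptKey.
static WorkFn workTable[] = { ResizeCache, DecryptKey, FlushLayout };

void RunSecurityStep(char *buf, int runtimeSelector) {
    WorkFn fn = workTable[runtimeSelector % 3];
    fn(buf);  // indirect call: harder to follow in a disassembly
}
```

In a real app the selector would come from somewhere the optimizer can’t see through, such as a value read from a config file at launch.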

Method 0: Don’t worry

Lastly, and most importantly, all of this post could very easily fall into the category of “premature optimization.” Unless you have a lot of users or a lot of user money already invested in your app, I think this stuff should be a low priority. Why? Because the majority of apps only have a small number of users, and it’s unlikely that anyone is going to put in serious effort toward hacking your app. Of course, your priorities are up to you, but I thought I would mention this in light of Knuth’s warning about the root of all evil.

(Thanks to Susan Hohenberger Waters for some help with this topic!)