
Best Practices in App Security


Video & transcription below provided by Realm: a replacement for SQLite & Core Data with first-class support for Swift!


Smartphones have become an extension of the user, allowing them to buy merchandise, pay for services and hold a strong social presence. This places strict demands on security and data privacy.

If you want your users to be comfortable using your app, you must place emphasis on utilising the security methods at your disposal.

This talk will cover the best practices in app security, demonstrate common mistakes and pitfalls, and share what we’ve learned from our own experience in the mobile banking industry.

My name is Ana. I come from a company in Europe called Infinum. We do design and development. We’re an agency. We work with a lot of clients.

In my line of work, I take care of security for banks. Security in general is a vast topic. I will try to focus on basic things that we can do to improve security in our applications. Throughout the presentation you’ll see that adding them up results in a good product.

Banks tend to have different approaches to security. Some focus on prevention, but most banks tend to focus on mitigation. That means that one day, when somebody makes a huge withdrawal from your account, they will call you up and ask you, “Did you pay a certain amount of money to that company?”

This is great, but it should be an added layer of security. The first layer should be the one you build into the quality of your application and all the standards that you apply to the build itself.

Let’s start with the basics: build integrity. What I mean by build integrity is all the little things you can do when creating a project. When creating your super secure application, you can do small things that aren’t very complicated but will have a huge impact later on.

You need to add a release key store to your application. We have a big team, and it often happens that a few of us have to work on the same application. While we’re in debug mode, I really hate it when I give my phone to another colleague and tell him to deploy his changes on my phone, and he has to reinstall it because the signatures don’t match.

A good rule of thumb, which we have begun using, is that when you create a project, you create a release keystore immediately, and you sign all your builds with it. The simple reason is that you will need that release keystore for publishing to your Google Play account. The other reason is that you avoid irritating reinstalls between team members as you develop.
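For reference, a release keystore can be generated with the JDK’s keytool. The file name and alias below match the Gradle snippet shown later; the key size and validity period are just common choices:

keytool -genkey -v -keystore myapp.keystore -alias keyAlias -keyalg RSA -keysize 2048 -validity 10000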

One release keystore can be used for all build types. I suggest you do use it for all build types. You really don’t want to lose that keystore. Ever. You learn from experience.

Our experience was that a few years ago we had a colleague that worked on a project from scratch. He developed an application. He finished the application and pushed it to our Google Play account. A few months later he left the company. And a few months after that, the client said, “Oh, we would like an upgrade.” Fine. “What do you want us to do?” They give us the specification. We implemented.

Then there comes the day when you have to push that update to your Google Play account. And we don’t have the keystore. Our colleague was super secure and placed the keystore outside of the git repository. Somewhere else. The problem is that we didn’t know where that somewhere else was. So you have to tell your client, “Whoops, we made a mistake. We cannot publish your application.” That’s really bad for you. You don’t want to end up looking like an amateur.

Thankfully, the laptop that the colleague was using was still in our company. That problem was averted, but I cannot describe the shame that you feel when you tell the client that you have misplaced the only thing that’s needed for another publish or an update of the application.

Your keystore needs to be safe. If somebody acquires your keystore, they hopefully won’t have access to your Google Play account too. But if they do acquire your keystore, they can repackage your application, add some malicious code inside, and put it somewhere else: publish it on a different site, send emails to random people saying, “Hey, this is a new cool version of Facebook. Why don’t you download it from this link?” Users will hopefully not download it, but if they do, and the package was signed with your keystore, or Facebook’s keystore in this example, it will install as a seemingly legitimate upgrade of the original app, because the signatures match.

This is a huge issue. Keeping your keystore stored separately, not in your git repository, is mandatory.

signingConfigs {
    release {
        storeFile file("myapp.keystore")
        storePassword "password123"
        keyAlias "keyAlias"
        keyPassword "password789"
    }
}

You don’t keep your release keystores in your git repository because again, somebody could technically get to them, and if you have a large team and your team members leave, you don’t necessarily want them to have access to it.

Another thing that people do is put keystore data directly in their build.gradle files. You don’t want to put this data directly in your build.gradle because, again, somebody can get to it.

local.properties:

KEYSTORE_PASSWORD=password123
KEY_PASSWORD=password789

This is just one alternative, an easy and obvious one: you can put key-value pairs in your local.properties or gradle.properties or somewhere else. You can use system environment variables and then reference them, or you can use properties files and then parse them.

try {
    storeFile file("myapp.keystore")
    storePassword KEYSTORE_PASSWORD
    keyAlias "keyAlias"
    keyPassword KEY_PASSWORD
} catch (ex) {
    throw new InvalidUserDataException("…")
}

The main point is that your build configuration then references something that doesn’t itself contain the actual data for your keystore. That’s one way to mitigate this issue.

Another thing that people usually do by default, without knowing the implications, is enable obfuscation. You usually obfuscate to minimize the app: to remove unnecessary resources and shrink the build. But you also make your code unreadable. And that’s crucial, because making your code unreadable for you also makes it unreadable for somebody else.

release {
    minifyEnabled true
    proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
    signingConfig signingConfigs.release
}

This is, again, from the generated build.gradle file. You see references to .txt files that contain your ProGuard rules.

People don’t like ProGuard. Builds fail because you add ProGuard at the beginning, you develop, you add libraries, but you forget to add rules for those libraries. Most self-respecting libraries have a little section in their readme that says, “For ProGuard, please add these two lines.” But people don’t do that until they have to release their build.

That’s the last bullet here: staging versus production. You wait until production to try out the build that has ProGuard in it. You should add ProGuard to all your builds, whether debug, staging, or production. It will cause problems when you try to debug, but at least you’ll know straight away that you have forgotten to add some rule.

Again, builds fail, and then you have to go through the libraries that you added, try to find the one that broke the build, and add rules for it. If you add rules as you develop and add libraries, then you should have fairly few problems with it. If you don’t like ProGuard, and I think we agreed that you don’t, you have other options. These are just two.

They’re tools that you can add to your builds, and you can write rules for your application. You don’t just get minimization and obfuscation; you can also add build-tampering detection. They’re powerful. I think at least one of them has a trial period, but both are commercial solutions.

If you look at obfuscated code, it’s the code you see in that first year in college: when you’ve just started programming and you’re so super hyped. Everything’s going to be awesome and short. “I’m just going to use one-letter names for variables and it’s going to be really compact.”

public abstract class e {
    private int a = -1;
    private String b = null;
    protected boolean k = false;

    public abstract void a(Intent var1);

    protected final void a(String var1) {
        this.b = var1;
    }

    public final void c() {
        this.a = -1;
        this.b = null;
    }

    public final boolean d() {
        return this.k;
    }
}

This is an example of that. It’s hard to read, but that’s the point behind it.

The final question: we think we have managed to secure our APK at this point, but the truth is that you can reverse engineer Android applications very simply. There are tools for it, and judging by the half million search results shown here, you can really start with anything. The problem is that your APK is a zip file: unwrapping it and using some tool to get data out of it is not a big problem. Hence the tampering detection.

When you’ve obfuscated your file, and you’ve added your keystore, you want to check whether somebody has downloaded it, maybe changed it a little, and whether there is something wrong. Potentially if it’s run on a rooted device or something else.

context.getPackageManager()
        .getInstallerPackageName(context.getPackageName())
        .startsWith("com.android.vending")

There are three simple things that you can do with your build:

Verify the signing certificate at runtime. You can place your signature somewhere in the app, preferably split across several variables so that it’s not sitting somewhere obvious.

You can also verify the installer. A few lines of code, shown above, check whether the installer is Google Play.

Another thing to check is whether the app is run on an emulator or whether it’s debuggable. Again, this is very simple to do; a sketch of these checks follows below.
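To make this concrete, here is a minimal sketch of the three checks in Java. The Android APIs used (PackageManager, ApplicationInfo, Build) are standard, but the class name, the emulator heuristics, and the expected-signature constant are assumptions made for this example.

import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import android.os.Build;
import android.util.Base64;
import java.security.MessageDigest;

public final class BuildChecks {

    // Hypothetical expected value; in a real app you would split this
    // across several variables, as mentioned above.
    private static final String EXPECTED_SIGNATURE = "...";

    // 1. Verify the signing certificate at runtime.
    public static boolean hasValidSignature(Context context) {
        try {
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(), PackageManager.GET_SIGNATURES);
            for (Signature signature : info.signatures) {
                byte[] sha = MessageDigest.getInstance("SHA")
                        .digest(signature.toByteArray());
                if (EXPECTED_SIGNATURE.equals(Base64.encodeToString(sha, Base64.NO_WRAP))) {
                    return true;
                }
            }
        } catch (Exception e) {
            // Treat any failure as a failed check.
        }
        return false;
    }

    // 2. Verify that the installer is Google Play.
    public static boolean installedFromPlayStore(Context context) {
        String installer = context.getPackageManager()
                .getInstallerPackageName(context.getPackageName());
        return installer != null && installer.startsWith("com.android.vending");
    }

    // 3a. Check whether the build is debuggable.
    public static boolean isDebuggable(Context context) {
        return (context.getApplicationInfo().flags & ApplicationInfo.FLAG_DEBUGGABLE) != 0;
    }

    // 3b. A common heuristic for detecting emulators.
    public static boolean isProbablyEmulator() {
        return Build.FINGERPRINT.startsWith("generic")
                || Build.MODEL.contains("google_sdk")
                || Build.PRODUCT.contains("sdk");
    }
}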

Another problem we had: at one point in time we started getting a lot of crash reports for an application, which was weird, because the frequency of the reports was very high. Upon inspecting with Crashlytics, we found that someone was running the application on a rooted emulator.

Having some kind of dialog pop up when your app is run on an emulator or is debuggable, telling the user, “Hey, there is something wrong with this build, please don’t use it,” or maybe exiting the app, would be a much better solution than allowing users to do whatever they want with the build.

The next part that I want to address is data privacy. I think that most users are very sensitive about their data. I’m talking pictures. I’m talking conversations with your family or loved ones. You don’t want other people poking and prodding at it.

Android says that there are three basic ways to store and retrieve data. We have internal storage . We have external storage . And you can use content providers for the same thing. We’re asking ourselves if the data is private.

Internal storage, in general, is private. It belongs to your app; only your app can access it. It’s fine for most things you want to store. Shared preferences are a subpart of internal storage: whenever you want to store some kind of user preference, you use shared preferences. They’re not for big data, just simple information, and they’re also stored in the private section of your app.
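For reference, this is what writing a simple flag to shared preferences looks like; the file name and key below are made up for the example, and a Context is assumed to be available.

import android.content.Context;
import android.content.SharedPreferences;

// Shared preferences live in the app's private internal storage.
SharedPreferences prefs =
        context.getSharedPreferences("settings", Context.MODE_PRIVATE);
prefs.edit()
        .putBoolean("notifications_enabled", true)
        .apply();
boolean enabled = prefs.getBoolean("notifications_enabled", false);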

The external storage, as it says by definition, is generally readable and writeable. That means that your app can change the data, other apps can change the data, and the user can come and delete whatever kind of configuration, picture, or file you placed in the external storage. That’s not private, by definition.

And the third thing, content providers, are more of a storage mechanism used for sharing data between applications. One good use for this would be an application that requires you to log in, where logging in to that application also gives you access to some other application, so you don’t have to log in twice.

<provider
    android:name="com.example.android.datasync.provider.StubProvider"
    android:authorities="com.example.android.datasync.provider"
    android:exported="false" />

android:protectionLevel="signature"

It’s safe, but you have to do a little writing to make it safe. Those two things shown above are all you need to make your content providers secure. The example sets android:exported="false", which keeps the provider fully private; if other applications need to use it, it has to be exported, and the protection level of the permission guarding it has to be set to signature. This means that only an application that’s signed with the same keystore as your default application can use the content provider.

We’re referring back to the things we’ve already done with our build. This is an overview of those storage mechanisms. All we need to know, in general, is that internal storage, and by extension shared preferences, are private, while the other two options are not; or rather, it depends on how you configure them.

But the question is whether it’s safe. Generally, yes. Until you do something with your device. And that’s rooting it. Once you root your device, everything you’ve done to ensure privacy is out the window. You can root your device for malicious reasons, or if you just want to remove the bloatware that comes with it. Either way, it’s fine, but it’s a reality and you cannot influence it.

One solution is to encrypt stuff. The safest way to keep data on your device is to not keep it at all. But if you have to, encrypt it. Find some library or tool that suits your needs, and encrypt all the things that you need to have encrypted in your application. I will not go into all the options, but you don’t have to reinvent the wheel. Use whatever is available.
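If you do have to keep sensitive data on the device, a minimal sketch of symmetric encryption with the standard javax.crypto API is shown below. It assumes you already have a SecretKey from somewhere safe; key management is the hard part and is out of scope here.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;

// Encrypts plaintext with AES-GCM and prepends the IV, which is
// needed again at decryption time.
public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key);
    byte[] iv = cipher.getIV();
    byte[] ciphertext = cipher.doFinal(plaintext);
    byte[] result = new byte[iv.length + ciphertext.length];
    System.arraycopy(iv, 0, result, 0, iv.length);
    System.arraycopy(ciphertext, 0, result, iv.length, ciphertext.length);
    return result;
}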

When you encrypt stuff, you might still want to prompt your user for some kind of authentication, a PIN or a password, and that PIN or password should probably be the key you use for the encryption, right? Wrong. If you use a PIN to encrypt data, how many options for a four-digit PIN can you come up with? 10,000. 10,000 options, just for cracking the application. If you have time, and I know you all here don’t, but if you have time and you’re really set on breaking something, you can iterate through them. Try the basic things first: birthdays, anniversary dates, all the things that people tend to use as passwords. Using some other method, like a password, is a better option, but it depends on its length and complexity.

One other thing I wanted to mention here: when you unlock your phone with a PIN, like I said, there are 10,000 options to try out, and an attacker will start with the obvious ones. If you use a lock screen pattern, you have raised the bar about 40 times: there are about 389,000 distinct patterns you can draw on your device. That’s always a better option than a PIN. But again, human nature is at the core of most problems, security included, so most people start at the top-left point and just draw three or four segments. That number then decreases drastically.

If you use some kind of PIN or password, don’t use it directly. Use something that will transform your PIN or password into something more complex. You can use bcrypt, which is just one suggestion: an algorithm that transforms your PIN into a longer key. It iterates through its function many times, and the result is much harder to crack, because the computational power needed to attack that kind of hash is far beyond what’s needed for MD5, SHA-1, or other fast hash functions. Encrypt the data, and don’t do it directly with the PIN; transform it into something more complex first.
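A minimal sketch of this idea, assuming the open-source jBCrypt library (org.mindrot.jbcrypt); the library choice and the class below are illustrative, not necessarily what the speaker used:

import org.mindrot.jbcrypt.BCrypt;

public final class PinHasher {

    // Work factor: each increment doubles the cost of computing the hash,
    // both for you and for an attacker iterating over all 10,000 PINs.
    private static final int LOG_ROUNDS = 12;

    // Derives a slow, salted hash from the user's PIN or password.
    public static String hash(String pinOrPassword) {
        return BCrypt.hashpw(pinOrPassword, BCrypt.gensalt(LOG_ROUNDS));
    }

    // Verifies an entered PIN or password against the stored hash.
    public static boolean matches(String pinOrPassword, String storedHash) {
        return BCrypt.checkpw(pinOrPassword, storedHash);
    }
}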

My question to you at the end of this section is: can your data remain private, in any way? No. Private, definitely not, because rooting the device allows access to everything, including the data. But if you don’t encrypt data and you keep it on the device, then you’re just asking for trouble. Don’t allow misuse of the user’s data.

After we take care of our build integrity and implement some kind of privacy for the user’s data, the next thing we need to take care of is network security.

If you want to make things safe you will use HTTPS. HTTP is a plain-text protocol, and the text is right there. If you want to compare HTTP to something, it would be like writing your credentials on a postcard and sending it through the mail to the end recipient. Anybody can intercept it, anybody can read it, and of course, anybody can use it.

HTTPS, as you all know, encrypts the communication channel, which makes the whole communication between the app and the servers more secure. I say more secure because this alone is not enough to prevent a man-in-the-middle attack. (If you’re really playful, you can use a Charles proxy and place it so it intercepts data between the app and the servers. If somebody installs the Charles proxy certificate on a device, the apps on it will not show any kind of warning: the device will think that it’s talking to the server, and the server will think that it’s talking to your application, regardless of the HTTPS.)

The solution for this problem is to pin stuff. Pinning certificates has its merits. We use it frequently because it adds that extra protection layer that is needed in some of the applications that we have. What it does is it defines which certificate authorities are trusted.

Without certificate pinning your application will trust all the certificate authorities that are placed on your device, which is fine, but if your client wants to use a custom certificate, then you have to add it somehow. You can add it on your device, but having to ask 4,000 users to install an additional certificate just so they can use your application without warnings and problems is irritating. Nobody wants to do that.

So, you pin the certificate in the application, and the effectiveness of the attack is reduced and the users can use your application to communicate with the servers.

okhttpbuilder
        .pinClientCertificate(resources, R.raw.client_cert, "pass".toCharArray(), "PKCS12")
        .pinServerCertificates(resources, R.raw.server_cert, "pass".toCharArray(), "BKS")
        .build();
return new OkClient(client);

This is for an older version of OkHttp, which we use because who doesn’t love Square libraries? This is all that’s needed to pin certificates. The example also shows that we pinned two different certificates, client and server; pinning the client certificate is not necessary, but pinning the server certificate is a must.
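For comparison, in more recent OkHttp versions (3.x and later) server pinning is built in via CertificatePinner. A minimal sketch, with a placeholder hostname and a placeholder pin hash:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

// Pins the expected SHA-256 hash of the server's public key for
// example.com; the value below is a placeholder, not a real pin.
OkHttpClient client = new OkHttpClient.Builder()
        .certificatePinner(new CertificatePinner.Builder()
                .add("example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build())
        .build();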

What happens when the certificate changes and you have pinned it? Again, I think you learn from experience. You get a call Monday morning from your client saying that the app is broken, nothing works, the world is about to implode. You ask them what has changed and they say, “Nothing, everything’s the same. We just updated the certificates.” Then you ask them, “Okay, do you remember the time when we pinned that certificate in the application?” They respond, “Sure.” And they’re still drawing a blank.

This is a problem that we generally have with our clients. They don’t really understand the impact that pinning has on the release cycle of the app and on the security setup on the server. When you do pin certificates, you need to know how to inform your users and your clients that the change will break everything. Suppose you do need to change your production certificate, and your only mechanism is in-band: when you log in to the application, the server tells you, “Okay, there’s a new version available. Please upgrade.” This doesn’t work, because the login call itself goes over the pinned HTTPS connection, which means that there’s no communication at that point.

You want some kind of out-of-band mechanism that can notify users that there’s another version of the app available: “Please install it for security reasons,” or whatever you need to tell them. You can even use Google Cloud Messaging or Firebase messaging to notify them of the change.

Another fun thing is that many users don’t have auto-update enabled on their devices. This varies from region to region, and it also varies between users and their age groups. Older users tend not to update their apps, even if their phones are burning, because updates are evil: “I’m used to the version I had before.” This also poses a problem for you and your development cycle, because those users will not be able to use the application. And most users tend to report only, “It’s not working.” There’s no context. There’s no way you can help them, unless they’re informed of the changes that will have to happen.

Also, one thing to plan for in advance is the impact that the server setup has on your devices. If you go to the Qualys SSL Labs site and type in any URL from your website, you get a complete security overview of your site. If you get an A, A+, or A-, good job. If you have a lower grade, then you probably need to rethink your security strategy on the server.

Where it impacts Android is that raising the security level on your server will also cut off some devices. As you know, TLS 1.0 is obsolete and not to be used anymore, but dropping it will also eliminate the Android 2.3.7 devices that, believe me, are still in use. Clients are reluctant to eliminate them.

A good thing to do before you upgrade your security is to look at this site and see what kind of impact it will have on your users. It’s context aware, as most things are, so you need to find a compromise between the reach that you want to achieve and the security on the server.

Another thing that you can do, and this is the moment where Android has begun producing serious and good things, is use the platform to your advantage.

android:usesCleartextTraffic="false"

StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
        .detectCleartextNetwork()
        .penaltyLog()
        .build());

Starting from Android M, you have a one-liner that blocks all HTTP calls in your app. As simple as that, you can disable all cleartext traffic. You go through your app; it breaks somewhere, it’s not working, no data. Good: you forgot to take care of that one call, and now is the time to do it. You can also set the StrictMode policy shown above, but it’s not strictly necessary since you have the manifest attribute.

Another thing that has gotten more traction is biometrics, in the form of the fingerprint API. We finally have a standardized, unified API for fingerprint storage and reading. The most important part is that it’s sufficiently secure now. Samsung is not trying to do their own thing. HTC is not trying to do their own thing. We have the TEE (Trusted Execution Environment) requirements that must be complied with, and it’s okay now.
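As a rough sketch of that unified API, using the support library’s FingerprintManagerCompat; the cryptoObject, cancellationSignal, and callback variables are assumed to be set up elsewhere:

import android.support.v4.hardware.fingerprint.FingerprintManagerCompat;

// Only attempt fingerprint authentication when the hardware exists
// and the user has enrolled at least one fingerprint.
FingerprintManagerCompat fingerprintManager = FingerprintManagerCompat.from(context);
if (fingerprintManager.isHardwareDetected()
        && fingerprintManager.hasEnrolledFingerprints()) {
    fingerprintManager.authenticate(cryptoObject, 0, cancellationSignal, callback, null);
}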

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example",
    "sha256_cert_fingerprints": ["14:6D:E9:...44:E5"]
  }
}]

Starting from M, you also have the power to configure, for a certain domain, which app will open it. There’s a verified link between the app and the server, so your app is launched each time a link to that domain is opened. You place a JSON file with this configuration on the server (the Digital Asset Links file, assetlinks.json); the snippet above is all that’s needed to link the domain to your application.

Another thing: with the coming Android N, we have the network security configuration feature. All the stuff we did with certificate pinning, cleartext traffic, and the rest no longer needs to be done in code. You have one unified file which allows you to set all the security features, exclusions, and inclusions that you want for your app.

<?xml version="1.0" encoding="utf-8"?>
<manifest ... >
    <application android:networkSecurityConfig="@xml/network_security_config"
                 ... >
        ...
    </application>
</manifest>

The first thing you need to do with that is add it to the application tag.

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">example.com</domain>
        <trust-anchors>
            <certificates src="@raw/my_ca"/>
        </trust-anchors>
        <pin-set expiration="2018-01-01">
            <pin digest="SHA-256">7HIpa...BCoQYcRhJ3Y=</pin>
            <!-- backup pin -->
            <pin digest="SHA-256">fwza0...gO/04cDM1oE=</pin>
        </pin-set>
    </domain-config>
</network-security-config>

As far as configuration goes, here are a few things that can be done. You can determine whether you want to allow cleartext traffic; you don’t. You can define the trust anchors that you want in your app: again, you place your certificate in the app, but you don’t need to read it in code or handle the passwords and other things you needed to do before. You just specify that this certificate is the one that’s going to be used for communication. And you can use another set of tags for pins and their expiration, but that’s generally not a good idea.

So you have one configurable file that contains all the security information in one place. Another thing that’s important: you need to be an authority for your clients and lead by example. It’s your job to tell them how to improve the security of the application and of the whole process. You have to keep each other up to date: if they tell you that they plan to update their certificates in six months, that’s a good thing, because now you have six months to notify users that their apps will stop working. If there are security issues or patches to be applied, it’s your job to notify them immediately and to try to roll out a new version of the application with those things applied.

Things to take away from this presentation: if you need storage, use internal storage, but encrypt the data that you place in it. Use HTTPS and certificates; you can pin them, and you can now use the configuration file that holds all the security features you need for your application. And again, be aware of the update cycle, because breaking stuff for your end users is not cool: it damages your ego and it carries a message that you’re not doing your job right.

Generally, and specifically, Android is not secure. I’d say not as secure as the iOS platform. I have both phones, and I feel more confident when using applications on the iPhone. What’s important is that everything that you do in your application regarding security is just another deterrent for your malicious attacker.

You’re not creating a safe application, you’re just adding bits and pieces of rules that will make it harder for somebody to break it. It will make it harder for them to change stuff in your build, to intercept data, to read the data, and sniff the communication between the server and the application. We can make it better and we can make it less easy to abuse.

Best Practices in App Security: Resources

Gradle configuration
Storage Options
Coda Hale
Android cryptography tools for beginners
Android Security: Adding Tampering Detection to Your App
Qualys SSL Labs
Android Developer: Network Security Configuration
Ionic: MITM

See the discussion on Hacker News.


Best Practices in App Security
Ana Baotić

During college I got involved in e-learning projects where I developed in Java, and I ended up in love with it. After graduation I got a job at Infinum as an Android developer, and I have been working there ever since. By the end of 2014 I switched from various projects in the telecommunications, medical, and travel industries to mobile banking, where I became the Technical Manager of Mobile Banking in early 2016. My responsibilities include researching new technology and advancements in security, making sure new practices are implemented, and generally pushing things forward in the right direction. I love making quality, reliable products. Apart from the effort you put into the architecture of the project, I believe that the details define the difference between good and really amazing. We prefer amazing.


