
Part 70: Application Roles

October 26, 2012

I continue to creep forward on my project, in the two hours or so a day I feel well enough to work on it. Since I still have nothing to show you, I thought I'd write some more about security.

Last week I argued that we're doing security wrong -- we're authenticating the user and assuming he knows what programs he's running. We're also allowing programs to do pretty much anything they like, under the assumption that the user knew what the program was going to do when he ran it.

I also argued that since users are clueless, the industry is moving towards a white-list approach, where only approved programs sold on App Stores are allowed. This hurts innovation, since the store policies are so restrictive. It also hurts independent developers who can't navigate a store approval process for every demo or update. And it allows the OS vendors to charge a 30% toll to put an app on their website.

Some of the comments argued that this is an unsolvable problem. No AV software or other OS feature can know the intent of a program, and so we can never automatically detect all malicious software. Some readers just didn't see the harm, since there are still ways to release code without using App Stores.

I'd like to build an extensible MMO where people can write their own games in a scripting language. In fact, I'd like to support a real programming language, so that existing tools can be ported into the system, letting people do cooperative work inside the MMO. I can't do that without a reasonable approach to security. If scripts in the world can damage your machine, it won't be safe to use the system.

Roles

We can't analyze a program in software and decide whether it's dangerous. We don't have a complete model of "dangerous", for one thing, and the analysis would be hugely complicated. Instead, I think we want to define an "application role" that is both capable and safe. That safe set of capabilities should be as large as possible, but of course it can't cover everything a useful program might need to do.

Web browsers already run scripts in this style. JavaScript, Java applets, Silverlight, and Flash all expose a set of functions considered harmless. Of course, bugs and exploits in the implementations are found every so often, but they are patched. The ideal is that no program running in one of these environments can damage your machine.

Consider the kinds of damage a program could do:

  1. A script could inject executable code and run it, giving it control of your system.

  2. It could write to your files for various reasons. Botnet software could be installed, advertising could be added to the browser, etc. I read about one case where a hacker encrypted a company's files and demanded ransom. Purely malicious code could wreck your system or corrupt files.

  3. The program could read private files and send them over the net, including raiding address books, bank account information, passwords, etc.

  4. Badly written code could waste system resources by running infinite loops or creating huge numbers of files.

  5. Badly written (or malicious) code could crash system services like the window system, forcing you to reboot.

A good virtual machine solves some of these problems. I'm surprised by the designs already out there. Programmers seem to make the instruction set rich, presumably so that the language compiler doesn't have to do much. But the higher level the instruction set, the harder the VM code is going to be to check. I'd make the instruction set as simple as possible, so that you could prove the implementation secure.
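
To make that concrete, here's a sketch of the flavor of VM I have in mind. The opcodes and layout are made up for illustration; the point is just that an interpreter this small can be audited line by line, with every memory access and jump bounds-checked, plus a hard step limit for damage item #4 (runaway loops).

    # A deliberately tiny stack VM -- hypothetical opcodes, for illustration only.
    PUSH, ADD, STORE, LOAD, JMPZ, HALT = range(6)

    def run(code, memory_size=256, max_steps=100_000):
        stack, memory, pc = [], [0] * memory_size, 0
        for _ in range(max_steps):               # hard step limit: no runaway loops
            op = code[pc]; pc += 1
            if op == PUSH:                       # push the next code word
                stack.append(code[pc]); pc += 1
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op in (STORE, LOAD):            # every memory access bounds-checked
                addr = stack.pop()
                if not 0 <= addr < memory_size:
                    raise RuntimeError("memory access out of bounds")
                if op == STORE:
                    memory[addr] = stack.pop()
                else:
                    stack.append(memory[addr])
            elif op == JMPZ:                     # jump targets checked as well
                target = code[pc]; pc += 1
                if not 0 <= target < len(code):
                    raise RuntimeError("jump out of bounds")
                if stack.pop() == 0:
                    pc = target
            elif op == HALT:
                return memory
            else:
                raise RuntimeError("illegal opcode")
        raise RuntimeError("step limit exceeded")

    # e.g. run([PUSH, 7, PUSH, 0, STORE, HALT]) stores 7 at address 0 and halts.

A richer instruction set would make the compiler's job easier, but every added opcode is another place for a checking bug to hide.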

The big problem is items #2 and #3. Access to the file system would seem to make security impossible. The browser scripting languages just don't allow access. Is there a way for scripts to be safe and still work with user files? If not, we can't have anything like general-purpose applications in the MMO.

You might wonder why this is even a requirement. As I said, I'd like this to be a platform for real application development, but even just as a game, it would be nice to have capable apps. I'd like users to be able to improve on the avatar editor, or write a building-generating app. Those can't be written without file system access.

Accessing Files

If the app had a virtual file system, it could read and write files there without any real danger. We'd have to make sure the app couldn't fill up the disk, but that's about it. Without the ability to reach into the user file system, the app can't export sensitive information or corrupt your system.

For the kind of application I just mentioned -- an avatar editor for example -- this might be enough. The user would copy an avatar file into the virtual directory, work on it with the app, then upload and publish it. Running in a virtual machine with virtual files, the app can do useful things, have persistent state, and still not be dangerous.
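
As a sketch of what that might look like -- the class name, layout, and quota figure here are mine, purely for illustration:

    import os

    class VirtualFS:
        """All app files live under one sandbox directory, with a byte quota."""

        def __init__(self, root, quota_bytes=50 * 1024 * 1024):
            self.root = os.path.realpath(root)
            self.quota = quota_bytes
            os.makedirs(self.root, exist_ok=True)

        def _resolve(self, name):
            # Resolve the virtual name inside the sandbox; reject any escape
            # via "..", absolute paths, or symlinks.
            path = os.path.realpath(os.path.join(self.root, name))
            if path != self.root and not path.startswith(self.root + os.sep):
                raise PermissionError("path escapes the virtual file system")
            return path

        def _used(self):
            return sum(os.path.getsize(os.path.join(d, f))
                       for d, _, files in os.walk(self.root) for f in files)

        def write(self, name, data):
            # Crude quota check (ignores overwrites) so the app can't fill the disk.
            if self._used() + len(data) > self.quota:
                raise OSError("virtual disk quota exceeded")
            with open(self._resolve(name), "wb") as f:
                f.write(data)

        def read(self, name):
            with open(self._resolve(name), "rb") as f:
                return f.read()

The check in _resolve is the whole game: without resolving ".." and symlinks first, a name like "../../secrets.txt" would walk straight out of the sandbox.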

To make this more general, we could use links. An entry in the virtual file system is an opaque link into the real file system. The app can read and write through the link, but since it never sees the real name, it can't use it as a starting point for traversing your user directories. It still can't open an arbitrary directory (like "C:/Windows/..."), so it can't do damage.

Creating links would be a nuisance for users, but we can solve this problem by adding a file dialog operation to the virtual machine. The (insecure) app opens the secure file dialog, and the result is not a file name, but a link to some file the user has explicitly pointed to. The user could still do something stupid, but otherwise, this is both capable and fairly safe.
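
In code, the link table might look something like this. None of it is a real API; ask_user_for_file() is just a stand-in for the trusted native dialog:

    import uuid

    class LinkTable:
        def __init__(self):
            self._links = {}    # token -> real path; only trusted code sees this

        def open_file_dialog(self):
            path = ask_user_for_file()   # trusted UI, running outside the VM
            token = uuid.uuid4().hex     # opaque handle handed back to the app
            self._links[token] = path
            return token

        # The only file operations exposed to the untrusted app:
        def read(self, token):
            with open(self._links[token], "rb") as f:
                return f.read()

        def write(self, token, data):
            with open(self._links[token], "wb") as f:
                f.write(data)

    def ask_user_for_file():
        # Stand-in: a real system would pop up a native dialog here.
        return input("Grant the app access to which file? ")

Since the token is random, the app can't forge a link to a file the user never chose, and it can't derive neighboring paths from it.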

Unfortunately, this doesn't quite work. If I have a tool that maintains a set of files (like the components of a project), or one that converts one file type into another (".cpp" files into ".o" files), it needs to be able to create files based on the name of the original file.

I think there are a limited number of operations that would suffice. Turning ".cpp" into ".o" and creating the file is manageable. I don't see that the app can do any damage with that. Creating relative file names downward ("project/myfile.o" when the app has a link to "project") also seems harmless. We just have to keep it from climbing to the parent of a directory given to it by the user. We might also want to keep the file dialog from ever working with OS directories.
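
Those rules are simple enough to state in code. A sketch, with the extension list and function names invented for illustration:

    import os

    FORBIDDEN_EXTENSIONS = {".exe", ".dll", ".bat", ".com", ".scr"}

    def _check_extension(name):
        if os.path.splitext(name)[1].lower() in FORBIDDEN_EXTENSIONS:
            raise PermissionError("refusing to create an executable file type")

    def derive_sibling(real_path, new_ext):
        # Rule 1: swap the extension on a linked file, e.g. ".cpp" -> ".o".
        base, _ = os.path.splitext(real_path)
        _check_extension(base + new_ext)
        return base + new_ext

    def derive_below(real_dir, relative_name):
        # Rule 2: create names strictly below a linked directory,
        # e.g. "project/myfile.o" from a link to "project".
        _check_extension(relative_name)
        real_dir = os.path.realpath(real_dir)
        target = os.path.realpath(os.path.join(real_dir, relative_name))
        if not target.startswith(real_dir + os.sep):
            raise PermissionError("path climbs out of the linked directory")
        return target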

When I first thought of doing security this way, I was fairly excited. The general concept was to run the app in a tight virtual machine, and also break the app into trusted and untrusted parts. The trusted parts like "file dialog" are not part of the VM or shared libraries. Instead, they are independent services called by the app via message passing. With this kind of approach, your VM would come with a few trusted services, and apps could all be untrusted.
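
A sketch of that split, with the broker and service names made up. The app never links against the trusted code; it can only send a request message and wait for the reply:

    import queue, threading

    class ServiceBroker:
        def __init__(self):
            self.requests = queue.Queue()
            self.services = {}            # name -> handler; all trusted code

        def register(self, name, handler):
            self.services[name] = handler

        def call(self, service, payload):
            # The only door out of the sandbox for the untrusted app.
            reply = queue.Queue(maxsize=1)
            self.requests.put((service, payload, reply))
            return reply.get()

        def serve_forever(self):
            while True:
                service, payload, reply = self.requests.get()
                handler = self.services.get(service)
                reply.put(handler(payload) if handler else {"error": "no such service"})

    # Trusted side: register a "file_dialog" service and start the broker.
    broker = ServiceBroker()
    broker.register("file_dialog", lambda payload: {"link": "opaque-token-123"})
    threading.Thread(target=broker.serve_forever, daemon=True).start()

    # Untrusted side: this call is all the app can do.
    print(broker.call("file_dialog", {}))     # -> {'link': 'opaque-token-123'}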

Flaws

Unfortunately, there are some fundamental flaws with this approach.

No matter what you do in the security architecture, "phishing" and "social engineering" attacks are still going to be possible. There's nothing to prevent an app from popping up a fake "give your password to authenticate" dialog and stealing your password. Most users would just go ahead and answer it, since systems pop up dialogs like that all the time, and users rarely know exactly why.

I think Java applets tried to solve this problem for a while by putting a warning line at the bottom of all applet-created windows. But that also kept apps from creating drop-down menus and so on, and I haven't seen it in a long time. I don't know that there is any answer to this problem.

My "secure components" approach also doesn't generalize very well to other services than the file system. For example, what if we want apps to be able to do OpenGL calls? Even if we emulated the entire OpenGL API inside a service, checking all the calls, what would we do about shaders? Those are arbitrary programs, and there's just no way for an analysis to tell if they are defective or malicious.

The same problem occurs with other services. A file system is relatively simple, and I can decide what is safe. But what are all the safe operations on a mail server or database? And there's no such thing as a safe connection to the internet. Once the app can talk to the outside world, it can do all kinds of dangerous things. An app could be a perfectly functioning email program, but also forward your mail passwords to a hacker's site or send spam along with your mail.

Finally, there's the problem that my secure environment, if I ever wrote one, is not the whole world. An evil app could write an ".exe" file to your machine. It can't run the file from within the VM, but it can create one. Then the file is just sitting there like a time bomb waiting for a naive user to run it. The app could even try to tell you it was "needed for post-processing the avatar description files. Just run it... it's harmless!"

A more subtle app could create files that will be used by other programs. For example, I think there was a Microsoft Office macro exploit at one time. An evil programmer could take the OpenOffice package and port it into the MMO -- with one change. He could make an EvilOffice program that added a virus to every file it edited. This virus couldn't act inside the MMO (since the VM is perfect...), but would spread as these edited documents were exchanged outside the MMO.

We can run an app in the virtual machine and keep it harmless as long as it can't talk to the outside world (which makes it worthless.) We can let it talk to the user, which allows "phishing" attacks. If we open up the file system to it, even in a controlled manner, an evil app can potentially use any other app as a conduit for corrupting your system.

Risk and Reward

The web browsers have to deal with these exact problems, since they support scripting. I'm actually surprised there aren't more exploits, since all the scripting-language virtual machines are very complicated. It seems unlikely that they are free of significant bugs that would allow malware.

In the browser, the whole point of a script is to present data to the user, so they've had to take the risk of phishing and social engineering attacks. I assume none of the languages allow access to the file system or arbitrary web sites. WebGL can run shaders, which would seem to allow scripts to lock up your display, if not crash your machine.

I think for my MMO, it's reasonable to have a completely virtual file system that cannot reach the user file system. My "dialog plus link" approach might be secure enough in practice to write useful large applications. I would have to keep the system from writing executable file types, or working with system directories. If it can only create files for use within the MMO, that limits the damage it can do.

I would like to make the MMO a new platform for cooperative program development, but I don't know how to make that safe. The more powerful the applications get and the more they can interact with the rest of the system, the more potential they have for malware.

At this point, it's all about reputation. If a game were known to spread viruses, with no way to stop it, people would not use it. I'm not sure anyone would trust it again after a major exploit.

What to Do?

I really don't know what to think about this whole situation. On the one hand, it seems possible for malware to do a lot of damage. I read about large botnets and how little AV software can keep up with them. The governments of the U.S. and Iran are even using malware as a weapon now. There are constant calls for more "cybersecurity" spending. We'll end up with government controlling the net at the OS vendor and ISP level if we keep going this way.

On the other hand, although there are attacks big enough to make the news now and then, I don't personally see much malware other than spam email. I've only gotten one bad virus that I can remember in the last twenty years. So perhaps the whole problem is overrated. Programmers, even the malicious ones, just want your money. They don't have any incentive to destroy your system and steal your data.

If we can't trust code and can't detect bad code automatically, we need to use white lists. A black list (as in anti-virus software) just doesn't work when anyone can create new apps and distribute them as easily as web pages.

However, a true white list has enormous costs. You have to identify and sign all programs, so they can be put on the list and recognized when you run them. You pretty much have to identify all the programmers too, and get rid of anonymity on the net. Someone has to inspect the program, at least trivially. Someone has to be in charge of maintaining the list, and have the authority to restrict programs. There are legal consequences to that role, which is why the stores restrict content based on "obscenity". And all of this costs money, so you will have to pay to distribute programs.

Yes, the user can opt out of all of this and run what he likes (on some platforms.) But whenever I run something off the net now, or download some new utility, I get nervous. I should really Google it to make sure it's not malware. Assuming anyone has noticed that it's malware! When you run an untrusted program, you are risking all of your data and a huge chunk of your time (if it corrupts your system.) The more I use my computer for everything in my life, the more hesitant I am to risk my system.

If anyone knows of new security work that looks promising, please point me to it. I would really like a better solution to this problem.

Bugs

The tracking image this week is a picture of bugs -- what I get when I write code without enough sleep. Actually, it's ants attacking an apple slice in my garbage. They find food very quickly!

