Messages - kmfkewm

946
Security / Re: Plain text addresses
« on: June 18, 2013, 03:18 am »
http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/

Quote
The talk I gave last year on common crypto flaws still seems to generate comments. The majority of the discussion is by defenders of Javascript crypto. I made JS crypto a very minor part of the talk because I thought it would be obvious why it is a bad idea. Apparently, I was wrong to underestimate the grip it seems to have on web developers.

Rather than repeat the same rebuttals over and over, this is my final post on this subject. It ends with a challenge — if you have an application where Javascript crypto is more secure than traditional implementation approaches, post it in the comments. I’ll write a post citing you and explaining how you changed my mind. But since I expect this to be my last post on the matter, read this article carefully before posting.

To illustrate the problems with JS crypto, let’s use a simplified example application: a secure note-taker. The user writes notes to themselves that they can access from multiple computers. The notes will be encrypted by a random key, which is itself encrypted with a key derived from a passphrase. There are three implementation approaches we will consider: traditional client-side app, server-side app, and Javascript crypto. We will ignore attacks that are common to all three implementations (e.g., weak passphrase, client-side keylogger) and focus on their differences.

The traditional client-side approach offers the most security. For example, you could wrap PGP in a GUI with a notes field and store the encrypted files and key on the server. A client who is using the app is secure against future compromise of the server. However, they are still at risk of buggy or trojaned code each time they download the code. If they are concerned about this kind of attack, they can store a local copy and have a cryptographer audit it before using it.

The main advantage to this approach is that PGP has been around almost 20 years. It is well-tested and the GUI author is unlikely to make a mistake in interfacing with it (especially if using GPGME). The code is open-source and available for review.

If you don’t want to install client-side code, a less-secure approach is a server-side app accessed via a web browser. To take advantage of existing crypto code, we’ll use PGP again but the passphrase will be sent to it via HTTP and SSL. The server-side code en/decrypts the notes using GPGME and pipes the results to the user.

Compared to client-side code, there are a number of obvious weaknesses. The passphrase can be grabbed from the memory of the webserver process each time it is entered. The PGP code can be trojaned, possibly in a subtle way. The server’s /dev/urandom can be biased, weakening any keys generated there.

The most important difference from a client-side attack is that it takes effect immediately. An attacker who trojans a client app has to wait until users download and start using it. They can copy the ciphertext from the server, but it isn’t accessible until someone runs their trojan, exposing their passphrase or key. However, a server-side trojan takes effect immediately and all users who access their notes during this time period are compromised.

Another difference is that the password is exposed to a longer chain of software. With a client-side app, the passphrase is entered into the GUI app and passed over local IPC to PGP. It can be wiped from RAM after use, protected from being swapped to disk via mlock(), and generally remains under the user’s control. With the server-side app, it is entered into a web browser (which can cache it), sent over HTTPS (which involves trusting hundreds of CAs and a complex software stack), hits a webserver, and is finally passed over local IPC to PGP. A compromise of any component of that chain exposes the password.

The last difference is that the user cannot audit the server to see if an attack has occurred. With client-side code, the user can take charge of change management, refusing to update to new code until it can be audited. With a transport-level attack (e.g., sslstrip), there is nothing to audit after the fact.

The final implementation approach is Javascript crypto. The trust model is similar to server-side crypto except the code executes in the user’s browser instead of on the server. For our note-taker app, the browser would receive a JS crypto library over HTTPS. The first time it is used, it generates the user’s encryption key and encrypts it with the passphrase (say, derived via PBKDF2). This encrypted key is persisted on the server. The notes files are en/decrypted by the JS code before being sent to the server.

Javascript crypto has all the same disadvantages as server-side crypto, plus more. A slightly modified version of all the server-side attacks still works. Instead of trojaning the server app, an attacker can trojan the JS that is sent to the user. Any changes to the code immediately take effect for all active users. There’s the same long chain of software having access to critical data (JS code and the password processed by it).

So what additional problems make JS crypto worse than the server-side approach?

    Numerous libraries not maintained by cryptographers — With a little searching, I found: clipperz, etherhack, Titaniumcore, Dojo, crypto-js, jsSHA, jscryptolib, pidCrypt, van Everdingen’s library, and Movable Type’s AES. None of them written or maintained by cryptographers. One exception is Stanford SJCL, although that was written by grad students 6 months ago so it’s too soon to tell how actively tested/maintained it will be.
    New code, not properly reviewed, and no clear “best practices” for implementers — the oldest library I can find is 2 years old. Major platform-level questions still need to be resolved by even the better ones.
    Low-level primitives only — grab bag of AES, Serpent, RC4, and Caesar ciphers (yes, in same library). No high-level operations like GPGME. Now everyone can (and has to) be a crypto protocol designer.
    Browser is low-assurance environment — same-origin policy is not a replacement for ACLs, privilege separation, memory protection, mlock(), etc. JS DOM allows arbitrary eval on each element and language allows rebinding most operations (too much flexibility for crypto).
    Poor crypto support — JS has no secure PRNG such as /dev/urandom, and side-channel resistance is much more difficult, if not impossible
    Too many platforms — IE, Firefox, Netscape, Opera, WebKit, Konqueror, and all versions of each. Crypto code tends to fail catastrophically in the face of platform bugs.
    Auditability — each user is served a potentially differing copy of the code. Old code may be running due to browser cache issues. Impossible for server maintainers to audit clients.

JS crypto is not even better for client-side auditability. Since JS is quite lenient in allowing page elements to rebind DOM nodes, even “View Source” does not reveal the actual code running in the browser. You’re only as secure as the worst script run from a given page or any other pages it allows via document.domain.

I have only heard of one application of JS crypto that made sense, but it wasn’t from a security perspective. A web firm processes credit card numbers. For cost reasons, they wanted to avoid PCI audits of their webservers, but PCI required any server that handled plaintext credit card numbers to be audited. So, their webservers send a JS crypto app to the browser client to encrypt the credit card number with an RSA public key. The corresponding private key is accessible only to the backend database. So based on the wording of PCI, only the database server requires an audit.

Of course, this is a ludicrous argument from a security perspective. The webserver is a critical part of the chain of trust in protecting the credit card numbers. There are many subtle ways to trojan RSA encryption code to disclose the plaintext. To detect trojans, the web firm has a client machine that repeatedly downloads and checksums the JS code from each webserver. But an attacker can serve the original JS to that machine while sending trojaned code to other users.

While I agree this is a clever way to avoid PCI audits, it does not increase actual security in any way. It is still subject to the above drawbacks of JS crypto.

If you’ve read this article and still think JS crypto has security advantages over server-side crypto for some particular application, describe it in a comment below. But the burden of proof is on you to explain why the above list of drawbacks is addressed or not relevant to your system. Until then, I am certain JS crypto does not make security sense.

Just because something can be done doesn’t mean it should be.
Epilogue
Auditability of client-side Javascript

I had overstated the auditability of JS in the browser environment by saying the code was accessible via “View Source”. It turns out the browser environment is even more malleable than I first thought. There is no user-accessible menu that tells what code is actually executing on a given page since DOM events can cause rebinding of page elements, including your crypto code. Thanks to Thomas Ptacek for pointing this out. I updated the corresponding paragraph above.

JS libraries such as jQuery, Prototype, and YUI all have APIs for loading additional page elements, which can be HTML or JS. These elements can rebind DOM nodes, meaning each AJAX query can result in the code of a page changing, not just the data displayed. The APIs don’t make a special effort to filter out page elements, and instead trust that you know what you’re doing.

The same origin policy is the only protection against this modification. However, this policy is applied at the page level, not script level. So if any script on a given page sets document.domain to a “safe” value like “example.net”, this would still allow JS code served from “ads.example.net” to override your crypto code on “www.example.net”. Your page is only as secure as the worst script loaded from it.

Brendan Eich made an informative comment on how document.domain is not the worst issue, separation of privileges for cross-site scripts is:

    Scripts can be sourced cross-site, so you could get jacked without document.domain entering the picture just by <script src="evil.ads.com">. This threat is real but it is independent of document.domain and it doesn’t make document.domain more hazardous. It does not matter where the scripts come from. They need not come from ads.example.net — if http://www.example.net HTML loads them, they’re #include’d into http://www.example.net’s origin (whether it has been modified by document.domain or not).

    In other words, if you have communicating pages that set document.domain to join a common superdomain, they have to be as careful with cross-site scripts as a single page loaded from that superdomain would. This suggests that document.domain is not the problem — cross-site scripts having full rights is the problem. See my W2SP 2009 slides.

“Proof of work” systems

Daniel Franke suggested one potentially-useful application for JS crypto: “proof of work” systems. These systems require the client to compute some difficult function to increase the effort required to send spam, cause denial of service, or bruteforce passwords. While I agree this application would not be subject to the security flaws listed in this article, it would have other problems.

Javascript is many times slower than native code and much worse for crypto functions than general computation. This means the advantage an attacker has in creating a native C plus GPU execution environment will likely far outstrip any slowness legitimate users will accept. If the performance ratio between attacker and legitimate users is too great, Javascript can’t be used for this purpose.

He recognized this problem and also suggested two ways to address it: increase the difficulty of the work function only when an attack is going on or only for guesses with weak passphrases. The problem with the first is that an attacker can scale up their guessing rate until the server slows down and then stay just below that threshold. Additionally, she can parallelize guesses for multiple users, depending on what the server uses for rate-limiting. One problem with the second is that it adds a round-trip where the server has to see the length of the attacker’s guess before selecting a difficulty for the proof-of-work function. In general, it’s better to select a one-size-fits-all parameter than to try to dynamically scale.
Browser plugin can checksum JS crypto code

This idea helps my argument, not hurts it. If you can deploy a custom plugin to clients, why not run the crypto there? If it can access the host environment, it has a real PRNG, crypto library (Mozilla NSS or Microsoft CryptoAPI), etc. Because of Javascript’s dynamism, no one knows a secure way to verify signatures on all page elements and DOM updates, so a checksumming plugin would not live up to its promise.

947
Security / Re: Plain text addresses
« on: June 18, 2013, 02:34 am »
Quote
Quote
Privnote is not anywhere near as secure as GPG. For one they could backdoor the code just like you said. For two you need to transfer the URL in plaintext or encrypt it with GPG, opening it up to massive MITM potential.
The question is in regard to plain text addresses, buddy.  End-to-end encrypted within the Tor network.  This doesn't apply.

Lol are you kidding me? Why the fuck even use privnote if the end to end encryption of Tor is enough to protect you? Do you understand the difference between link encryption and message encryption? Just because Tor hidden services use end to end encrypted links doesn't mean that the message is encrypted when it is stored on the server. That is the reason why we use GPG, to add message encryption as well. So that when the message is sitting on the server it cannot be intercepted and decrypted. When a privnote link is sitting on the server it can be intercepted and the message can be obtained. So it is completely worthless for our purposes. Privnote symmetrically encrypts the message and then you hand out the symmetric key without using an asymmetric algorithm for session key transfer. I might as well AES encrypt a message to the vendor and send them the password to decrypt it along with the ciphertext. That is what privnote is doing. It isn't even the same thing as GPG, which is a hybrid cryptosystem. Privnote is a retarded implementation of a symmetric encryption algorithm that they are tricking idiots into using instead of an asymmetric-symmetric cryptosystem like GPG.

It would be nice if you had a basic understanding of the fundamentals of cryptography prior to trying to argue with me.

Quote
Quote
The fact that a message is deleted automatically doesn't mean jack shit since somebody who does MITM will just intercept , read, make a new message.
Yes, this is true.  It also has nothing to do with GPG since GPG doesn't make the same claim.

It has everything to do with GPG because if I send a vendor a message encrypted with GPG the attacker can not read my message but if I send the vendor a privnote message the attacker can read my message and then replace it such that the vendor never knows the message was read. That means that Privnote has 0 security, it accomplishes jack fucking shit. I send the vendor a link to a symmetrically encrypted message and in plaintext I send them the symmetric key, so obviously the encryption isn't helping a god damn thing. That means the security of the system entirely depends on privnote deleting messages after they are read once, so the vendor can tell if their message was intercepted, but oh the MITM attacker can just recreate the same exact message and send the vendor the new link. So Privnote accomplishes absolutely nothing at all, GPG accomplishes something.

Quote
Quote
GPG is for getting around issues like that, privnote doesn't do jack shit to solve the underlying issues.
Plain text addresses you bloody loon, not the entire fucking universe.

I don't even understand what the hell you are talking about here??


Quote
Quote
Not to mention it is written in javascript, which is hardly the ideal language for doing crypto shit in.
You're a fucking idiot.  Javascript to my knowledge is Turing complete.  Someone will fucking write your personality in it someday, I guarantee you.

Ho hum, can javascript even do constant time operations? Get a clue before making claims plz.


Quote
Quote
Not to mention you have not even looked at the code so how the hell are you to know if it is secure or not?
Again, Gibberish-AES.  Go look it up.

You are the one who said you never looked at the code in the first place. Anyway AES implemented in javascript is not likely to be very secure.

Quote
Quote
Oh not to mention when you use privnote you are weak to your Tor exit node sending you a bugged version of the javascript client.
Here, however, you're correct.  Which I stated initially -- no guarantee that the code will remain secure the moment before you encrypt.  Why are you trying to throw my own points back at me?

In short: if you want to call someone an idiot, go look in a mirror.


Your initial statement was that GPG and Privnote are of equal security, and I never called anyone an idiot but you did just clarify for me that you are one.

948
Security / Re: Plain text addresses
« on: June 18, 2013, 02:33 am »
continued ....

Quote
Imagine a system that involved your browser encrypting something, but filing away a copy of the plaintext and the key material with an unrelated third party on the Internet just for safekeeping. That's what this solution amounts to. You can't outsource random number generation in a cryptosystem; doing so outsources the security of the system.
What else is the Javascript runtime lacking for crypto implementors?

Two big ones are secure erase (Javascript is usually garbage collected, so secrets are lurking in memory potentially long after they're needed) and functions with known timing characteristics. Real crypto libraries are carefully studied and vetted to eliminate data-dependant code paths --- ensuring that one similarly-sized bucket of bits takes as long to process as any other --- because without that vetting, attackers can extract crypto keys from timing.
But other languages have the same problem!

That's true. But what's your point? We're not saying Javascript is a bad language. We're saying it doesn't work for crypto inside a browser.
But people rely on crypto in languages like Ruby and Java today. Are they doomed, too?

Some of them are; crypto is perilous.

But many of them aren't, because they can deploy countermeasures that Javascript can't. For instance, a web app developer can hook up a real CSPRNG from the operating system with an extension library, or call out to constant-time compare functions.

If Python was the standard browser content programming language, browser Python crypto would also be doomed.
What else is Javascript missing?

A secure keystore.
What's that?

A way to generate and store private keys that doesn't depend on an external trust anchor.
External what now?

It means, there's no way to store a key securely in Javascript that couldn't be expressed with the same fundamental degree of security by storing the key on someone else's server.
Wait, can't I generate a key and use it to secure things in HTML5 local storage? What's wrong with that?

That scheme is, at best, only as secure as the server that fed you the code you used to secure the key. You might as well just store the key on that server and ask for it later. For that matter, store your documents there, and keep the moving parts out of the browser.
These don't seem like earth-shattering problems. We're so close to having what we need in browsers, why not get to work on it?

Check back in 10 years when the majority of people aren't running browsers from 2008.
That's the same thing people say about web standards.

Compare downsides: using Arial as your typeface when you really wanted FF Meta, or coughing up a private key for a crypto operation.

We're not being entirely glib. Web standards advocates care about graceful degradation, the idea that a page should at least be legible even if the browser doesn't understand some advanced tag or CSS declaration.

"Graceful degradation" in cryptography would imply that the server could reliably identify which clients it could safely communicate with, and fall back to some acceptable substitute in cases where it couldn't. The former problem is unsolved even in the academic literature. The latter recalls the chicken-egg problem of web crypto: if you have an acceptable lowest-common-denominator solution, use that instead.
This is what you meant when you referred to the "crushing burden of the installed base"?

Yes.
And when you said "view-source transparency was illusory"?

We meant that you can't just look at a Javascript file and know that it's secure, even in the vanishingly unlikely event that you were a skilled cryptographer, because of all the reasons we just cited.
Nobody verifies the software they download before they run it. How could this be worse?

Nobody installs hundreds of applications every day. Nobody re-installs each application every time they run it. But that's what people are doing, without even realizing it, with web apps.

This is a big deal: it means attackers have many hundreds of opportunities to break web app crypto, where they might only have one or two opportunities to break a native application.
But people give their credit cards to hundreds of random people insecurely.

An attacker can exploit a flaw in a web app across tens or hundreds of thousands of users at one stroke. They can't get a hundred thousand credit card numbers on the street.
You're just not going to give an inch on this, are you?

Nobody would accept any of the problems we're dredging up here in a real cryptosystem. If SSL/TLS or PGP had just a few of these problems, it would be front-page news in the trade press.
You said Javascript crypto isn't a serious research area.

It isn't.
How much research do we really need? We'll just use AES and SHA256. Nobody's talking about inventing new cryptosystems.

AES is to "secure cryptosystems" what uranium oxide pellets are to "a working nuclear reactor". Ever read the story of the radioactive boy scout? He bought an old clock painted with radium and found a vial of radium paint inside. Using that and a strip of beryllium swiped from his high school chemistry lab, he built a radium gun that irradiated pitchblende. He was on his way to building a "working breeder reactor" before moon-suited EPA officials shut him down and turned his neighborhood into a Superfund site.

The risks in building cryptography directly out of AES and SHA routines are comparable. It is capital-H Hard to construct safe cryptosystems out of raw algorithms, which is why you generally want to use high-level constructs like PGP instead of low-level ones.
What about things like SJCL, the Stanford crypto library?

SJCL is great work, but you can't use it securely in a browser for all the reasons we've given in this document.

SJCL is also practically the only example of a trustworthy crypto library written in Javascript, and it's extremely young.

The authors of SJCL themselves say, "Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks." That last example is a killer: what they're really saying is, "we don't know enough about Javascript runtimes to know whether we can securely host cryptography on them". Again, that's painful-but-tolerable in a server-side application, where you can always call out to native code as a workaround. It's death to a browser.
Aren't you creating a self-fulfilling prophecy about Javascript crypto research?

People don't take Javascript crypto seriously because they can't get past things like "there's no secure way to key a cryptosystem" and "there's no reliably safe way to deliver the crypto code itself" and "there's practically no value to doing crypto in Javascript once you add SSL to the mix, which you have to do to deliver the code".
These may be real problems, but we're talking about making crypto available to everyone on the Internet. The rewards outweigh the risks!

DETROIT --- A man who became the subject of a book called "The Radioactive Boy Scout" after trying to build a nuclear reactor in a shed as a teenager has been charged with stealing 16 smoke detectors. Police say it was a possible effort to experiment with radioactive materials.

The world works the way it works, not the way we want it to work. It's one thing to point at the flaws that make it hard to do cryptography in Javascript and propose ways to solve them; it's quite a different thing to simply wish them away, which is exactly what you do when you deploy cryptography to end-users using their browser's Javascript runtime.

949
Security / Re: Plain text addresses
« on: June 18, 2013, 02:32 am »
Quote
If I may indulge in a little bit of back-and-forth, if you will: anyone who says a language isn't suited to something because of its inherent properties as a language is a fool who pays more attention to standards and stereotypes than reality and truth.  Any language that can accomplish something is a perfectly fine language to use for the task.

Try again: http://www.matasano.com/articles/javascript-cryptography/

Quote
What do you mean, "Javascript cryptography"?

We mean attempts to implement security features in browsers using cryptographic algorithms implemented in whole or in part in Javascript.

You may now be asking yourself, "What about Node.js? What about non-browser Javascript?". Non-browser Javascript cryptography is perilous, but not doomed. For the rest of this document, we're referring to browser Javascript when we discuss Javascript cryptography.
Why does browser cryptography matter?

The web hosts most of the world's new crypto functionality. A significant portion of that crypto has been implemented in Javascript, and is thus doomed. This is an issue worth discussing.
What are some examples of "doomed" browser cryptography?

You have a web application. People log in to it with usernames and passwords. You'd rather they didn't send their passwords in the clear, where attackers can capture them. You could use SSL/TLS to solve this problem, but that's expensive and complicated. So instead, you create a challenge-response protocol, where the application sends Javascript to user browsers that gets them to send HMAC-SHA1(password, nonce) to prove they know a password without ever transmitting the password.

Or, you have a different application, where users edit private notes stored on a server. You'd like to offer your users the feature of knowing that their notes can't be read by the server. So you generate an AES key for each note, send it to the user's browser to store locally, forget the key, and let the user wrap and unwrap their data.
What's wrong with these examples?

They will both fail to secure users.
Really? Why?

For several reasons, including the following:

    Secure delivery of Javascript to browsers is a chicken-egg problem.

    Browser Javascript is hostile to cryptography.

    The "view-source" transparency of Javascript is illusory.

    Until those problems are fixed, Javascript isn't a serious crypto research environment, and suffers for it.

What's the "chicken-egg problem" with delivering Javascript cryptography?

If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do.
That attack sounds complicated! Surely, you're better off with crypto than without it?

There are three misconceptions embedded in that common objection, all of them grave.

First, although the "hijack the crypto code to steal secrets" attack sounds complicated, it is in fact simple. Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted.

Second, the difficulty of an attack is irrelevant. What's relevant is how tractable the attack is. Cryptography deals in problems that are intractable even stipulating an attacker with as many advanced computers as there are atoms composing the planet we live on. On that scale, the difficulty of defeating a cryptosystem delivered over an insecure channel is indistinguishable from "so trivial as to be automatic". Further perspective: we live and work in an uncertain world in which any piece of software we rely on could be found vulnerable to new flaws at any time. But all those flaws require new R&D effort to discover. Relative to the difficulty of those attacks, against which the industry deploys hundreds of millions of dollars every year, the difficulties of breaking Javascript crypto remain imperceptibly different than "trivial".

Finally, the security value of a crypto measure that fails can easily fall below zero. The most obvious way that can happen is for impressive-sounding crypto terminology to convey a false sense of security. But there are worse ways; for instance, flaws in login crypto can allow attackers to log in without ever knowing a user's password, or can disclose one user's documents to another user.
Why can't I use TLS/SSL to deliver the Javascript crypto code?

You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography. Meanwhile, the Javascript crypto code is still imperiled by other browser problems.
What's hard about deploying Javascript over SSL/TLS?

You can't simply send a single Javascript file over SSL/TLS. You have to send all the page content over SSL/TLS. Otherwise, attackers will hijack the crypto code using the least-secure connection that builds the page.
How are browsers hostile to cryptography?

In a dispiriting variety of ways, among them:

    The prevalence of content-controlled code.

    The malleability of the Javascript runtime.

    The lack of systems programming primitives needed to implement crypto.

    The crushing weight of the installed base of users.

Each of these issues creates security gaps that are fatal to secure crypto. Attackers will exploit them to defeat systems that should otherwise be secure. There may be no way to address them without fixing browsers.
What do you mean by "content-controlled code"? Why is it a problem?

We mean that pages are built from multiple requests, some of them conveying Javascript directly, and some of them influencing Javascript using DOM tag attributes (such as "onmouseover").
Ok, then I'll just serve a cryptographic digest of my code from the same server so the code can verify itself.

This won't work.

Content-controlled code means you can't reason about the security of a piece of Javascript without considering every other piece of content that built the page that hosted it. A crypto routine that is completely sound by itself can be utterly insecure hosted on a page with a single, invisible DOM attribute that backdoors routines that the crypto depends on.

This isn't an abstract problem. It's an instance of "Javascript injection", better known to web developers as "cross-site scripting". Virtually every popular web application ever deployed has fallen victim to this problem, and few researchers would take the other side of a bet that most will again in the future.

Worse still, browsers cache both content and Javascript aggressively; caching is vital to web performance. Javascript crypto can't control the caching behavior of the whole browser with specificity, and for most applications it's infeasible to entirely disable caching. This means that unless you can create a "clean-room" environment for your crypto code to run in, pulling in no resource tainted by any other site resource (from layout to UX), you can't even know what version of the content you're looking at.
What's a "malleable runtime"? Why are they bad?

We mean you can change the way the environment works at runtime. And it's not bad; it's a fantastic property of a programming environment, particularly one used "in the small" like Javascript often is. But it's a real problem for crypto.

The problem with running crypto code in Javascript is that practically any function that the crypto depends on could be overridden silently by any piece of content used to build the hosting page. Crypto security could be undone early in the process (by generating bogus random numbers, or by tampering with constants and parameters used by algorithms), or later (by spiriting key material back to an attacker), or --- in the most likely scenario --- by bypassing the crypto entirely.

There is no reliable way for any piece of Javascript code to verify its execution environment. Javascript crypto code can't ask, "am I really dealing with a random number generator, or with some facsimile of one provided by an attacker?" And it certainly can't assert "nobody is allowed to do anything with this crypto secret except in ways that I, the author, approve of". These are two properties that often are provided in other environments that use crypto, and they're impossible in Javascript.
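The silent-override problem is easy to demonstrate in any dynamic language. Here is a minimal Python sketch of the attack shape (Python stands in for Javascript here, since both allow monkey-patching; the function names are illustrative):

```python
import secrets

# A "crypto" routine that trusts the runtime to supply its primitives.
def generate_key():
    return secrets.token_bytes(32)

# Any other code loaded into the same runtime can silently swap out the
# primitive the crypto depends on; the caller has no way to detect it.
secrets.token_bytes = lambda n: b"\x00" * n  # attacker-supplied stub

backdoored = generate_key()
print(backdoored == b"\x00" * 32)  # the "random" key is now fully predictable
```

In a browser, any script or DOM-contributed handler on the page can perform the equivalent override before the crypto code ever runs.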
Well then, couldn't I write a simple browser extension that would allow Javascript to verify itself?

You could. It's harder than it sounds, because you'd have to verify the entire runtime, including anything the DOM could contribute to it, but it is theoretically possible. But why would you ever do that? If you can write a runtime verifier extension, you can also do your crypto in the extension, and it'll be far safer and better.

"But", you're about to say, "I want my crypto to be flexible! I only want the bare minimum functionality in the extension!" This is a bad thing to want, because ninety-nine and five-more-nines percent of the crypto needed by web applications would be entirely served by a simple, well-specified cryptosystem: PGP.

The PGP cryptosystem is approaching two decades of continuous study. Just as all programs evolve towards a point where they can read email, and all languages contain a poorly-specified and buggy implementation of Lisp, most crypto code is at heart an inferior version of PGP. PGP sounds complicated, but there is no reason a browser-engine implementation would need to be (for instance, the web doesn't need all the keyring management, the "web of trust", or the key servers). At the same time, much of what makes PGP seem unwieldy is actually defending against specific, dangerous attacks.
You want my browser to have my PGP key?

Definitely not. It'd be nice if your browser could generate, store, and use its own PGP keys though.
What systems programming functionality does Javascript lack?

Here's a starting point: a secure random number generator.
How big a deal is the random number generator?

Virtually all cryptography depends on secure random number generators (crypto people call them CSPRNGs). In most schemes, the crypto keys themselves come from a CSPRNG. If your PRNG isn't CS, your scheme is no longer cryptographically secure; it is only as secure as the random number generator.
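The difference is easy to see in Python, which ships both a non-cryptographic PRNG (`random`) and a CSPRNG (`secrets`). A non-CS generator is fully determined by its seed, so an attacker who recovers the seed can replay every "random" key:

```python
import random
import secrets

# Non-CS PRNG: the entire output stream is a function of the seed,
# so the effective key space collapses to the seed space.
rng = random.Random(1234)            # e.g. a seed derived from the clock
key_a = rng.getrandbits(256)

attacker_rng = random.Random(1234)   # attacker replays the guessed seed
key_b = attacker_rng.getrandbits(256)
print(key_a == key_b)                # the "256-bit" key was never secret

# CSPRNG: draws from OS entropy, with no replayable seed.
key_c = secrets.randbits(256)
```

This is why the scheme "is only as secure as the random number generator": the strongest cipher cannot rescue a key drawn from a predictable source.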
But how easy is it to attack an insecure random generator, really?

It's actually hard to say, because in real cryptosystems, bad RNGs are a "hair on fire" problem solved by providing a real RNG. Some RNG schemes are pencil-and-paper solvable; others are "crackable", like an old DES crypt(3) password. It depends on the degree of badness you're willing to accept. But: no SSL system would accept any degree of RNG badness.
But I can get random numbers over the Internet and use them for my crypto!

How can you do that without SSL? And if you have SSL, why do you need Javascript crypto? Just use the SSL.
I'll use RANDOM.ORG. They support SSL.

"Javascript Cryptography. It's so bad, you'll consider making async HTTPS requests to RANDOM.ORG simply to fetch random numbers."

950
Security / Re: Plain text addresses
« on: June 18, 2013, 02:31 am »
Quote
... did I do something to offend you? 

No nothing to offend me, it is just annoying to see you constantly going on about things and acting like you fully comprehend them when you clearly don't while making as many excuses for yourself as possible ('oh I am so tired', 'oh I didn't even read the full thing') in case you got something wrong (which you seem to do a lot). How about you read the papers you comment on, read the articles you comment on, read the source code of the program you comment on, and then come to a conclusion. You thought the attack on the implementation of that password safe program was an attack that cut AES key strength in half when it was an attack on the PBKDF, you dismissed Zerocoin as being impossible without even reading the whitepaper, you spoke poorly of the people who wrote the HSDIR attack paper because you confused rendezvous points with introduction points, and now you are saying that Privnote offers security equal to GPG when it obviously doesn't. So no I am not offended I am just annoyed that you keep spouting off nonsense when you don't even take the slightest time to research what you are talking about. Once or twice or three times or four times I would let it pass, and I even edited my post in the HSDIR attack thread after originally calling you out, but you consistently engage in spouting off about technical things that you are not understanding. And you do it in the most condescending way half of the time, like the security researchers writing these papers are idiots who don't know what they are talking about, simultaneously with making excuses for yourself and admitting that you have not even researched what you are commenting on or even finished reading the papers.


Quote
Perhaps if you read 2 messages down you'd notice that I looked again and decided I was wrong.  I'm not entirely sure what your tone is about, friend, but I made a statement; decided I needed to verify my statement because I couldn't quite remember what led me to the conclusion I came to; did so; decided I was wrong; and corrected it.  Perhaps you'd like to tell me what I should have done, other than leaving a question completely unanswered while we all waited for you or astor to show up?

You could have said nothing at all if you didn't know what you were talking about. I don't even care if people get things wrong, I doubt anybody here is a professional level security expert, but you don't just say things that are wrong, you make definitive claims that are wrong. There is a big difference.

Quote
Privnote actually was -- at least at the time that I looked at it -- just as safe as PGP.

This is a false claim. If you had said

Quote
I think privnote is just as safe as PGP

I would have corrected you but not called you out. Hell even if you had just said this alone I wouldn't have cared, it is just a consistent theme I notice in your posts that has been consistently irritating me. It is okay to be wrong, it is annoying as hell when people consistently make definitive claims that are wrong, even more so when it is due to lack of research or even reading of the entire paper that they are commenting on.

951
Security / Re: Liberte Vs Tails Vs Ubuntu?
« on: June 18, 2013, 01:41 am »
I think TAILS supports persistent entry guards now, but it did not the last time I used it so I am not positive.

952
Security / Re: How secure is the (im)mature i2p Network?
« on: June 17, 2013, 02:18 pm »
Generally speaking, for standard, vanilla configurations.

I2P's biggest concern is long term intersection attacks. Client enumeration is very easy. If users are at all pseudonymous, the attacker can observe who is connected to the network during times they see traffic from the targeted pseudonym. People go offline and come back, sometimes days pass in the meantime. If Pseudonym Alice is always active when IP a.l.i.c is connected to the I2P network, and is never active when a.l.i.c is not connected to the I2P network, then the attacker can come to a pretty solid guess that a.l.i.c is Alice's IP address. Especially because after the attacker has come to this pretty good guess, they can do active attacks such as DDoS to confirm their suspicion. I mean, many of you probably don't go onto Tor for days at a time on occasion. If you don't go onto I2P for days at a time, people might notice that you are not posting. Then when you come back days later they will notice you are posting again, and they will also see the IP addresses that are part of the I2P network. They will likely correctly guess that you are the IP address that left the network when you stopped posting and joined the network right before you started posting. Of course even ignoring this, client enumeration is bad news for vendors. Since LE already knows where vendors ship from, they can therefore use client enumeration to narrow in on a likely very small set of IP addresses suspected of being the vendor. That in combination with a long term intersection attack will likely identify vendors very quickly. I2P is not really a good bet for us.
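At its core, the intersection attack is nothing more than set intersection over the attacker's observations. A toy sketch (the IP strings are placeholders):

```python
# Each time the pseudonym posts, the attacker snapshots which IPs are
# currently connected to the network (trivial when clients are enumerable).
# Intersecting the snapshots shrinks the suspect set toward one address.
observations = [
    {"a.l.i.c", "b.o.b.0", "c.a.r.l", "d.a.v.e"},  # pseudonym active
    {"a.l.i.c", "c.a.r.l", "e.v.e.0"},             # pseudonym active
    {"a.l.i.c", "b.o.b.0", "f.r.a.n"},             # pseudonym active
]

suspects = set.intersection(*observations)
print(suspects)  # only the IP present in every snapshot survives
```

With enough observations over a long enough period, the suspect set converges on a single IP, which is why the attack gets stronger the longer the pseudonym stays active.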

I2P's biggest advantage is also its biggest weakness imo. Path lengths are variable and all users route for all users. This means that from an internal attacker, you probably have some plausible deniability from timing attacks. If the peer you are using for hop 1 is owned by the attacker, and the Eepsite you are visiting is also owned by the attacker, they can definitely tell that you sent packets to the Eepsite, but they probably cannot say with certainty that the packets originated at you. For all they know you could have routed the packets on for somebody else. In the face of an external attacker you will not have this protection, but having some level of protection against even only internal timing attacks is a very nice feature.

Tor's biggest concern is timing attacks. If the attacker can watch your traffic enter the network and arrive at its destination, then you are pretty much fucked. This can happen if your entry guard is bad and your exit node is bad, it can happen if your entry guard is bad and the website you are visiting is bad, it can happen if your entry guard is bad and the website you are visiting is being externally monitored, it can happen if you are being externally monitored and your exit node is bad, it can happen if you are being externally monitored and the site you are visiting is being externally monitored, it can happen if you are being externally monitored and the HSDIR you connect to is bad, it can happen if you have a bad entry guard and the HSDIR you connect to is bad, it can happen if you have the same entry guard as the hidden service you are connecting to, it can happen if your entry guard is bad and the hidden service's introduction point is bad, it might be able to happen if your entry guard is bad and the final node you use while connecting to the HSDIR is bad, it might be able to happen if your entry guard is bad and the final node you use while connecting to the introduction point is bad, and I cannot even finish typing out more combinations of bad shit that could happen that could link you to the websites you are visiting because my hands are cramping up. Entry guards somewhat help alleviate this, but they rotate frequently enough that it is pretty much just a matter of time before somebody gets you with a timing attack if they want to badly enough.

Tor has the potential to be quite well protected from long term intersection attacks because of entry guards and the fact that most clients are not also routing nodes. This makes it much harder to enumerate the entire list of client IP addresses. Most attackers who could manage to do a long term intersection attack against Tor wouldn't even need to because they could do timing attacks against everybody and totally deanonymize the entire network. Of course unless you use bridges, and until directory guards are implemented, Tor connects directly to the authority servers to bootstrap if it has been offline for more than about 24 hours. This means that monitoring the directory authority servers works for client enumeration. Thankfully this is in the process of being corrected though.

953
kmfkewm +1 for your in depth explanation of quantum encryption and classical algorithmic encryption. If you have any more details or links where we can learn more please do share. This is riveting information.

You could read about Shor's algorithm and Grover's algorithm. Wikipedia is generally a great source for basic knowledge about cryptography, it is extremely superficial but it is great for giving you an idea of things to look up.

Additionally, symmetric algorithms such as AES are resistant to all known quantum attacks. The best quantum attack against symmetric algorithms is only capable of dividing their bit strength in half, giving AES-256 a key space of 2^128. This is indeed a big reduction in key space, but enough is preserved to maintain the cryptographic integrity of the algorithm.

Regarding AES-256 encryption [with Truecrypt], are you factoring in the salt with your calculations?

Grover's algorithm is a quantum based direct attack on symmetric encryption keys, not on their corresponding passwords. The strength of a symmetric algorithm is hard limited by the key space of the algorithm. Grover's algorithm cuts key strength in half, the quality of the password or PBKDF will have no effect on it.

Quote
"512-bit salt is used, which means there are 2512 keys for each password. This significantly decreases vulnerability to 'off-line' dictionary/'rainbow table' attacks"
http://www.truecrypt.org/docs/header-key-derivation

Will the salt really thwart brute force as it is alleged?

Truecrypt almost certainly is using something called a password based key derivation function, commonly referred to as a PBKDF. PBKDFs are used for turning a user's password into a symmetric encryption key. When you encrypt a file with an algorithm like AES-256, you must provide a key that is of the appropriate length. That is to say that you cannot directly use the password 'password' with AES-256, you need to convert the password into a 256 bit key. This could naively be done by using a hash algorithm, perhaps you take the SHA256 value of 'password', which is '6b3a55e0261b0304143f805a24924d0c1c44524821305f31d9277843b8a10f4e' (in hex), and which consists of 256 bits. PBKDFs do use hashing as their primitives, but they add at least two important features. The first feature they add is called salting. Salting adds some fixed randomness to your password prior to hashing it. You see, if your password is 'password', then you are weak to rainbow table attacks. A rainbow table attack involves the attacker taking sometimes terabytes worth of dictionary words / leaked passwords / common phrases / etc, and hashing all of them with a specific algorithm (generally with many algorithms, which is I believe where it gets its name from; it stores the SHA256 of the password, the MD5 of the password, the SHA512 of the password, etc). This takes a lot of computational power, but after it is done once the attacker no longer needs to compute the hash values again; now they can attempt to directly use the stored hash data as the symmetric encryption key until they find the correct one. Any half decent rainbow table will have the SHA256 value of 'password' stored in it. To protect from rainbow tables, the encryption program will generate a few random bytes of data, let's say 'j82opdl29e', and then it concatenates it to your password prior to hashing your password.
Therefore your password is now really 'j82opdl29epassword', but you only need to remember 'password' because the salt is handled by the encryption program (and generally stored in plaintext with the encrypted data; salts do not need to be secret). This means that your password now produces the key '718e7e73155913d6ab75a6d4a3a0e515f0c2056c25c98103f9d4f2dd8e661172', which will not likely be part of the attacker's rainbow table. Since everybody who uses the program has a different salt generated, the input password 'password' can now produce a wide variety of different output keys (for example, since Truecrypt uses a 512 bit salt, a rainbow table effective against it will be 2^512 times as large, and take 2^512 times as many operations to compute, as a rainbow table effective against an application that doesn't use any salt at all. Essentially this means that Truecrypt is immune to rainbow table attacks).
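A minimal sketch of the difference a salt makes, using SHA-256 directly for illustration (a real program should use a full PBKDF, not a bare hash):

```python
import hashlib
import os

def derive_key_naive(password: str) -> bytes:
    # No salt: identical passwords always yield identical keys,
    # so one precomputed (rainbow) table covers every user.
    return hashlib.sha256(password.encode()).digest()

def derive_key_salted(password: str, salt: bytes) -> bytes:
    # Salted: the same password yields a different key per user,
    # so a precomputed table would have to cover every possible salt.
    return hashlib.sha256(salt + password.encode()).digest()

salt_1, salt_2 = os.urandom(16), os.urandom(16)
print(derive_key_naive("password") == derive_key_naive("password"))
print(derive_key_salted("password", salt_1) == derive_key_salted("password", salt_2))
```

The first comparison prints True (unsalted keys collide across users); the second prints False (two random salts give two different keys from the same password).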

Another thing that PBKDFs do is iteratively hash a password. This is what protects from brute force attacks. If the attacker obtains your salt (which they can easily do since the salt isn't itself encrypted) then they can start the brute force attack with 'j82opdl29e' and start adding characters to it, perhaps starting at 'j82opdl29ea' and then going to 'j82opdl29eb' etc. Additionally they can try computational dictionary attacks directly (although the salt protects from rainbow table based attacks). Now it takes a very small amount of time to take the hash value of 'j82opdl29ea', and a very small amount of time to take the hash value of 'j82opdl29eb' etc. So instead the PBKDF will have a set number of iterations, probably in the thousands or tens of thousands. Without iterations the key corresponding to the password 'j82opdl29ea' is '19798b674de3fa0111d46315048a5b33893b347249af9dc9ba106af3eea9a824', assuming that SHA256 is used. With 10,000 iterations of hashing, as specified by the PBKDF, that hash is then hashed 10,000 additional times. For example

19798b674de3fa0111d46315048a5b33893b347249af9dc9ba106af3eea9a824 SHA256 = 68529cb832e34720b8be405233bca6d231a95766dacf36a66acc769bbab71daf SHA256 = 31a065cb81ba09fa9831b0700a904fad25b5e39d88160764c6b325dc61df614c etc....

This obviously will take ten thousand times longer for an attacker to compute. Usually this will work out to about a second or two to convert a password to a key (although it is entirely computationally bound), an amount of time that is not noticeable to a user with the correct password, but which adds up to a lot of time for an attacker who needs to repeatedly guess incorrect passwords. If it takes two seconds to do that many iterations, it takes you two seconds to obtain your key after correctly entering your password, but an attacker who attempts 100,000,000 different passwords before they get the correct one ends up spending over six years with an equal amount of computational power.
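Python's standard library exposes exactly this construction as PBKDF2; a minimal sketch using the example salt from the text (the iteration count is an illustrative value):

```python
import hashlib

password = b"password"
salt = b"j82opdl29e"  # the example salt from the text

# Naive single hash: cheap for the user, equally cheap for the attacker.
single = hashlib.sha256(salt + password).digest()

# Iterated derivation: every guess now costs the attacker ~100,000 hashes.
iterated = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)

print(len(iterated))  # 32 bytes: a 256 bit key, suitable for AES-256
```

The legitimate user pays the iteration cost once per unlock; the attacker pays it once per guess, which is the entire point of the design.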

So in summary PBKDFs can protect you, but they don't protect the encryption algorithm itself, and they don't protect the actual symmetric key from being brute forced (only the password used to derive the key). So against the quantum attack called Grover's algorithm, PBKDFs have absolutely no effect at all. The reason for this is that you can always try to obtain the encryption key without the password at all, although generally it is vastly more efficient to guess the password than it is to break the encryption key. For example, with AES-256 we know the encryption key is 256 bits. Nothing stops you from starting at

00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

and working your way up to

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

and everything in between. Of course this leaves you with 2^256 permutations of bits to try in order to exhaust the key space. It is much more likely that the bit pattern will map to a specific input password that is much easier to guess or brute force. PBKDFs try to strengthen the password, but they don't come anywhere near to giving most passwords 256 bit equivalent security. Grover's algorithm is not concerned with breaking the password to obtain the correct key, it is concerned with breaking the symmetric key directly. I don't really understand how it works in detail, but it works such that the number of bits you need to guess in order to form the correct encryption key is cut in half. So instead of going from

00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

to

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

and everything in between, you only need to go from

00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

to

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

and everything in between. In such an attack the attacker doesn't even care if they can map the bit sequence that the symmetric key consists of to a human readable password, because the only reason somebody tries to figure out the human readable password is so that they can derive the correct bit sequence from it. Grover's algorithm goes straight to brute forcing the correct bit sequence, it doesn't start at the password.

Thankfully even going from

00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
to
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
and everything in between

is not realistic. This means that 256 bit algorithms are not broken by Grover's algorithm. On the other hand, against 128 bit algorithms it turns into

0000000000000000000000000000000000000000000000000000000000000000
to
1111111111111111111111111111111111111111111111111111111111111111
and everything in between

which is possible to brute force. Therefore 128 bit symmetric algorithms are broken by Grover's algorithm.
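The arithmetic behind this can be stated in a couple of lines: Grover's algorithm reduces an exhaustive search over 2^n keys to on the order of sqrt(2^n) = 2^(n/2) operations, which is why the effective bit strength is halved:

```python
import math

# Grover's algorithm reduces an exhaustive key search from 2^n tries to
# roughly sqrt(2^n) = 2^(n/2) tries, i.e. it halves the effective bit strength.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

print(grover_effective_bits(256))  # AES-256 keeps 128-bit security: still infeasible
print(grover_effective_bits(128))  # AES-128 drops to 64-bit security: within reach

# Sanity check: a 2^256 search space shrinks to exactly 2^128 candidates.
print(math.isqrt(2**256) == 2**128)
```

This is the whole argument for preferring 256 bit symmetric keys in a post-quantum setting: halving 256 still leaves an infeasible 2^128 search, while halving 128 does not.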

954
Security / Re: Plain text addresses
« on: June 17, 2013, 10:03 am »
Why don't more people encrypt or at least privnote their addresses? Is there absolutely anything stopping LEAs/ "good" guys running a compromised exit node from reading your order list and adding all of the addresses to a watch list? Seems like an easy way to get nabbed to me.
Why would you think that privnote is safer than using the address field? Your address is automatically deleted from SR's servers too so why use a third party to do that for you? Privnote could keep a copy of all its messages for all we know.

Privnote actually was -- at least at the time that I looked at it -- just as safe as PGP.  Though I didn't go over it line by line or anything; still, the only real problem with it is that the code to encrypt stuff is downloaded on-the-fly from the server, and there's no guarantee that it hasn't changed since someone last looked at it.

By design, there will never be a guarantee that the code you download to encrypt the message wasn't changed the moment prior to you downloading it.  That's why it isn't secure, but to my knowledge it's the only reason.

SelfSovereignty please stop making claims about technical things that you very clearly don't have a clue about, it is getting extremely annoying. Privnote is not anywhere near as secure as GPG. For one they could backdoor the code just like you said. For two you need to transfer the URL in plaintext or encrypt it with GPG, opening it up to massive MITM potential. The fact that a message is deleted automatically doesn't mean jack shit since somebody who does MITM will just intercept, read, make a new message. GPG is for getting around issues like that, privnote doesn't do jack shit to solve the underlying issues. Not to mention it is written in javascript, which is hardly the ideal language for doing crypto shit in. Not to mention you have not even looked at the code so how the hell are you to know if it is secure or not? Oh not to mention when you use privnote you are weak to your Tor exit node sending you a bugged version of the javascript client.

955
As this clearly demonstrates, there are two distinct forms of thinking, verbal and visual. This can be clearly demonstrated in an n-back test that uses visual images. Imagine a computer screen that flashes images of various things, perhaps cartoon images of carrots, apples, cats, mice, televisions, radios and stars. An n-back test flashes such images to the subject for a limited period of time, and requires the subject to hit a button if the image currently being displayed is the same image that was displayed n images prior. A person can solve this task either with their visuospatial sketchpad or with their phonological loop. Somebody who solves this with their visuospatial sketchpad will imagine the sequence of flashed images in their mind n + 1 at a time, and compare the current object in their memory to the furthest object back, then after answering they will shift the images and repeat with the new input. A person who solves this problem with their phonological loop will remember the verbal labels representing the objects.

So in the 2-back sequence carrot, apple, carrot, television, star, television they will think like this

Round 1: Carrot (remember first), Apple (remember second)
Round 2: (Carrot?) Carrot (match, remember carrot first), (Apple?) television (no match, forget apple, remember television second)
Round 3: (Carrot?) Star (no match, forget carrot, remember star first), (Television?) television (match, remember television second)

A person who solves this problem visually could do it like this (imagine all text == the image it labels)

Round 1: carrot, apple, carrot (match identified, shift 1 to the left)
Round 2: apple, carrot, television (no match identified, shift 1 to the left)
Round 3: carrot, television, star (no match identified, shift 1 to the left)
round 4: television, star, television (match identified, shift 1 to the left)
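The visual "sliding window" strategy maps directly onto a small algorithm; a sketch of a 2-back checker over the example sequence:

```python
def two_back_matches(images):
    # Slide along the sequence, comparing each new item to the
    # item two positions back (the "furthest object" in the window).
    matches = []
    for i in range(2, len(images)):
        matches.append(images[i] == images[i - 2])
    return matches

sequence = ["carrot", "apple", "carrot", "television", "star", "television"]
print(two_back_matches(sequence))  # [True, False, False, True]
```

The two True entries correspond to the matches identified in rounds 1 and 4 of the visual walkthrough above.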


The same thing can be done with reverse digit span, in which a person is orally presented with a series of numbers and then asked to recite them backwards. A visual strategy to solve this problem is visualizing the numbers forward as they are recited, and then reading them in reverse:

1, 20, 30, 40 (read backwards from 40 to 1)

a verbal technique to solve this problem is looping the numbers verbally as they are presented forward:

1
1, 20
1, 20, 30
1, 20, 30, 40

and then going backwards:

1, 20, 30, say 40
1, 20, say 30
1, say 20
say 1
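The two strategies differ in what they hold in memory and how they traverse it; a rough sketch of each (the verbal version re-loops the sequence forward for every digit it speaks, mirroring the rehearsal described above):

```python
def reverse_span_visual(digits):
    # Visual strategy: hold the whole sequence at once and read it right-to-left.
    return list(reversed(digits))

def reverse_span_verbal(digits):
    # Verbal strategy: rehearse forward through the sequence each time,
    # speak the last digit not yet spoken, then shorten the loop by one.
    spoken = []
    remaining = len(digits)
    while remaining > 0:
        current = None
        for digit in digits[:remaining]:  # loop forward (rehearsal)
            current = digit
        spoken.append(current)            # "say" the last rehearsed digit
        remaining -= 1
    return spoken

seq = [1, 20, 30, 40]
print(reverse_span_visual(seq))  # [40, 30, 20, 1]
print(reverse_span_verbal(seq))  # [40, 30, 20, 1]
```

Both produce the same answer, but the verbal strategy does far more rehearsal work per digit, which matches the intuition that it is the slower of the two.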


yet another example is reciting the alphabet. When asked to say what comes after a certain letter, some people will verbally loop the alphabet forward to the letter + 1

a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p is after o

other people will visually recall the alphabet and simply look to the right of o, and say what they see, which is p.

In the first case the problem is solved with long term verbal recall + phonological looping, in the second case the problem is solved with long term visual recall + the visuospatial sketchpad.

So clearly there are vast differences between verbal and visual thinking, and it is possible to think in either.

956
A thing is a unit of thought (hence it's called thinking). How do we get a thing out of the continuous world? We name it. Thinking without language is not possible by definition. Are there other ways of conscious being and understanding the world? Of course.

So you don't think you can solve a navigational problem (how to get from point A to point B) without using words or language?

If you have already developed a language then the chances are that in your navigational exercise you would use a lot of language based terms to refer to various entities to do the job. Our brain does it so fast that you might think that you are not using language based terms at all.

As I mentioned in the OP, language based thinking is possible. Many people do that on a daily basis. Maybe certain types of activities/thinking are predominantly without any language, such as thoughts/actions needed to meet biological needs, etc. Babies do think without any language and so did humans before developing any language.

Thinking that revolves around ideas or interactions with others (of course in our head only) requires language. On the other hand it might be possible to have fantasies about someone where language may not be needed much; of course it depends on what you are fantasizing about :)

I do not think that thinking requires language. You can visually think complex things through without necessarily requiring any language at all. The simplest example is spatial navigation. However, even complex things like anonymity networks can be thought of in a purely visual sense.

I don't think you realize that in your thinking you use a lot of language based terms and symbols. Our brain does it quite fast and can fool you about it.

I think it is pretty well established that working visual memory is not the same thing as working verbal memory. The brain thinks with working memory, of which it has verbal and visual. The brain recalls data with long term memory, of which it also has verbal and visual. It is extremely widely accepted in psychology and neuroscience that humans can think in pictures and/or in language, and nobody in the field thinks that thought is limited entirely to language.

www.simplypsychology.org/working%20memory.html

Quote
by Saul McLeod, published 2008, updated 2012

Atkinson’s and Shiffrin’s (1968) multi-store model was extremely successful in terms of the amount of research it generated.

However, as a result of this research, it became apparent that there were a number of problems with their ideas concerning the characteristics of short-term memory.

Building on this research, Baddeley and Hitch (1974) developed an alternative model of short-term memory which they called working memory (see fig 1).

Baddeley and Hitch (1974) argue that the picture of short-term memory (STM) provided by the Multi-Store Model is far too simple.  According to the Multi-Store Model, STM holds limited amounts of information for short periods of time with relatively little processing.  It is a unitary system. This means it is a single system (or store) without any subsystems.  Working Memory is not a unitary store.


    Fig 1. The Working Memory Model (Baddeley and Hitch, 1974)

Working memory is STM. Instead of all information going into one single store, there are different systems for different types of information.  Working memory consists of a central executive which controls and co-ordinates the operation of two subsystems: the phonological loop and the visuo-spatial sketchpad.

Central Executive: Drives the whole system (e.g. the boss of working memory) and allocates data to the subsystems (VSS & PL). It also deals with cognitive tasks such as mental arithmetic and problem solving.

Visuo-Spatial Sketch Pad (inner eye): Stores and processes information in a visual or spatial form. The VSS is used for navigation.

The phonological loop is the part of working memory that deals with spoken and written material. It can be used to remember a phone number. It consists of two parts:

    o Phonological Store (inner ear) – Linked to speech perception. Holds information in speech-based form (i.e. spoken words) for 1-2 seconds.

    o Articulatory control process (inner voice) – Linked to speech production. Used to rehearse and store verbal information from the phonological store.


    Fig 2. The Working Memory Model Components (Baddeley and Hitch, 1974)

The labels given to the components (see fig 2) of working memory reflect their function and the type of information they process and manipulate. The phonological loop is assumed to be responsible for the manipulation of speech-based information, whereas the visuo-spatial sketchpad is assumed to be responsible for manipulating visual images. The model proposes that every component of working memory has a limited capacity, and also that the components are relatively independent of each other.

The Central Executive

The central executive is the most important component of the model, although little is known about how it functions.  It is responsible for monitoring and coordinating the operation of the slave systems (i.e. visuo-spatial sketch pad and phonological loop) and relates them to long term memory (LTM). The central executive decides which information is attended to and which parts of the working memory to send that information to be dealt with.

The central executive decides what working memory pays attention to. For example, two activities sometimes come into conflict such as driving a car and talking. Rather than hitting a cyclist who is wobbling all over the road, it is preferable to stop talking and concentrate on driving. The central executive directs attention and gives priority to particular activities.

The central executive is the most versatile and important component of the working memory system. However, despite its importance in the working-memory model, we know considerably less about this component than the two subsystems it controls.

Baddeley suggests that the central executive acts more like a system which controls attentional processes rather than as a memory store.  This is unlike the phonological loop and the visuo-spatial sketchpad, which are specialized storage systems. The central executive enables the working memory system to selectively attend to some stimuli and ignore others.

Baddeley (1986) uses the metaphor of a company boss to describe the way in which the central executive operates.  The company boss makes decisions about which issues deserve attention and which should be ignored.  They also select strategies for dealing with problems, but like any person in the company, the boss can only do a limited number of things at the same time. The boss of a company will collect information from a number of different sources.

If we continue applying this metaphor, then we can see the central executive in working memory integrating (i.e. combining) information from two assistants (the phonological loop and the visuo-spatial sketchpad) and also drawing on information held in a large database (long-term memory).

The Phonological Loop

The phonological loop is the part of working memory that deals with spoken and written material. It consists of two parts (see Figure 3).

The phonological store (linked to speech perception) acts as an inner ear and holds information in speech-based form (i.e. spoken words) for 1-2 seconds. Spoken words enter the store directly. Written words must first be converted into an articulatory (spoken) code before they can enter the phonological store.

The articulatory control process (linked to speech production) acts like an inner voice rehearsing information from the phonological store. It circulates information round and round like a tape loop. This is how we remember a telephone number we have just heard. As long as we keep repeating it, we can retain the information in working memory.

The articulatory control process also converts written material into an articulatory code and transfers it to the phonological store.


The Visuo-Spatial Sketchpad

The visuo-spatial sketchpad (inner eye) deals with visual and spatial information. Visual information refers to what things look like. It is likely that the visuo-spatial sketchpad plays an important role in helping us keep track of where we are in relation to other objects as we move through our environment (Baddeley, 1997).

As we move around, our position in relation to objects is constantly changing and it is important that we can update this information.  For example, being aware of where we are in relation to desks, chairs and tables when we are walking around a classroom means that we don't bump into things too often!

The sketchpad also displays and manipulates visual and spatial information held in long-term memory. For example, the spatial layout of your house is held in LTM. Try answering this question: How many windows are there in the front of your house?  You probably find yourself picturing the front of your house and counting the windows. An image has been retrieved from LTM and pictured on the sketchpad.

Evidence suggests that working memory uses two different systems for dealing with visual and verbal information. A visual processing task and a verbal processing task can be performed at the same time. It is more difficult to perform two visual tasks at the same time because they interfere with each other and performance is reduced. The same applies to performing two verbal tasks at the same time. This supports the view that the phonological loop and the sketchpad are separate systems within working memory.

Empirical Evidence for the Working Memory Model

What evidence is there that working memory exists, that it is made up of a number of parts, and that it performs a number of different tasks?

The working memory model makes the following two predictions:

    1. If two tasks make use of the same component (of working memory), they cannot be performed successfully together.

    2. If two tasks make use of different components, it should be possible to perform them as well together as separately.

Key Study: Baddeley and Hitch (1976)

Aim: To investigate if participants can use different parts of working memory at the same time.

Method: Conducted an experiment in which participants were asked to perform two tasks at the same time (dual task technique) - a digit span task which required them to repeat a list of numbers, and a verbal reasoning task which required them to answer true or false to various questions (e.g. B is followed by A?).

Results: As the number of digits increased in the digit span task, participants took longer to answer the reasoning questions, but not much longer - only fractions of a second. And they didn't make any more errors in the verbal reasoning task as the number of digits increased.

Conclusion: The verbal reasoning task made use of the central executive while the digit span task made use of the phonological loop. Because the two tasks used different components of working memory, they could be performed together with little interference.

Update on the Working Memory Model - The Episodic Buffer

The original model was updated by Baddeley (2000) after the model failed to explain the results of various experiments. An additional component was added called the episodic buffer. The episodic buffer acts as a 'backup' store which communicates with both long term memory and the components of working memory.

Fig 3. Updated Model to include the Episodic Buffer

Evaluation of Working Memory

Strengths

Researchers today generally agree that short-term memory is made up of a number of components or subsystems. The working memory model has replaced the idea of a unitary (one part) STM as suggested by the multistore model.

The working memory model explains a lot more than the multistore model. It makes sense of a range of tasks - verbal reasoning, comprehension, reading, problem solving and visual and spatial processing. And the model is supported by considerable experimental evidence.

The working memory model applies to real-life tasks:

    - reading (phonological loop)

    - problem solving (central executive)

    - navigation (visual and spatial processing)

The KF Case Study supports the Working Memory Model. KF suffered brain damage from a motorcycle accident that damaged his short-term memory. KF's impairment was mainly for verbal information - his memory for visual information was largely unaffected. This shows that there are separate STM components for visual information (VSS) and verbal information (phonological loop).

Working memory is supported by dual task studies (Baddeley and Hitch, 1976).

The working memory model does not overemphasize the importance of rehearsal for STM retention, in contrast to the multi-store model.

Weaknesses

Lieberman criticizes the working memory model because the visuo-spatial sketchpad (VSS) implies that all spatial information is first visual (the two are linked). However, Lieberman points out that blind people have excellent spatial awareness despite never having had any visual information. Lieberman argues that the VSS should be separated into two different components: one for visual information and one for spatial information.

There is little direct evidence for how the central executive works and what it does. The capacity of the central executive has never been measured.

Working memory only involves STM so it is not a comprehensive model of memory (as it does not include SM or LTM).

The working memory model does not explain changes in processing ability that occur as the result of practice or time.


957
Security / Re: What does an ISP "see" when you use Tor?
« on: June 16, 2013, 03:11 pm »
Can my ISP see what onion/clearnet websites I'm browsing while using Tor Browser?

Tor tries to prevent your ISP from determining what onion/clearnet websites you are browsing. If it accomplishes its goal, then the answer to your question is no.

958
Quote
While thought without language certainly occurs, it follows the same inherent system from which language originates. Our mind uses images as placeholders instead of names; space may follow different rules in our brain, but ultimately it perpetuates from the same physical framework (neural network) from which language originates. The distinction is trivial at best.

I don't think it is all that trivial. Different thought-based tasks are optimized for different sorts of thinking. Thinking in pictures is the best way to navigate through space; it can be done with words as well, but it is horribly inefficient. Drawing something you have seen before is much easier if you think of it as a picture rather than trying to encode it as a series of words that describe it. Somebody who only thinks in words ends up using words as placeholders instead of images. If you see a painting you may encode it as 'A painting of a turtle, done with oil paint, the turtle is surrounded by a bunch of grass, it has a green shell with little flecks of color on it, I can see the sky in the background, etc.' but wouldn't it be more efficient to just visually recall the painting? In this case you are using words as a placeholder for the image. I once heard somebody say that every picture is worth 1,000 words but not every 1,000 words has a corresponding picture. I think this is very true. In some cases being able to think of something as a picture can be vastly more efficient than thinking of it as words, but some things just cannot really be thought of as pictures, whereas all pictures can be thought of as words.

Quote
but ultimately it perpetuates from the same physical framework (neural network) from which language originates

This is not entirely true either. Language-based thinking and visuospatial thinking originate from different neural networks within the brain. When somebody is thinking in sign language or written text, though, the line blurs: if they have lesions that completely remove their working visual memory, it seems they can no longer think in language despite having functional language-processing neurons. I once read about a person with brain lesions who could remember how to write text and could see text, but was incapable of reading the text he saw, even text he had already written himself. This was caused by an accident that severed the language-processing part of his brain from the visuospatial-processing part; both worked independently, but they could not communicate with each other.

959
Security / Re: I worried I'm not using PGP Keys correctly
« on: June 16, 2013, 01:27 pm »
That is not doing anything to hide your address. Read one of the threads on using GPG in this subforum. Also that GPG key looks pretty damn small.

960
A thing is a unit of thought (hence it's called thinking). How do we get a thing out of the continuous world? We name it. Thinking without language is not possible by definition. Are there other ways of conscious being and understanding the world? Of course.

So you don't think you can solve a navigational problem (how to get from point A to point B) without using words or language?

If you have already developed a language, then the chances are that in your navigational exercise you would use a lot of language-based terms to refer to various entities in order to do the job. Our brain does it so fast that you might think you are not using language-based terms at all.

As I mentioned in the OP, language-based thinking is possible. Many people do it on a daily basis. Maybe certain types of activities/thinking are predominantly without any language, such as thoughts/actions needed to meet biological needs, etc. Babies think without any language, and so did humans before developing any language.

Thinking that revolves around ideas or interactions with others (in our head only, of course) requires language. On the other hand, it might be possible to have fantasies about someone where language is not needed much; of course, it depends on what you are fantasizing about :)

I do not think that thinking requires language. You can visually think complex things through without necessarily requiring any language at all. The simplest example is spatial navigation. However, even complex things like anonymity networks can be thought of in a purely visual sense.
