Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - kmfkewm

Pages: 1 ... 8 9 [10] 11 12 ... 249
136
Philosophy, Economics and Justice / Re: Why I abandoned Libertarianism
« on: September 08, 2013, 10:11 am »
"Let's play monopoly, its a game of skill and some chance, where the goal is to acquire property and money.....oh, by the way, the way I like playing it, I start with all the money and property. Other than that its basically a free market. So presumably the most deserving will end up owning most of the board. Oh also, we don't use the rules in the box, how we play is the person with the most property and money makes up the rules as they go along. And if you pull a 'super tax' card you don't have to pay, why should men with guns extort money from you?"

The thing is, we have been playing the same game of Monopoly since the start of life. You just joined the game late. Your ideology says that every now and then we take all the money back and all the property back and start the game all over from scratch.

137
Philosophy, Economics and Justice / Re: Why I abandoned Libertarianism
« on: September 08, 2013, 10:09 am »
Quote
How will the WTLA deal with the distribution of agricultural land and the mineral wealth contained within? Also all the wealth, factories, machinery, and other means of production. All of these are currently mostly in private hands. Since the default position will be free market capitalism, the current owners will surely begin this glorious golden dawn with something of an advantage. So I imagine they will be unwilling to join our voluntary tax based societies.

Yes, I certainly imagine that they will be totally unwilling to join your voluntary tax based society. I know I sure as hell will not. Why should I pay for schools that I don't go to, or pay to use roads that I don't use, or pay to fund police that I disagree with? I would rather pay to go to the school I want to, pay to use the roads I want to, and pay the police who I agree with. I am not so dumb that I cannot select what I want to fund myself; I sure as hell don't need men with guns to take my money and spend it on what they see fit.

The WTLA recognizes that many of the rich people today are only rich at the expense of others. A prime example of this would be all drug enforcement agents. Now of course we would immediately fire all such people, but whether they can keep their ill-gotten gains is another question. I would personally be in favor of seizing all of their assets, as all the money they have is from kidnapping, slavery, and extortion. As for owners of companies and such, in many cases they rightfully own those things. You are not born into life promised a means of production, or promised a factory, or promised a big chunk of land. However, you have the right to not be extorted by others, by your very nature of being human! So you will not be taxed at all, and can spend your money however you want as well.

It is just not realistic for us to go through all of human history and look at the events that caused certain people to have money today, to see if they got their money from wrong deeds, so that we can seize it and give it to the people they wronged. We cannot say that, oh, some white people are only rich today because their great-great-great-great-grandparents owned slaves and got rich off of their labor, so let us take their money and distribute it to the great-great-great-great-grandchildren of slaves. Too much history in the world, too many events, too many interpretations; we cannot undo the wrongs of history.

To me it sounds like some of you might be less opposed to the WTLA if we had a final single redistribution of wealth, to try to lessen the impact of the previous wrongs in the world. Maybe good people will have money taken from them as well, but perhaps a one-time fee for the cost of perpetual freedom afterwards is worth it. But the thing is, even this is not really realistic. The world has $231,000,000,000,000 worth of private wealth in it. If we taxed everybody a one-time fee of 50% of their money, we would have $115,500,000,000,000. There are roughly 8,000,000,000 people in the world. That means if we evenly distributed half of the wealth in the world over the entire world, everybody would get about $14,437. But then take into account the logistics of doing this, and it will end up being much less than that after costs are calculated. Is getting everybody in the world a one-time payment of a few thousand dollars really worth all the work it would take? And can we really justify taking 50% of every single person's money and then redistributing it in such a way? On the positive side, some of the people who are only rich because of slavery will lose money that will benefit some people who might only be poor because of slavery. On the other hand, we will be enslaving people who did not get their money in a bad way in the process! So in helping to right the wrongs of the past we are making new wrongs in the present!

So it doesn't seem very realistic to me that we have a final massive wealth redistribution prior to implementing Totalibertarianism. I think it is better to recognize that in the past we allowed bad things to happen, but in the future we will not. People will always benefit from doing bad things in the world, especially when society lets them. But that doesn't mean we cannot say, "Okay, from now on no more evil shit is allowed. The system is fair from now on!" Sure, some people who have power got it in bad ways. But we can either allow bad things to keep happening so as not to give them more of an advantage, or we can just stop these bad things from happening altogether and hope that over time things correct themselves.

Quote
So me and my redistributive taxation loving comrades will be free to start our more equitable society, as long as we don't attempt to trespass on any of THEIR productive agricultural land or attempt to take any of THEIR oil. There's plenty of desert going spare. Once again, Libertarianism seems to promise freedom in proportion to wealth.

Libertarianism promises equal freedom to every single person. Wealth promises power, but libertarianism promises freedom.

138
Off topic / Re: Let's talk about prison rape
« on: September 08, 2013, 07:34 am »
Quote
The dude caught for smoking pot and sentenced to 2 years will not get tossed in a cell with the serial killer doing life.

That is certainly not true. I know people who went to prison for under two years for drug dealing offenses, and some of them were cellmates with people doing life sentences for murder, lol.


Quote
If this is true there is something seriously wrong with your country's jail system; inmates should always be separated into min, med, max, and supermax depending on the crime(s) committed

Hell, I know people who got less time in prison for holding up stores with guns than other people got for selling weed.

139
Security / Re: Cleaning up my computer
« on: September 08, 2013, 07:12 am »
Secure Erase is guaranteed to securely erase your drive, provided it is implemented correctly and nothing else goes wrong.

140
Quote
Wait, wait... I'm really fucking confused now.  We're *not* supposed to use elliptic curve cryptography?  But... but I... but...

wtf is going on?

Right now expert cryptographers seem to hold conflicting opinions. Some are saying we need to switch to ECC right away, because they take the NSA revelations to mean that the NSA might be able to crack low-bit-strength RSA and DH (i.e., the leak says that ten years ago the NSA had a breakthrough allowing them to crack many forms of cryptography). Others are saying we need to stay far away from ECC. Personally I prefer ECC by a lot, but if it is broken then obviously it is no good. Switching to ECC from RSA and DH has been the conventional wisdom up until very recently, with nearly everybody suggesting it. But with the NSA revelations, some people are getting cold feet in regard to the ECC algorithms, because the NSA has been their biggest supporter and has been trying to get everybody to switch to them for some years now (i.e., the leak says that the NSA is trying to get people to use encryption that they can break).

So use ECC if you think the NSA revelations mean RSA and DH are screwed, and use high-bit-strength RSA and DH if you think the NSA revelations mean ECC is screwed. Right now the experts are split. ECC is pretty new. The mathematics behind ECC is relatively new, only formalized a bit over a hundred years ago, whereas the mathematics behind RSA goes back several thousand years. On the other hand, most people thought ECC was much stronger than RSA bit for bit. I really cannot say which I would use. I think ECC has much nicer properties, and I would much rather use ECC than RSA or DH, provided it is secure. Honestly, though, I would probably lean more toward RSA or DH with really high bit strength, because not many people are worried the NSA can break those, whereas some people are worried it can break ECC in general, and others are worried it can break low-bit-strength RSA/DH.
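
For concreteness, here is what the two conservative choices look like with the Python cryptography library. This is just a minimal sketch; the key size and curve are example parameters, not recommendations.

Code:
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# "High bit strength" RSA: a 4096-bit modulus with the standard public exponent
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# The ECC alternative: a NIST curve, with far smaller keys for comparable claimed strength
ecc_key = ec.generate_private_key(ec.SECP384R1())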

141
Security / Re: Cleaning up my computer
« on: September 08, 2013, 06:11 am »
Wow, every single person in this thread is giving horrible advice. Throwing your HD in a bucket of water doesn't accomplish jack shit. Reinstalling an OS doesn't accomplish much either; it only overwrites some data. To clean your drive you need to use Secure Erase, preferably followed by a single pass of random data from DBAN.
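
For what it's worth, a rough sketch of that procedure on Linux, driving hdparm (which issues the ATA Secure Erase command) and a dd random pass standing in for DBAN. The device path is a placeholder and this is destructive, so treat it as illustration only.

Code:
import subprocess

DEVICE = "/dev/sdX"  # placeholder device node; be very sure before running anything like this

# ATA Secure Erase via hdparm: set a temporary security password, then erase.
# The drive must not be "frozen"; check the `hdparm -I` output by hand first.
subprocess.run(["hdparm", "-I", DEVICE], check=True)
subprocess.run(["hdparm", "--user-master", "u", "--security-set-pass", "pass", DEVICE], check=True)
subprocess.run(["hdparm", "--user-master", "u", "--security-erase", "pass", DEVICE], check=True)

# One follow-up pass of random data over the whole device (stand-in for DBAN's random pass)
subprocess.run(["dd", "if=/dev/urandom", f"of={DEVICE}", "bs=1M", "status=progress"], check=True)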

142
Off topic / Re: Let's talk about prison rape
« on: September 08, 2013, 05:57 am »
Also, generally the MORE secure the prison is, the LESS likely you are to get shanked or raped. Jail in the USA is maximum security, I believe; because they house everyone from serial killers to shoplifters, they need to default to the highest level (well, supermax is the highest). You are not likely at all to get raped or shanked in jail, for a number of reasons. Like NN said, a lot of the people in jail plan to be out really soon, so they are not going to risk getting a ton of time when they only have a few days or weeks to serve. A lot of the ones who are facing a lot of time have not been convicted yet, and they are not going to risk getting more charges when they still have not been convicted of the original ones against them. Also, everybody is fresh from the outside world and in many cases not likely to be as hardened.

At a supermax prison you are definitely not going to get raped or shanked, because you have no contact with anybody at all, or only have contact with guards. Maybe at the lowest level of a supermax you will have very little contact with other inmates, but it will be under such strict security that there is no way you will be raped.

Pretty much, the less secure a prison is, the more likely you are to be raped or shanked in it. Where do you think you are more likely to be raped: a prison where you have one hour of contact with other humans, under strict supervision of guards, or a prison where you share a cell with five random people? Or even a big dormitory where you are free to move about the place, kind of like a college campus from hell? The lower security the place you are at, the more contact you will have with other people, the more isolated you will be from guards, etc. The only advantage, from this perspective, of being at a lower security place is that the people you are locked up with are less likely to be insane rapist murderers, and the less time they are doing, the less likely they are to want to kill or rape you.

143
Off topic / Re: Let's talk about prison rape
« on: September 08, 2013, 05:46 am »
Quote
The dude caught for smoking pot and sentenced to 2 years will not get tossed in a cell with the serial killer doing life.

That is certainly not true. I know people who went to prison for under two years for drug dealing offenses, and some of them were cellmates with people doing life sentences for murder, lol.

144
Security / Re: Dissent: accountable anonymous group communication
« on: September 08, 2013, 04:53 am »
Quote
  I'm afraid it exposes too much data about who has seen what, which would be a shame since PIR is starting with the best available method to keep that from happening.   But I'll freely admit that I haven't thought this out all the way yet.

There are two different problems. The first problem is preventing people from learning who (IP) has seen what. The second problem is preventing unwanted people from learning who (pseudonym) has seen what. PIR protects against the first problem: it allows an IP address to download messages for a pseudonym without anybody being able to determine which pseudonym's messages they are downloading. Problem two is more of a concern from a network analysis perspective. But in many cases we do want many people to know which pseudonym has seen what, so they know who to respond to.

I can give an example with Pynchon Gate. Pynchon Gate is designed for person-to-person communications, not for group communications (unfortunately, because it is comparatively easy to implement, in that I could be done with it in like a month given what we already have done). Pynchon Gate makes use of the following components:

Mix Network -> To send forward messages
Nymserver -> To receive messages for users; it processes messages sent to a user and bunches them together into buckets for the PIR server
PIR Server -> To store messages for distribution and to let users download message buckets anonymously

Alice wants to talk to Bob. So she makes him a message and sends it through the mix network to the Nymserver, addressed for Bob. The Nymserver gathers many messages for Bob over a period of time called a cycle. At the end of each cycle, the Nymserver groups together all messages for Bob into what is called a bucket, pads the bucket to a fixed size, and uploads it to the PIR servers. The PIR servers put Bob's bucket at a certain numbered position in their database (PIR is usually indexed by number; if it is indexed by keyword then it is usually something else based on PIR, the only exception I can think of being everybody-gets-everything PIR), and then they release a list of every pseudonym and its current bucket number, which everybody downloads with everybody-gets-everything PIR (the username-to-bucket-position list is downloaded with everybody-gets-everything PIR; the actual message buckets are downloaded with a more sophisticated PIR). Every cycle, every client downloads the entire list of pseudonym:bucket_number pairings. Then they find their current bucket's index number and perform the more sophisticated PIR protocol with the PIR servers to download their messages. Because of the PIR, the servers cannot tell which IP is linked to Bob: even though Bob gets the message bucket at the position known to be 'Bob's messages', the PIR servers cannot tell this.
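
A toy sketch of that end-of-cycle bucketing step on the Nymserver's side. All the names, the bucket size, and the server.store() interface are invented for illustration:

Code:
BUCKET_SIZE = 64 * 1024  # fixed bucket size; an arbitrary example value

def end_of_cycle(pending_messages, pir_servers):
    """Bunch each pseudonym's queued messages into one fixed-size, padded bucket."""
    index = {}
    for position, (nym, messages) in enumerate(sorted(pending_messages.items())):
        bucket = b"".join(messages)[:BUCKET_SIZE]
        bucket += b"\x00" * (BUCKET_SIZE - len(bucket))   # pad to the fixed size
        for server in pir_servers:
            server.store(position, bucket)                # every PIR server holds a replica
        index[nym] = position
    return index  # the pseudonym:bucket_number list, published to every client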

Anyway, this is fine and great for person-to-person communications. The Nymserver cannot link Alice to Bob, or anybody else to Bob, because they don't need to say who they are when they send messages to Bob. But if you try to turn this into a group communication system, a lot of problems come up. Let's say that Alice wants to send a message to Bob and Carol. There are two options here. Either Alice can send a copy of the message to Bob and then re-encrypt it and send a brand new copy to Carol, or Alice can send a single message to a node that sends a copy to Bob and Carol.

In the first case, it is simply bandwidth-prohibitive. If Alice communicates with 100 people, we cannot have her sending 100 copies of the same message. That drains bandwidth from the mix network in such a huge way that it is not feasible. The second option greatly reduces the bandwidth requirements. In this case, Alice sends the same message to her 100 contacts down the same circuit, and only the key to decrypt the message is encrypted uniquely to each of her contacts. Alice would need to include, for the final mix node, each of the contacts to send the message to, and at the final mix node the message can be split off to the different nymservers (and where many people share a single nymserver, only one message needs to be sent to it with the list of recipients). This brings the bandwidth requirements into the realm of the reasonable: Alice doesn't send 100 messages individually through the mix network, each one to be individually sent to a nymserver; she sends a single message down the mix network, and then it is sent one time to each nymserver from the final mix. So this is much better, but there is still a huge problem! The nymservers can see that the same ciphertext has arrived for multiple people, and now they can socially link those people together. This is a social network analysis problem. Since PIR is being used, the PIR servers still cannot link an IP address to a pseudonym, but pseudonyms communicating with each other can be linked together by a third party (the nymserver, the final mix).
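
The "one ciphertext, one small wrapped key per contact" idea looks roughly like this. A sketch using the Python cryptography library, with RSA-OAEP standing in for whatever asymmetric scheme the real system would use:

Code:
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_for_group(plaintext, recipient_public_keys):
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    body = AESGCM(key).encrypt(nonce, plaintext, None)   # one ciphertext for everyone
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_keys = [pk.encrypt(key, oaep) for pk in recipient_public_keys]
    return nonce, body, wrapped_keys   # the body is shared; only the small key blobs differ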

Using keyword search instead of a nymserver can eliminate all of these problems. Now, instead of sending a message to Bob's nymserver and telling the nymserver the message is for Bob, Alice merely sends a message tagged with a secret shared between her and Bob directly to the PIR-like server. The PIR-like server no longer associates this message with "a message for Bob", but rather can only tell that it is some message for somebody. So now Bob can download the message by doing a keyword search. And this actually scales to group communications, because when Alice sends the same message to all 100 of her contacts, yes, there is still a single ciphertext, but it is indexed by 100 different arbitrary strings. It is not a message for Bob and Carol and Doug etc; it is a message for 100 random strings that will never be used in the future and have never been used in the past. So this solves the problem of network analysis and actually allows us to totally remove the nymserver.
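
A sketch of how those one-time arbitrary strings could be derived. The counter scheme is my assumption (both sides simply track how many messages they have exchanged), not anything specified above:

Code:
import hashlib
import hmac

def search_tag(shared_secret: bytes, message_counter: int) -> str:
    """One-time search string for a single contact: never reused, and not linkable
    across messages by anyone who doesn't hold the shared secret."""
    counter = message_counter.to_bytes(8, "big")
    return hmac.new(shared_secret, counter, hashlib.sha256).hexdigest()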

But of course, if Alice sends a message to Bob and Carol, she wants Bob to know to tell Carol about any replies he makes to it. Otherwise there is no group communication taking place. So the problem is third-party network analysis; the communicating parties themselves should be able to determine who all is involved in the communication.

So yeah, there are a few closely related things here, but they are distinct: internet network analysis and social network analysis. PIR takes care of the first problem, but not the second. I think we can use keyword-indexed PIR-like systems to take care of the second problem. But in some cases it is not even a problem, because we need communicating parties to know that they are communicating with each other; we just don't want unwanted third parties to know this. If we allowed unwanted third parties to learn who is communicating with whom from a *social network* perspective, and only not from a *computer network* perspective, we could just use a modified Pynchon Gate and be done with this a hell of a lot faster :).

145
Security / Re: Dissent: accountable anonymous group communication
« on: September 08, 2013, 04:21 am »
Quote
kmfkewm - I really do get the desire to see which of your friends have seen links to the payload, and the desire to reduce unneeded duplication of metadata to share that.

It is more than just a desire to reduce bandwidth, although that does come into play as well. The primary reason why users need to be able to tell which of their friends have seen a payload is so they know who to respond to when they make a response to the message in the payload. Picture it with E-mail and a mail archive:

Alice sends a tagged, encrypted message to a mail archive. This is the payload. Now she wants Bob and Carol to see the message and be able to respond to it. So she sends Bob an E-mail with the tag of the message and a key to decrypt it, and she tells Bob that Carol can also see the message. Alice sends the same E-mail to Carol, but letting her know that Bob can see the message. The E-mails Alice sends to Bob and Carol can be seen as the metadata packets. Now, if Bob knows who Carol is, when he makes a response to the message he knows to tell Carol about it. If Alice never told Bob that Carol knew about the message, Bob could only make a response and tell Alice about it. But in that case there is no group communication taking place; rather, it is like Alice holds a conversation with Bob and independently holds a conversation with Carol about the same topic. So it is required for group communication that Bob knows Carol is part of the group communicating. Now, there are cryptographic tricks we can do to make this more secure; for example, we don't want Bob to learn anything about Carol if he doesn't already know who she is. Additionally, we don't want Bob to even know how many people Alice pointed to the message, unless he knows all of the people that she pointed to it.
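
In other words, each metadata packet conveys something like the following. The field names are invented, and a real format would be a fixed-size encrypted packet rather than a dict:

Code:
import os

payload_tag = os.urandom(16).hex()  # stand-in for the payload's real search tag
payload_key = os.urandom(32)        # stand-in for the payload's decryption key

# What Alice's metadata packet to Bob carries, schematically:
metadata_for_bob = {
    "payload_tag": payload_tag,        # where to fetch the ciphertext from the archive
    "payload_key": payload_key,        # how to decrypt it once fetched
    "also_shared_with": ["carol"],     # meaningful to Bob only if he already knows Carol
}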

So the most important reason for clients to know who all has seen a message is so that they know who to respond to when they make a response to the message. Saving bandwidth by not resending a ton of metadata packets is just an added advantage of this.

Quote
I'm kinda stuck at this question:
How does the overhead of performing searches relate to the overhead of duplicated metadata objects?

I am not sure I understand this question. If there are duplicated metadata objects (although each one is a bit different, even if it points to the same message), that will increase the number of searches that need to be performed as well. If Alice points Bob to a message he already knows about, then Bob still needs to search for that metadata object, because he doesn't know what it points to until he downloads it, and downloading it requires him to search for it. He is capable of searching for it because he has a shared secret search string between him and Alice, but until he actually downloads it he has no idea what it is he is downloading.

Here is the abstract from one of the papers that looks like a suitable candidate (however, I still need to read the one-way indexing paper; it looks like it might actually be better, in that it could provide censorship resistance as well):

www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA456185

Quote
A system for private stream searching, introduced by Ostrovsky and Skeith [18], allows a client to provide an untrusted server with an encrypted search query. The server uses the query on a stream of documents and returns the matching documents to the client while learning nothing about the nature of the query. We present a new scheme for conducting private keyword search on streaming data which requires O(m) server to client communication complexity to return the content of the matching documents, where m is the size of the documents. The required storage on the server conducting the search is also O(m). Our technique requires some metadata to be returned in addition to the documents; for this we present a scheme with O(m log(t/m)) communication and storage complexity. In many streaming applications, the number of matching documents is expected to be a fixed fraction of the stream length; in this case the new scheme has the optimal O(m) overall communication and storage complexity with near optimal constant factors. The previous best scheme for private stream searching was shown to have O(m log m) communication and storage complexity. In applications where t/m > m, we may revert to an alternative method of returning the necessary metadata which has O(m log m) communication and storage complexity; in this case constant factor improvements over the previous scheme are achieved. Our solution employs a novel construction in which the user reconstructs the matching files by solving a system of linear equations. This allows the matching documents to be stored in a compact buffer rather than relying on redundancies to avoid collisions in the storage buffer as in previous work. We also present a unique encrypted Bloom filter construction which is used to encode the set of matching documents. In this paper we describe our scheme, prove it secure, analyze its asymptotic performance, and describe several extensions.

The Internet currently has several different types of sources of information. These include conventional websites, time sensitive web pages such as news articles and blog posts, real time public discussions through channels such as IRC, newsgroup posts, online auctions, and web based forums or classified ads. One common link between all of these sources is that searching mechanisms are vital for a user to be able to distill the information relevant to him.

Most search mechanisms involve a client sending a set of search criteria to a server and the server performing the search over some large data set. However, for some applications a client would like to hide his search criteria, i.e., which type of data he is interested in. A client might want to protect the privacy of his search queries for a variety of reasons ranging from personal privacy to protection of commercial interests.

A naive method for allowing private searches is to download the entire resource to the client machine and perform the search locally. This is typically infeasible due to the large size of the data to be searched, the limited bandwidth between the client and a remote entity, or to the unwillingness of a remote entity to disclose the entire resource to the client.

In many scenarios the documents to be searched are being continually generated and are already being processed as a stream by remote servers. In this case it would be advantageous to allow clients to establish persistent searches with the servers where they could be efficiently processed. Content matching the searches could then be returned to the clients as it arises. For example, Google News Alerts system [1] emails users whenever web news articles crawled by Google match their registered search keywords. In this paper we develop an efficient cryptographic system which allows services of this type while provably maintaining the secrecy of the search criteria.

Private Stream Searching. Recently, Ostrovsky and Skeith defined the problem of “private filtering”, which models the situations described above. They gave a scheme based on the homomorphism of the Paillier cryptosystem [19, 9] providing this capability [18]. First, a public dictionary of keywords D is fixed. To construct a query for the disjunction of some keywords K ⊆ D, the user produces an array of ciphertexts, one for each w ∈ D. If w ∈ K, a one is encrypted; otherwise a zero is encrypted. A server processing a document in its stream may then compute the product of the query array entries corresponding to the keywords found in the document. This will result in the encryption of some value c, which, by the homomorphism, is non-zero if and only if the document matches the query. The server may then in turn compute E(c)^f = E(cf), where f is the content of the document, obtaining either an encryption of (a multiple of) the document or an encryption of zero.

Ostrovsky and Skeith propose the server keep a large array of ciphertexts as a buffer to accumulate matching documents; each E(cf) value is multiplied into a number of random locations in the buffer. If the document matches the query then c is non-zero and copies of that document will be placed into these random locations; otherwise, c = 0 and this step will add an encryption of 0 to each location, having no effect on the corresponding plaintexts. A fundamental property of their solution is that if two different matching documents are ever added to the same buffer location then we will have a collision and both copies will be lost. If all copies of a particular matching document are lost due to collisions then that document is lost, and when the buffer is returned to the client, he will not be able to recover it.

To avoid the loss of data in this approach one must make the buffer sufficiently large so that this event does not happen. This requires that the buffer be much larger than the expected number of required documents. In particular, Ostrovsky and Skeith show that a given probability of successfully obtaining all matching documents may be obtained with a buffer of size O(m log m), where m is the number of matching documents. While effective, this scheme results in inefficiency due to the fact that a significant portion of the buffer returned to the user consists of empty locations and document collisions.

Note that we would need to use the extension in section 5.2 of the paper, which eliminates the globally public dictionary, as the keywords searched for are random, secret, and long.
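
To make the Ostrovsky-Skeith construction above concrete, here is a toy version using the third-party phe Paillier library. phe exposes the homomorphism additively, so adding ciphertexts below corresponds to the "product of ciphertexts" in the paper's notation, and scalar multiplication gives E(c)^f = E(cf). The dictionary and keywords are made up:

Code:
from phe import paillier  # pip install phe

DICTIONARY = ["alpha", "bravo", "charlie", "delta"]  # the fixed public dictionary D
wanted = {"bravo", "delta"}                          # the client's secret keywords K

pub, priv = paillier.generate_paillier_keypair()
# Query: an encryption of 1 for each wanted word, of 0 for every other word in D
query = {w: pub.encrypt(1 if w in wanted else 0) for w in DICTIONARY}

def process_document(words, doc_as_number):
    """Server side: combine the query entries of the words present in the document."""
    c = pub.encrypt(0)
    for w in words:
        if w in query:
            c = c + query[w]      # homomorphic add: c counts matched keywords
    return c * doc_as_number      # E(c * f): a multiple of the document, or zero

# Client side: decrypting gives a non-zero multiple of the document iff it matched
result = priv.decrypt(process_document(["bravo", "echo"], doc_as_number=42))
print(result)  # 42 here, since exactly one keyword matched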

Quote
At some point, every viewer of every thread will be performing an "is it read?" search one time for each contact they have that they want an answer for.   You can cache results on the client, so it's just a "What about the folks that hadn't seen it last time?" set of searches every time you reopen the client.

Sure

Quote
You're trading an increase in CPU-time (to perform additional searches) to affect a decrease in storage (duplicated metadata objects).

Sort of, but the CPU time required for a client to decrypt a search result is actually quite large. Decryption of search results can take many minutes and a lot of processing power on the client's end. So not searching for things you already have is probably a significant reduction in required client CPU time, as well as certainly a decrease in storage at the PSS (in this case) database.

Quote
The DoS/Flood exposure of the PIR/etc method seems to be twofold:  CPU, in terms of search flood, and storage.   Storage is what actually concerns me the most, without the ability to share the load of storage, ala Freenet.  Proof of Work gives you a rate limiting mechanism that can be ratcheted up to an appropriate level, of course.  But PoW works well to limit storage.. not so much with searches (CPU).   "Please fill out a CAPTCHA to see if Alice has read this... please fill one out for Bob..."   Obviously, you could trade a non-human-interactive PoW (some concept of hashcash, etc).

Yes, storage concerns me the most as well. Pretty much, we need to have several PSS servers sharing the load to gain the benefit of decentralization (i.e., taking down a single server has no effect on the overall system). But it seems very wasteful to have, say, two servers with 4TB of storage capacity that need to be exact mirrors of each other. Adding more servers doesn't really increase storage capacity in this case, it only increases redundancy. But at the same time, having different messages go to different servers will very likely hurt the anonymity of the system and make it weak to traffic analysis. Just because PIR/PSS/OWI/whatever makes it so the clients can download messages without the server knowing which message they downloaded does not make it perfectly immune to traffic analysis. It sets a strong foundation upon which we need to build up our traffic analysis resistance. I think that we should certainly have proof of work for sending messages, to at least discourage spam and flooding of the PIR-like servers. It is trivial to implement hash-based proof of work; I could make such a system in half an hour tops (see the sketch below). I don't much care for the idea of CAPTCHAs; they are easy to circumvent and make things much less nice for the legitimate users.
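
For example, a hashcash-style stamp takes only a few lines. The difficulty parameter is arbitrary; 20 bits means roughly a million hashes on average to mint, but only one hash to verify:

Code:
import hashlib
import itertools

def mint_stamp(message: bytes, difficulty_bits: int = 20) -> bytes:
    """Find a nonce so sha256(message || nonce) has difficulty_bits leading zero bits."""
    threshold = 1 << (256 - difficulty_bits)
    for n in itertools.count():
        nonce = n.to_bytes(8, "big")
        digest = hashlib.sha256(message + nonce).digest()
        if int.from_bytes(digest, "big") < threshold:
            return nonce

def check_stamp(message: bytes, nonce: bytes, difficulty_bits: int = 20) -> bool:
    """Cheap verification: a single hash, no matter how hard minting was."""
    digest = hashlib.sha256(message + nonce).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))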

Quote
I just don't have a feel for how CPU-intensive each individual search is.  If it's trivial, I think I like your method, but remain concerned about the Layer 7 reverse-mapping of content viewers.   I'm afraid it exposes too much data about who has seen what, which would be a shame since PIR is starting with the best available method to keep that from happening.   But I'll freely admit that I haven't thought this out all the way yet.

For clients, searching is very CPU intensive. It will likely take several minutes (depending on CPU, of course) for the client to decrypt their obtained search results. I don't think searching is actually very CPU intensive for the servers, but let me read the entire document through again before I make that claim. The PIR for Pynchon Gate is extremely CPU intensive for the servers but not very much so for the clients. Most PIR systems I have read about are very CPU intensive for the servers, and there tends to be a direct trade-off between CPU and bandwidth (for example, the easiest PIR is that of BitMessage, everybody-gets-everything, where no significant processing needs to be done by the server but it needs to send enormous amounts of data to every client. Pynchon Gate PIR is the opposite of this, in that the server needs to do enormous amounts of computation but only needs to send small amounts of data to every client). But in the PSS papers and other PIR-like systems I have read about, it seems that most of the load is put on the clients, which is actually fantastic.
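
For reference, the multi-server XOR trick that Pynchon Gate's PIR builds on can be sketched in the two-server case like this. Toy code only; the real protocol uses more servers and adds integrity checks:

Code:
import secrets

def pir_reply(buckets, mask):
    """Server: XOR together every bucket the query mask selects. The mask looks
    random, so the server cannot tell which single bucket is actually wanted."""
    acc = bytes(len(buckets[0]))
    for bucket, bit in zip(buckets, mask):
        if bit:
            acc = bytes(a ^ b for a, b in zip(acc, bucket))
    return acc

def pir_queries(num_buckets, wanted_index):
    """Client: two random-looking masks that differ only at the wanted index."""
    q1 = [secrets.randbelow(2) for _ in range(num_buckets)]
    q2 = list(q1)
    q2[wanted_index] ^= 1
    return q1, q2

# XORing the two servers' replies cancels every bucket except the wanted one,
# provided the two servers do not collude and compare their queries.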

Quote
Writing or definitively auditing code that people will trust their lives to just isn't something I'm personally comfortable with doing, so I'm not much use to audit your code or implement algorithms from whitepapers.  I have a lot of skillsets, but that one is too weak to trust anybody else's life with.   Although the more I think about the direction the Internet/surveillance/etc is heading, the more I think that maybe it ought to be my focus moving forward.

Do you think you are a better programmer / know more about security than the BitMessage people? Do you know not to directly encrypt payloads with RSA? Are you pretty good at math? Do you know C programming? The thing is, yeah, it is best if only top experts program stuff like this. But to help people we only need to be better than the "competition". Right now the competition for systems like this is not that good. The only systems similar to this are Syndie, BitMessage, and Frost (and I suppose Dissent as well, but I don't think anyone is using that yet). Syndie doesn't include a network, only an interface that can be used with any network:storage pair. BitMessage is horrible and obviously not made by people who know what they are doing. Frost I know little about, but it essentially relies on the anonymity of Freenet, which we can improve upon. Dissent I don't know enough about to comment on, but quite likely it is the best of the bunch, considering it is coming from academics instead of hobbyists. Also we have solutions like Tor plus PHP forums, and I2P plus the same, but that is so far below the aspirations of this project that it can hardly even be compared. We need to make this because the solution of Tor plus a PHP forum, or I2P plus the same, is simply not anywhere near good enough. Even if Tor is programmed perfectly and the PHP forum has no flaws, by its very design it is just not going to be resistant enough to strong attackers.

Pretty much what I am trying to say is that even if you are not the top security expert in the world, you can still help contribute to this and still look over code that is done. People's lives will not depend on you; they will depend on everybody who contributes to the project. The more people who contribute to writing code and to auditing code, the better it is going to be. At first I tried doing this all by myself, and two years later I realized I had bitten off more than I could chew, and other people became involved. Things improved! Not all of the people involved are experts; I consider myself only a hobbyist as well, but I still implemented a cryptographic packet format, and the expert people who I asked to audit my code said it looked correct. That was the first cryptographic system I implemented, and it was hard work, but by sticking to the whitepaper and researching the shit out of things, I was able to produce a system that true experts in the field said I had produced correctly.

146
Parallel construction is way easier to pull off on drug users than CP viewers. A random interception, a dog hit on the package, etc. For a CP viewer, what are they gonna say? I guess they could claim that they hacked the computer. But that would entail that the seized computer was actually hackable. And I don't think the NSA is gonna burn zero-days busting people for CP. In fact, the exploit that was used against people going to FH servers was a month old and had already been patched.

147
If the NSA spends resources to bust people viewing CP, they are probably willing to spend just about as many resources to bust people ordering drugs.

148
Security / Re: Majority of Tor crypto keys COULD be broken
« on: September 07, 2013, 11:42 am »
I just want to point out that technically ECDH and DH are not asymmetric encryption algorithms, but rather secret derivation (key agreement) algorithms. ECDH and DH are not encryption algorithms at all; you cannot encrypt or decrypt anything with them.

Symmetric Encryption (AES) = Encrypt and Decrypt with single key
Asymmetric Encryption (RSA) = Encrypt with public key, Decrypt with private key
Secret Derivation (ECDH) = Pubkey-A and Privkey-B = Secret1, Pubkey-B and Privkey-A = Secret1

I just point this out because it is an extremely common misconception that asymmetric encryption = anything with a public and private key. Not all public-private key algorithms are asymmetric encryption, and some of the most popular ones are not encryption algorithms at all.
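
A quick demonstration of the "secret derivation" point with the Python cryptography library (the curve choice is just an example): both sides derive the same secret, and at no point is anything encrypted or decrypted.

Code:
from cryptography.hazmat.primitives.asymmetric import ec

# Two parties each generate an ECC keypair
priv_a = ec.generate_private_key(ec.SECP256R1())
priv_b = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the other's public key
secret_1 = priv_a.exchange(ec.ECDH(), priv_b.public_key())
secret_2 = priv_b.exchange(ec.ECDH(), priv_a.public_key())

assert secret_1 == secret_2  # the same shared secret; nothing was encrypted or decrypted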

A few ECC algorithms are included in OpenSSL, and in a few other libraries as well. I don't know if they are in violation of BlackBerry's patents or not, but I doubt it, as ECDH and ECDSA are widely used. There is really no reason for any new software to use RSA or DH imo; ECDH and ECDSA are both easily utilized and have some significantly superior properties to RSA, including key size, security, and speed.

149
Quote
You don't need to be scared about development versions. Maybe there's a bug which will make Tor crash, but it won't suddenly deanonymize you.

Bugs that can cause crashes can often be used to pwn people, though.

150
Philosophy, Economics and Justice / Re: Why I abandoned Libertarianism
« on: September 07, 2013, 08:13 am »
The only way to accomplish it is for enough people to join the Worlds Totalibertarian Liberation Army, and for us to amass enough weapons to beat the armies of the world into submission.

Of course, it would be more beneficial for us not to engage in traditional warfare. Rather, targeted assassinations and attacks without fronts forming. The world's political structures can be taken advantage of as well, to gradually make our way toward Totalibertarianism. For example, consistently kill the political leaders who are most against us, and it becomes artificial selection. Does a certain political organization consist of 50% people who are 100% against freedom and 50% who are 90% against freedom? Kill the 50% who are most against freedom, and hope that their replacements are more for freedom. After doing this long enough, maybe we can cause the balance to lean more and more toward freedom, up to the point that the political organization voluntarily embraces Totalibertarianism. There is actually no point in us taking on armies; armies are there largely to form a front that an attacking force needs to get past in order to target the politicians and other powerful members of a country. But Totalibertarians are already spread throughout the world, and we can spread our ideology to others through the internet and try to convert more members to our cause. We have no need to try to push back a front; we have mini-fronts all over the place. Every man can be his own front.

Then we can get the armies of nations to submit to us without even actually fighting them. Once we take over the political structures, we effectively take over the armies they control. Then we can use those armies to further our cause, if required, on our path to global domination.
