Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - astor

Pages: 1 ... 189 190 [191] 192 193 ... 208
2851
Security / Re: Best program to securely delete files
« on: December 22, 2012, 11:43 pm »
Nah, no need for Gutmann 35. I've tested it myself before I set up FDE. Did a single random write across the whole disk, then used a program to scan for files and found nothing. Not a single file was recovered. NSA might have technology not available to us, but NSA won't be analyzing the hard drive of a low or mid-level drug dealer. They're worried about terrorists and spies.

The important thing is that you have to write across the whole disk while the host OS is offline (like from a boot disk); filling the empty space is not enough.

2852
Off topic / Re: is photobucket.com safe for Forums?????
« on: December 22, 2012, 11:35 pm »
Not sure what the problem with Onion Image Uploader is. It doesn't require JavaScript. I used it just fine a few days ago.

2853
Security / Re: Best program to securely delete files
« on: December 22, 2012, 11:26 pm »
Deleting individual files is unsafe because of filesystem journaling and defragging. Each file has almost certainly been written to multiple places on the disk and is potentially recoverable. The only NIST-approved method of secure file erasure, for example for destroying medical records before decommissioning a hospital computer, is offline and full disk. Boot DBAN or a Linux Live CD and do a single random write across the entire hard drive.
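To make the single random pass concrete, here's a minimal Python sketch. It overwrites a throwaway file standing in for a disk image; on a real system the target would be the whole block device (e.g. /dev/sdX from a live CD, never a file inside a mounted filesystem), and you'd normally just use dd or DBAN for this:

```python
import os

def random_wipe(path, block_size=1024 * 1024):
    """Overwrite every byte of the target with one pass of random data.
    Illustrative only: a real wipe targets the entire block device
    from a boot disk, while the host OS is offline."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            chunk = min(block_size, remaining)
            f.write(os.urandom(chunk))
            remaining -= chunk
        f.flush()
        os.fsync(f.fileno())  # push the writes to the device

# Demo against a throwaway file standing in for a disk:
with open("fake_disk.img", "wb") as f:
    f.write(b"SECRET DATA " * 1000)

random_wipe("fake_disk.img")
data = open("fake_disk.img", "rb").read()
print(b"SECRET" in data)  # False: the plaintext is gone
```

Note the fsync: without forcing the writes out of the OS cache, the "wiped" data may never reach the platters before you pull the plug.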

2854
Off topic / Re: is photobucket.com safe for Forums?????
« on: December 22, 2012, 11:16 pm »
No worse than browsing other clearnet sites, as long as you do it over Tor.

2855
Security / Re: New Cust... Am I paranoid?
« on: December 22, 2012, 09:51 pm »
I only recommend full disk encryption. An encrypted volume on an unencrypted hard drive can leak info. For example, if you browse your encrypted volume with a file manager, it can create thumbnails of photos, which may be stored in a cache on the unencrypted part of the drive. Or if you open a document or some other file, many programs will add its path to their "Recent Documents" or "Recently Opened" list. Someone who analyzes your hard drive can find these pointers to the contents of your encrypted volume. If you keep a virtual hard disk for a VM inside the encrypted volume, presumably you'll be running it with VirtualBox or another virtualization program installed on the main OS. That means VirtualBox will be pointing to a virtual hard disk inside the encrypted volume, so anyone who looks at that will know you have a virtual hard disk in there.
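Here's a toy Python sketch of how trivial that forensic step is. It parses a fragment in the style of GTK's recently-used.xbel cache (the exact format and location vary by desktop environment, and the mount point and filenames here are made up):

```python
import xml.etree.ElementTree as ET

# A fragment in the style of GTK's recently-used.xbel cache.
# Paths and format are hypothetical examples.
xbel = """<?xml version="1.0"?>
<xbel version="1.0">
  <bookmark href="file:///media/secret_volume/ledger.ods" added="2012-12-01T10:00:00Z"/>
  <bookmark href="file:///home/user/Documents/notes.txt" added="2012-12-02T11:00:00Z"/>
</xbel>"""

root = ET.fromstring(xbel)
hrefs = [b.get("href") for b in root.findall("bookmark")]

# Any path under the encrypted volume's mount point is a pointer
# sitting on the unencrypted host:
leaks = [h for h in hrefs if h.startswith("file:///media/secret_volume/")]
print(leaks)  # ['file:///media/secret_volume/ledger.ods']
```

An analyst doesn't need to decrypt anything to learn that ledger.ods existed and when you last opened it.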

So, you might as well forget about the encrypted volume. A better solution is to use FDE on the virtual hard disk itself, which can be stored on an unencrypted drive. With Debian or Ubuntu, you can turn on full disk encryption at install time. That way there will be complete separation between data in the VM and the host OS.

2856
Off topic / Re: - Karma WTF (Why?)
« on: December 22, 2012, 11:11 am »
That was when I just knew about all this BTC shit, And everybody leaving one of those bill gates won't do that stuff...And its not just SR which will give me drugs...I have a plenty of sources here to buy any fuckin shit you would think...You want some, send your address and escrow some coins, will send you within no time... ;) ;)...

I can't make heads or tails of what you just said.

2857
Off topic / Re: - Karma WTF (Why?)
« on: December 22, 2012, 10:49 am »
Does anybody else immediately look up a publicly posted bitcoin address in the block chain?

This one is funny. 37 transactions totaling 0.11 BTC. This guy has been trying to earn coins by doing those online surveys and other gimmicks that pay shit rewards for hours of work.

Just buy some bitcoins already. You're never going to earn enough to buy drugs with that strategy.

2858
Security / Re: How safe is tor really?
« on: December 22, 2012, 10:24 am »
Here's a good overview of known attacks on Tor:

https://lists.torproject.org/pipermail/tor-dev/2012-September/003992.html

> > - "Traffic confirmation attack". If he can see/measure the traffic flow
> > between the user and the Tor network, and also the traffic flow between
> > the Tor network and the destination, he can realize that the two flows
> > correspond to the same circuit:
> > http://freehaven.net/anonbib/#SS03
> > http://freehaven.net/anonbib/#timing-fc2004
> > http://freehaven.net/anonbib/#danezis:pet2004
> > http://freehaven.net/anonbib/#ShWa-Timing06
> > http://freehaven.net/anonbib/#murdoch-pet2007
> > http://freehaven.net/anonbib/#ccs2008:wang
> > http://freehaven.net/anonbib/#active-pet2010

It depends in what way you want to become more precise.

I think the #SS03 paper might have the simplest version of the attack
("count up the number of packets you see on each end"). The #timing-fc2004
paper introduces the notion of a sliding window of counts on each side.
The #murdoch-pet2007 one looks at how much statistical similarity you
can notice between the flows when you are only sampling a small fraction
of packets on each side.
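To make the packet-counting idea concrete, here's a toy Python sketch of the #timing-fc2004 windowed-count version. All traffic is simulated; a real attack has to cope with drops, padding, and many concurrent flows:

```python
import random

def window_counts(timestamps, window=1.0, total=10.0):
    """Bucket packet timestamps into fixed windows, count per window."""
    n = int(total / window)
    counts = [0] * n
    for t in timestamps:
        counts[min(int(t / window), n - 1)] += 1
    return counts

def pearson(x, y):
    """Pearson correlation between two count vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
# Alice's flow entering the network: background traffic plus a burst.
entry = [random.uniform(0, 10) for _ in range(50)] + \
        [random.uniform(2, 3) for _ in range(200)]
# The matching exit flow: the same packets, slightly delayed.
exit_match = [t + random.uniform(0.05, 0.2) for t in entry]
# An unrelated flow seen at the same exit.
exit_other = [random.uniform(0, 10) for _ in range(250)]

e = window_counts(entry)
print(pearson(e, window_counts(exit_match)))  # high, near 1.0
print(pearson(e, window_counts(exit_other)))  # much lower
```

The burst shows up in the same time window on both sides of the network, so the matching flow correlates strongly while the unrelated one doesn't. Tor's low latency is exactly why this works.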

> > - "Congestion attack". An adversary can send traffic through nodes or
> > links in the network, then try to detect whether the user's traffic
> > flow slows down:
> > http://freehaven.net/anonbib/#torta05
> > http://freehaven.net/anonbib/#torspinISC08
> > http://freehaven.net/anonbib/#congestion-longpaths

Section 2 and the first part of Section 3 in #congestion-longpaths is
probably your best bet here. It actually provides a pretty good overview
of related work including the passive correlation attacks above.

If by 'more precise' you mean you want to know exactly what the threat
model is for this attack, I'm afraid it varies by paper. In #torta05
they assume the adversary runs the website, and when the target user starts
to fetch a large file, they congest (DoS) relays one at a time until they
see the download slow down.

In #congestion-longpaths they assume the adversary runs the exit relay
as well, so they know the middle relay, and the only question is which
relay is the guard (first) relay.

In #torspinISC08 on the other hand, they preemptively try to DoS the
whole network except the malicious relays, so the target user will end
up using malicious relays for her circuit.
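Here's a toy Python sketch of the #torta05 variant, where the adversary congests relays one at a time and watches Alice's download for a slowdown. Relay names, speeds, and the "congestion halves throughput" model are all made up for illustration:

```python
import random

random.seed(7)

relays = [f"relay{i}" for i in range(10)]
# Hypothetical: Alice's circuit uses these three (unknown to the attacker).
alice_path = {"relay2", "relay5", "relay8"}

def download_speed(congested_relay):
    """Alice's observed throughput (KB/s) while the attacker floods
    one relay. Toy model: congesting a relay on her path roughly
    halves her throughput."""
    base = 500 + random.uniform(-20, 20)  # baseline with noise
    return base / 2 if congested_relay in alice_path else base

# The adversary (running the destination site) congests each relay in
# turn during Alice's large download and flags any visible slowdown:
suspects = [r for r in relays if download_speed(r) < 350]
print(suspects)  # the relays on Alice's path
```

The adversary never touches Alice's traffic directly; the relays themselves leak which of them carry her circuit.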

> > - "Latency or throughput fingerprinting". While congestion attacks
> > by themselves typically just learn what relays the user picked (but
> > don't break anonymity as defined above), they can be combined with
> > other attacks:
> > http://freehaven.net/anonbib/#tissec-latency-leak
> > http://freehaven.net/anonbib/#ccs2011-stealthy
> > http://freehaven.net/anonbib/#tcp-tor-pets12

These are three separate attacks.

In #tissec-latency-leak, they assume the above congestion attacks work
great to identify Alice's path, and then the attacker builds a parallel
circuit using the same path, finds out the latency from them to the
(adversary-controlled) website that Alice went to, and then subtracts
out to find the latency between Alice and the first hop.

#ccs2011-stealthy actually proposes a variety of variations on these
attacks. They show that if Alice uses two streams on the same circuit,
the two websites she visits can use throughput fingerprinting to
realize they're the same circuit. They also show that by looking at
the throughput Alice gets from her circuit, you can rule out a lot of
relays that wouldn't have been able to provide that throughput at that
time. And finally, they show that if you build test circuits through
the network and then compare the throughput your test circuit gets with
the throughput Alice gets, you can guess whether your circuit shares a
bottleneck relay with Alice's circuit. Where "show" should probably be
in quotes, since it probably works sometimes and not other times, and
nobody has explored how robust the attack is.

#tcp-tor-pets12 has the adversary watching Alice's local network, and
wanting to know whether she visited a certain website. The adversary
exploits vulnerabilities in TCP's window design to spoof RST packets
between every exit relay and the website in question. If they do it
right, the connection between the exit relay and the website cuts its
TCP congestion window in response, leading to a drop in throughput on
the flow between the Tor network and Alice. In theory. It also works
in the lab, sometimes.

I also left out
http://freehaven.net/anonbib/date.html#esorics10-bandwidth
which uses a novel remote bandwidth estimation algorithm to try to
estimate whether various physical Internet links have less bandwidth when
Alice is fetching her file. In theory this lets them walk back towards
Alice, one traceroute-style hop at a time. In practice they need an
Internet routing map (these are notoriously messy for the same reasons
the Decoy Routing people are realizing), and also Alice's flows have to be
quite high throughput for a long time.

> > - "Website fingerprinting". If the adversary can watch the user's
> > connection into the Tor network, and also has a database of traces of
> > what the user looks like while visiting each of a variety of pages,
> > and the user's destination page is in the database, then in some cases
> > the attacker can guess the page she's going to:
> > http://freehaven.net/anonbib/#hintz02
> > http://freehaven.net/anonbib/#TrafHTTP
> > http://freehaven.net/anonbib/#pet05-bissias
> > http://freehaven.net/anonbib/#Liberatore:2006
> > http://freehaven.net/anonbib/#ccsw09-fingerprinting
> > http://freehaven.net/anonbib/#wpes11-panchenko
> > http://freehaven.net/anonbib/#oakland2012-peekaboo

#oakland2012-peekaboo aims to be a survey paper for the topic, so it's
probably the right one to look at first.
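The core of most of these papers is just nearest-neighbor matching on traffic traces. Here's a toy Python sketch with made-up packet-size sequences; real classifiers use richer features (direction, timing, burst structure) and much bigger databases:

```python
# Toy website-fingerprinting classifier: compare an observed trace of
# packet sizes against a database of known page traces (numbers invented).
database = {
    "site-a": [512, 1500, 1500, 300, 1500],
    "site-b": [200, 200, 512, 1500, 200],
    "site-c": [1500, 1500, 1500, 1500, 1500],
}

def distance(a, b):
    """L1 distance between two equal-length traces."""
    return sum(abs(x - y) for x, y in zip(a, b))

def guess_page(observed):
    """Return the database entry closest to the observed trace."""
    return min(database, key=lambda site: distance(database[site], observed))

# An observed trace that looks like site-a with a little jitter:
observed = [500, 1480, 1500, 320, 1450]
print(guess_page(observed))  # site-a
```

Note the threat model: the attacker only watches Alice's encrypted link into Tor, yet can still guess the page, provided it's in his database.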

> > - "Correlating bridge availability with client activity."
> > http://freehaven.net/anonbib/#wpes09-bridge-attack

If you run a relay and also use it as a client, the fact that the
adversary can route traffic through you lets him learn about your
client activity. Section 1.1 summarizes:

2. A bridge always accepts connections when its operator is using
Tor. Because of this, an attacker can compile a list of times when
a given operator was either possibly or certainly not using Tor, by
repeatedly attempting to connect to the bridge. This list can be used to
eliminate bridge operators as candidates for the originator of a series
of connections exiting Tor. We demonstrate empirically that typically,
a small set of linkable connections is sufficient to eliminate all but
a few bridges as likely originators.

3. Traffic to and from clients connected to a bridge interferes with
traffic to and from a bridge operator. We demonstrate empirically that
this makes it possible to test via a circuit-clogging attack [17, 15]
which of a small number of bridge operators is connecting to a malicious
server over Tor.  Combined with the previous two observations, this
means that any bridge operator that connects several times, via Tor,
to a web-site that can link users across visits could be identified by
the site's operator.
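The elimination step in observation 2 is simple set logic. A toy Python sketch, with invented bridge names and probe times:

```python
# Toy elimination attack: the adversary probes each bridge around the
# times a linkable pseudonym was active; a bridge that was down at any
# of those times can't be the originator (the operator would have been
# online, keeping the bridge reachable). All data here is hypothetical.
bridge_uptime = {  # probe times at which each bridge answered
    "bridge1": {1, 2, 4, 5},
    "bridge2": {1, 2, 3, 4, 5},
    "bridge3": {2, 3, 5},
}

pseudonym_active_at = {1, 3, 5}  # times the linkable connections exited Tor

candidates = [b for b, up in bridge_uptime.items()
              if pseudonym_active_at <= up]  # subset test
print(candidates)  # ['bridge2'] -- the only bridge up at every active time
```

With enough linkable connections the candidate set shrinks fast, which is exactly what the paper demonstrates empirically.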

> > I tried to keep this list of "excepts" as small as possible so it's not
> > overwhelming, but I think the odds are very high that if the ratpac comes
> > up with other issues, I'll be able to point to papers on anonbib that
> > discuss these issues too. For example, these two papers are interesting:
> > http://freehaven.net/anonbib/#ccs07-doa

Traditionally, we calculate the risk that Alice's circuit is controlled
by the adversary as the chance that she chooses a bad first hop and a bad
last hop. They're assumed to be independent. But if an adversary's relay
is chosen anywhere in the circuit yet he *doesn't* have both the first
and last hop, he should tear down the circuit, forcing Alice to make a
new one and roll the dice again. Longer path lengths (once thought to
make the circuit safer) *increase* vulnerability to this attack.
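A back-of-the-envelope Python model of why that happens (this is my simplification, not the paper's exact analysis): if the adversary tears down every partially compromised circuit, the only circuits that survive are either fully honest or end-to-end compromised, so conditioning on survival inflates the compromise rate, and longer paths make "fully honest" rarer:

```python
def compromise_prob(c, path_len):
    """Chance that a circuit which SURVIVES the selective tear-down
    attack is end-to-end compromised. c = adversary's share of relays.
    Survivors either have both ends bad (~c^2; middle relays don't
    matter once both ends are held) or no bad relay at all
    ((1-c)^path_len). Everything in between gets torn down."""
    both_ends = c ** 2
    all_honest = (1 - c) ** path_len
    return both_ends / (both_ends + all_honest)

c = 0.1
print(c ** 2)  # naive risk without the attack: 0.01
for L in (3, 4, 6, 8):
    print(L, round(compromise_prob(c, L), 4))  # grows with path length
```

Under this model a 10% adversary beats the naive c^2 figure on every surviving circuit, and the gap widens as the path gets longer.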

I think the guard node design helps here, but whether that's true is an
area of active research.

> > http://freehaven.net/anonbib/#bauer:wpes2007

If you lie about your bandwidth, you can get more traffic than you
"should" get based on bandwidth investment. In theory we've solved this by
doing active bandwidth measurement:
https://blog.torproject.org/blog/torflow-node-capacity-integrity-and-reliability-measurements-hotpets
but in practice it's not fully solved:
https://trac.torproject.org/projects/tor/ticket/2286

-----

All that being said, no Tor user has ever been identified through a direct attack on the Tor network. There are lots of ways to give up your identity, but if you behave safely, Tor won't betray you.

2859
Silk Road discussion / Re: becoming vendor without paying anything
« on: December 22, 2012, 04:06 am »
This suggestion has been made before: separate Vendor and Trusted Vendor categories, where the Trusted Vendors pay more. I think BMR does something like that. The single, fixed high fee reduces the number of scammers, and DPR has decided his strategy is to get rid of them entirely. He doesn't want an ecosystem of mixed trusted and scammer accounts.

With a $500 fee, only people who are serious about vending for the long term are likely to sign up. This precludes honest people who might have a single windfall of items they'd like to offload, but that's the trade-off you make. It's an imperfect system, but right now it's vastly better than the swamp of scammers on the clearnet.

2860
I did shrooms and MDMA once. Christ that was intense. If you manage to consume more than half of those drugs in one day, that would be impressive. :)

2861
Security / Re: SR Messages
« on: December 22, 2012, 03:31 am »
Would be nice if all messages between 2 users were automatically deleted 14 (or 30 or whatever) days after the last message between them, on the assumption that the conversation is over. That used to happen with the account transaction history after (I've read) 20 days, but it doesn't appear to be the case anymore, which is strange. I have items in my account history going back almost 2 months. I suppose I could push them off the end of the list by doing a bunch of deposits and withdrawals, but there's no guarantee they are actually deleted.

2862
Security / Re: Security Tutorials
« on: December 22, 2012, 03:01 am »
Thanks! :)

Yeah, I've seen the thread where the vendors say 80-90% of their buyers don't use encryption. It's sad. They could read through my tutorial in 10 minutes.

2863
Security / Re: orweb for android help!
« on: December 22, 2012, 12:37 am »
Agreed, unless you've rooted your phone, I wouldn't trust Orbot.

2864
Off topic / Re: It is only fucking Friday SR, so what will you be on?
« on: December 22, 2012, 12:06 am »
Drug -- Time
--------------------
alcohol -- right now
180 mg MDMA oral -- whenever I feel like it
120 mg MDMA insufflated -- 2 hours later

2865
Security / Security Tutorials
« on: December 21, 2012, 10:11 pm »
I decided to write some tutorials on how to use PGP and eventually other things, since there's so much confusion about them.

The first one I wrote is for GPG4USB. I kept it deliberately concise so people are more likely to read it, while pointing out common misconceptions, stumbling blocks, etc., and giving tips that I've learned along the way (bigger key size, etc).

You can find it here

http://32yehzkk7jflf6r2.onion/gpg4usb/

Other stuff that I'm planning is on the main page.
