Hmm interesting, OpenBSD and FreeBSD devs writing correct code is the basis of all security.
Although I like both FreeBSD and OpenBSD, it is insane to think that they are the basis of all security. One thing I don't like about FreeBSD is that it entirely lacks ASLR. It does have one of the most awesome mandatory access control systems I have ever seen, though, and a really good OS virtualization tool in Jails. I don't like that OpenBSD has almost no support at all for virtualization technology, because I like to isolate my network-facing applications from the external IP address. I could use OpenBSD on hardware to make a Tor router though, and isolate Firefox by running it on different hardware that only has access to an internal IP address. I will probably be doing this, and I think OpenBSD is a great choice of OS in this scenario. I also don't like that OpenBSD entirely lacks a mandatory access control system. This seems to piss off a substantial number of security professionals, although the OpenBSD devs/fanboys argue against the use of mandatory access controls as well.
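To make the Jails point concrete, here is a minimal sketch of a jail declaration for recent FreeBSD (9.1+ supports /etc/jail.conf); the jail name, path and address are made up, the idea is just that the network-facing app only ever sees an internal IP:

```
# /etc/jail.conf -- hypothetical jail for a network-facing application
appjail {
    path = "/usr/jail/appjail";
    host.hostname = "appjail.local";
    ip4.addr = "10.0.0.2";          # internal address only, no external IP
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

You would then start it with `jail -c appjail` (or via the jail rc service) and run the application inside it.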
I have heard the following from one security professional: 'Hardened Gentoo can certainly be used for the configuration of a more secure environment than can be obtained by using OpenBSD, however OpenBSD is more secure out of the box and the proper configuration of Hardened Gentoo is an extremely difficult and time consuming task'. I also personally think that Hardened Gentoo is the way to go if you want the best possible security and have the time and skill to configure it.
If you are using binary blobs to manipulate hardware (many Linux drivers do this) then you are basically hoping that there are no bugs/overflows that can be exploited, because you can't see the vendor's source, so hope for the best. If the code is correct from the beginning then it enhances security. This is why OpenBSD has a notoriously slow and methodical approach to coding: they get it right the first time. They also have a very different mechanism for updating CVS, whereas the Debian project has thousands of part-time developers around the world all contributing with no standards (according to BSD devs).
I agree that non-open source software should be avoided. I don't see the link between code being correct from the beginning and code being open source though. However code being open source may very well lead to it being more correct over time. It also lets you check it yourself for correctness, which is a big benefit but also requires that you know how to audit code. I think they are right about Debian.
Take for instance the Debian dev in 2006 who commented out two lines of code to get rid of some strange errors he was seeing. What he didn't know is that he had commented out crucial random number generation code and, in effect, reduced all keys generated on Debian machines (and all their derivatives) to 15 bits of entropy! It wasn't discovered until 2008. For two years anybody could have MITM attacked any OpenSSL connection and ripped root/superuser passwords easily. Anybody could have cracked an id_rsa key in a couple of minutes and just logged straight in.
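To see why 15 bits is game over, here is a toy model of the bug (the hash and "key derivation" are stand-ins, not the real OpenSSL code): with only the process ID as entropy, every key on the system came from one of at most 2**15 seeds, so enumerating them all is trivial.

```python
import hashlib

def derive_key(pid: int) -> bytes:
    # Stand-in for key generation: with the seeding code commented out,
    # the key depends only on the ~15-bit process ID
    return hashlib.sha256(pid.to_bytes(4, "big")).digest()

def brute_force(target_key: bytes):
    # The entire 32768-seed space can be walked in well under a second
    for pid in range(2 ** 15):
        if derive_key(pid) == target_key:
            return pid
    return None
```

This is exactly why precomputed blacklists of all the weak Debian keys could be shipped after the bug went public: the whole keyspace fits in a small file.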
Yes I knew about that Debian vulnerability
Add on top of all that the questionable fast-release code from Xen/VirtualBox, whose primary function is testing and investigation of error-prone software rather than security, and you have bugs stacked on top of bugs.
Xen being used for isolation/security is something that expert-level security researchers actually suggest; the Qubes operating system is one example. The other expert-level security people I talked with about it all said that paravirtualization is the best option for virtualized isolation, that paravirtualization-based isolation is better than no isolation at all, and that it is significantly more secure than full hardware virtualization.
OpenBSD's mantra is the simpler your setup is, the more secure.
OpenBSD's mantra is really that the more correct your setup is, the more secure. This is recognized as fact by everyone. However, you need to take your threat model into account when you decide on the level of complexity required to achieve it. Not using GPG presents you with an environment that has less code complexity; does that mean you should stop using GPG, and that in doing so you are increasing your security? Of course not. The security advantages that some code brings outweigh the increased risk that comes from the added complexity. For me (and everyone I talked with) the advantages of isolating applications from the external IP address and avoiding windowing-system EOP-to-root vulnerabilities outweigh the disadvantages of increased complexity, *even if the least secure sort of virtualization is used* (however, I don't think they took attackers stealing plaintexts into account when they weighed in on full hardware virtualization, so I think you should use OS virtualization, paravirtualization, or nothing, if you don't want to use physical-layer isolation).
The more packages and pf filter rules you have, the more you're decreasing security. I think it's pointless to jail or virtualize any X browser if you need maximum security when you can use lynx or w3m. A vendor certainly wouldn't need Firefox to post here or log into Silk Road, though I haven't tried the main site with lynx; I suspect it works fine. A bitcoin trader could just use their bot and lynx to trade securely.
I don't know if I would agree that additional pf filter rules decrease security, although you should probably be using the least amount required to achieve your goal. Is lynx even maintained anymore? I have thought of going full CLI before, I might look into it again.
For me isolation is a requirement. I will probably start using physical-layer isolation, since it gives the benefits I am looking for and has no security disadvantages. If not, I will probably use paravirtualization or OS virtualization. I won't be using full hardware virtualization anymore though, including for live CDs. I certainly won't stop using isolation; the very important security benefits it brings are far too great for me to stop using the technique at all.
What I meant by Tor exploit was the possibility of a buffer overflow (like the one patched in Dec)
If you use a 64-bit version of OpenBSD, ASLR (together with W^X and stack protection) makes buffer overflow attacks far harder to exploit reliably, although an attacker with an information leak can still defeat the randomization. You will get a similar benefit on any other 64-bit OS that has implemented full ASLR (and some of the benefit if the OS has implemented partial ASLR).
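Some back-of-the-envelope numbers on why 64-bit ASLR matters, assuming the attacker has no information leak, must land on one randomized base address, and each wrong guess crashes the target (the entropy figures you plug in depend on the OS and are not taken from any specific implementation):

```python
def expected_guesses(entropy_bits: int) -> int:
    # On average the attacker has to try half the randomized space
    return 2 ** (entropy_bits - 1)

def success_probability(entropy_bits: int, attempts: int) -> float:
    # Chance that at least one of `attempts` independent guesses lands
    space = 2 ** entropy_bits
    return 1.0 - ((space - 1) / space) ** attempts
```

With the low entropy available on 32-bit (often well under 20 bits), brute-forcing a forking network daemon is practical; with the much larger randomized space available on 64-bit, blind guessing becomes a crash-storm that takes effectively forever, which is why exploits against full 64-bit ASLR usually need an info leak first.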
So isolating Tor behind a firewall/DMZ and using it as a private bridge is a must if extreme anonymity is required, which that guy you quoted agreed with, since physically separating applications under traditional DMZ and VLAN rules is best. You can use redundant OpenBSD failover firewalls (CARP + pf) and NAT to isolate Tor (jailed/chrooted) in the DMZ, running intrusion detection, where it only ever sees internal addresses. An unlikely privilege escalation on the Tor box then wouldn't amount to much, unless the attacker wgets their own Tor malware with built-in snoopware and installs it without you knowing, which is likely if you follow that guy's .onion guide, where he recommends running Tor as root in a VirtualBox instance.
There is little point in running Tor in a chroot or jail, since if it is pwned the attacker can get your IP address even though it is isolated. The only thing it will do is make it harder for the attacker to root the host OS on the machine Tor is being run on; but if you make a dedicated Tor router machine, and Tor is the only thing you are running on it other than the OS, there is little point to this. I will probably make a tutorial for physically isolating Tor using a dedicated OpenBSD machine in a few weeks; if you want to make it first feel free, and I will make a tutorial on how to use paravirtualization and OS virtualization for people who don't have extra machines and for people who use laptops from random locations. You are right that we need to ditch full hardware virtualization and move to more secure solutions ASAP though.
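For anyone who wants a head start on that Tor-router setup, here is a rough pf.conf sketch for a dedicated OpenBSD middlebox. The interface name and port numbers are assumptions; the torrc on the box would need matching `TransPort 9040` and `DNSPort 5353` lines, and you should test it yourself before trusting it:

```
# /etc/pf.conf -- sketch of a transparent Tor router (OpenBSD 4.7+ syntax)
int_if = "em1"        # LAN side; the workstation only ever sees this box

set skip on lo
block all             # default deny, in and out

# push the workstation's DNS and TCP into Tor's DNSPort/TransPort
pass in quick on $int_if inet proto udp to any port 53 rdr-to 127.0.0.1 port 5353
pass in quick on $int_if inet proto tcp to any rdr-to 127.0.0.1 port 9040

# only the _tor user may create outbound connections
pass out on egress inet proto tcp user _tor
```

The workstation behind $int_if never holds an external IP, so even if its browser is rooted, there is no real address for the attacker to read.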
I guess the simple rule is: the more packages and software you run, the more chances of exploits. Running a minimalist OpenBSD/FreeBSD network segregated into firewall/DMZ/VLAN should increase security substantially. Ditching X altogether and using a text-based browser to do business further reduces the attack surface.
I think I already did a good job of explaining the trade-offs between complexity and features. In some cases added features are worth added complexity. You should always aim to use as little code as required to achieve your goals. My goal is isolation of network-facing applications from the external IP address (and my secondary goal is isolation of windowed applications to protect from EOP-to-root attacks). The software-based solution for accomplishing this with the least code is OS virtualization. Paravirtualization also achieves this with much less code complexity than full hardware virtualization, and comes with its own security advantages/disadvantages as compared to OS virtualization. If you use physical isolation you can achieve this goal with no added code complexity, so this is clearly the route to go. However, it doesn't mean that it is the *only* route to go, and it doesn't mean that you shouldn't get these security advantages in other ways if you can't use physical-layer isolation for whatever reason.
On not using TrueCrypt in a VM: this comes from Bruce Schneier's attacks on TC containers. Let's say you spawn a virtual Debian instance, plug in your TC-encrypted USB key, and decrypt it. Your private keys and data are leaked all over that VM, and now you are trusting buggy virtual machines to safeguard this data, which he showed to be leaked by a variety of word processors, Gmail/Google Docs, and other programs while the TC container was open. The VM can also leak to Dom0 through a hundred different methods, which won't matter if your host OS is already full-disk encrypted, but if it's not, forensics can recover data, or malicious exploits in DomU can. Full disk encryption should be mandatory.
Can I get a link to the article please? Anyway, I will keep trying to find it. What you are saying may very well be true, although I am not sure a key is any more likely to leak in a VM than it is on a normal OS. Not using FDE is opening yourself up to forensic teams finding your private key if it ever leaks from RAM, and there are multiple ways this could happen even if you are not using a virtual machine. Whether using a virtual machine increases the risk of the key leaking is something I do not know, and I would love to read about it. FDE should be mandatory. So should keeping your laptop on you at all times. FDE isn't going to protect you from anything but attackers who don't know you are using it or don't know shit about attacking more sophisticated targets. Is your computer by a glass window that faces outside? They will use a laser microphone to keylog you from a distance based on analysis of the sounds you make when you type. Or they will analyze fluctuations in the power grid. Or they will sneak in and use a hardware keylogger. Or they will rush in when they raid you, flash freeze your RAM, put it into a forensics laptop and dump your key. Or they will use hidden cameras. Or they will add a software keylogger to your bootloader. Or they will do one of a dozen other things to steal your encryption key.
Getting much benefit from FDE against an attacker who knows you are using it, and who knows the (many) methods to counter it by stealing passphrases/keys, takes a lot of additional steps:

- memory in encapsulation material
- shielded equipment, not plugged into the electrical grid, to protect against transient electromagnetic pulse analysis
- physical tripwire systems that trigger an immediate shutdown and memory wipe
- physical surveillance and intrusion detection systems to watch for intruders
- keeping your machine on you at all times

Of course, most LE still power down machines during raids.
TC patched a lot of these problems, but his team described them as the 'tip of the iceberg', meaning that if he and a bunch of other cryptographers got together and did this on a regular basis, who knows how many holes they'd discover. This is primarily why I encrypt twice: first encrypt really sensitive data with LUKS or softraid, then wrap it in TC deniable containers, just in case TrueCrypt development isn't up to par and a major exploit is found (while I'm sitting in jail and they're working on my servers).
Using two layers of encryption for file storage is the suggested practice. You should use FDE, and then additionally encrypt your sensitive files (preferably one at a time, although coarser compartmentalization is still a benefit and requires you to remember substantially fewer passwords).
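The logic of layering is easy to demonstrate with a toy cascade (this is NOT real crypto, just a hash-based keystream for illustration): if the outer layer turns out to be broken, the attacker who strips it off still only holds the inner ciphertext, not your plaintext.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (illustration only)
    out = b""
    for i in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it again with the same key undoes it
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"really sensitive data"
inner = xor_layer(secret, b"inner-passphrase")       # first layer (e.g. LUKS)
outer = xor_layer(inner, b"outer-passphrase")        # second layer (e.g. TC)
```

Breaking only the outer passphrase recovers `inner`, which is still ciphertext; both independent keys are needed to reach the plaintext, which is the whole point of encrypting twice with different tools.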
While this forum was down I read a shit ton of ebooks on BSD network security, leased myself a 1U failover rack firewall running OpenBSD for $31/month, set up an internal DMZ/NAT firewall with a $40 computer, placed Tor by itself on a minimal OpenBSD installation (chrooted, console-only access) on an old SPARCserver I had lying around from 1999 (runs awesome!), and am testing the Tails live CD with it as a bridge, for persistence, to avoid it grabbing new guard nodes every time I reboot.
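For reference, the torrc side of a private bridge like that is only a few lines (the address here is a placeholder, not a real bridge):

```
# torrc on the bridge box -- a private, unpublished bridge
BridgeRelay 1
ORPort 443
PublishServerDescriptor 0    # keep it out of the public bridge directory
ExitPolicy reject *:*

# torrc (or the bridge prompt in Tails) on the client side
UseBridges 1
Bridge 192.0.2.10:443
```

Since the bridge never publishes a descriptor, clients can only reach it if you hand them the address yourself, and your entry point into Tor stays stable across live-CD reboots.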
Cool
Mainly because I'm interested in plausible deniability as well as security (if Tails turns out not to fail).
I wouldn't put much faith in Tails turning out to be anything other than fail, personally.
I like the idea of removing the disk, wiping memory and having an OS that's never been touched by biz, so the feds in my country won't find anything on it, though I could always symlink all logs/.bash_history and everything else to /dev/null. I'm also testing a custom OpenBSD .iso I modified, instead of having to rely on Linux and burn yet another new Debian security update for Tails every couple of weeks. As for my now ridiculous rack stack and no doubt sudden surge in power consumption, I can always claim my private bridge is research for democracy activists, which it sort of is, along with some other projects I have. Though I'm leaning towards only using the lynx/w3m browser from now on and command-line gpg. The less software I install and have to trust while doing this kind of work, the better.
I am also considering going full CLI and ditching GUI's for good. I think it is almost a requirement for true security. You can still use virtualization without a GUI btw, one person I talk with was shocked that you don't need a GUI to use virtualization lol.
Ask your security friend what forum software he recommends. I'm leaning towards custom SMF or even some sort of Perl implementation if I can avoid PHP altogether. Now I'm going to attack my own network to see what kind of data leaks.
He would probably suggest Frost or Syndie. Frost is tied to Freenet. Syndie can be used on a number of anonymity networks but it was made by the I2P crew. Syndie actually lets you host a single forum environment over several sorts of anonymity network / server / newsgroups / etc. I have done only a little research on either of these systems, I personally much prefer Tor to Freenet or I2P and think it offers substantially better anonymity than either of those options. I personally like PunBB for a minimalist and secure php forum. Some people are working on programming a decentralized forum in Ruby right now, would you like to join us?