This is a long one, containing detailed ramblings regarding
- my Qubes experience
- - bugs and maturity issues
- - my preliminary opinion
- security philosophies
- - "capabilities" and RedHat
- - "just add another operating system" security
- - security-vs-reliability-vs-performance
- - on merging security philosophies
- - - Attack surface
- - - Outlook (Qubes and Genode on seL4)
bugs and maturity issues
I didn't try 3.1 yet, but played around some more with 3.2 yesterday.
I have to correct my wifi statement:
WiFi was only available as wlp0s0 within NET vm for the first two bootups.
I was able to manage the wifi card via Dom0 XFCE toolbar after the third reboot (the icon didn't appear before that).
The XFCE network settings dialog within the menu continued to provide no functionality at all.
Disposable VM Shortcuts did not work for me.
I was able to start Firefox via the "untrusted" domain shortcut, but was not able to play any sounds.
Shut down the untrusted VM, attached the audio hardware, restarted the untrusted VM ... networking somehow broke down during that process, as in Firefox couldn't resolve/reach any sites.
These experiences reinforce my initial "wait till version number doubles" impression.
my preliminary opinion
I conclude (for me) that Qubes' virtualization approach is rather novel
and might evolve into quite the game changer for endpoint security, which currently is in a very sad state.
I hope that the Qubes project continues to soar, integrates additional security layers (defense in depth),
and maybe someday also tackles design issues like code bloat and software selection
(which have been summarized in the capabilities/RedHat section below).
I'm looking forward to seeing what Qubes v6.4 will be like - see the last topic, "Outlook (Qubes and Genode on seL4)".
security philosophies
droppointalpha wrote:
Karl Klammer wrote:
2-3) systemd and fedora
Okay, the methodology seems rather odd from my minimalistic "every line of code has a potential bug" point of view, but I can understand that one uses the path of least resistance when trying to get a project off the ground.
Just to clarify: Fedora with systemd and X11 runs as Dom0 - correct?
I certainly understand the approach of less complexity, less vulnerability. Perhaps we are taking a side trail to theory land, but I believe we are hitting a point where complexity-related bug/exploit issues are unavoidable, and system strategies should focus more on mitigation rather than relying solely on solid, thoroughly debugged software. Of course, debugging and patching carry on, but solid software simply advances too slowly, with (it seems) only Red Hat putting up the money and manpower to significantly advance Linux capabilities. I suppose it is telling that many of the security-conscious government agencies are dealing with Red Hat more and more, rather than Microsoft, for secure systems (and cheaper, I hear, to boot).
Any technical security evaluation between a non-braindead open-source Unix and a Windows is bound to favor the Unix,
as long as the required applications (e.g. Oracle, Java) run on both systems.
The only exceptions that I am currently aware of revolve around centralized organisational security demands, aka Active Directory
(group policies, password policies, antivirus/endpoint settings, software/patch rollout).
BTW: Fedora + systemd + X11 do indeed run in Dom0, just checked.
"capabilities" and RedHat
Yes, it is mostly Red Hat who designs and pushes "capabilities" like systemd, pulseaudio, avahi, dbus and gnome down our throats.
I am not sure if this is really such a good thing, considering these two Linus quotes from https://igurublog.wordpress.com/2014/04 ... t-systemd/
Linus Torvalds wrote: …a lot of the fear and uncertainty over systemd may not be so much about systemd, but the fear and loathing over radical changes that have been coming down the pike over the past few years, many of which have been not well documented, and worse, had some truly catastrophic design flaws that were extremely hard to fix.
Linus Torvalds wrote: …Kay Sievers and Lennart Poettering often have the same response style to criticisms as the GNOME developers [read other Red Hat developers] — go away, you’re clueless, we know better than you, and besides, we have commit privs and you don’t, so go away.
"just add another operating system" security
This is kinda tongue-in-cheek, as I just can't resist trolling the "just add another operating system" approach with a famous Theo quote from 2007.
Theo de Raadt wrote:
x86 virtualization is about basically placing another nearly full kernel, full of new bugs,
on top of a nasty x86 architecture which barely has correct page protection.
Then running your operating system on the other side of this brand new pile of shit.
You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers
who can't write operating systems or applications without security holes,
can then turn around and suddenly write virtualization layers without security holes.
There was a Xen exploit in June 2016 that allowed an untrusted VM to take over Qubes, which demonstrates why defense in depth is a good idea,
as opposed to the current "just trust Xen" approach of a "vanilla" kernel without grsecurity, passwordless sudo, and a huge Fedora userland in dom0.
https://tech.slashdot.org/story/16/07/3 ... n-the-host
To soften the blow: OpenBSD is currently working on their own x86 virtualization software (vmm), so ... yeah.
security-vs-reliability-vs-performance
droppointalpha wrote:
The only real chance there is to double down and truly hone anything to a razor edge of reliability and security is for performance demand to flat-line, redirecting monetary drivers away from increasing features/capabilities/power and towards resolving issues of stability and security.
1) Security and reliability go hand in hand (a system can't be reliable if it's easily hacked).
2) Most real-world performance issues I have encountered are caused by stupid application design,
which no operating system can fix, e.g. sizeOfArray(select * from table) to display a counter.
3) Security will always impact performance, but the penalty is often smaller than one might assume:
the virtual machine penalty is 10-20%; the L4 microkernel penalty is 5-20%; OpenBSD runs on 30+ year old hardware like the VAX.
4) Moore's law seems to hold up for processors and memory (but not disks).
Thus the easiest way to get that 10-20% performance penalty back is to just magically wait for 3 to 6 months (12-25%).
Maybe write some more test cases during that time...
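The application-design point in 2) can be made concrete with a small sketch (an in-memory SQLite table as a stand-in for "table"; sizeOfArray is the hypothetical helper from above): fetching every row just to count it, versus letting the database count.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

# Anti-pattern: ship every row to the client, then count them in memory.
# This is the "sizeOfArray(select * from table)" pattern from above.
rows = con.execute("SELECT * FROM t").fetchall()
slow_count = len(rows)

# Sane version: let the database do the counting; one integer comes back.
fast_count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]

assert slow_count == fast_count == 10_000
```

No OS or hypervisor tuning recovers the wasted transfer in the first variant; the fix has to happen in the application.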
on merging security philosophies
The design principles of Qubes (container, endpoint, end user) and OpenBSD (small, audit, privsep) will eventually integrate, and then we can shift focus to the hardware side ... one can dream, right?
Example: vs-top
The "vs-top" and "cyber-top" government laptops sold by GeNUA are promising attempts at such a combined design.
https://www.genua.de/fileadmin/download ... atures.pdf
They basically use an L4 microkernel (Fiasco.OC) to
- run three operating systems (e.g. l4-"secure windows", l4-"insecure windows", l4-"fw/vpn openbsd")
- and also to pass messages between the systems and hardware (e.g. mouse-l4-win-l4-bsd-l4-wlan).
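To illustrate that broker role, here is a toy model in plain Python (purely conceptual; the domain names and routing are made up and bear no relation to the real Fiasco.OC API): every hop between hardware and guest systems goes through the microkernel, so domains never touch each other or the devices directly.

```python
# Toy model of a microkernel as the sole message broker between domains.
# Domain names and routes are illustrative, not real Fiasco.OC objects.
class Microkernel:
    def __init__(self):
        self.domains = {}

    def register(self, name, handler):
        self.domains[name] = handler

    def send(self, src, dst, message):
        # All IPC is mediated here: a compromised domain can only reach
        # the peers the kernel routes it to, never the hardware directly.
        if dst not in self.domains:
            raise PermissionError(f"{src} -> {dst}: no such route")
        return self.domains[dst](src, message)

kernel = Microkernel()
kernel.register("openbsd-fw", lambda src, m: f"fw filtered {m!r} from {src}")
kernel.register("wlan-driver",
                lambda src, m: kernel.send("wlan-driver", "openbsd-fw", m))

# A packet from the wifi hardware travels wlan -> kernel -> firewall domain:
print(kernel.send("hardware", "wlan-driver", "packet"))
# -> fw filtered 'packet' from wlan-driver
```

The point of the sketch is only the topology: trust decisions live in one tiny mediator instead of in every guest OS.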
2011 bachelor thesis on porting openbsd to l4 fiasco.oc: https://www.isti.tu-berlin.de/fileadmin ... Fiasco.pdf
2011 corresponding codebase of the OpenBSD L4 port: https://github.com/chrissicool/l4openbsd
Attack surface
Fiasco.OC has 30-40k LOC (lines of code), which is
one order of magnitude (~x15) smaller than systemd (550k LOC as of May 2014),
two orders of magnitude (~x500) smaller than the Linux kernel (20,000k LOC), and thus most likely
three to four orders of magnitude smaller than all the Fedora stuff running inside Qubes' "trusted" dom0.
I am not really sure about Xen, as I have found different numbers ranging from 60k to 300k LOC.
Translation: There are thousands of Qubes bugs for each Fiasco.OC bug, when also accounting for Fedora/dom0, which may or may not be fair.
The seL4 microkernel for ARM and x86 is even more interesting than Fiasco.OC, as it only measures 10k LOC (x2,000 smaller than the Linux kernel)
AND is mathematically proven to be correct (meets its spec, is realtime, no side effects, no buffer overflows, ...).
The mathematical proof comprises 180k LOC, that is 18 lines of "badass unit test" for each line of code.
seL4 was open-sourced in late 2014 and is currently only fully verified/proven on ARM;
the x86 code/proof for SMP and 64-bit are marked as experimental.
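The ratios above follow from trivial arithmetic on the LOC figures as quoted in this post (Fiasco.OC taken at the middle of its 30-40k range; Xen left out because its count is disputed):

```python
# Lines-of-code figures as quoted in this post, in thousands of lines.
loc = {"seL4": 10, "Fiasco.OC": 35, "systemd": 550, "Linux kernel": 20_000}

for name, kloc in loc.items():
    ratio = loc["Linux kernel"] / kloc
    print(f"{name:12s} {kloc:6d}k LOC -> Linux kernel is {ratio:.0f}x larger")

# seL4's proof-to-code ratio: 180k lines of proof for 10k lines of kernel.
print("proof lines per seL4 line:", 180 / loc["seL4"])  # 18.0
```

With 550/35 ≈ 16 and 20,000/35 ≈ 570, the "x15" and "x500" figures in the text are rough order-of-magnitude markers rather than exact quotients.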
See also: L4 Microkernels: The Lessons from 20 Years of Research and Deployment (2016-04), focusing on Fiasco.OC and seL4.
Outlook (Qubes and Genode on seL4)
Joanna of Qubes and Gerwin of L4.verified had an interesting discussion on utilizing Xen vs seL4 back in May 2010:
http://theinvisiblethings.blogspot.de/2 ... s-and.html
Some academic efforts for porting Qubes to seL4 are already underway:
https://my.cse.unsw.edu.au/thesis/thesi ... hp?ID=3289
https://www.mail-archive.com/devel@sel4 ... 00752.html
http://sel4.systems/pipermail/devel/201 ... 00312.html
Genode seems to be the purer/smaller/more fine-grained/more secure attempt at seL4 end-user compartmentalisation, and thus might take longer to reach market than Qubes+seL4.
Cheers,
Karl
FYI: Edited multiple times from Monday till Saturday, as I continued down the microkernel rabbit hole.