DEFCON is always entertaining as it’s the largest hacker conference in North America. Back to back with its corporate counterpart, Black Hat, it generally draws thousands of hacker-type people to Las Vegas every summer. The related parties, shenanigans, and drama surrounding it are legendary, and this year was no different.
Below are my thoughts on the talks I was able to attend.
Because Steganography is one of my interests (and research fields) I was very interested in seeing this talk. James started off by talking a bit about the target mediums for StegoFS, which included a range of media types. The important bit was that the target media is essentially intended to be stored in public forums like YouTube, Google Images, etc., where steganographically embedded data may potentially be corrupted during upload via the codec conversion, compression, etc. that the storage site may perform. He spoke about some mitigations for this, such as Forward Error Correction (one of the techniques I considered for use in my tool, SteganRTP). He then went on to speak about some interesting techniques that I had never heard of or thought about before, such as encoding data as a barcode overlaid on top of a video, made to look like old-timey video lines/streaks, or embedding data within a watermark image in a source video, which is then obscured by the real watermark image overlaid by the hosting site, such as the YouTube logo. This is interesting because if you view the video (and capture it raw) as it is played from YouTube, you likely won’t be able to extract the embedded data, as it is overlaid by the legitimate logo; however, if you download the original video from the site, which doesn’t include the legitimate watermark, the stego-watermark is visible and available for data extraction. More along the topic of the filesystem, he indicated that StegoFS hides individual blocks of the filesystem in different media objects and has some interesting mechanisms for retrieval and/or reconstruction (via FEC, parity striping, etc.) of missing blocks of the filesystem. You can check out the paper and slides (and hopefully soon the code) at his resources website.
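For the curious, the parity-striping idea can be sketched in a few lines of Python. This is my own toy illustration of how XOR parity lets you rebuild one lost filesystem block, not StegoFS code; the block contents and names here are made up:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length blocks together; XORing all data blocks
    produces the parity block, and XORing the survivors with the parity
    block reproduces the single missing block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three filesystem blocks, each hidden in a separate media object:
blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(blocks)   # stored in a fourth media object

# Suppose the media object holding blocks[1] gets mangled on upload:
surviving = [blocks[0], blocks[2]]
recovered = xor_blocks(surviving + [parity])
assert recovered == blocks[1]
```

Real FEC schemes (Reed-Solomon and friends) can survive the loss of more than one block, but simple parity shows why scattering blocks across hostile public hosting can still yield a recoverable filesystem.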
Death Envelope: Medieval Solution to a 21st Century Problem
This was an interesting talk, albeit a complete departure from the types of tech-heavy talks we usually see at DEFCON. Matt was speaking about the problem of death in our digital age, and how you can create a mechanism for the transfer of certain knowledge to someone else after your death, which likely includes access passwords, financial information, etc. The mechanism he described was either a real, physical paper envelope, to be opened by the holder in the event of your death, or a hybrid of an envelope and encrypted digital storage, where the key to accessing the digital storage is stored in the envelope. The primary characteristics of this envelope should be that it is secure and tamper-evident, and part of the process of maintaining an envelope includes inspecting it regularly (which the envelope holder should not take as an insult), as well as recreating it as often as the information inside it requires updating. One of the best ways to seal a paper envelope is with a wax seal; the seal itself can be duplicated, but getting wax to seal an envelope in exactly the same way twice is extremely difficult. If you take photographs of your own sealed envelope, it should be trivial to detect a breach and an attempt at re-sealing it. Matt indicated that he will be setting up and maintaining his death envelope website soon, and also recommended the Wax Works website as a place to purchase a wax seal. At the very least this talk gives me yet another excuse to purchase a wax seal, which is something I’ve considered doing many times.
New Ideas for Old Practices – Port Scanning Improved
Fabian “Fabs” Yamaguchi & FX
Port scanning has been done time and again; the goal with this project, called PortBunny, was to create a tool that does one job, does it well, and does it in a consistent amount of time. PortBunny only does TCP SYN scans, none of the other types of scans you might find in nmap such as XMAS, UDP, etc. An interesting claim the presenters made was that a UDP scan has zero value against a properly firewalled host, which most hosts these days are due to local firewalling practice. While I agree with the second part of the statement (many, many more hosts these days employ local firewalling), I don’t agree with the first part: while UDP probes provide no differentiation between closed and filtered ports, they obviously do provoke a response from open ports, which does provide value if you are in fact attempting to enumerate reachable UDP services on a host. The presenters indicated that they run PortBunny on an embedded system, and that it’s implemented as a kernel module to avoid the constraints and traffic modifications that come with sending packets through the kernel from userland. This also keeps the tool focused on accuracy and speed. The tool is engineered around algorithmic processes rather than “patchwork” like large conditional trees in the code. They also focused on using its algorithmic base to achieve auto-tuning, which works in a manner similar to IP congestion control: it is based on the network’s capacity rather than speed/latency measurements, and accomplishes its task by interleaving probes with known responses (triggers) into batches of probe packets. If a trigger packet drops, the entire batch it was associated with is scheduled for a resend. I was especially happy to see this included in such a tool, as I had just been working with similar concepts in the development of the auto-tuning code of the Metasploit DNS Cache Poisoning exploit with HD the week before. So, as people always say, the proof is in the pudding.
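The trigger idea is simple enough to sketch. This is my own toy simulation of the concept (not PortBunny code, which is a C kernel module): probes go out in batches alongside trigger probes whose responses are guaranteed, and if a trigger goes unanswered, the whole batch is considered suspect and resent:

```python
import random

def scan(ports, send_batch, batch_size=8):
    """Trigger-verified batch scanning: only accept a batch's results when
    all of its interleaved trigger probes were answered; otherwise the
    network dropped packets, so resend the entire batch."""
    results, queue = {}, list(ports)
    while queue:
        batch, queue = queue[:batch_size], queue[batch_size:]
        while True:
            responses, triggers_ok = send_batch(batch)
            if triggers_ok:          # all triggers answered: results trustworthy
                results.update(responses)
                break                # otherwise: loop and resend the same batch
    return results

def fake_send_batch(batch, loss=0.3):
    """Stand-in for the wire: randomly 'drops' a batch (trigger unanswered)
    30% of the time; pretend ports 22 and 80 are the only open ones."""
    if random.random() < loss:
        return {}, False
    return {p: "open" if p in {22, 80} else "closed" for p in batch}, True

random.seed(0)
results = scan(range(20, 30), fake_send_batch)
```

The elegant part is that nothing here measures latency: capacity is inferred purely from whether known-good responses made it back, which is why it behaves like congestion control rather than a timeout-tuning loop.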
The presenters then made some comparisons to nmap’s TCP SYN scan, which resulted in nmap taking about 12 minutes whereas PortBunny took about 15 seconds for the same network. This was enough to convince me to at least check it out. Unfortunately, the tool’s goal of only doing one thing and doing it well is also the source of some of its shortcomings. The results indicated in the comparisons are impressive, however they don’t hold up in all situations. Introducing rate-limiting and QoS metrics into the network, or firewalling the target hosts in certain ways, causes the tool’s approach to break down, and these conditions must be detected so that a more appropriate approach can be used. The techniques for detecting these conditions and differentiating between rate-limiting and regular network congestion included checking the RTT value. The presenters also made the good point that rate limiting is based on the number of packets, whereas congestion control is based on packet size, hence requiring the different approaches to auto-tuning. The tool is available at the PortBunny webpage.
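That packets-versus-bytes distinction suggests one way to tell the two apart, which I’ll sketch here as my own toy simulation (the presenters’ actual detection also leans on RTT, which I’m omitting): send the same byte budget as many small packets and as few large ones, and compare drop rates:

```python
def classify_limiter(make_channel):
    """Rate limiters count *packets*; congestion depends on *bytes*.
    Send an equal byte budget two ways and compare drop rates: if many
    small packets die while large ones sail through, a packet-count
    limiter is in play. (Illustrative heuristic only.)"""
    BUDGET = 64_000
    small = make_channel()
    small_rate = sum(small(64) for _ in range(BUDGET // 64)) / (BUDGET // 64)
    large = make_channel()
    large_rate = sum(large(1400) for _ in range(BUDGET // 1400)) / (BUDGET // 1400)
    return "rate-limiting" if small_rate > 2 * large_rate + 0.05 else "congestion"

def packet_limiter(pps=500):
    """Fake channel that drops everything after the first `pps` packets."""
    state = {"n": 0}
    def send(size):
        state["n"] += 1
        return 1 if state["n"] > pps else 0   # 1 = dropped
    return send

def byte_limiter(bps=32_000):
    """Fake channel that drops everything after the first `bps` bytes."""
    state = {"b": 0}
    def send(size):
        state["b"] += size
        return 1 if state["b"] > bps else 0
    return send
```

Against the packet-count limiter the small-packet burst gets hammered while the large-packet burst is untouched; against the byte limiter both bursts lose roughly the same fraction, so the heuristic reports congestion.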
This was a really interesting talk about behavioral prediction engines and collaborative filtering. It focused largely on behavior related to desire, such as the systems that propose music you might like based on your ratings of previously heard songs, or the same for movies, food, etc. The prevalent example used throughout the talk was NetFlix and its recommendation engine, likely due to their “NetFlix Challenge” a while back, which was meant to entice work on better engines and methods than what they were using at the time. Ian covered the prevalent existing approaches, which are generally item-based, following the form of “you liked item X, so you might like this other similar item Y”, or user-based, following the form of “you liked item X, so you might like item Y, which a user similar to you liked”. The speaker’s approach was to mathematically model both of these existing approaches and apply gradient descent algorithms to them so that the model is trained to accurately predict past known behavior. Gradient descent is also used in other applications such as training neural networks, and can identify properties of items on its own, which may not necessarily be the same properties a human would think useful for categorization. This approach is roughly 5% better than the current algorithm used by NetFlix, but there are a few that are even better. Ian’s future work will focus on bringing more meta-data into the equation, such as web browser and OS characteristics, mail client used, etc., as well as circumstance tags such as time of day, where the user is within a website, etc. More properties mean more correlations and thus better performance, however it was noted that useful correlations tend to plateau at around 64 characteristics per item. More characteristics obviously slow down training as well, however the benefit here is that training only needs to happen once and is then maintained by the algorithm as it is fed new data to analyze.
More details of Ian’s approach to this problem can be found at the SenseArray website.
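To make the “learned properties” idea concrete, here’s a toy latent-factor model trained by gradient descent. This is my own sketch of the general technique, not Ian’s implementation: each user and item gets a small feature vector, and descent nudges the vectors until their dot products reproduce the known ratings; the learned dimensions are whatever the data demands, not human-chosen genres:

```python
import random

def train(ratings, n_users, n_items, k=2, lr=0.05, steps=2000, seed=0):
    """Fit user vectors U and item vectors V so that dot(U[u], V[i])
    approximates each known rating r, via plain stochastic gradient
    descent on squared error."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - sum(a * b for a, b in zip(U[u], V[i]))
            for f in range(k):   # gradient step on both vectors at once
                U[u][f], V[i][f] = (U[u][f] + lr * err * V[i][f],
                                    V[i][f] + lr * err * U[u][f])
    return U, V

# three users rate three items on a 1-5 scale; user 0 never rated item 2
ratings = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 2, 2), (2, 1, 5), (2, 2, 1)]
U, V = train(ratings, 3, 3)
prediction = sum(a * b for a, b in zip(U[0], V[2]))  # fill the gap
```

Scale `k` up to ~64 features and the ratings matrix up to NetFlix size and you have the skeleton of the approach; the plateau Ian mentioned is about how many such learned dimensions remain useful.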
All your Sploits (and Servers) are belong to us
Panel: David Mortman, Rich Mogull, Chris Hoff, Robert “RSnake” Hansen, David Maynor
RSnake went first and spoke about web authentication. It boiled down to this: everyone is trying to do two-factor auth but doing it poorly, the .bank TLD is a dumb idea that doesn’t solve anything, and the gridmark authentication method is dumb, especially so because it has the inverse properties of normal passwords, in that longer ones are easier to break than shorter ones. He touched on something that I’ve been pointing out for a long time: SiteKey is pretty much pointless, because nowadays many attackers just MITM your connection, get the correct SiteKey information from the legitimate site using the username you just gave them, and then present that SiteKey information back to the user being attacked.
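The relay is trivially simple, which is the whole point. Here it is as pseudo-code with hypothetical stand-in classes (none of this reflects any real bank’s interface); the anti-phishing image verifies the site to the user, but a live man-in-the-middle can fetch and replay it on demand:

```python
def mitm_login_flow(victim, real_bank):
    """Sketch of the SiteKey relay: the phishing page forwards each piece
    of victim input upstream and mirrors the response, so the 'secret'
    image checks out and the victim proceeds to type the password."""
    username = victim.submit_username()              # victim enters name on phishing page
    sitekey_image = real_bank.get_sitekey(username)  # attacker relays it to the real site
    victim.show(sitekey_image)                       # victim sees the correct image...
    password = victim.submit_password()              # ...and happily enters the password
    return username, password                        # attacker now holds both

class Bank:                                          # hypothetical stand-in
    def get_sitekey(self, user):
        return {"alice": "blue-teapot.png"}[user]

class Victim:                                        # hypothetical stand-in
    def submit_username(self): return "alice"
    def show(self, img): self.seen = img
    def submit_password(self): return "hunter2"

victim = Victim()
creds = mitm_login_flow(victim, Bank())
```

Nothing in the scheme binds the image to the channel it arrives over, so any shared secret the real site will echo back on username alone is relayable.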
Larry and Rich went next and talked about hidden wireless APs, an idea spawned by Renderman’s TeddyNet (2005), where he hid a wireless AP inside a Teddy bear. Their project was to hide an AP in something more likely to be found in a business environment, such as a UPS, printer, or fax machine. They had to use a smaller AP device to fit in many of the places they wanted to hide it, and so went with the LaFonera AP, which also supports OpenWRT. In many cases they also had to include an Ethernet hub so as not to have extra network ports beyond what is normal for the host device. The host devices are meant to remain functional as well, so gutting them entirely is not an option; fitting the AP usually involved trimming extra bits off of PCBs, re-soldering connections, tapping internal power, etc. Some other recommendations for places to hide included inside fire boxes, climate control systems, punch-card time-clocks, projectors, IP phones, and even other wireless APs.
Rich then talked a bit about “evil twin”, which is essentially a rogue AP which overpowers and impersonates a legitimate one. I didn’t find this all that interesting as it has been done by various people for years now, and an even better approach, faking ANY AP that’s probed for, has been standard operating procedure for interception for a while now. HD recently began integrating Karma with Metasploit (Karmetasploit) to do this in an entirely automated fashion (and then attack the wireless client), which Rich mentioned toward the end of this segment.
Larry talked about data-mining using meta-information from various document types such as Microsoft Office documents. He mentioned the Goolag search engine by cDc and the Metagoofil and Maltego tools as well. He also recommended regularly using metadata removal tools to mitigate some of this information leakage.
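To show how little effort this mining takes, here’s a stdlib-only sketch (my own, not Metagoofil) that pulls the leak-prone author fields out of the newer OOXML formats; a .docx/.xlsx/.pptx is just a zip archive containing a `docProps/core.xml`:

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespaces defined by the OOXML core-properties part:
DC = "{http://purl.org/dc/elements/1.1/}"
CP = "{http://schemas.openxmlformats.org/package/2006/metadata/core-properties}"

def office_metadata(doc):
    """Extract author-identifying metadata from an OOXML document
    (path or file-like object). Returns None for absent fields."""
    with zipfile.ZipFile(doc) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    return {
        "creator": root.findtext(DC + "creator"),
        "last_modified_by": root.findtext(CP + "lastModifiedBy"),
    }
```

Usernames, internal naming conventions, and software versions gleaned this way feed straight into social engineering and target enumeration, which is exactly why Larry’s advice to strip metadata before publishing matters. (The older binary .doc format leaks similar fields but needs a different parser.)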
Maynor’s segment was mostly unintelligible slurred mumbling, which apparently isn’t due to alcohol as it was pointed out that he doesn’t drink anymore. The first part was something about checklists and how his credit card number got stolen and used in Guatemala. Then something about an IE toolbar that does vulnerability scans against the web server you’re looking at a page from. What I couldn’t figure out is, out of the people that want to do those types of scans, what percentage of them use IE? I’m guessing a very, very low percentage. So… yea.
Chris, throughout all of the previous segments, provided running commentary via slides shown on one side of the room’s projectors, which was quite amusing.
Making a Text Adventure Documentary
I only saw the last bit of this talk, but what I saw was interesting. Jason is working on a documentary about text-based adventure games. He mentioned that this documentary focuses on many of the tangents he’s come across while making it, which is a bit of a departure from other documentaries. He visited Colossal Cave, which is the real cave, “Bedquilt”, that the text game “Adventure” was based on. In order to get access to the cave he had to register as a film crew, even though he was just one person, and had some difficulty getting all the paperwork in order to be able to go. He finally made it to the site with some very experienced cavers as guides and got lots of really cool pictures and footage. You can find out more about this documentary project at the Get Lamp website.