What does SDN stand for - 'Security Disaster' or 'Software Defined' Networking?

18 October 2017

SD - should this stand for security disaster?

We're often talking about software-defined networking (SDN), and how the latest generation of hardware has the intelligence to simplify the provisioning and management of networks. Added brains bring the ability to support new security features like Cisco TrustSec, but we've also moved away from the labour-intensive configuration of individual devices, and towards simplified, centralised administration.

That's great for convenience, allowing admins to configure and support branch networks from a single controller, but if there's a weakness in the system, or if an unauthorised user gains access, isn't software definition a disaster waiting to happen? Does SDN actually stand for security disaster networking? We asked network solutions architect Richard Harvey and senior cyber security consultant Adrian Clarke to put our minds at rest.

What is an SDN?

Adrian: The first thing to cover is that Richard and Daren Vallyon talk about software definition in the network and datacentre, but it's actually gathering pace across IT. For example, once Palo Alto Networks security estates grow to a certain size you administer them with Panorama, a centralised point through which you can make changes across the whole network. It gives us an almost god-like power to make changes and, if there's a bug, to seriously foul things up. These tools are incredibly powerful.

Richard: That's true. When you just used management tools to monitor the network, having them go offline only meant it was harder to troubleshoot. When they evolved into helping us provision stuff they became more critical.

Now in the more advanced SDNs, packets are generally tunnelled or encapsulated by edge devices into the SDN. As intermediate devices don't even see the application-layer traffic inside the tunnel, it's difficult to track, report or troubleshoot any problems that arise along the way unless you have an SDN telemetry capability that combines the inner and outer parts of a flow. In fact, that should be a key consideration when choosing one SDN platform over another.

There's also greater dependency on the controller to tell devices where to send traffic. Obviously there's a failsafe mode, but in theory a fairly simple denial of service attack on the controller or management tools could seriously disrupt network operations.

What other security disaster risks are there?

Richard: There are interesting attack vectors that are only possible because networks are getting smarter. For example, networks can identify devices by type and enforce the appropriate security policies. If a device starts acting suspiciously or maliciously, the AI can react and maybe quarantine it. That's great, but what if someone writes malware to deliberately make your PCs start sending suspect network traffic? Attackers could prod the controller into quarantining thousands of devices, and cause massive disruption within the organisation - effectively employing the network's own defences to create a denial of service (DoS) attack.
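To make the amplification risk concrete, here's a minimal sketch of the kind of safeguard that could blunt it. The `QuarantineGuard` class, its parameters and thresholds are all hypothetical, not any vendor's actual feature: the idea is simply to cap how many endpoints automated policy may quarantine in a sliding window, so that suspicious traffic triggered en masse escalates to a human instead of letting attackers weaponise the network's own defences.

```python
import time
from collections import deque

class QuarantineGuard:
    """Hypothetical rate limiter for automated quarantine actions.

    If more than max_quarantines endpoints would be isolated within
    window_seconds, stop auto-quarantining and escalate to an operator.
    """

    def __init__(self, max_quarantines=20, window_seconds=300):
        self.max_quarantines = max_quarantines
        self.window = window_seconds
        self.events = deque()  # timestamps of recent auto-quarantines

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Discard quarantine events that have aged out of the window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_quarantines:
            return False  # over the cap: hand the decision to a human
        self.events.append(now)
        return True
```

A guard like this trades a little automation for a ceiling on the blast radius: the first few genuinely suspect devices still get isolated instantly, but a malware campaign can't talk the controller into quarantining thousands at once.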

As automation and intelligence become more widespread, other exploits are going to open up. For instance, we're talking to a new WAN wholesaler who gives customers the ability to change a connection's bandwidth on the fly, in software. That's a brilliant feature and I'm confident that the systems behind it are secure, but what if someone were to write a bit of code to capture my password? If I'm the person who's allowed to change the bandwidth, then a malicious actor could point all of the bearers at the internet, up their bandwidth to 1Gbit/s and launch a massive DoS attack from my company.

To achieve something similar in the past - now, even - you'd have had to phone up someone at the wholesaler, who's probably someone you know. Imagine the conversation: "Hi Rich. What's that? You want to change all your sites to disconnect from the corporate network and just be internet connections, and you want a Gig at every site? So that's definitely what you want? And you know it's going to cost you this much, right?"
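That human sanity check is exactly what a software interface removes, but some of it can be rebuilt in policy. The sketch below is purely illustrative (the function, thresholds and return values are invented for this example, not part of any wholesaler's API): it re-creates the questions the person on the phone would have asked before a dramatic bandwidth change went through.

```python
def validate_bandwidth_change(current_mbps, requested_mbps,
                              site_count=1,
                              max_step_factor=2.0,
                              approval_threshold_mbps=500):
    """Hypothetical guardrail for a self-service bandwidth API.

    Returns a (decision, reason) tuple, where decision is one of
    'apply', 'needs_approval' or 'reject'.
    """
    if requested_mbps <= 0:
        return "reject", "bandwidth must be positive"
    # A large single jump (e.g. everything to 1Gbit/s) needs sign-off
    if requested_mbps > current_mbps * max_step_factor:
        return "needs_approval", "more than %.0fx current bandwidth" % max_step_factor
    # So does a change that is modest per site but huge in aggregate
    if requested_mbps * site_count > approval_threshold_mbps:
        return "needs_approval", "aggregate change exceeds approval threshold"
    return "apply", "within policy"
```

The point isn't these particular numbers; it's that a change-control gate in code can preserve the "are you sure, and do you know what it costs?" step even when no human is in the loop.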


Need help balancing risk and reward in your organisation? Get in touch for expert advice, support or strategic recommendations.


Future gazing

Richard: Gartner's Strategic Roadmap for Networking paints an interesting picture when it comes to security. We're on the path to a model where organisations have a service catalogue, from which lines of business will simply pick what they want. So your users are building your IT services in realtime, and in the background your SDN controller, firewall platform and other network intelligence are all interacting to make it a reality.

That's massively open to people using it in inappropriate ways. And while you might think you're offering a controlled catalogue, your IT landscape could take on quite an unforeseen shape. In the old days you'd go up to someone, log a ticket and, provided it was allowable, they'd manually create whatever it was you needed.

Granted that's a menial task from which you might want to free up your IT staff, but when it's a manual process, somebody is at least thinking about the impact of the changes they're making on the rest of your systems. Removing people from the equation is… powerful.

Adrian: And that's a core issue in security: the way things interact in these automated, hybrid solutions can potentially introduce additional risk. If you've got two ways of working - manual and automated - or two systems, and individually they work, say, 99% effectively, putting them together without effective mitigating control measures can create all kinds of unforeseen issues.
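Adrian's 99% figure is worth making concrete. Even in the best case, where the two systems fail independently, chaining them roughly doubles the chance that something goes wrong somewhere; coupling and unforeseen interactions can only make that worse. A quick back-of-the-envelope calculation, assuming independence:

```python
def combined_failure_probability(*effectiveness):
    """Chance that at least one of several independent layers fails,
    given each layer's probability of working correctly (e.g. 0.99)."""
    p_all_work = 1.0
    for e in effectiveness:
        p_all_work *= e
    return 1.0 - p_all_work

# Two 99%-effective systems chained together: the chance of at least
# one misbehaving is 1 - 0.99 * 0.99, i.e. roughly 2%
print(round(combined_failure_probability(0.99, 0.99), 4))  # 0.0199
```

And that ~2% is the floor, not the ceiling: if the manual and automated paths interact badly, the joint failure rate can exceed what independence predicts, which is why mitigating controls at the seams matter.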

If you truly believe SD stands for security disaster, what kind of protections exist?

Richard: In Cisco's networks, the communication between device and controller is well secured, but again this area has become more complex. Whereas before you'd log in to a device with a username and password, now we also have APIs which you can use to hook your organisation's apps into the controller. You have to authenticate the API session, but now you potentially have bits of code around the business that are talking to the controller.

That code has to be secured, too, and the privileges it has should be as tightly constrained as possible. Even in the more relaxed days when everyone had a high-level account, at least they were individuals, and you could go and ask what they were doing with the hardware.

Adrian: This is where companies need to implement role-based access controls where no-one has routine access to the top-level account. If people want to make changes they should have to go through technically enforceable security ingress and egress points, and conform to rigorous change-control procedures.
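As a minimal sketch of what Adrian describes, the snippet below shows a role-based authorisation check in which no role grants routine top-level access, and privileged changes additionally require an approved change-control ticket. The roles, permission names and ticket format are invented for illustration, not any particular product's model.

```python
# Hypothetical RBAC table: note there is no "root does everything" role
ROLE_PERMISSIONS = {
    "viewer":   {"read_config"},
    "operator": {"read_config", "modify_port"},
    "admin":    {"read_config", "modify_port", "modify_policy"},
}

# Actions that must pass through change control, even for admins
PRIVILEGED_ACTIONS = {"modify_policy"}

def authorise(role, action, change_ticket=None):
    """Return True only if the role permits the action, and any
    privileged action carries an approved change-control ticket."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    if action not in allowed:
        return False
    if action in PRIVILEGED_ACTIONS and not change_ticket:
        return False  # privileged change without change-control sign-off
    return True
```

The enforcement point is the controller's API layer, so the same rules apply whether the request comes from a person at a console or a script calling the API.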

We tend to talk most about malicious actors and malware, but the more powerful and centralised your administration interface, the greater the potential impact of human error. That's another reason why change-control procedures are essential.

Richard: Organisations also need to continue building safeguards into their networks: taking a layered and segmented approach that minimises risk and mitigates the damage from breaches or errors. These software-based network controllers you're putting into your environment: to have them do connectivity and security as well - which some vendors would like you to do - that's potentially quite a dangerous thing.

Adrian: So you design a security solution that has different layers designed to counter the different attack vectors. You have, for example, a firewall and endpoint protection, and they may be from two different vendors. And you might have two layers of firewall, again from two vendors. That's why when it comes to security, a trusted partner shouldn't favour a vendor: it should favour a solution.

So introducing intelligence does introduce risk?

Richard: Yes, but also it works the other way. Cisco Stealthwatch, for example, uses the network's intelligence to spot unusual and potentially malicious activity. Smarter networks let you implement more effective, granular segregation, which is a key way to mitigate the effects of a breach, exploit or human error.

We've talked about the greater potential of human errors when we deal with more powerful administration interfaces, but the flipside is that greater intelligence and automation in the network can also help prevent human error. For example, look at the intelligent port provisioning we designed for the RFU at Twickenham, which ensures that only the appropriate devices get access to privileged network resources. Because the platform automatically detects and provisions each device, there's no need for error-prone manual re-patching or reconfiguring.

Overall, it comes back to common sense and good security practice. You need those anyway, but as the landscape changes they only become more vital.

Adrian: It comes back also to the importance of specific security expertise. Networks are becoming easier to manage, yet also more complex to fully understand as their details are increasingly abstracted by software. Accordingly, some threats or risks are subtle and elusive. We talk about how the landscape is changing: it's understanding the detail of that and knowing where to look to make sure you're protected effectively, and where to monitor to be sure those protections are working.

To return to the initial question of whether SD stands for 'software defined' or 'security disaster': if we introduce these new tools and make no other changes, then categorically there is increased risk. However, if we deploy this new technology with the appropriate mitigating control measures, then we can quite happily manage any extra risk, and benefit from the many advantages.



Image: alfire13/Flickr, Creative Commons