Wednesday, February 28, 2018

FINGERPRINTING OF SOFTWARE-DEFINED NETWORKS

Introduction

Software-defined networking (SDN) is an umbrella term encompassing several kinds of network technology aimed at making the network as agile and flexible as the virtualized server and storage infrastructure of the modern data center.

SDN eases network management by centralizing the control plane and separating it from the data plane.

The separation of planes, however, introduces new vulnerabilities, since the difference in packet-processing times at each plane allows an adversary to fingerprint the network's packet-forwarding logic.

In this paper, we study the feasibility of fingerprinting the controller-switch interactions by a remote adversary, whose aim is to acquire knowledge about specific flow rules that are installed at the switches.

Network devices can also notify the controller about network events (e.g., the reception of certain packets) and device state changes.

For example, several reactive control-plane implementations configure network devices to send notifications to the controller according to an installed policy (e.g., when a received packet does not match any of the installed flow rules).

This notification triggers the controller to perform a series of operations, such as installing the appropriate forwarding rules at the switches or reserving network resources on a given path.
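This reactive control loop can be sketched as follows. The class and method names are hypothetical illustrations, not the API of any specific controller framework; the point is the split between the data-plane fast path (a flow-table hit) and the slower packet-in round trip to the controller.

```python
# Minimal sketch of a reactive SDN control loop (hypothetical data model,
# not a specific controller framework's API).

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match fields -> forwarding action

    def receive(self, packet, controller):
        match = (packet["src"], packet["dst"])
        if match in self.flow_table:
            return self.flow_table[match]      # fast path: no controller contact
        # Table miss: notify the controller with a packet-in event.
        return controller.packet_in(self, packet)

class Controller:
    def packet_in(self, switch, packet):
        # Compute (or look up) a forwarding decision, then install it so
        # that subsequent packets of this flow stay in the data plane.
        action = f"forward_to:{packet['dst']}"
        switch.flow_table[(packet["src"], packet["dst"])] = action
        return action

sw = Switch("s1")
ctrl = Controller()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2"}
sw.receive(pkt, ctrl)                 # table miss: triggers packet-in
sw.receive(pkt, ctrl)                 # now handled entirely in the data plane
```

The extra controller round trip on the first packet of a flow is precisely the timing signal that the fingerprinting adversary studied in this work tries to observe.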

Problem Setting

SDN separates the control and data planes by defining a switch's programming interface and a protocol to access that interface, i.e., the OpenFlow protocol.

The controller leverages the OpenFlow protocol to access the switch's programming interface and configure the forwarding behavior of the switch's data plane.

The communication between the controller and switches is established using an out-of-band control channel.


Problem Statement

The main objective of our work is to study the ability of a remote adversary to identify whether an interaction between the controller and the switches (and a subsequent rule installation) has been triggered by a given packet. 

The absence of a controller-switch interaction typically provides evidence that the flow rules that handle the received packet are already installed at the switches. 

Otherwise, if a communication between the controller and the switches is triggered, then this suggests that the received packet requires further examination by the controller, e.g., since it does not have any matching entry stored at the switch’s flow table, or because the controller requires additional information before installing a forwarding decision at the switches.
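This decision can be illustrated with a toy timing-based rule: if the first probe packet of a flow takes noticeably longer than later probes of the same flow, the adversary infers that a controller-switch interaction (and rule installation) took place. The threshold and RTT values below are illustrative, not figures from the paper.

```python
# Toy decision rule for remote fingerprinting: an elevated first-packet RTT,
# relative to later packets of the same flow, suggests that the first packet
# triggered a packet-in and a rule installation. Threshold is illustrative.

def infer_rule_installation(rtts_ms, threshold_ms=5.0):
    first, rest = rtts_ms[0], rtts_ms[1:]
    baseline = sorted(rest)[len(rest) // 2]   # median RTT of later probes
    return (first - baseline) > threshold_ms

# First packet suffers the controller round trip; later ones do not.
print(infer_rule_installation([42.0, 31.0, 30.5, 31.2]))   # True
# All packets match already-installed rules: no extra first-packet delay.
print(infer_rule_installation([31.1, 31.0, 30.8, 31.3]))   # False
```

In practice the measured RTTs are noisy, which is why the paper collects many probe trains per client and studies the accuracy of such inference statistically.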

An active adversary injects probe packets into the network and measures their timing. In contrast, a passive adversary cannot inject packets and only monitors the traffic exchanged between the server and the client. Notice that passive adversaries are hard to detect with standard intrusion detection systems, since they do not generate any extra network traffic.


Experimental Setup

The controller is configured to minimise the processing delay for an incoming packet-in event, i.e., we only require the controller to perform a table lookup and retrieve pre-computed forwarding rules in response to packet-in events. 

Furthermore, the controller always performs bi-directional flow installation; that is, the handling of a packet-in event triggers the installation of a pair of rules, one per flow direction, at each involved switch. 
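Bi-directional flow installation can be sketched as follows; the data model (plain dictionaries standing in for flow tables) is ours, for illustration only.

```python
# Sketch of bi-directional flow installation (hypothetical data model):
# one packet-in event yields a pair of rules, one per flow direction,
# on every switch along the path.

def install_bidirectional(switches, src, dst):
    for sw in switches:                 # each sw is a dict acting as a flow table
        sw[(src, dst)] = "forward"      # forward direction
        sw[(dst, src)] = "forward"      # reverse direction

path = [dict(), dict(), dict()]         # e.g., three switches on the path
install_bidirectional(path, "10.0.0.1", "8.8.8.8")
# Every switch on the path now holds exactly one rule per direction.
```

Installing both directions at once ensures that the server's replies to the probe packets do not themselves trigger a second packet-in, which would otherwise add a confounding controller round trip to the measurements.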

We ensure that the controller’s CPU is not overloaded during our measurements. We deploy a cross-traffic generator on an AMD dual core processor running at 2.5 GHz to emulate realistic WAN traffic load on the switches’ ports that were used in our study. 

The generated cross traffic follows a Pareto distribution with 20 ms mean and 4 ms variance [7]. To analyze the effect of the data link bandwidth on the fingerprinting accuracy, we bridge our SDN network to the Internet using 100 Mbps and 1 Gbps links, respectively, by means of a firewall running on an AMD Athlon dual core processor 3800+ machine.
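One way to generate inter-arrival times matching these moments is to solve the standard Pareto moment formulas for the shape and scale parameters; the derivation below is ours, since the paper reports only the mean and variance.

```python
import random

# Parameterize a Pareto distribution so inter-arrival times have a 20 ms
# mean and 4 ms^2 variance (our derivation; the paper does not list the
# shape alpha or scale x_m directly).
#
#   mean = alpha * x_m / (alpha - 1)
#   var  = mean^2 / (alpha * (alpha - 2))
#   => alpha^2 - 2*alpha = mean^2 / var = 100  =>  alpha = 1 + sqrt(101)

mean_ms, var_ms2 = 20.0, 4.0
alpha = 1 + (1 + mean_ms**2 / var_ms2) ** 0.5     # ~11.05
x_m = mean_ms * (alpha - 1) / alpha               # ~18.19 ms

random.seed(7)
# random.paretovariate(alpha) samples a Pareto with x_m = 1; rescale by x_m.
samples = [x_m * random.paretovariate(alpha) for _ in range(100_000)]
print(sum(samples) / len(samples))   # empirical mean, close to 20 ms
```

A large shape parameter (about 11) makes the tail comparatively light, so the generated cross traffic stays close to its 20 ms mean while still being heavy-tailed in form.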

For the purpose of our experiments, we collect measurement traces between an Intel Xeon E3-1230 3.20 GHz CPU server with 16 GB RAM and 20 remote clients deployed across the globe. Table I details the specifications and locations of the clients used in our experiments. 

In our testbed, the server and the software switch were co-located on the same machine. Note that, by reducing the time required for rule installation to a minimum, our testbed emulates a scenario that is particularly hard for fingerprinting.

Data Collection

To collect timing information based on our features, we deployed 20 remote clients across the globe that exchange UDP-based probe packet trains with the local server.

Notice that we rely on UDP for transmitting packets since Internet gateways may filter TCP SYN or ICMP packets. Each probe train consists of: 


• A CLEAR packet signaling the start of the measurements. Upon reception of this packet, the controller deletes all the entries stored within the flow tables of the OpenFlow switches in Pcs.

• One second after the transmission of the CLEAR packet, the client transmits four MTU-sized packet pairs. Different packet pairs are sent with an additional second of separation.

• One second after the transmission of the last packet pair, another CLEAR packet is sent to clear all flow tables.

• Two packets separated by one second finally close the probe train.

We point out that all of our probe packets belong to the same network flow, i.e., they are crafted with the same packet header. For each received packet of every train, the local server issues a short reply (e.g., 64 bytes). We maintain a detailed log of the timing information relevant to the sending and reception of the exchanged probe packets.

When measuring dispersion, we account for out-of-order packets; this explains negative dispersion values. For each of our 20 clients, we exchange 450 probe trains on the paths Pcs and Psc to the server. 
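The dispersion of a packet pair can be sketched as follows; the field names are ours. Ordering the pair by sequence number rather than by arrival time is what makes reordered pairs yield negative values.

```python
# Sketch of packet-pair dispersion from receiver-side timestamps (field
# names are ours). Dispersion is the arrival-time gap between the second
# and first packet of a pair, taken in sequence-number order, so a
# reordered pair produces a negative value.

def dispersion_ms(arrivals):
    """arrivals: list of (sequence_number, rx_timestamp_ms) tuples."""
    by_seq = dict(arrivals)
    return by_seq[1] - by_seq[0]

print(dispersion_ms([(0, 100.0), (1, 100.5)]))   # 0.5  (in order)
print(dispersion_ms([(1, 100.0), (0, 100.5)]))   # -0.5 (reordered)
```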

Half of these probe trains are exchanged before noon, while the remaining half is exchanged in the evening. In our measurements, we vary the number of OpenFlow switches that need to be configured in reaction to the exchanged probe packets. Namely, we consider the following four cases where a probe packet triggers the reconfiguration of some of the OpenFlow switches:

(1) one hardware switch, 
(2) two hardware switches, 
(3) three hardware switches, and 
(4) the software switch. 

We remark that the choice of the configured hardware switches in our testbed (cf. Figure 1) has no impact on the measured features, since we ensure that the remaining hardware switches already have matching rules installed.

Furthermore, we remark that packets of a probe train only traverse the software switch in case (4), i.e., when it is configured. In total, our data collection phase lasted from April 27, 2015 until October 27, 2015, during which 869,201 probe packets were exchanged with our local server across all clients/configurations, amounting to almost 0.66 GB of data.
