Wednesday, February 24, 2016

Data Center Microsegmentation

Data center microsegmentation can provide enhanced security for east-west traffic within the data center. 

Data centers historically have been protected by perimeter security technologies. These technologies include firewalls, intrusion detection and prevention platforms, and custom devices, all designed to aggressively analyze incoming traffic and help ensure that only authorized users can access data center resources. These services interdict and analyze north-south traffic: that is, traffic into and out of the data center. These services can be very effective at the perimeter, but they generally have not been provisioned to analyze device-to-device traffic within the data center, commonly referred to as east-west traffic. 


Historical Data Center Security Protection:

Modern application design, with the popularity of the N-tier web, application, and database application model, has vastly expanded the ratio of east-west traffic to north-south traffic. By some estimates, data centers may have five times as much east-west traffic as north-south traffic as dozens or hundreds of web, application, and database servers communicate to deliver services. 
Classic data center designs assume that all east-west traffic occurs in a well-protected trust zone. Any device inside the data center is generally authorized to communicate with any other device in the data center. Because all data center devices exist inside a hardened security perimeter, they should all be safe from outside incursion. Recent data breaches, however, have shown that such an assumption is not always valid. Whether through advanced social engineering or attacks staged through compromised third parties, determined malefactors have shown the ability to penetrate one data center device and use it as a platform to launch further attacks inside the data center.


Microsegmentation and the New Data Center:

Microsegmentation divides the data center into smaller, more-protected zones. Instead of a single, hardened perimeter defense with free traffic flow inside the perimeter, a microsegmented data center has security services provisioned at the perimeter, between application tiers, and even between devices within tiers. The theory is that even if one device is compromised, the breach will be contained to a smaller fault domain. As a security design, microsegmentation makes sense. However, if such a data center relies on traditional firewall rules and manually maintained access control lists, it will quickly become unmanageable. An organization cannot simply create a list of prohibited traffic between nodes; that list will grow exponentially as attack vectors, vulnerabilities, and servers multiply.

The only workable solution is a trusted declarative (whitelist) security model in which security services specify the traffic that is permitted and everything else is denied. This solution requires tight integration between application and network services, and that integration is lacking in most data center deployments. Indeed, network analysis frequently reveals that applications are using unexpected traffic flows to deliver services, demonstrating the traditional disconnect between application, server, and network managers.
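
To make the whitelist model concrete, here is a minimal Python sketch of a default-deny policy check in which only explicitly declared flows between tiers are permitted; the tier names, ports, and rules are hypothetical examples, not a recommended policy.

# Minimal sketch of a whitelist (default-deny) east-west policy.
# Tier names, ports, and permitted flows are hypothetical examples.
PERMITTED_FLOWS = {
    ("web", "app"): {8080},   # web tier may reach the app tier on port 8080
    ("app", "db"): {3306},    # app tier may reach the database tier on port 3306
}

def is_permitted(src_tier, dst_tier, dst_port):
    # Allow a flow only if it was explicitly declared; deny everything else.
    return dst_port in PERMITTED_FLOWS.get((src_tier, dst_tier), set())

print(is_permitted("web", "app", 8080))  # True: a declared flow
print(is_permitted("web", "db", 3306))   # False: never declared, denied by default

Note that such a permit list grows only with the traffic the applications actually need, not with every new attack vector.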


Application-Centric Networking:

Applications provide critical access to company data and reporting metrics associated with business outcomes. Essentially, delivery of secure applications is the most important task of the data center, so it necessarily follows that networking and infrastructure must be application-centric. The terms and constructs of applications must determine the flow, permissions, and services that the network delivers. The application profile must be able to tell the network how to deliver the required services.

This approach is what is known as a declarative model, which is an instantiation of promise theory. Devices are told what services to provision, and they promise to do it or to tell the control system what they can do. The application profile specifies the policy that must be provisioned. The network administrator declares that intended policy to the network devices, which then configure themselves to deliver the services (filtering, inspection, load balancing, quality of service [QoS], etc.) that are necessary for that application. This approach provides the best way to manage applications and network devices at scale.
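
As a rough sketch of the declarative idea (the names and fields below are invented purely for illustration), an application profile states what the application needs, and each device interprets that intent and promises what it can deliver:

# Hypothetical declarative application profile: it states WHAT the application
# needs, not HOW any particular device should be configured.
app_profile = {
    "application": "ordering-portal",
    "tiers": ["web", "app", "db"],
    "services": {
        "web": ["load-balancing", "tls-offload"],
        "app": ["stateful-firewall"],
        "db":  ["stateful-firewall", "qos-gold"],
    },
}

def render(device_type, profile):
    # Each device interprets the profile and promises the services it can deliver.
    wanted = {svc for tier_services in profile["services"].values() for svc in tier_services}
    capabilities = {
        "firewall": {"stateful-firewall"},
        "adc": {"load-balancing", "tls-offload", "qos-gold"},
    }
    delivered = wanted & capabilities.get(device_type, set())
    return f"{device_type} promises: {sorted(delivered)}"

print(render("firewall", app_profile))  # firewall promises: ['stateful-firewall']
print(render("adc", app_profile))       # adc promises: ['load-balancing', 'qos-gold', 'tls-offload']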

The declarative model contrasts sharply with the traditional imperative model. In the imperative model, operators manually configure each network and Layer 4 through 7 device, specifying such parameters as VLANs, IP addresses, and TCP and User Datagram Protocol (UDP) ports. Even with scripting and third-party management tools, this process is a time-consuming and complicated endeavor. Moreover, configurations (such as firewall rules and access control lists [ACLs]) are often left unchanged as the network load changes, because altering a series of manually maintained ACLs can have unanticipated consequences. This approach leads to longer and longer firewall rule sets, longer and more complicated processing, and increased device load. As data centers have grown from tens to hundreds to thousands of devices, the imperative model is no longer sufficient.

With the declarative model, the responsibility for interpreting policy falls to the network, and the responsibility for programming devices falls to the devices themselves. This model has some important advantages over the imperative model:
● Scalability: Network management capability scales linearly with the network device count.
● Intelligence: The network maintains a logical view of all devices, endpoints, applications, and policy and intelligently specifies the subset of that view that each device needs to maintain.
● Holistic view of the data center: Services can truly be made available on demand and provisioned more efficiently, and unused services can be retired.



It’s About Policy:

An application-centric view of the infrastructure allows applications to request services from the network, and it provides the network intelligence to efficiently deliver those services. Security, whether macrosegmented or microsegmented, is just another service.

Security must be part of the overall data center policy. It must be considered and provisioned as an integral part of the services that the data center provides; it cannot stand alone. Security is usually provisioned as one of a series of services to be applied in a chain. Any modern data center must provide security and segmentation as one of the many services that are provisioned as dictated by the overall policy and compliance objectives.

Microsegmentation may require hundreds or thousands of security policy rules. It would be very difficult to provision all those rules manually. A declarative model enables the required scale, because the administrator only needs to pass the security and services policies to the network, which then interprets them and provisions the physical and virtual devices as needed. The administrator defines these policies within the logical model, which is an abstracted view of the network and everything connected to it. One important advantage of this logical model is that every device and service is an object, and logical objects can be created, replicated, modified, and applied as needed. One security and service policy can be applied a hundred times, or a hundred different, custom policies can be applied. This flexibility and exceptional level of control is inherent in the abstraction of the physical network into the logical model.
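
A toy Python sketch of that reuse, with invented names and fields, shows a single policy object bound to many endpoint groups in one step; the controller, not the operator, is then responsible for rendering it into device configuration.

from dataclasses import dataclass

# Hypothetical reusable policy object in a logical model (illustration only).
@dataclass(frozen=True)
class SecurityPolicy:
    name: str
    permitted_ports: frozenset

web_to_app = SecurityPolicy(name="web-to-app", permitted_ports=frozenset({443, 8080}))

# The same logical policy is attached to a hundred endpoint groups at once;
# translating it into per-device rules is the control system's job.
endpoint_groups = [f"app-tier-{i}" for i in range(100)]
bindings = {group: web_to_app for group in endpoint_groups}

print(len(bindings), "groups share policy", bindings["app-tier-42"].name)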

Requirements for the Modern Data Center

To deliver this policy effectively, the data center infrastructure must also support:

● Virtual servers, with flexible placement and workload mobility: Virtual machine managers and the network must communicate with each other, enabling rapid machine and service creation and virtual switch configuration for network policy.

● Physical servers, with flexible placement and mobility: Physical bare-metal servers are and will continue to be widely deployed and used. They must not be treated like less important elements by the data center infrastructure. They must have access to the same policies, services, and performance as virtual servers.

● Hypervisors from a number of vendors as well as open-source hypervisors: Most data centers do not rely on a single hypervisor, and an infrastructure that effectively supports only one hypervisor drastically restricts performance and service flexibility.

● Open northbound interfaces to orchestration platforms and development and operations (DevOps) tools, such as OpenStack, Puppet, and Chef: Network automation goes well beyond virtual machine management, and the infrastructure needs to support a wide range of automation platforms.

● Open southbound protocols to a broad ecosystem of physical and virtual devices: Security and other services are provided by a number of vendors, and the modern data center infrastructure must support those devices in the same declarative manner that it supports routers and switches.

● Services at line rate, whether 1, 10, or 40 Gbps or beyond: Today’s data center traffic patterns rely on consistently low latency, high speed, and efficient fabric utilization, and any infrastructure that doesn’t provide all three should not be considered. All three characteristics must be available to all devices and services, without gateway devices or other latency-inducing extra hops.

● Tight logical-physical integration for provisioning and troubleshooting: The network control system cannot just tell the network devices how to configure themselves; there must be full, two-way communication. This tight integration provides a real-time view of network utilization, latency, and other parameters, enabling the control system to more effectively deploy needed services and troubleshoot any problems that may occur. Without this integration, provisioning is performed without any insight, and troubleshooting requires examining two distinct network structures: a virtual one and a physical one.

Cisco Application Centric Infrastructure

Cisco® Application Centric Infrastructure (ACI) provides true microsegmentation in an effective manner. Cisco ACI abstracts the network, devices, and services into a hierarchical, logical object model. In this model, administrators specify the services (firewalls, load balancers, etc.) that are applied, the kind of traffic they are applied to, and the traffic that is permitted. These services can be chained together and are presented to application developers as a single object with a simple input and output. Connection of application-tier objects and server objects creates an application network profile (ANP). When this ANP is applied to the network, the devices are told to configure themselves to support it. Tier objects can be groups of hundreds of servers, or just one; all are treated with the same policies in a single configuration step.

Cisco ACI provides this scalability and flexibility to both physical servers and virtual machines, and it provides virtual machine management to several market-leading hypervisors. If permitted by the network policy, vendor A’s virtual machine can speak freely to vendor B’s virtual machine as if they were in the same hypervisor.

Cisco ACI exposes a published northbound Representational State Transfer (REST) API through XML and JavaScript Object Notation (JSON), as well as a Python software development kit (SDK), allowing easy integration with popular tools such as Cisco UCS® Director, OpenStack, Puppet, and Chef. The system also provides an open source southbound API, which allows third-party network service vendors to implement policy control.
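
As a hedged illustration of driving such a northbound REST API from Python with the requests library, the sketch below authenticates and posts a small JSON object; the controller address and credentials are placeholders, and while the payload shape follows ACI's published object model, treat it as a pattern to verify against the current API reference rather than a tested call.

import requests

APIC = "https://apic.example.com"   # placeholder controller address
session = requests.Session()
session.verify = False              # lab convenience only; validate certificates in production

# Authenticate with an ACI-style login payload (credentials are placeholders).
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Push a simple tenant object as JSON under the policy universe.
tenant = {"fvTenant": {"attributes": {"name": "example-tenant"}}}
response = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
print(response.status_code, response.text[:200])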

The Cisco ACI fabric is built on Cisco Nexus® 9000 Series switches, the fastest and greenest 40-Gbps switches on the market today. A Cisco ACI fabric is deployed in a spine-and-leaf architecture and supports advanced equal-cost multipath (ECMP) routing, enabling 40 percent greater network efficiency than conventional architecture. Every end device, whether physical or virtual, connects to a leaf port, and every device’s traffic is switched at line rate through the fabric. There are no gateway devices to add latency or interfere with policy application.

The fabric is abstracted by the logical model, not virtualized. Therefore, the network control systems have full visibility into the physical domain as well as the virtual domain. For example, Cisco ACI maintains a real-time measure of latency through every path in the network, which other networking solutions can’t do. The system maintains health scores for all devices, applications, and tenants, quickly flagging any degraded condition and programmatically reconfiguring the network as needed.

Conclusion

Microsegmentation provides internal control of traffic within the data center and can greatly enhance a data center’s security posture. Cisco ACI is the only solution available today that enables true microsegmentation with the performance, scalability, and visibility that modern applications demand.

Monday, February 22, 2016

QUIC

QUIC (Quick UDP Internet Connections, pronounced quick) is an experimental transport layer network protocol designed by Jim Roskind at Google, initially implemented in 2012, and announced as experimentation broadened in 2013. QUIC supports a set of multiplexed connections between two endpoints over User Datagram Protocol (UDP), and was designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion.
QUIC's main goal is to improve perceived performance of connection-oriented web applications that are currently using TCP. It also provides a venue for rapid iteration of congestion avoidance algorithms, placing control into application space at both endpoints, rather than (the relatively slow to evolve) kernel space.

UDP’s (and QUIC’s) counterpart in the protocol world is basically TCP (which in combination with the Internet Protocol (IP) makes up the core communication language of the Internet). UDP is significantly more lightweight than TCP, but in return, it features far fewer error correction services than TCP. This means that the sending server isn’t constantly talking to the receiving server to check whether packets arrived, and arrived in the right order, for example. That’s why UDP is great for gaming services. For these services, you want low overhead to reduce latency, and if the server didn’t receive your latest mouse movement, there’s no need to spend a second or two to fix that because the action has already moved on. You wouldn’t want to use it to request a website, though, because you couldn’t guarantee that all the data would make it.
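
The difference shows up directly in socket code. The minimal Python sketch below sends one message as a UDP datagram (fire and forget) and one over TCP (handshake first, then a reliable, ordered byte stream); the host and port are placeholders.

import socket

HOST, PORT = "example.com", 9999   # placeholder endpoint

# UDP: no handshake, no delivery or ordering guarantees -- just send a datagram.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"player moved left", (HOST, PORT))
udp.close()

# TCP: three-way handshake first, then a reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))          # blocks until the handshake completes
tcp.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
tcp.close()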

http://techcrunch.com/2015/04/18/google-wants-to-speed-up-the-web-with-its-quic-protocol/

With QUIC, Google aims to combine some of the best features of UDP and TCP with modern security tools.

On a typical secure TCP connection, it takes two or three round-trips before the browser can actually start receiving data. Using QUIC, a browser can immediately start talking to a server it has talked to before. QUIC also introduces a couple of new features like congestion control and automatic re-transmission, making it more reliable than pure UDP.
With SPDY, which later became the basis for the HTTP/2 standard, Google had already developed another protocol with many of the same goals as QUIC, but HTTP/2 still runs over TCP and still incurs some of the same latency costs.

QUIC Geek FAQ (for folks that know about UDP, TCP, SPDY, and stuff like that)


What is QUIC?  QUIC is the name for a new experimental protocol, and it stands for Quick UDP Internet Connection.  The protocol supports a set of multiplexed connections over UDP, and was designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency. An experimental implementation is being put in place in Chrome by a team of engineers at Google.

What are some of the distinctive techniques being tested in QUIC?  QUIC will employ bandwidth estimation in each direction into congestion avoidance, and then pace packet transmissions evenly to reduce packet loss.  It will also use packet-level error correction codes to reduce the need to retransmit lost packet data.  QUIC aligns cryptographic block boundaries with packet boundaries, so that packet loss impact is further contained.
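
As a simplified illustration of packet-level error correction (only the idea, not QUIC's actual scheme, which was more involved), the Python sketch below adds one XOR parity packet to a group so that any single lost packet can be rebuilt without a retransmission.

# Simplified XOR-based forward error correction: one parity packet protects a
# group of equal-length packets, so a single loss can be repaired locally.
def xor_parity(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    # Rebuild the single missing packet (marked None) from the survivors and the parity.
    missing = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, byte in enumerate(pkt):
                missing[i] ^= byte
    return bytes(missing)

group = [b"pkt-A___", b"pkt-B___", b"pkt-C___"]
parity = xor_parity(group)
arrived = [group[0], None, group[2]]      # packet B was lost in transit
print(recover(arrived, parity))           # b'pkt-B___' rebuilt without retransmission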

Doesn’t SPDY already provide multiplexed connections over SSL?  Yes, but SPDY currently runs across TCP, and that induces some undesirable latency costs (even though SPDY is already producing lower latency results than traditional HTTP over TCP).

Why isn’t SPDY over TCP good enough?  A single lost packet in an underlying TCP connection stalls all of the multiplexed SPDY streams over that connection. By comparison, a single lost packet for X parallel HTTP connections will only stall 1 out of X connections. With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically impact (stall) at most one stream. TCP’s congestion avoidance via a single congestion window also puts SPDY over TCP at a disadvantage compared with several HTTP connections, each with a separate congestion window. Separate congestion windows are not impacted as much by a packet loss, and we hope that QUIC will be able to more equitably handle congestion for a set of multiplexed connections.
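
The head-of-line blocking difference can be sketched with a toy Python model; the stream count and the choice of lost packet below are arbitrary.

# Toy model: 4 multiplexed streams, 1 lost packet.
streams = ["s1", "s2", "s3", "s4"]
lost_packet_stream = "s2"

# SPDY over a single TCP connection: TCP delivers bytes strictly in order, so
# every stream behind the gap waits until the lost packet is retransmitted.
stalled_over_tcp = set(streams)

# QUIC over UDP: streams are delivered independently, so only the stream that
# actually lost a packet stalls.
stalled_over_quic = {lost_packet_stream}

print("stalled with SPDY over TCP:", sorted(stalled_over_tcp))    # all four streams
print("stalled with QUIC over UDP:", sorted(stalled_over_quic))   # just s2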

Are there any other reasons why TCP isn’t good enough?  TCP, and TLS/SSL, routinely require one or more round trip times (RTTs) during connection establishment.  We’re hopeful that QUIC can commonly reduce connection costs towards zero RTTs. (i.e., send hello, and then send data request without waiting).

Why can’t you just evolve and improve TCP under SPDY?  That is our goal. TCP support is built into the kernel of operating systems. Considering how slowly users around the world upgrade their OS, it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas, and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

Why didn’t you build a whole new protocol, rather than using UDP? Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic.  Since we couldn’t significantly modify TCP, we had to use UDP.  UDP is used today by many game systems, as well as VOIP and streaming video, so its use seems plausible.

Why does QUIC always require encryption of the entire channel?  As we learned with SPDY and other protocols, if we don’t encrypt the traffic, then middle boxes are guaranteed to (wittingly, or unwittingly) corrupt the transmissions when they try to “helpfully” filter or “improve” the traffic.

UDP doesn’t have congestion control, so won’t QUIC cause Internet collapse if widely adopted?  QUIC employs congestion controls, just as it employs automatic retransmission to support reliable transport.  QUIC will attempt to be fair with competing TCP traffic.  For instance, when conveying Q multiplexed flows, and sharing bandwidth with T concurrent TCP flows, we will try to use resources in the range of Q / (Q+T) bandwidth (i.e., “a fair share” for Q additional flows).
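
That fair-share target is simple arithmetic; the tiny Python example below computes the intended bandwidth fraction for Q multiplexed QUIC flows sharing a bottleneck with T TCP flows.

def quic_fair_share(q, t):
    # Target fraction of the bottleneck for Q QUIC flows competing with T TCP flows.
    return q / (q + t)

# Example: 4 multiplexed QUIC flows sharing a link with 12 concurrent TCP flows.
print(f"{quic_fair_share(4, 12):.0%} of the bandwidth")   # 25% -- a fair share for 4 extra flows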

Why didn’t you use existing standards such as SCTP over DTLS?  QUIC incorporates many techniques in an effort to reduce latency. SCTP and DTLS were not designed to minimize latency, and this is significantly apparent even during the connection establishment phases. Several of the techniques that QUIC is experimenting with would be technically difficult to incorporate into existing standards. As an example, each of these other protocols requires several round trips to establish a connection, which is at odds with our target of 0-RTT connectivity overhead.

How much do QUIC’s techniques reduce latency? This is exactly the question we are investigating at the moment, and why we are experimenting with various features and techniques in Chromium. It is too early to share any preliminary results - stay tuned.

Is there any way to disable QUIC, if I really want to avoid running it on my Chromium browser?  Yes.  You can visit about:flags and set the “Experimental QUIC protocol” to “Disabled.”

Where can I learn more about QUIC?  If you want a lot of background, and need material to help you sleep, you can look at the QUIC Design Document and Specification Rationale.  For cryptographers that wonder how well the i’s are dotted, and t’s crossed, there is a QUIC Crypto Specification.  If you’d rather see client code, you can take a look at the Chromium source directory.  If you’re wondering about what a server might have to do, there is some prototype server code.  Finally, if you just want to think about the bits on the wire, and how this might look, there is an evolving wire specification.

Is there a news group for discussing QUIC?  Yes.  proto-quic@chromium.org a.k.a., https://groups.google.com/a/chromium.org/d/forum/proto-quic




Understand Secure Sockets Layer (SSL)

Understanding SSL

Regardless of where you access the Internet from, the connection between your Web browser and any other point can be routed through dozens of independent systems. Through snooping, spoofing, and other forms of Internet eavesdropping, unauthorized people can steal credit card numbers, PIN numbers, personal data, and other confidential information.

The Secure Sockets Layer (SSL) protocol was developed to transfer information privately and securely across the Internet. SSL is layered beneath application protocols such as HTTP, SMTP, and FTP and above the connection protocol TCP/IP. It is used by the HTTPS access method. Figure 1 illustrates the difference between a non-secure HTTP request and a secure SSL request.
Transport Layer Security (TLS) is the successor of Secure Sockets Layer (SSL); they are both cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging, and other data transfers. There are slight differences between SSL and TLS, but the protocol remains substantially the same.


How It Works

When a client and server communicate, SSL ensures that the connection is private and secure by providing authentication, encryption, and integrity checks. Authentication confirms that the server, and optionally the client, is who they say they are. Encryption through a key-exchange then creates a secure “tunnel” between the two that prevents any unauthorized system from reading the data. Integrity checks guarantee that any unauthorized system cannot modify the encrypted stream without being detected.

SSL-enabled clients (such as a Mozilla™ or Microsoft Internet Explorer™ web browser) and SSL-enabled servers (such as Apache or Microsoft IIS™) confirm each other’s identities using digital certificates. Digital certificates are issued by trusted third parties called Certificate Authorities (CAs) and provide information about an individual’s claimed identity, as well as their public key. Public keys are a component of public-key cryptographic systems. The sender of a message uses a public key to encrypt data. The recipient of the message can only decrypt the data with the corresponding private key. Public keys are known to everybody; private keys are secret and only known to the owner of the certificate. By validating the CA digital signature on the certificates, both parties can ensure that an imposter has not intercepted the transmission and provided a false public key for which they have the correct private key.

SSL uses both public-key and symmetric key encryption. Symmetric key encryption is much faster than public-key encryption, but public-key encryption provides better authentication techniques. So SSL uses public key cryptography for authentication and for exchanging the symmetric keys that are used later for bulk data encryption.
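
That hybrid approach (public-key cryptography to protect a symmetric key, then symmetric cryptography for the bulk data) can be sketched with the third-party Python cryptography package; this illustrates the concept only and is not the actual SSL/TLS key-exchange procedure.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Server key pair (in real SSL, the public key arrives inside a CA-signed certificate).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client: generate a symmetric session key and protect it with the server's public key.
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = server_pub.encrypt(session_key, oaep)

# Server: recover the session key with its private key; from here on, both sides
# use fast symmetric encryption (AES-GCM) for the bulk data.
recovered_key = server_key.decrypt(wrapped_key, oaep)
nonce = os.urandom(12)
ciphertext = AESGCM(recovered_key).encrypt(nonce, b"confidential payload", None)
print(AESGCM(session_key).decrypt(nonce, ciphertext, None))   # b'confidential payload'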

The secure tunnel that SSL creates is an encrypted connection that ensures that all information sent between an SSL-enabled client and an SSL-enabled server remains private. SSL also provides a mechanism for detecting if someone has altered the data in transit. This is done with the help of message integrity checks. These message integrity checks ensure that the connection is reliable. If, at any point during a transmission, SSL detects that a connection is not secure, it terminates the connection and the client and server establish a new secure connection.

SSL Transactions

The SSL transaction has two phases: the SSL Handshake (the key exchange) and the SSL data transfer. These phases work together to secure an SSL transaction.


Figure 2 illustrates an SSL transaction:
1. The handshake begins when a client connects to an SSL-enabled server, requests a secure connection, and presents a list of supported ciphers and versions.
2. From this list, the server picks the strongest cipher and hash function that it also supports and notifies the client of the decision. Additionally, the server sends back its identification in the form of a digital certificate. The certificate usually contains the server name, the trusted certificate authority (CA), and the server’s public encryption key. The server may require client authentication via a signed certificate as well (required for some on-line banking operations); however, many organizations choose not to widely deploy client-side certificates due to the overhead involved in managing a public key infrastructure (PKI).
3. The client verifies that the certificate is valid and that a Certificate Authority (CA) listed in the client’s list of trusted CAs issued it. These CA certificates are typically locally configured.
4. If it determines that the certificate is valid, the client generates a master secret, encrypts it with the server’s public key, and sends the result to the server. Only the server can decrypt the master secret, because only it holds the corresponding private key.
5. The client and server then convert the master secret to a set of symmetric keys called a keyring or the session keys. These symmetric keys are common keys that the server and browser can use to encrypt and decrypt data. The session keys remain hidden from third parties because the master secret they are derived from was exchanged under the server’s public key and can be read only with the server’s private key.
6. This concludes the handshake and begins the secured connection allowing the bulk data transfer, which is encrypted and decrypted with the keys until the connection closes. If any one of the above steps fails, the SSL handshake fails, and the connection is not created.

Though the authentication and encryption process may seem rather involved, it happens in less than a second. Generally, the user does not even know it is taking place. However, the user is able to tell when the secure tunnel has been established since most SSL-enabled web browsers display a small closed lock at the bottom (or top) of their screen when the connection is secure. Users can also identify secure web sites by looking at the web site address; a secure web site’s address begins with https rather than the usual http.
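
From the application’s point of view, that entire handshake is triggered by a few lines of code. The Python standard-library sketch below opens a TLS connection and then reports the negotiated protocol version and cipher suite; the hostname is just an example.

import socket
import ssl

hostname = "www.example.com"              # example server
context = ssl.create_default_context()    # loads the locally trusted CA certificates

with socket.create_connection((hostname, 443)) as raw_sock:
    # wrap_socket performs the full SSL/TLS handshake described above.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("protocol:", tls_sock.version())   # e.g. 'TLSv1.2'
        print("cipher:  ", tls_sock.cipher())    # (suite name, protocol, secret bits)
        print("server subject:", tls_sock.getpeercert().get("subject"))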

SSL Crypto Algorithms

SSL supports a variety of different cryptographic algorithms, or ciphers, that it uses for authentication, transmission of certificates, and establishing session keys. SSL-enabled devices can be configured to support different sets of ciphers, called cipher suites. If an SSL-enabled client and an SSL-enabled server support multiple cipher suites, the client and server negotiate which cipher suites they use to provide the strongest possible security supported by both parties.
A ciphersuite specifies and controls the various cryptographic algorithms used during the SSL handshake and the data transfer phases. Specifically, a cipher suite provides the following:
--> Key exchange algorithm: The asymmetric key algorithm used to exchange the symmetric key. RSA and Diffie Hellman are common examples.
--> Public key algorithm: The asymmetric key algorithm used for authentication. This decides the type of certificates used. RSA and DSA are common examples.
--> Bulk encryption algorithm: The symmetric algorithm used for encrypting data. RC4, AES, and Triple-DES are common examples.
--> Message digest algorithm: The algorithm used to perform integrity checks. MD5 and SHA-1 are common examples.
For instance, the cipher suite “RSA-RC4-MD5” means that RSA certificates are used for both authentication and key exchange, while RC4 is used as the bulk encryption cipher, and MD5 is used for digest computation.
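
A quick way to see which suites a client is willing to offer is the Python standard library’s ssl module; the sketch below restricts a context to an example OpenSSL cipher string and prints the suites that survive the filter.

import ssl

context = ssl.create_default_context()
# Example filter: ephemeral key exchange (forward secrecy) with AES-GCM bulk encryption.
context.set_ciphers("ECDHE+AESGCM")

for suite in context.get_ciphers():
    # Each entry encodes the key exchange, authentication, bulk cipher, and digest.
    print(suite["name"], "-", suite["protocol"])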

SSL and the OSI Model

The SSL protocol sits on top of TCP at the transport layer. In the OSI model, application layer protocols such as HTTP or IMAP handle user application tasks such as displaying web pages or running email servers.
Session layer protocols establish and maintain communications channels. Transport layer protocols such as TCP and UDP handle the flow of data between two hosts. Network layer protocols such as IP and ICMP provide hop-by-hop handling of data packets across the network.
SSL operates independently and transparently of other protocols so it works with any application layer and any transport layer protocol. This allows clients and servers to establish secure SSL connections without requiring knowledge of the other party’s code.

Figure 3 illustrates how SSL functions in the OSI model:
An application layer protocol hands unencrypted data to the session/transport layer, where SSL encrypts it and hands it down through the layers. When the server receives the data at the other end, it passes it up through the layers to the session layer, where SSL decrypts it and hands it off to the application layer. Since the client and the server have gone through the key negotiation handshake, the symmetric key used by SSL is the same at both ends.

Cisco Catalyst Multigigabit Ethernet

Q. What is Cisco Multigigabit Ethernet?
A. Cisco® Multigigabit Ethernet is a unique Cisco innovation available on the new Cisco Catalyst® Ethernet Access Switches. With the enormous growth of 802.11ac and new wireless applications, wireless devices are driving the demand for more network bandwidth. This creates a need for a technology that supports speeds higher than 1 Gbps on all cabling infrastructure. Cisco multigigabit technology allows you to achieve bandwidth speeds between 1 and 10 Gbps over traditional Cat 5e cabling or above. In addition, the multigigabit ports on select Cisco Catalyst switches support Universal Power Over Ethernet (UPOE), which is increasingly important for next-generation workspaces and Internet of Things (IoT) ecosystems.

Q. What are the key benefits of this new technology?
A. Cisco multigigabit technology offers significant benefits for a diverse range of speeds, cable types, and Power Over Ethernet (POE) power. The benefits can be grouped into three different areas:
● Multiple speeds: Cisco multigigabit technology supports auto-negotiation of multiple speeds on switch ports. The supported speeds are 100 Mbps, 1 Gbps, 2.5 Gbps, and 5 Gbps on Cat 5e cable and up to 10 Gbps over Cat 6a cabling.
● Cable type: The technology supports a wide range of cable types including Cat 5e, Cat 6, and Cat 6a or above.
● POE power: The technology supports POE, POE+, and UPOE for all the supported speeds and cable types.




http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/catalyst-multigigabit-switching/multigigabit-ethernet-technology.pdf

Ericsson Teams Up With Amazon Web Services

Ericsson is enlisting the help of Amazon Web Services (AWS) to help bring telecom providers into the brave new world of cloud-based services.
The partnership was among a horde of announcements put forth by Ericsson today at Mobile World Congress in Barcelona (not to mention the swarm that came before MWC). It’s less about telcos competing with AWS and more about them coming to AWS, looking to exploit cloud infrastructure just as enterprises have started doing.