Sunday, December 25, 2011

The meaning of Apple's '647 patent

On December 19th the U.S. International Trade Commission (ITC) issued its final determination on Apple's claims against HTC of Taiwan, finding that HTC violated Section 337 of the Tariff Act by selling Android phones containing a technology that infringed a patent held by Apple. Section 337 enables the ITC to block importation into the US of foreign products that unfairly compete with domestic products, and infringing a valid US patent is considered such an unfair practice. Recently, Section 337 investigations have increasingly been used as a faster and lower-cost alternative to enforcing US patents against foreign entities through litigation in district courts.

Of the 10 patents originally claimed by Apple to be infringed, the ITC rejected all but two in an earlier ruling, and in the final determination reduced this further to a single patent. The two patents in question are 5,946,647 entitled System and Method for Performing an Action on a Structure in Computer-Generated Data (filed Feb. 1 1996 and granted Aug. 31 1999) and 6,343,263 entitled Real-time Signal Processing System for Serially Transmitted Data (filed Aug. 2 1994 and granted Jan. 29, 2002). The ITC found that HTC did not infringe the '263 patent that protects the use of a Hardware Abstraction Layer to isolate real-time code from architectural details of a specific DSP.

The '647 patent discloses a system wherein a computer detects structures in data (such as text data, but possibly digitized sounds or images), highlights these structures via a user interface, and enables the user to select a desired action to perform. Apple's complaint to the ITC gives as an example of infringement the detection and highlighting of a phone number (e.g., in a received SMS) and enabling the user to click to call that number.

I have seen in blogs and forums many completely erroneous statements about what this patent actually means. People have claimed that '647 can't be valid, as hyperlinks or regular expression matching or SQL queries clearly predate the filing. However, a careful reading of the '647 patent shows that it does not claim to cover such obviously prior art. The following analysis is based on the text of the patent and on documents openly available on the web, and should not be considered legal advice.

After eliminating text required for patent validity (an input device, an output device, memory, and a CPU) the invention of '647 has three essential elements. First, an analyzer server parses the input data looking for patterns (called "structures"). Second, via an API the user-interface receives notice of the detected structures and possible actions for each one; displays the detected structures to the user; offers the user a list of actions that can be performed for each structure; and receives the user's selection. Third, an action processor performs the user's selected action (possibly launching new applications). The text of the '647 patent gives as an example the regular expression parsing of an email to find phone numbers, postal addresses, zip-codes, email addresses, and dates, and enabling the user to call a phone number, enter addresses into a contact list, send a fax to a number, draft an email, and similar actions.
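To make the three elements concrete, here is a toy sketch in Python; the patterns and action names are my own illustration, not the patent's actual embodiment:

```python
import re

# Analyzer server: regular expression patterns defining the "structures"
# (hypothetical patterns, for illustration only)
PATTERNS = {
    "phone": re.compile(r"\b\d{3}-\d{4}\b"),
    "email": re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),
}
# The actions the user interface can offer for each kind of structure
ACTIONS = {
    "phone": ["call", "send fax", "add to contacts"],
    "email": ["draft email", "add to contacts"],
}

def analyze(text):
    """Element 1: parse the input data looking for structures."""
    return [(kind, m.group(), ACTIONS[kind])
            for kind, pat in PATTERNS.items()
            for m in pat.finditer(text)]

def perform(action, structure):
    """Element 3: the action processor executes the user's selection."""
    return f"{action}: {structure}"

# Element 2 (the UI) would highlight these structures and offer the actions
structures = analyze("Call 555-1234 or write to bob@example.com")
```

A real user interface would highlight each detected structure and pop up its action list; the selection step is omitted here.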

Of course plain hyperlinks that are manually inserted into HTML are not covered by this patent since they are not automatically detected by an analyzer. A regular expression engine can potentially be used as an analyzer (although not necessarily by all embodiments as the patent mentions neural networks matching patterns in sounds and images) but is not claimed. The automatic parsing of a document for a list of patterns without offering a list of actions to a user is also not protected; indeed the Rufus file-type classifier is cited as prior art. Even the use of a regular expression engine to parse text and insert hyperlinks into a document is considered prior art, as the application references the Myrmidon text-to-html converter. It is possible that an editor or IDE that offers possible completions of text being typed would be considered infringing, depending on how broadly the patent's concept of input device is interpreted.

The three elements of the '647 patent are all present in many applications and devices used today. Users of Microsoft's Outlook are familiar with its automatic hyperlinking of email addresses and URLs in received messages. My old 2004 Sony-Ericsson K700 2G phone automatically highlights phone numbers in SMSes enabling single-click calling. However, Apple has targeted a very specific infringement - Android's Linkify class. Linkify enables the definition of a list of regular expression patterns to be detected, and a corresponding list of schemes, i.e., actions the user can select to be executed. It even comes with a few pre-defined patterns - email addresses, postal addresses, phone numbers, and URLs - which are almost precisely the examples given in the '647 patent.

While Apple's claims of infringement of '647 may be selective, they are not frivolous. In order to invalidate '647 the Android community would need to find publication of all three essential elements before 1996. I am sure that they have tried.

Removal of the Linkify feature from Android phones will put them at a definite ease-of-use disadvantage in comparison with the iPhone. And HTC has been given until April 19th 2012 to do just that.


Wednesday, November 30, 2011

On exa, zetta, and beyond

Anyone who lives in a metric system country knows what "kilo" means. A kilogram is 1000 grams, a kilometer is 1000 meters. Of course frequencies are measured in kilohertz, and in the computer world we have kilobits and kilobytes (although we are never quite sure whether that is 1000 or 1024!).

Most people even know that "mega" means a million. Power stations output megawatts of electricity, FM radios receive at megahertz frequencies, and atomic bombs deliver megatons. For years our disks were measured in megabytes, and for most of us our Internet connections are in megabits (although we are not quite sure whether that is 1,000,000 or 1024*1024!).

People with state-of-the-art computers are aware that giga means a (US) billion (a thousand million), and that tera means a thousand of those, but only because disk capacities have increased so rapidly. When you ask people what comes next, you tend to get puzzled looks. Most people aren't even sure whether a billion means a thousand million or a million million, so don't expect them to be experts in anything bigger than that!

Up to now only astrophysicists were interested in such large numbers, but with global data traffic increasing at over 30% per year, networking people are getting accustomed to them as well.

For those who are interested, the next prefixes are peta (10^15), exa (10^18), zetta (10^21), and finally yotta (10^24). The last two names were only formally accepted in 1991. For those who prefer powers of two, the IEC has standardized kibi (Ki) for 2^10, mebi (Mi) for 2^20, gibi (Gi) for 2^30, tebi (Ti) for 2^40, etc., although these terms don't seem to have caught on.
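The gap between the decimal and binary prefixes is easy to quantify; a quick check in Python (the "1 TB" disk is just an example):

```python
# SI (decimal) prefixes vs IEC (binary) prefixes
KI, MI, GI = 2**10, 2**20, 2**30   # kibi, mebi, gibi
assert KI == 1024 and KI != 10**3

# the discrepancy grows with the prefix: a "1 TB" disk
# (10**12 bytes) holds only about 931 GiB
gib_in_tb = round(10**12 / GI)
```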

Several years ago I heard that the total amount of printed information in the world's libraries does not exceed a few hundred petabytes. On the other hand, present estimates are that global IP traffic now amounts to about 30 exabytes per month, or about ten times the world's accumulated printed knowledge every day. By the middle of this decade it should surpass 100 exabytes per month, i.e., about the entire world's printed knowledge per hour.
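The mid-decade figure is just compound growth; a back-of-the-envelope check using the 30% annual rate quoted above:

```python
traffic = 30.0   # exabytes per month, the 2011 estimate above
years = 0
while traffic < 100.0:
    traffic *= 1.30   # >30% annual growth in global IP traffic
    years += 1
# roughly five years of 30% growth takes 30 EB/month past 100 EB/month
```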

These datarates, and particularly their time derivatives, present the telecommunications community with major challenges. We have grown accustomed to sophisticated applications that transfer massive amounts of data. A prime example is the new breed of cellphone voice/meaning recognition that sends copious amounts of raw data back to huge servers for processing. Such applications can only continue to be efficiently and inexpensively provided if the transport infrastructure can keep up with the demand for datarates.

And that infrastructure is simply not scaling up quickly enough. We haven't found ways to continually increase the number of bits/sec we can put into long-distance fiber to compensate for >30% annual increase in demand (although new research into mode division multiplexing may help). Moore's law is only marginally sufficient to cover increases in raw computation power needed for routers, but we will need Koomey's law for power consumption (MIPS / unit of energy doubles every year and a half) to continue unabated as well. And we haven't even been able to transition away from IPv4 after all of its addresses were exhausted!

If we don't find ways to make the infrastructure scale, then keeping up with exponential increases in demand will require exponential increase in cost.


Wednesday, November 23, 2011

My new CTO job

As you all probably know, I have changed job titles.
I am now RAD's Chief Technology Officer instead of (or perhaps in addition to?) Chief Scientist.

Our previous CTO, Prof. Daniel Kofman, is still in touch with the company. However, he is a bit busy since in addition to his position as Professor at Telecom ParisTech (formerly ENST), he has been appointed by France's Minister of Research and Innovation as director of LINCS (Laboratory of Information, Networking, and Communication Sciences), a new research center in Paris.

So, what will I be doing? Well, I will no longer be managing any R&D teams. The physical layer DSL chip development department I used to run closed many years ago, and last year my DSP software development department was dissolved as well. With my new appointment my HW/FPGA/Innovations department has transitioned to the newly formed Hardware and Innovations department, and my software team is moving to the new Advanced Technologies department. The Algorithmic Research department will still report to me.

I will continue to be responsible for tracking fundamental technology trends, and for steering RAD's participation in standardization forums (IETF, ITU, MEF, BBF, etc.). I will be working with academic research groups here in Israel, and perhaps abroad as well. I will be spending more time on IPR work - over the last few years this work has tended to be more defensive than creative. I will be doing more lecturing and more writing, and will function as editor in chief of the RAD Series on Essentials of Telecommunications (more on that some other time).

And I hope to have more time to blog.


Thursday, November 17, 2011

MPLS-TP update

At the MPLS Working Group meeting this week it was announced that the core set of MPLS-TP RFCs has been completed.

Indeed, we now have (I hope that I haven't missed too many):
•RFC 5586 MPLS Generic Associated Channel (G-ACh and GAL)
•RFC 5654 Requirements of an MPLS Transport Profile
•RFC 5718 An In-Band Data Communication Network for MPLS-TP
•RFC 5860 Requirements for OAM in MPLS Transport Networks
•RFC 5921 A Framework for MPLS in Transport Networks
•RFC 5950 Network Management Framework for MPLS-TP
•RFC 5951 Network Management Requirements for MPLS-TP
•RFC 5960 MPLS-TP Data Plane Architecture
•RFC 5994 Application of Ethernet Pseudowires to MPLS Transport Networks
•RFC 6370 MPLS-TP Identifiers
•RFC 6371 OAM Framework for MPLS-TP
•RFC 6372 MPLS-TP Survivability Framework
•RFC 6373 MPLS-TP Control Plane Framework
•RFC 6374 Packet Loss and Delay Measurement for MPLS Networks
•RFC 6375 Packet Loss and Delay Measurement Profile for MPLS-TP
•RFC 6378 MPLS-TP Linear Protection
•RFC 6425 Detecting Data-Plane Failures in Point-to-Multipoint MPLS - Extensions to LSP Ping
•RFC 6426 MPLS On-Demand Connectivity Verification and Route Tracing
•RFC 6427 MPLS Fault Management Operations, Administration, and Maintenance (OAM)
•RFC 6428 Proactive Connectivity Verification, Continuity Check, and Remote Defect Indication for the MPLS-TP
•RFC 6435 MPLS Transport Profile Lock Instruct and Loopback Functions

In addition, before the IETF meeting the ITU issued a statement reasserting that the IETF holds the pen on MPLS-TP.

It seems that the game is over.


Wednesday, November 16, 2011

The notorious IP checksum algorithm

I have been asked several times to explain the checksum calculation used in the IP suite (IPv4, TCP and UDP all utilize the same checksum algorithm).

RFC 791, which defines IPv4, gives the checksum algorithm as follows:
The checksum field is the 16 bit one's complement of the one's
complement sum of all 16 bit words in the header. For purposes of
computing the checksum, the value of the checksum field is zero.
and the algorithm description was further updated in RFCs 1071, 1141, and 1624.

RFC 791 further states
This is a simple to compute checksum and experimental evidence
indicates it is adequate, but it is provisional and may be replaced by a CRC procedure, depending on further experience.

Back in 1981 when the RFC was written, Jon Postel already realized that this algorithm is very limited in its error detection capabilities (see below), but at the time CRC computation was too expensive computationally.

RFC 793, which defines TCP, says
The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header and text.

while RFC 768 for UDP says the same thing, but leaves a loophole
Checksum is the 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets

If the computed checksum is zero, it is transmitted as all ones (the equivalent in one's complement arithmetic). An all zero transmitted checksum value means that the transmitter generated no checksum (for debugging or for higher level protocols that don't care).

On this latter issue, RFC 1180 “A TCP/IP Tutorial” adds
An incoming IP packet with an IP header type field indicating "UDP" is passed up to the UDP module by IP. When the UDP module receives the UDP datagram from IP it examines the UDP checksum. If the checksum is zero, it means that checksum was not calculated by the sender and can be ignored. Thus the sending computer's UDP module may or may not generate checksums. If Ethernet is the only network between the 2 UDP modules communicating, then you may not need checksumming. However, it is recommended that checksum generation always be enabled because at some point in the future a route table change may send the data across less reliable media.

and RFC 1122 “Requirements for Internet Hosts” adds
Some applications that normally run only across local area networks have chosen to turn off UDP checksums for efficiency. As a result, numerous cases of undetected errors have been reported. The advisability of ever turning off UDP checksumming is very controversial.

IPv6, as defined in RFC 2460, doesn’t bother with a header checksum, but closes the UDP loophole
Unlike IPv4, when UDP packets are originated by an IPv6 node,
the UDP checksum is not optional. That is, whenever originating a UDP packet, an IPv6 node must compute a UDP checksum over the packet and the pseudo-header, and, if that computation yields a result of zero, it must be changed to hex FFFF for placement in the UDP header. IPv6 receivers must discard UDP packets containing a zero checksum, and should log the error.

So, how precisely does the IP checksum algorithm work, and why is it designed this way?

The simplest method to protect against bit errors would be to xor bytes (or 16-bit words) together. This method suffers from the disadvantage that two bit errors in the same column cancel out, leaving no trace. Checksums are slightly stronger since they add words together instead of xoring them. Thus, two bit errors in the same column indeed leave that column correct in the sum, but the carry to the next column will be different.
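A tiny demonstration of the difference; the two 16-bit words and the flipped bit position are arbitrary:

```python
from functools import reduce

def xor_check(words):
    return reduce(lambda a, b: a ^ b, words)

def sum_check(words):
    return sum(words)   # carries couple the columns together

good = [0x0001, 0x0003]
bad  = [0x0000, 0x0002]   # bit 0 flipped in both words - same column

# xor misses the double error entirely...
assert xor_check(good) == xor_check(bad)
# ...while the sums differ, thanks to the carry into the next column
assert sum_check(good) != sum_check(bad)
```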

Why does the IP checksum algorithm take the ones complement after adding together all of the words? Since this is a one-to-one transformation, it obviously doesn’t reduce the number of undetected errors. It does, however, protect against one special case – that of all zeros. If somehow the entire packet were wiped out and replaced by all zeros, the sum would still be OK (sum of zeros is zero). By flipping the result we catch this kind of bug.

To compute the IP checksum of some sequence of an even number of bytes (if the length is odd one pads with a zero byte), one groups the bytes in pairs which are considered as 16-bit words. Were one to have a computer that employs ones complement arithmetic, the algorithm would be simple to describe. One adds all of these words together, and returns the negative of this sum.
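A sketch of the whole computation in Python, emulating ones complement addition by folding carries back into the low bits:

```python
def internet_checksum(data: bytes) -> int:
    """Ones complement of the ones complement sum of the 16-bit words
    of data, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:           # end-around carry: fold overflow back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

To verify a received header one runs the same computation over the header including the transmitted checksum; the result must be zero.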

Unfortunately, ones complement machines are no longer in vogue, and essentially all computers now use twos complement representation. Ones complement and twos complement agree on how to represent positive numbers - they have a zero in the MSB. They also agree that negative numbers have a one in the MSB, but disagree about all the rest. Neither simply sets the MSB (that would be "signed magnitude" representation). In ones complement machines the negative of a positive number (its ones complement) is formed by flipping all its bits. In twos complement machines one flips all the bits and then adds one. Note that in ones complement representation there are two zeros - positive zero is all zeros and negative zero is all ones. Twos complement has only one zero - all zeros (all ones means -1).

Because of the difference in representation, the addition algorithms are also somewhat different for the two machine types. Twos complement machines add bits from LSB to MSB, and discard any carry from the MSB. Ones complement addition similarly adds the bits, but if a carry remains from the MSB it is added back to the LSB.

So if everyone uses twos complement arithmetic today, why does the IP checksum algorithm use ones complement addition and ones complement negation? Well, perhaps when the checksum algorithm was chosen ones complement machines were more common (sigh).

More importantly, ones complement arithmetic has two (minor) advantages.

The first has to do with big- and little-endian conventions. Saying that a machine uses twos complement arithmetic still doesn’t completely pin things down. When building larger integers from bytes big endian machines place the higher order bytes to the left of the lower ones, thus if A and B are bytes, AB means A*256+B. Little-endian machines do the opposite – AB means B*256+A. Ones complement arithmetic has an interesting characteristic – addition is the same for big-endian and little-endian machines. This is not the case for twos complement arithmetic due to the discarding of the MSB carries.

For example,
in ones complement FF.FF+02.00=02.00 while FF.FF+00.02=00.02
in twos complement FF.FF+02.00=01.FF while FF.FF+00.02=00.01
thus one can write generic IP checksum code that operates directly on 16-bit words and runs correctly on both little-endian and big-endian machines, without knowing which kind of machine one has and without compilation conditionals (#ifdef).
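This endian invariance is easy to check in code; a small sketch using the operands from the example above:

```python
def ones_add(a, b):
    """16-bit ones complement addition with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def twos_add(a, b):
    """16-bit twos complement addition; the MSB carry is discarded."""
    return (a + b) & 0xFFFF

def swap(x):
    """Byte-swap a 16-bit word (big-endian <-> little-endian view)."""
    return ((x << 8) | (x >> 8)) & 0xFFFF

# ones complement addition commutes with byte swapping...
assert swap(ones_add(0xFFFF, 0x0200)) == ones_add(swap(0xFFFF), swap(0x0200))
# ...twos complement addition does not
assert swap(twos_add(0xFFFF, 0x0200)) != twos_add(swap(0xFFFF), swap(0x0200))
```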

Another reason for ones complement is that it is slightly better at catching errors. Remember that twos complement addition discards MSB carries, so two bit errors in MSB positions are not caught, while ones complement propagates these carries back to the LSBs, thus catching this type of error. The difference is minor (for large TCP or UDP payloads the percentage of two bit errors missed by XORing is 6.25%, twos complement summing misses 3.32%, while ones complement summing only misses 3.125%).

Do these two small advantages justify the added complexity of using ones complement arithmetic? Probably not, but it is too late to change. With the greater computational power now available, stronger error detection algorithms could now be afforded. However, when IP is sent over Ethernet it enjoys Ethernet's Frame Check Sequence, which is not only a CRC rather than a checksum, but is 32 bits in length! This makes the IP checksum largely redundant on such links, although the FCS protects each hop individually rather than the end-to-end path.


Wednesday, October 5, 2011

Network Coding

In conventional communications networks the active network elements (e.g., Ethernet switches or IP routers) are store-and-forward devices. They perform no nontrivial computation. It turns out that in certain cases it is possible to optimize network operation (to conserve some network resource or to improve some network performance measure) by embedding more intelligence in the network elements.

In order to understand how this is done, it is useful to start with two special cases.

First case: Two individuals communicating via a satellite having a single downlink/uplink coverage beam.

A transmits to B via satellite S and B transmits back to A via the same satellite. Since A and B must share satellite resources (namely time and frequency), the uplink transmissions must be separated in either time or frequency. In the conventional case the downlink transmissions are separated as well. Thus, if it costs one cent to transmit an uplink message from A or B to the satellite, and similarly one cent for the downlink message from the satellite to A or B, then the exchange of two messages, one from A to B and one from B to A costs 4 cents.

But, this does not need to be the case! Rather than S transmitting A’s message to B and afterwards (or on another frequency) B’s message to A, it can transmit the message A xor B just once, on a frequency and at a time when both A and B are listening. A retrieves B’s message by xoring the received message with his own message (since B = (A xor B) xor A), and B performs the same operation to retrieve the message from A (since A = (A xor B) xor B).

This reduces the price from 4 cents to 3 (since there are only three transmissions: A to S, B to S, and a single broadcast from S to both A and B) at the cost of the satellite having to perform the simple operation of xoring two messages. The xor operation performed by the satellite is a kind of “coding” operation that leads to a reduction in required network resources.
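The whole exchange fits in a few lines; the messages here are illustrative and assumed to be of equal length:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg_a = b"hello from A"               # uplink A -> S (1 cent)
msg_b = b"greetings, B"               # uplink B -> S (1 cent)

broadcast = xor_bytes(msg_a, msg_b)   # single downlink S -> {A, B} (1 cent)

# each party xors out its own message to recover the other's
assert xor_bytes(broadcast, msg_a) == msg_b
assert xor_bytes(broadcast, msg_b) == msg_a
```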

Second case: Using coding to protect real-time or broadcast transmissions against packet loss.

In real-time and broadcast transmissions it is not possible for a receiver to request retransmission of a lost packet, as TCP and ARQ systems do. Some critical control protocols send each packet multiple times (three times is common), but this is extremely wasteful in network resources. RFC 2198 proposes repeating the audio data from the previous RTP packet in the present one, thus maintaining the number of packets per second, but still doubling the bandwidth requirement. The FECFRAME working group in the IETF standardized more efficient mechanisms in RFCs 5053 and 6015. I will explain only the simplest possible coding.

Assume that we know that there will never be more than 1 packet loss in 4 consecutive packets. Then for every four packets transmitted, a fifth “protection” packet consisting of the xor of these four packets is sent. If all four packets are received then this fifth packet is discarded. If any single packet is lost then it can be recovered by xoring the received three packets with the fifth “protection” packet. Thus, packet loss can be mitigated with only an increase of 25% in bandwidth, and an increase in delay.
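A sketch of this 4+1 protection scheme, assuming equal-length packets and that the receiver knows which packet was lost (e.g., from sequence numbers):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
protection = reduce(xor_bytes, packets)   # fifth packet, sent after the four

# suppose packet 2 is lost in transit
received = [p for i, p in enumerate(packets) if i != 2]

# xoring the three survivors with the protection packet restores it
recovered = reduce(xor_bytes, received + [protection])
assert recovered == packets[2]
```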

But what does this have to do with network coding? In both of the above cases an information source performed some nontrivial operation in order to conserve some resource or to protect against some network defect. The extension to full network coding requires simply that the computation be performed by some network element along the information path that is able to perform the network coding. Unfortunately, examples of network coding can be quite complex. The simplest one is the “butterfly network” (see Figure 1) presented in a paper by Ahlswede, Cai, Li, and Yeung entitled “Network Information Flow”.

In this example a source S needs to multicast two packets of information P1 and P2 to two destinations A and B over the particular network of network elements U, V, W and X shown in the figure. All of the links have the same bandwidth, which is precisely the bandwidth needed to transmit the packets in the desired time.

It turns out that S can send P1 and P2 to both A and B at once, as shown in Figure 2. Network elements U, V, and X are multicast devices, able to replicate a packet received on their input port to both of their output ports. Network element W performs network coding by calculating the xor of the two packets received on its two input ports and sending the result to its output port.

Were W not able to perform this operation, it would need first to send P1 and then P2, thus taking twice the time, or alternatively would require twice the bandwidth on the link to X (contrary to our assumption on link bandwidths). It is not hard to convince yourself that without the network coding it is not possible to perform the desired task.
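Without the figures, the butterfly can still be simulated in a few lines; the topology follows the standard construction (U multicasts P1 to A and W, V multicasts P2 to B and W, W xors, and X relays the coded packet to both destinations):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

P1, P2 = b"packet-one", b"packet-two"   # equal-length packets from S

# U multicasts P1 to A and to W; V multicasts P2 to B and to W
at_A, at_B = [P1], [P2]

coded = xor_bytes(P1, P2)   # W's single coded transmission, relayed by X

at_A.append(xor_bytes(coded, at_A[0]))   # A xors out P1 to recover P2
at_B.append(xor_bytes(coded, at_B[0]))   # B xors out P2 to recover P1

assert at_A == [P1, P2] and at_B == [P2, P1]
```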

Network coding can be used for purposes other than bandwidth or delay minimization and packet loss protection. Recent research has explored applications to energy reduction, information security, file sharing, congestion control, and fairness.


Sunday, August 21, 2011

The PW Associated Channel

In the beginning of the development of pseudowire technology, it was obvious to many of us that PWs would require some sort of OAM support. As always with OAM the question was how to make OAM packets fate-share with user data packets. The original RAD proposition was to define a special "OAM PW" that would be placed alongside the monitored PWs. In the MPLS case this meant a special PW label for OAM (RAD's proposal was to use the "all-ones" label), but to ensure that this OAM PW was placed in the same MPLS tunnel. This proposal still exists in Appendix D of RFC 5087.

The alternative proposal ("VCCV") placed special OAM packets in every PW. This meant much more OAM traffic for the prevalent case of many PWs in a single PSN tunnel, but simplified the assurance of fate-sharing. In order to enable interworking with other vendors, RAD abandoned its own proposal and adopted the VCCV approach, including advocating conformance with the newly standardized PWE3 control word and upgrading its equipment base accordingly.

Digression: VCCV stands for Virtual Circuit Connectivity Verification, and is a complete misnomer. VC was an old (ATM-style) name for what is now called a PW. It was used in the early days of the PWE3 WG before the introduction of the term pseudowire, and should have been completely replaced. CV is a well-defined OAM term for detection of misconnections, that is, detecting that a packet arrives at the wrong destination. It should never be confused with Continuity Check (CC), which means checking that packets sent are actually received - and of course continuity checking is precisely what VCCV actually does. Unfortunately, by the time of the RFC it was too late to rename the function PWCC.

Three mechanisms were proposed for distinguishing between VCCV packets and user data packets, and all three became part of the standard. In the language of RFC 5085, there are three Control Channel types.

  • CC TYPE 1 When the PWE3 control word (CW) is used, the first nibble is set to 0001, instead of 0000.

  • CC TYPE 2 Router Alert Label (AKA out-of-band VCCV) - placing the reserved MPLS RA label above the PW label.

  • CC TYPE 3 TTL expiry - i.e., ensuring that the TTL in the PW label equals 1 at the PW endpoint.

Having three options sounds a bit confusing, but there were good reasons for all three. First, not all PWs use the CW; in fact, in some cases it would be wasteful to add 4 bytes to a small payload. Second, it has been argued that types 2 and 3 must be supported, as they are integral parts of the MPLS architecture. If a PW gateway receives a packet with the RA label, or with an expired TTL, it can not be expected to process it as a regular user packet!
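A toy classifier for the three CC types might look as follows; the precedence order and constants here are my own simplification, not normative (a real gateway honors the CC type negotiated for the PW):

```python
RA_LABEL = 1   # the reserved MPLS Router Alert label value

def vccv_cc_type(labels, pw_ttl, first_nibble):
    """Classify a received PW packet; labels is the label stack,
    with the PW label last. A simplified sketch only."""
    if first_nibble == 0x1:        # CC Type 1: CW first nibble is 0001
        return 1
    if RA_LABEL in labels[:-1]:    # CC Type 2: RA label above the PW label
        return 2
    if pw_ttl == 1:                # CC Type 3: TTL expiry at the PW endpoint
        return 3
    return 0                       # ordinary user data

assert vccv_cc_type([42], pw_ttl=255, first_nibble=0x1) == 1
assert vccv_cc_type([RA_LABEL, 42], pw_ttl=255, first_nibble=0x0) == 2
assert vccv_cc_type([42], pw_ttl=1, first_nibble=0x0) == 3
```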

It was realized early on that the CC types define a PW associated channel that could be used for functions other than VCCV, and that realization is captured in RFC 4385. However, this channel is limited to PWs, and could not be used for adding OAM functionality to non-PW MPLS traffic. So when the MPLS-TP effort required such functionality, the idea of an associated channel was generalized to the Generic Associated Channel (G-ACh) in RFC 5586. The generalization is obtained by defining what is essentially a fourth CC type - the G-ACh Label (GAL). This reserved MPLS label, unlike CC TYPE 2, sits at the bottom of the stack (there being no PW label), and is followed by what is essentially the PWE3 control word.

Those involved in the MPLS-TP effort want MPLS-TP mechanisms to work for PWs as well. This has led to a proposal to enable the use of the GAL for PW packets as well as for MPLS packets. For the PW case the idea is to put the GAL under the PW label. This proposal breaks an underlying characteristic of all PWs (explicitly stated in numerous RFCs), namely that the PW label sits at the bottom of the stack.

In my opinion three methods of indicating an associated channel packet are quite enough, and we don't need a fourth. Yet another proposal goes even further, suggesting eliminating CC types 2 and 3 and leaving only type 1 (using the CW) and the new GAL approach. Were this proposed ten years ago I would probably have been in favor, although it is still not clear to me what a receiving PW gateway should do when it receives a type 2 or type 3 packet. (Losing type 3 also excludes traceroute mechanisms.) At the present time, however, this proposal would require upgrading ten years of live PW deployments, and I can not see how it can be implemented.


Monday, May 30, 2011

"Seamless MPLS" and Denial of Service

A Denial of Service (DoS) attack is an attack that attempts to render a service temporarily unavailable to legitimate users of the service. DoS attacks are carried out by attackers disrupting the function of any link in the service supply chain. In the context of services provided over telecommunications networks, DoS attacks can be directed at a web or mail server, routers, or at any necessary utility functions such as the DNS system.

There are two main DoS attack strategies:
1. The attacker can send malware to the attacked device, causing its malfunction. In extreme cases (called phlashing) the attacked device may need to be completely replaced.
2. The attacker can flood the attacked device with a large number of seemingly legitimate service requests, thereby consuming its resources and degrading its ability to serve other users. In order to overwhelm a device more completely (and to camouflage the source of the attack), Distributed Denial of Service (DDoS) attacks simultaneously send service requests from multiple sources.

Rate limiting and traffic shaping are not true DoS prevention methods. First, they are ineffectual against the first type of attack. Second, although they may prevent overload of devices under attack, they do not distinguish between attackers and legitimate users, and so themselves reduce service quality. In addition, they become Achilles’ heels, providing attackers with new devices to attack.

There are only two true defenses against DoS attacks:
1. discarding illegitimate service requests,
2. allowing only legitimate service requesters.

The first method is typically used against attacks that exploit packets carefully designed to confuse network devices or require greater than average processing resources. It is ineffectual against brute-force attacks by properly formed service requests, such as DDoS attacks. It also usually requires costly Deep Packet Inspection (DPI). The second method is universally effective, but can only be used when there is a way to accurately identify legitimate users of the service.

That way is called source authentication, and it works by verifying that each received packet was authentically sent by the source claiming to have sent it. Source authentication is thus limited to packet formats that include a source address, such as Ethernet, IPv4, and IPv6. IPsec uses a Hash-based Message Authentication Code (HMAC) to verify both the integrity and authenticity of an IP packet. MACsec uses a combined algorithm to verify integrity and authenticity, and optionally to encrypt the packet.
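The HMAC idea behind this kind of authentication can be sketched in a few lines (a toy illustration of the principle, not the actual IPsec AH/ESP processing; the key and payloads are invented). Sender and receiver share a secret key; the sender appends a keyed hash of the packet, the receiver recomputes it, and a forger who lacks the key cannot produce a valid tag:

```python
import hmac
import hashlib

KEY = b"shared-secret"  # in real IPsec this is established out of band, e.g. via IKE

def send(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify source and integrity."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(packet: bytes) -> bool:
    """Recompute the tag over the payload; constant-time compare resists timing attacks."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

genuine = send(b"call setup")
forged = b"call teardown" + b"\x00" * 32   # attacker does not know KEY
assert verify(genuine)
assert not verify(forged)
```

The crucial point for what follows is that this scheme needs something in the packet to authenticate against a claimed source, which is precisely what MPLS lacks.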

As is certainly well-known to readers of this blog, MPLS packets contain labels that proxy for destination addresses, but no explicit addresses, and certainly no source address. As stated in RFC 5920 - Security Framework for MPLS and GMPLS Networks:

The MPLS data plane, as presently defined, is not amenable to source authentication, as there are no source identifiers in the MPLS packet to authenticate. The MPLS label is only locally meaningful. It may be assigned by a downstream node or upstream node for multicast support.

When the MPLS payload carries identifiers that may be authenticated (e.g., IP packets), authentication may be carried out at the client level, but this does not help the MPLS SP, as these client identifiers belong to an external, untrusted network.

An attacker with physical access to an MPLS network can readily cause mayhem. There are only about a million possible MPLS labels (the label field is 20 bits wide), so it will not take an attacker long to come across a valid one. Once that is accomplished, nothing can stop packets he injects from traversing the network and appearing at supposedly isolated egress points. The attack is made even simpler because many LSRs are configured to employ platform-wide label spaces, and many LSR label generators produce labels in order from low to high.
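The arithmetic is sobering. A quick back-of-the-envelope sketch (the probe rate is an assumption chosen for illustration; real injection rates could be far higher):

```python
# The MPLS label field is 20 bits wide, so the whole label space is
# tiny by brute-force standards.
LABEL_SPACE = 2 ** 20          # 1,048,576 possible labels

# Suppose an attacker with physical access injects probe packets at a
# modest 10,000 packets per second, sweeping labels from low to high
# (the order in which many label generators allocate them).
probe_rate = 10_000            # packets per second -- assumed, for illustration
worst_case = LABEL_SPACE / probe_rate
print(f"Worst case to sweep every label: {worst_case:.0f} s (~{worst_case/60:.1f} min)")
# -> Worst case to sweep every label: 105 s (~1.7 min)
```

And since sequentially-allocating platforms hand out labels starting near the bottom of the space, a live label is typically found far sooner than this worst case.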

Of course, if the MPLS network is carrying only IP traffic, then that network layer can be protected using well-known IPsec methods. But MPLS can also carry non-IP traffic, e.g. pseudowires. Imagine what would happen if extra TDM-PW traffic were successfully injected - buffer overflows, loss of timing, and complete service shutdown. Imagine what would happen if an attacker injected multicast PAUSE frames into an Ethernet PW - delayed frames, buffer overflow, and complete service denial.

So why haven’t there been widespread devastating attacks on the critical MPLS infrastructure? Mainly because MPLS networks have, until now, been walled gardens, that is, closed, tightly controlled networks, with no access for outside attackers. RFC 5920 calls them trusted zones, which it describes in the following manner:

A trusted zone contains elements and users with similar security properties, such as exposure and risk level. In the MPLS context, an organization is typically considered as one trusted zone.

The boundaries of a trust domain should be carefully defined when analyzing the security properties of each individual network … In principle, the trusted zones should be separate …
A key requirement of MPLS and GMPLS networks is that the security of the trusted zone not be compromised by interconnecting the MPLS/GMPLS core infrastructure with another provider's core (MPLS/GMPLS or non-MPLS/GMPLS), the Internet, or end users.

So, MPLS has been safe since it has been hidden away in the core, with no access to outsiders.

But this is about to change. The IETF MPLS WG recently elevated to working group status a document entitled Seamless MPLS Architecture (draft-leymann-mpls-seamless-mpls). This document proposes extending MPLS from the core into access networks, and seamlessly integrating the access domain into the core MPLS domain. In the words of the draft:

The motivation of Seamless MPLS is to provide an architecture which supports a wide variety of different services on a single MPLS platform fully integrating access, aggregation and core network. The architecture can be used for residential services, mobile backhaul, business services and supports fast reroute, redundancy and load balancing. Seamless MPLS provides the deployment of service creation points which can be virtually everywhere in the network.

With Seamless MPLS there are no technology boundaries and no topology boundaries for the services. Network (or region) boundaries are for scaling and manageability, and do not affect the service layer, since the Transport Pseudowire that carries packets from the AN to the SN doesn't care whether it takes two hops or twenty, nor how many region boundaries it needs to cross.

Seamless MPLS drops the boundaries between access, aggregation, and core networks. This may indeed simplify network management – but how are the security issues handled? The draft’s “Security Considerations” section states the following:

In a typical MPLS deployment the use of MPLS is limited to relatively small network consisting of core and edge nodes. Those nodes are under full control of the services provider and placed at locations where only authorized personal has access (this also includes physical access to the nodes). With the extensions of MPLS towards access and aggregation nodes not all nodes will be "locked away" in secure locations. Small access nodes like DSLAMs will be located in street cabinets, potentially offering access to the "interested researcher".

So far, so good. The draft authors understand the security problem they raise. But now for the punch line …

Nevertheless the unauthorized access to such a device SHOULD NOT impose any security risks to the MPLS infrastructure itself.

The term SHOULD NOT can be understood in two ways. Perhaps it is simply a statement that the authors believe that this placement of nodes in sites where they will be accessible to outsiders simply shouldn’t cause any problems, since no-one would think of attempting to exploit this vulnerability. Or perhaps this is a requirement for implementations, but not a strong MUST requirement, just a SHOULD requirement. In this case the authors are saying that perhaps in some cases it would be a good idea to do something about this, but only if there isn’t some other more important consideration.

But don’t panic - the draft authors add an additional sentence:

Seamless MPLS must be stable regarding attacks against access and aggregation nodes running MPLS.

Note that this requirement carries a non-normative must rather than a MUST. Also, seamless MPLS need not be impregnable to attacks, just stable. Network stability is defined in RFC 2360, the Guide for Internet Standards Writers. It means that the network does not take an infinite time to return to normal operation after some type of change. In this context, it apparently means that after a DoS attack is over, the network should return to normal functioning. Not a very strong requirement!

Can seamless MPLS be made safe (or at least as safe as present networks)? Of course, but the effort would be substantial, requiring the IETF to develop security mechanisms for non-IP traffic, something that has not been attempted to date. As the draft authors requested that the draft be accepted with all the rest of the security section marked “TBD”, fixing this lacuna does not seem to be very high on their list.


Monday, January 10, 2011

MPLS is not a "successful" protocol

RFC 5218 defines what the Internet Architecture Board considers to be a "successful" protocol. A "successful" protocol is one that meets its original goals and is widely deployed, such as DNS, BGP, SMTP, and SIP. A "wildly successful" protocol far exceeds its original goals in terms of purpose and scale. Examples of the latter are IPv4, ARP, and HTTP. A protocol may be considered successful even if its deployment is still limited, as long as it meets its original goals.

At the technical plenary of the 74th IETF meeting in March 2009, there were presentations on the occasion of the 12th anniversary of the formation of the MPLS working group (subtitled “MPLS becoming a teenager”). The session description read “Many consider MPLS a success, in the sense of RFC 5812's (sic) "What Makes for a Successful Protocol?"” (see the agenda and slides).

Note the reference to 5812 instead of 5218. I find this typo enlightening. The first presentation of the session claimed that MPLS is a "wildly successful" protocol. In my opinion, MPLS cannot be considered even “successful” in the sense of RFC 5218, but it may indeed be in the spirit of RFC 5812.

For those who haven’t read 5812, it is a proposed standard entitled “Forwarding and Control Element Separation (ForCES) Forwarding Element Model”. ForCES is a framework and a set of protocols that aim to standardize information exchange between the IP control and forwarding planes, enabling control elements (CEs) and Forwarding Elements (FEs) to become physically separated components. Although this was certainly not the intention of the speaker, this type of separation is indeed one of the ancillary benefits of MPLS.

The second talk at IETF-74 was an interesting presentation on the history of MPLS, but it carefully avoided stating the relevant facts. In the mid to late 90s, after the Internet was opened up to the public at large and to commercial interests, it started growing exponentially. This growth was exciting, but brought two main concerns, namely:
1) address exhaustion - which led to the development of IPv6 (we are still waiting for IPv6 to become a successful protocol …), and
2) slowing down of IP forwarding due to router table explosion - which led to the development of MPLS.

The first issue was temporarily solved by the introduction of NAT, and I won’t discuss it further here. The second brought about a wave of innovation, with at least five solutions offered:

1) Cell Switching Router (Toshiba) (see RFCs 2098, 2129)
2) IP Switching (Ipsilon, bought by Nokia) (see RFC 2297)
3) Tag Switching (Cisco) (see RFC 2105)
4) Aggregate Route-based IP Switching (ARIS) (IBM)
5) IP Navigator (Cascade, acquired by Ascend, which was acquired by Lucent, which merged with Alcatel to become ALU)

With so many alternatives, BOFs were held in 1996-1997 and the MPLS working group was chartered in 1997 with co-chairs from Cisco and IBM (which is the reason MPLS is so similar to Tag Switching and borrows a bit from ARIS).

However, the router manufacturers were not sitting idly waiting for MPLS to succeed, and improvements in algorithms and hardware increased the IPv4 forwarding speed to the point where MPLS was no longer needed.
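The original speed argument for MPLS is easy to see side by side. The sketch below (hypothetical tables and interface names, purely for illustration) contrasts IP forwarding, which must find the longest matching prefix among many, with MPLS forwarding, which is a single exact-match lookup on a 20-bit label:

```python
import ipaddress

# IP forwarding: longest-prefix match over a table of prefixes.
ip_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",
    ipaddress.ip_network("10.1.2.0/24"): "if2",
}

def ip_lookup(dst):
    """Longest-prefix match: consider every matching prefix, keep the most specific."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in ip_table if addr in net),
               key=lambda net: net.prefixlen, default=None)
    return ip_table.get(best)

# MPLS forwarding: one exact-match lookup on the incoming label,
# yielding (outgoing interface, outgoing label).
lfib = {17: ("if2", 42), 18: ("if0", 99)}

def mpls_lookup(label):
    return lfib.get(label)

assert ip_lookup("10.1.2.3") == "if2"    # the most specific /24 wins
assert mpls_lookup(17) == ("if2", 42)    # a single table access
```

Once hardware could do longest-prefix match at line rate (TCAMs, trie-based ASIC lookups), this advantage evaporated, which is exactly the point of the paragraph above.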

So why is MPLS still being used? There are at least two reasons. First, RSVP-TE-enabled MPLS provides hard QoS guarantees that are not possible in pure IP due to the lack of adoption of IntServ. Second, MPLS can carry non-IP packets (pseudowires).

I have heard the argument that the first reason was the true design goal of MPLS. However, a casual reading of RFC 3031, the RFC that defines MPLS, shows that QoS was considered an added advantage, not a design goal.

Some routers analyze a packet's network layer header not merely to choose the packet's next hop, but also to determine a packet's "precedence" or "class of service". They may then apply different discard thresholds or scheduling disciplines to different packets.
MPLS allows (but does not require) the precedence or class of service to be fully or partially inferred from the label.
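The "class of service" inferred from the label is carried today in the 3-bit Traffic Class (formerly EXP) field of the 32-bit label stack entry. A minimal sketch of unpacking one entry, following the field layout standardized in RFC 3032 (the sample values are invented):

```python
def parse_label_stack_entry(entry: int) -> dict:
    """Unpack a 32-bit MPLS label stack entry (RFC 3032 layout):
    label (20 bits) | TC/EXP (3 bits) | S bottom-of-stack (1 bit) | TTL (8 bits)."""
    return {
        "label": (entry >> 12) & 0xFFFFF,
        "tc":    (entry >> 9) & 0x7,   # the class-of-service bits RFC 3031 alludes to
        "s":     (entry >> 8) & 0x1,
        "ttl":   entry & 0xFF,
    }

# Example: label 16, TC 5, bottom of stack, TTL 64.
entry = (16 << 12) | (5 << 9) | (1 << 8) | 64
assert parse_label_stack_entry(entry) == {"label": 16, "tc": 5, "s": 1, "ttl": 64}
```

An LSR can thus apply different scheduling or discard treatment based on those three bits without ever looking past the label stack.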

Although MPLS is very widely deployed, the problem it was designed to solve has gone away (although it may return when IPv6 becomes more prevalent), and indeed on some platforms MPLS-based forwarding is actually slower than native IPv4 forwarding. Thus, according to RFC 5218, MPLS is not successful. Yet.