Tuesday, October 9, 2012

SDN : A Backward Step Forward

Everyone in the industry is talking about Software Defined Networks or SDNs, but unfortunately no two people using the term mean precisely the same thing. However, after hearing dozens of academicians and even more networking converts, and after reading hundreds of press releases, I believe that when people mention SDN they are talking about a network based on one or both of two guiding principles – 1) fully programmable forwarding behavior and 2) centralized control.

Both of these principles are in fact revolutionary in that they challenge well-established principles of packet switched networks (PSNs) and actually regress to communications technology as it existed before the development of PSNs and the Internet. As I will discuss below, fully programmable forwarding behavior as espoused by SDN is fundamentally at odds with network layering, and centralized control diametrically opposes the philosophy of distributed routing protocols. It is worthwhile understanding these tensions before considering whether it makes sense to discard everything that has been learnt about networking in the past thirty years.


Programmable behavior vs. network layering

When packet switched networks were first proposed as an alternative to circuit switched ones, it was realized that the subject was so complex that it had to be broken down into more readily digestible parts. That’s why network layers (the seven OSI layers) were first invented. Later it was realized that network layering (a la ITU-T G.800) enabled hierarchies of service providers to coexist. Layering is employed despite the fact that it significantly reduces the efficiency of communications networks by introducing constructs that are not mandated by Shannon’s separation theorem. For example, in a VoIP application one may go to great lengths to compress 10 milliseconds of telephony quality speech from 80 bytes down to 10 bytes, but to that payload one then adds an RTP header of at least 12 bytes, a UDP header of 8 bytes, an IPv4 header of 20 bytes, and an Ethernet header and trailer of at least 18 bytes, for a grand total of at least 58 bytes of overhead – almost six times the payload size!
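
To make the arithmetic concrete, here is a minimal Python sketch that simply tallies the per-layer overhead for the VoIP example above (the header sizes are the minimum values quoted in the text; real frames may also carry VLAN tags, IP options, RTP header extensions, etc.):

    # Overhead of carrying 10 ms of compressed speech (10 bytes) over
    # RTP/UDP/IPv4/Ethernet, using the minimum header sizes quoted above.
    PAYLOAD = 10  # bytes of compressed speech

    HEADERS = {
        "RTP": 12,           # minimum RTP header
        "UDP": 8,
        "IPv4": 20,          # no options
        "Ethernet": 14 + 4,  # header plus FCS trailer, untagged
    }

    overhead = sum(HEADERS.values())
    print(overhead)                        # 58 bytes of overhead
    print(overhead / PAYLOAD)              # 5.8 times the payload
    print(PAYLOAD / (PAYLOAD + overhead))  # under 15% efficiency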

So why do we employ all these layers? Simply because it makes the task at hand manageable! Each layer communicates with the layer above it (except for the top layer, i.e., the application) and the layer below it (except for the bottom layer, i.e., the physical layer) through well-defined interfaces. Thus, when a lower layer receives information from an upper layer, it should treat it completely transparently, without attempting to discern what belongs to each of the layers above it. Similarly, a layer should not expect any special treatment from lower layers, other than that defined in the interface description.

It is thus understandable that communications theory purists shun layer violations which allow one layer’s processing to peek at information belonging to another layer. The consequences of disobeying this principle may be dire. Consider what would happen if MPLS label switching processing were to depend on details of the IP packet format. Then we would need two versions – one for IPv4 and one for IPv6, and we would not have pseudowires (which explicitly exploit the fact that MPLS is not cognizant of aspects of its payload). What would happen if UDP were to care about RTP fields? Then every modification to these fields would require a modification to UDP; and if IP were dependent on UDP then the domino effect would continue on down, negating the benefits of layering.

Of course in practice some layer violations are condoned, but only when they are absolutely necessary optimizations and can be shown to be benign. Even then they often take a heavy toll, as can be seen in the following three examples:
  1. Network Address Translation (NAT) intertwines the IP layer with the transport layer above it. This optimization became absolutely necessary since the world ran out of IPv4 addresses faster than it could adopt IPv6, but it broke the end-to-end principle underlying IP. Although there was little choice, NATs needed to embrace complex ALGs, and applications needed various hole-punching technologies such as STUN, TURN, and ICE. And NAT still breaks various applications.
  2. The IEEE 1588 Transparent Clock (TC) modifies a field in the 1588 header (which sits above Ethernet or IP) based on arrival times of physical layer bits. Without this optimization there is no way to compensate for variable in-switch dwell times, and thus no way of distributing highly accurate timing over multiple hops. However, once again this layer violation comes at a price. The 1588 parser needs to be able to classify arbitrary Ethernet and/or IP packet formats and locate the offset of the field to be updated; not only does this require a complex processor, it also requires updating every time the IEEE or IETF changes a header field. Furthermore, updating the TC correction field is not compatible with Ethernet-layer security (e.g., MACsec that protects the integrity of the Ethernet frame, and optionally encrypts it).
  3. It is common practice for Ethernet Link Aggregation (LAG) to determine which physical link to employ by hashing fields from the layer 2, 3, and 4 headers. This three-layer kludge makes it possible to exceed the bandwidth of a single physical link by spreading traffic over a parallel set of links, and is only tolerated because its effects do not extend past the physical links involved.
But then along came SDN. The guiding SDN principle of completely programmable behavior means that an SDN switch treats an incoming packet as a simple sequence of bytes. The SDN switch may examine an arbitrary combination of fields and, based on these fields, take actions such as rewriting the packet and forwarding it in a particular way. The SDN switch neither knows nor cares to which network layer the bytes it examines belong.

Thus by programming an SDN switch to forward packets based on Ethernet addresses it can act as a layer-2 switch. By having it forward based on IP addresses it can act as a layer-3 router. By telling it to look at UDP or TCP ports it can behave as a NAT. By having it block packets based on information further into the packet it can act as a firewall. In principle, we could program an SDN switch to edit a packet and forward it based on hashing arbitrary fields anywhere in the packet. This provides awe-inspiring flexibility, especially when the SDN switch is implemented as software running in a VM. Of course this flexibility comes at the price of obliterating the concept of layering.
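
As a toy illustration of this field-agnostic behavior, consider the following Python sketch of a match-action flow table. The byte offsets assume an untagged Ethernet frame carrying IPv4 with no options, and the rules themselves are invented; the point is only that the same generic switch code acts as a layer-2 switch, a router, or a firewall depending solely on which byte ranges it is told to examine:

    # A toy SDN-style switch: it knows nothing about layers, only byte
    # offsets within the frame.  All offsets and rules are illustrative.
    def matches(frame, offset, length, value):
        return frame[offset:offset + length] == value

    def forward(frame, flow_table):
        for (offset, length, value), action in flow_table:
            if matches(frame, offset, length, value):
                return action
        return "drop"  # default action

    # "Layer-2 switch": match on the destination MAC (bytes 0-5)
    l2_table = [((0, 6, bytes.fromhex("001122334455")), "output:1")]

    # "Router": match on the destination IPv4 address (bytes 30-33,
    # i.e., a 14-byte Ethernet header plus a 16-byte offset into IPv4)
    l3_table = [((30, 4, bytes([10, 0, 0, 1])), "output:2")]

    # "Firewall": drop anything whose TCP destination port (bytes 36-37) is 23
    fw_table = [((36, 2, (23).to_bytes(2, "big")), "drop")]

A real OpenFlow switch expresses this as match fields and actions in flow table entries rather than raw byte offsets, but the principle is the same: the switch itself is oblivious to what the bytes mean.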

Centralized control vs. distributed routing protocols

The most veteran communications network in the world is the Public Switched Telephone Network (PSTN). Unlike routing in the Internet, when a telephone call needs to be routed from source to destination, a centralized computer system works out the optimum path. The optimization algorithms employed can be very sophisticated (witness AT&T's blocking of Karmarkar from disclosing the details of his algorithm!) taking into account overall delay, the present loading throughout the network, etc.

The chief problem with centralized control is that there is a single point of failure. So when ARPA sponsored the design of a network that needed to survive network element failures (no, despite popular belief, there was no design goal to survive nuclear attack), it was decided to rely on distributed routing protocols. Using these protocols, routers speak with other routers, and learn enough about the underlying topology to properly forward packets. Local forwarding decisions miraculously lead to globally optimal paths.

Yet, the optimality just mentioned is that of finding the shortest path from source to destination, not that of optimally utilizing network resources. Routing protocols can support traffic engineering, but this means reserving local resources for a flow, not locating under-utilized resources elsewhere and pressing them into service.

But then along came SDN. The guiding SDN principle of centralized control means that the controller sees the entire network (or at least the network elements that it controls) and, if provided with suitable algorithms, can route packets while optimally utilizing network resources. This is precisely what Google are doing in their inter-datacenter WAN SDN – filling up the pipes much more efficiently than could be done purely based on distributed routing protocols. Of course this efficiency comes at the price of reintroducing the problem of a single point of failure. And as a corollary to the CAP theorem it is fundamentally impossible to circumvent this problem.
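
As a rough sketch of what the global view buys, here is a hypothetical controller that routes a flow over the path with the most spare capacity, something a hop-count shortest-path computation would never choose (the topology, capacities, and loads are invented):

    import heapq

    # Global view held by the controller: link -> (capacity, current load).
    links = {
        ("A", "D"): (10, 9),                       # shortest path, but nearly full
        ("A", "C"): (10, 2), ("C", "D"): (10, 2),  # longer, but lightly loaded
    }

    def neighbors(node):
        for (u, v), (cap, load) in links.items():
            if u == node:
                yield v, cap - load                # residual capacity

    def widest_path(src, dst):
        """Find the path maximizing the bottleneck residual capacity."""
        best = {src: float("inf")}
        heap = [(-float("inf"), src, [src])]
        while heap:
            neg_width, node, path = heapq.heappop(heap)
            if node == dst:
                return path, -neg_width
            for nxt, residual in neighbors(node):
                width = min(-neg_width, residual)
                if width > best.get(nxt, -1):
                    best[nxt] = width
                    heapq.heappush(heap, (-width, nxt, path + [nxt]))
        return None, 0

    print(widest_path("A", "D"))   # (['A', 'C', 'D'], 8) - avoids the full link

Google’s actual traffic engineering is of course far more sophisticated than this, but it relies on precisely this kind of network-wide visibility.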


SDN - is it worth it?

So, no matter how you define it, SDN is fundamentally incompatible with one or more of the well-established principles of PSN design. The question is thus whether the benefits outweigh the costs.

OSI-style layering was an important crutch when PSNs were first being developed, but leads to inefficiencies that should have been addressed long ago. These inefficiencies are not only in bandwidth utilization, they are also in complexity, e.g., the need for ARP in order to match up layer 2 and layer 3 addresses. Were it not for the sheer number of deployed network elements, one could even imagine replacing the present stack of Ethernet, MPLS, IP, and TCP with a single end-to-end protocol (IPv6 formatting might be suitable). This could be accomplished using SDN, but several problems would need to be solved first. The single point of failure argument is not made moot merely by positing that the controller hardware can be made sufficiently resilient; it is also necessary to make the controller sufficiently secure against malicious attacks. In addition, the bootstrap problem of how the controlled switch can reach and be reached by the controller with a conventional underlay needs to be convincingly resolved.

The previous paragraph addressed OSI-style layering, but what of ITU-style layering imposed in order to support hierarchies of service providers? Eliminating the need for these “layer networks” requires a new model of providing data and communications services. That new model could be the cloud. A cloud service provider which is also a network service provider, or which has business agreements with network service providers, could leverage SDN network elements to revolutionize end-to-end networking. One could envision a host device passing a conventional packet to a first network element in the cloud, which terminates all the conventional layers and applies the single end-to-end protocol header. Thereafter the SDN switches would examine the single header and forward so as to simultaneously ensure high QoE and high network resource utilization efficiency. Present SDN deployments are simply emulating a subset of features of the present network layers, and are not attempting to embody this dream.

SDN technology is indeed a major step backward, but has the potential of being a revolutionary step forward.

Sunday, August 19, 2012

Quality of Experience (QoE)

As a reader of this blog you are doubtless familiar with the concept of the Quality of Service (QoS) of a telecommunications service, by which we mean meeting defined levels of a set of measurable network parameters, such as availability, delay, delay variability, and information loss. The precise set of parameters depends on the service type; for example, Bit Error Rate (BER) and Errored Seconds Ratio are important for TDM services, while Bandwidth Profile and delay percentiles are two of the parameters measured for Ethernet services.

On the other hand, you may be less familiar with the related concept of Quality of Experience (QoE).

QoE is defined as the acceptability of a service, as perceived subjectively by the end-user (see ITU-T E.800, P.10, G.1080, and the ETSI 2010 QoS QoE User Experience Workshop). It too depends on the service being provided, being diminished when the user perceives low voice or video quality, long response times, service outages, information loss, lack of service reliability, or inconsistent behavior. Unfortunately, the end-user may not always distinguish whether QoE degradation is due to a defect in the communications network or in an information processing resource; for example, response time to a database query is partially due to computational resource availability and speed, and partially due to network delays in both directions.

While QoE as defined above is absolute and subjective, for reasons that we will discuss below, it may be measured in comparative and/or objective ways. By absolute QoE we mean the quality perceived by an end-user based solely on the received information, while comparative QoE refers to the somewhat artificial case of an end-user who has access to the non-degraded information. Subjective QoE determination is the perception of a true end-user, while objective QoE means QoE estimated by an algorithm designed to correlate with true user perception.

Telecommunications Service Providers originally earned their income by providing basic connectivity, but now, in the age of free WiFi, Skype, Hotmail, Dropbox, and other free best-effort services, the service provider’s only justification for charging a fee is providing a certain QoE level. When the QoE remains above a certain threshold the service is perceived as good, and the end-user is content. Below that level but above some other threshold the end-user perceives service degradation, but is able to tolerate it. Below the lower threshold the user becomes frustrated and typically abandons the service; surveys show that a large percentage of users experiencing low QoE desert the service provider without ever complaining to the provider’s customer service department.

Unfortunately, direct measurement of QoE is often difficult, and so for many years guaranteeing QoS levels has served as a proxy for QoE guarantees. The theory is that the QoE for a given application is a function of the network QoS parameters
                                 QoE = f (application, QoS)
but until recently one could only guess at the form of this function. However, it is important to emphasize that QoS does not map to QoE independently of the application. For example, for interactive applications such as voice conversations, low delay is critical while packet loss is relatively insignificant, while for others, e.g., progressive download over TCP, the opposite is true.
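
As a purely illustrative sketch of such an f (the weights and the scoring scale below are invented, not taken from any standard), one can think of each application as weighting the same QoS parameters differently:

    # Toy illustration of QoE = f(application, QoS); all weights invented.
    QOS = {"delay_ms": 150, "jitter_ms": 20, "loss_pct": 1.0}

    WEIGHTS = {
        "voice":    {"delay_ms": 0.01,  "jitter_ms": 0.05, "loss_pct": 0.3},
        "download": {"delay_ms": 0.001, "jitter_ms": 0.0,  "loss_pct": 1.5},
    }

    def qoe(application, qos):
        """Map QoS onto a 1..5 MOS-like scale for the given application."""
        penalty = sum(WEIGHTS[application][p] * qos[p] for p in qos)
        return max(1.0, 5.0 - penalty)

    print(qoe("voice", QOS))     # delay and jitter dominate the penalty
    print(qoe("download", QOS))  # loss dominates the penalty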

Unsurprisingly, the first QoE parameter to be directly measured was voice quality, since telephony was for many years the paramount telecommunications service. Telephony service providers promised “toll quality” speech (literally, quality for which they could charge a “toll”), and it was thus natural to specify what that meant. This QoE was quantified using the Mean Opinion Score (MOS), defined in ITU-T Recommendation P.800. MOS is measured by having a number of listeners subjectively score the speech quality on a scale from 1 (bad) to 5 (excellent), and averaging these scores (finding the mean). Many variations are defined, including Absolute Category Rating (ACR), in which the listeners hear only the degraded speech, and a comparative method called Degradation Category Rating (DCR), in which the listeners hear both the original and the degraded speech and compare the two. The comparative method is often used because it returns more accurate results.

Unfortunately, direct measurement of MOS in this fashion is an expensive and time-consuming task. So the ITU-T looked into ways of defining objective measures that could be automated. The first method developed was called PSQM (ITU-T P.861), and the second PESQ (ITU-T P.862). Both of these are objective comparative measures in that they compare degraded speech with the original telephone quality speech, using appropriate signal processing (such as computing a logarithmic scale frequency representation) to model the human auditory perception system. Similarly, PEAQ (ITU-R BS-1387) determines the quality of wideband audio. The particular methods were selected in competitions to have the highest correlation with human MOS scoring.

PSQM, PESQ, and PEAQ are all comparative, and are thus not suitable for estimating quality in operational systems where only the degraded audio is available. This was rectified by the ITU-T P.563 single-ended method for measuring absolute objective speech quality. P.563 determines the un-naturalness of telephone-grade speech sounds and how much non-speech-like noise is present.

Another approach championed by the ITU-T is the E-model (Recommendation G.107). The E-model is a planning tool that predicts a mouth-to-ear “transmission rating factor” R between 0 and 100, with higher values signifying better voice quality. An R value should be uniquely convertible to a MOS level. The expression for R starts with the basic signal to noise ratio and reduces it to account for various impairments including simultaneous impairments (loudness, quantization noise), delay impairments (delay, echo), and equipment impairments (codec distortion, packet loss). On the other hand, R is increased to compensate for advantageous scenarios such as mobility (cellphone, satellite).
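
The overall structure (R = Ro - Is - Id - Ie + A in its simplest form) and the standard conversion from R to an estimated MOS can be sketched as follows; the impairment values plugged in at the end are arbitrary examples, not defaults from G.107:

    def e_model_r(ro, i_s, i_d, i_e, a):
        """Transmission rating factor R: the basic signal-to-noise ratio Ro,
        reduced by simultaneous (Is), delay (Id) and equipment (Ie)
        impairments, and raised by the advantage factor A."""
        return ro - i_s - i_d - i_e + a

    def r_to_mos(r):
        """Convert an R value to an estimated MOS."""
        if r < 0:
            return 1.0
        if r > 100:
            return 4.5
        return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

    # Arbitrary example values, for illustration only:
    r = e_model_r(ro=93.2, i_s=1.4, i_d=8.0, i_e=11.0, a=0.0)
    print(r, r_to_mos(r))   # roughly R = 72.8, MOS = 3.7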

Several years before ITU-T’s P.563, ETSI TIPHON (Telecommunications and Internet Protocol Harmonization Over Networks) produced TS 101 329-5 on QoS measurement methodologies. Annex E of that document described VQMON, a single-ended method for estimating the E-model factors for VoIP based on network parameters.

But voice is not the only service for which QoE has been defined. The ITU-R produced BT.500 on the subjective assessment of television quality. It defines MOS-like scores - television sequences are shown to a group of viewers, and their subjective opinions are averaged.

Among the notable ITU-T Recommendations for video quality are :
  • P.910 Subjective video quality assessment methods for multimedia applications 
  • P.911 Subjective audiovisual quality assessment methods for multimedia applications 
  • P.920 Interactive test methods for audiovisual communications 
  • P.930 Principles of a reference impairment system for video 
  • P.931 Multimedia communications delay, synchronization and frame rate measurement 
  • J.143 User requirements for objective perceptual video quality measurements in digital cable television 
  • J.144 Objective perceptual video quality measurement techniques for digital cable television in the presence of a full reference. 
  • J.246 Perceptual audiovisual quality measurement techniques for multimedia services over digital cable television networks in the presence of a reduced bandwidth reference 
  • J.247 Objective perceptual multimedia video quality measurement in the presence of a full reference (PEVQ) 
  • J.341 Objective perceptual multimedia video quality measurement of HDTV for digital cable television in the presence of a full reference 
Since 1997 the principal body working on video quality has been the Video Quality Experts Group (VQEG). VQEG has produced a tutorial on comparative (“full-reference”) objective assessment of television quality, and is working on others.

In addition to audio and video, the ITU has looked into multimedia and data applications. Recommendation G.1011 is a reference guide to existing standards for QoE assessment, and identifies a taxonomy for such standards.

Recommendation G.1010 discusses applications (conversational voice, voice messaging, streaming audio, videophone, one-way video, web-browsing, bulk data transfer, email, e-commerce, interactive games, SMS, instant messaging, etc.) and gives performance targets for delay, delay variation, and loss QoS parameters for each.

Recommendation G.1030 provides network planners with end-to-end (E-model-like) tools for applications over IP networks, with an appendix devoted to web browsing. The appendix presents empirical data on users’ perception of response times, and proposes a MOS measure. This work is complemented by G.1050, which describes an IP network model that can be used for evaluating the performance of IP streams based on QoS parameters (delay, delay variation, and loss). Recommendation G.1070 proposes an algorithm that estimates videophone quality for planners. Other documents include J.163 on QoS for real-time services over cable modems, and X.140 on QoS parameters for public data networks.

Outside the ITU, the Broadband Forum (BBF) has produced TR-126, which is an excellent tutorial on QoE as well as a useful set of guidelines for the relationship between QoE and QoS for broadband triple play applications. The document commences with a definition of QoE that is consistent with that of the ITU-T, namely a measure of end-to-end performance from the user’s perspective, in contrast with QoS as metrics of network performance. TR-126 provides a clear relationship between the two, so that given a set of QoS measurements, one could predict the QoE for a user, and conversely given a target QoE, one could deduce the required network performance. TR-126 discusses QoE “dimensions”: service set-up, operation, and tear-down; QoE “facets”: user effort, application responsiveness, information fidelity, security, and dependability/availability; and the service, application, and transport “layers”. While QoE is quintessentially end-to-end, TR-126 breaks down the contribution of various segments, such as access technologies (e.g., DSL and PON), ISPs, and application service providers. Specific guidelines are given for video (various kinds of entertainment video, video conferencing, surveillance video, streaming video, …), voice (wired, wireless, voice messaging, IVR), and best-effort Internet data (web browsing, email, file transfer, VPN, P2P, ecommerce, …).

The TeleManagement Forum (TMF), as could be expected, has documents discussing QoE from the Service Level Agreement (SLA) management perspective. TMF’s Wireless Services Measurement Handbook GB923 defines Key Quality Indicators (KQIs) and Key Performance Indicators (KPIs), similar to QoE scores and QoS parameters. KQIs experienced by end-users may in principle be determined from KPIs (although the mapping may be complex), while KPIs are derived from QoS parameters. The TMF has defined a set of KQIs including response time, service availability, speech/video quality, transaction rate, offered throughput, etc. An SLA consists of a set of thresholds for KQIs and KPIs, and these are specified in the SLA Management Handbook GB917 and its Application Notes.

The Apdex Alliance is a group of collaborating companies that functions as a program under the auspices of the IEEE Industry Standards and Technology Organization (IEEE-ISTO). Its mission is to develop open standards that define standardized methods to report, benchmark, and track application performance. The Application Performance Index (Apdex) is a number between 0 and 1 that attempts to capture user satisfaction with an application. Zero signifies that no user would be satisfied, while 1 would mean that all users would be. More formally, users are divided into three categories, satisfied, tolerating, and frustrated; and the Apdex is the ratio of the number of satisfied users plus half of the tolerating ones to the total number of users.
                    Satisfied Count + Tolerated Count / 2
  Apdex = -------------------------------------------------------
           Satisfied Count + Tolerated Count + Frustrated Count
Apdex deconstructs application transactions into sessions (the “connect” time) consisting of processes (interactions accomplishing a goal) that are made up of tasks (individual interactions), and further into turns, protocols, and individual packets. The user is mainly aware of the task response time, since (s)he must wait for the task to complete before proceeding. For example, users may be satisfied if a web page completely loads within 2 seconds, and may tolerate the delay if it loads within 8 seconds. Above that, frustration sets in.
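
As a minimal sketch of the calculation (using the 2-second target mentioned above, with tolerating defined as up to four times the target, i.e., 8 seconds; the response times are invented):

    # Apdex from a list of task response times, with target threshold t.
    def apdex(response_times, t=2.0):
        satisfied = sum(1 for x in response_times if x <= t)
        tolerating = sum(1 for x in response_times if t < x <= 4 * t)
        return (satisfied + tolerating / 2) / len(response_times)

    samples = [0.8, 1.5, 2.5, 3.0, 9.0, 1.2]   # seconds, invented
    print(round(apdex(samples), 2))            # (3 + 2/2) / 6 = 0.67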

The problem with all of the above subjective and objective QoE measures is that they are service/application-specific. Since new applications are coming out every day, and furthermore different users may use completely different features of a single application, it is no longer feasible to study each application in depth. A new approach being studied is behavioral QoE estimation, where the user’s satisfaction is gauged based on his actions and reactions. An extreme example is the high measured correlation between a user being unsatisfied with a service level, and his aborting the application (or at least waiting until the service level improves). Such behavioral QoE may be used to automatically map QoS to QoE for new applications, or may be used directly instead of traditional QoE measurement.

Y(J)S

Sunday, May 20, 2012

What is time ?

There is an anecdote about a foreigner visiting London asking a man on the street “what is time?” and receiving the answer “I’m sorry, but I am not a philosopher”.

I don’t want to discuss here the philosophical or physical questions of what time is, but rather what we mean by time in telecommunications applications. In particular, we frequently hear the terms “UTC”, “GPS time”, “NTP time”, and “1588 time”, and I would like to clarify what these terms mean.

Everything starts with the question “what is a second?”. Until 1960 the second’s duration was based on the rotation of Earth. Specifically, the second was defined as the unit of time of which there are precisely 24*60*60 = 86,400 of them in a mean solar day. Unfortunately, the Earth’s rotation is slowing down due to tidal friction, and so between 1960 and 1967 the second was redefined as a particular fraction of the duration of the year 1900. Since it is hard to reproduce the year 1900 in the lab, the second was finally linked to a stable, reproducible, physical phenomenon, namely the radiation emitted when an electron transitions between the two hyperfine levels of the ground state of the cesium 133 atom. Cesium atomic clocks need only count 9,192,631,770 oscillations and declare that a second has passed. (Cesium is chosen because all of its 55 electrons except the outermost one are in stable shells, minimizing their effect on the outermost electron.)

Even such a stable phenomenon as the hyperfine transition is somewhat subject to variability (due to contaminants, undesired fields, and General Theory of Relativity corrections due to height above sea level), leading to deviations on the order of a nanosecond or two per day. In order to remove even this small variability, the TAI international time scale (TAI stands for “Temps Atomique International” or International Atomic Time), maintained by the International Bureau of Weights and Measures (BIPM) in Paris, is defined as the weighted average of over 300 atomic clocks located around the world (the higher weightings going to the more stable clocks).

TAI is precisely defined, but has become entirely divorced from the Earth’s rotation. Were we to adopt only TAI, the time of day would slowly lose connection with the position of the sun in the sky, and after a long enough time we would be having breakfast at 12 noon. In order to resynchronize the two definitions of the second, UTC is defined. UTC stands for Coordinated Universal Time (the order of letters is a compromise between several languages), and it replaced older time standards such as “GMT”. It is defined in ITU-R Recommendation TF.460-6 to be TAI adjusted by leap seconds introduced to compensate for the changing of Earth’s rotational velocity. When to introduce leap seconds is now determined by the International Earth Rotation and Reference Systems Service (IERS). While leap seconds can be either positive or negative, and can be introduced at the end of any month, there have only been positive ones (corresponding to slowing down of Earth’s rotation) and they have only been introduced on the last day of June or December. There are presently proposals to eliminate leap seconds entirely (in which case TAI would be abolished), and perhaps introduce leap hours should the need arise.

UTC is now exactly 34 seconds behind TAI, because of a 10 second offset introduced in 1972 when the present system was adopted, and 24 positive leap seconds that have been declared since then. The next leap second will be at the end of June 2012, increasing the difference to 35 seconds.

Actually there are several versions of Universal Time. UT0 and UT1 are found by observing the motion of stars (UT0) or distant quasars (UT1), as well as from laser ranging of the Moon and artificial Earth satellites (such as GPS satellites). UT1R and UT2R are smoothed versions of UT1, filtered to remove periodic and stochastic variations in the Earth’s rotation. UT2R is smoother than UT1, and any variations left in it are because of erratic changes in the Earth’s rotation, due to plate tectonics and climate change.

So, what kind of time do we use in GPS and our time distribution protocols?

The time of day reported by GPS, which is often called “GPS time”, is not UTC. Every GPS satellite has several on-board atomic clocks, and these clocks are set according to the master clock at the US Naval Observatory (USNO) in Washington, DC. “GPS time” does not include leap seconds, but GPS satellites periodically transmit a UTC offset message for this purpose (the GPS-UTC offset field is 8 bits and can thus accommodate 255 leap seconds, which should be sufficient for several hundred years). Once thus compensated, USNO time is within tens of nanoseconds of UTC. However, it can take over 10 minutes until you receive an offset message.

It is interesting that the on-board atomic clocks must be corrected for relativistic effects. Since the satellites are moving at high speeds with respect to an observer on the ground, the Special Theory of Relativity predicts that the on-board clocks will seem to be running about 7 microseconds per day slower than were they stationary with respect to the observer. On the other hand, the General Theory of Relativity predicts that because the satellite is high above the Earth, and thus experiences a weaker gravitational field, the on-board clocks will seem to be running faster by about 45 microseconds per day. The net relativistic correction is about 38 microseconds per day. After compensating for relativistic effects, the accuracy of time derived from a good GPS receiver is about 50 nanoseconds.
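
A back-of-the-envelope check, using only the figures quoted above, shows the size of the frequency correction that must be designed into the satellite clocks:

    # Net relativistic drift of a GPS satellite clock, from the rough
    # figures quoted above (7 us/day slower, 45 us/day faster).
    special = -7e-6      # seconds per day: time dilation, clock runs slow
    general = +45e-6     # seconds per day: weaker gravity, clock runs fast
    net = special + general
    print(net)           # about +38e-6 seconds per day
    print(net / 86400)   # fractional frequency offset, about 4.4e-10

This is why the on-board clock frequencies are deliberately set slightly low before launch, so that they tick at the correct rate as seen from the ground.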

NTP (and that includes SNTP) distributes UTC (i.e., it does take leap seconds into account) and specifies time in seconds since Jan 1, 1900. The NTP 64-bit timestamp consists of 32 bits of whole seconds (about 136 years until roll-over) and 32 bits of fractional seconds (about 233 picoseconds of resolution). However, any specific NTP server distributes time according to the stratum of its reference clock. Of course, the time a particular NTP client obtains depends on the network between the client and the NTP server. You can expect an NTP client to be within tens of milliseconds of its server on a LAN, but only within hundreds of milliseconds over the Internet. However, NTP allows a client to track several servers, and thus improve its accuracy.

IEEE 1588 distributes TAI, using an epoch of 1 January 1970 (essentially the UNIX epoch, but on the TAI time scale); 1588 time is thus currently ahead of UTC by 34 seconds (soon to be 35). The 1588v2 10-byte timestamp consists of 48 bits of whole seconds and 32 bits of nanoseconds. Once again, the precise time accuracy depends on the type of grandmaster to which the 1588 master is synchronized. The big difference between 1588 and NTP is the possibility of on-path support in the network. If you have Boundary Clocks (BCs) or Transparent Clocks (TCs) in your network, the time error should be very small (perhaps a microsecond or less). 1588 can't simultaneously track multiple masters, but it can choose the best one from a list.
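
The relationships between these time scales can be summarized in a short sketch; the leap-second-dependent constants are the values in effect at the time of writing and must be updated whenever a leap second is introduced, while the 2,208,988,800-second constant is simply the number of seconds between the NTP era origin (1 January 1900) and the UNIX epoch (1 January 1970):

    # Offsets between common time scales (leap-second values as of May 2012).
    TAI_MINUS_UTC = 34                   # becomes 35 after the June 2012 leap second
    GPS_MINUS_UTC = TAI_MINUS_UTC - 19   # GPS time = TAI - 19 s, i.e., 15 here
    NTP_UNIX_OFFSET = 2208988800         # seconds from 1900-01-01 to 1970-01-01

    def unix_to_ntp(unix_seconds):
        """NTP counts UTC seconds since 1900; UNIX counts them since 1970."""
        return unix_seconds + NTP_UNIX_OFFSET

    def unix_to_ptp(unix_seconds):
        """IEEE 1588 (PTP) counts TAI seconds from its 1970 epoch, so it runs
        ahead of a UTC-based count by the current TAI-UTC offset."""
        return unix_seconds + TAI_MINUS_UTC

    def unix_to_gps(unix_seconds):
        """GPS time (as a plain seconds count, ignoring the week number and
        seconds-of-week representation) runs ahead of UTC by the leap seconds
        accumulated since the GPS epoch, 1980-01-06."""
        GPS_EPOCH_UNIX = 315964800       # 1980-01-06 00:00:00 UTC
        return unix_seconds - GPS_EPOCH_UNIX + GPS_MINUS_UTC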

So that's what we mean by time.

Y(J)S

Tuesday, May 8, 2012

iPhone Storms

A few years ago RAD’s president Zohar Zisapel asked me to accompany him to a meeting with another Israeli company concerning possible cooperation on an important issue. On our way I asked him what this important issue was. He replied “the iPhone problem”, and I immediately understood.

He informed me that he had been in the US the previous week, and although he carried a Blackberry and not an iPhone, he had experienced inability to connect to the network even for voice calls, calls dropping in the middle, cell breathing (which he graphically described as the signal strength bars undulating up and down), and of course inability to connect to data services. Once back in Tel Aviv, he had contacted companies with whom RAD could cooperate in trying to solve the problem.

I had seen many reports on the problems AT&T was experiencing in New York and San Francisco since the introduction of Apple’s iPhone, but had not known it was really that bad. Obviously the iPhone brought significantly increased bandwidth usage due to users being “always on” and consuming more video streaming and other high-datarate services rather than just sporadically sending an email or downloading a file. However, networks in other parts of the world with many different kinds of smartphones were not experiencing such catastrophic failures; in fact, many operators with whom I had spoken were not observing any problems at all!

What could be causing these problems? There were really only three possibilities:
  1. lack of resources in the air interface (known as spectrum crunch or spectral exhaustion),
  2. under-provisioning of the backhaul network,
  3. failure of the signaling servers (due to what are known as signaling storms);
and if the second item was the problem (or at least a major chunk of it), then RAD was uniquely positioned to help.

Why did we expect the second item to be at the root of the problem? Well, the backhaul network is extremely cost sensitive, and increasing bandwidth there was an expensive and time-consuming task. We didn’t expect the air interface to be already congested (although we expected the spectrum to eventually become exhausted) since AT&T had already deployed HSPA+. We ruled out signaling as the major issue, since denser networks of smartphones were not experiencing similar problems.

Of course we now know that we were completely wrong, and that signaling server failure was the major problem. The explanation was intimately related to the slim design of the iPhone, and to the fact that Americans had never adopted text and multimedia messaging as avidly as Europeans did.

To understand what went wrong and how the issue was eventually solved, I need to explain 3G Radio Resource Control (RRC) states. The RRC protocol is the control plane between the 3G network and the UE (User Equipment, e.g., cellphone). It is responsible for handling many interactions such as locating the UE, waking it up, establishing/releasing connections for voice and data, and sending SMSes.

The UE can be in one of five possible RRC states, called Idle, URA_PCH, Cell_PCH, Cell_FACH, and Cell_DCH. In Idle mode the UE is known to the network only by its IMSI (subscriber identity), and only listens to system broadcasts and paging information. It only very rarely transmits (and even then only location updates) and barely uses its receiver (waking up periodically to check if it has been paged). Battery drain is thus extremely low. At the other extreme is the Cell_DCH (Dedicated Channel) state. Here the UE is using a dedicated high-speed data channel, and may be consuming 100 times more battery power. In between are the PCH states, where the UE is connected but still relatively inactive, consuming only a little battery power; and the FACH state, where the UE is using shared channels for exchange of small bursts of data, and consuming perhaps half of what it would consume in DCH.

Now, a UE in the Cell_PCH state that needs to send a short data packet (e.g., an application keepalive) will need to transition to Cell_FACH. It does this by sending a single signaling message and receiving a single reply. After sending its data packet, the UE will only drop back to Cell_PCH after a relatively long timeout (several seconds), and in the meantime will be wasting battery power. In order to conserve battery power many manufacturers, starting with RIM in its Blackberry, but more notably Apple in the iPhone and various manufacturers of Android devices, devised a trick. The UE sends an SCRI (Signaling Connection Release Indication) message, a message that was intended to convey that some unexpected error has occurred in the UE, and that the network should immediately release its connection. The UE drops into the Idle state, with almost no battery drain. However, the network effectively forgets it, and the next time the UE needs to transmit something, it needs to go from the Idle state to FACH, which is a signaling-intensive (over 25 messages) and lengthy operation.

The consequences of this trick were not very apparent when it was only used by Blackberry handsets, which are mainly used for email and occasional short data transfers. On the other hand, iPhone users tend to continually pull and push data, watch and stream videos, and are generally “always on”. In addition, the iPhone’s iconic slimness meant that Apple couldn’t use anything larger than a 1400 mAh battery, so that Apple was particularly aggressive in sending SCRIs. Finally, in the US where SMS had never been as popular as in Europe, the signaling infrastructure was woefully undersized for millions of iPhones disconnecting and reconnecting to the network.

The initial resolution involved increasing server resources and freeing up bandwidth for signaling channels. The eventual solution was a signaling enhancement in 3GPP Release 8 called Fast Dormancy, which Apple adopted towards the end of 2010. This enhancement enables the UE to transition quickly from FACH state to PCH, rather than to Idle as in the trick. Thus the network remembers the UE, and it can rapidly transition back and forth between FACH and PCH states.
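
A rough sketch of the signaling cost of the two behaviors (the message counts are the approximate figures quoted above, or assumptions where noted, not values taken from the 3GPP specifications):

    # Approximate signaling cost of RRC state transitions, per the text.
    SIGNALING_COST = {
        ("Cell_PCH", "Cell_FACH"): 2,   # one request plus one reply
        ("Cell_FACH", "Cell_PCH"): 2,   # assumed similar to the reverse transition
        ("Cell_FACH", "Idle"): 1,       # the SCRI "trick"
        ("Idle", "Cell_FACH"): 25,      # full connection re-establishment
    }

    def messages(path):
        """Total signaling messages along a sequence of RRC states."""
        return sum(SIGNALING_COST[t] for t in zip(path, path[1:]))

    # Fast Dormancy (Release 8): park in Cell_PCH between data bursts.
    print(messages(["Cell_FACH", "Cell_PCH", "Cell_FACH"]))   # 4 messages
    # The pre-Release-8 trick: drop to Idle, then reconnect for the next burst.
    print(messages(["Cell_FACH", "Idle", "Cell_FACH"]))       # 26 messages

Multiplied by millions of handsets sending keepalives every few minutes, the difference between these two numbers is exactly the kind of load that overwhelmed the signaling servers.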

Of course, iPhones are not alone in having caused signaling storms. In mid 2011 the Android port of Angry Birds caused significant signaling traffic that stressed networks until an update solved the problem, and in January 2012 NTT Docomo suffered a 4½ hour outage in Tokyo due to an Android application that overloaded the signaling plane.

And according to many reports, spectral exhaustion is right around the corner.

Y(J)S

Monday, January 9, 2012

Jobs and Ritchie

October 2011 marked the passing away of two men well-known in the computation and communications industries. One was Steve Jobs. In his honor Apple, Microsoft, and Disneyland all flew their flags at half-staff. October 16, 2011, was declared "Steve Jobs Day" in California. President Obama gave a eulogy calling Jobs “among the greatest of American innovators … a visionary”.

The other was Dennis Ritchie.

Ritchie died alone. His passing was not mentioned on the TV news, and was not picked up as a major item by the press. The only formal recognition was the dedication to his memory of the Fedora 16 distribution. For those who don’t recognize his name, Ritchie is the R in K&R (Kernighan and Ritchie’s “The C Programming Language”), a book known by heart to everyone who has ever written in C. In addition to creating C and introducing many of the constructs of imperative programming, Ritchie, along with Ken Thompson, created the UNIX operating system. In fact, C was created as a vehicle to make UNIX more portable.

For his contributions to computer science, Ritchie was awarded the Turing Award, the Hamming Medal, and the US National Medal of Technology. Until his retirement in 2007, Ritchie was head of Lucent’s System Software Research Department.

The papers eulogized Jobs as a great inventor, but were not very specific as to what precisely he invented. Of course they extolled technologies and devices with which his name is connected - the Apple II, the Macintosh, the mouse+icon GUI, the iPod, iPhone, and iPad - but mostly admitted that his contributions were in the area of design and evangelization, rather than invention. What they omitted was his major invention – his amazingly successful method of monetizing. Bill Gates convinced people to pay for software rather than receive it free of charge when purchasing hardware, but it was Steve Jobs who convinced people to give him a 30% royalty on third-party software (and music and videos) just in order to use it on his hardware.

In contrast, Ritchie convinced his employer AT&T to distribute UNIX to universities, under license but free of charge. The sources (mostly in C) were widely circulated in book form and enabled programmers to enhance its features as well as to create their own software. After its divestiture AT&T was allowed to market software, and quickly turned UNIX System V into a proprietary closed system. This prompted a group at Berkeley to continue development of BSD UNIX as an Open Source alternative, Ritchie to help in the development of the GNU free version of UNIX, and eventually Linus Torvalds to create Linux.

The computer industry is now segmented into Microsoft, Google/Android, and Apple. Microsoft’s most important asset is its Windows Operating System; this indeed is not based on UNIX, but it is programmed in C++ and promotes C#, two direct descendants of C. Google’s Android may exploit the Java language, which is not a direct descendant of C, but it is itself based on Linux, a descendant of UNIX. And Jobs’ Apple uses the iOS operating system, a version of UNIX, and the Objective-C language – a derivative of C. So while Jobs’ influence is limited to a small minority of PCs and one sector of the smartphone market, there is no mainstream computer or smart device without Ritchie’s fingerprints all over it.

Y(J)S