WHITE PAPER
How to Maximize Network Bandwidth Efficiency
(And Avoid a “Muxponder Tax”)
Network bandwidth efficiency is a key metric that measures
what portion of deployed network bandwidth is actually used
to carry revenue-generating services. Infinera’s Bandwidth
Virtualization architecture combines digital grooming with
high-capacity WDM transport to efficiently groom all services,
from low-rate Gigabit Ethernet up to 100G services, onto a
pool of PIC-based WDM bandwidth. This enables network
operators using Infinera’s Digital Optical Network solution
to make maximum use of all WDM bandwidth deployed,
approaching 100%, regardless of service type. In contrast,
muxponder/ROADM solutions that assign sub-wavelength
services to a point-to-point wavelength can often strand
unused bandwidth, resulting in a significant over-provisioning
of deployed capacity (and thus CapEx) versus services
sold. This paper provides a look at the network planning
considerations related to maximizing network efficiency for
next-generation WDM networks.
1. Introduction
Continued growth in network capacity requirements is driving the development of higher spectral
efficiency WDM systems operating at 40Gb/s or 100Gb/s per wavelength with channel spacings
of 25-50GHz. While these technologies will enable network capacity scaling to multiple terabits
per second per fiber pair, they do not necessarily consider the underlying mix and variety of
client services these networks must support, nor do they optimize for efficient transport of
“sub-wavelength” services whose data rate is less than the WDM line rate.
In fact, while high data rate services are emerging and expected to grow with the ratification
of standards for 40 and 100 Gigabit Ethernet, the bulk of client services and associated
revenues today and in the medium term remain at 10Gb/s or lower bit rates (see Figure 1). For
example, while WDM systems with 40G interfaces have been in deployment since early 2007,
by the end of 2008 more than 80 percent of service interfaces deployed in long-haul networks
remained at 10Gb/s, with this number forecast to remain essentially unchanged for 2009 [1]. Other
research [2] indicates that a sizeable portion of 40G WDM deployed in 2009 consisted of 40Gb/s
muxponders, with this number forecast to further double by 2012.
Figure 1
Thus while high-capacity WDM systems operating at 40Gb/s and 100Gb/s per wavelength
provide a fundamental path toward maximum spectral efficiency and total fiber capacity,
network planners need to also consider what mix of client service types is expected to be carried
over the transport network, and whether the WDM system enables efficient use of deployed
capacity. The economic impact of imperfect planning or inefficient WDM system architectures
is the accelerated deployment of underutilized capacity, resulting in higher deployed capital
requirements—essentially a bandwidth “tax” that reduces effective network capacity of deployed
WDM systems or incurs additional capital outlays for a given network traffic volume.
Previous studies have shown the occurrence of a “muxponder tax” through the inefficiency of
such muxponder/ROADM architectures for transporting sub-wavelength services in metro core
networks [3]. The impact of this inefficiency is especially important in capital-constrained situations,
as network bandwidth efficiency can most directly be tied to network cost (in the form of unused
capacity for equipment purchased to provision a given service demand), and thus represents a
key parameter to consider in ensuring maximum capital efficiency.
One of the key concerns for most network operators is how much revenue-generating traffic
can be supported from a given amount of capital spent on the optical layer. Metrics such as
wavelength bit-rate or total fiber capacity don’t capture the aspect of cost-efficiency, while others,
like cost per bit-kilometer, do not accurately reflect capital efficiency for real-world network and
service topologies. To analyze the impact of network bandwidth efficiency, as well as quantify
the differences between WDM system architectures, this study proposes the metric of network
bandwidth utilization as a measure of how efficiently optical capacity is actually used for a given
traffic demand. This can provide a simple, relevant and comprehensive metric that can be applied
to different technology solutions and network topologies to assess the level of capital efficiency
and scalability of a given network architecture.
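Expressed formally (a minimal formulation, with symbols introduced here for illustration, consistent with the definition given in Section 5):

    \eta = \frac{\sum_{\ell \in L} B_{\ell}^{\mathrm{service}}}{\sum_{\ell \in L} B_{\ell}^{\mathrm{deployed}}} \times 100\%

where L is the set of network links, B_ℓ^service is the service bandwidth carried on link ℓ, and B_ℓ^deployed is the equipped WDM capacity on link ℓ.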
This network planning study quantifies the amount of unused capacity that results from
“fragmentation” of deployed capacity when transporting 10Gb/s services over next-generation
WDM systems. Results indicate that the use of WDM systems with integrated digital bandwidth
management provides significant efficiency benefits over the use of muxponder/ROADM-based
systems, enabling reduced capital expenses (CapEx) for a given set of network service demands.
2. Overview of Transponder/ROADM-Based WDM Networking
In conventional all-optical WDM systems, the client service is typically mapped directly to a
wavelength using a transponder (e.g., a 10GbE LAN circuit is typically carried by a 10Gb/s optical
wavelength). When the client service data rate is less than the wavelength bit rate, muxponders
are used to map multiple sub-wavelength services to a higher data rate WDM wavelength (e.g.,
4 x 2.5G services mapped into a 10Gb/s wavelength). In both cases, once the client service is
mapped to a wavelength, individual wavelengths are then routed end-to-end across the WDM
network and switched purely in the optical domain using a Reconfigurable Optical Add/Drop
Multiplexer (ROADM) or Wavelength Selective Switch (WSS). A muxponder/ROADM architecture
can redirect wavelengths at intermediate nodes. However, there is no ability to provide
bandwidth management and grooming of the sub-wavelength service demands either within or
across wavelengths at these nodes (Figure 2).
Figure 2: Architecture of typical transponder/ROADM-based WDM system showing both service aggregation into
wavelengths, and wavelength switching architecture.
When enough sub-wavelength services need to be provisioned across the same end-to-end path
to completely consume the bandwidth of each wavelength, muxponders can efficiently fill the
wavelength between service endpoints. However, until enough service demands exist between
muxponder endpoints, a portion of the total traffic-carrying capacity of each wavelength remains
unused, and is essentially “stranded,” until more bandwidth is required on the same service path.
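To make this stranding effect concrete, here is a minimal Python sketch (node names and demand values are hypothetical, not drawn from the study) counting the capacity a muxponder/ROADM network must deploy when each end-to-end demand is assigned its own point-to-point wavelength:

    from math import ceil

    WAVE_RATE = 100  # Gb/s per muxponder wavelength

    # Hypothetical sub-wavelength demand per end-to-end node pair, in Gb/s.
    demands = {("A", "Z"): 30, ("B", "Z"): 20, ("C", "Z"): 40}

    deployed = carried = 0
    for pair, gbps in demands.items():
        # Each node pair gets its own wavelength(s); leftover capacity on
        # each wave is stranded until demand on that same path grows.
        deployed += ceil(gbps / WAVE_RATE) * WAVE_RATE
        carried += gbps

    print(f"{carried}G carried on {deployed}G deployed "
          f"({100 * carried / deployed:.0f}% efficiency)")
    # -> 90G carried on 300G deployed (30% efficiency)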
In muxponder/ROADM architectures, services allocated across multiple under-utilized
wavelengths that converge on a given ROADM node cannot be re-groomed to more efficiently
and fully utilize a smaller total number of wavelengths. Rather, multiple under-utilized
wavelengths remain under-utilized on an end-to-end basis, even when they share some portion of
a common path (Figure 3).
Figure 3: Inability to re-groom sub-wavelength traffic across multiple under-utilized wavelengths, even
when they share a common fiber path on a muxponder/ROADM-based network.
While this inefficiency could be reduced by frequent re-grooming, the need to minimize cost per
wavelength (driven by the number of muxponders deployed) typically leads carriers to minimize
the number of back-to-back muxponders at intermediate nodes, even though these are precisely
what would provide access to sub-wavelength services and enable efficient re-grooming of
service demands between wavelengths.
Naturally, this results in network inefficiency and the deployment of excess capacity and capital,
both of which are undesirable and which we call a “muxponder tax.” One would therefore predict
that as the optical wavelength bit rate increases for a given service mix, the amount of unallocated
capacity increases, resulting in an under-utilized, capital-inefficient network. The impact on
network cost can be further compounded by multistage muxponder architectures. For example,
mapping 2.5Gb/s services into 100Gb/s wavelengths requires two stages of multiplexing: first, a
4 x 2.5G → 10Gb/s muxponder aggregates 2.5Gb/s services into 10Gb/s payloads, which
are then connected to a 10 x 10Gb/s → 100Gb/s muxponder for aggregation into a 100Gb/s
wavelength. Each additional stage of multiplexing adds further equipment cost.
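As a back-of-the-envelope illustration of this two-stage example (the function is ours and counts cards at one end of the link only):

    from math import ceil

    def cascaded_cards(num_25g_services):
        """Muxponder cards needed to carry N x 2.5Gb/s services over a
        100Gb/s wavelength using the two-stage architecture above."""
        stage1 = ceil(num_25g_services / 4)   # 4 x 2.5G -> 10Gb/s muxponders
        stage2 = ceil(stage1 / 10)            # 10 x 10Gb/s -> 100Gb/s muxponders
        return stage1, stage2

    s1, s2 = cascaded_cards(25)
    print(f"{s1} first-stage + {s2} second-stage cards; "
          f"{s2 * 100 - 25 * 2.5:.1f}G of the 100G wave left stranded")
    # -> 7 first-stage + 1 second-stage cards; 37.5G left stranded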
Thus, as the disparity between the service data rate and wavelength speed increases, the use of
conventional muxponder/ROADM-based optical networks without integrated sub-wavelength
grooming at every node is expected to lead to higher CapEx costs and increased operational
complexity.
3. Integrated WDM + Switching Digital Networking
In this study we compare the conventional transponder/muxponder/ROADM-based WDM systems
described above to a novel “digital” optical network which integrates sub-wavelength bandwidth
management and grooming with high-capacity WDM transport at every add/drop node. This
enables a “Bandwidth Virtualization” architecture that provides operators with the ability to
continually re-groom sub-wavelength services within and between wavelengths at all add/drop
locations, thereby maximizing wavelength packing efficiency and eliminating stranded bandwidth
within wavelengths carrying sub-wavelength services. This serves to maximize bandwidth
utilization of the capacity of WDM systems deployed across the network.
The fundamental building block of a Digital Optical Network is the Digital ROADM node, which
provides high capacity WDM optical transport and digital add/drop flexibility (Figure 4). A Digital
ROADM utilizes Photonic Integrated Circuit (PIC) technology to provide affordable access to the
WDM bandwidth via Optical-Electrical-Optical (OEO) conversion, allowing the optical signals to
be converted into the electronic domain. At that point, the underlying services can be switched
and groomed at a sub-wavelength granularity, typically ODU0 (1.25Gb/s) or ODU1 (2.5Gb/s),
before re-grooming back into line-side WDM wavelengths.
Figure 4: Architecture of a Digital ROADM node with integrated OTN
bandwidth management and WDM transport.
The sub-wavelength digital bandwidth management of a Digital ROADM thus enables
multiplexing, grooming and add/drop of sub-wavelength services from line-to-line,
line-to-tributary and tributary-to-tributary across the WDM capacity of the node. Digital add/drop is
done simply by adding client-side interfaces and cross-connecting via an electronic switch matrix
to/from the WDM line. Since WDM line capacity is terminated independent of customer service
interfaces, a Digital ROADM can be designed to support a wide range of service types and data
rates, independent of the system’s WDM line rate. Sub-wavelength services can be switched and
groomed between and within wavelengths at every node to maximize wavelength utilization and
thus eliminate unused or “stranded” bandwidth between nodes (see Figure 5).
Figure 5: Integrated sub-wavelength bandwidth management and WDM transport in a Digital Optical
Network maximizes bandwidth utilization, and minimizes stranded bandwidth compared to a muxponder/
ROADM-based architecture.
Integrated digital sub-wavelength grooming thus allows a Digital Optical Network to operate
at the most cost-effective and spectrally efficient line rate while simultaneously enabling service
providers to connect customers at the service rate they desire. Fundamentally, Bandwidth
Virtualization enables services of any type or bit rate to be delivered across the entire “pool” of
WDM bandwidth, regardless of specific wavelength data rate, rather than being inflexibly dedicated
to a specific wavelength and line rate as in a conventional muxponder/ROADM network.
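The contrast can be summarized in a short sketch comparing deployed capacity on a single shared link under the two architectures (demand values are hypothetical, and the groomed case assumes ideal packing, ignoring OTN framing overhead):

    from math import ceil

    WAVE = 100  # Gb/s per line card in both architectures

    # Hypothetical end-to-end demands (Gb/s) whose routes all traverse one link.
    demands_on_link = [30, 20, 40, 10]

    # Muxponder/ROADM: each end-to-end demand occupies its own wavelength(s).
    mux_capacity = sum(ceil(d / WAVE) * WAVE for d in demands_on_link)

    # Bandwidth Virtualization: demands are groomed into a shared pool per link.
    bv_capacity = ceil(sum(demands_on_link) / WAVE) * WAVE

    print(f"muxponder: {mux_capacity}G deployed, groomed: {bv_capacity}G, "
          f"for {sum(demands_on_link)}G of service traffic")
    # -> muxponder: 400G deployed, groomed: 100G, for 100G of service traffic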
4. Network and Architecture Models
In this study, we compare the network efficiency of two WDM architectures, each providing
100Gb/s of capacity per line card: 1) a muxponder/ROADM system with 100Gb/s muxponders;
2) a Digital ROADM system with 100Gb/s Bandwidth Virtualized WDM line modules. Additional
details for each architecture are described below:
100Gb/s Muxponder consists of 10 client-side interfaces at 10Gb/s multiplexed into a 100Gb/s
WDM line-side output. The client signals are transparently mapped to the WDM output in a
nailed-up fashion, without the ability to selectively add/drop individual 10Gb/s client signals at
an intermediate site. Add/drop of specific client signals requires de-multiplexing and
re-multiplexing of all 10 x 10Gb/s client signals from each wavelength at the add/drop site.
100Gb/s WDM line module with Bandwidth Virtualization integrates a 100Gb/s WDM line-side
interface with sub-wavelength digital switching and separate client-side interfaces using
the Bandwidth Virtualization architecture described above. In this study we assume the 100Gb/s
WDM line-side interface uses PIC technology to integrate ten channels, each operating at 10Gb/s,
into a single WDM line card. Future advances in PIC technology will enable scaling to 500Gb/s
line cards (5 channels at 100Gb/s each) and 1Tb/s per line card (10 channels at 100Gb/s each).
This enables any mix of client services (from 1G to 100G) to be groomed by a switching fabric
(typically with ODU0/1 granularity), both within and between wavelengths, across the entire
WDM capacity of the node. By also decoupling client services from the WDM line-side interface,
operators benefit from a Digital Optical Network where every add/drop node in the network
integrates a digital switching layer to provide access to any individual client service at any node.
Table 1: Summary of key parameters of the 100Gb/s muxponder/ROADM and Digital ROADM systems

Parameter                           | 10 x 10G Muxponder | 100G Bandwidth Virtualization
DWDM line-side capacity per card    | 100G (single wave) | 100G (10 x 10G waves)
Client interfaces                   | 10 x 10G           | Any mix of clients, including 1G, 2.5G, 10G, 40G and 100G; modeling assumed 10 x 10G
Optical reach                       | 1,500 km           | 1,500 km
Intra-wavelength switching/grooming | No                 | Yes (typically ODU0/1)
Inter-wavelength switching/grooming | No                 | Yes (typically ODU0/1)
Normalized cost/bit                 | 1                  | 1
The two architectures were compared using a network model representative of a large North
American core fiber optic network spanning 33,000km with 47 links and points of presence in 40
cities (see Figure 6). The topology is 2-edge-connected. Each city is categorized into one of the
following designations: Tier 1, Tier 2, or Datacenter. Tier 1 and Tier 2 categorization is based on
the population of the city. Datacenters are chosen based on external factors such as inexpensive
power, tax-free regions and other economic factors. Among the 40 cities in the network model, 15
cities were categorized as Tier 1, two cities were in the Tier 2 class, and 11 were Datacenter cities.
Figure 6: Network fiber topology and city-to-city A-Z demands for the nationwide reference network model.
Service demands between all cities were created using a multidimensional gravity model, and
scaled by a factor of two in each planning period to model ongoing growth in service demands.
It was assumed that a large fraction of traffic (50 percent) is between Datacenters (the majority of
Datacenter cities also have large populations). The remaining traffic is inter-Tier 1 and between
Tier 1 and Datacenter cities. Traffic originating or terminating at a Tier 2 city is aggregated
to/from the nearest Tier 1 city before being transported across the network. For modeling simplicity,
the model assumed that all end-to-end service demands had a data rate granularity of 10Gb/s.
The resulting total traffic volumes on the network for each demand period are summarized in
Table 2, and a simplified sketch of the demand-generation step follows the table.
Table 2: Summary of total traffic demands on reference network model

Planning Period (Year) | Relative Service Demands | Total Traffic Volume
1                      | 1x                       | 231 x 10Gb/s = 2.3Tb/s
2                      | 2x Yr 1                  | 463 x 10Gb/s = 4.6Tb/s
3                      | 4x Yr 1                  | 928 x 10Gb/s = 9.3Tb/s
4                      | 8x Yr 1                  | 1,855 x 10Gb/s = 18.6Tb/s
5                      | 16x Yr 1                 | 3,712 x 10Gb/s = 37.1Tb/s
The 10Gb/s service demands were routed over both the muxponder/ROADM and Bandwidth
Virtualization architectures in two ways. First, the total set of cumulative service demands was
provisioned without assuming any previously deployed equipment. This represents “perfect”
planning with optimal routing of all demands in each planning period, and represents a best-case
scenario. Second, the incremental service demands for a given planning period were routed
on top of those from previous periods, using either available capacity or new equipment, as
necessary. This represents a more typical real-world planning scenario. For the case of 100Gb/s
muxponders, there was no difference between the two approaches, because in either case any
incremental traffic between a source-destination pair is added to the existing deployed 100Gb/s
wave (if possible) before a new 100Gb/s wave is provisioned for the incremental traffic between
the same node pair. Note that sub-wavelength grooming at intermediate nodes is not economically
feasible in this case. In both cases, WDM systems are deployed to maximize optical reach, thereby
reducing the cost associated with undesirable regeneration sites.
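The incremental provisioning rule for the muxponder case can be sketched as follows (a simplified single-node-pair illustration; function and variable names are ours):

    from math import ceil

    WAVE = 100  # Gb/s per wavelength

    def provision_incremental(cumulative_demands):
        """Deploy 100G waves per node pair, filling spare capacity on
        existing waves before adding new ones. Input: one dict per
        planning period mapping node pair -> cumulative demand (Gb/s)."""
        deployed = {}
        for period in cumulative_demands:
            for pair, gbps in period.items():
                if gbps > deployed.get(pair, 0):
                    deployed[pair] = ceil(gbps / WAVE) * WAVE
        return deployed

    # Demand on one pair doubling each period: 30G, 60G, 120G, 240G.
    periods = [{("A", "Z"): 30 * 2 ** p} for p in range(4)]
    print(provision_incremental(periods))  # {('A', 'Z'): 300} -> 240G on 300G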
5. Results and Analysis
Analysis results are shown in Figure 7 for planning period 1 (initial traffic demand), period 3 (4x
initial traffic demand) and period 5 (16x initial traffic demand). As shown, the lack of intra- and
inter-wavelength grooming in the muxponder/ROADM architecture creates considerable initial
inefficiency at low traffic volumes, with up to 90 percent of capacity unused on some links and
greater than 50 percent unused on over half the links. This forces the deployment of
significantly more initial capacity (and
thus CapEx) than actual service bandwidth. This improves at higher traffic volumes as new
demands can be provisioned over excess capacity deployed in prior planning periods. In
contrast, the 100G Bandwidth Virtualization architecture with integrated sub-wavelength
bandwidth management enables grooming of service demands to the WDM line capacity at
every add/drop site, therefore enabling each WDM wavelength to be utilized to the maximum
possible capacity.
Figure 7: Total service bandwidth demands and corresponding deployed link capacity for 100G
Bandwidth Virtualization and 100Gb/s muxponder/ROADM architectures for (a) planning period #1, (b)
planning period #3 and (c) planning period #5.
Figure 8: Network bandwidth efficiency for the 100G muxponder/ROADM
and 100G Bandwidth Virtualization architectures.
In Figure 8, we plot a summary of the network bandwidth efficiency for the two architectures over
the five planning periods, corresponding to a 16x increase in total service demands. Network
bandwidth efficiency is defined as the percentage of equipped capacity that is consumed by
the service demands. As shown in the figure, the ability to efficiently groom service demands
within/between wavelengths at all add/drop sites using the Bandwidth Virtualization architecture
significantly increases network bandwidth efficiency across all planning periods, providing
2-4 times better capital efficiency at low to moderate traffic volumes.
The results are striking: provisioning 10Gb/s service demands over a 100Gb/s muxponder/ROADM
system without integrated bandwidth management will initially require a large capital
investment, with only a small fraction of the deployed capacity used to carry revenue-generating
traffic. As network demands grow over time, the deployed capacity will progressively be utilized,
but at the cost of the time value of money on CapEx that was pre-deployed in the interim. The
magnitude of these results suggests that WDM systems with integrated bandwidth management
will provide significant initial cost savings, and also enable a closer coupling of CapEx deployment
to service revenue, yielding important financial benefits to service providers.
Muxponder/ROADM Inefficiencies for sub-10Gb/s demands:
The bandwidth efficiency results described above, along with the corresponding CapEx
implications, all assume a muxponder/ROADM architecture that requires only a single level of
multiplexing/aggregation, as exemplified by the network model’s use of 100Gb/s muxponders,
which support direct aggregation of 10 x 10Gb/s services into a 100Gb/s wavelength.
However, network operators should also consider the impact of muxponder inefficiency for
sub-10Gb/s services (for example, 1Gb/s Gigabit Ethernet or 2.5Gb/s SONET/SDH), given the
continued prevalence of these service data rates. In this case, sub-10Gb/s
services would typically require a cascaded muxponder architecture to be multiplexed into
a 100Gb/s wavelength: they are first multiplexed to a 10Gb/s optical output, and multiple
aggregated 10Gb/s streams are then in turn multiplexed into 100Gb/s WDM wavelengths
(see Figure 9).

Figure 9: Cascaded muxponder architecture for sub-10G services. First-stage muxponders
(e.g., 8 x 1Gb/s → 10Gb/s, 4 x 2.5Gb/s → 10Gb/s) feed second-stage muxponders
(e.g., 4 x 10Gb/s → 40Gb/s, 10 x 10Gb/s → 100Gb/s) ahead of WDM multiplexing.
This cascaded muxponder architecture further reduces bandwidth efficiency and increases CapEx
for several reasons. First, bandwidth inefficiency is further increased by the fact that both
sub-10Gb/s and >10Gb/s muxponders may have unallocated bandwidth unless there exists enough
end-to-end service demand to fully fill all available muxponder bandwidth, compounding the
problem of fragmented and stranded bandwidth. Second, the cascaded architecture requires two
stages of multiplexing and associated OEO hardware, thereby increasing total hardware deployed.
Finally, the two-stage muxponder architecture increases operational costs through a larger space
and power footprint, due to the additional cards and back-to-back connections, and through more
“truck rolls” for service turn-up.
In contrast, the Bandwidth Virtualization architecture, by integrating WDM transport with
bandwidth management, inherently separates the client interfaces from the WDM line modules
via a digital switching matrix. This enables sub-wavelength client services at 1Gb/s or 2.5Gb/s,
for example, to be mapped directly onto line-side wavelengths (at 10Gb/s, 40Gb/s or 100Gb/s)
with only a single stage of bandwidth management, simply by plugging in client modules and
without changing the line-side module.
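For concreteness, the single-stage fill can be estimated with standard G.709 tributary-slot arithmetic (the client mix below is hypothetical; 10GbE is assumed to map to an ODU2, i.e., 8 ODU0 slots):

    # Standard G.709 containment: an ODU2 (~10Gb/s) carries 8 x ODU0 (1.25G)
    # or 4 x ODU1 (2.5G); an ODU4 (~100Gb/s) carries 80 x ODU0.
    ODU0_PER_ODU4 = 80

    def odu0_slots(num_1gbe, num_10gbe):
        """ODU0 tributary slots consumed by a mix of 1GbE and 10GbE clients
        groomed in a single stage onto a 100G line side."""
        return num_1gbe * 1 + num_10gbe * 8  # 1GbE -> 1 slot, 10GbE -> 8 slots

    used = odu0_slots(num_1gbe=24, num_10gbe=6)
    print(f"{used}/{ODU0_PER_ODU4} ODU0 slots used "
          f"({100 * used / ODU0_PER_ODU4:.0f}% wavelength fill)")
    # -> 72/80 ODU0 slots used (90% wavelength fill)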
6. Conclusion
This study introduces the metric of network bandwidth efficiency as a means of comparing the
capital efficiency of next-generation WDM system architectures. The results obtained in this study
highlight the benefits of a Bandwidth Virtualization architecture which integrates WDM transport
with digital sub-wavelength grooming capabilities at every add/drop node. The ability to provide
grooming of sub-wavelength services within and across wavelengths significantly improves
network bandwidth efficiency, and enables any client service type (from 1Gb/s to 100Gb/s) to be
transported over the most economical WDM line rate, i.e., 10Gb/s today, 40Gb/s or 100Gb/s in
the future, without forcing a compromise between the two.
In contrast, muxponder/ROADM-based architectures can suffer very low bandwidth efficiency
in networks with low to medium traffic volumes, requiring considerable amounts of WDM
bandwidth initially, far in excess of actual service demand. This forces the over-deployment of
CapEx until end-to-end service bandwidth grows to fully fill the muxponder “pipes,” imposing
what we term a “muxponder tax” on networks that utilize this architecture.
These results should prompt network operators to consider not just total fiber capacity
requirements when planning network growth, but also the efficiency with which that bandwidth
is used, to ensure maximal return on CapEx investments in next-generation, high-capacity
networks.
References
[1] Dell’Oro Group, Worldwide WDM Market Report, 4Q 2009.
[2] Ovum, Optical Networks Global Market Report, June 2008.
[3] S. Melle and V. Vusirikala, “Network Planning and Architecture Analysis of Wavelength Blocking in Optical and Digital
ROADM Networks,” Technical Digest, OFC 2007, Anaheim, CA.
[4] S. Melle, D. Perkins and C. Villamizar, “Network Cost Savings from Router Bypass in IP over WDM Core Networks,”
OFC/NFOEC 2008, Session NTuD, Anaheim, CA, Feb. 24-28, 2008.
Infinera Corporation
169 Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 408 572 5200
Fax: +1 408 572 5454
www.infinera.com
Have a question about Infinera’s products or services?
Please contact us via the email addresses below.
Americas: [email protected]
Asia & Pacific Rim: [email protected]
Europe, Middle East, and Africa: [email protected]
General E-Mail: [email protected]
Specifications subject to change without notice.
Document Number: WP-NE-08.2010
© Copyright 2010 Infinera Corporation. All rights reserved.
Infinera, Infinera DTN™, IQ™, Bandwidth Virtualization™, Digital Virtual Concatenation™ and
Infinera Digital Optical Network™ are trademarks of Infinera Corporation.