N5 NETWORKING BEST PRACTICES

Table of Contents
NexGen N5 Networking
Overview of Storage Networking Best Practices
Recommended Switch Features for an iSCSI Network
Setting up the iSCSI Network for High Performance and High Availability
iSCSI SAN Topologies
Networking Best Practices Summary
iSCSI Network Best Practices Summary
General Application Setup for iSCSI Volume Access
Overview of NexGen N5 Networking Options
NexGen N5 Network Cabling Options
Network Cabling: 10GBT & 10GbE SFP+
Appendix A: NexGen N5 TCP/UDP Port Numbers
NEXGEN N5 NETWORKING
Overview of Storage Networking Best Practices
A high-performance, high-availability storage network can be built in many ways. For the NexGen
N5 Hybrid Flash Array, we recommend the following settings, which are explained in further detail throughout this
guide.

• Implement a fault-tolerant switch environment with multiple redundant switches.
• Implement MPIO at the host for high availability and performance.
• Implement a high-performing network for data (10GbE SFP+ or 10GBT RJ45).
• Implement separate data and management subnets.
• Implement separate subnets or VLANs for dedicated data bandwidth.
• Set/verify the individual ports on the switch, host, and storage to full-duplex mode.
• Enable Jumbo Frames on all ports to maximize throughput (a verification sketch follows this list).
• Enable Flow Control on all ports.
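
Because Jumbo Frames only help when every hop agrees on the MTU, it is worth verifying the path end to end before putting it into production. Below is a minimal verification sketch in Python, assuming a Linux host whose ping utility supports the -M do (don't-fragment) and -s (payload size) options; the target address 10.10.10.50 is a placeholder for one of your data port IPs, not a NexGen default.

    import subprocess

    # Placeholder target; substitute a NexGen N5 data port IP on your IP SAN.
    TARGET = "10.10.10.50"

    # A 9000-byte MTU leaves 8972 bytes of ICMP payload after the
    # 20-byte IP header and 8-byte ICMP header.
    PAYLOAD = 9000 - 28

    # "-M do" sets the don't-fragment bit, so an undersized link in the path
    # fails loudly instead of silently fragmenting the frame.
    result = subprocess.run(
        ["ping", "-c", "4", "-M", "do", "-s", str(PAYLOAD), TARGET],
        capture_output=True, text=True,
    )

    if result.returncode == 0:
        print("Jumbo frame path OK: 8972-byte unfragmented pings succeeded.")
    else:
        print("Jumbo frame path FAILED: check MTU on the host, switches, and array.")
        print(result.stdout or result.stderr)

If any device in the path is still at a 1500-byte MTU, the oversized pings fail and the misconfiguration is caught before it degrades iSCSI performance.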
Recommended Switch Features for an iSCSI Network
NexGen does not recommend a particular switch. However, the following lists the minimum set of
switch capabilities needed to optimize the operation and performance of the N5.
10Gb Ethernet Support: Full-duplex 10Gb Ethernet operation ensures the minimum network latency and the highest throughput. To ensure the network is not the bottleneck for application server performance, implement end-to-end 10Gb from the application server to storage.

Non-blocking Backplane: Optimal iSCSI data communications require switches with a backplane that has enough bandwidth to support full-duplex connectivity at full utilization, line rate on all ports at the same time.

Buffer Cache: High-performing iSCSI data communications require switches with at least 512 KB of buffer space for each port; a 48-port switch therefore needs at least 24 MB of buffer cache (a small sizing sketch follows this table).

Jumbo Frames: Support for Jumbo Frames ensures maximum performance for sequential read/write workloads.

Flow Control: Flow Control ensures graceful communication between initiator and target.

Switch Trunking: Use link aggregation of two or more 10Gb or 40Gb links to connect multiple switches together.

Switch Features: Managed switch, Layer 2 switching, VLAN support, and Spanning Tree Protocol.
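
As a worked example of the buffer sizing guideline above, this small sketch (an illustration only) applies the 512 KB-per-port rule to an arbitrary port count; the 48-port case reproduces the 24 MB figure from the table.

    # 512 KB of buffer space per switch port, per the guideline above.
    BUFFER_PER_PORT_KB = 512

    def min_buffer_cache_mb(port_count: int) -> float:
        """Minimum recommended switch buffer cache in MB for a given port count."""
        return port_count * BUFFER_PER_PORT_KB / 1024

    print(min_buffer_cache_mb(48))  # 24.0 MB for a 48-port switch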
Setting up the iSCSI Network for High Performance and High Availability
The NexGen N5 Hybrid Flash Array is equipped with four 1GbE management ports, two on each storage
processor. By default, Management Port-1 is enabled for DHCP, while Management Port-2 is configured from
the factory with a static IP address (details below). For data, each N5 is equipped with four 10GbE data
ports, two on each storage processor. The following is a summary of the out-of-factory network configuration.
Mgmt. Port-2
  Storage Processor A: Enabled; Static IP 192.168.100.100, Mask 255.255.255.0, Gateway None
  Storage Processor B: Enabled; Static IP 192.168.100.200, Mask 255.255.255.0, Gateway None
Mgmt. Port-1
  Storage Processor A: Enabled; DHCP
  Storage Processor B: Enabled; DHCP
Data Ports 1 – 2
  Storage Processor A: Disabled; No IP Configuration
  Storage Processor B: Disabled; No IP Configuration
Lights Out Management Ports (accessed via Mgmt. Port-1)
  Storage Processor A: Disabled; No IP Configuration
  Storage Processor B: Disabled; No IP Configuration

Figure 1 – Default Network Settings
See Appendix A: NexGen N5 TCP/UDP Port Numbers for the TCP/UDP inbound ports used for normal SAN operations.
To set up the network interfaces on the NexGen N5 Hybrid Flash Array, navigate to the ‘Settings’ window
and select the ‘Network Addressing’ tab. Both the management and data ports on each storage
processor in the system are listed in a dialog that allows you to select each port and configure it individually
by clicking the ‘Edit’ button.
Figure 2 – Network Addressing screen
After clicking on the ‘Edit’ button, the Edit Network Interface Configuration window appears. Set the
Mode (DHCP, Disabled or Static), Address, Mask, Gateway, and Frame Size (1500 or 9000). Click
‘Save Config Changes’ to save the information. The Validate Configuration region of the window can be
used to test network connectivity:
Figure 3 – Configuring a Management Port
Click on the ‘Edit’ button next to the specific data port that you want to configure. Set the Mode (DHCP,
Disabled or Static), Address, Mask, Gateway, and Frame Size (1500 or 9000). Click ‘Save Config Changes’
to save the information.
Figure 4 – Configuring a Data Port
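
Before clicking ‘Save Config Changes’, it can help to sanity-check that the planned address, mask, gateway, and frame size describe one consistent configuration. The following is a small, array-independent sketch using Python's standard ipaddress module; the values shown are placeholders, not NexGen defaults.

    import ipaddress

    # Placeholder values; substitute the settings you intend to enter in the
    # Edit Network Interface Configuration window.
    address = "10.10.10.20"
    mask = "255.255.255.0"
    gateway = "10.10.10.1"   # use None if the port has no gateway
    frame_size = 9000        # must be 1500 or 9000

    network = ipaddress.ip_network(f"{address}/{mask}", strict=False)

    assert frame_size in (1500, 9000), "Frame Size must be 1500 or 9000"
    assert ipaddress.ip_address(address) not in (
        network.network_address,
        network.broadcast_address,
    ), "Address must not be the network or broadcast address"
    if gateway is not None:
        assert ipaddress.ip_address(gateway) in network, \
            "Gateway must be on the same subnet as the address/mask"

    print(f"{address}/{network.prefixlen}, gateway {gateway}: looks consistent")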
The Validate Configuration region of the window can be used to test network connectivity by clicking on
the ‘Ping Address’ option. If successful, the RTT (round-trip time) and Count (the number of successful
round trips) are displayed:
Figure 5 – Configuration Success Screen
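
For a host-side complement to the GUI's Ping Address tool, the sketch below measures TCP connect round-trip times from an application server to the iSCSI target port (TCP 3260, per Appendix A) on a data port address. The target address is a placeholder, and this is only a rough reachability and latency check, not a replacement for the built-in validation.

    import socket
    import time

    # Placeholder data port IP; 3260 is the iSCSI target port (see Appendix A).
    TARGET = ("10.10.10.50", 3260)
    ATTEMPTS = 4

    rtts = []
    for _ in range(ATTEMPTS):
        start = time.perf_counter()
        try:
            with socket.create_connection(TARGET, timeout=2):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # only successful round trips are counted, like the Count field

    print(f"Count: {len(rtts)} of {ATTEMPTS}")
    if rtts:
        print(f"RTT: min {min(rtts):.2f} ms, avg {sum(rtts) / len(rtts):.2f} ms")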
iSCSI SAN Topologies
There are two types of recommended iSCSI SAN topologies for use with the NexGen N5 Storage System:
• Single IP SAN network (single subnet or single VLAN)
• Dual IP SAN networks (two subnets or two VLANs)
For the Single IP SAN network topology configuration, all Data Ports on both storage processors will be
configured with IP addresses on the same IP SAN network. Application servers are also connected to the
single IP SAN network and volumes are connected via iSCSI with either a single session or multiple sessions
(MPIO). The following logical network diagrams depict how to set up single and dual IP SAN network
configurations with the NexGen N5 Storage System.
Figure 6 – Single IP SAN Network Configuration
Figure 7 – Dual IP SAN Network Configuration
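
To make the two topologies in Figures 6 and 7 concrete, here is a hypothetical addressing sketch. The subnets 10.10.10.0/24 and 10.10.20.0/24 and every address below are examples only, and the dual-SAN split (each storage processor placing one data port on each subnet) is a common layout assumed here rather than a documented requirement.

    # Hypothetical Single IP SAN plan: all data ports and server NICs share
    # one subnet (or VLAN).
    single_ip_san = {
        "subnet": "10.10.10.0/24",
        "sp_a_data_ports": ["10.10.10.11", "10.10.10.12"],
        "sp_b_data_ports": ["10.10.10.21", "10.10.10.22"],
        "server1_nics": ["10.10.10.101", "10.10.10.102"],
    }

    # Hypothetical Dual IP SAN plan: each storage processor contributes one
    # data port to each subnet, so every server keeps a path to both storage
    # processors on both networks.
    dual_ip_san = {
        "san1": {
            "subnet": "10.10.10.0/24",
            "sp_a_data_port": "10.10.10.11",
            "sp_b_data_port": "10.10.10.21",
            "server1_nic": "10.10.10.101",
        },
        "san2": {
            "subnet": "10.10.20.0/24",
            "sp_a_data_port": "10.10.20.12",
            "sp_b_data_port": "10.10.20.22",
            "server1_nic": "10.10.20.101",
        },
    }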
For optimal performance and availability in the application servers, it is recommended that MPIO be used. The
best MPIO option is to configure two or more paths from the application server to the NexGen N5 Hybrid Flash
Array in an Active-Active MPIO mode. The minimum number of MPIO paths should be two paths across two
NICs in the application server connected to both storage processors. The recommended physical network
topology for the IP SAN network(s) is to have redundant physical paths for the volume connections made from
the application servers to the storage. This is easily done with multiple switches in the environment connected
to multiple NICs in the application server and storage. Below is an example of a two physical switch
configuration. The switches are trunked together so that the Single IP SAN network can span both switches. If
a Dual IP SAN network is implemented, the trunk links between the switches are not necessary unless other
VLANs are being spanned across switches.
Figure 8 – Two Switch Physical Topology in Single IP SAN Network Configuration
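
To illustrate what "at least two paths across two NICs" means in terms of iSCSI sessions, the sketch below enumerates (server NIC, data port) pairs using the hypothetical single IP SAN addresses from the earlier sketch: the minimum layout is one session to each storage processor, and a fuller Active-Active layout connects every NIC to every data port.

    from itertools import product

    # Hypothetical addresses, reused from the earlier topology sketch.
    server_nics = ["10.10.10.101", "10.10.10.102"]
    sp_a_ports = ["10.10.10.11", "10.10.10.12"]
    sp_b_ports = ["10.10.10.21", "10.10.10.22"]

    # Minimum recommended MPIO layout: two sessions across two server NICs,
    # one to each storage processor.
    minimum_paths = [
        (server_nics[0], sp_a_ports[0]),
        (server_nics[1], sp_b_ports[0]),
    ]

    # Fuller Active-Active layout: every server NIC to every data port.
    all_paths = list(product(server_nics, sp_a_ports + sp_b_ports))

    print(f"Minimum sessions per volume: {len(minimum_paths)}")  # 2
    print(f"Full-mesh sessions per volume: {len(all_paths)}")    # 8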
Networking Best Practices Summary
Logical Network Topologies
• Implement a Single IP SAN or Dual IP SAN network topology.

Physical Network Topologies
• Implement redundant switches.
• In a Single IP SAN implementation, the two switches must be trunked together.
• In a Dual IP SAN implementation, the logically separated networks should also be on physically different switches for redundancy.
• Utilize core switch topologies that use multiple high-bandwidth, low-latency trunk links without requiring the use of Spanning Tree.

MPIO
• Use host-based MPIO.
• Set up at least two paths to a volume.
• Use MPIO ALUA with a Round-Robin path selection policy.

Jumbo Frames
• Use caution when implementing Jumbo Frames.
• Configure Jumbo Frame support on all switches between the application servers and the NexGen N5.
• Enable Jumbo Frames on all application servers and storage network interfaces that are connected to the IP SAN network(s).
• There is no need to configure Jumbo Frames on the Management Ports on the NexGen N5 unless you are using replication.
• Proper configuration of Jumbo Frames should yield anywhere from a 0-20% performance benefit, depending on the workload.
• Misconfiguration of Jumbo Frames can result in a negative performance impact.

Flow Control
• Enable Flow Control on all switches and switch ports connected to the IP SAN network(s).
• Flow Control is good practice for optimal iSCSI performance on 10 Gigabit Ethernet networks.
• Enable tx/rx flow control in the application server and VM environment NIC configurations if not on by default.
• Flow Control is enabled by default on the NexGen N5 NICs; there is no need to configure this.

Link Aggregation Technology (LACP, MC-LAG, Virtual Port Channels, etc.)
• Link aggregation technologies cannot be used with the NexGen N5 network ports for management or data. Use MPIO at the host to provide path redundancy and improved performance.
• Link aggregation technologies (LACP, MC-LAG, Virtual Port Channels) should be used for trunking switches together in order to achieve connection reliability and higher performance (bandwidth).
• Consult the switch vendor documentation on proper setup.
iSCSI Network Best Practices Summary
Data Ports and Management Ports
• Management and Data ports should be configured on separate networks.
• Data Ports should use static IP addresses on a dedicated IP SAN network, ideally isolated from all other traffic.
• Network ports which are not being used should be set to Disabled.

Flow Control
• Flow Control should be enabled on all switches and ports that will carry iSCSI traffic when using 10 Gigabit Ethernet hosts and storage.
• Enable tx/rx flow control in the application server NIC configurations if not on by default.
• Flow Control is enabled by default on the NexGen N5 NICs; there is no need to configure this.

Jumbo Frames
• Data Ports should be enabled for Jumbo Frames (9000 Frame Size) only if all switches, switch ports, and application servers connected to the IP SAN network are configured for Jumbo Frames.
• Misconfiguration of Jumbo Frames can have a negative performance impact.
• Configure Jumbo Frame support on the switches first, followed by the application server iSCSI network interfaces, etc. There is no need to configure Jumbo Frames on the Management Ports on the NexGen N5.
General Application Setup for iSCSI Volume Access
The NexGen N5 Hybrid Flash Array presents data volumes for access by application servers via the iSCSI
protocol with Asymmetric Logical Unit Access (ALUA) enabled. The general setup guidance below for connecting
volumes applies to the iSCSI initiators available for the most common operating systems; a Linux-based example follows the table.
Use multiple iSCSI discovery addresses (portals) for high availability
• Configure at least two discovery addresses in the application server's iSCSI initiator for accessing volumes on the N5 Storage System.
• For high availability of iSCSI discovery on the iSCSI initiator, specify at least two (preferably four) iSCSI discovery IP addresses that correspond to data port IP addresses on each Storage Processor in the NexGen N5.
• All volumes on the NexGen N5 are advertised for discovery on all Data Ports. Volumes are NOT advertised for discovery on the Management Ports.

Use MPIO for volume connectivity high availability
• If the host operating system and iSCSI initiator support MPIO, configure at least two iSCSI sessions per volume. Create one session connected to Storage Processor-A and the other session connected to Storage Processor-B.

Use a Round-Robin MPIO policy for optimal host connectivity performance
• If the host operating system and iSCSI initiator support MPIO, use MPIO ALUA with a Round-Robin path selection policy.

Use iqn (iSCSI Qualified Name) based iSCSI security for volume access
• Each iSCSI initiator will have a unique iqn on the iSCSI network. Specify the iqn(s) of the application servers in the Host Access Group and assign volumes to that group.
• The ‘Allow-all Access’ Host Access Group is not recommended for use on production server volumes.
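
As one concrete (non-authoritative) illustration of these practices, the sketch below targets a Linux application server running the open-iscsi initiator: it reads the host's unique IQN from /etc/iscsi/initiatorname.iscsi (so it can be added to a Host Access Group) and runs SendTargets discovery against two placeholder portal addresses, one data port on each storage processor. Other operating systems use their own initiator tooling, and reading the IQN file typically requires root privileges.

    import subprocess

    # Placeholder discovery portals: one data port IP on each storage processor.
    DISCOVERY_PORTALS = ["10.10.10.11", "10.10.10.21"]

    # Each iSCSI initiator has a unique IQN; open-iscsi stores it here.
    # (Reading this file usually requires root.)
    with open("/etc/iscsi/initiatorname.iscsi") as f:
        for line in f:
            if line.startswith("InitiatorName="):
                iqn = line.split("=", 1)[1].strip()
                print("Host IQN (add this to the N5 Host Access Group):", iqn)

    # SendTargets discovery against both portals for discovery redundancy.
    for portal in DISCOVERY_PORTALS:
        subprocess.run(
            ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
            check=False,
        )

Sessions created from the discovered targets can then be configured for MPIO ALUA with a Round-Robin path selection policy using the host operating system's multipathing tools.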
Overview of NexGen N5 Networking Options
Model | Management Ports | Redundant Data Port NICs
N5-200 | 4x 1Gb RJ45 | 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45
N5-300 | 4x 1Gb RJ45 | 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45
N5-500 | 4x 1Gb RJ45 | 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45
N5-1000 | 4x 1Gb RJ45 | 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45
NexGen N5 Network Cabling Options
Port | # of Ports | Cable Type | NIC Type | What to Buy with N5
1GbE Management | 4 | Cat6 or better (RJ45) | N/A | Cables only
1GbE Data | 4 | Cat6 or better (RJ45) | SFP+ | GBIC SKU plus cables
1GBT Data | 4 | Cat6 or better (RJ45) | GBase-T | Cables only
10GbE Data | 4 | SFP+ Twinax | SFP+ | Cables only
10GbE Data | 4 | SFP+ Optical (OM3 or better) | SFP+ | Cables with optic adapters/modules
10GBT Data | 4 | Cat6a or better (RJ45) | GBase-T | Cables only
Network Cabling: 10GBT & 10GbE SFP+
Type | Speed | Max Distance | Latency per Link | Power
10GBase-T | 10 Gb/sec | 100m | 2.6µs | 2.7W
10Gb SFP+ SR Optical | 10 Gb/sec | 300m | 0.3µs | 0.7W
10Gb SFP+ Twinax Passive | 10 Gb/sec | 10m | 0.3µs | 0.7W
10Gb SFP+ Twinax Active | 10 Gb/sec | 25m | 0.3µs | 0.7W

Figure 9 – 10GBT Cat7 Cable
Figure 10 – 10GbE SFP+ Twinax Cable
Figure 11 – 10GbE SFP+ Optical Cable
Figure 12 – 10GbE SR Optical GBIC
Appendix A: NexGen N5 TCP/UDP Port Numbers
IP Protocol | Port(s) | Name | Description
TCP | 22 | SSH | Secure Shell access for SAN support only; not meant for normal day-to-day operations.
TCP | 80 | HTTP | Permits intermediate network elements to improve or enable communications between clients and servers.
UDP | 123 | NTP | Network Time Protocol, used for time synchronization.
UDP | 161 | SNMP | Simple Network Management Protocol.
TCP/UDP | 162 | SNMPTRAP | Simple Network Management Protocol Trap.
TCP | 443 | HTTPS | Hypertext Transfer Protocol over SSL/TLS.
TCP | 860 | iSCSI | iSCSI system port.
TCP | 3260 | iSCSI target | iSCSI port.
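
As an illustrative use of the table above, this sketch checks whether the TCP ports used for normal SAN operations are reachable from the machine running it. The target address is a placeholder (use a management IP for ports 22, 80, and 443, and a data port IP for the iSCSI ports 860 and 3260); UDP services such as NTP and SNMP cannot be verified with a simple connect test.

    import socket

    # Placeholder target; 192.168.100.100 is the factory-default static
    # management IP on Storage Processor A.
    TARGET_IP = "192.168.100.100"

    TCP_PORTS = {22: "SSH", 80: "HTTP", 443: "HTTPS", 860: "iSCSI", 3260: "iSCSI target"}

    for port, name in TCP_PORTS.items():
        try:
            with socket.create_connection((TARGET_IP, port), timeout=2):
                print(f"{port:>5}  {name}: open")
        except OSError as exc:
            print(f"{port:>5}  {name}: unreachable ({exc})")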