IBM z13 Hardware Innovation
– Technical Overview –
3/18/2015
Michael Großmann
[email protected]
Trademarks, Thanks and Contact Information
• This presentation contains trademarked IBM products and technologies. Refer to the following Web site:
http://www.ibm.com/legal/copytrade.shtml
• Thanks to
• Parwez Hamid, IBM Poughkeepsie
• Ewerson Palacio, IBM Brazil
• Frank Packheiser, IBM Germany
Michael Großmann
Senior Technical Sales Professional
IBM Sales & Distribution, STG Sales
STG Technical Sales for IBM z Systems
Phone: +49-171-5601157
Mail: [email protected]
© 2015 IBM Corporation
2
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
3
Introducing the IBM z13
The mainframe optimized for the digital era
IBM z13™ (z13)
Machine Type: 2964
Models:
N30, N63, N96, NC9, NE1
Up to 10% single thread capacity improvement over zEC12¹
Up to 40% total capacity improvement over zEC12¹
Up to 10 TB memory – 3X more available memory to help z/OS® or Linux® workloads
2 SODs – zKVM* and GDPS® virtual appliance for Linux on IBM z Systems™* open the door for more Linux
Up to 141 configurable cores – CP, zIIP, IFL, ICF, SAP
• Performance, scale, intelligent I/O and security
enhancements to support transaction growth in
the mobile world
• More memory, new cache design, improved I/O
bandwidth and compression help to serve up
more data for analytics
• Enterprise grade Linux solution, open standards,
enhanced sharing and focus on business
continuity to support cloud
Upgradeable from IBM zEnterprise® 196
(z196) and
IBM zEnterprise EC12 (zEC12)
¹ Based on preliminary internal measurements and projections. Official performance data will be available upon announce and can be obtained online at LSPR (Large Systems
Performance Reference) website at: https://www-304.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprindex?OpenDocument . Actual performance results may vary by
customer based on individual workload, configuration and software levels
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
© 2015 IBM Corporation
4
z13 Radiator-based Air Cooled – Front View (Model NC9 or NE1)

Front-view callouts:
• Overhead power cables (option)
• 2 x 1U Support Elements
• PCIe I/O drawer number 5
• Internal batteries (optional)
• System Control Hubs (used to be BPHs)
• Power supplies
• Displays and keyboards for the Support Elements
• PCIe I/O drawers numbers 1 to 4 (Note: for an upgraded system, drawer slots 1 and 2 are used for the I/O drawer)
• CPC drawers, PCIe fanouts, cooling water manifold and pipes, PCIe I/O interconnect cables, FSPs and Ethernet cables
• N+2 radiator pumps

Notes:
• CPC drawer plugging numbers are on the left and logical numbers on the right
• The overhead I/O feature is a co-req for the overhead power option

© 2015 IBM Corporation
6
z13 Functions and Features (GA Driver Level 22)

System, Processor, Memory:
– Five hardware models
– Eight-core 22nm PU SCM
– Up to 141 processors configurable as CPs, zIIPs, IFLs, ICFs, or optional SAPs
– Increased uniprocessor capacity
– Up to 30 sub-capacity CPs at capacity settings 4, 5, or 6
– CPC drawers and backplane oscillator
– SMT (for IFLs and zIIPs only) and SIMD
– Enhanced processor/cache design with bigger cache sizes
– Up to 10 TB of Redundant Array of Independent Memory (RAIM)
– CPC drawer/memory affinity
– LPARs increased from 60 to 85

I/O Subsystem, Parallel Sysplex, STP, Security:
– New PCIe Gen3 I/O fanouts with 16 GBps buses
– LCSSs increased from 4 to 6
– 4th subchannel set per LCSS
– Increased (24K to 32K) I/O devices (subchannels) per channel for all z13 FICON features
– FICON enhancements
– SR-IOV support for RoCE
– New Integrated Coupling Adapter (PCIe-O SR) for coupling links
– Support for up to 256 coupling CHPIDs per CPC
– CFCC Level 20
– Crypto Express5S and cryptographic enhancements with support for 85 domains
– STP enhancements

RAS, Other Infrastructure Enhancements:
– IBM zAware for Linux on z Systems (June 23, 2015)
– System Control Hub (SCH) replaces the BPH
– New N+2 'radiator' design for air-cooled systems
– Rack-mounted Support Elements in the CPC
– Key locks for doors
– Rack-mounted HMCs for a customer-supplied rack
– Support for ASHRAE Class A2 datacenters
– TKE 8.0 LICC
© 2015 IBM Corporation
7
IBM z13 and zBX Model 004
IBM z13 (2964)
Announce – 01/15
5 models – NE1, NC9, N96, N63, N30
– Up to 141 customer configurable engines
Sub-capacity Offerings for up to 30 CPs
PU (Engine) Characterization
– CP, IFL, ICF, zIIP, SAP, IFP (No zAAPs)
SIMD instructions, SMT for IFL and zIIP
On Demand Capabilities
– CoD: CIU, CBU, On/Off CoD, CPE
Memory – up to 10 TB
– Up to 10 TB per LPAR (if no FICON Express8)
– 96 GB Fixed HSA
Channels
– PCIe Gen3 16 GBps channel buses
– Six LCSSs, up to 85 LPARs
– 4 subchannel sets per LCSS
– FICON Express16S or 8S (8 carry forward)
– OSA-Express5S (4S carry forward)
– HiperSockets – up to 32
– Flash Express
– zEnterprise Data Compression
– RDMA over CE (RoCE) with SR-IOV support
– Crypto Express5S (4S carry forward)
Parallel Sysplex clustering, PCIe Coupling, and
InfiniBand Coupling
IBM zAware: z/OS and Linux on z Systems
Operating Systems
– z/OS, z/VM, z/VSE, z/TPF, Linux on z Systems
IBM zBX Model 4 (2458-004)
Announce – 01/15
Upgrade ONLY – a stand-alone ensemble node converted from an installed zBX Model 2 or 3
Does not require an 'owning' CPC
Management – Unified Resource Manager
zBX Racks (up to 4) with:
– Dual 1U Support Elements, Dual INMN and IEDN TOR
switches in the 1st rack
– HMC LAN attached (no CPC BPH attachment)
– 2 or 4 PDUs per rack
Up to 8 BladeCenter H Chassis
– Space for 14 blades each
– 10 GbE and 8 Gbps FC connectivity
– Advanced Management Modules
– Redundant connectivity, power, and cooling
Up to 112 single wide IBM blades
– IBM BladeCenter PS701 Express
– IBM BladeCenter HX5 7873
– IBM WebSphere DataPower Integration Appliance XI50
for zEnterprise (M/T 2462-4BX)
– IBM WebSphere DataPower® Integration Appliance
XI52 Virtual Edition on System x
Operating Systems
– AIX 5.3 and higher
– Linux on System x
– Microsoft Windows on System x
Hypervisors
– KVM Hypervisor on System x
– PowerVM Enterprise Edition
© 2015 IBM Corporation
8
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
9
IBM z13 Processor Drawer (Top View)
Two physical nodes, left and right
Each logical node:
– Three PU chips
– One SC chip (480 MB L4 cache)
– Three Memory Controllers:
One per CP Chip
– Five DDR3 DIMM slots per Memory
Controller: 15 total per logical node
Each drawer:
– Six PU Chips: 39 active PUs
(42 in z13 Model NE1)
– Two SC Chips (960 MB L4 cache)
– Populated DIMM slots: 20 or 25 DIMMs
to support up to 2,368 GB of
addressable memory (3,200 GB RAIM)
– Water cooling for PU chips
– Two Flexible Support Processors
– Ten fanout slots for PCIe I/O drawer
fanouts or PCIe coupling fanouts
– Four fanout slots for IFB I/O drawer
fanouts or PSIFB coupling link fanouts
© 2015 IBM Corporation
10
IBM z13 8-Core Processor Chip Detail
Up to eight active cores (PUs) per chip
– 5.0 GHz (vs. 5.5 GHz on zEC12)
– L1 cache/ core
• 96 KB I-cache
• 128 KB D-cache
– L2 cache/ core
• 2 MB + 2 MB eDRAM split private L2 cache
Single Instruction/Multiple Data (SIMD)
Single thread or 2-way simultaneous
multithreaded (SMT) operation
Improved instruction execution bandwidth:
– Greatly improved branch prediction and
instruction fetch to support SMT
– Instruction decode, dispatch, complete increased
to 6 instructions per cycle*
– Issue up to 10 instructions per cycle*
– Integer and floating point execution units
On chip 64 MB eDRAM L3 Cache
– Shared by all cores
14S0 22nm SOI technology
– 17 layers of metal
– 3.99 billion transistors
– 13.7 miles of copper wire
Chip area
– 678.8 mm²
– 28.4 x 23.9 mm
– 17,773 power pins
– 1,603 signal I/Os
I/O buses
– One GX++ I/O bus
– Two PCIe I/O buses
Memory Controller (MCU)
– Interface to controller on memory DIMMs
– Supports RAIM design
* zEC12 decodes 3 instructions and executes 7
© 2015 IBM Corporation
11
z Systems Cache Topology – zEC12 vs. z13

zEC12 (per book); 4 L4 caches per system:
– 6 L3s and 36 L1/L2s per book; 384 MB shared eDRAM L4 per book
– L1: 64 KB I + 96 KB D per core; 6-way DL1, 4-way IL1; 256 B line size
– L2: private 1 MB inclusive of DL1 and private 1 MB inclusive of IL1
– L3: 48 MB shared eDRAM per PU chip, inclusive of L2s; 12-way set associative; 256 B cache line size
– L4: 384 MB shared eDRAM, inclusive; 24-way set associative; 256 B cache line size

z13 (per node, half of a CPC drawer); 8 L4 caches per system:
– 3 L3s and 24 L1/L2s per node; 480 MB shared eDRAM L4 plus a 224 MB NIC (non-data inclusive coherent) directory per node, with an intra-node interface
– L1: 96 KB I + 128 KB D per core; 8-way DL1, 6-way IL1; 256 B line size
– L2: private 2 MB inclusive of DL1 and private 2 MB inclusive of IL1
– L3: 64 MB shared eDRAM per PU chip, inclusive of L2s; 16-way set associative; 256 B cache line size
– L4: 480 MB + 224 MB non-data inclusive coherent directory; 30-way set associative; 256 B cache line size
z13 Continues the CMOS Mainframe Heritage

Year – System – Technology – Cores** – Clock – PCI*:
– 2000 – z900 – 189 nm SOI – 16 cores – 770 MHz – Full 64-bit z/Architecture
– 2003 – z990 – 130 nm SOI – 32 cores – 1.2 GHz – Superscalar, modular SMP
– 2005 – z9 EC – 90 nm SOI – 54 cores – 1.7 GHz – System level scaling
– 2008 – z10 EC – 65 nm SOI – 64 cores – 4.4 GHz (+159% GHz) – 902* – High-frequency core, 3-level cache
– 2010 – z196 – 45 nm SOI – 80 cores – 5.2 GHz (+18% GHz) – 1202* (+33%) – OOO core, eDRAM cache, RAIM memory, zBX integration
– 2012 – zEC12 – 32 nm SOI – 101 cores – 5.5 GHz (+6% GHz) – 1514* (+26%) – OOO and eDRAM cache improvements, PCIe Flash, architecture extensions for scaling
– 2015 – z13 – 22 nm SOI – 141 cores – 5.0 GHz (-9% GHz) – 1695* (+12%) – SMT & SIMD, up to 10 TB of memory

* MIPS tables are NOT adequate for making comparisons of z Systems processors. Additional capacity planning is required.
** Number of PU cores for customer use
© 2015 IBM Corporation
15
CPU Clock Speed versus Computer Performance Facts
Why is the overall CPU frequency approach changing?
• Consistent frequency growth in the past decade, from hundreds of megahertz to gigahertz
• CPU frequency growth has slowed or even reversed in the past couple of years; the 10 GHz mark is as unreachable now as it was five years ago
Designing chips for better performance
• Limits are imposed by physics, technology, and economics, which control the rate of improvement in different dimensions
• Different processor architectures have different issues with overclocking
Physical limitations
• The speed of light limits how fast signals travel from one end of a chip to the other
• Power and heat dissipation; cooling
• How many memory elements (caches) can be within a given latency from the CPU
Physical limitations force the designers to make trade-offs
• "Shrinking" a processor chip – pro: faster due to the shorter distances; con: reduced area for heat dissipation, and power dissipation increases as the chip speeds up
• Raising processor voltage makes transistors switch faster – pro: frequency could then be increased; con: current also increases, creating more heat. Sounds easy... but it causes serious problems with heat
Emerging technologies allow frequency variation according to processing needs
© 2015 IBM Corporation
16
IBM z13 z/Architecture Extensions
Two-way simultaneous multithreaded (SMT) operation
– Up to two active execution threads per core can dynamically share the caches, TLBs and execution resources of
each IFL and zIIP core. SMT is designed to improve both core capacity and single thread performance
significantly.
– PR/SM dispatches online logical processors to physical cores; an operating system with SMT support can be configured to dispatch work to a thread on an IFL or zIIP core in single-thread or SMT mode so that HiperDispatch cache optimization is considered (zero, one, or two threads can be active in SMT mode). Enhanced hardware monitoring support will measure thread usage and capacity.
Core micro-architecture radically altered to increase parallelism
– New branch prediction and instruction fetch front end to support SMT and to improve branch prediction
throughput.
– Wider instruction decode, dispatch and completion bandwidth:
Increased to six instructions per cycle compared to three on zEC12
– Decodes 6, executes 10 (zEC12 decodes 3, executes 7)
– Larger instruction issue bandwidth: Increased to up to 10 instructions issued per cycle (2 branch, 4 FXU, 2 LSU, 2
BFU/DFU/SIMD) compared to 7 on zEC12
– Greater integer execution bandwidth: Four FXU execution units
– Greater floating point execution bandwidth: Two BFUs and two DFUs; improved fixed point and floating point
divide
Single Instruction Multiple Data (SIMD) instruction set and execution: Business
Analytics Vector Processing
– Data types: Integer: byte to quad-word; String: 8, 16, 32 bit; binary floating point
– New instructions (139) include string operations, vector integer and vector floating point operations: two 64-bit,
four 32-bit, eight 16-bit and sixteen 8-bit operations.
– Floating Point Instructions operate on newly architected vector registers (32 new 128-bit registers). Existing FPRs
overlay these vector registers.
© 2015 IBM Corporation
18
Simultaneous Multi Threading (SMT)
• Simultaneous multithreading allows
instructions from more than one thread to
execute in any given pipeline stage at a time
• SMT helps address memory latency, resulting
in overall throughput gains
• It can increase processing efficiency, and
throughput
• Currently applies to IFLs (Linux) and zIIPs, not to CPs at this time
• The number of concurrent threads is limited to two and can be turned on or off by an operator command and also set up through parmlib
• Note: SMT is designed to deliver better overall throughput for many workloads. Performance in some cases may be superior using single threading

Which approach is designed for the highest volume of traffic? Which road is faster? (Illustration: a two-lane road at 50 mph vs. a one-lane road at 80 mph.)
*Illustrative numbers only
© 2015 IBM Corporation
19
SMT – Throughput Improvement

Two threads, one core: like a two-lane road at 50 mph beside a one-lane road at 80 mph. Why only 50? Because the two threads share the core's resources, each runs slower than a single thread would.

Which approach is designed for the highest volume** of traffic? Which road is faster?

**Two lanes at 50 carry 25% more volume than one lane at 80 (2 × 50 = 100 vs. 80), if traffic density per lane is equal
© 2015 IBM Corporation
20
SIMD (Single Instruction Multiple Data) Processing

Increased parallelism to enable analytics processing
Smaller amount of code helps improve execution efficiency
Process elements in parallel enabling more iterations
Supports analytics, compression, cryptography, video/imaging processing

Scalar – SINGLE INSTRUCTION, SINGLE DATA: the "sum and store" instruction is performed for every data element in turn (A1 + B1 = C1, then A2 + B2 = C2, then A3 + B3 = C3).

SIMD – SINGLE INSTRUCTION, MULTIPLE DATA: one "sum and store" instruction performs the operation on every element at once ((A1, A2, A3) + (B1, B2, B3) = (C1, C2, C3)).

Value:
– Enable new applications
– Offload CPU
– Simplify coding
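To make the picture concrete, here is a minimal C sketch contrasting the two models, using the portable GCC/Clang vector extensions. It illustrates the programming model only, not IBM library code; on z13, a vectorizing compiler would turn the vector expression into a single 128-bit vector add over four 32-bit lanes.

```c
/* Scalar vs. SIMD "sum and store": illustrative only.
 * Uses GCC/Clang vector extensions; on z13 a vectorizing
 * compiler would emit one 128-bit vector add (four 32-bit
 * lanes) for the vector expression below. */
#include <stdio.h>

typedef int v4si __attribute__((vector_size(16))); /* four 32-bit ints */

int main(void)
{
    int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];

    /* Scalar: the instruction is performed for every data element. */
    for (int i = 0; i < 4; i++)
        c[i] = a[i] + b[i];

    /* SIMD: one instruction operates on every element at once. */
    v4si va = {1, 2, 3, 4}, vb = {10, 20, 30, 40};
    v4si vc = va + vb;                 /* single vector add */

    for (int i = 0; i < 4; i++)
        printf("%d %d\n", c[i], vc[i]);
    return 0;
}
```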
© 2015 IBM Corporation
21
Single Instruction Multiple Data (SIMD) Vector Processing

Single Instruction Multiple Data (SIMD)
– A type of data-parallel computing that can accelerate code with integer, string, character, and floating point data types

Provide optimized SIMD math & linear algebra libraries that will minimize the effort on the part of middleware/application developers

Provide compiler built-in functions for SIMD that software applications can leverage as needed (e.g. for use of string instructions)

OS/Hypervisor support:
− z/OS: 2.1 SPE available at GA
− Linux: IBM is working with its Linux distribution partners to support new functions/features
− No z/VM support for SIMD
− Compiler exploitation:
• IBM Java => 1Q2015
• XL C/C++ on z/OS => 1Q2015
• XL C/C++ on Linux on z => 2Q2015
• Enterprise COBOL => 1Q2015
• Enterprise PL/I => 1Q2015
© 2015 IBM Corporation
22
SIMD – Exploitation

We introduced a set of new assembler instructions which directly use the vector facility (this is not the full list):
• VL – Vector Load
• VLL – Vector Load with Length
• VSTL – Vector Store with Length
• VCEQ – Vector Compare Equal
• VFAE – Vector Find Any Element Equal
• VFEE – Vector Find Element Equal
Using these instructions promises the maximum improvement from SIMD. ILOG CPLEX, COBOL INSPECT ... TALLYING, Java 8, and self-written assembler programs are examples of this.

We "millicoded" some instructions to make use of SIMD "under the covers" (not a complete list):
• Compare operations: CLCL, CLCLE, CLCLU
• Translate operations: TRT, TRE, TRTR, TRTT, TRTE, TRTO, TROO, TROT
This helps to exploit SIMD even if the programs are not recompiled.

All floating point operations will benefit – without change – from the new VFU design (all units are doubled now compared to zEC12).

MASS – Mathematical Acceleration Sub-System
ATLAS – Automatically Tuned Linear Algebra Software
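To see what an instruction such as VFEE (Vector Find Element Equal) accelerates, consider the classic byte-scan loop below. This is a hypothetical C analogue for illustration only: the scalar version tests one byte per iteration, while a SIMD version (hand-written assembler, or an optimized runtime routine such as memchr) can test 16 bytes per vector compare.

```c
/* Byte scan: find the first occurrence of a byte in a buffer.
 * Illustrative only: the scalar loop tests one byte at a time;
 * on z13, vector instructions such as VFEE let a single compare
 * examine 16 bytes, which is how optimized routines like memchr
 * and the millicoded translate/compare operations gain speed. */
#include <stdio.h>
#include <string.h>

static const char *scan_scalar(const char *buf, size_t len, char needle)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] == needle)
            return buf + i;          /* first match */
    return NULL;
}

int main(void)
{
    const char text[] = "SIMD exploitation without recompilation";
    const char *s = scan_scalar(text, sizeof text - 1, 'x');
    const char *v = memchr(text, 'x', sizeof text - 1); /* SIMD under the covers */
    printf("scalar: %s\nvector: %s\n", s, v);
    return 0;
}
```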
23
© 2015 IBM Corporation
z13 Processor Unit Allocation/Usage
Model – Drawers/PUs – CPs – IFLs – uIFLs – zIIPs – ICFs – Std SAPs – Optional SAPs – Std. Spares – IFP:
– N30 – 1/39 – 0-30 – 0-30 – 0-29 – 0-20 – 0-30 – 6 – 0-4 – 2 – 1
– N63 – 2/78 – 0-63 – 0-63 – 0-62 – 0-42 – 0-63 – 12 – 0-8 – 2 – 1
– N96 – 3/117 – 0-96 – 0-96 – 0-95 – 0-64 – 0-96 – 18 – 0-12 – 2 – 1
– NC9 – 4/156 – 0-129 – 0-128 – 0-127 – 0-86 – 0-129 – 24 – 0-16 – 2 – 1
– NE1 – 4/168 – 0-141 – 0-141 – 0-140 – 0-94 – 0-141 – 24 – 0-16 – 2 – 1

z13 Models N30 to NC9 use drawers with 39 cores. The Model NE1 has 4 drawers with 42 cores.
The maximum number of logical ICFs or logical CPs supported in a CF logical partition is 16.
The integrated firmware processor (IFP) is used for PCIe I/O support functions.
Concurrent Drawer Add is available to upgrade in steps from model N30 to model NC9.

1. At least one CP, IFL, or ICF must be purchased in every machine
2. Two zIIPs may be purchased for each CP purchased if PUs are available. This remains true for sub-capacity CPs and for "banked" CPs.
3. On an upgrade from z196 or zEC12, installed zAAPs are converted to zIIPs by default. (Option: convert to another engine type)
4. "uIFL" stands for Unassigned IFL
5. The IFP is conceptually an additional, special purpose SAP
© 2015 IBM Corporation
25
z13 5-Channel RAIM Memory Controller Overview
(RAIM = Redundant Array of Independent Memory)

Five memory channels (Ch0 – Ch4) per memory controller (MCU); the fifth channel carries the RAIM check data. Each channel connects over CRC-protected data and clock lanes to a controller ASIC on its DIMM. On z13, each memory channel supports only one DIMM.

Layers of memory recovery:
– DIMM DRAM: ECC – powerful 90B/64B Reed-Solomon code
– DRAM failure: marking technology, no half-sparing needed; 2 DRAMs can be marked, call for replacement on the third DRAM
– Lane failure: CRC with retry; data – lane sparing; CLK – RAIM with lane sparing
– DIMM failure (discrete components, VTT regulator): CRC with retry; data – lane sparing; CLK – RAIM with lane sparing
– DIMM controller ASIC failure: RAIM recovery
– Channel failure: RAIM recovery
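The channel-level protection idea can be shown with a deliberately simplified sketch: spread data over four channels and keep check information on a fifth, so any one failed channel can be reconstructed from the survivors. The real z13 design uses a 90B/64B Reed-Solomon code with DRAM marking and lane sparing, not simple XOR parity; this is only a conceptual analogue.

```c
/* Simplified RAIM illustration: 4 data channels + 1 check channel.
 * XOR parity stands in for the actual Reed-Solomon code to show the
 * principle that a whole failed channel can be rebuilt on the fly. */
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 5 /* channels 0-3 carry data, channel 4 carries parity */

static void write_word(uint8_t ch[CHANNELS], const uint8_t data[4])
{
    ch[4] = 0;
    for (int i = 0; i < 4; i++) {
        ch[i] = data[i];
        ch[4] ^= data[i];        /* parity accumulates across channels */
    }
}

static uint8_t recover(const uint8_t ch[CHANNELS], int failed)
{
    uint8_t v = 0;
    for (int i = 0; i < CHANNELS; i++)
        if (i != failed)
            v ^= ch[i];          /* surviving channels rebuild the lost one */
    return v;
}

int main(void)
{
    uint8_t ch[CHANNELS], data[4] = {0xDE, 0xAD, 0xBE, 0xEF};
    write_word(ch, data);
    printf("channel 2 failed, recovered 0x%02X (expected 0x%02X)\n",
           recover(ch, 2), data[2]);
    return 0;
}
```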
© 2015 IBM Corporation
26
z13 Purchased Memory Offering Ranges

Model – Standard Memory (GB) – Flexible Memory (GB):
– N30 – 64 - 2464 – NA
– N63 – 64 - 5024 – 64 - 2464
– N96 – 64 - 7584 – 64 - 5024
– NC9 – 64 - 10144 – 64 - 7584
– NE1 – 64 - 10144 – 64 - 7584
Purchased Memory - Memory available for
assignment to LPARs
Hardware System Area – Standard 96 GB of
addressable memory for system use outside
customer memory
Standard Memory - Provides minimum physical
memory required to hold customer purchase
memory plus 96 GB HSA
Flexible Memory - Provides the additional physical memory needed to support activation of the base customer memory and HSA on a multiple CPC drawer z13 with one drawer out of service.
Plan Ahead Memory – Provides additional
physical memory needed for a concurrent
upgrade (LIC CC change only) to a preplanned
target customer memory
© 2015 IBM Corporation
27
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
29
z13 “New Build” I/O and MES Features Supported
New Build Features
Features – PCIe I/O drawer
– FICON Express16S (SX and LX, 2 SFPs, 2 CHPIDs)
– FICON Express8S (SX and LX, 2 SFPs, 2 CHPIDs)
– OSA-Express5S
• 10 GbE LR and SR (1 SFP, 1 CHPID)
• GbE SX, LX, and 1000BASE-T (2 SFPs, 1 CHPID)
– 10 GbE RoCE Express (2 supported SR ports)
– zEDC Express
– Crypto Express5S
– Flash Express
PCIe I/O drawer
32 I/O slots
Integrated Coupling Adapter (ICA SR) Fanout
– PCIe-O SR: two 8 GBps PCIe Gen3 coupling links
InfiniBand Coupling Feature Fanouts
– HCA3-O two 12x 6GBps InfiniBand DDR Coupling Links
– HCA3-O LR four 1x 5Gbps InfiniBand DDR or SDR Coupling Links
© 2015 IBM Corporation
32
z13 "Carry Forward" I/O Features Supported

Carry Forward Features

Features – PCIe I/O drawer (32 I/O slots)
– FICON Express8S (SX and LX, 2 SFPs, 2 CHPIDs)
– OSA-Express5S (all)
– OSA-Express4S (all)
– 10 GbE RoCE Express (both ports supported on z13)
– zEDC Express
– Flash Express

Features – I/O drawer, 8 I/O slots (no MES adds)
– FICON Express8 (SX and LX, 4 SFPs, 4 CHPIDs)
– Not Supported: ESCON, FICON Express4, OSA-Express3,
ISC-3, and Crypto Express3
InfiniBand Coupling Features (Fanouts)
– HCA3-O two 12x 6GBps InfiniBand DDR Coupling Links
– HCA3-O LR four 1x 5Gbps InfiniBand DDR or SDR Coupling Links
– NOT Supported: HCA2-O 12x, HCA2-O LR 1x InfiniBand Coupling Links
© 2015 IBM Corporation
33
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity – for Storage
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
34
FICON Express16S – SX and 10KM LX
FC 0418 – 10KM LX, FC 0419 – SX

For FICON, zHPF, and FCP environments
– CHPID types: FC and FCP
– 2 PCHIDs/CHPIDs

Auto-negotiates to 4, 8, or 16 Gbps
– 2 Gbps connectivity NOT supported
– FICON Express8S will be available to order for 2 Gbps connectivity

Increased I/O devices (subchannels) per channel for all FICON features:
– TYPE=FC: increased from 24K to 32K to support more base and alias devices

Increased bandwidth compared to FICON Express8S

10KM LX – 9 micron single mode fiber
– Unrepeated distance: 10 kilometers (6.2 miles)
– Receiving device must also be LX

SX – 50 or 62.5 micron multimode fiber (OM2/OM3)
– Distance variable with link data rate and fiber type
– Receiving device must also be SX

2 channels of LX or SX (no mix)

Small form factor pluggable (SFP) optics
– Concurrent repair/replace action for each SFP

(Card layout: two SFP+ optics, each behind an HBA ASIC and an IBM ASIC, attached through a PCIe switch; flash on card.)
© 2015 IBM Corporation
zHPF and FICON Performance* – z13

I/O driver benchmark, I/Os per second, 4k block size, channel 100% utilized:
– FICON: ESCON 1,200; FICON Express4 and FICON Express2 (z10) 14,000; FICON Express8 (8 Gbps, z10/z196) 20,000; FICON Express8S (zEC12/zBC12/z196/z114) 23,000; FICON Express16S (z13) 23,000
– zHPF: FICON Express4 (4 Gbps, z10) 31,000; FICON Express8 (8 Gbps, z10/z196) 52,000; FICON Express8S (zEC12/zBC12/z196/z114) 92,000; FICON Express16S (z13) 93,000

I/O driver benchmark, MegaBytes per second, full-duplex, large sequential read/write mix:
– FICON: roughly 620 MBps from FICON Express4/Express2 (z10) through FICON Express8/8S to FICON Express16S (z13)
– zHPF: FICON Express4 (4 Gbps, z10) 520; FICON Express8 (8 Gbps, z196/z10) 770; FICON Express8S (8 Gbps, zEC12/zBC12/z196/z114) 1,600; FICON Express16S (16 Gbps, z13) 2,600 – a 63% increase

*This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.
© 2015 IBM Corporation
36
FCP Performance* for z13

I/O driver benchmark, I/Os per second, read/writes/mix, 4k block size, channel 100% utilized:
– FICON Express4 (4 Gbps, z10) 60,000
– FICON Express8 (8 Gbps, z196/z10) 84,000
– FICON Express8S (8 Gbps, zEC12/zBC12/z196/z114) 92,000
– FICON Express16S (16 Gbps, z13) 110,000 – a 20% increase

I/O driver benchmark, MegaBytes per second (full-duplex), large sequential read/write mix:
– FICON Express4 (4 Gbps, z10) 520
– FICON Express8 (8 Gbps, z196/z10) 770
– FICON Express8S (8 Gbps, zEC12/zBC12/z196/z114) 1,600
– FICON Express16S (16 Gbps, z13) 2,560 – a 60% increase

*This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.
© 2015 IBM Corporation
37
z13 I/O Subsystem Enhancements with IBM Storage
GOALS
• Performance
– Measurable latency reduction for DB2 transactions
– Substantial throughput improvement for database logs
including DB2 and IMS
• Batch Window Reduction
– Add client value differentiation from prior generations with
higher I/O throughput with the same HW footprint, cabling
infrastructure and architectural addressing limits
• Scale
– More devices per channel, larger devices
– More logical channel subsystems and LPARs
• Resilience
– Lead in Software Defined Environment (SDE) by
extending WLM policy based management for I/O into the
SAN fabric across all z Systems servers, increasing scale
and enhanced resilience for mainframe data
– Reduce impact to production work when I/O components
fail
– Reduction in false repair actions
– Improved resilience by allowing automatic re-routing of
channel traffic through SAN after switch failures
– Simplify migration to new machines for FCP users
Supporting Technologies
Managed File Transfer Acceleration
zHyperWrite for DB2
New Tooling for I/O Resilience
FICON Express16S
Forward Error Correction Codes
FICON Dynamic Routing
Fabric I/O Priority
zHPF Extended Distance II
FICON IU Pacing Enhancement
SPoF Elimination w/Storage
32K UA/Channel, 6 LCSS, 4 SS
WWPN Preservation II
© 2015 IBM Corporation
38
z13 Storage Connectivity Options

Description – F/C – Ports – Available – Distance:
– FICON Express16S 10KM LX – 0418 – 2 – New – 10 km
– FICON Express8S 10KM LX – 0409 – 2 – New and carry forward – 10 km
– FICON Express8 10KM LX – 3325 – 4 – Carry forward only – 10 km
– FICON Express16S SX – 0419 – 2 – New – see table below
– FICON Express8S SX – 0410 – 2 – New and carry forward – see table below
– FICON Express8 SX – 3326 – 4 – Carry forward only – see table below

Unrepeated distance for SX links, by link rate (Gbps) and multimode fiber type:
– 16 Gbps: OM1 15m, OM2 35m, OM3 100m, OM4 125m
– 8 Gbps: OM1 21m, OM2 50m, OM3 150m, OM4 190m
– 4 Gbps: OM1 70m, OM2 150m, OM3 380m, OM4 400m
– 2 Gbps: OM1 150m, OM2 200m, OM3 500m, OM4 na

The maximum number of FICON features varies with the mix of drawer types and the model of the system.
All use LC Duplex connectors.
© 2015 IBM Corporation
39
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity – for Networking
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
40
OSA-Express5S 1000BASE-T Ethernet Feature
FC 0417

PCIe form factor feature supported by the PCIe I/O drawer
– One two-port CHPID per feature
– Half the density of the OSA-Express3 version

Small form factor pluggable (SFP+) transceivers
– Concurrent repair/replace action for each SFP

Exclusively supports auto-negotiation to 100 or 1000 Mbps, full duplex only, on Category 5 or better copper
– Operates at "line speed"
– Connector = RJ-45

CHPID TYPE support (Mode – TYPE – Description):
– OSA-ICC – OSC – TN3270E, non-SNA DFT, OS system console operations
– QDIO – OSD – TCP/IP traffic when Layer 3, protocol-independent when Layer 2
– Non-QDIO – OSE – TCP/IP and/or SNA/APPN/HPR traffic
– Unified Resource Manager – OSM – Connectivity to the intranode management network (INMN)
– OSA for NCP (LP-to-LP) – OSN – NCPs running under IBM Communication Controller for Linux (CCL)

Note: OSA-Express5S features are designed to have the same performance and to require the same software support as equivalent OSA-Express4S features.
© 2015 IBM Corporation
41
OSA-Express5S Fiber Optic Features – PCIe Drawer

10 Gigabit Ethernet (10 GbE) – FC 0415 (10 GbE LR), FC 0416 (10 GbE SR)
– CHPID types: OSD, OSX
– Single mode (LR) or multimode (SR) fiber
– One port of LR or one port of SR (1 PCHID/CHPID)
– Small form factor pluggable (SFP+) optics; concurrent repair/replace action for each SFP
– LC duplex

Gigabit Ethernet (1 GbE) – FC 0413 (GbE LX), FC 0414 (GbE SX)
– CHPID types: OSD (OSN not supported)
– Single mode (LX) or multimode (SX) fiber
– Two ports of LX or two ports of SX (1 PCHID/CHPID)
– Small form factor pluggable (SFP+) optics; concurrent repair/replace action for each SFP
– LC duplex

Note: OSA-Express5S features are designed to have the same performance and to require the same software support as equivalent OSA-Express4S features.
© 2015 IBM Corporation
42
Open Systems Adapter in the PCIe I/O Drawer

Description – Feature Code – Ports – Available – CHPID:
– OSA-Express5S GbE LX – 0413 – 2¹ – New and carry forward – OSD
– OSA-Express5S GbE SX – 0414 – 2¹ – New and carry forward – OSD
– OSA-Express5S 10 GbE LR – 0415 – 1 – New and carry forward – OSD, OSX
– OSA-Express5S 10 GbE SR – 0416 – 1 – New and carry forward – OSD, OSX
– OSA-Express5S 1000BASE-T – 0417 – 2¹ – New and carry forward – OSC, OSD, OSE, OSM, OSN
– OSA-Express4S GbE LX – 0404 – 2¹ – Carry forward – OSD
– OSA-Express4S GbE SX – 0405 – 2¹ – Carry forward – OSD
– OSA-Express4S 10 GbE LR – 0406 – 1 – Carry forward – OSD, OSX
– OSA-Express4S 10 GbE SR – 0407 – 1 – Carry forward – OSD, OSX
– OSA-Express4S 1000BASE-T – 0408 – 2¹ – Carry forward – OSC, OSD, OSE, OSM, OSN

¹ Two ports per CHPID
© 2015 IBM Corporation
43
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity – for Coupling Link and STP
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
46
z13 Parallel Sysplex Coupling Connectivity

z13 to z13:
– Integrated Coupling Adapter (ICA SR): 8 GBps, up to 150 m – z13 to z13 connectivity ONLY
– 12x IFB (HCA3-O to HCA3-O): 6 GBps, up to 150 m
– 1x IFB (HCA3-O LR to HCA3-O LR): 5 Gbps, 10/100 km

z13 to z196/z114 and zEC12/zBC12 (12x IFB, 12x IFB3, 1x IFB):
– 12x IFB / IFB3: z13 HCA3-O to HCA3-O or HCA2-O, 6 GBps, up to 150 m
– 1x IFB: z13 HCA3-O LR to HCA3-O LR or HCA2-O LR, 5 Gbps, 10/100 km

IC (Internal Coupling Link): only supports IC-to-IC connectivity within the same CPC

Not supported:
– HCA2-O and HCA2-O LR are NOT supported on z13
– ISC-3 is not supported on z13, even if an I/O drawer is carried forward for FICON Express8
– z10, z9 EC, z9 BC, z890 and z990 are not supported in the same Parallel Sysplex or STP CTN with z13

Note: The link data rates in GBps or Gbps do not represent the performance of the links. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload.
© 2015 IBM Corporation
47
© 2015 IBM Corporation
48
Integrated Coupling Adapter (ICA SR)
Integrated Coupling Adapter SR (ICA SR) fanout in the CPC drawer
• Recommended for short distance coupling, z13 to z13; not available on older servers
• No performance degradation compared to coupling over InfiniBand with the 12x IFB3 protocol

Hardware details
– Short reach adapter, distance up to 150 m
– Up to 32 ports maximum
– IOCP channel type = CS5
– Feature code 0172, 2 ports per adapter
– Up to 4 CHPIDs per port, 8 per feature; 7 buffers (i.e. 7 subchannels) per CHPID
– ICA requires new cabling for the single MTP connector; differs from the 12x InfiniBand split transmit/receive connector

Requirements
– CF: z13; z/OS: z13
– z/OS V2.1, V1.13, or V1.12 with PTFs for APARs OA44440 and OA44287
CPC Drawer Front View – Coupling Links
• PCIe Gen3 fanouts: ICA SR coupling link (Integrated Coupling Adapter)
• HCA2 and HCA3 fanouts: HCA2-C (I/O drawer) or HCA3 (1x or 12x PSIFB links)
© 2015 IBM Corporation
49
ICA SR Advantages

Greater connectivity
– z13 provides more ICA SR coupling fanouts per CPC drawer when compared to 12x PSIFB coupling on either z196, zEC12 or z13
– A single z13 CPC drawer supports up to 20 ICA SR ports vs. 16 12x ports on z196/zEC12 and 8 12x ports on z13

Alleviate PSIFB-constrained configurations
– Utilizing ICA SR frees HCA fanout slots for essential PSIFB coupling links during migration
– For z13 to z13 connectivity, using ICA SR in place of PSIFB over InfiniBand may enable clients to remain in the same CPC footprint as their z196 or zEC12 enterprises

PSIFB and ICA SR coupling link maximums (ports, by number of books/drawers 1 / 2 / 3 / 4):
– ICA SR (2 ports/fanout, short distance)¹ ³: z196/zEC12 N/A; z13 20 / 32 / 32 / 32
– 12x IFB (2 ports/fanout, short distance)² ³: z196/zEC12 16 / 32 / 32 / 32; z13 8 / 16 / 24 / 32
– 1x IFB (4 ports/fanout, long distance)² ³: z196/zEC12 32 / 64 / 64 / 64; z13 16 / 32 / 48 / 64

NOTES
1) ICA supports z13 to z13 connectivity only
2) PSIFB links contend for adapter space. Total port counts vary depending upon the mix of 1x and 12x links configured and will never exceed the single 1x maximum of 64 ports total.
3) PSIFB and ICA SR link types do not contend with each other for adapter space; a configuration can have a maximum of 64 PSIFB 1x ports and 32 ICA SR ports, for 96 ports total
© 2015 IBM Corporation
50
Coupling Links on z13

Type – Speed – Distance – Fanout:
– ICA SR – 8 GBps – 150 meters – PCIe-O SR
– 12x InfiniBand (IFB & IFB3) – 6 GBps – 150 meters – HCA3-O
– 1x InfiniBand – 5 or 2.5 Gbps – 10 km – HCA3-O LR

CHPIDs:
– ICA SR: up to 4 CHPIDs per port
– HCA3-O (12x, IFB & IFB3): up to 16 CHPIDs across 2 ports
– HCA3-O LR (1x): up to 16 CHPIDs across 4 ports*

Ports exit from the front of a CPC drawer with HCA3s or ICA SRs.

Link data rates:
– ICA SR: 8 GBps
– 12x InfiniBand: 6 GBps
– 1x InfiniBand: 5 Gbps (server to server and with DWDM), 2.5 Gbps (with DWDM)

* Performance considerations may reduce the number of CHPIDs per port
© 2015 IBM Corporation
51
z13 Parallel Sysplex Coupling Link Summary
InfiniBand Coupling Links Support (same HCA3-O adapters as used on zEC12)
– HCA3-O LR 1x, 5 Gbps long distance links – Up to 16 features (4 per drawer) = 64 ports
– Up to 4 CHPID definitions per port, 4 ports per feature
– CHPID TYPE=CIB
– HCA3-O 12x, 6 GBps (150 m) – Up to 16 features (Up to 4 per drawer) = 32 ports
– Recommend up to 4 CHPID definitions per port for IFB3 protocol, 2 ports per feature
– CHPID TYPE=CIB
ICA SR (PCIe-O SR), 2 ports per feature
– PCIe-O SR, 8 GBps (150 m) – Up to 16 features (Up to 10 per drawer) = 32 ports
– Up to 4 CHPIDs per port, 8 CHPIDs per feature
– CHPID TYPE=CS5
– Cable/point to point maximum distance options:
150 Meters – OM4 (24 fiber, 4700 MHz-km 50/125 micron fiber with MTP connectors)
100 Meters – OM3 (24 fiber, 2000 MHz-km 50/125 micron fiber with MTP connectors)
(Note: InfiniBand 12x DDR links also use 24 fiber OM3 cabling with different MPO connectors)
Internal Coupling Links
− Microcode - no external connection
− Only between LPARs same processor
© 2015 IBM Corporation
52
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
53
CPACF - CP Assist For Cryptographic Functions
Provides a set of symmetric cryptographic
functions and hashing functions for:
− Data privacy and confidentiality
− Data integrity
− Random Number generation
− Message Authentication
Enhances the encryption/decryption performance of clear-key operations for
− SSL
− VPN
− Data storing applications

Supported algorithms (Clear Key / Protected Key):
– DES, T-DES: Y / Y
– AES-128: Y / Y
– AES-192: Y / Y
– AES-256: Y / Y
– SHA-1: Y / N/A
– SHA-256: Y / N/A
– SHA-384: Y / N/A
– SHA-512: Y / N/A
– PRNG: Y / N/A
– DRNG: Y / N/A
Available on every Processor Unit defined as a
CP, IFL, zAAP and zIIP
Supported by z/OS, z/VM, z/VSE, z/TPF and
Linux on z Systems
Must be explicitly enabled, using a no-charge
enablement feature (#3863)
− SHA algorithms enabled with each server
Protected key support for additional security of
cryptographic keys
− Crypto Express4S or Crypto Express5S required in CCA mode
© 2015 IBM Corporation
54
z13 CPACF Enhancements

CP Assist for Cryptographic Function co-processor redesigned from the "ground up"

Enhanced performance over zEC12, for large blocks of data (does not include overhead for COP start/end and cache effects):
• AES: 2x throughput vs. zEC12
• TDES: 2x throughput vs. zEC12
• SHA: 3.5x throughput vs. zEC12

Exploiters of CPACF benefit from the throughput improvements of the z13 CPACF, such as:
• DB2/IMS encryption tool
• DB2® built-in encryption
• z/OS Communications Server: IPsec/IKE/AT-TLS
• z/OS System SSL
• z/OS Network Authentication Service (Kerberos)
• DFDSS volume encryption
• z/OS Java SDK
• z/OS Encryption Facility
• Linux on z Systems: kernel, openssl, openCryptoki, GSKIT
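On Linux on z Systems, the exploitation is transparent: an ordinary OpenSSL EVP call picks up CPACF acceleration for AES with no source change (assuming a distribution OpenSSL build with the s390x hardware support, which the major distributions ship). A minimal sketch:

```c
/* AES-128-CBC encryption through the portable OpenSSL EVP API.
 * On Linux on z Systems, OpenSSL routes this to CPACF when the
 * hardware support is present; the application code is identical
 * on any platform and needs no change to benefit. */
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[16] = "0123456789abcde";   /* 16-byte demo key */
    unsigned char iv[16]  = "fedcba987654321";   /* 16-byte demo IV  */
    const unsigned char in[] = "clear text to be protected";
    unsigned char out[64];
    int n = 0, fin = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &n, in, sizeof in - 1);
    EVP_EncryptFinal_ex(ctx, out + n, &fin);     /* applies padding */
    EVP_CIPHER_CTX_free(ctx);

    printf("%d ciphertext bytes\n", n + fin);
    return 0;
}
```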
© 2015 IBM Corporation
55
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
61
"Native" PCIe I/O Features (AKA "Direct Attach")
Flash Express, zEDC Express and 10GbE RoCE Express

Traditional z Systems PCIe I/O features*
– One z Systems ASIC per channel/PCHID
– Definition and LPAR assignment:
• HCD/IOCP CHPID definition, or
• Firmware definition outside HCD/IOCP is possible for some. For example: Crypto Express4S is not defined as a CHPID
– Virtualization and support by Channel Subsystem LIC on the System Assist Processors (SAPs)

Native PCIe features*
– The z Systems ASIC role moves to the new z Systems I/O controller (zIOC) function on the z13 CP chip
– Definition and LPAR assignment:
• HCD/IOCP FUNCTION definition, similar to a CHPID definition but with different rules, or
• Firmware definition outside HCD/IOCP is possible for some. For example: Flash Express and Crypto Express5S are not defined with FUNCTIONs
– Virtualization and support by the Redundancy Group LIC running on the Integrated Firmware Processor (IFP)
(Note: NOT applicable to Flash Express or Crypto Express5S)

* Traditional z Systems PCIe I/O features: FICON Express16S and 8S, OSA-Express5S and 4S, and Crypto Express4S. Native PCIe features: zEDC Express, 10GbE RoCE Express, Flash Express, and Crypto Express5S.
© 2015 IBM Corporation
62
Flash Express PCIe Adapter Card
Four 400 GByte SSDs support 1.4 TBytes of
Storage Class Memory (AES encrypted)
Cable connections to form a RAID 10 Array
across a pair of Flash Express Cards.
Note: For z13, the Flash Express feature (FC 0403) is just a technology refresh of
the SSD. There are no performance or usage differences between FC 0403 and
the prior FC 0402. FC 0402 will still be used during 1H2015 for certain
configurations
© 2015 IBM Corporation
63
zEDC Express PCIe Adapter Card

Operating system requirements
– Requires z/OS 2.1 (with PTFs) and the zEDC Express for z/OS feature
– z/OS V1.13 and V1.12 offer software decompression support only
– z/VM V6.3 supports z/OS V2.1 guests

Server requirements
– Available on zEC12, zBC12 and z13
– zEDC Express feature for the PCIe I/O drawer (FC 0420)
• Each feature can be shared across up to 15 LPARs
• Up to 8 features available on zEC12/zBC12/z13
– Recommended high availability configuration per server is four features
• This provides up to 4 GB/s of compression/decompression
• Provides high availability during concurrent update (half the devices are unavailable during the update)
• Recommended minimum configuration per server is two features
– Steps for installing zEDC Express in an existing zEC12/zBC12/z13:
• Apply z/OS service; hot plug a zEDC Express adapter; update your IODF; and dynamically activate

zEDC should be installed on all systems accessing compressed data: for the full zEDC benefit, zEDC should be active on ALL systems that might access or share compressed format data sets, eliminating instances where software inflation would be used when zEDC is not available.
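On z/OS, zEDC is driven through standard interfaces (BSAM/QSAM extended-format data sets and the zlib-compatible library), so the software view is ordinary zlib code; a minimal sketch (the hardware offload itself is transparent to the caller):

```c
/* Ordinary zlib compression. On z/OS with zEDC Express installed,
 * the system's zlib-compatible library can offload this deflate to
 * the adapter transparently; the calling code stays the same. */
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    const Bytef src[] = "aaaaaaaaaabbbbbbbbbbccccccccccdddddddddd";
    Bytef dst[128];
    uLongf dlen = sizeof dst;

    if (compress2(dst, &dlen, src, sizeof src, Z_DEFAULT_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }
    printf("%lu bytes -> %lu bytes\n",
           (unsigned long)sizeof src, (unsigned long)dlen);
    return 0;
}
```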
© 2015 IBM Corporation
64
10GbE RoCE Express Feature
FC 0411

Designed to support high performance system interconnect
– Shared Memory Communications over Remote Direct Memory Access (SMC-R) architecture exploits RDMA over Converged Ethernet (RoCE)
– Shares memory between peers
– Read/write access to the same memory buffers without application changes
– Designed to increase transaction rates greatly, with low latency and reduced CPU cost

Configuration
– z13: both 10 GbE SFP+ ports enabled
– z13: support for up to 31 logical partitions
– A switched connection requires an enterprise-class 10 GbE switch with SR optics, Global Pause enabled and Priority Flow Control (PFC) disabled
– Point-to-point connection is supported
– Either connection is supported to z13, zEC12 and zBC12
– Not defined as a CHPID and does not consume a CHPID number
– Up to 16 features supported on a zBC12/zEC12
– Link distance up to 300 meters over OM3 50 micron multimode fiber (OM3 fiber recommended)

Exploitation and compatibility
– z/OS V2.1
– IBM SDK for z/OS Java Technology Edition, Version 7.1
– z/VM V6.3 support for z/OS V2.1 guest exploitation
– Linux on z Systems: IBM is working with Linux distribution partners to include support in future releases*
*Note: All statements regarding IBM's plans, directions, and intent are subject to change or
withdrawal without notice. Any reliance on these Statements of General Direction is at the relying
party's sole risk and will not create liability or obligation for IBM.
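Because SMC-R sits below the sockets layer, "without application changes" means code like this hypothetical client is untouched: the peers negotiate SMC-R during TCP connection setup (given RoCE Express adapters and SMC-R enabled in the TCP/IP configuration on both sides), and the data then flows over RDMA.

```c
/* Plain TCP client -- unchanged application code. With SMC-R enabled
 * on both peers (RoCE Express adapters plus SMC-R configuration in
 * the TCP/IP stacks), the connection setup negotiates SMC-R and the
 * write below flows over RDMA; without it, it is ordinary TCP. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8000);                      /* example port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* example peer */

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char msg[] = "same code, SMC-R or plain TCP";
        write(fd, msg, sizeof msg - 1);
    }
    close(fd);
    return 0;
}
```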
© 2015 IBM Corporation
65
IBM zAware V2.0 – Analyze z/OS and Linux on z Systems

(Topology: the IBM zAware host partition runs on a z13; monitored clients are z/OS images and Linux on z Systems images, native or as z/VM guests; results are viewed through the IBM zAware Web GUI.)

– Identify unusual system behavior of Linux on z Systems images
– Monitors syslog from a guest or native image in real time
– Improved analytics for z/OS message logs
– Upgraded internal database for improved RAS
– Completely rewritten UI, including heat map views
© 2015 IBM Corporation
66
Screen shots: Heat Map – all systems in a group; UI with drill-down system list (ModelGroup)
© 2015 IBM Corporation
67
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
68
Agenda
The IBM z13 – Introduction and Overview
IBM z13 Processor drawer and memory structure
IBM z13 I/O Connectivity
IBM z Systems Crypto
Flash Express, zEDC, RoCE, IBM zAware V2
IBM z13 Operating System Support
Summary
Statements of Direction (KVM, GDPS VA)
Appendix
© 2015 IBM Corporation
87
Statements of Direction
IBM plans to accept for review certification requests from cryptography providers
by the end of 2015, and intends to support the use of cryptography algorithms and
equipment from providers meeting IBM's certification requirements in conjunction with
z/OS and z Systems processors in specific countries. This is expected to make it easier
for customers to meet the cryptography requirements of local governments.
KVM offering for IBM z Systems: In addition to the continued investment in z/VM, IBM
intends to support a Kernel-based Virtual Machine (KVM) offering for z Systems that will
host Linux on z Systems guest virtual machines. The KVM offering will be software that
can be installed on z Systems processors like an operating system and can co-exist with
z/VM virtualization environments, z/OS, Linux on z Systems, z/VSE and z/TPF. The KVM
offering will be optimized for z Systems architecture and will provide standard Linux and
KVM interfaces for operational control of the environment, as well as providing the
required technical enablement for OpenStack for virtualization management, allowing
enterprises to easily integrate Linux servers into their existing infrastructure and cloud
offerings.
In the first half of 2015, IBM intends to deliver a GDPS/Peer to Peer Remote Copy
(GDPS/PPRC) multiplatform resiliency capability for customers who do not run the z/OS
operating system in their environment. This solution is intended to provide IBM z
Systems customers who run z/VM and their associated guests, for instance, Linux on z
Systems, with similar high availability and disaster recovery benefits to those who run on
z/OS. This solution will be applicable for any IBM z Systems announced after and
including the zBC12 and zEC12
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
Standardized Virtualization for z Systems
SOD at announcement for KVM optimized for z Systems – expanded audience for Linux on z Systems

Optimized for z Systems scalability, performance, security and resiliency
– Standard software distribution from IBM

Flexible integration to cloud offerings
– Standard use of storage and networking drivers (including SCSI disk)
– No proprietary agent management
– Off-the-shelf OpenStack and cloud drivers
– Standard enterprise monitoring and automation (i.e. GDPS)
– Provisioning, mobility, memory over-commit
– Standard management and operational controls
– Simplicity and familiarity for Intel Linux users

Support of a modernized open source KVM hypervisor for Linux
– KVM on z Systems will co-exist with z/VM
– Attracting new clients with in-house KVM skills
– Simplified startup with standard KVM interfaces

(Diagram: Linux on z Systems and z/OS images running under KVM, under z/VM, or natively on PR/SM™, sharing z CPU, memory and I/O, with the Support Element.) A new hypervisor choice for the mainframe.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
What is the GDPS Virtual Appliance*?
A fully integrated continuous availability and disaster recovery solution for Linux on z Systems customers with little or no z/OS skill
– It is an image comprising an operating system, the application components, an appliance management layer that makes the image self-contained, and APIs/UIs for customization, administration, and operation tailored to the appliance function.
– It improves both consumability and time-to-value for customers.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
GDPS Virtual Appliance* in Linux on z Environments

(Topology: two z Systems CPCs in Site 1 and Site 2. Each z/VM LPAR hosts Linux guests and an xDR proxy guest; the GDPS appliance runs in its own LPAR. z/VM + Linux disks are PPRC-mirrored from primary to secondary with HyperSwap, controlled through network communication and the HMC/Support Elements.)

– PPRC (Peer-to-Peer Remote Copy) ensures the remote copy is identical to the primary data. Synchronization takes place at the time of the I/O operation.
– One dedicated Linux guest is configured as the xDR proxy for GDPS, which is used for tasks that have z/VM scope (HyperSwap, shut down z/VM, IPL z/VM guests).
– GDPS manages the remote copy environment using the HyperSwap function and keeps data available and consistent for operating systems and applications.
– Disaster detection, and ensuring successful and faster recovery via automated processes.
– Single point of control from the GDPS appliance: no need for availability of all the experts, e.g. the storage team, hardware team, OS team, application team.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
Statements of Direction
The IBM z13 will be the last z Systems server to support running an operating system
in ESA/390 architecture mode; all future systems will only support operating systems
running in z/Architecture mode. This applies to operating systems running native on PR/SM
as well as operating systems running as second level guests. IBM operating systems that
run in ESA/390 mode are either no longer in service or only currently available with
extended service contracts, and they will not be usable on systems beyond IBM z13.
However, all 24-bit and 31-bit problem-state application programs originally written to run on
the ESA/390 architecture will be unaffected by this change.
Stabilization of z/VM V6.2 support: The IBM z13 server family is planned to be the last z Systems server supported by z/VM V6.2 and the last z Systems server that will be supported where z/VM V6.2 is running as a guest (second level). This is in conjunction with the
statement of direction that the IBM z13 server family will be the last to support ESA/390
architecture mode, which z/VM V6.2 requires. z/VM V6.2 will continue to be supported until
December 31, 2016, as announced in announcement letter # 914-012.
Product Delivery of z/VM on DVD/Electronic only: z/VM V6.3 will be the last release of
z/VM that will be available on tape. Subsequent releases will be available on DVD or
electronically.
Removal of support for Classic Style User Interface on the Hardware Management
Console and Support Element: The IBM z13 will be the last z Systems server to support
Classic Style User Interface. In the future, user interface enhancements will be focused on
the Tree Style User Interface.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
Statements of Direction
Removal of support for the Hardware Management Console Common Infrastructure Model (CIM) Management Interface: IBM z13 will be the last z Systems server to support the Hardware Management Console Common Infrastructure Model (CIM) Management Interface. The Hardware Management Console Simple Network Management Protocol (SNMP) and Web Services Application Programming Interfaces (APIs) will continue to be supported.
The IBM z13 will be the last z Systems server to support FICON Express8 channels:
IBM z13 will be the last high-end server to support FICON Express8. Enterprises should
begin migrating from FICON Express8 channel features (#3325, #3326) to FICON
Express16S channel features (#0418, #0419). FICON Express8 will not be supported on
future high-end z Systems servers as carry forward on an upgrade.
The IBM z13 server will be the last z Systems server to offer ordering of FICON
Express8S channel features. Enterprises that have 2 Gb device connectivity requirements
must carry forward these channels.
Removal of an option for the way shared logical processors are managed under
PR/SM LPAR: The IBM z13 will be the last high-end server to support selection of the
option to "Do not end the timeslice if a partition enters a wait state" when the option to set a
processor run time value has been previously selected in the CPC RESET profile. The CPC
RESET profile applies to all shared logical partitions on the machine, and is not selectable
by logical partition.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
© 2015 IBM Corporation
Performance delivered through multiple dimensions
Hardware/software integration leads to richer optimization
• 40% more total capacity
• 2X performance boost for cryptographic coprocessors
• 50-80% more bandwidth per I/O domain
• 2X increase in channel speed
• 3X increase in memory
• 2X increase in cache
• Lower cloud cost
• Faster fraud detection
• More scale for mobile transactions
• Faster data sharing between systems
• Less exposure to regulatory penalties
• Faster decision making with data-in-memory
© 2015 IBM Corporation