Web-Scale Hyper-Converged Infrastructure

WHY CHALK TALK:
You’ve only got one chance to differentiate yourself from the competition, and traditional PowerPoint slides are not
the answer. Use the “Power of the Pen”—it’s the power of a salesperson to get up in front of a C-level buyer and
deliver a visually-rich and interactive presentation with complete confidence and command of the material.
BEFORE YOU CHALK TALK:
• It goes without saying that you need to practice the story repeatedly until you know it from memory.
• Put your chalk talk study guide aside and draw out as much of the chalk talk as you can from beginning to end. When you are finished, refer back to the visuals to see what you missed. Repeat this 4-5 times and you will have mastered the visuals.
• Once you have mastery of the content and flow to the point where you can draw out the entire chalk talk
without referring to notes, only then should you integrate the script and practice presenting the story “in role”.
• Practice, Practice, Practice.
COME PREPARED:
• Call ahead to ensure your meeting is in a conference room or office with a white board.
• Always bring your own set of dry erase markers.
• Bringing a package of “white board wipes”—single use towelettes that you can get at any office store—will show your customer you have thought ahead.
HOW TO CHALK TALK
Consider your stance—Position yourself so your feet are perpendicular to the white board surface, and be
conscious to never alter this position except to turn and face your audience completely.
Engage—Smile. Make eye contact. Use hand gestures.
Avoid “dead air”—One of the most common missteps is to write on the white board in silence, then turn to your
audience and regurgitate what you have just drawn, word-for-word. This creates an awkward pause that interrupts
the flow of your presentation.
Take your time—Bad penmanship is primarily a result of going too fast and not knowing your story. A good story
is not a rushed story, so pace yourself, have fun, take time to engage your audience.
SMALL NUMBERS IN THE VISUALS AND IN THE TEXT ARE PROVIDED AS A GUIDE TO INDICATE
WHAT SHOULD BE DRAWN WHEN.
Most datacenters1 take advantage of server
virtualization—and that’s had a dramatic impact on
the ability to maximize server utilization,2 offer high
availability, and easily move workloads between servers.
Part of the requirement for that flexibility and efficiency is the use of shared storage.
Typical datacenters now use a 3-tier approach—you
have the servers which host the VMs, storage network,
and the storage arrays.3 The flexibility on the server side
has led to significant challenges on the storage side:
• There are specialty resources needed to maintain this system with certifications from the different suppliers and lots of complex4 components.
[What are you using today for servers? Networking? Storage?]
• Requirements need to be planned out 3-5 years in advance, but are your projects planned out in that same cycle? Of course not, which means teams either over-purchase capacity, which is expensive,5 run out of capacity, which stifles the business and is not acceptable, or risk a rip and replace sooner than planned to get the additional capacity, which is super expensive.
• Performance is limited6 by the I/O at the storage
controller, an inherent bottleneck.7
• Maintaining these systems limits innovation8
because resources are siloed and consumed with
keeping the lights on.
There are newer technologies that try to mask some of these storage challenges and speed up part of the process, but the bottleneck remains.
Analogy:
If you are sitting in highway traffic every day during rush
hour, adding some sort of nitro booster to your engine
isn’t going to help you get to work faster—you are
still stuck in traffic in the same lanes that were on the
highway yesterday.
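To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch in Python. The IOPS figures below are assumptions chosen only to illustrate the math, not measurements of any particular array.

# Illustrative only: the numbers are assumptions, not measurements.
# A single shared storage controller caps aggregate I/O no matter how
# many virtualized servers sit in front of it.
CONTROLLER_IOPS_CAP = 100_000   # assumed ceiling of one controller pair
VMS = 400                       # assumed VM count across the server tier
IOPS_PER_VM = 500               # assumed average demand per VM

demand = VMS * IOPS_PER_VM                    # 200,000 IOPS requested
per_vm_delivered = CONTROLLER_IOPS_CAP / VMS  # 250 IOPS actually available

print(f"Demand: {demand:,} IOPS vs. controller cap: {CONTROLLER_IOPS_CAP:,}")
print(f"Each VM effectively gets ~{per_vm_delivered:.0f} IOPS")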
The complexity and expense are replicated for different workload types—business applications, virtual desktop
infrastructure, and big data.1
[What applications are slowing down your system today?]
What if you had a way to change the way you deploy technology? What if you could come up with a way to
minimize or eliminate the bottlenecks that live in your infrastructure today? Nutanix was built from the ground
up to address these very specific issues.1 Nutanix changes the game and just eliminates the bottleneck. It’s as if
you have your own private road to get to work—no more sitting in traffic on the highway.
We start with basic hardware, x86 servers.1 No special certifications or components are needed to run the
servers. Within the 2U appliance there are nodes2 and each node can run the hypervisor, compute,3 VMs,4
and the storage5 right next to each other. Each node also has its own virtual storage controller.6 All server-attached storage units interconnect into a distributed file system. Nutanix supports multiple hypervisors, including vSphere, Hyper-V, and KVM.
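As a minimal sketch of the building blocks just described (the class names, capacities, and VM names here are illustrative, not Nutanix code), each node carries its own hypervisor, VMs, local storage, and controller, while the cluster pools every node’s storage into one logical store:

# Minimal sketch, assuming illustrative capacities and names.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    hypervisor: str            # e.g. "vSphere", "Hyper-V", "KVM"
    flash_tb: float            # local flash capacity
    disk_tb: float             # local spin-disk capacity
    vms: list[str] = field(default_factory=list)
    controller: str = "local-controller-vm"   # one virtual storage controller per node

@dataclass
class Cluster:
    nodes: list[Node]

    def pooled_capacity_tb(self) -> float:
        # every node's local storage joins the shared pool
        return sum(n.flash_tb + n.disk_tb for n in self.nodes)

cluster = Cluster([
    Node("node-1", "vSphere", flash_tb=1.6, disk_tb=8.0, vms=["app-01", "app-02"]),
    Node("node-2", "vSphere", flash_tb=1.6, disk_tb=8.0, vms=["vdi-01"]),
])
print(f"{cluster.pooled_capacity_tb():.1f} TB in one logical pool")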
Analogy:
With each node having its own virtual storage controller, it’s like being at a stadium: when the event is over, instead of funneling with the crowd to the exit, everyone has their own private door to leave.
Let’s look at this more closely.
Infrastructure built from standard server hardware1 is pooled together using intelligent software.2
1. The software in the system is distributed across all the nodes. You don’t have central metadata servers or
named nodes—no controller bottlenecks.3
2. Everything in the system, including storage functions like deduplication, metadata management, and system
cleanup, is distributed across all nodes. There are no hotspots or bottlenecks, allowing for scale.
3. Compute and storage sit very close to each other. Data does not have to go back and forth between storage
and compute over a network. Data has gravity, so co-locating storage and compute eliminates network
bottlenecks and system slowdown.
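One way to picture “no central metadata servers” is a consistent-hashing ring: every object’s key maps to an owning node, so ownership spreads evenly and any node can locate data without asking a master. The sketch below illustrates the general technique only; it is not the actual implementation.

# Illustration of consistent hashing, not the product's metadata layer.
import hashlib
from bisect import bisect

def ring(nodes, vnodes=64):
    # place several virtual points per node on a hash ring for even spread
    points = []
    for n in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{n}:{i}".encode()).hexdigest(), 16)
            points.append((h, n))
    return sorted(points)

def owner(points, key):
    # the owner is the first ring point at or after the key's hash
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return points[bisect(points, (h, "")) % len(points)][1]

points = ring(["node-1", "node-2", "node-3", "node-4"])
for key in ["vm-42/disk-0/extent-7", "vm-17/disk-1/extent-3"]:
    print(key, "->", owner(points, key))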
Within the node you have both flash and spin disk for storage.1 When VMs are actively interacting with files from storage, the data is in flash. And as that data gets cold, it automatically moves to spin disk. If the data becomes hot again, Nutanix automatically moves the data back into flash. Data is shared throughout the pooled system, eliminating single points of failure. If a node fails, the data is retrieved from one of the other nodes.
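A toy promotion/demotion policy makes the hot/cold movement concrete. The threshold, names, and logic below are assumptions for illustration only, not the product’s tiering algorithm.

# Toy tiering sketch: promote on read, demote once data has gone cold.
import time

COLD_AFTER_SECONDS = 3600          # assumed cold threshold
flash, disk = {}, {}               # extent_id -> (data, last_access_time)

def read(extent_id):
    now = time.time()
    if extent_id in flash:                 # hot path: already in flash
        data, _ = flash[extent_id]
    else:                                  # cold read: promote from spin disk
        data, _ = disk.pop(extent_id)
    flash[extent_id] = (data, now)
    return data

def demote_cold():
    now = time.time()
    for extent_id, (data, last) in list(flash.items()):
        if now - last > COLD_AFTER_SECONDS:
            disk[extent_id] = flash.pop(extent_id)   # back to spin disk

disk["extent-7"] = ("<blocks>", 0.0)   # starts cold on spin disk
read("extent-7")                       # first read promotes it to flash
read("extent-7")                       # later reads are served from flash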
You can add capacity one node at a time.2 The scale-out is linear and extremely predictable3 because every time you add a node you are adding a storage controller—the number of VMs or virtual desktops that the cluster can support grows in a linear, predictable way without limits.4
[How old is your infrastructure? How often do you refresh your storage array?] Start transitioning now, and expand as you need more capacity. Predictable cost,5 capacity, and performance.6
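A quick sizing sketch shows why the growth curve stays linear; the per-node figures are assumptions, but the shape of the result is the point, because every added node brings its own controller.

# Back-of-the-envelope linear scaling with assumed per-node figures.
VMS_PER_NODE = 100        # assumed
USABLE_TB_PER_NODE = 9.6  # assumed

for nodes in (4, 8, 16, 32):
    print(f"{nodes:>2} nodes -> ~{nodes * VMS_PER_NODE} VMs, "
          f"~{nodes * USABLE_TB_PER_NODE:.0f} TB, {nodes} controllers")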
Analogy:
Imagine a company when it first starts out—the team is usually pretty small. They look at office space, maybe
get a little more than they need so there is space to grow. And they think they will grow fast but they don’t
start out with an entire campus of buildings when they only need part of a single floor in a single building. They
add space as they need it. And with Nutanix you can add server and storage resources as you need them in your datacenter.
This single system supports all virtual workload types1—enterprise applications,2 virtual desktop infrastructure,3 and big data.4
• Nodes can be configured to workload needs. Perhaps node 4 needs to be storage heavy (draw over in blue5) and node 5 needs to be compute heavy (draw over in green6); adjust each node to the workload requirements (see the sketch after this list).
• With the software-defined approach to infrastructure, policies around resilience, data protection, etc. are late-bound in the system.
• The systems coming out of the factory don’t have rigid restrictions and preset configurations, so you don’t have to buy different solutions for different workloads.7
• The Nutanix solution also natively uses read and write caches, intelligent data tiering, and data locality to achieve optimal performance8 automatically for a variety of workloads.9
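As a hypothetical illustration of mixing node profiles in one cluster (the profile names and numbers below are made up for the example, not catalog configurations):

# Hypothetical node profiles; values are illustrative only.
profiles = {
    "storage-heavy": {"cores": 16, "ram_gb": 128, "flash_tb": 3.2, "disk_tb": 20.0},
    "compute-heavy": {"cores": 40, "ram_gb": 512, "flash_tb": 1.6, "disk_tb": 4.0},
    "balanced":      {"cores": 24, "ram_gb": 256, "flash_tb": 1.6, "disk_tb": 8.0},
}

cluster_plan = {
    "node-4": "storage-heavy",   # e.g. big-data landing zone
    "node-5": "compute-heavy",   # e.g. VDI desktops
}

for node, profile in cluster_plan.items():
    print(node, profiles[profile])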
[What performance issues do you have? Where is the system slowing down?]
Analogy:
This type of flexibility is just like having the passenger capacity of a minivan while still getting the speed,
performance, and fun of a sports car all wrapped into a single vehicle.
Nutanix: web-scale hyper-converged infrastructure. This approach was developed for Google and other
organizations that require massive scale.1 But the ability to scale one node at a time, without limits, allows all organizations, no matter how large or small, to access this incredible flexibility and adaptability within their datacenter.
The architects who developed this technology for Google founded Nutanix, designing a solution made for
growth, deliverable to the masses.
• Fractional consumption and predictable scale
• No single point of failure2
• Distributed everything
• Always-on systems
Management of the environment is from a single HTML5 dashboard with APIs for custom analytics. The interface is extremely intuitive,1 allowing admins to quickly perform root-cause analysis, and it provides extensive automation and rich analytics. A single interface manages the entire environment,2 including all servers and storage, freeing up critical time for more important business activities.
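A hedged sketch of what “APIs for custom analytics” can look like in practice: pull a few cluster statistics and feed them into your own reporting. The address, endpoint path, credentials, and field names below are placeholders, not a documented contract; adapt them to the API reference for your release.

# Placeholder endpoint and fields; adjust to the actual API documentation.
import requests

PRISM = "https://prism.example.local:9440"      # placeholder management address
resp = requests.get(
    f"{PRISM}/api/cluster-stats",               # placeholder endpoint path
    auth=("admin", "********"),
    verify=False,                               # lab only; use real certificates in production
    timeout=10,
)
resp.raise_for_status()
stats = resp.json()
print(stats.get("storage_usage"), stats.get("avg_io_latency"))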
Innovation at its finest, driving down cost and complexity3 while adding performance.4 It’s why we received
Best of VMworld for 3 years and why Gartner identified Nutanix as a visionary leader in their Magic Quadrant
for Integrated Systems.