Mirantis OpenStack v5.1
Planning Guide
version 6.0
Contents

Preface
  Intended Audience
  Documentation History
Introduction to Mirantis OpenStack and Fuel
System Requirements
  Fuel Master Node Hardware Recommendations
  Target Node Server Hardware Recommendations
  Sample Hardware Configuration for Target Nodes
    Controller nodes (3)
    Storage nodes (3)
    Compute nodes (5)
Planning Summary
Choose Network Topology
Linux Distribution for Nodes
Choosing the Storage Model
  Glance: Storage for Images
  Object Storage for Applications
  Cinder: Block Storage for Volumes
Nodes and Roles
Plan Monitoring Facilities
  Ceilometer and MongoDB
  Zabbix monitoring tool
Planning a Sahara Deployment
Preparing for vSphere Integration
  vSphere Installation
  ESXi Host Networks Configuration
  Limitations
Preparing to run Fuel on vSphere
Preparing for VMware NSX integration
  VMware NSX Installation
  VMware NSX cluster configuration
  Limitations
Calculate hardware requirements
Example of Hardware Requirements Calculation
  Calculating CPU
  Calculating Memory
  Calculating Storage
  Throughput
  Calculating Network
  Scalability and oversubscription
  Hardware for this example
  Mellanox ConnectX-3 network adapters
  Summary
Reference configuration of hardware switches
  Tagged ports
  Untagged ports
  Example 1: HA + Nova-network FlatDHCP manager
    Detailed Port Configuration
    Nova-network Switch configuration (Cisco Catalyst 2960G)
    Nova-network Switch configuration (Juniper EX4200)
  Example 2: HA + Neutron with GRE
    Detailed port configuration
    Neutron Switch configuration (Cisco Catalyst 2960G)
    Neutron Switch configuration (Juniper EX4200)
  Example 3: HA + Neutron with VLAN + SR-IOV & iSER
    Neutron Switch configuration (Mellanox SX1036)
Index
Preface
This documentation provides information on how to use Mirantis Fuel to deploy an OpenStack environment. The information is for reference purposes and is subject to change.
Intended Audience
This documentation is intended for OpenStack administrators and developers; it assumes that you have
experience with network and cloud concepts.
Documentation History
The following table lists the released revisions of this documentation:
Revision Date     Description
October, 2014     6.0 Technical Preview
Introduction to Mirantis OpenStack and Fuel
OpenStack is an extensible, versatile, and flexible cloud management platform. It is a portfolio of cloud infrastructure services (compute, storage, networking, and other core resources) that are exposed through REST APIs. It enables a wide range of control over these services, both from the perspective of an Integrated Infrastructure as a Service (IaaS) controlled by applications and as a set of tools that enable automated manipulation of the infrastructure itself.
Mirantis OpenStack is a productized snapshot of the open source technologies. It includes Fuel, a graphical web
tool that helps you to quickly deploy your cloud environment. Fuel includes scripts that dramatically facilitate
and speed up the process of cloud deployment, without requiring you to completely familiarize yourself with the
intricate processes required to install the OpenStack environment components.
This guide provides details to get you started with Mirantis OpenStack and Fuel on a set of physical servers ("bare-metal installation"). See the User Guide for detailed instructions about how to download and install Fuel on the Fuel Master Node and then how to use the Fuel interface to deploy your OpenStack environment.
Further reading is available in the following documents:
• Terminology Reference is an alphabetical listing of technologies and concepts that serves as both a glossary
and a master index of information in the Mirantis docs and the open source documentation.
• Operations Guide gives information about advanced tasks required to maintain the OpenStack environment
after it is deployed. Most of these tasks are done in the shell using text editors and command line tools.
• Reference Architecture provides background information about how Mirantis OpenStack and its supporting
HA architecture is implemented.
There are other ways to use Mirantis OpenStack and Fuel besides the bare-metal installation:
• You can install Fuel and use it to deploy a Mirantis OpenStack Environment on Oracle VirtualBox. VirtualBox
deployment is useful for demonstrations and is a good way to begin your exploration of the tools and
technologies. See 0 to OpenStack in 60 Minutes or less to get started, with additional information in
Running Fuel on VirtualBox. Note that deployments on top of VirtualBox do not generally meet the
performance and robustness requirements of most production environments.
• You can install the Fuel Master node on vSphere and deploy a Mirantis OpenStack environment on either bare metal or on vSphere.
• Mirantis OpenStack is also available on-demand, preconfigured, and ready to use with our Hosted Private
Cloud product, Mirantis OpenStack Express.
For community members or partners looking to take Fuel even further, see the developer documentation for
information about the internal architecture of Fuel, instructions for building the project, information about
interacting with the REST API and other topics of interest to more advanced developers. You can also visit the
Fuel project for more detailed information and become a contributor.
System Requirements
Fuel and Mirantis OpenStack run well on high-quality commodity hardware. The following sections give some
basic information about minimal hardware requirements; choosing the right hardware requires an understanding
of how your cloud environment will be used.
Some general remarks:
• For optimal performance in production environments, configure enough servers and network adapters to
minimize contention for resources in the environment.
• Understand the nature of the software that will run on your environment and configure the hardware
appropriately. For example, an environment that mostly provides remote storage has different hardware
requirements than an environment that is mostly used for extensive calculations.
• One Fuel Master node can manage multiple environments. If you need to support applications with very
different hardware needs, consider deploying separate environments that are targeted to these needs; for
example, deploy one environment that is optimized for data storage and another environment that is
optimized for compute power.
• Select peripheral hardware that is supported by the operating system distribution that you are using for the
target nodes (see Linux Distribution for Nodes) and for the VM instances that will be deployed.
Integrating new hardware drivers into the Mirantis OpenStack distribution is possible but complicated at
this time. This will be made easier through a simplified plug-in architecture that is planned for an upcoming
release.
Note also that some proprietary drivers may work only with specific Linux kernel versions and not be
compatible with the kernel versions that are shipped with Mirantis OpenStack. Other drivers may not be
included in the Mirantis OpenStack distribution because of licensing restrictions.
Fuel Master Node Hardware Recommendations
To install the Fuel Master Node, you should base your hardware on the anticipated load of your server. Logically,
deploying more node servers in your environment requires more CPU, RAM, and disk performance.
Suggested minimum configuration for installation in production environment:
• Quad-core CPU
• 4GB RAM
• 1 gigabit network port
• 128GB SAS Disk
• IPMI access through independent management network
Suggested minimum configuration for installation in lab environment:
• Dual-core CPU
• 2GB RAM
• 1 gigabit network port
• 50GB disk
• Physical console access
Target Node Server Hardware Recommendations
Determining the appropriate hardware for the target nodes requires a good understanding of the software that
will be run on those nodes. The OpenStack community documentation provides basic guidelines:
• Compute and Image System Requirements
• System requirements for Object Storage
The OpenStack Operations Guide includes advice about capacity planning and scaling and should be read in
conjunction with this Planning Guide.
Other information is available elsewhere in this documentation set:
• For information about assigning roles to nodes, see Nodes and Roles.
• For information about storage requirements and other hardware issues for the MongoDB database that is
used with Ceilometer, see Ceilometer and MongoDB.
• For general information about calculating hardware requirements, see Calculate hardware requirements.
To help determine the correct sizing for OpenStack Node servers, use the Mirantis Hardware Bill of Materials
calculator.
For more information on the logic used in the utility and basic directions, see: “How do you calculate how much
hardware you need for your OpenStack cloud?”.
Sample Hardware Configuration for Target Nodes
This is a general-purpose medium-sized hardware configuration that provides room for moderate growth and is adequate for a variety of installations. Note that this is not a recommendation or a "best practices" guide; rather, it provides a reasonable starting point that avoids common configuration mistakes.
The characteristics of the environment are:
• 12 servers (one Fuel Master node, 3 Controller nodes, 3 Storage nodes, 5 Compute nodes)
• High-availability configuration
• Neutron networking, using either the VLAN or GRE topology
• Ceph as the backend for Cinder, Glance, and Nova (for ephemeral storage)
• No Ceilometer/MongoDB monitoring is configured; three additional servers to be used as MongoDB nodes
would be required to add Ceilometer/MongoDB to the environment
Controller nodes (3)
Controllers require sufficient resources to run the Ceph monitors and MySQL in addition to the core OpenStack
components (Nova, Neutron, Cinder, and Glance).
Each controller should have:
• Single or dual-socket CPU with at least 4 physical cores
• 32GB RAM recommended; 24GB minimum (Note that, for testing, a Controller node can run with as little as
2GB of memory, but production systems need the larger memory size.)
• Hardware RAID1 Controller with at least 500GB capacity for the Host operating system disk. Larger disks
may be warranted if you determine that you need large database capacity. Note that it is non-trivial to
expand the disk storage on running Controller nodes.
• 2 NICs, either 1 Gbit/s or 10 Gbit/s
Storage nodes (3)
We recommend separate Ceph nodes for scalability and robustness. The hardware estimate is based on the requirement of 0.5 CPU cores per Ceph OSD and 1 GB of RAM per TB of Ceph OSD space (see the sizing sketch after the list below). All Ceph storage and journal hard disks can be configured in JBOD (Just a Bunch of Disks) mode on the RAID controller, or can be plugged into free motherboard ports.
For production installations, the Ceph replication factor should be set to 3 or more; see Storage.
• Single-socket CPU with at least 4 physical cores
• 24GB RAM
• RAID1 Controller with at least 500GB capacity for the Host operating system disk
• 2 NICs, either 1 Gbit/s or 10 Gbit/s
• 18 TB of Ceph storage (6 x 3TB)
• 1-2 SSDs, 64GB or more each, for the Ceph Journal
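To sanity-check a storage node against these rules of thumb, here is a minimal Python sketch; the 6 x 3 TB drive layout comes from the sample configuration above, and the 0.5 core and 1 GB per TB figures are the estimates cited in this section:

    # Rough Ceph OSD node sizing, using the rules of thumb above:
    # 0.5 CPU cores per Ceph OSD and 1 GB of RAM per TB of OSD space.
    def ceph_osd_node_requirements(osd_disks, disk_size_tb):
        cores = 0.5 * osd_disks
        ram_gb = osd_disks * disk_size_tb  # 1 GB per TB of OSD space
        return cores, ram_gb

    cores, ram_gb = ceph_osd_node_requirements(osd_disks=6, disk_size_tb=3)
    print(cores, ram_gb)  # 3.0 cores and 18 GB of RAM for 6 x 3 TB OSD drives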
Compute nodes (5)
Virtual Machines are hosted on the Compute nodes. By default, Fuel configures the Scheduler to use a fairly
aggressive 8:1 CPU overcommit ratio on the Compute nodes; if you do not know what your VM workload is going
to be, reset this ratio to 1:1 to avoid CPU congestion.
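As an illustration of what the overcommit ratio means for scheduling, here is a minimal sketch; the node size matches the sample Compute node below, and the helper is illustrative only, not Fuel code:

    # Illustrative only: how the CPU overcommit (allocation) ratio affects the
    # number of vCPUs the scheduler is willing to place on one Compute node.
    def schedulable_vcpus(sockets, cores_per_socket, cpu_allocation_ratio):
        return int(sockets * cores_per_socket * cpu_allocation_ratio)

    print(schedulable_vcpus(2, 4, 8.0))  # default 8:1 ratio -> 64 vCPUs
    print(schedulable_vcpus(2, 4, 1.0))  # conservative 1:1 ratio -> 8 vCPUs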
Each Compute node should have:
• Dual-socket CPU with at least 4 physical cores per socket
• 64GB RAM
• RAID1 Controller with at least 500GB capacity for the Host operating system disk
• 2 NICs, either 1 Gbit/s or 10 Gbit/s
Planning Summary
Before installation, determine the deployment type that is appropriate for your configuration needs. You may want to print this list and make notes indicating your selections so you can be sure you have planned your deployment correctly.
The following list summarizes the configuration steps that you must complete to plan the Mirantis OpenStack deployment:
• Select a network topology; see Choose Network Topology.
• Choose the Linux distro to use on your nodes; see Linux Distribution for Nodes.
• Determine how many nodes to deploy, which roles to assign to each, and the level of high availability to implement; see Nodes and Roles.
• Plan monitoring facilities; see Plan Monitoring Facilities.
• If you want to run Hadoop, plan to install Sahara; see Planning a Sahara Deployment.
• If you want to run vCenter or vSphere, see Preparing for vSphere Integration.
• If you want to use NSX, see Preparing for VMware NSX integration.
• Calculate the server and network hardware needed; see Calculate hardware requirements.
• Prepare an IP address management plan and network association: identify the network addresses and VLAN IDs for your Public, floating, Management, Storage, and virtual machine (fixed) networks, and prepare a logical network diagram.
Choose Network Topology
Mirantis OpenStack supports two network modes, which in turn support five topologies. For architectural
descriptions of the five topologies, see:
• Neutron with VLAN segmentation and OVS
• Neutron with GRE segmentation and OVS
• Neutron with VMware NSX
• Nova-network FlatDHCP Manager
• Nova-network VLAN Manager
Nova-network is a simple legacy network manager. It can operate with predefined Private IP spaces only.
• If you do not want to split your VMs into isolated groups (tenants), you can choose the Nova-network with
FlatDHCP topology. In this case, you will have one big network for all tenants and their VMs.
• The Nova-network with VLANManager topology is a form of provider network. This allows you to have
multiple tenants, each of which supports approximately the same number of VMs and an equal amount of
Private IP space. The maximum number of tenants and the private IP space size must be defined before you
deploy your environment. You must also set up appropriate VLANs and gateways on your underlying
network equipment.
Neutron is a modern and more complicated network manager. It separates tenants, decreases the requirements
for the underlying network (physical switches and topology), and gives a great deal of flexibility for manipulating
Private IP spaces. You can create Private IP spaces with different sizes and manipulate them on the fly.
• The Neutron with VLAN topology, like Nova-network with VLANManager, requires a predefined maximum
number of tenants value and underlying network equipment configuration.
• The Neutron with GRE topology allows more flexibility in the maximum number of tenants (it supports up to
65535 tenants) and simplifies the network equipment configuration, but GRE encapsulation decreases the
speed of communication between the VMs and increases the CPU utilization on the Compute and Controller
nodes.
• The Neutron with VMware NSX topology utilizes the VMware NSX network hypervisor as a backend for
Neutron. As with GRE segmentation, it does not restrict the maximum number of available networks.
Before deploying the OpenStack environment that will be using NSX, the user must obtain the NSX OVS
packages for Ubuntu or CentOS and manually upload them to a web server (usually the Fuel master node).
Puppet can then download and install these packages on controllers and compute nodes during the
deployment stage.
Some other considerations when choosing a network topology:
• OVS (Open vSwitch), Bonding, and Murano can only be implemented on Neutron.
• VMware vCenter and vSphere can only be implemented on Nova-network with the FlatDHCP topology.
• Neutron with VMware NSX can only be implemented with the KVM/QEMU hypervisor.
• Bonding is not supported for SR-IOV over the Mellanox ConnectX-3 network adapters family.
• Mellanox SR-IOV and iSER are supported only when choosing Neutron with VLAN.
Linux Distribution for Nodes
Fuel allows you to deploy either the CentOS or Ubuntu Linux distribution as the Host O/S on the nodes. All nodes
in the environment must run the same Linux distribution. Often, the choice is made based on personal
preference; many administrative tasks on the nodes must be performed at shell level and many people choose
the distribution with which they are most comfortable.
Some specific considerations:
• Each distribution has some hardware support issues. See Release Notes for Mirantis OpenStack 6.0 for details
about known issues.
• In particular, the CentOS version used for OpenStack does not include native support for VLANs while the
Ubuntu version does. In order to use VLANs on CentOS based nodes, you must configure VLAN splinters.
• CentOS supports .rpm packages; Ubuntu supports .deb packages.
Choosing the Storage Model
This section discusses considerations for choosing the storage model for your OpenStack environment. You need
to consider two types of data:
• Persistent storage exists outside an instance.
• Ephemeral storage is allocated for an instance and is deleted when the instance is deleted.
Fuel deploys storage for two types of persistent data:
• Glance, the image storage service, which can use either Swift or Ceph RBD as the storage backend
• Cinder, the block storage service, which can use either LVM or Ceph RBD as the storage backend
The Nova compute service manages ephemeral storage. By default, Nova stores ephemeral drives as files on local
disks on the Compute nodes but can instead use Ceph RBD as the storage backend for ephemeral storage.
See:
• Calculating Storage for information about choosing the hardware to use for your storage objects
• Storage Decisions is an OpenStack community document that gives guidelines for choosing the storage
model to use.
Glance: Storage for Images
Fuel configures a storage backend for the Glance image service. This provides data resilience for VM images;
multiple Glance instances running on controller nodes can store and retrieve images from the same data set,
while the object store takes care of data protection and HA.
When you create your OpenStack environment, you choose one of the following storage backends for Glance:
• Swift object store, the standard OpenStack object storage component
• Ceph RBD, the distributed block device provided by the Ceph storage platform.
Note
Older versions of Fuel provided the Multi-Node deployment mode that was used to deploy OpenStack
environments that had a single Compute node and used the file system backend for Glance. This mode is
still available in Fuel 5.1 but is deprecated; instead, use the HA deployment mode with a single controller.
This mode creates an environment that is not highly available but is managed using the same services as
those that manage the full HA environment and can be scaled up to have multiple controllers and be
highly available. In this mode, you can choose either Swift or Ceph RBD as the Glance storage backend.
Factors to consider when choosing between Swift and Ceph RBD for the storage backend for the Glance image
server include:
• Ceph provides a single shared pool of storage nodes that can handle all classes of persistent and ephemeral
data that is required for OpenStack instances.
• When using Ceph, a single set of hard drives can serve as a backend for Glance, Cinder, and Nova.
Otherwise, you must have dedicated disks for image storage, block storage, and ephemeral disks.
• Ceph's copy-on-write facility allows you to clone a system image and to start different VMs based on that
image without any unnecessary data copy operations; this speeds up the time required to launch VMs and
create volumes from images. Without the Ceph copy-on-write facility, OpenStack images and volumes must
be copied to separate Glance, Cinder, and Nova locations.
• Ceph supports live migration of VMs with ephemeral drives whereas Cinder with LVM backend only supports
live migration of volume backed VMs. Note that live migration of Ceph backed VMs requires that Ceph RBD
is used for ephemeral volumes as well as Glance and Cinder.
• Swift provides multi-site support. Ceph is unable to replicate RBD block device data on a long-distance link
which means you cannot replicate between multiple sites. Starting with Ceph 0.80 "Firefly" (integrated into
Mirantis OpenStack beginning with Release 5.1), the Ceph Object Gateway (radosgw) supports multi-site
replication of object data but this support is more limited than that of Swift and does not apply to the RBD
backend for Glance.
• Ceph cannot tolerate clock drift greater than 50ms. If your servers drift out of sync, your Ceph cluster
breaks. When using Ceph, it is extremely important that you configure NTP or some other time
synchronization facility for your environment.
Object Storage for Applications
The object storage systems supported by Mirantis OpenStack can also be used by applications that need to store
data in an object store. Swift provides a native REST API as well as the S3 API compatibility layer that emulates
the Amazon S3 API on top of Swift Object Storage.
Ceph includes the optional Ceph Object Gateway component (radosgw) that applications can use to access RGW
objects.
Note that the radosgw implementation of the Swift API does not implement all operations.
Ceph RBD uses RADOS directly and does not use the Swift API, so it is possible to store Glance images in Ceph
and still use Swift as the object store for applications.
Because the Ceph Object Gateway replaces Swift as the provider of the Swift APIs, it is not possible to have both
radosgw and Swift running in the same OpenStack environment.
Cinder: Block Storage for Volumes
Cinder serves block devices for the OpenStack environment. You have two choices for the storage back-end:
• Cinder LVM (default): each volume is stored as a logical volume in an LVM volume group on one of your
Cinder nodes.
• Ceph: each volume is stored as an object in the Ceph RADOS object storage system.
Note
If you are using vCenter as the hypervisor, you must use the VMDK driver to store your volumes in the
vCenter datastore.
Factors to consider when choosing between Cinder LVM and Ceph for the Cinder storage backend include:
• Ceph provides a single shared pool of storage nodes as discussed above for image storage.
• Ceph provides object replication capabilities by storing Cinder volumes as Ceph RBD objects; each Ceph
RBD object is stored as multiple RADOS objects. Ceph ensures that each replica of an object is stored on a
different node. This means that your volumes are protected against hard drive and node failures or even the
failure of the data center itself.
The Ceph data replication rules (CRUSH map) can be customized separately for each object pool to modify
the number of object replicas, add different types of failure domains, etc.
• LVM provides much less protection of your data than Ceph does. Even if you use RAID on each Cinder node,
your data is only protected against a hard drive failure. If the Cinder node itself is lost, all volumes that
were stored on that node are lost.
• Ceph consumes more disk space than LVM. LVM stores a single replica of the data whereas Ceph stores at
least two copies of your data so that your actual raw storage capacity must be two to three times bigger
than your data set. Beginning with the "Firefly" release of Ceph (supported by Mirantis OpenStack beginning
with Release 5.1), you can manually implement erasure coding striping to reduce the data multiplication
requirements of Ceph.
• Ceph provides multi-node striping and redundancy for block storage.
• If you combine Ceph RBD backends for Cinder and Glance, you gain another important advantage over
Cinder LVM: copy-on-write cloning of Glance images into bootable Ceph volumes.
• Ceph supports live migration of VMs with ephemeral drives whereas LVM only supports live migration of
volume backed VMs.
If you use Cinder LVM, you have the following configuration options:
• Let Fuel create a JBOD partition that spans all the storage drives in a node.
• You can join all drives into a RAID array before deployment and have the array appear to Fuel as a single
block device.
When deploying Ceph, Fuel partitions the Ceph storage nodes so that most of the space is reserved for Ceph-OSD
storage; all other partitions for the node consume a portion of the first drive. To improve system performance,
you might configure one or two SSDs and assign them the "Ceph General" role on the Assign a role or roles to each
node server screen. See Ceph Hardware Recommendations for details.
Nodes and Roles
Your OpenStack environment contains a set of specialized nodes and roles; see OpenStack Environment
Architecture for a description. When planning your OpenStack deployment, you must determine the proper mix of
node types and what roles will be installed on each. When you create your OpenStack environment, you will
assign a role or roles to each node server.
All production environments should be deployed for high availability, although you can deploy your environment without the replicated servers required for high availability and add them later. Part of your Nodes and Roles planning is to determine the level of HA you want to implement and to plan for adequate hardware.
Some general guiding principles:
• When deploying a production-grade OpenStack environment, it is best to spread the roles (and, hence, the
workload) over as many servers as possible in order to have a fully redundant, highly-available OpenStack
environment and to avoid performance bottlenecks.
• For demonstration and study purposes, you can deploy OpenStack on VirtualBox; see Running Fuel on VirtualBox for more information. This option has the lowest hardware requirements.
• OpenStack can be deployed on smaller hardware configurations by combining multiple roles on the nodes
and mapping multiple Logical Networks to a single physical NIC.
• When looking to maximize performance, carefully choose your hardware components and make sure that
the performance features of the selected hardware are supported and enabled. For an example of what tight
integration of hardware and software can do for you, see Mellanox ConnectX-3 network adapters.
This section provides information to help you decide how many nodes you need and which roles to assign to
each.
The absolute minimum requirement for a highly-available OpenStack deployment is to allocate 4 nodes:
• 3 Controller nodes, combined with Storage
• 1 Compute node
In production environments, it is highly recommended to separate storage nodes from controllers. This helps avoid resource contention, isolates failure domains, and allows you to optimize hardware configurations for specific workloads. To achieve that, you need a minimum of 5 nodes when using Swift and Cinder storage backends, or 7 nodes for a fully redundant Ceph storage cluster:
• 3 Controller nodes
• 1 Cinder node or 3 Ceph OSD nodes
• 1 Compute node
Note
You do not need Cinder storage nodes if you are using Ceph RBD as storage backend for Cinder volumes.
Note
You do not need Compute nodes if you are using vCenter as the hypervisor.
Of course, you are free to choose how to deploy OpenStack based on the amount of available hardware and on
your goals (such as whether you want a compute-oriented or storage-oriented environment).
For a typical OpenStack compute deployment, you can use this table as high-level guidance to determine the
number of controllers, compute, and storage nodes you should have:
# of Nodes    Controllers    Computes    Storages
4-10          3              1-7         3 (on controllers)
11-40         3              3-32        3+ (Swift) + 2 (proxy)
41-100        4              29-88       6+ (Swift) + 2 (proxy)
>100          5              >84         9+ (Swift) + 2 (proxy)
Plan Monitoring Facilities
Mirantis OpenStack can deploy two different metering and monitoring facilities:
• Ceilometer to query and store metering data that can be used for billing purposes
• Zabbix to monitor the health of the OpenStack services running in your environment (experimental feature)
Ceilometer and MongoDB
Fuel can deploy Ceilometer, the OpenStack Telemetry component. When enabled, Ceilometer collects and shares measurement data gathered from all OpenStack components. This data can be used for monitoring and capacity planning purposes as well as for alarming. Ceilometer's REST API can also provide data to external monitoring software for a customer's billing system.
Mirantis OpenStack 5.0 and later installs MongoDB as the back-end database for OpenStack Telemetry. This
significantly improves the Ceilometer performance issues encountered in earlier releases that used the MySQL
database; MongoDB is better able to handle the volume of concurrent read/write operations.
Ceilometer collects two different categories of data:
• Billable events such as "instance X was created", "volume Z was deleted". These notifications appear in the
cloud ecosystem continuously; monitoring them does not consume significant amounts of resources in the
cloud environment.
• Metrics that analyze the activity in the cloud environment at the moment of sampling. This information is
gathered by polling and can consume significant amounts of I/O processing resources and large amounts of
database storage.
Ceilometer can be configured to collect a large amount of metering data and thus perform a high volume of
database writes. In planning the resources required, consider the following:
• The resources consumed by metrics sampling are determined by the polling interval, the number of metrics
being collected, and the number of resources from which metrics are collected. The amount of storage
required is also affected by the frequency with which you offload or purge the data from the database.
• Frequent polling of metrics yields a better picture of what is happening in the cloud environment but also significantly increases the amount of data being processed and stored. For example, in one test sampling the same metrics for the same (fairly small) number of resources in the same environment (see the sketch after this list):
• 1 minute polling accumulated 0.8 TB of data over a year.
• 30 second polling accumulated 1.4 TB of data over a year.
• 5 second polling accumulated 14.5 TB of data over a year.
• Ceilometer consumes fairly small amounts of CPU but the I/O processing is extremely intensive when the
data is written to the disk. This is why we recommend using dedicated MongoDB nodes rather than running
the MongoDB role on Controller nodes. In our lab tests, nearly 100% of the disk I/O resources on the
Controller nodes were sometimes consumed by Ceilometer writing data to MongoDB when the database
was located on the Controller and a small polling interval was used. This nearly halted all other processing
on the Controller node and prevented other processes from running.
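As a rough illustration of how the polling interval drives data growth, here is a minimal sketch; the bytes-per-sample figure and the resource and metric counts are assumptions for illustration only, so plan against the measured figures quoted above rather than this estimate:

    # Back-of-the-envelope Ceilometer sample volume estimate.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def storage_per_year_tb(polling_interval_s, resources, metrics_per_resource,
                            bytes_per_sample=2048):  # assumed average sample size
        samples = SECONDS_PER_YEAR // polling_interval_s * resources * metrics_per_resource
        return samples * bytes_per_sample / 1024 ** 4

    # Halving the polling interval roughly doubles the stored data.
    for interval in (60, 30, 5):
        print(interval, round(storage_per_year_tb(interval, 100, 10), 2))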
To install Ceilometer:
• Check the appropriate box on the Related projects screen when configuring your environment.
• Assign the Telemetry-MongoDB role to an appropriate number of servers on the Assign a role or roles to each
node server screen.
See Running Ceilometer for information about configuring and running Ceilometer.
Ideally, MongoDB should run on dedicated servers, with at least as many MongoDB nodes as Controller nodes in
the environment.
Zabbix monitoring tool
Zabbix is an open source infrastructure monitoring utility that Fuel 5.1 and later can deploy with your OpenStack
environment. See Zabbix implementation for details about how Zabbix is implemented in Mirantis OpenStack.
When planning your Mirantis OpenStack deployment, you must consider the following resource requirements if
you will be deploying Zabbix:
• The Zabbix server must run on its own dedicated node in Mirantis OpenStack 5.1. This server also stores the
Zabbix database.
• A Zabbix agent is installed on each Compute and Storage node in the environment. The agents send all
information to the Zabbix server immediately although some small amount of temporary data may be
written to the local disk for processing.
• Significant network traffic is generated on the Zabbix node as the agents report back to the server; the
agents themselves do not put much load on the network.
• The amount of storage required on the Zabbix node depends on the number of resources being monitored,
the amount of data being gathered for each, and so forth but our internal tests indicate that 30GB of data
storage is adequate for monitoring up to 100 nodes.
• The agents running on the Compute and Storage nodes run periodically; they mainly consume CPU
resources, although they are fairly light-weight processes.
See Performance tuning for information about how to maximize the performance of Zabbix.
To deploy Zabbix in your Mirantis OpenStack environment:
• Enable Experimental tools.
• Assign the Zabbix role to the appropriate node on the Assign a role or roles to each node server screen.
• If you like, reset the password used to access the Zabbix dashboard on the Resetting the Zabbix password
screen.
For information about using the Zabbix Web UI, see Zabbix Web interface.
Limitations:
• The Zabbix server cannot be replicated for high availability in Mirantis OpenStack.
• OVS bonding and Mellanox SR-IOV based networking over the Mellanox ConnectX-3 adapter family are not
supported on the Zabbix node.
Planning a Sahara Deployment
Sahara enables users to easily provision and manage Apache Hadoop clusters in an OpenStack environment.
Sahara works with either Release 1.x or 2.x of Hadoop.
The Sahara control processes run on the Controller node. The entire Hadoop cluster runs in VMs that run on
Compute Nodes. A typical set-up is:
• One VM that serves as the Hadoop master node to run JobTracker (ResourceManager for Hadoop Release
2.x) and NameNode.
• Many VMs that serve as Hadoop worker nodes, each of which runs TaskTracker (NodeManager for Hadoop
Release 2.x) and DataNodes.
You must have exactly one NameNode and one JobTracker running in the environment and you cannot run
Hadoop HA under Sahara. Other than that, you are free to use other configurations. For example, you can run the
TaskTracker/NodeManager and Datanodes in the same VM that runs JobTracker/ResourceManager and
NameNode; such a configuration may not produce performance levels that are acceptable for a production
environment but it works for evaluation and demonstration purposes. You could also run DataNodes and
TaskTrackers in separate VMs.
Sahara can use either Swift Object Storage or Ceph for object storage. Special steps are required to implement
data locality for Swift; see Data-locality for details. Data locality is not available for Ceph.
Plan the size and number of nodes for your environment based on the information in Nodes and Roles.
When deploying an OpenStack Environment that includes Sahara for running Hadoop you need to consider a few
special conditions.
Floating IPs
Fuel configures Sahara to use floating IPs to manage the VMs. This means that you must provide a Floating IP
pool in each Node Group Template you define. See Public and Floating IP address requirements for general
information about floating IPs.
A special case is if you are using Nova-Network and you have set the auto_assign_floating_ip parameter to true
by checking the appropriate box on the Fuel UI. In this case, a floating IP is automatically assigned to each VM
and the "floating ip pool" dropdown menu is hidden in the OpenStack Dashboard.
In either case, Sahara assigns a floating IP to each VM it spawns so be sure to allocate enough floating IPs.
Security Groups
Sahara does not configure OpenStack Security Groups so you must manually configure the default security group
in each tenant where Sahara will be used. See Ports Used by Sahara for a list of ports that need to be opened.
VM Flavor Requirements
Hadoop requires at least 1G of memory to run. That means you must use flavors that have at least 1G of memory
for Hadoop cluster nodes.
Communication between virtual machines
Be sure that communication between virtual machines is not blocked.
For additional information about using Sahara to run Apache Hadoop, see the Sahara documentation.
Preparing for vSphere Integration
Fuel 5.0 and later can deploy a Mirantis OpenStack environment that boots and manages virtual machines in
VMware vSphere. VMware provides a vCenter driver for OpenStack that enables the Nova-compute service to
communicate with a VMware vCenter server that manages one or more ESX host clusters. If your vCenter manages multiple ESX host clusters, Fuel 5.1 allows you to specify several or all clusters for a single OpenStack environment, so that one Nova-compute service manages multiple ESX host clusters via a single vCenter server.
Note
In 5.x environments that use vCenter as the hypervisor, the Nova-compute service runs only on Controller
nodes.
In future Fuel releases, the plan is to change the relation between a Nova-compute service and an ESX
host cluster from one-to-many to one-to-one. In other words, to manage multiple ESX host clusters, you
will need to run multiple Nova-compute services.
The vCenter driver makes management convenient from both the OpenStack Dashboard (Horizon) and from
vCenter, where advanced vSphere features can be accessed.
This section summarizes the planning you should do and other steps that are required before you attempt to
deploy Mirantis OpenStack with vCenter integration.
For more information:
• See VMware vSphere Integration for information about how vCenter support is implemented in Mirantis
OpenStack;
• vSphere deployment notes gives instructions for creating and deploying a Mirantis OpenStack environment
that is integrated with VMware vSphere.
• For background information about VMware vSphere support in OpenStack, see the VMware vSphere section of the OpenStack Manuals.
• The official vSphere installation guide can be found here: vSphere Installation and Setup.
vSphere Installation
Before installing Fuel and using it to create a Mirantis OpenStack environment that is integrated with VMware
vSphere, the vSphere installation must be up and running. Check that you have completed the following steps:
• Install vSphere
• Install vCenter
• Install ESXi
• Configure vCenter:
  • Create DataCenter
  • Create vCenter cluster
  • Add ESXi host(s)
ESXi Host Networks Configuration
The ESXi host(s) network must be configured appropriately in order to enable integration of Mirantis OpenStack
with vCenter. Follow the steps below:
1. Open the ESXi host page, select the "Manage" tab, and click "Networking". vSwitch0 and all its networks are shown. Click the Add Network button.
2. In the "Add Networking" wizard, select the Virtual Machine Port Group.
3. On the next page, select the "Virtual Machine Port Group" option to ensure that the network will be created in vSwitch0.
4. Always name the network br100; this is the only value that works with Fuel. Type a VLAN tag in the VLAN ID field; the value must be equal to the VLAN tag of the VM Fixed network on Fuel's Network settings tab.
Limitations
• Only vCenter versions 5.1 and later are supported
• It is not possible to specify the vCenter cluster where virtual instances will be launched.
• Only Nova Network with FlatDHCP mode is supported in the current version of the integration.
• Each OpenStack environment can support one vCenter cluster.
• Security groups are not supported.
• The only supported backend for Cinder is VMDK.
• Volumes that are created by Cinder appear as SCSI disks. To be able to read/write that disk, be sure that the
operating system inside the instance supports SCSI disks. The CirrOS image that is shipped with Fuel
supports only IDE disks, so even if the volume is attached to it, CirrOS is not able to use it.
• The Ceph backend for Glance, Cinder and RadosGW object storage is not supported.
• The VMware vCenter-managed datastore is not supported as a backend for Glance.
• Murano is not supported; it requires Neutron, and vCenter deployments use Nova-network.
• Fuel does not configure Ceilometer to collect metrics from vCenter virtual resources. For more details about the Ceilometer plugin for vCenter, see Support for VMware vCenter Server.
For background information about how vCenter support is integrated into Mirantis OpenStack, see VMware
vSphere Integration.
Follow the instructions in vSphere deployment notes to deploy your Mirantis OpenStack environment with vCenter
support.
Preparing to run Fuel on vSphere
For information about how Fuel runs on vSphere, see Fuel running under vSphere.
For instructions for setting up your vSphere and installing Fuel on vSphere, follow the instructions in Installing
Fuel Master Node on vSphere.
Preparing for VMware NSX integration
Fuel 5.1 and later can deploy a Mirantis OpenStack environment that manages virtual networks in VMware NSX. VMware provides an NSX plug-in for OpenStack that enables the Neutron service to communicate with NSX and provision virtual networks; NSX in turn manages the Open vSwitches on controller and compute nodes.
This section summarizes the planning you should do and other steps that are required before you attempt to
deploy Mirantis OpenStack with NSX integration.
For more information:
• See Neutron with VMware NSX for information about how NSX support is implemented in Mirantis
OpenStack;
• NSX deployment notes gives instructions for creating and deploying a Mirantis OpenStack environment that
is integrated with an NSX networking backend that utilizes the NSX Neutron plug-in.
• The official VMware NSX installation guide can be found here: NSX Installation and Upgrade Guide.
VMware NSX Installation
Before installing Fuel and using it to create a Mirantis OpenStack environment that is integrated with VMware
NSX, the VMware NSX installation must be up and running. Please check that you completed the following steps:
• Install NSX Controller node
• Install NSX Gateway node
• Install NSX Manager node
• Install NSX Service node
Note
According to VMware documentation, an NSX cluster can operate successfully without an NSX Service
node, but its presence is mandatory for deploying Mirantis OpenStack. Support of NSX clusters without a
Service node might appear in future versions of Fuel.
VMware NSX cluster configuration
• Configure NSX Controller
• Assign an IP address to the NSX controller. If the controller is going to be placed in any of the OpenStack logical networks (Public, Management, Storage), you must assign an IP address that does not overlap with the IP addresses managed by OpenStack. For example, if the Public network has the range 172.16.0.0/24 and addresses 172.16.0.1 - 172.16.0.126 are managed, any IP address in the range 172.16.0.127 - 172.16.0.254 can be used for the NSX controller (see the sketch at the end of these steps). If the controller IP belongs to a separate network, there must be L3 connectivity between the Public network and the network where the VMware NSX controller resides.
• Configure NSX Gateway node
• Configure NSX Service node
• Create NSX cluster in NSX Manager
• Create new cluster
• Create new Transport Zone. You need to write down the Transport Zone UUID; you will use this
value when configuring parameters on the Settings tab in the Fuel web UI.
• Add Gateway node to the NSX cluster
• When you add the Gateway node, you must select the Transport Type the Gateway node will be
using.
• You need to write down the Transport Type you chose. Later, you will provide this value on the
Settings tab in the Fuel web UI.
• Add the L3 Gateway Service to NSX cluster. You need to write down the Gateway Service UUID;
later you need to provide this value on the Settings tab in the Fuel web UI.
Attention!
You must specify the same transport type on the Settings tab in the Fuel web UI.
• Obtain and put NSX specific packages on the Fuel Master node
• Upload NSX package archives to the Fuel Master node which has IP address 10.20.0.2 in this
example:
$ scp nsx-ovs-2.0.0-build30176-rhel61_x86_64.tar.gz [email protected]:
$ scp nsx-ovs-2.0.0-build30176-ubuntu_precise_amd64.tar.gz [email protected]:
• Go to the Fuel Master node and put the NSX packages in the /var/www/nailgun/ directory:
[root@fuel ~]# mkdir /var/www/nailgun/nsx
[root@fuel ~]# cd /var/www/nailgun/nsx
[root@fuel nsx]# tar -xf ~/nsx-ovs-2.0.0-build30176-rhel61_x86_64.tar.gz
[root@fuel nsx]# tar -xf ~/nsx-ovs-2.0.0-build30176-ubuntu_precise_amd64.tar.gz
• Check that the files are listed by the web server. Open the URL http://10.20.0.2:8080/nsx/ in a web browser and confirm that the web server successfully lists the packages.
• Now you can provide the URL http://10.20.0.2:8080/nsx/ for the "URL for NSX bits" setting on the
Settings tab in the Fuel web UI.
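For the controller IP step above, here is a minimal Python sketch of the overlap check; the 172.16.0.0/24 Public network and the managed range are the example values from that step, not values Fuel produces:

    import ipaddress

    # Pick an NSX controller address on the Public network that does not
    # overlap with the range OpenStack manages.
    public_net = ipaddress.ip_network("172.16.0.0/24")
    managed_first = ipaddress.ip_address("172.16.0.1")
    managed_last = ipaddress.ip_address("172.16.0.126")

    free = [ip for ip in public_net.hosts() if not managed_first <= ip <= managed_last]
    print(free[0], "-", free[-1])  # 172.16.0.127 - 172.16.0.254 are safe choices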
See also
You can read blog posts NSX appliances installation and NSX cluster configuration for details about the
NSX cluster deployment process.
Limitations
• Only KVM or QEMU are supported as hypervisor options when using VMware NSX.
• Only VMware NSX 4.0 is supported
• Resetting or deleting the environment via the "Reset" and "Delete" buttons on the Actions tab does not flush the entities (logical switches, routers, load balancers, etc.) that were created in the NSX cluster. Eventually, the cluster may run out of resources; it is up to the cloud operator to remove unneeded entities from the VMware NSX cluster. The same applies each time the deployment fails or is interrupted: after solving the problem, clean up the stale entities and restart the deployment process.
To cleanup the NSX cluster, log into the NSX Manager, open the dashboard and click on numbered link in
"Hypervisor Software Version Summary":
Tick all registered nodes and press "Delete Checked" button:
Then click on "Logical Layer" in the "category" column, tick all remaining logical entities and remove them
by pressing the corresponding "Delete Checked" button:
Calculate hardware requirements
You can use the Fuel Hardware Calculator to calculate the hardware required for your OpenStack environment.
When choosing the hardware on which you will deploy your OpenStack environment, you should think about:
• CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the
CPU per virtual machine. Also consider how the environment will be used: environments used for heavy
computational work may require more powerful CPUs than environments used primarily for storage, for
example.
• Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.
• Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a
virtual machine, and object storage.
• Networking -- Depends on the network topology you choose (see Choose Network Topology), the network bandwidth per virtual machine, and network storage.
See Example of Hardware Requirements Calculation for some specific calculations you can make when choosing
your hardware.
Example of Hardware Requirements Calculation
When you calculate resources for your OpenStack environment, consider the resources required for expanding
your environment.
The example described in this section presumes that your environment has the following prerequisites:
• 100 virtual machines
• 2 x Amazon EC2 compute units (2 GHz average) per virtual machine
• 16 x Amazon EC2 compute units (16 GHz maximum) per virtual machine
Calculating CPU
Use the following formula to calculate the number of CPU cores per virtual machine:
max GHz /(number of GHz per core x 1.3 for hyper-threading)
Example:
16 GHz / (2.4 x 1.3) = 5.12
Therefore, you must assign at least 5 CPU cores per virtual machine.
Use the following formula to calculate the total number of CPU cores:
(number of VMs x number of GHz per VM) / number of GHz per core
Example:
(100 VMs * 2 GHz per VM) / 2.4 GHz per core = 84
Therefore, the total number of CPU cores for 100 virtual machines is 84.
Depending on the selected CPU you can calculate the required number of sockets. Use the following formula:
total number of CPU cores / number of cores per socket
For example, if you use an Intel Xeon E5-2650 8-core CPU:
84 / 8 = 11
Therefore, you need 11 sockets. To calculate the number of servers required for your deployment, use the
following formula:
total number of sockets / number of sockets per server
Round the number of sockets to an even number to get 12 sockets. Use the following formula:
12 / 2 = 6
Therefore, you need 6 dual socket servers. You can calculate the number of virtual machines per server using the
following formula:
number of virtual machines / number of servers
Example:
100 / 6 = 16.6
Therefore, you can deploy 17 virtual machines per server.
Using this calculation, you can add additional servers accounting for 17 virtual machines per server.
The calculation presumes the following conditions:
• No CPU oversubscribing
• If you use hyper-threading, count each core as 1.3, not 2.
• CPU supports the technologies required for your deployment
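The steps above can be collected into a short script; this is a minimal sketch that reproduces the worked example with the same assumed numbers (2.4 GHz cores, a 1.3 hyper-threading factor, and 8-core dual-socket servers):

    import math

    # Worked CPU sizing: 100 VMs, 2 GHz average and 16 GHz peak per VM.
    ghz_per_core, ht_factor = 2.4, 1.3

    cores_per_vm = 16 / (ghz_per_core * ht_factor)   # ~5.1 -> at least 5 cores per VM
    total_cores = math.ceil(100 * 2 / ghz_per_core)  # 84 cores in total
    sockets = math.ceil(total_cores / 8)             # 11 sockets of 8-core CPUs
    sockets += sockets % 2                           # round up to an even number -> 12
    servers = sockets // 2                           # 6 dual-socket servers
    vms_per_server = math.ceil(100 / servers)        # 17 VMs per server

    print(cores_per_vm, total_cores, sockets, servers, vms_per_server)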
Calculating Memory
Continuing to use the example from the previous section, we need to determine how much RAM will be required to support 17 VMs per server. Let's assume that you need an average of 4 GB of RAM per VM with dynamic allocation for up to 12 GB for each VM. Calculating that all VMs will be using 12 GB of RAM requires that each server have 204 GB of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate core OS operations as well as RAM for each VM container (not the RAM allocated to each VM, but the memory the core OS uses to run the VM). The node's OS must run its own operations, schedule processes, allocate dynamic resources, and handle network operations, so giving the node itself at least 16 GB or more RAM is not unreasonable.
Considering that the RAM we would consider for servers comes in 4 GB, 8 GB, 16 GB and 32 GB sticks, we would
need a total of 256 GBs of RAM installed per server. For an average 2-CPU socket server board you get 16-24
RAM slots. To have 256 GBs installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM needs for
up to 17 VMs requiring dynamic allocation up to 12 GBs and to support all core OS requirements.
You can adjust this calculation based on your needs.
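A minimal sketch of the same memory arithmetic, assuming the 17 VM per server packing and the 12 GB dynamic ceiling used above:

    # Memory per Compute node: 17 VMs at up to 12 GB each, plus host OS headroom.
    vms_per_server, max_gb_per_vm, host_os_gb = 17, 12, 16

    vm_ram_gb = vms_per_server * max_gb_per_vm   # 204 GB for the VMs
    required_gb = vm_ram_gb + host_os_gb         # 220 GB minimum per server

    # The section rounds this up to 256 GB (sixteen 16 GB DIMMs) to keep the
    # DIMM layout balanced and leave headroom for growth.
    print(vm_ram_gb, required_gb)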
Calculating Storage
When planning disk space, you need to consider several types of data:
• Ephemeral (the local drive space for a VM)
• Persistent object storage
• Persistent block storage
©2014, Mirantis Inc.
Page 28
Mirantis OpenStack v5.1
Planning Guide
Calculating Memory
See Choosing the Storage Model for more information about the options for storage and how to choose the
appropriate model.
As far as local drive space that must reside on the compute nodes, in our example of 100 VMs we make the
following assumptions:
• 150 GB local storage per VM
• 15 TB total of local storage (100 VMs x 150 GB per VM)
• 500 GB of persistent volume storage per VM
• 50 TB total persistent storage
Returning to our already established example, we need to figure out how much storage to install per server. This storage will service the 17 VMs per server. If we are assuming 150 GB of storage for each VM's drive container, then we would need to install roughly 2.5 TB of storage on the server. Since most servers have anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on server form factor (i.e., 2U vs. 4U), you will need to consider how the storage will be impacted by the intended use.
If storage impact is not expected to be significant, then you may consider using unified storage. For this example
a single 3 TB drive would provide more than enough storage for seventeen 150 GB VMs. If speed is really not an
issue, you might even consider installing two or three 3 TB drives and configure a RAID-1 or RAID-5 for
redundancy. If speed is critical, however, you will likely want to have a single hardware drive for each VM. In this
case you would likely look at a 3U form factor with 24-slots.
Don't forget that you will also need drive space for the node itself, and don't forget to order the correct
backplane that supports the drive configuration that meets your needs. Using our example specifications and
assuming that speed is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM 146 GB SAS
drives.
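A minimal sketch of the local (ephemeral) storage arithmetic, assuming 150 GB per VM as in this example:

    # Local storage for the worked example: 100 VMs total, 17 per server.
    vms_total, vms_per_server, gb_per_vm = 100, 17, 150

    total_local_tb = vms_total * gb_per_vm / 1000      # 15 TB across the environment
    per_server_tb = vms_per_server * gb_per_vm / 1000  # ~2.55 TB per Compute node

    print(total_local_tb, per_server_tb)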
Throughput
As far as throughput, that's going to depend on what kind of storage you choose. In general, you calculate IOPS
based on the packing density (drive IOPS * drives in the server / VMs per server), but the actual drive IOPS will
depend on the drive technology you choose. For example:
• 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
• 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
• 2.5" 15K (200 IOPS, four 600 GB drive, RAID-10)
• 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
• SSD (40K IOPS, eight 300 GB drive, RAID-10)
• 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
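The packing-density calculation behind these bullets can be expressed as a small helper; this sketch assumes the 17 VM per server layout and the section's convention that a write costs roughly twice a read on these mirrored and RAID-10 layouts:

    import math

    # IOPS per VM = drive IOPS * drives in the server / VMs per server.
    def iops_per_vm(drive_iops, drives, vms_per_server=17):
        read = drive_iops * drives / vms_per_server
        return math.ceil(read), math.ceil(read / 2)

    print(iops_per_vm(100, 2))    # mirrored 3.5" drives      -> ~12 read / 6 write
    print(iops_per_vm(200, 4))    # four 15K drives, RAID-10  -> ~48 read / 24 write
    print(iops_per_vm(40000, 8))  # eight SSDs, RAID-10       -> ~19K read / ~9.4K write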
Clearly, SSD gives you the best performance, but the difference in cost between SSDs and the less costly
platter-based solutions is going to be significant, to say the least. The acceptable cost burden is determined by
the balance between your budget and your performance and redundancy needs. It is also important to note that
the rules for redundancy in a cloud environment are different than a traditional server installation in that entire
servers provide redundancy as opposed to making a single server instance redundant.
In other words, the weight for redundant components shifts from individual OS installation to server redundancy.
It is far more critical to have redundant power supplies and hot-swappable CPUs and RAM than to have
redundant compute node storage. If, for example, you have 18 drives installed on a server and have 17 drives
directly allocated to each VM installed and one fails, you simply replace the drive and push a new node copy. The
remaining VMs carry whatever additional load is present due to the temporary loss of one node.
Remote storage
IOPS will also be a factor in determining how you plan to handle persistent storage. For example, consider these
options for laying out your 50 TB of remote volume space:
• 12 drive storage frame using 3 TB 3.5" drives mirrored
• 36 TB raw, or 18 TB usable space per 2U frame
• 3 frames (50 TB / 18 TB usable per frame)
• 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
• 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
• 24 drive storage frame using 1 TB 7200 RPM 2.5" drives
• 24 TB raw, or 12 TB usable space per 2U frame
• 5 frames (50 TB / 12 TB usable per frame)
• 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
• 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB drives, but this becomes a single
point of failure in your environment.
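The two frame layouts above can be compared with a short script. This is a sketch only; the usable capacity per frame, the 100 IOPS per drive, and the 2x write penalty are the assumptions this example already uses.
import math

# Sketch of the remote-volume sizing above. Frame sizes, per-drive IOPS,
# and the 2x write penalty are the example's assumptions.
def frames_needed(total_tb, usable_tb_per_frame):
    return math.ceil(total_tb / usable_tb_per_frame)

def per_vm_iops(frames, slots_per_frame, drive_iops, vms, write_penalty=2):
    read = frames * slots_per_frame * drive_iops / vms
    return read, read / write_penalty

# Option 1: 12-slot frames, mirrored 3 TB drives -> 18 TB usable per frame
frames = frames_needed(50, 18)                              # 3 frames
print(frames, per_vm_iops(frames, 12, 100, vms=100))        # ~36 read / 18 write IOPS per VM

# Option 2: 24-slot frames, mirrored 1 TB drives -> 12 TB usable per frame
frames = frames_needed(50, 12)                              # 5 frames
print(frames, per_vm_iops(frames, 24, 100, vms=100))        # ~120 read / 60 write IOPS per VM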
Object storage
When it comes to object storage, you will find that you need more space than you think. This example specifies
50 TB of object storage.
Object storage uses a default of 3 times the required space for replication, which means you will need 150 TB.
However, to accommodate two hand-off zones, you will need 5 times the required space, which actually means
250 TB. The calculations don't end there. You don't ever want to run out of space, so "full" should really be more
like 75% of capacity, which means you will need a total of 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start with a happy medium of a multiplier of
4, then acquire more hardware as your drives begin to fill up. That calculates to 200 TB in our example. So how
do you put that together? If you were to use 3 TB 3.5" drives, you could use a 12 drive storage frame, with 6
servers hosting 36 TB each (for a total of 216 TB). You could also use a 36 drive storage frame, with just 2 servers
hosting 108 TB each, but this is not recommended because a single server failure would have a severe impact on
replication and capacity.
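The capacity multipliers discussed here reduce to a one-line formula, sketched below in Python. The replica count of 3, the hand-off allowance (5x instead of 3x), and the 75% fill target are the assumptions stated in this example.
# Sketch of the object storage sizing arithmetic above. Replica count,
# hand-off allowance, and fill target are this example's assumptions.
def object_storage_raw_tb(required_tb, replicas=3, handoff_factor=5 / 3,
                          max_fill=0.75):
    """Raw capacity needed to provide the required usable object storage."""
    return required_tb * replicas * handoff_factor / max_fill

print(round(object_storage_raw_tb(50)))   # ~333 TB, a multiplication factor of ~6.66
print(50 * 4)                             # 200 TB with the "start at a 4x multiplier" compromise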
Calculating Network
Perhaps the most complex part of designing an OpenStack environment is the networking.
An OpenStack environment can involve multiple networks even beyond the Public, Private, and Internal
networks. Your environment may involve tenant networks, storage networks, multiple tenant private networks,
and so on. Many of these will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.
In terms of the example network, consider these assumptions:
• 100 Mbits/second per VM
• HA architecture
• Network Storage is not latency sensitive
In order to achieve this, you can use two 1 Gb links per server (2 x 1000 Mbits/second / 17 VMs = 118
Mbits/second).
Using two links also helps with HA. You can also increase throughput and decrease latency by using two 10 Gb
links, bringing the bandwidth per VM to 1 Gb/second, but if you're going to do that, you've got one more factor to
consider.
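The per-VM bandwidth figures above come from a simple division, shown here as a quick Python check. The 17 VMs per server and the bonded-link counts are this example's assumptions.
# Per-VM bandwidth from bonded NICs. Link speeds are in Mbit/s;
# 17 VMs per server is the example's assumption.
def mbps_per_vm(links, link_mbps, vms_per_server):
    return links * link_mbps / vms_per_server

print(round(mbps_per_vm(2, 1000, 17)))    # ~118 Mbit/s per VM with 2 x 1 Gb links
print(round(mbps_per_vm(2, 10000, 17)))   # ~1176 Mbit/s per VM with 2 x 10 Gb links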
Scalability and oversubscription
It is one of the ironies of networking that 1 Gb Ethernet generally scales better than 10 Gb Ethernet -- at least
until 100 Gb switches are more commonly available. It's possible to aggregate the 1 Gb links in a 48 port switch,
so that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a 10 Gb switch, however,
and you have 48 x 10 Gb links down and 4 x 40 Gb links up, resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning.
Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both
storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
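An oversubscription ratio is just downlink bandwidth divided by uplink bandwidth, as the following sketch shows. The port counts and the 4 x 40 Gb uplinks on the 10 Gb switch are assumptions that mirror the example; substitute your own switch's numbers.
# Oversubscription ratio of a top-of-rack switch: total downlink bandwidth
# divided by total uplink bandwidth (values in Gbit/s; assumptions only).
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

print(oversubscription(48, 1, 4, 10))    # 1.2 -- close to non-blocking
print(oversubscription(48, 10, 4, 40))   # 3.0 -- oversubscribed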
Hardware for this example
In this example, you are looking at:
• 2 data switches (for HA), each with a minimum of 12 ports for data (2 x 1 Gb links per server x 6 servers)
• 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
• Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port switches. Also, as your network
grows, you will need to consider uplinks and aggregation switches.
Mellanox ConnectX-3 network adapters
Beginning with version 5.1, Fuel can configure Mellanox ConnectX-3 network adapters to accelerate the
performance of compute and storage traffic.
This implements the following performance enhancements:
• Compute nodes use SR-IOV based networking.
• Cinder nodes use iSER (iSCSI Extensions for RDMA) as the iSCSI transport rather than the default iSCSI over TCP.
These features reduce CPU overhead, boost throughput, reduce latency, and enable network traffic to bypass the
software switch layer (e.g. Open vSwitch).
The Mellanox ConnectX-3 adapters family supports up to 40/56GbE. To reach 56 GbE speed in your network with
ConnectX-3 adapters, you must use Mellanox Ethernet switches (e.g. SX1036) with the additional 56GbE license.
The switch ports should be configured specifically to use 56GbE speed. No additional configuration is required on
the adapter side. For additional information about how to run at 56GbE speed, see HowTo Configure 56GbE Link
on Mellanox Adapters and Switches.
Summary
In general, your best bet is to choose a 2 socket server with a balance of I/O, CPU, memory, and disk that meets
your project requirements. Look for 1U R-class or 2U high-density C-class servers. Some good options from Dell
for compute nodes include:
• Dell PowerEdge R620
• Dell PowerEdge C6220 Rack Server
• Dell PowerEdge R720XD (for high disk or IOPS requirements)
You may also want to consider systems from HP (http://www.hp.com/servers) or from a smaller systems builder
like Aberdeen, a manufacturer that specializes in powerful, low-cost systems and storage servers
(http://www.aberdeeninc.com).
Reference configuration of hardware switches
This section describes reference configurations for Cisco, Arista, Mellanox, and Juniper network switches.
Tagged ports
Cisco Catalyst
interface [Ten]GigabitEthernet[interface number]
description [port description]
switchport trunk encapsulation dot1q
switchport trunk allowed vlan [vlan IDs for specific networks]
switchport mode trunk
spanning-tree portfast trunk
switchport trunk native vlan [vlan ID]  ! optional: one untagged VLAN if necessary
Cisco Nexus/ Arista
interface ethernet[interface number]
description [port description]
switchport
switchport mode trunk
switchport trunk allowed vlan [vlan IDs for specific networks]
switchport trunk native vlan [vlan ID]  ! optional: one untagged VLAN if necessary
Mellanox
interface ethernet [interface number]
description [port description]
switchport mode hybrid
switchport hybrid allowed-vlan [add|remove|except] [vlan ID|vlan IDs range]
switchport hybrid allowed-vlan [vlan ID|vlan IDs range]
switchport access vlan [vlan ID]
Juniper
interfaces {
[interface_name]-[interface_number] {
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members [ vlan IDs or names of specific
networks];
}
native-vlan-id [vlan ID]; /* optional: one untagged VLAN if necessary */
}
}
}
}
Untagged ports
Cisco Catalyst
interface [Ten]GigabitEthernet[interface number]
description [port description]
switchport access [vlan ID for specific network]
switchport mode access
spanning-tree portfast
Cisco Nexus/Arista
interface ethernet[interface number]
description [port description]
switchport
switchport access vlan [vlan ID for specific network]
Mellanox
interface ethernet [interface number]
description [port description]
switchport mode access
switchport access vlan [vlan ID]
Juniper
interfaces {
[interface_name]-[interface_number] {
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members [vlan ID or name for specific network];
}
}
}
}
}
Example 1: HA + Nova-network FlatDHCP manager
As a model example, the following configuration is used:
• Deployment mode: Multi-node HA
• Networking model: Nova-network FlatDHCP manager
Hardware and environment:
• 7 servers with two 1 Gb/s Ethernet NICs and IPMI
• 1 Cisco Catalyst 2960G switch
• Independent out-of-band management network for IPMI
• Connection to the Internet and/or DC network via a router (Gateway) with IP 172.16.1.1
Node Server roles:
• 1 server as Fuel Node
• 3 servers as Controller Node
• 1 server as Cinder Node
• 2 servers as Compute Node
Network configuration plan:
• Public network 172.16.1.0/24
• Floating network 172.16.0.0/24 in VLAN 100
• Management network 192.168.0.0/24 in VLAN 101
• Storage network 192.168.1.0/24 in VLAN 102
• Private (Fixed) network 10.0.0.0/24 in VLAN 103
• Administrative network (for Fuel) 10.20.0.0/24 in VLAN 104
Network Parameters:
• Fuel server IP: 10.20.0.2/24
• Default gateway: 10.20.0.1
• DNS 10.20.0.1
Note
The Internet and the rest of the DC are reachable through the Public network (for OpenStack nodes) and the
Administrative network (for the Fuel server).
From the server node side, ports with the following VLAN IDs are used:
• eth0 - Management VLAN 101 (tagged), Storage VLAN 102(tagged) and Administrative VLAN 104 (untagged)
• eth1 - Public/Floating VLAN 100 (tagged), Private VLAN 103 (tagged)
Detailed Port Configuration
The following table describes the detailed port configuration and VLAN assignment.
Switch Port   Server name                Server NIC   tagged / untagged   VLAN ID
G0/1          Fuel                       eth0         untagged            104
G0/2          Fuel                       eth1         untagged            100
G0/3          Compute Node 1             eth0         tagged              101, 102, 104 (untagged)
G0/4          Compute Node 1             eth1         tagged              100, 103
G0/5          Compute Node n             eth0         tagged              101, 102, 104 (untagged)
G0/6          Compute Node n             eth1         tagged              100, 103
G0/7          Controller Node 1          eth0         tagged              101, 102, 104 (untagged)
G0/8          Controller Node 1          eth1         tagged              100, 103
G0/9          Controller Node 2          eth0         tagged              101, 102, 104 (untagged)
G0/10         Controller Node 2          eth1         tagged              100, 103
G0/11         Controller Node 3          eth0         tagged              101, 102, 104 (untagged)
G0/12         Controller Node 3          eth1         tagged              100, 103
G0/13         Cinder Node                eth0         tagged              101, 102, 104 (untagged)
G0/14         Cinder Node                eth1         tagged              100, 103
G0/24         Router (default gateway)   ---          untagged            100
Connect the servers to the switch as in the diagram below:
The following diagram describes the network topology for this environment.
Nova-network Switch configuration (Cisco Catalyst 2960G)
Use the following configuration to deploy Mirantis OpenStack using a Cisco Catalyst 2960G network switch:
service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
service sequence-numbers
!
hostname OpenStack_sw1
!
logging count
logging buffered 64000 informational
logging rate-limit console 100 except errors
logging console informational
enable secret r00tme
!
username root privilege 15 secret r00tme
!
no aaa new-model
aaa session-id common
ip subnet-zero
ip domain-name domain.ltd
ip name-server [ip of domain name server]
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree etherchannel guard misconfig
spanning-tree extend system-id
!
ip ssh time-out 60
ip ssh authentication-retries 2
ip ssh version 2
!
vlan 100
name Public
vlan 101
name Management
vlan 102
name Storage
vlan 103
name Private
vlan 104
name Admin
!
interface GigabitEthernet0/1
description Fuel Node eth0
switchport access vlan 104
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/2
description Fuel Node eth1 (optional to have direct access to Public net)
switchport access vlan 100
switchport mode access
spanning-tree portfast
interface GigabitEthernet0/3
description Compute Node 1 eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/4
description Compute Node 1 eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/5
description Compute Node 2 eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/6
description Compute Node 2 eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/7
description Controller Node 1 eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/8
description Controller Node 1 eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/9
description Controller Node 2 eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/10
description Controller Node 2 eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/11
description Controller Node 3 eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/12
description Controller Node 3 eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/13
description Cinder Node eth0
switchport trunk native vlan 104
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101, 102, 104
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/14
description Cinder Node eth1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 103
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/24
description Connection to default gateway
switchport access vlan 100
switchport mode access
!
interface Vlan100
ip address 172.16.1.254 255.255.255.0
ip address 172.16.0.254 255.255.255.0 secondary
no shutdown
!
ip route 0.0.0.0 0.0.0.0 172.16.1.1
!
ip classless
no ip http server
no ip http secure-server
!
line con 0
session-timeout 15
privilege level 15
login local
password r00tme
!
line vty 0 15
session-timeout 15
login local
password r00tme
!
ntp server [ntp_server1] prefer
ntp server [ntp_server2]
Nova-network Switch configuration (Juniper EX4200)
Use the following configuration to deploy Mirantis OpenStack using a Juniper EX4200 network switch:
system {
    host-name OpenStack_sw1;
    domain-name domain.ltd;
    authentication-order [ password ];
    root-authentication {
        encrypted-password "xxxxxxxxxxxxxxxxxxx";
    }
    services {
        ssh;
    }
    ntp {
        server [ntp_server1] prefer;
        server [ntp_server2];
    }
}
interfaces {
ge-0/0/0 {
description Fuel Node eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_104;
}
}
}
}
ge-0/0/1 {
description Fuel Node eth1 (optional to have direct access to Public net);
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_100;
}
}
}
}
ge-0/0/2 {
description Compute Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/3 {
description Compute Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/4 {
description Compute Node 2 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/5 {
description Compute Node 2 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/6 {
description Controller Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/7 {
description Controller Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/8 {
description Controller Node 2 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/9 {
description Controller Node 2 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/10 {
description Controller Node 3 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/11 {
description Controller Node 3 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/12 {
description Cinder Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_104;
}
}
}
ge-0/0/13 {
description Cinder Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_100, vlan_103;
}
}
}
}
ge-0/0/23 {
description Connection to default gateway;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_100;
}
}
}
}
vlan {
unit 100 {
family inet {
address 172.16.1.254/24;
address 172.16.0.254/24;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 172.16.1.1;
}
}
protocols {
dcbx {
interface all;
}
rstp {
bridge-priority 32k;
interface ge-0/0/0.0 {
edge;
}
interface ge-0/0/1.0 {
edge;
}
interface ge-0/0/23.0 {
edge;
}
bpdu-block-on-edge;
}
lldp {
interface all;
}
}
vlans {
vlan_1;
vlan_100 {
description Public;
vlan-id 100;
l3-interface vlan.100;
}
vlan_101 {
description Management;
vlan-id 101;
}
vlan_102 {
description Storage;
vlan-id 102;
}
vlan_103 {
description Private;
vlan-id 103;
}
vlan_104 {
description Admin;
vlan-id 104;
}
}
Example 2: HA + Neutron with GRE
As a model example, the following configuration is used:
• Deployment mode: Multi-node HA
• Networking model: Neutron with GRE
Hardware and environment:
• 7 servers with two 1 Gb/s Ethernet NICs and IPMI
• 1 Cisco Catalyst 3750 switch
• Independent out-of-band management network for IPMI
• Connection to the Internet and/or DC network via a router (Gateway) with IP 172.16.1.1
Node server roles:
• 1 server as Fuel Node
• 3 servers as Controller Node
• 1 server as Cinder Node
• 2 servers as Compute Node
Network Configuration Plan:
• Floating/Public network 172.16.0.0/24 in VLAN 100 (untagged on servers)
• Floating IP range 172.16.0.130 - 254
• Internal network (private) 192.168.111.0/24
• Gateway 192.168.111.1
• DNS 8.8.4.4, 8.8.8.8
• Tunnel ID range 2 - 65535
• Management network 192.168.0.0/24 in VLAN 101
• Storage network 192.168.1.0/24 in VLAN 102
• Administrative network (for Fuel) 10.20.0.0/24 in VLAN 103
Network Parameters
• Fuel server: IP 10.20.0.2/24
• Default gateway: 10.20.0.1
• DNS: 10.20.0.1
Note
The Internet and the rest of the DC are reachable through the Public network (for OpenStack nodes) and the
Administrative network (for the Fuel server).
From the server side, ports with the following VLAN IDs are used:
• eth0 - Administrative VLAN 103 (untagged)
• eth1 - Public/Floating VLAN 100 (untagged), Management VLAN 101 (tagged), Storage VLAN 102 (tagged)
Detailed port configuration
The following table describes port configuration for this deployment.
Switch Port   Server name                Server NIC   tagged / untagged   VLAN ID
G0/1          Fuel                       eth0         untagged            103
G0/2          Fuel                       eth1         untagged            100
G0/3          Compute Node 1             eth0         untagged            103
G0/4          Compute Node 1             eth1         tagged              100 (untagged), 101, 102
G0/5          Compute Node n             eth0         untagged            103
G0/6          Compute Node n             eth1         tagged              100 (untagged), 101, 102
G0/7          Controller Node 1          eth0         untagged            103
G0/8          Controller Node 1          eth1         tagged              100 (untagged), 101, 102
G0/9          Controller Node 2          eth0         untagged            103
G0/10         Controller Node 2          eth1         tagged              100 (untagged), 101, 102
G0/11         Controller Node 3          eth0         untagged            103
G0/12         Controller Node 3          eth1         tagged              100 (untagged), 101, 102
G0/13         Cinder Node                eth0         untagged            103
G0/14         Cinder Node                eth1         tagged              100 (untagged), 101, 102
G0/24         Router (default gateway)   ---          untagged            100
Neutron Switch configuration (Cisco Catalyst 2960G)
Use the following configuration to deploy Mirantis OpenStack using a Cisco Catalyst 2960G network switch:
service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
service sequence-numbers
!
hostname OpenStack_sw1
!
logging count
logging buffered 64000 informational
logging rate-limit console 100 except errors
logging console informational
enable secret r00tme
!
username root privilege 15 secret r00tme
!
no aaa new-model
aaa session-id common
ip subnet-zero
ip domain-name domain.ltd
ip name-server [ip of domain name server]
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree etherchannel guard misconfig
spanning-tree extend system-id
!
ip ssh time-out 60
ip ssh authentication-retries 2
ip ssh version 2
!
vlan 100
name Public
vlan 101
name Management
vlan 102
name Storage
vlan 103
name Admin
!
interface GigabitEthernet0/1
description Fuel Node eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/2
description Fuel Node eth1 (optional to have direct access to Public net)
switchport access vlan 100
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/3
description Compute Node 1 eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/4
description Compute Node 1 eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/5
description Compute Node 2 eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/6
description Compute Node 2 eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/7
description Controller Node 1 eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/8
description Controller Node 1 eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/9
description Controller Node 2 eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/10
description Controller Node 2 eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/11
description Controller Node 3 eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/12
description Controller Node 3 eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/13
description Cinder Node eth0
switchport access vlan 103
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/14
description Cinder Node eth1
switchport trunk native vlan 100
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100, 101, 102
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/24
description Connection to default gateway
switchport access vlan 100
switchport mode access
!
interface Vlan100
ip address 172.16.1.254 255.255.255.0
ip address 172.16.0.254 255.255.255.0 secondary
no shutdown
!
ip route 0.0.0.0 0.0.0.0 172.16.1.1
!
ip classless
no ip http server
no ip http secure-server
!
line con 0
session-timeout 15
privilege level 15
login local
password r00tme
!
line vty 0 15
session-timeout 15
login local
password r00tme
!
ntp server [ntp_server1] prefer
ntp server [ntp_server2]
Neutron Switch configuration (Juniper EX4200)
Use the following configuration to deploy Mirantis OpenStack using a Juniper EX4200 network switch:
system {
    host-name OpenStack_sw1;
    domain-name domain.ltd;
    authentication-order [ password ];
    root-authentication {
        encrypted-password "xxxxxxxxxxxxxxxxxxx";
    }
    services {
        ssh;
    }
    ntp {
        server [ntp_server1] prefer;
        server [ntp_server2];
    }
}
interfaces {
ge-0/0/0 {
description Fuel Node eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/1 {
description Fuel Node eth1 (optional to have direct access to Public net);
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_100;
}
}
}
}
ge-0/0/2 {
description Compute Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/3 {
description Compute Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/4 {
description Compute Node 2 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/5 {
description Compute Node 2 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/6 {
description Controller Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/7 {
description Controller Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/8 {
description Controller Node 2 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/9 {
description Controller Node 2 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/10 {
description Controller Node 3 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/11 {
description Controller Node 3 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/12 {
description Cinder Node 1 eth0;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_103;
}
}
}
}
ge-0/0/13 {
description Cinder Node 1 eth1;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members vlan_101, vlan_102;
}
native-vlan-id vlan_100;
}
}
}
ge-0/0/23 {
description Connection to default gateway;
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members vlan_100;
}
}
}
}
vlan {
unit 100 {
family inet {
address 172.16.1.254/24;
address 172.16.0.254/24;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 172.16.1.1;
}
}
protocols {
dcbx {
interface all;
}
rstp {
bridge-priority 32k;
interface ge-0/0/0.0 {
edge;
}
interface ge-0/0/1.0 {
edge;
}
interface ge-0/0/2.0 {
edge;
}
interface ge-0/0/4.0 {
edge;
}
interface ge-0/0/6.0 {
edge;
}
interface ge-0/0/8.0 {
edge;
}
interface ge-0/0/10.0 {
edge;
}
interface ge-0/0/12.0 {
edge;
}
interface ge-0/0/23.0 {
edge;
}
bpdu-block-on-edge;
}
lldp {
interface all;
}
}
vlans {
vlan_1;
vlan_100 {
description Public;
vlan-id 100;
l3-interface vlan.100;
}
vlan_101 {
description Management;
vlan-id 101;
}
vlan_102 {
description Storage;
vlan-id 102;
}
vlan_103 {
description Admin;
vlan-id 103;
}
}
Example 3: HA + Neutron with VLAN + SR-IOV & iSER
As a model example, the following configuration is used:
• Deployment mode: Multi-node HA
• Networking model: Neutron with VLAN
Hardware and environment:
• 16 servers (1U) with two 1 Gb/s Ethernet NICs on board and IPMI
• 16 Mellanox ConnectX-3 Pro 40/56GbE adapter cards
• 16 40/56GbE cables (Mellanox OPN MC2207130)
• 1 Mellanox SX1036 Ethernet switch (36 ports of 40/56GbE)
• 1 (or 2) 1G Ethernet switch
• 1 storage server (2U) with two 1 Gb/s Ethernet NICs on board and IPMI
• 2 external JBODs + SATA cables; optional disks + Adaptec RAID controller
Note
The Fuel node does not have to be equipped with a Mellanox ConnectX-3 Pro adapter card, as it is not
connected to the high-speed network.
Node server roles:
• 1 server as Fuel Node
• 3 servers as Controller Nodes
• 1 server as Storage Node (Cinder)
• 12 servers as Compute Nodes
Network Configuration Plan:
The switches and servers in this example are designed to support five networks, as follows:
Cloud Networks
• Admin (PXE) network - 10.142.10.0/24 (no gateway, untagged via the 1GbE switch)
• Management network - 192.168.0.0/24 (no gateway, VLAN 3 via Mellanox SX1036 40/56GbE switch)
• Storage network - 192.168.1.0/24 (no gateway, VLAN 4 via Mellanox SX1036 40/56GbE switch)
• Public network - 10.7.208.0/24 (gateway 10.7.208.1, untagged via the 1GbE switch)
• Private network - <any> (no gateway, use range of VLANs e.g. 5-15 via Mellanox SX1036 40/56GbE switch)
Note
Internet access is achieved via the Public network.
Note
All nodes except the Fuel node should be connected to all networks.
Note
The 1GbE switch configuration for the Public and Admin (PXE) networks is similar to examples 1 and 2 above.
Network Configuration
• Floating IP range 10.7.208.101-200
• DNS 8.8.4.4, 8.8.8.8
• Fuel server: Admin (PXE) IP 10.142.10.10/24
From the server side (all nodes), ports with the following VLAN IDs are used:
• eth0 - Admin (PXE) - untagged
• eth1 - Public - untagged
• eth2 (40/56GbE port) - Management VLAN 3, Storage VLAN 4 (+ VLANs 5-15 for the private networks)
Here is an example of the network diagram:
Rack Design
Here is a recommended rack design configuration. Design the rack so that the 1G switches are on top, followed
by the servers, then the 40/56GbE switch, and then the storage server and JBODs.
Neutron Switch configuration (Mellanox SX1036)
Use the following configuration to deploy Mirantis OpenStack using a Mellanox SX1036 40/56GbE 36-port switch.
The switch configuration is required prior to the Fuel installation. Before the installation, the network connectivity
between all hosts should be ready.
Here is an example of the Mellanox switch VLAN and flow control configuration:
switch > enable
switch # configure terminal
switch (config) # vlan 1-20
switch (config vlan 1-20) # exit
# Note that VLAN 1 is an untagged VLAN by default
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/3 switchport mode hybrid
switch (config) # interface ethernet 1/3 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/4 switchport mode hybrid
switch (config) # interface ethernet 1/4 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/5 switchport mode hybrid
switch (config) # interface ethernet 1/5 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/6 switchport mode hybrid
switch (config) # interface ethernet 1/6 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/7 switchport mode hybrid
switch (config) # interface ethernet 1/7 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/8 switchport mode hybrid
switch (config) # interface ethernet 1/8 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/9 switchport mode hybrid
switch (config) # interface ethernet 1/9 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/10 switchport mode hybrid
switch (config) # interface ethernet 1/10 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/11 switchport mode hybrid
switch (config) # interface ethernet 1/11 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/12 switchport mode hybrid
switch (config) # interface ethernet 1/12 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/13 switchport mode hybrid
switch (config) # interface ethernet 1/13 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/14 switchport mode hybrid
switch (config) # interface ethernet 1/14 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/15 switchport mode hybrid
switch (config) # interface ethernet 1/15 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/16 switchport mode hybrid
switch (config) # interface ethernet 1/16 switchport hybrid allowed-vlan 2-20
switch (config) # interface ethernet 1/1-1/16 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/16 flowcontrol send on force
switch (config) # configuration write
Index
P
Preparing for the Mirantis OpenStack Deployment
S
System Requirements