The Road to Rolling Upgrade of Intel Private Cloud
Huang, Shuquan
May 2015
Agenda
• Intel Private Cloud
• Enterprise Upgrade Strategy
• Rehearsal Plan
• Rollback Plan
• Validation Plan
• Automation & Local CI
• Detailed Steps of Rolling Upgrade
• Result
OpenStack - Intel IT Convergence Platform
[Diagram: Enterprise Hosting, Lab Hosting, and New Business Hosting all sit on OpenStack, which spans both the existing infrastructure (proprietary hypervisor & storage) and the new infrastructure (KVM, open-source storage).]
OpenStack provides a convergence opportunity for IT Hosting.
Enterprise Upgrade Strategy
Do we need to upgrade our existing cluster?
• Track the related bug & blueprint list.
How do we perform an upgrade?
• Version by version, rather than skipping versions.
• Rolling upgrade: upgrading each component of the cluster, piece by piece, eventually gives us a cloud running on the new version.
• Minimal downtime for each component.
What is the right timing for the upgrade?
• Based on monitoring data, find a proper window.
• Follow a long-term upgrade strategy.
Rehearsal Plan
VM environment
• Come up with the local repo and config file changes.
• Can't use VLAN mode in a VM environment.
Beta environment
• Validate the local repo and config file changes.
• Refine the upgrade procedure on a similar topology & architecture.
• There are still some mismatches between beta and production, such as DB, storage, etc.
Production environment
Rollback Plan
• Ability to roll back to a pre-upgrade state if things fail.
• Backup: DB, config files, repo.
• Restore.
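The backup half of this plan can be sketched as a script that emits the dump and archive commands. The database names, paths, and mysqldump flags below are assumptions for illustration, not our production tooling:

```python
import shlex

# Hypothetical inputs: the databases and config directories to back up.
DATABASES = ["keystone", "glance", "nova", "neutron", "cinder"]
CONFIG_DIRS = ["/etc/keystone", "/etc/glance", "/etc/nova", "/etc/neutron", "/etc/cinder"]

def backup_commands(backup_dir):
    """Build the shell commands that dump each DB and archive each config dir."""
    cmds = ["mkdir -p %s" % shlex.quote(backup_dir)]
    for db in DATABASES:
        cmds.append("mysqldump --single-transaction %s > %s/%s.sql" % (db, backup_dir, db))
    for d in CONFIG_DIRS:
        name = d.strip("/").replace("/", "_")  # /etc/nova -> etc_nova
        cmds.append("tar czf %s/%s.tar.gz %s" % (backup_dir, name, d))
    return cmds

for cmd in backup_commands("/var/backups/pre-upgrade"):
    print(cmd)
```

Restoring is the mirror image: replay each `.sql` dump into MySQL and unpack each archive over `/etc` before restarting services.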
Validation Plan
Why not Congress?
• It doesn't gather all tables (Keystone, for example).
• Data is stored in memory.
• Check results are printed as logs, which are not friendly to read.
Instead: a script collects the status of the cluster from the APIs and diffs the result after performing the upgrade.
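The core of that before/after diff can be sketched as follows; the resource kinds and identifiers are illustrative, and the real script would fill the snapshots from the keystone/nova/neutron APIs:

```python
def snapshot_diff(before, after):
    """Compare two snapshots of cluster state, each a dict mapping a
    resource kind (e.g. 'services', 'networks') to a set of identifiers.
    Returns whatever disappeared or appeared across the upgrade."""
    diff = {}
    for kind in sorted(set(before) | set(after)):
        missing = sorted(before.get(kind, set()) - after.get(kind, set()))
        added = sorted(after.get(kind, set()) - before.get(kind, set()))
        if missing or added:
            diff[kind] = {"missing": missing, "added": added}
    return diff

# Example: one service vanished and one network appeared after the upgrade.
before = {"services": {"nova-api", "nova-cert"}, "networks": {"net1"}}
after = {"services": {"nova-api"}, "networks": {"net1", "net2"}}
print(snapshot_diff(before, after))
```

An empty result means the upgrade preserved every resource the script tracks.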
Automation & Local CI
Config file converter
• Input: previous-version config file.
• Output: new-version config file.
Local CI
• Deploy a new cluster using a given version of the source code and the cluster topology.
• Upgrade multiple nodes automatically by leveraging the local CI.
• After deployment, run Tempest to validate changes.
[Diagram: Admin manually triggers CI or configures polling strategies; Jenkins polls changes from the local Git repo and drives the Puppet Master, which manages the OpenStack cluster.]
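The converter's core is a table of key renames and moves applied to each config file. A minimal sketch, where the rename table carries only the keystone/glance examples from the following slides:

```python
import configparser

# (old_section, old_key) -> (new_section, new_key); None means "drop the key".
RENAMES = {
    ("sql", "connection"): ("database", "connection"),          # keystone
    ("DEFAULT", "sql_connection"): ("database", "connection"),  # glance
    ("DEFAULT", "sql_idle_timeout"): ("database", "idle_timeout"),
    ("DEFAULT", "notifier_strategy"): ("DEFAULT", "notification_driver"),
    ("DEFAULT", "rabbit_ha_queues"): None,
}

def convert(cfg):
    """Apply the Havana -> Icehouse key renames in place on a ConfigParser."""
    for (old_sec, old_key), target in RENAMES.items():
        if old_sec != "DEFAULT" and not cfg.has_section(old_sec):
            continue  # has_option() would raise on a missing section
        if not cfg.has_option(old_sec, old_key):
            continue
        value = cfg.get(old_sec, old_key)
        cfg.remove_option(old_sec, old_key)
        if target is not None:
            new_sec, new_key = target
            if new_sec != "DEFAULT" and not cfg.has_section(new_sec):
                cfg.add_section(new_sec)
            cfg.set(new_sec, new_key, value)
    return cfg

# Example: a Havana-style glance-api.conf fragment.
cfg = configparser.ConfigParser(interpolation=None)  # no %-interpolation in URLs
cfg.read_string("[DEFAULT]\nsql_connection = mysql://glance:pw@controller/glance\n")
convert(cfg)
print(cfg.get("database", "connection"))
```

The real converter also has to preserve comments and key order, which `configparser` discards; this sketch only shows the rename logic.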
Detailed Steps of Rolling Upgrade
Preparation
• Back up config files and the DB.
• Build the Icehouse local repo, including source code, pip packages, and config file templates.
• Edit the /etc/mysql/my.cnf file; change the MySQL default character set to UTF-8.
• Stop puppet services in the cluster.
• Check haproxy status. Select Controller01 to upgrade and keep Controller02 running.
Upgrade sequence
• Upgrade keystone, glance, and nova on Controller01.
• Upgrade nova, neutron, and cinder on the compute/network nodes.
• Stop the corresponding services on Controller02, and start them on Controller01.
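The per-table character-set conversions (needed below for the glance and neutron databases) can be generated as SQL; a sketch where the database and table names are illustrative:

```python
def utf8_conversion_sql(database, tables):
    """Generate the statements that switch a DB and each of its tables to UTF-8."""
    stmts = ["ALTER DATABASE %s CHARACTER SET utf8 COLLATE utf8_general_ci;" % database]
    for table in tables:
        stmts.append(
            "ALTER TABLE %s.%s CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"
            % (database, table)
        )
    return stmts

# Example output for two glance tables.
for stmt in utf8_conversion_sql("glance", ["images", "image_properties"]):
    print(stmt)
```

In practice the table list would come from information_schema rather than being hard-coded, and (as we found in the Result slide) the conversion behaves differently under a Galera cluster, so rehearse it on beta first.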
keystone
1. Edit the /etc/keystone/keystone.conf file for compatibility with Icehouse:
   • Add the [database] section.
   • Move the connection key from the [sql] section to the [database] section.
   • Change bind_host = 0.0.0.0 to public_bind_host = 0.0.0.0 and admin_bind_host = 0.0.0.0.
2. Stop the services:
   • # service keystone stop
3. Uninstall the Havana package & code:
   • # pip uninstall -y keystone python-keystoneclient
4. Install the Icehouse package.
5. Upgrade the database:
   • # keystone-manage token_flush
   • # keystone-manage db_sync
6. Start the services:
   • # service keystone start
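A before/after view of the keystone.conf edits in step 1 (the connection string is illustrative):

```ini
# Havana (before)
[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[DEFAULT]
bind_host = 0.0.0.0

# Icehouse (after)
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[DEFAULT]
public_bind_host = 0.0.0.0
admin_bind_host = 0.0.0.0
```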
glance
1. Before upgrading the Image Service database, you must convert the character set of each table to UTF-8.
2. Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files for compatibility with Icehouse:
   • Add the [database] section.
   • Rename the sql_connection key to connection and move it to the [database] section.
   • In /etc/glance/glance-api.conf:
     sql_idle_timeout = 3600 -> idle_timeout = 3600
     notifier_strategy = noop -> notification_driver = noop
     rabbit_ha_queues = true -> comment out
3. In the /etc/glance/glance-api.conf file, add RabbitMQ message broker keys to the [DEFAULT] section:
   [DEFAULT]
   ...
   rpc_backend = rabbit
   rabbit_host = controller
   rabbit_password = RABBIT_PASS
   ...
   Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.
4. Stop the services:
   • # service glance-api stop
   • # service glance-registry stop
5. Uninstall the Havana package & code:
   • # pip uninstall -y glance python-glanceclient
6. Install the Icehouse package.
7. Upgrade the database:
   • # glance-manage db_sync
8. Start the services:
   • # service glance-api start
   • # service glance-registry start
nova services in controller
1. Edit the /etc/nova/api-paste.ini file and comment out or remove any keys in the [filter:authtoken] section beneath the paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory statement.
2. Edit the /etc/nova/nova.conf file:
   rpc_backend = nova.rpc.impl_kombu -> rpc_backend = rabbit
   libvirt_inject_password = false -> inject_password = false
   libvirt_inject_key = false -> inject_key = false
   libvirt_inject_partition = -2 -> inject_partition = -2
   [upgrade_levels]
   ...
   compute = icehouse-compat
   ...
3. Stop the services:
   # service nova-api stop
   # service nova-scheduler stop
   # service nova-conductor stop
   # service nova-cert stop
   # service nova-consoleauth stop
   # service nova-novncproxy stop
4. Uninstall the Havana package & code:
   # pip uninstall -y nova python-novaclient
5. Install the Icehouse package.
6. Upgrade the database:
   # nova-manage db sync
7. Start the services:
   # service nova-api start
   # service nova-scheduler start
   # service nova-conductor start
   # service nova-cert start
   # service nova-consoleauth start
   # service nova-novncproxy start
neutron-server
1. Before upgrading the Networking database, you must convert the character set of each table to UTF-8.
   Warning: Because the conversion script cannot roll back, you must perform a database backup prior to executing the following commands.
2. Populate the /etc/neutron/plugins/ml2/ml2_conf.ini file with the equivalent configuration for your environment.
   Note: Do not edit the /etc/neutron/neutron.conf file until after the conversion steps.
3. Stop the service:
   # service neutron-server stop
4. Uninstall the Havana package & code:
   # pip uninstall -y neutron python-neutronclient
5. Install the Icehouse package.
6. Keep the Havana configuration in /etc/neutron. Make sure the correct settings for ml2_conf.ini already exist in /etc/neutron/plugins/ml2.
7. Stamp the database:
   # neutron-db-manage --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana
   Note: make sure the file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini exists and contains service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin.
8. Perform the conversion from OVS to ML2; replace NEUTRON_DBPASS with the password you chose for the database:
   # python -m neutron.db.migration.migrate_to_ml2 openvswitch \
     mysql://neutron:NEUTRON_DBPASS@controller/neutron
9. Upgrade the database:
   # neutron-db-manage --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse
10. Edit the /etc/neutron/neutron.conf file to use the ML2 plug-in and enable network change notifications:
    [DEFAULT]
    ...
    core_plugin = ml2
    service_plugins = router
    ...
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://controller:8774/v2
    nova_admin_username = nova
    nova_admin_tenant_id = SERVICE_TENANT_ID
    nova_admin_password = NOVA_PASS
    nova_admin_auth_url = http://controller:35357/v2.0
    Note: Replace SERVICE_TENANT_ID with the service tenant identifier (id) in the Identity service and NOVA_PASS with the password you chose for the nova user in the Identity service.
11. Edit /etc/default/neutron-server to use the ML2 plug-in:
    NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
12. Start Networking services:
    # service neutron-server start
neutron-*-agents
1. Stop Networking services:
   # service neutron-dhcp-agent stop
   # service neutron-l3-agent stop
   # service neutron-metadata-agent stop
   # service neutron-plugin-openvswitch-agent stop
2. Uninstall the Havana package & code:
   # pip uninstall -y neutron python-neutronclient
3. Install the Icehouse package.
4. Edit the /etc/neutron/neutron.conf file to use the ML2 plug-in:
   [DEFAULT]
   core_plugin = ml2
   service_plugins = router
5. Populate the /etc/neutron/plugins/ml2/ml2_conf.ini file with the equivalent configuration for your environment.
6. Clean the active OVS configuration. Create /etc/init/neutron-ovs-cleanup.conf, then:
   # ln -sv /lib/init/upstart-job /etc/init.d/neutron-ovs-cleanup
   # service neutron-ovs-cleanup restart
7. Edit /etc/init/neutron-plugin-openvswitch-agent.conf to point at the ML2 configuration:
   --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
8. Start Networking services:
   # service neutron-dhcp-agent start
   # service neutron-l3-agent start
   # service neutron-metadata-agent start
   # service neutron-plugin-openvswitch-agent start
neutron-plugin-openvswitch-agent
1. Stop the Networking service:
   # service neutron-plugin-openvswitch-agent stop
2. Uninstall the Havana package & code:
   # pip uninstall -y neutron python-neutronclient
3. Install the Icehouse package.
4. Edit the /etc/neutron/neutron.conf file to use the ML2 plug-in:
   [DEFAULT]
   core_plugin = ml2
   service_plugins = router
5. Populate the /etc/neutron/plugins/ml2/ml2_conf.ini file with the equivalent configuration for your environment.
6. Clean the active OVS configuration. Create /etc/init/neutron-ovs-cleanup.conf, then:
   # ln -sv /lib/init/upstart-job /etc/init.d/neutron-ovs-cleanup
   # service neutron-ovs-cleanup restart
7. Edit /etc/init/neutron-plugin-openvswitch-agent.conf to point at the ML2 configuration:
   --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
8. Restart the Networking service:
   # service neutron-plugin-openvswitch-agent start
nova-compute
1. Edit the /etc/nova/nova.conf file:
   rpc_backend = nova.rpc.impl_kombu -> rpc_backend = rabbit
   libvirt_inject_password = false -> inject_password = false
   libvirt_inject_key = false -> inject_key = false
   libvirt_inject_partition = -2 -> inject_partition = -2
2. Edit the /etc/nova/api-paste.ini file and comment out or remove any keys in the [filter:authtoken] section beneath the paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory statement.
3. Stop the services:
   # service nova-compute stop
4. Uninstall the Havana package & code:
   # pip uninstall -y nova python-novaclient
5. Install the Icehouse package.
6. Start the services:
   # service nova-compute start
7. Remove the compatibility mode of the compute service on the controller node. Edit the /etc/nova/nova.conf file:
   [upgrade_levels]
   ...
   #compute=icehouse-compat
   ...
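The [upgrade_levels] pin is what makes this upgrade rolling: as we understand it, compute = icehouse-compat keeps the already-upgraded controller services speaking Havana-compatible RPC to compute nodes that are not yet upgraded, and the pin is removed once every nova-compute runs Icehouse. Side by side (controller node's nova.conf):

```ini
# During the rolling upgrade: pin the compute RPC interface so
# Icehouse controllers can still talk to Havana compute nodes.
[upgrade_levels]
compute = icehouse-compat

# After all compute nodes run Icehouse: comment the pin back out.
[upgrade_levels]
#compute = icehouse-compat
```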
cinder
On the controller node:
1. Stop the services:
   # service cinder-api stop
   # service cinder-scheduler stop
2. Uninstall the Havana package & code:
   # pip uninstall -y cinder python-cinderclient
3. Install the Icehouse package.
4. Upgrade the DB:
   # cinder-manage db sync
5. Start the services:
   # service cinder-api start
   # service cinder-scheduler start
On the volume node:
1. Stop the services:
   # service cinder-volume stop
2. Uninstall the Havana package & code:
   # pip uninstall -y cinder python-cinderclient
3. Install the Icehouse package.
4. Start the services:
   # service cinder-volume start
Result
Issues we hit:
• Converting the glance and neutron DBs to utf8 in a MySQL Galera Cluster.
• The conversion from OVS to ML2 (sequence matters).
• The DHCP tap port is tagged to 4095.
• A VM's tap device is lost in br-int.
Disclaimer
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software
or service activation. Performance varies depending on system configuration. No computer system can be absolutely
secure. Check with your system manufacturer or retailer or learn more at [intel.com].
Software and workloads used in performance tests may have been optimized for performance only on Intel
microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems,
components, software, operations and functions. Any change to any of those factors may cause the results to vary. You
should consult other information and performance tests to assist you in fully evaluating your contemplated purchases,
including the performance of that product when combined with other products.
Intel, the Intel logo, {List the Intel trademarks in your document} are trademarks of Intel Corporation in the U.S. and/or
other countries.
*Other names and brands may be claimed as the property of others.
© 2015 Intel Corporation.