Fuel version 0 - OpenStack Documentation

Fuel
version 0
2012-2014, Mirantis
December 29, 2014
Table of contents
Development Documentation
  Fuel Architecture
  Sequence Diagrams
  Fuel Development Quick-Start
  Fuel Development Examples
  Fuel Development Environment
  System Tests
  Fuel Development Environment on Live Master Node
  Nailgun
  Contributing to Fuel Library
  Using Fuel settings
  Example module
  Resource duplication and file conflicts
  Puppet module containment
  Puppet scope and variables
  Where to find more information
  Fuel Master Node Deployment over PXE
  Health Check (OSTF) Contributor's Guide
User Guide
  Devops Guide
    Introduction
    Installation
    Configuration
    Environment creation via Devops + Fuel_main
    Important notes for Sahara and Murano tests
    Run single OSTF tests several times
  Fuel ISO build system
    Quick start
    Build system structure
    Build targets
    Customizing build process
    Other options
Index
Python Module Index
Development Documentation
Fuel Architecture
A good overview of the Fuel architecture is available on the OpenStack wiki. You can find a detailed
breakdown of how this works in the Sequence Diagrams.
The master node is the main part of the Fuel project. It contains all the services needed for network
provisioning of the other managed nodes, installing an operating system, and then deploying OpenStack
services to create a cloud environment. Nailgun is the most important service. It is a RESTful
application written in Python that contains all the business logic of the system. A user can interact
with it either through the Fuel Web interface or by means of the CLI utility, and can create a new
environment, edit its settings, assign roles to the discovered nodes, and start the deployment process
of a new OpenStack cluster.
Nailgun stores all of its data in a PostgreSQL database: the hardware configuration of all
discovered managed nodes, the roles, environment settings, and the current status and progress
of running deployments.
Managed nodes are discovered over PXE using a special bootstrap image and the PXE boot server
located on the master node. The bootstrap image runs a special script called Nailgun agent. The
agent nailgun-agent.rb collects the server's hardware information and submits it to Nailgun through
the REST API.
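For illustration only, here is the kind of request the agent makes, sketched in Python (the real agent is written in Ruby, and the address, port, and payload fields below are assumptions; the actual inventory is much larger):

# Illustrative sketch of the agent's node submission; not the real agent.
import json
import urllib2

node = {
    "mac": "52:54:00:12:34:56",                  # primary interface MAC (hypothetical)
    "manufacturer": "QEMU",                      # hypothetical hardware facts
    "platform_name": "x86_64",
    "meta": {"memory": {"total": 4294967296}},
}

request = urllib2.Request(
    "http://10.20.0.2:8000/api/nodes/",          # assumed master node address and port
    data=json.dumps(node),
    headers={"Content-Type": "application/json"},
)
print(urllib2.urlopen(request).read())           # Nailgun replies with the created node as JSON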
The deployment process is started by the user after a new environment has been configured. The
Nailgun service creates a JSON data structure with the environment settings, its nodes and their roles,
and puts this message into the RabbitMQ queue. The message is then received by one of the worker
processes that actually deploy the environment. These processes are called Astute.
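The message schema is internal to Nailgun and Astute; the sketch below only illustrates its general shape, and every field name in it is an assumption rather than a documented contract:

# Illustrative shape of a deployment message; all field names are assumptions.
deploy_message = {
    "method": "deploy",                      # action for the Astute worker
    "respond_to": "deploy_resp",             # where progress reports should go
    "args": {
        "task_uuid": "4f2b-...",             # hypothetical task identifier
        "deployment_info": [                 # environment settings, nodes and roles
            {"uid": "1", "role": "controller"},
            {"uid": "2", "role": "compute"},
        ],
    },
}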
The Astute workers listen on the RabbitMQ queue and receive messages. They use the Astute
library, which implements all deployment actions. First, Astute starts provisioning the environment's
nodes. It uses XML-RPC to set these nodes' configuration in Cobbler and then reboots the nodes
using the MCollective agent so that Cobbler can install the base operating system. Cobbler is a deployment
system that can control DHCP and TFTP services and uses them to network-boot the managed node
and start the OS installer with the user-configured settings.
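Cobbler exposes this control surface through its XML-RPC API. A minimal sketch of the calls involved, in Python for illustration (Astute itself is written in Ruby; the address, credentials, and profile name here are assumptions):

# Sketch of registering a node as a Cobbler system over XML-RPC.
import xmlrpclib

server = xmlrpclib.Server("http://10.20.0.2/cobbler_api")       # assumed master address
token = server.login("cobbler", "cobbler")                      # assumed credentials

system_id = server.new_system(token)                            # create a system record
server.modify_system(system_id, "name", "node-1", token)
server.modify_system(system_id, "profile", "bootstrap", token)  # hypothetical profile
server.save_system(system_id, token)
server.sync(token)                                              # regenerate DHCP/TFTP configs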
Astute puts a special message into the RabbitMQ queue that contains the action to be
executed on the managed node. MCollective servers are started on all bootstrapped nodes and
constantly listen for these messages; when they receive one, they run the required agent
action with the given parameters. MCollective agents are just Ruby scripts with a set of procedures.
These procedures are the actions that the MCollective server can run when asked to.
When the managed node's OS is installed, Astute can start the deployment of OpenStack services.
First, it uploads the node's configuration to the /etc/astute.yaml file on the node using the uploadfile
agent. This file contains all the variables and settings that will be needed for the deployment.
Next, Astute uses the puppetsync agent to synchronize Puppet modules and manifests. This agent
runs an rsync process that connects to the rsyncd server on the Master node and downloads the
latest version of Puppet modules and manifests.
When the modules are synchronized, Astute can run the actual deployment by applying the main
Puppet manifest site.pp. MCollective agent runs the Puppet process in the background using the
daemonize tool. The command looks like this:
daemonize puppet apply /etc/puppet/manifests/site.pp
Astute periodically polls the agent to check if the deployment has finished and reports the progress to
Nailgun through its RabbitMQ queue.
When started, Puppet reads the astute.yaml file content as a fact and then parses it into the
$fuel_settings structure used to get all deployment settings.
When the Puppet process exits either successfully or with an error, Astute gets the summary file from
the node and reports the results to Nailgun. The user can always monitor both the progress and the
results using Fuel Web interface or the CLI tool.
Fuel installs the puppet-pull script. Developers can use it if they need to manually synchronize
manifests from the Master node and run the Puppet process on the node again.
Astute also performs some additional actions, depending on the environment configuration, either before
the deployment or after it has completed successfully.
• Generates and uploads SSH keys that will be needed during deployment.
• Runs the net_verify.py script during the network verification phase.
• Uploads CirrOS guest image into Glance after the deployment.
• Updates /etc/hosts file on all nodes when new nodes are deployed.
• Updates RadosGW map when Ceph nodes are deployed.
Astute also uses MCollective agents when a node or the entire environment is being removed. It
erases all boot sectors on the node and reboots it. The node will be network booted with the bootstrap
image again, and will be ready to be used in a new environment.
Sequence Diagrams
OS Provisioning
Networks Verification
Details on Cluster Provisioning & Deployment (via Facter extension)
Once the deploy and provisioning messages are accepted by Astute, the provisioning method is called.
The provisioning part creates the systems in Cobbler and calls reboot over Cobbler. Then Astute uses
MCollective direct addressing mode to check that all required nodes are available, including the Puppet
agent on them. If some nodes are not yet ready, Astute waits for a few seconds and then repeats the request.
When the nodes are booted into the target OS, Astute uses the upload_file MCollective plugin to push data to a
special file, /etc/astute.yaml, on the target system. The data includes the node's role and all other variables
needed for the deployment. Then Astute calls the puppetd MCollective plugin to start the deployment, and Puppet
is started on the nodes.
Accordingly, the Puppet agent starts its run. The modules contain a Facter extension, which runs before the
deployment. The extension reads the data from /etc/astute.yaml placed there by MCollective and extends the
Facter data with it as a single fact, which is then parsed by the parseyaml function to create the
$::fuel_settings data structure. This structure contains all variables as a single hash and supports
embedding of other rich structures, such as the nodes hash or arrays. A case structure in the running class
chooses the appropriate class to import, based on the role and deployment_mode variables found in
/etc/astute.yaml.
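Conceptually, the fact is simply the YAML file parsed into one nested hash. A sketch of the equivalent step in Python (Puppet does this with the Facter extension and the parseyaml function):

# Conceptual equivalent of the Facter extension plus parseyaml.
import yaml

with open("/etc/astute.yaml") as f:
    fuel_settings = yaml.safe_load(f)

# Keys named in this guide; the exact contents vary per deployment.
print(fuel_settings["role"])
print(fuel_settings["deployment_mode"])
print(fuel_settings["nodes"])    # embedded rich structure, e.g. a list of node hashes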
Fuel Development Quick-Start
If you are interested in contributing to Fuel or modifying Fuel for your own purposes, this short guide
should get you pointed to all the information you need to get started.
If you are new to contributing to OpenStack, read through the “How To Contribute” page on the
OpenStack wiki. See: How to contribute.
For this walk-through, let’s use the example of modifying an option in the “new environment wizard”
in Fuel (example here: https://review.openstack.org/#/c/90687/1). This enhancement required
modification of three files in the fuel-web repository:
fuel-web/nailgun/static/i18n/translation.json
fuel-web/nailgun/static/js/views/dialogs.js
fuel-web/nailgun/static/templates/dialogs/create_cluster_wizard/storage.html
In order to add, test and commit the code necessary to implement this feature, these steps were
followed:
1. Create a Fuel development environment by following the instructions found here: Fuel
Development Environment.
2. In your development environment, prepare your environment for Nailgun unit tests and Web UI
tests by following the instructions found here: Nailgun Dev Environment. Be sure to run the tests
noted in each section to ensure your environment conforms to a known good baseline.
3. Branch your fuel-web checkout (see Gerrit Workflow for more information on the gerrit workflow):
cd fuel-web
git fetch --all;git checkout -b vcenter-wizard-fix origin/master
4. Modify the necessary files (refer to Fuel Architecture to understand how the components of Fuel
work together).
5. Test your Nailgun changes:
cd fuel-web
./run_tests.sh --no-lint-ui --no-webui
./run_tests.sh --flake8
./run_tests.sh --lint-ui
./run_tests.sh --webui
6. You should also test Nailgun in fake UI mode by following the steps found here: Running Nailgun
in Fake Mode
7. When all tests pass, you should commit your code, which will subject it to further testing via
Jenkins and Fuel CI. Be sure to include a good commit message; guidelines can be found here: Git
Commit Messages:
git commit -a
git review
8. Frequently, the review process will suggest changes be made before your code can be merged.
In that case, make your changes locally, test the changes, and then re-submit for review by
following these steps:
git commit -a --amend
git review
9. Now that your code has been committed, you should change your Fuel ISO makefile to point to
your specific commit. As noted in the Fuel Development documentation, when you build a Fuel
ISO it pulls down the additional repositories rather than using your local repos. Even though you
have a local clone of fuel-web holding the branch you just worked on, the build script will be
pulling code from git for the sub-components (Nailgun, Astute, OSTF) based on the repository and
commit specified in environment variables when calling “make iso”, or as found in config.mk. You
will need to know the gerrit commit ID and patch number. For this example we are looking at
https://review.openstack.org/#/c/90687/1 with the gerrit ID 90687, patch 1. In this instance, you
would build the ISO with:
cd fuel-main
NAILGUN_GERRIT_COMMIT=refs/changes/87/90687/1 make iso
10. Once your ISO build is complete, you can test it. If you have access to hardware that can run the
KVM hypervisor, you can follow the instructions found in the Devops Guide to create a robust
testing environment. Otherwise you can test the ISO with VirtualBox (the download link can be
found at https://software.mirantis.com/)
11. Once your code has been merged, you can return your local repo to the master branch so you
can start fresh on your next commit by following these steps:
cd fuel-web
git remote update
git checkout master
git pull
Fuel Development Examples
This section provides examples of the Fuel development process. It builds on the information in the
How to contribute document and the Fuel Development Quick-Start Guide, which illustrate the
development process for a single Fuel component. These examples show how to manage the
development and integration of a more complicated change.
Any new feature effort should start with the creation of a blueprint where implementation decisions
and related commits are tracked. More information on launchpad blueprints can be found here:
https://wiki.openstack.org/wiki/Blueprints.
Understanding the Fuel architecture helps you understand which components any particular addition
will impact. The following documents provide valuable information about the Fuel architecture, and
the provisioning and deployment process:
• Fuel architecture on the OpenStack wiki
• Architecture section of Fuel documentation
• Visual of provisioning tasks
Adding Zabbix Role
This section outlines the steps followed to add a new role to Fuel. In this case, monitoring service
functionality was added by enabling the deployment of a Zabbix server configured to monitor an
OpenStack environment deployed by Fuel.
The monitoring server role was initially planned in this blueprint. Core Fuel developers provided
feedback to small commits via Gerrit and IRC while the work was coming together. Ultimately the
work was rolled up into two commits including over 23k lines of code, and these two commits were
merged into fuel-web and fuel-library.
Additions to Fuel-Web for Zabbix role
In fuel-web, the Support for Zabbix commit added the additional role to Nailgun. The reader is urged
to review this commit closely as a good example of where specific additions fit. In order to include this
as an option in the Fuel deployment process, the following files were included in the commit for
fuel-web:
UI components:
nailgun/static/i18n/translation.json
nailgun/static/js/views/cluster_page_tabs/nodes_tab_screens/node_list_screen.js
Testing additions:
nailgun/nailgun/test/integration/test_cluster_changes_handler.py
nailgun/nailgun/test/integration/test_orchestrator_serializer.py
General Nailgun additions:
nailgun/nailgun/errors/__init__.py
nailgun/nailgun/fixtures/openstack.yaml
nailgun/nailgun/network/manager.py
nailgun/nailgun/orchestrator/deployment_serializers.py
nailgun/nailgun/rpc/receiver.py
nailgun/nailgun/settings.yaml
nailgun/nailgun/task/task.py
nailgun/nailgun/utils/zabbix.py
Additions to Fuel-Library for Zabbix role
In addition to the Nailgun additions, the related Puppet modules were added to the fuel-library
repository. This Zabbix fuel-library integration commit included all the puppet files, many of which are
brand new modules specifically for Zabbix, in addition to adjustments to the following files:
deployment/puppet/openstack/manifests/logging.pp
deployment/puppet/osnailyfacter/manifests/cluster_ha.pp
deployment/puppet/osnailyfacter/manifests/cluster_simple.pp
Once all these commits passed CI and had been reviewed by both community members and the Fuel
PTLs, they were merged into master.
Adding Hardware Support
This section outlines the steps followed to add support for a Mellanox network card, which requires a
kernel driver that is available in most Linux distributions but was not loaded by default. Adding
support for other hardware would touch similar Fuel components, so this outline should provide a
reasonable guide for contributors wishing to add support for new hardware to Fuel.
It is important to keep in mind that the Fuel node discovery process works by providing a bootstrap
image via PXE. Once the node boots with this image, a basic inventory of hardware information is
gathered and sent back to the Fuel controller. If a node contains hardware requiring a unique kernel
module, the bootstrap image must contain that module in order to detect the hardware during
discovery.
In this example, loading the module in the bootstrap image was enabled by adjusting the ISO
makefile and specifying the appropriate requirements.
Adding a hardware driver to bootstrap
The Added bootstrap support to Mellanox commit shows how this is achieved by adding the modprobe
call to load the driver specified in the requirements-rpm.txt file, requiring modification of only two
files in the fuel-main repository:
bootstrap/module.mk
requirements-rpm.txt
Note
Any package specified in the bootstrap building procedure must be listed in the requirements-rpm.txt
file explicitly. The Fuel mirrors must be rebuilt by the OSCI team prior to merging requests like this
one.
Note
Changes made to the bootstrap image do not affect the package sets for target systems, so if you are
adding support for a NIC, for example, you also have to add the installation of all related packages to
the kickstart/preseed files.
The Adding OFED drivers installation commit shows the changes made to the preseed (for Ubuntu)
and kickstart (for CentOS) files in the fuel-library repository:
deployment/puppet/cobbler/manifests/snippets.pp
deployment/puppet/cobbler/templates/kickstart/centos.ks.erb
deployment/puppet/cobbler/templates/preseed/ubuntu-1204.preseed.erb
deployment/puppet/cobbler/templates/snippets/centos_ofed_prereq_pkgs_if_enabled.erb
deployment/puppet/cobbler/templates/snippets/ofed_install_with_sriov.erb
deployment/puppet/cobbler/templates/snippets/ubuntu_packages.erb
Though this example did not require it, if the hardware driver is required during the operating system
installation, the installer images (debian-installer and anaconda) would also need to be repacked. For
most installations though, ensuring the driver package is available during installation should be
sufficient.
Adding to Fuel package repositories
If the addition will be committed back to the public Fuel codebase to benefit others, you will need to
submit a bug in the Fuel project to request the package be added to the repositories.
Let's look at this process step by step, using the Add neutron-lbaas-agent package bug as an example:
• you create a bug in the Fuel project providing a full description of the packages to be added, and
assign it to the Fuel OSCI team
• you create a request to add these packages to the Fuel requirements-*.txt files (Add all neutron
packages to requirements). You receive a +1 vote from Fuel CI if these packages already exist on
either the Fuel internal mirrors or the upstream mirrors for the respective OS type (rpm/deb), or a -1
vote in any other case.
• if the requested packages do not exist in the upstream OS distribution, the OSCI team builds them
and then places them on the internal Fuel mirrors
• the OSCI team rebuilds the public Fuel mirrors with the Add all neutron packages to requirements request
• the Add all neutron packages to requirements request is merged
Note
The package must include a license that complies with the Fedora project license requirements for
binary firmware. See the Fedora Project licensing page for more information.
Fuel Development Environment
If you are modifying or augmenting the Fuel source code or if you need to build a Fuel ISO from the
latest branch, you will need an environment with the necessary packages installed. This page lays out
the steps you will need to follow in order to prepare the development environment, test the individual
components of Fuel, and build the ISO which will be used to deploy your Fuel master node.
The basic operating system for Fuel development is Ubuntu Linux. The setup instructions below
assume Ubuntu 14.04 though most of them should be applicable to other Ubuntu and Debian
versions, too.
Each subsequent section below assumes that you have followed the steps described in all preceding
sections. By the end of this document, you should be able to run and test all key components of Fuel,
build the Fuel master node installation ISO, and generate documentation.
Getting the Source Code
The source code of OpenStack Fuel can be found on Stackforge. Follow these steps to clone the
repositories for each of the Fuel components:
apt-get install git
git clone https://github.com/stackforge/fuel-main
git clone https://github.com/stackforge/fuel-web
git clone https://github.com/stackforge/fuel-astute
git clone https://github.com/stackforge/fuel-ostf
git clone https://github.com/stackforge/fuel-library
git clone https://github.com/stackforge/fuel-docs
Building the Fuel ISO
The "fuel-main" repository is the only one required in order to build the Fuel ISO. The make script then
downloads the additional components (Fuel Library, Nailgun, Astute and OSTF). Unless otherwise
specified in the makefile, the master branch of each respective repo is used to build the ISO.
The basic steps to build the Fuel ISO from trunk in an Ubuntu 14.04 environment are:
apt-get install git
git clone https://github.com/stackforge/fuel-main
cd fuel-main
./prepare-build-env.sh
make iso
If you want to build an ISO using a specific commit or repository, you will need to modify the "Repos
and versions" section in the config.mk file found in the fuel-main repo before executing "make iso".
For example, this would build a Fuel ISO against the v5.0 tag of Fuel:
# Repos and versions
FUELLIB_COMMIT?=tags/5.0
NAILGUN_COMMIT?=tags/5.0
ASTUTE_COMMIT?=tags/5.0
OSTF_COMMIT?=tags/5.0
FUELLIB_REPO?=https://github.com/stackforge/fuel-library.git
NAILGUN_REPO?=https://github.com/stackforge/fuel-web.git
ASTUTE_REPO?=https://github.com/stackforge/fuel-astute.git
OSTF_REPO?=https://github.com/stackforge/fuel-ostf.git
To build an ISO image from custom gerrit patches on review, edit the "Gerrit URLs and commits"
section of config.mk, e.g. for https://review.openstack.org/#/c/63732/8 (id:63732, patch:8) set:
FUELLIB_GERRIT_COMMIT?=refs/changes/32/63732/8
If you are building Fuel from an older branch that does not contain the "prepare-build-env.sh" script,
you can follow these steps to prepare your Fuel ISO build environment on Ubuntu 14.04:
1. The ISO build process requires sudo permissions; allow yourself to run commands as the root user
without being asked for a password:
echo "`whoami` ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
2. Install software:
sudo apt-get update
sudo apt-get install apt-transport-https
echo deb http://mirror.yandex.ru/mirrors/docker/ docker main | sudo tee /etc/apt/sources
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950
sudo apt-get update
sudo apt-get install lxc-docker
sudo apt-get update
sudo apt-get remove nodejs nodejs-legacy npm
sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository -y ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install build-essential make git ruby ruby-dev rubygems debootstrap createrepo \
python-setuptools yum yum-utils libmysqlclient-dev isomd5sum \
python-nose libvirt-bin python-ipaddr python-paramiko python-yaml \
python-pip kpartx extlinux unzip genisoimage nodejs multistrap \
lrzip python-daemon
sudo gem install bundler -v 1.2.1
sudo gem install builder
sudo pip install xmlbuilder jinja2
sudo npm install -g grunt-cli
3. If you haven't already done so, get the source code:
git clone https://github.com/stackforge/fuel-main
4. Now you can build the Fuel ISO image:
cd fuel-main
make iso
5. If you encounter issues and need to rebase or start over:
make clean        # remove build/ directory
make deep_clean   # remove build/ and local_mirror/
Note
If you are using VirtualBox for building the ISO, please ensure that the build directory (BUILD_DIR)
and LOCAL_MIRROR (see config.mk) are both OUTSIDE of the VirtualBox shared folder path.
Nailgun (Fuel-Web)
Nailgun is the heart of the Fuel project. It implements a REST API as well as deployment data
management. It manages disk volume configuration data, network configuration data, and any other
environment-specific data necessary for a successful deployment of OpenStack. It provides the
required orchestration logic for provisioning and deployment of the OpenStack components and
nodes in the right order. Nailgun uses a SQL database to store its data and an AMQP service to
interact with workers.
Requirements for preparing the Nailgun development environment, along with information on how to
modify and test Nailgun, can be found in the Nailgun Development Instructions document.
Astute
Astute is the Fuel component that represents Nailgun's workers; its function is to run actions
according to the instructions provided by Nailgun. Astute provides a layer that encapsulates all
the details of interaction with a variety of services such as Cobbler, Puppet, and shell scripts, and
provides a universal asynchronous interface to those services.
1. Astute can be found in the fuel-astute repository
2. Install Ruby dependencies:
sudo apt-get install git curl
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm install 2.1
rvm use 2.1
git clone https://github.com/nulayer/raemon.git
cd raemon
git checkout b78eaae57c8e836b8018386dd96527b8d9971acc
gem build raemon.gemspec
gem install raemon-0.3.0.gem
cd ..
rm -Rf raemon
3. Install or update dependencies and run unit tests:
cd fuel-astute
./run_tests.sh
4. (optional) Run the Astute MCollective integration test (you'll need to have an MCollective server
running for this to work):
cd fuel-astute
bundle exec rspec spec/integration/mcollective_spec.rb
Running Fuel Puppet Modules Unit Tests
If you are modifying any puppet modules used by Fuel, or including additional modules, you can use
the PuppetLabs RSpec Helper to run the unit tests for any individual puppet module. Follow these
steps to install the RSpec Helper:
1. Install PuppetLabs RSpec Helper:
cd ~
gem2deb puppetlabs_spec_helper
sudo dpkg -i ruby-puppetlabs-spec-helper_0.4.1-1_all.deb
gem2deb rspec-puppet
sudo dpkg -i ruby-rspec-puppet_0.1.6-1_all.deb
2. Run unit tests for a Puppet module:
cd fuel/deployment/puppet/module
rake spec
Installing Cobbler
Install Cobbler from GitHub (it can't be installed from PyPI, and the deb package in Ubuntu is outdated):
cd ~
git clone git://github.com/cobbler/cobbler.git
cd cobbler
git checkout release24
sudo make install
Building Documentation
You should prepare your build environment before you can build this documentation. First you must
install Java, using the appropriate procedure for your operating system.
Java is needed to use PlantUML to automatically generate UML diagrams from the source. You can
also use PlantUML Server for a quick preview of your diagrams and language documentation.
Then you need to install all the packages required to create the Python virtual environment and
install the dependencies.
sudo apt-get install make postgresql postgresql-server-dev-9.1
sudo apt-get install python-dev python-pip python-virtualenv
Now you can create the virtual environment and activate it.
virtualenv fuel-web-venv
. fuel-web-venv/bin/activate
And then install the dependencies.
pip install ./shotgun
pip install -r nailgun/test-requirements.txt
Now you can look at the list of available formats and generate the one you need:
cd docs
make help
make html
There is a helper script, build-docs.sh, which can perform all the required steps automatically and
build the documentation in the required format. Its options are:
Documentation build helper
-o - Open generated documentation after build
-c - Clear the build directory
-n - Don't install any packages
-f - Documentation format [html,singlehtml,latexpdf,pdf,epub]
For example, if you want to build HTML documentation and open it afterwards, just run:
./build-docs.sh -f html -o
It will create a virtualenv, install the required dependencies, and build the documentation in HTML
format. It will also open the documentation with your default browser.
If you don't want to install all the dependencies and you are not interested in building the automatic
API documentation, there is an easier way.
First, remove the autodoc modules from the extensions section of the conf.py file in the docs directory,
so that the section looks like this:
extensions = [
'rst2pdf.pdfbuilder',
'sphinxcontrib.plantuml',
]
Then remove the develop/api_doc.rst file and the reference to it from the develop.rst index.
Now you can build the documentation as usual using the make command. This method can be useful if you
want to make some corrections to the text and see the results without building the entire environment.
The only Python packages you need are the Sphinx packages:
Sphinx
sphinxcontrib-actdiag
sphinxcontrib-blockdiag
sphinxcontrib-nwdiag
sphinxcontrib-plantuml
sphinxcontrib-seqdiag
Just don't forget to roll back all these changes before you commit your corrections.
System Tests
To include documentation on system tests, the SYSTEM_TESTS_PATH environment variable must be set
before running sphinx-build or make.
Fuel Development Environment on Live Master Node
If you need to deploy your own developer version of Fuel on a live Master Node, you can use
the helper script fuel-web/fuel_development/manage.py. The helper script configures the development
environment on the master node, deploys code, or restores the production environment.
Help information about manage.py can be found by running it with the '-h' parameter.
Nailgun Developer Version on Live Master Node
Configure the Nailgun development environment by following the instructions: Nailgun Dev
Environment
In your local fuel-web repository run:
workon fuel
cd fuel_development
python manage.py -m MASTER.NODE.ADDRESS nailgun deploy
The Nailgun source code will be deployed, all required packages will be installed, and the required
services will be reconfigured and restarted. After that, the developer version of Nailgun can be accessed.
To deploy the Nailgun source code only, without reconfiguring services, run:
python manage.py -m MASTER.NODE.ADDRESS nailgun deploy --synconly
To restore the production version of Nailgun, run:
python manage.py -m MASTER.NODE.ADDRESS nailgun revert
If you need to add a new Python package or use a different version of a Python package, make the
appropriate changes in the nailgun/requirements.txt file and run:
python manage.py -m MASTER.NODE.ADDRESS nailgun deploy
Nailgun
Nailgun Development Instructions
Setting up Environment
For information on how to get source code see Getting the Source Code.
Preparing Development Environment
Warning
Nailgun requires Python 2.6 with development files. Please check the installed Python version using
python --version. If the version does not match, you can use a PPA (Ubuntu) or pyenv
(universal).
For PPA:
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get install python2.6 python2.6-dev
1. Nailgun can be found in fuel-web/nailgun
2. Install and configure PostgreSQL database:
sudo apt-get install postgresql postgresql-server-dev-9.1
sudo -u postgres createuser -SDRP nailgun # enter password "nailgun"
sudo -u postgres createdb nailgun
3. Install pip and development tools:
sudo apt-get install python-dev python-pip
4. Install virtualenv. This is an optional step that increases flexibility when dealing with environment
settings and package installation:
sudo pip install virtualenv virtualenvwrapper
source /usr/local/bin/virtualenvwrapper.sh # you can save this to .bashrc
mkvirtualenv fuel # you can use any name instead of 'fuel'
workon fuel # command selects the particular environment
5. Install Python dependencies. This section assumes that you use a virtual environment; otherwise,
you must install all packages globally. You can use pip to install all the other
packages at once:
pip install ./shotgun # this fuel project is listed in setup.py requirements
pip install --allow-all-external -r nailgun/test-requirements.txt
6. Create required folder for log files:
sudo mkdir /var/log/nailgun
sudo chown -R `whoami`.`whoami` /var/log/nailgun
Setup for Nailgun Unit Tests
1. Nailgun unit tests use Tox for generating test environments. This means that you don't need to
install all the Python packages required for the project to run them, because Tox does this by itself.
2. First, create a virtualenv the way it's described in the previous section. Then, install the Tox package:
pip install tox
3. Run the Nailgun backend unit tests:
./run_tests.sh --no-lint-ui --no-webui
4. Run the Nailgun flake8 test:
./run_tests.sh --flake8
5. You can also run the same tests by hand, using tox itself:
cd nailgun
tox -epy26 -- -vv nailgun/test
tox -epep8
6. Tox reuses the previously created environment. After changing package dependencies, run tox
with the -r option to recreate the existing virtualenvs:
tox -r -epy26 -- -vv nailgun/test
tox -r -epep8
Setup for Web UI Tests
1. Install NodeJS and JS dependencies:
sudo apt-get remove nodejs nodejs-legacy
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
sudo npm install -g grunt-cli
cd nailgun
npm install
2. Install CasperJS:
sudo npm install -g phantomjs
cd ~
git clone git://github.com/n1k0/casperjs.git
cd casperjs
git checkout tags/1.0.0-RC4
sudo ln -sf `pwd`/bin/casperjs /usr/local/bin/casperjs
3. Run full Web UI test suite (this will wipe your Nailgun database in PostgreSQL):
cd fuel-web
./run_tests.sh --lint-ui
./run_tests.sh --webui
Running Nailgun in Fake Mode
1. Fetch JS dependencies:
cd nailgun
npm install
grunt bower
2. Populate the database from fixtures:
./manage.py syncdb
./manage.py loaddefault # It loads all basic fixtures listed in settings.yaml
./manage.py loaddata nailgun/fixtures/sample_environment.json # Loads fake nodes
3. Start the application in "fake" mode, in which no real calls to the orchestrator are performed:
python manage.py run -p 8000 --fake-tasks | egrep --line-buffered -v '^$|HTTP' >> /var/l
4. (optional) You can also use the --fake-tasks-amqp option if you want the fake environment to use
a real RabbitMQ instead of a fake one:
python manage.py run -p 8000 --fake-tasks-amqp | egrep --line-buffered -v '^$|HTTP' >> /
5. (optional) To create a compressed version of the UI and put it into the static_compressed dir:
grunt build --static-dir=static_compressed
Note: Diagnostic Snapshot is not available in Fake mode.
Running the Fuel System Tests
For fuel-devops configuration info, please refer to the Devops Guide.
1. Run the integration test:
cd fuel-main
make test-integration
2. To save time, you can execute individual test cases from the integration test suite like this (a nice
thing about TestAdminNode is that it takes you from nothing to a Fuel master with 9 blank nodes
connected to 3 virtual networks):
cd fuel-main
export PYTHONPATH=$(pwd)
export ENV_NAME=fuelweb
export PUBLIC_FORWARD=nat
export ISO_PATH=`pwd`/build/iso/fuelweb-centos-6.5-x86_64.iso
./fuelweb_tests/run_tests.py --group=test_cobbler_alive
3. The test harness creates a snapshot of all nodes called 'empty' before starting the tests, and
creates a new snapshot if a test fails. You can revert to a specific snapshot with this command:
dos.py revert --snapshot-name <snapshot_name> <env_name>
4. To fully reset your test environment, tell the Devops toolkit to erase it:
dos.py list
dos.py erase <env_name>
Fuel UI Internationalization Guidelines
Fuel UI internationalization is done using the i18next library. Please read the i18next documentation first.
All translations are stored in nailgun/static/i18n/translation.json.
If you want to add new strings to the translations file, follow these rules:
1. Use words describing placement of strings like "button", "title", "summary", "description", "label"
and place them at the end of the key (like "apply_button", "cluster_description", etc.). One-word
strings may look better without any of these suffixes.
2. Do NOT use shortcuts ("bt" instead of "button", "descr" instead of "description", etc.)
3. Nest keys if it makes sense, for example, if there are a few values for statuses, etc.
4. If some keys are used in a few places (for example, in utils), move them to "common.*"
namespace.
5. Use defaultValue ONLY with dynamically generated keys.
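To illustrate these rules, here is a hypothetical fragment of translation.json, shown as an equivalent Python dict (the real file is JSON; all key names are invented for the example, and the top-level layout follows the usual i18next convention):

# Hypothetical translation.json fragment; key names invented for the example.
translations = {
    "en-US": {
        "translation": {
            "common": {                        # rule 4: shared keys under "common.*"
                "apply_button": "Apply",
                "cancel_button": "Cancel",
            },
            "cluster_page": {
                "cluster_description": "...",  # rule 1: suffix describes placement
                "status": {                    # rule 3: nested keys for statuses
                    "operational": "Operational",
                    "error": "Error",
                },
            },
        },
    },
}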
Validating translations
To search for absent and unnecessary translation keys you can perform the following steps:
1. Open terminal and cd to fuel-web/nailgun directory.
2. Run "grunt i18n:validate" to start the validation. If there are any mismatches, you'll see the list of
mismatching keys.
Grunt task "i18n:validate" has one optional argument - a comma-separated list of languages to
compare with base English en-US translations. Run "grunt i18n:validate:zh-CN" to perform comparison
only between English and Chinese keys. You can also run "grunt i18n:validate:zh-CN,ru-RU" to
perform comparison between English-Chinese and English-Russian keys.
Nailgun database migrations
Nailgun uses Alembic (http://alembic.readthedocs.org/en/latest/) for database migrations, allowing
access to all common Alembic commands through "python manage.py migrate"
This command creates DB tables for Nailgun service:
python manage.py syncdb
This is done by applying, one by one, a number of database migration files, which are located in
nailgun/nailgun/db/migration/alembic_migrations/versions.
Even if you make some changes in the SQLAlchemy models or create new ones, this command does not
create the corresponding DB tables unless you have created another migration file or updated an
existing one. A new migration file can be generated by running:
python manage.py migrate revision -m "Revision message" --autogenerate
There are two important points here:
1. This command always creates a "diff" between the current database state and the one
described by your SQLAlchemy models, so you should always run "python manage.py
syncdb" before this command. This prevents running the migrate command with an empty
database, which would cause it to create all tables from scratch.
2. Some modifications may not be detected by "--autogenerate" and require manual additions
to the migration file. For example, adding a new value to an ENUM field is not detected.
After creating a migration file, you can upgrade the database to a new state by using this command:
python manage.py migrate upgrade +1
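A migration file generated this way is a plain Python module with upgrade() and downgrade() functions. A minimal sketch with hypothetical revision IDs and a hypothetical column (the real files in alembic_migrations/versions follow the same shape):

# Hypothetical Alembic migration; revision IDs, table and column are invented.
from alembic import op
import sqlalchemy as sa

revision = '1a2b3c4d5e6f'        # filled in by Alembic on generation
down_revision = '0f1e2d3c4b5a'


def upgrade():
    # --autogenerate emits op.* calls like this for detected model changes
    op.add_column('nodes', sa.Column('example_field', sa.String(255), nullable=True))


def downgrade():
    op.drop_column('nodes', 'example_field')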
To merge your migration with an existing migration file, you can just move the lines of code from the
"upgrade()" and "downgrade()" methods to the bottom of the corresponding methods in the previous
migration file. As of this writing, the migration file is called "current.py".
For all additional features and needs, you may refer to the Alembic documentation:
http://alembic.readthedocs.org/en/latest/tutorial.html
Interacting with Nailgun using Shell
Launching shell
Interaction
Objects approach
SQLAlchemy approach
Frequently Asked Questions
Launching shell
Development shell for Nailgun can only be accessed inside its virtualenv, which can be activated by
launching the following command:
source /opt/nailgun/bin/activate
After that, the shell is accessible through this command:
python /opt/nailgun/bin/manage.py shell
Its appearance depends on the availability of IPython on the current system. This package is not
installed by default on the master node, but you can still use the command above to run a default
Python shell inside the Nailgun environment:
Python 2.7.3 (default, Feb 27 2014, 19:58:35)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
Interaction
There are two ways a user may interact with Nailgun object instances through the shell:
• Using Nailgun objects abstraction
• Using raw SQLAlchemy queries
IMPORTANT NOTE: The second way (which amounts to directly modifying objects in the DB) should
only be used if nothing else works.
Objects approach
Importing objects may look like this:
>>> from nailgun import objects
>>> objects.Release
<class 'nailgun.objects.release.Release'>
>>> objects.Cluster
<class 'nailgun.objects.cluster.Cluster'>
>>> objects.Node
<class 'nailgun.objects.node.Node'>
These are common abstractions around basic items Nailgun is dealing with. The reference on how to
work with them can be found here: Objects Reference.
These objects allow the user to interact with items in the DB at a higher level, which includes all necessary
business logic that is not executed when values in the DB are changed by hand. For working examples,
continue to Frequently Asked Questions.
SQLAlchemy approach
Using raw SQLAlchemy models and queries allows the user to modify objects through the ORM, in almost
the same way it can be done through the SQL CLI.
First, you need to get a DB session and import models:
>>> from nailgun.db import db
>>> from nailgun.db.sqlalchemy import models
>>> models.Release
<class 'nailgun.db.sqlalchemy.models.release.Release'>
>>> models.Cluster
<class 'nailgun.db.sqlalchemy.models.cluster.Cluster'>
>>> models.Node
<class 'nailgun.db.sqlalchemy.models.node.Node'>
and then get the necessary instances from the DB, modify them, and commit the current transaction:
>>> node = db().query(models.Node).get(1) # getting object by ID
>>> node
<nailgun.db.sqlalchemy.models.node.Node object at 0x3451790>
>>> node.status = 'error'
>>> db().commit()
You may refer to the SQLAlchemy documentation for more information on how to perform queries.
Frequently Asked Questions
In any case, as a first step, the objects should be imported as described here: Objects approach.
Q: How can I change the status of a particular node?
A: Just retrieve the node by its ID and update it:
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.update(node, {"status": "ready"})
>>> objects.Node.save(node)
Q: How can I remove a node from a cluster by hand?
A: Get the node by ID and call the corresponding method:
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.remove_from_cluster(node)
>>> objects.Node.save(node)
REST API Reference
Releases API
Clusters API
Nodes API
Disks API
Network Configuration API
Notifications API
Tasks API
Logs API
Version API
Releases API
Handlers dealing with releases
class nailgun.api.v1.handlers.release.ReleaseHandler
Bases: nailgun.api.v1.handlers.base.SingleHandler
URL: /api/releases/%obj_id%/
Release single handler
single
alias of Release
DELETE (obj_id)
Returns: Empty string
Http:
• 204 (object successfully deleted)
• 404 (object not found in db)
GET (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
PUT (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.release.ReleaseCollectionHandler
Bases: nailgun.api.v1.handlers.base.CollectionHandler
URL: /api/releases/
Release collection handler
collection
alias of ReleaseCollection
GET ()
Returns: Sorted releases' collection in JSON format
Http:
• 200 (OK)
POST ()
Returns: JSONized REST object.
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 409 (object with such parameters already exists)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.release.ReleaseNetworksHandler
Bases: nailgun.api.v1.handlers.base.SingleHandler
URL: /api/releases/%obj_id%/networks/
Release Handler for network metadata
single
alias of Release
GET (obj_id)
Read release networks metadata
Returns: Release networks metadata
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 404 (release object not found)
PUT (obj_id)
Updates release networks metadata
Returns: Release networks metadata
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 404 (release object not found)
POST (obj_id)
Creation of metadata disallowed
Http:
• 405 (method not supported)
DELETE (obj_id)
Deletion of metadata disallowed
Http:
• 405 (method not supported)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
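For orientation, a minimal sketch of calling the handlers above with Python's urllib2, assuming Nailgun listens at 10.20.0.2:8000 (the address is an assumption; in fake mode the port is whatever you passed to manage.py run -p):

# Minimal sketch of Releases API usage; master address and the "id"/"name"
# fields are assumptions for the example.
import json
import urllib2

base = "http://10.20.0.2:8000/api"

# GET /api/releases/ returns the sorted releases collection (200)
releases = json.loads(urllib2.urlopen(base + "/releases/").read())

# GET /api/releases/%obj_id%/ returns a single release (200) or 404
release = json.loads(
    urllib2.urlopen(base + "/releases/%s/" % releases[0]["id"]).read())
print(release["name"])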
Clusters API
Handlers dealing with clusters
class nailgun.api.v1.handlers.cluster.ClusterHandler
Bases: nailgun.api.v1.handlers.base.SingleHandler
URL: /api/clusters/%obj_id%/
Cluster single handler
single
alias of Cluster
DELETE (obj_id)
Returns: {}
Http:
• 202 (cluster deletion process launched)
• 400 (failed to execute cluster deletion process)
• 404 (cluster not found in db)
GET (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
PUT (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.cluster.ClusterCollectionHandler
Bases: nailgun.api.v1.handlers.base.CollectionHandler
URL: /api/clusters/
Cluster collection handler
collection
alias of ClusterCollection
GET ()
Returns: Collection of JSONized REST objects.
Http:
• 200 (OK)
POST ()
Returns: JSONized REST object.
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 409 (object with such parameters already exists)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.cluster.ClusterAttributesHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/clusters/%cluster_id%/attributes/
Cluster attributes handler
GET (cluster_id)
Returns: JSONized Cluster attributes.
Http:
• 200 (OK)
• 404 (cluster not found in db)
• 500 (cluster has no attributes)
PUT (cluster_id)
Returns: JSONized Cluster attributes.
Http:
• 200 (OK)
• 400 (wrong attributes data specified)
• 404 (cluster not found in db)
• 500 (cluster has no attributes)
PATCH (cluster_id)
Returns: JSONized Cluster attributes.
Http:
• 200 (OK)
• 400 (wrong attributes data specified)
• 404 (cluster not found in db)
• 500 (cluster has no attributes)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.cluster.ClusterAttributesDefaultsHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/clusters/%cluster_id%/attributes/defaults/
Cluster default attributes handler
GET (cluster_id)
Returns: JSONized default Cluster attributes.
Http:
• 200 (OK)
• 404 (cluster not found in db)
• 500 (cluster has no attributes)
PUT (cluster_id)
Returns: JSONized Cluster attributes.
Http:
• 200 (OK)
• 400 (wrong attributes data specified)
• 404 (cluster not found in db)
• 500 (cluster has no attributes)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.cluster.ClusterGeneratedData
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/clusters/%cluster_id%/generated/
Cluster generated data
GET (cluster_id)
Returns: JSONized cluster generated data
Http:
• 200 (OK)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
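As a usage illustration for the attributes handlers above (the endpoint shapes come from this reference; the master address, the cluster id, and the read-modify-write pattern are assumptions):

# Read-modify-write of cluster attributes; address and cluster id assumed.
import json
import urllib2

base = "http://10.20.0.2:8000/api"
cluster_id = 1    # hypothetical cluster

url = base + "/clusters/%s/attributes/" % cluster_id
attrs = json.loads(urllib2.urlopen(url).read())    # GET: 200 with JSONized attributes

request = urllib2.Request(url, data=json.dumps(attrs),
                          headers={"Content-Type": "application/json"})
request.get_method = lambda: "PUT"                 # urllib2 has no native PUT helper
print(urllib2.urlopen(request).read())             # 200 on success, 400 on bad data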
Nodes API
Handlers dealing with nodes
class nailgun.api.v1.handlers.node.NodeCollectionHandler
Bases: nailgun.api.v1.handlers.base.CollectionHandler
URL: /api/nodes/
Node collection handler
collection
alias of NodeCollection
GET ()
May receive cluster_id parameter to filter list of nodes
Returns: Collection of JSONized Node objects.
Http:
• 200 (OK)
PUT ()
Returns: Collection of JSONized Node objects.
Http:
• 200 (nodes are successfully updated)
• 400 (invalid nodes data specified)
POST ()
Returns: JSONized REST object.
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 409 (object with such parameters already exists)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.node.NodeNICsHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/%node_id%/interfaces/
Node network interfaces handler
GET (node_id)
Returns: Collection of JSONized Node interfaces.
Http:
• 200 (OK)
• 404 (node not found in db)
PUT (node_id)
Returns: Collection of JSONized Node objects.
Http:
• 200 (nodes are successfully updated)
• 400 (invalid nodes data specified)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.node.NodeCollectionNICsHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/interfaces/
Node collection network interfaces handler
PUT ()
Returns: Collection of JSONized Node objects.
Http:
• 200 (nodes are successfully updated)
• 400 (invalid nodes data specified)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.node.NodeNICsDefaultHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/%node_id%/interfaces/default_assignment/
Node default network interfaces handler
GET (node_id)
Returns: Collection of default JSONized interfaces for node.
Http:
• 200 (OK)
• 404 (node not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.node.NodeCollectionNICsDefaultHandler
Bases: nailgun.api.v1.handlers.node.NodeNICsDefaultHandler
URL: /api/nodes/interfaces/default_assignment/
Node collection default network interfaces handler
GET ()
May receive cluster_id parameter to filter list of nodes
Returns: Collection of JSONized Nodes interfaces.
Http:
• 200 (OK)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.node.NodesAllocationStatsHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/allocation/stats/
Node allocation stats handler
GET ()
Returns: Total and unallocated nodes count.
Http:
• 200 (OK)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Disks API
Handlers dealing with disks
class nailgun.api.v1.handlers.disks.NodeDisksHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/%node_id%/disks/
Node disks handler
GET (node_id)
Returns: JSONized node disks.
Http:
• 200 (OK)
• 404 (node not found in db)
PUT (node_id)
Returns: JSONized node disks.
Http:
• 200 (OK)
• 400 (invalid disks data specified)
• 404 (node not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.disks.NodeDefaultsDisksHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/%node_id%/disks/defaults/
Node default disks handler
GET (node_id)
Returns: JSONized node disks.
Http:
• 200 (OK)
• 404 (node or its attributes not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.disks.NodeVolumesInformationHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/nodes/%node_id%/volumes/
Node volumes information handler
GET (node_id)
Returns: JSONized volumes info for node.
Http:
• 200 (OK)
• 404 (node not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Network Configuration API
Handlers dealing with network configurations
class nailgun.api.v1.handlers.network_configuration.ProviderHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
Base class for network configuration handlers
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.network_configuration.NovaNetworkConfigurationHandler
Bases: nailgun.api.v1.handlers.network_configuration.ProviderHandler
URL: /api/clusters/%cluster_id%/network_configuration/nova_network/
Network configuration handler
GET (cluster_id)
Returns: JSONized network configuration for cluster.
Http:
• 200 (OK)
• 404 (cluster not found in db)
PUT (cluster_id)
Returns: JSONized Task object.
Http:
• 202 (network checking task created)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.network_configuration.NeutronNetworkConfigurationHandler
Bases: nailgun.api.v1.handlers.network_configuration.ProviderHandler
URL: /api/clusters/%cluster_id%/network_configuration/neutron/
Neutron Network configuration handler
GET (cluster_id)
Returns: JSONized network configuration for cluster.
Http:
• 200 (OK)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.network_configuration.NetworkConfigurationVerifyHandler
Bases: nailgun.api.v1.handlers.network_configuration.ProviderHandler
Network configuration verify handler base
PUT (cluster_id)
IMPORTANT: this method should be rewritten to be more RESTful
Returns: JSONized Task object.
Http:
• 202 (network checking task failed)
• 200 (network verification task started)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.network_configuration.NovaNetworkConfigurationVerifyHandler
Bases: nailgun.api.v1.handlers.network_configuration.NetworkConfigurationVerifyHandler
URL: /api/clusters/%cluster_id%/network_configuration/nova_network/verify/
Nova-Network configuration verify handler
PUT (cluster_id)
IMPORTANT: this method should be rewritten to be more RESTful
Returns: JSONized Task object.
Http:
• 202 (network checking task failed)
• 200 (network verification task started)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.network_configuration.NeutronNetworkConfigurationVerifyHandler
Bases: nailgun.api.v1.handlers.network_configuration.NetworkConfigurationVerifyHandler
URL: /api/clusters/%cluster_id%/network_configuration/neutron/verify/
Neutron network configuration verify handler
PUT (cluster_id)
IMPORTANT: this method should be rewritten to be more RESTful
Returns: JSONized Task object.
Http:
• 202 (network checking task failed)
• 200 (network verification task started)
• 404 (cluster not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Notifications API
Handlers dealing with notifications
class nailgun.api.v1.handlers.notifications.NotificationHandler
Bases: nailgun.api.v1.handlers.base.SingleHandler
URL: /api/notifications/%obj_id%/
Notification single handler
DELETE (obj_id)
Returns: Empty string
Http:
• 204 (object successfully deleted)
• 404 (object not found in db)
GET (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
PUT (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Tasks API
class nailgun.api.v1.handlers.tasks.TaskHandler
Bases: nailgun.api.v1.handlers.base.SingleHandler
URL: /api/tasks/%obj_id%/
Task single handler
DELETE (obj_id)
Returns: Empty string
Http:
• 204 (object successfully deleted)
• 404 (object not found in db)
GET (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
PUT (obj_id)
Returns: JSONized REST object.
Http:
• 200 (OK)
• 404 (object not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.tasks.TaskCollectionHandler
Bases: nailgun.api.v1.handlers.base.CollectionHandler
URL: /api/tasks/
Task collection handler
GET ()
May receive cluster_id parameter to filter list of tasks
Returns: Collection of JSONized Task objects.
Http:
• 200 (OK)
• 404 (task not found in db)
POST ()
Returns: JSONized REST object.
Http:
• 201 (object successfully created)
• 400 (invalid object data specified)
• 409 (object with such parameters already exists)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Logs API
Handlers dealing with logs
class nailgun.api.v1.handlers.logs.LogEntryCollectionHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/logs/
Log entry collection handler
GET ()
Receives the following parameters:
• date_before - get logs before this date
• date_after - get logs after this date
• source - source of logs
• node - node id (for getting node logs)
• level - log level (all levels shown by default)
• to - number of entries
• max_entries - max number of entries to load
Returns: Collection of log entries, the log file size, and whether there are new entries.
Http:
• 200 (OK)
• 400 (invalid date_before value)
• 400 (invalid date_after value)
• 400 (invalid source value)
• 400 (invalid node value)
• 400 (invalid level value)
• 400 (invalid to value)
• 400 (invalid max_entries value)
• 404 (log file not found)
• 404 (log files dir not found)
• 404 (node not found)
• 500 (node has no assigned ip)
• 500 (invalid regular expression in config)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.logs.LogPackageHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/logs/package/
Log package handler
PUT ()
Returns: JSONized Task object.
Http:
• 200 (task successfully executed)
• 400 (failed to execute task)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.logs.LogSourceCollectionHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/logs/sources/
Log source collection handler
GET ()
Returns: Collection of log sources (from settings)
Http:
• 200 (OK)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
class nailgun.api.v1.handlers.logs.LogSourceByNodeCollectionHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/logs/sources/nodes/%node_id%/
Log source by node collection handler
GET (node_id)
Returns: Collection of log sources by node (from settings)
Http:
• 200 (OK)
• 404 (node not found in db)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Version API
Product info handlers
class nailgun.api.v1.handlers.version.VersionHandler
Bases: nailgun.api.v1.handlers.base.BaseHandler
URL: /api/version/
Version info handler
GET ()
Returns: FUEL/FUELWeb commit SHA, release version.
Http:
• 200 (OK)
get_object_or_404 (obj, *args, **kwargs)
Get object instance by ID
Http: 404 when not found
Returns: object instance
get_objects_list_or_404 (obj, ids)
Get list of objects
Parameters:
• model -- model object
• ids -- list of ids
Http: 404 when not found
Returns: list of object instances
classmethod http (status_code, message='', headers=None)
Raise an HTTP status code, as specified. Useful for returning status codes like 401 Unauthorized
or 403 Forbidden.
Parameters:
• status_code -- the HTTP status code as an integer
• message -- the message to send along, as a string
• headers -- the headers to send along, as a dictionary
Objects Reference
Base Objects
Release-related Objects
Cluster-related Objects
Node-related Objects
Base Objects
Base classes for objects and collections
class nailgun.objects.base.NailgunObject
Bases: object
Base class for objects
serializer
Serializer class for object
alias of BasicSerializer
model = None
SQLAlchemy model for object
schema = {'properties': {}}
JSON schema for object
classmethod check_field (field)
Check if field is described in object's JSON schema
Parameters: field -- name of the field as string
Returns: None
Raises: errors.InvalidField
classmethod get_by_uid (uid, fail_if_not_found=False, lock_for_update=False)
Get instance by its uid (PK in case of SQLAlchemy)
Parameters:
• uid -- uid of object
• fail_if_not_found -- raise an exception if object is not found
• lock_for_update -- lock returned object for update (DB mutex)
Returns: instance of an object (model)
classmethod create (data)
Create object instance with specified parameters in DB
Parameters: data -- dictionary of key-value pairs as object fields
Returns: instance of an object (model)
classmethod update (instance, data)
Update existing instance with specified parameters
Parameters:
• instance -- object (model) instance
• data -- dictionary of key-value pairs as object fields
Returns: instance of an object (model)
classmethod delete (instance)
Delete object (model) instance
Parameters: instance -- object (model) instance
Returns: None
classmethod save (instance=None)
Save current changes for instance in DB. Current transaction will be committed (in case of
SQLAlchemy).
Parameters: instance -- object (model) instance
Returns: None
classmethod to_dict (instance, fields=None)
Serialize instance to Python dict
Parameters:
• instance -- object (model) instance
• fields -- exact fields to serialize
Returns: serialized object (model) as dictionary
classmethod to_json (instance, fields=None)
Serialize instance to JSON
Parameters:
• instance -- object (model) instance
• fields -- exact fields to serialize
Returns: serialized object (model) as JSON string
class nailgun.objects.base.NailgunCollection
Bases: object
Base class for object collections
single
Single object class
alias of NailgunObject
classmethod all ()
Get all instances of this object (model)
Returns: iterable (SQLAlchemy query)
classmethod order_by (iterable, order_by)
Order given iterable by specified order_by.
Parameters:
order_by (tuple of strings or string) -- tuple of model fields names or single
field name for ORDER BY criterion to SQLAlchemy query. If name starts with '-'
desc ordering applies, else asc.
classmethod filter_by (iterable, **kwargs)
Filter given iterable by specified kwargs. If iterable is None, filters all object instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• order_by -- tuple of model fields names for ORDER BY criterion to SQLAlchemy query. If name
starts with '-' desc ordering applies, else asc.
Returns: filtered iterable (SQLAlchemy query)
classmethod filter_by_not (iterable, **kwargs)
Filter given iterable by specified kwargs with negation. If iterable is None, filters all object
instances.
Parameters: iterable -- iterable (SQLAlchemy query)
Returns: filtered iterable (SQLAlchemy query)
classmethod lock_for_update (iterable)
Use SELECT FOR UPDATE on a given iterable (query). If iterable is None, applies to all object
instances
Parameters: iterable -- iterable (SQLAlchemy query)
Returns: filtered iterable (SQLAlchemy query)
classmethod filter_by_list (iterable, field_name, list_of_values, order_by=())
Filter given iterable by field_name against list_of_values. If iterable is None, filters all object
instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• field_name -- filtering field name
• list_of_values -- list of values for objects filtration
Returns: filtered iterable (SQLAlchemy query)
classmethod filter_by_id_list (iterable, uid_list)
Filter given iterable by list of uids. If iterable is None, filters all object instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• uid_list -- list of uids for objects
Returns: filtered iterable (SQLAlchemy query)
classmethod eager_base (iterable, options)
Eager load linked object instances (SQLAlchemy FKs). If iterable is None, applies to all object
instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• options -- list of sqlalchemy eagerload types
Returns: iterable (SQLAlchemy query)
classmethod eager (iterable, fields)
Eager load linked object instances (SQLAlchemy FKs). By default joinedload will be applied to
every field. If you want to use a custom eagerload method, use eager_base. If iterable is None,
applies to all object instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• fields -- list of links (model FKs) to eagerload
Returns: iterable (SQLAlchemy query)
classmethod to_list (iterable=None, fields=None)
Serialize iterable to list of dicts. If iterable is None, serializes all object instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• fields -- exact fields to serialize
Returns: collection of objects as a list of dicts
classmethod to_json (iterable=None, fields=None)
Serialize iterable to JSON. If iterable is None, serializes all object instances
Parameters:
• iterable -- iterable (SQLAlchemy query)
• fields -- exact fields to serialize
Returns: collection of objects as a JSON string
classmethod create (data)
Create object instance with specified parameters in DB
Parameters: data -- dictionary of key-value pairs as object fields
Returns: instance of an object (model)
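For illustration, the collection helpers above can be chained roughly like this (a sketch; NodeCollection is described in the Node-related objects section below, and its import path is an assumption):
from nailgun.objects import NodeCollection  # assumed import path

# filter_by(None, ...) filters across all Node instances
nodes = NodeCollection.filter_by(None, cluster_id=1)
nodes = NodeCollection.order_by(nodes, '-id')  # leading '-' means DESC
print(NodeCollection.to_json(nodes, fields=['id', 'status']))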
nailgun.objects.base.and_ (*clauses)
Produce a conjunction of expressions joined by AND.
E.g.:
from sqlalchemy import and_

stmt = select([users_table]).where(
    and_(
        users_table.c.name == 'wendy',
        users_table.c.enrolled == True
    )
)
The and_() conjunction is also available using the Python & operator (though note that compound
expressions need to be parenthesized in order to function with Python operator precedence
behavior):
stmt = select([users_table]).where(
    (users_table.c.name == 'wendy') &
    (users_table.c.enrolled == True)
)
The and_() operation is also implicit in some cases; the Select.where() method for example can
be invoked multiple times against a statement, which will have the effect of each clause being
combined using and_():
stmt = select([users_table]).\
    where(users_table.c.name == 'wendy').\
    where(users_table.c.enrolled == True)

See also: or_()
Release-related Objects
Release object and collection
class nailgun.objects.release.ReleaseOrchestratorData
Bases: nailgun.objects.base.NailgunObject
ReleaseOrchestratorData object
model
SQLAlchemy model
alias of ReleaseOrchestratorData
serializer
Serializer for ReleaseOrchestratorData
alias of ReleaseOrchestratorDataSerializer
schema = {'description': 'Serialized ReleaseOrchestratorData object', 'title':
'ReleaseOrchestratorData', 'required': ['release_id'], '$schema':
'http://json-schema.org/draft-04/schema#', 'type': 'object', 'properties':
{'puppet_manifests_source': {'type': 'string'}, 'repo_metadata': {'type': 'object'}, 'release_id':
{'type': 'number'}, 'id': {'type': 'number'}, 'puppet_modules_source': {'type': 'string'}}}
JSON schema
class nailgun.objects.release.Release
Bases: nailgun.objects.base.NailgunObject
Release object
model
SQLAlchemy model for Release
alias of Release
serializer
Serializer for Release
alias of ReleaseSerializer
schema = {'description': 'Serialized Release object', 'title': 'Release', 'required': ['name',
'operating_system'], '$schema': 'http://json-schema.org/draft-04/schema#', 'type': 'object',
'properties': {'roles': {'type': 'array'}, 'operating_system': {'type': 'string'}, 'name': {'type':
'string'}, 'networks_metadata': {'type': 'array'}, 'description': {'type': 'string'}, 'volumes_metadata':
{'type': 'object'}, 'wizard_metadata': {'type': 'object'}, 'state': {'enum': ['not_available',
'downloading', 'error', 'available'], 'type': 'string'}, 'version': {'type': 'string'}, 'roles_metadata':
{'type': 'object'}, 'modes_metadata': {'type': 'object'}, 'is_deployable': {'type': 'boolean'},
'clusters': {'type': 'array'}, 'id': {'type': 'number'}, 'attributes_metadata': {'type': 'object'},
'can_update_from_versions': {'type': 'array'}}}
Release JSON schema
classmethod create (data)
Create Release instance with specified parameters in DB. Corresponding roles are created in DB
using names specified in "roles" field. See update_roles()
Parameters: data -- dictionary of key-value pairs as object fields
Returns: Release instance
classmethod update (instance, data)
Update existing Release instance with specified parameters. Corresponding roles are updated in
DB using names specified in "roles" field. See update_roles()
Parameters:
• instance -- Release instance
• data -- dictionary of key-value pairs as object fields
Returns: Release instance
classmethod update_roles (instance, roles)
Update existing Release instance with specified roles. Previous ones are deleted.
IMPORTANT NOTE: attempting to remove roles that are already assigned to nodes will lead to an
Exception.
Parameters:
• instance -- Release instance
• roles -- list of new role names
Returns: None
classmethod is_deployable (instance)
Returns whether a given release is deployable.
Parameters: instance -- a Release instance
Returns: True if a given release is deployable; otherwise False
class nailgun.objects.release.ReleaseCollection
Bases: nailgun.objects.base.NailgunCollection
Release collection
single
Single Release object class
alias of Release
Cluster-related Objects
Cluster-related objects and collections
class nailgun.objects.cluster.Attributes
Bases: nailgun.objects.base.NailgunObject
Cluster attributes object
model
SQLAlchemy model for Cluster attributes
alias of Attributes
classmethod generate_fields (instance)
Generate field values for Cluster attributes using generators.
Parameters: instance -- Attributes instance
Returns: None
classmethod merged_attrs (instance)
Generates merged dict which includes generated Cluster attributes recursively updated by new
values from editable attributes.
Parameters: instance -- Attributes instance
Returns: dict of merged attributes
classmethod merged_attrs_values (instance)
Transforms raw dict of attributes returned by merged_attrs() into dict of facts for sending to
orchestrator.
Parameters: instance -- Attributes instance
Returns: dict of merged attributes
class nailgun.objects.cluster.Cluster
Bases: nailgun.objects.base.NailgunObject
Cluster object
model
SQLAlchemy model for Cluster
alias of Cluster
serializer
Serializer for Cluster
alias of ClusterSerializer
schema = {'$schema': 'http://json-schema.org/draft-04/schema#', 'type': 'object', 'description':
'Serialized Cluster object', 'properties': {'status': {'enum': ['new', 'deployment', 'stopped',
'operational', 'error', 'remove', 'update', 'update_error'], 'type': 'string'}, 'release_id': {'type':
'number'}, 'replaced_provisioning_info': {'type': 'object'}, 'replaced_deployment_info': {'type':
'object'}, 'fuel_version': {'type': 'string'}, 'id': {'type': 'number'}, 'is_customized': {'type':
'boolean'}, 'name': {'type': 'string'}, 'net_provider': {'enum': ['nova_network', 'neutron'], 'type':
'string'}, 'mode': {'enum': ['multinode', 'ha_full', 'ha_compact'], 'type': 'string'}, 'grouping': {'enum':
['roles', 'hardware', 'both'], 'type': 'string'}, 'pending_release_id': {'type': 'number'}}, 'title':
'Cluster'}
Cluster JSON schema
classmethod create (data)
Create Cluster instance with specified parameters in DB. This includes:
• creating Cluster attributes and generating default values (see create_attributes())
• creating NetworkGroups for Cluster
• adding default pending changes (see add_pending_changes())
• if "nodes" are specified in data then they are added to Cluster (see update_nodes())
Parameters: data -- dictionary of key-value pairs as object fields
Returns: Cluster instance
classmethod create_attributes (instance)
Create attributes for current Cluster instance and generate default values for them (see
Attributes.generate_fields())
Parameters: instance -- Cluster instance
Returns: None
classmethod get_default_editable_attributes (instance)
Get editable attributes from release metadata
Parameters: instance -- Cluster instance
Returns: Dict object
classmethod get_attributes (instance)
Get attributes for current Cluster instance
Parameters: instance -- Cluster instance
Returns: Attributes instance
classmethod get_network_manager (instance=None)
Get network manager for Cluster instance. If instance is None then the default NetworkManager is
returned
Parameters: instance -- Cluster instance
Returns: NetworkManager/NovaNetworkManager/NeutronManager
classmethod add_pending_changes (instance, changes_type, node_id=None)
Add pending changes for current Cluster. If node_id is specified then links created changes with
node.
Parameters:
• instance -- Cluster instance
• changes_type -- name of changes to add
• node_id -- node id for changes
Returns: None
classmethod clear_pending_changes (instance, node_id=None)
Clear pending changes for current Cluster. If node_id is specified then only clears changes
connected to this node.
Parameters:
• instance -- Cluster instance
• node_id -- node id for changes
Returns: None
classmethod update (instance, data)
Update Cluster object instance with specified parameters in DB. If "nodes" are specified in data
then they will replace existing ones (see update_nodes())
Parameters:
• instance -- Cluster instance
• data -- dictionary of key-value pairs as object fields
Returns: Cluster instance
classmethod update_nodes (instance, nodes_ids)
Update Cluster nodes by specified node IDs. Nodes with specified IDs will replace existing ones in
Cluster
Parameters:
• instance -- Cluster instance
• nodes_ids -- list of nodes ids
Returns: None
classmethod get_ifaces_for_network_in_cluster (instance, net)
Method for receiving node_id:iface pairs for all nodes in specific cluster
Parameters:
• instance -- Cluster instance
• net (str) -- Nailgun specific network name
Returns: List of node_id, iface pairs for all nodes in cluster.
classmethod should_assign_public_to_all_nodes (instance)
Determine whether Public network is to be assigned to all nodes in this cluster.
Parameters: instance -- cluster instance
Returns: True when Public network is to be assigned to all nodes
classmethod set_primary_role (instance, nodes, role_name)
Method for assigning the primary attribute for a specific role:
• verify that no primary attribute of the specific role is assigned to cluster nodes that have this
role in their role list or pending role list and are not marked for deletion
• if no primary role is assigned, filter nodes which have the current role in roles_list or
pending_role_list
• nodes in ready state get higher priority
• if the role was in primary_role_list, change the primary attribute for that association (same for
role_list); this is required because deployment_serializer is used by the CLI to generate
deployment info
Parameters:
• instance -- Cluster db object
• nodes -- list of Node db objects
• role_name -- string with known role name
classmethod set_primary_roles (instance, nodes)
Idempotent method for assigning the primary attribute for all roles that require it. To mark a role
as primary, add the has_primary: true attribute to the release (see the sketch below).
Parameters:
• instance -- Cluster db object
• nodes -- list of Node db objects
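For illustration, marking a role as primary in a release fixture might look like this (a sketch; the exact placement under roles_metadata is an assumption based on the attribute name above):
roles_metadata:
  controller:
    name: "Controller"
    description: "..."
    has_primary: true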
class nailgun.objects.cluster.ClusterCollection
Bases: nailgun.objects.base.NailgunCollection
Cluster collection
single
Single Cluster object class
alias of Cluster
nailgun.objects.cluster.or_ (*clauses)
Produce a conjunction of expressions joined by OR.
E.g.:
from sqlalchemy import or_

stmt = select([users_table]).where(
    or_(
        users_table.c.name == 'wendy',
        users_table.c.name == 'jack'
    )
)
The or_() conjunction is also available using the Python | operator (though note that compound
expressions need to be parenthesized in order to function with Python operator precedence
behavior):
stmt = select([users_table]).where(
    (users_table.c.name == 'wendy') |
    (users_table.c.name == 'jack')
)
See also: and_()
Node-related Objects
Node-related objects and collections
class nailgun.objects.node.Node
Bases: nailgun.objects.base.NailgunObject
Node object
model
SQLAlchemy model for Node
alias of Node
serializer
Serializer for Node
alias of NodeSerializer
schema = {'$schema': 'http://json-schema.org/draft-04/schema#', 'type': 'object', 'description':
'Serialized Node object', 'properties': {'status': {'enum': ['ready', 'discover', 'provisioning',
'provisioned', 'deploying', 'error'], 'type': 'string'}, 'os_platform': {'type': 'string'}, 'name': {'type':
'string'}, 'roles': {'type': 'array'}, 'pending_roles': {'type': 'array'}, 'agent_checksum': {'type':
'string'}, 'error_type': {'enum': ['deploy', 'provision', 'deletion'], 'type': 'string'}, 'pending_addition':
{'type': 'boolean'}, 'fqdn': {'type': 'string'}, 'error_msg': {'type': 'string'}, 'platform_name': {'type':
'string'}, 'kernel_params': {'type': 'string'}, 'mac': {'type': 'string'}, 'meta': {'type': 'object'},
'cluster_id': {'type': 'number'}, 'online': {'type': 'boolean'}, 'progress': {'type': 'number'},
'pending_deletion': {'type': 'boolean'}, 'group_id': {'type': 'number'}, 'id': {'type': 'number'},
'manufacturer': {'type': 'string'}}, 'title': 'Node'}
Node JSON schema
classmethod get_by_mac_or_uid (mac=None, node_uid=None)
Get Node instance by MAC or ID.
Parameters:
• mac -- MAC address as string
• node_uid -- Node ID
Returns: Node instance
classmethod get_by_meta (meta)
Search for instance using mac, node id or interfaces
Parameters: meta -- dict with nodes metadata
Returns: Node instance
classmethod search_by_interfaces (interfaces)
Search for instance using MACs on interfaces
Parameters: interfaces -- dict of Node interfaces
Returns: Node instance
classmethod should_have_public (instance)
Determine whether this node has Public network.
Parameters: instance -- Node DB instance
Returns: True when node has Public network
classmethod create (data)
Create Node instance with specified parameters in DB. This includes:
• generating its name by MAC (if name is not specified in data)
• adding node to Cluster (if cluster_id is not None in data) (see add_into_cluster()) with
specified roles (see update_roles() and update_pending_roles())
• creating interfaces for Node in DB (see update_interfaces())
• creating default Node attributes (see create_attributes())
• creating default volumes allocation for Node (see update_volumes())
• creating Notification about newly discovered Node (see create_discover_notification())
Parameters: data -- dictionary of key-value pairs as object fields
Returns: Node instance
classmethod create_attributes (instance)
Create attributes for Node instance
Parameters: instance -- Node instance
Returns: NodeAttributes instance
classmethod update_interfaces (instance)
Update interfaces for Node instance using Cluster network manager (see get_network_manager())
Parameters: instance -- Node instance
Returns: None
classmethod set_volumes (instance, volumes_data)
Set volumes for Node instance from JSON data. Adds pending "disks" changes for Cluster which
Node belongs to
Parameters:
• instance -- Node instance
• volumes_data -- JSON with new volumes data
Returns: None
classmethod update_volumes (instance)
Update volumes for Node instance. Adds pending "disks" changes for Cluster which Node belongs
to
Parameters: instance -- Node instance
Returns: None
classmethod create_discover_notification (instance)
Create notification about discovering new Node
Parameters: instance -- Node instance
Returns: None
classmethod update (instance, data)
Update Node instance with specified parameters in DB. This includes:
• adding node to Cluster (if cluster_id is not None in data) (see add_into_cluster())
• updating roles for Node if it belongs to Cluster (see update_roles() and update_pending_roles())
• removing node from Cluster (if cluster_id is None in data) (see remove_from_cluster())
• updating interfaces for Node in DB (see update_interfaces())
• creating default Node attributes (see create_attributes())
• updating volumes allocation for Node using Cluster's Release metadata (see update_volumes())
Parameters: data -- dictionary of key-value pairs as object fields
Returns: Node instance
classmethod reset_to_discover (instance)
Flush database objects which are not consistent with the actual node configuration in the event
of resetting the node to discover state
Parameters: instance -- Node database object
Returns: None
classmethod update_by_agent (instance, data)
Update Node instance with some specific cases for agent.
• don't update provisioning or error state back to discover
• don't update volume information if the disks array is empty
Parameters: data -- dictionary of key-value pairs as object fields
Returns: Node instance
classmethod update_roles (instance, new_roles)
Update roles for Node instance. Logs an error if node doesn't belong to Cluster
Parameters:
• instance -- Node instance
• new_roles -- list of new role names
Returns: None
classmethod update_pending_roles (instance, new_pending_roles)
Update pending_roles for Node instance. Logs an error if node doesn't belong to Cluster
Parameters:
• instance -- Node instance
• new_pending_roles -- list of new pending role names
Returns: None
classmethod add_into_cluster (instance, cluster_id)
Adds Node to Cluster by its ID. Also assigns networks by default for Node.
Parameters:
• instance -- Node instance
• cluster_id -- Cluster ID
Returns: None
classmethod add_pending_change (instance, change)
Add pending change into Cluster.
Parameters:
• instance -- Node instance
• change -- string value of cluster change
Returns: None
classmethod get_network_manager (instance=None)
Get network manager for Node instance. If instance is None then default NetworkManager is
returned
Parameters:
• instance -- Node instance
• cluster_id -- Cluster ID
Returns: None
classmethod remove_from_cluster (instance)
Remove Node from Cluster. Also drops networks assignment for Node and clears both roles and
pending roles
Parameters: instance -- Node instance
Returns: None
classmethod move_roles_to_pending_roles (instance)
Move roles to pending_roles
classmethod get_kernel_params (instance)
Return cluster kernel_params if they were not replaced by custom params.
class nailgun.objects.node.NodeCollection
Bases: nailgun.objects.base.NailgunCollection
Node collection
single
Single Node object class
alias of Node
classmethod eager_nodes_handlers (iterable)
Eager load object instances that are used in the nodes handler.
Parameters: iterable -- iterable (SQLAlchemy query)
Returns: iterable (SQLAlchemy query)
classmethod prepare_for_deployment (instances)
Prepare environment for deployment; assign management, public and storage IPs
classmethod prepare_for_provisioning (instances)
Prepare environment for provisioning; update FQDNs, assign admin IPs
classmethod lock_nodes (instances)
Lock node instances which were fetched before but are required to be locked
Parameters: instances -- list of nodes
Returns: list of locked nodes
Managing UI Dependencies
The UI has two types of dependencies: those managed by NPM (run on Node.js) and those
managed by Bower (run in the browser).
Managing NPM Packages
NPM packages such as grunt, bower and others are used in a development environment only. The
NPM packages in use are listed in the devDependencies section of a package.json file. To install
all required packages, run:
npm install
To use grunt you also need to install the grunt-cli package globally:
sudo npm install -g grunt-cli
To add a new package, it is not enough just to add a new entry to a package.json file because
npm-shrinkwrap is used to lock down package versions. First you need to install the clingwrap
package globally:
sudo npm install -g clingwrap
Then you need to remove the existing npm-shrinkwrap.json file:
rm npm-shrinkwrap.json
Then make required changes to a package.json file and run:
npm install
to remove old packages and install new ones. Then regenerate npm-shrinkwrap.json by running:
npm shrinkwrap --dev
clingwrap npmbegone
Managing Bower Packages
Bower is used to download libraries that run in the browser. To add a new package, just add an
entry to the dependencies section of a bower.json file and run:
to download it. The new package will be placed in the nailgun/static/js/libs/bower/ directory. If the
package contains more than one JS file, you must add a new entry to the exportsOverride section with
a path to the appropriate file, in order to prevent unwanted JS files from appearing in the final UI
build.
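A hypothetical bower.json fragment illustrating this (the library name and file path are placeholders; the exportsOverride layout follows the grunt-bower-task convention):
{
  "dependencies": {
    "some-library": "1.2.3"
  },
  "exportsOverride": {
    "some-library": {
      "js": "dist/some-library.js"
    }
  }
}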
If a library does not exist in the bower repository, it should be placed in the
nailgun/static/js/libs/custom/ directory.
Code testing policy
When writing tests, please note the following rules:
1. Each code change MUST be covered with tests. The test for a specific code change must fail if
that change is reverted, i.e. the test must really cover the code change and not the general case.
Bug fixes should have tests for the failing case.
2. The tests MUST be in the same patchset with the code changes.
3. It's permitted not to write tests in extreme cases. The extreme cases are:
• hot-fix / bug-fix with Critical status.
• patching during Feature Freeze (FF) or Hard Code Freeze (HCF).
In this case, a request for writing tests should be reported as a bug with the technical-debt tag. It
has to be related to the bug which was fixed by the patchset that didn't have the tests included.
4. Before writing tests please consider which type(s) of testing is suitable for the unit/module you're
covering.
5. Test coverage should not be decreased.
6. The Nailgun application can be sliced up into three layers (Presentation, Object, Model). Consider
using unit testing if it is performed within one of the layers or implementing mock objects is not
complicated.
7. The tests have to be isolated. The order and count of executions must not influence test results.
8. Tests must be repetitive and must always pass regardless of how many times they are run.
9. Parametrize tests to avoid testing the same behaviour many times with different data (see the
sketch after this list). This gives additional flexibility in the methods' usage.
10. Follow the DRY principle in test code. If common code parts are present, please extract them to
a separate method/class.
11. Unit tests are grouped by namespaces as the corresponding unit. For instance, if the unit is
located at nailgun/db/dl_detector.py, the corresponding test would be placed in
nailgun/test/unit/nailgun.db/test_dl_detector.py
12. Integration tests are grouped at the discretion of the developer.
13. Consider implementing performance tests for the cases:
• a new handler is added which depends on the number of resources in the database.
• new logic is added which parses/operates on elements like nodes.
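As a minimal illustration of point 9, one test body can be driven by several data sets (a sketch; calc_swap_size here is a hypothetical function under test, not part of Nailgun):
import unittest

def calc_swap_size(ram_mb):
    # Hypothetical function under test: swap is at least 1 GB
    return max(ram_mb, 1024)

class TestSwapSize(unittest.TestCase):
    def test_swap_size_parametrized(self):
        # One assertion loop instead of a copy-pasted test per data set
        cases = [
            (512, 1024),   # small RAM still gets the 1 GB floor
            (4096, 4096),  # large RAM maps to itself
        ]
        for ram, expected in cases:
            self.assertEqual(calc_swap_size(ram), expected)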
Nailgun Customization Instructions
Creating Partitions on Nodes
Fuel generates Anaconda Kickstart scripts for Red Hat based systems and preseed files for Ubuntu to
partition block devices on new nodes. Most of the work is done in the pmanager.py Cobbler script
using the data from the "ks_spaces" variable generated by the Nailgun VolumeManager class based
on the volumes metadata defined in the openstack.yaml release fixture.
Volumes are created following best practices for OpenStack and other components. The following
volume types are supported:
vg
an LVM volume group that can contain one or more volumes with type set to "lv"
partition
plain non-LVM partition
raid
a Linux software RAID-1 array of LVM volumes
A typical slave node will always have an "os" volume group and one or more volumes of other
types, depending on the roles assigned to that node and the role-to-volumes mapping defined in
the "volumes_roles_mapping" section of openstack.yaml.
There are a few different ways to add another volume to a slave node:
1. Add a new logical volume definition to one of the existing LVM volume groups.
2. Create a new volume group containing your new logical volumes.
3. Create a new plain partition.
Adding an LV to an Existing Volume Group
If you need to add a new volume to an existing volume group, for example "os", your volume
definition in openstack.yaml might look like this:
- id: "os"
type: "vg"
min_size: {generator: "calc_min_os_size"}
label: "Base System"
volumes:
- mount: "/"
type: "lv"
name: "root"
size: {generator: "calc_total_root_vg"}
file_system: "ext4"
- mount: "swap"
type: "lv"
name: "swap"
size: {generator: "calc_swap_size"}
file_system: "swap"
- mount: "/mnt/some/path"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "calc_LOGICAL_VOLUME_size"
generator_args: ["arg1", "arg2"]
file_system: "ext4"
Make sure that your logical volume name ("LOGICAL_VOLUME_NAME" in the example above) is not
the same as the volume group name ("os"), and refer to the current version of openstack.yaml for
the up-to-date format.
Adding Generators to Nailgun VolumeManager
The "size" field in a volume definition can be defined either directly as an integer number in
megabytes, or indirectly via a so called generator. Generator is a Python lambda that can be called to
calculate logical volume size dynamically. In the json example above size is defined as a dictionary
with two keys: "generator" is the name of the generator lambda and "generator_args" is the list of
arguments that will be passed to the generator lambda.
There is the method in the VolumeManager class where generators are defined. New volume
generator 'NEW_GENERATOR_TO_CALCULATE_SIZ' needs to be added in the generators dictionary
inside this method.
class VolumeManager(object):
...
def call_generator(self, generator, *args):
generators = {
...
'NEW_GENERATOR_TO_CALCULATE_SIZE': lambda: 1000,
...
}
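If the new generator needs arguments, the generator_args from the volume definition are passed through to the lambda. A minimal sketch, assuming call_generator invokes the lambda with *args as the dictionary layout above suggests:
class VolumeManager(object):
    def call_generator(self, generator, *args):
        generators = {
            # Hypothetical generator consuming generator_args
            'calc_LOGICAL_VOLUME_size': lambda base, per_unit: int(base) + int(per_unit),
        }
        return generators[generator](*args)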
Creating a New Volume Group
Another way to add a new volume to slave nodes is to create a new volume group and to define
one or more logical volumes inside the volume group definition:
- id: "NEW_VOLUME_GROUP_NAME"
type: "vg"
min_size: {generator: "calc_NEW_VOLUME_NAME_size"}
label: "Label for NEW VOLUME GROUP as it will be shown on UI"
volumes:
- mount: "/path/to/mount/point"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "another_generator_to_calc_LOGICAL_VOLUME_size"
generator_args: ["arg"]
file_system: "xfs"
Creating a New Plain Partition
Some node roles may be incompatible with LVM and would require plain partitions. If that's the case,
you may have to define a standalone volume with type "partition" instead of "vg":
- id: "NEW_PARTITION_NAME"
type: "partition"
min_size: {generator: "calc_NEW_PARTITION_NAME_size"}
label: "Label for NEW PARTITION as it will be shown on UI"
mount: "none"
disk_label: "LABEL"
file_system: "xfs"
Note how you can set the mount point to "none" and define a disk label to identify the partition
instead. It is only possible to set a disk label on a formatted partition, so you have to set the
"file_system" parameter to use disk labels.
Updating the Node Role to Volumes Mapping
Unlike a new logical volume added to a pre-existing logical volume group, a new logical volume group
or partition will not be allocated on the node unless it is included in the role-to-volumes mapping
corresponding to one of the node's roles, like this:
volumes_roles_mapping:
controller:
- {allocate_size: "min", id: "os"}
- {allocate_size: "all", id: "image"}
compute:
...
• controller - the role for which partitioning information is given
• id - the id of a volume group or plain partition
• allocate_size - can be "min" or "all":
  • min - allocate the volume with its minimal size
  • all - allocate all free space for the volume; if several volumes have this key, the free space
    will be allocated equally between them
Setting Volume Parameters from Nailgun Settings
In addition to VolumeManager generators, it is also possible to define sizes or other parameters
in the nailgun configuration file (/etc/nailgun/settings.yaml). All fixture files are templated using
the Jinja2 templating engine just before being loaded into the nailgun database. For example, we
can define the mount point for a new volume as follows:
"mount": "{{settings.NEW_LOGICAL_VOLUME_MOUNT_POINT}}"
Of course, NEW_LOGICAL_VOLUME_MOUNT_POINT must be defined in the settings file.
Nailgun is the core of FuelWeb. To allow enterprise features to be easily connected, and to let the
open source community extend it as well, Nailgun must have a simple, very well defined and
documented core with great pluggable capabilities.
Reliability
All software contains bugs and may fail, and Nailgun is no exception to this rule. In reality, it is not
possible to cover all failure scenarios, or even to come close to 100%. The question is how we can
design the system so that bugs in one module do not damage the whole system.
An example from Nailgun's past: the agent collected hardware information, including the
current_speed parameter of the interfaces. One of the interfaces had current_speed=0. At the
registration attempt, Nailgun's validator checked that current_speed > 0 and raised an InvalidData
exception, which declined node discovery. current_speed is one of the attributes which we could
easily skip; it is not even used for deployment in any way at the moment and is only shown to the
user. But it prevented node discovery, and it made the server unusable.
Another example: due to the coincidence of a bug and wrong metadata on one of the nodes, a GET
request on that node would return 500 Internal Server Error. It looks like it should affect only that
one node, and logically we could remove such a failing node from the environment to get it
discovered again. However, UI + API handlers were written in the following way:
• UI calls /api/nodes to fetch info about all nodes, just to show how many nodes are allocated and
how many are not
• NodesCollectionHandler would return 500 if any of the nodes raised an exception
It is easy to guess that the whole UI was broken by just one failed node. It was impossible to do
any action on the UI.
These two examples give us the starting point to rethink how to avoid a Nailgun crash just because
one of the meta attributes is wrong.
First, we must divide the meta attributes discovered by the agent into categories:
• absolutely required for node discovery (i.e. MAC address)
• non-required for discovery
• required for deployment (i.e. disks)
• non-required for deployment (i.e. current_speed)
Second, we must refactor the UI to fetch only the information required, not the whole DB just to
show two numbers. To be more specific, we have to make sure that issues in one environment do
not affect another environment. Such a refactoring will require additional handlers in Nailgun, as
well as some additions such as pagination. From the Nailgun side, it is a bad idea to fail the whole
CollectionHandler if one of the objects fails to calculate some attribute. My (mihgen) idea is to
simply set the attribute to Null if it failed to calculate, and program the UI to handle it properly (a
sketch of this follows below). Unit tests must help in testing this.
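A minimal sketch of that idea (not actual Nailgun code): when one object fails to calculate an attribute, null it out instead of failing the whole collection response:
def safe_attr(obj, name):
    # Swallow per-attribute failures so one bad node cannot break
    # the whole /api/nodes response; the UI must render null safely.
    try:
        return getattr(obj, name)
    except Exception:
        return None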
Another idea is to limit the /api/nodes, /api/networks and other calls to work only if the cluster_id
param is provided, whether set to None or to one of the cluster IDs. In such a way we can be sure
that one environment will not be able to break the whole UI.
Creating roles
Each release has its own role list which can be customized. A plain list of roles is stored in the "roles"
section of each release in the openstack.yaml:
roles:
- controller
- compute
- cinder
The order in which the roles are listed here determines the order in which they are displayed on the
UI.
For each role in this list there should also be an entry in the "roles_metadata" section. It defines
the role name, description and conflicts with other roles:
roles_metadata:
controller:
name: "Controller"
description: "..."
conflicts:
- compute
compute:
name: "Compute"
description: "..."
conflicts:
- controller
cinder:
name: "Storage - Cinder LVM"
description: "..."
"conflicts" section should contain a list of other roles that cannot be placed on the same node. In this
example, "controller" and "compute" roles cannot be combined.
Extending OpenStack Settings
Each release has a list of OpenStack settings that can be customized. The settings configuration is
stored in the "attributes_metadata.editable" release section in the openstack.yaml file.
Settings are divided into groups. Each group should have a "metadata" section with the following
attributes:
metadata:
toggleable: true
enabled: false
weight: 40
• toggleable defines an ability to enable/disable the whole setting group on UI (checkbox control is
presented near a setting group label)
• enabled indicates whether the group is checked on the UI
• weight defines the order in which this group is displayed on the tab.
• restrictions: see restrictions.
Other sections of a setting group represent separate settings. A setting structure includes the
following attributes:
syslog_transport:
value: "tcp"
label: "Syslog transport protocol"
description: ""
weight: 30
type: "radio"
values:
- data: "udp"
label: "UDP"
description: ""
restrictions:
- "cluster:net_provider != 'neutron'"
- data: "tcp"
label: "TCP"
description: ""
regex:
source: "^[A-z0-9]+$"
error: "Invalid data"
• label is a setting title that is displayed on UI
• weight defines the order in which this setting is displayed in its group. This attribute is desirable
• type defines the type of UI control to use for the setting
• regex section is applicable for settings of "text" type. "regex.source" is used when validating with
a regular expression. "regex.error" contains a warning displayed near invalid field
• restrictions: see restrictions.
• description section should also contain information about setting restrictions (dependencies,
conflicts)
• values list is needed for settings of "radio" or "select" type to declare its possible values. Options
from the "values" list also support dependencies and conflicts declaration.
Restrictions
Restrictions define when settings and setting groups should be available. Each restriction is defined
as a condition with optional action and message:
restrictions:
- condition: "settings:common.libvirt_type.value != 'kvm'"
message: "KVM only is supported"
- condition: "not ('experimental' in version:feature_groups)"
action: hide
• condition is an expression written in Expression DSL. If returned value is true, then action is
performed and message is shown (if specified).
• action defines what to do if condition is satisfied. Supported values are "disable", "hide" and
"none". "none" can be used just to display message. This field is optional (default value is
"disable").
• message is a message that is shown if condition is satisfied. This field is optional.
There are also short forms of restrictions:
restrictions:
- "settings:common.libvirt_type.value != 'kvm'": "KVM only is supported"
- "settings:storage.volumes_ceph.value == true"
Expression Syntax
Expression DSL can describe arbitrarily complex conditions that compare fields of models and scalar
values.
Supported types are:
• Number (123, 5.67)
• String ("qwe", 'zxc')
• Boolean (true, false)
• Null value (null)
• ModelPath (settings:common.libvirt_type.value, cluster:net_provider)
ModelPaths consist of a model name and a field name separated by ":". Nested fields (like in settings)
are supported, separated by ".". Models available for usage are "cluster", "settings",
"networking_parameters" and "version".
Supported operators are:
• "==". Returns true if operands are equal:
settings:common.libvirt_type.value == 'qemu'
• "!=". Returns true if operands are not equal:
cluster:net_provider != 'neutron'
• "in". Returns true if the right operand (Array or String) contains the left operand:
'ceph-osd' in release:roles
• Boolean operators: "and", "or", "not":
cluster:mode == "ha_compact" and not (settings:common.libvirt_type.value == 'kvm' or 'experimental' in version:feature_groups)
Parentheses can be used to override the order of precedence.
Bonding in UI/Nailgun
Abstract
NIC bonding allows you to aggregate multiple physical links into one logical link to increase speed and provide fault tolerance.
Design docs
https://etherpad.openstack.org/p/fuel-bonding-design
Fuel Support
The Puppet module L23network has support for OVS and native Linux bonding, so we can use it for both NovaNetwork and Neutron deployments. Only native OVS bonding (Neutron only) is implemented in Nailgun now. VLAN splinters cannot be used on bonds now. Three modes are supported now: 'active-backup', 'balance-slb', 'lacp-balance-tcp' (see nailgun.consts.OVS_BOND_MODES).
Deployment serialization
The most detailed docs on deployment serialization for Neutron are here:
1. http://docs.mirantis.com/fuel/fuel-4.0/reference-architecture.html#advanced-network-configuration-using-open-vswit
2. https://etherpad.openstack.org/p/neutron-orchestrator-serialization
Changes related to bonding are in the “transformations” section:
1. "add-bond" section
{
"action": "add-bond",
"name": "bond-xxx", # name is generated in UI
"interfaces": [], # list of NICs; ex: ["eth1", "eth2"]
"bridge": "br-xxx",
"properties": [] # info on bond's policy, mode; ex: ["bond_mode=active-backup"]
}
2. Instead of creating separate OVS bridges for every bonded NIC, we need to create one bridge for the bond itself:
{
"action": "add-br",
"name": "br-xxx"
}
REST API
NodeNICsHandler and NodeCollectionNICsHandler are used for bond creation, update and removal. Operations with bonds and network assignment are done in a single-request fashion: the creation of a bond and the appropriate network reassignment are done in one request. Request parameters must contain sufficient and consistent data for constructing the new interface topology and properly assigning all of the node's networks.
Request/response data example:
[
{
"name": "ovs-bond0", # only name is set for bond, not id
"type": "bond",
"mode": "balance-slb", # see nailgun.consts.OVS_BOND_MODES for modes list
"slaves": [
{"name": "eth1"}, # only “name” must be in slaves list
{"name": "eth2"}],
"assigned_networks": [
{
"id": 9,
"name": "public"
}
]
},
{
"name": "eth0",
"state": "up",
"mac": "52:54:00:78:55:68",
"max_speed": null,
"current_speed": null,
"assigned_networks": [
{
"id": 1,
"name": "fuelweb_admin"
},
{
"id": 10,
"name": "management"
},
{
"id": 11,
"name": "storage"
}
],
"type": "ether",
"id": 5
},
{
"name": "eth1",
"state": "up",
"mac": "52:54:00:88:c8:78",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 2
},
{
"name": "eth2",
"state": "up",
"mac": "52:54:00:03:d1:d2",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 1
}
]
The following fields are required in the request body for a bond interface: name, type, mode, slaves. The following fields are required in the request body for a NIC: id, type.
Nailgun DB
Now we have separate models for bond interfaces and NICs: NodeBondInterface and NodeNICInterface. A node's interfaces can be accessed through Node.nic_interfaces and Node.bond_interfaces separately, or all together through the read-only Node.interfaces property. The relationship between them (bond:NIC ~ 1:M) is expressed in the "slaves" field of the NodeBondInterface model. Two more new fields in NodeBondInterface are "flags" and "mode". A bond's "mode" can accept values from nailgun.consts.OVS_BOND_MODES. A bond's "flags" are not in use now. The read-only "type" property indicates whether an interface is a bond or a NIC (see nailgun.consts.NETWORK_INTERFACE_TYPES).
Contributing to Fuel Library
This chapter explains how to add a new module or project to Fuel Library, how to integrate it with other components, and how to avoid common problems and potential mistakes. Fuel Library is a very big project, and even an experienced Puppet user may have trouble understanding its structure and internal workings.
Adding new modules to fuel-library
Case A. Pulling in an existing module
If you are adding a module that is the work of another project and is already tracked in a separate repo, then:
1. Create a review request with an unmodified copy of the upstream module from whichever point you are working from, and no other related modifications.
• This review should also contain the commit hash from the upstream repo in the commit message.
• The review should be evaluated to determine its suitability and either rejected (for licensing,
code quality, outdated version requested) or accepted without requiring modifications.
• The review should not include code that calls this new module.
2. Any changes necessary to make it work with Fuel should then be proposed as dependent changes.
Case B. Adding a new module
If you are adding a new module that is work purely for Fuel and will not be tracked in a separate repo, then submit incremental reviews that consist of a working implementation of your module's features.
If you have features that are necessary but do not fully work yet, prevent them from running during the deployment, as sketched below. Once your feature is complete, submit a review to activate the module during deployment.
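A minimal sketch of such gating, using a class parameter as the switch (the class and parameter names here are illustrative, not existing Fuel code):
class myfeature (
  $enabled = false,
) {
  # nothing inside this guard runs until the feature is explicitly enabled
  if $enabled {
    # resources implementing the not-yet-complete feature go here
    notify { 'myfeature is active' : }
  }
}
Once the feature is finished, a follow-up review can flip the default or declare the class with enabled => true.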
Contributing to existing fuel-library modules
As developers of Puppet modules, we collaborate with the Puppet OpenStack community. As a result, we contribute all of the improvements, fixes and customizations we make to improve Fuel back to the upstream modules. This implies that every contributor must follow Puppet DSL basics, the puppet-openstack dev docs and the Puppet rspec tests requirements.
The most common and general rule is that upstream modules should be modified only when the bugfixes and improvements could benefit everyone in the community, and an appropriate patch should be proposed to the upstream project before the Fuel project.
In other cases (like applying some very specific custom logic or settings) the contributor should submit patches to the openstack::* classes.
Fuel Library includes custom modules as well as ones forked from upstream sources. Note that the Modulefile, if one exists, should be used to recognize whether a given module is a forked upstream one or not. If there is no Modulefile in the module's directory, the contributor may submit a patch directly to this module in Fuel Library. Otherwise, he or she should submit the patch to the upstream module first, and once it is merged or has received a +2 from a core reviewer, the patch should be backported to Fuel Library as well. Note that the patch submitted for Fuel Library should contain in its commit message the upstream commit SHA, a link to the GitHub pull request (if the module is not on stackforge), or the Change-Id of the Gerrit patch.
The Puppet modules structure
First let's start with the Puppet module structure. If you want to contribute your code into the Fuel Library, it should be organized into a Puppet module. Modules are self-contained sets of Puppet code that are usually made to perform a specific function. For example, you could have a module for every service you are going to configure or for every part of your project. Usually it's a good idea to make a module independent, but sometimes it may require, or be required by, other modules, so a module can be thought of as a library.
The most important part of every Puppet module is its manifests folder. This folder contains the Puppet classes and definitions, which in turn contain the resources managed by this module. Modules and classes also form namespaces. Each class or definition should be placed in its own file inside the manifests folder, and this file should be named the same as the class or definition. A module should have a top-level class that serves as the module's entry point and is named the same as the module. This class should be placed into the init.pp file. This example module shows the standard structure every Puppet module should follow:
example
example/manifests/init.pp
example/manifests/params.pp
example/manifests/client.pp
example/manifests/server
example/manifests/server/vhost.pp
example/manifests/server/service.pp
example/templates
example/templates/server.conf.erb
example/files
example/files/client.data
The first file in the manifests folder is named init.pp and should contain the entry-point class of this module. This class should be named the same as our module:
class example {
}
The second file is params.pp. This file is not mandatory, but it is often used to store different configuration values and parameters used by other classes of the module. For example, it could contain the service name and package name of our hypothetical example module. There could be conditional statements if you need to change default values in different environments. The params class should be named as a child of the module's namespace, like all other classes of the module:
class example::params {
$service = 'example'
$server_package = 'example-server'
$client_package = 'example-client'
$server_port = '80'
}
All other files inside the manifests folder contain classes as well and can perform any action you might want to identify as a separate piece of code. This generally falls into sub-classes that don't require their users to configure the parameters explicitly, or simply optional classes that are not required in all cases. In the following example, we create a client class to define a client package that will be installed, placed into a file called client.pp:
class example::client {
include example::params
package { $example::params::client_package :
ensure => installed,
}
}
As you can see, we have used the package name from the params class. Consolidating all values that might require editing into a single class, as opposed to hardcoding them, reduces the effort required to maintain and develop the module in the future. If you are going to use any values from the params class, don't forget to include it first, to force its code to execute and create all the required variables.
You can add more levels into the namespace structure if you want. Let's create a server folder inside our manifests folder and add a service.pp file there. It will be responsible for installing and running the server part of our imaginary software. Placing the class inside a subfolder adds one level to the name of the contained class:
class example::server::service (
  $port = $example::params::server_port,
) inherits example::params {

  $package = $example::params::server_package
  $service = $example::params::service

  package { $package :
    ensure => installed,
  }

  service { $service :
    ensure     => running,
    enable     => true,
    hasstatus  => true,
    hasrestart => true,
  }

  file { 'example_config' :
    ensure  => present,
    path    => '/etc/example.conf',
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('example/server.conf.erb'),
  }

  file { 'example_config_dir' :
    ensure => directory,
    path   => '/etc/example.d',
    owner  => 'example',
    group  => 'example',
    mode   => '0755',
  }

  # install the package first, then the config; config changes restart the service
  Package[$package] -> File['example_config', 'example_config_dir'] ~> Service[$service]
}
This example is a bit more complex. Let's see what it does.
The class example::server::service is parametrized and accepts one parameter: the port to which the server process should bind. It also uses a popular "smart defaults" hack: the class inherits the params class and uses its values as defaults only if no port parameter is provided. In this case you can't use include example::params to load the default values, because parameter defaults are evaluated before the class body runs; the inherits example::params clause of the class definition makes them available instead.
Then, inside our class, we take several variables from the params class and declare them as variables of the local scope. This is a convenience hack to make their names shorter.
Next we declare our resources: a package, a service, a config file and a config dir. The package resource installs the package whose name is taken from the variable, if it is not already installed. The file resources create the config file and config dir, and the service resource starts the daemon process and enables its autostart.
Last but not least, this class declares dependencies. We have used the "chain" syntax to specify the order of evaluation of these resources. Of course, it's important to first install the package, then the configuration files, and only then start the service. Trying to start the service before installing the package will definitely fail, so we need to tell Puppet that there are dependencies between our resources.
The arrow operator with a tilde instead of a minus sign (~>) declares not only a dependency relationship but also notifies the resource to the right of the arrow to refresh itself. In our case, any change in the configuration file makes the service restart and load the new configuration file. A service resource reacts to a notification event by restarting the managed service; other resources may perform different actions instead, if they support refreshing.
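For example, an exec resource with refreshonly set does nothing on a normal run and only fires when a resource it subscribes to changes (a generic Puppet sketch, not code from the Fuel Library; the command path is illustrative):
exec { 'reload_example' :
  command     => '/usr/sbin/service example reload',
  refreshonly => true,
  subscribe   => File['example_config'],
}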
Ok, but where do we get our configuration file content from? It's generated by the template function. Templates are text files with Ruby ERB language tags that are used to generate the needed text file from pre-defined text and some variables from the manifest.
These template files are located inside the templates folder of the module and usually have the erb extension. Calling the template function with a module-name-prefixed template name will load this template and compile it using variables from the local scope of the class the function was called from. For example, we want to set the bind port of our service in its configuration file, so we write a template like this and save it inside the templates folder as the server.conf.erb file:
bind_port = <%= @port %>
The template function will replace the port tag with the value of the port variable from our class during Puppet's catalog compilation.
Ok, now we have our service running and the client package installed. But what if our service needs several virtual hosts? Classes cannot be declared several times with different parameters, so this is where definitions come to the rescue. Definitions are very similar to classes, but unlike classes they have titles like resources do and can be used many times with different titles to produce many instances of the managed resources. Defined types can also accept parameters like parametrized classes do.
Definitions are placed in individual files inside the manifests directory, the same as classes, and are similarly named using the namespace hierarchy. Let's create our vhost definition:
define example::server::vhost (
  $path = '/var/data',
) {
  include example::params
  $config = "/etc/example.d/${title}.conf"
  $service = $example::params::service

  file { $config :
    ensure  => present,
    owner   => 'example',
    group   => 'example',
    mode    => '0644',
    content => template('example/vhost.conf.erb'),
  }

  File[$config] ~> Service[$service]
}
This defined type only creates a file resource whose name is populated from the title used when the type is declared, and sets a notification relationship with the service to make it restart when a vhost file changes. This defined type can be used by other classes like a simple resource type to create as many vhost files as we need:
example::server::vhost { 'mydata' :
path => '/path/to/my/data',
}
Defined types can form relationships in the same way as resources do, but you need to capitalize every element of the path to make a reference:
File['/path/to/my/data'] -> Example::Server::Vhost['mydata']
Now we can work with text files using templates, but what if we need to manage binary data files? Binary files, or text files that will always be the same, can be placed into the files directory of our module and then served by a file resource.
Let's imagine that our client package needs some binary data file that we need to redistribute with it. Let's add a file resource to our example::client class:
file { 'example_data' :
  path   => '/var/lib/example.data',
  owner  => 'example',
  group  => 'example',
  mode   => '0644',
  source => 'puppet:///modules/example/client.data',
}
We have specified the source as a special puppet URL scheme with the module's and file's names. The file will be placed in the specified location during the Puppet run. On each run Puppet checks the file's checksum and overwrites the file if it changes, so don't use this method with mutable data. Puppet's file serving works in both client-server and masterless modes.
Ok, we have all the classes and resources we need to manage our hypothetical example service. Let's try to put everything together. Our example class defined inside init.pp is still empty, so we can use it to declare all the other classes:
class example {
include example::params
include example::client
class { 'example::server::service' :
port => '100',
}
example::server::vhost { 'site1' :
path => '/data/site1',
}
example::server::vhost { 'site2' :
path => '/data/site2',
}
example::server::vhost { 'test' :
path => '/data/test',
}
}
Now we have the entire module packed inside the example class, and we can just include this class on any node where we want to see our service running. The declaration of the parametrized class overrides the default port number from the params file, and we have three separate virtual hosts for our service. The client package is also included in this class.
Using Fuel settings
Fuel uses a special way to pass settings from Nailgun to Puppet manifests. Before the deployment process starts, Astute uploads all the settings each server should have to the file /etc/astute.yaml placed on every node. When Puppet runs, Facter reads this file entirely into a single fact, $astute_settings_yaml. These settings are then parsed by the parseyaml function at the very beginning of the site.pp file and turned into a rich data structure called $fuel_settings. All of the settings used during node deployment are stored there and can be used anywhere in Puppet code. For example, single top-level variables are available as $::fuel_settings['debug']. More complex structures are also available as values of $::fuel_settings hash keys and can be accessed like usual hashes and arrays. There are also
a lot of aliases and generated values that help you get the values you need more easily. You can always create variables from any of the settings hash keys and work with these variables within your local scope or from other classes using fully qualified paths:
$debug = $::fuel_settings['debug']
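Nested structures are read the same way as ordinary hashes and arrays. The key names in this sketch are illustrative; check the generated /etc/astute.yaml on a node for the real layout:
# 'syslog' and its sub-key are example names, not guaranteed in every release
$syslog_transport = $::fuel_settings['syslog']['syslog_transport']
# array elements can be indexed as usual
$first_node = $::fuel_settings['nodes'][0]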
Some variables and structures are generated from the settings hash by filtering and transformation functions. For example, there is the $node structure:
$node = filter_nodes($nodes_hash, 'name', $::hostname)
It contains only the settings of the current node, filtered from the hash of all nodes.
If you are going to use your module inside the Fuel Library and need some settings, you can just get them from this $::fuel_settings structure. Most variables related to network and OpenStack service configuration are already available there, and you can use them as they are. But if your module requires some additional or custom settings, you'll have to either use Custom Attributes by editing the JSON files before deployment, or, if you are integrating your project with the Fuel Library, contact the Fuel UI developers and ask them to add your configuration options to the Fuel settings panel.
Once you have finished defining all the classes you need inside your module, you can add the module's declaration either to the Fuel manifests such as cluster_simple.pp and cluster_ha.pp located inside the osnailyfacter/manifests folder, or to other classes that are already being used, if your additions are related to them.
Example module
Let's demonstrate how to add a new module to the Fuel Library by adding a simple class that will change the terminal color on Red Hat based systems. Our module will be named profile and have only one class:
profile
profile/manifests
profile/manifests/init.pp
profile/files
profile/files/colorcmd.sh
init.pp could have a class definition like this:
class profile {
  if $::osfamily == 'RedHat' {
    file { 'colorcmd.sh' :
      ensure => present,
      owner  => 'root',
      group  => 'root',
      mode   => '0644',
      path   => '/etc/profile.d/colorcmd.sh',
      source => 'puppet:///modules/profile/colorcmd.sh',
    }
  }
}
This class just downloads the colorcmd.sh file and places it in the defined location when run on a Red Hat or CentOS system. The profile module can be added to the Fuel modules by uploading its folder to /etc/puppet/modules on the Fuel Master node.
Now we need to declare this module somewhere inside the Fuel manifests. Since this module should be run on every server, we can use our main site.pp manifest found inside the osnailyfacter/examples folder. On the deployed master node this file will be copied to /etc/puppet/manifests and used to deploy Fuel on all other nodes. The only thing we need to do here is to add include profile to the end of the /etc/puppet/manifests/site.pp file on the already deployed master node and to the osnailyfacter/examples/site.pp file inside the Fuel repository.
Declaring a class outside of a node block forces this class to be included everywhere. If you want to include your module only on some nodes, you can add its declaration inside the cluster_simple and cluster_ha classes, in the blocks associated with the required node role.
You can add some additional logic to allow users to enable or disable this module from the Fuel UI, or at least by passing Custom Attributes to the Fuel configuration:
if $::fuel_settings['enable_profile'] {
include 'profile'
}
This block uses the enable_profile variable to enable or disable inclusion of the profile module. The variable should be passed from Nailgun and saved to the /etc/astute.yaml files of the managed nodes. You can do this either by downloading the settings files and manually editing them before deployment or by asking the Fuel UI developers to include additional options in the settings panel.
Resource duplication and file conflicts
If your module uses services that are already in use by other components of OpenStack, you will most likely try to declare some of the same resources that have already been declared. Puppet's architecture doesn't allow declaring resources that have the same type and title, even if they have the same attributes.
For example, your module could be using Apache and have Service['apache'] declared. When you run your module outside of Fuel, nothing else tries to control this service and everything works fine. But when you try to add this module to Fuel, you will get a resource duplication error because Apache is already managed by the Horizon module.
There is pretty much nothing you can do about this problem, because the uniqueness of Puppet resources is one of its core principles. But you can try to solve the problem in one of the following ways.
The best thing you can do is to try to use the already declared resource by setting dependencies on the other class that uses it. This will not work in many cases, and you may have to modify both modules or move the conflicting resource elsewhere to avoid conflicts.
Puppet does provide a good solution to this problem: virtual resources. The idea is that you move resource declarations to a separate class and make them virtual. Virtual resources are not evaluated until you realize them, and you can do that in every module that requires these resources. The trouble starts when these resources have different attributes and complex dependencies. Most current Puppet modules don't use virtual resources and would require major refactoring to add them.
Puppet style guidelines advise moving all classes related to the same service into a single module, instead of having many modules work with the same service, to minimize conflicts; but in many cases this approach doesn't work.
There are also some hacks, such as defining the resource inside an if ! defined(Service['apache']) { ... } block or using the ensure_resource function from Puppet's stdlib, as sketched below.
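Both hacks look roughly like this (a sketch only; the service parameters are illustrative):
# declare the service only if nobody else has declared it yet;
# the outcome depends on evaluation order, which is why this is fragile
if ! defined(Service['apache']) {
  service { 'apache' :
    ensure => running,
  }
}

# ensure_resource from stdlib declares the resource if it is absent and
# fails if it already exists with different parameters
ensure_resource('service', 'apache', { 'ensure' => 'running' })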
Similar problems often arise when working with configuration files. Even using templates doesn't allow several modules to directly edit the same file. There are a number of solutions to this, ranging from using configuration directories and snippets, if the service supports them, to representing individual lines or configuration options as resources and managing them instead of entire files.
Many services do support configuration directories where you can place configuration file snippets. The daemon reads them all, concatenates them, and uses the result as if it were a single file. Such services are the most convenient to manage with Puppet: you can just split your configuration and manage its pieces as templates. If your service doesn't know how to work with snippets, you can still use them. You only need to create the parts of your configuration file in some directory and then combine them all using a simple exec with the cat command, as in the sketch below. There is also a special concat resource type to make this approach easier.
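A sketch of the snippet approach with a plain exec (paths and names are illustrative):
# each module contributes its own snippet to a parts directory
file { '/etc/example.d/parts/10-base.conf' :
  content => template('example/base.conf.erb'),
}

# one exec assembles the final file from all snippets whenever a part changes;
# a shell is needed for the glob and the redirection
exec { 'assemble_example_conf' :
  command     => '/bin/sh -c "cat /etc/example.d/parts/*.conf > /etc/example.conf"',
  refreshonly => true,
  subscribe   => File['/etc/example.d/parts/10-base.conf'],
}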
Some configuration files have a standard structure and can be managed by custom resource types. For example, there is the ini_file resource type to manage values in INI-compatible configuration files as single resources. There is also the augeas resource type, which can manage many popular configuration file formats, as in the sketch below.
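For example, the built-in augeas type can manage a single option in a file that several modules care about, without any of them owning the whole file (the sshd_config example is illustrative):
# change one setting inside /etc/ssh/sshd_config instead of the entire file
augeas { 'sshd_disable_root' :
  context => '/files/etc/ssh/sshd_config',
  changes => [
    'set PermitRootLogin no',
  ],
}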
Each approach has its own limitations, and editing a single file from many modules is still a non-trivial task in most cases.
Neither the resource duplication nor the file editing problem has a good solution for every possible case, and both significantly limit the possibility of code reuse.
The last approach you can try is to modify files with scripts and sed patches run by exec resources. This can have unexpected results, because you can't be sure what other operations are performed on the configuration file, what text patterns exist there, or whether your script breaks another exec.
Puppet module containment
Fuel Library consists of many modules with a complex structure and several dependencies defined between the provided modules. There is a known Puppet problem related to dependencies between resources contained inside classes declared from other classes. If you declare resources inside a class or definition, they will be contained inside it, and the entire container will not be finished until all of its contents have been evaluated.
For example, here we have two classes with one notify resource each:
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
Class['a'] -> Class['b']
include a
include b
Dependencies between classes force the contained resources to be executed in the declared order. But if we add another layer of containers, dependencies between them will not affect the resources declared in the first two classes:
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
class l1 {
include a
}
class l2 {
include b
}
Class['l1'] -> Class['l2']
include 'l1'
include 'l2'
This problem can lead to unexpected and, in most cases, unwanted behaviour, when some resources 'fall out' of their classes and break the logic of the deployment process.
The most common solution to this issue is the Anchor Pattern. Anchors are special 'do-nothing' resources found in Puppetlabs' stdlib module. Anchors can be declared inside a top-level class and are contained inside it like any normal resource. If two anchors are declared, they can serve as the start and end anchors. All classes that should be contained inside the top-level class can then have dependencies on both anchors. If a class should go after the start anchor and before the end anchor, it is locked between them and will be correctly contained inside the parent class:
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
class l1 {
anchor { 'l1-start' :}
include a
anchor { 'l1-end' :}
Anchor['l1-start'] -> Class['a'] -> Anchor['l1-end']
}
class l2 {
anchor { 'l2-start' :}
include b
anchor { 'l2-end' :}
Anchor['l2-start'] -> Class['b'] -> Anchor['l2-end']
}
Class['l1'] -> Class['l2']
include 'l1'
include 'l2'
This hack does help to prevent resources from randomly floating out of their places, but it looks very ugly and is hard to understand. We have to use this technique in many of the Fuel modules, which are rather complex and require such containment. If your module is going to work with a dependency scheme like this, you may find anchors useful too.
There is also another solution, found in the most recent versions of Puppet. The contain function can force a declared class to be locked within its container:
class l1 {
contain 'a'
}
class l2 {
contain 'b'
}
Puppet scope and variables
The way Puppet looks up values of variables from inside classes can be confusing too. There are several levels of scope in Puppet. The top scope contains all facts and built-in variables and runs from the start of the site.pp file, before any class or node declaration. There is also a node scope, which can be different for every node block. Each class and definition starts its own local scope, and its variables and resource defaults are available there. They can also have parent scopes.
A reference to a variable can consist of two parts: $(class_name)::(variable_name), for example $apache::docroot. The class name can also be empty; such a reference explicitly points to the top-level scope, for example $::ipaddress.
If you are going to use the value of a fact or top-scope variable, it's usually a good idea to add two colons to the start of its name to ensure that you get the value you are looking for.
If you want to reference a variable found in another class, use a fully qualified name like $apache::docroot. But remember that the referenced class should already be declared; just having it inside your modules folder is not enough. Using include apache before referencing $apache::docroot will help. This technique is commonly used with params classes: one is created inside every module and included in every other class that uses its values.
And finally, if you reference a local variable you can write just $myvar. Puppet will first look inside the local scope of the current class or defined type, then the parent scope, then the node scope, and finally the top scope. The first match found in any of these scopes is the value you get.
The definition of the parent scope varies between Puppet 2.* and Puppet 3.*. Puppet 2.* treats the parent scope as the class from which the current class was declared, plus all of its parents. If the current class was inherited from another class, the base class is also a parent scope, allowing the popular Smart Defaults trick:
class a {
  $var = 'a'
}
class b (
  $a = $a::var,
) inherits a {
}
Puppet 3.* treats the parent scope only as the class from which the current class was inherited, if any, and doesn't take declaration into account.
For example:
$msg = 'top'
class a {
$msg = "a"
}
class a_child inherits a {
notify { $msg :}
}
This will say 'a' in both Puppet 2.* and 3.*. But:
$msg = 'top'
class n1 {
$msg = 'n1'
include 'n2'
}
class n2 {
notify { $msg :}
}
include 'n1'
This will say 'n1' in Puppet 2.6, say 'n1' and issue a deprecation warning in 2.7, and say 'top' in Puppet 3.*.
Finding such variable references and replacing them with fully qualified names is a very important part of Fuel's migration to Puppet 3.*.
Where to find more information
The best place to start learning Puppet is Puppetlabs' official learning course
(http://docs.puppetlabs.com/learning/). There is also a special virtual machine image you can use to
safely play with Puppet manifests.
Then you can continue to read Puppet reference and other pages of Puppetlabs documentation.
You can also find a number of printed books about Puppet and how to use it to manage your IT infrastructure:
Pro Puppet http://www.apress.com/9781430230571
Pro Puppet, 2nd Edition http://www.apress.com/9781430260400
Puppet 2.7 Cookbook http://www.packtpub.com/puppet-2-7-for-reliable-secure-systems-cloud-computing-cookbook/book
Puppet 3 Cookbook http://www.packtpub.com/puppet-3-cookbook/book
Puppet 3: Beginners Guide http://www.packtpub.com/puppet-3-beginners-guide/book
Instant Puppet 3 Starter http://www.packtpub.com/puppet-3-starter/book
Pulling Strings with Puppet: Configuration Management Made Easy http://www.apress.com/9781590599785
Puppet Types and Providers: Extending Puppet with Ruby http://shop.oreilly.com/product/0636920026860.do
Managing Infrastructure with Puppet: Configuration Management at Scale http://shop.oreilly.com/product/0636920020875.do
Fuel Master Node Deployment over PXE
Tech Explanation of the process
In some cases (such as servers without a CD-ROM drive or without physical access) we need to install the Fuel Master node some other way than from a CD or USB flash drive. Starting with Fuel 4.0, it is possible to deploy the Master node over PXE.
The process of deploying the Fuel Master node over the network consists of booting a Linux kernel via DHCP and PXE. The Anaconda installer then downloads a configuration file and all the packages needed to complete the installation.
• The PXE firmware of the network card makes a DHCP query and gets an IP address and a boot image name.
• The firmware downloads the boot image file using the TFTP protocol and starts it.
• This bootloader downloads a configuration file with kernel boot options, the kernel and the initramfs, and starts the installer.
• The installer downloads the kickstart configuration file by mounting the contents of the Fuel ISO file over NFS.
• The installer partitions the hard drive, installs the system by downloading packages over NFS, copies all additional files, installs the bootloader and reboots into the new system.
So we need:
• Working system to serve as network installer.
• DHCP server
• TFTP server
• NFS server
• PXE bootloader and its configuration file
• Extracted or mounted Fuel ISO file
In our test we will use the 10.20.0.0/24 network; 10.20.0.1/24 will be the IP address of our host system.
Installing packages
We will be using an Ubuntu or Debian system as the installation server. Other Linux or even BSD-based systems could be used too, but the paths to configuration files and init scripts may differ.
First we need to install the software:
# TFTP server and client
apt-get install tftp-hpa tftpd-hpa
# DHCP server
apt-get install isc-dhcp-server
# network bootloader
apt-get install syslinux syslinux-common
# nfs server
apt-get install nfs-kernel-server
Setting up DHCP server
Standalone ISC DHCPD
First we are going to create configuration file located at /etc/dhcp/dhcpd.conf:
ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;
subnet 10.20.0.0 netmask 255.255.255.0 {
range 10.20.0.2 10.20.0.2;
option routers 10.20.0.1;
option domain-name-servers 10.20.0.1;
}
host fuel {
hardware ethernet 52:54:00:31:38:5a;
fixed-address 10.20.0.2;
filename "pxelinux.0";
}
We have declared a subnet with only one IP address available, which we are going to give to our master node. We are not going to serve the entire range of IP addresses, because that would disrupt Fuel's own DHCP service. There is also a host definition with a custom configuration that matches a specific MAC address. This address should be set to the MAC address of the system that you are going to make the Fuel master node. Other systems on this subnet will not receive any IP addresses and will load the bootstrap image from the master node once it starts serving DHCP requests. We also specify the filename that will be used to boot the Fuel master node.
Using the 10.20.0.0/24 subnet requires you to set 10.20.0.1 on the network interface connected to this network. You may also need to set the interface manually using the INTERFACES variable in the /etc/default/isc-dhcp-server file.
Start DHCP server:
/etc/init.d/isc-dhcp-server restart
A simpler alternative with dnsmasq:
sudo dnsmasq -d --enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-range=10.20.0.2,10.20.0.2 \
--port=0 -z -i eth2 \
--dhcp-boot='pxelinux.0'
Libvirt with dnsmasq
If you are using a libvirt virtual network to install your master node, you can use its own DHCP service. Use virsh net-edit default to modify the network configuration:
<network>
<name>default</name>
<bridge name="virbr0" />
<forward />
<ip address="10.20.0.1" netmask="255.255.255.0">
<tftp root="/var/lib/tftpboot"/>
<dhcp>
<range start="10.20.0.2" end="10.20.0.2" />
<host mac="52:54:00:31:38:5a" ip="10.20.0.2" />
<bootp file="pxelinux.0"/>
</dhcp>
</ip>
</network>
This configuration includes a TFTP server and a DHCP server with only one IP address, tied to your master node's MAC address. You don't need to install an external DHCP or TFTP server. Don't forget to restart the network after making edits:
virsh net-destroy default
virsh net-start default
Dnsmasq without libvirt
You can also use dnsmasq as a DHCP and TFTP server without libvirt:
strict-order
domain-needed
user=libvirt-dnsmasq
local=//
pid-file=/var/run/dnsmasq.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=10.20.0.2,10.20.0.2
dhcp-no-override
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-boot=pxelinux.0
dhcp-leasefile=/var/lib/dnsmasq/leases
dhcp-lease-max=1
dhcp-hostsfile=/etc/dnsmasq/hostsfile
In /etc/dnsmasq/hostsfile you can specify hosts and their mac addresses:
52:54:00:31:38:5a,10.20.0.2
Dnsmasq provides DHCP and TFTP and also acts as a DNS caching server, so you don't need to install any additional external services.
Setting up our TFTP server
If you are not using a libvirt virtual network, then you need to install a TFTP server. On a Debian or Ubuntu system its configuration file is located at /etc/default/tftpd-hpa. Check that it contains the following:
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS="10.20.0.1:69"
TFTP_OPTIONS="--secure --blocksize 512"
Don't forget to set the blocksize here; some hardware switches have problems with larger block sizes. And start it:
/etc/init.d/tftpd-hpa restart
Setting up NFS server
You will also need to set up an NFS server on your installation system. Edit the NFS exports file:
vim /etc/exports
Add the following line:
/var/lib/tftpboot 10.20.0.2(ro,async,no_subtree_check,no_root_squash,crossmnt)
And start it:
/etc/init.d/nfs-kernel-server restart
Set up tftp root
Our TFTP root will be located at /var/lib/tftpboot. Let's create a folder called "fuel" to store the ISO image contents and a syslinux folder for the bootloader files. If you have installed the syslinux package you can find these files in the /usr/lib/syslinux folder. Copy the following files from /usr/lib/syslinux to /var/lib/tftpboot:
memdisk
menu.c32
poweroff.com
pxelinux.0
reboot.c32
Now we need to write the configuration file. It will be located at /var/lib/tftpboot/pxelinux.cfg/default:
DEFAULT menu.c32
prompt 0
MENU TITLE My Distro Installer
TIMEOUT 600
LABEL localboot
MENU LABEL ^Local Boot
MENU DEFAULT
LOCALBOOT 0
LABEL fuel
MENU LABEL Install ^FUEL
KERNEL /fuel/isolinux/vmlinuz
INITRD /fuel/isolinux/initrd.img
APPEND biosdevname=0 ks=nfs:10.20.0.1:/var/lib/tftpboot/fuel/ks.cfg repo=nfs:10.20.0.1:/var/
LABEL reboot
MENU LABEL ^Reboot
KERNEL reboot.c32
LABEL poweroff
MENU LABEL ^Poweroff
KERNEL poweroff.com
You can ensure silent installation without any Anaconda prompts by adding the following APPEND
directives:
• ksdevice=INTERFACE
• installdrive=DEVICENAME
• forceformat=yes
For example:
installdrive=sda ksdevice=eth0 forceformat=yes
Now we need to unpack the Fuel ISO file we have downloaded:
mkdir -p /var/lib/tftpboot/fuel /mnt/fueliso
mount -o loop /path/to/your/fuel.iso /mnt/fueliso
rsync -a /mnt/fueliso/ /var/lib/tftpboot/fuel/
umount /mnt/fueliso && rmdir /mnt/fueliso
So that's it! We can boot over the network from this PXE server.
Troubleshooting
After implementing one of the described configurations you should see something like this in your /var/log/syslog file:
dnsmasq-dhcp[16886]: DHCP, IP range 10.20.0.2 -- 10.20.0.2, lease time 1h
dnsmasq-tftp[16886]: TFTP root is /var/lib/tftpboot
To make sure all of the daemons are listening on the sockets they should be:
# netstat -upln | egrep ':(67|69|2049) '
udp        0      0 0.0.0.0:67            0.0.0.0:*        30791/dnsmas
udp        0      0 10.20.0.1:69          0.0.0.0:*        30791/dnsmas
udp        0      0 0.0.0.0:2049          0.0.0.0:*        -
• NFS - udp/2049
• DHCP - udp/67
• TFTP - udp/69
So all of the daemons are listening as they should.
To test that the DHCP server provides an IP address, you can run something like this on a node in the defined PXE network. Note that the node should have a Linux system (or any other OS) installed to test the configuration properly:
# dhclient -v eth0
Internet Systems Consortium DHCP Client 4.1.1-P1
Copyright 2004-2010 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/eth0/00:25:90:c4:7a:64
Sending on   LPF/eth0/00:25:90:c4:7a:64
Sending on   Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x7b6e25dc)
DHCPACK from 10.20.0.1 (xid=0x7b6e25dc)
bound to 10.20.0.2 -- renewal in 1659 seconds.
After running dhclient you should see it query the DHCP server one or more times with DHCPDISCOVER and then get 10.20.0.2. If you have more than one NIC, run dhclient on each one to determine which one our network is connected to.
TFTP server can be tested with tftp console client:
# tftp
(to) 10.20.0.1
tftp> get /pxelinux.0
NFS can be tested by mounting it:
mkdir /mnt/nfsroot
mount -t nfs 10.20.0.1:/var/lib/tftpboot /mnt/nfsroot
Health Check (OSTF) Contributor's Guide
Health Check or OSTF?
Main goal of OSTF
Main rules of code contributions
How to set up my environment?
What should my modules look like?
How to execute my tests?
Now I'm done, what's next?
General OSTF architecture
OSTF packages architecture
OSTF Adapter architecture
Appendix 1
Health Check or OSTF?
The Fuel UI has a tab called Health Check. In the development team, though, there is an established acronym, OSTF, which stands for OpenStack Testing Framework. These refer to the same thing. For simplicity, this document will use the widely accepted term OSTF.
Main goal of OSTF
After an OpenStack installation via Fuel, it's very important to understand whether the installation was successful and whether the cloud is ready for work. OSTF provides a set of health checks - sanity, smoke, HA and additional component tests - that check the proper operation of all system components in typical conditions. There are tests for OpenStack scenario validation and other specific tests useful for validating an OpenStack deployment.
Main rules of code contributions
There are a few rules you need to follow to successfully pass the code review and contribute
high-quality code.
How to set up my environment?
The OSTF repository is located on Stackforge: https://github.com/stackforge/fuel-ostf. You also have to install and hook up Gerrit, because otherwise you will not be able to contribute code. To do that, follow the registration and installation instructions in the document https://wiki.openstack.org/wiki/CLA#Contributors_License_Agreement. After you've completed the instructions, you're all set to begin editing/creating code.
What should my modules look like?
The rules are quite simple:
• follow Python coding rules
• follow OpenStack contributor's rules
• watch out for mistakes in docstrings
• follow correct test structure
• always execute your tests after you write them, before sending them to review
Speaking of following Python coding standards, you can find the style guide here: http://www.python.org/dev/peps/pep-0008/. You should read it carefully once, and after implementing your scripts you need to run checks that ensure your code conforms to the standards. Without correcting coding standard issues, your scripts will not be merged to master.
You should always follow these implementation rules:
• name the test module, test class and test method beginning with the word "test"
• if you have tests that should be run in a specific order, add a number to the test method name, for example: test_001_create_keypair
• use verify(), verify_response_body_content() and other methods from the mixins (see the fuel_health/common/test_mixins.py part of the OSTF package architecture section), passing them the failed step parameter
• always list all the steps you are checking with the test_mixins methods in the docstring, in the Scenario section, in the correct order
• always use the verify() method when you check an operation that can go into an infinite loop
The test docstrings are another important piece, and you should always stick to the following docstring structure:
• test title - the test description that is always shown on the UI (the remaining part of the docstring is only shown when the test fails)
• target component (optional) - the name of the component being tested (e.g. Nova, Keystone)
• blank line
• test scenario, example:
• blank line
• test scenario, example:
Scenario:
1. Create a new small-size volume.
2. Wait for volume status to become "available".
3. Check volume has correct name.
4. Create new instance.
5. Wait for "Active" status.
6. Attach volume to an instance.
7. Check volume status is "in use".
8. Get information on the created volume by its id.
9. Detach volume from the instance.
10. Check volume has "available" status.
11. Delete volume.
• test duration - an estimate of how long the test will take
• deployment tags (optional) - gives information about the kind of environment in which the test will be run; possible values are CENTOS, Ubuntu, RHEL, nova_network, Heat, Murano, Sahara
(In the original document, a screenshot of a test example confirming the above explanations appears here.)
Test run ordering and profiles
Each test set (sanity, smoke, ha and platform_tests) contains a special variable in its __init__.py module called __profile__. The profile variable makes it possible to set different rules, such as the test run order, deployment tags, information gathering on cleanup and the expected time estimate for running the test set.
If you are developing a new set of tests, you need to create an __init__.py module and place a __profile__ dict in it. It is important that your profile match the following structure:
__profile__ = {
"test_runs_ordering_priority": 4,
"id": "platform_tests",
"driver": "nose",
"test_path": "fuel_health/tests/platform_tests",
"description": ("Platform services functional tests."
" Duration 3 min - 60 min"),
"cleanup_path": "fuel_health.cleanup",
"deployment_tags": ['additional_components'],
"exclusive_testsets": []
}
Take note of each field in the profile, along with acceptable values.
• test_runs_ordering_priority is a field responsible for setting the priority in which the test set
will be displayed, for example, if you set "6" for sanity tests and "3" for smoke tests, smoke
test set will be displayed first on the HealthCheck tab;
• id is just the unique id of a test set;
• driver field is used for setting the test runner;
• test_path is the field representing path where test set is located starting from fuel_health
directory;
• description is the field which contains the value to be shown on the UI as the tests duration;
• cleanup_path is the field that specifies path to module responsible for cleanup mechanism (if
you do not specify this value, cleanup will not be started after your test set);
• deployment_tags field is used for defining when these tests should be available depending on
cluster settings;
• exclusive_testsets field gives you an opportunity to specify test sets that will be run successively. For example, you can specify "smoke_sanity" for the smoke and sanity test set profiles; these tests will then be run not simultaneously but successively.
It is necessary to specify a value for each of the attributes. The only optional attribute is "deployment_tags"; you may omit it from your profile entirely. You can leave "exclusive_testsets" empty ([]) to run your test set simultaneously with the other ones.
How to execute my tests?
Simplest way is to install Fuel, and OSTF will be installed as part of it.
• install virtualbox
• build Fuel ISO: Building the Fuel ISO
• use virtualbox scripts to run an ISO
• once the installation is finished, go to the Fuel UI (usually at 10.20.0.2:8000) and create a new cluster with the necessary configuration
• execute:
rsync -avz <path to fuel_health>/ [email protected]:/opt/fuel_plugins/ostf/lib/python2.6
• execute:
ssh [email protected]
ps uax | grep supervisor
kill <supervisord process number>
service supervisord start
• go to Fuel UI and run your new tests
Now I'm done, what's next?
• don't forget to run pep8 on the modified part of the code
• commit your changes
• execute git review
• ask for review in IRC
From this point you'll only need to fix and commit review comments (if there are any) by repeating the same steps. If there are no review comments left, the reviewers will accept your code and it will be automatically merged to master.
General OSTF architecture
Tests are included in Fuel, so they will be accessible as soon as you install Fuel on your lab. The OSTF architecture is quite simple; it consists of two main packages:
• fuel_health, which contains the test sets themselves and related modules
• fuel_plugin, which contains the OSTF adapter that forms the necessary test list based on the cluster deployment options and transfers it to the UI using the REST API
On the other hand, some information is necessary for the test execution itself. Several modules gather this information and parse it into objects which are used in the tests themselves. All information is gathered from the Nailgun component.
OSTF package architecture
The main modules used in the fuel_health package are:
The config module is responsible for getting the data necessary for the tests. All data is gathered from the Nailgun component or from a text config.
Nailgun provides us with the following data:
• OpenStack admin user name
• OpenStack admin user password
• OpenStack admin user tenant
• IPs of controller nodes
• IPs of compute nodes - easily obtained from Nailgun by parsing the role key in the response JSON
• deployment mode (HA/non-HA)
• deployment OS (RHEL/CentOS)
• Keystone / Horizon URLs
• tiny proxy address
All other information we need is stored in config.py itself and remains default in this case. If you are using data from Nailgun (an OpenStack installation using Fuel), you should do the following: initialize the NailgunConfig() class.
Nailgun is running on the Fuel master node, so you can easily get data for each cluster by invoking curl http://localhost:8000/api/<uri_here>. The cluster id can be obtained from the OS environment (provided by Fuel).
If you want to run OSTF against a non-Fuel installation, change the initialization of NailgunConfig() to FileConfig() and set the parameters marked with "(If you are using FileConfig set appropriate value here)" in the config - see Appendix 1 (default config file path: fuel_health/etc/test.conf)
cleanup.py - invoked by the OSTF adapter if the user stops test execution in the Web UI. This module is responsible for deleting all test resources created during a test suite run. It simply finds all resources whose names start with 'ost1_test-' and destroys each of them using the _delete_it method.
Important: if you decide to add additional cleanup for a resource, keep in mind that resources depend on each other, so deleting a resource that is still in use will give you an exception. Also, deleting some resources requires the resource's ID rather than its name; in that case, set the optional delete_type argument of the _delete_it method to 'id'.
nmanager.py contains base classes for tests. Each base class contains setup, teardown and
methods that act as an interlayer between tests and OpenStack python clients (see nmanager
architecture diagram).
fuel_health/common/test_mixins.py - provides mixins to pack response verification into a human-readable message. For assertion failure cases, the methods require the step on which we failed and a descriptive message to be provided. The verify() method also requires a timeout value to be set. This method should be used when checking OpenStack operations (such as instance creation): a cluster operation taking too long may be a sign of a problem, so this secures the tests from such situations and from going into an infinite loop.
fuel_health/common/ssh.py - provides an easy way to SSH to nodes or instances. This module uses the paramiko library and contains some useful wrappers that handle routine tasks for you (such as SSH key authentication, starting transport threads, etc.). It also contains a rather useful method, exec_command_on_vm(), which SSHes to an instance through a controller and then executes the necessary command on it.
OSTF Adapter architecture
The important thing to remember about the OSTF Adapter is that, just like when writing tests, all code should follow the pep8 standard.
Appendix 1
IdentityGroup = [
cfg.StrOpt('catalog_type',
default='identity', (may be changed to keystone)
help="Catalog type of the Identity service."),
cfg.BoolOpt('disable_ssl_certificate_validation',
default=False,
help="Set to True if using self-signed SSL certificates."),
cfg.StrOpt('uri',
default='http://localhost/', (If you are using FileConfig set the appropriate address here)
help="Full URI of the OpenStack Identity API (Keystone), v2"),
cfg.StrOpt('url',
default='http://localhost:5000/v2.0/', (If you are using FileConfig set the appropriate address here)
help="Dashboard Openstack url, v2"),
cfg.StrOpt('uri_v3',
help='Full URI of the OpenStack Identity API (Keystone), v3'),
cfg.StrOpt('strategy',
default='keystone',
help="Which auth method does the environment use? "
"(basic|keystone)"),
cfg.StrOpt('region',
default='RegionOne',
help="The identity region name to use."),
cfg.StrOpt('admin_username',
default='nova' , (If you are using FileConfig set appropriate value here)
help="Administrative Username to use for"
"Keystone API requests."),
cfg.StrOpt('admin_tenant_name', (If you are using FileConfig set appropriate value here)
default='service',
help="Administrative Tenant name to use for Keystone API "
"requests."),
cfg.StrOpt('admin_password', (If you are using FileConfig set appropriate value here)
default='nova',
help="API key to use when authenticating as admin.",
secret=True),
]
ComputeGroup = [
cfg.BoolOpt('allow_tenant_isolation',
default=False,
help="Allows test cases to create/destroy tenants and "
"users. This option enables isolated test cases and "
"better parallel execution, but also requires that "
"OpenStack Identity API admin credentials are known."),
cfg.BoolOpt('allow_tenant_reuse',
default=True,
help="If allow_tenant_isolation is True and a tenant that "
"would be created for a given test already exists (such "
"as from a previously-failed run), re-use that tenant "
"instead of failing because of the conflict. Note that "
"this would result in the tenant being deleted at the "
"end of a subsequent successful run."),
cfg.StrOpt('image_ssh_user',
default="root", (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance."),
cfg.StrOpt('image_alt_ssh_user',
default="root", (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance using "
"the alternate image."),
cfg.BoolOpt('create_image_enabled',
default=True,
help="Does the test environment support snapshots?"),
cfg.IntOpt('build_interval',
default=10,
help="Time in seconds between build status checks."),
cfg.IntOpt('build_timeout',
default=160,
help="Timeout in seconds to wait for an instance to build."),
cfg.BoolOpt('run_ssh',
default=False,
help="Does the test environment support snapshots?"),
cfg.StrOpt('ssh_user',
default='root', (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance."),
cfg.IntOpt('ssh_timeout',
default=50,
help="Timeout in seconds to wait for authentication to "
"succeed."),
cfg.IntOpt('ssh_channel_timeout',
default=20,
help="Timeout in seconds to wait for output from ssh "
"channel."),
cfg.IntOpt('ip_version_for_ssh',
default=4,
help="IP version used for SSH connections."),
cfg.StrOpt('catalog_type',
default='compute',
help="Catalog type of the Compute service."),
cfg.StrOpt('path_to_private_key',
default='/root/.ssh/id_rsa', (If you are using FileConfig set appropriate value here
help="Path to a private key file for SSH access to remote "
"hosts"),
cfg.ListOpt('controller_nodes',
default=[], (If you are using FileConfig set appropriate value here)
help="IP addresses of controller nodes"),
cfg.ListOpt('compute_nodes',
default=[], (If you are using FileConfig set appropriate value here)
help="IP addresses of compute nodes"),
cfg.StrOpt('controller_node_ssh_user',
default='root', (If you are using FileConfig set appropriate value here)
help="ssh user of one of the controller nodes"),
cfg.StrOpt('controller_node_ssh_password',
default='r00tme', (If you are using FileConfig set appropriate value here)
help="ssh user pass of one of the controller nodes"),
cfg.StrOpt('image_name',
default="TestVM", (If you are using FileConfig set appropriate value here)
help="Valid secondary image reference to be used in tests."),
cfg.StrOpt('deployment_mode',
default="ha", (If you are using FileConfig set appropriate value here)
help="Deployments mode"),
cfg.StrOpt('deployment_os',
default="RHEL", (If you are using FileConfig set appropriate value here)
help="Deployments os"),
cfg.IntOpt('flavor_ref',
default=42,
help="Valid primary flavor to use in tests."),
]
ImageGroup = [
cfg.StrOpt('api_version',
default='1',
help="Version of the API"),
cfg.StrOpt('catalog_type',
default='image',
help='Catalog type of the Image service.'),
cfg.StrOpt('http_image',
83
Table of contents
default='http://download.cirros-cloud.net/0.3.1/'
'cirros-0.3.1-x86_64-uec.tar.gz',
help='http accessable image')
]
NetworkGroup = [
cfg.StrOpt('catalog_type',
default='network',
help='Catalog type of the Network service.'),
cfg.StrOpt('tenant_network_cidr',
default="10.100.0.0/16",
help="The cidr block to allocate tenant networks from"),
cfg.IntOpt('tenant_network_mask_bits',
default=29,
help="The mask bits for tenant networks"),
cfg.BoolOpt('tenant_networks_reachable',
default=True,
help="Whether tenant network connectivity should be "
"evaluated directly"),
cfg.BoolOpt('neutron_available',
default=False,
help="Whether or not neutron is expected to be available"),
]
VolumeGroup = [
cfg.IntOpt('build_interval',
default=10,
help='Time in seconds between volume availability checks.'),
cfg.IntOpt('build_timeout',
default=180,
help='Timeout in seconds to wait for a volume to become'
'available.'),
cfg.StrOpt('catalog_type',
default='volume',
help="Catalog type of the Volume Service"),
cfg.BoolOpt('cinder_node_exist',
default=True,
help="Allow to run tests if cinder exist"),
cfg.BoolOpt('multi_backend_enabled',
default=False,
help="Runs Cinder multi-backend test (requires 2 backends)"),
cfg.StrOpt('backend1_name',
default='BACKEND_1',
help="Name of the backend1 (must be declared in cinder.conf)"),
cfg.StrOpt('backend2_name',
default='BACKEND_2',
help="Name of the backend2 (must be declared in cinder.conf)"),
]
ObjectStoreConfig = [
cfg.StrOpt('catalog_type',
default='object-store',
help="Catalog type of the Object-Storage service."),
cfg.StrOpt('container_sync_timeout',
default=120,
help="Number of seconds to time on waiting for a container"
"to container synchronization complete."),
cfg.StrOpt('container_sync_interval',
default=5,
help="Number of seconds to wait while looping to check the"
84
User Guide
"status of a container to container synchronization"),
]
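For context, option groups like the ones above are registered with oslo.config and then read as attributes of a global CONF object. A minimal sketch, assuming the listing above presupposes oslo.config's cfg module; the group name, title, and config file name here are assumptions:

from oslo.config import cfg

CONF = cfg.CONF

# Register the IdentityGroup options defined above under an
# 'identity' group; the group name and title are assumptions.
identity_group = cfg.OptGroup(name='identity', title='Identity options')
CONF.register_group(identity_group)
CONF.register_opts(IdentityGroup, group=identity_group)

# After parsing a config file the values are available as attributes:
#   CONF(['--config-file', 'test.conf'])
#   print CONF.identity.admin_username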
User Guide
The user guide has been moved to docs.mirantis.com. If you want to contribute, check out the sources
from GitHub.
Devops Guide
Introduction
Fuel-Devops is a sublayer between the application and the target environment (currently only
libvirt is supported).
It is used for testing purposes: grouping virtual machines into environments, booting
KVM VMs locally from an ISO image or over the network via PXE, creating, snapshotting, and
resuming a whole environment in a single action, and creating virtual machines with multiple NICs,
multiple hard drives, and many other customizations with a few lines of code in system tests.
For sources, please refer to the fuel-devops repository on GitHub.
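Under the hood, fuel-devops drives libvirt. As a rough illustration of the kind of libvirt calls it automates when snapshotting and reverting an environment (a sketch only: the domain name and snapshot XML below are placeholders, not the fuel-devops API):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('env_node-1')  # placeholder VM name

# Snapshot a single VM; fuel-devops does this for every node of an
# environment so that the whole environment can be reverted at once.
snapshot_xml = "<domainsnapshot><name>clean</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml, 0)

# Later, revert the VM back to the named snapshot.
snap = dom.snapshotLookupByName('clean', 0)
dom.revertToSnapshot(snap, 0)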
Installation
The installation procedure can be performed in two different ways (assuming you are using Ubuntu
12.04 or Ubuntu 14.04):
• from deb packages (using apt)
• from python packages (using PyPI) (also in virtualenv)
Each of these approaches is described in detail below.
Before using either of them, install the dependencies that are required in all cases:
sudo apt-get install git \
postgresql \
postgresql-server-dev-all \
libyaml-dev \
libffi-dev \
python-dev \
python-libvirt \
python-pip \
qemu-kvm \
qemu-utils \
libvirt-bin \
ubuntu-vm-builder \
bridge-utils
sudo apt-get update && sudo apt-get upgrade -y
Devops installation from packages
1. Adding the repository, checking for and applying the latest updates
wget -qO - http://mirror.fuel-infra.org/devops/ubuntu/Release.key | sudo apt-key add -
sudo add-apt-repository "deb http://mirror.fuel-infra.org/devops/ubuntu /"
sudo apt-get update && sudo apt-get upgrade -y
2. Installing required packages
sudo apt-get install python-psycopg2 \
python-ipaddr \
python-libvirt \
python-paramiko \
python-django \
python-django-openstack \
python-django-south \
python-xmlbuilder \
python-mock \
python-devops
Note
Depending on your Linux distribution, some of the above packages may not exist in your
upstream repositories. In this case, exclude them from the installation list and repeat step
2; the missing packages will be installed from PyPI during the next step.
Note
On Ubuntu 12.04 LTS you need to upgrade pip and install Django older than 1.7:
sudo pip install pip --upgrade
hash -r
sudo pip install Django\<1.7 --upgrade
3. Next, follow the Configuration section
Devops installation using PyPI
Follow these steps:
1. Install packages needed for building python eggs
sudo apt-get install libpq-dev \
libgmp-dev
2. If you are using Ubuntu 12.04, update pip; otherwise you can skip this step
sudo pip install pip --upgrade
hash -r
3. Install the devops package using Python setup tools: clone fuel-devops and run setup.py
git clone git://github.com/stackforge/fuel-devops.git
cd fuel-devops
sudo python ./setup.py install
4. Next, follow the Configuration section
Devops installation in virtualenv
The installation procedure is the same as for installation using PyPI, but you should also
configure a virtualenv:
1. Install packages needed for building python eggs
sudo apt-get install python-virtualenv
2. If you are using Ubuntu 12.04, update pip and virtualenv; otherwise you can skip this
step
sudo pip install pip virtualenv --upgrade
hash -r
3. Create a virtualenv for the devops project
virtualenv --system-site-packages <path>/fuel-devops-venv
<path> represents the path where your Python virtualenv will be located (e.g. ~/venv). If it is not
specified, the current working directory is used.
4. Activate the virtualenv and install the devops package using Python setup tools
. <path>/fuel-devops-venv/bin/activate
pip install git+https://github.com/stackforge/fuel-devops.git --upgrade
The setup.py in the fuel-devops repository does everything required.
Hint
You can also use virtualenvwrapper, which can help you manage virtual environments.
5. Next, follow the Configuration section
Configuration
Devops requires the following system-wide settings to be configured:
• Default libvirt storage pool is active (called 'default')
• Current user must have permission to run KVM VMs with libvirt
• PostgreSQL server running with appropriate grants and schema for devops
• [Optional] Nested Paging is enabled
Configuring libvirt pool
Create libvirt's pool
sudo virsh pool-define-as --type=dir --name=default --target=/var/lib/libvirt/images
sudo virsh pool-autostart default
sudo virsh pool-start default
Permissions to run KVM VMs with libvirt as the current user
Give the current user permission to use libvirt (do not forget to log out and log back in!)
sudo usermod $(whoami) -a -G libvirtd,sudo
Configuring the PostgreSQL database
Set local peers to be trusted by default and load fixtures
sudo sed -ir 's/peer/trust/' /etc/postgresql/9.*/main/pg_hba.conf
sudo service postgresql restart
django-admin syncdb --settings=devops.settings
django-admin migrate devops --settings=devops.settings
If you installed from Python packages or use virtualenv:
django-admin.py syncdb --settings=devops.settings
django-admin.py migrate devops --settings=devops.settings
Note
Depending on your Linux distribution, django-admin may refer to a system-wide Django installed from
a package. If this happens, you may get an exception saying that the devops.settings module cannot
be resolved. To fix this, run django-admin.py (or django-admin) with a relative path:
./bin/django-admin syncdb --settings=devops.settings
./bin/django-admin migrate devops --settings=devops.settings
[Optional] Enabling Nested Paging
This option is enabled by default in the KVM kernel module
$ cat /etc/modprobe.d/qemu-system-x86.conf
options kvm_intel nested=1
To make sure that this feature is enabled on your system, run:
sudo kvm-ok && cat /sys/module/kvm_intel/parameters/nested
The result should be:
INFO: /dev/kvm exists
KVM acceleration can be used
Y
Environment creation via Devops + Fuel_main
1. Clone the fuel-main git repository
git clone https://github.com/stackforge/fuel-main
cd fuel-main/
2. Install requirements
If you use virtualenv
. <path>/fuel-devops-venv/bin/activate
pip install -r ./fuelweb_test/requirements.txt --upgrade
If you do not use virtualenv, just run
sudo pip install -r ./fuelweb_test/requirements.txt --upgrade
3. Check the Configuration section
4. Prepare environment
Download a Fuel ISO from the nightly builds or build it yourself (refer to Building the Fuel ISO).
Next, you need to define several variables for the future environment:
export ISO_PATH=<path_to_iso>
export NODES_COUNT=<number_nodes>
export ENV_NAME=<name_of_env>
If you use virtualenv
export VENV_PATH=<path>/fuel-devops-venv
Alternatively, you can edit fuelweb_test/settings.py to set them as default values.
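As a rough illustration, fuelweb_test/settings.py typically picks such variables up from the environment along these lines (the defaults shown here are assumptions, not the real values from that file):

import os

# Environment variables exported above; defaults are illustrative only.
ISO_PATH = os.environ.get('ISO_PATH')
NODES_COUNT = int(os.environ.get('NODES_COUNT', 10))
ENV_NAME = os.environ.get('ENV_NAME', 'fuel_system_test')
VENV_PATH = os.environ.get('VENV_PATH')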
Start the tests by running this command
./utils/jenkins/system_tests.sh -t test -w $(pwd) -j fuelweb_test -i $ISO_PATH -o --group=se
For more information about how the tests work, read the usage information:
./utils/jenkins/system_tests.sh -h
Important notes for Sahara and Murano tests
• It is not recommended to start tests without KVM.
• For the best performance, put the Sahara image savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2
(md5: 9ab37ec9a13bb005639331c4275a308d) in /tmp/ before starting; otherwise (if Internet
access is available) the image will be downloaded automatically.
• Put the Murano image ubuntu-murano-agent.qcow2 (md5:
b0a0fdc0b4a8833f79701eb25e6807a3) in /tmp before starting.
• Running Murano tests on instances without an Internet connection will fail.
• For Murano tests execute 'export SLAVE_NODE_MEMORY=5120' before starting.
• Heat autoscale tests require the image F17-x86_64-cfntools.qcow2 (md5:
afab0f79bac770d61d24b4d0560b5f70) to be placed in /tmp before starting.
Run single OSTF tests several times
• Export the environment variable OSTF_TEST_NAME. Example: export OSTF_TEST_NAME='Request
list of networks'
• Export the environment variable OSTF_TEST_RETRIES_COUNT. Example: export
OSTF_TEST_RETRIES_COUNT=120
• Execute test_ostf_repetable_tests from the tests_strength package
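The idea behind these repeatable tests is simply to read the test name and the retry count from the environment and run the named test that many times; a minimal sketch, where run_ostf_test is a hypothetical stand-in for the real OSTF invocation:

import os

def run_ostf_test(name):
    # Hypothetical placeholder: in reality this would invoke the
    # OSTF adapter to run the named test.
    print("running %s" % name)

test_name = os.environ.get('OSTF_TEST_NAME', 'Request list of networks')
retries = int(os.environ.get('OSTF_TEST_RETRIES_COUNT', 1))
for attempt in range(retries):
    run_ostf_test(test_name)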
Run tests
sh "utils/jenkins/system_tests.sh" -t test \
-w $(pwd) \
-j "fuelweb_test" \
-i "$ISO_PATH" \
-V $(pwd)/venv/fuelweb_test \
-o \
--group=create_delete_ip_n_times_nova_flat
Fuel ISO build system
Use the fuel-main repository to build Fuel components such as an ISO or an upgrade tarball. This
repository contains a set of GNU Make build scripts.
Quick start
1. You must use one of the following distributions to build Fuel components, or the build process
may fail. Note that the build only works on x64 platforms.
• Ubuntu 12.04
• Ubuntu 14.04
2. Check whether you have git installed on your system. To do that, use the following command:
which git
If git is not found, install it with the following command:
apt-get install git
3. Clone the fuel-main git repository to the location where you will work. The root of your repo will
be named fuel-main. In this example, it will be located under the ~/fuel directory:
mkdir ~/fuel
cd ~/fuel
git clone https://github.com/stackforge/fuel-main.git
cd fuel-main
4. Run the shell script:
./prepare-build-env.sh
and wait until prepare-build-env.sh installs the Fuel build environment on your computer.
5. After the script runs successfully, issue the following command to build a Fuel ISO:
make iso
6. Use the following command to list the available make targets:
make help
For the full list of available targets with descriptions, see the Build targets section below.
Build system structure
Fuel consists of several components, such as the web interface, Puppet modules, orchestration
components, and testing components. The source code of all these components is split across
multiple git repositories, for example:
• https://github.com/stackforge/fuel-web
• https://github.com/stackforge/fuel-astute
• https://github.com/stackforge/fuel-ostf
• https://github.com/stackforge/fuel-library
• https://github.com/stackforge/fuel-docs
The main component of the Fuel build system is the fuel-main directory.
Fuel build processes are quite complicated, so to keep the fuel-main code easily maintainable, it is
split into a number of files and directories.
Those files and directories contain independent (or at least almost independent) pieces of the Fuel
build system:
• Makefile - the main Makefile which includes all other make modules.
• config.mk - contains parameters used to customize the build process, specifying items such as
build paths, upstream mirrors, source code repositories and branches, built-in default Fuel
settings and ISO name.
• rules.mk - defines frequently used macros.
• repos.mk - contains make scripts to download the other repositories holding Fuel components
that are developed in separate repos.
• sandbox.mk - shell script definitions that create and destroy the special chroot environment
required to build some components. For example, a CentOS chroot environment is used for
building RPM packages and CentOS images.
• mirror - contains the code used to download all necessary packages from upstream
mirrors and build new ones, which are copied onto the Fuel ISO so that it works even if the
Internet connection is down.
• puppet - contains the code used to pack Fuel puppet modules into a tarball that is afterwards put
on Fuel ISO.
• packages - contains DEB and RPM specs as well as make code for building those packages,
included in Fuel DEB and RPM mirrors.
• bootstrap - contains a make script intended to build a CentOS-based miniroot image (a.k.a. initrd
or initramfs).
• image - contains make scripts for building CentOS and Ubuntu images using the Fuel mirrors
built from the scripts in the mirror directory. The images are an alternative to using the standard
anaconda and debian installers.
• docker - contains the make scripts to build docker containers, deployed on the Fuel Master node.
• upgrade - contains make scripts for building Fuel upgrade tarball.
• iso - contains make scripts for building Fuel ISO file.
Fuel-main also contains a set of directories which are not directly related to Fuel build processes:
• virtualbox - contains a set of shell scripts which allow one to deploy a Fuel demo lab easily using
VirtualBox.
• utils - contains a set of utilities used for maintaining Fuel components.
• fuelweb_test and fuelweb_ui_test - contain the code of Fuel system tests.
Build targets
• all - used for building all Fuel artifacts. Currently, it is an alias for iso target.
• bootstrap - used for building the in-memory bootstrap image which is used for node auto-discovery.
• mirror - used for building local mirrors (the copies of CentOS and Ubuntu mirrors which are then
placed onto the Fuel ISO). They contain all necessary packages, including those listed in the
requirements-*.txt files with their dependencies, as well as the Fuel packages.
Packages listed in the requirements-*.txt files are downloaded from upstream mirrors, while Fuel
packages are built from source code.
• iso - used for building the Fuel ISO. If the build succeeds, the ISO is put into the build/artifacts folder.
• img - used for building the Fuel flash stick image, a binary image copied onto a flash stick. That
stick can then be used as a bootable device; it contains the Fuel ISO as well as some auxiliary boot files.
• clean - removes build directory.
• deep_clean - removes the build directory and the local mirror. Note that if you remove a local mirror,
the next ISO build job will download all necessary packages again, so the process
goes faster if you keep local mirrors. On the other hand, it is safer to run make
deep_clean every time you build an ISO to make sure the local mirror is consistent.
Customizing build process
There are plenty of variables in the make files. Some of them act as build parameters; they
are defined in the config.mk file:
• TOP_DIR - the default current directory. All other build directories are relative to this path.
• BUILD_DIR - contains all files, used during build process. By default, it is $(TOP_DIR)/build.
• ARTS_DIR - contains build artifacts such as ISO and IMG files. By default, it is
$(BUILD_DIR)/artifacts.
• LOCAL_MIRROR - contains local CentOS and Ubuntu mirrors. By default, it is
$(TOP_DIR)/local_mirror.
• DEPS_DIR - the directory where artifacts of previous build jobs are placed before the build starts,
for build targets that depend on them. By default, it is $(TOP_DIR)/deps.
• ISO_NAME - the name of the Fuel ISO without the file extension: if ISO_NAME = MY_CUSTOM_NAME,
then the Fuel ISO file will be named MY_CUSTOM_NAME.iso.
• ISO_PATH - used to specify Fuel ISO full path instead of defining just ISO name. By default, it is
$(ARTS_DIR)/$(ISO_NAME).iso.
• UPGRADE_TARBALL_NAME - defines the name of the upgrade tarball. By default, the tarball file is
$(UPGRADE_TARBALL_NAME).tar.
• UPGRADE_TARBALL_PATH - used to define full upgrade tarball path. By default, it is
$(ARTS_DIR)/$(UPGRADE_TARBALL_NAME).tar.
• VBOX_SCRIPTS_NAME - defines the name of the archive with VirtualBox scripts. By default, it is
placed into $(VBOX_SCRIPTS_NAME).zip.
• VBOX_SCRIPTS_PATH - defines the full path for the VirtualBox scripts archive. By default, it is
$(ARTS_DIR)/$(VBOX_SCRIPTS_NAME).zip.
The Fuel ISO contains some default settings for the Fuel Master node. These settings can be
customized during Fuel Master node installation, or by using the following variables:
• MASTER_IP - the Fuel Master node IP address. By default, it is 10.20.0.2.
• MASTER_NETMASK - Fuel Master node IP netmask. By default, it is 255.255.255.0.
• MASTER_GW - Fuel Master node default gateway. By default, it is 10.20.0.1.
• MASTER_DNS - the upstream DNS location for the Fuel Master node. The Fuel Master node DNS will
forward there all DNS requests that it is not able to resolve itself. By default, it is 10.20.0.1.
Other options
• BUILD_OPENSTACK_PACKAGES - list of OpenStack packages to be rebuilt from source.
• [repo]_REPO - remote source code repo. A URL of a git repository can be specified for each of the
Fuel components (FUELLIB, NAILGUN, ASTUTE, OSTF).
• [repo]_COMMIT - source branch for each of the Fuel components to build.
• [repo]_GERRIT_URL - gerrit repo.
• [repo]_GERRIT_COMMIT - list of extra commits from gerrit.
• [repo]_SPEC_REPO - repo for RPM/DEB specs of OpenStack packages.
• [repo]_SPEC_COMMIT - branch for checkout.
• [repo]_SPEC_GERRIT_URL - gerrit repo for OpenStack specs.
• [repo]_SPEC_GERRIT_COMMIT - list of extra commits from gerrit for specs.
• USE_MIRROR - pre-built mirrors from the Fuel infrastructure. The following mirrors can be used:
• ext (external mirror, available from outside the Mirantis network)
• none (reserved for building local mirrors: in this case CentOS and Ubuntu packages will be
fetched from upstream mirrors, which makes the build process much slower).
• MIRROR_CENTOS - download CentOS packages from a specific remote repo.
• MIRROR_UBUNTU - download Ubuntu packages from a specific remote repo.
• MIRROR_DOCKER - download docker images from a specific remote url.
• EXTRA_RPM_REPOS - extra repos with RPM packages. Each repo must be a comma-separated tuple
of repo name and repo path: <first_repo_name>,<first_repo_path>
<second_repo_name>,<second_repo_path>. For example:
qemu2,http://osci-obs.vm.mirantis.net:82/centos-fuel-5.1-stable-15943/centos/
libvirt,http://osci-obs.vm.mirantis.net:82/centos-fuel-5.1-stable-17019/centos/
• EXTRA_DEB_REPOS - extra repos with DEB packages. Each repo must consist of url, distro and
section parts, and repos must be separated by a pipe: <first_repo_path>|<second_repo_path>. For
example:
http://fuel-repository.mirantis.com/repos/ubuntu-fuel-5.1-stable-15955/ubuntu/|http://fuel-repository.mirantis.com/repos/ubuntu-fuel-5.1...
• FEATURE_GROUPS - options for the ISO. A combination of the following:
• mirantis (use mirantis logos and logic)
• experimental (allow experimental features on Fuel web UI)
Note that if you want to add more packages to the Fuel Master node, you should update the
requirements-rpm.txt and the requirements-deb.txt files.
Python Module Index
nailgun
nailgun.api.v1.handlers.cluster
nailgun.api.v1.handlers.disks
nailgun.api.v1.handlers.logs
nailgun.api.v1.handlers.network_configuration
nailgun.api.v1.handlers.node
nailgun.api.v1.handlers.notifications
nailgun.api.v1.handlers.release
nailgun.api.v1.handlers.tasks
nailgun.api.v1.handlers.version
nailgun.objects.base
nailgun.objects.cluster
nailgun.objects.node
nailgun.objects.release