Upgrading Individual OpenStack Services (Live Compute) in a Standard Environment

This section describes the steps to follow to upgrade your cloud deployment by updating one service at a time, with live Compute, in a non-High Availability (HA) environment. This scenario upgrades from Newton to Ocata in environments that do not use TripleO.

A live Compute upgrade minimizes interruptions to your Compute service, with only a few minutes of downtime for the smaller services, and a longer migration interval for workloads moving to newly upgraded Compute hosts. Existing workloads can run indefinitely, and you do not need to wait for a database migration.

Important
Due to certain package dependencies, upgrading the packages for one OpenStack service might cause Python libraries to upgrade before other OpenStack services upgrade. This might cause certain services to fail prematurely. In this situation, continue upgrading the remaining services. All services should be operational upon completion of this scenario.
Note
This method may require additional hardware resources to bring up the Compute nodes.

Pre-Upgrade Tasks

On each node, install the Ocata release repository.

On RHEL:

# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm

On CentOS:

# yum install -y centos-release-openstack-ocata

Make sure that any previous release repositories are disabled. For example:

# yum-config-manager --disable centos-release-openstack-newton
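
To confirm that only the Ocata repository is now enabled, you can list the enabled repositories; the exact repository IDs depend on your distribution:

# yum repolist enabled | grep -i openstack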

Before updating, take a systemd snapshot of the OpenStack services:

# systemctl snapshot openstack-services
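
The snapshot is saved as a unit named openstack-services.snapshot. If you later need to return the services to their pre-upgrade running state, you can activate that snapshot; this is only a recovery sketch, not part of the normal upgrade flow:

# systemctl isolate openstack-services.snapshot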

Disable the main OpenStack services:

# systemctl stop 'openstack-*'
# systemctl stop 'neutron-*'
# systemctl stop 'openvswitch'

Upgrade the openstack-selinux package:

# yum upgrade openstack-selinux

This is necessary to ensure that the upgraded services will run correctly on a system with SELinux enabled.
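
Optionally, confirm that SELinux is still enforcing and check for new denials once the services are upgraded; the ausearch command requires the audit package:

# getenforce
# ausearch -m AVC -ts recent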

Upgrading Identity (keystone) and Dashboard (horizon)

Disable the Identity service and the Dashboard WSGI applets:

# systemctl stop httpd

Update the packages for both services:

# yum -d1 -y upgrade \*keystone\*
# yum -d1 -y upgrade \*horizon\* \*openstack-dashboard\*
# yum -d1 -y upgrade \*python-django\*

The Identity service’s token table might contain a large number of expired entries, which can dramatically increase the time it takes to complete the database schema upgrade. To alleviate the problem, flush expired tokens from the database with the keystone-manage command before running the Identity database upgrade:

# keystone-manage token_flush
# su -s /bin/sh -c "keystone-manage db_sync" keystone

The token_flush command removes expired tokens from the database, and db_sync upgrades the Identity database schema. You can arrange to run token_flush periodically using cron.
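
For example, to run the flush once a day as the keystone user, edit that user's crontab:

# crontab -u keystone -e

and add a line such as the following; the schedule and log redirection shown here are only a suggestion:

1 0 * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1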

Restart the httpd service:

# systemctl start httpd
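
To verify that the Identity service responds after the restart, you can request a token; this assumes admin credentials are already loaded in your shell environment (for example, from a keystonerc file):

$ openstack token issue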

Upgrading Object Storage (swift)

On your Object Storage hosts, run:

# systemctl stop '*swift*'
# yum -d1 -y upgrade \*swift\*
# systemctl start openstack-swift-account-auditor \
    openstack-swift-account-reaper \
    openstack-swift-account-replicator \
    openstack-swift-account \
    openstack-swift-container-auditor \
    openstack-swift-container-replicator \
    openstack-swift-container-updater \
    openstack-swift-container \
    openstack-swift-object-auditor \
    openstack-swift-object-replicator \
    openstack-swift-object-updater \
    openstack-swift-object \
    openstack-swift-proxy
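
Optionally, confirm that the Object Storage services came back up; systemctl is-active returns a non-zero exit status if any listed unit failed to start:

# systemctl is-active openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object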

Upgrading Image Service (glance)

On your Image Service host, run:

# systemctl stop '*glance*'
# yum -d1 -y upgrade \*glance\*
# su -s /bin/sh -c "glance-manage db_sync" glance
# systemctl start openstack-glance-api \
    openstack-glance-registry
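
With admin credentials loaded, you can confirm that the Image service API answers and that existing images are still listed:

$ openstack image list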

Upgrading Block Storage (cinder)

On your Block Storage host, run:

# systemctl stop '*cinder*'
# yum -d1 -y upgrade \*cinder\*
# su -s /bin/sh -c "cinder-manage db sync" cinder
# systemctl start openstack-cinder-api \
    openstack-cinder-scheduler \
    openstack-cinder-volume
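
With admin credentials loaded, you can check that the Block Storage services have registered again; all services should report as up after a short interval:

$ openstack volume service list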

Upgrading Orchestration (heat)

On your Orchestration host, run:

# systemctl stop '*heat*'
# yum -d1 -y upgrade \*heat\*
# su -s /bin/sh -c "heat-manage db_sync" heat
# systemctl start openstack-heat-api-cfn \
    openstack-heat-api-cloudwatch \
    openstack-heat-api \
    openstack-heat-engine
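
With admin credentials loaded, a quick way to confirm that the Orchestration API is reachable and that existing stacks are intact:

$ openstack stack list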

Upgrading Telemetry (ceilometer)

  1. On all nodes hosting Telemetry component services, run:

    # systemctl stop '*ceilometer*'
    # systemctl stop '*aodh*'
    # systemctl stop '*gnocchi*'
    # yum -d1 -y upgrade \*ceilometer\* \*aodh\* \*gnocchi\*
  2. On the Controller node, where the database is installed, run:

    # ceilometer-dbsync
    # aodh-dbsync
    # gnocchi-upgrade
  3. After completing the package upgrade, restart the Telemetry service by running the following command on all nodes hosting Telemetry component services:

    # systemctl start openstack-ceilometer-api \
        openstack-ceilometer-central \
        openstack-ceilometer-collector \
        openstack-ceilometer-notification \
        openstack-aodh-evaluator \
        openstack-aodh-listener \
        openstack-aodh-notifier \
        openstack-gnocchi-metricd \
        openstack-gnocchi-statsd
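
Optionally, confirm on each node that the Telemetry, Alarming, and Metric services restarted cleanly; adjust the unit names to the services that actually run on that node:

# systemctl is-active openstack-ceilometer-central openstack-aodh-evaluator openstack-gnocchi-metricd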

Upgrading Compute (nova)

  1. If you are performing a rolling upgrade of your Compute hosts, you need to set explicit API version limits to ensure compatibility in your environment.

    Before starting Compute services on Controller or Compute nodes, set the compute option in the [upgrade_levels] section of nova.conf to the previous OpenStack version (newton):

    # crudini --set /etc/nova/nova.conf upgrade_levels compute newton

    You need to make this change on your Controller and Compute nodes.

    You should undo this operation after upgrading all of your Compute nodes.

    Note

    If the crudini command is not available, install the crudini package:

    # yum install crudini
  2. On your Controller and Compute nodes, run:

    # systemctl stop '*nova*'
    # yum -d1 -y upgrade \*nova\*
    Note

    Unless stated otherwise, run the following commands on your Controller host.

  3. In Ocata, the Compute service requires the cell0 database and at least one additional Cells V2 cell, which you can name, for example, cell1.

    If you have not created the cell0 database in your Newton environment, you need to create it in Ocata, in the same way that you created the other nova databases in the past. The cell0 database name should take the form nova_cell0, where nova is the name of the existing nova database.

  4. After creating the cell0 database, create a Placement service user. For example:

    $ openstack user create --domain default --password-prompt placement
  5. Add the appropriate role to the Placement service user. For example:

    $ openstack role add --project service --user placement admin
  6. Create the Placement API entry. For example:

    $ openstack service create --name placement --description "Placement API" placement
  7. Create the appropriate Placement API service endpoints. For example:

    $ openstack endpoint create --region RegionOne placement public http://controller:8778
    
    $ openstack endpoint create --region RegionOne placement internal http://controller:8778
    
    $ openstack endpoint create --region RegionOne placement admin http://controller:8778
  8. Make sure that you have the required Placement API package installed:

    # yum install openstack-nova-placement-api
  9. Configure the Placement API in the [placement] section of the /etc/nova/nova.conf file.

    Note

    Configure the [placement] section in /etc/nova/nova.conf on both the Controller and Compute nodes.

    For example:

    [placement]
    
    os_region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:35357/v3
    username = placement
    password = MyPlacementPassword
    Note

    Due to a known issue, you might need to enable access to the Placement API by configuring the related Apache virtual host. For more information, see the bug report and the OpenStack Installation Tutorial.

  10. Create and map the cell0 database with Cells V2:

    # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
  11. Populate the cell0 database by applying the main Compute database schema:

    # su -s /bin/sh -c "nova-manage db sync" nova
  12. Create a cell1 cell:

    # su -s /bin/sh -c "nova-manage --config-file /etc/nova/nova.conf cell_v2 create_cell --name=cell1 --verbose" nova

    Run this command for each cell in your environment.

  13. Discover Compute hosts:

    # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    Note

    If you add more Compute hosts in your environment, remember to run the discover_hosts command again.

  14. List registered cells to get their UUIDs:

    # nova-manage cell_v2 list_cells
  15. Pass the UUID for cell1 to the map_instances command to map instances to cells:

    # su -s /bin/sh -c "nova-manage cell_v2 map_instances --cell_uuid <cell1 UUID>" nova
  16. Update the api_db database schema:

    # su -s /bin/sh -c "nova-manage api_db sync" nova
  17. After you have upgraded all of your Compute hosts, remove the API version limits that you configured in step 1. On all of your hosts:

    # crudini --del /etc/nova/nova.conf upgrade_levels compute
  18. Restart the Compute service on all Controller nodes:

    # systemctl start openstack-nova-api \
        openstack-nova-conductor \
        openstack-nova-consoleauth \
        openstack-nova-novncproxy \
        openstack-nova-scheduler
  19. Restart the Compute service on all Compute nodes:

    # systemctl start openstack-nova-compute
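
After the Compute services are running again, you can verify the Cells V2 and Placement setup. The nova-status upgrade check command, introduced in Ocata, reports whether the Placement API and the cell mappings are in place:

# nova-status upgrade check

With admin credentials loaded, you can also confirm that the Compute services report as up:

$ openstack compute service list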

Upgrading Clustering (sahara)

  1. On all nodes hosting Clustering component services, run:

    # systemctl stop '*sahara*'
    # yum -d1 -y upgrade \*sahara\*
  2. On the Controller node, where the database is installed, run:

    # su -s /bin/sh -c "sahara-db-manage upgrade heads" sahara
  3. After completing the package upgrade, restart the Clustering service by running the following command on all nodes hosting Clustering component services:

    # systemctl start openstack-sahara-api \
        openstack-sahara-engine
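
Optionally, confirm that the Clustering services are active:

# systemctl is-active openstack-sahara-api openstack-sahara-engine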

Upgrading OpenStack Networking (neutron)

  1. On your OpenStack Networking host, run:

    # systemctl stop '*neutron*'
    # yum -d1 -y upgrade \*neutron\*
  2. On the same host, update the OpenStack Networking database schema:

    # su -s /bin/sh -c "neutron-db-manage upgrade heads" neutron
  3. Restart the OpenStack Networking service:

    # systemctl start neutron-dhcp-agent \
        neutron-l3-agent \
        neutron-metadata-agent \
        neutron-openvswitch-agent \
        neutron-server
Note
Start any additional OpenStack Networking services enabled in your environment.
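
With admin credentials loaded, you can confirm that the OpenStack Networking agents have re-registered and report as alive; the exact agent list depends on which agents are enabled in your environment:

$ openstack network agent list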

Post-Upgrade Tasks

After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:

# yum upgrade

This ensures that all packages are up to date. You may want to schedule a restart of your OpenStack hosts at a later date to make sure that all running processes use the updated versions of the underlying binaries.

Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Ocata version of each service; compare these with your existing configuration files and merge any changes you need.
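
To locate the new default configuration files and compare them with your current settings, you can use find and diff; the keystone.conf path shown here is only an example:

# find /etc -name '*.rpmnew'
# diff -u /etc/keystone/keystone.conf /etc/keystone/keystone.conf.rpmnew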

New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference, available from http://docs.openstack.org/ocata/config-reference.