RDO Community News

See also blogs.rdoproject.org

Recent blog posts

It's been a few weeks since I did one of these blog wrapups, and there's been a lot of great content by the RDO community recently.

Here's some of what we've been talking about recently:

Project Teams Gathering (PTG) report - Zuul by tristanC

The OpenStack infrastructure team gathered in Denver (September 2017). This article reports on some of the Zuul topics discussed at the PTG.

Read more at http://rdoproject.org/blog/2017/09/PTG-report-zuul/

Evaluating Total Cost of Ownership of the Identity Management Solution by Dmitri Pal

Increasing Interest in Identity Management: During the last several months I’ve seen a rapid growth of interest in Red Hat’s Identity Management (IdM) solution. This might have been due to different reasons.

Read more at http://rhelblog.redhat.com/2017/09/18/evaluating-total-cost-of-ownership-of-the-identity-management-solution/

Debugging TripleO Ceph-Ansible Deployments by John

Starting in Pike it is possible to use TripleO to deploy Ceph in containers using ceph-ansible. This is a guide to help you if there is a problem. It asks questions, somewhat rhetorically, to help you track down the problem.

Read more at http://blog.johnlikesopenstack.com/2017/09/debug-tripleo-ceph-ansible.html

Make a NUMA-aware VM with virsh by John

Grégory showed me how he uses virsh edit on a VM to add something like the following:

Read more at http://blog.johnlikesopenstack.com/2017/09/make-numa-aware-vm-with-virsh.html

Writing a SELinux policy from the ground up by tristanC

SELinux is a mechanism that implements mandatory access controls in Linux systems. This article shows how to create a SELinux policy that confines a standard service:

Read more at http://rdoproject.org/blog/2017/09/SELinux-policy-from-the-ground-up/

Trick to test external ceph clusters using only tripleo-quickstart by John

TripleO can stand up a Ceph cluster as part of an overcloud. However, if all you have is a tripleo-quickstart env and you want to test an overcloud feature which uses an external Ceph cluster, then you can have quickstart stand up two heat stacks, one to make a separate Ceph cluster and the other to stand up an overcloud which uses that Ceph cluster.

Read more at http://blog.johnlikesopenstack.com/2017/09/trick-to-test-external-ceph-clusters.html

RDO Pike released by Rich Bowen

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Pike for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Pike is the 16th release from the OpenStack project, which is the work of more than 2300 contributors from around the world (source).

Read more at http://rdoproject.org/blog/2017/09/rdo-pike-released/

OpenStack Summit Sydney preview: Red Hat to present at more than 40 sessions by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

The next OpenStack Summit will take place in Sydney, Australia, November 6-8. And despite the fact that the conference will only run three days instead of the usual four, there will be plenty of opportunities to learn about OpenStack from Red Hat’s thought leaders.

Read more at http://redhatstackblog.redhat.com/2017/08/31/openstack-summit-fall2017-preview/

Scheduled snapshots by Tim Bell

While most of the machines on the CERN cloud are configured using Puppet with state stored in external databases or file stores, there are a few machines where this has been difficult, especially for legacy applications. Doing a regular snapshot of these machines would be a way of protecting against failure scenarios such as hypervisor failure or disk corruptions.

Read more at http://openstack-in-production.blogspot.com/2017/08/scheduled-snapshots.html

Ada Lee: OpenStack Security, Barbican, Novajoin, TLS Everywhere in Ocata by Rich Bowen

Ada Lee talks about OpenStack Security, Barbican, Novajoin, and TLS Everywhere in Ocata, at the OpenStack PTG in Atlanta, 2017.

Read more at http://rdoproject.org/blog/2017/08/ada-lee-openstack-security-barbican-novajoin-tls-everywhere-in-ocata/

Octavia Developer Wanted by assafmuller

I’m looking for a Software Engineer to join the Red Hat OpenStack Networking team. I am presently looking to hire in Europe, Israel and US East. The candidate may work from home or from one of the Red Hat offices. The team is globally distributed and comprised of talented, autonomous, empowered and passionate individuals with a healthy work/life balance. The candidate will work on OpenStack Octavia and LBaaS. The candidate will write and review code while working with upstream community members and fellow Red Hatters. If you want to do open source, Red Hat is objectively where it’s at. We have an institutional culture of open source at all levels and this has a ripple effect on your day to day and your career at the company.

Read more at https://assafmuller.com/2017/08/18/octavia-developer-wanted/

View article »

Project Teams Gathering (PTG) report - Zuul

The OpenStack infrastructure team gathered in Denver (September 2017). This article reports on some of the Zuul topics discussed at the PTG.

For your reference, I highlighted some of the new features coming in Zuul version 3 in this article.

Cutover and jobs migration

Over the past several years, the OpenStack community has grown a complex set of CI jobs that need to be migrated. A zuul-migrate script has been created to automate the migration from the Jenkins Job Builder format to the new Ansible-based job definition. The migrated jobs are prefixed with "legacy-" to indicate they still need to be manually refactored to fully benefit from the ZuulV3 features.

The team couldn't finish the migration and disable the current ZuulV2 services at the PTG because the jobs migration took longer than expected. However, a new cutover attempt will occur in the next few weeks.

Ansible devstack job

The devstack job has been completely rewritten as a fully fledged Ansible job. This is a good example of what a job looks like in the new Zuul.

A project that needs a devstack CI job can use this new job definition:

- job:
    name: shade-functional-devstack-base
    parent: devstack
    description: |
      Base job for devstack-based functional tests
    pre-run: playbooks/devstack/pre
    run: playbooks/devstack/run
    post-run: playbooks/devstack/post
    required-projects:
      # These jobs will DTRT when shade triggers them, but we want to make
      # sure stable branches of shade never get cloned by other people,
      # since stable branches of shade are, well, not actually things.
      - name: openstack-infra/shade
        override-branch: master
      - name: openstack/heat
      - name: openstack/swift
    roles:
      - zuul: openstack-infra/devstack-gate
    timeout: 9000
    vars:
      devstack_localrc:
        SWIFT_HASH: "1234123412341234"
      devstack_local_conf:
        post-config:
          "$CINDER_CONF":
            DEFAULT:
              osapi_max_limit: 6
      devstack_services:
        ceilometer-acentral: False
        ceilometer-acompute: False
        ceilometer-alarm-evaluator: False
        ceilometer-alarm-notifier: False
        ceilometer-anotification: False
        ceilometer-api: False
        ceilometer-collector: False
        horizon: False
        s-account: True
        s-container: True
        s-object: True
        s-proxy: True
      devstack_plugins:
        heat: https://git.openstack.org/openstack/heat
      shade_environment:
        # Do we really need to set this? It's cargo culted
        PYTHONUNBUFFERED: 'true'
        # Is there a way we can query the localconf variable to get these
        # rather than setting them explicitly?
        SHADE_HAS_DESIGNATE: 0
        SHADE_HAS_HEAT: 1
        SHADE_HAS_MAGNUM: 0
        SHADE_HAS_NEUTRON: 1
        SHADE_HAS_SWIFT: 1
      tox_install_siblings: False
      tox_envlist: functional
      zuul_work_dir: src/git.openstack.org/openstack-infra/shade

This new job definition greatly simplifies the devstack integration tests, and projects now have much more fine-grained control over their integration with the other OpenStack projects.

Dashboard

I have been working on the new zuul-web interfaces to replace the scheduler webapp so that we can scale out the REST endpoints and prevent direct connections to the scheduler. Here is a summary of the new interfaces:

  • /tenants.json : return the list of tenants,
  • /{tenant}/status.json : return the status of the pipelines,
  • /{tenant}/jobs.json : return the list of jobs defined, and
  • /{tenant}/builds.json : return the list of builds from the sql reporter.
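
For instance, assuming the same base URL variables as in the builds example further below, the tenant and job lists can be fetched directly:

# Hedged sketch: querying the other endpoints listed above
# (the ZUUL_URL / TENANT_URL variables are assumptions, following the builds.json example)
curl ${ZUUL_URL}/tenants.json | python -mjson.tool
curl ${TENANT_URL}/jobs.json | python -mjson.tool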

Moreover, the new interfaces enable new use cases, for example, users can now:

  • Get the list of available jobs and their description,
  • Check the results of post and periodic jobs, and
  • Dynamically list jobs' results using filters, for example, the last tripleo periodic jobs can be obtained using:
$ curl "${TENANT_URL}/builds.json?project=tripleo&pipeline=periodic" | python -mjson.tool
[
    {
        "change": 0,
        "patchset": 0,
        "id": 16,
        "job_name": "periodic-tripleo-ci-centos-7-ovb-ha-oooq",
        "log_url": "https://logs.openstack.org/periodic-tripleo-ci-centos-7-ovb-ha-oooq/2cde3fd/",
        "pipeline": "periodic",
        ...
    },
    ...
]

OpenStack health

The openstack-health service is likely to be modified to better interface with the new Zuul design. It is currently connected to an internal gearman bus to receive job completion events before running the subunit2sql process.

This processing could be rewritten as a post playbook to do the subunit processing as part of the job. Then the data could be pushed to the SQL server with the credentials stored in a Zuul secret.

Roadmap

On the last day, even though most of us were exhausted, we spent some time discussing the roadmap for the upcoming months. While the roadmap is still being defined, here are some highlights:

  • Based on new users' walkthroughs, the documentation will be greatly improved; for example, see this nodepool contribution.
  • Jobs will be able to return structured data to improve the reporting. For example, a pypi publisher may return the published URL. Similarly, an rpm-build job may return the repository URL.
  • Dashboard web interface and javascript tooling,
  • Admin interface to manually trigger a build or cancel a buildset,
  • Nodepool quota to improve performance,
  • Cross-source dependencies, for example a github change in Ansible could use Depends-On to reference a gerrit change in shade,
  • More Nodepool drivers such as Kubernetes or AWS, and
  • Fedmsg and mqtt zuul drivers for message bus reporting and trigger sources.

In conclusion, the ZuulV3 efforts were extremely fruitful and this article only covers a few of the design sessions. Once again, we have made great progress and I'm looking forward to further developments. Thank you all for the great team gathering event!

View article »

Writing a SELinux policy from the ground up

SELinux is a mechanism that implements mandatory access controls in Linux systems. This article shows how to create a SELinux policy that confines a standard service:

  • Limit its network interfaces,
  • Restrict its system access, and
  • Protect its secrets.

Mandatory access control

By default, unconfined processes use discretionary access controls (DAC). A user has all permissions over their objects; for example, the owner of a log file can modify it or make it world readable.

In contrast, mandatory access control (MAC) enables more fine-grained controls; for example, it can restrict the owner of a log file to append-only operations. Moreover, MAC can also be used to reduce the capabilities of a regular process, for example by denying debugging or networking capabilities.

This is great for system security, but it is also a powerful tool to control and better understand an application. Security policies reduce a service's attack surface and describe its system operations in depth.

Policy module files

A SELinux policy is composed of:

  • A type enforcement file (.te): describes the policy type and access control,
  • An interface file (.if): defines functions available to other policies,
  • A file context file (.fc): describes the path labels, and
  • A package spec file (.spec): describes how to build and install the policy.

The packaging is optional but highly recommended since it's a standard method to distribute and install new pieces on a system.

Under the hood, these files are written using macro processors:

  • A policy file (.pp) is generated using: make NAME=targeted -f "/usr/share/selinux/devel/Makefile"
  • An intermediary file (.cil) is generated using: /usr/libexec/selinux/hll/pp

Policy development workflow

The first step is to get the service running in a confined domain. Then we define new labels to better protect the service. Finally, the service is run in permissive mode to collect the accesses it needs.

As an example, we are going to create a security policy for the scheduler service of the Zuul program.

Confining a Service

To get the basic policy definitions, we use the sepolicy generate command to generate a bootstrap zuul-scheduler policy:

sepolicy generate --init /opt/rh/rh-python35/root/bin/zuul-scheduler

The --init argument tells the command to generate a service policy. Other types of policy could be generated, such as user application, inetd daemon, or confined administrator.

The .te file contains:

  • A new zuul_scheduler_t domain,
  • A new zuul_scheduler_exec_t file label,
  • A domain transition from systemd to zuul_scheduler_t when the zuul_scheduler_exec_t is executed, and
  • Miscellaneous definitions such as the ability to read localization settings.

The .fc file contains regular expressions to match a file path with a label: /bin/zuul-scheduler is associated with zuul_scheduler_exec_t.

The .if file contains methods (macros) that enable role extension. For example, we could use the zuul_scheduler_admin method to authorize a staff role to administrate the zuul service. We won't use this file because the admin user (root) is unconfined by default and it doesn't need special permission to administrate the service.

To install the zuul-scheduler policy we can run the provided script:

$ sudo ./zuul_scheduler.sh
Building and Loading Policy
+ make -f /usr/share/selinux/devel/Makefile zuul_scheduler.pp
Creating targeted zuul_scheduler.pp policy package
+ /usr/sbin/semodule -i zuul_scheduler.pp

Restarting the service should show (using "ps Zax") that it is now running with the system_u:system_r:zuul_scheduler_t:s0 context instead of system_u:system_r:unconfined_service_t:s0.

Looking at audit.log should show many "avc: denied" errors because no permissions have yet been defined. Note that the service is running fine because this initial policy defines the zuul_scheduler_t domain as permissive.
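
A quick way to inspect those denials from the audit log is a minimal ausearch invocation; the time filter and the grep are my additions:

# List recent AVC denials and filter for the new domain
sudo ausearch -m avc -ts recent | grep zuul_scheduler_t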

Before authorizing the service's access, let's define the zuul resources.

Define the service resources

The service is trying to access /etc/opt/rh/rh-python35/zuul and /var/opt/rh/rh-python35/lib/zuul, which inherited the etc_t and var_lib_t labels. Instead of giving zuul_scheduler_t access to etc_t and var_lib_t, we will create new types. Moreover, the zuul-scheduler manages secret keys that we can isolate from its general home directory, and it requires two TCP ports.

In the .fc file, define the new paths:

/var/opt/rh/rh-python35/lib/zuul/keys(/.*)?  gen_context(system_u:object_r:zuul_keys_t,s0)
/etc/opt/rh/rh-python35/zuul(/.*)?           gen_context(system_u:object_r:zuul_conf_t,s0)
/var/opt/rh/rh-python35/lib/zuul(/.*)?       gen_context(system_u:object_r:zuul_var_lib_t,s0)
/var/opt/rh/rh-python35/log/zuul(/.*)?       gen_context(system_u:object_r:zuul_log_t,s0)

In the .te file, declare the new types:

# System files
type zuul_conf_t;
files_type(zuul_conf_t)
type zuul_var_lib_t;
files_type(zuul_var_lib_t)
type zuul_log_t;
logging_log_file(zuul_log_t)

# Secret files
type zuul_keys_t;
files_type(zuul_keys_t)

# Network label
type zuul_gearman_port_t;
corenet_port(zuul_gearman_port_t)
type zuul_webapp_port_t;
corenet_port(zuul_webapp_port_t);

Note that the files_type() macro is important since it lets unconfined domains access the new types. Without it, even the admin user could not access the files.

In the .spec file, add the new paths and set up the TCP port labels:

%define relabel_files() \
restorecon -R /var/opt/rh/rh-python35/lib/zuul/keys
...

# In the %post section, add
semanage port -a -t zuul_gearman_port_t -p tcp 4730
semanage port -a -t zuul_webapp_port_t -p tcp 8001

# In the %postun section, add
for port in 4730 8001; do semanage port -d -p tcp $port; done

Rebuild and install the package:

sudo ./zuul_scheduler.sh && sudo rpm -ivh ./noarch/*.rpm

Check that the new types are installed using "ls -Z" and "semanage port -l":

$ ls -Zd /var/opt/rh/rh-python35/lib/zuul/keys/
drwx------. zuul zuul system_u:object_r:zuul_keys_t:s0 /var/opt/rh/rh-python35/lib/zuul/keys/
$ sudo semanage port -l | grep zuul
zuul_gearman_port_t            tcp      4730
zuul_webapp_port_t             tcp      8001

Update the policy

With the service resources now declared, let's restart the service and start using it to collect all the access it needs.

After a while, we can update the policy using "./zuul_scheduler.sh --update", which basically does "ausearch -m avc --raw | audit2allow -R". This collects all the denied permissions and generates type enforcement rules.

We can repeat these steps until all the required accesses are collected.
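
Put together, one iteration of this collect-and-update loop might look like the following sketch (the systemd unit name is an assumption):

# Restart the service and exercise it to trigger any remaining denials
sudo systemctl restart rh-python35-zuul-scheduler
# Fold the collected denials back into the policy
sudo ./zuul_scheduler.sh --update   # roughly: ausearch -m avc --raw | audit2allow -R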

Here's what the resulting zuul-scheduler rules look like:

allow zuul_scheduler_t gerrit_port_t:tcp_socket name_connect;
allow zuul_scheduler_t mysqld_port_t:tcp_socket name_connect;
allow zuul_scheduler_t net_conf_t:file { getattr open read };
allow zuul_scheduler_t proc_t:file { getattr open read };
allow zuul_scheduler_t random_device_t:chr_file { open read };
allow zuul_scheduler_t zookeeper_client_port_t:tcp_socket name_connect;
allow zuul_scheduler_t zuul_conf_t:dir getattr;
allow zuul_scheduler_t zuul_conf_t:file { getattr open read };
allow zuul_scheduler_t zuul_exec_t:file getattr;
allow zuul_scheduler_t zuul_gearman_port_t:tcp_socket { name_bind name_connect };
allow zuul_scheduler_t zuul_keys_t:dir getattr;
allow zuul_scheduler_t zuul_keys_t:file { create getattr open read write };
allow zuul_scheduler_t zuul_log_t:file { append open };
allow zuul_scheduler_t zuul_var_lib_t:dir { add_name create remove_name write };
allow zuul_scheduler_t zuul_var_lib_t:file { create getattr open rename write };
allow zuul_scheduler_t zuul_webapp_port_t:tcp_socket name_bind;

Once the service is no longer being denied permissions, we can remove the "permissive zuul_scheduler_t;" declaration and deploy it in production. To avoid issues, the domain can be set to permissive at first using:

$ sudo semanage permissive -a zuul_scheduler_t
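
Once no new denials show up, the permissive mode can be dropped again; a minimal sketch mirroring the command above:

# Remove the permissive mode once the policy is complete
sudo semanage permissive -d zuul_scheduler_t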

Too long, didn't read

In short, to confine a service:

  • Use sepolicy generate
  • Declare the service's resources
  • Install the policy and restart the service
  • Use audit2allow

View article »

RDO Pike released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Pike for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Pike is the 16th release from the OpenStack project, which is the work of more than 2300 contributors from around the world (source).

The release is making its way out to the CentOS mirror network, and should be on your favorite mirror site momentarily.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

New and Improved

Interesting things in the Pike release include:

Added/Updated packages

The following packages and services were added or updated in this release:

  • Kuryr and Kuryr-kubernetes: an integration between OpenStack and Kubernetes networking.
  • Senlin: a clustering service for OpenStack clouds.
  • Shade: a simple client library for interacting with OpenStack clouds, used by Ansible among others.
  • python-pankoclient: a client library for the event storage and REST API for Ceilometer.
  • python-scciclient: a ServerView Common Command Interface Client Library, for the FUJITSU iRMC S4 - integrated Remote Management Controller.

Other additions include:

Python Libraries

  • os-xenapi
  • ovsdbapp (deps)
  • python-daiquiri (deps)
  • python-deprecation (deps)
  • python-exabgp
  • python-json-logger (deps)
  • python-netmiko (deps)
  • python-os-traits
  • python-paunch
  • python-scciclient
  • python-scrypt (deps)
  • python-sphinxcontrib-actdiag (deps) (pending)
  • python-sphinxcontrib-websupport (deps)
  • python-stestr (deps)
  • python-subunit2sql (deps)
  • python-sushy
  • shade (SDK)
  • update XStatic packages (update)
  • update crudini to 0.9 (deps) (update)
  • upgrade liberasurecode and pyeclib libraries to 1.5.0 (update) (deps)

Tempest Plugins

  • python-barbican-tests-tempest
  • python-keystone-tests-tempest
  • python-kuryr-tests-tempest
  • python-patrole-tests-tempest
  • python-vmware-nsx-tests-tempest
  • python-watcher-tests-tempest

Puppet-Modules

  • puppet-murano
  • puppet-veritas_hyperscale
  • puppet-vitrage

OpenStack Projects

  • kuryr
  • kuryr-kubernetes
  • openstack-glare
  • openstack-panko
  • openstack-senlin

OpenStack Clients

  • mistral-lib
  • python-glareclient
  • python-pankoclient
  • python-senlinclient

Contributors

During the Pike cycle, we started the EasyFix initiative, which has resulted in several new people joining our ranks. These include:

  • Christopher Brown
  • Anthony Chow
  • T. Nichole Williams
  • Ricardo Arguello

But, we wouldn't want to overlook anyone. Thank you to all 172 contributors who participated in producing this release:

Aditya Prakash Vaja, Alan Bishop, Alan Pevec, Alex Schultz, Alexander Stafeyev, Alfredo Moralejo, Andrii Kroshchenko, Anil, Antoni Segura Puimedon, Arie Bregman, Assaf Muller, Ben Nemec, Bernard Cafarelli, Bogdan Dobrelya, Brent Eagles, Brian Haley, Carlos Gonçalves, Chandan Kumar, Christian Schwede, Christopher Brown, Damien Ciabrini, Dan Radez, Daniel Alvarez, Daniel Farrell, Daniel Mellado, David Moreau Simard, Derek Higgins, Doug Hellmann, Dougal Matthews, Edu Alcañiz, Eduardo Gonzalez, Elise Gafford, Emilien Macchi, Eric Harney, Eyal, Feng Pan, Frederic Lepied, Frederic Lepied, Garth Mollett, Gaël Chamoulaud, Giulio Fidente, Gorka Eguileor, Hanxi Liu, Harry Rybacki, Honza Pokorny, Ian Main, Igor Yozhikov, Ihar Hrachyshka, Jakub Libosvar, Jakub Ruzicka, Janki, Jason E. Rist, Jason Joyce, Javier Peña, Jeffrey Zhang, Jeremy Liu, Jiří Stránský, Johan Guldmyr, John Eckersberg, John Fulton, John R. Dennis, Jon Schlueter, Juan Antonio Osorio, Juan Badia Payno, Julie Pichon, Julien Danjou, Karim Boumedhel, Koki Sanagi, Lars Kellogg-Stedman, Lee Yarwood, Leif Madsen, Lon Hohberger, Lucas Alvares Gomes, Luigi Toscano, Luis Tomás, Luke Hinds, Martin André, Martin Kopec, Martin Mágr, Matt Young, Matthias Runge, Michal Pryc, Michele Baldessari, Mike Burns, Mike Fedosin, Mohammed Naser, Oliver Walsh, Parag Nemade, Paul Belanger, Petr Kovar, Pradeep Kilambi, Rabi Mishra, Radomir Dopieralski, Raoul Scarazzini, Ricardo Arguello, Ricardo Noriega, Rob Crittenden, Russell Bryant, Ryan Brady, Ryan Hallisey, Sarath Kumar, Spyros Trigazis, Stephen Finucane, Steve Baker, Steve Gordon, Steven Hardy, Suraj Narwade, Sven Anderson, T. Nichole Williams, Telles Nóbrega, Terry Wilson, Thierry Vignaud, Thomas Hervé, Thomas Morin, Tim Rozet, Tom Barron, Tony Breeds, Tristan Cacqueray, afazekas, danpawlik, dnyanmpawar, hamzy, inarotzk, j-zimnowoda, kamleshp, marios, mdbooth, michaelhenkel, mkolesni, numansiddique, pawarsandeepu, prateek1192, ratailor, shreshtha90, vakwetu, vtas-hyperscale-ci, yrobla, zhangguoqing, Vladislav Odintsov, Xin Wu, XueFengLiu, Yatin Karel, Yedidyah Bar David, adriano petrich, bcrochet, changzhi, diana, djipko, dprince, dtantsur, eggmaster, eglynn, elmiko, flaper87, gpocentek, gregswift, hguemar, jason guiditta, jprovaznik, mangelajo, marcosflobo, morsik, nmagnezi, sahid, sileht, slagle, trown, vkmc, wes hayutin, xbezdick, zaitcev, and zaneb.

Getting Started

There are three ways to get started with RDO.

  • To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation (see the example after this list). You can run RDO on a single node to get a feel for how it works.
  • For a production deployment of RDO, use the TripleO Quickstart and you'll be running a production cloud in short order.
  • Finally, if you want to try out OpenStack, but don't have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack. (TryStack is not, at this time, running Pike, although it is running RDO.)
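
For the Packstack option above, the installation on a fresh CentOS 7 node might look like the following sketch; the release repository package name is my assumption for Pike:

# All-in-one Packstack sketch on CentOS 7 (Pike repository package name assumed)
sudo yum install -y centos-release-openstack-pike
sudo yum update -y
sudo yum install -y openstack-packstack
sudo packstack --allinone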

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. For more developer-oriented content, we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we're there too, and also Google+.

View article »

Video interviews at the Denver PTG (Sign up now!)

TL;DR: Sign up here for the video interviews at the PTG in Denver next month.

Earlier this year, at the PTG in Atlanta, I did video interviews with some of the Red Hat engineers who were there.

You can see these videos on the RDO YouTube channel.

Or you can see the teaser video here:

This year, I'll be expanding that to everyone - not just Red Hat - to emphasize the awesome cooperation and collaboration that happens across projects, and across companies.

If you'll be at the PTG, please consider signing up to talk to me about your project. I'll be conducting interviews starting on Tuesday morning, and you can sign up here.

Please see the "planning for your interview" tab of that spreadsheet for the answers to all of your questions about the interviews. Or contact me directly at rbowen AT red hat DOT com if you have more questions.

View article »

Introducing opstools-ansible

Ansible

Ansible is an agentless, declarative configuration management tool. Ansible can be used to install and configure packages on a wide variety of targets. Targets are defined in an inventory file, to which Ansible applies the predefined actions. Actions are defined as playbooks, or sometimes roles, in the form of YAML files. Details of Ansible can be found here.

Opstools-ansible

The opstools-ansible project, hosted on GitHub, uses Ansible to configure an environment that provides support for opstools, namely centralized logging and analysis, availability monitoring, and performance monitoring.

One prerequisite for running opstools-ansible is that the servers have to be running CentOS 7 or RHEL 7 (or a compatible distribution).

Inventory file

These servers are defined in the inventory file, following the structure of this reference file, which defines 3 high-level host groups:

  • am_hosts
  • pm_hosts
  • logging_host

There are lower-level host groups, but the documentation states that they are not tested.
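
As an illustration, a minimal inventory using those groups could look like the following; the host names are placeholders and single-purpose hosts are assumed:

# Hypothetical minimal inventory using the host groups above
cat > inventory <<'EOF'
[am_hosts]
monitoring.example.com

[pm_hosts]
metrics.example.com

[logging_host]
logging.example.com
EOF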

Configuration File

Once the inventory file is defined, Ansible configuration files can be used to tailor the deployment to individual needs. The README.rst file for opstools-ansible suggests the following as an example:

fluentd_use_ssl: true
fluentd_shared_key: secret
fluentd_ca_cert: |
  -----BEGIN CERTIFICATE-----
  -----END CERTIFICATE-----
fluentd_private_key: |
  -----BEGIN RSA PRIVATE KEY-----
  -----END RSA PRIVATE KEY-----

If there is no Ansible configuration file to tune the system, the default settings/options are applied.

Playbooks and roles

The playbook specifies which packages Ansible installs for the opstools environment, that is, the centralized logging, availability monitoring, and performance monitoring components described above.

Besides those packages, the opstools-ansible playbook also applies these additional roles:

  • Firewall – this role manages the firewall rules for the servers.
  • Prereqs – this role checks and installs all the dependency packages, such as python-netaddr and libselinux-python, needed for the successful installation of opstools.
  • Repos – this is a collection of roles for configuring additional package repositories.
  • Chrony – this role installs and configures the NTP client to make sure the time on each server stays in sync.

opstools environment

Once these are done, we can simply run the following command to create the opstools environment:

    ansible-playbook playbook.yml -e @config.yml

TripleO Integration

TripleO (OpenStack on OpenStack) has the concepts of Undercloud and Overcloud:

  • Undercloud : for deployment, configuration and management of OpenStack nodes.
  • Overcloud : the actual OpenStack cluster that is consumed by users.

Red Hat has an in-depth blog post on TripleO, and OpenStack has this document on contributing to and installing TripleO.

When opstools is installed at the TripleO Undercloud, the OpenStack instances running on the Overcloud can be configured to run the opstools services when deployed. For example:

openstack overcloud deploy … \
  -e /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml \
  -e params.yaml

There are only 3 steps to integrate opstools with TripleO using opstools-ansible. Details of the steps can be found here.

  1. Use opstools-ansible to create the opstools environment at the Undercloud.
  2. Create the params.yaml for TripleO to point to the Sensu and Fluentd agents at the opstools hosts (see the example after this list).
  3. Deploy with the "openstack overcloud deploy …" command.
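
As an illustration of step 2, params.yaml points the overcloud at the opstools hosts. The parameter names below are assumptions and should be checked against the monitoring and logging environment files shipped with the TripleO templates:

# Hypothetical params.yaml for step 2 (parameter names are assumptions)
cat > params.yaml <<'EOF'
parameter_defaults:
  MonitoringRabbitHost: 192.0.2.10     # opstools availability-monitoring host
  MonitoringRabbitPassword: secret
  LoggingServers:
    - host: 192.0.2.10                 # opstools logging host (Fluentd)
      port: 24224
EOF
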
View article »

EasyFix: Getting started contributing to RDO

It can be intimidating trying to get involved in an open source project. Particularly one as huge and complicated as OpenStack. But we want you to join us on the RDO project, so we're trying to make it as easy as possible to get started.

To that end, we've started the EasyFix initiative. EasyFix is a collection of "easy" tickets that you should be able to get started on without having a deep knowledge of the RDO project. And in the process, you'll learn what you need to know to move on to more complex things.

These tickets cover everything from documentation, to packaging, to helping with events, to writing tools to make things easier. And each ticket comes with a mentor - someone who has volunteered to help you get through that first commit.

There's also a general mentors list, categorized by area of expertise, if you just have a question and you're not sure who to ping on IRC.

We've been running the EasyFix program for about 3 weeks now, and in that time, four new contributors have started on the RDO project.

We're very pleased to welcome these new contributors, and hope they'll be around for a long time to come.

Christopher "snecklifter" Brown - I'm Christopher Brown, an HPC Cloud Engineer based in Sheffield in the UK. I've worked on OpenStack since June 2015. I used to work on packaging for the Fedora project so I am transferring and updating those skills to help out with RDO when I can.

Anthony Chow - I am a software developer for legacy networking equipment wanting to be a Developer Advocate. I have been venturing into other technologies such as cloud, containers, and configuration management tools. I am passionate about learning and sharing technology-related topics.

Treva Williams - T. Nichole Williams is an RHCSA 7 Certified Linux and OpenStack engineer and content author for LinuxAcademy.com, and an active OpenStack technical contributor in the Magnum project. She is actively involved in several OpenStack, OpenShift, RDO, and Ceph communities and groups. When not OpenStacking or Cephing, she enjoys doggos, candy, cartoons, and playing "So You Think You're a Marine Biologist" on Google.

Ricardo Arguello - I am an Open Source enthusiast that tries to collaborate as much as I can. I have helped in the past with patches for WildFly, and as a Fedora packager. The RDO project is very complex and intimidating, but collaborators are welcome and there are easy-to-fix bugs for newbies to make their first contributions! That makes RDO a great community if you are interested in helping the project while learning about OpenStack internals in the process.

If you want to participate in the RDO project, we encourage you to find something on the EasyFix list and get started. And please consider attending EasyFix office hours, Tuesdays at 13:30 UTC.

View article »

What's new in ZuulV3

Zuul is a program used to gate a project's source code repository so that changes are only merged if they pass integration tests. This article presents some of the new features in the next version, ZuulV3.

Distributed configuration

The configuration is distributed across projects' repositories. For example, here is what the new zuul main.yaml configuration will look like:

- tenant:
    name: downstream
    source:
      gerrit:
        config-projects:
          - config
        untrusted-projects:
          - restfuzz
      openstack.org:
        untrusted-projects:
          - openstack-infra/zuul-jobs:
              include: job
              shadow: config

This configuration describes a downstream tenant with two sources. Gerrit is a local gerrit instance and openstack.org is the review.openstack.org service. For each source, there are two types of projects:

  • config-projects hold configuration information such as logserver access. Jobs defined in config-projects run with elevated privileges.
  • untrusted-projects are projects being tested or deployed.

The openstack-infra/zuul-jobs project has special settings discussed below.

Default jobs with openstack-infra/zuul-jobs

The openstack-infra/zuul-jobs repository contains common job definitions and Zuul only imports jobs that are not already defined (shadow) in the local config.

This is great news for Third Party CIs that will easily be able to re-use upstream jobs such as tox-docs or tox-py35 with their convenient post-processing of unittest results.

In-repository configuration

The new distributed configuration enables a more streamlined workflow. Indeed, pipelines and projects are now defined in the project's repository, which allows changes to be tested before merging.

Traditionally, a project's CI needed to be configured in two steps: first the jobs were defined, then a test change was rechecked until the job was working. This is no longer needed because the jobs and configuration are set directly in the repository, and the CI change itself goes through the CI workflow.

After the project is registered in the main.yaml file, a project author can submit a .zuul.yaml file (along with any other changes needed to make the test succeed). Here is a minimal zuul.yaml setting:

- project:
    name: restfuzz
    check:
      jobs:
        - tox-py35

Zuul will look for a zuul.yaml file or a zuul.d directory as well as hidden versions prefixed by a '.'. The project can also define its own jobs, as in the sketch below.
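
For instance, a project could carry both a job and its pipeline definition in its own repository. Here is a hypothetical .zuul.yaml sketch that follows the job and project formats shown in this article:

# Hypothetical .zuul.yaml defining both a job and the project pipelines
cat > .zuul.yaml <<'EOF'
- job:
    name: restfuzz-unit
    parent: base
    run: playbooks/unit

- project:
    name: restfuzz
    check:
      jobs:
        - restfuzz-unit
EOF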

Ansible job definition

Jobs are now created in Ansible, which brings many advantages over the Jenkins Job Builder format:

  • Multi-node architecture where tasks are easily distributed,
  • The Ansible module ecosystem simplifies complex tasks, and
  • Manual execution of jobs.

Here is an example:

- job:
    name: restfuzz-rdo
    parent: base
    run: playbooks/rdo
    nodes:
      - name: cloud
        label: centos
      - name: fuzzer
        label: fedora

Then the playbook can be written like this:

- hosts: cloud
  tasks:
    - name: "Deploy rdo"
      command: packstack --allinone
      become: yes
      become_user: root

    - name: "Store openstackrc"
      command: "cat /root/keystonerc_admin"
      register: openstackrc
      become: yes
      become_user: root

- hosts: fuzzer
  tasks:
    - name: "Setup openstackrc"
      copy:
        content: "{{ hostvars['cloud']['openstackrc'].stdout }}"
        dest: "{{ zuul_work_dir }}/.openstackrc"

    - name: "Deploy restfuzz"
      command: python setup.py install
      args:
        chdir: "{{ zuul_work_dir }}"
      become: yes
      become_user: root

    - name: "Run restfuzz"
      command: "restfuzz --target {{ hostvars['cloud']['ansible_eth0']['ipv4']['address'] }}"

The base parent from the config project manages the pre phase to copy the sources to the instances and the post phase to publish the job logs.

Nodepool drivers

This is still a work in progress, but it's worth noting that Nodepool is growing a driver-based design to support non-OpenStack providers. The primary goal is to support static node assignments, and the interface can be used to implement new providers. A driver needs to implement a Provider class to manage access to a new API, and a RequestHandler to manage resource creation.

As a Proof Of Concept, I wrote an OpenContainer driver that can spawn thin instances using RunC:

providers:
  - name: container-host-01
    driver: oci
    hypervisor: fedora.example.com
    pools:
      - name: main
        max-servers: 42
        labels:
          - name: fedora-26-oci
            path: /
          - name: centos-6-oci
            path: /srv/centos6
          - name: centos-7-oci
            path: /srv/centos7
          - name: rhel-7.4-oci
            path: /srv/rhel7.4

This is good news for operators and users who don't have access to an OpenStack cloud, since Zuul/Nodepool may be able to use new providers such as OpenShift, for example.

In conclusion, ZuulV3 brings a lot of cool new features to the table, and this article only covers a few of them. Check the documentation for more information and stay tuned for the upcoming release.

View article »

rdopkg-0.44 ChangeBlog

I'm happy to announce that version 0.44.2 of the rdopkg RPM packaging automation tool has been released.

While a changelog generated from git commits is available in the original 0.44 release commit message, I think it's also worth writing a human-readable summary of the work done by the rdopkg community for this release. I'm not sure about the format yet, so I'll start with a blog post about the changes - a ChangeBlog ;)

41 commits from 7 contributors were merged over the course of 4 months since the last release, with an average time to land of 6 days. More stats

For more information about each change, follow the link to inspect the related commit on GitHub.

Software Factory migration

Migrate to softwarefactory-project.io

Versioning

Adopt pbr for version and setup.py management

Include minor version 0.44 -> 0.44.0 as pbr dictates

Python 3 compatibility

Add some Python 3 compatibility fixes

More python 3 compatibility fixes

Testing

Add BDD feature tests using python-behave

  • Unit tests sucked for testing high level behavior so I tried an alternative. I'm quite pleased with python-behave; see one of the first new-version scenarios written in Gherkin:

    Scenario: rdopkg new-version with upstream patches
        Given a distgit at Version 0.1 and Release 0.1
        Given a patches branch with 5 patches
        Given a new version 1.0.0 with 2 patches from patches branch
        When I run rdopkg new-version -lntU 1.0.0
        Then .spec file tag Version is 1.0.0
        Then .spec file tag Release is 1%{?dist}
        Then .spec file doesn't contain patches_base
        Then .spec file has 3 patches defined
        Then .spec file contains new changelog entry with 1 lines
        Then new commit was created
    

    It also looks reasonable on the python side.

Avoid test failure due to git hooks

tests: include pep8 in test-requirements.txt

tests: enable nice py.test diffs for common test code

New Features

pkgenv: display color coded hashes for branches

  • You can now easily tell the state of branches just by looking at the color.
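
    A quick way to see it, assuming the command is run from inside a dist-git checkout:

        # Show the package environment, now with color coded branch hashes
        rdopkg pkgenv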

distgit: new -H/--commit-header-file option

patch: new -B/--no-bump option to only sync patches

Add support for buildsys-tags in info-tags-diff

Add options to specify user and mail in changelog entry

allow patches remote and branch to be set in git config

new-version: handle RHCEPH and RHSCON products

guess: return RH osdist for eng- dist-git branches

Improvements

distgit: Use NVR for commit title for multiple changelog lines

Improve %changelog handling

Improve patches_ignore detection

Avoid prompt on non interactive console

Update yum references to dnf

Switch to pycodestyle (pep8 rename)

Fixes

Use absolute path for repo_path

  • This caused trouble when using rdopkg info -l.

Use always parse_info_file in get_info

Fix output of info-tags-diff for new packages

Refactoring

refactor: merge legacy rdopkg.utils.exception

  • There is only one place for exceptions now \o/

core: refactor unreasonable default atomic=False

make git.config_get behave more like dict.get

specfile: fix improper naming: get_nvr, get_vr

fixed linting

Documentation

document new-version's --bug argument

Update doc to reflect output change in info-tags-diff

Happy rdopkging!

View article »