RDO Community News

See also blogs.rdoproject.org

Recent blog posts, June 19

Using Ansible Validations With Red Hat OpenStack Platform – Part 3 by August Simonelli, Technical Marketing Manager, Cloud

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

Read more at http://redhatstackblog.redhat.com/2017/06/15/using-ansible-validations-with-red-hat-openstack-platform-part-3/

TripleO deep dive session index by Carlos Camacho

This is a brief index of all the TripleO deep dive sessions; you can watch all the videos on the TripleO YouTube channel.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-index.html

TripleO deep dive session #10 (Containers) by Carlos Camacho

This is the 10th release of the TripleO “Deep Dive” sessions.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-10.html

OpenStack, Containers, and Logging by Lars Kellogg-Stedman

I've been thinking about logging in the context of OpenStack and containerized service deployments. I'd like to lay out some of my thoughts on this topic and see if people think I am talking crazy or not.

Read more at http://blog.oddbit.com/2017/06/14/openstack-containers-and-logging/

John Trowbridge: TripleO in Ocata by Rich Bowen

John Trowbridge (Trown) talks about his work on TripleO in the OpenStack Ocata period, and what's coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/john-trowbridge-tripleo-in-ocata/

Doug Hellmann: Release management in OpenStack Ocata by Rich Bowen

Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/06/doug-hellmann-release-management-in-openstack-ocata/

Using Ansible Validations With Red Hat OpenStack Platform – Part 2 by August Simonelli, Technical Marketing Manager, Cloud

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Read more at http://redhatstackblog.redhat.com/2017/06/12/using-ansible-validations-with-red-hat-openstack-platform-part-2/


Recent blog posts: June 12

Experiences with Cinder in Production by Arne Wiebalck

The CERN OpenStack cloud service has been providing block storage via Cinder since the Havana days in early 2014. Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Read more at http://openstack-in-production.blogspot.com/2017/06/experiences-with-cinder-in-production.html

Using Ansible Validations With Red Hat OpenStack Platform – Part 1 by August Simonelli, Technical Marketing Manager, Cloud

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

Read more at http://redhatstackblog.redhat.com/2017/06/08/using-ansible-validations-with-red-hat-openstack-platform-part-1/

Upstream First…or Second? by Adam Young

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see some things I would like to do differently in the future.

Read more at http://adam.younglogic.com/2017/06/upstream-first-or-second/

Accessing a Mistral Environment in a CLI workflow by John

Recently, with some help from the Mistral devs in freenode #openstack-mistral, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

Read more at http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html

OpenStack papers community on Zenodo by Tim Bell

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself and how it is used by applications. There are many papers being published in conference proceedings and PhD theses, but finding out about these can be difficult. While these papers may not necessarily lead to open source code contributions, the results of this research are a valuable resource for the community.

Read more at http://openstack-in-production.blogspot.com/2017/06/openstack-papers-community-on-zenodo.html

Event report: Red Hat Summit, OpenStack Summit by rbowen

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Read more at http://drbacchus.com/event-report-red-hat-summit-openstack-summit/


RDO Contributor Survey

We recently ran a contributor survey in the RDO community, and while the participation was fairly small (21 respondents), there's a lot of important insight we can glean from it.

First, and unsurprisingly:

Of the 20 people who answered the "corporate affiliation" question, 18 were Red Hat employees. While we are already aware that this is a place where we need to improve, it's good to know just how much room for improvement there is. What's useful here will be figuring out why people outside of Red Hat are not participating more. This is touched on in later questions.

Next, we have the frequency of contributions:

Here we see that while 14% of our contributors are pretty much working on RDO all the time, the majority of contributors only touch it a few times per release - probably updating a single package, or addressing a single bug, for that particular cycle.

This, too, is mostly in line with what we expected. With most of the RDO pipeline being automated, there's little that most participants would need to do beyond a handful of updates each release. Meanwhile, a core team works on the infrastructure and the tools every week to keep it all moving.

We asked contributors where they participate:

Most of the contributors - 75% - indicate that they are involved in packaging. (Respondents could choose more than one area in which they participate.) Test day participation was a distant second place (35%), followed by documentation (25%) and end user support (25%).

I've personally seen way more people than that participate in end user support, on the IRC channel, mailing list, and ask.openstack.org. Possibly these people don't think of what they're doing as support, but it is still a very important way that we grow our user community.

The rest of the survey delves into deeper details about the contribution process.

When asked about the ease of contribution, 80% said that it was ok, with just 10% saying that the contribution process was too hard.

When asked about difficulties encountered in the contribution process:

Answers were split fairly evenly between "Confusing or outdated documentation", "Complexity of process", and "Lack of documentation". Encouragingly, "lack of community support" placed far behind these other responses.

It sounds like we have a need to update the documentation, and greatly clarify it. Having a first-time contributor's view of the documentation, and what unwarranted assumptions it makes, would be very beneficial in this area.

When asked how these difficulties were overcome, 60% responded that they got help on IRC, 15% indicated that they just kept at it until they figured it out, and another 15% indicated that they gave up and focused their attention elsewhere.

Asked for general comments about the contribution process, almost all comments focused on the documentation - it's confusing, outdated, and lacks useful examples. A number of people complained about the way that the process seems to change almost every time they come to contribute. Remember: Most of our contributors only touch RDO once or twice a release, and they feel that they have to learn the process from scratch every time. Finally, two people complained that the review process for changes is too slow, perhaps due to the small number of reviewers.

I'll be sharing the full responses on the RDO-List mailing list later today.

Thank you to everyone who participated in the survey. Your insight is greatly appreciated, and we hope to use it to improve the contributor process in the future.


Recent blog posts - May 22nd

Here's some of the recent blog posts from our community:

Some lessons an IT department can learn from OpenStack by jpena

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

Read more at http://rdoproject.org/blog/2017/05/some-lessons-an-it-department-can-learn-from-openstack/

When is it not cool to add a new OpenStack configuration option? by assafmuller

Adding new configuration options has a cost, and makes already complex projects (Hi Neutron!) even more so. Doubly so when we speak of architecture choices, since it means that we have to test and document all permutations. Of course, we don’t always do that, nor do we test all interactions between deployment options and other advanced features, leaving users with fun surprises. With some projects seeing an increased rotation of contributors, we’re seeing wastelands of unmaintained code left behind, increasing the importance of being strategic about introducing new complexity.

Read more at https://assafmuller.com/2017/05/19/when-is-not-cool-to-add-a-new-openstack-configuration-option/

Running (and recording) fully automated GUI tests in the cloud by Matthieu Huin

The problem Software Factory is a full-stack software development platform: it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO's CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory's main contributors. :)

Read more at http://rdoproject.org/blog/2017/05/running-and-recording-fully-automated-GUI-tests-in-the-cloud/


Some lessons an IT department can learn from OpenStack

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

These last 3 years have taught me a lot about how to efficiently run an IT infrastructure at scale, and what's better, they proved that many of the concepts I had been previously preaching to customers (automate, automate, automate!) are not only viable, but also required to handle ever-growing requirements with a limited team and budget.

So, would you like to know what I have learnt so far in this 3-year journey?

Processes

The OpenStack community relies on several processes to develop a cloud operating system. Most of these processes have evolved over time, and together they allow a very large contributor base to collaborate effectively. Also, we need to manage a complex infrastructure to support these processes.

  • Infrastructure as code: there are several important servers in the OpenStack infrastructure, providing service to thousands of users every day: the Git repositories, the Gerrit code review infrastructure, the CI bits, etc. The deployment and configuration of all those pieces is automated, as you would expect, and the Puppet modules and Ansible playbooks used to do so are available in their Git repositories. There can be no snowflakes, no "this server requires a very specific configuration, so I have to log on and do it manually" excuses. If it cannot be automated, it is not efficient enough. Also, storing our infrastructure definitions as code allows us to take changes through peer review and CI before applying them in production. More about that later.

  • Development practices: each OpenStack project follows the same structure:

    • There is a Project Team Leader (PTL), elected from the project contributors every six months. A PTL acts as a project coordinator, rather than a manager in the traditional sense, and is usually expected to rotate every few cycles.
    • There are several core reviewers, people with enough knowledge on the project to judge if a change is correct or not.
    • And then we have multiple project contributors, who can create patches and peer-review other people's patches.

    Whenever a patch is created, it is sent to review using a code review system, and then:

    • It is checked by multiple CI jobs that ensure the patch does not break any existing functionality.
    • It is reviewed by other contributors.

    Peer review is done by core reviewers and other project contributors. Each of them has the right to cast different votes:

    • A +2 vote can only be set by a core reviewer, and means that the code looks ok to that core reviewer, and he/she thinks it can be merged as-is.
    • Any project contributor can set a +1 or -1 vote. +1 means "code looks ok to me" while -1 means "this code needs some adjustments". A vote by itself does not provide a lot of feedback, so it is expanded by some comments on what should be changed, if needed.
    • A -2 vote can only be set by a core reviewer, and means that the code cannot be merged until that vote is lifted. -2 votes can be caused by code that goes against some of the project design goals, or just because the project is currently in feature freeze and the patch has to wait for a while.

    When the patch passes all CI jobs and has received enough +2 votes from the core reviewers (usually two), it goes through another round of CI jobs and is finally merged into the repository. (A short sketch of inspecting these votes through Gerrit's REST API follows at the end of this list.)

    This may seem like a complex process, but it has several advantages:

    • It ensures a certain level of quality on the master branch, since CI jobs must pass before a patch can be merged.
    • It encourages peer reviews, so code should always be checked by more than one person before merging.
    • It engages core reviewers, because they need to have enough knowledge of the project codebase to decide if a patch deserves a +2 vote.
  • Use the cloud: it would not make much sense to develop a cloud operating system if we could not use the cloud ourselves, would it? As expected, all the OpenStack infrastructure is hosted in OpenStack-based clouds donated by different companies. Since the infrastructure deployment and configuration is automated, it is quite easy to manage in a cloud environment. And as we will see later, it is also a perfect match for our continuous integration processes.

  • Automated continuous integration: this is a key part of the development process in the OpenStack community. Each month, 5000 to 8000 commits are reviewed in all the OpenStack projects. This requires a large degree of automation in testing, otherwise it would not be possible to review all those patches manually.

    • Each project defines a number of CI jobs, covering unit and integration tests. These jobs are defined as code using Jenkins Job Builder, and reviewed just like any other code contribution.
    • For each commit:
      • Our CI automation tooling will spawn short-lived VMs in one of the OpenStack-based clouds, and add them to the test pool
      • The CI jobs will be executed on those short-lived VMs, and the test results will be fed back as part of the code review
      • The VM will be deleted at the end of the CI job execution

    This process, together with the requirement for CI jobs to pass before merging any code, minimizes the amount of regressions in our codebase.

  • Use (and contribute to) Open Source: one of the "Four Opens" that drive the OpenStack community is Open Source. As such, all of the development and infrastructure processes happen using Open Source software. And not just that, the OpenStack community has created several libraries and applications with great potential for reuse outside the OpenStack use case. Applications like Zuul and nodepool, general-purpose libraries like pbr, or the contributions to the SQLAlchemy library are good examples of this.
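
To make the review workflow above a little more concrete, here is a small sketch (not part of the original article) of how the Code-Review votes on open changes can be read through Gerrit's REST API with Python; the server URL and query below are only examples:

```python
import json

import requests

GERRIT_URL = "https://review.rdoproject.org"  # example Gerrit instance


def open_changes_with_votes(limit=5):
    # o=DETAILED_LABELS asks Gerrit to include every individual vote per label
    resp = requests.get(
        GERRIT_URL + "/changes/",
        params={"q": "status:open", "n": limit, "o": "DETAILED_LABELS"},
    )
    resp.raise_for_status()
    # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI; strip it
    changes = json.loads(resp.text.split("\n", 1)[1])
    for change in changes:
        votes = change.get("labels", {}).get("Code-Review", {}).get("all", [])
        values = ", ".join(str(v.get("value", 0)) for v in votes) or "no votes yet"
        print("%s: %s" % (change["subject"], values))


if __name__ == "__main__":
    open_changes_with_votes()
```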

Tools

So, which tools do we use to make all of this happen? As stated above, the OpenStack community relies on several open source tools to do its work:

  • Infrastructure as code
    • Git to store the infrastructure definitions
    • Puppet and Ansible as configuration management and orchestration tools
  • Development
    • Git as a code repository
    • Gerrit as a code review and repository management tool
    • Etherpad as a collaborative editing tool
  • Continuous integration
    • Zuul as an orchestrator of the gate checks
    • Nodepool to automate the creation and deletion of short-lived VMs for CI jobs across multiple clouds
    • Jenkins to execute CI jobs (actually, it has now been replaced by Zuul itself)
    • Jenkins Job Builder as a tool to define CI jobs as code

Replicating this outside OpenStack

It is perfectly possible to replicate this model outside the OpenStack community. We use it in RDO, too! Although we are very closely related to OpenStack, we have our own infrastructure and tools, following a very similar process for development and infrastructure maintenance.

We use an integrated solution, Software Factory, which includes most of the common tools described earlier (and some other interesting ones). This allows us to simplify our toolset and have:

  • Infrastructure as code
  • Development and continuous integration
    • https://review.rdoproject.org, our Software Factory instance, to integrate our development and CI workflow
    • Our own RDO Cloud as an infrastructure provider

You can do it, too

Implementing this way of working in an established organization is probably not a straightforward task. It requires your IT department and application owners to become as cloud-conscious as possible, reduce the number of micro-managed systems to a minimum, and establish a whole new way of managing your development… But the results speak for themselves, and the OpenStack community (and RDO!) is proof that it works.


Running (and recording) fully automated GUI tests in the cloud

The problem

Software Factory is a full-stack software development platform: it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO's CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory's main contributors. :)

Software Factory has many cool features that I won't list here, but among them is a unified web interface that helps with navigating through its components. Obviously we want this interface thoroughly tested, ideally within Software Factory's own CI system, which runs on test nodes provisioned on demand on an OpenStack cloud (if you have read Tristan's previous article, you might already know that Software Factory's nodes are managed and built by Nodepool).

When it comes to testing web GUIs, Selenium is quite ubiquitous because of its many features, among them:

  • it works with most major browsers, on every operating system
  • it has bindings for every major language, making it easy to write GUI tests in your language of choice.¹

¹ Our language of choice, today, will be Python.

Due to the very nature of GUI tests, however, it is not easy to fully automate Selenium tests into a CI pipeline:

  • usually these tests are run on dedicated physical machines, one for each operating system to test, making them choke points and tying up resources that could be used elsewhere.
  • a failing test usually means that there is a problem of a graphical nature; if the developer or the QA engineer does not see what happens, it is difficult to qualify and solve the problem. Therefore human eyes and validation are still needed to an extent.

Legal issues preventing running Mac OS-based virtual machines on non-Apple hardware aside, it is possible to run Selenium tests on virtual machines without the need for a physical display (aka "headless") and also capture what is going on during these tests for later human analysis.

This article will explain how to achieve this on Linux-based distributions, more specifically on CentOS.

Running headless (or "Look Ma! No screen!")

The secret here is to install Xvfb (X virtual framebuffer) to emulate a display in memory on our headless machine …

My fellow Software Factory developers and I have configured Nodepool to provide us with customized CentOS-based images on which to run any kind of job. This makes sure that our test nodes are always "fresh"; in other words, our test environments are well defined, reproducible at will, and not tainted by repeated tests.

The customization occurs through post-install scripts: if you look at our configuration repository, you will find the image we use for our CI tests is sfstack-centos-7 and its customization script is sfstack_centos_setup.sh.

We added the following commands to this script in order to install the dependencies we need:

```bash
sudo yum install -y firefox Xvfb libXfont Xorg jre
sudo mkdir /usr/lib/selenium /var/log/selenium /var/log/Xvfb
sudo wget -O /usr/lib/selenium/selenium-server.jar http://selenium-release.storage.googleapis.com/3.4/selenium-server-standalone-3.4.0.jar
sudo pip install selenium
```

The dependencies are:

  • Firefox, the browser on which we will run the GUI tests
  • libXfont and Xorg to manage displays
  • Xvfb
  • JRE to run the Selenium server
  • the Python Selenium bindings

Then, when the test environment is set up, we start the Selenium server and Xvfb in the background:

```bash
/usr/bin/java -jar /usr/lib/selenium/selenium-server.jar -host 127.0.0.1 >/var/log/selenium/selenium.log 2>/var/log/selenium/error.log &
Xvfb :99 -ac -screen 0 1920x1080x24 >/var/log/Xvfb/Xvfb.log 2>/var/log/Xvfb/error.log &
```

Finally, set the display environment variable to :99 (the Xvfb display) and run your tests:

```bash
export DISPLAY=:99
./path/to/seleniumtests
```

The tests will run as if the VM were plugged into a display.
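
As a side note (not from the original post), tests can also drive the browser through the Selenium server started above by using the Remote driver from the Python bindings; here is a minimal sketch, assuming the server listens on its default port 4444 and using a placeholder URL:

```python
from selenium import webdriver

# Connect to the locally running Selenium server instead of spawning a browser
# driver directly; DISPLAY=:99 must already be exported so the browser it
# launches renders into the Xvfb framebuffer.
driver = webdriver.Remote(
    command_executor="http://127.0.0.1:4444/wd/hub",
    desired_capabilities=webdriver.DesiredCapabilities.FIREFOX,
)
driver.get("https://softwarefactory-project.io")  # example URL
print(driver.title)
driver.quit()
```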

Taking screenshots

With this headless setup, we can now run GUI tests on virtual machines within our automated CI; but we need a way to visualize what happens in the GUI if a test fails.

It turns out that the Selenium bindings have a screenshot feature that we can use for that. Here is how to define a decorator in Python that will save a screenshot if a test fails.

```python
import functools
import os
import unittest
from selenium import webdriver

[...]

def snapshot_if_failure(func):
    """Save a screenshot of the browser whenever the decorated test fails."""
    @functools.wraps(func)
    def f(self, *args, **kwargs):
        try:
            func(self, *args, **kwargs)
        except Exception as e:
            # dump the screenshot as /tmp/gui/<test_name>.png, then re-raise
            path = '/tmp/gui/'
            if not os.path.isdir(path):
                os.makedirs(path)
            screenshot = os.path.join(path, '%s.png' % func.__name__)
            self.driver.save_screenshot(screenshot)
            raise e
    return f


class MyGUITests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.maximize_window()
        self.driver.implicitly_wait(20)

    @snapshot_if_failure
    def test_login_page(self):
        ...
```

If test_login_page fails, a screenshot of the browser at the time of the exception will be saved under /tmp/gui/test_login_page.png.

Video recording

We can go even further and record a video of the whole testing session, as it turns out that ffmpeg can capture X sessions with the "x11grab" option. This is interesting beyond simple test debugging, as the video can be used to illustrate the use cases that you are testing, for demos or fancy video documentation.

In order to have ffmpeg on your test node, you can either add compilation steps to the node's post-install script or go the easy way and use an external repository:

```bash
# install ffmpeg
sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
sudo yum update
sudo yum install -y ffmpeg
```

To record the Xvfb buffer, you'd simply run:

```bash
export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1$DISPLAY -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi
```

The catch is that ffmpeg expects the user to press q to stop the recording and save the video (killing the process will corrupt the video). We can use tmux (https://tmux.github.io/) to save the day; run your GUI tests like so:

```bash
export DISPLAY=:99
tmux new-session -d -s guiTestRecording 'export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1'$DISPLAY' -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi && sleep 5'
./path/to/seleniumtests
tmux send-keys -t guiTestRecording q
```

Accessing the artifacts

Nodepool destroys VMs when their job is done in order to free resources (that is, after all, the spirit of the cloud). That means that our pictures and videos will be lost unless they're uploaded to external storage.

Fortunately, Software Factory handles this: predefined publishers can be appended to our job definitions, one of which allows us to push any artifact to a Swift object store. We can then retrieve our videos and screenshots easily.
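
If you are not relying on Software Factory's publishers, a similar upload can be scripted directly against Swift; below is a minimal sketch (not from the original post) using python-swiftclient, where the auth URL, credentials and container name are placeholders:

```python
import os

from swiftclient import client as swift_client

# placeholder credentials; in a real job these would come from the environment
conn = swift_client.Connection(
    authurl="https://keystone.example.com:5000/v2.0",
    user="ci-user",
    key="secret",
    tenant_name="ci-tenant",
    auth_version="2.0",
)

container = "gui-test-artifacts"
conn.put_container(container)  # no-op if the container already exists

artifacts_dir = "/tmp/gui"
for name in os.listdir(artifacts_dir):
    with open(os.path.join(artifacts_dir, name), "rb") as artifact:
        # the object name simply mirrors the local file name
        conn.put_object(container, name, contents=artifact)
```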

Conclusion

With little effort, you can now run your Selenium tests on virtual hardware to further automate your CI pipeline, while still giving humans a way to review what happened.

Further reading
