RDO Community News

See also blogs.rdoproject.org

Recent blog posts: June 12

Experiences with Cinder in Production by Arne Wiebalck

The CERN OpenStack cloud service has been providing block storage via Cinder since the Havana days in early 2014. Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Read more at http://openstack-in-production.blogspot.com/2017/06/experiences-with-cinder-in-production.html

Using Ansible Validations With Red Hat OpenStack Platform – Part 1 by August Simonelli, Technical Marketing Manager, Cloud

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

Read more at http://redhatstackblog.redhat.com/2017/06/08/using-ansible-validations-with-red-hat-openstack-platform-part-1/

Upstream First…or Second? by Adam Young

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see some things I would like to do differently in the future.

Read more at http://adam.younglogic.com/2017/06/upstream-first-or-second/

Accessing a Mistral Environment in a CLI workflow by John

Recently, with some help from the Mistral devs in freenode #openstack-mistral, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

Read more at http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html

OpenStack papers community on Zenodo by Tim Bell

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself and how it is used by applications. There are many papers being published in proceedings of conferences and PhD theses, but finding out about these can be difficult. While these papers may not necessarily lead to open source code contribution, the results of this research are a valuable resource for the community.

Read more at http://openstack-in-production.blogspot.com/2017/06/openstack-papers-community-on-zenodo.html

Event report: Red Hat Summit, OpenStack Summit by rbowen

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Read more at http://drbacchus.com/event-report-red-hat-summit-openstack-summit/

View article »

RDO Contributor Survey

We recently ran a contributor survey in the RDO community, and while the participation was fairly small (21 respondents), there's a lot of important insight we can glean from it.

First, and unsurprisingly:

Of the 20 people who answered the "corporate affiliation" question, 18 were Red Hat employees. While we are already aware that this is a place where we need to improve, it's good to know just how much room for improvement there is. What's useful here will be figuring out why people outside of Red Hat are not participating more. This is touched on in later questions.

Next, we have the frequency of contributions:

Here we see that while 14% of our contributors are pretty much working on RDO all the time, the majority of contributors only touch it a few times per release - probably updating a single package, or addressing a single bug, for that particular cycle.

This, too, is mostly in line with what we expected. With most of the RDO pipeline being automated, there's little that most participants would need to do beyond a handful of updates each release. Meanwhile, a core team works on the infrastructure and the tools every week to keep it all moving.

We asked contributors where they participate:

Most of the contributors - 75% - indicate that they are involved in packaging. (Respondents could choose more than one area in which they participate.) Test day participation was a distant second place (35%), followed by documentation (25%) and end user support (25%).

I've personally seen way more people than that participate in end user support, on the IRC channel, mailing list, and ask.openstack.org. Possibly these people don't think of what they're doing as support, but it is still a very important way that we grow our user community.

The rest of the survey delves into deeper details about the contribution process.

When asked about the ease of contribution, 80% said that it was ok, with just 10% saying that the contribution process was too hard.

When asked about difficulties encountered in the contribution process:

Answers were split fairly evenly between "Confusing or outdated documentation", "Complexity of process", and "Lack of documentation". Encouragingly, "lack of community support" placed far behind these other responses.

It sounds like we have a need to update the documentation, and greatly clarify it. Having a first-time contributor's view of the documentation, and what unwarranted assumptions it makes, would be very beneficial in this area.

When asked how these difficulties were overcome, 60% responded that they got help on IRC, 15% indicated that they just kept at it until they figured it out, and another 15% indicated that they gave up and focused their attentions elsewhere.

Asked for general comments about the contribution process, almost all comments focused on the documentation - it's confusing, outdated, and lacks useful examples. A number of people complained about the way that the process seems to change almost every time they come to contribute. Remember: Most of our contributors only touch RDO once or twice a release, and they feel that they have to learn the process from scratch every time. Finally, two people complained that the review process for changes is too slow, perhaps due to the small number of reviewers.

I'll be sharing the full responses on the RDO-List mailing list later today.

Thank you to everyone who participated in the survey. Your insight is greatly appreciated, and we hope to use it to improve the contributor process in the future.

View article »

Recent blog posts - May 22nd

Here are some of the recent blog posts from our community:

Some lessons an IT department can learn from OpenStack by jpena

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

Read more at http://rdoproject.org/blog/2017/05/some-lessons-an-it-department-can-learn-from-openstack/

When is it not cool to add a new OpenStack configuration option? by assafmuller

Adding new configuration options has a cost, and makes already complex projects (Hi Neutron!) even more so. Double so when we speak of architecture choices, it means that we have to test and document all permutations. Of course, we don’t always do that, nor do we test all interactions between deployment options and other advanced features, leaving users with fun surprises. With some projects seeing an increased rotation of contributors, we’re seeing wastelands of unmaintained code left behind, increasing the importance of being strategic about introducing new complexity.

Read more at https://assafmuller.com/2017/05/19/when-is-not-cool-to-add-a-new-openstack-configuration-option/

Running (and recording) fully automated GUI tests in the cloud by Matthieu Huin

The problem Software Factory is a full-stack software development platform: it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO's CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory's main contributors. :)

Read more at http://rdoproject.org/blog/2017/05/running-and-recording-fully-automated-GUI-tests-in-the-cloud/

View article »

Some lessons an IT department can learn from OpenStack

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

These last 3 years have taught me a lot about how to efficiently run an IT infrastructure at scale, and what's better, they proved that many of the concepts I had been previously preaching to customers (automate, automate, automate!) are not only viable, but also required to handle ever-growing requirements with a limited team and budget.

So, would you like to know what I have learnt so far in this 3-year journey?

Processes

The OpenStack community relies on several processes to develop a cloud operating system. Most of these processes have evolved over time, and together they allow a very large contributor base to collaborate effectively. Also, we need to manage a complex infrastructure to support these processes.

  • Infrastructure as code: there are several important servers in the OpenStack infrastructure, providing service to thousands of users every day: the Git repositories, the Gerrit code review infrastructure, the CI bits, etc. The deployment and configuration of all those pieces is automated, as you would expect, and the Puppet modules and Ansible playbooks used to do so are available at their Git repository. There can be no snowflakes, no "this server requires a very specific configuration, so I have to log on and do it manually" excuses. If it cannot be automated, it is not efficient enough. Also, storing our infrastructure definitions as code allows us to take changes through peer-review and CI before applying in production. More about that later.

  • Development practices: each OpenStack project follows the same structure:

    • There is a Project Team Leader (PTL), elected from the project contributors every six months. A PTL acts as a project coordinator, rather than a manager in the traditional sense, and is usually expected to rotate every few cycles.
    • There are several core reviewers, people with enough knowledge of the project to judge whether a change is correct or not.
    • And then we have multiple project contributors, who can create patches and peer-review other people's patches.

    Whenever a patch is created, it is sent to review using a code review system, and then:

    • It is checked by multiple CI jobs that ensure the patch does not break any existing functionality.
    • It is reviewed by other contributors.

    Peer review is done by core reviewers and other project contributors. Each of them has the right to cast different votes:

    • A +2 vote can only be set by a core reviewer, and means that the code looks ok for that core reviewer, and he/she thinks it can be merged as-is.
    • Any project contributor can set a +1 or -1 vote. +1 means "code looks ok to me" while -1 means "this code needs some adjustments". A vote by itself does not provide a lot of feedback, so it is expanded by some comments on what should be changed, if needed.
    • A -2 vote can only be set by a core reviewer, and means that the code cannot be merged until that vote is lifted. -2 votes can be caused by code that goes against some of the project design goals, or just because the project is currently in feature freeze and the patch has to wait for a while.

    When the patch passes all CI jobs, and has received enough +2 votes from the core reviewers (usually two), it goes through another round of CI jobs and is finally merged into the repository.

    This may seem like a complex process (a contributor-side sketch of the workflow follows this list), but it has several advantages:

    • It ensures a certain level of quality on the master branch, since we have to ensure that CI jobs are passing.
    • It encourages peer reviews, so code should always be checked by more than one person before merging.
    • It engages core reviewers, because they need to have enough knowledge of the project codebase to decide if a patch deserves a +2 vote.
  • Use the cloud: it would not make much sense to develop a cloud operating system if we could not use the cloud ourselves, would it? As expected, all the OpenStack infrastructure is hosted in OpenStack-based clouds donated by different companies. Since the infrastructure deployment and configuration is automated, it is quite easy to manage in a cloud environment. And as we will see later, it is also a perfect match for our continuous integration processes.

  • Automated continuous integration: this is a key part of the development process in the OpenStack community. Each month, 5000 to 8000 commits are reviewed in all the OpenStack projects. This requires a large degree of automation in testing, otherwise it would not be possible to review all those patches manually.

    • Each project defines a number of CI jobs, covering unit and integration tests. These jobs are defined as code using Jenkins Job Builder, and reviewed just like any other code contribution.
    • For each commit:
      • Our CI automation tooling will spawn short-lived VMs in one of the OpenStack-based clouds, and add them to the test pool
      • The CI jobs will be executed on those short-lived VMs, and the test results will be fed back as part of the code review
      • The VM will be deleted at the end of the CI job execution

    This process, together with the requirement for CI jobs to pass before merging any code, minimizes the amount of regressions in our codebase.

  • Use (and contribute to) Open Source: one of the "Four Opens" that drive the OpenStack community is Open Source. As such, all of the development and infrastructure processes happen using Open Source software. And not just that, the OpenStack community has created several libraries and applications with great potential for reuse outside the OpenStack use case. Applications like Zuul and nodepool, general-purpose libraries like pbr, or the contributions to the SQLAlchemy library are good examples of this.
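
To make the review workflow above more concrete, here is a rough sketch of the contributor side of it, using git-review, the Gerrit command-line helper commonly used by OpenStack projects (the repository URL, branch name, and commit message are illustrative):

```bash
# Clone a project and set up the Gerrit remote (git-review reads .gitreview)
git clone https://git.openstack.org/openstack/example-project
cd example-project
git review -s

# Work on a change in a local topic branch
git checkout -b fix-config-typo
# ... edit files ...
git commit -a -m "Fix typo in configuration docs"

# Send the change to Gerrit; CI jobs and peer review start from here
git review

# Address review comments by amending the same commit and resubmitting
git commit -a --amend
git review
```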

Tools

So, which tools do we use to make all of this happen? As stated above, the OpenStack community relies on several open source tools to do its work:

  • Infrastructure as code
    • Git to store the infrastructure definitions
    • Puppet and Ansible as configuration management and orchestration tools
  • Development
    • Git as a code repository
    • Gerrit as a code review and repository management tool
    • Etherpad as a collaborative editing tool
  • Continuous integration
    • Zuul as an orchestrator of the gate checks
    • Nodepool to automate the creation and deletion of short-lived VMs for CI jobs across multiple clouds
    • Jenkins to execute CI jobs (actually, it has now been replaced by Zuul itself)
    • Jenkins Job Builder as a tool to define CI jobs as code

Replicating this outside OpenStack

It is perfectly possible to replicate this model outside the OpenStack community. We use it in RDO, too! Although we are very closely related to OpenStack, we have our own infrastructure and tools, following a very similar process for development and infrastructure maintenance.

We use an integrated solution, SoftwareFactory, which includes most of the common tools described earlier (and then some other interesting ones). This allows us to simplify our toolset and have:

  • Infrastructure as code
  • Development and continuous integration
    • https://review.rdoproject.org, our SoftwareFactory instance, to integrate our development and CI workflow
    • Our own RDO Cloud as an infrastructure provider

You can do it, too

Implementing this way of working in an established organization is probably not a straightforward task. It requires your IT department and application owners to become as cloud-conscious as possible, reduce the amount of micro-managed systems to a minimum, and establish a whole new way of managing your development… But the results speak for themselves, and the OpenStack community (and RDO!) is proof that it works.

View article »

Running (and recording) fully automated GUI tests in the cloud

The problem

Software Factory is a full-stack software development platform: it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO's CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory's main contributors. :)

Software Factory has many cool features that I won't list here, but among these is a unified web interface that helps you navigate its components. Obviously we want this interface thoroughly tested, ideally within Software Factory's own CI system, which runs on test nodes provisioned on demand on an OpenStack cloud (if you have read Tristan's previous article, you might already know that Software Factory's nodes are managed and built by Nodepool).

When it comes to testing web GUIs, Selenium is quite ubiquitous because of its many features, among which:

  • it works with most major browsers, on every operating system
  • it has bindings for every major language, making it easy to write GUI tests in your language of choice.¹

¹ Our language of choice, today, will be python.

Due to the very nature of GUI tests, however, it is not easy to fully automate Selenium tests into a CI pipeline:

  • usually these tests are run on dedicated physical machines for each operating system to test, making them choke points and sacrificing resources that could be used somewhere else.
  • a failing test usually means that there is a problem of a graphical nature; if the developer or the QA engineer does not see what happens it is difficult to qualify and solve the problem. Therefore human eyes and validation are still needed to an extent.

Legal issues preventing running Mac OS-based virtual machines on non-Apple hardware aside, it is possible to run Selenium tests on virtual machines without need for a physical display (aka "headless") and also capture what is going on during these tests for later human analysis.

This article will explain how to achieve this on linux-based distributions, more specifically on CentOS.

Running headless (or "Look Ma! No screen!")

The secret here is to install Xvfb (X virtual framebuffer) to emulate a display in memory on our headless machine …

My fellow Software Factory devs and I have configured Nodepool to provide us with customized images based on CentOS on which to run any kind of jobs. This makes sure that our test nodes are always "fresh", in other words that our test environments are well defined, reproducible at will and not tainted by repeated tests.

The customization occurs through post-install scripts: if you look at our configuration repository, you will find the image we use for our CI tests is sfstack-centos-7 and its customization script is sfstack_centos_setup.sh.

We added the following commands to this script in order to install the dependencies we need:

```bash
sudo yum install -y firefox Xvfb libXfont Xorg jre
sudo mkdir /usr/lib/selenium /var/log/selenium /var/log/Xvfb
sudo wget -O /usr/lib/selenium/selenium-server.jar http://selenium-release.storage.googleapis.com/3.4/selenium-server-standalone-3.4.0.jar
sudo pip install selenium
```

The dependencies are:

  • Firefox, the browser on which we will run the GUI tests
  • libXfont and Xorg to manage displays
  • Xvfb
  • JRE to run the selenium server
  • the python selenium bindings

Then when the test environment is set up, we start the selenium server and Xvfb in the background:

```bash
/usr/bin/java -jar /usr/lib/selenium/selenium-server.jar -host 127.0.0.1 >/var/log/selenium/selenium.log 2>/var/log/selenium/error.log &
Xvfb :99 -ac -screen 0 1920x1080x24 >/var/log/Xvfb/Xvfb.log 2>/var/log/Xvfb/error.log &
```

Finally, set the display environment variable to :99 (the Xvfb display) and run your tests:

```bash
export DISPLAY=:99
./path/to/seleniumtests
```

The tests will run as if the VM were plugged into a display.
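
If you want to double-check that the virtual display is actually up before launching the tests, something like the following can help (assuming the xorg-x11-utils package, which provides xdpyinfo on CentOS, is installed on the image):

```bash
# Query the Xvfb display; a non-zero exit code means the display is not available
sudo yum install -y xorg-x11-utils
xdpyinfo -display :99 >/dev/null && echo "Xvfb is up on :99"
```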

Taking screenshots

With this headless setup, we can now run GUI tests on virtual machines within our automated CI; but we need a way to visualize what happens in the GUI if a test fails.

It turns out that the selenium bindings have a screenshot feature that we can use for that. Here is how to define a decorator in python that will save a screenshot if a test fails.

```python
import functools
import os
import unittest
from selenium import webdriver

[...]

def snapshot_if_failure(func):
    @functools.wraps(func)
    def f(self, *args, **kwargs):
        try:
            func(self, *args, **kwargs)
        except Exception as e:
            path = '/tmp/gui/'
            if not os.path.isdir(path):
                os.makedirs(path)
            screenshot = os.path.join(path, '%s.png' % func.__name__)
            self.driver.save_screenshot(screenshot)
            raise e
    return f


class MyGUITests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.maximize_window()
        self.driver.implicitly_wait(20)

    @snapshot_if_failure
    def test_login_page(self):
        ...
```

If test_login_page fails, a screenshot of the browser at the time of the exception will be saved under /tmp/gui/test_login_page.png.

Video recording

We can go even further and record a video of the whole testing session, as it turns out that ffmpeg can capture X sessions with the "x11grab" option. This is interesting beyond simply test debugging, as the video can be used to illustrate the use cases that you are testing, for demos or fancy video documentation.

In order to have ffmpeg on your test node, you can either add compilation steps to the node's post-install script or go the easy way and use an external repository:

```bash
# install ffmpeg
sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
sudo yum update
sudo yum install -y ffmpeg
```

To record the Xvfb buffer, you'd simply run:

```bash
export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1$DISPLAY -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi
```

The catch is that ffmpeg expects the user to press q to stop the recording and save the video (killing the process will corrupt the video). We can use tmux (https://tmux.github.io/) to save the day; run your GUI tests like so:

```bash
export DISPLAY=:99
tmux new-session -d -s guiTestRecording 'export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1'$DISPLAY' -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi && sleep 5'
./path/to/seleniumtests
tmux send-keys -t guiTestRecording q
```

Accessing the artifacts

Nodepool destroys VMs when their job is done in order to free resources (that is, after all, the spirit of the cloud). That means that our pictures and videos will be lost unless they're uploaded to external storage.

Fortunately Software Factory handles this: predefined publishers can be appended to our job definitions, one of which allows pushing any artifact to a Swift object store. We can then retrieve our videos and screenshots easily.
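
As an illustration of the retrieval side, once the artifacts are in Swift they can be fetched back with the standard python-swiftclient CLI, assuming the usual OS_* credentials are exported; the container and object names below are made up for the example:

```bash
# List the artifacts uploaded by the job, then download the test video
swift list gui-test-artifacts
swift download gui-test-artifacts tests.avi
```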

Conclusion

With little effort, you can now run your Selenium tests on virtual hardware to further automate your CI pipeline, while still allowing for human review when something goes wrong.


View article »

Recent blog posts

Here's what other RDO users have been blogging about in the past few weeks:

Dear RDO Enthusiast by rainsdance

Remember that post from over a month ago?

Read more at http://groningenrain.nl/dear-rdo-enthusiast/

Red Hat Virtualization 4.1 is LIVE! by CaptainKVM

Today marks another milestone in the evolution of our flagship virtualization platform, Red Hat Virtualization (RHV), as we announce the release of version 4.1. There are well over 165 new features, and while I don’t have the space to cover all of the new features, I would like to highlight some of them, especially in the area of integration. But first I’d like to put that integration into perspective.

Read more at http://rhelblog.redhat.com/2017/04/19/red-hat-virtualization-4-1-is-live/

More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

This Spring’s 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week’s schedule of events. And once again Red Hat will be very busy during the four-day event, including delivering more than 60 sessions, from technology overviews to deep dives around the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much, much more.

Read more at http://redhatstackblog.redhat.com/2017/04/18/60-red-hat-sessions-openstack-summit-boston/

Using the OPTIONS Verb for RBAC by Adam Young

Let's say you have a RESTful Web Service. For any given URL, you might support one or more of the HTTP verbs: GET, PUT, POST, DELETE and so on. A user might wonder what they mean, and which you actually support. One way of reporting that is by using the OPTIONS verb. While this is a relatively unusual verb, using it to describe a resource is a fairly well known mechanism. I want to take it one step further.

Read more at http://adam.younglogic.com/2017/04/options-verb-rbac/

OpenStack Days Poland 2017 by rainsdance

I am super excited to introduce guest blogger, Ana Krivokapić, who ran over to OpenStack Days Poland to represent Red Hat as the TripleO RDO OpenStack guru in residence. Thanks so much to Ana for attending, answering any technical TripleO RDO OpenStack questions sent her way, and reporting back on her experience.

Read more at http://groningenrain.nl/openstack-days-poland-2017/

3 ways you find the right type of contributor and where to find them by rbowen

Another one of Stormy’s questions caught my eye:

Read more at http://drbacchus.com/3-ways-you-find-the-right-type-of-contributor-and-where-to-find-them/

View article »

Introducing Rain Leander

Dear RDO Enthusiast,

We have some important news this week about what’s shifting in the RDO community.

As you may know, Rich Bowen has been serving in the role of Community Liaison for the last 4 years. In that capacity, he’s done a variety of things for the community, including event coordination, social media, podcasts and videos, managing the website, and so on.

Starting next week, this role is expanding to include Rain Leander as TripleO Community Liaison. Rain has been working with the RDO community, and, more generally, with Red Hat’s upstream OpenStack development efforts, for the past 18 months. She’s helped out at a number of events, including two OpenStack Summits and numerous OpenStack Days and Meetups. And she’s been a passionate advocate of TripleO in the community at large.

You may have seen her at some of these events and you’ve probably seen her on IRC as leanderthal. Please give her all the support that you’ve given Rich as she moves into this role.

If you have any questions about how this is going to work, what Rain’s priorities will be in the coming months, or concerns about getting stuff done in the next few months, please don’t hesitate to contact either one of us via email (rain@redhat.com and rbowen@redhat.com), on IRC (leanderthal and rbowen) or via Twitter (@rainleander and @rbowen).

Thanks,

Rich Bowen and Rain Leander

View article »

Recent blog posts

I haven't done an update in a few weeks. Here are some of the blog posts from our community in the last few weeks.

Red Hat joins the DPDK Project by Marcos Garcia - Principal Technical Marketing Manager

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.

Read more at http://redhatstackblog.redhat.com/2017/04/06/red-hat-joins-the-dpdk-project/

What’s new in OpenStack Ocata by rbowen

OpenStack Ocata has now been out for a little over a month – https://releases.openstack.org/ – and we’re about to see the first milestone of the Pike release. Past cycles show that now’s about the time when people start looking at the new release to see if they should consider moving to it. So here’s a quick overview of what’s new in this release.

Read more at http://drbacchus.com/whats-new-in-openstack-ocata/

Steve Hardy: OpenStack TripleO in Ocata, from the OpenStack PTG in Atlanta by Rich Bowen

Steve Hardy talks about TripleO in the Ocata release, at the Openstack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/04/steve-hardy-openstack-tripido-in-ocata-from-the-openstack-ptg-in-atlanta/

Using a standalone Nodepool service to manage cloud instances by tristanC

Nodepool is a service used by the OpenStack CI team to deploy and manage a pool of devstack images on a cloud server for use in OpenStack project testing.

Read more at http://rdoproject.org/blog/2017/03/standalone-nodepool/

Red Hat Summit 2017 – Planning your OpenStack labs by Eric D. Schabell

This year in Boston, MA you can attend the Red Hat Summit 2017, the event to get your updates on open source technologies and meet with all the experts you follow throughout the year.

Read more at http://redhatstackblog.redhat.com/2017/04/04/red-hat-summit-2017-planning-your-openstack-labs/

Stephen Finucane - OpenStack Nova - What's new in Ocata by Rich Bowen

At the OpenStack PTG in February, Stephen Finucane speaks about what's new in Nova in the Ocata release of OpenStack.

Read more at http://rdoproject.org/blog/2017/03/stephen-finucane-openstack-nova-whats-new-in-ocata/

Zane Bitter - OpenStack Heat, OpenStack PTG, Atlanta by Rich Bowen

At the OpenStack PTG last month, Zane Bitter speaks about his work on OpenStack Heat in the Ocata cycle, and what comes next.

Read more at http://rdoproject.org/blog/2017/03/zane-bitter-openstack-heat-openstack-ptg-atlanta/

The journey of a new OpenStack service in RDO by amoralej

When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users to adopt them. This post is not an official policy document nor a detailed description about how to carry out some activities, but provides some high level recommendations to newcomers based on what I have learned and observed in the last year working in RDO.

Read more at http://rdoproject.org/blog/2017/03/the-journey-of-a-service-in-rdo/

InfraRed: Deploying and Testing Openstack just made easier! by bregman

Deploying and testing OpenStack is very easy. If you read the headline and your eyebrows raised, you are at the right place. I believe that most of us, who experienced at least one deployment of OpenStack, will agree that deploying OpenStack can be a quite frustrating experience. It doesn’t matter if you are using it for […]

Read more at http://abregman.com/2017/03/20/infrared-deploying-and-testing-openstack-just-made-easier/

View article »

Steve Hardy: OpenStack TripleO in Ocata, from the OpenStack PTG in Atlanta

Steve Hardy talks about TripleO in the Ocata release, at the Openstack PTG in Atlanta.

Steve: My name is Steve Hardy. I work primarily on the TripleO project, which is an OpenStack deployment project. What makes TripleO interesting is that it uses OpenStack components primarily in order to deploy a production OpenStack cloud. It uses OpenStack Ironic to do bare metal provisioning. It uses Heat orchestration in order to drive the configuration workflow. And we also recently started using Mistral, which is an OpenStack workflow component.

So it's kind of different from some of the other deployment initiatives. And it's a nice feedback loop where we're making use of the OpenStack services in the deployment story, as well as in the deployed cloud.

This last couple of cycles we've been working towards more composability. That basically means allowing operators more flexibility with service placement, and also allowing them to define groups of nodes in a more flexible way so that you could either specify different configurations - perhaps you have multiple types of hardware for different compute configurations for Nova, or perhaps you want to scale services into particular groups of clusters for particular services.

It's basically about giving more choice and flexibility into how they deploy their architecture.

Rich: Upgrades have long been a pain point. I understand there's some improvement in this cycle there as well?

Steve: Yes. Having delivered composable services and composable roles for the Newton OpenStack release, the next big challenge was upgrades: once operators have the flexibility to deploy services on arbitrary nodes in their OpenStack environment, you need some way to upgrade, and you can't necessarily make assumptions about which service is running on which group of nodes. So we've implemented a new feature which is called composable upgrades. That uses some Heat functionality combined with Ansible tasks, in order to allow very flexible dynamic definition of what upgrade actions need to take place when you're upgrading some specific group of nodes within your environment. That's part of the new Ocata release. It's hopefully going to provide a better upgrade experience, for end-to-end upgrades of all the OpenStack services that TripleO supports.

Rich: It was a very short cycle. Did you get done what you wanted to get done, or are things pushed off to Pike now?

Steve: I think there are a few remaining improvements around operator-driven upgrades, which we'll be looking at during the Pike cycle. It certainly has been a bit of a challenge with the short development timeframe during Ocata. But the architecture has landed, and we've got composable upgrade support for all the services in Heat upstream, so I feel like we've done what we set out to do in this cycle, and there will be further improvements around operator-driven upgrade workflow and also containerization during the Pike timeframe.

Rich: This week we're at the PTG. Have you already had your team meetings, or are they still to come?

Steve: The TripleO team meetings start tomorrow, which is Wednesday. The previous two days have mostly been cross-project discussion. Some of which related to collaborations which may impact TripleO features, some of which was very interesting. But the TripleO schedule starts tomorrow - Wednesday and Thursday. We've got a fairly packed agenda, which is going to focus around - primarily the next steps for upgrades, containerization, and ways that we can potentially collaborate more closely with some of the other deployment projects within the OpenStack community.

Rich: Is Kolla something that TripleO uses to deploy, or is that completely unrelated?

Steve: The two projects are collaborating. Kolla provides a number of components, one of which is container definitions for the OpenStack services themselves, and the containerized TripleO architecture actually consumes those. There are some other pieces which are different between the two projects. We use Heat to orchestrate container deployment, and there's an emphasis on Ansible and Kubernetes on the Kolla side, where we're having discussions around future collaboration.

There's a session planned on our agenda for a meeting between the Kolla Kubernetes folks and TripleO folks to figure out if there's long-term collaboration there. But at the moment there's good collaboration around the container definitions and we just orchestrate deploying those containers.

We'll see what happens in the next couple of days of sessions, and getting on with the work we have planned for Pike.

Rich: Thank you very much.

View article »

Using a standalone Nodepool service to manage cloud instances

Nodepool is a service used by the OpenStack CI team to deploy and manage a pool of devstack images on a cloud server for use in OpenStack project testing.

This article presents how to use Nodepool to manage cloud instances.

Requirements

For the purpose of this demonstration, we'll use a CentOS system and the Software Factory distribution to get all the requirements:

sudo yum install -y --nogpgcheck https://softwarefactory-project.io/repos/sf-release-2.5.rpm
sudo yum install -y nodepoold nodepool-builder gearmand
sudo -u nodepool ssh-keygen -N '' -f /var/lib/nodepool/.ssh/id_rsa

Note that this installs nodepool version 0.4.0, which relies on Gearman and still supports snapshot based images. More recent versions of Nodepool require a Zookeeper service and only support diskimage-builder images; the usage is similar, though, and easy to adapt.

Configuration

Configure a cloud provider

Nodepool uses os-client-config to define cloud providers and it needs a clouds.yaml file like this:

cat > /var/lib/nodepool/.config/openstack/clouds.yaml <<EOF
clouds:
  le-cloud:
    auth:
      username: "${OS_USERNAME}"
      password: "${OS_PASSWORD}"
      auth_url: "${OS_AUTH_URL}"
    project_name: "${OS_PROJECT_NAME}"
    regions:
      - "${OS_REGION_NAME}"
EOF

Using the OpenStack client, we can verify that the configuration is correct and get the available network names:

sudo -u nodepool env OS_CLOUD=le-cloud openstack network list

Diskimage builder elements

Nodepool uses disk-image-builder to create images locally so that the exact same image can be used across multiple clouds. For this demonstration we'll use a minimal element to set up basic ssh access:

mkdir -p /etc/nodepool/elements/nodepool-minimal/{extra-data.d,install.d}

In extra-data.d, scripts are executed outside of the image and the one below is used to authorize ssh access:

cat > /etc/nodepool/elements/nodepool-minimal/extra-data.d/01-user-key <<'EOF'
#!/bin/sh
set -ex
cat /var/lib/nodepool/.ssh/id_rsa.pub > $TMP_HOOKS_PATH/id_rsa.pub
EOF
chmod +x /etc/nodepool/elements/nodepool-minimal/extra-data.d/01-user-key

In install.d, scripts are executed inside the image and the following is used to create a user and install the authorized_key file:

cat > /etc/nodepool/elements/nodepool-minimal/install.d/50-jenkins <<'EOF'
#!/bin/sh
set -ex
useradd -m -d /home/jenkins jenkins
mkdir /home/jenkins/.ssh
mv /tmp/in_target.d/id_rsa.pub /home/jenkins/.ssh/authorized_keys
chown -R jenkins:jenkins /home/jenkins

# Nodepool expects this dir to exist when it boots slaves.
mkdir /etc/nodepool
chmod 0777 /etc/nodepool
EOF
chmod +x /etc/nodepool/elements/nodepool-minimal/install.d/50-jenkins

Note: all the examples in this article are available in this repository: sf-elements. More information on creating elements is available here.

Nodepool configuration

Nodepool main configuration is /etc/nodepool/nodepool.yaml:

elements-dir: /etc/nodepool/elements
images-dir: /var/lib/nodepool/dib

cron:
  cleanup: '*/30 * * * *'
  check: '*/15 * * * *'

targets:
  - name: default

gearman-servers:
  - host: localhost

diskimages:
  - name: dib-centos-7
    elements:
      - centos-minimal
      - vm
      - dhcp-all-interfaces
      - growroot
      - openssh-server
      - nodepool-minimal

providers:
  - name: default
    cloud: le-cloud
    images:
      - name: centos-7
        diskimage: dib-centos-7
        username: jenkins
        private-key: /var/lib/nodepool/.ssh/id_rsa
        min-ram: 2048
    networks:
      - name: defaultnet
    max-servers: 10
    boot-timeout: 120
    clean-floating-ips: true
    image-type: raw
    pool: nova
    rate: 10.0

labels:
  - name: centos-7
    image: centos-7
    min-ready: 1
    providers:
      - name: default

Nodepool uses a gearman server to get node requests and to dispatch image rebuild jobs. We'll use a local gearmand server on localhost. Thus, Nodepool will only respect the min-ready value and it won't dynamically start nodes.

Diskimages define image names and dib elements. All the elements provided by dib, such as centos-minimal, are available; here is the full list.

Providers define specific cloud provider settings such as the network name or boot timeout. Lastly, labels define generic names for cloud images to be used by job definitions.

To sum up, labels reference images in providers that are constructed with disk-image-builder.

Create the first node

Start the services:

sudo systemctl start gearmand nodepool nodepool-builder

Nodepool will automatically initiate the image build, as shown in /var/log/nodepool/nodepool.log: WARNING nodepool.NodePool: Missing disk image centos-7. Image building logs are available in /var/log/nodepool/builder-image.log.

Check the building process:

# nodepool dib-image-list
+----+--------------+-----------------------------------------------+------------+----------+-------------+
| ID | Image        | Filename                                      | Version    | State    | Age         |
+----+--------------+-----------------------------------------------+------------+----------+-------------+
| 1  | dib-centos-7 | /var/lib/nodepool/dib/dib-centos-7-1490688700 | 1490702806 | building | 00:00:00:05 |
+----+--------------+-----------------------------------------------+------------+----------+-------------+

Once the dib image is ready, nodepool will upload the image (nodepool.NodePool: Missing image centos-7 on default). When the image fails to build, nodepool will try again indefinitely; look for "after-error" in builder-image.log.
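
For example, a quick way to spot those retries in the builder log mentioned above (paths as configured earlier):

grep "after-error" /var/log/nodepool/builder-image.log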

Check the upload process:

# nodepool image-list
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+
| ID | Provider | Image    | Hostname | Version    | Image ID | Server ID | State    | Age         |
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+
| 1  | default  | centos-7 | centos-7 | 1490703207 | None     | None      | building | 00:00:00:43 |
+----+----------+----------+----------+------------+----------+-----------+----------+-------------+

Once the image is ready, nodepool will create an instance (nodepool.NodePool: Need to launch 1 centos-7 nodes for default on default):

# nodepool list
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+
| ID | Provider | AZ   | Label    | Target  | Manager | Hostname           | NodeName           | Server ID | IP   | State    | Age         |
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+
| 1  | default  | None | centos-7 | default | None    | centos-7-default-1 | centos-7-default-1 | XXX       | None | building | 00:00:01:37 |
+----+----------+------+----------+---------+---------+--------------------+--------------------+-----------+------+----------+-------------+

Once the node is ready, you have completed the first part of the process described in this article and the Nodepool service should be working properly. If the node goes directly from the building to the delete state, Nodepool will try to recreate the node indefinitely. Look for errors in nodepool.log. One common mistake is an incorrect provider network configuration; make sure you set a valid network name in nodepool.yaml.
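
A minimal troubleshooting sequence for that case might look like this, reusing commands shown earlier in the article:

# Check for launch errors
grep -i error /var/log/nodepool/nodepool.log | tail
# List the networks the cloud actually provides
sudo -u nodepool env OS_CLOUD=le-cloud openstack network list
# Fix the network name in the providers section, then restart the services
sudo vi /etc/nodepool/nodepool.yaml
sudo systemctl restart nodepool nodepool-builder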

Nodepool operations

Here is a summary of the most common operations:

  • Force the rebuild of an image: nodepool image-build image-name
  • Force the upload of an image: nodepool image-upload provider-name image-name
  • Delete a node: nodepool delete node-id
  • Delete a local dib image: nodepool dib-image-delete image-id
  • Delete a glance image: nodepool image-delete image-id
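
Applied to the example configuration above, those operations would look like this (the numeric IDs come from the list commands shown earlier and will differ on your system):

# Rebuild and re-upload the CentOS 7 image
nodepool image-build dib-centos-7
nodepool image-upload default centos-7
# Recycle the current node; nodepool will create a new one to satisfy min-ready
nodepool delete 1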

Nodepool "check" cron periodically verifies that nodes are available. When a node is shutdown, it will automatically recreate it.

Ready to use application deployment with Nodepool

As a cloud developer, it is convenient to always have access to a fresh OpenStack deployment for testing purposes. It's easy to break things, and it takes time to recreate a test environment, so let's use Nodepool.

First we'll add a new element to pre-install the typical RDO requirements:

diskimages:
  - name: dib-rdo-newton
    elements:
      - centos-minimal
      - nodepool-minimal
      - rdo-requirements
    env-vars:
      RDO_RELEASE: "ocata"

providers:
  - name: default
    images:
      - name: rdo-newton
        diskimage: dib-rdo-newton
        username: jenkins
        min-ram: 8192
        private-key: /var/lib/nodepool/.ssh/id_rsa
        ready-script: run_packstack.sh

Then using a ready-script, we can execute packstack to deploy services after the node has been created:

labels:
  - name: rdo-ocata
    image: rdo-ocata
    min-ready: 1
    ready-script: run_packstack.sh
    providers:
      - name: default
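
The article does not show the content of run_packstack.sh; a minimal, purely hypothetical version, assuming the rdo-requirements element already configured the RDO repositories and installed packstack on the image, could be as simple as:

#!/bin/bash
# Hypothetical sketch of run_packstack.sh: deploy an all-in-one OpenStack
# on the freshly booted node once Nodepool marks it ready.
set -ex
sudo packstack --allinone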

Once the node is ready, use nodepool list to get the IP address:

# ssh -i /var/lib/nodepool/.ssh/id_rsa jenkins@node
jenkins$ . keystonerc_admin
jenkins (keystone_admin)$ openstack catalog list
+-----------+-----------+-------------------------------+
| Name      | Type      | Endpoints                     |
+-----------+-----------+-------------------------------+
| keystone  | identity  | RegionOne                     |
|           |           |   public: http://node:5000/v3 |
...

To get a new instance, either terminate the current one, or manually delete it using nodepool delete node-id. A few minutes later you will have a fresh and pristine environment!

View article »