RDO Community News

See also blogs.rdoproject.org

Introducing Software Factory - part 1

Introducing Software Factory

Software Factory is an open source software development forge with an emphasis on collaboration and on ensuring code quality through Continuous Integration (CI). It is inspired by OpenStack's development workflow, which has proven reliable for fast-changing, interdependent projects driven by large communities.

Software Factory comes with batteries included: it is easy to deploy and fully leverages OpenStack clouds. Software Factory is an ideal solution for code hosting, feature and issue tracking, and Continuous Integration, Delivery and Deployment.

Why Software Factory?

OpenStack, as a very large collection of open source projects with thousands of contributors across the world, had to solve scale and interdependency problems to ensure the quality of its codebase; this led to designing new best practices and tools in the fields of software engineering, testing and delivery.

Unfortunately, until recently these tools were mostly custom-tailored for OpenStack, meaning that deploying and configuring all these tools together to set up a similar development forge would require a tremendous effort.

The purpose of Software Factory is to make these new tools easily available to development teams, and thus help to spread these new best practices as well. With a minimal effort in configuration, a complete forge can be deployed and customized in minutes, so that developers can focus on what matters most to them: delivering quality code!

Innovative features

Multiple repository project support

Software projects requiring multiple, interdependent repositories are very common. Among standard architectural patterns, we have:

  • Modular software that supports plugins, drivers, add-ons…
  • Clients and Servers
  • Distribution of packages
  • Micro-services

But being common does not mean being easy to deal with.

For starters, ensuring that code validation jobs are always built from the right combination of source code repositories, each at the correct version, can be headache-inducing.

Another common problem is that issue tracking and task management solutions must be aware that issues and tasks might (and will) span several repositories at a time.

Software Factory supports multiple repository projects natively at every step of the development workflow: code hosting, story tracking and CI.
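
To make the first point concrete, here is a minimal sketch, in Python, of what a CI job has to do when a project spans several repositories: check out each one at a pinned revision so that validation jobs always build from a known combination. The repository names, URLs and revisions are hypothetical, and this illustrates the problem rather than Software Factory's actual implementation.

    import subprocess
    from pathlib import Path

    # Hypothetical combination of interdependent repositories and the
    # revisions that must be tested together for one validation job.
    PINNED_REPOS = {
        "server": ("https://git.example.org/acme/server.git", "refs/changes/42/1042/3"),
        "client": ("https://git.example.org/acme/client.git", "master"),
        "plugins": ("https://git.example.org/acme/plugins.git", "master"),
    }

    def prepare_workspace(workspace: Path) -> None:
        """Clone each repository and check out its pinned revision."""
        for name, (url, revision) in PINNED_REPOS.items():
            repo_dir = workspace / name
            subprocess.run(["git", "clone", url, str(repo_dir)], check=True)
            subprocess.run(["git", "fetch", "origin", revision], cwd=repo_dir, check=True)
            subprocess.run(["git", "checkout", "FETCH_HEAD"], cwd=repo_dir, check=True)

    if __name__ == "__main__":
        prepare_workspace(Path("ci-workspace"))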

Smart Gating system

Usually, a team working on new features will split the tasks among its members. Each member will work on his or her own branch, and it will be up to one or more release managers to ensure that branches get merged back correctly. This can be a daunting task with long-lived branches, and it is prone to human error on fast-evolving projects. Isn't there a way to automate this?

Software Factory provides a CI system that ensures master branches are always "healthy" by using a smart gate pipeline. Every change proposed on a repository is tested by the CI and must pass these tests before being eligible to land on the master branch. This is quite common in modern CI systems, but Software Factory goes above and beyond to make the lives of code maintainers and release managers easier:

Automatic merging of patches into the repositories when all conditions of eligibility are satisfied

Software Factory will automatically merge patches that have been validated by the CI and approved by at least one privileged repository maintainer (in Software Factory terminology, a "core developer"). This is configurable and can be adapted to any workflow or team size.
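
As a rough illustration, the merge-eligibility rule described above can be thought of as a simple predicate: the CI jobs must have passed and at least one core developer must have approved the patch. The sketch below uses hypothetical data structures, not Software Factory's actual API, and the approval threshold is configurable just as the workflow itself is.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Patch:
        ci_passed: bool
        approvals: List[str] = field(default_factory=list)        # reviewers who approved
        core_developers: List[str] = field(default_factory=list)  # privileged maintainers

    def is_mergeable(patch: Patch, required_core_approvals: int = 1) -> bool:
        """Return True when the (configurable) merge conditions are satisfied."""
        core_votes = sum(1 for r in patch.approvals if r in patch.core_developers)
        return patch.ci_passed and core_votes >= required_core_approvals

    patch = Patch(ci_passed=True, approvals=["alice"], core_developers=["alice", "bob"])
    assert is_mergeable(patch)   # CI passed and one core developer approved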

The Software Factory gating system takes care of some of the release manager's tasks, namely managing the merge order of patches, testing them, and integrating them or requiring further work from the author of the patch.

Speculative merging strategy

In fact, once a patch is deemed mergeable, it is not merged immediately into the code base. Instead, Software Factory runs another batch of (fully customizable) validation jobs on what the master branch would look like if that patch, along with other mergeable patches, were merged. In other words, the validation jobs are run on a code base consisting of:

  • the latest commit on the master branch, plus
  • any other mergeable patches for this repository (rebased on each other in approval order), plus
  • the mergeable patch on top

This not only ensures that the patch to merge is always compatible with the current code base, but also detects compatibility problems between patches before they can do any harm (if the validation jobs fail, the patch remains at the gate and must be reworked by its author).

This speculative strategy greatly reduces the time needed to merge patches, because the CI assumes by default that all mergeable patches will pass their validation jobs. If a patch in the middle of the currently tested chain fails, the CI discards only that patch and automatically rebases the patches that followed it onto the closest remaining one. This is especially powerful when integration tests take a long time to run.
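
The following toy sketch, in Python, illustrates the behaviour described above: each queued patch is validated against master plus the mergeable patches ahead of it, a failing patch is ejected, and the patches behind it are re-queued on the closest remaining one. The passes callback stands in for the project's validation jobs; this is an illustration of the idea, not the actual gating code.

    from typing import Callable, List

    def gate(master: List[str], queue: List[str],
             passes: Callable[[List[str]], bool]) -> List[str]:
        """Merge queued patches in approval order, ejecting any that fail."""
        merged, pending = list(master), list(queue)
        while pending:
            accepted: List[str] = []
            failed = None
            for patch in pending:
                # Speculative base: master, plus the patches ahead, plus this patch.
                if passes(merged + accepted + [patch]):
                    accepted.append(patch)
                else:
                    failed = patch
                    break
            merged += accepted
            # Drop the failing patch (its author must rework it) and re-test the
            # remaining patches on top of what actually merged.
            pending = [p for p in pending if p not in accepted and p != failed]
        return merged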

Jobs orchestration in parallel or chained

For developers, fast CI feedback is critical to avoid context switching. Software Factory can run CI jobs in parallel for any given patch, thus reducing the time it takes to assess the quality of submissions.

Software Factory also supports chaining CI jobs, allowing artifacts from a job to be consumed by the next one (for example, making sure software can be built in a job, then running functional tests on that build in the next job).
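
As a small illustration of these two modes, the sketch below runs independent jobs in parallel and then chains a job that consumes the artifact produced by the previous one. The job functions are hypothetical stand-ins rather than real Software Factory job definitions.

    from concurrent.futures import ThreadPoolExecutor

    def lint_job() -> bool:
        return True                            # stand-in for a linting job

    def unit_test_job() -> bool:
        return True                            # stand-in for a unit test job

    def build_job() -> str:
        return "myapp-1.0.tar.gz"              # stand-in: produces an artifact

    def functional_test_job(artifact: str) -> bool:
        return artifact.endswith(".tar.gz")    # stand-in: tests that artifact

    # Independent jobs run in parallel to shorten the CI feedback loop...
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda job: job(), [lint_job, unit_test_job]))

    # ...while chained jobs pass artifacts from one stage to the next.
    results.append(functional_test_job(build_job()))

    print("CI verdict:", "success" if all(results) else "failure")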

Complete control on jobs environments

One of the most common and annoying problems in Continuous Integration is job flakiness, or instability, meaning that successive runs of the same job under the same conditions might not yield the same results. This is often due to running jobs on the same long-lived worker nodes, which is prone to interference, especially if the environment is not correctly cleaned up between runs.

Despite this, long-lived nodes are often used because recreating a test environment from scratch can be costly in terms of time and resources.

The Software Factory CI system brings a simple solution to this problem by managing a pool of virtual machines on an OpenStack cloud, where jobs are executed. This consists of:

  • Provisioning worker VMs according to developers' specifications
  • Ensuring that a minimum number of VMs are ready for new jobs at any time
  • Discarding and destroying VMs once a job has run its course
  • Keeping VMs up to date when their image definitions have changed
  • Abiding by cloud usage quotas as defined in Software Factory's configuration

A fresh VM will be selected from the pool as soon as a job must be executed.
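
The pool behaviour listed above can be summarised in a few lines of Python. The sketch below is only an illustration built around a fake cloud client, not a real OpenStack SDK call, but it captures the lifecycle: keep a minimum number of fresh VMs ready, hand one out per job, and destroy it afterwards while staying within the configured quota.

    from dataclasses import dataclass, field
    from typing import List

    class FakeCloud:
        """Stand-in for a real OpenStack client that boots and deletes VMs."""
        def __init__(self) -> None:
            self.counter = 0
        def boot(self, image: str) -> str:
            self.counter += 1
            return f"{image}-vm-{self.counter}"
        def delete(self, vm: str) -> None:
            pass

    @dataclass
    class NodePool:
        cloud: FakeCloud
        image: str
        min_ready: int = 2                     # VMs kept ready for new jobs
        quota: int = 10                        # cloud usage quota
        ready: List[str] = field(default_factory=list)
        in_use: List[str] = field(default_factory=list)

        def replenish(self) -> None:
            """Boot fresh VMs until `min_ready` are waiting, within quota."""
            while (len(self.ready) < self.min_ready
                   and len(self.ready) + len(self.in_use) < self.quota):
                self.ready.append(self.cloud.boot(self.image))

        def acquire(self) -> str:
            """Hand a fresh VM to a job; it will never be reused."""
            self.replenish()
            vm = self.ready.pop()
            self.in_use.append(vm)
            return vm

        def release(self, vm: str) -> None:
            """Destroy the VM once the job has run its course."""
            self.in_use.remove(vm)
            self.cloud.delete(vm)
            self.replenish()

    pool = NodePool(cloud=FakeCloud(), image="centos-7-ci")
    vm = pool.acquire()    # the job runs on a fresh VM...
    pool.release(vm)       # ...which is destroyed afterwards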

This approach has several advantages:

  • The flakiness due to environment decay is completely eliminated
  • Great flexibility in how you define your images
  • CI costs can be decreased by using computing resources only when needed

On top of this, Software Factory can support multiple OpenStack cloud providers at the same time, improving service availability through failover.

User-driven platform configuration, and configuration as code

In many organizations, development teams rely on operational teams to manage their resources, whether adding contributors with the correct access credentials to a project or provisioning a new test node. This can cause significant delays for reasons ranging from legal issues to human error, and it can be very frustrating for developers.

With Software Factory, everything can be managed by development teams themselves.

Software Factory's general configuration is stored in a Git repository, aptly named config, where the following resources among others can be defined:

  • Git projects/repositories/ACLs
  • job-running VM images and their provisioning
  • validation jobs and gating strategies

Users of a Software Factory instance can propose, review and approve (with the appropriate accreditations) configuration changes, which are validated by a built-in CI workflow. When a configuration change is approved and merged, the platform reconfigures itself automatically and the change is applied immediately.
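
As a purely hypothetical illustration of configuration as code, the sketch below shows a resource description of the kind that might live in the config repository, together with a reconciliation step that would be triggered automatically once a change is approved and merged. The resource format and functions are illustrative; they are not Software Factory's actual schema.

    # Hypothetical desired state, as it might be described in the config repository.
    desired_resources = {
        "projects": {
            "my-app": {
                "repositories": ["my-app-server", "my-app-client"],
                "core-developers": ["alice", "bob"],
            },
        },
        "images": {
            "ci-worker": {"base": "centos-7", "packages": ["python3", "tox"]},
        },
    }

    def apply_configuration(resources: dict) -> None:
        """Reconcile the platform with the merged configuration (stand-in only)."""
        for name, project in resources.get("projects", {}).items():
            print(f"ensuring project {name} with repositories {project['repositories']}")
        for name, image in resources.get("images", {}).items():
            print(f"ensuring VM image {name} based on {image['base']}")

    # Called automatically after the configuration change lands on the config repo.
    apply_configuration(desired_resources)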

This simplifies the work of the platform operator, and gives the community of users more freedom, flexibility and trust when it comes to managing projects and resources such as job-running VMs.

Other features included in Software Factory

As we said at the beginning of this article, Software Factory is a complete development forge. Here are its main features:

  • Code hosting
  • Code review
  • Issue tracker
  • Continuous Integration, Delivery and Deployment
  • Job logs storage
  • Job logs search engine
  • Notification system over IRC
  • Git repository statistics and reporting
  • Collaboration tools: etherpad, pasteboard, VoIP server

To sum up

Here is how Software Factory may improve your team's productivity. Software Factory will:

  • help ensure that your code base is always healthy, buildable and deployable
  • ease merging patches into the master branch
  • ease managing projects scattered across multiple Git repositories
  • improve community building and knowledge sharing on the code base
  • help reduce test flakiness
  • give developers more control over their test environments
  • simplify projects creation and configuration
  • simplify switching between services (code review, CI, …) thanks to its navigation panel

Software Factory is also operator-friendly:

  • 100% open source and self-hosted, i.e. no vendor lock-in or dependency on external providers
  • based on CentOS 7, benefiting from the stability of that Linux distribution
  • fast and easy to deploy and upgrade
  • simple configuration and documentation

This article is the first in a series that will introduce the components of Software Factory and explain how they provide those features.

In the meantime you may want to check out softwarefactory-project.io and review.rdoproject.org (both are Software Factory deployments) to explore the features we discussed in this article.

Thank you for reading, and stay tuned!


Recent blog posts, June 19

Using Ansible Validations With Red Hat OpenStack Platform – Part 3 by August Simonelli, Technical Marketing Manager, Cloud

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

Read more at http://redhatstackblog.redhat.com/2017/06/15/using-ansible-validations-with-red-hat-openstack-platform-part-3/

TripleO deep dive session index by Carlos Camacho

This is a brief index of all the TripleO deep dive sessions; you can see all the videos on the TripleO YouTube channel.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-index.html

TripleO deep dive session #10 (Containers) by Carlos Camacho

This is the 10th release of the TripleO “Deep Dive” sessions.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-10.html

OpenStack, Containers, and Logging by Lars Kellogg-Stedman

I've been thinking about logging in the context of OpenStack and containerized service deployments. I'd like to lay out some of my thoughts on this topic and see if people think I am talking crazy or not.

Read more at http://blog.oddbit.com/2017/06/14/openstack-containers-and-logging/

John Trowbridge: TripleO in Ocata by Rich Bowen

John Trowbridge (Trown) talks about his work on TripleO in the OpenStack Ocata period, and what's coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/john-trowbridge-tripleo-in-ocata/

Doug Hellmann: Release management in OpenStack Ocata by Rich Bowen

Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/06/doug-hellmann-release-management-in-openstack-ocata/

Using Ansible Validations With Red Hat OpenStack Platform – Part 2 by August Simonelli, Technical Marketing Manager, Cloud

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Read more at http://redhatstackblog.redhat.com/2017/06/12/using-ansible-validations-with-red-hat-openstack-platform-part-2/


Recent blog posts: June 12

Experiences with Cinder in Production by Arne Wiebalck

The CERN OpenStack cloud service has been providing block storage via Cinder since the Havana days in early 2014. Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Read more at http://openstack-in-production.blogspot.com/2017/06/experiences-with-cinder-in-production.html

Using Ansible Validations With Red Hat OpenStack Platform – Part 1 by August Simonelli, Technical Marketing Manager, Cloud

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

Read more at http://redhatstackblog.redhat.com/2017/06/08/using-ansible-validations-with-red-hat-openstack-platform-part-1/

Upstream First…or Second? by Adam Young

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see some things I would like to do differently in the future.

Read more at http://adam.younglogic.com/2017/06/upstream-first-or-second/

Accessing a Mistral Environment in a CLI workflow by John

Recently, with some help from the Mistral devs in freenode #openstack-mistral, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

Read more at http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html

OpenStack papers community on Zenodo by Tim Bell

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself and how it is used by applications. There are many papers being published in conference proceedings and PhD theses, but finding out about them can be difficult. While these papers may not necessarily lead to open source code contributions, the results of this research are a valuable resource for the community.

Read more at http://openstack-in-production.blogspot.com/2017/06/openstack-papers-community-on-zenodo.html

Event report: Red Hat Summit, OpenStack Summit by rbowen

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Read more at http://drbacchus.com/event-report-red-hat-summit-openstack-summit/


RDO Contributor Survey

We recently ran a contributor survey in the RDO community, and while the participation was fairly small (21 respondents), there's a lot of important insight we can glean from it.

First, and unsurprisingly:

Of the 20 people who answered the "corporate affiliation" question, 18 were Red Hat employees. While we are already aware that this is a place where we need to improve, it's good to know just how much room for improvement there is. What's useful here will be figuring out why people outside of Red Hat are not participating more. This is touched on in later questions.

Next, we have the frequency of contributions:

Here we see that while 14% of our contributors are pretty much working on RDO all the time, the majority of contributors only touch it a few times per release - probably updating a single package, or addressing a single bug, for that particular cycle.

This, too, is mostly in line with what we expected. With most of the RDO pipeline being automated, there's little that most participants would need to do beyond a handful of updates each release. Meanwhile, a core team works on the infrastructure and the tools every week to keep it all moving.

We asked contributors where they participate:

Most of the contributors - 75% - indicate that they are involved in packaging. (Respondents could choose more than one area in which they participate.) Test day participation was a distant second place (35%), followed by documentation (25%) and end user support (25%).

I've personally seen way more people than that participate in end user support, on the IRC channel, mailing list, and ask.openstack.org. Possibly these people don't think of what they're doing as support, but it is still a very important way that we grow our user community.

The rest of the survey delves into deeper details about the contribution process.

When asked about the ease of contribution, 80% said that it was ok, with just 10% saying that the contribution process was too hard.

When asked about difficulties encountered in the contribution process:

Answers were split fairly evenly between "Confusing or outdated documentation", "Complexity of process", and "Lack of documentation". Encouragingly, "lack of community support" placed far behind these other responses.

It sounds like we have a need to update the documentation, and greatly clarify it. Having a first-time contributor's view of the documentation, and what unwarranted assumptions it makes, would be very beneficial in this area.

When asked how these difficulties were overcome, 60% responded that they got help on IRC, 15% indicated that they just kept at it until they figured it out, and another 15% indicated that they gave up and focused their attention elsewhere.

Asked for general comments about the contribution process, almost all comments focused on the documentation - it's confusing, outdated, and lacks useful examples. A number of people complained about the way that the process seems to change almost every time they come to contribute. Remember: Most of our contributors only touch RDO once or twice a release, and they feel that they have to learn the process from scratch every time. Finally, two people complained that the review process for changes is too slow, perhaps due to the small number of reviewers.

I'll be sharing the full responses on the RDO-List mailing list later today.

Thank you to everyone who participated in the survey. Your insight is greatly appreciated, and we hope to use it to improve the contributor process in the future.


Recent blog posts - May 22nd

Here are some of the recent blog posts from our community:

Some lessons an IT department can learn from OpenStack by jpena

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

Read more at http://rdoproject.org/blog/2017/05/some-lessons-an-it-department-can-learn-from-openstack/

When is it not cool to add a new OpenStack configuration option? by assafmuller

Adding new configuration options has a cost, and makes already complex projects (Hi Neutron!) even more so. Doubly so when we speak of architecture choices, as it means that we have to test and document all permutations. Of course, we don’t always do that, nor do we test all interactions between deployment options and other advanced features, leaving users with fun surprises. With some projects seeing an increased rotation of contributors, we’re seeing wastelands of unmaintained code left behind, increasing the importance of being strategic about introducing new complexity.

Read more at https://assafmuller.com/2017/05/19/when-is-not-cool-to-add-a-new-openstack-configuration-option/

Running (and recording) fully automated GUI tests in the cloud by Matthieu Huin

The problem: Software Factory is a full-stack software development platform; it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO's CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory's main contributors. :)

Read more at http://rdoproject.org/blog/2017/05/running-and-recording-fully-automated-GUI-tests-in-the-cloud/
