RDO Community News

See also blogs.rdoproject.org

RDO Pike released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Pike for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Pike is the 16th release from the OpenStack project, which is the work of more than 2300 contributors from around the world (source).

The release is making its way out to the CentOS mirror network, and should be on your favorite mirror site momentarily.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

New and Improved

Interesting things in the Pike release include:

Added/Updated packages

The following packages and services were added or updated in this release:

  • Kuryr and Kuryr-kubernetes: an integration between OpenStack and Kubernetes networking.
  • Senlin: a clustering service for OpenStack clouds.
  • Shade: a simple client library for interacting with OpenStack clouds, used by Ansible among others.
  • python-pankoclient: a client library for the event storage and REST API for Ceilometer.
  • python-scciclient: a ServerView Common Command Interface Client Library, for the FUJITSU iRMC S4 - integrated Remote Management Controller.

Other additions include:

Python Libraries

  • os-xenapi
  • ovsdbapp (deps)
  • python-daiquiri (deps)
  • python-deprecation (deps)
  • python-exabgp
  • python-json-logger (deps)
  • python-netmiko (deps)
  • python-os-traits
  • python-paunch
  • python-scciclient
  • python-scrypt (deps)
  • python-sphinxcontrib-actdiag (deps) (pending)
  • python-sphinxcontrib-websupport (deps)
  • python-stestr (deps)
  • python-subunit2sql (deps)
  • python-sushy
  • shade (SDK)
  • update XStatic packages (update)
  • update crudini to 0.9 (deps) (update)
  • upgrade liberasurecode and pyeclib libraries to 1.5.0 (update) (deps)

Tempest Plugins

  • python-barbican-tests-tempest
  • python-keystone-tests-tempest
  • python-kuryr-tests-tempest
  • python-patrole-tests-tempest
  • python-vmware-nsx-tests-tempest
  • python-watcher-tests-tempest

Puppet-Modules

  • puppet-murano
  • puppet-veritas_hyperscale
  • puppet-vitrage

OpenStack Projects

  • kuryr
  • kuryr-kubernetes
  • openstack-glare
  • openstack-panko
  • openstack-senlin

OpenStack Clients

  • mistral-lib
  • python-glareclient
  • python-pankoclient
  • python-senlinclient

Contributors

During the Pike cycle, we started the EasyFix initiative, which has resulted in several new people joining our ranks. These include:

  • Christopher Brown
  • Anthony Chow
  • T. Nichole Williams
  • Ricardo Arguello

But, we wouldn't want to overlook anyone. Thank you to all 172 contributors who participated in producing this release:

Aditya Prakash Vaja, Alan Bishop, Alan Pevec, Alex Schultz, Alexander Stafeyev, Alfredo Moralejo, Andrii Kroshchenko, Anil, Antoni Segura Puimedon, Arie Bregman, Assaf Muller, Ben Nemec, Bernard Cafarelli, Bogdan Dobrelya, Brent Eagles, Brian Haley, Carlos Gonçalves, Chandan Kumar, Christian Schwede, Christopher Brown, Damien Ciabrini, Dan Radez, Daniel Alvarez, Daniel Farrell, Daniel Mellado, David Moreau Simard, Derek Higgins, Doug Hellmann, Dougal Matthews, Edu Alcañiz, Eduardo Gonzalez, Elise Gafford, Emilien Macchi, Eric Harney, Eyal, Feng Pan, Frederic Lepied, Frederic Lepied, Garth Mollett, Gaël Chamoulaud, Giulio Fidente, Gorka Eguileor, Hanxi Liu, Harry Rybacki, Honza Pokorny, Ian Main, Igor Yozhikov, Ihar Hrachyshka, Jakub Libosvar, Jakub Ruzicka, Janki, Jason E. Rist, Jason Joyce, Javier Peña, Jeffrey Zhang, Jeremy Liu, Jiří Stránský, Johan Guldmyr, John Eckersberg, John Fulton, John R. Dennis, Jon Schlueter, Juan Antonio Osorio, Juan Badia Payno, Julie Pichon, Julien Danjou, Karim Boumedhel, Koki Sanagi, Lars Kellogg-Stedman, Lee Yarwood, Leif Madsen, Lon Hohberger, Lucas Alvares Gomes, Luigi Toscano, Luis Tomás, Luke Hinds, Martin André, Martin Kopec, Martin Mágr, Matt Young, Matthias Runge, Michal Pryc, Michele Baldessari, Mike Burns, Mike Fedosin, Mohammed Naser, Oliver Walsh, Parag Nemade, Paul Belanger, Petr Kovar, Pradeep Kilambi, Rabi Mishra, Radomir Dopieralski, Raoul Scarazzini, Ricardo Arguello, Ricardo Noriega, Rob Crittenden, Russell Bryant, Ryan Brady, Ryan Hallisey, Sarath Kumar, Spyros Trigazis, Stephen Finucane, Steve Baker, Steve Gordon, Steven Hardy, Suraj Narwade, Sven Anderson, T. 
Nichole Williams, Telles Nóbrega, Terry Wilson, Thierry Vignaud, Thomas Hervé, Thomas Morin, Tim Rozet, Tom Barron, Tony Breeds, Tristan Cacqueray, afazekas, danpawlik, dnyanmpawar, hamzy, inarotzk, j-zimnowoda, kamleshp, marios, mdbooth, michaelhenkel, mkolesni, numansiddique, pawarsandeepu, prateek1192, ratailor, shreshtha90, vakwetu, vtas-hyperscale-ci, yrobla, zhangguoqing, Vladislav Odintsov, Xin Wu, XueFengLiu, Yatin Karel, Yedidyah Bar David, adriano petrich, bcrochet, changzhi, diana, djipko, dprince, dtantsur, eggmaster, eglynn, elmiko, flaper87, gpocentek, gregswift, hguemar, jason guiditta, jprovaznik, mangelajo, marcosflobo, morsik, nmagnezi, sahid, sileht, slagle, trown, vkmc, wes hayutin, xbezdick, zaitcev, and zaneb.

Getting Started

There are three ways to get started with RDO.

  • To spin up a proof-of-concept cloud quickly, on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
  • For a production deployment of RDO, use the TripleO Quickstart and you'll be running a production cloud in short order.
  • Finally, if you want to try out OpenStack, but don't have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack. (TryStack is not, at this time, running Pike, although it is running RDO.)
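For the Packstack route, the basic commands look like the outline below; the package names follow the RDO quickstart for Pike, the commands require root access and a network connection, and you should consult the quickstart on rdoproject.org for the authoritative steps:

```shell
# Enable the RDO Pike repository, then install and run Packstack.
# Run on a fresh CentOS 7 node; requires root and network access.
sudo yum install -y centos-release-openstack-pike
sudo yum update -y
sudo yum install -y openstack-packstack
sudo packstack --allinone
```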

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. For more developer-oriented content, we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we're there too, and also Google+.

View article »

Video interviews at the Denver PTG (Sign up now!)

TL;DR: Sign up here for the video interviews at the PTG in Denver next month.

Earlier this year, at the PTG in Atlanta, I did video interviews with some of the Red Hat engineers who were there.

You can see these videos on the RDO YouTube channel.

Or you can see the teaser video here:

This year, I'll be expanding that to everyone - not just Red Hat - to emphasize the awesome cooperation and collaboration that happens across projects, and across companies.

If you'll be at the PTG, please consider signing up to talk to me about your project. I'll be conducting interviews starting on Tuesday morning, and you can sign up here.

Please see the "planning for your interview" tab of that spreadsheet for the answers to all of your questions about the interviews. Or contact me directly at rbowen AT red hat DOT com if you have more questions.

View article »

Introducing opstools-ansible

Introducing Opstools-ansible

Ansible

Ansible is an agentless, declarative configuration management tool. It can be used to install and configure packages on a wide variety of targets. Targets are defined in an inventory file, to which Ansible applies the predefined actions. Actions are defined as playbooks, or sometimes as roles, in the form of YAML files. Details of Ansible can be found here.
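As a minimal illustration of the playbook format (the host group and package are placeholders, not anything opstools-specific):

```yaml
# playbook.yml: install and enable an NTP client on every inventory host
- hosts: all
  become: yes
  tasks:
    - name: Ensure chrony is installed
      package:
        name: chrony
        state: present

    - name: Ensure chronyd is running
      service:
        name: chronyd
        state: started
        enabled: yes
```

It would then be applied with `ansible-playbook -i inventory playbook.yml`.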

Opstools-ansible

The opstools-ansible project, hosted on GitHub, uses Ansible to configure an environment that provides opstools support, namely centralized logging and analysis, availability monitoring, and performance monitoring.

One prerequisite for running opstools-ansible is that the servers must be running CentOS 7 or RHEL 7 (or a compatible distribution).

Inventory file

These servers are defined in the inventory file, following the structure of this reference file, which defines three high-level host groups:

  • am_hosts
  • pm_hosts
  • logging_host

There are lower-level host groups as well, but the documentation states that they are not tested.
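A minimal inventory using those groups might look like this (hostnames are placeholders):

```ini
[am_hosts]
monitoring01.example.com

[pm_hosts]
metrics01.example.com

[logging_host]
logging01.example.com
```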

Configuration File

Once the inventory file is defined, Ansible configuration files can be used to tailor the setup to individual needs. The README.rst file for opstools-ansible suggests the following as an example:

    fluentd_use_ssl: true
    fluentd_shared_key: secret
    fluentd_ca_cert: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    fluentd_private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----

If there is no Ansible configuration file to tune the system, the default settings are applied.

Playbooks and roles

The playbooks specify what packages Ansible installs for the opstools environment. Besides those packages, opstools-ansible also applies these additional roles:

  • Firewall – this role manages the firewall rules for the servers.
  • Prereqs – this role checks and installs all the dependency packages, such as python-netaddr and libselinux-python, needed for a successful installation of opstools.
  • Repos - this is a collection of roles for configuring additional package repositories.
  • Chrony – this role installs and configures the NTP client to make sure the time in each server is in sync with each other.

opstools environment

Once these are done, we can simply apply the following command to create the opstools environment:

    ansible-playbook playbook.yml -e @config.yml

TripleO Integration

TripleO (OpenStack on OpenStack) has the concepts of the Undercloud and the Overcloud:

  • Undercloud: for deployment, configuration, and management of OpenStack nodes.
  • Overcloud: the actual OpenStack cluster that is consumed by users.

Red Hat has an in-depth blog post on TripleO, and OpenStack has this document on contributing and installing TripleO.

When opstools is installed on the TripleO Undercloud, the OpenStack instances running on the Overcloud can be configured to run the opstools services when they are deployed. For example:

    openstack overcloud deploy … \
      -e /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml \
      -e params.yaml

There are only three steps to integrate opstools with TripleO using opstools-ansible. Details of the steps can be found here.

  1. Use opstools-ansible to create the opstools environment at the Undercloud.
  2. Create the params.yaml for TripleO, pointing to the Sensu and Fluentd agents on the opstools hosts.
  3. Deploy with the "openstack overcloud deploy …" command.
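Step 2's params.yaml could look roughly like the sketch below; the parameter names follow the TripleO monitoring and logging environment files, but treat them as assumptions and check the templates you actually deploy with:

```yaml
parameter_defaults:
  # Availability monitoring: point the Overcloud Sensu clients at the
  # RabbitMQ instance on the opstools host (address is a placeholder)
  MonitoringRabbitHost: 192.0.2.10
  MonitoringRabbitPassword: sensu_secret
  # Centralized logging: ship logs to the Fluentd collector
  LoggingServers:
    - host: 192.0.2.10
      port: 24224
```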
View article »

EasyFix: Getting started contributing to RDO

It can be intimidating trying to get involved in an open source project. Particularly one as huge and complicated as OpenStack. But we want you to join us on the RDO project, so we're trying to make it as easy as possible to get started.

To that end, we've started the EasyFix initiative. EasyFix is a collection of "easy" tickets that you should be able to get started on without having a deep knowledge of the RDO project. And in the process, you'll learn what you need to know to move on to more complex things.

These tickets cover everything from documentation, to packaging, to helping with events, to writing tools to make things easier. And each ticket comes with a mentor - someone who has volunteered to help you get through that first commit.

There's also a general mentors list, categorized by area of expertise, if you just have a question and you're not sure who to ping on IRC.

We've been running the EasyFix program for about 3 weeks now, and in that time, four new contributors have started on the RDO project.

We're very pleased to welcome these new contributors, and hope they'll be around for a long time to come.

Christopher "snecklifter" Brown - I'm Christopher Brown, an HPC Cloud Engineer based in Sheffield in the UK. I've worked on OpenStack since June 2015. I used to work on packaging for the Fedora project, so I am transferring and updating those skills to help out with RDO when I can.

Anthony Chow - I am a software developer for legacy networking equipment who wants to become a Developer Advocate. I have been venturing into other technologies such as cloud, containers, and configuration management tools. I am passionate about learning and sharing technology-related topics.

Treva Williams - T. Nichole Williams is an RHCSA 7 certified Linux and OpenStack engineer, a content author for LinuxAcademy.com, and an active technical contributor to the OpenStack Magnum project. She is actively involved in several OpenStack, OpenShift, RDO, and Ceph communities and groups. When not OpenStacking or Cephing, she enjoys doggos, candy, cartoons, and playing "So You Think You're a Marine Biologist" on Google.

Ricardo Arguello - I am an Open Source enthusiast who tries to collaborate as much as I can. I have helped in the past with patches for WildFly, and as a Fedora packager. The RDO project is very complex and intimidating, but collaborators are welcome and there are easy-to-fix bugs for newbies to make their first contributions! That makes RDO a great community if you are interested in helping the project while learning about OpenStack internals in the process.

If you want to participate in the RDO project, we encourage you to find something on the EasyFix list and get started. And please consider attending EasyFix office hours, Tuesdays at 13:30 UTC.

View article »

What's new in ZuulV3

Zuul is a program used to gate a project's source code repository so that changes are only merged if they pass integration tests. This article presents some of the new features in the next version, ZuulV3.

Distributed configuration

The configuration is distributed across projects' repositories. For example, here is what the new Zuul main.yaml configuration will look like:

- tenant:
    name: downstream
    source:
      gerrit:
        config-projects:
          - config
        untrusted-projects:
          - restfuzz
      openstack.org:
        untrusted-projects:
          - openstack-infra/zuul-jobs:
              include: job
              shadow: config

This configuration describes a downstream tenant with two sources: gerrit is a local Gerrit instance, and openstack.org is the review.openstack.org service. For each source, there are two types of projects:

  • config-projects hold configuration information such as logserver access. Jobs defined in config-projects run with elevated privileges.
  • untrusted-projects are projects being tested or deployed.

The openstack-infra/zuul-jobs project has special settings, discussed below.

Default jobs with openstack-infra/zuul-jobs

The openstack-infra/zuul-jobs repository contains common job definitions and Zuul only imports jobs that are not already defined (shadow) in the local config.

This is great news for Third Party CIs that will easily be able to re-use upstream jobs such as tox-docs or tox-py35 with their convenient post-processing of unittest results.

In-repository configuration

The new distributed configuration enables a more streamlined workflow. Indeed, pipelines and projects are now defined in the project's repository which allows changes to be tested before merging.

Traditionally, a project's CI needed to be configured in two steps: first the jobs were defined, then a test change was rechecked until the job was working. This is no longer needed, because the jobs and configuration are set directly in the repository, and the CI change itself undergoes the CI workflow.

After being registered in the main.yaml file, a project author can submit a .zuul.yaml file (along with any other changes needed to make the test succeed). Here is a minimal .zuul.yaml:

- project:
    name: restfuzz
    check:
      jobs:
        - tox-py35

Zuul will look for a zuul.yaml file or a zuul.d directory, as well as hidden versions prefixed with a '.'. The project can also define its own jobs.
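For example, a project defining its own job in-repo might use something like this (the job name and playbook path are illustrative):

```yaml
# .zuul.yaml: a project-local job, wired into the check pipeline
- job:
    name: restfuzz-unit
    parent: base
    run: playbooks/unit

- project:
    name: restfuzz
    check:
      jobs:
        - restfuzz-unit
```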

Ansible job definition

Jobs are now created in Ansible, which brings many advantages over the Jenkins Jobs Builder format:

  • Multi-node architectures where tasks are easily distributed,
  • an Ansible module ecosystem that simplifies complex tasks, and
  • manual execution of jobs.

Here is an example:

- job:
    name: restfuzz-rdo
    parent: base
    run: playbooks/rdo
    nodes:
      - name: cloud
        label: centos
      - name: fuzzer
        label: fedora

Then the playbook can be written like this:

- hosts: cloud
  tasks:
    - name: "Deploy rdo"
      command: packstack --allinone
      become: yes
      become_user: root

    - name: "Store openstackrc"
      command: cat /root/keystonerc_admin
      register: openstackrc
      become: yes
      become_user: root

- hosts: fuzzer
  tasks:
    - name: "Setup openstackrc"
      copy:
        content: "{{ hostvars['cloud']['openstackrc'].stdout }}"
        dest: "{{ zuul_work_dir }}/.openstackrc"

    - name: "Deploy restfuzz"
      command: python setup.py install
      args:
        chdir: "{{ zuul_work_dir }}"
      become: yes
      become_user: root

    - name: "Run restfuzz"
      command: "restfuzz --target {{ hostvars['cloud']['ansible_eth0']['ipv4']['address'] }}"

The base parent from the config project manages the pre phase to copy the sources to the instances and the post phase to publish the job logs.
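In Zuul v3 job syntax, such a base job can be sketched as below; the playbook paths are placeholders:

```yaml
- job:
    name: base
    pre-run: playbooks/base/pre    # copy the project sources onto the nodes
    post-run: playbooks/base/post  # collect and publish the job logs
```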

Nodepool drivers

This is still a work in progress, but it's worth noting that Nodepool is growing a driver-based design to support non-OpenStack providers. The primary goal is to support static node assignments, and the interface can be used to implement new providers. A driver needs to implement a Provider class to manage access to a new API, and a RequestHandler to manage resource creation.
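In outline, the two pieces fit together as in this Python sketch (class and method names are illustrative, not Nodepool's actual internal API):

```python
class Provider:
    """Manages access to one backing API: an OpenStack cloud, a
    container host, a pool of static machines, and so on."""

    def start(self):
        raise NotImplementedError

    def list_nodes(self):
        raise NotImplementedError


class RequestHandler:
    """Fulfils a single node request by creating resources
    through its Provider."""

    def __init__(self, provider):
        self.provider = provider

    def run(self, request):
        raise NotImplementedError


class StaticProvider(Provider):
    """A toy driver for the static-node use case mentioned above."""

    def __init__(self, nodes):
        self._nodes = list(nodes)

    def start(self):
        pass  # nothing to connect to

    def list_nodes(self):
        return self._nodes
```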

As a proof of concept, I wrote an OpenContainer driver that can spawn thin instances using runC:

providers:
  - name: container-host-01
    driver: oci
    hypervisor: fedora.example.com
    pools:
      - name: main
        max-servers: 42
        labels:
          - name: fedora-26-oci
            path: /
          - name: centos-6-oci
            path: /srv/centos6
          - name: centos-7-oci
            path: /srv/centos7
          - name: rhel-7.4-oci
            path: /srv/rhel7.4

This is good news for operators and users who don't have access to an OpenStack cloud, since Zuul/Nodepool may be able to use new providers, such as OpenShift.

In conclusion, ZuulV3 brings a lot of new cool features to the table, and this article only covers a few of them. Check the documentation for more information and stay tuned for the upcoming release.

View article »

rdopkg-0.44 ChangeBlog

I'm happy to announce that version 0.44.2 of rdopkg, the RPM packaging automation tool, has been released.

While a changelog generated from git commits is available in the original 0.44 release commit message, I think it's also worth providing a human-readable summary of the work done by the rdopkg community for this release. I'm not sure about the format yet, so I'll start with a blog post about the changes - a ChangeBlog ;)

41 commits from 7 contributors were merged over the course of 4 months since the last release, with an average time to land of 6 days. More stats

For more information about each change, follow the link to inspect the related commit on GitHub.

Software Factory migration

Migrate to softwarefactory-project.io

Versioning

Adopt pbr for version and setup.py management

Include minor version 0.44 -> 0.44.0 as pbr dictates

Python 3 compatibility

Add some Python 3 compatibility fixes

More python 3 compatibility fixes

Testing

Add BDD feature tests using python-behave

  • Unit tests sucked for testing high-level behavior, so I tried an alternative. I'm quite pleased with python-behave; see one of the first new-version scenarios, written in Gherkin:

    Scenario: rdopkg new-version with upstream patches
        Given a distgit at Version 0.1 and Release 0.1
        Given a patches branch with 5 patches
        Given a new version 1.0.0 with 2 patches from patches branch
        When I run rdopkg new-version -lntU 1.0.0
        Then .spec file tag Version is 1.0.0
        Then .spec file tag Release is 1%{?dist}
        Then .spec file doesn't contain patches_base
        Then .spec file has 3 patches defined
        Then .spec file contains new changelog entry with 1 lines
        Then new commit was created
    

    It also looks reasonable on the Python side.

Avoid test failure due to git hooks

tests: include pep8 in test-requirements.txt

tests: enable nice py.test diffs for common test code

New Features

pkgenv: display color coded hashes for branches

  • You can now easily tell the state of branches just by looking at color:

distgit: new -H/--commit-header-file option

patch: new -B/--no-bump option to only sync patches

Add support for buildsys-tags in info-tags-diff

Add options to specify user and mail in changelog entry

allow patches remote and branch to be set in git config

new-version: handle RHCEPH and RHSCON products

guess: return RH osdist for eng- dist-git branches

Improvements

distgit: Use NVR for commit title for multiple changelog lines

Improve %changelog handling

Improve patches_ignore detection

Avoid prompt on non interactive console

Update yum references to dnf

Switch to pycodestyle (pep8 rename)

Fixes

Use absolute path for repo_path

  • This caused trouble when using rdopkg info -l.

Use always parse_info_file in get_info

Fix output of info-tags-diff for new packages

Refactoring

refactor: merge legacy rdopkg.utils.exception

  • There is only one place for exceptions now \o/

core: refactor unreasonable default atomic=False

make git.config_get behave more like dict.get

specfile: fix improper naming: get_nvr, get_vr

fixed linting
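The git.config_get change gives it dict.get-style semantics: return a default instead of raising when the key is missing. An illustrative sketch of that behavior (not rdopkg's actual code):

```python
import subprocess

def config_get(key, default=None):
    """Return a git config value, or `default` when the key is unset,
    mirroring dict.get() rather than raising on a missing key."""
    try:
        result = subprocess.run(["git", "config", "--get", key],
                                capture_output=True, text=True)
    except FileNotFoundError:  # git itself is not available
        return default
    if result.returncode != 0:  # key not set anywhere
        return default
    return result.stdout.strip()
```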

Documentation

document new-version's --bug argument

Update doc to reflect output change in info-tags-diff

Happy rdopkging!

View article »

Upcoming events

There are a number of upcoming events where RDO enthusiasts will be present. Mark your calendar!

Join us for Test Day!

Milestone 3 of the Pike cycle was released last week, and so it's time to test the RDO packages. Join us on Thursday and Friday of next week (August 10th and 11th) for the Pike M3 test day. We'll be on the #RDO channel on the Freenode IRC network to test, help, answer questions, and propose solutions.

Meetups

We encourage you to attend your local OpenStack meetup. There are hundreds of them, all over the world, every day. We post a list of upcoming meetups to the RDO mailing list each Monday so that you can mark your calendar. Or you can search meetup.com for events near you.

If you're going to be speaking at an OpenStack meetup group, and you'd like to have some RDO swag to take along with you, please contact me - rbowen@redhat.com - or Rain - rain@redhat.com - with your request, at least two weeks ahead of time.

And if you're a meetup organizer who is looking for speakers, we can sometimes help you with that, too.

OpenStack Days

The following OpenStack days are coming up, and each of them will have RDO enthusiasts in attendance:

If you're speaking at any of these events, please get in touch with me, so that I can help promote your presence there.

Other Events

The week of September 11 - 15th, the OpenStack PTG will be held in Denver, Colorado. At this event, project teams will meet to determine what features will be worked on for the upcoming Queens release. Rich Bowen will be there conducting video interviews, like last time, where OpenStack developers will be talking about what they worked on in Pike, and what to expect for Queens. If you'll be there, watch the announcements from the OpenStack foundation for how to sign up for an interview slot.

On October 20th, we'll be joining up with the CentOS community at the CentOS Dojo at CERN. Details of that event may be found on the CERN website. CERN is located just north of Geneva. The event is expected to have a number of RDO developers in attendance, as CERN has one of the largest OpenStack deployments in the world, running on RDO.

Other RDO events, including the many OpenStack meetups around the world, are always listed on the RDO events page. If you have an RDO-related event, please feel free to add it by submitting a pull request on Github.

View article »

Recent blog posts from the community

Here's some of the great blogs from the RDO community which you may have missed in recent weeks:

Using NFS for OpenStack (glance,nova) with selinux by Fabian Arrotin

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed some small OpenStack setups in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on OpenNebula running on CentOS 6, and also used Gluster as the backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: Gluster was deprecated and has now even been removed from Cinder. Sad, as one advantage of gluster is that you could (you had to!) use libgfapi so that the qemu-kvm process could talk directly to gluster through libgfapi, rather than accessing VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).

Read more at https://arrfab.net/posts/2017/Jul/28/using-nfs-for-openstack-glancenova-with-selinux/

Nested quota models by Tim Bell

At the Boston Forum, there were many interesting discussions on models which could be used for nested quota management (https://etherpad.openstack.org/p/BOS-forum-quotas).Some of the background for the use has been explained previously in the blog (http://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html), but the subsequent discussions have also led to further review.

Read more at http://openstack-in-production.blogspot.com/2017/07/nested-quota-models.html

Understanding ceph-ansible in TripleO by Giulio Fidente

One of the goals for the TripleO Pike release was to introduce ceph-ansible as an alternative to puppet-ceph for the deployment of Ceph.

Read more at http://giuliofidente.com/2017/07/understanding-ceph-ansible-in-tripleo.html

Tuning for Zero Packet Loss in Red Hat OpenStack Platform – Part 3 by m4r1k

In Part 1 of this series Federico Iezzi, EMEA Cloud Architect with Red Hat covered the architecture and planning requirements to begin the journey into achieving zero packet loss in Red Hat OpenStack Platform 10 for NFV deployments. In Part 2 he went into the details around the specific tuning and parameters required. Now, in Part 3, Federico concludes the series with an example of how all this planning and tuning comes together!

Read more at http://redhatstackblog.redhat.com/2017/07/18/tuning-for-zero-packet-loss-in-red-hat-openstack-platform-part-3/

View article »