Module of the Week: puppetlabs/openstack – Configures a fully functional OpenStack environment

Purpose Configures OpenStack
Module puppetlabs/openstack
Puppet Version 2.7.14+, Puppet Enterprise 2.5+
Platforms Ubuntu 12.04 (Precise), Debian (Wheezy)

In April, we announced Puppet’s support for OpenStack, a popular open source project that can be used to build private clouds. At that time, we posted the first OpenStack configuration module on Puppet Forge, which enabled you to add Puppet Cloud Provisioner support for OpenStack.

Today, we’ll dive deeper and show you how to get up and running with a single- or multi-node OpenStack deployment as quickly as possible. This week’s featured module provides a simple way of deploying OpenStack that is based on best practices shaped by the OpenStack community.

OpenStack is composed of several subprojects, including Nova (compute), Glance (image management), and Keystone (identity), that need to be installed and configured together to deliver its full functionality.

Getting OpenStack up and running can be a daunting process. Devstack was written to help developers build functional OpenStack environments, but it is not flexible enough for production deployments and provides no capabilities for ongoing management.

The OpenStack modules provide a simple, reliable, and flexible way to deploy fully functional OpenStack environments. They were written in collaboration with OpenStack users to capture deployment knowledge and best practices. These modules are suitable for spinning up a single node test environment for evaluation, or as a production deployment toolkit.

Installing the module

Complexity Easy
Installation Time 5 minutes

The latest version of this module can easily be installed from the Puppet Forge using the puppet module tool (which is included with Puppet 2.7.14+ and Puppet Enterprise 2.5+):

$ puppet module install puppetlabs/openstack

This command will install the openstack module from the Puppet Forge along with all of its dependencies.

Installing the module from source

Complexity Medium
Installation Time 5 minutes

Users who are interested in contributing should install the modules from GitHub. This requires that rake and git are installed.

First, clone the openstack module into your module path using the git command:

$ git clone git://github.com/puppetlabs/puppetlabs-openstack /openstack

Then cd into the newly created directory and run the modules:clone rake task to clone all of the dependent modules into your module path.

$ cd /openstack; rake modules:clone;

NOTE: You should preview the modules that will be installed to ensure that they will not conflict with modules already installed in your environment.

$ cat /other_repos.yaml

Configuration Interfaces

The OpenStack module allows users to install OpenStack using the following configuration interfaces:

openstack::all

This class can be used to install all of the OpenStack components onto a single node and is intended for users who are interested in trying out OpenStack.

openstack::controller and openstack::compute

These classes are used to configure multi-node deployments of OpenStack.

The openstack::controller class can be used to deploy a central OpenStack management node.

The openstack::compute class is used to deploy compute nodes (which are associated with the underlying hypervisors being managed).

In a multi-node deployment scenario, a user would typically deploy a single openstack::controller with multiple openstack::compute nodes.

Example Usage

An example manifest is provided with the OpenStack module and can be found at:

/openstack/examples/site.pp

To get up and going with an all-in-one installation, use the following node declaration:

node /openstack_all/ {

  class { 'openstack::all':
    public_address            => '<public IP>',         # fill in for your environment
    public_interface          => '<public interface>',
    private_interface         => '<private interface>',
    admin_email               => 'some_admin@some_company',
    admin_password            => 'admin_password',
    keystone_admin_token      => 'keystone_admin_token',
    nova_user_password        => 'nova_user_password',
    glance_user_password      => 'glance_user_password',
    rabbit_password           => 'rabbit_password',
    rabbit_user               => 'rabbit_user',
    libvirt_type              => 'kvm',
    fixed_range               => '10.0.0.0/24',
  }
}

You can use this node block to assign the all-in-one role to a node by simply running puppet apply:

    puppet apply /etc/puppet/modules/openstack/examples/site.pp --certname openstack_all

NOTE: puppet apply runs without a master and assumes that all puppet manifests are on the client that needs to be classified.

The same example manifest also contains an example that can be used to perform a multi-node installation.

NOTE: This configuration interface requires that both nodes have at least two interfaces and that one of those interfaces does not have an IP address assigned to it. Check out the OpenStack docs for more information on nova networking.

The configurations:

node /openstack_controller/ {

  class { 'openstack::controller':
    public_address           => '<public IP>',         # fill in for your environment
    public_interface         => '<public interface>',
    private_interface        => '<private interface>',
    internal_address         => '<internal IP>',
    floating_range           => '192.168.101.64/28',
    fixed_range              => '10.0.0.0/24',
    multi_host               => false,
    network_manager          => 'nova.network.manager.FlatDHCPManager',
    admin_email              => 'admin_email',
    admin_password           => 'admin_password',
    keystone_admin_token     => 'keystone_admin_token',
    glance_user_password     => 'glance_user_password',
    nova_user_password       => 'nova_user_password',
    rabbit_password          => 'rabbit_password',
    rabbit_user              => 'rabbit_user',
  }

}

and

node /openstack_compute/ {

  class { 'openstack::compute':
    private_interface    => 'eth1',
    internal_address     => '<internal IP>',           # fill in for your environment
    libvirt_type         => 'kvm',
    fixed_range          => '10.0.0.0/24',
    network_manager      => 'nova.network.manager.FlatDHCPManager',
    multi_host           => false,
    sql_connection       => 'mysql://nova:nova_db_passwd@<controller IP>/nova',
    rabbit_host          => '<controller IP>',
    glance_api_servers   => '<controller IP>:9292',
    vncproxy_host        => '<controller IP>',
    vnc_enabled          => true,
    manage_volumes       => true,
  }

}

can be used to configure agents as either controller or compute nodes.

These multi-node examples should be deployed using a Puppet Master and require these additional setup steps:

  1. Install the openstack modules on the Puppet Master. This includes the puppetlabs/openstack module along with all of its dependencies.
  2. Configure the master to use /openstack/examples/site.pp as its manifest by adding the following to puppet.conf:

         # puppet.conf
         manifest = /openstack/examples/site.pp

  3. Have the agents contact the master and identify themselves as either a controller or a compute node, using a certname that matches one of the node regexes above (e.g. openstack_controller or openstack_compute):

         puppet agent --server <master hostname> --pluginsync --certname <certname> -t

Configuring the module

Complexity Difficult
Installation Time 15 minutes

Although the Puppet modules for OpenStack provide constrained interfaces for the deployment of OpenStack, they still require some level of OpenStack experience to configure and customize properly.

Common parameters:

verbose - Can be used to increase the logging level of the services.
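
As a minimal sketch of how this parameter is set (assuming it accepts a boolean; the other required parameters from the earlier openstack::all example must still be supplied):

```puppet
# Sketch only: required parameters from the earlier
# openstack::all example are omitted for brevity.
class { 'openstack::all':
  verbose => true,
  # ...remaining required parameters...
}
```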

Parameters shared by openstack::all and openstack::controller:

public_address - IP address of the public interface that is used to proxy VNC traffic.
public_interface - Interface used to route public traffic by the network service (only required on nodes that host the nova network service).
private_interface - Interface used for traffic between VMs. This interface is expected to be active, but not have an IP address assigned to it.
<service>_user_password - Password for each application's service user (e.g. nova_user_password, glance_user_password).
<service>_db_password - Password for each service's database user.
admin_password - Password for the admin user.
keystone_admin_token - Token for the keystone admin.
rabbit_password - Password for the rabbitmq user.
rabbit_user - Name of the rabbitmq user.
libvirt_type - The hypervisor being used (this has only been tested with kvm and qemu).
fixed_range - The IP range used for the VMs' private IP addresses.
floating_range - The floating IP pool that should be created. Public addresses for VMs are allocated out of this pool.
network_manager - The network manager that should be used. Currently, flat DHCP and VLAN have been tested.
network_config - Used to specify network-manager-specific parameters. Accepts a hash.
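
As a sketch of how network_config might be used, manager-specific settings are passed as a hash. The key shown below is illustrative only, not a verified option list; consult the underlying nova module documentation for the keys your chosen network manager actually accepts:

```puppet
# Hypothetical example: passing manager-specific options as a hash.
# 'vlan_start' is shown for illustration; check the nova module docs
# for the options supported by your network manager.
class { 'openstack::controller':
  network_manager => 'nova.network.manager.VlanManager',
  network_config  => { 'vlan_start' => '100' },
  # ...remaining required parameters from the earlier controller example...
}
```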

openstack::controller specific parameters:

internal_address - Address that all of the OpenStack services bind to.
create_networks - Whether Puppet should automatically create the private and public networks. Specifying false assumes that the user will manually create their networks using nova-manage.
num_networks - Number of networks that the fixed range should be split into.
multi_host - Whether the multi-node OpenStack environment should run in multi-host mode. In multi-host mode, the network service runs on each compute node for HA. public_interface becomes a required parameter on the compute nodes when multi_host is set to true.
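
To illustrate the multi-host behavior described above, a sketch of a compute node declaration in multi-host mode follows; the interface names are assumptions for your environment, and the remaining parameters from the earlier compute example must still be supplied:

```puppet
# Sketch: multi-host mode, where each compute node runs its own
# network service. public_interface is required in this mode.
class { 'openstack::compute':
  multi_host        => true,
  public_interface  => 'eth0',  # assumed device name; adjust for your host
  private_interface => 'eth1',
  # ...remaining parameters from the earlier compute example...
}
```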

openstack::compute specific parameters:

manage_volumes - Whether the compute node should also serve as a volume service.
vnc_enabled - Whether VNC should be enabled on the compute host.
multi_host - Whether each compute node should host its own network service.

Conclusion

With the OpenStack Puppet modules, it's easy to get an OpenStack environment up and running quickly.

These modules serve as a great starting point for production OpenStack deployments, but there is still a lot more to be done.

The community is actively working on a number of features:

  • Puppet modules that represent best practices for monitoring OpenStack
  • Puppet modules that represent best practices for providing highly available OpenStack deployments
  • Better Integration with Razor for policy based provisioning
  • Better integration with PuppetDB for service auto-discovery