This getting started guide will help you deploy a standard MCollective environment. Jump to the first step, or keep reading for some context.
Note: If you’ve never used MCollective before, start with the Vagrant-based demo toolkit instead. This guide is meant for production-grade deployments, and requires a bit of building before you can run even a simple command.
The Choria project provides an alternative standard deployment with simplified setup tools.
See the MCollective components overview for an introduction to the parts that make up an MCollective deployment.
MCollective is very pluggable, but the developers and community have settled on some common conventions for the most important configuration options. These defaults work very well for most new users, and create a good foundation for future expansion and reconfiguration.
In summary, these are the architecture and configuration conventions that make up the standard MCollective deployment:
In brief, this is what MCollective’s security model looks like with these conventions in place:
With the SSL security plugin, each client user has a unique key pair and all servers share a single key pair. Each server node holds a collection of all authorized client public keys.
These measures focus mainly on strict control over who can command your infrastructure and protection of sensitive information in transit. They assume that authorized servers and clients are both sufficiently trusted to view all sensitive information passing through the middleware.
This is suitable for most use cases. If some authorized servers are untrustworthy, there are opportunities for them to send misleading replies and bogus traffic, but they can’t command other nodes.
Later, you may need to expand to a cluster of ActiveMQ servers; at that point, you might also:
If you have already used modular Puppet code to set up a standard deployment, these changes can be incremental instead of a complete overhaul.
You need to do the following to deploy MCollective:
This process isn’t 100% linear, but that’s the general order in which these tasks should be approached.
We don’t currently have drop-in Puppet code for a standard MCollective deployment, so you’ll have to do some building.
Credentials are the biggest area of shared global configuration in MCollective. Get them sorted before doing much else.
A standard deployment uses the following credentials:
| Credential | Used by |
| ---------- | ------- |
| ActiveMQ username/password | Middleware, servers, clients |
| CA certificate | Middleware, servers, clients |
| Signed certificate and private key for ActiveMQ | Middleware |
| Signed certificate and private key for each server | Servers |
| Signed certificate and private key for each user | Clients (both parts), servers (certificate only) |
| Shared server public and private key | Servers (both parts), clients (public key only) |
Make sure you’ve covered each of the following credentials, and keep track of the credentials for use in future steps. This guide assumes you’re using Puppet as your certificate authority. If you aren’t, you’ll need to generate each credential some other way.
Directories: Below, we refer to directories called `$certdir` and `$privatekeydir`; these are defined by Puppet settings of the same names. Their locations may vary by platform, so you can locate them with `sudo puppet agent --configprint certdir,privatekeydir` (on an agent node) or `sudo puppet master --configprint certdir,privatekeydir` (on the CA master).
The standard deployment uses an ActiveMQ username of `mcollective`. Create a strong arbitrary password for this user.
Run `sudo puppet cert generate activemq.example.com` (this name cannot conflict with an existing certificate name). In either case, find the certificate and private key in `$certdir` and `$privatekeydir`.
Run `sudo puppet cert generate mcollective-servers`. (If you use a different name, substitute it for “mcollective-servers” everywhere we mention it below. Note that the name can only use letters, numbers, periods, and hyphens.) Retrieve the certificate and private key from `$certdir` and `$privatekeydir`.
Run `sudo puppet cert generate <NAME>` (letters, numbers, periods, and hyphens only) and retrieve the cert and key from `$certdir/<NAME>.pem` and `$privatekeydir/<NAME>.pem`; delete the CA’s copy of the private key once you’ve retrieved it.
Deployment status: Nothing has happened yet.
As ever, note that you’ll have an easier time later if you perform these steps with Puppet or something like it. We suggest using a template for the activemq.xml file and the `java_ks` resource type for the keystores.
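For example, the keystores could be managed with Puppet resources along these lines; treat this as a sketch, since the file paths, passwords, and certificate name are placeholders rather than values from this guide:

```puppet
# Sketch only: paths, passwords, and the certificate name are examples.
java_ks { 'mcollective:truststore':
  ensure       => latest,
  certificate  => '/etc/activemq/ca.pem',            # CA certificate from Step 1
  target       => '/etc/activemq/truststore.jks',
  password     => 'example-truststore-password',
  trustcacerts => true,
}

java_ks { 'activemq.example.com:keystore':
  ensure      => latest,
  certificate => '/etc/activemq/activemq-cert.pem',  # ActiveMQ certificate
  private_key => '/etc/activemq/activemq-key.pem',   # ActiveMQ private key
  target      => '/etc/activemq/keystore.jks',
  password    => 'example-keystore-password',
}
```

The resulting keystore and truststore paths and passwords are what you’ll reference from the `sslContext` element in activemq.xml.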
Install the `activemq` package. The most recent versions of Debian and Ubuntu have ActiveMQ packages, and you may be able to install `activemq` without enabling any extra repos. For other systems, adapt the instructions from the ActiveMQ documentation, or roll your own packages.
Set credentials for the `mcollective` user. For the MCollective user, use the password from the list of credentials above.
Set the transport connector URI to `stomp+nio+ssl://0.0.0.0:61614?needClientAuth=true&transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2`, since we’ll be using CA-verified TLS. (If you are running a version of ActiveMQ before 5.9.x, set it to `stomp+ssl://0.0.0.0:61614?needClientAuth=true&transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2` instead; the stomp+nio+ssl protocol has had several bugs in earlier releases.)
Configure the `sslContext` element in the activemq.xml file to use the keystores you created. (If you are using ActiveMQ 5.5, make sure you arrange elements alphabetically to work around the XML validation bug.)
For more details about configuring ActiveMQ, see the ActiveMQ config reference for MCollective users. It’s fairly exhaustive, and is mostly for users doing things like networks of brokers and traffic filtering; for a standard deployment, you just need to change the passwords and configure TLS.
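Put together, the TLS-related pieces of activemq.xml might look roughly like this sketch; the keystore file names and passwords are placeholders you should replace with your own:

```xml
<!-- Sketch only: keystore paths and passwords are placeholders. -->
<sslContext>
  <sslContext keyStore="keystore.jks" keyStorePassword="example-password"
              trustStore="truststore.jks" trustStorePassword="example-password"/>
</sslContext>

<transportConnectors>
  <transportConnector name="stomp+nio+ssl"
      uri="stomp+nio+ssl://0.0.0.0:61614?needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
</transportConnectors>
```

Note that the `&` in the URI must be written as `&amp;` inside the XML file.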
Deployment status: The middleware is fully ready, but nothing is using it yet.
Install the `mcollective` package on your server nodes.
Install the `mcollective-client` package on your admin workstations.
Deployment status: MCollective is installed, but isn’t ready to do anything at this point. The `mcollective` service will probably refuse to start, since it lacks a connector and security plugin.
To configure servers, you’ll need to:
As mentioned above in Step 1, servers need the CA, an individual certificate and key, the shared server keypair, and every authorized client certificate.
Create an `/etc/puppetlabs/mcollective/clients` directory and put a copy of every client certificate in it. You will need to maintain this directory centrally, and keep it up to date on every server as you add and delete admin users. (E.g. as a file resource with `ensure => directory, recurse => true`.)
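A sketch of such a resource follows; the module name and source path are assumptions, standing in for wherever you keep the authorized client certificates in your own Puppet code:

```puppet
# Sketch only: the module name and source path are assumptions.
file { '/etc/puppetlabs/mcollective/clients':
  ensure  => directory,
  recurse => true,
  purge   => true,   # also removes certificates for deleted admin users
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => 'puppet:///modules/mcollective/clients',
}
```

Setting `purge => true` is optional, but without it, removing a user’s certificate from the central copy won’t revoke their access on servers.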
Every MCollective server will need to populate the `/etc/puppetlabs/mcollective/facts.yaml` file with a cache of its facts. (You can get by without this file, but doing so will limit your ability to filter requests.)
Make sure you include a resource like the following in the Puppet code you’re using to deploy MCollective:
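One common approach, sketched below, is a file resource with an inline template that dumps top-scope facts while filtering out rapidly changing values (so the file doesn’t churn on every Puppet run); the exact exclusion pattern is an example you may want to adjust:

```puppet
# Sketch: caches facts as YAML-style key/value pairs, excluding volatile ones.
file { '/etc/puppetlabs/mcollective/facts.yaml':
  owner    => 'root',
  group    => 'root',
  mode     => '0400',
  loglevel => debug,  # reduce log noise from frequent content changes
  content  => inline_template('<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime|timestamp|free)/ }.sort.map { |k,v| "#{k}: #{v}" }.join("\n") %>'),
}
```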
The server config file is located at /etc/puppetlabs/mcollective/server.cfg.
See the server configuration reference for complete details about the server’s config file, including its format and available settings.
This config file has many settings that should be identical across the deployment, and several settings that must be unique per server, which is why we suggest managing it with Puppet. If your site uses only a few agent plugins and they don’t require a lot of configuration, you can use a template; otherwise, we recommend managing each setting as a resource.
Be sure to always restart the `mcollective` service after editing the config file. In your Puppet code, you can do this with a notification relationship.
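In Puppet, such a notification relationship can be expressed as in this sketch:

```puppet
# Sketch: restart mcollective whenever its config file changes.
file { '/etc/puppetlabs/mcollective/server.cfg':
  ensure => file,
  # ... content or template managed here ...
  notify => Service['mcollective'],
}

service { 'mcollective':
  ensure => running,
  enable => true,
}
```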
This example template snippet shows the settings you need to use in a standard deployment. Converting it to settings-as-resources would be fairly straightforward.
(Note that it assumes an ssldir of `/etc/puppetlabs/puppet/ssl`, which might differ in your Puppet setup. This template also requires a few variables, which you must define in the surrounding Puppet code.)
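Since the template itself isn’t reproduced here, the following is a rough sketch of the standard-deployment server settings; the ActiveMQ hostname and password placeholders, and the exact credential file paths, are assumptions you should adapt to your deployment:

```
# server.cfg sketch -- adapt paths and placeholder values to your deployment.

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = <ActiveMQ SERVER HOSTNAME>
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = <ActiveMQ PASSWORD>
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = /etc/puppetlabs/puppet/ssl/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = /etc/puppetlabs/puppet/ssl/certs/<THIS NODE'S CERTNAME>.pem
plugin.activemq.pool.1.ssl.key = /etc/puppetlabs/puppet/ssl/private_keys/<THIS NODE'S CERTNAME>.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_server_private = /etc/puppetlabs/mcollective/server_private.pem
plugin.ssl_server_public = /etc/puppetlabs/mcollective/server_public.pem
plugin.ssl_client_cert_dir = /etc/puppetlabs/mcollective/clients

# Facts:
factsource = yaml
plugin.yaml = /etc/puppetlabs/mcollective/facts.yaml
```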
Deployment status: The servers are ready, connected to the middleware, and will accept and process requests from authorized clients. The authorized clients don’t exist yet.
Unlike servers, clients will probably run with per-user configs on admin workstations, and will have to be configured partially by hand. (If you are running any automated clients, you’ll want to deploy those with config management; most of the principles covered below will still apply.)
To configure clients, each new admin user will need to:
Unless the client will be run by root or a system user, we recommend putting the client config file at `~/.mcollective` and supporting files like credentials in `~/.mcollective.d`.
For your first admin user, you can manually generate a certificate (as suggested in Step 1) and add it to the authorized clients directory that you’re syncing to servers with Puppet. However, this does not scale beyond one or two users.
When a new admin user joins your team, you need a documented process that does ALL of the following:
Note: The filename of the public key must be identical on both the client and the servers. The client uses the filename to set the caller ID in its requests, and the servers use the request’s caller ID to choose which public key file to validate it with.
This will have to be at least partially manual, but if you’ve used the Puppet CA to issue certificates, you can pretty easily patch together and document a process using the existing Puppet tools.
Below, we outline a suggested process. It assumes a flat hierarchy of admins where everyone can command all servers, with any additional restrictions being handled by the ActionPolicy plugin (see “Step 6: Deploy Plugins” below) rather than the certificate distribution process.
The new user should run the following commands on their workstation — note that the name can only use letters, numbers, periods, and hyphens:
```
$ mkdir -p ~/.mcollective.d/credentials
$ puppet certificate generate <NAME> --ssldir ~/.mcollective.d/credentials --ca-location remote --ca_server <CA PUPPET MASTER>
```
(Note the use of the `puppet certificate` command, which isn’t the same thing as the `puppet cert` command. This specific invocation will send a certificate signing request to the CA while safeguarding the private key.)
Run `sudo puppet cert sign <NAME>` on the CA puppet master, then copy the certificate from `$certdir/<NAME>.pem` into the directory of authorized client keys that is being synced to the MCollective servers; each server will recognize the new user after its next Puppet run.
The new user should run the following commands on their workstation:
```
$ puppet certificate find <NAME> --ssldir ~/.mcollective.d/credentials --ca-location remote --ca_server <CA PUPPET MASTER>
$ puppet certificate find mcollective-servers --ssldir ~/.mcollective.d/credentials --ca-location remote --ca_server <CA PUPPET MASTER>
$ puppet certificate find ca --ssldir ~/.mcollective.d/credentials --ca-location remote --ca_server <CA PUPPET MASTER>
```
The new user should save the partial client config file as `~/.mcollective` on their workstation, and finish filling it out as described below.
After all these steps, and following a Puppet run on each MCollective server, the new user should be able to issue valid mco commands.
For admin users running commands on a workstation, the client config file is located at `~/.mcollective`. For system users (e.g. for use in automated scripts), it is located at `/etc/puppetlabs/mcollective/client.cfg`.
See the client configuration reference for complete details about the client config file, including its format and available settings.
This config file has many settings that should be identical across the deployment, and several settings that must be unique per user. To save your new users time, we recommend giving them a partial config file with settings like the ActiveMQ hostname/port/password already entered; this way, they only have to fill in the paths to their unique credentials. The settings that must be modified by each user are:
After receiving this partial config file, a new user should fill out the credential paths, substituting `<HOME>` for the fully qualified path to their home directory and `<NAME>` for the name of the certificate they requested. (Note that MCollective cannot expand shorthand paths to the home directory, like `~/.mcollective.d/credentials...`, so you must use fully qualified paths.)
```
# ~/.mcollective
# or
# /etc/puppetlabs/mcollective/client.cfg

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = <ActiveMQ SERVER HOSTNAME>
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = <ActiveMQ PASSWORD>
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = <HOME>/.mcollective.d/credentials/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = <HOME>/.mcollective.d/credentials/certs/<NAME>.pem
plugin.activemq.pool.1.ssl.key = <HOME>/.mcollective.d/credentials/private_keys/<NAME>.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_server_public = <HOME>/.mcollective.d/credentials/certs/mcollective-servers.pem
plugin.ssl_client_private = <HOME>/.mcollective.d/credentials/private_keys/<NAME>.pem
plugin.ssl_client_public = <HOME>/.mcollective.d/credentials/certs/<NAME>.pem

# Interface settings:
default_discovery_method = mc
direct_addressing_threshold = 10
ttl = 60
color = 1
rpclimitmethod = first

# No additional subcollectives:
collectives = mcollective
main_collective = mcollective

# Platform defaults:
# These settings differ based on platform; the default config file created
# by the package should include correct values or omit the setting if the
# default value is fine.
libdir = /opt/puppetlabs/mcollective/plugins
helptemplatedir = /etc/puppetlabs/mcollective

# Logging:
logger_type = console
loglevel = warn
```
Deployment status: MCollective is fully functional. Any configured admin user can run `mco ping` to discover nodes, use the `mco inventory` command to search for more detailed information, and use the `mco rpc` command to trigger actions from installed agents (currently only the `rpcutil` agent). See the mco command-line interface documentation for more detailed information on filtering and addressing commands.
However, it can’t yet do much other than collect inventory info. To perform more useful functions, you must install agent plugins on each server and admin workstation. Additionally, if you want to do per-action authorization for certain dangerous commands, you will need to install and configure the ActionPolicy plugin.
To let MCollective do anything beyond retrieving inventory data, you must deploy various plugins to all of your server and client nodes. You will usually also want to write custom agent plugins to serve business purposes in your infrastructure.
For a long-lived standard deployment, we recommend that you deploy the ActionPolicy authorization plugin to all servers.
By default, the standard deployment allows all authorized clients to execute all actions on all servers. This is reasonable as long as MCollective’s capabilities are limited, but as you hire more admin staff and deploy agent plugins that can cause significant changes to production servers, you may wish to begin limiting who can execute what. ActionPolicy allows you to distribute policy files for specific agents, which will restrict the set of users able to run a given action.
Enable it by installing the plugin and configuring the authorization settings (including `plugin.actionpolicy.allow_unconfigured`) in the server config file.
In the standard deployment, each client’s caller ID is `cert=<NAME>`, where `<NAME>` is the filename of the client’s public key file without the `.pem` extension. This string (including the `cert=`) can be used as the second field of a policy line. (The ActionPolicy documentation uses `uid=` for its examples, which is a caller ID set by the PSK security plugin.)
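For illustration, a policy file for a hypothetical `service` agent might look like the following sketch; the caller names are placeholders, and the file lives wherever your ActionPolicy configuration expects per-agent policies:

```
# service.policy sketch -- fields are tab-separated:
# allow/deny  caller  actions  facts  classes
policy default deny
allow	cert=admin-alice	*	*	*
allow	cert=deploy-bot	status	*	*
```

With a default policy of deny, any caller not matched by an allow line is refused for this agent.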
Agent plugins do all of MCollective’s heavy lifting. All parts of an agent need to be installed on servers, and the DDL file needs to be installed on clients.
For more information on how to install these agents, see “Installing and Packaging MCollective Plugins.”
Deployment status: MCollective can do anything you’ve written or downloaded plugins for, on any number of servers, filtered and grouped by arbitrary metadata.