You can add ActiveMQ hubs and spokes to large Puppet Enterprise deployments. Building out your ActiveMQ brokers provides efficient load balancing of network connections for relaying MCollective messages through your large PE infrastructure.
For more information about MCollective, refer to the MCollective docs and, more specifically, ActiveMQ clustering.
You can add ActiveMQ hubs and spokes in addition to, or independently of, additional Puppet masters in your PE infrastructure.
See the PE hardware recommendations for guidance on recommended hardware for large environment installations.
Be sure to review these procedures before beginning, as performing these steps out of order can cause problems for your configuration.
In addition, note the following about this guide:
In this example, we’ll assume we’re building a datacenter in Portland, Oregon, and one in Sydney, Australia. We’ll use the following hostnames:
MASTER.EXAMPLE.COM
CONSOLE.EXAMPLE.COM
AGENT1.EXAMPLE.COM
AGENT2.EXAMPLE.COM
ACTIVEMQ-HUB.EXAMPLE.COM
ACTIVEMQ.SYD.EXAMPLE.COM
ACTIVEMQ.PDX.EXAMPLE.COM
The general procedure proceeds as follows:
1. Install Puppet agents on the new hub, spoke, and agent nodes.
2. Create and classify the ActiveMQ hub group.
3. Add the spoke nodes to the PE ActiveMQ Broker group.
4. Point each agent at the most suitable spoke, using either the PE console or Hiera.
5. Verify the connections in your infrastructure.
Before you can set up your hubs and spokes, you must install Puppet agents on the following nodes:
ACTIVEMQ-HUB.EXAMPLE.COM
ACTIVEMQ.SYD.EXAMPLE.COM
ACTIVEMQ.PDX.EXAMPLE.COM
AGENT1.EXAMPLE.COM
AGENT2.EXAMPLE.COM
To install the agents, SSH into each machine and run the following command:
curl -k https://<MASTER.EXAMPLE.COM>:8140/packages/current/install.bash | sudo bash -s agent:ca_server=<MASTER.EXAMPLE.COM>
Then use the PE console to sign each node’s certificate request.
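Alternatively, you can sign the requests from the command line on the Puppet master. This is a sketch, assuming the puppet cert CA commands shipped with this PE series; review the pending list before signing:
puppet cert list
puppet cert sign ACTIVEMQ-HUB.EXAMPLE.COM
Repeat the sign command for each remaining node.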
Now that you’ve installed agents on your hub and broker nodes, you’re ready to create the ActiveMQ hub group.
In the PE console, create a new node group, and specify the following options:
Option | Value |
---|---|
Parent name | PE Infrastructure |
Group name | PE ActiveMQ Hub |
Environment | Select environment agents are in |
Environment group | Do not select this option |
After you create the group, pin ACTIVEMQ-HUB.EXAMPLE.COM to it.
On the Classes tab, add the puppet_enterprise::profile::amq::hub class.
In the puppet_enterprise::profile::amq::hub class, specify this parameter:

Parameter | Value |
---|---|
network_connector_spoke_collect_tag | pe-amq-network-connectors-for-ACTIVEMQ-HUB.EXAMPLE.COM |

On ACTIVEMQ-HUB.EXAMPLE.COM, run Puppet.
After you create the hub group, you add your spoke nodes (ACTIVEMQ.SYD.EXAMPLE.COM and ACTIVEMQ.PDX.EXAMPLE.COM) to the PE ActiveMQ Broker group, which is a preconfigured node group in PE.
To add the spokes to the PE ActiveMQ Broker group:
Pin ACTIVEMQ.SYD.EXAMPLE.COM and ACTIVEMQ.PDX.EXAMPLE.COM to the group.
In the puppet_enterprise::profile::amq::broker class, specify this parameter:

Parameter | Value |
---|---|
activemq_hubname | ["ACTIVEMQ-HUB.EXAMPLE.COM"] |

Note: The hub FQDN must be entered as an array. Additional hubs can be added as needed.
Run Puppet on ACTIVEMQ.SYD.EXAMPLE.COM and ACTIVEMQ.PDX.EXAMPLE.COM.
After you run Puppet on all spoke nodes, run Puppet on each hub node (in this case, ACTIVEMQ-HUB.EXAMPLE.COM).
Note: The Puppet master (e.g., MASTER.EXAMPLE.COM) is, by default, already an MCollective broker. If needed, you can unpin it from the PE ActiveMQ Broker group.
By default, managed nodes use the Puppet master (the master of masters) as their ActiveMQ broker. In a hub and spoke configuration, all nodes other than Puppet infrastructure nodes should use the most suitable spoke as their broker, usually one that shares their geographic location or network segment. In some circumstances, the spokes may also sit behind a load balancer.
Each node in your infrastructure needs to connect to the most suitable spoke, and you can create these connections easily with custom facts.
You’ll need to create a custom fact that identifies the physical attributes of the groups of MCollective servers in Portland and Sydney. You’ll later use this custom fact (for example, data_center) to classify spokes in the console or to bind agents to spokes with Hiera.
In this example, we’ll create a custom fact to associate AGENT1.EXAMPLE.COM with ACTIVEMQ.SYD.EXAMPLE.COM.
Tip: If needed, refer to the Facter 2.4 documentation for information about creating and deploying custom facts.
Depending on your operating system, choose one of the following to create the custom fact:
a. If AGENT1.EXAMPLE.COM is a *nix machine, run:
puppet apply -e 'file { ["/etc/puppetlabs", "/etc/puppetlabs/facter", "/etc/puppetlabs/facter/facts.d"]: ensure => directory }'
puppet apply -e 'file {"/etc/puppetlabs/facter/facts.d/data_center.txt": ensure => file, content => "data_center=syd"}'
b. If AGENT1.EXAMPLE.COM is a Windows machine (Windows Vista, 7, 8, 2008, 2012), run:
puppet apply -e "file { ['C:/ProgramData/PuppetLabs', 'C:/ProgramData/PuppetLabs/facter', 'C:/ProgramData/PuppetLabs/facter/facts.d']: ensure => directory }"
puppet apply -e "file {'C:/ProgramData/PuppetLabs/facter/facts.d/data_center.txt': ensure => file, content => 'data_center=syd'}"
Proceed to one of the following tasks to assign agents to your spokes:
Use the PE console to create new node groups for each spoke or groups of spokes in Portland and Sydney. You can use your custom fact as the classifying rule for each node group.
These groups should inherit from the PE MCollective group and include the puppet_enterprise::profile::mcollective::agent class, with the activemq_brokers parameter set to the name of the desired spoke(s).
Note: The node must still belong to the PE MCollective node group.
In the PE console, create a new node group, and specify the following options:
Option | Value |
---|---|
Parent name | PE MCollective |
Group name | Sydney_datacenter |
Environment | Select environment agents are in |
Environment group | Do not select this option |
On the Rules tab, create a rule to add agents to this group:
Option | Value |
---|---|
Fact | data_center |
Operator | = |
Value | syd |
On the Classes tab, add puppet_enterprise::profile::mcollective::agent, and click Add class.
From the Parameter drop-down list, select activemq_brokers.
In the Value field, add the names of the desired spokes (e.g., ["ACTIVEMQ.SYD.EXAMPLE.COM"]).
Note: The spokes’ FQDNs must be entered as an array.
If you do not want to use the PE console to create new node groups for each spoke or group of spokes in Portland and Sydney, you can use Hiera with automatic data binding instead.
In this case, you’ll need to remove the mcollective_middleware_hosts parameter from the puppet_enterprise class in the PE Infrastructure group, and place this parameter within Hiera at the appropriate level to distinguish the different spokes.
On the Puppet master, edit your Hiera config file (/etc/puppetlabs/puppet/hiera.yaml) so that it contains the data_center fact as part of the hierarchy.
Your Hiera config file should resemble the following:
#hiera.yaml
---
:backends:
- eyaml
- yaml
:hierarchy:
- "%{clientcert}"
- "%{data_center}"
- global
:yaml:
:datadir: "/etc/puppetlabs/code/environments/%{environment}/hieradata"
On the Puppet master, add Hiera data files that map the desired ActiveMQ spokes to the data_center custom fact.
The following example commands are for the syd and pdx datacenters; they assume you’re using the production environment.
Important: The name of the file you create in this step must match the custom fact value.
a. Navigate to /etc/puppetlabs/code/environments/production/hieradata/, and create a file called syd.yaml.
b. Edit syd.yaml so that it contains the following content:
---
puppet_enterprise::profile::mcollective::agent::activemq_brokers:
- 'ACTIVEMQ.SYD.EXAMPLE.COM'
c. Still in the hieradata directory, create a file called pdx.yaml.
d. Edit pdx.yaml so that it contains the following content:
---
puppet_enterprise::profile::mcollective::agent::activemq_brokers:
- 'ACTIVEMQ.PDX.EXAMPLE.COM'
Verify that Hiera and the custom fact are configured properly.
Verify the custom fact on the end node: on AGENT1.EXAMPLE.COM, run facter data_center.
This should return the expected value of syd for this example.
Verify that Hiera picks up the expected value for ActiveMQ spoke given the appropriate parameters. On the Puppet master, run the following:
hiera puppet_enterprise::profile::mcollective::agent::activemq_brokers data_center=syd environment=production
This should return the expected value of ["ACTIVEMQ.SYD.EXAMPLE.COM"] for this example.
On the Puppet master, reload the pe-puppetserver service.
sudo service pe-puppetserver reload
Because Puppet Server doesn’t actively monitor the hiera.yaml file for changes, you must reload the service whenever you edit the file.
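On systemd platforms such as RHEL 7 and derivatives, the equivalent command is typically:
sudo systemctl reload pe-puppetserver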
Run Puppet on the ActiveMQ hub and spokes (including the Puppet master) and on any PE agents, or wait for a scheduled run.
Continue on to verify the connections in your infrastructure.
The final thing you need to do is verify that all connections have been correctly established.
To verify the MCollective group is correctly set up, log in to MASTER.EXAMPLE.COM, run su peadmin, and then run mco ping.
You should see the ActiveMQ hub and spokes (including the Puppet master) and any PE agents listed.
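The output resembles the following; the hostnames and response times shown here are illustrative only:

master.example.com                       time=42.17 ms
console.example.com                      time=45.80 ms
activemq-hub.example.com                 time=51.36 ms
activemq.pdx.example.com                 time=60.21 ms
activemq.syd.example.com                 time=84.95 ms
agent1.example.com                       time=88.02 ms
agent2.example.com                       time=90.77 ms

---- ping statistics ----
7 replies max: 90.77 min: 42.17 avg: 66.18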
To verify the ActiveMQ hub’s connections are correctly established, log in to ACTIVEMQ-HUB.EXAMPLE.COM and run the following command:
For RHEL 7 and derivatives: ss -a -n | grep '61616'
For other platforms: netstat -an | grep '61616'
You should see that the ActiveMQ hub has connections set up between the ActiveMQ broker nodes.
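For example, the ss output on the hub might resemble the following, with one listening socket on port 61616 and an established connection from each spoke (all addresses are illustrative):

LISTEN     0      128          *:61616                  *:*
ESTAB      0      0      10.0.10.5:61616          10.20.30.40:49768
ESTAB      0      0      10.0.10.5:61616          10.50.60.70:53211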
Tip: If you need to increase the number of processes the pe-activemq user can open/process, refer to Increasing the ulimit for the pe-activemq User for instructions.