Puppetlabs-NTP: A Puppet Enterprise Supported Module

Coinciding with the launch of Puppet Enterprise 3.2, Puppet Labs has announced fully supported modules, including the puppetlabs-ntp module, which handles installing, configuring, and running NTP.

Configuring and running NTP to manage and maintain infrastructure is a common use case for Puppet, and our Puppet Enterprise supported module program is designed to better empower and assist Puppet Enterprise customers as they use puppetlabs-ntp and other modules that address a sysadmin's most frequent tasks. We'll continue to add to the supported modules list over time. Puppet Enterprise customers can expect to receive extensive support as they use these modules to manage and maintain their infrastructure, with the assurance that they will always function as expected on all supported platforms. Check out the Puppet Forge for more information on module compatibility with specific platforms.

In this post, I’m going to walk you through some fairly common use cases for the puppetlabs-ntp module, and demonstrate how you can configure and use it out of the box.

For the following examples we have three separate nodes, all running Puppet Enterprise. The first node is an all-in-one master with the hostname master.puppetlabs.vm. Our second node is an agent with the hostname agent.puppetlabs.vm, and our final node is an agent with the hostname time.puppetlabs.vm. All of the nodes mentioned are running CentOS 6.4.

First we need to get the module installed. This can be accomplished by using the Puppet Module Tool, included in Puppet Enterprise, to install the module on our puppet master.

[root@master ~]# puppet module install puppetlabs-ntp
Notice: Preparing to install into /etc/puppetlabs/puppet/modules ...
Notice: Downloading from https://forge.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└── puppetlabs-ntp (v3.0.3)

Basic Setup

In the following example, I will demonstrate how simple it is to get all of your nodes synced up in a flash with any given time server. We will configure agent.puppetlabs.vm to sync to the time server pool.ntp.org and configure the node to fail over to its local time in case of connectivity issues.

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => ['pool.ntp.org'],
    udlc    => true,
  }
}

As you can see in the example above, we declared the ntp class and set the servers parameter to pool.ntp.org. We also set the udlc parameter to true, allowing the node to fall back to its own local clock if it is unable to connect to the NTP server specified.

It is likely you will want to use more than one time server, in case the first time server you specified cannot be reached. This is accomplished by appending another entry to the servers parameter array. In this case I will add the NTP server utcnist.colorado.edu. If pool.ntp.org fails, the node's clock will then sync with utcnist.colorado.edu.

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => ['pool.ntp.org','utcnist.colorado.edu'],
    udlc    => true,
  }
}

By default, the module ensures that the NTP package is installed, manages the configuration file, and keeps the service running. Out of the box, the module ships with a default ntp.conf in the form of a template written in the ERB templating language. Without going into too much detail about the template itself, we can glance at what the node definition above builds in the ntp.conf configuration file on the Puppet agent.

From /etc/ntp.conf on agent.puppetlabs.vm

# ntp.conf: Managed by puppet.
#
# Keep ntpd from panicking in the event of a large clock skew
# when a VM guest is suspended and resumed.
tinker panic 0

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1


server pool.ntp.org
server utcnist.colorado.edu

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available. 
server	127.127.1.0 
fudge	127.127.1.0 stratum 10
restrict 127.127.1.0

# Driftfile.
driftfile /var/lib/ntp/drift

As you can see, the configuration file is now managed by Puppet. By default, the module disables panicking on large clock skews (tinker panic 0), restricts synchronization to the local machine, and sets the default location for the driftfile. We can also see our array of NTP servers, pool.ntp.org and utcnist.colorado.edu, along with the undisciplined local clock configured as a backup in case neither server can be reached.

If you are interested in the default template that the module ships with, check out /etc/puppetlabs/puppet/modules/ntp/templates/ntp.conf.erb. If we want to use a custom template, we will need to supply the config_template parameter.
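For instance, a node definition using a custom template might look like the following sketch. The module name site_ntp and the template path are hypothetical; you would need to ship your own ERB template at that location for this to resolve.

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers         => ['pool.ntp.org','utcnist.colorado.edu'],
    # Hypothetical module and template path: resolves to
    # site_ntp/templates/ntp.conf.erb in your modulepath.
    config_template => 'site_ntp/ntp.conf.erb',
  }
}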

NTP Expanded

Sometimes you may run into a case where you need to configure your own NTP server. Because the NTP server and client ship as a single package, the server is always installed by default, but it is restricted to connections from localhost. You can change the restrictions by making a few simple modifications to the definition from the previous example.

In this example, we will be creating our own NTP server using the node time.puppetlabs.vm. We will need to modify the permissions to accept requests from other servers within our infrastructure. We will still be supplying the servers we want to sync our main NTP server to, while our NTP clients within our infrastructure will connect to our main NTP server for syncing purposes.

Let's assume you already have DNS properly configured, or have added the time server to /etc/hosts on the NTP client machines, and that all the necessary firewall rules are in place to allow traffic on UDP port 123.
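If you also manage firewall rules with Puppet, an allow rule for inbound NTP traffic might look like the following sketch. This assumes the puppetlabs-firewall module is installed; the rule name is arbitrary, and parameter names can vary between versions of that module.

firewall { '100 allow inbound NTP':
  # NTP uses UDP port 123.
  proto  => 'udp',
  dport  => 123,
  action => 'accept',
}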

Our main NTP server:

node 'time.puppetlabs.vm' {
  class { '::ntp':
    servers  => ['pool.ntp.org','utcnist.colorado.edu'],
    restrict => ['10.20.1.0 mask 255.255.255.0 nomodify notrap'],
    udlc     => true,
  }
}

Our NTP client machines:

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => ['time.puppetlabs.vm'],
  }
}

In the example above, we configured our clients to connect with the main NTP server, our main NTP server to connect to the NTP server(s) we specified, and supplied access for all connections within the proper subnet.

Please keep in mind that when you use the restrict parameter, you will want to add all the necessary restriction lines to the restrict array yourself, to ensure that NTP sources are unable to query or modify the NTP service on the system.
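For example, to keep the default restrictions from the generated ntp.conf shown earlier while also opening access to the 10.20.1.0/24 subnet, the full restrict array might look like this sketch:

node 'time.puppetlabs.vm' {
  class { '::ntp':
    servers  => ['pool.ntp.org','utcnist.colorado.edu'],
    restrict => [
      # Defaults from the stock template: sources may provide time,
      # but may not query or modify the service.
      'default kod nomodify notrap nopeer noquery',
      '-6 default kod nomodify notrap nopeer noquery',
      '127.0.0.1',
      '-6 ::1',
      # Our local subnet: clients may sync and query, but not modify.
      '10.20.1.0 mask 255.255.255.0 nomodify notrap',
    ],
    udlc     => true,
  }
}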

This has been a basic introduction and demonstration of how you can use and configure the puppetlabs-ntp module. For more information and parameters, check out the documentation included with the module.

Jay Wallace is a support engineer at Puppet Labs.
