Published on 24 March 2014 by Jay Wallace

Coinciding with the launch of Puppet Enterprise 3.2, Puppet Labs has announced fully supported modules, including the puppetlabs-ntp module, which handles configuring, installing and running NTP.

Configuring and running NTP to manage and maintain infrastructure is a common use case for Puppet, and our Puppet Enterprise supported module program is designed to better empower and assist Puppet Enterprise customers as they use puppetlabs-ntp and other modules that address a sysadmin's most frequent tasks. We'll continue to add to the supported modules list over time. Puppet Enterprise customers can expect to receive extensive support as they use these modules to manage and maintain their infrastructure, with the assurance that they will always function as expected on all supported platforms. Check out the Puppet Forge for more information on module compatibility with specific platforms.

In this post, I’m going to walk you through some fairly common use cases for the puppetlabs-ntp module, and demonstrate how you can configure and use it out of the box.

For the following examples we have three separate nodes, all of which are running Puppet Enterprise. The first node is running as an all-in-one master with the hostname master.puppetlabs.vm. Our second node is running as an agent with the hostname agent.puppetlabs.vm, and our final node is running as agent with the hostname time.puppetlabs.vm. All the nodes mentioned are running CentOS 6.4.

First we need to get the module installed. This can be accomplished by using the Puppet Module Tool, included in Puppet Enterprise, to install the module on our puppet master.

[root@master ~]# puppet module install puppetlabs-ntp
Notice: Preparing to install into /etc/puppetlabs/puppet/modules ...
Notice: Downloading from ...
Notice: Installing -- do not interrupt ...
└── puppetlabs-ntp (v3.0.3)

Basic Setup

In the following example, I will demonstrate how simple it is to get all of your nodes synced up in a flash with any given time server. We will configure agent.puppetlabs.vm to sync to the time server and configure the node to fail over to its local time in case of connectivity issues.

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => [''],
    udlc    => true,
  }
}

As you can see in the example above, we declared the default ntp class and set the servers parameter to our chosen time server. We also set the udlc parameter to true, which allows the node to fall back to its own local clock if it is unable to connect to the NTP server specified.

It is likely you will want to use more than one time server, in case you are unable to resolve the first time server you specified. This is accomplished by appending another entry to the servers parameter array. In this case I will add a second NTP server; if the first server fails, the node's clock will then sync with the second.

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => ['', ''],
    udlc    => true,
  }
}

By default, the module ensures both that the package is installed and that the service is started before managing the NTP configuration. Out of the box, the module ships with a default ntp.conf in the form of a template written in the erb templating language. Without going into too much detail about the actual template, we can glance at what the following node definition builds within the ntp.conf configuration file located on the Puppet agent.
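To get a feel for how an erb template turns the servers array into configuration lines, here is a minimal sketch in plain Ruby. The template fragment and server names below are illustrative only, not the module's actual ntp.conf.erb:

```ruby
require 'erb'

# Illustrative server list; the module hands its 'servers'
# parameter to the template in a similar fashion.
servers = ['0.pool.ntp.org', '1.pool.ntp.org']

# A tiny fragment in the spirit of the module's ntp.conf.erb.
template = <<~TEMPLATE
  # ntp.conf: Managed by puppet.
  <% servers.each do |server| -%>
  server <%= server %>
  <% end -%>
TEMPLATE

# trim_mode '-' honors the '-%>' markers, which suppress the
# newline left behind by each ERB tag.
conf = ERB.new(template, trim_mode: '-').result(binding)
puts conf
```

Each entry in the array becomes its own `server` line, which is exactly the shape you will see in the generated /etc/ntp.conf below.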

From /etc/ntp.conf on agent.puppetlabs.vm

# ntp.conf: Managed by puppet.
# Keep ntpd from panicking in the event of a large clock skew
# when a VM guest is suspended and resumed.
tinker panic 0

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict -6 ::1


# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available. 
fudge stratum 10

# Driftfile.
driftfile /var/lib/ntp/drift

As you can see, the configuration file is being managed by Puppet. By default, the module disables ntpd's panic threshold (tinker panic 0), restricts synchronization to the local machine, and sets the default location for the driftfile. We can also see where our array of NTP servers has been added, as well as the fallback to the local clock in case of connectivity issues, or if we are unable to resolve either of the servers.

If you are interested in the default template that the module ships with, check out /etc/puppetlabs/puppet/modules/ntp/templates/ntp.conf.erb. If we want to use a custom template, we will need to supply the config_template parameter.
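As a sketch of what supplying a custom template looks like, the node definition below points config_template at a template shipped in a module of your own. The site_ntp module name and template path here are hypothetical:

```puppet
node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers         => ['time.puppetlabs.vm'],
    # Hypothetical module of your own, resolving to
    # site_ntp/templates/ntp.conf.erb on the master.
    config_template => 'site_ntp/ntp.conf.erb',
  }
}
```

The path is given in the usual Puppet template form, modulename/filename.erb, relative to that module's templates directory.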

NTP Expanded

Sometimes you may run into a case where you need to configure your own NTP server. Given that the NTP server and client are packaged into one, the server is always installed by default, but is set to restrict connections to localhost. You can change the restrictions by making a few simple modifications to the definition from the previous example.

In this example, we will be creating our own NTP server using the node time.puppetlabs.vm. We will need to modify the permissions to accept requests from other servers within our infrastructure. We will still be supplying the servers we want to sync our main NTP server to, while our NTP clients within our infrastructure will connect to our main NTP server for syncing purposes.

Let's assume you already have DNS properly configured, or the time server added to /etc/hosts on the NTP client machines, and all the necessary firewall rules are in place to allow traffic on port 123.
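If DNS is not yet in place, Puppet's built-in host resource can manage the /etc/hosts entry on each client for you. The IP address below is a placeholder for your environment, not one from this example setup:

```puppet
# Placeholder address; substitute the real IP of time.puppetlabs.vm.
host { 'time.puppetlabs.vm':
  ensure => present,
  ip     => '192.168.1.10',
}
```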

Our main NTP server:

node 'time.puppetlabs.vm' {
  class { '::ntp':
    servers  => ['', ''],
    restrict => [' mask nomodify notrap'],
    udlc     => true,
  }
}

Our NTP client machines:

node 'agent.puppetlabs.vm' {
  class { '::ntp':
    servers => ['time.puppetlabs.vm'],
  }
}

In the example above, we configured our clients to connect with the main NTP server, our main NTP server to connect to the NTP server(s) we specified, and supplied access for all connections within the proper subnet.

Please keep in mind that when using the restrict parameter, we will want to add all the necessary restriction flags to the restrict array, to ensure the NTP sources are unable to query or modify the NTP service on the system.
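For example, a restrict array that keeps the usual protections while admitting one subnet might look like the sketch below. The subnet and mask are placeholders for your own network; the flags themselves are standard ntp.conf restrict options:

```puppet
class { '::ntp':
  restrict => [
    # Deny everything by default; kod rate-limits abusive clients,
    # nomodify/notrap stop remote reconfiguration and traps.
    'default kod nomodify notrap nopeer noquery',
    '-6 default kod nomodify notrap nopeer noquery',
    # Always trust the local host.
    '127.0.0.1',
    '-6 ::1',
    # Placeholder subnet: let clients sync, but not modify or trap.
    '192.168.1.0 mask 255.255.255.0 kod nomodify notrap',
  ],
}
```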

This has been a basic introduction and demonstration of how you can use and configure the puppetlabs-ntp module. For more information and parameters, check out the documentation included with the module.

Jay Wallace is a support engineer at Puppet Labs.

> It is likely you could want to use more than one time server, in case you are unable to resolve the first time server you specified.
That's somewhat wrong. I've seen NTP appliances which lost their radio signal and correctly changed their stratum level, but could still be reached. So the reason to have multiple servers is to let NTP choose the best one.
The best practice is to have at least three NTP servers, as that allows NTP to detect which one is the outlier.

It's great to see a supported and tested module for NTP. But installing and configuring NTP is just the beginning. The most important step (which is often forgotten or done wrong) is monitoring whether NTP is working correctly (has a peer and a candidate). The monitoring step should be done with something like Nagios and is not Puppet's task.

srinivasa nagaraja

Dear all, I need to know how to configure the Puppet master itself as an NTP server, with no net access to the external world. Basically we are trying to design an infrastructure with a Puppet master and a few nodes as agents to it. The Puppet master is on CentOS, and the nodes are on Solaris 11, CentOS 7, RHEL 6 and Ubuntu 14.

Kindly advise on the step-by-step process.

