Module of the Week: puppetlabs/vcenter – VMware vCenter Deployment

Purpose Installs vCenter 5 on Windows and manages vCenter resources.
Module puppetlabs/vcenter
Puppet Version 2.7+
Platforms Windows 2008 R2 64-bit

VMware vCenter is a software application that provides central management of vSphere environments. It's an essential component for managing large numbers of VMware virtual machines and the fleet of ESX systems hosting them. As a high-level overview, the VMware infrastructure stack is essentially VMware VMs running on vSphere ESX hosts, which are managed by vCenter, which can in turn be managed by vCloud Director:

In this case, the vCenter Puppet module can install a full-blown vCenter instance with no limitations on Windows 2008 R2 64-bit. Puppet not only manages the database dependencies (by deploying Microsoft SQL Server 2008 R2), it also takes care of the installation of VMware vCenter and optionally deploys the vSphere client. Since the vCenter installation process is fairly involved and requires a Windows license, VMware offers the vCenter Server Appliance (vCSA) to simplify deployment in a Linux environment. In addition to deploying vCenter on Windows, this module also manages vCenter resources such as datacenters, folders, ESX clusters, and ESX hosts, modeled as Puppet resources, for both Puppet-deployed vCenter installations and vCSA.

Installing the Module

Complexity Easy
Installation Time 5 minutes

On the Puppet master, execute the Puppet module tool to download the vCenter module and its dependencies:

$ puppet module install puppetlabs/vcenter
Preparing to install into /etc/puppetlabs/puppet/modules ...
Downloading from http://forge.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└─┬ puppetlabs-vcenter (v0.1.0)
  ├─┬ puppetlabs-mssql (v0.1.0)
  │ └── puppetlabs-dism (v0.1.0)
  └── puppetlabs-registry (v0.1.1)

Configuring the module

Complexity Easy
Installation Time 5 minutes

The vCenter module supports the following parameters:

  • media: vCenter installation media location.
  • sql_media: Microsoft SQL Server installation media location.
  • username: vCenter service account username.
  • password: vCenter service account password.
  • jvm_memory_option: vCenter inventory size; supports 'S', 'M', or 'L'.
  • client: whether to install the vSphere client software (default: true).

To deploy vCenter, simply specify the options above for the target Windows node on the Puppet master. If you have vCSA in your environment, this step can be omitted, and you can skip ahead to the resource management section.

node vcenter.puppetlabs.lan {
  class { 'vcenter':
    media             => 'M:\software\vCenter',
    sql_media         => 'M:\software\SQL2008',
    jvm_memory_option => 'M',
    client            => false,
  }
}

After the Puppet agent applies the changes, you should be able to log into the system and access vCenter. This process can take upwards of 30 minutes, because Microsoft SQL Server and vCenter combined deploy over 6.5 GB of binaries.

Resource Overview

The vCenter module introduces the following Puppet types and providers:

  • vc_datacenter: Data Centers
  • vc_folder: Folders
  • vc_cluster: ESX Clusters
  • vc_host: ESX Hosts

If you deployed vCenter using the Puppet module, the same node can be used to manage and attach ESX hosts as shown below:

All vCenter resources depend on the rbvmomi Ruby gem, which is installed on the Windows agent as part of the vcenter class, so any vCenter resources can be specified as part of the node definition.

node vcenter.puppetlabs.lan {
  class { 'vcenter':
    media             => 'M:\software\vCenter',
    sql_media         => 'M:\software\SQL2008',
    jvm_memory_option => 'M',
    client            => false,
  }

  vc_folder { '/lab_env':
    ensure     => present,
    connection => 'administrator:puppet@vcenter.puppetlabs.lan',
    require    => Class['vcenter'],
  }
}

If you already have a vCSA server in your environment, you can still manage it indirectly through a proxy system, as illustrated here:

Simply install the rbvmomi gem on a proxy node, and make sure the connection information reflects the vCSA appliance (in the example below, vcenter_proxy.puppetlabs.lan manages vcsa.puppetlabs.lan through the vSphere API). The proxy host can be any server that has network access to the vCSA system; a dedicated host is not required for this purpose.

node vcenter_proxy.puppetlabs.lan {
  package { 'rbvmomi':
    ensure   => present,
    provider => gem,
  }

  vc_folder { '/lab_env':
    ensure     => present,
    connection => 'administrator:vcsa_puppet@vcsa.puppetlabs.lan',
    require    => Package['rbvmomi'],
  }
}

vCenter folders and datacenters are containers for managing inventory objects. The resource title resembles a file path, indicating how these containers are organized. In the example below, datacenter1 exists under the folder /lab_env.

vc_folder { '/lab_env':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

vc_datacenter { '/lab_env/datacenter1':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

A vCenter cluster is a group of hosts; when a host is added to a cluster, its resources become part of the cluster's resources. Cluster resource titles are also specified like file paths to indicate where they exist in the vCenter hierarchy. In the example below, cluster1 is part of datacenter1 under the lab_env folder:

vc_cluster { '/lab_env/datacenter1/cluster1':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

If we add the resources above to the node vcenter.puppetlabs.lan, they will result in the following organization in vCenter:

vCenter hosts are vSphere ESX hosts that contain virtual machines. The resource title is either the ESX hostname or IP address; username and password are the login credentials for the ESX host, and path indicates where the ESX host resides in the vCenter inventory hierarchy:

vc_host { 'esx01.lab.puppetlabs.lan':
  ensure     => present,
  username   => 'root',
  password   => 'test1234',
  path       => '/lab_env/datacenter1/cluster1',
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}
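
Putting the pieces together, a single node definition can build the whole hierarchy. The sketch below combines the folder, datacenter, cluster, and host examples from above, using require to make the ordering explicit (whether explicit ordering is strictly needed depends on the providers' autorequire behavior):

```puppet
node vcenter.puppetlabs.lan {
  vc_folder { '/lab_env':
    ensure     => present,
    connection => 'administrator:puppet@vcenter.puppetlabs.lan',
  }

  vc_datacenter { '/lab_env/datacenter1':
    ensure     => present,
    connection => 'administrator:puppet@vcenter.puppetlabs.lan',
    require    => Vc_folder['/lab_env'],
  }

  vc_cluster { '/lab_env/datacenter1/cluster1':
    ensure     => present,
    connection => 'administrator:puppet@vcenter.puppetlabs.lan',
    require    => Vc_datacenter['/lab_env/datacenter1'],
  }

  vc_host { 'esx01.lab.puppetlabs.lan':
    ensure     => present,
    username   => 'root',
    password   => 'test1234',
    path       => '/lab_env/datacenter1/cluster1',
    connection => 'administrator:puppet@vcenter.puppetlabs.lan',
    require    => Vc_cluster['/lab_env/datacenter1/cluster1'],
  }
}
```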

Note that in this initial release, all resources have a connection attribute that specifies the vCenter connection information. This was intended to support managing resources on different vCenter servers in a single Puppet manifest. However, we are considering moving this information into a configuration file, similar to Puppet's device.conf, so it does not need to be part of every vCenter resource.
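
For example, one manifest could target two vCenter servers at once simply by varying the connection attribute (a sketch; vcenter2.puppetlabs.lan is a hypothetical second server used for illustration):

```puppet
# Two folders managed on two different vCenter servers from one manifest.
vc_folder { '/env_a':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

vc_folder { '/env_b':
  ensure     => present,
  connection => 'administrator:puppet@vcenter2.puppetlabs.lan',
}
```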

Also, because we are not able to identify resource uniqueness for folders and datacenters, we cannot support migrating those resources to a new location. As an example, if we change the following resource:

vc_folder { '/folder/lab_env1':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

to:

vc_folder { '/folder/lab_env2':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

This will result in two folders, /folder/lab_env1 and /folder/lab_env2. This is not a surprise to longtime Puppet users, since it's the same behavior as file resources, but I want to make sure new users are aware of it.
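
To emulate a rename, you can remove the old folder explicitly while creating the new one. This is a sketch, assuming ensure => absent destroys the folder (the module's providers create and destroy resources, per the release notes below):

```puppet
# Remove the old folder and create its replacement in the same run.
vc_folder { '/folder/lab_env1':
  ensure     => absent,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}

vc_folder { '/folder/lab_env2':
  ensure     => present,
  connection => 'administrator:puppet@vcenter.puppetlabs.lan',
}
```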

The vCenter module also depends on the Puppet Labs DISM and registry modules, which may be of interest to Windows Puppet users, so I will provide a brief overview.

Deployment Image Servicing and Management (DISM) is a command-line utility that can enable and disable features on Windows 7 and Windows 2008. The dism Puppet resource provides the ability to manage those features on these platforms. For example, the following manifest will ensure .NET 3.5 is available on the system:

dism { 'NetFx3':
  ensure => present,
}
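
Disabling a feature works the same way with ensure => absent (a sketch; TelnetClient is an illustrative feature name, and the exact names available depend on your Windows version):

```puppet
# Ensure the Telnet Client feature is disabled on the system.
dism { 'TelnetClient':
  ensure => absent,
}
```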

The Windows registry is a central repository of hierarchical configuration data. For most applications, it's far more convenient to manipulate configuration settings directly in the registry than to adjust them through a graphical user interface. In the vCenter module, we use the registry types to create the database ODBC connectivity:

registry_key { 'HKLM\SOFTWARE\ODBC\ODBC.INI\VMware VirtualCenter':
  ensure => present,
}

registry_value { 'HKLM\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources\VMware VirtualCenter':
  data => 'SQL Server Native Client 10.0',
  type => string,
}
...

One of the challenges on 64-bit Windows platforms is Microsoft's implementation of registry reflection, which isolates 32-bit and 64-bit registry keys. Users typically have to be very conscientious about whether they are writing to a 64-bit or a 32-bit key. The registry module includes the registry::value defined type, which solves this problem by writing to the system's native architecture:

registry::value { 'VMware VirtualCenter':
  key   => 'HKLM\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources',
  value => 'VMware VirtualCenter',
  data  => 'SQL Server Native Client 10.0',
  type  => string,
}

Conclusion

The vCenter module allows rapid deployment of VMware vCenter on Windows 2008 R2, and it provides resources for managing vCenter folders, datacenters, clusters, and ESX hosts on both vCSA and vCenter systems. The module isn't perfect at this point; we ran into issues during testing caused by pending reboots, when the system requires a restart to complete software installation (such as Windows patches installed in the background). Also, in this initial release we are merely creating and destroying vCenter resources, not managing resource attributes such as permissions. However, we intend to identify and support some key resource functionality, and to manage these resources in a way that does not interfere or conflict with ad hoc management of vCenter after the initial deployment. If you have any suggestions or find any other issues, please file a feature/bug request and keep us posted.
