Published on 12 September 2014 by Jonas Rosland

Recently I had a discussion with a great customer who wondered if there was a smart, automated way of deploying operating systems together with applications. Of course, I said, you can use Razor and Puppet for that. However, they wanted a completely hands-off approach that included a notion of server locality. The hands-off piece is already built into Razor, but server locality? Not really. Razor just pulls in nodes that fit the hardware specifics from a pool of available nodes and deploys operating systems on them. What the customer wanted was a way to say that the top five servers in a rack should be deployed with one configuration, the next ten with another, and the bottom five with a third. So what to do?

Razor Server Locality with LLDP example

We sat down and discussed different ways of acquiring a server’s location in a rack/datacenter/city/country. Some included manual labeling of servers, which we both agreed was not a good idea. Some included talking to specialized power strips, but we wanted it to be generic and not tied to specific hardware. Other ways included talking to the network equipment because that had to be there in the first place for us to actually be able to deploy anything, and we thought that was probably the best way to go. We discussed in depth two different routes on how to generate server locality information from network equipment, and I’d like to share one of those with you today.

The new version of Razor has a mechanism for extending the MicroKernel with new facts, packages and functions. This means we can extend the MK without having to rebuild it from scratch, and we can keep the extensions separate from the main MK tree. Very nice IMHO, and this is the route I took to enable one way of identifying server locality within Razor.

The whole thing builds on work that @shchung did earlier but that was not implemented for the first version of Razor; you can find the old pull request here and the code here. Big thanks for that code, as it made it a lot easier for me to get started in the first place. If it had been implemented last year already, I would probably not be writing this blog post at all :)

So what does this new extension do?

From the node that is booted with the MicroKernel, the extension asks for information about the switch it’s connected to, such as the switch name, switch port, switch IP and other information available via regular LLDP. LLDP is a vendor-neutral IEEE 802.1AB standard (similar in purpose to Cisco’s proprietary CDP) and can be used on pretty much any standard datacenter switch today (just make sure it’s enabled). So there’s no real hardware dependency, which is exactly what we wanted.

It then takes this information, creates facts for it and sends it back to the Razor server. So when you look at the facts of a server, you will see something like this:

root@razor:~# razor nodes node1
From http://localhost:8080/api/collections/nodes/node1:
                           interfaces: enp4s0f0,enp4s0f1,enp8s0f0,enp8s0f1,ens2f0,ens2f1,ens2f2,ens2f3,lo
                    macaddress_ens2f0: 00:1e:67:4d:c2:06
       lldp_neighbor_chassisid_ens2f0: 00:1c:73:28:65:d8
          lldp_neighbor_portid_ens2f0: Ethernet17
         lldp_neighbor_sysname_ens2f0: razor-switch1
            lldp_neighbor_pvid_ens2f0: 100
             lldp_neighbor_mtu_ens2f0: 9236
                          hardwareisa: x86_64
                            macaddress: 00:1e:67:9d:b2:90
                          architecture: x86_64
                        hardwaremodel: x86_64
                           processor0: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
                           processor1: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
                           processor2: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
                           processor3: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
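To give an idea of how the extension turns switch data into facts, here is a minimal Ruby sketch. The TLV output format and helper names here are illustrative assumptions; the real openlldp.rb in the extension differs in detail:

```ruby
# Hypothetical sketch of the fact generation. Assumes lldptool-style output
# where each TLV is a name line ("System Name TLV") followed by a
# tab-indented value line. Illustrative only.

# Map long TLV names to the shorter fact names shown above
ALIASES = { 'systemname'       => 'sysname',
            'portvlanid'       => 'pvid',
            'maximumframesize' => 'mtu' }

def lldp_facts(iface, output)
  facts = {}
  # Each TLV block: "<Name> TLV" on one line, its value tab-indented below
  output.scan(/^(.+?) TLV\n\t(.+)$/).each do |name, value|
    key = name.downcase.gsub(/\s+/, '')   # "System Name" -> "systemname"
    key = ALIASES.fetch(key, key)
    facts["lldp_neighbor_#{key}_#{iface}"] = value.strip
  end
  facts
end
```

The resulting hash maps fact names like `lldp_neighbor_sysname_ens2f0` to the values reported by the switch, which is the shape you see in the node output above.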

The important pieces here are the LLDP facts for chassisid, portid, sysname and so on, which we acquire directly from the switch. With these facts, we can create new tags like this:

root@razor:~# razor tags
From http://localhost:8080/api/collections/tags:
| name                | rule                                                             | nodes | policies |
| razor-switch1       | ["=", ["fact", "lldp_neighbor_sysname_ens2f0"], "razor-switch1"] | 3     | 3        |
| gigabit-ethernet-17 | ["=", ["fact", "lldp_neighbor_portid_ens2f0"], "Ethernet17"]     | 1     | 1        |
| gigabit-ethernet-18 | ["=", ["fact", "lldp_neighbor_portid_ens2f0"], "Ethernet18"]     | 1     | 1        |
| gigabit-ethernet-19 | ["=", ["fact", "lldp_neighbor_portid_ens2f0"], "Ethernet19"]     | 1     | 1        |
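Each of these tags is just a create-tag command sent to the Razor server. As a sketch, the JSON body for the first tag above could be built like this (the exact command fields can vary between Razor versions, so treat this as illustrative):

```ruby
require 'json'

# Sketch: build the body for Razor's create-tag command
# (POST /api/commands/create-tag), matching the first tag above.
def tag_command(name, fact, value)
  { 'name' => name,
    'rule' => ['=', ['fact', fact], value] }
end

puts JSON.pretty_generate(tag_command('razor-switch1',
                                      'lldp_neighbor_sysname_ens2f0',
                                      'razor-switch1'))
```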

The tags can then be used to create standard Razor policies like this:

root@razor:~# razor policies
From http://localhost:8080/api/collections/policies:
| name                    | repo       | task   | broker | enabled | max_count | tags                               | nodes |
| centos-for-scaleio-mdm1 | centos-6.5 | centos | puppet | true    | 1         | razor-switch1, gigabit-ethernet-17 | 1     |
| centos-for-scaleio-mdm2 | centos-6.5 | centos | puppet | true    | 1         | razor-switch1, gigabit-ethernet-18 | 1     |
| centos-for-scaleio-tb   | centos-6.5 | centos | puppet | true    | 1         | razor-switch1, gigabit-ethernet-19 | 1     |
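The first policy row above corresponds to a create-policy command body roughly like this. This is a sketch: the hostname pattern and root password are hypothetical placeholders, and field spellings can differ between Razor versions, so check your client’s help output:

```ruby
require 'json'

# Sketch of a create-policy body matching the first row above.
# hostname and root_password are hypothetical placeholders.
policy = {
  'name'          => 'centos-for-scaleio-mdm1',
  'repo'          => 'centos-6.5',
  'task'          => 'centos',
  'broker'        => 'puppet',
  'enabled'       => true,
  'max_count'     => 1,
  'tags'          => ['razor-switch1', 'gigabit-ethernet-17'],
  'hostname'      => 'mdm${id}.example.com',  # hypothetical naming pattern
  'root_password' => 'secret'                 # hypothetical
}
puts JSON.pretty_generate(policy)
```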

So with this MicroKernel extension, together with a simple and proper LLDP configuration on your switches, we can easily define which Razor policy should be applied to a specific server based on its locality within a rack and/or datacenter. One example (thanks @mcowger): a server connected to two switches, one called “database-internal” and the other called “sc1-pod1”, which would tell us that this server should be a database server and that it’s located in Pod1 in our Santa Clara DC1.
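That switch-naming idea can be sketched as a tiny classification helper (the names and patterns here are made up for illustration, not part of the extension):

```ruby
# Sketch: derive a server's role and location from the LLDP sysnames
# of its switch neighbors, following the naming scheme in the example above.
def classify(neighbors)
  role = neighbors.any? { |n| n.start_with?('database') } ? 'database' : 'generic'
  pod  = neighbors.grep(/\Asc\d+-pod\d+\z/).first
  { role: role, location: pod }
end

classify(%w[database-internal sc1-pod1])
# => {:role=>"database", :location=>"sc1-pod1"}
```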

This is a great new piece of Razor functionality. Of course, it mostly applies if you have a certain standard for racking and cabling your servers, perhaps in the parts of your datacenter where you need control over what gets installed where; if you just have an undifferentiated pool of nodes, this won’t be necessary.

So what is the MicroKernel extension made up of?

Essentially just a few basic things:

  1. A directory with “bin”, “lib” and “lib/ruby” folders
  2. Fedora binaries for lldpad, lldptool and dcbtool from the lldpad RPM, placed in the “bin” folder
  3. Fedora libraries from the required RPMs, placed in the “lib” folder
  4. A modified version of the openlldp.rb file, placed in the “lib/ruby” folder, that runs the lldpad daemon for a set time so it can grab the LLDP information and push it back as facts to the Razor server

Put all of that in a zip file (or get the code here), modify your Razor server according to the Razor MicroKernel extension instructions outlined here, enable LLDP on your switches (refer to your manufacturer’s docs), and you should be all set! You should now see your nodes boot up with the Razor MK, download the extension, and report back with LLDP information.


Jonas Rosland

About the author: Jonas Rosland was born in Sweden and now lives in Boston. He works in the Office of the CTO at EMC, focusing on infrastructure automation and third platform app development. Avid computer and retro gamer. You can follow him on his personal blog pureVirtual and on Twitter.
