
MCollective Plugin: Central RPC Audit Log


This is a SimpleRPC Audit Plugin and Agent that sends all SimpleRPC audit events to a central point for logging.

You’d run the audit plugin on every node and designate one node as the receiver of audit logs. The receiver will have a detailed log of every SimpleRPC request processed on your entire server estate.
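SimpleRPC audit plugins are Ruby classes that subclass MCollective::RPC::Audit and implement an audit_request method, which receives each request before the agent runs it. The sketch below shows that shape only; the stub base class and the simplified body are assumptions so the snippet runs standalone, and the real plugin would forward the entry to the central logging agent rather than just format it:

```ruby
module MCollective
  module RPC
    # Stub standing in for the real MCollective::RPC::Audit base class,
    # so this sketch runs without MCollective installed.
    class Audit; end

    class Centralrpclog < Audit
      # Called by MCollective for every SimpleRPC request. The real plugin
      # would send this entry to the central receiver; here we only build
      # the log line to show which request fields are available.
      def audit_request(request, connection)
        "caller=%s agent=%s action=%s" %
          [request.caller, request.agent, request.action]
      end
    end
  end
end
```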

There are two receiving agents. The first writes a log file:

01/22/10 12:57:34 dev2.foo.net> b10c1a33ad5e8cfaf5f564afa9957c32: 01/22/10 12:57:34 caller=uid=500@devel.your.com agent=iptables action=block
01/22/10 12:57:34 dev2.foo.net> b10c1a33ad5e8cfaf5f564afa9957c32: {:ipaddr=>"62.x.x.242"}
01/22/10 12:57:34 dev1.foo.net> b10c1a33ad5e8cfaf5f564afa9957c32: 01/22/10 12:57:34 caller=uid=500@devel.your.com agent=iptables action=block
01/22/10 12:57:34 dev2.foo.net> b10c1a33ad5e8cfaf5f564afa9957c32: {:ipaddr=>"62.x.x.242"}

The example log entries are for a request from the remote node devel.your.com with the message ID b10c1a33ad5e8cfaf5f564afa9957c32; the caller ran as Unix user id 500.

It sent a request to the iptables agent with the action block and the parameter ipaddr = 62.x.x.242.
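As an illustration of the log format (this snippet is not part of the plugin), a single line of the file log above can be split back into its fields with a regular expression; the field names mirror those shown in the example:

```ruby
# Matches lines like:
# 01/22/10 12:57:34 dev2.foo.net> b10c1a33...: 01/22/10 12:57:34 caller=... agent=... action=...
LOG_RE = /^(?<time>\S+ \S+) (?<sender>\S+)> (?<requestid>\h+): .*caller=(?<caller>\S+) agent=(?<agent>\S+) action=(?<action>\S+)/

# Returns a hash of the fields, or nil if the line does not match.
def parse_audit_line(line)
  m = LOG_RE.match(line)
  m && { time: m[:time], sender: m[:sender], requestid: m[:requestid],
         caller: m[:caller], agent: m[:agent], action: m[:action] }
end
```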

The other plugin will write to MongoDB:

$ mongo
MongoDB shell version: 1.4.4
> use mcollective
switched to db mcollective
> db.rpclog.find()
{ "_id" : ObjectId("4c5975e2dc3ecb0c3b000001"), "agent" : "nrpe", "senderid" : "monitor1.xxx.net", "requestid" : "6c311d786b2d187b231d41f14cbb03ce", "action" : "runcommand", "data" : { "command" : "check_bacula-fd", "process_results" : true }, "caller" : "cert=nagios@monitor1.xxx.net" }

There are some limitations to the design of this plugin; I suspect it will be effective for only a few hundred machines. This is because RPC requests are used to create the audit entries: if the central host cannot keep up, messages may overflow and be discarded.

I’d be interested in working with someone to improve this. We’d essentially write audit log entries to a queue and have a daemon consume the queue; this would ensure that all logs get saved to the DB.
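A minimal sketch of that buffered design, using Ruby’s thread-safe Queue (the class name and the writer block are illustrative assumptions, not part of the plugin): entries are pushed onto an in-memory queue and a consumer thread drains it, so a slow database write no longer blocks or discards audit events.

```ruby
require "thread"

class BufferedAuditLogger
  # writer: a block that persists one entry, e.g. an insert into MongoDB.
  def initialize(&writer)
    @queue  = Queue.new
    @writer = writer
    # Consumer thread: pops entries off the queue and writes them one by one.
    @thread = Thread.new { loop { @writer.call(@queue.pop) } }
  end

  # Called for every audit event; returns immediately without waiting
  # for the write to complete.
  def log(entry)
    @queue << entry
  end

  # Block until everything queued so far has been handed to the writer.
  def drain
    sleep 0.01 until @queue.empty?
  end
end
```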

Installation

Every Node

Add to the configuration:

rpcaudit = 1
rpcauditprovider = centralrpclog

Since MCollective version 1.1.3, subcollectives are supported, and you can specify which collective the audit plugin should use:

plugin.centralrpclog.collective = audit

And restart the MCollective daemon.

Central Audit Node

File logging agent

Add to the configuration:

plugin.centralrpclog.logfile = /var/log/mcollective-rpcaudit.log
  • Set up log rotation for /var/log/mcollective-rpcaudit.log using your operating system’s log rotation system.

MongoDB agent

Add to your configuration; these are the defaults, so you can leave them as-is if they suit you:

plugin.centralrpclog.mongohost = localhost
plugin.centralrpclog.mongodb = mcollective
plugin.centralrpclog.collection = rpclog