- Overview
- Setup - The basics of getting started with collectd
- Usage - Configuration options and additional functionality
- Supported Platforms
This Puppet module installs collectd from SignalFx and configures it to send metrics to SignalFx.
With this module you can also configure collectd plugins (e.g. collectd-rabbitmq, collectd-elasticsearch, collectd-redis) to send metrics to SignalFx.
```
puppet module install signalfx/collectd
```

This module installs and configures collectd on your system to send various metrics to SignalFx. Be careful if you already have a working collectd, as this module will replace your existing collectd configuration.
```puppet
class { 'collectd':
  signalfx_api_token => 'YOUR_SIGNALFX_API_TOKEN',
}
```

Other valid parameters are (check the params.pp file for default values):
| Parameter | Description |
|---|---|
| signalfx_api_token | Your SignalFx API Token |
| dimension_list | Set custom dimensions on all of the metrics that collectd sends to SignalFx. For example, you can use a custom dimension to indicate that one of your servers is running Kafka by including it in the hash map as follows: dimension_list => {"serverType" => "kafka"} |
| aws_integration | Controls AWS metadata syncing to SignalFx. Default is true. |
| signalfx_api_endpoint | The API endpoint to post your metrics. This will be useful if you are using a proxy. |
| ensure_signalfx_collectd_version | Controls the collectd package version on the system. Accepts the same values as Puppet's package ensure attribute. |
| signalfx_collectd_repo_source | The source of the collectd repository from SignalFx. This will be useful when you mirror a SignalFx repository. Valid on Ubuntu and Debian systems. |
| signalfx_plugin_repo_source | The source of the signalfx-collectd-plugin repository from SignalFx. This will be useful when you mirror a SignalFx repository. Valid on Ubuntu and Debian systems. |
| fqdnlookup | FQDNLookup option in collectd.conf |
| hostname | Hostname option in collectd.conf, used when fqdnlookup is false. Defaults to the hostname from Puppet Facter. |
| interval | Interval option in collectd.conf |
| timeout | Timeout option in collectd.conf |
| read_threads | ReadThreads option in collectd.conf |
| write_queue_limit_high | WriteQueueLimitHigh option in collectd.conf |
| write_queue_limit_low | WriteQueueLimitLow option in collectd.conf |
| collect_internal_stats | CollectInternalStats option in collectd.conf |
| log_file | The location of the log file used by collectd |
| log_level | The log level used by collectd |
| write_http_timeout | Timeout option of write_http plugin |
| write_http_buffersize | BufferSize option of write_http plugin |
| write_http_flush_interval | FlushInterval option of write_http plugin |
| write_http_log_http_error | LogHttpError option of write_http plugin |
| ensure_signalfx_plugin_version | Controls the signalfx-collectd-plugin package version on the system. Accepts the same values as Puppet's package ensure attribute. |
| signalfx_plugin_log_traces | LogTraces option of signalfx-collectd-plugin |
| signalfx_plugin_interactive | Interactive option of signalfx-collectd-plugin |
| signalfx_plugin_notifications | Notifications option of signalfx-collectd-plugin |
| signalfx_plugin_notify_level | NotifyLevel option of signalfx-collectd-plugin |
| signalfx_plugin_dpm | DPM option of signalfx-collectd-plugin |
| signalfx_plugin_utilization | Utilization option of signalfx-collectd-plugin |
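For example, a minimal sketch combining several of these parameters (the token, dimension value, proxy endpoint, and interval below are illustrative placeholders, not defaults):

```puppet
# Illustrative values only; substitute your own token, dimensions,
# and (if you proxy your metrics) API endpoint.
class { 'collectd':
  signalfx_api_token    => 'YOUR_SIGNALFX_API_TOKEN',
  dimension_list        => { 'serverType' => 'kafka' },      # added to all metrics sent to SignalFx
  aws_integration       => false,                            # disable AWS metadata syncing
  signalfx_api_endpoint => 'http://proxy.example.com:8080',  # hypothetical proxy URL
  interval              => 10,                               # collectd Interval, in seconds
}
```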
You may specify parameters on a per-plugin basis. Please check the notes under each plugin.
- Apache
- Cassandra
- Docker
- Elasticsearch
- Kafka
- Mesos
- MongoDB
- MySQL
- Nginx
- PostgreSQL
- RabbitMQ
- Redis
- Zookeeper
#### Class: collectd::plugins::apache

```puppet
class { 'collectd::plugins::apache':
  instances => {
    'myinstance' => {
      'URL' => '"http://localhost/mod_status?auto"',
    }
  }
}
```

See collectd-apache for configurable parameters and Apache configuration instructions.
#### Class: collectd::plugins::cassandra

```puppet
class { 'collectd::plugins::cassandra':
  connections => {
    'connection1' => {
      'ServiceURL'      => '"service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi"',
      'Host'            => '"testcassandraserver[hostHasService=cassandra]"',
      'collect_metrics' => [
        'classes',
        'garbage_collector',
        'memory-heap',
        'memory-nonheap',
        'memory_pool',
        'threading',
        'cassandra-client-read-latency',
        'cassandra-client-read-timeouts',
        'cassandra-client-read-unavailables',
        'cassandra-client-rangeslice-latency',
        'cassandra-client-rangeslice-timeouts',
        'cassandra-client-rangeslice-unavailables',
        'cassandra-client-write-latency',
        'cassandra-client-write-timeouts',
        'cassandra-client-write-unavailables',
        'cassandra-storage-load',
        'cassandra-storage-hints',
        'cassandra-storage-hints-in-progress',
        'cassandra-compaction-pending-tasks',
        'cassandra-compaction-total-completed',
      ],
    }
  }
}
```

See collectd-cassandra for configurable parameters.
#### Class: collectd::plugins::docker

```puppet
class { 'collectd::plugins::docker':
  modules => {
    'dockerplugin' => {
      'BaseURL' => '"unix://var/run/docker.sock"',
      'Timeout' => '3',
      'Verbose' => false,
    }
  }
}
```

See collectd-docker for configurable parameters.
#### Class: collectd::plugins::elasticsearch

```puppet
class { 'collectd::plugins::elasticsearch':
  modules => {
    'elasticsearch_collectd' => {
      'Verbose'             => false,
      'Cluster'             => '"elasticsearch"',
      'Indexes'             => '["_all"]',
      'EnableIndexStats'    => false,
      'EnableClusterHealth' => true,
      'Interval'            => 10,
      'IndexInterval'       => 300,
      'DetailedMetrics'     => false,
      'ThreadPools'         => '["search","index"]',
      'AdditionalMetrics'   => '[""]',
    }
  }
}
```

See collectd-elasticsearch for configurable parameters. The generated output file would look like 20-elasticsearch.conf. Currently, the plugin only monitors one Elasticsearch instance, so you should include only one module in the above class arguments.
#### Class: collectd::plugins::kafka

```puppet
class { 'collectd::plugins::kafka':
  connections => {
    'connection1' => {
      'ServiceURL'      => '"service:jmx:rmi:///jndi/rmi://localhost:7099/jmxrmi"',
      'Host'            => '"testkafkaserver[hostHasService=kafka]"',
      'collect_metrics' => [
        'classes',
        'garbage_collector',
        'memory-heap',
        'memory-nonheap',
        'memory_pool',
        'threading',
        'kafka-all-messages',
        'kafka-all-bytes-in',
        'kafka-all-bytes-out',
        'kafka-log-flush',
        'kafka-active-controllers',
        'kafka-underreplicated-partitions',
        'kafka-request-queue',
        'kafka.fetch-consumer.total-time',
        'kafka.fetch-follower.total-time',
        'kafka.produce.total-time',
      ],
    }
  }
}
```

See collectd-kafka for configurable parameters.
#### Class: collectd::plugins::mesos

```puppet
class { 'collectd::plugins::mesos':
  modules => {
    'mesos-master' => {
      'Cluster'  => '"cluster-0"',
      'Instance' => '"master-0"',
      'Path'     => '"/usr/sbin"',
      'Host'     => '"localhost"',
      'Port'     => '5050',
      'Verbose'  => 'false',
    }
  }
}
```

See collectd-mesos for configurable parameters.
#### Class: collectd::plugins::mongodb

```puppet
class { 'collectd::plugins::mongodb':
  modules => {
    'module1' => {
      'Host'     => '"localhost"',
      'Port'     => '"27017"',
      'User'     => '"collectd"',
      'Password' => '"password"',
      'Database' => '"db1"',
    },
    'module2' => {
      'Host'     => '"localhost"',
      'Port'     => '"27017"',
      'Database' => '"test"',
    }
  }
}
```

See collectd-mongodb for configurable parameters.
#### Class: collectd::plugins::mysql

```puppet
class { 'collectd::plugins::mysql':
  databases => {
    'mydb_plugin_instance' => {
      'Host'     => 'localhost',
      'User'     => 'admin',
      'Password' => 'root',
      'Database' => 'mydb',
      'Socket'   => '/var/run/mysqld/mysqld.sock',
    }
  }
}
```

See collectd.conf for configurable parameters. The generated output file would look like 10-mysql.conf.
#### Class: collectd::plugins::nginx

```puppet
class { 'collectd::plugins::nginx':
  config => {
    'URL' => '"http://localhost:80/nginx_status"',
  }
}
```

See collectd-nginx for configurable parameters and nginx configuration instructions.
#### Class: collectd::plugins::postgresql

```puppet
class { 'collectd::plugins::postgresql':
  databases => {
    'database1' => {
      'Host'     => '"127.0.0.1"',
      'User'     => '"postgres"',
      'Password' => '"password"',
      'queries'  => [
        'custom_deadlocks',
        'backends',
        'transactions',
        'queries',
        'queries_by_table',
        'query_plans',
        'table_states',
        'query_plans_by_table',
        'table_states_by_table',
        'disk_io',
        'disk_io_by_table',
        'disk_usage',
      ],
    }
  }
}
```

See collectd-postgresql for configurable parameters.
#### Class: collectd::plugins::rabbitmq

```puppet
class { 'collectd::plugins::rabbitmq':
  modules => {
    'rabbitmq-1' => {
      'Username'           => '"guest"',
      'Password'           => '"guest"',
      'Host'               => '"localhost"',
      'Port'               => '"15672"',
      'CollectChannels'    => true,
      'CollectConnections' => true,
      'CollectExchanges'   => true,
      'CollectNodes'       => true,
      'CollectQueues'      => true,
      'FieldLength'        => '1024',
    }
  }
}
```

See collectd-rabbitmq for configurable parameters. The generated output file would look like 10-rabbitmq.conf. Currently, the plugin only monitors one RabbitMQ instance, so you should include only one module in the above class arguments.
#### Class: collectd::plugins::redis

```puppet
class { 'collectd::plugins::redis':
  modules => {
    'redis_info' => {
      'Host'                                 => '"localhost"',
      'Port'                                 => 6379,
      'Verbose'                              => 'false',
      'Redis_uptime_in_seconds'              => '"gauge"',
      'Redis_used_cpu_sys'                   => '"counter"',
      'Redis_used_cpu_user'                  => '"counter"',
      'Redis_used_cpu_sys_children'          => '"counter"',
      'Redis_used_cpu_user_children'         => '"counter"',
      'Redis_uptime_in_days'                 => '"gauge"',
      'Redis_lru_clock'                      => '"counter"',
      'Redis_connected_clients'              => '"gauge"',
      'Redis_connected_slaves'               => '"gauge"',
      'Redis_client_longest_output_list'     => '"gauge"',
      'Redis_client_biggest_input_buf'       => '"gauge"',
      'Redis_blocked_clients'                => '"gauge"',
      'Redis_expired_keys'                   => '"counter"',
      'Redis_evicted_keys'                   => '"counter"',
      'Redis_rejected_connections'           => '"counter"',
      'Redis_used_memory'                    => '"bytes"',
      'Redis_used_memory_rss'                => '"bytes"',
      'Redis_used_memory_peak'               => '"bytes"',
      'Redis_used_memory_lua'                => '"bytes"',
      'Redis_mem_fragmentation_ratio'        => '"gauge"',
      'Redis_changes_since_last_save'        => '"gauge"',
      'Redis_instantaneous_ops_per_sec'      => '"gauge"',
      'Redis_rdb_bgsave_in_progress'         => '"gauge"',
      'Redis_total_connections_received'     => '"counter"',
      'Redis_total_commands_processed'       => '"counter"',
      'Redis_total_net_input_bytes'          => '"counter"',
      'Redis_total_net_output_bytes'         => '"counter"',
      'Redis_keyspace_hits'                  => '"derive"',
      'Redis_keyspace_misses'                => '"derive"',
      'Redis_latest_fork_usec'               => '"gauge"',
      'Redis_repl_backlog_first_byte_offset' => '"gauge"',
      'Redis_master_repl_offset'             => '"gauge"',
    }
  }
}
```

See redis-collectd-plugin for configurable parameters. The generated output file would look like 10-redis_master.conf.
#### Class: collectd::plugins::zookeeper

```puppet
class { 'collectd::plugins::zookeeper':
  modules => {
    'module' => {
      'Hosts' => '"localhost"',
      'Port'  => '2181',
    }
  }
}
```

See collectd-zookeeper for configurable parameters.
Currently, the supported platforms for this module are:
- Ubuntu 12.04
- Ubuntu 14.04
- Ubuntu 15.04
- Ubuntu 16.04
- CentOS 6
- CentOS 7
- RHEL 6
- RHEL 7
- Amazon Linux 2014.09
- Amazon Linux 2015.03
- Amazon Linux 2015.09
- Amazon Linux 2016.03
- Debian GNU/Linux 7 (wheezy)
- Debian GNU/Linux 8 (jessie)