Alex Petrov [Wed, 17 Dec 2014 11:43:24 +0000 (12:43 +0100)]
Change "plugin_dispatch_multivalue" to accept any metric type.
Currently, "plugin_dispatch_multivalue" works only with
"gauge_t" metric type. This commit changes it to accept a
"store_type" (one of "DS_TYPE_{GAUGE|COUTNTER|ABSOLUTE|DERIVE}").
Vincent Bernat [Wed, 10 Dec 2014 14:41:49 +0000 (15:41 +0100)]
write_kafka: check for partition availability before selecting one
When a partition is unavailable, sending to it will just lead to a lost
metric. Therefore, after selecting the partition, check if it is
available. If not, select the next one until we have tried them all.
A future iteration may use consistent hashing to avoid doubling the
work done on a partition when the previous one is unavailable.
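Not collectd's actual code, but a sketch of the probing approach using
librdkafka's partitioner callback; the hash and the function name are
made up for illustration:

  #include <stdint.h>
  #include <stddef.h>
  #include <librdkafka/rdkafka.h>

  /* Start at the hashed partition and probe forward until an available
   * partition is found.  rd_kafka_topic_partition_available() may only
   * be called from within a partitioner callback. */
  static int32_t available_partitioner (const rd_kafka_topic_t *rkt,
      const void *key, size_t key_len, int32_t partition_cnt,
      void *rkt_opaque, void *msg_opaque)
  {
    const unsigned char *p = key;
    uint32_t hash = 5381;
    int32_t start, offset;
    size_t i;

    (void) rkt_opaque;
    (void) msg_opaque;

    for (i = 0; i < key_len; i++)   /* djb2-style hash; any hash works */
      hash = (hash * 33) + p[i];
    start = (int32_t) (hash % (uint32_t) partition_cnt);

    for (offset = 0; offset < partition_cnt; offset++)
    {
      int32_t partition = (start + offset) % partition_cnt;
      if (rd_kafka_topic_partition_available (rkt, partition))
        return (partition);
    }
    return (RD_KAFKA_PARTITION_UA);   /* no partition currently available */
  }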
Marc Fournier [Tue, 2 Dec 2014 23:10:54 +0000 (00:10 +0100)]
zookeeper: initialize a variable
If the loop on line 132 doesn't iterate at least once, the function would
return the "sk" variable uninitialized.
This fixes the following build error:
cc1: warnings being treated as errors
zookeeper.c: In function 'zookeeper_read':
zookeeper.c:107: warning: 'sk' may be used uninitialized in this function
make[3]: *** [zookeeper.lo] Error 1
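Illustrative sketch of the fix, not collectd's exact code: give "sk" a
defined value so the function returns -1 even when the address lookup
yields nothing to iterate over:

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <unistd.h>

  static int example_connect (const char *host, const char *port)
  {
    struct addrinfo *ai_list = NULL, *ai_ptr;
    int sk = -1;   /* previously left uninitialized */

    if (getaddrinfo (host, port, NULL, &ai_list) != 0)
      return (-1);

    for (ai_ptr = ai_list; ai_ptr != NULL; ai_ptr = ai_ptr->ai_next)
    {
      sk = socket (ai_ptr->ai_family, ai_ptr->ai_socktype, ai_ptr->ai_protocol);
      if (sk < 0)
        continue;
      if (connect (sk, ai_ptr->ai_addr, ai_ptr->ai_addrlen) == 0)
        break;   /* connected */
      close (sk);
      sk = -1;
    }

    freeaddrinfo (ai_list);
    return (sk);   /* still -1 if the loop never produced a socket */
  }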
Florian Forster [Tue, 2 Dec 2014 10:09:32 +0000 (11:09 +0100)]
cpu plugin: Fix ValuesPercentage to behave as documented.
The documentation claims that ValuesPercentage is only considered when
!ByState && !ByCpu. Fix the behavior to match the documentation.
This makes cpu_commit_without_aggregation much simpler.
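For reference, a configuration in which ValuesPercentage is meant to
take effect looks along these lines (a sketch based on the documented
option names):

  <Plugin cpu>
    ReportByCpu false
    ReportByState false
    ValuesPercentage true
  </Plugin>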
- The query is expected to be the block argument
- The type instance is inferred from the query if unsupplied
- The type will default to gauge if not supplied
Now that the redis plugin has moved to hiredis, it could be
worthwhile to add support for custom commands.
This diff implements a mechanism for executing commands which
allows for setting the type and type-instance. It does not
support hash or array returns, but support for those could be
added later on if deemed necessary.
The canonical use case is people using redis as a queue (for
instance with solutions such as rq or sidekiq) who want a simple
way to ensure the work queue size is not growing. To address this
you would use a configuration along the lines of the sketch below:
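A sketch of such a configuration; the node name, queue key and the
Type/Instance child options are illustrative assumptions based on the
description above:

  <Plugin redis>
    <Node "example">
      Host "localhost"
      Port 6379
      <Query "LLEN myqueue">
        Type "gauge"
        Instance "myqueue"
      </Query>
    </Node>
  </Plugin>

Per the notes above, Type defaults to gauge and the type instance is
inferred from the query when these options are left out.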
When reading from tables, upon errors the PDUs sent are already
freed by snmp_synch_response, since they are handed straight to
snmp_send.
This commit syncs collectd's approach with other occurrences of
snmp_synch_response calls.
There might be a few corner cases where we leak PDUs, but it is
unclear how to check for those, since we would need an indication
that snmp_send was never called, which as far as I can tell is not
possible.
The potential for failure in snmp_send is rather low and would be
easily spotted, though: crafting invalid PDUs makes snmp_send fail
consistently, while valid configurations can never leak memory.
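Not collectd's exact code, but a sketch of the resulting pattern (the
session handle and OID are assumed to come from elsewhere):

  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>

  /* The request PDU passed to snmp_sess_synch_response() is consumed
   * by the library, so the caller must not free it, not even on error. */
  static int example_getnext (void *sess_handle, const oid *name, size_t name_len)
  {
    struct snmp_pdu *req;
    struct snmp_pdu *res = NULL;
    int status;

    req = snmp_pdu_create (SNMP_MSG_GETNEXT);
    if (req == NULL)
      return (-1);
    snmp_add_null_var (req, name, name_len);

    status = snmp_sess_synch_response (sess_handle, req, &res);
    if ((status != STAT_SUCCESS) || (res == NULL))
    {
      /* Do NOT snmp_free_pdu (req): it has already been freed. */
      if (res != NULL)
        snmp_free_pdu (res);
      return (-1);
    }

    /* ... walk res->variables here ... */
    snmp_free_pdu (res);
    return (0);
  }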
Vincent Bernat [Mon, 17 Nov 2014 09:35:16 +0000 (10:35 +0100)]
libstatgrab: only use one configure test for 0.90 API change
Previously, each API change was tested in configure.ac. Some of the
tests rely on signature checks and would need the -Werror flag
enabled to work. This is quite fragile.
Instead, we assume that if `sg_init()` requires an argument, we must use
the 0.90 API.
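The idea boils down to a compile test along these lines (a sketch,
not the actual configure.ac fragment): if the call with an argument
compiles, the 0.90 API is in use.

  #include <statgrab.h>

  int main (void)
  {
    /* sg_init() only accepts an argument with the 0.90 API; with
     * older versions this is a compile error. */
    return (sg_init (1));
  }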
Marc Fournier [Fri, 14 Nov 2014 21:04:16 +0000 (22:04 +0100)]
write_redis: avoid passing a float/double to redisCommand()
... as it seems not to be well supported by hiredis 0.10.1 on Debian
7.0, leading to a segfault. Storing the string representation in a
variable and passing that instead is the compromise I found to make
the plugin work on this system.
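The workaround amounts to something like the following sketch (the
SET command and key handling are illustrative, not the plugin's
actual commands):

  #include <stdio.h>
  #include <hiredis/hiredis.h>

  static int example_set (redisContext *conn, const char *key, double value)
  {
    char value_str[64];
    redisReply *rr;

    /* Render the double ourselves and hand redisCommand() a plain
     * "%s" instead of a float conversion. */
    snprintf (value_str, sizeof (value_str), "%.9f", value);

    rr = redisCommand (conn, "SET %s %s", key, value_str);
    if (rr == NULL)
      return (-1);
    freeReplyObject (rr);
    return (0);
  }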
Vincent Bernat [Thu, 13 Nov 2014 16:57:46 +0000 (17:57 +0100)]
libstatgrab: fix sg_get_disk_io_stats() invocation for libstatgrab >= 0.9
In those versions, `sg_get_disk_io_stats()` needs to be invoked with a
pointer to size_t instead of a pointer to int. Such a requirement is
detected at configure time.
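For illustration only, a sketch assuming the 0.90-style prototype, in
which the entry count is passed as a size_t:

  #include <stdio.h>
  #include <statgrab.h>

  int main (void)
  {
    size_t num_disks = 0;
    sg_disk_io_stats *ds;

    if (sg_init (1) != 0)
      return (1);
    ds = sg_get_disk_io_stats (&num_disks);
    if (ds == NULL)
      return (1);
    printf ("got %zu disk(s)\n", num_disks);
    return (0);
  }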