If you have not read the tutorials or man pages either on the official site or those by others, then I strongly encourage you to do so.
As said in the description, this article will only explain how a PDP is calculated, but not the definition of it.
So please read the following materials to get a basic understanding of PDP:
L<http://rrdtool.vandenbogaerdt.nl/process.php> - by Alex van den Bogaerdt. This article explains PDP in a detailed and clear way; however, its "Normalize interval" section does not describe the normalization process correctly (as opposed to the official behavior, which I confirmed with @oetiker himself). The flaw is easy to see in the bar charts discussed in the "Calculation logics" section.
|
 | (v1)
| _______ (v4) (v5)
| | | (v3) ____________
| | | ______________| || |
As can be seen on this page: L<http://rrdtool.vandenbogaerdt.nl/process.php>, after all the primary data are transformed to rates (except for GAUGE, of course), they have to go through a B<normalization process> if, in the words of the author, they are not distributed exactly according to the step or on well-defined boundaries in time.
What does that mean? Basically, if the B<known> values (as opposed to B<unknown> ones) make up at least 50% of all slots during a period, then a PDP is calculated from them.
This version seems to go well until we reach the bar chart part.
According to the ASCII bar chart, we have the following results:
From second 1 on, the PDP of each period (1-5, 6-10, ...) is computed by averaging all the values within it.
So:
- the PDP from 1 to 5 is (v1*3+v2*2)/5
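This averaging can be illustrated with a minimal Python sketch. The 3/2-second split comes from the example above; the numeric values for v1 and v2 are placeholders, not data from the article:

```python
# Time-weighted average of values within one period, as in
# "the PDP from 1 to 5 is (v1*3 + v2*2) / 5".
def pdp_average(segments):
    """segments: list of (value, seconds_covered) pairs within one period."""
    total = sum(seconds for _, seconds in segments)
    return sum(value * seconds for value, seconds in segments) / total

# Placeholder values for v1 and v2.
v1, v2 = 8.0, 1.0
print(pdp_average([(v1, 3), (v2, 2)]))  # (8*3 + 1*2) / 5 = 5.2
```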
The difference between the official version and Bogaerdt's version stems from the way they calculate PDP(6-10) and PDP(11-15).
Let's discuss this in more detail using the above bar chart.
=head3 Bogaerdt's version
PDPs are B<always computed individually> no matter how values arrive.
=head3 The official version
PDPs are B<always computed in terms of the steps which the next update spans>, be it 1 step, 2 steps or n steps; in other words, PDPs may be computed B<together>.
For example, the update at slot 17 spans PDP(6-10) and PDP(11-15) because the B<immediately> previous value is at 7, 7 is between 6 and 10, and 17 is after 15. PDP(1-5) and PDP(16-20) are not included, since the update at slot 7 already triggered the calculation for PDP(1-5) and the update at slot 17 comes before the last slot of PDP(16-20), which is 20.
That's the reason why PDP(6-10) and PDP(11-15) have the same value, (v2*2+v3*8).
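The official rule can be sketched in Python as follows. The values of v2 and v3 and the 2/8-second split are placeholders matching the discussion above, and dividing by the full spanned interval is my reading of the official behavior:

```python
# Sketch of the official version: an update that lands past one or more
# step boundaries finalizes all spanned PDPs with one shared
# time-weighted average.
def spanned_pdps(segments, n_steps):
    """segments: (value, seconds_covered) pairs over the spanned interval."""
    total = sum(sec for _, sec in segments)
    avg = sum(val * sec for val, sec in segments) / total
    return [avg] * n_steps  # every spanned PDP gets the same value

v2, v3 = 4.0, 2.0  # placeholder values
# v2 covers 2 seconds, v3 covers 8 seconds; the update spans two steps
# (6-10 and 11-15), so both PDPs come out equal.
print(spanned_pdps([(v2, 2), (v3, 8)], 2))  # [2.4, 2.4]
```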
Let's get our hands dirty with some commands:
rrdtool create target.rrd --start 1000000000 --step 5 DS:mem:GAUGE:20:0:100 RRA:AVERAGE:0.5:1:10
 rrdtool update target.rrd 1000000003:8 1000000006:1 1000000017:6 \
 1000000020:7 1000000021:7 1000000022:4 \
 1000000023:3 1000000036:1 1000000037:2 \
 1000000038:3 1000000039:3 1000000042:5
rrdtool fetch target.rrd AVERAGE --start 1000000000 --end 1000000045
Basically, the above code contains three commands: create, update and fetch. First we create a new RRD file, then we feed in some data, and finally we fetch all the PDPs from the RRD.
rrd_graph functionality, you can supply your own rrd_fetch function and register it using
the B<rrd_fetch_cb_register> function.
The argument signature and API of the callback function must be equivalent to those of B<rrd_fetch_fn> in
F<rrd_fetch.c>.
To activate the callback function you can use the pseudo filename F<cb//>I<free_form_text>.
# calculate the average of the array
my $tot_mem_ave = $tot_mem_sum/($count);
# create the graph
 RRDs::graph ("/images/mem_$count.png",
 "--title=Memory Usage",
 "--vertical-label=Memory Consumption (MB)",
 "--start=$start_time",
 "--end=$end_time",
 "--color=BACK#CCCCCC",
 "--color=CANVAS#CCFFFF",
 "--color=SHADEB#9999CC",
 "--height=125",
 "--upper-limit=656",
 "--lower-limit=0",
 "--rigid",
 "--base=1024",
 "DEF:tot_mem=target.rrd:mem:AVERAGE",
"CDEF:tot_mem_cor=tot_mem,0,671744,LIMIT,UN,0,tot_mem,IF,1024,/",
"CDEF:machine_mem=tot_mem,656,+,tot_mem,-",
"COMMENT:Memory Consumption between $start_time",
"COMMENT: and $end_time ",
"HRULE:656#000000:Maximum Available Memory - 656 MB",
 "AREA:machine_mem#CCFFFF:Memory Unused",
"AREA:tot_mem_cor#6699CC:Total memory consumed in MB");
my $err=RRDs::error;
if ($err) {print "problem generating the graph: $err\n";}
cd $BUILD_DIR
Let's first assume you already have all the necessary libraries
pre-installed.
wget http://oss.oetiker.ch/rrdtool/pub/rrdtool-1.7.0.tar.gz
gunzip -c rrdtool-1.7.0.tar.gz | tar xf -
bad since OpenSolaris does not include an F<xrender.pc> file. Use Perl to
fix this:
 perl -i~ -p -e 's/(Requires.*?)\s*xrender.*/$1/' /usr/lib/pkgconfig/cairo.pc
Make sure the RRDtool build system finds your new compiler
=item Solaris
 export LDFLAGS=-R${INSTALL_DIR}/lib
if you are using the Sun Studio/Forte compiler, you may also want to set
=item Linux
 export LDFLAGS="-Wl,--rpath -Wl,${INSTALL_DIR}/lib"
=item HPUX
export LDFLAGS="-Wl,-blibpath:${INSTALL_DIR}/lib"
=back
If you have GNU make installed and it is not called 'make',
then do
=head3 Building zlib
Chances are very high that you already have that on your system ...
cd $BUILD_DIR
wget http://oss.oetiker.ch/rrdtool/pub/libs/zlib-1.2.3.tar.gz
This tag gets replaced by an internal var. Currently these vars are known:
VERSION, COMPILETIME.
These vars represent the compiled-in values.
=back
See also AT-STYLE TIME SPECIFICATION section in the
I<rrdfetch> documentation for other ways to specify time.
If one or more source files are used to pre-fill the new B<RRD>,
the B<--start> option may be omitted. In that case, the latest update
time among all source files will be used as the last update time of
the new B<RRD> file, effectively setting the start time.
=head2 B<--daemon>|B<-d> I<address>
Address of the L<rrdcached> daemon. For a list of accepted formats, see
the B<-l> option in the L<rrdcached> manual.
rrdtool create --daemon unix:/var/run/rrdcached.sock /var/lib/rrd/foo.rrd I<other options>
Specifies a template B<RRD> file to take step, DS and RRA definitions from. This allows one
to base the structure of a new file on some existing file. The data of the template
file is NOT used for pre-filling, but it is possible to specify the same file as a
source file (see below).
Additional DS and RRA definitions are permitted, and will be added to those taken
=head2 B<--source>|B<-r> I<source-file>
One or more source B<RRD> files may be named on the command line. Data from these
source files will be used to prefill the created B<RRD> file. The output file and one source
file may refer to the same file name. This will effectively replace the source file with the
new B<RRD> file. While there is the danger of losing the source file because it
gets replaced, there is no danger that the source and the new file may be
"garbled" together at any point in time, because the new file will always be
created as a temporary file first and will only be moved to its final
destination once it has been written in its entirety.
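The write-to-a-temporary-file-then-move approach described here is the classic atomic-replace pattern. A generic Python sketch, not RRDtool's actual implementation (the file name and contents are illustrative):

```python
import os
import tempfile

def atomic_write(path, data):
    # Write the complete new file next to the target, then rename it
    # into place. os.replace() is atomic on POSIX, so a reader sees
    # either the old file or the finished new one -- never a mix.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

# "demo.rrd" is a made-up file name for illustration.
atomic_write("demo.rrd", b"new contents")
```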
Prefilling is done by matching up DS names, RRAs and consolidation
In other words: A best effort is made to preserve data during
prefilling. Also, pre-filling of RRAs may only be possible for
certain kinds of DS types. Prefilling may also have strange effects on
Holt-Winters forecasting RRAs. In other words: there is no guarantee
for data-correctness.
When "pre-filling" an B<RRD> file, the structure of the new file must be
specified as usual using DS and RRA specifications as outlined below. Data will
be taken from source files based on DS names and types and in the order the source files
are specified in. Data sources with the same name from different source files
will be combined to form a new data source. Generally, for any point in time the
new B<RRD> file will cover after its creation, data from only one source file
will have been used for pre-filling. However, data from multiple sources may be
combined if it refers to different times or an earlier named source file holds
unknown data for a time where a later one holds known data.
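The selection rule just described can be sketched in Python: for each point in time, the earliest listed source with known data wins. None stands for unknown, and the data is invented for illustration:

```python
def prefill(sources):
    """sources: list of {timestamp: value-or-None}, in command-line order.
    For each timestamp, the earliest source with known data wins."""
    timestamps = sorted({t for src in sources for t in src})
    merged = {}
    for t in timestamps:
        merged[t] = next(
            (src[t] for src in sources if src.get(t) is not None), None)
    return merged

early = {10: 1.0, 20: None, 30: 3.0}   # "unknown" at t=20
late  = {10: 9.0, 20: 2.0, 40: 4.0}

# t=10: early wins; t=20: early is unknown, so late fills in;
# t=40: only late has data.
print(prefill([early, late]))  # {10: 1.0, 20: 2.0, 30: 3.0, 40: 4.0}
```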
If this automatic data selection is not desired, the DS syntax allows one to specify
a mapping of target and source data sources for prefilling. This syntax allows one to
rename data sources and to restrict prefilling for a DS to only use data from a
single source file.
Prefilling currently only works reliably for RRAs using one of the classic
consolidation functions, that is one of: AVERAGE, MIN, MAX, LAST. It might also
currently have problems with COMPUTE data sources.
and B<CDEF>s previously defined in the same graph command.
When pre-filling the new B<RRD> file using one or more source B<RRD>s, the DS specification
may hold an optional mapping after the DS name. This takes the form of an
equal sign followed by a mapped-to DS name and an optional source index enclosed
in square brackets.
For example, the DS
DS:a=b[2]:GAUGE:120:0:U
specifies that the DS named I<a> should be pre-filled from the DS named I<b> in
the second listed source file (source indices are 1-based).
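As an illustration only (this is not RRDtool's actual parser), the documented name=mapped-name[index] form can be picked apart like this:

```python
import re

# DS name, optionally "=mapped-name" with an optional 1-based "[index]".
_DS_MAP = re.compile(r"^(?P<ds>[^=\[\]]+)"
                     r"(?:=(?P<src>[^\[\]]+)(?:\[(?P<idx>\d+)\])?)?$")

def parse_ds_mapping(spec):
    m = _DS_MAP.match(spec)
    if not m:
        raise ValueError("bad DS mapping: " + spec)
    idx = m.group("idx")
    return m.group("ds"), m.group("src"), int(idx) if idx else None

print(parse_ds_mapping("a=b[2]"))  # ('a', 'b', 2)
print(parse_ds_mapping("a"))       # ('a', None, None)
```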
=head2 B<RRA:>I<CF>B<:>I<cf arguments>
The data is also processed with the consolidation function (I<CF>) of
the archive. There are several consolidation functions that
consolidate primary data points via an aggregate function: B<AVERAGE>,
B<MIN>, B<MAX>, B<LAST>.
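As a sketch of what these aggregate functions do to a window of primary data points (the PDP values are invented):

```python
# Each RRA row consolidates a window of primary data points (PDPs)
# with one aggregate function.
pdps = [3.0, 7.0, 5.0, 1.0]  # an assumed window of 4 PDPs

consolidate = {
    "AVERAGE": lambda xs: sum(xs) / len(xs),
    "MIN":     min,
    "MAX":     max,
    "LAST":    lambda xs: xs[-1],  # most recent PDP in the window
}

for cf, fn in consolidate.items():
    print(cf, fn(pdps))  # AVERAGE 4.0, MIN 1.0, MAX 7.0, LAST 1.0
```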
=over
Second, these B<RRAs> are interdependent. To generate real-time confidence
bounds, a matched set of SEASONAL, DEVSEASONAL, DEVPREDICT, and either
HWPREDICT or MHWPREDICT must exist. Generating smoothed values of the primary
data points requires a SEASONAL B<RRA> and either an HWPREDICT or MHWPREDICT
B<RRA>. Aberrant behavior detection requires FAILURES, DEVSEASONAL, SEASONAL,
and either HWPREDICT or MHWPREDICT.
place.
The predicted deviations are stored in DEVPREDICT (think a standard deviation
which can be scaled to yield a confidence band). The FAILURES B<RRA> stores
binary indicators. A 1 marks the indexed observation as a failure; that is, the
number of confidence bounds violations in the preceding window of observations
met or exceeded a specified threshold. An example of using these B<RRAs> to graph
confidence bounds and failures appears in L<rrdgraph>.
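The FAILURES rule just described (emit a 1 when the violations in the preceding window meet or exceed a threshold) can be sketched as follows; the window and threshold values are illustrative:

```python
def failures(violations, window, threshold):
    """violations: list of 0/1 confidence-bound violations per observation.
    Returns the FAILURES indicator (0 or 1) for each observation."""
    out = []
    for i in range(len(violations)):
        # the preceding window of observations, including the current one
        recent = violations[max(0, i - window + 1): i + 1]
        out.append(1 if sum(recent) >= threshold else 0)
    return out

# window of 5 observations, 3 violations needed to flag a failure
print(failures([0, 1, 1, 0, 1, 0, 0, 0], window=5, threshold=3))
# -> [0, 0, 0, 0, 1, 1, 0, 0]
```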
The SEASONAL and DEVSEASONAL B<RRAs> store the seasonal coefficients for the
u|05| /
u|06|/ "hbt" expired
u|07|
 |08|----* sample2, restart "hb"
 |09| /
|10| /
u|11|----* sample3, restart "hb"
u|12| /
step1_u|14| /
u|15|/ "swt" expired
u|16|
 |17|----* sample4, restart "hb", create "pdp" for step1 =
|18| / = unknown due to 10 "u" labeled secs > 0.5 * step
|19| /
|20| /
|27|----* sample7, restart "hb"
step2__|28| /
|22| /
 |23|----* sample8, restart "hb", create "pdp" for step1, create "cdp"
|24| /
|25| /
The same RRD file and B<RRAs> are created with the following command,
which explicitly creates all specialized function B<RRAs>
using L<"STEP, HEARTBEAT, and Rows As Durations">.
rrdtool create monitor.rrd --step 5m \
DS:ifOutOctets:COUNTER:30m:0:4294967295 \
RRA:AVERAGE:0.5:1:2016 \
-r $rrdres -e @{[int($ctime/$rrdres)*$rrdres]} -s e-1h"'
Or using the B<--align-start> flag:
rrdtool fetch subdata.rrd AVERAGE -a -r 15m -s -1h
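Arithmetically, aligning the start is just rounding the timestamp down to a multiple of the resolution, the same computation the Perl snippet above performs with int($ctime/$rrdres)*$rrdres. As a sketch:

```python
def align_down(timestamp, resolution):
    # Round a UNIX timestamp down to the previous resolution boundary.
    return timestamp - (timestamp % resolution)

res = 15 * 60  # 15-minute resolution, in seconds
print(align_down(1000000000, res))  # 999999900
```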
reference. B<Now> refers to the current moment (and is also the default
time reference). B<Start> (B<end>) can be used to specify a time
relative to the start (end) time for those tools that use these
categories (B<rrdfetch>, L<rrdgraph>) and B<epoch> indicates the
*IX epoch (*IX timestamp 0 = 1970-01-01 00:00:00 UTC). B<epoch> is
useful to disambiguate between a timestamp value and some forms
of abbreviated date/time specifications, because it allows one to use
time offset specifications using units, e.g. B<epoch>+19711205s unambiguously
denotes timestamp 19711205 and not 1971-12-05 00:00:00 UTC.
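A quick sanity check of the example: timestamp 19711205 falls in August 1970, well before the calendar date 1971-12-05. A sketch using Python's UTC conversion:

```python
from datetime import datetime, timezone

# "epoch+19711205s" means 19,711,205 seconds after 1970-01-01 00:00:00 UTC...
as_timestamp = datetime.fromtimestamp(19711205, tz=timezone.utc)
# ...which is not the same moment as the calendar date 1971-12-05.
as_date = datetime(1971, 12, 5, tzinfo=timezone.utc)

print(as_timestamp.date(), "vs", as_date.date())
```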
=item B<--daemon>|B<-d> I<address>
Address of the L<rrdcached> daemon. For a list of accepted formats, see
the B<-l> option in the L<rrdcached> manual.
rrdtool first --daemon unix:/var/run/rrdcached.sock /var/lib/rrd/foo.rrd
[B<-t>|B<--title> I<string>]
A horizontal string placed at the top of the graph which may be
separated into multiple lines using <br/> or \n
[B<-v>|B<--vertical-label> I<string>]
ensures that you always have a grid, that there are enough but not too many
grid lines, and that the grid is metric. That is, the grid lines are placed
every 1, 2, 5 or 10 units. This parameter will also ensure that you get
enough decimals displayed even if your graph goes from 69.998 to 70.001.
(contributed by Sasha Mikheev).
[B<-o>|B<--logarithmic>]
[B<--units=si>]
With this option y-axis values on logarithmic graphs will be scaled to
the appropriate units (k, M, etc.) instead of using exponential notation.
Note that for linear graphs, SI notation is used by default.
lazy in this regard has seen several changes over time. The only thing you
can really rely on before RRDtool 1.3.7 is that lazy will not generate the
graph when it is already there and up to date, and also that it will output
the size of the graph.
[B<-d>|B<--daemon> I<address>]
by default the grid is drawn in a 1 on, 1 off pattern. With this option you can set this yourself
--grid-dash 1:3 for a dot grid
--grid-dash 1:0 for uninterrupted grid lines
[B<--border> I<width>]
All text in RRDtool is rendered using Pango. With the B<--pango-markup> option, all
text will be processed by Pango markup. This allows one to embed some simple
HTML-like markup tags using
<span key="value">text</span>
sup Superscript
small Makes font relatively smaller, equivalent to <span size="smaller">
tt Monospace font
 u Underline
More details on L<http://developer.gnome.org/pango/stable/PangoMarkupFormat.html>.
Helvetica-BoldOblique, Helvetica-Oblique, Helvetica, Symbol,
Times-Bold, Times-BoldItalic, Times-Italic, Times-Roman, and ZapfDingbats.
For Export type you can define
XML, XMLENUM (enumerates the value tags <v0>,<v1>,<v2>,...),
JSON, JSONTIME (adds a timestamp to each data row),
CSV (=comma separated values), TSV (=tab separated values), SSV (=semicolon separated values),
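The only difference between CSV, TSV and SSV is the separator character; a sketch serializing the same (invented) rows three ways:

```python
# The same fetched rows, exported with three different separators.
rows = [("1000000000", "5.0", "3.0"), ("1000000300", "6.0", "2.5")]

def serialize(rows, sep):
    return "\n".join(sep.join(row) for row in rows)

for name, sep in (("CSV", ","), ("TSV", "\t"), ("SSV", ";")):
    print(name + ":")
    print(serialize(rows, sep))
```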
[B<-W>|B<--watermark> I<string>]
Adds the given string as a watermark, horizontally centered, at the bottom
of the graph.
[B<-Z>|B<--use-nan-for-all-missing-data>]
example, I<VDEF:max=ds0,MAXIMUM> would scan each of the array members
and store the maximum value.
=head2 When do you use B<VDEF> versus B<CDEF>?
Use B<CDEF> to transform your data prior to graphing. In the above
example, we'd use a B<CDEF> to transform bytes to bits before
Since RRDtool 1.3 uses Pango for rendering text, you can use Pango markup.
Pango uses the xml B<span> tags for inline formatting instructions.
A simple example of a marked-up string might be:
<span foreground="blue" size="x-large">Blue text</span> is <i>cool</i>!
=item B<u>
Underline
=back
=item B<E<lt>[*]unixtimestamp columnE<gt>>
 defines the column of <table> which contains the unix-timestamp
- if this is a DATETIME field in the database, then prefix with leading '*'
hex-type encodings via %xx are translated to the actual value; use %% to get a literal %
* Naturally you can also use any other kind of driver that libdbi supports - e.g. postgres, ...
* From the way the data source is joined, it should also be possible to do joins over different tables
 (separate tables with "," in table and add the table equality joins to the WHERE clauses.
 This has not been tested!!!)
* It should also be relatively simple to add to the database using the same data source string.
This has not been implemented...
* The aggregation functions are ignored and several data columns are used instead
to avoid querying the same SQL several times when minimum, average and maximum are needed for graphing...
* for DB efficiency you should think of having 2 tables, one containing historic values and the other containing the latest data.
This second table should be kept small to allow for the least amount of blocking SQL statements.
 With mysql you can even use the myisam table-type for the first and InnoDB for the second.
 This is especially interesting as with tables with +100M rows myisam is much smaller than InnoDB.
* To debug the SQL statements set the environment variable RRDDEBUGSQL and the actual SQL statements and the timing is printed to stderr.
=head1 Performance issues with MySQL backend
Previous versions of LibDBI have a big performance issue when retrieving data from a MySQL server. The performance impact
grows exponentially with the number of values you retrieve from the database.
For example, it would take more than 2 seconds to graph 5 DS on 150 hours of data with a precision of 5 minutes
(against 100ms when the data comes from an RRD file). This bug has been fixed in version 0.9.0 of LibDBI.
You can find more information on this libdbi-users mailing list thread: http://sourceforge.net/mailarchive/message.php?msg_id=30320894
=head1 BUGS
* at least on Linux, please make sure that the libdbi driver is explicitly linked against libdbi.so.0;
 check via ldd /usr/lib/dbd/libmysql.so that there is a line with libdbi.so.0.
otherwise at least the perl module RRDs will fail because the dynamic linker cannot find some symbols from libdbi.so.
(this only happens when the libdbi driver is actually used the first time!)
This is KNOWN to be the case with RHEL4 and FC4 and FC5! (But actually this is a bug with libdbi make files!)
* at least version 0.8.1 of libdbi exhibits a bug with BINARY fields
 (shorttext, text, mediumtext, longtext and possibly also BINARY and BLOB fields),
 that can result in coredumps of rrdtool.
The tool will tell you on stderr if this occurs, so that you know what may be the reason.
 If you are not experiencing these coredumps, then set the environment variable RRD_NO_LIBDBI_BUG_WARNING,
and then the message will not get shown.
=head1 AUTHOR
=head1 DESCRIPTION
The B<lastupdate> function returns the UNIX timestamp and the
value stored for each datum in the most recent update of an RRD.
=over 8
package.cpath
require 'rrd'
---------------------------------------------------------------
Note: if you configured with --enable-lua-site-install, you don't need
to set package.cpath as above.
LUA_PATH = original_LUA_PATH
original_LUA_PATH = nil
--- end of code to require compat-5.1 ---------------------------
Now we can require the rrd module in the same way we did for 5.1 above:
---------------------------------------------------------------
package.cpath = '/usr/local/rrdtool-1.3.2/lib/lua/5.0/?.so;' ..
package.cpath
protected calls - 'pcall' or 'xpcall'.
Ex: program t.lua
--- compat-5.1.lua is only necessary for Lua 5.0 ----------------
-- uncomment below if your distro does not have compat-5.1
-- original_LUA_PATH = LUA_PATH
-- try only compat-5.1.lua installed with RRDtool package
-- LUA_PATH = '/usr/local/rrdtool-1.3.2/lib/lua/5.0/?.lua'
-- here we use a protected call to require compat-5.1
local r = pcall(require, 'compat-5.1')
if not r then
print('** could not load compat-5.1.lua')
os.exit(1)
end
-- uncomment below if your distro does not have compat-5.1
-- LUA_PATH = original_LUA_PATH
-- original_LUA_PATH = nil
--- end of code to require compat-5.1 ---------------------------
-- If the Lua RRDtool module was installed together with RRDtool,
-- in /usr/local/rrdtool-1.3.2/lib/lua/5.0, package.cpath must be
-- set accordingly so that 'require' can find the module:
package.cpath = '/usr/local/rrdtool-1.3.2/lib/lua/5.0/?.so;' ..
package.cpath
local rrd = require 'rrd'
rrd.update ("mydemo.rrd","N:12:13")
If we execute the program above we'll get:
$ lua t.lua
lua: t.lua:27: opening 'mydemo.rrd': No such file or directory
stack traceback:
[C]: in function `update'
the final timestamp.
--require compat-5.1 if necessary
package.cpath = '/usr/local/rrdtool-1.3.2/lib/lua/5.0/?.so;' ..
package.cpath
local rrd = require "rrd"
local first, last = rrd.first("test.rrd"), rrd.last("test.rrd")
local start, step, names, data =
=item I<filename.xml>
The name of the B<XML> file you want to restore. The special filename "-"
(a single dash) is interpreted as standard input.
In order to support the restore command in pipe mode (especially when
using B<RRDtool> over a network connection), when using "-" as a filename
=head1 EXAMPLE
$: << '/path/to/rrdtool/lib/ruby/1.8/i386-linux'
 require "RRD"
name = "test"
rrd = "#{name}.rrd"
start = Time.now.to_i
 RRD.create(
rrd,
"--start", "#{start - 1}",
"--step", "300",
puts
puts "fetching data from #{rrd}"
 (fstart, fend, data, step) = RRD.fetch(rrd, "--start", start.to_s, "--end",
(start + 300 * 300).to_s, "AVERAGE")
puts "got #{data.length} data points from #{fstart} to #{fend}"
puts
puts "generating graph #{name}.png"
RRD.graph(
"#{name}.png",
 "--title", "RubyRRD Demo",
"--start", "#{start+3600}",
"--end", "start + 1000 min",
 "--interlace",
"--imgformat", "PNG",
"--width=450",
"DEF:a=#{rrd}:a:AVERAGE",
"CDEF:line=TIME,2400,%,300,LT,a,UNKN,IF",
"AREA:b#00b6e4:beta",
"AREA:line#0022e9:alpha",
 "LINE3:line#ff0000")
puts
If you use the B<--ruby-site-install> configure option you can drop the $:
A second application of the B<tune> function is to set or alter parameters
used by the specialized function B<RRAs> for aberrant behavior detection.
Still another application is to add or remove data sources (DS) or
add / remove or alter some aspects of round-robin archives (RRA). These operations
are not really done in-place, but rather generate a new RRD file internally and
move it over the original file. Data is kept intact during these operations.
For even more in-depth modifications you may review the
S<B<--source>> and S<B<--template>> options of the B<create> function which
allow you to combine multiple RRD files into a new one and which is even more clever
in what data it is able to keep or "regenerate".
=over 8
deviation coefficients to unknown. For the FAILURES B<RRA>, it erases the
violation history. Note that reset does not erase past predictions
(the values of the HWPREDICT or MHWPREDICT B<RRA>), predicted deviations (the
values of the DEVPREDICT B<RRA>), or failure history (the values of the
FAILURES B<RRA>). This option will function even if not all the listed
B<RRAs> are present.
Due to the implementation of this option, there is an indirect impact on
=item B<--daemon>|B<-D> I<address>
B<NOTE>: Because the B<-d> (small letter 'd') option was already taken, this
function (unlike most others) uses the capital letter 'D' for the one-letter
option to name the cache daemon.
If given, B<RRDtool> will try to connect to the caching daemon
forgotten by the cache daemon, so that the next access using the
caching daemon will read the proper structure.
This sequence of operations is designed to achieve a consistent overall
result with respect to
RRD internal file consistency when using one of the B<DS> or B<RRA> changing
operations (that is: the resulting file should always be a valid RRD file,
regardless of concurrent updates through the caching daemon).
Regarding data consistency such guarantees are not made: Without external
synchronization concurrent updates may be lost.
For a list of accepted formats, see the B<-l> option in the L<rrdcached> manual.
=item B<DEL:>I<ds-name>
Every data source named with a DEL specification will be removed.
The resulting RRD will miss both the definition and the data for that
data source. Multiple DEL specifications are permitted.
=item B<DS:>I<ds-spec>
For every such data source definition (for the exact syntax see the
B<create> command), a new data source will be added to the RRD. Multiple DS
specifications are permitted.
=item B<DELRRA:>I<index>
Removes the RRA with index I<index>. The index is zero-based,
that is the very first RRA has index 0.
=item B<RRA:>I<rra-spec>
920808600: 6.6666666667e-03
920808900: 3.3333333333e-03
920809200: nan
 920809500: nan
Note that you might get more rows than you expect. The reason for this is
that you ask for a time range that ends on 920809200. The number that is