charts where the traffic is added up into a monthly total.
Value at sample (t1) will be the average between (t1-delay) and (t1)
Value at sample (t2) will be the average between (t2-delay) and (t2)
-TRENDNAN is - in contrast to TREND - NAN-safe. If you use TREND and one
-source value is NAN the complete sliding window is affected. The TRENDNAN
-operation ignores all NAN-values in a sliding window and computes the
+TRENDNAN is - in contrast to TREND - NAN-safe. If you use TREND and one
+source value is NAN, the complete sliding window is affected. The TRENDNAN
+operation ignores all NAN values in a sliding window and computes the
average of the remaining values.
B<PREDICT, PREDICTSIGMA, PREDICTPERC>
-Create a "sliding window" average/sigma/percentil of another data series,
+Create a "sliding window" average/sigma/percentile of another data series,
while also shifting the data series by the given amounts of time
Usage - explicitly stating the shifts:
and between (t1-shift2-window) and (t1-shift2)
-The function is by design NAN-safe.
+The function is by design NAN-safe.
This also allows for extrapolation into the future (say a few days)
-- you may need to define the data series whit the optional start= parameter, so that
+- you may need to define the data series with the optional start= parameter, so that
the source data series has enough data to provide predictions at the beginning of the graph as well...
-The percentile can be between [-100:+100].
+The percentile can be between [-100:+100].
Positive percentiles interpolate between values, while negative ones take the closest value.
Example: you run 7 shifts with a window of 1800seconds. Assuming that the rrd-file
has a step size of 300 seconds this means we have to do the percentile calculation
-based on a max of 42 distinct values (less if you got NAN). that means that in the
+based on a max of 42 distinct values (less if you get NAN). That means that in the
best case you get a step rate between values of about 2.4 percent.
So if you ask for the 99th percentile, you would need to look at the 41.59th
value; as we only have whole values, that means either the 41st or the 42nd value.
A negative percentile returns the closest value distance-wise - so in the above case the 42nd value,
which effectively returns the 100th percentile, or the max of the previous 7 days in the window.
-Here an example, that will create a 10 day graph that also shows the
+Here is an example that will create a 10 day graph that also shows the
prediction 3 days into the future with its uncertainty value (as defined by avg+-4*sigma)
This also shows if the prediction is exceeded at a certain point.
CDEF:perc95=86400,-7,1800,95,value,PREDICTPERC \
LINE1:perc95#ffff00:95th_percentile
-Note: Experience has shown that a factor between 3 and 5 to scale sigma is a good
-discriminator to detect abnormal behavior. This obviously depends also on the type
+Note: Experience has shown that a factor between 3 and 5 to scale sigma is a good
+discriminator to detect abnormal behavior. This obviously depends also on the type
of data and how "noisy" the data series is.
Also note the explicit use of start= in the CDEF - this is necessary to load all
CDEF:abs=rate,STEPWIDTH,*,PREF,ADDNAN
+B<NEWDAY>,B<NEWWEEK>,B<NEWMONTH>,B<NEWYEAR>
+
+These four operators will return 1.0 whenever a step is the first of the given period. The periods are determined
+according to the local timezone AND the C<LC_TIME> settings.
+
+ CDEF:mtotal=rate,STEPWIDTH,*,NEWMONTH,0,PREV,IF,ADDNAN
+
B<TIME>
Pushes the time the currently processed value was taken at onto the stack.
=item LSLSLOPE, LSLINT, LSLCORREL
-Return the parameters for a B<L>east B<S>quares B<L>ine I<(y = mx +b)>
+Return the parameters for a B<L>east B<S>quares B<L>ine I<(y = mx +b)>
which approximates the provided dataset. LSLSLOPE is the slope I<(m)> of
-the line related to the COUNT position of the data. LSLINT is the
-y-intercept I<(b)>, which happens also to be the first data point on the
-graph. LSLCORREL is the Correlation Coefficient (also know as Pearson's
-Product Moment Correlation Coefficient). It will range from 0 to +/-1
-and represents the quality of fit for the approximation.
+the line related to the COUNT position of the data. LSLINT is the
+y-intercept I<(b)>, which happens also to be the first data point on the
+graph. LSLCORREL is the Correlation Coefficient (also known as Pearson's
+Product Moment Correlation Coefficient). It will range from 0 to +/-1
+and represents the quality of fit for the approximation.
Example: C<VDEF:slope=mydata,LSLSLOPE>
*/
im->gdes[gdi].step = lcd(steparray);
free(steparray);
- /* supply the actual stepwith for this run */
- for (rpi = 0; im->gdes[gdi].rpnp[rpi].op != OP_END; rpi++) {
- if (im->gdes[gdi].rpnp[rpi].op == OP_STEPWIDTH) {
- im->gdes[gdi].rpnp[rpi].val = im->gdes[gdi].step;
- im->gdes[gdi].rpnp[rpi].op = OP_NUMBER;
- }
- }
if ((im->gdes[gdi].data = (rrd_value_t*)malloc(((im->gdes[gdi].end -
im->gdes[gdi].start)
* we use the fact that time_t is a synonym for long
*/
if (rpn_calc(rpnp, &rpnstack, (long) now,
- im->gdes[gdi].data, ++dataidx) == -1) {
+                     im->gdes[gdi].data, ++dataidx, im->gdes[gdi].step) == -1) {
/* rpn_calc sets the error string */
rpnstack_free(&rpnstack);
rpnp_freeextra(rpnp);
#include <limits.h>
#include <locale.h>
#include <stdlib.h>
+#include <time.h>
+#include "rrd_tool.h"
+#ifdef HAVE_LANGINFO_H
+#include <langinfo.h>
+#endif
#include "rrd_strtod.h"
#include "rrd_tool.h"
#include "rrd_rpncalc.h"
add_op(OP_ISINF, ISINF)
add_op(OP_NOW, NOW)
add_op(OP_LTIME, LTIME)
+ add_op(OP_NEWDAY, NEWDAY)
+ add_op(OP_NEWWEEK, NEWWEEK)
+ add_op(OP_NEWMONTH, NEWMONTH)
+ add_op(OP_NEWYEAR, NEWYEAR)
add_op(OP_STEPWIDTH, STEPWIDTH)
add_op(OP_TIME, TIME)
add_op(OP_ATAN2, ATAN2)
* COMPUTE DS specific. This is less efficient, but creation doesn't
* occur too often. */
for (i = 0; rpnp[i].op != OP_END; i++) {
- if (rpnp[i].op == OP_TIME || rpnp[i].op == OP_LTIME || rpnp[i].op == OP_STEPWIDTH ||
+ if (rpnp[i].op == OP_TIME || rpnp[i].op == OP_LTIME ||
rpnp[i].op == OP_PREV || rpnp[i].op == OP_COUNT ||
rpnp[i].op == OP_TREND || rpnp[i].op == OP_TRENDNAN ||
rpnp[i].op == OP_PREDICT || rpnp[i].op == OP_PREDICTSIGMA ||
- rpnp[i].op == OP_PREDICTPERC ) {
+ rpnp[i].op == OP_PREDICTPERC ||
+            /* these could actually go into COMPUTE with RRD format 06 ... since adding new
+               stuff into COMPUTE requires a file format update, and that can only happen with the
+               1.6 release */
+ rpnp[i].op == OP_STEPWIDTH ||
+ rpnp[i].op == OP_NEWDAY ||
+ rpnp[i].op == OP_NEWWEEK ||
+ rpnp[i].op == OP_NEWMONTH ||
+ rpnp[i].op == OP_NEWYEAR
+ ) {
+
rrd_set_error
- ("operators TIME LTIME STEPWIDTH PREV COUNT TREND TRENDNAN PREDICT PREDICTSIGMA PREDICTPERC are not supported with DS COMPUTE");
+ ("operators TIME LTIME STEPWIDTH PREV NEW* COUNT TREND TRENDNAN PREDICT PREDICTSIGMA PREDICTPERC are not supported with DS COMPUTE");
free(rpnp);
return;
}
match_op(OP_EXC, EXC)
match_op(OP_POP, POP)
match_op(OP_LTIME, LTIME)
+ match_op(OP_NEWDAY, NEWDAY)
+ match_op(OP_NEWWEEK, NEWWEEK)
+ match_op(OP_NEWMONTH, NEWMONTH)
+ match_op(OP_NEWYEAR, NEWYEAR)
match_op(OP_STEPWIDTH, STEPWIDTH)
match_op(OP_LT, LT)
match_op(OP_LE, LE)
return (diff < 0) ? -1 : (diff > 0) ? 1 : 0;
}
+static int find_first_weekday(void){
+ static int first_weekday = -1;
+ if (first_weekday == -1){
+#ifdef HAVE__NL_TIME_WEEK_1STDAY
+ /* according to http://sourceware.org/ml/libc-locales/2009-q1/msg00011.html */
+ /* See correct way here http://pasky.or.cz/dev/glibc/first_weekday.c */
+ first_weekday = nl_langinfo (_NL_TIME_FIRST_WEEKDAY)[0];
+ int week_1stday;
+ long week_1stday_l = (long) nl_langinfo (_NL_TIME_WEEK_1STDAY);
+ if (week_1stday_l == 19971130) week_1stday = 0; /* Sun */
+ else if (week_1stday_l == 19971201) week_1stday = 1; /* Mon */
+ else
+ {
+ first_weekday = 1;
+ return first_weekday; /* we go for a monday default */
+ }
+ first_weekday=(week_1stday + first_weekday - 1) % 7;
+#else
+ first_weekday = 1;
+#endif
+ }
+ return first_weekday;
+}
+
/* rpn_calc: run the RPN calculator; also performs variable substitution;
* moved and modified from data_calc() originally included in rrd_graph.c
* arguments:
rpnstack_t *rpnstack,
long data_idx,
rrd_value_t *output,
- int output_idx)
-{
+ int output_idx,
+ int step_width
+){
int rpi;
long stptr = -1;
-
+    struct tm tmtmp1, tmtmp2;
+ time_t timetmp;
/* process each op from the rpn in turn */
for (rpi = 0; rpnp[rpi].op != OP_END; rpi++) {
/* allocate or grow the stack */
}
break;
case OP_STEPWIDTH:
- rrd_set_error("STEPWIDTH should never show up here... aborting");
- return -1;
+ rpnstack->s[++stptr] = step_width;
break;
case OP_COUNT:
rpnstack->s[++stptr] = (output_idx + 1); /* Note: Counter starts at 1 */
rpnstack->s[++stptr] =
(double) tzoffset(data_idx) + (double) data_idx;
break;
+ case OP_NEWDAY:
+ timetmp = data_idx;
+ localtime_r(&timetmp,&tmtmp1);
+ timetmp = data_idx-step_width;
+ localtime_r(&timetmp,&tmtmp2);
+ rpnstack->s[++stptr] = tmtmp1.tm_mday != tmtmp2.tm_mday ? 1.0 : 0.0;
+ break;
+ case OP_NEWWEEK:
+ timetmp = data_idx;
+ localtime_r(&timetmp,&tmtmp1);
+ timetmp = data_idx-step_width;
+ localtime_r(&timetmp,&tmtmp2);
+ rpnstack->s[++stptr] = (tmtmp1.tm_wday == find_first_weekday() && tmtmp1.tm_wday != tmtmp2.tm_wday) ? 1.0 : 0.0;
+ break;
+ case OP_NEWMONTH:
+ timetmp = data_idx;
+ localtime_r(&timetmp,&tmtmp1);
+ timetmp = data_idx-step_width;
+ localtime_r(&timetmp,&tmtmp2);
+            rpnstack->s[++stptr] = tmtmp1.tm_mon != tmtmp2.tm_mon ? 1.0 : 0.0;
+ break;
+ case OP_NEWYEAR:
+ timetmp = data_idx;
+ localtime_r(&timetmp,&tmtmp1);
+ timetmp = data_idx-step_width;
+ localtime_r(&timetmp,&tmtmp2);
+            rpnstack->s[++stptr] = tmtmp1.tm_year != tmtmp2.tm_year ? 1.0 : 0.0;
+ break;
case OP_ADD:
stackunderflow(1);
rpnstack->s[stptr - 1] = rpnstack->s[stptr - 1]
OP_AVG, OP_ABS, OP_ADDNAN,
OP_MINNAN, OP_MAXNAN,
OP_MEDIAN, OP_PREDICTPERC,
- OP_DEPTH, OP_COPY, OP_ROLL, OP_INDEX, OP_STEPWIDTH
+ OP_DEPTH, OP_COPY, OP_ROLL, OP_INDEX, OP_STEPWIDTH,
+ OP_NEWDAY, OP_NEWWEEK, OP_NEWMONTH, OP_NEWYEAR
};
typedef struct rpnp_t {
rpnstack_t *rpnstack,
long data_idx,
rrd_value_t *output,
- int output_idx);
+ int output_idx,
+ int step_width
+);
#endif
ENV_RRDCACHED_ADDRESS, argv[0]);
goto end_tag;
}
-
+
/* need at least 2 arguments: filename, data. */
if (argc - optind < 2) {
rrd_set_error("Not enough arguments");
/* the file-template-cache implementation */
static GTree *rrd_file_template_cache = NULL;
/* the necessary functions for the gtree */
-static gint cache_compare_names (gconstpointer name1,
+static gint cache_compare_names (gconstpointer name1,
gconstpointer name2,
gpointer data)
{
}
/* fetch from cache */
- format = (char *) g_tree_lookup(rrd_file_template_cache,
+ format = (char *) g_tree_lookup(rrd_file_template_cache,
filename);
if (format)
return format;
goto free_format;
/* and add object to tree */
- g_tree_insert (rrd_file_template_cache,
- (char *)filename,
+ g_tree_insert (rrd_file_template_cache,
+ (char *)filename,
format);
return format;
/* now calculate effective length and allocate it */
len = strlen(value) /* length of the value */
+ 1 /* terminating null byte */
- + (fields_file_tpl-fields_tpl) /* number of fields that we have
- * more in the file_template
- * compared to the given template
+ + (fields_file_tpl-fields_tpl) /* number of fields that we have
+ * more in the file_template
+ * compared to the given template
*/
- * 2 /* = strlen(":U") */;
+ * 2 /* = strlen(":U") */;
mapped = (char *) malloc(len);
if (!mapped)
    /* check that we do not have a mismatch */
if (fields_count != fields_tpl) {
    /* we could be more explicit here,
- * by checking the missing fields
+ * by checking the missing fields
*/
rrd_set_error("rrd_map_template_to_values: "
"there are fields in template (%s) "
} /* }}} const char *rrd_map_template_to_values */
static int rrd_template_update(const char *filename, /* {{{ */
- const char *tpl,
+ const char *tpl,
int values_num,
const char * const *values)
{
}
/* now call the real function */
- ret = rrdc_update(filename, values_num,
+ ret = rrdc_update(filename, values_num,
(const char * const *) mapped_values);
-error:
+error:
/* free the temporary structures again */
if (mapped_values) {
for(i=0;i<values_num;i++)
{ /* try to connect to rrdcached */
int status = rrdc_connect(opt_daemon);
- if (status != 0) {
- rc = status;
- goto out;
- }
+ if (status != 0) {
+ rc = status;
+ goto out;
+ }
}
if (! rrdc_is_connected(opt_daemon))
argc - optind - 1, /* values_num */
(const char *const *) (argv + optind + 1)); /* values */
if (rc > 0)
- if (!rrd_test_error())
+ if (!rrd_test_error())
rrd_set_error("Failed sending the values to rrdcached: %s",
rrd_strerror (rc));
}
unsigned long rra_begin; /* byte pointer to the rra
* area in the rrd file. this
* pointer never changes value */
- rrd_value_t *pdp_new; /* prepare the incoming data to be added
+ rrd_value_t *pdp_new; /* prepare the incoming data to be added
* to the existing entry */
- rrd_value_t *pdp_temp; /* prepare the pdp values to be added
+ rrd_value_t *pdp_temp; /* prepare the pdp values to be added
* to the cdp values */
long *tmpl_idx; /* index representing the settings
+ (double) ((long) *current_time_usec -
(long) rrd->live_head->last_up_usec) / 1e6f;
- /* process the data sources and update the pdp_prep
+ /* process the data sources and update the pdp_prep
* area accordingly */
if (update_pdp_prep(rrd, updvals, pdp_new, interval) == -1) {
return -1;
else {
rrd_set_error("found extra data on update argument: %s",p+1);
return -1;
- }
+ }
}
}
}
/*
- * Parse the time in a DS string, store it in current_time and
+ * Parse the time in a DS string, store it in current_time and
* current_time_usec and verify that it's later than the last
* update for this DS.
*
unsigned long proc_pdp_st; /* which pdp_st was the last to be processed */
unsigned long occu_pdp_st; /* when was the pdp_st before the last update
* time */
- unsigned long proc_pdp_age; /* how old was the data in the pdp prep area
+ unsigned long proc_pdp_age; /* how old was the data in the pdp prep area
* when it was last updated */
unsigned long occu_pdp_age; /* how long ago was the last pdp_step time */
rpnp[i].free_extra = NULL;
}
/* run the rpn calculator */
- if (rpn_calc(rpnp, &rpnstack, 0, pdp_temp, ds_idx) == -1) {
+    if (rpn_calc(rpnp, &rpnstack, 0, pdp_temp, ds_idx, rrd->stat_head->pdp_step) == -1) {
rpnp_freeextra(rpnp);
free(rpnp);
rpnstack_free(&rpnstack);
return 0;
}
-/*
+/*
* Are we due for a smooth? Also increments our position in the burn-in cycle.
*/
static int do_schedule_smooth(
(cum_val + cur_val * start_pdp_offset) /
(pdp_cnt - scratch[CDP_unkn_pdp_cnt].u_cnt);
break;
- case CF_MAXIMUM:
+ case CF_MAXIMUM:
cum_val = IFDNAN(scratch[CDP_val].u_val, -DINF);
cur_val = IFDNAN(pdp_temp_val, -DINF);
return 0;
default:
return DNAN;
- }
- }
+ }
+ }
else {
switch (current_cf) {
case CF_AVERAGE:
return pdp_temp_val * pdp_into_cdp_cnt ;
default:
return pdp_temp_val;
- }
- }
+ }
+ }
}
/*
return 0;
}
-/*
+/*
* Move sequentially through the file, writing one RRA at a time. Note this
* architecture divorces the computation of CDP with flushing updated RRA
* entries to disk.
time_t rra_time = 0; /* time of update for a RRA */
unsigned long ds_cnt = rrd->stat_head->ds_cnt;
-
+
/* Ready to write to disk */
rra_start = rra_begin;
/* append info to the return hash */
*pcdp_summary = rrd_info_push(*pcdp_summary,
sprintf_alloc
- ("[%lli]RRA[%s][%lu]DS[%s]",
+ ("[%lli]RRA[%s][%lu]DS[%s]",
(long long)rra_time,
rrd->rra_def[rra_idx].cf_nam,
rrd->rra_def[rra_idx].pdp_cnt,
RRD=rpn2.rrd
-$RRDTOOL create $RRD --start 920804400 DS:speed:DCOUNTER:600:U:U RRA:AVERAGE:0.5:1:24 RRA:AVERAGE:0.5:6:10
+$RRDTOOL create $RRD --step 7200 --start 1167487000 DS:speed:DCOUNTER:14000:U:U RRA:AVERAGE:0.5:1:30
report "create"
-$RRDTOOL update $RRD 920804700:10 920805000:20 920805300:30
-$RRDTOOL update $RRD 920805600:40 920805900:50 920806200:60
-$RRDTOOL update $RRD 920806500:70 920806800:80 920807100:90
-$RRDTOOL update $RRD 920807400:100 920807700:110 920808000:120
-$RRDTOOL update $RRD 920808300:130 920808600:140 920808900:150
+
+# Sunday 2006-12-31T23:50:00 = 1167605400
+# is a great test for the wrap detector as they ALL should wrap now
+
+$RRDTOOL update $RRD 1167487200:0 1167494400:720 1167501600:1440 1167508800:2160 1167516000:2880 1167523200:3600 1167530400:4320 1167537600:5040 1167544800:5760 1167552000:6480 1167559200:7200 1167566400:7920 1167573600:8640 1167580800:9360 1167588000:10080 1167595200:10800 1167602400:11520 1167609600:12240 1167616800:12960
+
report "update"
$RRDTOOL xport \
--json \
- --start 920804400 --end 920808000 \
+ --start 1167487200 --end 1167616800 \
DEF:myspeed=$RRD:speed:AVERAGE \
- CDEF:total=myspeed,STEPWIDTH,*,PREV,ADDNAN \
- XPORT:myspeed:myspeed XPORT:total:total |\
+ CDEF:rday=myspeed,POP,NEWDAY \
+ CDEF:rweek=myspeed,POP,NEWWEEK \
+ CDEF:rmonth=myspeed,POP,NEWMONTH \
+ CDEF:ryear=myspeed,POP,NEWYEAR \
+ CDEF:day=myspeed,STEPWIDTH,*,NEWDAY,0,PREV,IF,ADDNAN \
+ CDEF:week=myspeed,STEPWIDTH,*,NEWWEEK,0,PREV,IF,ADDNAN \
+ CDEF:month=myspeed,STEPWIDTH,*,NEWMONTH,0,PREV,IF,ADDNAN \
+ CDEF:year=myspeed,STEPWIDTH,*,NEWYEAR,0,PREV,IF,ADDNAN \
+ XPORT:myspeed:myspeed \
+ XPORT:day:day XPORT:rday:rday \
+ XPORT:week:week XPORT:rweek:rweek \
+ XPORT:month:month XPORT:rmonth:rmonth \
+ XPORT:year:year XPORT:ryear:ryear |\
$DIFF9 - $BASEDIR/rpn2.output
report "xport"
{ "about": "RRDtool graph JSON output",
"meta": {
- "start": 920804400,
- "end": 920808000,
- "step": 300,
+ "start": 1167487200,
+ "end": 1167616800,
+ "step": 7200,
"legend": [
"myspeed",
- "total"
+ "day",
+ "rday",
+ "week",
+ "rweek",
+ "month",
+ "rmonth",
+ "year",
+ "ryear"
]
},
"data": [
- [ null, null ],
- [ 3.333333333e-02, 1.000000000e+01 ],
- [ 3.333333333e-02, 2.000000000e+01 ],
- [ 3.333333333e-02, 3.000000000e+01 ],
- [ 3.333333333e-02, 4.000000000e+01 ],
- [ 3.333333333e-02, 5.000000000e+01 ],
- [ 3.333333333e-02, 6.000000000e+01 ],
- [ 3.333333333e-02, 7.000000000e+01 ],
- [ 3.333333333e-02, 8.000000000e+01 ],
- [ 3.333333333e-02, 9.000000000e+01 ],
- [ 3.333333333e-02, 1.000000000e+02 ],
- [ 3.333333333e-02, 1.100000000e+02 ]
+ [ 1.000000000e-01, 7.200000000e+02, 0.000000000e+00, 7.200000000e+02, 0.000000000e+00, 7.200000000e+02, 0.000000000e+00, 7.200000000e+02, 0.000000000e+00 ],
+ [ 1.000000000e-01, 1.440000000e+03, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 2.160000000e+03, 0.000000000e+00, 2.160000000e+03, 0.000000000e+00, 2.160000000e+03, 0.000000000e+00, 2.160000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 2.880000000e+03, 0.000000000e+00, 2.880000000e+03, 0.000000000e+00, 2.880000000e+03, 0.000000000e+00, 2.880000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 7.200000000e+02, 1.000000000e+00, 7.200000000e+02, 1.000000000e+00, 3.600000000e+03, 0.000000000e+00, 3.600000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 1.440000000e+03, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00, 4.320000000e+03, 0.000000000e+00, 4.320000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 2.160000000e+03, 0.000000000e+00, 2.160000000e+03, 0.000000000e+00, 5.040000000e+03, 0.000000000e+00, 5.040000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 2.880000000e+03, 0.000000000e+00, 2.880000000e+03, 0.000000000e+00, 5.760000000e+03, 0.000000000e+00, 5.760000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 3.600000000e+03, 0.000000000e+00, 3.600000000e+03, 0.000000000e+00, 6.480000000e+03, 0.000000000e+00, 6.480000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 4.320000000e+03, 0.000000000e+00, 4.320000000e+03, 0.000000000e+00, 7.200000000e+03, 0.000000000e+00, 7.200000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 5.040000000e+03, 0.000000000e+00, 5.040000000e+03, 0.000000000e+00, 7.920000000e+03, 0.000000000e+00, 7.920000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 5.760000000e+03, 0.000000000e+00, 5.760000000e+03, 0.000000000e+00, 8.640000000e+03, 0.000000000e+00, 8.640000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 6.480000000e+03, 0.000000000e+00, 6.480000000e+03, 0.000000000e+00, 9.360000000e+03, 0.000000000e+00, 9.360000000e+03, 0.000000000e+00 ],
+ [ 1.000000000e-01, 7.200000000e+03, 0.000000000e+00, 7.200000000e+03, 0.000000000e+00, 1.008000000e+04, 0.000000000e+00, 1.008000000e+04, 0.000000000e+00 ],
+ [ 1.000000000e-01, 7.920000000e+03, 0.000000000e+00, 7.920000000e+03, 0.000000000e+00, 1.080000000e+04, 0.000000000e+00, 1.080000000e+04, 0.000000000e+00 ],
+ [ 1.000000000e-01, 8.640000000e+03, 0.000000000e+00, 8.640000000e+03, 0.000000000e+00, 1.152000000e+04, 0.000000000e+00, 1.152000000e+04, 0.000000000e+00 ],
+ [ 1.000000000e-01, 7.200000000e+02, 1.000000000e+00, 9.360000000e+03, 0.000000000e+00, 7.200000000e+02, 1.000000000e+00, 7.200000000e+02, 1.000000000e+00 ],
+ [ 1.000000000e-01, 1.440000000e+03, 0.000000000e+00, 1.008000000e+04, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00, 1.440000000e+03, 0.000000000e+00 ]
]
}