Currently, tc_calc_xmittime and tc_calc_xmitsize round from double to
int three times: once when they call tc_core_time2tick /
tc_core_tick2time (whose argument is an int), once when those functions
return (their return value is an int), and finally when the tc_calc_*
functions themselves return. This makes the conversions extremely
coarse and inaccurate.
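
For reference, a sketch of the pre-patch conversion chain with the
three truncation sites marked (reconstructed from the description
above; the exact expressions in the tree may differ slightly):

    /* (1) the fractional time computed by the caller is truncated
     *     when it arrives as an unsigned int argument */
    static unsigned int tc_core_time2tick(unsigned int time)
    {
    	return time * tick_in_usec;  /* (2) truncated again on return */
    }

    unsigned int tc_calc_xmittime(__u64 rate, unsigned int size)
    {
    	/* the double built here loses its fraction immediately at (1) */
    	return tc_core_time2tick(TIME_UNITS_PER_SEC*((double)size/(double)rate));
    }

    unsigned int tc_calc_xmitsize(__u64 rate, unsigned int ticks)
    {
    	/* (3) the final double is truncated once more on return */
    	return ((double)rate*tc_core_tick2time(ticks))/TIME_UNITS_PER_SEC;
    }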
As a result, for example, on my test system (where tick_in_usec=15.625,
clock_factor=1, and hz=1000000000), for a bitrate of 1Gbps, all tc htb
burst values between 0 and 999 bytes get encoded as 0 ticks; all values
between 1000 and 1999 bytes get encoded as 15 ticks (equivalent to 960
bytes); all values between 2000 and 2999 bytes as 31 ticks (1984 bytes);
etc.
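
Working the 1000-1999 byte case through the old chain makes the loss
concrete. Per the figures above, one internal time unit corresponds to
1000 bytes at this rate (the absolute units are an assumption of this
illustration):

    size  = 1500 bytes
    time  = TIME_UNITS_PER_SEC * 1500 / rate = 1.5   -> truncated to 1
    ticks = 1 * tick_in_usec = 1 * 15.625 = 15.625   -> truncated to 15
    15 ticks / 15.625 = 0.96 time units = 960 bytes  (exact equivalent;
    the old integer tc_core_tick2time would even truncate this to 0)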
The patch changes the code so that these calculations are done
internally in floating point and are rounded to an integer only when
the final value is returned. It also changes tc_calc_xmittime to round
its calculated value up rather than down, so that the calculated time
is actually sufficient for the requested size.
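
With the rounding direction fixed, a size -> ticks -> size round trip
can no longer undershoot. The intended property, as an illustrative
check (not part of the patch), holds up to floating-point error:

    unsigned int ticks = tc_calc_xmittime(rate, size);
    /* holds after the patch; before it, 1500 bytes could
     * round-trip to 960 bytes or fewer */
    assert(tc_calc_xmitsize(rate, ticks) >= size);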
Signed-off-by: Jonathan Lennox <jonathan.lennox@8x8.com>
Signed-off-by: David Ahern <dsahern@kernel.org>
 static double tick_in_usec = 1;
 static double clock_factor = 1;
 
-static unsigned int tc_core_time2tick(unsigned int time)
+static double tc_core_time2tick(double time)
 {
 	return time * tick_in_usec;
 }
 
-unsigned int tc_core_tick2time(unsigned int tick)
+double tc_core_tick2time(double tick)
 {
 	return tick / tick_in_usec;
 }

 unsigned int tc_calc_xmittime(__u64 rate, unsigned int size)
 {
-	return tc_core_time2tick(TIME_UNITS_PER_SEC*((double)size/(double)rate));
+	return ceil(tc_core_time2tick(TIME_UNITS_PER_SEC*((double)size/(double)rate)));
 }
 
 unsigned int tc_calc_xmitsize(__u64 rate, unsigned int ticks)

 };
-unsigned tc_core_tick2time(unsigned tick);
+double tc_core_tick2time(double tick);
 unsigned tc_core_time2ktime(unsigned time);
 unsigned tc_core_ktime2time(unsigned ktime);
 unsigned tc_calc_xmittime(__u64 rate, unsigned size);