From: Harlan Stenn
Our resident cryptographer; now you see him, now you don't.
-Last update: 31-Oct-2010 4:18 UTC
+Last update: 03-Aug-2011 19:50 UTC
from Alice's Adventures in Wonderland, Lewis Carroll
@@ -52,7 +52,7 @@ required.

Figure 1. Typical Symmetric Key File
-Figure 1 shows a typical keys file used by the reference implementation. In the case of MD5, the key consists of 16 characters randomized over the ASCII printing codes The string can be edited later for a password, for instance as 2late4Me for key ID 10. For message digest algorithms other than MD5, the key is a 20-octet random hex string.
+Figure 1 shows a typical keys file used by the reference implementation. In the case of MD5, the key consists of 16 characters randomized over the ASCII printing codes. The string can be edited later for a password, for instance as 2late4Me for key ID 10. For message digest algorithms other than MD5, the key is a 20-octet (40 hex digits) random hex string.
When ntpd is first started, it reads the key file specified by the keys command and installs the keys in the key cache. However, individual keys must be activated with the trustedkey configuration command before use. This allows, for instance, installing several batches of keys and then activating a key remotely using ntpdc. The requestkey command selects the key ID used as the password for the ntpdc utility, while the controlkey command selects the key ID used as the password for the ntpq utility.
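For illustration, a minimal configuration sketch tying these commands together; the file path and key IDs are hypothetical examples, not recommendations:

    # ntp.conf fragment; path and key IDs are examples only
    keys /etc/ntp.keys     # load the key file into the key cache
    trustedkey 1 10        # activate key IDs 1 and 10
    requestkey 10          # key ID used as the ntpdc password
    controlkey 10          # key ID used as the ntpq password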
Microsoft Windows Authentication
diff --git a/html/authopt.html b/html/authopt.html
index bf5ebb917..1f32f027f 100644
--- a/html/authopt.html
+++ b/html/authopt.html
@@ -20,7 +20,7 @@ color: #FF0000;
from Alice's Adventures in Wonderland, Lewis Carroll
Our resident cryptographer; now you see him, now you don't.
-Last update: 02-Jan-2011 4:45 UTC
+Last update: 04-Aug-2011 1:13 UTC
-Last update: 21-Sep-2010 4:15 UTC
+Last update: 30-Jul-2011 23:36 UTC
In the NTPv4 specification and reference implementation a state machine is used to manage the system clock under exceptional conditions, as when the daemon is first started or when encountering severe network congestion. This page describes the design and operation of the state machine in detail.
The state machine is activated upon receipt of an update by the clock discipline algorithm. Its primary purpose is to determine whether the clock is slewed or stepped and how the initial time and frequency are determined, using three thresholds: panic, step and stepout, and one timer: hold.
Most computers today incorporate a time-of-year (TOY) chip to maintain the time when the power is off. When the computer is restarted, the chip is used to initialize the operating system time. In case there is no TOY chip or the TOY time is different from NTP time by more than the panic threshold, the daemon assumes something must be terribly wrong, so it exits with a message to the system operator to set the time manually. With the -g option on the command line, the daemon sets the clock to NTP time at the first update, but exits if the offset exceeds the panic threshold at subsequent updates. The panic threshold default is 1000 s, but it can be changed with the panic option of the tinker command.
Under ordinary conditions, the clock discipline gradually slews the clock to the correct time, so that the time is effectively continuous and never stepped forward pr backward. If, due to extreme network congestion, an offset spike exceeds the step threshold, the spike is discarded. However, if offset spikes greater than the step threshold persist for an interval more than the stepout threshold, the system clock is stepped to the correct time. In practice, the need for a step has been extremely rare and almost always the result of a hardware failure or operator error. The step threshold and stepout thresholds default to 125 ms and 300 s, respective, but can be changed with the step and stepout options of the tinker command, respectively. If the step threshold is set to zero, the step function is entirely disabled and the clock is always slewed. The daemon sets the step threshold to 600 s using the -x option on the command line. If the -g option is used or the step threshold is set greater than 0.5 s, the precision time kernel support is disabled.
+Under ordinary conditions, the clock discipline gradually slews the clock to the correct time, so that the time is effectively continuous and never stepped forward or backward. If, due to extreme network congestion, an offset spike exceeds the step threshold, by default 128 ms, the spike is discarded. However, if offset spikes greater than the step threshold persist for an interval longer than the stepout threshold, by default 300 s, the system clock is stepped to the correct time.
+In practice, the need for a step has been extremely rare and almost always the result of a hardware failure or operator error. The step threshold and stepout threshold can be changed using the step and stepout options of the tinker command, respectively. If the step threshold is set to zero, the step function is entirely disabled and the clock is always slewed. The daemon sets the step threshold to 600 s using the -x option on the command line. If the -g option is used or the step threshold is set greater than 0.5 s, the precision time kernel support is disabled.
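As a hedged illustration, the thresholds described above could be changed with a fragment like the following; the values shown are simply the documented defaults, not tuning advice:

    # ntp.conf fragment; values are the documented defaults
    tinker panic 1000      # panic threshold, seconds
    tinker step 0.128      # step threshold, seconds
    tinker stepout 300     # stepout threshold, seconds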
Historically, the most important application of the step function was when a leap second was inserted in the Coordinated Universal Time (UTC) timescale and the kernel precision time support was not available. This also happened with older reference clocks that indicated an impending leap second, but the radio itself did not respond until it resynchronized some minutes later. Further details are on the Leap Second Processing page.
In some applications the clock can never be set backward, even if accidentally set forward a week by some evil means. The issues should be carefully considered before using these options. The slew rate is fixed at 500 parts-per-million (PPM) by the Unix kernel. As a result, the clock can take 33 minutes (1 s ÷ 500 PPM = 2000 s) to amortize each second the clock is outside the acceptable range. During this interval the clock will not be consistent with any other network clock and the system cannot be used for distributed applications that require correctly synchronized network time.
-Last update: 05-Oct-2010 21:29 UTC
+Last update: 29-Jul-2011 12:45 UTC
The clock cluster algorithm processes the truechimers produced by the clock select algorithm to produce the survivors used by the mitigation algorithms to discipline the system clock. It operates in a series of rounds, where at each round the truechimer furthest from the offset centroid is pruned from the population. The rounds continue until a specified termination condition is met. This page discusses the algorithm in detail.
@@ -18,12 +19,12 @@
di(j) = θ(j) − θ(i),
where θ(i) is the peer offset of the ith entry and θ(j) is the peer offset of the jth entry, both produced by the clock filter algorithm. Then, the select jitter φS(i) of the ith entry is the root distance of the ith entry times the RMS sum of di(j) as j ranges from 1 to n. For the purpose of notation in the example to follow, let φR(i) be the peer jitter computed by the clock filter algorithm for the ith entry. In general, the expected error statistic for the ith entry is the RMS sum of these two components, but that is not needed by the clock cluster algorithm.
-The object at each round is to prune the entry with the largest select jitter until the termination condition is met. Note that the select jitter must be recomputed at each round, but the peer jitter does not change. At each round the remaining entries on the list represent the survivors of that round. While the number of entries on the list is not limited, the list is always pruned to the maxclock threshold. It has default 10, but can be set by the maxclock option of the tos command. This threshold is useful to limit the number of survivors when automatic server discovery schemes are in use.
+The object at each round is to prune the entry with the largest select jitter until the termination condition is met. Note that the select jitter must be recomputed at each round, but the peer jitter does not change. At each round the remaining entries on the list represent the survivors of that round. If the survivor to be pruned is preemptable and the number of survivors is greater than the maxclock threshold, the association is demobilized. This is useful in the automatic server discovery schemes described on the Association Management page. The maxclock threshold default is 10, but it can be changed using the maxclock option of the tos command. Further pruning is subject to the following termination conditions, but no associations will be demobilized.
The termination condition has two parts. First, if the number of candidates is not greater than the minclock threshold set by the minclock option of the tos command, the pruning process terminates. The minclock default is 3, but can be changed to fit special conditions, as described on the Mitigation Rules and the prefer Keyword page.
Figure 1. Cluster Algorithm
The second termination condition is more intricate. Figure 1 shows a round where a candidate in (a) is pruned to yield the candidates in (b). Let φmax be the maximum select jitter and φmin be the minimum peer jitter over all entries on the list. In (a), candidate 1 has the highest select jitter, so φmax = φS(1). Candidate 4 has the lowest peer jitter, so φmin = φR(4). Since φmax > φmin, select jitter dominates total jitter, so the algorithm prunes candidate 1. In (b), φmax = φS(3) and φmin = φR(4). Since φmax < φmin, pruning additional candidates will not significantly reduce total jitter. So, the algorithm terminates with candidates 2, 3 and 4 as survivors.
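To make the pruning procedure concrete, here is a minimal C sketch of the loop described above. It is an illustration under stated assumptions, not ntpd's actual code: the candidate record and field names are assumed, and preemptable associations and demobilization are not modeled.

    #include <math.h>

    /* Hypothetical candidate record; field names are assumptions. */
    struct cand {
        double offset;    /* peer offset theta(i) from the clock filter */
        double jitter;    /* peer jitter phi_R(i) from the clock filter */
        double distance;  /* root distance of the ith entry */
    };

    /* Prune candidates until a termination condition is met;
     * returns the number of survivors remaining in c[]. */
    static int
    cluster(struct cand c[], int n, int minclock)
    {
        while (n > minclock) {
            double phi_max = 0, phi_min = 1e9;
            int prune = 0;
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j = 0; j < n; j++) {
                    double d = c[j].offset - c[i].offset;
                    sum += d * d;
                }
                /* select jitter: root distance times RMS offset differences */
                double phi_s = c[i].distance * sqrt(sum / n);
                if (phi_s > phi_max) { phi_max = phi_s; prune = i; }
                if (c[i].jitter < phi_min) phi_min = c[i].jitter;
            }
            if (phi_max < phi_min)   /* further pruning won't reduce jitter */
                break;
            c[prune] = c[--n];       /* cast off the worst candidate */
        }
        return n;
    }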
"Clone me," says Dolly sheepishly.
-Last update: 03-Sep-2010 21:20 UTC
+Last update: 04-Aug-2011 1:37 UTC
+The following copyright notice applies to all files collectively called the Network Time Protocol Version 4 Distribution. Unless specifically declared otherwise in an individual file, this notice applies as if the text was explicitly included in the file.
***********************************************************************
*                                                                     *
-* Copyright (c) David L. Mills 1992-2010                             *
+* Copyright (c) University of Delaware 1992-2011                     *
*                                                                     *
* Permission to use, copy, modify, and distribute this software and  *
* its documentation for any purpose with or without fee is hereby    *
@@ -46,7 +49,7 @@
-Last update: 20-May-2011 2:25 UTC
+Last update: 30-Jul-2011 20:07 UTC
At the heart of the NTP specification and reference implementation is the clock discipline algorithm, which is best described as an adaptive-parameter, hybrid phase/frequency-lock feedback loop. It is an intricately crafted algorithm that automatically adapts for optimum performance while minimizing network overhead. Operation is in two modes, phase-lock loop (PLL), which is used at poll intervals below the Allan intercept, by default 2048 s, and frequency-lock loop (FLL), which is used above that.
Figure 1. Clock Discipline Algorithm
A block diagram of the clock discipline is shown in Figure 1. The timestamp of a reference clock or remote server is compared with the timestamp of the system clock, represented as a variable frequency oscillator (VFO), to produce a raw offset sample Vd. Offset samples are processed by the clock filter to produce a filtered update Vs. The loop filter implements a type-2 proportional-integrator controller (PIC). The PIC can minimize errors in both time and frequency using predictors x and y, respectively. The clock adjust process samples these predictors once each second for the daemon discipline or once each tick interrupt for the kernel discipline to produce the system clock update Vc.
-In PLL mode the frequency predictor is an integral of the offset over past updates, while the phase predictor is the offset amortized over time in order to avoid setting the clock backward. In FLL mode the phase predictor is not used, while the frequency predictor uses the NIST lockclock algorithm. In this algorithm, the frequency predictor is computed as a fraction of the current offset divided by the time since the last update in order to minimize the offset at the next update.
-Its response in PLL mode is determined by the time constant, which results in a "stiffness" depending on the jitter of the available sources and the wander of the system clock oscillator. The scaled time constant is also used as the poll interval by the poll processes. However, in NTP symmetric mode, each peer manages its own poll interval and the two might not be the same. In such cases either peer uses the minimum of its own poll interval and that of the other peer, which is included in the NTP packet header.
+In PLL mode the frequency predictor is an integral of the offset over past updates, while the phase predictor is the offset amortized over time in order to avoid setting the clock backward. In FLL mode the phase predictor is not used, while the frequency predictor is similar to the NIST lockclock algorithm. In this algorithm, the frequency predictor is computed as a fraction of the current offset divided by the time since the last update in order to minimize the offset at the next update.
+The discipline response in PLL mode is determined by the time constant, which results in a "stiffness" depending on the jitter of the available sources and the wander of the system clock oscillator. The scaled time constant is also used as the poll interval by the poll processes. However, in NTP symmetric mode, each peer manages its own poll interval and the two might not be the same. In such cases either peer uses the minimum of its own poll interval and that of the other peer, which is included in the NTP packet header.
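The following fragment sketches the two predictors in C. It is only a rough illustration of the PLL/FLL split described above; the names, gains and structure are assumptions, not ntpd's loop filter code.

    #define ALLAN 2048.0          /* Allan intercept, seconds */

    static double freq;           /* frequency predictor y, s/s */

    /* offset: filtered update Vs; mu: seconds since the last update;
     * tc: loop time constant. Returns the phase predictor x. */
    static double
    loop_filter(double offset, double mu, double tc)
    {
        if (tc < ALLAN)
            /* PLL mode: frequency is an integral of past offsets
             * (the gain shown is an assumption) */
            freq += offset * mu / (tc * tc);
        else
            /* FLL mode: a fraction of the offset over the update
             * interval, in the spirit of the NIST lockclock algorithm */
            freq += offset / (4 * mu);
        /* The phase predictor is the offset itself, amortized over
         * time by the clock adjust process (not shown). */
        return offset;
    }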
It is necessary to verify the PLL is stable and satisfies the Nyquist criterion, which requires that the sampling rate be at least twice the bandwidth. In this case the bandwidth can be approximated by the reciprocal of the time constant. At a poll exponent of 6, the poll interval is 64 s and the time constant 2048 s. The Nyquist criterion requires the sample interval to be not more than half the time constant, or 1024 s. The clock filter guarantees at least one sample in eight poll intervals, so the sample interval is not more than 512 s. This would be described as oversampling by a factor of two. Finally, the PLL parameters have been chosen for a damping factor of 2, which results in a much faster risetime than with critical damping, but with a modest overshoot of 6 percent.
-It is important to understand how the dynamics of the PLL are affected by the poll interval. At the default poll interval of 64 s and a offset step change of 100 ms, the time response crosses zero in about 50 min and overshoots about 6 ms, as per design. Ordinarily, a step correction would causes a temporary frequency surge of about 5 PPM, which is slowly dissipates over a few hours. The result would be that the overshoot slowly subsides over that interval.
+It is important to understand how the dynamics of the PLL are affected by the time constant and poll interval. At the default poll interval of 64 s and a step offset change of 100 ms, the time response crosses zero in about 50 min and overshoots about 6 ms, as per design. Ordinarily, a step correction would cause a temporary frequency surge of about 5 PPM, which along with the overshoot slowly dissipates over a few hours.
However, the clock state machine used with the discipline algorithm avoids this transient at startup. It does this using a previously saved frequency file, if present, or by measuring the oscillator frequency, if not. It then quickly amortizes the residual offset at startup without affecting the oscillator frequency. In this way the offset error is less than 0.5 ms within 5 min if the file is present and within 10 min if not. See the Clock State Machine page for further details.
Since the PLL is linear, the response with different offset step amplitudes and poll intervals has the same characteristic shape, but scaled differently in amplitude and time. The response scales exactly with step amplitude, so that the response to a 10-ms step has the same shape as the response to a 100-ms step, but with amplitude compressed to one-tenth. The response scales exactly with poll interval, so that the response at a poll interval of 8 s has the same shape as at 64 s, but with time compressed to one-eighth.
In the NTP specification and reference implementation, poll intervals are expressed as exponents of 2. The clock discipline time constant is proportional to the poll interval by a factor of 32. Thus, a poll exponent of 6 corresponds to a poll interval of 64 s and a time constant of 2048 s. A change in the poll interval changes the time constant by a corresponding amount.
diff --git a/html/filter.html b/html/filter.html
index b1eb18181..2e8489bff 100644
--- a/html/filter.html
+++ b/html/filter.html
@@ -9,7 +9,7 @@
-Last update: 27-Sep-2010 18:00 UTC
+Last update: 29-Jul-2011 12:54 UTC
The clock filter algorithm processes the offset and delay samples produced by the on-wire protocol for each peer process separately. It uses a sliding window of eight samples and picks out the sample with the least expected error. This page describes the algorithm design principles along with an example of typical performance.
@@ -20,7 +20,7 @@
In the clock filter algorithm the offset and delay samples from the on-wire protocol are inserted as the youngest stage of an eight-stage shift register, thus discarding the oldest stage. Each time an NTP packet is received from a source, a dispersion sample is initialized as the sum of the precisions of the server and client. Precision is defined by the latency to read the system clock and varies from 1000 ns to 100 ns in modern machines. The dispersion sample is inserted in the shift register along with the offset and delay samples. Subsequently, the dispersion sample in each stage is increased at a fixed rate of 15 μs/s, representing the worst-case error due to skew between the server and client clock frequencies.
In each peer process the clock filter algorithm selects the stage with the smallest delay, which generally represents the most accurate data, and it and the associated offset sample become the peer variables of the same name. The peer jitter statistic is computed as the root-mean-square (RMS) differences between the offset samples and the offset of the selected stage.
The peer dispersion statistic is determined as a weighted sum of the dispersion samples in the shift register. Initially, the dispersion of all shift register stages is set to a large number "infinity" equal to 16 s. The weight factor for each stage, starting from the youngest numbered i = 1, is 2^-i, which means the peer dispersion is approximately 16. As samples enter the register, the peer dispersion drops from 16 to 8, 4, 2... and so forth. In practice, the dispersion falls below the select threshold of 1.5 s in about four updates. This gives some time for meaningful comparison between sources, if more than one are available. The dispersion continues to grow at the same rate as the sample dispersion. As explained elsewhere, when a source becomes unreachable, the poll process inserts a dummy infinity sample in the shift register for each poll sent. After eight polls, the register returns to its original state.
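The shift-register logic can be sketched in C as follows. This is an illustrative reading of the algorithm with assumed names and a simplified dispersion sum, not the reference implementation.

    #define NSTAGE  8        /* shift register stages */
    #define PHI     15e-6    /* dispersion growth rate, s/s */
    #define MAXDISP 16.0     /* "infinity" dispersion, s */

    struct stage { double offset, delay, disp; };
    static struct stage reg[NSTAGE];   /* reg[0] is the youngest */

    /* Insert a new sample, age the older ones, pick the stage with
     * the smallest delay, and compute the weighted peer dispersion. */
    static void
    clock_filter(double offset, double delay, double disp, double interval)
    {
        int i, best;
        double pdisp = 0;

        for (i = NSTAGE - 1; i > 0; i--) {
            reg[i] = reg[i - 1];
            reg[i].disp += PHI * interval;  /* worst-case skew error */
        }
        reg[0].offset = offset;
        reg[0].delay = delay;
        reg[0].disp = disp;

        for (best = 0, i = 1; i < NSTAGE; i++)
            if (reg[i].delay < reg[best].delay)
                best = i;                   /* least expected error */

        for (i = NSTAGE - 1; i >= 0; i--)
            pdisp = pdisp / 2 + reg[i].disp;
        pdisp /= 2;                         /* weight 2^-i, i = 1..8 */

        /* reg[best].offset and reg[best].delay become the peer
         * variables; the peer jitter (RMS differences) is omitted. */
    }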
Figure 2. Raw (left) and Filtered (right) Offsets
Figure 2 shows the performance of the algorithm using offsets for a typical Internet path over a 24-hr period. The graph on the left shows the raw offsets produced by the on-wire protocol, while the figure on the right shows the filtered offsets produced by the algorithm. If we consider the series formed as the absolute value of the offset samples, the mean error is defined as the mean of this series. Thus, the mean error of the raw samples is 0.724 ms, while the mean error of the filtered series is 0.192 ms. Radio engineers would interpret this as a processing gain of 11.5 dB.
diff --git a/html/miscopt.html b/html/miscopt.html
index 888d77b05..04f6a3a4c 100644
--- a/html/miscopt.html
+++ b/html/miscopt.html
@@ -10,7 +10,7 @@
from Pogo, Walt Kelly
We have three, now looking for more.
-Last update: 12-May-2011 13:17 UTC
+Last update: 04-Aug-2011 1:27 UTC
-Last update: 12-May-2011 13:18 UTC
+Last update: 01-Aug-2011 17:56 UTC
Sometimes an NTP subnet becomes isolated from all UTC sources such as local reference clocks or Internet time servers. In such cases it may be necessary that the subnet servers and clients remain synchronized to a common timescale, not necessarily the UTC timescale. Previously, this function was provided by the local clock driver to simulate a UTC source. A server with this driver could be used to synchronize other hosts in the subnet directly or indirectly.
There are many disadvantages to using the local clock driver, primarily that the subnet is vulnerable to single-point failures and multiple server redundancy is not possible. Orphan mode is intended to replace the local clock driver. It provides a single simulated UTC source with multiple servers and provides seamless switching as servers fail and recover.
A common configuration for private networks includes one or more core servers operating at the lowest stratum. Good practice is to configure each of these servers as backup for the others using symmetric or broadcast modes. As long as at least one core server can reach a UTC source, the entire subnet can synchronize to it.
-If no UTC sources are available to any core server, one of them can provide a simulated UTC source for all other hosts in the subnet. However, only one core server can simulate the UTC source and all direct dependents, called orphan children, must select the same one, called the orphan parent.
-A host is enabled for orphan mode using the tos orphan stratum command, where stratum is some stratum less than 16 and greater than any anticipated stratum that might occur with configured Internet time servers. However, sufficient headroom should remain so every subnet host dependent on the orphan children has stratum less than 16. Where no associations for other servers or reference clocks are configured, the orphan stratum can be set to 1. These are the same considerations that guide the local clock driver stratum selection.
-In order to avoid premature enabling orphan mode, a holdoff delay occurs when the daemon is first started and when all sources have been lost after that. The delay is intended to allow time for other sources to become reachable and selected. Only when the delay has expired with no sources will orphan parent mode be enabled. By default the delay is 300 s (five minutes), but this can be changed using the tos orphanwait delay command.
+If no UTC sources are available to any core server, one of them can provide a simulated UTC source for all other hosts in the subnet. However, only one core server can simulate the UTC source and all direct dependents, called orphan children, must select the same server, called the orphan parent.
+Hosts sharing the same common subnet, including potential orphan parents and potential orphan children, can be enabled for orphan mode using the orphan stratum option of the tos command, where stratum is some stratum less than 16 and greater than any anticipated stratum that might occur with configured Internet time servers. However, sufficient headroom should remain so every subnet host dependent on the orphan children has stratum less than 16. Where no associations for other servers or reference clocks are configured, the orphan stratum can be set to 1. These are the same considerations that guide the local clock driver stratum selection.
+In order to avoid premature enabling orphan mode, a holdoff delay occurs when the daemon is first started and when all sources have been lost after that. The delay is intended to allow time for other sources to become reachable and selectable. Only when the delay has expired with no sources will orphan mode be enabled. By default the delay is 300 s (five minutes), but this can be changed using the orphanwait option of the tos command.
An orphan parent with no sources shows reference ID LOOP if operating at stratum 1 and 127.0.0.1 (IPv4 loopback address) otherwise. While ordinary NTP clients use a selection metric based on delay and dispersion, orphan children use a metric computed from the IP address of each core server. Each orphan child chooses the orphan
-parent as the root server with the smallest metric.
+parent as the core server with the smallest metric.
For orphan mode to work well, each core server with available sources should operate at the same stratum. All core servers and orphan children should include the same tos command in the configuration file. Each orphan child should include in the configuration file all root servers.
-
-For example, consider the peer network configuration above, where two or more campus primary or secondary (stratum 2) servers are configured with reference clocks or public Internet primary servers and with each other using symmetric modes. With this configuration a server that loses all sources continues to discipline the system clock using the other servers as backup. Only the core servers and orphan children need to be enabled for orphan mode.
-
-For broadcast networks each core server is configured in both broadcast server and broadcast client modes as shown above. Orphan children operate as broadcast clients of all core servers. As in peer networks, the core servers back up each other and only they and the orphan children need to be enabled for orphan mode.
-In normal operation subnet hosts operate below stratum 5, so the subnet is automatically configured as described in the NTP specification. If all UTC sources are lost, all core servers become orphans and the orphan children will select the same root server to become the orphan parent.
+
+ Figure 1. Orphan Peer Configuration
+For example, consider the peer network configuration in Figure 1, where two or more campus primary or secondary (stratum 2) servers are configured with reference clocks or public Internet primary servers and with each other using symmetric modes. With this configuration a server that loses all sources continues to discipline the system clock using the other servers as backup. Only the core servers and orphan children need to be enabled for orphan mode.
+
+ Figure 2. Orphan Broadcast Configuration
+For broadcast networks each core server is configured in both broadcast server and broadcast client modes as shown in Figure 2. Orphan children operate as broadcast clients of all core servers. As in peer networks, the core servers back up each other and only they and the orphan children need to be enabled for orphan mode.
+In normal operation subnet hosts operate below stratum 5, so the subnet is automatically configured as described in the NTP specification. If all UTC sources are lost, all core servers become orphans and the orphan children will select the same core server to become the orphan parent.
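To make the setup concrete, here is a hedged ntp.conf sketch for one core server in the peer configuration of Figure 1; the addresses and the orphan stratum are illustrative assumptions, not recommendations:

    # ntp.conf fragment on one core server (addresses are placeholders)
    server ntp.example.org iburst    # normal UTC source, if reachable
    peer 192.168.1.2                 # other core servers as mutual backup
    peer 192.168.1.3
    tos orphan 9 orphanwait 300      # orphan stratum 9, 300 s holdoff

    # Orphan children would use the same "tos orphan 9" command and
    # list all core servers with server commands.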
diff --git a/html/poll.html b/html/poll.html
index 1173a76bb..dec1fb548 100644
--- a/html/poll.html
+++ b/html/poll.html
@@ -9,10 +9,10 @@
-Last update: 12-Oct-2010 3:19 UTC
+Last update: 31-Jul-2011 15:34 UTC
-Each poll process sends NTP packets at intervals determined by the poll program. The program is designed to provide a sufficient update rate for thc clock discipline algorithm while minimizing network overhead. The program is designed to operate over a poll exponent range between 3 (8 s) and 17 (36 hr). The minimum and maximum within this range can be set using the minpoll and maxpoll options of the server command, with default 6 (64 s) and 10 (1024 s), respectively.
+Each poll process sends NTP packets at intervals determined by the poll program. The program is designed to provide a sufficient update rate for the clock discipline algorithm, while minimizing network overhead. The program is designed to operate over a poll exponent range between 3 (8 s) and 17 (36 hr). The minimum and maximum poll exponent within this range can be set using the minpoll and maxpoll options of the server command, with default 6 (64 s) and 10 (1024 s), respectively.
The poll interval is managed by a heuristic algorithm developed over several years of experimentation. It depends on an exponentially weighted average of clock offset differences, called clock jitter, and a jiggle counter, which is initially set to zero. When a clock update is received and the offset is less than four times the clock jitter, the jiggle counter is increased by the poll exponent; otherwise, it is decreased by twice the poll exponent. If the jiggle counter is greater than an arbitrary threshold of 30, it is reset to 0 and the poll exponent is increased by 1. If the jiggle counter is less than -30, it is set to 0 and the poll exponent is decreased by 1. In effect, the algorithm has a relatively slow reaction to good news, but a relatively fast reaction to bad news.
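A compact C sketch of this hysteresis, under the assumption that the counter and thresholds behave exactly as described above; this is an illustration, not ntpd's poll-adjust code.

    #include <math.h>

    #define LIMIT 30          /* jiggle counter threshold */

    static int jiggle;        /* initially zero */

    static void
    poll_update(double offset, double jitter, int *poll,
                int minpoll, int maxpoll)
    {
        if (fabs(offset) < 4 * jitter)
            jiggle += *poll;          /* good news: slow reaction */
        else
            jiggle -= 2 * *poll;      /* bad news: fast reaction */

        if (jiggle > LIMIT) {
            jiggle = 0;
            if (*poll < maxpoll)
                (*poll)++;            /* lengthen the poll interval */
        } else if (jiggle < -LIMIT) {
            jiggle = 0;
            if (*poll > minpoll)
                (*poll)--;            /* shorten the poll interval */
        }
    }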
As an option of the server command, instead of a single packet, the poll program can send a burst of six packets at 2-s intervals. This is designed to reduce the time to synchronize the clock at initial startup (iburst) and/or to reduce the phase noise at the longer poll intervals (burst). The iburst option is effective only when the server is unreachable, while the burst option is effective only when the server is reachable.
For the iburst option the number of packets in the burst is six, which is the number normally needed to synchronize the clock; for the burst option, the number of packets in the burst is determined by the difference between the current poll exponent and the minimum poll exponent as a power of 2. For instance, with the default minimum poll exponent of 6 (64 s), only a single packet is sent for every poll, while the full number of eight packets is sent at poll exponents of 9 (512 s) or more. This ensures that the average headway will never exceed the minimum headway.
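For example, a hedged configuration fragment enabling these options; the hostnames are placeholders:

    # ntp.conf fragment; hostnames are placeholders
    server ntp1.example.com iburst                  # fast initial sync
    server ntp2.example.com burst minpoll 6 maxpoll 10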
diff --git a/html/prefer.html b/html/prefer.html
index a74866ebd..71148cea7 100644
--- a/html/prefer.html
+++ b/html/prefer.html
@@ -10,14 +10,14 @@
from Alice's Adventures in Wonderland, Lewis Carroll
Listen carefully to what I say; it is very complicated.
-Last update: 29-Oct-2010 19:45 UTC
+Last update: 30-Jul-2011 19:45 UTC
This page summarizes the criteria for choosing, from among a number of potential sources, suitable contributors to the clock discipline algorithm. The criteria are very meticulous, since they have to handle many different scenarios that may be optimized for peculiar circumstances, including some scenarios designed to support planetary and deep space missions.
-Recall the suite of NTP data acquisition and grooming algorithms as these algorithms proceed in five phases. Phase one discovers the available sources and mobilizes an association for each candidate found. These candidates can result from explicit configuration, broadcast discovery or the pool and manycast autonomous configuration schemes. Phase two grooms the selectable candidates excluding those sources showing one or more of the following errors
+Recall that the suite of NTP data acquisition and grooming algorithms proceeds in five phases. Phase one discovers the available sources and mobilizes an association for each candidate found. These candidates can result from explicit configuration, broadcast discovery or the pool and manycast autonomous configuration schemes.
+Phase two grooms the selectable candidates, excluding those sources showing one or more of the following errors:
Phase three uses an intersection algorithm to select the truechimers from - among the candidates, leaving behind the falsetickers. A server or peer configured - with the true option is ipso facto a truechimer independent of this - algorithm. Phase four uses a clustering algorithm to cast off statistical outliers - from the truechimers until a set of survivors not less than minclock remain. The minclock has default 3, but can be changed with the minclock option of the tos command.
-Phase five uses a set of algorithms and mitigation rules to rank the survivors to produce combined statistics used to discipline the clock, as well as to select from among the survivors - a system peer from which a set of system statistics can be inherited and passed - along to a dependent client population. The algorithms and rules are the main topic of this page. The clock offset developed from these - algorithms can discipline the system clock either using the ntpd clock - discipline algorithm or enable the kernel to discipline the system clock directly, - as described on the A Kernel Model for Precision Timekeeping page. - Phase five is the topic of this page.
+Phase three uses the select algorithm to determine the truechimers from among the candidates, leaving behind the falsetickers. A server or peer configured with the true option is declared a truechimer independent of this algorithm. Phase four uses the cluster algorithm to cast off statistical outliers from the truechimers until a set of survivors not less than minclock remain. The minclock has default 3, but can be changed with the minclock option of the tos command.
+Phase five uses a set of algorithms and mitigation rules to rank the survivors to produce combined statistics used to discipline the clock, as well as to select from among the survivors a system peer from which a set of system statistics can be inherited and passed along to a dependent client population. The clock offset developed from these algorithms can discipline the system clock either by using the ntpd clock discipline algorithm or by enabling the kernel to discipline the system clock directly, as described on the A Kernel Model for Precision Timekeeping page. Phase five, with its algorithms and mitigation rules, is the main topic of this page.
The cluster algorithm produces a list of survivors ranked by increasing synchronization distance. The combine algorithm uses this list to produce a weighted average of both offset and jitter. Absent other considerations discussed later, the combined offset is used to discipline the clock, while the combined jitter is combined with other components to produce the system jitter statistic inherited by dependent clients.
The combine algorithm uses a weight factor for each survivor computed as the select threshold minus the synchronization distance. Since the select algorithm rejects candidates with synchronization distance greater than the select threshold, the weight factor is always positive. The system jitter is calculated as the RMS weighted differences between the offset of each survivor and the offset of the first candidate on the survivor list.
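A short C sketch of the weighted average as described in this paragraph; the record and field names are assumptions, and the jitter computation is omitted.

    struct surv { double offset, distance; };

    /* Weight each survivor by the select threshold minus its
     * synchronization distance; the weight is always positive
     * because more distant candidates were already rejected. */
    static double
    combine(const struct surv s[], int n, double select_threshold)
    {
        double wsum = 0, osum = 0;
        for (int i = 0; i < n; i++) {
            double w = select_threshold - s[i].distance;
            wsum += w;
            osum += w * s[i].offset;
        }
        return osum / wsum;     /* combined offset */
    }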
@@ -58,15 +49,14 @@
The behavior of the various algorithms and mitigation rules involved depends on how the various synchronization sources are classified. This depends on whether the source is local or remote and, if local, the type of source. The following classes are defined:
-The mitigation rules are designed to provide an intelligent selection of the system peer from among the survivors of different types. When used with the server or peer commands, the prefer option designates one or more survivors as preferred over all others. While the rules do not forbid it, it is usually not useful to designate more than one source as preferred; however, if more than one source is so designated, they are used in the order specified in the configuration file; that is, if the first one becomes unselectable, the second one is considered and so forth. This order of priority is also applicable to multiple PPS drivers, multiple modem drivers and even multiple local drivers, although that would not normally be useful.
-The clustering algorithm works on the set of truechimers produced by the intersection algorithms. Ordinarily, any one of them can in principle provide correct time; however, due to various latency variations, not all can provide the most accurate and stable time. The clustering algorithm, processes the truechimers in one or more rounds to cast off a statistical outlier until no more than the minclock option of the tos command are left. The default for this option is 3.
-In the prefer scheme the cluster algorithm is modified so that the prefer peer is never discarded; on the contrary, its potential removal becomes a rounds-termination condition. However, the prefer peer can still be discarded by the intersection algorithm as a falseticker. To avoid this, it is usually wise to increase the mindist option of the tos command from the default .005 s to something like .05 s.
+The mitigation rules are designed to provide an intelligent selection of the system peer from among the selectable sources of different types. When used with the server or peer commands, the prefer option designates one or more survivors as preferred over all others. While the rules do not forbid it, it is usually not useful to designate more than one source as preferred; however, if more than one source is so designated, they are used in the order specified in the configuration file; that is, if the first one becomes unselectable, the second one is considered and so forth. This order of priority is also applicable to multiple PPS drivers, multiple modem drivers and even multiple local drivers, although that would not normally be useful.
+The cluster algorithm works on the set of truechimers produced by the select algorithm. At each round the algorithm casts off the survivor least likely to influence the choice of system peer. However, in the prefer scheme the cluster algorithm is modified so that the prefer peer is never discarded; on the contrary, its potential removal becomes a termination condition. However, the prefer peer can still be discarded by the select algorithm as a falseticker; otherwise, the prefer peer becomes the system peer.
Ordinarily, the combining algorithm computes a weighted average of the survivor offsets to produce the final synchronization source. However, if a prefer peer is among the survivors, the combining algorithm is not used. Instead,
@@ -77,7 +67,7 @@
remains operational. However, if the radio fails or becomes a falseticker, the averaged backup sources continue to discipline the system clock.
-As the selection algorithm scans the associations for selectable candidates, the modem driver and local driver are segregated for later, but only if not designated a prefer peer. If so designated, a driver is included among the candidate population. In addition, if orphan parents are found the parent with the lowest metric is segregated for later; the others are discarded. For this purpose the metric is defined as the four-octet IPv4 address or the first four octets of the hashed IPv6 address. The resulting candidates, including any prefer peers found, are processed by the intersection to produce a possibly empty set of truechimers. The clustering algorithm ranks the truechimers first by stratum then by synchronization distance and designates the survivor with the lowest distance as the potential system peer.
+As the select algorithm scans the associations for selectable candidates, the modem driver and local driver are segregated for later, but only if not designated a prefer peer. If so designated, a driver is included among the candidate population. In addition, if orphan parents are found, the parent with the lowest metric is segregated for later; the others are discarded. For this purpose the metric is defined as the four-octet IPv4 address or the first four octets of the hashed IPv6 address. The resulting candidates, including any prefer peers found, are processed by the select algorithm to produce a possibly empty set of truechimers. The clustering algorithm ranks the truechimers first by stratum then by synchronization distance and temporarily designates the survivor with the lowest distance as the potential system peer.
If one or more truechimers support a pulse-per-second (PPS) signal and the PPS signal is operating correctly, it is designated a PPS driver. If more than one PPS driver is found, only the first one is used. The PPS driver is not included
@@ -93,13 +83,13 @@
The mitigation algorithm proceeds in three steps in turn.
If none of the above is the case, the data are disregarded and the system variables remain as they are.
The minsane option of the tos command, the prefer option of the server and peer commands and the flag options of the fudge command for the PPS driver can be used with the mitigation rules to provide many useful configurations. The minsane option specifies the minimum number of survivors required to synchronize the system clock. The prefer option designates the prefer peer. The driver-dependent flag options enable the PPS driver for various conditions.
A common scenario is a GPS driver with a serial timecode and PPS signal. The PPS signal is disabled until the system clock has been set by some means, not necessarily the GPS driver. If the serial timecode is within 0.4 s of the PPS
@@ -108,7 +98,7 @@
disconnected, the GPS driver stops updating the system clock and so eventually becomes unreachable and replaced by other sources.
Whether or not the GPS driver disables the PPS signal when unreachable is at the discretion of the driver.
-Ordinarily, the PPS signal would be disabled in this case; however, When the GPS receiver has a precision holdover oscillator, the driver may elect to continue PPS operation. In this case the PPS signal continues to discipline the system clock.
+Ordinarily, the PPS signal is disabled in this case; however, when the GPS receiver has a precision holdover oscillator, the driver may elect to continue PPS operation. In this case, minsane can be set to zero so the PPS signal continues to discipline the system clock.
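A hedged example of such a configuration; the driver addresses shown are illustrative reference clock units, and the driver-specific fudge flags are omitted since they depend on the driver:

    # ntp.conf fragment; illustrative only
    server 127.127.20.0 prefer   # hypothetical timecode driver, unit 0
    server 127.127.22.0          # PPS driver, unit 0
    tos minsane 0                # let the PPS signal free-run in holdover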
Authentication Commands and Options
Miscellaneous Commands and Options
-Last update: 04-Jan-2011 22:18 UTC
+Last update: 29-Jul-2011 1:39 UTC
The clock select algorithm determines, from a set of candidates, which are correct (truechimers) and which are not (falsetickers) according to a set of formal correctness assertions. The principles are based on the observation that the maximum error in determining the offset of a candidate cannot exceed one-half the roundtrip delay to the primary reference clock at the time of measurement. This must be increased by the maximum error that can accumulate since then. In NTP the total, called the synchronization distance, is one-half the roundtrip root delay plus the root dispersion plus minor error contributions not considered here.
@@ -25,7 +26,7 @@
The algorithm operates as shown in Figure 2. Let m be the number of candidates and f the number of falsetickers, initially zero. Move a pointer from the leftmost endpoint towards the rightmost endpoint in Figure 1 and count the number of candidates, stopping when that number reaches m - f; this is the left endpoint of the intersection interval. Then, do the same, but moving from the rightmost endpoint towards the leftmost endpoint; this is the right endpoint of the intersection interval. If the left endpoint is greater than the right endpoint (i.e., the interval appears backwards), increase f by 1. If f is less than m / 2, try again; otherwise, the algorithm fails and no truechimers could be found.
The clock select algorithm then scans the associations. If the right endpoint of the correctness interval for a candidate is greater than the left endpoint of the intersection interval and the left endpoint of the correctness interval is less than the right endpoint of the intersection interval, the candidate is a truechimer; otherwise, it is a falseticker.
-In practice, with fast LANs and modern computers, the correctness interval can be quite small, especially when the candidates are multiple reference clocks. In such cases the intersection interval may be empty, due to insignificant differences in the reference clock offsets. To avoid this, the synchronization distance must be at least the value of mindist, with default 1 ms. This value can be changed using the mindist option of the tos command.
+In practice, with fast LANs and modern computers, the correctness interval can be quite small, especially when the candidates are multiple reference clocks. In such cases the intersection interval might be empty, due to insignificant differences in the reference clock offsets. To avoid this, the size of the correctness interval must be at least the value of mindist, with default 1 ms. This value can be changed using the mindist option of the tos command.
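Finally, a C sketch of the endpoint-sweep form of the intersection procedure, essentially Marzullo's algorithm as described above; the names and structure are assumptions, and midpoint handling is omitted.

    #include <stdlib.h>

    /* Each candidate contributes a correctness interval
     * [offset - distance, offset + distance], stored as two
     * endpoints: type -1 for the lower, +1 for the upper. */
    struct endpoint { double value; int type; };

    static int
    cmp(const void *a, const void *b)
    {
        double d = ((const struct endpoint *)a)->value -
                   ((const struct endpoint *)b)->value;
        return (d > 0) - (d < 0);
    }

    /* Find an intersection interval [*lo, *hi] containing at least
     * m - f candidates; returns 0 on success, -1 if no majority. */
    static int
    intersect(struct endpoint ep[], int m, double *lo, double *hi)
    {
        qsort(ep, 2 * m, sizeof ep[0], cmp);
        for (int f = 0; f < m / 2; f++) {
            int count = 0;
            *lo = 1e9; *hi = -1e9;
            for (int i = 0; i < 2 * m; i++) {      /* left to right */
                count -= ep[i].type;               /* lower endpoint: +1 */
                if (count >= m - f) { *lo = ep[i].value; break; }
            }
            count = 0;
            for (int i = 2 * m - 1; i >= 0; i--) { /* right to left */
                count += ep[i].type;               /* upper endpoint: +1 */
                if (count >= m - f) { *hi = ep[i].value; break; }
            }
            if (*lo <= *hi)
                return 0;    /* non-empty intersection interval */
        }
        return -1;           /* too many falsetickers */
    }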
from Alice's Adventures in Wonderland, Lewis Carroll
S is for snakeoil.
-Last update: 04-Sep-2010 17:48 UTC
+Last update: 04-Aug-2010 1:38 UTC
-This program is a Simple Network Time Protocol (SNTP) client that can be used to query a Network Time Protocol (NTP) server and display the time offset of the system clock relative to the server clock. Run as root it can correct the system clock to this offset as well. It can be run as an interactive command or from a script by a cron job. The program implements the SNTP protocol defined in RFC-4330, which is a subset of the NTP protocol defined in RFC-1305, but does not provide the sanity checks, access controls, security functions and mitigation algorithms as in the full NTP implementation.
+This program is a Simple Network Time Protocol (SNTP) client that can be used to query a Network Time Protocol (NTP) server and display the time offset of the system clock relative to the server clock. Run as root, it can correct the system clock to this offset as well. It can be run as an interactive command or from a script by a cron job. The program implements the SNTP client protocol defined in RFC 5905, including the full on-wire protocol, but does not provide the sanity checks, access controls, security functions and mitigation algorithms of the full NTP version 4 specification, also defined in RFC 5905.
By default, sntp writes the local date and time (i.e., not UTC) to the standard output in the format
-1996 Oct 15 20:17:25.123 + 4.567 +/- 0.089 secs,
-where the + 4.567 +/- 0.089 secs indicates the time offset and error bound of the system clock relative to the server clock.
-If a NTP server address is explicitly specified, the program sends a single message to the server and waits up to delay seconds for a unicast server message. Otherwise, it sends no message and waits up to delay seconds for a broadcast server message.
+2011-08-04 00:40:36.642222 (+0000) +0.006611 +/- 0.041061 psp-os1 149.20.68.26
+where the +0.006611 +/- 0.041061 indicates the time offset and error bound of the system clock relative to the server clock, in seconds.
+If -b bcaddress is not specified, the program sends a single message to each address and waits up to uctimeout (default 5) seconds for a unicast server response. Otherwise, it sends no message and waits up to bctimeout (default 68) seconds for a broadcast NTP message.
sntp recognizes the following options:
-The program returns an exit status of zero for success and non-zero otherwise.
+The program returns an exit status of zero if a valid response is received and non-zero otherwise.
-sntp was developed by N.M. Maclaren of the University of Cambridge Computing Service.
+This sntp was originally developed by Johannes Maximilian Kuehn. Harlan Stenn and Dave Hart modified it to query more than one server at a time. See the file ChangeLog in the distribution for details.