20010525
- Portability: gcc 2.6.3 does not have _attribute__ (Clive
+ Portability: gcc 2.6.3 does not have __attribute__ (Clive
Jones, dgw.co.uk). File: util/sys_defs.h.
Bugfix: the SMTP and LMTP clients claimed that a queue file
20060516
- Portability: _float80 alignment, by Albert Chin. File:
+ Portability: __float80 alignment, by Albert Chin. File:
util/sys_defs.h.
Further testing of Milter support uncovered typos; a missing
of negative feedback with concurrency window 1 could
be improved.
- Feature: support to look up null sender addresses
- in sender-dependent relayhost maps. Parameter name:
+ Feature: support to look up null sender addresses in
+ sender-dependent relayhost maps. Parameter name:
empty_address_relayhost_maps_lookup_key (default: <>).
Keean Schupke. File: trivial-rewrite/resolve.c.
feedback parameter settings with constants and variables
such as 1/8 or 1/concurrency. Some experimental parameters
were removed and others were renamed. The new names are:
- default_concurrency_negative_feedback,
- default_concurrency_positive_feedback, concurrency_feedback_debug,
- default_concurrency_failed_cohort_limit.
+ default_destination_concurrency_negative_feedback,
+ default_destination_concurrency_positive_feedback,
+ default_destination_concurrency_failed_cohort_limit,
+ destination_concurrency_feedback_debug.
Also available are transport-specific overrides:
<transport>_initial_destination_concurrency,
- <transport>_concurrency_negative_feedback,
- <transport>_concurrency_positive_feedback,
- <transport>_concurrency_failed_cohort_limit.
+ <transport>_destination_concurrency_negative_feedback,
+ <transport>_destination_concurrency_positive_feedback,
+ <transport>_destination_concurrency_failed_cohort_limit.
Files: global/mail_params.h, qmgr/qmgr.c, qmgr/qmgr_transport.c,
- qmgr/qmge_queue.c, qmgr/qmgr_feedback.c.
+ qmgr/qmgr_queue.c, qmgr/qmgr_feedback.c, postconf/auto.awk.
The queue manager is by far the most complex part of the Postfix mail system.
It schedules delivery of new mail, retries failed deliveries at specific times,
-and removes mail from the queue after the last delivery attempt. Once started,
-the qmgr(8) process runs until "postfix reload" or "postfix stop".
+and removes mail from the queue after the last delivery attempt. There are two
+major classes of mechanisms that control the operation of the queue manager.
-As a persistent process, the queue manager has to meet strict requirements with
-respect to code correctness and robustness. Unlike non-persistent daemon
-processes, the queue manager cannot benefit from Postfix's process rejuvenation
-mechanism that limit the impact from resource leaks and other coding errors.
+The first class of mechanisms is concerned with the number of concurrent
+deliveries to a specific destination, including decisions on when to suspend
+deliveries after persistent failures:
-There are two major classes of mechanisms that control the operation of the
-queue manager:
+ * Concurrency scheduling
- * Mechanisms concerned with the number of concurrent deliveries to a specific
- destination, including decisions on when to suspend deliveries after
- persistent failures. These are described under "Concurrency scheduling".
+ o Summary of the Postfix 2.5 concurrency feedback algorithm
+ o Summary of the Postfix 2.5 "dead destination" detection algorithm
+ o Pseudocode for the Postfix 2.5 concurrency scheduler
+ o Results for delivery to concurrency limited servers
+ o Discussion of concurrency limited server results
+ o Limitations of less-than-1 per delivery feedback
+ o Concurrency configuration parameters
- * Mechanisms concerned with the selection of what mail to deliver to a given
- destination. These are described under "Preemptive scheduling".
+The second class of mechanisms is concerned with the selection of what mail to
+deliver to a given destination:
+
+ * Preemptive scheduling
+
+ o Why the non-preemptive Postfix queue manager was replaced
+ o How the preemptive queue manager scheduler works
+
+And this document would not be complete without:
+
+ * Credits
Concurrency scheduling
limit. And when a destination's concurrency level dropped to zero, the
destination was declared "dead" and delivery was suspended.
-Drawbacks of the old +/-1 feedback concurrency scheduler are:
+Drawbacks of the old +/-1 feedback per delivery are:
* Overshoot due to exponential delivery concurrency growth with each pseudo-
cohort(*). For example, with the default initial concurrency of 5,
The revised concurrency scheduler has a highly modular structure. It uses
separate mechanisms for per-destination concurrency control and for "dead
destination" detection. The concurrency control in turn is built from two
-separate mechanisms: it supports less-than-1 feedback to allow for more gradual
-concurrency adjustments, and it uses feedback hysteresis to suppress
-concurrency oscillations. And instead of waiting for delivery concurrency to
-throttle down to zero, a destination is declared "dead" after a configurable
-number of pseudo-cohorts reports connection or handshake failure.
+separate mechanisms: it supports less-than-1 feedback per delivery to allow for
+more gradual concurrency adjustments, and it uses feedback hysteresis to
+suppress concurrency oscillations. And instead of waiting for delivery
+concurrency to throttle down to zero, a destination is declared "dead" after a
+configurable number of pseudo-cohorts reports connection or handshake failure.
Summary of the Postfix 2.5 concurrency feedback algorithm
-We want to increment a destination's delivery concurrency after some (not
-necessarily consecutive) number of deliveries without connection or handshake
-failure. This is implemented with positive feedback g(N) where N is the
-destination's delivery concurrency. With g(N)=1 we get the old scheduler's
-exponential growth in time, while g(N)=1/N gives linear growth in time. Less-
-than-1 feedback and integer truncation naturally give us hysteresis, so that
-transitions to larger concurrency happen every 1/g(N) positive feedback events.
-
-We want to decrement a destination's delivery concurrency after some (not
-necessarily consecutive) number of deliveries suffer connection or handshake
-failure. This is implemented with negative feedback f(N) where N is the
-destination's delivery concurrency. With f(N)=1 we get the old scheduler's
-behavior where concurrency is throttled down dramatically after a single
-pseudo-cohort failure, while f(N)=1/N backs off more gently. Again, less-than-
-1 feedback and integer truncation naturally give us hysteresis, so that
-transitions to lower concurrency happen every 1/f(N) negative feedback events.
+We want to increment a destination's delivery concurrency when some (not
+necessarily consecutive) number of deliveries complete without connection or
+handshake failure. This is implemented with positive feedback g(N) where N is
+the destination's delivery concurrency. With g(N)=1 feedback per delivery,
+concurrency increases by 1 after each positive feedback event; this gives us
+the old scheduler's exponential growth in time. With g(N)=1/N feedback per
+delivery, concurrency increases by 1 after an entire pseudo-cohort N of
+positive feedback reports; this gives us linear growth in time. Less-than-
+1 feedback per delivery and integer truncation naturally give us hysteresis, so
+that transitions to larger concurrency happen every 1/g(N) positive feedback
+events.
+
+We want to decrement a destination's delivery concurrency when some (not
+necessarily consecutive) number of deliveries complete after connection or
+handshake failure. This is implemented with negative feedback f(N) where N is
+the destination's delivery concurrency. With f(N)=1 feedback per delivery,
+concurrency decreases by 1 after each negative feedback event; this gives us
+the old scheduler's behavior where concurrency is throttled down dramatically
+after a single pseudo-cohort failure. With f(N)=1/N feedback per delivery,
+concurrency backs off more gently. Again, less-than-1 feedback per delivery and
+integer truncation naturally give us hysteresis, so that transitions to lower
+concurrency happen every 1/f(N) negative feedback events.
However, with negative feedback we introduce a subtle twist. We "reverse" the
-hysteresis cycle so that the transition to lower concurrency happens at the
-beginning of a sequence of 1/f(N) negative feedback events. Otherwise, a
-correction for overload would be made too late. In the case of a concurrency-
-limited server, this makes the choice of f(N) relatively unimportant, as borne
-out by measurements.
+negative hysteresis cycle so that the transition to lower concurrency happens
+at the beginning of a sequence of 1/f(N) negative feedback events. Otherwise, a
+correction for overload would be made too late. This makes the choice of f(N)
+relatively unimportant, as borne out by measurements later in this document.
In summary, the main ingredients for the Postfix 2.5 concurrency feedback
-algorithm are a) the option of less-than-1 positive feedback to avoid
-overwhelming servers, b) the option of less-than-1 negative feedback to avoid
-or giving up too fast, c) feedback hysteresis to avoid rapid oscillation, and
-c) a "reverse" hysteresis cycle for negative feedback, so that it can correct
-for overload quickly.
+algorithm are a) the option of less-than-1 positive feedback per delivery to
+avoid overwhelming servers, b) the option of less-than-1 negative feedback per
+delivery to avoid giving up too fast, c) feedback hysteresis to avoid rapid
+oscillation, and d) a "reverse" hysteresis cycle for negative feedback, so that
+it can correct for overload quickly.
Summary of the Postfix 2.5 "dead destination" detection algorithm
We want to suspend deliveries to a specific destination after some number of
deliveries suffers connection or handshake failure. The old scheduler declares
a destination "dead" when negative (-1) feedback throttles the delivery
-concurrency down to zero. With less-than-1 feedback, this throttling down would
-obviously take too long. We therefore have to separate "dead destination"
-detection from concurrency feedback. This is implemented by introducing the
-concept of pseudo-cohort failure. The Postfix 2.5 concurrency scheduler
-declares a destination "dead" after a configurable number of pseudo-cohort
-failures. The old scheduler corresponds to the special case where the pseudo-
-cohort failure limit is equal to 1.
+concurrency down to zero. With less-than-1 feedback per delivery, this
+throttling down would obviously take too long. We therefore have to separate
+"dead destination" detection from concurrency feedback. This is implemented by
+introducing the concept of pseudo-cohort failure. The Postfix 2.5 concurrency
+scheduler declares a destination "dead" after a configurable number of pseudo-
+cohorts suffers from connection or handshake failures. The old scheduler
+corresponds to the special case where the pseudo-cohort failure limit is equal
+to 1.
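
As an illustration (with hypothetical numbers): each failed delivery adds
1/N to a per-destination pseudo-cohort failure count, where N is the
delivery concurrency at the time of the failure, and a successful delivery
resets the count to zero. With a pseudo-cohort failure limit of 2 and a
steady concurrency of 5, more than 10 consecutive failed deliveries would
be needed; in practice fewer suffice, because negative feedback lowers N,
and thus increases the 1/N increments, while the failures accumulate.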
Pseudocode for the Postfix 2.5 concurrency scheduler
qmgr/qmgr_queue.c.
Types:
- Each destination has one set of the following variables
- int window
+ Each destination has one set of the following variables
+ int concurrency
double success
double failure
double fail_cohorts
Feedback functions:
- N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
+ N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
positive feedback: g(N) = x/N | x/sqrt(N) | x
negative feedback: f(N) = y/N | y/sqrt(N) | y
Initialization:
- window = initial_concurrency
+ concurrency = initial_concurrency
success = 0
failure = 0
fail_cohorts = 0
After success:
fail_cohorts = 0
Be prepared for feedback > hysteresis, or rounding error
- success += g(window)
- while (success >= 1) Hysteresis 1
- window += 1 Hysteresis 1
+ success += g(concurrency)
+ while (success >= 1) Hysteresis 1
+ concurrency += 1 Hysteresis 1
failure = 0
- success -= 1 Hysteresis 1
+ success -= 1 Hysteresis 1
Be prepared for overshoot
- if (window > concurrency limit)
- window = concurrency limit
+ if (concurrency > concurrency limit)
+ concurrency = concurrency limit
Safety:
Don't apply positive feedback unless
- window < busy_refcount + init_dest_concurrency
+ concurrency < busy_refcount + init_dest_concurrency
otherwise negative feedback effect could be delayed
After failure:
- if (window > 0)
- fail_cohorts += 1.0 / window
+ if (concurrency > 0)
+ fail_cohorts += 1.0 / concurrency
if (fail_cohorts > cohort_failure_limit)
- window = 0
- if (window > 0)
+ concurrency = 0
+ if (concurrency > 0)
Be prepared for feedback > hysteresis, rounding errors
- failure -= f(window)
+ failure -= f(concurrency)
while (failure < 0)
- window -= 1 Hysteresis 1
- failure += 1 Hysteresis 1
+ concurrency -= 1 Hysteresis 1
+ failure += 1 Hysteresis 1
success = 0
Be prepared for overshoot
- if (window < 1)
- window = 1
+ if (concurrency < 1)
+ concurrency = 1
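
The following standalone C program is a sketch of the feedback and
hysteresis logic above; all names and constants are hypothetical, the
real code in qmgr/qmgr_queue.c differs in bookkeeping detail, and the
"Safety" guard (busy_refcount) is omitted because this toy program has
no in-flight delivery bookkeeping.

    #include <stdio.h>

    typedef struct {
        int     concurrency;
        double  success;                /* accumulated positive feedback */
        double  failure;                /* accumulated negative feedback */
        double  fail_cohorts;           /* pseudo-cohort failure count */
    } DEST;

    #define INIT_CONCURRENCY        5
    #define CONCURRENCY_LIMIT       20
    #define COHORT_FAILURE_LIMIT    1

    /* Positive feedback g(N) = x/N with x = 1: linear growth in time. */
    static double g(int n) { return (1.0 / n); }

    /* Negative feedback f(N) = y/N with y = 1. */
    static double f(int n) { return (1.0 / n); }

    static void after_success(DEST *dp)
    {
        dp->fail_cohorts = 0;
        dp->success += g(dp->concurrency);
        while (dp->success >= 1) {                  /* hysteresis 1 */
            dp->concurrency += 1;
            dp->failure = 0;
            dp->success -= 1;
        }
        if (dp->concurrency > CONCURRENCY_LIMIT)    /* overshoot */
            dp->concurrency = CONCURRENCY_LIMIT;
    }

    static void after_failure(DEST *dp)
    {
        if (dp->concurrency > 0) {
            dp->fail_cohorts += 1.0 / dp->concurrency;
            if (dp->fail_cohorts > COHORT_FAILURE_LIMIT)
                dp->concurrency = 0;                /* destination "dead" */
        }
        if (dp->concurrency > 0) {
            dp->failure -= f(dp->concurrency);
            while (dp->failure < 0) {               /* "reverse" hysteresis */
                dp->concurrency -= 1;
                dp->failure += 1;
            }
            dp->success = 0;
            if (dp->concurrency < 1)                /* overshoot */
                dp->concurrency = 1;
        }
    }

    int main(void)
    {
        DEST    dest = {INIT_CONCURRENCY, 0, 0, 0};
        int     i;

        /* One full pseudo-cohort of successes raises concurrency by 1. */
        for (i = 0; i < 5; i++)
            after_success(&dest);
        printf("5 successes: concurrency %d\n", dest.concurrency);

        /* A single failure lowers concurrency immediately. */
        after_failure(&dest);
        printf("1 failure:   concurrency %d\n", dest.concurrency);
        return (0);
    }

With g(N) = f(N) = 1/N this prints concurrency 6 after the five successes
and 5 after the single failure: the increment happens at the end of a
1/g(N) sequence of positive feedback events, the decrement at the
beginning of a 1/f(N) sequence of negative feedback events.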
-Results for the Postfix 2.5 concurrency feedback scheduler
+Results for delivery to concurrency limited servers
Discussions about the concurrency scheduler redesign started in early 2004, when
the primary goal was to find alternatives that did not exhibit exponential
2007, when the primary concern had shifted towards better handling of server
concurrency limits. For this reason we measure how well the new scheduler does
this job. The table below compares mail delivery performance of the old +/-
-1 feedback with other feedback functions, for different server concurrency
-enforcement methods. Measurements were done with a FreeBSD 6.2 client and with
-FreeBSD 6.2 and various Linux servers.
+1 feedback per delivery with several less-than-1 feedback functions, for
+different limited-concurrency server scenarios. Measurements were done with a
+FreeBSD 6.2 client and with FreeBSD 6.2 and various Linux servers.
Server configuration:
* The mail flow was slowed down with 1 second latency per recipient
("smtpd_client_restrictions = sleep 1"). The purpose was to make results
- less dependent on hardware details, by reducing the slow-downs by disk I/O,
- logging I/O, and network I/O.
+ less dependent on hardware details, by avoiding slow-downs by queue file I/
+ O, logging I/O, and network I/O.
* Concurrency was limited by the server process limit ("default_process_limit
- = 5", "smtpd_client_event_limit_exceptions = static:all"). Postfix was
+ = 5" and "smtpd_client_event_limit_exceptions = static:all"). Postfix was
stopped and started after changing the process limit, because the same
number is also used as the backlog argument to the listen(2) system call,
and "postfix reload" does not re-issue this call.
"smtp_destination_recipient_limit = 2". A smaller limit would cause Postfix
to schedule the concurrency per recipient instead of domain, which is not
what we want.
- * Maximal concurrency was limited with "smtp_destination_concurrency_limit =
+ * Maximum concurrency was limited with "smtp_destination_concurrency_limit =
20", and initial_destination_concurrency was set to the same value.
* The positive and negative concurrency feedback hysteresis was 1.
Concurrency was incremented by 1 at the END of 1/feedback steps of positive
* The SMTP client used the default 30s SMTP connect timeout and 300s SMTP
greeting timeout.
+Impact of the 30s SMTP connect timeout
+
The first results are for a FreeBSD 6.2 server, where our artificially low
listen(2) backlog results in a very short kernel queue for established
-connections. As the table shows, all deferred deliveries failed due to a 30s
+connections. The table shows that all deferred deliveries failed due to a 30s
connection timeout, and none failed due to a server greeting timeout. This
measurement simulates what happens when the server's connection queue is
completely full under load, and the TCP engine drops new connections.
A busy server with a completely full connection queue. N is the client
delivery concurrency. Failed deliveries time out after 30s without
- completing the TCP handshake. See below for a discussion of results.
+ completing the TCP handshake. See text for a discussion of results.
+
+Impact of the 300s SMTP greeting timeout
The next table shows results for a Fedora Core 8 server (results for RedHat 7.3
-are identical). In this case, the listen(2) backlog argument has little if any
-effect on the kernel's established connection queue. As the table shows,
-practically all deferred deliveries fail after the 300s SMTP greeting timeout.
-As these timeouts were 10x longer than with the previous measurement, we
-increased the recipient count (and thus the running time) by a factor of 10 to
-keep the results comparable.
+are identical). In this case, the artificially small listen(2) backlog argument
+does not impact our measurement. The table shows that practically all deferred
+deliveries fail after the 300s SMTP greeting timeout. As these timeouts were
+10x longer than with the first measurement, we increased the recipient count
+(and thus the running time) by a factor of 10 to keep the results comparable.
+The deferred mail percentages are a factor of 10 lower than with the first
+measurement, because the 1s per-recipient delay was 1/300th of the greeting
+timeout instead of 1/30th of the connection timeout.
client  server  feedback  connection  percentage  client       timed-out in
limit   limit   style     caching     deferred    concurrency  connect/
A busy server with a non-full connection queue. N is the client delivery
concurrency. Failed deliveries complete at the TCP level, but time out
- after 300s while waiting for the SMTP greeting. See below for a discussion
+ after 300s while waiting for the SMTP greeting. See text for a discussion
of results.
+Impact of active server concurrency limiter
+
The final concurrency limited result shows what happens when SMTP connections
don't time out, but are rejected immediately with the Postfix server's
-smtpd_client_connection_count_limit feature. Similar results can be expected
-with concurrency limiting features built into other MTAs or firewalls. For this
+smtpd_client_connection_count_limit feature (the server replies with a 421
+status and disconnects immediately). Similar results can be expected with
+concurrency limiting features built into other MTAs or firewalls. For this
measurement we specified a server concurrency limit and a client initial
-destination concurrency of 5, and a server process limit of 10. The server was
-FreeBSD 6.2 but that does not matter here, because the "push back" is done
-entirely by the server's Postfix itself.
+destination concurrency of 5, and a server process limit of 10; all other
+conditions were the same as with the first measurement. The same result would
+be obtained with a FreeBSD or Linux server, because the "pushing back" is done
+entirely by the receiving Postfix.
client  server  feedback  connection  percentage  client       theoretical
limit   limit   style     caching     deferred    concurrency  defer rate
-------------------------------------------------------------------------
A server with active per-client concurrency limiter that replies with 421
- and disconnects. N is the client delivery concurrency. The theoretical mail
- deferral rate is 1/(1+roundup(1/feedback)). This is always 1/2 with the
- fixed +/-1 feedback; with the variable feedback variants, the defer rate
- decreases with increasing concurrency. See below for a discussion of
- results.
-
-The results are based on the first delivery runs only; they do not include any
-second etc. delivery attempts.
-
-The first two examples show that the feedback method matters little when
-concurrency is limited due to congestion. This is because the initial
-concurrency was already at the client's concurrency maximum, and because there
-was 10-100 times more positive than negative feedback. The contribution from
-SMTP connection caching was also minor for these two examples.
-
-In the last example, the old +/-1 feedback scheduler defers 50% of the mail
-when confronted with an active (anvil-style) server concurrency limit, where
-the server hangs up immediately with a 421 status (a TCP-level RST would have
-the same result). Less aggressive feedback mechanisms fare better here, and the
-concurrency-dependent feedback fares even better at higher concurrencies than
-shown here, but they have limitations as discussed in the next section.
-
-Limitations of less-than-1 feedback
-
-The delivery concurrency scheduler with less-than-1 feedback solves a problem
-with servers that have active concurrency limiters, but this works well only
-because feedback is handled in a peculiar manner: positive feedback increments
-the concurrency by 1 at the end of a sequence of events of length 1/feedback,
-while negative feedback decrements concurrency by 1 at the beginning of such a
-sequence. This is how Postfix adjusts quickly for overshoot without causing
-lots of mail to be deferred. Without this difference in feedback treatment,
-less-than-1 feedback would defer 50% of the mail, and would be no better in
-this respect than the simple +/-1 feedback scheduler.
+ and disconnects. N is the client delivery concurrency. The theoretical
+ defer rate is 1/(1+roundup(1/feedback)). This is always 1/2 with the fixed
+ +/-1 feedback per delivery; with the concurrency-dependent feedback
+ variants, the defer rate decreases with increasing concurrency. See text
+ for a discussion of results.
+
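
As an illustration of the formula (numbers chosen for the example, not
taken from the table): with fixed positive feedback of 1/4 per delivery,
roundup(1/feedback) = 4, so the theoretical defer rate is 1/(1+4) = 20%;
with 1/N feedback at client delivery concurrency N=20, the rate drops to
1/(1+20) = 1/21, or just under 5%.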
+D\bDi\bis\bsc\bcu\bus\bss\bsi\bio\bon\bn o\bof\bf c\bco\bon\bnc\bcu\bur\brr\bre\ben\bnc\bcy\by l\bli\bim\bmi\bit\bte\bed\bd s\bse\ber\brv\bve\ber\br r\bre\bes\bsu\bul\blt\bts\bs
+
+All results in the previous sections are based on the first delivery runs only;
+they do not include any second etc. delivery attempts. The first two examples
+show that the feedback method matters little when concurrency is limited due to
+congestion. This is because the initial concurrency is already at the client's
+concurrency maximum, and because there is 10-100 times more positive than
+negative feedback. Under these conditions, the contribution from SMTP
+connection caching is negligible.
+
+In the last example, the old +/-1 feedback per delivery will defer 50% of the
+mail when confronted with an active (anvil-style) server concurrency limit,
+where the server hangs up immediately with a 421 status (a TCP-level RST would
+have the same result). Less aggressive feedback mechanisms fare better than
+more aggressive ones. Concurrency-dependent feedback fares even better at
+higher concurrencies than shown here, but has limitations as discussed in the
+next section.
+
+Limitations of less-than-1 per delivery feedback
+
+The delivery concurrency scheduler with less-than-1 concurrency feedback per
+delivery solves a problem with servers that have active concurrency limiters.
+This works only because feedback is handled in a peculiar manner: positive
+feedback will increment the concurrency by 1 at the end of a sequence of events
+of length 1/feedback, while negative feedback will decrement concurrency by 1
+at the beginning of such a sequence. This is how Postfix adjusts quickly for
+overshoot without causing lots of mail to be deferred. Without this difference
+in feedback treatment, less-than-1 feedback per delivery would defer 50% of the
+mail, and would be no better in this respect than the old +/-1 feedback per
+delivery.
Unfortunately, the same feature that corrects quickly for concurrency overshoot
also makes the scheduler more sensitive to noisy negative feedback. The reason
is that one lonely negative feedback event has the same effect as a complete
sequence of length 1/feedback: in both cases delivery concurrency is dropped by
-1 immediately. For example, when multiple servers are placed behind a load
-balancer on a single IP address, and 1 out of K servers fails to complete the
-SMTP handshake, a scheduler with 1/N (N = concurrency) feedback will stop
-increasing its concurrency once it reaches roughly K. Even though the good
-servers behind the load balancer are perfectly capable of handling more mail,
-the 1/N feedback scheduler will linger around concurrency K.
-
-This problem with 1/N feedback gets worse as 1/N gets smaller. A workaround is
-to use fixed less-than-1 values for positive and negative feedback that limit
-the noise sensitivity, for example: positive feedback of 1/4 and negative
-feedback 1/10. Of course using fixed feedback means concurrency growth is
-moderated only for a limited range of concurrencies. Sites that deliver at per-
-destination concurrencies of 50 or more will require special configuration.
+1 immediately. As a worst-case scenario, consider multiple servers behind a
+load balancer on a single IP address, and no backup MX address. When 1 out of K
+servers fails to complete the SMTP handshake or drops the connection, a
+scheduler with 1/N (N = concurrency) feedback stops increasing its concurrency
+once it reaches a concurrency level of about K, even though the good servers
+behind the load balancer are perfectly capable of handling more traffic.
+
+This noise problem gets worse as the amount of positive feedback per delivery
+gets smaller. A compromise is to avoid concurrency-dependent positive feedback,
+and to use fixed less-than-1 feedback values instead. For example, to tolerate
+1 of 4 bad servers in the above load balancer scenario, use positive feedback
+of 1/4 per "good" delivery (no connect or handshake error), and use an equal or
+smaller amount of negative feedback per "bad" delivery. The downside of using
+concurrency-independent feedback is that some of the old +/-1 feedback problems
+will return at large concurrencies. Sites that deliver at non-trivial per-
+destination concurrencies will require special configuration.
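
To make this compromise concrete, the fixed feedback above could be
configured as follows (illustrative values, not recommendations):

    /etc/postfix/main.cf:
        # Tolerate 1 in 4 "bad" servers behind a load balancer: fixed,
        # concurrency-independent feedback for the smtp transport.
        smtp_destination_concurrency_positive_feedback = 1/4
        smtp_destination_concurrency_negative_feedback = 1/8
        # Declare a destination "dead" only after 10 failed pseudo-cohorts,
        # instead of the backwards-compatible default of 1.
        smtp_destination_concurrency_failed_cohort_limit = 10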
+
+Concurrency configuration parameters
+
+The Postfix 2.5 concurrency scheduler is controlled with the following
+configuration parameters, where "transport_foo" provides a transport-specific
+parameter override. All parameter default settings are compatible with earlier
+Postfix versions.
+
+     Parameter name                                    Postfix  Description
+                                                       version
+
+ ---------------------------------------------------------------------------
+ Initial per-
+ initial_destination_concurrency all destination
+ transport_initial_destination_concurrency 2.5 delivery
+ concurrency
+
+ Maximum per-
+ default_destination_concurrency_limit all destination
+ transport_destination_concurrency_limit all delivery
+ concurrency
+
+ Per-
+ destination
+ positive
+ feedback
+ default_destination_concurrency_positive_feedback 2.5 amount, per
+ transport_destination_concurrency_positive_feedback 2.5 delivery that
+ does not fail
+ with
+ connection or
+ handshake
+ failure
+
+ Per-
+ destination
+ negative
+ feedback
+ default_destination_concurrency_negative_feedback 2.5 amount, per
+ transport_destination_concurrency_negative_feedback 2.5 delivery that
+ fails with
+ connection or
+ handshake
+ failure
+
+ Number of
+ failed
+ pseudo-
+ cohorts after
+ default_destination_concurrency_failed_cohort_limit 2.5 which a
+ transport_destination_concurrency_failed_cohort_limit 2.5 destination
+ is declared
+ "dead" and
+ delivery is
+ suspended
+
+ Enable
+ verbose
+ destination_concurrency_feedback_debug 2.5 logging of
+ concurrency
+ scheduler
+ activity
+
+ ---------------------------------------------------------------------------
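
The settings that are in effect on a given system can be queried with the
postconf(1) command; the first example below prints the effective value,
the second lists the built-in defaults:

    $ postconf default_destination_concurrency_positive_feedback
    $ postconf -d | grep destination_concurrency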
Preemptive scheduling
and were the real coding work, but I believe that to understand the scheduling
algorithm itself (which was the real thinking work) is fairly easy.
+Credits
+
+ * Wietse Venema designed and implemented the initial queue manager with per-
+ domain FIFO scheduling, and per-delivery +/-1 concurrency feedback.
+ * Patrik Rak designed and implemented preemption where mail with fewer
+ recipients can slip past mail with more recipients.
+ * Wietse Venema initiated a discussion with Patrik Rak and Victor Duchovni on
+ alternatives for the +/-1 feedback scheduler's aggressive behavior. This is
+ when K/N feedback was reviewed (N = concurrency). The discussion ended
+ without a good solution for both negative feedback and dead site detection.
+ * Victor Duchovni resumed work on concurrency feedback in the context of
+ concurrency-limited servers.
+ * Wietse Venema then re-designed the concurrency scheduler in terms of
+ simplest possible concepts: less-than-1 concurrency feedback per delivery,
+ forward and reverse concurrency feedback hysteresis, and pseudo-cohort
+ failure. At this same time, concurrency feedback was separated from dead
+ site detection.
+ * These simplifications, and their modular implementation, helped to develop
+ further insights into the different roles that positive and negative
+ concurrency feedback play, and helped to avoid all the known worst-case
+ scenarios.
+
If you upgrade from Postfix 2.3 or earlier, read RELEASE_NOTES-2.4
before proceeding.
-Major changes with Postfix snapshot 20071129
+Major changes with Postfix snapshot 20071130
============================================
Revised queue manager with separate mechanisms for per-destination
low-concurrency channels where a single failure could be sufficient
to mark a destination as "dead", and suspend further deliveries.
-New configuration parameters: concurrency_feedback_debug,
-default_concurrency_positive_feedback,
-default_concurrency_negative_feedback,
-default_concurrency_failed_cohort_limit, as well as transport-specific
-versions of the same. See postconf(5) for extensive descriptions,
-and SCHEDULER_README for background information on why things work
-the way they work.
+New configuration parameters: destination_concurrency_feedback_debug,
+default_destination_concurrency_positive_feedback,
+default_destination_concurrency_negative_feedback,
+default_destination_concurrency_failed_cohort_limit, as well as
+transport-specific versions of the same. See postconf(5) for
+extensive descriptions, and SCHEDULER_README for background information
+on the theory and practice of how these settings work.
The default parameter settings are backwards compatible with older
Postfix versions. This may change after better defaults are field
<p> The queue manager is by far the most complex part of the Postfix
mail system. It schedules delivery of new mail, retries failed
deliveries at specific times, and removes mail from the queue after
-the last delivery attempt. Once started, the <a href="qmgr.8.html">qmgr(8)</a> process runs
-until "postfix reload" or "postfix stop". </p>
+the last delivery attempt. There are two major classes of mechanisms
+that control the operation of the queue manager. </p>
-<p> As a persistent process, the queue manager has to meet strict
-requirements with respect to code correctness and robustness. Unlike
-non-persistent daemon processes, the queue manager cannot benefit
-from Postfix's process rejuvenation mechanism that limit the impact
-from resource leaks and other coding errors. </p>
+<p> The first class of mechanisms is concerned with the number of
+concurrent deliveries to a specific destination, including decisions
+on when to suspend deliveries after persistent failures: </p>
-<p> There are two major classes of mechanisms that control the
-operation of the queue manager: </p>
+ <ul>
-<ul>
+ <li> <a href="#concurrency"> Concurrency scheduling </a>
-<li> <p> Mechanisms concerned with the number of concurrent deliveries
-to a specific destination, including decisions on when to suspend
-deliveries after persistent failures. These are described under "<a
-href="#concurrency">Concurrency scheduling</a>". </p>
+ <ul>
-<li> <p> Mechanisms concerned with the selection of what mail to
-deliver to a given destination. These are described under "<a
-href="#jobs">Preemptive scheduling</a>". </p>
+ <li> <a href="#concurrency_summary_2_5"> Summary of the
+ Postfix 2.5 concurrency feedback algorithm </a>
-</ul>
+ <li> <a href="#dead_summary_2_5"> Summary of the Postfix
+ 2.5 "dead destination" detection algorithm </a>
+
+ <li> <a href="#pseudo_code_2_5"> Pseudocode for the Postfix
+ 2.5 concurrency scheduler </a>
+
+ <li> <a href="#concurrency_results"> Results for delivery
+ to concurrency limited servers </a>
+
+ <li> <a href="#concurrency_discussion"> Discussion of
+ concurrency limited server results </a>
+
+ <li> <a href="#concurrency_limitations"> Limitations of
+ less-than-1 per delivery feedback </a>
+
+ <li> <a href="#concurrency_config"> Concurrency configuration
+ parameters </a>
+
+ </ul>
+
+ </ul>
+
+<p> The second class of mechanisms is concerned with the selection
+of what mail to deliver to a given destination: </p>
+
+ <ul>
+
+ <li> <a href="#jobs"> Preemptive scheduling </a>
+
+ <ul>
+
+ <li> <a href="#job_motivation"> Why the non-preemptive Postfix queue
+ manager was replaced </a>
+
+ <li> <a href="#job_design"> How the preemptive queue manager
+ scheduler works </a>
+
+ </ul>
+
+ </ul>
+
+<p> And this document would not be complete without: </p>
+
+ <ul>
+
+ <li> <a href="#credits"> Credits </a>
+
+ </ul>
+
+<!--
+
+<p> Once started, the <a href="qmgr.8.html">qmgr(8)</a> process runs until "postfix reload"
+or "postfix stop". As a persistent process, the queue manager has
+to meet strict requirements with respect to code correctness and
+robustness. Unlike non-persistent daemon processes, the queue manager
+cannot benefit from Postfix's process rejuvenation mechanism that
+limits the impact from resource leaks and other coding errors
+(translation: replacing a process after a short time covers up bugs
+before they can become a problem). </p>
+
+-->
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
destination's concurrency level dropped to zero, the destination
was declared "dead" and delivery was suspended. </p>
-<p> Drawbacks of the old +/-1 feedback concurrency scheduler are:
-<p>
+<p> Drawbacks of the old +/-1 feedback per delivery are: </p>
<ul>
It uses separate mechanisms for per-destination concurrency control
and for "dead destination" detection. The concurrency control in
turn is built from two separate mechanisms: it supports less-than-1
-feedback to allow for more gradual concurrency adjustments, and it
-uses feedback hysteresis to suppress concurrency oscillations. And
-instead of waiting for delivery concurrency to throttle down to
-zero, a destination is declared "dead" after a configurable number
-of pseudo-cohorts reports connection or handshake failure. </p>
-
-<h2> Summary of the Postfix 2.5 concurrency feedback algorithm </h2>
-
-<p> We want to increment a destination's delivery concurrency after
-some (not necessarily consecutive) number of deliveries without
-connection or handshake failure. This is implemented with positive
-feedback g(N) where N is the destination's delivery concurrency.
-With g(N)=1 we get the old scheduler's exponential growth in time,
-while g(N)=1/N gives linear growth in time. Less-than-1 feedback
-and integer truncation naturally give us hysteresis, so that
-transitions to larger concurrency happen every 1/g(N) positive
-feedback events. </p>
-
-<p> We want to decrement a destination's delivery concurrency after
-some (not necessarily consecutive) number of deliveries suffer
-connection or handshake failure. This is implemented with negative
-feedback f(N) where N is the destination's delivery concurrency.
-With f(N)=1 we get the old scheduler's behavior where concurrency
-is throttled down dramatically after a single pseudo-cohort failure,
-while f(N)=1/N backs off more gently. Again, less-than-1 feedback
-and integer truncation naturally give us hysteresis, so that
-transitions to lower concurrency happen every 1/f(N) negative
-feedback events. </p>
+feedback per delivery to allow for more gradual concurrency
+adjustments, and it uses feedback hysteresis to suppress concurrency
+oscillations. And instead of waiting for delivery concurrency to
+throttle down to zero, a destination is declared "dead" after a
+configurable number of pseudo-cohorts reports connection or handshake
+failure. </p>
+
+<h3> <a name="concurrency_summary_2_5"> Summary of the Postfix 2.5 concurrency feedback algorithm </a> </h3>
+
+<p> We want to increment a destination's delivery concurrency when
+some (not necessarily consecutive) number of deliveries complete
+without connection or handshake failure. This is implemented with
+positive feedback g(N) where N is the destination's delivery
+concurrency. With g(N)=1 feedback per delivery, concurrency increases
+by 1 after each positive feedback event; this gives us the old
+scheduler's exponential growth in time. With g(N)=1/N feedback per
+delivery, concurrency increases by 1 after an entire pseudo-cohort
+N of positive feedback reports; this gives us linear growth in time.
+Less-than-1 feedback per delivery and integer truncation naturally
+give us hysteresis, so that transitions to larger concurrency happen
+every 1/g(N) positive feedback events. </p>
+
+<p> We want to decrement a destination's delivery concurrency when
+some (not necessarily consecutive) number of deliveries complete
+after connection or handshake failure. This is implemented with
+negative feedback f(N) where N is the destination's delivery
+concurrency. With f(N)=1 feedback per delivery, concurrency decreases
+by 1 after each negative feedback event; this gives us the old
+scheduler's behavior where concurrency is throttled down dramatically
+after a single pseudo-cohort failure. With f(N)=1/N feedback per
+delivery, concurrency backs off more gently. Again, less-than-1
+feedback per delivery and integer truncation naturally give us
+hysteresis, so that transitions to lower concurrency happen every
+1/f(N) negative feedback events. </p>
<p> However, with negative feedback we introduce a subtle twist.
-We "reverse" the hysteresis cycle so that the transition to lower
-concurrency happens at the <b>beginning</b> of a sequence of 1/f(N)
-negative feedback events. Otherwise, a correction for overload
-would be made too late. In the case of a concurrency-limited server,
-this makes the choice of f(N) relatively unimportant, as borne out
-by measurements. </p>
+We "reverse" the negative hysteresis cycle so that the transition
+to lower concurrency happens at the <b>beginning</b> of a sequence
+of 1/f(N) negative feedback events. Otherwise, a correction for
+overload would be made too late. This makes the choice of f(N)
+relatively unimportant, as borne out by measurements later in this
+document. </p>
<p> In summary, the main ingredients for the Postfix 2.5 concurrency
feedback algorithm are a) the option of less-than-1 positive feedback
-to avoid overwhelming servers, b) the option of less-than-1 negative
-feedback to avoid or giving up too fast, c) feedback hysteresis to
-avoid rapid oscillation, and c) a "reverse" hysteresis cycle for
-negative feedback, so that it can correct for overload quickly. </p>
+per delivery to avoid overwhelming servers, b) the option of
+less-than-1 negative feedback per delivery to avoid giving up too
+fast, c) feedback hysteresis to avoid rapid oscillation, and d) a
+"reverse" hysteresis cycle for negative feedback, so that it can
+correct for overload quickly. </p>
-<h2> Summary of the Postfix 2.5 "dead destination" detection algorithm </h2>
+<h3> <a name="dead_summary_2_5"> Summary of the Postfix 2.5 "dead destination" detection algorithm </a> </h3>
<p> We want to suspend deliveries to a specific destination after
some number of deliveries suffers connection or handshake failure.
The old scheduler declares a destination "dead" when negative (-1)
feedback throttles the delivery concurrency down to zero. With
-less-than-1 feedback, this throttling down would obviously take too
-long. We therefore have to separate "dead destination" detection
-from concurrency feedback. This is implemented by introducing the
-concept of pseudo-cohort failure. The Postfix 2.5 concurrency
-scheduler declares a destination "dead" after a configurable number
-of pseudo-cohort failures. The old scheduler corresponds to the
-special case where the pseudo-cohort failure limit is equal to 1.
-</p>
+less-than-1 feedback per delivery, this throttling down would
+obviously take too long. We therefore have to separate "dead
+destination" detection from concurrency feedback. This is implemented
+by introducing the concept of pseudo-cohort failure. The Postfix
+2.5 concurrency scheduler declares a destination "dead" after a
+configurable number of pseudo-cohorts suffers from connection or
+handshake failures. The old scheduler corresponds to the special
+case where the pseudo-cohort failure limit is equal to 1. </p>
-<h2> Pseudocode for the Postfix 2.5 concurrency scheduler </h2>
+<h3> <a name="pseudo_code_2_5"> Pseudocode for the Postfix 2.5 concurrency scheduler </a> </h3>
<p> The pseudo code shows how the ideas behind the new concurrency
scheduler are implemented as of November 2007. The actual code can
<pre>
Types:
- Each destination has one set of the following variables
- int window
+ Each destination has one set of the following variables
+ int concurrency
double success
double failure
double fail_cohorts
Feedback functions:
- N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
+ N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
positive feedback: g(N) = x/N | x/sqrt(N) | x
negative feedback: f(N) = y/N | y/sqrt(N) | y
Initialization:
- window = initial_concurrency
+ concurrency = initial_concurrency
success = 0
failure = 0
fail_cohorts = 0
After success:
fail_cohorts = 0
Be prepared for feedback > hysteresis, or rounding error
- success += g(window)
- while (success >= 1) Hysteresis 1
- window += 1 Hysteresis 1
+ success += g(concurrency)
+ while (success >= 1) Hysteresis 1
+ concurrency += 1 Hysteresis 1
failure = 0
- success -= 1 Hysteresis 1
+ success -= 1 Hysteresis 1
Be prepared for overshoot
- if (window > concurrency limit)
- window = concurrency limit
+ if (concurrency > concurrency limit)
+ concurrency = concurrency limit
Safety:
Don't apply positive feedback unless
- window < busy_refcount + init_dest_concurrency
+ concurrency < busy_refcount + init_dest_concurrency
otherwise negative feedback effect could be delayed
After failure:
- if (window > 0)
- fail_cohorts += 1.0 / window
+ if (concurrency > 0)
+ fail_cohorts += 1.0 / concurrency
if (fail_cohorts > cohort_failure_limit)
- window = 0
- if (window > 0)
+ concurrency = 0
+ if (concurrency > 0)
Be prepared for feedback > hysteresis, rounding errors
- failure -= f(window)
+ failure -= f(concurrency)
while (failure < 0)
- window -= 1 Hysteresis 1
- failure += 1 Hysteresis 1
+ concurrency -= 1 Hysteresis 1
+ failure += 1 Hysteresis 1
success = 0
Be prepared for overshoot
- if (window < 1)
- window = 1
+ if (concurrency < 1)
+ concurrency = 1
</pre>
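
<p> To illustrate the hysteresis mechanics (a constructed trace, not
actual program output): with positive feedback g(N)=1/N, negative
feedback f(N)=1/N, initial concurrency 5, and hysteresis 1, five
successful deliveries followed by one failed delivery evolve the
per-destination variables as follows: </p>

<blockquote>
<pre>
event           success  failure  concurrency
initial          0.0      0.0         5
successes 1-4    0.2-0.8  0.0         5
success 5        0.0      0.0         6   success reached 1; increment
failure 1        0.0      5/6         5   decrement at the start of
                                          the 1/f(N) sequence
</pre>
</blockquote>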
-<h2> Results for the Postfix 2.5 concurrency feedback scheduler </h2>
+<h3> <a name="concurrency_results"> Results for delivery to concurrency limited servers </a> </h3>
<p> Discussions about the concurrency scheduler redesign started
in early 2004, when the primary goal was to find alternatives that did
shifted towards better handling of server concurrency limits. For
this reason we measure how well the new scheduler does this
job. The table below compares mail delivery performance of the old
-+/-1 feedback with other feedback functions, for different server
-concurrency enforcement methods. Measurements were done with a
-FreeBSD 6.2 client and with FreeBSD 6.2 and various Linux servers.
-</p>
++/-1 feedback per delivery with several less-than-1 feedback
+functions, for different limited-concurrency server scenarios.
+Measurements were done with a FreeBSD 6.2 client and with FreeBSD
+6.2 and various Linux servers. </p>
-<li> Server configuration:
+<p> Server configuration: </p>
<ul> <li> The mail flow was slowed down with 1 second latency per
recipient ("<a href="postconf.5.html#smtpd_client_restrictions">smtpd_client_restrictions</a> = sleep 1"). The purpose was
-to make results less dependent on hardware details, by reducing the
-slow-downs by disk I/O, logging I/O, and network I/O.
+to make results less dependent on hardware details, by avoiding
+slow-downs by queue file I/O, logging I/O, and network I/O.
<li> Concurrency was limited by the server process limit
-("<a href="postconf.5.html#default_process_limit">default_process_limit</a> = 5", "<a href="postconf.5.html#smtpd_client_event_limit_exceptions">smtpd_client_event_limit_exceptions</a>
+("<a href="postconf.5.html#default_process_limit">default_process_limit</a> = 5" and "<a href="postconf.5.html#smtpd_client_event_limit_exceptions">smtpd_client_event_limit_exceptions</a>
= static:all"). Postfix was stopped and started after changing the
process limit, because the same number is also used as the backlog
argument to the listen(2) system call, and "postfix reload" does
</ul>
-<li> Client configuration:
+<p> Client configuration: </p>
<ul>
Postfix to schedule the concurrency per recipient instead of domain,
which is not what we want.
-<li> Maximal concurrency was limited with
+<li> Maximum concurrency was limited with
"<a href="postconf.5.html#smtp_destination_concurrency_limit">smtp_destination_concurrency_limit</a> = 20", and
<a href="postconf.5.html#initial_destination_concurrency">initial_destination_concurrency</a> was set to the same value.
</ul>
+<h4> Impact of the 30s SMTP connect timeout </h4>
+
<p> The first results are for a FreeBSD 6.2 server, where our
artificially low listen(2) backlog results in a very short kernel
-queue for established connections. As the table shows, all deferred
+queue for established connections. The table shows that all deferred
deliveries failed due to a 30s connection timeout, and none failed
due to a server greeting timeout. This measurement simulates what
happens when the server's connection queue is completely full under
<p> A busy server with a completely full connection queue. N is
the client delivery concurrency. Failed deliveries time out after
-30s without completing the TCP handshake. See below for a discussion
+30s without completing the TCP handshake. See text for a discussion
of results. </p>
</blockquote>
+<h4> Impact of the 300s SMTP greeting timeout </h4>
+
<p> The next table shows results for a Fedora Core 8 server (results
-for RedHat 7.3 are identical). In this case, the listen(2) backlog
-argument has little if any effect on the kernel's established
-connection queue. As the table shows, practically all deferred
-deliveries fail after the 300s SMTP greeting timeout. As these
-timeouts were 10x longer than with the previous measurement, we
-increased the recipient count (and thus the running time) by a
-factor of 10 to keep the results comparable. </p>
+for RedHat 7.3 are identical). In this case, the artificially small
+listen(2) backlog argument does not impact our measurement. The
+table shows that practically all deferred deliveries fail after the
+300s SMTP greeting timeout. As these timeouts were 10x longer than
+with the first measurement, we increased the recipient count (and
+thus the running time) by a factor of 10 to keep the results
+comparable. The deferred mail percentages are a factor of 10 lower
+than with the first measurement, because the 1s per-recipient delay
+was 1/300th of the greeting timeout instead of 1/30th of the
+connection timeout. </p>
<blockquote>
<p> A busy server with a non-full connection queue. N is the client
delivery concurrency. Failed deliveries complete at the TCP level,
but time out after 300s while waiting for the SMTP greeting. See
-below for a discussion of results. </p>
+text for a discussion of results. </p>
</blockquote>
+<h4> Impact of active server concurrency limiter </h4>
<p> The final concurrency limited result shows what happens when
SMTP connections don't time out, but are rejected immediately with
-the Postfix server's <a href="postconf.5.html#smtpd_client_connection_count_limit">smtpd_client_connection_count_limit</a> feature.
+the Postfix server's <a href="postconf.5.html#smtpd_client_connection_count_limit">smtpd_client_connection_count_limit</a> feature
+(the server replies with a 421 status and disconnects immediately).
Similar results can be expected with concurrency limiting features
built into other MTAs or firewalls. For this measurement we specified
a server concurrency limit and a client initial destination concurrency
-of 5, and a server process limit of 10. The server was FreeBSD 6.2
-but that does not matter here, because the "push back" is done
-entirely by the server's Postfix itself. </p>
+of 5, and a server process limit of 10; all other conditions were
+the same as with the first measurement. The same result would be
+obtained with a FreeBSD or Linux server, because the "pushing back"
+is done entirely by the receiving Postfix. </p>
<blockquote>
<p> A server with active per-client concurrency limiter that replies
with 421 and disconnects. N is the client delivery concurrency.
-The theoretical mail deferral rate is 1/(1+roundup(1/feedback)).
-This is always 1/2 with the fixed +/-1 feedback; with the variable
-feedback variants, the defer rate decreases with increasing
-concurrency. See below for a discussion of results. </p>
+The theoretical defer rate is 1/(1+roundup(1/feedback)). This is
+always 1/2 with the fixed +/-1 feedback per delivery; with the
+concurrency-dependent feedback variants, the defer rate decreases
+with increasing concurrency. See text for a discussion of results.
+</p>
</blockquote>
-<p> The results are based on the first delivery runs only; they do
-not include any second etc. delivery attempts.
-
-<p> The first two examples show that the feedback method matters
-little when concurrency is limited due to congestion. This is because
-the initial concurrency was already at the client's concurrency
-maximum, and because there was 10-100 times more positive than
-negative feedback. The contribution from SMTP connection caching
-was also minor for these two examples. </p>
-
-<p> In the last example, the old +/-1 feedback scheduler defers 50%
-of the mail when confronted with an active (anvil-style) server
-concurrency limit, where the server hangs up immediately with a 421
-status (a TCP-level RST would have the same result). Less aggressive
-feedback mechanisms fare better here, and the concurrency-dependent
-feedback fares even better at higher concurrencies than shown here,
-but they have limitations as discussed in the next section. </p>
-
-<h2> Limitations of less-than-1 feedback </h2>
-
-<p> The delivery concurrency scheduler with less-than-1 feedback
-solves a problem with servers that have active concurrency limiters,
-but this works well only because feedback is handled in a peculiar
-manner: positive feedback increments the concurrency by 1 at the
-end of a sequence of events of length 1/feedback, while negative
-feedback decrements concurrency by 1 at the beginning of such a
-sequence. This is how Postfix adjusts quickly for overshoot without
-causing lots of mail to be deferred. Without this difference in
-feedback treatment, less-than-1 feedback would defer 50% of the
-mail, and would be no better in this respect than the simple +/-1
-feedback scheduler. </p>
+<h3> <a name="concurrency_discussion"> Discussion of concurrency limited server results </a> </h3>
+
+<p> All results in the previous sections are based on the first
+delivery runs only; they do not include any second etc. delivery
+attempts. The first two examples show that the feedback method
+matters little when concurrency is limited due to congestion. This
+is because the initial concurrency is already at the client's
+concurrency maximum, and because there is 10-100 times more positive
+than negative feedback. Under these conditions, the contribution
+from SMTP connection caching is negligible. </p>
+
+<p> In the last example, the old +/-1 feedback per delivery will
+defer 50% of the mail when confronted with an active (anvil-style)
+server concurrency limit, where the server hangs up immediately
+with a 421 status (a TCP-level RST would have the same result).
+Less aggressive feedback mechanisms fare better than more aggressive
+ones. Concurrency-dependent feedback fares even better at higher
+concurrencies than shown here, but has limitations as discussed in
+the next section. </p>
+
+<h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>
+
+<p> The delivery concurrency scheduler with less-than-1 concurrency
+feedback per delivery solves a problem with servers that have active
+concurrency limiters. This works only because feedback is handled
+in a peculiar manner: positive feedback will increment the concurrency
+by 1 at the <b>end</b> of a sequence of events of length 1/feedback,
+while negative feedback will decrement concurrency by 1 at the
+<b>beginning</b> of such a sequence. This is how Postfix adjusts
+quickly for overshoot without causing lots of mail to be deferred.
+Without this difference in feedback treatment, less-than-1 feedback
+per delivery would defer 50% of the mail, and would be no better
+in this respect than the old +/-1 feedback per delivery. </p>
<p> Unfortunately, the same feature that corrects quickly for
concurrency overshoot also makes the scheduler more sensitive to
noisy negative feedback. The reason is that one lonely negative
feedback event has the same effect as a complete sequence of length
1/feedback: in both cases delivery concurrency is dropped by 1
-immediately. For example, when multiple servers are placed behind
-a load balancer on a single IP address, and 1 out of K servers fails
-to complete the SMTP handshake, a scheduler with 1/N (N = concurrency)
-feedback will stop increasing its concurrency once it reaches roughly
-K. Even though the good servers behind the load balancer are
-perfectly capable of handling more mail, the 1/N feedback scheduler
-will linger around concurrency K. </p>
-
-<p> This problem with 1/N feedback gets worse as 1/N gets smaller.
-A workaround is to use fixed less-than-1 values for positive and
-negative feedback that limit the noise sensitivity, for example:
-positive feedback of 1/4 and negative feedback 1/10. Of course
-using fixed feedback means concurrency growth is moderated only for
-a limited range of concurrencies. Sites that deliver at per-destination
-concurrencies of 50 or more will require special configuration.
-</p>
+immediately. As a worst-case scenario, consider multiple servers
+behind a load balancer on a single IP address, and no backup MX
+address. When 1 out of K servers fails to complete the SMTP handshake
+or drops the connection, a scheduler with 1/N (N = concurrency)
+feedback stops increasing its concurrency once it reaches a concurrency
+level of about K, even though the good servers behind the load
+balancer are perfectly capable of handling more traffic. </p>
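+
+<p> To see why, recall that one failure immediately costs a full
+concurrency step, while with 1/N feedback a gain of one step requires
+about N failure-free deliveries. Once the concurrency approaches
+K, a window of N deliveries is likely to include at least one
+delivery to the bad server, so gains and losses cancel and the
+concurrency stalls near K. </p>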
+
+<p> This noise problem gets worse as the amount of positive feedback
+per delivery gets smaller. A compromise is to avoid concurrency-dependent
+positive feedback, and to use fixed less-than-1 feedback values
+instead. For example, to tolerate 1 of 4 bad servers in the above
+load balancer scenario, use positive feedback of 1/4 per "good"
+delivery (no connect or handshake error), and use an equal or smaller
+amount of negative feedback per "bad" delivery. The downside of
+using concurrency-independent feedback is that some of the old +/-1
+feedback problems will return at large concurrencies. Sites that
+deliver at non-trivial per-destination concurrencies will require
+special configuration. </p>
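+
+<p> For example, the compromise described above could be configured
+for the smtp transport as follows (illustrative values; suitable
+settings depend on the expected failure modes): </p>
+
+<blockquote>
+<pre>
+/etc/postfix/main.cf:
+    # Tolerate 1 in 4 bad servers behind a load balancer.
+    smtp_destination_concurrency_positive_feedback = 1/4
+    # Equal or smaller negative feedback per "bad" delivery.
+    smtp_destination_concurrency_negative_feedback = 1/8
+</pre>
+</blockquote>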
+
+<h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
+
+<p> The Postfix 2.5 concurrency scheduler is controlled with the
+following configuration parameters, where "<i>transport</i>_foo"
+provides a transport-specific parameter override. All parameter
+default settings are compatible with earlier Postfix versions. </p>
+
+<blockquote>
+
+<table border="0">
+
+<tr> <th> Parameter name </th> <th> Postfix version </th> <th>
+Description </th> </tr>
+
+<tr> <td colspan="3"> <hr> </td> </tr>
+
+<tr> <td> <a href="postconf.5.html#initial_destination_concurrency">initial_destination_concurrency</a><br>
+<a href="postconf.5.html#transport_initial_destination_concurrency"><i>transport</i>_initial_destination_concurrency</a> </td> <td
+align="center"> all<br> 2.5 </td> <td> Initial per-destination
+delivery concurrency </td> </tr>
+
+<tr> <td> <a href="postconf.5.html#default_destination_concurrency_limit">default_destination_concurrency_limit</a><br>
+<a href="postconf.5.html#transport_destination_concurrency_limit"><i>transport</i>_destination_concurrency_limit</a> </td> <td align="center">
+all<br> all </td> <td> Maximum per-destination delivery concurrency
+</td> </tr>
+
+<tr> <td> <a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a><br>
+<a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_positive_feedback</a> </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Per-destination positive
+feedback amount, per delivery that does not fail with connection
+or handshake failure </td> </tr>
+
+<tr> <td> <a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a><br>
+<a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_negative_feedback</a> </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Per-destination negative
+feedback amount, per delivery that fails with connection or handshake
+failure </td> </tr>
+
+<tr> <td> <a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a><br>
+<a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a> </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Number of failed
+pseudo-cohorts after which a destination is declared "dead" and
+delivery is suspended </td> </tr>
+
+<tr> <td> <a href="postconf.5.html#destination_concurrency_feedback_debug">destination_concurrency_feedback_debug</a></td> <td align="center">
+2.5 </td> <td> Enable verbose logging of concurrency scheduler
+activity </td> </tr>
+
+<tr> <td colspan="3"> <hr> </td> </tr>
+
+</table>
+
+</blockquote>
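+
+<p> For example (illustrative values, not recommendations), the
+transport-specific overrides can be used as follows: </p>
+
+<blockquote>
+<pre>
+/etc/postfix/main.cf:
+    smtp_initial_destination_concurrency = 5
+    smtp_destination_concurrency_limit = 20
+    # Suspend delivery only after two failed pseudo-cohorts.
+    smtp_destination_concurrency_failed_cohort_limit = 2
+    # Verbose logging for performance analysis.
+    destination_concurrency_feedback_debug = yes
+</pre>
+</blockquote>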
<h2> <a name="jobs"> Preemptive scheduling </a> </h2>
will for some time be available under the name of "<a href="qmgr.8.html">oqmgr(8)</a>".
</p>
-<h3>Why the non-preemptive Postfix queue manager was replaced</h3>
+<h3> <a name="job_motivation"> Why the non-preemptive Postfix queue manager was replaced </a> </h3>
<p> The non-preemptive Postfix scheduler had several limitations
due to unfortunate choices in its design. </p>
</ol>
-<h3>How the non-preemptive queue manager scheduler works </h3>
+<h3> <a name="job_design"> How the non-preemptive queue manager scheduler works </a> </h3>
<p> The following text is from Patrik Rak and should be read together
with the <a href="postconf.5.html">postconf(5)</a> manual that describes each configuration
that to understand the scheduling algorithm itself (which was the
real thinking work) is fairly easy. </p>
+<h2> <a name="credits"> Credits </a> </h2>
+
+<ul>
+
+<li> Wietse Venema designed and implemented the initial queue manager
+with per-domain FIFO scheduling, and per-delivery +/-1 concurrency
+feedback.
+
+<li> Patrik Rak designed and implemented preemption where mail with
+fewer recipients can slip past mail with more recipients.
+
+<li> Wietse Venema initiated a discussion with Patrik Rak and Victor
+Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
+behavior. This is when K/N feedback was reviewed (N = concurrency).
+The discussion ended without a good solution for both negative
+feedback and dead site detection.
+
+<li> Victor Duchovni resumed work on concurrency feedback in the
+context of concurrency-limited servers.
+
+<li> Wietse Venema then re-designed the concurrency scheduler in
+terms of the simplest possible concepts: less-than-1 concurrency
+feedback per delivery, forward and reverse concurrency feedback
+hysteresis, and pseudo-cohort failure. At the same time, concurrency
+feedback was separated from dead site detection.
+
+<li> These simplifications, and their modular implementation, helped
+to develop further insights into the different roles that positive
+and negative concurrency feedback play, and helped to avoid all the
+known worst-case scenarios.
+
+</ul>
+
</body>
</html>
The default maximal number of parallel deliveries
to the same destination.
- <b><a href="postconf.5.html#transport_destination_concurrency_limit"><i>transport</i>_destination_concurrency_limit</a></b>
+ <b><a href="postconf.5.html#transport_destination_concurrency_limit"><i>transport</i>_destination_concurrency_limit</a> ($<a href="postconf.5.html#default_destination_concurrency_limit">default_destina</a>-</b>
+ <b><a href="postconf.5.html#default_destination_concurrency_limit">tion_concurrency_limit</a>)</b>
Idem, for delivery via the named message <i>transport</i>.
+ Available in Postfix version 2.5 and later:
+
+ <b><a href="postconf.5.html#transport_initial_destination_concurrency"><i>transport</i>_initial_destination_concurrency</a> ($<a href="postconf.5.html#initial_destination_concurrency">initial_desti</a>-</b>
+ <b><a href="postconf.5.html#initial_destination_concurrency">nation_concurrency</a>)</b>
+ Initial concurrency for delivery via the named mes-
+ sage <i>transport</i>.
+
+ <b><a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a> (1)</b>
+ How many pseudo-cohorts must suffer connection or
+ handshake failure before a specific destination is
+ considered unavailable (and further delivery is
+ suspended).
+
+ <b><a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a>)</b>
+ Idem, for delivery via the named message <i>transport</i>.
+
+ <b><a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a> (1)</b>
+ The per-destination amount of negative delivery
+ concurrency feedback, after a delivery completes
+ with a connection or handshake failure.
+
+       <b><a href="postconf.5.html#transport_destination_concurrency_negative_feedback"><i>transport</i>_destination_concurrency_negative_feedback</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a>)</b>
+ Idem, for delivery via the named message <i>transport</i>.
+
+ <b><a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a> (1)</b>
+ The per-destination amount of positive delivery
+ concurrency feedback, after a delivery completes
+ without connection or handshake failure.
+
+ <b><a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_positive_feedback</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a>)</b>
+ Idem, for delivery via the named message <i>transport</i>.
+
+ <b><a href="postconf.5.html#destination_concurrency_feedback_debug">destination_concurrency_feedback_debug</a> (no)</b>
+ Make the queue manager's feedback algorithm verbose
+ for performance analysis purposes.
+
<b>RECIPIENT SCHEDULING CONTROLS</b>
<b><a href="postconf.5.html#default_destination_recipient_limit">default_destination_recipient_limit</a> (50)</b>
The default maximal number of recipients per mes-
</DD>
-<DT><b><a name="<i>transport</i>_concurrency_failed_cohort_limit"><i>transport</i>_concurrency_failed_cohort_limit</a>
-(default: $<a href="postconf.5.html#default_concurrency_failed_cohort_limit">default_concurrency_failed_cohort_limit</a>)</b></DT><DD>
+<DT><b><a name="<i>transport</i>_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a>
+(default: $<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a>)</b></DT><DD>
<p> A transport-specific override for the
-<a href="postconf.5.html#default_concurrency_failed_cohort_limit">default_concurrency_failed_cohort_limit</a> parameter value, where
-<i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
+<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a> parameter value,
+where <i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
transport. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
</DD>
<DT><b><a name="address_verify_sender_dependent_relayhost_maps">address_verify_sender_dependent_relayhost_maps</a>
-(default: empty)</b></DT><DD>
+(default: $<a href="postconf.5.html#sender_dependent_relayhost_maps">sender_dependent_relayhost_maps</a>)</b></DT><DD>
<p>
Overrides the <a href="postconf.5.html#sender_dependent_relayhost_maps">sender_dependent_relayhost_maps</a> parameter setting for address
</p>
-</DD>
-
-<DT><b><a name="concurrency_feedback_debug">concurrency_feedback_debug</a>
-(default: no)</b></DT><DD>
-
-<p> Make the queue manager's feedback algorithm verbose for performance
-analysis purposes. </p>
-
-<p> This feature is available in Postfix 2.5 and later. </p>
-
-
</DD>
<DT><b><a name="config_directory">config_directory</a>
</DD>
-<DT><b><a name="connection_cache_service">connection_cache_service</a>
+<DT><b><a name="connection_cache_service_name">connection_cache_service_name</a>
(default: scache)</b></DT><DD>
<p> The name of the <a href="scache.8.html">scache(8)</a> connection cache service. This service
</pre>
-</DD>
-
-<DT><b><a name="default_concurrency_failed_cohort_limit">default_concurrency_failed_cohort_limit</a>
-(default: 1)</b></DT><DD>
-
-<p> How many pseudo-cohorts must suffer connection or handshake
-failure before a specific destination is considered unavailable
-(and further delivery is suspended). Specify zero to disable this
-feature. A destination's pseudo-cohort failure count is reset each
-time a delivery completes without connection or handshake failure
-for that specific destination. </p>
-
-<p> A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency. </p>
-
-<p> Use <a href="postconf.5.html#transport_concurrency_failed_cohort_limit"><i>transport</i>_concurrency_failed_cohort_limit</a> to specify
-a transport-specific override, where <i>transport</i> is the <a href="master.5.html">master.cf</a>
-name of the message delivery transport. </p>
-
-<p> This feature is available in Postfix 2.5. The default setting
-is compatible with earlier Postfix versions. </p>
-
-
-</DD>
-
-<DT><b><a name="default_concurrency_negative_feedback">default_concurrency_negative_feedback</a>
-(default: 1)</b></DT><DD>
-
-<p> The per-destination amount of negative delivery concurrency
-feedback, after a delivery completes with a connection or handshake
-failure. Feedback values are in range 0..1 inclusive. With negative
-feedback, concurrency is decremented at the beginning of a sequence
-of length 1/feedback. This is unlike positive feedback, where
-concurrency is incremented at the end of a sequence of length
-1/feedback. </p>
-
-<p> As of Postfix version 2.5, negative feedback cannot reduce
-delivery concurrency to zero. Instead, a destination is marked
-dead (further delivery suspended) after the failed pseudo-cohort
-count reaches $<a href="postconf.5.html#default_concurrency_failed_cohort_limit">default_concurrency_failed_cohort_limit</a> (or
-$<a href="postconf.5.html#transport_concurrency_failed_cohort_limit"><i>transport</i>_concurrency_failed_cohort_limit</a>). To make the
-scheduler completely immune to connection or handshake failures,
-specify a zero feedback value and a zero failed pseudo-cohort limit.
-</p>
-
-<p> Specify one of the following forms: </p>
-
-<dl>
-
-<dt> <b><i>number</i> </b> </dt>
-
-<dt> <b><i>number</i> / <i>number</i> </b> </dt>
-
-<dd> Constant feedback. The value must be in the range 0..1 inclusive.
-The default setting of "1" is compatible with Postfix versions
-before 2.5, where a destination's delivery concurrency is throttled
-down to zero (and further delivery suspended) after a single failed
-pseudo-cohort. </dd>
-
-<dt> <b><i>number</i> / concurrency </b> </dt>
-
-<dd> Variable feedback of "<i>number</i> / (delivery concurrency)".
-The <i>number</i> must be in the range 0..1 inclusive. With
-<i>number</i> equal to "1", a destination's delivery concurrency
-is decremented by 1 after each failed pseudo-cohort. </dd>
-
-<dt> <b><i>number</i> / sqrt_concurrency </b> </dt>
-
-<dd> Variable feedback of "<i>number</i> / sqrt(delivery concurrency)".
-The <i>number</i> must be in the range 0..1 inclusive. This setting
-may be removed in a future version. </dd>
-
-</dl>
-
-<p> A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency. </p>
-
-<p> Use <a href="postconf.5.html#transport_concurrency_positive_feedback"><i>transport</i>_concurrency_negative_feedback</a> to specify
-a transport-specific override, where <i>transport</i> is the <a href="master.5.html">master.cf</a>
-name of the message delivery transport. </p>
-
-<p> This feature is available in Postfix 2.5. The default setting
-is compatible with earlier Postfix versions. </p>
-
-
-</DD>
-
-<DT><b><a name="default_concurrency_positive_feedback">default_concurrency_positive_feedback</a>
-(default: 1)</b></DT><DD>
-
-<p> The per-destination amount of positive delivery concurrency
-feedback, after a delivery completes without connection or handshake
-failure. Feedback values are in the range 0..1 inclusive. The
-concurrency increases until it reaches the per-destination maximal
-concurrency limit. With positive feedback, concurrency is incremented
-at the end of a sequence with length 1/feedback. This is unlike
-negative feedback, where concurrency is decremented at the start
-of a sequence of length 1/feedback. </p>
-
-<p> Specify one of the following forms: </p>
-
-<dl>
-
-<dt> <b><i>number</i> </b> </dt>
-
-<dt> <b><i>number</i> / <i>number</i> </b> </dt>
-
-<dd> Constant feedback. The value must be in the range 0..1
-inclusive. The default setting of "1" is compatible with Postfix
-versions before 2.5, where a destination's delivery concurrency
-doubles after each successful pseudo-cohort. </dd>
-
-<dt> <b><i>number</i> / concurrency </b> </dt>
-
-<dd> Variable feedback of "<i>number</i> / (delivery concurrency)".
-The <i>number</i> must be in the range 0..1 inclusive. With
-<i>number</i> equal to "1", a destination's delivery concurrency
-is incremented by 1 after each successful pseudo-cohort. </dd>
-
-<dt> <b><i>number</i> / sqrt_concurrency </b> </dt>
-
-<dd> Variable feedback of "<i>number</i> / sqrt(delivery concurrency)".
-The <i>number</i> must be in the range 0..1 inclusive. This setting
-may be removed in a future version. </dd>
-
-</dl>
-
-<p> A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency. </p>
-
-<p> Use <a href="postconf.5.html#transport_concurrency_positive_feedback"><i>transport</i>_concurrency_positive_feedback</a> to specify
-a transport-specific override, where <i>transport</i> is the <a href="master.5.html">master.cf</a>
-name of the message delivery transport. </p>
-
-<p> This feature is available in Postfix 2.5 and later. </p>
-
-
</DD>
<DT><b><a name="default_database_type">default_database_type</a>
</p>
+</DD>
+
+<DT><b><a name="default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a>
+(default: 1)</b></DT><DD>
+
+<p> How many pseudo-cohorts must suffer connection or handshake
+failure before a specific destination is considered unavailable
+(and further delivery is suspended). Specify zero to disable this
+feature. A destination's pseudo-cohort failure count is reset each
+time a delivery completes without connection or handshake failure
+for that specific destination. </p>
+
+<p> A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency. </p>
+
+<p> Use <a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a> to specify
+a transport-specific override, where <i>transport</i> is the <a href="master.5.html">master.cf</a>
+name of the message delivery transport. </p>
+
+<p> This feature is available in Postfix 2.5. The default setting
+is compatible with earlier Postfix versions. </p>
+
+
</DD>
<DT><b><a name="default_destination_concurrency_limit">default_destination_concurrency_limit</a>
</p>
+</DD>
+
+<DT><b><a name="default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a>
+(default: 1)</b></DT><DD>
+
+<p> The per-destination amount of delivery concurrency negative
+feedback, after a delivery completes with a connection or handshake
+failure. Feedback values are in the range 0..1 inclusive. With
+negative feedback, concurrency is decremented at the beginning of
+a sequence of length 1/feedback. This is unlike positive feedback,
+where concurrency is incremented at the end of a sequence of length
+1/feedback. </p>
+
+<p> As of Postfix version 2.5, negative feedback cannot reduce
+delivery concurrency to zero. Instead, a destination is marked
+dead (further delivery suspended) after the failed pseudo-cohort
+count reaches $<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a>
+(or $<a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a>).
+To make the scheduler completely immune to connection or handshake
+failures, specify a zero feedback value and a zero failed pseudo-cohort
+limit. </p>
+
+<p> Specify one of the following forms: </p>
+
+<dl>
+
+<dt> <b><i>number</i> </b> </dt>
+
+<dt> <b><i>number</i> / <i>number</i> </b> </dt>
+
+<dd> Constant feedback. The value must be in the range 0..1 inclusive.
+The default setting of "1" is compatible with Postfix versions
+before 2.5, where a destination's delivery concurrency is throttled
+down to zero (and further delivery suspended) after a single failed
+pseudo-cohort. </dd>
+
+<dt> <b><i>number</i> / concurrency </b> </dt>
+
+<dd> Variable feedback of "<i>number</i> / (delivery concurrency)".
+The <i>number</i> must be in the range 0..1 inclusive. With
+<i>number</i> equal to "1", a destination's delivery concurrency
+is decremented by 1 after each failed pseudo-cohort. </dd>
+
+<dt> <b><i>number</i> / sqrt_concurrency </b> </dt>
+
+<dd> Variable feedback of "<i>number</i> / sqrt(delivery concurrency)".
+The <i>number</i> must be in the range 0..1 inclusive. This setting
+may be removed in a future version. </dd>
+
+</dl>
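+
+<p> For example, to make the scheduler completely immune to
+connection or handshake failures, as described above: </p>
+
+<blockquote>
+<pre>
+/etc/postfix/main.cf:
+    default_destination_concurrency_negative_feedback = 0
+    default_destination_concurrency_failed_cohort_limit = 0
+</pre>
+</blockquote>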
+
+<p> A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency. </p>
+
+<p> Use <a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_negative_feedback</a>
+to specify a transport-specific override, where <i>transport</i>
+is the <a href="master.5.html">master.cf</a>
+name of the message delivery transport. </p>
+
+<p> This feature is available in Postfix 2.5. The default setting
+is compatible with earlier Postfix versions. </p>
+
+
+</DD>
+
+<DT><b><a name="default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a>
+(default: 1)</b></DT><DD>
+
+<p> The per-destination amount of delivery concurrency positive
+feedback, after a delivery completes without connection or handshake
+failure. Feedback values are in the range 0..1 inclusive. The
+concurrency increases until it reaches the per-destination maximal
+concurrency limit. With positive feedback, concurrency is incremented
+at the end of a sequence with length 1/feedback. This is unlike
+negative feedback, where concurrency is decremented at the start
+of a sequence of length 1/feedback. </p>
+
+<p> Specify one of the following forms: </p>
+
+<dl>
+
+<dt> <b><i>number</i> </b> </dt>
+
+<dt> <b><i>number</i> / <i>number</i> </b> </dt>
+
+<dd> Constant feedback. The value must be in the range 0..1
+inclusive. The default setting of "1" is compatible with Postfix
+versions before 2.5, where a destination's delivery concurrency
+doubles after each successful pseudo-cohort. </dd>
+
+<dt> <b><i>number</i> / concurrency </b> </dt>
+
+<dd> Variable feedback of "<i>number</i> / (delivery concurrency)".
+The <i>number</i> must be in the range 0..1 inclusive. With
+<i>number</i> equal to "1", a destination's delivery concurrency
+is incremented by 1 after each successful pseudo-cohort. </dd>
+
+<dt> <b><i>number</i> / sqrt_concurrency </b> </dt>
+
+<dd> Variable feedback of "<i>number</i> / sqrt(delivery concurrency)".
+The <i>number</i> must be in the range 0..1 inclusive. This setting
+may be removed in a future version. </dd>
+
+</dl>
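+
+<p> For example, to replace the old exponential growth in time
+with linear growth in time: </p>
+
+<blockquote>
+<pre>
+/etc/postfix/main.cf:
+    default_destination_concurrency_positive_feedback = 1/concurrency
+</pre>
+</blockquote>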
+
+<p> A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency. </p>
+
+<p> Use <a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_positive_feedback</a>
+to specify a transport-specific override, where <i>transport</i>
+is the <a href="master.5.html">master.cf</a> name of the message delivery transport. </p>
+
+<p> This feature is available in Postfix 2.5 and later. </p>
+
+
</DD>
<DT><b><a name="default_destination_recipient_limit">default_destination_recipient_limit</a>
</p>
+</DD>
+
+<DT><b><a name="destination_concurrency_feedback_debug">destination_concurrency_feedback_debug</a>
+(default: no)</b></DT><DD>
+
+<p> Make the queue manager's feedback algorithm verbose for performance
+analysis purposes. </p>
+
+<p> This feature is available in Postfix 2.5 and later. </p>
+
+
</DD>
<DT><b><a name="detect_8bit_encoding_header">detect_8bit_encoding_header</a>
</p>
-</DD>
-
-<DT><b><a name="transport_concurrency_negative_feedback">transport_concurrency_negative_feedback</a>
-(default: $<a href="postconf.5.html#default_concurrency_negative_feedback">default_concurrency_negative_feedback</a>)</b></DT><DD>
-
-<p> A transport-specific override for the
-<a href="postconf.5.html#default_concurrency_negative_feedback">default_concurrency_negative_feedback</a> parameter value, where
-<i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
-transport. </p>
-
-<p> This feature is available in Postfix 2.5 and later. </p>
-
-
-</DD>
-
-<DT><b><a name="transport_concurrency_positive_feedback">transport_concurrency_positive_feedback</a>
-(default: $<a href="postconf.5.html#default_concurrency_positive_feedback">default_concurrency_positive_feedback</a>)</b></DT><DD>
-
-<p> A transport-specific override for the
-<a href="postconf.5.html#default_concurrency_positive_feedback">default_concurrency_positive_feedback</a> parameter value, where
-<i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
-transport. </p>
-
-<p> This feature is available in Postfix 2.5 and later. </p>
-
-
</DD>
<DT><b><a name="transport_delivery_slot_cost">transport_delivery_slot_cost</a>
transport. </p>
+</DD>
+
+<DT><b><a name="transport_destination_concurrency_negative_feedback">transport_destination_concurrency_negative_feedback</a>
+(default: $<a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a>)</b></DT><DD>
+
+<p> A transport-specific override for the
+<a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a> parameter value,
+where <i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
+transport. </p>
+
+<p> This feature is available in Postfix 2.5 and later. </p>
+
+
+</DD>
+
+<DT><b><a name="transport_destination_concurrency_positive_feedback">transport_destination_concurrency_positive_feedback</a>
+(default: $<a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a>)</b></DT><DD>
+
+<p> A transport-specific override for the
+<a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a> parameter value,
+where <i>transport</i> is the <a href="master.5.html">master.cf</a> name of the message delivery
+transport. </p>
+
+<p> This feature is available in Postfix 2.5 and later. </p>
+
+
</DD>
<DT><b><a name="transport_destination_recipient_limit">transport_destination_recipient_limit</a>
-(default: $<a href="postconf.5.html#default_destination_concurrency_limit">default_destination_concurrency_limit</a>)</b></DT><DD>
+(default: $<a href="postconf.5.html#default_destination_recipient_limit">default_destination_recipient_limit</a>)</b></DT><DD>
<p> A transport-specific override for the
<a href="postconf.5.html#default_destination_recipient_limit">default_destination_recipient_limit</a> parameter value, where
Initial concurrency for delivery via the named mes-
sage <i>transport</i>.
- <b><a href="postconf.5.html#default_concurrency_failed_cohort_limit">default_concurrency_failed_cohort_limit</a> (1)</b>
+ <b><a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a> (1)</b>
How many pseudo-cohorts must suffer connection or
handshake failure before a specific destination is
considered unavailable (and further delivery is
suspended).
- <b><a href="postconf.5.html#transport_concurrency_failed_cohort_limit"><i>transport</i>_concurrency_failed_cohort_limit</a> ($<a href="postconf.5.html#default_concurrency_failed_cohort_limit">default_con</a>-</b>
- <b><a href="postconf.5.html#default_concurrency_failed_cohort_limit">currency_failed_cohort_limit</a>)</b>
+ <b><a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit"><i>transport</i>_destination_concurrency_failed_cohort_limit</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">default_destination_concurrency_failed_cohort_limit</a>)</b>
Idem, for delivery via the named message <i>transport</i>.
- <b><a href="postconf.5.html#default_concurrency_negative_feedback">default_concurrency_negative_feedback</a> (1)</b>
+ <b><a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a> (1)</b>
The per-destination amount of negative delivery
concurrency feedback, after a delivery completes
with a connection or handshake failure.
- <b><a href="postconf.5.html#transport_concurrency_positive_feedback"><i>transport</i>_concurrency_negative_feedback</a> ($<a href="postconf.5.html#default_concurrency_negative_feedback">default_concur</a>-</b>
- <b><a href="postconf.5.html#default_concurrency_negative_feedback">rency_negative_feedback</a>)</b>
+       <b><a href="postconf.5.html#transport_destination_concurrency_negative_feedback"><i>transport</i>_destination_concurrency_negative_feedback</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_negative_feedback">default_destination_concurrency_negative_feedback</a>)</b>
Idem, for delivery via the named message <i>transport</i>.
- <b><a href="postconf.5.html#default_concurrency_positive_feedback">default_concurrency_positive_feedback</a> (1)</b>
+ <b><a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a> (1)</b>
The per-destination amount of positive delivery
concurrency feedback, after a delivery completes
without connection or handshake failure.
- <b><a href="postconf.5.html#transport_concurrency_positive_feedback"><i>transport</i>_concurrency_positive_feedback</a> ($<a href="postconf.5.html#default_concurrency_positive_feedback">default_concur</a>-</b>
- <b><a href="postconf.5.html#default_concurrency_positive_feedback">rency_positive_feedback</a>)</b>
+ <b><a href="postconf.5.html#transport_destination_concurrency_positive_feedback"><i>transport</i>_destination_concurrency_positive_feedback</a></b>
+ <b>($<a href="postconf.5.html#default_destination_concurrency_positive_feedback">default_destination_concurrency_positive_feedback</a>)</b>
Idem, for delivery via the named message <i>transport</i>.
- <b><a href="postconf.5.html#concurrency_feedback_debug">concurrency_feedback_debug</a> (no)</b>
+ <b><a href="postconf.5.html#destination_concurrency_feedback_debug">destination_concurrency_feedback_debug</a> (no)</b>
Make the queue manager's feedback algorithm verbose
for performance analysis purposes.
P.O. Box 704
Yorktown Heights, NY 10598, USA
- Scheduler enhancements:
+ Preemptive scheduler enhancements:
Patrik Rak
Modra 6
155 00, Prague, Czech Republic
Available in Postfix version 2.3 and later:
- <b><a href="postconf.5.html#address_verify_sender_dependent_relayhost_maps">address_verify_sender_dependent_relayhost_maps</a> (empty)</b>
+ <b><a href="postconf.5.html#address_verify_sender_dependent_relayhost_maps">address_verify_sender_dependent_relayhost_maps</a></b>
+ <b>($<a href="postconf.5.html#sender_dependent_relayhost_maps">sender_dependent_relayhost_maps</a>)</b>
Overrides the <a href="postconf.5.html#sender_dependent_relayhost_maps">sender_dependent_relayhost_maps</a>
parameter setting for address verification probes.
The recipient of undeliverable mail that cannot be returned to
the sender. This feature is enabled with the notify_classes
parameter.
-<DT>\fB\fItransport\fR_concurrency_failed_cohort_limit
-(default: $default_concurrency_failed_cohort_limit)\fR</DT><DD>
+<DT>\fB\fItransport\fR_destination_concurrency_failed_cohort_limit
+(default: $default_destination_concurrency_failed_cohort_limit)\fR</DT><DD>
.PP
A transport-specific override for the
-default_concurrency_failed_cohort_limit parameter value, where
-\fItransport\fR is the master.cf name of the message delivery
+default_destination_concurrency_failed_cohort_limit parameter value,
+where \fItransport\fR is the master.cf name of the message delivery
transport.
.PP
This feature is available in Postfix 2.5 and later.
.ft R
.PP
This feature is available in Postfix 2.1 and later.
-.SH address_verify_sender_dependent_relayhost_maps (default: empty)
+.SH address_verify_sender_dependent_relayhost_maps (default: $sender_dependent_relayhost_maps)
Overrides the sender_dependent_relayhost_maps parameter setting for address
verification probes.
.PP
.PP
Note: if you set this time limit to a large value you must update the
global ipc_timeout parameter as well.
-.SH concurrency_feedback_debug (default: no)
-Make the queue manager's feedback algorithm verbose for performance
-analysis purposes.
-.PP
-This feature is available in Postfix 2.5 and later.
.SH config_directory (default: see "postconf -d" output)
The default location of the Postfix main.cf and master.cf
configuration files. This can be overruled via the following
operations. The time limit is enforced in the client.
.PP
This feature is available in Postfix 2.3 and later.
-.SH connection_cache_service (default: scache)
+.SH connection_cache_service_name (default: scache)
The name of the \fBscache\fR(8) connection cache service. This service
maintains a limited pool of cached sessions.
.SH connection_cache_status_update_time (default: 600s)
.fi
.ad
.ft R
-.SH default_concurrency_failed_cohort_limit (default: 1)
-How many pseudo-cohorts must suffer connection or handshake
-failure before a specific destination is considered unavailable
-(and further delivery is suspended). Specify zero to disable this
-feature. A destination's pseudo-cohort failure count is reset each
-time a delivery completes without connection or handshake failure
-for that specific destination.
-.PP
-A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency.
-.PP
-Use \fItransport\fR_concurrency_failed_cohort_limit to specify
-a transport-specific override, where \fItransport\fR is the master.cf
-name of the message delivery transport.
-.PP
-This feature is available in Postfix 2.5. The default setting
-is compatible with earlier Postfix versions.
-.SH default_concurrency_negative_feedback (default: 1)
-The per-destination amount of negative delivery concurrency
-feedback, after a delivery completes with a connection or handshake
-failure. Feedback values are in range 0..1 inclusive. With negative
-feedback, concurrency is decremented at the beginning of a sequence
-of length 1/feedback. This is unlike positive feedback, where
-concurrency is incremented at the end of a sequence of length
-1/feedback.
-.PP
-As of Postfix version 2.5, negative feedback cannot reduce
-delivery concurrency to zero. Instead, a destination is marked
-dead (further delivery suspended) after the failed pseudo-cohort
-count reaches $default_concurrency_failed_cohort_limit (or
-$\fItransport\fR_concurrency_failed_cohort_limit). To make the
-scheduler completely immune to connection or handshake failures,
-specify a zero feedback value and a zero failed pseudo-cohort limit.
-.PP
-Specify one of the following forms:
-.IP "\fB\fInumber\fR \fR"
-.IP "\fB\fInumber\fR / \fInumber\fR \fR"
-Constant feedback. The value must be in the range 0..1 inclusive.
-The default setting of "1" is compatible with Postfix versions
-before 2.5, where a destination's delivery concurrency is throttled
-down to zero (and further delivery suspended) after a single failed
-pseudo-cohort.
-.IP "\fB\fInumber\fR / concurrency \fR"
-Variable feedback of "\fInumber\fR / (delivery concurrency)".
-The \fInumber\fR must be in the range 0..1 inclusive. With
-\fInumber\fR equal to "1", a destination's delivery concurrency
-is decremented by 1 after each failed pseudo-cohort.
-.IP "\fB\fInumber\fR / sqrt_concurrency \fR"
-Variable feedback of "\fInumber\fR / sqrt(delivery concurrency)".
-The \fInumber\fR must be in the range 0..1 inclusive. This setting
-may be removed in a future version.
-.PP
-A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency.
-.PP
-Use \fItransport\fR_concurrency_negative_feedback to specify
-a transport-specific override, where \fItransport\fR is the master.cf
-name of the message delivery transport.
-.PP
-This feature is available in Postfix 2.5. The default setting
-is compatible with earlier Postfix versions.
-.SH default_concurrency_positive_feedback (default: 1)
-The per-destination amount of positive delivery concurrency
-feedback, after a delivery completes without connection or handshake
-failure. Feedback values are in the range 0..1 inclusive. The
-concurrency increases until it reaches the per-destination maximal
-concurrency limit. With positive feedback, concurrency is incremented
-at the end of a sequence with length 1/feedback. This is unlike
-negative feedback, where concurrency is decremented at the start
-of a sequence of length 1/feedback.
-.PP
-Specify one of the following forms:
-.IP "\fB\fInumber\fR \fR"
-.IP "\fB\fInumber\fR / \fInumber\fR \fR"
-Constant feedback. The value must be in the range 0..1
-inclusive. The default setting of "1" is compatible with Postfix
-versions before 2.5, where a destination's delivery concurrency
-doubles after each successful pseudo-cohort.
-.IP "\fB\fInumber\fR / concurrency \fR"
-Variable feedback of "\fInumber\fR / (delivery concurrency)".
-The \fInumber\fR must be in the range 0..1 inclusive. With
-\fInumber\fR equal to "1", a destination's delivery concurrency
-is incremented by 1 after each successful pseudo-cohort.
-.IP "\fB\fInumber\fR / sqrt_concurrency \fR"
-Variable feedback of "\fInumber\fR / sqrt(delivery concurrency)".
-The \fInumber\fR must be in the range 0..1 inclusive. This setting
-may be removed in a future version.
-.PP
-A pseudo-cohort is the number of deliveries equal to a destination's
-delivery concurrency.
-.PP
-Use \fItransport\fR_concurrency_positive_feedback to specify
-a transport-specific override, where \fItransport\fR is the master.cf
-name of the message delivery transport.
-.PP
-This feature is available in Postfix 2.5 and later.
.SH default_database_type (default: see "postconf -d" output)
The default database type for use in \fBnewaliases\fR(1), \fBpostalias\fR(1)
and \fBpostmap\fR(1) commands. On many UNIX systems the default type is
plus transport_delivery_slot_loan still remains to be accumulated.
Note that the full amount will still have to be accumulated before
another preemption can take place later.
+.SH default_destination_concurrency_failed_cohort_limit (default: 1)
+How many pseudo-cohorts must suffer connection or handshake
+failure before a specific destination is considered unavailable
+(and further delivery is suspended). Specify zero to disable this
+feature. A destination's pseudo-cohort failure count is reset each
+time a delivery completes without connection or handshake failure
+for that specific destination.
+.PP
+A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency.
+.PP
+Use \fItransport\fR_destination_concurrency_failed_cohort_limit to specify
+a transport-specific override, where \fItransport\fR is the master.cf
+name of the message delivery transport.
+.PP
+This feature is available in Postfix 2.5. The default setting
+is compatible with earlier Postfix versions.
.SH default_destination_concurrency_limit (default: 20)
The default maximal number of parallel deliveries to the same
destination. This is the default limit for delivery via the \fBlmtp\fR(8),
\fBpipe\fR(8), \fBsmtp\fR(8) and \fBvirtual\fR(8) delivery agents.
+.SH default_destination_concurrency_negative_feedback (default: 1)
+The per-destination amount of delivery concurrency negative
+feedback, after a delivery completes with a connection or handshake
+failure. Feedback values are in the range 0..1 inclusive. With
+negative feedback, concurrency is decremented at the beginning of
+a sequence of length 1/feedback. This is unlike positive feedback,
+where concurrency is incremented at the end of a sequence of length
+1/feedback.
+.PP
+As of Postfix version 2.5, negative feedback cannot reduce
+delivery concurrency to zero. Instead, a destination is marked
+dead (further delivery suspended) after the failed pseudo-cohort
+count reaches $default_destination_concurrency_failed_cohort_limit
+(or $\fItransport\fR_destination_concurrency_failed_cohort_limit).
+To make the scheduler completely immune to connection or handshake
+failures, specify a zero feedback value and a zero failed pseudo-cohort
+limit.
+.PP
+Specify one of the following forms:
+.IP "\fB\fInumber\fR \fR"
+.IP "\fB\fInumber\fR / \fInumber\fR \fR"
+Constant feedback. The value must be in the range 0..1 inclusive.
+The default setting of "1" is compatible with Postfix versions
+before 2.5, where a destination's delivery concurrency is throttled
+down to zero (and further delivery suspended) after a single failed
+pseudo-cohort.
+.IP "\fB\fInumber\fR / concurrency \fR"
+Variable feedback of "\fInumber\fR / (delivery concurrency)".
+The \fInumber\fR must be in the range 0..1 inclusive. With
+\fInumber\fR equal to "1", a destination's delivery concurrency
+is decremented by 1 after each failed pseudo-cohort.
+.IP "\fB\fInumber\fR / sqrt_concurrency \fR"
+Variable feedback of "\fInumber\fR / sqrt(delivery concurrency)".
+The \fInumber\fR must be in the range 0..1 inclusive. This setting
+may be removed in a future version.
+.PP
+A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency.
+.PP
+Use \fItransport\fR_destination_concurrency_negative_feedback
+to specify a transport-specific override, where \fItransport\fR
+is the master.cf
+name of the message delivery transport.
+.PP
+This feature is available in Postfix 2.5. The default setting
+is compatible with earlier Postfix versions.
+.SH default_destination_concurrency_positive_feedback (default: 1)
+The per-destination amount of delivery concurrency positive
+feedback, after a delivery completes without connection or handshake
+failure. Feedback values are in the range 0..1 inclusive. The
+concurrency increases until it reaches the per-destination maximal
+concurrency limit. With positive feedback, concurrency is incremented
+at the end of a sequence with length 1/feedback. This is unlike
+negative feedback, where concurrency is decremented at the start
+of a sequence of length 1/feedback.
+.PP
+Specify one of the following forms:
+.IP "\fB\fInumber\fR \fR"
+.IP "\fB\fInumber\fR / \fInumber\fR \fR"
+Constant feedback. The value must be in the range 0..1
+inclusive. The default setting of "1" is compatible with Postfix
+versions before 2.5, where a destination's delivery concurrency
+doubles after each successful pseudo-cohort.
+.IP "\fB\fInumber\fR / concurrency \fR"
+Variable feedback of "\fInumber\fR / (delivery concurrency)".
+The \fInumber\fR must be in the range 0..1 inclusive. With
+\fInumber\fR equal to "1", a destination's delivery concurrency
+is incremented by 1 after each successful pseudo-cohort.
+.IP "\fB\fInumber\fR / sqrt_concurrency \fR"
+Variable feedback of "\fInumber\fR / sqrt(delivery concurrency)".
+The \fInumber\fR must be in the range 0..1 inclusive. This setting
+may be removed in a future version.
+.PP
+A pseudo-cohort is the number of deliveries equal to a destination's
+delivery concurrency.
+.PP
+Use \fItransport\fR_destination_concurrency_positive_feedback
+to specify a transport-specific override, where \fItransport\fR
+is the master.cf name of the message delivery transport.
+.PP
+This feature is available in Postfix 2.5 and later.
.SH default_destination_recipient_limit (default: 50)
The default maximal number of recipients per message delivery.
This is the default limit for delivery via the \fBlmtp\fR(8), \fBpipe\fR(8),
.PP
Time units: s (seconds), m (minutes), h (hours), d (days), w (weeks).
The default time unit is s (seconds).
+.SH destination_concurrency_feedback_debug (default: no)
+Make the queue manager's feedback algorithm verbose for performance
+analysis purposes.
+.PP
+This feature is available in Postfix 2.5 and later.
.SH detect_8bit_encoding_header (default: yes)
Automatically detect 8BITMIME body content by looking at
Content-Transfer-Encoding: message headers; historically, this
delivery is requested with "\fBsendmail -v\fR".
.PP
This feature is available in Postfix 2.1 and later.
-.SH transport_concurrency_negative_feedback (default: $default_concurrency_negative_feedback)
-A transport-specific override for the
-default_concurrency_negative_feedback parameter value, where
-\fItransport\fR is the master.cf name of the message delivery
-transport.
-.PP
-This feature is available in Postfix 2.5 and later.
-.SH transport_concurrency_positive_feedback (default: $default_concurrency_positive_feedback)
-A transport-specific override for the
-default_concurrency_positive_feedback parameter value, where
-\fItransport\fR is the master.cf name of the message delivery
-transport.
-.PP
-This feature is available in Postfix 2.5 and later.
.SH transport_delivery_slot_cost (default: $default_delivery_slot_cost)
A transport-specific override for the default_delivery_slot_cost
parameter value, where \fItransport\fR is the master.cf name of
default_destination_concurrency_limit parameter value, where
\fItransport\fR is the master.cf name of the message delivery
transport.
-.SH transport_destination_recipient_limit (default: $default_destination_concurrency_limit)
+.SH transport_destination_concurrency_negative_feedback (default: $default_destination_concurrency_negative_feedback)
+A transport-specific override for the
+default_destination_concurrency_negative_feedback parameter value,
+where \fItransport\fR is the master.cf name of the message delivery
+transport.
+.PP
+This feature is available in Postfix 2.5 and later.
+.SH transport_destination_concurrency_positive_feedback (default: $default_destination_concurrency_positive_feedback)
+A transport-specific override for the
+default_destination_concurrency_positive_feedback parameter value,
+where \fItransport\fR is the master.cf name of the message delivery
+transport.
+.PP
+This feature is available in Postfix 2.5 and later.
+.SH transport_destination_recipient_limit (default: $default_destination_recipient_limit)
A transport-specific override for the
default_destination_recipient_limit parameter value, where
\fItransport\fR is the master.cf name of the message delivery
.IP "\fBdefault_destination_concurrency_limit (20)\fR"
The default maximal number of parallel deliveries to the same
destination.
-.IP \fItransport\fB_destination_concurrency_limit\fR
+.IP "\fItransport\fB_destination_concurrency_limit ($default_destination_concurrency_limit)\fR"
Idem, for delivery via the named message \fItransport\fR.
+.PP
+Available in Postfix version 2.5 and later:
+.IP "\fItransport\fB_initial_destination_concurrency ($initial_destination_concurrency)\fR"
+Initial concurrency for delivery via the named message
+\fItransport\fR.
+.IP "\fBdefault_destination_concurrency_failed_cohort_limit (1)\fR"
+How many pseudo-cohorts must suffer connection or handshake
+failure before a specific destination is considered unavailable
+(and further delivery is suspended).
+.IP "\fItransport\fB_destination_concurrency_failed_cohort_limit ($default_destination_concurrency_failed_cohort_limit)\fR"
+Idem, for delivery via the named message \fItransport\fR.
+.IP "\fBdefault_destination_concurrency_negative_feedback (1)\fR"
+The per-destination amount of negative delivery concurrency
+feedback, after a delivery completes with a connection or handshake
+failure.
+.IP "\fItransport\fB_destination_concurrency_negative_feedback ($default_destination_concurrency_negative_feedback)\fR"
+Idem, for delivery via the named message \fItransport\fR.
+.IP "\fBdefault_destination_concurrency_positive_feedback (1)\fR"
+The per-destination amount of positive delivery concurrency
+feedback, after a delivery completes without connection or handshake
+failure.
+.IP "\fItransport\fB_destination_concurrency_positive_feedback ($default_destination_concurrency_positive_feedback)\fR"
+Idem, for delivery via the named message \fItransport\fR.
+.IP "\fBdestination_concurrency_feedback_debug (no)\fR"
+Make the queue manager's feedback algorithm verbose for performance
+analysis purposes.
.SH "RECIPIENT SCHEDULING CONTROLS"
.na
.nf
.IP "\fItransport\fB_initial_destination_concurrency ($initial_destination_concurrency)\fR"
Initial concurrency for delivery via the named message
\fItransport\fR.
-.IP "\fBdefault_concurrency_failed_cohort_limit (1)\fR"
+.IP "\fBdefault_destination_concurrency_failed_cohort_limit (1)\fR"
How many pseudo-cohorts must suffer connection or handshake
failure before a specific destination is considered unavailable
(and further delivery is suspended).
-.IP "\fItransport\fB_concurrency_failed_cohort_limit ($default_concurrency_failed_cohort_limit)\fR"
+.IP "\fItransport\fB_destination_concurrency_failed_cohort_limit ($default_destination_concurrency_failed_cohort_limit)\fR"
Idem, for delivery via the named message \fItransport\fR.
-.IP "\fBdefault_concurrency_negative_feedback (1)\fR"
+.IP "\fBdefault_destination_concurrency_negative_feedback (1)\fR"
The per-destination amount of negative delivery concurrency
feedback, after a delivery completes with a connection or handshake
failure.
-.IP "\fItransport\fB_concurrency_negative_feedback ($default_concurrency_negative_feedback)\fR"
+.IP "\fItransport\fB_destination_concurrency_negative_feedback ($default_destination_concurrency_negative_feedback)\fR"
Idem, for delivery via the named message \fItransport\fR.
-.IP "\fBdefault_concurrency_positive_feedback (1)\fR"
+.IP "\fBdefault_destination_concurrency_positive_feedback (1)\fR"
The per-destination amount of positive delivery concurrency
feedback, after a delivery completes without connection or handshake
failure.
-.IP "\fItransport\fB_concurrency_positive_feedback ($default_concurrency_positive_feedback)\fR"
+.IP "\fItransport\fB_destination_concurrency_positive_feedback ($default_destination_concurrency_positive_feedback)\fR"
Idem, for delivery via the named message \fItransport\fR.
-.IP "\fBconcurrency_feedback_debug (no)\fR"
+.IP "\fBdestination_concurrency_feedback_debug (no)\fR"
Make the queue manager's feedback algorithm verbose for performance
analysis purposes.
.SH "RECIPIENT SCHEDULING CONTROLS"
P.O. Box 704
Yorktown Heights, NY 10598, USA
-Scheduler enhancements:
+Preemptive scheduler enhancements:
Patrik Rak
Modra 6
155 00, Prague, Czech Republic
probes.
.PP
Available in Postfix version 2.3 and later:
-.IP "\fBaddress_verify_sender_dependent_relayhost_maps (empty)\fR"
+.IP "\fBaddress_verify_sender_dependent_relayhost_maps ($sender_dependent_relayhost_maps)\fR"
Overrides the sender_dependent_relayhost_maps parameter setting for address
verification probes.
.SH "MISCELLANEOUS CONTROLS"
s;\bqmgr_message_recip[-</bB>]*\n* *[<bB>]*ient_minimum\b;<a href="postconf.5.html#qmgr_message_recipient_minimum">$&</a>;g;
s;\bqmqpd_authorized_clients\b;<a href="postconf.5.html#qmqpd_authorized_clients">$&</a>;g;
- s;\bdefault_concur[-</Bb>]*\n* *[<Bb>]*rency_negative_feedback\b;<a href="postconf.5.html#default_concurrency_negative_feedback">$&</a>;g;
- s;\bdefault_concur[-</Bb>]*\n* *[<Bb>]*rency_positive_feedback\b;<a href="postconf.5.html#default_concurrency_positive_feedback">$&</a>;g;
- s;\bdefault_con[-</Bb>]*\n* *[<Bb>]*currency_failed_cohort_limit\b;<a href="postconf.5.html#default_concurrency_failed_cohort_limit">$&</a>;g;
- s;\bconcurrency_feedback_debug\b;<a href="postconf.5.html#concurrency_feedback_debug">$&</a>;g;
+ s;\bdefault_destination_concur[-</Bb>]*\n* *[<Bb>]*rency_negative_feedback\b;<a href="postconf.5.html#default_destination_concurrency_negative_feedback">$&</a>;g;
+ s;\bdefault_destination_concur[-</Bb>]*\n* *[<Bb>]*rency_positive_feedback\b;<a href="postconf.5.html#default_destination_concurrency_positive_feedback">$&</a>;g;
+ s;\bdefault_destination_con[-</Bb>]*\n* *[<Bb>]*currency_failed_cohort_limit\b;<a href="postconf.5.html#default_destination_concurrency_failed_cohort_limit">$&</a>;g;
+ s;\bdestination_concurrency_feedback_debug\b;<a href="postconf.5.html#destination_concurrency_feedback_debug">$&</a>;g;
s;\bqmqpd_error_delay\b;<a href="postconf.5.html#qmqpd_error_delay">$&</a>;g;
s;\bqmqpd_timeout\b;<a href="postconf.5.html#qmqpd_timeout">$&</a>;g;
# Transport-dependent magical parameters.
- s;(<i>transport</i>)(<b>)?(_concurrency_failed_cohort_limit)\b;$2<a href="postconf.5.html#transport_concurrency_failed_cohort_limit">$1$3</a>;g;
- s;(<i>transport</i>)(<b>)?(_concurrency_negative_feedback)\b;$2<a href="postconf.5.html#transport_concurrency_positive_feedback">$1$3</a>;g;
- s;(<i>transport</i>)(<b>)?(_concurrency_positive_feedback)\b;$2<a href="postconf.5.html#transport_concurrency_positive_feedback">$1$3</a>;g;
+ s;(<i>transport</i>)(<b>)?(_destination_concurrency_failed_cohort_limit)\b;$2<a href="postconf.5.html#transport_destination_concurrency_failed_cohort_limit">$1$3</a>;g;
+    s;(<i>transport</i>)(<b>)?(_destination_concurrency_negative_feedback)\b;$2<a href="postconf.5.html#transport_destination_concurrency_negative_feedback">$1$3</a>;g;
+ s;(<i>transport</i>)(<b>)?(_destination_concurrency_positive_feedback)\b;$2<a href="postconf.5.html#transport_destination_concurrency_positive_feedback">$1$3</a>;g;
s;(<i>transport</i>)(<b>)?(_delivery_slot_cost)\b;$2<a href="postconf.5.html#transport_delivery_slot_cost">$1$3</a>;g;
s;(<i>transport</i>)(<b>)?(_delivery_slot_discount)\b;$2<a href="postconf.5.html#transport_delivery_slot_discount">$1$3</a>;g;
s;(<i>transport</i>)(<b>)?(_delivery_slot_loan)\b;$2<a href="postconf.5.html#transport_delivery_slot_loan">$1$3</a>;g;
<p> The queue manager is by far the most complex part of the Postfix
mail system. It schedules delivery of new mail, retries failed
deliveries at specific times, and removes mail from the queue after
-the last delivery attempt. Once started, the qmgr(8) process runs
-until "postfix reload" or "postfix stop". </p>
+the last delivery attempt. There are two major classes of mechanisms
+that control the operation of the queue manager. </p>
-<p> As a persistent process, the queue manager has to meet strict
-requirements with respect to code correctness and robustness. Unlike
-non-persistent daemon processes, the queue manager cannot benefit
-from Postfix's process rejuvenation mechanism that limit the impact
-from resource leaks and other coding errors. </p>
+<p> The first class of mechanisms is concerned with the number of
+concurrent deliveries to a specific destination, including decisions
+on when to suspend deliveries after persistent failures: </p>
-<p> There are two major classes of mechanisms that control the
-operation of the queue manager: </p>
+ <ul>
-<ul>
+ <li> <a href="#concurrency"> Concurrency scheduling </a>
-<li> <p> Mechanisms concerned with the number of concurrent deliveries
-to a specific destination, including decisions on when to suspend
-deliveries after persistent failures. These are described under "<a
-href="#concurrency">Concurrency scheduling</a>". </p>
+ <ul>
-<li> <p> Mechanisms concerned with the selection of what mail to
-deliver to a given destination. These are described under "<a
-href="#jobs">Preemptive scheduling</a>". </p>
+ <li> <a href="#concurrency_summary_2_5"> Summary of the
+ Postfix 2.5 concurrency feedback algorithm </a>
-</ul>
+ <li> <a href="#dead_summary_2_5"> Summary of the Postfix
+ 2.5 "dead destination" detection algorithm </a>
+
+ <li> <a href="#pseudo_code_2_5"> Pseudocode for the Postfix
+ 2.5 concurrency scheduler </a>
+
+ <li> <a href="#concurrency_results"> Results for delivery
+ to concurrency limited servers </a>
+
+ <li> <a href="#concurrency_discussion"> Discussion of
+ concurrency limited server results </a>
+
+ <li> <a href="#concurrency_limitations"> Limitations of
+ less-than-1 per delivery feedback </a>
+
+ <li> <a href="#concurrency_config"> Concurrency configuration
+ parameters </a>
+
+ </ul>
+
+ </ul>
+
+<p> The second class of mechanisms is concerned with the selection
+of what mail to deliver to a given destination: </p>
+
+ <ul>
+
+ <li> <a href="#jobs"> Preemptive scheduling </a>
+
+ <ul>
+
+ <li> <a href="#job_motivation"> Why the non-preemptive Postfix queue
+ manager was replaced </a>
+
+ <li> <a href="#job_design"> How the non-preemptive queue manager
+ scheduler works </a>
+
+ </ul>
+
+ </ul>
+
+<p> And this document would not be complete without: </p>
+
+ <ul>
+
+ <li> <a href="#credits"> Credits </a>
+
+ </ul>
+
+<!--
+
+<p> Once started, the qmgr(8) process runs until "postfix reload"
+or "postfix stop". As a persistent process, the queue manager has
+to meet strict requirements with respect to code correctness and
+robustness. Unlike non-persistent daemon processes, the queue manager
+cannot benefit from Postfix's process rejuvenation mechanism that
+limits the impact from resource leaks and other coding errors
+(translation: replacing a process after a short time covers up bugs
+before they can become a problem). </p>
+
+-->
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
destination's concurrency level dropped to zero, the destination
was declared "dead" and delivery was suspended. </p>
-<p> Drawbacks of the old +/-1 feedback concurrency scheduler are:
-<p>
+<p> Drawbacks of the old +/-1 feedback per delivery are: </p>
<ul>
It uses separate mechanisms for per-destination concurrency control
and for "dead destination" detection. The concurrency control in
turn is built from two separate mechanisms: it supports less-than-1
-feedback to allow for more gradual concurrency adjustments, and it
-uses feedback hysteresis to suppress concurrency oscillations. And
-instead of waiting for delivery concurrency to throttle down to
-zero, a destination is declared "dead" after a configurable number
-of pseudo-cohorts reports connection or handshake failure. </p>
-
-<h2> Summary of the Postfix 2.5 concurrency feedback algorithm </h2>
-
-<p> We want to increment a destination's delivery concurrency after
-some (not necessarily consecutive) number of deliveries without
-connection or handshake failure. This is implemented with positive
-feedback g(N) where N is the destination's delivery concurrency.
-With g(N)=1 we get the old scheduler's exponential growth in time,
-while g(N)=1/N gives linear growth in time. Less-than-1 feedback
-and integer truncation naturally give us hysteresis, so that
-transitions to larger concurrency happen every 1/g(N) positive
-feedback events. </p>
-
-<p> We want to decrement a destination's delivery concurrency after
-some (not necessarily consecutive) number of deliveries suffer
-connection or handshake failure. This is implemented with negative
-feedback f(N) where N is the destination's delivery concurrency.
-With f(N)=1 we get the old scheduler's behavior where concurrency
-is throttled down dramatically after a single pseudo-cohort failure,
-while f(N)=1/N backs off more gently. Again, less-than-1 feedback
-and integer truncation naturally give us hysteresis, so that
-transitions to lower concurrency happen every 1/f(N) negative
-feedback events. </p>
+feedback per delivery to allow for more gradual concurrency
+adjustments, and it uses feedback hysteresis to suppress concurrency
+oscillations. And instead of waiting for delivery concurrency to
+throttle down to zero, a destination is declared "dead" after a
+configurable number of pseudo-cohorts reports connection or handshake
+failure. </p>
+
+<h3> <a name="concurrency_summary_2_5"> Summary of the Postfix 2.5 concurrency feedback algorithm </a> </h3>
+
+<p> We want to increment a destination's delivery concurrency when
+some (not necessarily consecutive) number of deliveries complete
+without connection or handshake failure. This is implemented with
+positive feedback g(N) where N is the destination's delivery
+concurrency. With g(N)=1 feedback per delivery, concurrency increases
+by 1 after each positive feedback event; this gives us the old
+scheduler's exponential growth in time. With g(N)=1/N feedback per
+delivery, concurrency increases by 1 after an entire pseudo-cohort
+N of positive feedback reports; this gives us linear growth in time.
+Less-than-1 feedback per delivery and integer truncation naturally
+give us hysteresis, so that transitions to larger concurrency happen
+every 1/g(N) positive feedback events. </p>
+
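+<p> For example (made-up numbers): with g(N)=1/N and concurrency
+N=5, each delivery without connection or handshake failure adds
+1/5 to the accumulator, so the fifth positive feedback event raises
+the concurrency to 6; the next transition, to concurrency 7, then
+requires six positive feedback events. </p>
+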
+<p> We want to decrement a destination's delivery concurrency when
+some (not necessarily consecutive) number of deliveries complete
+with connection or handshake failure. This is implemented with
+negative feedback f(N) where N is the destination's delivery
+concurrency. With f(N)=1 feedback per delivery, concurrency decreases
+by 1 after each negative feedback event; this gives us the old
+scheduler's behavior where concurrency is throttled down dramatically
+after a single pseudo-cohort failure. With f(N)=1/N feedback per
+delivery, concurrency backs off more gently. Again, less-than-1
+feedback per delivery and integer truncation naturally give us
+hysteresis, so that transitions to lower concurrency happen every
+1/f(N) negative feedback events. </p>
<p> However, with negative feedback we introduce a subtle twist.
-We "reverse" the hysteresis cycle so that the transition to lower
-concurrency happens at the <b>beginning</b> of a sequence of 1/f(N)
-negative feedback events. Otherwise, a correction for overload
-would be made too late. In the case of a concurrency-limited server,
-this makes the choice of f(N) relatively unimportant, as borne out
-by measurements. </p>
+We "reverse" the negative hysteresis cycle so that the transition
+to lower concurrency happens at the <b>beginning</b> of a sequence
+of 1/f(N) negative feedback events. Otherwise, a correction for
+overload would be made too late. This makes the choice of f(N)
+relatively unimportant, as borne out by measurements later in this
+document. </p>
<p> In summary, the main ingredients for the Postfix 2.5 concurrency
feedback algorithm are a) the option of less-than-1 positive feedback
-to avoid overwhelming servers, b) the option of less-than-1 negative
-feedback to avoid or giving up too fast, c) feedback hysteresis to
-avoid rapid oscillation, and c) a "reverse" hysteresis cycle for
-negative feedback, so that it can correct for overload quickly. </p>
+per delivery to avoid overwhelming servers, b) the option of
+less-than-1 negative feedback per delivery to avoid giving up too
+fast, c) feedback hysteresis to avoid rapid oscillation, and d) a
+"reverse" hysteresis cycle for negative feedback, so that it can
+correct for overload quickly. </p>
-<h2> Summary of the Postfix 2.5 "dead destination" detection algorithm </h2>
+<h3> <a name="dead_summary_2_5"> Summary of the Postfix 2.5 "dead destination" detection algorithm </a> </h3>
<p> We want to suspend deliveries to a specific destination after
some number of deliveries suffers connection or handshake failure.
The old scheduler declares a destination "dead" when negative (-1)
feedback throttles the delivery concurrency down to zero. With
-less-than-1 feedback, this throttling down would obviously take too
-long. We therefore have to separate "dead destination" detection
-from concurrency feedback. This is implemented by introducing the
-concept of pseudo-cohort failure. The Postfix 2.5 concurrency
-scheduler declares a destination "dead" after a configurable number
-of pseudo-cohort failures. The old scheduler corresponds to the
-special case where the pseudo-cohort failure limit is equal to 1.
-</p>
+less-than-1 feedback per delivery, this throttling down would
+obviously take too long. We therefore have to separate "dead
+destination" detection from concurrency feedback. This is implemented
+by introducing the concept of pseudo-cohort failure. The Postfix
+2.5 concurrency scheduler declares a destination "dead" after a
+configurable number of pseudo-cohorts suffers from connection or
+handshake failures. The old scheduler corresponds to the special
+case where the pseudo-cohort failure limit is equal to 1. </p>
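+
+<p> As a concrete illustration (numbers made up): at delivery
+concurrency 5, each delivery that suffers connection or handshake
+failure adds 1/5 to the pseudo-cohort failure counter, so five such
+deliveries add up to one failed pseudo-cohort; a delivery that
+completes without such failure resets the counter to zero. </p>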
-<h2> Pseudocode for the Postfix 2.5 concurrency scheduler </h2>
+<h3> <a name="pseudo_code_2_5"> Pseudocode for the Postfix 2.5 concurrency scheduler </a> </h3>
<p> The pseudo code shows how the ideas behind the new concurrency
scheduler are implemented as of November 2007. The actual code can
<pre>
Types:
- Each destination has one set of the following variables
- int window
+ Each destination has one set of the following variables
+ int concurrency
double success
double failure
double fail_cohorts
Feedback functions:
- N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
+ N is concurrency; x, y are arbitrary numbers in [0..1] inclusive
positive feedback: g(N) = x/N | x/sqrt(N) | x
negative feedback: f(N) = y/N | y/sqrt(N) | y
Initialization:
- window = initial_concurrency
+ concurrency = initial_concurrency
success = 0
failure = 0
fail_cohorts = 0
After success:
fail_cohorts = 0
Be prepared for feedback > hysteresis, or rounding error
- success += g(window)
- while (success >= 1) Hysteresis 1
- window += 1 Hysteresis 1
+ success += g(concurrency)
+ while (success >= 1) Hysteresis 1
+ concurrency += 1 Hysteresis 1
failure = 0
- success -= 1 Hysteresis 1
+ success -= 1 Hysteresis 1
Be prepared for overshoot
- if (window > concurrency limit)
- window = concurrency limit
+ if (concurrency > concurrency limit)
+ concurrency = concurrency limit
Safety:
Don't apply positive feedback unless
- window < busy_refcount + init_dest_concurrency
+ concurrency < busy_refcount + init_dest_concurrency
otherwise negative feedback effect could be delayed
After failure:
- if (window > 0)
- fail_cohorts += 1.0 / window
+ if (concurrency > 0)
+ fail_cohorts += 1.0 / concurrency
if (fail_cohorts > cohort_failure_limit)
- window = 0
- if (window > 0)
+ concurrency = 0
+ if (concurrency > 0)
Be prepared for feedback > hysteresis, rounding errors
- failure -= f(window)
+ failure -= f(concurrency)
while (failure < 0)
- window -= 1 Hysteresis 1
- failure += 1 Hysteresis 1
+ concurrency -= 1 Hysteresis 1
+ failure += 1 Hysteresis 1
success = 0
Be prepared for overshoot
- if (window < 1)
- window = 1
+ if (concurrency < 1)
+ concurrency = 1
</pre>
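+
+<p> For illustration only, the following minimal, self-contained
+C sketch transliterates the pseudocode above, with the example
+choices g(N)=1/N, f(N)=1/N, initial concurrency 5, concurrency
+limit 20, and a pseudo-cohort failure limit of 2; the "Safety"
+clause is omitted because the sketch has no notion of busy deliveries.
+This is not Postfix code. </p>
+
+<pre>
+#include &lt;stdio.h&gt;
+
+static int concurrency = 5;                 /* initial_concurrency */
+static double success = 0, failure = 0, fail_cohorts = 0;
+static const int concurrency_limit = 20;
+static const double cohort_failure_limit = 2;
+
+static double g(int n) { return 1.0 / n; }  /* positive feedback */
+static double f(int n) { return 1.0 / n; }  /* negative feedback */
+
+static void after_success(void)
+{
+    fail_cohorts = 0;
+    success += g(concurrency);
+    while (success >= 1) {                  /* hysteresis 1 */
+        concurrency += 1;
+        failure = 0;
+        success -= 1;
+    }
+    if (concurrency > concurrency_limit)    /* overshoot */
+        concurrency = concurrency_limit;
+}
+
+static void after_failure(void)
+{
+    if (concurrency > 0) {
+        fail_cohorts += 1.0 / concurrency;
+        if (fail_cohorts > cohort_failure_limit)
+            concurrency = 0;                /* destination is "dead" */
+    }
+    if (concurrency > 0) {
+        failure -= f(concurrency);
+        while (failure < 0) {               /* reverse hysteresis 1 */
+            concurrency -= 1;
+            failure += 1;
+        }
+        success = 0;
+        if (concurrency < 1)                /* overshoot */
+            concurrency = 1;
+    }
+}
+
+int main(void)
+{
+    int i;
+
+    for (i = 0; i < 20; i++)                /* 20 good deliveries */
+        after_success();
+    printf("after 20 successes: concurrency %d\n", concurrency);
+    after_failure();                        /* 1 bad delivery */
+    printf("after 1 failure: concurrency %d\n", concurrency);
+    return (0);
+}
+</pre>
+
+<p> Running the sketch shows the reverse hysteresis at work: a
+single failed delivery lowers the concurrency immediately, while
+each increase requires a full 1/g(N) sequence of successes. </p>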
-<h2> Results for the Postfix 2.5 concurrency feedback scheduler </h2>
+<h3> <a name="concurrency_results"> Results for delivery to concurrency limited servers </a> </h3>
<p> Discussions about the concurrency scheduler redesign started
in early 2004, when the primary goal was to find alternatives that did
shifted towards better handling of server concurrency limits. For
this reason we measure how well the new scheduler does this
job. The table below compares mail delivery performance of the old
-+/-1 feedback with other feedback functions, for different server
-concurrency enforcement methods. Measurements were done with a
-FreeBSD 6.2 client and with FreeBSD 6.2 and various Linux servers.
-</p>
++/-1 feedback per delivery with several less-than-1 feedback
+functions, for different limited-concurrency server scenarios.
+Measurements were done with a FreeBSD 6.2 client and with FreeBSD
+6.2 and various Linux servers. </p>
-<li> Server configuration:
+<p> Server configuration: </p>
<ul> <li> The mail flow was slowed down with 1 second latency per
recipient ("smtpd_client_restrictions = sleep 1"). The purpose was
-to make results less dependent on hardware details, by reducing the
-slow-downs by disk I/O, logging I/O, and network I/O.
+to make results less dependent on hardware details, by avoiding
+slow-downs by queue file I/O, logging I/O, and network I/O.
<li> Concurrency was limited by the server process limit
-("default_process_limit = 5", "smtpd_client_event_limit_exceptions
+("default_process_limit = 5" and "smtpd_client_event_limit_exceptions
= static:all"). Postfix was stopped and started after changing the
process limit, because the same number is also used as the backlog
argument to the listen(2) system call, and "postfix reload" does
</ul>
-<li> Client configuration:
+<p> Client configuration: </p>
<ul>
Postfix to schedule the concurrency per recipient instead of domain,
which is not what we want.
-<li> Maximal concurrency was limited with
+<li> Maximum concurrency was limited with
"smtp_destination_concurrency_limit = 20", and
initial_destination_concurrency was set to the same value.
</ul>
+<h4> Impact of the 30s SMTP connect timeout </h4>
+
<p> The first results are for a FreeBSD 6.2 server, where our
artificially low listen(2) backlog results in a very short kernel
-queue for established connections. As the table shows, all deferred
+queue for established connections. The table shows that all deferred
deliveries failed due to a 30s connection timeout, and none failed
due to a server greeting timeout. This measurement simulates what
happens when the server's connection queue is completely full under
<p> A busy server with a completely full connection queue. N is
the client delivery concurrency. Failed deliveries time out after
-30s without completing the TCP handshake. See below for a discussion
+30s without completing the TCP handshake. See text for a discussion
of results. </p>
</blockquote>
+<h4> Impact of the 300s SMTP greeting timeout </h4>
+
<p> The next table shows results for a Fedora Core 8 server (results
-for RedHat 7.3 are identical). In this case, the listen(2) backlog
-argument has little if any effect on the kernel's established
-connection queue. As the table shows, practically all deferred
-deliveries fail after the 300s SMTP greeting timeout. As these
-timeouts were 10x longer than with the previous measurement, we
-increased the recipient count (and thus the running time) by a
-factor of 10 to keep the results comparable. </p>
+for RedHat 7.3 are identical). In this case, the artificially small
+listen(2) backlog argument does not impact our measurement. The
+table shows that practically all deferred deliveries fail after the
+300s SMTP greeting timeout. As these timeouts were 10x longer than
+with the first measurement, we increased the recipient count (and
+thus the running time) by a factor of 10 to keep the results
+comparable. The deferred mail percentages are a factor of 10 lower
+than with the first measurement, because the 1s per-recipient delay
+was 1/300th of the greeting timeout instead of 1/30th of the
+connection timeout. </p>
<blockquote>
<p> A busy server with a non-full connection queue. N is the client
delivery concurrency. Failed deliveries complete at the TCP level,
but time out after 300s while waiting for the SMTP greeting. See
-below for a discussion of results. </p>
+text for a discussion of results. </p>
</blockquote>
+<h4> Impact of active server concurrency limiter </h4>
<p> The final concurrency limited result shows what happens when
SMTP connections don't time out, but are rejected immediately with
-the Postfix server's smtpd_client_connection_count_limit feature.
+the Postfix server's smtpd_client_connection_count_limit feature
+(the server replies with a 421 status and disconnects immediately).
Similar results can be expected with concurrency limiting features
built into other MTAs or firewalls. For this measurement we specified
a server concurrency limit and a client initial destination concurrency
-of 5, and a server process limit of 10. The server was FreeBSD 6.2
-but that does not matter here, because the "push back" is done
-entirely by the server's Postfix itself. </p>
+of 5, and a server process limit of 10; all other conditions were
+the same as with the first measurement. The same result would be
+obtained with a FreeBSD or Linux server, because the "pushing back"
+is done entirely by the receiving Postfix. </p>
<blockquote>
<p> A server with an active per-client concurrency limiter that replies
with 421 and disconnects. N is the client delivery concurrency.
-The theoretical mail deferral rate is 1/(1+roundup(1/feedback)).
-This is always 1/2 with the fixed +/-1 feedback; with the variable
-feedback variants, the defer rate decreases with increasing
-concurrency. See below for a discussion of results. </p>
+The theoretical defer rate is 1/(1+roundup(1/feedback)). This is
+always 1/2 with the fixed +/-1 feedback per delivery; with the
+concurrency-dependent feedback variants, the defer rate decreases
+with increasing concurrency. See text for a discussion of results.
+</p>
</blockquote>
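+
+<p> As an assumed worked example: with feedback of 1/4 per delivery,
+roundup(1/feedback) = 4, so the theoretical defer rate is 1/(1+4)
+= 20%, instead of the 50% that results from the old +/-1 feedback
+per delivery. </p>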
-<p> The results are based on the first delivery runs only; they do
-not include any second etc. delivery attempts.
-
-<p> The first two examples show that the feedback method matters
-little when concurrency is limited due to congestion. This is because
-the initial concurrency was already at the client's concurrency
-maximum, and because there was 10-100 times more positive than
-negative feedback. The contribution from SMTP connection caching
-was also minor for these two examples. </p>
-
-<p> In the last example, the old +/-1 feedback scheduler defers 50%
-of the mail when confronted with an active (anvil-style) server
-concurrency limit, where the server hangs up immediately with a 421
-status (a TCP-level RST would have the same result). Less aggressive
-feedback mechanisms fare better here, and the concurrency-dependent
-feedback fares even better at higher concurrencies than shown here,
-but they have limitations as discussed in the next section. </p>
-
-<h2> Limitations of less-than-1 feedback </h2>
-
-<p> The delivery concurrency scheduler with less-than-1 feedback
-solves a problem with servers that have active concurrency limiters,
-but this works well only because feedback is handled in a peculiar
-manner: positive feedback increments the concurrency by 1 at the
-end of a sequence of events of length 1/feedback, while negative
-feedback decrements concurrency by 1 at the beginning of such a
-sequence. This is how Postfix adjusts quickly for overshoot without
-causing lots of mail to be deferred. Without this difference in
-feedback treatment, less-than-1 feedback would defer 50% of the
-mail, and would be no better in this respect than the simple +/-1
-feedback scheduler. </p>
+<h3> <a name="concurrency_discussion"> Discussion of concurrency limited server results </a> </h3>
+
+<p> All results in the previous sections are based on the first
+delivery runs only; they do not include any second etc. delivery
+attempts. The first two examples show that the feedback method
+matters little when concurrency is limited due to congestion. This
+is because the initial concurrency is already at the client's
+concurrency maximum, and because there is 10-100 times more positive
+than negative feedback. Under these conditions, the contribution
+from SMTP connection caching is negligible. </p>
+
+<p> In the last example, the old +/-1 feedback per delivery will
+defer 50% of the mail when confronted with an active (anvil-style)
+server concurrency limit, where the server hangs up immediately
+with a 421 status (a TCP-level RST would have the same result).
+Less aggressive feedback mechanisms fare better than more aggressive
+ones. Concurrency-dependent feedback fares even better at higher
+concurrencies than shown here, but has limitations as discussed in
+the next section. </p>
+
+<h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>
+
+<p> The delivery concurrency scheduler with less-than-1 concurrency
+feedback per delivery solves a problem with servers that have active
+concurrency limiters. This works only because feedback is handled
+in a peculiar manner: positive feedback will increment the concurrency
+by 1 at the <b>end</b> of a sequence of events of length 1/feedback,
+while negative feedback will decrement concurrency by 1 at the
+<b>beginning</b> of such a sequence. This is how Postfix adjusts
+quickly for overshoot without causing lots of mail to be deferred.
+Without this difference in feedback treatment, less-than-1 feedback
+per delivery would defer 50% of the mail, and would be no better
+in this respect than the old +/-1 feedback per delivery. </p>
<p> Unfortunately, the same feature that corrects quickly for
concurrency overshoot also makes the scheduler more sensitive to
noisy negative feedback. The reason is that one lonely negative
feedback event has the same effect as a complete sequence of length
1/feedback: in both cases delivery concurrency is dropped by 1
-immediately. For example, when multiple servers are placed behind
-a load balancer on a single IP address, and 1 out of K servers fails
-to complete the SMTP handshake, a scheduler with 1/N (N = concurrency)
-feedback will stop increasing its concurrency once it reaches roughly
-K. Even though the good servers behind the load balancer are
-perfectly capable of handling more mail, the 1/N feedback scheduler
-will linger around concurrency K. </p>
-
-<p> This problem with 1/N feedback gets worse as 1/N gets smaller.
-A workaround is to use fixed less-than-1 values for positive and
-negative feedback that limit the noise sensitivity, for example:
-positive feedback of 1/4 and negative feedback 1/10. Of course
-using fixed feedback means concurrency growth is moderated only for
-a limited range of concurrencies. Sites that deliver at per-destination
-concurrencies of 50 or more will require special configuration.
-</p>
+immediately. As a worst-case scenario, consider multiple servers
+behind a load balancer on a single IP address, and no backup MX
+address. When 1 out of K servers fails to complete the SMTP handshake
+or drops the connection, a scheduler with 1/N (N = concurrency)
+feedback stops increasing its concurrency once it reaches a concurrency
+level of about K, even though the good servers behind the load
+balancer are perfectly capable of handling more traffic. </p>
+
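+<p> Intuitively: with 1/N feedback, an increase from concurrency
+K to K+1 requires about K "good" deliveries, but when 1 out of K
+servers is bad, those K deliveries include on average one "bad"
+delivery, whose immediate -1 adjustment cancels the gain. </p>
+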
+<p> This noise problem gets worse as the amount of positive feedback
+per delivery gets smaller. A compromise is to avoid concurrency-dependent
+positive feedback, and to use fixed less-than-1 feedback values
+instead. For example, to tolerate 1 of 4 bad servers in the above
+load balancer scenario, use positive feedback of 1/4 per "good"
+delivery (no connect or handshake error), and use an equal or smaller
+amount of negative feedback per "bad" delivery. The downside of
+using concurrency-independent feedback is that some of the old +/-1
+feedback problems will return at large concurrencies. Sites that
+deliver at non-trivial per-destination concurrencies will require
+special configuration. </p>
+
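+<p> As an illustration only (the parameter names are real, but the
+values are merely an example for the 1-of-4 scenario above, not a
+recommendation), such a compromise could look like this in main.cf:
+</p>
+
+<blockquote>
+<pre>
+# Fixed, concurrency-independent feedback per delivery.
+smtp_destination_concurrency_positive_feedback = 1/4
+smtp_destination_concurrency_negative_feedback = 1/8
+# Tolerate repeated pseudo-cohort failures before suspending delivery.
+smtp_destination_concurrency_failed_cohort_limit = 10
+# Log the scheduler's feedback decisions.
+destination_concurrency_feedback_debug = yes
+</pre>
+</blockquote>
+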
+<h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
+
+<p> The Postfix 2.5 concurrency scheduler is controlled with the
+following configuration parameters, where "<i>transport</i>_foo"
+provides a transport-specific parameter override. All parameter
+default settings are compatible with earlier Postfix versions. </p>
+
+<blockquote>
+
+<table border="0">
+
+<tr> <th> Parameter name </th> <th> Postfix version </th> <th>
+Description </th> </tr>
+
+<tr> <td colspan="3"> <hr> </td> </tr>
+
+<tr> <td> initial_destination_concurrency<br>
+<i>transport</i>_initial_destination_concurrency </td> <td
+align="center"> all<br> 2.5 </td> <td> Initial per-destination
+delivery concurrency </td> </tr>
+
+<tr> <td> default_destination_concurrency_limit<br>
+<i>transport</i>_destination_concurrency_limit </td> <td align="center">
+all<br> all </td> <td> Maximum per-destination delivery concurrency
+</td> </tr>
+
+<tr> <td> default_destination_concurrency_positive_feedback<br>
+<i>transport</i>_destination_concurrency_positive_feedback </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Per-destination positive
+feedback amount, per delivery that does not fail with connection
+or handshake failure </td> </tr>
+
+<tr> <td> default_destination_concurrency_negative_feedback<br>
+<i>transport</i>_destination_concurrency_negative_feedback </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Per-destination negative
+feedback amount, per delivery that fails with connection or handshake
+failure </td> </tr>
+
+<tr> <td> default_destination_concurrency_failed_cohort_limit<br>
+<i>transport</i>_destination_concurrency_failed_cohort_limit </td>
+<td align="center"> 2.5<br> 2.5 </td> <td> Number of failed
+pseudo-cohorts after which a destination is declared "dead" and
+delivery is suspended </td> </tr>
+
+<tr> <td> destination_concurrency_feedback_debug</td> <td align="center">
+2.5 </td> <td> Enable verbose logging of concurrency scheduler
+activity </td> </tr>
+
+<tr> <td colspan="3"> <hr> </td> </tr>
+
+</table>
+
+</blockquote>
<h2> <a name="jobs"> Preemptive scheduling </a> </h2>
will for some time be available under the name of "oqmgr(8)".
</p>
-<h3>Why the non-preemptive Postfix queue manager was replaced</h3>
+<h3> <a name="job_motivation"> Why the non-preemptive Postfix queue manager was replaced </a> </h3>
<p> The non-preemptive Postfix scheduler had several limitations
due to unfortunate choices in its design. </p>
</ol>
-<h3>How the non-preemptive queue manager scheduler works </h3>
+<h3> <a name="job_design"> How the non-preemptive queue manager scheduler works </a> </h3>
<p> The following text is from Patrik Rak and should be read together
with the postconf(5) manual that describes each configuration
that to understand the scheduling algorithm itself (which was the
real thinking work) is fairly easy. </p>
+<h2> <a name="credits"> Credits </a> </h2>
+
+<ul>
+
+<li> Wietse Venema designed and implemented the initial queue manager
+with per-domain FIFO scheduling, and per-delivery +/-1 concurrency
+feedback.
+
+<li> Patrik Rak designed and implemented preemption where mail with
+fewer recipients can slip past mail with more recipients.
+
+<li> Wietse Venema initiated a discussion with Patrik Rak and Victor
+Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
+behavior. This is when K/N feedback was reviewed (N = concurrency).
+The discussion ended without a good solution for both negative
+feedback and dead site detection.
+
+<li> Victor Duchovni resumed work on concurrency feedback in the
+context of concurrency-limited servers.
+
+<li> Wietse Venema then re-designed the concurrency scheduler in
+terms of simplest possible concepts: less-than-1 concurrency feedback
+per delivery, forward and reverse concurrency feedback hysteresis,
+and pseudo-cohort failure. At this same time, concurrency feedback
+was separated from dead site detection.
+
+<li> These simplifications, and their modular implementation, helped
+to develop further insights into the different roles that positive
+and negative concurrency feedback play, and helped to avoid all the
+known worst-case scenarios.
+
+</ul>
+
</body>
</html>
<p> This feature is available in Postfix 2.2 and later. </p>
-%PARAM connection_cache_service scache
+%PARAM connection_cache_service_name scache
<p> The name of the scache(8) connection cache service. This service
maintains a limited pool of cached sessions. </p>
earlier versions, sender_dependent_relayhost_maps lookups were
skipped for the null sender address. </p>
-%PARAM address_verify_sender_dependent_relayhost_maps empty
+%PARAM address_verify_sender_dependent_relayhost_maps $sender_dependent_relayhost_maps
<p>
Overrides the sender_dependent_relayhost_maps parameter setting for address
<p> This feature is available in Postfix 2.5 and later. </p>
-%PARAM concurrency_feedback_debug no
+%PARAM destination_concurrency_feedback_debug no
<p> Make the queue manager's feedback algorithm verbose for performance
analysis purposes. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
-%PARAM default_concurrency_failed_cohort_limit 1
+%PARAM default_destination_concurrency_failed_cohort_limit 1
<p> How many pseudo-cohorts must suffer connection or handshake
failure before a specific destination is considered unavailable
<p> A pseudo-cohort is the number of deliveries equal to a destination's
delivery concurrency. </p>
-<p> Use <i>transport</i>_concurrency_failed_cohort_limit to specify
+<p> Use <i>transport</i>_destination_concurrency_failed_cohort_limit to specify
a transport-specific override, where <i>transport</i> is the master.cf
name of the message delivery transport. </p>
<p> This feature is available in Postfix 2.5. The default setting
is compatible with earlier Postfix versions. </p>
-%PARAM default_concurrency_negative_feedback 1
+%PARAM default_destination_concurrency_negative_feedback 1
-<p> The per-destination amount of negative delivery concurrency
+<p> The per-destination amount of delivery concurrency negative
feedback, after a delivery completes with a connection or handshake
-failure. Feedback values are in range 0..1 inclusive. With negative
-feedback, concurrency is decremented at the beginning of a sequence
-of length 1/feedback. This is unlike positive feedback, where
-concurrency is incremented at the end of a sequence of length
+failure. Feedback values are in the range 0..1 inclusive. With
+negative feedback, concurrency is decremented at the beginning of
+a sequence of length 1/feedback. This is unlike positive feedback,
+where concurrency is incremented at the end of a sequence of length
1/feedback. </p>
<p> As of Postfix version 2.5, negative feedback cannot reduce
delivery concurrency to zero. Instead, a destination is marked
dead (further delivery suspended) after the failed pseudo-cohort
-count reaches $default_concurrency_failed_cohort_limit (or
-$<i>transport</i>_concurrency_failed_cohort_limit). To make the
-scheduler completely immune to connection or handshake failures,
-specify a zero feedback value and a zero failed pseudo-cohort limit.
-</p>
+count reaches $default_destination_concurrency_failed_cohort_limit
+(or $<i>transport</i>_destination_concurrency_failed_cohort_limit).
+To make the scheduler completely immune to connection or handshake
+failures, specify a zero feedback value and a zero failed pseudo-cohort
+limit. </p>
<p> Specify one of the following forms: </p>
<p> A pseudo-cohort is the number of deliveries equal to a destination's
delivery concurrency. </p>
-<p> Use <i>transport</i>_concurrency_negative_feedback to specify
-a transport-specific override, where <i>transport</i> is the master.cf
+<p> Use <i>transport</i>_destination_concurrency_negative_feedback
+to specify a transport-specific override, where <i>transport</i>
+is the master.cf
name of the message delivery transport. </p>
<p> This feature is available in Postfix 2.5. The default setting
is compatible with earlier Postfix versions. </p>
-%PARAM default_concurrency_positive_feedback 1
+%PARAM default_destination_concurrency_positive_feedback 1
-<p> The per-destination amount of positive delivery concurrency
+<p> The per-destination amount of delivery concurrency positive
feedback, after a delivery completes without connection or handshake
failure. Feedback values are in the range 0..1 inclusive. The
concurrency increases until it reaches the per-destination maximal
<p> A pseudo-cohort is the number of deliveries equal to a destination's
delivery concurrency. </p>
-<p> Use <i>transport</i>_concurrency_positive_feedback to specify
-a transport-specific override, where <i>transport</i> is the master.cf
-name of the message delivery transport. </p>
+<p> Use <i>transport</i>_destination_concurrency_positive_feedback
+to specify a transport-specific override, where <i>transport</i>
+is the master.cf name of the message delivery transport. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
-%PARAM <i>transport</i>_concurrency_failed_cohort_limit $default_concurrency_failed_cohort_limit
+%PARAM <i>transport</i>_destination_concurrency_failed_cohort_limit $default_destination_concurrency_failed_cohort_limit
<p> A transport-specific override for the
-default_concurrency_failed_cohort_limit parameter value, where
-<i>transport</i> is the master.cf name of the message delivery
+default_destination_concurrency_failed_cohort_limit parameter value,
+where <i>transport</i> is the master.cf name of the message delivery
transport. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
-%PARAM transport_concurrency_positive_feedback $default_concurrency_positive_feedback
+%PARAM transport_destination_concurrency_positive_feedback $default_destination_concurrency_positive_feedback
<p> A transport-specific override for the
-default_concurrency_positive_feedback parameter value, where
-<i>transport</i> is the master.cf name of the message delivery
+default_destination_concurrency_positive_feedback parameter value,
+where <i>transport</i> is the master.cf name of the message delivery
transport. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
-%PARAM transport_concurrency_negative_feedback $default_concurrency_negative_feedback
+%PARAM transport_destination_concurrency_negative_feedback $default_destination_concurrency_negative_feedback
<p> A transport-specific override for the
-default_concurrency_negative_feedback parameter value, where
-<i>transport</i> is the master.cf name of the message delivery
+default_destination_concurrency_negative_feedback parameter value,
+where <i>transport</i> is the master.cf name of the message delivery
transport. </p>
<p> This feature is available in Postfix 2.5 and later. </p>
<i>transport</i> is the master.cf name of the message delivery
transport. </p>
-%PARAM transport_destination_recipient_limit $default_destination_concurrency_limit
+%PARAM transport_destination_recipient_limit $default_destination_recipient_limit
<p> A transport-specific override for the
default_destination_recipient_limit parameter value, where
/*
* Scheduler concurrency feedback algorithms.
*/
-#define VAR_CONC_POS_FDBACK "default_concurrency_positive_feedback"
+#define VAR_CONC_POS_FDBACK "default_destination_concurrency_positive_feedback"
-#define _CONC_POS_FDBACK "_concurrency_positive_feedback"
+#define _CONC_POS_FDBACK "_destination_concurrency_positive_feedback"
#define DEF_CONC_POS_FDBACK "1"
extern char *var_conc_pos_feedback;
-#define VAR_CONC_NEG_FDBACK "default_concurrency_negative_feedback"
+#define VAR_CONC_NEG_FDBACK "default_destination_concurrency_negative_feedback"
-#define _CONC_NEG_FDBACK "_concurrency_negative_feedback"
+#define _CONC_NEG_FDBACK "_destination_concurrency_negative_feedback"
#define DEF_CONC_NEG_FDBACK "1"
extern char *var_conc_neg_feedback;
#define CONC_FDBACK_NAME_WIN "concurrency"
#define CONC_FDBACK_NAME_SQRT_WIN "sqrt_concurrency"
-#define VAR_CONC_COHORT_LIM "default_concurrency_failed_cohort_limit"
+#define VAR_CONC_COHORT_LIM "default_destination_concurrency_failed_cohort_limit"
-#define _CONC_COHORT_LIM "_concurrency_failed_cohort_limit"
+#define _CONC_COHORT_LIM "_destination_concurrency_failed_cohort_limit"
#define DEF_CONC_COHORT_LIM 1
extern int var_conc_cohort_limit;
-#define VAR_CONC_FDBACK_DEBUG "concurrency_feedback_debug"
+#define VAR_CONC_FDBACK_DEBUG "destination_concurrency_feedback_debug"
#define DEF_CONC_FDBACK_DEBUG 0
extern bool var_conc_feedback_debug;
static int stamp_stream(VSTREAM *fp, time_t when)
{
- struct timeval tv;
+ struct timeval tv[2];
if (when != 0) {
- tv.tv_sec = when;
- tv.tv_usec = 0;
- return (futimesat(vstream_fileno(fp), (char *) 0, &tv));
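+ /* futimesat() expects two timevals: access time, then modification time. */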
+ tv[0].tv_sec = tv[1].tv_sec = when;
+ tv[0].tv_usec = tv[1].tv_usec = 0;
+ return (futimesat(vstream_fileno(fp), (char *) 0, tv));
} else {
return (futimesat(vstream_fileno(fp), (char *) 0, (struct timeval *) 0));
}
static int stamp_stream(VSTREAM *fp, time_t when)
{
- struct timeval tv;
+ struct timeval tv[2];
if (when != 0) {
- tv.tv_sec = when;
- tv.tv_usec = 0;
- return (futimes(vstream_fileno(fp), &tv));
+ tv[0].tv_sec = tv[1].tv_sec = when;
+ tv[0].tv_usec = tv[1].tv_usec = 0;
+ return (futimes(vstream_fileno(fp), tv));
} else {
return (futimes(vstream_fileno(fp), (struct timeval *) 0));
}
* Patches change both the patchlevel and the release date. Snapshots have no
* patchlevel; they change the release date only.
*/
-#define MAIL_RELEASE_DATE "20071129"
+#define MAIL_RELEASE_DATE "20071130"
#define MAIL_VERSION_NUMBER "2.5"
#ifdef SNAPSHOT
SHELL = /bin/sh
SRCS = qmgr.c qmgr_active.c qmgr_transport.c qmgr_queue.c qmgr_entry.c \
qmgr_message.c qmgr_deliver.c qmgr_move.c \
- qmgr_defer.c qmgr_enable.c qmgr_scan.c qmgr_bounce.c qmgr_error.c
+ qmgr_defer.c qmgr_enable.c qmgr_scan.c qmgr_bounce.c qmgr_error.c \
+ qmgr_feedback.c
OBJS = qmgr.o qmgr_active.o qmgr_transport.o qmgr_queue.o qmgr_entry.o \
qmgr_message.o qmgr_deliver.o qmgr_move.o \
- qmgr_defer.o qmgr_enable.o qmgr_scan.o qmgr_bounce.o qmgr_error.o
+ qmgr_defer.o qmgr_enable.o qmgr_scan.o qmgr_bounce.o qmgr_error.o \
+ qmgr_feedback.o
HDRS = qmgr.h
TESTSRC =
DEFS = -I. -I$(INC_DIR) -D$(SYSTYPE)
TESTPROG=
PROG = qmgr
INC_DIR = ../../include
-LIBS = ../../lib/libmaster.a ../../lib/libglobal.a ../../lib/libutil.a
+LIBS = ../../lib/libmaster.a ../../lib/libglobal.a ../../lib/libutil.a -lm
.c.o:; $(CC) $(CFLAGS) -c $*.c
qmgr_error.o: ../../include/vstring.h
qmgr_error.o: qmgr.h
qmgr_error.o: qmgr_error.c
+qmgr_feedback.o: ../../include/dsn.h
+qmgr_feedback.o: ../../include/mail_conf.h
+qmgr_feedback.o: ../../include/mail_params.h
+qmgr_feedback.o: ../../include/msg.h
+qmgr_feedback.o: ../../include/mymalloc.h
+qmgr_feedback.o: ../../include/name_code.h
+qmgr_feedback.o: ../../include/recipient_list.h
+qmgr_feedback.o: ../../include/scan_dir.h
+qmgr_feedback.o: ../../include/stringops.h
+qmgr_feedback.o: ../../include/sys_defs.h
+qmgr_feedback.o: ../../include/vbuf.h
+qmgr_feedback.o: ../../include/vstream.h
+qmgr_feedback.o: ../../include/vstring.h
+qmgr_feedback.o: qmgr.h
+qmgr_feedback.o: qmgr_feedback.c
qmgr_message.o: ../../include/argv.h
qmgr_message.o: ../../include/attr.h
qmgr_message.o: ../../include/bounce.h
qmgr_move.o: ../../include/vstring.h
qmgr_move.o: qmgr.h
qmgr_move.o: qmgr_move.c
+qmgr_queue.o: ../../include/attr.h
qmgr_queue.o: ../../include/dsn.h
qmgr_queue.o: ../../include/events.h
qmgr_queue.o: ../../include/htable.h
+qmgr_queue.o: ../../include/iostuff.h
qmgr_queue.o: ../../include/mail_params.h
+qmgr_queue.o: ../../include/mail_proto.h
qmgr_queue.o: ../../include/msg.h
qmgr_queue.o: ../../include/mymalloc.h
qmgr_queue.o: ../../include/recipient_list.h
/* .IP "\fBdefault_destination_concurrency_limit (20)\fR"
/* The default maximal number of parallel deliveries to the same
/* destination.
-/* .IP \fItransport\fB_destination_concurrency_limit\fR
+/* .IP "\fItransport\fB_destination_concurrency_limit ($default_destination_concurrency_limit)\fR"
/* Idem, for delivery via the named message \fItransport\fR.
+/* .PP
+/* Available in Postfix version 2.5 and later:
+/* .IP "\fItransport\fB_initial_destination_concurrency ($initial_destination_concurrency)\fR"
+/* Initial concurrency for delivery via the named message
+/* \fItransport\fR.
+/* .IP "\fBdefault_destination_concurrency_failed_cohort_limit (1)\fR"
+/* How many pseudo-cohorts must suffer connection or handshake
+/* failure before a specific destination is considered unavailable
+/* (and further delivery is suspended).
+/* .IP "\fItransport\fB_destination_concurrency_failed_cohort_limit ($default_destination_concurrency_failed_cohort_limit)\fR"
+/* Idem, for delivery via the named message \fItransport\fR.
+/* .IP "\fBdefault_destination_concurrency_negative_feedback (1)\fR"
+/* The per-destination amount of delivery concurrency negative
+/* feedback, after a delivery completes with a connection or handshake
+/* failure.
+/* .IP "\fItransport\fB_destination_concurrency_negative_feedback ($default_destination_concurrency_negative_feedback)\fR"
+/* Idem, for delivery via the named message \fItransport\fR.
+/* .IP "\fBdefault_destination_concurrency_positive_feedback (1)\fR"
+/* The per-destination amount of delivery concurrency positive
+/* feedback, after a delivery completes without connection or handshake
+/* failure.
+/* .IP "\fItransport\fB_destination_concurrency_positive_feedback ($default_destination_concurrency_positive_feedback)\fR"
+/* Idem, for delivery via the named message \fItransport\fR.
+/* .IP "\fBdestination_concurrency_feedback_debug (no)\fR"
+/* Make the queue manager's feedback algorithm verbose for performance
+/* analysis purposes.
/* RECIPIENT SCHEDULING CONTROLS
/* .ad
/* .fi
int var_proc_limit;
bool var_verp_bounce_off;
int var_qmgr_clog_warn_time;
+char *var_conc_pos_feedback;
+char *var_conc_neg_feedback;
+int var_conc_cohort_limit;
+bool var_conc_feedback_debug;
static QMGR_SCAN *qmgr_scans[2];
{
static CONFIG_STR_TABLE str_table[] = {
VAR_DEFER_XPORTS, DEF_DEFER_XPORTS, &var_defer_xports, 0, 0,
+ VAR_CONC_POS_FDBACK, DEF_CONC_POS_FDBACK, &var_conc_pos_feedback, 1, 0,
+ VAR_CONC_NEG_FDBACK, DEF_CONC_NEG_FDBACK, &var_conc_neg_feedback, 1, 0,
0,
};
static CONFIG_INT_TABLE int_table[] = {
VAR_LOCAL_RCPT_LIMIT, DEF_LOCAL_RCPT_LIMIT, &var_local_rcpt_lim, 0, 0,
VAR_LOCAL_CON_LIMIT, DEF_LOCAL_CON_LIMIT, &var_local_con_lim, 0, 0,
VAR_PROC_LIMIT, DEF_PROC_LIMIT, &var_proc_limit, 1, 0,
+ VAR_CONC_COHORT_LIM, DEF_CONC_COHORT_LIM, &var_conc_cohort_limit, 0, 0,
0,
};
static CONFIG_BOOL_TABLE bool_table[] = {
VAR_ALLOW_MIN_USER, DEF_ALLOW_MIN_USER, &var_allow_min_user,
VAR_VERP_BOUNCE_OFF, DEF_VERP_BOUNCE_OFF, &var_verp_bounce_off,
+ VAR_CONC_FDBACK_DEBUG, DEF_CONC_FDBACK_DEBUG, &var_conc_feedback_debug,
0,
};
typedef struct QMGR_QUEUE_LIST QMGR_QUEUE_LIST;
typedef struct QMGR_ENTRY_LIST QMGR_ENTRY_LIST;
typedef struct QMGR_SCAN QMGR_SCAN;
+typedef struct QMGR_FEEDBACK QMGR_FEEDBACK;
/*
* Hairy macros to update doubly-linked lists.
extern struct HTABLE *qmgr_transport_byname; /* transport by name */
extern QMGR_TRANSPORT_LIST qmgr_transport_list; /* transports, round robin */
+ /*
+ * Delivery agents provide feedback, as hints that Postfix should expend
+ * more or fewer resources on a specific destination domain. The main.cf
+ * file specifies how feedback affects delivery concurrency: add/subtract a
+ * constant, a ratio of constants, or a constant divided by the delivery
+ * concurrency; and it specifies how much feedback must accumulate between
+ * concurrency updates.
+ */
+struct QMGR_FEEDBACK {
+ int hysteresis; /* to pass, need to be this tall */
+ double base; /* pre-computed from main.cf */
+ int index; /* none, window, sqrt(window) */
+};
+
+#define QMGR_FEEDBACK_IDX_NONE 0 /* no window dependence */
+#define QMGR_FEEDBACK_IDX_WIN 1 /* 1/window dependence */
+#define QMGR_FEEDBACK_IDX_SQRT_WIN 2 /* 1/sqrt(window) dependence */
+
+#ifdef QMGR_FEEDBACK_IDX_SQRT_WIN
+#include <math.h>
+#endif
+
+extern void qmgr_feedback_init(QMGR_FEEDBACK *, const char *, const char *, const char *, const char *);
+
+#ifndef QMGR_FEEDBACK_IDX_SQRT_WIN
+#define QMGR_FEEDBACK_VAL(fb, win) \
+ ((fb).index == QMGR_FEEDBACK_IDX_NONE ? (fb).base : (fb).base / (win))
+#else
+#define QMGR_FEEDBACK_VAL(fb, win) \
+ ((fb).index == QMGR_FEEDBACK_IDX_NONE ? (fb).base : \
+ (fb).index == QMGR_FEEDBACK_IDX_WIN ? (fb).base / (win) : \
+ (fb).base / sqrt(win))
+#endif
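+
+ /*
+  * Example: with base 1.0 and index QMGR_FEEDBACK_IDX_WIN,
+  * QMGR_FEEDBACK_VAL(fb, 4) evaluates to 1.0 / 4 = 0.25, so four
+  * feedback events at window 4 make up one full hysteresis cycle.
+  */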
+
/*
* Each transport (local, smtp-out, bounce) can have one queue per next hop
* name. Queues are looked up by next hop name (when we have resolved a
QMGR_QUEUE_LIST queue_list; /* queues, round robin order */
QMGR_TRANSPORT_LIST peers; /* linkage */
DSN *dsn; /* why unavailable */
+ QMGR_FEEDBACK pos_feedback; /* positive feedback control */
+ QMGR_FEEDBACK neg_feedback; /* negative feedback control */
+ int fail_cohort_limit; /* flow shutdown control */
};
#define QMGR_TRANSPORT_STAT_DEAD (1<<1)
int todo_refcount; /* queue entries (todo list) */
int busy_refcount; /* queue entries (busy list) */
int window; /* slow open algorithm */
+ double success; /* accumulated positive feedback */
+ double failure; /* accumulated negative feedback */
+ double fail_cohorts; /* pseudo-cohort failure count */
QMGR_TRANSPORT *transport; /* transport linkage */
QMGR_ENTRY_LIST todo; /* todo queue entries */
QMGR_ENTRY_LIST busy; /* messages on the wire */
if (VSTRING_LEN(dsb->reason) == 0)
vstring_strcpy(dsb->reason, "unknown error");
vstring_prepend(dsb->reason, SUSPENDED, sizeof(SUSPENDED) - 1);
- qmgr_queue_throttle(queue, DSN_FROM_DSN_BUF(dsb));
- if (queue->window == 0)
- qmgr_defer_todo(queue, &dsb->dsn);
+ if (queue->window > 0) {
+ qmgr_queue_throttle(queue, DSN_FROM_DSN_BUF(dsb));
+ if (queue->window == 0)
+ qmgr_defer_todo(queue, &dsb->dsn);
+ }
}
}
--- /dev/null
+/*++
+/* NAME
+/* qmgr_feedback 3
+/* SUMMARY
+/* delivery agent feedback management
+/* SYNOPSIS
+/* #include "qmgr.h"
+/*
+/* void qmgr_feedback_init(fbck_ctl, name_prefix, name_tail,
+/* def_name, def_val)
+/* QMGR_FEEDBACK *fbck_ctl;
+/* const char *name_prefix;
+/* const char *name_tail;
+/* const char *def_name;
+/* const char *def_val;
+/*
+/* double QMGR_FEEDBACK_VAL(fbck_ctl, concurrency)
+/* QMGR_FEEDBACK *fbck_ctl;
+/* const int concurrency;
+/* DESCRIPTION
+/* Upon completion of a delivery request, a delivery agent
+/* provides a hint that the scheduler should dedicate fewer or
+/* more resources to a specific destination.
+/*
+/* qmgr_feedback_init() looks up transport-dependent positive
+/* or negative concurrency feedback control information from
+/* main.cf, and converts it to internal form.
+/*
+/* QMGR_FEEDBACK_VAL() computes a concurrency adjustment based
+/* on a preprocessed feedback control information and the
+/* current concurrency window. This is an "unsafe" macro that
+/* evaluates some arguments multiple times.
+/*
+/* Arguments:
+/* .IP fbck_ctl
+/* Pointer to QMGR_FEEDBACK structure where the result will
+/* be stored.
+/* .IP name_prefix
+/* Mail delivery transport name, used as the initial portion
+/* of a transport-dependent concurrency feedback parameter
+/* name.
+/* .IP name_tail
+/* The second, and fixed, portion of a transport-dependent
+/* concurrency feedback parameter.
+/* .IP def_name
+/* The name of a default feedback parameter.
+/* .IP def_val
+/* The value of the default feedback parameter.
+/* .IP concurrency
+/* Delivery concurrency for concurrency-dependent feedback calculation.
+/* DIAGNOSTICS
+/* Warning: configuration error or unreasonable input. The program
+/* uses a feedback value of 1 instead.
+/* Panic: consistency check failure.
+/* LICENSE
+/* .ad
+/* .fi
+/* The Secure Mailer license must be distributed with this software.
+/* AUTHOR(S)
+/* Wietse Venema
+/* IBM T.J. Watson Research
+/* P.O. Box 704
+/* Yorktown Heights, NY 10598, USA
+/*--*/
+
+/* System library. */
+
+#include <sys_defs.h>
+#include <stdlib.h>
+#include <limits.h> /* INT_MAX */
+#include <stdio.h> /* sscanf() */
+#include <string.h>
+
+/* Utility library. */
+
+#include <msg.h>
+#include <name_code.h>
+#include <stringops.h>
+#include <mymalloc.h>
+
+/* Global library. */
+
+#include <mail_params.h>
+#include <mail_conf.h>
+
+/* Application-specific. */
+
+#include "qmgr.h"
+
+ /*
+ * Lookup tables for main.cf feedback method names.
+ */
+NAME_CODE qmgr_feedback_map[] = {
+ CONC_FDBACK_NAME_WIN, QMGR_FEEDBACK_IDX_WIN,
+#ifdef QMGR_FEEDBACK_IDX_SQRT_WIN
+ CONC_FDBACK_NAME_SQRT_WIN, QMGR_FEEDBACK_IDX_SQRT_WIN,
+#endif
+ 0, QMGR_FEEDBACK_IDX_NONE,
+};
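+
+ /*
+  * Example: a main.cf value of "1/concurrency" parses below as
+  * numerator 1 with denominator name "concurrency"
+  * (QMGR_FEEDBACK_IDX_WIN), while "0.25" parses as a
+  * concurrency-independent constant (QMGR_FEEDBACK_IDX_NONE).
+  */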
+
+/* qmgr_feedback_init - initialize feedback control */
+
+void qmgr_feedback_init(QMGR_FEEDBACK *fb,
+ const char *name_prefix,
+ const char *name_tail,
+ const char *def_name,
+ const char *def_val)
+{
+ double enum_val;
+ char denom_str[30 + 1];
+ double denom_val;
+ char slash[2]; /* %1[/] stores "/" plus a null terminator */
+ char junk;
+ char *fbck_name;
+ char *fbck_val;
+
+ /*
+ * Look up the transport-dependent feedback value.
+ */
+ fbck_name = concatenate(name_prefix, name_tail, (char *) 0);
+ fbck_val = get_mail_conf_str(fbck_name, def_val, 1, 0);
+
+ /*
+ * We allow users to express feedback as 1/8, as a more user-friendly
+ * alternative to 0.125 (or worse, having users specify the number of
+ * events in a feedback hysteresis cycle).
+ *
+ * We use some sscanf() fu to parse the value into numerator and optional
+ * "/" followed by denominator. We're doing this only a few times during
+ * the process life time, so we strive for convenience instead of speed.
+ */
+#define INCLUSIVE_BOUNDS(val, low, high) ((val) >= (low) && (val) <= (high))
+
+ fb->hysteresis = 1; /* legacy */
+ fb->base = -1; /* assume error */
+
+ switch (sscanf(fbck_val, "%lf %1[/] %30s%c",
+ &enum_val, slash, denom_str, &junk)) {
+ case 1:
+ fb->index = QMGR_FEEDBACK_IDX_NONE;
+ fb->base = enum_val;
+ break;
+ case 3:
+ if ((fb->index = name_code(qmgr_feedback_map, NAME_CODE_FLAG_NONE,
+ denom_str)) != QMGR_FEEDBACK_IDX_NONE) {
+ fb->base = enum_val;
+ } else if (INCLUSIVE_BOUNDS(enum_val, 0, INT_MAX)
+ && sscanf(denom_str, "%lf%c", &denom_val, &junk) == 1
+ && INCLUSIVE_BOUNDS(denom_val, 1.0 / INT_MAX, INT_MAX)) {
+ fb->base = enum_val / denom_val;
+ }
+ break;
+ }
+
+ /*
+ * Sanity check. If input is bad, we just warn and use a reasonable
+ * default.
+ */
+ if (!INCLUSIVE_BOUNDS(fb->base, 0, 1)) {
+ msg_warn("%s: ignoring malformed or unreasonable feedback: %s",
+ strcmp(fbck_val, def_val) ? fbck_name : def_name, fbck_val);
+ fb->index = QMGR_FEEDBACK_IDX_NONE;
+ fb->base = 1;
+ }
+
+ /*
+ * Performance debugging/analysis.
+ */
+ if (var_conc_feedback_debug)
+ msg_info("%s: %s feedback type %d value at %d: %g",
+ name_prefix, strcmp(fbck_val, def_val) ?
+ fbck_name : def_name, fb->index, var_init_dest_concurrency,
+ QMGR_FEEDBACK_VAL(*fb, var_init_dest_concurrency));
+
+ myfree(fbck_name);
+ myfree(fbck_val);
+}
/* non-empty `todo' list.
/*
/* qmgr_queue_throttle() handles a delivery error, and decrements the
-/* concurrency limit for the destination. When the concurrency limit
-/* for a destination becomes zero, qmgr_queue_throttle() starts a timer
+/* concurrency limit for the destination, with a lower bound of 1.
+/* When the cohort failure bound is reached, qmgr_queue_throttle()
+/* sets the concurrency limit to zero and starts a timer
/* to re-enable delivery to the destination after a configurable delay.
/*
/* qmgr_queue_unthrottle() undoes qmgr_queue_throttle()'s effects.
#include <mail_params.h>
#include <recipient_list.h>
+#include <mail_proto.h> /* QMGR_LOG_WINDOW */
/* Application-specific. */
int qmgr_queue_count;
+#define QMGR_ERROR_OR_RETRY_QUEUE(queue) \
+ (strcmp(queue->transport->name, MAIL_SERVICE_RETRY) == 0 \
+ || strcmp(queue->transport->name, MAIL_SERVICE_ERROR) == 0)
+
+#define QMGR_LOG_FEEDBACK(feedback) \
+ if (var_conc_feedback_debug && !QMGR_ERROR_OR_RETRY_QUEUE(queue)) \
+ msg_info("%s: feedback %g", myname, feedback);
+
+#define QMGR_LOG_WINDOW(queue) \
+ if (var_conc_feedback_debug && !QMGR_ERROR_OR_RETRY_QUEUE(queue)) \
+ msg_info("%s: queue %s: limit %d window %d success %g failure %g fail_cohorts %g", \
+ myname, queue->name, queue->transport->dest_concurrency_limit, \
+ queue->window, queue->success, queue->failure, queue->fail_cohorts);
+
/* qmgr_queue_unthrottle_wrapper - in case (char *) != (struct *) */
static void qmgr_queue_unthrottle_wrapper(int unused_event, char *context)
{
const char *myname = "qmgr_queue_unthrottle";
QMGR_TRANSPORT *transport = queue->transport;
+ double feedback;
if (msg_verbose)
msg_info("%s: queue %s", myname, queue->name);
+ /*
+ * Don't restart the negative feedback hysteresis cycle with every
+ * positive feedback. Restart it only when we make a positive concurrency
+ * adjustment (i.e. at the end of a positive feedback hysteresis cycle).
+ * Otherwise negative feedback would be too aggressive: negative feedback
+ * takes effect immediately at the start of its hysteresis cycle.
+ */
+ queue->fail_cohorts = 0;
+
/*
* Special case when this site was dead.
*/
msg_panic("%s: queue %s: window 0 status 0", myname, queue->name);
dsn_free(queue->dsn);
queue->dsn = 0;
- queue->window = transport->init_dest_concurrency;
+ /* Back from the almost grave, best concurrency is anyone's guess. */
+ if (queue->busy_refcount > 0)
+ queue->window = queue->busy_refcount;
+ else
+ queue->window = transport->init_dest_concurrency;
+ queue->success = queue->failure = 0;
+ QMGR_LOG_WINDOW(queue);
return;
}
* Increase the destination's concurrency limit until we reach the
* transport's concurrency limit. Allow for a margin the size of the
* initial destination concurrency, so that we're not too gentle.
+ *
+ * Why is the concurrency increment based on preferred concurrency and not
+ * on the number of outstanding delivery requests? The latter fluctuates
+ * wildly when deliveries complete in bursts (artificial benchmark
+ * measurements), and does not account for cached connections.
+ *
+ * Keep the window within reasonable distance from actual concurrency
+ * otherwise negative feedback will be ineffective. This expression
+ * assumes that busy_refcount changes gradually. This is invalid when
+ * deliveries complete in bursts (artificial benchmark measurements).
*/
if (transport->dest_concurrency_limit == 0
|| transport->dest_concurrency_limit > queue->window)
- if (queue->window <= queue->busy_refcount + transport->init_dest_concurrency)
- queue->window++;
+ if (queue->window < queue->busy_refcount + transport->init_dest_concurrency) {
+ feedback = QMGR_FEEDBACK_VAL(transport->pos_feedback, queue->window);
+ QMGR_LOG_FEEDBACK(feedback);
+ queue->success += feedback;
+ /* Prepare for overshoot (feedback > hysteresis, rounding error). */
+ while (queue->success + feedback / 2 >= transport->pos_feedback.hysteresis) {
+ queue->window += transport->pos_feedback.hysteresis;
+ queue->success -= transport->pos_feedback.hysteresis;
+ queue->failure = 0;
+ }
+ /* Prepare for overshoot. */
+ if (transport->dest_concurrency_limit > 0
+ && queue->window > transport->dest_concurrency_limit)
+ queue->window = transport->dest_concurrency_limit;
+ }
+ QMGR_LOG_WINDOW(queue);
}
/* qmgr_queue_throttle - handle destination delivery failure */
void qmgr_queue_throttle(QMGR_QUEUE *queue, DSN *dsn)
{
const char *myname = "qmgr_queue_throttle";
+ QMGR_TRANSPORT *transport = queue->transport;
+ double feedback;
/*
* Sanity checks.
myname, queue->name, dsn->status, dsn->reason);
/*
- * Decrease the destination's concurrency limit until we reach zero, at
- * which point the destination is declared dead. Decrease the concurrency
- * limit by one, instead of using actual concurrency - 1, to avoid
- * declaring a host dead after just one single delivery failure.
+ * Don't restart the positive feedback hysteresis cycle with every
+ * negative feedback. Restart it only when we make a negative concurrency
+ * adjustment (i.e. at the start of a negative feedback hysteresis
+ * cycle). Otherwise positive feedback would be too weak (positive
+ * feedback does not take effect until the end of its hysteresis cycle).
*/
- if (queue->window > 0)
- queue->window--;
+
+ /*
+ * This queue is declared dead after a configurable number of
+ * pseudo-cohort failures.
+ */
+ if (queue->window > 0) {
+ queue->fail_cohorts += 1.0 / queue->window;
+ if (transport->fail_cohort_limit > 0
+ && queue->fail_cohorts >= transport->fail_cohort_limit)
+ queue->window = 0;
+ }
+
+ /*
+ * Decrease the destination's concurrency limit until we reach 1. Base
+ * adjustments on the concurrency limit itself, instead of using the
+ * actual concurrency. The latter fluctuates wildly when deliveries
+ * complete in bursts (artificial benchmark measurements).
+ *
+ * Even after reaching 1, we maintain the negative hysteresis cycle so that
+ * negative feedback can cancel out positive feedback.
+ */
+ if (queue->window > 0) {
+ feedback = QMGR_FEEDBACK_VAL(transport->neg_feedback, queue->window);
+ QMGR_LOG_FEEDBACK(feedback);
+ queue->failure -= feedback;
+ /* Prepare for overshoot (feedback > hysteresis, rounding error). */
+ while (queue->failure - feedback / 2 < 0) {
+ queue->window -= transport->neg_feedback.hysteresis;
+ queue->success = 0;
+ queue->failure += transport->neg_feedback.hysteresis;
+ }
+ /* Prepare for overshoot. */
+ if (queue->window < 1)
+ queue->window = 1;
+ }
/*
* Special case for a site that just was declared dead.
(char *) queue, var_min_backoff_time);
queue->dflags = 0;
}
+ QMGR_LOG_WINDOW(queue);
}
/* qmgr_queue_select - select in-core queue for delivery */
queue->busy_refcount = 0;
queue->transport = transport;
queue->window = transport->init_dest_concurrency;
+ queue->success = queue->failure = queue->fail_cohorts = 0;
QMGR_LIST_INIT(queue->todo);
QMGR_LIST_INIT(queue->busy);
queue->dsn = 0;
* Use global configuration settings or transport-specific settings.
*/
transport->dest_concurrency_limit =
- get_mail_conf_int2(name, "_destination_concurrency_limit",
+ get_mail_conf_int2(name, _DEST_CON_LIMIT,
var_dest_con_limit, 0, 0);
transport->recipient_limit =
- get_mail_conf_int2(name, "_destination_recipient_limit",
+ get_mail_conf_int2(name, _DEST_RCPT_LIMIT,
var_dest_rcpt_limit, 0, 0);
+ transport->init_dest_concurrency =
+ get_mail_conf_int2(name, _INIT_DEST_CON,
+ var_init_dest_concurrency, 1, 0);
- if (transport->dest_concurrency_limit == 0
- || transport->dest_concurrency_limit >= var_init_dest_concurrency)
- transport->init_dest_concurrency = var_init_dest_concurrency;
- else
+ if (transport->dest_concurrency_limit != 0
+ && transport->dest_concurrency_limit < transport->init_dest_concurrency)
transport->init_dest_concurrency = transport->dest_concurrency_limit;
transport->queue_byname = htable_create(0);
QMGR_LIST_INIT(transport->queue_list);
transport->dsn = 0;
+ qmgr_feedback_init(&transport->pos_feedback, name, _CONC_POS_FDBACK,
+ VAR_CONC_POS_FDBACK, var_conc_pos_feedback);
+ qmgr_feedback_init(&transport->neg_feedback, name, _CONC_NEG_FDBACK,
+ VAR_CONC_NEG_FDBACK, var_conc_neg_feedback);
+ transport->fail_cohort_limit =
+ get_mail_conf_int2(name, _CONC_COHORT_LIM,
+ var_conc_cohort_limit, 0, 0);
if (qmgr_transport_byname == 0)
qmgr_transport_byname = htable_create(10);
htable_enter(qmgr_transport_byname, name, (char *) transport);
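
qmgr_feedback_init() reads the per-transport feedback settings, falling
back to the global defaults, and the change log notes that values may be
constants or expressions such as 1/8 or 1/concurrency. A hypothetical
main.cf fragment (values purely illustrative; the documented defaults
are 1):

    # Illustrative values only.
    smtp_destination_concurrency_positive_feedback = 1/8
    smtp_destination_concurrency_negative_feedback = 1/concurrency
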
TESTPROG=
MAKES = bool_table.h bool_vars.h int_table.h int_vars.h str_table.h \
str_vars.h time_table.h time_vars.h raw_table.h raw_vars.h
+AUTOS = auto_table.h auto_vars.h
PROG = postconf
SAMPLES = ../../conf/main.cf.default
INC_DIR = ../../include
$(MAKES): $(INC_DIR)/mail_params.h ../global/mail_params.c
$(AWK) -f extract.awk ../*/*.c
+$(AUTOS): auto.awk
+ $(AWK) -f auto.awk
+
printfck: $(OBJS) $(PROG)
rm -rf printfck
mkdir printfck
lint $(DEFS) $(SRCS) $(LINTFIX)
clean:
- rm -f *.o *core $(PROG) $(TESTPROG) junk $(MAKES)
+ rm -f *.o *core $(PROG) $(TESTPROG) junk $(MAKES) $(AUTOS)
rm -rf printfck
tidy: clean
--- /dev/null
+BEGIN {
+
+ split("local lmtp relay smtp virtual", transports)
+
+ vars["destination_concurrency_failed_cohort_limit"] = "default_destination_concurrency_failed_cohort_limit"
+ vars["destination_concurrency_limit"] = "default_destination_concurrency_limit"
+ vars["destination_concurrency_negative_feedback"] = "default_destination_concurrency_negative_feedback"
+ vars["destination_concurrency_positive_feedback"] = "default_destination_concurrency_positive_feedback"
+ vars["destination_recipient_limit"] = "default_destination_recipient_limit"
+ vars["initial_destination_concurrency"] = "initial_destination_concurrency"
+
+ # auto_table.h
+
+ for (var in vars) {
+ for (transport in transports) {
+ if (transports[transport] != "local" || (var != "destination_recipient_limit" && var != "destination_concurrency_limit"))
+ print "\"" transports[transport] "_" var "\", \"$" vars[var] "\", &var_" transports[transport] "_" var ", 0, 0," > "auto_table.h"
+ }
+ print "" > "auto_table.h"
+ }
+
+ # auto_vars.h
+
+ for (var in vars) {
+ for (transport in transports) {
+ if (transports[transport] != "local" || (var != "destination_recipient_limit" && var != "destination_concurrency_limit"))
+ print "char *var_" transports[transport] "_" var ";" > "auto_vars.h"
+ }
+ print "" > "auto_vars.h"
+ }
+}
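
Run as in the Makefile rule above, the script emits one table entry and one
declaration per transport/parameter pair. For example, the smtp negative
feedback parameter comes out in auto_table.h as

    "smtp_destination_concurrency_negative_feedback", "$default_destination_concurrency_negative_feedback", &var_smtp_destination_concurrency_negative_feedback, 0, 0,

and in auto_vars.h as

    char *var_smtp_destination_concurrency_negative_feedback;
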
+++ /dev/null
- "lmtp_destination_concurrency_limit", "$default_destination_concurrency_limit", &var_lmtp_destination_concurrency_limit, 0, 0,
- "relay_destination_concurrency_limit", "$default_destination_concurrency_limit", &var_relay_destination_concurrency_limit, 0, 0,
- "smtp_destination_concurrency_limit", "$default_destination_concurrency_limit", &var_smtp_destination_concurrency_limit, 0, 0,
- "virtual_destination_concurrency_limit", "$default_destination_concurrency_limit", &var_virtual_destination_concurrency_limit, 0, 0,
- "lmtp_destination_recipient_limit", "$default_destination_recipient_limit", &var_lmtp_destination_recipient_limit, 0, 0,
- "relay_destination_recipient_limit", "$default_destination_recipient_limit", &var_relay_destination_recipient_limit, 0, 0,
- "smtp_destination_recipient_limit", "$default_destination_recipient_limit", &var_smtp_destination_recipient_limit, 0, 0,
- "virtual_destination_recipient_limit", "$default_destination_recipient_limit", &var_virtual_destination_recipient_limit, 0, 0,
+++ /dev/null
-char *var_lmtp_destination_concurrency_limit;
-char *var_relay_destination_concurrency_limit;
-char *var_smtp_destination_concurrency_limit;
-char *var_virtual_destination_concurrency_limit;
-char *var_lmtp_destination_recipient_limit;
-char *var_smtp_destination_recipient_limit;
-char *var_relay_destination_recipient_limit;
-char *var_virtual_destination_recipient_limit;
/* .IP "\fItransport\fB_initial_destination_concurrency ($initial_destination_concurrency)\fR"
/* Initial concurrency for delivery via the named message
/* \fItransport\fR.
-/* .IP "\fBdefault_concurrency_failed_cohort_limit (1)\fR"
+/* .IP "\fBdefault_destination_concurrency_failed_cohort_limit (1)\fR"
/* How many pseudo-cohorts must suffer connection or handshake
/* failure before a specific destination is considered unavailable
/* (and further delivery is suspended).
-/* .IP "\fItransport\fB_concurrency_failed_cohort_limit ($default_concurrency_failed_cohort_limit)\fR"
+/* .IP "\fItransport\fB_destination_concurrency_failed_cohort_limit ($default_destination_concurrency_failed_cohort_limit)\fR"
/* Idem, for delivery via the named message \fItransport\fR.
-/* .IP "\fBdefault_concurrency_negative_feedback (1)\fR"
+/* .IP "\fBdefault_destination_concurrency_negative_feedback (1)\fR"
/* The per-destination amount of negative delivery concurrency
/* feedback, after a delivery completes with a connection or handshake
/* failure.
-/* .IP "\fItransport\fB_concurrency_negative_feedback ($default_concurrency_negative_feedback)\fR"
+/* .IP "\fItransport\fB_destination_concurrency_negative_feedback ($default_destination_concurrency_negative_feedback)\fR"
/* Idem, for delivery via the named message \fItransport\fR.
-/* .IP "\fBdefault_concurrency_positive_feedback (1)\fR"
+/* .IP "\fBdefault_destination_concurrency_positive_feedback (1)\fR"
/* The per-destination amount of positive delivery concurrency
/* feedback, after a delivery completes without connection or handshake
/* failure.
-/* .IP "\fItransport\fB_concurrency_positive_feedback ($default_concurrency_positive_feedback)\fR"
+/* .IP "\fItransport\fB_destination_concurrency_positive_feedback ($default_destination_concurrency_positive_feedback)\fR"
/* Idem, for delivery via the named message \fItransport\fR.
-/* .IP "\fBconcurrency_feedback_debug (no)\fR"
+/* .IP "\fBdestination_concurrency_feedback_debug (no)\fR"
/* Make the queue manager's feedback algorithm verbose for performance
/* analysis purposes.
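
Taken together, these controls can be tuned per transport from main.cf; a
hypothetical example (values for illustration only):

    # Declare a destination dead only after two failed pseudo-cohorts,
    # for the relay transport, and log the feedback arithmetic.
    relay_destination_concurrency_failed_cohort_limit = 2
    destination_concurrency_feedback_debug = yes
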
/* RECIPIENT SCHEDULING CONTROLS
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
+/*
+/* Concurrency scheduler enhancements with:
+/* Victor Duchovni
+/* Morgan Stanley
/*--*/
/* System library. */
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*
-/* Scheduler enhancements:
+/* Preemptive scheduler enhancements:
/* Patrik Rak
/* Modra 6
/* 155 00, Prague, Czech Republic
/* probes.
/* .PP
/* Available in Postfix version 2.3 and later:
-/* .IP "\fBaddress_verify_sender_dependent_relayhost_maps (empty)\fR"
+/* .IP "\fBaddress_verify_sender_dependent_relayhost_maps ($sender_dependent_relayhost_maps)\fR"
/* Overrides the sender_dependent_relayhost_maps parameter setting for address
/* verification probes.
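
With the documented default, address verification probes follow the
regular sender-dependent relayhost setting unless explicitly overridden;
for example (hypothetical table name):

    sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
    # Probes inherit the above unless this is set:
    # address_verify_sender_dependent_relayhost_maps = ...
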
/* MISCELLANEOUS CONTROLS