Cleanup: the queue manager and SMTP client now distinguish
between connection cache store and retrieve hints. Once the
- queue manager enables enables connection caching (store and
- load) hints on a per-destination queue, it keeps sending
- connection cache retrieve hints to the delivery agent even
- after it stops sending connection cache store hints. This
- prevents the SMTP client from making a new connection without
- checking the connection cache first. Victor Duchovni. Files:
+ queue manager enables connection caching (store and load)
+ hints on a per-destination queue, it keeps sending connection
+ cache retrieve hints to the delivery agent even after it
+ stops sending connection cache store hints. This prevents
+ the SMTP client from making a new connection without checking
+ the connection cache first. Victor Duchovni. Files:
*qmgr/qmgr_entry.c, smtp/smtp_connect.c.
Bugfix (introduced Postfix 2.3): the SMTP client never
without connect or handshake error. Victor Duchovni. Files:
smtp/smtp_connect.c, smtp/smtp_session.c, smtp/smtp_proto.c,
smtp/smtp_trouble.c.
+
+20071215
+
+ Documentation and code cleanup. Files: global/deliver_request.h,
+ *qmgr/qmgr_entry.c, smtp/smtp_connect.c,
+ proto/SCHEDULER_README.html.
+
+ Bugfix: qmqpd ignored the qmqpd_client_port_logging parameter
+ setting. File: qmqpd/qmqpd.c.
and removes mail from the queue after the last delivery attempt. There are two
major classes of mechanisms that control the operation of the queue manager.
- * Concurrency scheduling is concerned with the number of concurrent
- deliveries to a specific destination, including decisions on when to
- suspend deliveries after persistent failures.
- * Preemptive scheduling is concerned with the selection of email messages and
+Topics covered by this document:
+
+ * Concurrency scheduling, concerned with the number of concurrent deliveries
+ to a specific destination, including decisions on when to suspend
+ deliveries after persistent failures.
+ * Preemptive scheduling, concerned with the selection of email messages and
recipients for a given destination.
- * Credits. This document would not be complete without.
+ * Credits, something this document would not be complete without.
Concurrency scheduling
Drawbacks of the existing concurrency scheduler
From the start, Postfix has used a simple but robust algorithm where the per-
-destination delivery concurrency is decremented by 1 after a delivery suffered
-connection or handshake failure, and incremented by 1 otherwise. Of course the
-concurrency is never allowed to exceed the maximum per-destination concurrency
-limit. And when a destination's concurrency level drops to zero, the
-destination is declared "dead" and delivery is suspended.
+destination delivery concurrency is decremented by 1 after delivery failed due
+to connection or handshake failure, and incremented by 1 otherwise. Of course
+the concurrency is never allowed to exceed the maximum per-destination
+concurrency limit. And when a destination's concurrency level drops to zero,
+the destination is declared "dead" and delivery is suspended.
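The +/-1 rule just described can be sketched in C as follows (a hypothetical
illustration with made-up names such as `dest_feedback`; the real logic lives
in *qmgr/qmgr_entry.c and is more involved):

```c
#include <assert.h>

#define MAX_CONCURRENCY 20		/* maximum per-destination limit */

/* Per-destination delivery window. Zero means the destination is "dead". */
struct dest {
    int concurrency;
};

/* Apply +/-1 feedback after one delivery attempt. Returns non-zero
 * while the destination is still alive. */
static int dest_feedback(struct dest *d, int connect_or_handshake_error)
{
    if (connect_or_handshake_error)
	d->concurrency -= 1;		/* negative feedback */
    else if (d->concurrency < MAX_CONCURRENCY)
	d->concurrency += 1;		/* positive feedback, capped */
    return (d->concurrency > 0);	/* zero: destination declared dead */
}
```

Note how a run of consecutive failures drives the window to zero quickly: two
back-to-back handshake errors kill a destination that starts at concurrency 2.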
Drawbacks of +/-1 concurrency feedback per delivery are:
number is also used as the backlog argument to the listen(2) system call,
and "postfix reload" does not re-issue this call.
* Mail was discarded with "local_recipient_maps = static:all" and
- "local_transport = discard". The discard action in header/body checks could
- not be used as it fails to update the in_flow_delay counters.
+ "local_transport = discard". The discard action in access maps or header/
+ body checks could not be used as it fails to update the in_flow_delay
+ counters.
Client configuration:
destination concurrency of 5, and a server process limit of 10; all other
conditions were the same as with the first measurement. The same result would
be obtained with a FreeBSD or Linux server, because the "pushing back" is done
-entirely by the receiving Postfix.
+entirely by the receiving side.
client server feedback connection percentage client theoretical
limit limit style caching deferred concurrency defer rate
Discussion of concurrency limited server results
All results in the previous sections are based on the first delivery runs only;
-they do not include any second etc. delivery attempts. The first two examples
-show that the effect of feedback is negligible when concurrency is limited due
-to congestion. This is because the initial concurrency is already at the
-client's concurrency maximum, and because there is 10-100 times more positive
-than negative feedback. Under these conditions, it is no surprise that the
-contribution from SMTP connection caching is also negligible.
+they do not include any second etc. delivery attempts. It's also worth noting
+that the measurements look at steady-state behavior only. They don't show what
+happens when the client starts sending at a much higher or lower concurrency.
+
+The first two examples show that the effect of feedback is negligible when
+concurrency is limited due to congestion. This is because the initial
+concurrency is already at the client's concurrency maximum, and because there
+is 10-100 times more positive than negative feedback. Under these conditions,
+it is no surprise that the contribution from SMTP connection caching is also
+negligible.
In the last example, the old +/-1 feedback per delivery will defer 50% of the
mail when confronted with an active (anvil-style) server concurrency limit,
Limitations of less-than-1 per delivery feedback
+Less-than-1 feedback is of interest primarily when sending large amounts of
+mail to destinations with active concurrency limiters (servers that reply with
+421, or firewalls that send RST). When sending small amounts of mail per
+destination, less-than-1 per-delivery feedback won't have a noticeable effect
+on the per-destination concurrency, because the number of deliveries to the
+same destination is too small. You might just as well use zero per-delivery
+feedback and stay with the initial per-destination concurrency. And when mail
+deliveries fail due to congestion instead of active concurrency limiters, the
+measurements above show that per-delivery feedback has no effect. With large
+amounts of mail you might just as well use zero per-delivery feedback and start
+with the maximal per-destination concurrency.
+
The scheduler with less-than-1 concurrency feedback per delivery solves a
problem with servers that have active concurrency limiters. This works only
because feedback is handled in a peculiar manner: positive feedback will
1/4 per "good" delivery (no connect or handshake error), and use an equal or
smaller amount of negative feedback per "bad" delivery. The downside of using
concurrency-independent feedback is that some of the old +/-1 feedback problems
-will return at large concurrencies. Sites that deliver at non-trivial per-
-destination concurrencies will require special configuration.
+will return at large concurrencies. Sites that must deliver mail at non-trivial
+per-destination concurrencies will require special configuration.
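The fractional, concurrency-independent feedback with hysteresis described
above can be sketched like this (again a hedged illustration with invented
names, not the actual qmgr code): the concurrency changes only after a whole
unit of feedback has accumulated.

```c
#include <assert.h>

#define GOOD_FEEDBACK  0.25	/* e.g. 1/4 per "good" delivery */
#define BAD_FEEDBACK  -0.25	/* equal amount per "bad" delivery */

/* Per-destination state: current window plus accumulated feedback. */
struct dest {
    int concurrency;		/* current delivery window */
    double pending;		/* accumulated fractional feedback */
};

/* Accumulate fractional feedback; adjust the window only when a whole
 * unit has built up (forward and reverse feedback hysteresis). */
static void dest_feedback(struct dest *d, double amount)
{
    d->pending += amount;
    while (d->pending >= 1.0) {		/* forward hysteresis */
	d->concurrency += 1;
	d->pending -= 1.0;
    }
    while (d->pending <= -1.0) {	/* reverse hysteresis */
	d->concurrency -= 1;
	d->pending += 1.0;
    }
}
```

With 1/4 feedback per delivery, it takes four consecutive "good" deliveries to
grow the window by one, and four "bad" ones to shrink it by one, which is why
the effect is invisible when only a handful of messages go to a destination.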
Concurrency configuration parameters
Preemptive scheduling
-This document attempts to describe the new queue manager and its preemptive
+The following sections describe the new queue manager and its preemptive
scheduler algorithm. Note that the document was originally written to describe
the changes between the new queue manager (in this text referred to as nqmgr,
the name it was known by before it became the default queue manager) and the
* Wietse Venema designed and implemented the initial queue manager with per-
domain FIFO scheduling, and per-delivery +/-1 concurrency feedback.
* Patrik Rak designed and implemented preemption where mail with fewer
- recipients can slip past mail with more recipients.
+ recipients can slip past mail with more recipients in a controlled manner,
+ and wrote up its documentation.
* Wietse Venema initiated a discussion with Patrik Rak and Victor Duchovni on
alternatives for the +/-1 feedback scheduler's aggressive behavior. This is
when K/N feedback was reviewed (N = concurrency). The discussion ended
without a good solution for both negative feedback and dead site detection.
* Victor Duchovni resumed work on concurrency feedback in the context of
concurrency-limited servers.
- * Wietse Venema then re-designed the concurrency scheduler in terms of
+ * Wietse Venema then re-designed the concurrency scheduler in terms of the
simplest possible concepts: less-than-1 concurrency feedback per delivery,
forward and reverse concurrency feedback hysteresis, and pseudo-cohort
failure. At this same time, concurrency feedback was separated from dead
the last delivery attempt. There are two major classes of mechanisms
that control the operation of the queue manager. </p>
+<p> Topics covered by this document: </p>
+
<ul>
-<li> <a href="#concurrency"> Concurrency scheduling </a> is concerned
+<li> <a href="#concurrency"> Concurrency scheduling</a>, concerned
with the number of concurrent deliveries to a specific destination,
including decisions on when to suspend deliveries after persistent
failures.
-<li> <a href="#jobs"> Preemptive scheduling </a> is concerned with
+<li> <a href="#jobs"> Preemptive scheduling</a>, concerned with
the selection of email messages and recipients for a given destination.
-<li> <a href="#credits"> Credits </a>. This document would not be
+<li> <a href="#credits"> Credits</a>, something this document would not be
complete without.
</ul>
<p> From the start, Postfix has used a simple but robust algorithm
where the per-destination delivery concurrency is decremented by 1
-after a delivery suffered connection or handshake failure, and
+after delivery failed due to connection or handshake failure, and
incremented by 1 otherwise. Of course the concurrency is never
allowed to exceed the maximum per-destination concurrency limit.
And when a destination's concurrency level drops to zero, the
not re-issue this call.
<li> Mail was discarded with "<a href="postconf.5.html#local_recipient_maps">local_recipient_maps</a> = static:all" and
-"<a href="postconf.5.html#local_transport">local_transport</a> = discard". The discard action in header/body checks
+"<a href="postconf.5.html#local_transport">local_transport</a> = discard". The discard action in access maps or
+header/body checks
could not be used as it fails to update the <a href="postconf.5.html#in_flow_delay">in_flow_delay</a> counters.
</ul>
of 5, and a server process limit of 10; all other conditions were
the same as with the first measurement. The same result would be
obtained with a FreeBSD or Linux server, because the "pushing back"
-is done entirely by the receiving Postfix. </p>
+is done entirely by the receiving side. </p>
<blockquote>
<p> All results in the previous sections are based on the first
delivery runs only; they do not include any second etc. delivery
-attempts. The first two examples show that the effect of feedback
+attempts. It's also worth noting that the measurements look at
+steady-state behavior only. They don't show what happens when the
+client starts sending at a much higher or lower concurrency.
+</p>
+
+<p> The first two examples show that the effect of feedback
is negligible when concurrency is limited due to congestion. This
is because the initial concurrency is already at the client's
concurrency maximum, and because there is 10-100 times more positive
<h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>
+<p> Less-than-1 feedback is of interest primarily when sending large
+amounts of mail to destinations with active concurrency limiters
+(servers that reply with 421, or firewalls that send RST). When
+sending small amounts of mail per destination, less-than-1 per-delivery
+feedback won't have a noticeable effect on the per-destination
+concurrency, because the number of deliveries to the same destination
+is too small. You might just as well use zero per-delivery feedback
+and stay with the initial per-destination concurrency. And when
+mail deliveries fail due to congestion instead of active concurrency
+limiters, the measurements above show that per-delivery feedback
+has no effect. With large amounts of mail you might just as well
+use zero per-delivery feedback and start with the maximal per-destination
+concurrency. </p>
+
<p> The scheduler with less-than-1 concurrency
feedback per delivery solves a problem with servers that have active
concurrency limiters. This works only because feedback is handled
amount of negative feedback per "bad" delivery. The downside of
using concurrency-independent feedback is that some of the old +/-1
feedback problems will return at large concurrencies. Sites that
-deliver at non-trivial per-destination concurrencies will require
-special configuration. </p>
+must deliver mail at non-trivial per-destination concurrencies will
+require special configuration. </p>
<h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
<p>
-This document attempts to describe the new queue manager and its
+The following sections describe the new queue manager and its
preemptive scheduler algorithm. Note that the document was originally
written to describe the changes between the new queue manager (in
this text referred to as <tt>nqmgr</tt>, the name it was known by
feedback.
<li> Patrik Rak designed and implemented preemption where mail with
-fewer recipients can slip past mail with more recipients.
+fewer recipients can slip past mail with more recipients in a
+controlled manner, and wrote up its documentation.
<li> Wietse Venema initiated a discussion with Patrik Rak and Victor
Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
context of concurrency-limited servers.
<li> Wietse Venema then re-designed the concurrency scheduler in
-terms of simplest possible concepts: less-than-1 concurrency feedback
-per delivery, forward and reverse concurrency feedback hysteresis,
-and pseudo-cohort failure. At this same time, concurrency feedback
-was separated from dead site detection.
+terms of the simplest possible concepts: less-than-1 concurrency
+feedback per delivery, forward and reverse concurrency feedback
+hysteresis, and pseudo-cohort failure. At this same time, concurrency
+feedback was separated from dead site detection.
<li> These simplifications, and their modular implementation, helped
to develop further insights into the different roles that positive
<p>
The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via <a href="smtp.8.html">smtp(8)</a>,
-and via the <a href="pipe.8.html">pipe(8)</a> and <a href="virtual.8.html">virtual(8)</a> delivery agents.
+to the same destination.
With per-destination recipient limit > 1, a destination is a domain,
otherwise it is a recipient.
</p>
.ft R
.SH initial_destination_concurrency (default: 5)
The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via \fBsmtp\fR(8),
-and via the \fBpipe\fR(8) and \fBvirtual\fR(8) delivery agents.
+to the same destination.
With per-destination recipient limit > 1, a destination is a domain,
otherwise it is a recipient.
.PP
the last delivery attempt. There are two major classes of mechanisms
that control the operation of the queue manager. </p>
+<p> Topics covered by this document: </p>
+
<ul>
-<li> <a href="#concurrency"> Concurrency scheduling </a> is concerned
+<li> <a href="#concurrency"> Concurrency scheduling</a>, concerned
with the number of concurrent deliveries to a specific destination,
including decisions on when to suspend deliveries after persistent
failures.
-<li> <a href="#jobs"> Preemptive scheduling </a> is concerned with
+<li> <a href="#jobs"> Preemptive scheduling</a>, concerned with
the selection of email messages and recipients for a given destination.
-<li> <a href="#credits"> Credits </a>. This document would not be
+<li> <a href="#credits"> Credits</a>, something this document would not be
complete without.
</ul>
<p> From the start, Postfix has used a simple but robust algorithm
where the per-destination delivery concurrency is decremented by 1
-after a delivery suffered connection or handshake failure, and
+after delivery failed due to connection or handshake failure, and
incremented by 1 otherwise. Of course the concurrency is never
allowed to exceed the maximum per-destination concurrency limit.
And when a destination's concurrency level drops to zero, the
not re-issue this call.
<li> Mail was discarded with "local_recipient_maps = static:all" and
-"local_transport = discard". The discard action in header/body checks
+"local_transport = discard". The discard action in access maps or
+header/body checks
could not be used as it fails to update the in_flow_delay counters.
</ul>
of 5, and a server process limit of 10; all other conditions were
the same as with the first measurement. The same result would be
obtained with a FreeBSD or Linux server, because the "pushing back"
-is done entirely by the receiving Postfix. </p>
+is done entirely by the receiving side. </p>
<blockquote>
<p> All results in the previous sections are based on the first
delivery runs only; they do not include any second etc. delivery
-attempts. The first two examples show that the effect of feedback
+attempts. It's also worth noting that the measurements look at
+steady-state behavior only. They don't show what happens when the
+client starts sending at a much higher or lower concurrency.
+</p>
+
+<p> The first two examples show that the effect of feedback
is negligible when concurrency is limited due to congestion. This
is because the initial concurrency is already at the client's
concurrency maximum, and because there is 10-100 times more positive
<h3> <a name="concurrency_limitations"> Limitations of less-than-1 per delivery feedback </a> </h3>
+<p> Less-than-1 feedback is of interest primarily when sending large
+amounts of mail to destinations with active concurrency limiters
+(servers that reply with 421, or firewalls that send RST). When
+sending small amounts of mail per destination, less-than-1 per-delivery
+feedback won't have a noticeable effect on the per-destination
+concurrency, because the number of deliveries to the same destination
+is too small. You might just as well use zero per-delivery feedback
+and stay with the initial per-destination concurrency. And when
+mail deliveries fail due to congestion instead of active concurrency
+limiters, the measurements above show that per-delivery feedback
+has no effect. With large amounts of mail you might just as well
+use zero per-delivery feedback and start with the maximal per-destination
+concurrency. </p>
+
<p> The scheduler with less-than-1 concurrency
feedback per delivery solves a problem with servers that have active
concurrency limiters. This works only because feedback is handled
amount of negative feedback per "bad" delivery. The downside of
using concurrency-independent feedback is that some of the old +/-1
feedback problems will return at large concurrencies. Sites that
-deliver at non-trivial per-destination concurrencies will require
-special configuration. </p>
+must deliver mail at non-trivial per-destination concurrencies will
+require special configuration. </p>
<h3> <a name="concurrency_config"> Concurrency configuration parameters </a> </h3>
<p>
-This document attempts to describe the new queue manager and its
+The following sections describe the new queue manager and its
preemptive scheduler algorithm. Note that the document was originally
written to describe the changes between the new queue manager (in
this text referred to as <tt>nqmgr</tt>, the name it was known by
feedback.
<li> Patrik Rak designed and implemented preemption where mail with
-fewer recipients can slip past mail with more recipients.
+fewer recipients can slip past mail with more recipients in a
+controlled manner, and wrote up its documentation.
<li> Wietse Venema initiated a discussion with Patrik Rak and Victor
Duchovni on alternatives for the +/-1 feedback scheduler's aggressive
context of concurrency-limited servers.
<li> Wietse Venema then re-designed the concurrency scheduler in
-terms of simplest possible concepts: less-than-1 concurrency feedback
-per delivery, forward and reverse concurrency feedback hysteresis,
-and pseudo-cohort failure. At this same time, concurrency feedback
-was separated from dead site detection.
+terms of the simplest possible concepts: less-than-1 concurrency
+feedback per delivery, forward and reverse concurrency feedback
+hysteresis, and pseudo-cohort failure. At this same time, concurrency
+feedback was separated from dead site detection.
<li> These simplifications, and their modular implementation, helped
to develop further insights into the different roles that positive
<p>
The initial per-destination concurrency level for parallel delivery
-to the same destination. This limit applies to delivery via smtp(8),
-and via the pipe(8) and virtual(8) delivery agents.
+to the same destination.
With per-destination recipient limit > 1, a destination is a domain,
otherwise it is a recipient.
</p>
#define DEL_REQ_FLAG_MTA_VRFY (1<<8) /* MTA-requested address probe */
#define DEL_REQ_FLAG_USR_VRFY (1<<9) /* user-requested address probe */
#define DEL_REQ_FLAG_RECORD (1<<10) /* record and deliver */
-#define DEL_REQ_FLAG_SCACHE_LD (1<<11) /* Consult opportunistic cache */
-#define DEL_REQ_FLAG_SCACHE_ST (1<<12) /* Update opportunistic cache */
+#define DEL_REQ_FLAG_CONN_LOAD (1<<11) /* Consult opportunistic cache */
+#define DEL_REQ_FLAG_CONN_STORE (1<<12) /* Update opportunistic cache */
/*
- * Cache Load and Store as value or mask. Use explicit names for multi-bit
+ * Cache Load and Store as value or mask. Use explicit _MASK for multi-bit
* values.
*/
-#define DEL_REQ_FLAG_SCACHE_MASK (DEL_REQ_FLAG_SCACHE_LD|DEL_REQ_FLAG_SCACHE_ST)
+#define DEL_REQ_FLAG_CONN_MASK \
+ (DEL_REQ_FLAG_CONN_LOAD | DEL_REQ_FLAG_CONN_STORE)
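To illustrate the value-versus-mask distinction behind the renamed flags, a
small self-contained sketch (the defines are copied from the patch;
`retrieve_hint_survives` is a made-up helper, not part of Postfix): clearing
only the store bit leaves the load (retrieve) hint set, which is the behavior
the changelog entry describes.

```c
#include <assert.h>

/* Copied from the patched deliver_request.h for illustration. */
#define DEL_REQ_FLAG_CONN_LOAD	(1<<11)	/* Consult opportunistic cache */
#define DEL_REQ_FLAG_CONN_STORE	(1<<12)	/* Update opportunistic cache */
#define DEL_REQ_FLAG_CONN_MASK \
	(DEL_REQ_FLAG_CONN_LOAD | DEL_REQ_FLAG_CONN_STORE)

/* Enable both hints, then withdraw only the store hint; report whether
 * the retrieve (load) hint is still in effect. */
static int retrieve_hint_survives(int dflags)
{
    dflags |= DEL_REQ_FLAG_CONN_MASK;		/* enable store and load */
    dflags &= ~DEL_REQ_FLAG_CONN_STORE;		/* stop storing sessions */
    return ((dflags & DEL_REQ_FLAG_CONN_LOAD) != 0);
}
```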
/*
* For compatibility, the old confusing names.
* Patches change both the patchlevel and the release date. Snapshots have no
* patchlevel; they change the release date only.
*/
-#define MAIL_RELEASE_DATE "20071213"
+#define MAIL_RELEASE_DATE "20071215"
#define MAIL_VERSION_NUMBER "2.5"
#ifdef SNAPSHOT
* prevents unnecessary session caching when we have a burst of mail
* <= the initial concurrency limit.
*/
- if ((queue->dflags & DEL_REQ_FLAG_SCACHE_ST) == 0) {
+ if ((queue->dflags & DEL_REQ_FLAG_CONN_STORE) == 0) {
if (BACK_TO_BACK_DELIVERY()) {
if (msg_verbose)
msg_info("%s: allowing on-demand session caching for %s",
myname, queue->name);
- queue->dflags |= DEL_REQ_FLAG_SCACHE_MASK;
+ queue->dflags |= DEL_REQ_FLAG_CONN_MASK;
}
}
if (msg_verbose)
msg_info("%s: disallowing on-demand session caching for %s",
myname, queue->name);
- queue->dflags &= ~DEL_REQ_FLAG_SCACHE_ST;
+ queue->dflags &= ~DEL_REQ_FLAG_CONN_STORE;
}
}
}
* prevents unnecessary session caching when we have a burst of mail
* <= the initial concurrency limit.
*/
- if ((queue->dflags & DEL_REQ_FLAG_SCACHE_ST) == 0) {
+ if ((queue->dflags & DEL_REQ_FLAG_CONN_STORE) == 0) {
if (BACK_TO_BACK_DELIVERY()) {
if (msg_verbose)
msg_info("%s: allowing on-demand session caching for %s",
myname, queue->name);
- queue->dflags |= DEL_REQ_FLAG_SCACHE_MASK;
+ queue->dflags |= DEL_REQ_FLAG_CONN_MASK;
}
}
if (msg_verbose)
msg_info("%s: disallowing on-demand session caching for %s",
myname, queue->name);
- queue->dflags &= ~DEL_REQ_FLAG_SCACHE_ST;
+ queue->dflags &= ~DEL_REQ_FLAG_CONN_STORE;
}
}
}
single_server_main(argc, argv, qmqpd_service,
MAIL_SERVER_TIME_TABLE, time_table,
MAIL_SERVER_STR_TABLE, str_table,
+ MAIL_SERVER_BOOL_TABLE, bool_table,
MAIL_SERVER_PRE_INIT, pre_jail_init,
MAIL_SERVER_PRE_ACCEPT, pre_accept,
MAIL_SERVER_POST_INIT, post_jail_init,
if (smtp_cache_dest && string_list_match(smtp_cache_dest, dest)) {
state->misc_flags |= SMTP_MISC_FLAG_CONN_CACHE_MASK;
} else if (var_smtp_cache_demand) {
- if (request->flags & DEL_REQ_FLAG_SCACHE_LD)
+ if (request->flags & DEL_REQ_FLAG_CONN_LOAD)
state->misc_flags |= SMTP_MISC_FLAG_CONN_LOAD;
- if (request->flags & DEL_REQ_FLAG_SCACHE_ST)
+ if (request->flags & DEL_REQ_FLAG_CONN_STORE)
state->misc_flags |= SMTP_MISC_FLAG_CONN_STORE;
}
}