mantools/postlink, proto/postconf.proto, tls/tls_mgr.c,
tls/tls_misc.c, tlsproxy/tls-proxy.c, smtp/smtp.c,
smtpd/smtpd.c.
+
+20130629
+
+ Cleanup: documentation. Files: proto/CONNECTION_CACHE_README.html,
+ proto/SCHEDULER_README.html.
+
+20130708
+
+ Cleanup: postscreen_upstream_proxy_protocol setting. Files:
+ global/mail_params.h, postscreen/postscreen_endpt.c.
+
+20130709
+
+ Cleanup: qmgr documentation clarification by Patrik Rak.
+ Files: proto/SCHEDULER_README.html, qmgr/qmgr_job.c.
+
+ Cleanup: re-indented code. File: qmgr/qmgr_job.c.
+
+ Logging: minimal DNAME support. Viktor Dukhovni. Files: dns/dns.h,
+ dns/dns_lookup.c, dns/dns_strtype.c, dns/test_dns_lookup.c.
+
* SMTP Connection caching introduces some overhead: the client needs to send
an RSET command to find out if a connection is still usable, before it can
- send the next MAIL FROM command.
+ send the next MAIL FROM command. This introduces one additional round-trip
+ delay.
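[For illustration, an invented session with placeholder addresses: reusing a
cached connection begins with the RSET probe, which costs one extra round
trip before the next transaction can start.

    C: RSET
    S: 250 2.0.0 Ok
    C: MAIL FROM:<sender@example.com>
    S: 250 2.1.0 Ok ]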
For other potential issues with SMTP connection caching, see the discussion of
limitations at the end of this document.
Concurrency scheduling
The following sections document the Postfix 2.5 concurrency scheduler, after a
-discussion of the limitations of the existing concurrency scheduler. This is
+discussion of the limitations of the earlier concurrency scheduler. This is
followed by results of medium-concurrency experiments, and a discussion of
trade-offs between performance and robustness.
to be delivered and what transports are going to be used for the delivery.
* Each recipient entry groups a batch of recipients of one message which are
- all going to be delivered to the same destination.
+ all going to be delivered to the same destination (and over the same
+ transport).
* Each transport structure groups everything that is going to be delivered by
  delivery agents dedicated to that transport. Each transport maintains a
delivered can preempt this job.
[Well, the truth is, the counter is incremented every time an entry is selected
-and it is divided by k when it is used. Or even more true, there is no
-division, the other side of the equation is multiplied by k. But for the
-understanding it's good enough to use the above approximation of the truth.]
+and it is divided by k when it is used. But for the purpose of understanding,
+the above approximation of the truth is good enough.]
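[For illustration, with invented numbers: if k = 4 and eight entries of the
preempting job have been selected so far, the counter reads 8, which the
scheduler treats as 8 / 4 = 2 delivery slots accumulated so far.]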
OK, so now we know the conditions that must be satisfied for one job to
preempt another. But which job gets preempted, and how do we choose which job
<li> <p> SMTP Connection caching introduces some overhead: the
client needs to send an RSET command to find out if a connection
is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
</ul>
/etc/postfix/<a href="postconf.5.html">main.cf</a>:
<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = $<a href="postconf.5.html#relayhost">relayhost</a>
<a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = hotmail.com, ...
- <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = static:all (<i>not recommended</i>)
+ <a href="postconf.5.html#smtp_connection_cache_destinations">smtp_connection_cache_destinations</a> = <a href="DATABASE_README.html#types">static</a>:all (<i>not recommended</i>)
</pre>
</blockquote>
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
<p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
concurrency scheduler. This is followed by results of medium-concurrency
experiments, and a discussion of trade-offs between performance and
robustness. </p>
going to be used for the delivery. </p>
<li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
</p>
<li> <p> Each transport structure groups everything that is going
<p>
[Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the purpose of understanding, it's good enough to use
the above approximation of the truth.]
</p>
<li> <p> SMTP Connection caching introduces some overhead: the
client needs to send an RSET command to find out if a connection
is still usable, before it can send the next MAIL FROM command.
-</p>
+This introduces one additional round-trip delay. </p>
</ul>
<h2> <a name="concurrency"> Concurrency scheduling </a> </h2>
<p> The following sections document the Postfix 2.5 concurrency
-scheduler, after a discussion of the limitations of the existing
+scheduler, after a discussion of the limitations of the earlier
concurrency scheduler. This is followed by results of medium-concurrency
experiments, and a discussion of trade-offs between performance and
robustness. </p>
going to be used for the delivery. </p>
<li> <p> Each recipient entry groups a batch of recipients of one
-message which are all going to be delivered to the same destination.
+message which are all going to be delivered to the same destination
+(and over the same transport).
</p>
<li> <p> Each transport structure groups everything that is going
<p>
[Well, the truth is, the counter is incremented every time an entry
-is selected and it is divided by k when it is used. Or even more
-true, there is no division, the other side of the equation is
-multiplied by k. But for the understanding it's good enough to use
+is selected and it is divided by k when it is used.
+But for the purpose of understanding, it's good enough to use
the above approximation of the truth.]
</p>
#endif
#ifndef T_RRSIG
#define T_RRSIG 46 /* Avoid unknown RR in logs */
+#endif
+#ifndef T_DNAME
+#define T_DNAME 39 /* [RFC6672] */
#endif
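/*
 * For illustration (invented zone data): a DNAME record redirects a whole
 * subtree of the DNS name space, unlike CNAME, which aliases a single name
 * (RFC 6672). Given
 *
 *     example.com.  DNAME  example.net.
 *
 * a query for mail.example.com is answered as if it had been sent for
 * mail.example.net; the owner name example.com itself is not rewritten.
 */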
/*
msg_panic("dns_get_rr: don't know how to extract resource type %s",
dns_strtype(fixed->type));
case T_CNAME:
+ case T_DNAME:
case T_MB:
case T_MG:
case T_MR:
#ifdef T_RRSIG
T_RRSIG, "RRSIG",
#endif
+#ifdef T_DNAME
+ T_DNAME, "DNAME",
+#endif
#ifdef T_ANY
T_ANY, "ANY",
#endif
printf("%s: %s\n", dns_strtype(rr->type), host.buf);
break;
case T_CNAME:
+ case T_DNAME:
case T_MB:
case T_MG:
case T_MR:
#define DEF_PSC_WLIST_IF "static:all"
extern char *var_psc_wlist_if;
+#define NOPROXY_PROTO_NAME ""
+
#define VAR_PSC_UPROXY_PROTO "postscreen_upstream_proxy_protocol"
-#define DEF_PSC_UPROXY_PROTO ""
+#define DEF_PSC_UPROXY_PROTO NOPROXY_PROTO_NAME
extern char *var_psc_uproxy_proto;
#define VAR_PSC_UPROXY_TMOUT "postscreen_upstream_proxy_timeout"
* Patches change both the patchlevel and the release date. Snapshots have no
* patchlevel; they change the release date only.
*/
-#define MAIL_RELEASE_DATE "20130623"
+#define MAIL_RELEASE_DATE "20130709"
#define MAIL_VERSION_NUMBER "2.11"
#ifdef SNAPSHOT
} PSC_ENDPT_LOOKUP_INFO;
static const PSC_ENDPT_LOOKUP_INFO psc_endpt_lookup_info[] = {
- DEF_PSC_UPROXY_PROTO, psc_endpt_local_lookup,
+ NOPROXY_PROTO_NAME, psc_endpt_local_lookup,
HAPROXY_PROTO_NAME, psc_endpt_haproxy_lookup,
0,
};
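A minimal sketch of how such a protocol-name table is typically scanned at
startup; the field names (name, lookup) and the error text below are invented
for illustration, and msg_fatal() is the usual Postfix msg(3) fatal-error
call:

    const PSC_ENDPT_LOOKUP_INFO *pp;

    /* Select the endpoint lookup routine that matches the configured
     * postscreen_upstream_proxy_protocol value; the empty string
     * (NOPROXY_PROTO_NAME) selects the plain local-endpoint lookup. */
    for (pp = psc_endpt_lookup_info; pp->name != 0; pp++)
        if (strcmp(var_psc_uproxy_proto, pp->name) == 0)
            break;
    if (pp->name == 0)
        msg_fatal("unsupported %s value: %s",
                  VAR_PSC_UPROXY_PROTO, var_psc_uproxy_proto);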
{
QMGR_TRANSPORT *transport = job->transport;
QMGR_MESSAGE *message = job->message;
- QMGR_JOB *prev,
- *next,
- *list_prev,
- *list_next,
- *unread,
- *current;
+ QMGR_JOB *prev, *next, *list_prev, *list_next, *unread, *current;
int delay;
/*
* for jobs which are created long after the first chunk of recipients
* was read in-core (either of these can happen only for multi-transport
* messages).
+ *
+ * XXX Note that we test stack_parent rather than stack_level below. This
+ * subtle difference allows us to enqueue the job in correct time order
+ * with respect to orphaned children even after their original parent on
+ * level zero is gone. Consequently, the early loop stop in candidate
+ * selection works reliably, too. These are the reasons why we care to
+ * bother with children adoption at all.
*/
current = transport->job_current;
for (next = 0, prev = transport->job_list.prev; prev;
QMGR_TRANSPORT *transport = job->transport;
QMGR_MESSAGE *message = job->message;
QMGR_JOB *next = transport->job_next_unread;
- int rcpt_unused,
- msg_rcpt_unused;
+ int rcpt_unused, msg_rcpt_unused;
/*
* Find next unread job on the job list if necessary. Cache it for later.
static QMGR_JOB *qmgr_job_candidate(QMGR_JOB *current)
{
QMGR_TRANSPORT *transport = current->transport;
- QMGR_JOB *job,
- *best_job = 0;
- double score,
- best_score = 0.0;
- int max_slots,
- max_needed_entries,
- max_total_entries;
+ QMGR_JOB *job, *best_job = 0;
+ double score, best_score = 0.0;
+ int max_slots, max_needed_entries, max_total_entries;
int delay;
time_t now = sane_time();
{
const char *myname = "qmgr_job_preempt";
QMGR_TRANSPORT *transport = current->transport;
- QMGR_JOB *job,
- *prev;
+ QMGR_JOB *job, *prev;
int expected_slots;
int rcpt_slots;
/*
* Adjust the number of delivery slots available to preempt job's parent.
+ * Note that the -= actually adds back any unused slots, as we have
+ * already subtracted the expected amount of slots from both counters
+ * when we did the preemption.
*
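* (Illustration, with invented numbers: suppose the preemption subtracted an
* expected cost of 10 slots up front, and the preempting job completed after
* consuming only 7. The -= of the now-negative remainder, 7 - 10 = -3, then
* hands the 3 unused slots back.)
*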
* Note that we intentionally do not adjust slots_used of the parent. Doing
* so would decrease the maximum per message inflation factor if the
* in. Otherwise a single recipient for a slow destination might starve the
* entire message delivery, leaving a lot of fast destination recipients
* sitting idle in the queue file.
- *
- * Ideally we would like to read in recipients whenever there is a
- * space, but to prevent excessive I/O, we read them only when enough
- * time has passed or we can read enough of them at once.
- *
+ *
+ * Ideally we would like to read in recipients whenever there is a space,
+ * but to prevent excessive I/O, we read them only when enough time has
+ * passed or we can read enough of them at once.
+ *
* Note that even if we read the recipients a few at a time, the message
* loading code tries to put them into existing recipient entries whenever
* possible, so the per-destination recipient grouping is not grossly
* affected.
- *
+ *
* XXX Workaround for logic mismatch. The message->refcount test needs
* explanation. If the refcount is zero, it means that qmgr_active_done()
* is being completed asynchronously. In that case, we can't read in
&& message->refcount > 0
&& (message->rcpt_limit - message->rcpt_count >= job->transport->refill_limit
|| (message->rcpt_limit > message->rcpt_count
- && sane_time() - message->refill_time >= job->transport->refill_delay)))
+ && sane_time() - message->refill_time >= job->transport->refill_delay)))
qmgr_message_realloc(message);
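/*
 * For illustration, with invented values: with refill_limit = 100 and
 * refill_delay = 5, the guard above reads in more recipients either as soon
 * as at least 100 in-core recipient slots are free, or once at least one
 * slot is free and 5 seconds have passed since the previous refill.
 */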
/*
return (peer);
/*
- * There is no suitable peer in-core, so try reading in more recipients if possible.
- * This is our last chance to get suitable peer before giving up on this job for now.
+ * There is no suitable peer in-core, so try reading in more recipients
+ * if possible. This is our last chance to get a suitable peer before
+ * giving up on this job for now.
*
* XXX For message->refcount, see above.
*/
QMGR_ENTRY *qmgr_job_entry_select(QMGR_TRANSPORT *transport)
{
- QMGR_JOB *job,
- *next;
+ QMGR_JOB *job, *next;
QMGR_PEER *peer;
QMGR_ENTRY *entry;
/* qmgr_job_blocker_update - update "blocked job" status */
-void qmgr_job_blocker_update(QMGR_QUEUE *queue)
+void qmgr_job_blocker_update(QMGR_QUEUE *queue)
{
QMGR_TRANSPORT *transport = queue->transport;
queue->blocker_tag = 0;
}
}
-