ensures that time only ever increases (the timestamps it gives do not, however, reflect the
"real" world clock).
-The simplest handling of transfer time would be to just always call `curlx_now()`. However,
-there is a performance penalty to that - varying by platform - so this is not a desirable
-strategy. Processing thousands of transfers in a loop needs a smarter approach.
-
## Initial Approach (now historic)
The loop processing functions called `curlx_now()` at the beginning and then passed that
timestamp on to the transfers being processed.
The strategy for handling a transfer's time is now:
-* Keep a "now" timestamp in `data->progress.now`.
-* Perform time checks and event recording using `data->progress.now`.
-* Set `data->progress.now` at the start of API calls (e.g. `curl_multi_perform()`).
-* Set `data->progress.now` when recorded events happen (for precision).
-* Set `data->progress.now` on multi state changes.
-* Set `data->progress.now` in `pingpong` timeout handling, since `pingpong` is old and not always non-blocking.
-
-In addition to *setting* `data->progress.now`, this timestamp can be *advanced* using two new methods:
-
-* `Curl_pgrs_now_at_least(data, &now)`: code that has a "now" timestamp can advance the `data`'s own "now" to be at least as recent. If `data->progress.now` is already newer, nothing changes. A transfer's time never goes **back**.
-* `Curl_pgrs_now_update(data1, data2)`: update the "now" in `data1` to be at least as new as the one in `data2`. If it already is newer, nothing changes.
-
-### Time Advancing Loops
-
-This advancing is used in the following way in a loop like `curl_multi_perform()`:
-
-```C
-struct curltime now = curlx_now(); /* start of API call */
-forall data in transfers {
- Curl_pgrs_now_at_least(data, &now);
- progress(data); /* may update "now" */
- now = data->progress.now;
-}
-```
-
-Transfers that update their "now" pass that timestamp to the next transfer processed.
-
-### Transfers triggering other transfers
-
-In HTTP/2 and HTTP/3 processing, incoming data causes actions on transfers other than
-the calling one. The protocols may receive data for any transfer on the connection and need
-to dispatch it:
-
-* A Close/Reset comes in for another transfer: that transfer is marked as "dirty", ensuring it is processed in a timely manner.
-* Response data arrives: the data is written out to the client. Before this is done, the "now" timestamp is updated via `Curl_pgrs_now_update(data, calling)` from the "calling" transfer.
-
-## Blocking Operations
+* Keep a "now" timestamp in the multi handle. Keep a fallback "now" timestamp in the easy handle.
+* Always use `Curl_pgrs_now(data)` to get the current time of a transfer.
+* Do not use `curlx_now()` directly for transfer handling (exceptions apply for loops).
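
As a rough before/after sketch of the typical conversion in this change (`idle_ms` is an arbitrary name; `conn->lastused` is one of the fields compared this way elsewhere in the patch):

```C
/* before: read the cached timestamp field directly */
timediff_t idle_ms = curlx_timediff_ms(data->progress.now, conn->lastused);

/* after: fetch a fresh timestamp via the accessor, passing pointers */
idle_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &conn->lastused);
```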
-We still have places in `libcurl` where we do blocking operations. We should always use `Curl_pgrs_now_set(data)` afterwards since we cannot be sure how much time has passed. Since loop processing passed an updated "now" to the next transfer, a delay due to blocking is passed on.
+This has the following advantages:
-There are other places where we may lose track of time:
+* No need to pass a `struct curltime` around or pass a pointer to an outdated timestamp to other functions.
+* No need to calculate the exact `now` until it is really used.
+* Passing a `const` pointer is better than passing the struct by value. Updating and passing a pointer to the same memory location for all transfers is even better.
-* Cache/Pool Locks: no "now" updates happen after a lock has been acquired. These locks should not be kept for a longer time.
-* User Callbacks: no "now" updates happen after callbacks have been invoked. The expectation is that those do not take long.
+Caveats:
-Should these assumptions prove wrong, we need to add updates.
+* Do not store the pointer returned by `Curl_pgrs_now(data)` anywhere that outlives the current code invocation; the memory it points to is shared and refreshed on every call.
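
A minimal sketch of honoring this caveat (`last_io` and `idle_limit_ms` are hypothetical names, not taken from the patch):

```C
const struct curltime *pnow = Curl_pgrs_now(data); /* valid only for this call */
if(curlx_ptimediff_ms(pnow, &last_io) >= idle_limit_ms) {
  /* to remember a point in time, copy the struct rather than keeping the pointer */
  last_io = *Curl_pgrs_now(data);
}
```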
/* This is only set to non-zero if the timer was started. */
(ares->happy_eyeballs_dns_time.tv_sec ||
ares->happy_eyeballs_dns_time.tv_usec) &&
- (curlx_timediff_ms(data->progress.now, ares->happy_eyeballs_dns_time) >=
+ (curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &ares->happy_eyeballs_dns_time) >=
HAPPY_EYEBALLS_DNS_TIMEOUT)) {
/* Remember that the EXPIRE_HAPPY_EYEBALLS_DNS timer is no longer
running. */
result = CURLE_ABORTED_BY_CALLBACK;
else {
struct curltime now = curlx_now(); /* update in loop */
- timediff_t elapsed_ms = curlx_timediff_ms(now, data->progress.now);
+ timediff_t elapsed_ms = curlx_ptimediff_ms(&now, Curl_pgrs_now(data));
if(elapsed_ms <= 0)
timeout_ms -= 1; /* always deduct at least 1 */
else if(elapsed_ms > timeout_ms)
timeout_ms = -1;
else
timeout_ms -= elapsed_ms;
- Curl_pgrs_now_at_least(data, &now);
}
if(timeout_ms < 0)
result = CURLE_OPERATION_TIMEDOUT;
timeout to prevent it. After all, we do not even know where in the
c-ares retry cycle each request is.
*/
- ares->happy_eyeballs_dns_time = data->progress.now;
+ ares->happy_eyeballs_dns_time = *Curl_pgrs_now(data);
Curl_expire(data, HAPPY_EYEBALLS_DNS_TIMEOUT, EXPIRE_HAPPY_EYEBALLS_DNS);
}
}
#include "url.h"
#include "multiif.h"
#include "curl_threads.h"
+#include "progress.h"
#include "select.h"
#ifdef USE_ARES
/* passing addr_ctx to the thread adds a reference */
addr_ctx->ref_count = 2;
- addr_ctx->start = data->progress.now;
+ addr_ctx->start = *Curl_pgrs_now(data);
#ifdef HAVE_GETADDRINFO
addr_ctx->thread_hnd = Curl_thread_create(getaddrinfo_thread, addr_ctx);
else {
/* poll for name lookup done with exponential backoff up to 250ms */
/* should be fine even if this converts to 32-bit */
- timediff_t elapsed = curlx_timediff_ms(data->progress.now,
- data->progress.t_startsingle);
+ timediff_t elapsed = curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle);
if(elapsed < 0)
elapsed = 0;
result = Curl_pollset_add_in(data, ps, thrdd->addr->sock_pair[0]);
#else
timediff_t milli;
- timediff_t ms = curlx_timediff_ms(data->progress.now, thrdd->addr->start);
+ timediff_t ms =
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &thrdd->addr->start);
if(ms < 3)
milli = 0;
else if(ms <= 50)
#include "multiif.h"
#include "cf-https-connect.h"
#include "http2.h"
+#include "progress.h"
#include "select.h"
#include "vquic/vquic.h"
struct Curl_cfilter *save = cf->next;
cf->next = NULL;
- b->started = data->progress.now;
+ b->started = *Curl_pgrs_now(data);
switch(b->alpn_id) {
case ALPN_h3:
transport = TRNSPRT_QUIC;
if(reply_ms >= 0)
CURL_TRC_CF(data, cf, "connect+handshake %s: %dms, 1st data: %dms",
winner->name,
- (int)curlx_timediff_ms(data->progress.now,
- winner->started), reply_ms);
+ (int)curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &winner->started), reply_ms);
else
CURL_TRC_CF(data, cf, "deferred handshake %s: %dms",
- winner->name, (int)curlx_timediff_ms(data->progress.now,
- winner->started));
+ winner->name, (int)curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &winner->started));
/* install the winning filter below this one. */
cf->next = winner->cf;
ctx->ballers[idx].name);
return TRUE;
}
- elapsed_ms = curlx_timediff_ms(now, ctx->started);
+ elapsed_ms = curlx_ptimediff_ms(&now, &ctx->started);
if(elapsed_ms >= ctx->hard_eyeballs_timeout_ms) {
CURL_TRC_CF(data, cf, "hard timeout of %" FMT_TIMEDIFF_T "ms reached, "
"starting %s",
for(i = 0; i < ctx->baller_count; i++)
DEBUGASSERT(!ctx->ballers[i].cf);
CURL_TRC_CF(data, cf, "connect, init");
- ctx->started = data->progress.now;
+ ctx->started = *Curl_pgrs_now(data);
cf_hc_baller_init(&ctx->ballers[0], cf, data, ctx->ballers[0].transport);
if(ctx->baller_count > 1) {
Curl_expire(data, ctx->soft_eyeballs_timeout_ms, EXPIRE_ALPN_EYEBALLS);
}
}
- if(time_to_start_next(cf, data, 1, data->progress.now)) {
+ if(time_to_start_next(cf, data, 1, *Curl_pgrs_now(data))) {
cf_hc_baller_init(&ctx->ballers[1], cf, data, ctx->ballers[1].transport);
}
struct Curl_cfilter *cfb = ctx->ballers[i].cf;
memset(&t, 0, sizeof(t));
if(cfb && !cfb->cft->query(cfb, data, query, NULL, &t)) {
- if((t.tv_sec || t.tv_usec) && curlx_timediff_us(t, tmax) > 0)
+ if((t.tv_sec || t.tv_usec) && curlx_ptimediff_us(&t, &tmax) > 0)
tmax = t;
}
}
/* no attempt connected yet, start another one? */
if(!ongoing) {
if(!bs->started.tv_sec && !bs->started.tv_usec)
- bs->started = data->progress.now;
+ bs->started = *Curl_pgrs_now(data);
do_more = TRUE;
}
else {
more_possible = cf_ai_iter_has_more(&bs->ipv6_iter);
#endif
do_more = more_possible &&
- (curlx_timediff_ms(data->progress.now, bs->last_attempt_started) >=
+ (curlx_ptimediff_ms(Curl_pgrs_now(data), &bs->last_attempt_started) >=
bs->attempt_delay_ms);
if(do_more)
CURL_TRC_CF(data, cf, "happy eyeballs timeout expired, "
while(*panchor)
panchor = &((*panchor)->next);
*panchor = a;
- bs->last_attempt_started = data->progress.now;
+ bs->last_attempt_started = *Curl_pgrs_now(data);
bs->last_attempt_ai_family = ai_family;
/* and run everything again */
goto evaluate;
/* tried all addresses, no success but some where inconclusive.
* Let's restart the inconclusive ones. */
timediff_t since_ms =
- curlx_timediff_ms(data->progress.now, bs->last_attempt_started);
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &bs->last_attempt_started);
timediff_t delay_ms = bs->attempt_delay_ms - since_ms;
if(delay_ms <= 0) {
CURL_TRC_CF(data, cf, "all attempts inconclusive, restarting one");
CURL_TRC_CF(data, cf, "restarted baller %d -> %d", i, result);
if(result) /* serious failure */
goto out;
- bs->last_attempt_started = data->progress.now;
+ bs->last_attempt_started = *Curl_pgrs_now(data);
goto evaluate;
}
DEBUGASSERT(0); /* should not come here */
next_expire_ms = Curl_timeleft_ms(data, TRUE);
if(next_expire_ms <= 0) {
failf(data, "Connection timeout after %" FMT_OFF_T " ms",
- curlx_timediff_ms(data->progress.now, data->progress.t_startsingle));
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle));
return CURLE_OPERATION_TIMEDOUT;
}
if(more_possible) {
timediff_t expire_ms, elapsed_ms;
elapsed_ms =
- curlx_timediff_ms(data->progress.now, bs->last_attempt_started);
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &bs->last_attempt_started);
expire_ms = CURLMAX(bs->attempt_delay_ms - elapsed_ms, 0);
next_expire_ms = CURLMIN(next_expire_ms, expire_ms);
if(next_expire_ms <= 0) {
for(a = bs->running; a; a = a->next) {
memset(&t, 0, sizeof(t));
if(!a->cf->cft->query(a->cf, data, query, NULL, &t)) {
- if((t.tv_sec || t.tv_usec) && curlx_timediff_us(t, tmax) > 0)
+ if((t.tv_sec || t.tv_usec) && curlx_ptimediff_us(&t, &tmax) > 0)
tmax = t;
}
}
proxy_name ? "via " : "",
proxy_name ? proxy_name : "",
proxy_name ? " " : "",
- curlx_timediff_ms(data->progress.now, data->progress.t_startsingle),
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle),
curl_easy_strerror(result));
}
}
CURL_TRC_CF(data, cf, "init ip ballers for transport %u", ctx->transport);
- ctx->started = data->progress.now;
+ ctx->started = *Curl_pgrs_now(data);
return cf_ip_ballers_init(&ctx->ballers, cf->conn->ip_version,
dns->addr, ctx->cf_create, ctx->transport,
data->set.happy_eyeballs_timeout);
(void)data;
DEBUGASSERT(ctx->sock == CURL_SOCKET_BAD);
- ctx->started_at = data->progress.now;
+ ctx->started_at = *Curl_pgrs_now(data);
#ifdef SOCK_NONBLOCK
/* Do not tuck SOCK_NONBLOCK into socktype when opensocket callback is set
* because we would not know how socketype is about to be used in the
}
else if(isconnected) {
set_local_ip(cf, data);
- ctx->connected_at = data->progress.now;
+ ctx->connected_at = *Curl_pgrs_now(data);
cf->connected = TRUE;
}
CURL_TRC_CF(data, cf, "cf_socket_open() -> %d, fd=%" FMT_SOCKET_T,
else if(rc == CURL_CSELECT_OUT || cf->conn->bits.tcp_fastopen) {
if(verifyconnect(ctx->sock, &ctx->error)) {
/* we are connected with TCP, awesome! */
- ctx->connected_at = data->progress.now;
+ ctx->connected_at = *Curl_pgrs_now(data);
set_local_ip(cf, data);
*done = TRUE;
cf->connected = TRUE;
ULONG ideal;
DWORD ideallen;
- if(curlx_timediff_ms(data->progress.now, ctx->last_sndbuf_query_at) > 1000) {
+ if(curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &ctx->last_sndbuf_query_at) > 1000) {
if(!WSAIoctl(ctx->sock, SIO_IDEAL_SEND_BACKLOG_QUERY, 0, 0,
&ideal, sizeof(ideal), &ideallen, 0, 0) &&
ideal != ctx->sndbuf_size &&
(const char *)&ideal, sizeof(ideal))) {
ctx->sndbuf_size = ideal;
}
- ctx->last_sndbuf_query_at = data->progress.now;
+ ctx->last_sndbuf_query_at = *Curl_pgrs_now(data);
}
}
CURL_TRC_CF(data, cf, "recv(len=%zu) -> %d, %zu", len, result, *pnread);
if(!result && !ctx->got_first_byte) {
- ctx->first_byte_at = data->progress.now;
+ ctx->first_byte_at = *Curl_pgrs_now(data);
ctx->got_first_byte = TRUE;
}
return result;
return CURLE_OK;
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->got_first_byte) {
- timediff_t ms = curlx_timediff_ms(ctx->first_byte_at, ctx->started_at);
+ timediff_t ms = curlx_ptimediff_ms(&ctx->first_byte_at,
+ &ctx->started_at);
*pres1 = (ms < INT_MAX) ? (int)ms : INT_MAX;
}
else
timeout_ms = other_ms;
else {
/* subtract elapsed time */
- timeout_ms -= curlx_timediff_ms(data->progress.now, ctx->started_at);
+ timeout_ms -= curlx_ptimediff_ms(Curl_pgrs_now(data), &ctx->started_at);
if(!timeout_ms)
/* avoid returning 0 as that means no timeout! */
timeout_ms = -1;
cf_tcp_set_accepted_remote_ip(cf, data);
set_local_ip(cf, data);
ctx->active = TRUE;
- ctx->connected_at = data->progress.now;
+ ctx->connected_at = *Curl_pgrs_now(data);
cf->connected = TRUE;
CURL_TRC_CF(data, cf, "accepted_set(sock=%" FMT_SOCKET_T
", remote=%s port=%d)",
goto out;
Curl_conn_cf_add(data, conn, sockindex, cf);
- ctx->started_at = data->progress.now;
+ ctx->started_at = *Curl_pgrs_now(data);
conn->sock[sockindex] = ctx->sock;
set_local_ip(cf, data);
CURL_TRC_CF(data, cf, "set filter for listen socket fd=%" FMT_SOCKET_T
* socket and ip related information. */
cf_cntrl_update_info(data, data->conn);
conn_report_connect_stats(cf, data);
- data->conn->keepalive = data->progress.now;
+ data->conn->keepalive = *Curl_pgrs_now(data);
#ifndef CURL_DISABLE_VERBOSE_STRINGS
result = cf_verboseconnect(data, cf);
#endif
cpool->share ? "[SHARE] " : "", cpool->num_conn);
/* Move all connections to the shutdown list */
sigpipe_init(&pipe_st);
- Curl_pgrs_now_set(cpool->idata);
CPOOL_LOCK(cpool, cpool->idata);
conn = cpool_get_first(cpool);
while(conn) {
static struct connectdata *
cpool_bundle_get_oldest_idle(struct cpool_bundle *bundle,
- struct curltime *pnow)
+ const struct curltime *pnow)
{
struct Curl_llist_node *curr;
timediff_t highscore = -1;
if(!CONN_INUSE(conn)) {
/* Set higher score for the age passed since the connection was used */
- score = curlx_timediff_ms(*pnow, conn->lastused);
+ score = curlx_ptimediff_ms(pnow, &conn->lastused);
if(score > highscore) {
highscore = score;
}
static struct connectdata *cpool_get_oldest_idle(struct cpool *cpool,
- struct curltime *pnow)
+ const struct curltime *pnow)
{
struct Curl_hash_iterator iter;
struct Curl_llist_node *curr;
if(CONN_INUSE(conn) || conn->bits.close || conn->connect_only)
continue;
/* Set higher score for the age passed since the connection was used */
- score = curlx_timediff_ms(*pnow, conn->lastused);
+ score = curlx_ptimediff_ms(pnow, &conn->lastused);
if(score > highscore) {
highscore = score;
oldest_idle = conn;
if(!dest_limit && !total_limit)
return CPOOL_LIMIT_OK;
- Curl_pgrs_now_update(cpool->idata, data);
CPOOL_LOCK(cpool, cpool->idata);
if(dest_limit) {
size_t live;
/* The bundle is full. Extract the oldest connection that may
* be removed now, if there is one. */
oldest_idle = cpool_bundle_get_oldest_idle(bundle,
- &data->progress.now);
+ Curl_pgrs_now(data));
if(!oldest_idle)
break;
/* disconnect the old conn and continue */
}
else {
struct connectdata *oldest_idle =
- cpool_get_oldest_idle(cpool, &data->progress.now);
+ cpool_get_oldest_idle(cpool, Curl_pgrs_now(data));
if(!oldest_idle)
break;
/* disconnect the old conn and continue */
out:
CPOOL_UNLOCK(cpool, cpool->idata);
- Curl_pgrs_now_update(data, cpool->idata);
return result;
}
maxconnects = data->multi->maxconnects;
}
- conn->lastused = data->progress.now; /* it was used up until now */
+ conn->lastused = *Curl_pgrs_now(data); /* it was used up until now */
if(cpool && maxconnects) {
/* may be called form a callback already under lock */
bool do_lock = !CPOOL_IS_LOCKED(cpool);
infof(data, "Connection pool is full, closing the oldest of %zu/%u",
cpool->num_conn, maxconnects);
- oldest_idle = cpool_get_oldest_idle(cpool, &data->progress.now);
+ oldest_idle = cpool_get_oldest_idle(cpool, Curl_pgrs_now(data));
kept = (oldest_idle != conn);
if(oldest_idle) {
Curl_conn_terminate(data, oldest_idle, FALSE);
* If we do a shutdown for an aborted transfer, the server might think
* it was successful otherwise (for example an ftps: upload). This is
* not what we want. */
- Curl_pgrs_now_update(cpool->idata, data);
if(aborted)
done = TRUE;
if(!done) {
CPOOL_UNLOCK(cpool, data);
}
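+/* bookkeeping for one pass over the pool when reaping dead connections */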
+struct cpool_reaper_ctx {
+ size_t checked;
+ size_t reaped;
+};
+
static int cpool_reap_dead_cb(struct Curl_easy *data,
struct connectdata *conn, void *param)
{
- (void)param;
- if((!CONN_INUSE(conn) && conn->bits.no_reuse) ||
- Curl_conn_seems_dead(conn, data)) {
+ struct cpool_reaper_ctx *reaper = param;
+ bool terminate = !CONN_INUSE(conn) && conn->bits.no_reuse;
+
+ if(!terminate) {
+ reaper->checked++;
+ terminate = Curl_conn_seems_dead(conn, data);
+ }
+ if(terminate) {
/* stop the iteration here, pass back the connection that was pruned */
+ reaper->reaped++;
Curl_conn_terminate(data, conn, FALSE);
return 1;
}
void Curl_cpool_prune_dead(struct Curl_easy *data)
{
struct cpool *cpool = cpool_get_instance(data);
+ struct cpool_reaper_ctx reaper;
timediff_t elapsed;
if(!cpool)
return;
+ memset(&reaper, 0, sizeof(reaper));
CPOOL_LOCK(cpool, data);
- elapsed = curlx_timediff_ms(data->progress.now, cpool->last_cleanup);
+ elapsed = curlx_ptimediff_ms(Curl_pgrs_now(data), &cpool->last_cleanup);
if(elapsed >= 1000L) {
- while(cpool_foreach(data, cpool, NULL, cpool_reap_dead_cb))
+ while(cpool_foreach(data, cpool, &reaper, cpool_reap_dead_cb))
;
- cpool->last_cleanup = data->progress.now;
+ cpool->last_cleanup = *Curl_pgrs_now(data);
}
CPOOL_UNLOCK(cpool, data);
}
struct connectdata *conn,
void *param)
{
- struct curltime *now = param;
- Curl_conn_upkeep(data, conn, now);
+ (void)param;
+ Curl_conn_upkeep(data, conn);
return 0; /* continue iteration */
}
return CURLE_OK;
CPOOL_LOCK(cpool, data);
- cpool_foreach(data, cpool, &data->progress.now, conn_upkeep);
+ cpool_foreach(data, cpool, NULL, conn_upkeep);
CPOOL_UNLOCK(cpool, data);
return CURLE_OK;
}
* @param duringconnect TRUE iff connect timeout is also taken into account.
* @unittest: 1303
*/
-timediff_t Curl_timeleft_ms(struct Curl_easy *data,
- bool duringconnect)
+timediff_t Curl_timeleft_now_ms(struct Curl_easy *data,
+ const struct curltime *pnow,
+ bool duringconnect)
{
timediff_t timeleft_ms = 0;
timediff_t ctimeleft_ms = 0;
if(data->set.timeout) {
timeleft_ms = data->set.timeout -
- curlx_timediff_ms(data->progress.now, data->progress.t_startop);
+ curlx_ptimediff_ms(pnow, &data->progress.t_startop);
if(!timeleft_ms)
timeleft_ms = -1; /* 0 is "no limit", fake 1 ms expiry */
}
ctimeout_ms = (data->set.connecttimeout > 0) ?
data->set.connecttimeout : DEFAULT_CONNECT_TIMEOUT;
ctimeleft_ms = ctimeout_ms -
- curlx_timediff_ms(data->progress.now, data->progress.t_startsingle);
+ curlx_ptimediff_ms(pnow, &data->progress.t_startsingle);
if(!ctimeleft_ms)
ctimeleft_ms = -1; /* 0 is "no limit", fake 1 ms expiry */
if(!timeleft_ms)
return (ctimeleft_ms < timeleft_ms) ? ctimeleft_ms : timeleft_ms;
}
+timediff_t Curl_timeleft_ms(struct Curl_easy *data,
+ bool duringconnect)
+{
+ return Curl_timeleft_now_ms(data, Curl_pgrs_now(data), duringconnect);
+}
+
void Curl_shutdown_start(struct Curl_easy *data, int sockindex,
int timeout_ms)
{
struct connectdata *conn = data->conn;
DEBUGASSERT(conn);
- conn->shutdown.start[sockindex] = data->progress.now;
+ conn->shutdown.start[sockindex] = *Curl_pgrs_now(data);
conn->shutdown.timeout_ms = (timeout_ms > 0) ?
(timediff_t)timeout_ms :
((data->set.shutdowntimeout > 0) ?
return 0; /* not started or no limits */
left_ms = conn->shutdown.timeout_ms -
- curlx_timediff_ms(data->progress.now,
- conn->shutdown.start[sockindex]);
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &conn->shutdown.start[sockindex]);
return left_ms ? left_ms : -1;
}
to the timeouts set */
timediff_t Curl_timeleft_ms(struct Curl_easy *data,
bool duringconnect);
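+/* variant of Curl_timeleft_ms() that uses a caller-provided "now" timestamp */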
+timediff_t Curl_timeleft_now_ms(struct Curl_easy *data,
+ const struct curltime *pnow,
+ bool duringconnect);
#define DEFAULT_CONNECT_TIMEOUT 300000 /* milliseconds == five minutes */
struct Curl_easy *data,
int timeout_ms)
{
- struct curltime started = data->progress.now;
+ struct curltime started = *Curl_pgrs_now(data);
struct Curl_llist_node *e;
SIGPIPE_VARIABLE(pipe_st);
}
/* wait for activity, timeout or "nothing" */
- Curl_pgrs_now_set(data); /* update in loop */
- spent_ms = curlx_timediff_ms(data->progress.now, started);
+ spent_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &started);
if(spent_ms >= (timediff_t)timeout_ms) {
CURL_TRC_M(data, "[SHUTDOWN] shutdown finished, %s",
(timeout_ms > 0) ? "timeout" : "best effort done");
#include "cf-haproxy.h"
#include "cf-https-connect.h"
#include "cf-ip-happy.h"
+#include "progress.h"
#include "socks.h"
#include "curlx/strparse.h"
#include "vtls/vtls.h"
if(CURL_TRC_TIMER_is_verbose(data)) {
struct Curl_llist_node *e = Curl_llist_head(&data->state.timeoutlist);
if(e) {
+ const struct curltime *pnow = Curl_pgrs_now(data);
while(e) {
struct time_node *n = Curl_node_elem(e);
e = Curl_node_next(e);
CURL_TRC_TIMER(data, n->eid, "expires in %" FMT_TIMEDIFF_T "ns",
- curlx_timediff_us(n->time, data->progress.now));
+ curlx_ptimediff_us(&n->time, pnow));
}
}
}
}
/* In case of bug fix this function has a counterpart in tool_util.c */
-struct curltime curlx_now(void)
+void curlx_pnow(struct curltime *pnow)
{
- struct curltime now;
bool isVistaOrGreater;
isVistaOrGreater = Curl_isVistaOrGreater;
if(isVistaOrGreater) { /* QPC timer might have issues pre-Vista */
freq = Curl_freq;
DEBUGASSERT(freq.QuadPart);
QueryPerformanceCounter(&count);
- now.tv_sec = (time_t)(count.QuadPart / freq.QuadPart);
- now.tv_usec = (int)((count.QuadPart % freq.QuadPart) * 1000000 /
+ pnow->tv_sec = (time_t)(count.QuadPart / freq.QuadPart);
+ pnow->tv_usec = (int)((count.QuadPart % freq.QuadPart) * 1000000 /
freq.QuadPart);
}
else {
#pragma warning(pop)
#endif
- now.tv_sec = (time_t)(milliseconds / 1000);
- now.tv_usec = (int)((milliseconds % 1000) * 1000);
+ pnow->tv_sec = (time_t)(milliseconds / 1000);
+ pnow->tv_usec = (int)((milliseconds % 1000) * 1000);
}
- return now;
}
#elif defined(HAVE_CLOCK_GETTIME_MONOTONIC) || \
defined(HAVE_CLOCK_GETTIME_MONOTONIC_RAW)
-struct curltime curlx_now(void)
+void curlx_pnow(struct curltime *pnow)
{
/*
** clock_gettime() is granted to be increased monotonically when the
#ifdef HAVE_GETTIMEOFDAY
struct timeval now;
#endif
- struct curltime cnow;
struct timespec tsnow;
/*
have_clock_gettime &&
#endif
(clock_gettime(CLOCK_MONOTONIC_RAW, &tsnow) == 0)) {
- cnow.tv_sec = tsnow.tv_sec;
- cnow.tv_usec = (int)(tsnow.tv_nsec / 1000);
+ pnow->tv_sec = tsnow.tv_sec;
+ pnow->tv_usec = (int)(tsnow.tv_nsec / 1000);
}
else
#endif
have_clock_gettime &&
#endif
(clock_gettime(CLOCK_MONOTONIC, &tsnow) == 0)) {
- cnow.tv_sec = tsnow.tv_sec;
- cnow.tv_usec = (int)(tsnow.tv_nsec / 1000);
+ pnow->tv_sec = tsnow.tv_sec;
+ pnow->tv_usec = (int)(tsnow.tv_nsec / 1000);
}
/*
** Even when the configure process has truly detected monotonic clock
#ifdef HAVE_GETTIMEOFDAY
else {
(void)gettimeofday(&now, NULL);
- cnow.tv_sec = now.tv_sec;
- cnow.tv_usec = (int)now.tv_usec;
+ pnow->tv_sec = now.tv_sec;
+ pnow->tv_usec = (int)now.tv_usec;
}
#else
else {
- cnow.tv_sec = time(NULL);
- cnow.tv_usec = 0;
+ pnow->tv_sec = time(NULL);
+ pnow->tv_usec = 0;
}
#endif
- return cnow;
}
#elif defined(HAVE_MACH_ABSOLUTE_TIME)
#include <stdint.h>
#include <mach/mach_time.h>
-struct curltime curlx_now(void)
+void curlx_pnow(struct curltime *pnow)
{
/*
** Monotonic timer on macOS is provided by mach_absolute_time(), which
** mach_timebase_info().
*/
static mach_timebase_info_data_t timebase;
- struct curltime cnow;
uint64_t usecs;
if(timebase.denom == 0)
usecs /= timebase.denom;
usecs /= 1000;
- cnow.tv_sec = usecs / 1000000;
- cnow.tv_usec = (int)(usecs % 1000000);
-
- return cnow;
+ pnow->tv_sec = usecs / 1000000;
+ pnow->tv_usec = (int)(usecs % 1000000);
}
#elif defined(HAVE_GETTIMEOFDAY)
-struct curltime curlx_now(void)
+void curlx_pnow(struct curltime *pnow)
{
/*
** gettimeofday() is not granted to be increased monotonically, due to
** forward or backward in time.
*/
struct timeval now;
- struct curltime ret;
(void)gettimeofday(&now, NULL);
- ret.tv_sec = now.tv_sec;
- ret.tv_usec = (int)now.tv_usec;
- return ret;
+ pnow->tv_sec = now.tv_sec;
+ pnow->tv_usec = (int)now.tv_usec;
}
#else
-struct curltime curlx_now(void)
+void curlx_pnow(struct curltime *pnow)
{
/*
** time() returns the value of time in seconds since the Epoch.
*/
- struct curltime now;
- now.tv_sec = time(NULL);
- now.tv_usec = 0;
- return now;
+ pnow->tv_sec = time(NULL);
+ pnow->tv_usec = 0;
}
#endif
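+/* convenience wrapper: return the current time by value, via curlx_pnow() */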
+struct curltime curlx_now(void)
+{
+ struct curltime now;
+ curlx_pnow(&now);
+ return now;
+}
+
/*
* Returns: time difference in number of milliseconds. For too large diffs it
* returns max value.
*
* @unittest: 1323
*/
-timediff_t curlx_timediff_ms(struct curltime newer, struct curltime older)
+timediff_t curlx_ptimediff_ms(const struct curltime *newer,
+ const struct curltime *older)
{
- timediff_t diff = (timediff_t)newer.tv_sec - older.tv_sec;
+ timediff_t diff = (timediff_t)newer->tv_sec - older->tv_sec;
if(diff >= (TIMEDIFF_T_MAX / 1000))
return TIMEDIFF_T_MAX;
else if(diff <= (TIMEDIFF_T_MIN / 1000))
return TIMEDIFF_T_MIN;
- return diff * 1000 + (newer.tv_usec - older.tv_usec) / 1000;
+ return diff * 1000 + (newer->tv_usec - older->tv_usec) / 1000;
+}
+
+
+timediff_t curlx_timediff_ms(struct curltime newer, struct curltime older)
+{
+ return curlx_ptimediff_ms(&newer, &older);
}
/*
* Returns: time difference in number of microseconds. For too large diffs it
* returns max value.
*/
-timediff_t curlx_timediff_us(struct curltime newer, struct curltime older)
+timediff_t curlx_ptimediff_us(const struct curltime *newer,
+ const struct curltime *older)
{
- timediff_t diff = (timediff_t)newer.tv_sec - older.tv_sec;
+ timediff_t diff = (timediff_t)newer->tv_sec - older->tv_sec;
if(diff >= (TIMEDIFF_T_MAX / 1000000))
return TIMEDIFF_T_MAX;
else if(diff <= (TIMEDIFF_T_MIN / 1000000))
return TIMEDIFF_T_MIN;
- return diff * 1000000 + newer.tv_usec - older.tv_usec;
+ return diff * 1000000 + newer->tv_usec - older->tv_usec;
+}
+
+timediff_t curlx_timediff_us(struct curltime newer, struct curltime older)
+{
+ return curlx_ptimediff_us(&newer, &older);
}
#if defined(__MINGW32__) && (__MINGW64_VERSION_MAJOR <= 3)
#endif
struct curltime curlx_now(void);
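+/* store the current time (same clock as curlx_now()) in *pnow */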
+void curlx_pnow(struct curltime *pnow);
/*
* Make sure that the first argument (newer) is the more recent time and older
* Returns: the time difference in number of milliseconds.
*/
timediff_t curlx_timediff_ms(struct curltime newer, struct curltime older);
+timediff_t curlx_ptimediff_ms(const struct curltime *newer,
+ const struct curltime *older);
/*
* Make sure that the first argument (newer) is the more recent time and older
* Returns: the time difference in number of microseconds.
*/
timediff_t curlx_timediff_us(struct curltime newer, struct curltime older);
+timediff_t curlx_ptimediff_us(const struct curltime *newer,
+ const struct curltime *older);
CURLcode curlx_gmtime(time_t intime, struct tm *store);
CURLMsg *msg;
struct pollfd fds[4];
int pollrc;
- struct curltime before;
+ struct curltime start;
const unsigned int numfds = populate_fds(fds, ev);
/* get the time stamp to use to figure out how long poll takes */
- before = curlx_now();
+ curlx_pnow(&start);
result = poll_fds(ev, fds, numfds, &pollrc);
if(result)
/* If nothing updated the timeout, we decrease it by the spent time.
* If it was updated, it has the new timeout time stored already.
*/
- timediff_t spent_ms = curlx_timediff_ms(curlx_now(), before);
+ timediff_t spent_ms = curlx_timediff_ms(curlx_now(), start);
if(spent_ms > 0) {
#if DEBUG_EV_POLL
curl_mfprintf(stderr, "poll timeout %ldms not updated, decrease by "
if(dupset(outcurl, data))
goto fail;
- Curl_pgrs_now_set(outcurl); /* start of API call */
outcurl->progress.hide = data->progress.hide;
outcurl->progress.callback = data->progress.callback;
if(Curl_is_in_callback(data))
recursive = TRUE;
- Curl_pgrs_now_set(data); /* start of API call */
recv_paused = Curl_xfer_recv_is_paused(data);
recv_paused_new = (action & CURLPAUSE_RECV);
send_paused = Curl_xfer_send_is_paused(data);
Curl_multi_mark_dirty(data); /* make it run */
/* On changes, tell application to update its timers. */
if(changed) {
- if(Curl_update_timer(data->multi, &data->progress.now) && !result)
+ if(Curl_update_timer(data->multi) && !result)
result = CURLE_ABORTED_BY_CALLBACK;
}
}
conn->bits.ftp_use_control_ssl = TRUE;
}
- Curl_pp_init(pp, &data->progress.now); /* once per transfer */
+ Curl_pp_init(pp, Curl_pgrs_now(data)); /* once per transfer */
/* When we connect, we start in the state where we await the 220
response */
* data has been transferred. This happens when doing through NATs etc that
* abandon old silent connections.
*/
- pp->response = data->progress.now; /* timeout relative now */
+ pp->response = *Curl_pgrs_now(data); /* timeout relative now */
result = getftpresponse(data, &nread, &ftpcode);
if(!nread && (CURLE_OPERATION_TIMEDOUT == result)) {
result = Curl_pp_sendf(data, &ftpc->pp, "%s", cmd);
if(!result) {
- pp->response = data->progress.now; /* timeout relative now */
+ pp->response = *Curl_pgrs_now(data); /* timeout relative now */
result = getftpresponse(data, &nread, &ftpcode);
}
if(result)
#include "multiif.h"
#include "doh.h"
#include "curlx/warnless.h"
+#include "progress.h"
#include "select.h"
#include "strcase.h"
#include "easy_lock.h"
if(dns->timestamp.tv_sec || dns->timestamp.tv_usec) {
/* get age in milliseconds */
- timediff_t age = curlx_timediff_ms(prune->now, dns->timestamp);
+ timediff_t age = curlx_ptimediff_ms(&prune->now, &dns->timestamp);
if(!dns->addr)
age *= 2; /* negative entries age twice as fast */
if(age >= prune->max_age_ms)
do {
/* Remove outdated and unused entries from the hostcache */
timediff_t oldest_ms =
- dnscache_prune(&dnscache->entries, timeout_ms, data->progress.now);
+ dnscache_prune(&dnscache->entries, timeout_ms, *Curl_pgrs_now(data));
if(Curl_hash_count(&dnscache->entries) > MAX_DNS_CACHE_SIZE)
/* prune the ones over half this age */
/* See whether the returned entry is stale. Done before we release lock */
struct dnscache_prune_data user;
- user.now = data->progress.now;
+ user.now = *Curl_pgrs_now(data);
user.max_age_ms = data->set.dns_cache_timeout_ms;
user.oldest_ms = 0;
dns->timestamp.tv_usec = 0; /* an entry that never goes stale */
}
else {
- dns->timestamp = data->progress.now;
+ dns->timestamp = *Curl_pgrs_now(data);
}
dns->hostport = port;
if(hostlen)
the time we spent until now! */
if(prev_alarm) {
/* there was an alarm() set before us, now put it back */
- timediff_t elapsed_secs = curlx_timediff_ms(data->progress.now,
- data->conn->created) / 1000;
+ timediff_t elapsed_secs = curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->conn->created) / 1000;
/* the alarm period is counted in even number of seconds */
unsigned long alarm_set = (unsigned long)(prev_alarm - elapsed_secs);
DEBUGF(infof(data, "cr_exp100_read, start AWAITING_CONTINUE, "
"timeout %dms", data->set.expect_100_timeout));
ctx->state = EXP100_AWAITING_CONTINUE;
- ctx->start = data->progress.now;
+ ctx->start = *Curl_pgrs_now(data);
Curl_expire(data, data->set.expect_100_timeout, EXPIRE_100_TIMEOUT);
*nread = 0;
*eos = FALSE;
*eos = FALSE;
return CURLE_READ_ERROR;
case EXP100_AWAITING_CONTINUE:
- ms = curlx_timediff_ms(data->progress.now, ctx->start);
+ ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &ctx->start);
if(ms < data->set.expect_100_timeout) {
DEBUGF(infof(data, "cr_exp100_read, AWAITING_CONTINUE, not expired"));
*nread = 0;
static int32_t cf_h2_get_desired_local_win(struct Curl_cfilter *cf,
struct Curl_easy *data)
{
- curl_off_t avail =
- Curl_rlimit_avail(&data->progress.dl.rlimit, &data->progress.now);
+ curl_off_t avail = Curl_rlimit_avail(&data->progress.dl.rlimit,
+ Curl_pgrs_now(data));
(void)cf;
if(avail < CURL_OFF_T_MAX) { /* limit in place */
struct Curl_cfilter *cf = userp;
struct cf_h2_ctx *ctx = cf->ctx;
struct h2_stream_ctx *stream;
- struct Curl_easy *data_s, *calling = CF_DATA_CURRENT(cf);
+ struct Curl_easy *data_s;
(void)flags;
DEBUGASSERT(stream_id); /* should never be a zero stream ID here */
stream = H2_STREAM_CTX(ctx, data_s);
if(!stream)
return NGHTTP2_ERR_CALLBACK_FAILURE;
- if(calling)
- Curl_pgrs_now_update(data_s, calling);
h2_xfer_write_resp(cf, data_s, stream, (const char *)mem, len, FALSE);
Curl_sasl_init(&imapc->sasl, data, &saslimap);
curlx_dyn_init(&imapc->dyn, DYN_IMAP_CMD);
- Curl_pp_init(pp, &data->progress.now);
+ Curl_pp_init(pp, Curl_pgrs_now(data));
if(Curl_conn_meta_set(conn, CURL_META_IMAP_CONN, imapc, imap_conn_dtor))
return CURLE_OUT_OF_MEMORY;
result = Curl_xfer_send(data, buf, len, FALSE, &n);
if(result)
return result;
- mq->lastTime = data->progress.now;
+ mq->lastTime = *Curl_pgrs_now(data);
Curl_debug(data, CURLINFO_HEADER_OUT, buf, n);
if(len != n) {
size_t nsend = len - n;
}
/* we received something */
- mq->lastTime = data->progress.now;
+ mq->lastTime = *Curl_pgrs_now(data);
/* if QoS is set, message contains packet id */
result = Curl_client_write(data, CLIENTWRITE_BODY, buffer, nread);
if(!mq)
return CURLE_FAILED_INIT;
- mq->lastTime = data->progress.now;
+ mq->lastTime = *Curl_pgrs_now(data);
mq->pingsent = FALSE;
result = mqtt_connect(data);
if(mqtt->state == MQTT_FIRST &&
!mq->pingsent &&
data->set.upkeep_interval_ms > 0) {
- struct curltime t = data->progress.now;
- timediff_t diff = curlx_timediff_ms(t, mq->lastTime);
+ struct curltime t = *Curl_pgrs_now(data);
+ timediff_t diff = curlx_ptimediff_ms(&t, &mq->lastTime);
if(diff > data->set.upkeep_interval_ms) {
/* 0xC0 is PINGREQ, and 0x00 is remaining length */
Curl_debug(data, CURLINFO_HEADER_IN, (const char *)&mq->firstbyte, 1);
/* we received something */
- mq->lastTime = data->progress.now;
+ mq->lastTime = *Curl_pgrs_now(data);
/* remember the first byte */
mq->npacket = 0;
#define CURL_MULTI_HANDLE 0x000bab1e
+
#ifdef DEBUGBUILD
/* On a debug build, we want to fail hard on multi handles that
* are not NULL, but no longer have the MAGIC touch. This gives
static void move_pending_to_connect(struct Curl_multi *multi,
struct Curl_easy *data);
-static CURLMcode add_next_timeout(struct curltime now,
+static CURLMcode add_next_timeout(const struct curltime *pnow,
struct Curl_multi *multi,
struct Curl_easy *d);
static void multi_timeout(struct Curl_multi *multi,
- struct curltime *pnow,
struct curltime *expire_time,
long *timeout_ms);
static void process_pending_handles(struct Curl_multi *multi);
static void multi_xfer_tbl_dump(struct Curl_multi *multi);
#endif
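+/* refresh the multi handle's shared "now" timestamp and return a pointer to it */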
+static const struct curltime *multi_now(struct Curl_multi *multi)
+{
+ curlx_pnow(&multi->now);
+ return &multi->now;
+}
+
/* function pointer called once when switching TO a state */
typedef void (*init_multistate_func)(struct Curl_easy *data);
#endif
/* really switching state */
- Curl_pgrs_now_set(data);
data->mstate = state;
switch(state) {
case MSTATE_DONE:
Curl_uint32_bset_clear(&multi->msgsent);
}
- Curl_pgrs_now_set(data); /* start of API call */
if(data->multi_easy) {
/* if this easy handle was previously used for curl_easy_perform(), there
is a private multi handle here that we can kill */
/* Necessary in event based processing, where dirty handles trigger
* a timeout callback invocation. */
- mresult = Curl_update_timer(multi, &data->progress.now);
+ mresult = Curl_update_timer(multi);
if(mresult) {
data->multi = NULL; /* not anymore */
Curl_uint32_tbl_remove(&multi->xfers, data->mid);
if(multi->in_callback)
return CURLM_RECURSIVE_API_CALL;
- Curl_pgrs_now_set(data); /* start of API call */
premature = (data->mstate < MSTATE_COMPLETED);
/* If the 'state' is not INIT or COMPLETED, we might need to do something
process_pending_handles(multi);
if(removed_timer) {
- mresult = Curl_update_timer(multi, &data->progress.now);
+ mresult = Curl_update_timer(multi);
if(mresult)
return mresult;
}
CURLcode result = CURLE_OK;
if(ps->n) {
- bool send_blocked =
- (Curl_rlimit_avail(&data->progress.dl.rlimit, &data->progress.now) <= 0);
- bool recv_blocked =
- (Curl_rlimit_avail(&data->progress.ul.rlimit, &data->progress.now) <= 0);
+ const struct curltime *pnow = Curl_pgrs_now(data);
+ bool send_blocked, recv_blocked;
+ recv_blocked = (Curl_rlimit_avail(&data->progress.dl.rlimit, pnow) <= 0);
+ send_blocked = (Curl_rlimit_avail(&data->progress.ul.rlimit, pnow) <= 0);
if(send_blocked || recv_blocked) {
int i;
for(i = 0; i <= SECONDARYSOCKET; ++i) {
struct Curl_multi *multi = m;
struct easy_pollset ps;
unsigned int i, mid;
- struct curltime now = curlx_now(); /* start of API call */
(void)exc_fd_set;
if(!GOOD_MULTI_HANDLE(multi))
continue;
}
- Curl_pgrs_now_at_least(data, &now);
Curl_multi_pollset(data, &ps);
for(i = 0; i < ps.n; i++) {
if(!FDSET_SOCK(ps.sockets[i]))
struct Curl_multi *multi = m;
struct easy_pollset ps;
unsigned int need = 0, mid;
- struct curltime now = curlx_now(); /* start of API call */
if(!ufds && (size || !fd_count))
return CURLM_BAD_FUNCTION_ARGUMENT;
Curl_uint32_bset_remove(&multi->dirty, mid);
continue;
}
- Curl_pgrs_now_at_least(data, &now);
Curl_multi_pollset(data, &ps);
need += Curl_waitfds_add_ps(&cwfds, &ps);
} while(Curl_uint32_bset_next(&multi->process, mid, &mid));
unsigned int curl_nfds = 0; /* how many pfds are for curl transfers */
struct Curl_easy *data = NULL;
CURLMcode mresult = CURLM_OK;
- struct curltime now = curlx_now(); /* start of API call */
uint32_t mid;
#ifdef USE_WINSOCK
Curl_uint32_bset_remove(&multi->dirty, mid);
continue;
}
- Curl_pgrs_now_at_least(data, &now);
Curl_multi_pollset(data, &ps);
if(Curl_pollfds_add_ps(&cpfds, &ps)) {
mresult = CURLM_OUT_OF_MEMORY;
goto out;
}
- now = data->progress.now;
} while(Curl_uint32_bset_next(&multi->process, mid, &mid));
}
* poll. Collecting the sockets may install new timers by protocols
* and connection filters.
* Use the shorter one of the internal and the caller requested timeout. */
- multi_timeout(multi, &now, &expire_time, &timeout_internal);
+ multi_timeout(multi, &expire_time, &timeout_internal);
if((timeout_internal >= 0) && (timeout_internal < (long)timeout_ms))
timeout_ms = (int)timeout_internal;
long sleep_ms = 0;
/* Avoid busy-looping when there is nothing particular to wait for */
- multi_timeout(multi, &now, &expire_time, &sleep_ms);
+ multi_timeout(multi, &expire_time, &sleep_ms);
if(sleep_ms) {
if(sleep_ms > timeout_ms)
sleep_ms = timeout_ms;
CURLcode *result)
{
bool connect_timeout = data->mstate < MSTATE_DO;
- timediff_t timeout_ms =
- Curl_timeleft_ms(data, connect_timeout);
+ timediff_t timeout_ms;
+
+ timeout_ms = Curl_timeleft_ms(data, connect_timeout);
if(timeout_ms < 0) {
/* Handle timed out */
struct curltime since;
since = data->progress.t_startop;
if(data->mstate == MSTATE_RESOLVING)
failf(data, "Resolving timed out after %" FMT_TIMEDIFF_T
- " milliseconds", curlx_timediff_ms(data->progress.now, since));
+ " milliseconds",
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &since));
else if(data->mstate == MSTATE_CONNECTING)
failf(data, "Connection timed out after %" FMT_TIMEDIFF_T
- " milliseconds", curlx_timediff_ms(data->progress.now, since));
+ " milliseconds",
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &since));
else {
struct SingleRequest *k = &data->req;
if(k->size != -1) {
failf(data, "Operation timed out after %" FMT_TIMEDIFF_T
" milliseconds with %" FMT_OFF_T " out of %"
FMT_OFF_T " bytes received",
- curlx_timediff_ms(data->progress.now, since),
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &since),
k->bytecount, k->size);
}
else {
failf(data, "Operation timed out after %" FMT_TIMEDIFF_T
" milliseconds with %" FMT_OFF_T " bytes received",
- curlx_timediff_ms(data->progress.now, since), k->bytecount);
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &since),
+ k->bytecount);
}
}
*result = CURLE_OPERATION_TIMEDOUT;
static CURLcode mspeed_check(struct Curl_easy *data)
{
+ const struct curltime *pnow = Curl_pgrs_now(data);
timediff_t recv_wait_ms = 0;
timediff_t send_wait_ms = 0;
/* check if our send/recv limits require idle waits */
- send_wait_ms =
- Curl_rlimit_wait_ms(&data->progress.ul.rlimit, &data->progress.now);
- recv_wait_ms =
- Curl_rlimit_wait_ms(&data->progress.dl.rlimit, &data->progress.now);
+ send_wait_ms = Curl_rlimit_wait_ms(&data->progress.ul.rlimit, pnow);
+ recv_wait_ms = Curl_rlimit_wait_ms(&data->progress.dl.rlimit, pnow);
if(send_wait_ms || recv_wait_ms) {
if(data->mstate != MSTATE_RATELIMITING) {
}
static CURLMcode multi_perform(struct Curl_multi *multi,
- struct curltime *pnow,
int *running_handles)
{
CURLMcode returncode = CURLM_OK;
- struct Curl_tree *t = NULL;
+ struct curltime start = *multi_now(multi);
uint32_t mid;
SIGPIPE_VARIABLE(pipe_st);
continue;
}
sigpipe_apply(data, &pipe_st);
- Curl_pgrs_now_at_least(data, pnow);
mresult = multi_runsingle(multi, data);
- *pnow = data->progress.now; /* in case transfer updated */
if(mresult)
returncode = mresult;
} while(Curl_uint32_bset_next(&multi->process, mid, &mid));
* then and then we risk this loop to remove timers that actually have not
* been handled!
*/
- do {
- multi->timetree = Curl_splaygetbest(*pnow, multi->timetree, &t);
- if(t) {
- /* the removed may have another timeout in queue */
- struct Curl_easy *data = Curl_splayget(t);
- (void)add_next_timeout(*pnow, multi, data);
- if(data->mstate == MSTATE_PENDING) {
- bool stream_unused;
- CURLcode result_unused;
- if(multi_handle_timeout(data, &stream_unused, &result_unused)) {
- infof(data, "PENDING handle timeout");
- move_pending_to_connect(multi, data);
+ if(multi->timetree) {
+ struct Curl_tree *t = NULL;
+ do {
+ multi->timetree = Curl_splaygetbest(&start, multi->timetree, &t);
+ if(t) {
+ /* the removed may have another timeout in queue */
+ struct Curl_easy *data = Curl_splayget(t);
+ (void)add_next_timeout(&start, multi, data);
+ if(data->mstate == MSTATE_PENDING) {
+ bool stream_unused;
+ CURLcode result_unused;
+ if(multi_handle_timeout(data, &stream_unused, &result_unused)) {
+ infof(data, "PENDING handle timeout");
+ move_pending_to_connect(multi, data);
+ }
}
}
- }
- } while(t);
+ } while(t);
+ }
if(running_handles) {
unsigned int running = Curl_multi_xfers_running(multi);
}
if(CURLM_OK >= returncode)
- returncode = Curl_update_timer(multi, pnow);
+ returncode = Curl_update_timer(multi);
return returncode;
}
CURLMcode curl_multi_perform(CURLM *m, int *running_handles)
{
- struct curltime now = curlx_now(); /* start of API call */
struct Curl_multi *multi = m;
if(!GOOD_MULTI_HANDLE(multi))
return CURLM_BAD_HANDLE;
- return multi_perform(multi, &now, running_handles);
+ return multi_perform(multi, running_handles);
}
CURLMcode curl_multi_cleanup(CURLM *m)
* The splay tree only has each sessionhandle as a single node and the nearest
* timeout is used to sort it on.
*/
-static CURLMcode add_next_timeout(struct curltime now,
+static CURLMcode add_next_timeout(const struct curltime *pnow,
struct Curl_multi *multi,
struct Curl_easy *d)
{
for(e = Curl_llist_head(list); e;) {
struct Curl_llist_node *n = Curl_node_next(e);
struct time_node *node = Curl_node_elem(e);
- timediff_t diff = curlx_timediff_us(node->time, now);
+ timediff_t diff = curlx_ptimediff_us(&node->time, pnow);
if(diff <= 0)
/* remove outdated entry */
Curl_node_remove(e);
/* Insert this node again into the splay. Keep the timer in the list in
case we need to recompute future timers. */
- multi->timetree = Curl_splayinsert(*tv, multi->timetree,
+ multi->timetree = Curl_splayinsert(tv, multi->timetree,
&d->state.timenode);
}
return CURLM_OK;
struct multi_run_ctx {
struct Curl_multi *multi;
- struct curltime now;
size_t run_xfers;
SIGPIPE_MEMBER(pipe_st);
};
-static void multi_mark_expired_as_dirty(struct multi_run_ctx *mrc)
+static void multi_mark_expired_as_dirty(struct multi_run_ctx *mrc,
+ const struct curltime *ts)
{
struct Curl_multi *multi = mrc->multi;
struct Curl_easy *data = NULL;
while(1) {
/* Check if there is one (more) expired timer to deal with! This function
extracts a matching node if there is one */
- multi->timetree = Curl_splaygetbest(mrc->now, multi->timetree, &t);
+ multi->timetree = Curl_splaygetbest(ts, multi->timetree, &t);
if(!t)
return;
}
}
#endif
- (void)add_next_timeout(mrc->now, multi, data);
+ (void)add_next_timeout(ts, multi, data);
Curl_multi_mark_dirty(data);
}
}
mrc->run_xfers++;
sigpipe_apply(data, &mrc->pipe_st);
/* runsingle() clears the dirty mid */
- Curl_pgrs_now_at_least(data, &mrc->now);
mresult = multi_runsingle(multi, data);
- mrc->now = data->progress.now; /* in case transfer updated */
if(CURLM_OK >= mresult) {
/* reassess event handling of data */
(void)ev_bitmask;
memset(&mrc, 0, sizeof(mrc));
mrc.multi = multi;
- mrc.now = curlx_now(); /* start of API call */
sigpipe_init(&mrc.pipe_st);
if(checkall) {
/* *perform() deals with running_handles on its own */
- mresult = multi_perform(multi, &mrc.now, running_handles);
+ mresult = multi_perform(multi, running_handles);
if(mresult != CURLM_BAD_HANDLE) {
/* Reassess event status of all active transfers */
- mresult = Curl_multi_ev_assess_xfer_bset(multi, &multi->process,
- &mrc.now);
+ mresult = Curl_multi_ev_assess_xfer_bset(multi, &multi->process);
}
goto out;
}
memset(&multi->last_expire_ts, 0, sizeof(multi->last_expire_ts));
}
- multi_mark_expired_as_dirty(&mrc);
+ multi_mark_expired_as_dirty(&mrc, multi_now(multi));
mresult = multi_run_dirty(&mrc);
if(mresult)
goto out;
* to set a 0 timeout and call us again, we run them here.
* Do that only once or it might be unfair to transfers on other
* sockets. */
- multi_mark_expired_as_dirty(&mrc);
+ multi_mark_expired_as_dirty(&mrc, &multi->now);
mresult = multi_run_dirty(&mrc);
}
}
if(CURLM_OK >= mresult)
- mresult = Curl_update_timer(multi, &mrc.now);
+ mresult = Curl_update_timer(multi);
return mresult;
}
}
static void multi_timeout(struct Curl_multi *multi,
- struct curltime *pnow,
struct curltime *expire_time,
long *timeout_ms)
{
}
if(multi_has_dirties(multi)) {
- *expire_time = *pnow;
+ *expire_time = *multi_now(multi);
*timeout_ms = 0;
return;
}
else if(multi->timetree) {
+ const struct curltime *pnow = multi_now(multi);
/* splay the lowest to the bottom */
- multi->timetree = Curl_splay(tv_zero, multi->timetree);
+ multi->timetree = Curl_splay(&tv_zero, multi->timetree);
/* this will not return NULL from a non-empty tree, but some compilers
* are not convinced of that. Analyzers are hard. */
*expire_time = multi->timetree ? multi->timetree->key : tv_zero;
/* 'multi->timetree' will be non-NULL here but the compilers sometimes
yell at us if we assume so */
if(multi->timetree &&
- curlx_timediff_us(multi->timetree->key, *pnow) > 0) {
+ curlx_ptimediff_us(&multi->timetree->key, pnow) > 0) {
/* some time left before expiration */
- timediff_t diff_ms = curlx_timediff_ceil_ms(multi->timetree->key, *pnow);
+ timediff_t diff_ms =
+ curlx_timediff_ceil_ms(multi->timetree->key, *pnow);
#ifndef CURL_DISABLE_VERBOSE_STRINGS
data = Curl_splayget(multi->timetree);
#endif
{
struct curltime expire_time;
struct Curl_multi *multi = m;
- struct curltime now = curlx_now(); /* start of API call */
/* First, make some basic checks that the CURLM handle is a good handle */
if(!GOOD_MULTI_HANDLE(multi))
if(multi->in_callback)
return CURLM_RECURSIVE_API_CALL;
- multi_timeout(multi, &now, &expire_time, timeout_ms);
+ multi_timeout(multi, &expire_time, timeout_ms);
return CURLM_OK;
}
* Tell the application it should update its timers, if it subscribes to the
* update timer callback.
*/
-CURLMcode Curl_update_timer(struct Curl_multi *multi,
- struct curltime *pnow)
+CURLMcode Curl_update_timer(struct Curl_multi *multi)
{
struct curltime expire_ts;
long timeout_ms;
if(!multi->timer_cb || multi->dead)
return CURLM_OK;
- multi_timeout(multi, pnow, &expire_ts, &timeout_ms);
+ multi_timeout(multi, &expire_ts, &timeout_ms);
if(timeout_ms < 0 && multi->last_timeout_ms < 0) {
/* nothing to do */
CURL_TRC_M(multi->admin, "[TIMER] set %ldms, none before", timeout_ms);
set_value = TRUE;
}
- else if(curlx_timediff_us(multi->last_expire_ts, expire_ts)) {
+ else if(curlx_ptimediff_us(&multi->last_expire_ts, &expire_ts)) {
/* We had a timeout before and have one now, the absolute timestamp
* differs. The relative timeout_ms may be the same, but the starting
* point differs. Let the application restart its timer. */
*/
static CURLMcode multi_addtimeout(struct Curl_easy *data,
struct curltime *stamp,
- expire_id eid,
- const struct curltime *nowp)
+ expire_id eid)
{
struct Curl_llist_node *e;
struct time_node *node;
size_t n;
struct Curl_llist *timeoutlist = &data->state.timeoutlist;
- (void)nowp;
node = &data->state.expires[eid];
/* copy the timestamp and id */
/* find the correct spot in the list */
for(e = Curl_llist_head(timeoutlist); e; e = Curl_node_next(e)) {
struct time_node *check = Curl_node_elem(e);
- timediff_t diff = curlx_timediff_ms(check->time, node->time);
+ timediff_t diff = curlx_ptimediff_ms(&check->time, &node->time);
if(diff > 0)
break;
prev = e;
Curl_llist_insert_next(timeoutlist, prev, node, &node->list);
CURL_TRC_TIMER(data, eid, "set for %" FMT_TIMEDIFF_T "ns",
- curlx_timediff_us(node->time, *nowp));
+ curlx_ptimediff_us(&node->time, Curl_pgrs_now(data)));
return CURLM_OK;
}
DEBUGASSERT(id < EXPIRE_LAST);
- set = data->progress.now;
+ set = *Curl_pgrs_now(data);
set.tv_sec += (time_t)(milli / 1000); /* may be a 64 to 32-bit conversion */
set.tv_usec += (int)(milli % 1000) * 1000;
/* Add it to the timer list. It must stay in the list until it has expired
in case we need to recompute the minimum timer later. */
- multi_addtimeout(data, &set, id, &data->progress.now);
+ multi_addtimeout(data, &set, id);
if(curr_expire->tv_sec || curr_expire->tv_usec) {
/* This means that the struct is added as a node in the splay tree.
Compare if the new time is earlier, and only remove-old/add-new if it
is. */
- timediff_t diff = curlx_timediff_ms(set, *curr_expire);
+ timediff_t diff = curlx_ptimediff_ms(&set, curr_expire);
int rc;
if(diff > 0) {
value since it is our local minimum. */
*curr_expire = set;
Curl_splayset(&data->state.timenode, data);
- multi->timetree = Curl_splayinsert(*curr_expire, multi->timetree,
+ multi->timetree = Curl_splayinsert(curr_expire, multi->timetree,
&data->state.timenode);
}
}
CURLMcode Curl_multi_ev_assess_xfer_bset(struct Curl_multi *multi,
- struct uint32_bset *set,
- struct curltime *pnow)
+ struct uint32_bset *set)
{
uint32_t mid;
CURLMcode mresult = CURLM_OK;
do {
struct Curl_easy *data = Curl_multi_get_easy(multi, mid);
if(data) {
- Curl_pgrs_now_at_least(data, pnow);
mresult = Curl_multi_ev_assess_xfer(multi, data);
}
} while(!mresult && Curl_uint32_bset_next(set, mid, &mid));
struct Curl_easy *data);
/* Assess all easy handles on the list */
CURLMcode Curl_multi_ev_assess_xfer_bset(struct Curl_multi *multi,
- struct uint32_bset *set,
- struct curltime *pnow);
+ struct uint32_bset *set);
/* Assess the connection by getting its current pollset */
CURLMcode Curl_multi_ev_assess_conn(struct Curl_multi *multi,
struct Curl_easy *data,
struct PslCache psl;
#endif
+ /* current time for transfers running in this multi handle */
+ struct curltime now;
/* timetree points to the splay-tree of time nodes to figure out expire
times of all currently set timers */
struct Curl_tree *timetree;
unsigned int maxconnects; /* if >0, a fixed limit of the maximum number of
entries we are allowed to grow the connection
cache to */
+#ifdef DEBUGBUILD
+ unsigned int now_access_count;
+#endif
#define IPV6_UNKNOWN 0
#define IPV6_DEAD 1
#define IPV6_WORKS 2
timediff_t milli, expire_id id);
bool Curl_expire_clear(struct Curl_easy *data);
void Curl_expire_done(struct Curl_easy *data, expire_id id);
-CURLMcode Curl_update_timer(struct Curl_multi *multi,
- struct curltime *pnow) WARN_UNUSED_RESULT;
+CURLMcode Curl_update_timer(struct Curl_multi *multi) WARN_UNUSED_RESULT;
void Curl_attach_connection(struct Curl_easy *data,
struct connectdata *conn);
void Curl_detach_connection(struct Curl_easy *data);
/* Clear transfer from the dirty set. */
void Curl_multi_clear_dirty(struct Curl_easy *data);
+void Curl_multi_set_now(struct Curl_multi *multi);
+
#endif /* HEADER_CURL_MULTIIF_H */
supposed to govern the response for any given server response, not for
the time from connect to the given server response. */
- /* pingpong can spend some time processing, always update
- * the transfer timestamp before checking timeouts. */
- Curl_pgrs_now_set(data);
-
/* Without a requested timeout, we only wait 'response_time' seconds for the
full response to arrive before we bail out */
timeout_ms = response_time -
- curlx_timediff_ms(data->progress.now, pp->response);
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &pp->response);
if(data->set.timeout && !disconnecting) {
/* if timeout is requested, find out how much overall remains */
}
/* initialize stuff to prepare for reading a fresh new response */
-void Curl_pp_init(struct pingpong *pp, struct curltime *pnow)
+void Curl_pp_init(struct pingpong *pp, const struct curltime *pnow)
{
DEBUGASSERT(!pp->initialised);
pp->nread_resp = 0;
else {
pp->sendthis = NULL;
pp->sendleft = pp->sendsize = 0;
- pp->response = data->progress.now;
+ pp->response = *Curl_pgrs_now(data);
}
return CURLE_OK;
else {
pp->sendthis = NULL;
pp->sendleft = pp->sendsize = 0;
- pp->response = data->progress.now;
+ pp->response = *Curl_pgrs_now(data);
}
return CURLE_OK;
}
bool block, bool disconnecting);
/* initialize stuff to prepare for reading a fresh new response */
-void Curl_pp_init(struct pingpong *pp, struct curltime *pnow);
+void Curl_pp_init(struct pingpong *pp, const struct curltime *pnow);
/* Returns timeout in ms. 0 or negative number means the timeout has already
triggered */
Curl_sasl_init(&pop3c->sasl, data, &saslpop3);
/* Initialise the pingpong layer */
- Curl_pp_init(pp, &data->progress.now);
+ Curl_pp_init(pp, Curl_pgrs_now(data));
/* Parse the URL options */
result = pop3_parse_url_options(conn);
* @unittest: 1606
*/
UNITTEST CURLcode pgrs_speedcheck(struct Curl_easy *data,
- struct curltime *pnow)
+ const struct curltime *pnow)
{
if(!data->set.low_speed_time || !data->set.low_speed_limit ||
Curl_xfer_recv_is_paused(data) || Curl_xfer_send_is_paused(data))
data->state.keeps_speed = *pnow;
else {
/* how long has it been under the limit */
- timediff_t howlong = curlx_timediff_ms(*pnow, data->state.keeps_speed);
+ timediff_t howlong =
+ curlx_ptimediff_ms(pnow, &data->state.keeps_speed);
if(howlong >= data->set.low_speed_time * 1000) {
/* too long */
return CURLE_OK;
}
-void Curl_pgrs_now_set(struct Curl_easy *data)
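+/* Return the transfer's current time. Refreshes and uses the multi handle's
+ * shared timestamp when the transfer is attached to a multi, else the easy
+ * handle's own fallback timestamp. */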
+const struct curltime *Curl_pgrs_now(struct Curl_easy *data)
{
- data->progress.now = curlx_now();
-}
-
-void Curl_pgrs_now_at_least(struct Curl_easy *data, struct curltime *pts)
-{
- if((pts->tv_sec > data->progress.now.tv_sec) ||
- ((pts->tv_sec == data->progress.now.tv_sec) &&
- (pts->tv_usec > data->progress.now.tv_usec))) {
- data->progress.now = *pts;
- }
-}
-
-void Curl_pgrs_now_update(struct Curl_easy *data, struct Curl_easy *other)
-{
- Curl_pgrs_now_at_least(data, &other->progress.now);
+ struct curltime *pnow = data->multi ?
+ &data->multi->now : &data->progress.now;
+ curlx_pnow(pnow);
+ return pnow;
}
/*
case TIMER_POSTQUEUE:
/* Queue time is accumulative from all involved redirects */
data->progress.t_postqueue +=
- curlx_timediff_us(timestamp, data->progress.t_startqueue);
+ curlx_ptimediff_us(×tamp, &data->progress.t_startqueue);
break;
case TIMER_STARTACCEPT:
data->progress.t_acceptdata = timestamp;
delta = &data->progress.t_posttransfer;
break;
case TIMER_REDIRECT:
- data->progress.t_redirect = curlx_timediff_us(timestamp,
- data->progress.start);
+ data->progress.t_redirect = curlx_ptimediff_us(×tamp,
+ &data->progress.start);
data->progress.t_startqueue = timestamp;
break;
}
if(delta) {
- timediff_t us = curlx_timediff_us(timestamp, data->progress.t_startsingle);
+ timediff_t us = curlx_ptimediff_us(×tamp,
+ &data->progress.t_startsingle);
if(us < 1)
us = 1; /* make sure at least one microsecond passed */
*delta += us;
*/
void Curl_pgrsTime(struct Curl_easy *data, timerid timer)
{
- Curl_pgrs_now_set(data); /* update on real progress */
- Curl_pgrsTimeWas(data, timer, data->progress.now);
+ Curl_pgrsTimeWas(data, timer, *Curl_pgrs_now(data));
}
void Curl_pgrsStartNow(struct Curl_easy *data)
{
struct Progress *p = &data->progress;
+
p->speeder_c = 0; /* reset the progress meter display */
- p->start = data->progress.now;
+ p->start = *Curl_pgrs_now(data);
p->is_t_startransfer_set = FALSE;
p->dl.cur_size = 0;
p->ul.cur_size = 0;
{
if(delta) {
data->progress.dl.cur_size += delta;
- Curl_rlimit_drain(&data->progress.dl.rlimit, delta, &data->progress.now);
+ Curl_rlimit_drain(&data->progress.dl.rlimit, delta, Curl_pgrs_now(data));
}
}
{
if(delta) {
data->progress.ul.cur_size += delta;
- Curl_rlimit_drain(&data->progress.ul.rlimit, delta, &data->progress.now);
+ Curl_rlimit_drain(&data->progress.ul.rlimit, delta, Curl_pgrs_now(data));
}
}
}
/* returns TRUE if it is time to show the progress meter */
-static bool progress_calc(struct Curl_easy *data, struct curltime *pnow)
+static bool progress_calc(struct Curl_easy *data,
+ const struct curltime *pnow)
{
struct Progress * const p = &data->progress;
int i_next, i_oldest, i_latest;
curl_off_t amount;
/* The time spent so far (from the start) in microseconds */
- p->timespent = curlx_timediff_us(*pnow, p->start);
+ p->timespent = curlx_ptimediff_us(pnow, &p->start);
p->dl.speed = trspeed(p->dl.cur_size, p->timespent);
p->ul.speed = trspeed(p->ul.cur_size, p->timespent);
/* Make a new record only when some time has passed.
* Too frequent calls otherwise ruin the history. */
- if(curlx_timediff_ms(*pnow, p->speed_time[i_latest]) >= 1000) {
+ if(curlx_ptimediff_ms(pnow, &p->speed_time[i_latest]) >= 1000) {
p->speeder_c++;
i_latest = i_next;
p->speed_amount[i_latest] = p->dl.cur_size + p->ul.cur_size;
/* How much we transferred between oldest and current records */
amount = p->speed_amount[i_latest] - p->speed_amount[i_oldest];
/* How long this took */
- duration_ms = curlx_timediff_ms(p->speed_time[i_latest],
- p->speed_time[i_oldest]);
+ duration_ms = curlx_ptimediff_ms(&p->speed_time[i_latest],
+ &p->speed_time[i_oldest]);
if(duration_ms <= 0)
duration_ms = 1;
return CURLE_OK;
}
-static CURLcode pgrs_update(struct Curl_easy *data, struct curltime *pnow)
+static CURLcode pgrs_update(struct Curl_easy *data,
+ const struct curltime *pnow)
{
bool showprogress = progress_calc(data, pnow);
return pgrsupdate(data, showprogress);
CURLcode Curl_pgrsUpdate(struct Curl_easy *data)
{
- return pgrs_update(data, &data->progress.now);
+ return pgrs_update(data, Curl_pgrs_now(data));
}
CURLcode Curl_pgrsCheck(struct Curl_easy *data)
{
CURLcode result;
- result = pgrs_update(data, &data->progress.now);
+ result = pgrs_update(data, Curl_pgrs_now(data));
if(!result && !data->req.done)
- result = pgrs_speedcheck(data, &data->progress.now);
+ result = pgrs_speedcheck(data, Curl_pgrs_now(data));
return result;
}
*/
void Curl_pgrsUpdate_nometer(struct Curl_easy *data)
{
- (void)progress_calc(data, &data->progress.now);
+ (void)progress_calc(data, Curl_pgrs_now(data));
}
TIMER_LAST /* must be last */
} timerid;
-#define CURL_PGRS_NOW_MONOTONIC
-
-/* Set current time in data->progress.now */
-void Curl_pgrs_now_set(struct Curl_easy *data);
-/* Advance `now` timestamp at least to given timestamp.
- * No effect it data's `now` is already later than `pts`. */
-void Curl_pgrs_now_at_least(struct Curl_easy *data, struct curltime *pts);
-/* `data` progressing continues after `other` processing. Advance `data`s
- * now timestamp to at least `other's` timestamp. */
-void Curl_pgrs_now_update(struct Curl_easy *data, struct Curl_easy *other);
+/* Get the current timestamp of the transfer */
+const struct curltime *Curl_pgrs_now(struct Curl_easy *data);
int Curl_pgrsDone(struct Curl_easy *data);
void Curl_pgrsStartNow(struct Curl_easy *data);
#ifdef UNITTESTS
UNITTEST CURLcode pgrs_speedcheck(struct Curl_easy *data,
- struct curltime *pnow);
+ const struct curltime *pnow);
#endif
#endif /* HEADER_CURL_PROGRESS_H */
#ifdef USE_LIBPSL
#include "psl.h"
+#include "progress.h"
#include "curl_share.h"
void Curl_psl_destroy(struct PslCache *pslcache)
}
}
-static time_t now_seconds(void)
-{
- struct curltime now = curlx_now();
-
- return now.tv_sec;
-}
-
const psl_ctx_t *Curl_psl_use(struct Curl_easy *easy)
{
struct PslCache *pslcache = easy->psl;
const psl_ctx_t *psl;
- time_t now;
+ time_t now_sec;
if(!pslcache)
return NULL;
Curl_share_lock(easy, CURL_LOCK_DATA_PSL, CURL_LOCK_ACCESS_SHARED);
- now = now_seconds();
- if(!pslcache->psl || pslcache->expires <= now) {
+ now_sec = Curl_pgrs_now(easy)->tv_sec;
+ if(!pslcache->psl || pslcache->expires <= now_sec) {
/* Let a chance to other threads to do the job: avoids deadlock. */
Curl_share_unlock(easy, CURL_LOCK_DATA_PSL);
Curl_share_lock(easy, CURL_LOCK_DATA_PSL, CURL_LOCK_ACCESS_SINGLE);
/* Recheck in case another thread did the job. */
- now = now_seconds();
- if(!pslcache->psl || pslcache->expires <= now) {
+ if(pslcache->expires <= now_sec) {
+ now_sec = Curl_pgrs_now(easy)->tv_sec;
+ }
+ if(!pslcache->psl || pslcache->expires <= now_sec) {
bool dynamic = FALSE;
time_t expires = TIME_T_MAX;
psl = psl_latest(NULL);
dynamic = psl != NULL;
/* Take care of possible time computation overflow. */
- expires = now < TIME_T_MAX - PSL_TTL ? now + PSL_TTL : TIME_T_MAX;
+ expires = (now_sec < TIME_T_MAX - PSL_TTL) ?
+ (now_sec + PSL_TTL) : TIME_T_MAX;
/* Only get the built-in PSL if we do not already have the "latest". */
if(!psl && !pslcache->dynamic)
static bool seeded = FALSE;
unsigned int rnd;
if(!seeded) {
- struct curltime now = curlx_now();
+ struct curltime now;
+ curlx_pnow(&now);
randseed += (unsigned int)now.tv_usec + (unsigned int)now.tv_sec;
randseed = randseed * 1103515245 + 12345;
randseed = randseed * 1103515245 + 12345;
#include "curl_setup.h"
#include "curlx/timeval.h"
+#include "progress.h"
#include "ratelimit.h"
void Curl_rlimit_init(struct Curl_rlimit *r,
curl_off_t rate_per_s,
curl_off_t burst_per_s,
- struct curltime *pts)
+ const struct curltime *pts)
{
curl_off_t rate_steps;
r->blocked = FALSE;
}
-void Curl_rlimit_start(struct Curl_rlimit *r, struct curltime *pts)
+void Curl_rlimit_start(struct Curl_rlimit *r, const struct curltime *pts)
{
r->tokens = r->rate_per_step;
r->spare_us = 0;
}
static void ratelimit_update(struct Curl_rlimit *r,
- struct curltime *pts)
+ const struct curltime *pts)
{
timediff_t elapsed_us, elapsed_steps;
curl_off_t token_gain;
if((r->ts.tv_sec == pts->tv_sec) && (r->ts.tv_usec == pts->tv_usec))
return;
- elapsed_us = curlx_timediff_us(*pts, r->ts);
+ elapsed_us = curlx_ptimediff_us(pts, &r->ts);
if(elapsed_us < 0) { /* not going back in time */
DEBUGASSERT(0);
return;
}
curl_off_t Curl_rlimit_avail(struct Curl_rlimit *r,
- struct curltime *pts)
+ const struct curltime *pts)
{
if(r->blocked)
return 0;
void Curl_rlimit_drain(struct Curl_rlimit *r,
size_t tokens,
- struct curltime *pts)
+ const struct curltime *pts)
{
if(r->blocked || !r->rate_per_step)
return;
}
timediff_t Curl_rlimit_wait_ms(struct Curl_rlimit *r,
- struct curltime *pts)
+ const struct curltime *pts)
{
timediff_t wait_us, elapsed_us;
wait_us = (1 + (-r->tokens / r->rate_per_step)) * r->step_us;
wait_us -= r->spare_us;
- elapsed_us = curlx_timediff_us(*pts, r->ts);
+ elapsed_us = curlx_ptimediff_us(pts, &r->ts);
if(elapsed_us >= wait_us)
return 0;
wait_us -= elapsed_us;
void Curl_rlimit_block(struct Curl_rlimit *r,
bool activate,
- struct curltime *pts)
+ const struct curltime *pts)
{
if(!activate == !r->blocked)
return;
#include "curlx/timeval.h"
+struct Curl_easy;
+
/* This is a rate limiter that provides "tokens" to be consumed
* per second with a "burst" rate limitation. Example:
* A rate limit of 1 megabyte per second with a burst rate of 1.5MB.
void Curl_rlimit_init(struct Curl_rlimit *r,
curl_off_t rate_per_s,
curl_off_t burst_per_s,
- struct curltime *pts);
+ const struct curltime *pts);
/* Start ratelimiting with the given timestamp. Resets available tokens. */
-void Curl_rlimit_start(struct Curl_rlimit *r, struct curltime *pts);
+void Curl_rlimit_start(struct Curl_rlimit *r, const struct curltime *pts);
/* How many milliseconds to wait until token are available again. */
timediff_t Curl_rlimit_wait_ms(struct Curl_rlimit *r,
- struct curltime *pts);
+ const struct curltime *pts);
/* Return if rate limiting of tokens is active */
bool Curl_rlimit_active(struct Curl_rlimit *r);
/* Return how many tokens are available to spend, may be negative */
curl_off_t Curl_rlimit_avail(struct Curl_rlimit *r,
- struct curltime *pts);
+ const struct curltime *pts);
/* Drain tokens from the ratelimit, return how many are now available. */
void Curl_rlimit_drain(struct Curl_rlimit *r,
size_t tokens,
- struct curltime *pts);
+ const struct curltime *pts);
/* Block/unblock ratelimiting. A blocked ratelimit has 0 tokens available. */
void Curl_rlimit_block(struct Curl_rlimit *r,
bool activate,
- struct curltime *pts);
+ const struct curltime *pts);
#endif /* HEADER_Curl_rlimit_H */
CURLcode Curl_req_start(struct SingleRequest *req,
struct Curl_easy *data)
{
- req->start = data->progress.now;
+ req->start = *Curl_pgrs_now(data);
return Curl_req_soft_reset(req, data);
}
if(!ctx->started_response &&
!(type & (CLIENTWRITE_INFO | CLIENTWRITE_CONNECT))) {
Curl_pgrsTime(data, TIMER_STARTTRANSFER);
- Curl_rlimit_start(&data->progress.dl.rlimit, &data->progress.now);
+ Curl_rlimit_start(&data->progress.dl.rlimit, Curl_pgrs_now(data));
ctx->started_response = TRUE;
}
DEBUGASSERT(data->req.reader_stack);
}
if(!data->req.reader_started) {
- Curl_rlimit_start(&data->progress.ul.rlimit, &data->progress.now);
+ Curl_rlimit_start(&data->progress.ul.rlimit, Curl_pgrs_now(data));
data->req.reader_started = TRUE;
}
if(Curl_rlimit_active(&data->progress.ul.rlimit)) {
- curl_off_t ul_avail =
- Curl_rlimit_avail(&data->progress.ul.rlimit, &data->progress.now);
+ curl_off_t ul_avail = Curl_rlimit_avail(&data->progress.ul.rlimit,
+ Curl_pgrs_now(data));
if(ul_avail <= 0) {
result = CURLE_OK;
*eos = FALSE;
return CURLE_BAD_FUNCTION_ARGUMENT;
s->max_send_speed = offt;
Curl_rlimit_init(&data->progress.ul.rlimit, offt, offt,
- &data->progress.now);
+ Curl_pgrs_now(data));
break;
case CURLOPT_MAX_RECV_SPEED_LARGE:
/*
return CURLE_BAD_FUNCTION_ARGUMENT;
s->max_recv_speed = offt;
Curl_rlimit_init(&data->progress.dl.rlimit, offt, offt,
- &data->progress.now);
+ Curl_pgrs_now(data));
break;
case CURLOPT_RESUME_FROM_LARGE:
/*
Curl_sasl_init(&smtpc->sasl, data, &saslsmtp);
/* Initialise the pingpong layer */
- Curl_pp_init(&smtpc->pp, &data->progress.now);
+ Curl_pp_init(&smtpc->pp, Curl_pgrs_now(data));
/* Parse the URL options */
result = smtp_parse_url_options(data->conn, smtpc);
* zero : when i is equal to j
* positive when : when i is larger than j
*/
-#define compare(i, j) curlx_timediff_us(i, j)
+#define splay_compare(i, j) curlx_ptimediff_us(i, j)
/*
* Splay using the key i (which may or may not be in the tree.) The starting
* root is t.
*/
-struct Curl_tree *Curl_splay(struct curltime i,
+struct Curl_tree *Curl_splay(const struct curltime *pkey,
struct Curl_tree *t)
{
struct Curl_tree N, *l, *r, *y;
l = r = &N;
for(;;) {
- timediff_t comp = compare(i, t->key);
+ timediff_t comp = splay_compare(pkey, &t->key);
if(comp < 0) {
if(!t->smaller)
break;
- if(compare(i, t->smaller->key) < 0) {
+ if(splay_compare(pkey, &t->smaller->key) < 0) {
y = t->smaller; /* rotate smaller */
t->smaller = y->larger;
y->larger = t;
else if(comp > 0) {
if(!t->larger)
break;
- if(compare(i, t->larger->key) > 0) {
+ if(splay_compare(pkey, &t->larger->key) > 0) {
y = t->larger; /* rotate larger */
t->larger = y->smaller;
y->smaller = t;
*
* @unittest: 1309
*/
-struct Curl_tree *Curl_splayinsert(struct curltime i,
+struct Curl_tree *Curl_splayinsert(const struct curltime *pkey,
struct Curl_tree *t,
struct Curl_tree *node)
{
DEBUGASSERT(node);
if(t) {
- t = Curl_splay(i, t);
+ t = Curl_splay(pkey, t);
DEBUGASSERT(t);
- if(compare(i, t->key) == 0) {
+ if(splay_compare(pkey, &t->key) == 0) {
/* There already exists a node in the tree with the same key. Build a
doubly-linked circular list of nodes. We add the new 'node' struct to
the end of this list. */
if(!t) {
node->smaller = node->larger = NULL;
}
- else if(compare(i, t->key) < 0) {
+ else if(splay_compare(pkey, &t->key) < 0) {
node->smaller = t->smaller;
node->larger = t;
t->smaller = NULL;
node->smaller = t;
t->larger = NULL;
}
- node->key = i;
+ node->key = *pkey;
/* no identical nodes (yet), we are the only one in the list of nodes */
node->samen = node;
/* Finds and deletes the best-fit node from the tree. Return a pointer to the
resulting tree. best-fit means the smallest node if it is not larger than
the key */
-struct Curl_tree *Curl_splaygetbest(struct curltime i,
+struct Curl_tree *Curl_splaygetbest(const struct curltime *pkey,
struct Curl_tree *t,
struct Curl_tree **removed)
{
}
/* find smallest */
- t = Curl_splay(tv_zero, t);
+ t = Curl_splay(&tv_zero, t);
DEBUGASSERT(t);
- if(compare(i, t->key) < 0) {
+ if(splay_compare(pkey, &t->key) < 0) {
/* even the smallest is too big */
*removed = NULL;
return t;
DEBUGASSERT(removenode);
- if(compare(SPLAY_SUBNODE, removenode->key) == 0) {
+ if(splay_compare(&SPLAY_SUBNODE, &removenode->key) == 0) {
/* It is a subnode within a 'same' linked list and thus we can unlink it
easily. */
DEBUGASSERT(removenode->samen != removenode);
return 0;
}
- t = Curl_splay(removenode->key, t);
+ t = Curl_splay(&removenode->key, t);
DEBUGASSERT(t);
/* First make sure that we got the same root node as the one we want
if(!t->smaller)
x = t->larger;
else {
- x = Curl_splay(removenode->key, t->smaller);
+ x = Curl_splay(&removenode->key, t->smaller);
DEBUGASSERT(x);
x->larger = t->larger;
}
void *ptr; /* data the splay code does not care about */
};
-struct Curl_tree *Curl_splay(struct curltime i,
+struct Curl_tree *Curl_splay(const struct curltime *pkey,
struct Curl_tree *t);
-struct Curl_tree *Curl_splayinsert(struct curltime key,
+struct Curl_tree *Curl_splayinsert(const struct curltime *pkey,
struct Curl_tree *t,
struct Curl_tree *newnode);
-struct Curl_tree *Curl_splaygetbest(struct curltime key,
+struct Curl_tree *Curl_splaygetbest(const struct curltime *pkey,
struct Curl_tree *t,
struct Curl_tree **removed);
} /* switch */
if(data->set.timeout) {
- Curl_pgrs_now_set(data);
- if(curlx_timediff_ms(data->progress.now, conn->created) >=
+ if(curlx_ptimediff_ms(Curl_pgrs_now(data), &conn->created) >=
data->set.timeout) {
failf(data, "Time-out");
result = CURLE_OPERATION_TIMEDOUT;
} /* poll switch statement */
if(data->set.timeout) {
- Curl_pgrs_now_set(data);
- if(curlx_timediff_ms(data->progress.now, conn->created) >=
+ if(curlx_ptimediff_ms(Curl_pgrs_now(data), &conn->created) >=
data->set.timeout) {
failf(data, "Time-out");
result = CURLE_OPERATION_TIMEDOUT;
if(bytestoread && Curl_rlimit_active(&data->progress.dl.rlimit)) {
curl_off_t dl_avail = Curl_rlimit_avail(&data->progress.dl.rlimit,
- &data->progress.now);
+ Curl_pgrs_now(data));
/* DEBUGF(infof(data, "dl_rlimit, available=%" FMT_OFF_T, dl_avail));
*/
/* In case of rate limited downloads: if this loop already got
failf(data, "Operation timed out after %" FMT_TIMEDIFF_T
" milliseconds with %" FMT_OFF_T " out of %"
FMT_OFF_T " bytes received",
- curlx_timediff_ms(data->progress.now,
- data->progress.t_startsingle),
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle),
k->bytecount, k->size);
}
else {
failf(data, "Operation timed out after %" FMT_TIMEDIFF_T
" milliseconds with %" FMT_OFF_T " bytes received",
- curlx_timediff_ms(data->progress.now,
- data->progress.t_startsingle),
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle),
k->bytecount);
}
result = CURLE_OPERATION_TIMEDOUT;
CURLcode Curl_xfer_pause_send(struct Curl_easy *data, bool enable)
{
CURLcode result = CURLE_OK;
- Curl_rlimit_block(&data->progress.ul.rlimit, enable, &data->progress.now);
+ Curl_rlimit_block(&data->progress.ul.rlimit, enable, Curl_pgrs_now(data));
if(!enable && Curl_creader_is_paused(data))
result = Curl_creader_unpause(data);
Curl_pgrsSendPause(data, enable);
CURLcode Curl_xfer_pause_recv(struct Curl_easy *data, bool enable)
{
CURLcode result = CURLE_OK;
- Curl_rlimit_block(&data->progress.dl.rlimit, enable, &data->progress.now);
+ Curl_rlimit_block(&data->progress.dl.rlimit, enable, Curl_pgrs_now(data));
if(!enable && Curl_cwriter_is_paused(data))
result = Curl_cwriter_unpause(data);
Curl_conn_ev_data_pause(data, enable);
#endif
Curl_netrc_init(&data->state.netrc);
Curl_init_userdefined(data);
- Curl_pgrs_now_set(data); /* on easy handle create */
*curl = data;
return CURLE_OK;
timediff_t age_ms;
if(data->set.conn_max_idle_ms) {
- age_ms = curlx_timediff_ms(now, conn->lastused);
+ age_ms = curlx_ptimediff_ms(&now, &conn->lastused);
if(age_ms > data->set.conn_max_idle_ms) {
infof(data, "Too old connection (%" FMT_TIMEDIFF_T
" ms idle, max idle is %" FMT_TIMEDIFF_T " ms), disconnect it",
}
if(data->set.conn_max_age_ms) {
- age_ms = curlx_timediff_ms(now, conn->created);
+ age_ms = curlx_ptimediff_ms(&now, &conn->created);
if(age_ms > data->set.conn_max_age_ms) {
infof(data,
"Too old connection (created %" FMT_TIMEDIFF_T
use */
bool dead;
- if(conn_maxage(data, conn, data->progress.now)) {
+ if(conn_maxage(data, conn, *Curl_pgrs_now(data))) {
/* avoid check if already too old */
dead = TRUE;
}
}
CURLcode Curl_conn_upkeep(struct Curl_easy *data,
- struct connectdata *conn,
- struct curltime *now)
+ struct connectdata *conn)
{
CURLcode result = CURLE_OK;
- if(curlx_timediff_ms(*now, conn->keepalive) <= data->set.upkeep_interval_ms)
+ if(curlx_ptimediff_ms(Curl_pgrs_now(data), &conn->keepalive) <=
+ data->set.upkeep_interval_ms)
return result;
/* briefly attach for action */
}
Curl_detach_connection(data);
- conn->keepalive = *now;
+ conn->keepalive = *Curl_pgrs_now(data);
return result;
}
conn->remote_port = -1; /* unknown at this point */
/* Store creation time to help future close decision making */
- conn->created = data->progress.now;
+ conn->created = *Curl_pgrs_now(data);
/* Store current time to give a baseline to keepalive connection times. */
conn->keepalive = conn->created;
else if(result == CURLE_OPERATION_TIMEDOUT) {
failf(data, "Failed to resolve %s '%s' with timeout after %"
FMT_TIMEDIFF_T " ms", peertype, ehost->dispname,
- curlx_timediff_ms(data->progress.now, data->progress.t_startsingle));
+ curlx_ptimediff_ms(Curl_pgrs_now(data),
+ &data->progress.t_startsingle));
return CURLE_OPERATION_TIMEDOUT;
}
else if(result) {
* Perform upkeep operations on the connection.
*/
CURLcode Curl_conn_upkeep(struct Curl_easy *data,
- struct connectdata *conn,
- struct curltime *now);
+ struct connectdata *conn);
/**
* Always eval all arguments, return the first result != CURLE_OK.
struct Curl_cfilter *cf)
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
+ const struct curltime *pnow = Curl_pgrs_now(data);
- vquic_ctx_update_time(data, &ctx->q);
- pktx->ts = (ngtcp2_tstamp)ctx->q.last_op.tv_sec * NGTCP2_SECONDS +
- (ngtcp2_tstamp)ctx->q.last_op.tv_usec * NGTCP2_MICROSECONDS;
+ vquic_ctx_update_time(&ctx->q, pnow);
+ pktx->ts = (ngtcp2_tstamp)pnow->tv_sec * NGTCP2_SECONDS +
+ (ngtcp2_tstamp)pnow->tv_usec * NGTCP2_MICROSECONDS;
}
static void pktx_init(struct pkt_io_ctx *pktx,
struct Curl_easy *data)
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
+ const struct curltime *pnow = Curl_pgrs_now(data);
pktx->cf = cf;
pktx->data = data;
ngtcp2_path_storage_zero(&pktx->ps);
- vquic_ctx_set_time(data, &ctx->q);
- pktx->ts = (ngtcp2_tstamp)ctx->q.last_op.tv_sec * NGTCP2_SECONDS +
- (ngtcp2_tstamp)ctx->q.last_op.tv_usec * NGTCP2_MICROSECONDS;
+ vquic_ctx_set_time(&ctx->q, pnow);
+ pktx->ts = (ngtcp2_tstamp)pnow->tv_sec * NGTCP2_SECONDS +
+ (ngtcp2_tstamp)pnow->tv_usec * NGTCP2_MICROSECONDS;
}
static int cb_h3_acked_req_body(nghttp3_conn *conn, int64_t stream_id,
if(!ctx || !data)
return NGHTTP3_ERR_CALLBACK_FAILURE;
- Curl_pgrs_now_set(data); /* real change */
- ctx->handshake_at = data->progress.now;
+ ctx->handshake_at = *Curl_pgrs_now(data);
ctx->tls_handshake_complete = TRUE;
Curl_vquic_report_handshake(&ctx->tls, cf, data);
"ms, remote transport[max_udp_payload=%" PRIu64
", initial_max_data=%" PRIu64
"]",
- curlx_timediff_ms(ctx->handshake_at, ctx->started_at),
+ curlx_ptimediff_ms(&ctx->handshake_at, &ctx->started_at),
rp->max_udp_payload_size, rp->initial_max_data);
}
#endif
/* How many byte to ack on the stream? */
/* how much does rate limiting allow us to acknowledge? */
- avail = Curl_rlimit_avail(&data->progress.dl.rlimit, &data->progress.now);
+ avail = Curl_rlimit_avail(&data->progress.dl.rlimit,
+ Curl_pgrs_now(data));
if(avail == CURL_OFF_T_MAX) { /* no rate limit, ack all */
ack_len = stream->download_unacked;
}
struct Curl_cfilter *cf = user_data;
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct Curl_easy *data = stream_user_data;
- struct Curl_easy *calling = CF_DATA_CURRENT(cf);
struct h3_stream_ctx *stream = H3_STREAM_CTX(ctx, data);
(void)conn;
if(!stream)
return NGHTTP3_ERR_CALLBACK_FAILURE;
- if(calling)
- Curl_pgrs_now_update(data, calling);
h3_xfer_write_resp(cf, data, stream, (const char *)buf, blen, FALSE);
CURL_TRC_CF(data, cf, "[%" PRId64 "] DATA len=%zu", stream->id, blen);
CF_DATA_SAVE(save, cf, data);
if(!ctx->qconn) {
- ctx->started_at = data->progress.now;
+ ctx->started_at = *Curl_pgrs_now(data);
result = cf_connect_start(cf, data, &pktx);
if(result)
goto out;
}
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->q.got_first_byte) {
- timediff_t ms = curlx_timediff_ms(ctx->q.first_byte_at, ctx->started_at);
+ timediff_t ms = curlx_ptimediff_ms(&ctx->q.first_byte_at,
+ &ctx->started_at);
*pres1 = (ms < INT_MAX) ? (int)ms : INT_MAX;
}
else
rp = ngtcp2_conn_get_remote_transport_params(ctx->qconn);
if(rp && rp->max_idle_timeout) {
timediff_t idletime_ms =
- curlx_timediff_ms(data->progress.now, ctx->q.last_io);
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &ctx->q.last_io);
if(idletime_ms > 0) {
uint64_t max_idle_ms =
(uint64_t)(rp->max_idle_timeout / NGTCP2_MILLISECONDS);
if(acked_len > 0 || (eos && !s->send_blocked)) {
/* Since QUIC buffers the data written internally, we can tell
* nghttp3 that it can move forward on it */
- ctx->q.last_io = curlx_now();
+ ctx->q.last_io = *Curl_pgrs_now(data);
rv = nghttp3_conn_add_write_offset(ctx->h3.conn, s->id, acked_len);
if(rv && rv != NGHTTP3_ERR_STREAM_NOT_FOUND) {
failf(data, "nghttp3_conn_add_write_offset returned error: %s",
CF_DATA_SAVE(save, cf, data);
if(!ctx->tls.ossl.ssl) {
- ctx->started_at = data->progress.now;
+ ctx->started_at = *Curl_pgrs_now(data);
result = cf_osslq_ctx_start(cf, data);
if(result)
goto out;
int readable = SOCKET_READABLE(ctx->q.sockfd, 0);
if(readable > 0 && (readable & CURL_CSELECT_IN)) {
ctx->got_first_byte = TRUE;
- ctx->first_byte_at = data->progress.now;
+ ctx->first_byte_at = *Curl_pgrs_now(data);
}
}
/* if not recorded yet, take the timestamp before we called
* SSL_do_handshake() as the time we received the first packet. */
ctx->got_first_byte = TRUE;
- ctx->first_byte_at = data->progress.now;
+ ctx->first_byte_at = *Curl_pgrs_now(data);
}
/* Record the handshake complete with a new time stamp. */
- Curl_pgrs_now_set(data);
- ctx->handshake_at = data->progress.now;
- ctx->q.last_io = data->progress.now;
+ ctx->handshake_at = *Curl_pgrs_now(data);
+ ctx->q.last_io = *Curl_pgrs_now(data);
CURL_TRC_CF(data, cf, "handshake complete after %" FMT_TIMEDIFF_T "ms",
- curlx_timediff_ms(data->progress.now, ctx->started_at));
+ curlx_ptimediff_ms(Curl_pgrs_now(data), &ctx->started_at));
result = cf_osslq_verify_peer(cf, data);
if(!result) {
CURL_TRC_CF(data, cf, "peer verified");
int detail = SSL_get_error(ctx->tls.ossl.ssl, err);
switch(detail) {
case SSL_ERROR_WANT_READ:
- ctx->q.last_io = data->progress.now;
+ ctx->q.last_io = *Curl_pgrs_now(data);
CURL_TRC_CF(data, cf, "QUIC SSL_connect() -> WANT_RECV");
goto out;
case SSL_ERROR_WANT_WRITE:
- ctx->q.last_io = data->progress.now;
+ ctx->q.last_io = *Curl_pgrs_now(data);
CURL_TRC_CF(data, cf, "QUIC SSL_connect() -> WANT_SEND");
result = CURLE_OK;
goto out;
#ifdef SSL_ERROR_WANT_ASYNC
case SSL_ERROR_WANT_ASYNC:
- ctx->q.last_io = data->progress.now;
+ ctx->q.last_io = *Curl_pgrs_now(data);
CURL_TRC_CF(data, cf, "QUIC SSL_connect() -> WANT_ASYNC");
result = CURLE_OK;
goto out;
goto out;
}
CURL_TRC_CF(data, cf, "negotiated idle timeout: %" PRIu64 "ms", idle_ms);
- idletime = curlx_timediff_ms(data->progress.now, ctx->q.last_io);
+ idletime = curlx_ptimediff_ms(Curl_pgrs_now(data), &ctx->q.last_io);
if(idle_ms && idletime > 0 && (uint64_t)idletime > idle_ms)
goto out;
}
}
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->got_first_byte) {
- timediff_t ms = curlx_timediff_ms(ctx->first_byte_at, ctx->started_at);
+ timediff_t ms = curlx_ptimediff_ms(&ctx->first_byte_at,
+ &ctx->started_at);
*pres1 = (ms < INT_MAX) ? (int)ms : INT_MAX;
}
else
}
static void cf_quiche_process_ev(struct Curl_cfilter *cf,
- struct Curl_easy *calling,
struct Curl_easy *data,
struct h3_stream_ctx *stream,
quiche_h3_event *ev)
if(!stream)
return;
- Curl_pgrs_now_update(data, calling);
switch(quiche_h3_event_type(ev)) {
case QUICHE_H3_EVENT_HEADERS: {
struct cb_ctx cb_ctx;
uint64_t stream_id;
struct Curl_cfilter *cf;
struct Curl_multi *multi;
- struct Curl_easy *calling;
quiche_h3_event *ev;
};
if(stream->id == dctx->stream_id) {
struct Curl_easy *sdata = Curl_multi_get_easy(dctx->multi, mid);
if(sdata)
- cf_quiche_process_ev(dctx->cf, dctx->calling, sdata, stream, dctx->ev);
+ cf_quiche_process_ev(dctx->cf, sdata, stream, dctx->ev);
return FALSE; /* stop iterating */
}
return TRUE;
stream = H3_STREAM_CTX(ctx, data);
if(stream && stream->id == (uint64_t)rv) {
/* event for calling transfer */
- cf_quiche_process_ev(cf, data, data, stream, ev);
+ cf_quiche_process_ev(cf, data, stream, ev);
quiche_h3_event_free(ev);
if(stream->xfer_result)
return stream->xfer_result;
struct cf_quich_disp_ctx dctx;
dctx.stream_id = (uint64_t)rv;
dctx.cf = cf;
- dctx.calling = data;
dctx.multi = data->multi;
dctx.ev = ev;
Curl_uint32_hash_visit(&ctx->streams, cf_quiche_disp_event, &dctx);
*pnread = 0;
(void)buf;
(void)blen;
- vquic_ctx_update_time(data, &ctx->q);
+ vquic_ctx_update_time(&ctx->q, Curl_pgrs_now(data));
if(!stream)
return CURLE_RECV_ERROR;
CURLcode result;
*pnwritten = 0;
- vquic_ctx_update_time(data, &ctx->q);
+ vquic_ctx_update_time(&ctx->q, Curl_pgrs_now(data));
result = cf_process_ingress(cf, data);
if(result)
}
*done = FALSE;
- vquic_ctx_update_time(data, &ctx->q);
+ vquic_ctx_update_time(&ctx->q, Curl_pgrs_now(data));
if(!ctx->qconn) {
result = cf_quiche_ctx_open(cf, data);
if(quiche_conn_is_established(ctx->qconn)) {
ctx->handshake_at = ctx->q.last_op;
CURL_TRC_CF(data, cf, "handshake complete after %" FMT_TIMEDIFF_T "ms",
- curlx_timediff_ms(ctx->handshake_at, ctx->started_at));
+ curlx_ptimediff_ms(&ctx->handshake_at, &ctx->started_at));
result = cf_quiche_verify_peer(cf, data);
if(!result) {
CURL_TRC_CF(data, cf, "peer verified");
int err;
ctx->shutdown_started = TRUE;
- vquic_ctx_update_time(data, &ctx->q);
+ vquic_ctx_update_time(&ctx->q, Curl_pgrs_now(data));
err = quiche_conn_close(ctx->qconn, TRUE, 0, NULL, 0);
if(err) {
CURL_TRC_CF(data, cf, "error %d adding shutdown packet, "
}
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->q.got_first_byte) {
- timediff_t ms = curlx_timediff_ms(ctx->q.first_byte_at, ctx->started_at);
+ timediff_t ms = curlx_ptimediff_ms(&ctx->q.first_byte_at,
+ &ctx->started_at);
*pres1 = (ms < INT_MAX) ? (int)ms : INT_MAX;
}
else
}
}
#endif
- vquic_ctx_set_time(data, qctx);
+ vquic_ctx_set_time(qctx, Curl_pgrs_now(data));
return CURLE_OK;
}
Curl_bufq_free(&qctx->sendbuf);
}
-void vquic_ctx_set_time(struct Curl_easy *data,
- struct cf_quic_ctx *qctx)
+void vquic_ctx_set_time(struct cf_quic_ctx *qctx,
+ const struct curltime *pnow)
{
- qctx->last_op = data->progress.now;
+ qctx->last_op = *pnow;
}
-void vquic_ctx_update_time(struct Curl_easy *data,
- struct cf_quic_ctx *qctx)
+void vquic_ctx_update_time(struct cf_quic_ctx *qctx,
+ const struct curltime *pnow)
{
- Curl_pgrs_now_set(data);
- qctx->last_op = data->progress.now;
+ qctx->last_op = *pnow;
}
static CURLcode send_packet_no_gso(struct Curl_cfilter *cf,
struct cf_quic_ctx *qctx);
void vquic_ctx_free(struct cf_quic_ctx *qctx);
-void vquic_ctx_set_time(struct Curl_easy *data,
- struct cf_quic_ctx *qctx);
+void vquic_ctx_set_time(struct cf_quic_ctx *qctx,
+ const struct curltime *pnow);
-void vquic_ctx_update_time(struct Curl_easy *data,
- struct cf_quic_ctx *qctx);
+void vquic_ctx_update_time(struct cf_quic_ctx *qctx,
+ const struct curltime *pnow);
void vquic_push_blocked_pkt(struct Curl_cfilter *cf,
struct cf_quic_ctx *qctx,
bool disconnect)
{
CURLcode result = CURLE_OK;
- struct curltime start = data->progress.now;
+ struct curltime start = *Curl_pgrs_now(data);
while((sshc->state != SSH_STOP) && !result) {
bool block;
timediff_t left_ms = 1000;
- Curl_pgrs_now_set(data); /* timeout disconnect */
result = ssh_statemachine(data, sshc, sshp, &block);
if(result)
break;
return CURLE_OPERATION_TIMEDOUT;
}
}
- else if(curlx_timediff_ms(data->progress.now, start) > 1000) {
+ else if(curlx_ptimediff_ms(Curl_pgrs_now(data), &start) > 1000) {
/* disconnect timeout */
failf(data, "Disconnect timed out");
result = CURLE_OK;
}
shared->refcount = 1;
- shared->time = data->progress.now;
+ shared->time = *Curl_pgrs_now(data);
*pcreds = shared;
return CURLE_OK;
}
/* key to use at `multi->proto_hash` */
#define MPROTO_GTLS_X509_KEY "tls:gtls:x509:share"
-static bool gtls_shared_creds_expired(const struct Curl_easy *data,
+static bool gtls_shared_creds_expired(struct Curl_easy *data,
const struct gtls_shared_creds *sc)
{
const struct ssl_general_config *cfg = &data->set.general_ssl;
- timediff_t elapsed_ms = curlx_timediff_ms(data->progress.now, sc->time);
+ timediff_t elapsed_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &sc->time);
timediff_t timeout_ms = cfg->ca_cache_timeout * (timediff_t)1000;
if(timeout_ms < 0)
#include "openssl.h"
#include "../connect.h"
#include "../slist.h"
+#include "../progress.h"
#include "../select.h"
#include "../curlx/wait.h"
#include "vtls.h"
curlx_free(share);
}
-static bool ossl_cached_x509_store_expired(const struct Curl_easy *data,
+static bool ossl_cached_x509_store_expired(struct Curl_easy *data,
const struct ossl_x509_share *mb)
{
const struct ssl_general_config *cfg = &data->set.general_ssl;
if(cfg->ca_cache_timeout < 0)
return FALSE;
else {
- timediff_t elapsed_ms = curlx_timediff_ms(data->progress.now, mb->time);
+ timediff_t elapsed_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &mb->time);
timediff_t timeout_ms = cfg->ca_cache_timeout * (timediff_t)1000;
return elapsed_ms >= timeout_ms;
}
static X509_STORE *ossl_get_cached_x509_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data,
+ struct Curl_easy *data,
bool *pempty)
{
struct Curl_multi *multi = data->multi;
}
static void ossl_set_cached_x509_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data,
+ struct Curl_easy *data,
X509_STORE *store,
bool is_empty)
{
curlx_free(share->CAfile);
}
- share->time = data->progress.now;
+ share->time = *Curl_pgrs_now(data);
share->store = store;
share->store_is_empty = is_empty;
share->CAfile = CAfile;
connssl->connecting_state = ssl_connect_2;
memset(rs, 0, sizeof(*rs));
rs->io_need = CURL_SSL_IO_NEED_SEND;
- rs->start_time = curlx_now();
+ rs->start_time = *Curl_pgrs_now(data);
rs->started = TRUE;
}
curl_socket_t readfd, writefd;
timediff_t elapsed;
- elapsed = curlx_timediff_ms(curlx_now(), rs->start_time);
+ elapsed = curlx_ptimediff_ms(Curl_pgrs_now(data), &rs->start_time);
if(elapsed >= MAX_RENEG_BLOCK_TIME) {
failf(data, "schannel: renegotiation timeout");
result = CURLE_SSL_CONNECT_ERROR;
if(result)
break;
- elapsed = curlx_timediff_ms(curlx_now(), rs->start_time);
+ elapsed = curlx_ptimediff_ms(Curl_pgrs_now(data), &rs->start_time);
if(elapsed >= MAX_RENEG_BLOCK_TIME) {
failf(data, "schannel: renegotiation timeout");
result = CURLE_SSL_CONNECT_ERROR;
}
HCERTSTORE Curl_schannel_get_cached_cert_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data)
+ struct Curl_easy *data)
{
struct ssl_primary_config *conn_config = Curl_ssl_cf_get_primary_config(cf);
struct Curl_multi *multi = data->multi;
const struct ssl_general_config *cfg = &data->set.general_ssl;
timediff_t timeout_ms;
timediff_t elapsed_ms;
- struct curltime now;
unsigned char info_blob_digest[CURL_SHA256_DIGEST_LENGTH];
DEBUGASSERT(multi);
negative timeout means retain forever. */
timeout_ms = cfg->ca_cache_timeout * (timediff_t)1000;
if(timeout_ms >= 0) {
- now = curlx_now();
- elapsed_ms = curlx_timediff_ms(now, share->time);
+ elapsed_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &share->time);
if(elapsed_ms >= timeout_ms) {
return NULL;
}
}
bool Curl_schannel_set_cached_cert_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data,
+ struct Curl_easy *data,
HCERTSTORE cert_store)
{
struct ssl_primary_config *conn_config = Curl_ssl_cf_get_primary_config(cf);
};
HCERTSTORE Curl_schannel_get_cached_cert_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data);
+ struct Curl_easy *data);
bool Curl_schannel_set_cached_cert_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data,
+ struct Curl_easy *data,
HCERTSTORE cert_store);
#endif /* USE_SCHANNEL */
if(!result && *done) {
cf->connected = TRUE;
if(connssl->state == ssl_connection_complete) {
- Curl_pgrs_now_set(data);
- connssl->handshake_done = data->progress.now;
+ connssl->handshake_done = *Curl_pgrs_now(data);
}
/* Connection can be deferred when sending early data */
DEBUGASSERT(connssl->state == ssl_connection_complete ||
curlx_free(share);
}
-static bool wssl_cached_x509_store_expired(const struct Curl_easy *data,
+static bool wssl_cached_x509_store_expired(struct Curl_easy *data,
const struct wssl_x509_share *mb)
{
const struct ssl_general_config *cfg = &data->set.general_ssl;
- timediff_t elapsed_ms = curlx_timediff_ms(data->progress.now, mb->time);
+ timediff_t elapsed_ms = curlx_ptimediff_ms(Curl_pgrs_now(data), &mb->time);
timediff_t timeout_ms = cfg->ca_cache_timeout * (timediff_t)1000;
if(timeout_ms < 0)
}
static WOLFSSL_X509_STORE *wssl_get_cached_x509_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data)
+ struct Curl_easy *data)
{
struct Curl_multi *multi = data->multi;
struct wssl_x509_share *share;
}
static void wssl_set_cached_x509_store(struct Curl_cfilter *cf,
- const struct Curl_easy *data,
+ struct Curl_easy *data,
WOLFSSL_X509_STORE *store)
{
struct ssl_primary_config *conn_config = Curl_ssl_cf_get_primary_config(cf);
curlx_free(share->CAfile);
}
- share->time = data->progress.now;
+ share->time = *Curl_pgrs_now(data);
share->store = store;
share->CAfile = CAfile;
}
NOW(run[i].now_s, run[i].now_us);
TIMEOUTS(run[i].timeout_ms, run[i].connecttimeout_ms);
easy->progress.now = now;
- timeout = Curl_timeleft_ms(easy, run[i].connecting);
+ timeout = Curl_timeleft_now_ms(easy, &now, run[i].connecting);
if(timeout != run[i].result)
fail(run[i].comment);
}
key.tv_usec = (541 * i) % 1023;
storage[i] = key.tv_usec;
Curl_splayset(&nodes[i], &storage[i]);
- root = Curl_splayinsert(key, root, &nodes[i]);
+ root = Curl_splayinsert(&key, root, &nodes[i]);
}
puts("Result:");
for(j = 0; j <= i % 3; j++) {
storage[i * 3 + j] = key.tv_usec * 10 + j;
Curl_splayset(&nodes[i * 3 + j], &storage[i * 3 + j]);
- root = Curl_splayinsert(key, root, &nodes[i * 3 + j]);
+ root = Curl_splayinsert(&key, root, &nodes[i * 3 + j]);
}
}
for(i = 0; i <= 1100; i += 100) {
curl_mprintf("Removing nodes not larger than %d\n", i);
tv_now.tv_usec = i;
- root = Curl_splaygetbest(tv_now, root, &removed);
+ root = Curl_splaygetbest(&tv_now, root, &removed);
while(removed) {
curl_mprintf("removed payload %zu[%zu]\n",
*(size_t *)Curl_splayget(removed) / 10,
*(size_t *)Curl_splayget(removed) % 10);
- root = Curl_splaygetbest(tv_now, root, &removed);
+ root = Curl_splaygetbest(&tv_now, root, &removed);
}
}
struct Curl_easy data;
struct curltime now = curlx_now();
+ data.multi = NULL;
data.progress.now = now;
data.progress.t_nslookup = 0;
data.progress.t_connect = 0;
#include "urldata.h"
#include "connect.h"
+#include "progress.h"
#include "curl_share.h"
static CURLcode t1607_setup(void)