An overview of the time values available from curl_easy_getinfo(3)
-~~~
-curl_easy_perform()
- |
- |--QUEUE_TIME
- |--|--NAMELOOKUP
- |--|--|--CONNECT
- |--|--|--|--APPCONNECT
- |--|--|--|--|--PRETRANSFER
- |--|--|--|--|--|--STARTTRANSFER
- |--|--|--|--|--|--|--TOTAL
- |--|--|--|--|--|--|--REDIRECT
-~~~
-
-## QUEUE_TIME
+ curl_easy_perform()
+ |
+ |--QUEUE
+ |--|--NAMELOOKUP
+ |--|--|--CONNECT
+ |--|--|--|--APPCONNECT
+ |--|--|--|--|--PRETRANSFER
+ |--|--|--|--|--|--STARTTRANSFER
+ |--|--|--|--|--|--|--TOTAL
+ |--|--|--|--|--|--|--REDIRECT
+
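All of these values can be read with curl_easy_getinfo(3) once the transfer
has completed. A minimal sketch (the URL is only a placeholder) reading a few
of the *_TIME_T variants, which report microseconds as curl_off_t:

~~~c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *handle = curl_easy_init();
  if(handle) {
    curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(handle) == CURLE_OK) {
      curl_off_t namelookup, connect, total;
      /* the *_TIME_T variants report the times in microseconds;
         return codes are ignored here for brevity */
      curl_easy_getinfo(handle, CURLINFO_NAMELOOKUP_TIME_T, &namelookup);
      curl_easy_getinfo(handle, CURLINFO_CONNECT_TIME_T, &connect);
      curl_easy_getinfo(handle, CURLINFO_TOTAL_TIME_T, &total);
      printf("namelookup: %" CURL_FORMAT_CURL_OFF_T " us\n", namelookup);
      printf("connect:    %" CURL_FORMAT_CURL_OFF_T " us\n", connect);
      printf("total:      %" CURL_FORMAT_CURL_OFF_T " us\n", total);
    }
    curl_easy_cleanup(handle);
  }
  return 0;
}
~~~
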
+## CURLINFO_QUEUE_TIME
CURLINFO_QUEUE_TIME_T(3). The time during which the transfer was held in a
waiting queue before it could start for real. (Added in 8.6.0)
-## NAMELOOKUP
+## CURLINFO_NAMELOOKUP_TIME
CURLINFO_NAMELOOKUP_TIME(3) and CURLINFO_NAMELOOKUP_TIME_T(3). The time it
took from the start until the name resolving was completed.
-## CONNECT
+## CURLINFO_CONNECT_TIME
CURLINFO_CONNECT_TIME(3) and CURLINFO_CONNECT_TIME_T(3). The time it took from
the start until the connect to the remote host (or proxy) was completed.
-## APPCONNECT
+## CURLINFO_APPCONNECT_TIME
CURLINFO_APPCONNECT_TIME(3) and CURLINFO_APPCONNECT_TIME_T(3). The time it
took from the start until the SSL connect/handshake with the remote host was
completed. (Added in 7.19.0) The latter is the integer version (measuring
microseconds). (Added in 7.60.0)
-## PRETRANSFER
+## CURLINFO_PRETRANSFER_TIME
CURLINFO_PRETRANSFER_TIME(3) and CURLINFO_PRETRANSFER_TIME_T(3). The time it
took from the start until the file transfer is just about to begin. This
includes all pre-transfer commands and negotiations that are specific to the
particular protocol(s) involved.
-## STARTTRANSFER
+## CURLINFO_STARTTRANSFER_TIME
CURLINFO_STARTTRANSFER_TIME(3) and CURLINFO_STARTTRANSFER_TIME_T(3). The time
it took from the start until the first byte is received by libcurl.
-## TOTAL
+## CURLINFO_TOTAL_TIME
CURLINFO_TOTAL_TIME(3) and CURLINFO_TOTAL_TIME_T(3). Total time
of the previous request.
-## REDIRECT
+## CURLINFO_REDIRECT_TIME
CURLINFO_REDIRECT_TIME(3) and CURLINFO_REDIRECT_TIME_T(3). The time it took
for all redirection steps, including name lookup, connect, pretransfer and
All callback arguments must be set to valid function pointers. The
prototypes for the given callbacks must match these:
-## void *malloc_callback(size_t size);
+## `void *malloc_callback(size_t size);`
To replace malloc()
-## void free_callback(void *ptr);
+## `void free_callback(void *ptr);`
To replace free()
-## void *realloc_callback(void *ptr, size_t size);
+## `void *realloc_callback(void *ptr, size_t size);`
To replace realloc()
-## char *strdup_callback(const char *str);
+## `char *strdup_callback(const char *str);`
To replace strdup()
-## void *calloc_callback(size_t nmemb, size_t size);
+## `void *calloc_callback(size_t nmemb, size_t size);`
To replace calloc()
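
A minimal sketch of installing callbacks with these prototypes using
curl_global_init_mem(3); the wrappers below simply forward to the C library
allocators and exist only for illustration:

~~~c
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

static void *my_malloc(size_t size) { return malloc(size); }
static void my_free(void *ptr) { free(ptr); }
static void *my_realloc(void *ptr, size_t size) { return realloc(ptr, size); }
static void *my_calloc(size_t nmemb, size_t size) { return calloc(nmemb, size); }

static char *my_strdup(const char *str)
{
  size_t len = strlen(str) + 1;
  char *copy = malloc(len);
  if(copy)
    memcpy(copy, str, len);
  return copy;
}

int main(void)
{
  /* must be called before any other libcurl function is used */
  curl_global_init_mem(CURL_GLOBAL_DEFAULT, my_malloc, my_free, my_realloc,
                       my_strdup, my_calloc);

  /* ... transfers ... */

  curl_global_cleanup();
  return 0;
}
~~~
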
# TRACE COMPONENTS
-## tcp
+## `tcp`
Tracing of TCP socket handling: connect, reads, writes.
-## ssl
+## `ssl`
Tracing of SSL/TLS operations, whichever SSL backend is used in your build.
-## http/2
+## `http/2`
Details about HTTP/2 handling: frames, events, I/O, etc.
-## http/3
+## `http/3`
Details about HTTP/3 handling: connect, frames, events, I/O etc.
-## http-proxy
+## `http-proxy`
Involved when transfers are tunneled through an HTTP proxy. "h1-proxy" or
"h2-proxy" are also involved, depending on the HTTP version negotiated with
};
~~~
-## age
+## `age`
This field specifies the age of this struct. It is always zero for now.
-## flags
+## `flags`
-This is a bitmask with individual bits set that describes the WebSocket
-data. See the list below.
+This is a bitmask with individual bits set that describes the WebSocket data.
+See the list below.
-## offset
+## `offset`
When this frame is a continuation of fragment data already delivered, this is
the offset into the final fragment where this piece belongs.
-## bytesleft
+## `bytesleft`
-If this is not a complete fragment, the *bytesleft* field informs about
-how many additional bytes are expected to arrive before this fragment is
-complete.
+If this is not a complete fragment, the *bytesleft* field informs about how
+many additional bytes are expected to arrive before this fragment is complete.
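
A sketch of how these fields could be examined after a curl_ws_recv(3) call;
it assumes *handle* already carries an established WebSocket connection:

~~~c
#include <stdio.h>
#include <curl/curl.h>

/* assumes 'handle' already has an established WebSocket connection */
static void recv_one_chunk(CURL *handle)
{
  char buffer[256];
  size_t nread;
  const struct curl_ws_frame *meta;

  if(curl_ws_recv(handle, buffer, sizeof(buffer), &nread, &meta) == CURLE_OK) {
    printf("got %zu bytes, flags 0x%x, offset %" CURL_FORMAT_CURL_OFF_T
           ", bytesleft %" CURL_FORMAT_CURL_OFF_T "\n",
           nread, (unsigned int)meta->flags, meta->offset, meta->bytesleft);
    if(!meta->bytesleft)
      printf("this fragment is now complete\n");
  }
}
~~~
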
# FLAGS
description of what they do. Also note that curl, the command line tool,
supports a set of additional environment variables independently of this.
-## [scheme]_proxy
+## `[scheme]_proxy`
When libcurl is given a URL to use in a transfer, it first extracts the scheme
part from the URL and checks if there is a given proxy set for that in its
where libcurl first checks **ws_proxy** or **wss_proxy** but if they are
not set, it will fall back and try the http and https versions instead if set.
-## ALL_PROXY
+## `ALL_PROXY`
This sets a proxy for all URLs, independently of what scheme is
being used. Note that the scheme-specific variables override this one if set.
-## CURL_SSL_BACKEND
+## `CURL_SSL_BACKEND`
When libcurl is built to support multiple SSL backends, it selects a specific
backend at first use. If no selection is done by the program using libcurl,
SSL backend names (case-insensitive): BearSSL, GnuTLS, mbedTLS,
nss, OpenSSL, rustls, Schannel, Secure-Transport, wolfSSL
-## HOME
+## `HOME`
When the netrc feature is used (CURLOPT_NETRC(3)), this variable is
checked as the primary way to find the "current" home directory in which
the .netrc file is likely to exist.
-## USERPROFILE
+## `USERPROFILE`
When the netrc feature is used (CURLOPT_NETRC(3)), this variable is
checked as the secondary way to find the "current" home directory (on Windows
only) in which the .netrc file is likely to exist.
-## LOGNAME
+## `LOGNAME`
Username to use when invoking the *ntlm-wb* tool, if *NTLMUSER* was
not set.
-## NO_PROXY
+## `NO_PROXY`
This has the same functionality as the CURLOPT_NOPROXY(3) option: it
gives libcurl a comma-separated list of hostname patterns for which libcurl
should not use a proxy.
-## NTLMUSER
+## `NTLMUSER`
Username to use when invoking the *ntlm-wb* tool.
-## SSLKEYLOGFILE
+## `SSLKEYLOGFILE`
When set and libcurl runs with an SSL backend that supports this feature,
libcurl saves SSL secrets into the given filename. Using those SSL secrets,
These secrets and this file might be sensitive. Users are advised to take
precautions so that they are not stolen or otherwise inadvertently revealed.
-## USER
+## `USER`
Username to use when invoking the *ntlm-wb* tool, if *NTLMUSER* and *LOGNAME*
were not set.
libcurl is your friend here too.
-## CUSTOMREQUEST
+## CURLOPT_CUSTOMREQUEST
If just changing the actual HTTP request keyword is what you want, like when
GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3)
is there for you. It is simple to use:
+
~~~c
curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
~~~
+
When using the custom request, you change the request keyword of the actual
request you are performing. Thus, by default you make a GET request but you
can also make a POST operation (as described before) and then replace the POST
combine with CURLOPT_NOBODY(3). If this option is set, no actual file
content transfer is performed.
-## FTP Custom CUSTOMREQUEST
+## FTP Custom CURLOPT_CUSTOMREQUEST
If you do want to list the contents of an FTP directory using your own defined
-FTP command, CURLOPT_CUSTOMREQUEST(3) does just that. "NLST" is the
-default one for listing directories but you are free to pass in your idea of a
-good alternative.
+FTP command, CURLOPT_CUSTOMREQUEST(3) does just that. "NLST" is the default
+one for listing directories but you are free to pass in your idea of a good
+alternative.
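
For example, a sketch that asks for a machine-readable listing with the
standard MLSD command instead of NLST (whether the server supports it is up to
the server):

~~~c
curl_easy_setopt(handle, CURLOPT_URL, "ftp://example.com/some/dir/");
/* replace the default NLST directory listing command with MLSD */
curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MLSD");
~~~
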
# Cookies Without Chocolate Chips
In the HTTP sense, a cookie is a name with an associated value. A server sends
the name and value to the client, and expects it to get sent back on every
-subsequent request to the server that matches the particular conditions
-set. The conditions include that the domain name and path match and that the
-cookie has not become too old.
+subsequent request to the server that matches the particular conditions set.
+The conditions include that the domain name and path match and that the cookie
+has not become too old.
In real-world cases, servers send new cookies to replace existing ones to
update them. Servers use cookies to "track" users and to keep "sessions".
To just send whatever cookie you want to a server, you can use
CURLOPT_COOKIE(3) to set a cookie string like this:
+
~~~c
curl_easy_setopt(handle, CURLOPT_COOKIE, "name1=var1; name2=var2;");
~~~
-In many cases, that is not enough. You might want to dynamically save
-whatever cookies the remote server passes to you, and make sure those cookies
-are then used accordingly on later requests.
+
+In many cases, that is not enough. You might want to dynamically save whatever
+cookies the remote server passes to you, and make sure those cookies are then
+used accordingly on later requests.
One way to do this is to save all headers you receive in a plain file and
when you make a request, you tell libcurl to read the previous headers to
CURLMOPT_SOCKETFUNCTION(3) option.
This pointer is not touched by libcurl but is only passed in as the socket
-callbacks's **clientp** argument.
+callback's **clientp** argument.
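
A sketch of how that could look, with a hypothetical application-defined
*sock_context* struct as the client pointer and the callback prototype
documented for CURLMOPT_SOCKETFUNCTION(3):

~~~c
#include <stdio.h>
#include <curl/curl.h>

/* hypothetical application bookkeeping passed via CURLMOPT_SOCKETDATA */
struct sock_context {
  int updates; /* how often libcurl asked us to adjust our poll set */
};

static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *clientp, void *socketp)
{
  struct sock_context *ctx = (struct sock_context *)clientp;
  (void)easy;
  (void)socketp;
  ctx->updates++;
  if(what == CURL_POLL_REMOVE)
    fprintf(stderr, "stop watching socket %ld\n", (long)s);
  return 0;
}

/* setup:
     static struct sock_context ctx;
     curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
     curl_multi_setopt(multi, CURLMOPT_SOCKETDATA, &ctx);
*/
~~~
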
# DEFAULT
CURLMOPT_TIMERFUNCTION(3) option.
This pointer is not touched by libcurl but is only passed in as the timer
-callbacks's **clientp** argument.
+callback's **clientp** argument.
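
A sketch for the timer callback, with a hypothetical application-defined
*timer_context* struct as the client pointer:

~~~c
#include <curl/curl.h>

/* hypothetical application state passed via CURLMOPT_TIMERDATA */
struct timer_context {
  long timeout_ms; /* -1 means no timer should be running */
};

static int timer_cb(CURLM *multi, long timeout_ms, void *clientp)
{
  struct timer_context *ctx = (struct timer_context *)clientp;
  (void)multi;
  /* remember when libcurl wants to be called again */
  ctx->timeout_ms = timeout_ms;
  return 0;
}

/* setup:
     static struct timer_context ctx = { -1 };
     curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
     curl_multi_setopt(multi, CURLMOPT_TIMERDATA, &ctx);
*/
~~~
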
# DEFAULT
Pass a char pointer to a *cookie* string.
Such a cookie can be either a single line in Netscape / Mozilla format or just
-regular HTTP-style header (Set-Cookie: ...) format. This option also enables
-the cookie engine. This adds that single cookie to the internal cookie store.
+regular HTTP-style header (`Set-Cookie:`) format. This option also enables the
+cookie engine. This adds that single cookie to the internal cookie store.
We strongly advise against loading cookies from an HTTP header file, as that
is an inferior data exchange format.
Exercise caution if you are using this option and multiple transfers may
-occur. If you use the Set-Cookie format and the string does not specify a
+occur. If you use the `Set-Cookie` format and the string does not specify a
domain, then the cookie is sent for any domain (even after redirects are
followed) and cannot be modified by a server-set cookie. If a server sets a
cookie of the same name (or maybe you have imported one) then both are sent on
future transfers to that server, likely not what you intended. To address
-these issues set a domain in Set-Cookie (doing that includes subdomains) or
+these issues, set a domain in `Set-Cookie` (doing that includes subdomains) or
much better: use the Netscape file format.
Additionally, there are commands available that perform actions if you pass in
these exact strings:
-## ALL
+## `ALL`
erases all cookies held in memory
-## SESS
+## `SESS`
erases all session cookies held in memory
-## FLUSH
+## `FLUSH`
writes all known cookies to the file specified by CURLOPT_COOKIEJAR(3)
-## RELOAD
+## `RELOAD`
loads all cookies from the files specified by CURLOPT_COOKIEFILE(3)
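
A sketch combining a command with adding a single cookie in Netscape / Mozilla
format (the cookie data here is made up):

~~~c
/* erase all cookies currently held in memory */
curl_easy_setopt(handle, CURLOPT_COOKIELIST, "ALL");

/* add one made-up cookie in Netscape / Mozilla format:
   domain, subdomain flag, path, secure, expiry, name, value */
curl_easy_setopt(handle, CURLOPT_COOKIELIST,
                 "example.com\tFALSE\t/\tFALSE\t0\tname\tvalue");
~~~
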
This function is passed the following arguments:
-## conn_primary_ip
+## `conn_primary_ip`
A pointer to a null-terminated C string containing the primary IP of the
remote server established with this connection. For FTP, this is the IP for
the control connection. IPv6 addresses are represented without surrounding
brackets.
-## conn_local_ip
+## `conn_local_ip`
A pointer to a null-terminated C string containing the originating IP for this
connection. IPv6 addresses are represented without surrounding brackets.
-## conn_primary_port
+## `conn_primary_port`
The primary port number on the remote server established with this connection.
For FTP, this is the port for the control connection. This can be a TCP or a
UDP port number depending on the protocol.
-## conn_local_port
+## `conn_local_port`
The originating port number for this connection. This can be a TCP or a UDP
port number depending on the protocol.
-## clientp
+## `clientp`
The pointer you set with CURLOPT_PREREQDATA(3).
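
A sketch of a callback using these arguments; it only logs the connection
endpoints and then lets the request continue:

~~~c
#include <stdio.h>
#include <curl/curl.h>

static int prereq_cb(void *clientp, char *conn_primary_ip, char *conn_local_ip,
                     int conn_primary_port, int conn_local_port)
{
  (void)clientp;
  fprintf(stderr, "about to issue a request: %s:%d -> %s:%d\n",
          conn_local_ip, conn_local_port, conn_primary_ip, conn_primary_port);
  /* return CURL_PREREQFUNC_ABORT instead to stop the transfer here */
  return CURL_PREREQFUNC_OK;
}

/* setup:
     curl_easy_setopt(handle, CURLOPT_PREREQFUNCTION, prereq_cb);
     curl_easy_setopt(handle, CURLOPT_PREREQDATA, NULL);
*/
~~~
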
cleared (1L = set, 0 = clear). The option is set by default. This has no
effect after the connection has been established.
-Setting this option to 1L disables TCP's Nagle algorithm on connections
-created using this handle. The purpose of this algorithm is to try to minimize
-the number of small packets on the network (where "small packets" means TCP
-segments less than the Maximum Segment Size for the network).
+Setting this option to 1L disables the Nagle algorithm on connections created
+using this handle. The purpose of this algorithm is to minimize the number of
+small packets on the network (where "small packets" means TCP segments less
+than the Maximum Segment Size for the network).
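
Since the option is set by default, a sketch of clearing it again (leaving the
Nagle algorithm enabled) for a handle would be:

~~~c
/* 0L clears the option and leaves the Nagle algorithm enabled */
curl_easy_setopt(handle, CURLOPT_TCP_NODELAY, 0L);
~~~
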
Maximizing the amount of data sent per TCP segment is good because it
amortizes the overhead of the send. However, in some cases small segments may
ftp://example.com/some/path/*.txt
-for all txt's from the root directory. Only two asterisks are allowed within
-the same pattern string.
+matches all `.txt` files in the given directory. Only two asterisks are allowed
+within the same pattern string.
## ? - QUESTION MARK