Adjusted badwords to find capitalized "Curl" and "Libcurl" uses.
Plus: make badwords run on all markdown files in the repo and update
the markdown files previously unchecked
Closes #15898
# If separator is '=', the string will be compared case sensitively.
# If separator is ':', the check is done case insensitively.
#
+# To whitelist specific uses of bad words, add lines like the one below;
+# the listed text is removed from the input before the check runs:
+#
+# ---(accepted word)
+#
my $w;
while(<STDIN>) {
chomp;
if($_ =~ /^#/) {
next;
}
- if($_ =~ /^([^:=]*)([:=])(.*)/) {
+ if($_ =~ /^---(.*)/) {
+ push @whitelist, $1;
+ }
+ elsif($_ =~ /^([^:=]*)([:=])(.*)/) {
my ($bad, $sep, $better)=($1, $2, $3);
push @w, $bad;
$alt{$bad} = $better;
$in =~ s/(\[.*\])\(.*\)/$1/g;
# remove backticked texts
$in =~ s/\`.*\`//g;
+ # remove whitelisted patterns
+ for my $p (@whitelist) {
+ $in =~ s/$p//g;
+ }
foreach my $w (@w) {
my $case = $exactcase{$w};
if(($in =~ /^(.*)$w/i && !$case) ||
64-bits:64 bits or 64-bit
32-bits:32 bits or 32-bit
\bvery\b:rephrase using an alternative word
+\bCurl\b=curl
+\bLibcurl\b=libcurl
+---WWW::Curl
+---NET::Curl
+---Curl Corporation
name: checkout
- name: badwords
- run: .github/scripts/badwords.pl < .github/scripts/badwords.txt docs/*.md docs/libcurl/*.md docs/libcurl/opts/*.md docs/cmdline-opts/*.md docs/TODO docs/KNOWN_BUGS tests/*.md
+ run: .github/scripts/badwords.pl < .github/scripts/badwords.txt `git ls-files '**.md'` docs/TODO docs/KNOWN_BUGS packages/OS400/README.OS400
- name: verify-synopsis
run: .github/scripts/verify-synopsis.pl docs/libcurl/curl*.md
./configure --disable-shared --enable-debug --enable-maintainer-mode
-In environments that don't support configure (i.e. Windows), do this:
+In environments that do not support configure (e.g. Windows), do this:
buildconf.bat
# [curl](https://curl.se/)
-Curl is a command-line tool for transferring data specified with URL syntax.
+curl is a command-line tool for transferring data specified with URL syntax.
Learn how to use curl by reading [the
manpage](https://curl.se/docs/manpage.html) or [everything
curl](https://everything.curl.dev/).
## Notice
-Curl contains pieces of source code that is Copyright (c) 1998, 1999 Kungliga
+curl contains pieces of source code that are Copyright (c) 1998, 1999 Kungliga
Tekniska Högskolan. This notice is included here to comply with the
distribution terms.
## There are still bugs
- Curl and libcurl keep being developed. Adding features and changing code
+ curl and libcurl keep being developed. Adding features and changing code
means that bugs sneak in, no matter how hard we try to keep them out.
Of course there are lots of bugs left. Not to mention misfeatures.
## Using ECH and DoH
-Curl supports using DoH for A/AAAA lookups so it was relatively easy to add
+curl supports using DoH for A/AAAA lookups so it was relatively easy to add
retrieval of HTTPS RRs in that situation. To use ECH and DoH together:
```bash
## Default settings
-Curl has various ways to configure default settings, e.g. in ``$HOME/.curlrc``,
+curl has various ways to configure default settings, e.g. in ``$HOME/.curlrc``,
so one can set the DoH URL and enable ECH that way:
```bash
major operating systems. The never-quite-understood -F option was added and
curl could now simulate quite a lot of a browser. TELNET support was added.
-Curl 5 was released in December 1998 and introduced the first ever curl man
+curl 5 was released in December 1998 and introduced the first ever curl man
page. People started making Linux RPM packages out of it.
1999
This release bumped the major SONAME to 3 due to the removal of the
`curl_formparse()` function
-August: Curl and libcurl 7.12.1
+August: curl and libcurl 7.12.1
Public curl release number: 82
Releases counted from the beginning: 109
curl and libcurl are installed in an estimated 5 *billion* instances
world-wide.
- October 31: Curl and libcurl 7.62.0
+ October 31: curl and libcurl 7.62.0
Public curl releases: 177
Command line options: 219
### HTTP
-Curl also supports user and password in HTTP URLs, thus you can pick a file
+curl also supports user and password in HTTP URLs, thus you can pick a file
like:
curl http://name:passwd@http.server.example/full/path/to/file
curl also supports SOCKS4 and SOCKS5 proxies with `--socks4` and `--socks5`.
-See also the environment variables Curl supports that offer further proxy
+See also the environment variables curl supports that offer further proxy
+See also the environment variables curl supports that offer further proxy
control.
Most FTP proxy servers are set up to appear as a normal FTP server from the
## Ranges
HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only
-one or more sub-parts of a specified document. Curl supports this with the
+one or more sub-parts of a specified document. curl supports this with the
`-r` flag.
Get the first 100 bytes of a document:
curl -r -500 http://www.example.com/
-Curl also supports simple ranges for FTP files as well. Then you can only
+curl supports simple ranges for FTP files as well. Then you can only
specify start and stop position.
Get the first 100 bytes of a document using FTP:
curl -T localfile -a ftp://ftp.example.com/remotefile
-Curl also supports ftp upload through a proxy, but only if the proxy is
+curl also supports ftp upload through a proxy, but only if the proxy is
configured to allow that kind of tunneling. If it does, you can run curl in a
fashion similar to:
If curl fails where it is not supposed to, if the servers do not let you in,
if you cannot understand the responses: use the `-v` flag to get verbose
-fetching. Curl outputs lots of info and what it sends and receives in order to
+fetching. curl outputs lots of info and what it sends and receives in order to
let the user see all client-server interaction (but it does not show you the
actual data).
extensive.
For HTTP, you can get the header information (the same as `-I` would show)
-shown before the data by using `-i`/`--include`. Curl understands the
+shown before the data by using `-i`/`--include`. curl understands the
`-D`/`--dump-header` option when getting files from both FTP and HTTP, and it
then stores the headers in the specified file.
## User Agent
An HTTP request has the option to include information about the browser that
-generated the request. Curl allows it to be specified on the command line. It
+generated the request. curl allows it to be specified on the command line. It
is especially useful to fool or trick stupid servers or CGI scripts that only
accept certain browsers.
curl -b "name=Daniel" www.example.com
-Curl also has the ability to use previously received cookies in following
+curl also has the ability to use previously received cookies in following
sessions. If you get cookies from a server and store them in a file in a
manner similar to:
curl -L -b empty.txt www.example.com
The file to read cookies from must be formatted using plain HTTP headers OR as
-Netscape's cookie file. Curl determines what kind it is based on the file
+Netscape's cookie file. curl determines what kind it is based on the file
contents. In the above command, curl parses the header and stores the cookies
received from www.example.com. curl sends the stored cookies which match the
request to the server as it follows the location. The file `empty.txt` may be
## Speed Limit
-Curl allows the user to set the transfer speed conditions that must be met to
+curl allows the user to set the transfer speed conditions that must be met to
let the transfer keep going. By using the switches `-y` and `-Y` you can make
curl abort transfers if the transfer speed is below the specified lowest limit
for a specified time.
## Config File
-Curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
+curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
Microsoft Windows systems) from the user's home directory on startup.
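A minimal `.curlrc` sketch (illustrative values, not defaults): long option
names go one per line, without the leading dashes.

```
# ~/.curlrc — illustrative example
user-agent = "myagent/1.0"
connect-timeout = 10
silent
show-error
```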
The config file could be made up with normal command line switches, but you
## Environment Variables
-Curl reads and understands the following environment variables:
+curl reads and understands the following proxy related environment variables:
http_proxy, HTTPS_PROXY, FTP_PROXY
therefore most Unix programs do not read this file unless it is only readable
by yourself (curl does not care though).
-Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
+curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
`--netrc-optional` options). This is not restricted to just FTP, so curl can
use it for all protocols where authentication is used.
## Kerberos FTP Transfer
-Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the
+curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the
kerberos package installed and used at curl build time for it to be available.
First, get the krb-ticket the normal way, like with the `kinit`/`kauth` tool.
## TELNET
-The curl telnet support is basic and easy to use. Curl passes all data passed
+The curl telnet support is basic and easy to use. curl passes all data passed
to it on stdin to the remote server. Connect to a remote telnet server using a
command line similar to:
# Rustls
-[Rustls is a TLS backend written in Rust](https://docs.rs/rustls/). Curl can
+[Rustls is a TLS backend written in Rust](https://docs.rs/rustls/). curl can
be built to use it as an alternative to OpenSSL or other TLS backends. We use
the [rustls-ffi C bindings](https://github.com/rustls/rustls-ffi/). This
version of curl depends on version v0.14.0 of rustls-ffi.
SPDX-License-Identifier: curl
-->
-# The Art Of Scripting HTTP Requests Using Curl
+# The Art Of Scripting HTTP Requests Using curl
## Background
extract information from the web, to fake users, to post or upload data to
web servers are all important tasks today.
- Curl is a command line tool for doing all sorts of URL manipulations and
+ curl is a command line tool for doing all sorts of URL manipulations and
transfers, but this particular document focuses on how to use it when doing
HTTP requests for fun and profit. This document assumes that you know how to
invoke `curl --help` or `curl --manual` to get basic information about it.
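The built-in help needs no network access; a quick sketch:

```sh
# Print the short help listing the most common options
curl --help
```

`curl --manual` similarly prints the full manual text, when curl was built
with that support.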
- Curl is not written to do everything for you. It makes the requests, it gets
+ curl is not written to do everything for you. It makes the requests, it gets
the data, it sends data and it retrieves the information. You probably need
to glue everything together using some kind of script language or repeated
manual invokes.
new page keeping newly generated output. The header that tells the browser to
redirect is `Location:`.
- Curl does not follow `Location:` headers by default, but simply displays such
+ curl does not follow `Location:` headers by default, but simply displays such
pages in the same manner it displays all HTTP replies. It does however
feature an option that makes it attempt to follow the `Location:` pointers.
If you use curl to POST to a site that immediately redirects you to another
page, you can safely use [`--location`](https://curl.se/docs/manpage.html#-L)
- (`-L`) and `--data`/`--form` together. Curl only uses POST in the first
+ (`-L`) and `--data`/`--form` together. curl only uses POST in the first
request, and then reverts to GET in the following operations.
## Other redirects
[`--cookie-jar`](https://curl.se/docs/manpage.html#-c) option described
below is a better way to store cookies.)
- Curl has a full blown cookie parsing engine built-in that comes in use if you
+ curl has a full blown cookie parsing engine built-in that comes in use if you
want to reconnect to a server and use cookies that were stored from a
previous connection (or hand-crafted manually to fool the server into
believing you had a previous connection). To use previously stored cookies,
curl --cookie stored_cookies_in_file http://www.example.com
- Curl's "cookie engine" gets enabled when you use the
+ curl's "cookie engine" gets enabled when you use the
[`--cookie`](https://curl.se/docs/manpage.html#-b) option. If you only
want curl to understand received cookies, use `--cookie` with a file that
does not exist. Example, if you want to let curl understand cookies from a
curl --cookie nada --location http://www.example.com
- Curl has the ability to read and write cookie files that use the same file
+ curl has the ability to read and write cookie files that use the same file
format that Netscape and Mozilla once used. It is a convenient way to share
cookies between scripts or invokes. The `--cookie` (`-b`) switch
automatically detects if a given file is such a cookie file and parses it,
SSL (or TLS as the current version of the standard is called) offers a set of
advanced features to do secure transfers over HTTP.
- Curl supports encrypted fetches when built to use a TLS library and it can be
+ curl supports encrypted fetches when built to use a TLS library and it can be
built to use one out of a fairly large set of libraries - `curl -V` shows
which one your curl was built to use (if any). To get a page from an HTTPS
server, simply run curl like:
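Which TLS library a given curl binary was built with can be checked offline;
a quick sketch:

```sh
# The first line names the version and TLS backend(s) of this build;
# the Features line lists what the build supports
curl -V
```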
## Certificates
In the HTTPS world, you use certificates to validate that you are the one you
- claim to be, as an addition to normal passwords. Curl supports client- side
+ claim to be, as an addition to normal passwords. curl supports client-side
certificates. All certificates are locked with a passphrase, which you need
to enter before the certificate can be used by curl. The passphrase can be
specified on the command line or if not, entered interactively when curl
Version Numbers and Releases
============================
- Curl is not only curl. Curl is also libcurl. They are actually individually
+ The command line tool curl and the library libcurl are individually
versioned, but they usually follow each other closely.
The version numbering is always built up using the same system:
FTP session is used, an error code was sent over the control connection or
similar.
## 11
-FTP weird PASS reply. Curl could not parse the reply sent to the PASS request.
+FTP weird PASS reply. curl could not parse the reply sent to the PASS request.
## 12
During an active FTP session while waiting for the server to connect back to
curl, the timeout expired.
## 13
-FTP weird PASV reply, Curl could not parse the reply sent to the PASV request.
+FTP weird PASV reply. curl could not parse the reply sent to the PASV request.
## 14
-FTP weird 227 format. Curl could not parse the 227-line the server sent.
+FTP weird 227 format. curl could not parse the 227-line the server sent.
## 15
FTP cannot use host. Could not resolve the host IP we got in the 227-line.
## 16
error with the HTTP error code being 400 or above. This return code only
appears if --fail is used.
## 23
-Write error. Curl could not write data to a local filesystem or similar.
+Write error. curl could not write data to a local filesystem or similar.
## 25
Failed starting the upload. For FTP, the server typically denied the STOR
command.
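In shell scripts these numbers surface as curl's exit status. A minimal
sketch, using a `file://` URL that intentionally points at a nonexistent path
so the failure needs no network:

```sh
# Reading a nonexistent local file makes curl fail with a nonzero exit code
curl -s -o /dev/null file:///no/such/file
rc=$?
echo "curl exit code: $rc"
```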
# `--cookie-jar`
Specify to which file you want curl to write all cookies after a completed
-operation. Curl writes all cookies from its in-memory cookie storage to the
+operation. curl writes all cookies from its in-memory cookie storage to the
given file at the end of operations. Even if no cookies are known, a file is
created so that it removes any formerly existing cookies from the file. The
file uses the Netscape cookie file format. If you set the filename to a single
# `--disable-eprt`
Disable the use of the EPRT and LPRT commands when doing active FTP transfers.
-Curl normally first attempts to use EPRT before using PORT, but with this
+curl normally first attempts to use EPRT before using PORT, but with this
option, it uses PORT right away. EPRT is an extension to the original FTP
protocol, and does not work on all servers, but enables more functionality in
a better way than the traditional PORT command.
# `--disable-epsv`
-Disable the use of the EPSV command when doing passive FTP transfers. Curl
+Disable the use of the EPSV command when doing passive FTP transfers. curl
normally first attempts to use EPSV before PASV, but with this option, it does
not try EPSV.
# `--hostpubsha256`
Pass a string containing a Base64-encoded SHA256 hash of the remote host's
-public key. Curl refuses the connection with the host unless the hashes match.
+public key. curl refuses the connection with the host unless the hashes match.
This feature requires libcurl to be built with libssh2 and does not work with
other SSH backends.
Make curl scan the *.netrc* file in the user's home directory for login name
and password. This is typically used for FTP on Unix. If used with HTTP, curl
enables user authentication. See *netrc(5)* and *ftp(1)* for details on the
-file format. Curl does not complain if that file does not have the right
+file format. curl does not complain if that file does not have the right
permissions (it should be neither world- nor group-readable). The environment
variable "HOME" is used to find the home directory.
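A minimal `.netrc` sketch (hostname and credentials are made up):

```
machine ftp.example.com
login myname
password mysecret
```

Keeping the file readable only by its owner (`chmod 600`) matches the
permissions described above, though as noted curl does not complain either
way.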
# `--silent`
-Silent or quiet mode. Do not show progress meter or error messages. Makes Curl
+Silent or quiet mode. Do not show progress meter or error messages. Makes curl
mute. It still outputs the data you ask for, potentially even to the
terminal/stdout unless you redirect it.
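A sketch that runs without any network by fetching a local file over
`file://` (the temp file path is made up):

```sh
# Create a small local file and fetch it silently:
# no progress meter or errors, the data itself still reaches stdout
printf 'hello\n' > /tmp/curl-silent-demo.txt
curl -s file:///tmp/curl-silent-demo.txt
```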
## --prefix
-This is the prefix used when libcurl was installed. Libcurl is then installed
+This is the prefix used when libcurl was installed. libcurl is then installed
in $prefix/lib and its header files are installed in $prefix/include and so
on. The prefix is set with "configure --prefix".
CURLMsg *msg; /* for picking up messages with the transfer status */
int msgs_left; /* how many messages are left */
- /* Allocate one CURL handle per transfer */
+ /* Allocate one curl handle per transfer */
for(i = 0; i < HANDLECOUNT; i++)
handles[i] = curl_easy_init();
CURLMsg *msg; /* for picking up messages with the transfer status */
int msgs_left; /* how many messages are left */
- /* Allocate one CURL handle per transfer */
+ /* Allocate one curl handle per transfer */
for(i = 0; i < HANDLECOUNT; i++)
handles[i] = curl_easy_init();
curl_multi_cleanup(multi_handle);
- /* Free the CURL handles */
+ /* Free the curl handles */
for(i = 0; i < HANDLECOUNT; i++)
curl_easy_cleanup(handles[i]);
## read/write
-Its basic read/write functions have a similar signature and return code handling
-as many internal Curl read and write ones.
+Its basic read/write functions have a similar signature and return code
+handling as many internal curl read and write ones.
```
CURLcode Curl_bufq_unwrite(struct bufq *q, size_t len);
```
-This will remove `len` bytes from the end of the bufq again. When removing
-more bytes than are present, CURLE_AGAIN is returned and the bufq will be
-empty.
+This removes `len` bytes from the end of the bufq again. When removing more
+bytes than are present, CURLE_AGAIN is returned and bufq is cleared.
## lifetime
## Remove a node
Remove a node again from a list by calling `Curl_llist_remove()`. This
-will destroy the node's `elem` (e.g. calling a registered free function).
+destroys the node's `elem` (e.g. calling a registered free function).
-To remove a node without destroying it's `elem`, use
-`Curl_node_take_elem()` which returns the `elem` pointer and
-removes the node from the list. The caller then owns this pointer
-and has to take care of it.
+To remove a node without destroying its `elem`, use `Curl_node_take_elem()`
+which returns the `elem` pointer and removes the node from the list. The
+caller then owns this pointer and has to take care of it.
## Iterate
curl mqtt://host.home/bedroom/temp
-This will send an MQTT SUBSCRIBE packet for the topic `bedroom/temp` and listen in for incoming PUBLISH packets.
+This sends an MQTT SUBSCRIBE packet for the topic `bedroom/temp` and listens
+for incoming PUBLISH packets.
### Publishing
curl -d 75 mqtt://host.home/bedroom/dimmer
-This will send an MQTT PUBLISH packet to the topic `bedroom/dimmer` with the payload `75`.
+This sends an MQTT PUBLISH packet to the topic `bedroom/dimmer` with the
+payload `75`.
## What does curl deliver as a response to a subscribe
These differences between TLS protocol versions are reflected in curl's
handling of session tickets. More below.
-## Curl's `ssl_peer_key`
+## curl's `ssl_peer_key`
In order to find a ticket from a previous TLS session, curl
needs a name for TLS sessions that uniquely identifies the peer
Different configurations produce different keys which is just what
curl needs when handling SSL session tickets.
-One important thing: peer keys do not contain confidential
-information. If you configure a client certificate or SRP authentication
-with username/password, these will not be part of the peer key.
+One important thing: peer keys do not contain confidential information. If you
+configure a client certificate or SRP authentication with username/password,
+these are not part of the peer key.
However, peer keys carry the hostnames you use curl for. They *do*
leak the privacy of your communication. We recommend *not* persisting
peer keys for this reason.
-**Caveat**: The key may contain file names or paths. It does not
-reflect the *contents* in the filesystem. If you change `/etc/ssl/cert.pem`
-and reuse a previous ticket, curl might trust a server which no
-longer has a root certificate in the file.
+**Caveat**: The key may contain filenames or paths. It does not reflect the
+*contents* in the filesystem. If you change `/etc/ssl/cert.pem` and reuse a
+previous ticket, curl might trust a server which no longer has a root
+certificate in the file.
## Session Cache Access
When a new connection is being established, each SSL connection filter creates
its own peer_key and calls into the cache. The cache then looks for a ticket
with exactly this peer_key. Peer keys between proxy SSL filters and SSL
-filters talking through a tunnel will differ, as they talk to different
-peers.
+filters talking through a tunnel differ, as they talk to different peers.
If the connection filter wants to use a client certificate or SRP
-authentication, the cache will check those as well. If the cache peer
-carries client cert or SRP auth, the connection filter must have
-those with the same values (and vice versa).
-
-On a match, the connection filter gets the session ticket and feeds that
-to the TLS implementation which, on accepting it, will try to resume it
-for a shorter handshake. In addition, the filter gets the ALPN used
-before and the amount of 0-RTT data that the server announced to be
-willing to accept. The filter can then decide if it wants to attempt
-0-RTT or not. (The ALPN is needed to know if the server speaks the
-protocol you want to send in 0-RTT. It makes no sense to send HTTP/2
-requests to a server that only knows HTTP/1.1.)
+authentication, the cache checks those as well. If the cache peer carries
+client cert or SRP auth, the connection filter must have those with the same
+values (and vice versa).
+
+On a match, the connection filter gets the session ticket and feeds that to
+the TLS implementation which, on accepting it, tries to resume it for a
+shorter handshake. In addition, the filter gets the ALPN used before and the
+amount of 0-RTT data that the server announced to be willing to accept. The
+filter can then decide if it wants to attempt 0-RTT or not. (The ALPN is
+needed to know if the server speaks the protocol you want to send in 0-RTT. It
+makes no sense to send HTTP/2 requests to a server that only knows HTTP/1.1.)
#### Updates
a ticket from the cache, meaning a returned ticket is removed. The filter
then configures its TLS backend and *returns* the ticket to the cache.
-The cache needs to treat tickets from TLSv1.2 and 1.3 differently.
-1.2 tickets should be reused, but 1.3 tickets SHOULD NOT (RFC 8446).
-The session cache will simply drop 1.3 tickets when they are returned
-after use, but keep a 1.2 ticket.
+The cache needs to treat tickets from TLSv1.2 and 1.3 differently. 1.2 tickets
+should be reused, but 1.3 tickets SHOULD NOT (RFC 8446). The session cache
+simply drops 1.3 tickets when they are returned after use, but keeps a 1.2
+ticket.
When a ticket is *put* into the cache, there is also a difference. There
can be several 1.3 tickets at the same time, but only a single 1.2 ticket.
amount.
By having a "put/take/return" we reflect the 1.3 use case nicely. Two
-concurrent connections will not reuse the same ticket.
+concurrent connections do not reuse the same ticket.
## Session Ticket Persistence
#### Privacy and Security
-As mentioned above, ssl peer keys are not intended for storage in a
-file system. They'll clearly show which hosts the user talked to. This
-maybe "just" privacy relevant, but has security implications as an
-attacker might find worthy targets among your peer keys.
+As mentioned above, ssl peer keys are not intended for storage in a file
+system. They clearly show which hosts the user talked to. This may be "just"
+privacy relevant, but has security implications as an attacker might find
+worthy targets among your peer keys.
Also, we do not recommend persisting TLSv1.2 tickets.
#### Export
-The salt is generated randomly for each peer key on export. The
-SHA256 makes sure that the peer key cannot be reversed and that
-a slightly different key still produces a very different result.
+The salt is generated randomly for each peer key on export. The SHA256 makes
+sure that the peer key cannot be reversed and that a slightly different key
+still produces a different result.
-This means an attacker cannot just "grep" a session file for a
-particular entry, e.g. if they want to know if you accessed a
-specific host. They *can* however compute the SHA256 hashes for
-all salts in the file and find a specific entry. But they *cannot*
-find a hostname they do not know. They'd have to brute force by
-guessing.
+This means an attacker cannot just "grep" a session file for a particular
+entry, e.g. if they want to know if you accessed a specific host. They *can*
+however compute the SHA256 hashes for all salts in the file and find a
+specific entry. They *cannot* find a hostname they do not know. They would
+have to brute force by guessing.
#### Import
-When session tickets are imported from a file, curl only gets the
-salted hashes. The tickets imported will belong to an *unknown*
-peer key.
-
-When a connection filter tries to *take* a session ticket, it will
-pass its peer key. This peer key will initially not match any
-tickets in the cache. The cache then checks all entries with
-unknown peer keys if the passed key matches their salted hash. If
-it does, the peer key is recovered and remembered at the cache
-entry.
-
-This is a performance penalty in the order of "unknown" peer keys
-which will diminish over time when keys are rediscovered. Note that
-this also works for putting a new ticket into the cache: when no
-present entry matches, a new one with peer key is created. This
-peer key will then no longer bear the cost of hash computes.
+When session tickets are imported from a file, curl only gets the salted
+hashes. The imported tickets belong to an *unknown* peer key.
+
+When a connection filter tries to *take* a session ticket, it passes its peer
+key. This peer key initially does not match any tickets in the cache. The
+cache then checks all entries with unknown peer keys if the passed key matches
+their salted hash. If it does, the peer key is recovered and remembered at the
+cache entry.
+
+This is a performance penalty in the order of "unknown" peer keys which
+diminishes over time when keys are rediscovered. Note that this also works for
+putting a new ticket into the cache: when no present entry matches, a new one
+with peer key is created. This peer key then no longer bears the cost of hash
+computes.
# DESCRIPTION
-This function allocates and returns a CURL easy handle. Such a handle is used
-as input to other functions in the easy interface. This call must have a
+This function allocates and returns an easy handle. Such a handle is used as
+input to other functions in the easy interface. This call must have a
corresponding call to curl_easy_cleanup(3) when the operation is complete.
The easy handle is used to hold and control a single network transfer. It is
# DESCRIPTION
-Re-initializes all options previously set on a specified CURL handle to the
+Re-initializes all options previously set on a specified curl handle to the
default values. This puts back the handle to the same state as it was in when
it was just created with curl_easy_init(3).
The **ct** pointer is set to NULL or pointing to private memory. You MUST
NOT free it - it gets freed when you call curl_easy_cleanup(3) on the
-corresponding CURL handle.
+corresponding curl handle.
The modern way to get this header from a response is to instead use the
curl_easy_header(3) function.
The **methodp** pointer is NULL or points to private memory. You MUST NOT
free - it gets freed when you call curl_easy_cleanup(3) on the
-corresponding CURL handle.
+corresponding curl handle.
# %PROTOCOLS%
value you set with CURLOPT_URL(3).
The **urlp** pointer is NULL or points to private memory. You MUST NOT free
-- it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+- it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
+handle.
# %PROTOCOLS%
something is wrong.
The **path** pointer is NULL or points to private memory. You MUST NOT free
-- it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+- it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
+handle.
# %PROTOCOLS%
get a pointer to a memory area that is reused at next request so you need to
copy the string if you want to keep the information.
-The **ip** pointer is NULL or points to private memory. You MUST NOT free -
-it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+The **ip** pointer is NULL or points to private memory. You MUST NOT free - it
+gets freed when you call curl_easy_cleanup(3) on the corresponding curl
+handle.
# %PROTOCOLS%
most recent request.
The **hdrp** pointer is NULL or points to private memory you MUST NOT free -
-it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
+handle.
# %PROTOCOLS%
Applications wishing to resume an RTSP session on another connection should
retrieve this info before closing the active connection.
-The **id** pointer is NULL or points to private memory. You MUST NOT free -
-it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+The **id** pointer is NULL or points to private memory. You MUST NOT free - it
+gets freed when you call curl_easy_cleanup(3) on the corresponding curl
+handle.
# %PROTOCOLS%
The **scheme** pointer is NULL or points to private memory. You MUST NOT
free - it gets freed when you call curl_easy_cleanup(3) on the corresponding
-CURL handle.
+curl handle.
The returned scheme might be upper or lowercase. Do comparisons case
insensitively.
## CURL_PUSH_OK (0)
The application has accepted the stream and it can now start receiving data,
-the ownership of the CURL handle has been taken over by the application.
+the ownership of the curl handle has been taken over by the application.
## CURL_PUSH_DENY (1)
the DoH server must indicate that the server name is the same as the server
name to which you meant to connect, or the connection fails.
-Curl considers the DoH server the intended one when the Common Name field or a
+curl considers the DoH server the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the hostname in the
-DoH URL to which you told Curl to connect.
+DoH URL to which you told curl to connect.
When the *verify* value is set to 1L it is treated the same as 2L. However
for consistency with the other *VERIFYHOST* options we suggest using 2 and
only affects requests to the DoH server.
When negotiating a TLS or SSL connection, the server sends a certificate
-indicating its identity. Curl verifies whether the certificate is authentic,
+indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA
# DESCRIPTION
Pass a long to tell libcurl how to act on content decoding. If set to zero,
-content decoding is disabled. If set to 1 it is enabled. Libcurl has no
+content decoding is disabled. If set to 1 it is enabled. libcurl has no
default content decoding but requires you to use
CURLOPT_ACCEPT_ENCODING(3) for that.
This callback function gets called by libcurl as soon as it has received
interleaved RTP data. This function gets called for each $ block and therefore
-contains exactly one upper-layer protocol unit (e.g. one RTP packet). Curl
+contains exactly one upper-layer protocol unit (e.g. one RTP packet). curl
writes the interleaved header as well as the included data for each call. The
first byte is always an ASCII dollar sign. The dollar sign is followed by a
one byte channel identifier and then a 2 byte integer length in network byte
indicate that the server is the proxy to which you meant to connect to, or the
connection fails.
-Curl considers the proxy the intended one when the Common Name field or a
+curl considers the proxy the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the hostname in the
proxy string which you told curl to use.
ordinary HTTPS servers.
When negotiating a TLS or SSL connection, the server sends a certificate
-indicating its identity. Curl verifies whether the certificate is authentic,
+indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA
application to act and decide for libcurl how to proceed. The callback is only
called if CURLOPT_SSH_KNOWNHOSTS(3) is also set.
-This callback function gets passed the CURL handle, the key from the
-known_hosts file *knownkey*, the key from the remote site *foundkey*,
-info from libcurl on the matching status and a custom pointer (set with
-CURLOPT_SSH_KEYDATA(3)). It MUST return one of the following return
-codes to tell libcurl how to act:
+This callback function gets passed the curl handle, the key from the
+known_hosts file *knownkey*, the key from the remote site *foundkey*, info
+from libcurl on the matching status and a custom pointer (set with
+CURLOPT_SSH_KEYDATA(3)). It MUST return one of the following return codes to
+tell libcurl how to act:
## CURLKHSTAT_FINE_REPLACE
certificate. A value of 1 means curl verifies; 0 (zero) means it does not.
When negotiating a TLS or SSL connection, the server sends a certificate
-indicating its identity. Curl verifies whether the certificate is authentic,
+indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA
# DESCRIPTION
-Pass a CURL pointer in *dephandle* to identify the stream within the same
+Pass a `CURL` pointer in *dephandle* to identify the stream within the same
connection that this stream is depending upon exclusively. That means it
depends on it and sets the Exclusive bit.
/* Set stream weight, 1 - 256 (default is 16) */
CURLOPT(CURLOPT_STREAM_WEIGHT, CURLOPTTYPE_LONG, 239),
- /* Set stream dependency on another CURL handle */
+ /* Set stream dependency on another curl handle */
CURLOPT(CURLOPT_STREAM_DEPENDS, CURLOPTTYPE_OBJECTPOINT, 240),
- /* Set E-xclusive stream dependency on another CURL handle */
+ /* Set E-xclusive stream dependency on another curl handle */
CURLOPT(CURLOPT_STREAM_DEPENDS_E, CURLOPTTYPE_OBJECTPOINT, 241),
/* Do not send any tftp option requests to the server */
*
* DESCRIPTION
*
- * Re-initializes a CURL handle to the default values. This puts back the
+ * Re-initializes a curl handle to the default values. This puts back the
* handle to the same state as it was in when it was just created.
*
* It does keep: live connections, the Session ID cache, the DNS cache and the
/*
* The automagic conversion from IPv4 literals to IPv6 literals only
* works if the SCDynamicStoreCopyProxies system function gets called
- * first. As Curl currently does not support system-wide HTTP proxies, we
+ * first. As curl currently does not support system-wide HTTP proxies, we
* therefore do not use any value this function might return.
*
* This function is only available on macOS and is not needed for
/*
CURL_SOCKET_HASH_TABLE_SIZE should be a prime number. Increasing it from 97
to 911 takes on a 32-bit machine 4 x 804 = 3211 more bytes. Still, every
- CURL handle takes 45-50 K memory, therefore this 3K are not significant.
+   curl handle takes 6K memory, therefore this 3K is not significant.
*/
#ifndef CURL_SOCKET_HASH_TABLE_SIZE
#define CURL_SOCKET_HASH_TABLE_SIZE 911
#define USE_UPPERCASE_KRBAPI 1
-/* AI_NUMERICHOST needed for IP V6 support in Curl */
+/* AI_NUMERICHOST needed for IPv6 support in curl */
#ifdef HAVE_NETDB_H
#include <netdb.h>
#ifndef AI_NUMERICHOST
struct ssl_peer;
-/* Struct to hold a Curl OpenSSL instance */
+/* Struct to hold a curl OpenSSL instance */
struct ossl_ctx {
/* these ones requires specific SSL-types */
SSL_CTX* ssl_ctx;
This is a true OS/400 ILE implementation, not a PASE implementation (for
PASE, use AIX implementation).
- The biggest problem with OS/400 is EBCDIC. Libcurl implements an internal
+ The biggest problem with OS/400 is EBCDIC. libcurl implements an internal
conversion mechanism, but it has been designed for computers that have a
single native character set. OS/400 default native character set varies
-depending on the country for which it has been localized. And more, a job
+depending on the country for which it has been localized. Further, a job
may dynamically alter its "native" character set.
Several characters that do not have fixed code in EBCDIC variants are
used in libcurl strings. As a consequence, using the existing conversion
Another OS/400 problem comes from the fact that the last fixed argument of a
vararg procedure may not be of type char, unsigned char, short or unsigned
short. Enums that are internally implemented by the C compiler as one of these
-types are also forbidden. Libcurl uses enums as vararg procedure tagfields...
+types are also forbidden. libcurl uses enums as vararg procedure tagfields...
Happily, there is a pragma forcing enums to type "int". The original libcurl
header files are thus altered during build process to use this pragma, in
order to force libcurl enums of being type int (the pragma disposition in use
should be released with curl_free() after use, as opposite to the non-ccsid
versions of these procedures.
Please note that HTTP2 is not (yet) implemented on OS/400, thus these
-functions will always return NULL.
+functions always return NULL.
_ curl_easy_option_by_name_ccsid() returns a pointer to an untranslated option
metadata structure. As each curl_easyoption structure holds the option name in
_ curl_from_ccsid() and curl_to_ccsid() are string encoding conversion
functions between ASCII (latin1) and the given CCSID. The first parameter is
the source string, the second is the CCSID and the returned value is a pointer
-to the dynamically allocated string. These functions do not impact on Curl's
+to the dynamically allocated string. These functions do not affect curl's
behavior and are only provided for user convenience. After use, returned values
must be released with curl_free().
- Standard compilation environment does support neither autotools nor make;
-in fact, very few common utilities are available. As a consequence, the
-config-os400.h has been coded manually and the compilation scripts are
-a set of shell scripts stored in subdirectory packages/OS400.
+ The standard compilation environment supports neither autotools nor make; in
+fact, few common utilities are available. As a consequence, config-os400.h has
+been coded manually and the compilation scripts are a set of shell scripts
+stored in subdirectory packages/OS400.
The test environment is currently not supported on OS/400.
Compiling on OS/400:
These instructions targets people who knows about OS/400, compiling, IFS and
-archive extraction. Do not ask questions about these subjects if you're not
+archive extraction. Do not ask questions about these subjects if you are not
familiar with.
_ As a prerequisite, QADRT development environment must be installed.
_ Examine the makelog file to check for compilation errors. CZM0383 warnings on
C or system standard API come from QADRT inlining and can safely be ignored.
- Without configuration parameters override, this will produce the following
+ Without configuration parameters override, this produces the following
OS/400 objects:
-_ Library CURL. All other objects will be stored in this library.
+_ Library CURL. All other objects are stored in this library.
_ Modules for all libcurl units.
_ Binding directory CURL_A, to be used at calling program link time for
statically binding the modules (specify BNDSRVPGM(QADRTTS QGLDCLNT QGLDBRDR)
when creating a program using CURL_A).
_ Service program CURL.<soname>, where <soname> is extracted from the
- lib/Makefile.am VERSION variable. To be used at calling program run-time
+ lib/Makefile.am VERSION variable. To be used at calling program runtime
when this program has dynamically bound curl at link time.
_ Binding directory CURL. To be used to dynamically bind libcurl when linking a
calling program.
_ Source file H. It contains all the include members needed to compile a C/C++
module using libcurl, and an ILE/RPG /copy member for support in this
language.
This document describes how to compile, build and install curl and libcurl
from sources using legacy versions of Visual Studio 2010 - 2013.
-You will need to generate the project files before using them. Please run
-"generate -help" for usage details.
+You need to generate the project files before using them. Please run "generate
+-help" for usage details.
To generate project files for recent versions of Visual Studio instead, use
cmake. Refer to INSTALL-CMAKE in the docs directory.
The projects files also support build configurations that require third party
dependencies such as OpenSSL and libssh2. If you wish to support these, you
-will also need to download and compile those libraries as well.
+also need to download and compile those libraries as well.
To support compilation of these libraries using different versions of
compilers, the following directory structure has been used for both the output
|_VC <version>
|_<configuration>
-As OpenSSL doesn't support side-by-side compilation when using different
-versions of Visual Studio, a helper batch file has been provided to assist with
-this. Please run `build-openssl -help` for usage details.
+As OpenSSL does not support side-by-side compilation when using different
+versions of Visual Studio, a helper batch file has been provided to assist
+with this. Please run `build-openssl -help` for usage details.
## Building with Visual C++
-To build with VC++, you will of course have to first install VC++ which is
-part of Visual Studio.
+To build with VC++, you have to first install VC++ which is part of Visual
+Studio.
Once you have VC++ installed you should launch the application and open one of
the solution or workspace files. The VC directory names are based on the
-version of Visual C++ that you will be using. Each version of Visual Studio
-has a default version of Visual C++. We offer these versions:
+version of Visual C++ that you use. Each version of Visual Studio has a
+default version of Visual C++. We offer these versions:
- VC10 (Visual Studio 2010 Version 10.0)
- VC11 (Visual Studio 2012 Version 11.0)
## Running DLL based configurations
If you are a developer and plan to run the curl tool from Visual Studio with
-any third-party libraries (such as OpenSSL or libssh2) then you will
-need to add the search path of these DLLs to the configuration's PATH
-environment. To do that:
+any third-party libraries (such as OpenSSL or libssh2) then you need to add
+the search path of these DLLs to the configuration's PATH environment. To do
+that:
1. Open the 'curl-all.sln' or 'curl.sln' solutions
2. Right-click on the 'curl' project and select Properties
C:\Windows;C:\Windows\System32\Wbem
If you are using a configuration that uses multiple third-party library DLLs
-(such as DLL Debug - DLL OpenSSL - DLL libssh2) then 'Path to DLL' will need
-to contain the path to both of these.
+(such as DLL Debug - DLL OpenSSL - DLL libssh2) then 'Path to DLL' needs to
+contain the path to both of these.
## Notes
bugs in the project files that need correcting, and would like to submit
updated files back then please note that, whilst the solution files can be
edited directly, the templates for the project files (which are stored in the
-git repository) will need to be modified rather than the generated project
-files that Visual Studio uses.
+git repository) need to be modified rather than the generated project files
+that Visual Studio uses.
## Legacy Windows and SSL
Some of the project configurations use Schannel (Windows SSPI), the native SSL
library that comes with the Windows OS. Schannel in Windows 8 and earlier is
not able to connect to servers that no longer support the legacy handshakes
-and algorithms used by those versions. If you will be using curl in one of
-those earlier versions of Windows you should choose another SSL backend like
+and algorithms used by those versions. If you are using curl in one of those
+earlier versions of Windows you should choose another SSL backend like
OpenSSL.
* feature macro settings, and one of the exit routines is hidden at compile
* time.
*
- * Since we want Curl to work properly under the VMS DCL shell and Unix
+ * Since we want curl to work properly under the VMS DCL shell and Unix
* shells under VMS, this routine should compile correctly regardless of
* the settings.
*/
# Continuous Integration for curl
-Curl runs in many different environments, so every change is run against a
+curl runs in many different environments, so every change is run against a
large number of test suites.
Every pull request is verified for each of the following:
- macOS tests with a variety of different compilation options
- Fuzz tests ([see the curl-fuzzer repo for more
info](https://github.com/curl/curl-fuzzer)).
-- Curl compiled using the Rust TLS backend with Hyper
+- curl compiled using the Rust TLS backend with Hyper
These are each configured in different files in `.github/workflows`.
# Usage
-The test cases and necessary files are in `tests/http`. You can invoke `pytest` from there or from the top level curl checkout and it will find all tests.
+The test cases and necessary files are in `tests/http`. You can invoke
+`pytest` from there or from the top level curl checkout and it finds all
+tests.
```
curl> pytest test/http
runs all test cases that have `test_01_02` in their name. This does not have to be the start of the name.
-Depending on your setup, some test cases may be skipped and appear as `s` in the output. If you run pytest verbose, it will also give you the reason for skipping.
+Depending on your setup, some test cases may be skipped and appear as `s` in
+the output. If you run pytest verbose, it also gives you the reason for
+skipping.
# Prerequisites
-You will need:
+You need:
1. a recent Python, the `cryptography` module and, of course, `pytest`
-2. an apache httpd development version. On Debian/Ubuntu, the package `apache2-dev` has this.
+2. an apache httpd development version. On Debian/Ubuntu, the package `apache2-dev` has this
3. a local `curl` project build
-3. optionally, a `nghttpx` with HTTP/3 enabled or h3 test cases will be skipped.
+4. optionally, a `nghttpx` with HTTP/3 enabled or h3 test cases are skipped
### Configuration
In `conftest.py` 3 "fixtures" are defined that are used by all test cases:
-1. `env`: the test environment. It is an instance of class `testenv/env.py:Env`. It holds all information about paths, availability of features (HTTP/3), port numbers to use, domains and SSL certificates for those.
-2. `httpd`: the Apache httpd instance, configured and started, then stopped at the end of the test suite. It has sites configured for the domains from `env`. It also loads a local module `mod_curltest?` and makes it available in certain locations. (more on mod_curltest below).
-3. `nghttpx`: an instance of nghttpx that provides HTTP/3 support. `nghttpx` proxies those requests to the `httpd` server. In a direct mapping, so you may access all the resources under the same path as with HTTP/2. Only the port number used for HTTP/3 requests will be different.
-
-`pytest` manages these fixture so that they are created once and terminated before exit. This means you can `Ctrl-C` a running pytest and the server will shutdown. Only when you brutally chop its head off, might there be servers left
-behind.
+1. `env`: the test environment. It is an instance of class
+ `testenv/env.py:Env`. It holds all information about paths, availability of
+ features (HTTP/3), port numbers to use, domains and SSL certificates for
+ those.
+2. `httpd`: the Apache httpd instance, configured and started, then stopped at
+ the end of the test suite. It has sites configured for the domains from
+ `env`. It also loads a local module `mod_curltest?` and makes it available
+ in certain locations. (more on mod_curltest below).
+3. `nghttpx`: an instance of nghttpx that provides HTTP/3 support. `nghttpx`
+ proxies those requests to the `httpd` server. In a direct mapping, so you
+ may access all the resources under the same path as with HTTP/2. Only the
+   port number used for HTTP/3 requests is different.
+
+`pytest` manages these fixtures so that they are created once and terminated
+before exit. This means you can `Ctrl-C` a running pytest and the servers
+shut down. Only when you brutally chop its head off, might there be servers
+left behind.
### Test Cases
* `s`: seconds (the default)
* `ms`: milliseconds
-As you can see, `mod_curltest`'s tweak handler allow to simulate many kinds of responses. An example of its use is `test_03_01` where responses are delayed using `chunk_delay`. This gives the response a defined duration and the test uses that to reload `httpd` in the middle of the first request. A graceful reload in httpd lets ongoing requests finish, but will close the connection afterwards and tear down the serving process. The following request need then to open a new connection. This is verified by the test case.
+As you can see, `mod_curltest`'s tweak handler allows simulating many kinds of
+responses. An example of its use is `test_03_01` where responses are delayed
+using `chunk_delay`. This gives the response a defined duration and the test
+uses that to reload `httpd` in the middle of the first request. A graceful
+reload in httpd lets ongoing requests finish, but closes the connection
+afterwards and tears down the serving process. The following request then
+needs to open a new connection. This is verified by the test case.
global_init(CURL_GLOBAL_ALL);
- /* Allocate one CURL handle per transfer */
+ /* Allocate one curl handle per transfer */
easy = curl_easy_init();
/* init a multi stack */
test_cleanup:
curl_multi_cleanup(multi_handle);
- /* Free the CURL handles */
+ /* Free the curl handles */
curl_easy_cleanup(easy);
curl_global_cleanup();
## Build Unit Tests
`./configure --enable-debug` is required for the unit tests to build. To
-enable unit tests, there will be a separate static libcurl built that will be
-used exclusively for linking unit test programs. Just build everything as
-normal, and then you can run the unit test cases as well.
+enable unit tests, there is a separate static libcurl built that is used
+exclusively for linking unit test programs. Just build everything as normal,
+and then you can run the unit test cases as well.
## Run Unit Tests
## Debug Unit Tests
-If a specific test fails you will get told. The test case then has output left
-in the %LOGDIR subdirectory, but most importantly you can re-run the test again
+If a specific test fails you get told. The test case then has output left in
+the %LOGDIR subdirectory, but most importantly you can re-run the test again
using gdb by doing `./runtests.pl -g NNNN`. That is, add a `-g` to make it
start up gdb and run the same case using that.
previously unused number.
Add your test to `tests/unit/Makefile.inc` (if it is a unit test). Add your
-test data file name to `tests/data/Makefile.am`
+test data filename to `tests/data/Makefile.am`
You also need a separate file called `tests/data/testNNNN` (using the same
number) that describes your test case. See the test1300 file for inspiration
# Building curl with Visual C++\r
\r
This document describes how to compile, build and install curl and libcurl\r
- from sources using the Visual C++ build tool. To build with VC++, you will of\r
- course have to first install VC++. The minimum required version of VC is 6\r
- (part of Visual Studio 6). However using a more recent version is strongly\r
- recommended.\r
+ from sources using the Visual C++ build tool. To build with VC++, you have to\r
+ first install VC++. The minimum required version of VC is 6 (part of Visual\r
+ Studio 6). However using a more recent version is strongly recommended.\r
\r
VC++ is also part of the Windows Platform SDK. You do not have to install the\r
full Visual Studio or Visual C++ if all you want is to build curl.\r
\r
## Prerequisites\r
\r
- If you wish to support zlib, OpenSSL, c-ares, ssh2, you will have to download\r
- them separately and copy them to the `deps` directory as shown below:\r
+ If you wish to support zlib, OpenSSL, c-ares, ssh2, you have to download them\r
+ separately and copy them to the `deps` directory as shown below:\r
\r
somedirectory\\r
|_curl-src\r
\r
## Build in the console\r
\r
- Once you are in the console, go to the winbuild directory in the Curl\r
+ Once you are in the console, go to the winbuild directory in the curl\r
sources:\r
\r
cd curl-src\winbuild\r
\r
Then you can call `nmake /f Makefile.vc` with the desired options (see\r
- below). The builds will be in the top src directory, `builds\` directory, in\r
- a directory named using the options given to the nmake call.\r
+ below). The builds are in the top src directory, `builds\` directory, in a\r
+ directory named using the options given to the nmake call.\r
\r
nmake /f Makefile.vc mode=<static or dll> <options>\r
\r
\r
## Static linking of Microsoft's C runtime (CRT):\r
\r
- If you are using mode=static nmake will create and link to the static build\r
- of libcurl but *not* the static CRT. If you must you can force nmake to link\r
- in the static CRT by passing `RTLIBCFG=static`. Typically you shouldn't use\r
- that option, and nmake will default to the DLL CRT. `RTLIBCFG` is rarely used\r
- and therefore rarely tested. When passing `RTLIBCFG` for a configuration that\r
- was already built but not with that option, or if the option was specified\r
+ If you are using mode=static, nmake creates and links to the static build of\r
+ libcurl but *not* the static CRT. If you must you can force nmake to link in\r
+ the static CRT by passing `RTLIBCFG=static`. Typically you should not use\r
+ that option, and nmake defaults to the DLL CRT. `RTLIBCFG` is rarely used and\r
+ therefore rarely tested. When passing `RTLIBCFG` for a configuration that was\r
+ already built but not with that option, or if the option was specified\r
differently, you must destroy the build directory containing the\r
configuration so that nmake can build it from scratch.\r
\r
\r
## Building your own application with libcurl (Visual Studio example)\r
\r
- When you build curl and libcurl, nmake will show the relative path where the\r
- output directory is. The output directory is named from the options nmake used\r
- when building. You may also see temp directories of the same name but with\r
- suffixes -obj-curl and -obj-lib.\r
+ When you build curl and libcurl, nmake shows the relative path where the\r
+ output directory is. The output directory is named from the options nmake\r
+ used when building. You may also see temp directories of the same name but\r
+ with suffixes -obj-curl and -obj-lib.\r
\r
- For example let's say you've built curl.exe and libcurl.dll from the Visual\r
+ For example let's say you have built curl.exe and libcurl.dll from the Visual\r
Studio 2010 x64 Win64 Command Prompt:\r
\r
nmake /f Makefile.vc mode=dll VC=10\r
\r
- The output directory will have a name similar to\r
+ The output directory has a name similar to\r
`..\builds\libcurl-vc10-x64-release-dll-ipv6-sspi-schannel`.\r
\r
The output directory contains subdirectories bin, lib and include. Those are\r
need to make a separate x86 build of libcurl.\r
\r
If you build libcurl static (`mode=static`) or debug (`DEBUG=yes`) then the\r
- library name will vary and separate builds may be necessary for separate\r
+ library name varies and separate builds may be necessary for separate\r
configurations of your project within the same platform. This is discussed in\r
the next section.\r
\r
## Building your own application with a static libcurl\r
\r
When building an application that uses the static libcurl library on Windows,\r
- you must define `CURL_STATICLIB`. Otherwise the linker will look for dynamic\r
+ you must define `CURL_STATICLIB`. Otherwise the linker looks for dynamic\r
import symbols.\r
\r
The static library name has an `_a` suffix in the basename and the debug\r
## Legacy Windows and SSL\r
\r
When you build curl using the build files in this directory the default SSL\r
- backend will be Schannel (Windows SSPI), the native SSL library that comes\r
- with the Windows OS. Schannel in Windows 8 and earlier is not able to connect\r
- to servers that no longer support the legacy handshakes and algorithms used by\r
- those versions. If you will be using curl in one of those earlier versions of\r
+ backend is Schannel (Windows SSPI), the native SSL library that comes with\r
+ the Windows OS. Schannel in Windows 8 and earlier is not able to connect to\r
+ servers that no longer support the legacy handshakes and algorithms used by\r
+ those versions. If you are using curl in one of those earlier versions of\r
Windows you should choose another SSL backend like OpenSSL.\r