Based on the standards and guidelines we use for our documentation:
- expand contractions (they're => they are etc)
- host name => hostname
- file name => filename
- user name => username
- man page => manpage
- run-time => runtime
- set-up => setup
- back-end => backend
- a HTTP => an HTTP
- two spaces after a period => one space after a period
Closes #14073
file names\b:filenames
\buser name\b:username
\buser names\b:usernames
+\bpass phrase:passphrase
didn't:did not
doesn't:does not
won't:will not
## Certificates
- In the HTTPS world, you use certificates to validate that you are the one
- you claim to be, as an addition to normal passwords. Curl supports client-
- side certificates. All certificates are locked with a pass phrase, which you
- need to enter before the certificate can be used by curl. The pass phrase
- can be specified on the command line or if not, entered interactively when
- curl queries for it. Use a certificate with curl on an HTTPS server like:
+ In the HTTPS world, you use certificates to validate that you are the one you
+ claim to be, as an addition to normal passwords. Curl supports client-side
+ certificates. All certificates are locked with a passphrase, which you need
+ to enter before the certificate can be used by curl. The passphrase can be
+ specified on the command line or if not, entered interactively when curl
+ queries for it. Use a certificate with curl on an HTTPS server like:
curl --cert mycert.pem https://secure.example.com
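As an illustration only (the passphrase shown here is made up), the passphrase
can also be supplied non-interactively, either appended to the certificate
argument or with --pass for a separate private key:

 curl --cert mycert.pem:mysecret https://secure.example.com
 curl --cert mycert.pem --key mykey.pem --pass mysecret https://secure.example.com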
SPDX-License-Identifier: curl
Long: pass
Arg: <phrase>
-Help: Pass phrase for the private key
+Help: Passphrase for the private key
Protocols: SSH TLS
Category: ssh tls auth
Added: 7.9.3
SPDX-License-Identifier: curl
Long: proxy-pass
Arg: <phrase>
-Help: Pass phrase for the private key for HTTPS proxy
+Help: Passphrase for the private key for HTTPS proxy
Added: 7.52.0
Category: proxy tls auth
Multi: single
Pass a pointer to a null-terminated string as parameter. It is used as the
password required to use the CURLOPT_SSLKEY(3) or
-CURLOPT_SSH_PRIVATE_KEYFILE(3) private key. You never need a pass phrase to
+CURLOPT_SSH_PRIVATE_KEYFILE(3) private key. You never need a passphrase to
load a certificate but you need one to load your private key.
The application does not have to keep the string around after setting this
This option is for connecting to an HTTPS proxy, not an HTTPS server.
Pass a pointer to a null-terminated string as parameter. It is used as the
-password required to use the CURLOPT_PROXY_SSLKEY(3) private key. You
-never need a pass phrase to load a certificate but you need one to load your
-private key.
+password required to use the CURLOPT_PROXY_SSLKEY(3) private key. You never
+need a passphrase to load a certificate but you need one to load your private
+key.
The application does not have to keep the string around after setting this
option.
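As an illustration only (filenames and the passphrase are made up, and 'curl'
is assumed to be an initialized easy handle), an application typically sets
the passphrase together with the certificate and key options:

  curl_easy_setopt(curl, CURLOPT_SSLCERT, "client.pem");
  curl_easy_setopt(curl, CURLOPT_SSLKEY, "key.pem");
  curl_easy_setopt(curl, CURLOPT_KEYPASSWD, "s3cret");

CURLOPT_PROXY_KEYPASSWD plays the same role for the CURLOPT_PROXY_SSLKEY
private key when talking to an HTTPS proxy.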
Public include files for libcurl, external users.
-They're all placed in the curl subdirectory here for better fit in any kind of
+They are all placed in the curl subdirectory here for better fit in any kind of
environment. You must include files from here using...
#include <curl/curl.h>
#endif
#include "curlver.h" /* libcurl version defines */
-#include "system.h" /* determine things run-time */
+#include "system.h" /* determine things runtime */
#include <stdio.h>
#include <limits.h>
#if !(defined(_WINSOCKAPI_) || defined(_WINSOCK_H) || \
defined(__LWIP_OPT_H__) || defined(LWIP_HDR_OPT_H))
/* The check above prevents the winsock2 inclusion if winsock.h already was
- included, since they can't co-exist without problems */
+ included, since they cannot co-exist without problems */
#include <winsock2.h>
#include <ws2tcpip.h>
#endif
files */
long flags; /* as defined below */
-/* specified content is a file name */
+/* specified content is a filename */
#define CURL_HTTPPOST_FILENAME (1<<0)
-/* specified content is a file name */
+/* specified content is a filename */
#define CURL_HTTPPOST_READFILE (1<<1)
/* name is only stored pointer do not free in formfree */
#define CURL_HTTPPOST_PTRNAME (1<<2)
/* use size in 'contentlen', added in 7.46.0 */
#define CURL_HTTPPOST_LARGE (1<<7)
- char *showfilename; /* The file name to show. If not set, the
- actual file name will be used (if this
+ char *showfilename; /* The filename to show. If not set, the
+ actual filename will be used (if this
is a file part) */
void *userp; /* custom pointer used for
HTTPPOST_CALLBACK posts */
download of an individual chunk finished.
Note! After this callback was set then it have to be called FOR ALL chunks.
Even if downloading of this chunk was skipped in CHUNK_BGN_FUNC.
- This is the reason why we don't need "transfer_info" parameter in this
+ This is the reason why we do not need "transfer_info" parameter in this
callback and we are not interested in "remains" parameter too. */
typedef long (*curl_chunk_end_callback)(void *ptr);
/* return codes for FNMATCHFUNCTION */
#define CURL_FNMATCHFUNC_MATCH 0 /* string corresponds to the pattern */
-#define CURL_FNMATCHFUNC_NOMATCH 1 /* pattern doesn't match the string */
+#define CURL_FNMATCHFUNC_NOMATCH 1 /* pattern does not match the string */
#define CURL_FNMATCHFUNC_FAIL 2 /* an error occurred */
/* callback type for wildcard downloading pattern matching. If the
/* These are the return codes for the seek callbacks */
#define CURL_SEEKFUNC_OK 0
#define CURL_SEEKFUNC_FAIL 1 /* fail the entire transfer */
-#define CURL_SEEKFUNC_CANTSEEK 2 /* tell libcurl seeking can't be done, so
+#define CURL_SEEKFUNC_CANTSEEK 2 /* tell libcurl seeking cannot be done, so
libcurl might try other means instead */
typedef int (*curl_seek_callback)(void *instream,
curl_off_t offset,
#ifndef CURL_DID_MEMORY_FUNC_TYPEDEFS
/*
* The following typedef's are signatures of malloc, free, realloc, strdup and
- * calloc respectively. Function pointers of these types can be passed to the
+ * calloc respectively. Function pointers of these types can be passed to the
* curl_global_init_mem() function to set user defined memory management
* callback routines.
*/
CURLE_WRITE_ERROR, /* 23 */
CURLE_OBSOLETE24, /* 24 - NOT USED */
CURLE_UPLOAD_FAILED, /* 25 - failed upload "command" */
- CURLE_READ_ERROR, /* 26 - couldn't open/read from file */
+ CURLE_READ_ERROR, /* 26 - could not open/read from file */
CURLE_OUT_OF_MEMORY, /* 27 */
CURLE_OPERATION_TIMEDOUT, /* 28 - the timeout time was reached */
CURLE_OBSOLETE29, /* 29 - NOT USED */
CURLE_FTP_PORT_FAILED, /* 30 - FTP PORT operation failed */
CURLE_FTP_COULDNT_USE_REST, /* 31 - the REST command failed */
CURLE_OBSOLETE32, /* 32 - NOT USED */
- CURLE_RANGE_ERROR, /* 33 - RANGE "command" didn't work */
+ CURLE_RANGE_ERROR, /* 33 - RANGE "command" did not work */
CURLE_HTTP_POST_ERROR, /* 34 */
CURLE_SSL_CONNECT_ERROR, /* 35 - wrong when connecting with SSL */
- CURLE_BAD_DOWNLOAD_RESUME, /* 36 - couldn't resume download */
+ CURLE_BAD_DOWNLOAD_RESUME, /* 36 - could not resume download */
CURLE_FILE_COULDNT_READ_FILE, /* 37 */
CURLE_LDAP_CANNOT_BIND, /* 38 */
CURLE_LDAP_SEARCH_FAILED, /* 39 */
CURLE_RECV_ERROR, /* 56 - failure in receiving network data */
CURLE_OBSOLETE57, /* 57 - NOT IN USE */
CURLE_SSL_CERTPROBLEM, /* 58 - problem with the local certificate */
- CURLE_SSL_CIPHER, /* 59 - couldn't use specified cipher */
+ CURLE_SSL_CIPHER, /* 59 - could not use specified cipher */
CURLE_PEER_FAILED_VERIFICATION, /* 60 - peer's certificate or fingerprint
- wasn't verified fine */
+ was not verified fine */
CURLE_BAD_CONTENT_ENCODING, /* 61 - Unrecognized/bad encoding */
CURLE_OBSOLETE62, /* 62 - NOT IN USE since 7.82.0 */
CURLE_FILESIZE_EXCEEDED, /* 63 - Maximum file size exceeded */
CURLE_SSL_SHUTDOWN_FAILED, /* 80 - Failed to shut down the SSL
connection */
CURLE_AGAIN, /* 81 - socket is not ready for send/recv,
- wait till it's ready and try again (Added
+ wait till it is ready and try again (Added
in 7.18.2) */
CURLE_SSL_CRL_BADFILE, /* 82 - could not load CRL file, missing or
wrong format (Added in 7.19.0) */
CURLPROXY_SOCKS5 = 5, /* added in 7.10 */
CURLPROXY_SOCKS4A = 6, /* added in 7.18.0 */
CURLPROXY_SOCKS5_HOSTNAME = 7 /* Use the SOCKS5 protocol but pass along the
- host name rather than the IP address. added
+ hostname rather than the IP address. added
in 7.18.0 */
} curl_proxytype; /* this enum was added in 7.10 */
CURLKHSTAT_FINE_ADD_TO_FILE,
CURLKHSTAT_FINE,
CURLKHSTAT_REJECT, /* reject the connection, return an error */
- CURLKHSTAT_DEFER, /* do not accept it, but we can't answer right now.
+ CURLKHSTAT_DEFER, /* do not accept it, but we cannot answer right now.
Causes a CURLE_PEER_FAILED_VERIFICATION error but the
connection will be left intact etc */
CURLKHSTAT_FINE_REPLACE, /* accept and replace the wrong key */
#define CURLOPT(na,t,nu) na = t + nu
#define CURLOPTDEPRECATED(na,t,nu,v,m) na CURL_DEPRECATED(v,m) = t + nu
-/* CURLOPT aliases that make no run-time difference */
+/* CURLOPT aliases that make no runtime difference */
/* 'char *' argument to a string with a trailing zero */
#define CURLOPTTYPE_STRINGPOINT CURLOPTTYPE_OBJECTPOINT
*
* For large file support, there is also a _LARGE version of the key
* which takes an off_t type, allowing platforms with larger off_t
- * sizes to handle larger files. See below for INFILESIZE_LARGE.
+ * sizes to handle larger files. See below for INFILESIZE_LARGE.
*/
CURLOPT(CURLOPT_INFILESIZE, CURLOPTTYPE_LONG, 14),
*
* Note there is also a _LARGE version of this key which uses
* off_t types, allowing for large file offsets on platforms which
- * use larger-than-32-bit off_t's. Look below for RESUME_FROM_LARGE.
+ * use larger-than-32-bit off_t's. Look below for RESUME_FROM_LARGE.
*/
CURLOPT(CURLOPT_RESUME_FROM, CURLOPTTYPE_LONG, 21),
/* Set the interface string to use as outgoing network interface */
CURLOPT(CURLOPT_INTERFACE, CURLOPTTYPE_STRINGPOINT, 62),
- /* Set the krb4/5 security level, this also enables krb4/5 awareness. This
- * is a string, 'clear', 'safe', 'confidential' or 'private'. If the string
- * is set but doesn't match one of these, 'private' will be used. */
+ /* Set the krb4/5 security level, this also enables krb4/5 awareness. This
+ * is a string, 'clear', 'safe', 'confidential' or 'private'. If the string
+ * is set but does not match one of these, 'private' will be used. */
CURLOPT(CURLOPT_KRBLEVEL, CURLOPTTYPE_STRINGPOINT, 63),
/* Set if we should verify the peer in ssl handshake, set 1 to verify. */
/* 73 = OBSOLETE */
/* Set to explicitly use a new connection for the upcoming transfer.
- Do not use this unless you're absolutely sure of this, as it makes the
+ Do not use this unless you are absolutely sure of this, as it makes the
operation slower and is less friendly for the network. */
CURLOPT(CURLOPT_FRESH_CONNECT, CURLOPTTYPE_LONG, 74),
/* Set to explicitly forbid the upcoming transfer's connection to be reused
- when done. Do not use this unless you're absolutely sure of this, as it
+ when done. Do not use this unless you are absolutely sure of this, as it
makes the operation slower and is less friendly for the network. */
CURLOPT(CURLOPT_FORBID_REUSE, CURLOPTTYPE_LONG, 75),
- /* Set to a file name that contains random data for libcurl to use to
+ /* Set to a filename that contains random data for libcurl to use to
seed the random engine when doing SSL connects. */
CURLOPTDEPRECATED(CURLOPT_RANDOM_FILE, CURLOPTTYPE_STRINGPOINT, 76,
7.84.0, "Serves no purpose anymore"),
* provided hostname. */
CURLOPT(CURLOPT_SSL_VERIFYHOST, CURLOPTTYPE_LONG, 81),
- /* Specify which file name to write all known cookies in after completed
- operation. Set file name to "-" (dash) to make it go to stdout. */
+ /* Specify which filename to write all known cookies in after completed
+ operation. Set filename to "-" (dash) to make it go to stdout. */
CURLOPT(CURLOPT_COOKIEJAR, CURLOPTTYPE_STRINGPOINT, 82),
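  /* Usage sketch (illustration only, not part of this header; the filename
     and the 'curl' handle are assumed):

       curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");

     writes all known cookies to that file when the handle is cleaned up. */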
/* Specify which SSL ciphers to use */
CURLOPT(CURLOPT_PROXYAUTH, CURLOPTTYPE_VALUES, 111),
/* Option that changes the timeout, in seconds, associated with getting a
- response. This is different from transfer timeout time and essentially
+ response. This is different from transfer timeout time and essentially
places a demand on the server to acknowledge commands in a timely
manner. For FTP, SMTP, IMAP and POP3. */
CURLOPT(CURLOPT_SERVER_RESPONSE_TIMEOUT, CURLOPTTYPE_LONG, 112),
an HTTP or FTP server.
Note there is also _LARGE version which adds large file support for
- platforms which have larger off_t sizes. See MAXFILESIZE_LARGE below. */
+ platforms which have larger off_t sizes. See MAXFILESIZE_LARGE below. */
CURLOPT(CURLOPT_MAXFILESIZE, CURLOPTTYPE_LONG, 114),
/* See the comment for INFILESIZE above, but in short, specifies
*/
CURLOPT(CURLOPT_INFILESIZE_LARGE, CURLOPTTYPE_OFF_T, 115),
- /* Sets the continuation offset. There is also a CURLOPTTYPE_LONG version
+ /* Sets the continuation offset. There is also a CURLOPTTYPE_LONG version
* of this; look above for RESUME_FROM.
*/
CURLOPT(CURLOPT_RESUME_FROM_LARGE, CURLOPTTYPE_OFF_T, 116),
/* Sets the maximum size of data that will be downloaded from
- * an HTTP or FTP server. See MAXFILESIZE above for the LONG version.
+ * an HTTP or FTP server. See MAXFILESIZE above for the LONG version.
*/
CURLOPT(CURLOPT_MAXFILESIZE_LARGE, CURLOPTTYPE_OFF_T, 117),
- /* Set this option to the file name of your .netrc file you want libcurl
+ /* Set this option to the filename of your .netrc file you want libcurl
to parse (using the CURLOPT_NETRC option). If not set, libcurl will do
a poor attempt to find the user's home directory and check for a .netrc
file in there. */
/* Callback function for opening socket (instead of socket(2)). Optionally,
callback is able change the address or refuse to connect returning
- CURL_SOCKET_BAD. The callback should have type
+ CURL_SOCKET_BAD. The callback should have type
curl_opensocket_callback */
CURLOPT(CURLOPT_OPENSOCKETFUNCTION, CURLOPTTYPE_FUNCTIONPOINT, 163),
CURLOPT(CURLOPT_OPENSOCKETDATA, CURLOPTTYPE_CBPOINT, 164),
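  /* Callback sketch (illustration only, not part of this header); the
     callback creates the socket itself and may adjust the address first:

       static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                          struct curl_sockaddr *a)
       {
         (void)clientp;
         (void)purpose;
         return socket(a->family, a->socktype, a->protocol);
       }

       curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket);
  */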
CURLOPTDEPRECATED(CURLOPT_REDIR_PROTOCOLS, CURLOPTTYPE_LONG, 182,
7.85.0, "Use CURLOPT_REDIR_PROTOCOLS_STR"),
- /* set the SSH knownhost file name to use */
+ /* set the SSH knownhost filename to use */
CURLOPT(CURLOPT_SSH_KNOWNHOSTS, CURLOPTTYPE_STRINGPOINT, 183),
/* set the SSH host key callback, must point to a curl_sshkeycallback
future libcurl release.
libcurl will ask for the compressed methods it knows of, and if that
- isn't any, it will not ask for transfer-encoding at all even if this
+ is not any, it will not ask for transfer-encoding at all even if this
option is set to 1.
*/
/* Service Name */
CURLOPT(CURLOPT_SERVICE_NAME, CURLOPTTYPE_STRINGPOINT, 236),
- /* Wait/don't wait for pipe/mutex to clarify */
+ /* Wait/do not wait for pipe/mutex to clarify */
CURLOPT(CURLOPT_PIPEWAIT, CURLOPTTYPE_LONG, 237),
/* Set the protocol used when curl is given a URL without a protocol */
/* alt-svc control bitmask */
CURLOPT(CURLOPT_ALTSVC_CTRL, CURLOPTTYPE_LONG, 286),
- /* alt-svc cache file name to possibly read from/write to */
+ /* alt-svc cache filename to possibly read from/write to */
CURLOPT(CURLOPT_ALTSVC, CURLOPTTYPE_STRINGPOINT, 287),
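  /* Usage sketch (illustration only, not part of this header; the cache
     filename is made up): enable the alt-svc engine and give it a file:

       curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL, (long)CURLALTSVC_H1);
       curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
  */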
/* maximum age (idle time) of a connection to consider it for reuse
/* HSTS bitmask */
CURLOPT(CURLOPT_HSTS_CTRL, CURLOPTTYPE_LONG, 299),
- /* HSTS file name */
+ /* HSTS filename */
CURLOPT(CURLOPT_HSTS, CURLOPTTYPE_STRINGPOINT, 300),
/* HSTS read callback */
/* These enums are for use with the CURLOPT_HTTP_VERSION option. */
enum {
- CURL_HTTP_VERSION_NONE, /* setting this means we don't care, and that we'd
- like the library to choose the best possible
- for us! */
+ CURL_HTTP_VERSION_NONE, /* setting this means we do not care, and that we
+ would like the library to choose the best
+ possible for us! */
CURL_HTTP_VERSION_1_0, /* please use HTTP 1.0 in the request */
CURL_HTTP_VERSION_1_1, /* please use HTTP 1.1 in the request */
CURL_HTTP_VERSION_2_0, /* please use HTTP 2 in the request */
*
* DESCRIPTION
*
- * Set mime part remote file name.
+ * Set mime part remote filename.
*/
CURL_EXTERN CURLcode curl_mime_filename(curl_mimepart *part,
const char *filename);
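/* Usage sketch (illustration only, not part of this header; paths, the part
   name and the 'curl' handle are assumed): send a local file as a mime part
   but present it under a different remote filename:

     curl_mime *mime = curl_mime_init(curl);
     curl_mimepart *part = curl_mime_addpart(mime);
     curl_mime_name(part, "upload");
     curl_mime_filedata(part, "/local/dir/photo.jpeg");
     curl_mime_filename(part, "image.jpg");
     curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
*/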
* DESCRIPTION
*
* curl_global_init() or curl_global_init_mem() should be invoked exactly once
- * for each application that uses libcurl. This function can be used to
+ * for each application that uses libcurl. This function can be used to
* initialize libcurl and set user defined memory management callback
- * functions. Users can implement memory management routines to check for
- * memory leaks, check for mis-use of the curl library etc. User registered
+ * functions. Users can implement memory management routines to check for
+ * memory leaks, check for mis-use of the curl library etc. User registered
* callback routines will be invoked by this library instead of the system
* memory management routines like malloc, free etc.
*/
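/* Usage sketch (illustration only, not part of this header), assuming the
   application provides its own my_malloc/my_free/my_realloc/my_strdup/
   my_calloc wrappers with the matching callback signatures:

     curl_global_init_mem(CURL_GLOBAL_DEFAULT,
                          my_malloc, my_free, my_realloc,
                          my_strdup, my_calloc);
*/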
for with CURLOPT_CERTINFO / CURLINFO_CERTINFO */
struct curl_certinfo {
int num_of_certs; /* number of certificates with information */
- struct curl_slist **certinfo; /* for each index in this array, there's a
+ struct curl_slist **certinfo; /* for each index in this array, there is a
linked list with textual information for a
certificate in the format "name:content".
eg "Subject:foo", "Issuer:bar", etc. */
} CURLSHcode;
typedef enum {
- CURLSHOPT_NONE, /* don't use */
+ CURLSHOPT_NONE, /* do not use */
CURLSHOPT_SHARE, /* specify a data type to share */
CURLSHOPT_UNSHARE, /* specify which data type to stop sharing */
CURLSHOPT_LOCKFUNC, /* pass in a 'curl_lock_function' pointer */
* DESCRIPTION
*
* The curl_easy_strerror function may be used to turn a CURLcode value
- * into the equivalent human readable error string. This is useful
+ * into the equivalent human readable error string. This is useful
* for printing meaningful error messages.
*/
CURL_EXTERN const char *curl_easy_strerror(CURLcode);
* DESCRIPTION
*
* The curl_share_strerror function may be used to turn a CURLSHcode value
- * into the equivalent human readable error string. This is useful
+ * into the equivalent human readable error string. This is useful
* for printing meaningful error messages.
*/
CURL_EXTERN const char *curl_share_strerror(CURLSHcode);
#include "websockets.h"
#include "mprintf.h"
-/* the typechecker doesn't work in C++ (yet) */
+/* the typechecker does not work in C++ (yet) */
#if defined(__GNUC__) && defined(__GNUC_MINOR__) && \
((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) && \
!defined(__cplusplus) && !defined(CURL_DISABLE_TYPECHECK)
Where XX, YY and ZZ are the main version, release and patch numbers in
hexadecimal (using 8 bits each). All three numbers are always represented
- using two digits. 1.2 would appear as "0x010200" while version 9.11.7
+ using two digits. 1.2 would appear as "0x010200" while version 9.11.7
appears as "0x090b07".
This 6-digit (24 bits) hexadecimal number does not show pre-release number,
*
* Request internal information from the curl session with this function.
* The third argument MUST be pointing to the specific type of the used option
- * which is documented in each man page of the option. The data pointed to
+ * which is documented in each manpage of the option. The data pointed to
* will be filled in accordingly and can be relied upon only if the function
* returns CURLE_OK. This function is intended to get used *AFTER* a performed
* transfer, all results from this function are undefined until the transfer
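 * (Illustration only, not part of this header: after curl_easy_perform() has
 * returned on the handle 'curl', an application might do
 *     long code = 0;
 *     curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
 * and trust 'code' only if the call returned CURLE_OK.)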
*
***************************************************************************/
/*
- This is an "external" header file. Don't give away any internals here!
+ This is an "external" header file. Do not give away any internals here!
GOALS
CURLM_OK,
CURLM_BAD_HANDLE, /* the passed-in handle is not a valid CURLM handle */
CURLM_BAD_EASY_HANDLE, /* an easy handle was not good/valid */
- CURLM_OUT_OF_MEMORY, /* if you ever get this, you're in deep sh*t */
+ CURLM_OUT_OF_MEMORY, /* if you ever get this, you are in deep sh*t */
CURLM_INTERNAL_ERROR, /* this is a libcurl bug */
CURLM_BAD_SOCKET, /* the passed in socket argument did not match */
CURLM_UNKNOWN_OPTION, /* curl_multi_setopt() with unsupported option */
typedef struct CURLMsg CURLMsg;
/* Based on poll(2) structure and values.
- * We don't use pollfd and POLL* constants explicitly
+ * We do not use pollfd and POLL* constants explicitly
* to cover platforms without poll(). */
#define CURL_WAIT_POLLIN 0x0001
#define CURL_WAIT_POLLPRI 0x0002
/*
* Name: curl_multi_perform()
*
- * Desc: When the app thinks there's data available for curl it calls this
+ * Desc: When the app thinks there is data available for curl it calls this
* function to read/write whatever there is right now. This returns
* as soon as the reads and writes are done. This function does not
* require that there actually is data available for reading or that
/*
* Name: curl_multi_info_read()
*
- * Desc: Ask the multi handle if there's any messages/informationals from
+ * Desc: Ask the multi handle if there are any messages/informationals from
* the individual transfers. Messages include informationals such as
* error code from the transfer or just the fact that a transfer is
* completed. More details on these should be written down as well.
* we will provide the particular "transfer handle" in that struct
* and that should/could/would be used in subsequent
* curl_easy_getinfo() calls (or similar). The point being that we
- * must never expose complex structs to applications, as then we'll
+ * must never expose complex structs to applications, as then we will
* undoubtably get backwards compatibility problems in the future.
*
* Returns: A pointer to a filled-in struct, or NULL if it failed or ran out
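 * (Illustration only, not part of this header: a typical drain loop, using
 * only the documented CURLMsg fields, might look like
 *     CURLMsg *m;
 *     int left;
 *     while((m = curl_multi_info_read(multi, &left))) {
 *       if(m->msg == CURLMSG_DONE)
 *         fprintf(stderr, "done: %s\n", curl_easy_strerror(m->data.result));
 *     }
 * where 'multi' is the multi handle in use.)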
* Name: curl_multi_strerror()
*
* Desc: The curl_multi_strerror function may be used to turn a CURLMcode
- * value into the equivalent human readable error string. This is
+ * value into the equivalent human readable error string. This is
* useful for printing meaningful error messages.
*
* Returns: A pointer to a null-terminated error message.
* Desc: An alternative version of curl_multi_perform() that allows the
* application to pass in one of the file descriptors that have been
* detected to have "action" on them and let libcurl perform.
- * See man page for details.
+ * See manpage for details.
*/
#define CURL_POLL_NONE 0
#define CURL_POLL_IN 1
* As a general rule, curl_off_t shall not be mapped to off_t. This rule shall
* only be violated if off_t is the only 64-bit data type available and the
* size of off_t is independent of large file support settings. Keep your
- * build on the safe side avoiding an off_t gating. If you have a 64-bit
+ * build on the safe side avoiding an off_t gating. If you have a 64-bit
* off_t then take for sure that another 64-bit data type exists, dig deeper
* and you will find it.
*
# define CURL_PULL_SYS_SOCKET_H 1
#else
-/* generic "safe guess" on old 32 bit style */
+/* generic "safe guess" on old 32-bit style */
# define CURL_TYPEOF_CURL_OFF_T long
# define CURL_FORMAT_CURL_OFF_T "ld"
# define CURL_FORMAT_CURL_OFF_TU "lu"
* _curl_easy_setopt_err_sometype below
*
* NOTE: We use two nested 'if' statements here instead of the && operator, in
- * order to work around gcc bug #32061. It affects only gcc 4.3.x/4.4.x
+ * order to work around gcc bug #32061. It affects only gcc 4.3.x/4.4.x
* when compiling with -Wlogical-op.
*
- * To add an option that uses the same type as an existing option, you'll just
- * need to extend the appropriate _curl_*_option macro
+ * To add an option that uses the same type as an existing option, you will
+ * just need to extend the appropriate _curl_*_option macro
*/
#define curl_easy_setopt(handle, option, value) \
__extension__({ \
/* To add a new option to one of the groups, just add
* (option) == CURLOPT_SOMETHING
- * to the or-expression. If the option takes a long or curl_off_t, you don't
+ * to the or-expression. If the option takes a long or curl_off_t, you do not
* have to do anything
*/
const void *);
#ifdef HEADER_SSL_H
/* hack: if we included OpenSSL's ssl.h, we know about SSL_CTX
- * this will of course break if we're included before OpenSSL headers...
+ * this will of course break if we are included before OpenSSL headers...
*/
typedef CURLcode (*_curl_ssl_ctx_callback5)(CURL *, SSL_CTX *, void *);
typedef CURLcode (*_curl_ssl_ctx_callback6)(CURL *, SSL_CTX *, const void *);
#define CURLU_NO_AUTHORITY (1<<10) /* Allow empty authority when the
scheme is unknown. */
#define CURLU_ALLOW_SPACE (1<<11) /* Allow spaces in the URL */
-#define CURLU_PUNYCODE (1<<12) /* get the host name in punycode */
+#define CURLU_PUNYCODE (1<<12) /* get the hostname in punycode */
#define CURLU_PUNY2IDN (1<<13) /* punycode => IDN conversion */
#define CURLU_GET_EMPTY (1<<14) /* allow empty queries and fragments
when extracting the URL or the
components */
-#define CURLU_NO_GUESS_SCHEME (1<<15) /* for get, don't accept a guess */
+#define CURLU_NO_GUESS_SCHEME (1<<15) /* for get, do not accept a guess */
typedef struct Curl_URL CURLU;
/*
* curl_url_strerror() turns a CURLUcode value into the equivalent human
- * readable error string. This is useful for printing meaningful error
+ * readable error string. This is useful for printing meaningful error
* messages.
*/
CURL_EXTERN const char *curl_url_strerror(CURLUcode);
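/* Usage sketch (illustration only, not part of this header; CURLU_PUNYCODE
   requires a sufficiently new libcurl built with IDN support): extract the
   punycode hostname and report failures with curl_url_strerror():

     CURLU *u = curl_url();
     char *host = NULL;
     CURLUcode uc = curl_url_set(u, CURLUPART_URL, "https://example.com/", 0);
     if(!uc)
       uc = curl_url_get(u, CURLUPART_HOST, &host, CURLU_PUNYCODE);
     if(uc)
       fprintf(stderr, "URL error: %s\n", curl_url_strerror(uc));
     curl_free(host);
     curl_url_cleanup(u);
*/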
CURLcode result = CURLE_OK;
FILE *fp;
- /* we need a private copy of the file name so that the altsvc cache file
+ /* we need a private copy of the filename so that the altsvc cache file
name survives an easy handle reset */
free(asi->filename);
asi->filename = strdup(file);
file = altsvc->filename;
if((altsvc->flags & CURLALTSVC_READONLYFILE) || !file || !file[0])
- /* marked as read-only, no file or zero length file name */
+ /* marked as read-only, no file or zero length filename */
return CURLE_OK;
result = Curl_fopen(data, file, &out, &tempstore);
if(hlen && (host[hlen - 1] == '.'))
hlen--;
if(hlen != clen)
- /* they can't match if they have different lengths */
+ /* they cannot match if they have different lengths */
return FALSE;
return strncasecompare(host, check, hlen);
}
* Curl_altsvc_parse() takes an incoming alt-svc response header and stores
* the data correctly in the cache.
*
- * 'value' points to the header *value*. That's contents to the right of the
+ * 'value' points to the header *value*. That is the contents to the right of the
* header name.
*
* Currently this function rejects invalid data without returning an error.
- * Invalid host name, port number will result in the specific alternative
+ * Invalid hostname, port number will result in the specific alternative
* being rejected. Unknown protocols are skipped.
*/
CURLcode Curl_altsvc_parse(struct Curl_easy *data,
bool valid = TRUE;
p++;
if(*p != ':') {
- /* host name starts here */
+ /* hostname starts here */
const char *hostp = p;
if(*p == '[') {
/* pass all valid IPv6 letters - does not handle zone id */
len = p - hostp;
}
if(!len || (len >= MAX_ALTSVC_HOSTLEN)) {
- infof(data, "Excessive alt-svc host name, ignoring.");
+ infof(data, "Excessive alt-svc hostname, ignoring.");
valid = FALSE;
}
else {
}
else
break;
- /* after the double quote there can be a comma if there's another
+ /* after the double quote there can be a comma if there is another
string or a semicolon if no more */
if(*p == ',') {
/* comma means another alternative is presented */
#ifdef CURLRES_AMIGA
/*
- * Because we need to handle the different cases in hostip4.c at run-time,
+ * Because we need to handle the different cases in hostip4.c at runtime,
* not at compile-time, based on what was detected in Curl_amiga_init(),
* we replace it completely with our own as to not complicate the baseline
* code. Assumes malloc/calloc/free are thread safe because Curl_he2ai()
#define CURL_GA 249 /* Go Ahead, reverse the line */
#define CURL_SB 250 /* SuBnegotiation */
#define CURL_WILL 251 /* Our side WILL use this option */
-#define CURL_WONT 252 /* Our side WON'T use this option */
+#define CURL_WONT 252 /* Our side will not use this option */
#define CURL_DO 253 /* DO use this option! */
#define CURL_DONT 254 /* DON'T use this option! */
#define CURL_IAC 255 /* Interpret As Command */
# define CARES_STATICLIB
#endif
#include <ares.h>
-#include <ares_version.h> /* really old c-ares didn't include this by
+#include <ares_version.h> /* really old c-ares did not include this by
itself */
#if ARES_VERSION >= 0x010500
/* How long we are willing to wait for additional parallel responses after
obtaining a "definitive" one. For old c-ares without getaddrinfo.
- This is intended to equal the c-ares default timeout. cURL always uses that
- default value. Unfortunately, c-ares doesn't expose its default timeout in
+ This is intended to equal the c-ares default timeout. cURL always uses that
+ default value. Unfortunately, c-ares does not expose its default timeout in
its API, but it is officially documented as 5 seconds.
See query_completed_cb() for an explanation of how this is used.
/*
* Curl_resolver_global_init() - the generic low-level asynchronous name
- * resolve API. Called from curl_global_init() to initialize global resolver
- * environment. Initializes ares library.
+ * resolve API. Called from curl_global_init() to initialize global resolver
+ * environment. Initializes ares library.
*/
int Curl_resolver_global_init(void)
{
*
* Called from curl_easy_init() -> Curl_open() to initialize resolver
* URL-state specific environment ('resolver' member of the UrlState
- * structure). Fills the passed pointer by the initialized ares_channel.
+ * structure). Fills the passed pointer by the initialized ares_channel.
*/
CURLcode Curl_resolver_init(struct Curl_easy *easy, void **resolver)
{
*
* Called from curl_easy_cleanup() -> Curl_close() to cleanup resolver
* URL-state specific environment ('resolver' member of the UrlState
- * structure). Destroys the ares channel.
+ * structure). Destroys the ares channel.
*/
void Curl_resolver_cleanup(void *resolver)
{
* Curl_resolver_duphandle()
*
* Called from curl_easy_duphandle() to duplicate resolver URL-state specific
- * environment ('resolver' member of the UrlState structure). Duplicates the
+ * environment ('resolver' member of the UrlState structure). Duplicates the
* 'from' ares channel and passes the resulting channel to the 'to' pointer.
*/
CURLcode Curl_resolver_duphandle(struct Curl_easy *easy, void **to, void *from)
}
/*
- * We're equivalent to Curl_resolver_cancel() for the c-ares resolver. We
+ * We are equivalent to Curl_resolver_cancel() for the c-ares resolver. We
* never block.
*/
void Curl_resolver_kill(struct Curl_easy *data)
{
- /* We don't need to check the resolver state because we can be called safely
+ /* We do not need to check the resolver state because we can be called safely
at any time and we always do the same thing. */
Curl_resolver_cancel(data);
}
/*
* Curl_resolver_getsock() is called when someone from the outside world
- * (using curl_multi_fdset()) wants to get our fd_set setup and we're talking
+ * (using curl_multi_fdset()) wants to get our fd_set setup and we are talking
* with ares. The caller must make sure that this function is only called when
* we have a working ares channel.
*
if(!nfds)
/* Call ares_process() unconditionally here, even if we simply timed out
- above, as otherwise the ares name resolve won't timeout! */
+ above, as otherwise the ares name resolve will not time out! */
ares_process_fd((ares_channel)data->state.async.resolver, ARES_SOCKET_BAD,
ARES_SOCKET_BAD);
else {
return CURLE_UNRECOVERABLE_POLL;
#ifndef HAVE_CARES_GETADDRINFO
- /* Now that we've checked for any last minute results above, see if there are
- any responses still pending when the EXPIRE_HAPPY_EYEBALLS_DNS timer
+ /* Now that we have checked for any last minute results above, see if there
+ are any responses still pending when the EXPIRE_HAPPY_EYEBALLS_DNS timer
expires. */
if(res
&& res->num_pending
&res->happy_eyeballs_dns_time, 0, sizeof(res->happy_eyeballs_dns_time));
/* Cancel the raw c-ares request, which will fire query_completed_cb() with
- ARES_ECANCELLED synchronously for all pending responses. This will
+ ARES_ECANCELLED synchronously for all pending responses. This will
leave us with res->num_pending == 0, which is perfect for the next
block. */
ares_cancel((ares_channel)data->state.async.resolver);
*entry = data->state.async.dns;
if(result)
- /* close the connection, since we can't return failure here without
+ /* close the connection, since we cannot return failure here without
cleaning up this connection properly. */
connclose(data->conn, "c-ares resolve failed");
/* If there are responses still pending, we presume they must be the
complementary IPv4 or IPv6 lookups that we started in parallel in
- Curl_resolver_getaddrinfo() (for Happy Eyeballs). If we've got a
+ Curl_resolver_getaddrinfo() (for Happy Eyeballs). If we have got a
"definitive" response from one of a set of parallel queries, we need to
- think about how long we're willing to wait for more responses. */
+ think about how long we are willing to wait for more responses. */
if(res->num_pending
/* Only these c-ares status values count as "definitive" for these
- purposes. For example, ARES_ENODATA is what we expect when there is
- no IPv6 entry for a domain name, and that's not a reason to get more
- aggressive in our timeouts for the other response. Other errors are
+ purposes. For example, ARES_ENODATA is what we expect when there is
+ no IPv6 entry for a domain name, and that is not a reason to get more
+ aggressive in our timeouts for the other response. Other errors are
either a result of bad input (which should affect all parallel
requests), local or network conditions, non-definitive server
responses, or us cancelling the request. */
&& (status == ARES_SUCCESS || status == ARES_ENOTFOUND)) {
- /* Right now, there can only be up to two parallel queries, so don't
+ /* Right now, there can only be up to two parallel queries, so do not
bother handling any other cases. */
DEBUGASSERT(res->num_pending == 1);
- /* It's possible that one of these parallel queries could succeed
- quickly, but the other could always fail or timeout (when we're
+ /* It is possible that one of these parallel queries could succeed
+ quickly, but the other could always fail or timeout (when we are
talking to a pool of DNS servers that can only successfully resolve
IPv4 address, for example).
- It's also possible that the other request could always just take
+ It is also possible that the other request could always just take
longer because it needs more time or only the second DNS server can
- fulfill it successfully. But, to align with the philosophy of Happy
- Eyeballs, we don't want to wait _too_ long or users will think
- requests are slow when IPv6 lookups don't actually work (but IPv4 ones
- do).
+ fulfill it successfully. But, to align with the philosophy of Happy
+ Eyeballs, we do not want to wait _too_ long or users will think
+ requests are slow when IPv6 lookups do not actually work (but IPv4
+ ones do).
So, now that we have a usable answer (some IPv4 addresses, some IPv6
addresses, or "no such domain"), we start a timeout for the remaining
- pending responses. Even though it is typical that this resolved
- request came back quickly, that needn't be the case. It might be that
- this completing request didn't get a result from the first DNS server
- or even the first round of the whole DNS server pool. So it could
- already be quite some time after we issued the DNS queries in the
- first place. Without modifying c-ares, we can't know exactly where in
- its retry cycle we are. We could guess based on how much time has
- gone by, but it doesn't really matter. Happy Eyeballs tells us that,
- given usable information in hand, we simply don't want to wait "too
- much longer" after we get a result.
+ pending responses. Even though it is typical that this resolved
+ request came back quickly, that need not be the case. It might be that
+ this completing request did not get a result from the first DNS
+ server or even the first round of the whole DNS server pool. So it
+ could already be quite some time after we issued the DNS queries in
+ the first place. Without modifying c-ares, we cannot know exactly
+ where in its retry cycle we are. We could guess based on how much
+ time has gone by, but it does not really matter. Happy Eyeballs tells
+ us that, given usable information in hand, we simply do not want to
+ wait "too much longer" after we get a result.
We simply wait an additional amount of time equal to the default
- c-ares query timeout. That is enough time for a typical parallel
- response to arrive without being "too long". Even on a network
+ c-ares query timeout. That is enough time for a typical parallel
+ response to arrive without being "too long". Even on a network
where one of the two types of queries is failing or timing out
constantly, this will usually mean we wait a total of the default
c-ares timeout (5 seconds) plus the round trip time for the successful
- request, which seems bearable. The downside is that c-ares might race
+ request, which seems bearable. The downside is that c-ares might race
with us to issue one more retry just before we give up, but it seems
better to "waste" that request instead of trying to guess the perfect
- timeout to prevent it. After all, we don't even know where in the
+ timeout to prevent it. After all, we do not even know where in the
c-ares retry cycle each request is.
*/
res->happy_eyeballs_dns_time = Curl_now();
/* If server is NULL or empty, this would purge all DNS servers
* from ares library, which will cause any and all queries to fail.
- * So, just return OK if none are configured and don't actually make
- * any changes to c-ares. This lets c-ares use its defaults, which
+ * So, just return OK if none are configured and do not actually make
+ * any changes to c-ares. This lets c-ares use its defaults, which
* it gets from the OS (for instance from /etc/resolv.conf on Linux).
*/
if(!(servers && servers[0]))
result = Curl_addrinfo_callback(data, tsd->sock_error, tsd->res);
/* The tsd->res structure has been copied to async.dns and perhaps the DNS
- cache. Set our copy to NULL so destroy_thread_sync_data doesn't free it.
+ cache. Set our copy to NULL so destroy_thread_sync_data does not free it.
*/
tsd->res = NULL;
{
struct thread_data *td = data->state.async.tdata;
- /* If we're still resolving, we must wait for the threads to fully clean up,
- unfortunately. Otherwise, we can simply cancel to clean up any resolver
+ /* If we are still resolving, we must wait for the threads to fully clean up,
+ unfortunately. Otherwise, we can simply cancel to clean up any resolver
data. */
#ifdef _WIN32
if(td && td->complete_ev) {
* Curl_resolver_init()
* Called from curl_easy_init() -> Curl_open() to initialize resolver
* URL-state specific environment ('resolver' member of the UrlState
- * structure). Should fill the passed pointer by the initialized handler.
+ * structure). Should fill the passed pointer by the initialized handler.
* Returning anything else than CURLE_OK fails curl_easy_init() with the
* correspondent code.
*/
* Curl_resolver_cleanup()
* Called from curl_easy_cleanup() -> Curl_close() to cleanup resolver
* URL-state specific environment ('resolver' member of the UrlState
- * structure). Should destroy the handler and free all resources connected to
+ * structure). Should destroy the handler and free all resources connected to
* it.
*/
void Curl_resolver_cleanup(void *resolver);
/*
* Curl_resolver_duphandle()
* Called from curl_easy_duphandle() to duplicate resolver URL-state specific
- * environment ('resolver' member of the UrlState structure). Should
+ * environment ('resolver' member of the UrlState structure). Should
* duplicate the 'from' handle and pass the resulting handle to the 'to'
- * pointer. Returning anything else than CURLE_OK causes failed
+ * pointer. Returning anything else than CURLE_OK causes failed
* curl_easy_duphandle() call.
*/
CURLcode Curl_resolver_duphandle(struct Curl_easy *easy, void **to,
*
* It is called from inside other functions to cancel currently performing
* resolver request. Should also free any temporary resources allocated to
- * perform a request. This never waits for resolver threads to complete.
+ * perform a request. This never waits for resolver threads to complete.
*
* It is safe to call this when conn is in any state.
*/
* Curl_resolver_kill().
*
* This acts like Curl_resolver_cancel() except it will block until any threads
- * associated with the resolver are complete. This never blocks for resolvers
- * that do not use threads. This is intended to be the "last chance" function
+ * associated with the resolver are complete. This never blocks for resolvers
+ * that do not use threads. This is intended to be the "last chance" function
* that cleans up an in-progress resolver completely (before its owner is about
* to die).
*
int *waitp);
#ifndef CURLRES_ASYNCH
-/* convert these functions if an asynch resolver isn't used */
+/* convert these functions if an asynch resolver is not used */
#define Curl_resolver_cancel(x) Curl_nop_stmt
#define Curl_resolver_kill(x) Curl_nop_stmt
#define Curl_resolver_is_resolved(x,y) CURLE_COULDNT_RESOLVE_HOST
}
/*
- * Free the buffer and re-init the necessary fields. It doesn't touch the
+ * Free the buffer and re-init the necessary fields. It does not touch the
* 'signature' field and thus this buffer reference can be reused.
*/
result = hyper_each_header(data, NULL, 0, NULL, 0) ?
CURLE_WRITE_ERROR : CURLE_OK;
if(result)
- failf(data, "hyperstream: couldn't pass blank header");
+ failf(data, "hyperstream: could not pass blank header");
/* Hyper does chunked decoding itself. If it was added during
* response header processing, remove it again. */
Curl_cwriter_remove_by_name(data, "chunked");
data->req.done = TRUE;
infof(data, "hyperstream is done");
if(!k->bodywritten) {
- /* hyper doesn't always call the body write callback */
+ /* hyper does not always call the body write callback */
result = Curl_http_firstwrite(data);
}
break;
*didwhat = KEEP_RECV;
if(!resp) {
- failf(data, "hyperstream: couldn't get response");
+ failf(data, "hyperstream: could not get response");
return CURLE_RECV_ERROR;
}
headers = hyper_response_headers(resp);
if(!headers) {
- failf(data, "hyperstream: couldn't get response headers");
+ failf(data, "hyperstream: could not get response headers");
result = CURLE_RECV_ERROR;
break;
}
resp_body = hyper_response_body(resp);
if(!resp_body) {
- failf(data, "hyperstream: couldn't get response body");
+ failf(data, "hyperstream: could not get response body");
result = CURLE_RECV_ERROR;
break;
}
goto out;
}
/* increasing the writebytecount here is a little premature but we
- don't know exactly when the body is sent */
+ do not know exactly when the body is sent */
data->req.writebytecount += fillcount;
Curl_pgrsSetUploadCounter(data, data->req.writebytecount);
rc = HYPER_POLL_READY;
if(!result) {
headers = hyper_response_headers(resp);
if(!headers) {
- failf(data, "hyperstream: couldn't get 1xx response headers");
+ failf(data, "hyperstream: could not get 1xx response headers");
result = CURLE_RECV_ERROR;
}
}
data->info.httpcode = 0; /* clear it as it might've been used for the
proxy */
/* If a proxy-authorization header was used for the proxy, then we should
- make sure that it isn't accidentally used for the document request
- after we've connected. So let's free and clear it here. */
+ make sure that it is not accidentally used for the document request
+ after we have connected. So let's free and clear it here. */
Curl_safefree(data->state.aptr.proxyuserpwd);
#ifdef USE_HYPER
data->state.hconnect = FALSE;
int http_minor;
CURLcode result;
- /* This only happens if we've looped here due to authentication
- reasons, and we don't really use the newly cloned URL here
+ /* This only happens if we have looped here due to authentication
+ reasons, and we do not really use the newly cloned URL here
then. Just free() it. */
Curl_safefree(data->req.newurl);
if(ts->cl) {
/* A Content-Length based body: simply count down the counter
- and make sure to break out of the loop when we're done! */
+ and make sure to break out of the loop when we are done! */
ts->cl--;
if(ts->cl <= 0) {
ts->keepon = KEEPON_DONE;
if(result)
return result;
if(Curl_httpchunk_is_done(data, &ts->ch)) {
- /* we're done reading chunks! */
+ /* we are done reading chunks! */
infof(data, "chunk reading DONE");
ts->keepon = KEEPON_DONE;
}
if(result)
return result;
- /* Newlines are CRLF, so the CR is ignored as the line isn't
+ /* Newlines are CRLF, so the CR is ignored as the line is not
really terminated until the LF comes. Treat a following CR
as end-of-headers as well.*/
}
else {
/* without content-length or chunked encoding, we
- can't keep the connection alive since the close is
+ cannot keep the connection alive since the close is
the end signal so we bail out at once instead */
CURL_TRC_CF(data, cf, "CONNECT: no content-length or chunked");
ts->keepon = KEEPON_DONE;
return result;
Curl_dyn_reset(&ts->rcvbuf);
- } /* while there's buffer left and loop is requested */
+ } /* while there is buffer left and loop is requested */
if(error)
result = CURLE_RECV_ERROR;
goto error;
}
- /* This only happens if we've looped here due to authentication
- reasons, and we don't really use the newly cloned URL here
+ /* This only happens if we have looped here due to authentication
+ reasons, and we do not really use the newly cloned URL here
then. Just free() it. */
Curl_safefree(data->req.newurl);
DEBUGASSERT(ts->tunnel_state == H1_TUNNEL_RESPONSE);
if(data->info.httpproxycode/100 != 2) {
- /* a non-2xx response and we have no next url to try. */
+ /* a non-2xx response and we have no next URL to try. */
Curl_safefree(data->req.newurl);
/* failure, close this connection to avoid reuse */
streamclose(conn, "proxy CONNECT failure");
* and not waiting on something, we are tunneling. */
curl_socket_t sock = Curl_conn_cf_get_socket(cf, data);
if(ts) {
- /* when we've sent a CONNECT to a proxy, we should rather either
+ /* when we have sent a CONNECT to a proxy, we should rather either
wait for the socket to become readable to be able to get the
- response headers or if we're still sending the request, wait
+ response headers or if we are still sending the request, wait
for write. */
if(tunnel_want_send(ts))
Curl_pollset_set_out_only(data, ps, sock);
CURL_TRC_CF(data, cf, "[%d] new tunnel state 'failed'", ts->stream_id);
ts->state = new_state;
/* If a proxy-authorization header was used for the proxy, then we should
- make sure that it isn't accidentally used for the document request
- after we've connected. So let's free and clear it here. */
+ make sure that it is not accidentally used for the document request
+ after we have connected. So let's free and clear it here. */
Curl_safefree(data->state.aptr.proxyuserpwd);
break;
}
if(ctx->tunnel.error == NGHTTP2_REFUSED_STREAM) {
CURL_TRC_CF(data, cf, "[%d] REFUSED_STREAM, try again on a new "
"connection", ctx->tunnel.stream_id);
- connclose(cf->conn, "REFUSED_STREAM"); /* don't use this anymore */
+ connclose(cf->conn, "REFUSED_STREAM"); /* do not use this anymore */
*err = CURLE_RECV_ERROR; /* trigger Curl_retry_request() later */
return -1;
}
result = proxy_h2_progress_egress(cf, data);
if(result == CURLE_AGAIN) {
- /* pending data to send, need to be called again. Ideally, we'd
+ /* pending data to send, need to be called again. Ideally, we would
* monitor the socket for POLLOUT, but we might not be in SENDING
* transfer state any longer and are unable to make this happen.
*/
return FALSE;
if(*input_pending) {
- /* This happens before we've sent off a request and the connection is
- not in use by any other transfer, there shouldn't be any data here,
+ /* This happens before we have sent off a request and the connection is
+ not in use by any other transfer, there should not be any data here,
only "protocol frames" */
CURLcode result;
ssize_t nread = -1;
if(data->state.httpwant == CURL_HTTP_VERSION_3ONLY) {
result = Curl_conn_may_http3(data, conn);
- if(result) /* can't do it */
+ if(result) /* cannot do it */
goto out;
try_h3 = TRUE;
try_h21 = FALSE;
struct Curl_sockaddr_ex dummy;
if(!addr)
- /* if the caller doesn't want info back, use a local temp copy */
+ /* if the caller does not want info back, use a local temp copy */
addr = &dummy;
Curl_sock_assign_addr(addr, ai, transport);
Buffer Size
The problem described in this knowledge-base is applied only to pre-Vista
- Windows. Following function trying to detect OS version and skips
+ Windows. Following function trying to detect OS version and skips
SO_SNDBUF adjustment for Windows Vista and above.
*/
#define DETECT_OS_NONE 0
*
* <iface_or_host> - can be either an interface name or a host.
* if!<iface> - interface name.
- * host!<host> - host name.
- * ifhost!<iface>!<host> - interface name and host name.
+ * host!<host> - hostname.
+ * ifhost!<iface>!<host> - interface name and hostname.
*
* Parameters:
*
*/
if(setsockopt(sockfd, SOL_SOCKET, SO_BINDTODEVICE,
iface, (curl_socklen_t)strlen(iface) + 1) == 0) {
- /* This is often "errno 1, error: Operation not permitted" if you're
+ /* This is often "errno 1, error: Operation not permitted" if you are
* not running as root or another suitable privileged user. If it
* succeeds it means the parameter was a valid interface and not an IP
* address. Return immediately.
switch(if2ip_result) {
case IF2IP_NOT_FOUND:
if(iface_input && !host_input) {
- /* Do not fall back to treating it as a host name */
+ /* Do not fall back to treating it as a hostname */
failf(data, "Couldn't bind to interface '%s'", iface);
return CURLE_INTERFACE_FAILED;
}
}
if(!iface_input || host_input) {
/*
- * This was not an interface, resolve the name as a host name
+ * This was not an interface, resolve the name as a hostname
* or IP number
*
* Temporarily force name resolution to use only the address type
if(scope_ptr) {
/* The "myhost" string either comes from Curl_if2ip or from
Curl_printable_address. The latter returns only numeric scope
- IDs and the former returns none at all. So the scope ID, if
+ IDs and the former returns none at all. So the scope ID, if
present, is known to be numeric */
unsigned long scope_id = strtoul(scope_ptr, NULL, 10);
if(scope_id > UINT_MAX)
* Gisle Vanem could reproduce the former problems with this function, but
* could avoid them by adding this SleepEx() call below:
*
- * "I don't have Rational Quantify, but the hint from his post was
- * ntdll::NtRemoveIoCompletion(). So I'd assume the SleepEx (or maybe
+ * "I do not have Rational Quantify, but the hint from his post was
+ * ntdll::NtRemoveIoCompletion(). I would assume the SleepEx (or maybe
* just Sleep(0) would be enough?) would release whatever
* mutex/critical-section the ntdll call is waiting on.
*
if(0 != getsockopt(sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &errSize))
err = SOCKERRNO;
#ifdef _WIN32_WCE
- /* Old WinCE versions don't support SO_ERROR */
+ /* Old WinCE versions do not support SO_ERROR */
if(WSAENOPROTOOPT == err) {
SET_SOCKERRNO(0);
err = 0;
}
#endif
#if defined(EBADIOCTL) && defined(__minix)
- /* Minix 3.1.x doesn't support getsockopt on UDP sockets */
+ /* Minix 3.1.x does not support getsockopt on UDP sockets */
if(EBADIOCTL == err) {
SET_SOCKERRNO(0);
err = 0;
/* we are connected, awesome! */
rc = TRUE;
else
- /* This wasn't a successful connect */
+ /* This was not a successful connect */
rc = FALSE;
if(error)
*error = err;
DEBUGASSERT(ctx->sock == CURL_SOCKET_BAD);
ctx->started_at = Curl_now();
#ifdef SOCK_NONBLOCK
- /* Don't tuck SOCK_NONBLOCK into socktype when opensocket callback is set
- * because we wouldn't know how socketype is about to be used in the
+ /* Do not tuck SOCK_NONBLOCK into socktype when opensocket callback is set
+ * because we would not know how socktype is about to be used in the
* callback, SOCK_NONBLOCK might get factored out before calling socket().
*/
if(!data->set.fopensocket)
/* Currently, cf->ctx->sock is always non-blocking because the only
* caller to cf_udp_setup_quic() is cf_udp_connect() that passes the
* non-blocking socket created by cf_socket_open() to it. Thus, we
- * don't need to call curlx_nonblock() in cf_udp_setup_quic() anymore.
+ * do not need to call curlx_nonblock() in cf_udp_setup_quic() anymore.
*/
switch(ctx->addr.family) {
#if defined(__linux__) && defined(IP_MTU_DISCOVER)
* the pollset. Filters, whose filter "below" is not connected, should
* also do no adjustments.
*
- * Examples: a TLS handshake, while ongoing, might remove POLL_IN
- * when it needs to write, or vice versa. A HTTP/2 filter might remove
- * POLL_OUT when a stream window is exhausted and a WINDOW_UPDATE needs
- * to be received first and add instead POLL_IN.
+ * Examples: a TLS handshake, while ongoing, might remove POLL_IN when it
+ * needs to write, or vice versa. An HTTP/2 filter might remove POLL_OUT when
+ * a stream window is exhausted and a WINDOW_UPDATE needs to be received first
+ * and add instead POLL_IN.
*
* @param cf the filter to ask
* @param data the easy handle the pollset is about
* handler belonging to the connection. Protocols like 'file:' rely on
* being invoked to clean up their allocations in the easy handle.
* When a connection comes from the cache, the transfer is no longer
- * there and we use the cache's own closure handle.
+ * there and we use the cache's own closure handle.
*/
struct Curl_easy *data = last_data? last_data : connc->closure_handle;
bool done = FALSE;
DEBUGASSERT(!conn->bundle);
/*
- * If this connection isn't marked to force-close, leave it open if there
+ * If this connection is not marked to force-close, leave it open if there
* are other users of it
*/
if(CONN_INUSE(conn) && !aborted) {
* disassociated from an easy handle.
*
* This function MUST NOT reset state in the Curl_easy struct if that
- * isn't strictly bound to the life-time of *this* particular connection.
+ * is not strictly bound to the life-time of *this* particular connection.
*
*/
static void connc_disconnect(struct Curl_easy *data,
* Tear down the connection. If `aborted` is FALSE, the connection
* will be shut down first before discarding. If the shutdown
* is not immediately complete, the connection
- * will be placed into the cache's shutdown queue.
+ * will be placed into the cache's shutdown queue.
*/
void Curl_conncache_disconnect(struct Curl_easy *data,
struct connectdata *conn,
/*
* Curl_timeleft() returns the amount of milliseconds left allowed for the
- * transfer/connection. If the value is 0, there's no timeout (ie there's
+ * transfer/connection. If the value is 0, there is no timeout (ie there is
* infinite time left). If the value is negative, the timeout time has already
* elapsed.
* @param data the transfer to check on
#endif
)
{
- /* close if a connection, or a stream that isn't multiplexed. */
+ /* close if a connection, or a stream that is not multiplexed. */
/* This function will be called both before and after this connection is
associated with a transfer. */
bool closeit, is_multiplex;
CURLcode result;
- /* Don't close a previous cfilter yet to ensure that the next IP's
+ /* Do not close a previous cfilter yet to ensure that the next IP's
socket gets a different file descriptor, which can prevent bugs when
the curl_multi_socket_action interface is used with certain select()
replacements such as kqueue. */
}
/*
- * Connect to the given host with timeout, proxy or remote doesn't matter.
+ * Connect to the given host with timeout, proxy or remote does not matter.
* There might be more than one IP address to try out.
*/
static CURLcode start_connect(struct Curl_cfilter *cf,
CF_CTRL_CONN_INFO_UPDATE, 0, NULL);
if(cf->conn->handler->protocol & PROTO_FAMILY_SSH)
- Curl_pgrsTime(data, TIMER_APPCONNECT); /* we're connected already */
+ Curl_pgrsTime(data, TIMER_APPCONNECT); /* we are connected already */
Curl_verboseconnect(data, cf->conn, cf->sockindex);
data->info.numconnects++; /* to track the # of connections made */
}
struct Curl_dns_entry;
struct ip_quadruple;
-/* generic function that returns how much time there's left to run, according
+/* generic function that returns how much time there is left to run, according
to the timeouts set */
timediff_t Curl_timeleft(struct Curl_easy *data,
struct curltime *nowp,
void Curl_shutdown_start(struct Curl_easy *data, int sockindex,
struct curltime *nowp);
-/* return how much time there's left to shutdown the connection at
+/* return how much time there is left to shutdown the connection at
* sockindex. */
timediff_t Curl_shutdown_timeleft(struct connectdata *conn, int sockindex,
struct curltime *nowp);
#define ASCII_FLAG 0x01 /* bit 0 set: file probably ascii text */
#define HEAD_CRC 0x02 /* bit 1 set: header CRC present */
#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
-#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
+#define ORIG_NAME 0x08 /* bit 3 set: original filename present */
#define COMMENT 0x10 /* bit 4 set: file comment present */
#define RESERVED 0xE0 /* bits 5..7: reserved */
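
These flag bits follow RFC 1952. A minimal, self-contained sketch (not curl's parser) of how a gzip member header is validated and how the bits gate the optional fields:

    #include <stddef.h>

    #define HEAD_CRC    0x02
    #define EXTRA_FIELD 0x04
    #define ORIG_NAME   0x08
    #define COMMENT     0x10
    #define RESERVED    0xE0

    /* Return 0 when data/len starts with a plausible gzip member header
       (RFC 1952), -1 otherwise. Simplified: only the fixed part is checked. */
    static int check_gzip_header(const unsigned char *data, size_t len)
    {
      unsigned char flags;
      if(len < 10)                            /* fixed header is 10 bytes */
        return -1;
      if(data[0] != 0x1f || data[1] != 0x8b)  /* magic bytes */
        return -1;
      if(data[2] != 8)                        /* compression method: deflate */
        return -1;
      flags = data[3];
      if(flags & RESERVED)                    /* reserved bits must be zero */
        return -1;
      /* bytes 4-7: mtime, byte 8: extra flags, byte 9: operating system.
         When EXTRA_FIELD, ORIG_NAME, COMMENT or HEAD_CRC are set, variable
         length fields follow before the deflate stream begins. */
      return 0;
    }
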
zp->zlib_init != ZLIB_GZIP_INFLATING)
return exit_zlib(data, z, &zp->zlib_init, CURLE_WRITE_ERROR);
- /* Dynamically allocate a buffer for decompression because it's uncommonly
+ /* Dynamically allocate a buffer for decompression because it is uncommonly
large to hold on the stack */
decomp = malloc(DSIZ);
if(!decomp)
to fix and continue anyway */
if(zp->zlib_init == ZLIB_INIT) {
/* Do not use inflateReset2(): only available since zlib 1.2.3.4. */
- (void) inflateEnd(z); /* don't care about the return code */
+ (void) inflateEnd(z); /* do not care about the return code */
if(inflateInit2(z, -MAX_WBITS) == Z_OK) {
z->next_in = orig_in;
z->avail_in = nread;
}
free(decomp);
- /* We're about to leave this call so the `nread' data bytes won't be seen
+ /* We are about to leave this call so the `nread' data bytes will not be seen
again. If we are in a state that would wrongly allow restart in raw mode
at the next call, assume output has already started. */
if(nread && zp->zlib_init == ZLIB_INIT)
flags = data[3];
if(method != Z_DEFLATED || (flags & RESERVED) != 0) {
- /* Can't handle this compression method or unknown flag */
+ /* Cannot handle this compression method or unknown flag */
return GZIP_BAD;
}
}
if(flags & ORIG_NAME) {
- /* Skip over NUL-terminated file name */
+ /* Skip over NUL-terminated filename */
while(len && *data) {
--len;
++data;
return exit_zlib(data, z, &zp->zlib_init, CURLE_WRITE_ERROR);
#else
- /* This next mess is to get around the potential case where there isn't
- * enough data passed in to skip over the gzip header. If that happens, we
- * malloc a block and copy what we have then wait for the next call. If
- * there still isn't enough (this is definitely a worst-case scenario), we
+ /* This next mess is to get around the potential case where there is not
+ * enough data passed in to skip over the gzip header. If that happens, we
+ * malloc a block and copy what we have then wait for the next call. If
+ * there still is not enough (this is definitely a worst-case scenario), we
* make the block bigger, copy the next part in and keep waiting.
*
* This is only required with zlib versions < 1.2.0.4 as newer versions
break;
case GZIP_UNDERFLOW:
- /* We need more data so we can find the end of the gzip header. It's
+ * We need more data so we can find the end of the gzip header. It is
* possible that the memory block we malloc here will never be freed if
- * the transfer abruptly aborts after this point. Since it's unlikely
+ * the transfer abruptly aborts after this point. Since it is unlikely
* that circumstances will be right for this code path to be followed in
- * the first place, and it's even more unlikely for a transfer to fail
+ * the first place, and it is even more unlikely for a transfer to fail
* immediately afterwards, it should seldom be a problem.
*/
z->avail_in = (uInt) nbytes;
}
memcpy(z->next_in, buf, z->avail_in);
zp->zlib_init = ZLIB_GZIP_HEADER; /* Need more gzip header data state */
- /* We don't have any data to inflate yet */
+ /* We do not have any data to inflate yet */
return CURLE_OK;
case GZIP_BAD:
case GZIP_OK:
/* This is the zlib stream data */
free(z->next_in);
- /* Don't point into the malloced block since we just freed it */
+ /* Do not point into the malloced block since we just freed it */
z->next_in = (Bytef *) buf + hlen + nbytes - z->avail_in;
z->avail_in = z->avail_in - (uInt)hlen;
zp->zlib_init = ZLIB_GZIP_INFLATING; /* Inflating stream state */
break;
case GZIP_UNDERFLOW:
- /* We still don't have any data to inflate! */
+ /* We still do not have any data to inflate! */
return CURLE_OK;
case GZIP_BAD:
}
if(z->avail_in == 0) {
- /* We don't have any data to inflate; wait until next time */
+ /* We do not have any data to inflate; wait until next time */
return CURLE_OK;
}
- /* We've parsed the header, now uncompress the data */
+ /* We have parsed the header, now uncompress the data */
return inflate_stream(data, writer, type, ZLIB_GZIP_INFLATING);
#endif
}
return NULL;
}
-/* Set-up the unencoding stack from the Content-Encoding header value.
+/* Set up the unencoding stack from the Content-Encoding header value.
* See RFC 7231 section 3.1.2.2. */
CURLcode Curl_build_unencoding_stack(struct Curl_easy *data,
const char *enclist, int is_transfer)
boolean informs the cookie if a secure connection is achieved or
not.
- It shall only return cookies that haven't expired.
+ It shall only return cookies that have not expired.
Example set of cookies:
}
/*
- * matching cookie path and url path
+ * matching cookie path and URL path
* RFC6265 5.1.4 Paths and Path-Match
*/
static bool pathmatch(const char *cookie_path, const char *request_uri)
*
* Remove expired cookies from the hash by inspecting the expires timestamp on
* each cookie in the hash, freeing and deleting any where the timestamp is in
- * the past. If the cookiejar has recorded the next timestamp at which one or
+ * the past. If the cookiejar has recorded the next timestamp at which one or
* more cookies expire, then processing will exit early in case this timestamp
* is in the future.
*/
/*
* If the earliest expiration timestamp in the jar is in the future we can
- * skip scanning the whole jar and instead exit early as there won't be any
- * cookies to evict. If we need to evict however, reset the next_expiration
- * counter in order to track the next one. In case the recorded first
- * expiration is the max offset, then perform the safe fallback of checking
- * all cookies.
+ * skip scanning the whole jar and instead exit early as there will not be
+ * any cookies to evict. If we need to evict however, reset the
+ * next_expiration counter in order to track the next one. In case the
+ * recorded first expiration is the max offset, then perform the safe
+ * fallback of checking all cookies.
*/
if(now < cookies->next_expiration &&
cookies->next_expiration != CURL_OFF_T_MAX)
}
else {
/*
- * If this cookie has an expiration timestamp earlier than what we've
+ * If this cookie has an expiration timestamp earlier than what we have
* seen so far then record it for the next round of expirations.
*/
if(co->expires && co->expires < cookies->next_expiration)
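
A compact, self-contained sketch of the next_expiration bookkeeping described in the comments above (types and names are illustrative, not the ones used in cookie.c):

    #include <stdlib.h>
    #include <limits.h>

    struct toy_cookie {
      struct toy_cookie *next;
      long expires;            /* 0 = session cookie, never swept here */
    };

    struct toy_jar {
      struct toy_cookie *head;
      long next_expiration;    /* earliest known expiry, LONG_MAX if unknown */
    };

    /* Sweep expired cookies, but only when the recorded earliest expiry says
       there can be something to do. While sweeping, re-learn the earliest
       expiry among the survivors. */
    static void sweep_expired(struct toy_jar *jar, long now)
    {
      struct toy_cookie **pp = &jar->head;

      if(now < jar->next_expiration && jar->next_expiration != LONG_MAX)
        return;                /* nothing can have expired yet */

      jar->next_expiration = LONG_MAX;
      while(*pp) {
        struct toy_cookie *c = *pp;
        if(c->expires && c->expires <= now) {
          *pp = c->next;       /* unlink and free the expired cookie */
          free(c);
        }
        else {
          if(c->expires && c->expires < jar->next_expiration)
            jar->next_expiration = c->expires;  /* track the next to expire */
          pp = &c->next;
        }
      }
    }
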
* Curl_cookie_add
*
* Add a single cookie line to the cookie keeping object. Be aware that
- * sometimes we get an IP-only host name, and that might also be a numerical
+ * sometimes we get an IP-only hostname, and that might also be a numerical
* IPv6 address.
*
* Returns NULL on out of memory or invalid cookie. This is suboptimal,
/* First, alloc and init a new struct for it */
co = calloc(1, sizeof(struct Cookie));
if(!co)
- return NULL; /* bail out if we're this low on memory */
+ return NULL; /* bail out if we are this low on memory */
if(httpheader) {
/* This line was read off an HTTP-header */
else if((nlen == 8) && strncasecompare("httponly", namep, 8))
co->httponly = TRUE;
else if(sep)
- /* there was a '=' so we're not done parsing this field */
+ /* there was a '=' so we are not done parsing this field */
done = FALSE;
}
if(done)
#ifndef USE_LIBPSL
/*
- * Without PSL we don't know when the incoming cookie is set on a
+ * Without PSL we do not know when the incoming cookie is set on a
* TLD or otherwise "protected" suffix. To reduce risk, we require a
- * dot OR the exact host name being "localhost".
+ * dot OR the exact hostname being "localhost".
*/
if(bad_domain(valuep, vlen))
domain = ":";
/*
* Defined in RFC2109:
*
- * Optional. The Max-Age attribute defines the lifetime of the
- * cookie, in seconds. The delta-seconds value is a decimal non-
- * negative integer. After delta-seconds seconds elapse, the
- * client should discard the cookie. A value of zero means the
+ * Optional. The Max-Age attribute defines the lifetime of the
+ * cookie, in seconds. The delta-seconds value is a decimal non-
+ * negative integer. After delta-seconds seconds elapse, the
+ * client should discard the cookie. A value of zero means the
* cookie should be discarded immediately.
*/
CURLofft offt;
}
/*
- * Else, this is the second (or more) name we don't know about!
+ * Else, this is the second (or more) name we do not know about!
*/
}
else {
if(!badcookie && !co->path && path) {
/*
- * No path was given in the header line, set the default. Note that the
+ * No path was given in the header line, set the default. Note that the
* passed-in path to this function MAY have a '?' and following part that
* MUST NOT be stored as part of the path.
*/
}
/*
- * If we didn't get a cookie name, or a bad one, the this is an illegal
+ * If we did not get a cookie name, or a bad one, then this is an illegal
* line so bail out.
*/
if(badcookie || !co->name) {
}
if(lineptr[0]=='#') {
- /* don't even try the comments */
+ /* do not even try the comments */
free(co);
return NULL;
}
case 2:
/* The file format allows the path field to remain not filled in */
if(strcmp("TRUE", ptr) && strcmp("FALSE", ptr)) {
- /* only if the path doesn't look like a boolean option! */
+ /* only if the path does not look like a boolean option! */
co->path = strdup(ptr);
if(!co->path)
badcookie = TRUE;
}
break;
}
- /* this doesn't look like a path, make one up! */
+ /* this does not look like a path, make one up! */
co->path = strdup("/");
if(!co->path)
badcookie = TRUE;
if(!c->running && /* read from a file */
c->newsession && /* clean session cookies */
- !co->expires) { /* this is a session cookie since it doesn't expire! */
+ !co->expires) { /* this is a session cookie since it does not expire! */
freecookie(co);
return NULL;
}
#ifdef USE_LIBPSL
/*
* Check if the domain is a Public Suffix and if yes, ignore the cookie. We
- * must also check that the data handle isn't NULL since the psl code will
+ * must also check that the data handle is not NULL since the psl code will
* dereference it.
*/
if(data && (domain && co->domain && !Curl_host_is_ipnum(co->domain))) {
if(replace_old && !co->livecookie && clist->livecookie) {
/*
- * Both cookies matched fine, except that the already present cookie is
- * "live", which means it was set from a header, while the new one was
- * read from a file and thus isn't "live". "live" cookies are preferred
- * so the new cookie is freed.
+ * Both cookies matched fine, except that the already present cookie
+ * is "live", which means it was set from a header, while the new one
+ * was read from a file and thus is not "live". "live" cookies are
+ * preferred so the new cookie is freed.
*/
freecookie(co);
return NULL;
}
/*
- * Now that we've added a new cookie to the jar, update the expiration
+ * Now that we have added a new cookie to the jar, update the expiration
* tracker in case it is the next one to expire.
*/
if(co->expires && (co->expires < c->next_expiration))
FILE *handle = NULL;
if(!inc) {
- /* we didn't get a struct, create one */
+ /* we did not get a struct, create one */
c = calloc(1, sizeof(struct CookieInfo));
if(!c)
return NULL; /* failed to get memory */
/*
- * Initialize the next_expiration time to signal that we don't have enough
+ * Initialize the next_expiration time to signal that we do not have enough
* information yet.
*/
c->next_expiration = CURL_OFF_T_MAX;
}
data->state.cookie_engine = TRUE;
}
- c->running = TRUE; /* now, we're running */
+ c->running = TRUE; /* now, we are running */
return c;
}
* should send to the server if used now. The secure boolean informs the cookie
* if a secure connection is achieved or not.
*
- * It shall only return cookies that haven't expired.
+ * It shall only return cookies that have not expired.
*/
struct Cookie *Curl_cookie_getlist(struct Curl_easy *data,
struct CookieInfo *c,
co = c->cookies[myhash];
while(co) {
- /* if the cookie requires we're secure we must only continue if we are! */
+ /* if the cookie requires we are secure we must only continue if we are! */
if(co->secure?secure:TRUE) {
/* now check if the domain is correct */
* cookie_output()
*
* Writes all internally known cookies to the specified file. Specify
- * "-" as file name to write to stdout.
+ * "-" as filename to write to stdout.
*
* The function returns non-zero on write failure.
*/
/** Limits for INCOMING cookies **/
-/* The longest we allow a line to be when reading a cookie from a HTTP header
+/* The longest we allow a line to be when reading a cookie from an HTTP header
or from a cookie jar */
#define MAX_COOKIE_LINE 5000
* the only difference that instead of returning a linked list of
* addrinfo structs this one returns a linked list of Curl_addrinfo
* ones. The memory allocated by this function *MUST* be free'd with
- * Curl_freeaddrinfo(). For each successful call to this function
+ * Curl_freeaddrinfo(). For each successful call to this function
* there must be an associated call later to Curl_freeaddrinfo().
*
* There should be no single call to system's getaddrinfo() in the
* stack, but usable also for IPv4, all hosts and environments.
*
* The memory allocated by this function *MUST* be free'd later on calling
- * Curl_freeaddrinfo(). For each successful call to this function there
+ * Curl_freeaddrinfo(). For each successful call to this function there
* must be an associated call later to Curl_freeaddrinfo().
*
* Curl_addrinfo defined in "lib/curl_addrinfo.h"
/*
* Curl_ip2addr()
*
- * This function takes an internet address, in binary form, as input parameter
+ * This function takes an Internet address, in binary form, as input parameter
* along with its address family and the string version of the address, and it
* returns a Curl_addrinfo chain filled in correctly with information for the
* given address/host
*
* This is strictly for memory tracing and are using the same style as the
* family otherwise present in memdebug.c. I put these ones here since they
- * require a bunch of structs I didn't want to include in memdebug.c
+ * require a bunch of structs I did not want to include in memdebug.c
*/
void
*
* This is strictly for memory tracing and are using the same style as the
* family otherwise present in memdebug.c. I put these ones here since they
- * require a bunch of structs I didn't want to include in memdebug.c
+ * require a bunch of structs I did not want to include in memdebug.c
*/
int
/*
* Curl_addrinfo is our internal struct definition that we use to allow
- * consistent internal handling of this data. We use this even when the
- * system provides an addrinfo structure definition. And we use this for
- * all sorts of IPv4 and IPV6 builds.
+ * consistent internal handling of this data. We use this even when the system
+ * provides an addrinfo structure definition. We use this for all sorts of
+ * IPv4 and IPV6 builds.
*/
struct Curl_addrinfo {
* SPDX-License-Identifier: curl
*
***************************************************************************/
-/* lib/curl_config.h.in. Generated somehow by cmake. */
+/* lib/curl_config.h.in. Generated somehow by cmake. */
/* Location of default ca bundle */
#cmakedefine CURL_CA_BUNDLE "${CURL_CA_BUNDLE}"
/* if GSASL is in use */
#cmakedefine USE_GSASL 1
-/* Define to 1 if you don't want the OpenSSL configuration to be loaded
+/* Define to 1 if you do not want the OpenSSL configuration to be loaded
automatically */
#cmakedefine CURL_DISABLE_OPENSSL_AUTO_LOAD_CONFIG 1
* Curl_des_set_odd_parity()
*
* This is used to apply odd parity to the given byte array. It is typically
- * used by when a cryptography engine doesn't have its own version.
+ * used when a cryptography engine does not have its own version.
*
* The function is a port of the Java based oddParity() function over at:
*
* Curl_read16_le()
*
* This function converts a 16-bit integer from the little endian format, as
- * used in the incoming package to whatever endian format we're using
+ * used in the incoming package to whatever endian format we are using
* natively.
*
* Parameters:
* Curl_read32_le()
*
* This function converts a 32-bit integer from the little endian format, as
- * used in the incoming package to whatever endian format we're using
+ * used in the incoming package to whatever endian format we are using
* natively.
*
* Parameters:
* Curl_read16_be()
*
* This function converts a 16-bit integer from the big endian format, as
- * used in the incoming package to whatever endian format we're using
+ * used in the incoming package to whatever endian format we are using
* natively.
*
* Parameters:
/*
* Curl_gethostname() is a wrapper around gethostname() which allows
- * overriding the host name that the function would normally return.
+ * overriding the hostname that the function would normally return.
* This capability is used by the test suite to verify exact matching
* of NTLM authentication, which exercises libcurl's MD4 and DES code
* as well as by the SMTP module when a hostname is not provided.
*
- * For libcurl debug enabled builds host name overriding takes place
+ * For libcurl debug enabled builds hostname overriding takes place
* when environment variable CURL_GETHOSTNAME is set, using the value
- * held by the variable to override returned host name.
+ * held by the variable to override returned hostname.
*
* Note: The function always returns the un-qualified hostname rather
* than being provider dependent.
* mechanism which intercepts, and might override, the gethostname()
* function call. In this case a given platform must support the
* LD_PRELOAD mechanism and additionally have environment variable
- * CURL_GETHOSTNAME set in order to override the returned host name.
+ * CURL_GETHOSTNAME set in order to override the returned hostname.
*
* For libcurl static library release builds no overriding takes place.
*/
#ifdef DEBUGBUILD
- /* Override host name when environment variable CURL_GETHOSTNAME is set */
+ /* Override hostname when environment variable CURL_GETHOSTNAME is set */
const char *force_hostname = getenv("CURL_GETHOSTNAME");
if(force_hostname) {
strncpy(name, force_hostname, namelen - 1);
* Allocated memory should be free'd with curlx_unicodefree().
*
* Note: Because these are curlx functions their memory usage is not tracked
- * by the curl memory tracker memdebug. You'll notice that curlx function-like
- * macros call free and strdup in parentheses, eg (strdup)(ptr), and that's to
- * ensure that the curl memdebug override macros do not replace them.
+ * by the curl memory tracker memdebug. You will notice that curlx
+ * function-like macros call free and strdup in parentheses, eg (strdup)(ptr),
+ * and that is to ensure that the curl memdebug override macros do not replace
+ * them.
*/
#if defined(UNICODE) && defined(_WIN32)
#elif defined(USE_WIN32_CRYPTO)
# include <wincrypt.h>
#else
-# error "Can't compile NTLM support without a crypto library with DES."
+# error "Cannot compile NTLM support without a crypto library with DES."
# define CURL_NTLM_NOT_SUPPORTED
#endif
#if defined(USE_OPENSSL_DES) || defined(USE_WOLFSSL)
/*
- * Turns a 56 bit key into the 64 bit, odd parity key and sets the key. The
+ * Turns a 56-bit key into a 64-bit, odd parity key and sets the key. The
* key schedule ks is also set.
*/
static void setup_des_key(const unsigned char *key_56,
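
As a rough illustration of what the comment above describes (spreading 56 key bits over 8 bytes and forcing odd parity in the low bit of each byte), here is a self-contained sketch following the usual NTLM/DES construction; it is not copied from curl:

    #include <stddef.h>

    /* Expand a 7-byte (56-bit) key into an 8-byte DES key: each output byte
       carries 7 key bits in its upper bits, the lowest bit is the parity
       bit. */
    static void extend_key_56_to_64(const unsigned char *key_56,
                                    unsigned char *key)
    {
      key[0] = key_56[0];
      key[1] = (unsigned char)(((key_56[0] << 7) & 0xFF) | (key_56[1] >> 1));
      key[2] = (unsigned char)(((key_56[1] << 6) & 0xFF) | (key_56[2] >> 2));
      key[3] = (unsigned char)(((key_56[2] << 5) & 0xFF) | (key_56[3] >> 3));
      key[4] = (unsigned char)(((key_56[3] << 4) & 0xFF) | (key_56[4] >> 4));
      key[5] = (unsigned char)(((key_56[4] << 3) & 0xFF) | (key_56[5] >> 5));
      key[6] = (unsigned char)(((key_56[5] << 2) & 0xFF) | (key_56[6] >> 6));
      key[7] = (unsigned char)((key_56[6] << 1) & 0xFF);
    }

    /* Force odd parity: make the total number of set bits in each byte odd
       by adjusting the lowest bit only. */
    static void set_odd_parity(unsigned char *bytes, size_t len)
    {
      size_t i;
      for(i = 0; i < len; i++) {
        int ones = 0, bit;
        for(bit = 1; bit < 8; bit++)
          ones += (bytes[i] >> bit) & 1;
        bytes[i] = (unsigned char)((bytes[i] & 0xFE) | ((ones & 1) ? 0 : 1));
      }
    }
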
r->m_sb.sb_socket = (int)conn->sock[FIRSTSOCKET];
- /* We have to know if it's a write before we send the
+ /* We have to know if it is a write before we send the
* connect request packet
*/
if(data->state.upload)
if(data->state.aptr.user)
return TRUE;
- /* EXTERNAL can authenticate without a user name and/or password */
+ /* EXTERNAL can authenticate without a username and/or password */
if(sasl->authmechs & sasl->prefmech & SASL_MECH_EXTERNAL)
return TRUE;
#ifdef _WIN32
/*
- * Don't include unneeded stuff in Windows headers to avoid compiler
+ * Do not include unneeded stuff in Windows headers to avoid compiler
* warnings and macro clashes.
* Make sure to define this macro before including any Windows headers.
*/
/*
* Use getaddrinfo to resolve the IPv4 address literal. If the current network
- * interface doesn't support IPv4, but supports IPv6, NAT64, and DNS64,
+ * interface does not support IPv4, but supports IPv6, NAT64, and DNS64,
* performing this task will result in a synthesized IPv6 address.
*/
#if defined(__APPLE__) && !defined(USE_ARES)
#endif
/*
- * Default sizeof(off_t) in case it hasn't been defined in config file.
+ * Default sizeof(off_t) in case it has not been defined in config file.
*/
#ifndef SIZEOF_OFF_T
#endif
#ifndef SIZE_T_MAX
-/* some limits.h headers have this defined, some don't */
+/* some limits.h headers have this defined, some do not */
#if defined(SIZEOF_SIZE_T) && (SIZEOF_SIZE_T > 4)
#define SIZE_T_MAX 18446744073709551615U
#else
#endif
#ifndef SSIZE_T_MAX
-/* some limits.h headers have this defined, some don't */
+/* some limits.h headers have this defined, some do not */
#if defined(SIZEOF_SIZE_T) && (SIZEOF_SIZE_T > 4)
#define SSIZE_T_MAX 9223372036854775807
#else
#endif
/*
- * Arg 2 type for gethostname in case it hasn't been defined in config file.
+ * Arg 2 type for gethostname in case it has not been defined in config file.
*/
#ifndef GETHOSTNAME_TYPE_ARG2
#endif
/*
- * shutdown() flags for systems that don't define them
+ * shutdown() flags for systems that do not define them
*/
#ifndef SHUT_RD
#define FOPEN_APPENDTEXT "a"
#endif
-/* for systems that don't detect this in configure */
+/* for systems that do not detect this in configure */
#ifndef CURL_SA_FAMILY_T
# if defined(HAVE_SA_FAMILY_T)
# define CURL_SA_FAMILY_T sa_family_t
#endif
/*
- * Definition of timeval struct for platforms that don't have it.
+ * Definition of timeval struct for platforms that do not have it.
*/
#ifndef HAVE_STRUCT_TIMEVAL
#if defined(__minix)
-/* Minix doesn't support recv on TCP sockets */
+/* Minix does not support recv on TCP sockets */
#define sread(x,y,z) (ssize_t)read((RECV_TYPE_ARG1)(x), \
(RECV_TYPE_ARG2)(y), \
(RECV_TYPE_ARG3)(z))
*
* HAVE_RECV is defined if you have a function named recv()
* which is used to read incoming data from sockets. If your
- * function has another name then don't define HAVE_RECV.
+ * function has another name then do not define HAVE_RECV.
*
* If HAVE_RECV is defined then RECV_TYPE_ARG1, RECV_TYPE_ARG2,
* RECV_TYPE_ARG3, RECV_TYPE_ARG4 and RECV_TYPE_RETV must also
*
* HAVE_SEND is defined if you have a function named send()
* which is used to write outgoing data on a connected socket.
- * If yours has another name then don't define HAVE_SEND.
+ * If yours has another name then do not define HAVE_SEND.
*
* If HAVE_SEND is defined then SEND_TYPE_ARG1, SEND_QUAL_ARG2,
* SEND_TYPE_ARG2, SEND_TYPE_ARG3, SEND_TYPE_ARG4 and
#if defined(__minix)
-/* Minix doesn't support send on TCP sockets */
+/* Minix does not support send on TCP sockets */
#define swrite(x,y,z) (ssize_t)write((SEND_TYPE_ARG1)(x), \
(SEND_TYPE_ARG2)(y), \
(SEND_TYPE_ARG3)(z))
/*
* 'bool' exists on platforms with <stdbool.h>, i.e. C99 platforms.
- * On non-C99 platforms there's no bool, so define an enum for that.
+ * On non-C99 platforms there is no bool, so define an enum for that.
* On C99 platforms 'false' and 'true' also exist. Enum uses a
* global namespace though, so use bool_false and bool_true.
*/
} bool;
/*
- * Use a define to let 'true' and 'false' use those enums. There
+ * Use a define to let 'true' and 'false' use those enums. There
* are currently no use of true and false in libcurl proper, but
* there are some in the examples. This will cater for any later
* code happening to use true and false.
* ** written by Evgeny Grin (Karlson2k) for GNU libmicrohttpd. ** *
* ** The author ported the code to libcurl. The ported code is provided ** *
* ** under curl license. ** *
- * ** This is a minimal version with minimal optimisations. Performance ** *
+ * ** This is a minimal version with minimal optimizations. Performance ** *
* ** can be significantly improved. Big-endian store and load macros ** *
- * ** are obvious targets for optimisation. ** */
+ * ** are obvious targets for optimization. ** */
#ifdef __GNUC__
# if defined(__has_attribute) && defined(__STDC_VERSION__)
bits %= 64;
if(0 == bits)
return value;
- /* Defined in a form which modern compiler could optimise. */
+ /* Defined in a form which modern compiler could optimize. */
return (value >> bits) | (value << (64 - bits));
}
See FIPS PUB 180-4 section 5.2.2, 6.7, 6.4. */
curl_uint64_t W[16];
- /* 'Ch' and 'Maj' macro functions are defined with widely-used optimisation.
+ /* 'Ch' and 'Maj' macro functions are defined with widely-used optimization.
See FIPS PUB 180-4 formulae 4.8, 4.9. */
#define Sha512_Ch(x,y,z) ( (z) ^ ((x) & ((y) ^ (z))) )
#define Sha512_Maj(x,y,z) ( ((x) & (y)) ^ ((z) & ((x) ^ (y))) )
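
The rewritten Ch and Maj forms above are equivalent to the textbook FIPS 180-4 definitions, (x&y)^(~x&z) and (x&y)^(x&z)^(y&z). A tiny standalone check over all bit combinations, for the skeptical reader:

    #include <assert.h>

    int main(void)
    {
      unsigned x, y, z;
      for(x = 0; x <= 1; x++)
        for(y = 0; y <= 1; y++)
          for(z = 0; z <= 1; z++) {
            /* Ch: z ^ (x & (y ^ z))  ==  (x & y) ^ (~x & z) */
            assert((z ^ (x & (y ^ z))) == ((x & y) ^ ((x ^ 1u) & z)));
            /* Maj: (x & y) ^ (z & (x ^ y))  ==  (x & y) ^ (x & z) ^ (y & z) */
            assert(((x & y) ^ (z & (x ^ y))) ==
                   ((x & y) ^ (x & z) ^ (y & z)));
          }
      return 0;
    }
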
*
* Parameters:
*
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* identity [in/out] - The identity structure.
*
if(CURL_WRITEFUNC_PAUSE == nwritten) {
if(data->conn && data->conn->handler->flags & PROTOPT_NONETWORK) {
/* Protocols that work without network cannot be paused. This is
- actually only FILE:// just now, and it can't pause since the
- transfer isn't done using the "normal" procedure. */
+ actually only FILE:// just now, and it cannot pause since the
+ transfer is not done using the "normal" procedure. */
failf(data, "Write callback asked for PAUSE when not supported");
return CURLE_WRITE_ERROR;
}
const char *hostp = host;
/* The expected output length is 16 bytes more than the length of
- * the QNAME-encoding of the host name.
+ * the QNAME-encoding of the hostname.
*
* A valid DNS name may not contain a zero-length label, except at
- * the end. For this reason, a name beginning with a dot, or
+ * the end. For this reason, a name beginning with a dot, or
* containing a sequence of two or more consecutive dots, is invalid
* and cannot be encoded as a QNAME.
*
- * If the host name ends with a trailing dot, the corresponding
- * QNAME-encoding is one byte longer than the host name. If (as is
+ * If the hostname ends with a trailing dot, the corresponding
+ * QNAME-encoding is one byte longer than the hostname. If (as is
* also valid) the hostname is shortened by the omission of the
* trailing dot, then its QNAME-encoding will be two bytes longer
- * than the host name.
+ * than the hostname.
*
* Each [ label, dot ] pair is encoded as [ length, label ],
- * preserving overall length. A final [ label ] without a dot is
+ * preserving overall length. A final [ label ] without a dot is
* also encoded as [ length, label ], increasing overall length
* by one. The encoding is completed by appending a zero byte,
* representing the zero-length root label, again increasing
* TODO: Figure out the conditions under which we want to make
* a request for an HTTPS RR when we are not doing ECH. For now,
* making this request breaks a bunch of DoH tests, e.g. test2100,
- * where the additional request doesn't match the pre-cooked data
- * files, so there's a bit of work attached to making the request
- * in a non-ECH use-case. For the present, we'll only make the
+ * where the additional request does not match the pre-cooked data
+ * files, so there is a bit of work attached to making the request
+ * in a non-ECH use-case. For the present, we will only make the
* request when ECH is enabled in the build and is being used for
* the curl operation.
*/
/* avoid undefined behavior by casting to unsigned before shifting
24 bits, possibly into the sign bit. codegen is same, but
- ub sanitizer won't be upset */
+ ub sanitizer will not be upset */
return ((unsigned)doh[0] << 24) | ((unsigned)doh[1] << 16) |
((unsigned)doh[2] << 8) | doh[3];
}
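
To make the length arithmetic in the QNAME comment above concrete ("www.example.com" becomes 3www7example3com0), here is a hedged, self-contained sketch of the encoding; it is not the DoH code itself:

    #include <stddef.h>
    #include <string.h>

    /* Encode host as a DNS QNAME into out (size outlen). Returns the encoded
       length, or 0 on error (empty label, oversized label, no room).
       "www.example.com"  -> \x03www\x07example\x03com\x00  (17 bytes)
       "www.example.com." -> the same 17 bytes: the trailing dot adds nothing
       to the encoding, so it ends up only one byte longer than the name. */
    static size_t qname_encode(const char *host, unsigned char *out,
                               size_t outlen)
    {
      size_t o = 0;
      while(*host) {
        const char *dot = strchr(host, '.');
        size_t label = dot ? (size_t)(dot - host) : strlen(host);
        if(!label || label > 63 || o + 1 + label + 1 > outlen)
          return 0;                      /* empty label, too long, no room */
        out[o++] = (unsigned char)label; /* [length, label] replaces [label, dot] */
        memcpy(&out[o], host, label);
        o += label;
        host += label;
        if(*host == '.')
          host++;                        /* the dot is now implicit */
      }
      out[o++] = 0;                      /* zero-length root label terminates */
      return o;
    }
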
*
* This function returns a pointer to the first element of a newly allocated
* Curl_addrinfo struct linked list filled with the data from a set of DoH
- * lookups. Curl_addrinfo is meant to work like the addrinfo struct does for
+ * lookups. Curl_addrinfo is meant to work like the addrinfo struct does for
* a IPv6 stack, but usable also for IPv4, all hosts and environments.
*
* The memory allocated by this function *MUST* be free'd later on calling
- * Curl_freeaddrinfo(). For each successful call to this function there
+ * Curl_freeaddrinfo(). For each successful call to this function there
* must be an associated call later to Curl_freeaddrinfo().
*/
CURL_SA_FAMILY_T addrtype;
if(de->addr[i].type == DNS_TYPE_AAAA) {
#ifndef USE_IPV6
- /* we can't handle IPv6 addresses */
+ /* we cannot handle IPv6 addresses */
continue;
#else
ss_size = sizeof(struct sockaddr_in6);
*
* The input buffer pointer will be modified so it points to
* just after the end of the DNS name encoding on output. (And
- * that's why it's an "unsigned char **" :-)
+ * that is why it is an "unsigned char **" :-)
*/
static CURLcode local_decode_rdata_name(unsigned char **buf, size_t *remaining,
char **dnsname)
* output is comma-sep list of the strings
* implementations may or may not handle quoting of comma within
* string values, so we might see a comma within the wire format
- * version of a string, in which case we'll precede that by a
+ * version of a string, in which case we will precede that by a
* backslash - same goes for a backslash character, and of course
* we need to use two backslashes in strings when we mean one;-)
*/
#ifdef DEBUGBUILD
static CURLcode test_alpn_escapes(void)
{
- /* we'll use an example from draft-ietf-dnsop-svcb, figure 10 */
+ /* we will use an example from draft-ietf-dnsop-svcb, figure 10 */
static unsigned char example[] = {
0x08, /* length 8 */
0x66, 0x5c, 0x6f, 0x6f, 0x2c, 0x62, 0x61, 0x72, /* value "f\\oo,bar" */
char *dnsname = NULL;
#ifdef DEBUGBUILD
- /* a few tests of escaping, shouldn't be here but ok for now */
+ /* a few tests of escaping, should not be here but ok for now */
if(test_alpn_escapes() != CURLE_OK)
return CURLE_OUT_OF_MEMORY;
#endif
if(Curl_trc_ft_is_verbose(data, &Curl_doh_trc)) {
- infof(data, "[DoH] Host name: %s", dohp->host);
+ infof(data, "[DoH] hostname: %s", dohp->host);
showdoh(data, &de);
}
}
/*
- * free the buffer and re-init the necessary fields. It doesn't touch the
+ * free the buffer and re-init the necessary fields. It does not touch the
* 'init' field and thus this buffer can be reused to add data to again.
*/
void Curl_dyn_free(struct dynbuf *s)
size_t a = s->allc;
size_t fit = len + indx + 1; /* new string + old string + zero byte */
- /* try to detect if there's rubbish in the struct */
+ /* try to detect if there is rubbish in the struct */
DEBUGASSERT(s->init == DYNINIT);
DEBUGASSERT(s->toobig);
DEBUGASSERT(indx < s->toobig);
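
A short sketch of the reuse pattern the comment above allows, assuming the internal lib/dynbuf.h interface (Curl_dyn_init, Curl_dyn_add, Curl_dyn_ptr, Curl_dyn_free); it only builds inside the library tree and is illustrative rather than taken from the sources:

    #include "dynbuf.h"

    /* Build a string, release it, then reuse the same dynbuf. Curl_dyn_free()
       releases the storage but leaves the struct initialized, so it can be
       added to again without another Curl_dyn_init(). */
    static CURLcode dynbuf_reuse_example(void)
    {
      struct dynbuf buf;
      CURLcode result;

      Curl_dyn_init(&buf, 1024);          /* cap the buffer at 1024 bytes */

      result = Curl_dyn_add(&buf, "first use");
      if(result)
        return result;
      /* ... use Curl_dyn_ptr(&buf) / Curl_dyn_len(&buf) here ... */

      Curl_dyn_free(&buf);                /* released, but still reusable */

      result = Curl_dyn_add(&buf, "second use, same struct");
      if(result)
        return result;

      Curl_dyn_free(&buf);
      return CURLE_OK;
    }
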
const char *name, const char *value);
/**
- * Add a single header from a HTTP/1.1 formatted line at the end. Line
+ * Add a single header from an HTTP/1.1 formatted line at the end. Line
* may contain a delimiting \r\n or just \n. Any characters after
* that will be ignored.
*/
CURLcode Curl_dynhds_h1_cadd_line(struct dynhds *dynhds, const char *line);
/**
- * Add a single header from a HTTP/1.1 formatted line at the end. Line
+ * Add a single header from an HTTP/1.1 formatted line at the end. Line
* may contain a delimiting \r\n or just \n. Any characters after
* that will be ignored.
*/
global_init_lock();
if(initialized) {
- /* Already initialized, don't do it again, but bump the variable anyway to
+ /* Already initialized, do not do it again, but bump the variable anyway to
work like curl_global_init() and require the same amount of cleanup
calls. */
initialized++;
/**
* curl_global_cleanup() globally cleanups curl, uses the value of
- * "easy_init_flags" to determine what needs to be cleaned up and what doesn't.
+ * "easy_init_flags" to determine what needs to be cleaned up and what does
+ * not.
*/
void curl_global_cleanup(void)
{
if(mcode)
return CURLE_URL_MALFORMAT;
- /* we don't really care about the "msgs_in_queue" value returned in the
+ /* we do not really care about the "msgs_in_queue" value returned in the
second argument */
msg = curl_multi_info_read(multi, &pollrc);
if(msg) {
return wait_or_timeout(multi, &evs);
}
#else /* DEBUGBUILD */
-/* when not built with debug, this function doesn't exist */
+/* when not built with debug, this function does not exist */
#define easy_events(x) CURLE_NOT_BUILT_IN
#endif
* easy handle, destroys the multi handle and returns the easy handle's return
* code.
*
- * REALITY: it can't just create and destroy the multi handle that easily. It
+ * REALITY: it cannot just create and destroy the multi handle that easily. It
* needs to keep it around since if this easy handle is used again by this
* function, the same multi handle must be reused so that the same pools and
* caches can be used.
/* run the transfer */
result = events ? easy_events(multi) : easy_transfer(multi);
- /* ignoring the return code isn't nice, but atm we can't really handle
+ /* ignoring the return code is not nice, but atm we cannot really handle
a failure here, room for future improvement! */
(void)curl_multi_remove_handle(multi, data);
bool keep_changed, unpause_read, not_all_paused;
if(!GOOD_EASY_HANDLE(data) || !data->conn)
- /* crazy input, don't continue */
+ /* crazy input, do not continue */
return CURLE_BAD_FUNCTION_ARGUMENT;
if(Curl_is_in_callback(data))
}
else {
if((o->id == id) && !(o->flags & CURLOT_FLAG_ALIAS))
- /* don't match alias options */
+ /* do not match alias options */
return o;
}
o++;
/*
* file_connect() gets called from Curl_protocol_connect() to allow us to
- * do protocol-specific actions at connect-time. We emulate a
+ * do protocol-specific actions at connect-time. We emulate a
* connect-then-transfer protocol and "connect" to the file here
*/
static CURLcode file_connect(struct Curl_easy *data, bool *done)
return result;
#ifdef DOS_FILESYSTEM
- /* If the first character is a slash, and there's
+ /* If the first character is a slash, and there is
something that looks like a drive at the beginning of
- the path, skip the slash. If we remove the initial
+ the path, skip the slash. If we remove the initial
slash in all cases, paths without drive letters end up
- relative to the current directory which isn't how
+ relative to the current directory which is not how
browsers work.
Some browsers accept | instead of : as the drive letter
separator, so we do too.
On other platforms, we need the slash to indicate an
- absolute pathname. On Windows, absolute paths start
+ absolute pathname. On Windows, absolute paths start
with a drive letter.
*/
actual_path = real_path;
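
A self-contained sketch of the rule described in the comment above (drop the leading slash only when a drive letter follows, accepting '|' as well as ':' as the separator); this is not the file: handler itself:

    #include <ctype.h>

    /* Given the path part of a file:// URL, return a pointer to the path the
       local filesystem should see: the leading slash is skipped only when a
       drive letter follows, so paths without a drive stay absolute.
       Illustrative only. */
    static const char *skip_dos_slash(const char *path)
    {
      if(path[0] == '/' && path[1] && isalpha((unsigned char)path[1]) &&
         (path[2] == ':' || path[2] == '|'))
        return path + 1;   /* drive letter follows: drop the leading slash */
      return path;         /* no drive letter: keep the path absolute */
    }
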
bool eos = FALSE;
/*
- * Since FILE: doesn't do the full init, we need to provide some extra
+ * Since FILE: does not do the full init, we need to provide some extra
* assignments here.
*/
fd = open(file->path, mode, data->set.new_file_perms);
if(fd < 0) {
- failf(data, "Can't open %s for writing", file->path);
+ failf(data, "Cannot open %s for writing", file->path);
return CURLE_WRITE_ERROR;
}
if(data->state.resume_from < 0) {
if(fstat(fd, &file_stat)) {
close(fd);
- failf(data, "Can't get the size of %s", file->path);
+ failf(data, "Cannot get the size of %s", file->path);
return CURLE_WRITE_ERROR;
}
data->state.resume_from = (curl_off_t)file_stat.st_size;
* file_do() is the protocol-specific function for the do-phase, separated
* from the connect-phase above. Other protocols merely setup the transfer in
* the do-phase, to have it done in the main transfer loop but since some
- * platforms we support don't allow select()ing etc on file handles (as
+ * platforms we support do not allow select()ing etc on file handles (as
* opposed to sockets) we instead perform the whole do-operation in this
* function.
*/
static CURLcode file_do(struct Curl_easy *data, bool *done)
{
- /* This implementation ignores the host name in conformance with
+ /* This implementation ignores the hostname in conformance with
RFC 1738. Only local files (reachable via the standard file system)
are supported. This means that files on remotely mounted directories
(via NFS, Samba, NT sharing) can be accessed through a file:// URL
* of the stream if the filesize could be determined */
if(data->state.resume_from < 0) {
if(!fstated) {
- failf(data, "Can't get the size of file.");
+ failf(data, "Cannot get the size of file.");
return CURLE_READ_ERROR;
}
data->state.resume_from += (curl_off_t)statbuf.st_size;
if(data->state.resume_from > 0) {
/* We check explicitly if we have a start offset, because
- * expected_size may be -1 if we don't know how large the file is,
+ * expected_size may be -1 if we do not know how large the file is,
* in which case we should not adjust it. */
if(data->state.resume_from <= expected_size)
expected_size -= data->state.resume_from;
if(!S_ISDIR(statbuf.st_mode)) {
while(!result) {
ssize_t nread;
- /* Don't fill a whole buffer if we want less than all data */
+ /* Do not fill a whole buffer if we want less than all data */
size_t bytestoread;
if(size_known) {
/*
The dirslash() function breaks a null-terminated pathname string into
directory and filename components then returns the directory component up
- to, *AND INCLUDING*, a final '/'. If there is no directory in the path,
+ to, *AND INCLUDING*, a final '/'. If there is no directory in the path,
this instead returns a "" string.
This function returns a pointer to malloc'ed memory.
- The input path to this function is expected to have a file name part.
+ The input path to this function is expected to have a filename part.
*/
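
A minimal sketch of the contract described above (the directory component up to and including the final '/', or "" when there is none); the real helper also has to deal with Windows path separators, which this ignores:

    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'ed copy of the directory part of path, including the
       trailing '/', or a malloc'ed "" when the path has no directory part.
       "dir/sub/file.txt" -> "dir/sub/", "file.txt" -> "". Illustrative only. */
    static char *dir_of(const char *path)
    {
      const char *slash = strrchr(path, '/');
      size_t len = slash ? (size_t)(slash - path) + 1 : 0;
      char *out = malloc(len + 1);
      if(out) {
        memcpy(out, path, len);
        out[len] = '\0';
      }
      return out;
    }
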
#ifdef _WIN32
* Curl_fopen() opens a file for writing with a temp name, to be renamed
* to the final name when completed. If there is an existing file using this
* name at the time of the open, this function will clone the mode from that
- * file. if 'tempname' is non-NULL, it needs a rename after the file is
+ * file. If 'tempname' is non-NULL, it needs a rename after the file is
* written.
*/
CURLcode Curl_fopen(struct Curl_easy *data, const char *filename,
dir = dirslash(filename);
if(dir) {
- /* The temp file name should not end up too long for the target file
+ /* The temp filename should not end up too long for the target file
system */
tempstore = aprintf("%s%s.tmp", dir, randbuf);
free(dir);
struct curl_forms *forms = NULL;
char *array_value = NULL; /* value read from an array */
- /* This is a state variable, that if TRUE means that we're parsing an
- array that we got passed to us. If FALSE we're parsing the input
+ /* This is a state variable, that if TRUE means that we are parsing an
+ array that we got passed to us. If FALSE we are parsing the input
va_list arguments. */
bool array_state = FALSE;
switch(option) {
case CURLFORM_ARRAY:
if(array_state)
- /* we don't support an array from within an array */
+ /* we do not support an array from within an array */
return_value = CURL_FORMADD_ILLEGAL_ARRAY;
else {
forms = va_arg(params, struct curl_forms *);
array_state?(curl_off_t)(size_t)array_value:va_arg(params, curl_off_t);
break;
- /* Get contents from a given file name */
+ /* Get contents from a given filename */
case CURLFORM_FILECONTENT:
if(current_form->flags & (HTTPPOST_PTRCONTENTS|HTTPPOST_READFILE))
return_value = CURL_FORMADD_OPTION_TWICE;
array_state?array_value:va_arg(params, char *);
if(userp) {
current_form->userp = userp;
- current_form->value = userp; /* this isn't strictly true but we
+ current_form->value = userp; /* this is not strictly true but we
derive a value from this later on
and we need this non-NULL to be
accepted as a fine form part */
}
if(!(form->flags & HTTPPOST_PTRNAME) &&
(form == first_form) ) {
- /* Note that there's small risk that form->name is NULL here if the
+ /* Note that there is a small risk that form->name is NULL here if the
app passed in a bad combo, so we better check for that first. */
if(form->name) {
/* copy name (without strdup; possibly not null-terminated) */
)
free(form->contents); /* free the contents */
free(form->contenttype); /* free the content type */
- free(form->showfilename); /* free the faked file name */
+ free(form->showfilename); /* free the faked filename */
free(form); /* free the struct */
form = next;
} while(form); /* continue */
if(post->flags & (HTTPPOST_FILENAME | HTTPPOST_READFILE)) {
if(!strcmp(file->contents, "-")) {
- /* There are a few cases where the code below won't work; in
+ /* There are a few cases where the code below will not work; in
particular, freopen(stdin) by the caller is not guaranteed
to result as expected. This feature has been kept for backward
- compatibility: use of "-" pseudo file name should be avoided. */
+ compatibility: use of "-" pseudo filename should be avoided. */
result = curl_mime_data_cb(part, (curl_off_t) -1,
(curl_read_callback) fread,
fseeko_wrapper,
}
}
- /* Set fake file name. */
+ /* Set fake filename. */
if(!result && post->showfilename)
if(post->more || (post->flags & (HTTPPOST_FILENAME | HTTPPOST_BUFFER |
HTTPPOST_CALLBACK)))
long flags;
char *buffer; /* pointer to existing buffer used for file upload */
size_t bufferlength;
- char *showfilename; /* The file name to show. If not set, the actual
- file name will be used */
+ char *showfilename; /* The filename to show. If not set, the actual
+ filename will be used */
char *userp; /* pointer for the read callback */
struct curl_slist *contentheader;
struct FormInfo *more;
return result;
if(conn->proto.ftpc.state_saved == FTP_STOR) {
- /* When we know we're uploading a specified file, we can get the file
+ /* When we know we are uploading a specified file, we can get the file
size prior to the actual upload. */
Curl_pgrsSetUploadSize(data, data->state.infilesize);
*
* AllowServerConnect()
*
- * When we've issue the PORT command, we have told the server to connect to
+ * When we have issued the PORT command, we have told the server to connect to
* us. This function checks whether data connection is established if so it is
* accepted.
*
{
/*
* We cannot read just one byte per read() and then go back to select() as
- * the OpenSSL read() doesn't grok that properly.
+ * the OpenSSL read() does not grok that properly.
*
* Alas, read as much as possible, split up into lines, use the ending
* line in a response or continue reading. */
*
* A caution here is that the ftp_readresp() function has a cache that may
* contain pieces of a response from the previous invoke and we need to
- * make sure we don't just wait for input while there is unhandled data in
+ * make sure we do not just wait for input while there is unhandled data in
* that cache. But also, if the cache is there, we call ftp_readresp() and
- * the cache wasn't good enough to continue we must not just busy-loop
+ * if the cache was not good enough to continue, we must not just busy-loop
* around this function.
*
*/
if(Curl_dyn_len(&pp->recvbuf) && (cache_skip < 2)) {
/*
- * There's a cache left since before. We then skipping the wait for
+ * There is a cache left since before. We then skip the wait for
* socket action, unless this is the same cache like the previous round
* as then the cache was deemed not enough to act on and we then need to
* wait for more data anyway.
*nreadp += nread;
- } /* while there's buffer left and loop is requested */
+ } /* while there is buffer left and loop is requested */
pp->pending_resp = FALSE;
CURL_TRC_FTP(data, "[%s] ftp_domore_getsock()", FTP_DSTATE(data));
if(FTP_STOP == ftpc->state) {
- /* if stopped and still in this state, then we're also waiting for a
+ /* if stopped and still in this state, then we are also waiting for a
connect on the secondary connection */
DEBUGASSERT(conn->sock[SECONDARYSOCKET] != CURL_SOCKET_BAD ||
(conn->cfilter[SECONDARYSOCKET] &&
#endif
ipstr, hbuf, sizeof(hbuf))) {
case IF2IP_NOT_FOUND:
- /* not an interface, use the given string as host name instead */
+ /* not an interface, use the given string as hostname instead */
host = ipstr;
break;
case IF2IP_AF_NOT_SUPPORTED:
goto out;
case IF2IP_FOUND:
- host = hbuf; /* use the hbuf for host name */
+ host = hbuf; /* use the hbuf for hostname */
break;
}
}
if(!host) {
const char *r;
- /* not an interface and not a host name, get default by extracting
+ /* not an interface and not a hostname, get default by extracting
the IP from the control connection */
sslen = sizeof(ss);
if(getsockname(conn->sock[FIRSTSOCKET], sa, &sslen)) {
if(!r) {
goto out;
}
- host = hbuf; /* use this host name */
+ host = hbuf; /* use this hostname */
possibly_non_local = FALSE; /* we know it is local now */
}
/* It failed. */
error = SOCKERRNO;
if(possibly_non_local && (error == EADDRNOTAVAIL)) {
- /* The requested bind address is not local. Use the address used for
+ /* The requested bind address is not local. Use the address used for
* the control connection instead and restart the port loop
*/
infof(data, "bind(port=%hu) on non-local address failed: %s", port,
goto out;
}
port = port_min;
- possibly_non_local = FALSE; /* don't try this again */
+ possibly_non_local = FALSE; /* do not try this again */
continue;
}
if(error != EADDRINUSE && error != EACCES) {
struct connectdata *conn = data->conn;
if(ftp->transfer != PPTRANSFER_BODY) {
- /* doesn't transfer any data */
+ /* does not transfer any data */
/* still possibly do PRE QUOTE jobs */
ftp_state(data, FTP_RETR_PREQUOTE);
if((ftp->transfer == PPTRANSFER_INFO) && ftpc->file) {
/* if a "head"-like request is being made (on a file) */
- /* we know ftpc->file is a valid pointer to a file name */
+ /* we know ftpc->file is a valid pointer to a filename */
result = Curl_pp_sendf(data, &ftpc->pp, "SIZE %s", ftpc->file);
if(!result)
ftp_state(data, FTP_SIZE);
static CURLcode ftp_state_retr_prequote(struct Curl_easy *data)
{
- /* We've sent the TYPE, now we must send the list of prequote strings */
+ /* We have sent the TYPE, now we must send the list of prequote strings */
return ftp_state_quote(data, TRUE, FTP_RETR_PREQUOTE);
}
static CURLcode ftp_state_stor_prequote(struct Curl_easy *data)
{
- /* We've sent the TYPE, now we must send the list of prequote strings */
+ /* We have sent the TYPE, now we must send the list of prequote strings */
return ftp_state_quote(data, TRUE, FTP_STOR_PREQUOTE);
}
struct ftp_conn *ftpc = &conn->proto.ftpc;
/* If we have selected NOBODY and HEADER, it means that we only want file
- information. Which in FTP can't be much more than the file size and
+ information. Which in FTP cannot be much more than the file size and
date. */
if(data->req.no_body && ftpc->file &&
ftp_need_type(conn, data->state.prefer_ascii)) {
if((data->state.resume_from && !sizechecked) ||
((data->state.resume_from > 0) && sizechecked)) {
- /* we're about to continue the uploading of a file */
+ /* we are about to continue the uploading of a file */
/* 1. get already existing file's size. We use the SIZE command for this
which may not exist in the server! The SIZE command is not in
RFC959. */
/* 2. This used to set REST. But since we can do append, we
- don't another ftp command. We just skip the source file
+ do not need another ftp command. We just skip the source file
offset and then we APPEND the rest on the file instead */
/* 3. pass file-size number of bytes in the source file */
failf(data, "Could not seek stream");
return CURLE_FTP_COULDNT_USE_REST;
}
- /* seekerr == CURL_SEEKFUNC_CANTSEEK (can't seek to offset) */
+ /* seekerr == CURL_SEEKFUNC_CANTSEEK (cannot seek to offset) */
do {
char scratch[4*1024];
size_t readthisamountnow =
/* no data to transfer */
Curl_xfer_setup_nop(data);
- /* Set ->transfer so that we won't get any error in
- * ftp_done() because we didn't transfer anything! */
+ /* Set ->transfer so that we will not get any error in
+ * ftp_done() because we did not transfer anything! */
ftp->transfer = PPTRANSFER_NONE;
ftp_state(data, FTP_STOP);
return CURLE_OK;
}
}
- /* we've passed, proceed as normal */
+ /* we have passed, proceed as normal */
} /* resume_from */
result = Curl_pp_sendf(data, &ftpc->pp, append?"APPE %s":"STOR %s",
}
else {
if(data->set.ignorecl || data->state.prefer_ascii) {
- /* 'ignorecl' is used to support download of growing files. It
+ /* 'ignorecl' is used to support download of growing files. It
prevents the state machine from requesting the file size from
- the server. With an unknown file size the download continues
+ the server. With an unknown file size the download continues
until the server terminates it, otherwise the client stops if
- the received byte count exceeds the reported file size. Set
+ the received byte count exceeds the reported file size. Set
option CURLOPT_IGNORE_CONTENT_LENGTH to 1 to enable this
behavior.
In addition: asking for the size for 'TYPE A' transfers is not
- constructive since servers don't report the converted size. So
+ constructive since servers do not report the converted size. So
skip it.
*/
result = Curl_pp_sendf(data, &ftpc->pp, "RETR %s", ftpc->file);
&& !(conn->bits.tunnel_proxy || conn->bits.socksproxy)
#endif
) {
- /* We can't disable EPSV when doing IPv6, so this is instead a fail */
+ /* We cannot disable EPSV when doing IPv6, so this is instead a fail */
failf(data, "Failed EPSV attempt, exiting");
return CURLE_WEIRD_SERVER_REPLY;
}
static char *control_address(struct connectdata *conn)
{
/* Returns the control connection IP address.
- If a proxy tunnel is used, returns the original host name instead, because
+ If a proxy tunnel is used, returns the original hostname instead, because
the effective control connection address is the proxy address,
not the ftp host. */
#ifndef CURL_DISABLE_PROXY
if(conn->bits.proxy) {
/*
* This connection uses a proxy and we need to connect to the proxy again
- * here. We don't want to rely on a former host lookup that might've
+ * here. We do not want to rely on a former host lookup that might have
* expired now, instead we remake the lookup here and now!
*/
const char * const host_name = conn->bits.socksproxy ?
connectport = (unsigned short)conn->primary.remote_port;
if(!addr) {
- failf(data, "Can't resolve proxy host %s:%hu", host_name, connectport);
+ failf(data, "Cannot resolve proxy host %s:%hu", host_name, connectport);
return CURLE_COULDNT_RESOLVE_PROXY;
}
}
connectport = ftpc->newport; /* we connect to the remote port */
if(!addr) {
- failf(data, "Can't resolve new host %s:%hu", ftpc->newhost, connectport);
+ failf(data, "Cannot resolve new host %s:%hu",
+ ftpc->newhost, connectport);
return CURLE_FTP_CANT_GET_HOST;
}
}
CURL_CF_SSL_ENABLE : CURL_CF_SSL_DISABLE);
if(result) {
- Curl_resolv_unlock(data, addr); /* we're done using this address */
+ Curl_resolv_unlock(data, addr); /* we are done using this address */
if(ftpc->count1 == 0 && ftpcode == 229)
return ftp_epsv_disable(data, conn);
/* this just dumps information about this second connection */
ftp_pasv_verbose(data, addr->addr, ftpc->newhost, connectport);
- Curl_resolv_unlock(data, addr); /* we're done using this address */
+ Curl_resolv_unlock(data, addr); /* we are done using this address */
Curl_safefree(conn->secondaryhostname);
conn->secondary_port = ftpc->newport;
* call to Curl_client_write() so it does the right thing.
*
* Notice that we cannot enable this flag for FTP in general,
- * as an FTP transfer might involve a HTTP proxy connection and
+ * as an FTP transfer might involve an HTTP proxy connection and
* headers from CONNECT should not automatically be part of the
* output. */
CURLcode result;
/* We always (attempt to) get the size of downloads, so it is done before
this even when not doing resumes. */
if(filesize == -1) {
- infof(data, "ftp server doesn't support SIZE");
- /* We couldn't get the size and therefore we can't know if there really
+ infof(data, "ftp server does not support SIZE");
+ /* We could not get the size and therefore we cannot know if there really
is a part of the file left to get, although the server will just
- close the connection when we start the connection so it won't cause
+ close the connection when we start the connection so it will not cause
us any harm, just not make us exit as nicely. */
}
else {
/* We got a file size report, so we check that there actually is a
part of the file left to get, or else we go home. */
if(data->state.resume_from< 0) {
- /* We're supposed to download the last abs(from) bytes */
+ /* We are supposed to download the last abs(from) bytes */
if(filesize < -data->state.resume_from) {
failf(data, "Offset (%" CURL_FORMAT_CURL_OFF_T
") was beyond file size (%" CURL_FORMAT_CURL_OFF_T ")",
Curl_xfer_setup_nop(data);
infof(data, "File already completely downloaded");
- /* Set ->transfer so that we won't get any error in ftp_done()
- * because we didn't transfer the any file */
+ /* Set ->transfer so that we will not get any error in ftp_done()
+ * because we did not transfer any file */
ftp->transfer = PPTRANSFER_NONE;
ftp_state(data, FTP_STOP);
return CURLE_OK;
!data->set.ignorecl &&
(ftp->downloadsize < 1)) {
/*
- * It seems directory listings either don't show the size or very
+ * It seems directory listings either do not show the size or very
* often uses size 0 anyway. ASCII transfers may very well turn out
* that the transferred amount of data is not the same as this line
* tells, why using this number in those cases only confuses us.
else {
if((instate == FTP_LIST) && (ftpcode == 450)) {
/* simply no matching files in the dir listing */
- ftp->transfer = PPTRANSFER_NONE; /* don't download anything */
+ ftp->transfer = PPTRANSFER_NONE; /* do not download anything */
ftp_state(data, FTP_STOP); /* this phase is over */
}
else {
if(data->set.str[STRING_FTP_ALTERNATIVE_TO_USER] &&
!ftpc->ftp_trying_alternative) {
- /* Ok, USER failed. Let's try the supplied command. */
+ /* Ok, USER failed. Let's try the supplied command. */
result =
Curl_pp_sendf(data, &ftpc->pp, "%s",
data->set.str[STRING_FTP_ALTERNATIVE_TO_USER]);
#endif
if(data->set.use_ssl && !conn->bits.ftp_use_control_ssl) {
- /* We don't have a SSL/TLS control connection yet, but FTPS is
+ /* We do not have an SSL/TLS control connection yet, but FTPS is
requested. Try a FTPS connection now */
ftpc->count3 = 0;
default:
failf(data, "unsupported parameter to CURLOPT_FTPSSLAUTH: %d",
(int)data->set.ftpsslauth);
- return CURLE_UNKNOWN_OPTION; /* we don't know what to do */
+ return CURLE_UNKNOWN_OPTION; /* we do not know what to do */
}
result = Curl_pp_sendf(data, &ftpc->pp, "AUTH %s",
ftpauth[ftpc->count1]);
data->state.most_recent_ftp_entrypath = ftpc->entrypath;
}
else {
- /* couldn't get the path */
+ /* could not get the path */
Curl_dyn_free(&out);
infof(data, "Failed to figure out path");
}
else {
/* return failure */
failf(data, "Server denied you to change to the given directory");
- ftpc->cwdfail = TRUE; /* don't remember this path as we failed
+ ftpc->cwdfail = TRUE; /* do not remember this path as we failed
to enter it */
result = CURLE_REMOTE_ACCESS_DENIED;
}
case CURLE_REMOTE_FILE_NOT_FOUND:
case CURLE_WRITE_ERROR:
/* the connection stays alive fine even though this happened */
- case CURLE_OK: /* doesn't affect the control connection's status */
+ case CURLE_OK: /* does not affect the control connection's status */
if(!premature)
break;
/* free the dir tree and file parts */
freedirs(ftpc);
- /* shut down the socket to inform the server we're done */
+ /* shut down the socket to inform the server we are done */
#ifdef _WIN32_WCE
shutdown(conn->sock[SECONDARYSOCKET], 2); /* SD_BOTH */
if((-1 != data->req.size) &&
(data->req.size != data->req.bytecount) &&
#ifdef CURL_DO_LINEEND_CONV
- /* Most FTP servers don't adjust their file SIZE response for CRLFs, so
- * we'll check to see if the discrepancy can be explained by the number
- * of CRLFs we've changed to LFs.
+ /* Most FTP servers do not adjust their file SIZE response for CRLFs,
+ * so we will check to see if the discrepancy can be explained by the
+ * number of CRLFs we have changed to LFs.
*/
((data->req.size + data->state.crlf_conversions) !=
data->req.bytecount) &&
* ftp_pasv_verbose()
*
* This function only outputs some informationals about this second connection
- * when we've issued a PASV command before and thus we have connected to a
+ * when we have issued a PASV command before and thus we have connected to a
* possibly new IP address.
*
*/
* complete */
struct FTP *ftp = NULL;
- /* if the second connection isn't done yet, wait for it to have
+ /* if the second connection is not done yet, wait for it to have
* connected to the remote host. When using proxy tunneling, this
* means the tunnel needs to have been establish. However, we
* can not expect the remote host to talk to us in any way yet.
*completep = (int)complete;
- /* if we got an error or if we don't wait for a data connection return
+ /* if we got an error or if we do not wait for a data connection return
immediately */
if(result || !ftpc->wait_data_conn)
return result;
/* if we reach the end of the FTP state machine here, *complete will be
TRUE but so is ftpc->wait_data_conn, which says we need to wait for the
- data connection and therefore we're not actually complete */
+ data connection and therefore we are not actually complete */
*completep = 0;
}
if(ftp->transfer <= PPTRANSFER_INFO) {
- /* a transfer is about to take place, or if not a file name was given
- so we'll do a SIZE on it later and then we need the right TYPE first */
+ /* a transfer is about to take place, or if not a filename was given so we
+ will do a SIZE on it later and then we need the right TYPE first */
if(ftpc->wait_data_conn) {
bool serv_conned;
result = Curl_range(data);
if(result == CURLE_OK && data->req.maxdownload >= 0) {
- /* Don't check for successful transfer */
+ /* Do not check for successful transfer */
ftpc->dont_check = TRUE;
}
if(data->set.ftp_filemethod == FTPFILE_NOCWD)
data->set.ftp_filemethod = FTPFILE_MULTICWD;
- /* try to parse ftp url */
+ /* try to parse ftp URL */
result = ftp_parse_url_path(data);
if(result) {
goto fail;
if(result)
return result;
- /* we don't need the Curl_fileinfo of first file anymore */
+ /* we do not need the Curl_fileinfo of first file anymore */
Curl_llist_remove(&wildcard->filelist, wildcard->filelist.head, NULL);
if(wildcard->filelist.size == 0) { /* remains only one file to down. */
bad in any way, sending quit and waiting around here will make the
disconnect wait in vain and cause more problems than we need to.
- ftp_quit() will check the state of ftp->ctl_valid. If it's ok it
+ ftp_quit() will check the state of ftp->ctl_valid. If it is ok it
will try to send the QUIT command, otherwise it will just return.
*/
if(dead_connection)
}
ftpc->dirdepth = 1; /* we consider it to be a single dir */
- fileName = slashPos + 1; /* rest is file name */
+ fileName = slashPos + 1; /* rest is filename */
}
else
- fileName = rawPath; /* file name only (or empty) */
+ fileName = rawPath; /* filename only (or empty) */
break;
default: /* allow pretty much anything */
++compLen;
/* we skip empty path components, like "x//y" since the FTP command
- CWD requires a parameter and a non-existent parameter a) doesn't
+ CWD requires a parameter and a non-existent parameter a) does not
work on many servers and b) has no effect on the others. */
if(compLen > 0) {
char *comp = Curl_memdup0(curPos, compLen);
}
}
DEBUGASSERT((size_t)ftpc->dirdepth <= dirAlloc);
- fileName = curPos; /* the rest is the file name (or empty) */
+ fileName = curPos; /* the rest is the filename (or empty) */
}
break;
} /* switch */
we make it a NULL pointer */
if(data->state.upload && !ftpc->file && (ftp->transfer == PPTRANSFER_BODY)) {
- /* We need a file name when uploading. Return error! */
- failf(data, "Uploading to a URL without a file name");
+ /* We need a filename when uploading. Return error! */
+ failf(data, "Uploading to a URL without a filename");
free(rawPath);
return CURLE_URL_MALFORMAT;
}
/* no data to transfer */
Curl_xfer_setup_nop(data);
else if(!connected)
- /* since we didn't connect now, we want do_more to get called */
+ /* since we did not connect now, we want do_more to get called */
conn->bits.do_more = TRUE;
ftpc->ctl_valid = TRUE; /* seems good */
}
data->req.p.ftp = ftp;
- ftp->path = &data->state.up.path[1]; /* don't include the initial slash */
+ ftp->path = &data->state.up.path[1]; /* do not include the initial slash */
/* FTP URLs support an extension like ";type=<typecode>" that
- * we'll try to get now! */
+ * we will try to get now! */
type = strstr(ftp->path, ";type=");
if(!type)
FTP_STOR_PREQUOTE,
FTP_POSTQUOTE,
FTP_CWD, /* change dir */
- FTP_MKD, /* if the dir didn't exist */
+ FTP_MKD, /* if the dir did not exist */
FTP_MDTM, /* to figure out the datestamp */
FTP_TYPE, /* to set type when doing a head-like request */
FTP_LIST_TYPE, /* set type when about to do a dir list */
char *account;
char *alternative_to_user;
char *entrypath; /* the PWD reply when we logged on */
- char *file; /* url-decoded file name (or path) */
+ char *file; /* url-decoded filename (or path) */
char **dirs; /* realloc()ed array for path components */
char *newhost;
char *prevpath; /* url-decoded conn->path from the previous transfer */
int count1; /* general purpose counter for the state machine */
int count2; /* general purpose counter for the state machine */
int count3; /* general purpose counter for the state machine */
- /* newhost is the (allocated) IP addr or host name to connect the data
+ /* newhost is the (allocated) IP addr or hostname to connect the data
connection to */
unsigned short newport;
ftpstate state; /* always use ftp.c:state() to change state! */
return NULL;
#elif defined(_WIN32)
/* This uses Windows API instead of C runtime getenv() to get the environment
- variable since some changes aren't always visible to the latter. #4774 */
+ variable since some changes are not always visible to the latter. #4774 */
char *buf = NULL;
char *tmp;
DWORD bufsize;
buf = tmp;
bufsize = rc;
- /* It's possible for rc to be 0 if the variable was found but empty.
- Since getenv doesn't make that distinction we ignore it as well. */
+ /* It is possible for rc to be 0 if the variable was found but empty.
+ Since getenv does not make that distinction we ignore it as well. */
rc = GetEnvironmentVariableA(variable, buf, bufsize);
if(!rc || rc == bufsize || rc > max) {
free(buf);
if(!timeout_ms)
timeout_ms = TIMEDIFF_T_MAX;
- /* Don't busyloop. The entire loop thing is a work-around as it causes a
+ /* Do not busyloop. The entire loop thing is a work-around as it causes a
BLOCKING behavior which is a NO-NO. This function should rather be
- split up in a do and a doing piece where the pieces that aren't
+ split up in a do and a doing piece where the pieces that are not
possible to send now will be sent in the doing function repeatedly
until the entire request is sent.
*/
/* Insert the data in the hash. If there already was a match in the hash, that
* data is replaced. This function also "lazily" allocates the table if
- * needed, as it isn't done in the _init function (anymore).
+ * needed, as it is not done in the _init function (anymore).
*
* @unittest: 1305
* @unittest: 1602
break;
}
}
- if(!e) /* this shouldn't happen */
+ if(!e) /* this should not happen */
return CURLHE_MISSING;
}
/* this is the name we want */
/* line folding, append value to the previous header's value */
return unfold_value(data, header, hlen);
else {
- /* Can't unfold without a previous header. Instead of erroring, just
+ /* Cannot unfold without a previous header. Instead of erroring, just
pass the leading blanks. */
while(hlen && ISBLANK(*header)) {
header++;
* Generic HMAC algorithm.
*
* This module computes HMAC digests based on any hash function. Parameters
- * and computing procedures are set-up dynamically at HMAC computation context
+ * and computing procedures are set up dynamically at HMAC computation context
* initialization.
*/
* source file are these:
*
* CURLRES_IPV6 - this host has getaddrinfo() and family, and thus we use
- * that. The host may not be able to resolve IPv6, but we don't really have to
- * take that into account. Hosts that aren't IPv6-enabled have CURLRES_IPV4
+ * that. The host may not be able to resolve IPv6, but we do not really have to
+ * take that into account. Hosts that are not IPv6-enabled have CURLRES_IPV4
* defined.
*
* CURLRES_ARES - is defined if libcurl is built to use c-ares for
int timeout = data->set.dns_cache_timeout;
if(!data->dns.hostcache)
- /* NULL hostcache means we can't do it */
+ /* NULL hostcache means we cannot do it */
return;
if(data->share)
size_t entry_len = create_hostcache_id(hostname, 0, port,
entry_id, sizeof(entry_id));
- /* See if it's already in our dns cache */
+ /* See if it is already in our dns cache */
dns = Curl_hash_pick(data->dns.hostcache, entry_id, entry_len + 1);
/* No entry found in cache, check if we might have a wildcard entry */
if(!dns && data->state.wildcard_resolve) {
entry_len = create_hostcache_id("*", 1, port, entry_id, sizeof(entry_id));
- /* See if it's already in our dns cache */
+ /* See if it is already in our dns cache */
dns = Curl_hash_pick(data->dns.hostcache, entry_id, entry_len + 1);
}
}
if(!found) {
- infof(data, "Hostname in DNS cache doesn't have needed family, zapped");
+ infof(data, "Hostname in DNS cache does not have needed family, zapped");
dns = NULL; /* the memory deallocation is being handled by the hash */
Curl_hash_delete(data->dns.hostcache, entry_id, entry_len + 1);
}
* Returns the Curl_dns_entry entry pointer or NULL if not in the cache.
*
* The returned data *MUST* be "unlocked" with Curl_resolv_unlock() after
- * use, or we'll leak memory!
+ * use, or we will leak memory!
*/
struct Curl_dns_entry *
Curl_fetch_addr(struct Curl_easy *data,
bool Curl_ipv6works(struct Curl_easy *data)
{
if(data) {
- /* the nature of most system is that IPv6 status doesn't come and go
+ /* the nature of most systems is that IPv6 status does not come and go
during a program's lifetime so we only probe the first time and then we
have the info kept for fast reuse */
DEBUGASSERT(data);
/* probe to see if we have a working IPv6 stack */
curl_socket_t s = socket(PF_INET6, SOCK_DGRAM, 0);
if(s == CURL_SOCKET_BAD)
- /* an IPv6 address was requested but we can't get/use one */
+ /* an IPv6 address was requested but we cannot get/use one */
ipv6_works = 0;
else {
ipv6_works = 1;
/*
* Curl_resolv() is the main name resolve function within libcurl. It resolves
* a name and returns a pointer to the entry in the 'entry' argument (if one
- * is provided). This function might return immediately if we're using asynch
+ * is provided). This function might return immediately if we are using asynch
* resolves. See the return codes.
*
* The cache entry we return will get its 'inuse' counter increased when this
- * function is used. You MUST call Curl_resolv_unlock() later (when you're
+ * function is used. You MUST call Curl_resolv_unlock() later (when you are
* done using this struct) to decrease the counter again.
*
* Return codes:
if(respwait) {
/* the response to our resolve call will come asynchronously at
a later time, good or bad */
- /* First, check that we haven't received the info by now */
+ /* First, check that we have not received the info by now */
result = Curl_resolv_check(data, &dns);
if(result) /* error detected */
return CURLRESOLV_ERROR;
#ifdef USE_ALARM_TIMEOUT
/*
* This signal handler jumps back into the main libcurl code and continues
- * execution. This effectively causes the remainder of the application to run
+ * execution. This effectively causes the remainder of the application to run
* within a signal handler which is nonportable and could lead to problems.
*/
CURL_NORETURN static
/*
* Curl_resolv_timeout() is the same as Curl_resolv() but specifies a
- * timeout. This function might return immediately if we're using asynch
+ * timeout. This function might return immediately if we are using asynch
* resolves. See the return codes.
*
* The cache entry we return will get its 'inuse' counter increased when this
- * function is used. You MUST call Curl_resolv_unlock() later (when you're
+ * function is used. You MUST call Curl_resolv_unlock() later (when you are
* done using this struct) to decrease the counter again.
*
* If built with a synchronous resolver and use of signals is not
will generate a signal and we will siglongjmp() from that here.
This technique has problems (see alarmfunc).
This should be the last thing we do before calling Curl_resolv(),
- as otherwise we'd have to worry about variables that get modified
+ as otherwise we would have to worry about variables that get modified
before we invoke Curl_resolv() (and thus use "volatile"). */
curl_simple_lock_lock(&curl_jmpenv_lock);
keep_copysig = TRUE; /* yes, we have a copy */
sigact.sa_handler = alarmfunc;
#ifdef SA_RESTART
- /* HPUX doesn't have SA_RESTART but defaults to that behavior! */
+ /* HPUX does not have SA_RESTART but defaults to that behavior! */
sigact.sa_flags &= ~SA_RESTART;
#endif
/* now set the new struct */
((alarm_set >= 0x80000000) && (prev_alarm < 0x80000000)) ) {
/* if the alarm time-left reached zero or turned "negative" (counted
with unsigned values), we should fire off a SIGALRM here, but we
- won't, and zero would be to switch it off so we never set it to
+ will not, and zero would be to switch it off so we never set it to
less than 1! */
alarm(1);
rc = CURLRESOLV_TIMEDOUT;
if(data->share)
Curl_share_lock(data, CURL_LOCK_DATA_DNS, CURL_LOCK_ACCESS_SINGLE);
- /* delete entry, ignore if it didn't exist */
+ /* delete entry, ignore if it did not exist */
Curl_hash_delete(data->dns.hostcache, entry_id, entry_len + 1);
if(data->share)
if(data->share)
Curl_share_lock(data, CURL_LOCK_DATA_DNS, CURL_LOCK_ACCESS_SINGLE);
- /* See if it's already in our dns cache */
+ /* See if it is already in our dns cache */
dns = Curl_hash_pick(data->dns.hostcache, entry_id, entry_len + 1);
if(dns) {
if(!result)
result = Curl_dyn_add(d, buf);
if(result) {
- infof(data, "too many IP, can't show");
+ infof(data, "too many IP, cannot show");
goto fail;
}
}
char *alpns; /* keytag = 1 */
bool no_def_alpn; /* keytag = 2 */
/*
- * we don't support ports (keytag = 3) as we don't support
+ * we do not support ports (keytag = 3) as we do not support
* port-switching yet
*/
unsigned char *ipv4hints; /* keytag = 4 */
#ifdef USE_HTTPSRR
struct Curl_https_rrinfo *hinfo;
#endif
- /* timestamp == 0 -- permanent CURLOPT_RESOLVE entry (doesn't time out) */
+ /* timestamp == 0 -- permanent CURLOPT_RESOLVE entry (does not time out) */
time_t timestamp;
/* use-counter, use Curl_resolv_unlock to release reference */
long inuse;
* and port.
*
* The returned data *MUST* be "unlocked" with Curl_resolv_unlock() after
- * use, or we'll leak memory!
+ * use, or we will leak memory!
*/
/* return codes */
enum resolve_t {
* Returns the Curl_dns_entry entry pointer or NULL if not in the cache.
*
* The returned data *MUST* be "unlocked" with Curl_resolv_unlock() after
- * use, or we'll leak memory!
+ * use, or we will leak memory!
*/
struct Curl_dns_entry *
Curl_fetch_addr(struct Curl_easy *data,
{
(void)data;
if(conn->ip_version == CURL_IPRESOLVE_V6)
- /* An IPv6 address was requested and we can't get/use one */
+ /* An IPv6 address was requested and we cannot get/use one */
return FALSE;
return TRUE; /* OK, proceed */
* small. Previous versions are known to return ERANGE for the same
* problem.
*
- * This wouldn't be such a big problem if older versions wouldn't
- * sometimes return EAGAIN on a common failure case. Alas, we can't
+ * This would not be such a big problem if older versions would not
+ * sometimes return EAGAIN on a common failure case. Alas, we cannot
* assume that EAGAIN *or* ERANGE means ERANGE for any given version of
* glibc.
*
* gethostbyname_r() in glibc:
*
* In glibc 2.2.5 the interface is different (this has also been
- * discovered in glibc 2.1.1-6 as shipped by Redhat 6). What I can't
+ * discovered in glibc 2.1.1-6 as shipped by Redhat 6). What I cannot
* explain, is that tests performed on glibc 2.2.4-34 and 2.2.4-32
- * (shipped/upgraded by Redhat 7.2) don't show this behavior!
+ * (shipped/upgraded by Redhat 7.2) do not show this behavior!
*
* In this "buggy" version, the return code is -1 on error and 'errno'
* is set to the ERANGE or EAGAIN code. Note that 'errno' is not a
#elif defined(HAVE_GETHOSTBYNAME_R_3)
/* AIX, Digital Unix/Tru64, HPUX 10, more? */
- /* For AIX 4.3 or later, we don't use gethostbyname_r() at all, because of
+ /* For AIX 4.3 or later, we do not use gethostbyname_r() at all, because of
* the plain fact that it does not return unique full buffers on each
* call, but instead several of the pointers in the hostent structs will
* point to the same actual data! This have the unfortunate down-side that
*
* Troels Walsted Hansen helped us work this out on March 3rd, 2003.
*
- * [*] = much later we've found out that it isn't at all "completely
+ * [*] = much later we have found out that it is not at all "completely
* thread-safe", but at least the gethostbyname() function is.
*/
(struct hostent *)buf,
(struct hostent_data *)((char *)buf +
sizeof(struct hostent)));
- h_errnop = SOCKERRNO; /* we don't deal with this, but set it anyway */
+ h_errnop = SOCKERRNO; /* we do not deal with this, but set it anyway */
}
else
res = -1; /* failure, too smallish buffer size */
h = buf; /* result expected in h */
/* This is the worst kind of the different gethostbyname_r() interfaces.
- * Since we don't know how big buffer this particular lookup required,
- * we can't realloc down the huge alloc without doing closer analysis of
+ * Since we do not know how big buffer this particular lookup required,
+ * we cannot realloc down the huge alloc without doing closer analysis of
* the returned data. Thus, we always use CURL_HOSTENT_SIZE for every
* name lookup. Fixing this would require an extra malloc() and then
* calling Curl_addrinfo_copy() that subsequent realloc()s down the new
#else /* (HAVE_GETADDRINFO && HAVE_GETADDRINFO_THREADSAFE) ||
HAVE_GETHOSTBYNAME_R */
/*
- * Here is code for platforms that don't have a thread safe
+ * Here is code for platforms that do not have a thread safe
* getaddrinfo() nor gethostbyname_r() function or for which
* gethostbyname() is the preferred one.
*/
}
/*
- * Return TRUE if the given host name is currently an HSTS one.
+ * Return TRUE if the given hostname is currently an HSTS one.
*
* The 'subdomain' argument tells the function if subdomain matching should be
* attempted.
file = h->filename;
if((h->flags & CURLHSTS_READONLYFILE) || !file || !file[0])
- /* marked as read-only, no file or zero length file name */
+ /* marked as read-only, no file or zero length filename */
goto skipsave;
result = Curl_fopen(data, file, &out, &tempstore);
free(tempstore);
skipsave:
if(data->set.hsts_write) {
- /* if there's a write callback */
+ /* if there is a write callback */
struct curl_index i; /* count */
i.total = h->list.size;
i.index = 0;
if(!e)
result = hsts_create(h, p, subdomain, expires);
else {
- /* the same host name, use the largest expire time */
+ /* the same hostname, use the largest expire time */
if(expires > e->expires)
e->expires = expires;
}
CURLcode result = CURLE_OK;
FILE *fp;
- /* we need a private copy of the file name so that the hsts cache file
+ /* we need a private copy of the filename so that the hsts cache file
name survives an easy handle reset */
free(h->filename);
h->filename = strdup(file);
curl_off_t expires; /* the timestamp of this entry's expiry */
};
-/* The HSTS cache. Needs to be able to tailmatch host names. */
+/* The HSTS cache. Needs to be able to tailmatch hostnames. */
struct hsts {
struct Curl_llist list;
char *filename;
curl_off_t upload_remain = (expectsend >= 0)? (expectsend - bytessent) : -1;
bool little_upload_remains = (upload_remain >= 0 && upload_remain < 2000);
bool needs_rewind = Curl_creader_needs_rewind(data);
- /* By default, we'd like to abort the transfer when little or
- * unknown amount remains. But this may be overridden by authentications
- * further below! */
+ /* By default, we would like to abort the transfer when little or unknown
+ * amount remains. This may be overridden by authentications further
+ * below! */
bool abort_upload = (!data->req.upload_done && !little_upload_remains);
const char *ongoing_auth = NULL;
/* We decided to abort the ongoing transfer */
streamclose(conn, "Mid-auth HTTP and much data left to send");
/* FIXME: questionable manipulation here, can we do this differently? */
- data->req.size = 0; /* don't download any more than 0 bytes */
+ data->req.size = 0; /* do not download any more than 0 bytes */
}
return CURLE_OK;
}
/* no (known) authentication available,
authentication is not "done" yet and
no authentication seems to be required and
- we didn't try HEAD or GET */
+ we did not try HEAD or GET */
if((data->state.httpreq != HTTPREQ_GET) &&
(data->state.httpreq != HTTPREQ_HEAD)) {
data->req.newurl = strdup(data->state.url); /* clone URL */
if(authhost->want && !authhost->picked)
/* The app has selected one or more methods, but none has been picked
so far by a server round-trip. Then we set the picked one to the
- want one, and if this is one single bit it'll be used instantly. */
+ want one, and if this is one single bit it will be used instantly. */
authhost->picked = authhost->want;
if(authproxy->want && !authproxy->picked)
/* The app has selected one or more methods, but none has been picked so
far by a proxy round-trip. Then we set the picked one to the want one,
- and if this is one single bit it'll be used instantly. */
+ and if this is one single bit it will be used instantly. */
authproxy->picked = authproxy->want;
#ifndef CURL_DISABLE_PROXY
#else
(void)proxytunnel;
#endif /* CURL_DISABLE_PROXY */
- /* we have no proxy so let's pretend we're done authenticating
+ /* we have no proxy so let's pretend we are done authenticating
with it */
authproxy->done = TRUE;
authp->avail |= CURLAUTH_DIGEST;
/* We call this function on input Digest headers even if Digest
- * authentication isn't activated yet, as we need to store the
+ * authentication is not activated yet, as we need to store the
* incoming data from this header in case we are going to use
* Digest */
result = Curl_input_digest(data, proxy, auth);
authp->avail |= CURLAUTH_BASIC;
if(authp->picked == CURLAUTH_BASIC) {
/* We asked for Basic authentication but got a 40X back
- anyway, which basically means our name+password isn't
+ anyway, which basically means our name+password is not
valid. */
authp->avail = CURLAUTH_NONE;
infof(data, "Authentication problem. Ignoring this.");
authp->avail |= CURLAUTH_BEARER;
if(authp->picked == CURLAUTH_BEARER) {
/* We asked for Bearer authentication but got a 40X back
- anyway, which basically means our token isn't valid. */
+ anyway, which basically means our token is not valid. */
authp->avail = CURLAUTH_NONE;
infof(data, "Authentication problem. Ignoring this.");
data->state.authproblem = TRUE;
/* there may be multiple methods on one line, so keep reading */
while(*auth && *auth != ',') /* read up to the next comma */
auth++;
- if(*auth == ',') /* if we're on a comma, skip it */
+ if(*auth == ',') /* if we are on a comma, skip it */
auth++;
while(*auth && ISSPACE(*auth))
auth++;
DEBUGASSERT(data->conn);
/*
- ** If we haven't been asked to fail on error,
- ** don't fail.
+ ** If we have not been asked to fail on error,
+ ** do not fail.
*/
if(!data->set.http_fail_on_error)
return FALSE;
return FALSE;
/*
- ** Any code >= 400 that's not 401 or 407 is always
+ ** Any code >= 400 that is not 401 or 407 is always
** a terminal error
*/
if((httpcode != 401) && (httpcode != 407))
DEBUGASSERT((httpcode == 401) || (httpcode == 407));
/*
- ** Examine the current authentication state to see if this
- ** is an error. The idea is for this function to get
- ** called after processing all the headers in a response
- ** message. So, if we've been to asked to authenticate a
- ** particular stage, and we've done it, we're OK. But, if
- ** we're already completely authenticated, it's not OK to
- ** get another 401 or 407.
+ ** Examine the current authentication state to see if this is an error. The
+ ** idea is for this function to get called after processing all the headers
+ ** in a response message. So, if we have been asked to authenticate a
+ ** particular stage, and we have done it, we are OK. If we are already
+ ** completely authenticated, it is not OK to get another 401 or 407.
**
- ** It is possible for authentication to go stale such that
- ** the client needs to reauthenticate. Once that info is
- ** available, use it here.
+ ** It is possible for authentication to go stale such that the client needs
+ ** to reauthenticate. Once that info is available, use it here.
*/
/*
- ** Either we're not authenticating, or we're supposed to
- ** be authenticating something else. This is an error.
+ ** Either we are not authenticating, or we are supposed to be authenticating
+ ** something else. This is an error.
*/
if((httpcode == 401) && !data->state.aptr.user)
return TRUE;
DEBUGASSERT(content);
if(!strncasecompare(headerline, header, hlen))
- return FALSE; /* doesn't start with header */
+ return FALSE; /* does not start with header */
/* pass the header */
start = &headerline[hlen];
/* find the end of the header line */
end = strchr(start, '\r'); /* lines end with CRLF */
if(!end) {
- /* in case there's a non-standard compliant line here */
+ /* in case there is a non-standard compliant line here */
end = strchr(start, '\n');
if(!end)
- /* hm, there's no line ending here, use the zero byte! */
+ /* hm, there is no line ending here, use the zero byte! */
end = strchr(start, '\0');
}
}
/* this returns the socket to wait for in the DO and DOING state for the multi
- interface and then we're always _sending_ a request and thus we wait for
+ interface and then we are always _sending_ a request and thus we wait for
the single socket to become writable only */
int Curl_http_getsock_do(struct Curl_easy *data,
struct connectdata *conn,
{
struct connectdata *conn = data->conn;
- /* Clear multipass flag. If authentication isn't done yet, then it will get
+ /* Clear multipass flag. If authentication is not done yet, then it will get
* a chance to be set back to true when we output the next auth header */
data->state.authhost.multipass = FALSE;
data->state.authproxy.multipass = FALSE;
(data->req.bytecount +
data->req.headerbytecount -
data->req.deductheadercount) <= 0) {
- /* If this connection isn't simply closed to be retried, AND nothing was
- read from the HTTP server (that counts), this can't be right so we
+ /* If this connection is not simply closed to be retried, AND nothing was
+ read from the HTTP server (that counts), this cannot be right so we
return an error here */
failf(data, "Empty reply from server");
/* Mark it as closed to avoid the "left intact" message */
DEBUGASSERT(name && value);
if(data->state.aptr.host &&
- /* a Host: header was sent already, don't pass on any custom Host:
+ /* a Host: header was sent already, do not pass on any custom Host:
header as that will produce *two* in the same request! */
hd_name_eq(name, namelen, STRCONST("Host:")))
;
hd_name_eq(name, namelen, STRCONST("Content-Type:")))
;
else if(data->req.authneg &&
- /* while doing auth neg, don't allow the custom length since
+ /* while doing auth neg, do not allow the custom length since
we will force length zero then */
hd_name_eq(name, namelen, STRCONST("Content-Length:")))
;
else if(data->state.aptr.te &&
- /* when asking for Transfer-Encoding, don't pass on a custom
+ /* when asking for Transfer-Encoding, do not pass on a custom
Connection: */
hd_name_eq(name, namelen, STRCONST("Connection:")))
;
else if((conn->httpversion >= 20) &&
hd_name_eq(name, namelen, STRCONST("Transfer-Encoding:")))
- /* HTTP/2 doesn't support chunked requests */
+ /* HTTP/2 does not support chunked requests */
;
else if((hd_name_eq(name, namelen, STRCONST("Authorization:")) ||
hd_name_eq(name, namelen, STRCONST("Cookie:"))) &&
char *compare = semicolonp ? semicolonp : headers->data;
if(data->state.aptr.host &&
- /* a Host: header was sent already, don't pass on any custom Host:
- header as that will produce *two* in the same request! */
+ /* a Host: header was sent already, do not pass on any custom
+ Host: header as that will produce *two* in the same
+ request! */
checkprefix("Host:", compare))
;
else if(data->state.httpreq == HTTPREQ_POST_FORM &&
checkprefix("Content-Type:", compare))
;
else if(data->req.authneg &&
- /* while doing auth neg, don't allow the custom length since
+ /* while doing auth neg, do not allow the custom length since
we will force length zero then */
checkprefix("Content-Length:", compare))
;
else if(data->state.aptr.te &&
- /* when asking for Transfer-Encoding, don't pass on a custom
+ /* when asking for Transfer-Encoding, do not pass on a custom
Connection: */
checkprefix("Connection:", compare))
;
else if((conn->httpversion >= 20) &&
checkprefix("Transfer-Encoding:", compare))
- /* HTTP/2 doesn't support chunked requests */
+ /* HTTP/2 does not support chunked requests */
;
else if((checkprefix("Authorization:", compare) ||
checkprefix("Cookie:", compare)) &&
if(ptr && (!data->state.this_is_a_follow ||
strcasecompare(data->state.first_host, conn->host.name))) {
#if !defined(CURL_DISABLE_COOKIES)
- /* If we have a given custom Host: header, we extract the host name in
+ /* If we have a given custom Host: header, we extract the hostname in
order to possibly use it for cookie reasons later on. We only allow the
custom Host: header if this is NOT a redirect, as setting Host: in the
- redirected request is being out on thin ice. Except if the host name
+ redirected request is being out on thin ice. Except if the hostname
is the same as the first one! */
char *cookiehost = Curl_copy_header_value(ptr);
if(!cookiehost)
}
}
else {
- /* When building Host: headers, we must put the host name within
- [brackets] if the host name is a plain IPv6-address. RFC2732-style. */
+ /* When building Host: headers, we must put the hostname within
+ [brackets] if the hostname is a plain IPv6-address. RFC2732-style. */
const char *host = conn->host.name;
if(((conn->given->protocol&(CURLPROTO_HTTPS|CURLPROTO_WSS)) &&
(conn->remote_port == PORT_HTTPS)) ||
((conn->given->protocol&(CURLPROTO_HTTP|CURLPROTO_WS)) &&
(conn->remote_port == PORT_HTTP)) )
- /* if(HTTPS on port 443) OR (HTTP on port 80) then don't include
+ /* if(HTTPS on port 443) OR (HTTP on port 80) then do not include
the port number in the host string */
aptr->host = aprintf("Host: %s%s%s\r\n", conn->bits.ipv6_ip?"[":"",
host, conn->bits.ipv6_ip?"]":"");
conn->remote_port);
if(!aptr->host)
- /* without Host: we can't make a nice request */
+ /* without Host: we cannot make a nice request */
return CURLE_OUT_OF_MEMORY;
}
return CURLE_OK;
/* The path sent to the proxy is in fact the entire URL. But if the remote
host is a IDN-name, we must make sure that the request we produce only
- uses the encoded host name! */
+ uses the encoded hostname! */
/* and no fragment part */
CURLUcode uc;
}
if(strcasecompare("http", data->state.up.scheme)) {
- /* when getting HTTP, we don't want the userinfo the URL */
+ /* when getting HTTP, we do not want the userinfo in the URL */
uc = curl_url_set(h, CURLUPART_USER, NULL, 0);
if(uc) {
curl_url_cleanup(h);
curl_url_cleanup(h);
- /* target or url */
+ /* target or URL */
result = Curl_dyn_add(r, data->set.str[STRING_TARGET]?
data->set.str[STRING_TARGET]:url);
free(url);
if(data->state.resume_from < 0) {
/*
* This is meant to get the size of the present remote-file by itself.
- * We don't support this now. Bail out!
+ * We do not support this now. Bail out!
*/
data->state.resume_from = 0;
}
if(data->req.upgr101 != UPGR101_INIT)
return CURLE_OK;
- /* For really small puts we don't use Expect: headers at all, and for
+ /* For really small puts we do not use Expect: headers at all, and for
the somewhat bigger ones we allow the app to disable it. Just make
sure that the expect100header is always set to the preferred value
here. */
case HTTPREQ_POST_MIME:
#endif
/* We only set Content-Length and allow a custom Content-Length if
- we don't upload data chunked, as RFC2616 forbids us to set both
+ we do not upload data chunked, as RFC2616 forbids us to set both
kinds of headers (Transfer-Encoding: chunked and Content-Length).
We do not override a custom "Content-Length" header, but during
authentication negotiation that header is suppressed.
(data->req.authneg ||
!Curl_checkheaders(data, STRCONST("Content-Length")))) {
/* we allow replacing this header if not during auth negotiation,
- although it isn't very wise to actually set your own */
+ although it is not very wise to actually set your own */
result = Curl_dyn_addf(r,
"Content-Length: %" CURL_FORMAT_CURL_OFF_T
"\r\n", req_clen);
{
if(data->state.use_range) {
/*
- * A range is selected. We use different headers whether we're downloading
+ * A range is selected. We use different headers whether we are downloading
* or uploading and we always let customized headers override our internal
* ones if any such are specified.
*/
free(data->state.aptr.rangeline);
if(data->set.set_resume_from < 0) {
- /* Upload resume was asked for, but we don't know the size of the
+ /* Upload resume was asked for, but we do not know the size of the
remote part so we tell the server (and act accordingly) that we
upload the whole file (again) */
data->state.aptr.rangeline =
if(data->req.newurl) {
if(conn->bits.close) {
/* Abort after the headers if "follow Location" is set
- and we're set to close anyway. */
+ and we are set to close anyway. */
k->keepon &= ~KEEP_RECV;
k->done = TRUE;
return CURLE_OK;
}
- /* We have a new url to load, but since we want to be able to reuse this
+ /* We have a new URL to load, but since we want to be able to reuse this
connection properly, we read the full response in "ignore more" */
k->ignorebody = TRUE;
infof(data, "Ignoring the response-body");
if(k->size == data->state.resume_from) {
/* The resume point is at the end of file, consider this fine even if it
- doesn't allow resume from here. */
+ does not allow resume from here. */
infof(data, "The entire document is already downloaded");
streamclose(conn, "already downloaded");
/* Abort download */
return CURLE_OK;
}
- /* we wanted to resume a download, although the server doesn't seem to
- * support this and we did this with a GET (if it wasn't a GET we did a
+ /* we wanted to resume a download, although the server does not seem to
+ * support this and we did this with a GET (if it was not a GET we did a
* POST or PUT resume) */
- failf(data, "HTTP server doesn't seem to support "
+ failf(data, "HTTP server does not seem to support "
"byte ranges. Cannot resume.");
return CURLE_RANGE_ERROR;
}
if(!Curl_meets_timecondition(data, k->timeofdoc)) {
k->done = TRUE;
- /* We're simulating an HTTP 304 from server so we return
+ /* We are simulating an HTTP 304 from server so we return
what should have been returned from the server */
data->info.httpcode = 304;
infof(data, "Simulate an HTTP 304 response");
/* When we are to insert a TE: header in the request, we must also insert
TE in a Connection: header, so we need to merge the custom provided
Connection: header and prevent the original to get sent. Note that if
- the user has inserted his/her own TE: header we don't do this magic
+ the user has inserted his/her own TE: header we do not do this magic
but then assume that the user will handle it all! */
char *cptr = Curl_checkheaders(data, STRCONST("Connection"));
#define TE_HEADER "TE: gzip\r\n"
if(!(conn->handler->flags&PROTOPT_SSL) &&
conn->httpversion < 20 &&
(data->state.httpwant == CURL_HTTP_VERSION_2)) {
- /* append HTTP2 upgrade magic stuff to the HTTP request if it isn't done
+ /* append HTTP2 upgrade magic stuff to the HTTP request if it is not done
over SSL */
result = Curl_http2_request_upgrade(&req, data);
if(result) {
* Process Content-Encoding. Look for the values: identity,
* gzip, deflate, compress, x-gzip and x-compress. x-gzip and
* x-compress are the same as gzip and compress. (Sec 3.5 RFC
- * 2616). zlib cannot handle compress. However, errors are
+ * 2616). zlib cannot handle compress. However, errors are
* handled further down when the response body is processed
*/
return Curl_build_unencoding_stack(data, v, FALSE);
/*
* An HTTP/1.0 reply with the 'Connection: keep-alive' line
* tells us the connection will be kept alive for our
- * pleasure. Default action for 1.0 is to close.
+ * pleasure. Default action for 1.0 is to close.
*
* [RFC2068, section 19.7.1] */
connkeep(conn, "Connection keep-alive");
* connection will be kept alive for our pleasure.
* Default action for 1.0 is to close.
*/
- connkeep(conn, "Proxy-Connection keep-alive"); /* don't close */
+ connkeep(conn, "Proxy-Connection keep-alive"); /* do not close */
infof(data, "HTTP/1.0 proxy connection set to keep alive");
}
else if((conn->httpversion == 11) && conn->bits.httpproxy &&
HD_IS_AND_SAYS(hd, hdlen, "Proxy-Connection:", "close")) {
/*
- * We get an HTTP/1.1 response from a proxy and it says it'll
+ * We get an HTTP/1.1 response from a proxy and it says it will
* close down after this transfer.
*/
connclose(conn, "Proxy-Connection: asked to close after done");
HD_VAL(hd, hdlen, "Set-Cookie:") : NULL;
if(v) {
/* If there is a custom-set Host: name, use it here, or else use
- * real peer host name. */
+ * real peer hostname. */
const char *host = data->state.aptr.cookiehost?
data->state.aptr.cookiehost:conn->host.name;
const bool secure_context =
if(result)
return result;
if(!k->chunk && data->set.http_transfer_encoding) {
- /* if this isn't chunked, only close can signal the end of this
+ /* if this is not chunked, only close can signal the end of this
* transfer as Content-Length is said not to be trusted for
* transfer-encoding! */
connclose(conn, "HTTP/1.1 transfer-encoding without chunks");
data->state.httpversion = (unsigned char)k->httpversion;
/*
- * This code executes as part of processing the header. As a
- * result, it's not totally clear how to interpret the
+ * This code executes as part of processing the header. As a
+ * result, it is not totally clear how to interpret the
* response code yet as that depends on what other headers may
- * be present. 401 and 407 may be errors, but may be OK
- * depending on how authentication is working. Other codes
+ * be present. 401 and 407 may be errors, but may be OK
+ * depending on how authentication is working. Other codes
* are definitely errors, so give up here.
*/
if(data->state.resume_from && data->state.httpreq == HTTPREQ_GET &&
}
/* Content-Length must be ignored if any Transfer-Encoding is present in the
- response. Refer to RFC 7230 section 3.3.3 and RFC2616 section 4.4. This is
+ response. Refer to RFC 7230 section 3.3.3 and RFC2616 section 4.4. This is
figured out here after all headers have been received but before the final
call to the user's header callback, so that a valid content length can be
retrieved by the user in the final call. */
/* the first "header" is the status-line and it has no colon */
return CURLE_OK;
if(((hd[0] == ' ') || (hd[0] == '\t')) && k->headerline > 2)
- /* line folding, can't happen on line 2 */
+ /* line folding, cannot happen on line 2 */
;
else {
ptr = memchr(hd, ':', hdlen);
case HTTPREQ_POST_MIME:
/* We got an error response. If this happened before the whole
* request body has been sent we stop sending and mark the
- * connection for closure after we've read the entire response.
+ * connection for closure after we have read the entire response.
*/
if(!Curl_req_done_sending(data)) {
if((k->httpcode == 417) && Curl_http_exp100_is_selected(data)) {
k->download_done = TRUE;
/* If max download size is *zero* (nothing) we already have
- nothing and can safely return ok now! But for HTTP/2, we'd
+ nothing and can safely return ok now! But for HTTP/2, we would
like to call http2_handle_stream_close to properly close a
- stream. In order to do this, we keep reading until we
+ stream. In order to do this, we keep reading until we
close the stream. */
if(0 == k->maxdownload
&& !Curl_conn_is_http2(data, conn, FIRSTSOCKET)
or else we consider this to be the body right away! */
bool fine_statusline = FALSE;
- k->httpversion = 0; /* Don't know yet */
+ k->httpversion = 0; /* Do not know yet */
if(data->conn->handler->protocol & PROTO_FAMILY_HTTP) {
/*
* https://datatracker.ietf.org/doc/html/rfc7230#section-3.1.2
*
* The response code is always a three-digit number in HTTP as the spec
* says. We allow any three-digit number here, but we cannot make
- * guarantees on future behaviors since it isn't within the protocol.
+ * guarantees on future behaviors since it is not within the protocol.
*/
const char *p = hd;
*eos = FALSE;
return CURLE_OK;
}
- /* we've waited long enough, continue anyway */
+ /* we have waited long enough, continue anyway */
http_exp100_continue(data, reader);
infof(data, "Done waiting for 100-continue");
FALLTHROUGH();
selected to use no auth at all. Ie, we actively select no auth, as opposed
to not having one selected. The other CURLAUTH_* defines are present in the
public curl/curl.h header. */
-#define CURLAUTH_PICKNONE (1<<30) /* don't use auth */
+#define CURLAUTH_PICKNONE (1<<30) /* do not use auth */
/* MAX_INITIAL_POST_SIZE indicates the number of bytes that will make the POST
data get included in the initial data chunk sent to the server. If the
};
/**
- * Create a HTTP request struct.
+ * Create an HTTP request struct.
*/
CURLcode Curl_http_req_make(struct httpreq **preq,
const char *method, size_t m_len,
};
/**
- * Create a HTTP response struct.
+ * Create an HTTP response struct.
*/
CURLcode Curl_http_resp_make(struct http_resp **presp,
int status,
/* spare chunks we keep for a full window */
#define H2_STREAM_POOL_SPARES (H2_STREAM_WINDOW_SIZE / H2_CHUNK_SIZE)
-/* We need to accommodate the max number of streams with their window
- * sizes on the overall connection. Streams might become PAUSED which
- * will block their received QUOTA in the connection window. And if we
- * run out of space, the server is blocked from sending us any data.
- * See #10988 for an issue with this. */
+/* We need to accommodate the max number of streams with their window sizes on
+ * the overall connection. Streams might become PAUSED which will block their
+ * received QUOTA in the connection window. If we run out of space, the server
+ * is blocked from sending us any data. See #10988 for an issue with this. */
#define HTTP2_HUGE_WINDOW_SIZE (100 * H2_STREAM_WINDOW_SIZE)
#define H2_SETTINGS_IV_LEN 3
return FALSE;
if(*input_pending) {
- /* This happens before we've sent off a request and the connection is
- not in use by any other transfer, there shouldn't be any data here,
+ /* This happens before we have sent off a request and the connection is
+ not in use by any other transfer, there should not be any data here,
only "protocol frames" */
CURLcode result;
ssize_t nread = -1;
break;
case NGHTTP2_HEADERS:
if(stream->bodystarted) {
- /* Only valid HEADERS after body started is trailer HEADERS. We
+ /* Only valid HEADERS after body started is trailer HEADERS. We
buffer them in on_header callback. */
break;
}
if(stream->error == NGHTTP2_REFUSED_STREAM) {
CURL_TRC_CF(data, cf, "[%d] REFUSED_STREAM, try again on a new "
"connection", stream->id);
- connclose(cf->conn, "REFUSED_STREAM"); /* don't use this anymore */
+ connclose(cf->conn, "REFUSED_STREAM"); /* do not use this anymore */
data->state.refused_stream = TRUE;
*err = CURLE_RECV_ERROR; /* trigger Curl_retry_request() later */
return -1;
}
/*
- * Check if there's been an update in the priority /
+ * Check if there has been an update in the priority /
* dependency settings and if so it submits a PRIORITY frame with the updated
* info.
* Flush any out data pending in the network buffer.
out:
result = h2_progress_egress(cf, data);
if(result == CURLE_AGAIN) {
- /* pending data to send, need to be called again. Ideally, we'd
+ /* pending data to send, need to be called again. Ideally, we would
* monitor the socket for POLLOUT, but we might not be in SENDING
* transfer state any longer and are unable to make this happen.
*/
data->state.httpwant == CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE) {
#ifndef CURL_DISABLE_PROXY
if(conn->bits.httpproxy && !conn->bits.tunnel_proxy) {
- /* We don't support HTTP/2 proxies yet. Also it's debatable
+ /* We do not support HTTP/2 proxies yet. Also it is debatable
whether or not this setting should apply to HTTP/2 proxies. */
infof(data, "Ignoring HTTP/2 prior knowledge due to proxy");
return FALSE;
if(result)
return result;
- conn->httpversion = 20; /* we know we're on HTTP/2 now */
+ conn->httpversion = 20; /* we know we are on HTTP/2 now */
conn->bits.multiplex = TRUE; /* at least potentially multiplexed */
conn->bundle->multiuse = BUNDLE_MULTIPLEX;
Curl_multi_connchanged(data->multi);
return result;
cf_h2 = cf->next;
- cf->conn->httpversion = 20; /* we know we're on HTTP/2 now */
+ cf->conn->httpversion = 20; /* we know we are on HTTP/2 now */
cf->conn->bits.multiplex = TRUE; /* at least potentially multiplexed */
cf->conn->bundle->multiuse = BUNDLE_MULTIPLEX;
Curl_multi_connchanged(data->multi);
" after upgrade: len=%zu", nread);
}
- conn->httpversion = 20; /* we know we're on HTTP/2 now */
+ conn->httpversion = 20; /* we know we are on HTTP/2 now */
conn->bits.multiplex = TRUE; /* at least potentially multiplexed */
conn->bundle->multiuse = BUNDLE_MULTIPLEX;
Curl_multi_connchanged(data->multi);
":%" MAX_SIGV4_LEN_TXT "s",
provider0, provider1, region, service);
if(!provider0[0]) {
- failf(data, "first aws-sigv4 provider can't be empty");
+ failf(data, "first aws-sigv4 provider cannot be empty");
result = CURLE_BAD_FUNCTION_ARGUMENT;
goto fail;
}
"SignedHeaders=%s, "
"Signature=%s\r\n"
/*
- * date_header is added here, only if it wasn't
+ * date_header is added here, only if it was not
* user-specified (using CURLOPT_HTTPHEADER).
* date_header includes \r\n
*/
case CHUNK_LF:
/* waiting for the LF after a chunk size */
if(*buf == 0x0a) {
- /* we're now expecting data to come, unless size was zero! */
+ /* we are now expecting data to come, unless size was zero! */
if(0 == ch->datasize) {
ch->state = CHUNK_TRAILER; /* now check for trailers */
}
break;
}
else {
- /* no trailer, we're on the final CRLF pair */
+ /* no trailer, we are on the final CRLF pair */
ch->state = CHUNK_TRAILER_POSTCR;
- break; /* don't advance the pointer */
+ break; /* do not advance the pointer */
}
}
else {
blen--;
(*pconsumed)++;
/* Record the length of any data left in the end of the buffer
- even if there's no more chunks to read */
+ even if there is no more chunks to read */
ch->datasize = blen;
ch->state = CHUNK_DONE;
CURL_TRC_WRITE(data, "http_chunk, response complete");
sizeof(struct chunked_writer)
};
-/* max length of a HTTP chunk that we want to generate */
+/* max length of an HTTP chunk that we want to generate */
#define CURL_CHUNKED_MINLEN (1024)
#define CURL_CHUNKED_MAXLEN (64 * 1024)
#define CHUNK_MAXNUM_LEN (SIZEOF_CURL_OFF_T * 2)
typedef enum {
- /* await and buffer all hexadecimal digits until we get one that isn't a
+ /* await and buffer all hexadecimal digits until we get one that is not a
hexadecimal digit. When done, we go CHUNK_LF */
CHUNK_HEX,
big deal. */
CHUNK_POSTLF,
- /* Used to mark that we're out of the game. NOTE: that there's a 'datasize'
- field in the struct that will tell how many bytes that were not passed to
- the client in the end of the last buffer! */
+ /* Used to mark that we are out of the game. NOTE: that there is a
+ 'datasize' field in the struct that will tell how many bytes that were
+ not passed to the client in the end of the last buffer! */
CHUNK_STOP,
/* At this point optional trailer headers can be found, unless the next line
Curl_http_auth_cleanup_negotiate(conn);
}
else if(state != GSS_AUTHNONE) {
- /* The server rejected our authentication and hasn't supplied any more
+ /* The server rejected our authentication and has not supplied any more
negotiation mechanisms */
Curl_http_auth_cleanup_negotiate(conn);
return CURLE_LOGIN_DENIED;
if(*state == GSS_AUTHDONE || *state == GSS_AUTHSUCC) {
/* connection is already authenticated,
- * don't send a header in future requests */
+ * do not send a header in future requests */
authp->done = TRUE;
}
Curl_bufref_init(&ntlmmsg);
- /* connection is already authenticated, don't send a header in future
+ /* connection is already authenticated, do not send a header in future
* requests so go directly to NTLMSTATE_LAST */
if(*state == NTLMSTATE_TYPE3)
*state = NTLMSTATE_LAST;
* Curl_idn_decode() returns an allocated IDN decoded string if it was
* possible. NULL on error.
*
- * CURLE_URL_MALFORMAT - the host name could not be converted
+ * CURLE_URL_MALFORMAT - the hostname could not be converted
* CURLE_OUT_OF_MEMORY - memory problem
*
*/
*/
CURLcode Curl_idnconvert_hostname(struct hostname *host)
{
- /* set the name we use to display the host name */
+ /* set the name we use to display the hostname */
host->dispname = host->name;
#ifdef USE_IDN
char *passwd;
/* Check we have a username and password to authenticate with and end the
- connect phase if we don't */
+ connect phase if we do not */
if(!data->state.aptr.user) {
imap_state(data, IMAP_STOP);
saslprogress progress;
/* Check if already authenticated OR if there is enough data to authenticate
- with and end the connect phase if we don't */
+ with and end the connect phase if we do not */
if(imapc->preauth ||
!Curl_sasl_can_authenticate(&imapc->sasl, data)) {
imap_state(data, IMAP_STOP);
chunk = (size_t)size;
if(!chunk) {
- /* no size, we're done with the data */
+ /* no size, we are done with the data */
imap_state(data, IMAP_STOP);
return CURLE_OK;
}
}
}
else {
- /* We don't know how to parse this line */
+ /* We do not know how to parse this line */
failf(data, "Failed to parse FETCH response.");
result = CURLE_WEIRD_SERVER_REPLY;
}
{
/*
* Note that int32_t and int16_t need only be "at least" large enough
- * to contain a value of the specified size. On some systems, like
+ * to contain a value of the specified size. On some systems, like
* Crays, there is no such thing as an integer variable with 16 bits.
* Keep this in mind if you think this function should have been coded
- * to use pointer overlays. All the world's not a VAX.
+ * to use pointer overlays. All the world's not a VAX.
*/
char tmp[sizeof("ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255")];
char *tp;
*tp++ = ':';
*tp++ = '\0';
- /* Check for overflow, copy, and we're done.
+ /* Check for overflow, copy, and we are done.
*/
if((size_t)(tp - tmp) > size) {
errno = ENOSPC;
* Returns NULL on error and errno set with the specific
* error, EAFNOSUPPORT or ENOSPC.
*
- * On Windows we store the error in the thread errno, not
- * in the winsock error code. This is to avoid losing the
- * actual last winsock error. So when this function returns
- * NULL, check errno not SOCKERRNO.
+ * On Windows we store the error in the thread errno, not in the winsock error
+ * code. This is to avoid losing the actual last winsock error. When this
+ * function returns NULL, check errno not SOCKERRNO.
*/
char *Curl_inet_ntop(int af, const void *src, char *buf, size_t size)
{
#endif
/*
- * WARNING: Don't even consider trying to compile this on a system where
- * sizeof(int) < 4. sizeof(int) > 4 is fine; all the world's not a VAX.
+ * WARNING: Do not even consider trying to compile this on a system where
+ * sizeof(int) < 4. sizeof(int) > 4 is fine; all the world's not a VAX.
*/
static int inet_pton4(const char *src, unsigned char *dst);
* to network format (which is usually some kind of binary format).
* return:
* 1 if the address was valid for the specified address family
- * 0 if the address wasn't valid (`dst' is untouched in this case)
+ * 0 if the address was not valid (`dst' is untouched in this case)
* -1 if some other error occurred (`dst' is untouched in this case, too)
* notice:
* On Windows we store the error in the thread errno, not
* in the winsock error code. This is to avoid losing the
- * actual last winsock error. So when this function returns
+ * actual last winsock error. When this function returns
* -1, check errno not SOCKERRNO.
* author:
* Paul Vixie, 1996.
* return:
* 1 if `src' is a valid dotted quad, else 0.
* notice:
- * does not touch `dst' unless it's returning 1.
+ * does not touch `dst' unless it is returning 1.
* author:
* Paul Vixie, 1996.
*/
* return:
* 1 if `src' is a valid [RFC1884 2.2] address, else 0.
* notice:
- * (1) does not touch `dst' unless it's returning 1.
+ * (1) does not touch `dst' unless it is returning 1.
* (2) :: in a full address is silently ignored.
* credit:
* inspired by Mark Andrews.
if(colonp) {
/*
* Since some memmove()'s erroneously fail to handle
- * overlapping regions, we'll do the shift by hand.
+ * overlapping regions, we will do the shift by hand.
*/
const ssize_t n = tp - colonp;
ssize_t i;
* THIS SOFTWARE IS PROVIDED BY THE INSTITUTE AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
if(maj != GSS_S_COMPLETE)
return -1;
- /* malloc a new buffer, in case gss_release_buffer doesn't work as
+ /* malloc a new buffer, in case gss_release_buffer does not work as
expected */
*to = malloc(enc.length);
if(!*to)
/* this loop will execute twice (once for service, once for host) */
for(;;) {
- /* this really shouldn't be repeated here, but can't help it */
+ /* this really should not be repeated here, but cannot help it */
if(service == srv_host) {
result = ftpsend(data, conn, "AUTH GSSAPI");
if(result)
size_t len = Curl_dyn_len(&pp->recvbuf);
p = Curl_dyn_ptr(&pp->recvbuf);
if((len < 4) || (p[0] != '2' && p[0] != '3')) {
- infof(data, "Server didn't accept auth data");
+ infof(data, "Server did not accept auth data");
ret = AUTH_ERROR;
break;
}
if(ret != AUTH_CONTINUE) {
if(ret != AUTH_OK) {
- /* Mechanism has dumped the error to stderr, don't error here. */
+ /* Mechanism has dumped the error to stderr, do not error here. */
return CURLE_USE_SSL_FAILED;
}
DEBUGASSERT(ret == AUTH_OK);
if(ldap_ssl) {
#ifdef HAVE_LDAP_SSL
#ifdef USE_WIN32_LDAP
- /* Win32 LDAP SDK doesn't support insecure mode without CA! */
+ /* Win32 LDAP SDK does not support insecure mode without CA! */
server = ldap_sslinit(host, (curl_ldap_num_t)conn->primary.remote_port, 1);
ldap_set_option(server, LDAP_OPT_SSL, LDAP_OPT_ON);
#else
/*
* The automagic conversion from IPv4 literals to IPv6 literals only
* works if the SCDynamicStoreCopyProxies system function gets called
- * first. As Curl currently doesn't support system-wide HTTP proxies, we
- * therefore don't use any value this function might return.
+ * first. As Curl currently does not support system-wide HTTP proxies, we
+ * therefore do not use any value this function might return.
*
* This function is only available on macOS and is not needed for
* IPv4-only builds, hence the conditions for defining
}
#else
-/* When no other crypto library is available, or the crypto library doesn't
+/* When no other crypto library is available, or the crypto library does not
* support MD4, we use this code segment this implementation of it
*
* This is an OpenSSL-compatible implementation of the RSA Data Security, Inc.
* Author:
* Alexander Peslyak, better known as Solar Designer <solar at openwall.com>
*
- * This software was written by Alexander Peslyak in 2001. No copyright is
- * claimed, and the software is hereby placed in the public domain. In case
+ * This software was written by Alexander Peslyak in 2001. No copyright is
+ * claimed, and the software is hereby placed in the public domain. In case
* this attempt to disclaim copyright and place the software in the public
* domain is deemed null and void, then the software is Copyright (c) 2001
* Alexander Peslyak and it is hereby released to the general public under the
* Redistribution and use in source and binary forms, with or without
* modification, are permitted.
*
- * There's ABSOLUTELY NO WARRANTY, express or implied.
+ * There is ABSOLUTELY NO WARRANTY, express or implied.
*
* (This is a heavily cut-down "BSD license".)
*
* This differs from Colin Plumb's older public domain implementation in that
* no exactly 32-bit integer data type is required (any 32-bit or wider
- * unsigned integer data type will do), there's no compile-time endianness
- * configuration, and the function prototypes match OpenSSL's. No code from
+ * unsigned integer data type will do), there is no compile-time endianness
+ * configuration, and the function prototypes match OpenSSL's. No code from
* Colin Plumb's implementation has been reused; this comment merely compares
* the properties of the two independent implementations.
*
* The primary goals of this implementation are portability and ease of use.
- * It is meant to be fast, but not as fast as possible. Some known
+ * It is meant to be fast, but not as fast as possible. Some known
* optimizations are not included to reduce source code size and avoid
* compile-time configuration.
*/
* in a properly aligned word in host byte order.
*
* The check for little-endian architectures that tolerate unaligned
- * memory accesses is just an optimization. Nothing will break if it
- * doesn't work.
+ * memory accesses is just an optimization. Nothing will break if it
+ * does not work.
*/
#if defined(__i386__) || defined(__x86_64__) || defined(__vax__)
#define MD4_SET(n) \
/*
* This processes one or more 64-byte data blocks, but does NOT update
- * the bit counters. There are no alignment requirements.
+ * the bit counters. There are no alignment requirements.
*/
static const void *my_md4_body(MD4_CTX *ctx,
const void *data, unsigned long size)
/* For Apple operating systems: CommonCrypto has the functions we need.
These functions are available on Tiger and later, as well as iOS 2.0
- and later. If you're building for an older cat, well, sorry.
+ and later. If you are building for an older cat, well, sorry.
Declaring the functions as static like this seems to be a bit more
reliable than defining COMMON_DIGEST_FOR_OPENSSL on older cats. */
* Author:
* Alexander Peslyak, better known as Solar Designer <solar at openwall.com>
*
- * This software was written by Alexander Peslyak in 2001. No copyright is
+ * This software was written by Alexander Peslyak in 2001. No copyright is
* claimed, and the software is hereby placed in the public domain.
* In case this attempt to disclaim copyright and place the software in the
* public domain is deemed null and void, then the software is
* Redistribution and use in source and binary forms, with or without
* modification, are permitted.
*
- * There's ABSOLUTELY NO WARRANTY, express or implied.
+ * There is ABSOLUTELY NO WARRANTY, express or implied.
*
* (This is a heavily cut-down "BSD license".)
*
* This differs from Colin Plumb's older public domain implementation in that
* no exactly 32-bit integer data type is required (any 32-bit or wider
- * unsigned integer data type will do), there's no compile-time endianness
- * configuration, and the function prototypes match OpenSSL's. No code from
+ * unsigned integer data type will do), there is no compile-time endianness
+ * configuration, and the function prototypes match OpenSSL's. No code from
* Colin Plumb's implementation has been reused; this comment merely compares
* the properties of the two independent implementations.
*
* The primary goals of this implementation are portability and ease of use.
- * It is meant to be fast, but not as fast as possible. Some known
+ * It is meant to be fast, but not as fast as possible. Some known
* optimizations are not included to reduce source code size and avoid
* compile-time configuration.
*/
* in a properly aligned word in host byte order.
*
* The check for little-endian architectures that tolerate unaligned
- * memory accesses is just an optimization. Nothing will break if it
- * doesn't work.
+ * memory accesses is just an optimization. Nothing will break if it
+ * does not work.
*/
#if defined(__i386__) || defined(__x86_64__) || defined(__vax__)
#define MD5_SET(n) \
/*
* This processes one or more 64-byte data blocks, but does NOT update
- * the bit counters. There are no alignment requirements.
+ * the bit counters. There are no alignment requirements.
*/
static const void *my_md5_body(my_md5_ctx *ctx,
const void *data, unsigned long size)
#include "urldata.h"
-#define MEMDEBUG_NODEFINES /* don't redefine the standard functions */
+#define MEMDEBUG_NODEFINES /* do not redefine the standard functions */
/* The last 3 #include files should be in this order */
#include "curl_printf.h"
double d;
void *p;
} mem[1];
- /* I'm hoping this is the thing with the strictest alignment
- * requirements. That also means we waste some space :-( */
+ /* I am hoping this is the thing with the strictest alignment
+ * requirements. That also means we waste some space :-( */
};
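
As an aside, the union-for-alignment idea is easy to demonstrate in isolation. A minimal, standalone sketch (not curl's memdebug code; the struct and field names below are invented for illustration) showing that a union of wide primitive types placed in front of the payload makes the payload offset a multiple of the strictest alignment among them:

    #include <stdio.h>
    #include <stddef.h>

    struct memhdr {
      size_t size;        /* bookkeeping kept in front of the payload */
      union {
        long long ll;
        double d;
        void *p;
      } mem[1];           /* the payload handed to the caller starts here */
    };

    int main(void)
    {
      /* the offset is a multiple of the union's alignment, so any of the
         member types can be stored at the payload address */
      printf("payload starts at offset %lu\n",
             (unsigned long)offsetof(struct memhdr, mem));
      return 0;
    }
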
/*
* remain so. For advanced analysis, record a log file and write perl scripts
* to analyze them!
*
- * Don't use these with multithreaded test programs!
+ * Do not use these with multithreaded test programs!
*/
FILE *curl_dbg_logfile = NULL;
curl_dbg_logfile = NULL;
}
-/* this sets the log file name */
+/* this sets the log filename */
void curl_dbg_memdebug(const char *logname)
{
if(!curl_dbg_logfile) {
else
curl_dbg_logfile = stderr;
#ifdef MEMDEBUG_LOG_SYNC
- /* Flush the log file after every line so the log isn't lost in a crash */
+ /* Flush the log file after every line so the log is not lost in a crash */
if(curl_dbg_logfile)
setbuf(curl_dbg_logfile, (char *)NULL);
#endif
}
}
-/* returns TRUE if this isn't allowed! */
+/* returns TRUE if this is not allowed! */
static bool countcheck(const char *func, int line, const char *source)
{
/* if source is NULL, then the call is made internally and this check
curl_mime *mime = (curl_mime *) ptr;
if(mime && mime->parent) {
- mime->parent->freefunc = NULL; /* Be sure we won't be called again. */
+ mime->parent->freefunc = NULL; /* Be sure we will not be called again. */
cleanup_part_content(mime->parent); /* Avoid dangling pointer in part. */
}
curl_mime_free(mime);
curl_mime *mime = (curl_mime *) ptr;
if(mime && mime->parent) {
- mime->parent->freefunc = NULL; /* Be sure we won't be called again. */
+ mime->parent->freefunc = NULL; /* Be sure we will not be called again. */
cleanup_part_content(mime->parent); /* Avoid dangling pointer in part. */
mime->parent = NULL;
}
curl_mimepart *part;
if(mime) {
- mime_subparts_unbind(mime); /* Be sure it's not referenced anymore. */
+ mime_subparts_unbind(mime); /* Be sure it is not referenced anymore. */
while(mime->firstpart) {
part = mime->firstpart;
mime->firstpart = part->nextpart;
return CURLE_OK;
}
-/* Set mime part remote file name. */
+/* Set mime part remote filename. */
CURLcode curl_mime_filename(curl_mimepart *part, const char *filename)
{
if(!part)
while(root->parent && root->parent->parent)
root = root->parent->parent;
if(subparts == root) {
- /* Can't add as a subpart of itself. */
+      /* Cannot add as a subpart of itself. */
return CURLE_BAD_FUNCTION_ARGUMENT;
}
}
curl_slist_free_all(part->curlheaders);
part->curlheaders = NULL;
- /* Be sure we won't access old headers later. */
+ /* Be sure we will not access old headers later. */
if(part->state.state == MIMESTATE_CURLHEADERS)
mimesetstate(&part->state, MIMESTATE_CURLHEADERS, NULL);
return CURLE_PARTIAL_FILE;
}
}
- /* we've passed, proceed as normal */
+ /* we have passed, proceed as normal */
}
return CURLE_OK;
}
curl_mimepart *nextpart; /* Forward linked list. */
enum mimekind kind; /* The part kind. */
unsigned int flags; /* Flags. */
- char *data; /* Memory data or file name. */
+ char *data; /* Memory data or filename. */
curl_read_callback readfunc; /* Read function. */
curl_seek_callback seekfunc; /* Seek function. */
curl_free_callback freefunc; /* Argument free function. */
struct curl_slist *curlheaders; /* Part headers. */
struct curl_slist *userheaders; /* Part headers. */
char *mimetype; /* Part mime type. */
- char *filename; /* Remote file name. */
+ char *filename; /* Remote filename. */
char *name; /* Data name. */
curl_off_t datasize; /* Expected data size. */
struct mime_state state; /* Current readback state. */
str = (char *)iptr->val.str;
if(!str) {
- /* Write null string if there's space. */
+ /* Write null string if there is space. */
if(prec == -1 || prec >= (int) sizeof(nilstr) - 1) {
str = nilstr;
len = sizeof(nilstr) - 1;
{
struct nsprintf *infop = f;
if(infop->length < infop->max) {
- /* only do this if we haven't reached max length yet */
+ /* only do this if we have not reached max length yet */
*infop->buffer++ = (char)outc; /* store */
infop->length++; /* we are now one byte larger */
return 0; /* fputc() returns like this on success */
if(info.max) {
/* we terminate this with a zero byte */
if(info.max == info.length) {
- /* we're at maximum, scrap the last letter */
+ /* we are at maximum, scrap the last letter */
info.buffer[-1] = 0;
DEBUGASSERT(retcode);
- retcode--; /* don't count the nul byte */
+ retcode--; /* do not count the nul byte */
}
else
info.buffer[0] = 0;
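
The bounded-write idea behind these snprintf internals can also be shown on its own. A hedged, standalone sketch (not curl's mprintf code; the names are invented): an emitter that stops storing once the buffer is full but keeps counting, so the caller still learns how long the untruncated output would have been:

    #include <stddef.h>

    struct bounded {
      char *buffer;   /* destination storage */
      size_t max;     /* room available in the buffer */
      size_t length;  /* bytes the complete output would need */
    };

    static void emit_byte(struct bounded *b, char c)
    {
      if(b->length < b->max)
        b->buffer[b->length] = c;  /* only store while there is room */
      b->length++;                 /* always count, even past the limit */
    }
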
start_user = pos + 3 + MQTT_CLIENTID_LEN;
/* position where starts the password payload */
start_pwd = start_user + ulen;
- /* if user name was provided, add it to the packet */
+ /* if username was provided, add it to the packet */
if(ulen) {
start_pwd += 2;
/*
CURL_SOCKET_HASH_TABLE_SIZE should be a prime number. Increasing it from 97
- to 911 takes on a 32-bit machine 4 x 804 = 3211 more bytes. Still, every
+   to 911 takes on a 32-bit machine 4 x 814 = 3256 more bytes. Still, every
CURL handle takes 45-50 K memory, therefore this 3K are not significant.
*/
#ifndef CURL_SOCKET_HASH_TABLE_SIZE
{
/* this is a completed transfer */
- /* Important: reset the conn pointer so that we don't point to memory
+ /* Important: reset the conn pointer so that we do not point to memory
that could be freed anytime */
Curl_detach_connection(data);
Curl_expire_clear(data); /* stop all timers */
#endif
if(oldstate == state)
- /* don't bother when the new state is the same as the old state */
+ /* do not bother when the new state is the same as the old state */
return;
data->mstate = state;
#endif
if(state == MSTATE_COMPLETED) {
- /* changing to COMPLETED means there's one less easy handle 'alive' */
+ /* changing to COMPLETED means there is one less easy handle 'alive' */
DEBUGASSERT(data->multi->num_alive > 0);
data->multi->num_alive--;
if(!data->multi->num_alive) {
* "Some tests at 7000 and 9000 connections showed that the socket hash lookup
* is somewhat of a bottle neck. Its current implementation may be a bit too
* limiting. It simply has a fixed-size array, and on each entry in the array
- * it has a linked list with entries. So the hash only checks which list to
- * scan through. The code I had used so for used a list with merely 7 slots
- * (as that is what the DNS hash uses) but with 7000 connections that would
- * make an average of 1000 nodes in each list to run through. I upped that to
- * 97 slots (I believe a prime is suitable) and noticed a significant speed
- * increase. I need to reconsider the hash implementation or use a rather
+ * it has a linked list with entries. The hash only checks which list to scan
+ * through. The code I had used so far used a list with merely 7 slots (as
+ * that is what the DNS hash uses) but with 7000 connections that would make
+ * an average of 1000 nodes in each list to run through. I upped that to 97
+ * slots (I believe a prime is suitable) and noticed a significant speed
+ * increase. I need to reconsider the hash implementation or use a rather
* large default value like this. At 9000 connections I was still below 10us
* per call."
*
Curl_llist_init(&data->state.timeoutlist, NULL);
/*
- * No failure allowed in this function beyond this point. And no
- * modification of easy nor multi handle allowed before this except for
- * potential multi's connection cache growing which won't be undone in this
- * function no matter what.
+ * No failure allowed in this function beyond this point. No modification of
+ * easy nor multi handle allowed before this except for potential multi's
+ * connection cache growing which will not be undone in this function no
+ * matter what.
*/
if(data->set.errorbuffer)
data->set.errorbuffer[0] = 0;
case CURLE_ABORTED_BY_CALLBACK:
case CURLE_READ_ERROR:
case CURLE_WRITE_ERROR:
- /* When we're aborted due to a callback return code it basically have to
- be counted as premature as there is trouble ahead if we don't. We have
+    /* When we are aborted due to a callback return code it basically has to
+ be counted as premature as there is trouble ahead if we do not. We have
many callbacks and protocols work differently, we could potentially do
this more fine-grained in the future. */
premature = TRUE;
restrictions in our or the server's end
if premature is TRUE, it means this connection was said to be DONE before
- the entire request operation is complete and thus we can't know in what
- state it is for reusing, so we're forced to close it. In a perfect world
+ the entire request operation is complete and thus we cannot know in what
+ state it is for reusing, so we are forced to close it. In a perfect world
we can add code that keep track of if we really must close it here or not,
but currently we have no such detail knowledge.
*/
if(data->conn &&
data->mstate > MSTATE_DO &&
data->mstate < MSTATE_COMPLETED) {
- /* Set connection owner so that the DONE function closes it. We can
+ /* Set connection owner so that the DONE function closes it. We can
safely do this here since connection is killed. */
streamclose(data->conn, "Removed with partial response");
}
/* multi_done() clears the association between the easy handle and the
connection.
- Note that this ignores the return code simply because there's
+ Note that this ignores the return code simply because there is
nothing really useful to do with it anyway! */
(void)multi_done(data, data->result, premature);
}
what we want */
data->mstate = MSTATE_COMPLETED;
- /* This ignores the return code even in case of problems because there's
+ /* This ignores the return code even in case of problems because there is
nothing more to do about that, here */
(void)singlesocket(multi, easy); /* to let the application know what sockets
that vanish with this handle */
/* This removes a handle that was part the multi interface that used
CONNECT_ONLY, that connection is now left alive but since this handle
has bits.close set nothing can use that transfer anymore and it is
- forbidden from reuse. And this easy handle cannot find the connection
+ forbidden from reuse. This easy handle cannot find the connection
anymore once removed from the multi handle
Better close the connection here, at once.
#endif
/* as this was using a shared connection cache we clear the pointer to that
- since we're not part of that multi handle anymore */
+ since we are not part of that multi handle anymore */
data->state.conn_cache = NULL;
data->multi = NULL; /* clear the association to this multi handle */
- /* make sure there's no pending message in the queue sent from this easy
+ /* make sure there is no pending message in the queue sent from this easy
handle */
for(e = multi->msglist.head; e; e = e->next) {
struct Curl_message *msg = e->ptr;
for(i = 0; i < ps.num; i++) {
if(!FDSET_SOCK(ps.sockets[i]))
- /* pretend it doesn't exist */
+ /* pretend it does not exist */
continue;
if(ps.actions[i] & CURL_POLL_IN)
FD_SET(ps.sockets[i], read_fd_set);
}
#ifdef USE_WINSOCK
-/* Reset FD_WRITE for TCP sockets. Nothing is actually sent. UDP sockets can't
+/* Reset FD_WRITE for TCP sockets. Nothing is actually sent. UDP sockets cannot
* be reset this way because an empty datagram would be sent. #9203
*
* "On Windows the internal state of FD_WRITE as returned from
#endif
long sleep_ms = 0;
- /* Avoid busy-looping when there's nothing particular to wait for */
+ /* Avoid busy-looping when there is nothing particular to wait for */
if(!curl_multi_timeout(multi, &sleep_ms) && sleep_ms) {
if(sleep_ms > timeout_ms)
sleep_ms = timeout_ms;
The write socket is set to non-blocking, this way this function
cannot block, making it safe to call even from the same thread
that will call curl_multi_wait(). If swrite() returns that it
- would block, it's considered successful because it means that
+ would block, it is considered successful because it means that
previous calls to this function will wake up the poll(). */
if(wakeup_write(multi->wakeup_pair[1], buf, sizeof(buf)) < 0) {
int err = SOCKERRNO;
if(!rc) {
struct SingleRequest *k = &data->req;
- /* pass in NULL for 'conn' here since we don't want to init the
+ /* pass in NULL for 'conn' here since we do not want to init the
connection, only this transfer */
Curl_init_do(data, NULL);
* second connection.
*
* 'complete' can return 0 for incomplete, 1 for done and -1 for go back to
- * DOING state there's more work to do!
+ * DOING state as there is more work to do!
*/
static CURLcode multi_do_more(struct Curl_easy *data, int *complete)
&& conn->bits.protoconnstart) {
/* We already are connected, get back. This may happen when the connect
worked fine in the first call, like when we connect to a local server
- or proxy. Note that we don't know if the protocol is actually done.
+ or proxy. Note that we do not know if the protocol is actually done.
- Unless this protocol doesn't have any protocol-connect callback, as
- then we know we're done. */
+ Unless this protocol does not have any protocol-connect callback, as
+ then we know we are done. */
if(!conn->handler->connecting)
*protocol_done = TRUE;
else
*protocol_done = TRUE;
- /* it has started, possibly even completed but that knowledge isn't stored
+ /* it has started, possibly even completed but that knowledge is not stored
in this bit! */
if(!result)
conn->bits.protoconnstart = TRUE;
if(!result) {
*nowp = Curl_pgrsTime(data, TIMER_POSTQUEUE);
if(async)
- /* We're now waiting for an asynchronous name lookup */
+ /* We are now waiting for an asynchronous name lookup */
multistate(data, MSTATE_RESOLVING);
else {
/* after the connect has been sent off, go WAITCONNECT unless the
/* Update sockets here, because the socket(s) may have been
closed and the application thus needs to be told, even if it
is likely that the same socket(s) will again be used further
- down. If the name has not yet been resolved, it is likely
+ down. If the name has not yet been resolved, it is likely
that new sockets have been opened in an attempt to contact
another resolver. */
rc = singlesocket(multi, data);
Curl_set_in_callback(data, false);
if(prereq_rc != CURL_PREREQFUNC_OK) {
failf(data, "operation aborted by pre-request callback");
- /* failure in pre-request callback - don't do any other processing */
+ /* failure in pre-request callback - do not do any other
+ processing */
result = CURLE_ABORTED_BY_CALLBACK;
Curl_posttransfer(data);
multi_done(data, result, FALSE);
/* skip some states if it is important */
multi_done(data, CURLE_OK, FALSE);
- /* if there's no connection left, skip the DONE state */
+ /* if there is no connection left, skip the DONE state */
multistate(data, data->conn ?
MSTATE_DONE : MSTATE_COMPLETED);
rc = CURLM_CALL_MULTI_PERFORM;
/* after DO, go DO_DONE... or DO_MORE */
else if(data->conn->bits.do_more) {
- /* we're supposed to do more, but we need to sit down, relax
+ /* we are supposed to do more, but we need to sit down, relax
and wait a little while first */
multistate(data, MSTATE_DOING_MORE);
rc = CURLM_CALL_MULTI_PERFORM;
}
else {
- /* we're done with the DO, now DID */
+ /* we are done with the DO, now DID */
multistate(data, MSTATE_DID);
rc = CURLM_CALL_MULTI_PERFORM;
}
data->conn->bits.reuse) {
/*
* In this situation, a connection that we were trying to use
- * may have unexpectedly died. If possible, send the connection
+ * may have unexpectedly died. If possible, send the connection
* back to the CONNECT phase so we can try again.
*/
char *newurl = NULL;
}
}
else {
- /* done didn't return OK or SEND_ERROR */
+ /* done did not return OK or SEND_ERROR */
result = drc;
}
}
else {
- /* Have error handler disconnect conn if we can't retry */
+ /* Have error handler disconnect conn if we cannot retry */
stream_error = TRUE;
}
free(newurl);
/* Check if we can move pending requests to send pipe */
process_pending_handles(multi); /* multiplexed */
- /* Only perform the transfer if there's a good socket to work with.
+ /* Only perform the transfer if there is a good socket to work with.
Having both BAD is a signal to skip immediately to DONE */
if((data->conn->sockfd != CURL_SOCKET_BAD) ||
(data->conn->writesockfd != CURL_SOCKET_BAD))
if(result) {
/*
* The transfer phase returned error, we mark the connection to get
- * closed to prevent being reused. This is because we can't possibly
- * know if the connection is in a good shape or not now. Unless it is
+ * closed to prevent being reused. This is because we cannot possibly
+ * know if the connection is in a good shape or not now. Unless it is
* a protocol which uses two "channels" like FTP, as then the error
* happened in the data connection.
*/
else {
/* after the transfer is done, go DONE */
- /* but first check to see if we got a location info even though we're
- not following redirects */
+ /* but first check to see if we got a location info even though we
+ are not following redirects */
if(data->req.location) {
free(newurl);
newurl = data->req.location;
}
else if(data->state.select_bits && !Curl_xfer_is_blocked(data)) {
/* This avoids CURLM_CALL_MULTI_PERFORM so that a very fast transfer
- won't get stuck on this transfer at the expense of other concurrent
- transfers */
+ will not get stuck on this transfer at the expense of other
+ concurrent transfers */
Curl_expire(data, 0, EXPIRE_RUN_NOW);
}
free(newurl);
}
}
#endif
- /* after we have DONE what we're supposed to do, go COMPLETED, and
- it doesn't matter what the multi_done() returned! */
+ /* after we have DONE what we are supposed to do, go COMPLETED, and
+ it does not matter what the multi_done() returned! */
multistate(data, MSTATE_COMPLETED);
break;
if(data->mstate < MSTATE_COMPLETED) {
if(result) {
/*
- * If an error was returned, and we aren't in completed state now,
+ * If an error was returned, and we are not in completed state now,
* then we go to completed and consider this transfer aborted.
*/
if(data->conn) {
if(stream_error) {
- /* Don't attempt to send data over a connection that timed out */
+ /* Do not attempt to send data over a connection that timed out */
bool dead_connection = result == CURLE_OPERATION_TIMEDOUT;
struct connectdata *conn = data->conn;
/* This is where we make sure that the conn pointer is reset.
- We don't have to do this in every case block above where a
+ We do not have to do this in every case block above where a
failure is detected */
Curl_detach_connection(data);
multistate(data, MSTATE_COMPLETED);
rc = CURLM_CALL_MULTI_PERFORM;
}
- /* if there's still a connection to use, call the progress function */
+ /* if there is still a connection to use, call the progress function */
else if(data->conn && Curl_pgrsUpdate(data)) {
/* aborted due to progress callback return code must close the
connection */
}
}
else {
- /* this is a socket we didn't have before, add it to the hash! */
+ /* this is a socket we did not have before, add it to the hash! */
entry = sh_addentry(&multi->sockhash, s);
if(!entry)
/* fatal */
* Curl_multi_closed()
*
* Used by the connect code to tell the multi_socket code that one of the
- * sockets we were using is about to be closed. This function will then
+ * sockets we were using is about to be closed. This function will then
* remove it from the sockethash for this handle to make the multi_socket API
* behave properly, especially for the case when libcurl will create another
* socket again and it gets the same file descriptor number.
void Curl_multi_closed(struct Curl_easy *data, curl_socket_t s)
{
if(data) {
- /* if there's still an easy handle associated with this connection */
+ /* if there is still an easy handle associated with this connection */
struct Curl_multi *multi = data->multi;
if(multi) {
/* this is set if this connection is part of a handle that is added to
/* copy the first entry to 'tv' */
memcpy(tv, &node->time, sizeof(*tv));
- /* Insert this node again into the splay. Keep the timer in the list in
+ /* Insert this node again into the splay. Keep the timer in the list in
case we need to recompute future timers. */
multi->timetree = Curl_splayinsert(*tv, multi->timetree,
&d->state.timenode);
struct Curl_sh_entry *entry = sh_getentry(&multi->sockhash, s);
if(!entry) {
- /* Unmatched socket, we can't act on it but we ignore this fact. In
+ /* Unmatched socket, we cannot act on it but we ignore this fact. In
real-world tests it has been proved that libevent can in fact give
the application actions even though the socket was just previously
asked to get removed, so thus we better survive stray socket actions
DEBUGASSERT(data->magic == CURLEASY_MAGIC_NUMBER);
if(data->conn && !(data->conn->handler->flags & PROTOPT_DIRLOCK))
- /* set socket event bitmask if they're not locked */
+ /* set socket event bitmask if they are not locked */
data->state.select_bits |= (unsigned char)ev_bitmask;
Curl_expire(data, 0, EXPIRE_RUN_NOW);
}
- /* Now we fall-through and do the timer-based stuff, since we don't want
+ /* Now we fall-through and do the timer-based stuff, since we do not want
to force the user to have to deal with timeouts as long as at least
one connection in fact has traffic. */
data = NULL; /* set data to NULL again to avoid calling
- multi_runsingle() in case there's no need to */
+ multi_runsingle() in case there is no need to */
now = Curl_now(); /* get a newer time since the multi_runsingle() loop
may have taken some time */
}
}
}
- /* Check if there's one (more) expired timer to deal with! This function
+ /* Check if there is one (more) expired timer to deal with! This function
extracts a matching node if there is one */
multi->timetree = Curl_splaygetbest(now, multi->timetree, &t);
if(Curl_splaycomparekeys(multi->timetree->key, now) > 0) {
/* some time left before expiration */
timediff_t diff = Curl_timediff_ceil(multi->timetree->key, now);
- /* this should be safe even on 32-bit archs, as we don't use that overly
- long timeouts */
+    /* this should be safe even on 32-bit archs, as we do not use such
+ overly long timeouts */
*timeout_ms = (long)diff;
}
else
static const struct curltime none = {0, 0};
if(Curl_splaycomparekeys(none, multi->timer_lastcall)) {
multi->timer_lastcall = none;
- /* there's no timeout now but there was one previously, tell the app to
+ /* there is no timeout now but there was one previously, tell the app to
disable it */
set_in_callback(multi, TRUE);
rc = multi->timer_cb(multi, -1, multi->timer_userp);
/* Remove any timer with the same id just in case. */
multi_deltimeout(data, id);
- /* Add it to the timer list. It must stay in the list until it has expired
+ /* Add it to the timer list. It must stay in the list until it has expired
in case we need to recompute the minimum timer later. */
multi_addtimeout(data, &set, id);
if(diff > 0) {
/* The current splay tree entry is sooner than this new expiry time.
- We don't need to update our splay tree entry. */
+ We do not need to update our splay tree entry. */
return;
}
#endif
unsigned int max_concurrent_streams;
unsigned int maxconnects; /* if >0, a fixed limit of the maximum number of
- entries we're allowed to grow the connection
+ entries we are allowed to grow the connection
cache to */
#define IPV6_UNKNOWN 0
#define IPV6_DEAD 1
* Curl_multi_closed()
*
* Used by the connect code to tell the multi_socket code that one of the
- * sockets we were using is about to be closed. This function will then
+ * sockets we were using is about to be closed. This function will then
* remove it from the sockethash for this handle to make the multi_socket API
* behave properly, especially for the case when libcurl will create another
* socket again and it gets the same file descriptor number.
else if(strcasecompare("password", tok))
state_password = 1;
else if(strcasecompare("machine", tok)) {
- /* ok, there's machine here go => */
+ /* ok, there is machine here go => */
state = HOSTFOUND;
state_our_login = FALSE;
}
/*
* @unittest: 1304
*
- * *loginp and *passwordp MUST be allocated if they aren't NULL when passed
+ * *loginp and *passwordp MUST be allocated if they are not NULL when passed
* in.
*/
int Curl_parsenetrc(const char *host, char **loginp, char **passwordp,
#include "curl_setup.h"
#ifndef CURL_DISABLE_NETRC
-/* returns -1 on failure, 0 if the host is found, 1 is the host isn't found */
+/* returns -1 on failure, 0 if the host is found, 1 if the host is not found */
int Curl_parsenetrc(const char *host, char **loginp,
char **passwordp, char *filename);
/* Assume: (*passwordp)[0]=0, host[0] != 0.
if(flags < 0)
return -1;
/* Check if the current file status flags have already satisfied
- * the request, if so, it's no need to call fcntl() to replicate it.
+   * the request, if so, there is no need to call fcntl() to replicate it.
*/
if(!!(flags & O_NONBLOCK) == !!nonblock)
return 0;
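
The pattern above, fetching the current flags and only calling fcntl() a second time when the blocking mode actually needs to change, looks like this in a self-contained form. A minimal POSIX sketch assuming fcntl() with F_GETFL/F_SETFL; it is not curl's exact helper:

    #include <fcntl.h>

    static int set_nonblock(int fd, int nonblock)
    {
      int flags = fcntl(fd, F_GETFL, 0);
      if(flags < 0)
        return -1;
      if(!!(flags & O_NONBLOCK) == !!nonblock)
        return 0;  /* already in the requested mode, nothing to do */
      return fcntl(fd, F_SETFL,
                   nonblock ? (flags | O_NONBLOCK) : (flags & ~O_NONBLOCK));
    }
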
char hostip[128];
/*
- * If we don't have a hostname at all, like for example with a FILE
+ * If we do not have a hostname at all, like for example with a FILE
* transfer, we have nothing to interrogate the noproxy list with.
*/
if(!name || name[0] == '\0')
if(!strcmp("*", no_proxy))
return TRUE;
- /* NO_PROXY was specified and it wasn't just an asterisk */
+ /* NO_PROXY was specified and it was not just an asterisk */
if(name[0] == '[') {
char *endptr;
if(1 == Curl_inet_pton(AF_INET, name, &address))
type = TYPE_IPV4;
else {
- /* ignore trailing dots in the host name */
+ /* ignore trailing dots in the hostname */
if(name[namelen - 1] == '.')
namelen--;
}
while(*p == ',')
p++;
} /* while(*p) */
- } /* NO_PROXY was specified and it wasn't just an asterisk */
+ } /* NO_PROXY was specified and it was not just an asterisk */
return FALSE;
}
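
For illustration, the tail matching that a NO_PROXY entry implies can be sketched separately. This is a simplified sketch, not curl's full logic (it ignores IP addresses, port numbers and the "*" wildcard handled above); strncasecmp is assumed to be available (POSIX):

    #include <string.h>
    #include <strings.h>
    #include <stdbool.h>

    static bool noproxy_match(const char *name, const char *pattern)
    {
      size_t nlen = strlen(name);
      size_t plen = strlen(pattern);
      if(nlen && name[nlen - 1] == '.')
        nlen--;            /* ignore a trailing dot in the hostname */
      if(plen && pattern[0] == '.') {
        pattern++;         /* ".example.com" matches subdomains too */
        plen--;
      }
      if(!plen || plen > nlen)
        return false;
      /* match "example.com" itself or anything ending in ".example.com" */
      return !strncasecmp(name + (nlen - plen), pattern, plen) &&
             (plen == nlen || name[nlen - plen - 1] == '.');
    }
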
return 0;
}
-/* We don't need to do anything because libcurl does it already */
+/* We do not need to do anything because libcurl does it already */
static int
ldapsb_tls_close(Sockbuf_IO_Desc *sbiod)
{
}
/* return the time zone offset between GMT and the input one, in number
- of seconds or -1 if the timezone wasn't found/legal */
+ of seconds or -1 if the timezone was not found/legal */
static int checktz(const char *check, size_t len)
{
static void skip(const char **date)
{
- /* skip everything that aren't letters or digits */
+  /* skip everything that is not a letter or a digit */
while(**date && !ISALNUM(**date))
(*date)++;
}
};
/*
- * time2epoch: time stamp to seconds since epoch in GMT time zone. Similar to
+ * time2epoch: time stamp to seconds since epoch in GMT time zone. Similar to
* mktime but for GMT only.
*/
static time_t time2epoch(int sec, int min, int hour,
((date[-1] == '+' || date[-1] == '-'))) {
/* four digits and a value less than or equal to 1400 (to take into
account all sorts of funny time zone diffs) and it is preceded
- with a plus or minus. This is a time zone indication. 1400 is
+ with a plus or minus. This is a time zone indication. 1400 is
picked since +1300 is frequently used and +1400 is mentioned as
an edge number in the document "ISO C 200X Proposal: Timezone
Functions" at http://david.tribble.com/text/c0xtimezone.html If
interval_ms);
if(block) {
- /* if we didn't wait, we don't have to spend time on this now */
+ /* if we did not wait, we do not have to spend time on this now */
if(Curl_pgrsUpdate(data))
result = CURLE_ABORTED_BY_CALLBACK;
else
DEBUGASSERT(pp->sendthis == NULL);
if(!conn)
- /* can't send without a connection! */
+ /* cannot send without a connection! */
return CURLE_SEND_ERROR;
Curl_dyn_reset(&pp->sendbuf);
char *nl = memchr(line, '\n', Curl_dyn_len(&pp->recvbuf));
if(nl) {
/* a newline is CRLF in pp-talk, so the CR is ignored as
- the line isn't really terminated until the LF comes */
+ the line is not really terminated until the LF comes */
size_t length = nl - line + 1;
/* output debug output if that is requested */
break;
}
- } while(1); /* while there's buffer left to scan */
+ } while(1); /* while there is buffer left to scan */
pp->pending_resp = FALSE;
typedef enum {
PPTRANSFER_BODY, /* yes do transfer a body */
PPTRANSFER_INFO, /* do still go through to get info/headers */
- PPTRANSFER_NONE /* don't get anything and don't get info */
+ PPTRANSFER_NONE /* do not get anything and do not get info */
} curl_pp_transfer;
/*
* Curl_pp_statemach()
*
* called repeatedly until done. Set 'wait' to make it wait a while on the
- * socket if there's no traffic.
+ * socket if there is no traffic.
*/
CURLcode Curl_pp_statemach(struct Curl_easy *data, struct pingpong *pp,
bool block, bool disconnecting);
CURLcode result = CURLE_OK;
/* Check we have a username and password to authenticate with and end the
- connect phase if we don't */
+ connect phase if we do not */
if(!data->state.aptr.user) {
pop3_state(data, POP3_STOP);
char secret[2 * MD5_DIGEST_LEN + 1];
/* Check we have a username and password to authenticate with and end the
- connect phase if we don't */
+ connect phase if we do not */
if(!data->state.aptr.user) {
pop3_state(data, POP3_STOP);
saslprogress progress = SASL_IDLE;
/* Check we have enough data to authenticate with and end the
- connect phase if we don't */
+ connect phase if we do not */
if(!Curl_sasl_can_authenticate(&pop3c->sasl, data)) {
pop3_state(data, POP3_STOP);
return result;
}
}
else {
- /* Clear text is supported when CAPA isn't recognised */
+ /* Clear text is supported when CAPA is not recognised */
if(pop3code != '+')
pop3c->authtypes |= POP3_TYPE_CLEARTEXT;
pop3c->eob = 2;
/* But since this initial CR LF pair is not part of the actual body, we set
- the strip counter here so that these bytes won't be delivered. */
+ the strip counter here so that these bytes will not be delivered. */
pop3c->strip = 2;
if(pop3->transfer == PPTRANSFER_BODY) {
pop3c->eob++;
if(i) {
- /* Write out the body part that didn't match */
+ /* Write out the body part that did not match */
result = Curl_client_write(data, CLIENTWRITE_BODY, &str[last],
i - last);
else if(pop3c->eob == 3)
pop3c->eob++;
else
- /* If the character match wasn't at position 0 or 3 then restart the
+ /* If the character match was not at position 0 or 3 then restart the
pattern matching */
pop3c->eob = 1;
break;
if(pop3c->eob == 1 || pop3c->eob == 4)
pop3c->eob++;
else
- /* If the character match wasn't at position 1 or 4 then start the
+ /* If the character match was not at position 1 or 4 then start the
search again */
pop3c->eob = 0;
break;
pop3c->eob = 0;
}
else
- /* If the character match wasn't at position 2 then start the search
+ /* If the character match was not at position 2 then start the search
again */
pop3c->eob = 0;
break;
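
The end-of-body detection these hunks adjust, matching CRLF "." CRLF even when the marker is split across reads, can be summarized in a short, hedged sketch. It is not curl's pop3.c code; it keeps a single match counter in the spirit of pop3c->eob, and a caller would seed that counter with 2 at the start of the body since the body conceptually begins right after a CRLF:

    #include <stddef.h>

    static const char marker[] = "\r\n.\r\n";

    /* Scan one chunk; *matched carries the match progress across chunks.
       Returns the offset just past the marker, or 0 if not yet complete. */
    static size_t eob_scan(const char *buf, size_t len, size_t *matched)
    {
      size_t i;
      for(i = 0; i < len; i++) {
        if(buf[i] == marker[*matched])
          (*matched)++;
        else
          *matched = (buf[i] == marker[0]) ? 1 : 0;
        if(*matched == sizeof(marker) - 1)
          return i + 1;
      }
      return 0;
    }
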
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "k", bytes/ONE_KILOBYTE);
else if(bytes < CURL_OFF_T_C(100) * ONE_MEGABYTE)
- /* 'XX.XM' is good as long as we're less than 100 megs */
+ /* 'XX.XM' is good as long as we are less than 100 megs */
msnprintf(max5, 6, "%2" CURL_FORMAT_CURL_OFF_T ".%0"
CURL_FORMAT_CURL_OFF_T "M", bytes/ONE_MEGABYTE,
(bytes%ONE_MEGABYTE) / (ONE_MEGABYTE/CURL_OFF_T_C(10)) );
else if(bytes < CURL_OFF_T_C(10000) * ONE_MEGABYTE)
- /* 'XXXXM' is good until we're at 10000MB or above */
+ /* 'XXXXM' is good until we are at 10000MB or above */
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "M", bytes/ONE_MEGABYTE);
else if(bytes < CURL_OFF_T_C(100) * ONE_GIGABYTE)
/* up to 10000PB, display without decimal: XXXXP */
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "P", bytes/ONE_PETABYTE);
- /* 16384 petabytes (16 exabytes) is the maximum a 64 bit unsigned number can
+ /* 16384 petabytes (16 exabytes) is the maximum a 64-bit unsigned number can
hold, but our data type is signed so 8192PB will be the maximum. */
return max5;
if(!(data->progress.flags & PGRS_HIDE) &&
!data->progress.callback)
- /* only output if we don't use a progress callback and we're not
+ /* only output if we do not use a progress callback and we are not
* hidden */
fprintf(data->set.err, "\n");
case TIMER_STARTTRANSFER:
delta = &data->progress.t_starttransfer;
/* prevent updating t_starttransfer unless:
- * 1) this is the first time we're setting t_starttransfer
+ * 1) this is the first time we are setting t_starttransfer
* 2) a redirect has occurred since the last time t_starttransfer was set
* This prevents repeated invocations of the function from incorrectly
* changing the t_starttransfer time.
/*
* This is used to handle speed limits, calculating how many milliseconds to
- * wait until we're back under the speed limit, if needed.
+ * wait until we are back under the speed limit, if needed.
*
* The way it works is by having a "starting point" (time & amount of data
* transferred by then) used in the speed computation, to be used instead of
- * the start of the transfer. This starting point is regularly moved as
+ * the start of the transfer. This starting point is regularly moved as
* transfer goes on, to keep getting accurate values (instead of average over
* the entire transfer).
*
*/
void Curl_ratelimit(struct Curl_easy *data, struct curltime now)
{
- /* don't set a new stamp unless the time since last update is long enough */
+ /* do not set a new stamp unless the time since last update is long enough */
if(data->set.max_recv_speed) {
if(Curl_timediff(now, data->progress.dl_limit_start) >=
MIN_RATE_LIMIT_PERIOD) {
return CURL_OFF_T_MAX;
}
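
The wait computation described in the comment above boils down to simple arithmetic: N bytes moved since the rate-limit starting point are allowed to take at least N/limit seconds, and the caller sleeps for whatever part of that minimum has not yet elapsed. A hedged sketch of that calculation (not curl's exact function; overflow guards are omitted for brevity):

    #include <curl/curl.h>   /* curl_off_t */

    static curl_off_t limit_wait_ms(curl_off_t bytes_since_start,
                                    curl_off_t elapsed_ms,
                                    curl_off_t speed_limit)
    {
      curl_off_t minimum_ms;
      if(speed_limit <= 0 || bytes_since_start <= 0)
        return 0;
      /* the shortest time this many bytes are allowed to take */
      minimum_ms = (bytes_since_start * 1000) / speed_limit;
      return (minimum_ms > elapsed_ms) ? (minimum_ms - elapsed_ms) : 0;
    }
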
-/* returns TRUE if it's time to show the progress meter */
+/* returns TRUE if it is time to show the progress meter */
static bool progress_calc(struct Curl_easy *data, struct curltime now)
{
bool timetoshow = FALSE;
/* figure out how many index entries of data we have stored in our speeder
array. With N_ENTRIES filled in, we have about N_ENTRIES-1 seconds of
transfer. Imagine, after one second we have filled in two entries,
- after two seconds we've filled in three entries etc. */
+ after two seconds we have filled in three entries etc. */
countindex = ((p->speeder_c >= CURR_TIME)? CURR_TIME:p->speeder_c) - 1;
- /* first of all, we don't do this if there's no counted seconds yet */
+  /* first of all, we do not do this if there are no counted seconds yet */
if(countindex) {
int checkindex;
timediff_t span_ms;
if(!(data->progress.flags & PGRS_HIDE)) {
if(data->set.fxferinfo) {
int result;
- /* There's a callback set, call that */
+ /* There is a callback set, call that */
Curl_set_in_callback(data, true);
result = data->set.fxferinfo(data->set.progress_client,
data->progress.size_dl,
#if defined(RANDOM_FILE) && !defined(_WIN32)
if(!seeded) {
- /* if there's a random file to read a seed from, use it */
+ /* if there is a random file to read a seed from, use it */
int fd = open(RANDOM_FILE, O_RDONLY);
if(fd > -1) {
/* read random data into the randseed variable */
int Curl_rename(const char *oldpath, const char *newpath)
{
#ifdef _WIN32
- /* rename() on Windows doesn't overwrite, so we can't use it here.
+ /* rename() on Windows does not overwrite, so we cannot use it here.
MoveFileEx() will overwrite and is usually atomic, however it fails
when there are open handles to the file. */
const int max_wait_ms = 1000;
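
As a standalone illustration of what this comment describes, a Windows-only sketch (not curl's full retry logic): MoveFileExA() with MOVEFILE_REPLACE_EXISTING overwrites the target, and a brief retry loop covers the case where another process still holds the destination open:

    #ifdef _WIN32
    #include <windows.h>

    static int rename_overwrite(const char *oldpath, const char *newpath)
    {
      int tries;
      for(tries = 0; tries < 10; tries++) {
        if(MoveFileExA(oldpath, newpath, MOVEFILE_REPLACE_EXISTING))
          return 0;   /* success */
        Sleep(100);   /* wait a little, then try again */
      }
      return 1;       /* give up */
    }
    #endif
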
}
}
#endif
- /* Make sure this doesn't send more body bytes than what the max send
+ /* Make sure this does not send more body bytes than what the max send
speed says. The headers do not count to the max speed. */
if(data->set.max_send_speed) {
size_t body_bytes = blen - hds_len;
{
DEBUGASSERT(!data->req.upload_done);
data->req.upload_done = TRUE;
- data->req.keepon &= ~(KEEP_SEND|KEEP_SEND_TIMED); /* we're done sending */
+ data->req.keepon &= ~(KEEP_SEND|KEEP_SEND_TIMED); /* we are done sending */
Curl_creader_done(data, data->req.upload_aborted);
/*
- * Request specific data in the easy handle (Curl_easy). Previously,
+ * Request specific data in the easy handle (Curl_easy). Previously,
* these members were on the connectdata struct but since a conn struct may
* now be shared between different Curl_easys, we store connection-specific
- * data here. This struct only keeps stuff that's interesting for *this*
+ * data here. This struct only keeps stuff that is interesting for *this*
* request, as it will be cleared between multiple ones
*/
struct SingleRequest {
unsigned int headerbytecount; /* received server headers (not CONNECT
headers) */
unsigned int allheadercount; /* all received headers (server + CONNECT) */
- unsigned int deductheadercount; /* this amount of bytes doesn't count when
+ unsigned int deductheadercount; /* this amount of bytes does not count when
we check if anything has been transferred
at the end of a connection. We use this
counter to make only a 100 reply (without
unsigned int checks_to_perform);
/* this returns the socket to wait for in the DO and DOING state for the multi
- interface and then we're always _sending_ a request and thus we wait for
+ interface and then we are always _sending_ a request and thus we wait for
the single socket to become writable only */
static int rtsp_getsock_do(struct Curl_easy *data, struct connectdata *conn,
curl_socket_t *socks)
* Since all RTSP requests are included here, there is no need to
* support custom requests like HTTP.
**/
- data->req.no_body = TRUE; /* most requests don't contain a body */
+ data->req.no_body = TRUE; /* most requests do not contain a body */
switch(rtspreq) {
default:
failf(data, "Got invalid RTSP request");
/* Find the end of Session ID
*
* Allow any non whitespace content, up to the field separator or end of
- * line. RFC 2326 isn't 100% clear on the session ID and for example
+ * line. RFC 2326 is not 100% clear on the session ID and for example
* gstreamer does url-encoded session ID's not covered by the standard.
*/
end = start;
#endif
#if !defined(HAVE_SELECT) && !defined(HAVE_POLL_FINE)
-#error "We can't compile without select() or poll() support."
+#error "We cannot compile without select() or poll() support."
#endif
#ifdef MSDOS
#if TIMEDIFF_T_MAX >= ULONG_MAX
if(timeout_ms >= ULONG_MAX)
timeout_ms = ULONG_MAX-1;
- /* don't use ULONG_MAX, because that is equal to INFINITE */
+ /* do not use ULONG_MAX, because that is equal to INFINITE */
#endif
Sleep((ULONG)timeout_ms);
#else
struct timeval *ptimeout;
#ifdef USE_WINSOCK
- /* WinSock select() can't handle zero events. See the comment below. */
+ /* WinSock select() cannot handle zero events. See the comment below. */
if((!fds_read || fds_read->fd_count == 0) &&
(!fds_write || fds_write->fd_count == 0) &&
(!fds_err || fds_err->fd_count == 0)) {
#ifdef USE_WINSOCK
/* WinSock select() must not be called with an fd_set that contains zero
- fd flags, or it will return WSAEINVAL. But, it also can't be called
+ fd flags, or it will return WSAEINVAL. But, it also cannot be called
with no fd_sets at all! From the documentation:
Any two of the parameters, readfds, writefds, or exceptfds, can be
given as null. At least one must be non-null, and any non-null
descriptor set must contain at least one handle to a socket.
- It is unclear why WinSock doesn't just handle this for us instead of
+ It is unclear why WinSock does not just handle this for us instead of
calling this an error. Luckily, with WinSock, we can _also_ ask how
many bits are set on an fd_set. So, let's just check it beforehand.
*/
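
The workaround the comment spells out condenses into a small Windows-only sketch (assuming winsock2.h; not curl's exact code): when every fd_set is empty, skip select() entirely and just sleep for the timeout, since WinSock rejects a call with no socket handles at all:

    #ifdef _WIN32
    #include <winsock2.h>
    #include <windows.h>

    static int select_or_sleep(fd_set *rd, fd_set *wr, fd_set *ex,
                               struct timeval *tv)
    {
      if((!rd || rd->fd_count == 0) &&
         (!wr || wr->fd_count == 0) &&
         (!ex || ex->fd_count == 0)) {
        Sleep(tv ? (DWORD)(tv->tv_sec * 1000 + tv->tv_usec / 1000) : 0);
        return 0;   /* report a timeout; nothing could have become ready */
      }
      return select(0, rd, wr, ex, tv); /* WinSock ignores the first argument */
    }
    #endif
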
/*
* Wait for read or write events on a set of file descriptors. It uses poll()
* when a fine poll() is available, in order to avoid limits with FD_SETSIZE,
- * otherwise select() is used. An error is returned if select() is being used
+ * otherwise select() is used. An error is returned if select() is being used
* and a file descriptor is too large for FD_SETSIZE.
*
* A negative timeout value makes this function wait indefinitely,
}
/*
- * This is a wrapper around poll(). If poll() does not exist, then
- * select() is used instead. An error is returned if select() is
+ * This is a wrapper around poll(). If poll() does not exist, then
+ * select() is used instead. An error is returned if select() is
* being used and a file descriptor is too large for FD_SETSIZE.
* A negative timeout value makes this function wait indefinitely,
* unless no valid file descriptor is given, when this happens the
}
/*
- Note also that WinSock ignores the first argument, so we don't worry
+ Note also that WinSock ignores the first argument, so we do not worry
about the fact that maxfd is computed incorrectly with WinSock (since
curl_socket_t is unsigned in such cases and thus -1 is the largest
value).
case CURL_READFUNC_PAUSE:
if(data->conn->handler->flags & PROTOPT_NONETWORK) {
/* protocols that work without network cannot be paused. This is
- actually only FILE:// just now, and it can't pause since the transfer
- isn't done using the "normal" procedure. */
+ actually only FILE:// just now, and it cannot pause since the transfer
+ is not done using the "normal" procedure. */
failf(data, "Read callback asked for PAUSE when not supported");
return CURLE_READ_ERROR;
}
failf(data, "Could not seek stream");
return CURLE_READ_ERROR;
}
- /* when seekerr == CURL_SEEKFUNC_CANTSEEK (can't seek to offset) */
+ /* when seekerr == CURL_SEEKFUNC_CANTSEEK (cannot seek to offset) */
do {
char scratch[4*1024];
size_t readthisamountnow =
return CURLE_PARTIAL_FILE;
}
}
- /* we've passed, proceed as normal */
+ /* we have passed, proceed as normal */
return CURLE_OK;
}
}
/* no callback set or failure above, makes us fail at once */
- failf(data, "necessary data rewind wasn't possible");
+ failf(data, "necessary data rewind was not possible");
return CURLE_SEND_FAIL_REWIND;
}
return CURLE_OK;
return result;
start = i + 1;
if(!data->set.crlf && (data->state.infilesize != -1)) {
- /* we're here only because FTP is in ASCII mode...
+ /* we are here only because FTP is in ASCII mode...
bump infilesize for the LF we just added */
data->state.infilesize++;
/* comment: this might work for FTP, but in HTTP we could not change
break;
case CURLOPT_FAILONERROR:
/*
- * Don't output the >=400 error code HTML-page, but instead only
+ * Do not output the >=400 error code HTML-page, but instead only
* return error.
*/
data->set.http_fail_on_error = (0 != va_arg(param, long));
*
* If the encoding is set to "" we use an Accept-Encoding header that
* encompasses all the encodings we support.
- * If the encoding is set to NULL we don't send an Accept-Encoding header
+ * If the encoding is set to NULL we do not send an Accept-Encoding header
* and ignore an received Content-Encoding header.
*
*/
case CURLOPT_POST:
/* Does this option serve a purpose anymore? Yes it does, when
- CURLOPT_POSTFIELDS isn't used and the POST data is read off the
+ CURLOPT_POSTFIELDS is not used and the POST data is read off the
callback! */
if(va_arg(param, long)) {
data->set.method = HTTPREQ_POST;
/* general protection against mistakes and abuse */
if(strlen(argptr) > CURL_MAX_INPUT_LENGTH)
return CURLE_BAD_FUNCTION_ARGUMENT;
- /* append the cookie file name to the list of file names, and deal with
+ /* append the cookie filename to the list of filenames, and deal with
them later */
cl = curl_slist_append(data->state.cookielist, argptr);
if(!cl) {
data->state.cookielist = NULL;
if(!data->share || !data->share->cookies) {
- /* throw away all existing cookies if this isn't a shared cookie
+ /* throw away all existing cookies if this is not a shared cookie
container */
Curl_cookie_clearall(data->cookies);
Curl_cookie_cleanup(data->cookies);
case CURLOPT_COOKIEJAR:
/*
- * Set cookie file name to dump all cookies to when we're done.
+ * Set cookie filename to dump all cookies to when we are done.
*/
result = Curl_setstropt(&data->set.str[STRING_COOKIEJAR],
va_arg(param, char *));
auth &= ~CURLAUTH_DIGEST_IE; /* unset ie digest bit */
}
- /* switch off bits we can't support */
+ /* switch off bits we cannot support */
#ifndef USE_NTLM
auth &= ~CURLAUTH_NTLM; /* no NTLM support */
#endif
result = Curl_setstropt(&data->set.str[STRING_CUSTOMREQUEST],
va_arg(param, char *));
- /* we don't set
+ /* we do not set
data->set.method = HTTPREQ_CUSTOM;
here, we continue as if we were using the already set type
and this just changes the actual request keyword */
auth |= CURLAUTH_DIGEST; /* set standard digest bit */
auth &= ~CURLAUTH_DIGEST_IE; /* unset ie digest bit */
}
- /* switch off bits we can't support */
+ /* switch off bits we cannot support */
#ifndef USE_NTLM
auth &= ~CURLAUTH_NTLM; /* no NTLM support */
#endif
* Set proxy server:port to use as proxy.
*
* If the proxy is set to "" (and CURLOPT_SOCKS_PROXY is set to "" or NULL)
- * we explicitly say that we don't want to use a proxy
+ * we explicitly say that we do not want to use a proxy
* (even though there might be environment variables saying so).
*
* Setting it to NULL, means no proxy but allows the environment variables
/*
* Set proxy server:port to use as SOCKS proxy.
*
- * If the proxy is set to "" or NULL we explicitly say that we don't want
+ * If the proxy is set to "" or NULL we explicitly say that we do not want
* to use the socks proxy.
*/
result = Curl_setstropt(&data->set.str[STRING_PRE_PROXY],
case CURLOPT_USERNAME:
/*
- * authentication user name to use in the operation
+ * authentication username to use in the operation
*/
result = Curl_setstropt(&data->set.str[STRING_USERNAME],
va_arg(param, char *));
* Prefix the HOST with dash (-) to _remove_ the entry from the cache.
*
* This API can remove any entry from the DNS cache, but only entries
- * that aren't actually in use right now will be pruned immediately.
+ * that are not actually in use right now will be pruned immediately.
*/
data->set.resolve = va_arg(param, struct curl_slist *);
data->state.resolve = data->set.resolve;
break;
case CURLOPT_PROXYUSERNAME:
/*
- * authentication user name to use in the operation
+ * authentication username to use in the operation
*/
result = Curl_setstropt(&data->set.str[STRING_PROXYUSERNAME],
va_arg(param, char *));
*/
data->set.fdebug = va_arg(param, curl_debug_callback);
/*
- * if the callback provided is NULL, it'll use the default callback
+ * if the callback provided is NULL, it will use the default callback
*/
break;
case CURLOPT_DEBUGDATA:
break;
case CURLOPT_SSLCERT:
/*
- * String that holds file name of the SSL certificate to use
+ * String that holds filename of the SSL certificate to use
*/
result = Curl_setstropt(&data->set.str[STRING_CERT],
va_arg(param, char *));
#ifndef CURL_DISABLE_PROXY
case CURLOPT_PROXY_SSLCERT:
/*
- * String that holds file name of the SSL certificate to use for proxy
+ * String that holds filename of the SSL certificate to use for proxy
*/
result = Curl_setstropt(&data->set.str[STRING_CERT_PROXY],
va_arg(param, char *));
#endif
case CURLOPT_SSLKEY:
/*
- * String that holds file name of the SSL key to use
+ * String that holds filename of the SSL key to use
*/
result = Curl_setstropt(&data->set.str[STRING_KEY],
va_arg(param, char *));
#ifndef CURL_DISABLE_PROXY
case CURLOPT_PROXY_SSLKEY:
/*
- * String that holds file name of the SSL key to use for proxy
+ * String that holds filename of the SSL key to use for proxy
*/
result = Curl_setstropt(&data->set.str[STRING_KEY_PROXY],
va_arg(param, char *));
#endif
case CURLOPT_SSL_VERIFYHOST:
/*
- * Enable verification of the host name in the peer certificate
+ * Enable verification of the hostname in the peer certificate
*/
arg = va_arg(param, long);
/* Obviously people are not reading documentation and too many thought
- this argument took a boolean when it wasn't and misused it.
+ this argument took a boolean when it was not and misused it.
Treat 1 and 2 the same */
data->set.ssl.primary.verifyhost = !!(arg & 3);
#ifndef CURL_DISABLE_DOH
case CURLOPT_DOH_SSL_VERIFYHOST:
/*
- * Enable verification of the host name in the peer certificate for DoH
+ * Enable verification of the hostname in the peer certificate for DoH
*/
arg = va_arg(param, long);
#ifndef CURL_DISABLE_PROXY
case CURLOPT_PROXY_SSL_VERIFYHOST:
/*
- * Enable verification of the host name in the peer certificate for proxy
+ * Enable verification of the hostname in the peer certificate for proxy
*/
arg = va_arg(param, long);
case CURLOPT_PINNEDPUBLICKEY:
/*
* Set pinned public key for SSL connection.
- * Specify file name of the public key in DER format.
+ * Specify filename of the public key in DER format.
*/
#ifdef USE_SSL
if(Curl_ssl_supports(data, SSLSUPP_PINNEDPUBKEY))
case CURLOPT_PROXY_PINNEDPUBLICKEY:
/*
* Set pinned public key for SSL connection.
- * Specify file name of the public key in DER format.
+ * Specify filename of the public key in DER format.
*/
#ifdef USE_SSL
if(Curl_ssl_supports(data, SSLSUPP_PINNEDPUBKEY))
#endif
case CURLOPT_CAINFO:
/*
- * Set CA info for SSL connection. Specify file name of the CA certificate
+ * Set CA info for SSL connection. Specify filename of the CA certificate
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_CAFILE],
va_arg(param, char *));
#ifndef CURL_DISABLE_PROXY
case CURLOPT_PROXY_CAINFO:
/*
- * Set CA info SSL connection for proxy. Specify file name of the
+ * Set CA info SSL connection for proxy. Specify filename of the
* CA certificate
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_CAFILE_PROXY],
#endif
case CURLOPT_CRLFILE:
/*
- * Set CRL file info for SSL connection. Specify file name of the CRL
+ * Set CRL file info for SSL connection. Specify filename of the CRL
* to check certificates revocation
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_CRLFILE],
#ifndef CURL_DISABLE_PROXY
case CURLOPT_PROXY_CRLFILE:
/*
- * Set CRL file info for SSL connection for proxy. Specify file name of the
+ * Set CRL file info for SSL connection for proxy. Specify filename of the
* CRL to check certificates revocation
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_CRLFILE_PROXY],
case CURLOPT_BUFFERSIZE:
/*
* The application kindly asks for a differently sized receive buffer.
- * If it seems reasonable, we'll use it.
+ * If it seems reasonable, we will use it.
*/
arg = va_arg(param, long);
case CURLOPT_SSH_KNOWNHOSTS:
/*
- * Store the file name to read known hosts from.
+ * Store the filename to read known hosts from.
*/
result = Curl_setstropt(&data->set.str[STRING_SSH_KNOWNHOSTS],
va_arg(param, char *));
data->set.http_te_skip = (0 == va_arg(param, long));
break;
#else
- return CURLE_NOT_BUILT_IN; /* hyper doesn't support */
+ return CURLE_NOT_BUILT_IN; /* hyper does not support */
#endif
case CURLOPT_HTTP_CONTENT_DECODING:
}
case CURLOPT_DEFAULT_PROTOCOL:
- /* Set the protocol to use when the URL doesn't include any protocol */
+ /* Set the protocol to use when the URL does not include any protocol */
result = Curl_setstropt(&data->set.str[STRING_DEFAULT_PROTOCOL],
va_arg(param, char *));
break;
result = Curl_setstropt(&data->set.str[STRING_HSTS], argptr);
if(result)
return result;
- /* this needs to build a list of file names to read from, so that it can
+ /* this needs to build a list of filenames to read from, so that it can
read them later, as we might get a shared HSTS handle to load them
into */
h = curl_slist_append(data->state.hstslist, argptr);
* Include header files for windows builds before redefining anything.
* Use this preprocessor block only to include or exclude windows.h,
* winsock2.h or ws2tcpip.h. Any other windows thing belongs
- * to any other further and independent block. Under Cygwin things work
+ * to any other further and independent block. Under Cygwin things work
* just as under linux (e.g. <sys/socket.h>) and the winsock headers should
* never be included when __CYGWIN__ is defined.
*/
# error "_UNICODE is defined but UNICODE is not defined"
# endif
/*
- * Don't include unneeded stuff in Windows headers to avoid compiler
+ * Do not include unneeded stuff in Windows headers to avoid compiler
* warnings and macro clashes.
* Make sure to define this macro before including any Windows headers.
*/
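
For reference, the conventional macro for this is WIN32_LEAN_AND_MEAN; a minimal illustration (not a quote of curl's setup headers, which handle this centrally) of the ordering the comment asks for:

    #ifdef _WIN32
    #define WIN32_LEAN_AND_MEAN   /* trim rarely used APIs out of windows.h */
    #include <windows.h>
    #endif
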
md->buf[md->curlen++] = (unsigned char)0x80;
/* If the length is currently above 56 bytes we append zeros
- * then compress. Then we can fall back to padding zeros and length
+ * then compress. Then we can fall back to padding zeros and length
* encoding like normal.
*/
if(md->curlen > 56) {
return CURLSHE_INVALID;
if(share->dirty)
- /* don't allow setting options while one or more handles are already
+ /* do not allow setting options while one or more handles are already
using this share */
return CURLSHE_IN_USE;
if(share->lockfunc) /* only call this if set! */
share->lockfunc(data, type, accesstype, share->clientdata);
}
- /* else if we don't share this, pretend successful lock */
+ /* else if we do not share this, pretend successful lock */
return CURLSHE_OK;
}
#define CURL_GOOD_SHARE 0x7e117a1e
#define GOOD_SHARE_HANDLE(x) ((x) && (x)->magic == CURL_GOOD_SHARE)
-/* this struct is libcurl-private, don't export details */
+/* this struct is libcurl-private, do not export details */
struct Curl_share {
unsigned int magic; /* CURL_GOOD_SHARE */
unsigned int specifier;
break;
case SMB_CLOSE:
- /* We don't care if the close failed, proceed to tree disconnect anyway */
+ /* We do not care if the close failed, proceed to tree disconnect anyway */
next_state = SMB_TREE_DISCONNECT;
break;
if(smtp->rcpt) {
/* We notify the server we are sending UTF-8 data if a) it supports the
SMTPUTF8 extension and b) The mailbox contains UTF-8 characters, in
- either the local address or host name parts. This is regardless of
- whether the host name is encoded using IDN ACE */
+ either the local address or hostname parts. This is regardless of
+ whether the hostname is encoded using IDN ACE */
bool utf8 = FALSE;
if((!smtp->custom) || (!smtp->custom[0])) {
char *address = NULL;
struct hostname host = { NULL, NULL, NULL, NULL };
- /* Parse the mailbox to verify into the local address and host name
- parts, converting the host name to an IDN A-label if necessary */
+ /* Parse the mailbox to verify into the local address and hostname
+ parts, converting the hostname to an IDN A-label if necessary */
result = smtp_parse_address(smtp->rcpt->data,
&address, &host);
if(result)
((host.encalloc) || (!Curl_is_ASCII_name(address)) ||
(!Curl_is_ASCII_name(host.name)));
- /* Send the VRFY command (Note: The host name part may be absent when the
+ /* Send the VRFY command (Note: The hostname part may be absent when the
host is a local system) */
result = Curl_pp_sendf(data, &conn->proto.smtpc.pp, "VRFY %s%s%s%s",
address,
/* We notify the server we are sending UTF-8 data if a) it supports the
SMTPUTF8 extension and b) The mailbox contains UTF-8 characters, in
- either the local address or host name parts. This is regardless of
- whether the host name is encoded using IDN ACE */
+ either the local address or hostname parts. This is regardless of
+ whether the hostname is encoded using IDN ACE */
bool utf8 = FALSE;
/* Calculate the FROM parameter */
char *address = NULL;
struct hostname host = { NULL, NULL, NULL, NULL };
- /* Parse the FROM mailbox into the local address and host name parts,
- converting the host name to an IDN A-label if necessary */
+ /* Parse the FROM mailbox into the local address and hostname parts,
+ converting the hostname to an IDN A-label if necessary */
result = smtp_parse_address(data->set.str[STRING_MAIL_FROM],
&address, &host);
if(result)
Curl_free_idnconverted_hostname(&host);
}
else
- /* An invalid mailbox was provided but we'll simply let the server worry
- about that and reply with a 501 error */
+ /* An invalid mailbox was provided but we will simply let the server
+ worry about that and reply with a 501 error */
from = aprintf("<%s>", address);
free(address);
char *address = NULL;
struct hostname host = { NULL, NULL, NULL, NULL };
- /* Parse the AUTH mailbox into the local address and host name parts,
- converting the host name to an IDN A-label if necessary */
+ /* Parse the AUTH mailbox into the local address and hostname parts,
+ converting the hostname to an IDN A-label if necessary */
result = smtp_parse_address(data->set.str[STRING_MAIL_AUTH],
&address, &host);
if(result)
Curl_free_idnconverted_hostname(&host);
}
else
- /* An invalid mailbox was provided but we'll simply let the server
+ /* An invalid mailbox was provided but we will simply let the server
worry about it */
auth = aprintf("<%s>", address);
free(address);
}
}
- /* If the mailboxes in the FROM and AUTH parameters don't include a UTF-8
+ /* If the mailboxes in the FROM and AUTH parameters do not include a UTF-8
based address then quickly scan through the recipient list and check if
any there do, as we need to correctly identify our support for SMTPUTF8
in the envelope, as per RFC-6531 sect. 3.4 */
struct curl_slist *rcpt = smtp->rcpt;
while(rcpt && !utf8) {
- /* Does the host name contain non-ASCII characters? */
+ /* Does the hostname contain non-ASCII characters? */
if(!Curl_is_ASCII_name(rcpt->data))
utf8 = TRUE;
char *address = NULL;
struct hostname host = { NULL, NULL, NULL, NULL };
- /* Parse the recipient mailbox into the local address and host name parts,
- converting the host name to an IDN A-label if necessary */
+ /* Parse the recipient mailbox into the local address and hostname parts,
+ converting the hostname to an IDN A-label if necessary */
result = smtp_parse_address(smtp->rcpt->data,
&address, &host);
if(result)
result = Curl_pp_sendf(data, &conn->proto.smtpc.pp, "RCPT TO:<%s@%s>",
address, host.name);
else
- /* An invalid mailbox was provided but we'll simply let the server worry
+ /* An invalid mailbox was provided but we will simply let the server worry
about that and reply with a 501 error */
result = Curl_pp_sendf(data, &conn->proto.smtpc.pp, "RCPT TO:<%s>",
address);
if(smtpcode != 1) {
if(data->set.use_ssl && !Curl_conn_is_ssl(conn, FIRSTSOCKET)) {
- /* We don't have a SSL/TLS connection yet, but SSL is requested */
+ /* We do not have an SSL/TLS connection yet, but SSL is requested */
if(smtpc->tls_supported)
/* Switch to TLS connection now */
result = smtp_perform_starttls(data, conn);
is_smtp_err = (smtpcode/100 != 2) ? TRUE : FALSE;
- /* If there's multiple RCPT TO to be issued, it's possible to ignore errors
+ /* If there are multiple RCPT TO to be issued, it is possible to ignore errors
and proceed with only the valid addresses. */
is_smtp_blocking_err =
(is_smtp_err && !data->set.mail_rcpt_allowfails) ? TRUE : FALSE;
/* Send the next RCPT TO command */
result = smtp_perform_rcpt_to(data);
else {
- /* We weren't able to issue a successful RCPT TO command while going
+ /* We were not able to issue a successful RCPT TO command while going
over recipients (potentially multiple). Sending back last error. */
if(!smtp->rcpt_had_ok) {
failf(data, "RCPT failed: %d (last error)", smtp->rcpt_last_error);
/* Store the first recipient (or NULL if not specified) */
smtp->rcpt = data->set.mail_rcpt;
- /* Track of whether we've successfully sent at least one RCPT TO command */
+ /* Track whether we have successfully sent at least one RCPT TO command */
smtp->rcpt_had_ok = FALSE;
- /* Track of the last error we've received by sending RCPT TO command */
+ /* Track the last error we have received by sending RCPT TO command */
smtp->rcpt_last_error = 0;
/* Initial data character is the first character in line: it is implicitly
* smtp_parse_address()
*
* Parse the fully qualified mailbox address into a local address part and the
- * host name, converting the host name to an IDN A-label, as per RFC-5890, if
+ * hostname, converting the hostname to an IDN A-label, as per RFC-5890, if
* necessary.
*
* Parameters:
* address [in/out] - A new allocated buffer which holds the local
* address part of the mailbox. This buffer must be
* free'ed by the caller.
- * host [in/out] - The host name structure that holds the original,
- * and optionally encoded, host name.
+ * host [in/out] - The hostname structure that holds the original,
+ * and optionally encoded, hostname.
* Curl_free_idnconverted_hostname() must be called
* once the caller has finished with the structure.
*
*
* Notes:
*
- * Should a UTF-8 host name require conversion to IDN ACE and we cannot honor
+ * Should a UTF-8 hostname require conversion to IDN ACE and we cannot honor
* that conversion then we shall return success. This allow the caller to send
* the data to the server as a U-label (as per RFC-6531 sect. 3.2).
*
* If an mailbox '@' separator cannot be located then the mailbox is considered
* to be either a local mailbox or an invalid mailbox (depending on what the
* calling function deems it to be) then the input will simply be returned in
- * the address part with the host name being NULL.
+ * the address part with the hostname being NULL.
*/
static CURLcode smtp_parse_address(const char *fqma, char **address,
struct hostname *host)
size_t length;
/* Duplicate the fully qualified email address so we can manipulate it,
- ensuring it doesn't contain the delimiters if specified */
+ ensuring it does not contain the delimiters if specified */
char *dup = strdup(fqma[0] == '<' ? fqma + 1 : fqma);
if(!dup)
return CURLE_OUT_OF_MEMORY;
dup[length - 1] = '\0';
}
- /* Extract the host name from the address (if we can) */
+ /* Extract the hostname from the address (if we can) */
host->name = strpbrk(dup, "@");
if(host->name) {
*host->name = '\0';
host->name = host->name + 1;
- /* Attempt to convert the host name to IDN ACE */
+ /* Attempt to convert the hostname to IDN ACE */
(void) Curl_idnconvert_hostname(host);
/* If Curl_idnconvert_hostname() fails then we shall attempt to continue
- and send the host name using UTF-8 rather than as 7-bit ACE (which is
+ and send the hostname using UTF-8 rather than as 7-bit ACE (which is
our preference) */
}
socks[0] = socks[1] = CURL_SOCKET_BAD;
#if defined(_WIN32) || defined(__CYGWIN__)
- /* don't set SO_REUSEADDR on Windows */
+ /* do not set SO_REUSEADDR on Windows */
(void)reuse;
#ifdef SO_EXCLUSIVEADDRUSE
{
if(connect(socks[0], &a.addr, sizeof(a.inaddr)) == -1)
goto error;
- /* use non-blocking accept to make sure we don't block forever */
+ /* use non-blocking accept to make sure we do not block forever */
if(curlx_nonblock(listener, TRUE) < 0)
goto error;
pfd[0].fd = listener;
nread = sread(socks[1], p, s);
if(nread == -1) {
int sockerr = SOCKERRNO;
- /* Don't block forever */
+ /* Do not block forever */
if(Curl_timediff(Curl_now(), start) > (60 * 1000))
goto error;
if(
(void)data;
if(oldstate == state)
- /* don't bother when the new state is the same as the old state */
+ /* do not bother when the new state is the same as the old state */
return;
sx->state = state;
goto CONNECT_RESOLVED;
}
- /* socks4a doesn't resolve anything locally */
+ /* socks4a does not resolve anything locally */
sxstate(sx, data, CONNECT_REQ_INIT);
goto CONNECT_REQ_INIT;
{
struct Curl_addrinfo *hp = NULL;
/*
- * We cannot use 'hostent' as a struct that Curl_resolv() returns. It
+ * We cannot use 'hostent' as a struct that Curl_resolv() returns. It
* returns a Curl_addrinfo pointer that may not always look the same.
*/
if(dns) {
/* there is no real size limit to this field in the protocol, but
SOCKS5 limits the proxy user field to 255 bytes and it seems likely
that a longer field is either a mistake or malicious input */
- failf(data, "Too long SOCKS proxy user name");
+ failf(data, "Too long SOCKS proxy username");
return CURLPX_LONG_USER;
}
/* copy the proxy name WITH trailing zero */
(packetsize + hostnamelen < sizeof(sx->buffer)))
strcpy((char *)socksreq + packetsize, sx->hostname);
else {
- failf(data, "SOCKS4: too long host name");
+ failf(data, "SOCKS4: too long hostname");
return CURLPX_LONG_HOSTNAME;
}
packetsize += hostnamelen;
break;
case 91:
failf(data,
- "Can't complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
+ "cannot complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
", request rejected or failed.",
socksreq[4], socksreq[5], socksreq[6], socksreq[7],
(((unsigned char)socksreq[2] << 8) | (unsigned char)socksreq[3]),
return CURLPX_REQUEST_FAILED;
case 92:
failf(data,
- "Can't complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
+ "cannot complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
", request rejected because SOCKS server cannot connect to "
"identd on the client.",
socksreq[4], socksreq[5], socksreq[6], socksreq[7],
return CURLPX_IDENTD;
case 93:
failf(data,
- "Can't complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
+ "cannot complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
", request rejected because the client program and identd "
"report different user-ids.",
socksreq[4], socksreq[5], socksreq[6], socksreq[7],
return CURLPX_IDENTD_DIFFER;
default:
failf(data,
- "Can't complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
+ "cannot complete SOCKS4 connection to %d.%d.%d.%d:%d. (%d)"
", Unknown.",
socksreq[4], socksreq[5], socksreq[6], socksreq[7],
(((unsigned char)socksreq[2] << 8) | (unsigned char)socksreq[3]),
struct Curl_easy *data)
{
/*
- According to the RFC1928, section "6. Replies". This is what a SOCK5
+ According to the RFC1928, section "6. Replies". This is what a SOCKS5
replies:
+----+-----+-------+------+----------+----------+
CONNECT_AUTH_INIT:
case CONNECT_AUTH_INIT: {
- /* Needs user name and password */
+ /* Needs username and password */
size_t proxy_user_len, proxy_password_len;
if(sx->proxy_user && sx->proxy_password) {
proxy_user_len = strlen(sx->proxy_user);
if(sx->proxy_user && proxy_user_len) {
/* the length must fit in a single byte */
if(proxy_user_len > 255) {
- failf(data, "Excessive user name length for proxy auth");
+ failf(data, "Excessive username length for proxy auth");
return CURLPX_LONG_USER;
}
memcpy(socksreq + len, sx->proxy_user, proxy_user_len);
else if(socksreq[1]) { /* Anything besides 0 is an error */
CURLproxycode rc = CURLPX_REPLY_UNASSIGNED;
int code = socksreq[1];
- failf(data, "Can't complete SOCKS5 connection to %s. (%d)",
+ failf(data, "cannot complete SOCKS5 connection to %s. (%d)",
sx->hostname, (unsigned char)socksreq[1]);
if(code < 9) {
/* RFC 1928 section 6 lists: */
}
/* After a TCP connection to the proxy has been verified, this function does
- the next magic steps. If 'done' isn't set TRUE, it is not done yet and
+ the next magic steps. If 'done' is not set TRUE, it is not done yet and
must be called again.
Note: this function's sub-functions call failf()
(void)curlx_nonblock(sock, FALSE);
- /* As long as we need to keep sending some context info, and there's no */
+ /* As long as we need to keep sending some context info, and there are no */
/* errors, keep sending it... */
for(;;) {
gss_major_status = Curl_gss_init_sec_context(data,
gss_minor_status, "gss_inquire_context")) {
gss_delete_sec_context(&gss_status, &gss_context, NULL);
gss_release_name(&gss_status, &gss_client_name);
- failf(data, "Failed to determine user name.");
+ failf(data, "Failed to determine username.");
return CURLE_COULDNT_CONNECT;
}
gss_major_status = gss_display_name(&gss_minor_status, gss_client_name,
gss_delete_sec_context(&gss_status, &gss_context, NULL);
gss_release_name(&gss_status, &gss_client_name);
gss_release_buffer(&gss_status, &gss_send_token);
- failf(data, "Failed to determine user name.");
+ failf(data, "Failed to determine username.");
return CURLE_COULDNT_CONNECT;
}
user = malloc(gss_send_token.length + 1);
*
* The token is produced by encapsulating an octet containing the
* required protection level using gss_seal()/gss_wrap() with conf_req
- * set to FALSE. The token is verified using gss_unseal()/
+ * set to FALSE. The token is verified using gss_unseal()/
* gss_unwrap().
*
*/
(void)curlx_nonblock(sock, FALSE);
- /* As long as we need to keep sending some context info, and there's no */
+ /* As long as we need to keep sending some context info, and there are no */
/* errors, keep sending it... */
for(;;) {
TCHAR *sname;
if(check_sspi_err(data, status, "QueryCredentialAttributes")) {
s_pSecFn->DeleteSecurityContext(&sspi_context);
s_pSecFn->FreeContextBuffer(names.sUserName);
- failf(data, "Failed to determine user name.");
+ failf(data, "Failed to determine username.");
return CURLE_COULDNT_CONNECT;
}
else {
*
* The token is produced by encapsulating an octet containing the
* required protection level using gss_seal()/gss_wrap() with conf_req
- * set to FALSE. The token is verified using gss_unseal()/
+ * set to FALSE. The token is verified using gss_unseal()/
* gss_unwrap().
*
*/
return t;
}
-/* Insert key i into the tree t. Return a pointer to the resulting tree or
+/* Insert key i into the tree t. Return a pointer to the resulting tree or
* NULL if something went wrong.
*
* @unittest: 1309
}
/* Finds and deletes the best-fit node from the tree. Return a pointer to the
- resulting tree. best-fit means the smallest node if it is not larger than
+ resulting tree. best-fit means the smallest node if it is not larger than
the key */
struct Curl_tree *Curl_splaygetbest(struct curltime i,
struct Curl_tree *t,
}
-/* Deletes the very node we point out from the tree if it's there. Stores a
+/* Deletes the very node we point out from the tree if it is there. Stores a
* pointer to the new resulting tree in 'newroot'.
*
* Returns zero on success and non-zero on errors!
* When returning error, it does not touch the 'newroot' pointer.
*
- * NOTE: when the last node of the tree is removed, there's no tree left so
+ * NOTE: when the last node of the tree is removed, there is no tree left so
* 'newroot' will be made to point to NULL.
*
* @unittest: 1309
/* First make sure that we got the same root node as the one we want
to remove, as otherwise we might be trying to remove a node that
- isn't actually in the tree.
+ is not actually in the tree.
We cannot just compare the keys here as a double remove in quick
succession of a node with key != KEY_NOTUSED && same != NULL
if(t != removenode)
return 2;
- /* Check if there is a list with identical sizes, as then we're trying to
+ /* Check if there is a list with identical sizes, as then we are trying to
remove the root node of a list of nodes with identical keys. */
x = t->samen;
if(x != t) {
struct Curl_tree *samen; /* points to the next node with identical key */
struct Curl_tree *samep; /* points to the prev node with identical key */
struct curltime key; /* this node's "sort" key */
- void *payload; /* data the splay code doesn't care about */
+ void *payload; /* data the splay code does not care about */
};
struct Curl_tree *Curl_splay(struct curltime i,
{
while(*first && *second) {
if(Curl_raw_toupper(*first) != Curl_raw_toupper(*second))
- /* get out of the loop as soon as they don't match */
+ /* get out of the loop as soon as they do not match */
return 0;
first++;
second++;
}
- /* If we're here either the strings are the same or the length is different.
+ /* If we are here either the strings are the same or the length is different.
We can just test if the "current" character is non-zero for one and zero
for the other. Note that the characters may not be exactly the same even
if they match, we only want to compare zero-ness. */
/* if both pointers are NULL then treat them as equal if max is non-zero */
return (NULL == first && NULL == second && max);
}
-/* Copy an upper case version of the string from src to dest. The
- * strings may overlap. No more than n characters of the string are copied
+/* Copy an upper case version of the string from src to dest. The
+ * strings may overlap. No more than n characters of the string are copied
* (including any NUL) and the destination string will NOT be
* NUL-terminated if that limit is reached.
*/
} while(*src++ && --n);
}
-/* Copy a lower case version of the string from src to dest. The
- * strings may overlap. No more than n characters of the string are copied
+/* Copy a lower case version of the string from src to dest. The
+ * strings may overlap. No more than n characters of the string are copied
* (including any NUL) and the destination string will NOT be
* NUL-terminated if that limit is reached.
*/
" this libcurl due to a build-time decision.";
case CURLE_COULDNT_RESOLVE_PROXY:
- return "Couldn't resolve proxy name";
+ return "Could not resolve proxy name";
case CURLE_COULDNT_RESOLVE_HOST:
- return "Couldn't resolve host name";
+ return "Could not resolve hostname";
case CURLE_COULDNT_CONNECT:
- return "Couldn't connect to server";
+ return "Could not connect to server";
case CURLE_WEIRD_SERVER_REPLY:
return "Weird server reply";
return "FTP: unknown 227 response format";
case CURLE_FTP_CANT_GET_HOST:
- return "FTP: can't figure out the host in the PASV response";
+ return "FTP: cannot figure out the host in the PASV response";
case CURLE_HTTP2:
return "Error in the HTTP2 framing layer";
case CURLE_FTP_COULDNT_SET_TYPE:
- return "FTP: couldn't set file type";
+ return "FTP: could not set file type";
case CURLE_PARTIAL_FILE:
return "Transferred a partial file";
case CURLE_FTP_COULDNT_RETR_FILE:
- return "FTP: couldn't retrieve (RETR failed) the specified file";
+ return "FTP: could not retrieve (RETR failed) the specified file";
case CURLE_QUOTE_ERROR:
return "Quote command returned error";
return "SSL connect error";
case CURLE_BAD_DOWNLOAD_RESUME:
- return "Couldn't resume download";
+ return "Could not resume download";
case CURLE_FILE_COULDNT_READ_FILE:
- return "Couldn't read a file:// file";
+ return "Could not read a file:// file";
case CURLE_LDAP_CANNOT_BIND:
return "LDAP: cannot bind";
return "Problem with the local SSL certificate";
case CURLE_SSL_CIPHER:
- return "Couldn't use specified SSL cipher";
+ return "Could not use specified SSL cipher";
case CURLE_PEER_FAILED_VERIFICATION:
return "SSL peer certificate or SSH remote key was not OK";
/*
* By using a switch, gcc -Wall will complain about enum values
* which do not appear, helping keep this function up-to-date.
- * By using gcc -Wall -Werror, you can't forget.
+ * By using gcc -Wall -Werror, you cannot forget.
*
- * A table would not have the same benefit. Most compilers will
- * generate code very similar to a table in any case, so there
- * is little performance gain from a table. And something is broken
- * for the user's application, anyways, so does it matter how fast
- * it _doesn't_ work?
+ * A table would not have the same benefit. Most compilers will generate
+ * code very similar to a table in any case, so there is little performance
+ * gain from a table. Something is broken for the user's application,
+ * anyway, so does it matter how fast it _does not_ work?
*
- * The line number for the error will be near this comment, which
- * is why it is here, and not at the start of the switch.
+ * The line number for the error will be near this comment, which is why it
+ * is here, and not at the start of the switch.
*/
return "Unknown error";
#else
* The 'err' argument passed in to this function MUST be a true errno number
* as reported on this system. We do no range checking on the number before
* we pass it to the "number-to-message" conversion function and there might
- * be systems that don't do proper range checking in there themselves.
+ * be systems that do not do proper range checking in there themselves.
*
- * We don't do range checking (on systems other than Windows) since there is
+ * We do not do range checking (on systems other than Windows) since there is
* no good reliable and portable way to do it.
*
* On Windows different types of error codes overlap. This function has an
return NULL;
}
-#endif /* this was only compiled if strtok_r wasn't present */
+#endif /* this was only compiled if strtok_r was not present */
* NOTE:
*
* In the ISO C standard (IEEE Std 1003.1), there is a strtoimax() function we
- * could use in case strtoll() doesn't exist... See
+ * could use in case strtoll() does not exist... See
* https://www.opengroup.org/onlinepubs/009695399/functions/strtoimax.html
*/
static int get_char(char c, int base);
/**
- * Custom version of the strtooff function. This extracts a curl_off_t
+ * Custom version of the strtooff function. This extracts a curl_off_t
* value from the given input string and returns it.
*/
static curl_off_t strtooff(const char *nptr, char **endptr, int base)
}
}
- /* Matching strtol, if the base is 0 and it doesn't look like
- * the number is octal or hex, we assume it's base 10.
+ /* Matching strtol, if the base is 0 and it does not look like
+ * the number is octal or hex, we assume it is base 10.
*/
if(base == 0) {
base = 10;
* @param c the character to interpret according to base
* @param base the base in which to interpret c
*
- * @return the value of c in base, or -1 if c isn't in range
+ * @return the value of c in base, or -1 if c is not in range
*/
static int get_char(char c, int base)
{
return value;
}
-#endif /* Only present if we need strtoll, but don't have it. */
+#endif /* Only present if we need strtoll, but do not have it. */
/*
- * Parse a *positive* up to 64 bit number written in ascii.
+ * Parse a *positive* up to 64-bit number written in ascii.
*/
CURLofft curlx_strtoofft(const char *str, char **endp, int base,
curl_off_t *num)
str++;
if(('-' == *str) || (ISSPACE(*str))) {
if(endp)
- *endp = (char *)str; /* didn't actually move */
+ *endp = (char *)str; /* did not actually move */
return CURL_OFFT_INVAL; /* nothing parsed */
}
number = strtooff(str, &end, base);
res = WSAStartup(wVersionRequested, &wsaData);
if(res)
- /* Tell the user that we couldn't find a usable */
+ /* Tell the user that we could not find a usable */
/* winsock.dll. */
return CURLE_FAILED_INIT;
if(LOBYTE(wsaData.wVersion) != LOBYTE(wVersionRequested) ||
HIBYTE(wsaData.wVersion) != HIBYTE(wVersionRequested) ) {
- /* Tell the user that we couldn't find a usable */
+ /* Tell the user that we could not find a usable */
/* winsock.dll. */
WSACleanup();
HMODULE hModule = NULL;
LOADLIBRARYEX_FN pLoadLibraryEx = NULL;
- /* Get a handle to kernel32 so we can access it's functions at runtime */
+ /* Get a handle to kernel32 so we can access its functions at runtime */
HMODULE hKernel32 = GetModuleHandle(TEXT("kernel32"));
if(!hKernel32)
return NULL;
CURLX_FUNCTION_CAST(LOADLIBRARYEX_FN,
(GetProcAddress(hKernel32, LOADLIBARYEX)));
- /* Detect if there's already a path in the filename and load the library if
+ /* Detect if there is already a path in the filename and load the library if
there is. Note: Both back slashes and forward slashes have been supported
since the earlier days of DOS at an API level although they are not
supported by command prompt */
}
return hModule;
#else
- /* the Universal Windows Platform (UWP) can't do this */
+ /* the Universal Windows Platform (UWP) cannot do this */
(void)filename;
return NULL;
#endif
struct TELNET *tn = data->req.p.telnet;
CURLcode result = CURLE_OK;
- /* Add the user name as an environment variable if it
+ /* Add the username as an environment variable if it
was given on the command line */
if(data->state.aptr.user) {
char buffer[256];
if(str_is_nonascii(data->conn->user)) {
- DEBUGF(infof(data, "set a non ASCII user name in telnet"));
+ DEBUGF(infof(data, "set a non ASCII username in telnet"));
return CURLE_BAD_FUNCTION_ARGUMENT;
}
msnprintf(buffer, sizeof(buffer), "USER,%s", data->conn->user);
if(c != CURL_SE) {
if(c != CURL_IAC) {
/*
- * This is an error. We only expect to get "IAC IAC" or "IAC SE".
- * Several things may have happened. An IAC was not doubled, the
+ * This is an error. We only expect to get "IAC IAC" or "IAC SE".
+ * Several things may have happened. An IAC was not doubled, the
* IAC SE was left off, or another option got inserted into the
- * suboption are all possibilities. If we assume that the IAC was
+ * suboption are all possibilities. If we assume that the IAC was
* not doubled, and really the IAC SE was left off, we could get
- * into an infinite loop here. So, instead, we terminate the
+ * into an infinite loop here. So, instead, we terminate the
* suboption, and process the partial suboption if we can.
*/
CURL_SB_ACCUM(tn, CURL_IAC);
else use the old WaitForMultipleObjects() way */
if(GetFileType(stdin_handle) == FILE_TYPE_PIPE ||
data->set.is_fread_set) {
- /* Don't wait for stdin_handle, just wait for event_handle */
+ /* Do not wait for stdin_handle, just wait for event_handle */
obj_count = 1;
/* Check stdin_handle per 100 milliseconds */
wait_timeout = 100;
if(events.lNetworkEvents & FD_READ) {
/* read data from network */
result = Curl_xfer_recv(data, buffer, sizeof(buffer), &nread);
- /* read would've blocked. Loop again */
+ /* read would have blocked. Loop again */
if(result == CURLE_AGAIN)
break;
/* returned not-zero, this an error */
}
/* Negotiate if the peer has started negotiating,
- otherwise don't. We don't want to speak telnet with
+ otherwise do not. We do not want to speak telnet with
non-telnet servers, like POP or SMTP. */
if(tn->please_negotiate && !tn->already_negotiated) {
negotiate(data);
if(pfd[0].revents & POLLIN) {
/* read data from network */
result = Curl_xfer_recv(data, buffer, sizeof(buffer), &nread);
- /* read would've blocked. Loop again */
+ /* read would have blocked. Loop again */
if(result == CURLE_AGAIN)
break;
/* returned not-zero, this an error */
}
/* Negotiate if the peer has started negotiating,
- otherwise don't. We don't want to speak telnet with
+ otherwise do not. We do not want to speak telnet with
non-telnet servers, like POP or SMTP. */
if(tn->please_negotiate && !tn->already_negotiated) {
negotiate(data);
const char *tmp = ptr;
struct Curl_easy *data = state->data;
- /* if OACK doesn't contain blksize option, the default (512) must be used */
+ /* if OACK does not contain blksize option, the default (512) must be used */
state->blksize = TFTP_BLKSIZE_DEFAULT;
while(tmp < ptr + len) {
return CURLE_TFTP_ILLEGAL;
}
else if(blksize > state->requested_blksize) {
- /* could realloc pkt buffers here, but the spec doesn't call out
+ /* could realloc pkt buffers here, but the spec does not call out
* support for the server requesting a bigger blksize than the client
* requests */
failf(data, "%s (%ld)",
setpacketevent(&state->spacket, TFTP_EVENT_RRQ);
}
/* As RFC3617 describes the separator slash is not actually part of the
- file name so we skip the always-present first letter of the path
+ filename so we skip the always-present first letter of the path
string. */
result = Curl_urldecode(&state->data->state.up.path[1], 0,
&filename, NULL, REJECT_ZERO);
return result;
if(strlen(filename) > (state->blksize - strlen(mode) - 4)) {
- failf(data, "TFTP file name too long");
+ failf(data, "TFTP filename too long");
free(filename);
- return CURLE_TFTP_ILLEGAL; /* too long file name field */
+ return CURLE_TFTP_ILLEGAL; /* too long filename field */
}
msnprintf((char *)state->spacket.data + 2,
/* Is this the block we expect? */
rblock = getrpacketblock(&state->rpacket);
if(NEXT_BLOCKNUM(state->block) == rblock) {
- /* This is the expected block. Reset counters and ACK it. */
+ /* This is the expected block. Reset counters and ACK it. */
state->retries = 0;
}
else if(state->block == rblock) {
return CURLE_SEND_ERROR;
}
- /* we're ready to RX data */
+ /* we are ready to RX data */
state->state = TFTP_STATE_RX;
state->rx_time = time(NULL);
break;
/* Increment the retry count and fail if over the limit */
state->retries++;
infof(data,
- "Timeout waiting for block %d ACK. Retries = %d",
+ "Timeout waiting for block %d ACK. Retries = %d",
NEXT_BLOCKNUM(state->block), state->retries);
if(state->retries > state->retry_max) {
state->error = TFTP_ERR_TIMEOUT;
4, SEND_4TH_ARG,
(struct sockaddr *)&state->remote_addr,
state->remote_addrlen);
- /* don't bother with the return code, but if the socket is still up we
- * should be a good TFTP client and let the server know we're done */
+ /* do not bother with the return code, but if the socket is still up we
+ * should be a good TFTP client and let the server know we are done */
state->state = TFTP_STATE_FIN;
break;
int rblock = getrpacketblock(&state->rpacket);
if(rblock != state->block &&
- /* There's a bug in tftpd-hpa that causes it to send us an ack for
- * 65535 when the block number wraps to 0. So when we're expecting
+ /* There is a bug in tftpd-hpa that causes it to send us an ack for
+ * 65535 when the block number wraps to 0. So when we are expecting
* 0, also accept 65535. See
* https://www.syslinux.org/archives/2010-September/015612.html
* */
!(state->block == 0 && rblock == 65535)) {
- /* This isn't the expected block. Log it and up the retry counter */
+ /* This is not the expected block. Log it and up the retry counter */
infof(data, "Received ACK for block %d, expecting %d",
rblock, state->block);
state->retries++;
return result;
}
- /* This is the expected packet. Reset the counters and send the next
+ /* This is the expected packet. Reset the counters and send the next
block */
state->rx_time = time(NULL);
state->block++;
state->retries++;
infof(data, "Timeout waiting for block %d ACK. "
" Retries = %d", NEXT_BLOCKNUM(state->block), state->retries);
- /* Decide if we've had enough */
+ /* Decide if we have had enough */
if(state->retries > state->retry_max) {
state->error = TFTP_ERR_TIMEOUT;
state->state = TFTP_STATE_FIN;
(void)sendto(state->sockfd, (void *)state->spacket.data, 4, SEND_4TH_ARG,
(struct sockaddr *)&state->remote_addr,
state->remote_addrlen);
- /* don't bother with the return code, but if the socket is still up we
- * should be a good TFTP client and let the server know we're done */
+ /* do not bother with the return code, but if the socket is still up we
+ * should be a good TFTP client and let the server know we are done */
state->state = TFTP_STATE_FIN;
break;
return CURLE_OUT_OF_MEMORY;
}
- /* we don't keep TFTP connections up basically because there's none or very
+ /* we do not keep TFTP connections up basically because there is no or very
* little gain for UDP */
connclose(conn, "TFTP");
switch(state->event) {
case TFTP_EVENT_DATA:
- /* Don't pass to the client empty or retransmitted packets */
+ /* Do not pass to the client empty or retransmitted packets */
if(state->rbytes > 4 &&
(NEXT_BLOCKNUM(state->block) == getrpacketblock(&state->rpacket))) {
result = Curl_client_write(data, CLIENTWRITE_BODY,
return result;
*done = (state->state == TFTP_STATE_FIN) ? TRUE : FALSE;
if(*done)
- /* Tell curl we're done */
+ /* Tell curl we are done */
Curl_xfer_setup_nop(data);
}
else {
return result;
*done = (state->state == TFTP_STATE_FIN) ? TRUE : FALSE;
if(*done)
- /* Tell curl we're done */
+ /* Tell curl we are done */
Curl_xfer_setup_nop(data);
}
/* if rc == 0, then select() timed out */
DEBUGF(infof(data, "DO phase is complete"));
}
else if(!result) {
- /* The multi code doesn't have this logic for the DOING state so we
+ /* The multi code does not have this logic for the DOING state so we
provide it for TFTP since it may do the entire transfer in this
state. */
if(Curl_pgrsUpdate(data))
conn->transport = TRNSPRT_UDP;
/* TFTP URLs support an extension like ";mode=<typecode>" that
- * we'll try to get now! */
+ * we will try to get now! */
type = strstr(data->state.up.path, ";mode=");
if(!type)
/*
** clock_gettime() may be defined by Apple's SDK as weak symbol thus
- ** code compiles but fails during run-time if clock_gettime() is
+ ** code compiles but fails during runtime if clock_gettime() is
** called on unsupported OS version.
*/
#if defined(__APPLE__) && defined(HAVE_BUILTIN_AVAILABLE) && \
/*
** Even when the configure process has truly detected monotonic clock
** availability, it might happen that it is not actually available at
- ** run-time. When this occurs simply fallback to other time source.
+ ** runtime. When this occurs simply fallback to other time source.
*/
#ifdef HAVE_GETTIMEOFDAY
else {
#endif
#ifndef HAVE_SOCKET
-#error "We can't compile without socket() support!"
+#error "We cannot compile without socket() support!"
#endif
#include "urldata.h"
return -1;
}
}
- DEBUGF(infof(data, "readwrite_data: we're done"));
+ DEBUGF(infof(data, "readwrite_data: we are done"));
}
DEBUGASSERT(nread >= 0);
return nread;
if(((k->keepon & (KEEP_RECV|KEEP_SEND)) == KEEP_SEND) &&
(conn->bits.close || is_multiplex)) {
- /* When we've read the entire thing and the close bit is set, the server
- may now close the connection. If there's now any kind of sending going
+ /* When we have read the entire thing and the close bit is set, the server
+ may now close the connection. If there is now any kind of sending going
on from our side, we need to stop that immediately. */
infof(data, "we are done reading and this is set to close, stop send");
k->keepon &= ~KEEP_SEND; /* no writing anymore either */
if(!(data->req.no_body) && (k->size != -1) &&
(k->bytecount != k->size) &&
#ifdef CURL_DO_LINEEND_CONV
- /* Most FTP servers don't adjust their file SIZE response for CRLFs,
- so we'll check to see if the discrepancy can be explained
- by the number of CRLFs we've changed to LFs.
+ /* Most FTP servers do not adjust their file SIZE response for CRLFs,
+ so we will check to see if the discrepancy can be explained
+ by the number of CRLFs we have changed to LFs.
*/
(k->bytecount != (k->size + data->state.crlf_conversions)) &&
#endif /* CURL_DO_LINEEND_CONV */
CURLcode result;
if(!data->state.url && !data->set.uh) {
- /* we can't do anything without URL */
+ /* we cannot do anything without URL */
failf(data, "No URL set");
return CURLE_URL_MALFORMAT;
}
}
if(data->set.postfields && data->set.set_resume_from) {
- /* we can't */
+ /* we cannot */
failf(data, "cannot mix POSTFIELDS with RESUME_FROM");
return CURLE_BAD_FUNCTION_ARGUMENT;
}
/*
* Set user-agent. Used for HTTP, but since we can attempt to tunnel
- * basically anything through an HTTP proxy we can't limit this based on
+ * basically anything through an HTTP proxy we cannot limit this based on
* protocol.
*/
if(data->set.str[STRING_USERAGENT]) {
(data->req.httpcode != 401) && (data->req.httpcode != 407) &&
Curl_is_absolute_url(newurl, NULL, 0, FALSE)) {
/* If this is not redirect due to a 401 or 407 response and an absolute
- URL: don't allow a custom port number */
+ URL: do not allow a custom port number */
disallowport = TRUE;
}
}
if(type == FOLLOW_FAKE) {
- /* we're only figuring out the new url if we would've followed locations
- but now we're done so we can get out! */
+ /* we are only figuring out the new URL if we would have followed locations
+ but now we are done so we can get out! */
data->info.wouldredirect = newurl;
if(reachedmax) {
/* 306 - Not used */
/* 307 - Temporary Redirect */
default: /* for all above (and the unknown ones) */
- /* Some codes are explicitly mentioned since I've checked RFC2616 and they
- * seem to be OK to POST to.
+ /* Some codes are explicitly mentioned since I have checked RFC2616 and
+ * they seem to be OK to POST to.
*/
break;
case 301: /* Moved Permanently */
/* (quote from RFC7231, section 6.4.2)
*
* Note: For historical reasons, a user agent MAY change the request
- * method from POST to GET for the subsequent request. If this
+ * method from POST to GET for the subsequent request. If this
* behavior is undesired, the 307 (Temporary Redirect) status code
* can be used instead.
*
/* (quote from RFC7231, section 6.4.3)
*
* Note: For historical reasons, a user agent MAY change the request
- * method from POST to GET for the subsequent request. If this
+ * method from POST to GET for the subsequent request. If this
* behavior is undesired, the 307 (Temporary Redirect) status code
* can be used instead.
*
break;
case 304: /* Not Modified */
/* 304 means we did a conditional request and it was "Not modified".
- * We shouldn't get any Location: header in this response!
+ * We should not get any Location: header in this response!
*/
break;
case 305: /* Use Proxy */
/* (quote from RFC2616, section 10.3.6):
* "The requested resource MUST be accessed through the proxy given
* by the Location field. The Location field gives the URI of the
- * proxy. The recipient is expected to repeat this single request
+ * proxy. The recipient is expected to repeat this single request
* via the proxy. 305 responses MUST only be generated by origin
* servers."
*/
bool retry = FALSE;
*url = NULL;
- /* if we're talking upload, we can't do the checks below, unless the protocol
- is HTTP as when uploading over HTTP we will still get a response */
+ /* if we are talking upload, we cannot do the checks below, unless the
+ protocol is HTTP as when uploading over HTTP we will still get a
+ response */
if(data->state.upload &&
!(conn->handler->protocol&(PROTO_FAMILY_HTTP|CURLPROTO_RTSP)))
return CURLE_OK;
return CURLE_OUT_OF_MEMORY;
connclose(conn, "retry"); /* close this connection */
- conn->bits.retry = TRUE; /* mark this as a connection we're about
+ conn->bits.retry = TRUE; /* mark this as a connection we are about
to retry. Marking it this way should
prevent i.e HTTP transfers to return
error just because nothing has been
if(size > 0)
Curl_pgrsSetDownloadSize(data, size);
}
- /* we want header and/or body, if neither then don't do this! */
+ /* we want header and/or body, if neither then do not do this! */
if(k->getheader || !data->req.no_body) {
if(sockindex != -1)
#endif
#ifndef HAVE_SOCKET
-#error "We can't compile without socket() support!"
+#error "We cannot compile without socket() support!"
#endif
#include <limits.h>
#endif
/* Some parts of the code (e.g. chunked encoding) assume this buffer has at
- * more than just a few bytes to play with. Don't let it become too small or
+ * more than just a few bytes to play with. Do not let it become too small or
* bad things will happen.
*/
#if READBUFFER_SIZE < READBUFFER_MIN
if(data->state.rangestringalloc)
free(data->state.range);
- /* freed here just in case DONE wasn't called */
+ /* freed here just in case DONE was not called */
Curl_req_free(&data->req, data);
/* Close down all open SSL info and sessions */
set->seek_client = ZERO_NULL;
- set->filesize = -1; /* we don't know the size */
+ set->filesize = -1; /* we do not know the size */
set->postfieldsize = -1; /* unknown size */
set->maxredirs = 30; /* sensible default */
Curl_safefree(conn->sasl_authzid);
Curl_safefree(conn->options);
Curl_safefree(conn->oauth_bearer);
- Curl_safefree(conn->host.rawalloc); /* host name buffer */
- Curl_safefree(conn->conn_to_host.rawalloc); /* host name buffer */
+ Curl_safefree(conn->host.rawalloc); /* hostname buffer */
+ Curl_safefree(conn->conn_to_host.rawalloc); /* hostname buffer */
Curl_safefree(conn->hostname_resolve);
Curl_safefree(conn->secondaryhostname);
Curl_safefree(conn->localdev);
* disassociated from an easy handle.
*
* This function MUST NOT reset state in the Curl_easy struct if that
- * isn't strictly bound to the life-time of *this* particular connection.
+ * is not strictly bound to the life-time of *this* particular connection.
*/
void Curl_disconnect(struct Curl_easy *data,
struct connectdata *conn, bool aborted)
conn->connection_id, aborted));
/*
- * If this connection isn't marked to force-close, leave it open if there
+ * If this connection is not marked to force-close, leave it open if there
* are other users of it
*/
if(CONN_INUSE(conn) && !aborted) {
return TRUE;
}
#else
-/* disabled, won't get called */
+/* disabled, will not get called */
#define proxy_info_matches(x,y) FALSE
#define socks_proxy_info_matches(x,y) FALSE
#endif
struct Curl_easy *data)
{
if(!CONN_INUSE(conn)) {
- /* The check for a dead socket makes sense only if the connection isn't in
+ /* The check for a dead socket makes sense only if the connection is not in
use */
bool dead;
struct curltime now = Curl_now();
if(IsMultiplexingPossible(data, needle)) {
if(bundle->multiuse == BUNDLE_UNKNOWN) {
if(data->set.pipewait) {
- infof(data, "Server doesn't support multiplex yet, wait");
+ infof(data, "Server does not support multiplex yet, wait");
*waitpipe = TRUE;
CONNCACHE_UNLOCK(data);
return FALSE; /* no reuse */
}
- infof(data, "Server doesn't support multiplex (yet)");
+ infof(data, "Server does not support multiplex (yet)");
}
else if(bundle->multiuse == BUNDLE_MULTIPLEX) {
if(Curl_multiplex_wanted(data->multi))
if(!canmultiplex) {
if(Curl_resolver_asynch() &&
- /* remote_ip[0] is NUL only if the resolving of the name hasn't
- completed yet and until then we don't reuse this connection */
+ /* remote_ip[0] is NUL only if the resolving of the name has not
+ completed yet and until then we do not reuse this connection */
!check->primary.remote_ip[0])
continue;
}
if(CONN_INUSE(check)) {
if(!canmultiplex) {
- /* transfer can't be multiplexed and check is in use */
+ /* transfer cannot be multiplexed and check is in use */
continue;
}
else {
if(!Curl_conn_is_connected(check, FIRSTSOCKET)) {
foundPendingCandidate = TRUE;
- /* Don't pick a connection that hasn't connected yet */
+ /* Do not pick a connection that has not connected yet */
infof(data, "Connection #%" CURL_FORMAT_CURL_OFF_T
- " isn't open enough, can't reuse", check->connection_id);
+ " is not open enough, cannot reuse", check->connection_id);
continue;
}
if((needle->handler->flags&PROTOPT_SSL) !=
(check->handler->flags&PROTOPT_SSL))
- /* don't do mixed SSL and non-SSL connections */
+ /* do not do mixed SSL and non-SSL connections */
if(get_protocol_family(check->handler) !=
needle->handler->protocol || !check->bits.tls_upgraded)
/* except protocols that have been upgraded via TLS */
continue;
if(needle->bits.conn_to_host != check->bits.conn_to_host)
- /* don't mix connections that use the "connect to host" feature and
- * connections that don't use this feature */
+ /* do not mix connections that use the "connect to host" feature and
+ * connections that do not use this feature */
continue;
if(needle->bits.conn_to_port != check->bits.conn_to_port)
- /* don't mix connections that use the "connect to port" feature and
- * connections that don't use this feature */
+ /* do not mix connections that use the "connect to port" feature and
+ * connections that do not use this feature */
continue;
#ifndef CURL_DISABLE_PROXY
if(!Curl_ssl_conn_config_match(data, check, TRUE)) {
DEBUGF(infof(data,
"Connection #%" CURL_FORMAT_CURL_OFF_T
- " has different SSL proxy parameters, can't reuse",
+ " has different SSL proxy parameters, cannot reuse",
check->connection_id));
continue;
}
if(h2upgrade && !check->httpversion && canmultiplex) {
if(data->set.pipewait) {
- infof(data, "Server upgrade doesn't support multiplex yet, wait");
+ infof(data, "Server upgrade does not support multiplex yet, wait");
*waitpipe = TRUE;
CONNCACHE_UNLOCK(data);
return FALSE; /* no reuse */
}
infof(data, "Server upgrade cannot be used");
- continue; /* can't be used atm */
+ continue; /* cannot be used atm */
}
if(needle->localdev || needle->localport) {
/* If we are bound to a specific local end (IP+port), we must not
- reuse a random other one, although if we didn't ask for a
+ reuse a random other one, although if we did not ask for a
particular one we can reuse one that was bound.
This comparison is a bit rough and too strict. Since the input
same it would take a lot of processing to make it really accurate.
Instead, this matching will assume that reuses of bound connections
will most likely also reuse the exact same binding parameters and
- missing out a few edge cases shouldn't hurt anyone very much.
+ missing out a few edge cases should not hurt anyone very much.
*/
if((check->localport != needle->localport) ||
(check->localportrange != needle->localportrange) ||
if(!(needle->handler->flags & PROTOPT_CREDSPERREQUEST)) {
/* This protocol requires credentials per connection,
- so verify that we're using the same name and password as well */
+ so verify that we are using the same name and password as well */
if(Curl_timestrcmp(needle->user, check->user) ||
Curl_timestrcmp(needle->passwd, check->passwd) ||
Curl_timestrcmp(needle->sasl_authzid, check->sasl_authzid) ||
#endif
/* Additional match requirements if talking TLS OR
- * not talking to a HTTP proxy OR using a tunnel through a proxy */
+ * not talking to an HTTP proxy OR using a tunnel through a proxy */
if((needle->handler->flags&PROTOPT_SSL)
#ifndef CURL_DISABLE_PROXY
|| !needle->bits.httpproxy || needle->bits.tunnel_proxy
!Curl_ssl_conn_config_match(data, check, FALSE)) {
DEBUGF(infof(data,
"Connection #%" CURL_FORMAT_CURL_OFF_T
- " has different SSL parameters, can't reuse",
+ " has different SSL parameters, cannot reuse",
check->connection_id));
continue;
}
}
}
else if(check->http_ntlm_state != NTLMSTATE_NONE) {
- /* Connection is using NTLM auth but we don't want NTLM */
+ /* Connection is using NTLM auth but we do not want NTLM */
continue;
}
continue;
}
else if(check->proxy_ntlm_state != NTLMSTATE_NONE) {
- /* Proxy connection is using NTLM auth but we don't want NTLM */
+ /* Proxy connection is using NTLM auth but we do not want NTLM */
continue;
}
#endif
if(CONN_INUSE(check)) {
DEBUGASSERT(canmultiplex);
DEBUGASSERT(check->bits.multiplex);
- /* If multiplexed, make sure we don't go over concurrency limit */
+ /* If multiplexed, make sure we do not go over concurrency limit */
if(CONN_INUSE(check) >=
Curl_multi_max_concurrent_streams(data->multi)) {
infof(data, "client side MAX_CONCURRENT_STREAMS reached"
conn->primary.remote_port = -1; /* unknown at this point */
conn->remote_port = -1; /* unknown at this point */
- /* Default protocol-independent behavior doesn't support persistent
+ /* Default protocol-independent behavior does not support persistent
connections, so we set this to force-close. Protocols that support
this need to set this to FALSE in their "curl_do" functions. */
connclose(conn, "Default to force-close");
}
}
- /* The protocol was not found in the table, but we don't have to assign it
+ /* The protocol was not found in the table, but we do not have to assign it
to anything since it is already assigned to a dummy-struct in the
create_conn() function when the connectdata struct is allocated. */
failf(data, "Protocol \"%s\" %s%s", protostr,
return CURLE_OUT_OF_MEMORY;
}
else if(strlen(data->state.up.hostname) > MAX_URL_LEN) {
- failf(data, "Too long host name (maximum is %d)", MAX_URL_LEN);
+ failf(data, "Too long hostname (maximum is %d)", MAX_URL_LEN);
return CURLE_URL_MALFORMAT;
}
hostname = data->state.up.hostname;
zonefrom_url(uh, data, conn);
}
- /* make sure the connect struct gets its own copy of the host name */
+ /* make sure the connect struct gets its own copy of the hostname */
conn->host.rawalloc = strdup(hostname ? hostname : "");
if(!conn->host.rawalloc)
return CURLE_OUT_OF_MEMORY;
return result;
/*
- * User name and password set with their own options override the
+ * Username and password set with their own options override the
* credentials possibly set in the URL.
*/
if(!data->set.str[STRING_PASSWORD]) {
}
if(!data->set.str[STRING_USERNAME]) {
- /* we don't use the URL API's URL decoder option here since it rejects
+ /* we do not use the URL API's URL decoder option here since it rejects
control codes and we want to allow them for some schemes in the user
and password fields */
uc = curl_url_get(uh, CURLUPART_USER, &data->state.up.user, 0);
/*
- * If we're doing a resumed transfer, we need to setup our stuff
+ * If we are doing a resumed transfer, we need to setup our stuff
* properly.
*/
static CURLcode setup_range(struct Curl_easy *data)
* the first to check for.)
*
* For compatibility, the all-uppercase versions of these variables are
- * checked if the lowercase versions don't exist.
+ * checked if the lowercase versions do not exist.
*/
char proxy_env[20];
char *envp = proxy_env;
proxy = curl_getenv(proxy_env);
/*
- * We don't try the uppercase version of HTTP_PROXY because of
+ * We do not try the uppercase version of HTTP_PROXY because of
* security reasons:
*
* When curl is used in a webserver application
/*
* If this is supposed to use a proxy, we need to figure out the proxy
- * host name, so that we can reuse an existing connection
+ * hostname, so that we can reuse an existing connection
* that may exist registered to the same proxy host.
*/
static CURLcode parse_proxy(struct Curl_easy *data,
conn->primary.remote_port = port;
}
- /* now, clone the proxy host name */
+ /* now, clone the proxy hostname */
uc = curl_url_get(uhp, CURLUPART_HOST, &host, CURLU_URLDECODE);
if(uc) {
result = CURLE_OUT_OF_MEMORY;
#endif
if(proxy && (!*proxy || (conn->handler->flags & PROTOPT_NONETWORK))) {
- free(proxy); /* Don't bother with an empty proxy string or if the
- protocol doesn't work with network */
+ free(proxy); /* Do not bother with an empty proxy string or if the
+ protocol does not work with network */
proxy = NULL;
}
if(socksproxy && (!*socksproxy ||
(conn->handler->flags & PROTOPT_NONETWORK))) {
- free(socksproxy); /* Don't bother with an empty socks proxy string or if
- the protocol doesn't work with network */
+ free(socksproxy); /* Do not bother with an empty socks proxy string or if
+ the protocol does not work with network */
socksproxy = NULL;
}
conn->bits.proxy = conn->bits.httpproxy || conn->bits.socksproxy;
if(!conn->bits.proxy) {
- /* we aren't using the proxy after all... */
+ /* we are not using the proxy after all... */
conn->bits.proxy = FALSE;
conn->bits.httpproxy = FALSE;
conn->bits.socksproxy = FALSE;
/*
* Curl_parse_login_details()
*
- * This is used to parse a login string for user name, password and options in
+ * This is used to parse a login string for username, password and options in
* the following formats:
*
* user
bool url_provided = FALSE;
if(data->state.aptr.user) {
- /* there was a user name in the URL. Use the URL decoded version */
+ /* there was a username in the URL. Use the URL decoded version */
userp = &data->state.aptr.user;
url_provided = TRUE;
}
}
/*
- * Set the login details so they're available in the connection
+ * Set the login details so they are available in the connection
*/
static CURLcode set_login(struct Curl_easy *data,
struct connectdata *conn)
else
infof(data, "Invalid IPv6 address format");
portptr = ptr;
- /* Note that if this didn't end with a bracket, we still advanced the
- * hostptr first, but I can't see anything wrong with that as no host
+ /* Note that if this did not end with a bracket, we still advanced the
+ * hostptr first, but I cannot see anything wrong with that as no host
* name nor a numeric can legally start with a bracket.
*/
#else
host_portno = strchr(portptr, ':');
if(host_portno) {
char *endp = NULL;
- *host_portno = '\0'; /* cut off number from host name */
+ *host_portno = '\0'; /* cut off number from hostname */
host_portno++;
if(*host_portno) {
long portparse = strtol(host_portno, &endp, 10);
}
}
- /* now, clone the cleaned host name */
+ /* now, clone the cleaned hostname */
DEBUGASSERT(hostptr);
*hostname_result = strdup(hostptr);
if(!*hostname_result) {
conn->transport = TRNSPRT_QUIC;
conn->httpversion = 30;
break;
- default: /* shouldn't be possible */
+ default: /* should not be possible */
break;
}
}
/* Resolve the name of the server or proxy */
if(conn->bits.reuse) {
- /* We're reusing the connection - no need to resolve anything, and
+ /* We are reusing the connection - no need to resolve anything, and
idnconvert_hostname() was called already in create_conn() for the reuse
case. */
*async = FALSE;
/*
* Cleanup the connection `temp`, just allocated for `data`, before using the
- * previously `existing` one for `data`. All relevant info is copied over
+ * previously `existing` one for `data`. All relevant info is copied over
* and `temp` is freed.
*/
static void reuse_conn(struct Curl_easy *data,
/* get the user+password information from the temp struct since it may
* be new for this request even when we reuse an existing connection */
if(temp->user) {
- /* use the new user name and password though */
+ /* use the new username and password though */
Curl_safefree(existing->user);
Curl_safefree(existing->passwd);
existing->user = temp->user;
#ifndef CURL_DISABLE_PROXY
existing->bits.proxy_user_passwd = temp->bits.proxy_user_passwd;
if(existing->bits.proxy_user_passwd) {
- /* use the new proxy user name and proxy password though */
+ /* use the new proxy username and proxy password though */
Curl_safefree(existing->http_proxy.user);
Curl_safefree(existing->socks_proxy.user);
Curl_safefree(existing->http_proxy.passwd);
temp->hostname_resolve = NULL;
/* reuse init */
- existing->bits.reuse = TRUE; /* yes, we're reusing here */
+ existing->bits.reuse = TRUE; /* yes, we are reusing here */
Curl_conn_free(data, temp);
}
/**
* create_conn() sets up a new connectdata struct, or reuses an already
- * existing one, and resolves host name.
+ * existing one, and resolves hostname.
*
* if this function returns CURLE_OK and *async is set to TRUE, the resolve
* response will be coming asynchronously. If *async is FALSE, the name is
goto out;
/***********************************************************************
- * file: is a special case in that it doesn't need a network connection
+ * file: is a special case in that it does not need a network connection
***********************************************************************/
#ifndef CURL_DISABLE_FILE
if(conn->handler->flags & PROTOPT_NONETWORK) {
Curl_persistconninfo(data, conn, NULL);
result = conn->handler->connect_it(data, &done);
- /* Setup a "faked" transfer that'll do nothing */
+ /* Setup a "faked" transfer that will do nothing */
if(!result) {
Curl_attach_connection(data, conn);
result = Curl_conncache_add_conn(data);
}
#if defined(USE_NTLM)
- /* If NTLM is requested in a part of this connection, make sure we don't
+ /* If NTLM is requested in a part of this connection, make sure we do not
assume the state is fine as this is a fresh connection and NTLM is
connection based. */
if((data->state.authhost.picked & CURLAUTH_NTLM) &&
#ifndef CURL_DISABLE_PROXY
/* set proxy_connect_closed to false unconditionally already here since it
is used strictly to provide extra information to a parent function in the
- case of proxy CONNECT failures and we must make sure we don't have it
+ case of proxy CONNECT failures and we must make sure we do not have it
lingering set from a previous invoke */
conn->bits.proxy_connect_closed = FALSE;
#endif
/* multiplexed */
*protocol_done = TRUE;
else if(!*asyncp) {
- /* DNS resolution is done: that's either because this is a reused
+ /* DNS resolution is done: that is either because this is a reused
connection, in which case DNS was unnecessary, or because DNS
really did finish already (synch resolver/fast async resolve) */
result = Curl_setup_conn(data, protocol_done);
return result;
}
else if(result && conn) {
- /* We're not allowed to return failure with memory left allocated in the
+ /* We are not allowed to return failure with memory left allocated in the
connectdata struct, free those here */
Curl_detach_connection(data);
Curl_conncache_remove_conn(data, conn, TRUE);
CURLcode result;
if(conn) {
- conn->bits.do_more = FALSE; /* by default there's no curl_do_more() to
+ conn->bits.do_more = FALSE; /* by default there is no curl_do_more() to
use */
- /* if the protocol used doesn't support wildcards, switch it off */
+ /* if the protocol used does not support wildcards, switch it off */
if(data->state.wildcardmatch &&
!(conn->handler->flags & PROTOPT_WILDCARD))
data->state.wildcardmatch = FALSE;
}
/*
- * Find the separator at the end of the host name, or the '?' in cases like
+ * Find the separator at the end of the hostname, or the '?' in cases like
* http://www.example.com?id=2380
*/
static const char *find_host_sep(const char *url)
/* urlencode_str() writes data into an output dynbuf and URL-encodes the
* spaces in the source URL accordingly.
*
- * URL encoding should be skipped for host names, otherwise IDN resolution
+ * URL encoding should be skipped for hostnames, otherwise IDN resolution
* will fail.
*/
static CURLUcode urlencode_str(struct dynbuf *o, const char *url,
if(i && (url[i] == ':') && ((url[i + 1] == '/') || !guess_scheme)) {
/* If this does not guess scheme, the scheme always ends with the colon so
that this also detects data: URLs etc. In guessing mode, data: could
- be the host name "data" with a specified port number. */
+ be the hostname "data" with a specified port number. */
/* the length of the scheme is the name part only */
size_t len = i;
bool skip_slash = FALSE;
*newurl = NULL;
- /* protsep points to the start of the host name */
+ /* protsep points to the start of the hostname */
protsep = strstr(base, "//");
if(!protsep)
protsep = base;
if('/' != relurl[0]) {
int level = 0;
- /* First we need to find out if there's a ?-letter in the URL,
+ /* First we need to find out if there is a ?-letter in the URL,
and cut it and the right-side of that off */
pathsep = strchr(protsep, '?');
if(pathsep)
*pathsep = 0;
- /* we have a relative path to append to the last slash if there's one
+ /* we have a relative path to append to the last slash if there is one
available, or the new URL is just a query string (starts with a '?') or
a fragment (starts with '#') we append the new one at the end of the
current URL */
if(pathsep)
*pathsep = 0;
- /* Check if there's any slash after the host name, and if so, remember
+ /* Check if there is any slash after the hostname, and if so, remember
that position instead */
pathsep = strchr(protsep, '/');
if(pathsep)
if(pathsep) {
/* When people use badly formatted URLs, such as
"http://www.example.com?dir=/home/daniel" we must not use the first
- slash, if there's a ?-letter before it! */
+ slash, if there is a ?-letter before it! */
char *sep = strchr(protsep, '?');
if(sep && (sep < pathsep))
pathsep = sep;
}
else {
/* There was no slash. Now, since we might be operating on a badly
- formatted URL, such as "http://www.example.com?id=2380" which
- doesn't use a slash separator as it is supposed to, we need to check
+ formatted URL, such as "http://www.example.com?id=2380" which does
+ not use a slash separator as it is supposed to, we need to check
for a ?-letter as well! */
pathsep = strchr(protsep, '?');
if(pathsep)
Curl_dyn_init(&newest, CURL_MAX_INPUT_LENGTH);
- /* copy over the root url part */
+ /* copy over the root URL part */
result = Curl_dyn_add(&newest, base);
if(result)
return result;
/*
* parse_hostname_login()
*
- * Parse the login details (user name, password and options) from the URL and
- * strip them out of the host name
+ * Parse the login details (username, password and options) from the URL and
+ * strip them out of the hostname
*
*/
static CURLUcode parse_hostname_login(struct Curl_URL *u,
const char *login,
size_t len,
unsigned int flags,
- size_t *offset) /* to the host name */
+ size_t *offset) /* to the hostname */
{
CURLUcode result = CURLUE_OK;
CURLcode ccode;
if(userp) {
if(flags & CURLU_DISALLOW_USER) {
- /* Option DISALLOW_USER is set and url contains username. */
+ /* Option DISALLOW_USER is set and URL contains username. */
result = CURLUE_USER_NOT_ALLOWED;
goto out;
}
u->options = optionsp;
}
- /* the host name starts at this offset */
+ /* the hostname starts at this offset */
*offset = ptr - login;
return CURLUE_OK;
unsigned long port;
size_t keep = portptr - hostname;
- /* Browser behavior adaptation. If there's a colon with no digits after,
+ /* Browser behavior adaptation. If there is a colon with no digits after,
just cut off the name there which makes us ignore the colon and just
use the default port. Firefox, Chrome and Safari all do that.
- Don't do it if the URL has no scheme, to make something that looks like
+ Do not do it if the URL has no scheme, to make something that looks like
a scheme not work!
*/
Curl_dyn_setlen(host, keep);
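
As a hedged illustration of the colon handling described above (the URL and the expected outcome are examples, not part of this change), the public URL API can be used to observe that a trailing colon without digits leaves no port stored:

    #include <stdio.h>
    #include <curl/curl.h>

    static void show_ignored_colon(void)
    {
      CURLU *u = curl_url();
      char *port = NULL;
      /* scheme present, colon present, but no digits after the colon */
      curl_url_set(u, CURLUPART_URL, "http://example.com:/path", 0);
      /* no port was stored, so CURLUE_NO_PORT is the expected outcome */
      if(curl_url_get(u, CURLUPART_PORT, &port, 0) == CURLUE_NO_PORT)
        puts("the colon with no digits was ignored");
      curl_free(port);
      curl_url_cleanup(u);
    }
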
char zoneid[16];
int i = 0;
char *h = &hostname[len + 1];
- /* pass '25' if present and is a url encoded percent sign */
+ /* pass '25' if present and is a URL encoded percent sign */
if(!strncmp(h, "25", 2) && h[2] && (h[2] != ']'))
h += 2;
while(*h && (*h != ']') && (i < 15))
char *endp = NULL;
unsigned long l;
if(!ISDIGIT(*c))
- /* most importantly this doesn't allow a leading plus or minus */
+ /* most importantly this does not allow a leading plus or minus */
return HOST_NAME;
l = strtoul(c, &endp, 0);
if(errno)
CURLcode result;
/*
- * Parse the login details and strip them out of the host name.
+ * Parse the login details and strip them out of the hostname.
*/
uc = parse_hostname_login(u, auth, authlen, flags, &offset);
if(uc)
do {
bool dotdot = TRUE;
if(*input == '.') {
- /* A. If the input buffer begins with a prefix of "../" or "./", then
+ /* A. If the input buffer begins with a prefix of "../" or "./", then
remove that prefix from the input buffer; otherwise, */
if(!strncmp("./", input, 2)) {
input += 3;
clen -= 3;
}
- /* D. if the input buffer consists only of "." or "..", then remove
+ /* D. if the input buffer consists only of "." or "..", then remove
that from the input buffer; otherwise, */
else if(!strcmp(".", input) || !strcmp("..", input) ||
dotdot = FALSE;
}
else if(*input == '/') {
- /* B. if the input buffer begins with a prefix of "/./" or "/.", where
+ /* B. if the input buffer begins with a prefix of "/./" or "/.", where
"." is a complete path segment, then replace that prefix with "/" in
the input buffer; otherwise, */
if(!strncmp("/./", input, 3)) {
break;
}
- /* C. if the input buffer begins with a prefix of "/../" or "/..",
+ /* C. if the input buffer begins with a prefix of "/../" or "/..",
where ".." is a complete path segment, then replace that prefix with
"/" in the input buffer and remove the last segment and its
preceding "/" (if any) from the output buffer; otherwise, */
dotdot = FALSE;
if(!dotdot) {
- /* E. move the first path segment in the input buffer to the end of
+ /* E. move the first path segment in the input buffer to the end of
the output buffer, including the initial "/" character (if any) and
any subsequent characters up to, but not including, the next "/"
character or the end of the input buffer. */
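
A hedged sketch of what steps A-E above amount to in practice, using the public URL API; the example URL and the expected result are illustrative and assume that parsing without CURLU_PATH_AS_IS applies this dot-segment removal:

    #include <stdio.h>
    #include <curl/curl.h>

    static void show_dot_segment_removal(void)
    {
      CURLU *u = curl_url();
      char *path = NULL;
      curl_url_set(u, CURLUPART_URL,
                   "http://example.com/a/b/../c/./d.html", 0);
      if(!curl_url_get(u, CURLUPART_PATH, &path, 0))
        printf("normalized path: %s\n", path); /* expected: /a/c/d.html */
      curl_free(path);
      curl_url_cleanup(u);
    }
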
* Appendix E, but believe me, it was meant to be there. --MK)
*/
if(ptr[0] != '/' && !STARTS_WITH_URL_DRIVE_PREFIX(ptr)) {
- /* the URL includes a host name, it must match "localhost" or
+ /* the URL includes a hostname, it must match "localhost" or
"127.0.0.1" to be valid */
if(checkprefix("localhost/", ptr) ||
checkprefix("127.0.0.1/", ptr)) {
#if defined(_WIN32)
size_t len;
- /* the host name, NetBIOS computer name, can not contain disallowed
+ /* the hostname, NetBIOS computer name, can not contain disallowed
chars, and the delimiting slash character must be appended to the
- host name */
+ hostname */
path = strpbrk(ptr, "/\\:*?\"<>|");
if(!path || *path != '/') {
result = CURLUE_BAD_FILE_URL;
Curl_dyn_reset(&host);
#if !defined(_WIN32) && !defined(MSDOS) && !defined(__CYGWIN__)
- /* Don't allow Windows drive letters when not in Windows.
+ /* Do not allow Windows drive letters when not in Windows.
* This catches both "file:/c:" and "file:c:" */
if(('/' == path[0] && STARTS_WITH_URL_DRIVE_PREFIX(&path[1])) ||
STARTS_WITH_URL_DRIVE_PREFIX(path)) {
result = CURLUE_BAD_SLASHES;
goto fail;
}
- hostp = p; /* host name starts here */
+ hostp = p; /* hostname starts here */
}
else {
/* no scheme! */
}
}
- /* find the end of the host name + port number */
+ /* find the end of the hostname + port number */
hostlen = strcspn(hostp, "/?#");
path = &hostp[hostlen];
if((flags & CURLU_GUESS_SCHEME) && !schemep) {
const char *hostname = Curl_dyn_ptr(&host);
- /* legacy curl-style guess based on host name */
+ /* legacy curl-style guess based on hostname */
if(checkprefix("ftp.", hostname))
schemep = "ftp";
else if(checkprefix("dict.", hostname))
ifmissing = CURLUE_NO_PORT;
urldecode = FALSE; /* never for port */
if(!ptr && (flags & CURLU_DEFAULT_PORT) && u->scheme) {
- /* there's no stored port number, but asked to deliver
+ /* there is no stored port number, but asked to deliver
a default one for the scheme */
const struct Curl_handler *h = Curl_get_scheme_handler(u->scheme);
if(h) {
h = Curl_get_scheme_handler(scheme);
if(!port && (flags & CURLU_DEFAULT_PORT)) {
- /* there's no stored port number, but asked to deliver
+ /* there is no stored port number, but asked to deliver
a default one for the scheme */
if(h) {
msnprintf(portbuf, sizeof(portbuf), "%u", h->defport);
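
A hedged usage sketch of the CURLU_DEFAULT_PORT behavior implemented above, seen from the application side (the URL and expected value are examples):

    #include <stdio.h>
    #include <curl/curl.h>

    static void show_default_port(void)
    {
      CURLU *u = curl_url();
      char *port = NULL;
      curl_url_set(u, CURLUPART_URL, "https://example.com/", 0);
      /* the URL carries no port, so ask for the scheme's default */
      if(!curl_url_get(u, CURLUPART_PORT, &port, CURLU_DEFAULT_PORT))
        printf("default port for https: %s\n", port); /* expected: 443 */
      curl_free(port);
      curl_url_cleanup(u);
    }
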
return CURLUE_MALFORMED_INPUT;
/* if the new thing is absolute or the old one is not
- * (we could not get an absolute url in 'oldurl'),
+ * (we could not get an absolute URL in 'oldurl'),
* then replace the existing with the new. */
if(Curl_is_absolute_url(part, NULL, 0,
flags & (CURLU_GUESS_SCHEME|
else if(what == CURLUPART_HOST) {
size_t n = Curl_dyn_len(&enc);
if(!n && (flags & CURLU_NO_AUTHORITY)) {
- /* Skip hostname check, it's allowed to be empty. */
+ /* Skip hostname check, it is allowed to be empty. */
}
else {
if(!n || hostname_check(u, (char *)newp, n)) {
#ifdef USE_WEBSOCKETS
/* CURLPROTO_GOPHERS (29) is the highest publicly used protocol bit number,
* the rest are internal information. If we use higher bits we only do this on
- * platforms that have a >= 64 bit type and then we use such a type for the
+ * platforms that have a >= 64-bit type and then we use such a type for the
* protocol fields in the protocol handler.
*/
#define CURLPROTO_WS (1<<30)
};
struct ssl_primary_config {
- char *CApath; /* certificate dir (doesn't work on windows) */
+ char *CApath; /* certificate dir (does not work on windows) */
char *CAfile; /* certificate to verify peer against */
char *issuercert; /* optional issuer certificate filename */
char *clientcert;
curl_ssl_ctx_callback fsslctx; /* function to initialize ssl ctx */
void *fsslctxp; /* parameter for call back */
char *cert_type; /* format for certificate (default: PEM)*/
- char *key; /* private key file name */
+ char *key; /* private key filename */
struct curl_blob *key_blob;
char *key_type; /* format for private key (default: PEM) */
char *key_passwd; /* plain text private key password */
BIT(falsestart);
BIT(enable_beast); /* allow this flaw for interoperability's sake */
BIT(no_revoke); /* disable SSL certificate revocation checks */
- BIT(no_partialchain); /* don't accept partial certificate chains */
+ BIT(no_partialchain); /* do not accept partial certificate chains */
BIT(revoke_best_effort); /* ignore SSL revocation offline/missing revocation
list errors */
BIT(native_ca_store); /* use the native ca store of operating system */
/* information stored about one single SSL session */
struct Curl_ssl_session {
- char *name; /* host name for which this ID was used */
- char *conn_to_host; /* host name for the connection (may be NULL) */
+ char *name; /* hostname for which this ID was used */
+ char *conn_to_host; /* hostname for the connection (may be NULL) */
const char *scheme; /* protocol scheme used */
void *sessionid; /* as returned from the SSL layer */
size_t idsize; /* if known, otherwise 0 */
re-attempted at another connection. */
#ifndef CURL_DISABLE_FTP
BIT(ftp_use_epsv); /* As set with CURLOPT_FTP_USE_EPSV, but if we find out
- EPSV doesn't work we disable it for the forthcoming
+ EPSV does not work we disable it for the forthcoming
requests */
BIT(ftp_use_eprt); /* As set with CURLOPT_FTP_USE_EPRT, but if we find out
- EPRT doesn't work we disable it for the forthcoming
+ EPRT does not work we disable it for the forthcoming
requests */
BIT(ftp_use_data_ssl); /* Enabled SSL for the data connection */
BIT(ftp_use_control_ssl); /* Enabled SSL for the control connection */
/* This function *MAY* be set to a protocol-dependent function that is run
* after the connect() and everything is done, as a step in the connection.
* The 'done' pointer points to a bool that should be set to TRUE if the
- * function completes before return. If it doesn't complete, the caller
+ * function completes before return. If it does not complete, the caller
* should call the ->connecting() function until it is.
*/
CURLcode (*connect_it)(struct Curl_easy *data, bool *done);
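
A hypothetical sketch of a connect_it callback that honors the contract described above; the function name is invented and the types are assumed from the surrounding handler code:

    /* completes synchronously, so *done is set before returning and the
       caller never needs to poll ->connecting() for this handler */
    static CURLcode example_connect(struct Curl_easy *data, bool *done)
    {
      (void)data;   /* nothing asynchronous is started in this sketch */
      *done = TRUE;
      return CURLE_OK;
    }
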
struct connectdata *conn, curl_socket_t *socks);
/* This function *MAY* be set to a protocol-dependent function that is run
- * by the curl_disconnect(), as a step in the disconnection. If the handler
+ * by the curl_disconnect(), as a step in the disconnection. If the handler
* is called because the connection has been considered dead,
* dead_connection is set to TRUE. The connection is (again) associated with
* the transfer here.
the send function might need to be called while uploading, or vice versa.
*/
#define PROTOPT_DIRLOCK (1<<3)
-#define PROTOPT_NONETWORK (1<<4) /* protocol doesn't use the network! */
+#define PROTOPT_NONETWORK (1<<4) /* protocol does not use the network! */
#define PROTOPT_NEEDSPWD (1<<5) /* needs a password, and if none is set it
gets a default */
-#define PROTOPT_NOURLQUERY (1<<6) /* protocol can't handle
- url query strings (?foo=bar) ! */
+#define PROTOPT_NOURLQUERY (1<<6) /* protocol cannot handle
+ URL query strings (?foo=bar) ! */
#define PROTOPT_CREDSPERREQUEST (1<<7) /* requires login credentials per
request instead of per connection */
#define PROTOPT_ALPN (1<<8) /* set ALPN for this */
this protocol and act as a gateway */
#define PROTOPT_WILDCARD (1<<12) /* protocol supports wildcard matching */
#define PROTOPT_USERPWDCTRL (1<<13) /* Allow "control bytes" (< 32 ascii) in
- user name and password */
-#define PROTOPT_NOTCPPROXY (1<<14) /* this protocol can't proxy over TCP */
+ username and password */
+#define PROTOPT_NOTCPPROXY (1<<14) /* this protocol cannot proxy over TCP */
#define CONNCHECK_NONE 0 /* No checks */
#define CONNCHECK_ISDEAD (1<<0) /* Check if the connection is dead. */
int port;
unsigned char proxytype; /* curl_proxytype: what kind of proxy that is in
use */
- char *user; /* proxy user name string, allocated */
+ char *user; /* proxy username string, allocated */
char *passwd; /* proxy password string, allocated */
};
const struct Curl_sockaddr_ex *remote_addr;
struct hostname host;
- char *hostname_resolve; /* host name to resolve to address, allocated */
- char *secondaryhostname; /* secondary socket host name (ftp) */
+ char *hostname_resolve; /* hostname to resolve to address, allocated */
+ char *secondaryhostname; /* secondary socket hostname (ftp) */
struct hostname conn_to_host; /* the host to connect to. valid only if
bits.conn_to_host is set */
#ifndef CURL_DISABLE_PROXY
these are updated with data which comes directly from the socket. */
struct ip_quadruple primary;
struct ip_quadruple secondary;
- char *user; /* user name string, allocated */
+ char *user; /* username string, allocated */
char *passwd; /* password string, allocated */
char *options; /* options string, allocated */
char *sasl_authzid; /* authorization identity string, allocated */
/* When this connection is created, store the conditions for the local end
bind. This is stored before the actual bind and before any connection is
made and will serve the purpose of being used for comparison reasons so
- that subsequent bound-requested connections aren't accidentally reusing
+ that subsequent bound-requested connections are not accidentally reusing
wrong connections. */
char *localdev;
unsigned short localportrange;
unsigned long httpauthavail; /* what host auth types were announced */
long numconnects; /* how many new connection did libcurl created */
char *contenttype; /* the content type of the object */
- char *wouldredirect; /* URL this would've been redirected to if asked to */
+ char *wouldredirect; /* URL this would have been redirected to if asked to */
curl_off_t retry_after; /* info from Retry-After: header */
unsigned int header_size; /* size of read header(s) in bytes */
struct curl_certinfo certs; /* info about the certs. Asked for with
CURLOPT_CERTINFO / CURLINFO_CERTINFO */
CURLproxycode pxcode;
- BIT(timecond); /* set to TRUE if the time condition didn't match, which
+ BIT(timecond); /* set to TRUE if the time condition did not match, which
thus made the document NOT get fetched */
BIT(used_proxy); /* the transfer used a proxy */
};
curl_off_t current_speed; /* the ProgressShow() function sets this,
bytes / second */
- /* host name, port number and protocol of the first (not followed) request.
- if set, this should be the host name that we will sent authorization to,
+ /* hostname, port number and protocol of the first (not followed) request.
+ if set, this should be the hostname that we will send authorization to,
no else. Used to make Location: following not keep sending user+password.
This is strdup()ed data. */
char *first_host;
called. */
BIT(allow_port); /* Is set.use_port allowed to take effect or not. This
is always set TRUE when curl_easy_perform() is called. */
- BIT(authproblem); /* TRUE if there's some problem authenticating */
+ BIT(authproblem); /* TRUE if there is some problem authenticating */
/* set after initial USER failure, to prevent an authentication loop */
BIT(wildcardmatch); /* enable wildcard matching */
BIT(disableexpect); /* TRUE if Expect: is disabled due to a previous
struct Curl_multi; /* declared in multihandle.c */
enum dupstring {
- STRING_CERT, /* client certificate file name */
+ STRING_CERT, /* client certificate filename */
STRING_CERT_TYPE, /* format for certificate (default: PEM)*/
- STRING_KEY, /* private key file name */
+ STRING_KEY, /* private key filename */
STRING_KEY_PASSWD, /* plain text private key password */
STRING_KEY_TYPE, /* format for private key (default: PEM) */
- STRING_SSL_CAPATH, /* CA directory name (doesn't work on windows) */
+ STRING_SSL_CAPATH, /* CA directory name (does not work on windows) */
STRING_SSL_CAFILE, /* certificate file to verify peer against */
STRING_SSL_PINNEDPUBLICKEY, /* public key file to verify peer against */
STRING_SSL_CIPHER_LIST, /* list of ciphers to use */
STRING_SSL_ISSUERCERT, /* issuer cert file to check certificate */
STRING_SERVICE_NAME, /* Service name */
#ifndef CURL_DISABLE_PROXY
- STRING_CERT_PROXY, /* client certificate file name */
+ STRING_CERT_PROXY, /* client certificate filename */
STRING_CERT_TYPE_PROXY, /* format for certificate (default: PEM)*/
- STRING_KEY_PROXY, /* private key file name */
+ STRING_KEY_PROXY, /* private key filename */
STRING_KEY_PASSWD_PROXY, /* plain text private key password */
STRING_KEY_TYPE_PROXY, /* format for private key (default: PEM) */
- STRING_SSL_CAPATH_PROXY, /* CA directory name (doesn't work on windows) */
+ STRING_SSL_CAPATH_PROXY, /* CA directory name (does not work on windows) */
STRING_SSL_CAFILE_PROXY, /* certificate file to verify peer against */
STRING_SSL_PINNEDPUBLICKEY_PROXY, /* public key file to verify proxy */
STRING_SSL_CIPHER_LIST_PROXY, /* list of ciphers to use */
STRING_COOKIEJAR, /* dump all cookies to this file */
#endif
STRING_CUSTOMREQUEST, /* HTTP/FTP/RTSP request/method to use */
- STRING_DEFAULT_PROTOCOL, /* Protocol to use when the URL doesn't specify */
+ STRING_DEFAULT_PROTOCOL, /* Protocol to use when the URL does not specify */
STRING_DEVICE, /* local network interface/address to use */
STRING_INTERFACE, /* local network interface to use */
STRING_BINDHOST, /* local address to use */
STRING_SSH_PUBLIC_KEY, /* path to the public key file for auth */
STRING_SSH_HOST_PUBLIC_KEY_MD5, /* md5 of host public key in ascii hex */
STRING_SSH_HOST_PUBLIC_KEY_SHA256, /* sha256 of host public key in base64 */
- STRING_SSH_KNOWNHOSTS, /* file name of knownhosts file */
+ STRING_SSH_KNOWNHOSTS, /* filename of knownhosts file */
#endif
#ifndef CURL_DISABLE_SMTP
STRING_MAIL_FROM,
};
/* callback that gets called when this easy handle is completed within a multi
- handle. Only used for internally created transfers, like for example
+ handle. Only used for internally created transfers, like for example
DoH. */
typedef int (*multidone_func)(struct Curl_easy *easy, CURLcode result);
#ifndef CURL_DISABLE_BINDLOCAL
unsigned short localport; /* local port number to bind to */
unsigned short localportrange; /* number of additional port numbers to test
- in case the 'localport' one can't be
+ in case the 'localport' one cannot be
bind()ed */
#endif
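
A hedged usage sketch of how these fields are normally filled in through the public options (the 'curl' easy handle and the port values are examples):

    /* bind the local end to port 50000, or try any of the following 20
       ports if that one cannot be bound */
    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 50000L);
    curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 20L);
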
curl_write_callback fwrite_func; /* function that stores the output */
struct curl_slist *postquote; /* after the transfer */
struct curl_slist *prequote; /* before the transfer, after type */
/* Despite the name, ftp_create_missing_dirs is for FTP(S) and SFTP
- 1 - create directories that don't exist
+ 1 - create directories that do not exist
2 - the same but also allow MKD to fail once
*/
unsigned char ftp_create_missing_dirs;
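
A hedged usage sketch of requesting the value 2 described above through the public option (the 'curl' easy handle is assumed):

    /* create directories that do not exist, tolerating one failing MKD */
    curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                     (long)CURLFTP_CREATE_DIR_RETRY);
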
/* Here follows boolean settings that define how to behave during
this session. They are STATIC, set by libcurl users or at least initially
- and they don't change during operations. */
+ and they do not change during operations. */
BIT(quick_exit); /* set 1L when it is okay to leak things (like
- threads), as we're about to exit() anyway and
- don't want lengthy cleanups to delay termination,
+ threads), as we are about to exit() anyway and
+ do not want lengthy cleanups to delay termination,
e.g. after a DNS timeout */
BIT(get_filetime); /* get the time and get of the remote file */
#ifndef CURL_DISABLE_PROXY
us */
BIT(wildcard_enabled); /* enable wildcard matching */
#endif
- BIT(hide_progress); /* don't use the progress meter */
+ BIT(hide_progress); /* do not use the progress meter */
BIT(http_fail_on_error); /* fail on HTTP error codes >= 400 */
BIT(http_keep_sending_on_error); /* for HTTP status codes >= 300 */
BIT(http_follow_location); /* follow HTTP redirects */
#ifdef USE_UNIX_SOCKETS
BIT(abstract_unix_socket);
#endif
- BIT(disallow_username_in_url); /* disallow username in url */
+ BIT(disallow_username_in_url); /* disallow username in URL */
#ifndef CURL_DISABLE_DOH
BIT(doh); /* DNS-over-HTTPS enabled */
BIT(doh_verifypeer); /* DoH certificate peer verification */
* Curl_auth_create_login_message()
*
* This is used to generate an already encoded LOGIN message containing the
- * user name or password ready for sending to the recipient.
+ * username or password ready for sending to the recipient.
*
* Parameters:
*
- * valuep [in] - The user name or user's password.
+ * valuep [in] - The username or user's password.
* out [out] - The result storage.
*
* Returns void.
* Curl_auth_create_external_message()
*
* This is used to generate an already encoded EXTERNAL message containing
- * the user name ready for sending to the recipient.
+ * the username ready for sending to the recipient.
*
* Parameters:
*
- * user [in] - The user name.
+ * user [in] - The username.
* out [out] - The result storage.
*
* Returns void.
* Parameters:
*
* chlg [in] - The challenge.
- * userp [in] - The user name.
+ * userp [in] - The username.
* passwdp [in] - The user's password.
* out [out] - The result storage.
*
case ',':
if(!starts_with_quote) {
- /* This signals the end of the content if we didn't get a starting
+ /* This signals the end of the content if we did not get a starting
quote and then we do "sloppy" parsing */
c = 0; /* the end */
continue;
*
* data [in] - The session handle.
* chlg [in] - The challenge message.
- * userp [in] - The user name.
+ * userp [in] - The username.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
* out [out] - The result storage.
}
}
else
- break; /* We're done here */
+ break; /* We are done here */
/* Pass all additional spaces here */
while(*chlg && ISBLANK(*chlg))
if(before && !digest->stale)
return CURLE_BAD_CONTENT_ENCODING;
- /* We got this header without a nonce, that's a bad Digest line! */
+ /* We got this header without a nonce, that is a bad Digest line! */
if(!digest->nonce)
return CURLE_BAD_CONTENT_ENCODING;
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name.
+ * userp [in] - The username.
* passwdp [in] - The user's password.
* request [in] - The HTTP request.
* uripath [in] - The path of the HTTP uri.
return CURLE_OUT_OF_MEMORY;
if(digest->qop && strcasecompare(digest->qop, "auth-int")) {
- /* We don't support auth-int for PUT or POST */
+ /* We do not support auth-int for PUT or POST */
char hashed[65];
char *hashthis2;
Authorization: Digest username="testuser", realm="testrealm", \
nonce="1053604145", uri="/64", response="c55f7f30d83d774a3d2dcacf725abaca"
- Digest parameters are all quoted strings. Username which is provided by
+ Digest parameters are all quoted strings. Username which is provided by
the user will need double quotes and backslashes within it escaped.
realm, nonce, and opaque will need backslashes as well as they were
- de-escaped when copied from request header. cnonce is generated with
- web-safe characters. uri is already percent encoded. nc is 8 hex
- characters. algorithm and qop with standard values only contain web-safe
+ de-escaped when copied from request header. cnonce is generated with
+ web-safe characters. uri is already percent encoded. nc is 8 hex
+ characters. algorithm and qop with standard values only contain web-safe
characters.
*/
userp_quoted = auth_digest_string_quoted(digest->userhash ? userh : userp);
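
A hedged usage sketch that makes libcurl generate an Authorization: Digest header like the quoted example above (the URL, username and password are examples; the 'curl' easy handle is assumed):

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/64");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "testuser:secret");
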
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name.
+ * userp [in] - The username.
* passwdp [in] - The user's password.
* request [in] - The HTTP request.
* uripath [in] - The path of the HTTP uri.
*
* data [in] - The session handle.
* chlg [in] - The challenge message.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
* out [out] - The result storage.
status = s_pSecFn->QuerySecurityPackageInfo((TCHAR *) TEXT(SP_NAME_DIGEST),
&SecurityPackage);
if(status != SEC_E_OK) {
- failf(data, "SSPI: couldn't get auth info");
+ failf(data, "SSPI: could not get auth info");
return CURLE_AUTH_ERROR;
}
}
}
else
- break; /* We're done here */
+ break; /* We are done here */
/* Pass all additional spaces here */
while(*chlg && ISBLANK(*chlg))
{
size_t chlglen = strlen(chlg);
- /* We had an input token before so if there's another one now that means we
- provided bad credentials in the previous request or it's stale. */
+ /* We had an input token before so if there is another one now that means we
+ provided bad credentials in the previous request or it is stale. */
if(digest->input_token) {
bool stale = false;
const char *p = chlg;
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* request [in] - The HTTP request.
* uripath [in] - The path of the HTTP uri.
status = s_pSecFn->QuerySecurityPackageInfo((TCHAR *) TEXT(SP_NAME_DIGEST),
&SecurityPackage);
if(status != SEC_E_OK) {
- failf(data, "SSPI: couldn't get auth info");
+ failf(data, "SSPI: could not get auth info");
return CURLE_AUTH_ERROR;
}
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name.
+ * userp [in] - The username.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in[ - The host name.
+ * host [in] - The hostname.
* mutual_auth [in] - Flag specifying whether or not mutual authentication
* is enabled.
* chlg [in] - Optional challenge message.
/* Process the maximum message size the server can receive */
if(max_size > 0) {
/* The server has told us it supports a maximum receive buffer, however, as
- we don't require one unless we are encrypting data, we tell the server
+ we do not require one unless we are encrypting data, we tell the server
our receive buffer is zero. */
max_size = 0;
}
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* mutual_auth [in] - Flag specifying whether or not mutual authentication
* is enabled.
* chlg [in] - Optional challenge message.
TEXT(SP_NAME_KERBEROS),
&SecurityPackage);
if(status != SEC_E_OK) {
- failf(data, "SSPI: couldn't get auth info");
+ failf(data, "SSPI: could not get auth info");
return CURLE_AUTH_ERROR;
}
/* Process the maximum message size the server can receive */
if(max_size > 0) {
/* The server has told us it supports a maximum receive buffer, however, as
- we don't require one unless we are encrypting data, we tell the server
+ we do not require one unless we are encrypting data, we tell the server
our receive buffer is zero. */
max_size = 0;
}
/* "NTLMSSP" signature is always in ASCII regardless of the platform */
#define NTLMSSP_SIGNATURE "\x4e\x54\x4c\x4d\x53\x53\x50"
-/* The fixed host name we provide, in order to not leak our real local host
-   name. Copy the name used by Firefox. */
+/* The fixed hostname we provide, in order to not leak our real local
+   hostname. Copy the name used by Firefox. */
#define NTLM_HOSTNAME "WORKSTATION"
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* ntlm [in/out] - The NTLM data struct being used and modified.
* out [out] - The result storage.
*
"%c%c" /* 2 zeroes */
"%c%c" /* host length */
"%c%c" /* host allocated space */
- "%c%c" /* host name offset */
+ "%c%c" /* hostname offset */
"%c%c" /* 2 zeroes */
- "%s" /* host name */
+ "%s" /* hostname */
"%s", /* domain string */
0, /* trailing zero */
0, 0, 0, /* part of type-1 long */
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* ntlm [in/out] - The NTLM data struct being used and modified.
* out [out] - The result storage.
12 LM/LMv2 Response security buffer
20 NTLM/NTLMv2 Response security buffer
28 Target Name security buffer
- 36 User Name security buffer
+ 36 username security buffer
44 Workstation Name security buffer
(52) Session Key security buffer (*)
(60) Flags long (*)
userlen = strlen(user);
#ifndef NTLM_HOSTNAME
- /* Get the machine's un-qualified host name as NTLM doesn't like the fully
+ /* Get the machine's un-qualified hostname as NTLM does not like the fully
qualified domain name */
if(Curl_gethostname(host, sizeof(host))) {
infof(data, "gethostname() failed, continuing without");
/* Make sure that the domain, user and host strings fit in the
buffer before we copy them there. */
if(size + userlen + domlen + hostlen >= NTLM_BUFSIZE) {
- failf(data, "user + domain + host name too big");
+ failf(data, "user + domain + hostname too big");
return CURLE_OUT_OF_MEMORY;
}
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* ntlm [in/out] - The NTLM data struct being used and modified.
* out [out] - The result storage.
*
status = s_pSecFn->QuerySecurityPackageInfo((TCHAR *) TEXT(SP_NAME_NTLM),
&SecurityPackage);
if(status != SEC_E_OK) {
- failf(data, "SSPI: couldn't get auth info");
+ failf(data, "SSPI: could not get auth info");
return CURLE_AUTH_ERROR;
}
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* ntlm [in/out] - The NTLM data struct being used and modified.
* out [out] - The result storage.
*
* Parameters:
*
- * user[in] - The user name.
- * host[in] - The host name.
+ * user[in] - The username.
+ * host[in] - The hostname.
* port[in] - The port(when not Port 80).
* bearer[in] - The bearer token.
* out[out] - The result storage.
*
* Parameters:
*
- * user[in] - The user name.
+ * user[in] - The username.
* bearer[in] - The bearer token.
* out[out] - The result storage.
*
* Parameters:
*
* data [in] - The session handle.
- * userp [in] - The user name in the format User or Domain\User.
+ * userp [in] - The username in the format User or Domain\User.
* passwdp [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* chlg64 [in] - The optional base64 encoded challenge message.
* nego [in/out] - The Negotiate data struct being used and modified.
*
if(nego->context && nego->status == GSS_S_COMPLETE) {
/* We finished successfully our part of authentication, but server
- * rejected it (since we're again here). Exit with an error since we
- * can't invent anything better */
+ * rejected it (since we are again here). Exit with an error since we
+ * cannot invent anything better */
Curl_auth_cleanup_spnego(nego);
return CURLE_LOGIN_DENIED;
}
* Parameters:
*
* data [in] - The session handle.
- * user [in] - The user name in the format User or Domain\User.
+ * user [in] - The username in the format User or Domain\User.
* password [in] - The user's password.
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* chlg64 [in] - The optional base64 encoded challenge message.
* nego [in/out] - The Negotiate data struct being used and modified.
*
if(nego->context && nego->status == SEC_E_OK) {
/* We finished successfully our part of authentication, but server
- * rejected it (since we're again here). Exit with an error since we
- * can't invent anything better */
+ * rejected it (since we are again here). Exit with an error since we
+ * cannot invent anything better */
Curl_auth_cleanup_spnego(nego);
return CURLE_LOGIN_DENIED;
}
TEXT(SP_NAME_NEGOTIATE),
&SecurityPackage);
if(nego->status != SEC_E_OK) {
- failf(data, "SSPI: couldn't get auth info");
+ failf(data, "SSPI: could not get auth info");
return CURLE_AUTH_ERROR;
}
* Parameters:
*
* service [in] - The service type such as http, smtp, pop or imap.
- * host [in] - The host name.
+ * host [in] - The hostname.
* realm [in] - The realm.
*
* Returns a pointer to the newly allocated SPN.
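
A minimal sketch of the common service/hostname form such an SPN takes, assuming realm handling is left out (the service and hostname are examples):

    #include <stdio.h>

    static void build_example_spn(void)
    {
      char spn[256];
      /* results in "HTTP/proxy.example.com" */
      snprintf(spn, sizeof(spn), "%s/%s", "HTTP", "proxy.example.com");
      puts(spn);
    }
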
return NULL;
/* Allocate and return a TCHAR based SPN. Since curlx_convert_UTF8_to_tchar
- must be freed by curlx_unicodefree we'll dupe the result so that the
+ must be freed by curlx_unicodefree we will dupe the result so that the
pointer this function returns can be normally free'd. */
tchar_spn = curlx_convert_UTF8_to_tchar(utf8_spn);
free(utf8_spn);
* Domain/User (curl Down-level format - for compatibility with existing code)
* User@Domain (User Principal Name)
*
- * Note: The user name may be empty when using a GSS-API library or Windows
+ * Note: The username may be empty when using a GSS-API library or Windows
* SSPI as the user and domain are either obtained from the credentials cache
* when using GSS-API or via the currently logged in user's credentials when
* using Windows SSPI.
*
* Parameters:
*
- * user [in] - The user name.
+ * user [in] - The username.
*
* Returns TRUE on success; otherwise FALSE.
*/
};
/*
- * Feature presence run-time check functions.
+ * Feature presence runtime check functions.
*
* Warning: the value returned by these should not change between
* curl_global_init() and curl_global_cleanup() calls.
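
A hedged sketch of such a runtime check as done from an application, via the public curl_version_info() call (the feature bit picked is just an example):

    #include <stdio.h>
    #include <curl/curl.h>

    static void show_http2_support(void)
    {
      curl_version_info_data *vi = curl_version_info(CURLVERSION_NOW);
      if(vi->features & CURL_VERSION_HTTP2)
        puts("HTTP/2 is available in this libcurl");
    }
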
LIBCURL_VERSION,
LIBCURL_VERSION_NUM,
OS, /* as found by configure or set by hand at build-time */
- 0, /* features bitmask is built at run-time */
+ 0, /* features bitmask is built at runtime */
NULL, /* ssl_version */
0, /* ssl_version_num, this is kept at zero */
NULL, /* zlib_version */
static char zstd_buffer[80];
#endif
- (void)stamp; /* avoid compiler warnings, we don't use this */
+ (void)stamp; /* avoid compiler warnings, we do not use this */
#ifdef USE_SSL
Curl_ssl_version(ssl_buffer, sizeof(ssl_buffer));
#if defined(CURL_WINDOWS_APP)
/* We have no way to determine the Windows version from Windows apps,
- so let's assume we're running on the target Windows version. */
+ so let's assume we are running on the target Windows version. */
const WORD fullVersion = MAKEWORD(minorVersion, majorVersion);
const WORD targetVersion = (WORD)_WIN32_WINNT;
}
if(matched && (platform == PLATFORM_WINDOWS)) {
- /* we're always running on PLATFORM_WINNT */
+ /* we are always running on PLATFORM_WINNT */
matched = FALSE;
}
#elif !defined(_WIN32_WINNT) || !defined(_WIN32_WINNT_WIN2K) || \
msh3_data_sent
};
-/* Decode HTTP status code. Returns -1 if no valid status code was
+/* Decode HTTP status code. Returns -1 if no valid status code was
decoded. (duplicate from http2.c) */
static int decode_status_code(const char *value, size_t len)
{
}
/* TODO - msh3/msquic will hold onto this memory until the send complete
- event. How do we make sure curl doesn't free it until then? */
+ event. How do we make sure curl does not free it until then? */
*err = CURLE_OK;
nwritten = len;
}
ctx->api = MsH3ApiOpen();
if(!ctx->api) {
- failf(data, "can't create msh3 api");
+ failf(data, "cannot create msh3 api");
return CURLE_FAILED_INIT;
}
&addr,
!verify);
if(!ctx->qconn) {
- failf(data, "can't create msh3 connection");
+ failf(data, "cannot create msh3 connection");
if(ctx->api) {
MsH3ApiClose(ctx->api);
ctx->api = NULL;
/* The pool keeps spares around and half of a full stream windows
* seems good. More does not seem to improve performance.
* The benefit of the pool is that stream buffer to not keep
- * spares. So memory consumption goes down when streams run empty,
+ * spares. Memory consumption goes down when streams run empty,
* have a large upload done, etc. */
#define H3_STREAM_POOL_SPARES \
(H3_STREAM_WINDOW_SIZE / H3_STREAM_CHUNK_SIZE ) / 2
result = Curl_rand(NULL, dest, destlen);
if(result) {
- /* cb_rand is only used for non-cryptographic context. If Curl_rand
+ /* cb_rand is only used for non-cryptographic context. If Curl_rand
failed, just fill 0 and call it *random*. */
memset(dest, 0, destlen);
}
if(!stream)
return 0;
- /* add a CRLF only if we've received some headers */
+ /* add a CRLF only if we have received some headers */
h3_xfer_write_resp_hd(cf, data, stream, STRCONST("\r\n"), stream->closed);
CURL_TRC_CF(data, cf, "[%" CURL_PRId64 "] end_headers, status=%d",
DEBUGASSERT(nread > 0);
if(pktcnt == 0) {
/* first packet in buffer. This is either of a known, "good"
- * payload size or it is a PMTUD. We'll see. */
+ * payload size or it is a PMTUD. We will see. */
gsolen = (size_t)nread;
}
else if((size_t)nread > gsolen ||
return CURLE_FAILED_INIT;
}
#endif /* !OPENSSL_IS_BORINGSSL && !OPENSSL_IS_AWSLC */
- /* Enable the session cache because it's a prerequisite for the
+ /* Enable the session cache because it is a prerequisite for the
* "new session" callback. Use the "external storage" mode to prevent
* OpenSSL from creating an internal session cache.
*/
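
A hedged sketch of the OpenSSL calls the comment above refers to: client-side caching in "external storage" mode plus a new-session callback (ssl_ctx and my_new_session_cb are assumed names):

    /* keep the client cache enabled but stop OpenSSL from storing
       sessions in its internal cache */
    SSL_CTX_set_session_cache_mode(ssl_ctx,
                                   SSL_SESS_CACHE_CLIENT |
                                   SSL_SESS_CACHE_NO_INTERNAL);
    /* called whenever a new session becomes available for storing */
    SSL_CTX_sess_set_new_cb(ssl_ctx, my_new_session_cb);
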
alive = TRUE;
if(*input_pending) {
CURLcode result;
- /* This happens before we've sent off a request and the connection is
- not in use by any other transfer, there shouldn't be any data here,
+ /* This happens before we have sent off a request and the connection is
+ not in use by any other transfer, there should not be any data here,
only "protocol frames" */
*input_pending = FALSE;
result = cf_progress_ingress(cf, data, NULL);
/* The pool keeps spares around and half of a full stream window
* seems good. More does not seem to improve performance.
* The benefit of the pool is that stream buffer to not keep
- * spares. So memory consumption goes down when streams run empty,
+ * spares. Memory consumption goes down when streams run empty,
* have a large upload done, etc. */
#define H3_STREAM_POOL_SPARES \
(H3_STREAM_WINDOW_SIZE / H3_STREAM_CHUNK_SIZE ) / 2
/* detail is already set to the SSL error above */
- /* If we e.g. use SSLv2 request-method and the server doesn't like us
+ /* If we e.g. use SSLv2 request-method and the server does not like us
* (RST connection, etc.), OpenSSL gives no explanation whatsoever and
* the SO_ERROR is also lost.
*/
if(!stream)
return 0;
- /* add a CRLF only if we've received some headers */
+ /* add a CRLF only if we have received some headers */
result = write_resp_raw(cf, data, "\r\n", 2, FALSE);
if(result) {
return -1;
*err = cf_osslq_stream_open(&stream->s, ctx->tls.ossl.ssl, 0,
&ctx->stream_bufcp, data);
if(*err) {
- failf(data, "can't get bidi streams");
+ failf(data, "cannot get bidi streams");
*err = CURLE_SEND_ERROR;
goto out;
}
alive = TRUE;
if(*input_pending) {
CURLcode result;
- /* This happens before we've sent off a request and the connection is
- not in use by any other transfer, there shouldn't be any data here,
+ /* This happens before we have sent off a request and the connection is
+ not in use by any other transfer, there should not be any data here,
only "protocol frames" */
*input_pending = FALSE;
result = cf_progress_ingress(cf, data);
#define H3_STREAM_WINDOW_SIZE (128 * 1024)
#define H3_STREAM_CHUNK_SIZE (16 * 1024)
-/* The pool keeps spares around and half of a full stream windows
- * seems good. More does not seem to improve performance.
- * The benefit of the pool is that stream buffer to not keep
- * spares. So memory consumption goes down when streams run empty,
- * have a large upload done, etc. */
+/* The pool keeps spares around and half of a full stream window seems good.
+ * More does not seem to improve performance. The benefit of the pool is that
+ * stream buffers do not keep spares. Memory consumption goes down when
+ * streams run empty, have a large upload done, etc. */
#define H3_STREAM_POOL_SPARES \
(H3_STREAM_WINDOW_SIZE / H3_STREAM_CHUNK_SIZE ) / 2
/* Receive and Send max number of chunks just follows from the
ctx->cfg = quiche_config_new(QUICHE_PROTOCOL_VERSION);
if(!ctx->cfg) {
- failf(data, "can't create quiche config");
+ failf(data, "cannot create quiche config");
return CURLE_FAILED_INIT;
}
quiche_config_enable_pacing(ctx->cfg, false);
&sockaddr->sa_addr, sockaddr->addrlen,
ctx->cfg, ctx->tls.ossl.ssl, false);
if(!ctx->qconn) {
- failf(data, "can't create quiche connection");
+ failf(data, "cannot create quiche connection");
return CURLE_OUT_OF_MEMORY;
}
return FALSE;
if(*input_pending) {
- /* This happens before we've sent off a request and the connection is
- not in use by any other transfer, there shouldn't be any data here,
+ /* This happens before we have sent off a request and the connection is
+ not in use by any other transfer, there should not be any data here,
only "protocol frames" */
*input_pending = FALSE;
if(cf_process_ingress(cf, data))
}
#ifdef CURL_CA_FALLBACK
else {
- /* verifying the peer without any CA certificates won't work so
+ /* verifying the peer without any CA certificates will not work so
use wolfssl's built-in default as fallback */
wolfSSL_CTX_set_default_verify_paths(ctx->wssl.ctx);
}
return CURLE_URL_MALFORMAT;
}
if(conn->bits.httpproxy && conn->bits.tunnel_proxy) {
- failf(data, "HTTP/3 is not supported over a HTTP proxy");
+ failf(data, "HTTP/3 is not supported over an HTTP proxy");
return CURLE_URL_MALFORMAT;
}
#endif
/*
* ssh_statemach_act() runs the SSH state machine as far as it can without
- * blocking and without reaching the end. The data the pointer 'block' points
+ * blocking and without reaching the end. The data the pointer 'block' points
* to will be set to TRUE if the libssh function returns SSH_AGAIN
* meaning it wants to be called again when the socket is ready
*/
int rc = SSH_NO_ERROR, err;
int seekerr = CURL_SEEKFUNC_OK;
const char *err_msg;
- *block = 0; /* we're not blocking by default */
+ *block = 0; /* we are not blocking by default */
do {
failf(data, "Could not seek stream");
return CURLE_FTP_COULDNT_USE_REST;
}
- /* seekerr == CURL_SEEKFUNC_CANTSEEK (can't seek to offset) */
+ /* seekerr == CURL_SEEKFUNC_CANTSEEK (cannot seek to offset) */
do {
char scratch[4*1024];
size_t readthisamountnow =
/* not set by Curl_xfer_setup to preserve keepon bits */
conn->sockfd = conn->writesockfd;
- /* store this original bitmask setup to use later on if we can't
+ /* store this original bitmask setup to use later on if we cannot
figure out a "real" bitmask */
sshc->orig_waitfor = data->req.keepon;
with both accordingly */
data->state.select_bits = CURL_CSELECT_OUT;
- /* since we don't really wait for anything at this point, we want the
+ /* since we do not really wait for anything at this point, we want the
state machine to move on as soon as possible so we set a very short
timeout here */
Curl_expire(data, 0, EXPIRE_RUN_NOW);
++sshc->slash_pos;
if(rc < 0) {
/*
- * Abort if failure wasn't that the dir already exists or the
+ * Abort if failure was not that the dir already exists or the
* permission was denied (creation might succeed further down the
* path) - retry on unspecific FAILURE also
*/
!(attrs->flags & SSH_FILEXFER_ATTR_SIZE) ||
(attrs->size == 0)) {
/*
- * sftp_fstat didn't return an error, so maybe the server
- * just doesn't support stat()
- * OR the server doesn't return a file size with a stat()
+ * sftp_fstat did not return an error, so maybe the server
+ * just does not support stat()
+ * OR the server does not return a file size with a stat()
* OR file size is 0
*/
data->req.size = -1;
/* We can resume if we can seek to the resume position */
if(data->state.resume_from) {
if(data->state.resume_from < 0) {
- /* We're supposed to download the last abs(from) bytes */
+ /* We are supposed to download the last abs(from) bytes */
if((curl_off_t)size < -data->state.resume_from) {
failf(data, "Offset (%"
CURL_FORMAT_CURL_OFF_T ") was beyond file size (%"
/* not set by Curl_xfer_setup to preserve keepon bits */
conn->sockfd = conn->writesockfd;
- /* store this original bitmask setup to use later on if we can't
+ /* store this original bitmask setup to use later on if we cannot
figure out a "real" bitmask */
sshc->orig_waitfor = data->req.keepon;
FALLTHROUGH();
case SSH_SESSION_DISCONNECT:
- /* during weird times when we've been prematurely aborted, the channel
+ /* during weird times when we have been prematurely aborted, the channel
is still alive when we reach this state and we MUST kill the channel
properly first */
if(sshc->scp_session) {
{
struct ssh_conn *sshc = &conn->proto.sshc;
- /* If it didn't block, or nothing was returned by ssh_get_poll_flags
+ /* If it did not block, or nothing was returned by ssh_get_poll_flags
* have the original set */
conn->waitfor = sshc->orig_waitfor;
(void) dead_connection;
if(ssh->ssh_session) {
- /* only if there's a session still around to use! */
+ /* only if there is a session still around to use! */
state(data, SSH_SESSION_DISCONNECT);
DEBUGF(infof(data, "SSH DISCONNECT starts now"));
if(conn->proto.sshc.ssh_session) {
- /* only if there's a session still around to use! */
+ /* only if there is a session still around to use! */
state(data, SSH_SFTP_SHUTDOWN);
result = myssh_block_statemach(data, TRUE);
}
}
/*
- * SFTP is a binary protocol, so we don't send text commands
+ * SFTP is a binary protocol, so we do not send text commands
* to the server. Instead, we scan for commands used by
* OpenSSH's sftp program and call the appropriate libssh
* functions.
#endif
/*
- * Earlier libssh2 versions didn't have the ability to seek to 64bit positions
- * with 32bit size_t.
+ * Earlier libssh2 versions did not have the ability to seek to 64-bit
+ * positions with 32-bit size_t.
*/
#ifdef HAVE_LIBSSH2_SFTP_SEEK64
#define SFTP_SEEK(x,y) libssh2_sftp_seek64(x, (libssh2_uint64_t)y)
#endif
/*
- * Earlier libssh2 versions didn't do SCP properly beyond 32bit sizes on 32bit
- * architectures so we check of the necessary function is present.
+ * Earlier libssh2 versions did not do SCP properly beyond 32-bit sizes on
+ * 32-bit architectures so we check if the necessary function is present.
*/
#ifndef HAVE_LIBSSH2_SCP_SEND64
#define SCP_SEND(a,b,c,d) libssh2_scp_send_ex(a, b, (int)(c), (size_t)d, 0, 0)
#ifdef HAVE_LIBSSH2_KNOWNHOST_API
if(data->set.str[STRING_SSH_KNOWNHOSTS]) {
- /* we're asked to verify the host against a file */
+ /* we are asked to verify the host against a file */
struct connectdata *conn = data->conn;
struct ssh_conn *sshc = &conn->proto.sshc;
struct libssh2_knownhost *host = NULL;
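
A hedged usage sketch of enabling this verification from the application side (the path is an example and the 'curl' easy handle is assumed):

    /* verify the server's host key against an OpenSSH-style file */
    curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                     "/home/user/.ssh/known_hosts");
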
if(remotekey) {
/*
- * A subject to figure out is what host name we need to pass in here.
- * What host name does OpenSSH store in its file if an IDN name is
+ * A subject to figure out is what hostname we need to pass in here.
+ * What hostname does OpenSSH store in its file if an IDN name is
* used?
*/
enum curl_khmatch keymatch;
break;
#endif
default:
- infof(data, "unsupported key type, can't check knownhosts");
+ infof(data, "unsupported key type, cannot check knownhosts");
keybit = 0;
break;
}
result = sshc->actualcode = CURLE_PEER_FAILED_VERIFICATION;
break;
case CURLKHSTAT_FINE_REPLACE:
- /* remove old host+key that doesn't match */
+ /* remove old host+key that does not match */
if(host)
libssh2_knownhost_del(sshc->kh, host);
FALLTHROUGH();
case CURLKHSTAT_FINE_ADD_TO_FILE:
/* proceed */
if(keycheck != LIBSSH2_KNOWNHOST_CHECK_MATCH) {
- /* the found host+key didn't match but has been told to be fine
+ /* the found host+key did not match but has been told to be fine
anyway so we add it in memory */
int addrc = libssh2_knownhost_add(sshc->kh,
conn->host.name, NULL,
size_t b64_pos = 0;
#ifdef LIBSSH2_HOSTKEY_HASH_SHA256
- /* The fingerprint points to static storage (!), don't free() it. */
+ /* The fingerprint points to static storage (!), do not free() it. */
fingerprint = libssh2_hostkey_hash(sshc->ssh_session,
LIBSSH2_HOSTKEY_HASH_SHA256);
#else
LIBSSH2_HOSTKEY_HASH_MD5);
if(fingerprint) {
- /* The fingerprint points to static storage (!), don't free() it. */
+ /* The fingerprint points to static storage (!), do not free() it. */
int i;
for(i = 0; i < 16; i++) {
msnprintf(&md5buffer[i*2], 3, "%02x", (unsigned char) fingerprint[i]);
/*
* ssh_statemach_act() runs the SSH state machine as far as it can without
- * blocking and without reaching the end. The data the pointer 'block' points
+ * blocking and without reaching the end. The data the pointer 'block' points
* to will be set to TRUE if the libssh2 function returns LIBSSH2_ERROR_EAGAIN
* meaning it wants to be called again when the socket is ready
*/
unsigned long sftperr;
int seekerr = CURL_SEEKFUNC_OK;
size_t readdir_len;
- *block = 0; /* we're not blocking by default */
+ *block = 0; /* we are not blocking by default */
do {
switch(sshc->state) {
* must never change it later. Thus, always specify the correct username
* here, even though the libssh2 docs kind of indicate that it should be
* possible to get a 'generic' list (not user-specific) of authentication
- * methods, presumably with a blank username. That won't work in my
+ * methods, presumably with a blank username. That will not work in my
* experience.
* So always specify it here.
*/
if(sftperr)
result = sftp_libssh2_error_to_CURLE(sftperr);
else
- /* in this case, the error wasn't in the SFTP level but for example
+ /* in this case, the error was not in the SFTP level but for example
a time-out or similar */
result = CURLE_SSH;
sshc->actualcode = result;
}
/*
- * SFTP is a binary protocol, so we don't send text commands
+ * SFTP is a binary protocol, so we do not send text commands
* to the server. Instead, we scan for commands used by
* OpenSSH's sftp program and call the appropriate libssh2
* functions.
if(!strncasecompare(cmd, "chmod", 5)) {
/* Since chown and chgrp only set owner OR group but libssh2 wants to
* set them both at once, we need to obtain the current ownership
- * first. This takes an extra protocol round trip.
+ * first. This takes an extra protocol round trip.
*/
rc = libssh2_sftp_stat_ex(sshc->sftp_session, sshc->quote_path2,
curlx_uztoui(strlen(sshc->quote_path2)),
}
#if SIZEOF_TIME_T > SIZEOF_LONG
if(date > 0xffffffff) {
- /* if 'long' can't old >32bit, this date cannot be sent */
+ /* if 'long' cannot hold >32bit, this date cannot be sent */
failf(data, "date overflow");
fail = TRUE;
}
failf(data, "Could not seek stream");
return CURLE_FTP_COULDNT_USE_REST;
}
- /* seekerr == CURL_SEEKFUNC_CANTSEEK (can't seek to offset) */
+ /* seekerr == CURL_SEEKFUNC_CANTSEEK (cannot seek to offset) */
do {
char scratch[4*1024];
size_t readthisamountnow =
sshc->actualcode = result;
}
else {
- /* store this original bitmask setup to use later on if we can't
+ /* store this original bitmask setup to use later on if we cannot
figure out a "real" bitmask */
sshc->orig_waitfor = data->req.keepon;
with both accordingly */
data->state.select_bits = CURL_CSELECT_OUT;
- /* since we don't really wait for anything at this point, we want the
+ /* since we do not really wait for anything at this point, we want the
state machine to move on as soon as possible so we set a very short
timeout here */
Curl_expire(data, 0, EXPIRE_RUN_NOW);
++sshc->slash_pos;
if(rc < 0) {
/*
- * Abort if failure wasn't that the dir already exists or the
+ * Abort if failure was not that the dir already exists or the
* permission was denied (creation might succeed further down the
* path) - retry on unspecific FAILURE also
*/
!(attrs.flags & LIBSSH2_SFTP_ATTR_SIZE) ||
(attrs.filesize == 0)) {
/*
- * libssh2_sftp_open() didn't return an error, so maybe the server
- * just doesn't support stat()
- * OR the server doesn't return a file size with a stat()
+ * libssh2_sftp_open() did not return an error, so maybe the server
+ * just does not support stat()
+ * OR the server does not return a file size with a stat()
* OR file size is 0
*/
data->req.size = -1;
/* We can resume if we can seek to the resume position */
if(data->state.resume_from) {
if(data->state.resume_from < 0) {
- /* We're supposed to download the last abs(from) bytes */
+ /* We are supposed to download the last abs(from) bytes */
if((curl_off_t)attrs.filesize < -data->state.resume_from) {
failf(data, "Offset (%"
CURL_FORMAT_CURL_OFF_T ") was beyond file size (%"
case SSH_SCP_UPLOAD_INIT:
/*
* libssh2 requires that the destination path is a full path that
- * includes the destination file and name OR ends in a "/" . If this is
+ * includes the destination file and name OR ends in a "/" . If this is
* not done the destination file will be named the same name as the last
* directory in the path.
*/
sshc->actualcode = result;
}
else {
- /* store this original bitmask setup to use later on if we can't
+ /* store this original bitmask setup to use later on if we cannot
figure out a "real" bitmask */
sshc->orig_waitfor = data->req.keepon;
break;
case SSH_SESSION_DISCONNECT:
- /* during weird times when we've been prematurely aborted, the channel
+ /* during weird times when we have been prematurely aborted, the channel
is still alive when we reach this state and we MUST kill the channel
properly first */
if(sshc->ssh_channel) {
* When one of the libssh2 functions has returned LIBSSH2_ERROR_EAGAIN this
* function is used to figure out in what direction and stores this info so
* that the multi interface can take advantage of it. Make sure to call this
- * function in all cases so that when it _doesn't_ return EAGAIN we can
+ * function in all cases so that when it _does not_ return EAGAIN we can
* restore the default wait bits.
*/
static void ssh_block2waitfor(struct Curl_easy *data, bool block)
}
}
if(!dir)
- /* It didn't block or libssh2 didn't reveal in which direction, put back
+ /* It did not block or libssh2 did not reveal in which direction, put back
the original set */
conn->waitfor = sshc->orig_waitfor;
}
do {
result = ssh_statemach_act(data, &block);
*done = (sshc->state == SSH_STOP) ? TRUE : FALSE;
- /* if there's no error, it isn't done and it didn't EWOULDBLOCK, then
+ /* if there is no error, it is not done and it did not EWOULDBLOCK, then
try again */
} while(!result && !*done && !block);
ssh_block2waitfor(data, block);
(void) dead_connection;
if(sshc->ssh_session) {
- /* only if there's a session still around to use! */
+ /* only if there is a session still around to use! */
state(data, SSH_SESSION_DISCONNECT);
result = ssh_block_statemach(data, conn, TRUE);
}
DEBUGF(infof(data, "SSH DISCONNECT starts now"));
if(sshc->ssh_session) {
- /* only if there's a session still around to use! */
+ /* only if there is a session still around to use! */
state(data, SSH_SFTP_SHUTDOWN);
result = ssh_block_statemach(data, conn, TRUE);
}
#endif
#ifdef HAVE_LIBSSH2_VERSION
-/* get it run-time if possible */
+/* get it runtime if possible */
#define CURL_LIBSSH2_VERSION libssh2_version(0)
#else
-/* use build-time if run-time not possible */
+/* use build-time if runtime not possible */
#define CURL_LIBSSH2_VERSION LIBSSH2_VERSION
#endif
rc = wolfSSH_SetUsername(sshc->ssh_session, conn->user);
if(rc != WS_SUCCESS) {
- failf(data, "wolfSSH failed to set user name");
+ failf(data, "wolfSSH failed to set username");
goto error;
}
/*
* wssh_statemach_act() runs the SSH state machine as far as it can without
- * blocking and without reaching the end. The data the pointer 'block' points
+ * blocking and without reaching the end. The data the pointer 'block' points
* to will be set to TRUE if the wolfssh function returns EAGAIN meaning it
* wants to be called again when the socket is ready
*/
struct SSHPROTO *sftp_scp = data->req.p.ssh;
WS_SFTPNAME *name;
int rc = 0;
- *block = FALSE; /* we're not blocking by default */
+ *block = FALSE; /* we are not blocking by default */
do {
switch(sshc->state) {
failf(data, "Could not seek stream");
return CURLE_FTP_COULDNT_USE_REST;
}
- /* seekerr == CURL_SEEKFUNC_CANTSEEK (can't seek to offset) */
+ /* seekerr == CURL_SEEKFUNC_CANTSEEK (cannot seek to offset) */
do {
char scratch[4*1024];
size_t readthisamountnow =
sshc->actualcode = result;
}
else {
- /* store this original bitmask setup to use later on if we can't
+ /* store this original bitmask setup to use later on if we cannot
figure out a "real" bitmask */
sshc->orig_waitfor = data->req.keepon;
with both accordingly */
data->state.select_bits = CURL_CSELECT_OUT;
- /* since we don't really wait for anything at this point, we want the
+ /* since we do not really wait for anything at this point, we want the
state machine to move on as soon as possible so we set a very short
timeout here */
Curl_expire(data, 0, EXPIRE_RUN_NOW);
do {
result = wssh_statemach_act(data, &block);
*done = (sshc->state == SSH_STOP) ? TRUE : FALSE;
- /* if there's no error, it isn't done and it didn't EWOULDBLOCK, then
+ /* if there is no error, it is not done and it did not EWOULDBLOCK, then
try again */
if(*done) {
DEBUGF(infof(data, "wssh_statemach_act says DONE"));
DEBUGF(infof(data, "SSH DISCONNECT starts now"));
if(conn->proto.sshc.ssh_session) {
- /* only if there's a session still around to use! */
+ /* only if there is a session still around to use! */
state(data, SSH_SFTP_SHUTDOWN);
result = wssh_block_statemach(data, TRUE);
}
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
* Source file for all GnuTLS-specific code for the TLS/SSL layer. No code
* but vtls.c should ever call or use these functions.
*
- * Note: don't use the GnuTLS' *_t variable type names in this source code,
+ * Note: do not use the GnuTLS' *_t variable type names in this source code,
* since they were not present in 1.0.X.
*/
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
int what;
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
}
if(!tls13support) {
- /* If the running GnuTLS doesn't support TLS 1.3, we must not specify a
+ /* If the running GnuTLS does not support TLS 1.3, we must not specify a
prioritylist involving that since it will make GnuTLS return an
error back at us */
if((ssl_version_max == CURL_SSLVERSION_MAX_TLSv1_3) ||
tls13support = gnutls_check_version("3.6.5");
/* Ensure +SRP comes at the *end* of all relevant strings so that it can be
- * removed if a run-time error indicates that SRP is not supported by this
+ * removed if a runtime error indicates that SRP is not supported by this
* GnuTLS version */
if(config->version == CURL_SSLVERSION_SSLv2 ||
/* Result is returned to caller */
CURLcode result = CURLE_SSL_PINNEDPUBKEYNOTMATCH;
- /* if a path wasn't specified, don't pin */
+ /* if a path was not specified, do not pin */
if(!pinnedpubkey)
return CURLE_OK;
}
#endif
}
- infof(data, " common name: WARNING couldn't obtain");
+ infof(data, " common name: WARNING could not obtain");
}
if(data->set.ssl.certinfo && chainp) {
peer->sni ? peer->sni :
peer->hostname);
#if GNUTLS_VERSION_NUMBER < 0x030306
- /* Before 3.3.6, gnutls_x509_crt_check_hostname() didn't check IP
+ /* Before 3.3.6, gnutls_x509_crt_check_hostname() did not check IP
addresses. */
if(!rc) {
#ifdef USE_IPV6
size_t certaddrlen = sizeof(certaddr);
int ret = gnutls_x509_crt_get_subject_alt_name(x509_cert, i, certaddr,
&certaddrlen, NULL);
- /* If this happens, it wasn't an IP address. */
+ /* If this happens, it was not an IP address. */
if(ret == GNUTLS_E_SHORT_MEMORY_BUFFER)
continue;
if(ret < 0)
if(!rc) {
if(config->verifyhost) {
failf(data, "SSL: certificate subject name (%s) does not match "
- "target host name '%s'", certname, peer->dispname);
+ "target hostname '%s'", certname, peer->dispname);
gnutls_x509_crt_deinit(x509_cert);
return CURLE_PEER_FAILED_VERIFICATION;
}
* We use the matching rule described in RFC6125, section 6.4.3.
* https://datatracker.ietf.org/doc/html/rfc6125#section-6.4.3
*
- * In addition: ignore trailing dots in the host names and wildcards, so that
+ * In addition: ignore trailing dots in the hostnames and wildcards, so that
* the names are used normalized. This is what the browsers do.
*
* Do not allow wildcard matching on IP numbers. There are apparently
#include <curl/curl.h>
-/* returns TRUE if there's a match */
+/* returns TRUE if there is a match */
bool Curl_cert_hostcheck(const char *match_pattern, size_t matchlen,
const char *hostname, size_t hostlen);
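To make the matching rules above concrete, here is a small illustrative sketch of how a caller might exercise Curl_cert_hostcheck() (assuming the declaration above is in scope). The hostnames are made-up examples and the expected outcomes simply restate the documented rules: a left-most wildcard label matches, trailing dots are ignored, and IP numbers never wildcard-match.

    #include <string.h>
    #include <stdbool.h>

    /* illustrative only; outcomes restate the documented rules */
    static void hostcheck_examples(void)
    {
      const char *pat = "*.example.com";
      const char *host = "www.example.com";
      const char *dotted = "www.example.com.";
      const char *ippat = "*.0.2.1";
      const char *ip = "192.0.2.1";
      bool a = Curl_cert_hostcheck(pat, strlen(pat), host, strlen(host));
      /* trailing dot is ignored, so this should match as well */
      bool b = Curl_cert_hostcheck(pat, strlen(pat), dotted, strlen(dotted));
      /* wildcard matching on an IP number should be refused */
      bool c = Curl_cert_hostcheck(ippat, strlen(ippat), ip, strlen(ip));
      (void)a; (void)b; (void)c;
    }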
MBEDTLS_SSL_SESSION_TICKETS_DISABLED);
#endif
- /* Check if there's a cached ID we can/should use here! */
+ /* Check if there is a cached ID we can/should use here! */
if(ssl_config->primary.sessionid) {
void *old_session = NULL;
for(i = 0; i < connssl->alpn->count; ++i) {
backend->protocols[i] = connssl->alpn->entries[i];
}
- /* this function doesn't clone the protocols array, which is why we need
+ /* this function does not clone the protocols array, which is why we need
to keep it around */
if(mbedtls_ssl_conf_alpn_protocols(&backend->config,
&backend->protocols[0])) {
return CURLE_SSL_CONNECT_ERROR;
}
- /* If there's already a matching session in the cache, delete it */
+ /* If there is already a matching session in the cache, delete it */
Curl_ssl_sessionid_lock(data);
if(!Curl_ssl_getsessionid(cf, data, &connssl->peer,
&old_ssl_sessionid, NULL))
}
if(ssl_connect_1 == connssl->connecting_state) {
- /* Find out how much more time we're allowed */
+ /* Find out how much more time we are allowed */
timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
*/
#define DEFAULT_CIPHER_SELECTION NULL
#else
-/* ... but it is not the case with old versions of OpenSSL */
+/* not the case with old versions of OpenSSL */
#define DEFAULT_CIPHER_SELECTION \
"ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH"
#endif
#else
/*
* ossl_log_tls12_secret is called by libcurl to make the CLIENT_RANDOMs if the
- * OpenSSL being used doesn't have native support for doing that.
+ * OpenSSL being used does not have native support for doing that.
*/
static void
ossl_log_tls12_secret(const SSL *ssl, bool *keylog_done)
fname[0] = 0; /* blank it first */
RAND_file_name(fname, sizeof(fname));
if(fname[0]) {
- /* we got a file name to try */
+ /* we got a filename to try */
RAND_load_file(fname, RAND_LOAD_LENGTH);
if(rand_enough())
return CURLE_OK;
}
if(!params.cert) {
- failf(data, "ssl engine didn't initialized the certificate "
+ failf(data, "ssl engine did not initialized the certificate "
"properly.");
return 0;
}
sizeof(error_buffer)));
return 0;
}
- X509_free(params.cert); /* we don't need the handle any more... */
+ X509_free(params.cert); /* we do not need the handle any more... */
}
else {
- failf(data, "crypto engine not set, can't load certificate");
+ failf(data, "crypto engine not set, cannot load certificate");
return 0;
}
}
* Note that sk_X509_pop() is used below to make sure the cert is
* removed from the stack properly before getting passed to
* SSL_CTX_add_extra_chain_cert(), which takes ownership. Previously
- * we used sk_X509_value() instead, but then we'd clean it in the
+ * we used sk_X509_value() instead, but then we would clean it in the
* subsequent sk_X509_pop_free() call.
*/
X509 *x = sk_X509_pop(ca);
EVP_PKEY_free(priv_key);
return 0;
}
- EVP_PKEY_free(priv_key); /* we don't need the handle any more... */
+ EVP_PKEY_free(priv_key); /* we do not need the handle any more... */
}
else {
- failf(data, "crypto engine not set, can't load private key");
+ failf(data, "crypto engine not set, cannot load private key");
return 0;
}
}
#if !defined(OPENSSL_NO_RSA) && !defined(OPENSSL_IS_BORINGSSL) && \
!defined(OPENSSL_NO_DEPRECATED_3_0)
{
- /* If RSA is used, don't check the private key if its flags indicate
- * it doesn't support it. */
+ /* If RSA is used, do not check the private key if its flags indicate
+ * it does not support it. */
EVP_PKEY *priv_key = SSL_get_privatekey(ssl);
int pktype;
#ifdef HAVE_OPAQUE_EVP_PKEY
if((size_t)biomem->length < size)
size = biomem->length;
else
- size--; /* don't overwrite the buffer end */
+ size--; /* do not overwrite the buffer end */
memcpy(buf, biomem->data, size);
buf[size] = 0;
/* ====================================================== */
/*
- * Match subjectAltName against the host name.
+ * Match subjectAltName against the hostname.
*/
static bool subj_alt_hostcheck(struct Curl_easy *data,
const char *match_pattern,
Certification Authorities are encouraged to use the dNSName instead.
Matching is performed using the matching rules specified by
- [RFC2459]. If more than one identity of a given type is present in
+ [RFC2459]. If more than one identity of a given type is present in
the certificate (e.g., more than one dNSName name, a match in any one
of the set is considered acceptable.) Names may contain the wildcard
character * which is considered to match any single domain name
bool ipmatched = FALSE;
/* get amount of alternatives, RFC2459 claims there MUST be at least
- one, but we don't depend on it... */
+ one, but we do not depend on it... */
numalts = sk_GENERAL_NAME_num(altnames);
/* loop through all alternatives - until a dnsmatch */
switch(target) {
case GEN_DNS: /* name/pattern comparison */
- /* The OpenSSL man page explicitly says: "In general it cannot be
+ /* The OpenSSL manpage explicitly says: "In general it cannot be
assumed that the data returned by ASN1_STRING_data() is null
terminated or does not contain embedded nulls." But also that
"The actual format of the data will depend on the actual string
is always null-terminated.
*/
if((altlen == strlen(altptr)) &&
- /* if this isn't true, there was an embedded zero in the name
+ /* if this is not true, there was an embedded zero in the name
string and we cannot match it. */
subj_alt_hostcheck(data, altptr, altlen,
peer->hostname, hostlen,
/* an alternative name matched */
;
else if(dNSName || iPAddress) {
- const char *tname = (peer->type == CURL_SSL_PEER_DNS) ? "host name" :
+ const char *tname = (peer->type == CURL_SSL_PEER_DNS) ? "hostname" :
(peer->type == CURL_SSL_PEER_IPV4) ?
"ipv4 address" : "ipv6 address";
infof(data, " subjectAltName does not match %s %s", tname, peer->dispname);
else if(!Curl_cert_hostcheck((const char *)peer_CN,
peerlen, peer->hostname, hostlen)) {
failf(data, "SSL: certificate subject name '%s' does not match "
- "target host name '%s'", peer_CN, peer->dispname);
+ "target hostname '%s'", peer_CN, peer->dispname);
result = CURLE_PEER_FAILED_VERIFICATION;
}
else {
(defined(LIBRESSL_VERSION_NUMBER) && \
LIBRESSL_VERSION_NUMBER <= 0x2040200fL))
/* The authorized responder cert in the OCSP response MUST be signed by the
- peer cert's issuer (see RFC6960 section 4.2.2.2). If that's a root cert,
- no problem, but if it's an intermediate cert OpenSSL has a bug where it
+ peer cert's issuer (see RFC6960 section 4.2.2.2). If that is a root cert,
+ no problem, but if it is an intermediate cert OpenSSL has a bug where it
expects this issuer to be present in the chain embedded in the OCSP
response. So we add it if necessary. */
#endif /* USE_OPENSSL */
-/* The SSL_CTRL_SET_MSG_CALLBACK doesn't exist in ancient OpenSSL versions
+/* The SSL_CTRL_SET_MSG_CALLBACK does not exist in ancient OpenSSL versions
and thus this cannot be done there. */
#ifdef SSL_CTRL_SET_MSG_CALLBACK
ssl_ver >>= 8; /* check the upper 8 bits only below */
- /* SSLv2 doesn't seem to have TLS record-type headers, so OpenSSL
+ /* SSLv2 does not seem to have TLS record-type headers, so OpenSSL
* always pass-up content-type as 0. But the interesting message-type
* is at 'buf[0]'.
*/
}
/* CURL_SSLVERSION_DEFAULT means that no option was selected.
- We don't want to pass 0 to SSL_CTX_set_min_proto_version as
+ We do not want to pass 0 to SSL_CTX_set_min_proto_version as
it would enable all versions down to the lowest supported by
the library.
So we skip this, and stay with the library default
long ssl_version = conn_config->version;
long ssl_version_max = conn_config->version_max;
- (void) data; /* In case it's unused. */
+ (void) data; /* In case it is unused. */
switch(ssl_version) {
case CURL_SSLVERSION_TLSv1_3:
sk_X509_INFO_pop_free(inf, X509_INFO_free);
BIO_free(cbio);
- /* if we didn't end up importing anything, treat that as an error */
+ /* if we did not end up importing anything, treat that as an error */
return (count > 0) ? CURLE_OK : CURLE_SSL_CACERT_BADFILE;
}
#ifdef CURL_CA_FALLBACK
if(!ssl_cafile && !ssl_capath &&
!imported_native_ca && !imported_ca_info_blob) {
- /* verifying the peer without any CA certificates won't
+ /* verifying the peer without any CA certificates will not
work so use openssl's built-in default as fallback */
X509_STORE_set_default_paths(store);
}
if(verifypeer) {
/* Try building a chain using issuers in the trusted store first to avoid
- problems with server-sent legacy intermediates. Newer versions of
+ problems with server-sent legacy intermediates. Newer versions of
OpenSSL do alternate chain checking by default but we do not know how to
determine that in a reliable manner.
https://rt.openssl.org/Ticket/Display.html?id=3621&user=guest&pass=guest
switch(transport) {
case TRNSPRT_TCP:
- /* check to see if we've been told to use an explicit SSL/TLS version */
+ /* check to see if we have been told to use an explicit SSL/TLS version */
switch(ssl_version_min) {
case CURL_SSLVERSION_DEFAULT:
case CURL_SSLVERSION_TLSv1:
octx->ssl_ctx = SSL_CTX_new(req_method);
if(!octx->ssl_ctx) {
- failf(data, "SSL: couldn't create a context: %s",
+ failf(data, "SSL: could not create a context: %s",
ossl_strerror(ERR_peek_error(), error_buffer, sizeof(error_buffer)));
return CURLE_OUT_OF_MEMORY;
}
/* OpenSSL contains code to work around lots of bugs and flaws in various
SSL-implementations. SSL_CTX_set_options() is used to enabled those
- work-arounds. The man page for this option states that SSL_OP_ALL enables
+ work-arounds. The manpage for this option states that SSL_OP_ALL enables
all the work-arounds and that "It is usually safe to use SSL_OP_ALL to
enable the bug workaround options if compatibility with somewhat broken
implementations is desired."
- The "-no_ticket" option was introduced in OpenSSL 0.9.8j. It's a flag to
+ The "-no_ticket" option was introduced in OpenSSL 0.9.8j. it is a flag to
disable "rfc4507bis session ticket support". rfc4507bis was later turned
into the proper RFC5077: https://datatracker.ietf.org/doc/html/rfc5077
infof(data, "Using TLS-SRP username: %s", ssl_username);
if(!SSL_CTX_set_srp_username(octx->ssl_ctx, ssl_username)) {
- failf(data, "Unable to set SRP user name");
+ failf(data, "Unable to set SRP username");
return CURLE_BAD_FUNCTION_ARGUMENT;
}
if(!SSL_CTX_set_srp_password(octx->ssl_ctx, ssl_password)) {
#endif
if(cb_new_session) {
- /* Enable the session cache because it's a prerequisite for the
+ /* Enable the session cache because it is a prerequisite for the
* "new session" callback. Use the "external storage" mode to prevent
* OpenSSL from creating an internal session cache.
*/
SSL_free(octx->ssl);
octx->ssl = SSL_new(octx->ssl_ctx);
if(!octx->ssl) {
- failf(data, "SSL: couldn't create a context (handle)");
+ failf(data, "SSL: could not create a context (handle)");
return CURLE_OUT_OF_MEMORY;
}
ech_config_len = 2 * strlen(b64);
result = Curl_base64_decode(b64, &ech_config, &ech_config_len);
if(result || !ech_config) {
- infof(data, "ECH: can't base64 decode ECHConfig from command line");
+ infof(data, "ECH: cannot base64 decode ECHConfig from command line");
if(data->set.tls_ech & CURLECH_HARD)
return result;
}
# endif /* not BORING */
if(trying_ech_now
&& SSL_set_min_proto_version(octx->ssl, TLS1_3_VERSION) != 1) {
- infof(data, "ECH: Can't force TLSv1.3 [ERROR]");
+ infof(data, "ECH: cannot force TLSv1.3 [ERROR]");
return CURLE_SSL_CONNECT_ERROR;
}
}
int lib;
int reason;
- /* the connection failed, we're not waiting for anything else. */
+ /* the connection failed, we are not waiting for anything else. */
connssl->connecting_state = ssl_connect_2;
/* Get the earliest error code from the thread's error queue and remove
/* detail is already set to the SSL error above */
- /* If we e.g. use SSLv2 request-method and the server doesn't like us
+ /* If we e.g. use SSLv2 request-method and the server does not like us
* (RST connection, etc.), OpenSSL gives no explanation whatsoever and
* the SO_ERROR is also lost.
*/
int psigtype_nid = NID_undef;
const char *negotiated_group_name = NULL;
- /* we connected fine, we're not waiting for anything else. */
+ /* we connected fine, we are not waiting for anything else. */
connssl->connecting_state = ssl_connect_3;
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
/* Result is returned to caller */
CURLcode result = CURLE_SSL_PINNEDPUBKEYNOTMATCH;
- /* if a path wasn't specified, don't pin */
+ /* if a path was not specified, do not pin */
if(!pinnedpubkey)
return CURLE_OK;
/*
* These checks are verifying we got back the same values as when we
- * sized the buffer. It's pretty weak since they should always be the
+ * sized the buffer. It is pretty weak since they should always be the
* same. But it gives us something to test.
*/
if((len1 != len2) || !temp || ((temp - buff1) != len1))
if(!strict)
return CURLE_OK;
- failf(data, "SSL: couldn't get peer certificate");
+ failf(data, "SSL: could not get peer certificate");
return CURLE_PEER_FAILED_VERIFICATION;
}
buffer, sizeof(buffer));
if(rc) {
if(strict)
- failf(data, "SSL: couldn't get X509-issuer name");
+ failf(data, "SSL: could not get X509-issuer name");
result = CURLE_PEER_FAILED_VERIFICATION;
}
else {
#if (OPENSSL_VERSION_NUMBER >= 0x0090808fL) && !defined(OPENSSL_NO_TLSEXT) && \
!defined(OPENSSL_NO_OCSP)
if(conn_config->verifystatus && !octx->reused_session) {
- /* don't do this after Session ID reuse */
+ /* do not do this after Session ID reuse */
result = verifystatus(cf, data, octx);
if(result) {
/* when verifystatus failed, remove the session id from the cache again
#endif
if(!strict)
- /* when not strict, we don't bother about the verify cert problems */
+ /* when not strict, we do not bother about the verify cert problems */
result = CURLE_OK;
#ifndef CURL_DISABLE_PROXY
/*
* We check certificates to authenticate the server; otherwise we risk
- * man-in-the-middle attack; NEVERTHELESS, if we're told explicitly not to
+ * man-in-the-middle attack; NEVERTHELESS, if we are told explicitly not to
* verify the peer, ignore faults and failures from the server cert
* operations.
*/
}
if(ssl_connect_1 == connssl->connecting_state) {
- /* Find out how much more time we're allowed */
+ /* Find out how much more time we are allowed */
const timediff_t timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
goto out;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(!nonblocking && connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
break;
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
- /* there's data pending, re-invoke SSL_read() */
+ /* there is data pending, re-invoke SSL_read() */
*curlcode = CURLE_AGAIN;
nread = -1;
goto out;
/* For debug builds be a little stricter and error on any
SSL_ERROR_SYSCALL. For example a server may have closed the connection
abruptly without a close_notify alert. For compatibility with older
- peers we don't do this by default. #4624
+ peers we do not do this by default. #4624
We can use this to gauge how many users may be affected, and
if it goes ok eventually transition to allow in dev and release with
int rc;
if(data) {
if(ossl_seed(data)) /* Initiate the seed if not already done */
- return CURLE_FAILED_INIT; /* couldn't seed for some reason */
+ return CURLE_FAILED_INIT; /* could not seed for some reason */
}
else {
if(!rand_enough())
* - Read out as many plaintext bytes from rustls as possible, until hitting
* error, EOF, or EAGAIN/EWOULDBLOCK, or plainbuf/plainlen is filled up.
*
- * It's okay to call this function with plainbuf == NULL and plainlen == 0.
- * In that case, it will copy bytes from the socket into rustls' TLS input
- * buffer, and process packets, but won't consume bytes from rustls' plaintext
- * output buffer.
+ * It is okay to call this function with plainbuf == NULL and plainlen == 0. In
+ * that case, it will copy bytes from the socket into rustls' TLS input
+ * buffer, and process packets, but will not consume bytes from rustls'
+ * plaintext output buffer.
*/
static ssize_t
cr_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
goto out;
}
else if(rresult != RUSTLS_RESULT_OK) {
- /* n always equals 0 in this case, don't need to check it */
+ /* n always equals 0 in this case, do not need to check it */
char errorbuf[255];
size_t errorlen;
rustls_error(rresult, errorbuf, sizeof(errorbuf), &errorlen);
* - Fully drain rustls' plaintext output buffer into the socket until
* we get either an error or EAGAIN/EWOULDBLOCK.
*
- * It's okay to call this function with plainbuf == NULL and plainlen == 0.
- * In that case, it won't read anything into rustls' plaintext input buffer.
+ * It is okay to call this function with plainbuf == NULL and plainlen == 0.
+ * In that case, it will not read anything into rustls' plaintext input buffer.
* It will only drain rustls' plaintext output buffer into the socket.
*/
static ssize_t
if(!verifypeer) {
rustls_client_config_builder_dangerous_set_certificate_verifier(
config_builder, cr_verify_none);
- /* rustls doesn't support IP addresses (as of 0.19.0), and will reject
+ /* rustls does not support IP addresses (as of 0.19.0), and will reject
* connections created with an IP address, even when certificate
* verification is turned off. Set a placeholder hostname and disable
* SNI. */
roots_builder = rustls_root_cert_store_builder_new();
if(ca_info_blob) {
- /* Enable strict parsing only if verification isn't disabled. */
+ /* Enable strict parsing only if verification is not disabled. */
result = rustls_root_cert_store_builder_add_pem(roots_builder,
ca_info_blob->data,
ca_info_blob->len,
}
}
else if(ssl_cafile) {
- /* Enable strict parsing only if verification isn't disabled. */
+ /* Enable strict parsing only if verification is not disabled. */
result = rustls_root_cert_store_builder_load_roots_from_file(
roots_builder, ssl_cafile, verifypeer);
if(result != RUSTLS_RESULT_OK) {
}
/* We should never fall through the loop. We should return either because
- the handshake is done or because we can't read/write without blocking. */
+ the handshake is done or because we cannot read/write without blocking. */
DEBUGASSERT(false);
}
#ifdef USE_SCHANNEL
#ifndef USE_WINDOWS_SSPI
-# error "Can't compile SCHANNEL support without SSPI."
+# error "cannot compile SCHANNEL support without SSPI."
#endif
#include "schannel.h"
}
else {
/* Pre-Windows 10 1809 or the user set a legacy algorithm list. Although MS
- doesn't document it, currently Schannel will not negotiate TLS 1.3 when
+ does not document it, currently Schannel will not negotiate TLS 1.3 when
SCHANNEL_CRED is used. */
ALG_ID algIds[NUM_CIPHERS];
char *ciphers = conn_config->cipher_list;
#ifdef HAS_ALPN
/* ALPN is only supported on Windows 8.1 / Server 2012 R2 and above.
- Also it doesn't seem to be supported for Wine, see curl bug #983. */
+ Also it does not seem to be supported for Wine, see curl bug #983. */
backend->use_alpn = connssl->alpn &&
!GetProcAddress(GetModuleHandle(TEXT("ntdll")),
"wine_get_version") &&
#ifdef _WIN32_WCE
#ifdef HAS_MANUAL_VERIFY_API
- /* certificate validation on CE doesn't seem to work right; we'll
+ /* certificate validation on CE does not seem to work right; we will
* do it following a more manual process. */
backend->use_manual_cred_validation = true;
#else
/* Schannel InitializeSecurityContext:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa375924.aspx
- At the moment we don't pass inbuf unless we're using ALPN since we only
+ At the moment we do not pass inbuf unless we are using ALPN since we only
use it for that, and Wine (for which we currently disable ALPN) is giving
us problems with inbuf regardless. https://github.com/curl/curl/issues/983
*/
inbuf[1].cbBuffer));
/*
There are two cases where we could be getting extra data here:
- 1) If we're renegotiating a connection and the handshake is already
+ 1) If we are renegotiating a connection and the handshake is already
complete (from the server perspective), there can be encrypted app data
(not handshake data) in an extra buffer at this point.
2) (sspi_status == SEC_I_CONTINUE_NEEDED) We are negotiating a
#endif
/* Verify the hostname manually when certificate verification is disabled,
- because in that case Schannel won't verify it. */
+ because in that case Schannel will not verify it. */
if(!conn_config->verifypeer && conn_config->verifyhost)
return Curl_verify_host(cf, data);
if(old_cred != backend->cred) {
DEBUGF(infof(data,
"schannel: old credential handle is stale, removing"));
- /* we're not taking old_cred ownership here, no refcount++ is needed */
+ /* we are not taking old_cred ownership here, no refcount++ is
+ needed */
Curl_ssl_delsessionid(data, (void *)old_cred);
incache = FALSE;
}
}
if(ssl_connect_1 == connssl->connecting_state) {
- /* check out how much more time we're allowed */
+ /* check out how much more time we are allowed */
timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
while(ssl_connect_2 == connssl->connecting_state) {
- /* check out how much more time we're allowed */
+ /* check out how much more time we are allowed */
timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
len = outbuf[0].cbBuffer + outbuf[1].cbBuffer + outbuf[2].cbBuffer;
/*
- It's important to send the full message which includes the header,
- encrypted payload, and trailer. Until the client receives all the
+ It is important to send the full message which includes the header,
+ encrypted payload, and trailer. Until the client receives all the
data a coherent message has not been delivered and the client
- can't read any of it.
+ cannot read any of it.
If we wanted to buffer the unwritten encrypted bytes, we would
tell the client that all data it has requested to be sent has been
DEBUGASSERT(backend);
/****************************************************************************
- * Don't return or set backend->recv_unrecoverable_err unless in the cleanup.
- * The pattern for return error is set *err, optional infof, goto cleanup.
+ * Do not return or set backend->recv_unrecoverable_err unless in the
+ * cleanup. The pattern for return error is set *err, optional infof, goto
+ * cleanup.
*
* Our priority is to always return as much decrypted data to the caller as
* possible, even if an error occurs. The state of the decrypted buffer must
infof(data, "schannel: server indicated shutdown in a prior call");
goto cleanup;
}
- /* It's debatable what to return when !len. Regardless we can't return
+ /* It is debatable what to return when !len. Regardless we cannot return
immediately because there may be data to decrypt (in the case we want to
decrypt all encrypted cached data) so handle !len later in cleanup.
*/
if(sspi_status == SEC_I_RENEGOTIATE) {
infof(data, "schannel: remote party requests renegotiation");
if(*err && *err != CURLE_AGAIN) {
- infof(data, "schannel: can't renegotiate, an error is pending");
+ infof(data, "schannel: cannot renegotiate, an error is pending");
goto cleanup;
}
/* Error if the connection has closed without a close_notify.
- The behavior here is a matter of debate. We don't want to be vulnerable
- to a truncation attack however there's some browser precedent for
+ The behavior here is a matter of debate. We do not want to be vulnerable
+ to a truncation attack however there is some browser precedent for
ignoring the close_notify for compatibility reasons.
Additionally, Windows 2000 (v5.0) is a special case since it seems it
- doesn't return close_notify. In that case if the connection was closed we
- assume it was graceful (close_notify) since there doesn't seem to be a
+ does not return close_notify. In that case if the connection was closed we
+ assume it was graceful (close_notify) since there does not seem to be a
way to tell.
*/
if(len && !backend->decdata_offset && backend->recv_connection_closed &&
if(!*err && !backend->recv_connection_closed)
*err = CURLE_AGAIN;
- /* It's debatable what to return when !len. We could return whatever error
+ /* It is debatable what to return when !len. We could return whatever error
we got from decryption but instead we override here so the return is
consistent.
*/
DEBUGASSERT(backend);
- /* if a path wasn't specified, don't pin */
+ /* if a path was not specified, do not pin */
if(!pinnedpubkey)
return CURLE_OK;
size_t encdata_offset, decdata_offset;
unsigned char *encdata_buffer, *decdata_buffer;
/* encdata_is_incomplete: if encdata contains only a partial record that
- can't be decrypted without another recv() (that is, status is
+ cannot be decrypted without another recv() (that is, status is
SEC_E_INCOMPLETE_MESSAGE) then set this true. after an recv() adds
more bytes into encdata then set this back to false. */
bool encdata_is_incomplete;
#ifdef USE_SCHANNEL
#ifndef USE_WINDOWS_SSPI
-# error "Can't compile SCHANNEL support without SSPI."
+# error "cannot compile SCHANNEL support without SSPI."
#endif
#include "schannel.h"
}
/* Search the substring needle,needlelen into string haystack,haystacklen
- * Strings don't need to be terminated by a '\0'.
+ * Strings do not need to be terminated by a '\0'.
* Similar of OSX/Linux memmem (not available on Visual Studio).
* Return position of beginning of first occurrence or NULL if not found
*/
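As a rough sketch of the kind of bounded search this comment describes, a memmem-style scan over buffers that are not null-terminated can look like the following. This is illustrative only, not the Schannel backend's actual implementation.

    #include <stddef.h>
    #include <string.h>

    /* return a pointer to the first occurrence of needle in haystack,
       comparing raw bytes, or NULL when not found */
    static const char *bounded_memmem(const char *haystack, size_t haystacklen,
                                      const char *needle, size_t needlelen)
    {
      size_t i;
      if(!needlelen || needlelen > haystacklen)
        return NULL;
      for(i = 0; i <= haystacklen - needlelen; i++) {
        if(!memcmp(haystack + i, needle, needlelen))
          return haystack + i;
      }
      return NULL;
    }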
/*
* Returns the number of characters necessary to populate all the host_names.
- * If host_names is not NULL, populate it with all the host names. Each string
+ * If host_names is not NULL, populate it with all the hostnames. Each string
* in the host_names is null-terminated and the last string is double
* null-terminated. If no DNS names are found, a single null-terminated empty
* string is returned.
}
/* Sanity check to prevent buffer overrun. */
if((actual_length + current_length) > length) {
- failf(data, "schannel: Not enough memory to list all host names.");
+ failf(data, "schannel: Not enough memory to list all hostnames.");
break;
}
dns_w = entry->pwszDNSName;
- /* pwszDNSName is in ia5 string format and hence doesn't contain any
+ /* pwszDNSName is in ia5 string format and hence does not contain any
* non-ascii characters. */
while(*dns_w != '\0') {
*current_pos++ = (TCHAR)(*dns_w++);
#if (TARGET_OS_MAC && !(TARGET_OS_EMBEDDED || TARGET_OS_IPHONE))
#if MAC_OS_X_VERSION_MAX_ALLOWED < 1050
-#error "The Secure Transport back-end requires Leopard or later."
+#error "The Secure Transport backend requires Leopard or later."
#endif /* MAC_OS_X_VERSION_MAX_ALLOWED < 1050 */
#define CURL_BUILD_IOS 0
#define CURL_SUPPORT_MAC_10_9 0
#else
-#error "The Secure Transport back-end requires iOS or macOS."
+#error "The Secure Transport backend requires iOS or macOS."
#endif /* (TARGET_OS_MAC && !(TARGET_OS_EMBEDDED || TARGET_OS_IPHONE)) */
#if CURL_BUILD_MAC
#include "memdebug.h"
-/* From MacTypes.h (which we can't include because it isn't present in iOS: */
+/* From MacTypes.h (which we cannot include because it is not present in
+ iOS): */
#define ioErr -36
#define paramErr -50
0xf7, 0x0d, 0x01, 0x01, 0x01, 0x05,
0x00, 0x03, 0x82, 0x01, 0x0f, 0x00};
#ifdef SECTRANSP_PINNEDPUBKEY_V1
-/* the *new* version doesn't return DER encoded ecdsa certs like the old... */
+/* the *new* version does not return DER encoded ecdsa certs like the old... */
static const unsigned char ecDsaSecp256r1SpkiHeader[] = {
0x30, 0x59, 0x30, 0x13, 0x06, 0x07,
0x2a, 0x86, 0x48, 0xce, 0x3d, 0x02,
#endif /* CURL_BUILD_MAC */
/* Apple provides a myriad of ways of getting information about a certificate
- into a string. Some aren't available under iOS or newer cats. So here's
- a unified function for getting a string describing the certificate that
- ought to work in all cats starting with Leopard. */
+ into a string. Some are not available under iOS or newer cats. Here's a
+ unified function for getting a string describing the certificate that ought
+ to work in all cats starting with Leopard. */
CF_INLINE CFStringRef getsubject(SecCertificateRef cert)
{
CFStringRef server_cert_summary = CFSTR("(null)");
#if CURL_BUILD_IOS
- /* iOS: There's only one way to do this. */
+ /* iOS: There is only one way to do this. */
server_cert_summary = SecCertificateCopySubjectSummary(cert);
#else
#if CURL_BUILD_MAC_10_7
*certp = cbuf;
}
else {
- failf(data, "SSL: couldn't allocate %zu bytes of memory", cbuf_size);
+ failf(data, "SSL: could not allocate %zu bytes of memory", cbuf_size);
result = CURLE_OUT_OF_MEMORY;
}
}
#if CURL_SUPPORT_MAC_10_6
/* The SecKeychainSearch API was deprecated in Lion, and using it will raise
- deprecation warnings, so let's not compile this unless it's necessary: */
+ deprecation warnings, so let's not compile this unless it is necessary: */
static OSStatus CopyIdentityWithLabelOldSchool(char *label,
SecIdentityRef *out_c_a_k)
{
/* identity searches need a SecPolicyRef in order to work */
values[3] = SecPolicyCreateSSL(false, NULL);
keys[3] = kSecMatchPolicy;
- /* match the name of the certificate (doesn't work in macOS 10.12.1) */
+ /* match the name of the certificate (does not work in macOS 10.12.1) */
values[4] = label_cf;
keys[4] = kSecAttrLabel;
query_dict = CFDictionaryCreate(NULL, (const void **)keys,
/* Do we have a match? */
status = SecItemCopyMatching(query_dict, (CFTypeRef *) &keys_list);
- /* Because kSecAttrLabel matching doesn't work with kSecClassIdentity,
+ /* Because kSecAttrLabel matching does not work with kSecClassIdentity,
* we need to find the correct identity ourselves */
if(status == noErr) {
keys_list_count = CFArrayGetCount(keys_list);
/* On macOS SecPKCS12Import will always add the client certificate to
* the Keychain.
*
- * As this doesn't match iOS, and apps may not want to see their client
+ * As this does not match iOS, and apps may not want to see their client
* certificate saved in the user's keychain, we use SecItemImport
* with a NULL keychain to avoid importing it.
*
{
int maj = 0, min = 0;
GetDarwinVersionNumber(&maj, &min);
- /* There's a known bug in early versions of Mountain Lion where ST's ECC
+ /* There is a known bug in early versions of Mountain Lion where ST's ECC
ciphers (cipher suite 0xC001 through 0xC032) simply do not work.
Work around the problem here by disabling those ciphers if we are
running in an affected version of OS X. */
{
/* ST, as of iOS 5 and Mountain Lion, has no public method of deleting a
cached session ID inside the Security framework. There is a private
- function that does this, but I don't want to have to explain to you why I
+ function that does this, but I do not want to have to explain to you why I
got your application rejected from the App Store due to the use of a
private API, so the best we can do is free up our own char array that we
created way back in sectransp_connect_step1... */
CFRelease(backend->ssl_ctx);
backend->ssl_ctx = SSLCreateContext(NULL, kSSLClientSide, kSSLStreamType);
if(!backend->ssl_ctx) {
- failf(data, "SSL: couldn't create a context");
+ failf(data, "SSL: could not create a context");
return CURLE_OUT_OF_MEMORY;
}
}
else {
- /* The old ST API does not exist under iOS, so don't compile it: */
+ /* The old ST API does not exist under iOS, so do not compile it: */
#if CURL_SUPPORT_MAC_10_8
if(backend->ssl_ctx)
(void)SSLDisposeContext(backend->ssl_ctx);
err = SSLNewContext(false, &(backend->ssl_ctx));
if(err != noErr) {
- failf(data, "SSL: couldn't create a context: OSStatus %d", err);
+ failf(data, "SSL: could not create a context: OSStatus %d", err);
return CURLE_OUT_OF_MEMORY;
}
#endif /* CURL_SUPPORT_MAC_10_8 */
(void)SSLDisposeContext(backend->ssl_ctx);
err = SSLNewContext(false, &(backend->ssl_ctx));
if(err != noErr) {
- failf(data, "SSL: couldn't create a context: OSStatus %d", err);
+ failf(data, "SSL: could not create a context: OSStatus %d", err);
return CURLE_OUT_OF_MEMORY;
}
#endif /* CURL_BUILD_MAC_10_8 || CURL_BUILD_IOS */
backend->ssl_write_buffered_length = 0UL; /* reset buffered write length */
- /* check to see if we've been told to use an explicit SSL/TLS version */
+ /* check to see if we have been told to use an explicit SSL/TLS version */
#if CURL_BUILD_MAC_10_8 || CURL_BUILD_IOS
if(SSLSetProtocolVersionMax) {
switch(conn_config->version) {
cert_showfilename_error);
break;
case errSecItemNotFound:
- failf(data, "SSL: Can't find the certificate \"%s\" and its private "
+ failf(data, "SSL: cannot find the certificate \"%s\" and its private "
"key in the Keychain.", cert_showfilename_error);
break;
default:
- failf(data, "SSL: Can't load the certificate \"%s\" and its private "
+ failf(data, "SSL: cannot load the certificate \"%s\" and its private "
"key: OSStatus %d", cert_showfilename_error, err);
break;
}
#if CURL_BUILD_MAC_10_6 || CURL_BUILD_IOS
/* Snow Leopard introduced the SSLSetSessionOption() function, but due to
a library bug with the way the kSSLSessionOptionBreakOnServerAuth flag
- works, it doesn't work as expected under Snow Leopard, Lion or
+ works, it does not work as expected under Snow Leopard, Lion or
Mountain Lion.
So we need to call SSLSetEnableCertVerify() on those older cats in order
to disable certificate validation if the user turned that off.
bool is_cert_file = (!is_cert_data) && is_file(ssl_cafile);
if(!(is_cert_file || is_cert_data)) {
- failf(data, "SSL: can't load CA certificate file %s",
+ failf(data, "SSL: cannot load CA certificate file %s",
ssl_cafile ? ssl_cafile : "(blob memory)");
return CURLE_SSL_CACERT_BADFILE;
}
#if CURL_BUILD_MAC_10_9 || CURL_BUILD_IOS_7
/* We want to enable 1/n-1 when using a CBC cipher unless the user
- specifically doesn't want us doing that: */
+ specifically does not want us doing that: */
if(SSLSetSessionOption) {
SSLSetSessionOption(backend->ssl_ctx, kSSLSessionOptionSendOneByteRecord,
!ssl_config->enable_beast);
}
#endif /* CURL_BUILD_MAC_10_9 || CURL_BUILD_IOS_7 */
- /* Check if there's a cached ID we can/should use here! */
+ /* Check if there is a cached ID we can/should use here! */
if(ssl_config->primary.sessionid) {
char *ssl_sessionid;
size_t ssl_sessionid_len;
/* Informational message */
infof(data, "SSL reusing session ID");
}
- /* If there isn't one, then let's make one up! This has to be done prior
+ /* If there is not one, then let's make one up! This has to be done prior
to starting the handshake. */
else {
ssl_sessionid =
/* Result is returned to caller */
CURLcode result = CURLE_SSL_PINNEDPUBKEYNOTMATCH;
- /* if a path wasn't specified, don't pin */
+ /* if a path was not specified, do not pin */
if(!pinnedpubkey)
return CURLE_OK;
if(err != noErr) {
switch(err) {
- case errSSLWouldBlock: /* they're not done with us yet */
+ case errSSLWouldBlock: /* they are not done with us yet */
connssl->io_need = backend->ssl_direction ?
CURL_SSL_IO_NEED_SEND : CURL_SSL_IO_NEED_RECV;
return CURLE_OK;
- /* The below is errSSLServerAuthCompleted; it's not defined in
+ /* The below is errSSLServerAuthCompleted; it is not defined in
Leopard's headers */
case -9841:
if((conn_config->CAfile || conn_config->ca_info_blob) &&
"authority");
break;
- /* This error is raised if the server's cert didn't match the server's
- host name: */
+ /* This error is raised if the server's cert did not match the server's
+ hostname: */
case errSSLHostNameMismatch:
failf(data, "SSL certificate peer verification failed, the "
"certificate did not match \"%s\"\n", connssl->peer.dispname);
}
else {
char cipher_str[64];
- /* we have been connected fine, we're not waiting for anything else. */
+ /* we have been connected fine, we are not waiting for anything else. */
connssl->connecting_state = ssl_connect_3;
#ifdef SECTRANSP_PINNEDPUBKEY
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
/* chosenProtocol is a reference to the string within alpnArr
- and doesn't need to be freed separately */
+ and does not need to be freed separately */
if(alpnArr)
CFRelease(alpnArr);
}
/* SSLCopyPeerCertificates() is deprecated as of Mountain Lion.
The function SecTrustGetCertificateAtIndex() is officially present
in Lion, but it is unfortunately also present in Snow Leopard as
- private API and doesn't work as expected. So we have to look for
+ private API and does not work as expected. So we have to look for
a different symbol to make sure this code is only executed under
Lion or later. */
if(SecTrustCopyPublicKey) {
}
if(ssl_connect_1 == connssl->connecting_state) {
- /* Find out how much more time we're allowed */
+ /* Find out how much more time we are allowed */
const timediff_t timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
static CURLcode sectransp_random(struct Curl_easy *data UNUSED_PARAM,
unsigned char *entropy, size_t length)
{
- /* arc4random_buf() isn't available on cats older than Lion, so let's
+ /* arc4random_buf() is not available on cats older than Lion, so let's
do this manually for the benefit of the older cats. */
size_t i;
u_int32_t random_number = 0;
Now, one could interpret that as "written to the socket," but actually,
it returns the amount of data that was written to a buffer internal to
- the SSLContextRef instead. So it's possible for SSLWrite() to return
+ the SSLContextRef instead. So it is possible for SSLWrite() to return
errSSLWouldBlock and a number of bytes "written" because those bytes were
encrypted and written to a buffer, not to the socket.
err = SSLWrite(backend->ssl_ctx, NULL, 0UL, &processed);
switch(err) {
case noErr:
- /* processed is always going to be 0 because we didn't write to
+ /* processed is always going to be 0 because we did not write to
the buffer, so return how much was written to the socket */
processed = backend->ssl_write_buffered_length;
backend->ssl_write_buffered_length = 0UL;
}
}
else {
- /* We've got new data to write: */
+ /* We have got new data to write: */
err = SSLWrite(backend->ssl_ctx, mem, len, &processed);
if(err != noErr) {
switch(err) {
*curlcode = CURLE_OK;
return 0;
- /* The below is errSSLPeerAuthCompleted; it's not defined in
+ /* The below is errSSLPeerAuthCompleted; it is not defined in
Leopard's headers */
case -9841:
if((conn_config->CAfile || conn_config->ca_info_blob) &&
DEBUGASSERT(dest);
DEBUGASSERT(!*dest);
if(src) {
- /* only if there's data to dupe! */
+ /* only if there is data to dupe! */
struct curl_blob *d;
d = malloc(sizeof(struct curl_blob) + src->len);
if(!d)
(void)httpwant;
#endif
/* Use the ALPN protocol "http/1.1" for HTTP/1.x.
- Avoid "http/1.0" because some servers don't support it. */
+ Avoid "http/1.0" because some servers do not support it. */
return &ALPN_SPEC_H11;
}
#endif /* USE_SSL */
}
/*
- * Check if there's a session ID for the given connection in the cache, and if
- * there's one suitable, it is provided. Returns TRUE when no entry matched.
+ * Check if there is a session ID for the given connection in the cache, and if
+ * there is one suitable, it is provided. Returns TRUE when no entry matched.
*/
bool Curl_ssl_getsessionid(struct Curl_cfilter *cf,
struct Curl_easy *data,
}
DEBUGF(infof(data, "%s Session ID in cache for %s %s://%s:%d",
- no_match? "Didn't find": "Found",
+ no_match? "Did not find": "Found",
Curl_ssl_cf_is_proxy(cf) ? "proxy" : "host",
cf->conn->handler->scheme, peer->hostname, peer->port));
return no_match;
else
conn_to_port = -1;
- /* Now we should add the session ID and the host name to the cache, (remove
+ /* Now we should add the session ID and the hostname to the cache, (remove
the oldest if necessary) */
/* If using shared SSL session, lock! */
store->idsize = idsize;
store->sessionid_free = sessionid_free_cb;
store->age = *general_age; /* set current age */
- /* free it if there's one already present */
+ /* free it if there is one already present */
free(store->name);
free(store->conn_to_host);
- store->name = clone_host; /* clone host name */
+ store->name = clone_host; /* clone hostname */
clone_host = NULL;
- store->conn_to_host = clone_conn_to_host; /* clone connect to host name */
+ store->conn_to_host = clone_conn_to_host; /* clone connect to hostname */
clone_conn_to_host = NULL;
store->conn_to_port = conn_to_port; /* connect to port number */
/* port number */
(void)data;
#endif
- /* if a path wasn't specified, don't pin */
+ /* if a path was not specified, do not pin */
if(!pinnedpubkey)
return CURLE_OK;
if(!pubkey || !pubkeylen)
end_pos = strstr(begin_pos, ";sha256//");
/*
* if there is an end_pos, null terminate,
- * otherwise it'll go to the end of the original string
+ * otherwise it will go to the end of the original string
*/
if(end_pos)
end_pos[0] = '\0';
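The ";sha256//" separator searched for here is how multiple pins are chained in a single option value. As a brief illustration (the hash values below are placeholders, not real pins), an application could pass such a list with CURLOPT_PINNEDPUBLICKEY:

    #include <curl/curl.h>

    /* placeholder pins for illustration only; not real key hashes */
    static void set_pins(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                       "sha256//YhKJKSzoTt2b5FP18fvpHo7fJYqQCjAa3HWY3tvRMwE=;"
                       "sha256//t62CeU2tQiqkexU74Gxa2eg7fRbEgoChTociMee9wno=");
    }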
/*
* if the size of our certificate is bigger than the file
- * size then it can't match
+ * size then it cannot match
*/
size = curlx_sotouz((curl_off_t) filesize);
if(pubkeylen > size)
if((int) fread(buf, size, 1, fp) != 1)
break;
- /* If the sizes are the same, it can't be base64 encoded, must be der */
+ /* If the sizes are the same, it cannot be base64 encoded, must be der */
if(pubkeylen == size) {
if(!memcmp(pubkey, buf, pubkeylen))
result = CURLE_OK;
}
/*
- * Otherwise we will assume it's PEM and try to decode it
+ * Otherwise we will assume it is PEM and try to decode it
* after placing null terminator
*/
buf[size] = '\0';
pem_read = pubkey_pem_to_der((const char *)buf, &pem_ptr, &pem_len);
- /* if it wasn't read successfully, exit */
+ /* if it was not read successfully, exit */
if(pem_read)
break;
/*
- * if the size of our certificate doesn't match the size of
- * the decoded file, they can't be the same, otherwise compare
+ * if the size of our certificate does not match the size of
+ * the decoded file, they cannot be the same, otherwise compare
*/
if(pubkeylen == pem_len && !memcmp(pubkey, pem_ptr, pubkeylen))
result = CURLE_OK;
const char *ehostname, *edispname;
int eport;
- /* We need the hostname for SNI negotiation. Once handshaked, this
- * remains the SNI hostname for the TLS connection. But when the
- * connection is reused, the settings in cf->conn might change.
- * So we keep a copy of the hostname we use for SNI.
+ /* We need the hostname for SNI negotiation. Once handshaked, this remains
+ * the SNI hostname for the TLS connection. When the connection is reused,
+ * the settings in cf->conn might change. We keep a copy of the hostname we
+ * use for SNI.
*/
#ifndef CURL_DISABLE_PROXY
if(Curl_ssl_cf_is_proxy(cf)) {
return CURLE_SSL_CONNECT_ERROR;
}
- /* check to see if we've been told to use an explicit SSL/TLS version */
+ /* check to see if we have been told to use an explicit SSL/TLS version */
switch(conn_config->version) {
case CURL_SSLVERSION_DEFAULT:
case CURL_SSLVERSION_TLSv1:
}
if(!req_method) {
- failf(data, "SSL: couldn't create a method");
+ failf(data, "SSL: could not create a method");
return CURLE_OUT_OF_MEMORY;
}
backend->ctx = wolfSSL_CTX_new(req_method);
if(!backend->ctx) {
- failf(data, "SSL: couldn't create a context");
+ failf(data, "SSL: could not create a context");
return CURLE_OUT_OF_MEMORY;
}
&& (wolfSSL_CTX_SetMinVersion(backend->ctx, WOLFSSL_TLSV1_3) != 1)
#endif
) {
- failf(data, "SSL: couldn't set the minimum protocol version");
+ failf(data, "SSL: could not set the minimum protocol version");
return CURLE_SSL_CONNECT_ERROR;
}
#endif
}
#ifdef NO_FILESYSTEM
else if(conn_config->verifypeer) {
- failf(data, "SSL: Certificates can't be loaded because wolfSSL was built"
+ failf(data, "SSL: Certificates cannot be loaded because wolfSSL was built"
" with \"no filesystem\". Either disable peer verification"
" (insecure) or if you are building an application with libcurl you"
" can load certificates via CURLOPT_SSL_CTX_FUNCTION.");
wolfSSL_free(backend->handle);
backend->handle = wolfSSL_new(backend->ctx);
if(!backend->handle) {
- failf(data, "SSL: couldn't create a handle");
+ failf(data, "SSL: could not create a handle");
return CURLE_OUT_OF_MEMORY;
}
}
#endif /* HAVE_SECURE_RENEGOTIATION */
- /* Check if there's a cached ID we can/should use here! */
+ /* Check if there is a cached ID we can/should use here! */
if(ssl_config->primary.sessionid) {
void *ssl_sessionid = NULL;
/* we got a session id, use it! */
if(!SSL_set_session(backend->handle, ssl_sessionid)) {
Curl_ssl_delsessionid(data, ssl_sessionid);
- infof(data, "Can't use session ID, going on without");
+ infof(data, "cannot use session ID, going on without");
}
else
infof(data, "SSL reusing session ID");
if(trying_ech_now
&& SSL_set_min_proto_version(backend->handle, TLS1_3_VERSION) != 1) {
- infof(data, "ECH: Can't force TLSv1.3 [ERROR]");
+ infof(data, "ECH: cannot force TLSv1.3 [ERROR]");
return CURLE_SSL_CONNECT_ERROR;
}
word32 echConfigsLen = 1000;
int rv = 0;
- /* this currently doesn't produce the retry_configs */
+ /* this currently does not produce the retry_configs */
rv = wolfSSL_GetEchConfigs(backend->handle, echConfigs,
&echConfigsLen);
if(rv != WOLFSSL_SUCCESS) {
switch(err) {
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
- /* there's data pending, re-invoke SSL_write() */
+ /* there is data pending, re-invoke SSL_write() */
CURL_TRC_CF(data, cf, "wolfssl_send(len=%zu) -> AGAIN", len);
*curlcode = CURLE_AGAIN;
return -1;
case SSL_ERROR_NONE:
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
- /* there's data pending, re-invoke wolfSSL_read() */
+ /* there is data pending, re-invoke wolfSSL_read() */
CURL_TRC_CF(data, cf, "wolfssl_recv(len=%zu) -> AGAIN", blen);
*curlcode = CURLE_AGAIN;
return -1;
}
if(ssl_connect_1 == connssl->connecting_state) {
- /* Find out how much more time we're allowed */
+ /* Find out how much more time we are allowed */
const timediff_t timeout_ms = Curl_timeleft(data, NULL, TRUE);
if(timeout_ms < 0) {
return CURLE_OPERATION_TIMEDOUT;
}
- /* if ssl is expecting something, check if it's available. */
+ /* if ssl is expecting something, check if it is available. */
if(connssl->io_need) {
curl_socket_t writefd = (connssl->io_need & CURL_SSL_IO_NEED_SEND)?
enc->contfragment = FALSE;
}
else if(enc->contfragment) {
- /* the previous fragment was not a final one and this isn't either, keep a
+ /* the previous fragment was not a final one and this is not either, keep a
CONT opcode and no FIN bit */
firstbyte |= WSBIT_OPCODE_CONT;
}
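For context, the FIN/opcode handling above follows RFC 6455 framing: a middle fragment of a fragmented message carries the CONT opcode with the FIN bit cleared, and only the final fragment sets FIN. A minimal illustrative sketch (the macro names here are made up, not libcurl's):

    #include <stdbool.h>

    #define WS_FIN         0x80  /* final-fragment bit (RFC 6455) */
    #define WS_OPCODE_CONT 0x00  /* continuation frame */
    #define WS_OPCODE_TEXT 0x01  /* text frame */

    /* first byte of a text-message fragment: only the first fragment carries
       the TEXT opcode, later fragments use CONT; only the last one sets FIN */
    static unsigned char ws_firstbyte(bool first_fragment, bool final_fragment)
    {
      unsigned char b = first_fragment ? WS_OPCODE_TEXT : WS_OPCODE_CONT;
      if(final_fragment)
        b |= WS_FIN;
      return b;
    }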
../lib/base64.c \
../lib/dynbuf.c
-# libcurl has sources that provide functions named curlx_* that aren't part of
+# libcurl has sources that provide functions named curlx_* that are not part of
# the official API, but we reuse the code here to avoid duplication.
CURLX_CFILES = \
../lib/base64.c \
}
secs = epoch_offset + tv_sec;
/* !checksrc! disable BANNEDFUNC 1 */
- now = localtime(&secs); /* not thread safe but we don't care */
+ now = localtime(&secs); /* not thread safe but we do not care */
msnprintf(hms_buf, sizeof(hms_buf), "%02d:%02d:%02d",
now->tm_hour, now->tm_min, now->tm_sec);
cached_tv_sec = tv_sec;
const char *text;
struct timeval tv;
char timebuf[20];
- /* largest signed 64bit is: 9,223,372,036,854,775,807
+ /* largest signed 64-bit is: 9,223,372,036,854,775,807
* max length in decimal: 1 + (6*3) = 19
* formatted via TRC_IDS_FORMAT_IDS_2 this becomes 2 + 19 + 1 + 19 + 2 = 43
* negative xfer-id are not printed, negative conn-ids use TRC_IDS_FORMAT_1
case CURLINFO_SSL_DATA_IN:
case CURLINFO_SSL_DATA_OUT:
if(!traced_data) {
- /* if the data is output to a tty and we're sending this debug trace
- to stderr or stdout, we don't display the alert about the data not
+ /* if the data is output to a tty and we are sending this debug trace
+ to stderr or stdout, we do not display the alert about the data not
being shown as the data _is_ shown then just not via this
function */
if(!config->isatty ||
(void)infotype;
fprintf(stream, "%c", ((ptr[i + c] >= 0x20) && (ptr[i + c] < 0x7F)) ?
ptr[i + c] : UNPRINTABLE_CHAR);
- /* check again for 0D0A, to avoid an extra \n if it's at width */
+ /* check again for 0D0A, to avoid an extra \n if it is at width */
if((tracetype == TRACE_ASCII) &&
(i + c + 2 < size) && (ptr[i + c + 1] == 0x0D) &&
(ptr[i + c + 2] == 0x0A)) {
#else
#define BOLD "\x1b[1m"
/* Switch off bold by setting "all attributes off" since the explicit
- bold-off code (21) isn't supported everywhere - like in the mac
+ bold-off code (21) is not supported everywhere - like in the mac
Terminal. */
#define BOLDOFF "\x1b[0m"
/* OSC 8 hyperlink escape sequence */
}
/*
- * Copies a file name part and returns an ALLOCATED data buffer.
+ * Copies a filename part and returns an ALLOCATED data buffer.
*/
static char *parse_filename(const char *ptr, size_t len)
{
}
/* If the filename contains a backslash, only use filename portion. The idea
- is that even systems that don't handle backslashes as path separators
+ is that even systems that do not handle backslashes as path separators
probably want the path removed for convenience. */
q = strrchr(p, '\\');
if(q) {
}
}
- /* make sure the file name doesn't end in \r or \n */
+ /* make sure the filename does not end in \r or \n */
q = strchr(p, '\r');
if(q)
*q = '\0';
#endif /* _WIN32 || MSDOS */
/* in case we built debug enabled, we allow an environment variable
- * named CURL_TESTDIR to prefix the given file name to put it into a
+ * named CURL_TESTDIR to prefix the given filename to put it into a
* specific directory
*/
#ifdef DEBUGBUILD
char buffer[512]; /* suitably large */
msnprintf(buffer, sizeof(buffer), "%s/%s", tdir, copy);
Curl_safefree(copy);
- copy = strdup(buffer); /* clone the buffer, we don't use the libcurl
+ copy = strdup(buffer); /* clone the buffer, we do not use the libcurl
aprintf() or similar since we want to use the
same memory code as the "real" parse_filename
function */
* Treat the Location: header specially, by writing a special escape
* sequence that adds a hyperlink to the displayed text. This makes
* the absolute URL of the redirect clickable in supported terminals,
- * which couldn't happen otherwise for relative URLs. The Location:
+ * which could not happen otherwise for relative URLs. The Location:
* header is supposed to always be absolute so this theoretically
- * shouldn't be needed but the real world returns plenty of relative
+ * should not be needed but the real world returns plenty of relative
* URLs here.
*/
static
goto locdone;
}
- /* Not a "safe" URL: don't linkify it */
+ /* Not a "safe" URL: do not linkify it */
locout:
/* Write the normal output in case of error or unsafe */
if(total) {
/* we know the total data to get... */
if(bar->prev == point)
- /* progress didn't change since last invoke */
+ /* progress did not change since last invoke */
return 0;
else if((tvdiff(now, bar->prevtime) < 100L) && point < total)
- /* limit progress-bar updating to 10 Hz except when we're at 100% */
+ /* limit progress-bar updating to 10 Hz except when we are at 100% */
return 0;
}
else {
memset(bar, 0, sizeof(struct ProgressData));
/* pass the resume from value through to the progress function so it can
- * display progress towards total file not just the part that's left. */
+ * display progress towards total file not just the part that is left. */
if(config->use_resume)
bar->initial_size = config->resume_from;
config->readbusy = TRUE;
return CURL_READFUNC_PAUSE;
}
- /* since size_t is unsigned we can't return negative values fine */
+ /* since size_t is unsigned we cannot return negative values fine */
rc = 0;
}
if((per->uploadfilesize != -1) &&
curl_off_t left = offset;
if(whence != SEEK_SET)
- /* this code path doesn't support other types */
+ /* this code path does not support other types */
return CURL_SEEKFUNC_FAIL;
if(LSEEK_ERROR == lseek(per->infd, 0, SEEK_SET))
- /* couldn't rewind to beginning */
+ /* could not rewind to beginning */
return CURL_SEEKFUNC_FAIL;
while(left) {
long step = (left > OUR_MAX_SEEK_O) ? OUR_MAX_SEEK_L : (long)left;
if(LSEEK_ERROR == lseek(per->infd, step, SEEK_CUR))
- /* couldn't seek forwards the desired amount */
+ /* could not seek forwards the desired amount */
return CURL_SEEKFUNC_FAIL;
left -= step;
}
#endif
if(LSEEK_ERROR == lseek(per->infd, offset, whence))
- /* couldn't rewind, the reason is in errno but errno is just not portable
- enough and we don't actually care that much why we failed. We'll let
+ /* could not rewind, the reason is in errno but errno is just not portable
+ enough and we do not actually care that much why we failed. We will let
libcurl know that it may try other means if it wants to. */
return CURL_SEEKFUNC_CANTSEEK;
int fd;
do {
fd = open(fname, O_CREAT | O_WRONLY | O_EXCL | O_BINARY, OPENMODE);
- /* Keep retrying in the hope that it isn't interrupted sometime */
+ /* Keep retrying in the hope that it is not interrupted sometime */
} while(fd == -1 && errno == EINTR);
if(config->file_clobber_mode == CLOBBER_NEVER && fd == -1) {
int next_num = 1;
}
memcpy(newname, fname, len);
newname[len] = '.';
- while(fd == -1 && /* haven't successfully opened a file */
+ while(fd == -1 && /* have not successfully opened a file */
(errno == EEXIST || errno == EISDIR) &&
/* because we keep having files that already exist */
- next_num < 100 /* and we haven't reached the retry limit */ ) {
+ next_num < 100 /* and we have not reached the retry limit */ ) {
curlx_msnprintf(newname + len + 1, 12, "%d", next_num);
next_num++;
do {
fd = open(newname, O_CREAT | O_WRONLY | O_EXCL | O_BINARY, OPENMODE);
- /* Keep retrying in the hope that it isn't interrupted sometime */
+ /* Keep retrying in the hope that it is not interrupted sometime */
} while(fd == -1 && errno == EINTR);
}
outs->filename = newname; /* remember the new one */
struct curl_slist *cookies; /* cookies to serialize into a single line */
char *cookiejar; /* write to this file */
struct curl_slist *cookiefiles; /* file(s) to load cookies from */
- char *altsvc; /* alt-svc cache file name */
- char *hsts; /* HSTS cache file name */
+ char *altsvc; /* alt-svc cache filename */
+ char *hsts; /* HSTS cache filename */
bool cookiesession; /* new session? */
bool encoding; /* Accept-Encoding please */
bool tr_encoding; /* Transfer-Encoding please */
bool failonerror; /* fail on (HTTP) errors */
bool failwithbody; /* fail on (HTTP) errors but still store body */
bool show_headers; /* show headers to data output */
- bool no_body; /* don't get the body */
+ bool no_body; /* do not get the body */
bool dirlistonly; /* only get the FTP dir list */
bool followlocation; /* follow http redirects */
bool unrestricted_auth; /* Continue to send authentication (user+password)
struct GlobalConfig {
bool showerror; /* show errors when silent */
- bool silent; /* don't show messages, --silent given */
- bool noprogress; /* don't show progress bar */
+ bool silent; /* do not show messages, --silent given */
+ bool noprogress; /* do not show progress bar */
bool isatty; /* Updated internally if output is a tty */
char *trace_dump; /* file to dump the network trace to */
FILE *trace_stream;
bool tracetime; /* include timestamp? */
bool traceids; /* include xfer-/conn-id? */
int progressmode; /* CURL_PROGRESS_BAR / CURL_PROGRESS_STATS */
- char *libcurl; /* Output libcurl code to this file name */
+ char *libcurl; /* Output libcurl code to this filename */
bool fail_early; /* exit on first transfer error */
bool styled_output; /* enable fancy output style detection */
long ms_per_transfer; /* start next transfer after (at least) this
switch(errno) {
#ifdef EACCES
case EACCES:
- errorf(global, "You don't have permission to create %s", name);
+ errorf(global, "You do not have permission to create %s", name);
break;
#endif
#ifdef ENAMETOOLONG
}
dirbuildup[0] = '\0';
- /* Allow strtok() here since this isn't used threaded */
+ /* Allow strtok() here since this is not used threaded */
/* !checksrc! disable BANNEDFUNC 2 */
tempdir = strtok(outdup, PATH_DELIMITERS);
It may seem as though that would harmlessly fail but it could be
a corner case if X: did not exist, since we would be creating it
erroneously.
- eg if outfile is X:\foo\bar\filename then don't mkdir X:
+ eg if outfile is X:\foo\bar\filename then do not mkdir X:
This logic takes into account unsupported drives !:, 1:, etc. */
char *p = strchr(tempdir, ':');
if(p && !p[1])
skip = true;
#endif
- /* the output string doesn't start with a separator */
+ /* the output string does not start with a separator */
strcpy(dirbuildup, tempdir);
}
else
#endif
#ifdef _WIN32
-# define _use_lfn(f) (1) /* long file names always available */
+# define _use_lfn(f) (1) /* long filenames always available */
#elif !defined(__DJGPP__) || (__DJGPP__ < 2) /* DJGPP 2.0 has _use_lfn() */
-# define _use_lfn(f) (0) /* long file names never available */
+# define _use_lfn(f) (0) /* long filenames never available */
#elif defined(__DJGPP__)
# include <fcntl.h> /* _use_lfn(f) prototype */
#endif
Without this flag path separators and colons are sanitized.
SANITIZE_ALLOW_RESERVED: Allow reserved device names.
-Without this flag a reserved device name is renamed (COM1 => _COM1) unless it's
-in a UNC prefixed path.
+Without this flag a reserved device name is renamed (COM1 => _COM1) unless it
+is in a UNC prefixed path.
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename.
Without this flag if the sanitized filename or path will be too long an error
max_sanitized_len = PATH_MAX-1;
}
else
- /* The maximum length of a filename.
- FILENAME_MAX is often the same as PATH_MAX, in other words it is 260 and
- does not discount the path information therefore we shouldn't use it. */
+ /* The maximum length of a filename. FILENAME_MAX is often the same as
+ PATH_MAX, in other words it is 260 and does not discount the path
+ information therefore we should not use it. */
max_sanitized_len = (PATH_MAX-1 > 255) ? 255 : PATH_MAX-1;
len = strlen(file_name);
/*
Test if truncating a path to a file will leave at least a single character in
-the filename. Filenames suffixed by an alternate data stream can't be
+the filename. Filenames suffixed by an alternate data stream cannot be
truncated. This performs a dry run, nothing is modified.
Good truncate_pos 9: C:\foo\bar => C:\foo\ba
Bad truncate_pos 1: C:\foo\ => C
* C:foo is ambiguous, C could end up being a drive or file therefore something
- like C:superlongfilename can't be truncated.
+ like C:superlongfilename cannot be truncated.
Returns
SANITIZE_ERR_OK: Good -- 'path' can be truncated
if(strpbrk(&path[truncate_pos - 1], "\\/:"))
return SANITIZE_ERR_INVALID_PATH;
- /* C:\foo can be truncated but C:\foo:ads can't */
+ /* C:\foo can be truncated but C:\foo:ads cannot */
if(truncate_pos > 1) {
const char *p = &path[truncate_pos - 1];
do {
*d = ':';
else if((flags & SANITIZE_ALLOW_PATH) && (*s == '/' || *s == '\\'))
*d = *s;
- /* Dots are special: DOS doesn't allow them as the leading character,
- and a file name cannot have more than a single dot. We leave the
+ /* Dots are special: DOS does not allow them as the leading character,
+ and a filename cannot have more than a single dot. We leave the
first non-leading dot alone, unless it comes too close to the
beginning of the name: we want sh.lex.c to become sh_lex.c, not
sh.lex-c. */
#endif /* MSDOS || UNITTESTS */
/*
-Rename file_name if it's a reserved dos device name.
+Rename file_name if it is a reserved dos device name.
This is a supporting function for sanitize_file_name.
const char *file_name,
int flags)
{
- /* We could have a file whose name is a device on MS-DOS. Trying to
- * retrieve such a file would fail at best and wedge us at worst. We need
+ /* We could have a file whose name is a device on MS-DOS. Trying to
+ * retrieve such a file would fail at best and wedge us at worst. We need
* to rename such files. */
char *p, *base;
char fname[PATH_MAX];
/* This is the legacy portion from rename_if_dos_device_name that checks for
reserved device names. It only works on MSDOS. On Windows XP the stat
check errors with EINVAL if the device name is reserved. On Windows
- Vista/7/8 it sets mode S_IFREG (regular file or device). According to MSDN
- stat doc the latter behavior is correct, but that doesn't help us identify
- whether it's a reserved device name and not a regular file name. */
+ Vista/7/8 it sets mode S_IFREG (regular file or device). According to
+ MSDN stat doc the latter behavior is correct, but that does not help us
+ identify whether it is a reserved device name and not a regular
+ filename. */
#ifdef MSDOS
if(base && ((stat(base, &st_buf)) == 0) && (S_ISCHR(st_buf.st_mode))) {
/* Prepend a '_' */
#ifdef UNICODE
/* sizeof(mod.szExePath) is the max total bytes of wchars. the max total
- bytes of multibyte chars won't be more than twice that. */
+ bytes of multibyte chars will not be more than twice that. */
char buffer[sizeof(mod.szExePath) * 2];
if(!WideCharToMultiByte(CP_ACP, 0, mod.szExePath, -1,
buffer, sizeof(buffer), NULL, NULL))
int rc = 1;
/* Windows stat() may attempt to adjust the unix GMT file time by a daylight
- saving time offset and since it's GMT that is bad behavior. When we have
+ saving time offset and since it is GMT that is bad behavior. When we have
access to a 64-bit type we can bypass stat and get the times directly. */
#if defined(_WIN32) && !defined(CURL_WINDOWS_APP)
HANDLE hfile;
{
if(filetime >= 0) {
/* Windows utime() may attempt to adjust the unix GMT file time by a daylight
- saving time offset and since it's GMT that is bad behavior. When we have
+ saving time offset and since it is GMT that is bad behavior. When we have
access to a 64-bit type we can bypass utime and set the times directly. */
#if defined(_WIN32) && !defined(CURL_WINDOWS_APP)
HANDLE hfile;
*pfilename = filename;
else if(filename)
warnf(config->global,
- "Field file name not allowed here: %s", filename);
+ "Field filename not allowed here: %s", filename);
if(pencoder)
*pencoder = encoder;
* 'name=foo;headers=@headerfile' or why not
 * 'name=@filename;headers=@headerfile'
*
- * To upload a file, but to fake the file name that will be included in the
+ * To upload a file, but to fake the filename that will be included in the
* formpost, do like this:
*
* 'name=@filename;filename=/dev/null' or quote the faked filename like:
struct tool_mime **mimecurrent,
bool literal_value)
{
- /* input MUST be a string in the format 'name=contents' and we'll
+ /* input MUST be a string in the format 'name=contents' and we will
build a linked list with the info */
char *name = NULL;
char *contents = NULL;
}
else if('@' == contp[0] && !literal_value) {
- /* we use the @-letter to indicate file name(s) */
+ /* we use the @-letter to indicate filename(s) */
struct tool_mime *subparts = NULL;
SET_TOOL_MIME_PTR(part, encoder);
/* *contp could be '\0', so we just check with the delimiter */
- } while(sep); /* loop if there's another file name */
+ } while(sep); /* loop if there is another filename */
part = (*mimecurrent)->subparts; /* Set name on group. */
}
else {
ARG_NONE, /* stand-alone but not a boolean */
ARG_BOOL, /* accepts a --no-[name] prefix */
ARG_STRG, /* requires an argument */
- ARG_FILE /* requires an argument, usually a file name */
+ ARG_FILE /* requires an argument, usually a filename */
} desc;
char letter; /* short name option or ' ' */
cmdline_t cmd;
/* Split the argument of -E to 'certname' and 'passphrase' separated by colon.
* We allow ':' and '\' to be escaped by '\' so that we can use certificate
- * nicknames containing ':'. See <https://sourceforge.net/p/curl/bugs/1196/>
+ * nicknames containing ':'. See <https://sourceforge.net/p/curl/bugs/1196/>
* for details. */
#ifndef UNITTESTS
static
strncpy(certname_place, param_place, span);
param_place += span;
certname_place += span;
- /* we just ate all the non-special chars. now we're on either a special
+ /* we just ate all the non-special chars. now we are on either a special
* char or the end of the string. */
switch(*param_place) {
case '\0':
/* Since we live in a world of weirdness and confusion, the win32
dudes can use : when using drive letters and thus c:\file:password
needs to work. In order not to break compatibility, we still use : as
- separator, but we try to detect when it is used for a file name! On
+ separator, but we try to detect when it is used for a filename! On
windows. */
#ifdef _WIN32
if((param_place == &cert_parameter[1]) &&
}
#endif
/* escaped colons and Windows drive letter colons were handled
- * above; if we're still here, this is a separating colon */
+ * above; if we are still here, this is a separating colon */
param_place++;
if(*param_place) {
*passphrase = strdup(param_place);
static void cleanarg(argv_item_t str)
{
/* now that getstr has copied the contents of nextarg, wipe the next
- * argument out so that the username:password isn't displayed in the
+ * argument out so that the username:password is not displayed in the
* system process list */
if(str) {
size_t len = strlen(str);
size_t *lenp)
{
/* [name]=[content], we encode the content part only
- * [name]@[file name]
+ * [name]@[filename]
*
* Case 2: we first load the file using that name and then encode
* the content.
is_file = *p++; /* pass the separator */
}
else {
- /* neither @ nor =, so no name and it isn't a file */
+ /* neither @ nor =, so no name and it is not a file */
nlen = 0;
is_file = 0;
p = nextarg;
}
if('@' == is_file) {
FILE *file;
- /* a '@' letter, it means that a file name or - (stdin) follows */
+ /* a '@' letter, it means that a filename or - (stdin) follows */
if(!strcmp("-", p)) {
file = stdin;
set_binmode(stdin);
if(!tmp)
return CURLE_OUT_OF_MEMORY;
- /* Allow strtok() here since this isn't used threaded */
+ /* Allow strtok() here since this is not used threaded */
/* !checksrc! disable BANNEDFUNC 2 */
token = strtok(tmp, ", ");
while(token) {
return err;
}
else if('@' == *nextarg && (cmd != C_DATA_RAW)) {
- /* the data begins with a '@' letter, it means that a file name
+ /* the data begins with a '@' letter, it means that a filename
or - (stdin) follows */
nextarg++; /* pass the @ */
(void)cleararg;
#endif
- *usedarg = FALSE; /* default is that we don't use the arg */
+ *usedarg = FALSE; /* default is that we do not use the arg */
if(('-' != flag[0]) || ('-' == flag[1])) {
/* this should be a long name */
goto error;
}
if(noflagged && (a->desc != ARG_BOOL)) {
- /* --no- prefixed an option that isn't boolean! */
+ /* --no- prefixed an option that is not boolean! */
err = PARAM_NO_NOT_BOOLEAN;
goto error;
}
if((a->desc != ARG_STRG) &&
(a->desc != ARG_FILE)) {
- /* --expand on an option that isn't a string or a filename */
+ /* --expand on an option that is not a string or a filename */
err = PARAM_EXPAND_ERROR;
goto error;
}
/* this option requires an extra parameter */
if(!longopt && parse[1]) {
nextarg = (char *)&parse[1]; /* this is the actual extra parameter */
- singleopt = TRUE; /* don't loop anymore after this */
+ singleopt = TRUE; /* do not loop anymore after this */
}
else if(!nextarg) {
err = PARAM_REQUIRES_PARAMETER;
if((a->desc == ARG_FILE) &&
(nextarg[0] == '-') && nextarg[1]) {
- /* if the file name looks like a command line option */
- warnf(global, "The file name argument '%s' looks like a flag.",
+ /* if the filename looks like a command line option */
+ warnf(global, "The filename argument '%s' looks like a flag.",
nextarg);
}
else if(!strncmp("\xe2\x80\x9c", nextarg, 3)) {
case C_ANYAUTH: /* --anyauth */
if(toggle)
config->authtype = CURLAUTH_ANY;
- /* --no-anyauth simply doesn't touch it */
+ /* --no-anyauth simply does not touch it */
break;
#ifdef USE_WATT32
case C_WDEBUG: /* --wdebug */
config->url_get = config->url_list;
if(config->url_get) {
- /* there's a node here, if it already is filled-in continue to find
+ /* there is a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_get && (config->url_get->flags & GETOUT_URL))
config->url_get = config->url_get->next;
while(ISDIGIT(*p))
p++;
if(*p) {
- /* if there's anything more than a plain decimal number */
+ /* if there is anything more than a plain decimal number */
rc = sscanf(p, " - %6s", lrange);
*p = 0; /* null-terminate to make str2unum() work below */
}
else {
err = file2memory(&string, &len, file);
if(!err && string) {
- /* Allow strtok() here since this isn't used threaded */
+ /* Allow strtok() here since this is not used threaded */
/* !checksrc! disable BANNEDFUNC 2 */
char *h = strtok(string, "\r\n");
while(h) {
if(!config->url_out)
config->url_out = config->url_list;
if(config->url_out) {
- /* there's a node here, if it already is filled-in continue to find
+ /* there is a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_out && (config->url_out->flags & GETOUT_OUTFILE))
config->url_out = config->url_out->next;
break;
case C_DISABLE: /* --disable */
- /* if used first, already taken care of, we do it like this so we don't
+ /* if used first, already taken care of, we do it like this so we do not
cause an error! */
break;
case C_QUOTE: /* --quote */
break;
case C_RANGE: /* --range */
/* Specifying a range WITHOUT A DASH will create an illegal HTTP range
- (and won't actually be range by definition). The man page previously
- claimed that to be a good way, why this code is added to work-around
- it. */
+ (and will not actually be range by definition). The manpage
+ previously claimed that to be a good way, why this code is added to
+         work around it. */
if(ISDIGIT(*nextarg) && !strchr(nextarg, '-')) {
char buffer[32];
if(curlx_strtoofft(nextarg, NULL, 10, &value)) {
if(!config->url_ul)
config->url_ul = config->url_list;
if(config->url_ul) {
- /* there's a node here, if it already is filled-in continue to find
+ /* there is a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_ul && (config->url_ul->flags & GETOUT_UPLOAD))
config->url_ul = config->url_ul->next;
case C_WRITE_OUT: /* --write-out */
/* get the output string */
if('@' == *nextarg) {
- /* the data begins with a '@' letter, it means that a file name
+ /* the data begins with a '@' letter, it means that a filename
or - (stdin) follows */
FILE *file;
const char *fname;
now = time(NULL);
config->condtime = (curl_off_t)curl_getdate(nextarg, &now);
if(-1 == config->condtime) {
- /* now let's see if it is a file name to get the time from instead! */
+ /* now let's see if it is a filename to get the time from instead! */
rc = getfiletime(nextarg, global, &value);
if(!rc)
/* pull the time out from the file */
config->timecond = CURL_TIMECOND_NONE;
warnf(global,
"Illegal date format for -z, --time-cond (and not "
- "a file name). Disabling time condition. "
+ "a filename). Disabling time condition. "
"See curl_getdate(3) for valid date syntax.");
}
}
}
}
else if(!result && passarg)
- i++; /* we're supposed to skip this */
+ i++; /* we are supposed to skip this */
}
}
else {
long sts;
short chan;
- /* MSK, 23-JAN-2004, iosbdef.h wasn't in VAX V7.2 or CC 6.4 */
- /* distribution so I created this. May revert back later to */
+ /* MSK, 23-JAN-2004, iosbdef.h was not in VAX V7.2 or CC 6.4 */
+ /* distribution so I created this. May revert back later to */
/* struct _iosb iosb; */
struct _iosb
{
}
/* since echo is disabled, print a newline */
fputs("\n", tool_stderr);
- /* if user didn't hit ENTER, terminate buffer */
+ /* if user did not hit ENTER, terminate buffer */
if(i == buflen)
buffer[buflen-1] = '\0';
noecho.c_lflag &= ~(tcflag_t)ECHO;
ioctl(fd, TCSETA, &noecho);
#else
- /* neither HAVE_TERMIO_H nor HAVE_TERMIOS_H, we can't disable echo! */
+ /* neither HAVE_TERMIO_H nor HAVE_TERMIOS_H, we cannot disable echo! */
(void)fd;
return FALSE; /* not disabled */
#endif
bool disabled;
int fd = open("/dev/tty", O_RDONLY);
if(-1 == fd)
- fd = STDIN_FILENO; /* use stdin if the tty couldn't be used */
+ fd = STDIN_FILENO; /* use stdin if the tty could not be used */
disabled = ttyecho(FALSE, fd); /* disable terminal echo */
#include "tool_setup.h"
#ifndef HAVE_GETPASS_R
-/* If there's a system-provided function named like this, we trust it is
+/* If there is a system-provided function named like this, we trust it is
also found in one of the standard headers. */
/*
{"http", "HTTP and HTTPS protocol options", CURLHELP_HTTP},
{"imap", "IMAP protocol options", CURLHELP_IMAP},
/* important is left out because it is the default help page */
- {"misc", "Options that don't fit into any other category", CURLHELP_MISC},
+ {"misc", "Options that do not fit into any other category", CURLHELP_MISC},
{"output", "Filesystem output", CURLHELP_OUTPUT},
{"pop3", "POP3 protocol options", CURLHELP_POP3},
{"post", "HTTP Post specific options", CURLHELP_POST},
case PARAM_NEGATIVE_NUMERIC:
return "expected a positive numerical parameter";
case PARAM_LIBCURL_DOESNT_SUPPORT:
- return "the installed libcurl version doesn't support this";
+ return "the installed libcurl version does not support this";
case PARAM_LIBCURL_UNSUPPORTED_PROTOCOL:
return "a specified protocol is unsupported by libcurl";
case PARAM_NO_MEM:
return "out of memory";
case PARAM_NO_PREFIX:
- return "the given option can't be reversed with a --no- prefix";
+ return "the given option cannot be reversed with a --no- prefix";
case PARAM_NUMBER_TOO_LARGE:
return "too large number";
case PARAM_NO_NOT_BOOLEAN:
- return "used '--no-' for option that isn't a boolean";
+ return "used '--no-' for option that is not a boolean";
case PARAM_CONTDISP_SHOW_HEADER:
return "showing headers and --remote-header-name cannot be combined";
case PARAM_CONTDISP_RESUME_FROM:
char *home = getenv("HOME");
if(home && *home)
ipfs_path = aprintf("%s/.ipfs/", home);
- /* fallback to "~/.ipfs", as that's the default location. */
+ /* fallback to "~/.ipfs", as that is the default location. */
}
if(!ipfs_path || ensure_trailing_slash(&ipfs_path))
}
/*
- * Rewrite ipfs://<cid> and ipns://<cid> to a HTTP(S)
+ * Rewrite ipfs://<cid> and ipns://<cid> to an HTTP(S)
* URL that can be handled by an IPFS gateway.
*/
CURLcode ipfs_url_rewrite(CURLU *uh, const char *protocol, char **url,
goto clean;
/* We might have a --ipfs-gateway argument. Check it first and use it. Error
- * if we do have something but if it's an invalid url.
+ * if we do have something but if it is an invalid url.
*/
if(config->ipfs_gateway) {
/* ensure the gateway ends in a trailing / */
#include "memdebug.h" /* keep this as LAST include */
-/* global variable definitions, for libcurl run-time info */
+/* global variable definitions, for libcurl runtime info */
static const char *no_protos = NULL;
const char * const *feature_names = fnames;
/*
- * libcurl_info_init: retrieves run-time information about libcurl,
- * setting a global pointer 'curlinfo' to libcurl's run-time info
+ * libcurl_info_init: retrieves runtime information about libcurl,
+ * setting a global pointer 'curlinfo' to libcurl's runtime info
* struct, count protocols and flag those we are interested in.
* Global pointer feature_names is set to the feature names array. If
* the latter is not returned by curl_version_info(), it is built from
CURLcode result = CURLE_OK;
const char *const *builtin;
- /* Pointer to libcurl's run-time version information */
+ /* Pointer to libcurl's runtime version information */
curlinfo = curl_version_info(CURLVERSION_NOW);
if(!curlinfo)
return CURLE_FAILED_INIT;
***************************************************************************/
#include "tool_setup.h"
-/* global variable declarations, for libcurl run-time info */
+/* global variable declarations, for libcurl runtime info */
extern curl_version_info_data *curlinfo;
"Maximum concurrency for parallel transfers",
CURLHELP_CONNECTION | CURLHELP_CURL},
{" --pass <phrase>",
- "Pass phrase for the private key",
+ "Passphrase for the private key",
CURLHELP_SSH | CURLHELP_TLS | CURLHELP_AUTH},
{" --path-as-is",
"Do not squash .. sequences in URL path",
"NTLM authentication with the proxy",
CURLHELP_PROXY | CURLHELP_AUTH},
{" --proxy-pass <phrase>",
- "Pass phrase for the private key for HTTPS proxy",
+ "Passphrase for the private key for HTTPS proxy",
CURLHELP_PROXY | CURLHELP_TLS | CURLHELP_AUTH},
{" --proxy-pinnedpubkey <hashes>",
"FILE/HASHES public key to verify proxy with",
#if defined(HAVE_PIPE) && defined(HAVE_FCNTL)
/*
* Ensure that file descriptors 0, 1 and 2 (stdin, stdout, stderr) are
- * open before starting to run. Otherwise, the first three network
+ * open before starting to run. Otherwise, the first three network
* sockets opened by curl could be used for input sources, downloaded data
* or error logs as they will effectively be stdin, stdout and/or stderr.
*
/* if CURL_MEMDEBUG is set, this starts memory tracking message logging */
env = curl_getenv("CURL_MEMDEBUG");
if(env) {
- /* use the value as file name */
+ /* use the value as filename */
char fname[CURL_MT_LOGFNAME_BUFSIZE];
if(strlen(env) >= CURL_MT_LOGFNAME_BUFSIZE)
env[CURL_MT_LOGFNAME_BUFSIZE-1] = '\0';
*/
#ifdef _UNICODE
#if defined(__GNUC__)
-/* GCC doesn't know about wmain() */
+/* GCC does not know about wmain() */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmissing-prototypes"
#pragma GCC diagnostic ignored "-Wmissing-declarations"
#ifndef O_BINARY
/* since O_BINARY as used in bitmasks, setting it to zero makes it usable in
- source code but yet it doesn't ruin anything */
+ source code but yet it does not ruin anything */
# define O_BINARY 0
#endif
# define SOL_IP IPPROTO_IP
#endif
-#define CURL_CA_CERT_ERRORMSG \
- "More details here: https://curl.se/docs/sslcerts.html\n\n" \
- "curl failed to verify the legitimacy of the server and therefore " \
- "could not\nestablish a secure connection to it. To learn more about " \
- "this situation and\nhow to fix it, please visit the web page mentioned " \
+#define CURL_CA_CERT_ERRORMSG \
+ "More details here: https://curl.se/docs/sslcerts.html\n\n" \
+ "curl failed to verify the legitimacy of the server and therefore " \
+ "could not\nestablish a secure connection to it. To learn more about " \
+ "this situation and\nhow to fix it, please visit the webpage mentioned " \
"above.\n"
static CURLcode single_transfer(struct GlobalConfig *global,
if(per->uploadfile && !stdin_upload(per->uploadfile)) {
/* VMS Note:
*
- * Reading binary from files can be a problem... Only FIXED, VAR
+ * Reading binary from files can be a problem... Only FIXED, VAR
* etc WITHOUT implied CC will work. Others need a \n appended to
* a line
*
if((per->infd == -1) || fstat(per->infd, &fileinfo))
#endif
{
- helpf(tool_stderr, "Can't open '%s'", per->uploadfile);
+ helpf(tool_stderr, "cannot open '%s'", per->uploadfile);
if(per->infd != -1) {
close(per->infd);
per->infd = STDIN_FILENO;
memset(outs->utf8seq, 0, sizeof(outs->utf8seq));
#endif
- /* if retry-max-time is non-zero, make sure we haven't exceeded the
+ /* if retry-max-time is non-zero, make sure we have not exceeded the
time */
if(per->retry_remaining &&
(!config->retry_maxtime ||
if((scheme == proto_ftp || scheme == proto_ftps) && response / 100 == 4)
/*
* This is typically when the FTP server only allows a certain
- * amount of users and we are not one of them. All 4xx codes
+ * amount of users and we are not one of them. All 4xx codes
* are transient.
*/
retry = RETRY_FTP;
if(RETRY_HTTP == retry) {
curl_easy_getinfo(curl, CURLINFO_RETRY_AFTER, &retry_after);
if(retry_after) {
- /* store in a 'long', make sure it doesn't overflow */
+ /* store in a 'long', make sure it does not overflow */
if(retry_after > LONG_MAX/1000)
sleeptime = LONG_MAX;
else if((retry_after * 1000) > sleeptime)
/* truncate file at the position where we started appending */
#ifdef HAVE_FTRUNCATE
if(ftruncate(fileno(outs->stream), outs->init)) {
- /* when truncate fails, we can't just append as then we'll
+ /* when truncate fails, we cannot just append as then we will
create something strange, bail out */
errorf(config->global, "Failed to truncate file");
return CURLE_WRITE_ERROR;
rc = fseek(outs->stream, 0, SEEK_END);
#else
/* ftruncate is not available, so just reposition the file
- to the location we would have truncated it. This won't
+ to the location we would have truncated it. This will not
work properly with large files on 32-bit systems, but
most of those will have ftruncate. */
rc = fseek(outs->stream, (long)outs->init, SEEK_SET);
if(curl_strequal(schemep, proto_ipfs) ||
curl_strequal(schemep, proto_ipns)) {
result = ipfs_url_rewrite(uh, schemep, url, config);
- /* short-circuit proto_token, we know it's ipfs or ipns */
+ /* short-circuit proto_token, we know it is ipfs or ipns */
if(curl_strequal(schemep, proto_ipfs))
proto = proto_ipfs;
else if(curl_strequal(schemep, proto_ipns))
(per->outfile && strcmp("-", per->outfile)))) {
/*
- * We have specified a file name to store the result in, or we have
- * decided we want to use the remote file name.
+ * We have specified a filename to store the result in, or we have
+ * decided we want to use the remote filename.
*/
if(!per->outfile) {
- /* extract the file name from the URL */
+ /* extract the filename from the URL */
result = get_url_file_name(&per->outfile, per->this_url);
if(result) {
- errorf(global, "Failed to extract a sensible file name"
+ errorf(global, "Failed to extract a sensible filename"
" from the URL to use for storage");
break;
}
if(!*per->outfile && !config->content_disposition) {
- errorf(global, "Remote file name has no length");
+ errorf(global, "Remote filename has no length");
result = CURLE_WRITE_ERROR;
break;
}
}
if(config->resume_from_current) {
- /* We're told to continue from where we are now. Get the size
+ /* We are told to continue from where we are now. Get the size
of the file as it is now and open it for append instead */
struct_stat fileinfo;
/* VMS -- Danger, the filesize is only valid for stream files */
FILE *file = fopen(per->outfile, "ab");
#endif
if(!file) {
- errorf(global, "Can't open '%s'", per->outfile);
+ errorf(global, "cannot open '%s'", per->outfile);
result = CURLE_WRITE_ERROR;
break;
}
if(per->uploadfile && !stdin_upload(per->uploadfile)) {
/*
- * We have specified a file to upload and it isn't "-".
+ * We have specified a file to upload and it is not "-".
*/
result = add_file_name_to_url(per->curl, &per->this_url,
per->uploadfile);
if(config->authtype & (1UL << bitcheck++)) {
authbits++;
if(authbits > 1) {
- /* more than one, we're done! */
+ /* more than one, we are done! */
break;
}
}
#ifndef DEBUGBUILD
/* On most modern OSes, exiting works thoroughly,
- we'll clean everything up via exit(), so don't bother with
+ we will clean everything up via exit(), so do not bother with
slow cleanups. Crappy ones might need to skip this.
Note: avoid having this setopt added to the --libcurl source
output. */
structblob.len = (size_t)filesize;
structblob.flags = CURL_BLOB_COPY;
my_setopt_str(curl, CURLOPT_SSLCERT_BLOB, &structblob);
- /* if test run well, we are sure we don't reuse
+          /* if the test ran well, we are sure we do not reuse
* original mem pointer */
memset(certdata, 0, (size_t)filesize);
}
structblob.len = (size_t)filesize;
structblob.flags = CURL_BLOB_COPY;
my_setopt_str(curl, CURLOPT_SSLKEY_BLOB, &structblob);
- /* if test run well, we are sure we don't reuse
+          /* if the test ran well, we are sure we do not reuse
* original mem pointer */
memset(certdata, 0, (size_t)filesize);
}
if(!errorbuf)
return CURLE_OUT_OF_MEMORY;
- /* parallel connect means that we don't set PIPEWAIT since pipewait
+ /* parallel connect means that we do not set PIPEWAIT since pipewait
will make libcurl prefer multiplexing */
(void)curl_easy_setopt(per->curl, CURLOPT_PIPEWAIT,
global->parallel_connect ? 0L : 1L);
if(tres)
result = tres;
if(added_transfers)
- /* we added new ones, make sure the loop doesn't exit yet */
+ /* we added new ones, make sure the loop does not exit yet */
still_running = 1;
}
if(is_fatal_error(result) || (result && global->fail_early))
return CURLE_FAILED_INIT;
}
- /* On WIN32 we can't set the path to curl-ca-bundle.crt
- * at compile time. So we look here for the file in two ways:
+ /* On WIN32 we cannot set the path to curl-ca-bundle.crt at compile time. We
+ * look for the file in two ways:
* 1: look at the environment variable CURL_CA_BUNDLE for a path
- * 2: if #1 isn't found, use the windows API function SearchPath()
+ * 2: if #1 is not found, use the windows API function SearchPath()
* to find it along the app's path (includes app's dir and CWD)
*
* We support the environment variable thing for non-Windows platforms
long delay;
CURLcode result2 = post_per_transfer(global, per, result, &retry, &delay);
if(!result)
- /* don't overwrite the original error */
+ /* do not overwrite the original error */
result = result2;
/* Free list of given URLs */
}
/*
- * Adds the file name to the URL if it doesn't already have one.
+ * Adds the filename to the URL if it does not already have one.
* url will be freed before return if the returned pointer is different
*/
CURLcode add_file_name_to_url(CURL *curl, char **inurlp, const char *filename)
}
ptr = strrchr(path, '/');
if(!ptr || !*++ptr) {
- /* The URL path has no file name part, add the local file name. In order
+ /* The URL path has no filename part, add the local filename. In order
to be able to do so, we have to create a new URL in another buffer.*/
/* We only want the part of the local path that is on the right
else
filep = filename;
- /* URL encode the file name */
+ /* URL encode the filename */
encfile = curl_easy_escape(curl, filep, 0 /* use strlen */);
if(encfile) {
char *newpath;
#endif /* _WIN32 || MSDOS */
/* in case we built debug enabled, we allow an environment variable
- * named CURL_TESTDIR to prefix the given file name to put it into a
+ * named CURL_TESTDIR to prefix the given filename to put it into a
* specific directory
*/
#ifdef DEBUGBUILD
protoset_set(protoset, p);
}
- /* Allow strtok() here since this isn't used threaded */
+ /* Allow strtok() here since this is not used threaded */
/* !checksrc! disable BANNEDFUNC 2 */
for(token = strtok(buffer, sep);
token;
{
char *endptr;
if(str[0] == '-')
- /* offsets aren't negative, this indicates weird input */
+ /* offsets are not negative, this indicates weird input */
return PARAM_NEGATIVE_NUMERIC;
#if(SIZEOF_CURL_OFF_T > SIZEOF_LONG)
{
static char filebuffer[512];
/* Get the filename of our executable. GetModuleFileName is already declared
- * via inclusions done in setup header file. We assume that we are using
+ * via inclusions done in setup header file. We assume that we are using
* the ASCII version here.
*/
unsigned long len = GetModuleFileNameA(0, filebuffer, sizeof(filebuffer));
if(*line) {
*line = '\0'; /* null-terminate */
- /* to detect mistakes better, see if there's data following */
+ /* to detect mistakes better, see if there is data following */
line++;
/* pass all spaces */
while(*line && ISSPACE(*line))
}
if(!*param)
/* do this so getparameter can check for required parameters.
- Otherwise it always thinks there's a parameter. */
+ Otherwise it always thinks there is a parameter. */
param = NULL;
}
operation = global->last;
if(!res && param && *param && !usedarg)
- /* we passed in a parameter that wasn't used! */
+ /* we passed in a parameter that was not used! */
res = PARAM_GOT_EXTRA_PARAMETER;
if(res == PARAM_NEXT_OPERATION) {
}
if(res != PARAM_OK && res != PARAM_NEXT_OPERATION) {
- /* the help request isn't really an error */
+ /* the help request is not really an error */
if(!strcmp(filename, "-")) {
filename = "<stdin>";
}
rc = 1;
}
else
- rc = 1; /* couldn't open the file */
+ rc = 1; /* could not open the file */
free(pathalloc);
return rc;
* backslash-quoted characters and NUL-terminating the output string.
* Stops at the first non-backslash-quoted double quote character or the
* end of the input string. param must be at least as long as the input
- * string. Returns the pointer after the last handled input character.
+ * string. Returns the pointer after the last handled input character.
*/
static const char *unslashquote(const char *line, char *param)
{
/* fgets() returns s on success, and NULL on error or when end of file
occurs while no characters have been read. */
if(!fgets(buf, sizeof(buf), fp))
- /* only if there's data in the line, return TRUE */
+ /* only if there is data in the line, return TRUE */
return curlx_dyn_len(db) ? TRUE : FALSE;
if(curlx_dyn_add(db, buf)) {
*error = TRUE; /* error */
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "k", bytes/ONE_KILOBYTE);
else if(bytes < CURL_OFF_T_C(100) * ONE_MEGABYTE)
- /* 'XX.XM' is good as long as we're less than 100 megs */
+ /* 'XX.XM' is good as long as we are less than 100 megs */
msnprintf(max5, 6, "%2" CURL_FORMAT_CURL_OFF_T ".%0"
CURL_FORMAT_CURL_OFF_T "M", bytes/ONE_MEGABYTE,
(bytes%ONE_MEGABYTE) / (ONE_MEGABYTE/CURL_OFF_T_C(10)) );
else if(bytes < CURL_OFF_T_C(10000) * ONE_MEGABYTE)
- /* 'XXXXM' is good until we're at 10000MB or above */
+ /* 'XXXXM' is good until we are at 10000MB or above */
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "M", bytes/ONE_MEGABYTE);
else if(bytes < CURL_OFF_T_C(100) * ONE_GIGABYTE)
/* up to 10000PB, display without decimal: XXXXP */
msnprintf(max5, 6, "%4" CURL_FORMAT_CURL_OFF_T "P", bytes/ONE_PETABYTE);
- /* 16384 petabytes (16 exabytes) is the maximum a 64 bit unsigned number can
+ /* 16384 petabytes (16 exabytes) is the maximum a 64-bit unsigned number can
hold, but our data type is signed so 8192PB will be the maximum. */
return max5;
}
* OutStruct variables keep track of information relative to curl's
* output writing, which may take place to a standard stream or a file.
*
- * 'filename' member is either a pointer to a file name string or NULL
+ * 'filename' member is either a pointer to a filename string or NULL
* when dealing with a standard stream.
*
* 'alloc_filename' member is TRUE when string pointed by 'filename' has been
*
* 's_isreg' member is TRUE when output goes to a regular file, this also
* implies that output is 'seekable' and 'appendable' and also that member
- * 'filename' points to file name's string. For any standard stream member
+ * 'filename' points to filename's string. For any standard stream member
* 's_isreg' will be FALSE.
*
* 'fopened' member is TRUE when output goes to a regular file and it
#define GETOUT_OUTFILE (1<<0) /* set when outfile is deemed done */
#define GETOUT_URL (1<<1) /* set when URL is deemed done */
-#define GETOUT_USEREMOTE (1<<2) /* use remote file name locally */
+#define GETOUT_USEREMOTE (1<<2) /* use remote filename locally */
#define GETOUT_UPLOAD (1<<3) /* if set, -T has been used */
#define GETOUT_NOUPLOAD (1<<4) /* if set, -T "" has been used */
#define REM1(f,a) ADDF((&easysrc_toohard, f,a))
#define REM3(f,a,b,c) ADDF((&easysrc_toohard, f,a,b,c))
-/* Escape string to C string syntax. Return NULL if out of memory.
+/* Escape string to C string syntax. Return NULL if out of memory.
* Is this correct for those wacky EBCDIC guys? */
#define MAX_STRING_LENGTH_OUTPUT 2000
#else
sum = *amount * with;
if(sum/with != *amount)
- return 1; /* didn't fit, bail out */
+ return 1; /* did not fit, bail out */
#endif
}
*amount = sum;
return GLOBERROR("empty string within braces", *posp,
CURLE_URL_MALFORMAT);
- /* add 1 to size since it'll be incremented below */
+ /* add 1 to size since it will be incremented below */
if(multiply(amount, pat->content.Set.size + 1))
return GLOBERROR("range overflow", 0, CURLE_URL_MALFORMAT);
/*
** Even when the configure process has truly detected monotonic clock
** availability, it might happen that it is not actually available at
- ** run-time. When this occurs simply fallback to other time source.
+ ** runtime. When this occurs simply fallback to other time source.
*/
#ifdef HAVE_GETTIMEOFDAY
else
/*
* Make sure that the first argument is the more recent time, as otherwise
- * we'll get a weird negative time-diff back...
+ * we will get a weird negative time-diff back...
*
* Returns: the time difference in number of milliseconds.
*/
}
/*
- * VMS has two exit() routines. When running under a Unix style shell, then
+ * VMS has two exit() routines. When running under a Unix style shell, then
* Unix style and the __posix_exit() routine is used.
*
* When running under the DCL shell, then the VMS encoded codes and decc$exit()
static const struct decc_feat_t decc_feat_array[] = {
/* Preserve command-line case with SET PROCESS/PARSE_STYLE=EXTENDED */
{ "DECC$ARGV_PARSE_STYLE", 1 },
- /* Preserve case for file names on ODS5 disks. */
+ /* Preserve case for filenames on ODS5 disks. */
{ "DECC$EFS_CASE_PRESERVE", 1 },
- /* Enable multiple dots (and most characters) in ODS5 file names,
+ /* Enable multiple dots (and most characters) in ODS5 filenames,
while preserving VMS-ness of ";version". */
{ "DECC$EFS_CHARSET", 1 },
/* List terminator. */
feat_index = decc$feature_get_index(decc_feat_array[i].name);
if(feat_index >= 0) {
- /* Valid item. Collect its properties. */
+ /* Valid item. Collect its properties. */
feat_value = decc$feature_get_value(feat_index, 1);
feat_value_min = decc$feature_get_value(feat_index, 2);
feat_value_max = decc$feature_get_value(feat_index, 3);
if((decc_feat_array[i].value >= feat_value_min) &&
(decc_feat_array[i].value <= feat_value_max)) {
- /* Valid value. Set it if necessary. */
+ /* Valid value. Set it if necessary. */
if(feat_value != decc_feat_array[i].value) {
sts = decc$feature_set_value(feat_index, 1,
decc_feat_array[i].value);
#pragma nostandard
/* Establish the LIB$INITIALIZE PSECTs, with proper alignment and
- other attributes. Note that "nopic" is significant only on VAX. */
+ other attributes. Note that "nopic" is significant only on VAX. */
#pragma extern_model save
#pragma extern_model strict_refdef "LIB$INITIALIZ" 2, nopic, nowrt
const int spare[8] = {0};
switch(wv->id) {
case VAR_ONERROR:
if(per_result == CURLE_OK)
- /* this isn't error so skip the rest */
+        /* this is not an error so skip the rest */
done = TRUE;
break;
case VAR_STDOUT:
}
end = strchr(ptr, '}');
if(end) {
- char fname[512]; /* holds the longest file name */
+ char fname[512]; /* holds the longest filename */
size_t flen = end - ptr;
if(flen < sizeof(fname)) {
FILE *stream2;
else {
char o = (char)*i;
if(lowercase && (o >= 'A' && o <= 'Z'))
- /* do not use tolower() since that's locale specific */
+ /* do not use tolower() since that is locale specific */
o |= ('a' - 'A');
result = curlx_dyn_addn(out, &o, 1);
}
if(*f == '}')
/* end of functions */
break;
- /* On entry, this is known to be a colon already. In subsequent laps, it
+ /* On entry, this is known to be a colon already. In subsequent laps, it
is also known to be a colon since that is part of the FUNCMATCH()
checks */
f++;
unix-sockets
</features>
<name>
-file name argument looks like a flag
+filename argument looks like a flag
</name>
<command>
--stderr %LOGDIR/moo%TESTNUMBER --unix-socket -k hej://moo
<verify>
<file name="%LOGDIR/moo%TESTNUMBER" mode="text">
-Warning: The file name argument '-k' looks like a flag.
+Warning: The filename argument '-k' looks like a flag.
curl: (1) Protocol "hej" not supported
</file>
ftp FTP protocol options
http HTTP and HTTPS protocol options
imap IMAP protocol options
- misc Options that don't fit into any other category
+ misc Options that do not fit into any other category
output Filesystem output
pop3 POP3 protocol options
post HTTP Post specific options
e2: Failed initialization
e3: URL using bad/illegal format or missing URL
e4: A requested feature, protocol or option was not found built-in in this libcurl due to a build-time decision.
-e5: Couldn't resolve proxy name
-e6: Couldn't resolve host name
-e7: Couldn't connect to server
+e5: Could not resolve proxy name
+e6: Could not resolve hostname
+e7: Could not connect to server
e8: Weird server reply
e9: Access denied to remote resource
e10: FTP: The server failed to connect to data port
e12: FTP: Accepting server connect has timed out
e13: FTP: unknown PASV reply
e14: FTP: unknown 227 response format
-e15: FTP: can't figure out the host in the PASV response
+e15: FTP: cannot figure out the host in the PASV response
e16: Error in the HTTP2 framing layer
-e17: FTP: couldn't set file type
+e17: FTP: could not set file type
e18: Transferred a partial file
-e19: FTP: couldn't retrieve (RETR failed) the specified file
+e19: FTP: could not retrieve (RETR failed) the specified file
e20: Unknown error
e21: Quote command returned error
e22: HTTP response code said error
e33: Requested range was not delivered by the server
e34: Internal problem setting up the POST
e35: SSL connect error
-e36: Couldn't resume download
-e37: Couldn't read a file:// file
+e36: Could not resume download
+e37: Could not read a file:// file
e38: LDAP: cannot bind
e39: LDAP: search failed
e40: Unknown error
e56: Failure when receiving data from the peer
e57: Unknown error
e58: Problem with the local SSL certificate
-e59: Couldn't use specified SSL cipher
+e59: Could not use specified SSL cipher
e60: SSL peer certificate or SSH remote key was not OK
e61: Unrecognized or bad HTTP Content or Transfer-Encoding
e62: Unknown error