[Fortran](https://github.com/interkosmos/fortran-curl) Written by Philipp Engel
-[Gambas](https://gambas.sourceforge.net/)
+[Gambas](https://gambaswiki.org/website/en/main.html)
[glib/GTK+](https://web.archive.org/web/20100526203452/atterer.net/glibcurl) Written by Richard Atterer
[PureBasic](https://web.archive.org/web/20250325015028/www.purebasic.com/documentation/http/index.html) uses libcurl in its "native" HTTP subsystem
-[Python](http://pycurl.io/) PycURL by Kjetil Jacobsen
+[Python](https://github.com/pycurl/pycurl) PycURL by Kjetil Jacobsen
[Python](https://pypi.org/project/pymcurl/) mcurl by Ganesh Viswanathan
[Rexx](https://rexxcurl.sourceforge.net/) Written by Mark Hessling
-[Ring](https://ring-lang.sourceforge.io/doc1.3/libcurl.html) RingLibCurl by Mahmoud Fayed
+[Ring](https://ring-lang.github.io/doc1.24/libcurl.html) RingLibCurl by Mahmoud Fayed
RPG, support for ILE/RPG on OS/400 is included in the source distribution
The curl tool that is shipped as an integrated component of Windows 10 and
Windows 11 is managed by Microsoft. If you were to delete the file or replace
it with a newer version downloaded from [the curl
-website](https://curl.se/windows), then a later Windows update may restore
+website](https://curl.se/windows/), then a later Windows update may restore
Microsoft's version on your system.
There is no way to independently force an upgrade of the curl.exe that is
part of Windows, as it is controlled entirely by Microsoft as owners of the
operating system.
You can always download and install [the latest version of curl for
-Windows](https://curl.se/windows) into a separate location.
+Windows](https://curl.se/windows/) into a separate location.
## Does curl support SOCKS (RFC 1928)?
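Yes. A quick command-line sketch; `127.0.0.1:1080` is a hypothetical proxy, and `--socks5` vs `--socks5-hostname` differ only in who resolves hostnames:

```shell
# Hypothetical SOCKS5 (RFC 1928) proxy listening on 127.0.0.1:1080.
# --socks5 resolves hostnames locally; --socks5-hostname lets the
# proxy do the resolving.
# curl --socks5 127.0.0.1:1080 https://example.com/
# curl --socks5-hostname 127.0.0.1:1080 https://example.com/

# Confirm the option exists in this curl build:
curl --help all | grep -om1 'socks5-hostname'
```

SOCKS4 and SOCKS4a are available through the analogous `--socks4` and `--socks4a` options.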
you will find that even if `D:\blah.txt` does exist, curl returns a 'file not
found' error.
-According to [RFC 1738](https://www.ietf.org/rfc/rfc1738.txt), `file://` URLs
-must contain a host component, but it is ignored by most implementations. In
-the above example, `D:` is treated as the host component, and is taken away.
-Thus, curl tries to open `/blah.txt`. If your system is installed to drive C:,
-that will resolve to `C:\blah.txt`, and if that does not exist you will get
-the not found error.
+According to [RFC 1738](https://datatracker.ietf.org/doc/html/rfc1738),
+`file://` URLs must contain a host component, but it is ignored by most
+implementations. In the above example, `D:` is treated as the host component,
+and is taken away. Thus, curl tries to open `/blah.txt`. If your system is
+installed to drive C:, that will resolve to `C:\blah.txt`, and if that does
+not exist you will get the not found error.
To fix this problem, use `file://` URLs with *three* leading slashes:
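For example, on a Unix-like system (the Windows form of the URL above would be `file:///D:/blah.txt`):

```shell
# Three slashes: "file://" + an empty host + an absolute path, so no
# part of the path gets eaten as a host component. Demonstrated here
# with a throwaway file.
printf 'hello' > /tmp/blah.txt
curl -s file:///tmp/blah.txt   # prints "hello"
```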
curl release tarballs are hosted on https://curl.se/download.html. They are
uploaded there at release time by the release manager.
-curl-for-win downloads are hosted on https://curl.se/windows and are uploaded
+curl-for-win downloads are hosted on https://curl.se/windows/ and are uploaded
to the server by Viktor Szakats.
-curl-for-QNX downloads are hosted on <https://curl.se/qnx> and are uploaded to
-the server by Daniel Stenberg.
+curl-for-QNX downloads are hosted on <https://curl.se/qnx/> and are uploaded
+to the server by Daniel Stenberg.
Daily release tarball-like snapshots are generated automatically and are
provided for download at <https://curl.se/snapshots/>.
Passing in a Unicode character with -d:
- [curl issue 12231](https://github.com/curl/curl/issues/12231)
+[curl issue 12231](https://github.com/curl/curl/issues/12231)
Windows Unicode builds use the home directory in the current locale.
## NTLM does not support password with Unicode 'SECTION SIGN' character
- https://en.wikipedia.org/wiki/Section_sign
- [curl issue 2120](https://github.com/curl/curl/issues/2120)
+Code point: U+00A7
+
+https://en.wikipedia.org/wiki/Section_sign
+[curl issue 2120](https://github.com/curl/curl/issues/2120)
## libcurl can fail to try alternatives with `--proxy-any`
## Do not clear digest for single realm
- [curl issue 3267](https://github.com/curl/curl/issues/3267)
+[curl issue 3267](https://github.com/curl/curl/issues/3267)
## SHA-256 digest not supported in Windows SSPI builds
Microsoft does not document the supported digest algorithms, and that `SEC_E`
error code is not a documented error for `InitializeSecurityContext` (digest).
- [curl issue 6302](https://github.com/curl/curl/issues/6302)
+[curl issue 6302](https://github.com/curl/curl/issues/6302)
## curl never completes Negotiate over HTTP
to blocking mode. If the network is suddenly disconnected during sftp
transmission, curl gets stuck, even if it is configured with a timeout.
- [curl issue 8632](https://github.com/curl/curl/issues/8632)
+[curl issue 8632](https://github.com/curl/curl/issues/8632)
## Cygwin: "WARNING: UNPROTECTED PRIVATE KEY FILE!"
## HTTP/2 prior knowledge over proxy
- [curl issue 12641](https://github.com/curl/curl/issues/12641)
+[curl issue 12641](https://github.com/curl/curl/issues/12641)
## HTTP/2 frames while in the connection pool kill reuse
## Support DANE
[DNS-Based Authentication of Named Entities
-(DANE)](https://www.rfc-editor.org/rfc/rfc6698.txt) is a way to provide SSL
-keys and certs over DNS using DNSSEC as an alternative to the CA model.
+(DANE)](https://datatracker.ietf.org/doc/html/rfc6698) is a way to provide
+SSL keys and certs over DNS using DNSSEC as an alternative to the CA model.
A patch was posted on March 7 2013
(https://curl.se/mail/lib-2013-03/0075.html) but it took too simple an
approach.
The Uniform Resource Locator format is how you specify the address of a
particular resource on the Internet. You know these; you have seen URLs like
- https://curl.se or https://example.com a million times. RFC 3986 is the
+ https://curl.se/ or https://example.com/ a million times. RFC 3986 is the
canonical spec. The formal name is not URL, it is **URI**.
## Host
issues a GET request to the server and receives the document it asked for.
If you issue the command line
- curl https://curl.se
+ curl https://curl.se/
you get a webpage returned in your terminal window: the entire HTML document
that this URL identifies.
<!-- Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al. -->
<!-- SPDX-License-Identifier: curl -->
# WWW
-https://curl.se
+https://curl.se/
#include <curl/curl.h>
static const char *urls[] = {
- "https://www.microsoft.com",
- "https://opensource.org",
- "https://www.google.com",
- "https://www.yahoo.com",
- "https://www.ibm.com",
- "https://www.mysql.com",
- "https://www.oracle.com",
- "https://www.ripe.net",
- "https://www.iana.org",
- "https://www.amazon.com",
- "https://www.netcraft.com",
- "https://www.heise.de",
- "https://www.chip.de",
- "https://www.ca.com",
- "https://www.cnet.com",
- "https://www.mozilla.org",
- "https://www.cnn.com",
- "https://www.wikipedia.org",
- "https://www.dell.com",
- "https://www.hp.com",
- "https://www.cert.org",
- "https://www.mit.edu",
- "https://www.nist.gov",
- "https://www.ebay.com",
- "https://www.playstation.com",
- "https://www.uefa.com",
- "https://www.ieee.org",
- "https://www.apple.com",
- "https://www.symantec.com",
- "https://www.zdnet.com",
+ "https://www.microsoft.com/",
+ "https://opensource.org/",
+ "https://www.google.com/",
+ "https://www.yahoo.com/",
+ "https://www.ibm.com/",
+ "https://www.mysql.com/",
+ "https://www.oracle.com/",
+ "https://www.ripe.net/",
+ "https://www.iana.org/",
+ "https://www.amazon.com/",
+ "https://www.netcraft.com/",
+ "https://www.heise.de/",
+ "https://www.chip.de/",
+ "https://www.ca.com/",
+ "https://www.cnet.com/",
+ "https://www.mozilla.org/",
+ "https://www.cnn.com/",
+ "https://www.wikipedia.org/",
+ "https://www.dell.com/",
+ "https://www.hp.com/",
+ "https://www.cert.org/",
+ "https://www.mit.edu/",
+ "https://www.nist.gov/",
+ "https://www.ebay.com/",
+ "https://www.playstation.com/",
+ "https://www.uefa.com/",
+ "https://www.ieee.org/",
+ "https://www.apple.com/",
+ "https://www.symantec.com/",
+ "https://www.zdnet.com/",
"https://www.fujitsu.com/global/",
- "https://www.supermicro.com",
- "https://www.hotmail.com",
- "https://www.ietf.org",
- "https://www.bbc.co.uk",
- "https://news.google.com",
- "https://www.foxnews.com",
- "https://www.msn.com",
- "https://www.wired.com",
- "https://www.sky.com",
- "https://www.usatoday.com",
- "https://www.cbs.com",
+ "https://www.supermicro.com/",
+ "https://www.hotmail.com/",
+ "https://www.ietf.org/",
+ "https://www.bbc.co.uk/",
+ "https://news.google.com/",
+ "https://www.foxnews.com/",
+ "https://www.msn.com/",
+ "https://www.wired.com/",
+ "https://www.sky.com/",
+ "https://www.usatoday.com/",
+ "https://www.cbs.com/",
"https://www.nbc.com/",
- "https://slashdot.org",
- "https://www.informationweek.com",
- "https://apache.org",
- "https://www.un.org",
+ "https://slashdot.org/",
+ "https://www.informationweek.com/",
+ "https://apache.org/",
+ "https://www.un.org/",
};
#define MAX_PARALLEL 10 /* number of simultaneous transfers */
static int max_requests = 500;
static size_t max_link_per_page = 5;
static int follow_relative_links = 0;
-static const char *start_page = "https://www.reuters.com";
+static const char *start_page = "https://www.reuters.com/";
static int pending_interrupt = 0;
static void sighandler(int dummy)
curl = curl_easy_init();
if(curl) {
const char *urls[] = {
- "https://example.com",
- "https://curl.se",
+ "https://example.com/",
+ "https://curl.se/",
"https://www.example/",
NULL /* end of list */
};
goto error;
/* what call to write: */
- curl_easy_setopt(curl, CURLOPT_URL, "HTTPS://secure.site.example");
+ curl_easy_setopt(curl, CURLOPT_URL, "https://secure.site.example/");
curl_easy_setopt(curl, CURLOPT_HEADERDATA, headerfile);
#ifdef USE_ENGINE
of an HTTP/2 request, invoke curl with:
```
-> curl -v --trace-config ids,time,http/2 https://curl.se
+> curl -v --trace-config ids,time,http/2 https://curl.se/
```
This gives you trace output with time information, transfer+connection ids
looks like this:
```
-* create connection for http://curl.se
+* create connection for http://curl.se/
conn[curl.se] --> SETUP[TCP] --> HAPPY-EYEBALLS --> NULL
* start connect
conn[curl.se] --> SETUP[TCP] --> HAPPY-EYEBALLS --> NULL
The `HAPPY-EYEBALLS` filter, on the other hand, stays focused on its side of the problem. We can also use it to make other types of connections by giving it another filter type to try, for example to get happy eyeballing for QUIC:
```
-* create connection for --http3-only https://curl.se
+* create connection for --http3-only https://curl.se/
conn[curl.se] --> SETUP[QUIC] --> HAPPY-EYEBALLS --> NULL
* start connect
conn[curl.se] --> SETUP[QUIC] --> HAPPY-EYEBALLS --> NULL
shall be attempted:
```
-* create connection for --http3 https://curl.se
+* create connection for --http3 https://curl.se/
conn[curl.se] --> HTTPS-CONNECT --> NULL
* start connect
conn[curl.se] --> HTTPS-CONNECT --> NULL
to drown in output. The newly introduced *connection filters* allow one to
dynamically increase log verbosity for a particular *filter type*. Example:
- CURL_DEBUG=ssl curl -v https://curl.se
+ CURL_DEBUG=ssl curl -v https://curl.se/
makes the `ssl` connection filter log more details. One may do that for
every filter type and also use a combination of names, separated by `,` or
space.
- CURL_DEBUG=ssl,http/2 curl -v https://curl.se
+ CURL_DEBUG=ssl,http/2 curl -v https://curl.se/
The order of filter type names is not relevant. Names used here are
case insensitive. Note that these names are implementation internals and
static Curl_send rtmp_send;
/*
- * RTMP protocol handler.h, based on https://rtmpdump.mplayerhq.hu
+ * RTMP protocol handler, based on https://rtmpdump.mplayerhq.hu/
*/
const struct Curl_handler Curl_handler_rtmp = {
[gnv.common_src]curl_*_original_src.bck is the original source of the curl kit
as provided by the curl project. [gnv.vms_src]curl-*_vms_src.bck, if present,
has the OpenVMS-specific files used for building that are not yet in
-the curl source kits for that release distributed https://curl.se
+the curl source kits for that release distributed from https://curl.se/
These backup savesets should be restored to different directory trees on
ODS-5 volumes that are referenced by concealed rooted logical names.
'https://curl.se/rfc/rfc2255.txt' => 1,
'https://curl.se/sponsors.html' => 1,
'https://curl.se/support.html' => 1,
- 'https://curl.se/windows' => 1,
'https://curl.se/windows/' => 1,
'https://testclutch.curl.se/' => 1,
'https://github.com/curl/curl-fuzzer' => 1,
'https://github.com/curl/curl-www' => 1,
- 'https://github.com/curl/curl.git' => 1,
'https://github.com/curl/curl/wcurl' => 1,
);
my %flink;
# list all files to scan for links
-my @files=`git ls-files docs src lib scripts`;
+my @files=`git ls-files docs include lib scripts src`;
sub storelink {
my ($f, $line, $link) = @_;
if($link =~ /^(https|http):/) {
if($whitelist{$link}) {
#print "-- whitelisted: $link\n";
+ $whitelist{$link}++;
}
# example.com is just example
elsif($link =~ /^https:\/\/(.*)example.(com|org|net)/) {
# comma, question mark, colon, closing parenthesis, backslash,
# closing angle bracket, whitespace, pipe, backtick, semicolon
elsif(/(https:\/\/[a-z0-9.\/:%_+@-]+[^."'*\#,?:\)> \t|`;\\])/i) {
- #print "RAW ";
+ #print "RAW '$_'\n";
storelink($f, $line, $1);
}
$line++;
}
}
-#for my $u (sort keys %url) {
-# print "$u\n";
-#}
-#exit;
+for my $u (sort keys %whitelist) {
+  if($whitelist{$u} == 1) {
+    print "warning: unused whitelist entry: '$u'\n";
+  }
+}
my $error;
my @errlist;