Applies the patch from the previous commit to the rsync code.
Also adds documentation.
40. [`--http.transfer-timeout`](#--httptransfer-timeout)
41. [`--http.low-speed-limit`](#--httplow-speed-limit)
41. [`--http.low-speed-time`](#--httplow-speed-time)
+ 41. [`--http.max-file-size`](#--httpmax-file-size)
42. [`--http.ca-path`](#--httpca-path)
43. [`--output.roa`](#--outputroa)
44. [`--output.bgpsec`](#--outputbgpsec)
[--http.transfer-timeout=<unsigned integer>]
[--http.low-speed-limit=<unsigned integer>]
[--http.low-speed-time=<unsigned integer>]
+ [--http.max-file-size=<unsigned integer>]
[--http.ca-path=<directory>]
[--log.enabled=true|false]
[--log.output=syslog|console]
- **Type:** Integer
- **Availability:** `argv` and JSON
-- **Default:** 2
+- **Default:** 0
- **Range:** 0--[`UINT_MAX`](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html)
-Maximum number of retries whenever there's an error requesting an HTTP URI.
-
-A value of **0** means **no retries**.
+Number of additional HTTP requests after a failed attempt.
-Whenever is necessary to request an HTTP URI, the validator will try the request at least once. If there was an error requesting the URI, the validator will retry at most `--http.retry.count` times to fetch the file, waiting [`--http.retry.interval`](#--httpretryinterval) seconds between each retry.
+If a transient error is returned when Fort tries to perform an HTTP transfer, it will retry this number of times before giving up. Setting it to 0 (the default) disables retries. A "transient error" is a timeout, an HTTP 408 response code, or an HTTP 5xx response code.
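The semantics above boil down to a small loop: one mandatory attempt, then up to `--http.retry.count` extra attempts while the error stays transient. A minimal standalone sketch of that loop (the `fetch_result` names and the simulated attempt outcomes are illustrative, not Fort's actual code):

```c
#include <stddef.h>

/*
 * Illustrative result codes; Fort's real error codes differ.
 * FETCH_TRANSIENT stands for a timeout, an HTTP 408, or an HTTP 5xx.
 */
enum fetch_result { FETCH_OK, FETCH_TRANSIENT, FETCH_FATAL };

/*
 * One mandatory attempt, plus up to retry_count retries, stopping
 * early on success or on a permanent error. attempt_results simulates
 * the outcome of each network attempt.
 */
static enum fetch_result
fetch_with_retries(enum fetch_result const *attempt_results,
    unsigned int retry_count)
{
	enum fetch_result res = FETCH_FATAL;
	unsigned int attempt;

	for (attempt = 0; attempt <= retry_count; attempt++) {
		res = attempt_results[attempt];
		if (res != FETCH_TRANSIENT)
			break;
	}

	return res;
}
```

With `--http.retry.count=0` the loop runs exactly once, which matches the new default.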
### `--http.retry.interval`
Timeout (in seconds) for the connect phase.
-Whenever an HTTP connection will try to be established, the validator will wait a maximum of `http.connect-timeout` for the peer to respond to the connection request; if the timeout is reached, the connection attempt will be ceased.
+Whenever an HTTP connection is attempted, the validator will wait at most `http.connect-timeout` seconds for the peer to respond to the connection request; if the timeout is reached, the connection attempt will be aborted.
The value specified (either by the argument or the default value) is utilized in libcurl's option [CURLOPT_CONNECTTIMEOUT](https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html).
See [`--http.low-speed-limit`](#--httplow-speed-limit).
+### `--http.max-file-size`
+
+- **Type:** Integer
+- **Availability:** `argv` and JSON
+- **Default:** 10,000,000 (10 Megabytes)
+- **Range:** 0--2,000,000,000 (2 Gigabytes)
+
+The maximum size (in bytes) that files are allowed to reach during HTTP transfers. Files that exceed this limit are dropped, either early (through [CURLOPT_MAXFILESIZE](https://curl.haxx.se/libcurl/c/CURLOPT_MAXFILESIZE.html)) or as they hit the limit (when the file size is not known prior to download).
+
+This is intended to prevent malicious RPKI repositories from stalling Fort.
+
+As of 2021-09-20, the largest legitimate file I found in the repositories was approx. 1120 kilobytes.
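When the server does not announce the file size up front, `CURLOPT_MAXFILESIZE` cannot act early, so the limit has to be enforced as bytes arrive, from the write callback: libcurl aborts the transfer (with `CURLE_WRITE_ERROR`) when the callback returns a count different from the one it was handed. A standalone sketch of that enforcement (the struct and names are illustrative, not Fort's actual callback):

```c
#include <stddef.h>
#include <stdio.h>

/* Running state for one transfer; limit mirrors --http.max-file-size. */
struct transfer_ctx {
	size_t received;	/* bytes accepted so far */
	size_t limit;		/* maximum allowed file size */
	FILE *dst;		/* destination file */
};

/*
 * Write callback in CURLOPT_WRITEFUNCTION's shape. Returning a count
 * different from size * nmemb makes libcurl abort the transfer.
 */
static size_t
write_cb(void *data, size_t size, size_t nmemb, void *arg)
{
	struct transfer_ctx *ctx = arg;
	size_t total = size * nmemb;

	if (total > ctx->limit - ctx->received)
		return 0; /* file grew past the limit; abort */

	ctx->received += total;
	return fwrite(data, size, nmemb, ctx->dst);
}
```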
+
### `--http.ca-path`
- **Type:** String (Path to directory)
- **Type:** Integer
- **Availability:** `argv` and JSON
-- **Default:** 2
+- **Default:** 0
- **Range:** 0--[`UINT_MAX`](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html)
Maximum number of retries whenever there's an error executing an RSYNC.
"<a href="#--httpuser-agent">user-agent</a>": "{{ page.command }}/{{ site.fort-latest-version }}",
"<a href="#--httpconnect-timeout">connect-timeout</a>": 30,
"<a href="#--httptransfer-timeout">transfer-timeout</a>": 0,
- "<a href="#--httpidle-timeout">idle-timeout</a>": 15,
+ "<a href="#--httplow-speed-limit">low-speed-limit</a>": 30,
+ "<a href="#--httplow-speed-time">low-speed-time</a>": 10,
+ "<a href="#--httpmax-file-size">max-file-size</a>": 10000000,
"<a href="#--httpca-path">ca-path</a>": "/usr/local/ssl/certs"
},
- **Type:** String array
- **Availability:** JSON only
-- **Default:** `[ "--recursive", "--delete", "--times", "--contimeout=20", "--timeout=15", "$REMOTE", "$LOCAL" ]`
+- **Default:** `[ "--recursive", "--delete", "--times", "--contimeout=20", "--timeout=15", "--max-size", "$HTTP_MAX_FILE_SIZE", "$REMOTE", "$LOCAL" ]`
Arguments needed by [`rsync.program`](#rsyncprogram) to perform a recursive rsync.
-Fort will replace `"$REMOTE"` with the remote URL it needs to download, and `"$LOCAL"` with the target local directory where the file is supposed to be dropped.
+Fort will replace `"$REMOTE"` with the remote URL it needs to download, `"$LOCAL"` with the target local directory where the file is supposed to be dropped, and `"$HTTP_MAX_FILE_SIZE"` with [`--http.max-file-size`](#--httpmax-file-size).
### rsync.arguments-flat
- **Type:** String array
- **Availability:** JSON only
-- **Default:** `[ "--times", "--contimeout=20", "--timeout=15", "--dirs", "$REMOTE", "$LOCAL" ]`
+- **Default:** `[ "--times", "--contimeout=20", "--timeout=15", "--max-size", "$HTTP_MAX_FILE_SIZE", "--dirs", "$REMOTE", "$LOCAL" ]`
Arguments needed by [`rsync.program`](#rsyncprogram) to perform a single-file rsync.
-Fort will replace `"$REMOTE"` with the remote URL it needs to download, and `"$LOCAL"` with the target local directory where the file is supposed to be dropped.
+Fort will replace `"$REMOTE"` with the remote URL it needs to download, `"$LOCAL"` with the target local directory where the file is supposed to be dropped, and `"$HTTP_MAX_FILE_SIZE"` with [`--http.max-file-size`](#--httpmax-file-size).
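The placeholder expansion can be pictured as a per-token substitution over the argument array; a standalone sketch of one token's expansion (not Fort's actual implementation, which builds the argv dynamically, and which also handles `"$REMOTE"` and `"$LOCAL"`):

```c
#include <stdio.h>
#include <string.h>

/*
 * Returns the expansion of a single rsync argument token: the
 * "$HTTP_MAX_FILE_SIZE" placeholder becomes the configured byte limit
 * (rendered into buf); every other token passes through unchanged.
 */
static char const *
expand_arg(char const *arg, char *buf, size_t buflen, unsigned int max_size)
{
	if (strcmp(arg, "$HTTP_MAX_FILE_SIZE") == 0) {
		snprintf(buf, buflen, "%u", max_size);
		return buf;
	}
	return arg;
}
```

With the default configuration, the pair `"--max-size", "$HTTP_MAX_FILE_SIZE"` therefore reaches rsync as `--max-size 10000000`.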
### `incidences`
- **Availability:** `argv` and JSON
- **Default:** `root-except-ta`
->  This argument **will be DEPRECATED**. Use [`--rsync.strategy`](#--rsyncstrategy) or [`--rsync.enabled`](#--rsyncenabled) (if rsync is meant to be disabled) instead.
+>  This argument **is DEPRECATED**. Use [`--rsync.strategy`](#--rsyncstrategy) or [`--rsync.enabled`](#--rsyncenabled) (if rsync is meant to be disabled) instead.
rsync synchronization strategy. Commands the way rsync URLs are approached during downloads.
-Despite this argument will be deprecated, it still can be utilized. Its possible values and behaviour will be as listed here:
+Despite being deprecated, this argument can still be used. Its possible values and behavior are:
- `off`: will disable rsync execution, setting [`--rsync.enabled`](#--rsyncenabled) as `false`. So, using `--sync-strategy=off` will be the same as `--rsync.enabled=false`.
- `strict`: will be the same as `--rsync.strategy=strict`, see [`strict`](#strict).
- `root`: will be the same as `--rsync.strategy=root`, see [`root`](#root).
.name = "sync-strategy",
.type = &gt_sync_strategy,
.offset = offsetof(struct rpki_config, sync_strategy),
- .doc = "RSYNC download strategy. Will be deprecated, use 'rsync.strategy' instead.",
+ .doc = "RSYNC download strategy. Deprecated; use 'rsync.strategy' instead.",
}, {
.id = 2001,
.name = "shuffle-uris",
.name = "rrdp.enabled",
.type = &gt_rrdp_enabled,
.offset = offsetof(struct rpki_config, rrdp.enabled),
- .doc = "Enables RRDP execution. Will be deprecated, use 'http.enabled' instead.",
+ .doc = "Enables RRDP execution. Deprecated; use 'http.enabled' instead.",
}, {
.id = 10001,
.name = "rrdp.priority",
.type = &gt_rrdp_priority,
.offset = offsetof(struct rpki_config, rrdp.priority),
- .doc = "Priority of execution to fetch repositories files, a higher value means higher priority. Will be deprecated, use 'http.priority' instead.",
+ .doc = "Priority of execution to fetch repositories files, a higher value means higher priority. Deprecated; use 'http.priority' instead.",
.min = 0,
.max = 100,
}, {
.name = "rrdp.retry.count",
.type = &gt_rrdp_retry_count,
.offset = offsetof(struct rpki_config, rrdp.retry.count),
- .doc = "Maximum amount of retries whenever there's an error fetching RRDP files. Will be deprecated, use 'http.retry.count' instead.",
+ .doc = "Maximum amount of retries whenever there's an error fetching RRDP files. Deprecated; use 'http.retry.count' instead.",
.min = 0,
.max = UINT_MAX,
}, {
.name = "rrdp.retry.interval",
.type = &gt_rrdp_retry_interval,
.offset = offsetof(struct rpki_config, rrdp.retry.interval),
- .doc = "Period (in seconds) to wait between retries after an error ocurred fetching RRDP files. Will be deprecated, use 'http.retry.interval' instead.",
+ .doc = "Period (in seconds) to wait between retries after an error occurred fetching RRDP files. Deprecated; use 'http.retry.interval' instead.",
.min = 0,
.max = UINT_MAX,
},
"--times",
"--contimeout=20",
"--timeout=15",
+ "--max-size", "$HTTP_MAX_FILE_SIZE",
"$REMOTE",
"$LOCAL",
};
"--times",
"--contimeout=20",
"--timeout=15",
+ "--max-size", "$HTTP_MAX_FILE_SIZE",
"--dirs",
"$REMOTE",
"$LOCAL",
rpki_config.rsync.enabled = true;
rpki_config.rsync.priority = 50;
rpki_config.rsync.strategy = RSYNC_ROOT_EXCEPT_TA;
- rpki_config.rsync.retry.count = 2;
+ rpki_config.rsync.retry.count = 0;
rpki_config.rsync.retry.interval = 5;
rpki_config.rsync.program = strdup("rsync");
if (rpki_config.rsync.program == NULL) {
/* By default, has a higher priority than rsync */
rpki_config.http.enabled = true;
rpki_config.http.priority = 60;
- rpki_config.http.retry.count = 2;
+ rpki_config.http.retry.count = 0;
rpki_config.http.retry.interval = 5;
rpki_config.http.user_agent = strdup(PACKAGE_NAME "/" PACKAGE_VERSION);
if (rpki_config.http.user_agent == NULL) {
*/
void config_set_rsync_enabled(bool);
void config_set_http_enabled(bool);
-/* TODO (later) This will be deprecated */
+/* TODO (later) Deprecated */
void config_set_rrdp_enabled(bool);
/* TODO (later) Remove when sync-strategy is fully deprecated */
{
/* Warn about future deprecation */
if (strcmp(name, "rrdp.enabled") == 0)
- pr_op_warn("'rrdp.enabled' will be deprecated, use 'http.enabled' instead.");
+ pr_op_warn("'rrdp.enabled' is deprecated; use 'http.enabled' instead.");
config_set_rrdp_enabled(value);
config_set_http_enabled(value);
{
/* Warn about future deprecation */
if (strcmp(name, "rrdp.priority") == 0)
- pr_op_warn("'rrdp.priority' will be deprecated, use 'http.priority' instead.");
+ pr_op_warn("'rrdp.priority' is deprecated; use 'http.priority' instead.");
config_set_rrdp_priority(value);
config_set_http_priority(value);
{
/* Warn about future deprecation */
if (strcmp(name, "rrdp.retry.count") == 0)
- pr_op_warn("'rrdp.retry.count' will be deprecated, use 'http.retry.count' instead.");
+ pr_op_warn("'rrdp.retry.count' is deprecated; use 'http.retry.count' instead.");
config_set_rrdp_retry_count(value);
config_set_http_retry_count(value);
{
/* Warn about future deprecation */
if (strcmp(name, "rrdp.retry.interval") == 0)
- pr_op_warn("'rrdp.retry.interval' will be deprecated, use 'http.retry.interval' instead.");
+ pr_op_warn("'rrdp.retry.interval' is deprecated; use 'http.retry.interval' instead.");
config_set_rrdp_retry_interval(value);
config_set_http_retry_interval(value);
{
int error;
- pr_op_warn("'sync-strategy' will be deprecated.");
+ pr_op_warn("'sync-strategy' is deprecated.");
pr_op_warn("Use 'rsync.strategy' instead; or 'rsync.enabled=false' if you wish to use 'off' strategy.");
if (strcmp(str, RSYNC_VALUE_OFF) == 0) {
return 0; /* Ugh. See fread(3) */
}
- return fwrite(data, size, nmemb, userp);
+ return fwrite(data, size, nmemb, arg->dst);
}
static void
return 0;
}
- if (*response_code >= HTTP_BAD_REQUEST)
- return pr_val_err("Error requesting URL %s (received HTTP code %ld): %s",
- uri, *response_code, curl_err_string(handler, res));
-
- pr_val_err("Error requesting URL %s: %s", uri,
- curl_err_string(handler, res));
+ pr_val_err("Error requesting URL %s: %s. (HTTP code: %ld)", uri,
+ curl_err_string(handler, res), *response_code);
if (log_operation)
- pr_op_err("Error requesting URL %s: %s", uri,
- curl_err_string(handler, res));
+ pr_op_err("Error requesting URL %s: %s. (HTTP code: %ld)", uri,
+ curl_err_string(handler, res), *response_code);
- return EREQFAILED;
+ /*
+ * TODO (performance) FILESIZE_EXCEEDED is probably not the only error
+ * code that should cancel retries.
+ */
+ return (res == CURLE_FILESIZE_EXCEEDED) ? -EFBIG : EREQFAILED;
}
static void
break; /* Note: Usually happy path */
if (retries == config_get_http_retry_count()) {
- pr_val_warn("Max HTTP retries (%u) reached requesting for '%s', won't retry again.",
- retries, uri_get_global(uri));
+ if (retries > 0)
+ pr_val_warn("Max HTTP retries (%u) reached requesting for '%s', won't retry again.",
+ retries, uri_get_global(uri));
break;
}
pr_val_warn("Retrying HTTP request '%s' in %u seconds, %u attempts remaining.",
close(fds[1][0]);
}
+static int
+uint2string(unsigned int uint, char **result)
+{
+ char *str;
+ int str_len;
+
+ str_len = snprintf(NULL, 0, "%u", uint);
+ if (str_len < 0)
+ return pr_val_err("Cannot compute length of '%u' string: Unknown cause", uint);
+
+ str_len++; /* Null character */
+
+ str = malloc(str_len * sizeof(char));
+ if (str == NULL)
+ return pr_enomem();
+
+ str_len = snprintf(str, str_len, "%u", uint);
+ if (str_len < 0) {
+ free(str);
+ return pr_val_err("Cannot convert '%u' into a string: Unknown cause", uint);
+ }
+
+ *result = str;
+ return 0;
+}
+
static void
release_args(char **args, unsigned int size)
{
struct string_array const *config_args;
char **copy_args;
unsigned int i;
+ int error;
config_args = config_get_rsync_args(is_ta);
/*
copy_args[i + 1] = strdup(uri_get_global(uri));
else if (strcmp(config_args->array[i], "$LOCAL") == 0)
copy_args[i + 1] = strdup(uri_get_local(uri));
- else
+ else if (strcmp(config_args->array[i], "$HTTP_MAX_FILE_SIZE") == 0) {
+ error = uint2string(config_get_http_max_file_size(),
&copy_args[i + 1]);
+ if (error)
+ return error;
+ } else
copy_args[i + 1] = strdup(config_args->array[i]);
+
if (copy_args[i + 1] == NULL) {
release_args(copy_args, i);
return pr_enomem();
goto release_args;
if (retries == config_get_rsync_retry_count()) {
- pr_val_warn("Max RSYNC retries (%u) reached on '%s', won't retry again.",
- retries, uri_get_global(uri));
+ if (retries > 0)
+ pr_val_warn("Max RSYNC retries (%u) reached on '%s', won't retry again.",
+ retries, uri_get_global(uri));
error = EREQFAILED;
goto release_args;
}
return 0;
}
+static bool
+ia5_starts_with_dot_slash(IA5String_t *string)
+{
+ if (string->size < 2)
+ return false;
+ return string->buf[0] == '.' && string->buf[1] == '/';
+}
+
+/*
+ * Files referenced by manifests are not allowed to be anywhere other than the
+ * manifest's own directory.
+ *
+ * I think. RFC 6486:
+ *
+ * A manifest is a signed object that enumerates all the signed objects
+ * (files) in the repository publication point (directory) that are
+ * associated with an authority responsible for publishing at that
+ * publication point.
+ *
+ * This function checks @string does not contain slashes after the starting chain
+ * of "./"s.
+ */
+static int
+validate_current_directory(IA5String_t *string)
+{
+ IA5String_t clone;
+ size_t i;
+
+ if (string->size == 0)
+ return pr_val_err("Manifest contains a file with an empty string as a name.");
+ if (string->buf[0] == '/')
+ return pr_val_err("Manifest contains a file with an absolute URL.");
+
+ clone.buf = string->buf;
+ clone.size = string->size;
+ while (ia5_starts_with_dot_slash(&clone)) {
+ clone.buf += 2;
+ clone.size -= 2;
+ }
+
+ if (clone.size == 0)
+ return pr_val_err("Manifest contains a file that appears to be a directory.");
+
+ for (i = 0; i < clone.size; i++)
+ if (clone.buf[i] == '/')
+ return pr_val_err("Manifest contains a URL that references a separate repository publication point.");
+
+ return 0;
+}
+
/**
* Initializes @uri->global given manifest path @mft and its referenced file
* @ia5.
*
- * ie. if @mft is "rsync://a/b/c.mft" and @ia5 is "d/e/f.cer", @uri->global will
- * be "rsync://a/b/d/e/f.cer".
+ * ie. if @mft is "rsync://a/b/c.mft" and @ia5 is "d.cer", @uri->global will
+ * be "rsync://a/b/d.cer".
*
* Assumes that @mft is a "global" URL. (ie. extracted from rpki_uri.global.)
*/
return error;
}
+ error = validate_current_directory(ia5);
+ if (error)
+ return error;
+
slash_pos = strrchr(mft, '/');
if (slash_pos == NULL) {
joined = malloc(ia5->size + 1);
check_PROGRAMS += serial.test
check_PROGRAMS += tal.test
check_PROGRAMS += thread_pool.test
+check_PROGRAMS += uri.test
check_PROGRAMS += vcard.test
check_PROGRAMS += vrps.test
check_PROGRAMS += xml.test
thread_pool_test_SOURCES = thread_pool_test.c
thread_pool_test_LDADD = ${MY_LDADD}
+uri_test_SOURCES = uri_test.c
+uri_test_LDADD = ${MY_LDADD}
+
vcard_test_SOURCES = vcard_test.c
vcard_test_LDADD = ${MY_LDADD}
return NULL;
}
-START_TEST(rsync_load_normal)
-{
+#define CK_STR(uint, string) \
+ ck_assert_int_eq(0, uint2string(uint, &str)); \
+ ck_assert_str_eq(string, str); \
+ free(str);
+START_TEST(rsync_test_uint2string)
+{
+ char *str;
+
+ CK_STR(0, "0");
+ CK_STR(1, "1");
+ CK_STR(9, "9");
+ CK_STR(10, "10");
+ CK_STR(100, "100");
+ CK_STR(1000, "1000");
+ CK_STR(10000, "10000");
+ CK_STR(100000, "100000");
+ CK_STR(1000000, "1000000");
+ CK_STR(10000000, "10000000");
+ CK_STR(100000000, "100000000");
+ CK_STR(1000000000, "1000000000");
+ CK_STR(4294967295, "4294967295");
}
END_TEST
TCase *core, *prefix_equals, *uri_list, *test_get_prefix;
core = tcase_create("Core");
- tcase_add_test(core, rsync_load_normal);
+ tcase_add_test(core, rsync_test_uint2string);
prefix_equals = tcase_create("PrefixEquals");
tcase_add_test(prefix_equals, rsync_test_prefix_equals);
switch (vrp->addr_fam) {
case AF_INET:
- PR_DEBUG_MSG("%s asn%u IPv4", flags2str(flags), vrp->asn);
+ printf("%s asn%u IPv4\n", flags2str(flags), vrp->asn);
break;
case AF_INET6:
- PR_DEBUG_MSG("%s asn%u IPv6", flags2str(flags), vrp->asn);
+ printf("%s asn%u IPv6\n", flags2str(flags), vrp->asn);
break;
default:
- PR_DEBUG_MSG("%s asn%u Unknown", flags2str(flags), vrp->asn);
+ printf("%s asn%u Unknown\n", flags2str(flags), vrp->asn);
break;
}
*/
uint8_t pdu_type = pop_expected_pdu();
pr_op_info(" Server sent Router Key PDU.");
- PR_DEBUG_MSG("%s asn%u RK", flags2str(flags), router_key->as);
+ printf("%s asn%u RK\n", flags2str(flags), router_key->as);
ck_assert_msg(pdu_type == PDU_TYPE_ROUTER_KEY,
"Server sent a Router Key. Expected PDU type was %d.", pdu_type);
return 0;
/* From serial 1: Run and validate */
ck_assert_int_eq(0, handle_serial_query_pdu(0, &request));
- PR_DEBUG;
ck_assert_uint_eq(false, has_expected_pdus());
/* From serial 2: Init client request */
--- /dev/null
+#include <check.h>
+#include <errno.h>
+#include <stdint.h>
+
+#include "uri.c"
+#include "common.c"
+#include "log.c"
+#include "impersonator.c"
+
+static int
+test_validate(char const *src)
+{
+ uint8_t buffer[32];
+ IA5String_t dst;
+ unsigned int i;
+
+ dst.size = strlen(src);
+
+ memcpy(buffer, src, dst.size);
+ for (i = dst.size; i < 31; i++)
+ buffer[i] = '_';
+ buffer[31] = 0;
+
+ dst.buf = buffer;
+
+ return validate_current_directory(&dst);
+}
+
+START_TEST(check_validate_current_directory)
+{
+ ck_assert_int_eq(0, test_validate("file"));
+ ck_assert_int_eq(-EINVAL, test_validate(""));
+ ck_assert_int_eq(-EINVAL, test_validate("/file"));
+ ck_assert_int_eq(0, test_validate("./file"));
+ ck_assert_int_eq(0, test_validate("././file"));
+ ck_assert_int_eq(0, test_validate("./././././file"));
+ ck_assert_int_eq(-EINVAL, test_validate("./././././"));
+ ck_assert_int_eq(-EINVAL, test_validate("./././././file/"));
+ ck_assert_int_eq(-EINVAL, test_validate("./././././file/b"));
+}
+END_TEST
+
+Suite *uri_load_suite(void)
+{
+ Suite *suite;
+ TCase *core;
+
+ core = tcase_create("Core");
+ tcase_add_test(core, check_validate_current_directory);
+
+ suite = suite_create("URI checking");
+ suite_add_tcase(suite, core);
+ return suite;
+}
+
+int main(void)
+{
+ Suite *suite;
+ SRunner *runner;
+ int tests_failed;
+
+ suite = uri_load_suite();
+
+ runner = srunner_create(suite);
+ srunner_run_all(runner, CK_NORMAL);
+ tests_failed = srunner_ntests_failed(runner);
+ srunner_free(runner);
+
+ return (tests_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
+}