- Fort SHOULD die as soon as it realizes the VRP table is corrupted, as
we should not send garbage to the routers.
- Also, I'm not entirely sure the code would not crash later anyway,
since the table is, in fact, corrupted.
- Plus, if it doesn't crash, there would be no core dump with which to
analyze the bug further.
2. Point bug output to the currently active bug report
I was hesitant to put this on main because it seemed like a performance
tank, but truth be told, I'm being a wuss. These table iterations are
nothing compared to the amount of time Fort has to spend downloading.
And it looks like I'm never going to find this bug with the stack trace
alone.
Does not fix #83 nor #89, but prevents the crash at least.
Job Snijders [Sat, 21 Jan 2023 11:43:02 +0000 (11:43 +0000)]
Ensure X509 extensions are hashed & cached, before deciding a cert is CA or EE
If X509_check_ca() fails to cache X509v3 extension values, the return
value may be incorrect, leading to erroneously assuming a given certificate
is a CA or EE cert (while in reality it is the other, or neither).
This failure mode can arise because X509_check_ca() doesn't verify
whether libcrypto's (void)x509v3_cache_extensions(x) flipped the EXFLAG_INVALID
flag in x->ex_flags. Unfortunately, X509_check_ca() doesn't have a return code
to indicate an error, so this can't be fixed in libcrypto - the API is broken.
The workaround is to call X509_check_purpose(3) with a purpose argument
of -1 before calling X509_check_ca(); this ensures the X509v3 extensions
are cached.
Since X509_check_purpose() does have a return code to indicate errors, we can
use that to supplement X509_check_ca()'s shortcomings.
OpenBSD's rpki-client also uses the above approach.
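In sketch form, the call sequence looks like this (the wrapper name and
error handling are illustrative, not rpki-client's actual code):

    #include <openssl/x509v3.h>

    /*
     * X509_check_purpose() with purpose -1 forces the X509v3 extension
     * cache to be populated, and (unlike X509_check_ca()) reports
     * failure through its return value.
     */
    static int
    is_ca_cert(X509 *x, int *is_ca)
    {
        if (X509_check_purpose(x, -1, 0) <= 0)
            return -1; /* caching failed; the CA bit can't be trusted */
        *is_ca = X509_check_ca(x);
        return 0;
    }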
backtrace() is a glibc-only feature, and some systems (such as Alpine,
which uses musl) do not ship glibc.
It seems one solution is to rely on community ports, but that's not the
safest route, so I imagine it'd be best to offload the decision to the
user.
Instead, if backtrace() is not available, just delete stack traces from
the binary. It's going to be a pain to debug, but that's the world we
live in, I guess.
Turns libexecinfo into an optional dependency. Fixes #87.
Also, the commit contains a review and update of the documentation's
Alpine dependency list. There was a lot of fat in there.
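The guard amounts to something like this (HAVE_BACKTRACE is an assumed
autoconf-style macro name; Fort's actual macro may differ):

    #include <stdio.h>

    #ifdef HAVE_BACKTRACE
    #include <execinfo.h>
    #include <unistd.h>

    /* glibc (or libexecinfo) is available: print a real stack trace. */
    static void
    print_stack_trace(void)
    {
        void *frames[64];
        int n;

        n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
    }

    #else

    /* No backtrace() on this libc; degrade gracefully. */
    static void
    print_stack_trace(void)
    {
        fprintf(stderr, "(Stack traces are not available in this build.)\n");
    }

    #endif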
Certificate: Remove subject name uniqueness validation
RFC 6487:
> An issuer SHOULD use a different subject name if the subject's key
> pair has changed (i.e., when the CA issues a certificate as part of
> re-keying the subject.)
Fort's implementation was problematic. The code was comparing the
certificate's subject name and public key to siblings that had
potentially not been validated yet. It seems to me this would make it
possible for attackers to crash FORT (by posting invalid objects) or to
invalidate legitimate objects (by publishing siblings that contained
conflicting subject names and public keys, without having to worry about
the rest of the fields).
This would be somewhat difficult to fix. I asked on the mailing list and
Discord ("RPKI Community"), and it seems the consensus is "don't
validate it." Subject Names don't really matter that much, because
RPKI's primary concern is resource ownership, not identity. Furthermore,
I'm not convinced that chopping branches off the tree because of a
clumsy key rollover is a good idea.
On the other hand, it looks like the notfatal hash table API was being
used incorrectly. HASH_ADD_KEYPTR can OOM, but `errno` wasn't being
caught.
Fixing this is nontrivial, however, because strange `reqs_error`
functions are in the way, and that's a spaghetti I decided to avoid.
Instead, I converted HASH_ADD_KEYPTR usage to the fatal hash table API.
That's the future according to #40, anyway.
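For reference, the fatal flavor boils down to overriding uthash's
out-of-memory hook (pr_crit() is an assumed print-and-abort helper, not
necessarily Fort's exact API):

    /* Assumed logging helper; prints the message and aborts. */
    void pr_crit(char const *fmt, ...);

    /* Route uthash's OOM hook to the fatal logger *before* including
     * uthash.h, so HASH_ADD_KEYPTR can never fail silently. */
    #define uthash_fatal(msg) pr_crit("uthash: %s", msg)

    #include <string.h>
    #include "uthash.h"

    struct node {
        char const *key;
        UT_hash_handle hh;
    };

    static struct node *table = NULL;

    static void
    add_node(struct node *n)
    {
        /* With the fatal flavor there is no errno to forget to check;
         * an OOM simply never returns. */
        HASH_ADD_KEYPTR(hh, table, n->key, strlen(n->key), n);
    }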
I don't think this has anything to do with #83, though.
When the file didn't need to be downloaded, because the If-Modified-Since
(IMS) header did its job, there's nothing to delete, so Fort shouldn't be
complaining.
Like the previous commit, this is not a great solution, because IMS is
not the only trigger of file deletes, and the error message might be
helpful in other cases. Then again, I don't really agree with this eager
repository cleaning technique; it complicates debugging.
The proper solution is a WIP in the rrdp-refactor branch.
Lots of error messages were referring to the wrong file, and several of
them printed the correct file manually as what I can only describe as a
quick workaround.
It's not perfect; not all the RRDP code has been patched. That'll have
to wait until the deep RRDP refactor.
HTTP: Warn if a download exceeds 50% of the file size limit
Requested by Ties de Kock:
> Since some RPs now includes an upper limit on object size (some use
> 2MB if I recall correctly) I would appreciate a warning if an object
> goes over "a large fraction" of this limit (and a sample of the
> warning in the changelog and metrics if possible) - so people know
> what they need to alert on. In this situation operators can monitor
> for "natural growth" of an object and intervene, while the case that
> this check prevents (maliciously large objects) is still covered.
>
> The largest object I could find in the wild is 1.2MB (APNIC AS0 ROA).
> The RIPE NCC's largest object is smaller at the moment (but the CRL
> grows quickly if we do member CA keyrolls - since it adds all object
> on it).
>
> In summary:
>
> - I would recommend a warning (and preferably a metric) when an object
> of 50% of the object size limit is encountered.
> - I would like it if the hard limit is "safe" - especially CRLs can
> grow in some cases.
The metric will be added later, as part of #50. The warning reads, e.g.:
File size exceeds 50% of the configured limit (10/20 bytes).
The 50% threshold is hardcoded at the moment.
Notice that this is an HTTP-only patch. rsync does not warn.
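For illustration, the check can live in libcurl's write callback; this
sketch mirrors the warning above (the context struct is made up):

    #include <stdio.h>
    #include <curl/curl.h>

    struct dl_ctx {
        size_t written; /* bytes received so far */
        size_t limit;   /* configured file size limit */
        int warned;     /* warn only once per download */
    };

    /* CURLOPT_WRITEFUNCTION callback: count bytes, warn at 50%. */
    static size_t
    write_cb(char *data, size_t size, size_t nmemb, void *userp)
    {
        struct dl_ctx *ctx = userp;
        size_t total = size * nmemb;

        ctx->written += total;
        if (!ctx->warned && ctx->written > ctx->limit / 2) {
            fprintf(stderr,
                "File size exceeds 50%% of the configured limit (%zu/%zu bytes).\n",
                ctx->written, ctx->limit);
            ctx->warned = 1;
        }

        /* ... actually store `data` somewhere ... */
        return total;
    }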
Helps the code review. Some structs and functions (such as
struct delta_router_key and router_key_print()) were bleeding into
mostly unrelated modules, and there were a couple of data types (struct
v4_address and struct v6_address) that were only used once, and induced
needless copying.
Philip Paeps [Wed, 27 Oct 2021 07:41:28 +0000 (15:41 +0800)]
Documentation: update FreeBSD build instructions
While binary packages are available, some people like to build from
source. Update the instructions for building from a port, a release
tarball or a Git checkout.
- Increase default remote object file size limit, because RRDP snapshot
files can be very large. (Current: ~148 MB, double on key rollover)
- Treat HTTP redirects as errors.
- Before: Redirects were treated as successes, but Fort didn't
bother to follow the redirect. As a result, the code was either
not finding the file, or finding an empty file.
- Now: Redirects are treated as errors; see the sketch after this
list. (Not sure if I'm meant to do something else here; curl doesn't
follow redirects automatically, and the RFCs are silent. In
particular, I'm not in the mood to deal with redirect loops and
whatnot.)
- Remove ROA database clone operation in the SLURM code.
According to the code, it was working on a clone "so that updates can
be reverted" (on error, I presume.) But it was never reverting them.
- Refactor SLURM cleanup code.
I don't remember this one very well. Starting from the clone removal,
I got distracted with some inconsistent cleanups. Patched a buggy
cleanup somewhere.
There's still more I want to do to the SLURM code; it looks somewhat
slow.
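Here's the sketch promised in the redirect bullet. The decision logic is
mine, but the curl calls are real:

    #include <curl/curl.h>

    /*
     * With CURLOPT_FOLLOWLOCATION left disabled, a 3xx response
     * "succeeds" from curl's point of view, so the response code has
     * to be checked explicitly and converted into an error.
     */
    static int
    check_response(CURL *curl)
    {
        long code;

        if (curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code) != CURLE_OK)
            return -1;
        /* 304 (Not Modified) is the IMS success case, not a redirect. */
        if (code >= 300 && code < 400 && code != 304)
            return -1; /* redirect: treat as download failure */
        return 0;
    }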
It seems the #58 and #59 problem is a stray defer separator pop.
The comment above x509stack_cancel() clearly states that the function
should only be called shortly after an x509stack_push(), but there's one
call in certificate_traverse() that doesn't follow this rule.
Removing this x509stack_cancel() seems to prevent the crash. I'm still
investigating the original intent of this code.
HTTP: Patch CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME
Likely due to some misunderstanding, Fort was managing both of these
variables using one command-line argument (--http.idle-timeout). This
unnecessarily limited the configurability of minimum transfer speeds for
HTTP connections.
--http.idle-timeout is now deprecated, and has been replaced by
--http.low-speed-limit and --http.low-speed-time, which correlate
verbatim to the corresponding curl arguments (CURLOPT_LOW_SPEED_LIMIT
and CURLOPT_LOW_SPEED_TIME).
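In sketch form (the variable names stand for the parsed values of the
new flags):

    #include <curl/curl.h>

    static void
    set_speed_limits(CURL *curl, long low_speed_limit, long low_speed_time)
    {
        /* Abort if fewer than `low_speed_limit` bytes per second... */
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, low_speed_limit);
        /* ...are transferred for `low_speed_time` consecutive seconds. */
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, low_speed_time);
    }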
It was allocating the deltas array twice, for seemingly no reason.
Also, the array slots were pointers, and the two arrays pointed to
different instances of the same objects. For seemingly no reason.
Now there's only one array, and it stores the objects directly.
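Roughly, the change amounts to this (struct and field names are
illustrative):

    #include <stddef.h>
    #include <stdint.h>

    struct delta {
        uint32_t serial;
        /* ... */
    };

    /* Before: two parallel arrays of pointers, each pointing at its
     * own separately allocated copies of the same deltas. */
    struct deltas_db_old {
        struct delta **shallow1;
        struct delta **shallow2;
        size_t len;
    };

    /* After: one array that stores the objects directly. */
    struct deltas_db {
        struct delta *array;
        size_t len;
    };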
1. Update the TAL URLs. (The old ones were very obsolete.)
2. Add --init-as0-tals. (Used to download the ASN0 TALs.)
3. Deprecate and no-op --init-locations. (It didn't make sense;
if the user needs a different URL, they can just wget it instead.)
4. Deprecate setup_fort.sh. (Seems to be redundant. --init-tals
already takes care of downloading TALs.)
In truth, this data structure should technically be a linked list,
but I'm not sure sacrificing cache locality for faster removal is a
worthwhile tradeoff.
RTR Server: thread-pool.server.max now refers to RTR requests
Apparently, there was a huge misunderstanding when the thread pool was
implemented.
The intended model was
> When the RTR server receives a request, it borrows a thread from the
> thread pool, and tasks it with the request.
Which is logical and a typical thread pool use case. However, what was
actually implemented was
> When the RTR server opens a connection, it borrows a thread from the
> thread pool, and tasks it with the whole connection.
So `thread-pool.server.max` was a hard limit for simultaneous RTR
clients (routers), but now it's just a limit to simultaneous RTR
requests. (Surplus requests will queue.) This is much less taxing to the
CPU when there are hundreds of clients.
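In sketch form, the new model looks something like this (none of these
names are Fort's actual API):

    /* Illustrative declarations only. */
    struct request;
    void thread_pool_push(void (*task)(void *), void *arg);
    void handle_request(void *arg);
    int read_request(int fd, struct request **req);

    /*
     * The main loop keeps ownership of the connection and borrows a
     * pool thread per request, so an idle client no longer pins a
     * thread for the lifetime of its connection.
     */
    static void
    serve_connection(int fd)
    {
        struct request *req;

        while (read_request(fd, &req) == 0)
            thread_pool_push(handle_request, req);
    }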
Thanks to Mark Tinka for basically spelling this out to me.
-----------------------
Actually, this commit is an almost entire rewrite of the RTR server
core. Here's a (possibly incomplete) list of other problems I had to fix
in the process:
== Problem 1 ==
sockaddr2str() was returning a pointer to invalid memory on success.
Was "keep track of the clients, expire deltas when all clients outgrow
them." I see two problems with that:
1. It'll lead to bad performance if a client misbehaves by not
maintaining the connection. (i.e., the server will have to fall back on
too many cache resets.)
2. It might keep the deltas forever if a client bugs out without killing
the connection.
The new policy is "keep deltas for server.deltas.lifetime iterations."
("server.deltas.lifetime" is a new configuration argument.)
== Problem 3 ==
Serials weren't being compared according to RFC 1982 serial arithmetic.
This was going to cause mayhem when the integer wrapped.
(Though Fort always starts at 1, and serials are 32-bit unsigned
integers, so this wasn't going to be a problem for a very long time.)
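For reference, RFC 1982 comparison for 32-bit serials is a one-liner:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * RFC 1982 serial arithmetic: s1 precedes s2 when the wrapping
     * distance from s1 to s2 is in (0, 2^31). For example,
     * serial_lt(0xFFFFFFFF, 1) is true, even though 1 < 0xFFFFFFFF
     * as a plain integer.
     */
    static bool
    serial_lt(uint32_t s1, uint32_t s2)
    {
        return s1 != s2 && (uint32_t)(s2 - s1) < 0x80000000u;
    }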
== Problem 4 ==
The thread pool had an awkward termination bug. When threads were
suspended, they were meant to be ended through a pthread signal, but
when they were running, they were supposed to be terminated through
pthread_cancel(). (Because, since each client was assigned a thread,
they would spend most of their time sleeping.) These termination methods
don't play well with each other.
Apparently, threads waiting on a signal cannot be canceled, because of
this strange quirk from man 3 pthread_cond_wait:
> a side effect of acting upon a cancellation request while in a
> condition wait is that the mutex is (in effect) re-acquired before
> calling the first cancellation cleanup handler.
(So the first thread dies with the mutex locked, and no other threads
can be canceled because no one can ever lock the mutex again.)
And of course, you can't stop a server thread through a signal, because
they aren't listening to it; they're sleeping in wait for a request.
I still don't really know how I would fix this, but luckily, the problem
no longer exists since working threads are mapped to single requests,
and therefore no longer sleep. (For long periods of time, anyway.)
So always using the signal works fine.
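The signal-only shutdown is the classic flag-plus-broadcast pattern; a
minimal sketch (names are illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool stop = false;

    /* Worker side: wait under the lock, rechecking the stop flag. */
    static void *
    worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!stop) {
            /* ...pick up queued work here, if any... */
            pthread_cond_wait(&cond, &lock);
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* Shutdown side: flip the flag, wake everyone; no pthread_cancel(). */
    static void
    stop_workers(void)
    {
        pthread_mutex_lock(&lock);
        stop = true;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }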
It was pretty messy; I had to rewrite a good chunk of it.
== Problem 1 ==
It was discarding meaningful validation results when miscellaneous
errors prevented the deltas array from being built.
Deltas are optional; as long as Fort has the snapshot of the latest
tree, it doesn't technically need deltas. They speed up synchronization,
but in the worst case scenario, the RTR server can keep pushing Cache
Resets.
Severity: Warning. Memory allocation failures are the only eventuality
that might prevent the deltas array from being built.
== Problem 2 ==
The database was always keeping one serial's worth of obsolete deltas.
Cleaned up, saves a potentially large amount of memory.
Severity: Fine. Not a memory leak.
== Problem 3 ==
The code computed deltas even when there were no routers listening.
Routers are the only delta consumers, so there was no need to waste all
that time.
Severity: Fine; performance quirk.
== Problem 4 ==
I found an RTR client implementation (Cloudflare's rpki-rtr-client) that
hangs when the first serial is zero. Fort's first serial is now 1.
Severity: Warning. This is rpki-rtr-client's fault, but other client
implementations may be prone to the same bug. The new solution is more
future-proof.
== Problem 5 ==
It seems it wasn't cleaning the deltas array when all routers were known
to have bogus serials. This was the code:
    /* Its the first element or reached end, nothing to purge */
    if (group == state.deltas.array ||
        (group - state.deltas.array) == state.deltas.len)
        return 0;
If you reached the end of the deltas array, and the minimum router
serial is larger than all the array serials, then all deltas are
useless; you're supposed to purge all of them.
Severity: Fine. It was pretty hard to trigger, and not a memory leak.
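A hypothetical corrected version, mirroring the snippet above
(purge_deltas() is made up):

    /* It's the first element; every delta is still needed. */
    if (group == state.deltas.array)
        return 0;
    /* Otherwise purge everything before `group` -- all of it when
     * `group` reached the end, which is the case the old code missed. */
    return purge_deltas(&state.deltas, group - state.deltas.array);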
init: Remove SIGPIPE SIG_IGN override during RTR server startup
Because it's now redundant.
As discovered in #49, SIGPIPE is total bullshit in all circumstances, no
exceptions, so there's now a global SIGPIPE SIG_IGN override, rendering
this one useless.
I guess we now know why #49 never triggered on server mode.
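The global override itself is a one-liner:

    #include <signal.h>

    /* Ignore SIGPIPE process-wide: writes to dead peers then fail
     * with EPIPE instead of killing the process. */
    static void
    ignore_sigpipe(void)
    {
        signal(SIGPIPE, SIG_IGN);
    }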
I think I finally found #49. The signal is not SIGSYS; it's SIGPIPE.
That's why dorpauli was getting no core dumps.
Apparently, this is a typical newbie trap for libcurl users.
CURLOPT_NOSIGNAL is incapable of suppressing all SIGPIPEs, because some
of them are OS-generated.
I can't believe how dumb SIGPIPE has turned out to be. I/O functions
should return EPIPE, not interrupt the whole program with a handler that
defaults to "die silently."
These should be safe. They just print the signal number, the stack
trace, then restore the original behavior.
The only problem is I can't do the same with SIGKILL or SIGSTOP,
but I suppose if SIGKILL were the problem, the kernel would have left
something in the logs, and SIGSTOP would have left the process alive.
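A sketch of such a handler (the registration list is illustrative, and
printing the signal number is elided for brevity):

    #include <execinfo.h>
    #include <signal.h>
    #include <unistd.h>

    /*
     * Print the stack trace, restore the default disposition, then
     * re-raise, so the kernel still terminates the process (and dumps
     * core) as usual. backtrace_symbols_fd() is used because it
     * doesn't call malloc().
     */
    static void
    crash_handler(int sig)
    {
        void *frames[64];
        int n;

        n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);

        signal(sig, SIG_DFL);
        raise(sig);
    }

    /* Registration, e.g. during startup: */
    static void
    register_crash_handlers(void)
    {
        signal(SIGSEGV, crash_handler);
        signal(SIGFPE, crash_handler);
        /* SIGKILL and SIGSTOP cannot be caught, as noted above. */
    }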
ad841d50 was a mistake. It was never agreed in #40 that Fort should
shotgun blast its own face on the first ENOMEM, and even if it was, the
idea is preposterous. Memory allocation failures are neither programming
errors nor an excuse to leave all the routers hanging.
While there's some truth to the notion that a Fort memory leak (which
has been exhausting memory over time) could be temporarily amended by
killing Fort (and letting the OS clean it up), the argument completely
misses the reality that memory allocation failures could happen
regardless of the existence of a memory leak.
A memory leak is a valid reason to throw away the results of a current
validation run (as long as the admin is warned), but an existing
validation result and the RTR server must remain afloat.
Hypothesis: Something (which I haven't spotted yet) was causing the
main thread to skip its wait before the pool threads finished their
tasks. Maybe something to do with the ready signal again?
So the main thread returned early, which means pool threads were
silently suppressed by the OS. That explains the early terminations and
nonexistent stack traces.
If I keep finding crippling errors like this, I will definitely have to
purge the thread pool. It's turned out to be a fucking bug colony at
this point. I'm sick of it.
Way I see it, the root of the problem was the thread pool's control
code, which was too complicated for its own good. A surprisingly large
part of why it was overcomplicated was because it reinvented thread
joining.
So I simplified the control code by removing the detach property. Now
that the main thread joins the proper way, the validation code will not
be interrupted anymore.
This might well be the solution for #49. However, it bothers me that I
still don't have a reasonable explanation as to why the main thread
seemed to be skipping wait.
1. (error) Fix the --work-offline flag.
It has been unused since commit 85478ff30ebc029abb0ded48de5b557f52a758e0.
2. (performance) Remove redundant fopen() and fclose() during
valid_file_or_dir().
If stat() is used instead of fstat(), there's no need to open and
close the file. (See the sketch after this list.)
(Technically, it's no longer validating readability, but since the
validator downloads the files, read permission errors should be
extremely rare, and can be caught later.)
3. (fine) Remove return value from thread_pool_task_cb.
This wasn't a problem, but the return value was meaningless, and
no callers were using it.
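A sketch of point 2 (the signature is assumed; only the stat() idea
comes from the commit):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/stat.h>

    /*
     * Validate that `path` exists and is a regular file (or,
     * optionally, a directory), with a single stat() call -- no
     * fopen()/fclose() pair.
     */
    static int
    valid_file_or_dir(char const *path, bool allow_dir)
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return errno; /* ENOENT, EACCES, ... */
        if (S_ISREG(st.st_mode) || (allow_dir && S_ISDIR(st.st_mode)))
            return 0;
        return EINVAL; /* unexpected file type */
    }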
- Main validation loop: Remove some confusing, seemingly needless
wrapper functions.
- Libcurl: Catch lots of status codes properly
- Libcurl: Send proper data types to curl_easy_setopt()
(Argument types were not matching documented requirements.)
- RTR server: Reduce argument lists.
1. Merges the log and debug modules. I think their separation was the
reason why they forgot to add stack traces to syslog when they added
syslog to the project.
Not risking that mistake again.
2. Removes as many obstacles as possible from stack trace printing on
critical errors.
3. Add mutexes to logging. This should prevent messages from
interleaving when multiple threads log at once.
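Point 3 boils down to something like this sketch (log_msg() is
illustrative, not Fort's actual logging entry point):

    #include <pthread.h>
    #include <stdarg.h>
    #include <stdio.h>

    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hold the mutex for the whole message, so two threads can't
     * interleave their output mid-line. */
    static void
    log_msg(char const *fmt, ...)
    {
        va_list ap;

        pthread_mutex_lock(&log_lock);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        pthread_mutex_unlock(&log_lock);
    }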
1. (warning) The libcrypto error stack trace was always showing empty.
This was because of a bad counter.
2. (critical) Normal stack traces were only being printed in the
standard streams, never on syslog.
This is probably the reason why we don't have a proper error message
on #49. It's probably a segmentation fault.
Also a whole bunch of cleanup. The logging module had a bit of a
duplicate code problem.