Pavel Emelyanov [Wed, 7 Dec 2016 13:59:43 +0000 (16:59 +0300)]
sock_diag.7: New page documenting NETLINK_SOCK_DIAG interface
Co-authored-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Michael Kerrisk [Tue, 6 Dec 2016 15:23:33 +0000 (16:23 +0100)]
close.2: Further clarify how to treat an error return
Further clarify that an error return should be used only
for diagnostic or remedial purposes.
Lifting Linus's words freely from
http://lkml.iu.edu/hypermail/linux/kernel/0207.2/0409.html
Re: close return value (was Re: [ANNOUNCE] Ext3 vs Reiserfs benchmarks)
Date: Wed Jul 17 2002 - 12:43:57 EST
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Michael Kerrisk [Tue, 6 Dec 2016 15:03:51 +0000 (16:03 +0100)]
close.2: Other UNIX implementations also close the FD, even if reporting an error
Looking at some historical source code (mostly from [1]) suggests
that the "close() always closes regardless of error return"
behavior has a long history, predating even POSIX.1-1990.
For example, in SVR4 for x86 (from the file sysvr4.tar.bz2 at
[1]), we see the following:
    int
    close(uap, rvp)
            register struct closea *uap;
            rval_t *rvp;
    {
            file_t *fp;
            register int error;
In the above, getf() can return EBADF. The other errors are
returned by closef(), but the file descriptor is deallocated
by setf() regardless of any error.
A similar pattern seems to have been preserved into at least late
OpenSolaris days (verified from looking at the initial commit of
the illumos source code). There we find the following in
closeandsetf() (called by close()):
    error = closef(fp);
    setf(fd, newfp);
    return (error);
Looking at the code of closef() in AIX 4.1.3 suggests that, as
on Linux and FreeBSD, the open file is always released, regardless
of errors.
For Irix 6.5.5, I'm not sure (the code is not so easy to read
quickly); it may be that it does return errors while leaving the
FD open.
Michael Kerrisk [Tue, 6 Dec 2016 14:09:55 +0000 (15:09 +0100)]
close.2: Rework initial paragraph in NOTES on checking close() errors
As Daniel Wagner noted, saying on the one hand that failing
to check the return value of close() is a "serious error"
seems to be contradicted by the next paragraph, which notes
that the return value should be used for "just diagnostics".
Rework the text to resolve the apparent contradiction.
Reported-by: Daniel Wagner <wagi@monom.org>
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Carlos O'Donell [Mon, 5 Dec 2016 16:09:54 +0000 (11:09 -0500)]
resolv.conf.5: Timeout does not map to resolver API calls
I'm posting this patch to clarify the timeout behaviour because
there have been developers who expect this timeout to mean
something that it does not.
The timeout (and, by proxy, attempts) does not map to resolver API
calls. For example, a single call to getent might involve multiple
resolution requests to the resolvers listed in resolv.conf, and
each request will use TIMEOUT and be attempted at least ATTEMPTS
times. A developer using the resolver API cannot easily compute
any given timeout because the implementation may change; e.g., A
and AAAA queries may be made in parallel. A system administrator
uses this setting to ensure that there is a desirable timeout on
any request to any of the nameservers listed in resolv.conf, but
no guarantees exist beyond that.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Michael Kerrisk [Mon, 5 Dec 2016 13:23:20 +0000 (14:23 +0100)]
close.2: Further clarify that close() should not be retried after an error
See Linus's ancient comments re EINTR in
https://lkml.org/lkml/headers/2005/9/10/129
Date Sat, 10 Sep 2005 12:00:01 -0700 (PDT)
From Linus Torvalds <>
Subject Re: [patch 7/7] uml: retry host close() on EINTR
The FreeBSD 11.0 close() man page says something similar:
In case of any error except EBADF, the supplied file
descriptor is deallocated and therefore is no longer valid.
For AIX:
http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/libs/basetrf1/close.htm
If the FileDescriptor parameter refers to a device and the
close subroutine actually results in a device close, and the
device close routine returns an error, the error is returned
to the application. However, the FileDescriptor parameter is
considered closed and it may not be used in any subsequent
calls.
See also:
http://austingroupbugs.net/view.php?id=529
and in particular:
http://austingroupbugs.net/view.php?id=529#c1200
Reported-by: Daniel Wagner <wagi@monom.org>
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Mike Frysinger [Sun, 27 Nov 2016 03:31:37 +0000 (22:31 -0500)]
elf(5): document notes
Document the Elf{32,64}_Nhdr structure, the sections/segments that
contain notes, and how to interpret them. I've been lazy and only
included the GNU extensions here, especially as others are not
defined in the elf.h header file as shipped by glibc.
I've mostly used binutils, glibc, breakpad, and the GABI ELF spec
as sources of data for these fields.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
Michael Kerrisk [Sun, 20 Nov 2016 09:03:52 +0000 (10:03 +0100)]
random.7: Remove recommendation against consuming large amounts of randomness
From the email discussion:
> > Usage recommendations
> > The kernel random-number generator relies on entropy gathered
> > from device drivers and other sources of environmental noise.
> > It is designed to produce a small amount of high-quality seed
> > material to seed a cryptographically secure pseudorandom number
> > generator (CSPRNG).  It is designed for security, not speed, and
> > is poorly suited to generating large amounts of cryptographic
> > random data.  Users should be economical in the amount of seed
> > material that they consume via getrandom(2), /dev/urandom, and
> > /dev/random.
> >
> > ┌─────────────────────────────────────────────────────┐
> > │FIXME │
> > ├─────────────────────────────────────────────────────┤
> > │Is it really necessary to avoid consuming large │
> > │amounts from /dev/urandom? Various sources linked to │
> > │by https://bugzilla.kernel.org/show_bug.cgi?id=71211 │
> > │suggest it is not. │
> > │ │
> > │And: has the answer to the previous question changed │
> > │across kernel versions? │
> > └─────────────────────────────────────────────────────┘
> > Consuming unnecessarily large quantities of data via these
> > interfaces will have a negative impact on other consumers of
> > randomness.
[Ted Ts'o:]
> So "poorly suited" is definitely true. Also true is that urandom is
> not engineered for use for non-cryptographic uses. It's always going
> to be faster to use random(3) for those purposes.
>
> As far as whether or not it has a negative impact, it depends on how
> much you trust the underlying cryptographic algorithms. If the CSPRNG
> is seeded correctly with at least 256 bits of entropy that can't be
> guessed by the attacker, and if the underlying cryptographic
> primitives are secure, then it won't matter. But *if* there is an
> unknown vulnerability in the underlying primitive, and *if* large
> amounts of data generated by the CSPRNG would help exploit that
> vulnerability, and *if* that bulk amount of CSPRNG output is made
> available to an attacker with the capability to exploit the underlying
> cryptographic vulnerability, then there would be a problem.
>
> Obviously, no one knows of such a vulnerability, and I'm fairly
> confident that there won't be such a vulnerability across the
> different ways we've used to generate the urandom source --- but some
> people are professional paranoids, and would argue that we shouldn't
> make bulk output of the CSPRNG available for no good reason, just in
> case.
[Nikos Mavrogiannopoulos:]
The above is certainly accurate; however, I think that such a
discussion or text, when reflected in a man page, is going to
cause problems. The audience of a man page is not crypto people,
and seeing such text would create confusion rather than clarify
how these devices/APIs should be used. The *if* part is not put
into perspective, suggesting that such an *if* is possible.
However, if one clarifies that, in that case, your TLS or SSH
connection is most likely broken as well, and not because of any
attack on /dev/urandom, then one can see that we are heading
towards a theoretical discussion.
My suggestion on that particular text would be to remove it,
but to make it explicit somewhere in the text that all the
assurances for the devices depend on the crypto primitives,
rather than describing risks that may arise on particular
usage patterns *if* primitives are broken.
Darrick J. Wong [Wed, 23 Nov 2016 04:48:16 +0000 (20:48 -0800)]
fideduperange.2: Fix the discussion of maximum sizes
Fix the discussion of the limitations on the dest_count and
src_length parameters to the fideduperange ioctl() to reflect
what's actually in the kernel.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>