virNetClientSetTLSSession: Restore original signal mask
author    Michal Privoznik <mprivozn@redhat.com>
          Wed, 19 Mar 2014 17:10:34 +0000 (18:10 +0100)
committer Eric Blake <eblake@redhat.com>
          Thu, 20 Mar 2014 14:32:00 +0000 (08:32 -0600)
commit    b1066acb19e8ce57348e2fedf6868a0424fa77d2
tree      7668a2e620a8390451a7453a15ba4780a6e0fe5f
parent    35ed9796981cf7b939f28b60ca828824a0488a3a
virNetClientSetTLSSession: Restore original signal mask

Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
poll(). This is fine, as we don't want poll() to be interrupted.
However, immediately after we fall out of poll() we try to restore
the original sigmask, again using SIG_BLOCK. But as the man page
says, SIG_BLOCK only adds signals to the signal mask:

SIG_BLOCK
      The set of blocked signals is the union of the current set and the set argument.

Therefore, when restoring the original mask, we need to replace the
current mask entirely, and hence we should be using:

SIG_SETMASK
      The set of blocked signals is set to the argument set.

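For illustration, here is a minimal standalone sketch of the pattern
(not the actual virnetclient.c code; the helper name and the choice of
SIGPIPE are made up): block a set of signals across poll(), then
restore the caller's mask with SIG_SETMASK rather than SIG_BLOCK.

    #include <poll.h>
    #include <signal.h>

    /* Illustrative helper: wait for fd to become readable without
     * letting the blocked signals interrupt poll(). */
    static int wait_readable(int fd)
    {
        sigset_t oldmask, blockedsigs;
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int ret;

        sigemptyset(&blockedsigs);
        sigaddset(&blockedsigs, SIGPIPE);

        /* SIG_BLOCK is correct here: add the signals to the current
         * mask and save the previous mask in oldmask. */
        (void) pthread_sigmask(SIG_BLOCK, &blockedsigs, &oldmask);

        ret = poll(&pfd, 1, -1);

        /* SIG_SETMASK, not SIG_BLOCK: replace the mask with the saved
         * one. SIG_BLOCK here would only union oldmask into the
         * current mask, leaving the temporarily blocked signals
         * blocked forever. */
        (void) pthread_sigmask(SIG_SETMASK, &oldmask, NULL);

        return ret;
    }
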
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 3d4b4f5ac634c123af1981084add29d3a2ca6ab0)
src/rpc/virnetclient.c