virNetClientSetTLSSession: Restore original signal mask
author    Michal Privoznik <mprivozn@redhat.com>
          Wed, 19 Mar 2014 17:10:34 +0000 (18:10 +0100)
committer Eric Blake <eblake@redhat.com>
          Thu, 20 Mar 2014 04:20:11 +0000 (22:20 -0600)
commit    4cbba884fcceae33236357c392e128582a95c5e0
tree      f0b0fe6b1dc50070d4c5227d7d41d872e4627fd6
parent    b7d051af2084f2880a638a680e0d2a7b595f1e64

Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
poll(). This is okay, as we don't want poll() to be interrupted.
However, immediately after we return from poll(), we try to restore
the original sigmask, again using SIG_BLOCK. But as the man page
says, SIG_BLOCK only adds signals to the current signal mask:

SIG_BLOCK
      The set of blocked signals is the union of the current set and the set argument.

Therefore, when restoring the original mask, we need to completely
overwrite the one we set earlier and hence we should be using:

SIG_SETMASK
      The set of blocked signals is set to the argument set.

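A minimal sketch of the block/poll/restore pattern involved; the
function name and the choice of blocked signals here are illustrative
and are not the actual virNetClientSetTLSSession code:

#include <poll.h>
#include <pthread.h>
#include <signal.h>

/* Illustrative helper: block some signals around poll(), then put
 * the caller's original mask back afterwards. */
static int
poll_with_signals_blocked(struct pollfd *fds, nfds_t nfds, int timeout)
{
    sigset_t blockedsigs, oldmask;
    int ret;

    /* Block the signals we don't want interrupting poll(); the
     * previous mask is saved into oldmask. */
    sigemptyset(&blockedsigs);
    sigaddset(&blockedsigs, SIGPIPE);
    pthread_sigmask(SIG_BLOCK, &blockedsigs, &oldmask);

    ret = poll(fds, nfds, timeout);

    /* Buggy restore: SIG_BLOCK would merely union oldmask into the
     * current mask, so SIGPIPE would stay blocked afterwards:
     *     pthread_sigmask(SIG_BLOCK, &oldmask, NULL);
     *
     * Correct restore: SIG_SETMASK replaces the mask wholesale with
     * the one saved before poll(). */
    pthread_sigmask(SIG_SETMASK, &oldmask, NULL);

    return ret;
}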
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 3d4b4f5ac634c123af1981084add29d3a2ca6ab0)
src/rpc/virnetclient.c