virNetClientSetTLSSession: Restore original signal mask
author     Michal Privoznik <mprivozn@redhat.com>
           Wed, 19 Mar 2014 17:10:34 +0000 (18:10 +0100)
committer  Eric Blake <eblake@redhat.com>
           Wed, 19 Mar 2014 22:22:19 +0000 (16:22 -0600)
commit     f1725e60e41300478dfeab7082388381fff4a961
tree       1c2128a3c48d8b1c7f89b92b0b37aee32bff10b2
parent     45d40bcf45871d3d7492625ee44d895a82fa6145
virNetClientSetTLSSession: Restore original signal mask

Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
poll(). This is okay, as we don't want poll() to be interrupted.
However, immediately after we fall out of poll() we try to restore
the original sigmask - again using SIG_BLOCK. But as the man page
says, SIG_BLOCK only adds signals to the signal mask:

SIG_BLOCK
      The set of blocked signals is the union of the current set and the set argument.

Therefore, when restoring the original mask we need to replace the
current mask entirely, so we should be using:

SIG_SETMASK
      The set of blocked signals is set to the argument set.
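
To illustrate the pattern (a minimal standalone sketch, not the libvirt
code itself; the choice of SIGPIPE and the helper name are assumptions
for the example): block the signal around poll() and then restore the
caller's mask with SIG_SETMASK rather than SIG_BLOCK:

    #include <poll.h>
    #include <pthread.h>
    #include <signal.h>

    /* Hypothetical helper: block SIGPIPE around poll() and restore the
     * caller's original signal mask afterwards. */
    static int
    pollBlockingSigpipe(struct pollfd *fds, nfds_t nfds, int timeout)
    {
        sigset_t blocked, oldmask;
        int ret;

        sigemptyset(&blocked);
        sigaddset(&blocked, SIGPIPE);

        /* Block SIGPIPE and remember the previous mask in 'oldmask'. */
        pthread_sigmask(SIG_BLOCK, &blocked, &oldmask);

        ret = poll(fds, nfds, timeout);

        /* SIG_BLOCK here would merely union 'oldmask' into the already
         * blocked set, leaving SIGPIPE blocked.  SIG_SETMASK replaces
         * the mask with 'oldmask', i.e. a true restore. */
        pthread_sigmask(SIG_SETMASK, &oldmask, NULL);

        return ret;
    }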

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 3d4b4f5ac634c123af1981084add29d3a2ca6ab0)
src/rpc/virnetclient.c