-Julian Seward, jseward@acm.org, is the main author.
+Julian Seward, jseward@acm.org, is the original author.
Nicholas Nethercote, njn25@cam.ac.uk, did the core/skin
-generalisation, and wrote Cachegrind and some of the other skins.
+generalisation, and wrote Cachegrind and some of the other skins, and
+generally hacked on just about every part of the system by now.
+
+Jeremy Fitzhardinge, jeremy@goop.org, contributed tons of
+improvements, including the Helgrind thread-checking tool.
readelf's dwarf2 source line reader, written by Nick Clifton, was
modified to be used in Valgrind by Daniel Berlin.
Michael Matz and Simon Hausmann modified the GNU binutils
demangler(s) for use in Valgrind.
-Dirk Mueller contrib'd the malloc-free mismatch checking stuff.
+Dirk Mueller contrib'd the malloc-free mismatch checking stuff
+and various other bits and pieces.
Lots of other people sent bug reports, patches, and very
helpful feedback. I thank you all.
-A mini-FAQ for valgrind, version 1.9.6
+A mini-FAQ for valgrind, version 2.0.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Last revised 5 May 2003
-~~~~~~~~~~~~~~~~~~~~~~~
+Last revised 5 November 2003
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-----------------------------------------------------------------
Q5. I try running "valgrind my_program", but my_program runs normally,
and Valgrind doesn't emit any output at all.
-A5. Is my_program statically linked? Valgrind doesn't work with
- statically linked binaries. my_program must rely on at least one
- shared object. To determine if a my_program is statically linked,
- run:
+A5. Valgrind doesn't work out of the box with programs that are entirely
+ statically linked. It does a quick test at startup, and if it detects
+ that the program is statically linked, it aborts with an explanation.
+
+ This test may fail in some obscure cases, eg. if you run a script
+ under Valgrind and the script interpreter is statically linked.
- ldd my_program
+ If you still want static linking, you can ask gcc to link certain
+ libraries statically. Try the following options:
- It will show what shared objects my_program relies on, or say:
+ -Wl,-Bstatic -lmyLibrary1 -lotherLibrary -Wl,-Bdynamic
- not a dynamic executable
+ Just make sure you end with -Wl,-Bdynamic so that libc is dynamically
+ linked.
- if my_program is statically linked.
+ If you absolutely cannot use dynamic libraries, you can try statically
+ linking together all the .o files in coregrind/, all the .o files of the
+ skin of your choice (eg. those in memcheck/), and the .o files of your
+ program. You'll end up with a statically linked binary that runs
+ permanently under Valgrind's control. Note that we haven't tested this
+ procedure thoroughly.
-----------------------------------------------------------------
-----------------------------------------------------------------
-Q8. My program dies (exactly) like this:
+Q8. My program dies, printing a message like this along the way:
- REPE then 0xF
- valgrind: the `impossible' happened:
- Unhandled REPE case
+ disInstr: unhandled instruction bytes: 0x66 0xF 0x2E 0x5
-A8. Yeah ... that I believe is a SSE or SSE2 instruction. Are you
- building your app with -march=pentium4 or -march=athlon or
- something like that? If you can somehow dissuade gcc from
- producing SSE/SSE2 instructions, you may be able to avoid this.
- Some folks have reported that removing the flag -march=...
- works around this.
-
- I'd be interested to hear if you can get rid of it by changing
- your application build flags.
+A8. Valgrind doesn't support the full x86 instruction set, although it
+ now supports many SSE and SSE2 instructions. If you know the
+ failing instruction is an SSE/SSE2 instruction, you might be able
+ to recompile your program without it by fiddling with the
+ -march=... flag(s) for gcc. In particular, get rid of
+ -march=pentium4 or -march=athlon if you can. Either way, let us
+ know and we'll try to fix it.
-----------------------------------------------------------------
VG_(mash_LD_PRELOAD_and_LD_LIBRARY_PATH): internal error:
(loads of text)
-A12. We're not entirely sure about this, and would appreciate
- someone sending a simple test case for us to look at.
- One possible cause is that your program modifies its
+A12. One possible cause is that your program modifies its
environment variables, possibly including zeroing them
- all. Avoid this if you can.
+ all. Valgrind relies on the LD_PRELOAD, LD_LIBRARY_PATH and
+ VG_ARGS variables. Zeroing them will break things.
- 1.9.6 contains a fix which hopefully reduces the chances
- of your program bombing out like this.
+ As of 1.9.6, Valgrind only uses these variables with
+ --trace-children=no, when executing execve() or using the
+ --stop-after=yes flag. This should reduce the potential for
+ problems.
-----------------------------------------------------------------
-----------------------------------------------------------------
+Q15. My program dies with a segmentation fault, but Valgrind doesn't give
+ any error messages before it, or none that look related.
+
+A15. The one kind of segmentation fault that Valgrind won't give any
+ warnings about is writes to read-only memory. Maybe your program is
+ writing to a static string like this:
+
+ char* s = "hello";
+ s[0] = 'j';
+
+ or something similar. Writing to read-only memory can also apparently
+ make LinuxThreads behave strangely.
+
+-----------------------------------------------------------------
+
+Q16. When I try building Valgrind, 'make' dies partway with an
+     assertion failure, something like this:
+
+     make: expand.c:489: allocated_variable_append: Assertion
+     `current_variable_set_list->next != 0' failed.
+
+A16. It's probably a bug in 'make'. Some, but not all, instances of
+ version 3.79.1 have this bug, see
+ www.mail-archive.com/bug-make@gnu.org/msg01658.html. Try upgrading to a
+ more recent version of 'make'.
+
+-----------------------------------------------------------------
+
+Q17. I tried writing a suppression but it didn't work. Can you
+ write my suppression for me?
+
+A17. Yes! Use the --gen-suppressions=yes feature to spit out
+ suppressions automatically for you. You can then edit them
+ if you like, eg. combining similar automatically generated
+ suppressions using wildcards like '*'.
+
+ If you really want to write suppressions by hand, read the
+ manual carefully. Note particularly that C++ function names
+ must be _mangled_.
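By way of illustration only (the function and library names below are made
up), a hand-written Memcheck suppression has this shape -- note the mangled
C++ function name in the fun: line:

```
{
   hypothetical_libxyz_suppression
   Memcheck:Cond
   fun:_ZN5MyLib6doWorkEv
   obj:/usr/lib/libxyz.so.*
}
```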
+
+-----------------------------------------------------------------
+
(this is the end of the FAQ.)
+Stable release 2.0.0 (5 Nov 2003)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+2.0.0 improves SSE/SSE2 support, fixes some minor bugs, and
+improves support for SuSE 9 and the Red Hat "Severn" beta.
+
+- Further improvements to SSE/SSE2 support. The entire test suite of
+ the GNU Scientific Library (gsl-1.4) compiled with Intel Icc 7.1
+ 20030307Z '-g -O -xW' now works. I think this gives pretty good
+ coverage of SSE/SSE2 floating point instructions, or at least the
+ subset emitted by Icc.
+
+- Also added support for the following instructions:
+ MOVNTDQ UCOMISD UNPCKLPS UNPCKHPS SQRTSS
+ PUSH/POP %{FS,GS}, and PUSH %CS (Nb: there is no POP %CS).
+
+- CFI support for GDB version 6. Needed to enable newer GDBs
+ to figure out where they are when using --gdb-attach=yes.
+
+- Fix this:
+ mc_translate.c:1091 (memcheck_instrument): Assertion
+ `u_in->size == 4 || u_in->size == 16' failed.
+
+- Return an error rather than panicking when given a bad socketcall.
+
+- Fix checking of syscall rt_sigtimedwait().
+
+- Implement __NR_clock_gettime (syscall 265). Needed on Red Hat Severn.
+
+- Fixed bug in overlap check in strncpy() -- it was assuming the src was 'n'
+ bytes long, when it could be shorter, which could cause false
+ positives.
+
+- Support use of select() for very large numbers of file descriptors.
+
+- Don't fail silently if the executable is statically linked, or is
+ setuid/setgid. Print an error message instead.
+
+- Support for old DWARF-1 format line number info.
+
+
+
+Snapshot 20031012 (12 October 2003)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Three months worth of bug fixes, roughly. Most significant single
+change is improved SSE/SSE2 support, mostly thanks to Dirk Mueller.
+
+20031012 builds on Red Hat Fedora ("Severn") but doesn't really work
+(curiously, mozilla runs OK, but a modest "ls -l" bombs). I hope to
+get a working version out soon. It may or may not work ok on the
+forthcoming SuSE 9; I hear positive noises about it but haven't been
+able to verify this myself (not until I get hold of a copy of 9).
+
+A detailed list of changes, in no particular order:
+
+- Describe --gen-suppressions in the FAQ.
+
+- Syscall __NR_waitpid supported.
+
+- Minor MMX bug fix.
+
+- -v prints program's argv[] at startup.
+
+- More glibc-2.3 suppressions.
+
+- Suppressions for stack underrun bug(s) in the c++ support library
+ distributed with Intel Icc 7.0.
+
+- Fix problems reading /proc/self/maps.
+
+- Fix a couple of messages that should have been suppressed by -q,
+ but weren't.
+
+- Make Addrcheck understand "Overlap" suppressions.
+
+- At startup, check if program is statically linked and bail out if so.
+
+- Cachegrind: Auto-detect Intel Pentium-M, also VIA Nehemiah
+
+- Memcheck/addrcheck: minor speed optimisations
+
+- Handle syscall __NR_brk more correctly than before.
+
+- Fixed incorrect allocate/free mismatch errors when using
+ operator new(unsigned, std::nothrow_t const&)
+ operator new[](unsigned, std::nothrow_t const&)
+
+- Support POSIX pthread spinlocks.
+
+- Fixups for clean compilation with gcc-3.3.1.
+
+- Implemented more opcodes:
+ - push %es
+ - push %ds
+ - pop %es
+ - pop %ds
+ - movntq
+ - sfence
+ - pshufw
+ - pavgb
+ - ucomiss
+ - enter
+ - mov imm32, %esp
+ - all "in" and "out" opcodes
+ - inc/dec %esp
+ - A whole bunch of SSE/SSE2 instructions
+
+- Memcheck: don't bomb on SSE/SSE2 code.
+
+
+
+Snapshot 20030725 (25 July 2003)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Fixes some minor problems in 20030716.
+
+- Fix bugs in overlap checking for strcpy/memcpy etc.
+
+- Do overlap checking with Addrcheck as well as Memcheck.
+
+- Fix this:
+ Memcheck: the `impossible' happened:
+ get_error_name: unexpected type
+
+- Install headers needed to compile new skins.
+
+- Remove leading spaces and colon in the LD_LIBRARY_PATH / LD_PRELOAD
+ passed to non-traced children.
+
+- Fix file descriptor leak in valgrind-listener.
+
+- Fix longstanding bug in which the allocation point of a
+ block resized by realloc was not correctly set. This may
+ have caused confusing error messages.
+
+
Snapshot 20030716 (16 July 2003)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Release notes for Valgrind, version 1.0.0
+Release notes for Valgrind, version 2.0.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
KDE3 developers: please read also README_KDE3_FOLKS for guidance
about how to debug KDE3 applications with Valgrind.
For instructions on how to build/install, see the end of this file.
-Valgrind works best on systems with glibc-2.1.X or 2.2.X, and with gcc
-versions prior to 3.1. gcc-3.1 works, but generates code which causes
-valgrind to report many false errors. For now, try to use a gcc prior
-to 3.1; if you can't, at least compile your application without
-optimisation. Valgrind-1.0.X also can't handle glibc-2.3.X systems.
-
Executive Summary
~~~~~~~~~~~~~~~~~
Memory leaks -- where pointers to malloc'd blocks are lost forever
Passing of uninitialised and/or unaddressable memory to system calls
Mismatched use of malloc/new/new [] vs free/delete/delete []
+ Overlaps of arguments to strcpy() and related functions
Some abuses of the POSIX pthread API
Problems like these can be difficult to find by other means, often
When Valgrind detects such a problem, it can, if you like, attach GDB
to your program, so you can poke around and see what's going on.
-Valgrind is closely tied to details of the CPU, operating system and
-to a less extent, compiler and basic C libraries. This makes it
-difficult to make it portable, so I have chosen at the outset to
-concentrate on what I believe to be a widely used platform: Red Hat
-Linux 7.2, on x86s. I believe that it will work without significant
-difficulty on other x86 GNU/Linux systems which use the 2.4 kernel and
-GNU libc 2.2.X, for example SuSE 7.1 and Mandrake 8.0. This version
-1.0 release is known to work on Red Hats 6.2, 7.2 and 7.3, at the very
-least.
-
Valgrind is licensed under the GNU General Public License, version 2.
Read the file COPYING in the source distribution for details.
Documentation
~~~~~~~~~~~~~
A comprehensive user guide is supplied. Point your browser at
-docs/index.html. If your browser doesn't like frames, point it
-instead at docs/manual.html. There's also detailed, although somewhat
-out of date, documentation of how valgrind works, in
-docs/techdocs.html.
+$PREFIX/share/doc/valgrind/manual.html, where $PREFIX is whatever you
+specified with --prefix= when building.
Building and installing it
~~~~~~~~~~~~~~~~~~~~~~~~~~
-To install from CVS :
-
- 0. Check out the code from CVS, following the instructions at
- http://sourceforge.net/cvs/?group_id=46268. The 'modulename' is
- "valgrind".
-
- 1. cd into the source directory.
+To install from a tar.bz2 distribution:
- 2. Run ./autogen.sh to setup the environment (you need the standard
- autoconf tools to do so).
-
-To install from a tar.gz archive:
-
- 3. Run ./configure, with some options if you wish. The standard
+ 1. Run ./configure, with some options if you wish. The standard
options are documented in the INSTALL file. The only interesting
one is the usual --prefix=/where/you/want/it/installed.
- 4. Do "make".
+ 2. Do "make".
- 5. Do "make install", possibly as root if the destination permissions
+ 3. Do "make install", possibly as root if the destination permissions
require that.
- 6. See if it works. Try "valgrind ls -l". Either this works,
+ 4. See if it works. Try "valgrind ls -l". Either this works,
or it bombs out complaining it can't find argc/argv/envp.
In that case, mail me a bug report.
Julian Seward (jseward@acm.org)
-1 July 2002
+Nick Nethercote (njn25@cam.ac.uk)
+Jeremy Fitzhardinge (jeremy@goop.org)
+
+5 November 2003
-5 May 2003
-
-Building and not installing it
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To run Valgrind without having to install it, run coregrind/valgrind (prefix
-with "sh" because it's not executable) with the --in-place=<dir> option, where
-<dir> is the root of the source tree (and must be an absolute path). Eg:
-
- sh ~/grind/head4/coregrind/valgrind --in-place=/homes/njn25/grind/head4
-
-This allows you to compile and run with "make" instead of "make install",
-saving you time.
-
-I recommend compiling with "make --quiet" to further reduce the amount of
-output spewed out during compilation, letting you actually see any errors,
-warnings, etc.
-
Running the regression tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
perl tests/vg_regtest memcheck/tests/badfree.vgtest
perl tests/vg_regtest memcheck/tests/badfree
+
+Nick Nethercote (njn25@cam.ac.uk)
+Last updated 5 November 2003
+
-19 June 2002
+5 November 2003
The purpose of this small doc is to guide you in using Valgrind to
find and fix memory management bugs in KDE3.
but once set up things work fairly well.
-* You need an x86 box running a Linux 2.2 or 2.4 kernel, with glibc
- 2.1.X or 2.2.X and XFree86 3.X or 4.X. In practice this means
- practically any recent Linux distro. Valgrind is developed on a
- vanilla Red Hat 7.2 installation, so at least works ok there.
- I imagine Mandrake 8 and SuSE 7.X would be ok too. It is known to
- work on Red Hats 6.2, 7.2 and 7.3, at the very least.
-
+* You need an x86 box running a Linux 2.4 kernel, with glibc
+ 2.2.X or 2.3.X and XFree86 3.X or 4.X. In practice this means
+ practically any recent Linux distro.
* You need a reasonably fast machine, since programs run 25-100 x
slower on Valgrind. I work with a 1133 MHz PIII with 512 M of
usable, but in all, the faster your machine, the more useful
valgrind will be.
-
* You need at least 256M of memory for reasonable behaviour. Valgrind
inflates the memory use of KDE apps approximately 3-4 x, so (eg)
konqueror needs ~ 140M of memory to get started, although to be fair,
at least 40 M of that is due to reading the debug info -- this is for
a konqueror and all libraries built with -O -g.
-
* You need to compile the KDE to be debugged, using a decent gcc/g++:
- gcc 2.96-*, which comes with Red Hat 7.2, is buggy. It sometimes
flag to ignore them. See the manual for details; this is not really
a good solution.
- - I recommend you use gcc/g++ 2.95.3. It seems to compile
- KDE without problems, and does not suffer from the above bug. It's
- what I have been using.
-
- - gcc-3. 3.0.4 was observed to have a scheduling bug causing it to
- occasionally generate writes below the stack pointer. gcc-3.1 seems
- better in that respect.
-
- It's ok to build Valgrind with the default gcc on Red Hat 7.2.
-
-
* So: build valgrind -- see the README file. It's the standard
./configure ; make ; make install deal.
* If you are debugging KDE apps, be prepared for the fact that
Valgrind finds bugs in the underlying Qt (qt-copy from CVS) too.
-* Please read the Valgrind manual, docs/index.html. It contains
- considerable details about how to use it, what's really going on,
- etc.
-
-* There are some significant limitations:
- - No MMX, SSE, SSE2 insns. Basically a 486 instruction set only.
- - Various other minor limitations listed in the manual.
-
-* If you have trouble with it, please let me know (jseward@acm.org)
- and I'll see if I can help you out.
+* Please read the Valgrind manual, share/doc/valgrind/manual.html
+ in the installation tree. It contains considerable details about
+ how to use it, what's really going on, etc.
Have fun! If you find Valgrind useful in finding and fixing bugs,
-I shall consider my efforts to have been worthwhile.
+our efforts will have been worthwhile.
Julian Seward (jseward@acm.org)
+Nick Nethercote (njn25@cam.ac.uk)
+Jeremy Fitzhardinge (jeremy@goop.org)
\ No newline at end of file
-1 July 2002
+5 November 2003
Greetings, packaging person! This information is aimed at people
building binary distributions of Valgrind.
? VGM_BIT_INVALID : VGM_BIT_VALID;
}
-static __inline__ void set_abit ( Addr a, UChar abit )
+static /* __inline__ */ void set_abit ( Addr a, UChar abit )
{
AcSecMap* sm;
UInt sm_off;
/*--- Setting permissions over address ranges. ---*/
/*------------------------------------------------------------*/
-static __inline__
+static /* __inline__ */
void set_address_range_perms ( Addr a, UInt len,
UInt example_a_bit )
{
case NOP: case LOCK: case CALLM_E: case CALLM_S:
break;
- /* For memory-ref instrs, copy the data_addr into a temporary to be
- * passed to the helper at the end of the instruction.
+ /* For memory-ref instrs, copy the data_addr into a temporary
+ * to be passed to the helper at the end of the instruction.
*/
case LOAD:
t_addr = u_in->val1;
goto do_LOAD_or_STORE;
- case STORE: t_addr = u_in->val2;
+ case STORE:
+ t_addr = u_in->val2;
goto do_LOAD_or_STORE;
- do_LOAD_or_STORE:
+ do_LOAD_or_STORE:
uInstr1(cb, CCALL, 0, TempReg, t_addr);
switch (u_in->size) {
case 4: uCCall(cb, (Addr) & ac_helperc_ACCESS4, 1, 1, False );
VG_(copy_UInstr)(cb, u_in);
break;
- case SSE3a_MemRd: // this one causes trouble
+ case SSE3a_MemRd:
case SSE2a_MemRd:
case SSE2a_MemWr:
case SSE3a_MemWr:
+ case SSE3a1_MemRd:
sk_assert(u_in->size == 4 || u_in->size == 8
|| u_in->size == 16);
goto do_Access_ARG3;
VG_(copy_UInstr)(cb, u_in);
break;
- // case SSE2a1_MemRd:
- // case SSE2a1_MemWr:
- // case SSE3a1_MemRd:
- // case SSE3a1_MemWr:
+ case SSE2a1_MemRd:
VG_(pp_UInstr)(0,u_in);
VG_(skin_panic)("AddrCheck: unhandled SSE uinstr");
break;
case SSE3g_RegWr:
case SSE3e_RegRd:
case SSE4:
+ case SSE3:
default:
VG_(copy_UInstr)(cb, u_in);
break;
* In all cases prepends new nodes to their chain. Returns a pointer to the
* cost centre. Also sets BB_seen_before by reference.
*/
-static __inline__ BBCC* get_BBCC(Addr bb_orig_addr, UCodeBlock* cb,
- Bool remove, Bool *BB_seen_before)
+static BBCC* get_BBCC(Addr bb_orig_addr, UCodeBlock* cb,
+ Bool remove, Bool *BB_seen_before)
{
file_node *curr_file_node;
fn_node *curr_fn_node;
is_FPU_R = True;
break;
+ case SSE2a_MemRd:
+ case SSE2a1_MemRd:
+ sk_assert(u_in->size == 4 || u_in->size == 16);
+ t_read = u_in->val3;
+ is_FPU_R = True;
+ break;
+
+ case SSE3a_MemRd:
+ sk_assert(u_in->size == 4 || u_in->size == 8 || u_in->size == 16);
+ t_read = u_in->val3;
+ is_FPU_R = True;
+ break;
+
+ case SSE3a1_MemRd:
+ sk_assert(u_in->size == 16);
+ t_read = u_in->val3;
+ is_FPU_R = True;
+ break;
+
+ case SSE3ag_MemRd_RegWr:
+ sk_assert(u_in->size == 4 || u_in->size == 8);
+ t_read = u_in->val1;
+ is_FPU_R = True;
+ break;
+
case MMX2_MemWr:
sk_assert(u_in->size == 4 || u_in->size == 8);
/* fall through */
is_FPU_W = True;
break;
+ case SSE2a_MemWr:
+ sk_assert(u_in->size == 4 || u_in->size == 16);
+ t_write = u_in->val3;
+ is_FPU_W = True;
+ break;
+
+ case SSE3a_MemWr:
+ sk_assert(u_in->size == 4 || u_in->size == 8 || u_in->size == 16);
+ t_write = u_in->val3;
+ is_FPU_W = True;
+ break;
+
default:
break;
}
VG_(copy_UInstr)(cb, u_in);
break;
+ case SSE2a_MemRd:
+ case SSE2a1_MemRd:
+ sk_assert(u_in->size == 4 || u_in->size == 16);
+ t_read = u_in->val3;
+ t_read_addr = newTemp(cb);
+ uInstr2(cb, MOV, 4, TempReg, u_in->val3, TempReg, t_read_addr);
+ data_size = u_in->size;
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+
+ case SSE3a_MemRd:
+ sk_assert(u_in->size == 4 || u_in->size == 8 || u_in->size == 16);
+ t_read = u_in->val3;
+ t_read_addr = newTemp(cb);
+ uInstr2(cb, MOV, 4, TempReg, u_in->val3, TempReg, t_read_addr);
+ data_size = u_in->size;
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+
+ case SSE3a1_MemRd:
+ sk_assert(u_in->size == 16);
+ t_read = u_in->val3;
+ t_read_addr = newTemp(cb);
+ uInstr2(cb, MOV, 4, TempReg, u_in->val3, TempReg, t_read_addr);
+ data_size = u_in->size;
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+
+ case SSE3ag_MemRd_RegWr:
+ sk_assert(u_in->size == 4 || u_in->size == 8);
+ t_read = u_in->val1;
+ t_read_addr = newTemp(cb);
+ uInstr2(cb, MOV, 4, TempReg, u_in->val1, TempReg, t_read_addr);
+ data_size = u_in->size;
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+
/* Note that we must set t_write_addr even for mod instructions;
* That's how the code above determines whether it does a write.
* Without it, it would think a mod instruction is a read.
VG_(copy_UInstr)(cb, u_in);
break;
+ case SSE2a_MemWr:
+ sk_assert(u_in->size == 4 || u_in->size == 16);
+ /* fall through */
+ case SSE3a_MemWr:
+ sk_assert(u_in->size == 4 || u_in->size == 8 || u_in->size == 16);
+ t_write = u_in->val3;
+ t_write_addr = newTemp(cb);
+ uInstr2(cb, MOV, 4, TempReg, u_in->val3, TempReg, t_write_addr);
+ data_size = u_in->size;
+ VG_(copy_UInstr)(cb, u_in);
+ break;
/* For rep-prefixed instructions, log a single I-cache access
* before the UCode loop that implements the repeated part, which
/* TLB info, ignore */
case 0x01: case 0x02: case 0x03: case 0x04:
case 0x50: case 0x51: case 0x52: case 0x5b: case 0x5c: case 0x5d:
+ case 0xb0: case 0xb3:
+
break;
case 0x06: *I1c = (cache_t) { 8, 4, 32 }; break;
case 0x08: *I1c = (cache_t) { 16, 4, 32 }; break;
+ case 0x30: *I1c = (cache_t) { 32, 8, 64 }; break;
case 0x0a: *D1c = (cache_t) { 8, 2, 32 }; break;
case 0x0c: *D1c = (cache_t) { 16, 4, 32 }; break;
+ case 0x2c: *D1c = (cache_t) { 32, 8, 64 }; break;
/* IA-64 info -- panic! */
case 0x10: case 0x15: case 0x1a:
case 0x83: *L2c = (cache_t) { 512, 8, 32 }; L2_found = True; break;
case 0x84: *L2c = (cache_t) { 1024, 8, 32 }; L2_found = True; break;
case 0x85: *L2c = (cache_t) { 2048, 8, 32 }; L2_found = True; break;
+ case 0x86: *L2c = (cache_t) { 512, 4, 64 }; L2_found = True; break;
+ case 0x87: *L2c = (cache_t) { 1024, 8, 64 }; L2_found = True; break;
default:
VG_(message)(Vg_DebugMsg,
} else if (0 == VG_(strcmp)(vendor_id, "AuthenticAMD")) {
ret = AMD_cache_info(I1c, D1c, L2c);
+ } else if (0 == VG_(strcmp)(vendor_id, "CentaurHauls")) {
+ /* Total kludge. Pretend to be a VIA Nehemiah. */
+ D1c->size = 64;
+ D1c->assoc = 16;
+ D1c->line_size = 16;
+ I1c->size = 64;
+ I1c->assoc = 4;
+ I1c->line_size = 16;
+ L2c->size = 64;
+ L2c->assoc = 16;
+ L2c->line_size = 16;
+ ret = 0;
+
} else {
VG_(message)(Vg_DebugMsg, "CPU vendor ID not recognised (%s)",
vendor_id);
cachesim_initcache(config, &L); \
} \
\
-static __inline__ \
+static /* __inline__ */ \
void cachesim_##L##_doref(Addr a, UChar size, ULong* m1, ULong *m2) \
{ \
register UInt set1 = ( a >> L.line_size_bits) & (L.sets_min_1); \
# Process this file with autoconf to produce a configure script.
AC_INIT(coregrind/vg_main.c) # give me a source file, any source file...
AM_CONFIG_HEADER(config.h)
-AM_INIT_AUTOMAKE(valgrind, 20030716)
+AM_INIT_AUTOMAKE(valgrind, 2.0.0)
AM_MAINTAINER_MODE
kernel=`uname -r`
case "${kernel}" in
- 2.5.*)
- AC_MSG_RESULT([2.5 family (${kernel})])
- AC_DEFINE([KERNEL_2_5], 1, [Define to 1 if you're using Linux 2.5.x])
+ 2.6.*)
+ AC_MSG_RESULT([2.6 family (${kernel})])
+ AC_DEFINE([KERNEL_2_6], 1, [Define to 1 if you're using Linux 2.6.x])
;;
2.4.*)
*)
AC_MSG_RESULT([unsupported (${kernel})])
- AC_MSG_ERROR([Valgrind works on kernels 2.2 and 2.4])
+ AC_MSG_ERROR([Valgrind works on kernels 2.2, 2.4 and 2.6])
;;
esac
fi
+# check if the GNU as supports CFI directives
+AC_MSG_CHECKING([if gas accepts .cfi])
+AC_TRY_LINK(, [
+
+__asm__ __volatile__ (".cfi_startproc\n"
+ ".cfi_adjust_cfa_offset 0x0\n"
+ ".cfi_endproc\n");
+],
+[
+ AC_DEFINE_UNQUOTED([HAVE_GAS_CFI], 1, [Define if your GNU as supports .cfi])
+ AC_MSG_RESULT(yes)
+],
+ AC_MSG_RESULT(no)
+)
+
+
+
AC_MSG_CHECKING([if this is an NPTL-based system])
safe_LIBS="$LIBS"
LIBS="$LIBS -lpthread"
AM_CPPFLAGS = $(add_includes) -DVG_LIBDIR="\"$(libdir)"\"
AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -fomit-frame-pointer \
@PREFERRED_STACK_BOUNDARY@ -g
-AM_CCASFLAGS = $(add_includes)
+AM_CCASFLAGS = -I$(top_builddir) -I$(top_srcdir) $(add_includes)
valdir = $(libdir)/valgrind
+++ /dev/null
-
-/*--------------------------------------------------------------------*/
-/*--- A replacement for the standard libpthread.so. ---*/
-/*--- vg_libpthread.c ---*/
-/*--------------------------------------------------------------------*/
-
-/*
- This file is part of Valgrind, an extensible x86 protected-mode
- emulator for monitoring program execution on x86-Unixes.
-
- Copyright (C) 2000-2003 Julian Seward
- jseward@acm.org
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License as
- published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful, but
- WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
- 02111-1307, USA.
-
- The GNU General Public License is contained in the file COPYING.
-*/
-
-/* ALL THIS CODE RUNS ON THE SIMULATED CPU.
-
- This is a replacement for the standard libpthread.so. It is loaded
- as part of the client's image (if required) and directs pthread
- calls through to Valgrind's request mechanism.
-
- A couple of caveats.
-
- 1. Since it's a binary-compatible replacement for an existing library,
- we must take care to used exactly the same data layouts, etc, as
- the standard pthread.so does.
-
- 2. Since this runs as part of the client, there are no specific
- restrictions on what headers etc we can include, so long as
- this libpthread.so does not end up having dependencies on .so's
- which the real one doesn't.
-
- Later ... it appears we cannot call file-related stuff in libc here,
- perhaps fair enough. Be careful what you call from here. Even exit()
- doesn't work (gives infinite recursion and then stack overflow); hence
- myexit(). Also fprintf doesn't seem safe.
-*/
-
-/* Sidestep the normal check which disallows using valgrind.h
- directly. */
-#define __VALGRIND_SOMESKIN_H
-#include "valgrind.h" /* For the request-passing mechanism */
-
-#include "vg_include.h" /* For the VG_USERREQ__* constants */
-
-#define __USE_UNIX98
-#include <sys/types.h>
-#include <pthread.h>
-#undef __USE_UNIX98
-
-#include <unistd.h>
-#include <string.h>
-#ifdef GLIBC_2_1
-#include <sys/time.h>
-#endif
-#include <sys/stat.h>
-#include <sys/poll.h>
-#include <stdio.h>
-
-
-/* ---------------------------------------------------------------------
- Forwardses.
- ------------------------------------------------------------------ */
-
-#define WEAK __attribute__((weak))
-
-
-static
-int my_do_syscall1 ( int syscallno, int arg1 );
-
-static
-int my_do_syscall2 ( int syscallno,
- int arg1, int arg2 );
-
-static
-int my_do_syscall3 ( int syscallno,
- int arg1, int arg2, int arg3 );
-
-static
-__inline__
-int is_kerror ( int res )
-{
- if (res >= -4095 && res <= -1)
- return 1;
- else
- return 0;
-}
-
-
-#ifdef GLIBC_2_3
- /* kludge by JRS (not from glibc) ... */
- typedef void* __locale_t;
-
- /* Copied from locale/locale.h in glibc-2.2.93 sources */
- /* This value can be passed to `uselocale' and may be returned by
- it. Passing this value to any other function has undefined
- behavior. */
-# define LC_GLOBAL_LOCALE ((__locale_t) -1L)
- extern __locale_t __uselocale ( __locale_t );
-#endif
-
-static
-void init_libc_tsd_keys ( void );
-
-
-/* ---------------------------------------------------------------------
- Helpers. We have to be pretty self-sufficient.
- ------------------------------------------------------------------ */
-
-/* Number of times any given error message is printed. */
-#define N_MOANS 3
-
-/* Extract from Valgrind the value of VG_(clo_trace_pthread_level).
- Returns 0 (none) if not running on Valgrind. */
-static
-int get_pt_trace_level ( void )
-{
- int res;
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__GET_PTHREAD_TRACE_LEVEL,
- 0, 0, 0, 0);
- return res;
-}
-
-static
-void my_exit ( int arg )
-{
- my_do_syscall1(__NR_exit, arg);
- /*NOTREACHED*/
-}
-
-/* Apparently unused.
-static
-void my_write ( int fd, const void *buf, int count )
-{
- my_do_syscall3(__NR_write, fd, (int)buf, count );
-}
-*/
-
-/* We need this guy -- it's in valgrind.so. */
-extern void VG_(startup) ( void );
-
-
-/* Just start up Valgrind if it's not already going. VG_(startup)()
- detects and ignores second and subsequent calls. */
-static __inline__
-void ensure_valgrind ( char* caller )
-{
- VG_(startup)();
-}
-
-/* While we're at it ... hook our own startup function into this
- game. */
-__asm__ (
- ".section .init\n"
- "\tcall vgPlain_startup"
-);
-
-
-static
-__attribute__((noreturn))
-void barf ( char* str )
-{
- char buf[1000];
- buf[0] = 0;
- strcat(buf, "\nvalgrind's libpthread.so: ");
- strcat(buf, str);
- strcat(buf, "\n\n");
- VALGRIND_NON_SIMD_CALL2(VG_(message), Vg_UserMsg, buf);
- my_exit(1);
- /* We have to persuade gcc into believing this doesn't return. */
- while (1) { };
-}
-
-
-static void cat_n_send ( char* pre, char* msg )
-{
- char buf[1000];
- if (get_pt_trace_level() >= 0) {
- snprintf(buf, sizeof(buf), "%s%s", pre, msg );
- buf[sizeof(buf)-1] = '\0';
- VALGRIND_NON_SIMD_CALL2(VG_(message), Vg_UserMsg, buf);
- }
-}
-
-static void ignored ( char* msg )
-{
- cat_n_send ( "valgrind's libpthread.so: IGNORED call to: ", msg );
-}
-
-
-static void kludged ( char* msg )
-{
- cat_n_send ( "valgrind's libpthread.so: KLUDGED call to: ", msg );
-}
-
-
-__attribute__((noreturn))
-void vgPlain_unimp ( char* what )
-{
- cat_n_send (
- "valgrind's libpthread.so: UNIMPLEMENTED FUNCTION: ", what );
- barf("Please report this bug to me at: jseward@acm.org");
-}
-
-
-static
-void my_assert_fail ( Char* expr, Char* file, Int line, Char* fn )
-{
- char buf[1000];
- static Bool entered = False;
- if (entered)
- my_exit(2);
- entered = True;
- sprintf(buf, "\n%s: %s:%d (%s): Assertion `%s' failed.\n",
- "valgrind", file, line, fn, expr );
- cat_n_send ( "", buf );
- sprintf(buf, "Please report this bug to me at: %s\n\n",
- VG_EMAIL_ADDR);
- cat_n_send ( "", buf );
- my_exit(1);
-}
-
-#define MY__STRING(__str) #__str
-
-#define my_assert(expr) \
- ((void) ((expr) ? 0 : \
- (my_assert_fail (MY__STRING(expr), \
- __FILE__, __LINE__, \
- __PRETTY_FUNCTION__), 0)))
-
-static
-void my_free ( void* ptr )
-{
- int res;
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__FREE, ptr, 0, 0, 0);
- my_assert(res == 0);
-}
-
-
-static
-void* my_malloc ( int nbytes )
-{
- void* res;
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__MALLOC, nbytes, 0, 0, 0);
- my_assert(res != (void*)0);
- return res;
-}
-
-
-
-/* ---------------------------------------------------------------------
- Pass pthread_ calls to Valgrind's request mechanism.
- ------------------------------------------------------------------ */
-
-#include <errno.h>
-#include <sys/time.h> /* gettimeofday */
-
-
-/* ---------------------------------------------------
- Ummm ..
- ------------------------------------------------ */
-
-static
-void pthread_error ( const char* msg )
-{
- int res;
- VALGRIND_MAGIC_SEQUENCE(res, 0,
- VG_USERREQ__PTHREAD_ERROR,
- msg, 0, 0, 0);
-}
-
-
-/* ---------------------------------------------------
- Here so it can be inlined without complaint.
- ------------------------------------------------ */
-
-__inline__
-pthread_t pthread_self(void)
-{
- int tid;
- ensure_valgrind("pthread_self");
- VALGRIND_MAGIC_SEQUENCE(tid, 0 /* default */,
- VG_USERREQ__PTHREAD_GET_THREADID,
- 0, 0, 0, 0);
- if (tid < 1 || tid >= VG_N_THREADS)
- barf("pthread_self: invalid ThreadId");
- return tid;
-}
-
-
-/* ---------------------------------------------------
- THREAD ATTRIBUTES
- ------------------------------------------------ */
-
-int pthread_attr_init(pthread_attr_t *attr)
-{
- /* Just initialise the fields which we might look at. */
- attr->__detachstate = PTHREAD_CREATE_JOINABLE;
- /* Linuxthreads sets this field to the value __getpagesize(), so I
- guess the following is OK. */
- attr->__guardsize = VKI_BYTES_PER_PAGE; return 0;
-}
-
-int pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate)
-{
- if (detachstate != PTHREAD_CREATE_JOINABLE
- && detachstate != PTHREAD_CREATE_DETACHED) {
- pthread_error("pthread_attr_setdetachstate: "
- "detachstate is invalid");
- return EINVAL;
- }
- attr->__detachstate = detachstate;
- return 0;
-}
-
-int pthread_attr_getdetachstate(const pthread_attr_t *attr, int *detachstate)
-{
- *detachstate = attr->__detachstate;
- return 0;
-}
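The detach-state accessors above round-trip a value through the attribute object. A minimal sketch against the standard POSIX API (which this file reimplements); the helper name is illustrative:

```c
#include <assert.h>
#include <pthread.h>

/* Round-trip the detach state through a pthread_attr_t and
   return the value read back. */
static int detachstate_roundtrip(void)
{
    pthread_attr_t attr;
    int state = -1;

    assert(pthread_attr_init(&attr) == 0);
    assert(pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) == 0);
    assert(pthread_attr_getdetachstate(&attr, &state) == 0);
    pthread_attr_destroy(&attr);
    return state;
}
```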
-
-int pthread_attr_setinheritsched(pthread_attr_t *attr, int inherit)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- ignored("pthread_attr_setinheritsched");
- return 0;
-}
-
-WEAK
-int pthread_attr_setstacksize (pthread_attr_t *__attr,
- size_t __stacksize)
-{
- size_t limit;
- char buf[1024];
- ensure_valgrind("pthread_attr_setstacksize");
- limit = VG_PTHREAD_STACK_SIZE - VG_AR_CLIENT_STACKBASE_REDZONE_SZB
- - 1000; /* paranoia */
- if (__stacksize < limit)
- return 0;
-   snprintf(buf, sizeof(buf), "pthread_attr_setstacksize: "
-            "requested size %zu >= VG_PTHREAD_STACK_SIZE\n "
-            "edit vg_include.h and rebuild.", __stacksize);
- buf[sizeof(buf)-1] = '\0'; /* Make sure it is zero terminated */
- barf(buf);
-}
-
-
-/* This is completely bogus. */
-int pthread_attr_getschedparam(const pthread_attr_t *attr,
- struct sched_param *param)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("pthread_attr_getschedparam");
-# ifdef HAVE_SCHED_PRIORITY
- if (param) param->sched_priority = 0; /* who knows */
-# else
- if (param) param->__sched_priority = 0; /* who knows */
-# endif
- return 0;
-}
-
-int pthread_attr_setschedparam(pthread_attr_t *attr,
- const struct sched_param *param)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- ignored("pthread_attr_setschedparam");
- return 0;
-}
-
-int pthread_attr_destroy(pthread_attr_t *attr)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- ignored("pthread_attr_destroy");
- return 0;
-}
-
-/* These are no-ops, as with LinuxThreads. */
-int pthread_attr_setscope ( pthread_attr_t *attr, int scope )
-{
- ensure_valgrind("pthread_attr_setscope");
- if (scope == PTHREAD_SCOPE_SYSTEM)
- return 0;
- pthread_error("pthread_attr_setscope: "
- "invalid or unsupported scope");
- if (scope == PTHREAD_SCOPE_PROCESS)
- return ENOTSUP;
- return EINVAL;
-}
-
-int pthread_attr_getscope ( const pthread_attr_t *attr, int *scope )
-{
-   ensure_valgrind("pthread_attr_getscope");
- if (scope)
- *scope = PTHREAD_SCOPE_SYSTEM;
- return 0;
-}
-
-
-/* Pretty bogus. Avoid if possible. */
-int pthread_getattr_np (pthread_t thread, pthread_attr_t *attr)
-{
- int detached;
- size_t limit;
- ensure_valgrind("pthread_getattr_np");
- kludged("pthread_getattr_np");
- limit = VG_PTHREAD_STACK_SIZE - VG_AR_CLIENT_STACKBASE_REDZONE_SZB
- - 1000; /* paranoia */
- attr->__detachstate = PTHREAD_CREATE_JOINABLE;
- attr->__schedpolicy = SCHED_OTHER;
- attr->__schedparam.sched_priority = 0;
- attr->__inheritsched = PTHREAD_EXPLICIT_SCHED;
- attr->__scope = PTHREAD_SCOPE_SYSTEM;
- attr->__guardsize = VKI_BYTES_PER_PAGE;
- attr->__stackaddr = NULL;
- attr->__stackaddr_set = 0;
- attr->__stacksize = limit;
- VALGRIND_MAGIC_SEQUENCE(detached, (-1) /* default */,
- VG_USERREQ__SET_OR_GET_DETACH,
- 2 /* get */, thread, 0, 0);
- my_assert(detached == 0 || detached == 1);
- if (detached)
- attr->__detachstate = PTHREAD_CREATE_DETACHED;
- return 0;
-}
-
-
-/* Bogus ... */
-WEAK
-int pthread_attr_getstackaddr ( const pthread_attr_t * attr,
- void ** stackaddr )
-{
- ensure_valgrind("pthread_attr_getstackaddr");
- kludged("pthread_attr_getstackaddr");
- if (stackaddr)
- *stackaddr = NULL;
- return 0;
-}
-
-/* Not bogus (!) */
-WEAK
-int pthread_attr_getstacksize ( const pthread_attr_t * _attr,
- size_t * __stacksize )
-{
- size_t limit;
- ensure_valgrind("pthread_attr_getstacksize");
- limit = VG_PTHREAD_STACK_SIZE - VG_AR_CLIENT_STACKBASE_REDZONE_SZB
- - 1000; /* paranoia */
- if (__stacksize)
- *__stacksize = limit;
- return 0;
-}
-
-int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)
-{
- if (policy != SCHED_OTHER && policy != SCHED_FIFO && policy != SCHED_RR)
- return EINVAL;
- attr->__schedpolicy = policy;
- return 0;
-}
-
-int pthread_attr_getschedpolicy(const pthread_attr_t *attr, int *policy)
-{
- *policy = attr->__schedpolicy;
- return 0;
-}
-
-
-/* This is completely bogus. We reject all attempts to change it from
- VKI_BYTES_PER_PAGE. I don't have a clue what it's for so it seems
- safest to be paranoid. */
-WEAK
-int pthread_attr_setguardsize(pthread_attr_t *attr, size_t guardsize)
-{
- static int moans = N_MOANS;
-
- if (guardsize == VKI_BYTES_PER_PAGE)
- return 0;
-
- if (moans-- > 0)
- ignored("pthread_attr_setguardsize: ignoring guardsize != 4096");
-
- return 0;
-}
-
-/* A straight copy of the LinuxThreads code. */
-WEAK
-int pthread_attr_getguardsize(const pthread_attr_t *attr, size_t *guardsize)
-{
- *guardsize = attr->__guardsize;
- return 0;
-}
-
-/* Again, like LinuxThreads. */
-
-static int concurrency_current_level = 0;
-
-WEAK
-int pthread_setconcurrency(int new_level)
-{
- if (new_level < 0)
- return EINVAL;
- else {
- concurrency_current_level = new_level;
- return 0;
- }
-}
-
-WEAK
-int pthread_getconcurrency(void)
-{
- return concurrency_current_level;
-}
-
-
-
-/* ---------------------------------------------------
- Helper functions for running a thread
- and for clearing up afterwards.
- ------------------------------------------------ */
-
-/* All exiting threads eventually pass through here, bearing the
- return value, or PTHREAD_CANCELED, in ret_val. */
-static
-__attribute__((noreturn))
-void thread_exit_wrapper ( void* ret_val )
-{
- int detached, res;
- CleanupEntry cu;
- pthread_key_t key;
- void** specifics_ptr;
-
- /* Run this thread's cleanup handlers. */
- while (1) {
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__CLEANUP_POP,
- &cu, 0, 0, 0);
- if (res == -1) break; /* stack empty */
- my_assert(res == 0);
-      if (0) printf("running exit cleanup handler\n");
- cu.fn ( cu.arg );
- }
-
- /* Run this thread's key finalizers. Really this should be run
- PTHREAD_DESTRUCTOR_ITERATIONS times. */
- for (key = 0; key < VG_N_THREAD_KEYS; key++) {
- VALGRIND_MAGIC_SEQUENCE(res, (-2) /* default */,
- VG_USERREQ__GET_KEY_D_AND_S,
- key, &cu, 0, 0 );
- if (res == 0) {
- /* valid key */
- if (cu.fn && cu.arg)
- cu.fn /* destructor for key */
- ( cu.arg /* specific for key for this thread */ );
- continue;
- }
- my_assert(res == -1);
- }
-
- /* Free up my specifics space, if any. */
- VALGRIND_MAGIC_SEQUENCE(specifics_ptr, 3 /* default */,
- VG_USERREQ__PTHREAD_GETSPECIFIC_PTR,
- pthread_self(), 0, 0, 0);
- my_assert(specifics_ptr != (void**)3);
- my_assert(specifics_ptr != (void**)1); /* 1 means invalid thread */
- if (specifics_ptr != NULL)
- my_free(specifics_ptr);
-
- /* Decide on my final disposition. */
- VALGRIND_MAGIC_SEQUENCE(detached, (-1) /* default */,
- VG_USERREQ__SET_OR_GET_DETACH,
- 2 /* get */, pthread_self(), 0, 0);
- my_assert(detached == 0 || detached == 1);
-
- if (detached) {
- /* Detached; I just quit right now. */
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__QUIT, 0, 0, 0, 0);
- } else {
- /* Not detached; so I wait for a joiner. */
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__WAIT_JOINER, ret_val, 0, 0, 0);
- }
- /* NOTREACHED */
- barf("thread_exit_wrapper: still alive?!");
-}
-
-
-/* This function is a wrapper function for running a thread. It runs
- the root function specified in pthread_create, and then, should the
- root function return a value, it arranges to run the thread's
- cleanup handlers and exit correctly. */
-
-/* Struct used to convey info from pthread_create to thread_wrapper.
- Must be careful not to pass to the child thread any pointers to
- objects which might be on the parent's stack. */
-typedef
- struct {
- int attr__detachstate;
- void* (*root_fn) ( void* );
- void* arg;
- }
- NewThreadInfo;
-
-
-/* This is passed to the VG_USERREQ__APPLY_IN_NEW_THREAD and so must
- not return. Note that this runs in the new thread, not the
- parent. */
-static
-__attribute__((noreturn))
-void thread_wrapper ( NewThreadInfo* info )
-{
- int attr__detachstate;
- void* (*root_fn) ( void* );
- void* arg;
- void* ret_val;
-
- attr__detachstate = info->attr__detachstate;
- root_fn = info->root_fn;
- arg = info->arg;
-
- /* Free up the arg block that pthread_create malloced. */
- my_free(info);
-
- /* Minimally observe the attributes supplied. */
- if (attr__detachstate != PTHREAD_CREATE_DETACHED
- && attr__detachstate != PTHREAD_CREATE_JOINABLE)
- pthread_error("thread_wrapper: invalid attr->__detachstate");
- if (attr__detachstate == PTHREAD_CREATE_DETACHED)
- pthread_detach(pthread_self());
-
-# ifdef GLIBC_2_3
- /* Set this thread's locale to the global (default) locale. A hack
-      in support of glibc-2.3. This does the biz for all new
-      threads; the root thread is handled by a horrible hack in
- init_libc_tsd_keys() below.
- */
- __uselocale(LC_GLOBAL_LOCALE);
-# endif
-
- /* The root function might not return. But if it does we simply
- move along to thread_exit_wrapper. All other ways out for the
- thread (cancellation, or calling pthread_exit) lead there
- too. */
- ret_val = root_fn(arg);
- thread_exit_wrapper(ret_val);
- /* NOTREACHED */
-}
-
-
-/* ---------------------------------------------------
- THREADs
- ------------------------------------------------ */
-
-static void __valgrind_pthread_yield ( void )
-{
- int res;
- ensure_valgrind("pthread_yield");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_YIELD, 0, 0, 0, 0);
-}
-
-WEAK
-int pthread_yield ( void )
-{
- __valgrind_pthread_yield();
- return 0;
-}
-
-
-int pthread_equal(pthread_t thread1, pthread_t thread2)
-{
- return thread1 == thread2 ? 1 : 0;
-}
-
-
-/* Bundle up the args into a malloc'd block and create a new thread
- consisting of thread_wrapper() applied to said malloc'd block. */
-int
-pthread_create (pthread_t *__restrict __thredd,
- __const pthread_attr_t *__restrict __attr,
- void *(*__start_routine) (void *),
- void *__restrict __arg)
-{
- int tid_child;
- NewThreadInfo* info;
-
- ensure_valgrind("pthread_create");
-
- /* make sure the tsd keys, and hence locale info, are initialised
- before we get into complications making new threads. */
- init_libc_tsd_keys();
-
- /* Allocate space for the arg block. thread_wrapper will free
- it. */
- info = my_malloc(sizeof(NewThreadInfo));
- my_assert(info != NULL);
-
- if (__attr)
- info->attr__detachstate = __attr->__detachstate;
- else
- info->attr__detachstate = PTHREAD_CREATE_JOINABLE;
-
- info->root_fn = __start_routine;
- info->arg = __arg;
- VALGRIND_MAGIC_SEQUENCE(tid_child, VG_INVALID_THREADID /* default */,
- VG_USERREQ__APPLY_IN_NEW_THREAD,
- &thread_wrapper, info, 0, 0);
- my_assert(tid_child != VG_INVALID_THREADID);
-
- if (__thredd)
- *__thredd = tid_child;
- return 0; /* success */
-}
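The NewThreadInfo dance above is the standard safe-handoff pattern: bundle the start arguments into a heap block so the child never holds pointers into the parent's stack frame, and let the child free the block. A sketch of the same pattern from the caller's side, using the portable pthreads API (names here are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Arg block lives on the heap, never on the parent's stack. */
typedef struct { int input; } demo_arg;

static void* demo_root(void* p)
{
    demo_arg* a = p;
    long result = a->input * 2;
    free(a);                      /* child owns and frees the block */
    return (void*)result;
}

static long demo_spawn(int input)
{
    pthread_t th;
    void* ret = NULL;
    demo_arg* a = malloc(sizeof *a);
    assert(a != NULL);
    a->input = input;
    assert(pthread_create(&th, NULL, demo_root, a) == 0);
    assert(pthread_join(th, &ret) == 0);
    return (long)ret;
}
```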
-
-
-int
-pthread_join (pthread_t __th, void **__thread_return)
-{
- int res;
- ensure_valgrind("pthread_join");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_JOIN,
- __th, __thread_return, 0, 0);
- return res;
-}
-
-
-void pthread_exit(void *retval)
-{
- ensure_valgrind("pthread_exit");
- /* Simple! */
- thread_exit_wrapper(retval);
-}
-
-
-int pthread_detach(pthread_t th)
-{
- int res;
- ensure_valgrind("pthread_detach");
- /* First we enquire as to the current detach state. */
- VALGRIND_MAGIC_SEQUENCE(res, (-2) /* default */,
- VG_USERREQ__SET_OR_GET_DETACH,
- 2 /* get */, th, 0, 0);
- if (res == -1) {
- /* not found */
- pthread_error("pthread_detach: "
- "invalid target thread");
- return ESRCH;
- }
- if (res == 1) {
- /* already detached */
- pthread_error("pthread_detach: "
- "target thread is already detached");
- return EINVAL;
- }
- if (res == 0) {
- VALGRIND_MAGIC_SEQUENCE(res, (-2) /* default */,
- VG_USERREQ__SET_OR_GET_DETACH,
- 1 /* set */, th, 0, 0);
- my_assert(res == 0);
- return 0;
- }
- barf("pthread_detach");
-}
-
-
-/* ---------------------------------------------------
- CLEANUP STACKS
- ------------------------------------------------ */
-
-void _pthread_cleanup_push (struct _pthread_cleanup_buffer *__buffer,
- void (*__routine) (void *),
- void *__arg)
-{
- int res;
- CleanupEntry cu;
- ensure_valgrind("_pthread_cleanup_push");
- cu.fn = __routine;
- cu.arg = __arg;
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__CLEANUP_PUSH,
- &cu, 0, 0, 0);
- my_assert(res == 0);
-}
-
-
-void _pthread_cleanup_push_defer (struct _pthread_cleanup_buffer *__buffer,
- void (*__routine) (void *),
- void *__arg)
-{
- /* As _pthread_cleanup_push, but first save the thread's original
- cancellation type in __buffer and set it to Deferred. */
- int orig_ctype;
- ensure_valgrind("_pthread_cleanup_push_defer");
- /* Set to Deferred, and put the old cancellation type in res. */
- my_assert(-1 != PTHREAD_CANCEL_DEFERRED);
- my_assert(-1 != PTHREAD_CANCEL_ASYNCHRONOUS);
- my_assert(sizeof(struct _pthread_cleanup_buffer) >= sizeof(int));
- VALGRIND_MAGIC_SEQUENCE(orig_ctype, (-1) /* default */,
- VG_USERREQ__SET_CANCELTYPE,
- PTHREAD_CANCEL_DEFERRED, 0, 0, 0);
- my_assert(orig_ctype != -1);
- *((int*)(__buffer)) = orig_ctype;
- /* Now push the cleanup. */
- _pthread_cleanup_push(NULL, __routine, __arg);
-}
-
-
-void _pthread_cleanup_pop (struct _pthread_cleanup_buffer *__buffer,
- int __execute)
-{
- int res;
- CleanupEntry cu;
-   ensure_valgrind("_pthread_cleanup_pop");
- cu.fn = cu.arg = NULL; /* paranoia */
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__CLEANUP_POP,
- &cu, 0, 0, 0);
- if (res == 0) {
- /* pop succeeded */
- if (__execute) {
- cu.fn ( cu.arg );
- }
- return;
- }
- if (res == -1) {
- /* stack underflow */
- return;
- }
- barf("_pthread_cleanup_pop");
-}
-
-
-void _pthread_cleanup_pop_restore (struct _pthread_cleanup_buffer *__buffer,
- int __execute)
-{
- int orig_ctype, fake_ctype;
- /* As _pthread_cleanup_pop, but after popping/running the handler,
- restore the thread's original cancellation type from the first
- word of __buffer. */
- _pthread_cleanup_pop(NULL, __execute);
- orig_ctype = *((int*)(__buffer));
- my_assert(orig_ctype == PTHREAD_CANCEL_DEFERRED
- || orig_ctype == PTHREAD_CANCEL_ASYNCHRONOUS);
- my_assert(-1 != PTHREAD_CANCEL_DEFERRED);
- my_assert(-1 != PTHREAD_CANCEL_ASYNCHRONOUS);
- my_assert(sizeof(struct _pthread_cleanup_buffer) >= sizeof(int));
- VALGRIND_MAGIC_SEQUENCE(fake_ctype, (-1) /* default */,
- VG_USERREQ__SET_CANCELTYPE,
- orig_ctype, 0, 0, 0);
- my_assert(fake_ctype == PTHREAD_CANCEL_DEFERRED);
-}
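The semantics the push/pop pairs above implement can be seen from client code: a handler registered with pthread_cleanup_push runs when popped with a nonzero execute argument. A small sketch using the standard macros (helper names are illustrative):

```c
#include <assert.h>
#include <pthread.h>

static int cleanup_ran = 0;

static void demo_handler(void* arg)
{
    cleanup_ran += *(int*)arg;
}

static int demo_cleanup(void)
{
    int delta = 1;
    /* push/pop must form a lexically balanced pair in one block */
    pthread_cleanup_push(demo_handler, &delta);
    pthread_cleanup_pop(1 /* execute the handler */);
    return cleanup_ran;
}
```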
-
-
-/* ---------------------------------------------------
- MUTEX ATTRIBUTES
- ------------------------------------------------ */
-
-int __pthread_mutexattr_init(pthread_mutexattr_t *attr)
-{
- attr->__mutexkind = PTHREAD_MUTEX_ERRORCHECK_NP;
- return 0;
-}
-
-int __pthread_mutexattr_settype(pthread_mutexattr_t *attr, int type)
-{
- switch (type) {
-# ifndef GLIBC_2_1
- case PTHREAD_MUTEX_TIMED_NP:
- case PTHREAD_MUTEX_ADAPTIVE_NP:
-# endif
-# ifdef GLIBC_2_1
- case PTHREAD_MUTEX_FAST_NP:
-# endif
- case PTHREAD_MUTEX_RECURSIVE_NP:
- case PTHREAD_MUTEX_ERRORCHECK_NP:
- attr->__mutexkind = type;
- return 0;
- default:
- pthread_error("pthread_mutexattr_settype: "
- "invalid type");
- return EINVAL;
- }
-}
-
-int __pthread_mutexattr_destroy(pthread_mutexattr_t *attr)
-{
- return 0;
-}
-
-int __pthread_mutexattr_setpshared ( pthread_mutexattr_t* attr, int pshared)
-{
- if (pshared != PTHREAD_PROCESS_PRIVATE && pshared != PTHREAD_PROCESS_SHARED)
- return EINVAL;
-
-   /* For now it is not possible to share a mutex between processes. */
- if (pshared != PTHREAD_PROCESS_PRIVATE)
- return ENOSYS;
-
- return 0;
-}
-
-
-/* ---------------------------------------------------
- MUTEXes
- ------------------------------------------------ */
-
-int __pthread_mutex_init(pthread_mutex_t *mutex,
- const pthread_mutexattr_t *mutexattr)
-{
- mutex->__m_count = 0;
- mutex->__m_owner = (_pthread_descr)VG_INVALID_THREADID;
- mutex->__m_kind = PTHREAD_MUTEX_ERRORCHECK_NP;
- if (mutexattr)
- mutex->__m_kind = mutexattr->__mutexkind;
- return 0;
-}
-
-
-int __pthread_mutex_lock(pthread_mutex_t *mutex)
-{
- int res;
-
- if (RUNNING_ON_VALGRIND) {
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_MUTEX_LOCK,
- mutex, 0, 0, 0);
- return res;
- } else {
- /* Play at locking */
- if (0)
- kludged("prehistoric lock");
- mutex->__m_owner = (_pthread_descr)1;
- mutex->__m_count = 1;
- mutex->__m_kind |= VG_PTHREAD_PREHISTORY;
- return 0; /* success */
- }
-}
-
-
-int __pthread_mutex_trylock(pthread_mutex_t *mutex)
-{
- int res;
-
- if (RUNNING_ON_VALGRIND) {
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_MUTEX_TRYLOCK,
- mutex, 0, 0, 0);
- return res;
- } else {
- /* Play at locking */
- if (0)
- kludged("prehistoric trylock");
- mutex->__m_owner = (_pthread_descr)1;
- mutex->__m_count = 1;
- mutex->__m_kind |= VG_PTHREAD_PREHISTORY;
- return 0; /* success */
- }
-}
-
-
-int __pthread_mutex_unlock(pthread_mutex_t *mutex)
-{
- int res;
-
- if (RUNNING_ON_VALGRIND) {
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_MUTEX_UNLOCK,
- mutex, 0, 0, 0);
- return res;
- } else {
- /* Play at locking */
- if (0)
- kludged("prehistoric unlock");
- mutex->__m_owner = 0;
- mutex->__m_count = 0;
- mutex->__m_kind &= ~VG_PTHREAD_PREHISTORY;
- return 0; /* success */
- }
-}
-
-
-int __pthread_mutex_destroy(pthread_mutex_t *mutex)
-{
- /* Valgrind doesn't hold any resources on behalf of the mutex, so no
- need to involve it. */
- if (mutex->__m_count > 0) {
- /* Oh, the horror. glibc's internal use of pthreads "knows"
- that destroying a lock does an implicit unlock. Make it
- explicit. */
- __pthread_mutex_unlock(mutex);
- pthread_error("pthread_mutex_destroy: "
- "mutex is still in use");
- return EBUSY;
- }
- mutex->__m_count = 0;
- mutex->__m_owner = (_pthread_descr)VG_INVALID_THREADID;
- mutex->__m_kind = PTHREAD_MUTEX_ERRORCHECK_NP;
- return 0;
-}
-
-
-/* ---------------------------------------------------
- CONDITION VARIABLES
- ------------------------------------------------ */
-
-/* LinuxThreads supports no attributes for conditions. Hence ... */
-
-int pthread_condattr_init(pthread_condattr_t *attr)
-{
- return 0;
-}
-
-int pthread_condattr_destroy(pthread_condattr_t *attr)
-{
- return 0;
-}
-
-int pthread_cond_init( pthread_cond_t *cond,
- const pthread_condattr_t *cond_attr)
-{
- cond->__c_waiting = (_pthread_descr)VG_INVALID_THREADID;
- return 0;
-}
-
-int pthread_cond_destroy(pthread_cond_t *cond)
-{
- /* should check that no threads are waiting on this CV */
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("pthread_cond_destroy");
- return 0;
-}
-
-/* ---------------------------------------------------
- SCHEDULING
- ------------------------------------------------ */
-
-/* This is completely bogus. */
-int pthread_getschedparam(pthread_t target_thread,
- int *policy,
- struct sched_param *param)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("pthread_getschedparam");
- if (policy) *policy = SCHED_OTHER;
-# ifdef HAVE_SCHED_PRIORITY
- if (param) param->sched_priority = 0; /* who knows */
-# else
- if (param) param->__sched_priority = 0; /* who knows */
-# endif
- return 0;
-}
-
-int pthread_setschedparam(pthread_t target_thread,
- int policy,
- const struct sched_param *param)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- ignored("pthread_setschedparam");
- return 0;
-}
-
-int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
-{
- int res;
- ensure_valgrind("pthread_cond_wait");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_COND_WAIT,
- cond, mutex, 0, 0);
- return res;
-}
-
-int pthread_cond_timedwait ( pthread_cond_t *cond,
- pthread_mutex_t *mutex,
- const struct timespec *abstime )
-{
- int res;
- unsigned int ms_now, ms_end;
- struct timeval timeval_now;
- unsigned long long int ull_ms_now_after_1970;
- unsigned long long int ull_ms_end_after_1970;
-
- ensure_valgrind("pthread_cond_timedwait");
- VALGRIND_MAGIC_SEQUENCE(ms_now, 0xFFFFFFFF /* default */,
- VG_USERREQ__READ_MILLISECOND_TIMER,
- 0, 0, 0, 0);
- my_assert(ms_now != 0xFFFFFFFF);
- res = gettimeofday(&timeval_now, NULL);
- my_assert(res == 0);
-
- ull_ms_now_after_1970
- = 1000ULL * ((unsigned long long int)(timeval_now.tv_sec))
-        + ((unsigned long long int)(timeval_now.tv_usec / 1000));
- ull_ms_end_after_1970
- = 1000ULL * ((unsigned long long int)(abstime->tv_sec))
- + ((unsigned long long int)(abstime->tv_nsec / 1000000));
- if (ull_ms_end_after_1970 < ull_ms_now_after_1970)
- ull_ms_end_after_1970 = ull_ms_now_after_1970;
- ms_end
- = ms_now + (unsigned int)(ull_ms_end_after_1970 - ull_ms_now_after_1970);
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_COND_TIMEDWAIT,
- cond, mutex, ms_end, 0);
- return res;
-}
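The deadline arithmetic in pthread_cond_timedwait above can be isolated as a pure function: convert an absolute (sec, nsec) deadline and a current (sec, usec) time into a millisecond delta, clamped at zero if the deadline has already passed. Note that microseconds convert to milliseconds by dividing by 1000, and nanoseconds by 1000000. The helper name is illustrative:

```c
#include <assert.h>

static unsigned int ms_until(long now_sec, long now_usec,
                             long end_sec, long end_nsec)
{
    unsigned long long now_ms = 1000ULL * (unsigned long long)now_sec
                              + (unsigned long long)(now_usec / 1000);
    unsigned long long end_ms = 1000ULL * (unsigned long long)end_sec
                              + (unsigned long long)(end_nsec / 1000000);
    if (end_ms < now_ms)
        end_ms = now_ms;          /* deadline already passed: wait 0 ms */
    return (unsigned int)(end_ms - now_ms);
}
```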
-
-
-int pthread_cond_signal(pthread_cond_t *cond)
-{
- int res;
- ensure_valgrind("pthread_cond_signal");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_COND_SIGNAL,
- cond, 0, 0, 0);
- return res;
-}
-
-int pthread_cond_broadcast(pthread_cond_t *cond)
-{
- int res;
- ensure_valgrind("pthread_cond_broadcast");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_COND_BROADCAST,
- cond, 0, 0, 0);
- return res;
-}
-
-
-/* ---------------------------------------------------
- CANCELLATION
- ------------------------------------------------ */
-
-int pthread_setcancelstate(int state, int *oldstate)
-{
- int res;
- ensure_valgrind("pthread_setcancelstate");
- if (state != PTHREAD_CANCEL_ENABLE
- && state != PTHREAD_CANCEL_DISABLE) {
- pthread_error("pthread_setcancelstate: "
- "invalid state");
- return EINVAL;
- }
- my_assert(-1 != PTHREAD_CANCEL_ENABLE);
- my_assert(-1 != PTHREAD_CANCEL_DISABLE);
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__SET_CANCELSTATE,
- state, 0, 0, 0);
- my_assert(res != -1);
- if (oldstate)
- *oldstate = res;
- return 0;
-}
-
-int pthread_setcanceltype(int type, int *oldtype)
-{
- int res;
- ensure_valgrind("pthread_setcanceltype");
- if (type != PTHREAD_CANCEL_DEFERRED
- && type != PTHREAD_CANCEL_ASYNCHRONOUS) {
- pthread_error("pthread_setcanceltype: "
- "invalid type");
- return EINVAL;
- }
- my_assert(-1 != PTHREAD_CANCEL_DEFERRED);
- my_assert(-1 != PTHREAD_CANCEL_ASYNCHRONOUS);
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__SET_CANCELTYPE,
- type, 0, 0, 0);
- my_assert(res != -1);
- if (oldtype)
- *oldtype = res;
- return 0;
-}
-
-int pthread_cancel(pthread_t thread)
-{
- int res;
- ensure_valgrind("pthread_cancel");
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__SET_CANCELPEND,
- thread, &thread_exit_wrapper, 0, 0);
- my_assert(res != -1);
- return res;
-}
-
-static __inline__
-void __my_pthread_testcancel(void)
-{
- int res;
- ensure_valgrind("__my_pthread_testcancel");
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__TESTCANCEL,
- 0, 0, 0, 0);
- my_assert(res == 0);
-}
-
-void pthread_testcancel ( void )
-{
- __my_pthread_testcancel();
-}
-
-
-/* Not really sure what this is for. I suspect for doing the POSIX
- requirements for fork() and exec(). We do this internally anyway
- whenever those syscalls are observed, so this could be superfluous,
- but hey ...
-*/
-void __pthread_kill_other_threads_np ( void )
-{
- int res;
- ensure_valgrind("__pthread_kill_other_threads_np");
- VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
- VG_USERREQ__NUKE_OTHER_THREADS,
- 0, 0, 0, 0);
- my_assert(res == 0);
-}
-
-
-/* ---------------------------------------------------
- SIGNALS
- ------------------------------------------------ */
-
-#include <signal.h>
-
-int pthread_sigmask(int how, const sigset_t *newmask,
- sigset_t *oldmask)
-{
- int res;
-
- /* A bit subtle, because the scheduler expects newmask and oldmask
- to be vki_sigset_t* rather than sigset_t*, and the two are
- different. Fortunately the first 64 bits of a sigset_t are
- exactly a vki_sigset_t, so we just pass the pointers through
- unmodified. Haaaack!
-
-      Also mash the how value so that the SIG_ constants from glibc
-      are translated to VKI_ constants, so that the former do not
-      have to be included into vg_scheduler.c.  */
-
- ensure_valgrind("pthread_sigmask");
-
- switch (how) {
- case SIG_SETMASK: how = VKI_SIG_SETMASK; break;
- case SIG_BLOCK: how = VKI_SIG_BLOCK; break;
- case SIG_UNBLOCK: how = VKI_SIG_UNBLOCK; break;
- default: pthread_error("pthread_sigmask: invalid how");
- return EINVAL;
- }
-
- /* Crude check */
- if (newmask == NULL)
- return EFAULT;
-
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_SIGMASK,
- how, newmask, oldmask, 0);
-
- /* The scheduler tells us of any memory violations. */
- return res == 0 ? 0 : EFAULT;
-}
-
-
-int sigwait ( const sigset_t* set, int* sig )
-{
- int res;
- ensure_valgrind("sigwait");
- /* As with pthread_sigmask we deliberately confuse sigset_t with
- vki_ksigset_t. */
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__SIGWAIT,
- set, sig, 0, 0);
- return res;
-}
-
-
-int pthread_kill(pthread_t thread, int signo)
-{
- int res;
- ensure_valgrind("pthread_kill");
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_KILL,
- thread, signo, 0, 0);
- return res;
-}
-
-
-/* Copied verbatim from Linuxthreads */
-/* Redefine raise() to send signal to calling thread only,
- as per POSIX 1003.1c */
-int raise (int sig)
-{
- int retcode = pthread_kill(pthread_self(), sig);
- if (retcode == 0) {
- return 0;
- } else {
- *(__errno_location()) = retcode;
- return -1;
- }
-}
-
-
-int pause ( void )
-{
- unsigned int n_orig, n_now;
- struct vki_timespec nanosleep_interval;
- ensure_valgrind("pause");
-
- /* This is surely a cancellation point. */
- __my_pthread_testcancel();
-
- VALGRIND_MAGIC_SEQUENCE(n_orig, 0xFFFFFFFF /* default */,
- VG_USERREQ__GET_N_SIGS_RETURNED,
- 0, 0, 0, 0);
- my_assert(n_orig != 0xFFFFFFFF);
-
- while (1) {
- VALGRIND_MAGIC_SEQUENCE(n_now, 0xFFFFFFFF /* default */,
- VG_USERREQ__GET_N_SIGS_RETURNED,
- 0, 0, 0, 0);
- my_assert(n_now != 0xFFFFFFFF);
- my_assert(n_now >= n_orig);
- if (n_now != n_orig) break;
-
- nanosleep_interval.tv_sec = 0;
- nanosleep_interval.tv_nsec = 12 * 1000 * 1000; /* 12 milliseconds */
- /* It's critical here that valgrind's nanosleep implementation
- is nonblocking. */
- (void)my_do_syscall2(__NR_nanosleep,
- (int)(&nanosleep_interval), (int)NULL);
- }
-
- *(__errno_location()) = EINTR;
- return -1;
-}
-
-
-/* ---------------------------------------------------
- THREAD-SPECIFICs
- ------------------------------------------------ */
-
-static
-int key_is_valid (pthread_key_t key)
-{
- int res;
- VALGRIND_MAGIC_SEQUENCE(res, 2 /* default */,
- VG_USERREQ__PTHREAD_KEY_VALIDATE,
- key, 0, 0, 0);
- my_assert(res != 2);
- return res;
-}
-
-
-/* Returns NULL if thread is invalid. Otherwise, if the thread
-   already has a specifics area, return that. Otherwise allocate
-   one for it. */
-static
-void** get_or_allocate_specifics_ptr ( pthread_t thread )
-{
- int res, i;
- void** specifics_ptr;
- ensure_valgrind("get_or_allocate_specifics_ptr");
-
- /* Returns zero if the thread has no specific_ptr. One if thread
- is invalid. Otherwise, the specific_ptr value. This is
- allocated with my_malloc and so is aligned and cannot be
- confused with 1 or 3. */
- VALGRIND_MAGIC_SEQUENCE(specifics_ptr, 3 /* default */,
- VG_USERREQ__PTHREAD_GETSPECIFIC_PTR,
- thread, 0, 0, 0);
- my_assert(specifics_ptr != (void**)3);
-
- if (specifics_ptr == (void**)1)
- return NULL; /* invalid thread */
-
- if (specifics_ptr != NULL)
- return specifics_ptr; /* already has a specifics ptr. */
-
- /* None yet ... allocate a new one. Should never fail. */
- specifics_ptr = my_malloc( VG_N_THREAD_KEYS * sizeof(void*) );
- my_assert(specifics_ptr != NULL);
-
- VALGRIND_MAGIC_SEQUENCE(res, -1 /* default */,
- VG_USERREQ__PTHREAD_SETSPECIFIC_PTR,
- specifics_ptr, 0, 0, 0);
- my_assert(res == 0);
-
- /* POSIX sez: "Upon thread creation, the value NULL shall be
- associated with all defined keys in the new thread." This
- allocation is in effect a delayed allocation of the specific
- data for a thread, at its first-use. Hence we initialise it
- here. */
- for (i = 0; i < VG_N_THREAD_KEYS; i++) {
- specifics_ptr[i] = NULL;
- }
-
- return specifics_ptr;
-}
-
-
-int __pthread_key_create(pthread_key_t *key,
- void (*destr_function) (void *))
-{
- void** specifics_ptr;
- int res, i;
- ensure_valgrind("pthread_key_create");
-
- /* This writes *key if successful. It should never fail. */
- VALGRIND_MAGIC_SEQUENCE(res, 1 /* default */,
- VG_USERREQ__PTHREAD_KEY_CREATE,
- key, destr_function, 0, 0);
- my_assert(res == 0);
-
- /* POSIX sez: "Upon key creation, the value NULL shall be
- associated with the new key in all active threads." */
- for (i = 0; i < VG_N_THREADS; i++) {
- specifics_ptr = get_or_allocate_specifics_ptr(i);
- /* we get NULL if i is an invalid thread. */
- if (specifics_ptr != NULL)
- specifics_ptr[*key] = NULL;
- }
-
- return res;
-}
-
-int pthread_key_delete(pthread_key_t key)
-{
- int res;
-   ensure_valgrind("pthread_key_delete");
- if (!key_is_valid(key))
- return EINVAL;
- VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
- VG_USERREQ__PTHREAD_KEY_DELETE,
- key, 0, 0, 0);
- my_assert(res == 0);
- return 0;
-}
-
-int __pthread_setspecific(pthread_key_t key, const void *pointer)
-{
- void** specifics_ptr;
- ensure_valgrind("pthread_setspecific");
-
- if (!key_is_valid(key))
- return EINVAL;
-
- specifics_ptr = get_or_allocate_specifics_ptr(pthread_self());
- specifics_ptr[key] = (void*)pointer;
- return 0;
-}
-
-void * __pthread_getspecific(pthread_key_t key)
-{
- void** specifics_ptr;
- ensure_valgrind("pthread_getspecific");
-
- if (!key_is_valid(key))
- return NULL;
-
- specifics_ptr = get_or_allocate_specifics_ptr(pthread_self());
- return specifics_ptr[key];
-}
-
-
-#ifdef GLIBC_2_3
-static
-void ** __pthread_getspecific_addr(pthread_key_t key)
-{
- void** specifics_ptr;
- ensure_valgrind("pthread_getspecific_addr");
-
- if (!key_is_valid(key))
- return NULL;
-
- specifics_ptr = get_or_allocate_specifics_ptr(pthread_self());
- return &(specifics_ptr[key]);
-}
-#endif
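The getspecific/setspecific pair above implements the standard POSIX thread-specific-data contract: a freshly created key reads back as NULL until the thread stores something. A minimal sketch of that round trip against the standard API (the `demo_` name is invented for illustration; link with -lpthread):

```c
#include <assert.h>
#include <pthread.h>

/* Per POSIX, a freshly created key reads back as NULL in every thread
   until pthread_setspecific stores a value for that thread. */
void* demo_key_roundtrip(void)
{
    pthread_key_t key;
    static int value = 42;

    assert(pthread_key_create(&key, NULL) == 0);
    assert(pthread_getspecific(key) == NULL);      /* NULL upon creation */
    assert(pthread_setspecific(key, &value) == 0);
    void* got = pthread_getspecific(key);          /* reads back &value */
    pthread_key_delete(key);
    return got;
}
```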
-
-
-/* ---------------------------------------------------
- ONCEry
- ------------------------------------------------ */
-
-/* This protects reads and writes of the once_control variable
- supplied. It is never held whilst any particular initialiser is
- running. */
-static pthread_mutex_t once_masterlock = PTHREAD_MUTEX_INITIALIZER;
-
-/* Initialiser needs to be run. */
-#define P_ONCE_NOT_DONE ((PTHREAD_ONCE_INIT) + 0)
-
-/* Initialiser currently running. */
-#define P_ONCE_RUNNING ((PTHREAD_ONCE_INIT) + 1)
-
-/* Initialiser has completed. */
-#define P_ONCE_COMPLETED ((PTHREAD_ONCE_INIT) + 2)
-
-int __pthread_once ( pthread_once_t *once_control,
- void (*init_routine) (void) )
-{
- int res;
- int done;
- ensure_valgrind("pthread_once");
-
-# define TAKE_LOCK \
- res = __pthread_mutex_lock(&once_masterlock); \
- my_assert(res == 0);
-
-# define RELEASE_LOCK \
- res = __pthread_mutex_unlock(&once_masterlock); \
- my_assert(res == 0);
-
- /* Grab the lock transiently, so we can safely see what state this
- once_control is in. */
-
- TAKE_LOCK;
-
- switch (*once_control) {
-
- case P_ONCE_NOT_DONE:
- /* Not started. Change state to indicate running, drop the
- lock and run. */
- *once_control = P_ONCE_RUNNING;
- RELEASE_LOCK;
- init_routine();
- /* re-take the lock, and set state to indicate done. */
- TAKE_LOCK;
- *once_control = P_ONCE_COMPLETED;
- RELEASE_LOCK;
- break;
-
- case P_ONCE_RUNNING:
- /* This is the tricky case. The initialiser is running in
- some other thread, but we have to delay this thread till
- the other one completes. So we sort-of busy wait. In
- fact it makes sense to yield now, because what we want to
- happen is for the thread running the initialiser to
- complete ASAP. */
- RELEASE_LOCK;
- done = 0;
- while (1) {
- /* Let others run for a while. */
- __valgrind_pthread_yield();
- /* Grab the lock and see if we're done waiting. */
- TAKE_LOCK;
- if (*once_control == P_ONCE_COMPLETED)
- done = 1;
- RELEASE_LOCK;
- if (done)
- break;
- }
- break;
-
- case P_ONCE_COMPLETED:
- default:
- /* Easy. It's already done. Just drop the lock. */
- RELEASE_LOCK;
- break;
- }
-
- return 0;
-
-# undef TAKE_LOCK
-# undef RELEASE_LOCK
-}
-
-#undef P_ONCE_NOT_DONE
-#undef P_ONCE_RUNNING
-#undef P_ONCE_COMPLETED
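The NOT_DONE/RUNNING/COMPLETED state machine above exists to uphold the pthread_once guarantee: however many calls (or threads) race past, the initialiser runs exactly once. A minimal single-process sketch of that guarantee against the standard API (the `demo_` names are invented; link with -lpthread):

```c
#include <assert.h>
#include <pthread.h>

static pthread_once_t demo_once = PTHREAD_ONCE_INIT;
static int demo_init_count = 0;   /* how many times the initialiser ran */

static void demo_init(void)
{
    demo_init_count++;
}

/* Call pthread_once repeatedly; the initialiser must run exactly once. */
int demo_run_once_three_times(void)
{
    pthread_once(&demo_once, demo_init);
    pthread_once(&demo_once, demo_init);
    pthread_once(&demo_once, demo_init);
    return demo_init_count;
}
```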
-
-
-/* ---------------------------------------------------
- MISC
- ------------------------------------------------ */
-
-static pthread_mutex_t pthread_atfork_lock
- = PTHREAD_MUTEX_INITIALIZER;
-
-int __pthread_atfork ( void (*prepare)(void),
- void (*parent)(void),
- void (*child)(void) )
-{
- int n, res;
- ForkHandlerEntry entry;
-
- ensure_valgrind("pthread_atfork");
- __pthread_mutex_lock(&pthread_atfork_lock);
-
- /* Fetch old counter */
- VALGRIND_MAGIC_SEQUENCE(n, -2 /* default */,
- VG_USERREQ__GET_FHSTACK_USED,
- 0, 0, 0, 0);
- my_assert(n >= 0 && n < VG_N_FORKHANDLERSTACK);
- if (n == VG_N_FORKHANDLERSTACK-1)
- barf("pthread_atfork: VG_N_FORKHANDLERSTACK is too low; "
- "increase and recompile");
-
- /* Add entry */
-   entry.prepare = prepare;
-   entry.parent = parent;
-   entry.child = child;
- VALGRIND_MAGIC_SEQUENCE(res, -2 /* default */,
- VG_USERREQ__SET_FHSTACK_ENTRY,
- n, &entry, 0, 0);
- my_assert(res == 0);
-
- /* Bump counter */
- VALGRIND_MAGIC_SEQUENCE(res, -2 /* default */,
- VG_USERREQ__SET_FHSTACK_USED,
- n+1, 0, 0, 0);
- my_assert(res == 0);
-
- __pthread_mutex_unlock(&pthread_atfork_lock);
- return 0;
-}
-
-
-#ifdef GLIBC_2_3
-/* This seems to be a hook which appeared in glibc-2.3.2. */
-int __register_atfork ( void (*prepare)(void),
- void (*parent)(void),
- void (*child)(void) )
-{
- return __pthread_atfork(prepare,parent,child);
-}
-#endif
-
-WEAK
-void __pthread_initialize ( void )
-{
- ensure_valgrind("__pthread_initialize");
-}
-
-
-/* ---------------------------------------------------
- LIBRARY-PRIVATE THREAD SPECIFIC STATE
- ------------------------------------------------ */
-
-#include <resolv.h>
-static int thread_specific_errno[VG_N_THREADS];
-static int thread_specific_h_errno[VG_N_THREADS];
-static struct __res_state
- thread_specific_res_state[VG_N_THREADS];
-
-#undef errno
-extern int errno;
-int* __errno_location ( void )
-{
- int tid;
- /* ensure_valgrind("__errno_location"); */
- VALGRIND_MAGIC_SEQUENCE(tid, 1 /* default */,
- VG_USERREQ__PTHREAD_GET_THREADID,
- 0, 0, 0, 0);
- /* 'cos I'm paranoid ... */
- if (tid < 1 || tid >= VG_N_THREADS)
- barf("__errno_location: invalid ThreadId");
- if (tid == 1)
- return &errno;
- return & thread_specific_errno[tid];
-}
-
-#undef h_errno
-extern int h_errno;
-int* __h_errno_location ( void )
-{
- int tid;
- /* ensure_valgrind("__h_errno_location"); */
- VALGRIND_MAGIC_SEQUENCE(tid, 1 /* default */,
- VG_USERREQ__PTHREAD_GET_THREADID,
- 0, 0, 0, 0);
- /* 'cos I'm paranoid ... */
- if (tid < 1 || tid >= VG_N_THREADS)
- barf("__h_errno_location: invalid ThreadId");
- if (tid == 1)
- return &h_errno;
- return & thread_specific_h_errno[tid];
-}
-
-
-#undef _res
-extern struct __res_state _res;
-struct __res_state* __res_state ( void )
-{
- int tid;
- /* ensure_valgrind("__res_state"); */
- VALGRIND_MAGIC_SEQUENCE(tid, 1 /* default */,
- VG_USERREQ__PTHREAD_GET_THREADID,
- 0, 0, 0, 0);
- /* 'cos I'm paranoid ... */
- if (tid < 1 || tid >= VG_N_THREADS)
- barf("__res_state: invalid ThreadId");
- if (tid == 1)
- return & _res;
- return & thread_specific_res_state[tid];
-}
-
-
-/* ---------------------------------------------------
- LIBC-PRIVATE SPECIFIC DATA
- ------------------------------------------------ */
-
-/* Relies on assumption that initial private data is NULL. This
- should be fixed somehow. */
-
-/* The allowable keys (indices) (all 3 of them).
- From sysdeps/pthread/bits/libc-tsd.h
-*/
-/* as per glibc anoncvs HEAD of 20021001. */
-enum __libc_tsd_key_t { _LIBC_TSD_KEY_MALLOC = 0,
- _LIBC_TSD_KEY_DL_ERROR,
- _LIBC_TSD_KEY_RPC_VARS,
- _LIBC_TSD_KEY_LOCALE,
- _LIBC_TSD_KEY_CTYPE_B,
- _LIBC_TSD_KEY_CTYPE_TOLOWER,
- _LIBC_TSD_KEY_CTYPE_TOUPPER,
- _LIBC_TSD_KEY_N };
-
-/* Auto-initialising subsystem. libc_specifics_inited is set
- after initialisation. libc_specifics_inited_mx guards it. */
-static int libc_specifics_inited = 0;
-static pthread_mutex_t libc_specifics_inited_mx = PTHREAD_MUTEX_INITIALIZER;
-
-
-/* These are the keys we must initialise the first time. */
-static pthread_key_t libc_specifics_keys[_LIBC_TSD_KEY_N];
-
-
-/* Initialise the keys, if they are not already initialised. */
-static
-void init_libc_tsd_keys ( void )
-{
- int res, i;
- pthread_key_t k;
-
- /* Don't fall into deadlock if we get called again whilst we still
- hold the lock, via the __uselocale() call herein. */
- if (libc_specifics_inited != 0)
- return;
-
- /* Take the lock. */
- res = __pthread_mutex_lock(&libc_specifics_inited_mx);
- if (res != 0) barf("init_libc_tsd_keys: lock");
-
- /* Now test again, to be sure there is no mistake. */
- if (libc_specifics_inited != 0) {
- res = __pthread_mutex_unlock(&libc_specifics_inited_mx);
- if (res != 0) barf("init_libc_tsd_keys: unlock(1)");
- return;
- }
-
- /* Actually do the initialisation. */
- /* printf("INIT libc specifics\n"); */
- for (i = 0; i < _LIBC_TSD_KEY_N; i++) {
- res = __pthread_key_create(&k, NULL);
- if (res != 0) barf("init_libc_tsd_keys: create");
- libc_specifics_keys[i] = k;
- }
-
- /* Signify init done. */
- libc_specifics_inited = 1;
-
-# ifdef GLIBC_2_3
- /* Set the initialising thread's locale to the global (default)
- locale. A hack in support of glibc-2.3. This does the biz for
- the root thread. For all other threads we run this in
- thread_wrapper(), which does the real work of
- pthread_create(). */
- /* assert that we are the root thread. I don't know if this is
- really a valid assertion to make; if it breaks I'll reconsider
- it. */
- my_assert(pthread_self() == 1);
- __uselocale(LC_GLOBAL_LOCALE);
-# endif
-
- /* Unlock and return. */
- res = __pthread_mutex_unlock(&libc_specifics_inited_mx);
- if (res != 0) barf("init_libc_tsd_keys: unlock");
-}
-
-
-static int
-libc_internal_tsd_set ( enum __libc_tsd_key_t key,
- const void * pointer )
-{
- int res;
- /* printf("SET SET SET key %d ptr %p\n", key, pointer); */
- if (key < _LIBC_TSD_KEY_MALLOC || key >= _LIBC_TSD_KEY_N)
- barf("libc_internal_tsd_set: invalid key");
- init_libc_tsd_keys();
- res = __pthread_setspecific(libc_specifics_keys[key], pointer);
- if (res != 0) barf("libc_internal_tsd_set: setspecific failed");
- return 0;
-}
-
-static void *
-libc_internal_tsd_get ( enum __libc_tsd_key_t key )
-{
- void* v;
- /* printf("GET GET GET key %d\n", key); */
- if (key < _LIBC_TSD_KEY_MALLOC || key >= _LIBC_TSD_KEY_N)
- barf("libc_internal_tsd_get: invalid key");
- init_libc_tsd_keys();
- v = __pthread_getspecific(libc_specifics_keys[key]);
- /* if (v == NULL) barf("libc_internal_tsd_set: getspecific failed"); */
- return v;
-}
-
-
-int (*__libc_internal_tsd_set)
- (enum __libc_tsd_key_t key, const void * pointer)
- = libc_internal_tsd_set;
-
-void* (*__libc_internal_tsd_get)
- (enum __libc_tsd_key_t key)
- = libc_internal_tsd_get;
-
-
-#ifdef GLIBC_2_3
-/* This one was first spotted by me in the glibc-2.2.93 sources. */
-static void**
-libc_internal_tsd_address ( enum __libc_tsd_key_t key )
-{
- void** v;
- /* printf("ADDR ADDR ADDR key %d\n", key); */
- if (key < _LIBC_TSD_KEY_MALLOC || key >= _LIBC_TSD_KEY_N)
- barf("libc_internal_tsd_address: invalid key");
- init_libc_tsd_keys();
- v = __pthread_getspecific_addr(libc_specifics_keys[key]);
- return v;
-}
-
-void ** (*__libc_internal_tsd_address)
- (enum __libc_tsd_key_t key)
- = libc_internal_tsd_address;
-#endif
-
-
-/* ---------------------------------------------------------------------
- These are here (I think) because they are deemed cancellation
- points by POSIX. For the moment we'll simply pass the call along
- to the corresponding thread-unaware (?) libc routine.
- ------------------------------------------------------------------ */
-
-#ifdef GLIBC_2_1
-extern
-int __sigaction
- (int signum,
- const struct sigaction *act,
- struct sigaction *oldact);
-#else
-extern
-int __libc_sigaction
- (int signum,
- const struct sigaction *act,
- struct sigaction *oldact);
-#endif
-int sigaction(int signum,
- const struct sigaction *act,
- struct sigaction *oldact)
-{
- __my_pthread_testcancel();
-# ifdef GLIBC_2_1
- return __sigaction(signum, act, oldact);
-# else
- return __libc_sigaction(signum, act, oldact);
-# endif
-}
-
-
-extern
-int __libc_connect(int sockfd,
- const struct sockaddr *serv_addr,
- socklen_t addrlen);
-WEAK
-int connect(int sockfd,
- const struct sockaddr *serv_addr,
- socklen_t addrlen)
-{
- __my_pthread_testcancel();
- return __libc_connect(sockfd, serv_addr, addrlen);
-}
-
-
-extern
-int __libc_fcntl(int fd, int cmd, long arg);
-WEAK
-int fcntl(int fd, int cmd, long arg)
-{
- __my_pthread_testcancel();
- return __libc_fcntl(fd, cmd, arg);
-}
-
-
-extern
-ssize_t __libc_write(int fd, const void *buf, size_t count);
-WEAK
-ssize_t write(int fd, const void *buf, size_t count)
-{
- __my_pthread_testcancel();
- return __libc_write(fd, buf, count);
-}
-
-
-extern
-ssize_t __libc_read(int fd, void *buf, size_t count);
-WEAK
-ssize_t read(int fd, void *buf, size_t count)
-{
- __my_pthread_testcancel();
- return __libc_read(fd, buf, count);
-}
-
-/*
- * Ugh, this is horrible but here goes:
- *
- * Open of a named pipe (fifo file) can block. In a threaded program,
- * this means that the whole thing can block. We therefore need to
- * make the open appear to block to the caller, but still keep polling
- * for everyone else.
- *
- * There are four cases:
- *
- * - the caller asked for O_NONBLOCK. The easy one: we just do it.
- *
- * - the caller asked for a blocking O_RDONLY open. We open it with
- * O_NONBLOCK and then use poll to wait for it to become ready.
- *
- * - the caller asked for a blocking O_WRONLY open. Unfortunately, this
- * will fail with ENXIO when we make it non-blocking. Doubly
- * unfortunate is that we can only rely on these semantics if it is
- * actually a fifo file; the hack is that if we see that it is a
- * O_WRONLY open and we get ENXIO, then stat the path and see if it
- * actually is a fifo. This is racy, but it is the best we can do.
- * If it is a fifo, then keep trying the open until it works; if not
- * just return the error.
- *
- * - the caller asked for a blocking O_RDWR open. Well, under Linux,
- * this never blocks, so we just clear the non-blocking flag and
- * return.
- *
- * This code assumes that for whatever we open, O_NONBLOCK followed by
- * a fcntl clearing O_NONBLOCK is the same as opening without
- * O_NONBLOCK. Also assumes that stat and fstat have no side-effects.
- *
- * XXX Should probably put in special cases for some devices as well,
- * like serial ports. Unfortunately they don't work like fifos, so
- * this logic will become even more tortured. Wait until we really
- * need it.
- */
-static inline int _open(const char *pathname, int flags, mode_t mode,
- int (*openp)(const char *, int, mode_t))
-{
- int fd;
- struct stat st;
- struct vki_timespec nanosleep_interval;
- int saved_errno;
-
- __my_pthread_testcancel();
-
- /* Assume we can only get O_RDONLY, O_WRONLY or O_RDWR */
- my_assert((flags & VKI_O_ACCMODE) != VKI_O_ACCMODE);
-
- for(;;) {
- fd = (*openp)(pathname, flags | VKI_O_NONBLOCK, mode);
-
- /* return immediately if caller wanted nonblocking anyway */
- if (flags & VKI_O_NONBLOCK)
- return fd;
-
- saved_errno = *(__errno_location());
-
- if (fd != -1)
- break; /* open worked */
-
- /* If we got ENXIO and we're opening WRONLY, and it turns out
- to really be a FIFO, then poll waiting for open to succeed */
- if (*(__errno_location()) == ENXIO &&
- (flags & VKI_O_ACCMODE) == VKI_O_WRONLY &&
- (stat(pathname, &st) == 0 && S_ISFIFO(st.st_mode))) {
-
- /* OK, we're opening a FIFO for writing; sleep and spin */
- nanosleep_interval.tv_sec = 0;
- nanosleep_interval.tv_nsec = 13 * 1000 * 1000; /* 13 milliseconds */
- /* It's critical here that valgrind's nanosleep implementation
- is nonblocking. */
- (void)my_do_syscall2(__NR_nanosleep,
- (int)(&nanosleep_interval), (int)NULL);
- } else {
- /* it was just an error */
- *(__errno_location()) = saved_errno;
- return -1;
- }
- }
-
- /* OK, we've got a nonblocking FD for a caller who wants blocking;
- reset the flags to what they asked for */
- fcntl(fd, VKI_F_SETFL, flags);
-
- /* Return now if one of:
- - we were opening O_RDWR (never blocks)
- - we opened with O_WRONLY (polling already done)
- - the thing we opened wasn't a FIFO after all (or fstat failed)
- */
- if ((flags & VKI_O_ACCMODE) != VKI_O_RDONLY ||
- (fstat(fd, &st) == -1 || !S_ISFIFO(st.st_mode))) {
- *(__errno_location()) = saved_errno;
- return fd;
- }
-
- /* OK, drop into the poll loop looking for something to read on the fd */
- my_assert((flags & VKI_O_ACCMODE) == VKI_O_RDONLY);
- for(;;) {
- struct pollfd pollfd;
- int res;
-
- pollfd.fd = fd;
- pollfd.events = POLLIN;
- pollfd.revents = 0;
-
- res = my_do_syscall3(__NR_poll, (int)&pollfd, 1, 0);
-
- my_assert(res == 0 || res == 1);
-
- if (res == 1) {
- /* OK, got it.
-
- XXX This is wrong: we're waiting for either something to
- read or a HUP on the file descriptor, but the semantics of
- fifo open are that we should unblock as soon as someone
- simply opens the other end, not that they write something.
- With luck this won't matter in practice.
- */
- my_assert(pollfd.revents & (POLLIN|POLLHUP));
- break;
- }
-
- /* Still nobody home; sleep and spin */
- nanosleep_interval.tv_sec = 0;
- nanosleep_interval.tv_nsec = 13 * 1000 * 1000; /* 13 milliseconds */
- /* It's critical here that valgrind's nanosleep implementation
- is nonblocking. */
- (void)my_do_syscall2(__NR_nanosleep,
- (int)(&nanosleep_interval), (int)NULL);
- }
-
- *(__errno_location()) = saved_errno;
- return fd;
-}
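The `_open` wrapper above relies on the assumption, stated in its header comment, that opening with O_NONBLOCK and then clearing the flag with fcntl(F_SETFL) leaves the descriptor equivalent to a plain blocking open. That set-then-clear dance can be sketched on an ordinary pipe (the `demo_` name is invented for illustration):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Set O_NONBLOCK on a pipe's read end and then clear it again, as the
   _open wrapper does once the descriptor is ready for the caller. */
int demo_clear_nonblock(void)
{
    int fds[2];
    assert(pipe(fds) == 0);

    int fl = fcntl(fds[0], F_GETFL);
    assert(fcntl(fds[0], F_SETFL, fl | O_NONBLOCK) == 0);   /* set */
    assert(fcntl(fds[0], F_SETFL, fl & ~O_NONBLOCK) == 0);  /* clear */

    int now = fcntl(fds[0], F_GETFL);
    close(fds[0]);
    close(fds[1]);
    return (now & O_NONBLOCK) == 0;   /* 1 if the flag is gone again */
}
```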
-
-extern
-int __libc_open64(const char *pathname, int flags, mode_t mode);
-/* WEAK */
-int open64(const char *pathname, int flags, mode_t mode)
-{
- return _open(pathname, flags, mode, __libc_open64);
-}
-
-extern
-int __libc_open(const char *pathname, int flags, mode_t mode);
-/* WEAK */
-int open(const char *pathname, int flags, mode_t mode)
-{
- return _open(pathname, flags, mode, __libc_open);
-}
-
-extern
-int __libc_close(int fd);
-WEAK
-int close(int fd)
-{
- __my_pthread_testcancel();
- return __libc_close(fd);
-}
-
-
-WEAK
-int accept(int s, struct sockaddr *addr, socklen_t *addrlen)
-{
- return VGR_(accept)(s, addr, addrlen);
-}
-
-WEAK
-int recv(int s, void *buf, size_t len, int flags)
-{
- return VGR_(recv)(s, buf, len, flags);
-}
-
-WEAK
-int readv(int fd, const struct iovec *iov, int count)
-{
- return VGR_(readv)(fd, iov, count);
-}
-
-WEAK
-int writev(int fd, const struct iovec *iov, int count)
-{
- return VGR_(writev)(fd, iov, count);
-}
-
-extern
-pid_t __libc_waitpid(pid_t pid, int *status, int options);
-WEAK
-pid_t waitpid(pid_t pid, int *status, int options)
-{
- __my_pthread_testcancel();
- return __libc_waitpid(pid, status, options);
-}
-
-
-extern
-int __libc_nanosleep(const struct timespec *req, struct timespec *rem);
-WEAK
-int nanosleep(const struct timespec *req, struct timespec *rem)
-{
- __my_pthread_testcancel();
- return __libc_nanosleep(req, rem);
-}
-
-
-extern
-int __libc_fsync(int fd);
-WEAK
-int fsync(int fd)
-{
- __my_pthread_testcancel();
- return __libc_fsync(fd);
-}
-
-
-extern
-off_t __libc_lseek(int fildes, off_t offset, int whence);
-WEAK
-off_t lseek(int fildes, off_t offset, int whence)
-{
- __my_pthread_testcancel();
- return __libc_lseek(fildes, offset, whence);
-}
-
-
-extern
-__off64_t __libc_lseek64(int fildes, __off64_t offset, int whence);
-WEAK
-__off64_t lseek64(int fildes, __off64_t offset, int whence)
-{
- __my_pthread_testcancel();
- return __libc_lseek64(fildes, offset, whence);
-}
-
-
-extern
-ssize_t __libc_pread64 (int __fd, void *__buf, size_t __nbytes,
- __off64_t __offset);
-ssize_t __pread64 (int __fd, void *__buf, size_t __nbytes,
- __off64_t __offset)
-{
- __my_pthread_testcancel();
- return __libc_pread64(__fd, __buf, __nbytes, __offset);
-}
-
-
-extern
-ssize_t __libc_pwrite64 (int __fd, const void *__buf, size_t __nbytes,
- __off64_t __offset);
-ssize_t __pwrite64 (int __fd, const void *__buf, size_t __nbytes,
- __off64_t __offset)
-{
- __my_pthread_testcancel();
- return __libc_pwrite64(__fd, __buf, __nbytes, __offset);
-}
-
-
-extern
-ssize_t __libc_pwrite(int fd, const void *buf, size_t count, off_t offset);
-WEAK
-ssize_t pwrite(int fd, const void *buf, size_t count, off_t offset)
-{
- __my_pthread_testcancel();
- return __libc_pwrite(fd, buf, count, offset);
-}
-
-
-extern
-ssize_t __libc_pread(int fd, void *buf, size_t count, off_t offset);
-WEAK
-ssize_t pread(int fd, void *buf, size_t count, off_t offset)
-{
- __my_pthread_testcancel();
- return __libc_pread(fd, buf, count, offset);
-}
-
-
-extern
-void __libc_longjmp(jmp_buf env, int val) __attribute((noreturn));
-/* not weak: WEAK */
-void longjmp(jmp_buf env, int val)
-{
- __libc_longjmp(env, val);
-}
-
-
-extern void __libc_siglongjmp (sigjmp_buf env, int val)
- __attribute__ ((noreturn));
-void siglongjmp(sigjmp_buf env, int val)
-{
- kludged("siglongjmp (cleanup handlers are ignored)");
- __libc_siglongjmp(env, val);
-}
-
-
-extern
-int __libc_send(int s, const void *msg, size_t len, int flags);
-WEAK
-int send(int s, const void *msg, size_t len, int flags)
-{
- __my_pthread_testcancel();
- return __libc_send(s, msg, len, flags);
-}
-
-
-extern
-int __libc_sendmsg(int s, const struct msghdr *msg, int flags);
-WEAK
-int sendmsg(int s, const struct msghdr *msg, int flags)
-{
- __my_pthread_testcancel();
- return __libc_sendmsg(s, msg, flags);
-}
-
-
-extern
-int __libc_recvmsg(int s, struct msghdr *msg, int flags);
-WEAK
-int recvmsg(int s, struct msghdr *msg, int flags)
-{
- __my_pthread_testcancel();
- return __libc_recvmsg(s, msg, flags);
-}
-
-
-extern
-int __libc_recvfrom(int s, void *buf, size_t len, int flags,
- struct sockaddr *from, socklen_t *fromlen);
-WEAK
-int recvfrom(int s, void *buf, size_t len, int flags,
- struct sockaddr *from, socklen_t *fromlen)
-{
- __my_pthread_testcancel();
- VGR_(wait_for_fd_to_be_readable_or_erring)(s);
- __my_pthread_testcancel();
- return __libc_recvfrom(s, buf, len, flags, from, fromlen);
-}
-
-
-extern
-int __libc_sendto(int s, const void *msg, size_t len, int flags,
- const struct sockaddr *to, socklen_t tolen);
-WEAK
-int sendto(int s, const void *msg, size_t len, int flags,
- const struct sockaddr *to, socklen_t tolen)
-{
- __my_pthread_testcancel();
- return __libc_sendto(s, msg, len, flags, to, tolen);
-}
-
-
-extern
-int __libc_system(const char* str);
-WEAK
-int system(const char* str)
-{
- __my_pthread_testcancel();
- return __libc_system(str);
-}
-
-
-extern
-pid_t __libc_wait(int *status);
-WEAK
-pid_t wait(int *status)
-{
- __my_pthread_testcancel();
- return __libc_wait(status);
-}
-
-
-extern
-int __libc_msync(const void *start, size_t length, int flags);
-WEAK
-int msync(const void *start, size_t length, int flags)
-{
- __my_pthread_testcancel();
- return __libc_msync(start, length, flags);
-}
-
-
-/*--- fork and its helper ---*/
-
-static
-void run_fork_handlers ( int what )
-{
- ForkHandlerEntry entry;
- int n_h, n_handlers, i, res;
-
- my_assert(what == 0 || what == 1 || what == 2);
-
- /* Fetch old counter */
- VALGRIND_MAGIC_SEQUENCE(n_handlers, -2 /* default */,
- VG_USERREQ__GET_FHSTACK_USED,
- 0, 0, 0, 0);
- my_assert(n_handlers >= 0 && n_handlers < VG_N_FORKHANDLERSTACK);
-
- /* Prepare handlers (what == 0) are called in opposite order of
- calls to pthread_atfork. Parent and child handlers are called
- in the same order as calls to pthread_atfork. */
- if (what == 0)
- n_h = n_handlers - 1;
- else
- n_h = 0;
-
- for (i = 0; i < n_handlers; i++) {
- VALGRIND_MAGIC_SEQUENCE(res, -2 /* default */,
- VG_USERREQ__GET_FHSTACK_ENTRY,
- n_h, &entry, 0, 0);
- my_assert(res == 0);
- switch (what) {
- case 0: if (entry.prepare) entry.prepare();
- n_h--; break;
- case 1: if (entry.parent) entry.parent();
- n_h++; break;
- case 2: if (entry.child) entry.child();
- n_h++; break;
- default: barf("run_fork_handlers: invalid what");
- }
- }
-
- if (what != 0 /* prepare */) {
- /* Empty out the stack. */
- VALGRIND_MAGIC_SEQUENCE(res, -2 /* default */,
- VG_USERREQ__SET_FHSTACK_USED,
- 0, 0, 0, 0);
- my_assert(res == 0);
- }
-}
-
-extern
-pid_t __libc_fork(void);
-pid_t __fork(void)
-{
- pid_t pid;
- __my_pthread_testcancel();
- __pthread_mutex_lock(&pthread_atfork_lock);
-
- run_fork_handlers(0 /* prepare */);
- pid = __libc_fork();
- if (pid == 0) {
- /* I am the child */
- run_fork_handlers(2 /* child */);
- __pthread_mutex_unlock(&pthread_atfork_lock);
- __pthread_mutex_init(&pthread_atfork_lock, NULL);
- } else {
- /* I am the parent */
- run_fork_handlers(1 /* parent */);
- __pthread_mutex_unlock(&pthread_atfork_lock);
- }
- return pid;
-}
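The fork-handler stack that `__fork` drains is populated by `__pthread_atfork` above; the observable contract is that prepare handlers run before the fork in the calling thread and parent handlers run in the parent afterwards. A small sketch of that ordering against the standard API (the `demo_`/`on_` names are invented; only the parent's view is checked, since assertions in the child would not propagate; link with -lpthread):

```c
#include <assert.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int prepare_ran = 0, parent_ran = 0;

static void on_prepare(void) { prepare_ran = 1; }
static void on_parent(void)  { parent_ran = 1; }
static void on_child(void)   { /* runs only in the child */ }

/* Register handlers, fork, and report (in the parent) whether the
   prepare and parent handlers fired around the fork. */
int demo_atfork(void)
{
    assert(pthread_atfork(on_prepare, on_parent, on_child) == 0);
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);              /* child: nothing to check here */
    waitpid(pid, NULL, 0);
    return prepare_ran && parent_ran;
}
```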
-
-
-pid_t __vfork(void)
-{
- return __fork();
-}
-
-
-static
-int my_do_syscall1 ( int syscallno, int arg1 )
-{
- int __res;
- __asm__ volatile ("pushl %%ebx; movl %%edx,%%ebx ; int $0x80 ; popl %%ebx"
- : "=a" (__res)
- : "0" (syscallno),
- "d" (arg1) );
- return __res;
-}
-
-static
-int my_do_syscall2 ( int syscallno,
- int arg1, int arg2 )
-{
- int __res;
- __asm__ volatile ("pushl %%ebx; movl %%edx,%%ebx ; int $0x80 ; popl %%ebx"
- : "=a" (__res)
- : "0" (syscallno),
- "d" (arg1),
- "c" (arg2) );
- return __res;
-}
-
-static
-int my_do_syscall3 ( int syscallno,
- int arg1, int arg2, int arg3 )
-{
- int __res;
- __asm__ volatile ("pushl %%ebx; movl %%esi,%%ebx ; int $0x80 ; popl %%ebx"
- : "=a" (__res)
- : "0" (syscallno),
- "S" (arg1),
- "c" (arg2),
- "d" (arg3) );
- return __res;
-}
-
-static inline
-int my_do_syscall5 ( int syscallno,
- int arg1, int arg2, int arg3, int arg4, int arg5 )
-{
- int __res;
- __asm__ volatile ("int $0x80"
- : "=a" (__res)
- : "0" (syscallno),
- "b" (arg1),
- "c" (arg2),
- "d" (arg3),
- "S" (arg4),
- "D" (arg5));
- return __res;
-}
-
-
-WEAK
-int select ( int n,
- fd_set *rfds,
- fd_set *wfds,
- fd_set *xfds,
- struct timeval *timeout )
-{
- return VGR_(select)(n, rfds, wfds, xfds, timeout);
-}
-
-
-/* ---------------------------------------------------------------------
- Hacky implementation of semaphores.
- ------------------------------------------------------------------ */
-
-#include <semaphore.h>
-
-/* This is a terrible way to do the remapping. Plan is to import an
- AVL tree at some point. */
-
-typedef
- struct {
- pthread_mutex_t se_mx;
- pthread_cond_t se_cv;
- int count;
- }
- vg_sem_t;
-
-static pthread_mutex_t se_remap_mx = PTHREAD_MUTEX_INITIALIZER;
-
-static int se_remap_used = 0;
-static sem_t* se_remap_orig[VG_N_SEMAPHORES];
-static vg_sem_t se_remap_new[VG_N_SEMAPHORES];
-
-static vg_sem_t* se_remap ( sem_t* orig )
-{
- int res, i;
- res = __pthread_mutex_lock(&se_remap_mx);
- my_assert(res == 0);
-
- for (i = 0; i < se_remap_used; i++) {
- if (se_remap_orig[i] == orig)
- break;
- }
- if (i == se_remap_used) {
- if (se_remap_used == VG_N_SEMAPHORES) {
- res = pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
- barf("VG_N_SEMAPHORES is too low. Increase and recompile.");
- }
- se_remap_used++;
- se_remap_orig[i] = orig;
- /* printf("allocated semaphore %d\n", i); */
- }
- res = __pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
- return &se_remap_new[i];
-}
-
-
-int sem_init(sem_t *sem, int pshared, unsigned int value)
-{
- int res;
- vg_sem_t* vg_sem;
- ensure_valgrind("sem_init");
- if (pshared != 0) {
- pthread_error("sem_init: unsupported pshared value");
- *(__errno_location()) = ENOSYS;
- return -1;
- }
- vg_sem = se_remap(sem);
- res = pthread_mutex_init(&vg_sem->se_mx, NULL);
- my_assert(res == 0);
- res = pthread_cond_init(&vg_sem->se_cv, NULL);
- my_assert(res == 0);
- vg_sem->count = value;
- return 0;
-}
-
-
-int sem_wait ( sem_t* sem )
-{
- int res;
- vg_sem_t* vg_sem;
- ensure_valgrind("sem_wait");
- vg_sem = se_remap(sem);
- res = __pthread_mutex_lock(&vg_sem->se_mx);
- my_assert(res == 0);
- while (vg_sem->count == 0) {
- res = pthread_cond_wait(&vg_sem->se_cv, &vg_sem->se_mx);
- my_assert(res == 0);
- }
- vg_sem->count--;
- res = __pthread_mutex_unlock(&vg_sem->se_mx);
- my_assert(res == 0);
- return 0;
-}
-
-int sem_post ( sem_t* sem )
-{
- int res;
- vg_sem_t* vg_sem;
- ensure_valgrind("sem_post");
- vg_sem = se_remap(sem);
- res = __pthread_mutex_lock(&vg_sem->se_mx);
- my_assert(res == 0);
- if (vg_sem->count == 0) {
- vg_sem->count++;
- res = pthread_cond_broadcast(&vg_sem->se_cv);
- my_assert(res == 0);
- } else {
- vg_sem->count++;
- }
- res = __pthread_mutex_unlock(&vg_sem->se_mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-int sem_trywait ( sem_t* sem )
-{
- int ret, res;
- vg_sem_t* vg_sem;
- ensure_valgrind("sem_trywait");
- vg_sem = se_remap(sem);
- res = __pthread_mutex_lock(&vg_sem->se_mx);
- my_assert(res == 0);
- if (vg_sem->count > 0) {
- vg_sem->count--;
- ret = 0;
- } else {
- ret = -1;
- *(__errno_location()) = EAGAIN;
- }
- res = __pthread_mutex_unlock(&vg_sem->se_mx);
- my_assert(res == 0);
- return ret;
-}
-
-
-int sem_getvalue(sem_t* sem, int * sval)
-{
- vg_sem_t* vg_sem;
-   ensure_valgrind("sem_getvalue");
- vg_sem = se_remap(sem);
- *sval = vg_sem->count;
- return 0;
-}
-
-
-int sem_destroy(sem_t * sem)
-{
- kludged("sem_destroy");
-   /* if someone is waiting on this semaphore: errno = EBUSY, return -1 */
- return 0;
-}
-
-
-int sem_timedwait(sem_t* sem, const struct timespec *abstime)
-{
- int res;
- vg_sem_t* vg_sem;
- ensure_valgrind("sem_timedwait");
- vg_sem = se_remap(sem);
- res = __pthread_mutex_lock(&vg_sem->se_mx);
- my_assert(res == 0);
- while ( vg_sem->count == 0 && res != ETIMEDOUT ) {
- res = pthread_cond_timedwait(&vg_sem->se_cv, &vg_sem->se_mx, abstime);
- }
- if ( vg_sem->count > 0 ) {
- vg_sem->count--;
- res = __pthread_mutex_unlock(&vg_sem->se_mx);
- my_assert(res == 0 );
- return 0;
- } else {
- res = __pthread_mutex_unlock(&vg_sem->se_mx);
- my_assert(res == 0 );
- *(__errno_location()) = ETIMEDOUT;
- return -1;
- }
-}
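The mutex/condvar/count implementation above emulates standard counting-semaphore behaviour: posts increment the count, waits decrement it, and `sem_trywait` on an exhausted semaphore fails with EAGAIN. A minimal sketch of that contract against the standard API (the `demo_` name is invented; link with -lpthread on older glibc):

```c
#include <assert.h>
#include <errno.h>
#include <semaphore.h>

/* Post twice, then trywait three times; the third trywait must fail
   with EAGAIN because the count has been exhausted. */
int demo_sem_counting(void)
{
    sem_t sem;
    int val;

    assert(sem_init(&sem, 0 /* not process-shared */, 0) == 0);
    assert(sem_post(&sem) == 0);
    assert(sem_post(&sem) == 0);
    assert(sem_getvalue(&sem, &val) == 0 && val == 2);
    assert(sem_trywait(&sem) == 0);
    assert(sem_trywait(&sem) == 0);
    assert(sem_trywait(&sem) == -1 && errno == EAGAIN);
    sem_destroy(&sem);
    return 0;
}
```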
-
-
-/* ---------------------------------------------------------------------
- Reader-writer locks.
- ------------------------------------------------------------------ */
-
-typedef
- struct {
- int initted; /* != 0 --> in use; sanity check only */
- int prefer_w; /* != 0 --> prefer writer */
- int nwait_r; /* # of waiting readers */
- int nwait_w; /* # of waiting writers */
- pthread_cond_t cv_r; /* for signalling readers */
- pthread_cond_t cv_w; /* for signalling writers */
- pthread_mutex_t mx;
- int status;
- /* allowed range for status: >= -1. -1 means 1 writer currently
- active, >= 0 means N readers currently active. */
- }
- vg_rwlock_t;
-
-
-static pthread_mutex_t rw_remap_mx = PTHREAD_MUTEX_INITIALIZER;
-
-static int rw_remap_used = 0;
-static pthread_rwlock_t* rw_remap_orig[VG_N_RWLOCKS];
-static vg_rwlock_t rw_remap_new[VG_N_RWLOCKS];
-
-
-static
-void init_vg_rwlock ( vg_rwlock_t* vg_rwl )
-{
- int res = 0;
- vg_rwl->initted = 1;
- vg_rwl->prefer_w = 1;
- vg_rwl->nwait_r = 0;
- vg_rwl->nwait_w = 0;
- vg_rwl->status = 0;
- res = pthread_mutex_init(&vg_rwl->mx, NULL);
- res |= pthread_cond_init(&vg_rwl->cv_r, NULL);
- res |= pthread_cond_init(&vg_rwl->cv_w, NULL);
- my_assert(res == 0);
-}
-
-
-/* Take the address of a LinuxThreads rwlock_t and return the shadow
- address of our version. Further, if the LinuxThreads version
- appears to have been statically initialised, do the same to the one
- we allocate here. The pthread_rwlock_t.__rw_readers field is set
- to zero by PTHREAD_RWLOCK_INITIALIZER, so we take zero as meaning
- uninitialised and non-zero meaning initialised.
-*/
-static vg_rwlock_t* rw_remap ( pthread_rwlock_t* orig )
-{
- int res, i;
- vg_rwlock_t* vg_rwl;
- res = __pthread_mutex_lock(&rw_remap_mx);
- my_assert(res == 0);
-
- for (i = 0; i < rw_remap_used; i++) {
- if (rw_remap_orig[i] == orig)
- break;
- }
- if (i == rw_remap_used) {
- if (rw_remap_used == VG_N_RWLOCKS) {
- res = __pthread_mutex_unlock(&rw_remap_mx);
- my_assert(res == 0);
- barf("VG_N_RWLOCKS is too low. Increase and recompile.");
- }
- rw_remap_used++;
- rw_remap_orig[i] = orig;
- rw_remap_new[i].initted = 0;
- if (0) printf("allocated rwlock %d\n", i);
- }
- res = __pthread_mutex_unlock(&rw_remap_mx);
- my_assert(res == 0);
- vg_rwl = &rw_remap_new[i];
-
- /* Initialise the shadow, if required. */
- if (orig->__rw_readers == 0) {
- orig->__rw_readers = 1;
- init_vg_rwlock(vg_rwl);
- if (orig->__rw_kind == PTHREAD_RWLOCK_PREFER_READER_NP)
- vg_rwl->prefer_w = 0;
- }
-
- return vg_rwl;
-}
-
-
-int pthread_rwlock_init ( pthread_rwlock_t* orig,
- const pthread_rwlockattr_t* attr )
-{
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_init\n");
- /* Force the remapper to initialise the shadow. */
- orig->__rw_readers = 0;
- /* Install the lock preference; the remapper needs to know it. */
- orig->__rw_kind = PTHREAD_RWLOCK_DEFAULT_NP;
- if (attr)
- orig->__rw_kind = attr->__lockkind;
- rwl = rw_remap ( orig );
- return 0;
-}
-
-
-static
-void pthread_rwlock_rdlock_CANCEL_HDLR ( void* rwl_v )
-{
- vg_rwlock_t* rwl = (vg_rwlock_t*)rwl_v;
- rwl->nwait_r--;
- pthread_mutex_unlock (&rwl->mx);
-}
-
-
-int pthread_rwlock_rdlock ( pthread_rwlock_t* orig )
-{
- int res;
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_rdlock\n");
- rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status < 0) {
- my_assert(rwl->status == -1);
- rwl->nwait_r++;
- pthread_cleanup_push( pthread_rwlock_rdlock_CANCEL_HDLR, rwl );
- while (1) {
- if (rwl->status == 0) break;
- res = pthread_cond_wait(&rwl->cv_r, &rwl->mx);
- my_assert(res == 0);
- }
- pthread_cleanup_pop(0);
- rwl->nwait_r--;
- }
- my_assert(rwl->status >= 0);
- rwl->status++;
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-int pthread_rwlock_tryrdlock ( pthread_rwlock_t* orig )
-{
- int res;
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_tryrdlock\n");
- rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status == -1) {
- /* Writer active; we have to give up. */
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EBUSY;
- }
- /* Success */
- my_assert(rwl->status >= 0);
- rwl->status++;
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-static
-void pthread_rwlock_wrlock_CANCEL_HDLR ( void* rwl_v )
-{
- vg_rwlock_t* rwl = (vg_rwlock_t*)rwl_v;
- rwl->nwait_w--;
- pthread_mutex_unlock (&rwl->mx);
-}
-
-
-int pthread_rwlock_wrlock ( pthread_rwlock_t* orig )
-{
- int res;
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_wrlock\n");
- rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status != 0) {
- rwl->nwait_w++;
- pthread_cleanup_push( pthread_rwlock_wrlock_CANCEL_HDLR, rwl );
- while (1) {
- if (rwl->status == 0) break;
- res = pthread_cond_wait(&rwl->cv_w, &rwl->mx);
- my_assert(res == 0);
- }
- pthread_cleanup_pop(0);
- rwl->nwait_w--;
- }
- my_assert(rwl->status == 0);
- rwl->status = -1;
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-int pthread_rwlock_trywrlock ( pthread_rwlock_t* orig )
-{
- int res;
- vg_rwlock_t* rwl;
-   if (0) printf ("pthread_rwlock_trywrlock\n");
- rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status != 0) {
- /* Reader(s) or a writer active; we have to give up. */
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EBUSY;
- }
- /* Success */
- my_assert(rwl->status == 0);
- rwl->status = -1;
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-int pthread_rwlock_unlock ( pthread_rwlock_t* orig )
-{
- int res;
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_unlock\n");
-   rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status == 0) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EPERM;
- }
- my_assert(rwl->status != 0);
- if (rwl->status == -1) {
- rwl->status = 0;
- } else {
- my_assert(rwl->status > 0);
- rwl->status--;
- }
-
- my_assert(rwl->status >= 0);
-
- if (rwl->prefer_w) {
-
- /* Favour waiting writers, if any. */
- if (rwl->nwait_w > 0) {
- /* Writer(s) are waiting. */
- if (rwl->status == 0) {
- /* We can let a writer in. */
- res = pthread_cond_signal(&rwl->cv_w);
- my_assert(res == 0);
- } else {
- /* There are still readers active. Do nothing; eventually
- they will disappear, at which point a writer will be
- admitted. */
- }
- }
- else
- /* No waiting writers. */
- if (rwl->nwait_r > 0) {
- /* Let in a waiting reader. */
- res = pthread_cond_signal(&rwl->cv_r);
- my_assert(res == 0);
- }
-
- } else {
-
- /* Favour waiting readers, if any. */
- if (rwl->nwait_r > 0) {
- /* Reader(s) are waiting; let one in. */
- res = pthread_cond_signal(&rwl->cv_r);
- my_assert(res == 0);
- }
- else
- /* No waiting readers. */
- if (rwl->nwait_w > 0 && rwl->status == 0) {
- /* We have waiting writers and no active readers; let a
- writer in. */
- res = pthread_cond_signal(&rwl->cv_w);
- my_assert(res == 0);
- }
- }
-
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-int pthread_rwlock_destroy ( pthread_rwlock_t *orig )
-{
- int res;
- vg_rwlock_t* rwl;
- if (0) printf ("pthread_rwlock_destroy\n");
- rwl = rw_remap ( orig );
- res = __pthread_mutex_lock(&rwl->mx);
- my_assert(res == 0);
- if (!rwl->initted) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EINVAL;
- }
- if (rwl->status != 0 || rwl->nwait_r > 0 || rwl->nwait_w > 0) {
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return EBUSY;
- }
- rwl->initted = 0;
- res = __pthread_mutex_unlock(&rwl->mx);
- my_assert(res == 0);
- return 0;
-}
-
-
-/* Copied directly from LinuxThreads. */
-int
-pthread_rwlockattr_init (pthread_rwlockattr_t *attr)
-{
- attr->__lockkind = 0;
- attr->__pshared = PTHREAD_PROCESS_PRIVATE;
-
- return 0;
-}
-
-/* Copied directly from LinuxThreads. */
-int
-pthread_rwlockattr_destroy (pthread_rwlockattr_t *attr)
-{
- return 0;
-}
-
-/* Copied directly from LinuxThreads. */
-int
-pthread_rwlockattr_setpshared (pthread_rwlockattr_t *attr, int pshared)
-{
- if (pshared != PTHREAD_PROCESS_PRIVATE && pshared != PTHREAD_PROCESS_SHARED)
- return EINVAL;
-
-  /* For now it is not possible to share a condition variable.  */
- if (pshared != PTHREAD_PROCESS_PRIVATE)
- return ENOSYS;
-
- attr->__pshared = pshared;
-
- return 0;
-}
-
-
-/* ---------------------------------------------------------------------
- Make SYSV IPC not block everything -- pass to vg_intercept.c.
- ------------------------------------------------------------------ */
-
-WEAK
-int msgsnd(int msgid, const void *msgp, size_t msgsz, int msgflg)
-{
- return VGR_(msgsnd)(msgid, msgp, msgsz, msgflg);
-}
-
-WEAK
-int msgrcv(int msqid, void* msgp, size_t msgsz,
- long msgtyp, int msgflg )
-{
- return VGR_(msgrcv)(msqid, msgp, msgsz, msgtyp, msgflg );
-}
-
-
-/* ---------------------------------------------------------------------
- The glibc sources say that returning -1 in these 3 functions
- causes real time signals not to be used.
- ------------------------------------------------------------------ */
-
-int __libc_current_sigrtmin (void)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("__libc_current_sigrtmin");
- return -1;
-}
-
-int __libc_current_sigrtmax (void)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("__libc_current_sigrtmax");
- return -1;
-}
-
-int __libc_allocate_rtsig (int high)
-{
- static int moans = N_MOANS;
- if (moans-- > 0)
- kludged("__libc_allocate_rtsig");
- return -1;
-}
-
-
-/* ---------------------------------------------------------------------
- B'stard.
- ------------------------------------------------------------------ */
-
-# define strong_alias(name, aliasname) \
- extern __typeof (name) aliasname __attribute__ ((alias (#name)));
-
-# define weak_alias(name, aliasname) \
- extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));
-
-strong_alias(__pthread_mutex_lock, pthread_mutex_lock)
-strong_alias(__pthread_mutex_trylock, pthread_mutex_trylock)
-strong_alias(__pthread_mutex_unlock, pthread_mutex_unlock)
-strong_alias(__pthread_mutexattr_init, pthread_mutexattr_init)
- weak_alias(__pthread_mutexattr_settype, pthread_mutexattr_settype)
- weak_alias(__pthread_mutexattr_setpshared, pthread_mutexattr_setpshared)
-strong_alias(__pthread_mutex_init, pthread_mutex_init)
-strong_alias(__pthread_mutexattr_destroy, pthread_mutexattr_destroy)
-strong_alias(__pthread_mutex_destroy, pthread_mutex_destroy)
-strong_alias(__pthread_once, pthread_once)
-strong_alias(__pthread_atfork, pthread_atfork)
-strong_alias(__pthread_key_create, pthread_key_create)
-strong_alias(__pthread_getspecific, pthread_getspecific)
-strong_alias(__pthread_setspecific, pthread_setspecific)
-
-#ifndef GLIBC_2_1
-strong_alias(sigaction, __sigaction)
-#endif
-
-strong_alias(close, __close)
-strong_alias(fcntl, __fcntl)
-strong_alias(lseek, __lseek)
-strong_alias(open, __open)
-strong_alias(open64, __open64)
-strong_alias(read, __read)
-strong_alias(wait, __wait)
-strong_alias(write, __write)
-strong_alias(connect, __connect)
-strong_alias(send, __send)
-
-weak_alias (__pread64, pread64)
-weak_alias (__pwrite64, pwrite64)
-weak_alias(__fork, fork)
-weak_alias(__vfork, vfork)
-
-weak_alias (__pthread_kill_other_threads_np, pthread_kill_other_threads_np)
-
-/*--------------------------------------------------*/
-
-weak_alias(pthread_rwlock_rdlock, __pthread_rwlock_rdlock)
-weak_alias(pthread_rwlock_unlock, __pthread_rwlock_unlock)
-weak_alias(pthread_rwlock_wrlock, __pthread_rwlock_wrlock)
-
-weak_alias(pthread_rwlock_destroy, __pthread_rwlock_destroy)
-weak_alias(pthread_rwlock_init, __pthread_rwlock_init)
-weak_alias(pthread_rwlock_tryrdlock, __pthread_rwlock_tryrdlock)
-weak_alias(pthread_rwlock_trywrlock, __pthread_rwlock_trywrlock)
-
-
-/* I've no idea what these are, but they get called quite a lot.
- Anybody know? */
-
-#undef _IO_flockfile
-void _IO_flockfile ( _IO_FILE * file )
-{
- pthread_mutex_lock(file->_lock);
-}
-weak_alias(_IO_flockfile, flockfile);
-
-
-#undef _IO_funlockfile
-void _IO_funlockfile ( _IO_FILE * file )
-{
- pthread_mutex_unlock(file->_lock);
-}
-weak_alias(_IO_funlockfile, funlockfile);
-
-
-/* This doesn't seem to be needed to simulate libpthread.so's external
- interface, but many people complain about its absence. */
-
-strong_alias(__pthread_mutexattr_settype, __pthread_mutexattr_setkind_np)
-weak_alias(__pthread_mutexattr_setkind_np, pthread_mutexattr_setkind_np)
-
-
-/*--------------------------------------------------------------------*/
-/*--- end vg_libpthread.c ---*/
-/*--------------------------------------------------------------------*/
+++ /dev/null
-
-/*--------------------------------------------------------------------*/
-/*--- Give dummy bindings for everything the real libpthread.so ---*/
-/*--- binds. vg_libpthread_unimp.c ---*/
-/*--------------------------------------------------------------------*/
-
-/*
- This file is part of Valgrind, an extensible x86 protected-mode
- emulator for monitoring program execution on x86-Unixes.
-
- Copyright (C) 2000-2003 Julian Seward
- jseward@acm.org
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License as
- published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful, but
- WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
- 02111-1307, USA.
-
- The GNU General Public License is contained in the file COPYING.
-*/
-
-/* ---------------------------------------------------------------------
- ALL THIS CODE RUNS ON THE SIMULATED CPU.
- Give a binding for everything the real libpthread.so binds.
- ------------------------------------------------------------------ */
-
-#include "vg_include.h" /* For GLIBC_2_3, or not, as the case may be */
-
-extern void vgPlain_unimp ( char* );
-#define unimp(str) vgPlain_unimp(str)
-
-//void _IO_flockfile ( void ) { unimp("_IO_flockfile"); }
-void _IO_ftrylockfile ( void ) { unimp("_IO_ftrylockfile"); }
-//void _IO_funlockfile ( void ) { unimp("_IO_funlockfile"); }
-//void __close ( void ) { unimp("__close"); }
-//void __connect ( void ) { unimp("__connect"); }
-//void __errno_location ( void ) { unimp("__errno_location"); }
-//void __fcntl ( void ) { unimp("__fcntl"); }
-//void __fork ( void ) { unimp("__fork"); }
-//void __h_errno_location ( void ) { unimp("__h_errno_location"); }
-//void __libc_allocate_rtsig ( void ) { unimp("__libc_allocate_rtsig"); }
-//void __libc_current_sigrtmax ( void ) { unimp("__libc_current_sigrtmax"); }
-//void __libc_current_sigrtmin ( void ) { unimp("__libc_current_sigrtmin"); }
-//void __lseek ( void ) { unimp("__lseek"); }
-//void __open ( void ) { unimp("__open"); }
-//void __open64 ( void ) { unimp("__open64"); }
-//void __pread64 ( void ) { unimp("__pread64"); }
-//void __pthread_atfork ( void ) { unimp("__pthread_atfork"); }
-//void __pthread_getspecific ( void ) { unimp("__pthread_getspecific"); }
-//void __pthread_key_create ( void ) { unimp("__pthread_key_create"); }
-//void __pthread_kill_other_threads_np ( void ) { unimp("__pthread_kill_other_threads_np"); }
-//void __pthread_mutex_destroy ( void ) { unimp("__pthread_mutex_destroy"); }
-//void __pthread_mutex_init ( void ) { unimp("__pthread_mutex_init"); }
-//void __pthread_mutex_lock ( void ) { unimp("__pthread_mutex_lock"); }
-//void __pthread_mutex_trylock ( void ) { unimp("__pthread_mutex_trylock"); }
-//void __pthread_mutex_unlock ( void ) { unimp("__pthread_mutex_unlock"); }
-//void __pthread_mutexattr_destroy ( void ) { unimp("__pthread_mutexattr_destroy"); }
-//void __pthread_mutexattr_init ( void ) { unimp("__pthread_mutexattr_init"); }
-//void __pthread_mutexattr_settype ( void ) { unimp("__pthread_mutexattr_settype"); }
-//void __pthread_once ( void ) { unimp("__pthread_once"); }
-//void __pthread_setspecific ( void ) { unimp("__pthread_setspecific"); }
-//void __pwrite64 ( void ) { unimp("__pwrite64"); }
-//void __read ( void ) { unimp("__read"); }
-//void __res_state ( void ) { unimp("__res_state"); }
-//void __send ( void ) { unimp("__send"); }
-//void __sigaction ( void ) { unimp("__sigaction"); }
-//--//void __vfork ( void ) { unimp("__vfork"); }
-//void __wait ( void ) { unimp("__wait"); }
-//void __write ( void ) { unimp("__write"); }
-//void _pthread_cleanup_pop ( void ) { unimp("_pthread_cleanup_pop"); }
-//void _pthread_cleanup_pop_restore ( void ) { unimp("_pthread_cleanup_pop_restore"); }
-//void _pthread_cleanup_push ( void ) { unimp("_pthread_cleanup_push"); }
-//void _pthread_cleanup_push_defer ( void ) { unimp("_pthread_cleanup_push_defer"); }
-//void longjmp ( void ) { unimp("longjmp"); }
-//void pthread_atfork ( void ) { unimp("pthread_atfork"); }
-//void pthread_attr_destroy ( void ) { unimp("pthread_attr_destroy"); }
-//void pthread_attr_getdetachstate ( void ) { unimp("pthread_attr_getdetachstate"); }
-void pthread_attr_getinheritsched ( void ) { unimp("pthread_attr_getinheritsched"); }
-//void pthread_attr_getschedparam ( void ) { unimp("pthread_attr_getschedparam"); }
-//void pthread_attr_getschedpolicy ( void ) { unimp("pthread_attr_getschedpolicy"); }
-//void pthread_attr_getscope ( void ) { unimp("pthread_attr_getscope"); }
-
-//void pthread_attr_setdetachstate ( void ) { unimp("pthread_attr_setdetachstate"); }
-//void pthread_attr_setinheritsched ( void ) { unimp("pthread_attr_setinheritsched"); }
-//void pthread_attr_setschedparam ( void ) { unimp("pthread_attr_setschedparam"); }
-//void pthread_attr_setschedpolicy ( void ) { unimp("pthread_attr_setschedpolicy"); }
-//void pthread_attr_setscope ( void ) { unimp("pthread_attr_setscope"); }
-void pthread_barrier_destroy ( void ) { unimp("pthread_barrier_destroy"); }
-void pthread_barrier_init ( void ) { unimp("pthread_barrier_init"); }
-void pthread_barrier_wait ( void ) { unimp("pthread_barrier_wait"); }
-void pthread_barrierattr_destroy ( void ) { unimp("pthread_barrierattr_destroy"); }
-void pthread_barrierattr_init ( void ) { unimp("pthread_barrierattr_init"); }
-void pthread_barrierattr_setpshared ( void ) { unimp("pthread_barrierattr_setpshared"); }
-//void pthread_cancel ( void ) { unimp("pthread_cancel"); }
-//void pthread_cond_broadcast ( void ) { unimp("pthread_cond_broadcast"); }
-//void pthread_cond_destroy ( void ) { unimp("pthread_cond_destroy"); }
-//void pthread_cond_init ( void ) { unimp("pthread_cond_init"); }
-//void pthread_cond_signal ( void ) { unimp("pthread_cond_signal"); }
-//void pthread_cond_timedwait ( void ) { unimp("pthread_cond_timedwait"); }
-//void pthread_cond_wait ( void ) { unimp("pthread_cond_wait"); }
-//void pthread_condattr_destroy ( void ) { unimp("pthread_condattr_destroy"); }
-void pthread_condattr_getpshared ( void ) { unimp("pthread_condattr_getpshared"); }
-//void pthread_condattr_init ( void ) { unimp("pthread_condattr_init"); }
-void pthread_condattr_setpshared ( void ) { unimp("pthread_condattr_setpshared"); }
-//void pthread_detach ( void ) { unimp("pthread_detach"); }
-//void pthread_equal ( void ) { unimp("pthread_equal"); }
-//void pthread_exit ( void ) { unimp("pthread_exit"); }
-//void pthread_getattr_np ( void ) { unimp("pthread_getattr_np"); }
-void pthread_getcpuclockid ( void ) { unimp("pthread_getcpuclockid"); }
-//void pthread_getschedparam ( void ) { unimp("pthread_getschedparam"); }
-//void pthread_getspecific ( void ) { unimp("pthread_getspecific"); }
-//void pthread_join ( void ) { unimp("pthread_join"); }
-//void pthread_key_create ( void ) { unimp("pthread_key_create"); }
-//void pthread_key_delete ( void ) { unimp("pthread_key_delete"); }
-//void pthread_kill ( void ) { unimp("pthread_kill"); }
-//void pthread_mutex_destroy ( void ) { unimp("pthread_mutex_destroy"); }
-//void pthread_mutex_init ( void ) { unimp("pthread_mutex_init"); }
-//void pthread_mutex_lock ( void ) { unimp("pthread_mutex_lock"); }
-void pthread_mutex_timedlock ( void ) { unimp("pthread_mutex_timedlock"); }
-//void pthread_mutex_trylock ( void ) { unimp("pthread_mutex_trylock"); }
-//void pthread_mutex_unlock ( void ) { unimp("pthread_mutex_unlock"); }
-//void pthread_mutexattr_destroy ( void ) { unimp("pthread_mutexattr_destroy"); }
-//void pthread_mutexattr_init ( void ) { unimp("pthread_mutexattr_init"); }
-//void pthread_once ( void ) { unimp("pthread_once"); }
-//void pthread_rwlock_destroy ( void ) { unimp("pthread_rwlock_destroy"); }
-//void pthread_rwlock_init ( void ) { unimp("pthread_rwlock_init"); }
-//void pthread_rwlock_rdlock ( void ) { unimp("pthread_rwlock_rdlock"); }
-void pthread_rwlock_timedrdlock ( void ) { unimp("pthread_rwlock_timedrdlock"); }
-void pthread_rwlock_timedwrlock ( void ) { unimp("pthread_rwlock_timedwrlock"); }
-//void pthread_rwlock_tryrdlock ( void ) { unimp("pthread_rwlock_tryrdlock"); }
-//void pthread_rwlock_trywrlock ( void ) { unimp("pthread_rwlock_trywrlock"); }
-//void pthread_rwlock_unlock ( void ) { unimp("pthread_rwlock_unlock"); }
-//void pthread_rwlock_wrlock ( void ) { unimp("pthread_rwlock_wrlock"); }
-//void pthread_rwlockattr_destroy ( void ) { unimp("pthread_rwlockattr_destroy"); }
-void pthread_rwlockattr_getkind_np ( void ) { unimp("pthread_rwlockattr_getkind_np"); }
-void pthread_rwlockattr_getpshared ( void ) { unimp("pthread_rwlockattr_getpshared"); }
-//void pthread_rwlockattr_init ( void ) { unimp("pthread_rwlockattr_init"); }
-void pthread_rwlockattr_setkind_np ( void ) { unimp("pthread_rwlockattr_setkind_np"); }
-//void pthread_rwlockattr_setpshared ( void ) { unimp("pthread_rwlockattr_setpshared"); }
-//void pthread_self ( void ) { unimp("pthread_self"); }
-//void pthread_setcancelstate ( void ) { unimp("pthread_setcancelstate"); }
-//void pthread_setcanceltype ( void ) { unimp("pthread_setcanceltype"); }
-//void pthread_setschedparam ( void ) { unimp("pthread_setschedparam"); }
-//void pthread_setspecific ( void ) { unimp("pthread_setspecific"); }
-//void pthread_sigmask ( void ) { unimp("pthread_sigmask"); }
-//void pthread_testcancel ( void ) { unimp("pthread_testcancel"); }
-//void raise ( void ) { unimp("raise"); }
-void sem_close ( void ) { unimp("sem_close"); }
-void sem_open ( void ) { unimp("sem_open"); }
-//void sem_timedwait ( void ) { unimp("sem_timedwait"); }
-void sem_unlink ( void ) { unimp("sem_unlink"); }
-//void sigaction ( void ) { unimp("sigaction"); }
-//void siglongjmp ( void ) { unimp("siglongjmp"); }
-//void sigwait ( void ) { unimp("sigwait"); }
-
-void __pthread_clock_gettime ( void ) { unimp("__pthread_clock_gettime"); }
-void __pthread_clock_settime ( void ) { unimp("__pthread_clock_settime"); }
-#ifdef GLIBC_2_3
-/* Needed for Red Hat 8.0 */
-__asm__(".symver __pthread_clock_gettime,"
- "__pthread_clock_gettime@GLIBC_PRIVATE");
-__asm__(".symver __pthread_clock_settime,"
- "__pthread_clock_settime@GLIBC_PRIVATE");
-#endif
-
-
-#if 0
-void pthread_create@@GLIBC_2.1 ( void ) { unimp("pthread_create@@GLIBC_2.1"); }
-void pthread_create@GLIBC_2.0 ( void ) { unimp("pthread_create@GLIBC_2.0"); }
-
-void sem_wait@@GLIBC_2.1 ( void ) { unimp("sem_wait@@GLIBC_2.1"); }
-void sem_wait@GLIBC_2.0 ( void ) { unimp("sem_wait@GLIBC_2.0"); }
-
-void sem_trywait@@GLIBC_2.1 ( void ) { unimp("sem_trywait@@GLIBC_2.1"); }
-void sem_trywait@GLIBC_2.0 ( void ) { unimp("sem_trywait@GLIBC_2.0"); }
-
-void sem_post@@GLIBC_2.1 ( void ) { unimp("sem_post@@GLIBC_2.1"); }
-void sem_post@GLIBC_2.0 ( void ) { unimp("sem_post@GLIBC_2.0"); }
-
-void sem_destroy@@GLIBC_2.1 ( void ) { unimp("sem_destroy@@GLIBC_2.1"); }
-void sem_destroy@GLIBC_2.0 ( void ) { unimp("sem_destroy@GLIBC_2.0"); }
-void sem_getvalue@@GLIBC_2.1 ( void ) { unimp("sem_getvalue@@GLIBC_2.1"); }
-void sem_getvalue@GLIBC_2.0 ( void ) { unimp("sem_getvalue@GLIBC_2.0"); }
-void sem_init@@GLIBC_2.1 ( void ) { unimp("sem_init@@GLIBC_2.1"); }
-void sem_init@GLIBC_2.0 ( void ) { unimp("sem_init@GLIBC_2.0"); }
-
-void pthread_attr_init@@GLIBC_2.1 ( void ) { unimp("pthread_attr_init@@GLIBC_2.1"); }
-void pthread_attr_init@GLIBC_2.0 ( void ) { unimp("pthread_attr_init@GLIBC_2.0"); }
-#endif
-
-
-
-# define strong_alias(name, aliasname) \
- extern __typeof (name) aliasname __attribute__ ((alias (#name)));
-
-# define weak_alias(name, aliasname) \
- extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));
-
-//weak_alias(pthread_rwlock_destroy, __pthread_rwlock_destroy)
-//weak_alias(pthread_rwlock_init, __pthread_rwlock_init)
-//weak_alias(pthread_rwlock_tryrdlock, __pthread_rwlock_tryrdlock)
-//weak_alias(pthread_rwlock_trywrlock, __pthread_rwlock_trywrlock)
-//weak_alias(pthread_rwlock_wrlock, __pthread_rwlock_wrlock)
-weak_alias(_IO_ftrylockfile, ftrylockfile)
-
-//__attribute__((weak)) void pread ( void ) { vgPlain_unimp("pread"); }
-//__attribute__((weak)) void pwrite ( void ) { vgPlain_unimp("pwrite"); }
-//__attribute__((weak)) void msync ( void ) { vgPlain_unimp("msync"); }
-//__attribute__((weak)) void pause ( void ) { vgPlain_unimp("pause"); }
-//__attribute__((weak)) void recvfrom ( void ) { vgPlain_unimp("recvfrom"); }
-//__attribute__((weak)) void recvmsg ( void ) { vgPlain_unimp("recvmsg"); }
-//__attribute__((weak)) void sendmsg ( void ) { vgPlain_unimp("sendmsg"); }
-__attribute__((weak)) void tcdrain ( void ) { vgPlain_unimp("tcdrain"); }
-//--//__attribute__((weak)) void vfork ( void ) { vgPlain_unimp("vfork"); }
-
-//__attribute__((weak)) void pthread_attr_getguardsize ( void )
-// { vgPlain_unimp("pthread_attr_getguardsize"); }
-__attribute__((weak)) void pthread_attr_getstack ( void )
- { vgPlain_unimp("pthread_attr_getstack"); }
-__attribute__((weak)) void pthread_attr_getstackaddr ( void )
- { vgPlain_unimp("pthread_attr_getstackaddr"); }
-__attribute__((weak)) void pthread_attr_getstacksize ( void )
- { vgPlain_unimp("pthread_attr_getstacksize"); }
-//__attribute__((weak)) void pthread_attr_setguardsize ( void )
-// { vgPlain_unimp("pthread_attr_setguardsize"); }
-__attribute__((weak)) void pthread_attr_setstack ( void )
- { vgPlain_unimp("pthread_attr_setstack"); }
-__attribute__((weak)) void pthread_attr_setstackaddr ( void )
- { vgPlain_unimp("pthread_attr_setstackaddr"); }
-//__attribute__((weak)) void pthread_attr_setstacksize ( void )
-// { vgPlain_unimp("pthread_attr_setstacksize"); }
-//__attribute__((weak)) void pthread_getconcurrency ( void )
-// { vgPlain_unimp("pthread_getconcurrency"); }
-//__attribute__((weak)) void pthread_kill_other_threads_np ( void )
-// { vgPlain_unimp("pthread_kill_other_threads_np"); }
-__attribute__((weak)) void pthread_mutexattr_getkind_np ( void )
- { vgPlain_unimp("pthread_mutexattr_getkind_np"); }
-__attribute__((weak)) void pthread_mutexattr_getpshared ( void )
- { vgPlain_unimp("pthread_mutexattr_getpshared"); }
-__attribute__((weak)) void pthread_mutexattr_gettype ( void )
- { vgPlain_unimp("pthread_mutexattr_gettype"); }
-__attribute__((weak)) void pthread_mutexattr_setkind_np ( void )
- { vgPlain_unimp("pthread_mutexattr_setkind_np"); }
-//__attribute__((weak)) void pthread_mutexattr_setpshared ( void )
-// { vgPlain_unimp("pthread_mutexattr_setpshared"); }
-//__attribute__((weak)) void pthread_setconcurrency ( void )
-// { vgPlain_unimp("pthread_setconcurrency"); }
-__attribute__((weak)) void pthread_spin_destroy ( void )
- { vgPlain_unimp("pthread_spin_destroy"); }
-__attribute__((weak)) void pthread_spin_init ( void )
- { vgPlain_unimp("pthread_spin_init"); }
-__attribute__((weak)) void pthread_spin_lock ( void )
- { vgPlain_unimp("pthread_spin_lock"); }
-__attribute__((weak)) void pthread_spin_trylock ( void )
- { vgPlain_unimp("pthread_spin_trylock"); }
-__attribute__((weak)) void pthread_spin_unlock ( void )
- { vgPlain_unimp("pthread_spin_unlock"); }
-
-
-/*--------------------------------------------------------------------*/
-/*--- end vg_libpthread_unimp.c ---*/
-/*--------------------------------------------------------------------*/
+++ /dev/null
-
-##--------------------------------------------------------------------##
-##--- Support for doing system calls. ---##
-##--- vg_syscall.S ---##
-##--------------------------------------------------------------------##
-
-/*
- This file is part of Valgrind, an extensible x86 protected-mode
- emulator for monitoring program execution on x86-Unixes.
-
- Copyright (C) 2000-2003 Julian Seward
- jseward@acm.org
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License as
- published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful, but
- WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
- 02111-1307, USA.
-
- The GNU General Public License is contained in the file COPYING.
-*/
-
-#include "vg_constants.h"
-
-
-.globl VG_(do_syscall)
-
-# NOTE that this routine expects the simulated machine's state
-# to be in m_state_static.  Therefore it needs to be wrapped by
-# code which copies from baseBlock before the call, into
-# m_state_static, and back afterwards.
-
-VG_(do_syscall):
-	# Save all the int registers of the real machine's state on the
-	# simulator's stack.
- pushal
-
- # and save the real FPU state too
- fwait
-
- pushfl
- cmpb $0, VG_(have_ssestate)
- jz qq1nosse
- fxsave VG_(real_sse_state_saved_over_syscall)
- andl $0x0000FFBF, VG_(real_sse_state_saved_over_syscall)+24
- fxrstor VG_(real_sse_state_saved_over_syscall)
- jmp qq1merge
-qq1nosse:
- fnsave VG_(real_sse_state_saved_over_syscall)
- frstor VG_(real_sse_state_saved_over_syscall)
-qq1merge:
- popfl
-
-	# remember what the simulator's stack pointer is
- movl %esp, VG_(esp_saved_over_syscall)
-
-	# Now copy the simulated machine's state into the real one
-	# esp still refers to the simulator's stack
- pushfl
- cmpb $0, VG_(have_ssestate)
- jz qq2nosse
- andl $0x0000FFBF, VG_(m_state_static)+64+24
- fxrstor VG_(m_state_static)+64
- jmp qq2merge
-qq2nosse:
- frstor VG_(m_state_static)+64
-qq2merge:
- popfl
-
- movl VG_(m_state_static)+56, %eax
- pushl %eax
- popfl
-#if 0
- /* don't bother to save/restore seg regs across the kernel iface.
- Once we have our hands on them, our simulation of it is
- completely internal, and the kernel sees nothing.
- What's more, loading new values in to %cs seems
- to be impossible anyway. */
- movw VG_(m_state_static)+0, %cs
- movw VG_(m_state_static)+4, %ss
- movw VG_(m_state_static)+8, %ds
- movw VG_(m_state_static)+12, %es
- movw VG_(m_state_static)+16, %fs
- movw VG_(m_state_static)+20, %gs
-#endif
- movl VG_(m_state_static)+24, %eax
- movl VG_(m_state_static)+28, %ecx
- movl VG_(m_state_static)+32, %edx
- movl VG_(m_state_static)+36, %ebx
- movl VG_(m_state_static)+40, %esp
- movl VG_(m_state_static)+44, %ebp
- movl VG_(m_state_static)+48, %esi
- movl VG_(m_state_static)+52, %edi
-
-	# esp now refers to the simulatee's stack
- # Do the actual system call
- int $0x80
-
- # restore stack as soon as possible
-	# esp refers to the simulatee's stack
- movl %esp, VG_(m_state_static)+40
- movl VG_(esp_saved_over_syscall), %esp
-	# esp refers to the simulator's stack
-
- # ... and undo everything else.
- # Copy real state back to simulated state.
-#if 0
- movw %cs, VG_(m_state_static)+0
- movw %ss, VG_(m_state_static)+4
- movw %ds, VG_(m_state_static)+8
- movw %es, VG_(m_state_static)+12
- movw %fs, VG_(m_state_static)+16
- movw %gs, VG_(m_state_static)+20
-#endif
- movl %eax, VG_(m_state_static)+24
- movl %ecx, VG_(m_state_static)+28
- movl %edx, VG_(m_state_static)+32
- movl %ebx, VG_(m_state_static)+36
- movl %ebp, VG_(m_state_static)+44
- movl %esi, VG_(m_state_static)+48
- movl %edi, VG_(m_state_static)+52
- pushfl
- popl %eax
- movl %eax, VG_(m_state_static)+56
- fwait
-
- pushfl
- cmpb $0, VG_(have_ssestate)
- jz pp2nosse
- fxsave VG_(m_state_static)+64
- andl $0x0000FFBF, VG_(m_state_static)+64+24
- fxrstor VG_(m_state_static)+64
- jmp pp2merge
-pp2nosse:
- fnsave VG_(m_state_static)+64
- frstor VG_(m_state_static)+64
-pp2merge:
- popfl
-
- # Restore the state of the simulator
- pushfl
- cmpb $0, VG_(have_ssestate)
- jz pp1nosse
- andl $0x0000FFBF, VG_(real_sse_state_saved_over_syscall)+24
- fxrstor VG_(real_sse_state_saved_over_syscall)
- jmp pp1merge
-pp1nosse:
- frstor VG_(real_sse_state_saved_over_syscall)
-pp1merge:
- popfl
-
- popal
-
- ret
-
-##--------------------------------------------------------------------##
-##--- end vg_syscall.S ---##
-##--------------------------------------------------------------------##
nearly useless. With <code>-g</code>, you'll hopefully get messages
which point directly to the relevant source code lines.
+<p>
+Another flag you might like to consider, if you are working with
+C++, is <code>-fno-inline</code>. That makes it easier to see the
+function-call chain, which can help reduce confusion when navigating
+around large C++ apps. For whatever it's worth, debugging
+OpenOffice.org with Valgrind is a bit easier when using this flag.
+
<p>
You don't have to do this, but doing so helps Valgrind produce more
accurate and less confusing error reports. Chances are you're set up
level.
<p>
-Valgrind understands both the older "stabs" debugging format, used by
-gcc versions prior to 3.1, and the newer DWARF2 format used by gcc 3.1
-and later. We continue to refine and debug our debug-info readers,
-although the majority of effort will naturally enough go into the
+Valgrind understands line number information in three formats: the old
+"stabs" debugging format, used by gcc versions prior to 3.1, the newer
+DWARF2 format used by gcc 3.1 and later, and the obsolete DWARF1
+format. We continue to refine and debug our debug-info readers,
+although the majority of effort will naturally enough go into the
newer DWARF2 reader.
<p>
<a name="clientreq"></a>
<h3>2.7 The Client Request mechanism</h3>
-(NOTE 20021117: this subsection is illogical here now; it jumbles up
-core and skin issues. To be fixed.).
-
-(NOTE 20030318: the most important correction is that
-<code>valgrind.h</code> should not be included in your program, but
-instead <code>memcheck.h</code> (for the Memcheck and Addrcheck skins)
-or <code>helgrind.h</code> (for Helgrind).)
-
-<p>
Valgrind has a trapdoor mechanism via which the client program can
-pass all manner of requests and queries to Valgrind. Internally, this
-is used extensively to make malloc, free, signals, threads, etc, work,
-although you don't see that.
+pass all manner of requests and queries to Valgrind and the current skin.
+Internally, this is used extensively to make malloc, free, signals, threads,
+etc, work, although you don't see that.
<p>
For your convenience, a subset of these so-called client requests is
provided to allow you to tell Valgrind facts about the behaviour of
that Valgrind would not otherwise know about, and so allows clients to
get Valgrind to do arbitrary custom checks.
<p>
-Clients need to include a skin-specific header file to make
-this work. For most people this will be <code>memcheck.h</code>,
-which should be installed in the <code>include</code> directory
-when you did <code>make install</code>.
-<code>memcheck.h</code> is the correct file to use with both
-the Memcheck (default) and Addrcheck skins.
-<p>
-Note for those migrating from 1.0.X, that the old header file
-<code>valgrind.h</code> no longer works, and will cause a compilation
-failure (deliberately) if included.
+Clients need to include a header file to make this work. Which header file
+depends on which client requests you use. Some client requests are handled by
+the core, and are defined in the header file <code>valgrind.h</code>.
+Skin-specific header files are named after the skin, e.g.
+<code>memcheck.h</code>. All header files can be found in the
+<code>include</code> directory of wherever Valgrind was installed.
<p>
-The macros in <code>memcheck.h</code> have the magical property that
+The macros in these header files have the magical property that
they generate code in-line which Valgrind can spot. However, the code
does nothing when not run on Valgrind, so you are not forced to run
your program on Valgrind just because you use the macros in this file.
Also, you are not required to link your program with any extra
supporting libraries.
<p>
-A brief description of the available macros:
+Here is a brief description of the macros available in
+<code>valgrind.h</code>, which work with more than one skin (see the
+skin-specific documentation for explanations of the skin-specific macros).
<ul>
-<li><code>VALGRIND_MAKE_NOACCESS</code>,
- <code>VALGRIND_MAKE_WRITABLE</code> and
- <code>VALGRIND_MAKE_READABLE</code>. These mark address
- ranges as completely inaccessible, accessible but containing
- undefined data, and accessible and containing defined data,
- respectively. Subsequent errors may have their faulting
- addresses described in terms of these blocks. Returns a
- "block handle". Returns zero when not run on Valgrind.
-<p>
-<li><code>VALGRIND_DISCARD</code>: At some point you may want
- Valgrind to stop reporting errors in terms of the blocks
- defined by the previous three macros. To do this, the above
- macros return a small-integer "block handle". You can pass
- this block handle to <code>VALGRIND_DISCARD</code>. After
- doing so, Valgrind will no longer be able to relate
- addressing errors to the user-defined block associated with
- the handle. The permissions settings associated with the
- handle remain in place; this just affects how errors are
- reported, not whether they are reported. Returns 1 for an
- invalid handle and 0 for a valid handle (although passing
- invalid handles is harmless). Always returns 0 when not run
- on Valgrind.
-<p>
-<li><code>VALGRIND_CHECK_NOACCESS</code>,
- <code>VALGRIND_CHECK_WRITABLE</code> and
- <code>VALGRIND_CHECK_READABLE</code>: check immediately
- whether or not the given address range has the relevant
- property, and if not, print an error message. Also, for the
- convenience of the client, returns zero if the relevant
- property holds; otherwise, the returned value is the address
- of the first byte for which the property is not true.
- Always returns 0 when not run on Valgrind.
-<p>
-<li><code>VALGRIND_CHECK_NOACCESS</code>: a quick and easy way
- to find out whether Valgrind thinks a particular variable
- (lvalue, to be precise) is addressible and defined. Prints
- an error message if not. Returns no value.
-<p>
-<li><code>VALGRIND_MAKE_NOACCESS_STACK</code>: a highly
- experimental feature. Similarly to
- <code>VALGRIND_MAKE_NOACCESS</code>, this marks an address
- range as inaccessible, so that subsequent accesses to an
- address in the range gives an error. However, this macro
- does not return a block handle. Instead, all annotations
- created like this are reviewed at each client
- <code>ret</code> (subroutine return) instruction, and those
- which now define an address range block the client's stack
- pointer register (<code>%esp</code>) are automatically
- deleted.
- <p>
- In other words, this macro allows the client to tell
- Valgrind about red-zones on its own stack. Valgrind
- automatically discards this information when the stack
- retreats past such blocks. Beware: hacky and flaky, and
- probably interacts badly with the new pthread support.
-<p>
<li><code>RUNNING_ON_VALGRIND</code>: returns 1 if running on
Valgrind, 0 if running on the real CPU.
<p>
-<li><code>VALGRIND_DO_LEAK_CHECK</code>: run the memory leak detector
- right now. Returns no value. I guess this could be used to
- incrementally check for leaks between arbitrary places in the
- program's execution. Warning: not properly tested!
-<p>
<li><code>VALGRIND_DISCARD_TRANSLATIONS</code>: discard translations
of code in the specified address range. Useful if you are
debugging a JITter or some other dynamic code generation system.
fresh memory, and just call this occasionally to discard large
chunks of old code all at once.
<p>
- Warning: minimally tested, especially for the cache simulator.
+ Warning: minimally tested, especially for skins other than Memcheck.
+<p>
<li><code>VALGRIND_COUNT_ERRORS</code>: returns the number of errors
found so far by Valgrind. Can be useful in test harness code when
combined with the <code>--logfile-fd=-1</code> option; this runs
Valgrind silently, but the client program can detect when errors
- occur.
-<p>
-<li><code>VALGRIND_COUNT_LEAKS</code>: fills in the four arguments with
- the number of bytes of memory found by the previous leak check to
- be leaked, dubious, reachable and suppressed. Again, useful in
- test harness code, after calling <code>VALGRIND_DO_LEAK_CHECK</code>.
+ occur. Only useful for skins that report errors, e.g. it's useful for
+ Memcheck, but for Cachegrind it will always return zero because
+ Cachegrind doesn't report errors.
<p>
-<li><code>VALGRIND_MALLOCLIKE_BLOCK</code>: If your program manages its own
- memory instead of using the standard
- <code>malloc()</code>/<code>new</code>/<code>new[]</code>, Memcheck will
- not detect nearly as many errors, and the error messages won't be as
- informative. To improve this situation, use this macro just after your
- custom allocator allocates some new memory. See the comments in
- <code>memcheck/memcheck.h</code> for information on how to use it.
-<p>
-<li><code>VALGRIND_FREELIKE_BLOCK</code>: This should be used in conjunction
- with <code>VALGRIND_MALLOCLIKE_BLOCK</code>. Again, see
- <code>memcheck/memcheck.h</code> for information on how to use it.
+<li><code>VALGRIND_NON_SIMD_CALL[0123]</code>: executes a function of 0, 1, 2
+ or 3 args in the client program on the <i>real</i> CPU, not the virtual
+ CPU that Valgrind normally runs code on. These are used in various ways
+ internally to Valgrind. They might be useful to client programs.
+ <b>Warning:</b> Only use these if you <i>really</i> know what you are
+ doing.
<p>
</ul>
+Note that <code>valgrind.h</code> is included by all the skin-specific header
+files (such as <code>memcheck.h</code>), so you don't need to include it in
+your client if you include a skin-specific header.
<p>
<a name="pthreads"></a>
<h3>2.8 Support for POSIX Pthreads</h3>
-As of late April 02, Valgrind supports programs which use POSIX
-pthreads. Doing this has proved technically challenging but is now
-mostly complete. It works well enough for significant threaded
+Valgrind supports programs which use POSIX pthreads. Getting this to work was
+technically challenging, but support is now good enough for significant threaded
applications to work.
<p>
It works as follows: threaded apps are (dynamically) linked against
if you have some kind of concurrency, critical race, locking, or
similar, bugs.
<p>
-The current (valgrind-1.0 release) state of pthread support is as
-follows:
+As of the valgrind-1.0 release, the state of pthread support was as follows:
<ul>
<li>Mutexes, condition variables, thread-specific data,
<code>pthread_once</code>, reader-writer locks, semaphores,
rather than one for each thread. But hey.
</ul>
-
As of 18 May 02, the following threaded programs now work fine on my
RedHat 7.2 box: Opera 6.0Beta2, KNode in KDE 3.0, Mozilla-0.9.2.1 and
Galeon-0.11.3, both as supplied with RedHat 7.2. Also Mozilla 1.0RC2.
OpenOffice 1.0. MySQL 3.something (the current stable release).
-
+<p>
+As of the 2.0.0 release (5 Nov 03), we have continued to refine and
+stabilise pthread support. We've also increased the range of syscalls
+which are non-blocking.
<a name="signals"></a>
blocked in its own handler. Default actions for signals should work
as before. Etc, etc.
+<p>
+We only support signal handlers which take a single
+argument -- the signal number. Handlers which expect
+a second argument -- a pointer to a signal context structure
+-- will probably segfault when they dereference that pointer.
+That's something we should fix properly in the future.
+
<p>Under the hood, dealing with signals is a real pain, and Valgrind's
simulation leaves much to be desired. If your program does
way-strange stuff with signals, bad things may happen. If so, let me
<h3>2.10 Building and installing</h3>
We now use the standard Unix <code>./configure</code>,
-<code>make</code>, <code>make install</code> mechanism, and I have
-attempted to ensure that it works on machines with kernel 2.2 or 2.4
-and glibc 2.1.X or 2.2.X. I don't think there is much else to say.
+<code>make</code>, <code>make install</code> mechanism, and we have
+attempted to ensure that it works on machines with kernel 2.4
+and glibc 2.2.X and 2.3.X. I don't think there is much else to say.
There are no options apart from the usual <code>--prefix</code> that
you should give to <code>./configure</code>.
are permanently enabled, and I have no plans to disable them. If one
of these breaks, please mail me!
-<p>If you get an assertion failure on the expression
-<code>chunkSane(ch)</code> in <code>vg_free()</code> in
-<code>vg_malloc.c</code>, this may have happened because your program
-wrote off the end of a malloc'd block, or before its beginning.
-Valgrind should have emitted a proper message to that effect before
-dying in this way. This is a known problem which I should fix.
-
<p>
Read the file <code>FAQ.txt</code> in the source distribution, for
more advice about common problems, crashes, etc.
most programs actually work fine.
<p>Valgrind will run x86-GNU/Linux ELF dynamically linked binaries, on
-a kernel 2.2.X or 2.4.X system, subject to the following constraints:
+a kernel 2.4.X system, subject to the following constraints:
<ul>
- <li>No MMX, SSE, SSE2, 3DNow instructions. If the translator
- encounters these, Valgrind will simply give up. It may be
- possible to add support for them at a later time. Intel added a
- few instructions such as "cmov" to the integer instruction set
- on Pentium and later processors, and these are supported.
- Nevertheless it's safest to think of Valgrind as implementing
- the 486 instruction set.</li><br>
+  <li>Incomplete support for SSE and SSE2 instructions, and
+ no support for 3DNow instructions. If the translator
+ encounters these, Valgrind will simply give up.
+ </li>
<p>
<li>Pthreads support is improving, but there are still significant
against <code>libpthread.so</code>, so that Valgrind can
substitute its own implementation at program startup time. If
you're statically linked against it, things will fail
- badly.</li><br>
+ badly.</li>
<p>
<li>The memcheck skin assumes that the floating point registers are
immediately checks definedness of values loaded from memory by
floating-point loads. If you want to write code which copies
around possibly-uninitialised values, you must ensure these
- travel through the integer registers, not the FPU.</li><br>
+ travel through the integer registers, not the FPU.</li>
<p>
<li>If your program does its own memory management, rather than
using malloc/new/free/delete, it should still work, but
- Valgrind's error checking won't be so effective.</li><br>
+ Valgrind's error checking won't be so effective.
+ If you describe your program's memory management scheme
+ using "client requests" (Section 3.7 of this manual),
+     Valgrind can do better. Nevertheless, using malloc/new
+ and free/delete is still the best approach.
+ </li>
<p>
<li>Valgrind's signal simulation is not as robust as it could be.
if you do weird things with signals. Workaround: don't.
Programs that do non-POSIX signal tricks are in any case
inherently unportable, so should be avoided if
- possible.</li><br>
+ possible.</li>
<p>
<li>Programs which switch stacks are not well handled. Valgrind
large change in %esp is as a result of the program switching
stacks, or merely allocating a large object temporarily on the
current stack -- yet Valgrind needs to handle the two situations
- differently.</li><br>
+ differently.</li>
<p>
<li>x86 instructions, and system calls, have been implemented on
demand. So it's possible, although unlikely, that a program
will fall over with a message to that effect. If this happens,
please mail me ALL the details printed out, so I can try and
- implement the missing feature.</li><br>
+ implement the missing feature.</li>
<p>
<li>x86 floating point works correctly, but floating-point code may
run even more slowly than integer code, due to my simplistic
- approach to FPU emulation.</li><br>
+ approach to FPU emulation.</li>
<p>
<li>You can't Valgrind-ize statically linked binaries. Valgrind
relies on the dynamic-link mechanism to gain control at
- startup.</li><br>
+ startup.</li>
<p>
<li>Memory consumption of your program is majorly increased whilst
running under Valgrind. This is due to the large amount of
- adminstrative information maintained behind the scenes. Another
+ administrative information maintained behind the scenes. Another
cause is that Valgrind dynamically translates the original
executable. Translated, instrumented code is 14-16 times larger
than the original (!) so you can easily end up with 30+ MB of
<p>
</ul>
-Known platform-specific limitations, as of release 1.0.0:
-
-<ul>
- <li>On Red Hat 7.3, there have been reports of link errors (at
- program start time) for threaded programs using
- <code>__pthread_clock_gettime</code> and
- <code>__pthread_clock_settime</code>. This appears to be due to
- <code>/lib/librt-2.2.5.so</code> needing them. Unfortunately I
- do not understand enough about this problem to fix it properly,
- and I can't reproduce it on my test RedHat 7.3 system. Please
- mail me if you have more information / understanding. </li><br>
- <p>
-</ul>
-
<a name="howworks"></a>
<ul>
<li>The <b>memcheck</b> skin detects memory-management problems in
- your programs. It provides services identical to those supplied
- by the valgrind-1.0.X series. Memcheck is essentially
- valgrind-1.0.X packaged up into a skin.
- <p>
+ your programs.
All reads and writes of memory are checked, and calls to
malloc/new/free/delete are intercepted. As a result, memcheck can
detect the following problems:
lying undetected for long periods, then causing occasional,
difficult-to-diagnose crashes.
<p>
-<li><b>cachegrind</b> is a packaging of Nick Nethercote's cache
- profiler from valgrind-1.0.X. It performs detailed simulation of
+<li><b>cachegrind</b> is a cache profiler.
+ It performs detailed simulation of
the I1, D1 and L2 caches in your CPU and so can accurately
pinpoint the sources of cache misses in your code. If you desire,
it will show the number of cache misses, memory references and
presents these profiling results in a graphical and
easier-to-understand form.
<p>
-<li>The new <b>addrcheck</b> skin is a lightweight version of
+<li>The <b>addrcheck</b> skin is a lightweight version of
memcheck. It is identical to memcheck except
for the single detail that it does not do any uninitialised-value
checks. All of the other checks -- primarily the fine-grained
concentrate on what we believe to be a widely used platform: Linux on
x86s. Valgrind uses the standard Unix <code>./configure</code>,
<code>make</code>, <code>make install</code> mechanism, and we have
-attempted to ensure that it works on machines with kernel 2.2 or 2.4
-and glibc 2.1.X, 2.2.X or 2.3.1. This should cover the vast majority
-of modern Linux installations. Note that glibc-2.3.2+, with the
-NPTL (next generation posix threads?) package won't work. We hope to
-be able to fix this, but it won't be easy.
+attempted to ensure that it works on machines with kernel 2.4
+and glibc 2.2.X or 2.3.X. This should cover the vast majority
+of modern Linux installations.
<p>
export LD_ASSUME_KERNEL
fi
+# Check that the program looks ok
+is_prog=0
+
+if [ $# != 0 ] ; then
+
+ # Ensure the program exists. Ignore any error messages from 'which'.
+ which_prog=`which $1 2> /dev/null`
+  if [ z"$which_prog" = z ] ; then
+    echo "$0: '$1' not found in \$PATH, aborting."
+    exit 1
+ fi
+
+ if [ $# != 0 ] ; then
+ case `file -L "$which_prog"` in # must follow symlinks, hence -L
+ # Ensure the program isn't statically linked.
+ *"statically linked"*)
+ echo "\`$which_prog' is statically linked"
+ echo "Valgrind only works on dynamically linked executables; your"
+ echo "program must rely on at least one shared object for Valgrind"
+ echo "to work with it. Read FAQ #5 for more information."
+ exit 1 ;;
+ # Ensure that there are no setuid or gid flags
+ *:\ set?id\ ELF*)
+ echo "\`$which_prog' is suid/sgid."
+ echo "Valgrind can't handle these executables, as it"
+ echo "requires the LD_PRELOAD feature in order to work."
+ echo ""
+ echo "Remove those flags and try again."
+ echo ""
+ exit 1
+ ;;
+ esac
+ fi
+
+ is_prog=1
+fi
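The same static-link test the script performs can be run by hand; <code>/bin/sh</code> below is only an illustrative target, so substitute the program you care about:

```shell
# Sketch: reproduce the script's static-link check manually.
# /bin/sh is just an example target; substitute your own program.
prog=/bin/sh
case `file -L "$prog"` in           # -L follows symlinks, as the script does
    *"statically linked"*) echo "static"  ;;
    *)                     echo "dynamic" ;;
esac
```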
+
# A bit subtle. The LD_PRELOAD added entry must be absolute
# and not depend on LD_LIBRARY_PATH. This is so that we can
# mess with LD_LIBRARY_PATH for child processes, which makes
#LD_DEBUG=symbols
#export LD_DEBUG
-# If no command given, act like -h was given so vg_main.c prints out
-# the usage string. And pass to 'exec' tha name of any program -- it doesn't
-# matter which -- because it won't be run anyway (we use 'true').
-if [ $# != 0 ] ; then
+# Actually run the program, under Valgrind's control
+if [ $is_prog = 1 ] ; then
exec "$@"
else
- VG_ARGS="$VG_ARGS -h"
+ # If no command given, act like -h was given so vg_main.c prints out the
+ # usage string. And pass to 'exec' the name of any program -- it doesn't
+ # matter which -- because it won't be run anyway (we use 'true').
+ VG_ARGS="$VG_ARGS -h"
exec true
fi
VG_(emitB) ( (l >> 8) & 0x000000FF );
}
-__inline__ void VG_(emitL) ( UInt l )
+/* __inline__ */
+void VG_(emitL) ( UInt l )
{
VG_(emitB) ( (l) & 0x000000FF );
VG_(emitB) ( (l >> 8) & 0x000000FF );
use_flags: set of (real) flags the instruction uses
set_flags: set of (real) flags the instruction sets
*/
-__inline__
void VG_(new_emit) ( Bool interacts_with_simd_flags,
FlagSet use_flags, FlagSet set_flags )
{
);
}
+static void emit_SSE3a1 ( FlagSet uses_sflags,
+ FlagSet sets_sflags,
+ UChar first_byte,
+ UChar second_byte,
+ UChar third_byte,
+ UChar fourth_byte,
+ UChar fifth_byte,
+ Int ireg )
+{
+ VG_(new_emit)(True, uses_sflags, sets_sflags);
+ VG_(emitB) ( first_byte );
+ VG_(emitB) ( second_byte );
+ VG_(emitB) ( third_byte );
+ fourth_byte &= 0x38; /* mask out mod and rm fields */
+ emit_amode_regmem_reg ( ireg, fourth_byte >> 3 );
+ VG_(emitB) ( fifth_byte );
+ if (dis)
+ VG_(printf)("\n\t\tsse3a1-0x%x:0x%x:0x%x:0x%x:0x%x-(%s)\n",
+ (UInt)first_byte, (UInt)second_byte,
+ (UInt)third_byte, (UInt)fourth_byte,
+ (UInt)fifth_byte,
+ nameIReg(4,ireg) );
+}
+
static void emit_SSE4 ( FlagSet uses_sflags,
FlagSet sets_sflags,
UChar first_byte,
u->val3 );
break;
+ case SSE3a1_MemRd:
+ vg_assert(u->size == 16);
+ vg_assert(u->tag1 == Lit16);
+ vg_assert(u->tag2 == Lit16);
+ vg_assert(u->tag3 == RealReg);
+ vg_assert(!anyFlagUse(u));
+ if (!(*sselive)) {
+ emit_get_sse_state();
+ *sselive = True;
+ }
+ emit_SSE3a1 ( u->flags_r, u->flags_w,
+ (u->val1 >> 8) & 0xFF,
+ u->val1 & 0xFF,
+ (u->val2 >> 8) & 0xFF,
+ u->val2 & 0xFF,
+ (u->lit32 >> 8) & 0xFF,
+ u->val3 );
+ break;
+
case SSE5:
vg_assert(u->size == 0);
vg_assert(u->tag1 == Lit16);
vg_assert(u->tag1 == Lit16);
vg_assert(u->tag2 == Lit16);
vg_assert(u->tag3 == NoValue);
- vg_assert(!anyFlagUse(u));
+ vg_assert(!readFlagUse(u));
if (!(*sselive)) {
emit_get_sse_state();
*sselive = True;
/* Holds malloc'd but not freed blocks. Static, so zero-inited by default. */
-#define VG_N_CHAINS 997
+#define VG_N_CHAINS 4999 /* a prime number */
#define VG_CHAIN_NO(aa) (((UInt)(aa)) % VG_N_CHAINS)
popl %eax
ret
+/*
+ Fetch a byte/word/dword from given port
+ On entry:
+ size 1, 2 or 4
+ port, replaced by result
+ RA
+*/
+.global VG_(helper_IN)
+VG_(helper_IN):
+ pushl %eax
+ pushl %edx
+ movl 16(%esp), %eax
+ movl 12(%esp), %edx
+ cmpl $4, %eax
+ je in_dword
+ cmpl $2, %eax
+ je in_word
+in_byte:
+ inb (%dx), %al
+ jmp in_done
+in_word:
+  inw (%dx), %ax
+ jmp in_done
+in_dword:
+ inl (%dx),%eax
+in_done:
+ movl %eax,12(%esp)
+ popl %edx
+ popl %eax
+ ret
+
+/*
+ Write a byte/word/dword to given port
+ On entry:
+ size 1, 2 or 4
+ port
+ value
+ RA
+*/
+.global VG_(helper_OUT)
+VG_(helper_OUT):
+ pushl %eax
+ pushl %edx
+ movl 16(%esp), %edx
+ movl 12(%esp), %eax
+ cmpl $4, 20(%esp)
+ je out_dword
+ cmpl $2, 20(%esp)
+ je out_word
+out_byte:
+ outb %al,(%dx)
+ jmp out_done
+out_word:
+  outw %ax,(%dx)
+ jmp out_done
+out_dword:
+ outl %eax,(%dx)
+out_done:
+ popl %edx
+ popl %eax
+ ret
+
/* Do the CPUID instruction.
On entry:
backtrace. */
#define VG_DEEPEST_BACKTRACE 50
-/* Number of lists in which we keep track of malloc'd but not free'd
- blocks. Should be prime. */
-#define VG_N_MALLOCLISTS 997
-
/* Number of lists in which we keep track of ExeContexts. Should be
prime. */
-#define VG_N_EC_LISTS /*997*/ 4999
+#define VG_N_EC_LISTS 4999 /* a prime number */
/* Defines the thread-scheduling timeslice, in terms of the number of
basic blocks we attempt to run each thread for. Smaller values
#define VG_N_CLEANUPSTACK 16
/* Number of entries in each thread's fork-handler stack. */
-#define VG_N_FORKHANDLERSTACK 2
+#define VG_N_FORKHANDLERSTACK 4
/* Max number of callers for context in a suppression. */
#define VG_N_SUPP_CALLERS 4
extern void VG_(helper_shrdl);
extern void VG_(helper_shrdw);
+extern void VG_(helper_IN);
+extern void VG_(helper_OUT);
+
extern void VG_(helper_RDTSC);
extern void VG_(helper_CPUID);
/* This has some nasty duplication of stuff from vg_libpthread.c */
+#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <stdio.h>
my_assert(res == 0);
}
+static
+void my_free ( void* ptr )
+{
+ int res;
+ VALGRIND_MAGIC_SEQUENCE(res, (-1) /* default */,
+ VG_USERREQ__FREE, ptr, 0, 0, 0);
+ my_assert(res == 0);
+}
+
+static
+void* my_malloc ( int nbytes )
+{
+ void* res;
+ VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
+ VG_USERREQ__MALLOC, nbytes, 0, 0, 0);
+ my_assert(res != (void*)0);
+ return res;
+}
+
/* ================================ poll ================================ */
/* This is the master implementation of poll(). It blocks only the
/*struct timeval*/ void *timeoutV )
{
unsigned int ms_now, ms_end;
- int res;
- fd_set rfds_copy;
- fd_set wfds_copy;
- fd_set xfds_copy;
+ int res, fdsetsz;
+ fd_set *rfds_copy = NULL;
+ fd_set *wfds_copy = NULL;
+ fd_set *xfds_copy = NULL;
struct vki_timeval t_now;
struct vki_timeval zero_timeout;
struct vki_timespec nanosleep_interval;
}
}
+# define howmany(x,y) (((x)+((y)-1))/(y)) /* Borrowed from OpenSSH */
+
+   /* select() may be used with fd sets bigger than FD_SETSIZE, so
+    * allocate the copies dynamically, sized to hold descriptors
+    * 0..n, rather than risk reading or writing past a fixed-size
+    * fd_set.  See the OpenBSD select(2) man page. */
+ fdsetsz = howmany(n+1, NFDBITS) * sizeof(fd_mask);
+ if (rfds) rfds_copy = (fd_set*)my_malloc(fdsetsz);
+ if (wfds) wfds_copy = (fd_set*)my_malloc(fdsetsz);
+ if (xfds) xfds_copy = (fd_set*)my_malloc(fdsetsz);
+
+#undef howmany
+
/* If a timeout was specified, set ms_end to be the end millisecond
counter [wallclock] time. */
if (timeout) {
/* These could be trashed each time round the loop, so restore
them each time. */
- if (rfds) rfds_copy = *rfds;
- if (wfds) wfds_copy = *wfds;
- if (xfds) xfds_copy = *xfds;
+ if (rfds) memcpy(rfds_copy, rfds, fdsetsz);
+ if (wfds) memcpy(wfds_copy, wfds, fdsetsz);
+ if (xfds) memcpy(xfds_copy, xfds, fdsetsz);
zero_timeout.tv_sec = zero_timeout.tv_usec = 0;
res = do_syscall_select( n,
- rfds ? (vki_fd_set*)(&rfds_copy) : NULL,
- wfds ? (vki_fd_set*)(&wfds_copy) : NULL,
- xfds ? (vki_fd_set*)(&xfds_copy) : NULL,
+ rfds ? (vki_fd_set*)(rfds_copy) : NULL,
+ wfds ? (vki_fd_set*)(wfds_copy) : NULL,
+ xfds ? (vki_fd_set*)(xfds_copy) : NULL,
& zero_timeout );
if (is_kerror(res)) {
/* Some kind of error (including EINTR). Set errno and
return. The sets are unspecified in this case. */
* (__errno_location()) = -res;
+ if (rfds_copy) my_free(rfds_copy);
+ if (wfds_copy) my_free(wfds_copy);
+ if (xfds_copy) my_free(xfds_copy);
return -1;
}
if (res > 0) {
- /* one or more fds is ready. Copy out resulting sets and
- return. */
- if (rfds) *rfds = rfds_copy;
- if (wfds) *wfds = wfds_copy;
- if (xfds) *xfds = xfds_copy;
+ /* one or more fds is ready. Copy out resulting sets and return. */
+      if (rfds) memcpy(rfds, rfds_copy, fdsetsz);
+ if (wfds) memcpy(wfds, wfds_copy, fdsetsz);
+ if (xfds) memcpy(xfds, xfds_copy, fdsetsz);
+ if (rfds_copy) my_free(rfds_copy);
+ if (wfds_copy) my_free(wfds_copy);
+ if (xfds_copy) my_free(xfds_copy);
return res;
}
/* The nanosleep was interrupted by a signal. So we do the
same. */
* (__errno_location()) = EINTR;
+ if (rfds_copy) my_free(rfds_copy);
+ if (wfds_copy) my_free(wfds_copy);
+ if (xfds_copy) my_free(xfds_copy);
return -1;
}
0, 0, 0, 0);
my_assert(ms_now != 0xFFFFFFFF);
if (ms_now >= ms_end) {
- /* timeout; nothing interesting happened. */
- if (rfds) FD_ZERO(rfds);
- if (wfds) FD_ZERO(wfds);
- if (xfds) FD_ZERO(xfds);
+      /* Use memset, not FD_ZERO: FD_ZERO only clears the first
+         FD_SETSIZE bits, and our sets may be bigger than that */
+ if (rfds) memset(rfds, 0, fdsetsz);
+ if (wfds) memset(wfds, 0, fdsetsz);
+ if (xfds) memset(xfds, 0, fdsetsz);
+ if (rfds_copy) my_free(rfds_copy);
+ if (wfds_copy) my_free(wfds_copy);
+ if (xfds_copy) my_free(xfds_copy);
return 0;
}
}
-
}
}
* this logic will become even more tortured. Wait until we really
* need it.
*/
-static inline int _open(const char *pathname, int flags, mode_t mode,
- int (*openp)(const char *, int, mode_t))
+static int _open(const char *pathname, int flags, mode_t mode,
+ int (*openp)(const char *, int, mode_t))
{
int fd;
struct stat st;
strong_alias(__pthread_mutexattr_settype, __pthread_mutexattr_setkind_np)
weak_alias(__pthread_mutexattr_setkind_np, pthread_mutexattr_setkind_np)
+/* POSIX spinlocks, taken from glibc linuxthreads/sysdeps/i386 */
+
+typedef volatile int pthread_spinlock_t; /* Huh? Guarded by __USE_XOPEN2K */
+
+int pthread_spin_init(pthread_spinlock_t *lock, int pshared)
+{
+ /* We can ignore the `pshared' parameter. Since we are busy-waiting
+ all processes which can access the memory location `lock' points
+ to can use the spinlock. */
+ *lock = 1;
+ return 0;
+}
+
+int pthread_spin_lock(pthread_spinlock_t *lock)
+{
+ asm volatile
+ ("\n"
+ "1:\n\t"
+ "lock; decl %0\n\t"
+ "js 2f\n\t"
+ ".section .text.spinlock,\"ax\"\n"
+ "2:\n\t"
+ "cmpl $0,%0\n\t"
+ "rep; nop\n\t"
+ "jle 2b\n\t"
+ "jmp 1b\n\t"
+ ".previous"
+ : "=m" (*lock));
+ return 0;
+}
+
+int pthread_spin_unlock(pthread_spinlock_t *lock)
+{
+ asm volatile
+ ("movl $1,%0"
+ : "=m" (*lock));
+ return 0;
+}
+
+int pthread_spin_destroy(pthread_spinlock_t *lock)
+{
+ /* Nothing to do. */
+ return 0;
+}
+
+int pthread_spin_trylock(pthread_spinlock_t *lock)
+{
+ int oldval;
+
+ asm volatile
+ ("xchgl %0,%1"
+ : "=r" (oldval), "=m" (*lock)
+ : "0" (0));
+ return oldval > 0 ? 0 : EBUSY;
+}
/*--------------------------------------------------------------------*/
/*--- end vg_libpthread.c ---*/
// { vgPlain_unimp("pthread_mutexattr_setpshared"); }
//__attribute__((weak)) void pthread_setconcurrency ( void )
// { vgPlain_unimp("pthread_setconcurrency"); }
-__attribute__((weak)) void pthread_spin_destroy ( void )
- { vgPlain_unimp("pthread_spin_destroy"); }
-__attribute__((weak)) void pthread_spin_init ( void )
- { vgPlain_unimp("pthread_spin_init"); }
-__attribute__((weak)) void pthread_spin_lock ( void )
- { vgPlain_unimp("pthread_spin_lock"); }
-__attribute__((weak)) void pthread_spin_trylock ( void )
- { vgPlain_unimp("pthread_spin_trylock"); }
-__attribute__((weak)) void pthread_spin_unlock ( void )
- { vgPlain_unimp("pthread_spin_unlock"); }
+//__attribute__((weak)) void pthread_spin_destroy ( void )
+// { vgPlain_unimp("pthread_spin_destroy"); }
+//__attribute__((weak)) void pthread_spin_init ( void )
+// { vgPlain_unimp("pthread_spin_init"); }
+//__attribute__((weak)) void pthread_spin_lock ( void )
+// { vgPlain_unimp("pthread_spin_lock"); }
+//__attribute__((weak)) void pthread_spin_trylock ( void )
+// { vgPlain_unimp("pthread_spin_trylock"); }
+//__attribute__((weak)) void pthread_spin_unlock ( void )
+// { vgPlain_unimp("pthread_spin_unlock"); }
/*--------------------------------------------------------------------*/
Int VGOFF_(helper_shldw) = INVALID_OFFSET;
Int VGOFF_(helper_shrdl) = INVALID_OFFSET;
Int VGOFF_(helper_shrdw) = INVALID_OFFSET;
+Int VGOFF_(helper_IN) = INVALID_OFFSET;
+Int VGOFF_(helper_OUT) = INVALID_OFFSET;
Int VGOFF_(helper_RDTSC) = INVALID_OFFSET;
Int VGOFF_(helper_CPUID) = INVALID_OFFSET;
Int VGOFF_(helper_BSWAP) = INVALID_OFFSET;
/* Helper functions. */
VGOFF_(helper_idiv_64_32)
- = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_64_32) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_64_32));
VGOFF_(helper_div_64_32)
- = alloc_BaB_1_set( (Addr) & VG_(helper_div_64_32) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_div_64_32));
VGOFF_(helper_idiv_32_16)
- = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_32_16) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_32_16));
VGOFF_(helper_div_32_16)
- = alloc_BaB_1_set( (Addr) & VG_(helper_div_32_16) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_div_32_16));
VGOFF_(helper_idiv_16_8)
- = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_16_8) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_idiv_16_8));
VGOFF_(helper_div_16_8)
- = alloc_BaB_1_set( (Addr) & VG_(helper_div_16_8) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_div_16_8));
VGOFF_(helper_imul_32_64)
- = alloc_BaB_1_set( (Addr) & VG_(helper_imul_32_64) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_imul_32_64));
VGOFF_(helper_mul_32_64)
- = alloc_BaB_1_set( (Addr) & VG_(helper_mul_32_64) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_mul_32_64));
VGOFF_(helper_imul_16_32)
- = alloc_BaB_1_set( (Addr) & VG_(helper_imul_16_32) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_imul_16_32));
VGOFF_(helper_mul_16_32)
- = alloc_BaB_1_set( (Addr) & VG_(helper_mul_16_32) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_mul_16_32));
VGOFF_(helper_imul_8_16)
- = alloc_BaB_1_set( (Addr) & VG_(helper_imul_8_16) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_imul_8_16));
VGOFF_(helper_mul_8_16)
- = alloc_BaB_1_set( (Addr) & VG_(helper_mul_8_16) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_mul_8_16));
VGOFF_(helper_CLD)
- = alloc_BaB_1_set( (Addr) & VG_(helper_CLD) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_CLD));
VGOFF_(helper_STD)
- = alloc_BaB_1_set( (Addr) & VG_(helper_STD) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_STD));
VGOFF_(helper_get_dirflag)
- = alloc_BaB_1_set( (Addr) & VG_(helper_get_dirflag) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_get_dirflag));
VGOFF_(helper_CLC)
- = alloc_BaB_1_set( (Addr) & VG_(helper_CLC) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_CLC));
VGOFF_(helper_STC)
- = alloc_BaB_1_set( (Addr) & VG_(helper_STC) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_STC));
VGOFF_(helper_shldl)
- = alloc_BaB_1_set( (Addr) & VG_(helper_shldl) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_shldl));
VGOFF_(helper_shldw)
- = alloc_BaB_1_set( (Addr) & VG_(helper_shldw) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_shldw));
VGOFF_(helper_shrdl)
- = alloc_BaB_1_set( (Addr) & VG_(helper_shrdl) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_shrdl));
VGOFF_(helper_shrdw)
- = alloc_BaB_1_set( (Addr) & VG_(helper_shrdw) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_shrdw));
VGOFF_(helper_RDTSC)
- = alloc_BaB_1_set( (Addr) & VG_(helper_RDTSC) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_RDTSC));
VGOFF_(helper_CPUID)
- = alloc_BaB_1_set( (Addr) & VG_(helper_CPUID) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_CPUID));
VGOFF_(helper_bsf)
- = alloc_BaB_1_set( (Addr) & VG_(helper_bsf) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_bsf));
VGOFF_(helper_bsr)
- = alloc_BaB_1_set( (Addr) & VG_(helper_bsr) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_bsr));
VGOFF_(helper_fstsw_AX)
- = alloc_BaB_1_set( (Addr) & VG_(helper_fstsw_AX) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_fstsw_AX));
VGOFF_(helper_SAHF)
- = alloc_BaB_1_set( (Addr) & VG_(helper_SAHF) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_SAHF));
VGOFF_(helper_LAHF)
- = alloc_BaB_1_set( (Addr) & VG_(helper_LAHF) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_LAHF));
VGOFF_(helper_DAS)
- = alloc_BaB_1_set( (Addr) & VG_(helper_DAS) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_DAS));
VGOFF_(helper_DAA)
- = alloc_BaB_1_set( (Addr) & VG_(helper_DAA) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_DAA));
+ VGOFF_(helper_IN)
+ = alloc_BaB_1_set( (Addr) & VG_(helper_IN));
+ VGOFF_(helper_OUT)
+ = alloc_BaB_1_set( (Addr) & VG_(helper_OUT));
VGOFF_(helper_undefined_instruction)
- = alloc_BaB_1_set( (Addr) & VG_(helper_undefined_instruction) );
+ = alloc_BaB_1_set( (Addr) & VG_(helper_undefined_instruction));
/* Allocate slots for noncompact helpers */
assign_helpers_in_baseBlock(VG_(n_noncompact_helpers),
if (VG_(clo_verbosity) > 1) {
if (VG_(clo_log_to) != VgLogTo_Fd)
VG_(message)(Vg_UserMsg, "");
+ VG_(message)(Vg_UserMsg, "Command line:");
+ for (i = 0; i < VG_(client_argc); i++)
+ VG_(message)(Vg_UserMsg, " %s", VG_(client_argv)[i]);
+
VG_(message)(Vg_UserMsg, "Startup, with flags:");
for (i = 0; i < argc; i++) {
VG_(message)(Vg_UserMsg, " %s", argv[i]);
it to $(libdir)/lib/valgrinq, so as to make our libpthread.so
disappear.
*/
+static void slideleft ( Char* s )
+{
+ vg_assert(s && (*s == ' ' || *s == ':'));
+ while (True) {
+ s[0] = s[1];
+ if (s[0] == '\0') break;
+ s++;
+ }
+}
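The slideleft() helper above shifts a string left by one character, in place. A standalone sketch of the same trick, plus the leading-junk stripping loop used by the mash routine below (names here are hypothetical, not Valgrind's):

```c
#include <assert.h>
#include <string.h>

/* Shift a string left by one, overwriting the character at s
   (assumed to be a space or ':'), as slideleft() above does. */
static void slideleft_sketch(char *s)
{
    assert(s && (*s == ' ' || *s == ':'));
    while (1) {
        s[0] = s[1];
        if (s[0] == '\0') break;
        s++;
    }
}

/* Strip the leading run of spaces plus one optional ':', mirroring
   the cleanup applied to LD_PRELOAD / LD_LIBRARY_PATH. */
static void strip_leading(char *s)
{
    while (s[0] == ' ') slideleft_sketch(s);
    if (s[0] == ':') slideleft_sketch(s);
}
```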
+
+
void VG_(mash_LD_PRELOAD_and_LD_LIBRARY_PATH) ( Char* ld_preload_str,
Char* ld_library_path_str )
{
/* LD_LIBRARY_PATH: "<coredir>:Y" --> " :Y" */
for (i = 0; i < coredir_len; i++)
coredir2[i] = ' ';
-
+
+ /* Zap the leading spaces and : in both strings. */
+ while (ld_preload_str[0] == ' ') slideleft(ld_preload_str);
+ if (ld_preload_str[0] == ':') slideleft(ld_preload_str);
+
+ while (ld_library_path_str[0] == ' ') slideleft(ld_library_path_str);
+ if (ld_library_path_str[0] == ':') slideleft(ld_library_path_str);
+
/* VG_(printf)("post:\n%s\n%s\n", ld_preload_str, ld_library_path_str); */
return;
UInt foffset;
UChar rr, ww, xx, pp, ch, tmp;
- if (read_from_file) {
+ static Int depth = 0;
+
+ if (read_from_file && depth == 0) {
VG_(read_procselfmaps_contents)();
}
+ depth++;
if (0)
VG_(message)(Vg_DebugMsg, "raw:\n%s", procmap_buf );
i = i_eol + 1;
}
+ depth--;
}
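The static depth counter added above is a re-entrancy guard: only the outermost call re-reads /proc/self/maps, and nested (recursive) invocations reuse the buffer already read. The pattern in isolation, with hypothetical names:

```c
#include <assert.h>

static int refresh_count = 0;

/* Stands in for VG_(read_procselfmaps_contents)(). */
static void maybe_refresh(void) { refresh_count++; }

/* Only the outermost call performs the expensive refresh; any
   recursive re-entry while depth > 0 skips it. */
static void parse_sketch(int levels)
{
    static int depth = 0;
    if (depth == 0)
        maybe_refresh();
    depth++;
    if (levels > 0)
        parse_sketch(levels - 1);   /* recursive re-entry */
    depth--;
}
```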
/*--------------------------------------------------------------------*/
return __builtin_new(n);
}
+/* operator new(unsigned, std::nothrow_t const&) */
+void* _ZnwjRKSt9nothrow_t ( Int n )
+{
+ return __builtin_new(n);
+}
+
void* __builtin_vec_new ( Int n )
{
void* v;
return __builtin_vec_new(n);
}
+void* _ZnajRKSt9nothrow_t ( Int n )
+{
+ return __builtin_vec_new(n);
+}
+
void free ( void* p )
{
MALLOC_TRACE("free[simd=%d](%p)\n",
return;
bad_signo:
- if (VG_(needs).core_errors)
+ if (VG_(needs).core_errors && VG_(clo_verbosity) >= 1)
VG_(message)(Vg_UserMsg,
"Warning: bad signal number %d in __NR_sigaction.",
signo);
return;
bad_sigkill_or_sigstop:
- if (VG_(needs).core_errors)
+ if (VG_(needs).core_errors && VG_(clo_verbosity) >= 1)
VG_(message)(Vg_UserMsg,
"Warning: attempt to set %s handler in __NR_sigaction.",
signo == VKI_SIGKILL ? "SIGKILL" : "SIGSTOP" );
*/
#include "vg_constants.h"
+#include "config.h"
#---------------------------------------------------------------------
.long 0
.text
+.type VG_(swizzle_esp_then_start_GDB),@function
.global VG_(swizzle_esp_then_start_GDB)
VG_(swizzle_esp_then_start_GDB):
+#ifdef HAVE_GAS_CFI
+ .cfi_startproc
+#endif
pushal
# remember the simulators current stack/frame pointers
# push %EBP. This is a faked %ebp-chain pointer.
pushl %eax
+#ifdef HAVE_GAS_CFI
+ .cfi_adjust_cfa_offset 0x4
+#endif
movl %esp, %ebp
+#ifdef HAVE_GAS_CFI
+ .cfi_def_cfa_register ebp
+#endif
call VG_(start_GDB_whilst_on_client_stack)
# restore the simulators stack/frame pointer
movl vg_ebp_saved_over_GDB_start, %ebp
movl vg_esp_saved_over_GDB_start, %esp
+#ifdef HAVE_GAS_CFI
+ .cfi_adjust_cfa_offset -0x4
+#endif
popal
ret
+#ifdef HAVE_GAS_CFI
+ .cfi_endproc
+#endif
# gcc puts this construction at the end of every function. I think it
# allows the linker to figure out the size of the function. So we do
recently, in which case we find the old index and return that.
This avoids the most egregious duplications. */
-static __inline__
+static
Int addStr ( SegInfo* si, Char* str )
{
# define EMPTY 0xffffffff
/* Top-level place to call to add a source-location mapping entry. */
-static __inline__
+static
void addLineInfo ( SegInfo* si,
Int fnmoff,
Addr this,
}
+/*------------------------------------------------------------*/
+/*--- Read DWARF1 format debug info. ---*/
+/*------------------------------------------------------------*/
+
+/* The following three enums (dwarf_tag, dwarf_form, dwarf_attribute)
+ are taken from the file include/elf/dwarf.h in the GNU gdb-6.0
+ sources, which are Copyright 1992, 1993, 1995, 1999 Free Software
+ Foundation, Inc and naturally licensed under the GNU General Public
+ License version 2 or later.
+*/
+
+/* Tag names and codes. */
+
+enum dwarf_tag {
+ TAG_padding = 0x0000,
+ TAG_array_type = 0x0001,
+ TAG_class_type = 0x0002,
+ TAG_entry_point = 0x0003,
+ TAG_enumeration_type = 0x0004,
+ TAG_formal_parameter = 0x0005,
+ TAG_global_subroutine = 0x0006,
+ TAG_global_variable = 0x0007,
+ /* 0x0008 -- reserved */
+ /* 0x0009 -- reserved */
+ TAG_label = 0x000a,
+ TAG_lexical_block = 0x000b,
+ TAG_local_variable = 0x000c,
+ TAG_member = 0x000d,
+ /* 0x000e -- reserved */
+ TAG_pointer_type = 0x000f,
+ TAG_reference_type = 0x0010,
+ TAG_compile_unit = 0x0011,
+ TAG_string_type = 0x0012,
+ TAG_structure_type = 0x0013,
+ TAG_subroutine = 0x0014,
+ TAG_subroutine_type = 0x0015,
+ TAG_typedef = 0x0016,
+ TAG_union_type = 0x0017,
+ TAG_unspecified_parameters = 0x0018,
+ TAG_variant = 0x0019,
+ TAG_common_block = 0x001a,
+ TAG_common_inclusion = 0x001b,
+ TAG_inheritance = 0x001c,
+ TAG_inlined_subroutine = 0x001d,
+ TAG_module = 0x001e,
+ TAG_ptr_to_member_type = 0x001f,
+ TAG_set_type = 0x0020,
+ TAG_subrange_type = 0x0021,
+ TAG_with_stmt = 0x0022,
+
+ /* GNU extensions */
+
+ TAG_format_label = 0x8000, /* for FORTRAN 77 and Fortran 90 */
+ TAG_namelist = 0x8001, /* For Fortran 90 */
+ TAG_function_template = 0x8002, /* for C++ */
+ TAG_class_template = 0x8003 /* for C++ */
+};
+
+/* Form names and codes. */
+
+enum dwarf_form {
+ FORM_ADDR = 0x1,
+ FORM_REF = 0x2,
+ FORM_BLOCK2 = 0x3,
+ FORM_BLOCK4 = 0x4,
+ FORM_DATA2 = 0x5,
+ FORM_DATA4 = 0x6,
+ FORM_DATA8 = 0x7,
+ FORM_STRING = 0x8
+};
+
+/* Attribute names and codes. */
+
+enum dwarf_attribute {
+ AT_sibling = (0x0010|FORM_REF),
+ AT_location = (0x0020|FORM_BLOCK2),
+ AT_name = (0x0030|FORM_STRING),
+ AT_fund_type = (0x0050|FORM_DATA2),
+ AT_mod_fund_type = (0x0060|FORM_BLOCK2),
+ AT_user_def_type = (0x0070|FORM_REF),
+ AT_mod_u_d_type = (0x0080|FORM_BLOCK2),
+ AT_ordering = (0x0090|FORM_DATA2),
+ AT_subscr_data = (0x00a0|FORM_BLOCK2),
+ AT_byte_size = (0x00b0|FORM_DATA4),
+ AT_bit_offset = (0x00c0|FORM_DATA2),
+ AT_bit_size = (0x00d0|FORM_DATA4),
+ /* (0x00e0|FORM_xxxx) -- reserved */
+ AT_element_list = (0x00f0|FORM_BLOCK4),
+ AT_stmt_list = (0x0100|FORM_DATA4),
+ AT_low_pc = (0x0110|FORM_ADDR),
+ AT_high_pc = (0x0120|FORM_ADDR),
+ AT_language = (0x0130|FORM_DATA4),
+ AT_member = (0x0140|FORM_REF),
+ AT_discr = (0x0150|FORM_REF),
+ AT_discr_value = (0x0160|FORM_BLOCK2),
+ /* (0x0170|FORM_xxxx) -- reserved */
+ /* (0x0180|FORM_xxxx) -- reserved */
+ AT_string_length = (0x0190|FORM_BLOCK2),
+ AT_common_reference = (0x01a0|FORM_REF),
+ AT_comp_dir = (0x01b0|FORM_STRING),
+ AT_const_value_string = (0x01c0|FORM_STRING),
+ AT_const_value_data2 = (0x01c0|FORM_DATA2),
+ AT_const_value_data4 = (0x01c0|FORM_DATA4),
+ AT_const_value_data8 = (0x01c0|FORM_DATA8),
+ AT_const_value_block2 = (0x01c0|FORM_BLOCK2),
+ AT_const_value_block4 = (0x01c0|FORM_BLOCK4),
+ AT_containing_type = (0x01d0|FORM_REF),
+ AT_default_value_addr = (0x01e0|FORM_ADDR),
+ AT_default_value_data2 = (0x01e0|FORM_DATA2),
+ AT_default_value_data4 = (0x01e0|FORM_DATA4),
+ AT_default_value_data8 = (0x01e0|FORM_DATA8),
+ AT_default_value_string = (0x01e0|FORM_STRING),
+ AT_friends = (0x01f0|FORM_BLOCK2),
+ AT_inline = (0x0200|FORM_STRING),
+ AT_is_optional = (0x0210|FORM_STRING),
+ AT_lower_bound_ref = (0x0220|FORM_REF),
+ AT_lower_bound_data2 = (0x0220|FORM_DATA2),
+ AT_lower_bound_data4 = (0x0220|FORM_DATA4),
+ AT_lower_bound_data8 = (0x0220|FORM_DATA8),
+ AT_private = (0x0240|FORM_STRING),
+ AT_producer = (0x0250|FORM_STRING),
+ AT_program = (0x0230|FORM_STRING),
+ AT_protected = (0x0260|FORM_STRING),
+ AT_prototyped = (0x0270|FORM_STRING),
+ AT_public = (0x0280|FORM_STRING),
+ AT_pure_virtual = (0x0290|FORM_STRING),
+ AT_return_addr = (0x02a0|FORM_BLOCK2),
+ AT_abstract_origin = (0x02b0|FORM_REF),
+ AT_start_scope = (0x02c0|FORM_DATA4),
+ AT_stride_size = (0x02e0|FORM_DATA4),
+ AT_upper_bound_ref = (0x02f0|FORM_REF),
+ AT_upper_bound_data2 = (0x02f0|FORM_DATA2),
+ AT_upper_bound_data4 = (0x02f0|FORM_DATA4),
+ AT_upper_bound_data8 = (0x02f0|FORM_DATA8),
+ AT_virtual = (0x0300|FORM_STRING),
+
+ /* GNU extensions. */
+
+ AT_sf_names = (0x8000|FORM_DATA4),
+ AT_src_info = (0x8010|FORM_DATA4),
+ AT_mac_info = (0x8020|FORM_DATA4),
+ AT_src_coords = (0x8030|FORM_DATA4),
+ AT_body_begin = (0x8040|FORM_ADDR),
+ AT_body_end = (0x8050|FORM_ADDR)
+};
+
+/* end of enums taken from gdb-6.0 sources */
+
+static
+void read_debuginfo_dwarf1 (
+ SegInfo* si,
+ UChar* dwarf1d, Int dwarf1d_sz,
+ UChar* dwarf1l, Int dwarf1l_sz )
+{
+ UInt stmt_list;
+ Bool stmt_list_found;
+ Int die_offset, die_szb, at_offset;
+ UShort die_kind, at_kind;
+ UChar* at_base;
+ UChar* src_filename;
+
+ if (0)
+ VG_(printf)("read_debuginfo_dwarf1 ( %p, %d, %p, %d )\n",
+ dwarf1d, dwarf1d_sz, dwarf1l, dwarf1l_sz );
+
+ /* This loop scans the DIEs. */
+ die_offset = 0;
+ while (True) {
+ if (die_offset >= dwarf1d_sz) break;
+
+ die_szb = *(Int*)(dwarf1d + die_offset);
+ die_kind = *(UShort*)(dwarf1d + die_offset + 4);
+
+ /* We're only interested in compile_unit DIEs; ignore others. */
+ if (die_kind != TAG_compile_unit) {
+ die_offset += die_szb;
+ continue;
+ }
+
+ if (0)
+ VG_(printf)("compile-unit DIE: offset %d, tag 0x%x, size %d\n",
+ die_offset, (Int)die_kind, die_szb );
+
+ /* We've got a compile_unit DIE starting at (dwarf1d +
+ die_offset+6). Try and find the AT_name and AT_stmt_list
+ attributes. Then, finally, we can read the line number info
+ for this source file. */
+
+ /* The next 3 are set as we find the relevant attrs. */
+ src_filename = NULL;
+ stmt_list_found = False;
+ stmt_list = 0;
+
+ /* This loop scans the Attrs inside compile_unit DIEs. */
+ at_base = dwarf1d + die_offset + 6;
+ at_offset = 0;
+ while (True) {
+ if (at_offset >= die_szb-6) break;
+
+ at_kind = *(UShort*)(at_base + at_offset);
+ if (0) VG_(printf)("atoffset %d, attag 0x%x\n",
+ at_offset, (Int)at_kind );
+ at_offset += 2; /* step over the attribute itself */
+ /* We have to examine the attribute to figure out its
+ length. */
+ switch (at_kind) {
+ case AT_stmt_list:
+ case AT_language:
+ case AT_sibling:
+ if (at_kind == AT_stmt_list) {
+ stmt_list_found = True;
+ stmt_list = *(Int*)(at_base+at_offset);
+ }
+ at_offset += 4; break;
+ case AT_high_pc:
+ case AT_low_pc:
+ at_offset += sizeof(void*); break;
+ case AT_name:
+ case AT_producer:
+ case AT_comp_dir:
+ /* Zero terminated string, step over it. */
+ if (at_kind == AT_name)
+ src_filename = at_base + at_offset;
+ while (at_offset < die_szb-6 && at_base[at_offset] != 0)
+ at_offset++;
+ at_offset++;
+ break;
+ default:
+ VG_(printf)("Unhandled DWARF-1 attribute 0x%x\n",
+ (Int)at_kind );
+ VG_(core_panic)("Unhandled DWARF-1 attribute");
+ } /* switch (at_kind) */
+ } /* looping over attributes */
+
+ /* So, did we find the required stuff for a line number table in
+ this DIE? If yes, read it. */
+ if (stmt_list_found /* there is a line number table */
+ && src_filename != NULL /* we know the source filename */
+ ) {
+ /* Table starts:
+ Length:
+ 4 bytes, includes the entire table
+ Base address:
+ unclear (4? 8?), assuming native pointer size here.
+ Then a sequence of triples
+ (source line number -- 32 bits
+ source line column -- 16 bits
+ address delta -- 32 bits)
+ */
+ Addr base;
+ Int len, curr_filenmoff;
+ UChar* ptr;
+ UInt prev_line, prev_delta;
+
+ curr_filenmoff = addStr ( si, src_filename );
+ prev_line = prev_delta = 0;
+
+ ptr = dwarf1l + stmt_list;
+ len = *(Int*)ptr; ptr += sizeof(Int);
+ base = (Addr)(*(void**)ptr); ptr += sizeof(void*);
+ len -= (sizeof(Int) + sizeof(void*));
+ while (len > 0) {
+ UInt line;
+ UShort col;
+ UInt delta;
+ line = *(UInt*)ptr; ptr += sizeof(UInt);
+ col = *(UShort*)ptr; ptr += sizeof(UShort);
+            delta = *(UInt*)ptr;   ptr += sizeof(UInt);
+ if (0) VG_(printf)("line %d, col %d, delta %d\n",
+ line, (Int)col, delta );
+ len -= (sizeof(UInt) + sizeof(UShort) + sizeof(UInt));
+
+ if (delta > 0 && prev_line > 0) {
+ if (0) VG_(printf) (" %d %d-%d\n",
+ prev_line, prev_delta, delta-1);
+ addLineInfo ( si, curr_filenmoff,
+ base + prev_delta, base + delta,
+ prev_line, 0 );
+ }
+ prev_line = line;
+ prev_delta = delta;
+ }
+ }
+
+      /* Move on to the next DIE. */
+ die_offset += die_szb;
+
+ } /* Looping over DIEs */
+
+}
+
+
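The line-number table walked at the end of read_debuginfo_dwarf1 is a length, a base address, then a run of (line, column, address-delta) triples. A small decoder for one such triple, using memcpy rather than the pointer casts above to sidestep alignment issues (little-endian layout assumed, names hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Decode one DWARF1 .line triple: 32-bit source line, 16-bit
   column, 32-bit address delta.  Returns the advanced cursor. */
static const unsigned char* decode_triple(const unsigned char *p,
                                          unsigned int  *line,
                                          unsigned short *col,
                                          unsigned int  *delta)
{
    memcpy(line,  p, 4); p += 4;
    memcpy(col,   p, 2); p += 2;
    memcpy(delta, p, 4); p += 4;
    return p;
}
```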
/*------------------------------------------------------------*/
/*--- Read info from a .so/exe file. ---*/
/*------------------------------------------------------------*/
UChar* stab; /* The .stab table */
UChar* stabstr; /* The .stab string table */
UChar* dwarf2; /* The DWARF2 location info table */
+ UChar* dwarf1d; /* The DWARF1 ".debug" section */
+ UChar* dwarf1l; /* The DWARF1 ".line" section */
Int stab_sz; /* Size in bytes of the .stab table */
Int stabstr_sz; /* Size in bytes of the .stab string table */
Int dwarf2_sz; /* Size in bytes of the DWARF2 srcloc table*/
+ Int dwarf1d_sz; /* Size in bytes of the DWARF1 .debug sect */
+ Int dwarf1l_sz; /* Size in bytes of the DWARF1 .line sect */
Int fd;
Int i;
Bool ok;
Bool snaffle_it;
Addr sym_addr;
- /* find the .stabstr and .stab sections */
+ /* find various symbol and string tables */
for (i = 0; i < ehdr->e_shnum; i++) {
if (0 == VG_(strcmp)(".dynsym",sh_strtab + shdr[i].sh_name)) {
stabstr = NULL;
stab = NULL;
dwarf2 = NULL;
+ dwarf1d = NULL;
+ dwarf1l = NULL;
stabstr_sz = 0;
stab_sz = 0;
dwarf2_sz = 0;
+ dwarf1d_sz = 0;
+ dwarf1l_sz = 0;
/* find the .stabstr / .stab / .debug_line sections */
for (i = 0; i < ehdr->e_shnum; i++) {
dwarf2 = (UChar *)(oimage + shdr[i].sh_offset);
dwarf2_sz = shdr[i].sh_size;
}
+ if (0 == VG_(strcmp)(".debug",sh_strtab + shdr[i].sh_name)) {
+ dwarf1d = (UChar *)(oimage + shdr[i].sh_offset);
+ dwarf1d_sz = shdr[i].sh_size;
+ }
+ if (0 == VG_(strcmp)(".line",sh_strtab + shdr[i].sh_name)) {
+ dwarf1l = (UChar *)(oimage + shdr[i].sh_offset);
+ dwarf1l_sz = shdr[i].sh_size;
+ }
}
- if ((stab == NULL || stabstr == NULL) && dwarf2 == NULL) {
+ if ((stab == NULL || stabstr == NULL)
+ && dwarf2 == NULL
+ && (dwarf1d == NULL || dwarf1l == NULL)
+ ) {
vg_symerr(" object doesn't have any debug info");
VG_(munmap) ( (void*)oimage, n_oimage );
return False;
return False;
}
+ if ( dwarf1d_sz + (UChar*)dwarf1d > n_oimage + (UChar*)oimage
+ || dwarf1l_sz + (UChar*)dwarf1l > n_oimage + (UChar*)oimage ) {
+ vg_symerr(" ELF (dwarf1d) debug data is beyond image end?!");
+ VG_(munmap) ( (void*)oimage, n_oimage );
+ return False;
+ }
+
/* Looks plausible. Go on and read debug data. */
if (stab != NULL && stabstr != NULL) {
read_debuginfo_stabs ( si, stab, stab_sz, stabstr, stabstr_sz );
read_debuginfo_dwarf2 ( si, dwarf2, dwarf2_sz );
}
+ if (dwarf1d != NULL && dwarf1l != NULL) {
+ read_debuginfo_dwarf1 ( si, dwarf1d, dwarf1d_sz, dwarf1l, dwarf1l_sz );
+ }
+
/* Last, but not least, heave the oimage back overboard. */
VG_(munmap) ( (void*)oimage, n_oimage );
/* !!!!!!!!!! New, untested syscalls !!!!!!!!!!!!!!!!!!!!! */
+# if defined(__NR_clock_gettime)
+ case __NR_clock_gettime: /* syscall 265 */
+ /* int clock_gettime(clockid_t clk_id, struct timespec *tp); */
+ MAYBE_PRINTF( "clock_gettime( %d, %p )\n" ,arg1,arg2);
+ SYSCALL_TRACK( pre_mem_write, tid, "clock_gettime(tp)",
+ arg2, sizeof(struct timespec) );
+ KERNEL_DO_SYSCALL(tid,res);
+         if (!VG_(is_kerror)(res) && res == 0)
+ VG_TRACK( post_mem_write, arg2, sizeof(struct timespec) );
+ break;
+# endif
+
# if defined(__NR_ptrace)
case __NR_ptrace: { /* syscall 26 */
/* long ptrace (enum __ptrace_request request, pid_t pid,
KERNEL_DO_SYSCALL(tid,res);
break;
+# if defined(__NR_adjtimex)
+ case __NR_adjtimex: /* syscall 124 */
+ /* int adjtimex(struct timex *buf) */
+ MAYBE_PRINTF("adjtimex ( %p )\n",arg1);
+ SYSCALL_TRACK( pre_mem_write, tid, "adjtimex(buf)",
+ arg1, sizeof(struct timex) );
+ KERNEL_DO_SYSCALL(tid,res);
+ if (!VG_(is_kerror)(res))
+ VG_TRACK( post_mem_write, arg1, sizeof(struct timex) );
+ break;
+# endif
+
/* !!!!!!!!!! New, untested syscalls, 14 Mar 02 !!!!!!!!!! */
# if defined(__NR_setresgid32)
# if defined(__NR_rt_sigtimedwait)
case __NR_rt_sigtimedwait: /* syscall 177 */
- /* int sigtimedwait(const sigset_t *set, siginfo_t *info,
- const struct timespec timeout); */
+ /* int sigtimedwait(const sigset_t *set, siginfo_t *info,
+ const struct timespec timeout); */
+ MAYBE_PRINTF("sigtimedwait ( %p, %p, timeout )\n", arg1, arg2);
+ if (arg1 != (UInt)NULL)
+ SYSCALL_TRACK( pre_mem_read, tid,
+ "sigtimedwait(set)", arg1,
+ sizeof(vki_ksigset_t));
if (arg2 != (UInt)NULL)
SYSCALL_TRACK( pre_mem_write, tid, "sigtimedwait(info)", arg2,
sizeof(siginfo_t) );
break;
case __NR_brk: /* syscall 45 */
- /* Haven't a clue if this is really right. */
- /* int brk(void *end_data_segment); */
+ /* libc says: int brk(void *end_data_segment);
+ kernel says: void* brk(void* end_data_segment); (more or less)
+
+ libc returns 0 on success, and -1 (and sets errno) on failure.
+ Nb: if you ask to shrink the dataseg end below what it
+ currently is, that always succeeds, even if the dataseg end
+ doesn't actually change (eg. brk(0)). Unless it seg faults.
+
+ Kernel returns the new dataseg end. If the brk() failed, this
+ will be unchanged from the old one. That's why calling (kernel)
+ brk(0) gives the current dataseg end (libc brk() just returns
+ zero in that case).
+
+ Both will seg fault if you shrink it back into a text segment.
+ */
MAYBE_PRINTF("brk ( %p ) --> ",arg1);
KERNEL_DO_SYSCALL(tid,res);
MAYBE_PRINTF("0x%x\n", res);
- if (!VG_(is_kerror)(res)) {
- if (arg1 == 0) {
- /* Just asking where the current end is. (???) */
- curr_dataseg_end = res;
- } else
- if (arg1 < curr_dataseg_end) {
- /* shrinking the data segment. */
- VG_TRACK( die_mem_brk, (Addr)arg1,
+ if (res == arg1) {
+ /* brk() succeeded */
+ if (res < curr_dataseg_end) {
+ /* successfully shrunk the data segment. */
+ VG_TRACK( die_mem_brk, (Addr)arg1,
curr_dataseg_end-arg1 );
- curr_dataseg_end = arg1;
} else
- if (arg1 > curr_dataseg_end && res != 0) {
- /* asked for more memory, and got it */
- /*
- VG_(printf)("BRK: new area %x .. %x\n",
- VG_(curr_dataseg_end, arg1-1 );
- */
- VG_TRACK( new_mem_brk, (Addr)curr_dataseg_end,
- arg1-curr_dataseg_end );
- curr_dataseg_end = arg1;
+ if (res > curr_dataseg_end && res != 0) {
+ /* successfully grew the data segment */
+ VG_TRACK( new_mem_brk, curr_dataseg_end,
+ arg1-curr_dataseg_end );
}
+ curr_dataseg_end = res;
+
+ } else {
+ /* brk() failed */
+ vg_assert(curr_dataseg_end == res);
}
break;
}
default:
- VG_(message)(Vg_DebugMsg,"FATAL: unhandled socketcall 0x%x",arg1);
- VG_(core_panic)("... bye!\n");
- break; /*NOTREACHED*/
+ VG_(message)(Vg_DebugMsg,
+ "Warning: unhandled socketcall 0x%x",arg1);
+ res = -VKI_EINVAL;
+ break;
}
break;
}
break;
+ case __NR_waitpid: /* syscall 7 */
+ /* pid_t waitpid(pid_t pid, int *status, int options); */
+
+ MAYBE_PRINTF("waitpid ( %d, %p, %d )\n",
+ arg1,arg2,arg3);
+ if (arg2 != (Addr)NULL)
+ SYSCALL_TRACK( pre_mem_write, tid, "waitpid(status)",
+ arg2, sizeof(int) );
+ KERNEL_DO_SYSCALL(tid,res);
+ if (!VG_(is_kerror)(res)) {
+ if (arg2 != (Addr)NULL)
+ VG_TRACK( post_mem_write, arg2, sizeof(int) );
+ }
+ break;
+
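The waitpid wrapper pre-checks that the status pointer is writable and marks the int written after the kernel returns. The plain client-side usage being intercepted:

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with the given code, then reap it with
   waitpid -- the kernel writes sizeof(int) at &status, which is the
   write the wrapper above tracks.  Returns the child's exit code,
   or -1 on error. */
static int run_child_sketch(int exit_code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        _exit(exit_code);                    /* child */
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```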
case __NR_writev: { /* syscall 146 */
/* int writev(int fd, const struct iovec * vector, size_t count); */
UInt i;
arg1, sizeof(vki_kstack_t) );
}
if (arg2 != (UInt)NULL) {
- SYSCALL_TRACK( pre_mem_write, tid, "sigaltstack(ss)",
- arg1, sizeof(vki_kstack_t) );
+ SYSCALL_TRACK( pre_mem_write, tid, "sigaltstack(oss)",
+ arg2, sizeof(vki_kstack_t) );
}
# if SIGNAL_SIMULATION
VG_(do__NR_sigaltstack) (tid);
-
+/* -*- c-basic-offset: 3 -*- */
/*--------------------------------------------------------------------*/
/*--- The JITter: translate x86 code to ucode. ---*/
/*--- vg_to_ucode.c ---*/
return grp8_names[opc_aux];
}
-Char* VG_(name_of_int_reg) ( Int size, Int reg )
+const Char* VG_(name_of_int_reg) ( Int size, Int reg )
{
static Char* ireg32_names[8]
= { "%eax", "%ecx", "%edx", "%ebx",
return NULL; /*notreached*/
}
-Char* VG_(name_of_seg_reg) ( Int sreg )
+const Char* VG_(name_of_seg_reg) ( Int sreg )
{
switch (sreg) {
case R_ES: return "%es";
}
}
-Char* VG_(name_of_mmx_reg) ( Int mmxreg )
+const Char* VG_(name_of_mmx_reg) ( Int mmxreg )
{
- static Char* mmx_names[8]
+ static const Char* mmx_names[8]
= { "%mm0", "%mm1", "%mm2", "%mm3", "%mm4", "%mm5", "%mm6", "%mm7" };
if (mmxreg < 0 || mmxreg > 7) VG_(core_panic)("name_of_mmx_reg");
return mmx_names[mmxreg];
}
-Char* VG_(name_of_xmm_reg) ( Int xmmreg )
+const Char* VG_(name_of_xmm_reg) ( Int xmmreg )
{
- static Char* xmm_names[8]
+ static const Char* xmm_names[8]
= { "%xmm0", "%xmm1", "%xmm2", "%xmm3", "%xmm4", "%xmm5", "%xmm6", "%xmm7" };
if (xmmreg < 0 || xmmreg > 7) VG_(core_panic)("name_of_xmm_reg");
return xmm_names[xmmreg];
}
-Char* VG_(name_of_mmx_gran) ( UChar gran )
+const Char* VG_(name_of_mmx_gran) ( UChar gran )
{
switch (gran) {
case 0: return "b";
}
}
-Char VG_(name_of_int_size) ( Int size )
+const Char VG_(name_of_int_size) ( Int size )
{
switch (size) {
case 4: return 'l';
uInstr2(cb, PUT, size, TempReg, tmp, ArchReg, ge_reg);
}
-
/* Handle binary integer instructions of the form
op E, G meaning
op reg-or-mem, reg
UChar opc2,
UChar opc3 )
{
+ UChar dis_buf[50];
UChar modrm = getUChar(eip);
UChar imm8;
if (epartIsReg(modrm)) {
nameXMMReg(gregOfRM(modrm)), (Int)imm8 );
eip++;
} else {
- VG_(core_panic)("dis_SSE3_reg_or_mem_Imm8: mem");
+ UInt pair = disAMode ( cb, sorb, eip, dis?dis_buf:NULL );
+ Int tmpa = LOW24(pair);
+ eip += HI8(pair);
+ imm8 = getUChar(eip);
+ eip++;
+ uInstr3(cb, SSE3a1_MemRd, sz,
+ Lit16, (((UShort)(opc1)) << 8) | ((UShort)opc2),
+ Lit16, (((UShort)(opc3)) << 8) | ((UShort)modrm),
+ TempReg, tmpa);
+ uLiteral(cb, imm8);
+ if (dis)
+ VG_(printf)("%s %s, %s, $%d\n",
+ name,
+ dis_buf,
+ nameXMMReg(gregOfRM(modrm)), (Int)imm8 );
}
return eip;
}
return eip;
}
+static
+void dis_push_segreg ( UCodeBlock* cb, UInt sreg, Int sz )
+{
+ Int t1 = newTemp(cb), t2 = newTemp(cb);
+ vg_assert(sz == 4);
+ uInstr2(cb, GETSEG, 2, ArchRegS, sreg, TempReg, t1);
+ uInstr2(cb, GET, 4, ArchReg, R_ESP, TempReg, t2);
+ uInstr2(cb, SUB, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, 4);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_ESP);
+ uInstr2(cb, STORE, 2, TempReg, t1, TempReg, t2);
+ if (dis)
+ VG_(printf)("push %s\n", VG_(name_of_seg_reg)(sreg));
+}
+
+static
+void dis_pop_segreg ( UCodeBlock* cb, UInt sreg, Int sz )
+{
+ Int t1 = newTemp(cb), t2 = newTemp(cb);
+ vg_assert(sz == 4);
+ uInstr2(cb, GET, 4, ArchReg, R_ESP, TempReg, t2);
+ uInstr2(cb, LOAD, 2, TempReg, t2, TempReg, t1);
+ uInstr2(cb, ADD, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, sz);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_ESP);
+ uInstr2(cb, PUTSEG, 2, TempReg, t1, ArchRegS, sreg);
+ if (dis)
+ VG_(printf)("pop %s\n", VG_(name_of_seg_reg)(sreg));
+}
/*------------------------------------------------------------*/
/*--- Disassembling entire basic blocks ---*/
goto decode_success;
}
+ /* SFENCE -- flush all pending store operations to memory */
+ if (insn[0] == 0x0F && insn[1] == 0xAE
+ && (gregOfRM(insn[2]) == 7)) {
+ vg_assert(sz == 4);
+ eip += 3;
+ uInstr2(cb, SSE3, 0, /* ignore sz for internal ops */
+ Lit16, (((UShort)0x0F) << 8) | (UShort)0xAE,
+ Lit16, (UShort)insn[2] );
+ if (dis)
+ VG_(printf)("sfence\n");
+ goto decode_success;
+ }
+
/* CVTTSD2SI (0xF2,0x0F,0x2C) -- convert a double-precision float
value in memory or xmm reg to int and put it in an ireg.
Truncate. */
goto decode_success;
}
- /* CMPPS -- compare packed floats */
+ /* sz==4: CMPPS -- compare packed floats */
+ /* sz==2: CMPPD -- compare packed doubles */
if (insn[0] == 0x0F && insn[1] == 0xC2) {
- vg_assert(sz == 4);
- eip = dis_SSE2_reg_or_mem_Imm8 ( cb, sorb, eip+2, 16, "cmpps",
- insn[0], insn[1] );
+ vg_assert(sz == 4 || sz == 2);
+ if (sz == 4) {
+ eip = dis_SSE2_reg_or_mem_Imm8 ( cb, sorb, eip+2, 16, "cmpps",
+ insn[0], insn[1] );
+ } else {
+ eip = dis_SSE3_reg_or_mem_Imm8 ( cb, sorb, eip+2, 16, "cmppd",
+ 0x66, insn[0], insn[1] );
+ }
goto decode_success;
}
goto decode_success;
}
+ /* PSHUFW */
+ if (sz == 4
+ && insn[0] == 0x0F && insn[1] == 0x70) {
+ eip = dis_SSE2_reg_or_mem_Imm8 ( cb, sorb, eip+2, 16,
+ "pshufw",
+ insn[0], insn[1] );
+ goto decode_success;
+ }
+
/* SHUFPS */
if (insn[0] == 0x0F && insn[1] == 0xC6) {
vg_assert(sz == 4);
goto decode_success;
}
+ /* DIVPS */
+ /* 0x66: DIVPD */
+ if (insn[0] == 0x0F && insn[1] == 0x5E) {
+ vg_assert(sz == 4 || sz == 2);
+ if (sz == 4) {
+ eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2, 16, "divps",
+ insn[0], insn[1] );
+ } else {
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 16, "divpd",
+ 0x66, insn[0], insn[1] );
+ }
+ goto decode_success;
+ }
+
/* 0xF2: SUBSD */
/* 0xF3: SUBSS */
if ((insn[0] == 0xF2 || insn[0] == 0xF3)
}
/* 0xF2: MINSD */
- if (insn[0] == 0xF2 && insn[1] == 0x0F && insn[2] == 0x5D) {
+ /* 0xF3: MINSS */
+ if ((insn[0] == 0xF2 || insn[0] == 0xF3)
+ && insn[1] == 0x0F && insn[2] == 0x5D) {
+ Bool sz8 = insn[0] == 0xF2;
vg_assert(sz == 4);
- eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, 8, "minsd",
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, sz8 ? 8 : 4,
+ sz8 ? "minsd" : "minss",
insn[0], insn[1], insn[2] );
goto decode_success;
}
0x66, insn[0], insn[1] );
goto decode_success;
}
-
+   /* 0xE0: PAVGB (src)xmmreg-or-mem, (dst)xmmreg, size 4 */
+ if (sz == 4
+ && insn[0] == 0x0F
+ && insn[1] == 0xE0 ) {
+ eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2, 16, "pavg{b,w}",
+ insn[0], insn[1] );
+ goto decode_success;
+ }
+
/* 0x60: PUNPCKLBW (src)xmmreg-or-mem, (dst)xmmreg */
/* 0x61: PUNPCKLWD (src)xmmreg-or-mem, (dst)xmmreg */
/* 0x62: PUNPCKLDQ (src)xmmreg-or-mem, (dst)xmmreg */
goto decode_success;
}
- /* 0x14: UNPCKLPD (src)xmmreg-or-mem, (dst)xmmreg */
- /* 0x15: UNPCKHPD (src)xmmreg-or-mem, (dst)xmmreg */
+ /* 0x14: UNPCKLPD (src)xmmreg-or-mem, (dst)xmmreg. Reads a+0
+ .. a+7, so we can say size 8 */
+ /* 0x15: UNPCKHPD (src)xmmreg-or-mem, (dst)xmmreg. Reads a+8
+ .. a+15, but we have no way to express this, so better say size
+ 16. Sigh. */
if (sz == 2
&& insn[0] == 0x0F
&& (insn[1] == 0x14 || insn[1] == 0x15)) {
- eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 16,
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2,
+ insn[1]==0x14 ? 8 : 16,
"unpck{l,h}pd",
0x66, insn[0], insn[1] );
goto decode_success;
}
+ /* 0x14: UNPCKLPS (src)xmmreg-or-mem, (dst)xmmreg Reads a+0
+ .. a+7, so we can say size 8 */
+ /* 0x15: UNPCKHPS (src)xmmreg-or-mem, (dst)xmmreg Reads a+8
+ .. a+15, but we have no way to express this, so better say size
+ 16. Sigh. */
+ if (sz == 4
+ && insn[0] == 0x0F
+ && (insn[1] == 0x14 || insn[1] == 0x15)) {
+ eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2,
+ insn[1]==0x14 ? 8 : 16,
+ "unpck{l,h}ps",
+ insn[0], insn[1] );
+ goto decode_success;
+ }
+
/* 0xFC: PADDB (src)xmmreg-or-mem, (dst)xmmreg */
/* 0xFD: PADDW (src)xmmreg-or-mem, (dst)xmmreg */
/* 0xFE: PADDD (src)xmmreg-or-mem, (dst)xmmreg */
goto decode_success;
}
- /* COMISD (src)xmmreg-or-mem, (dst)xmmreg */
+ /* (U)COMISD (src)xmmreg-or-mem, (dst)xmmreg */
if (sz == 2
- && insn[0] == 0x0F && insn[1] == 0x2F) {
- eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 8, "comisd",
+ && insn[0] == 0x0F
+ && ( insn[1] == 0x2E || insn[1] == 0x2F ) ) {
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 8, "{u}comisd",
0x66, insn[0], insn[1] );
vg_assert(LAST_UINSTR(cb).opcode == SSE3a_MemRd
|| LAST_UINSTR(cb).opcode == SSE4);
goto decode_success;
}
- /* COMISS (src)xmmreg-or-mem, (dst)xmmreg */
+ /* (U)COMISS (src)xmmreg-or-mem, (dst)xmmreg */
if (sz == 4
- && insn[0] == 0x0F && insn[1] == 0x2F) {
- eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2, 4, "comiss",
+ && insn[0] == 0x0F
+       && ( insn[1] == 0x2E || insn[1] == 0x2F )) {
+ eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2, 4, "{u}comiss",
insn[0], insn[1] );
vg_assert(LAST_UINSTR(cb).opcode == SSE2a_MemRd
|| LAST_UINSTR(cb).opcode == SSE3);
goto decode_success;
}
+   /* MOVQ -- move 8 bytes of XMM reg or mem to XMM reg.  Unlike
+      the reg-reg form of MOVSD, MOVQ zeroes the upper 64 bits of
+      the destination. */
+ if (insn[0] == 0xF3
+ && insn[1] == 0x0F
+ && insn[2] == 0x7E) {
+ eip = dis_SSE3_load_store_or_mov
+ ( cb, sorb, eip+3, 8, False /*load*/, "movq",
+ insn[0], insn[1], insn[2] );
+ goto decode_success;
+ }
+
/* MOVSS -- move 4 bytes of XMM reg to/from XMM reg or mem. */
if (insn[0] == 0xF3
&& insn[1] == 0x0F
goto decode_success;
}
- /* MOVAPS (28,29) -- aligned load/store of xmm reg, or xmm-xmm reg
- move */
- /* MOVUPS (10,11) -- unaligned load/store of xmm reg, or xmm-xmm
- reg move */
+ /* sz==4: MOVAPS (28,29) -- aligned load/store of xmm reg, or
+ xmm-xmm reg move */
+ /* sz==4: MOVUPS (10,11) -- unaligned load/store of xmm reg, or
+ xmm-xmm reg move */
+ /* sz==2: MOVAPD (28,29) -- aligned load/store of xmm reg, or
+ xmm-xmm reg move */
+ /* sz==2: MOVUPD (10,11) -- unaligned load/store of xmm reg, or
+ xmm-xmm reg move */
if (insn[0] == 0x0F && (insn[1] == 0x28
|| insn[1] == 0x29
|| insn[1] == 0x10
UChar* name = (insn[1] == 0x10 || insn[1] == 0x11)
? "movups" : "movaps";
      Bool store = insn[1] == 0x29 || insn[1] == 0x11;
- vg_assert(sz == 4);
- eip = dis_SSE2_load_store_or_mov
- ( cb, sorb, eip+2, 16, store, name,
- insn[0], insn[1] );
+ vg_assert(sz == 2 || sz == 4);
+ if (sz == 4) {
+ eip = dis_SSE2_load_store_or_mov
+ ( cb, sorb, eip+2, 16, store, name,
+ insn[0], insn[1] );
+ } else {
+ eip = dis_SSE3_load_store_or_mov
+ ( cb, sorb, eip+2, 16, store, name,
+ 0x66, insn[0], insn[1] );
+ }
goto decode_success;
}
/* Cannot be used for reg-reg moves, according to Intel docs. */
vg_assert(!epartIsReg(insn[2]));
eip = dis_SSE3_load_store_or_mov
- (cb, sorb, eip+2, 16, is_store, "movlpd",
+ (cb, sorb, eip+2, 8, is_store, "movlpd",
0x66, insn[0], insn[1] );
goto decode_success;
}
goto decode_success;
}
+      /* MOVNTDQ -- 16-byte store with a non-temporal hint (which we
+         ignore). */
+ if (sz == 2
+ && insn[0] == 0x0F
+ && insn[1] == 0xE7) {
+ eip = dis_SSE3_load_store_or_mov
+ (cb, sorb, eip+2, 16, True /* is_store */, "movntdq",
+ 0x66, insn[0], insn[1] );
+ goto decode_success;
+ }
+
/* MOVD -- 4-byte move between xmmregs and (ireg or memory). */
if (sz == 2
&& insn[0] == 0x0F
goto decode_success;
}
+ /* SQRTSD: square root of scalar double. */
+ if (insn[0] == 0xF2 && insn[1] == 0x0F && insn[2] == 0x51) {
+ vg_assert(sz == 4);
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, 8,
+ "sqrtsd",
+ insn[0], insn[1], insn[2] );
+ goto decode_success;
+ }
+
+ /* SQRTSS: square root of scalar float. */
+ if (insn[0] == 0xF3 && insn[1] == 0x0F && insn[2] == 0x51) {
+ vg_assert(sz == 4);
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, 4,
+ "sqrtss",
+ insn[0], insn[1], insn[2] );
+ goto decode_success;
+ }
+
+      /* MOVLPS -- 8-byte load/store; the no-prefix (single-precision)
+         counterpart of the MOVLPD case above. */
+ if (insn[0] == 0x0F
+ && (insn[1] == 0x12 || insn[1] == 0x13)) {
+ Bool is_store = insn[1]==0x13;
+ vg_assert(sz == 4);
+ /* Cannot be used for reg-reg moves, according to Intel docs. */
+ // vg_assert(!epartIsReg(insn[2]));
+ eip = dis_SSE2_load_store_or_mov
+ (cb, sorb, eip+2, 8, is_store, "movlps",
+ insn[0], insn[1] );
+ goto decode_success;
+ }
+
+ /* 0xF3: RCPSS -- reciprocal of scalar float */
+ if (insn[0] == 0xF3 && insn[1] == 0x0F && insn[2] == 0x53) {
+ vg_assert(sz == 4);
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, 4,
+ "rcpss",
+ insn[0], insn[1], insn[2] );
+ goto decode_success;
+ }
+
+   /* MOVMSKPD -- extract 2 sign bits from an xmm reg and copy them to
+      an ireg.  Top 30 bits of ireg are set to zero. */
+ if (sz == 2 && insn[0] == 0x0F && insn[1] == 0x50) {
+ modrm = insn[2];
+ /* Intel docs don't say anything about a memory source being
+ allowed here. */
+ vg_assert(epartIsReg(modrm));
+ t1 = newTemp(cb);
+ uInstr3(cb, SSE3g_RegWr, 4,
+ Lit16, (((UShort)0x66) << 8) | (UShort)insn[0],
+ Lit16, (((UShort)insn[1]) << 8) | (UShort)modrm,
+ TempReg, t1 );
+ uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, gregOfRM(modrm));
+ if (dis)
+ VG_(printf)("movmskpd %s, %s\n",
+ nameXMMReg(eregOfRM(modrm)),
+ nameIReg(4,gregOfRM(modrm)));
+ eip += 3;
+ goto decode_success;
+ }
+
+ /* ANDNPS */
+ /* 0x66: ANDNPD (src)xmmreg-or-mem, (dst)xmmreg */
+ if (insn[0] == 0x0F && insn[1] == 0x55) {
+ vg_assert(sz == 4 || sz == 2);
+ if (sz == 4) {
+ eip = dis_SSE2_reg_or_mem ( cb, sorb, eip+2, 16, "andnps",
+ insn[0], insn[1] );
+ } else {
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 16, "andnpd",
+ 0x66, insn[0], insn[1] );
+ }
+ goto decode_success;
+ }
+
+ /* MOVHPD -- 8-byte load/store. */
+ if (sz == 2
+ && insn[0] == 0x0F
+ && (insn[1] == 0x16 || insn[1] == 0x17)) {
+ Bool is_store = insn[1]==0x17;
+ /* Cannot be used for reg-reg moves, according to Intel docs. */
+ vg_assert(!epartIsReg(insn[2]));
+ eip = dis_SSE3_load_store_or_mov
+ (cb, sorb, eip+2, 8, is_store, "movhpd",
+ 0x66, insn[0], insn[1] );
+ goto decode_success;
+ }
+
+   /* PMOVMSKB -- extract 16 sign bits from an xmm reg and copy them to
+      an ireg.  Top 16 bits of ireg are set to zero. */
+ if (sz == 2 && insn[0] == 0x0F && insn[1] == 0xD7) {
+ modrm = insn[2];
+ /* Intel docs don't say anything about a memory source being
+ allowed here. */
+ vg_assert(epartIsReg(modrm));
+ t1 = newTemp(cb);
+ uInstr3(cb, SSE3g_RegWr, 4,
+ Lit16, (((UShort)0x66) << 8) | (UShort)insn[0],
+ Lit16, (((UShort)insn[1]) << 8) | (UShort)modrm,
+ TempReg, t1 );
+ uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, gregOfRM(modrm));
+ if (dis)
+ VG_(printf)("pmovmskb %s, %s\n",
+ nameXMMReg(eregOfRM(modrm)),
+ nameIReg(4,gregOfRM(modrm)));
+ eip += 3;
+ goto decode_success;
+ }
+
+   /* CVTDQ2PD -- convert two packed signed 32-bit ints to two packed
+      doubles. */
+ if (insn[0] == 0xF3 && insn[1] == 0x0F && insn[2] == 0xE6) {
+ vg_assert(sz == 4);
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+3, 8, "cvtdq2pd",
+ insn[0], insn[1], insn[2] );
+ goto decode_success;
+ }
+
+ /* SQRTPD: square root of packed double. */
+ if (sz == 2
+ && insn[0] == 0x0F && insn[1] == 0x51) {
+ eip = dis_SSE3_reg_or_mem ( cb, sorb, eip+2, 16,
+ "sqrtpd",
+ 0x66, insn[0], insn[1] );
+ goto decode_success;
+ }
+
/* Fall through into the non-SSE decoder. */
} /* if (VG_(have_ssestate)) */
}
break;
+ case 0xC8: /* ENTER */
+ d32 = getUDisp16(eip); eip += 2;
+ abyte = getUChar(eip); eip++;
+
+ vg_assert(sz == 4);
+ vg_assert(abyte == 0);
+
+ t1 = newTemp(cb); t2 = newTemp(cb);
+ uInstr2(cb, GET, sz, ArchReg, R_EBP, TempReg, t1);
+ uInstr2(cb, GET, 4, ArchReg, R_ESP, TempReg, t2);
+ uInstr2(cb, SUB, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, sz);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_ESP);
+ uInstr2(cb, STORE, 4, TempReg, t1, TempReg, t2);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_EBP);
+ if (d32) {
+ uInstr2(cb, SUB, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, d32);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_ESP);
+ }
+ if (dis) VG_(printf)("enter 0x%x, 0x%x", d32, abyte);
+ break;
+
case 0xC9: /* LEAVE */
t1 = newTemp(cb); t2 = newTemp(cb);
uInstr2(cb, GET, 4, ArchReg, R_EBP, TempReg, t1);
+ /* First PUT ESP looks redundant, but need it because ESP must
+ always be up-to-date for Memcheck to work... */
uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, R_ESP);
uInstr2(cb, LOAD, 4, TempReg, t1, TempReg, t2);
uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_EBP);
uInstr2(cb, ADD, 4, Literal, 0, TempReg, t1);
uLiteral(cb, 4);
- /* This 2nd PUT looks redundant, but Julian thinks it's not.
- * --njn 03-feb-2003 */
uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, R_ESP);
if (dis) VG_(printf)("leave");
break;
case 0x41: /* INC eCX */
case 0x42: /* INC eDX */
case 0x43: /* INC eBX */
+ case 0x44: /* INC eSP */
case 0x45: /* INC eBP */
case 0x46: /* INC eSI */
case 0x47: /* INC eDI */
case 0x49: /* DEC eCX */
case 0x4A: /* DEC eDX */
case 0x4B: /* DEC eBX */
+ case 0x4C: /* DEC eSP */
case 0x4D: /* DEC eBP */
case 0x4E: /* DEC eSI */
case 0x4F: /* DEC eDI */
uInstr2(cb, GET, 4, ArchReg, R_ESP, TempReg, t1);
/* load M[ESP] to virtual register t3: t3 = M[t1] */
uInstr2(cb, LOAD, 4, TempReg, t1, TempReg, t3);
+
+ /* increase ESP; must be done before the STORE. Intel manual says:
+ If the ESP register is used as a base register for addressing
+ a destination operand in memory, the POP instruction computes
+ the effective address of the operand after it increments the
+ ESP register.
+ */
+ uInstr2(cb, ADD, 4, Literal, 0, TempReg, t1);
+ uLiteral(cb, sz);
+ uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, R_ESP);
+
/* resolve MODR/M */
pair1 = disAMode ( cb, sorb, eip, dis?dis_buf:NULL);
/* store value from stack in memory, M[m32] = t3 */
uInstr2(cb, STORE, 4, TempReg, t3, TempReg, tmpa);
- /* increase ESP */
- uInstr2(cb, ADD, 4, Literal, 0, TempReg, t1);
- uLiteral(cb, sz);
- uInstr2(cb, PUT, 4, TempReg, t1, ArchReg, R_ESP);
-
if (dis)
VG_(printf)("popl %s\n", dis_buf);
break;
}
+ case 0x1F: /* POP %DS */
+ dis_pop_segreg( cb, R_DS, sz ); break;
+ case 0x07: /* POP %ES */
+ dis_pop_segreg( cb, R_ES, sz ); break;
+ case 0x17: /* POP %SS */
+ dis_pop_segreg( cb, R_SS, sz ); break;
+
/* ------------------------ PUSH ----------------------- */
case 0x50: /* PUSH eAX */
if (dis)
VG_(printf)("pusha%c\n", nameISize(sz));
break;
- }
+ }
+
+ case 0x0E: /* PUSH %CS */
+ dis_push_segreg( cb, R_CS, sz ); break;
+ case 0x1E: /* PUSH %DS */
+ dis_push_segreg( cb, R_DS, sz ); break;
+ case 0x06: /* PUSH %ES */
+ dis_push_segreg( cb, R_ES, sz ); break;
+ case 0x16: /* PUSH %SS */
+ dis_push_segreg( cb, R_SS, sz ); break;
/* ------------------------ SCAS et al ----------------- */
VG_(printf)("xlat%c [ebx]\n", nameISize(sz));
break;
+ /* ------------------------ IN / OUT ----------------------- */
+
+ case 0xE4: /* IN ib, %al */
+ case 0xE5: /* IN ib, %{e}ax */
+ case 0xEC: /* IN (%dx),%al */
+ case 0xED: /* IN (%dx),%{e}ax */
+ t1 = newTemp(cb);
+ t2 = newTemp(cb);
+ t3 = newTemp(cb);
+
+ uInstr0(cb, CALLM_S, 0);
+ /* operand size? */
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t1);
+ uLiteral(cb, ( opc == 0xE4 || opc == 0xEC ) ? 1 : sz);
+ uInstr1(cb, PUSH, 4, TempReg, t1);
+ /* port number ? */
+ if ( opc == 0xE4 || opc == 0xE5 ) {
+ abyte = getUChar(eip); eip++;
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, abyte);
+ }
+ else
+ uInstr2(cb, GET, 4, ArchReg, R_EDX, TempReg, t2);
+
+ uInstr1(cb, PUSH, 4, TempReg, t2);
+ uInstr1(cb, CALLM, 0, Lit16, VGOFF_(helper_IN));
+ uFlagsRWU(cb, FlagsEmpty, FlagsEmpty, FlagsEmpty);
+ uInstr1(cb, POP, 4, TempReg, t2);
+ uInstr1(cb, CLEAR, 0, Lit16, 4);
+ uInstr0(cb, CALLM_E, 0);
+ uInstr2(cb, PUT, 4, TempReg, t2, ArchReg, R_EAX);
+ if (dis) {
+ if ( opc == 0xE4 || opc == 0xE5 )
+ VG_(printf)("in 0x%x, %%eax/%%ax/%%al\n", getUChar(eip-1) );
+ else
+ VG_(printf)("in (%%dx), %%eax/%%ax/%%al\n");
+ }
+ break;
+ case 0xE6: /* OUT %al,ib */
+ case 0xE7: /* OUT %{e}ax,ib */
+ case 0xEE: /* OUT %al,(%dx) */
+ case 0xEF: /* OUT %{e}ax,(%dx) */
+ t1 = newTemp(cb);
+ t2 = newTemp(cb);
+ t3 = newTemp(cb);
+
+ uInstr0(cb, CALLM_S, 0);
+ /* operand size? */
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t1);
+ uLiteral(cb, ( opc == 0xE6 || opc == 0xEE ) ? 1 : sz);
+ uInstr1(cb, PUSH, 4, TempReg, t1);
+ /* port number ? */
+ if ( opc == 0xE6 || opc == 0xE7 ) {
+ abyte = getUChar(eip); eip++;
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t2);
+ uLiteral(cb, abyte);
+ }
+ else
+ uInstr2(cb, GET, 4, ArchReg, R_EDX, TempReg, t2);
+ uInstr1(cb, PUSH, 4, TempReg, t2);
+ uInstr2(cb, GET, 4, ArchReg, R_EAX, TempReg, t3);
+ uInstr1(cb, PUSH, 4, TempReg, t3);
+ uInstr1(cb, CALLM, 0, Lit16, VGOFF_(helper_OUT));
+ uFlagsRWU(cb, FlagsEmpty, FlagsEmpty, FlagsEmpty);
+ uInstr1(cb, CLEAR, 0, Lit16, 12);
+ uInstr0(cb, CALLM_E, 0);
+ if (dis) {
+         if ( opc == 0xE6 || opc == 0xE7 )
+ VG_(printf)("out %%eax/%%ax/%%al, 0x%x\n", getUChar(eip-1) );
+ else
+ VG_(printf)("out %%eax/%%ax/%%al, (%%dx)\n");
+ }
+ break;
+
/* ------------------------ (Grp1 extensions) ---------- */
case 0x80: /* Grp1 Ib,Eb */
break;
case 0x7F: /* MOVQ (src)mmxreg, (dst)mmxreg-or-mem */
+ case 0xE7: /* MOVNTQ (src)mmxreg, (dst)mmxreg-or-mem */
vg_assert(sz == 4);
modrm = getUChar(eip);
if (epartIsReg(modrm)) {
(((UShort)(opc)) << 8) | ((UShort)modrm),
TempReg, tmpa);
if (dis)
- VG_(printf)("movq %s, %s\n",
+ VG_(printf)("mov(nt)q %s, %s\n",
nameMMXReg(gregOfRM(modrm)),
dis_buf);
}
eip = dis_MMXop_regmem_to_reg ( cb, sorb, eip, opc, "psra", True );
break;
+ case 0xA1: /* POP %FS */
+ dis_pop_segreg( cb, R_FS, sz ); break;
+ case 0xA9: /* POP %GS */
+ dis_pop_segreg( cb, R_GS, sz ); break;
+
+ case 0xA0: /* PUSH %FS */
+ dis_push_segreg( cb, R_FS, sz ); break;
+ case 0xA8: /* PUSH %GS */
+ dis_push_segreg( cb, R_GS, sz ); break;
+
/* =-=-=-=-=-=-=-=-=- unimp2 =-=-=-=-=-=-=-=-=-=-= */
default:
/* Ensure there's enough space in a block to add one uinstr. */
-static __inline__
+static
void ensureUInstr ( UCodeBlock* cb )
{
if (cb->used == cb->size) {
# define LIT8 (((u->lit32) & 0xFFFFFF00) == 0)
# define LIT1 (!(LIT0))
# define LITm (u->tag1 == Literal ? True : LIT0 )
+# define SZ16 (u->size == 16)
# define SZ8 (u->size == 8)
# define SZ4 (u->size == 4)
# define SZ2 (u->size == 2)
case SSE3a_MemRd: return LIT0 && SZsse && CCa && Ls1 && Ls2 && TR3 && XOTHER;
case SSE3e_RegRd: return LIT0 && SZ4 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
case SSE3e_RegWr: return LIT0 && SZ4 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
+ case SSE3a1_MemRd: return LIT8 && SZ16 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
case SSE3g_RegWr: return LIT0 && SZ4 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
case SSE3g1_RegWr: return LIT8 && SZ4 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
case SSE3e1_RegRd: return LIT8 && SZ2 && CC0 && Ls1 && Ls2 && TR3 && XOTHER;
- case SSE3: return LIT0 && SZ0 && CC0 && Ls1 && Ls2 && N3 && XOTHER;
+ case SSE3: return LIT0 && SZ0 && CCa && Ls1 && Ls2 && N3 && XOTHER;
case SSE4: return LIT0 && SZ0 && CCa && Ls1 && Ls2 && N3 && XOTHER;
case SSE5: return LIT0 && SZ0 && CC0 && Ls1 && Ls2 && Ls3 && XOTHER;
case SSE3ag_MemRd_RegWr:
# undef LIT1
# undef LIT8
# undef LITm
+# undef SZ16
# undef SZ8
# undef SZ4
# undef SZ2
case SSE3e_RegRd: return "SSE3e_RRd";
case SSE3e_RegWr: return "SSE3e_RWr";
case SSE3g_RegWr: return "SSE3g_RWr";
+ case SSE3a1_MemRd: return "SSE3a1_MRd";
case SSE3g1_RegWr: return "SSE3g1_RWr";
case SSE3e1_RegRd: return "SSE3e1_RRd";
case SSE3: return "SSE3";
case SSE3g1_RegWr:
case SSE3e1_RegRd:
+ case SSE3a1_MemRd:
VG_(printf)("0x%x:0x%x:0x%x:0x%x:0x%x",
(u->val1 >> 8) & 0xFF, u->val1 & 0xFF,
(u->val2 >> 8) & 0xFF, u->val2 & 0xFF,
read-modified-written, it appears first as a read and then as a write.
'tag' indicates whether we are looking at TempRegs or RealRegs.
*/
-__inline__
+/* __inline__ */
Int VG_(get_reg_usage) ( UInstr* u, Tag tag, Int* regs, Bool* isWrites )
{
# define RD(ono) VG_UINSTR_READS_REG(ono, regs, isWrites)
case LEA1: RD(1); WR(2); break;
case LEA2: RD(1); RD(2); WR(3); break;
+ case SSE3a1_MemRd:
case SSE2a1_MemRd:
case SSE3e_RegRd:
case SSE3a_MemWr:
/* Change temp regs in u into real regs, as directed by the
* temps[i]-->reals[i] mapping. */
-static __inline__
+static
void patchUInstr ( UInstr* u, Int temps[], UInt reals[], Int n_tmap )
{
Int i;
reg. Otherwise return -1. Used in redundant-PUT elimination.
Note that this is not required for skins extending UCode because
this happens before instrumentation. */
-static __inline__
+static
Int maybe_uinstrReadsArchReg ( UInstr* u )
{
switch (u->opcode) {
case MMX2_MemRd: case MMX2_MemWr:
case MMX2_ERegRd: case MMX2_ERegWr:
case SSE2a_MemWr: case SSE2a_MemRd: case SSE2a1_MemRd:
- case SSE3a_MemWr: case SSE3a_MemRd:
+ case SSE3a_MemWr: case SSE3a_MemRd: case SSE3a1_MemRd:
case SSE3e_RegRd: case SSE3g_RegWr: case SSE3e_RegWr:
case SSE3g1_RegWr: case SSE3e1_RegRd:
case SSE4: case SSE3: case SSE5: case SSE3ag_MemRd_RegWr:
instrumentation, so the skin doesn't have to worry about the CCALLs
it adds in, and we must do it before register allocation because
spilled temps make it much harder to work out the %esp deltas.
- Thus we have it as an extra phase between the two. */
+ Thus we have it as an extra phase between the two.
+
+ We look for "GETL %ESP, t_ESP", then track ADDs and SUBs of
+ literal values to t_ESP, and the total delta of the ADDs/SUBs. Then if
+ "PUTL t_ESP, %ESP" happens, we call the helper with the known delta. We
+ also cope with "MOVL t_ESP, tX", making tX the new t_ESP. If any other
+   instruction clobbers t_ESP, we stop tracking it and fall back to the
+   delta-is-unknown case.  That case is also used when the delta is not
+   a nice small amount.
+*/
static
UCodeBlock* vg_ESP_update_pass(UCodeBlock* cb_in)
{
if (u->val1 == t_ESP) {
/* Known delta, common cases handled specially. */
switch (delta) {
+ case 0: break;
case 4: DO(die, 4);
case -4: DO(new, 4);
case 8: DO(die, 8);
} else {
/* Unknown delta */
DO_GENERIC;
+
+ /* now we know the temp that points to %ESP */
+ t_ESP = u->val1;
}
delta = 0;
# undef DO
# undef DO_GENERIC
- } else if (Literal == u->tag1 && t_ESP == u->val2) {
- if (ADD == u->opcode) delta += u->lit32;
- if (SUB == u->opcode) delta -= u->lit32;
+ } else if (ADD == u->opcode && Literal == u->tag1 && t_ESP == u->val2) {
+ delta += u->lit32;
+
+ } else if (SUB == u->opcode && Literal == u->tag1 && t_ESP == u->val2) {
+ delta -= u->lit32;
} else if (MOV == u->opcode && TempReg == u->tag1 && t_ESP == u->val1 &&
TempReg == u->tag2) {
+ // t_ESP is transferred
t_ESP = u->val2;
+
+ } else {
+ // Stop tracking t_ESP if it's clobbered by this instruction.
+ Int tempUse [VG_MAX_REGS_USED];
+ Bool isWrites[VG_MAX_REGS_USED];
+ Int j, n = VG_(get_reg_usage)(u, TempReg, tempUse, isWrites);
+
+ for (j = 0; j < n; j++) {
+ if (tempUse[j] == t_ESP && isWrites[j])
+ t_ESP = INVALID_TEMPREG;
+ }
}
VG_(copy_UInstr) ( cb, u );
}
UCodeBlock* vg_do_register_allocation ( UCodeBlock* c1 )
{
TempInfo* temp_info;
- Int real_to_temp[VG_MAX_REALREGS];
+ Int real_to_temp [VG_MAX_REALREGS];
Bool is_spill_cand[VG_MAX_REALREGS];
Int ss_busy_until_before[VG_MAX_SPILLSLOTS];
Int i, j, k, m, r, tno, max_ss_no;
Bool wr, defer, isRead, spill_reqd;
- UInt realUse[VG_MAX_REGS_USED];
- Int tempUse[VG_MAX_REGS_USED];
+ UInt realUse [VG_MAX_REGS_USED];
+ Int tempUse [VG_MAX_REGS_USED];
Bool isWrites[VG_MAX_REGS_USED];
UCodeBlock* c2;
/*--------------------------------------------------------------------*/
/*--- end vg_translate.c ---*/
/*--------------------------------------------------------------------*/
+
#include <linux/cdrom.h> /* for cd-rom ioctls */
#include <sys/user.h> /* for struct user_regs_struct et al */
#include <signal.h> /* for siginfo_t */
+#include <sys/timex.h> /* for struct timex */
#define __USE_LARGEFILE64
#include <sys/stat.h> /* for struct stat */
<body bgcolor="#ffffff">
<a name="title"> </a>
-<h1 align=center>Valgrind, version 2.0.0</h1>
-<center>This manual was last updated on 3 April 2003</center>
+<h1 align=center>Valgrind, stable release 2.0.0</h1>
+<center>This manual was last updated on 6 November 2003</center>
<p>
<center>
<a href="mailto:jseward@acm.org">jseward@acm.org</a>,
- <a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Copyright © 2000-2003 Julian Seward, Nick Nethercote
+ <a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a>,
+ <a href="mailto:jeremy@goop.org">jeremy@goop.org</a><br>
+Copyright © 2000-2003 Julian Seward, Nick Nethercote, Jeremy Fitzhardinge
<p>
Valgrind is licensed under the GNU General Public License, version
}
{
realpath is inefficiently coded
- Memcheck:Overlap
+ Addrcheck,Memcheck:Overlap
fun:memcpy
fun:realpath*
}
{
realpath stupidity part II
- Memcheck:Overlap
+ Addrcheck,Memcheck:Overlap
fun:strcpy
fun:realpath*
}
fun:__pthread_key_create
}
-
+##----------------------------------------------------------------------##
+## Bugs in helper library supplied with Intel Icc 7.0 (65)
+## in /opt/intel/compiler70/ia32/lib/libcxa.so.3
+{
+ Intel compiler70/ia32/lib/libcxa.so.3 below-esp accesses
+ Addrcheck,Memcheck:Addr4
+ obj:/opt/intel/compiler70/ia32/lib/libcxa.so.3
+}
#ifndef __HELGRIND_H
#define __HELGRIND_H
-#define __VALGRIND_SOMESKIN_H
#include "valgrind.h"
typedef
/* Set a word. The byte given by 'a' could be anywhere in the word -- the whole
* word gets set. */
-static __inline__
+static /* __inline__ */
void set_sword ( Addr a, shadow_word sword )
{
ESecMap* sm;
}
/* Compute the hash of a LockSet */
-static inline UInt hash_LockSet_w_wo(const LockSet *ls,
- const Mutex *with,
- const Mutex *without)
+static UInt hash_LockSet_w_wo(const LockSet *ls,
+ const Mutex *with,
+ const Mutex *without)
{
Int i;
UInt hash = ls->setsize + (with != NULL) - (without != NULL);
VG_(track_post_thread_create) (& hg_thread_create);
VG_(track_post_thread_join) (& hg_thread_join);
- VG_(track_post_mutex_lock) (& eraser_pre_mutex_lock);
+ VG_(track_pre_mutex_lock) (& eraser_pre_mutex_lock);
VG_(track_post_mutex_lock) (& eraser_post_mutex_lock);
VG_(track_post_mutex_unlock) (& eraser_post_mutex_unlock);
#define __VALGRIND_H
-#ifndef __VALGRIND_SOMESKIN_H
- #warning For valgrind versions 1.9.0 and after,
- #warning you should not include valgrind.h directly.
- #warning Instead include the .h relevant to the skin
- #warning you want to use. For most people this means
- #warning you need to include memcheck.h instead of
- #warning valgrind.h.
- #error Compilation of your source will now abort.
-#endif
-
-
/* This file is for inclusion into client (your!) code.
You can use these macros to manipulate and query Valgrind's
This file is part of Valgrind, an extensible x86 protected-mode
emulator for monitoring program execution on x86-Unixes.
- Copyright (C) 2000-2003 Julian Seward
+ Copyright (C) 2000-2003 Julian Seward
jseward@acm.org
This program is free software; you can redistribute it and/or
/*=== Build options and table sizes. ===*/
/*====================================================================*/
-/* You should be able to change these options or sizes, recompile, and
+/* You should be able to change these options or sizes, recompile, and
still have a working system. */
/* The maximum number of pthreads that we support. This is
/* Total number of integer registers available for allocation -- all of
them except %esp, %ebp. %ebp permanently points at VG_(baseBlock).
-
+
If you increase this you'll have to also change at least these:
- VG_(rank_to_realreg)()
- VG_(realreg_to_rank)()
You can decrease it, and performance will drop because more spills will
occur. If you decrease it too much, everything will fall over.
-
+
Do not change this unless you really know what you are doing! */
#define VG_MAX_REALREGS 6
interface; if the core and skin major versions don't match, Valgrind
will abort. The minor version indicates binary-compatible changes.
*/
-#define VG_CORE_INTERFACE_MAJOR_VERSION 3
+#define VG_CORE_INTERFACE_MAJOR_VERSION 4
#define VG_CORE_INTERFACE_MINOR_VERSION 0
extern const Int VG_(skin_interface_major_version);
#define VG_INVALID_THREADID ((ThreadId)(0))
/* ThreadIds are simply indices into the VG_(threads)[] array. */
-typedef
- UInt
+typedef
+ UInt
ThreadId;
/* When looking for the current ThreadId, this is the safe option and
probably the one you want.
-
+
Details: Use this one from non-generated code, eg. from functions called
on events like 'new_mem_heap'. In such a case, the "current" thread is
temporarily suspended as Valgrind's dispatcher is running. This function
is also suitable to be called from generated code (ie. from UCode, or a C
function called directly from UCode).
-
+
If you use VG_(get_current_tid)() from non-generated code, it will return
0 signifying the invalid thread, which is probably not what you want. */
extern ThreadId VG_(get_current_or_recent_tid) ( void );
/* When looking for the current ThreadId, only use this one if you know what
you are doing.
-
+
Details: Use this one from generated code, eg. from C functions called
from UCode. (VG_(get_current_or_recent_tid)() is also suitable in that
case.) If you use this function from non-generated code, it will return
*
* Note that they all output to the file descriptor given by the
* --logfile-fd=N argument, which defaults to 2 (stderr). Hence no
- * need for VG_(fprintf)().
+ * need for VG_(fprintf)().
*/
extern UInt VG_(printf) ( const char *format, ... );
/* too noisy ... __attribute__ ((format (printf, 1, 2))) ; */
extern UInt VG_(sprintf) ( Char* buf, Char *format, ... );
-extern UInt VG_(vprintf) ( void(*send)(Char),
+extern UInt VG_(vprintf) ( void(*send)(Char),
const Char *format, va_list vargs );
extern Int VG_(rename) ( Char* old_name, Char* new_name );
extern Int VG_(strcmp_ws) ( const Char* s1, const Char* s2 );
extern Int VG_(strncmp_ws) ( const Char* s1, const Char* s2, Int nmax );
-/* Like strncpy(), but if 'src' is longer than 'ndest' inserts a '\0' as the
+/* Like strncpy(), but if 'src' is longer than 'ndest' inserts a '\0' as the
last character. */
extern void VG_(strncpy_safely) ( Char* dest, const Char* src, Int ndest );
__PRETTY_FUNCTION__), 0)))
__attribute__ ((__noreturn__))
-extern void VG_(skin_assert_fail) ( Char* expr, Char* file,
+extern void VG_(skin_assert_fail) ( Char* expr, Char* file,
Int line, Char* fn );
/* ------------------------------------------------------------------ */
/* system/mman.h */
-extern void* VG_(mmap)( void* start, UInt length,
+extern void* VG_(mmap)( void* start, UInt length,
UInt prot, UInt flags, UInt fd, UInt offset );
extern Int VG_(munmap)( void* start, Int length );
/* ------------------------------------------------------------------ */
-/* signal.h.
-
+/* signal.h.
+
Note that these use the vk_ (kernel) structure
definitions, which are different in places from those that glibc
defines -- hence the 'k' prefix. Since we're operating right at the
extern void VG_(ksigdelset_from_set) ( vki_ksigset_t* dst, vki_ksigset_t* src );
/* --- Mess with the kernel's sig state --- */
-extern Int VG_(ksigprocmask) ( Int how, const vki_ksigset_t* set,
+extern Int VG_(ksigprocmask) ( Int how, const vki_ksigset_t* set,
vki_ksigset_t* oldset );
-extern Int VG_(ksigaction) ( Int signum,
- const vki_ksigaction* act,
+extern Int VG_(ksigaction) ( Int signum,
+ const vki_ksigaction* act,
vki_ksigaction* oldact );
extern Int VG_(ksignal) ( Int signum, void (*sighandler)(Int) );
WIDEN, /* Signed or unsigned widening */
/* Conditional or unconditional jump */
- JMP,
+ JMP,
/* FPU ops */
FPU, /* Doesn't touch memory */
/* 2 bytes, reads/writes mem. Insns of the form
bbbbbbbb:mod mmxreg r/m.
Held in val1[15:0], and mod and rm are to be replaced
- at codegen time by a reference to the Temp/RealReg holding
+ at codegen time by a reference to the Temp/RealReg holding
the address. Arg2 holds this Temp/Real Reg.
Transfer is always at size 8.
*/
holding the address. Arg3 holds this Temp/Real Reg.
Transfer is at stated size. */
SSE2a1_MemRd,
-#if 0
- SSE2a1_MemWr,
-#endif
+
/* 4 bytes, writes an integer register. Insns of the form
bbbbbbbb:bbbbbbbb:bbbbbbbb:11 ireg bbb.
Held in val1[15:0] and val2[15:0], and ireg is to be replaced
/* 5 bytes, no memrefs, no iregdefs, copy exactly to the
output. Held in val1[15:0], val2[15:0] and val3[7:0]. */
SSE5,
-#if 0
+
/* 5 bytes, reads/writes mem. Insns of the form
bbbbbbbb:bbbbbbbb:bbbbbbbb:mod mmxreg r/m:bbbbbbbb
- Held in val1[15:0], val2[15:0], lit32[7:0].
- mod and rm are to be replaced at codegen time by a reference
- to the Temp/RealReg holding the address. Arg3 holds this
+ Held in val1[15:0], val2[15:0], lit32[7:0].
+ mod and rm are to be replaced at codegen time by a reference
+ to the Temp/RealReg holding the address. Arg3 holds this
Temp/Real Reg. Transfer is always at size 16. */
SSE3a1_MemRd,
- SSE3a1_MemWr,
-#endif
+
/* ------------------------ */
/* Not strictly needed, but improve address calculation translations. */
Seven possibilities: 'arg[123]' show where args go, 'ret' shows
where return value goes (if present).
-
+
CCALL(-, -, - ) void f(void)
CCALL(arg1, -, - ) void f(UInt arg1)
CCALL(arg1, arg2, - ) void f(UInt arg1, UInt arg2)
/* This opcode makes it easy for skins that extend UCode to do this to
avoid opcode overlap:
- enum { EU_OP1 = DUMMY_FINAL_UOPCODE + 1, ... }
-
+ enum { EU_OP1 = DUMMY_FINAL_UOPCODE + 1, ... }
+
WARNING: Do not add new opcodes after this one! They can be added
before, though. */
DUMMY_FINAL_UOPCODE
CondLE = 14, /* less or equal */
CondNLE = 15, /* not less or equal */
CondAlways = 16 /* Jump always */
- }
+ }
Condcode;
/* Flags. User-level code can only read/write O(verflow), S(ign),
Z(ero), A(ux-carry), C(arry), P(arity), and may also write
D(irection). That's a total of 7 flags. A FlagSet is a bitset,
- thusly:
+ thusly:
76543210
DOSZACP
and bit 7 must always be zero since it is unused.
Bool signed_widen:1; /* signed or unsigned WIDEN ? */
JmpKind jmpkind:3; /* additional properties of unconditional JMP */
- /* Additional properties for UInstrs that call C functions:
+ /* Additional properties for UInstrs that call C functions:
- CCALL
- PUT (when %ESP is the target)
- possibly skin-specific UInstrs
to use this information requires converting between register ranks
and the Intel register numbers, using VG_(realreg_to_rank)()
and/or VG_(rank_to_realreg)() */
- RRegSet regs_live_after:VG_MAX_REALREGS;
+ RRegSet regs_live_after:VG_MAX_REALREGS;
}
UInstr;
-typedef
+typedef
struct _UCodeBlock
UCodeBlock;
extern UInstr* VG_(get_instr) (UCodeBlock* cb, Int i);
extern UInstr* VG_(get_last_instr) (UCodeBlock* cb);
-
+
/*====================================================================*/
/*=== Instrumenting UCode ===*/
Tag tag2, UInt val2,
Tag tag3, UInt val3 );
-/* Set read/write/undefined flags. Undefined flags are treaten as written,
+/* Set read/write/undefined flags. Undefined flags are treated as written,
but it's worth keeping them logically distinct. */
extern void VG_(set_flag_fields) ( UCodeBlock* cb, FlagSet fr, FlagSet fw,
FlagSet fu);
extern void VG_(free_UCodeBlock) ( UCodeBlock* cb );
/* ------------------------------------------------------------------ */
-/* UCode pretty/ugly printing. Probably only useful to call from a skin
+/* UCode pretty/ugly printing. Probably only useful to call from a skin
if VG_(needs).extended_UCode == True. */
/* When True, all generated code is/should be printed. */
extern void VG_(up_UInstr) ( Int instrNo, UInstr* u );
extern Char* VG_(name_UOpcode) ( Bool upper, Opcode opc );
extern Char* VG_(name_UCondcode) ( Condcode cond );
-extern void VG_(pp_UOperand) ( UInstr* u, Int operandNo,
+extern void VG_(pp_UOperand) ( UInstr* u, Int operandNo,
Int sz, Bool parens );
/* ------------------------------------------------------------------ */
extern Int VGOFF_(helper_RDTSC);
extern Int VGOFF_(helper_CPUID);
+extern Int VGOFF_(helper_IN);
+extern Int VGOFF_(helper_OUT);
+
extern Int VGOFF_(helper_bsf);
extern Int VGOFF_(helper_bsr);
#define R_GS 5
/* For pretty printing x86 code */
-extern Char* VG_(name_of_mmx_gran) ( UChar gran );
-extern Char* VG_(name_of_mmx_reg) ( Int mmxreg );
-extern Char* VG_(name_of_seg_reg) ( Int sreg );
-extern Char* VG_(name_of_int_reg) ( Int size, Int reg );
-extern Char VG_(name_of_int_size) ( Int size );
+extern const Char* VG_(name_of_mmx_gran) ( UChar gran );
+extern const Char* VG_(name_of_mmx_reg) ( Int mmxreg );
+extern const Char* VG_(name_of_seg_reg) ( Int sreg );
+extern const Char* VG_(name_of_int_reg) ( Int size, Int reg );
+extern const Char VG_(name_of_int_size) ( Int size );
/* Shorter macros for convenience */
#define nameIReg VG_(name_of_int_reg)
extern Int VG_(rank_to_realreg) ( Int rank );
/* Call a subroutine. Does no argument passing, stack manipulations, etc. */
-extern void VG_(synth_call) ( Bool ensure_shortform, Int word_offset,
+extern void VG_(synth_call) ( Bool ensure_shortform, Int word_offset,
Bool upd_cc, FlagSet use_flags, FlagSet set_flags );
/* For calling C functions -- saves caller save regs, pushes args, calls,
by some other x86 assembly code; this will invalidate the results of
vg_realreg_liveness_analysis() and everything will fall over. */
extern void VG_(synth_ccall) ( Addr fn, Int argc, Int regparms_n, UInt argv[],
- Tag tagv[], Int ret_reg,
+ Tag tagv[], Int ret_reg,
RRegSet regs_live_before,
RRegSet regs_live_after );
/* Generic resolution type used in a few different ways, such as deciding
how closely to compare two errors for equality. */
-typedef
- enum { Vg_LowRes, Vg_MedRes, Vg_HighRes }
+typedef
+ enum { Vg_LowRes, Vg_MedRes, Vg_HighRes }
VgRes;
typedef
struct _ExeContext
ExeContext;
-/* Compare two ExeContexts. Number of callers considered depends on `res':
- Vg_LowRes: 2
- Vg_MedRes: 4
+/* Compare two ExeContexts. Number of callers considered depends on `res':
+ Vg_LowRes: 2
+ Vg_MedRes: 4
Vg_HighRes: all */
extern Bool VG_(eq_ExeContext) ( VgRes res,
ExeContext* e1, ExeContext* e2 );
/* Take a snapshot of the client's stack. Search our collection of
ExeContexts to see if we already have it, and if not, allocate a
- new one. Either way, return a pointer to the context.
-
+ new one. Either way, return a pointer to the context. Context size
+ controlled by --num-callers option.
+
If called from generated code, use VG_(get_current_tid)() to get the
current ThreadId. If called from non-generated code, the current
- ThreadId should be passed in by the core.
+ ThreadId should be passed in by the core.
*/
extern ExeContext* VG_(get_ExeContext) ( ThreadId tid );
/* Just grab the client's EIP, as a much smaller and cheaper
indication of where they are. Use is basically same as for
- VG_(get_ExeContext)() above.
+ VG_(get_ExeContext)() above.
*/
extern Addr VG_(get_EIP)( ThreadId tid );
/*====================================================================*/
/* ------------------------------------------------------------------ */
-/* Suppressions describe errors which we want to suppress, ie, not
+/* Suppressions describe errors which we want to suppress, ie, not
show the user, usually because it is caused by a problem in a library
- which we can't fix, replace or work around. Suppressions are read from
+ which we can't fix, replace or work around. Suppressions are read from
a file at startup time. This gives flexibility so that new
suppressions can be added to the file as and when needed.
*/
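For reference, an entry in the suppressions file has roughly this shape. This is a hypothetical example: the name on the first line inside the braces is free-form, the second line names the skin and error kind (the kinds parsed elsewhere in this code, eg. "Leak", "Addr16", "Value4"), and the remaining lines match the calling context, innermost frame first:

```
{
   libX11_leak_fixme
   Memcheck:Leak
   fun:malloc
   obj:/usr/X11R6/lib/libX11.so.6.2
   fun:XCreateGC
}
```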
/* Call this when an error occurs. It will be recorded if it hasn't been
seen before. If it has, the existing error record will have its count
- incremented.
-
+ incremented.
+
'tid' can be found as for VG_(get_ExeContext)(). The `extra' field can
be stack-allocated; it will be copied by the core if needed (but it
won't be copied if it's NULL).
If no 'a', 's' or 'extra' of interest needs to be recorded, just use
NULL for them. */
-extern void VG_(maybe_record_error) ( ThreadId tid, ErrorKind ekind,
+extern void VG_(maybe_record_error) ( ThreadId tid, ErrorKind ekind,
Addr a, Char* s, void* extra );
/* Similar to VG_(maybe_record_error)(), except this one doesn't record the
error -- useful for errors that can only happen once. The errors can be
suppressed, though. Return value is True if it was suppressed.
- `print_error' dictates whether to print the error, which is a bit of a
+ `print_error' dictates whether to print the error, which is a bit of a
hack that's useful sometimes if you just want to know if the error would
- be suppressed without possibly printing it. `count_error' dictates
+ be suppressed without possibly printing it. `count_error' dictates
whether to add the error in the error total count (another mild hack). */
extern Bool VG_(unique_error) ( ThreadId tid, ErrorKind ekind,
Addr a, Char* s, void* extra,
Bool allow_GDB_attach, Bool count_error );
/* Gets a non-blank, non-comment line of at most nBuf chars from fd.
- Skips leading spaces on the line. Returns True if EOF was hit instead.
+ Skips leading spaces on the line. Returns True if EOF was hit instead.
Useful for reading in extra skin-specific suppression lines. */
extern Bool VG_(get_line) ( Int fd, Char* buf, Int nBuf );
extern Bool VG_(get_filename) ( Addr a, Char* filename, Int n_filename );
extern Bool VG_(get_fnname) ( Addr a, Char* fnname, Int n_fnname );
extern Bool VG_(get_linenum) ( Addr a, UInt* linenum );
-extern Bool VG_(get_fnname_w_offset)
+extern Bool VG_(get_fnname_w_offset)
( Addr a, Char* fnname, Int n_fnname );
/* This one is more efficient if getting both filename and line number,
because the two lookups are done together. */
-extern Bool VG_(get_filename_linenum)
+extern Bool VG_(get_filename_linenum)
( Addr a, Char* filename, Int n_filename,
UInt* linenum );
/* Add a node to the table. */
extern void VG_(HT_add_node) ( VgHashTable t, VgHashNode* node );
-/* Looks up a node in the hash table. Also returns the address of the
+/* Looks up a node in the hash table. Also returns the address of the
previous node's `next' pointer which allows it to be removed from the
list later without having to look it up again. */
extern VgHashNode* VG_(HT_get_node) ( VgHashTable t, UInt key,
/* Allocates a sorted array of pointers to all the shadow chunks of malloc'd
blocks. */
-extern VgHashNode** VG_(HT_to_sorted_array) ( VgHashTable t,
+extern VgHashNode** VG_(HT_to_sorted_array) ( VgHashTable t,
/*OUT*/ UInt* n_shadows );
/* Returns first node that matches predicate `p', or NULL if none do.
/* This one lets you override the shadow of the return value register for a
syscall. Call it from SK_(post_syscall)() (not SK_(pre_syscall)()!) to
override the default shadow register value. */
-extern void VG_(set_return_from_syscall_shadow) ( ThreadId tid,
+extern void VG_(set_return_from_syscall_shadow) ( ThreadId tid,
UInt ret_shadow );
/* This can be called from SK_(fini)() to find the shadow of the argument
/* ------------------------------------------------------------------ */
/* General stuff, for replacing any functions */
-/* Is the client running on the simulated CPU or the real one?
+/* Is the client running on the simulated CPU or the real one?
Nb: If it is, and you want to call a function to be run on the real CPU,
use one of the VALGRIND_NON_SIMD_CALL[123] macros in valgrind.h to call it.
- Nb: don't forget the function parentheses when using this in a
+ Nb: don't forget the function parentheses when using this in a
condition... write this:
if (VG_(is_running_on_simd_CPU)()) { ... } // calls function
not this:
-
+
if (VG_(is_running_on_simd_CPU)) { ... } // address of var!
*/
-extern Bool VG_(is_running_on_simd_CPU) ( void );
+extern Bool VG_(is_running_on_simd_CPU) ( void );
/*====================================================================*/
/* Redzone size for valgrind's own malloc(); default value is 0, but can
be overridden by skin -- but must be done so *statically*, eg:
-
+
Int VG_(vg_malloc_redzone_szB) = 4;
-
+
It can't be done from a function like SK_(pre_clo_init)(). So it can't,
for example, be controlled with a command line option, unfortunately. */
extern UInt VG_(vg_malloc_redzone_szB);
extern void* SK_(realloc) ( void* p, Int size );
/* Can be called from SK_(malloc) et al to do the actual alloc/freeing. */
-extern void* VG_(cli_malloc) ( UInt align, Int nbytes );
+extern void* VG_(cli_malloc) ( UInt align, Int nbytes );
extern void VG_(cli_free) ( void* p );
/* Check if an address is within a range, allowing for redzones at edges */
extern Bool VG_(addr_is_in_block)( Addr a, Addr start, UInt size );
/* ------------------------------------------------------------------ */
-/* Some options that can be used by a skin if malloc() et al are replaced.
+/* Some options that can be used by a skin if malloc() et al are replaced.
The skin should call the functions in the appropriate places to give
control over these aspects of Valgrind's version of malloc(). */
/* Average size of a translation, in bytes, so that the translation
storage machinery can allocate memory appropriately. Not critical,
- setting is optional. */
+ setting is optional. */
extern void VG_(details_avg_translation_sizeB) ( UInt size );
/* String printed if an `sk_assert' assertion fails or VG_(skin_panic)
- pthread API errors (many; eg. unlocking a non-locked mutex)
- invalid file descriptors to blocking syscalls read() and write()
- bad signal numbers passed to sigaction()
- - attempt to install signal handler for SIGKILL or SIGSTOP */
+ - attempt to install signal handler for SIGKILL or SIGSTOP */
extern void VG_(needs_core_errors) ( void );
/* Booleans that indicate extra operations are defined; if these are True,
/* Part of the core from which this call was made. Useful for determining
what kind of error message should be emitted. */
-typedef
+typedef
enum { Vg_CorePThread, Vg_CoreSignal, Vg_CoreSysCall, Vg_CoreTranslate }
CorePart;
/* Events happening in core to track. To be notified, pass a callback
function to the appropriate function. To ignore an event, don't do
- anything (default is for events to be ignored).
-
+ anything (default is for events to be ignored).
+
Note that most events aren't passed a ThreadId. To find out the ThreadId
of the affected thread, use VG_(get_current_or_recent_tid)(). For the
ones passed a ThreadId, use that instead, since
malloc() et al. See above how to do this.) */
/* These ones occur at startup, upon some signals, and upon some syscalls */
-EV VG_(track_new_mem_startup) ( void (*f)(Addr a, UInt len,
+EV VG_(track_new_mem_startup) ( void (*f)(Addr a, UInt len,
Bool rr, Bool ww, Bool xx) );
EV VG_(track_new_mem_stack_signal) ( void (*f)(Addr a, UInt len) );
EV VG_(track_new_mem_brk) ( void (*f)(Addr a, UInt len) );
/* These ones are called when %esp changes. A skin could track these itself
(except for ban_mem_stack) but it's much easier to use the core's help.
-
+
The specialised ones are called in preference to the general one, if they
are defined. These functions are called a lot if they are used, so
specialising can optimise things significantly. If any of the
- specialised cases are defined, the general case must be defined too.
-
+ specialised cases are defined, the general case must be defined too.
+
Nb: they must all use the __attribute__((regparm(n))) attribute. */
EV VG_(track_new_mem_stack_4) ( void (*f)(Addr new_ESP) );
EV VG_(track_new_mem_stack_8) ( void (*f)(Addr new_ESP) );
Char* s, Addr a, UInt size) );
/* Not implemented yet -- have to add in lots of places, which is a
pain. Won't bother unless/until there's a need. */
-/* EV VG_(track_post_mem_read) ( void (*f)(ThreadId tid, Char* s,
+/* EV VG_(track_post_mem_read) ( void (*f)(ThreadId tid, Char* s,
Addr a, UInt size) ); */
EV VG_(track_post_mem_write) ( void (*f)(Addr a, UInt size) );
/* Use VG_(set_shadow_archreg)() to set the eight general purpose regs,
and use VG_(set_shadow_eflags)() to set eflags. */
-EV VG_(track_post_regs_write_init) ( void (*f)() );
+EV VG_(track_post_regs_write_init) ( void (*f)() );
-/* Use VG_(set_thread_shadow_archreg)() to set the shadow regs for these
+/* Use VG_(set_thread_shadow_archreg)() to set the shadow regs for these
events. */
-EV VG_(track_post_reg_write_syscall_return)
+EV VG_(track_post_reg_write_syscall_return)
( void (*f)(ThreadId tid, UInt reg) );
EV VG_(track_post_reg_write_deliver_signal)
( void (*f)(ThreadId tid, UInt reg) );
about to resume. */
EV VG_(track_post_thread_join) ( void (*f)(ThreadId joiner, ThreadId joinee) );
-
+
/* Mutex events (not exhaustive) */
/* Called before a thread can block while waiting for a mutex (called
regardless of whether the thread will block or not). */
-EV VG_(track_pre_mutex_lock) ( void (*f)(ThreadId tid,
+EV VG_(track_pre_mutex_lock) ( void (*f)(ThreadId tid,
void* /*pthread_mutex_t* */ mutex) );
/* Called once the thread actually holds the mutex (always paired with
pre_mutex_lock). */
-EV VG_(track_post_mutex_lock) ( void (*f)(ThreadId tid,
+EV VG_(track_post_mutex_lock) ( void (*f)(ThreadId tid,
void* /*pthread_mutex_t* */ mutex) );
/* Called after a thread has released a mutex (no need for a corresponding
pre_mutex_unlock, because unlocking can't block). */
-EV VG_(track_post_mutex_unlock) ( void (*f)(ThreadId tid,
+EV VG_(track_post_mutex_unlock) ( void (*f)(ThreadId tid,
void* /*pthread_mutex_t* */ mutex) );
/* Initialise skin. Must do the following:
- initialise the `details' struct, via the VG_(details_*)() functions
- register any helpers called by generated code
-
+
May do the following:
- initialise the `needs' struct to indicate certain requirements, via
the VG_(needs_*)() functions
/* Read any extra info for this suppression kind. Most likely for filling
in the `extra' and `string' parts (with VG_(set_supp_{extra,string})())
- of a suppression if necessary. Should return False if a syntax error
+ of a suppression if necessary. Should return False if a syntax error
occurred, True otherwise. */
extern Bool SK_(read_extra_suppression_info) ( Int fd, Char* buf, Int nBuf,
Supp* su );
/* VG_(needs).syscall_wrapper */
/* If either of the pre_ functions malloc() something to return, the
- * corresponding post_ function had better free() it!
- */
+ * corresponding post_ function had better free() it!
+ */
extern void* SK_( pre_syscall) ( ThreadId tid, UInt syscallno,
Bool is_blocking );
extern void SK_(post_syscall) ( ThreadId tid, UInt syscallno,
}
break;
- case MMX1: case MMX2: case MMX3:
- case MMX2_MemRd: case MMX2_MemWr:
- case MMX2_ERegRd: case MMX2_ERegWr:
- VG_(skin_panic)(
- "I don't know how to instrument MMXish stuff (yet)");
- break;
-
default:
/* Count UInstr */
VG_(call_helper_0_0)(cb, (Addr) & add_one_UInstr);
<h3>3.1 Kinds of bugs that memcheck can find</h3>
-Memcheck is Valgrind-1.0.X's checking mechanism bundled up into a skin.
All reads and writes of memory are checked, and calls to
malloc/new/free/delete are intercepted. As a result, memcheck can
detect the following problems:
assume that reads and writes some small distance below the stack
pointer <code>%esp</code> are due to bugs in gcc 2.96, and does
not report them. The "small distance" is 256 bytes by default.
- Note that gcc 2.96 is the default compiler on some popular Linux
- distributions (RedHat 7.X, Mandrake) and so you may well need to
+ Note that gcc 2.96 is the default compiler on some older Linux
+ distributions (RedHat 7.X) and so you may well need to
use this flag. Do not use it if you do not have to, as it can
cause real errors to be overlooked. Another option is to use a
gcc/g++ which does not generate accesses below the stack
is: all naturally-aligned 4-byte words for which all A bits indicate
addressibility and all V bits indicate that the stored value is
actually valid.
+<p>
+
+
+<a name="clientreqs"></a>
+<h3>3.7 Client Requests</h3>
+
+The following client requests are defined in <code>memcheck.h</code>. They
+also work for the Addrcheck skin. See <code>memcheck.h</code> for exact
+details of their arguments.
+
+<ul>
+<li><code>VALGRIND_MAKE_NOACCESS</code>,
+ <code>VALGRIND_MAKE_WRITABLE</code> and
+ <code>VALGRIND_MAKE_READABLE</code>. These mark address
+ ranges as completely inaccessible, accessible but containing
+ undefined data, and accessible and containing defined data,
+ respectively. Subsequent errors may have their faulting
+   addresses described in terms of these blocks.  Each returns a
+   "block handle", or zero when not run on Valgrind.
+<p>
+<li><code>VALGRIND_DISCARD</code>: At some point you may want
+ Valgrind to stop reporting errors in terms of the blocks
+ defined by the previous three macros. To do this, the above
+ macros return a small-integer "block handle". You can pass
+ this block handle to <code>VALGRIND_DISCARD</code>. After
+ doing so, Valgrind will no longer be able to relate
+ addressing errors to the user-defined block associated with
+ the handle. The permissions settings associated with the
+ handle remain in place; this just affects how errors are
+ reported, not whether they are reported. Returns 1 for an
+ invalid handle and 0 for a valid handle (although passing
+ invalid handles is harmless). Always returns 0 when not run
+ on Valgrind.
+<p>
+<li><code>VALGRIND_CHECK_WRITABLE</code> and
+ <code>VALGRIND_CHECK_READABLE</code>: check immediately
+ whether or not the given address range has the relevant
+ property, and if not, print an error message. Also, for the
+ convenience of the client, returns zero if the relevant
+ property holds; otherwise, the returned value is the address
+ of the first byte for which the property is not true.
+ Always returns 0 when not run on Valgrind.
+<p>
+<li><code>VALGRIND_CHECK_DEFINED</code>: a quick and easy way
+ to find out whether Valgrind thinks a particular variable
+ (lvalue, to be precise) is addressible and defined. Prints
+ an error message if not. Returns no value.
+<p>
+<li><code>VALGRIND_DO_LEAK_CHECK</code>: run the memory leak detector
+ right now. Returns no value. I guess this could be used to
+ incrementally check for leaks between arbitrary places in the
+ program's execution. Warning: not properly tested!
+<p>
+<li><code>VALGRIND_COUNT_LEAKS</code>: fills in the four arguments with
+ the number of bytes of memory found by the previous leak check to
+ be leaked, dubious, reachable and suppressed. Again, useful in
+ test harness code, after calling <code>VALGRIND_DO_LEAK_CHECK</code>.
+<p>
+<li><code>VALGRIND_MALLOCLIKE_BLOCK</code>: If your program manages its own
+ memory instead of using the standard
+ <code>malloc()</code>/<code>new</code>/<code>new[]</code>, Memcheck will
+ not detect nearly as many errors, and the error messages won't be as
+ informative. To improve this situation, use this macro just after your
+ custom allocator allocates some new memory. See the comments in
+ <code>memcheck/memcheck.h</code> for information on how to use it.
+<p>
+<li><code>VALGRIND_FREELIKE_BLOCK</code>: This should be used in conjunction
+ with <code>VALGRIND_MALLOCLIKE_BLOCK</code>. Again, see
+ <code>memcheck/memcheck.h</code> for information on how to use it.
+<p>
+<li><code>VALGRIND_GET_VBITS</code> and
+ <code>VALGRIND_SET_VBITS</code>: allow you to get and set the V (validity)
+ bits for an address range. You should probably only set V bits that you
+ have got with <code>VALGRIND_GET_VBITS</code>. Only for those who really
+ know what they are doing.
+<p>
+</ul>
if (lc_n_shadows == 0) {
sk_assert(lc_shadows == NULL);
- VG_(message)(Vg_UserMsg,
- "No malloc'd blocks -- no leaks are possible.");
+ if (VG_(clo_verbosity) >= 1) {
+ VG_(message)(Vg_UserMsg,
+ "No malloc'd blocks -- no leaks are possible.");
+ }
return;
}
else if (VG_STREQ(name, "Addr16")) skind = Addr16Supp;
else if (VG_STREQ(name, "Free")) skind = FreeSupp;
else if (VG_STREQ(name, "Leak")) skind = LeakSupp;
+ else if (VG_STREQ(name, "Overlap")) skind = OverlapSupp;
else
return False;
char* strncpy ( char* dst, const char* src, int n )
{
- Char* dst_orig = dst;
+ const Char* src_orig = src;
+ Char* dst_orig = dst;
Int m = 0;
- if (is_overlap(dst, src, n, n))
- complain3("strncpy", dst, src, n);
-
while (m < n && *src) { m++; *dst++ = *src++; }
+ /* Check for overlap after copying; all n bytes of dst are relevant,
+ but only m+1 bytes of src if terminator was found */
+ if (is_overlap(dst_orig, src_orig, n, (m < n) ? m+1 : n))
+      complain3("strncpy", dst_orig, src_orig, n);
while (m++ < n) *dst++ = 0; /* must pad remainder with nulls */
return dst_orig;
return dst;
}
+int memcmp ( const void *s1V, const void *s2V, unsigned int n )
+{
+ int res;
+ unsigned char a0;
+ unsigned char b0;
+ unsigned char* s1 = (unsigned char*)s1V;
+ unsigned char* s2 = (unsigned char*)s2V;
+
+ while (n != 0) {
+ a0 = s1[0];
+ b0 = s2[0];
+ s1 += 1;
+ s2 += 1;
+ res = ((int)a0) - ((int)b0);
+ if (res != 0)
+ return res;
+ n -= 1;
+ }
+ return 0;
+}
/*--------------------------------------------------------------------*/
/*--- end mac_replace_strmem.c ---*/
else if (VG_STREQ(name, "Value4")) skind = Value4Supp;
else if (VG_STREQ(name, "Value8")) skind = Value8Supp;
else if (VG_STREQ(name, "Value16")) skind = Value16Supp;
- else if (VG_STREQ(name, "Overlap")) skind = OverlapSupp;
else
return False;
return sm->vbyte[sm_off];
}
-static __inline__ void set_abit ( Addr a, UChar abit )
+static /* __inline__ */ void set_abit ( Addr a, UChar abit )
{
SecMap* sm;
UInt sm_off;
-
+/* -*- c-basic-offset: 3 -*- */
/*--------------------------------------------------------------------*/
/*--- Instrument UCode to perform memory checking operations. ---*/
/*--- mc_translate.c ---*/
case MMX2_MemRd: case MMX2_MemWr:
case FPU_R: case FPU_W: {
Int t_size = INVALID_TEMPREG;
+ Bool is_load;
if (u_in->opcode == MMX2_MemRd || u_in->opcode == MMX2_MemWr)
sk_assert(u_in->size == 4 || u_in->size == 8);
+ is_load = u_in->opcode==FPU_R || u_in->opcode==MMX2_MemRd;
sk_assert(u_in->tag2 == TempReg);
uInstr1(cb, TESTV, 4, TempReg, SHADOW(u_in->val2));
uInstr1(cb, SETV, 4, TempReg, SHADOW(u_in->val2));
uInstr2(cb, MOV, 4, Literal, 0, TempReg, t_size);
uLiteral(cb, u_in->size);
uInstr2(cb, CCALL, 0, TempReg, u_in->val2, TempReg, t_size);
- uCCall(cb,
- u_in->opcode==FPU_R ? (Addr) & MC_(fpu_read_check)
- : (Addr) & MC_(fpu_write_check),
+ uCCall(cb, is_load ? (Addr) & MC_(fpu_read_check)
+ : (Addr) & MC_(fpu_write_check),
2, 2, False);
VG_(copy_UInstr)(cb, u_in);
break;
}
- /* ... and the same deal for SSE insns referencing memory. */
+      /* SSE insns referencing scalar integer registers */
+ case SSE3g_RegWr:
+ case SSE3e_RegRd:
+ case SSE3e_RegWr:
+ case SSE3g1_RegWr:
+ case SSE3e1_RegRd:
+ sk_assert(u_in->tag3 == TempReg);
+
+ if (u_in->opcode == SSE3e1_RegRd) {
+ sk_assert(u_in->size == 2);
+ } else {
+ sk_assert(u_in->size == 4);
+ }
+
+ /* Is it a read ? Better check the V bits right now. */
+ if ( u_in->opcode == SSE3e_RegRd
+ || u_in->opcode == SSE3e1_RegRd )
+ uInstr1(cb, TESTV, u_in->size,
+ TempReg, SHADOW(u_in->val3));
+
+ /* And for both read and write, set the register to be
+ defined. */
+ uInstr1(cb, SETV, u_in->size,
+ TempReg, SHADOW(u_in->val3));
+
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+
+ /* ... and the same deal for SSE insns referencing memory */
case SSE3a_MemRd:
case SSE3a_MemWr:
case SSE2a_MemWr:
- case SSE2a_MemRd: {
+ case SSE2a_MemRd:
+ case SSE3a1_MemRd: {
Bool is_load;
Int t_size;
- sk_assert(u_in->size == 4 || u_in->size == 16);
+ sk_assert(u_in->size == 4
+ || u_in->size == 8 || u_in->size == 16);
t_size = INVALID_TEMPREG;
- is_load = u_in->opcode==SSE2a_MemRd
- || u_in->opcode==SSE3a_MemRd;
+ is_load = u_in->opcode==SSE2a_MemRd
+ || u_in->opcode==SSE3a_MemRd
+ || u_in->opcode==SSE3a1_MemRd;
+
sk_assert(u_in->tag3 == TempReg);
- uInstr1(cb, TESTV, 4, TempReg, SHADOW(u_in->val3));
- uInstr1(cb, SETV, 4, TempReg, SHADOW(u_in->val3));
- t_size = newTemp(cb);
- uInstr2(cb, MOV, 4, Literal, 0, TempReg, t_size);
- uLiteral(cb, u_in->size);
- uInstr2(cb, CCALL, 0, TempReg, u_in->val3, TempReg, t_size);
- uCCall(cb, is_load ? (Addr) & MC_(fpu_read_check)
- : (Addr) & MC_(fpu_write_check),
- 2, 2, False);
+ uInstr1(cb, TESTV, 4, TempReg, SHADOW(u_in->val3));
+ uInstr1(cb, SETV, 4, TempReg, SHADOW(u_in->val3));
+ t_size = newTemp(cb);
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t_size);
+ uLiteral(cb, u_in->size);
+ uInstr2(cb, CCALL, 0, TempReg, u_in->val3, TempReg, t_size);
+ uCCall(cb, is_load ? (Addr) & MC_(fpu_read_check)
+ : (Addr) & MC_(fpu_write_check),
+ 2, 2, False);
+
VG_(copy_UInstr)(cb, u_in);
break;
}
+ case SSE3ag_MemRd_RegWr:
+ {
+ Int t_size;
+
+ sk_assert(u_in->size == 4 || u_in->size == 8);
+ sk_assert(u_in->tag1 == TempReg);
+ uInstr1(cb, TESTV, 4, TempReg, SHADOW(u_in->val1));
+ uInstr1(cb, SETV, 4, TempReg, SHADOW(u_in->val1));
+ t_size = newTemp(cb);
+ uInstr2(cb, MOV, 4, Literal, 0, TempReg, t_size);
+ uLiteral(cb, u_in->size);
+ uInstr2(cb, CCALL, 0, TempReg, u_in->val1, TempReg, t_size);
+         uCCall(cb, (Addr) & MC_(fpu_read_check), 2, 2, False );
+ uInstr1(cb, SETV, 4, TempReg, SHADOW(u_in->val2));
+ VG_(copy_UInstr)(cb, u_in);
+ break;
+ }
+
/* For FPU, MMX and SSE insns not referencing memory, just
- copy thru. */
- case SSE4: case SSE3:
+ copy thru. */
+ case SSE5: case SSE4: case SSE3:
case MMX1: case MMX2: case MMX3:
case FPU:
VG_(copy_UInstr)(cb, u_in);
See comment near the top of valgrind.h on how to use them.
*/
-#define __VALGRIND_SOMESKIN_H
#include "valgrind.h"
typedef
(volatile unsigned char *)&(__lvalue), \
(unsigned int)(sizeof (__lvalue)))
+/* Do a memory leak check mid-execution. */
+#define VALGRIND_DO_LEAK_CHECK \
+ {unsigned int _qzz_res; \
+ VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, \
+ VG_USERREQ__DO_LEAK_CHECK, \
+ 0, 0, 0, 0); \
+ }
+
+/* Return number of leaked, dubious, reachable and suppressed bytes found by
+ all previous leak checks. They must be lvalues. */
+#define VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed) \
+ {unsigned int _qzz_res; \
+ VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, \
+ VG_USERREQ__COUNT_LEAKS, \
+ &leaked, &dubious, &reachable, &suppressed);\
+ }
+
+#endif
+
+
/* Mark a block of memory as having been allocated by a malloc()-like
function. `addr' is the start of the usable block (ie. after any
redzone) `rzB' is redzone size if the allocator can apply redzones;
addr, rzB, 0, 0); \
}
-/* Do a memory leak check mid-execution. */
-#define VALGRIND_DO_LEAK_CHECK \
- {unsigned int _qzz_res; \
- VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, \
- VG_USERREQ__DO_LEAK_CHECK, \
- 0, 0, 0, 0); \
- }
-
-/* Return number of leaked, dubious, reachable and suppressed bytes found by
- all previous leak checks. They must be lvalues. */
-#define VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed) \
- {unsigned int _qzz_res; \
- VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, \
- VG_USERREQ__COUNT_LEAKS, \
- &leaked, &dubious, &reachable, &suppressed);\
- }
-
-#endif
-
-
/* Get in zzvbits the validity data for the zznbytes starting at
zzsrc. Return values:
0 if not running on valgrind
Conditional jump or move depends on uninitialised value(s)
- at 0x........: memcmp (in /...libc...)
- by 0x........: __libc_start_main (...libc...)
- by 0x........: ...
-
-Conditional jump or move depends on uninitialised value(s)
- at 0x........: memcmp (in /...libc...)
+ at 0x........: memcmp (mac_replace_strmem.c:...)
+ by 0x........: main (memcmptest.c:13)
by 0x........: __libc_start_main (...libc...)
by 0x........: ...
strncat(a+20, a, 21); // run twice to check 2nd error isn't shown
strncat(a, a+20, 21);
+   /* This is ok, but it once gave a warning, back when strncpy()'s
+      overlap check wrongly used 'n' as src's length even when the src
+      was shorter than 'n' */
+ {
+ char dest[64];
+ char src [16];
+ strcpy( src, "short" );
+ strncpy( dest, src, 20 );
+ }
+
return 0;
}