1/* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
6
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
11
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
16
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
21
22/* V2.6.4-pt3 Thu Feb 20 1997
23
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
26
27 ftp://g.oswego.edu/pub/misc/malloc.c
28
29 Most of the original comments are reproduced in the code below.
30
31* Why use this malloc?
32
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
39
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
47
48* Synopsis of public routines
49
50 (Much fuller descriptions are contained in the program documentation below.)
51
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
 55 package. It is not required otherwise. It is called automatically
 56 in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
 89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
99
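 A short usage example (illustrative only; error checking omitted). malloc
 returns at least the requested number of usable bytes, realloc preserves
 the old contents up to the smaller of the two sizes, and memalign returns
 storage on the requested power-of-two boundary:

   char* p = (char*) malloc(100);
   char* q = (char*) memalign(64, 256);
   p = (char*) realloc(p, 200);
   free(q);
   free(p);
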
100* Vital statistics:
101
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
105
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
110
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
113
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
117
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
120
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
126
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
129
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
132
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
140
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
142
 143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
152
153* Limitations
154
155 Here are some features that are NOT currently supported
156
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
160
161* Synopsis of compile-time options:
162
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
168
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
174
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
 182 MALLOC_HOOKS (default: NOT defined)
 183 Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). Otherwise, since malloc returns a unique pointer for
188 malloc(0), so does realloc(p, 0).
189 HAVE_MEMCPY (default: defined)
190 Define if you are not otherwise using ANSI STD C, but still
191 have memcpy and memset in your C library and want to use them.
192 Otherwise, simple internal versions are supplied.
193 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
194 Define as 1 if you want the C library versions of memset and
195 memcpy called in realloc and calloc (otherwise macro versions are used).
196 At least on some platforms, the simple macro versions usually
197 outperform libc versions.
198 HAVE_MMAP (default: defined as 1)
199 Define to non-zero to optionally make malloc() use mmap() to
200 allocate very large blocks.
201 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
202 Define to non-zero to optionally make realloc() use mremap() to
203 reallocate very large blocks.
204 malloc_getpagesize (default: derived from system #includes)
205 Either a constant or routine call returning the system page size.
206 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
207 Optionally define if you are on a system with a /usr/include/malloc.h
208 that declares struct mallinfo. It is not at all necessary to
209 define this even if you do, but will ensure consistency.
210 INTERNAL_SIZE_T (default: size_t)
211 Define to a 32-bit type (probably `unsigned int') if you are on a
212 64-bit machine, yet do not want or need to allow malloc requests of
213 greater than 2^31 to be handled. This saves space, especially for
214 very small chunks.
215 _LIBC (default: NOT defined)
216 Defined only when compiled as part of the Linux libc/glibc.
217 Also note that there is some odd internal name-mangling via defines
218 (for example, internally, `malloc' is named `mALLOc') needed
219 when compiling in this case. These look funny but don't otherwise
220 affect anything.
221 LACKS_UNISTD_H (default: undefined)
222 Define this if your system does not have a <unistd.h>.
223 MORECORE (default: sbrk)
224 The name of the routine to call to obtain more memory from the system.
225 MORECORE_FAILURE (default: -1)
226 The value returned upon failure of MORECORE.
227 MORECORE_CLEARS (default 1)
228 True (1) if the routine mapped to MORECORE zeroes out memory (which
229 holds for sbrk).
230 DEFAULT_TRIM_THRESHOLD
231 DEFAULT_TOP_PAD
232 DEFAULT_MMAP_THRESHOLD
233 DEFAULT_MMAP_MAX
234 Default values of tunable parameters (described in detail below)
235 controlling interaction with host system routines (sbrk, mmap, etc).
236 These values may also be changed dynamically via mallopt(). The
237 preset defaults are those that give best performance for typical
238 programs/systems.
239 DEFAULT_CHECK_ACTION
240 When the standard debugging hooks are in place, and a pointer is
241 detected as corrupt, do nothing (0), print an error message (1),
242 or call abort() (2).
243
244
245*/
246
247/*
248
249* Compile-time options for multiple threads:
250
251 USE_PTHREADS, USE_THR, USE_SPROC
252 Define one of these as 1 to select the thread interface:
253 POSIX threads, Solaris threads or SGI sproc's, respectively.
254 If none of these is defined as non-zero, you get a `normal'
255 malloc implementation which is not thread-safe. Support for
256 multiple threads requires HAVE_MMAP=1. As an exception, when
257 compiling for GNU libc, i.e. when _LIBC is defined, then none of
258 the USE_... symbols have to be defined.
259
260 HEAP_MIN_SIZE
261 HEAP_MAX_SIZE
262 When thread support is enabled, additional `heap's are created
263 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
264 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
265 of two for alignment reasons. HEAP_MAX_SIZE should be at least
266 twice as large as the mmap threshold.
267 THREAD_STATS
268 When this is defined as non-zero, some statistics on mutex locking
269 are computed.
270
271*/
272
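/* For example (an editor's illustration, not from the original sources), a
   stand-alone multi-threaded build might be compiled along the lines of:

     gcc -c -O2 -DUSE_PTHREADS=1 -DTHREAD_STATS=1 -DMALLOC_DEBUG malloc.c

   using the optimizing compilation recommended above; the exact set of -D
   options depends on the target system. */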
273\f
274
275
276/* Preliminaries */
277
278#ifndef __STD_C
279#if defined (__STDC__)
280#define __STD_C 1
281#else
282#if __cplusplus
283#define __STD_C 1
284#else
285#define __STD_C 0
286#endif /*__cplusplus*/
287#endif /*__STDC__*/
288#endif /*__STD_C*/
289
290#ifndef Void_t
291#if __STD_C
292#define Void_t void
293#else
294#define Void_t char
295#endif
296#endif /*Void_t*/
297
298#if __STD_C
299# include <stddef.h> /* for size_t */
300# if defined(_LIBC) || defined(MALLOC_HOOKS)
 301# include <stdlib.h> /* for getenv(), abort() */
 302# endif
 303#else
 304# include <sys/types.h>
305#endif
306
307/* Macros for handling mutexes and thread-specific data. This is
308 included early, because some thread-related header files (such as
309 pthread.h) should be included before any others. */
310#include "thread-m.h"
311
312#ifdef __cplusplus
313extern "C" {
314#endif
315
316#include <stdio.h> /* needed for malloc_stats */
317
318
319/*
320 Compile-time options
321*/
322
323
324/*
325 Debugging:
326
327 Because freed chunks may be overwritten with link fields, this
328 malloc will often die when freed memory is overwritten by user
329 programs. This can be very effective (albeit in an annoying way)
330 in helping track down dangling pointers.
331
332 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
333 enabled that will catch more memory errors. You probably won't be
334 able to make much sense of the actual assertion errors, but they
335 should help you locate incorrectly overwritten memory. The
336 checking is fairly extensive, and will slow down execution
337 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
338 attempt to check every non-mmapped allocated and free chunk in the
 339 course of computing the summaries. (By nature, mmapped regions
340 cannot be checked very much automatically.)
341
342 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
343 this code. The assertions in the check routines spell out in more
344 detail the assumptions and invariants underlying the algorithms.
345
346*/
347
348#if MALLOC_DEBUG
349#include <assert.h>
350#else
351#define assert(x) ((void)0)
352#endif
353
354
355/*
356 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
357 of chunk sizes. On a 64-bit machine, you can reduce malloc
358 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
359 at the expense of not being able to handle requests greater than
360 2^31. This limitation is hardly ever a concern; you are encouraged
361 to set this. However, the default version is the same as size_t.
362*/
363
364#ifndef INTERNAL_SIZE_T
365#define INTERNAL_SIZE_T size_t
366#endif
367
368/*
369 REALLOC_ZERO_BYTES_FREES should be set if a call to
370 realloc with zero bytes should be the same as a call to free.
371 Some people think it should. Otherwise, since this malloc
372 returns a unique pointer for malloc(0), so does realloc(p, 0).
373*/
374
375
376/* #define REALLOC_ZERO_BYTES_FREES */
377
378
379/*
380 HAVE_MEMCPY should be defined if you are not otherwise using
381 ANSI STD C, but still have memcpy and memset in your C library
382 and want to use them in calloc and realloc. Otherwise simple
383 macro versions are defined here.
384
385 USE_MEMCPY should be defined as 1 if you actually want to
386 have memset and memcpy called. People report that the macro
387 versions are often enough faster than libc versions on many
388 systems that it is better to use them.
389
390*/
391
 392#define HAVE_MEMCPY 1
393
394#ifndef USE_MEMCPY
395#ifdef HAVE_MEMCPY
396#define USE_MEMCPY 1
397#else
398#define USE_MEMCPY 0
399#endif
400#endif
401
402#if (__STD_C || defined(HAVE_MEMCPY))
403
404#if __STD_C
405void* memset(void*, int, size_t);
406void* memcpy(void*, const void*, size_t);
407#else
408Void_t* memset();
409Void_t* memcpy();
410#endif
411#endif
412
413#if USE_MEMCPY
414
415/* The following macros are only invoked with (2n+1)-multiples of
416 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
417 for fast inline execution when n is small. */
418
419#define MALLOC_ZERO(charp, nbytes) \
420do { \
421 INTERNAL_SIZE_T mzsz = (nbytes); \
422 if(mzsz <= 9*sizeof(mzsz)) { \
423 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
424 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
425 *mz++ = 0; \
426 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
427 *mz++ = 0; \
428 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
429 *mz++ = 0; }}} \
430 *mz++ = 0; \
431 *mz++ = 0; \
432 *mz = 0; \
433 } else memset((charp), 0, mzsz); \
434} while(0)
435
436#define MALLOC_COPY(dest,src,nbytes) \
437do { \
438 INTERNAL_SIZE_T mcsz = (nbytes); \
439 if(mcsz <= 9*sizeof(mcsz)) { \
440 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
441 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
442 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
443 *mcdst++ = *mcsrc++; \
444 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
445 *mcdst++ = *mcsrc++; \
446 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
447 *mcdst++ = *mcsrc++; }}} \
448 *mcdst++ = *mcsrc++; \
449 *mcdst++ = *mcsrc++; \
450 *mcdst = *mcsrc ; \
451 } else memcpy(dest, src, mcsz); \
452} while(0)
453
454#else /* !USE_MEMCPY */
455
456/* Use Duff's device for good zeroing/copying performance. */
457
458#define MALLOC_ZERO(charp, nbytes) \
459do { \
460 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
461 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
462 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
463 switch (mctmp) { \
464 case 0: for(;;) { *mzp++ = 0; \
465 case 7: *mzp++ = 0; \
466 case 6: *mzp++ = 0; \
467 case 5: *mzp++ = 0; \
468 case 4: *mzp++ = 0; \
469 case 3: *mzp++ = 0; \
470 case 2: *mzp++ = 0; \
471 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
472 } \
473} while(0)
474
475#define MALLOC_COPY(dest,src,nbytes) \
476do { \
477 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
478 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
479 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
480 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
481 switch (mctmp) { \
482 case 0: for(;;) { *mcdst++ = *mcsrc++; \
483 case 7: *mcdst++ = *mcsrc++; \
484 case 6: *mcdst++ = *mcsrc++; \
485 case 5: *mcdst++ = *mcsrc++; \
486 case 4: *mcdst++ = *mcsrc++; \
487 case 3: *mcdst++ = *mcsrc++; \
488 case 2: *mcdst++ = *mcsrc++; \
489 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
490 } \
491} while(0)
492
493#endif
494
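/* Usage sketch (editor's note): these macros are internal helpers. For
   instance, realloc-style code copies the old contents into a new chunk and
   calloc-style code clears a fresh chunk, in both cases passing a byte
   count that is an odd multiple of sizeof(INTERNAL_SIZE_T), along the
   lines of:

     MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
     MALLOC_ZERO(mem, csz - SIZE_SZ);
*/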
495
496/*
497 Define HAVE_MMAP to optionally make malloc() use mmap() to
498 allocate very large blocks. These will be returned to the
499 operating system immediately after a free().
500*/
501
502#ifndef HAVE_MMAP
503#define HAVE_MMAP 1
504#endif
505
506/*
507 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
508 large blocks. This is currently only possible on Linux with
509 kernel versions newer than 1.3.77.
510*/
511
512#ifndef HAVE_MREMAP
513#define HAVE_MREMAP defined(__linux__)
514#endif
515
516#if HAVE_MMAP
517
518#include <unistd.h>
519#include <fcntl.h>
520#include <sys/mman.h>
521
522#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
523#define MAP_ANONYMOUS MAP_ANON
524#endif
525
526#endif /* HAVE_MMAP */
527
528/*
529 Access to system page size. To the extent possible, this malloc
530 manages memory from the system in page-size units.
531
532 The following mechanics for getpagesize were adapted from
533 bsd/gnu getpagesize.h
534*/
535
536#ifndef LACKS_UNISTD_H
537# include <unistd.h>
538#endif
539
540#ifndef malloc_getpagesize
541# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
542# ifndef _SC_PAGE_SIZE
543# define _SC_PAGE_SIZE _SC_PAGESIZE
544# endif
545# endif
546# ifdef _SC_PAGE_SIZE
547# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
548# else
549# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
550 extern size_t getpagesize();
551# define malloc_getpagesize getpagesize()
552# else
553# include <sys/param.h>
554# ifdef EXEC_PAGESIZE
555# define malloc_getpagesize EXEC_PAGESIZE
556# else
557# ifdef NBPG
558# ifndef CLSIZE
559# define malloc_getpagesize NBPG
560# else
561# define malloc_getpagesize (NBPG * CLSIZE)
562# endif
563# else
564# ifdef NBPC
565# define malloc_getpagesize NBPC
566# else
567# ifdef PAGESIZE
568# define malloc_getpagesize PAGESIZE
569# else
570# define malloc_getpagesize (4096) /* just guess */
571# endif
572# endif
573# endif
574# endif
575# endif
576# endif
577#endif
578
579
580
581/*
582
583 This version of malloc supports the standard SVID/XPG mallinfo
584 routine that returns a struct containing the same kind of
585 information you can get from malloc_stats. It should work on
586 any SVID/XPG compliant system that has a /usr/include/malloc.h
587 defining struct mallinfo. (If you'd like to install such a thing
588 yourself, cut out the preliminary declarations as described above
589 and below and save them in a malloc.h file. But there's no
590 compelling reason to bother to do this.)
591
592 The main declaration needed is the mallinfo struct that is returned
 593 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
 594 bunch of fields, most of which are not even meaningful in this
 595 version of malloc. Some of these fields are instead filled by
596 mallinfo() with other numbers that might possibly be of interest.
597
598 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
599 /usr/include/malloc.h file that includes a declaration of struct
600 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
601 version is declared below. These must be precisely the same for
602 mallinfo() to work.
603
604*/
605
606/* #define HAVE_USR_INCLUDE_MALLOC_H */
607
608#if HAVE_USR_INCLUDE_MALLOC_H
 609# include "/usr/include/malloc.h"
 610#else
611# ifdef _LIBC
612# include "malloc.h"
613# else
614# include "ptmalloc.h"
615# endif
616#endif
617
618
619
620#ifndef DEFAULT_TRIM_THRESHOLD
621#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
622#endif
623
624/*
625 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
626 to keep before releasing via malloc_trim in free().
627
628 Automatic trimming is mainly useful in long-lived programs.
629 Because trimming via sbrk can be slow on some systems, and can
630 sometimes be wasteful (in cases where programs immediately
631 afterward allocate more large chunks) the value should be high
632 enough so that your overall system performance would improve by
633 releasing.
634
635 The trim threshold and the mmap control parameters (see below)
636 can be traded off with one another. Trimming and mmapping are
637 two different ways of releasing unused memory back to the
638 system. Between these two, it is often possible to keep
639 system-level demands of a long-lived program down to a bare
640 minimum. For example, in one test suite of sessions measuring
641 the XF86 X server on Linux, using a trim threshold of 128K and a
642 mmap threshold of 192K led to near-minimal long term resource
643 consumption.
644
645 If you are using this malloc in a long-lived program, it should
646 pay to experiment with these values. As a rough guide, you
647 might set to a value close to the average size of a process
648 (program) running on your system. Releasing this much memory
649 would allow such a process to run in memory. Generally, it's
 650 worth it to tune for trimming rather than memory mapping when a
651 program undergoes phases where several large chunks are
652 allocated and released in ways that can reuse each other's
653 storage, perhaps mixed with phases where there are no such
654 chunks at all. And in well-behaved long-lived programs,
655 controlling release of large blocks via trimming versus mapping
656 is usually faster.
657
658 However, in most programs, these parameters serve mainly as
659 protection against the system-level effects of carrying around
660 massive amounts of unneeded memory. Since frequent calls to
661 sbrk, mmap, and munmap otherwise degrade performance, the default
662 parameters are set to relatively high values that serve only as
663 safeguards.
664
665 The default trim value is high enough to cause trimming only in
666 fairly extreme (by current memory consumption standards) cases.
667 It must be greater than page size to have any useful effect. To
 668 disable trimming completely, you can set it to (unsigned long)(-1).
669
670
671*/
672
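/* For example (editor's illustration), a long-lived program that wants
   unused memory returned to the system more eagerly could lower the
   threshold at startup:

     mallopt(M_TRIM_THRESHOLD, 64*1024);

   mallopt returns 1 if the parameter was changed successfully, 0 otherwise. */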
673
674#ifndef DEFAULT_TOP_PAD
675#define DEFAULT_TOP_PAD (0)
676#endif
677
678/*
679 M_TOP_PAD is the amount of extra `padding' space to allocate or
680 retain whenever sbrk is called. It is used in two ways internally:
681
682 * When sbrk is called to extend the top of the arena to satisfy
683 a new malloc request, this much padding is added to the sbrk
684 request.
685
686 * When malloc_trim is called automatically from free(),
687 it is used as the `pad' argument.
688
689 In both cases, the actual amount of padding is rounded
690 so that the end of the arena is always a system page boundary.
691
692 The main reason for using padding is to avoid calling sbrk so
693 often. Having even a small pad greatly reduces the likelihood
694 that nearly every malloc request during program start-up (or
695 after trimming) will invoke sbrk, which needlessly wastes
696 time.
697
698 Automatic rounding-up to page-size units is normally sufficient
699 to avoid measurable overhead, so the default is 0. However, in
700 systems where sbrk is relatively slow, it can pay to increase
701 this value, at the expense of carrying around more memory than
702 the program needs.
703
704*/
705
706
707#ifndef DEFAULT_MMAP_THRESHOLD
708#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
709#endif
710
711/*
712
713 M_MMAP_THRESHOLD is the request size threshold for using mmap()
714 to service a request. Requests of at least this size that cannot
715 be allocated using already-existing space will be serviced via mmap.
716 (If enough normal freed space already exists it is used instead.)
717
718 Using mmap segregates relatively large chunks of memory so that
719 they can be individually obtained and released from the host
720 system. A request serviced through mmap is never reused by any
721 other request (at least not directly; the system may just so
722 happen to remap successive requests to the same locations).
723
724 Segregating space in this way has the benefit that mmapped space
725 can ALWAYS be individually released back to the system, which
726 helps keep the system level memory demands of a long-lived
727 program low. Mapped memory can never become `locked' between
728 other chunks, as can happen with normally allocated chunks, which
 729 means that even trimming via malloc_trim would not release them.
730
731 However, it has the disadvantages that:
732
733 1. The space cannot be reclaimed, consolidated, and then
734 used to service later requests, as happens with normal chunks.
735 2. It can lead to more wastage because of mmap page alignment
736 requirements
737 3. It causes malloc performance to be more dependent on host
738 system memory management support routines which may vary in
739 implementation quality and may impose arbitrary
740 limitations. Generally, servicing a request via normal
741 malloc steps is faster than going through a system's mmap.
742
743 All together, these considerations should lead you to use mmap
744 only for relatively large requests.
745
746
747*/
748
749
750
751#ifndef DEFAULT_MMAP_MAX
752#if HAVE_MMAP
753#define DEFAULT_MMAP_MAX (1024)
754#else
755#define DEFAULT_MMAP_MAX (0)
756#endif
757#endif
758
759/*
760 M_MMAP_MAX is the maximum number of requests to simultaneously
761 service using mmap. This parameter exists because:
762
763 1. Some systems have a limited number of internal tables for
764 use by mmap.
765 2. In most systems, overreliance on mmap can degrade overall
766 performance.
767 3. If a program allocates many large regions, it is probably
768 better off using normal sbrk-based allocation routines that
769 can reclaim and reallocate normal heap memory. Using a
770 small value allows transition into this mode after the
771 first few allocations.
772
773 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
774 the default value is 0, and attempts to set it to non-zero values
775 in mallopt will fail.
776*/
777
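/* For example (editor's illustration), to steer more large requests to mmap
   while capping the number of simultaneous mappings:

     mallopt(M_MMAP_THRESHOLD, 64*1024);
     mallopt(M_MMAP_MAX, 256);

   Setting M_MMAP_MAX to 0 disables the use of mmap entirely, as noted above. */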
778
779
780#ifndef DEFAULT_CHECK_ACTION
781#define DEFAULT_CHECK_ACTION 1
782#endif
783
784/* What to do if the standard debugging hooks are in place and a
785 corrupt pointer is detected: do nothing (0), print an error message
786 (1), or call abort() (2). */
787
788
789
790#define HEAP_MIN_SIZE (32*1024)
791#define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
792
793/* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
794 that are dynamically created for multi-threaded programs. The
795 maximum size must be a power of two, for fast determination of
796 which heap belongs to a chunk. It should be much larger than
797 the mmap threshold, so that requests with a size just below that
798 threshold can be fulfilled without creating too many heaps.
799*/
800
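/* Because every such heap starts at an address aligned to HEAP_MAX_SIZE (a
   power of two; see the heap_info comments further below), the heap owning
   a given chunk can be found by simple masking, along the lines of this
   editor's sketch:

     heap_info *heap = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE - 1));
*/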
801
802
803#ifndef THREAD_STATS
804#define THREAD_STATS 0
805#endif
806
807/* If THREAD_STATS is non-zero, some statistics on mutex locking are
808 computed. */
809
810
811/*
812
813 Special defines for the Linux/GNU C library.
814
815*/
816
817
818#ifdef _LIBC
819
820#if __STD_C
821
822Void_t * __default_morecore (ptrdiff_t);
 823Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
824
825#else
826
827Void_t * __default_morecore ();
 828Void_t *(*__morecore)() = __default_morecore;
829
830#endif
831
832#define MORECORE (*__morecore)
833#define MORECORE_FAILURE 0
834#define MORECORE_CLEARS 1
835#define mmap __mmap
836#define munmap __munmap
837#define mremap __mremap
 838#define mprotect __mprotect
839#undef malloc_getpagesize
840#define malloc_getpagesize __getpagesize()
841
842#else /* _LIBC */
843
844#if __STD_C
845extern Void_t* sbrk(ptrdiff_t);
846#else
847extern Void_t* sbrk();
848#endif
849
850#ifndef MORECORE
851#define MORECORE sbrk
852#endif
853
854#ifndef MORECORE_FAILURE
855#define MORECORE_FAILURE -1
856#endif
857
858#ifndef MORECORE_CLEARS
859#define MORECORE_CLEARS 1
860#endif
861
862#endif /* _LIBC */
863
 864#ifdef _LIBC
865
866#define cALLOc __libc_calloc
867#define fREe __libc_free
868#define mALLOc __libc_malloc
869#define mEMALIGn __libc_memalign
870#define rEALLOc __libc_realloc
871#define vALLOc __libc_valloc
872#define pvALLOc __libc_pvalloc
873#define mALLINFo __libc_mallinfo
874#define mALLOPt __libc_mallopt
875#define mALLOC_STATs __malloc_stats
876#define mALLOC_USABLE_SIZe __malloc_usable_size
877#define mALLOC_TRIm __malloc_trim
878#define mALLOC_GET_STATe __malloc_get_state
879#define mALLOC_SET_STATe __malloc_set_state
 880
881#else
882
883#define cALLOc calloc
884#define fREe free
885#define mALLOc malloc
886#define mEMALIGn memalign
887#define rEALLOc realloc
888#define vALLOc valloc
889#define pvALLOc pvalloc
890#define mALLINFo mallinfo
891#define mALLOPt mallopt
892#define mALLOC_STATs malloc_stats
893#define mALLOC_USABLE_SIZe malloc_usable_size
894#define mALLOC_TRIm malloc_trim
895#define mALLOC_GET_STATe malloc_get_state
896#define mALLOC_SET_STATe malloc_set_state
897
898#endif
899
900/* Public routines */
901
902#if __STD_C
903
904#ifndef _LIBC
905void ptmalloc_init(void);
906#endif
907Void_t* mALLOc(size_t);
908void fREe(Void_t*);
909Void_t* rEALLOc(Void_t*, size_t);
910Void_t* mEMALIGn(size_t, size_t);
911Void_t* vALLOc(size_t);
912Void_t* pvALLOc(size_t);
913Void_t* cALLOc(size_t, size_t);
914void cfree(Void_t*);
915int mALLOC_TRIm(size_t);
916size_t mALLOC_USABLE_SIZe(Void_t*);
917void mALLOC_STATs(void);
918int mALLOPt(int, int);
919struct mallinfo mALLINFo(void);
920Void_t* mALLOC_GET_STATe(void);
921int mALLOC_SET_STATe(Void_t*);
922
923#else /* !__STD_C */
924
925#ifndef _LIBC
926void ptmalloc_init();
927#endif
928Void_t* mALLOc();
929void fREe();
930Void_t* rEALLOc();
931Void_t* mEMALIGn();
932Void_t* vALLOc();
933Void_t* pvALLOc();
934Void_t* cALLOc();
935void cfree();
936int mALLOC_TRIm();
937size_t mALLOC_USABLE_SIZe();
938void mALLOC_STATs();
939int mALLOPt();
940struct mallinfo mALLINFo();
941Void_t* mALLOC_GET_STATe();
942int mALLOC_SET_STATe();
943
944#endif /* __STD_C */
945
946
947#ifdef __cplusplus
948}; /* end of extern "C" */
949#endif
950
951#if !defined(NO_THREADS) && !HAVE_MMAP
952"Can't have threads support without mmap"
953#endif
954
955
956/*
957 Type declarations
958*/
959
960
961struct malloc_chunk
962{
963 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
964 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
965 struct malloc_chunk* fd; /* double links -- used only if free. */
966 struct malloc_chunk* bk;
967};
968
969typedef struct malloc_chunk* mchunkptr;
970
971/*
972
973 malloc_chunk details:
974
975 (The following includes lightly edited explanations by Colin Plumb.)
976
977 Chunks of memory are maintained using a `boundary tag' method as
978 described in e.g., Knuth or Standish. (See the paper by Paul
979 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
980 survey of such techniques.) Sizes of free chunks are stored both
981 in the front of each chunk and at the end. This makes
982 consolidating fragmented chunks into bigger chunks very fast. The
983 size fields also hold bits representing whether chunks are free or
984 in use.
985
986 An allocated chunk looks like this:
987
988
989 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
990 | Size of previous chunk, if allocated | |
991 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
992 | Size of chunk, in bytes |P|
993 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
994 | User data starts here... .
995 . .
996 . (malloc_usable_space() bytes) .
997 . |
998nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
999 | Size of chunk |
1000 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1001
1002
1003 Where "chunk" is the front of the chunk for the purpose of most of
1004 the malloc code, but "mem" is the pointer that is returned to the
1005 user. "Nextchunk" is the beginning of the next contiguous chunk.
1006
 1007 Chunks always begin on even word boundaries, so the mem portion
1008 (which is returned to the user) is also on an even word boundary, and
1009 thus double-word aligned.
1010
1011 Free chunks are stored in circular doubly-linked lists, and look like this:
1012
1013 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1014 | Size of previous chunk |
1015 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1016 `head:' | Size of chunk, in bytes |P|
1017 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1018 | Forward pointer to next chunk in list |
1019 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1020 | Back pointer to previous chunk in list |
1021 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1022 | Unused space (may be 0 bytes long) .
1023 . .
1024 . |
1025nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1026 `foot:' | Size of chunk, in bytes |
1027 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1028
1029 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1030 chunk size (which is always a multiple of two words), is an in-use
1031 bit for the *previous* chunk. If that bit is *clear*, then the
1032 word before the current chunk size contains the previous chunk
1033 size, and can be used to find the front of the previous chunk.
1034 (The very first chunk allocated always has this bit set,
1035 preventing access to non-existent (or non-owned) memory.)
1036
1037 Note that the `foot' of the current chunk is actually represented
1038 as the prev_size of the NEXT chunk. (This makes it easier to
1039 deal with alignments etc).
1040
1041 The two exceptions to all this are
1042
1043 1. The special chunk `top', which doesn't bother using the
1044 trailing size field since there is no
1045 next contiguous chunk that would have to index off it. (After
1046 initialization, `top' is forced to always exist. If it would
1047 become less than MINSIZE bytes long, it is replenished via
1048 malloc_extend_top.)
1049
1050 2. Chunks allocated via mmap, which have the second-lowest-order
1051 bit (IS_MMAPPED) set in their size fields. Because they are
1052 never merged or traversed from any other chunk, they have no
1053 foot size or inuse information.
1054
1055 Available chunks are kept in any of several places (all declared below):
1056
1057 * `av': An array of chunks serving as bin headers for consolidated
1058 chunks. Each bin is doubly linked. The bins are approximately
1059 proportionally (log) spaced. There are a lot of these bins
1060 (128). This may look excessive, but works very well in
1061 practice. All procedures maintain the invariant that no
1062 consolidated chunk physically borders another one. Chunks in
1063 bins are kept in size order, with ties going to the
1064 approximately least recently used chunk.
1065
1066 The chunks in each bin are maintained in decreasing sorted order by
1067 size. This is irrelevant for the small bins, which all contain
1068 the same-sized chunks, but facilitates best-fit allocation for
1069 larger chunks. (These lists are just sequential. Keeping them in
1070 order almost never requires enough traversal to warrant using
1071 fancier ordered data structures.) Chunks of the same size are
1072 linked with the most recently freed at the front, and allocations
1073 are taken from the back. This results in LRU or FIFO allocation
1074 order, which tends to give each chunk an equal opportunity to be
1075 consolidated with adjacent freed chunks, resulting in larger free
1076 chunks and less fragmentation.
1077
1078 * `top': The top-most available chunk (i.e., the one bordering the
1079 end of available memory) is treated specially. It is never
1080 included in any bin, is used only if no other chunk is
1081 available, and is released back to the system if it is very
1082 large (see M_TRIM_THRESHOLD).
1083
1084 * `last_remainder': A bin holding only the remainder of the
1085 most recently split (non-top) chunk. This bin is checked
1086 before other non-fitting chunks, so as to provide better
1087 locality for runs of sequentially allocated chunks.
1088
1089 * Implicitly, through the host system's memory mapping tables.
1090 If supported, requests greater than a threshold are usually
1091 serviced via calls to mmap, and then later released via munmap.
1092
1093*/
1094
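/* To make the layout above concrete (an editor's illustration, assuming
   4-byte INTERNAL_SIZE_T, 4-byte pointers, and an allocated 24-byte chunk p
   whose physical predecessor is itself in use):

     p->size                   holds 24 | PREV_INUSE, i.e. 25 (size includes overhead)
     chunk2mem(p)              == (char*)p + 2*SIZE_SZ, the pointer given to the user
     next_chunk(p)             == (char*)p + 24, located from the size field alone
     next_chunk(p)->prev_size  is meaningful only while p itself is free

   (chunk2mem, next_chunk and the flag bits are defined further below.) */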
1095/*
1096 Bins
1097
1098 The bins are an array of pairs of pointers serving as the
1099 heads of (initially empty) doubly-linked lists of chunks, laid out
1100 in a way so that each pair can be treated as if it were in a
1101 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1102 and chunks are the same).
1103
1104 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1105 8 bytes apart. Larger bins are approximately logarithmically
1106 spaced. (See the table below.)
1107
1108 Bin layout:
1109
1110 64 bins of size 8
1111 32 bins of size 64
1112 16 bins of size 512
1113 8 bins of size 4096
1114 4 bins of size 32768
1115 2 bins of size 262144
1116 1 bin of size what's left
1117
1118 There is actually a little bit of slop in the numbers in bin_index
1119 for the sake of speed. This makes no difference elsewhere.
1120
1121 The special chunks `top' and `last_remainder' get their own bins,
1122 (this is implemented via yet more trickery with the av array),
1123 although `top' is never properly linked to its bin since it is
1124 always handled specially.
1125
1126*/
1127
1128#define NAV 128 /* number of bins */
1129
1130typedef struct malloc_chunk* mbinptr;
1131
1132/* An arena is a configuration of malloc_chunks together with an array
1133 of bins. With multiple threads, it must be locked via a mutex
1134 before changing its data structures. One or more `heaps' are
1135 associated with each arena, except for the main_arena, which is
1136 associated only with the `main heap', i.e. the conventional free
1137 store obtained with calls to MORECORE() (usually sbrk). The `av'
1138 array is never mentioned directly in the code, but instead used via
1139 bin access macros. */
1140
1141typedef struct _arena {
1142 mbinptr av[2*NAV + 2];
1143 struct _arena *next;
1144 size_t size;
1145#if THREAD_STATS
1146 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1147#endif
1148 mutex_t mutex;
1149} arena;
1150
1151
1152/* A heap is a single contiguous memory region holding (coalesceable)
1153 malloc_chunks. It is allocated with mmap() and always starts at an
1154 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1155 multiple threads. */
1156
1157typedef struct _heap_info {
1158 arena *ar_ptr; /* Arena for this heap. */
1159 struct _heap_info *prev; /* Previous heap. */
1160 size_t size; /* Current size in bytes. */
1161 size_t pad; /* Make sure the following data is properly aligned. */
1162} heap_info;
1163
1164
1165/*
1166 Static functions (forward declarations)
1167*/
1168
1169#if __STD_C
 1170
1171static void chunk_free(arena *ar_ptr, mchunkptr p);
1172static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size);
 1173static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
 1174 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb);
 1175static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
 1176 size_t alignment);
1177static int main_trim(size_t pad);
1178#ifndef NO_THREADS
1179static int heap_trim(heap_info *heap, size_t pad);
1180#endif
1181#if defined(_LIBC) || defined(MALLOC_HOOKS)
1182static Void_t* malloc_check(size_t sz);
1183static void free_check(Void_t* mem);
1184static Void_t* realloc_check(Void_t* oldmem, size_t bytes);
1185static Void_t* memalign_check(size_t alignment, size_t bytes);
1186static Void_t* malloc_starter(size_t sz);
1187static void free_starter(Void_t* mem);
1188#endif
1189
 1190#else
 1191
1192static void chunk_free();
1193static mchunkptr chunk_alloc();
1194static mchunkptr chunk_realloc();
1195static mchunkptr chunk_align();
1196static int main_trim();
1197#ifndef NO_THREADS
1198static int heap_trim();
1199#endif
1200#if defined(_LIBC) || defined(MALLOC_HOOKS)
1201static Void_t* malloc_check();
1202static void free_check();
1203static Void_t* realloc_check();
1204static Void_t* memalign_check();
1205static Void_t* malloc_starter();
1206static void free_starter();
1207#endif
1208
1209#endif
1210
1211\f
1212
1213/* sizes, alignments */
1214
1215#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1216#define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1217#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1218#define MINSIZE (sizeof(struct malloc_chunk))
1219
1220/* conversion from malloc headers to user pointers, and back */
1221
1222#define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1223#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1224
1225/* pad request bytes into a usable size */
1226
1227#define request2size(req) \
1228 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1229 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1230 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1231
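/* Worked examples (editor's note, assuming 4-byte INTERNAL_SIZE_T and
   4-byte pointers, so SIZE_SZ == 4, MALLOC_ALIGN_MASK == 7, MINSIZE == 16):

     request2size(20) == (20 + 4 + 7) & ~7 == 24
     request2size(100) == (100 + 4 + 7) & ~7 == 104
     request2size(0) == MINSIZE == 16 (the padded request falls below MINSIZE)
*/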
1232/* Check if m has acceptable alignment */
1233
1234#define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1235
1236
1237\f
1238
1239/*
1240 Physical chunk operations
1241*/
1242
1243
1244/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1245
1246#define PREV_INUSE 0x1
1247
1248/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1249
1250#define IS_MMAPPED 0x2
1251
1252/* Bits to mask off when extracting size */
1253
1254#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1255
1256
1257/* Ptr to next physical malloc_chunk. */
1258
1259#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1260
1261/* Ptr to previous physical malloc_chunk */
1262
1263#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1264
1265
1266/* Treat space at ptr + offset as a chunk */
1267
1268#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1269
1270
1271\f
1272
1273/*
1274 Dealing with use bits
1275*/
1276
1277/* extract p's inuse bit */
1278
1279#define inuse(p) \
1280 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1281
1282/* extract inuse bit of previous chunk */
1283
1284#define prev_inuse(p) ((p)->size & PREV_INUSE)
1285
1286/* check for mmap()'ed chunk */
1287
1288#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1289
1290/* set/clear chunk as in use without otherwise disturbing */
1291
1292#define set_inuse(p) \
1293 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1294
1295#define clear_inuse(p) \
1296 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1297
1298/* check/set/clear inuse bits in known places */
1299
1300#define inuse_bit_at_offset(p, s)\
1301 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1302
1303#define set_inuse_bit_at_offset(p, s)\
1304 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1305
1306#define clear_inuse_bit_at_offset(p, s)\
1307 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1308
1309
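/* Note (editor's illustration): a chunk's own in-use status is not kept in
   its own header; inuse(p) reads the PREV_INUSE bit of the chunk that
   physically follows p. Thus

     set_inuse(p);
     assert(inuse(p));

   sets and then re-reads the same bit in next_chunk(p)->size. */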
1310\f
1311
1312/*
1313 Dealing with size fields
1314*/
1315
1316/* Get size, ignoring use bits */
1317
1318#define chunksize(p) ((p)->size & ~(SIZE_BITS))
1319
1320/* Set size at head, without disturbing its use bit */
1321
1322#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1323
1324/* Set size/use ignoring previous bits in header */
1325
1326#define set_head(p, s) ((p)->size = (s))
1327
1328/* Set size at footer (only when chunk is not in use) */
1329
1330#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1331
1332
1333\f
1334
1335
1336/* access macros */
1337
1338#define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1339#define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1340#define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1341#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1342
1343/*
1344 The first 2 bins are never indexed. The corresponding av cells are instead
1345 used for bookkeeping. This is not to save space, but to simplify
1346 indexing, maintain locality, and avoid some initialization tests.
1347*/
1348
1349#define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1350#define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1351#define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1352
1353/*
1354 Because top initially points to its own bin with initial
1355 zero size, thus forcing extension on the first malloc request,
1356 we avoid having any special code in malloc to check whether
1357 it even exists yet. But we still need to in malloc_extend_top.
1358*/
1359
1360#define initial_top(a) ((mchunkptr)bin_at(a, 0))
1361
1362\f
1363
1364/* field-extraction macros */
1365
1366#define first(b) ((b)->fd)
1367#define last(b) ((b)->bk)
1368
1369/*
1370 Indexing into bins
1371*/
1372
1373#define bin_index(sz) \
1374(((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1375 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1376 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1377 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1378 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1379 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1380 126)
1381/*
1382 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1383 identically sized chunks. This is exploited in malloc.
1384*/
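/* Worked examples for bin_index (editor's note):

     bin_index(40) == 40 >> 3 == 5 (a small bin; first branch, 40 >> 9 == 0)
     bin_index(600) == 56 + (600 >> 6) == 65 (second branch, 600 >> 9 == 1)
     bin_index(5000) == 91 + (5000 >> 9) == 100 (third branch, 5000 >> 9 == 9)
*/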
1385
1386#define MAX_SMALLBIN 63
1387#define MAX_SMALLBIN_SIZE 512
1388#define SMALLBIN_WIDTH 8
1389
1390#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1391
1392/*
1393 Requests are `small' if both the corresponding and the next bin are small
1394*/
1395
1396#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1397
1398\f
1399
1400/*
1401 To help compensate for the large number of bins, a one-level index
1402 structure is used for bin-by-bin searching. `binblocks' is a
1403 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1404 have any (possibly) non-empty bins, so they can be skipped over
 1405 all at once during traversals. The bits are NOT always
1406 cleared as soon as all bins in a block are empty, but instead only
1407 when all are noticed to be empty during traversal in malloc.
1408*/
1409
1410#define BINBLOCKWIDTH 4 /* bins per block */
1411
1412/* bin<->block macros */
1413
1414#define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1415#define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1416#define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
1417
1418
1419\f
1420
1421/* Static bookkeeping data */
1422
1423/* Helper macro to initialize bins */
1424#define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1425
1426static arena main_arena = {
1427 {
1428 0, 0,
1429 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1430 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1431 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1432 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1433 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1434 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1435 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1436 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1437 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1438 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1439 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1440 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1441 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1442 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1443 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1444 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1445 },
 1446 &main_arena, /* next */
1447 0, /* size */
1448#if THREAD_STATS
1449 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1450#endif
1451 MUTEX_INITIALIZER /* mutex */
1452};
1453
1454#undef IAV
1455
1456/* Thread specific data */
1457
 1458#ifndef NO_THREADS
1459static tsd_key_t arena_key;
1460static mutex_t list_lock = MUTEX_INITIALIZER;
 1461#endif
1462
1463#if THREAD_STATS
 1464static int stat_n_heaps = 0;
1465#define THREAD_STAT(x) x
1466#else
1467#define THREAD_STAT(x) do ; while(0)
1468#endif
1469
1470/* variables holding tunable values */
1471
1472static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1473static unsigned long top_pad = DEFAULT_TOP_PAD;
1474static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1475static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
 1476static int check_action = DEFAULT_CHECK_ACTION;
1477
1478/* The first value returned from sbrk */
1479static char* sbrk_base = (char*)(-1);
1480
1481/* The maximum memory obtained from system via sbrk */
1482static unsigned long max_sbrked_mem = 0;
1483
1484/* The maximum via either sbrk or mmap (too difficult to track with threads) */
1485#ifdef NO_THREADS
 1486static unsigned long max_total_mem = 0;
 1487#endif
1488
1489/* The total memory obtained from system via sbrk */
 1490#define sbrked_mem (main_arena.size)
1491
1492/* Tracking mmaps */
1493
1494static unsigned int n_mmaps = 0;
1495static unsigned int max_n_mmaps = 0;
1496static unsigned long mmapped_mem = 0;
1497static unsigned long max_mmapped_mem = 0;
1498
1499
1500\f
1501#ifndef _LIBC
1502#define weak_variable
1503#else
1504/* In GNU libc we want the hook variables to be weak definitions to
1505 avoid a problem with Emacs. */
1506#define weak_variable weak_function
1507#endif
1508
1509/* Already initialized? */
1510int __malloc_initialized = 0;
1511
1512
1513/* Initialization routine. */
1514#if defined(_LIBC)
 1515#if 0
 1516static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
 1517#endif
1518
1519static void
1520ptmalloc_init __MALLOC_P((void))
1521#else
1522void
1523ptmalloc_init __MALLOC_P((void))
1524#endif
1525{
 1526#if defined(_LIBC) || defined(MALLOC_HOOKS)
1527 __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size));
1528 void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr));
1529 const char* s;
1530#endif
 1531
1532 if(__malloc_initialized) return;
1533 __malloc_initialized = 1;
1534#if defined(_LIBC) || defined(MALLOC_HOOKS)
1535 /* With some threads implementations, creating thread-specific data
1536 or initializing a mutex may call malloc() itself. Provide a
1537 simple starter version (realloc() won't work). */
1538 save_malloc_hook = __malloc_hook;
1539 save_free_hook = __free_hook;
1540 __malloc_hook = malloc_starter;
1541 __free_hook = free_starter;
1542#endif
 1543#if defined(_LIBC) && !defined (NO_THREADS)
 1544 /* Initialize the pthreads interface. */
 1545 if (__pthread_initialize != NULL)
 1546 __pthread_initialize();
 1547#endif
 1548#ifndef NO_THREADS
1549 mutex_init(&main_arena.mutex);
1550 mutex_init(&list_lock);
1551 tsd_key_create(&arena_key, NULL);
1552 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1553#endif
1554#if defined(_LIBC) || defined(MALLOC_HOOKS)
1555 if((s = getenv("MALLOC_TRIM_THRESHOLD_")))
1556 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1557 if((s = getenv("MALLOC_TOP_PAD_")))
1558 mALLOPt(M_TOP_PAD, atoi(s));
1559 if((s = getenv("MALLOC_MMAP_THRESHOLD_")))
1560 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1561 if((s = getenv("MALLOC_MMAP_MAX_")))
1562 mALLOPt(M_MMAP_MAX, atoi(s));
 1563 s = getenv("MALLOC_CHECK_");
1564 __malloc_hook = save_malloc_hook;
1565 __free_hook = save_free_hook;
 1566 if(s) {
1567 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1568 __malloc_check_init();
 1569 }
1570 if(__malloc_initialize_hook != NULL)
1571 (*__malloc_initialize_hook)();
1572#endif
1573}
1574
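/* Editor's note: as the code above shows, the same tunables can be set from
   the environment before program start, e.g.

     MALLOC_TRIM_THRESHOLD_=65536 MALLOC_MMAP_MAX_=256 MALLOC_CHECK_=1 ./prog

   The values are parsed with atoi(). If MALLOC_CHECK_ is set, the standard
   debugging hooks are installed and its first digit selects check_action. */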
1575#if defined(_LIBC) || defined(MALLOC_HOOKS)
1576
1577/* Hooks for debugging versions. The initial hooks just call the
1578 initialization routine, then do the normal work. */
1579
1580static Void_t*
1581#if __STD_C
1582malloc_hook_ini(size_t sz)
1583#else
1584malloc_hook_ini(sz) size_t sz;
1585#endif
1586{
1587 __malloc_hook = NULL;
1588 __realloc_hook = NULL;
1589 __memalign_hook = NULL;
1590 ptmalloc_init();
1591 return mALLOc(sz);
1592}
1593
1594static Void_t*
1595#if __STD_C
1596realloc_hook_ini(Void_t* ptr, size_t sz)
1597#else
1598realloc_hook_ini(ptr, sz) Void_t* ptr; size_t sz;
1599#endif
1600{
1601 __malloc_hook = NULL;
1602 __realloc_hook = NULL;
1603 __memalign_hook = NULL;
1604 ptmalloc_init();
1605 return rEALLOc(ptr, sz);
1606}
1607
1608static Void_t*
1609#if __STD_C
1610memalign_hook_ini(size_t sz, size_t alignment)
1611#else
1612memalign_hook_ini(sz, alignment) size_t sz; size_t alignment;
1613#endif
1614{
1615 __malloc_hook = NULL;
1616 __realloc_hook = NULL;
1617 __memalign_hook = NULL;
1618 ptmalloc_init();
1619 return mEMALIGn(sz, alignment);
1620}
1621
831372e7
UD
1622void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1623void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr)) = NULL;
1624__malloc_ptr_t weak_variable (*__malloc_hook)
10dc2a90 1625 __MALLOC_P ((size_t __size)) = malloc_hook_ini;
831372e7 1626__malloc_ptr_t weak_variable (*__realloc_hook)
10dc2a90 1627 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size)) = realloc_hook_ini;
831372e7 1628__malloc_ptr_t weak_variable (*__memalign_hook)
10dc2a90 1629 __MALLOC_P ((size_t __size, size_t __alignment)) = memalign_hook_ini;
1228ed5c 1630void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
10dc2a90
UD
1631
1632/* Activate a standard set of debugging hooks. */
1633void
831372e7 1634__malloc_check_init()
10dc2a90
UD
1635{
1636 __malloc_hook = malloc_check;
1637 __free_hook = free_check;
1638 __realloc_hook = realloc_check;
1639 __memalign_hook = memalign_check;
7e3be507
UD
1640 if(check_action == 1)
1641 fprintf(stderr, "malloc: using debugging hooks\n");
10dc2a90
UD
1642}
1643
1644#endif
1645
f65fd747
UD
1646
1647\f
1648
1649
1650/* Routines dealing with mmap(). */
1651
1652#if HAVE_MMAP
1653
1654#ifndef MAP_ANONYMOUS
1655
1656static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1657
1658#define MMAP(size, prot) ((dev_zero_fd < 0) ? \
1659 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1660 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0)) : \
1661 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0))
1662
1663#else
1664
1665#define MMAP(size, prot) \
1666 (mmap(0, (size), (prot), MAP_PRIVATE|MAP_ANONYMOUS, -1, 0))
1667
1668#endif
1669
1670#if __STD_C
1671static mchunkptr mmap_chunk(size_t size)
1672#else
1673static mchunkptr mmap_chunk(size) size_t size;
1674#endif
1675{
1676 size_t page_mask = malloc_getpagesize - 1;
1677 mchunkptr p;
1678
1679 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1680
1681 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1682 * there is no following chunk whose prev_size field could be used.
1683 */
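  /* Illustrative example (assuming 4096-byte pages and SIZE_SZ == 4):
     a padded request of nb == 10000 becomes 10004 and is rounded up
     to 12288 bytes, i.e. three whole pages. */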
1684 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1685
1686 p = (mchunkptr)MMAP(size, PROT_READ|PROT_WRITE);
0413b54c 1687 if(p == (mchunkptr) MAP_FAILED) return 0;
f65fd747
UD
1688
1689 n_mmaps++;
1690 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1691
 1692 /* We demand that eight bytes into a page be 8-byte aligned. */
1693 assert(aligned_OK(chunk2mem(p)));
1694
1695 /* The offset to the start of the mmapped region is stored
1696 * in the prev_size field of the chunk; normally it is zero,
1697 * but that can be changed in memalign().
1698 */
1699 p->prev_size = 0;
1700 set_head(p, size|IS_MMAPPED);
1701
1702 mmapped_mem += size;
1703 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1704 max_mmapped_mem = mmapped_mem;
8a4b65b4 1705#ifdef NO_THREADS
f65fd747
UD
1706 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1707 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1708#endif
f65fd747
UD
1709 return p;
1710}
1711
1712#if __STD_C
1713static void munmap_chunk(mchunkptr p)
1714#else
1715static void munmap_chunk(p) mchunkptr p;
1716#endif
1717{
1718 INTERNAL_SIZE_T size = chunksize(p);
1719 int ret;
1720
1721 assert (chunk_is_mmapped(p));
1722 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1723 assert((n_mmaps > 0));
1724 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1725
1726 n_mmaps--;
1727 mmapped_mem -= (size + p->prev_size);
1728
1729 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1730
1731 /* munmap returns non-zero on failure */
1732 assert(ret == 0);
1733}
1734
1735#if HAVE_MREMAP
1736
1737#if __STD_C
1738static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1739#else
1740static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1741#endif
1742{
1743 size_t page_mask = malloc_getpagesize - 1;
1744 INTERNAL_SIZE_T offset = p->prev_size;
1745 INTERNAL_SIZE_T size = chunksize(p);
1746 char *cp;
1747
1748 assert (chunk_is_mmapped(p));
1749 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1750 assert((n_mmaps > 0));
1751 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1752
1753 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1754 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1755
1756 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1757 MREMAP_MAYMOVE);
1758
1759 if (cp == (char *)-1) return 0;
1760
1761 p = (mchunkptr)(cp + offset);
1762
1763 assert(aligned_OK(chunk2mem(p)));
1764
1765 assert((p->prev_size == offset));
1766 set_head(p, (new_size - offset)|IS_MMAPPED);
1767
1768 mmapped_mem -= size + offset;
1769 mmapped_mem += new_size;
1770 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1771 max_mmapped_mem = mmapped_mem;
8a4b65b4 1772#ifdef NO_THREADS
f65fd747
UD
1773 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1774 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1775#endif
f65fd747
UD
1776 return p;
1777}
1778
1779#endif /* HAVE_MREMAP */
1780
1781#endif /* HAVE_MMAP */
1782
1783\f
1784
1785/* Managing heaps and arenas (for concurrent threads) */
1786
1787#ifndef NO_THREADS
1788
1789/* Create a new heap. size is automatically rounded up to a multiple
1790 of the page size. */
1791
1792static heap_info *
1793#if __STD_C
1794new_heap(size_t size)
1795#else
1796new_heap(size) size_t size;
1797#endif
1798{
1799 size_t page_mask = malloc_getpagesize - 1;
1800 char *p1, *p2;
1801 unsigned long ul;
1802 heap_info *h;
1803
7799b7b3 1804 if(size+top_pad < HEAP_MIN_SIZE)
f65fd747 1805 size = HEAP_MIN_SIZE;
7799b7b3
UD
1806 else if(size+top_pad <= HEAP_MAX_SIZE)
1807 size += top_pad;
1808 else if(size > HEAP_MAX_SIZE)
f65fd747 1809 return 0;
7799b7b3
UD
1810 else
1811 size = HEAP_MAX_SIZE;
1812 size = (size + page_mask) & ~page_mask;
1813
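  /* Reserve twice HEAP_MAX_SIZE of inaccessible address space; a range
     that large necessarily contains a HEAP_MAX_SIZE-aligned stretch of
     HEAP_MAX_SIZE bytes.  The leading and trailing excess is unmapped
     below, and only the first `size' bytes are made accessible.  The
     alignment is what lets heap_for_ptr() later recover the heap from
     any chunk address by masking off the low bits. */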
f65fd747 1814 p1 = (char *)MMAP(HEAP_MAX_SIZE<<1, PROT_NONE);
0413b54c 1815 if(p1 == MAP_FAILED)
f65fd747
UD
1816 return 0;
1817 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
1818 ul = p2 - p1;
1819 munmap(p1, ul);
1820 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
1821 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
1822 munmap(p2, HEAP_MAX_SIZE);
1823 return 0;
1824 }
1825 h = (heap_info *)p2;
1826 h->size = size;
1827 THREAD_STAT(stat_n_heaps++);
1828 return h;
1829}
1830
1831/* Grow or shrink a heap. size is automatically rounded up to a
8a4b65b4 1832 multiple of the page size if it is positive. */
f65fd747
UD
1833
1834static int
1835#if __STD_C
1836grow_heap(heap_info *h, long diff)
1837#else
1838grow_heap(h, diff) heap_info *h; long diff;
1839#endif
1840{
1841 size_t page_mask = malloc_getpagesize - 1;
1842 long new_size;
1843
1844 if(diff >= 0) {
1845 diff = (diff + page_mask) & ~page_mask;
1846 new_size = (long)h->size + diff;
1847 if(new_size > HEAP_MAX_SIZE)
1848 return -1;
1849 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
1850 return -2;
1851 } else {
1852 new_size = (long)h->size + diff;
8a4b65b4 1853 if(new_size < (long)sizeof(*h))
f65fd747
UD
1854 return -1;
1855 if(mprotect((char *)h + new_size, -diff, PROT_NONE) != 0)
1856 return -2;
1857 }
1858 h->size = new_size;
1859 return 0;
1860}
1861
8a4b65b4
UD
1862/* Delete a heap. */
1863
1864#define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
1865
f65fd747
UD
1866/* arena_get() acquires an arena and locks the corresponding mutex.
1867 First, try the one last locked successfully by this thread. (This
1868 is the common case and handled with a macro for speed.) Then, loop
7e3be507
UD
1869 once over the circularly linked list of arenas. If no arena is
1870 readily available, create a new one. */
f65fd747
UD
1871
1872#define arena_get(ptr, size) do { \
1873 Void_t *vptr = NULL; \
1874 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
1875 if(ptr && !mutex_trylock(&ptr->mutex)) { \
8a4b65b4 1876 THREAD_STAT(++(ptr->stat_lock_direct)); \
7e3be507 1877 } else \
f65fd747 1878 ptr = arena_get2(ptr, (size)); \
f65fd747
UD
1879} while(0)
1880
1881static arena *
1882#if __STD_C
1883arena_get2(arena *a_tsd, size_t size)
1884#else
1885arena_get2(a_tsd, size) arena *a_tsd; size_t size;
1886#endif
1887{
1888 arena *a;
1889 heap_info *h;
1890 char *ptr;
1891 int i;
1892 unsigned long misalign;
1893
7e3be507
UD
1894 if(!a_tsd)
1895 a = a_tsd = &main_arena;
1896 else {
1897 a = a_tsd->next;
1898 if(!a) {
1899 /* This can only happen while initializing the new arena. */
1900 (void)mutex_lock(&main_arena.mutex);
1901 THREAD_STAT(++(main_arena.stat_lock_wait));
1902 return &main_arena;
f65fd747 1903 }
8a4b65b4 1904 }
7e3be507
UD
1905
1906 /* Check the global, circularly linked list for available arenas. */
1907 do {
1908 if(!mutex_trylock(&a->mutex)) {
1909 THREAD_STAT(++(a->stat_lock_loop));
1910 tsd_setspecific(arena_key, (Void_t *)a);
1911 return a;
1912 }
1913 a = a->next;
1914 } while(a != a_tsd);
f65fd747
UD
1915
1916 /* Nothing immediately available, so generate a new arena. */
1917 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
1918 if(!h)
1919 return 0;
1920 a = h->ar_ptr = (arena *)(h+1);
1921 for(i=0; i<NAV; i++)
1922 init_bin(a, i);
7e3be507 1923 a->next = NULL;
8a4b65b4 1924 a->size = h->size;
7e3be507 1925 tsd_setspecific(arena_key, (Void_t *)a);
f65fd747
UD
1926 mutex_init(&a->mutex);
1927 i = mutex_lock(&a->mutex); /* remember result */
1928
1929 /* Set up the top chunk, with proper alignment. */
1930 ptr = (char *)(a + 1);
1931 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
1932 if (misalign > 0)
1933 ptr += MALLOC_ALIGNMENT - misalign;
1934 top(a) = (mchunkptr)ptr;
8a4b65b4 1935 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
f65fd747
UD
1936
1937 /* Add the new arena to the list. */
1938 (void)mutex_lock(&list_lock);
1939 a->next = main_arena.next;
1940 main_arena.next = a;
f65fd747
UD
1941 (void)mutex_unlock(&list_lock);
1942
1943 if(i) /* locking failed; keep arena for further attempts later */
1944 return 0;
1945
8a4b65b4 1946 THREAD_STAT(++(a->stat_lock_loop));
f65fd747
UD
1947 return a;
1948}
1949
1950/* find the heap and corresponding arena for a given ptr */
1951
1952#define heap_for_ptr(ptr) \
1953 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
1954#define arena_for_ptr(ptr) \
1955 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
1956 &main_arena : heap_for_ptr(ptr)->ar_ptr)
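/* heap_for_ptr() works because new_heap() always places a heap at a
   HEAP_MAX_SIZE-aligned address, so masking off the low bits of any
   pointer into the heap yields its heap_info header.  Chunks in the
   main arena live in sbrk'ed memory rather than in an aligned heap,
   which is why arena_for_ptr() checks the sbrk range first. */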
1957
1958#else /* defined(NO_THREADS) */
1959
1960/* Without concurrent threads, there is only one arena. */
1961
1962#define arena_get(ptr, sz) (ptr = &main_arena)
1963#define arena_for_ptr(ptr) (&main_arena)
1964
1965#endif /* !defined(NO_THREADS) */
1966
1967\f
1968
1969/*
1970 Debugging support
1971*/
1972
1973#if MALLOC_DEBUG
1974
1975
1976/*
1977 These routines make a number of assertions about the states
1978 of data structures that should be true at all times. If any
1979 are not true, it's very likely that a user program has somehow
 1980 trashed memory. (It's also possible that there is a coding error
 1981 in malloc, in which case please report it!)
1982*/
1983
1984#if __STD_C
1985static void do_check_chunk(arena *ar_ptr, mchunkptr p)
1986#else
1987static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
1988#endif
1989{
1990 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1991
1992 /* No checkable chunk is mmapped */
1993 assert(!chunk_is_mmapped(p));
1994
1995#ifndef NO_THREADS
1996 if(ar_ptr != &main_arena) {
1997 heap_info *heap = heap_for_ptr(p);
1998 assert(heap->ar_ptr == ar_ptr);
1999 assert((char *)p + sz <= (char *)heap + heap->size);
2000 return;
2001 }
2002#endif
2003
2004 /* Check for legal address ... */
2005 assert((char*)p >= sbrk_base);
2006 if (p != top(ar_ptr))
2007 assert((char*)p + sz <= (char*)top(ar_ptr));
2008 else
2009 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2010
2011}
2012
2013
2014#if __STD_C
2015static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2016#else
2017static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2018#endif
2019{
2020 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2021 mchunkptr next = chunk_at_offset(p, sz);
2022
2023 do_check_chunk(ar_ptr, p);
2024
2025 /* Check whether it claims to be free ... */
2026 assert(!inuse(p));
2027
8a4b65b4
UD
2028 /* Must have OK size and fields */
2029 assert((long)sz >= (long)MINSIZE);
2030 assert((sz & MALLOC_ALIGN_MASK) == 0);
2031 assert(aligned_OK(chunk2mem(p)));
2032 /* ... matching footer field */
2033 assert(next->prev_size == sz);
2034 /* ... and is fully consolidated */
2035 assert(prev_inuse(p));
2036 assert (next == top(ar_ptr) || inuse(next));
2037
2038 /* ... and has minimally sane links */
2039 assert(p->fd->bk == p);
2040 assert(p->bk->fd == p);
f65fd747
UD
2041}
2042
2043#if __STD_C
2044static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2045#else
2046static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2047#endif
2048{
2049 mchunkptr next = next_chunk(p);
2050 do_check_chunk(ar_ptr, p);
2051
2052 /* Check whether it claims to be in use ... */
2053 assert(inuse(p));
2054
8a4b65b4
UD
2055 /* ... whether its size is OK (it might be a fencepost) ... */
2056 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2057
f65fd747
UD
2058 /* ... and is surrounded by OK chunks.
2059 Since more things can be checked with free chunks than inuse ones,
 2060 if an inuse chunk borders them and debugging is on, it's worth checking them too.
2061 */
2062 if (!prev_inuse(p))
2063 {
2064 mchunkptr prv = prev_chunk(p);
2065 assert(next_chunk(prv) == p);
2066 do_check_free_chunk(ar_ptr, prv);
2067 }
2068 if (next == top(ar_ptr))
2069 {
2070 assert(prev_inuse(next));
2071 assert(chunksize(next) >= MINSIZE);
2072 }
2073 else if (!inuse(next))
2074 do_check_free_chunk(ar_ptr, next);
2075
2076}
2077
2078#if __STD_C
2079static void do_check_malloced_chunk(arena *ar_ptr,
2080 mchunkptr p, INTERNAL_SIZE_T s)
2081#else
2082static void do_check_malloced_chunk(ar_ptr, p, s)
2083arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2084#endif
2085{
2086 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2087 long room = sz - s;
2088
2089 do_check_inuse_chunk(ar_ptr, p);
2090
2091 /* Legal size ... */
2092 assert((long)sz >= (long)MINSIZE);
2093 assert((sz & MALLOC_ALIGN_MASK) == 0);
2094 assert(room >= 0);
2095 assert(room < (long)MINSIZE);
2096
2097 /* ... and alignment */
2098 assert(aligned_OK(chunk2mem(p)));
2099
2100
2101 /* ... and was allocated at front of an available chunk */
2102 assert(prev_inuse(p));
2103
2104}
2105
2106
2107#define check_free_chunk(A,P) do_check_free_chunk(A,P)
2108#define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2109#define check_chunk(A,P) do_check_chunk(A,P)
2110#define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2111#else
2112#define check_free_chunk(A,P)
2113#define check_inuse_chunk(A,P)
2114#define check_chunk(A,P)
2115#define check_malloced_chunk(A,P,N)
2116#endif
2117
2118\f
2119
2120/*
2121 Macro-based internal utilities
2122*/
2123
2124
2125/*
2126 Linking chunks in bin lists.
2127 Call these only with variables, not arbitrary expressions, as arguments.
2128*/
2129
2130/*
2131 Place chunk p of size s in its bin, in size order,
2132 putting it ahead of others of same size.
2133*/
2134
2135
2136#define frontlink(A, P, S, IDX, BK, FD) \
2137{ \
2138 if (S < MAX_SMALLBIN_SIZE) \
2139 { \
2140 IDX = smallbin_index(S); \
2141 mark_binblock(A, IDX); \
2142 BK = bin_at(A, IDX); \
2143 FD = BK->fd; \
2144 P->bk = BK; \
2145 P->fd = FD; \
2146 FD->bk = BK->fd = P; \
2147 } \
2148 else \
2149 { \
2150 IDX = bin_index(S); \
2151 BK = bin_at(A, IDX); \
2152 FD = BK->fd; \
2153 if (FD == BK) mark_binblock(A, IDX); \
2154 else \
2155 { \
2156 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2157 BK = FD->bk; \
2158 } \
2159 P->bk = BK; \
2160 P->fd = FD; \
2161 FD->bk = BK->fd = P; \
2162 } \
2163}
2164
2165
2166/* take a chunk off a list */
2167
2168#define unlink(P, BK, FD) \
2169{ \
2170 BK = P->bk; \
2171 FD = P->fd; \
2172 FD->bk = BK; \
2173 BK->fd = FD; \
2174} \
2175
2176/* Place p as the last remainder */
2177
2178#define link_last_remainder(A, P) \
2179{ \
2180 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2181 P->fd = P->bk = last_remainder(A); \
2182}
2183
2184/* Clear the last_remainder bin */
2185
2186#define clear_last_remainder(A) \
2187 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2188
2189
2190
2191\f
2192
2193/*
2194 Extend the top-most chunk by obtaining memory from system.
2195 Main interface to sbrk (but see also malloc_trim).
2196*/
2197
2198#if __STD_C
2199static void malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2200#else
2201static void malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2202#endif
2203{
2204 unsigned long pagesz = malloc_getpagesize;
2205 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2206 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2207 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2208
2209#ifndef NO_THREADS
2210 if(ar_ptr == &main_arena) {
2211#endif
2212
2213 char* brk; /* return value from sbrk */
2214 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2215 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2216 char* new_brk; /* return of 2nd sbrk call */
2217 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2218
2219 /* Pad request with top_pad plus minimal overhead */
2220 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2221
2222 /* If not the first time through, round to preserve page boundary */
2223 /* Otherwise, we need to correct to a page size below anyway. */
 2224 /* (We also correct below if there was an intervening foreign sbrk call.) */
2225
2226 if (sbrk_base != (char*)(-1))
2227 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2228
2229 brk = (char*)(MORECORE (sbrk_size));
2230
2231 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2232 if (brk == (char*)(MORECORE_FAILURE) ||
2233 (brk < old_end && old_top != initial_top(&main_arena)))
2234 return;
2235
7799b7b3 2236#if defined(_LIBC) || defined(MALLOC_HOOKS)
1228ed5c
UD
2237 /* Call the `morecore' hook if necessary. */
2238 if (__after_morecore_hook)
2239 (*__after_morecore_hook) ();
7799b7b3 2240#endif
1228ed5c 2241
f65fd747
UD
2242 sbrked_mem += sbrk_size;
2243
2244 if (brk == old_end) { /* can just add bytes to current top */
2245 top_size = sbrk_size + old_top_size;
2246 set_head(old_top, top_size | PREV_INUSE);
2247 old_top = 0; /* don't free below */
2248 } else {
2249 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2250 sbrk_base = brk;
2251 else
2252 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2253 sbrked_mem += brk - (char*)old_end;
2254
2255 /* Guarantee alignment of first new chunk made from this space */
2256 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2257 if (front_misalign > 0) {
2258 correction = (MALLOC_ALIGNMENT) - front_misalign;
2259 brk += correction;
2260 } else
2261 correction = 0;
2262
2263 /* Guarantee the next brk will be at a page boundary */
2264 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2265
2266 /* Allocate correction */
2267 new_brk = (char*)(MORECORE (correction));
2268 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2269
7799b7b3 2270#if defined(_LIBC) || defined(MALLOC_HOOKS)
1228ed5c
UD
2271 /* Call the `morecore' hook if necessary. */
2272 if (__after_morecore_hook)
7799b7b3
UD
2273 (*__after_morecore_hook) ();
2274#endif
1228ed5c 2275
f65fd747
UD
2276 sbrked_mem += correction;
2277
2278 top(&main_arena) = (mchunkptr)brk;
2279 top_size = new_brk - brk + correction;
2280 set_head(top(&main_arena), top_size | PREV_INUSE);
2281
2282 if (old_top == initial_top(&main_arena))
2283 old_top = 0; /* don't free below */
2284 }
2285
2286 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2287 max_sbrked_mem = sbrked_mem;
8a4b65b4 2288#ifdef NO_THREADS
f65fd747
UD
2289 if ((unsigned long)(mmapped_mem + sbrked_mem) >
2290 (unsigned long)max_total_mem)
2291 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 2292#endif
f65fd747
UD
2293
2294#ifndef NO_THREADS
2295 } else { /* ar_ptr != &main_arena */
8a4b65b4
UD
2296 heap_info *old_heap, *heap;
2297 size_t old_heap_size;
f65fd747
UD
2298
2299 if(old_top_size < MINSIZE) /* this should never happen */
2300 return;
2301
2302 /* First try to extend the current heap. */
2303 if(MINSIZE + nb <= old_top_size)
2304 return;
8a4b65b4
UD
2305 old_heap = heap_for_ptr(old_top);
2306 old_heap_size = old_heap->size;
2307 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2308 ar_ptr->size += old_heap->size - old_heap_size;
2309 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
f65fd747
UD
2310 set_head(old_top, top_size | PREV_INUSE);
2311 return;
2312 }
2313
2314 /* A new heap must be created. */
7799b7b3 2315 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
f65fd747
UD
2316 if(!heap)
2317 return;
2318 heap->ar_ptr = ar_ptr;
8a4b65b4
UD
2319 heap->prev = old_heap;
2320 ar_ptr->size += heap->size;
f65fd747
UD
2321
2322 /* Set up the new top, so we can safely use chunk_free() below. */
2323 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2324 top_size = heap->size - sizeof(*heap);
2325 set_head(top(ar_ptr), top_size | PREV_INUSE);
2326 }
2327#endif /* !defined(NO_THREADS) */
2328
2329 /* We always land on a page boundary */
2330 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2331
2332 /* Setup fencepost and free the old top chunk. */
2333 if(old_top) {
8a4b65b4
UD
2334 /* The fencepost takes at least MINSIZE bytes, because it might
2335 become the top chunk again later. Note that a footer is set
2336 up, too, although the chunk is marked in use. */
2337 old_top_size -= MINSIZE;
2338 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2339 if(old_top_size >= MINSIZE) {
2340 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2341 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
f65fd747
UD
2342 set_head_size(old_top, old_top_size);
2343 chunk_free(ar_ptr, old_top);
2344 } else {
8a4b65b4
UD
2345 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2346 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
f65fd747
UD
2347 }
2348 }
2349}
2350
2351
2352\f
2353
2354/* Main public routines */
2355
2356
2357/*
8a4b65b4 2358 Malloc Algorithm:
f65fd747
UD
2359
2360 The requested size is first converted into a usable form, `nb'.
2361 This currently means to add 4 bytes overhead plus possibly more to
2362 obtain 8-byte alignment and/or to obtain a size of at least
8a4b65b4
UD
2363 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2364 size. (All fits are considered `exact' if they are within MINSIZE
2365 bytes.)
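  (Worked example, assuming 4-byte size fields and 8-byte alignment as
  described above: a request for 13 bytes is padded to nb = 24, while
  a request for 1 byte is padded all the way up to MINSIZE.)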
f65fd747
UD
2366
 2367 From there, the first of the following steps that succeeds is taken:
2368
2369 1. The bin corresponding to the request size is scanned, and if
2370 a chunk of exactly the right size is found, it is taken.
2371
2372 2. The most recently remaindered chunk is used if it is big
2373 enough. This is a form of (roving) first fit, used only in
2374 the absence of exact fits. Runs of consecutive requests use
2375 the remainder of the chunk used for the previous such request
2376 whenever possible. This limited use of a first-fit style
2377 allocation strategy tends to give contiguous chunks
2378 coextensive lifetimes, which improves locality and can reduce
2379 fragmentation in the long run.
2380
2381 3. Other bins are scanned in increasing size order, using a
2382 chunk big enough to fulfill the request, and splitting off
2383 any remainder. This search is strictly by best-fit; i.e.,
2384 the smallest (with ties going to approximately the least
2385 recently used) chunk that fits is selected.
2386
2387 4. If large enough, the chunk bordering the end of memory
2388 (`top') is split off. (This use of `top' is in accord with
2389 the best-fit search rule. In effect, `top' is treated as
2390 larger (and thus less well fitting) than any other available
2391 chunk since it can be extended to be as large as necessary
 2392 (up to system limitations).)
2393
2394 5. If the request size meets the mmap threshold and the
2395 system supports mmap, and there are few enough currently
2396 allocated mmapped regions, and a call to mmap succeeds,
2397 the request is allocated via direct memory mapping.
2398
2399 6. Otherwise, the top of memory is extended by
2400 obtaining more space from the system (normally using sbrk,
2401 but definable to anything else via the MORECORE macro).
2402 Memory is gathered from the system (in system page-sized
2403 units) in a way that allows chunks obtained across different
2404 sbrk calls to be consolidated, but does not require
2405 contiguous memory. Thus, it should be safe to intersperse
2406 mallocs with other sbrk calls.
2407
2408
 2409 All allocations are made from the `lowest' part of any found
2410 chunk. (The implementation invariant is that prev_inuse is
2411 always true of any allocated chunk; i.e., that each allocated
2412 chunk borders either a previously allocated and still in-use chunk,
2413 or the base of its memory arena.)
2414
2415*/
2416
2417#if __STD_C
2418Void_t* mALLOc(size_t bytes)
2419#else
2420Void_t* mALLOc(bytes) size_t bytes;
2421#endif
2422{
2423 arena *ar_ptr;
10dc2a90 2424 INTERNAL_SIZE_T nb; /* padded request size */
f65fd747
UD
2425 mchunkptr victim;
2426
10dc2a90
UD
2427#if defined(_LIBC) || defined(MALLOC_HOOKS)
2428 if (__malloc_hook != NULL) {
2429 Void_t* result;
2430
2431 result = (*__malloc_hook)(bytes);
2432 return result;
2433 }
2434#endif
2435
2436 nb = request2size(bytes);
7799b7b3 2437 arena_get(ar_ptr, nb);
f65fd747
UD
2438 if(!ar_ptr)
2439 return 0;
2440 victim = chunk_alloc(ar_ptr, nb);
2441 (void)mutex_unlock(&ar_ptr->mutex);
7799b7b3
UD
2442 if(!victim) {
2443 /* Maybe the failure is due to running out of mmapped areas. */
2444 if(ar_ptr != &main_arena) {
2445 (void)mutex_lock(&main_arena.mutex);
2446 victim = chunk_alloc(&main_arena, nb);
2447 (void)mutex_unlock(&main_arena.mutex);
2448 }
2449 if(!victim) return 0;
2450 }
2451 return chunk2mem(victim);
f65fd747
UD
2452}
2453
2454static mchunkptr
2455#if __STD_C
2456chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2457#else
2458chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2459#endif
2460{
2461 mchunkptr victim; /* inspected/selected chunk */
2462 INTERNAL_SIZE_T victim_size; /* its size */
2463 int idx; /* index for bin traversal */
2464 mbinptr bin; /* associated bin */
2465 mchunkptr remainder; /* remainder from a split */
2466 long remainder_size; /* its size */
2467 int remainder_index; /* its bin index */
2468 unsigned long block; /* block traverser bit */
2469 int startidx; /* first bin of a traversed block */
2470 mchunkptr fwd; /* misc temp for linking */
2471 mchunkptr bck; /* misc temp for linking */
2472 mbinptr q; /* misc temp */
2473
2474
2475 /* Check for exact match in a bin */
2476
2477 if (is_small_request(nb)) /* Faster version for small requests */
2478 {
2479 idx = smallbin_index(nb);
2480
2481 /* No traversal or size check necessary for small bins. */
2482
2483 q = bin_at(ar_ptr, idx);
2484 victim = last(q);
2485
2486 /* Also scan the next one, since it would have a remainder < MINSIZE */
2487 if (victim == q)
2488 {
2489 q = next_bin(q);
2490 victim = last(q);
2491 }
2492 if (victim != q)
2493 {
2494 victim_size = chunksize(victim);
2495 unlink(victim, bck, fwd);
2496 set_inuse_bit_at_offset(victim, victim_size);
2497 check_malloced_chunk(ar_ptr, victim, nb);
2498 return victim;
2499 }
2500
2501 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2502
2503 }
2504 else
2505 {
2506 idx = bin_index(nb);
2507 bin = bin_at(ar_ptr, idx);
2508
2509 for (victim = last(bin); victim != bin; victim = victim->bk)
2510 {
2511 victim_size = chunksize(victim);
2512 remainder_size = victim_size - nb;
2513
2514 if (remainder_size >= (long)MINSIZE) /* too big */
2515 {
2516 --idx; /* adjust to rescan below after checking last remainder */
2517 break;
2518 }
2519
2520 else if (remainder_size >= 0) /* exact fit */
2521 {
2522 unlink(victim, bck, fwd);
2523 set_inuse_bit_at_offset(victim, victim_size);
2524 check_malloced_chunk(ar_ptr, victim, nb);
2525 return victim;
2526 }
2527 }
2528
2529 ++idx;
2530
2531 }
2532
2533 /* Try to use the last split-off remainder */
2534
2535 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2536 {
2537 victim_size = chunksize(victim);
2538 remainder_size = victim_size - nb;
2539
2540 if (remainder_size >= (long)MINSIZE) /* re-split */
2541 {
2542 remainder = chunk_at_offset(victim, nb);
2543 set_head(victim, nb | PREV_INUSE);
2544 link_last_remainder(ar_ptr, remainder);
2545 set_head(remainder, remainder_size | PREV_INUSE);
2546 set_foot(remainder, remainder_size);
2547 check_malloced_chunk(ar_ptr, victim, nb);
2548 return victim;
2549 }
2550
2551 clear_last_remainder(ar_ptr);
2552
2553 if (remainder_size >= 0) /* exhaust */
2554 {
2555 set_inuse_bit_at_offset(victim, victim_size);
2556 check_malloced_chunk(ar_ptr, victim, nb);
2557 return victim;
2558 }
2559
2560 /* Else place in bin */
2561
2562 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2563 }
2564
2565 /*
2566 If there are any possibly nonempty big-enough blocks,
2567 search for best fitting chunk by scanning bins in blockwidth units.
2568 */
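  /* (binblocks is a bitmap in which each bit summarizes BINBLOCKWIDTH
     consecutive bins; a set bit means some bin in that group may be
     nonempty, so whole groups of empty bins can be skipped at once.) */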
2569
2570 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2571 {
2572
2573 /* Get to the first marked block */
2574
2575 if ( (block & binblocks(ar_ptr)) == 0)
2576 {
2577 /* force to an even block boundary */
2578 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2579 block <<= 1;
2580 while ((block & binblocks(ar_ptr)) == 0)
2581 {
2582 idx += BINBLOCKWIDTH;
2583 block <<= 1;
2584 }
2585 }
2586
2587 /* For each possibly nonempty block ... */
2588 for (;;)
2589 {
2590 startidx = idx; /* (track incomplete blocks) */
2591 q = bin = bin_at(ar_ptr, idx);
2592
2593 /* For each bin in this block ... */
2594 do
2595 {
2596 /* Find and use first big enough chunk ... */
2597
2598 for (victim = last(bin); victim != bin; victim = victim->bk)
2599 {
2600 victim_size = chunksize(victim);
2601 remainder_size = victim_size - nb;
2602
2603 if (remainder_size >= (long)MINSIZE) /* split */
2604 {
2605 remainder = chunk_at_offset(victim, nb);
2606 set_head(victim, nb | PREV_INUSE);
2607 unlink(victim, bck, fwd);
2608 link_last_remainder(ar_ptr, remainder);
2609 set_head(remainder, remainder_size | PREV_INUSE);
2610 set_foot(remainder, remainder_size);
2611 check_malloced_chunk(ar_ptr, victim, nb);
2612 return victim;
2613 }
2614
2615 else if (remainder_size >= 0) /* take */
2616 {
2617 set_inuse_bit_at_offset(victim, victim_size);
2618 unlink(victim, bck, fwd);
2619 check_malloced_chunk(ar_ptr, victim, nb);
2620 return victim;
2621 }
2622
2623 }
2624
2625 bin = next_bin(bin);
2626
2627 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2628
2629 /* Clear out the block bit. */
2630
2631 do /* Possibly backtrack to try to clear a partial block */
2632 {
2633 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2634 {
2635 binblocks(ar_ptr) &= ~block;
2636 break;
2637 }
2638 --startidx;
2639 q = prev_bin(q);
2640 } while (first(q) == q);
2641
2642 /* Get to the next possibly nonempty block */
2643
2644 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2645 {
2646 while ((block & binblocks(ar_ptr)) == 0)
2647 {
2648 idx += BINBLOCKWIDTH;
2649 block <<= 1;
2650 }
2651 }
2652 else
2653 break;
2654 }
2655 }
2656
2657
2658 /* Try to use top chunk */
2659
2660 /* Require that there be a remainder, ensuring top always exists */
2661 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2662 {
2663
2664#if HAVE_MMAP
2665 /* If big and would otherwise need to extend, try to use mmap instead */
2666 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2667 (victim = mmap_chunk(nb)) != 0)
2668 return victim;
2669#endif
2670
2671 /* Try to extend */
2672 malloc_extend_top(ar_ptr, nb);
2673 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2674 return 0; /* propagate failure */
2675 }
2676
2677 victim = top(ar_ptr);
2678 set_head(victim, nb | PREV_INUSE);
2679 top(ar_ptr) = chunk_at_offset(victim, nb);
2680 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2681 check_malloced_chunk(ar_ptr, victim, nb);
2682 return victim;
2683
2684}
2685
2686
2687\f
2688
2689/*
2690
2691 free() algorithm :
2692
2693 cases:
2694
2695 1. free(0) has no effect.
2696
2697 2. If the chunk was allocated via mmap, it is released via munmap().
2698
2699 3. If a returned chunk borders the current high end of memory,
2700 it is consolidated into the top, and if the total unused
2701 topmost memory exceeds the trim threshold, malloc_trim is
2702 called.
2703
2704 4. Other chunks are consolidated as they arrive, and
2705 placed in corresponding bins. (This includes the case of
2706 consolidating with the current `last_remainder').
2707
2708*/
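/* Worked example of cases 3 and 4 above: if chunks A, B and C are
   adjacent and C borders the top chunk, freeing C merges it into top
   (case 3).  Freeing B and then A instead leaves, after the second
   free, a single consolidated free chunk spanning A and B, placed in
   the bin for its combined size (case 4). */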
2709
2710
2711#if __STD_C
2712void fREe(Void_t* mem)
2713#else
2714void fREe(mem) Void_t* mem;
2715#endif
2716{
2717 arena *ar_ptr;
2718 mchunkptr p; /* chunk corresponding to mem */
2719
10dc2a90
UD
2720#if defined(_LIBC) || defined(MALLOC_HOOKS)
2721 if (__free_hook != NULL) {
2722 (*__free_hook)(mem);
2723 return;
2724 }
2725#endif
2726
f65fd747
UD
2727 if (mem == 0) /* free(0) has no effect */
2728 return;
2729
2730 p = mem2chunk(mem);
2731
2732#if HAVE_MMAP
2733 if (chunk_is_mmapped(p)) /* release mmapped memory. */
2734 {
2735 munmap_chunk(p);
2736 return;
2737 }
2738#endif
2739
2740 ar_ptr = arena_for_ptr(p);
8a4b65b4
UD
2741#if THREAD_STATS
2742 if(!mutex_trylock(&ar_ptr->mutex))
2743 ++(ar_ptr->stat_lock_direct);
2744 else {
2745 (void)mutex_lock(&ar_ptr->mutex);
2746 ++(ar_ptr->stat_lock_wait);
2747 }
2748#else
f65fd747 2749 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4 2750#endif
f65fd747
UD
2751 chunk_free(ar_ptr, p);
2752 (void)mutex_unlock(&ar_ptr->mutex);
2753}
2754
2755static void
2756#if __STD_C
2757chunk_free(arena *ar_ptr, mchunkptr p)
2758#else
2759chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2760#endif
2761{
2762 INTERNAL_SIZE_T hd = p->size; /* its head field */
2763 INTERNAL_SIZE_T sz; /* its size */
2764 int idx; /* its bin index */
2765 mchunkptr next; /* next contiguous chunk */
2766 INTERNAL_SIZE_T nextsz; /* its size */
2767 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2768 mchunkptr bck; /* misc temp for linking */
2769 mchunkptr fwd; /* misc temp for linking */
2770 int islr; /* track whether merging with last_remainder */
2771
2772 check_inuse_chunk(ar_ptr, p);
2773
2774 sz = hd & ~PREV_INUSE;
2775 next = chunk_at_offset(p, sz);
2776 nextsz = chunksize(next);
2777
2778 if (next == top(ar_ptr)) /* merge with top */
2779 {
2780 sz += nextsz;
2781
2782 if (!(hd & PREV_INUSE)) /* consolidate backward */
2783 {
2784 prevsz = p->prev_size;
2785 p = chunk_at_offset(p, -prevsz);
2786 sz += prevsz;
2787 unlink(p, bck, fwd);
2788 }
2789
2790 set_head(p, sz | PREV_INUSE);
2791 top(ar_ptr) = p;
8a4b65b4
UD
2792
2793#ifndef NO_THREADS
2794 if(ar_ptr == &main_arena) {
2795#endif
2796 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2797 main_trim(top_pad);
2798#ifndef NO_THREADS
2799 } else {
2800 heap_info *heap = heap_for_ptr(p);
2801
2802 assert(heap->ar_ptr == ar_ptr);
2803
2804 /* Try to get rid of completely empty heaps, if possible. */
2805 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
2806 p == chunk_at_offset(heap, sizeof(*heap)))
2807 heap_trim(heap, top_pad);
2808 }
2809#endif
f65fd747
UD
2810 return;
2811 }
2812
f65fd747
UD
2813 islr = 0;
2814
2815 if (!(hd & PREV_INUSE)) /* consolidate backward */
2816 {
2817 prevsz = p->prev_size;
2818 p = chunk_at_offset(p, -prevsz);
2819 sz += prevsz;
2820
2821 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
2822 islr = 1;
2823 else
2824 unlink(p, bck, fwd);
2825 }
2826
2827 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2828 {
2829 sz += nextsz;
2830
2831 if (!islr && next->fd == last_remainder(ar_ptr))
2832 /* re-insert last_remainder */
2833 {
2834 islr = 1;
2835 link_last_remainder(ar_ptr, p);
2836 }
2837 else
2838 unlink(next, bck, fwd);
7799b7b3
UD
2839
2840 next = chunk_at_offset(p, sz);
f65fd747 2841 }
7799b7b3
UD
2842 else
2843 set_head(next, nextsz); /* clear inuse bit */
f65fd747
UD
2844
2845 set_head(p, sz | PREV_INUSE);
7799b7b3 2846 next->prev_size = sz;
f65fd747
UD
2847 if (!islr)
2848 frontlink(ar_ptr, p, sz, idx, bck, fwd);
7799b7b3
UD
2849
2850#ifndef NO_THREADS
2851 /* Check whether the heap containing top can go away now. */
2852 if(next->size < MINSIZE &&
2853 (unsigned long)sz > trim_threshold &&
2854 ar_ptr != &main_arena) { /* fencepost */
2855 heap_info* heap = heap_for_ptr(top(ar_ptr));
2856
2857 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
2858 heap->prev == heap_for_ptr(p))
2859 heap_trim(heap, top_pad);
2860 }
2861#endif
f65fd747
UD
2862}
2863
2864
2865\f
2866
2867
2868/*
2869
2870 Realloc algorithm:
2871
2872 Chunks that were obtained via mmap cannot be extended or shrunk
2873 unless HAVE_MREMAP is defined, in which case mremap is used.
2874 Otherwise, if their reallocation is for additional space, they are
2875 copied. If for less, they are just left alone.
2876
2877 Otherwise, if the reallocation is for additional space, and the
2878 chunk can be extended, it is, else a malloc-copy-free sequence is
2879 taken. There are several different ways that a chunk could be
2880 extended. All are tried:
2881
2882 * Extending forward into following adjacent free chunk.
2883 * Shifting backwards, joining preceding adjacent space
2884 * Both shifting backwards and extending forward.
2885 * Extending into newly sbrked space
2886
2887 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2888 size argument of zero (re)allocates a minimum-sized chunk.
2889
2890 If the reallocation is for less space, and the new request is for
2891 a `small' (<512 bytes) size, then the newly unused space is lopped
2892 off and freed.
2893
2894 The old unix realloc convention of allowing the last-free'd chunk
2895 to be used as an argument to realloc is no longer supported.
2896 I don't know of any programs still relying on this feature,
2897 and allowing it would also allow too many other incorrect
2898 usages of realloc to be sensible.
2899
2900
2901*/
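/* Illustrative usage sketch (not part of the allocator; buf and
   new_len are a caller's own variables): because a malloc-copy-free
   sequence may move the data, callers should keep the old pointer
   until realloc() has succeeded, e.g.

     char *tmp = realloc(buf, new_len);
     if (tmp != NULL)
       buf = tmp;
     else
       ... buf is still valid here and must eventually be freed ...
*/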
2902
2903
2904#if __STD_C
2905Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2906#else
2907Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2908#endif
2909{
2910 arena *ar_ptr;
2911 INTERNAL_SIZE_T nb; /* padded request size */
2912
2913 mchunkptr oldp; /* chunk corresponding to oldmem */
2914 INTERNAL_SIZE_T oldsize; /* its size */
2915
2916 mchunkptr newp; /* chunk to return */
f65fd747 2917
10dc2a90
UD
2918#if defined(_LIBC) || defined(MALLOC_HOOKS)
2919 if (__realloc_hook != NULL) {
2920 Void_t* result;
f65fd747 2921
10dc2a90
UD
2922 result = (*__realloc_hook)(oldmem, bytes);
2923 return result;
2924 }
2925#endif
f65fd747
UD
2926
2927#ifdef REALLOC_ZERO_BYTES_FREES
2928 if (bytes == 0) { fREe(oldmem); return 0; }
2929#endif
2930
f65fd747
UD
2931 /* realloc of null is supposed to be same as malloc */
2932 if (oldmem == 0) return mALLOc(bytes);
2933
10dc2a90
UD
2934 oldp = mem2chunk(oldmem);
2935 oldsize = chunksize(oldp);
f65fd747
UD
2936
2937 nb = request2size(bytes);
2938
2939#if HAVE_MMAP
2940 if (chunk_is_mmapped(oldp))
2941 {
10dc2a90
UD
2942 Void_t* newmem;
2943
f65fd747
UD
2944#if HAVE_MREMAP
2945 newp = mremap_chunk(oldp, nb);
2946 if(newp) return chunk2mem(newp);
2947#endif
2948 /* Note the extra SIZE_SZ overhead. */
2949 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2950 /* Must alloc, copy, free. */
2951 newmem = mALLOc(bytes);
2952 if (newmem == 0) return 0; /* propagate failure */
2953 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2954 munmap_chunk(oldp);
2955 return newmem;
2956 }
2957#endif
2958
2959 ar_ptr = arena_for_ptr(oldp);
8a4b65b4
UD
2960#if THREAD_STATS
2961 if(!mutex_trylock(&ar_ptr->mutex))
2962 ++(ar_ptr->stat_lock_direct);
2963 else {
2964 (void)mutex_lock(&ar_ptr->mutex);
2965 ++(ar_ptr->stat_lock_wait);
2966 }
2967#else
f65fd747 2968 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4
UD
2969#endif
2970
1228ed5c 2971#ifndef NO_THREADS
f65fd747
UD
2972 /* As in malloc(), remember this arena for the next allocation. */
2973 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
1228ed5c 2974#endif
f65fd747 2975
10dc2a90
UD
2976 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
2977
2978 (void)mutex_unlock(&ar_ptr->mutex);
2979 return newp ? chunk2mem(newp) : NULL;
2980}
2981
2982static mchunkptr
2983#if __STD_C
2984chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
7e3be507 2985 INTERNAL_SIZE_T nb)
10dc2a90
UD
2986#else
2987chunk_realloc(ar_ptr, oldp, oldsize, nb)
2988arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
2989#endif
2990{
2991 mchunkptr newp = oldp; /* chunk to return */
2992 INTERNAL_SIZE_T newsize = oldsize; /* its size */
2993
2994 mchunkptr next; /* next contiguous chunk after oldp */
2995 INTERNAL_SIZE_T nextsize; /* its size */
2996
2997 mchunkptr prev; /* previous contiguous chunk before oldp */
2998 INTERNAL_SIZE_T prevsize; /* its size */
2999
3000 mchunkptr remainder; /* holds split off extra space from newp */
3001 INTERNAL_SIZE_T remainder_size; /* its size */
3002
3003 mchunkptr bck; /* misc temp for linking */
3004 mchunkptr fwd; /* misc temp for linking */
3005
f65fd747
UD
3006 check_inuse_chunk(ar_ptr, oldp);
3007
3008 if ((long)(oldsize) < (long)(nb))
3009 {
3010
3011 /* Try expanding forward */
3012
3013 next = chunk_at_offset(oldp, oldsize);
3014 if (next == top(ar_ptr) || !inuse(next))
3015 {
3016 nextsize = chunksize(next);
3017
3018 /* Forward into top only if a remainder */
3019 if (next == top(ar_ptr))
3020 {
3021 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3022 {
3023 newsize += nextsize;
3024 top(ar_ptr) = chunk_at_offset(oldp, nb);
3025 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3026 set_head_size(oldp, nb);
10dc2a90 3027 return oldp;
f65fd747
UD
3028 }
3029 }
3030
3031 /* Forward into next chunk */
3032 else if (((long)(nextsize + newsize) >= (long)(nb)))
3033 {
3034 unlink(next, bck, fwd);
3035 newsize += nextsize;
3036 goto split;
3037 }
3038 }
3039 else
3040 {
3041 next = 0;
3042 nextsize = 0;
3043 }
3044
3045 /* Try shifting backwards. */
3046
3047 if (!prev_inuse(oldp))
3048 {
3049 prev = prev_chunk(oldp);
3050 prevsize = chunksize(prev);
3051
3052 /* try forward + backward first to save a later consolidation */
3053
3054 if (next != 0)
3055 {
3056 /* into top */
3057 if (next == top(ar_ptr))
3058 {
3059 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3060 {
3061 unlink(prev, bck, fwd);
3062 newp = prev;
3063 newsize += prevsize + nextsize;
10dc2a90 3064 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3065 top(ar_ptr) = chunk_at_offset(newp, nb);
3066 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3067 set_head_size(newp, nb);
10dc2a90 3068 return newp;
f65fd747
UD
3069 }
3070 }
3071
3072 /* into next chunk */
3073 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3074 {
3075 unlink(next, bck, fwd);
3076 unlink(prev, bck, fwd);
3077 newp = prev;
3078 newsize += nextsize + prevsize;
10dc2a90 3079 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3080 goto split;
3081 }
3082 }
3083
3084 /* backward only */
3085 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3086 {
3087 unlink(prev, bck, fwd);
3088 newp = prev;
3089 newsize += prevsize;
10dc2a90 3090 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3091 goto split;
3092 }
3093 }
3094
3095 /* Must allocate */
3096
3097 newp = chunk_alloc (ar_ptr, nb);
3098
7799b7b3
UD
3099 if (newp == 0) {
3100 /* Maybe the failure is due to running out of mmapped areas. */
3101 if (ar_ptr != &main_arena) {
3102 (void)mutex_lock(&main_arena.mutex);
3103 newp = chunk_alloc(&main_arena, nb);
3104 (void)mutex_unlock(&main_arena.mutex);
3105 }
3106 if (newp == 0) /* propagate failure */
3107 return 0;
3108 }
f65fd747
UD
3109
3110 /* Avoid copy if newp is next chunk after oldp. */
3111 /* (This can only happen when new chunk is sbrk'ed.) */
3112
3113 if ( newp == next_chunk(oldp))
3114 {
3115 newsize += chunksize(newp);
3116 newp = oldp;
3117 goto split;
3118 }
3119
3120 /* Otherwise copy, free, and exit */
10dc2a90 3121 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747 3122 chunk_free(ar_ptr, oldp);
10dc2a90 3123 return newp;
f65fd747
UD
3124 }
3125
3126
3127 split: /* split off extra room in old or expanded chunk */
3128
3129 if (newsize - nb >= MINSIZE) /* split off remainder */
3130 {
3131 remainder = chunk_at_offset(newp, nb);
3132 remainder_size = newsize - nb;
3133 set_head_size(newp, nb);
3134 set_head(remainder, remainder_size | PREV_INUSE);
3135 set_inuse_bit_at_offset(remainder, remainder_size);
3136 chunk_free(ar_ptr, remainder);
3137 }
3138 else
3139 {
3140 set_head_size(newp, newsize);
3141 set_inuse_bit_at_offset(newp, newsize);
3142 }
3143
3144 check_inuse_chunk(ar_ptr, newp);
10dc2a90 3145 return newp;
f65fd747
UD
3146}
3147
3148
3149\f
3150
3151/*
3152
3153 memalign algorithm:
3154
3155 memalign requests more than enough space from malloc, finds a spot
3156 within that chunk that meets the alignment request, and then
3157 possibly frees the leading and trailing space.
3158
3159 The alignment argument must be a power of two. This property is not
3160 checked by memalign, so misuse may result in random runtime errors.
3161
3162 8-byte alignment is guaranteed by normal malloc calls, so don't
3163 bother calling memalign with an argument of 8 or less.
3164
3165 Overreliance on memalign is a sure way to fragment space.
3166
3167*/
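/* Illustrative usage sketch (not part of the allocator): requesting a
   buffer aligned to a hypothetical 64-byte cache line:

     double *v = (double *) memalign(64, 1000 * sizeof(double));
     if (v != NULL) {
       ... ((unsigned long) v % 64) == 0 here ...
       free(v);
     }

   As noted above, ordinary malloc() already returns 8-byte-aligned
   memory, so alignments of 8 or less gain nothing. */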
3168
3169
3170#if __STD_C
3171Void_t* mEMALIGn(size_t alignment, size_t bytes)
3172#else
3173Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3174#endif
3175{
3176 arena *ar_ptr;
3177 INTERNAL_SIZE_T nb; /* padded request size */
10dc2a90
UD
3178 mchunkptr p;
3179
3180#if defined(_LIBC) || defined(MALLOC_HOOKS)
3181 if (__memalign_hook != NULL) {
3182 Void_t* result;
3183
3184 result = (*__memalign_hook)(alignment, bytes);
3185 return result;
3186 }
3187#endif
f65fd747
UD
3188
 3189 /* If we need less alignment than we give anyway, just relay to malloc */
3190
3191 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3192
3193 /* Otherwise, ensure that it is at least a minimum chunk size */
3194
3195 if (alignment < MINSIZE) alignment = MINSIZE;
3196
f65fd747
UD
3197 nb = request2size(bytes);
3198 arena_get(ar_ptr, nb + alignment + MINSIZE);
3199 if(!ar_ptr)
3200 return 0;
10dc2a90
UD
3201 p = chunk_align(ar_ptr, nb, alignment);
3202 (void)mutex_unlock(&ar_ptr->mutex);
7799b7b3
UD
3203 if(!p) {
3204 /* Maybe the failure is due to running out of mmapped areas. */
3205 if(ar_ptr != &main_arena) {
3206 (void)mutex_lock(&main_arena.mutex);
3207 p = chunk_align(&main_arena, nb, alignment);
3208 (void)mutex_unlock(&main_arena.mutex);
3209 }
3210 if(!p) return 0;
3211 }
3212 return chunk2mem(p);
10dc2a90 3213}
f65fd747 3214
10dc2a90
UD
3215static mchunkptr
3216#if __STD_C
3217chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3218#else
3219chunk_align(ar_ptr, nb, alignment)
3220arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3221#endif
3222{
3223 char* m; /* memory returned by malloc call */
3224 mchunkptr p; /* corresponding chunk */
3225 char* brk; /* alignment point within p */
3226 mchunkptr newp; /* chunk to return */
3227 INTERNAL_SIZE_T newsize; /* its size */
 3228 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3229 mchunkptr remainder; /* spare room at end to split off */
3230 long remainder_size; /* its size */
3231
3232 /* Call chunk_alloc with worst case padding to hit alignment. */
3233 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3234 if (p == 0)
f65fd747 3235 return 0; /* propagate failure */
f65fd747
UD
3236
3237 m = chunk2mem(p);
3238
3239 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3240 {
3241#if HAVE_MMAP
3242 if(chunk_is_mmapped(p)) {
10dc2a90 3243 return p; /* nothing more to do */
f65fd747
UD
3244 }
3245#endif
3246 }
3247 else /* misaligned */
3248 {
3249 /*
3250 Find an aligned spot inside chunk.
3251 Since we need to give back leading space in a chunk of at
3252 least MINSIZE, if the first calculation places us at
3253 a spot with less than MINSIZE leader, we can move to the
3254 next aligned spot -- we've allocated enough total room so that
3255 this is always possible.
3256 */
3257
3258 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
10dc2a90 3259 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
f65fd747
UD
3260
3261 newp = (mchunkptr)brk;
3262 leadsize = brk - (char*)(p);
3263 newsize = chunksize(p) - leadsize;
3264
3265#if HAVE_MMAP
3266 if(chunk_is_mmapped(p))
3267 {
3268 newp->prev_size = p->prev_size + leadsize;
3269 set_head(newp, newsize|IS_MMAPPED);
10dc2a90 3270 return newp;
f65fd747
UD
3271 }
3272#endif
3273
3274 /* give back leader, use the rest */
3275
3276 set_head(newp, newsize | PREV_INUSE);
3277 set_inuse_bit_at_offset(newp, newsize);
3278 set_head_size(p, leadsize);
3279 chunk_free(ar_ptr, p);
3280 p = newp;
3281
3282 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3283 }
3284
3285 /* Also give back spare room at the end */
3286
3287 remainder_size = chunksize(p) - nb;
3288
3289 if (remainder_size >= (long)MINSIZE)
3290 {
3291 remainder = chunk_at_offset(p, nb);
3292 set_head(remainder, remainder_size | PREV_INUSE);
3293 set_head_size(p, nb);
3294 chunk_free(ar_ptr, remainder);
3295 }
3296
3297 check_inuse_chunk(ar_ptr, p);
10dc2a90 3298 return p;
f65fd747
UD
3299}
3300
3301\f
3302
3303
3304/*
3305 valloc just invokes memalign with alignment argument equal
3306 to the page size of the system (or as near to this as can
3307 be figured out from all the includes/defines above.)
3308*/
3309
3310#if __STD_C
3311Void_t* vALLOc(size_t bytes)
3312#else
3313Void_t* vALLOc(bytes) size_t bytes;
3314#endif
3315{
3316 return mEMALIGn (malloc_getpagesize, bytes);
3317}
3318
3319/*
 3320 pvalloc just invokes valloc for the nearest multiple of the page
 3321 size that will accommodate the request
3322*/
3323
3324
3325#if __STD_C
3326Void_t* pvALLOc(size_t bytes)
3327#else
3328Void_t* pvALLOc(bytes) size_t bytes;
3329#endif
3330{
3331 size_t pagesize = malloc_getpagesize;
3332 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3333}
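/* Illustrative example (assuming a 4096-byte page size): a request for
   5000 bytes is rounded up as (5000 + 4095) & ~4095 == 8192, so the
   caller receives two whole pages starting on a page boundary. */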
3334
3335/*
3336
10dc2a90 3337 calloc calls chunk_alloc, then zeroes out the allocated chunk.
f65fd747
UD
3338
3339*/
3340
3341#if __STD_C
3342Void_t* cALLOc(size_t n, size_t elem_size)
3343#else
3344Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3345#endif
3346{
3347 arena *ar_ptr;
3348 mchunkptr p, oldtop;
10dc2a90 3349 INTERNAL_SIZE_T sz, csz, oldtopsize;
f65fd747
UD
3350 Void_t* mem;
3351
10dc2a90
UD
3352#if defined(_LIBC) || defined(MALLOC_HOOKS)
3353 if (__malloc_hook != NULL) {
3354 sz = n * elem_size;
3355 mem = (*__malloc_hook)(sz);
831372e7
UD
3356 if(mem == 0)
3357 return 0;
10dc2a90 3358#ifdef HAVE_MEMCPY
c131718c 3359 return memset(mem, 0, sz);
10dc2a90 3360#else
831372e7 3361 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
10dc2a90 3362 return mem;
c131718c 3363#endif
10dc2a90
UD
3364 }
3365#endif
f65fd747 3366
10dc2a90 3367 sz = request2size(n * elem_size);
f65fd747
UD
3368 arena_get(ar_ptr, sz);
3369 if(!ar_ptr)
3370 return 0;
3371
 3372 /* check if malloc_extend_top was called, in which case we don't need to clear */
3373#if MORECORE_CLEARS
3374 oldtop = top(ar_ptr);
3375 oldtopsize = chunksize(top(ar_ptr));
3376#endif
3377 p = chunk_alloc (ar_ptr, sz);
3378
3379 /* Only clearing follows, so we can unlock early. */
3380 (void)mutex_unlock(&ar_ptr->mutex);
3381
7799b7b3
UD
3382 if (p == 0) {
3383 /* Maybe the failure is due to running out of mmapped areas. */
3384 if(ar_ptr != &main_arena) {
3385 (void)mutex_lock(&main_arena.mutex);
3386 p = chunk_alloc(&main_arena, sz);
3387 (void)mutex_unlock(&main_arena.mutex);
3388 }
3389 if (p == 0) return 0;
3390 }
3391 mem = chunk2mem(p);
f65fd747 3392
7799b7b3 3393 /* Two optional cases in which clearing is not necessary */
f65fd747
UD
3394
3395#if HAVE_MMAP
7799b7b3 3396 if (chunk_is_mmapped(p)) return mem;
f65fd747
UD
3397#endif
3398
7799b7b3 3399 csz = chunksize(p);
f65fd747
UD
3400
3401#if MORECORE_CLEARS
7799b7b3
UD
3402 if (p == oldtop && csz > oldtopsize) {
3403 /* clear only the bytes from non-freshly-sbrked memory */
3404 csz = oldtopsize;
3405 }
f65fd747
UD
3406#endif
3407
7799b7b3
UD
3408 MALLOC_ZERO(mem, csz - SIZE_SZ);
3409 return mem;
f65fd747
UD
3410}
3411
3412/*
3413
3414 cfree just calls free. It is needed/defined on some systems
3415 that pair it with calloc, presumably for odd historical reasons.
3416
3417*/
3418
3419#if !defined(_LIBC)
3420#if __STD_C
3421void cfree(Void_t *mem)
3422#else
3423void cfree(mem) Void_t *mem;
3424#endif
3425{
3426 free(mem);
3427}
3428#endif
3429
3430\f
3431
3432/*
3433
3434 Malloc_trim gives memory back to the system (via negative
3435 arguments to sbrk) if there is unused memory at the `high' end of
3436 the malloc pool. You can call this after freeing large blocks of
3437 memory to potentially reduce the system-level memory requirements
 3438 of a program. However, it cannot guarantee to reduce memory usage. Under
3439 some allocation patterns, some large free blocks of memory will be
3440 locked between two used chunks, so they cannot be given back to
3441 the system.
3442
3443 The `pad' argument to malloc_trim represents the amount of free
3444 trailing space to leave untrimmed. If this argument is zero,
3445 only the minimum amount of memory to maintain internal data
3446 structures will be left (one page or less). Non-zero arguments
3447 can be supplied to maintain enough trailing space to service
3448 future expected allocations without having to re-obtain memory
3449 from the system.
3450
3451 Malloc_trim returns 1 if it actually released any memory, else 0.
3452
3453*/
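/* Illustrative usage sketch (not part of the allocator; big_buffer is
   a caller's own pointer): after freeing a large block, a program can
   return trailing free memory to the system while keeping, say, 64 kB
   of slack for future requests:

     free(big_buffer);
     if (malloc_trim(64 * 1024))
       ... some memory was actually given back ...
*/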
3454
3455#if __STD_C
7e3be507 3456int mALLOC_TRIm(size_t pad)
f65fd747 3457#else
7e3be507 3458int mALLOC_TRIm(pad) size_t pad;
f65fd747
UD
3459#endif
3460{
3461 int res;
3462
3463 (void)mutex_lock(&main_arena.mutex);
8a4b65b4 3464 res = main_trim(pad);
f65fd747
UD
3465 (void)mutex_unlock(&main_arena.mutex);
3466 return res;
3467}
3468
8a4b65b4
UD
3469/* Trim the main arena. */
3470
f65fd747
UD
3471static int
3472#if __STD_C
8a4b65b4 3473main_trim(size_t pad)
f65fd747 3474#else
8a4b65b4 3475main_trim(pad) size_t pad;
f65fd747
UD
3476#endif
3477{
3478 mchunkptr top_chunk; /* The current top chunk */
3479 long top_size; /* Amount of top-most memory */
3480 long extra; /* Amount to release */
3481 char* current_brk; /* address returned by pre-check sbrk call */
3482 char* new_brk; /* address returned by negative sbrk call */
3483
3484 unsigned long pagesz = malloc_getpagesize;
3485
8a4b65b4 3486 top_chunk = top(&main_arena);
f65fd747
UD
3487 top_size = chunksize(top_chunk);
3488 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3489
3490 if (extra < (long)pagesz) /* Not enough memory to release */
3491 return 0;
3492
8a4b65b4
UD
3493 /* Test to make sure no one else called sbrk */
3494 current_brk = (char*)(MORECORE (0));
3495 if (current_brk != (char*)(top_chunk) + top_size)
3496 return 0; /* Apparently we don't own memory; must fail */
f65fd747 3497
8a4b65b4 3498 new_brk = (char*)(MORECORE (-extra));
f65fd747 3499
7799b7b3 3500#if defined(_LIBC) || defined(MALLOC_HOOKS)
1228ed5c
UD
3501 /* Call the `morecore' hook if necessary. */
3502 if (__after_morecore_hook)
3503 (*__after_morecore_hook) ();
7799b7b3 3504#endif
1228ed5c 3505
8a4b65b4
UD
3506 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3507 /* Try to figure out what we have */
3508 current_brk = (char*)(MORECORE (0));
3509 top_size = current_brk - (char*)top_chunk;
3510 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3511 {
3512 sbrked_mem = current_brk - sbrk_base;
3513 set_head(top_chunk, top_size | PREV_INUSE);
f65fd747 3514 }
8a4b65b4
UD
3515 check_chunk(&main_arena, top_chunk);
3516 return 0;
3517 }
3518 sbrked_mem -= extra;
3519
3520 /* Success. Adjust top accordingly. */
3521 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3522 check_chunk(&main_arena, top_chunk);
3523 return 1;
3524}
f65fd747
UD
3525
3526#ifndef NO_THREADS
8a4b65b4
UD
3527
3528static int
3529#if __STD_C
3530heap_trim(heap_info *heap, size_t pad)
3531#else
3532heap_trim(heap, pad) heap_info *heap; size_t pad;
f65fd747 3533#endif
8a4b65b4
UD
3534{
3535 unsigned long pagesz = malloc_getpagesize;
3536 arena *ar_ptr = heap->ar_ptr;
3537 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3538 heap_info *prev_heap;
3539 long new_size, top_size, extra;
3540
3541 /* Can this heap go away completely ? */
3542 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3543 prev_heap = heap->prev;
3544 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3545 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3546 p = prev_chunk(p);
3547 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
10dc2a90 3548 assert(new_size>0 && new_size<(long)(2*MINSIZE));
8a4b65b4
UD
3549 if(!prev_inuse(p))
3550 new_size += p->prev_size;
3551 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3552 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3553 break;
3554 ar_ptr->size -= heap->size;
3555 delete_heap(heap);
3556 heap = prev_heap;
3557 if(!prev_inuse(p)) { /* consolidate backward */
3558 p = prev_chunk(p);
3559 unlink(p, bck, fwd);
3560 }
3561 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3562 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3563 top(ar_ptr) = top_chunk = p;
3564 set_head(top_chunk, new_size | PREV_INUSE);
3565 check_chunk(ar_ptr, top_chunk);
3566 }
3567 top_size = chunksize(top_chunk);
3568 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3569 if(extra < (long)pagesz)
3570 return 0;
3571 /* Try to shrink. */
3572 if(grow_heap(heap, -extra) != 0)
3573 return 0;
3574 ar_ptr->size -= extra;
f65fd747
UD
3575
3576 /* Success. Adjust top accordingly. */
3577 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3578 check_chunk(ar_ptr, top_chunk);
3579 return 1;
3580}
3581
8a4b65b4
UD
3582#endif
3583
f65fd747
UD
3584\f
3585
3586/*
3587 malloc_usable_size:
3588
3589 This routine tells you how many bytes you can actually use in an
3590 allocated chunk, which may be more than you requested (although
3591 often not). You can use this many bytes without worrying about
3592 overwriting other allocated objects. Not a particularly great
3593 programming practice, but still sometimes useful.
3594
3595*/
3596
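/* A small illustration (kept under `#if 0' so it is never compiled here):
   the usable size is at least the requested size, and all of it may safely
   be written.  The helper name is an arbitrary example. */
#if 0
#include <assert.h>
#include <string.h>
#include <stdlib.h>
#include <malloc.h>

static void
usable_size_demo (void)
{
  char *p = malloc (100);
  size_t n;

  if (p == 0) return;
  n = malloc_usable_size (p);
  assert (n >= 100);   /* often larger, due to rounding up to chunk sizes */
  memset (p, 0, n);    /* writing all n bytes overwrites nothing else */
  free (p);
}
#endif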
3597#if __STD_C
7e3be507 3598size_t mALLOC_USABLE_SIZe(Void_t* mem)
f65fd747 3599#else
7e3be507 3600size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
f65fd747
UD
3601#endif
3602{
3603 mchunkptr p;
3604
3605 if (mem == 0)
3606 return 0;
3607 else
3608 {
3609 p = mem2chunk(mem);
3610 if(!chunk_is_mmapped(p))
3611 {
3612 if (!inuse(p)) return 0;
3613 check_inuse_chunk(arena_for_ptr(mem), p);
3614 return chunksize(p) - SIZE_SZ;
3615 }
3616 return chunksize(p) - 2*SIZE_SZ;
3617 }
3618}
3619
3620
3621\f
3622
8a4b65b4 3623/* Utility to update mallinfo for malloc_stats() and mallinfo() */
f65fd747 3624
8a4b65b4
UD
3625static void
3626#if __STD_C
3627malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3628#else
3629malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3630#endif
f65fd747 3631{
f65fd747
UD
3632 int i, navail;
3633 mbinptr b;
3634 mchunkptr p;
3635#if MALLOC_DEBUG
3636 mchunkptr q;
3637#endif
3638 INTERNAL_SIZE_T avail;
3639
3640 (void)mutex_lock(&ar_ptr->mutex);
3641 avail = chunksize(top(ar_ptr));
3642 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3643
3644 for (i = 1; i < NAV; ++i)
3645 {
3646 b = bin_at(ar_ptr, i);
3647 for (p = last(b); p != b; p = p->bk)
3648 {
3649#if MALLOC_DEBUG
3650 check_free_chunk(ar_ptr, p);
3651 for (q = next_chunk(p);
8a4b65b4 3652 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
f65fd747
UD
3653 q = next_chunk(q))
3654 check_inuse_chunk(ar_ptr, q);
3655#endif
3656 avail += chunksize(p);
3657 navail++;
3658 }
3659 }
3660
8a4b65b4
UD
3661 mi->arena = ar_ptr->size;
3662 mi->ordblks = navail;
3663 mi->uordblks = ar_ptr->size - avail;
3664 mi->fordblks = avail;
3665 mi->hblks = n_mmaps;
3666 mi->hblkhd = mmapped_mem;
3667 mi->keepcost = chunksize(top(ar_ptr));
f65fd747
UD
3668
3669 (void)mutex_unlock(&ar_ptr->mutex);
3670}
3671
8a4b65b4
UD
3672#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3673
3674/* Print the complete contents of a single heap to stderr. */
3675
3676static void
3677#if __STD_C
3678dump_heap(heap_info *heap)
3679#else
3680dump_heap(heap) heap_info *heap;
3681#endif
3682{
3683 char *ptr;
3684 mchunkptr p;
3685
3686 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
3687 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
3688 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
3689 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
3690 ~MALLOC_ALIGN_MASK);
3691 for(;;) {
3692 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
3693 if(p == top(heap->ar_ptr)) {
3694 fprintf(stderr, " (top)\n");
3695 break;
3696 } else if(p->size == (0|PREV_INUSE)) {
3697 fprintf(stderr, " (fence)\n");
3698 break;
3699 }
3700 fprintf(stderr, "\n");
3701 p = next_chunk(p);
3702 }
3703}
3704
3705#endif
3706
f65fd747
UD
3707\f
3708
3709/*
3710
3711 malloc_stats:
3712
6d52618b 3713 For all arenas separately and in total, prints on stderr the
8a4b65b4 3714 amount of space obtained from the system, and the current number
f65fd747
UD
3715 of bytes allocated via malloc (or realloc, etc) but not yet
3716 freed. (Note that this is the number of bytes allocated, not the
3717 number requested. It will be larger than the number requested
8a4b65b4
UD
3718 because of alignment and bookkeeping overhead.) When not compiled
3719 for multiple threads, the maximum amount of allocated memory
3720 (which may be more than current if malloc_trim and/or munmap got
3721 called) is also reported. When using mmap(), prints the maximum
3722 number of simultaneous mmap regions used, too.
f65fd747
UD
3723
3724*/
3725
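/* Usage sketch (kept under `#if 0'): a convenient pattern is to dump the
   per-arena and total statistics once at program exit. */
#if 0
#include <stdlib.h>
#include <malloc.h>

int
main (void)
{
  atexit (malloc_stats);     /* print the report on stderr at exit */
  /* ... allocations ... */
  return 0;
}
#endif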
7e3be507 3726void mALLOC_STATs()
f65fd747 3727{
8a4b65b4
UD
3728 int i;
3729 arena *ar_ptr;
3730 struct mallinfo mi;
3731 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
3732#if THREAD_STATS
3733 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
3734#endif
3735
7e3be507 3736 for(i=0, ar_ptr = &main_arena;; i++) {
8a4b65b4
UD
3737 malloc_update_mallinfo(ar_ptr, &mi);
3738 fprintf(stderr, "Arena %d:\n", i);
3739 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
3740 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
3741 system_b += mi.arena;
3742 in_use_b += mi.uordblks;
3743#if THREAD_STATS
3744 stat_lock_direct += ar_ptr->stat_lock_direct;
3745 stat_lock_loop += ar_ptr->stat_lock_loop;
3746 stat_lock_wait += ar_ptr->stat_lock_wait;
3747#endif
3748#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3749 if(ar_ptr != &main_arena) {
c131718c 3750 heap_info *heap;
7e3be507 3751 (void)mutex_lock(&ar_ptr->mutex);
c131718c 3752 heap = heap_for_ptr(top(ar_ptr));
8a4b65b4 3753 while(heap) { dump_heap(heap); heap = heap->prev; }
7e3be507 3754 (void)mutex_unlock(&ar_ptr->mutex);
8a4b65b4
UD
3755 }
3756#endif
7e3be507
UD
3757 ar_ptr = ar_ptr->next;
3758 if(ar_ptr == &main_arena) break;
8a4b65b4 3759 }
7799b7b3 3760#if HAVE_MMAP
8a4b65b4 3761 fprintf(stderr, "Total (incl. mmap):\n");
7799b7b3
UD
3762#else
3763 fprintf(stderr, "Total:\n");
3764#endif
8a4b65b4
UD
3765 fprintf(stderr, "system bytes = %10u\n", system_b);
3766 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
3767#ifdef NO_THREADS
3768 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
3769#endif
f65fd747 3770#if HAVE_MMAP
8a4b65b4 3771 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
7799b7b3 3772 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
f65fd747
UD
3773#endif
3774#if THREAD_STATS
8a4b65b4 3775 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
f65fd747
UD
3776 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
3777 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
8a4b65b4
UD
3778 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
3779 fprintf(stderr, "locked total = %10ld\n",
3780 stat_lock_direct + stat_lock_loop + stat_lock_wait);
f65fd747
UD
3781#endif
3782}
3783
3784/*
3785 mallinfo returns an up-to-date copy of the current mallinfo structure.
8a4b65b4 3786 The information reported is for the arena last used by the thread.
f65fd747
UD
3787*/
3788
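/* Usage sketch (kept under `#if 0'): mallinfo() returns the structure by
   value; the fields printed below are the ones filled in by
   malloc_update_mallinfo() above.  The helper name is an arbitrary example. */
#if 0
#include <stdio.h>
#include <malloc.h>

static void
report_arena (void)
{
  struct mallinfo mi = mallinfo ();

  printf ("bytes from system (arena)    = %d\n", mi.arena);
  printf ("bytes in use      (uordblks) = %d\n", mi.uordblks);
  printf ("bytes free        (fordblks) = %d\n", mi.fordblks);
  printf ("mmapped bytes     (hblkhd)   = %d\n", mi.hblkhd);
}
#endif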
3789struct mallinfo mALLINFo()
3790{
8a4b65b4
UD
3791 struct mallinfo mi;
3792 Void_t *vptr = NULL;
3793
1228ed5c 3794#ifndef NO_THREADS
8a4b65b4 3795 tsd_getspecific(arena_key, vptr);
1228ed5c 3796#endif
8a4b65b4
UD
3797 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
3798 return mi;
f65fd747
UD
3799}
3800
3801
3802\f
3803
3804/*
3805 mallopt:
3806
3807 mallopt is the general SVID/XPG interface to tunable parameters.
3808 The format is to provide a (parameter-number, parameter-value) pair.
3809 mallopt then sets the corresponding parameter to the argument
3810 value if it can (i.e., so long as the value is meaningful),
3811 and returns 1 if successful else 0.
3812
3813 See descriptions of tunable parameters above.
3814
3815*/
3816
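/* Usage sketch (kept under `#if 0'): the parameter numbers are the M_*
   constants; the values below are arbitrary examples. */
#if 0
#include <malloc.h>

static void
tune_malloc (void)
{
  mallopt (M_TRIM_THRESHOLD, 256 * 1024); /* trim when >256K is free on top */
  mallopt (M_TOP_PAD,        64 * 1024);  /* keep 64K of slack when growing */
  mallopt (M_MMAP_THRESHOLD, 512 * 1024); /* mmap() requests of 512K and up */
  mallopt (M_MMAP_MAX,       0);          /* 0 disables mmap() use entirely */
}
#endif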
3817#if __STD_C
3818int mALLOPt(int param_number, int value)
3819#else
3820int mALLOPt(param_number, value) int param_number; int value;
3821#endif
3822{
3823 switch(param_number)
3824 {
3825 case M_TRIM_THRESHOLD:
3826 trim_threshold = value; return 1;
3827 case M_TOP_PAD:
3828 top_pad = value; return 1;
3829 case M_MMAP_THRESHOLD:
3830#ifndef NO_THREADS
3831 /* Forbid setting the threshold too high. */
3832 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
3833#endif
3834 mmap_threshold = value; return 1;
3835 case M_MMAP_MAX:
3836#if HAVE_MMAP
3837 n_mmaps_max = value; return 1;
3838#else
3839 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3840#endif
10dc2a90
UD
3841 case M_CHECK_ACTION:
3842 check_action = value; return 1;
f65fd747
UD
3843
3844 default:
3845 return 0;
3846 }
3847}
10dc2a90 3848
10dc2a90
UD
3849\f
3850
2f6d1f1b
UD
3851/* Get/set state: malloc_get_state() records the current state of all
3852 malloc variables (_except_ for the actual heap contents and `hook'
3853 function pointers) in a system dependent, opaque data structure.
3854 This data structure is dynamically allocated and can be free()d
3855 after use. malloc_set_state() restores the state of all malloc
3856 variables to the previously obtained state. This is especially
3857 useful when using this malloc as part of a shared library, and when
3858 the heap contents are saved/restored via some other method. The
3859 primary example for this is GNU Emacs with its `dumping' procedure.
3860 `Hook' function pointers are never saved or restored by these
3861 functions. */
3862
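/* Usage sketch (kept under `#if 0'): record the malloc state before an
   application-specific heap dump and restore it when the dumped image is
   reloaded, as in the Emacs example above.  The function names here are
   arbitrary; only malloc_get_state/malloc_set_state come from this file. */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void *saved_state;

static void
before_dump (void)
{
  saved_state = malloc_get_state ();  /* malloc()ed blob; free() after use */
}

static void
after_reload (void)
{
  if (malloc_set_state (saved_state) != 0)
    abort ();                         /* bad magic or incompatible version */
  free (saved_state);
}
#endif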
3863#define MALLOC_STATE_MAGIC 0x444c4541l
3864#define MALLOC_STATE_VERSION (0*0x100l + 0l) /* major*0x100 + minor */
3865
3866struct malloc_state {
3867 long magic;
3868 long version;
3869 mbinptr av[NAV * 2 + 2];
3870 char* sbrk_base;
3871 int sbrked_mem_bytes;
3872 unsigned long trim_threshold;
3873 unsigned long top_pad;
3874 unsigned int n_mmaps_max;
3875 unsigned long mmap_threshold;
3876 int check_action;
3877 unsigned long max_sbrked_mem;
3878 unsigned long max_total_mem;
3879 unsigned int n_mmaps;
3880 unsigned int max_n_mmaps;
3881 unsigned long mmapped_mem;
3882 unsigned long max_mmapped_mem;
3883};
3884
3885Void_t*
3886mALLOC_GET_STATe()
3887{
3888 mchunkptr victim;
3889 struct malloc_state* ms;
3890 int i;
3891 mbinptr b;
3892
3893 ptmalloc_init();
3894 (void)mutex_lock(&main_arena.mutex);
3895 victim = chunk_alloc(&main_arena, request2size(sizeof(*ms)));
3896 if(!victim) {
3897 (void)mutex_unlock(&main_arena.mutex);
3898 return 0;
3899 }
3900 ms = (struct malloc_state*)chunk2mem(victim);
3901 ms->magic = MALLOC_STATE_MAGIC;
3902 ms->version = MALLOC_STATE_VERSION;
3903 ms->av[0] = main_arena.av[0];
3904 ms->av[1] = main_arena.av[1];
3905 for(i=0; i<NAV; i++) {
3906 b = bin_at(&main_arena, i);
3907 if(first(b) == b)
3908 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
3909 else {
3910 ms->av[2*i+2] = first(b);
3911 ms->av[2*i+3] = last(b);
3912 }
3913 }
3914 ms->sbrk_base = sbrk_base;
3915 ms->sbrked_mem_bytes = sbrked_mem;
3916 ms->trim_threshold = trim_threshold;
3917 ms->top_pad = top_pad;
3918 ms->n_mmaps_max = n_mmaps_max;
3919 ms->mmap_threshold = mmap_threshold;
3920 ms->check_action = check_action;
3921 ms->max_sbrked_mem = max_sbrked_mem;
3922#ifdef NO_THREADS
3923 ms->max_total_mem = max_total_mem;
3924#else
3925 ms->max_total_mem = 0;
3926#endif
3927 ms->n_mmaps = n_mmaps;
3928 ms->max_n_mmaps = max_n_mmaps;
3929 ms->mmapped_mem = mmapped_mem;
3930 ms->max_mmapped_mem = max_mmapped_mem;
3931 (void)mutex_unlock(&main_arena.mutex);
3932 return (Void_t*)ms;
3933}
3934
3935int
3936#if __STD_C
3937mALLOC_SET_STATe(Void_t* msptr)
3938#else
3939mALLOC_SET_STATe(msptr) Void_t* msptr;
3940#endif
3941{
3942 struct malloc_state* ms = (struct malloc_state*)msptr;
3943 int i;
3944 mbinptr b;
3945
3946 ptmalloc_init();
3947 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
3948 /* Must fail if the major version is too high. */
3949 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
3950 (void)mutex_lock(&main_arena.mutex);
3951 main_arena.av[0] = ms->av[0];
3952 main_arena.av[1] = ms->av[1];
3953 for(i=0; i<NAV; i++) {
3954 b = bin_at(&main_arena, i);
3955 if(ms->av[2*i+2] == 0)
3956 first(b) = last(b) = b;
3957 else {
3958 first(b) = ms->av[2*i+2];
3959 last(b) = ms->av[2*i+3];
3960 if(i > 0) {
3961 /* Make sure the links to the `av'-bins in the heap are correct. */
3962 first(b)->bk = b;
3963 last(b)->fd = b;
3964 }
3965 }
3966 }
3967 sbrk_base = ms->sbrk_base;
3968 sbrked_mem = ms->sbrked_mem_bytes;
3969 trim_threshold = ms->trim_threshold;
3970 top_pad = ms->top_pad;
3971 n_mmaps_max = ms->n_mmaps_max;
3972 mmap_threshold = ms->mmap_threshold;
3973 check_action = ms->check_action;
3974 max_sbrked_mem = ms->max_sbrked_mem;
3975#ifdef NO_THREADS
3976 max_total_mem = ms->max_total_mem;
3977#endif
3978 n_mmaps = ms->n_mmaps;
3979 max_n_mmaps = ms->max_n_mmaps;
3980 mmapped_mem = ms->mmapped_mem;
3981 max_mmapped_mem = ms->max_mmapped_mem;
3982 /* add version-dependent code here */
3983 (void)mutex_unlock(&main_arena.mutex);
3984 return 0;
3985}
3986
3987\f
3988
10dc2a90
UD
3989#if defined(_LIBC) || defined(MALLOC_HOOKS)
3990
3991/* A simple, standard set of debugging hooks. Overhead is `only' one
3992 byte per chunk; still this will catch most cases of double frees or
3993 overruns. */
3994
c0e45674 3995#define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
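/* The checking allocation routines below request one extra byte and store
   this address-derived magic value in the last usable byte of the chunk.
   mem2chunk_check() verifies the byte and then flips it with XOR 0xFF, so
   an overrun of the user data or a second free() of the same pointer is
   detected. */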
f65fd747 3996
10dc2a90
UD
3997/* Convert a pointer to be free()d or realloc()ed to a valid chunk
3998 pointer. If the provided pointer is not valid, return NULL. The
3999 goal here is to avoid crashes, unlike in the MALLOC_DEBUG code. */
4000
4001static mchunkptr
4002#if __STD_C
4003mem2chunk_check(Void_t* mem)
4004#else
4005mem2chunk_check(mem) Void_t* mem;
f65fd747 4006#endif
10dc2a90
UD
4007{
4008 mchunkptr p;
4009 INTERNAL_SIZE_T sz;
4010
4011 p = mem2chunk(mem);
4012 if(!aligned_OK(p)) return NULL;
4013 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
7e3be507 4014 /* Must be a chunk in conventional heap memory. */
10dc2a90
UD
4015 if(chunk_is_mmapped(p) ||
4016 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
7e3be507
UD
4017 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4018 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4019 (long)prev_chunk(p)<(long)sbrk_base ||
4020 next_chunk(prev_chunk(p))!=p) ))
4021 return NULL;
4022 if(*((unsigned char*)p + sz + (SIZE_SZ-1)) != MAGICBYTE(p))
4023 return NULL;
4024 *((unsigned char*)p + sz + (SIZE_SZ-1)) ^= 0xFF;
10dc2a90
UD
4025 } else {
4026 unsigned long offset, page_mask = malloc_getpagesize-1;
4027
7e3be507 4028 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
10dc2a90
UD
4029 alignment relative to the beginning of a page. Check this
4030 first. */
4031 offset = (unsigned long)mem & page_mask;
4032 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
7e3be507
UD
4033 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4034 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4035 offset<0x2000) ||
10dc2a90
UD
4036 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4037 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4038 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4039 return NULL;
7e3be507
UD
4040 if(*((unsigned char*)p + sz - 1) != MAGICBYTE(p))
4041 return NULL;
4042 *((unsigned char*)p + sz - 1) ^= 0xFF;
10dc2a90
UD
4043 }
4044 return p;
4045}
4046
4047static Void_t*
4048#if __STD_C
4049malloc_check(size_t sz)
4050#else
4051malloc_check(sz) size_t sz;
4052#endif
4053{
4054 mchunkptr victim;
4055 INTERNAL_SIZE_T nb = request2size(sz + 1);
4056
4057 (void)mutex_lock(&main_arena.mutex);
4058 victim = chunk_alloc(&main_arena, nb);
4059 (void)mutex_unlock(&main_arena.mutex);
4060 if(!victim) return NULL;
4061 nb = chunksize(victim);
4062 if(chunk_is_mmapped(victim))
4063 --nb;
4064 else
4065 nb += SIZE_SZ - 1;
7e3be507 4066 *((unsigned char*)victim + nb) = MAGICBYTE(victim);
10dc2a90
UD
4067 return chunk2mem(victim);
4068}
4069
4070static void
4071#if __STD_C
4072free_check(Void_t* mem)
4073#else
4074free_check(mem) Void_t* mem;
4075#endif
4076{
4077 mchunkptr p;
4078
4079 if(!mem) return;
7e3be507 4080 (void)mutex_lock(&main_arena.mutex);
10dc2a90
UD
4081 p = mem2chunk_check(mem);
4082 if(!p) {
7e3be507 4083 (void)mutex_unlock(&main_arena.mutex);
10dc2a90
UD
4084 switch(check_action) {
4085 case 1:
4086 fprintf(stderr, "free(): invalid pointer %lx!\n", (long)(mem));
4087 break;
4088 case 2:
4089 abort();
4090 }
4091 return;
4092 }
4093#if HAVE_MMAP
4094 if (chunk_is_mmapped(p)) {
7e3be507 4095 (void)mutex_unlock(&main_arena.mutex);
10dc2a90
UD
4096 munmap_chunk(p);
4097 return;
4098 }
4099#endif
7e3be507
UD
4100#if 0 /* Erase freed memory. */
4101 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4102#endif
10dc2a90
UD
4103 chunk_free(&main_arena, p);
4104 (void)mutex_unlock(&main_arena.mutex);
4105}
4106
4107static Void_t*
4108#if __STD_C
4109realloc_check(Void_t* oldmem, size_t bytes)
4110#else
4111realloc_check(oldmem, bytes) Void_t* oldmem; size_t bytes;
4112#endif
4113{
4114 mchunkptr oldp, newp;
4115 INTERNAL_SIZE_T nb, oldsize;
4116
4117 if (oldmem == 0) return malloc_check(bytes);
7e3be507 4118 (void)mutex_lock(&main_arena.mutex);
10dc2a90
UD
4119 oldp = mem2chunk_check(oldmem);
4120 if(!oldp) {
7e3be507 4121 (void)mutex_unlock(&main_arena.mutex);
10dc2a90
UD
4122 switch(check_action) {
4123 case 1:
4124 fprintf(stderr, "realloc(): invalid pointer %lx!\n", (long)(oldmem));
4125 break;
4126 case 2:
4127 abort();
4128 }
4129 return malloc_check(bytes);
4130 }
4131 oldsize = chunksize(oldp);
4132
4133 nb = request2size(bytes+1);
4134
10dc2a90
UD
4135#if HAVE_MMAP
4136 if (chunk_is_mmapped(oldp)) {
4137#if HAVE_MREMAP
4138 newp = mremap_chunk(oldp, nb);
4139 if(!newp) {
4140#endif
4141 /* Note the extra SIZE_SZ overhead. */
4142 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4143 else {
7e3be507
UD
4144 /* Must alloc, copy, free. */
4145 newp = chunk_alloc(&main_arena, nb);
4146 if (newp) {
4147 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4148 munmap_chunk(oldp);
4149 }
10dc2a90
UD
4150 }
4151#if HAVE_MREMAP
4152 }
4153#endif
7e3be507 4154 } else {
10dc2a90
UD
4155#endif /* HAVE_MMAP */
4156 newp = chunk_realloc(&main_arena, oldp, oldsize, nb);
7e3be507
UD
4157#if 0 /* Erase freed memory. */
4158 nb = chunksize(newp);
4159 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4160 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4161 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4162 } else if(nb > oldsize+SIZE_SZ) {
4163 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4164 }
4165#endif
4166#if HAVE_MMAP
4167 }
4168#endif
10dc2a90
UD
4169 (void)mutex_unlock(&main_arena.mutex);
4170
4171 if(!newp) return NULL;
4172 nb = chunksize(newp);
4173 if(chunk_is_mmapped(newp))
4174 --nb;
4175 else
4176 nb += SIZE_SZ - 1;
7e3be507 4177 *((unsigned char*)newp + nb) = MAGICBYTE(newp);
10dc2a90
UD
4178 return chunk2mem(newp);
4179}
4180
4181static Void_t*
4182#if __STD_C
4183memalign_check(size_t alignment, size_t bytes)
4184#else
4185memalign_check(alignment, bytes) size_t alignment; size_t bytes;
4186#endif
4187{
4188 INTERNAL_SIZE_T nb;
4189 mchunkptr p;
4190
4191 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes);
4192 if (alignment < MINSIZE) alignment = MINSIZE;
4193
4194 nb = request2size(bytes+1);
4195 (void)mutex_lock(&main_arena.mutex);
4196 p = chunk_align(&main_arena, nb, alignment);
4197 (void)mutex_unlock(&main_arena.mutex);
4198 if(!p) return NULL;
4199 nb = chunksize(p);
4200 if(chunk_is_mmapped(p))
4201 --nb;
4202 else
4203 nb += SIZE_SZ - 1;
7e3be507 4204 *((unsigned char*)p + nb) = MAGICBYTE(p);
10dc2a90
UD
4205 return chunk2mem(p);
4206}
4207
7e3be507
UD
4208/* The following hooks are used when the global initialization in
4209 ptmalloc_init() hasn't completed yet. */
4210
4211static Void_t*
4212#if __STD_C
4213malloc_starter(size_t sz)
4214#else
4215malloc_starter(sz) size_t sz;
4216#endif
4217{
4218 mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
4219
4220 return victim ? chunk2mem(victim) : 0;
4221}
4222
4223static void
4224#if __STD_C
4225free_starter(Void_t* mem)
4226#else
4227free_starter(mem) Void_t* mem;
4228#endif
4229{
4230 mchunkptr p;
4231
4232 if(!mem) return;
4233 p = mem2chunk(mem);
4234#if HAVE_MMAP
4235 if (chunk_is_mmapped(p)) {
4236 munmap_chunk(p);
4237 return;
4238 }
4239#endif
4240 chunk_free(&main_arena, p);
4241}
4242
10dc2a90 4243#endif /* defined(_LIBC) || defined(MALLOC_HOOKS) */
f65fd747 4244
7e3be507
UD
4245\f
4246
4247#ifdef _LIBC
4248weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4249weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4250weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4251weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4252weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4253weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4254weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4255weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4256weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4257weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4258
4259weak_alias (__malloc_stats, malloc_stats)
4260weak_alias (__malloc_usable_size, malloc_usable_size)
4261weak_alias (__malloc_trim, malloc_trim)
2f6d1f1b
UD
4262weak_alias (__malloc_get_state, malloc_get_state)
4263weak_alias (__malloc_set_state, malloc_set_state)
7e3be507
UD
4264#endif
4265
f65fd747
UD
4266/*
4267
4268History:
4269
2f6d1f1b
UD
4270 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4271 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4272 using interface from Marcus Daniels
4273 * All parameters are now adjustable via environment variables
4274
10dc2a90
UD
4275 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4276 * Added debugging hooks
4277 * Fixed possible deadlock in realloc() when out of memory
4278 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4279
f65fd747
UD
4280 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4281 * Very minor updates from the released 2.6.4 version.
4282 * Trimmed include file down to exported data structures.
4283 * Changes from H.J. Lu for glibc-2.0.
4284
4285 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4286 * Many changes for multiple threads
4287 * Introduced arenas and heaps
4288
4289 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4290 * Added pvalloc, as recommended by H.J. Liu
4291 * Added 64bit pointer support mainly from Wolfram Gloger
4292 * Added anonymously donated WIN32 sbrk emulation
4293 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4294 * malloc_extend_top: fix mask error that caused wastage after
4295 foreign sbrks
4296 * Add linux mremap support code from HJ Liu
4297
4298 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4299 * Integrated most documentation with the code.
4300 * Add support for mmap, with help from
4301 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4302 * Use last_remainder in more cases.
4303 * Pack bins using idea from colin@nyx10.cs.du.edu
6d52618b 4304 * Use ordered bins instead of best-fit threshold
f65fd747
UD
4305 * Eliminate block-local decls to simplify tracing and debugging.
4306 * Support another case of realloc via move into top
6d52618b 4307 * Fix error occurring when initial sbrk_base not word-aligned.
f65fd747
UD
4308 * Rely on page size for units instead of SBRK_UNIT to
4309 avoid surprises about sbrk alignment conventions.
4310 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4311 (raymond@es.ele.tue.nl) for the suggestion.
4312 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4313 * More precautions for cases where other routines call sbrk,
4314 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4315 * Added macros etc., allowing use in linux libc from
4316 H.J. Lu (hjl@gnu.ai.mit.edu)
4317 * Inverted this history list
4318
4319 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4320 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4321 * Removed all preallocation code since under current scheme
4322 the work required to undo bad preallocations exceeds
4323 the work saved in good cases for most test programs.
4324 * No longer use return list or unconsolidated bins since
4325 no scheme using them consistently outperforms those that don't
4326 given above changes.
4327 * Use best fit for very large chunks to prevent some worst-cases.
4328 * Added some support for debugging
4329
4330 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4331 * Removed footers when chunks are in use. Thanks to
4332 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4333
4334 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4335 * Added malloc_trim, with help from Wolfram Gloger
4336 (wmglo@Dent.MED.Uni-Muenchen.DE).
4337
4338 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4339
4340 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4341 * realloc: try to expand in both directions
4342 * malloc: swap order of clean-bin strategy;
4343 * realloc: only conditionally expand backwards
4344 * Try not to scavenge used bins
4345 * Use bin counts as a guide to preallocation
4346 * Occasionally bin return list chunks in first scan
4347 * Add a few optimizations from colin@nyx10.cs.du.edu
4348
4349 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4350 * faster bin computation & slightly different binning
4351 * merged all consolidations to one part of malloc proper
4352 (eliminating old malloc_find_space & malloc_clean_bin)
4353 * Scan 2 returns chunks (not just 1)
4354 * Propagate failure in realloc if malloc returns 0
4355 * Add stuff to allow compilation on non-ANSI compilers
4356 from kpv@research.att.com
4357
4358 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4359 * removed potential for odd address access in prev_chunk
4360 * removed dependency on getpagesize.h
4361 * misc cosmetics and a bit more internal documentation
4362 * anticosmetics: mangled names in macros to evade debugger strangeness
4363 * tested on sparc, hp-700, dec-mips, rs6000
4364 with gcc & native cc (hp, dec only) allowing
4365 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4366
4367 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4368 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4369 structure of old version, but most details differ.)
4370
4371*/