#include <common.h>

#if 0	/* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
	   ftp://g.oswego.edu/pub/misc/malloc.c
	 Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

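  For illustration, a minimal usage sketch of the routines above
  (hypothetical sizes; not part of the original documentation):

      char* p = (char*) malloc(40);             -- at least 40 bytes
      p = (char*) realloc(p, 100);              -- grow; contents preserved
      void* a = memalign(16, 60);               -- 16-byte aligned block
      int* z = (int*) calloc(10, sizeof(int));  -- zero-filled
      free(p); free(a); free(z);                -- free(NULL) is a no-op
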
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
			  8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
			  8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
	 1. Because requests for zero bytes allocate non-zero space,
	    the worst case wastage for a request of zero bytes is 24 bytes.
	 2. For requests >= mmap_threshold that are serviced via
	    mmap(), the worst case wastage is 8 bytes plus the remainder
	    from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H        (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX            (default: undefined)
     Prefix all public routines with the string 'dl'. Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/

\f


/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#ifdef DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc.  Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
				     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
				     *mz++ = 0;                               \
	if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
				     *mz++ = 0; }}}                           \
				     *mz++ = 0;                               \
				     *mz++ = 0;                               \
				     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
	if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++; }}}                 \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif

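/*
   How the Duff's device above plays out (an illustrative trace, not part
   of the original comments): for nbytes equal to 20 INTERNAL_SIZE_T
   units, mctmp starts at 20, so mcn = (20-1)/8 = 2 and mctmp %= 8 leaves
   4; the switch jumps into the middle of the unrolled loop to do 4
   copies, then the for(;;) makes two more full passes of 8, giving
   4 + 8 + 8 = 20 copies in all with only two loop-control checks.
*/
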

/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
    to keep before releasing via malloc_trim in free().

    Automatic trimming is mainly useful in long-lived programs.
    Because trimming via sbrk can be slow on some systems, and can
    sometimes be wasteful (in cases where programs immediately
    afterward allocate more large chunks) the value should be high
    enough so that your overall system performance would improve by
    releasing.

    The trim threshold and the mmap control parameters (see below)
    can be traded off with one another. Trimming and mmapping are
    two different ways of releasing unused memory back to the
    system. Between these two, it is often possible to keep
    system-level demands of a long-lived program down to a bare
    minimum. For example, in one test suite of sessions measuring
    the XF86 X server on Linux, using a trim threshold of 128K and a
    mmap threshold of 192K led to near-minimal long term resource
    consumption.

    If you are using this malloc in a long-lived program, it should
    pay to experiment with these values.  As a rough guide, you
    might set to a value close to the average size of a process
    (program) running on your system.  Releasing this much memory
    would allow such a process to run in memory.  Generally, it's
    worth it to tune for trimming rather than memory mapping when a
    program undergoes phases where several large chunks are
    allocated and released in ways that can reuse each other's
    storage, perhaps mixed with phases where there are no such
    chunks at all.  And in well-behaved long-lived programs,
    controlling release of large blocks via trimming versus mapping
    is usually faster.

    However, in most programs, these parameters serve mainly as
    protection against the system-level effects of carrying around
    massive amounts of unneeded memory. Since frequent calls to
    sbrk, mmap, and munmap otherwise degrade performance, the default
    parameters are set to relatively high values that serve only as
    safeguards.

    The default trim value is high enough to cause trimming only in
    fairly extreme (by current memory consumption standards) cases.
    It must be greater than page size to have any useful effect.  To
    disable trimming completely, you can set to (unsigned long)(-1);


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
    retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
	a new malloc request, this much padding is added to the sbrk
	request.

      * When malloc_trim is called automatically from free(),
	it is used as the `pad' argument.

    In both cases, the actual amount of padding is rounded
    so that the end of the arena is always a system page boundary.

    The main reason for using padding is to avoid calling sbrk so
    often. Having even a small pad greatly reduces the likelihood
    that nearly every malloc request during program start-up (or
    after trimming) will invoke sbrk, which needlessly wastes
    time.

    Automatic rounding-up to page-size units is normally sufficient
    to avoid measurable overhead, so the default is 0.  However, in
    systems where sbrk is relatively slow, it can pay to increase
    this value, at the expense of carrying around more memory than
    the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    to service a request. Requests of at least this size that cannot
    be allocated using already-existing space will be serviced via mmap.
    (If enough normal freed space already exists it is used instead.)

    Using mmap segregates relatively large chunks of memory so that
    they can be individually obtained and released from the host
    system. A request serviced through mmap is never reused by any
    other request (at least not directly; the system may just so
    happen to remap successive requests to the same locations).

    Segregating space in this way has the benefit that mmapped space
    can ALWAYS be individually released back to the system, which
    helps keep the system level memory demands of a long-lived
    program low. Mapped memory can never become `locked' between
    other chunks, as can happen with normally allocated chunks, which
    means that even trimming via malloc_trim would not release them.

    However, it has the disadvantages that:

      1. The space cannot be reclaimed, consolidated, and then
	 used to service later requests, as happens with normal chunks.
      2. It can lead to more wastage because of mmap page alignment
	 requirements
      3. It causes malloc performance to be more dependent on host
	 system memory management support routines which may vary in
	 implementation quality and may impose arbitrary
	 limitations. Generally, servicing a request via normal
	 malloc steps is faster than going through a system's mmap.

    All together, these considerations should lead you to use mmap
    only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
    service using mmap. This parameter exists because:

      1. Some systems have a limited number of internal tables for
	 use by mmap.
      2. In most systems, overreliance on mmap can degrade overall
	 performance.
      3. If a program allocates many large regions, it is probably
	 better off using normal sbrk-based allocation routines that
	 can reclaim and reallocate normal heap memory. Using a
	 small value allows transition into this mode after the
	 first few allocations.

    Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
    the default value is 0, and attempts to set it to non-zero values
    in mallopt will fail.
*/


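/*
    For illustration, a hypothetical tuning sequence using the mallopt
    options above (the values are examples, not recommendations):

	mallopt(M_TRIM_THRESHOLD, 256*1024);  -- trim only past 256K
	mallopt(M_TOP_PAD,         64*1024);  -- pad each sbrk by 64K
	mallopt(M_MMAP_THRESHOLD, 192*1024);  -- mmap requests >= 192K
	mallopt(M_MMAP_MAX,        0);        -- disable mmap entirely

    Each call returns 1 on success and 0 on failure.
*/
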
/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
    Useful to quickly avoid procedure declaration conflicts and linker
    symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc          dlcalloc
#define fREe            dlfree
#define mALLOc          dlmalloc
#define mEMALIGn        dlmemalign
#define rEALLOc         dlrealloc
#define vALLOc          dlvalloc
#define pvALLOc         dlpvalloc
#define mALLINFo        dlmallinfo
#define mALLOPt         dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt
#endif /* USE_DL_PREFIX */

#endif

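/*
   The net effect of the mangling above (illustrative summary): the
   routine defined below as mALLOc() is linked as __libc_malloc,
   dlmalloc, or plain malloc, depending on which of the three
   configurations is in force; user code always calls the public name.
*/
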
/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here  ------------ */
#else  /* Moved to malloc.h */

#include <malloc.h>
#if 0
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif	/* 0 */

#endif	/* 0 */			/* Moved to malloc.h */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	assert (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
					gNextAddress - gAddressBase,
					MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}

static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	if (size >= TOP_MEMORY) return NULL;

	while ((unsigned long)start_address + size < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			/* Requested region is not available so see if the */
			/* next region is available.  Set 'start_address' */
			/* to the next region and call 'VirtualQuery()' */
			/* again. */

			start_address = (char*)info.BaseAddress + info.RegionSize;

			/* Make sure we start looking for the next region */
			/* on the *next* 64K boundary.  Otherwise, even if */
			/* the new region is free according to */
			/* 'VirtualQuery()', the subsequent call to */
			/* 'VirtualAlloc()' (which follows the call to */
			/* this routine in 'wsbrk()') will round *down* */
			/* the requested address to a 64K boundary which */
			/* we already know is an address in the */
			/* unavailable region.  Thus, the subsequent call */
			/* to 'VirtualAlloc()' will fail and bring us back */
			/* here, causing us to go into an infinite loop. */

			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;

}


void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
							MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
							MEM_RESERVE, PAGE_NOACCESS);
				/* repeat in case of race condition */
				/* The region that we found has been snagged */
				/* by another thread */
			}
			while (gAddressBase == 0);

			assert (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
						(size + gNextAddress -
						 AlignPage (gNextAddress)),
						MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
				     MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
				     MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif

\f

/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk, if allocated            | |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             User data starts here...                          .
	    .                                                               .
	    .             (malloc_usable_space() bytes)                     .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk                                     |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk                            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Forward pointer to next chunk in list             |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Back pointer to previous chunk in list            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Unused space (may be 0 bytes long)                .
	    .                                                               .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
	trailing size field since there is no
	next contiguous chunk that would have to index off it. (After
	initialization, `top' is forced to always exist.  If it would
	become less than MINSIZE bytes long, it is replenished via
	malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
	bit (IS_MMAPPED) set in their size fields.  Because they are
	never merged or traversed from any other chunk, they have no
	foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    *  Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/
\f
/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
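
/*
   For a concrete feel (an illustrative example assuming 4-byte
   INTERNAL_SIZE_T, so SIZE_SZ = 4, MALLOC_ALIGNMENT = 8, MINSIZE = 16):
   request2size(10) pads 10 by the 4 overhead bytes and rounds up to 16;
   request2size(13) yields 24; request2size(0) falls below the minimum
   and yields MINSIZE (16). The result always leaves room for the size
   field and is a multiple of MALLOC_ALIGNMENT.
*/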

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)


\f

/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
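
/*
   A sketch of how these navigation macros relate (illustrative only):
   next_chunk(p) masks off the PREV_INUSE bit to get p's true size and
   steps forward by that many bytes; prev_chunk(p) is only valid when
   PREV_INUSE is clear, because only then does p->prev_size hold the
   size of the preceding (free) chunk. So for a free chunk p,
   prev_chunk(next_chunk(p)) == p.
*/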


\f

/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
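
/*
   Note the indirection (an illustrative remark): inuse(p) does not read
   p's own header; it follows p->size to the *next* physical chunk and
   tests that chunk's PREV_INUSE bit, because each chunk's in-use status
   is recorded one chunk downstream, per the boundary-tag layout above.
*/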


\f

/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))


\f

/*
   Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (av_[2])          /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to in malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 0, 0,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};

#ifndef CONFIG_RELOC_FIXUP_WORKS
void malloc_bin_reloc (void)
{
	unsigned long *p = (unsigned long *)(&av_[2]);
	int i;
	for (i=2; i<(sizeof(av_)/sizeof(mbinptr)); ++i) {
		*p++ += gd->reloc_off;
	}
}
#endif

ulong mem_malloc_start = 0;
ulong mem_malloc_end = 0;
ulong mem_malloc_brk = 0;

void *sbrk(ptrdiff_t increment)
{
	ulong old = mem_malloc_brk;
	ulong new = old + increment;

	if ((new < mem_malloc_start) || (new > mem_malloc_end))
		return NULL;

	mem_malloc_brk = new;

	return (void *)old;
}

void mem_malloc_init(ulong start, ulong size)
{
	mem_malloc_start = start;
	mem_malloc_end = start + size;
	mem_malloc_brk = start;

	memset((void *)mem_malloc_start, 0, size);
}

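/*
   A minimal sketch of how a board brings up this heap (the address and
   size here are hypothetical; real boards derive them from the
   relocation address and the configured malloc area length):

	mem_malloc_init(0x8ff00000, 1024 * 1024);  -- 1 MiB heap, zeroed
	void *p = malloc(64);                      -- now serviced via sbrk()

   sbrk() then hands out [mem_malloc_start, mem_malloc_end) in
   increments, returning NULL once a request would pass mem_malloc_end.
*/
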
/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                          \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
					  126)
/*
  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/

#define MAX_SMALLBIN         63
#define MAX_SMALLBIN_SIZE   512
#define SMALLBIN_WIDTH        8

#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)

/*
   Requests are `small' if both the corresponding and the next bin are small
*/

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
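
/*
   Worked examples (illustrative): smallbin_index(16) = 2 and
   smallbin_index(504) = 63, so each 8-byte step below 512 gets its own
   bin. For larger sizes, bin_index(512) = 56 + (512>>6) = 64 and
   bin_index(4096) = 91 + (4096>>9) = 99, matching the logarithmic
   spacing in the bin layout table above.
*/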

\f

/*
  To help compensate for the large number of bins, a one-level index
  structure is used for bin-by-bin searching.  `binblocks' is a
  one-word bitvector recording whether groups of BINBLOCKWIDTH bins
  have any (possibly) non-empty bins, so they can be skipped over
  all at once during traversals. The bits are NOT always
  cleared as soon as all bins in a block are empty, but instead only
  when all are noticed to be empty during traversal in malloc.
*/

#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
#define binblocks_w     (av_[1])

/* bin<->block macros */

#define idx2binblock(ix)      ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)     (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
#define clear_binblock(ii)    (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))

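/*
   For instance (illustrative): bin index 9 lives in block 9/4 = 2, so
   idx2binblock(9) == (1 << 2). mark_binblock(9) sets that bit whenever
   a chunk is placed in any of bins 8..11, and traversal can skip all
   four bins at once whenever the bit is clear.
*/
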

\f


/*  Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad          = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem  (current_mallinfo.arena)

/* Tracking mmaps */

#if 0
static unsigned int n_mmaps = 0;
#endif	/* 0 */
static unsigned long mmapped_mem = 0;
#if HAVE_MMAP
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
#endif

\f

/*
  Debugging support
*/

#ifdef DEBUG


/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
#if 0	/* causes warnings because assert() is off */
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
#endif	/* 0 */

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);

}

1669
1670#if __STD_C
1671static void do_check_free_chunk(mchunkptr p)
1672#else
1673static void do_check_free_chunk(p) mchunkptr p;
1674#endif
1675{
1676 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1677#if 0 /* causes warnings because assert() is off */
1678 mchunkptr next = chunk_at_offset(p, sz);
1679#endif /* 0 */
1680
1681 do_check_chunk(p);
1682
1683 /* Check whether it claims to be free ... */
1684 assert(!inuse(p));
1685
1686 /* Unless a special marker, must have OK fields */
1687 if ((long)sz >= (long)MINSIZE)
1688 {
1689 assert((sz & MALLOC_ALIGN_MASK) == 0);
1690 assert(aligned_OK(chunk2mem(p)));
1691 /* ... matching footer field */
1692 assert(next->prev_size == sz);
1693 /* ... and is fully consolidated */
1694 assert(prev_inuse(p));
1695 assert (next == top || inuse(next));
1696
1697 /* ... and has minimally sane links */
1698 assert(p->fd->bk == p);
1699 assert(p->bk->fd == p);
1700 }
1701 else /* markers are always of size SIZE_SZ */
1702 assert(sz == SIZE_SZ);
1703}
1704
1705#if __STD_C
1706static void do_check_inuse_chunk(mchunkptr p)
1707#else
1708static void do_check_inuse_chunk(p) mchunkptr p;
1709#endif
1710{
1711 mchunkptr next = next_chunk(p);
1712 do_check_chunk(p);
1713
1714 /* Check whether it claims to be in use ... */
1715 assert(inuse(p));
1716
1717 /* ... and is surrounded by OK chunks.
1718 Since more things can be checked with free chunks than inuse ones,
1719 if an inuse chunk borders them and debug is on, it's worth doing them.
1720 */
1721 if (!prev_inuse(p))
1722 {
1723 mchunkptr prv = prev_chunk(p);
1724 assert(next_chunk(prv) == p);
1725 do_check_free_chunk(prv);
1726 }
1727 if (next == top)
1728 {
1729 assert(prev_inuse(next));
1730 assert(chunksize(next) >= MINSIZE);
1731 }
1732 else if (!inuse(next))
1733 do_check_free_chunk(next);
1734
1735}
1736
1737#if __STD_C
1738static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1739#else
1740static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1741#endif
1742{
1743#if 0 /* causes warnings because assert() is off */
1744 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1745 long room = sz - s;
1746#endif /* 0 */
1747
1748 do_check_inuse_chunk(p);
1749
1750 /* Legal size ... */
1751 assert((long)sz >= (long)MINSIZE);
1752 assert((sz & MALLOC_ALIGN_MASK) == 0);
1753 assert(room >= 0);
1754 assert(room < (long)MINSIZE);
1755
1756 /* ... and alignment */
1757 assert(aligned_OK(chunk2mem(p)));
1758
1759
1760 /* ... and was allocated at front of an available chunk */
1761 assert(prev_inuse(p));
1762
1763}
1764
1765
1766#define check_free_chunk(P) do_check_free_chunk(P)
1767#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1768#define check_chunk(P) do_check_chunk(P)
1769#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1770#else
1771#define check_free_chunk(P)
1772#define check_inuse_chunk(P)
1773#define check_chunk(P)
1774#define check_malloced_chunk(P,N)
1775#endif

\f

/*
  Macro-based internal utilities
*/


/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/

/*
  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.
*/


#define frontlink(P, S, IDX, BK, FD) \
{ \
  if (S < MAX_SMALLBIN_SIZE) \
  { \
    IDX = smallbin_index(S); \
    mark_binblock(IDX); \
    BK = bin_at(IDX); \
    FD = BK->fd; \
    P->bk = BK; \
    P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
  else \
  { \
    IDX = bin_index(S); \
    BK = bin_at(IDX); \
    FD = BK->fd; \
    if (FD == BK) mark_binblock(IDX); \
    else \
    { \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
      BK = FD->bk; \
    } \
    P->bk = BK; \
    P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
}


/* take a chunk off a list */

#define unlink(P, BK, FD) \
{ \
  BK = P->bk; \
  FD = P->fd; \
  FD->bk = BK; \
  BK->fd = FD; \
} \

/* Place p as the last remainder */

#define link_last_remainder(P) \
{ \
  last_remainder->fd = last_remainder->bk = P; \
  P->fd = P->bk = last_remainder; \
}

/* Clear the last_remainder bin */

#define clear_last_remainder \
  (last_remainder->fd = last_remainder->bk = last_remainder)


\f


/* Routines dealing with mmap(). */

#if HAVE_MMAP

#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  mchunkptr p;

#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif

  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
  size = (size + SIZE_SZ + page_mask) & ~page_mask;

#ifdef MAP_ANONYMOUS
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if(fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif

  if(p == (mchunkptr)-1) return 0;

  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#if __STD_C
static void munmap_chunk(mchunkptr p)
#else
static void munmap_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T size = chunksize(p);
  int ret;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
  assert(ret == 0);
}

#if HAVE_MREMAP

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);
  char *cp;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);

  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert((p->prev_size == offset));
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* HAVE_MREMAP */

#endif /* HAVE_MMAP */


\f

/*
  Extend the top-most chunk by obtaining memory from system.
  Main interface to sbrk (but see also malloc_trim).
*/

#if __STD_C
static void malloc_extend_top(INTERNAL_SIZE_T nb)
#else
static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
#endif
{
  char*     brk;                  /* return value from sbrk */
  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
  char*     new_brk;              /* return of 2nd sbrk call */
  INTERNAL_SIZE_T top_size;       /* new size of top chunk */

  mchunkptr old_top = top;        /* Record state of old top */
  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  char*     old_end = (char*)(chunk_at_offset(old_top, old_top_size));

  /* Pad request with top_pad plus minimal overhead */

  INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
  unsigned long pagesz = malloc_getpagesize;

  /* If not the first time through, round to preserve page boundary */
  /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call.) */

  if (sbrk_base != (char*)(-1))
    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);

  brk = (char*)(MORECORE (sbrk_size));

  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
  if (brk == (char*)(MORECORE_FAILURE) ||
      (brk < old_end && old_top != initial_top))
    return;

  sbrked_mem += sbrk_size;

  if (brk == old_end) /* can just add bytes to current top */
  {
    top_size = sbrk_size + old_top_size;
    set_head(top, top_size | PREV_INUSE);
  }
  else
  {
    if (sbrk_base == (char*)(-1)) /* First time through. Record base */
      sbrk_base = brk;
    else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
      sbrked_mem += brk - (char*)old_end;

    /* Guarantee alignment of first new chunk made from this space */
    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
    if (front_misalign > 0)
    {
      correction = (MALLOC_ALIGNMENT) - front_misalign;
      brk += correction;
    }
    else
      correction = 0;

    /* Guarantee the next brk will be at a page boundary */

    correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
                   ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));

    /* Allocate correction */
    new_brk = (char*)(MORECORE (correction));
    if (new_brk == (char*)(MORECORE_FAILURE)) return;

    sbrked_mem += correction;

    top = (mchunkptr)brk;
    top_size = new_brk - brk + correction;
    set_head(top, top_size | PREV_INUSE);

    if (old_top != initial_top)
    {

      /* There must have been an intervening foreign sbrk call. */
      /* A double fencepost is necessary to prevent consolidation */

      /* If not enough space to do this, then user did something very wrong */
      if (old_top_size < MINSIZE)
      {
        set_head(top, PREV_INUSE); /* will force null return from malloc */
        return;
      }

      /* Also keep size a multiple of MALLOC_ALIGNMENT */
      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
      set_head_size(old_top, old_top_size);
      chunk_at_offset(old_top, old_top_size)->size =
        SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
        SIZE_SZ|PREV_INUSE;

      /* If possible, release the rest. */
      if (old_top_size >= MINSIZE)
        fREe(chunk2mem(old_top));
    }
  }

  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
    max_sbrked_mem = sbrked_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;

  /* We always land on a page boundary */
  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}

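/*
  A minimal sketch (disabled, illustration only) of the rounding done
  above, assuming a 4096-byte page size and the documented MINSIZE of
  16: a 5000-byte padded request plus overhead is rounded up to the
  next page multiple before sbrk is called.
*/

#if 0 /* example only -- not part of the allocator */
static void example_sbrk_rounding(void)
{
  unsigned long pagesz  = 4096;            /* assumed page size */
  unsigned long sbrk_sz = 5000 + 0 + 16;   /* nb + top_pad + MINSIZE */
  sbrk_sz = (sbrk_sz + (pagesz - 1)) & ~(pagesz - 1);
  assert(sbrk_sz == 8192);                 /* 5016 -> two whole pages */
}
#endif /* 0 */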

\f

/* Main public routines */


/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16 bytes), the smallest allocatable size.
    (All fits are considered `exact' if they are within MINSIZE bytes.)

    From there, the first of the following steps that succeeds is taken:

      1. The bin corresponding to the request size is scanned, and if
         a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
         enough. This is a form of (roving) first fit, used only in
         the absence of exact fits. Runs of consecutive requests use
         the remainder of the chunk used for the previous such request
         whenever possible. This limited use of a first-fit style
         allocation strategy tends to give contiguous chunks
         coextensive lifetimes, which improves locality and can reduce
         fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
         chunk big enough to fulfill the request, and splitting off
         any remainder. This search is strictly by best-fit; i.e.,
         the smallest (with ties going to approximately the least
         recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
         (`top') is split off. (This use of `top' is in accord with
         the best-fit search rule. In effect, `top' is treated as
         larger (and thus less well fitting) than any other available
         chunk since it can be extended to be as large as necessary
         (up to system limitations).)

      5. If the request size meets the mmap threshold and the
         system supports mmap, and there are few enough currently
         allocated mmapped regions, and a call to mmap succeeds,
         the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
         obtaining more space from the system (normally using sbrk,
         but definable to anything else via the MORECORE macro).
         Memory is gathered from the system (in system page-sized
         units) in a way that allows chunks obtained across different
         sbrk calls to be consolidated, but does not require
         contiguous memory. Thus, it should be safe to intersperse
         mallocs with other sbrk calls.


    All allocations are made from the `lowest' part of any found
    chunk. (The implementation invariant is that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use chunk,
    or the base of its memory arena.)

*/

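/*
  A minimal usage sketch (disabled, illustration only) of step 2 above:
  consecutive requests are usually carved from the same remainder, so
  the returned blocks tend to be contiguous. This is a tendency, not a
  guarantee.
*/

#if 0 /* example only -- not part of the allocator */
static void example_remainder_locality(void)
{
  char* p1 = (char*)mALLOc(100);
  char* p2 = (char*)mALLOc(100);
  if (p1 != 0 && p2 != 0)
    printf("p1=%p p2=%p\n", (void*)p1, (void*)p2); /* often adjacent */
}
#endif /* 0 */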
#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mchunkptr victim;            /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size; /* its size */
  int       idx;               /* index for bin traversal */
  mbinptr   bin;               /* associated bin */
  mchunkptr remainder;         /* remainder from a split */
  long      remainder_size;    /* its size */
  int       remainder_index;   /* its bin index */
  unsigned long block;         /* block traverser bit */
  int       startidx;          /* first bin of a traversed block */
  mchunkptr fwd;               /* misc temp for linking */
  mchunkptr bck;               /* misc temp for linking */
  mbinptr q;                   /* misc temp */

  INTERNAL_SIZE_T nb;

  if ((long)bytes < 0) return 0;

  nb = request2size(bytes);  /* padded request size */

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins. */

    q = bin_at(idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */

  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }

      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }

    ++idx;

  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0)  /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks_r)
  {

    /* Get to the first marked block */

    if ( (block & binblocks_r) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks_r) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;          /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */

        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            unlink(victim, bck, fwd);
            link_last_remainder(remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }

          else if (remainder_size >= 0) /* take */
          {
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }

        }

        bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          av_[1] = (mbinptr)(binblocks_r & ~block);
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks_r && (block != 0) )
      {
        while ((block & binblocks_r) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }


  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  {

#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
        (victim = mmap_chunk(nb)) != 0)
      return chunk2mem(victim);
#endif

    /* Try to extend */
    malloc_extend_top(nb);
    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
      return 0; /* propagate failure */
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  check_malloced_chunk(victim, nb);
  return chunk2mem(victim);

}


\f

/*

  free() algorithm:

  cases:

    1. free(0) has no effect.

    2. If the chunk was allocated via mmap, it is released via munmap().

    3. If a returned chunk borders the current high end of memory,
       it is consolidated into the top, and if the total unused
       topmost memory exceeds the trim threshold, malloc_trim is
       called.

    4. Other chunks are consolidated as they arrive, and
       placed in corresponding bins. (This includes the case of
       consolidating with the current `last_remainder').

*/

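/*
  A minimal usage sketch (disabled, illustration only) of cases 3 and 4
  above: freeing two neighbors lets the second free consolidate backward
  with the first, and into top if the pair borders it. (Case 1 makes the
  calls safe even if an allocation returned 0.)
*/

#if 0 /* example only -- not part of the allocator */
static void example_free_consolidation(void)
{
  Void_t* a = mALLOc(64);
  Void_t* b = mALLOc(64);
  fREe(a);  /* b is still in use next to it, so a is binned (case 4) */
  fREe(b);  /* b merges backward with a, and into top if adjacent (case 3) */
}
#endif /* 0 */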

#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mchunkptr p;            /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;     /* its head field */
  INTERNAL_SIZE_T sz;     /* its size */
  int       idx;          /* its bin index */
  mchunkptr next;         /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;          /* misc temp for linking */
  mchunkptr fwd;          /* misc temp for linking */
  int       islr;         /* track whether merging with last_remainder */

  if (mem == 0)           /* free(0) has no effect */
    return;

  p = mem2chunk(mem);
  hd = p->size;

#if HAVE_MMAP
  if (hd & IS_MMAPPED) /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  check_inuse_chunk(p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top) /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE)) /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -((long) prevsz));
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
      malloc_trim(top_pad);
    return;
  }

  set_head(next, nextsz); /* clear inuse bit */

  islr = 0;

  if (!(hd & PREV_INUSE)) /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -((long) prevsz));
    sz += prevsz;

    if (p->fd == last_remainder) /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(p);
    }
    else
      unlink(next, bck, fwd);
  }


  set_head(p, sz | PREV_INUSE);
  set_foot(p, sz);
  if (!islr)
    frontlink(p, sz, idx, bck, fwd);
}


\f


/*

  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied. If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken. There are several different ways that a chunk could be
    extended. All are tried:

       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space.
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space.

    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
    size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old unix realloc convention of allowing the last-free'd chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.


*/

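/*
  A minimal usage sketch (disabled, illustration only) of the contract
  described above: on failure rEALLOc returns 0 and leaves the original
  chunk intact, so the caller must keep the old pointer until the grow
  succeeds.
*/

#if 0 /* example only -- not part of the allocator */
static void example_grow_buffer(void)
{
  char* p = (char*)mALLOc(32);
  char* q;
  if (p == 0) return;
  q = (char*)rEALLOc(p, 1024); /* expands in place when a free neighbor
                                  or top allows it, otherwise copies */
  if (q == 0)
  {
    fREe(p);  /* p is still valid after a failed grow */
    return;
  }
  fREe(q);
}
#endif /* 0 */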


#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;       /* padded request size */

  mchunkptr oldp;           /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T oldsize;  /* its size */

  mchunkptr newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;  /* its size */
  Void_t*   newmem;         /* corresponding user mem */

  mchunkptr next;           /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T nextsize; /* its size */

  mchunkptr prev;           /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T prevsize; /* its size */

  mchunkptr remainder;      /* holds split off extra space from newp */
  INTERNAL_SIZE_T remainder_size; /* its size */

  mchunkptr bck;            /* misc temp for linking */
  mchunkptr fwd;            /* misc temp for linking */

#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(oldmem); return 0; }
#endif

  if ((long)bytes < 0) return 0;

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  newp    = oldp    = mem2chunk(oldmem);
  newsize = oldsize = chunksize(oldp);


  nb = request2size(bytes);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(newp) return chunk2mem(newp);
#endif
    /* Note the extra SIZE_SZ overhead. */
    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
    /* Must alloc, copy, free. */
    newmem = mALLOc(bytes);
    if (newmem == 0) return 0; /* propagate failure */
    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
    munmap_chunk(oldp);
    return newmem;
  }
#endif

  check_inuse_chunk(oldp);

  if ((long)(oldsize) < (long)(nb))
  {

    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top)
      {
        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
        {
          newsize += nextsize;
          top = chunk_at_offset(oldp, nb);
          set_head(top, (newsize - nb) | PREV_INUSE);
          set_head_size(oldp, nb);
          return chunk2mem(oldp);
        }
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)(nb)))
      {
        unlink(next, bck, fwd);
        newsize += nextsize;
        goto split;
      }
    }
    else
    {
      next = 0;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != 0)
      {
        /* into top */
        if (next == top)
        {
          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
          {
            unlink(prev, bck, fwd);
            newp = prev;
            newsize += prevsize + nextsize;
            newmem = chunk2mem(newp);
            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
            top = chunk_at_offset(newp, nb);
            set_head(top, (newsize - nb) | PREV_INUSE);
            set_head_size(newp, nb);
            return newmem;
          }
        }

        /* into next chunk */
        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
        {
          unlink(next, bck, fwd);
          unlink(prev, bck, fwd);
          newp = prev;
          newsize += nextsize + prevsize;
          newmem = chunk2mem(newp);
          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
          goto split;
        }
      }

      /* backward only */
      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
      {
        unlink(prev, bck, fwd);
        newp = prev;
        newsize += prevsize;
        newmem = chunk2mem(newp);
        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
        goto split;
      }
    }

    /* Must allocate */

    newmem = mALLOc (bytes);

    if (newmem == 0)  /* propagate failure */
      return 0;

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
    fREe(oldmem);
    return newmem;
  }


 split:  /* split off extra room in old or expanded chunk */

  if (newsize - nb >= MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    remainder_size = newsize - nb;
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    fREe(chunk2mem(remainder)); /* let free() deal with it */
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  check_inuse_chunk(newp);
  return chunk2mem(newp);
}



\f

/*

  memalign algorithm:

    memalign requests more than enough space from malloc, finds a spot
    within that chunk that meets the alignment request, and then
    possibly frees the leading and trailing space.

    The alignment argument must be a power of two. This property is not
    checked by memalign, so misuse may result in random runtime errors.

    8-byte alignment is guaranteed by normal malloc calls, so don't
    bother calling memalign with an argument of 8 or less.

    Overreliance on memalign is a sure way to fragment space.

*/

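/*
  A minimal usage sketch (disabled, illustration only): the alignment
  argument must be a power of two, and the result is released with the
  ordinary free routine.
*/

#if 0 /* example only -- not part of the allocator */
static void example_aligned_alloc(void)
{
  Void_t* p = mEMALIGn(64, 200);  /* 64 is a power of two */
  if (p != 0)
  {
    assert(((unsigned long)p % 64) == 0);
    fREe(p);
  }
}
#endif /* 0 */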

#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;       /* padded request size */
  char*     m;              /* memory returned by malloc call */
  mchunkptr p;              /* corresponding chunk */
  char*     brk;            /* alignment point within p */
  mchunkptr newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;  /* its size */
  INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
  mchunkptr remainder;      /* spare room at end to split off */
  long      remainder_size; /* its size */

  if ((long)bytes < 0) return 0;

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Call malloc with worst case padding to hit alignment. */

  nb = request2size(bytes);
  m  = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  {
#if HAVE_MMAP
    if(chunk_is_mmapped(p))
      return chunk2mem(p); /* nothing more to do */
#endif
  }
  else /* misaligned */
  {
    /*
      Find an aligned spot inside chunk.
      Since we need to give back leading space in a chunk of at
      least MINSIZE, if the first calculation places us at
      a spot with less than MINSIZE leader, we can move to the
      next aligned spot -- we've allocated enough total room so that
      this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if(chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = chunksize(p) - nb;

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    fREe(chunk2mem(remainder));
  }

  check_inuse_chunk(p);
  return chunk2mem(p);

}

\f


/*
  valloc just invokes memalign with alignment argument equal
  to the page size of the system (or as near to this as can
  be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  return mEMALIGn (malloc_getpagesize, bytes);
}

/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request
*/


#if __STD_C
Void_t* pvALLOc(size_t bytes)
#else
Void_t* pvALLOc(bytes) size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}
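
/*
  A minimal sketch (disabled, illustration only) of the difference,
  assuming a 4096-byte page: both results are page-aligned, but pvalloc
  also rounds the request itself up to a whole page.
*/

#if 0 /* example only -- not part of the allocator */
static void example_page_allocs(void)
{
  Void_t* v  = vALLOc(100);   /* aligned to 4096; usable size >= 100 */
  Void_t* pv = pvALLOc(100);  /* aligned to 4096; request rounded to 4096 */
  fREe(v);
  fREe(pv);
}
#endif /* 0 */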

/*

  calloc calls malloc, then zeroes out the allocated chunk.

*/

#if __STD_C
Void_t* cALLOc(size_t n, size_t elem_size)
#else
Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T csz;

  INTERNAL_SIZE_T sz = n * elem_size;


  /* check if expand_top called, in which case don't need to clear */
#if MORECORE_CLEARS
  mchunkptr oldtop = top;
  INTERNAL_SIZE_T oldtopsize = chunksize(top);
#endif
  Void_t* mem;

  /* Reject negative counts before allocating (doing this check after
     the malloc call, as before, would leak the allocation). */
  if ((long)n < 0) return 0;

  mem = mALLOc (sz);

  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);

    /* Two optional cases in which clearing not necessary */


#if HAVE_MMAP
    if (chunk_is_mmapped(p)) return mem;
#endif

    csz = chunksize(p);

#if MORECORE_CLEARS
    if (p == oldtop && csz > oldtopsize)
    {
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
    }
#endif

    MALLOC_ZERO(mem, csz - SIZE_SZ);
    return mem;
  }
}

/*

  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.

*/

#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
#endif

\f

/*

    Malloc_trim gives memory back to the system (via negative
    arguments to sbrk) if there is unused memory at the `high' end of
    the malloc pool. You can call this after freeing large blocks of
    memory to potentially reduce the system-level memory requirements
    of a program. However, it cannot guarantee to reduce memory. Under
    some allocation patterns, some large free blocks of memory will be
    locked between two used chunks, so they cannot be given back to
    the system.

    The `pad' argument to malloc_trim represents the amount of free
    trailing space to leave untrimmed. If this argument is zero,
    only the minimum amount of memory to maintain internal data
    structures will be left (one page or less). Non-zero arguments
    can be supplied to maintain enough trailing space to service
    future expected allocations without having to re-obtain memory
    from the system.

    Malloc_trim returns 1 if it actually released any memory, else 0.

*/

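/*
  A minimal usage sketch (disabled, illustration only): after a burst of
  large frees, ask the allocator to return all but roughly one page of
  trailing free memory to the system.
*/

#if 0 /* example only -- not part of the allocator */
static void example_trim(void)
{
  int released = malloc_trim(4096); /* keep ~4K of headroom untrimmed */
  printf("malloc_trim %s memory\n", released ? "released" : "kept all");
}
#endif /* 0 */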
#if __STD_C
int malloc_trim(size_t pad)
#else
int malloc_trim(pad) size_t pad;
#endif
{
  long  top_size;    /* Amount of top-most memory */
  long  extra;       /* Amount to release */
  char* current_brk; /* address returned by pre-check sbrk call */
  char* new_brk;     /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  top_size = chunksize(top);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz)  /* Not enough memory to release */
    return 0;

  else
  {
    /* Test to make sure no one else called sbrk */
    current_brk = (char*)(MORECORE (0));
    if (current_brk != (char*)(top) + top_size)
      return 0; /* Apparently we don't own memory; must fail */

    else
    {
      new_brk = (char*)(MORECORE (-extra));

      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
      {
        /* Try to figure out what we have */
        current_brk = (char*)(MORECORE (0));
        top_size = current_brk - (char*)top;
        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
        {
          sbrked_mem = current_brk - sbrk_base;
          set_head(top, top_size | PREV_INUSE);
        }
        check_chunk(top);
        return 0;
      }

      else
      {
        /* Success. Adjust top accordingly. */
        set_head(top, (top_size - extra) | PREV_INUSE);
        sbrked_mem -= extra;
        check_chunk(top);
        return 1;
      }
    }
  }
}

\f

/*
  malloc_usable_size:

    This routine tells you how many bytes you can actually use in an
    allocated chunk, which may be more than you requested (although
    often not). You can use this many bytes without worrying about
    overwriting other allocated objects. Not a particularly great
    programming practice, but still sometimes useful.

*/

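/*
  A minimal usage sketch (disabled, illustration only): the usable size
  may legitimately exceed the requested size because of padding and
  alignment, but never falls short of it.
*/

#if 0 /* example only -- not part of the allocator */
static void example_usable_size(void)
{
  Void_t* p = mALLOc(100);
  if (p != 0)
  {
    assert(malloc_usable_size(p) >= 100);
    fREe(p);
  }
}
#endif /* 0 */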
#if __STD_C
size_t malloc_usable_size(Void_t* mem)
#else
size_t malloc_usable_size(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);
    if(!chunk_is_mmapped(p))
    {
      if (!inuse(p)) return 0;
      check_inuse_chunk(p);
      return chunksize(p) - SIZE_SZ;
    }
    return chunksize(p) - 2*SIZE_SZ;
  }
}


\f

/* Utility to update current_mallinfo for malloc_stats and mallinfo() */

#if 0
static void malloc_update_mallinfo()
{
  int i;
  mbinptr b;
  mchunkptr p;
#ifdef DEBUG
  mchunkptr q;
#endif

  INTERNAL_SIZE_T avail = chunksize(top);
  int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(i);
    for (p = last(b); p != b; p = p->bk)
    {
#ifdef DEBUG
      check_free_chunk(p);
      for (q = next_chunk(p);
           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
           q = next_chunk(q))
        check_inuse_chunk(q);
#endif
      avail += chunksize(p);
      navail++;
    }
  }

  current_mallinfo.ordblks = navail;
  current_mallinfo.uordblks = sbrked_mem - avail;
  current_mallinfo.fordblks = avail;
  current_mallinfo.hblks = n_mmaps;
  current_mallinfo.hblkhd = mmapped_mem;
  current_mallinfo.keepcost = chunksize(top);

}
#endif /* 0 */

\f

/*

  malloc_stats:

    Prints the amount of space obtained from the system (both
    via sbrk and mmap), the maximum amount (which may be more than
    current if malloc_trim and/or munmap got called), the maximum
    number of simultaneous mmap regions used, and the current number
    of bytes allocated via malloc (or realloc, etc) but not yet
    freed. (Note that this is the number of bytes allocated, not the
    number requested. It will be larger than the number requested
    because of alignment and bookkeeping overhead.)

*/

#if 0
void malloc_stats()
{
  malloc_update_mallinfo();
  printf("max system bytes = %10u\n",
         (unsigned int)(max_total_mem));
  printf("system bytes = %10u\n",
         (unsigned int)(sbrked_mem + mmapped_mem));
  printf("in use bytes = %10u\n",
         (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
#if HAVE_MMAP
  printf("max mmap regions = %10u\n",
         (unsigned int)max_n_mmaps);
#endif
}
#endif /* 0 */

/*
  mallinfo returns a copy of updated current mallinfo.
*/

#if 0
struct mallinfo mALLINFo()
{
  malloc_update_mallinfo();
  return current_mallinfo;
}
#endif /* 0 */


\f

/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.

*/

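/*
  A minimal usage sketch (disabled, illustration only): raise the mmap
  threshold so that only requests of 256K or more are served by direct
  mapping; mALLOPt reports success with 1.
*/

#if 0 /* example only -- not part of the allocator */
static void example_tuning(void)
{
  if (mALLOPt(M_MMAP_THRESHOLD, 256 * 1024))
    printf("mmap threshold raised\n");
}
#endif /* 0 */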
#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}

/*

History:

    V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleaned up header file inclusion for WIN32 platforms
      * Cleaned up code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/