#include <common.h>

#if 0	/* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
	   ftp://g.oswego.edu/pub/misc/malloc.c
	 Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

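  Usage sketch (illustrative only; it simply exercises the routines
  listed above):

     Void_t* p = malloc(100);        at least 100 usable bytes
     Void_t* q = memalign(64, 256);  256 bytes on a 64-byte boundary
     p = realloc(p, 200);            may move the data to a new chunk
     free(q);
     free(p);
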
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
			  8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
			  8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
	 1. Because requests for zero bytes allocate non-zero space,
	    the worst case wastage for a request of zero bytes is 24 bytes.
	 2. For requests >= mmap_threshold that are serviced via
	    mmap(), the worst case wastage is 8 bytes plus the remainder
	    from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H        (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default: 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX            (default: undefined)
     Prefix all public routines with the string 'dl'. Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/

\f

/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc.  Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
				     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
				     *mz++ = 0;                               \
	if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
				     *mz++ = 0; }}}                           \
				     *mz++ = 0;                               \
				     *mz++ = 0;                               \
				     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
	if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++; }}}                 \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
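
/*
  Illustrative sketch (not part of this allocator): both macro families
  operate on whole INTERNAL_SIZE_T words, and the USE_MEMCPY variants
  are only ever invoked with (2n+1)-word byte counts. Assuming 4-byte
  words:

     INTERNAL_SIZE_T src[7] = { 1, 2, 3, 4, 5, 6, 7 };
     INTERNAL_SIZE_T dst[7];

     MALLOC_COPY(dst, src, 7 * sizeof(INTERNAL_SIZE_T));
     MALLOC_ZERO(src, 7 * sizeof(INTERNAL_SIZE_T));

  Here 28 bytes is 7 words (n == 3), within the 9-word cutoff, so the
  copy unrolls inline instead of calling memcpy.
*/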


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4

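
/*
  Usage sketch (illustrative only): of the mallopt options, only the
  four above do anything in this malloc, and mallinfo() reports the
  allocator's current view of the arena:

     mallopt(M_TRIM_THRESHOLD, 64 * 1024);
     mallopt(M_MMAP_THRESHOLD, 256 * 1024);

     struct mallinfo mi = mallinfo();
     printf("arena=%d in-use=%d free=%d releasable=%d\n",
	    mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/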

#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
    to keep before releasing via malloc_trim in free().

    Automatic trimming is mainly useful in long-lived programs.
    Because trimming via sbrk can be slow on some systems, and can
    sometimes be wasteful (in cases where programs immediately
    afterward allocate more large chunks) the value should be high
    enough so that your overall system performance would improve by
    releasing.

    The trim threshold and the mmap control parameters (see below)
    can be traded off with one another. Trimming and mmapping are
    two different ways of releasing unused memory back to the
    system. Between these two, it is often possible to keep
    system-level demands of a long-lived program down to a bare
    minimum. For example, in one test suite of sessions measuring
    the XF86 X server on Linux, using a trim threshold of 128K and a
    mmap threshold of 192K led to near-minimal long term resource
    consumption.

    If you are using this malloc in a long-lived program, it should
    pay to experiment with these values.  As a rough guide, you
    might set it to a value close to the average size of a process
    (program) running on your system.  Releasing this much memory
    would allow such a process to run in memory.  Generally, it's
    worth it to tune for trimming rather than memory mapping when a
    program undergoes phases where several large chunks are
    allocated and released in ways that can reuse each other's
    storage, perhaps mixed with phases where there are no such
    chunks at all.  And in well-behaved long-lived programs,
    controlling release of large blocks via trimming versus mapping
    is usually faster.

    However, in most programs, these parameters serve mainly as
    protection against the system-level effects of carrying around
    massive amounts of unneeded memory. Since frequent calls to
    sbrk, mmap, and munmap otherwise degrade performance, the default
    parameters are set to relatively high values that serve only as
    safeguards.

    The default trim value is high enough to cause trimming only in
    fairly extreme (by current memory consumption standards) cases.
    It must be greater than page size to have any useful effect.  To
    disable trimming completely, you can set it to (unsigned long)(-1).


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
    retain whenever sbrk is called. It is used in two ways internally:

    * When sbrk is called to extend the top of the arena to satisfy
      a new malloc request, this much padding is added to the sbrk
      request.

    * When malloc_trim is called automatically from free(),
      it is used as the `pad' argument.

    In both cases, the actual amount of padding is rounded
    so that the end of the arena is always a system page boundary.

    The main reason for using padding is to avoid calling sbrk so
    often. Having even a small pad greatly reduces the likelihood
    that nearly every malloc request during program start-up (or
    after trimming) will invoke sbrk, which needlessly wastes
    time.

    Automatic rounding-up to page-size units is normally sufficient
    to avoid measurable overhead, so the default is 0.  However, in
    systems where sbrk is relatively slow, it can pay to increase
    this value, at the expense of carrying around more memory than
    the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    to service a request. Requests of at least this size that cannot
    be allocated using already-existing space will be serviced via mmap.
    (If enough normal freed space already exists it is used instead.)

    Using mmap segregates relatively large chunks of memory so that
    they can be individually obtained and released from the host
    system. A request serviced through mmap is never reused by any
    other request (at least not directly; the system may just so
    happen to remap successive requests to the same locations).

    Segregating space in this way has the benefit that mmapped space
    can ALWAYS be individually released back to the system, which
    helps keep the system level memory demands of a long-lived
    program low. Mapped memory can never become `locked' between
    other chunks, as can happen with normally allocated chunks, which
    means that even trimming via malloc_trim would not release them.

    However, it has the disadvantages that:

	 1. The space cannot be reclaimed, consolidated, and then
	    used to service later requests, as happens with normal chunks.
	 2. It can lead to more wastage because of mmap page alignment
	    requirements
	 3. It causes malloc performance to be more dependent on host
	    system memory management support routines which may vary in
	    implementation quality and may impose arbitrary
	    limitations. Generally, servicing a request via normal
	    malloc steps is faster than going through a system's mmap.

    All together, these considerations should lead you to use mmap
    only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
    service using mmap. This parameter exists because:

	 1. Some systems have a limited number of internal tables for
	    use by mmap.
	 2. In most systems, overreliance on mmap can degrade overall
	    performance.
	 3. If a program allocates many large regions, it is probably
	    better off using normal sbrk-based allocation routines that
	    can reclaim and reallocate normal heap memory. Using a
	    small value allows transition into this mode after the
	    first few allocations.

    Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
    the default value is 0, and attempts to set it to non-zero values
    in mallopt will fail.
*/


/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
    Useful to quickly avoid procedure declaration conflicts and linker
    symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */
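
/*
  Sketch of a custom MORECORE (illustrative only; names are
  hypothetical): any sbrk-like routine can be plugged in by defining
  MORECORE before this point. It must hand out consecutive memory,
  return the old break on success, and MORECORE_FAILURE otherwise:

     static char arena[64 * 1024];
     static size_t brk_off = 0;

     Void_t *my_morecore(ptrdiff_t increment)
     {
	 size_t old = brk_off;
	 if (increment < 0 || brk_off + increment > sizeof(arena))
	     return (Void_t *)MORECORE_FAILURE;
	 brk_off += increment;
	 return (Void_t *)(arena + old);
     }

     #define MORECORE my_morecore

  Note this toy version never gives memory back; the U-Boot sbrk()
  further below is the real routine used by this file.
*/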

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc   __libc_calloc
#define fREe     __libc_free
#define mALLOc   __libc_malloc
#define mEMALIGn __libc_memalign
#define rEALLOc  __libc_realloc
#define vALLOc   __libc_valloc
#define pvALLOc  __libc_pvalloc
#define mALLINFo __libc_mallinfo
#define mALLOPt  __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc   dlcalloc
#define fREe     dlfree
#define mALLOc   dlmalloc
#define mEMALIGn dlmemalign
#define rEALLOc  dlrealloc
#define vALLOc   dlvalloc
#define pvALLOc  dlpvalloc
#define mALLINFo dlmallinfo
#define mALLOPt  dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc   calloc
#define fREe     free
#define mALLOc   malloc
#define mEMALIGn memalign
#define rEALLOc  realloc
#define vALLOc   valloc
#define pvALLOc  pvalloc
#define mALLINFo mallinfo
#define mALLOPt  mallopt
#endif /* USE_DL_PREFIX */

#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here  ------------ */
#endif	/* 0 */			/* Moved to malloc.h */

#include <malloc.h>
#ifdef DEBUG
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif /* DEBUG */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	assert (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
					gNextAddress - gAddressBase,
					MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}

static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	if (size >= TOP_MEMORY) return NULL;

	while ((unsigned long)start_address + size < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			/* Requested region is not available so see if the */
			/* next region is available.  Set 'start_address'  */
			/* to the next region and call 'VirtualQuery()'    */
			/* again.                                          */

			start_address = (char*)info.BaseAddress + info.RegionSize;

			/* Make sure we start looking for the next region  */
			/* on the *next* 64K boundary.  Otherwise, even if */
			/* the new region is free according to             */
			/* 'VirtualQuery()', the subsequent call to        */
			/* 'VirtualAlloc()' (which follows the call to     */
			/* this routine in 'wsbrk()') will round *down*    */
			/* the requested address to a 64K boundary which   */
			/* we already know is an address in the            */
			/* unavailable region.  Thus, the subsequent call  */
			/* to 'VirtualAlloc()' will fail and bring us back */
			/* here, causing us to go into an infinite loop.   */

			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;

}


void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
							    MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
								    MEM_RESERVE, PAGE_NOACCESS);
				/* repeat in case of race condition */
				/* The region that we found has been snagged */
				/* by another thread */
			}
			while (gAddressBase == 0);

			assert (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
					    (size + gNextAddress -
					     AlignPage (gNextAddress)),
					    MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
				     MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
				     MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif

\f

/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
} __attribute__((__may_alias__)) ;

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk, if allocated            | |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             User data starts here...                          .
	    .                                                               .
	    .             (malloc_usable_space() bytes)                     .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk                                     |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk                            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Forward pointer to next chunk in list             |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Back pointer to previous chunk in list            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Unused space (may be 0 bytes long)                .
	    .                                                               .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
	trailing size field since there is no
	next contiguous chunk that would have to index off it. (After
	initialization, `top' is forced to always exist.  If it would
	become less than MINSIZE bytes long, it is replenished via
	malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
	bit (IS_MMAPPED) set in their size fields.  Because they are
	never merged or traversed from any other chunk, they have no
	foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    *  Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/
\f

/* sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)

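/*
  Worked example (assuming 4-byte INTERNAL_SIZE_T, so SIZE_SZ == 4,
  MALLOC_ALIGNMENT == 8 and MINSIZE == 16):

     request2size(0)  -> 16   below MINSIZE, rounded up to MINSIZE
     request2size(12) -> 16   12 + 4 overhead = 16, already 8-aligned
     request2size(13) -> 24   13 + 4 = 17, rounded up to a multiple of 8
     request2size(24) -> 32   24 + 4 = 28, rounded up to 32

  i.e. every usable size covers the request plus the 4-byte size field,
  rounded to an 8-byte boundary, but never less than MINSIZE.
*/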

\f

/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))

\f

/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))

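/*
  Illustrative sketch (not part of this file): given a valid chunk p,
  the macros above navigate the boundary tags directly:

     mchunkptr nxt = next_chunk(p);      p plus its own size
     int used      = inuse(p);           P bit in the NEXT chunk's header
     if (!prev_inuse(p))                 prev_size is valid only then
	 p = prev_chunk(p);

  prev_chunk() is only meaningful while the previous chunk is free,
  since an in-use predecessor overlays p->prev_size with user data.
*/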

\f

/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))

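/*
  Illustrative sketch (not part of this file): the macros above maintain
  the boundary-tag invariant when a chunk p of size s becomes free:

     set_head_size(p, s);    store s in p's header, keeping its P bit
     set_foot(p, s);         trailing copy, read as the next prev_size
     clear_inuse(p);         clear PREV_INUSE in the next chunk's head

  After this, the chunk can be coalesced from either side by the
  allocation routines further below.
*/
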
/*
  Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

  Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (av_[2])          /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to check it in
   malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 NULL, NULL,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};

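/*
  Illustrative note (not part of this file; assumes INTERNAL_SIZE_T and
  pointers have the same size): bin_at(i) points 2*SIZE_SZ bytes BEFORE
  av_[2*i + 2], so the fd/bk fields of the resulting phantom chunk land
  exactly on the pair reserved for bin i:

     mbinptr b = bin_at(4);
     b->fd aliases av_[10], and b->bk aliases av_[11]

  The phantom prev_size/size fields overlap the previous bin's pair and
  are never written through a bin pointer.
*/
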
#ifdef CONFIG_NEEDS_MANUAL_RELOC
void malloc_bin_reloc (void)
{
	mbinptr *p = &av_[2];
	size_t i;

	for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
		*p = (mbinptr)((ulong)*p + gd->reloc_off);
}
#endif

ulong mem_malloc_start = 0;
ulong mem_malloc_end = 0;
ulong mem_malloc_brk = 0;

void *sbrk(ptrdiff_t increment)
{
	ulong old = mem_malloc_brk;
	ulong new = old + increment;

	/*
	 * if we are giving memory back make sure we clear it out since
	 * we set MORECORE_CLEARS to 1
	 */
	if (increment < 0)
		memset((void *)new, 0, -increment);

	if ((new < mem_malloc_start) || (new > mem_malloc_end))
		return (void *)MORECORE_FAILURE;

	mem_malloc_brk = new;

	return (void *)old;
}

void mem_malloc_init(ulong start, ulong size)
{
	mem_malloc_start = start;
	mem_malloc_end = start + size;
	mem_malloc_brk = start;

	memset((void *)mem_malloc_start, 0, size);
}

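/*
  Usage sketch (illustrative only; the address and size are
  hypothetical): board code hands this allocator its heap region
  exactly once, and the public routines then draw from it through the
  sbrk() above:

     mem_malloc_init(0x84000000, 0x100000);   1 MiB heap
     void *buf = malloc(4096);
     free(buf);

  Since sbrk() refuses to move the break outside [mem_malloc_start,
  mem_malloc_end], allocation simply fails when this region is
  exhausted.
*/
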
/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                         \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3):\
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6):\
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9):\
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12):\
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15):\
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
					  126)
/*
  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/

#define MAX_SMALLBIN         63
#define MAX_SMALLBIN_SIZE   512
#define SMALLBIN_WIDTH        8

#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)

/*
   Requests are `small' if both the corresponding and the next bin are small
*/

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)

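/*
  Worked example (values follow directly from bin_index above):

     bin_index(16)    ->  2                        small bin, 16 >> 3
     bin_index(504)   -> 63                        last small bin
     bin_index(1024)  -> 56 + (1024 >> 6)   =  72
     bin_index(16384) -> 110 + (16384 >> 12) = 114

  so neighboring sizes share a bin once sizes grow beyond 512 bytes,
  which is why the larger bins keep their chunks sorted by size.
*/
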
\f

/*
    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching.  `binblocks' is a
    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
    have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals. The bits are NOT always
    cleared as soon as all bins in a block are empty, but instead only
    when all are noticed to be empty during traversal in malloc.
*/

#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
#define binblocks_w     (av_[1])

/* bin<->block macros */

#define idx2binblock(ix)      ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)     (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
#define clear_binblock(ii)    (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))


\f


/*  Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad          = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem  (current_mallinfo.arena)

/* Tracking mmaps */

#ifdef DEBUG
static unsigned int n_mmaps = 0;
#endif /* DEBUG */
static unsigned long mmapped_mem = 0;
#if HAVE_MMAP
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
#endif

\f

/*
  Debugging support
*/

#ifdef DEBUG


/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

1645static void do_check_chunk(mchunkptr p)
1646#else
1647static void do_check_chunk(p) mchunkptr p;
1648#endif
1649{
217c9dad 1650 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
217c9dad
WD
1651
1652 /* No checkable chunk is mmapped */
1653 assert(!chunk_is_mmapped(p));
1654
1655 /* Check for legal address ... */
1656 assert((char*)p >= sbrk_base);
1657 if (p != top)
1658 assert((char*)p + sz <= (char*)top);
1659 else
1660 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1661
1662}
1663
1664
1665#if __STD_C
1666static void do_check_free_chunk(mchunkptr p)
1667#else
1668static void do_check_free_chunk(p) mchunkptr p;
1669#endif
1670{
1671 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1672 mchunkptr next = chunk_at_offset(p, sz);
1673
1674 do_check_chunk(p);
1675
1676 /* Check whether it claims to be free ... */
1677 assert(!inuse(p));
1678
1679 /* Unless a special marker, must have OK fields */
1680 if ((long)sz >= (long)MINSIZE)
1681 {
1682 assert((sz & MALLOC_ALIGN_MASK) == 0);
1683 assert(aligned_OK(chunk2mem(p)));
1684 /* ... matching footer field */
1685 assert(next->prev_size == sz);
1686 /* ... and is fully consolidated */
1687 assert(prev_inuse(p));
1688 assert (next == top || inuse(next));
1689
1690 /* ... and has minimally sane links */
1691 assert(p->fd->bk == p);
1692 assert(p->bk->fd == p);
1693 }
1694 else /* markers are always of size SIZE_SZ */
1695 assert(sz == SIZE_SZ);
1696}
1697
1698#if __STD_C
1699static void do_check_inuse_chunk(mchunkptr p)
1700#else
1701static void do_check_inuse_chunk(p) mchunkptr p;
1702#endif
1703{
1704 mchunkptr next = next_chunk(p);
1705 do_check_chunk(p);
1706
1707 /* Check whether it claims to be in use ... */
1708 assert(inuse(p));
1709
1710 /* ... and is surrounded by OK chunks.
1711 Since more things can be checked with free chunks than inuse ones,
1712 if an inuse chunk borders them and debug is on, it's worth doing them.
1713 */
1714 if (!prev_inuse(p))
1715 {
1716 mchunkptr prv = prev_chunk(p);
1717 assert(next_chunk(prv) == p);
1718 do_check_free_chunk(prv);
1719 }
1720 if (next == top)
1721 {
1722 assert(prev_inuse(next));
1723 assert(chunksize(next) >= MINSIZE);
1724 }
1725 else if (!inuse(next))
1726 do_check_free_chunk(next);
1727
1728}
1729
1730#if __STD_C
1731static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1732#else
1733static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1734#endif
1735{
1736 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1737 long room = sz - s;
1738
1739 do_check_inuse_chunk(p);
1740
1741 /* Legal size ... */
1742 assert((long)sz >= (long)MINSIZE);
1743 assert((sz & MALLOC_ALIGN_MASK) == 0);
1744 assert(room >= 0);
1745 assert(room < (long)MINSIZE);
1746
1747 /* ... and alignment */
1748 assert(aligned_OK(chunk2mem(p)));
1749
1750
1751 /* ... and was allocated at front of an available chunk */
1752 assert(prev_inuse(p));
1753
1754}
1755
1756
1757#define check_free_chunk(P) do_check_free_chunk(P)
1758#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1759#define check_chunk(P) do_check_chunk(P)
1760#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1761#else
1762#define check_free_chunk(P)
1763#define check_inuse_chunk(P)
1764#define check_chunk(P)
1765#define check_malloced_chunk(P,N)
1766#endif
1767
1768\f
1769
1770/*
1771 Macro-based internal utilities
1772*/
1773
1774
1775/*
1776 Linking chunks in bin lists.
1777 Call these only with variables, not arbitrary expressions, as arguments.
1778*/
1779
1780/*
1781 Place chunk p of size s in its bin, in size order,
1782 putting it ahead of others of same size.
1783*/
1784
1785
1786#define frontlink(P, S, IDX, BK, FD) \
1787{ \
1788 if (S < MAX_SMALLBIN_SIZE) \
1789 { \
1790 IDX = smallbin_index(S); \
1791 mark_binblock(IDX); \
1792 BK = bin_at(IDX); \
1793 FD = BK->fd; \
1794 P->bk = BK; \
1795 P->fd = FD; \
1796 FD->bk = BK->fd = P; \
1797 } \
1798 else \
1799 { \
1800 IDX = bin_index(S); \
1801 BK = bin_at(IDX); \
1802 FD = BK->fd; \
1803 if (FD == BK) mark_binblock(IDX); \
1804 else \
1805 { \
1806 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1807 BK = FD->bk; \
1808 } \
1809 P->bk = BK; \
1810 P->fd = FD; \
1811 FD->bk = BK->fd = P; \
1812 } \
1813}
1814
1815
1816/* take a chunk off a list */
1817
1818#define unlink(P, BK, FD) \
1819{ \
1820 BK = P->bk; \
1821 FD = P->fd; \
1822 FD->bk = BK; \
1823 BK->fd = FD; \
1824} \
1825
1826/* Place p as the last remainder */
1827
1828#define link_last_remainder(P) \
1829{ \
1830 last_remainder->fd = last_remainder->bk = P; \
1831 P->fd = P->bk = last_remainder; \
1832}
1833
1834/* Clear the last_remainder bin */
1835
1836#define clear_last_remainder \
1837 (last_remainder->fd = last_remainder->bk = last_remainder)
1838
1839
1840\f
1841
1842
1843/* Routines dealing with mmap(). */
1844
1845#if HAVE_MMAP
1846
1847#if __STD_C
1848static mchunkptr mmap_chunk(size_t size)
1849#else
1850static mchunkptr mmap_chunk(size) size_t size;
1851#endif
1852{
1853 size_t page_mask = malloc_getpagesize - 1;
1854 mchunkptr p;
1855
1856#ifndef MAP_ANONYMOUS
1857 static int fd = -1;
1858#endif
1859
1860 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1861
1862 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1863 * there is no following chunk whose prev_size field could be used.
1864 */
1865 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1866
1867#ifdef MAP_ANONYMOUS
1868 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1869 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1870#else /* !MAP_ANONYMOUS */
1871 if (fd < 0)
1872 {
1873 fd = open("/dev/zero", O_RDWR);
1874 if(fd < 0) return 0;
1875 }
1876 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1877#endif
1878
1879 if(p == (mchunkptr)-1) return 0;
1880
1881 n_mmaps++;
1882 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1883
1884 /* We demand that eight bytes into a page must be 8-byte aligned. */
1885 assert(aligned_OK(chunk2mem(p)));
1886
1887 /* The offset to the start of the mmapped region is stored
1888 * in the prev_size field of the chunk; normally it is zero,
1889 * but that can be changed in memalign().
1890 */
1891 p->prev_size = 0;
1892 set_head(p, size|IS_MMAPPED);
1893
1894 mmapped_mem += size;
1895 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1896 max_mmapped_mem = mmapped_mem;
1897 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1898 max_total_mem = mmapped_mem + sbrked_mem;
1899 return p;
1900}
1901
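/*
  Worked example of the rounding in mmap_chunk() above, assuming
  4096-byte pages and 4-byte SIZE_SZ: a padded request of 10000 bytes
  becomes (10000 + 4 + 4095) & ~4095 == 12288, i.e. three whole pages,
  with the extra SIZE_SZ covering the missing following-chunk
  prev_size field.
*/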
1902#if __STD_C
1903static void munmap_chunk(mchunkptr p)
1904#else
1905static void munmap_chunk(p) mchunkptr p;
1906#endif
1907{
1908 INTERNAL_SIZE_T size = chunksize(p);
1909 int ret;
1910
1911 assert (chunk_is_mmapped(p));
1912 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1913 assert((n_mmaps > 0));
1914 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1915
1916 n_mmaps--;
1917 mmapped_mem -= (size + p->prev_size);
1918
1919 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1920
1921 /* munmap returns non-zero on failure */
1922 assert(ret == 0);
1923}
1924
1925#if HAVE_MREMAP
1926
1927#if __STD_C
1928static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1929#else
1930static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1931#endif
1932{
1933 size_t page_mask = malloc_getpagesize - 1;
1934 INTERNAL_SIZE_T offset = p->prev_size;
1935 INTERNAL_SIZE_T size = chunksize(p);
1936 char *cp;
1937
1938 assert (chunk_is_mmapped(p));
1939 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1940 assert((n_mmaps > 0));
1941 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1942
1943 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1944 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1945
1946 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1947
1948 if (cp == (char *)-1) return 0;
1949
1950 p = (mchunkptr)(cp + offset);
1951
1952 assert(aligned_OK(chunk2mem(p)));
1953
1954 assert((p->prev_size == offset));
1955 set_head(p, (new_size - offset)|IS_MMAPPED);
1956
1957 mmapped_mem -= size + offset;
1958 mmapped_mem += new_size;
1959 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1960 max_mmapped_mem = mmapped_mem;
1961 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1962 max_total_mem = mmapped_mem + sbrked_mem;
1963 return p;
1964}
1965
1966#endif /* HAVE_MREMAP */
1967
1968#endif /* HAVE_MMAP */
1969
1970
1971\f
1972
1973/*
1974 Extend the top-most chunk by obtaining memory from system.
1975 Main interface to sbrk (but see also malloc_trim).
1976*/
1977
1978#if __STD_C
1979static void malloc_extend_top(INTERNAL_SIZE_T nb)
1980#else
1981static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1982#endif
1983{
1984 char* brk; /* return value from sbrk */
1985 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1986 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1987 char* new_brk; /* return of 2nd sbrk call */
1988 INTERNAL_SIZE_T top_size; /* new size of top chunk */
1989
1990 mchunkptr old_top = top; /* Record state of old top */
1991 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1992 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
1993
1994 /* Pad request with top_pad plus minimal overhead */
1995
1996 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
1997 unsigned long pagesz = malloc_getpagesize;
1998
1999 /* If not the first time through, round to preserve page boundary */
2000 /* Otherwise, we need to correct to a page size below anyway. */
2001 /* (We also correct below if an intervening foreign sbrk call.) */
2002
2003 if (sbrk_base != (char*)(-1))
2004 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2005
2006 brk = (char*)(MORECORE (sbrk_size));
2007
2008 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2009 if (brk == (char*)(MORECORE_FAILURE) ||
2010 (brk < old_end && old_top != initial_top))
2011 return;
2012
2013 sbrked_mem += sbrk_size;
2014
2015 if (brk == old_end) /* can just add bytes to current top */
2016 {
2017 top_size = sbrk_size + old_top_size;
2018 set_head(top, top_size | PREV_INUSE);
2019 }
2020 else
2021 {
2022 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2023 sbrk_base = brk;
2024 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2025 sbrked_mem += brk - (char*)old_end;
2026
2027 /* Guarantee alignment of first new chunk made from this space */
2028 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2029 if (front_misalign > 0)
2030 {
2031 correction = (MALLOC_ALIGNMENT) - front_misalign;
2032 brk += correction;
2033 }
2034 else
2035 correction = 0;
2036
2037 /* Guarantee the next brk will be at a page boundary */
2038
2039 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2040 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2041
2042 /* Allocate correction */
2043 new_brk = (char*)(MORECORE (correction));
2044 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2045
2046 sbrked_mem += correction;
2047
2048 top = (mchunkptr)brk;
2049 top_size = new_brk - brk + correction;
2050 set_head(top, top_size | PREV_INUSE);
2051
2052 if (old_top != initial_top)
2053 {
2054
2055 /* There must have been an intervening foreign sbrk call. */
2056 /* A double fencepost is necessary to prevent consolidation */
2057
2058 /* If not enough space to do this, then user did something very wrong */
2059 if (old_top_size < MINSIZE)
2060 {
2061 set_head(top, PREV_INUSE); /* will force null return from malloc */
2062 return;
2063 }
2064
2065 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2066 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2067 set_head_size(old_top, old_top_size);
2068 chunk_at_offset(old_top, old_top_size)->size =
2069 SIZE_SZ|PREV_INUSE;
2070 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2071 SIZE_SZ|PREV_INUSE;
2072 /* If possible, release the rest. */
2073 if (old_top_size >= MINSIZE)
2074 fREe(chunk2mem(old_top));
2075 }
2076 }
2077
2078 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2079 max_sbrked_mem = sbrked_mem;
2080 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2081 max_total_mem = mmapped_mem + sbrked_mem;
2082
2083 /* We always land on a page boundary */
2084 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2085}
2086
2087
2088\f
2089
2090/* Main public routines */
2091
2092
2093/*
2094 Malloc Algorithm:
2095
2096 The requested size is first converted into a usable form, `nb'.
2097 This currently means to add 4 bytes overhead plus possibly more to
2098 obtain 8-byte alignment and/or to obtain a size of at least
2099 MINSIZE (currently 16 bytes), the smallest allocatable size.
2100 (All fits are considered `exact' if they are within MINSIZE bytes.)
2101
2102 From there, the first of the following steps that succeeds is taken:
2103
2104 1. The bin corresponding to the request size is scanned, and if
2105 a chunk of exactly the right size is found, it is taken.
2106
2107 2. The most recently remaindered chunk is used if it is big
2108 enough. This is a form of (roving) first fit, used only in
2109 the absence of exact fits. Runs of consecutive requests use
2110 the remainder of the chunk used for the previous such request
2111 whenever possible. This limited use of a first-fit style
2112 allocation strategy tends to give contiguous chunks
2113 coextensive lifetimes, which improves locality and can reduce
2114 fragmentation in the long run.
2115
2116 3. Other bins are scanned in increasing size order, using a
2117 chunk big enough to fulfill the request, and splitting off
2118 any remainder. This search is strictly by best-fit; i.e.,
2119 the smallest (with ties going to approximately the least
2120 recently used) chunk that fits is selected.
217c9dad
WD
2121
2122 4. If large enough, the chunk bordering the end of memory
2123 (`top') is split off. (This use of `top' is in accord with
2124 the best-fit search rule. In effect, `top' is treated as
2125 larger (and thus less well fitting) than any other available
2126 chunk since it can be extended to be as large as necessary
2127 (up to system limitations).)
217c9dad
WD
2128
2129 5. If the request size meets the mmap threshold and the
2130 system supports mmap, and there are few enough currently
2131 allocated mmapped regions, and a call to mmap succeeds,
2132 the request is allocated via direct memory mapping.
217c9dad
WD
2133
2134 6. Otherwise, the top of memory is extended by
2135 obtaining more space from the system (normally using sbrk,
2136 but definable to anything else via the MORECORE macro).
2137 Memory is gathered from the system (in system page-sized
2138 units) in a way that allows chunks obtained across different
2139 sbrk calls to be consolidated, but does not require
2140 contiguous memory. Thus, it should be safe to intersperse
2141 mallocs with other sbrk calls.
2142
2143
2144 All allocations are made from the `lowest' part of any found
2145 chunk. (The implementation invariant is that prev_inuse is
2146 always true of any allocated chunk; i.e., that each allocated
2147 chunk borders either a previously allocated and still in-use chunk,
2148 or the base of its memory arena.)
2149
2150*/
2151
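/*
  Worked example of the size normalization described above (a sketch
  assuming the documented 4-byte overhead, 8-byte alignment, and
  16-byte MINSIZE; the demo function is hypothetical and compiled
  out): a 10-byte request rounds up to the 16-byte minimum, while a
  100-byte request becomes a 104-byte chunk.
*/
#if 0
static void request2size_demo(void)
{
  assert(request2size(10) == 16);   /* (10 + 4 + 7) & ~7 is below MINSIZE */
  assert(request2size(100) == 104); /* (100 + 4 + 7) & ~7 */
}
#endif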
2152#if __STD_C
2153Void_t* mALLOc(size_t bytes)
2154#else
2155Void_t* mALLOc(bytes) size_t bytes;
2156#endif
2157{
2158 mchunkptr victim; /* inspected/selected chunk */
2159 INTERNAL_SIZE_T victim_size; /* its size */
2160 int idx; /* index for bin traversal */
2161 mbinptr bin; /* associated bin */
2162 mchunkptr remainder; /* remainder from a split */
2163 long remainder_size; /* its size */
2164 int remainder_index; /* its bin index */
2165 unsigned long block; /* block traverser bit */
2166 int startidx; /* first bin of a traversed block */
2167 mchunkptr fwd; /* misc temp for linking */
2168 mchunkptr bck; /* misc temp for linking */
2169 mbinptr q; /* misc temp */
2170
2171 INTERNAL_SIZE_T nb;
2172
2173 /* check if mem_malloc_init() was run */
2174 if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
2175 /* not initialized yet */
2176 return NULL;
2177 }
2178
2179 if ((long)bytes < 0) return NULL;
2180
2181 nb = request2size(bytes); /* padded request size; */
2182
2183 /* Check for exact match in a bin */
2184
2185 if (is_small_request(nb)) /* Faster version for small requests */
2186 {
2187 idx = smallbin_index(nb);
2188
2189 /* No traversal or size check necessary for small bins. */
2190
2191 q = bin_at(idx);
2192 victim = last(q);
2193
2194 /* Also scan the next one, since it would have a remainder < MINSIZE */
2195 if (victim == q)
2196 {
2197 q = next_bin(q);
2198 victim = last(q);
2199 }
2200 if (victim != q)
2201 {
2202 victim_size = chunksize(victim);
2203 unlink(victim, bck, fwd);
2204 set_inuse_bit_at_offset(victim, victim_size);
2205 check_malloced_chunk(victim, nb);
2206 return chunk2mem(victim);
2207 }
2208
2209 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2210
2211 }
2212 else
2213 {
2214 idx = bin_index(nb);
2215 bin = bin_at(idx);
2216
2217 for (victim = last(bin); victim != bin; victim = victim->bk)
2218 {
2219 victim_size = chunksize(victim);
2220 remainder_size = victim_size - nb;
2221
2222 if (remainder_size >= (long)MINSIZE) /* too big */
2223 {
2224 --idx; /* adjust to rescan below after checking last remainder */
2225 break;
2226 }
2227
2228 else if (remainder_size >= 0) /* exact fit */
2229 {
2230 unlink(victim, bck, fwd);
2231 set_inuse_bit_at_offset(victim, victim_size);
2232 check_malloced_chunk(victim, nb);
2233 return chunk2mem(victim);
2234 }
2235 }
2236
2237 ++idx;
2238
2239 }
2240
2241 /* Try to use the last split-off remainder */
2242
2243 if ( (victim = last_remainder->fd) != last_remainder)
2244 {
2245 victim_size = chunksize(victim);
2246 remainder_size = victim_size - nb;
2247
2248 if (remainder_size >= (long)MINSIZE) /* re-split */
2249 {
2250 remainder = chunk_at_offset(victim, nb);
2251 set_head(victim, nb | PREV_INUSE);
2252 link_last_remainder(remainder);
2253 set_head(remainder, remainder_size | PREV_INUSE);
2254 set_foot(remainder, remainder_size);
2255 check_malloced_chunk(victim, nb);
2256 return chunk2mem(victim);
2257 }
2258
2259 clear_last_remainder;
2260
2261 if (remainder_size >= 0) /* exhaust */
2262 {
2263 set_inuse_bit_at_offset(victim, victim_size);
2264 check_malloced_chunk(victim, nb);
2265 return chunk2mem(victim);
2266 }
2267
2268 /* Else place in bin */
2269
2270 frontlink(victim, victim_size, remainder_index, bck, fwd);
2271 }
2272
2273 /*
2274 If there are any possibly nonempty big-enough blocks,
2275 search for best fitting chunk by scanning bins in blockwidth units.
2276 */
2277
2278 if ( (block = idx2binblock(idx)) <= binblocks_r)
2279 {
2280
2281 /* Get to the first marked block */
2282
2283 if ( (block & binblocks_r) == 0)
2284 {
2285 /* force to an even block boundary */
2286 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2287 block <<= 1;
2288 while ((block & binblocks_r) == 0)
2289 {
2290 idx += BINBLOCKWIDTH;
2291 block <<= 1;
217c9dad
WD
2292 }
2293 }
2294
2295 /* For each possibly nonempty block ... */
2296 for (;;)
2297 {
2298 startidx = idx; /* (track incomplete blocks) */
2299 q = bin = bin_at(idx);
2300
2301 /* For each bin in this block ... */
2302 do
2303 {
2304 /* Find and use first big enough chunk ... */
2305
2306 for (victim = last(bin); victim != bin; victim = victim->bk)
2307 {
2308 victim_size = chunksize(victim);
2309 remainder_size = victim_size - nb;
2310
2311 if (remainder_size >= (long)MINSIZE) /* split */
2312 {
2313 remainder = chunk_at_offset(victim, nb);
2314 set_head(victim, nb | PREV_INUSE);
2315 unlink(victim, bck, fwd);
2316 link_last_remainder(remainder);
2317 set_head(remainder, remainder_size | PREV_INUSE);
2318 set_foot(remainder, remainder_size);
2319 check_malloced_chunk(victim, nb);
2320 return chunk2mem(victim);
2321 }
2322
2323 else if (remainder_size >= 0) /* take */
2324 {
2325 set_inuse_bit_at_offset(victim, victim_size);
2326 unlink(victim, bck, fwd);
2327 check_malloced_chunk(victim, nb);
2328 return chunk2mem(victim);
2329 }
2330
2331 }
2332
2333 bin = next_bin(bin);
2334
2335 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2336
2337 /* Clear out the block bit. */
2338
2339 do /* Possibly backtrack to try to clear a partial block */
2340 {
2341 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2342 {
2343 av_[1] = (mbinptr)(binblocks_r & ~block);
2344 break;
2345 }
2346 --startidx;
2347 q = prev_bin(q);
2348 } while (first(q) == q);
2349
2350 /* Get to the next possibly nonempty block */
2351
2352 if ( (block <<= 1) <= binblocks_r && (block != 0) )
2353 {
2354 while ((block & binblocks_r) == 0)
2355 {
2356 idx += BINBLOCKWIDTH;
2357 block <<= 1;
2358 }
2359 }
2360 else
2361 break;
2362 }
2363 }
2364
2365
2366 /* Try to use top chunk */
2367
2368 /* Require that there be a remainder, ensuring top always exists */
2369 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2370 {
2371
2372#if HAVE_MMAP
2373 /* If big and would otherwise need to extend, try to use mmap instead */
2374 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2375 (victim = mmap_chunk(nb)) != 0)
2376 return chunk2mem(victim);
2377#endif
2378
2379 /* Try to extend */
2380 malloc_extend_top(nb);
2381 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2382 return NULL; /* propagate failure */
2383 }
2384
2385 victim = top;
2386 set_head(victim, nb | PREV_INUSE);
2387 top = chunk_at_offset(victim, nb);
2388 set_head(top, remainder_size | PREV_INUSE);
2389 check_malloced_chunk(victim, nb);
2390 return chunk2mem(victim);
2391
2392}
2393
2394
2395\f
2396
2397/*
2398
2399 free() algorithm:
2400
2401 cases:
2402
2403 1. free(0) has no effect.
2404
2405 2. If the chunk was allocated via mmap, it is released via munmap().
2406
2407 3. If a returned chunk borders the current high end of memory,
2408 it is consolidated into the top, and if the total unused
2409 topmost memory exceeds the trim threshold, malloc_trim is
2410 called.
2411
2412 4. Other chunks are consolidated as they arrive, and
2413 placed in corresponding bins. (This includes the case of
2414 consolidating with the current `last_remainder').
2415
2416*/
2417
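/*
  Usage-level sketch of the cases above (hypothetical, compiled-out
  test code): freeing a null pointer is a no-op, and a block that
  still borders `top' is consolidated back into top rather than
  placed in a bin.
*/
#if 0
static void free_demo(void)
{
  Void_t* a = mALLOc(64);
  fREe(NULL);  /* case 1: no effect */
  fREe(a);     /* if a borders top, case 3 merges it into top */
}
#endif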
2418
2419#if __STD_C
2420void fREe(Void_t* mem)
2421#else
2422void fREe(mem) Void_t* mem;
2423#endif
2424{
2425 mchunkptr p; /* chunk corresponding to mem */
2426 INTERNAL_SIZE_T hd; /* its head field */
2427 INTERNAL_SIZE_T sz; /* its size */
2428 int idx; /* its bin index */
2429 mchunkptr next; /* next contiguous chunk */
2430 INTERNAL_SIZE_T nextsz; /* its size */
2431 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2432 mchunkptr bck; /* misc temp for linking */
2433 mchunkptr fwd; /* misc temp for linking */
2434 int islr; /* track whether merging with last_remainder */
2435
2436 if (mem == NULL) /* free(0) has no effect */
2437 return;
2438
2439 p = mem2chunk(mem);
2440 hd = p->size;
2441
2442#if HAVE_MMAP
2443 if (hd & IS_MMAPPED) /* release mmapped memory. */
2444 {
2445 munmap_chunk(p);
2446 return;
2447 }
2448#endif
2449
2450 check_inuse_chunk(p);
2451
2452 sz = hd & ~PREV_INUSE;
2453 next = chunk_at_offset(p, sz);
2454 nextsz = chunksize(next);
2455
2456 if (next == top) /* merge with top */
2457 {
2458 sz += nextsz;
2459
2460 if (!(hd & PREV_INUSE)) /* consolidate backward */
2461 {
2462 prevsz = p->prev_size;
2463 p = chunk_at_offset(p, -((long) prevsz));
2464 sz += prevsz;
2465 unlink(p, bck, fwd);
2466 }
2467
2468 set_head(p, sz | PREV_INUSE);
2469 top = p;
2470 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2471 malloc_trim(top_pad);
2472 return;
2473 }
2474
2475 set_head(next, nextsz); /* clear inuse bit */
2476
2477 islr = 0;
2478
2479 if (!(hd & PREV_INUSE)) /* consolidate backward */
2480 {
2481 prevsz = p->prev_size;
2482 p = chunk_at_offset(p, -((long) prevsz));
2483 sz += prevsz;
2484
2485 if (p->fd == last_remainder) /* keep as last_remainder */
2486 islr = 1;
2487 else
2488 unlink(p, bck, fwd);
2489 }
2490
2491 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2492 {
2493 sz += nextsz;
2494
2495 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2496 {
2497 islr = 1;
2498 link_last_remainder(p);
2499 }
2500 else
2501 unlink(next, bck, fwd);
2502 }
2503
2504
2505 set_head(p, sz | PREV_INUSE);
2506 set_foot(p, sz);
2507 if (!islr)
2508 frontlink(p, sz, idx, bck, fwd);
2509}
2510
2511
2512\f
2513
2514
2515/*
2516
2517 Realloc algorithm:
2518
2519 Chunks that were obtained via mmap cannot be extended or shrunk
2520 unless HAVE_MREMAP is defined, in which case mremap is used.
2521 Otherwise, if their reallocation is for additional space, they are
2522 copied. If for less, they are just left alone.
2523
2524 Otherwise, if the reallocation is for additional space, and the
2525 chunk can be extended, it is, else a malloc-copy-free sequence is
2526 taken. There are several different ways that a chunk could be
2527 extended. All are tried:
2528
2529 * Extending forward into following adjacent free chunk.
2530 * Shifting backwards, joining preceding adjacent space
2531 * Both shifting backwards and extending forward.
2532 * Extending into newly sbrked space
2533
2534 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2535 size argument of zero (re)allocates a minimum-sized chunk.
2536
2537 If the reallocation is for less space, and the new request is for
2538 a `small' (<512 bytes) size, then the newly unused space is lopped
2539 off and freed.
2540
2541 The old unix realloc convention of allowing the last-free'd chunk
2542 to be used as an argument to realloc is no longer supported.
2543 I don't know of any programs still relying on this feature,
2544 and allowing it would also allow too many other incorrect
2545 usages of realloc to be sensible.
2546
2547
2548*/
2549
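/*
  Usage sketch of the growth paths above (hypothetical, compiled-out):
  rEALLOc may extend in place, shift backwards, or fall back to
  malloc-copy-free, so callers must always adopt the returned pointer.
*/
#if 0
static void realloc_demo(void)
{
  Void_t* p = mALLOc(32);
  Void_t* q = rEALLOc(p, 64); /* may move; p is then invalid */
  if (q != NULL)
    p = q;
  fREe(p);
}
#endif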
2550
2551#if __STD_C
2552Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2553#else
2554Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2555#endif
2556{
2557 INTERNAL_SIZE_T nb; /* padded request size */
2558
2559 mchunkptr oldp; /* chunk corresponding to oldmem */
2560 INTERNAL_SIZE_T oldsize; /* its size */
2561
2562 mchunkptr newp; /* chunk to return */
2563 INTERNAL_SIZE_T newsize; /* its size */
2564 Void_t* newmem; /* corresponding user mem */
2565
2566 mchunkptr next; /* next contiguous chunk after oldp */
2567 INTERNAL_SIZE_T nextsize; /* its size */
2568
2569 mchunkptr prev; /* previous contiguous chunk before oldp */
2570 INTERNAL_SIZE_T prevsize; /* its size */
2571
2572 mchunkptr remainder; /* holds split off extra space from newp */
2573 INTERNAL_SIZE_T remainder_size; /* its size */
2574
2575 mchunkptr bck; /* misc temp for linking */
2576 mchunkptr fwd; /* misc temp for linking */
2577
2578#ifdef REALLOC_ZERO_BYTES_FREES
2579 if (bytes == 0) { fREe(oldmem); return 0; }
2580#endif
2581
2582 if ((long)bytes < 0) return NULL;
2583
2584 /* realloc of null is supposed to be same as malloc */
2585 if (oldmem == NULL) return mALLOc(bytes);
2586
2587 newp = oldp = mem2chunk(oldmem);
2588 newsize = oldsize = chunksize(oldp);
2589
2590
2591 nb = request2size(bytes);
2592
2593#if HAVE_MMAP
2594 if (chunk_is_mmapped(oldp))
2595 {
2596#if HAVE_MREMAP
2597 newp = mremap_chunk(oldp, nb);
2598 if(newp) return chunk2mem(newp);
2599#endif
2600 /* Note the extra SIZE_SZ overhead. */
2601 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2602 /* Must alloc, copy, free. */
2603 newmem = mALLOc(bytes);
2604 if (newmem == 0) return 0; /* propagate failure */
2605 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2606 munmap_chunk(oldp);
2607 return newmem;
2608 }
2609#endif
2610
2611 check_inuse_chunk(oldp);
2612
2613 if ((long)(oldsize) < (long)(nb))
2614 {
2615
2616 /* Try expanding forward */
2617
2618 next = chunk_at_offset(oldp, oldsize);
2619 if (next == top || !inuse(next))
2620 {
2621 nextsize = chunksize(next);
2622
2623 /* Forward into top only if a remainder */
2624 if (next == top)
2625 {
2626 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2627 {
2628 newsize += nextsize;
2629 top = chunk_at_offset(oldp, nb);
2630 set_head(top, (newsize - nb) | PREV_INUSE);
2631 set_head_size(oldp, nb);
2632 return chunk2mem(oldp);
2633 }
2634 }
2635
2636 /* Forward into next chunk */
2637 else if (((long)(nextsize + newsize) >= (long)(nb)))
2638 {
2639 unlink(next, bck, fwd);
2640 newsize += nextsize;
2641 goto split;
2642 }
2643 }
2644 else
2645 {
2646 next = NULL;
2647 nextsize = 0;
2648 }
2649
2650 /* Try shifting backwards. */
2651
2652 if (!prev_inuse(oldp))
2653 {
2654 prev = prev_chunk(oldp);
2655 prevsize = chunksize(prev);
2656
2657 /* try forward + backward first to save a later consolidation */
2658
2659 if (next != NULL)
2660 {
2661 /* into top */
2662 if (next == top)
2663 {
2664 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2665 {
2666 unlink(prev, bck, fwd);
2667 newp = prev;
2668 newsize += prevsize + nextsize;
2669 newmem = chunk2mem(newp);
2670 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2671 top = chunk_at_offset(newp, nb);
2672 set_head(top, (newsize - nb) | PREV_INUSE);
2673 set_head_size(newp, nb);
2674 return newmem;
2675 }
2676 }
2677
2678 /* into next chunk */
2679 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2680 {
2681 unlink(next, bck, fwd);
2682 unlink(prev, bck, fwd);
2683 newp = prev;
2684 newsize += nextsize + prevsize;
2685 newmem = chunk2mem(newp);
2686 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2687 goto split;
2688 }
2689 }
2690
2691 /* backward only */
2692 if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
2693 {
2694 unlink(prev, bck, fwd);
2695 newp = prev;
2696 newsize += prevsize;
2697 newmem = chunk2mem(newp);
2698 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2699 goto split;
2700 }
2701 }
2702
2703 /* Must allocate */
2704
2705 newmem = mALLOc (bytes);
2706
2707 if (newmem == NULL) /* propagate failure */
2708 return NULL;
2709
2710 /* Avoid copy if newp is next chunk after oldp. */
2711 /* (This can only happen when new chunk is sbrk'ed.) */
2712
2713 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2714 {
2715 newsize += chunksize(newp);
2716 newp = oldp;
2717 goto split;
2718 }
2719
2720 /* Otherwise copy, free, and exit */
2721 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2722 fREe(oldmem);
2723 return newmem;
2724 }
2725
2726
2727 split: /* split off extra room in old or expanded chunk */
2728
2729 if (newsize - nb >= MINSIZE) /* split off remainder */
2730 {
2731 remainder = chunk_at_offset(newp, nb);
2732 remainder_size = newsize - nb;
2733 set_head_size(newp, nb);
2734 set_head(remainder, remainder_size | PREV_INUSE);
2735 set_inuse_bit_at_offset(remainder, remainder_size);
2736 fREe(chunk2mem(remainder)); /* let free() deal with it */
2737 }
2738 else
2739 {
2740 set_head_size(newp, newsize);
2741 set_inuse_bit_at_offset(newp, newsize);
2742 }
2743
2744 check_inuse_chunk(newp);
2745 return chunk2mem(newp);
2746}
2747
2748
2749\f
2750
2751/*
2752
2753 memalign algorithm:
2754
2755 memalign requests more than enough space from malloc, finds a spot
2756 within that chunk that meets the alignment request, and then
2757 possibly frees the leading and trailing space.
2758
2759 The alignment argument must be a power of two. This property is not
2760 checked by memalign, so misuse may result in random runtime errors.
2761
2762 8-byte alignment is guaranteed by normal malloc calls, so don't
2763 bother calling memalign with an argument of 8 or less.
2764
2765 Overreliance on memalign is a sure way to fragment space.
2766
2767*/
2768
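/*
  Hedged usage sketch (hypothetical, compiled out): the alignment must
  be a power of two, and anything <= MALLOC_ALIGNMENT simply falls
  through to mALLOc, so memalign is only worth calling for larger
  alignments, e.g. a 64-byte cache line.
*/
#if 0
static void memalign_demo(void)
{
  Void_t* p = mEMALIGn(64, 100); /* 64 is a power of two */
  if (p != NULL)
    assert(((unsigned long)p % 64) == 0);
  fREe(p);
}
#endif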
2769
2770#if __STD_C
2771Void_t* mEMALIGn(size_t alignment, size_t bytes)
2772#else
2773Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2774#endif
2775{
2776 INTERNAL_SIZE_T nb; /* padded request size */
2777 char* m; /* memory returned by malloc call */
2778 mchunkptr p; /* corresponding chunk */
2779 char* brk; /* alignment point within p */
2780 mchunkptr newp; /* chunk to return */
2781 INTERNAL_SIZE_T newsize; /* its size */
2782 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
2783 mchunkptr remainder; /* spare room at end to split off */
2784 long remainder_size; /* its size */
2785
2786 if ((long)bytes < 0) return NULL;
2787
2788 /* If need less alignment than we give anyway, just relay to malloc */
2789
2790 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2791
2792 /* Otherwise, ensure that it is at least a minimum chunk size */
2793
2794 if (alignment < MINSIZE) alignment = MINSIZE;
2795
2796 /* Call malloc with worst case padding to hit alignment. */
2797
2798 nb = request2size(bytes);
2799 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2800
2801 if (m == NULL) return NULL; /* propagate failure */
2802
2803 p = mem2chunk(m);
2804
2805 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2806 {
2807#if HAVE_MMAP
2808 if(chunk_is_mmapped(p))
2809 return chunk2mem(p); /* nothing more to do */
2810#endif
2811 }
2812 else /* misaligned */
2813 {
2814 /*
2815 Find an aligned spot inside chunk.
2816 Since we need to give back leading space in a chunk of at
2817 least MINSIZE, if the first calculation places us at
2818 a spot with less than MINSIZE leader, we can move to the
2819 next aligned spot -- we've allocated enough total room so that
2820 this is always possible.
2821 */
2822
2823 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2824 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2825
2826 newp = (mchunkptr)brk;
2827 leadsize = brk - (char*)(p);
2828 newsize = chunksize(p) - leadsize;
2829
2830#if HAVE_MMAP
2831 if(chunk_is_mmapped(p))
2832 {
2833 newp->prev_size = p->prev_size + leadsize;
2834 set_head(newp, newsize|IS_MMAPPED);
2835 return chunk2mem(newp);
2836 }
2837#endif
2838
2839 /* give back leader, use the rest */
2840
2841 set_head(newp, newsize | PREV_INUSE);
2842 set_inuse_bit_at_offset(newp, newsize);
2843 set_head_size(p, leadsize);
2844 fREe(chunk2mem(p));
2845 p = newp;
2846
2847 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2848 }
2849
2850 /* Also give back spare room at the end */
2851
2852 remainder_size = chunksize(p) - nb;
2853
2854 if (remainder_size >= (long)MINSIZE)
2855 {
2856 remainder = chunk_at_offset(p, nb);
2857 set_head(remainder, remainder_size | PREV_INUSE);
2858 set_head_size(p, nb);
2859 fREe(chunk2mem(remainder));
2860 }
2861
2862 check_inuse_chunk(p);
2863 return chunk2mem(p);
2864
2865}
2866
2867\f
2868
2869
2870/*
2871 valloc just invokes memalign with alignment argument equal
2872 to the page size of the system (or as near to this as can
2873 be figured out from all the includes/defines above.)
2874*/
2875
2876#if __STD_C
2877Void_t* vALLOc(size_t bytes)
2878#else
2879Void_t* vALLOc(bytes) size_t bytes;
2880#endif
2881{
2882 return mEMALIGn (malloc_getpagesize, bytes);
2883}
2884
2885/*
2886 pvalloc just invokes valloc for the nearest pagesize
2887 that will accommodate the request
2888*/
2889
2890
2891#if __STD_C
2892Void_t* pvALLOc(size_t bytes)
2893#else
2894Void_t* pvALLOc(bytes) size_t bytes;
2895#endif
2896{
2897 size_t pagesize = malloc_getpagesize;
2898 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2899}
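/*
  Worked example of the rounding above, assuming a 4096-byte page:
  pvALLOc(1) and pvALLOc(4096) both request exactly one page, since
  (1 + 4095) & ~4095 == (4096 + 4095) & ~4095 == 4096, while
  pvALLOc(4097) rounds up to two pages (8192 bytes).
*/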
2900
2901/*
2902
2903 calloc calls malloc, then zeroes out the allocated chunk.
2904
2905*/
2906
2907#if __STD_C
2908Void_t* cALLOc(size_t n, size_t elem_size)
2909#else
2910Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2911#endif
2912{
2913 mchunkptr p;
2914 INTERNAL_SIZE_T csz;
2915
2916 INTERNAL_SIZE_T sz = n * elem_size;
2917
2918
2919 /* check if expand_top called, in which case don't need to clear */
2920#if MORECORE_CLEARS
2921 mchunkptr oldtop = top;
2922 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2923#endif
2924 Void_t* mem = mALLOc (sz);
2925
2926 if ((long)n < 0) return NULL;
2927
2928 if (mem == NULL)
2929 return NULL;
2930 else
2931 {
2932 p = mem2chunk(mem);
2933
2934 /* Two optional cases in which clearing not necessary */
2935
2936
2937#if HAVE_MMAP
2938 if (chunk_is_mmapped(p)) return mem;
2939#endif
2940
2941 csz = chunksize(p);
2942
2943#if MORECORE_CLEARS
2944 if (p == oldtop && csz > oldtopsize)
2945 {
2946 /* clear only the bytes from non-freshly-sbrked memory */
2947 csz = oldtopsize;
2948 }
2949#endif
2950
2951 MALLOC_ZERO(mem, csz - SIZE_SZ);
2952 return mem;
2953 }
2954}
2955
2956/*
2957
2958 cfree just calls free. It is needed/defined on some systems
2959 that pair it with calloc, presumably for odd historical reasons.
2960
2961*/
2962
2963#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2964#if __STD_C
2965void cfree(Void_t *mem)
2966#else
2967void cfree(mem) Void_t *mem;
2968#endif
2969{
2970 fREe(mem);
2971}
2972#endif
2973
2974\f
2975
2976/*
2977
2978 Malloc_trim gives memory back to the system (via negative
2979 arguments to sbrk) if there is unused memory at the `high' end of
2980 the malloc pool. You can call this after freeing large blocks of
2981 memory to potentially reduce the system-level memory requirements
2982 of a program. However, it cannot guarantee to reduce memory. Under
2983 some allocation patterns, some large free blocks of memory will be
2984 locked between two used chunks, so they cannot be given back to
2985 the system.
2986
2987 The `pad' argument to malloc_trim represents the amount of free
2988 trailing space to leave untrimmed. If this argument is zero,
2989 only the minimum amount of memory to maintain internal data
2990 structures will be left (one page or less). Non-zero arguments
2991 can be supplied to maintain enough trailing space to service
2992 future expected allocations without having to re-obtain memory
2993 from the system.
2994
2995 Malloc_trim returns 1 if it actually released any memory, else 0.
2996
2997*/
2998
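/*
  Worked example of the `extra' computation below, assuming 4096-byte
  pages and 16-byte MINSIZE: with top_size == 20000 and pad == 0,
  extra = ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 16384, so
  four whole pages are handed back and 3616 bytes remain in top.
*/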
2999#if __STD_C
3000int malloc_trim(size_t pad)
3001#else
3002int malloc_trim(pad) size_t pad;
3003#endif
3004{
3005 long top_size; /* Amount of top-most memory */
3006 long extra; /* Amount to release */
3007 char* current_brk; /* address returned by pre-check sbrk call */
3008 char* new_brk; /* address returned by negative sbrk call */
3009
3010 unsigned long pagesz = malloc_getpagesize;
3011
3012 top_size = chunksize(top);
3013 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3014
3015 if (extra < (long)pagesz) /* Not enough memory to release */
3016 return 0;
3017
3018 else
3019 {
3020 /* Test to make sure no one else called sbrk */
3021 current_brk = (char*)(MORECORE (0));
3022 if (current_brk != (char*)(top) + top_size)
3023 return 0; /* Apparently we don't own memory; must fail */
3024
3025 else
3026 {
3027 new_brk = (char*)(MORECORE (-extra));
3028
3029 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3030 {
3031 /* Try to figure out what we have */
3032 current_brk = (char*)(MORECORE (0));
3033 top_size = current_brk - (char*)top;
3034 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3035 {
3036 sbrked_mem = current_brk - sbrk_base;
3037 set_head(top, top_size | PREV_INUSE);
3038 }
3039 check_chunk(top);
3040 return 0;
3041 }
3042
3043 else
3044 {
3045 /* Success. Adjust top accordingly. */
3046 set_head(top, (top_size - extra) | PREV_INUSE);
3047 sbrked_mem -= extra;
3048 check_chunk(top);
3049 return 1;
3050 }
3051 }
3052 }
3053}
3054
3055\f
3056
3057/*
3058 malloc_usable_size:
3059
3060 This routine tells you how many bytes you can actually use in an
3061 allocated chunk, which may be more than you requested (although
3062 often not). You can use this many bytes without worrying about
3063 overwriting other allocated objects. Not a particularly great
3064 programming practice, but still sometimes useful.
3065
3066*/
3067
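/*
  Hedged sketch (hypothetical, compiled out): the usable size is at
  least what was requested; e.g. with 4-byte SIZE_SZ a 10-byte request
  yields a 16-byte chunk of which 12 bytes are usable.
*/
#if 0
static void usable_size_demo(void)
{
  Void_t* p = mALLOc(10);
  if (p != NULL)
    assert(malloc_usable_size(p) >= 10); /* 12 for a 16-byte chunk */
  fREe(p);
}
#endif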
3068#if __STD_C
3069size_t malloc_usable_size(Void_t* mem)
3070#else
3071size_t malloc_usable_size(mem) Void_t* mem;
3072#endif
3073{
3074 mchunkptr p;
3075 if (mem == NULL)
3076 return 0;
3077 else
3078 {
3079 p = mem2chunk(mem);
3080 if(!chunk_is_mmapped(p))
3081 {
3082 if (!inuse(p)) return 0;
3083 check_inuse_chunk(p);
3084 return chunksize(p) - SIZE_SZ;
3085 }
3086 return chunksize(p) - 2*SIZE_SZ;
3087 }
3088}
3089
3090
3091\f
3092
3093/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3094
3095#ifdef DEBUG
3096static void malloc_update_mallinfo()
3097{
3098 int i;
3099 mbinptr b;
3100 mchunkptr p;
3101#ifdef DEBUG
3102 mchunkptr q;
3103#endif
3104
3105 INTERNAL_SIZE_T avail = chunksize(top);
3106 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3107
3108 for (i = 1; i < NAV; ++i)
3109 {
3110 b = bin_at(i);
3111 for (p = last(b); p != b; p = p->bk)
3112 {
3113#ifdef DEBUG
3114 check_free_chunk(p);
3115 for (q = next_chunk(p);
3116 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3117 q = next_chunk(q))
3118 check_inuse_chunk(q);
3119#endif
3120 avail += chunksize(p);
3121 navail++;
3122 }
3123 }
3124
3125 current_mallinfo.ordblks = navail;
3126 current_mallinfo.uordblks = sbrked_mem - avail;
3127 current_mallinfo.fordblks = avail;
3128 current_mallinfo.hblks = n_mmaps;
3129 current_mallinfo.hblkhd = mmapped_mem;
3130 current_mallinfo.keepcost = chunksize(top);
3131
3132}
3133#endif /* DEBUG */
3134
3135\f
3136
3137/*
3138
3139 malloc_stats:
3140
3141 Prints the amount of space obtained from the system (both
3142 via sbrk and mmap), the maximum amount (which may be more than
3143 current if malloc_trim and/or munmap got called), the maximum
3144 number of simultaneous mmap regions used, and the current number
3145 of bytes allocated via malloc (or realloc, etc) but not yet
3146 freed. (Note that this is the number of bytes allocated, not the
3147 number requested. It will be larger than the number requested
3148 because of alignment and bookkeeping overhead.)
3149
3150*/
3151
3152#ifdef DEBUG
3153void malloc_stats()
3154{
3155 malloc_update_mallinfo();
3156 printf("max system bytes = %10u\n",
3157 (unsigned int)(max_total_mem));
3158 printf("system bytes = %10u\n",
3159 (unsigned int)(sbrked_mem + mmapped_mem));
3160 printf("in use bytes = %10u\n",
3161 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3162#if HAVE_MMAP
3163 printf("max mmap regions = %10u\n",
3164 (unsigned int)max_n_mmaps);
3165#endif
3166}
3167#endif /* DEBUG */
3168
3169/*
3170 mallinfo returns a copy of updated current mallinfo.
3171*/
3172
3173#ifdef DEBUG
3174struct mallinfo mALLINFo()
3175{
3176 malloc_update_mallinfo();
3177 return current_mallinfo;
3178}
3179#endif /* DEBUG */
3180
3181
3182\f
3183
3184/*
3185 mallopt:
3186
3187 mallopt is the general SVID/XPG interface to tunable parameters.
3188 The format is to provide a (parameter-number, parameter-value) pair.
3189 mallopt then sets the corresponding parameter to the argument
3190 value if it can (i.e., so long as the value is meaningful),
3191 and returns 1 if successful else 0.
3192
3193 See descriptions of tunable parameters above.
3194
3195*/
3196
3197#if __STD_C
3198int mALLOPt(int param_number, int value)
3199#else
3200int mALLOPt(param_number, value) int param_number; int value;
3201#endif
3202{
3203 switch(param_number)
3204 {
3205 case M_TRIM_THRESHOLD:
3206 trim_threshold = value; return 1;
3207 case M_TOP_PAD:
3208 top_pad = value; return 1;
3209 case M_MMAP_THRESHOLD:
3210 mmap_threshold = value; return 1;
3211 case M_MMAP_MAX:
3212#if HAVE_MMAP
3213 n_mmaps_max = value; return 1;
3214#else
3215 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3216#endif
3217
3218 default:
3219 return 0;
3220 }
3221}
3222
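/*
  Usage sketch for mALLOPt above (hypothetical, compiled out): the
  parameter numbers are the M_* constants referenced in the switch,
  and unknown parameters are rejected with 0.
*/
#if 0
static void mallopt_demo(void)
{
  assert(mALLOPt(M_TOP_PAD, 4096) == 1); /* keep 4KB of slack in top */
  assert(mALLOPt(-1, 0) == 0);           /* unknown parameter: fail */
}
#endif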
3223/*
3224
3225History:
3226
3227 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
3228 * return null for negative arguments
3229 * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3230 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3231 (e.g. WIN32 platforms)
3232 * Clean up header file inclusion for WIN32 platforms
3233 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3234 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3235 memory allocation routines
3236 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3237 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3238 usage of 'assert' in non-WIN32 code
3239 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3240 avoid infinite loop
3241 * Always call 'fREe()' rather than 'free()'
3242
3243 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3244 * Fixed ordering problem with boundary-stamping
3245
3246 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3247 * Added pvalloc, as recommended by H.J. Liu
3248 * Added 64bit pointer support mainly from Wolfram Gloger
3249 * Added anonymously donated WIN32 sbrk emulation
3250 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3251 * malloc_extend_top: fix mask error that caused wastage after
3252 foreign sbrks
3253 * Add linux mremap support code from HJ Liu
3254
3255 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3256 * Integrated most documentation with the code.
3257 * Add support for mmap, with help from
3258 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3259 * Use last_remainder in more cases.
3260 * Pack bins using idea from colin@nyx10.cs.du.edu
3261 * Use ordered bins instead of best-fit threshold
3262 * Eliminate block-local decls to simplify tracing and debugging.
3263 * Support another case of realloc via move into top
3264 * Fix error occurring when initial sbrk_base not word-aligned.
3265 * Rely on page size for units instead of SBRK_UNIT to
3266 avoid surprises about sbrk alignment conventions.
3267 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3268 (raymond@es.ele.tue.nl) for the suggestion.
3269 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3270 * More precautions for cases where other routines call sbrk,
3271 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3272 * Added macros etc., allowing use in linux libc from
3273 H.J. Lu (hjl@gnu.ai.mit.edu)
3274 * Inverted this history list
3275
3276 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3277 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3278 * Removed all preallocation code since under current scheme
3279 the work required to undo bad preallocations exceeds
3280 the work saved in good cases for most test programs.
3281 * No longer use return list or unconsolidated bins since
3282 no scheme using them consistently outperforms those that don't
3283 given above changes.
3284 * Use best fit for very large chunks to prevent some worst-cases.
3285 * Added some support for debugging
3286
3287 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3288 * Removed footers when chunks are in use. Thanks to
3289 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3290
3291 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3292 * Added malloc_trim, with help from Wolfram Gloger
3293 (wmglo@Dent.MED.Uni-Muenchen.DE).
3294
3295 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3296
3297 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3298 * realloc: try to expand in both directions
3299 * malloc: swap order of clean-bin strategy;
3300 * realloc: only conditionally expand backwards
3301 * Try not to scavenge used bins
3302 * Use bin counts as a guide to preallocation
3303 * Occasionally bin return list chunks in first scan
3304 * Add a few optimizations from colin@nyx10.cs.du.edu
3305
3306 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3307 * faster bin computation & slightly different binning
3308 * merged all consolidations to one part of malloc proper
3309 (eliminating old malloc_find_space & malloc_clean_bin)
3310 * Scan 2 returns chunks (not just 1)
3311 * Propagate failure in realloc if malloc returns 0
3312 * Add stuff to allow compilation on non-ANSI compilers
3313 from kpv@research.att.com
3314
3315 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3316 * removed potential for odd address access in prev_chunk
3317 * removed dependency on getpagesize.h
3318 * misc cosmetics and a bit more internal documentation
3319 * anticosmetics: mangled names in macros to evade debugger strangeness
3320 * tested on sparc, hp-700, dec-mips, rs6000
3321 with gcc & native cc (hp, dec only) allowing
3322 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3323
3324 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3325 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3326 structure of old version, but most details differ.)
3327
3328*/