1 #include <common.h>
2
3 #ifdef CONFIG_SANDBOX
4 #define DEBUG
5 #endif
6
7 #if 0 /* Moved to malloc.h */
8 /* ---------- To make a malloc.h, start cutting here ------------ */
9
10 /*
11 A version of malloc/free/realloc written by Doug Lea and released to the
12 public domain. Send questions/comments/complaints/performance data
13 to dl@cs.oswego.edu
14
15 * VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
16
17 Note: There may be an updated version of this malloc obtainable at
18 ftp://g.oswego.edu/pub/misc/malloc.c
19 Check before installing!
20
21 * Why use this malloc?
22
23 This is not the fastest, most space-conserving, most portable, or
24 most tunable malloc ever written. However it is among the fastest
25 while also being among the most space-conserving, portable and tunable.
26 Consistent balance across these factors results in a good general-purpose
27 allocator. For a high-level description, see
28 http://g.oswego.edu/dl/html/malloc.html
29
30 * Synopsis of public routines
31
32 (Much fuller descriptions are contained in the program documentation below.)
33
34 malloc(size_t n);
35 Return a pointer to a newly allocated chunk of at least n bytes, or null
36 if no space is available.
37 free(Void_t* p);
38 Release the chunk of memory pointed to by p, or no effect if p is null.
39 realloc(Void_t* p, size_t n);
40 Return a pointer to a chunk of size n that contains the same data
41 as does chunk p up to the minimum of (n, p's size) bytes, or null
42 if no space is available. The returned pointer may or may not be
43 the same as p. If p is null, equivalent to malloc. Unless the
44 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
45 size argument of zero (re)allocates a minimum-sized chunk.
46 memalign(size_t alignment, size_t n);
47 Return a pointer to a newly allocated chunk of n bytes, aligned
48 in accord with the alignment argument, which must be a power of
49 two.
50 valloc(size_t n);
51 Equivalent to memalign(pagesize, n), where pagesize is the page
52 size of the system (or as near to this as can be figured out from
53 all the includes/defines below.)
54 pvalloc(size_t n);
55 Equivalent to valloc(minimum-page-that-holds(n)), that is,
56 round up n to nearest pagesize.
57 calloc(size_t unit, size_t quantity);
58 Returns a pointer to quantity * unit bytes, with all locations
59 set to zero.
60 cfree(Void_t* p);
61 Equivalent to free(p).
62 malloc_trim(size_t pad);
63 Release all but pad bytes of freed top-most memory back
64 to the system. Return 1 if successful, else 0.
65 malloc_usable_size(Void_t* p);
66       Report the number of usable allocated bytes associated with allocated
67 chunk p. This may or may not report more bytes than were requested,
68 due to alignment and minimum size constraints.
69 malloc_stats();
70 Prints brief summary statistics.
71 mallinfo()
72 Returns (by copy) a struct containing various summary statistics.
73 mallopt(int parameter_number, int parameter_value)
74 Changes one of the tunable parameters described below. Returns
75 1 if successful in changing the parameter, else 0.
76
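      A minimal usage sketch (an illustration added here, not part of
      Doug Lea's original documentation):

	char* p = (char*) malloc(100);    returns >= 100 usable bytes
	p = (char*) realloc(p, 200);      may move; data preserved
	free(p);                          releases the chunk
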
77 * Vital statistics:
78
79 Alignment: 8-byte
80 8 byte alignment is currently hardwired into the design. This
81 seems to suffice for all current machines and C compilers.
82
83 Assumed pointer representation: 4 or 8 bytes
84        Code for 8-byte pointers is untested by me but is reported to work
85        reliably by Wolfram Gloger, who contributed most of the
86        changes supporting this.
87
88 Assumed size_t representation: 4 or 8 bytes
89 Note that size_t is allowed to be 4 bytes even if pointers are 8.
90
91 Minimum overhead per allocated chunk: 4 or 8 bytes
92 Each malloced chunk has a hidden overhead of 4 bytes holding size
93 and status information.
94
95 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
96                            8-byte ptrs:  24/32 bytes (including 4/8 overhead)
97
98        When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
99        ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
100 needed; 4 (8) for a trailing size field
101 and 8 (16) bytes for free list pointers. Thus, the minimum
102 allocatable size is 16/24/32 bytes.
103
104 Even a request for zero bytes (i.e., malloc(0)) returns a
105 pointer to something of the minimum allocatable size.
106
107 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
108 8-byte size_t: 2^63 - 16 bytes
109
110 It is assumed that (possibly signed) size_t bit values suffice to
111 represent chunk sizes. `Possibly signed' is due to the fact
112 that `size_t' may be defined on a system as either a signed or
113 an unsigned type. To be conservative, values that would appear
114 as negative numbers are avoided.
115 Requests for sizes with a negative sign bit when the request
116       size is treated as a long will return null.
117
118 Maximum overhead wastage per allocated chunk: normally 15 bytes
119
120       Alignment demands, plus the minimum allocatable size restriction
121 make the normal worst-case wastage 15 bytes (i.e., up to 15
122 more bytes will be allocated than were requested in malloc), with
123 two exceptions:
124 1. Because requests for zero bytes allocate non-zero space,
125 the worst case wastage for a request of zero bytes is 24 bytes.
126 2. For requests >= mmap_threshold that are serviced via
127 mmap(), the worst case wastage is 8 bytes plus the remainder
128 from a system page (the minimal mmap unit); typically 4096 bytes.
129
130 * Limitations
131
132 Here are some features that are NOT currently supported
133
134 * No user-definable hooks for callbacks and the like.
135 * No automated mechanism for fully checking that all accesses
136 to malloced memory stay within their bounds.
137 * No support for compaction.
138
139 * Synopsis of compile-time options:
140
141 People have reported using previous versions of this malloc on all
142 versions of Unix, sometimes by tweaking some of the defines
143 below. It has been tested most extensively on Solaris and
144 Linux. It is also reported to work on WIN32 platforms.
145 People have also reported adapting this malloc for use in
146 stand-alone embedded systems.
147
148 The implementation is in straight, hand-tuned ANSI C. Among other
149 consequences, it uses a lot of macros. Because of this, to be at
150 all usable, this code should be compiled using an optimizing compiler
151 (for example gcc -O2) that can simplify expressions and control
152 paths.
153
154 __STD_C (default: derived from C compiler defines)
155 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
156 a C compiler sufficiently close to ANSI to get away with it.
157 DEBUG (default: NOT defined)
158 Define to enable debugging. Adds fairly extensive assertion-based
159 checking to help track down memory errors, but noticeably slows down
160 execution.
161 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
162 Define this if you think that realloc(p, 0) should be equivalent
163 to free(p). Otherwise, since malloc returns a unique pointer for
164 malloc(0), so does realloc(p, 0).
165 HAVE_MEMCPY (default: defined)
166 Define if you are not otherwise using ANSI STD C, but still
167 have memcpy and memset in your C library and want to use them.
168 Otherwise, simple internal versions are supplied.
169 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
170 Define as 1 if you want the C library versions of memset and
171 memcpy called in realloc and calloc (otherwise macro versions are used).
172 At least on some platforms, the simple macro versions usually
173 outperform libc versions.
174 HAVE_MMAP (default: defined as 1)
175 Define to non-zero to optionally make malloc() use mmap() to
176 allocate very large blocks.
177 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
178 Define to non-zero to optionally make realloc() use mremap() to
179 reallocate very large blocks.
180 malloc_getpagesize (default: derived from system #includes)
181 Either a constant or routine call returning the system page size.
182 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
183 Optionally define if you are on a system with a /usr/include/malloc.h
184 that declares struct mallinfo. It is not at all necessary to
185 define this even if you do, but will ensure consistency.
186 INTERNAL_SIZE_T (default: size_t)
187 Define to a 32-bit type (probably `unsigned int') if you are on a
188 64-bit machine, yet do not want or need to allow malloc requests of
189 greater than 2^31 to be handled. This saves space, especially for
190 very small chunks.
191 INTERNAL_LINUX_C_LIB (default: NOT defined)
192 Defined only when compiled as part of Linux libc.
193 Also note that there is some odd internal name-mangling via defines
194 (for example, internally, `malloc' is named `mALLOc') needed
195 when compiling in this case. These look funny but don't otherwise
196 affect anything.
197 WIN32 (default: undefined)
198 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
199 LACKS_UNISTD_H (default: undefined if not WIN32)
200 Define this if your system does not have a <unistd.h>.
201 LACKS_SYS_PARAM_H (default: undefined if not WIN32)
202 Define this if your system does not have a <sys/param.h>.
203 MORECORE (default: sbrk)
204 The name of the routine to call to obtain more memory from the system.
205 MORECORE_FAILURE (default: -1)
206 The value returned upon failure of MORECORE.
207 MORECORE_CLEARS (default 1)
208 true (1) if the routine mapped to MORECORE zeroes out memory (which
209 holds for sbrk).
210 DEFAULT_TRIM_THRESHOLD
211 DEFAULT_TOP_PAD
212 DEFAULT_MMAP_THRESHOLD
213 DEFAULT_MMAP_MAX
214 Default values of tunable parameters (described in detail below)
215 controlling interaction with host system routines (sbrk, mmap, etc).
216 These values may also be changed dynamically via mallopt(). The
217 preset defaults are those that give best performance for typical
218 programs/systems.
219 USE_DL_PREFIX (default: undefined)
220 Prefix all public routines with the string 'dl'. Useful to
221 quickly avoid procedure declaration conflicts and linker symbol
222 conflicts with existing memory allocation routines.
223
224
225 */
226
227
228
229 /* Preliminaries */
230
231 #ifndef __STD_C
232 #ifdef __STDC__
233 #define __STD_C 1
234 #else
235 #if __cplusplus
236 #define __STD_C 1
237 #else
238 #define __STD_C 0
239 #endif /*__cplusplus*/
240 #endif /*__STDC__*/
241 #endif /*__STD_C*/
242
243 #ifndef Void_t
244 #if (__STD_C || defined(WIN32))
245 #define Void_t void
246 #else
247 #define Void_t char
248 #endif
249 #endif /*Void_t*/
250
251 #if __STD_C
252 #include <stddef.h> /* for size_t */
253 #else
254 #include <sys/types.h>
255 #endif
256
257 #ifdef __cplusplus
258 extern "C" {
259 #endif
260
261 #include <stdio.h> /* needed for malloc_stats */
262
263
264 /*
265 Compile-time options
266 */
267
268
269 /*
270 Debugging:
271
272 Because freed chunks may be overwritten with link fields, this
273 malloc will often die when freed memory is overwritten by user
274 programs. This can be very effective (albeit in an annoying way)
275 in helping track down dangling pointers.
276
277 If you compile with -DDEBUG, a number of assertion checks are
278 enabled that will catch more memory errors. You probably won't be
279 able to make much sense of the actual assertion errors, but they
280 should help you locate incorrectly overwritten memory. The
281 checking is fairly extensive, and will slow down execution
282 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
283 attempt to check every non-mmapped allocated and free chunk in the
284   course of computing the summaries. (By nature, mmapped regions
285 cannot be checked very much automatically.)
286
287 Setting DEBUG may also be helpful if you are trying to modify
288 this code. The assertions in the check routines spell out in more
289 detail the assumptions and invariants underlying the algorithms.
290
291 */
292
293 /*
294 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
295 of chunk sizes. On a 64-bit machine, you can reduce malloc
296 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
297 at the expense of not being able to handle requests greater than
298 2^31. This limitation is hardly ever a concern; you are encouraged
299 to set this. However, the default version is the same as size_t.
300 */
301
302 #ifndef INTERNAL_SIZE_T
303 #define INTERNAL_SIZE_T size_t
304 #endif
305
306 /*
307 REALLOC_ZERO_BYTES_FREES should be set if a call to
308 realloc with zero bytes should be the same as a call to free.
309 Some people think it should. Otherwise, since this malloc
310 returns a unique pointer for malloc(0), so does realloc(p, 0).
311 */
312
313
314 /* #define REALLOC_ZERO_BYTES_FREES */
315
316
317 /*
318   WIN32 causes an emulation of sbrk to be compiled in;
319   mmap-based options are not currently supported in WIN32.
320 */
321
322 /* #define WIN32 */
323 #ifdef WIN32
324 #define MORECORE wsbrk
325 #define HAVE_MMAP 0
326
327 #define LACKS_UNISTD_H
328 #define LACKS_SYS_PARAM_H
329
330 /*
331 Include 'windows.h' to get the necessary declarations for the
332 Microsoft Visual C++ data structures and routines used in the 'sbrk'
333 emulation.
334
335 Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
336 Visual C++ header files are included.
337 */
338 #define WIN32_LEAN_AND_MEAN
339 #include <windows.h>
340 #endif
341
342
343 /*
344 HAVE_MEMCPY should be defined if you are not otherwise using
345 ANSI STD C, but still have memcpy and memset in your C library
346 and want to use them in calloc and realloc. Otherwise simple
347 macro versions are defined here.
348
349 USE_MEMCPY should be defined as 1 if you actually want to
350 have memset and memcpy called. People report that the macro
351 versions are often enough faster than libc versions on many
352 systems that it is better to use them.
353
354 */
355
356 #define HAVE_MEMCPY
357
358 #ifndef USE_MEMCPY
359 #ifdef HAVE_MEMCPY
360 #define USE_MEMCPY 1
361 #else
362 #define USE_MEMCPY 0
363 #endif
364 #endif
365
366 #if (__STD_C || defined(HAVE_MEMCPY))
367
368 #if __STD_C
369 void* memset(void*, int, size_t);
370 void* memcpy(void*, const void*, size_t);
371 #else
372 #ifdef WIN32
373 /* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
374 /* 'windows.h' */
375 #else
376 Void_t* memset();
377 Void_t* memcpy();
378 #endif
379 #endif
380 #endif
381
382 #if USE_MEMCPY
383
384 /* The following macros are only invoked with (2n+1)-multiples of
385 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
386 for fast inline execution when n is small. */
387
388 #define MALLOC_ZERO(charp, nbytes) \
389 do { \
390 INTERNAL_SIZE_T mzsz = (nbytes); \
391 if(mzsz <= 9*sizeof(mzsz)) { \
392 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
393 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
394 *mz++ = 0; \
395 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
396 *mz++ = 0; \
397 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
398 *mz++ = 0; }}} \
399 *mz++ = 0; \
400 *mz++ = 0; \
401 *mz = 0; \
402 } else memset((charp), 0, mzsz); \
403 } while(0)
404
405 #define MALLOC_COPY(dest,src,nbytes) \
406 do { \
407 INTERNAL_SIZE_T mcsz = (nbytes); \
408 if(mcsz <= 9*sizeof(mcsz)) { \
409 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
410 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
411 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
412 *mcdst++ = *mcsrc++; \
413 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
414 *mcdst++ = *mcsrc++; \
415 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
416 *mcdst++ = *mcsrc++; }}} \
417 *mcdst++ = *mcsrc++; \
418 *mcdst++ = *mcsrc++; \
419 *mcdst = *mcsrc ; \
420 } else memcpy(dest, src, mcsz); \
421 } while(0)
422
423 #else /* !USE_MEMCPY */
424
425 /* Use Duff's device for good zeroing/copying performance. */
426
427 #define MALLOC_ZERO(charp, nbytes) \
428 do { \
429 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
430 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
431 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
432 switch (mctmp) { \
433 case 0: for(;;) { *mzp++ = 0; \
434 case 7: *mzp++ = 0; \
435 case 6: *mzp++ = 0; \
436 case 5: *mzp++ = 0; \
437 case 4: *mzp++ = 0; \
438 case 3: *mzp++ = 0; \
439 case 2: *mzp++ = 0; \
440 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
441 } \
442 } while(0)
443
444 #define MALLOC_COPY(dest,src,nbytes) \
445 do { \
446 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
447 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
448 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
449 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
450 switch (mctmp) { \
451 case 0: for(;;) { *mcdst++ = *mcsrc++; \
452 case 7: *mcdst++ = *mcsrc++; \
453 case 6: *mcdst++ = *mcsrc++; \
454 case 5: *mcdst++ = *mcsrc++; \
455 case 4: *mcdst++ = *mcsrc++; \
456 case 3: *mcdst++ = *mcsrc++; \
457 case 2: *mcdst++ = *mcsrc++; \
458 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
459 } \
460 } while(0)
461
462 #endif
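/*
  Illustration (never compiled): how the helpers above are meant to be
  invoked.  For the USE_MEMCPY macro versions, nbytes must be a (2n+1)
  multiple of INTERNAL_SIZE_T units, as noted above.  This is a sketch
  for exposition only, not code the allocator itself uses.
*/
#if 0
static void example_zero_and_copy(void)
{
  INTERNAL_SIZE_T src[9] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
  INTERNAL_SIZE_T dst[9];

  MALLOC_ZERO((char*)dst, sizeof(dst));             /* zero 9 words */
  MALLOC_COPY((char*)dst, (char*)src, sizeof(src)); /* copy 9 words */
}
#endif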
463
464
465 /*
466 Define HAVE_MMAP to optionally make malloc() use mmap() to
467 allocate very large blocks. These will be returned to the
468 operating system immediately after a free().
469 */
470
471 #ifndef HAVE_MMAP
472 #define HAVE_MMAP 1
473 #endif
474
475 /*
476 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
477 large blocks. This is currently only possible on Linux with
478 kernel versions newer than 1.3.77.
479 */
480
481 #ifndef HAVE_MREMAP
482 #ifdef INTERNAL_LINUX_C_LIB
483 #define HAVE_MREMAP 1
484 #else
485 #define HAVE_MREMAP 0
486 #endif
487 #endif
488
489 #if HAVE_MMAP
490
491 #include <unistd.h>
492 #include <fcntl.h>
493 #include <sys/mman.h>
494
495 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
496 #define MAP_ANONYMOUS MAP_ANON
497 #endif
498
499 #endif /* HAVE_MMAP */
500
501 /*
502 Access to system page size. To the extent possible, this malloc
503 manages memory from the system in page-size units.
504
505 The following mechanics for getpagesize were adapted from
506 bsd/gnu getpagesize.h
507 */
508
509 #ifndef LACKS_UNISTD_H
510 # include <unistd.h>
511 #endif
512
513 #ifndef malloc_getpagesize
514 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
515 # ifndef _SC_PAGE_SIZE
516 # define _SC_PAGE_SIZE _SC_PAGESIZE
517 # endif
518 # endif
519 # ifdef _SC_PAGE_SIZE
520 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
521 # else
522 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
523 extern size_t getpagesize();
524 # define malloc_getpagesize getpagesize()
525 # else
526 # ifdef WIN32
527 # define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
528 # else
529 # ifndef LACKS_SYS_PARAM_H
530 # include <sys/param.h>
531 # endif
532 # ifdef EXEC_PAGESIZE
533 # define malloc_getpagesize EXEC_PAGESIZE
534 # else
535 # ifdef NBPG
536 # ifndef CLSIZE
537 # define malloc_getpagesize NBPG
538 # else
539 # define malloc_getpagesize (NBPG * CLSIZE)
540 # endif
541 # else
542 # ifdef NBPC
543 # define malloc_getpagesize NBPC
544 # else
545 # ifdef PAGESIZE
546 # define malloc_getpagesize PAGESIZE
547 # else
548 # define malloc_getpagesize (4096) /* just guess */
549 # endif
550 # endif
551 # endif
552 # endif
553 # endif
554 # endif
555 # endif
556 #endif
557
558
559 /*
560
561 This version of malloc supports the standard SVID/XPG mallinfo
562 routine that returns a struct containing the same kind of
563 information you can get from malloc_stats. It should work on
564 any SVID/XPG compliant system that has a /usr/include/malloc.h
565 defining struct mallinfo. (If you'd like to install such a thing
566 yourself, cut out the preliminary declarations as described above
567 and below and save them in a malloc.h file. But there's no
568 compelling reason to bother to do this.)
569
570 The main declaration needed is the mallinfo struct that is returned
571   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
572   bunch of fields, most of which are not even meaningful in this
573   version of malloc. Some of these fields are instead filled by
574 mallinfo() with other numbers that might possibly be of interest.
575
576 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
577 /usr/include/malloc.h file that includes a declaration of struct
578 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
579 version is declared below. These must be precisely the same for
580 mallinfo() to work.
581
582 */
583
584 /* #define HAVE_USR_INCLUDE_MALLOC_H */
585
586 #if HAVE_USR_INCLUDE_MALLOC_H
587 #include "/usr/include/malloc.h"
588 #else
589
590 /* SVID2/XPG mallinfo structure */
591
592 struct mallinfo {
593 int arena; /* total space allocated from system */
594 int ordblks; /* number of non-inuse chunks */
595 int smblks; /* unused -- always zero */
596 int hblks; /* number of mmapped regions */
597 int hblkhd; /* total space in mmapped regions */
598 int usmblks; /* unused -- always zero */
599 int fsmblks; /* unused -- always zero */
600 int uordblks; /* total allocated space */
601 int fordblks; /* total non-inuse space */
602 int keepcost; /* top-most, releasable (via malloc_trim) space */
603 };
604
605 /* SVID2/XPG mallopt options */
606
607 #define M_MXFAST 1 /* UNUSED in this malloc */
608 #define M_NLBLKS 2 /* UNUSED in this malloc */
609 #define M_GRAIN 3 /* UNUSED in this malloc */
610 #define M_KEEP 4 /* UNUSED in this malloc */
611
612 #endif
613
614 /* mallopt options that actually do something */
615
616 #define M_TRIM_THRESHOLD -1
617 #define M_TOP_PAD -2
618 #define M_MMAP_THRESHOLD -3
619 #define M_MMAP_MAX -4
620
621
622 #ifndef DEFAULT_TRIM_THRESHOLD
623 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
624 #endif
625
626 /*
627 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
628 to keep before releasing via malloc_trim in free().
629
630 Automatic trimming is mainly useful in long-lived programs.
631 Because trimming via sbrk can be slow on some systems, and can
632 sometimes be wasteful (in cases where programs immediately
633 afterward allocate more large chunks) the value should be high
634 enough so that your overall system performance would improve by
635 releasing.
636
637 The trim threshold and the mmap control parameters (see below)
638 can be traded off with one another. Trimming and mmapping are
639 two different ways of releasing unused memory back to the
640 system. Between these two, it is often possible to keep
641 system-level demands of a long-lived program down to a bare
642 minimum. For example, in one test suite of sessions measuring
643 the XF86 X server on Linux, using a trim threshold of 128K and a
644 mmap threshold of 192K led to near-minimal long term resource
645 consumption.
646
647 If you are using this malloc in a long-lived program, it should
648 pay to experiment with these values. As a rough guide, you
649   might set it to a value close to the average size of a process
650 (program) running on your system. Releasing this much memory
651 would allow such a process to run in memory. Generally, it's
652   worth it to tune for trimming rather than memory mapping when a
653 program undergoes phases where several large chunks are
654 allocated and released in ways that can reuse each other's
655 storage, perhaps mixed with phases where there are no such
656 chunks at all. And in well-behaved long-lived programs,
657 controlling release of large blocks via trimming versus mapping
658 is usually faster.
659
660 However, in most programs, these parameters serve mainly as
661 protection against the system-level effects of carrying around
662 massive amounts of unneeded memory. Since frequent calls to
663 sbrk, mmap, and munmap otherwise degrade performance, the default
664 parameters are set to relatively high values that serve only as
665 safeguards.
666
667 The default trim value is high enough to cause trimming only in
668 fairly extreme (by current memory consumption standards) cases.
669 It must be greater than page size to have any useful effect. To
670   disable trimming completely, you can set it to (unsigned long)(-1).
671
672
673 */
674
675
676 #ifndef DEFAULT_TOP_PAD
677 #define DEFAULT_TOP_PAD (0)
678 #endif
679
680 /*
681 M_TOP_PAD is the amount of extra `padding' space to allocate or
682 retain whenever sbrk is called. It is used in two ways internally:
683
684 * When sbrk is called to extend the top of the arena to satisfy
685 a new malloc request, this much padding is added to the sbrk
686 request.
687
688 * When malloc_trim is called automatically from free(),
689 it is used as the `pad' argument.
690
691 In both cases, the actual amount of padding is rounded
692 so that the end of the arena is always a system page boundary.
693
694 The main reason for using padding is to avoid calling sbrk so
695 often. Having even a small pad greatly reduces the likelihood
696 that nearly every malloc request during program start-up (or
697 after trimming) will invoke sbrk, which needlessly wastes
698 time.
699
700 Automatic rounding-up to page-size units is normally sufficient
701 to avoid measurable overhead, so the default is 0. However, in
702 systems where sbrk is relatively slow, it can pay to increase
703 this value, at the expense of carrying around more memory than
704 the program needs.
705
706 */
707
708
709 #ifndef DEFAULT_MMAP_THRESHOLD
710 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
711 #endif
712
713 /*
714
715 M_MMAP_THRESHOLD is the request size threshold for using mmap()
716 to service a request. Requests of at least this size that cannot
717 be allocated using already-existing space will be serviced via mmap.
718 (If enough normal freed space already exists it is used instead.)
719
720 Using mmap segregates relatively large chunks of memory so that
721 they can be individually obtained and released from the host
722 system. A request serviced through mmap is never reused by any
723 other request (at least not directly; the system may just so
724 happen to remap successive requests to the same locations).
725
726 Segregating space in this way has the benefit that mmapped space
727 can ALWAYS be individually released back to the system, which
728 helps keep the system level memory demands of a long-lived
729 program low. Mapped memory can never become `locked' between
730 other chunks, as can happen with normally allocated chunks, which
731   means that even trimming via malloc_trim would not release them.
732
733 However, it has the disadvantages that:
734
735 1. The space cannot be reclaimed, consolidated, and then
736 used to service later requests, as happens with normal chunks.
737 2. It can lead to more wastage because of mmap page alignment
738 requirements
739 3. It causes malloc performance to be more dependent on host
740 system memory management support routines which may vary in
741 implementation quality and may impose arbitrary
742 limitations. Generally, servicing a request via normal
743 malloc steps is faster than going through a system's mmap.
744
745 All together, these considerations should lead you to use mmap
746 only for relatively large requests.
747
748
749 */
750
751
752 #ifndef DEFAULT_MMAP_MAX
753 #if HAVE_MMAP
754 #define DEFAULT_MMAP_MAX (64)
755 #else
756 #define DEFAULT_MMAP_MAX (0)
757 #endif
758 #endif
759
760 /*
761 M_MMAP_MAX is the maximum number of requests to simultaneously
762 service using mmap. This parameter exists because:
763
764 1. Some systems have a limited number of internal tables for
765 use by mmap.
766 2. In most systems, overreliance on mmap can degrade overall
767 performance.
768 3. If a program allocates many large regions, it is probably
769 better off using normal sbrk-based allocation routines that
770 can reclaim and reallocate normal heap memory. Using a
771 small value allows transition into this mode after the
772 first few allocations.
773
774 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
775 the default value is 0, and attempts to set it to non-zero values
776 in mallopt will fail.
777 */
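/*
  Illustration (never compiled): tuning the parameters above via
  mallopt().  The values shown are arbitrary examples, not
  recommendations.
*/
#if 0
  mallopt(M_TRIM_THRESHOLD, 64 * 1024);   /* trim sooner than the default */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* mmap only larger requests */
  mallopt(M_MMAP_MAX, 0);                 /* disable mmap use entirely */
#endif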
778
779
780 /*
781 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
782 Useful to quickly avoid procedure declaration conflicts and linker
783 symbol conflicts with existing memory allocation routines.
784
785 */
786
787 /* #define USE_DL_PREFIX */
788
789
790 /*
791
792 Special defines for linux libc
793
794 Except when compiled using these special defines for Linux libc
795 using weak aliases, this malloc is NOT designed to work in
796 multithreaded applications. No semaphores or other concurrency
797 control are provided to ensure that multiple malloc or free calls
798   don't run at the same time, which could be disastrous. A single
799 semaphore could be used across malloc, realloc, and free (which is
800 essentially the effect of the linux weak alias approach). It would
801 be hard to obtain finer granularity.
802
803 */
804
805
806 #ifdef INTERNAL_LINUX_C_LIB
807
808 #if __STD_C
809
810 Void_t * __default_morecore_init (ptrdiff_t);
811 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
812
813 #else
814
815 Void_t * __default_morecore_init ();
816 Void_t *(*__morecore)() = __default_morecore_init;
817
818 #endif
819
820 #define MORECORE (*__morecore)
821 #define MORECORE_FAILURE 0
822 #define MORECORE_CLEARS 1
823
824 #else /* INTERNAL_LINUX_C_LIB */
825
826 #if __STD_C
827 extern Void_t* sbrk(ptrdiff_t);
828 #else
829 extern Void_t* sbrk();
830 #endif
831
832 #ifndef MORECORE
833 #define MORECORE sbrk
834 #endif
835
836 #ifndef MORECORE_FAILURE
837 #define MORECORE_FAILURE -1
838 #endif
839
840 #ifndef MORECORE_CLEARS
841 #define MORECORE_CLEARS 1
842 #endif
843
844 #endif /* INTERNAL_LINUX_C_LIB */
845
846 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
847
848 #define cALLOc __libc_calloc
849 #define fREe __libc_free
850 #define mALLOc __libc_malloc
851 #define mEMALIGn __libc_memalign
852 #define rEALLOc __libc_realloc
853 #define vALLOc __libc_valloc
854 #define pvALLOc __libc_pvalloc
855 #define mALLINFo __libc_mallinfo
856 #define mALLOPt __libc_mallopt
857
858 #pragma weak calloc = __libc_calloc
859 #pragma weak free = __libc_free
860 #pragma weak cfree = __libc_free
861 #pragma weak malloc = __libc_malloc
862 #pragma weak memalign = __libc_memalign
863 #pragma weak realloc = __libc_realloc
864 #pragma weak valloc = __libc_valloc
865 #pragma weak pvalloc = __libc_pvalloc
866 #pragma weak mallinfo = __libc_mallinfo
867 #pragma weak mallopt = __libc_mallopt
868
869 #else
870
871 #ifdef USE_DL_PREFIX
872 #define cALLOc dlcalloc
873 #define fREe dlfree
874 #define mALLOc dlmalloc
875 #define mEMALIGn dlmemalign
876 #define rEALLOc dlrealloc
877 #define vALLOc dlvalloc
878 #define pvALLOc dlpvalloc
879 #define mALLINFo dlmallinfo
880 #define mALLOPt dlmallopt
881 #else /* USE_DL_PREFIX */
882 #define cALLOc calloc
883 #define fREe free
884 #define mALLOc malloc
885 #define mEMALIGn memalign
886 #define rEALLOc realloc
887 #define vALLOc valloc
888 #define pvALLOc pvalloc
889 #define mALLINFo mallinfo
890 #define mALLOPt mallopt
891 #endif /* USE_DL_PREFIX */
892
893 #endif
894
895 /* Public routines */
896
897 #if __STD_C
898
899 Void_t* mALLOc(size_t);
900 void fREe(Void_t*);
901 Void_t* rEALLOc(Void_t*, size_t);
902 Void_t* mEMALIGn(size_t, size_t);
903 Void_t* vALLOc(size_t);
904 Void_t* pvALLOc(size_t);
905 Void_t* cALLOc(size_t, size_t);
906 void cfree(Void_t*);
907 int malloc_trim(size_t);
908 size_t malloc_usable_size(Void_t*);
909 void malloc_stats();
910 int mALLOPt(int, int);
911 struct mallinfo mALLINFo(void);
912 #else
913 Void_t* mALLOc();
914 void fREe();
915 Void_t* rEALLOc();
916 Void_t* mEMALIGn();
917 Void_t* vALLOc();
918 Void_t* pvALLOc();
919 Void_t* cALLOc();
920 void cfree();
921 int malloc_trim();
922 size_t malloc_usable_size();
923 void malloc_stats();
924 int mALLOPt();
925 struct mallinfo mALLINFo();
926 #endif
927
928
929 #ifdef __cplusplus
930 }; /* end of extern "C" */
931 #endif
932
933 /* ---------- To make a malloc.h, end cutting here ------------ */
934 #endif /* 0 */ /* Moved to malloc.h */
935
936 #include <malloc.h>
937 #include <asm/io.h>
938
939 #ifdef DEBUG
940 #if __STD_C
941 static void malloc_update_mallinfo (void);
942 void malloc_stats (void);
943 #else
944 static void malloc_update_mallinfo ();
945 void malloc_stats();
946 #endif
947 #endif /* DEBUG */
948
949 DECLARE_GLOBAL_DATA_PTR;
950
951 /*
952 Emulation of sbrk for WIN32
953 All code within the ifdef WIN32 is untested by me.
954
955 Thanks to Martin Fong and others for supplying this.
956 */
957
958
959 #ifdef WIN32
960
961 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
962 ~(malloc_getpagesize-1))
963 #define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
964
965 /* reserve 64MB to ensure large contiguous space */
966 #define RESERVED_SIZE (1024*1024*64)
967 #define NEXT_SIZE (2048*1024)
968 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
969
970 struct GmListElement;
971 typedef struct GmListElement GmListElement;
972
973 struct GmListElement
974 {
975 GmListElement* next;
976 void* base;
977 };
978
979 static GmListElement* head = 0;
980 static unsigned int gNextAddress = 0;
981 static unsigned int gAddressBase = 0;
982 static unsigned int gAllocatedSize = 0;
983
984 static
985 GmListElement* makeGmListElement (void* bas)
986 {
987 GmListElement* this;
988 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
989 assert (this);
990 if (this)
991 {
992 this->base = bas;
993 this->next = head;
994 head = this;
995 }
996 return this;
997 }
998
999 void gcleanup ()
1000 {
1001 BOOL rval;
1002 assert ( (head == NULL) || (head->base == (void*)gAddressBase));
1003 if (gAddressBase && (gNextAddress - gAddressBase))
1004 {
1005 rval = VirtualFree ((void*)gAddressBase,
1006 gNextAddress - gAddressBase,
1007 MEM_DECOMMIT);
1008 assert (rval);
1009 }
1010 while (head)
1011 {
1012 GmListElement* next = head->next;
1013 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1014 assert (rval);
1015 LocalFree (head);
1016 head = next;
1017 }
1018 }
1019
1020 static
1021 void* findRegion (void* start_address, unsigned long size)
1022 {
1023 MEMORY_BASIC_INFORMATION info;
1024 if (size >= TOP_MEMORY) return NULL;
1025
1026 while ((unsigned long)start_address + size < TOP_MEMORY)
1027 {
1028 VirtualQuery (start_address, &info, sizeof (info));
1029 if ((info.State == MEM_FREE) && (info.RegionSize >= size))
1030 return start_address;
1031 else
1032 {
1033 /* Requested region is not available so see if the */
1034 /* next region is available. Set 'start_address' */
1035 /* to the next region and call 'VirtualQuery()' */
1036 /* again. */
1037
1038 start_address = (char*)info.BaseAddress + info.RegionSize;
1039
1040 /* Make sure we start looking for the next region */
1041 /* on the *next* 64K boundary. Otherwise, even if */
1042 /* the new region is free according to */
1043 /* 'VirtualQuery()', the subsequent call to */
1044 /* 'VirtualAlloc()' (which follows the call to */
1045 /* this routine in 'wsbrk()') will round *down* */
1046 /* the requested address to a 64K boundary which */
1047 /* we already know is an address in the */
1048 /* unavailable region. Thus, the subsequent call */
1049 /* to 'VirtualAlloc()' will fail and bring us back */
1050 /* here, causing us to go into an infinite loop. */
1051
1052 start_address =
1053 (void *) AlignPage64K((unsigned long) start_address);
1054 }
1055 }
1056 return NULL;
1057
1058 }
1059
1060
1061 void* wsbrk (long size)
1062 {
1063 void* tmp;
1064 if (size > 0)
1065 {
1066 if (gAddressBase == 0)
1067 {
1068 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1069 gNextAddress = gAddressBase =
1070 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1071 MEM_RESERVE, PAGE_NOACCESS);
1072 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1073 gAllocatedSize))
1074 {
1075 long new_size = max (NEXT_SIZE, AlignPage (size));
1076 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1077 do
1078 {
1079 new_address = findRegion (new_address, new_size);
1080
1081 if (new_address == 0)
1082 return (void*)-1;
1083
1084 gAddressBase = gNextAddress =
1085 (unsigned int)VirtualAlloc (new_address, new_size,
1086 MEM_RESERVE, PAGE_NOACCESS);
1087 /* repeat in case of race condition */
1088 /* The region that we found has been snagged */
1089 /* by another thread */
1090 }
1091 while (gAddressBase == 0);
1092
1093 assert (new_address == (void*)gAddressBase);
1094
1095 gAllocatedSize = new_size;
1096
1097 if (!makeGmListElement ((void*)gAddressBase))
1098 return (void*)-1;
1099 }
1100 if ((size + gNextAddress) > AlignPage (gNextAddress))
1101 {
1102 void* res;
1103 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1104 (size + gNextAddress -
1105 AlignPage (gNextAddress)),
1106 MEM_COMMIT, PAGE_READWRITE);
1107 if (res == 0)
1108 return (void*)-1;
1109 }
1110 tmp = (void*)gNextAddress;
1111 gNextAddress = (unsigned int)tmp + size;
1112 return tmp;
1113 }
1114 else if (size < 0)
1115 {
1116 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1117 /* Trim by releasing the virtual memory */
1118 if (alignedGoal >= gAddressBase)
1119 {
1120 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1121 MEM_DECOMMIT);
1122 gNextAddress = gNextAddress + size;
1123 return (void*)gNextAddress;
1124 }
1125 else
1126 {
1127 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1128 MEM_DECOMMIT);
1129 gNextAddress = gAddressBase;
1130 return (void*)-1;
1131 }
1132 }
1133 else
1134 {
1135 return (void*)gNextAddress;
1136 }
1137 }
1138
1139 #endif
1140
1141
1142
1143 /*
1144 Type declarations
1145 */
1146
1147
1148 struct malloc_chunk
1149 {
1150 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1151 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1152 struct malloc_chunk* fd; /* double links -- used only if free. */
1153 struct malloc_chunk* bk;
1154 } __attribute__((__may_alias__)) ;
1155
1156 typedef struct malloc_chunk* mchunkptr;
1157
1158 /*
1159
1160 malloc_chunk details:
1161
1162 (The following includes lightly edited explanations by Colin Plumb.)
1163
1164 Chunks of memory are maintained using a `boundary tag' method as
1165 described in e.g., Knuth or Standish. (See the paper by Paul
1166 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1167 survey of such techniques.) Sizes of free chunks are stored both
1168 in the front of each chunk and at the end. This makes
1169 consolidating fragmented chunks into bigger chunks very fast. The
1170 size fields also hold bits representing whether chunks are free or
1171 in use.
1172
1173 An allocated chunk looks like this:
1174
1175
1176 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1177 | Size of previous chunk, if allocated | |
1178 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1179 | Size of chunk, in bytes |P|
1180 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1181 | User data starts here... .
1182 . .
1183             .             (malloc_usable_size() bytes)              .
1184 . |
1185 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1186 | Size of chunk |
1187 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1188
1189
1190 Where "chunk" is the front of the chunk for the purpose of most of
1191 the malloc code, but "mem" is the pointer that is returned to the
1192 user. "Nextchunk" is the beginning of the next contiguous chunk.
1193
1194     Chunks always begin on even word boundaries, so the mem portion
1195 (which is returned to the user) is also on an even word boundary, and
1196 thus double-word aligned.
1197
1198 Free chunks are stored in circular doubly-linked lists, and look like this:
1199
1200 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1201 | Size of previous chunk |
1202 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1203 `head:' | Size of chunk, in bytes |P|
1204 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1205 | Forward pointer to next chunk in list |
1206 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1207 | Back pointer to previous chunk in list |
1208 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1209 | Unused space (may be 0 bytes long) .
1210 . .
1211 . |
1212 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1213 `foot:' | Size of chunk, in bytes |
1214 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1215
1216 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1217 chunk size (which is always a multiple of two words), is an in-use
1218 bit for the *previous* chunk. If that bit is *clear*, then the
1219 word before the current chunk size contains the previous chunk
1220 size, and can be used to find the front of the previous chunk.
1221 (The very first chunk allocated always has this bit set,
1222 preventing access to non-existent (or non-owned) memory.)
1223
1224 Note that the `foot' of the current chunk is actually represented
1225 as the prev_size of the NEXT chunk. (This makes it easier to
1226 deal with alignments etc).
1227
1228 The two exceptions to all this are
1229
1230 1. The special chunk `top', which doesn't bother using the
1231 trailing size field since there is no
1232 next contiguous chunk that would have to index off it. (After
1233 initialization, `top' is forced to always exist. If it would
1234 become less than MINSIZE bytes long, it is replenished via
1235 malloc_extend_top.)
1236
1237 2. Chunks allocated via mmap, which have the second-lowest-order
1238 bit (IS_MMAPPED) set in their size fields. Because they are
1239 never merged or traversed from any other chunk, they have no
1240 foot size or inuse information.
1241
1242 Available chunks are kept in any of several places (all declared below):
1243
1244 * `av': An array of chunks serving as bin headers for consolidated
1245 chunks. Each bin is doubly linked. The bins are approximately
1246 proportionally (log) spaced. There are a lot of these bins
1247 (128). This may look excessive, but works very well in
1248 practice. All procedures maintain the invariant that no
1249 consolidated chunk physically borders another one. Chunks in
1250 bins are kept in size order, with ties going to the
1251 approximately least recently used chunk.
1252
1253 The chunks in each bin are maintained in decreasing sorted order by
1254 size. This is irrelevant for the small bins, which all contain
1255 the same-sized chunks, but facilitates best-fit allocation for
1256 larger chunks. (These lists are just sequential. Keeping them in
1257 order almost never requires enough traversal to warrant using
1258 fancier ordered data structures.) Chunks of the same size are
1259 linked with the most recently freed at the front, and allocations
1260 are taken from the back. This results in LRU or FIFO allocation
1261 order, which tends to give each chunk an equal opportunity to be
1262 consolidated with adjacent freed chunks, resulting in larger free
1263 chunks and less fragmentation.
1264
1265 * `top': The top-most available chunk (i.e., the one bordering the
1266 end of available memory) is treated specially. It is never
1267 included in any bin, is used only if no other chunk is
1268 available, and is released back to the system if it is very
1269 large (see M_TRIM_THRESHOLD).
1270
1271 * `last_remainder': A bin holding only the remainder of the
1272 most recently split (non-top) chunk. This bin is checked
1273 before other non-fitting chunks, so as to provide better
1274 locality for runs of sequentially allocated chunks.
1275
1276 * Implicitly, through the host system's memory mapping tables.
1277 If supported, requests greater than a threshold are usually
1278 serviced via calls to mmap, and then later released via munmap.
1279
1280 */
1281
1282 /* sizes, alignments */
1283
1284 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1285 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1286 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1287 #define MINSIZE (sizeof(struct malloc_chunk))
1288
1289 /* conversion from malloc headers to user pointers, and back */
1290
1291 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1292 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1293
1294 /* pad request bytes into a usable size */
1295
1296 #define request2size(req) \
1297 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1298 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1299 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1300
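/*
  Worked example (assuming 4-byte INTERNAL_SIZE_T, so SIZE_SZ == 4 and
  MALLOC_ALIGNMENT == 8): request2size(20) == (20 + 4 + 7) & ~7 == 24,
  i.e. the 20 requested bytes plus the 4-byte size field, rounded up to
  8-byte alignment.  Requests so small that this sum would fall below
  MINSIZE are bumped up to MINSIZE.
*/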
1301 /* Check if m has acceptable alignment */
1302
1303 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1304
1305
1306
1307
1308 /*
1309 Physical chunk operations
1310 */
1311
1312
1313 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1314
1315 #define PREV_INUSE 0x1
1316
1317 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1318
1319 #define IS_MMAPPED 0x2
1320
1321 /* Bits to mask off when extracting size */
1322
1323 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1324
1325
1326 /* Ptr to next physical malloc_chunk. */
1327
1328 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1329
1330 /* Ptr to previous physical malloc_chunk */
1331
1332 #define prev_chunk(p)\
1333 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1334
1335
1336 /* Treat space at ptr + offset as a chunk */
1337
1338 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
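/*
  Illustration (never compiled) of how these physical-chunk macros
  compose: starting from a pointer returned by mALLOc(), step back to
  the chunk header and then to the physically adjacent chunk.
*/
#if 0
static void example_walk(Void_t* mem)   /* mem as returned by mALLOc() */
{
  mchunkptr p   = mem2chunk(mem);       /* back up 2*SIZE_SZ to the header */
  mchunkptr nxt = next_chunk(p);        /* p + (p->size & ~PREV_INUSE) */
  (void)nxt;
}
#endif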
1339
1340
1341
1342
1343 /*
1344 Dealing with use bits
1345 */
1346
1347 /* extract p's inuse bit */
1348
1349 #define inuse(p)\
1350 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1351
1352 /* extract inuse bit of previous chunk */
1353
1354 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1355
1356 /* check for mmap()'ed chunk */
1357
1358 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1359
1360 /* set/clear chunk as in use without otherwise disturbing */
1361
1362 #define set_inuse(p)\
1363 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1364
1365 #define clear_inuse(p)\
1366 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1367
1368 /* check/set/clear inuse bits in known places */
1369
1370 #define inuse_bit_at_offset(p, s)\
1371 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1372
1373 #define set_inuse_bit_at_offset(p, s)\
1374 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1375
1376 #define clear_inuse_bit_at_offset(p, s)\
1377 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1378
1379
1380
1381
1382 /*
1383 Dealing with size fields
1384 */
1385
1386 /* Get size, ignoring use bits */
1387
1388 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1389
1390 /* Set size at head, without disturbing its use bit */
1391
1392 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1393
1394 /* Set size/use ignoring previous bits in header */
1395
1396 #define set_head(p, s) ((p)->size = (s))
1397
1398 /* Set size at footer (only when chunk is not in use) */
1399
1400 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1401
1402
1403
1404
1405
1406 /*
1407 Bins
1408
1409     The bins, `av_', are an array of pairs of pointers serving as the
1410 heads of (initially empty) doubly-linked lists of chunks, laid out
1411 in a way so that each pair can be treated as if it were in a
1412 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1413 and chunks are the same).
1414
1415 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1416 8 bytes apart. Larger bins are approximately logarithmically
1417 spaced. (See the table below.) The `av_' array is never mentioned
1418 directly in the code, but instead via bin access macros.
1419
1420 Bin layout:
1421
1422 64 bins of size 8
1423 32 bins of size 64
1424 16 bins of size 512
1425 8 bins of size 4096
1426 4 bins of size 32768
1427 2 bins of size 262144
1428 1 bin of size what's left
1429
1430 There is actually a little bit of slop in the numbers in bin_index
1431 for the sake of speed. This makes no difference elsewhere.
1432
1433 The special chunks `top' and `last_remainder' get their own bins,
1434 (this is implemented via yet more trickery with the av_ array),
1435 although `top' is never properly linked to its bin since it is
1436 always handled specially.
1437
1438 */
1439
1440 #define NAV 128 /* number of bins */
1441
1442 typedef struct malloc_chunk* mbinptr;
1443
1444 /* access macros */
1445
1446 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1447 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1448 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1449
1450 /*
1451 The first 2 bins are never indexed. The corresponding av_ cells are instead
1452 used for bookkeeping. This is not to save space, but to simplify
1453 indexing, maintain locality, and avoid some initialization tests.
1454 */
1455
1456 #define top (av_[2]) /* The topmost chunk */
1457 #define last_remainder (bin_at(1)) /* remainder from last split */
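/*
  Concretely: bin_at(i) points 2*SIZE_SZ bytes before &av_[2*i + 2], so
  when the result is treated as a malloc_chunk its fd field aliases
  av_[2*i + 2] and its bk field aliases av_[2*i + 3] (fd/bk sit
  2*SIZE_SZ bytes into the struct).  For example, bin_at(1)->fd is
  av_[4].
*/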
1458
1459
1460 /*
1461 Because top initially points to its own bin with initial
1462 zero size, thus forcing extension on the first malloc request,
1463 we avoid having any special code in malloc to check whether
1464    it even exists yet. But we still need to check in malloc_extend_top.
1465 */
1466
1467 #define initial_top ((mchunkptr)(bin_at(0)))
1468
1469 /* Helper macro to initialize bins */
1470
1471 #define IAV(i) bin_at(i), bin_at(i)
1472
1473 static mbinptr av_[NAV * 2 + 2] = {
1474 NULL, NULL,
1475 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1476 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1477 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1478 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1479 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1480 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1481 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1482 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1483 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1484 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1485 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1486 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1487 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1488 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1489 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1490 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1491 };
1492
1493 #ifdef CONFIG_NEEDS_MANUAL_RELOC
1494 static void malloc_bin_reloc(void)
1495 {
1496 mbinptr *p = &av_[2];
1497 size_t i;
1498
1499 for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
1500 *p = (mbinptr)((ulong)*p + gd->reloc_off);
1501 }
1502 #else
1503 static inline void malloc_bin_reloc(void) {}
1504 #endif
1505
1506 ulong mem_malloc_start = 0;
1507 ulong mem_malloc_end = 0;
1508 ulong mem_malloc_brk = 0;
1509
1510 void *sbrk(ptrdiff_t increment)
1511 {
1512 ulong old = mem_malloc_brk;
1513 ulong new = old + increment;
1514 
1515 if ((new < mem_malloc_start) || (new > mem_malloc_end))
1516 return (void *)MORECORE_FAILURE;
1517 
1518 /*
1519 * If we are giving memory back, make sure we clear it out, since
1520 * we set MORECORE_CLEARS to 1. Only do so once the new break has
1521 * been validated, so we never clear memory outside the arena.
1522 */
1523 if (increment < 0)
1524 memset((void *)new, 0, -increment);
1525 
1526 mem_malloc_brk = new;
1527 
1528 return (void *)old;
1529 }
1529
1530 void mem_malloc_init(ulong start, ulong size)
1531 {
1532 mem_malloc_start = start;
1533 mem_malloc_end = start + size;
1534 mem_malloc_brk = start;
1535
1536 memset((void *)mem_malloc_start, 0, size);
1537
1538 malloc_bin_reloc();
1539 }
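/*
  Typical call sequence (illustrative; the address and size below are
  made up and would be board-specific in practice): board start-up code
  hands a reserved region to the allocator before the first malloc().
*/
#if 0
  mem_malloc_init(0x84000000, 1024 * 1024);  /* hypothetical 1 MiB heap */
  p = malloc(64);                            /* serviced from that region */
#endif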
1540
1541 /* field-extraction macros */
1542
1543 #define first(b) ((b)->fd)
1544 #define last(b) ((b)->bk)
1545
1546 /*
1547 Indexing into bins
1548 */
1549
1550 #define bin_index(sz) \
1551 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1552 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1553 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1554 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1555 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1556 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1557 126)
1558 /*
1559 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1560 identically sized chunks. This is exploited in malloc.
1561 */
1562
1563 #define MAX_SMALLBIN 63
1564 #define MAX_SMALLBIN_SIZE 512
1565 #define SMALLBIN_WIDTH 8
1566
1567 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1568
1569 /*
1570 Requests are `small' if both the corresponding and the next bin are small
1571 */
1572
1573 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
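/*
  Worked examples of the indexing above: a 16-byte chunk has
  16 >> 9 == 0 and so lands in small bin 16 >> 3 == 2; a 1024-byte
  chunk has 1024 >> 9 == 2 <= 4 and so lands in bin
  56 + (1024 >> 6) == 72.
*/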
1574
1575
1576
1577 /*
1578 To help compensate for the large number of bins, a one-level index
1579 structure is used for bin-by-bin searching. `binblocks' is a
1580 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1581 have any (possibly) non-empty bins, so they can be skipped over
1582     have any (possibly) non-empty bins, so they can be skipped over
1583 cleared as soon as all bins in a block are empty, but instead only
1584 when all are noticed to be empty during traversal in malloc.
1585 */
1586
1587 #define BINBLOCKWIDTH 4 /* bins per block */
1588
1589 #define binblocks_r ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
1590 #define binblocks_w (av_[1])
1591
1592 /* bin<->block macros */
1593
1594 #define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
1595 #define mark_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
1596 #define clear_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
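/*
  Example: bin index 10 belongs to block 10 / BINBLOCKWIDTH == 2, so
  idx2binblock(10) == (1 << 2); mark_binblock(10) sets that bit in the
  av_[1] bitvector and clear_binblock(10) clears it.
*/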
1597
1598
1599
1600
1601
1602 /* Other static bookkeeping data */
1603
1604 /* variables holding tunable values */
1605
1606 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1607 static unsigned long top_pad = DEFAULT_TOP_PAD;
1608 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1609 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1610
1611 /* The first value returned from sbrk */
1612 static char* sbrk_base = (char*)(-1);
1613
1614 /* The maximum memory obtained from system via sbrk */
1615 static unsigned long max_sbrked_mem = 0;
1616
1617 /* The maximum via either sbrk or mmap */
1618 static unsigned long max_total_mem = 0;
1619
1620 /* internal working copy of mallinfo */
1621 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1622
1623 /* The total memory obtained from system via sbrk */
1624 #define sbrked_mem (current_mallinfo.arena)
1625
1626 /* Tracking mmaps */
1627
1628 #ifdef DEBUG
1629 static unsigned int n_mmaps = 0;
1630 #endif /* DEBUG */
1631 static unsigned long mmapped_mem = 0;
1632 #if HAVE_MMAP
1633 static unsigned int max_n_mmaps = 0;
1634 static unsigned long max_mmapped_mem = 0;
1635 #endif
1636
1637
1638
1639 /*
1640 Debugging support
1641 */
1642
1643 #ifdef DEBUG
1644
1645
1646 /*
1647 These routines make a number of assertions about the states
1648 of data structures that should be true at all times. If any
1649 are not true, it's very likely that a user program has somehow
1650 trashed memory. (It's also possible that there is a coding error
1651 in malloc. In which case, please report it!)
1652 */
1653
1654 #if __STD_C
1655 static void do_check_chunk(mchunkptr p)
1656 #else
1657 static void do_check_chunk(p) mchunkptr p;
1658 #endif
1659 {
1660 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1661
1662 /* No checkable chunk is mmapped */
1663 assert(!chunk_is_mmapped(p));
1664
1665 /* Check for legal address ... */
1666 assert((char*)p >= sbrk_base);
1667 if (p != top)
1668 assert((char*)p + sz <= (char*)top);
1669 else
1670 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1671
1672 }
1673
1674
1675 #if __STD_C
1676 static void do_check_free_chunk(mchunkptr p)
1677 #else
1678 static void do_check_free_chunk(p) mchunkptr p;
1679 #endif
1680 {
1681 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1682 mchunkptr next = chunk_at_offset(p, sz);
1683
1684 do_check_chunk(p);
1685
1686 /* Check whether it claims to be free ... */
1687 assert(!inuse(p));
1688
1689 /* Unless a special marker, must have OK fields */
1690 if ((long)sz >= (long)MINSIZE)
1691 {
1692 assert((sz & MALLOC_ALIGN_MASK) == 0);
1693 assert(aligned_OK(chunk2mem(p)));
1694 /* ... matching footer field */
1695 assert(next->prev_size == sz);
1696 /* ... and is fully consolidated */
1697 assert(prev_inuse(p));
1698 assert (next == top || inuse(next));
1699
1700 /* ... and has minimally sane links */
1701 assert(p->fd->bk == p);
1702 assert(p->bk->fd == p);
1703 }
1704 else /* markers are always of size SIZE_SZ */
1705 assert(sz == SIZE_SZ);
1706 }
1707
1708 #if __STD_C
1709 static void do_check_inuse_chunk(mchunkptr p)
1710 #else
1711 static void do_check_inuse_chunk(p) mchunkptr p;
1712 #endif
1713 {
1714 mchunkptr next = next_chunk(p);
1715 do_check_chunk(p);
1716
1717 /* Check whether it claims to be in use ... */
1718 assert(inuse(p));
1719
1720 /* ... and is surrounded by OK chunks.
1721 Since more things can be checked with free chunks than inuse ones,
1722     if an inuse chunk borders them and debugging is on, it's worth checking them.
1723 */
1724 if (!prev_inuse(p))
1725 {
1726 mchunkptr prv = prev_chunk(p);
1727 assert(next_chunk(prv) == p);
1728 do_check_free_chunk(prv);
1729 }
1730 if (next == top)
1731 {
1732 assert(prev_inuse(next));
1733 assert(chunksize(next) >= MINSIZE);
1734 }
1735 else if (!inuse(next))
1736 do_check_free_chunk(next);
1737
1738 }
1739
1740 #if __STD_C
1741 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1742 #else
1743 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1744 #endif
1745 {
1746 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1747 long room = sz - s;
1748
1749 do_check_inuse_chunk(p);
1750
1751 /* Legal size ... */
1752 assert((long)sz >= (long)MINSIZE);
1753 assert((sz & MALLOC_ALIGN_MASK) == 0);
1754 assert(room >= 0);
1755 assert(room < (long)MINSIZE);
1756
1757 /* ... and alignment */
1758 assert(aligned_OK(chunk2mem(p)));
1759
1760
1761 /* ... and was allocated at front of an available chunk */
1762 assert(prev_inuse(p));
1763
1764 }
1765
1766
1767 #define check_free_chunk(P) do_check_free_chunk(P)
1768 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1769 #define check_chunk(P) do_check_chunk(P)
1770 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1771 #else
1772 #define check_free_chunk(P)
1773 #define check_inuse_chunk(P)
1774 #define check_chunk(P)
1775 #define check_malloced_chunk(P,N)
1776 #endif
1777
1778
1779
1780 /*
1781 Macro-based internal utilities
1782 */
1783
1784
1785 /*
1786 Linking chunks in bin lists.
1787 Call these only with variables, not arbitrary expressions, as arguments.
1788 */
1789
1790 /*
1791 Place chunk p of size s in its bin, in size order,
1792 putting it ahead of others of same size.
1793 */
1794
1795
1796 #define frontlink(P, S, IDX, BK, FD) \
1797 { \
1798 if (S < MAX_SMALLBIN_SIZE) \
1799 { \
1800 IDX = smallbin_index(S); \
1801 mark_binblock(IDX); \
1802 BK = bin_at(IDX); \
1803 FD = BK->fd; \
1804 P->bk = BK; \
1805 P->fd = FD; \
1806 FD->bk = BK->fd = P; \
1807 } \
1808 else \
1809 { \
1810 IDX = bin_index(S); \
1811 BK = bin_at(IDX); \
1812 FD = BK->fd; \
1813 if (FD == BK) mark_binblock(IDX); \
1814 else \
1815 { \
1816 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1817 BK = FD->bk; \
1818 } \
1819 P->bk = BK; \
1820 P->fd = FD; \
1821 FD->bk = BK->fd = P; \
1822 } \
1823 }
1824
1825
1826 /* take a chunk off a list */
1827
1828 #define unlink(P, BK, FD) \
1829 { \
1830 BK = P->bk; \
1831 FD = P->fd; \
1832 FD->bk = BK; \
1833 BK->fd = FD; \
1834 }
1835
1836 /* Place p as the last remainder */
1837
1838 #define link_last_remainder(P) \
1839 { \
1840 last_remainder->fd = last_remainder->bk = P; \
1841 P->fd = P->bk = last_remainder; \
1842 }
1843
1844 /* Clear the last_remainder bin */
1845
1846 #define clear_last_remainder \
1847 (last_remainder->fd = last_remainder->bk = last_remainder)
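
/*
  Toy sketch (independent of the real chunk layout): the circular
  doubly-linked splice that frontlink and unlink above perform.  The
  node type here is hypothetical; the real bins link mchunkptrs.
*/
#if 0 /* example only; never compiled */
struct ex_node { struct ex_node *fd, *bk; };

static void ex_splice_in(struct ex_node *bk, struct ex_node *fd,
			 struct ex_node *p)
{
	/* the same three statements frontlink uses */
	p->bk = bk;
	p->fd = fd;
	fd->bk = bk->fd = p;
}

static void ex_splice_out(struct ex_node *p)
{
	/* the same two statements unlink uses */
	p->fd->bk = p->bk;
	p->bk->fd = p->fd;
}
#endif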
1848
1849
1850
1851
1852
1853 /* Routines dealing with mmap(). */
1854
1855 #if HAVE_MMAP
1856
1857 #if __STD_C
1858 static mchunkptr mmap_chunk(size_t size)
1859 #else
1860 static mchunkptr mmap_chunk(size) size_t size;
1861 #endif
1862 {
1863 size_t page_mask = malloc_getpagesize - 1;
1864 mchunkptr p;
1865
1866 #ifndef MAP_ANONYMOUS
1867 static int fd = -1;
1868 #endif
1869
1870 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1871
1872 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1873 * there is no following chunk whose prev_size field could be used.
1874 */
1875 size = (size + SIZE_SZ + page_mask) & ~page_mask;
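
/*
 * Worked example (illustrative only): with a 4096-byte page and
 * SIZE_SZ == 4, size == 6000 rounds to (6000 + 4 + 4095) & ~4095
 * == 8192, i.e. two whole pages.
 */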
1876
1877 #ifdef MAP_ANONYMOUS
1878 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1879 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1880 #else /* !MAP_ANONYMOUS */
1881 if (fd < 0)
1882 {
1883 fd = open("/dev/zero", O_RDWR);
1884 if(fd < 0) return 0;
1885 }
1886 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1887 #endif
1888
1889 if(p == (mchunkptr)-1) return 0;
1890
1891 n_mmaps++;
1892 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1893
1894   /* We demand that the address eight bytes into a page be 8-byte aligned. */
1895 assert(aligned_OK(chunk2mem(p)));
1896
1897 /* The offset to the start of the mmapped region is stored
1898 * in the prev_size field of the chunk; normally it is zero,
1899 * but that can be changed in memalign().
1900 */
1901 p->prev_size = 0;
1902 set_head(p, size|IS_MMAPPED);
1903
1904 mmapped_mem += size;
1905 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1906 max_mmapped_mem = mmapped_mem;
1907 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1908 max_total_mem = mmapped_mem + sbrked_mem;
1909 return p;
1910 }
1911
1912 #if __STD_C
1913 static void munmap_chunk(mchunkptr p)
1914 #else
1915 static void munmap_chunk(p) mchunkptr p;
1916 #endif
1917 {
1918 INTERNAL_SIZE_T size = chunksize(p);
1919 int ret;
1920
1921 assert (chunk_is_mmapped(p));
1922 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1923 assert((n_mmaps > 0));
1924 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1925
1926 n_mmaps--;
1927 mmapped_mem -= (size + p->prev_size);
1928
1929 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1930
1931 /* munmap returns non-zero on failure */
1932 assert(ret == 0);
1933 }
1934
1935 #if HAVE_MREMAP
1936
1937 #if __STD_C
1938 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1939 #else
1940 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1941 #endif
1942 {
1943 size_t page_mask = malloc_getpagesize - 1;
1944 INTERNAL_SIZE_T offset = p->prev_size;
1945 INTERNAL_SIZE_T size = chunksize(p);
1946 char *cp;
1947
1948 assert (chunk_is_mmapped(p));
1949 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1950 assert((n_mmaps > 0));
1951 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1952
1953 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1954 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1955
1956 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1957
1958 if (cp == (char *)-1) return 0;
1959
1960 p = (mchunkptr)(cp + offset);
1961
1962 assert(aligned_OK(chunk2mem(p)));
1963
1964 assert((p->prev_size == offset));
1965 set_head(p, (new_size - offset)|IS_MMAPPED);
1966
1967 mmapped_mem -= size + offset;
1968 mmapped_mem += new_size;
1969 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1970 max_mmapped_mem = mmapped_mem;
1971 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1972 max_total_mem = mmapped_mem + sbrked_mem;
1973 return p;
1974 }
1975
1976 #endif /* HAVE_MREMAP */
1977
1978 #endif /* HAVE_MMAP */
1979
1980
1981
1982
1983 /*
1984 Extend the top-most chunk by obtaining memory from system.
1985 Main interface to sbrk (but see also malloc_trim).
1986 */
1987
1988 #if __STD_C
1989 static void malloc_extend_top(INTERNAL_SIZE_T nb)
1990 #else
1991 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1992 #endif
1993 {
1994 char* brk; /* return value from sbrk */
1995 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1996 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1997 char* new_brk; /* return of 2nd sbrk call */
1998 INTERNAL_SIZE_T top_size; /* new size of top chunk */
1999
2000 mchunkptr old_top = top; /* Record state of old top */
2001 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2002 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2003
2004 /* Pad request with top_pad plus minimal overhead */
2005
2006 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2007 unsigned long pagesz = malloc_getpagesize;
2008
2009 /* If not the first time through, round to preserve page boundary */
2010 /* Otherwise, we need to correct to a page size below anyway. */
2011 /* (We also correct below if an intervening foreign sbrk call.) */
2012
2013 if (sbrk_base != (char*)(-1))
2014 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2015
2016 brk = (char*)(MORECORE (sbrk_size));
2017
2018 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2019 if (brk == (char*)(MORECORE_FAILURE) ||
2020 (brk < old_end && old_top != initial_top))
2021 return;
2022
2023 sbrked_mem += sbrk_size;
2024
2025 if (brk == old_end) /* can just add bytes to current top */
2026 {
2027 top_size = sbrk_size + old_top_size;
2028 set_head(top, top_size | PREV_INUSE);
2029 }
2030 else
2031 {
2032 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2033 sbrk_base = brk;
2034 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2035 sbrked_mem += brk - (char*)old_end;
2036
2037 /* Guarantee alignment of first new chunk made from this space */
2038 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2039 if (front_misalign > 0)
2040 {
2041 correction = (MALLOC_ALIGNMENT) - front_misalign;
2042 brk += correction;
2043 }
2044 else
2045 correction = 0;
2046
2047 /* Guarantee the next brk will be at a page boundary */
2048
2049 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2050 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2051
2052 /* Allocate correction */
2053 new_brk = (char*)(MORECORE (correction));
2054 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2055
2056 sbrked_mem += correction;
2057
2058 top = (mchunkptr)brk;
2059 top_size = new_brk - brk + correction;
2060 set_head(top, top_size | PREV_INUSE);
2061
2062 if (old_top != initial_top)
2063 {
2064
2065 /* There must have been an intervening foreign sbrk call. */
2066 /* A double fencepost is necessary to prevent consolidation */
2067
2068 /* If not enough space to do this, then user did something very wrong */
2069 if (old_top_size < MINSIZE)
2070 {
2071 set_head(top, PREV_INUSE); /* will force null return from malloc */
2072 return;
2073 }
2074
2075 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2076 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2077 set_head_size(old_top, old_top_size);
2078 chunk_at_offset(old_top, old_top_size )->size =
2079 SIZE_SZ|PREV_INUSE;
2080 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2081 SIZE_SZ|PREV_INUSE;
2082 /* If possible, release the rest. */
2083 if (old_top_size >= MINSIZE)
2084 fREe(chunk2mem(old_top));
2085 }
2086 }
2087
2088 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2089 max_sbrked_mem = sbrked_mem;
2090 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2091 max_total_mem = mmapped_mem + sbrked_mem;
2092
2093 /* We always land on a page boundary */
2094 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2095 }
2096
2097
2098
2099
2100 /* Main public routines */
2101
2102
2103 /*
2104     Malloc algorithm:
2105
2106 The requested size is first converted into a usable form, `nb'.
2107 This currently means to add 4 bytes overhead plus possibly more to
2108 obtain 8-byte alignment and/or to obtain a size of at least
2109 MINSIZE (currently 16 bytes), the smallest allocatable size.
2110 (All fits are considered `exact' if they are within MINSIZE bytes.)
2111
2112   From there, the first of the following steps to succeed is taken:
2113
2114 1. The bin corresponding to the request size is scanned, and if
2115 a chunk of exactly the right size is found, it is taken.
2116
2117 2. The most recently remaindered chunk is used if it is big
2118 enough. This is a form of (roving) first fit, used only in
2119 the absence of exact fits. Runs of consecutive requests use
2120 the remainder of the chunk used for the previous such request
2121 whenever possible. This limited use of a first-fit style
2122 allocation strategy tends to give contiguous chunks
2123 coextensive lifetimes, which improves locality and can reduce
2124 fragmentation in the long run.
2125
2126 3. Other bins are scanned in increasing size order, using a
2127 chunk big enough to fulfill the request, and splitting off
2128 any remainder. This search is strictly by best-fit; i.e.,
2129 the smallest (with ties going to approximately the least
2130 recently used) chunk that fits is selected.
2131
2132 4. If large enough, the chunk bordering the end of memory
2133 (`top') is split off. (This use of `top' is in accord with
2134 the best-fit search rule. In effect, `top' is treated as
2135 larger (and thus less well fitting) than any other available
2136 chunk since it can be extended to be as large as necessary
2137        (up to system limitations).)
2138
2139 5. If the request size meets the mmap threshold and the
2140 system supports mmap, and there are few enough currently
2141 allocated mmapped regions, and a call to mmap succeeds,
2142 the request is allocated via direct memory mapping.
2143
2144 6. Otherwise, the top of memory is extended by
2145 obtaining more space from the system (normally using sbrk,
2146 but definable to anything else via the MORECORE macro).
2147 Memory is gathered from the system (in system page-sized
2148 units) in a way that allows chunks obtained across different
2149 sbrk calls to be consolidated, but does not require
2150 contiguous memory. Thus, it should be safe to intersperse
2151 mallocs with other sbrk calls.
2152
2153
2154       All allocations are made from the `lowest' part of any found
2155 chunk. (The implementation invariant is that prev_inuse is
2156 always true of any allocated chunk; i.e., that each allocated
2157 chunk borders either a previously allocated and still in-use chunk,
2158 or the base of its memory arena.)
2159
2160 */
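
/*
  Illustrative sketch (not compiled): the `nb' normalization described
  above, under the assumptions SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and
  MINSIZE == 16.  The real request2size macro also handles boundary
  cases that this simplified version ignores.
*/
#if 0 /* example only */
#include <assert.h>
static unsigned long ex_request2size(unsigned long bytes)
{
	unsigned long nb = (bytes + 4 + 7) & ~7ul; /* add overhead, align */
	return nb < 16 ? 16 : nb;                  /* at least MINSIZE */
}
static void ex_request2size_check(void)
{
	assert(ex_request2size(1)  == 16); /* minimum-sized chunk */
	assert(ex_request2size(13) == 24); /* 13 + 4 == 17 -> 24 */
	assert(ex_request2size(20) == 24); /* 20 + 4 == 24 exactly */
}
#endif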
2161
2162 #if __STD_C
2163 Void_t* mALLOc(size_t bytes)
2164 #else
2165 Void_t* mALLOc(bytes) size_t bytes;
2166 #endif
2167 {
2168 mchunkptr victim; /* inspected/selected chunk */
2169 INTERNAL_SIZE_T victim_size; /* its size */
2170 int idx; /* index for bin traversal */
2171 mbinptr bin; /* associated bin */
2172 mchunkptr remainder; /* remainder from a split */
2173 long remainder_size; /* its size */
2174 int remainder_index; /* its bin index */
2175 unsigned long block; /* block traverser bit */
2176 int startidx; /* first bin of a traversed block */
2177 mchunkptr fwd; /* misc temp for linking */
2178 mchunkptr bck; /* misc temp for linking */
2179 mbinptr q; /* misc temp */
2180
2181 INTERNAL_SIZE_T nb;
2182
2183 #ifdef CONFIG_SYS_MALLOC_F_LEN
2184 if (!(gd->flags & GD_FLG_RELOC)) {
2185 ulong new_ptr;
2186 void *ptr;
2187
2188 new_ptr = gd->malloc_ptr + bytes;
2189 if (new_ptr > gd->malloc_limit)
2190 panic("Out of pre-reloc memory");
2191 ptr = map_sysmem(gd->malloc_base + gd->malloc_ptr, bytes);
2192 gd->malloc_ptr = ALIGN(new_ptr, sizeof(new_ptr));
2193 return ptr;
2194 }
2195 #endif
2196
2197 /* check if mem_malloc_init() was run */
2198 if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
2199 /* not initialized yet */
2200 return NULL;
2201 }
2202
2203 if ((long)bytes < 0) return NULL;
2204
2205   nb = request2size(bytes);  /* padded request size */
2206
2207 /* Check for exact match in a bin */
2208
2209 if (is_small_request(nb)) /* Faster version for small requests */
2210 {
2211 idx = smallbin_index(nb);
2212
2213 /* No traversal or size check necessary for small bins. */
2214
2215 q = bin_at(idx);
2216 victim = last(q);
2217
2218 /* Also scan the next one, since it would have a remainder < MINSIZE */
2219 if (victim == q)
2220 {
2221 q = next_bin(q);
2222 victim = last(q);
2223 }
2224 if (victim != q)
2225 {
2226 victim_size = chunksize(victim);
2227 unlink(victim, bck, fwd);
2228 set_inuse_bit_at_offset(victim, victim_size);
2229 check_malloced_chunk(victim, nb);
2230 return chunk2mem(victim);
2231 }
2232
2233 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2234
2235 }
2236 else
2237 {
2238 idx = bin_index(nb);
2239 bin = bin_at(idx);
2240
2241 for (victim = last(bin); victim != bin; victim = victim->bk)
2242 {
2243 victim_size = chunksize(victim);
2244 remainder_size = victim_size - nb;
2245
2246 if (remainder_size >= (long)MINSIZE) /* too big */
2247 {
2248 --idx; /* adjust to rescan below after checking last remainder */
2249 break;
2250 }
2251
2252 else if (remainder_size >= 0) /* exact fit */
2253 {
2254 unlink(victim, bck, fwd);
2255 set_inuse_bit_at_offset(victim, victim_size);
2256 check_malloced_chunk(victim, nb);
2257 return chunk2mem(victim);
2258 }
2259 }
2260
2261 ++idx;
2262
2263 }
2264
2265 /* Try to use the last split-off remainder */
2266
2267 if ( (victim = last_remainder->fd) != last_remainder)
2268 {
2269 victim_size = chunksize(victim);
2270 remainder_size = victim_size - nb;
2271
2272 if (remainder_size >= (long)MINSIZE) /* re-split */
2273 {
2274 remainder = chunk_at_offset(victim, nb);
2275 set_head(victim, nb | PREV_INUSE);
2276 link_last_remainder(remainder);
2277 set_head(remainder, remainder_size | PREV_INUSE);
2278 set_foot(remainder, remainder_size);
2279 check_malloced_chunk(victim, nb);
2280 return chunk2mem(victim);
2281 }
2282
2283 clear_last_remainder;
2284
2285 if (remainder_size >= 0) /* exhaust */
2286 {
2287 set_inuse_bit_at_offset(victim, victim_size);
2288 check_malloced_chunk(victim, nb);
2289 return chunk2mem(victim);
2290 }
2291
2292 /* Else place in bin */
2293
2294 frontlink(victim, victim_size, remainder_index, bck, fwd);
2295 }
2296
2297 /*
2298 If there are any possibly nonempty big-enough blocks,
2299 search for best fitting chunk by scanning bins in blockwidth units.
2300 */
2301
2302 if ( (block = idx2binblock(idx)) <= binblocks_r)
2303 {
2304
2305 /* Get to the first marked block */
2306
2307 if ( (block & binblocks_r) == 0)
2308 {
2309 /* force to an even block boundary */
2310 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2311 block <<= 1;
2312 while ((block & binblocks_r) == 0)
2313 {
2314 idx += BINBLOCKWIDTH;
2315 block <<= 1;
2316 }
2317 }
2318
2319 /* For each possibly nonempty block ... */
2320 for (;;)
2321 {
2322 startidx = idx; /* (track incomplete blocks) */
2323 q = bin = bin_at(idx);
2324
2325 /* For each bin in this block ... */
2326 do
2327 {
2328 /* Find and use first big enough chunk ... */
2329
2330 for (victim = last(bin); victim != bin; victim = victim->bk)
2331 {
2332 victim_size = chunksize(victim);
2333 remainder_size = victim_size - nb;
2334
2335 if (remainder_size >= (long)MINSIZE) /* split */
2336 {
2337 remainder = chunk_at_offset(victim, nb);
2338 set_head(victim, nb | PREV_INUSE);
2339 unlink(victim, bck, fwd);
2340 link_last_remainder(remainder);
2341 set_head(remainder, remainder_size | PREV_INUSE);
2342 set_foot(remainder, remainder_size);
2343 check_malloced_chunk(victim, nb);
2344 return chunk2mem(victim);
2345 }
2346
2347 else if (remainder_size >= 0) /* take */
2348 {
2349 set_inuse_bit_at_offset(victim, victim_size);
2350 unlink(victim, bck, fwd);
2351 check_malloced_chunk(victim, nb);
2352 return chunk2mem(victim);
2353 }
2354
2355 }
2356
2357 bin = next_bin(bin);
2358
2359 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2360
2361 /* Clear out the block bit. */
2362
2363 do /* Possibly backtrack to try to clear a partial block */
2364 {
2365 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2366 {
2367 av_[1] = (mbinptr)(binblocks_r & ~block);
2368 break;
2369 }
2370 --startidx;
2371 q = prev_bin(q);
2372 } while (first(q) == q);
2373
2374 /* Get to the next possibly nonempty block */
2375
2376 if ( (block <<= 1) <= binblocks_r && (block != 0) )
2377 {
2378 while ((block & binblocks_r) == 0)
2379 {
2380 idx += BINBLOCKWIDTH;
2381 block <<= 1;
2382 }
2383 }
2384 else
2385 break;
2386 }
2387 }
2388
2389
2390 /* Try to use top chunk */
2391
2392 /* Require that there be a remainder, ensuring top always exists */
2393 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2394 {
2395
2396 #if HAVE_MMAP
2397 /* If big and would otherwise need to extend, try to use mmap instead */
2398 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2399 (victim = mmap_chunk(nb)) != 0)
2400 return chunk2mem(victim);
2401 #endif
2402
2403 /* Try to extend */
2404 malloc_extend_top(nb);
2405 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2406 return NULL; /* propagate failure */
2407 }
2408
2409 victim = top;
2410 set_head(victim, nb | PREV_INUSE);
2411 top = chunk_at_offset(victim, nb);
2412 set_head(top, remainder_size | PREV_INUSE);
2413 check_malloced_chunk(victim, nb);
2414 return chunk2mem(victim);
2415
2416 }
2417
2418
2419
2420
2421 /*
2422
2423     free() algorithm:
2424
2425 cases:
2426
2427 1. free(0) has no effect.
2428
2429       2. If the chunk was allocated via mmap, it is released via munmap().
2430
2431 3. If a returned chunk borders the current high end of memory,
2432 it is consolidated into the top, and if the total unused
2433 topmost memory exceeds the trim threshold, malloc_trim is
2434 called.
2435
2436 4. Other chunks are consolidated as they arrive, and
2437 placed in corresponding bins. (This includes the case of
2438 consolidating with the current `last_remainder').
2439
2440 */
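
/*
  Size-only sketch (illustrative, independent of this file) of the
  consolidation in cases 3 and 4 above: a freed chunk absorbs any free
  neighbors, so no two free chunks are ever adjacent.
*/
#if 0 /* example only */
static unsigned long ex_coalesced_size(unsigned long sz,
				       int prev_free, unsigned long prevsz,
				       int next_free, unsigned long nextsz)
{
	if (prev_free)
		sz += prevsz;   /* consolidate backward */
	if (next_free)
		sz += nextsz;   /* consolidate forward */
	return sz;              /* size of the single resulting free chunk */
}
#endif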
2441
2442
2443 #if __STD_C
2444 void fREe(Void_t* mem)
2445 #else
2446 void fREe(mem) Void_t* mem;
2447 #endif
2448 {
2449 mchunkptr p; /* chunk corresponding to mem */
2450 INTERNAL_SIZE_T hd; /* its head field */
2451 INTERNAL_SIZE_T sz; /* its size */
2452 int idx; /* its bin index */
2453 mchunkptr next; /* next contiguous chunk */
2454 INTERNAL_SIZE_T nextsz; /* its size */
2455 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2456 mchunkptr bck; /* misc temp for linking */
2457 mchunkptr fwd; /* misc temp for linking */
2458 int islr; /* track whether merging with last_remainder */
2459
2460 #ifdef CONFIG_SYS_MALLOC_F_LEN
2461 /* free() is a no-op - all the memory will be freed on relocation */
2462 if (!(gd->flags & GD_FLG_RELOC))
2463 return;
2464 #endif
2465
2466 if (mem == NULL) /* free(0) has no effect */
2467 return;
2468
2469 p = mem2chunk(mem);
2470 hd = p->size;
2471
2472 #if HAVE_MMAP
2473 if (hd & IS_MMAPPED) /* release mmapped memory. */
2474 {
2475 munmap_chunk(p);
2476 return;
2477 }
2478 #endif
2479
2480 check_inuse_chunk(p);
2481
2482 sz = hd & ~PREV_INUSE;
2483 next = chunk_at_offset(p, sz);
2484 nextsz = chunksize(next);
2485
2486 if (next == top) /* merge with top */
2487 {
2488 sz += nextsz;
2489
2490 if (!(hd & PREV_INUSE)) /* consolidate backward */
2491 {
2492 prevsz = p->prev_size;
2493 p = chunk_at_offset(p, -((long) prevsz));
2494 sz += prevsz;
2495 unlink(p, bck, fwd);
2496 }
2497
2498 set_head(p, sz | PREV_INUSE);
2499 top = p;
2500 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2501 malloc_trim(top_pad);
2502 return;
2503 }
2504
2505 set_head(next, nextsz); /* clear inuse bit */
2506
2507 islr = 0;
2508
2509 if (!(hd & PREV_INUSE)) /* consolidate backward */
2510 {
2511 prevsz = p->prev_size;
2512 p = chunk_at_offset(p, -((long) prevsz));
2513 sz += prevsz;
2514
2515 if (p->fd == last_remainder) /* keep as last_remainder */
2516 islr = 1;
2517 else
2518 unlink(p, bck, fwd);
2519 }
2520
2521 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2522 {
2523 sz += nextsz;
2524
2525 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2526 {
2527 islr = 1;
2528 link_last_remainder(p);
2529 }
2530 else
2531 unlink(next, bck, fwd);
2532 }
2533
2534
2535 set_head(p, sz | PREV_INUSE);
2536 set_foot(p, sz);
2537 if (!islr)
2538 frontlink(p, sz, idx, bck, fwd);
2539 }
2540
2541
2542
2543
2544
2545 /*
2546
2547 Realloc algorithm:
2548
2549 Chunks that were obtained via mmap cannot be extended or shrunk
2550 unless HAVE_MREMAP is defined, in which case mremap is used.
2551 Otherwise, if their reallocation is for additional space, they are
2552 copied. If for less, they are just left alone.
2553
2554 Otherwise, if the reallocation is for additional space, and the
2555 chunk can be extended, it is, else a malloc-copy-free sequence is
2556 taken. There are several different ways that a chunk could be
2557 extended. All are tried:
2558
2559 * Extending forward into following adjacent free chunk.
2560 * Shifting backwards, joining preceding adjacent space
2561 * Both shifting backwards and extending forward.
2562 * Extending into newly sbrked space
2563
2564 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2565 size argument of zero (re)allocates a minimum-sized chunk.
2566
2567 If the reallocation is for less space, and the new request is for
2568 a `small' (<512 bytes) size, then the newly unused space is lopped
2569 off and freed.
2570
2571 The old unix realloc convention of allowing the last-free'd chunk
2572 to be used as an argument to realloc is no longer supported.
2573 I don't know of any programs still relying on this feature,
2574 and allowing it would also allow too many other incorrect
2575 usages of realloc to be sensible.
2576
2577
2578 */
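
/*
  Usage sketch (not compiled): how a caller should treat rEALLOc's
  return value.  On failure this implementation returns NULL and
  leaves the old block intact.
*/
#if 0 /* example only */
static void ex_realloc_usage(void)
{
	void *p, *q;

	p = mALLOc(100);
	if (p == NULL)
		return;
	q = rEALLOc(p, 300);  /* may move the data to a new chunk */
	if (q == NULL)
		fREe(p);      /* grow failed; old block still valid */
	else
		fREe(q);
}
#endif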
2579
2580
2581 #if __STD_C
2582 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2583 #else
2584 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2585 #endif
2586 {
2587 INTERNAL_SIZE_T nb; /* padded request size */
2588
2589 mchunkptr oldp; /* chunk corresponding to oldmem */
2590 INTERNAL_SIZE_T oldsize; /* its size */
2591
2592 mchunkptr newp; /* chunk to return */
2593 INTERNAL_SIZE_T newsize; /* its size */
2594 Void_t* newmem; /* corresponding user mem */
2595
2596 mchunkptr next; /* next contiguous chunk after oldp */
2597 INTERNAL_SIZE_T nextsize; /* its size */
2598
2599 mchunkptr prev; /* previous contiguous chunk before oldp */
2600 INTERNAL_SIZE_T prevsize; /* its size */
2601
2602 mchunkptr remainder; /* holds split off extra space from newp */
2603 INTERNAL_SIZE_T remainder_size; /* its size */
2604
2605 mchunkptr bck; /* misc temp for linking */
2606 mchunkptr fwd; /* misc temp for linking */
2607
2608 #ifdef REALLOC_ZERO_BYTES_FREES
2609 if (bytes == 0) { fREe(oldmem); return 0; }
2610 #endif
2611
2612 if ((long)bytes < 0) return NULL;
2613
2614 /* realloc of null is supposed to be same as malloc */
2615 if (oldmem == NULL) return mALLOc(bytes);
2616
2617 #ifdef CONFIG_SYS_MALLOC_F_LEN
2618 if (!(gd->flags & GD_FLG_RELOC)) {
2619 /* This is harder to support and should not be needed */
2620 panic("pre-reloc realloc() is not supported");
2621 }
2622 #endif
2623
2624 newp = oldp = mem2chunk(oldmem);
2625 newsize = oldsize = chunksize(oldp);
2626
2627
2628 nb = request2size(bytes);
2629
2630 #if HAVE_MMAP
2631 if (chunk_is_mmapped(oldp))
2632 {
2633 #if HAVE_MREMAP
2634 newp = mremap_chunk(oldp, nb);
2635 if(newp) return chunk2mem(newp);
2636 #endif
2637 /* Note the extra SIZE_SZ overhead. */
2638 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2639 /* Must alloc, copy, free. */
2640 newmem = mALLOc(bytes);
2641 if (newmem == 0) return 0; /* propagate failure */
2642 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2643 munmap_chunk(oldp);
2644 return newmem;
2645 }
2646 #endif
2647
2648 check_inuse_chunk(oldp);
2649
2650 if ((long)(oldsize) < (long)(nb))
2651 {
2652
2653 /* Try expanding forward */
2654
2655 next = chunk_at_offset(oldp, oldsize);
2656 if (next == top || !inuse(next))
2657 {
2658 nextsize = chunksize(next);
2659
2660 /* Forward into top only if a remainder */
2661 if (next == top)
2662 {
2663 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2664 {
2665 newsize += nextsize;
2666 top = chunk_at_offset(oldp, nb);
2667 set_head(top, (newsize - nb) | PREV_INUSE);
2668 set_head_size(oldp, nb);
2669 return chunk2mem(oldp);
2670 }
2671 }
2672
2673 /* Forward into next chunk */
2674 else if (((long)(nextsize + newsize) >= (long)(nb)))
2675 {
2676 unlink(next, bck, fwd);
2677 newsize += nextsize;
2678 goto split;
2679 }
2680 }
2681 else
2682 {
2683 next = NULL;
2684 nextsize = 0;
2685 }
2686
2687 /* Try shifting backwards. */
2688
2689 if (!prev_inuse(oldp))
2690 {
2691 prev = prev_chunk(oldp);
2692 prevsize = chunksize(prev);
2693
2694 /* try forward + backward first to save a later consolidation */
2695
2696 if (next != NULL)
2697 {
2698 /* into top */
2699 if (next == top)
2700 {
2701 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2702 {
2703 unlink(prev, bck, fwd);
2704 newp = prev;
2705 newsize += prevsize + nextsize;
2706 newmem = chunk2mem(newp);
2707 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2708 top = chunk_at_offset(newp, nb);
2709 set_head(top, (newsize - nb) | PREV_INUSE);
2710 set_head_size(newp, nb);
2711 return newmem;
2712 }
2713 }
2714
2715 /* into next chunk */
2716 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2717 {
2718 unlink(next, bck, fwd);
2719 unlink(prev, bck, fwd);
2720 newp = prev;
2721 newsize += nextsize + prevsize;
2722 newmem = chunk2mem(newp);
2723 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2724 goto split;
2725 }
2726 }
2727
2728 /* backward only */
2729 if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
2730 {
2731 unlink(prev, bck, fwd);
2732 newp = prev;
2733 newsize += prevsize;
2734 newmem = chunk2mem(newp);
2735 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2736 goto split;
2737 }
2738 }
2739
2740 /* Must allocate */
2741
2742 newmem = mALLOc (bytes);
2743
2744 if (newmem == NULL) /* propagate failure */
2745 return NULL;
2746
2747 /* Avoid copy if newp is next chunk after oldp. */
2748 /* (This can only happen when new chunk is sbrk'ed.) */
2749
2750 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2751 {
2752 newsize += chunksize(newp);
2753 newp = oldp;
2754 goto split;
2755 }
2756
2757 /* Otherwise copy, free, and exit */
2758 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2759 fREe(oldmem);
2760 return newmem;
2761 }
2762
2763
2764 split: /* split off extra room in old or expanded chunk */
2765
2766 if (newsize - nb >= MINSIZE) /* split off remainder */
2767 {
2768 remainder = chunk_at_offset(newp, nb);
2769 remainder_size = newsize - nb;
2770 set_head_size(newp, nb);
2771 set_head(remainder, remainder_size | PREV_INUSE);
2772 set_inuse_bit_at_offset(remainder, remainder_size);
2773 fREe(chunk2mem(remainder)); /* let free() deal with it */
2774 }
2775 else
2776 {
2777 set_head_size(newp, newsize);
2778 set_inuse_bit_at_offset(newp, newsize);
2779 }
2780
2781 check_inuse_chunk(newp);
2782 return chunk2mem(newp);
2783 }
2784
2785
2786
2787
2788 /*
2789
2790 memalign algorithm:
2791
2792 memalign requests more than enough space from malloc, finds a spot
2793 within that chunk that meets the alignment request, and then
2794 possibly frees the leading and trailing space.
2795
2796 The alignment argument must be a power of two. This property is not
2797 checked by memalign, so misuse may result in random runtime errors.
2798
2799 8-byte alignment is guaranteed by normal malloc calls, so don't
2800 bother calling memalign with an argument of 8 or less.
2801
2802 Overreliance on memalign is a sure way to fragment space.
2803
2804 */
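
/*
  Usage sketch (not compiled): requesting a 64-byte-aligned buffer.
  The alignment must be a power of two; it is not validated here.
*/
#if 0 /* example only */
static void ex_memalign_usage(void)
{
	void *p = mEMALIGn(64, 200);
	if (p) {
		/* ((unsigned long)p & 63) == 0 holds here */
		fREe(p);
	}
}
#endif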
2805
2806
2807 #if __STD_C
2808 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2809 #else
2810 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2811 #endif
2812 {
2813 INTERNAL_SIZE_T nb; /* padded request size */
2814 char* m; /* memory returned by malloc call */
2815 mchunkptr p; /* corresponding chunk */
2816 char* brk; /* alignment point within p */
2817 mchunkptr newp; /* chunk to return */
2818 INTERNAL_SIZE_T newsize; /* its size */
2819   INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
2820 mchunkptr remainder; /* spare room at end to split off */
2821 long remainder_size; /* its size */
2822
2823 if ((long)bytes < 0) return NULL;
2824
2825 /* If need less alignment than we give anyway, just relay to malloc */
2826
2827 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2828
2829 /* Otherwise, ensure that it is at least a minimum chunk size */
2830
2831 if (alignment < MINSIZE) alignment = MINSIZE;
2832
2833 /* Call malloc with worst case padding to hit alignment. */
2834
2835 nb = request2size(bytes);
2836 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2837
2838 if (m == NULL) return NULL; /* propagate failure */
2839
2840 p = mem2chunk(m);
2841
2842 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2843 {
2844 #if HAVE_MMAP
2845 if(chunk_is_mmapped(p))
2846 return chunk2mem(p); /* nothing more to do */
2847 #endif
2848 }
2849 else /* misaligned */
2850 {
2851 /*
2852 Find an aligned spot inside chunk.
2853 Since we need to give back leading space in a chunk of at
2854 least MINSIZE, if the first calculation places us at
2855 a spot with less than MINSIZE leader, we can move to the
2856 next aligned spot -- we've allocated enough total room so that
2857 this is always possible.
2858 */
2859
2860 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2861 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2862
2863 newp = (mchunkptr)brk;
2864 leadsize = brk - (char*)(p);
2865 newsize = chunksize(p) - leadsize;
2866
2867 #if HAVE_MMAP
2868 if(chunk_is_mmapped(p))
2869 {
2870 newp->prev_size = p->prev_size + leadsize;
2871 set_head(newp, newsize|IS_MMAPPED);
2872 return chunk2mem(newp);
2873 }
2874 #endif
2875
2876 /* give back leader, use the rest */
2877
2878 set_head(newp, newsize | PREV_INUSE);
2879 set_inuse_bit_at_offset(newp, newsize);
2880 set_head_size(p, leadsize);
2881 fREe(chunk2mem(p));
2882 p = newp;
2883
2884 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2885 }
2886
2887 /* Also give back spare room at the end */
2888
2889 remainder_size = chunksize(p) - nb;
2890
2891 if (remainder_size >= (long)MINSIZE)
2892 {
2893 remainder = chunk_at_offset(p, nb);
2894 set_head(remainder, remainder_size | PREV_INUSE);
2895 set_head_size(p, nb);
2896 fREe(chunk2mem(remainder));
2897 }
2898
2899 check_inuse_chunk(p);
2900 return chunk2mem(p);
2901
2902 }
2903
2904
2905
2906
2907 /*
2908 valloc just invokes memalign with alignment argument equal
2909 to the page size of the system (or as near to this as can
2910 be figured out from all the includes/defines above.)
2911 */
2912
2913 #if __STD_C
2914 Void_t* vALLOc(size_t bytes)
2915 #else
2916 Void_t* vALLOc(bytes) size_t bytes;
2917 #endif
2918 {
2919 return mEMALIGn (malloc_getpagesize, bytes);
2920 }
2921
2922 /*
2923 pvalloc just invokes valloc for the nearest pagesize
2924 that will accommodate request
2925 */
2926
2927
2928 #if __STD_C
2929 Void_t* pvALLOc(size_t bytes)
2930 #else
2931 Void_t* pvALLOc(bytes) size_t bytes;
2932 #endif
2933 {
2934 size_t pagesize = malloc_getpagesize;
2935 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2936 }
2937
2938 /*
2939
2940 calloc calls malloc, then zeroes out the allocated chunk.
2941
2942 */
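
/*
  Illustrative caveat (not part of this allocator): cALLOc below
  computes n * elem_size without an overflow check, so a careful
  caller can guard the product itself.  ex_calloc_guarded is a
  hypothetical wrapper, not an existing API.
*/
#if 0 /* example only */
static Void_t *ex_calloc_guarded(size_t n, size_t elem_size)
{
	if (elem_size != 0 && n > (size_t)-1 / elem_size)
		return NULL;  /* n * elem_size would wrap around */
	return cALLOc(n, elem_size);
}
#endif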
2943
2944 #if __STD_C
2945 Void_t* cALLOc(size_t n, size_t elem_size)
2946 #else
2947 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2948 #endif
2949 {
2950 mchunkptr p;
2951 INTERNAL_SIZE_T csz;
2952
2953 INTERNAL_SIZE_T sz = n * elem_size;
2954
2955
2956 /* check if expand_top called, in which case don't need to clear */
2957 #if MORECORE_CLEARS
2958 mchunkptr oldtop = top;
2959 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2960 #endif
2961   Void_t* mem;
2962   if ((long)n < 0) return NULL; /* reject bad counts before allocating */
2963   mem = mALLOc (sz);
2964
2965 if (mem == NULL)
2966 return NULL;
2967 else
2968 {
2969 #ifdef CONFIG_SYS_MALLOC_F_LEN
2970 if (!(gd->flags & GD_FLG_RELOC)) {
2971 MALLOC_ZERO(mem, sz);
2972 return mem;
2973 }
2974 #endif
2975 p = mem2chunk(mem);
2976
2977 /* Two optional cases in which clearing not necessary */
2978
2979
2980 #if HAVE_MMAP
2981 if (chunk_is_mmapped(p)) return mem;
2982 #endif
2983
2984 csz = chunksize(p);
2985
2986 #if MORECORE_CLEARS
2987 if (p == oldtop && csz > oldtopsize)
2988 {
2989 /* clear only the bytes from non-freshly-sbrked memory */
2990 csz = oldtopsize;
2991 }
2992 #endif
2993
2994 MALLOC_ZERO(mem, csz - SIZE_SZ);
2995 return mem;
2996 }
2997 }
2998
2999 /*
3000
3001 cfree just calls free. It is needed/defined on some systems
3002 that pair it with calloc, presumably for odd historical reasons.
3003
3004 */
3005
3006 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3007 #if __STD_C
3008 void cfree(Void_t *mem)
3009 #else
3010 void cfree(mem) Void_t *mem;
3011 #endif
3012 {
3013 fREe(mem);
3014 }
3015 #endif
3016
3017
3018
3019 /*
3020
3021 Malloc_trim gives memory back to the system (via negative
3022 arguments to sbrk) if there is unused memory at the `high' end of
3023 the malloc pool. You can call this after freeing large blocks of
3024 memory to potentially reduce the system-level memory requirements
3025 of a program. However, it cannot guarantee to reduce memory. Under
3026 some allocation patterns, some large free blocks of memory will be
3027 locked between two used chunks, so they cannot be given back to
3028 the system.
3029
3030 The `pad' argument to malloc_trim represents the amount of free
3031 trailing space to leave untrimmed. If this argument is zero,
3032 only the minimum amount of memory to maintain internal data
3033 structures will be left (one page or less). Non-zero arguments
3034 can be supplied to maintain enough trailing space to service
3035 future expected allocations without having to re-obtain memory
3036 from the system.
3037
3038 Malloc_trim returns 1 if it actually released any memory, else 0.
3039
3040 */
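
/*
  Worked example (illustrative only): with pagesz == 4096, pad == 0 and
  MINSIZE == 16, a 20000-byte top chunk gives
  extra = ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 16384,
  so four whole pages are released and 3616 bytes remain in top.
*/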
3041
3042 #if __STD_C
3043 int malloc_trim(size_t pad)
3044 #else
3045 int malloc_trim(pad) size_t pad;
3046 #endif
3047 {
3048 long top_size; /* Amount of top-most memory */
3049 long extra; /* Amount to release */
3050 char* current_brk; /* address returned by pre-check sbrk call */
3051 char* new_brk; /* address returned by negative sbrk call */
3052
3053 unsigned long pagesz = malloc_getpagesize;
3054
3055 top_size = chunksize(top);
3056 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3057
3058 if (extra < (long)pagesz) /* Not enough memory to release */
3059 return 0;
3060
3061 else
3062 {
3063 /* Test to make sure no one else called sbrk */
3064 current_brk = (char*)(MORECORE (0));
3065 if (current_brk != (char*)(top) + top_size)
3066 return 0; /* Apparently we don't own memory; must fail */
3067
3068 else
3069 {
3070 new_brk = (char*)(MORECORE (-extra));
3071
3072 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3073 {
3074 /* Try to figure out what we have */
3075 current_brk = (char*)(MORECORE (0));
3076 top_size = current_brk - (char*)top;
3077 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3078 {
3079 sbrked_mem = current_brk - sbrk_base;
3080 set_head(top, top_size | PREV_INUSE);
3081 }
3082 check_chunk(top);
3083 return 0;
3084 }
3085
3086 else
3087 {
3088 /* Success. Adjust top accordingly. */
3089 set_head(top, (top_size - extra) | PREV_INUSE);
3090 sbrked_mem -= extra;
3091 check_chunk(top);
3092 return 1;
3093 }
3094 }
3095 }
3096 }
3097
3098
3099
3100 /*
3101 malloc_usable_size:
3102
3103 This routine tells you how many bytes you can actually use in an
3104 allocated chunk, which may be more than you requested (although
3105 often not). You can use this many bytes without worrying about
3106 overwriting other allocated objects. Not a particularly great
3107 programming practice, but still sometimes useful.
3108
3109 */
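
/*
  Usage sketch (not compiled): assuming SIZE_SZ == 4, a 10-byte request
  occupies a 16-byte chunk, so malloc_usable_size reports 12 bytes.
*/
#if 0 /* example only */
static void ex_usable_size(void)
{
	void *p = mALLOc(10);
	if (p) {
		size_t n = malloc_usable_size(p); /* 12 under the above */
		/* all n bytes at p may be used safely */
		(void)n;
		fREe(p);
	}
}
#endif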
3110
3111 #if __STD_C
3112 size_t malloc_usable_size(Void_t* mem)
3113 #else
3114 size_t malloc_usable_size(mem) Void_t* mem;
3115 #endif
3116 {
3117 mchunkptr p;
3118 if (mem == NULL)
3119 return 0;
3120 else
3121 {
3122 p = mem2chunk(mem);
3123 if(!chunk_is_mmapped(p))
3124 {
3125 if (!inuse(p)) return 0;
3126 check_inuse_chunk(p);
3127 return chunksize(p) - SIZE_SZ;
3128 }
3129 return chunksize(p) - 2*SIZE_SZ;
3130 }
3131 }
3132
3133
3134
3135
3136 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3137
3138 #ifdef DEBUG
3139 static void malloc_update_mallinfo()
3140 {
3141 int i;
3142 mbinptr b;
3143 mchunkptr p;
3144 #ifdef DEBUG
3145 mchunkptr q;
3146 #endif
3147
3148 INTERNAL_SIZE_T avail = chunksize(top);
3149 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3150
3151 for (i = 1; i < NAV; ++i)
3152 {
3153 b = bin_at(i);
3154 for (p = last(b); p != b; p = p->bk)
3155 {
3156 #ifdef DEBUG
3157 check_free_chunk(p);
3158 for (q = next_chunk(p);
3159 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3160 q = next_chunk(q))
3161 check_inuse_chunk(q);
3162 #endif
3163 avail += chunksize(p);
3164 navail++;
3165 }
3166 }
3167
3168 current_mallinfo.ordblks = navail;
3169 current_mallinfo.uordblks = sbrked_mem - avail;
3170 current_mallinfo.fordblks = avail;
3171 current_mallinfo.hblks = n_mmaps;
3172 current_mallinfo.hblkhd = mmapped_mem;
3173 current_mallinfo.keepcost = chunksize(top);
3174
3175 }
3176 #endif /* DEBUG */
3177
3178
3179
3180 /*
3181
3182 malloc_stats:
3183
3184     Prints the amount of space obtained from the system (both
3185 via sbrk and mmap), the maximum amount (which may be more than
3186 current if malloc_trim and/or munmap got called), the maximum
3187 number of simultaneous mmap regions used, and the current number
3188 of bytes allocated via malloc (or realloc, etc) but not yet
3189 freed. (Note that this is the number of bytes allocated, not the
3190 number requested. It will be larger than the number requested
3191 because of alignment and bookkeeping overhead.)
3192
3193 */
3194
3195 #ifdef DEBUG
3196 void malloc_stats()
3197 {
3198 malloc_update_mallinfo();
3199 printf("max system bytes = %10u\n",
3200 (unsigned int)(max_total_mem));
3201 printf("system bytes = %10u\n",
3202 (unsigned int)(sbrked_mem + mmapped_mem));
3203 printf("in use bytes = %10u\n",
3204 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3205 #if HAVE_MMAP
3206 printf("max mmap regions = %10u\n",
3207 (unsigned int)max_n_mmaps);
3208 #endif
3209 }
3210 #endif /* DEBUG */
3211
3212 /*
3213     mallinfo returns a copy of the updated current mallinfo.
3214 */
3215
3216 #ifdef DEBUG
3217 struct mallinfo mALLINFo()
3218 {
3219 malloc_update_mallinfo();
3220 return current_mallinfo;
3221 }
3222 #endif /* DEBUG */
3223
3224
3225
3226
3227 /*
3228 mallopt:
3229
3230 mallopt is the general SVID/XPG interface to tunable parameters.
3231 The format is to provide a (parameter-number, parameter-value) pair.
3232 mallopt then sets the corresponding parameter to the argument
3233 value if it can (i.e., so long as the value is meaningful),
3234 and returns 1 if successful else 0.
3235
3236 See descriptions of tunable parameters above.
3237
3238 */
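
/*
  Usage sketch (not compiled): tuning two parameters.  Passing -1 as
  the M_TRIM_THRESHOLD value stores (unsigned long)-1, which
  effectively disables trimming.
*/
#if 0 /* example only */
static void ex_mallopt_usage(void)
{
	mALLOPt(M_TRIM_THRESHOLD, -1);  /* never trim automatically */
	mALLOPt(M_TOP_PAD, 64 * 1024);  /* keep 64K of sbrk headroom */
}
#endif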
3239
3240 #if __STD_C
3241 int mALLOPt(int param_number, int value)
3242 #else
3243 int mALLOPt(param_number, value) int param_number; int value;
3244 #endif
3245 {
3246 switch(param_number)
3247 {
3248 case M_TRIM_THRESHOLD:
3249 trim_threshold = value; return 1;
3250 case M_TOP_PAD:
3251 top_pad = value; return 1;
3252 case M_MMAP_THRESHOLD:
3253 mmap_threshold = value; return 1;
3254 case M_MMAP_MAX:
3255 #if HAVE_MMAP
3256 n_mmaps_max = value; return 1;
3257 #else
3258 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3259 #endif
3260
3261 default:
3262 return 0;
3263 }
3264 }
3265
3266 /*
3267
3268 History:
3269
3270 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
3271 * return null for negative arguments
3272 * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3273 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3274 (e.g. WIN32 platforms)
3275 * Cleanup up header file inclusion for WIN32 platforms
3276 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3277 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3278 memory allocation routines
3279 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3280 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3281 usage of 'assert' in non-WIN32 code
3282 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3283 avoid infinite loop
3284 * Always call 'fREe()' rather than 'free()'
3285
3286 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3287 * Fixed ordering problem with boundary-stamping
3288
3289 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3290 * Added pvalloc, as recommended by H.J. Liu
3291 * Added 64bit pointer support mainly from Wolfram Gloger
3292 * Added anonymously donated WIN32 sbrk emulation
3293 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3294 * malloc_extend_top: fix mask error that caused wastage after
3295 foreign sbrks
3296 * Add linux mremap support code from HJ Liu
3297
3298 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3299 * Integrated most documentation with the code.
3300 * Add support for mmap, with help from
3301 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3302 * Use last_remainder in more cases.
3303 * Pack bins using idea from colin@nyx10.cs.du.edu
3304       * Use ordered bins instead of best-fit threshold
3305 * Eliminate block-local decls to simplify tracing and debugging.
3306 * Support another case of realloc via move into top
3307       * Fix error occurring when initial sbrk_base not word-aligned.
3308 * Rely on page size for units instead of SBRK_UNIT to
3309 avoid surprises about sbrk alignment conventions.
3310 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3311 (raymond@es.ele.tue.nl) for the suggestion.
3312 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3313 * More precautions for cases where other routines call sbrk,
3314 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3315 * Added macros etc., allowing use in linux libc from
3316 H.J. Lu (hjl@gnu.ai.mit.edu)
3317 * Inverted this history list
3318
3319 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3320 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3321 * Removed all preallocation code since under current scheme
3322 the work required to undo bad preallocations exceeds
3323 the work saved in good cases for most test programs.
3324 * No longer use return list or unconsolidated bins since
3325 no scheme using them consistently outperforms those that don't
3326 given above changes.
3327 * Use best fit for very large chunks to prevent some worst-cases.
3328 * Added some support for debugging
3329
3330 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3331 * Removed footers when chunks are in use. Thanks to
3332 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3333
3334 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3335 * Added malloc_trim, with help from Wolfram Gloger
3336 (wmglo@Dent.MED.Uni-Muenchen.DE).
3337
3338 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3339
3340 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3341 * realloc: try to expand in both directions
3342 * malloc: swap order of clean-bin strategy;
3343 * realloc: only conditionally expand backwards
3344 * Try not to scavenge used bins
3345 * Use bin counts as a guide to preallocation
3346 * Occasionally bin return list chunks in first scan
3347 * Add a few optimizations from colin@nyx10.cs.du.edu
3348
3349 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3350 * faster bin computation & slightly different binning
3351 * merged all consolidations to one part of malloc proper
3352 (eliminating old malloc_find_space & malloc_clean_bin)
3353 * Scan 2 returns chunks (not just 1)
3354 * Propagate failure in realloc if malloc returns 0
3355 * Add stuff to allow compilation on non-ANSI compilers
3356 from kpv@research.att.com
3357
3358 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3359 * removed potential for odd address access in prev_chunk
3360 * removed dependency on getpagesize.h
3361 * misc cosmetics and a bit more internal documentation
3362 * anticosmetics: mangled names in macros to evade debugger strangeness
3363 * tested on sparc, hp-700, dec-mips, rs6000
3364 with gcc & native cc (hp, dec only) allowing
3365 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3366
3367 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3368 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3369 structure of old version, but most details differ.)
3370
3371 */