/* Malloc implementation for multiple threads without lock contention.
   Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
   and Doug Lea <dl@cs.oswego.edu>, 1996.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Library General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Library General Public License for more details.

   You should have received a copy of the GNU Library General Public
   License along with the GNU C Library; see the file COPYING.LIB.  If not,
   write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
   Boston, MA 02111-1307, USA.  */
/* V2.6.4-pt3 Thu Feb 20 1997

   This work is mainly derived from malloc-2.6.4 by Doug Lea
   <dl@cs.oswego.edu>, which is available from:

                 ftp://g.oswego.edu/pub/misc/malloc.c

   Most of the original comments are reproduced in the code below.

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

  On many systems, the standard malloc implementation is by itself not
  thread-safe, and therefore wrapped with a single global lock around
  all malloc-related functions. In some applications, especially with
  multiple available processors, this can lead to contention problems
  and bad performance. This malloc version was designed with the goal
  to avoid waiting for locks as much as possible. Statistics indicate
  that this goal is achieved in many cases.

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  ptmalloc_init();
     Initialize global configuration.  When compiled for multiple threads,
     this function must be called once before any other function in the
     package.  It is not required otherwise.  It is called automatically
     in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to the nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

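  As a quick orientation, here is a minimal calling sequence for these
  routines (an illustrative sketch, not part of the original text; the
  mallopt parameter names such as M_TRIM_THRESHOLD are assumed to be
  declared in the malloc.h/ptmalloc.h header included below):

      char *p;
      mallopt(M_TRIM_THRESHOLD, 64*1024);  -- trim unused memory sooner
      p = malloc(1000);
      ...
      free(p);
      malloc_trim(0);     -- return all unused top-most memory now
      malloc_stats();     -- print a summary on stderr
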
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

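  For a concrete feel (illustrative arithmetic, not from the original
  text): with 4-byte size_t, a request of 13 bytes is padded by the
  4-byte size/status word and rounded up to 8-byte alignment, giving a
  24-byte chunk; the 11 "wasted" bytes fall within the normal 15-byte
  worst case above.
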
* Limitations

    Here are some features that are NOT currently supported

    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  MALLOC_DEBUG             (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  MALLOC_HOOKS             (default: NOT defined)
     Define to enable support for run-time replacement of the allocation
     functions through user-defined `hooks'.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but doing so will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  _LIBC                    (default: NOT defined)
     Defined only when compiled as part of the Linux libc/glibc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  LACKS_UNISTD_H           (default: undefined)
     Define this if your system does not have a <unistd.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  DEFAULT_CHECK_ACTION
     When the standard debugging hooks are in place, and a pointer is
     detected as corrupt, do nothing (0), print an error message (1),
     or call abort() (2).


*/

/*

* Compile-time options for multiple threads:

  USE_PTHREADS, USE_THR, USE_SPROC
     Define one of these as 1 to select the thread interface:
     POSIX threads, Solaris threads or SGI sproc's, respectively.
     If none of these is defined as non-zero, you get a `normal'
     malloc implementation which is not thread-safe.  Support for
     multiple threads requires HAVE_MMAP=1.  As an exception, when
     compiling for GNU libc, i.e. when _LIBC is defined, then none of
     the USE_... symbols have to be defined.

  HEAP_MIN_SIZE
  HEAP_MAX_SIZE
     When thread support is enabled, additional `heap's are created
     with mmap calls.  These are limited in size; HEAP_MIN_SIZE should
     be a multiple of the page size, while HEAP_MAX_SIZE must be a power
     of two for alignment reasons.  HEAP_MAX_SIZE should be at least
     twice as large as the mmap threshold.
  THREAD_STATS
     When this is defined as non-zero, some statistics on mutex locking
     are computed.

*/

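/* As an illustration (an assumption about typical usage, not from the
   original text), a stand-alone build of this file for POSIX threads
   might look like:

     gcc -c -O2 -DUSE_PTHREADS=1 -DHAVE_MMAP=1 malloc.c

   with the resulting object linked against the pthread library.  */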
\f


/* Preliminaries */

#ifndef __STD_C
#if defined (__STDC__)
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if __STD_C
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
# include <stddef.h>   /* for size_t */
# if defined _LIBC || defined MALLOC_HOOKS
#  include <stdlib.h>  /* for getenv(), abort() */
# endif
#else
# include <sys/types.h>
#endif

/* Macros for handling mutexes and thread-specific data.  This is
   included early, because some thread-related header files (such as
   pthread.h) should be included before any others. */
#include "thread-m.h"

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DMALLOC_DEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting MALLOC_DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#if MALLOC_DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc.  Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY 1

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
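/* To see how the Duff's device versions above unroll (an illustrative
   note, not from the original source): for nbytes equal to 10 words,
   mctmp becomes 2 and mcn becomes 1, so MALLOC_COPY enters the switch
   at `case 2', copies two words, then runs the for(;;) body once more
   for the remaining eight words before breaking.  */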

/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#ifndef MAP_NORESERVE
# ifdef MAP_AUTORESRV
#  define MAP_NORESERVE MAP_AUTORESRV
# else
#  define MAP_NORESERVE 0
# endif
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      include <sys/param.h>
#      ifdef EXEC_PAGESIZE
#        define malloc_getpagesize EXEC_PAGESIZE
#      else
#        ifdef NBPG
#          ifndef CLSIZE
#            define malloc_getpagesize NBPG
#          else
#            define malloc_getpagesize (NBPG * CLSIZE)
#          endif
#        else
#          ifdef NBPC
#            define malloc_getpagesize NBPC
#          else
#            ifdef PAGESIZE
#              define malloc_getpagesize PAGESIZE
#            else
#              define malloc_getpagesize (4096) /* just guess */
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif



/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
# include "/usr/include/malloc.h"
#else
# ifdef _LIBC
#  include "malloc.h"
# else
#  include "ptmalloc.h"
# endif
#endif



#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  to keep before releasing via malloc_trim in free().

  Automatic trimming is mainly useful in long-lived programs.
  Because trimming via sbrk can be slow on some systems, and can
  sometimes be wasteful (in cases where programs immediately
  afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing.

  The trim threshold and the mmap control parameters (see below)
  can be traded off with one another. Trimming and mmapping are
  two different ways of releasing unused memory back to the
  system. Between these two, it is often possible to keep
  system-level demands of a long-lived program down to a bare
  minimum. For example, in one test suite of sessions measuring
  the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.

  If you are using this malloc in a long-lived program, it should
  pay to experiment with these values.  As a rough guide, you
  might set to a value close to the average size of a process
  (program) running on your system.  Releasing this much memory
  would allow such a process to run in memory.  Generally, it's
  worth it to tune for trimming rather than memory mapping when a
  program undergoes phases where several large chunks are
  allocated and released in ways that can reuse each other's
  storage, perhaps mixed with phases where there are no such
  chunks at all.  And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.

  However, in most programs, these parameters serve mainly as
  protection against the system-level effects of carrying around
  massive amounts of unneeded memory. Since frequent calls to
  sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.

  The default trim value is high enough to cause trimming only in
  fairly extreme (by current memory consumption standards) cases.
  It must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to (unsigned long)(-1);


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
  M_TOP_PAD is the amount of extra `padding' space to allocate or
  retain whenever sbrk is called. It is used in two ways internally:

  * When sbrk is called to extend the top of the arena to satisfy
    a new malloc request, this much padding is added to the sbrk
    request.

  * When malloc_trim is called automatically from free(),
    it is used as the `pad' argument.

  In both cases, the actual amount of padding is rounded
  so that the end of the arena is always a system page boundary.

  The main reason for using padding is to avoid calling sbrk so
  often. Having even a small pad greatly reduces the likelihood
  that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes
  time.

  Automatic rounding-up to page-size units is normally sufficient
  to avoid measurable overhead, so the default is 0.  However, in
  systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

  M_MMAP_THRESHOLD is the request size threshold for using mmap()
  to service a request.  Requests of at least this size that cannot
  be allocated using already-existing space will be serviced via mmap.
  (If enough normal freed space already exists it is used instead.)

  Using mmap segregates relatively large chunks of memory so that
  they can be individually obtained and released from the host
  system. A request serviced through mmap is never reused by any
  other request (at least not directly; the system may just so
  happen to remap successive requests to the same locations).

  Segregating space in this way has the benefit that mmapped space
  can ALWAYS be individually released back to the system, which
  helps keep the system level memory demands of a long-lived
  program low. Mapped memory can never become `locked' between
  other chunks, as can happen with normally allocated chunks, which
  means that even trimming via malloc_trim would not release them.

  However, it has the disadvantages that:

   1. The space cannot be reclaimed, consolidated, and then
      used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements
   3. It causes malloc performance to be more dependent on host
      system memory management support routines which may vary in
      implementation quality and may impose arbitrary
      limitations. Generally, servicing a request via normal
      malloc steps is faster than going through a system's mmap.

  All together, these considerations should lead you to use mmap
  only for relatively large requests.


*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (1024)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap. This parameter exists because:

  1. Some systems have a limited number of internal tables for
     use by mmap.
  2. In most systems, overreliance on mmap can degrade overall
     performance.
  3. If a program allocates many large regions, it is probably
     better off using normal sbrk-based allocation routines that
     can reclaim and reallocate normal heap memory. Using a
     small value allows transition into this mode after the
     first few allocations.

  Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
  the default value is 0, and attempts to set it to non-zero values
  in mallopt will fail.
*/

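/* For instance (an illustrative note, not from the original source), a
   program that wants every request serviced from the normal heap can
   call mallopt(M_MMAP_MAX, 0) early on, using the M_MMAP_MAX constant
   from the accompanying header.  */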


#ifndef DEFAULT_CHECK_ACTION
#define DEFAULT_CHECK_ACTION 1
#endif

/* What to do if the standard debugging hooks are in place and a
   corrupt pointer is detected: do nothing (0), print an error message
   (1), or call abort() (2). */



#define HEAP_MIN_SIZE (32*1024)
#define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */

/* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
   that are dynamically created for multi-threaded programs.  The
   maximum size must be a power of two, for fast determination of
   which heap belongs to a chunk.  It should be much larger than
   the mmap threshold, so that requests with a size just below that
   threshold can be fulfilled without creating too many heaps.
*/

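/* Because heaps are aligned to the power-of-two HEAP_MAX_SIZE, the
   heap_info for the heap containing a given chunk can be found just by
   masking the chunk's address, along the lines of the following sketch
   (illustrative here; a macro of this shape appears later in ptmalloc):

     #define heap_for_ptr(ptr) \
       ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
*/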


#ifndef THREAD_STATS
#define THREAD_STATS 0
#endif

/* If THREAD_STATS is non-zero, some statistics on mutex locking are
   computed. */


/*

  Special defines for the Linux/GNU C library.

*/


#ifdef _LIBC

#if __STD_C

Void_t * __default_morecore (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;

#else

Void_t * __default_morecore ();
Void_t *(*__morecore)() = __default_morecore;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1
#define mmap __mmap
#define munmap __munmap
#define mremap __mremap
#define mprotect __mprotect
#undef malloc_getpagesize
#define malloc_getpagesize __getpagesize()

#else /* _LIBC */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* _LIBC */

#ifdef _LIBC

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt
#define mALLOC_STATs    __malloc_stats
#define mALLOC_USABLE_SIZe __malloc_usable_size
#define mALLOC_TRIm     __malloc_trim
#define mALLOC_GET_STATe __malloc_get_state
#define mALLOC_SET_STATe __malloc_set_state

#else

#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt
#define mALLOC_STATs    malloc_stats
#define mALLOC_USABLE_SIZe malloc_usable_size
#define mALLOC_TRIm     malloc_trim
#define mALLOC_GET_STATe malloc_get_state
#define mALLOC_SET_STATe malloc_set_state

#endif

/* Public routines */

#if __STD_C

#ifndef _LIBC
void    ptmalloc_init(void);
#endif
Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     mALLOC_TRIm(size_t);
size_t  mALLOC_USABLE_SIZe(Void_t*);
void    mALLOC_STATs(void);
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
Void_t* mALLOC_GET_STATe(void);
int     mALLOC_SET_STATe(Void_t*);

#else /* !__STD_C */

#ifndef _LIBC
void    ptmalloc_init();
#endif
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     mALLOC_TRIm();
size_t  mALLOC_USABLE_SIZe();
void    mALLOC_STATs();
int     mALLOPt();
struct mallinfo mALLINFo();
Void_t* mALLOC_GET_STATe();
int     mALLOC_SET_STATe();

#endif /* __STD_C */


#ifdef __cplusplus
}; /* end of extern "C" */
#endif

#if !defined(NO_THREADS) && !HAVE_MMAP
"Can't have threads support without mmap"
#endif


/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated              |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
        trailing size field since there is no
        next contiguous chunk that would have to index off it. (After
        initialization, `top' is forced to always exist.  If it would
        become less than MINSIZE bytes long, it is replenished via
        malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        never merged or traversed from any other chunk, they have no
        foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    * Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/
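/* A small worked example of the boundary-tag arithmetic (illustrative,
   not from the original text): with 4-byte INTERNAL_SIZE_T, a chunk at
   address 0x1000 whose size field reads 0x31 spans 0x30 bytes and has
   its PREV_INUSE bit set; the user pointer `mem' is 0x1008, and the
   next contiguous chunk starts at 0x1030.  The macros below
   (chunk2mem, mem2chunk, next_chunk) encode exactly this arithmetic.  */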

/*
   Bins

    The bins are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.)

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* An arena is a configuration of malloc_chunks together with an array
   of bins.  With multiple threads, it must be locked via a mutex
   before changing its data structures.  One or more `heaps' are
   associated with each arena, except for the main_arena, which is
   associated only with the `main heap', i.e. the conventional free
   store obtained with calls to MORECORE() (usually sbrk).  The `av'
   array is never mentioned directly in the code, but instead used via
   bin access macros. */

typedef struct _arena {
  mbinptr av[2*NAV + 2];
  struct _arena *next;
  size_t size;
#if THREAD_STATS
  long stat_lock_direct, stat_lock_loop, stat_lock_wait;
#endif
  mutex_t mutex;
} arena;


/* A heap is a single contiguous memory region holding (coalesceable)
   malloc_chunks.  It is allocated with mmap() and always starts at an
   address aligned to HEAP_MAX_SIZE.  Not used unless compiling for
   multiple threads. */

typedef struct _heap_info {
  arena *ar_ptr;           /* Arena for this heap. */
  struct _heap_info *prev; /* Previous heap. */
  size_t size;             /* Current size in bytes. */
  size_t pad;              /* Make sure the following data is properly aligned. */
} heap_info;


/*
  Static functions (forward declarations)
*/

#if __STD_C

static void      chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
     internal_function;
static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
                               INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
     internal_function;
static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
                             size_t alignment) internal_function;
static int       main_trim(size_t pad) internal_function;
#ifndef NO_THREADS
static int       heap_trim(heap_info *heap, size_t pad) internal_function;
#endif
#if defined _LIBC || defined MALLOC_HOOKS
static Void_t*   malloc_check(size_t sz, const Void_t *caller);
static void      free_check(Void_t* mem, const Void_t *caller);
static Void_t*   realloc_check(Void_t* oldmem, size_t bytes,
                               const Void_t *caller);
static Void_t*   memalign_check(size_t alignment, size_t bytes,
                                const Void_t *caller);
#ifndef NO_THREADS
static Void_t*   malloc_starter(size_t sz, const Void_t *caller);
static void      free_starter(Void_t* mem, const Void_t *caller);
static Void_t*   malloc_atfork(size_t sz, const Void_t *caller);
static void      free_atfork(Void_t* mem, const Void_t *caller);
#endif
#endif

#else

static void      chunk_free();
static mchunkptr chunk_alloc();
static mchunkptr chunk_realloc();
static mchunkptr chunk_align();
static int       main_trim();
#ifndef NO_THREADS
static int       heap_trim();
#endif
#if defined _LIBC || defined MALLOC_HOOKS
static Void_t*   malloc_check();
static void      free_check();
static Void_t*   realloc_check();
static Void_t*   memalign_check();
#ifndef NO_THREADS
static Void_t*   malloc_starter();
static void      free_starter();
static Void_t*   malloc_atfork();
static void      free_atfork();
#endif
#endif

#endif

/* On some platforms we can compile internal, not exported functions better.
   Let the environment provide a macro and define it to be empty if it
   is not available. */
#ifndef internal_function
# define internal_function
#endif

\f

/* sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))

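/* Worked examples of request2size (illustrative, not from the original
   source), assuming 4-byte INTERNAL_SIZE_T so that SIZE_SZ == 4,
   MALLOC_ALIGN_MASK == 7 and MINSIZE == 16:

     request2size(0)  == 16   -- too small, MINSIZE is used
     request2size(10) == 16   -- likewise rounds up to MINSIZE
     request2size(25) == 32   -- (25 + 4 + 7) & ~7
*/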
/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)


\f

/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))

\f

/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p) \
 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p) \
 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p) \
 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s) \
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s) \
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s) \
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))

1337 | \f | |
1338 | ||
1339 | /* | |
1340 | Dealing with size fields | |
1341 | */ | |
1342 | ||
1343 | /* Get size, ignoring use bits */ | |
1344 | ||
1345 | #define chunksize(p) ((p)->size & ~(SIZE_BITS)) | |
1346 | ||
1347 | /* Set size at head, without disturbing its use bit */ | |
1348 | ||
1349 | #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s))) | |
1350 | ||
1351 | /* Set size/use ignoring previous bits in header */ | |
1352 | ||
1353 | #define set_head(p, s) ((p)->size = (s)) | |
1354 | ||
1355 | /* Set size at footer (only when chunk is not in use) */ | |
1356 | ||
1357 | #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s)) | |
1358 | ||
1359 | ||
1360 | \f | |
1361 | ||
1362 | ||
1363 | /* access macros */ | |
1364 | ||
1365 | #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ)) | |
1366 | #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i)) | |
1367 | #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr))) | |
1368 | #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr))) | |
1369 | ||
1370 | /* | |
1371 | The first 2 bins are never indexed. The corresponding av cells are instead | |
1372 | used for bookkeeping. This is not to save space, but to simplify | |
1373 | indexing, maintain locality, and avoid some initialization tests. | |
1374 | */ | |
1375 | ||
1376 | #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */ | |
1377 | #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */ | |
1378 | #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */ | |
1379 | ||
1380 | /* | |
1381 | Because top initially points to its own bin with initial | |
1382 | zero size, thus forcing extension on the first malloc request, | |
1383 | we avoid having any special code in malloc to check whether | |
1384 | it even exists yet. But we still need to check for it in malloc_extend_top. | |
1385 | */ | |
1386 | ||
1387 | #define initial_top(a) ((mchunkptr)bin_at(a, 0)) | |
1388 | ||
1389 | \f | |
1390 | ||
1391 | /* field-extraction macros */ | |
1392 | ||
1393 | #define first(b) ((b)->fd) | |
1394 | #define last(b) ((b)->bk) | |
1395 | ||
1396 | /* | |
1397 | Indexing into bins | |
1398 | */ | |
1399 | ||
dfd2257a UD |
1400 | #define bin_index(sz) \ |
1401 | (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\ | |
1402 | ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\ | |
1403 | ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\ | |
1404 | ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\ | |
1405 | ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\ | |
1406 | ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\ | |
f65fd747 UD |
1407 | 126) |
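/* Worked examples (illustrative):
     bin_index(40)     ->  40>>3             =   5  (bins spaced 8 bytes)
     bin_index(600)    ->  56 + (600>>6)     =  65  (spaced 64 bytes)
     bin_index(5000)   ->  91 + (5000>>9)    = 100  (spaced 512 bytes)
     bin_index(100000) -> 119 + (100000>>15) = 122  (spaced 32k) */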
1408 | /* | |
1409 | bins for chunks < 512 are all spaced 8 bytes apart, and hold | |
1410 | identically sized chunks. This is exploited in malloc. | |
1411 | */ | |
1412 | ||
1413 | #define MAX_SMALLBIN 63 | |
1414 | #define MAX_SMALLBIN_SIZE 512 | |
1415 | #define SMALLBIN_WIDTH 8 | |
1416 | ||
1417 | #define smallbin_index(sz) (((unsigned long)(sz)) >> 3) | |
1418 | ||
1419 | /* | |
1420 | Requests are `small' if both the corresponding and the next bin are small | |
1421 | */ | |
1422 | ||
1423 | #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH) | |
1424 | ||
1425 | \f | |
1426 | ||
1427 | /* | |
1428 | To help compensate for the large number of bins, a one-level index | |
1429 | structure is used for bin-by-bin searching. `binblocks' is a | |
1430 | one-word bitvector recording whether groups of BINBLOCKWIDTH bins | |
1431 | have any (possibly) non-empty bins, so they can be skipped over | |
1432 | all at once during traversals. The bits are NOT always | |
1433 | cleared as soon as all bins in a block are empty, but instead only | |
1434 | when all are noticed to be empty during traversal in malloc. | |
1435 | */ | |
1436 | ||
1437 | #define BINBLOCKWIDTH 4 /* bins per block */ | |
1438 | ||
1439 | /* bin<->block macros */ | |
1440 | ||
1441 | #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH)) | |
1442 | #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii)) | |
1443 | #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii))) | |
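/* Example (illustrative): bin 5 belongs to block 5/4 == 1, so
   idx2binblock(5) == (1 << 1) == 2. mark_binblock(a, 5) sets that
   bit, advertising that one of bins 4..7 may be nonempty; the scan
   in chunk_alloc() clears the bit again once all four bins in the
   block have been seen to be empty. */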
1444 | ||
1445 | ||
1446 | \f | |
1447 | ||
1448 | /* Static bookkeeping data */ | |
1449 | ||
1450 | /* Helper macro to initialize bins */ | |
1451 | #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i) | |
1452 | ||
1453 | static arena main_arena = { | |
1454 | { | |
1455 | 0, 0, | |
1456 | IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7), | |
1457 | IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15), | |
1458 | IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23), | |
1459 | IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31), | |
1460 | IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39), | |
1461 | IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47), | |
1462 | IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55), | |
1463 | IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63), | |
1464 | IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71), | |
1465 | IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79), | |
1466 | IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87), | |
1467 | IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95), | |
1468 | IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103), | |
1469 | IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111), | |
1470 | IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119), | |
1471 | IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127) | |
1472 | }, | |
7e3be507 | 1473 | &main_arena, /* next */ |
8a4b65b4 UD |
1474 | 0, /* size */ |
1475 | #if THREAD_STATS | |
1476 | 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */ | |
1477 | #endif | |
f65fd747 UD |
1478 | MUTEX_INITIALIZER /* mutex */ |
1479 | }; | |
1480 | ||
1481 | #undef IAV | |
1482 | ||
1483 | /* Thread specific data */ | |
1484 | ||
8a4b65b4 | 1485 | #ifndef NO_THREADS |
f65fd747 UD |
1486 | static tsd_key_t arena_key; |
1487 | static mutex_t list_lock = MUTEX_INITIALIZER; | |
8a4b65b4 | 1488 | #endif |
f65fd747 UD |
1489 | |
1490 | #if THREAD_STATS | |
f65fd747 | 1491 | static int stat_n_heaps = 0; |
f65fd747 UD |
1492 | #define THREAD_STAT(x) x |
1493 | #else | |
1494 | #define THREAD_STAT(x) do ; while(0) | |
1495 | #endif | |
1496 | ||
1497 | /* variables holding tunable values */ | |
1498 | ||
1499 | static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD; | |
1500 | static unsigned long top_pad = DEFAULT_TOP_PAD; | |
1501 | static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX; | |
1502 | static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD; | |
10dc2a90 | 1503 | static int check_action = DEFAULT_CHECK_ACTION; |
f65fd747 UD |
1504 | |
1505 | /* The first value returned from sbrk */ | |
1506 | static char* sbrk_base = (char*)(-1); | |
1507 | ||
1508 | /* The maximum memory obtained from system via sbrk */ | |
1509 | static unsigned long max_sbrked_mem = 0; | |
1510 | ||
8a4b65b4 UD |
1511 | /* The maximum via either sbrk or mmap (too difficult to track with threads) */ |
1512 | #ifdef NO_THREADS | |
f65fd747 | 1513 | static unsigned long max_total_mem = 0; |
8a4b65b4 | 1514 | #endif |
f65fd747 UD |
1515 | |
1516 | /* The total memory obtained from system via sbrk */ | |
8a4b65b4 | 1517 | #define sbrked_mem (main_arena.size) |
f65fd747 UD |
1518 | |
1519 | /* Tracking mmaps */ | |
1520 | ||
1521 | static unsigned int n_mmaps = 0; | |
1522 | static unsigned int max_n_mmaps = 0; | |
1523 | static unsigned long mmapped_mem = 0; | |
1524 | static unsigned long max_mmapped_mem = 0; | |
1525 | ||
1526 | ||
1527 | \f | |
831372e7 UD |
1528 | #ifndef _LIBC |
1529 | #define weak_variable | |
1530 | #else | |
1531 | /* In GNU libc we want the hook variables to be weak definitions to | |
1532 | avoid a problem with Emacs. */ | |
1533 | #define weak_variable weak_function | |
1534 | #endif | |
7e3be507 UD |
1535 | |
1536 | /* Already initialized? */ | |
9756dfe1 | 1537 | int __malloc_initialized = -1; |
f65fd747 UD |
1538 | |
1539 | ||
ee74a442 UD |
1540 | #ifndef NO_THREADS |
1541 | ||
ca34d7a7 UD |
1542 | /* The following two functions are registered via thread_atfork() to |
1543 | make sure that the mutexes remain in a consistent state in the | |
1544 | fork()ed version of a thread. Also adapt the malloc and free hooks | |
1545 | temporarily, because the `atfork' handler mechanism may use | |
1546 | malloc/free internally (e.g. in LinuxThreads). */ | |
1547 | ||
dfd2257a | 1548 | #if defined _LIBC || defined MALLOC_HOOKS |
a2b08ee5 UD |
1549 | static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size, |
1550 | const __malloc_ptr_t)); | |
1551 | static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr, | |
1552 | const __malloc_ptr_t)); | |
1553 | static Void_t* save_arena; | |
a2b08ee5 | 1554 | #endif |
ca34d7a7 UD |
1555 | |
1556 | static void | |
1557 | ptmalloc_lock_all __MALLOC_P((void)) | |
1558 | { | |
1559 | arena *ar_ptr; | |
1560 | ||
1561 | (void)mutex_lock(&list_lock); | |
1562 | for(ar_ptr = &main_arena;;) { | |
1563 | (void)mutex_lock(&ar_ptr->mutex); | |
1564 | ar_ptr = ar_ptr->next; | |
1565 | if(ar_ptr == &main_arena) break; | |
1566 | } | |
dfd2257a | 1567 | #if defined _LIBC || defined MALLOC_HOOKS |
ca34d7a7 UD |
1568 | save_malloc_hook = __malloc_hook; |
1569 | save_free_hook = __free_hook; | |
1570 | __malloc_hook = malloc_atfork; | |
1571 | __free_hook = free_atfork; | |
1572 | /* Only the current thread may perform malloc/free calls now. */ | |
1573 | tsd_getspecific(arena_key, save_arena); | |
1574 | tsd_setspecific(arena_key, (Void_t*)0); | |
1575 | #endif | |
1576 | } | |
1577 | ||
1578 | static void | |
1579 | ptmalloc_unlock_all __MALLOC_P((void)) | |
1580 | { | |
1581 | arena *ar_ptr; | |
1582 | ||
dfd2257a | 1583 | #if defined _LIBC || defined MALLOC_HOOKS |
ca34d7a7 UD |
1584 | tsd_setspecific(arena_key, save_arena); |
1585 | __malloc_hook = save_malloc_hook; | |
1586 | __free_hook = save_free_hook; | |
1587 | #endif | |
1588 | for(ar_ptr = &main_arena;;) { | |
1589 | (void)mutex_unlock(&ar_ptr->mutex); | |
1590 | ar_ptr = ar_ptr->next; | |
1591 | if(ar_ptr == &main_arena) break; | |
1592 | } | |
1593 | (void)mutex_unlock(&list_lock); | |
1594 | } | |
1595 | ||
eb406346 UD |
1596 | static void |
1597 | ptmalloc_init_all __MALLOC_P((void)) | |
1598 | { | |
1599 | arena *ar_ptr; | |
1600 | ||
1601 | #if defined _LIBC || defined MALLOC_HOOKS | |
1602 | tsd_setspecific(arena_key, save_arena); | |
1603 | __malloc_hook = save_malloc_hook; | |
1604 | __free_hook = save_free_hook; | |
1605 | #endif | |
1606 | for(ar_ptr = &main_arena;;) { | |
1607 | (void)mutex_init(&ar_ptr->mutex); | |
1608 | ar_ptr = ar_ptr->next; | |
1609 | if(ar_ptr == &main_arena) break; | |
1610 | } | |
1611 | (void)mutex_init(&list_lock); | |
1612 | } | |
1613 | ||
ee74a442 UD |
1614 | #endif |
1615 | ||
f65fd747 UD |
1616 | /* Initialization routine. */ |
1617 | #if defined(_LIBC) | |
10dc2a90 | 1618 | #if 0 |
f65fd747 | 1619 | static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor)); |
10dc2a90 | 1620 | #endif |
f65fd747 UD |
1621 | |
1622 | static void | |
1623 | ptmalloc_init __MALLOC_P((void)) | |
1624 | #else | |
1625 | void | |
1626 | ptmalloc_init __MALLOC_P((void)) | |
1627 | #endif | |
1628 | { | |
dfd2257a | 1629 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
1630 | const char* s; |
1631 | #endif | |
f65fd747 | 1632 | |
9756dfe1 UD |
1633 | if(__malloc_initialized >= 0) return; |
1634 | __malloc_initialized = 0; | |
ee74a442 | 1635 | #ifndef NO_THREADS |
dfd2257a | 1636 | #if defined _LIBC || defined MALLOC_HOOKS |
7e3be507 UD |
1637 | /* With some threads implementations, creating thread-specific data |
1638 | or initializing a mutex may call malloc() itself. Provide a | |
1639 | simple starter version (realloc() won't work). */ | |
1640 | save_malloc_hook = __malloc_hook; | |
1641 | save_free_hook = __free_hook; | |
1642 | __malloc_hook = malloc_starter; | |
1643 | __free_hook = free_starter; | |
1644 | #endif | |
ee74a442 | 1645 | #ifdef _LIBC |
8a4b65b4 | 1646 | /* Initialize the pthreads interface. */ |
f65fd747 | 1647 | if (__pthread_initialize != NULL) |
8a4b65b4 | 1648 | __pthread_initialize(); |
f65fd747 | 1649 | #endif |
10dc2a90 UD |
1650 | mutex_init(&main_arena.mutex); |
1651 | mutex_init(&list_lock); | |
1652 | tsd_key_create(&arena_key, NULL); | |
1653 | tsd_setspecific(arena_key, (Void_t *)&main_arena); | |
eb406346 | 1654 | thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all); |
ee74a442 | 1655 | #endif /* !defined NO_THREADS */ |
dfd2257a | 1656 | #if defined _LIBC || defined MALLOC_HOOKS |
831372e7 UD |
1657 | if((s = getenv("MALLOC_TRIM_THRESHOLD_"))) |
1658 | mALLOPt(M_TRIM_THRESHOLD, atoi(s)); | |
1659 | if((s = getenv("MALLOC_TOP_PAD_"))) | |
1660 | mALLOPt(M_TOP_PAD, atoi(s)); | |
1661 | if((s = getenv("MALLOC_MMAP_THRESHOLD_"))) | |
1662 | mALLOPt(M_MMAP_THRESHOLD, atoi(s)); | |
1663 | if((s = getenv("MALLOC_MMAP_MAX_"))) | |
1664 | mALLOPt(M_MMAP_MAX, atoi(s)); | |
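  /* For example (illustrative), starting a program as
       MALLOC_TRIM_THRESHOLD_=65536 MALLOC_MMAP_MAX_=0 ./a.out
     lowers the trim threshold to 64k and disables the use of mmap()
     for allocations, while MALLOC_CHECK_ (handled below) selects the
     consistency-checking hooks. */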
10dc2a90 | 1665 | s = getenv("MALLOC_CHECK_"); |
ee74a442 | 1666 | #ifndef NO_THREADS |
7e3be507 UD |
1667 | __malloc_hook = save_malloc_hook; |
1668 | __free_hook = save_free_hook; | |
ee74a442 | 1669 | #endif |
10dc2a90 | 1670 | if(s) { |
831372e7 UD |
1671 | if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0')); |
1672 | __malloc_check_init(); | |
f65fd747 | 1673 | } |
10dc2a90 UD |
1674 | if(__malloc_initialize_hook != NULL) |
1675 | (*__malloc_initialize_hook)(); | |
1676 | #endif | |
9756dfe1 | 1677 | __malloc_initialized = 1; |
f65fd747 UD |
1678 | } |
1679 | ||
ca34d7a7 UD |
1680 | /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */ |
1681 | #ifdef thread_atfork_static | |
1682 | thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \ | |
eb406346 | 1683 | ptmalloc_init_all) |
ca34d7a7 UD |
1684 | #endif |
1685 | ||
dfd2257a | 1686 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
1687 | |
1688 | /* Hooks for debugging versions. The initial hooks just call the | |
1689 | initialization routine, then do the normal work. */ | |
1690 | ||
1691 | static Void_t* | |
a2b08ee5 UD |
1692 | #ifdef _LIBC |
1693 | malloc_hook_ini(size_t sz, const __malloc_ptr_t caller) | |
1694 | #else | |
10dc2a90 UD |
1695 | #if __STD_C |
1696 | malloc_hook_ini(size_t sz) | |
1697 | #else | |
1698 | malloc_hook_ini(sz) size_t sz; | |
1699 | #endif | |
a2b08ee5 | 1700 | #endif |
10dc2a90 UD |
1701 | { |
1702 | __malloc_hook = NULL; | |
1703 | __realloc_hook = NULL; | |
1704 | __memalign_hook = NULL; | |
1705 | ptmalloc_init(); | |
1706 | return mALLOc(sz); | |
1707 | } | |
1708 | ||
1709 | static Void_t* | |
1710 | #if __STD_C | |
dfd2257a | 1711 | realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller) |
10dc2a90 | 1712 | #else |
dfd2257a UD |
1713 | realloc_hook_ini(ptr, sz, caller) |
1714 | Void_t* ptr; size_t sz; const __malloc_ptr_t caller; | |
a2b08ee5 | 1715 | #endif |
10dc2a90 UD |
1716 | { |
1717 | __malloc_hook = NULL; | |
1718 | __realloc_hook = NULL; | |
1719 | __memalign_hook = NULL; | |
1720 | ptmalloc_init(); | |
1721 | return rEALLOc(ptr, sz); | |
1722 | } | |
1723 | ||
1724 | static Void_t* | |
1725 | #if __STD_C | |
dfd2257a | 1726 | memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller) |
10dc2a90 | 1727 | #else |
dfd2257a UD |
1728 | memalign_hook_ini(sz, alignment, caller) |
1729 | size_t sz; size_t alignment; const __malloc_ptr_t caller; | |
a2b08ee5 | 1730 | #endif |
10dc2a90 UD |
1731 | { |
1732 | __malloc_hook = NULL; | |
1733 | __realloc_hook = NULL; | |
1734 | __memalign_hook = NULL; | |
1735 | ptmalloc_init(); | |
1736 | return mEMALIGn(sz, alignment); | |
1737 | } | |
1738 | ||
831372e7 | 1739 | void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL; |
a2b08ee5 UD |
1740 | void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr, |
1741 | const __malloc_ptr_t)) = NULL; | |
1742 | __malloc_ptr_t weak_variable (*__malloc_hook) | |
1743 | __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini; | |
1744 | __malloc_ptr_t weak_variable (*__realloc_hook) | |
1745 | __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t)) | |
1746 | = realloc_hook_ini; | |
1747 | __malloc_ptr_t weak_variable (*__memalign_hook) | |
1748 | __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t)) | |
1749 | = memalign_hook_ini; | |
1228ed5c | 1750 | void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL; |
10dc2a90 UD |
1751 | |
1752 | /* Activate a standard set of debugging hooks. */ | |
1753 | void | |
831372e7 | 1754 | __malloc_check_init() |
10dc2a90 UD |
1755 | { |
1756 | __malloc_hook = malloc_check; | |
1757 | __free_hook = free_check; | |
1758 | __realloc_hook = realloc_check; | |
1759 | __memalign_hook = memalign_check; | |
7e3be507 UD |
1760 | if(check_action == 1) |
1761 | fprintf(stderr, "malloc: using debugging hooks\n"); | |
10dc2a90 UD |
1762 | } |
1763 | ||
1764 | #endif | |
1765 | ||
f65fd747 UD |
1766 | |
1767 | \f | |
1768 | ||
1769 | ||
1770 | /* Routines dealing with mmap(). */ | |
1771 | ||
1772 | #if HAVE_MMAP | |
1773 | ||
1774 | #ifndef MAP_ANONYMOUS | |
1775 | ||
1776 | static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ | |
1777 | ||
b22fc5f5 | 1778 | #define MMAP(size, prot, flags) ((dev_zero_fd < 0) ? \ |
f65fd747 | 1779 | (dev_zero_fd = open("/dev/zero", O_RDWR), \ |
b22fc5f5 UD |
1780 | mmap(0, (size), (prot), (flags), dev_zero_fd, 0)) : \ |
1781 | mmap(0, (size), (prot), (flags), dev_zero_fd, 0)) | |
f65fd747 UD |
1782 | |
1783 | #else | |
1784 | ||
b22fc5f5 UD |
1785 | #define MMAP(size, prot, flags) \ |
1786 | (mmap(0, (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0)) | |
f65fd747 UD |
1787 | |
1788 | #endif | |
1789 | ||
dfd2257a UD |
1790 | #if defined __GNUC__ && __GNUC__ >= 2 |
1791 | /* This function is only called from one place, inline it. */ | |
1792 | inline | |
1793 | #endif | |
af6f3906 | 1794 | static mchunkptr |
dfd2257a | 1795 | internal_function |
f65fd747 | 1796 | #if __STD_C |
dfd2257a | 1797 | mmap_chunk(size_t size) |
f65fd747 | 1798 | #else |
dfd2257a | 1799 | mmap_chunk(size) size_t size; |
f65fd747 UD |
1800 | #endif |
1801 | { | |
1802 | size_t page_mask = malloc_getpagesize - 1; | |
1803 | mchunkptr p; | |
1804 | ||
1805 | if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */ | |
1806 | ||
1807 | /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because | |
1808 | * there is no following chunk whose prev_size field could be used. | |
1809 | */ | |
1810 | size = (size + SIZE_SZ + page_mask) & ~page_mask; | |
1811 | ||
b22fc5f5 | 1812 | p = (mchunkptr)MMAP(size, PROT_READ|PROT_WRITE, MAP_PRIVATE); |
0413b54c | 1813 | if(p == (mchunkptr) MAP_FAILED) return 0; |
f65fd747 UD |
1814 | |
1815 | n_mmaps++; | |
1816 | if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps; | |
1817 | ||
1818 | /* We demand that the address eight bytes into a page be 8-byte aligned. */ | |
1819 | assert(aligned_OK(chunk2mem(p))); | |
1820 | ||
1821 | /* The offset to the start of the mmapped region is stored | |
1822 | * in the prev_size field of the chunk; normally it is zero, | |
1823 | * but that can be changed in memalign(). | |
1824 | */ | |
1825 | p->prev_size = 0; | |
1826 | set_head(p, size|IS_MMAPPED); | |
1827 | ||
1828 | mmapped_mem += size; | |
1829 | if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) | |
1830 | max_mmapped_mem = mmapped_mem; | |
8a4b65b4 | 1831 | #ifdef NO_THREADS |
f65fd747 UD |
1832 | if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) |
1833 | max_total_mem = mmapped_mem + sbrked_mem; | |
8a4b65b4 | 1834 | #endif |
f65fd747 UD |
1835 | return p; |
1836 | } | |
1837 | ||
1838 | #if __STD_C | |
1839 | static void munmap_chunk(mchunkptr p) | |
1840 | #else | |
1841 | static void munmap_chunk(p) mchunkptr p; | |
1842 | #endif | |
1843 | { | |
1844 | INTERNAL_SIZE_T size = chunksize(p); | |
1845 | int ret; | |
1846 | ||
1847 | assert (chunk_is_mmapped(p)); | |
1848 | assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem)); | |
1849 | assert((n_mmaps > 0)); | |
1850 | assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0); | |
1851 | ||
1852 | n_mmaps--; | |
1853 | mmapped_mem -= (size + p->prev_size); | |
1854 | ||
1855 | ret = munmap((char *)p - p->prev_size, size + p->prev_size); | |
1856 | ||
1857 | /* munmap returns non-zero on failure */ | |
1858 | assert(ret == 0); | |
1859 | } | |
1860 | ||
1861 | #if HAVE_MREMAP | |
1862 | ||
1863 | #if __STD_C | |
1864 | static mchunkptr mremap_chunk(mchunkptr p, size_t new_size) | |
1865 | #else | |
1866 | static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size; | |
1867 | #endif | |
1868 | { | |
1869 | size_t page_mask = malloc_getpagesize - 1; | |
1870 | INTERNAL_SIZE_T offset = p->prev_size; | |
1871 | INTERNAL_SIZE_T size = chunksize(p); | |
1872 | char *cp; | |
1873 | ||
1874 | assert (chunk_is_mmapped(p)); | |
1875 | assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem)); | |
1876 | assert((n_mmaps > 0)); | |
1877 | assert(((size + offset) & (malloc_getpagesize-1)) == 0); | |
1878 | ||
1879 | /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */ | |
1880 | new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask; | |
1881 | ||
1882 | cp = (char *)mremap((char *)p - offset, size + offset, new_size, | |
1883 | MREMAP_MAYMOVE); | |
1884 | ||
1885 | if (cp == (char *)-1) return 0; | |
1886 | ||
1887 | p = (mchunkptr)(cp + offset); | |
1888 | ||
1889 | assert(aligned_OK(chunk2mem(p))); | |
1890 | ||
1891 | assert((p->prev_size == offset)); | |
1892 | set_head(p, (new_size - offset)|IS_MMAPPED); | |
1893 | ||
1894 | mmapped_mem -= size + offset; | |
1895 | mmapped_mem += new_size; | |
1896 | if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) | |
1897 | max_mmapped_mem = mmapped_mem; | |
8a4b65b4 | 1898 | #ifdef NO_THREADS |
f65fd747 UD |
1899 | if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) |
1900 | max_total_mem = mmapped_mem + sbrked_mem; | |
8a4b65b4 | 1901 | #endif |
f65fd747 UD |
1902 | return p; |
1903 | } | |
1904 | ||
1905 | #endif /* HAVE_MREMAP */ | |
1906 | ||
1907 | #endif /* HAVE_MMAP */ | |
1908 | ||
1909 | \f | |
1910 | ||
1911 | /* Managing heaps and arenas (for concurrent threads) */ | |
1912 | ||
1913 | #ifndef NO_THREADS | |
1914 | ||
1915 | /* Create a new heap. size is automatically rounded up to a multiple | |
1916 | of the page size. */ | |
1917 | ||
1918 | static heap_info * | |
dfd2257a | 1919 | internal_function |
f65fd747 UD |
1920 | #if __STD_C |
1921 | new_heap(size_t size) | |
1922 | #else | |
1923 | new_heap(size) size_t size; | |
1924 | #endif | |
1925 | { | |
1926 | size_t page_mask = malloc_getpagesize - 1; | |
1927 | char *p1, *p2; | |
1928 | unsigned long ul; | |
1929 | heap_info *h; | |
1930 | ||
7799b7b3 | 1931 | if(size+top_pad < HEAP_MIN_SIZE) |
f65fd747 | 1932 | size = HEAP_MIN_SIZE; |
7799b7b3 UD |
1933 | else if(size+top_pad <= HEAP_MAX_SIZE) |
1934 | size += top_pad; | |
1935 | else if(size > HEAP_MAX_SIZE) | |
f65fd747 | 1936 | return 0; |
7799b7b3 UD |
1937 | else |
1938 | size = HEAP_MAX_SIZE; | |
1939 | size = (size + page_mask) & ~page_mask; | |
1940 | ||
b22fc5f5 UD |
1941 | /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed. |
1942 | No swap space needs to be reserved for the following large | |
1943 | mapping (on Linux, this is the case for all non-writable mappings | |
1944 | anyway). */ | |
1945 | p1 = (char *)MMAP(HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE); | |
0413b54c | 1946 | if(p1 == MAP_FAILED) |
f65fd747 UD |
1947 | return 0; |
1948 | p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1)); | |
1949 | ul = p2 - p1; | |
1950 | munmap(p1, ul); | |
1951 | munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul); | |
1952 | if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) { | |
1953 | munmap(p2, HEAP_MAX_SIZE); | |
1954 | return 0; | |
1955 | } | |
1956 | h = (heap_info *)p2; | |
1957 | h->size = size; | |
1958 | THREAD_STAT(stat_n_heaps++); | |
1959 | return h; | |
1960 | } | |
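/* Illustrative arithmetic for the alignment trick above (assuming
   HEAP_MAX_SIZE == 1Mb == 0x100000): if the PROT_NONE mapping of
   2*HEAP_MAX_SIZE bytes comes back at p1 = 0x40123000, then
   p2 = (p1 + 0x100000) & ~0xfffff = 0x40200000. The ul = 0xdd000
   bytes before p2 and the HEAP_MAX_SIZE - ul bytes beyond
   p2 + HEAP_MAX_SIZE are unmapped again, leaving exactly one
   HEAP_MAX_SIZE region aligned to HEAP_MAX_SIZE. */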
1961 | ||
1962 | /* Grow or shrink a heap. size is automatically rounded up to a | |
8a4b65b4 | 1963 | multiple of the page size if it is positive. */ |
f65fd747 UD |
1964 | |
1965 | static int | |
1966 | #if __STD_C | |
1967 | grow_heap(heap_info *h, long diff) | |
1968 | #else | |
1969 | grow_heap(h, diff) heap_info *h; long diff; | |
1970 | #endif | |
1971 | { | |
1972 | size_t page_mask = malloc_getpagesize - 1; | |
1973 | long new_size; | |
1974 | ||
1975 | if(diff >= 0) { | |
1976 | diff = (diff + page_mask) & ~page_mask; | |
1977 | new_size = (long)h->size + diff; | |
1978 | if(new_size > HEAP_MAX_SIZE) | |
1979 | return -1; | |
1980 | if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0) | |
1981 | return -2; | |
1982 | } else { | |
1983 | new_size = (long)h->size + diff; | |
8a4b65b4 | 1984 | if(new_size < (long)sizeof(*h)) |
f65fd747 UD |
1985 | return -1; |
1986 | if(mprotect((char *)h + new_size, -diff, PROT_NONE) != 0) | |
1987 | return -2; | |
1988 | } | |
1989 | h->size = new_size; | |
1990 | return 0; | |
1991 | } | |
1992 | ||
8a4b65b4 UD |
1993 | /* Delete a heap. */ |
1994 | ||
1995 | #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE) | |
1996 | ||
f65fd747 UD |
1997 | /* arena_get() acquires an arena and locks the corresponding mutex. |
1998 | First, try the one last locked successfully by this thread. (This | |
1999 | is the common case and handled with a macro for speed.) Then, loop | |
7e3be507 UD |
2000 | once over the circularly linked list of arenas. If no arena is |
2001 | readily available, create a new one. */ | |
f65fd747 UD |
2002 | |
2003 | #define arena_get(ptr, size) do { \ | |
2004 | Void_t *vptr = NULL; \ | |
2005 | ptr = (arena *)tsd_getspecific(arena_key, vptr); \ | |
2006 | if(ptr && !mutex_trylock(&ptr->mutex)) { \ | |
8a4b65b4 | 2007 | THREAD_STAT(++(ptr->stat_lock_direct)); \ |
7e3be507 | 2008 | } else \ |
f65fd747 | 2009 | ptr = arena_get2(ptr, (size)); \ |
f65fd747 UD |
2010 | } while(0) |
2011 | ||
2012 | static arena * | |
dfd2257a | 2013 | internal_function |
f65fd747 UD |
2014 | #if __STD_C |
2015 | arena_get2(arena *a_tsd, size_t size) | |
2016 | #else | |
2017 | arena_get2(a_tsd, size) arena *a_tsd; size_t size; | |
2018 | #endif | |
2019 | { | |
2020 | arena *a; | |
2021 | heap_info *h; | |
2022 | char *ptr; | |
2023 | int i; | |
2024 | unsigned long misalign; | |
2025 | ||
7e3be507 UD |
2026 | if(!a_tsd) |
2027 | a = a_tsd = &main_arena; | |
2028 | else { | |
2029 | a = a_tsd->next; | |
2030 | if(!a) { | |
2031 | /* This can only happen while initializing the new arena. */ | |
2032 | (void)mutex_lock(&main_arena.mutex); | |
2033 | THREAD_STAT(++(main_arena.stat_lock_wait)); | |
2034 | return &main_arena; | |
f65fd747 | 2035 | } |
8a4b65b4 | 2036 | } |
7e3be507 UD |
2037 | |
2038 | /* Check the global, circularly linked list for available arenas. */ | |
b22fc5f5 | 2039 | repeat: |
7e3be507 UD |
2040 | do { |
2041 | if(!mutex_trylock(&a->mutex)) { | |
2042 | THREAD_STAT(++(a->stat_lock_loop)); | |
2043 | tsd_setspecific(arena_key, (Void_t *)a); | |
2044 | return a; | |
2045 | } | |
2046 | a = a->next; | |
2047 | } while(a != a_tsd); | |
f65fd747 | 2048 | |
b22fc5f5 UD |
2049 | /* If not even the list_lock can be obtained, try again. This can |
2050 | happen during `atfork', or for example on systems where thread | |
2051 | creation makes it temporarily impossible to obtain _any_ | |
2052 | locks. */ | |
2053 | if(mutex_trylock(&list_lock)) { | |
2054 | a = a_tsd; | |
2055 | goto repeat; | |
2056 | } | |
2057 | (void)mutex_unlock(&list_lock); | |
2058 | ||
f65fd747 UD |
2059 | /* Nothing immediately available, so generate a new arena. */ |
2060 | h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT)); | |
2061 | if(!h) | |
2062 | return 0; | |
2063 | a = h->ar_ptr = (arena *)(h+1); | |
2064 | for(i=0; i<NAV; i++) | |
2065 | init_bin(a, i); | |
7e3be507 | 2066 | a->next = NULL; |
8a4b65b4 | 2067 | a->size = h->size; |
7e3be507 | 2068 | tsd_setspecific(arena_key, (Void_t *)a); |
f65fd747 UD |
2069 | mutex_init(&a->mutex); |
2070 | i = mutex_lock(&a->mutex); /* remember result */ | |
2071 | ||
2072 | /* Set up the top chunk, with proper alignment. */ | |
2073 | ptr = (char *)(a + 1); | |
2074 | misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK; | |
2075 | if (misalign > 0) | |
2076 | ptr += MALLOC_ALIGNMENT - misalign; | |
2077 | top(a) = (mchunkptr)ptr; | |
8a4b65b4 | 2078 | set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE); |
f65fd747 UD |
2079 | |
2080 | /* Add the new arena to the list. */ | |
2081 | (void)mutex_lock(&list_lock); | |
2082 | a->next = main_arena.next; | |
2083 | main_arena.next = a; | |
f65fd747 UD |
2084 | (void)mutex_unlock(&list_lock); |
2085 | ||
2086 | if(i) /* locking failed; keep arena for further attempts later */ | |
2087 | return 0; | |
2088 | ||
8a4b65b4 | 2089 | THREAD_STAT(++(a->stat_lock_loop)); |
f65fd747 UD |
2090 | return a; |
2091 | } | |
2092 | ||
2093 | /* find the heap and corresponding arena for a given ptr */ | |
2094 | ||
2095 | #define heap_for_ptr(ptr) \ | |
2096 | ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1))) | |
2097 | #define arena_for_ptr(ptr) \ | |
2098 | (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \ | |
2099 | &main_arena : heap_for_ptr(ptr)->ar_ptr) | |
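/* Example (illustrative, again with 1Mb heaps): because new_heap()
   guarantees that every non-main heap starts at a multiple of
   HEAP_MAX_SIZE, a pointer such as 0x40234567 finds its heap_info
   by simple masking: 0x40234567 & ~0xfffff == 0x40200000. Pointers
   into the main (sbrk'ed) arena obey no such alignment, so they are
   recognized separately by the address-range test above. */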
2100 | ||
2101 | #else /* defined(NO_THREADS) */ | |
2102 | ||
2103 | /* Without concurrent threads, there is only one arena. */ | |
2104 | ||
2105 | #define arena_get(ptr, sz) (ptr = &main_arena) | |
2106 | #define arena_for_ptr(ptr) (&main_arena) | |
2107 | ||
2108 | #endif /* !defined(NO_THREADS) */ | |
2109 | ||
2110 | \f | |
2111 | ||
2112 | /* | |
2113 | Debugging support | |
2114 | */ | |
2115 | ||
2116 | #if MALLOC_DEBUG | |
2117 | ||
2118 | ||
2119 | /* | |
2120 | These routines make a number of assertions about the states | |
2121 | of data structures that should be true at all times. If any | |
2122 | are not true, it's very likely that a user program has somehow | |
2123 | trashed memory. (It's also possible that there is a coding error | |
2124 | in malloc, in which case please report it!) | |
2125 | */ | |
2126 | ||
2127 | #if __STD_C | |
2128 | static void do_check_chunk(arena *ar_ptr, mchunkptr p) | |
2129 | #else | |
2130 | static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p; | |
2131 | #endif | |
2132 | { | |
2133 | INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; | |
2134 | ||
2135 | /* No checkable chunk is mmapped */ | |
2136 | assert(!chunk_is_mmapped(p)); | |
2137 | ||
2138 | #ifndef NO_THREADS | |
2139 | if(ar_ptr != &main_arena) { | |
2140 | heap_info *heap = heap_for_ptr(p); | |
2141 | assert(heap->ar_ptr == ar_ptr); | |
2142 | assert((char *)p + sz <= (char *)heap + heap->size); | |
2143 | return; | |
2144 | } | |
2145 | #endif | |
2146 | ||
2147 | /* Check for legal address ... */ | |
2148 | assert((char*)p >= sbrk_base); | |
2149 | if (p != top(ar_ptr)) | |
2150 | assert((char*)p + sz <= (char*)top(ar_ptr)); | |
2151 | else | |
2152 | assert((char*)p + sz <= sbrk_base + sbrked_mem); | |
2153 | ||
2154 | } | |
2155 | ||
2156 | ||
2157 | #if __STD_C | |
2158 | static void do_check_free_chunk(arena *ar_ptr, mchunkptr p) | |
2159 | #else | |
2160 | static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p; | |
2161 | #endif | |
2162 | { | |
2163 | INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; | |
2164 | mchunkptr next = chunk_at_offset(p, sz); | |
2165 | ||
2166 | do_check_chunk(ar_ptr, p); | |
2167 | ||
2168 | /* Check whether it claims to be free ... */ | |
2169 | assert(!inuse(p)); | |
2170 | ||
8a4b65b4 UD |
2171 | /* Must have OK size and fields */ |
2172 | assert((long)sz >= (long)MINSIZE); | |
2173 | assert((sz & MALLOC_ALIGN_MASK) == 0); | |
2174 | assert(aligned_OK(chunk2mem(p))); | |
2175 | /* ... matching footer field */ | |
2176 | assert(next->prev_size == sz); | |
2177 | /* ... and is fully consolidated */ | |
2178 | assert(prev_inuse(p)); | |
2179 | assert (next == top(ar_ptr) || inuse(next)); | |
2180 | ||
2181 | /* ... and has minimally sane links */ | |
2182 | assert(p->fd->bk == p); | |
2183 | assert(p->bk->fd == p); | |
f65fd747 UD |
2184 | } |
2185 | ||
2186 | #if __STD_C | |
2187 | static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p) | |
2188 | #else | |
2189 | static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p; | |
2190 | #endif | |
2191 | { | |
2192 | mchunkptr next = next_chunk(p); | |
2193 | do_check_chunk(ar_ptr, p); | |
2194 | ||
2195 | /* Check whether it claims to be in use ... */ | |
2196 | assert(inuse(p)); | |
2197 | ||
8a4b65b4 UD |
2198 | /* ... whether its size is OK (it might be a fencepost) ... */ |
2199 | assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE)); | |
2200 | ||
f65fd747 UD |
2201 | /* ... and is surrounded by OK chunks. |
2202 | Since more things can be checked with free chunks than inuse ones, | |
2203 | if an inuse chunk borders them and debug is on, it's worth doing them. | |
2204 | */ | |
2205 | if (!prev_inuse(p)) | |
2206 | { | |
2207 | mchunkptr prv = prev_chunk(p); | |
2208 | assert(next_chunk(prv) == p); | |
2209 | do_check_free_chunk(ar_ptr, prv); | |
2210 | } | |
2211 | if (next == top(ar_ptr)) | |
2212 | { | |
2213 | assert(prev_inuse(next)); | |
2214 | assert(chunksize(next) >= MINSIZE); | |
2215 | } | |
2216 | else if (!inuse(next)) | |
2217 | do_check_free_chunk(ar_ptr, next); | |
2218 | ||
2219 | } | |
2220 | ||
2221 | #if __STD_C | |
2222 | static void do_check_malloced_chunk(arena *ar_ptr, | |
2223 | mchunkptr p, INTERNAL_SIZE_T s) | |
2224 | #else | |
2225 | static void do_check_malloced_chunk(ar_ptr, p, s) | |
2226 | arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s; | |
2227 | #endif | |
2228 | { | |
2229 | INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; | |
2230 | long room = sz - s; | |
2231 | ||
2232 | do_check_inuse_chunk(ar_ptr, p); | |
2233 | ||
2234 | /* Legal size ... */ | |
2235 | assert((long)sz >= (long)MINSIZE); | |
2236 | assert((sz & MALLOC_ALIGN_MASK) == 0); | |
2237 | assert(room >= 0); | |
2238 | assert(room < (long)MINSIZE); | |
2239 | ||
2240 | /* ... and alignment */ | |
2241 | assert(aligned_OK(chunk2mem(p))); | |
2242 | ||
2243 | ||
2244 | /* ... and was allocated at front of an available chunk */ | |
2245 | assert(prev_inuse(p)); | |
2246 | ||
2247 | } | |
2248 | ||
2249 | ||
2250 | #define check_free_chunk(A,P) do_check_free_chunk(A,P) | |
2251 | #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P) | |
2252 | #define check_chunk(A,P) do_check_chunk(A,P) | |
2253 | #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N) | |
2254 | #else | |
2255 | #define check_free_chunk(A,P) | |
2256 | #define check_inuse_chunk(A,P) | |
2257 | #define check_chunk(A,P) | |
2258 | #define check_malloced_chunk(A,P,N) | |
2259 | #endif | |
2260 | ||
2261 | \f | |
2262 | ||
2263 | /* | |
2264 | Macro-based internal utilities | |
2265 | */ | |
2266 | ||
2267 | ||
2268 | /* | |
2269 | Linking chunks in bin lists. | |
2270 | Call these only with variables, not arbitrary expressions, as arguments. | |
2271 | */ | |
2272 | ||
2273 | /* | |
2274 | Place chunk p of size s in its bin, in size order, | |
2275 | putting it ahead of others of same size. | |
2276 | */ | |
2277 | ||
2278 | ||
2279 | #define frontlink(A, P, S, IDX, BK, FD) \ | |
2280 | { \ | |
2281 | if (S < MAX_SMALLBIN_SIZE) \ | |
2282 | { \ | |
2283 | IDX = smallbin_index(S); \ | |
2284 | mark_binblock(A, IDX); \ | |
2285 | BK = bin_at(A, IDX); \ | |
2286 | FD = BK->fd; \ | |
2287 | P->bk = BK; \ | |
2288 | P->fd = FD; \ | |
2289 | FD->bk = BK->fd = P; \ | |
2290 | } \ | |
2291 | else \ | |
2292 | { \ | |
2293 | IDX = bin_index(S); \ | |
2294 | BK = bin_at(A, IDX); \ | |
2295 | FD = BK->fd; \ | |
2296 | if (FD == BK) mark_binblock(A, IDX); \ | |
2297 | else \ | |
2298 | { \ | |
2299 | while (FD != BK && S < chunksize(FD)) FD = FD->fd; \ | |
2300 | BK = FD->bk; \ | |
2301 | } \ | |
2302 | P->bk = BK; \ | |
2303 | P->fd = FD; \ | |
2304 | FD->bk = BK->fd = P; \ | |
2305 | } \ | |
2306 | } | |
2307 | ||
2308 | ||
2309 | /* take a chunk off a list */ | |
2310 | ||
2311 | #define unlink(P, BK, FD) \ | |
2312 | { \ | |
2313 | BK = P->bk; \ | |
2314 | FD = P->fd; \ | |
2315 | FD->bk = BK; \ | |
2316 | BK->fd = FD; \ | |
2317 | } | |
2318 | ||
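/* A toy sketch (not part of the allocator) of what unlink() does on
   the circular, doubly-linked bin lists; p is assumed to be linked
   into some bin: */
#if 0
static void unlink_example(mchunkptr p)
{
  mchunkptr bck, fwd;       /* filled in by the macro */
  unlink(p, bck, fwd);      /* splice p out of its list */
  assert(bck->fd == fwd);   /* its old neighbors now point... */
  assert(fwd->bk == bck);   /* ...directly at each other */
}
#endif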
2319 | /* Place p as the last remainder */ | |
2320 | ||
2321 | #define link_last_remainder(A, P) \ | |
2322 | { \ | |
2323 | last_remainder(A)->fd = last_remainder(A)->bk = P; \ | |
2324 | P->fd = P->bk = last_remainder(A); \ | |
2325 | } | |
2326 | ||
2327 | /* Clear the last_remainder bin */ | |
2328 | ||
2329 | #define clear_last_remainder(A) \ | |
2330 | (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A)) | |
2331 | ||
2332 | ||
2333 | ||
2334 | \f | |
2335 | ||
2336 | /* | |
2337 | Extend the top-most chunk by obtaining memory from system. | |
2338 | Main interface to sbrk (but see also malloc_trim). | |
2339 | */ | |
2340 | ||
dfd2257a UD |
2341 | #if defined __GNUC__ && __GNUC__ >= 2 |
2342 | /* This function is called only from one place, inline it. */ | |
2343 | inline | |
2344 | #endif | |
af6f3906 | 2345 | static void |
dfd2257a | 2346 | internal_function |
f65fd747 | 2347 | #if __STD_C |
dfd2257a | 2348 | malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb) |
f65fd747 | 2349 | #else |
dfd2257a | 2350 | malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb; |
f65fd747 UD |
2351 | #endif |
2352 | { | |
2353 | unsigned long pagesz = malloc_getpagesize; | |
2354 | mchunkptr old_top = top(ar_ptr); /* Record state of old top */ | |
2355 | INTERNAL_SIZE_T old_top_size = chunksize(old_top); | |
2356 | INTERNAL_SIZE_T top_size; /* new size of top chunk */ | |
2357 | ||
2358 | #ifndef NO_THREADS | |
2359 | if(ar_ptr == &main_arena) { | |
2360 | #endif | |
2361 | ||
2362 | char* brk; /* return value from sbrk */ | |
2363 | INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */ | |
2364 | INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */ | |
2365 | char* new_brk; /* return of 2nd sbrk call */ | |
2366 | char* old_end = (char*)(chunk_at_offset(old_top, old_top_size)); | |
2367 | ||
2368 | /* Pad request with top_pad plus minimal overhead */ | |
2369 | INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE; | |
2370 | ||
2371 | /* If not the first time through, round to preserve page boundary */ | |
2372 | /* Otherwise, we need to correct to a page size below anyway. */ | |
2373 | /* (We also correct below in case of an intervening foreign sbrk call.) */ | |
2374 | ||
2375 | if (sbrk_base != (char*)(-1)) | |
2376 | sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1); | |
2377 | ||
2378 | brk = (char*)(MORECORE (sbrk_size)); | |
2379 | ||
2380 | /* Fail if sbrk failed or if a foreign sbrk call killed our space */ | |
2381 | if (brk == (char*)(MORECORE_FAILURE) || | |
2382 | (brk < old_end && old_top != initial_top(&main_arena))) | |
2383 | return; | |
2384 | ||
dfd2257a | 2385 | #if defined _LIBC || defined MALLOC_HOOKS |
1228ed5c UD |
2386 | /* Call the `morecore' hook if necessary. */ |
2387 | if (__after_morecore_hook) | |
2388 | (*__after_morecore_hook) (); | |
7799b7b3 | 2389 | #endif |
1228ed5c | 2390 | |
f65fd747 UD |
2391 | sbrked_mem += sbrk_size; |
2392 | ||
2393 | if (brk == old_end) { /* can just add bytes to current top */ | |
2394 | top_size = sbrk_size + old_top_size; | |
2395 | set_head(old_top, top_size | PREV_INUSE); | |
2396 | old_top = 0; /* don't free below */ | |
2397 | } else { | |
2398 | if (sbrk_base == (char*)(-1)) /* First time through. Record base */ | |
2399 | sbrk_base = brk; | |
2400 | else | |
2401 | /* Someone else called sbrk(). Count those bytes as sbrked_mem. */ | |
2402 | sbrked_mem += brk - (char*)old_end; | |
2403 | ||
2404 | /* Guarantee alignment of first new chunk made from this space */ | |
2405 | front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK; | |
2406 | if (front_misalign > 0) { | |
2407 | correction = (MALLOC_ALIGNMENT) - front_misalign; | |
2408 | brk += correction; | |
2409 | } else | |
2410 | correction = 0; | |
2411 | ||
2412 | /* Guarantee the next brk will be at a page boundary */ | |
2413 | correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1)); | |
2414 | ||
2415 | /* Allocate correction */ | |
2416 | new_brk = (char*)(MORECORE (correction)); | |
2417 | if (new_brk == (char*)(MORECORE_FAILURE)) return; | |
2418 | ||
dfd2257a | 2419 | #if defined _LIBC || defined MALLOC_HOOKS |
1228ed5c UD |
2420 | /* Call the `morecore' hook if necessary. */ |
2421 | if (__after_morecore_hook) | |
7799b7b3 UD |
2422 | (*__after_morecore_hook) (); |
2423 | #endif | |
1228ed5c | 2424 | |
f65fd747 UD |
2425 | sbrked_mem += correction; |
2426 | ||
2427 | top(&main_arena) = (mchunkptr)brk; | |
2428 | top_size = new_brk - brk + correction; | |
2429 | set_head(top(&main_arena), top_size | PREV_INUSE); | |
2430 | ||
2431 | if (old_top == initial_top(&main_arena)) | |
2432 | old_top = 0; /* don't free below */ | |
2433 | } | |
2434 | ||
2435 | if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) | |
2436 | max_sbrked_mem = sbrked_mem; | |
8a4b65b4 | 2437 | #ifdef NO_THREADS |
f65fd747 UD |
2438 | if ((unsigned long)(mmapped_mem + sbrked_mem) > |
2439 | (unsigned long)max_total_mem) | |
2440 | max_total_mem = mmapped_mem + sbrked_mem; | |
8a4b65b4 | 2441 | #endif |
f65fd747 UD |
2442 | |
2443 | #ifndef NO_THREADS | |
2444 | } else { /* ar_ptr != &main_arena */ | |
8a4b65b4 UD |
2445 | heap_info *old_heap, *heap; |
2446 | size_t old_heap_size; | |
f65fd747 UD |
2447 | |
2448 | if(old_top_size < MINSIZE) /* this should never happen */ | |
2449 | return; | |
2450 | ||
2451 | /* First try to extend the current heap. */ | |
2452 | if(MINSIZE + nb <= old_top_size) | |
2453 | return; | |
8a4b65b4 UD |
2454 | old_heap = heap_for_ptr(old_top); |
2455 | old_heap_size = old_heap->size; | |
2456 | if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) { | |
2457 | ar_ptr->size += old_heap->size - old_heap_size; | |
2458 | top_size = ((char *)old_heap + old_heap->size) - (char *)old_top; | |
f65fd747 UD |
2459 | set_head(old_top, top_size | PREV_INUSE); |
2460 | return; | |
2461 | } | |
2462 | ||
2463 | /* A new heap must be created. */ | |
7799b7b3 | 2464 | heap = new_heap(nb + (MINSIZE + sizeof(*heap))); |
f65fd747 UD |
2465 | if(!heap) |
2466 | return; | |
2467 | heap->ar_ptr = ar_ptr; | |
8a4b65b4 UD |
2468 | heap->prev = old_heap; |
2469 | ar_ptr->size += heap->size; | |
f65fd747 UD |
2470 | |
2471 | /* Set up the new top, so we can safely use chunk_free() below. */ | |
2472 | top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap)); | |
2473 | top_size = heap->size - sizeof(*heap); | |
2474 | set_head(top(ar_ptr), top_size | PREV_INUSE); | |
2475 | } | |
2476 | #endif /* !defined(NO_THREADS) */ | |
2477 | ||
2478 | /* We always land on a page boundary */ | |
2479 | assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0); | |
2480 | ||
2481 | /* Setup fencepost and free the old top chunk. */ | |
2482 | if(old_top) { | |
8a4b65b4 UD |
2483 | /* The fencepost takes at least MINSIZE bytes, because it might |
2484 | become the top chunk again later. Note that a footer is set | |
2485 | up, too, although the chunk is marked in use. */ | |
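    /* Illustrative layout (assuming SIZE_SZ == 4 and MINSIZE == 16):
       if the old top held 40 bytes, the first 24 are given back via
       chunk_free() as an ordinary chunk, and the remaining 16 become
       two 8-byte fencepost headers that keep the freed chunk from
       being coalesced with whatever lies beyond the old heap end. */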
2486 | old_top_size -= MINSIZE; | |
2487 | set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE); | |
2488 | if(old_top_size >= MINSIZE) { | |
2489 | set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE); | |
2490 | set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)); | |
f65fd747 UD |
2491 | set_head_size(old_top, old_top_size); |
2492 | chunk_free(ar_ptr, old_top); | |
2493 | } else { | |
8a4b65b4 UD |
2494 | set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE); |
2495 | set_foot(old_top, (old_top_size + 2*SIZE_SZ)); | |
f65fd747 UD |
2496 | } |
2497 | } | |
2498 | } | |
2499 | ||
2500 | ||
2501 | \f | |
2502 | ||
2503 | /* Main public routines */ | |
2504 | ||
2505 | ||
2506 | /* | |
8a4b65b4 | 2507 | Malloc Algorithm: |
f65fd747 UD |
2508 | |
2509 | The requested size is first converted into a usable form, `nb'. | |
2510 | This currently means adding 4 bytes of overhead, plus possibly more to | |
2511 | obtain 8-byte alignment and/or to obtain a size of at least | |
8a4b65b4 UD |
2512 | MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable |
2513 | size. (All fits are considered `exact' if they are within MINSIZE | |
2514 | bytes.) | |
f65fd747 UD |
2515 | |
2516 | From there, the first of the following steps that succeeds is taken: | |
2517 | ||
2518 | 1. The bin corresponding to the request size is scanned, and if | |
2519 | a chunk of exactly the right size is found, it is taken. | |
2520 | ||
2521 | 2. The most recently remaindered chunk is used if it is big | |
2522 | enough. This is a form of (roving) first fit, used only in | |
2523 | the absence of exact fits. Runs of consecutive requests use | |
2524 | the remainder of the chunk used for the previous such request | |
2525 | whenever possible. This limited use of a first-fit style | |
2526 | allocation strategy tends to give contiguous chunks | |
2527 | coextensive lifetimes, which improves locality and can reduce | |
2528 | fragmentation in the long run. | |
2529 | ||
2530 | 3. Other bins are scanned in increasing size order, using a | |
2531 | chunk big enough to fulfill the request, and splitting off | |
2532 | any remainder. This search is strictly by best-fit; i.e., | |
2533 | the smallest (with ties going to approximately the least | |
2534 | recently used) chunk that fits is selected. | |
2535 | ||
2536 | 4. If large enough, the chunk bordering the end of memory | |
2537 | (`top') is split off. (This use of `top' is in accord with | |
2538 | the best-fit search rule. In effect, `top' is treated as | |
2539 | larger (and thus less well fitting) than any other available | |
2540 | chunk since it can be extended to be as large as necessary | |
2541 | (up to system limitations).) | |
2542 | ||
2543 | 5. If the request size meets the mmap threshold and the | |
2544 | system supports mmap, and there are few enough currently | |
2545 | allocated mmapped regions, and a call to mmap succeeds, | |
2546 | the request is allocated via direct memory mapping. | |
2547 | ||
2548 | 6. Otherwise, the top of memory is extended by | |
2549 | obtaining more space from the system (normally using sbrk, | |
2550 | but definable to anything else via the MORECORE macro). | |
2551 | Memory is gathered from the system (in system page-sized | |
2552 | units) in a way that allows chunks obtained across different | |
2553 | sbrk calls to be consolidated, but does not require | |
2554 | contiguous memory. Thus, it should be safe to intersperse | |
2555 | mallocs with other sbrk calls. | |
2556 | ||
2557 | ||
3081378b | 2558 | All allocations are made from the `lowest' part of any found |
f65fd747 UD |
2559 | chunk. (The implementation invariant is that prev_inuse is |
2560 | always true of any allocated chunk; i.e., that each allocated | |
2561 | chunk borders either a previously allocated and still in-use chunk, | |
2562 | or the base of its memory arena.) | |
2563 | ||
2564 | */ | |
2565 | ||
2566 | #if __STD_C | |
2567 | Void_t* mALLOc(size_t bytes) | |
2568 | #else | |
2569 | Void_t* mALLOc(bytes) size_t bytes; | |
2570 | #endif | |
2571 | { | |
2572 | arena *ar_ptr; | |
10dc2a90 | 2573 | INTERNAL_SIZE_T nb; /* padded request size */ |
f65fd747 UD |
2574 | mchunkptr victim; |
2575 | ||
dfd2257a | 2576 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
2577 | if (__malloc_hook != NULL) { |
2578 | Void_t* result; | |
2579 | ||
dfd2257a | 2580 | #if defined __GNUC__ && __GNUC__ >= 2 |
a2b08ee5 UD |
2581 | result = (*__malloc_hook)(bytes, __builtin_return_address (0)); |
2582 | #else | |
dfd2257a | 2583 | result = (*__malloc_hook)(bytes, NULL); |
a2b08ee5 | 2584 | #endif |
10dc2a90 UD |
2585 | return result; |
2586 | } | |
2587 | #endif | |
2588 | ||
2589 | nb = request2size(bytes); | |
7799b7b3 | 2590 | arena_get(ar_ptr, nb); |
f65fd747 UD |
2591 | if(!ar_ptr) |
2592 | return 0; | |
2593 | victim = chunk_alloc(ar_ptr, nb); | |
2594 | (void)mutex_unlock(&ar_ptr->mutex); | |
7799b7b3 UD |
2595 | if(!victim) { |
2596 | /* Maybe the failure is due to running out of mmapped areas. */ | |
2597 | if(ar_ptr != &main_arena) { | |
2598 | (void)mutex_lock(&main_arena.mutex); | |
2599 | victim = chunk_alloc(&main_arena, nb); | |
2600 | (void)mutex_unlock(&main_arena.mutex); | |
2601 | } | |
2602 | if(!victim) return 0; | |
2603 | } | |
2604 | return chunk2mem(victim); | |
f65fd747 UD |
2605 | } |
2606 | ||
2607 | static mchunkptr | |
dfd2257a | 2608 | internal_function |
f65fd747 UD |
2609 | #if __STD_C |
2610 | chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb) | |
2611 | #else | |
2612 | chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb; | |
2613 | #endif | |
2614 | { | |
2615 | mchunkptr victim; /* inspected/selected chunk */ | |
2616 | INTERNAL_SIZE_T victim_size; /* its size */ | |
2617 | int idx; /* index for bin traversal */ | |
2618 | mbinptr bin; /* associated bin */ | |
2619 | mchunkptr remainder; /* remainder from a split */ | |
2620 | long remainder_size; /* its size */ | |
2621 | int remainder_index; /* its bin index */ | |
2622 | unsigned long block; /* block traverser bit */ | |
2623 | int startidx; /* first bin of a traversed block */ | |
2624 | mchunkptr fwd; /* misc temp for linking */ | |
2625 | mchunkptr bck; /* misc temp for linking */ | |
2626 | mbinptr q; /* misc temp */ | |
2627 | ||
2628 | ||
2629 | /* Check for exact match in a bin */ | |
2630 | ||
2631 | if (is_small_request(nb)) /* Faster version for small requests */ | |
2632 | { | |
2633 | idx = smallbin_index(nb); | |
2634 | ||
2635 | /* No traversal or size check necessary for small bins. */ | |
2636 | ||
2637 | q = bin_at(ar_ptr, idx); | |
2638 | victim = last(q); | |
2639 | ||
2640 | /* Also scan the next one, since it would have a remainder < MINSIZE */ | |
2641 | if (victim == q) | |
2642 | { | |
2643 | q = next_bin(q); | |
2644 | victim = last(q); | |
2645 | } | |
2646 | if (victim != q) | |
2647 | { | |
2648 | victim_size = chunksize(victim); | |
2649 | unlink(victim, bck, fwd); | |
2650 | set_inuse_bit_at_offset(victim, victim_size); | |
2651 | check_malloced_chunk(ar_ptr, victim, nb); | |
2652 | return victim; | |
2653 | } | |
2654 | ||
2655 | idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */ | |
2656 | ||
2657 | } | |
2658 | else | |
2659 | { | |
2660 | idx = bin_index(nb); | |
2661 | bin = bin_at(ar_ptr, idx); | |
2662 | ||
2663 | for (victim = last(bin); victim != bin; victim = victim->bk) | |
2664 | { | |
2665 | victim_size = chunksize(victim); | |
2666 | remainder_size = victim_size - nb; | |
2667 | ||
2668 | if (remainder_size >= (long)MINSIZE) /* too big */ | |
2669 | { | |
2670 | --idx; /* adjust to rescan below after checking last remainder */ | |
2671 | break; | |
2672 | } | |
2673 | ||
2674 | else if (remainder_size >= 0) /* exact fit */ | |
2675 | { | |
2676 | unlink(victim, bck, fwd); | |
2677 | set_inuse_bit_at_offset(victim, victim_size); | |
2678 | check_malloced_chunk(ar_ptr, victim, nb); | |
2679 | return victim; | |
2680 | } | |
2681 | } | |
2682 | ||
2683 | ++idx; | |
2684 | ||
2685 | } | |
2686 | ||
2687 | /* Try to use the last split-off remainder */ | |
2688 | ||
2689 | if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr)) | |
2690 | { | |
2691 | victim_size = chunksize(victim); | |
2692 | remainder_size = victim_size - nb; | |
2693 | ||
2694 | if (remainder_size >= (long)MINSIZE) /* re-split */ | |
2695 | { | |
2696 | remainder = chunk_at_offset(victim, nb); | |
2697 | set_head(victim, nb | PREV_INUSE); | |
2698 | link_last_remainder(ar_ptr, remainder); | |
2699 | set_head(remainder, remainder_size | PREV_INUSE); | |
2700 | set_foot(remainder, remainder_size); | |
2701 | check_malloced_chunk(ar_ptr, victim, nb); | |
2702 | return victim; | |
2703 | } | |
2704 | ||
2705 | clear_last_remainder(ar_ptr); | |
2706 | ||
2707 | if (remainder_size >= 0) /* exhaust */ | |
2708 | { | |
2709 | set_inuse_bit_at_offset(victim, victim_size); | |
2710 | check_malloced_chunk(ar_ptr, victim, nb); | |
2711 | return victim; | |
2712 | } | |
2713 | ||
2714 | /* Else place in bin */ | |
2715 | ||
2716 | frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd); | |
2717 | } | |
2718 | ||
2719 | /* | |
2720 | If there are any possibly nonempty big-enough blocks, | |
2721 | search for best fitting chunk by scanning bins in blockwidth units. | |
2722 | */ | |
2723 | ||
2724 | if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr)) | |
2725 | { | |
2726 | ||
2727 | /* Get to the first marked block */ | |
2728 | ||
2729 | if ( (block & binblocks(ar_ptr)) == 0) | |
2730 | { | |
2731 | /* force to an even block boundary */ | |
2732 | idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH; | |
2733 | block <<= 1; | |
2734 | while ((block & binblocks(ar_ptr)) == 0) | |
2735 | { | |
2736 | idx += BINBLOCKWIDTH; | |
2737 | block <<= 1; | |
2738 | } | |
2739 | } | |
2740 | ||
2741 | /* For each possibly nonempty block ... */ | |
2742 | for (;;) | |
2743 | { | |
2744 | startidx = idx; /* (track incomplete blocks) */ | |
2745 | q = bin = bin_at(ar_ptr, idx); | |
2746 | ||
2747 | /* For each bin in this block ... */ | |
2748 | do | |
2749 | { | |
2750 | /* Find and use first big enough chunk ... */ | |
2751 | ||
2752 | for (victim = last(bin); victim != bin; victim = victim->bk) | |
2753 | { | |
2754 | victim_size = chunksize(victim); | |
2755 | remainder_size = victim_size - nb; | |
2756 | ||
2757 | if (remainder_size >= (long)MINSIZE) /* split */ | |
2758 | { | |
2759 | remainder = chunk_at_offset(victim, nb); | |
2760 | set_head(victim, nb | PREV_INUSE); | |
2761 | unlink(victim, bck, fwd); | |
2762 | link_last_remainder(ar_ptr, remainder); | |
2763 | set_head(remainder, remainder_size | PREV_INUSE); | |
2764 | set_foot(remainder, remainder_size); | |
2765 | check_malloced_chunk(ar_ptr, victim, nb); | |
2766 | return victim; | |
2767 | } | |
2768 | ||
2769 | else if (remainder_size >= 0) /* take */ | |
2770 | { | |
2771 | set_inuse_bit_at_offset(victim, victim_size); | |
2772 | unlink(victim, bck, fwd); | |
2773 | check_malloced_chunk(ar_ptr, victim, nb); | |
2774 | return victim; | |
2775 | } | |
2776 | ||
2777 | } | |
2778 | ||
2779 | bin = next_bin(bin); | |
2780 | ||
2781 | } while ((++idx & (BINBLOCKWIDTH - 1)) != 0); | |
2782 | ||
2783 | /* Clear out the block bit. */ | |
2784 | ||
2785 | do /* Possibly backtrack to try to clear a partial block */ | |
2786 | { | |
2787 | if ((startidx & (BINBLOCKWIDTH - 1)) == 0) | |
2788 | { | |
2789 | binblocks(ar_ptr) &= ~block; | |
2790 | break; | |
2791 | } | |
2792 | --startidx; | |
2793 | q = prev_bin(q); | |
2794 | } while (first(q) == q); | |
2795 | ||
2796 | /* Get to the next possibly nonempty block */ | |
2797 | ||
2798 | if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) ) | |
2799 | { | |
2800 | while ((block & binblocks(ar_ptr)) == 0) | |
2801 | { | |
2802 | idx += BINBLOCKWIDTH; | |
2803 | block <<= 1; | |
2804 | } | |
2805 | } | |
2806 | else | |
2807 | break; | |
2808 | } | |
2809 | } | |
2810 | ||
2811 | ||
2812 | /* Try to use top chunk */ | |
2813 | ||
2814 | /* Require that there be a remainder, ensuring top always exists */ | |
2815 | if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE) | |
2816 | { | |
2817 | ||
2818 | #if HAVE_MMAP | |
2819 | /* If big and would otherwise need to extend, try to use mmap instead */ | |
2820 | if ((unsigned long)nb >= (unsigned long)mmap_threshold && | |
2821 | (victim = mmap_chunk(nb)) != 0) | |
2822 | return victim; | |
2823 | #endif | |
2824 | ||
2825 | /* Try to extend */ | |
2826 | malloc_extend_top(ar_ptr, nb); | |
2827 | if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE) | |
2828 | return 0; /* propagate failure */ | |
2829 | } | |
2830 | ||
2831 | victim = top(ar_ptr); | |
2832 | set_head(victim, nb | PREV_INUSE); | |
2833 | top(ar_ptr) = chunk_at_offset(victim, nb); | |
2834 | set_head(top(ar_ptr), remainder_size | PREV_INUSE); | |
2835 | check_malloced_chunk(ar_ptr, victim, nb); | |
2836 | return victim; | |
2837 | ||
2838 | } | |
2839 | ||
2840 | ||
2841 | \f | |
2842 | ||
2843 | /* | |
2844 | ||
2845 | free() algorithm : | |
2846 | ||
2847 | cases: | |
2848 | ||
2849 | 1. free(0) has no effect. | |
2850 | ||
2851 | 2. If the chunk was allocated via mmap, it is released via munmap(). | |
2852 | ||
2853 | 3. If a returned chunk borders the current high end of memory, | |
2854 | it is consolidated into the top, and if the total unused | |
2855 | topmost memory exceeds the trim threshold, malloc_trim is | |
2856 | called. | |
2857 | ||
2858 | 4. Other chunks are consolidated as they arrive, and | |
2859 | placed in corresponding bins. (This includes the case of | |
2860 | consolidating with the current `last_remainder'). | |
2861 | ||
2862 | */ | |
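/* Editor's sketch (not part of the original source): a hypothetical
   driver for case 3 above, assuming the default trim_threshold and
   mmap_threshold of 128K. Small sbrk'ed chunks freed last-to-first
   each merge into top; once the unused top exceeds trim_threshold,
   main_trim() lowers the break. */
#if 0
#include <stdlib.h>
int main(void)
{
  char *blocks[64];
  int i;
  for (i = 0; i < 64; i++)
    blocks[i] = malloc(4000);    /* below mmap_threshold: sbrk'ed */
  for (i = 63; i >= 0; i--)
    free(blocks[i]);             /* borders top each time: case 3 */
  /* ~256K of unused top now exceeds trim_threshold, so the last
     free() consolidated into top and triggered a trim. */
  return 0;
}
#endif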
2863 | ||
2864 | ||
2865 | #if __STD_C | |
2866 | void fREe(Void_t* mem) | |
2867 | #else | |
2868 | void fREe(mem) Void_t* mem; | |
2869 | #endif | |
2870 | { | |
2871 | arena *ar_ptr; | |
2872 | mchunkptr p; /* chunk corresponding to mem */ | |
2873 | ||
dfd2257a | 2874 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 | 2875 | if (__free_hook != NULL) { |
dfd2257a | 2876 | #if defined __GNUC__ && __GNUC__ >= 2 |
a2b08ee5 UD |
2877 | (*__free_hook)(mem, __builtin_return_address (0)); |
2878 | #else | |
dfd2257a | 2879 | (*__free_hook)(mem, NULL); |
a2b08ee5 | 2880 | #endif |
10dc2a90 UD |
2881 | return; |
2882 | } | |
2883 | #endif | |
2884 | ||
f65fd747 UD |
2885 | if (mem == 0) /* free(0) has no effect */ |
2886 | return; | |
2887 | ||
2888 | p = mem2chunk(mem); | |
2889 | ||
2890 | #if HAVE_MMAP | |
2891 | if (chunk_is_mmapped(p)) /* release mmapped memory. */ | |
2892 | { | |
2893 | munmap_chunk(p); | |
2894 | return; | |
2895 | } | |
2896 | #endif | |
2897 | ||
2898 | ar_ptr = arena_for_ptr(p); | |
8a4b65b4 UD |
2899 | #if THREAD_STATS |
2900 | if(!mutex_trylock(&ar_ptr->mutex)) | |
2901 | ++(ar_ptr->stat_lock_direct); | |
2902 | else { | |
2903 | (void)mutex_lock(&ar_ptr->mutex); | |
2904 | ++(ar_ptr->stat_lock_wait); | |
2905 | } | |
2906 | #else | |
f65fd747 | 2907 | (void)mutex_lock(&ar_ptr->mutex); |
8a4b65b4 | 2908 | #endif |
f65fd747 UD |
2909 | chunk_free(ar_ptr, p); |
2910 | (void)mutex_unlock(&ar_ptr->mutex); | |
2911 | } | |
2912 | ||
2913 | static void | |
dfd2257a | 2914 | internal_function |
f65fd747 UD |
2915 | #if __STD_C |
2916 | chunk_free(arena *ar_ptr, mchunkptr p) | |
2917 | #else | |
2918 | chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p; | |
2919 | #endif | |
2920 | { | |
2921 | INTERNAL_SIZE_T hd = p->size; /* its head field */ | |
2922 | INTERNAL_SIZE_T sz; /* its size */ | |
2923 | int idx; /* its bin index */ | |
2924 | mchunkptr next; /* next contiguous chunk */ | |
2925 | INTERNAL_SIZE_T nextsz; /* its size */ | |
2926 | INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */ | |
2927 | mchunkptr bck; /* misc temp for linking */ | |
2928 | mchunkptr fwd; /* misc temp for linking */ | |
2929 | int islr; /* track whether merging with last_remainder */ | |
2930 | ||
2931 | check_inuse_chunk(ar_ptr, p); | |
2932 | ||
2933 | sz = hd & ~PREV_INUSE; | |
2934 | next = chunk_at_offset(p, sz); | |
2935 | nextsz = chunksize(next); | |
2936 | ||
2937 | if (next == top(ar_ptr)) /* merge with top */ | |
2938 | { | |
2939 | sz += nextsz; | |
2940 | ||
2941 | if (!(hd & PREV_INUSE)) /* consolidate backward */ | |
2942 | { | |
2943 | prevsz = p->prev_size; | |
2944 | p = chunk_at_offset(p, -prevsz); | |
2945 | sz += prevsz; | |
2946 | unlink(p, bck, fwd); | |
2947 | } | |
2948 | ||
2949 | set_head(p, sz | PREV_INUSE); | |
2950 | top(ar_ptr) = p; | |
8a4b65b4 UD |
2951 | |
2952 | #ifndef NO_THREADS | |
2953 | if(ar_ptr == &main_arena) { | |
2954 | #endif | |
2955 | if ((unsigned long)(sz) >= (unsigned long)trim_threshold) | |
2956 | main_trim(top_pad); | |
2957 | #ifndef NO_THREADS | |
2958 | } else { | |
2959 | heap_info *heap = heap_for_ptr(p); | |
2960 | ||
2961 | assert(heap->ar_ptr == ar_ptr); | |
2962 | ||
2963 | /* Try to get rid of completely empty heaps, if possible. */ | |
2964 | if((unsigned long)(sz) >= (unsigned long)trim_threshold || | |
2965 | p == chunk_at_offset(heap, sizeof(*heap))) | |
2966 | heap_trim(heap, top_pad); | |
2967 | } | |
2968 | #endif | |
f65fd747 UD |
2969 | return; |
2970 | } | |
2971 | ||
f65fd747 UD |
2972 | islr = 0; |
2973 | ||
2974 | if (!(hd & PREV_INUSE)) /* consolidate backward */ | |
2975 | { | |
2976 | prevsz = p->prev_size; | |
2977 | p = chunk_at_offset(p, -prevsz); | |
2978 | sz += prevsz; | |
2979 | ||
2980 | if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */ | |
2981 | islr = 1; | |
2982 | else | |
2983 | unlink(p, bck, fwd); | |
2984 | } | |
2985 | ||
2986 | if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */ | |
2987 | { | |
2988 | sz += nextsz; | |
2989 | ||
2990 | if (!islr && next->fd == last_remainder(ar_ptr)) | |
2991 | /* re-insert last_remainder */ | |
2992 | { | |
2993 | islr = 1; | |
2994 | link_last_remainder(ar_ptr, p); | |
2995 | } | |
2996 | else | |
2997 | unlink(next, bck, fwd); | |
7799b7b3 UD |
2998 | |
2999 | next = chunk_at_offset(p, sz); | |
f65fd747 | 3000 | } |
7799b7b3 UD |
3001 | else |
3002 | set_head(next, nextsz); /* clear inuse bit */ | |
f65fd747 UD |
3003 | |
3004 | set_head(p, sz | PREV_INUSE); | |
7799b7b3 | 3005 | next->prev_size = sz; |
f65fd747 UD |
3006 | if (!islr) |
3007 | frontlink(ar_ptr, p, sz, idx, bck, fwd); | |
7799b7b3 UD |
3008 | |
3009 | #ifndef NO_THREADS | |
3010 | /* Check whether the heap containing top can go away now. */ | |
3011 | if(next->size < MINSIZE && | |
3012 | (unsigned long)sz > trim_threshold && | |
3013 | ar_ptr != &main_arena) { /* fencepost */ | |
3014 | heap_info* heap = heap_for_ptr(top(ar_ptr)); | |
3015 | ||
3016 | if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) && | |
3017 | heap->prev == heap_for_ptr(p)) | |
3018 | heap_trim(heap, top_pad); | |
3019 | } | |
3020 | #endif | |
f65fd747 UD |
3021 | } |
3022 | ||
3023 | ||
3024 | \f | |
3025 | ||
3026 | ||
3027 | /* | |
3028 | ||
3029 | Realloc algorithm: | |
3030 | ||
3031 | Chunks that were obtained via mmap cannot be extended or shrunk | |
3032 | unless HAVE_MREMAP is defined, in which case mremap is used. | |
3033 | Otherwise, if their reallocation is for additional space, they are | |
3034 | copied. If for less, they are just left alone. | |
3035 | ||
3036 | Otherwise, if the reallocation is for additional space, and the | |
3037 | chunk can be extended, it is, else a malloc-copy-free sequence is | |
3038 | taken. There are several different ways that a chunk could be | |
3039 | extended. All are tried: | |
3040 | ||
3041 | * Extending forward into following adjacent free chunk. | |
3042 | * Shifting backwards, joining preceding adjacent space. | |
3043 | * Both shifting backwards and extending forward. | |
3044 | * Extending into newly sbrked space. | |
3045 | ||
3046 | Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a | |
3047 | size argument of zero (re)allocates a minimum-sized chunk. | |
3048 | ||
3049 | If the reallocation is for less space, and the new request is for | |
3050 | a `small' (<512 bytes) size, then the newly unused space is lopped | |
3051 | off and freed. | |
3052 | ||
3053 | The old unix realloc convention of allowing the last-free'd chunk | |
3054 | to be used as an argument to realloc is no longer supported. | |
3055 | I don't know of any programs still relying on this feature, | |
3056 | and allowing it would also allow too many other incorrect | |
3057 | usages of realloc to be sensible. | |
3058 | ||
3059 | ||
3060 | */ | |
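/* Editor's sketch (not part of the original source): whether the block
   moved can be observed by comparing pointer values; growth succeeds in
   place whenever one of the extension cases above applies. */
#if 0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
  char *p = malloc(100), *q;
  if (p == 0) return 1;
  strcpy(p, "contents survive realloc");
  q = realloc(p, 1000);          /* extend forward/backward, or copy */
  if (q == 0) { free(p); return 1; }
  printf("%s, still \"%s\"\n", q == p ? "in place" : "moved", q);
  free(q);
  return 0;
}
#endif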
3061 | ||
3062 | ||
3063 | #if __STD_C | |
3064 | Void_t* rEALLOc(Void_t* oldmem, size_t bytes) | |
3065 | #else | |
3066 | Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes; | |
3067 | #endif | |
3068 | { | |
3069 | arena *ar_ptr; | |
3070 | INTERNAL_SIZE_T nb; /* padded request size */ | |
3071 | ||
3072 | mchunkptr oldp; /* chunk corresponding to oldmem */ | |
3073 | INTERNAL_SIZE_T oldsize; /* its size */ | |
3074 | ||
3075 | mchunkptr newp; /* chunk to return */ | |
f65fd747 | 3076 | |
dfd2257a | 3077 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
3078 | if (__realloc_hook != NULL) { |
3079 | Void_t* result; | |
f65fd747 | 3080 | |
dfd2257a | 3081 | #if defined __GNUC__ && __GNUC__ >= 2 |
a2b08ee5 UD |
3082 | result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0)); |
3083 | #else | |
dfd2257a | 3084 | result = (*__realloc_hook)(oldmem, bytes, NULL); |
a2b08ee5 | 3085 | #endif |
10dc2a90 UD |
3086 | return result; |
3087 | } | |
3088 | #endif | |
f65fd747 UD |
3089 | |
3090 | #ifdef REALLOC_ZERO_BYTES_FREES | |
3091 | if (bytes == 0) { fREe(oldmem); return 0; } | |
3092 | #endif | |
3093 | ||
f65fd747 UD |
3094 | /* realloc of null is supposed to be same as malloc */ |
3095 | if (oldmem == 0) return mALLOc(bytes); | |
3096 | ||
10dc2a90 UD |
3097 | oldp = mem2chunk(oldmem); |
3098 | oldsize = chunksize(oldp); | |
f65fd747 UD |
3099 | |
3100 | nb = request2size(bytes); | |
3101 | ||
3102 | #if HAVE_MMAP | |
3103 | if (chunk_is_mmapped(oldp)) | |
3104 | { | |
10dc2a90 UD |
3105 | Void_t* newmem; |
3106 | ||
f65fd747 UD |
3107 | #if HAVE_MREMAP |
3108 | newp = mremap_chunk(oldp, nb); | |
3109 | if(newp) return chunk2mem(newp); | |
3110 | #endif | |
3111 | /* Note the extra SIZE_SZ overhead. */ | |
3112 | if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */ | |
3113 | /* Must alloc, copy, free. */ | |
3114 | newmem = mALLOc(bytes); | |
3115 | if (newmem == 0) return 0; /* propagate failure */ | |
3116 | MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ); | |
3117 | munmap_chunk(oldp); | |
3118 | return newmem; | |
3119 | } | |
3120 | #endif | |
3121 | ||
3122 | ar_ptr = arena_for_ptr(oldp); | |
8a4b65b4 UD |
3123 | #if THREAD_STATS |
3124 | if(!mutex_trylock(&ar_ptr->mutex)) | |
3125 | ++(ar_ptr->stat_lock_direct); | |
3126 | else { | |
3127 | (void)mutex_lock(&ar_ptr->mutex); | |
3128 | ++(ar_ptr->stat_lock_wait); | |
3129 | } | |
3130 | #else | |
f65fd747 | 3131 | (void)mutex_lock(&ar_ptr->mutex); |
8a4b65b4 UD |
3132 | #endif |
3133 | ||
1228ed5c | 3134 | #ifndef NO_THREADS |
f65fd747 UD |
3135 | /* As in malloc(), remember this arena for the next allocation. */ |
3136 | tsd_setspecific(arena_key, (Void_t *)ar_ptr); | |
1228ed5c | 3137 | #endif |
f65fd747 | 3138 | |
10dc2a90 UD |
3139 | newp = chunk_realloc(ar_ptr, oldp, oldsize, nb); |
3140 | ||
3141 | (void)mutex_unlock(&ar_ptr->mutex); | |
3142 | return newp ? chunk2mem(newp) : NULL; | |
3143 | } | |
3144 | ||
3145 | static mchunkptr | |
dfd2257a | 3146 | internal_function |
10dc2a90 UD |
3147 | #if __STD_C |
3148 | chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize, | |
7e3be507 | 3149 | INTERNAL_SIZE_T nb) |
10dc2a90 UD |
3150 | #else |
3151 | chunk_realloc(ar_ptr, oldp, oldsize, nb) | |
3152 | arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb; | |
3153 | #endif | |
3154 | { | |
3155 | mchunkptr newp = oldp; /* chunk to return */ | |
3156 | INTERNAL_SIZE_T newsize = oldsize; /* its size */ | |
3157 | ||
3158 | mchunkptr next; /* next contiguous chunk after oldp */ | |
3159 | INTERNAL_SIZE_T nextsize; /* its size */ | |
3160 | ||
3161 | mchunkptr prev; /* previous contiguous chunk before oldp */ | |
3162 | INTERNAL_SIZE_T prevsize; /* its size */ | |
3163 | ||
3164 | mchunkptr remainder; /* holds split off extra space from newp */ | |
3165 | INTERNAL_SIZE_T remainder_size; /* its size */ | |
3166 | ||
3167 | mchunkptr bck; /* misc temp for linking */ | |
3168 | mchunkptr fwd; /* misc temp for linking */ | |
3169 | ||
f65fd747 UD |
3170 | check_inuse_chunk(ar_ptr, oldp); |
3171 | ||
3172 | if ((long)(oldsize) < (long)(nb)) | |
3173 | { | |
3174 | ||
3175 | /* Try expanding forward */ | |
3176 | ||
3177 | next = chunk_at_offset(oldp, oldsize); | |
3178 | if (next == top(ar_ptr) || !inuse(next)) | |
3179 | { | |
3180 | nextsize = chunksize(next); | |
3181 | ||
3182 | /* Forward into top only if a remainder */ | |
3183 | if (next == top(ar_ptr)) | |
3184 | { | |
3185 | if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE)) | |
3186 | { | |
3187 | newsize += nextsize; | |
3188 | top(ar_ptr) = chunk_at_offset(oldp, nb); | |
3189 | set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE); | |
3190 | set_head_size(oldp, nb); | |
10dc2a90 | 3191 | return oldp; |
f65fd747 UD |
3192 | } |
3193 | } | |
3194 | ||
3195 | /* Forward into next chunk */ | |
3196 | else if (((long)(nextsize + newsize) >= (long)(nb))) | |
3197 | { | |
3198 | unlink(next, bck, fwd); | |
3199 | newsize += nextsize; | |
3200 | goto split; | |
3201 | } | |
3202 | } | |
3203 | else | |
3204 | { | |
3205 | next = 0; | |
3206 | nextsize = 0; | |
3207 | } | |
3208 | ||
3209 | /* Try shifting backwards. */ | |
3210 | ||
3211 | if (!prev_inuse(oldp)) | |
3212 | { | |
3213 | prev = prev_chunk(oldp); | |
3214 | prevsize = chunksize(prev); | |
3215 | ||
3216 | /* try forward + backward first to save a later consolidation */ | |
3217 | ||
3218 | if (next != 0) | |
3219 | { | |
3220 | /* into top */ | |
3221 | if (next == top(ar_ptr)) | |
3222 | { | |
3223 | if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE)) | |
3224 | { | |
3225 | unlink(prev, bck, fwd); | |
3226 | newp = prev; | |
3227 | newsize += prevsize + nextsize; | |
10dc2a90 | 3228 | MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ); |
f65fd747 UD |
3229 | top(ar_ptr) = chunk_at_offset(newp, nb); |
3230 | set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE); | |
3231 | set_head_size(newp, nb); | |
10dc2a90 | 3232 | return newp; |
f65fd747 UD |
3233 | } |
3234 | } | |
3235 | ||
3236 | /* into next chunk */ | |
3237 | else if (((long)(nextsize + prevsize + newsize) >= (long)(nb))) | |
3238 | { | |
3239 | unlink(next, bck, fwd); | |
3240 | unlink(prev, bck, fwd); | |
3241 | newp = prev; | |
3242 | newsize += nextsize + prevsize; | |
10dc2a90 | 3243 | MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ); |
f65fd747 UD |
3244 | goto split; |
3245 | } | |
3246 | } | |
3247 | ||
3248 | /* backward only */ | |
3249 | if (prev != 0 && (long)(prevsize + newsize) >= (long)nb) | |
3250 | { | |
3251 | unlink(prev, bck, fwd); | |
3252 | newp = prev; | |
3253 | newsize += prevsize; | |
10dc2a90 | 3254 | MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ); |
f65fd747 UD |
3255 | goto split; |
3256 | } | |
3257 | } | |
3258 | ||
3259 | /* Must allocate */ | |
3260 | ||
3261 | newp = chunk_alloc (ar_ptr, nb); | |
3262 | ||
7799b7b3 UD |
3263 | if (newp == 0) { |
3264 | /* Maybe the failure is due to running out of mmapped areas. */ | |
3265 | if (ar_ptr != &main_arena) { | |
3266 | (void)mutex_lock(&main_arena.mutex); | |
3267 | newp = chunk_alloc(&main_arena, nb); | |
3268 | (void)mutex_unlock(&main_arena.mutex); | |
3269 | } | |
3270 | if (newp == 0) /* propagate failure */ | |
3271 | return 0; | |
3272 | } | |
f65fd747 UD |
3273 | |
3274 | /* Avoid copy if newp is next chunk after oldp. */ | |
3275 | /* (This can only happen when new chunk is sbrk'ed.) */ | |
3276 | ||
3277 | if ( newp == next_chunk(oldp)) | |
3278 | { | |
3279 | newsize += chunksize(newp); | |
3280 | newp = oldp; | |
3281 | goto split; | |
3282 | } | |
3283 | ||
3284 | /* Otherwise copy, free, and exit */ | |
10dc2a90 | 3285 | MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ); |
f65fd747 | 3286 | chunk_free(ar_ptr, oldp); |
10dc2a90 | 3287 | return newp; |
f65fd747 UD |
3288 | } |
3289 | ||
3290 | ||
3291 | split: /* split off extra room in old or expanded chunk */ | |
3292 | ||
3293 | if (newsize - nb >= MINSIZE) /* split off remainder */ | |
3294 | { | |
3295 | remainder = chunk_at_offset(newp, nb); | |
3296 | remainder_size = newsize - nb; | |
3297 | set_head_size(newp, nb); | |
3298 | set_head(remainder, remainder_size | PREV_INUSE); | |
3299 | set_inuse_bit_at_offset(remainder, remainder_size); | |
3300 | chunk_free(ar_ptr, remainder); | |
3301 | } | |
3302 | else | |
3303 | { | |
3304 | set_head_size(newp, newsize); | |
3305 | set_inuse_bit_at_offset(newp, newsize); | |
3306 | } | |
3307 | ||
3308 | check_inuse_chunk(ar_ptr, newp); | |
10dc2a90 | 3309 | return newp; |
f65fd747 UD |
3310 | } |
3311 | ||
3312 | ||
3313 | \f | |
3314 | ||
3315 | /* | |
3316 | ||
3317 | memalign algorithm: | |
3318 | ||
3319 | memalign requests more than enough space from malloc, finds a spot | |
3320 | within that chunk that meets the alignment request, and then | |
3321 | possibly frees the leading and trailing space. | |
3322 | ||
3323 | The alignment argument must be a power of two. This property is not | |
3324 | checked by memalign, so misuse may result in random runtime errors. | |
3325 | ||
3326 | 8-byte alignment is guaranteed by normal malloc calls, so don't | |
3327 | bother calling memalign with an argument of 8 or less. | |
3328 | ||
3329 | Overreliance on memalign is a sure way to fragment space. | |
3330 | ||
3331 | */ | |
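/* Editor's note (not part of the original source): the aligned spot is
   found by rounding up -- for a power-of-two alignment,
   aligned = (m + alignment - 1) & -alignment. A minimal usage sketch: */
#if 0
#include <malloc.h>
#include <stdlib.h>
int main(void)
{
  void *p = memalign(64, 1000);  /* 64-byte aligned, >= 1000 bytes */
  if (p == 0 || ((unsigned long)p % 64) != 0) return 1;
  free(p);
  return 0;
}
#endif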
3332 | ||
3333 | ||
3334 | #if __STD_C | |
3335 | Void_t* mEMALIGn(size_t alignment, size_t bytes) | |
3336 | #else | |
3337 | Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes; | |
3338 | #endif | |
3339 | { | |
3340 | arena *ar_ptr; | |
3341 | INTERNAL_SIZE_T nb; /* padded request size */ | |
10dc2a90 UD |
3342 | mchunkptr p; |
3343 | ||
dfd2257a | 3344 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
3345 | if (__memalign_hook != NULL) { |
3346 | Void_t* result; | |
3347 | ||
dfd2257a | 3348 | #if defined __GNUC__ && __GNUC__ >= 2 |
a2b08ee5 UD |
3349 | result = (*__memalign_hook)(alignment, bytes, |
3350 | __builtin_return_address (0)); | |
3351 | #else | |
dfd2257a | 3352 | result = (*__memalign_hook)(alignment, bytes, NULL); |
a2b08ee5 | 3353 | #endif |
10dc2a90 UD |
3354 | return result; |
3355 | } | |
3356 | #endif | |
f65fd747 UD |
3357 | |
3358 | /* If need less alignment than we give anyway, just relay to malloc */ | |
3359 | ||
3360 | if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes); | |
3361 | ||
3362 | /* Otherwise, ensure that it is at least a minimum chunk size */ | |
3363 | ||
3364 | if (alignment < MINSIZE) alignment = MINSIZE; | |
3365 | ||
f65fd747 UD |
3366 | nb = request2size(bytes); |
3367 | arena_get(ar_ptr, nb + alignment + MINSIZE); | |
3368 | if(!ar_ptr) | |
3369 | return 0; | |
10dc2a90 UD |
3370 | p = chunk_align(ar_ptr, nb, alignment); |
3371 | (void)mutex_unlock(&ar_ptr->mutex); | |
7799b7b3 UD |
3372 | if(!p) { |
3373 | /* Maybe the failure is due to running out of mmapped areas. */ | |
3374 | if(ar_ptr != &main_arena) { | |
3375 | (void)mutex_lock(&main_arena.mutex); | |
3376 | p = chunk_align(&main_arena, nb, alignment); | |
3377 | (void)mutex_unlock(&main_arena.mutex); | |
3378 | } | |
3379 | if(!p) return 0; | |
3380 | } | |
3381 | return chunk2mem(p); | |
10dc2a90 | 3382 | } |
f65fd747 | 3383 | |
10dc2a90 | 3384 | static mchunkptr |
dfd2257a | 3385 | internal_function |
10dc2a90 UD |
3386 | #if __STD_C |
3387 | chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment) | |
3388 | #else | |
3389 | chunk_align(ar_ptr, nb, alignment) | |
3390 | arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment; | |
3391 | #endif | |
3392 | { | |
3393 | char* m; /* memory returned by malloc call */ | |
3394 | mchunkptr p; /* corresponding chunk */ | |
3395 | char* brk; /* alignment point within p */ | |
3396 | mchunkptr newp; /* chunk to return */ | |
3397 | INTERNAL_SIZE_T newsize; /* its size */ | |
3398 | INTERNAL_SIZE_T leadsize; /* leading space before alignment point */ | |
3399 | mchunkptr remainder; /* spare room at end to split off */ | |
3400 | long remainder_size; /* its size */ | |
3401 | ||
3402 | /* Call chunk_alloc with worst case padding to hit alignment. */ | |
3403 | p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE); | |
3404 | if (p == 0) | |
f65fd747 | 3405 | return 0; /* propagate failure */ |
f65fd747 UD |
3406 | |
3407 | m = chunk2mem(p); | |
3408 | ||
3409 | if ((((unsigned long)(m)) % alignment) == 0) /* aligned */ | |
3410 | { | |
3411 | #if HAVE_MMAP | |
3412 | if(chunk_is_mmapped(p)) { | |
10dc2a90 | 3413 | return p; /* nothing more to do */ |
f65fd747 UD |
3414 | } |
3415 | #endif | |
3416 | } | |
3417 | else /* misaligned */ | |
3418 | { | |
3419 | /* | |
3420 | Find an aligned spot inside chunk. | |
3421 | Since we need to give back leading space in a chunk of at | |
3422 | least MINSIZE, if the first calculation places us at | |
3423 | a spot with less than MINSIZE leader, we can move to the | |
3424 | next aligned spot -- we've allocated enough total room so that | |
3425 | this is always possible. | |
3426 | */ | |
3427 | ||
3428 | brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment); | |
10dc2a90 | 3429 | if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment; |
f65fd747 UD |
3430 | |
3431 | newp = (mchunkptr)brk; | |
3432 | leadsize = brk - (char*)(p); | |
3433 | newsize = chunksize(p) - leadsize; | |
3434 | ||
3435 | #if HAVE_MMAP | |
3436 | if(chunk_is_mmapped(p)) | |
3437 | { | |
3438 | newp->prev_size = p->prev_size + leadsize; | |
3439 | set_head(newp, newsize|IS_MMAPPED); | |
10dc2a90 | 3440 | return newp; |
f65fd747 UD |
3441 | } |
3442 | #endif | |
3443 | ||
3444 | /* give back leader, use the rest */ | |
3445 | ||
3446 | set_head(newp, newsize | PREV_INUSE); | |
3447 | set_inuse_bit_at_offset(newp, newsize); | |
3448 | set_head_size(p, leadsize); | |
3449 | chunk_free(ar_ptr, p); | |
3450 | p = newp; | |
3451 | ||
3452 | assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0); | |
3453 | } | |
3454 | ||
3455 | /* Also give back spare room at the end */ | |
3456 | ||
3457 | remainder_size = chunksize(p) - nb; | |
3458 | ||
3459 | if (remainder_size >= (long)MINSIZE) | |
3460 | { | |
3461 | remainder = chunk_at_offset(p, nb); | |
3462 | set_head(remainder, remainder_size | PREV_INUSE); | |
3463 | set_head_size(p, nb); | |
3464 | chunk_free(ar_ptr, remainder); | |
3465 | } | |
3466 | ||
3467 | check_inuse_chunk(ar_ptr, p); | |
10dc2a90 | 3468 | return p; |
f65fd747 UD |
3469 | } |
3470 | ||
3471 | \f | |
3472 | ||
3473 | ||
3474 | /* | |
3475 | valloc just invokes memalign with alignment argument equal | |
3476 | to the page size of the system (or as near to this as can | |
3477 | be figured out from all the includes/defines above.) | |
3478 | */ | |
3479 | ||
3480 | #if __STD_C | |
3481 | Void_t* vALLOc(size_t bytes) | |
3482 | #else | |
3483 | Void_t* vALLOc(bytes) size_t bytes; | |
3484 | #endif | |
3485 | { | |
3486 | return mEMALIGn (malloc_getpagesize, bytes); | |
3487 | } | |
3488 | ||
3489 | /* | |
3490 | pvalloc just invokes memalign with the request rounded up | |
3491 | to the nearest multiple of the page size | |
3492 | */ | |
3493 | ||
3494 | ||
3495 | #if __STD_C | |
3496 | Void_t* pvALLOc(size_t bytes) | |
3497 | #else | |
3498 | Void_t* pvALLOc(bytes) size_t bytes; | |
3499 | #endif | |
3500 | { | |
3501 | size_t pagesize = malloc_getpagesize; | |
3502 | return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1)); | |
3503 | } | |
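/* Editor's worked example (not part of the original source): with a
   4096-byte page, pvalloc(5000) computes (5000 + 4095) & ~4095 = 8192,
   so the request is rounded up to two whole, page-aligned pages. */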
3504 | ||
3505 | /* | |
3506 | ||
10dc2a90 | 3507 | calloc calls chunk_alloc, then zeroes out the allocated chunk. |
f65fd747 UD |
3508 | |
3509 | */ | |
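/* Editor's note (not part of the original source): the product
   n * elem_size below is formed without an overflow check, so callers
   must ensure it fits in a size_t. Minimal usage: */
#if 0
#include <stdlib.h>
int main(void)
{
  double *a = calloc(100, sizeof(double));  /* 800 bytes, all zero */
  if (a == 0) return 1;
  /* a[0] .. a[99] read as 0.0 */
  free(a);
  return 0;
}
#endif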
3510 | ||
3511 | #if __STD_C | |
3512 | Void_t* cALLOc(size_t n, size_t elem_size) | |
3513 | #else | |
3514 | Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size; | |
3515 | #endif | |
3516 | { | |
3517 | arena *ar_ptr; | |
3518 | mchunkptr p, oldtop; | |
10dc2a90 | 3519 | INTERNAL_SIZE_T sz, csz, oldtopsize; |
f65fd747 UD |
3520 | Void_t* mem; |
3521 | ||
dfd2257a | 3522 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
3523 | if (__malloc_hook != NULL) { |
3524 | sz = n * elem_size; | |
dfd2257a | 3525 | #if defined __GNUC__ && __GNUC__ >= 2 |
a2b08ee5 UD |
3526 | mem = (*__malloc_hook)(sz, __builtin_return_address (0)); |
3527 | #else | |
dfd2257a | 3528 | mem = (*__malloc_hook)(sz, NULL); |
a2b08ee5 | 3529 | #endif |
831372e7 UD |
3530 | if(mem == 0) |
3531 | return 0; | |
a2b08ee5 | 3532 | #ifdef HAVE_MEMSET |
c131718c | 3533 | return memset(mem, 0, sz); |
10dc2a90 | 3534 | #else |
831372e7 | 3535 | while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */ |
10dc2a90 | 3536 | return mem; |
c131718c | 3537 | #endif |
10dc2a90 UD |
3538 | } |
3539 | #endif | |
f65fd747 | 3540 | |
10dc2a90 | 3541 | sz = request2size(n * elem_size); |
f65fd747 UD |
3542 | arena_get(ar_ptr, sz); |
3543 | if(!ar_ptr) | |
3544 | return 0; | |
3545 | ||
3546 | /* Remember top, so we can detect freshly sbrk'ed memory that needs no clearing */ | |
3547 | #if MORECORE_CLEARS | |
3548 | oldtop = top(ar_ptr); | |
3549 | oldtopsize = chunksize(top(ar_ptr)); | |
3550 | #endif | |
3551 | p = chunk_alloc (ar_ptr, sz); | |
3552 | ||
3553 | /* Only clearing follows, so we can unlock early. */ | |
3554 | (void)mutex_unlock(&ar_ptr->mutex); | |
3555 | ||
7799b7b3 UD |
3556 | if (p == 0) { |
3557 | /* Maybe the failure is due to running out of mmapped areas. */ | |
3558 | if(ar_ptr != &main_arena) { | |
3559 | (void)mutex_lock(&main_arena.mutex); | |
3560 | p = chunk_alloc(&main_arena, sz); | |
3561 | (void)mutex_unlock(&main_arena.mutex); | |
3562 | } | |
3563 | if (p == 0) return 0; | |
3564 | } | |
3565 | mem = chunk2mem(p); | |
f65fd747 | 3566 | |
7799b7b3 | 3567 | /* Two optional cases in which clearing is not necessary */
f65fd747 UD |
3568 | |
3569 | #if HAVE_MMAP | |
7799b7b3 | 3570 | if (chunk_is_mmapped(p)) return mem; |
f65fd747 UD |
3571 | #endif |
3572 | ||
7799b7b3 | 3573 | csz = chunksize(p); |
f65fd747 UD |
3574 | |
3575 | #if MORECORE_CLEARS | |
7799b7b3 UD |
3576 | if (p == oldtop && csz > oldtopsize) { |
3577 | /* clear only the bytes from non-freshly-sbrked memory */ | |
3578 | csz = oldtopsize; | |
3579 | } | |
f65fd747 UD |
3580 | #endif |
3581 | ||
7799b7b3 UD |
3582 | MALLOC_ZERO(mem, csz - SIZE_SZ); |
3583 | return mem; | |
f65fd747 UD |
3584 | } |
3585 | ||
3586 | /* | |
3587 | ||
3588 | cfree just calls free. It is needed/defined on some systems | |
3589 | that pair it with calloc, presumably for odd historical reasons. | |
3590 | ||
3591 | */ | |
3592 | ||
3593 | #if !defined(_LIBC) | |
3594 | #if __STD_C | |
3595 | void cfree(Void_t *mem) | |
3596 | #else | |
3597 | void cfree(mem) Void_t *mem; | |
3598 | #endif | |
3599 | { | |
3600 | free(mem); | |
3601 | } | |
3602 | #endif | |
3603 | ||
3604 | \f | |
3605 | ||
3606 | /* | |
3607 | ||
3608 | Malloc_trim gives memory back to the system (via negative | |
3609 | arguments to sbrk) if there is unused memory at the `high' end of | |
3610 | the malloc pool. You can call this after freeing large blocks of | |
3611 | memory to potentially reduce the system-level memory requirements | |
3612 | of a program. However, it cannot guarantee to reduce memory. Under | |
3613 | some allocation patterns, some large free blocks of memory will be | |
3614 | locked between two used chunks, so they cannot be given back to | |
3615 | the system. | |
3616 | ||
3617 | The `pad' argument to malloc_trim represents the amount of free | |
3618 | trailing space to leave untrimmed. If this argument is zero, | |
3619 | only the minimum amount of memory to maintain internal data | |
3620 | structures will be left (one page or less). Non-zero arguments | |
3621 | can be supplied to maintain enough trailing space to service | |
3622 | future expected allocations without having to re-obtain memory | |
3623 | from the system. | |
3624 | ||
3625 | Malloc_trim returns 1 if it actually released any memory, else 0. | |
3626 | ||
3627 | */ | |
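/* Editor's sketch (not part of the original source): after dropping a
   set of small sbrk'ed blocks, ask for the trailing free memory back,
   keeping 64K of slack for future allocations. */
#if 0
#include <malloc.h>
#include <stdlib.h>
int main(void)
{
  char *blocks[128];
  int i;
  for (i = 0; i < 128; i++)
    blocks[i] = malloc(4000);    /* small enough to come from sbrk */
  for (i = 0; i < 128; i++)
    free(blocks[i]);
  return malloc_trim(64 * 1024) ? 0 : 1;   /* 1 == memory was released */
}
#endif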
3628 | ||
3629 | #if __STD_C | |
7e3be507 | 3630 | int mALLOC_TRIm(size_t pad) |
f65fd747 | 3631 | #else |
7e3be507 | 3632 | int mALLOC_TRIm(pad) size_t pad; |
f65fd747 UD |
3633 | #endif |
3634 | { | |
3635 | int res; | |
3636 | ||
3637 | (void)mutex_lock(&main_arena.mutex); | |
8a4b65b4 | 3638 | res = main_trim(pad); |
f65fd747 UD |
3639 | (void)mutex_unlock(&main_arena.mutex); |
3640 | return res; | |
3641 | } | |
3642 | ||
8a4b65b4 UD |
3643 | /* Trim the main arena. */ |
3644 | ||
f65fd747 | 3645 | static int |
dfd2257a | 3646 | internal_function |
f65fd747 | 3647 | #if __STD_C |
8a4b65b4 | 3648 | main_trim(size_t pad) |
f65fd747 | 3649 | #else |
8a4b65b4 | 3650 | main_trim(pad) size_t pad; |
f65fd747 UD |
3651 | #endif |
3652 | { | |
3653 | mchunkptr top_chunk; /* The current top chunk */ | |
3654 | long top_size; /* Amount of top-most memory */ | |
3655 | long extra; /* Amount to release */ | |
3656 | char* current_brk; /* address returned by pre-check sbrk call */ | |
3657 | char* new_brk; /* address returned by negative sbrk call */ | |
3658 | ||
3659 | unsigned long pagesz = malloc_getpagesize; | |
3660 | ||
8a4b65b4 | 3661 | top_chunk = top(&main_arena); |
f65fd747 UD |
3662 | top_size = chunksize(top_chunk); |
3663 | extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz; | |
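/* Editor's worked example (not part of the original source): with
   pagesz = 4096, pad = 0 and (say) MINSIZE = 16, a top_size of 300000
   gives extra = ((299984 + 4095)/4096 - 1) * 4096 = 73 * 4096 = 299008,
   leaving 992 bytes in top -- always more than pad + MINSIZE and at
   most one page above it. */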
3664 | ||
3665 | if (extra < (long)pagesz) /* Not enough memory to release */ | |
3666 | return 0; | |
3667 | ||
8a4b65b4 UD |
3668 | /* Test to make sure no one else called sbrk */ |
3669 | current_brk = (char*)(MORECORE (0)); | |
3670 | if (current_brk != (char*)(top_chunk) + top_size) | |
3671 | return 0; /* Apparently we don't own memory; must fail */ | |
f65fd747 | 3672 | |
8a4b65b4 | 3673 | new_brk = (char*)(MORECORE (-extra)); |
f65fd747 | 3674 | |
dfd2257a | 3675 | #if defined _LIBC || defined MALLOC_HOOKS |
1228ed5c UD |
3676 | /* Call the `morecore' hook if necessary. */ |
3677 | if (__after_morecore_hook) | |
3678 | (*__after_morecore_hook) (); | |
7799b7b3 | 3679 | #endif |
1228ed5c | 3680 | |
8a4b65b4 UD |
3681 | if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */ |
3682 | /* Try to figure out what we have */ | |
3683 | current_brk = (char*)(MORECORE (0)); | |
3684 | top_size = current_brk - (char*)top_chunk; | |
3685 | if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */ | |
3686 | { | |
3687 | sbrked_mem = current_brk - sbrk_base; | |
3688 | set_head(top_chunk, top_size | PREV_INUSE); | |
f65fd747 | 3689 | } |
8a4b65b4 UD |
3690 | check_chunk(&main_arena, top_chunk); |
3691 | return 0; | |
3692 | } | |
3693 | sbrked_mem -= extra; | |
3694 | ||
3695 | /* Success. Adjust top accordingly. */ | |
3696 | set_head(top_chunk, (top_size - extra) | PREV_INUSE); | |
3697 | check_chunk(&main_arena, top_chunk); | |
3698 | return 1; | |
3699 | } | |
f65fd747 UD |
3700 | |
3701 | #ifndef NO_THREADS | |
8a4b65b4 UD |
3702 | |
3703 | static int | |
dfd2257a | 3704 | internal_function |
8a4b65b4 UD |
3705 | #if __STD_C |
3706 | heap_trim(heap_info *heap, size_t pad) | |
3707 | #else | |
3708 | heap_trim(heap, pad) heap_info *heap; size_t pad; | |
f65fd747 | 3709 | #endif |
8a4b65b4 UD |
3710 | { |
3711 | unsigned long pagesz = malloc_getpagesize; | |
3712 | arena *ar_ptr = heap->ar_ptr; | |
3713 | mchunkptr top_chunk = top(ar_ptr), p, bck, fwd; | |
3714 | heap_info *prev_heap; | |
3715 | long new_size, top_size, extra; | |
3716 | ||
3717 | /* Can this heap go away completely ? */ | |
3718 | while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) { | |
3719 | prev_heap = heap->prev; | |
3720 | p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ)); | |
3721 | assert(p->size == (0|PREV_INUSE)); /* must be fencepost */ | |
3722 | p = prev_chunk(p); | |
3723 | new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ); | |
10dc2a90 | 3724 | assert(new_size>0 && new_size<(long)(2*MINSIZE)); |
8a4b65b4 UD |
3725 | if(!prev_inuse(p)) |
3726 | new_size += p->prev_size; | |
3727 | assert(new_size>0 && new_size<HEAP_MAX_SIZE); | |
3728 | if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz) | |
3729 | break; | |
3730 | ar_ptr->size -= heap->size; | |
3731 | delete_heap(heap); | |
3732 | heap = prev_heap; | |
3733 | if(!prev_inuse(p)) { /* consolidate backward */ | |
3734 | p = prev_chunk(p); | |
3735 | unlink(p, bck, fwd); | |
3736 | } | |
3737 | assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0); | |
3738 | assert( ((char*)p + new_size) == ((char*)heap + heap->size) ); | |
3739 | top(ar_ptr) = top_chunk = p; | |
3740 | set_head(top_chunk, new_size | PREV_INUSE); | |
3741 | check_chunk(ar_ptr, top_chunk); | |
3742 | } | |
3743 | top_size = chunksize(top_chunk); | |
3744 | extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz; | |
3745 | if(extra < (long)pagesz) | |
3746 | return 0; | |
3747 | /* Try to shrink. */ | |
3748 | if(grow_heap(heap, -extra) != 0) | |
3749 | return 0; | |
3750 | ar_ptr->size -= extra; | |
f65fd747 UD |
3751 | |
3752 | /* Success. Adjust top accordingly. */ | |
3753 | set_head(top_chunk, (top_size - extra) | PREV_INUSE); | |
3754 | check_chunk(ar_ptr, top_chunk); | |
3755 | return 1; | |
3756 | } | |
3757 | ||
8a4b65b4 UD |
3758 | #endif |
3759 | ||
f65fd747 UD |
3760 | \f |
3761 | ||
3762 | /* | |
3763 | malloc_usable_size: | |
3764 | ||
3765 | This routine tells you how many bytes you can actually use in an | |
3766 | allocated chunk, which may be more than you requested (although | |
3767 | often not). You can use this many bytes without worrying about | |
3768 | overwriting other allocated objects. Not a particularly great | |
3769 | programming practice, but still sometimes useful. | |
3770 | ||
3771 | */ | |
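/* Editor's sketch (not part of the original source): the usable size is
   at least the request, and all of it may be written safely. */
#if 0
#include <malloc.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
  char *p = malloc(100);
  size_t n;
  if (p == 0) return 1;
  n = malloc_usable_size(p);     /* >= 100, often slightly more */
  memset(p, 0, n);               /* stays inside the chunk */
  free(p);
  return 0;
}
#endif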
3772 | ||
3773 | #if __STD_C | |
7e3be507 | 3774 | size_t mALLOC_USABLE_SIZe(Void_t* mem) |
f65fd747 | 3775 | #else |
7e3be507 | 3776 | size_t mALLOC_USABLE_SIZe(mem) Void_t* mem; |
f65fd747 UD |
3777 | #endif |
3778 | { | |
3779 | mchunkptr p; | |
3780 | ||
3781 | if (mem == 0) | |
3782 | return 0; | |
3783 | else | |
3784 | { | |
3785 | p = mem2chunk(mem); | |
3786 | if(!chunk_is_mmapped(p)) | |
3787 | { | |
3788 | if (!inuse(p)) return 0; | |
3789 | check_inuse_chunk(arena_for_ptr(mem), p); | |
3790 | return chunksize(p) - SIZE_SZ; | |
3791 | } | |
3792 | return chunksize(p) - 2*SIZE_SZ; | |
3793 | } | |
3794 | } | |
3795 | ||
3796 | ||
3797 | \f | |
3798 | ||
8a4b65b4 | 3799 | /* Utility to update mallinfo for malloc_stats() and mallinfo() */ |
f65fd747 | 3800 | |
8a4b65b4 UD |
3801 | static void |
3802 | #if __STD_C | |
3803 | malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi) | |
3804 | #else | |
3805 | malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi; | |
3806 | #endif | |
f65fd747 | 3807 | { |
f65fd747 UD |
3808 | int i, navail; |
3809 | mbinptr b; | |
3810 | mchunkptr p; | |
3811 | #if MALLOC_DEBUG | |
3812 | mchunkptr q; | |
3813 | #endif | |
3814 | INTERNAL_SIZE_T avail; | |
3815 | ||
3816 | (void)mutex_lock(&ar_ptr->mutex); | |
3817 | avail = chunksize(top(ar_ptr)); | |
3818 | navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0; | |
3819 | ||
3820 | for (i = 1; i < NAV; ++i) | |
3821 | { | |
3822 | b = bin_at(ar_ptr, i); | |
3823 | for (p = last(b); p != b; p = p->bk) | |
3824 | { | |
3825 | #if MALLOC_DEBUG | |
3826 | check_free_chunk(ar_ptr, p); | |
3827 | for (q = next_chunk(p); | |
8a4b65b4 | 3828 | q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0; |
f65fd747 UD |
3829 | q = next_chunk(q)) |
3830 | check_inuse_chunk(ar_ptr, q); | |
3831 | #endif | |
3832 | avail += chunksize(p); | |
3833 | navail++; | |
3834 | } | |
3835 | } | |
3836 | ||
8a4b65b4 UD |
3837 | mi->arena = ar_ptr->size; |
3838 | mi->ordblks = navail; | |
b22fc5f5 | 3839 | mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */ |
8a4b65b4 UD |
3840 | mi->uordblks = ar_ptr->size - avail; |
3841 | mi->fordblks = avail; | |
3842 | mi->hblks = n_mmaps; | |
3843 | mi->hblkhd = mmapped_mem; | |
3844 | mi->keepcost = chunksize(top(ar_ptr)); | |
f65fd747 UD |
3845 | |
3846 | (void)mutex_unlock(&ar_ptr->mutex); | |
3847 | } | |
3848 | ||
8a4b65b4 UD |
3849 | #if !defined(NO_THREADS) && MALLOC_DEBUG > 1 |
3850 | ||
3851 | /* Print the complete contents of a single heap to stderr. */ | |
3852 | ||
3853 | static void | |
3854 | #if __STD_C | |
3855 | dump_heap(heap_info *heap) | |
3856 | #else | |
3857 | dump_heap(heap) heap_info *heap; | |
3858 | #endif | |
3859 | { | |
3860 | char *ptr; | |
3861 | mchunkptr p; | |
3862 | ||
3863 | fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size); | |
3864 | ptr = (heap->ar_ptr != (arena*)(heap+1)) ? | |
3865 | (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena); | |
3866 | p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) & | |
3867 | ~MALLOC_ALIGN_MASK); | |
3868 | for(;;) { | |
3869 | fprintf(stderr, "chunk %p size %10lx", p, (long)p->size); | |
3870 | if(p == top(heap->ar_ptr)) { | |
3871 | fprintf(stderr, " (top)\n"); | |
3872 | break; | |
3873 | } else if(p->size == (0|PREV_INUSE)) { | |
3874 | fprintf(stderr, " (fence)\n"); | |
3875 | break; | |
3876 | } | |
3877 | fprintf(stderr, "\n"); | |
3878 | p = next_chunk(p); | |
3879 | } | |
3880 | } | |
3881 | ||
3882 | #endif | |
3883 | ||
f65fd747 UD |
3884 | \f |
3885 | ||
3886 | /* | |
3887 | ||
3888 | malloc_stats: | |
3889 | ||
6d52618b | 3890 | For all arenas separately and in total, prints on stderr the |
8a4b65b4 | 3891 | amount of space obtained from the system, and the current number |
f65fd747 UD |
3892 | of bytes allocated via malloc (or realloc, etc) but not yet |
3893 | freed. (Note that this is the number of bytes allocated, not the | |
3894 | number requested. It will be larger than the number requested | |
8a4b65b4 UD |
3895 | because of alignment and bookkeeping overhead.) When not compiled |
3896 | for multiple threads, the maximum amount of allocated memory | |
3897 | (which may be more than current if malloc_trim and/or munmap got | |
3898 | called) is also reported. When using mmap(), prints the maximum | |
3899 | number of simultaneous mmap regions used, too. | |
f65fd747 UD |
3900 | |
3901 | */ | |
3902 | ||
7e3be507 | 3903 | void mALLOC_STATs() |
f65fd747 | 3904 | { |
8a4b65b4 UD |
3905 | int i; |
3906 | arena *ar_ptr; | |
3907 | struct mallinfo mi; | |
3908 | unsigned int in_use_b = mmapped_mem, system_b = in_use_b; | |
3909 | #if THREAD_STATS | |
3910 | long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0; | |
3911 | #endif | |
3912 | ||
7e3be507 | 3913 | for(i=0, ar_ptr = &main_arena;; i++) { |
8a4b65b4 UD |
3914 | malloc_update_mallinfo(ar_ptr, &mi); |
3915 | fprintf(stderr, "Arena %d:\n", i); | |
3916 | fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena); | |
3917 | fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks); | |
3918 | system_b += mi.arena; | |
3919 | in_use_b += mi.uordblks; | |
3920 | #if THREAD_STATS | |
3921 | stat_lock_direct += ar_ptr->stat_lock_direct; | |
3922 | stat_lock_loop += ar_ptr->stat_lock_loop; | |
3923 | stat_lock_wait += ar_ptr->stat_lock_wait; | |
3924 | #endif | |
3925 | #if !defined(NO_THREADS) && MALLOC_DEBUG > 1 | |
3926 | if(ar_ptr != &main_arena) { | |
c131718c | 3927 | heap_info *heap; |
7e3be507 | 3928 | (void)mutex_lock(&ar_ptr->mutex); |
c131718c | 3929 | heap = heap_for_ptr(top(ar_ptr)); |
8a4b65b4 | 3930 | while(heap) { dump_heap(heap); heap = heap->prev; } |
7e3be507 | 3931 | (void)mutex_unlock(&ar_ptr->mutex); |
8a4b65b4 UD |
3932 | } |
3933 | #endif | |
7e3be507 UD |
3934 | ar_ptr = ar_ptr->next; |
3935 | if(ar_ptr == &main_arena) break; | |
8a4b65b4 | 3936 | } |
7799b7b3 | 3937 | #if HAVE_MMAP |
8a4b65b4 | 3938 | fprintf(stderr, "Total (incl. mmap):\n"); |
7799b7b3 UD |
3939 | #else |
3940 | fprintf(stderr, "Total:\n"); | |
3941 | #endif | |
8a4b65b4 UD |
3942 | fprintf(stderr, "system bytes = %10u\n", system_b); |
3943 | fprintf(stderr, "in use bytes = %10u\n", in_use_b); | |
3944 | #ifdef NO_THREADS | |
3945 | fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem); | |
3946 | #endif | |
f65fd747 | 3947 | #if HAVE_MMAP |
8a4b65b4 | 3948 | fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps); |
7799b7b3 | 3949 | fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem); |
f65fd747 UD |
3950 | #endif |
3951 | #if THREAD_STATS | |
8a4b65b4 | 3952 | fprintf(stderr, "heaps created = %10d\n", stat_n_heaps); |
f65fd747 UD |
3953 | fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct); |
3954 | fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop); | |
8a4b65b4 UD |
3955 | fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait); |
3956 | fprintf(stderr, "locked total = %10ld\n", | |
3957 | stat_lock_direct + stat_lock_loop + stat_lock_wait); | |
f65fd747 UD |
3958 | #endif |
3959 | } | |
3960 | ||
3961 | /* | |
3962 | mallinfo returns a copy of updated current mallinfo. | |
8a4b65b4 | 3963 | The information reported is for the arena last used by the thread. |
f65fd747 UD |
3964 | */ |
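/* Editor's sketch (not part of the original source): reading a few of
   the fields filled in by malloc_update_mallinfo() above. */
#if 0
#include <malloc.h>
#include <stdio.h>
int main(void)
{
  struct mallinfo mi = mallinfo();
  printf("system bytes = %d\n", mi.arena);
  printf("in use bytes = %d\n", mi.uordblks);
  printf("free bytes   = %d\n", mi.fordblks);
  return 0;
}
#endif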
3965 | ||
3966 | struct mallinfo mALLINFo() | |
3967 | { | |
8a4b65b4 UD |
3968 | struct mallinfo mi; |
3969 | Void_t *vptr = NULL; | |
3970 | ||
1228ed5c | 3971 | #ifndef NO_THREADS |
8a4b65b4 | 3972 | tsd_getspecific(arena_key, vptr); |
1228ed5c | 3973 | #endif |
8a4b65b4 UD |
3974 | malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi); |
3975 | return mi; | |
f65fd747 UD |
3976 | } |
3977 | ||
3978 | ||
3979 | \f | |
3980 | ||
3981 | /* | |
3982 | mallopt: | |
3983 | ||
3984 | mallopt is the general SVID/XPG interface to tunable parameters. | |
3985 | The format is to provide a (parameter-number, parameter-value) pair. | |
3986 | mallopt then sets the corresponding parameter to the argument | |
3987 | value if it can (i.e., so long as the value is meaningful), | |
3988 | and returns 1 if successful else 0. | |
3989 | ||
3990 | See descriptions of tunable parameters above. | |
3991 | ||
3992 | */ | |
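/* Editor's sketch (not part of the original source): typical tuning
   calls using the parameter numbers handled in the switch below. */
#if 0
#include <malloc.h>
int main(void)
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024); /* trim less eagerly */
  mallopt(M_TOP_PAD,         64 * 1024); /* keep extra slack on sbrk */
  mallopt(M_MMAP_THRESHOLD,  64 * 1024); /* mmap() smaller requests */
  return 0;
}
#endif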
3993 | ||
3994 | #if __STD_C | |
3995 | int mALLOPt(int param_number, int value) | |
3996 | #else | |
3997 | int mALLOPt(param_number, value) int param_number; int value; | |
3998 | #endif | |
3999 | { | |
4000 | switch(param_number) | |
4001 | { | |
4002 | case M_TRIM_THRESHOLD: | |
4003 | trim_threshold = value; return 1; | |
4004 | case M_TOP_PAD: | |
4005 | top_pad = value; return 1; | |
4006 | case M_MMAP_THRESHOLD: | |
4007 | #ifndef NO_THREADS | |
4008 | /* Forbid setting the threshold too high. */ | |
4009 | if((unsigned long)value > HEAP_MAX_SIZE/2) return 0; | |
4010 | #endif | |
4011 | mmap_threshold = value; return 1; | |
4012 | case M_MMAP_MAX: | |
4013 | #if HAVE_MMAP | |
4014 | n_mmaps_max = value; return 1; | |
4015 | #else | |
4016 | if (value != 0) return 0; else n_mmaps_max = value; return 1; | |
4017 | #endif | |
10dc2a90 UD |
4018 | case M_CHECK_ACTION: |
4019 | check_action = value; return 1; | |
f65fd747 UD |
4020 | |
4021 | default: | |
4022 | return 0; | |
4023 | } | |
4024 | } | |
10dc2a90 | 4025 | |
10dc2a90 UD |
4026 | \f |
4027 | ||
2f6d1f1b UD |
4028 | /* Get/set state: malloc_get_state() records the current state of all |
4029 | malloc variables (_except_ for the actual heap contents and `hook' | |
4030 | function pointers) in a system dependent, opaque data structure. | |
4031 | This data structure is dynamically allocated and can be free()d | |
4032 | after use. malloc_set_state() restores the state of all malloc | |
4033 | variables to the previously obtained state. This is especially | |
4034 | useful when using this malloc as part of a shared library, and when | |
4035 | the heap contents are saved/restored via some other method. The | |
4036 | primary example for this is GNU Emacs with its `dumping' procedure. | |
4037 | `Hook' function pointers are never saved or restored by these | |
4038 | functions. */ | |
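/* Editor's sketch (not part of the original source): a save/restore
   round trip in the style of the Emacs `dumping' procedure mentioned
   above; as stated, the opaque state structure can be free()d after
   use. */
#if 0
#include <malloc.h>
#include <stdlib.h>
int main(void)
{
  void *state = malloc_get_state();
  if (state == 0) return 1;      /* no memory for the state record */
  /* ... heap contents are saved and restored by some other means ... */
  if (malloc_set_state(state) != 0)
    return 1;                    /* bad magic or version mismatch */
  free(state);
  return 0;
}
#endif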
4039 | ||
4040 | #define MALLOC_STATE_MAGIC 0x444c4541l | |
4041 | #define MALLOC_STATE_VERSION (0*0x100l + 0l) /* major*0x100 + minor */ | |
4042 | ||
4043 | struct malloc_state { | |
4044 | long magic; | |
4045 | long version; | |
4046 | mbinptr av[NAV * 2 + 2]; | |
4047 | char* sbrk_base; | |
4048 | int sbrked_mem_bytes; | |
4049 | unsigned long trim_threshold; | |
4050 | unsigned long top_pad; | |
4051 | unsigned int n_mmaps_max; | |
4052 | unsigned long mmap_threshold; | |
4053 | int check_action; | |
4054 | unsigned long max_sbrked_mem; | |
4055 | unsigned long max_total_mem; | |
4056 | unsigned int n_mmaps; | |
4057 | unsigned int max_n_mmaps; | |
4058 | unsigned long mmapped_mem; | |
4059 | unsigned long max_mmapped_mem; | |
4060 | }; | |
4061 | ||
4062 | Void_t* | |
4063 | mALLOC_GET_STATe() | |
4064 | { | |
4065 | mchunkptr victim; | |
4066 | struct malloc_state* ms; | |
4067 | int i; | |
4068 | mbinptr b; | |
4069 | ||
4070 | ptmalloc_init(); | |
4071 | (void)mutex_lock(&main_arena.mutex); | |
4072 | victim = chunk_alloc(&main_arena, request2size(sizeof(*ms))); | |
4073 | if(!victim) { | |
4074 | (void)mutex_unlock(&main_arena.mutex); | |
4075 | return 0; | |
4076 | } | |
4077 | ms = (struct malloc_state*)chunk2mem(victim); | |
4078 | ms->magic = MALLOC_STATE_MAGIC; | |
4079 | ms->version = MALLOC_STATE_VERSION; | |
4080 | ms->av[0] = main_arena.av[0]; | |
4081 | ms->av[1] = main_arena.av[1]; | |
4082 | for(i=0; i<NAV; i++) { | |
4083 | b = bin_at(&main_arena, i); | |
4084 | if(first(b) == b) | |
4085 | ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */ | |
4086 | else { | |
4087 | ms->av[2*i+2] = first(b); | |
4088 | ms->av[2*i+3] = last(b); | |
4089 | } | |
4090 | } | |
4091 | ms->sbrk_base = sbrk_base; | |
4092 | ms->sbrked_mem_bytes = sbrked_mem; | |
4093 | ms->trim_threshold = trim_threshold; | |
4094 | ms->top_pad = top_pad; | |
4095 | ms->n_mmaps_max = n_mmaps_max; | |
4096 | ms->mmap_threshold = mmap_threshold; | |
4097 | ms->check_action = check_action; | |
4098 | ms->max_sbrked_mem = max_sbrked_mem; | |
4099 | #ifdef NO_THREADS | |
4100 | ms->max_total_mem = max_total_mem; | |
4101 | #else | |
4102 | ms->max_total_mem = 0; | |
4103 | #endif | |
4104 | ms->n_mmaps = n_mmaps; | |
4105 | ms->max_n_mmaps = max_n_mmaps; | |
4106 | ms->mmapped_mem = mmapped_mem; | |
4107 | ms->max_mmapped_mem = max_mmapped_mem; | |
4108 | (void)mutex_unlock(&main_arena.mutex); | |
4109 | return (Void_t*)ms; | |
4110 | } | |
4111 | ||
4112 | int | |
4113 | #if __STD_C | |
4114 | mALLOC_SET_STATe(Void_t* msptr) | |
4115 | #else | |
4116 | mALLOC_SET_STATe(msptr) Void_t* msptr; | |
4117 | #endif | |
4118 | { | |
4119 | struct malloc_state* ms = (struct malloc_state*)msptr; | |
4120 | int i; | |
4121 | mbinptr b; | |
4122 | ||
4123 | ptmalloc_init(); | |
4124 | if(ms->magic != MALLOC_STATE_MAGIC) return -1; | |
4125 | /* Must fail if the major version is too high. */ | |
4126 | if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2; | |
4127 | (void)mutex_lock(&main_arena.mutex); | |
4128 | main_arena.av[0] = ms->av[0]; | |
4129 | main_arena.av[1] = ms->av[1]; | |
4130 | for(i=0; i<NAV; i++) { | |
4131 | b = bin_at(&main_arena, i); | |
4132 | if(ms->av[2*i+2] == 0) | |
4133 | first(b) = last(b) = b; | |
4134 | else { | |
4135 | first(b) = ms->av[2*i+2]; | |
4136 | last(b) = ms->av[2*i+3]; | |
4137 | if(i > 0) { | |
4138 | /* Make sure the links to the `av'-bins in the heap are correct. */ | |
4139 | first(b)->bk = b; | |
4140 | last(b)->fd = b; | |
4141 | } | |
4142 | } | |
4143 | } | |
4144 | sbrk_base = ms->sbrk_base; | |
4145 | sbrked_mem = ms->sbrked_mem_bytes; | |
4146 | trim_threshold = ms->trim_threshold; | |
4147 | top_pad = ms->top_pad; | |
4148 | n_mmaps_max = ms->n_mmaps_max; | |
4149 | mmap_threshold = ms->mmap_threshold; | |
4150 | check_action = ms->check_action; | |
4151 | max_sbrked_mem = ms->max_sbrked_mem; | |
4152 | #ifdef NO_THREADS | |
4153 | max_total_mem = ms->max_total_mem; | |
4154 | #endif | |
4155 | n_mmaps = ms->n_mmaps; | |
4156 | max_n_mmaps = ms->max_n_mmaps; | |
4157 | mmapped_mem = ms->mmapped_mem; | |
4158 | max_mmapped_mem = ms->max_mmapped_mem; | |
4159 | /* add version-dependent code here */ | |
4160 | (void)mutex_unlock(&main_arena.mutex); | |
4161 | return 0; | |
4162 | } | |
4163 | ||
4164 | \f | |
4165 | ||
dfd2257a | 4166 | #if defined _LIBC || defined MALLOC_HOOKS |
10dc2a90 UD |
4167 | |
4168 | /* A simple, standard set of debugging hooks. Overhead is `only' one | |
4169 | byte per chunk; still this will catch most cases of double frees or | |
b22fc5f5 UD |
4170 | overruns. The goal here is to avoid obscure crashes due to invalid |
4171 | usage, unlike in the MALLOC_DEBUG code. */ | |
10dc2a90 | 4172 | |
c0e45674 | 4173 | #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF ) |
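/* Editor's sketch (not part of the original source): assuming these
   checking hooks are installed, a one-byte overrun clobbers the magic
   byte stored just past the request and is diagnosed (according to
   check_action) on the next free(). */
#if 0
#include <stdlib.h>
int main(void)
{
  char *p = malloc(10);
  if (p == 0) return 1;
  p[10] = 'x';   /* overrun: overwrites MAGICBYTE(p) behind the request */
  free(p);       /* free_check() fails to validate p: message or abort() */
  return 0;
}
#endif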
f65fd747 | 4174 | |
b22fc5f5 UD |
4175 | /* Instrument a chunk with overrun detector byte(s) and convert it |
4176 | into a user pointer with requested size sz. */ | |
4177 | ||
4178 | static Void_t* | |
4179 | #if __STD_C | |
4180 | chunk2mem_check(mchunkptr p, size_t sz) | |
4181 | #else | |
4182 | chunk2mem_check(p, sz) mchunkptr p; size_t sz; | |
4183 | #endif | |
4184 | { | |
4185 | unsigned char* m_ptr = (unsigned char*)chunk2mem(p); | |
4186 | size_t i; | |
4187 | ||
4188 | for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1); | |
4189 | i > sz; | |
4190 | i -= 0xFF) { | |
4191 | if(i-sz < 0x100) { | |
4192 | m_ptr[i] = (unsigned char)(i-sz); | |
4193 | break; | |
4194 | } | |
4195 | m_ptr[i] = 0xFF; | |
4196 | } | |
4197 | m_ptr[sz] = MAGICBYTE(p); | |
4198 | return (Void_t*)m_ptr; | |
4199 | } | |
4200 | ||
10dc2a90 | 4201 | /* Convert a pointer to be free()d or realloc()ed to a valid chunk |
b22fc5f5 | 4202 | pointer. If the provided pointer is not valid, return NULL. */ |
10dc2a90 UD |
4203 | |
4204 | static mchunkptr | |
dfd2257a | 4205 | internal_function |
10dc2a90 UD |
4206 | #if __STD_C |
4207 | mem2chunk_check(Void_t* mem) | |
4208 | #else | |
4209 | mem2chunk_check(mem) Void_t* mem; | |
f65fd747 | 4210 | #endif |
10dc2a90 UD |
4211 | { |
4212 | mchunkptr p; | |
b22fc5f5 UD |
4213 | INTERNAL_SIZE_T sz, c; |
4214 | unsigned char magic; | |
10dc2a90 UD |
4215 | |
4216 | p = mem2chunk(mem); | |
4217 | if(!aligned_OK(p)) return NULL; | |
4218 | if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) { | |
7e3be507 | 4219 | /* Must be a chunk in conventional heap memory. */ |
10dc2a90 UD |
4220 | if(chunk_is_mmapped(p) || |
4221 | ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) || | |
7e3be507 UD |
4222 | sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) || |
4223 | ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK || | |
4224 | (long)prev_chunk(p)<(long)sbrk_base || | |
4225 | next_chunk(prev_chunk(p))!=p) )) | |
4226 | return NULL; | |
b22fc5f5 UD |
4227 | magic = MAGICBYTE(p); |
4228 | for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) { | |
4229 | if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL; | |
4230 | } | |
4231 | ((unsigned char*)p)[sz] ^= 0xFF; | |
10dc2a90 UD |
4232 | } else { |
4233 | unsigned long offset, page_mask = malloc_getpagesize-1; | |
4234 | ||
7e3be507 | 4235 | /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two |
10dc2a90 UD |
4236 | alignment relative to the beginning of a page. Check this |
4237 | first. */ | |
4238 | offset = (unsigned long)mem & page_mask; | |
4239 | if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 && | |
7e3be507 UD |
4240 | offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 && |
4241 | offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 && | |
4242 | offset<0x2000) || | |
10dc2a90 UD |
4243 | !chunk_is_mmapped(p) || (p->size & PREV_INUSE) || |
4244 | ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) || | |
4245 | ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) ) | |
4246 | return NULL; | |
b22fc5f5 UD |
4247 | magic = MAGICBYTE(p); |
4248 | for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) { | |
4249 | if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL; | |
4250 | } | |
4251 | ((unsigned char*)p)[sz] ^= 0xFF; | |
10dc2a90 UD |
4252 | } |
4253 | return p; | |
4254 | } | |
4255 | ||
b22fc5f5 UD |
4256 | /* Check for corruption of the top chunk, and try to recover if |
4257 | necessary. */ | |
4258 | ||
ae9b308c UD |
4259 | static int |
4260 | #if __STD_C | |
4261 | top_check(void) | |
4262 | #else | |
b22fc5f5 | 4263 | top_check() |
ae9b308c | 4264 | #endif |
b22fc5f5 UD |
4265 | { |
4266 | mchunkptr t = top(&main_arena); | |
4267 | char* brk, * new_brk; | |
4268 | INTERNAL_SIZE_T front_misalign, sbrk_size; | |
4269 | unsigned long pagesz = malloc_getpagesize; | |
4270 | ||
4271 | if((char*)t + chunksize(t) == sbrk_base + sbrked_mem || | |
4272 | t == initial_top(&main_arena)) return 0; | |
4273 | ||
4274 | switch(check_action) { | |
4275 | case 1: | |
4276 | fprintf(stderr, "malloc: top chunk is corrupt\n"); | |
4277 | break; | |
4278 | case 2: | |
4279 | abort(); | |
4280 | } | |
4281 | /* Try to set up a new top chunk. */ | |
4282 | brk = MORECORE(0); | |
4283 | front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK; | |
4284 | if (front_misalign > 0) | |
4285 | front_misalign = MALLOC_ALIGNMENT - front_misalign; | |
4286 | sbrk_size = front_misalign + top_pad + MINSIZE; | |
4287 | sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1)); | |
4288 | new_brk = (char*)(MORECORE (sbrk_size)); | |
4289 | if (new_brk == (char*)(MORECORE_FAILURE)) return -1; | |
4290 | sbrked_mem = (new_brk - sbrk_base) + sbrk_size; | |
4291 | ||
4292 | top(&main_arena) = (mchunkptr)(brk + front_misalign); | |
4293 | set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE); | |
4294 | ||
4295 | return 0; | |
4296 | } | |
4297 | ||
10dc2a90 UD |
4298 | static Void_t* |
4299 | #if __STD_C | |
dfd2257a | 4300 | malloc_check(size_t sz, const Void_t *caller) |
10dc2a90 | 4301 | #else |
dfd2257a | 4302 | malloc_check(sz, caller) size_t sz; const Void_t *caller; |
a2b08ee5 | 4303 | #endif |
10dc2a90 UD |
4304 | { |
4305 | mchunkptr victim; | |
4306 | INTERNAL_SIZE_T nb = request2size(sz + 1); | |
4307 | ||
4308 | (void)mutex_lock(&main_arena.mutex); | |
b22fc5f5 | 4309 | victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL; |
10dc2a90 UD |
4310 | (void)mutex_unlock(&main_arena.mutex); |
4311 | if(!victim) return NULL; | |
b22fc5f5 | 4312 | return chunk2mem_check(victim, sz); |
10dc2a90 UD |
4313 | } |
4314 | ||
4315 | static void | |
4316 | #if __STD_C | |
dfd2257a | 4317 | free_check(Void_t* mem, const Void_t *caller) |
10dc2a90 | 4318 | #else |
dfd2257a | 4319 | free_check(mem, caller) Void_t* mem; const Void_t *caller; |
a2b08ee5 | 4320 | #endif |
10dc2a90 UD |
4321 | { |
4322 | mchunkptr p; | |
4323 | ||
4324 | if(!mem) return; | |
7e3be507 | 4325 | (void)mutex_lock(&main_arena.mutex); |
10dc2a90 UD |
4326 | p = mem2chunk_check(mem); |
4327 | if(!p) { | |
7e3be507 | 4328 | (void)mutex_unlock(&main_arena.mutex); |
10dc2a90 UD |
4329 | switch(check_action) { |
4330 | case 1: | |
4331 | fprintf(stderr, "free(): invalid pointer %lx!\n", (long)(mem)); | |
4332 | break; | |
4333 | case 2: | |
4334 | abort(); | |
4335 | } | |
4336 | return; | |
4337 | } | |
4338 | #if HAVE_MMAP | |
4339 | if (chunk_is_mmapped(p)) { | |
7e3be507 | 4340 | (void)mutex_unlock(&main_arena.mutex); |
10dc2a90 UD |
4341 | munmap_chunk(p); |
4342 | return; | |
4343 | } | |
4344 | #endif | |
7e3be507 UD |
4345 | #if 0 /* Erase freed memory. */ |
4346 | memset(mem, 0, chunksize(p) - (SIZE_SZ+1)); | |
4347 | #endif | |
10dc2a90 UD |
4348 | chunk_free(&main_arena, p); |
4349 | (void)mutex_unlock(&main_arena.mutex); | |
4350 | } | |
4351 | ||
4352 | static Void_t* | |
4353 | #if __STD_C | |
dfd2257a | 4354 | realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller) |
10dc2a90 | 4355 | #else |
dfd2257a UD |
4356 | realloc_check(oldmem, bytes, caller) |
4357 | Void_t* oldmem; size_t bytes; const Void_t *caller; | |
a2b08ee5 | 4358 | #endif |
10dc2a90 UD |
4359 | { |
4360 | mchunkptr oldp, newp; | |
4361 | INTERNAL_SIZE_T nb, oldsize; | |
4362 | ||
a2b08ee5 | 4363 | if (oldmem == 0) return malloc_check(bytes, NULL); |
7e3be507 | 4364 | (void)mutex_lock(&main_arena.mutex); |
10dc2a90 UD |
4365 | oldp = mem2chunk_check(oldmem); |
4366 | if(!oldp) { | |
7e3be507 | 4367 | (void)mutex_unlock(&main_arena.mutex); |
10dc2a90 UD |
4368 | switch(check_action) { |
4369 | case 1: | |
4370 | fprintf(stderr, "realloc(): invalid pointer %lx!\n", (long)(oldmem)); | |
4371 | break; | |
4372 | case 2: | |
4373 | abort(); | |
4374 | } | |
a2b08ee5 | 4375 | return malloc_check(bytes, NULL); |
10dc2a90 UD |
4376 | } |
4377 | oldsize = chunksize(oldp); | |
4378 | ||
  nb = request2size(bytes+1);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp)) {
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(!newp) {
#endif
      /* Note the extra SIZE_SZ overhead. */
      if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
      else {
        /* Must alloc, copy, free. */
        newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
        if (newp) {
          MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
          munmap_chunk(oldp);
        }
      }
#if HAVE_MREMAP
    }
#endif
  } else {
#endif /* HAVE_MMAP */
    newp = (top_check() >= 0) ?
      chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
#if 0 /* Erase freed memory. */
    nb = chunksize(newp);
    if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
      memset((char*)oldmem + 2*sizeof(mbinptr), 0,
             oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
    } else if(nb > oldsize+SIZE_SZ) {
      memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
    }
#endif
#if HAVE_MMAP
  }
#endif
  (void)mutex_unlock(&main_arena.mutex);

  if(!newp) return NULL;
  return chunk2mem_check(newp, bytes);
}

static Void_t*
#if __STD_C
memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
#else
memalign_check(alignment, bytes, caller)
     size_t alignment; size_t bytes; const Void_t *caller;
#endif
{
  INTERNAL_SIZE_T nb;
  mchunkptr p;

  if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
  if (alignment < MINSIZE) alignment = MINSIZE;

  nb = request2size(bytes+1);
  (void)mutex_lock(&main_arena.mutex);
  p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
  (void)mutex_unlock(&main_arena.mutex);
  if(!p) return NULL;
  return chunk2mem_check(p, bytes);
}
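
/* Usage sketch (illustrative): through this wrapper, memalign keeps its
   usual contract of returning a block aligned to the requested power of
   two, e.g.

     #include <malloc.h>

     void *p = memalign(64, 100);
     ... when p != NULL, (unsigned long)p % 64 == 0 holds ...
     free(p);
*/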

#ifndef NO_THREADS

/* The following hooks are used when the global initialization in
   ptmalloc_init() hasn't completed yet. */

static Void_t*
#if __STD_C
malloc_starter(size_t sz, const Void_t *caller)
#else
malloc_starter(sz, caller) size_t sz; const Void_t *caller;
#endif
{
  mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));

  return victim ? chunk2mem(victim) : 0;
}

static void
#if __STD_C
free_starter(Void_t* mem, const Void_t *caller)
#else
free_starter(mem, caller) Void_t* mem; const Void_t *caller;
#endif
{
  mchunkptr p;

  if(!mem) return;
  p = mem2chunk(mem);
#if HAVE_MMAP
  if (chunk_is_mmapped(p)) {
    munmap_chunk(p);
    return;
  }
#endif
  chunk_free(&main_arena, p);
}
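
/* Sketch (illustrative, with hypothetical variable handling): the
   starter hooks are assumed to be swapped in around the critical part
   of ptmalloc_init(), following the usual save/replace/restore pattern:

     save_hook = __malloc_hook;        (save the current hook)
     __malloc_hook = malloc_starter;   (bootstrap path without locking)
     ... initialization that may itself call malloc() ...
     __malloc_hook = save_hook;        (resume normal operation)

   Skipping the locking in malloc_starter/free_starter is safe because
   no second thread can be running before initialization completes. */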

/* The following hooks are used while the `atfork' handling mechanism
   is active. */

static Void_t*
#if __STD_C
malloc_atfork (size_t sz, const Void_t *caller)
#else
malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
#endif
{
  Void_t *vptr = NULL;

  tsd_getspecific(arena_key, vptr);
  if(!vptr) {
    mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
    return victim ? chunk2mem(victim) : 0;
  } else {
    /* Suspend the thread until the `atfork' handlers have completed.
       By that time, the hooks will have been reset as well, so that
       mALLOc() can be used again. */
    (void)mutex_lock(&list_lock);
    (void)mutex_unlock(&list_lock);
    return mALLOc(sz);
  }
}

static void
#if __STD_C
free_atfork(Void_t* mem, const Void_t *caller)
#else
free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
#endif
{
  Void_t *vptr = NULL;
  arena *ar_ptr;
  mchunkptr p;                          /* chunk corresponding to mem */

  if (mem == 0)                         /* free(0) has no effect */
    return;

  p = mem2chunk(mem);

#if HAVE_MMAP
  if (chunk_is_mmapped(p))              /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  ar_ptr = arena_for_ptr(p);
  tsd_getspecific(arena_key, vptr);
  if(vptr)
    (void)mutex_lock(&ar_ptr->mutex);
  chunk_free(ar_ptr, p);
  if(vptr)
    (void)mutex_unlock(&ar_ptr->mutex);
}
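
/* Sketch of the surrounding protocol (illustrative; the actual fork
   handlers are defined elsewhere in this file): before fork(), a
   prepare handler is assumed to take list_lock and every arena mutex
   and to point the hooks at malloc_atfork/free_atfork; after fork(),
   parent and child handlers restore the hooks and release the locks.
   Any other thread calling malloc() meanwhile is parked in
   malloc_atfork() on list_lock until the handlers have finished. */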

#endif /* !NO_THREADS */

#endif /* defined _LIBC || defined MALLOC_HOOKS */

#ifdef _LIBC
weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)

weak_alias (__malloc_stats, malloc_stats)
weak_alias (__malloc_usable_size, malloc_usable_size)
weak_alias (__malloc_trim, malloc_trim)
weak_alias (__malloc_get_state, malloc_get_state)
weak_alias (__malloc_set_state, malloc_set_state)
#endif
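
/* For reference: weak_alias is assumed to expand to the GCC weak-alias
   attribute, roughly

     extern __typeof (name) aliasname
       __attribute__ ((weak, alias (#name)));

   so that e.g. free and __free resolve to __libc_free unless another
   object file provides a strong definition of its own. */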

/*

History:

    V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
      * Added malloc_get/set_state() (mainly for use in GNU emacs),
        using interface from Marcus Daniels
      * All parameters are now adjustable via environment variables

    V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
      * Added debugging hooks
      * Fixed possible deadlock in realloc() when out of memory
      * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.

    V2.6.4-pt Wed Dec  4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
      * Very minor updates from the released 2.6.4 version.
      * Trimmed include file down to exported data structures.
      * Changes from H.J. Lu for glibc-2.0.

    V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
      * Many changes for multiple threads
      * Introduced arenas and heaps

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/