/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain. Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version pre-2.8.4 Mon Nov 27 11:22:37 2006  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms. You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  You don't really need this .h file unless you call functions not
  defined in your system include files. The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters. If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation: 4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment: 8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
                                        8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
                          8-byte ptrs: 32 bytes (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc. This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors. The routines additionally detect most improper frees
       and reallocs. All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means. This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space. These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on. This may further improve
       security at the expense of time and space overhead. (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR. In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck. It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions. If you are using malloc in a concurrent
       program, consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived
       from versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory. On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled. On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable. Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.) And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported. (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void* tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI API's (eg;
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator. You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc(). If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks. It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and on x86 using gcc or MSC
  If true, uses custom spin locks for locking. This is currently
  supported only for x86 platforms using gcc or recent MS compilers.
  Otherwise, posix locks or win32 critical sections are used.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks. On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again. Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort. Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks. The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms. The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc is unable to
  return memory because none is available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory. See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems. It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t"). It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS      default: 1 (true) if HAVE_MORECORE
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses. This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves time
  and possibly wasted space it would take to discover this though.

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL     default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of segments that are contiguous, and selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it. If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units. This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H           default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY      default: page size if MORECORE_CONTIGUOUS,
                                  system_info.dwAllocationGranularity in WIN32,
                                  otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system. On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities. You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow. The value must be at least one
  page and must be a power of two. Setting to 0 causes initialization
  to either page size or win32 region size. (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD   default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free(). Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE. Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory. As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory. Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect. To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD   default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap. (If enough
  normal freed space already exists it is used instead.) Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations). Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low. Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them. However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks. The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems. The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20804
#endif /* DLMALLOC_VERSION */

#if defined(linux)
#define _GNU_SOURCE 1
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x403
#endif
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0  /* define to a value */
#else
#define ONLY_MSPACES 1
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif /* USE_LOCKS */
#ifndef USE_SPIN_LOCKS
#if USE_LOCKS && (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
#define USE_SPIN_LOCKS 1
#else
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS && ... */
#endif /* USE_SPIN_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else /* linux */
#define HAVE_MREMAP 0
#endif /* linux */
#endif /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION errno = ENOMEM;
#endif /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif /* ONLY_MSPACES */
#endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif /* MORECORE_CONTIGUOUS */
#endif /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif /* MORECORE_CONTIGUOUS */
#endif /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif /* MORECORE_CANNOT_TRIM */
#endif /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif /* MALLINFO_FIELD_TYPE */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options. SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h. None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo(). The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc. These fields are
  instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else a compliant version is
  declared below. These must be precisely the same for mallinfo() to
  work. The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define them as unsigned long. If your system defines the fields using
  a type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifdef __MINGW64_VERSION_MAJOR
#undef FORCEINLINE
#endif
#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.) Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible. realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair. mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0. To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h. None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details. Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system. The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption. Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements. (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools. It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      int n = read_number_of_nodes_needed();
      int i;
      if (n <= 0) return 0;
      pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
      if (pool == 0) die();
      // organize into a linked list...
      struct Node* first = pool[0];
      for (i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool);  // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array. It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time. For example:

    struct Head { ... };
    struct Foot { ... };

    void send_message(char* msg) {
      int msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char* body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
int dlmalloc_trim(size_t);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);


1115 | #if MSPACES | |
1116 | ||
1117 | /* | |
1118 | mspace is an opaque type representing an independent | |
1119 | region of space that supports mspace_malloc, etc. | |
1120 | */ | |
1121 | typedef void* mspace; | |
1122 | ||
1123 | /* | |
1124 | create_mspace creates and returns a new independent space with the | |
1125 | given initial capacity, or, if 0, the default granularity size. It | |
1126 | returns null if there is no system memory available to create the | |
1127 | space. If argument locked is non-zero, the space uses a separate | |
1128 | lock to control access. The capacity of the space will grow | |
1129 | dynamically as needed to service mspace_malloc requests. You can | |
1130 | control the sizes of incremental increases of this space by | |
1131 | compiling with a different DEFAULT_GRANULARITY or dynamically | |
1132 | setting with mallopt(M_GRANULARITY, value). | |
1133 | */ | |
1134 | mspace create_mspace(size_t capacity, int locked); | |
1135 | ||
1136 | /* | |
1137 | destroy_mspace destroys the given space, and attempts to return all | |
1138 | of its memory back to the system, returning the total number of | |
1139 | bytes freed. After destruction, the results of access to all memory | |
1140 | used by the space become undefined. | |
1141 | */ | |
1142 | size_t destroy_mspace(mspace msp); | |
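/*
  A minimal lifecycle sketch under default settings (assumes only the
  declarations above): create a private space, allocate from it, and
  tear the whole space down at once. Passing capacity 0 requests the
  default granularity; locked 0 creates an unlocked space.

    mspace arena = create_mspace(0, 0);
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      mspace_free(arena, p);           (optional; destroy_mspace reclaims all)
      size_t released = destroy_mspace(arena);
    }
*/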
1143 | ||
1144 | /* | |
1145 | create_mspace_with_base uses the memory supplied as the initial base | |
1146 | of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this | |
1147 | space is used for bookkeeping, so the capacity must be at least this | |
1148 | large. (Otherwise 0 is returned.) When this initial space is | |
1149 | exhausted, additional memory will be obtained from the system. | |
1150 | Destroying this space will deallocate all additionally allocated | |
1151 | space (if possible) but not the initial base. | |
1152 | */ | |
1153 | mspace create_mspace_with_base(void* base, size_t capacity, int locked); | |
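/*
  A sketch of supplying your own initial arena, e.g. a static buffer.
  The buffer must cover the bookkeeping overhead noted above (less
  than 128*sizeof(size_t) bytes) or create_mspace_with_base returns 0.
  The 64k buffer size here is arbitrary.

    static char arena_buf[64 * 1024];
    mspace ms = create_mspace_with_base(arena_buf, sizeof(arena_buf), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 100);
      destroy_mspace(ms);              (releases extra system memory, not arena_buf)
    }
*/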
1154 | ||
1155 | /* | |
1156 | mspace_mmap_large_chunks controls whether requests for large chunks | |
1157 | are allocated in their own mmapped regions, separate from others in | |
1158 | this mspace. By default this is enabled, which reduces | |
1159 | fragmentation. However, such chunks are not necessarily released to | |
1160 | the system upon destroy_mspace. Disabling by setting to false may | |
1161 | increase fragmentation, but avoids leakage when relying on | |
1162 | destroy_mspace to release all memory allocated using this space. | |
1163 | */ | |
1164 | int mspace_mmap_large_chunks(mspace msp, int enable); | |
1165 | ||
1166 | ||
1167 | /* | |
1168 | mspace_malloc behaves as malloc, but operates within | |
1169 | the given space. | |
1170 | */ | |
1171 | void* mspace_malloc(mspace msp, size_t bytes); | |
1172 | ||
1173 | /* | |
1174 | mspace_free behaves as free, but operates within | |
1175 | the given space. | |
1176 | ||
1177 | If compiled with FOOTERS==1, mspace_free is not actually needed. | |
1178 | free may be called instead of mspace_free because freed chunks from | |
1179 | any space are handled by their originating spaces. | |
1180 | */ | |
1181 | void mspace_free(mspace msp, void* mem); | |
1182 | ||
1183 | /* | |
1184 | mspace_realloc behaves as realloc, but operates within | |
1185 | the given space. | |
1186 | ||
1187 | If compiled with FOOTERS==1, mspace_realloc is not actually | |
1188 | needed. realloc may be called instead of mspace_realloc because | |
1189 | realloced chunks from any space are handled by their originating | |
1190 | spaces. | |
1191 | */ | |
1192 | void* mspace_realloc(mspace msp, void* mem, size_t newsize); | |
1193 | ||
1194 | /* | |
1195 | mspace_calloc behaves as calloc, but operates within | |
1196 | the given space. | |
1197 | */ | |
1198 | void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size); | |
1199 | ||
1200 | /* | |
1201 | mspace_memalign behaves as memalign, but operates within | |
1202 | the given space. | |
1203 | */ | |
1204 | void* mspace_memalign(mspace msp, size_t alignment, size_t bytes); | |
1205 | ||
1206 | /* | |
1207 | mspace_independent_calloc behaves as independent_calloc, but | |
1208 | operates within the given space. | |
1209 | */ | |
1210 | void** mspace_independent_calloc(mspace msp, size_t n_elements, | |
1211 | size_t elem_size, void* chunks[]); | |
1212 | ||
1213 | /* | |
1214 | mspace_independent_comalloc behaves as independent_comalloc, but | |
1215 | operates within the given space. | |
1216 | */ | |
1217 | void** mspace_independent_comalloc(mspace msp, size_t n_elements, | |
1218 | size_t sizes[], void* chunks[]); | |
1219 | ||
1220 | /* | |
1221 | mspace_footprint() returns the number of bytes obtained from the | |
1222 | system for this space. | |
1223 | */ | |
1224 | size_t mspace_footprint(mspace msp); | |
1225 | ||
1226 | /* | |
1227 | mspace_max_footprint() returns the peak number of bytes obtained from the | |
1228 | system for this space. | |
1229 | */ | |
1230 | size_t mspace_max_footprint(mspace msp); | |
1231 | ||
1232 | ||
1233 | #if !NO_MALLINFO | |
1234 | /* | |
1235 | mspace_mallinfo behaves as mallinfo, but reports properties of | |
1236 | the given space. | |
1237 | */ | |
1238 | struct mallinfo mspace_mallinfo(mspace msp); | |
1239 | #endif /* NO_MALLINFO */ | |
1240 | ||
1241 | /* | |
1242 | mspace_usable_size(void* mem) behaves the same as malloc_usable_size. | |
1243 | */ | |
1244 | size_t mspace_usable_size(void* mem); | |
1245 | ||
1246 | /* | |
1247 | mspace_malloc_stats behaves as malloc_stats, but reports | |
1248 | properties of the given space. | |
1249 | */ | |
1250 | void mspace_malloc_stats(mspace msp); | |
1251 | ||
1252 | /* | |
1253 | mspace_trim behaves as malloc_trim, but | |
1254 | operates within the given space. | |
1255 | */ | |
1256 | int mspace_trim(mspace msp, size_t pad); | |
1257 | ||
1258 | /* | |
1259 | An alias for mallopt. | |
1260 | */ | |
1261 | int mspace_mallopt(int, int); | |
1262 | ||
1263 | #endif /* MSPACES */ | |
1264 | ||
1265 | #ifdef __cplusplus | |
1266 | }; /* end of extern "C" */ | |
1267 | #endif /* __cplusplus */ | |
1268 | ||
1269 | /* | |
1270 | ======================================================================== | |
1271 | To make a fully customizable malloc.h header file, cut everything | |
1272 | above this line, put into file malloc.h, edit to suit, and #include it | |
1273 | on the next line, as well as in programs that use this malloc. | |
1274 | ======================================================================== | |
1275 | */ | |
1276 | ||
1277 | /* #include "malloc.h" */ | |
1278 | ||
1279 | /*------------------------------ internal #includes ---------------------- */ | |
1280 | ||
1281 | #ifdef WIN32 | |
1282 | #ifndef __GNUC__ | |
1283 | #pragma warning( disable : 4146 ) /* no "unsigned" warnings */ | |
1284 | #endif | |
1285 | #endif /* WIN32 */ | |
1286 | ||
1287 | #include <stdio.h> /* for printing in malloc_stats */ | |
1288 | ||
1289 | #ifndef LACKS_ERRNO_H | |
1290 | #include <errno.h> /* for MALLOC_FAILURE_ACTION */ | |
1291 | #endif /* LACKS_ERRNO_H */ | |
1292 | #if FOOTERS | |
1293 | #include <time.h> /* for magic initialization */ | |
1294 | #endif /* FOOTERS */ | |
1295 | #ifndef LACKS_STDLIB_H | |
1296 | #include <stdlib.h> /* for abort() */ | |
1297 | #endif /* LACKS_STDLIB_H */ | |
1298 | #ifdef DEBUG | |
1299 | #if ABORT_ON_ASSERT_FAILURE | |
1300 | #define assert(x) if(!(x)) ABORT | |
1301 | #else /* ABORT_ON_ASSERT_FAILURE */ | |
1302 | #include <assert.h> | |
1303 | #endif /* ABORT_ON_ASSERT_FAILURE */ | |
1304 | #else /* DEBUG */ | |
1305 | #ifndef assert | |
1306 | #define assert(x) | |
1307 | #endif | |
1308 | #define DEBUG 0 | |
1309 | #endif /* DEBUG */ | |
1310 | #ifndef LACKS_STRING_H | |
1311 | #include <string.h> /* for memset etc */ | |
1312 | #endif /* LACKS_STRING_H */ | |
1313 | #if USE_BUILTIN_FFS | |
1314 | #ifndef LACKS_STRINGS_H | |
1315 | #include <strings.h> /* for ffs */ | |
1316 | #endif /* LACKS_STRINGS_H */ | |
1317 | #endif /* USE_BUILTIN_FFS */ | |
1318 | #if HAVE_MMAP | |
1319 | #ifndef LACKS_SYS_MMAN_H | |
1320 | #include <sys/mman.h> /* for mmap */ | |
1321 | #endif /* LACKS_SYS_MMAN_H */ | |
1322 | #ifndef LACKS_FCNTL_H | |
1323 | #include <fcntl.h> | |
1324 | #endif /* LACKS_FCNTL_H */ | |
1325 | #endif /* HAVE_MMAP */ | |
1326 | #ifndef LACKS_UNISTD_H | |
1327 | #include <unistd.h> /* for sbrk, sysconf */ | |
1328 | #else /* LACKS_UNISTD_H */ | |
1329 | #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) | |
1330 | extern void* sbrk(ptrdiff_t); | |
1331 | #endif /* FreeBSD etc */ | |
1332 | #endif /* LACKS_UNISTD_H */ | |
1333 | ||
1334 | /* Declarations for locking */ | |
1335 | #if USE_LOCKS | |
1336 | #ifndef WIN32 | |
1337 | #include <pthread.h> | |
1338 | #if defined (__SVR4) && defined (__sun) /* solaris */ | |
1339 | #include <thread.h> | |
1340 | #endif /* solaris */ | |
1341 | #else | |
1342 | #ifndef _M_AMD64 | |
1343 | /* These are already defined on AMD64 builds */ | |
1344 | #ifdef __cplusplus | |
1345 | extern "C" { | |
1346 | #endif /* __cplusplus */ | |
1347 | #ifndef __MINGW32__ | |
1348 | LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp); | |
1349 | LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value); | |
1350 | #endif | |
1351 | #ifdef __cplusplus | |
1352 | } | |
1353 | #endif /* __cplusplus */ | |
1354 | #endif /* _M_AMD64 */ | |
1355 | #ifndef __MINGW32__ | |
1356 | #pragma intrinsic (_InterlockedCompareExchange) | |
1357 | #pragma intrinsic (_InterlockedExchange) | |
1358 | #else | |
1359 | /* --[ start GCC compatibility ]---------------------------------------------- | |
1360 | * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic | |
1361 | * Microsoft Visual C++ functions. Originally developed for the ReactOS | |
1362 | * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>) | |
1363 | * projects. | |
1364 | * | |
1365 | * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com> | |
1366 | * | |
1367 | * Permission is hereby granted, free of charge, to any person obtaining a | |
1368 | * copy of this software and associated documentation files (the "Software"), | |
1369 | * to deal in the Software without restriction, including without limitation | |
1370 | * the rights to use, copy, modify, merge, publish, distribute, sublicense, | |
1371 | * and/or sell copies of the Software, and to permit persons to whom the | |
1372 | * Software is furnished to do so, subject to the following conditions: | |
1373 | * | |
1374 | * The above copyright notice and this permission notice shall be included in | |
1375 | * all copies or substantial portions of the Software. | |
1376 | * | |
1377 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | |
1378 | * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | |
1379 | * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | |
1380 | * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | |
1381 | * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING | |
1382 | * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER | |
1383 | * DEALINGS IN THE SOFTWARE. | |
1384 | */ | |
1385 | ||
1386 | /*** Atomic operations ***/ | |
1387 | #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100 | |
1388 | #undef _ReadWriteBarrier | |
1389 | #define _ReadWriteBarrier() __sync_synchronize() | |
1390 | #else | |
1391 | static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value) | |
1392 | { | |
1393 | long res; | |
1394 | __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value)); | |
1395 | return res; | |
1396 | } | |
1397 | static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void) | |
1398 | { | |
1399 | __asm__ __volatile__("" : : : "memory"); | |
1400 | } | |
1401 | #define _ReadWriteBarrier() _MemoryBarrier() | |
1402 | #endif | |
1403 | /* BUGBUG: GCC only supports full barriers */ | |
1404 | static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value) | |
1405 | { | |
1406 | /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */ | |
1407 | _ReadWriteBarrier(); | |
1408 | return __sync_lock_test_and_set(Target, Value); | |
1409 | } | |
1410 | /* --[ end GCC compatibility ]---------------------------------------------- */ | |
1411 | #endif | |
1412 | #define interlockedcompareexchange _InterlockedCompareExchange | |
1413 | #define interlockedexchange _InterlockedExchange | |
1414 | #endif /* Win32 */ | |
1415 | #endif /* USE_LOCKS */ | |
1416 | ||
1417 | /* Declarations for bit scanning on win32 */ | |
1418 | #if defined(_MSC_VER) && _MSC_VER>=1300 | |
1419 | #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */ | |
1420 | #ifdef __cplusplus | |
1421 | extern "C" { | |
1422 | #endif /* __cplusplus */ | |
1423 | unsigned char _BitScanForward(unsigned long *index, unsigned long mask); | |
1424 | unsigned char _BitScanReverse(unsigned long *index, unsigned long mask); | |
1425 | #ifdef __cplusplus | |
1426 | } | |
1427 | #endif /* __cplusplus */ | |
1428 | ||
1429 | #define BitScanForward _BitScanForward | |
1430 | #define BitScanReverse _BitScanReverse | |
1431 | #pragma intrinsic(_BitScanForward) | |
1432 | #pragma intrinsic(_BitScanReverse) | |
1433 | #endif /* BitScanForward */ | |
1434 | #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */ | |
1435 | ||
1436 | #ifndef WIN32 | |
1437 | #ifndef malloc_getpagesize | |
1438 | # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ | |
1439 | # ifndef _SC_PAGE_SIZE | |
1440 | # define _SC_PAGE_SIZE _SC_PAGESIZE | |
1441 | # endif | |
1442 | # endif | |
1443 | # ifdef _SC_PAGE_SIZE | |
1444 | # define malloc_getpagesize sysconf(_SC_PAGE_SIZE) | |
1445 | # else | |
1446 | # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) | |
1447 | extern size_t getpagesize(); | |
1448 | # define malloc_getpagesize getpagesize() | |
1449 | # else | |
1450 | # ifdef WIN32 /* use supplied emulation of getpagesize */ | |
1451 | # define malloc_getpagesize getpagesize() | |
1452 | # else | |
1453 | # ifndef LACKS_SYS_PARAM_H | |
1454 | # include <sys/param.h> | |
1455 | # endif | |
1456 | # ifdef EXEC_PAGESIZE | |
1457 | # define malloc_getpagesize EXEC_PAGESIZE | |
1458 | # else | |
1459 | # ifdef NBPG | |
1460 | # ifndef CLSIZE | |
1461 | # define malloc_getpagesize NBPG | |
1462 | # else | |
1463 | # define malloc_getpagesize (NBPG * CLSIZE) | |
1464 | # endif | |
1465 | # else | |
1466 | # ifdef NBPC | |
1467 | # define malloc_getpagesize NBPC | |
1468 | # else | |
1469 | # ifdef PAGESIZE | |
1470 | # define malloc_getpagesize PAGESIZE | |
1471 | # else /* just guess */ | |
1472 | # define malloc_getpagesize ((size_t)4096U) | |
1473 | # endif | |
1474 | # endif | |
1475 | # endif | |
1476 | # endif | |
1477 | # endif | |
1478 | # endif | |
1479 | # endif | |
1480 | #endif | |
1481 | #endif | |
1482 | ||
1483 | ||
1484 | ||
1485 | /* ------------------- size_t and alignment properties -------------------- */ | |
1486 | ||
1487 | /* The byte and bit size of a size_t */ | |
1488 | #define SIZE_T_SIZE (sizeof(size_t)) | |
1489 | #define SIZE_T_BITSIZE (sizeof(size_t) << 3) | |
1490 | ||
1491 | /* Some constants coerced to size_t */ | |
1492 | /* Annoying but necessary to avoid errors on some platforms */ | |
1493 | #define SIZE_T_ZERO ((size_t)0) | |
1494 | #define SIZE_T_ONE ((size_t)1) | |
1495 | #define SIZE_T_TWO ((size_t)2) | |
1496 | #define SIZE_T_FOUR ((size_t)4) | |
1497 | #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1) | |
1498 | #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2) | |
1499 | #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES) | |
1500 | #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U) | |
1501 | ||
1502 | /* The bit mask value corresponding to MALLOC_ALIGNMENT */ | |
1503 | #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE) | |
1504 | ||
1505 | /* True if address a has acceptable alignment */ | |
1506 | #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0) | |
1507 | ||
1508 | /* the number of bytes to offset an address to align it */ | |
1509 | #define align_offset(A)\ | |
1510 | ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\ | |
1511 | ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK)) | |
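/*
  Worked example with the default MALLOC_ALIGNMENT of 8 (so
  CHUNK_ALIGN_MASK == 7): for an address A whose low bits are 0x0c,
  (size_t)A & 7 == 4, so align_offset(A) == (8 - 4) & 7 == 4, and
  A + 4 is the next 8-byte-aligned address. For an already aligned A,
  the first branch of the macro yields 0.
*/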
1512 | ||
1513 | /* -------------------------- MMAP preliminaries ------------------------- */ | |
1514 | ||
1515 | /* | |
1516 | If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and | |
1517 | checks to fail so compiler optimizer can delete code rather than | |
1518 | using so many "#if"s. | |
1519 | */ | |
1520 | ||
1521 | ||
1522 | /* MORECORE and MMAP must return MFAIL on failure */ | |
1523 | #define MFAIL ((void*)(MAX_SIZE_T)) | |
1524 | #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */ | |
1525 | ||
1526 | #if HAVE_MMAP | |
1527 | ||
1528 | #ifndef WIN32 | |
1529 | #define MUNMAP_DEFAULT(a, s) munmap((a), (s)) | |
1530 | #define MMAP_PROT (PROT_READ|PROT_WRITE) | |
1531 | #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) | |
1532 | #define MAP_ANONYMOUS MAP_ANON | |
1533 | #endif /* MAP_ANON */ | |
1534 | #ifdef MAP_ANONYMOUS | |
1535 | #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS) | |
1536 | #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0) | |
1537 | #else /* MAP_ANONYMOUS */ | |
1538 | /* | |
1539 | Nearly all versions of mmap support MAP_ANONYMOUS, so the following | |
1540 | is unlikely to be needed, but is supplied just in case. | |
1541 | */ | |
1542 | #define MMAP_FLAGS (MAP_PRIVATE) | |
1543 | static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ | |
1544 | #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \ | |
1545 | (dev_zero_fd = open("/dev/zero", O_RDWR), \ | |
1546 | mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \ | |
1547 | mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) | |
1548 | #endif /* MAP_ANONYMOUS */ | |
1549 | ||
1550 | #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s) | |
1551 | ||
1552 | #else /* WIN32 */ | |
1553 | ||
1554 | /* Win32 MMAP via VirtualAlloc */ | |
1555 | static FORCEINLINE void* win32mmap(size_t size) { | |
1556 | void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE); | |
1557 | return (ptr != 0)? ptr: MFAIL; | |
1558 | } | |
1559 | ||
1560 | /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */ | |
1561 | static FORCEINLINE void* win32direct_mmap(size_t size) { | |
1562 | void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN, | |
1563 | PAGE_READWRITE); | |
1564 | return (ptr != 0)? ptr: MFAIL; | |
1565 | } | |
1566 | ||
1567 | /* This function supports releasing coalesced segments */ | |
1568 | static FORCEINLINE int win32munmap(void* ptr, size_t size) { | |
1569 | MEMORY_BASIC_INFORMATION minfo; | |
1570 | char* cptr = (char*)ptr; | |
1571 | while (size) { | |
1572 | if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0) | |
1573 | return -1; | |
1574 | if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr || | |
1575 | minfo.State != MEM_COMMIT || minfo.RegionSize > size) | |
1576 | return -1; | |
1577 | if (VirtualFree(cptr, 0, MEM_RELEASE) == 0) | |
1578 | return -1; | |
1579 | cptr += minfo.RegionSize; | |
1580 | size -= minfo.RegionSize; | |
1581 | } | |
1582 | return 0; | |
1583 | } | |
1584 | ||
1585 | #define MMAP_DEFAULT(s) win32mmap(s) | |
1586 | #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s)) | |
1587 | #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s) | |
1588 | #endif /* WIN32 */ | |
1589 | #endif /* HAVE_MMAP */ | |
1590 | ||
1591 | #if HAVE_MREMAP | |
1592 | #ifndef WIN32 | |
1593 | #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv)) | |
1594 | #endif /* WIN32 */ | |
1595 | #endif /* HAVE_MREMAP */ | |
1596 | ||
1597 | ||
1598 | /** | |
1599 | * Define CALL_MORECORE | |
1600 | */ | |
1601 | #if HAVE_MORECORE | |
1602 | #ifdef MORECORE | |
1603 | #define CALL_MORECORE(S) MORECORE(S) | |
1604 | #else /* MORECORE */ | |
1605 | #define CALL_MORECORE(S) MORECORE_DEFAULT(S) | |
1606 | #endif /* MORECORE */ | |
1607 | #else /* HAVE_MORECORE */ | |
1608 | #define CALL_MORECORE(S) MFAIL | |
1609 | #endif /* HAVE_MORECORE */ | |
1610 | ||
1611 | /** | |
1612 | * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP | |
1613 | */ | |
1614 | #if HAVE_MMAP | |
1615 | #define IS_MMAPPED_BIT (SIZE_T_ONE) | |
1616 | #define USE_MMAP_BIT (SIZE_T_ONE) | |
1617 | ||
1618 | #ifdef MMAP | |
1619 | #define CALL_MMAP(s) MMAP(s) | |
1620 | #else /* MMAP */ | |
1621 | #define CALL_MMAP(s) MMAP_DEFAULT(s) | |
1622 | #endif /* MMAP */ | |
1623 | #ifdef MUNMAP | |
1624 | #define CALL_MUNMAP(a, s) MUNMAP((a), (s)) | |
1625 | #else /* MUNMAP */ | |
1626 | #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s)) | |
1627 | #endif /* MUNMAP */ | |
1628 | #ifdef DIRECT_MMAP | |
1629 | #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s) | |
1630 | #else /* DIRECT_MMAP */ | |
1631 | #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s) | |
1632 | #endif /* DIRECT_MMAP */ | |
1633 | #else /* HAVE_MMAP */ | |
1634 | #define IS_MMAPPED_BIT (SIZE_T_ZERO) | |
1635 | #define USE_MMAP_BIT (SIZE_T_ZERO) | |
1636 | ||
1637 | #define MMAP(s) MFAIL | |
1638 | #define MUNMAP(a, s) (-1) | |
1639 | #define DIRECT_MMAP(s) MFAIL | |
1640 | #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s) | |
1641 | #define CALL_MMAP(s) MMAP(s) | |
1642 | #define CALL_MUNMAP(a, s) MUNMAP((a), (s)) | |
1643 | #endif /* HAVE_MMAP */ | |
1644 | ||
1645 | /** | |
1646 | * Define CALL_MREMAP | |
1647 | */ | |
1648 | #if HAVE_MMAP && HAVE_MREMAP | |
1649 | #ifdef MREMAP | |
1650 | #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv)) | |
1651 | #else /* MREMAP */ | |
1652 | #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv)) | |
1653 | #endif /* MREMAP */ | |
1654 | #else /* HAVE_MMAP && HAVE_MREMAP */ | |
1655 | #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL | |
1656 | #endif /* HAVE_MMAP && HAVE_MREMAP */ | |
1657 | ||
1658 | /* mstate bit set if contiguous morecore disabled or failed */ | |
1659 | #define USE_NONCONTIGUOUS_BIT (4U) | |
1660 | ||
1661 | /* segment bit set in create_mspace_with_base */ | |
1662 | #define EXTERN_BIT (8U) | |
1663 | ||
1664 | ||
1665 | /* --------------------------- Lock preliminaries ------------------------ */ | |
1666 | ||
1667 | /* | |
1668 | When locks are defined, there is one global lock, plus | |
1669 | one per-mspace lock. | |
1670 | ||
1671 | The global lock ensures that mparams.magic and other unique | |
1672 | mparams values are initialized only once. It also protects | |
1673 | sequences of calls to MORECORE. In many cases sys_alloc requires | |
1674 | two calls, that should not be interleaved with calls by other | |
1675 | threads. This does not protect against direct calls to MORECORE | |
1676 | by other threads not using this lock, so there is still code to | |
1677 | cope the best we can on interference. | |
1678 | ||
1679 | Per-mspace locks surround calls to malloc, free, etc. To enable use | |
1680 | in layered extensions, per-mspace locks are reentrant. | |
1681 | ||
1682 | Because lock-protected regions generally have bounded times, it is | |
1683 | OK to use the supplied simple spinlocks in the custom versions for | |
1684 | x86. | |
1685 | ||
1686 | If USE_LOCKS is > 1, the definitions of lock routines here are | |
1687 | bypassed, in which case you will need to define at least | |
1688 | INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK | |
1689 | (which is not used in this malloc, but commonly needed in | |
1690 | extensions.) | |
1691 | */ | |
1692 | ||
1693 | #if USE_LOCKS == 1 | |
1694 | ||
1695 | #if USE_SPIN_LOCKS | |
1696 | #ifndef WIN32 | |
1697 | ||
1698 | /* Custom pthread-style spin locks on x86 and x64 for gcc */ | |
1699 | struct pthread_mlock_t { | |
1700 | volatile unsigned int l; | |
1701 | volatile unsigned int c; | |
1702 | volatile pthread_t threadid; | |
1703 | }; | |
1704 | #define MLOCK_T struct pthread_mlock_t | |
1705 | #define CURRENT_THREAD pthread_self() | |
1706 | #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0) | |
1707 | #define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl) | |
1708 | #define RELEASE_LOCK(sl) pthread_release_lock(sl) | |
1709 | #define TRY_LOCK(sl) pthread_try_lock(sl) | |
1710 | #define SPINS_PER_YIELD 63 | |
1711 | ||
1712 | static MLOCK_T malloc_global_mutex = { 0, 0, 0}; | |
1713 | ||
1714 | static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) { | |
1715 | int spins = 0; | |
1716 | volatile unsigned int* lp = &sl->l; | |
1717 | for (;;) { | |
1718 | if (*lp != 0) { | |
1719 | if (sl->threadid == CURRENT_THREAD) { | |
1720 | ++sl->c; | |
1721 | return 0; | |
1722 | } | |
1723 | } | |
1724 | else { | |
1725 | /* place args to cmpxchgl in locals to evade oddities in some gccs */ | |
1726 | int cmp = 0; | |
1727 | int val = 1; | |
1728 | int ret; | |
1729 | __asm__ __volatile__ ("lock; cmpxchgl %1, %2" | |
1730 | : "=a" (ret) | |
1731 | : "r" (val), "m" (*(lp)), "0"(cmp) | |
1732 | : "memory", "cc"); | |
1733 | if (!ret) { | |
1734 | assert(!sl->threadid); | |
1735 | sl->c = 1; | |
1736 | sl->threadid = CURRENT_THREAD; | |
1737 | return 0; | |
1738 | } | |
1739 | if ((++spins & SPINS_PER_YIELD) == 0) { | |
1740 | #if defined (__SVR4) && defined (__sun) /* solaris */ | |
1741 | thr_yield(); | |
1742 | #else | |
1743 | #if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__) | |
1744 | sched_yield(); | |
1745 | #else /* no-op yield on unknown systems */ | |
1746 | ; | |
1747 | #endif /* __linux__ || __FreeBSD__ || __APPLE__ */ | |
1748 | #endif /* solaris */ | |
1749 | } | |
1750 | } | |
1751 | } | |
1752 | } | |
1753 | ||
1754 | static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) { | |
1755 | assert(sl->l != 0); | |
1756 | assert(sl->threadid == CURRENT_THREAD); | |
1757 | if (--sl->c == 0) { | |
1758 | volatile unsigned int* lp = &sl->l; | |
1759 | int prev = 0; | |
1760 | int ret; | |
1761 | sl->threadid = 0; | |
1762 | __asm__ __volatile__ ("lock; xchgl %0, %1" |
1763 | : "=r" (ret) | |
1764 | : "m" (*(lp)), "0"(prev) | |
1765 | : "memory"); | |
1766 | } | |
1767 | } | |
1768 | ||
1769 | static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) { | |
1770 | volatile unsigned int* lp = &sl->l; | |
1771 | if (*lp != 0) { | |
1772 | if (sl->threadid == CURRENT_THREAD) { | |
1773 | ++sl->c; | |
1774 | return 1; | |
1775 | } | |
1776 | } | |
1777 | else { | |
1778 | int cmp = 0; | |
1779 | int val = 1; | |
1780 | int ret; | |
1781 | __asm__ __volatile__ ("lock; cmpxchgl %1, %2" | |
1782 | : "=a" (ret) | |
1783 | : "r" (val), "m" (*(lp)), "0"(cmp) | |
1784 | : "memory", "cc"); | |
1785 | if (!ret) { | |
1786 | assert(!sl->threadid); | |
1787 | sl->c = 1; | |
1788 | sl->threadid = CURRENT_THREAD; | |
1789 | return 1; | |
1790 | } | |
1791 | } | |
1792 | return 0; | |
1793 | } | |
1794 | ||
1795 | ||
1796 | #else /* WIN32 */ | |
1797 | /* Custom win32-style spin locks on x86 and x64 for MSC */ | |
1798 | struct win32_mlock_t | |
1799 | { | |
1800 | volatile long l; | |
1801 | volatile unsigned int c; | |
1802 | volatile long threadid; | |
1803 | }; | |
1804 | ||
1805 | static inline int return_0(int i) { return 0; } | |
1806 | #define MLOCK_T struct win32_mlock_t | |
1807 | #define CURRENT_THREAD win32_getcurrentthreadid() | |
1808 | #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), return_0(0)) | |
1809 | #define ACQUIRE_LOCK(sl) win32_acquire_lock(sl) |
1810 | #define RELEASE_LOCK(sl) win32_release_lock(sl) | |
1811 | #define TRY_LOCK(sl) win32_try_lock(sl) | |
1812 | #define SPINS_PER_YIELD 63 | |
1813 | ||
1814 | static MLOCK_T malloc_global_mutex = { 0, 0, 0}; | |
1815 | ||
1816 | static FORCEINLINE long win32_getcurrentthreadid(void) { | |
1817 | #ifdef _MSC_VER |
1818 | #if defined(_M_IX86) | |
1819 | long *threadstruct=(long *)__readfsdword(0x18); | |
1820 | long threadid=threadstruct[0x24/sizeof(long)]; | |
1821 | return threadid; | |
1822 | #elif defined(_M_X64) | |
1823 | /* todo */ | |
1824 | return GetCurrentThreadId(); | |
1825 | #else | |
1826 | return GetCurrentThreadId(); | |
1827 | #endif | |
1828 | #else | |
1829 | return GetCurrentThreadId(); | |
1830 | #endif | |
1831 | } | |
1832 | ||
1833 | static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) { | |
1834 | int spins = 0; | |
1835 | for (;;) { | |
1836 | if (sl->l != 0) { | |
1837 | if (sl->threadid == CURRENT_THREAD) { | |
1838 | ++sl->c; | |
1839 | return 0; | |
1840 | } | |
1841 | } | |
1842 | else { | |
1843 | if (!interlockedexchange(&sl->l, 1)) { | |
1844 | assert(!sl->threadid); | |
1846 | sl->threadid = CURRENT_THREAD; | |
1847 | sl->c = 1; | |
1848 | return 0; | |
1849 | } | |
1850 | } | |
1851 | if ((++spins & SPINS_PER_YIELD) == 0) | |
1852 | SleepEx(0, FALSE); | |
1853 | } | |
1854 | } | |
1855 | ||
1856 | static FORCEINLINE void win32_release_lock (MLOCK_T *sl) { | |
1857 | assert(sl->threadid == CURRENT_THREAD); | |
1858 | assert(sl->l != 0); | |
1859 | if (--sl->c == 0) { | |
1860 | sl->threadid = 0; | |
1861 | interlockedexchange (&sl->l, 0); | |
1862 | } | |
1863 | } | |
1864 | ||
1865 | static FORCEINLINE int win32_try_lock (MLOCK_T *sl) { | |
1866 | if(sl->l != 0) { | |
1867 | if (sl->threadid == CURRENT_THREAD) { | |
1868 | ++sl->c; | |
1869 | return 1; | |
1870 | } | |
1871 | } | |
1872 | else { | |
1873 | if (!interlockedexchange(&sl->l, 1)){ | |
1874 | assert(!sl->threadid); | |
1875 | sl->threadid = CURRENT_THREAD; | |
1876 | sl->c = 1; | |
1877 | return 1; | |
1878 | } | |
1879 | } | |
1880 | return 0; | |
1881 | } | |
1882 | ||
1883 | #endif /* WIN32 */ | |
1884 | #else /* USE_SPIN_LOCKS */ | |
1885 | ||
1886 | #ifndef WIN32 | |
1887 | /* pthreads-based locks */ | |
1888 | ||
1889 | #define MLOCK_T pthread_mutex_t | |
1890 | #define CURRENT_THREAD pthread_self() | |
1891 | #define INITIAL_LOCK(sl) pthread_init_lock(sl) | |
1892 | #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl) | |
1893 | #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl) | |
1894 | #define TRY_LOCK(sl) (!pthread_mutex_trylock(sl)) | |
1895 | ||
1896 | static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER; | |
1897 | ||
1898 | /* Cope with old-style linux recursive lock initialization by adding */ | |
1899 | /* skipped internal declaration from pthread.h */ | |
1900 | #ifdef linux | |
1901 | #ifndef PTHREAD_MUTEX_RECURSIVE | |
1902 | extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr, | |
1903 | int __kind)); | |
1904 | #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP | |
1905 | #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y) | |
1906 | #endif | |
1907 | #endif | |
1908 | ||
1909 | static int pthread_init_lock (MLOCK_T *sl) { | |
1910 | pthread_mutexattr_t attr; | |
1911 | if (pthread_mutexattr_init(&attr)) return 1; | |
1912 | if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1; | |
1913 | if (pthread_mutex_init(sl, &attr)) return 1; | |
1914 | if (pthread_mutexattr_destroy(&attr)) return 1; | |
1915 | return 0; | |
1916 | } | |
1917 | ||
1918 | #else /* WIN32 */ | |
1919 | /* Win32 critical sections */ | |
1920 | #define MLOCK_T CRITICAL_SECTION | |
1921 | #define CURRENT_THREAD GetCurrentThreadId() | |
1922 | #define INITIAL_LOCK(s) (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000)) | |
1923 | #define ACQUIRE_LOCK(s) (EnterCriticalSection(s), 0) | |
1924 | #define RELEASE_LOCK(s) LeaveCriticalSection(s) | |
1925 | #define TRY_LOCK(s) TryEnterCriticalSection(s) | |
1926 | #define NEED_GLOBAL_LOCK_INIT | |
1927 | ||
1928 | static MLOCK_T malloc_global_mutex; | |
1929 | static volatile long malloc_global_mutex_status; | |
1930 | ||
1931 | /* Use spin loop to initialize global lock */ | |
1932 | static void init_malloc_global_mutex() { | |
1933 | for (;;) { | |
1934 | long stat = malloc_global_mutex_status; | |
1935 | if (stat > 0) | |
1936 | return; | |
1937 | /* transition to < 0 while initializing, then to > 0 */ | |
1938 | if (stat == 0 && | |
1939 | interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) { | |
1940 | InitializeCriticalSection(&malloc_global_mutex); | |
1941 | interlockedexchange(&malloc_global_mutex_status,1); | |
1942 | return; | |
1943 | } | |
1944 | SleepEx(0, FALSE); | |
1945 | } | |
1946 | } | |
1947 | ||
1948 | #endif /* WIN32 */ | |
1949 | #endif /* USE_SPIN_LOCKS */ | |
1950 | #endif /* USE_LOCKS == 1 */ | |
1951 | ||
1952 | /* ----------------------- User-defined locks ------------------------ */ | |
1953 | ||
1954 | #if USE_LOCKS > 1 | |
1955 | /* Define your own lock implementation here */ | |
1956 | /* #define INITIAL_LOCK(sl) ... */ | |
1957 | /* #define ACQUIRE_LOCK(sl) ... */ | |
1958 | /* #define RELEASE_LOCK(sl) ... */ | |
1959 | /* #define TRY_LOCK(sl) ... */ | |
1960 | /* static MLOCK_T malloc_global_mutex = ... */ | |
1961 | #endif /* USE_LOCKS > 1 */ | |
1962 | ||
1963 | /* ----------------------- Lock-based state ------------------------ */ | |
1964 | ||
1965 | #if USE_LOCKS | |
1966 | #define USE_LOCK_BIT (2U) | |
1967 | #else /* USE_LOCKS */ | |
1968 | #define USE_LOCK_BIT (0U) | |
1969 | #define INITIAL_LOCK(l) | |
1970 | #endif /* USE_LOCKS */ | |
1971 | ||
1972 | #if USE_LOCKS | |
1973 | #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex); | |
1974 | #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex); | |
1975 | #else /* USE_LOCKS */ | |
1976 | #define ACQUIRE_MALLOC_GLOBAL_LOCK() | |
1977 | #define RELEASE_MALLOC_GLOBAL_LOCK() | |
1978 | #endif /* USE_LOCKS */ | |
1979 | ||
1980 | ||
1981 | /* ----------------------- Chunk representations ------------------------ */ | |
1982 | ||
1983 | /* | |
1984 | (The following includes lightly edited explanations by Colin Plumb.) | |
1985 | ||
1986 | The malloc_chunk declaration below is misleading (but accurate and | |
1987 | necessary). It declares a "view" into memory allowing access to | |
1988 | necessary fields at known offsets from a given base. | |
1989 | ||
1990 | Chunks of memory are maintained using a `boundary tag' method as | |
1991 | originally described by Knuth. (See the paper by Paul Wilson | |
1992 | ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such | |
1993 | techniques.) Sizes of free chunks are stored both in the front of | |
1994 | each chunk and at the end. This makes consolidating fragmented | |
1995 | chunks into bigger chunks fast. The head fields also hold bits | |
1996 | representing whether chunks are free or in use. | |
1997 | ||
1998 | Here are some pictures to make it clearer. They are "exploded" to | |
1999 | show that the state of a chunk can be thought of as extending from | |
2000 | the high 31 bits of the head field of its header through the | |
2001 | prev_foot and PINUSE_BIT bit of the following chunk header. | |
2002 | ||
2003 | A chunk that's in use looks like: | |
2004 | ||
2005 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2006 | | Size of previous chunk (if P = 0) | | |
2007 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2008 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| | |
2009 | | Size of this chunk 1| +-+ | |
2010 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2011 | | | | |
2012 | +- -+ | |
2013 | | | | |
2014 | +- -+ | |
2015 | | : | |
2016 | +- size - sizeof(size_t) available payload bytes -+ | |
2017 | : | | |
2018 | chunk-> +- -+ | |
2019 | | | | |
2020 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2021 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1| | |
2022 | | Size of next chunk (may or may not be in use) | +-+ | |
2023 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2024 | ||
2025 | And if it's free, it looks like this: | |
2026 | ||
2027 | chunk-> +- -+ | |
2028 | | User payload (must be in use, or we would have merged!) | | |
2029 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2030 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| | |
2031 | | Size of this chunk 0| +-+ | |
2032 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2033 | | Next pointer | | |
2034 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2035 | | Prev pointer | | |
2036 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2037 | | : | |
2038 | +- size - sizeof(struct chunk) unused bytes -+ | |
2039 | : | | |
2040 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2041 | | Size of this chunk | | |
2042 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2043 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0| | |
2044 | | Size of next chunk (must be in use, or we would have merged)| +-+ | |
2045 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2046 | | : | |
2047 | +- User payload -+ | |
2048 | : | | |
2049 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2050 | |0| | |
2051 | +-+ | |
2052 | Note that since we always merge adjacent free chunks, the chunks | |
2053 | adjacent to a free chunk must be in use. | |
2054 | ||
2055 | Given a pointer to a chunk (which can be derived trivially from the | |
2056 | payload pointer) we can, in O(1) time, find out whether the adjacent | |
2057 | chunks are free, and if so, unlink them from the lists that they | |
2058 | are on and merge them with the current chunk. | |
2059 | ||
2060 | Chunks always begin on even word boundaries, so the mem portion | |
2061 | (which is returned to the user) is also on an even word boundary, and | |
2062 | thus at least double-word aligned. | |
2063 | ||
2064 | The P (PINUSE_BIT) bit, stored in the unused low-order bit of the | |
2065 | chunk size (which is always a multiple of two words), is an in-use | |
2066 | bit for the *previous* chunk. If that bit is *clear*, then the | |
2067 | word before the current chunk size contains the previous chunk | |
2068 | size, and can be used to find the front of the previous chunk. | |
2069 | The very first chunk allocated always has this bit set, preventing | |
2070 | access to non-existent (or non-owned) memory. If pinuse is set for | |
2071 | any given chunk, then you CANNOT determine the size of the | |
2072 | previous chunk, and might even get a memory addressing fault when | |
2073 | trying to do so. | |
2074 | ||
2075 | The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of | |
2076 | the chunk size redundantly records whether the current chunk is | |
2077 | inuse. This redundancy enables usage checks within free and realloc, | |
2078 | and reduces indirection when freeing and consolidating chunks. | |
2079 | ||
2080 | Each freshly allocated chunk must have both cinuse and pinuse set. | |
2081 | That is, each allocated chunk borders either a previously allocated | |
2082 | and still in-use chunk, or the base of its memory arena. This is | |
2083 | ensured by making all allocations from the `lowest' part of any | |
2084 | found chunk. Further, no free chunk physically borders another one, |
2085 | so each free chunk is known to be preceded and followed by either | |
2086 | inuse chunks or the ends of memory. | |
2087 | ||
2088 | Note that the `foot' of the current chunk is actually represented | |
2089 | as the prev_foot of the NEXT chunk. This makes it easier to | |
2090 | deal with alignments etc but can be very confusing when trying | |
2091 | to extend or adapt this code. | |
2092 | ||
2093 | The exceptions to all this are | |
2094 | ||
2095 | 1. The special chunk `top' is the top-most available chunk (i.e., | |
2096 | the one bordering the end of available memory). It is treated | |
2097 | specially. Top is never included in any bin, is used only if | |
2098 | no other chunk is available, and is released back to the | |
2099 | system if it is very large (see M_TRIM_THRESHOLD). In effect, | |
2100 | the top chunk is treated as larger (and thus less well | |
2101 | fitting) than any other available chunk. The top chunk | |
2102 | doesn't update its trailing size field since there is no next | |
2103 | contiguous chunk that would have to index off it. However, | |
2104 | space is still allocated for it (TOP_FOOT_SIZE) to enable | |
2105 | separation or merging when space is extended. | |
2106 | ||
2107 | 2. Chunks allocated via mmap, which have the lowest-order bit | |
2108 | (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set | |
2109 | PINUSE_BIT in their head fields. Because they are allocated | |
2110 | one-by-one, each must carry its own prev_foot field, which is | |
2111 | also used to hold the offset this chunk has within its mmapped | |
2112 | region, which is needed to preserve alignment. Each mmapped | |
2113 | chunk is trailed by the first two fields of a fake next-chunk | |
2114 | for sake of usage checks. | |
2115 | ||
2116 | */ | |
2117 | ||
2118 | struct malloc_chunk { | |
2119 | size_t prev_foot; /* Size of previous chunk (if free). */ | |
2120 | size_t head; /* Size and inuse bits. */ | |
2121 | struct malloc_chunk* fd; /* double links -- used only if free. */ | |
2122 | struct malloc_chunk* bk; | |
2123 | }; | |
2124 | ||
2125 | typedef struct malloc_chunk mchunk; | |
2126 | typedef struct malloc_chunk* mchunkptr; | |
2127 | typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */ | |
2128 | typedef unsigned int bindex_t; /* Described below */ | |
2129 | typedef unsigned int binmap_t; /* Described below */ | |
2130 | typedef unsigned int flag_t; /* The type of various bit flag sets */ | |
2131 | ||
2132 | /* ------------------- Chunks sizes and alignments ----------------------- */ | |
2133 | ||
2134 | #define MCHUNK_SIZE (sizeof(mchunk)) | |
2135 | ||
2136 | #if FOOTERS | |
2137 | #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) | |
2138 | #else /* FOOTERS */ | |
2139 | #define CHUNK_OVERHEAD (SIZE_T_SIZE) | |
2140 | #endif /* FOOTERS */ | |
2141 | ||
2142 | /* MMapped chunks need a second word of overhead ... */ | |
2143 | #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) | |
2144 | /* ... and additional padding for fake next-chunk at foot */ | |
2145 | #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES) | |
2146 | ||
2147 | /* The smallest size we can malloc is an aligned minimal chunk */ | |
2148 | #define MIN_CHUNK_SIZE\ | |
2149 | ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) | |
2150 | ||
2151 | /* conversion from malloc headers to user pointers, and back */ | |
2152 | #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES)) | |
2153 | #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES)) | |
2154 | /* chunk associated with aligned address A */ | |
2155 | #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A))) | |
2156 | ||
2157 | /* Bounds on request (not chunk) sizes. */ | |
2158 | #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2) | |
2159 | #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE) | |
2160 | ||
2161 | /* pad request bytes into a usable size */ | |
2162 | #define pad_request(req) \ | |
2163 | (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) | |
2164 | ||
2165 | /* pad request, checking for minimum (but not maximum) */ | |
2166 | #define request2size(req) \ | |
2167 | (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req)) | |
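/*
  Worked example with 4-byte size_t and FOOTERS == 0 (so SIZE_T_SIZE
  == 4, CHUNK_OVERHEAD == 4, CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE ==
  16, MIN_REQUEST == 11): request2size(20) pads to (20 + 4 + 7) & ~7
  == 24 bytes, while request2size(1) falls below MIN_REQUEST and is
  bumped up to MIN_CHUNK_SIZE == 16.
*/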
2168 | ||
2169 | ||
2170 | /* ------------------ Operations on head and foot fields ----------------- */ | |
2171 | ||
2172 | /* | |
2173 | The head field of a chunk is or'ed with PINUSE_BIT when previous | |
2174 | adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in | |
2175 | use. If the chunk was obtained with mmap, the prev_foot field has | |
2176 | IS_MMAPPED_BIT set, otherwise holding the offset of the base of the | |
2177 | mmapped region to the base of the chunk. | |
2178 | ||
2179 | FLAG4_BIT is not used by this malloc, but might be useful in extensions. | |
2180 | */ | |
2181 | ||
2182 | #define PINUSE_BIT (SIZE_T_ONE) | |
2183 | #define CINUSE_BIT (SIZE_T_TWO) | |
2184 | #define FLAG4_BIT (SIZE_T_FOUR) | |
2185 | #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT) | |
2186 | #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT) | |
2187 | ||
2188 | /* Head value for fenceposts */ | |
2189 | #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE) | |
2190 | ||
2191 | /* extraction of fields from head words */ | |
2192 | #define cinuse(p) ((p)->head & CINUSE_BIT) | |
2193 | #define pinuse(p) ((p)->head & PINUSE_BIT) | |
2194 | #define chunksize(p) ((p)->head & ~(FLAG_BITS)) | |
2195 | ||
2196 | #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT) | |
2197 | #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT) | |
2198 | ||
2199 | /* Treat space at ptr +/- offset as a chunk */ | |
2200 | #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) | |
2201 | #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s))) | |
2202 | ||
2203 | /* Ptr to next or previous physical malloc_chunk. */ | |
2204 | #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS))) | |
2205 | #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) )) | |
2206 | ||
2207 | /* extract next chunk's pinuse bit */ | |
2208 | #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT) | |
2209 | ||
2210 | /* Get/set size at footer */ | |
2211 | #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot) | |
2212 | #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s)) | |
2213 | ||
2214 | /* Set size, pinuse bit, and foot */ | |
2215 | #define set_size_and_pinuse_of_free_chunk(p, s)\ | |
2216 | ((p)->head = (s|PINUSE_BIT), set_foot(p, s)) | |
2217 | ||
2218 | /* Set size, pinuse bit, foot, and clear next pinuse */ | |
2219 | #define set_free_with_pinuse(p, s, n)\ | |
2220 | (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s)) | |
2221 | ||
2222 | #define is_mmapped(p)\ | |
2223 | (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT)) | |
2224 | ||
2225 | /* Get the internal overhead associated with chunk p */ | |
2226 | #define overhead_for(p)\ | |
2227 | (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD) | |
2228 | ||
2229 | /* Return true if malloced space is not necessarily cleared */ | |
2230 | #if MMAP_CLEARS | |
2231 | #define calloc_must_clear(p) (!is_mmapped(p)) | |
2232 | #else /* MMAP_CLEARS */ | |
2233 | #define calloc_must_clear(p) (1) | |
2234 | #endif /* MMAP_CLEARS */ | |
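/*
  Worked example of the head encoding with 4-byte size_t: a head value
  of 0x19 decodes as chunksize(p) == 0x18 (24 bytes) with PINUSE_BIT
  set and CINUSE_BIT clear, i.e. a free 24-byte chunk whose preceding
  neighbor is in use; a head of 0x1b (both low bits set) would describe
  the same chunk while allocated.
*/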
2235 | ||
2236 | /* ---------------------- Overlaid data structures ----------------------- */ | |
2237 | ||
2238 | /* | |
2239 | When chunks are not in use, they are treated as nodes of either | |
2240 | lists or trees. | |
2241 | ||
2242 | "Small" chunks are stored in circular doubly-linked lists, and look | |
2243 | like this: | |
2244 | ||
2245 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2246 | | Size of previous chunk | | |
2247 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2248 | `head:' | Size of chunk, in bytes |P| | |
2249 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2250 | | Forward pointer to next chunk in list | | |
2251 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2252 | | Back pointer to previous chunk in list | | |
2253 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2254 | | Unused space (may be 0 bytes long) . | |
2255 | . . | |
2256 | . | | |
2257 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2258 | `foot:' | Size of chunk, in bytes | | |
2259 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2260 | ||
2261 | Larger chunks are kept in a form of bitwise digital trees (aka | |
2262 | tries) keyed on chunksizes. Because malloc_tree_chunks are only for | |
2263 | free chunks greater than 256 bytes, their size doesn't impose any | |
2264 | constraints on user chunk sizes. Each node looks like: | |
2265 | ||
2266 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2267 | | Size of previous chunk | | |
2268 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2269 | `head:' | Size of chunk, in bytes |P| | |
2270 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2271 | | Forward pointer to next chunk of same size | | |
2272 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2273 | | Back pointer to previous chunk of same size | | |
2274 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2275 | | Pointer to left child (child[0]) | | |
2276 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2277 | | Pointer to right child (child[1]) | | |
2278 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2279 | | Pointer to parent | | |
2280 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2281 | | bin index of this chunk | | |
2282 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2283 | | Unused space . | |
2284 | . | | |
2285 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2286 | `foot:' | Size of chunk, in bytes | | |
2287 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
2288 | ||
2289 | Each tree holding treenodes is a tree of unique chunk sizes. Chunks | |
2290 | of the same size are arranged in a circularly-linked list, with only | |
2291 | the oldest chunk (the next to be used, in our FIFO ordering) | |
2292 | actually in the tree. (Tree members are distinguished by a non-null | |
2293 | parent pointer.) If a chunk with the same size as an existing node | |
2294 | is inserted, it is linked off the existing node using pointers that |
2295 | work in the same way as fd/bk pointers of small chunks. | |
2296 | ||
2297 | Each tree contains a power of 2 sized range of chunk sizes (the | |
2298 | smallest is 0x100 <= x < 0x180), which is divided in half at each | |
2299 | tree level, with the chunks in the smaller half of the range (0x100 | |
2300 | <= x < 0x140 for the top node) in the left subtree and the larger | |
2301 | half (0x140 <= x < 0x180) in the right subtree. This is, of course, | |
2302 | done by inspecting individual bits. | |
2303 | ||
2304 | Using these rules, each node's left subtree contains all smaller | |
2305 | sizes than its right subtree. However, the node at the root of each | |
2306 | subtree has no particular ordering relationship to either. (The | |
2307 | dividing line between the subtree sizes is based on trie relation.) | |
2308 | If we remove the last chunk of a given size from the interior of the | |
2309 | tree, we need to replace it with a leaf node. The tree ordering | |
2310 | rules permit a node to be replaced by any leaf below it. | |
2311 | ||
2312 | The smallest chunk in a tree (a common operation in a best-fit | |
2313 | allocator) can be found by walking a path to the leftmost leaf in | |
2314 | the tree. Unlike a usual binary tree, where we follow left child | |
2315 | pointers until we reach a null, here we follow the right child | |
2316 | pointer any time the left one is null, until we reach a leaf with | |
2317 | both child pointers null. The smallest chunk in the tree will be | |
2318 | somewhere along that path. | |
2319 | ||
2320 | The worst case number of steps to add, find, or remove a node is | |
2321 | bounded by the number of bits differentiating chunks within | |
2322 | bins. Under current bin calculations, this ranges from 6 up to 21 | |
2323 | (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case | |
2324 | is of course much better. | |
2325 | */ | |
2326 | ||
2327 | struct malloc_tree_chunk { | |
2328 | /* The first four fields must be compatible with malloc_chunk */ | |
2329 | size_t prev_foot; | |
2330 | size_t head; | |
2331 | struct malloc_tree_chunk* fd; | |
2332 | struct malloc_tree_chunk* bk; | |
2333 | ||
2334 | struct malloc_tree_chunk* child[2]; | |
2335 | struct malloc_tree_chunk* parent; | |
2336 | bindex_t index; | |
2337 | }; | |
2338 | ||
2339 | typedef struct malloc_tree_chunk tchunk; | |
2340 | typedef struct malloc_tree_chunk* tchunkptr; | |
2341 | typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ | |
2342 | ||
2343 | /* A little helper macro for trees */ | |
2344 | #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) | |
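/*
  A sketch (not the allocator's own routine) of the leftmost-path walk
  described above: starting from a non-null tree root, repeatedly take
  the left child when present, else the right, tracking the smallest
  size seen. The smallest chunk in the tree is guaranteed to lie on
  this path.
*/
static tchunkptr least_tchunk_sketch(tchunkptr t) {
  tchunkptr best = t;                    /* assumes t != 0 */
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))  /* chunksize is defined above */
      best = t;
  }
  return best;
}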
2345 | ||
2346 | /* ----------------------------- Segments -------------------------------- */ | |
2347 | ||
2348 | /* | |
2349 | Each malloc space may include non-contiguous segments, held in a | |
2350 | list headed by an embedded malloc_segment record representing the | |
2351 | top-most space. Segments also include flags holding properties of | |
2352 | the space. Large chunks that are directly allocated by mmap are not | |
2353 | included in this list. They are instead independently created and | |
2354 | destroyed without otherwise keeping track of them. | |
2355 | ||
2356 | Segment management mainly comes into play for spaces allocated by | |
2357 | MMAP. Any call to MMAP might or might not return memory that is | |
2358 | adjacent to an existing segment. MORECORE normally contiguously | |
2359 | extends the current space, so this space is almost always adjacent, | |
2360 | which is simpler and faster to deal with. (This is why MORECORE is | |
2361 | used preferentially to MMAP when both are available -- see | |
2362 | sys_alloc.) When allocating using MMAP, we don't use any of the | |
2363 | hinting mechanisms (inconsistently) supported in various | |
2364 | implementations of unix mmap, or distinguish reserving from | |
2365 | committing memory. Instead, we just ask for space, and exploit | |
2366 | contiguity when we get it. It is probably possible to do | |
2367 | better than this on some systems, but no general scheme seems | |
2368 | to be significantly better. | |
2369 | ||
2370 | Management entails a simpler variant of the consolidation scheme | |
2371 | used for chunks to reduce fragmentation -- new adjacent memory is | |
2372 | normally prepended or appended to an existing segment. However, | |
2373 | there are limitations compared to chunk consolidation that mostly | |
2374 | reflect the fact that segment processing is relatively infrequent | |
2375 | (occurring only when getting memory from system) and that we | |
2376 | don't expect to have huge numbers of segments: | |
2377 | ||
2378 | * Segments are not indexed, so traversal requires linear scans. (It | |
2379 | would be possible to index these, but is not worth the extra | |
2380 | overhead and complexity for most programs on most platforms.) | |
2381 | * New segments are only appended to old ones when holding top-most | |
2382 | memory; if they cannot be prepended to others, they are held in | |
2383 | different segments. | |
2384 | ||
2385 | Except for the top-most segment of an mstate, each segment record | |
2386 | is kept at the tail of its segment. Segments are added by pushing | |
2387 | segment records onto the list headed by &mstate.seg for the | |
2388 | containing mstate. | |
2389 | ||
2390 | Segment flags control allocation/merge/deallocation policies: | |
2391 | * If EXTERN_BIT set, then we did not allocate this segment, | |
2392 | and so should not try to deallocate or merge with others. | |
2393 | (This currently holds only for the initial segment passed | |
2394 | into create_mspace_with_base.) | |
2395 | * If IS_MMAPPED_BIT set, the segment may be merged with | |
2396 | other surrounding mmapped segments and trimmed/de-allocated | |
2397 | using munmap. | |
2398 | * If neither bit is set, then the segment was obtained using | |
2399 | MORECORE so can be merged with surrounding MORECORE'd segments | |
2400 | and deallocated/trimmed using MORECORE with negative arguments. | |
2401 | */ | |
2402 | ||
2403 | struct malloc_segment { | |
2404 | char* base; /* base address */ | |
2405 | size_t size; /* allocated size */ | |
2406 | struct malloc_segment* next; /* ptr to next segment */ | |
2407 | flag_t sflags; /* mmap and extern flag */ | |
2408 | }; | |
2409 | ||
2410 | #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT) | |
2411 | #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT) | |
2412 | ||
2413 | typedef struct malloc_segment msegment; | |
2414 | typedef struct malloc_segment* msegmentptr; | |
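/*
  A sketch of the linear segment scan implied above (the allocator
  keeps the list head inside its mstate, described below): return the
  segment whose [base, base+size) range contains addr, or 0 if none.
*/
static msegmentptr segment_holding_sketch(msegmentptr sp, char* addr) {
  for (; sp != 0; sp = sp->next) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
  }
  return 0;
}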
2415 | ||
2416 | /* ---------------------------- malloc_state ----------------------------- */ | |
2417 | ||
2418 | /* | |
2419 | A malloc_state holds all of the bookkeeping for a space. | |
2420 | The main fields are: | |
2421 | ||
2422 | Top | |
2423 | The topmost chunk of the currently active segment. Its size is | |
2424 | cached in topsize. The actual size of topmost space is | |
2425 | topsize+TOP_FOOT_SIZE, which includes space reserved for adding | |
2426 | fenceposts and segment records if necessary when getting more | |
2427 | space from the system. The size at which to autotrim top is | |
2428 | cached from mparams in trim_check, except that it is disabled if | |
2429 | an autotrim fails. | |
2430 | ||
2431 | Designated victim (dv) | |
2432 | This is the preferred chunk for servicing small requests that | |
2433 | don't have exact fits. It is normally the chunk split off most | |
2434 | recently to service another small request. Its size is cached in | |
2435 | dvsize. The link fields of this chunk are not maintained since it | |
2436 | is not kept in a bin. | |
2437 | ||
2438 | SmallBins | |
2439 | An array of bin headers for free chunks. These bins hold chunks | |
2440 | with sizes less than MIN_LARGE_SIZE bytes. Each bin contains | |
2441 | chunks of all the same size, spaced 8 bytes apart. To simplify | |
2442 | use in double-linked lists, each bin header acts as a malloc_chunk | |
2443 | pointing to the real first node, if it exists (else pointing to | |
2444 | itself). This avoids special-casing for headers. But to avoid | |
2445 | waste, we allocate only the fd/bk pointers of bins, and then use | |
2446 | repositioning tricks to treat these as the fields of a chunk. | |
2447 | ||
2448 | TreeBins | |
2449 | Treebins are pointers to the roots of trees holding a range of | |
2450 | sizes. There are 2 equally spaced treebins for each power of two | |
2451 | from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything | |
2452 | larger. | |
2453 | ||
2454 | Bin maps | |
2455 | There is one bit map for small bins ("smallmap") and one for | |
2456 | treebins ("treemap). Each bin sets its bit when non-empty, and | |
2457 | clears the bit when empty. Bit operations are then used to avoid | |
2458 | bin-by-bin searching -- nearly all "search" is done without ever | |
2459 | looking at bins that won't be selected. The bit maps | |
2460 | conservatively use 32 bits per map word, even on 64-bit systems. | |
2461 | For a good description of some of the bit-based techniques used | |
2462 | here, see Henry S. Warren Jr's book "Hacker's Delight" (and | |
2463 | supplement at http://hackersdelight.org/). Many of these are | |
2464 | intended to reduce the branchiness of paths through malloc etc, as | |
2465 | well as to reduce the number of memory locations read or written. | |
2466 | ||
2467 | Segments | |
2468 | A list of segments headed by an embedded malloc_segment record | |
2469 | representing the initial space. | |
2470 | ||
2471 | Address check support | |
2472 | The least_addr field is the least address ever obtained from | |
2473 | MORECORE or MMAP. Attempted frees and reallocs of any address less | |
2474 | than this are trapped (unless INSECURE is defined). | |
2475 | ||
2476 | Magic tag | |
2477 | A cross-check field that should always hold the same value as mparams.magic. | |
2478 | ||
2479 | Flags | |
2480 | Bits recording whether to use MMAP, locks, or contiguous MORECORE | |
2481 | ||
2482 | Statistics | |
2483 | Each space keeps track of current and maximum system memory | |
2484 | obtained via MORECORE or MMAP. | |
2485 | ||
2486 | Trim support | |
2487 | Fields holding the amount of unused topmost memory that should trigger | |
2488 | trimming, and a counter to force periodic scanning to release unused | |
2489 | non-topmost segments. | |
2490 | ||
2491 | Locking | |
2492 | If USE_LOCKS is defined, the "mutex" lock is acquired and released | |
2493 | around every public call using this mspace. | |
2494 | ||
2495 | Extension support | |
2496 | A void* pointer and a size_t field that can be used to help implement | |
2497 | extensions to this malloc. | |
2498 | */ | |
2499 | ||
2500 | /* Bin types, widths and sizes */ | |
2501 | #define NSMALLBINS (32U) | |
2502 | #define NTREEBINS (32U) | |
2503 | #define SMALLBIN_SHIFT (3U) | |
2504 | #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT) | |
2505 | #define TREEBIN_SHIFT (8U) | |
2506 | #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT) | |
2507 | #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE) | |
2508 | #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD) | |
2509 | ||
2510 | struct malloc_state { | |
2511 | binmap_t smallmap; | |
2512 | binmap_t treemap; | |
2513 | size_t dvsize; | |
2514 | size_t topsize; | |
2515 | char* least_addr; | |
2516 | mchunkptr dv; | |
2517 | mchunkptr top; | |
2518 | size_t trim_check; | |
2519 | size_t release_checks; | |
2520 | size_t magic; | |
2521 | mchunkptr smallbins[(NSMALLBINS+1)*2]; | |
2522 | tbinptr treebins[NTREEBINS]; | |
2523 | size_t footprint; | |
2524 | size_t max_footprint; | |
2525 | flag_t mflags; | |
2526 | #if USE_LOCKS | |
2527 | MLOCK_T mutex; /* locate lock among fields that rarely change */ | |
2528 | #endif /* USE_LOCKS */ | |
2529 | msegment seg; | |
2530 | void* extp; /* Unused but available for extensions */ | |
2531 | size_t exts; | |
2532 | }; | |
2533 | ||
2534 | typedef struct malloc_state* mstate; | |
2535 | ||
2536 | /* ------------- Global malloc_state and malloc_params ------------------- */ | |
2537 | ||
2538 | /* | |
2539 | malloc_params holds global properties, including those that can be | |
2540 | dynamically set using mallopt. There is a single instance, mparams, | |
2541 | initialized in init_mparams. Note that the non-zeroness of "magic" | |
2542 | also serves as an initialization flag. | |
2543 | */ | |
2544 | ||
2545 | struct malloc_params { | |
2546 | volatile size_t magic; | |
2547 | size_t page_size; | |
2548 | size_t granularity; | |
2549 | size_t mmap_threshold; | |
2550 | size_t trim_threshold; | |
2551 | flag_t default_mflags; | |
2552 | }; | |
2553 | ||
2554 | static struct malloc_params mparams; | |
2555 | ||
2556 | /* Ensure mparams initialized */ | |
2557 | #define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams())) | |
2558 | |
2559 | #if !ONLY_MSPACES | |
2560 | ||
2561 | /* The global malloc_state used for all non-"mspace" calls */ | |
2562 | static struct malloc_state _gm_; | |
2563 | #define gm (&_gm_) | |
2564 | #define is_global(M) ((M) == &_gm_) | |
2565 | ||
2566 | #endif /* !ONLY_MSPACES */ | |
2567 | ||
2568 | #define is_initialized(M) ((M)->top != 0) | |
2569 | ||
2570 | /* -------------------------- system alloc setup ------------------------- */ | |
2571 | ||
2572 | /* Operations on mflags */ | |
2573 | ||
2574 | #define use_lock(M) ((M)->mflags & USE_LOCK_BIT) | |
2575 | #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) | |
2576 | #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) | |
2577 | ||
2578 | #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) | |
2579 | #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) | |
2580 | #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) | |
2581 | ||
2582 | #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) | |
2583 | #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) | |
2584 | ||
2585 | #define set_lock(M,L)\ | |
2586 | ((M)->mflags = (L)?\ | |
2587 | ((M)->mflags | USE_LOCK_BIT) :\ | |
2588 | ((M)->mflags & ~USE_LOCK_BIT)) | |
2589 | ||
2590 | /* page-align a size */ | |
2591 | #define page_align(S)\ | |
2592 | (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE)) | |
2593 | ||
2594 | /* granularity-align a size */ | |
2595 | #define granularity_align(S)\ | |
2596 | (((S) + (mparams.granularity - SIZE_T_ONE))\ | |
2597 | & ~(mparams.granularity - SIZE_T_ONE)) | |
2598 | ||
2599 | ||
2600 | /* For mmap, use granularity alignment on windows, else page-align */ | |
2601 | #ifdef WIN32 | |
2602 | #define mmap_align(S) granularity_align(S) | |
2603 | #else | |
2604 | #define mmap_align(S) page_align(S) | |
2605 | #endif | |
2606 | ||
2607 | /* For sys_alloc, enough padding to ensure the request can be malloc'd on success */ | |
2608 | #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT) | |
2609 | ||
2610 | #define is_page_aligned(S)\ | |
2611 | (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) | |
2612 | #define is_granularity_aligned(S)\ | |
2613 | (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) | |
2614 | ||
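/*
  Illustration only (the example_* name is hypothetical, not part of
  the allocator): how the rounding macros above behave, assuming the
  common case of mparams.page_size == 4096 after initialization.
*/
static void example_alignment(void) {
  assert(page_align(1)    == 4096);       /* round up to a full page */
  assert(page_align(4096) == 4096);       /* already aligned */
  assert(page_align(4097) == 8192);
  assert(is_page_aligned(page_align(12345)));
}
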
2615 | /* True if segment S holds address A */ | |
2616 | #define segment_holds(S, A)\ | |
2617 | ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) | |
2618 | ||
2619 | /* Return segment holding given address */ | |
2620 | static msegmentptr segment_holding(mstate m, char* addr) { | |
2621 | msegmentptr sp = &m->seg; | |
2622 | for (;;) { | |
2623 | if (addr >= sp->base && addr < sp->base + sp->size) | |
2624 | return sp; | |
2625 | if ((sp = sp->next) == 0) | |
2626 | return 0; | |
2627 | } | |
2628 | } | |
2629 | ||
2630 | /* Return true if segment contains a segment link */ | |
2631 | static int has_segment_link(mstate m, msegmentptr ss) { | |
2632 | msegmentptr sp = &m->seg; | |
2633 | for (;;) { | |
2634 | if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) | |
2635 | return 1; | |
2636 | if ((sp = sp->next) == 0) | |
2637 | return 0; | |
2638 | } | |
2639 | } | |
2640 | ||
2641 | #ifndef MORECORE_CANNOT_TRIM | |
2642 | #define should_trim(M,s) ((s) > (M)->trim_check) | |
2643 | #else /* MORECORE_CANNOT_TRIM */ | |
2644 | #define should_trim(M,s) (0) | |
2645 | #endif /* MORECORE_CANNOT_TRIM */ | |
2646 | ||
2647 | /* | |
2648 | TOP_FOOT_SIZE is padding at the end of a segment, including space | |
2649 | that may be needed to place segment records and fenceposts when new | |
2650 | noncontiguous segments are added. | |
2651 | */ | |
2652 | #define TOP_FOOT_SIZE\ | |
2653 | (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) | |
2654 | ||
2655 | ||
2656 | /* ------------------------------- Hooks -------------------------------- */ | |
2657 | ||
2658 | /* | |
2659 | PREACTION should be defined to return 0 on success, and nonzero on | |
2660 | failure. If you are not using locking, you can redefine these to do | |
2661 | anything you like. | |
2662 | */ | |
2663 | ||
2664 | #if USE_LOCKS | |
2665 | ||
2666 | #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) | |
2667 | #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } | |
2668 | #else /* USE_LOCKS */ | |
2669 | ||
2670 | #ifndef PREACTION | |
2671 | #define PREACTION(M) (0) | |
2672 | #endif /* PREACTION */ | |
2673 | ||
2674 | #ifndef POSTACTION | |
2675 | #define POSTACTION(M) | |
2676 | #endif /* POSTACTION */ | |
2677 | ||
2678 | #endif /* USE_LOCKS */ | |
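/*
  Illustration only (the example_* name is hypothetical): the lock
  bracketing pattern every public entry point follows. PREACTION
  returns 0 once the lock (if any) is held; POSTACTION releases it.
*/
static void* example_entry_point(mstate m, size_t bytes) {
  void* mem = 0;
  (void)bytes;                /* unused in this sketch */
  if (!PREACTION(m)) {
    /* ... do allocation work on m, setting mem ... */
    POSTACTION(m);
  }
  return mem;
}
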
2679 | ||
2680 | /* | |
2681 | CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses. | |
2682 | USAGE_ERROR_ACTION is triggered on detected bad frees and | |
2683 | reallocs. The argument p is an address that might have triggered the | |
2684 | fault. It is ignored by the two predefined actions, but might be | |
2685 | useful in custom actions that try to help diagnose errors. | |
2686 | */ | |
2687 | ||
2688 | #if PROCEED_ON_ERROR | |
2689 | ||
2690 | /* A count of the number of corruption errors causing resets */ | |
2691 | int malloc_corruption_error_count; | |
2692 | ||
2693 | /* default corruption action */ | |
2694 | static void reset_on_error(mstate m); | |
2695 | ||
2696 | #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m) | |
2697 | #define USAGE_ERROR_ACTION(m, p) | |
2698 | ||
2699 | #else /* PROCEED_ON_ERROR */ | |
2700 | ||
2701 | #ifndef CORRUPTION_ERROR_ACTION | |
2702 | #define CORRUPTION_ERROR_ACTION(m) ABORT | |
2703 | #endif /* CORRUPTION_ERROR_ACTION */ | |
2704 | ||
2705 | #ifndef USAGE_ERROR_ACTION | |
2706 | #define USAGE_ERROR_ACTION(m,p) ABORT | |
2707 | #endif /* USAGE_ERROR_ACTION */ | |
2708 | ||
2709 | #endif /* PROCEED_ON_ERROR */ | |
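/*
  Illustration only: a build that wants diagnostics instead of ABORT
  can define the hooks before they default above, along these lines
  (the fprintf-based handler is a hypothetical example, not a
  provided default):

    #define USAGE_ERROR_ACTION(m, p) \
      fprintf(stderr, "malloc: bad free/realloc of %p\n", (void*)(p))
*/
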
2710 | ||
2711 | /* -------------------------- Debugging setup ---------------------------- */ | |
2712 | ||
2713 | #if ! DEBUG | |
2714 | ||
2715 | #define check_free_chunk(M,P) | |
2716 | #define check_inuse_chunk(M,P) | |
2717 | #define check_malloced_chunk(M,P,N) | |
2718 | #define check_mmapped_chunk(M,P) | |
2719 | #define check_malloc_state(M) | |
2720 | #define check_top_chunk(M,P) | |
2721 | ||
2722 | #else /* DEBUG */ | |
2723 | #define check_free_chunk(M,P) do_check_free_chunk(M,P) | |
2724 | #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) | |
2725 | #define check_top_chunk(M,P) do_check_top_chunk(M,P) | |
2726 | #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) | |
2727 | #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) | |
2728 | #define check_malloc_state(M) do_check_malloc_state(M) | |
2729 | ||
2730 | static void do_check_any_chunk(mstate m, mchunkptr p); | |
2731 | static void do_check_top_chunk(mstate m, mchunkptr p); | |
2732 | static void do_check_mmapped_chunk(mstate m, mchunkptr p); | |
2733 | static void do_check_inuse_chunk(mstate m, mchunkptr p); | |
2734 | static void do_check_free_chunk(mstate m, mchunkptr p); | |
2735 | static void do_check_malloced_chunk(mstate m, void* mem, size_t s); | |
2736 | static void do_check_tree(mstate m, tchunkptr t); | |
2737 | static void do_check_treebin(mstate m, bindex_t i); | |
2738 | static void do_check_smallbin(mstate m, bindex_t i); | |
2739 | static void do_check_malloc_state(mstate m); | |
2740 | static int bin_find(mstate m, mchunkptr x); | |
2741 | static size_t traverse_and_check(mstate m); | |
2742 | #endif /* DEBUG */ | |
2743 | ||
2744 | /* ---------------------------- Indexing Bins ---------------------------- */ | |
2745 | ||
2746 | #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) | |
2747 | #define small_index(s) ((s) >> SMALLBIN_SHIFT) | |
2748 | #define small_index2size(i) ((i) << SMALLBIN_SHIFT) | |
2749 | #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) | |
2750 | ||
2751 | /* addressing by index. See above about smallbin repositioning */ | |
2752 | #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1]))) | |
2753 | #define treebin_at(M,i) (&((M)->treebins[i])) | |
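/*
  Illustration only (the example_* name is hypothetical): with
  SMALLBIN_SHIFT == 3, smallbin i holds chunks of exactly i*8 bytes,
  so a 64-byte chunk lives in bin 8; sizes of MIN_LARGE_SIZE (256)
  and up are routed to treebins instead.
*/
static void example_small_indexing(void) {
  assert(small_index((size_t)64) == 8);
  assert(small_index2size(8) == 64);
  assert(is_small((size_t)255));
  assert(!is_small((size_t)256));   /* MIN_LARGE_SIZE and up */
}
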
2754 | ||
2755 | /* assign tree index for size S to variable I. Use x86 asm if possible */ | |
2756 | #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) | |
2757 | #define compute_tree_index(S, I)\ | |
2758 | {\ | |
2759 | unsigned int X = S >> TREEBIN_SHIFT;\ | |
2760 | if (X == 0)\ | |
2761 | I = 0;\ | |
2762 | else if (X > 0xFFFF)\ | |
2763 | I = NTREEBINS-1;\ | |
2764 | else {\ | |
2765 | unsigned int K;\ | |
2766 | __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm" (X));\ | |
2767 | I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ | |
2768 | }\ | |
2769 | } | |
2770 | ||
2771 | #elif defined (__INTEL_COMPILER) | |
2772 | #define compute_tree_index(S, I)\ | |
2773 | {\ | |
2774 | size_t X = S >> TREEBIN_SHIFT;\ | |
2775 | if (X == 0)\ | |
2776 | I = 0;\ | |
2777 | else if (X > 0xFFFF)\ | |
2778 | I = NTREEBINS-1;\ | |
2779 | else {\ | |
2780 | unsigned int K = _bit_scan_reverse (X); \ | |
2781 | I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ | |
2782 | }\ | |
2783 | } | |
2784 | ||
2785 | #elif defined(_MSC_VER) && _MSC_VER>=1300 | |
2786 | #define compute_tree_index(S, I)\ | |
2787 | {\ | |
2788 | size_t X = S >> TREEBIN_SHIFT;\ | |
2789 | if (X == 0)\ | |
2790 | I = 0;\ | |
2791 | else if (X > 0xFFFF)\ | |
2792 | I = NTREEBINS-1;\ | |
2793 | else {\ | |
2794 | unsigned int K;\ | |
2795 | _BitScanReverse((DWORD *) &K, X);\ | |
2796 | I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ | |
2797 | }\ | |
2798 | } | |
2799 | ||
2800 | #else /* GNUC */ | |
2801 | #define compute_tree_index(S, I)\ | |
2802 | {\ | |
2803 | size_t X = S >> TREEBIN_SHIFT;\ | |
2804 | if (X == 0)\ | |
2805 | I = 0;\ | |
2806 | else if (X > 0xFFFF)\ | |
2807 | I = NTREEBINS-1;\ | |
2808 | else {\ | |
2809 | unsigned int Y = (unsigned int)X;\ | |
2810 | unsigned int N = ((Y - 0x100) >> 16) & 8;\ | |
2811 | unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\ | |
2812 | N += K;\ | |
2813 | N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\ | |
2814 | K = 14 - N + ((Y <<= K) >> 15);\ | |
2815 | I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\ | |
2816 | }\ | |
2817 | } | |
2818 | #endif /* GNUC */ | |
2819 | ||
2820 | /* Bit representing maximum resolved size in a treebin at i */ | |
2821 | #define bit_for_tree_index(i) \ | |
2822 | (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2) | |
2823 | ||
2824 | /* Shift placing maximum resolved bit in a treebin at i as sign bit */ | |
2825 | #define leftshift_for_tree_index(i) \ | |
2826 | ((i == NTREEBINS-1)? 0 : \ | |
2827 | ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2))) | |
2828 | ||
2829 | /* The size of the smallest chunk held in bin with index i */ | |
2830 | #define minsize_for_tree_index(i) \ | |
2831 | ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \ | |
2832 | (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1))) | |
2833 | ||
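/*
  Illustration only (the example_* name is hypothetical): with
  TREEBIN_SHIFT == 8 there are two bins per power of two, so sizes
  map as [256,384) -> bin 0, [384,512) -> bin 1, [512,768) -> bin 2,
  [768,1024) -> bin 3, and so on; minsize_for_tree_index gives each
  bin's floor, agreeing with compute_tree_index.
*/
static void example_tree_indexing(void) {
  bindex_t i;
  compute_tree_index(((size_t)768), i);
  assert(i == 3);
  assert(minsize_for_tree_index(3) == 768);   /* bin floor matches */
  compute_tree_index(((size_t)512), i);
  assert(i == 2);
}
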
2834 | ||
2835 | /* ------------------------ Operations on bin maps ----------------------- */ | |
2836 | ||
2837 | /* bit corresponding to given index */ | |
2838 | #define idx2bit(i) ((binmap_t)(1) << (i)) | |
2839 | ||
2840 | /* Mark/Clear bits with given index */ | |
2841 | #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i)) | |
2842 | #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i)) | |
2843 | #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i)) | |
2844 | ||
2845 | #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i)) | |
2846 | #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i)) | |
2847 | #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i)) | |
2848 | ||
2849 | /* isolate the least set bit of a bitmap */ | |
2850 | #define least_bit(x) ((x) & -(x)) | |
2851 | ||
2852 | /* mask with all bits to left of least bit of x on */ | |
2853 | #define left_bits(x) ((x<<1) | -(x<<1)) | |
2854 | ||
2855 | /* mask with all bits to left of or equal to least bit of x on */ | |
2856 | #define same_or_left_bits(x) ((x) | -(x)) | |
2857 | ||
2858 | /* index corresponding to given bit. Use x86 asm if possible */ | |
2859 | ||
2860 | #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) | |
2861 | #define compute_bit2idx(X, I)\ | |
2862 | {\ | |
2863 | unsigned int J;\ | |
2864 | __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\ | |
2865 | I = (bindex_t)J;\ | |
2866 | } | |
2867 | ||
2868 | #elif defined (__INTEL_COMPILER) | |
2869 | #define compute_bit2idx(X, I)\ | |
2870 | {\ | |
2871 | unsigned int J;\ | |
2872 | J = _bit_scan_forward (X); \ | |
2873 | I = (bindex_t)J;\ | |
2874 | } | |
2875 | ||
2876 | #elif defined(_MSC_VER) && _MSC_VER>=1300 | |
2877 | #define compute_bit2idx(X, I)\ | |
2878 | {\ | |
2879 | unsigned int J;\ | |
2880 | _BitScanForward((DWORD *) &J, X);\ | |
2881 | I = (bindex_t)J;\ | |
2882 | } | |
2883 | ||
2884 | #elif USE_BUILTIN_FFS | |
2885 | #define compute_bit2idx(X, I) I = ffs(X)-1 | |
2886 | ||
2887 | #else | |
2888 | #define compute_bit2idx(X, I)\ | |
2889 | {\ | |
2890 | unsigned int Y = X - 1;\ | |
2891 | unsigned int K = Y >> (16-4) & 16;\ | |
2892 | unsigned int N = K; Y >>= K;\ | |
2893 | N += K = Y >> (8-3) & 8; Y >>= K;\ | |
2894 | N += K = Y >> (4-2) & 4; Y >>= K;\ | |
2895 | N += K = Y >> (2-1) & 2; Y >>= K;\ | |
2896 | N += K = Y >> (1-0) & 1; Y >>= K;\ | |
2897 | I = (bindex_t)(N + Y);\ | |
2898 | } | |
2899 | #endif /* GNUC */ | |
2900 | ||
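/*
  Illustration only (the example_* name is hypothetical): the bitmap
  idiom malloc uses to find the first non-empty smallbin at or above
  a starting index without scanning bin-by-bin.
*/
static int example_find_nonempty_smallbin(mstate m, bindex_t start,
                                          bindex_t* found) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(start));
  if (candidates == 0)
    return 0;                       /* no non-empty bin at or above start */
  compute_bit2idx(least_bit(candidates), *found);
  return 1;
}
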
2901 | ||
2902 | /* ----------------------- Runtime Check Support ------------------------- */ | |
2903 | ||
2904 | /* | |
2905 | For security, the main invariant is that malloc/free/etc never | |
2906 | writes to a static address other than malloc_state, unless static | |
2907 | malloc_state itself has been corrupted, which cannot occur via | |
2908 | malloc (because of these checks). In essence this means that we | |
2909 | believe all pointers, sizes, maps etc held in malloc_state, but | |
2910 | check all of those linked or offsetted from other embedded data | |
2911 | structures. These checks are interspersed with main code in a way | |
2912 | that tends to minimize their run-time cost. | |
2913 | ||
2914 | When FOOTERS is defined, in addition to range checking, we also | |
2915 | verify footer fields of inuse chunks, which can be used to guarantee | |
2916 | that the mstate controlling malloc/free is intact. This is a | |
2917 | streamlined version of the approach described by William Robertson | |
2918 | et al in "Run-time Detection of Heap-based Overflows" LISA'03 | |
2919 | http://www.usenix.org/events/lisa03/tech/robertson.html The footer | |
2920 | of an inuse chunk holds the xor of its mstate and a random seed; | |
2921 | this footer is checked upon calls to free() and realloc(). This is | |
2922 | (probabilistically) unguessable from outside the program, but can be | |
2923 | computed by any code successfully malloc'ing any chunk, so does not | |
2924 | itself provide protection against code that has already broken | |
2925 | security through some other means. Unlike Robertson et al, we | |
2926 | always dynamically check addresses of all offset chunks (previous, | |
2927 | next, etc). This turns out to be cheaper than relying on hashes. | |
2928 | */ | |
2929 | ||
2930 | #if !INSECURE | |
2931 | /* Check if address a is at least as high as any from MORECORE or MMAP */ | |
2932 | #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) | |
2933 | /* Check if address of next chunk n is higher than base chunk p */ | |
2934 | #define ok_next(p, n) ((char*)(p) < (char*)(n)) | |
2935 | /* Check if p has its cinuse bit on */ | |
2936 | #define ok_cinuse(p) cinuse(p) | |
2937 | /* Check if p has its pinuse bit on */ | |
2938 | #define ok_pinuse(p) pinuse(p) | |
2939 | ||
2940 | #else /* !INSECURE */ | |
2941 | #define ok_address(M, a) (1) | |
2942 | #define ok_next(b, n) (1) | |
2943 | #define ok_cinuse(p) (1) | |
2944 | #define ok_pinuse(p) (1) | |
2945 | #endif /* !INSECURE */ | |
2946 | ||
2947 | #if (FOOTERS && !INSECURE) | |
2948 | /* Check if (alleged) mstate m has expected magic field */ | |
2949 | #define ok_magic(M) ((M)->magic == mparams.magic) | |
2950 | #else /* (FOOTERS && !INSECURE) */ | |
2951 | #define ok_magic(M) (1) | |
2952 | #endif /* (FOOTERS && !INSECURE) */ | |
2953 | ||
2954 | ||
2955 | /* In gcc, use __builtin_expect to minimize impact of checks */ | |
2956 | #if !INSECURE | |
2957 | #if defined(__GNUC__) && __GNUC__ >= 3 | |
2958 | #define RTCHECK(e) __builtin_expect(e, 1) | |
2959 | #else /* GNUC */ | |
2960 | #define RTCHECK(e) (e) | |
2961 | #endif /* GNUC */ | |
2962 | #else /* !INSECURE */ | |
2963 | #define RTCHECK(e) (1) | |
2964 | #endif /* !INSECURE */ | |
2965 | ||
2966 | /* macros to set up inuse chunks with or without footers */ | |
2967 | ||
2968 | #if !FOOTERS | |
2969 | ||
2970 | #define mark_inuse_foot(M,p,s) | |
2971 | ||
2972 | /* Set cinuse bit and pinuse bit of next chunk */ | |
2973 | #define set_inuse(M,p,s)\ | |
2974 | ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ | |
2975 | ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) | |
2976 | ||
2977 | /* Set cinuse and pinuse of this chunk and pinuse of next chunk */ | |
2978 | #define set_inuse_and_pinuse(M,p,s)\ | |
2979 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ | |
2980 | ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) | |
2981 | ||
2982 | /* Set size, cinuse and pinuse bit of this chunk */ | |
2983 | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ | |
2984 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) | |
2985 | ||
2986 | #else /* FOOTERS */ | |
2987 | ||
2988 | /* Set foot of inuse chunk to be xor of mstate and seed */ | |
2989 | #define mark_inuse_foot(M,p,s)\ | |
2990 | (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) | |
2991 | ||
2992 | #define get_mstate_for(p)\ | |
2993 | ((mstate)(((mchunkptr)((char*)(p) +\ | |
2994 | (chunksize(p))))->prev_foot ^ mparams.magic)) | |
2995 | ||
2996 | #define set_inuse(M,p,s)\ | |
2997 | ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ | |
2998 | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ | |
2999 | mark_inuse_foot(M,p,s)) | |
3000 | ||
3001 | #define set_inuse_and_pinuse(M,p,s)\ | |
3002 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ | |
3003 | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ | |
3004 | mark_inuse_foot(M,p,s)) | |
3005 | ||
3006 | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ | |
3007 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ | |
3008 | mark_inuse_foot(M, p, s)) | |
3009 | ||
3010 | #endif /* !FOOTERS */ | |
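#if FOOTERS
/*
  Illustration only (the example_* name is hypothetical): how free()
  and realloc() recover the owning mstate from a chunk footer and
  refuse to proceed unless its magic cross-check matches.
*/
static mstate example_owning_mstate(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);
  return ok_magic(fm) ? fm : 0;     /* 0 signals a usage error */
}
#endif /* FOOTERS */
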
3011 | ||
3012 | /* ---------------------------- setting mparams -------------------------- */ | |
3013 | ||
3014 | /* Initialize mparams */ | |
3015 | static int init_mparams(void) { | |
3016 | #ifdef NEED_GLOBAL_LOCK_INIT | |
3017 | if (malloc_global_mutex_status <= 0) | |
3018 | init_malloc_global_mutex(); | |
3019 | #endif | |
3020 | ||
3021 | ACQUIRE_MALLOC_GLOBAL_LOCK(); | |
3022 | if (mparams.magic == 0) { | |
3023 | size_t magic; | |
3024 | size_t psize; | |
3025 | size_t gsize; | |
3026 | ||
3027 | #ifndef WIN32 | |
3028 | psize = malloc_getpagesize; | |
3029 | gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize); | |
3030 | #else /* WIN32 */ | |
3031 | { | |
3032 | SYSTEM_INFO system_info; | |
3033 | GetSystemInfo(&system_info); | |
3034 | psize = system_info.dwPageSize; | |
3035 | gsize = ((DEFAULT_GRANULARITY != 0)? | |
3036 | DEFAULT_GRANULARITY : system_info.dwAllocationGranularity); | |
3037 | } | |
3038 | #endif /* WIN32 */ | |
3039 | ||
3040 | /* Sanity-check configuration: | |
3041 | size_t must be unsigned and as wide as pointer type. | |
3042 | ints must be at least 4 bytes. | |
3043 | alignment must be at least 8. | |
3044 | Alignment, min chunk size, and page size must all be powers of 2. | |
3045 | */ | |
3046 | if ((sizeof(size_t) != sizeof(char*)) || | |
3047 | (MAX_SIZE_T < MIN_CHUNK_SIZE) || | |
3048 | (sizeof(int) < 4) || | |
3049 | (MALLOC_ALIGNMENT < (size_t)8U) || | |
3050 | ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) || | |
3051 | ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) || | |
3052 | ((gsize & (gsize-SIZE_T_ONE)) != 0) || | |
3053 | ((psize & (psize-SIZE_T_ONE)) != 0)) | |
3054 | ABORT; | |
3055 | ||
3056 | mparams.granularity = gsize; | |
3057 | mparams.page_size = psize; | |
3058 | mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; | |
3059 | mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; | |
3060 | #if MORECORE_CONTIGUOUS | |
3061 | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT; | |
3062 | #else /* MORECORE_CONTIGUOUS */ | |
3063 | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; | |
3064 | #endif /* MORECORE_CONTIGUOUS */ | |
3065 | ||
3066 | #if !ONLY_MSPACES | |
3067 | /* Set up lock for main malloc area */ | |
3068 | gm->mflags = mparams.default_mflags; | |
3069 | (void)INITIAL_LOCK(&gm->mutex); | |
3070 | #endif |
3071 | ||
3072 | #if (FOOTERS && !INSECURE) | |
3073 | { | |
3074 | #if USE_DEV_RANDOM | |
3075 | int fd; | |
3076 | unsigned char buf[sizeof(size_t)]; | |
3077 | /* Try to use /dev/urandom, else fall back on using time */ | |
3078 | if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 && | |
3079 | read(fd, buf, sizeof(buf)) == sizeof(buf)) { | |
3080 | magic = *((size_t *) buf); | |
3081 | close(fd); | |
3082 | } | |
3083 | else | |
3084 | #endif /* USE_DEV_RANDOM */ | |
3085 | #ifdef WIN32 | |
3086 | magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U); | |
3087 | #else | |
3088 | magic = (size_t)(time(0) ^ (size_t)0x55555555U); | |
3089 | #endif | |
3090 | magic |= (size_t)8U; /* ensure nonzero */ | |
3091 | magic &= ~(size_t)7U; /* improve chances of fault for bad values */ | |
3092 | } | |
3093 | #else /* (FOOTERS && !INSECURE) */ | |
3094 | magic = (size_t)0x58585858U; | |
3095 | #endif /* (FOOTERS && !INSECURE) */ | |
3096 | ||
3097 | mparams.magic = magic; | |
3098 | } | |
3099 | ||
3100 | RELEASE_MALLOC_GLOBAL_LOCK(); | |
3101 | return 1; | |
3102 | } | |
3103 | ||
3104 | /* support for mallopt */ | |
3105 | static int change_mparam(int param_number, int value) { | |
3106 | size_t val = (value == -1)? MAX_SIZE_T : (size_t)value; | |
3107 | ensure_initialization(); | |
3108 | switch(param_number) { | |
3109 | case M_TRIM_THRESHOLD: | |
3110 | mparams.trim_threshold = val; | |
3111 | return 1; | |
3112 | case M_GRANULARITY: | |
3113 | if (val >= mparams.page_size && ((val & (val-1)) == 0)) { | |
3114 | mparams.granularity = val; | |
3115 | return 1; | |
3116 | } | |
3117 | else | |
3118 | return 0; | |
3119 | case M_MMAP_THRESHOLD: | |
3120 | mparams.mmap_threshold = val; | |
3121 | return 1; | |
3122 | default: | |
3123 | return 0; | |
3124 | } | |
3125 | } | |
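/*
  Illustration only (the example_* name is hypothetical): mallopt
  relays to change_mparam, so granularity changes are accepted only
  for powers of two no smaller than the page size (4096 assumed
  here), and -1 selects the maximum for threshold parameters.
*/
static void example_tuning(void) {
  assert(change_mparam(M_GRANULARITY, 64 * 1024) == 1);  /* accepted */
  assert(change_mparam(M_GRANULARITY, 60 * 1024) == 0);  /* not a power of 2 */
  assert(change_mparam(M_TRIM_THRESHOLD, -1) == 1);      /* -1 -> MAX_SIZE_T */
}
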
3126 | ||
3127 | #if DEBUG | |
3128 | /* ------------------------- Debugging Support --------------------------- */ | |
3129 | ||
3130 | /* Check properties of any chunk, whether free, inuse, mmapped etc */ | |
3131 | static void do_check_any_chunk(mstate m, mchunkptr p) { | |
3132 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); | |
3133 | assert(ok_address(m, p)); | |
3134 | } | |
3135 | ||
3136 | /* Check properties of top chunk */ | |
3137 | static void do_check_top_chunk(mstate m, mchunkptr p) { | |
3138 | msegmentptr sp = segment_holding(m, (char*)p); | |
3139 | size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */ | |
3140 | assert(sp != 0); | |
3141 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); | |
3142 | assert(ok_address(m, p)); | |
3143 | assert(sz == m->topsize); | |
3144 | assert(sz > 0); | |
3145 | assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); | |
3146 | assert(pinuse(p)); | |
3147 | assert(!pinuse(chunk_plus_offset(p, sz))); | |
3148 | } | |
3149 | ||
3150 | /* Check properties of (inuse) mmapped chunks */ | |
3151 | static void do_check_mmapped_chunk(mstate m, mchunkptr p) { | |
3152 | size_t sz = chunksize(p); | |
3153 | size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD); | |
3154 | assert(is_mmapped(p)); | |
3155 | assert(use_mmap(m)); | |
3156 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); | |
3157 | assert(ok_address(m, p)); | |
3158 | assert(!is_small(sz)); | |
3159 | assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); | |
3160 | assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); | |
3161 | assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); | |
3162 | } | |
3163 | ||
3164 | /* Check properties of inuse chunks */ | |
3165 | static void do_check_inuse_chunk(mstate m, mchunkptr p) { | |
3166 | do_check_any_chunk(m, p); | |
3167 | assert(cinuse(p)); | |
3168 | assert(next_pinuse(p)); | |
3169 | /* If not pinuse and not mmapped, previous chunk has OK offset */ | |
3170 | assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); | |
3171 | if (is_mmapped(p)) | |
3172 | do_check_mmapped_chunk(m, p); | |
3173 | } | |
3174 | ||
3175 | /* Check properties of free chunks */ | |
3176 | static void do_check_free_chunk(mstate m, mchunkptr p) { | |
3177 | size_t sz = chunksize(p); | |
3178 | mchunkptr next = chunk_plus_offset(p, sz); | |
3179 | do_check_any_chunk(m, p); | |
3180 | assert(!cinuse(p)); | |
3181 | assert(!next_pinuse(p)); | |
3182 | assert (!is_mmapped(p)); | |
3183 | if (p != m->dv && p != m->top) { | |
3184 | if (sz >= MIN_CHUNK_SIZE) { | |
3185 | assert((sz & CHUNK_ALIGN_MASK) == 0); | |
3186 | assert(is_aligned(chunk2mem(p))); | |
3187 | assert(next->prev_foot == sz); | |
3188 | assert(pinuse(p)); | |
3189 | assert (next == m->top || cinuse(next)); | |
3190 | assert(p->fd->bk == p); | |
3191 | assert(p->bk->fd == p); | |
3192 | } | |
3193 | else /* markers are always of size SIZE_T_SIZE */ | |
3194 | assert(sz == SIZE_T_SIZE); | |
3195 | } | |
3196 | } | |
3197 | ||
3198 | /* Check properties of malloced chunks at the point they are malloced */ | |
3199 | static void do_check_malloced_chunk(mstate m, void* mem, size_t s) { | |
3200 | if (mem != 0) { | |
3201 | mchunkptr p = mem2chunk(mem); | |
3202 | size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT); | |
3203 | do_check_inuse_chunk(m, p); | |
3204 | assert((sz & CHUNK_ALIGN_MASK) == 0); | |
3205 | assert(sz >= MIN_CHUNK_SIZE); | |
3206 | assert(sz >= s); | |
3207 | /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ | |
3208 | assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); | |
3209 | } | |
3210 | } | |
3211 | ||
3212 | /* Check a tree and its subtrees. */ | |
3213 | static void do_check_tree(mstate m, tchunkptr t) { | |
3214 | tchunkptr head = 0; | |
3215 | tchunkptr u = t; | |
3216 | bindex_t tindex = t->index; | |
3217 | size_t tsize = chunksize(t); | |
3218 | bindex_t idx; | |
3219 | compute_tree_index(tsize, idx); | |
3220 | assert(tindex == idx); | |
3221 | assert(tsize >= MIN_LARGE_SIZE); | |
3222 | assert(tsize >= minsize_for_tree_index(idx)); | |
3223 | assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); | |
3224 | ||
3225 | do { /* traverse through chain of same-sized nodes */ | |
3226 | do_check_any_chunk(m, ((mchunkptr)u)); | |
3227 | assert(u->index == tindex); | |
3228 | assert(chunksize(u) == tsize); | |
3229 | assert(!cinuse(u)); | |
3230 | assert(!next_pinuse(u)); | |
3231 | assert(u->fd->bk == u); | |
3232 | assert(u->bk->fd == u); | |
3233 | if (u->parent == 0) { | |
3234 | assert(u->child[0] == 0); | |
3235 | assert(u->child[1] == 0); | |
3236 | } | |
3237 | else { | |
3238 | assert(head == 0); /* only one node on chain has parent */ | |
3239 | head = u; | |
3240 | assert(u->parent != u); | |
3241 | assert (u->parent->child[0] == u || | |
3242 | u->parent->child[1] == u || | |
3243 | *((tbinptr*)(u->parent)) == u); | |
3244 | if (u->child[0] != 0) { | |
3245 | assert(u->child[0]->parent == u); | |
3246 | assert(u->child[0] != u); | |
3247 | do_check_tree(m, u->child[0]); | |
3248 | } | |
3249 | if (u->child[1] != 0) { | |
3250 | assert(u->child[1]->parent == u); | |
3251 | assert(u->child[1] != u); | |
3252 | do_check_tree(m, u->child[1]); | |
3253 | } | |
3254 | if (u->child[0] != 0 && u->child[1] != 0) { | |
3255 | assert(chunksize(u->child[0]) < chunksize(u->child[1])); | |
3256 | } | |
3257 | } | |
3258 | u = u->fd; | |
3259 | } while (u != t); | |
3260 | assert(head != 0); | |
3261 | } | |
3262 | ||
3263 | /* Check all the chunks in a treebin. */ | |
3264 | static void do_check_treebin(mstate m, bindex_t i) { | |
3265 | tbinptr* tb = treebin_at(m, i); | |
3266 | tchunkptr t = *tb; | |
3267 | int empty = (m->treemap & (1U << i)) == 0; | |
3268 | if (t == 0) | |
3269 | assert(empty); | |
3270 | if (!empty) | |
3271 | do_check_tree(m, t); | |
3272 | } | |
3273 | ||
3274 | /* Check all the chunks in a smallbin. */ | |
3275 | static void do_check_smallbin(mstate m, bindex_t i) { | |
3276 | sbinptr b = smallbin_at(m, i); | |
3277 | mchunkptr p = b->bk; | |
3278 | unsigned int empty = (m->smallmap & (1U << i)) == 0; | |
3279 | if (p == b) | |
3280 | assert(empty); | |
3281 | if (!empty) { | |
3282 | for (; p != b; p = p->bk) { | |
3283 | size_t size = chunksize(p); | |
3284 | mchunkptr q; | |
3285 | /* each chunk claims to be free */ | |
3286 | do_check_free_chunk(m, p); | |
3287 | /* chunk belongs in bin */ | |
3288 | assert(small_index(size) == i); | |
3289 | assert(p->bk == b || chunksize(p->bk) == chunksize(p)); | |
3290 | /* chunk is followed by an inuse chunk */ | |
3291 | q = next_chunk(p); | |
3292 | if (q->head != FENCEPOST_HEAD) | |
3293 | do_check_inuse_chunk(m, q); | |
3294 | } | |
3295 | } | |
3296 | } | |
3297 | ||
3298 | /* Find x in a bin. Used in other check functions. */ | |
3299 | static int bin_find(mstate m, mchunkptr x) { | |
3300 | size_t size = chunksize(x); | |
3301 | if (is_small(size)) { | |
3302 | bindex_t sidx = small_index(size); | |
3303 | sbinptr b = smallbin_at(m, sidx); | |
3304 | if (smallmap_is_marked(m, sidx)) { | |
3305 | mchunkptr p = b; | |
3306 | do { | |
3307 | if (p == x) | |
3308 | return 1; | |
3309 | } while ((p = p->fd) != b); | |
3310 | } | |
3311 | } | |
3312 | else { | |
3313 | bindex_t tidx; | |
3314 | compute_tree_index(size, tidx); | |
3315 | if (treemap_is_marked(m, tidx)) { | |
3316 | tchunkptr t = *treebin_at(m, tidx); | |
3317 | size_t sizebits = size << leftshift_for_tree_index(tidx); | |
3318 | while (t != 0 && chunksize(t) != size) { | |
3319 | t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; | |
3320 | sizebits <<= 1; | |
3321 | } | |
3322 | if (t != 0) { | |
3323 | tchunkptr u = t; | |
3324 | do { | |
3325 | if (u == (tchunkptr)x) | |
3326 | return 1; | |
3327 | } while ((u = u->fd) != t); | |
3328 | } | |
3329 | } | |
3330 | } | |
3331 | return 0; | |
3332 | } | |
3333 | ||
3334 | /* Traverse each chunk and check it; return total */ | |
3335 | static size_t traverse_and_check(mstate m) { | |
3336 | size_t sum = 0; | |
3337 | if (is_initialized(m)) { | |
3338 | msegmentptr s = &m->seg; | |
3339 | sum += m->topsize + TOP_FOOT_SIZE; | |
3340 | while (s != 0) { | |
3341 | mchunkptr q = align_as_chunk(s->base); | |
3342 | mchunkptr lastq = 0; | |
3343 | assert(pinuse(q)); | |
3344 | while (segment_holds(s, q) && | |
3345 | q != m->top && q->head != FENCEPOST_HEAD) { | |
3346 | sum += chunksize(q); | |
3347 | if (cinuse(q)) { | |
3348 | assert(!bin_find(m, q)); | |
3349 | do_check_inuse_chunk(m, q); | |
3350 | } | |
3351 | else { | |
3352 | assert(q == m->dv || bin_find(m, q)); | |
3353 | assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */ | |
3354 | do_check_free_chunk(m, q); | |
3355 | } | |
3356 | lastq = q; | |
3357 | q = next_chunk(q); | |
3358 | } | |
3359 | s = s->next; | |
3360 | } | |
3361 | } | |
3362 | return sum; | |
3363 | } | |
3364 | ||
3365 | /* Check all properties of malloc_state. */ | |
3366 | static void do_check_malloc_state(mstate m) { | |
3367 | bindex_t i; | |
3368 | size_t total; | |
3369 | /* check bins */ | |
3370 | for (i = 0; i < NSMALLBINS; ++i) | |
3371 | do_check_smallbin(m, i); | |
3372 | for (i = 0; i < NTREEBINS; ++i) | |
3373 | do_check_treebin(m, i); | |
3374 | ||
3375 | if (m->dvsize != 0) { /* check dv chunk */ | |
3376 | do_check_any_chunk(m, m->dv); | |
3377 | assert(m->dvsize == chunksize(m->dv)); | |
3378 | assert(m->dvsize >= MIN_CHUNK_SIZE); | |
3379 | assert(bin_find(m, m->dv) == 0); | |
3380 | } | |
3381 | ||
3382 | if (m->top != 0) { /* check top chunk */ | |
3383 | do_check_top_chunk(m, m->top); | |
3384 | /*assert(m->topsize == chunksize(m->top)); redundant */ | |
3385 | assert(m->topsize > 0); | |
3386 | assert(bin_find(m, m->top) == 0); | |
3387 | } | |
3388 | ||
3389 | total = traverse_and_check(m); | |
3390 | assert(total <= m->footprint); | |
3391 | assert(m->footprint <= m->max_footprint); | |
3392 | } | |
3393 | #endif /* DEBUG */ | |
3394 | ||
3395 | /* ----------------------------- statistics ------------------------------ */ | |
3396 | ||
3397 | #if !NO_MALLINFO | |
3398 | static struct mallinfo internal_mallinfo(mstate m) { | |
3399 | struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; | |
3400 | ensure_initialization(); | |
3401 | if (!PREACTION(m)) { | |
3402 | check_malloc_state(m); | |
3403 | if (is_initialized(m)) { | |
3404 | size_t nfree = SIZE_T_ONE; /* top always free */ | |
3405 | size_t mfree = m->topsize + TOP_FOOT_SIZE; | |
3406 | size_t sum = mfree; | |
3407 | msegmentptr s = &m->seg; | |
3408 | while (s != 0) { | |
3409 | mchunkptr q = align_as_chunk(s->base); | |
3410 | while (segment_holds(s, q) && | |
3411 | q != m->top && q->head != FENCEPOST_HEAD) { | |
3412 | size_t sz = chunksize(q); | |
3413 | sum += sz; | |
3414 | if (!cinuse(q)) { | |
3415 | mfree += sz; | |
3416 | ++nfree; | |
3417 | } | |
3418 | q = next_chunk(q); | |
3419 | } | |
3420 | s = s->next; | |
3421 | } | |
3422 | ||
3423 | nm.arena = sum; | |
3424 | nm.ordblks = nfree; | |
3425 | nm.hblkhd = m->footprint - sum; | |
3426 | nm.usmblks = m->max_footprint; | |
3427 | nm.uordblks = m->footprint - mfree; | |
3428 | nm.fordblks = mfree; | |
3429 | nm.keepcost = m->topsize; | |
3430 | } | |
3431 | ||
3432 | POSTACTION(m); | |
3433 | } | |
3434 | return nm; | |
3435 | } | |
3436 | #endif /* !NO_MALLINFO */ | |
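#if !NO_MALLINFO
/*
  Illustration only (the example_* name is hypothetical): identities
  that follow from the assignments above -- in-use plus free bytes
  equals the footprint, which never exceeds the recorded maximum.
*/
static void example_mallinfo_identities(mstate m) {
  struct mallinfo mi = internal_mallinfo(m);
  assert(mi.arena + mi.hblkhd == mi.uordblks + mi.fordblks); /* == footprint */
  assert(mi.usmblks >= mi.arena + mi.hblkhd);                /* max footprint */
}
#endif /* !NO_MALLINFO */
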
3437 | ||
3438 | static void internal_malloc_stats(mstate m) { | |
3439 | ensure_initialization(); | |
3440 | if (!PREACTION(m)) { | |
3441 | size_t maxfp = 0; | |
3442 | size_t fp = 0; | |
3443 | size_t used = 0; | |
3444 | check_malloc_state(m); | |
3445 | if (is_initialized(m)) { | |
3446 | msegmentptr s = &m->seg; | |
3447 | maxfp = m->max_footprint; | |
3448 | fp = m->footprint; | |
3449 | used = fp - (m->topsize + TOP_FOOT_SIZE); | |
3450 | ||
3451 | while (s != 0) { | |
3452 | mchunkptr q = align_as_chunk(s->base); | |
3453 | while (segment_holds(s, q) && | |
3454 | q != m->top && q->head != FENCEPOST_HEAD) { | |
3455 | if (!cinuse(q)) | |
3456 | used -= chunksize(q); | |
3457 | q = next_chunk(q); | |
3458 | } | |
3459 | s = s->next; | |
3460 | } | |
3461 | } | |
3462 | ||
3463 | fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp)); | |
3464 | fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp)); | |
3465 | fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used)); | |
3466 | ||
3467 | POSTACTION(m); | |
3468 | } | |
3469 | } | |
3470 | ||
3471 | /* ----------------------- Operations on smallbins ----------------------- */ | |
3472 | ||
3473 | /* | |
3474 | Various forms of linking and unlinking are defined as macros. Even | |
3475 | the ones for trees, which are very long but have very short typical | |
3476 | paths. This is ugly but reduces reliance on inlining support of | |
3477 | compilers. | |
3478 | */ | |
3479 | ||
3480 | /* Link a free chunk into a smallbin */ | |
3481 | #define insert_small_chunk(M, P, S) {\ | |
3482 | bindex_t I = small_index(S);\ | |
3483 | mchunkptr B = smallbin_at(M, I);\ | |
3484 | mchunkptr F = B;\ | |
3485 | assert(S >= MIN_CHUNK_SIZE);\ | |
3486 | if (!smallmap_is_marked(M, I))\ | |
3487 | mark_smallmap(M, I);\ | |
3488 | else if (RTCHECK(ok_address(M, B->fd)))\ | |
3489 | F = B->fd;\ | |
3490 | else {\ | |
3491 | CORRUPTION_ERROR_ACTION(M);\ | |
3492 | }\ | |
3493 | B->fd = P;\ | |
3494 | F->bk = P;\ | |
3495 | P->fd = F;\ | |
3496 | P->bk = B;\ | |
3497 | } | |
3498 | ||
3499 | /* Unlink a chunk from a smallbin */ | |
3500 | #define unlink_small_chunk(M, P, S) {\ | |
3501 | mchunkptr F = P->fd;\ | |
3502 | mchunkptr B = P->bk;\ | |
3503 | bindex_t I = small_index(S);\ | |
3504 | assert(P != B);\ | |
3505 | assert(P != F);\ | |
3506 | assert(chunksize(P) == small_index2size(I));\ | |
3507 | if (F == B)\ | |
3508 | clear_smallmap(M, I);\ | |
3509 | else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\ | |
3510 | (B == smallbin_at(M,I) || ok_address(M, B)))) {\ | |
3511 | F->bk = B;\ | |
3512 | B->fd = F;\ | |
3513 | }\ | |
3514 | else {\ | |
3515 | CORRUPTION_ERROR_ACTION(M);\ | |
3516 | }\ | |
3517 | } | |
3518 | ||
3519 | /* Unlink the first chunk from a smallbin */ | |
3520 | #define unlink_first_small_chunk(M, B, P, I) {\ | |
3521 | mchunkptr F = P->fd;\ | |
3522 | assert(P != B);\ | |
3523 | assert(P != F);\ | |
3524 | assert(chunksize(P) == small_index2size(I));\ | |
3525 | if (B == F)\ | |
3526 | clear_smallmap(M, I);\ | |
3527 | else if (RTCHECK(ok_address(M, F))) {\ | |
3528 | B->fd = F;\ | |
3529 | F->bk = B;\ | |
3530 | }\ | |
3531 | else {\ | |
3532 | CORRUPTION_ERROR_ACTION(M);\ | |
3533 | }\ | |
3534 | } | |
3535 | ||
3536 | ||
3537 | ||
3538 | /* Replace dv node, binning the old one */ | |
3539 | /* Used only when dvsize known to be small */ | |
3540 | #define replace_dv(M, P, S) {\ | |
3541 | size_t DVS = M->dvsize;\ | |
3542 | if (DVS != 0) {\ | |
3543 | mchunkptr DV = M->dv;\ | |
3544 | assert(is_small(DVS));\ | |
3545 | insert_small_chunk(M, DV, DVS);\ | |
3546 | }\ | |
3547 | M->dvsize = S;\ | |
3548 | M->dv = P;\ | |
3549 | } | |
3550 | ||
3551 | /* ------------------------- Operations on trees ------------------------- */ | |
3552 | ||
3553 | /* Insert chunk into tree */ | |
3554 | #define insert_large_chunk(M, X, S) {\ | |
3555 | tbinptr* H;\ | |
3556 | bindex_t I;\ | |
3557 | compute_tree_index(S, I);\ | |
3558 | H = treebin_at(M, I);\ | |
3559 | X->index = I;\ | |
3560 | X->child[0] = X->child[1] = 0;\ | |
3561 | if (!treemap_is_marked(M, I)) {\ | |
3562 | mark_treemap(M, I);\ | |
3563 | *H = X;\ | |
3564 | X->parent = (tchunkptr)H;\ | |
3565 | X->fd = X->bk = X;\ | |
3566 | }\ | |
3567 | else {\ | |
3568 | tchunkptr T = *H;\ | |
3569 | size_t K = S << leftshift_for_tree_index(I);\ | |
3570 | for (;;) {\ | |
3571 | if (chunksize(T) != S) {\ | |
3572 | tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\ | |
3573 | K <<= 1;\ | |
3574 | if (*C != 0)\ | |
3575 | T = *C;\ | |
3576 | else if (RTCHECK(ok_address(M, C))) {\ | |
3577 | *C = X;\ | |
3578 | X->parent = T;\ | |
3579 | X->fd = X->bk = X;\ | |
3580 | break;\ | |
3581 | }\ | |
3582 | else {\ | |
3583 | CORRUPTION_ERROR_ACTION(M);\ | |
3584 | break;\ | |
3585 | }\ | |
3586 | }\ | |
3587 | else {\ | |
3588 | tchunkptr F = T->fd;\ | |
3589 | if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\ | |
3590 | T->fd = F->bk = X;\ | |
3591 | X->fd = F;\ | |
3592 | X->bk = T;\ | |
3593 | X->parent = 0;\ | |
3594 | break;\ | |
3595 | }\ | |
3596 | else {\ | |
3597 | CORRUPTION_ERROR_ACTION(M);\ | |
3598 | break;\ | |
3599 | }\ | |
3600 | }\ | |
3601 | }\ | |
3602 | }\ | |
3603 | } | |
3604 | ||
3605 | /* | |
3606 | Unlink steps: | |
3607 | ||
3608 | 1. If x is a chained node, unlink it from its same-sized fd/bk links | |
3609 | and choose its bk node as its replacement. | |
3610 | 2. If x was the last node of its size, but not a leaf node, it must | |
3611 | be replaced with a leaf node (not merely one with an open left or | |
3612 | right), to make sure that lefts and rights of descendants |
3613 | correspond properly to bit masks. We use the rightmost descendant | |
3614 | of x. We could use any other leaf, but this is easy to locate and |
3615 | tends to counteract removal of leftmosts elsewhere, and so keeps | |
3616 | paths shorter than minimally guaranteed. This doesn't loop much | |
3617 | because on average a node in a tree is near the bottom. | |
3618 | 3. If x is the base of a chain (i.e., has parent links) relink | |
3619 | x's parent and children to x's replacement (or null if none). | |
3620 | */ | |
3621 | ||
3622 | #define unlink_large_chunk(M, X) {\ | |
3623 | tchunkptr XP = X->parent;\ | |
3624 | tchunkptr R;\ | |
3625 | if (X->bk != X) {\ | |
3626 | tchunkptr F = X->fd;\ | |
3627 | R = X->bk;\ | |
3628 | if (RTCHECK(ok_address(M, F))) {\ | |
3629 | F->bk = R;\ | |
3630 | R->fd = F;\ | |
3631 | }\ | |
3632 | else {\ | |
3633 | CORRUPTION_ERROR_ACTION(M);\ | |
3634 | }\ | |
3635 | }\ | |
3636 | else {\ | |
3637 | tchunkptr* RP;\ | |
3638 | if (((R = *(RP = &(X->child[1]))) != 0) ||\ | |
3639 | ((R = *(RP = &(X->child[0]))) != 0)) {\ | |
3640 | tchunkptr* CP;\ | |
3641 | while ((*(CP = &(R->child[1])) != 0) ||\ | |
3642 | (*(CP = &(R->child[0])) != 0)) {\ | |
3643 | R = *(RP = CP);\ | |
3644 | }\ | |
3645 | if (RTCHECK(ok_address(M, RP)))\ | |
3646 | *RP = 0;\ | |
3647 | else {\ | |
3648 | CORRUPTION_ERROR_ACTION(M);\ | |
3649 | }\ | |
3650 | }\ | |
3651 | }\ | |
3652 | if (XP != 0) {\ | |
3653 | tbinptr* H = treebin_at(M, X->index);\ | |
3654 | if (X == *H) {\ | |
3655 | if ((*H = R) == 0) \ | |
3656 | clear_treemap(M, X->index);\ | |
3657 | }\ | |
3658 | else if (RTCHECK(ok_address(M, XP))) {\ | |
3659 | if (XP->child[0] == X) \ | |
3660 | XP->child[0] = R;\ | |
3661 | else \ | |
3662 | XP->child[1] = R;\ | |
3663 | }\ | |
3664 | else\ | |
3665 | CORRUPTION_ERROR_ACTION(M);\ | |
3666 | if (R != 0) {\ | |
3667 | if (RTCHECK(ok_address(M, R))) {\ | |
3668 | tchunkptr C0, C1;\ | |
3669 | R->parent = XP;\ | |
3670 | if ((C0 = X->child[0]) != 0) {\ | |
3671 | if (RTCHECK(ok_address(M, C0))) {\ | |
3672 | R->child[0] = C0;\ | |
3673 | C0->parent = R;\ | |
3674 | }\ | |
3675 | else\ | |
3676 | CORRUPTION_ERROR_ACTION(M);\ | |
3677 | }\ | |
3678 | if ((C1 = X->child[1]) != 0) {\ | |
3679 | if (RTCHECK(ok_address(M, C1))) {\ | |
3680 | R->child[1] = C1;\ | |
3681 | C1->parent = R;\ | |
3682 | }\ | |
3683 | else\ | |
3684 | CORRUPTION_ERROR_ACTION(M);\ | |
3685 | }\ | |
3686 | }\ | |
3687 | else\ | |
3688 | CORRUPTION_ERROR_ACTION(M);\ | |
3689 | }\ | |
3690 | }\ | |
3691 | } | |
3692 | ||
3693 | /* Relays to large vs small bin operations */ | |
3694 | ||
3695 | #define insert_chunk(M, P, S)\ | |
3696 | if (is_small(S)) insert_small_chunk(M, P, S)\ | |
3697 | else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); } | |
3698 | ||
3699 | #define unlink_chunk(M, P, S)\ | |
3700 | if (is_small(S)) unlink_small_chunk(M, P, S)\ | |
3701 | else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } | |
3702 | ||
3703 | ||
3704 | /* Relays to internal calls to malloc/free from realloc, memalign etc */ | |
3705 | ||
3706 | #if ONLY_MSPACES | |
3707 | #define internal_malloc(m, b) mspace_malloc(m, b) | |
3708 | #define internal_free(m, mem) mspace_free(m,mem); | |
3709 | #else /* ONLY_MSPACES */ | |
3710 | #if MSPACES | |
3711 | #define internal_malloc(m, b)\ | |
3712 | (m == gm)? dlmalloc(b) : mspace_malloc(m, b) | |
3713 | #define internal_free(m, mem)\ | |
3714 | if (m == gm) dlfree(mem); else mspace_free(m,mem); | |
3715 | #else /* MSPACES */ | |
3716 | #define internal_malloc(m, b) dlmalloc(b) | |
3717 | #define internal_free(m, mem) dlfree(mem) | |
3718 | #endif /* MSPACES */ | |
3719 | #endif /* ONLY_MSPACES */ | |
3720 | ||
3721 | /* ----------------------- Direct-mmapping chunks ----------------------- */ | |
3722 | ||
3723 | /* | |
3724 | Directly mmapped chunks are set up with an offset to the start of | |
3725 | the mmapped region stored in the prev_foot field of the chunk. This | |
3726 | allows reconstruction of the required argument to MUNMAP when freed, | |
3727 | and also allows adjustment of the returned chunk to meet alignment | |
3728 | requirements (especially in memalign). There is also enough space | |
3729 | allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain | |
3730 | the PINUSE bit so frees can be checked. | |
3731 | */ | |
3732 | ||
3733 | /* Malloc using mmap */ | |
3734 | static void* mmap_alloc(mstate m, size_t nb) { | |
3735 | size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); | |
3736 | if (mmsize > nb) { /* Check for wrap around 0 */ | |
3737 | char* mm = (char*)(CALL_DIRECT_MMAP(mmsize)); | |
3738 | if (mm != CMFAIL) { | |
3739 | size_t offset = align_offset(chunk2mem(mm)); | |
3740 | size_t psize = mmsize - offset - MMAP_FOOT_PAD; | |
3741 | mchunkptr p = (mchunkptr)(mm + offset); | |
3742 | p->prev_foot = offset | IS_MMAPPED_BIT; | |
3743 | (p)->head = (psize|CINUSE_BIT); | |
3744 | mark_inuse_foot(m, p, psize); | |
3745 | chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD; | |
3746 | chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0; | |
3747 | ||
3748 | if (mm < m->least_addr) | |
3749 | m->least_addr = mm; | |
3750 | if ((m->footprint += mmsize) > m->max_footprint) | |
3751 | m->max_footprint = m->footprint; | |
3752 | assert(is_aligned(chunk2mem(p))); | |
3753 | check_mmapped_chunk(m, p); | |
3754 | return chunk2mem(p); | |
3755 | } | |
3756 | } | |
3757 | return 0; | |
3758 | } | |
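/*
  Illustration only (the example_* name is hypothetical): when a
  directly mmapped chunk is freed, the original mapping base and
  length are reconstructed from the offset stashed in prev_foot,
  mirroring the length check in do_check_mmapped_chunk.
*/
static void example_unmap_args(mchunkptr p, char** base, size_t* len) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
  *base = (char*)p - offset;
  *len  = chunksize(p) + offset + MMAP_FOOT_PAD;
}
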
3759 | ||
3760 | /* Realloc using mmap */ | |
3761 | static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) { | |
3762 | size_t oldsize = chunksize(oldp); | |
3763 | if (is_small(nb)) /* Can't shrink mmap regions below small size */ | |
3764 | return 0; | |
3765 | /* Keep old chunk if big enough but not too big */ | |
3766 | if (oldsize >= nb + SIZE_T_SIZE && | |
3767 | (oldsize - nb) <= (mparams.granularity << 1)) | |
3768 | return oldp; | |
3769 | else { | |
3770 | size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT; | |
3771 | size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; | |
3772 | size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); | |
3773 | char* cp = (char*)CALL_MREMAP((char*)oldp - offset, | |
3774 | oldmmsize, newmmsize, 1); | |
3775 | if (cp != CMFAIL) { | |
3776 | mchunkptr newp = (mchunkptr)(cp + offset); | |
3777 | size_t psize = newmmsize - offset - MMAP_FOOT_PAD; | |
3778 | newp->head = (psize|CINUSE_BIT); | |
3779 | mark_inuse_foot(m, newp, psize); | |
3780 | chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD; | |
3781 | chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0; | |
3782 | ||
3783 | if (cp < m->least_addr) | |
3784 | m->least_addr = cp; | |
3785 | if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) | |
3786 | m->max_footprint = m->footprint; | |
3787 | check_mmapped_chunk(m, newp); | |
3788 | return newp; | |
3789 | } | |
3790 | } | |
3791 | return 0; | |
3792 | } | |
3793 | ||
3794 | /* -------------------------- mspace management -------------------------- */ | |
3795 | ||
3796 | /* Initialize top chunk and its size */ | |
3797 | static void init_top(mstate m, mchunkptr p, size_t psize) { | |
3798 | /* Ensure alignment */ | |
3799 | size_t offset = align_offset(chunk2mem(p)); | |
3800 | p = (mchunkptr)((char*)p + offset); | |
3801 | psize -= offset; | |
3802 | ||
3803 | m->top = p; | |
3804 | m->topsize = psize; | |
3805 | p->head = psize | PINUSE_BIT; | |
3806 | /* set size of fake trailing chunk holding overhead space only once */ | |
3807 | chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE; | |
3808 | m->trim_check = mparams.trim_threshold; /* reset on each update */ | |
3809 | } | |
3810 | ||
3811 | /* Initialize bins for a new mstate that is otherwise zeroed out */ | |
3812 | static void init_bins(mstate m) { | |
3813 | /* Establish circular links for smallbins */ | |
3814 | bindex_t i; | |
3815 | for (i = 0; i < NSMALLBINS; ++i) { | |
3816 | sbinptr bin = smallbin_at(m,i); | |
3817 | bin->fd = bin->bk = bin; | |
3818 | } | |
3819 | } | |
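/*
  Illustration only (the example_* name is hypothetical): after
  init_bins, each smallbin header is self-linked, which is exactly
  the structural "empty" condition the unlink macros rely on.
*/
static int example_smallbin_is_empty(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  return b->fd == b && b->bk == b;
}
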
3820 | ||
3821 | #if PROCEED_ON_ERROR | |
3822 | ||
3823 | /* default corruption action */ | |
3824 | static void reset_on_error(mstate m) { | |
3825 | int i; | |
3826 | ++malloc_corruption_error_count; | |
3827 | /* Reinitialize fields to forget about all memory */ | |
3828 | m->smallbins = m->treebins = 0; | |
3829 | m->dvsize = m->topsize = 0; | |
3830 | m->seg.base = 0; | |
3831 | m->seg.size = 0; | |
3832 | m->seg.next = 0; | |
3833 | m->top = m->dv = 0; | |
3834 | for (i = 0; i < NTREEBINS; ++i) | |
3835 | *treebin_at(m, i) = 0; | |
3836 | init_bins(m); | |
3837 | } | |
3838 | #endif /* PROCEED_ON_ERROR */ | |
3839 | ||
3840 | /* Allocate a chunk at newbase and consolidate the remainder with the first chunk of the old base. */ | |
3841 | static void* prepend_alloc(mstate m, char* newbase, char* oldbase, | |
3842 | size_t nb) { | |
3843 | mchunkptr p = align_as_chunk(newbase); | |
3844 | mchunkptr oldfirst = align_as_chunk(oldbase); | |
3845 | size_t psize = (char*)oldfirst - (char*)p; | |
3846 | mchunkptr q = chunk_plus_offset(p, nb); | |
3847 | size_t qsize = psize - nb; | |
3848 | set_size_and_pinuse_of_inuse_chunk(m, p, nb); | |
3849 | ||
3850 | assert((char*)oldfirst > (char*)q); | |
3851 | assert(pinuse(oldfirst)); | |
3852 | assert(qsize >= MIN_CHUNK_SIZE); | |
3853 | ||
3854 | /* consolidate remainder with first chunk of old base */ | |
3855 | if (oldfirst == m->top) { | |
3856 | size_t tsize = m->topsize += qsize; | |
3857 | m->top = q; | |
3858 | q->head = tsize | PINUSE_BIT; | |
3859 | check_top_chunk(m, q); | |
3860 | } | |
3861 | else if (oldfirst == m->dv) { | |
3862 | size_t dsize = m->dvsize += qsize; | |
3863 | m->dv = q; | |
3864 | set_size_and_pinuse_of_free_chunk(q, dsize); | |
3865 | } | |
3866 | else { | |
3867 | if (!cinuse(oldfirst)) { | |
3868 | size_t nsize = chunksize(oldfirst); | |
3869 | unlink_chunk(m, oldfirst, nsize); | |
3870 | oldfirst = chunk_plus_offset(oldfirst, nsize); | |
3871 | qsize += nsize; | |
3872 | } | |
3873 | set_free_with_pinuse(q, qsize, oldfirst); | |
3874 | insert_chunk(m, q, qsize); | |
3875 | check_free_chunk(m, q); | |
3876 | } | |
3877 | ||
3878 | check_malloced_chunk(m, chunk2mem(p), nb); | |
3879 | return chunk2mem(p); | |
3880 | } | |
3881 | ||
3882 | /* Add a segment to hold a new noncontiguous region */ | |
3883 | static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) { | |
3884 | /* Determine locations and sizes of segment, fenceposts, old top */ | |
3885 | char* old_top = (char*)m->top; | |
3886 | msegmentptr oldsp = segment_holding(m, old_top); | |
3887 | char* old_end = oldsp->base + oldsp->size; | |
3888 | size_t ssize = pad_request(sizeof(struct malloc_segment)); | |
3889 | char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK); | |
3890 | size_t offset = align_offset(chunk2mem(rawsp)); | |
3891 | char* asp = rawsp + offset; | |
3892 | char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp; | |
3893 | mchunkptr sp = (mchunkptr)csp; | |
3894 | msegmentptr ss = (msegmentptr)(chunk2mem(sp)); | |
3895 | mchunkptr tnext = chunk_plus_offset(sp, ssize); | |
3896 | mchunkptr p = tnext; | |
3897 | int nfences = 0; | |
3898 | ||
3899 | /* reset top to new space */ | |
3900 | init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); | |
3901 | ||
3902 | /* Set up segment record */ | |
3903 | assert(is_aligned(ss)); | |
3904 | set_size_and_pinuse_of_inuse_chunk(m, sp, ssize); | |
3905 | *ss = m->seg; /* Push current record */ | |
3906 | m->seg.base = tbase; | |
3907 | m->seg.size = tsize; | |
3908 | m->seg.sflags = mmapped; | |
3909 | m->seg.next = ss; | |
3910 | ||
3911 | /* Insert trailing fenceposts */ | |
3912 | for (;;) { | |
3913 | mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE); | |
3914 | p->head = FENCEPOST_HEAD; | |
3915 | ++nfences; | |
3916 | if ((char*)(&(nextp->head)) < old_end) | |
3917 | p = nextp; | |
3918 | else | |
3919 | break; | |
3920 | } | |
3921 | assert(nfences >= 2); | |
3922 | ||
3923 | /* Insert the rest of old top into a bin as an ordinary free chunk */ | |
3924 | if (csp != old_top) { | |
3925 | mchunkptr q = (mchunkptr)old_top; | |
3926 | size_t psize = csp - old_top; | |
3927 | mchunkptr tn = chunk_plus_offset(q, psize); | |
3928 | set_free_with_pinuse(q, psize, tn); | |
3929 | insert_chunk(m, q, psize); | |
3930 | } | |
3931 | ||
3932 | check_top_chunk(m, m->top); | |
3933 | } | |
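/*
  Editor's note: the fenceposts written above are tiny pseudo-chunks
  whose heads (FENCEPOST_HEAD) always read as in-use. Since free-chunk
  consolidation only merges with chunks marked free, the fenceposts
  guarantee that coalescing can never walk past the end of a segment
  into an unrelated region; at least two are laid down so the last
  real chunk always has a well-defined successor.
*/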
3934 | ||
3935 | /* -------------------------- System allocation -------------------------- */ | |
3936 | ||
3937 | /* Get memory from system using MORECORE or MMAP */ | |
3938 | static void* sys_alloc(mstate m, size_t nb) { | |
3939 | char* tbase = CMFAIL; | |
3940 | size_t tsize = 0; | |
3941 | flag_t mmap_flag = 0; | |
3942 | ||
3943 | ensure_initialization(); | |
3944 | ||
3945 | /* Directly map large chunks */ | |
3946 | if (use_mmap(m) && nb >= mparams.mmap_threshold) { | |
3947 | void* mem = mmap_alloc(m, nb); | |
3948 | if (mem != 0) | |
3949 | return mem; | |
3950 | } | |
3951 | ||
3952 | /* | |
3953 | Try getting memory in any of three ways (in most-preferred to | |
3954 | least-preferred order): | |
3955 | 1. A call to MORECORE that can normally contiguously extend memory. | |
3956 | (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or | |
3957 | main space is mmapped or a previous contiguous call failed) | |
3958 | 2. A call to MMAP new space (disabled if not HAVE_MMAP). |
3959 | Note that under the default settings, if MORECORE is unable to | |
3960 | fulfill a request, and HAVE_MMAP is true, then mmap is | |
3961 | used as a noncontiguous system allocator. This is a useful backup | |
3962 | strategy for systems with holes in address spaces -- in this case | |
3963 | sbrk cannot contiguously expand the heap, but mmap may be able to | |
3964 | find space. | |
3965 | 3. A call to MORECORE that cannot usually contiguously extend memory. | |
3966 | (disabled if not HAVE_MORECORE) | |
3967 | ||
3968 | In all cases, we need to request enough bytes from system to ensure | |
3969 | we can malloc nb bytes upon success, so pad with enough space for | |
3970 | top_foot, plus alignment-pad to make sure we don't lose bytes if | |
3971 | not on boundary, and round this up to a granularity unit. | |
3972 | */ | |
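/*
  Editor's note (worked example, assuming the Win32 default 64KB
  granularity; on unix the granularity defaults to the page size):
  for nb = 100000 the contiguous path requests
  asize = granularity_align(100000 + SYS_ALLOC_PADDING). The padding
  is only a few words of top_foot plus alignment slack, so this
  rounds up to 2 * 65536 = 131072 bytes -- enough to carve out the
  nb-byte chunk plus the trailing top fencepost from whatever the
  system hands back.
*/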
3973 | ||
3974 | if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) { | |
3975 | char* br = CMFAIL; | |
3976 | msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top); | |
3977 | size_t asize = 0; | |
3978 | ACQUIRE_MALLOC_GLOBAL_LOCK(); | |
3979 | ||
3980 | if (ss == 0) { /* First time through or recovery */ | |
3981 | char* base = (char*)CALL_MORECORE(0); | |
3982 | if (base != CMFAIL) { | |
3983 | asize = granularity_align(nb + SYS_ALLOC_PADDING); | |
3984 | /* Adjust to end on a page boundary */ | |
3985 | if (!is_page_aligned(base)) | |
3986 | asize += (page_align((size_t)base) - (size_t)base); | |
3987 | /* Can't call MORECORE if size is negative when treated as signed */ | |
3988 | if (asize < HALF_MAX_SIZE_T && | |
3989 | (br = (char*)(CALL_MORECORE(asize))) == base) { | |
3990 | tbase = base; | |
3991 | tsize = asize; | |
3992 | } | |
3993 | } | |
3994 | } | |
3995 | else { | |
3996 | /* Subtract out existing available top space from MORECORE request. */ | |
3997 | asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING); | |
3998 | /* Use mem here only if it did contiguously extend old space */ | |
3999 | if (asize < HALF_MAX_SIZE_T && | |
4000 | (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) { | |
4001 | tbase = br; | |
4002 | tsize = asize; | |
4003 | } | |
4004 | } | |
4005 | ||
4006 | if (tbase == CMFAIL) { /* Cope with partial failure */ | |
4007 | if (br != CMFAIL) { /* Try to use/extend the space we did get */ | |
4008 | if (asize < HALF_MAX_SIZE_T && | |
4009 | asize < nb + SYS_ALLOC_PADDING) { | |
4010 | size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize); | |
4011 | if (esize < HALF_MAX_SIZE_T) { | |
4012 | char* end = (char*)CALL_MORECORE(esize); | |
4013 | if (end != CMFAIL) | |
4014 | asize += esize; | |
4015 | else { /* Can't use; try to release */ | |
4016 | (void) CALL_MORECORE(-asize); | |
4017 | br = CMFAIL; | |
4018 | } | |
4019 | } | |
4020 | } | |
4021 | } | |
4022 | if (br != CMFAIL) { /* Use the space we did get */ | |
4023 | tbase = br; | |
4024 | tsize = asize; | |
4025 | } | |
4026 | else | |
4027 | disable_contiguous(m); /* Don't try contiguous path in the future */ | |
4028 | } | |
4029 | ||
4030 | RELEASE_MALLOC_GLOBAL_LOCK(); | |
4031 | } | |
4032 | ||
4033 | if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */ | |
4034 | size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING); | |
4035 | if (rsize > nb) { /* Fail if wraps around zero */ | |
4036 | char* mp = (char*)(CALL_MMAP(rsize)); | |
4037 | if (mp != CMFAIL) { | |
4038 | tbase = mp; | |
4039 | tsize = rsize; | |
4040 | mmap_flag = IS_MMAPPED_BIT; | |
4041 | } | |
4042 | } | |
4043 | } | |
4044 | ||
4045 | if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */ | |
4046 | size_t asize = granularity_align(nb + SYS_ALLOC_PADDING); | |
4047 | if (asize < HALF_MAX_SIZE_T) { | |
4048 | char* br = CMFAIL; | |
4049 | char* end = CMFAIL; | |
4050 | ACQUIRE_MALLOC_GLOBAL_LOCK(); | |
4051 | br = (char*)(CALL_MORECORE(asize)); | |
4052 | end = (char*)(CALL_MORECORE(0)); | |
4053 | RELEASE_MALLOC_GLOBAL_LOCK(); | |
4054 | if (br != CMFAIL && end != CMFAIL && br < end) { | |
4055 | size_t ssize = end - br; | |
4056 | if (ssize > nb + TOP_FOOT_SIZE) { | |
4057 | tbase = br; | |
4058 | tsize = ssize; | |
4059 | } | |
4060 | } | |
4061 | } | |
4062 | } | |
4063 | ||
4064 | if (tbase != CMFAIL) { | |
4065 | ||
4066 | if ((m->footprint += tsize) > m->max_footprint) | |
4067 | m->max_footprint = m->footprint; | |
4068 | ||
4069 | if (!is_initialized(m)) { /* first-time initialization */ | |
4070 | m->seg.base = m->least_addr = tbase; | |
4071 | m->seg.size = tsize; | |
4072 | m->seg.sflags = mmap_flag; | |
4073 | m->magic = mparams.magic; | |
4074 | m->release_checks = MAX_RELEASE_CHECK_RATE; | |
4075 | init_bins(m); | |
4076 | #if !ONLY_MSPACES | |
4077 | if (is_global(m)) | |
4078 | init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); | |
4079 | else | |
4080 | #endif | |
4081 | { | |
4082 | /* Offset top by embedded malloc_state */ | |
4083 | mchunkptr mn = next_chunk(mem2chunk(m)); | |
4084 | init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE); | |
4085 | } | |
4086 | } | |
4087 | ||
4088 | else { | |
4089 | /* Try to merge with an existing segment */ | |
4090 | msegmentptr sp = &m->seg; | |
4091 | /* Only consider most recent segment if traversal suppressed */ | |
4092 | while (sp != 0 && tbase != sp->base + sp->size) | |
4093 | sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; | |
4094 | if (sp != 0 && | |
4095 | !is_extern_segment(sp) && | |
4096 | (sp->sflags & IS_MMAPPED_BIT) == mmap_flag && | |
4097 | segment_holds(sp, m->top)) { /* append */ | |
4098 | sp->size += tsize; | |
4099 | init_top(m, m->top, m->topsize + tsize); | |
4100 | } | |
4101 | else { | |
4102 | if (tbase < m->least_addr) | |
4103 | m->least_addr = tbase; | |
4104 | sp = &m->seg; | |
4105 | while (sp != 0 && sp->base != tbase + tsize) | |
4106 | sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; | |
4107 | if (sp != 0 && | |
4108 | !is_extern_segment(sp) && | |
4109 | (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) { | |
4110 | char* oldbase = sp->base; | |
4111 | sp->base = tbase; | |
4112 | sp->size += tsize; | |
4113 | return prepend_alloc(m, tbase, oldbase, nb); | |
4114 | } | |
4115 | else | |
4116 | add_segment(m, tbase, tsize, mmap_flag); | |
4117 | } | |
4118 | } | |
4119 | ||
4120 | if (nb < m->topsize) { /* Allocate from new or extended top space */ | |
4121 | size_t rsize = m->topsize -= nb; | |
4122 | mchunkptr p = m->top; | |
4123 | mchunkptr r = m->top = chunk_plus_offset(p, nb); | |
4124 | r->head = rsize | PINUSE_BIT; | |
4125 | set_size_and_pinuse_of_inuse_chunk(m, p, nb); | |
4126 | check_top_chunk(m, m->top); | |
4127 | check_malloced_chunk(m, chunk2mem(p), nb); | |
4128 | return chunk2mem(p); | |
4129 | } | |
4130 | } | |
4131 | ||
4132 | MALLOC_FAILURE_ACTION; | |
4133 | return 0; | |
4134 | } | |
4135 | ||
4136 | /* ----------------------- system deallocation -------------------------- */ | |
4137 | ||
4138 | /* Unmap and unlink any mmapped segments that don't contain used chunks */ | |
4139 | static size_t release_unused_segments(mstate m) { | |
4140 | size_t released = 0; | |
4141 | int nsegs = 0; | |
4142 | msegmentptr pred = &m->seg; | |
4143 | msegmentptr sp = pred->next; | |
4144 | while (sp != 0) { | |
4145 | char* base = sp->base; | |
4146 | size_t size = sp->size; | |
4147 | msegmentptr next = sp->next; | |
4148 | ++nsegs; | |
4149 | if (is_mmapped_segment(sp) && !is_extern_segment(sp)) { | |
4150 | mchunkptr p = align_as_chunk(base); | |
4151 | size_t psize = chunksize(p); | |
4152 | /* Can unmap if first chunk holds entire segment and not pinned */ | |
4153 | if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) { | |
4154 | tchunkptr tp = (tchunkptr)p; | |
4155 | assert(segment_holds(sp, (char*)sp)); | |
4156 | if (p == m->dv) { | |
4157 | m->dv = 0; | |
4158 | m->dvsize = 0; | |
4159 | } | |
4160 | else { | |
4161 | unlink_large_chunk(m, tp); | |
4162 | } | |
4163 | if (CALL_MUNMAP(base, size) == 0) { | |
4164 | released += size; | |
4165 | m->footprint -= size; | |
4166 | /* unlink obsoleted record */ | |
4167 | sp = pred; | |
4168 | sp->next = next; | |
4169 | } | |
4170 | else { /* back out if cannot unmap */ | |
4171 | insert_large_chunk(m, tp, psize); | |
4172 | } | |
4173 | } | |
4174 | } | |
4175 | if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */ | |
4176 | break; | |
4177 | pred = sp; | |
4178 | sp = next; | |
4179 | } | |
4180 | /* Reset check counter */ | |
4181 | m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)? | |
4182 | nsegs : MAX_RELEASE_CHECK_RATE); | |
4183 | return released; | |
4184 | } | |
4185 | ||
4186 | static int sys_trim(mstate m, size_t pad) { | |
4187 | size_t released = 0; | |
4188 | ensure_initialization(); | |
4189 | if (pad < MAX_REQUEST && is_initialized(m)) { | |
4190 | pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */ | |
4191 | ||
4192 | if (m->topsize > pad) { | |
4193 | /* Shrink top space in granularity-size units, keeping at least one */ | |
4194 | size_t unit = mparams.granularity; | |
4195 | size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit - | |
4196 | SIZE_T_ONE) * unit; | |
4197 | msegmentptr sp = segment_holding(m, (char*)m->top); | |
4198 | ||
4199 | if (!is_extern_segment(sp)) { | |
4200 | if (is_mmapped_segment(sp)) { | |
4201 | if (HAVE_MMAP && | |
4202 | sp->size >= extra && | |
4203 | !has_segment_link(m, sp)) { /* can't shrink if pinned */ | |
4204 | size_t newsize = sp->size - extra; | |
4205 | /* Prefer mremap, fall back to munmap */ | |
4206 | if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) || | |
4207 | (CALL_MUNMAP(sp->base + newsize, extra) == 0)) { | |
4208 | released = extra; | |
4209 | } | |
4210 | } | |
4211 | } | |
4212 | else if (HAVE_MORECORE) { | |
4213 | if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */ | |
4214 | extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit; | |
4215 | ACQUIRE_MALLOC_GLOBAL_LOCK(); | |
4216 | { | |
4217 | /* Make sure end of memory is where we last set it. */ | |
4218 | char* old_br = (char*)(CALL_MORECORE(0)); | |
4219 | if (old_br == sp->base + sp->size) { | |
4220 | char* rel_br = (char*)(CALL_MORECORE(-extra)); | |
4221 | char* new_br = (char*)(CALL_MORECORE(0)); | |
4222 | if (rel_br != CMFAIL && new_br < old_br) | |
4223 | released = old_br - new_br; | |
4224 | } | |
4225 | } | |
4226 | RELEASE_MALLOC_GLOBAL_LOCK(); | |
4227 | } | |
4228 | } | |
4229 | ||
4230 | if (released != 0) { | |
4231 | sp->size -= released; | |
4232 | m->footprint -= released; | |
4233 | init_top(m, m->top, m->topsize - released); | |
4234 | check_top_chunk(m, m->top); | |
4235 | } | |
4236 | } | |
4237 | ||
4238 | /* Unmap any unused mmapped segments */ | |
4239 | if (HAVE_MMAP) | |
4240 | released += release_unused_segments(m); | |
4241 | ||
4242 | /* On failure, disable autotrim to avoid repeated failed future calls */ | |
4243 | if (released == 0 && m->topsize > m->trim_check) | |
4244 | m->trim_check = MAX_SIZE_T; | |
4245 | } | |
4246 | ||
4247 | return (released != 0)? 1 : 0; | |
4248 | } | |
4249 | ||
4250 | ||
4251 | /* ---------------------------- malloc support --------------------------- */ | |
4252 | ||
4253 | /* allocate a large request from the best fitting chunk in a treebin */ | |
4254 | static void* tmalloc_large(mstate m, size_t nb) { | |
4255 | tchunkptr v = 0; | |
4256 | size_t rsize = -nb; /* Unsigned negation */ | |
4257 | tchunkptr t; | |
4258 | bindex_t idx; | |
4259 | compute_tree_index(nb, idx); | |
4260 | if ((t = *treebin_at(m, idx)) != 0) { | |
4261 | /* Traverse tree for this bin looking for node with size == nb */ | |
4262 | size_t sizebits = nb << leftshift_for_tree_index(idx); | |
4263 | tchunkptr rst = 0; /* The deepest untaken right subtree */ | |
4264 | for (;;) { | |
4265 | tchunkptr rt; | |
4266 | size_t trem = chunksize(t) - nb; | |
4267 | if (trem < rsize) { | |
4268 | v = t; | |
4269 | if ((rsize = trem) == 0) | |
4270 | break; | |
4271 | } | |
4272 | rt = t->child[1]; | |
4273 | t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; | |
4274 | if (rt != 0 && rt != t) | |
4275 | rst = rt; | |
4276 | if (t == 0) { | |
4277 | t = rst; /* set t to least subtree holding sizes > nb */ | |
4278 | break; | |
4279 | } | |
4280 | sizebits <<= 1; | |
4281 | } | |
4282 | } | |
4283 | if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */ | |
4284 | binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap; | |
4285 | if (leftbits != 0) { | |
4286 | bindex_t i; | |
4287 | binmap_t leastbit = least_bit(leftbits); | |
4288 | compute_bit2idx(leastbit, i); | |
4289 | t = *treebin_at(m, i); | |
4290 | } | |
4291 | } | |
4292 | ||
4293 | while (t != 0) { /* find smallest of tree or subtree */ | |
4294 | size_t trem = chunksize(t) - nb; | |
4295 | if (trem < rsize) { | |
4296 | rsize = trem; | |
4297 | v = t; | |
4298 | } | |
4299 | t = leftmost_child(t); | |
4300 | } | |
4301 | ||
4302 | /* If dv is a better fit, return 0 so malloc will use it */ | |
4303 | if (v != 0 && rsize < (size_t)(m->dvsize - nb)) { | |
4304 | if (RTCHECK(ok_address(m, v))) { /* split */ | |
4305 | mchunkptr r = chunk_plus_offset(v, nb); | |
4306 | assert(chunksize(v) == rsize + nb); | |
4307 | if (RTCHECK(ok_next(v, r))) { | |
4308 | unlink_large_chunk(m, v); | |
4309 | if (rsize < MIN_CHUNK_SIZE) | |
4310 | set_inuse_and_pinuse(m, v, (rsize + nb)); | |
4311 | else { | |
4312 | set_size_and_pinuse_of_inuse_chunk(m, v, nb); | |
4313 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
4314 | insert_chunk(m, r, rsize); | |
4315 | } | |
4316 | return chunk2mem(v); | |
4317 | } | |
4318 | } | |
4319 | CORRUPTION_ERROR_ACTION(m); | |
4320 | } | |
4321 | return 0; | |
4322 | } | |
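/*
  Editor's note (sketch of the traversal above): each treebin is a
  bitwise trie keyed on chunk size. Descending one level consumes one
  bit of the request size, most-significant first, which is why
  sizebits is pre-shifted by leftshift_for_tree_index(idx) and then
  shifted left once per level:

    // branch on the current top bit of sizebits
    t = t->child[(sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1];
    sizebits <<= 1;

  Every node visited is itself a fit candidate, so the smallest
  adequate chunk seen so far is tracked in v/rsize on the way down,
  and the deepest right subtree not taken (rst) is remembered as the
  place holding strictly larger sizes if the exact-fit path dead-ends.
*/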
4323 | ||
4324 | /* allocate a small request from the best fitting chunk in a treebin */ | |
4325 | static void* tmalloc_small(mstate m, size_t nb) { | |
4326 | tchunkptr t, v; | |
4327 | size_t rsize; | |
4328 | bindex_t i; | |
4329 | binmap_t leastbit = least_bit(m->treemap); | |
4330 | compute_bit2idx(leastbit, i); | |
4331 | v = t = *treebin_at(m, i); | |
4332 | rsize = chunksize(t) - nb; | |
4333 | ||
4334 | while ((t = leftmost_child(t)) != 0) { | |
4335 | size_t trem = chunksize(t) - nb; | |
4336 | if (trem < rsize) { | |
4337 | rsize = trem; | |
4338 | v = t; | |
4339 | } | |
4340 | } | |
4341 | ||
4342 | if (RTCHECK(ok_address(m, v))) { | |
4343 | mchunkptr r = chunk_plus_offset(v, nb); | |
4344 | assert(chunksize(v) == rsize + nb); | |
4345 | if (RTCHECK(ok_next(v, r))) { | |
4346 | unlink_large_chunk(m, v); | |
4347 | if (rsize < MIN_CHUNK_SIZE) | |
4348 | set_inuse_and_pinuse(m, v, (rsize + nb)); | |
4349 | else { | |
4350 | set_size_and_pinuse_of_inuse_chunk(m, v, nb); | |
4351 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
4352 | replace_dv(m, r, rsize); | |
4353 | } | |
4354 | return chunk2mem(v); | |
4355 | } | |
4356 | } | |
4357 | ||
4358 | CORRUPTION_ERROR_ACTION(m); | |
4359 | return 0; | |
4360 | } | |
4361 | ||
4362 | /* --------------------------- realloc support --------------------------- */ | |
4363 | ||
4364 | static void* internal_realloc(mstate m, void* oldmem, size_t bytes) { | |
4365 | if (bytes >= MAX_REQUEST) { | |
4366 | MALLOC_FAILURE_ACTION; | |
4367 | return 0; | |
4368 | } | |
4369 | if (!PREACTION(m)) { | |
4370 | mchunkptr oldp = mem2chunk(oldmem); | |
4371 | size_t oldsize = chunksize(oldp); | |
4372 | mchunkptr next = chunk_plus_offset(oldp, oldsize); | |
4373 | mchunkptr newp = 0; | |
4374 | void* extra = 0; | |
4375 | ||
4376 | /* Try to either shrink or extend into top. Else malloc-copy-free */ | |
4377 | ||
4378 | if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) && | |
4379 | ok_next(oldp, next) && ok_pinuse(next))) { | |
4380 | size_t nb = request2size(bytes); | |
4381 | if (is_mmapped(oldp)) | |
4382 | newp = mmap_resize(m, oldp, nb); | |
4383 | else if (oldsize >= nb) { /* already big enough */ | |
4384 | size_t rsize = oldsize - nb; | |
4385 | newp = oldp; | |
4386 | if (rsize >= MIN_CHUNK_SIZE) { | |
4387 | mchunkptr remainder = chunk_plus_offset(newp, nb); | |
4388 | set_inuse(m, newp, nb); | |
4389 | set_inuse(m, remainder, rsize); | |
4390 | extra = chunk2mem(remainder); | |
4391 | } | |
4392 | } | |
4393 | else if (next == m->top && oldsize + m->topsize > nb) { | |
4394 | /* Expand into top */ | |
4395 | size_t newsize = oldsize + m->topsize; | |
4396 | size_t newtopsize = newsize - nb; | |
4397 | mchunkptr newtop = chunk_plus_offset(oldp, nb); | |
4398 | set_inuse(m, oldp, nb); | |
4399 | newtop->head = newtopsize |PINUSE_BIT; | |
4400 | m->top = newtop; | |
4401 | m->topsize = newtopsize; | |
4402 | newp = oldp; | |
4403 | } | |
4404 | } | |
4405 | else { | |
4406 | USAGE_ERROR_ACTION(m, oldmem); | |
4407 | POSTACTION(m); | |
4408 | return 0; | |
4409 | } | |
4410 | ||
4411 | POSTACTION(m); | |
4412 | ||
4413 | if (newp != 0) { | |
4414 | if (extra != 0) { | |
4415 | internal_free(m, extra); | |
4416 | } | |
4417 | check_inuse_chunk(m, newp); | |
4418 | return chunk2mem(newp); | |
4419 | } | |
4420 | else { | |
4421 | void* newmem = internal_malloc(m, bytes); | |
4422 | if (newmem != 0) { | |
4423 | size_t oc = oldsize - overhead_for(oldp); | |
4424 | memcpy(newmem, oldmem, (oc < bytes)? oc : bytes); | |
4425 | internal_free(m, oldmem); | |
4426 | } | |
4427 | return newmem; | |
4428 | } | |
4429 | } | |
4430 | return 0; | |
4431 | } | |
4432 | ||
4433 | /* --------------------------- memalign support -------------------------- */ | |
4434 | ||
4435 | static void* internal_memalign(mstate m, size_t alignment, size_t bytes) { | |
4436 | if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */ | |
4437 | return internal_malloc(m, bytes); | |
4438 | if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */ | |
4439 | alignment = MIN_CHUNK_SIZE; | |
4440 | if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */ | |
4441 | size_t a = MALLOC_ALIGNMENT << 1; | |
4442 | while (a < alignment) a <<= 1; | |
4443 | alignment = a; | |
4444 | } | |
4445 | ||
4446 | if (bytes >= MAX_REQUEST - alignment) { | |
4447 | if (m != 0) { /* Test isn't needed but avoids compiler warning */ | |
4448 | MALLOC_FAILURE_ACTION; | |
4449 | } | |
4450 | } | |
4451 | else { | |
4452 | size_t nb = request2size(bytes); | |
4453 | size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD; | |
4454 | char* mem = (char*)internal_malloc(m, req); | |
4455 | if (mem != 0) { | |
4456 | void* leader = 0; | |
4457 | void* trailer = 0; | |
4458 | mchunkptr p = mem2chunk(mem); | |
4459 | ||
4460 | if (PREACTION(m)) return 0; | |
4461 | if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */ | |
4462 | /* | |
4463 | Find an aligned spot inside chunk. Since we need to give | |
4464 | back leading space in a chunk of at least MIN_CHUNK_SIZE, if | |
4465 | the first calculation places us at a spot with less than | |
4466 | MIN_CHUNK_SIZE leader, we can move to the next aligned spot. | |
4467 | We've allocated enough total room so that this is always | |
4468 | possible. | |
4469 | */ | |
4470 | char* br = (char*)mem2chunk((size_t)(((size_t)(mem + | |
4471 | alignment - | |
4472 | SIZE_T_ONE)) & | |
4473 | -alignment)); | |
4474 | char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)? | |
4475 | br : br+alignment; | |
4476 | mchunkptr newp = (mchunkptr)pos; | |
4477 | size_t leadsize = pos - (char*)(p); | |
4478 | size_t newsize = chunksize(p) - leadsize; | |
4479 | ||
4480 | if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */ | |
4481 | newp->prev_foot = p->prev_foot + leadsize; | |
4482 | newp->head = (newsize|CINUSE_BIT); | |
4483 | } | |
4484 | else { /* Otherwise, give back leader, use the rest */ | |
4485 | set_inuse(m, newp, newsize); | |
4486 | set_inuse(m, p, leadsize); | |
4487 | leader = chunk2mem(p); | |
4488 | } | |
4489 | p = newp; | |
4490 | } | |
4491 | ||
4492 | /* Give back spare room at the end */ | |
4493 | if (!is_mmapped(p)) { | |
4494 | size_t size = chunksize(p); | |
4495 | if (size > nb + MIN_CHUNK_SIZE) { | |
4496 | size_t remainder_size = size - nb; | |
4497 | mchunkptr remainder = chunk_plus_offset(p, nb); | |
4498 | set_inuse(m, p, nb); | |
4499 | set_inuse(m, remainder, remainder_size); | |
4500 | trailer = chunk2mem(remainder); | |
4501 | } | |
4502 | } | |
4503 | ||
4504 | assert (chunksize(p) >= nb); | |
4505 | assert((((size_t)(chunk2mem(p))) % alignment) == 0); | |
4506 | check_inuse_chunk(m, p); | |
4507 | POSTACTION(m); | |
4508 | if (leader != 0) { | |
4509 | internal_free(m, leader); | |
4510 | } | |
4511 | if (trailer != 0) { | |
4512 | internal_free(m, trailer); | |
4513 | } | |
4514 | return chunk2mem(p); | |
4515 | } | |
4516 | } | |
4517 | return 0; | |
4518 | } | |
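/*
  Editor's note (worked example of the alignment arithmetic above):
  the candidate spot uses the round-up-and-mask idiom

    aligned = (mem + alignment - SIZE_T_ONE) & -alignment;

  e.g. mem = 0x1234 with alignment = 0x100 gives
  0x1333 & ~0xff = 0x1300. Over-allocating by
  alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD ensures that either this
  spot or the next one (br + alignment) leaves a leading gap of at
  least MIN_CHUNK_SIZE, which can then be handed back as an ordinary
  free chunk.
*/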
4519 | ||
4520 | /* ------------------------ comalloc/coalloc support --------------------- */ | |
4521 | ||
4522 | static void** ialloc(mstate m, | |
4523 | size_t n_elements, | |
4524 | size_t* sizes, | |
4525 | int opts, | |
4526 | void* chunks[]) { | |
4527 | /* | |
4528 | This provides common support for independent_X routines, handling | |
4529 | all of the combinations that can result. | |
4530 | ||
4531 | The opts arg has: | |
4532 | bit 0 set if all elements are same size (using sizes[0]) | |
4533 | bit 1 set if elements should be zeroed | |
4534 | */ | |
4535 | ||
4536 | size_t element_size; /* chunksize of each element, if all same */ | |
4537 | size_t contents_size; /* total size of elements */ | |
4538 | size_t array_size; /* request size of pointer array */ | |
4539 | void* mem; /* malloced aggregate space */ | |
4540 | mchunkptr p; /* corresponding chunk */ | |
4541 | size_t remainder_size; /* remaining bytes while splitting */ | |
4542 | void** marray; /* either "chunks" or malloced ptr array */ | |
4543 | mchunkptr array_chunk; /* chunk for malloced ptr array */ | |
4544 | flag_t was_enabled; /* to disable mmap */ | |
4545 | size_t size; | |
4546 | size_t i; | |
4547 | ||
4548 | ensure_initialization(); | |
4549 | /* compute array length, if needed */ | |
4550 | if (chunks != 0) { | |
4551 | if (n_elements == 0) | |
4552 | return chunks; /* nothing to do */ | |
4553 | marray = chunks; | |
4554 | array_size = 0; | |
4555 | } | |
4556 | else { | |
4557 | /* if empty req, must still return chunk representing empty array */ | |
4558 | if (n_elements == 0) | |
4559 | return (void**)internal_malloc(m, 0); | |
4560 | marray = 0; | |
4561 | array_size = request2size(n_elements * (sizeof(void*))); | |
4562 | } | |
4563 | ||
4564 | /* compute total element size */ | |
4565 | if (opts & 0x1) { /* all-same-size */ | |
4566 | element_size = request2size(*sizes); | |
4567 | contents_size = n_elements * element_size; | |
4568 | } | |
4569 | else { /* add up all the sizes */ | |
4570 | element_size = 0; | |
4571 | contents_size = 0; | |
4572 | for (i = 0; i != n_elements; ++i) | |
4573 | contents_size += request2size(sizes[i]); | |
4574 | } | |
4575 | ||
4576 | size = contents_size + array_size; | |
4577 | ||
4578 | /* | |
4579 | Allocate the aggregate chunk. First disable direct-mmapping so | |
4580 | malloc won't use it, since we would not be able to later | |
4581 | free/realloc space internal to a segregated mmap region. | |
4582 | */ | |
4583 | was_enabled = use_mmap(m); | |
4584 | disable_mmap(m); | |
4585 | mem = internal_malloc(m, size - CHUNK_OVERHEAD); | |
4586 | if (was_enabled) | |
4587 | enable_mmap(m); | |
4588 | if (mem == 0) | |
4589 | return 0; | |
4590 | ||
4591 | if (PREACTION(m)) return 0; | |
4592 | p = mem2chunk(mem); | |
4593 | remainder_size = chunksize(p); | |
4594 | ||
4595 | assert(!is_mmapped(p)); | |
4596 | ||
4597 | if (opts & 0x2) { /* optionally clear the elements */ | |
4598 | memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size); | |
4599 | } | |
4600 | ||
4601 | /* If not provided, allocate the pointer array as final part of chunk */ | |
4602 | if (marray == 0) { | |
4603 | size_t array_chunk_size; | |
4604 | array_chunk = chunk_plus_offset(p, contents_size); | |
4605 | array_chunk_size = remainder_size - contents_size; | |
4606 | marray = (void**) (chunk2mem(array_chunk)); | |
4607 | set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size); | |
4608 | remainder_size = contents_size; | |
4609 | } | |
4610 | ||
4611 | /* split out elements */ | |
4612 | for (i = 0; ; ++i) { | |
4613 | marray[i] = chunk2mem(p); | |
4614 | if (i != n_elements-1) { | |
4615 | if (element_size != 0) | |
4616 | size = element_size; | |
4617 | else | |
4618 | size = request2size(sizes[i]); | |
4619 | remainder_size -= size; | |
4620 | set_size_and_pinuse_of_inuse_chunk(m, p, size); | |
4621 | p = chunk_plus_offset(p, size); | |
4622 | } | |
4623 | else { /* the final element absorbs any overallocation slop */ | |
4624 | set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size); | |
4625 | break; | |
4626 | } | |
4627 | } | |
4628 | ||
4629 | #if DEBUG | |
4630 | if (marray != chunks) { | |
4631 | /* final element must have exactly exhausted chunk */ | |
4632 | if (element_size != 0) { | |
4633 | assert(remainder_size == element_size); | |
4634 | } | |
4635 | else { | |
4636 | assert(remainder_size == request2size(sizes[i])); | |
4637 | } | |
4638 | check_inuse_chunk(m, mem2chunk(marray)); | |
4639 | } | |
4640 | for (i = 0; i != n_elements; ++i) | |
4641 | check_inuse_chunk(m, mem2chunk(marray[i])); | |
4642 | ||
4643 | #endif /* DEBUG */ | |
4644 | ||
4645 | POSTACTION(m); | |
4646 | return marray; | |
4647 | } | |
4648 | ||
4649 | ||
4650 | /* -------------------------- public routines ---------------------------- */ | |
4651 | ||
4652 | #if !ONLY_MSPACES | |
4653 | ||
4654 | void* dlmalloc(size_t bytes) { | |
4655 | /* | |
4656 | Basic algorithm: | |
4657 | If a small request (< 256 bytes minus per-chunk overhead): | |
4658 | 1. If one exists, use a remainderless chunk in associated smallbin. | |
4659 | (Remainderless means that there are too few excess bytes to | |
4660 | represent as a chunk.) | |
4661 | 2. If it is big enough, use the dv chunk, which is normally the | |
4662 | chunk adjacent to the one used for the most recent small request. | |
4663 | 3. If one exists, split the smallest available chunk in a bin, | |
4664 | saving remainder in dv. | |
4665 | 4. If it is big enough, use the top chunk. | |
4666 | 5. If available, get memory from system and use it | |
4667 | Otherwise, for a large request: | |
4668 | 1. Find the smallest available binned chunk that fits, and use it | |
4669 | if it is better fitting than dv chunk, splitting if necessary. | |
4670 | 2. If better fitting than any binned chunk, use the dv chunk. | |
4671 | 3. If it is big enough, use the top chunk. | |
4672 | 4. If request size >= mmap threshold, try to directly mmap this chunk. | |
4673 | 5. If available, get memory from system and use it | |
4674 | ||
4675 | The ugly goto's here ensure that postaction occurs along all paths. | |
4676 | */ | |
4677 | ||
4678 | #if USE_LOCKS | |
4679 | ensure_initialization(); /* initialize in sys_alloc if not using locks */ | |
4680 | #endif | |
4681 | ||
4682 | if (!PREACTION(gm)) { | |
4683 | void* mem; | |
4684 | size_t nb; | |
4685 | if (bytes <= MAX_SMALL_REQUEST) { | |
4686 | bindex_t idx; | |
4687 | binmap_t smallbits; | |
4688 | nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); | |
4689 | idx = small_index(nb); | |
4690 | smallbits = gm->smallmap >> idx; | |
4691 | ||
4692 | if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */ | |
4693 | mchunkptr b, p; | |
4694 | idx += ~smallbits & 1; /* Uses next bin if idx empty */ | |
4695 | b = smallbin_at(gm, idx); | |
4696 | p = b->fd; | |
4697 | assert(chunksize(p) == small_index2size(idx)); | |
4698 | unlink_first_small_chunk(gm, b, p, idx); | |
4699 | set_inuse_and_pinuse(gm, p, small_index2size(idx)); | |
4700 | mem = chunk2mem(p); | |
4701 | check_malloced_chunk(gm, mem, nb); | |
4702 | goto postaction; | |
4703 | } | |
4704 | ||
4705 | else if (nb > gm->dvsize) { | |
4706 | if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ | |
4707 | mchunkptr b, p, r; | |
4708 | size_t rsize; | |
4709 | bindex_t i; | |
4710 | binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); | |
4711 | binmap_t leastbit = least_bit(leftbits); | |
4712 | compute_bit2idx(leastbit, i); | |
4713 | b = smallbin_at(gm, i); | |
4714 | p = b->fd; | |
4715 | assert(chunksize(p) == small_index2size(i)); | |
4716 | unlink_first_small_chunk(gm, b, p, i); | |
4717 | rsize = small_index2size(i) - nb; | |
4718 | /* Fit here cannot be remainderless if 4byte sizes */ | |
4719 | if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) | |
4720 | set_inuse_and_pinuse(gm, p, small_index2size(i)); | |
4721 | else { | |
4722 | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); | |
4723 | r = chunk_plus_offset(p, nb); | |
4724 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
4725 | replace_dv(gm, r, rsize); | |
4726 | } | |
4727 | mem = chunk2mem(p); | |
4728 | check_malloced_chunk(gm, mem, nb); | |
4729 | goto postaction; | |
4730 | } | |
4731 | ||
4732 | else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) { | |
4733 | check_malloced_chunk(gm, mem, nb); | |
4734 | goto postaction; | |
4735 | } | |
4736 | } | |
4737 | } | |
4738 | else if (bytes >= MAX_REQUEST) | |
4739 | nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ | |
4740 | else { | |
4741 | nb = pad_request(bytes); | |
4742 | if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) { | |
4743 | check_malloced_chunk(gm, mem, nb); | |
4744 | goto postaction; | |
4745 | } | |
4746 | } | |
4747 | ||
4748 | if (nb <= gm->dvsize) { | |
4749 | size_t rsize = gm->dvsize - nb; | |
4750 | mchunkptr p = gm->dv; | |
4751 | if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ | |
4752 | mchunkptr r = gm->dv = chunk_plus_offset(p, nb); | |
4753 | gm->dvsize = rsize; | |
4754 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
4755 | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); | |
4756 | } | |
4757 | else { /* exhaust dv */ | |
4758 | size_t dvs = gm->dvsize; | |
4759 | gm->dvsize = 0; | |
4760 | gm->dv = 0; | |
4761 | set_inuse_and_pinuse(gm, p, dvs); | |
4762 | } | |
4763 | mem = chunk2mem(p); | |
4764 | check_malloced_chunk(gm, mem, nb); | |
4765 | goto postaction; | |
4766 | } | |
4767 | ||
4768 | else if (nb < gm->topsize) { /* Split top */ | |
4769 | size_t rsize = gm->topsize -= nb; | |
4770 | mchunkptr p = gm->top; | |
4771 | mchunkptr r = gm->top = chunk_plus_offset(p, nb); | |
4772 | r->head = rsize | PINUSE_BIT; | |
4773 | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); | |
4774 | mem = chunk2mem(p); | |
4775 | check_top_chunk(gm, gm->top); | |
4776 | check_malloced_chunk(gm, mem, nb); | |
4777 | goto postaction; | |
4778 | } | |
4779 | ||
4780 | mem = sys_alloc(gm, nb); | |
4781 | ||
4782 | postaction: | |
4783 | POSTACTION(gm); | |
4784 | return mem; | |
4785 | } | |
4786 | ||
4787 | return 0; | |
4788 | } | |
4789 | ||
4790 | void dlfree(void* mem) { | |
4791 | /* | |
4792 | Consolidate freed chunks with preceding or succeeding bordering | |
4793 | free chunks, if they exist, and then place in a bin. Intermixed |
4794 | with special cases for top, dv, mmapped chunks, and usage errors. | |
4795 | */ | |
4796 | ||
4797 | if (mem != 0) { | |
4798 | mchunkptr p = mem2chunk(mem); | |
4799 | #if FOOTERS | |
4800 | mstate fm = get_mstate_for(p); | |
4801 | if (!ok_magic(fm)) { | |
4802 | USAGE_ERROR_ACTION(fm, p); | |
4803 | return; | |
4804 | } | |
4805 | #else /* FOOTERS */ | |
4806 | #define fm gm | |
4807 | #endif /* FOOTERS */ | |
4808 | if (!PREACTION(fm)) { | |
4809 | check_inuse_chunk(fm, p); | |
4810 | if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) { | |
4811 | size_t psize = chunksize(p); | |
4812 | mchunkptr next = chunk_plus_offset(p, psize); | |
4813 | if (!pinuse(p)) { | |
4814 | size_t prevsize = p->prev_foot; | |
4815 | if ((prevsize & IS_MMAPPED_BIT) != 0) { | |
4816 | prevsize &= ~IS_MMAPPED_BIT; | |
4817 | psize += prevsize + MMAP_FOOT_PAD; | |
4818 | if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) | |
4819 | fm->footprint -= psize; | |
4820 | goto postaction; | |
4821 | } | |
4822 | else { | |
4823 | mchunkptr prev = chunk_minus_offset(p, prevsize); | |
4824 | psize += prevsize; | |
4825 | p = prev; | |
4826 | if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ | |
4827 | if (p != fm->dv) { | |
4828 | unlink_chunk(fm, p, prevsize); | |
4829 | } | |
4830 | else if ((next->head & INUSE_BITS) == INUSE_BITS) { | |
4831 | fm->dvsize = psize; | |
4832 | set_free_with_pinuse(p, psize, next); | |
4833 | goto postaction; | |
4834 | } | |
4835 | } | |
4836 | else | |
4837 | goto erroraction; | |
4838 | } | |
4839 | } | |
4840 | ||
4841 | if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { | |
4842 | if (!cinuse(next)) { /* consolidate forward */ | |
4843 | if (next == fm->top) { | |
4844 | size_t tsize = fm->topsize += psize; | |
4845 | fm->top = p; | |
4846 | p->head = tsize | PINUSE_BIT; | |
4847 | if (p == fm->dv) { | |
4848 | fm->dv = 0; | |
4849 | fm->dvsize = 0; | |
4850 | } | |
4851 | if (should_trim(fm, tsize)) | |
4852 | sys_trim(fm, 0); | |
4853 | goto postaction; | |
4854 | } | |
4855 | else if (next == fm->dv) { | |
4856 | size_t dsize = fm->dvsize += psize; | |
4857 | fm->dv = p; | |
4858 | set_size_and_pinuse_of_free_chunk(p, dsize); | |
4859 | goto postaction; | |
4860 | } | |
4861 | else { | |
4862 | size_t nsize = chunksize(next); | |
4863 | psize += nsize; | |
4864 | unlink_chunk(fm, next, nsize); | |
4865 | set_size_and_pinuse_of_free_chunk(p, psize); | |
4866 | if (p == fm->dv) { | |
4867 | fm->dvsize = psize; | |
4868 | goto postaction; | |
4869 | } | |
4870 | } | |
4871 | } | |
4872 | else | |
4873 | set_free_with_pinuse(p, psize, next); | |
4874 | ||
4875 | if (is_small(psize)) { | |
4876 | insert_small_chunk(fm, p, psize); | |
4877 | check_free_chunk(fm, p); | |
4878 | } | |
4879 | else { | |
4880 | tchunkptr tp = (tchunkptr)p; | |
4881 | insert_large_chunk(fm, tp, psize); | |
4882 | check_free_chunk(fm, p); | |
4883 | if (--fm->release_checks == 0) | |
4884 | release_unused_segments(fm); | |
4885 | } | |
4886 | goto postaction; | |
4887 | } | |
4888 | } | |
4889 | erroraction: | |
4890 | USAGE_ERROR_ACTION(fm, p); | |
4891 | postaction: | |
4892 | POSTACTION(fm); | |
4893 | } | |
4894 | } | |
4895 | #if !FOOTERS | |
4896 | #undef fm | |
4897 | #endif /* FOOTERS */ | |
4898 | } | |
4899 | ||
4900 | void* dlcalloc(size_t n_elements, size_t elem_size) { | |
4901 | void* mem; | |
4902 | size_t req = 0; | |
4903 | if (n_elements != 0) { | |
4904 | req = n_elements * elem_size; | |
4905 | if (((n_elements | elem_size) & ~(size_t)0xffff) && | |
4906 | (req / n_elements != elem_size)) | |
4907 | req = MAX_SIZE_T; /* force downstream failure on overflow */ | |
4908 | } | |
4909 | mem = dlmalloc(req); | |
4910 | if (mem != 0 && calloc_must_clear(mem2chunk(mem))) | |
4911 | memset(mem, 0, req); | |
4912 | return mem; | |
4913 | } | |
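/*
  Editor's note on the overflow test in dlcalloc: when either operand
  has a bit set above the low 16 ('& ~(size_t)0xffff'), the product
  may have wrapped, so it is checked by division; when both fit in 16
  bits the product fits in 32 and the division is skipped. For
  example, with a 32-bit size_t, n_elements = 0x10000 and
  elem_size = 0x10000 yield req = 0 by wraparound; the division test
  fails and req is forced to MAX_SIZE_T so dlmalloc fails cleanly.
*/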
4914 | ||
4915 | void* dlrealloc(void* oldmem, size_t bytes) { | |
4916 | if (oldmem == 0) | |
4917 | return dlmalloc(bytes); | |
4918 | #ifdef REALLOC_ZERO_BYTES_FREES | |
4919 | if (bytes == 0) { | |
4920 | dlfree(oldmem); | |
4921 | return 0; | |
4922 | } | |
4923 | #endif /* REALLOC_ZERO_BYTES_FREES */ | |
4924 | else { | |
4925 | #if ! FOOTERS | |
4926 | mstate m = gm; | |
4927 | #else /* FOOTERS */ | |
4928 | mstate m = get_mstate_for(mem2chunk(oldmem)); | |
4929 | if (!ok_magic(m)) { | |
4930 | USAGE_ERROR_ACTION(m, oldmem); | |
4931 | return 0; | |
4932 | } | |
4933 | #endif /* FOOTERS */ | |
4934 | return internal_realloc(m, oldmem, bytes); | |
4935 | } | |
4936 | } | |
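/*
  Editor's note (minimal usage sketch; 'buf', 'newsize', and
  handle_oom are hypothetical): dlrealloc returns 0 and leaves the old
  block intact on failure, so the old pointer must not be overwritten
  blindly:

    void* p = dlrealloc(buf, newsize);
    if (p != 0)
      buf = p;          // success: the old pointer may now be stale
    else
      handle_oom(buf);  // failure: buf is still valid and still owned
*/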
4937 | ||
4938 | void* dlmemalign(size_t alignment, size_t bytes) { | |
4939 | return internal_memalign(gm, alignment, bytes); | |
4940 | } | |
4941 | ||
4942 | void** dlindependent_calloc(size_t n_elements, size_t elem_size, | |
4943 | void* chunks[]) { | |
4944 | size_t sz = elem_size; /* serves as 1-element array */ | |
4945 | return ialloc(gm, n_elements, &sz, 3, chunks); | |
4946 | } | |
4947 | ||
4948 | void** dlindependent_comalloc(size_t n_elements, size_t sizes[], | |
4949 | void* chunks[]) { | |
4950 | return ialloc(gm, n_elements, sizes, 0, chunks); | |
4951 | } | |
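/*
  Editor's note (usage sketch; struct Node and the sizes are
  hypothetical): independent_calloc returns n zeroed same-size
  elements, comalloc n elements of varying sizes, all carved from a
  single region so they are cheap to obtain and tend to be adjacent:

    void** nodes = dlindependent_calloc(100, sizeof(struct Node), 0);
    // nodes[0..99]: zeroed elements, each individually dlfree-able

    size_t sizes[3] = { 16, 200, 40 };
    void** parts = dlindependent_comalloc(3, sizes, 0);
    // parts[i] has usable size >= sizes[i]

  With a null 'chunks' argument the pointer array itself is also
  malloced, so it too should eventually be passed to dlfree.
*/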
4952 | ||
4953 | void* dlvalloc(size_t bytes) { | |
4954 | size_t pagesz; | |
4955 | ensure_initialization(); | |
4956 | pagesz = mparams.page_size; | |
4957 | return dlmemalign(pagesz, bytes); | |
4958 | } | |
4959 | ||
4960 | void* dlpvalloc(size_t bytes) { | |
4961 | size_t pagesz; | |
4962 | ensure_initialization(); | |
4963 | pagesz = mparams.page_size; | |
4964 | return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE)); | |
4965 | } | |
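/*
  Editor's note (worked example): dlpvalloc rounds the request up to a
  whole number of pages before page-aligning, via
  (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE); with
  4096-byte pages, bytes = 5000 becomes 8192 and bytes = 4096 stays
  4096.
*/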
4966 | ||
4967 | int dlmalloc_trim(size_t pad) { | |
4968 | int result = 0; | |
4969 | ensure_initialization(); | |
4970 | if (!PREACTION(gm)) { | |
4971 | result = sys_trim(gm, pad); | |
4972 | POSTACTION(gm); | |
4973 | } | |
4974 | return result; | |
4975 | } | |
4976 | ||
4977 | size_t dlmalloc_footprint(void) { | |
4978 | return gm->footprint; | |
4979 | } | |
4980 | ||
4981 | size_t dlmalloc_max_footprint(void) { | |
4982 | return gm->max_footprint; | |
4983 | } | |
4984 | ||
4985 | #if !NO_MALLINFO | |
4986 | struct mallinfo dlmallinfo(void) { | |
4987 | return internal_mallinfo(gm); | |
4988 | } | |
4989 | #endif /* NO_MALLINFO */ | |
4990 | ||
4991 | void dlmalloc_stats() { | |
4992 | internal_malloc_stats(gm); | |
4993 | } | |
4994 | ||
4995 | int dlmallopt(int param_number, int value) { | |
4996 | return change_mparam(param_number, value); | |
4997 | } | |
4998 | ||
4999 | #endif /* !ONLY_MSPACES */ | |
5000 | ||
5001 | size_t dlmalloc_usable_size(void* mem) { | |
5002 | if (mem != 0) { | |
5003 | mchunkptr p = mem2chunk(mem); | |
5004 | if (cinuse(p)) | |
5005 | return chunksize(p) - overhead_for(p); | |
5006 | } | |
5007 | return 0; | |
5008 | } | |
5009 | ||
5010 | /* ----------------------------- user mspaces ---------------------------- */ | |
5011 | ||
5012 | #if MSPACES | |
5013 | ||
5014 | static mstate init_user_mstate(char* tbase, size_t tsize) { | |
5015 | size_t msize = pad_request(sizeof(struct malloc_state)); | |
5016 | mchunkptr mn; | |
5017 | mchunkptr msp = align_as_chunk(tbase); | |
5018 | mstate m = (mstate)(chunk2mem(msp)); | |
5019 | memset(m, 0, msize); | |
5020 | (void)INITIAL_LOCK(&m->mutex); | |
5021 | msp->head = (msize|PINUSE_BIT|CINUSE_BIT); |
5022 | m->seg.base = m->least_addr = tbase; | |
5023 | m->seg.size = m->footprint = m->max_footprint = tsize; | |
5024 | m->magic = mparams.magic; | |
5025 | m->release_checks = MAX_RELEASE_CHECK_RATE; | |
5026 | m->mflags = mparams.default_mflags; | |
5027 | m->extp = 0; | |
5028 | m->exts = 0; | |
5029 | disable_contiguous(m); | |
5030 | init_bins(m); | |
5031 | mn = next_chunk(mem2chunk(m)); | |
5032 | init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE); | |
5033 | check_top_chunk(m, m->top); | |
5034 | return m; | |
5035 | } | |
5036 | ||
5037 | mspace create_mspace(size_t capacity, int locked) { | |
5038 | mstate m = 0; | |
5039 | size_t msize; | |
5040 | ensure_initialization(); | |
5041 | msize = pad_request(sizeof(struct malloc_state)); | |
5042 | if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { | |
5043 | size_t rs = ((capacity == 0)? mparams.granularity : | |
5044 | (capacity + TOP_FOOT_SIZE + msize)); | |
5045 | size_t tsize = granularity_align(rs); | |
5046 | char* tbase = (char*)(CALL_MMAP(tsize)); | |
5047 | if (tbase != CMFAIL) { | |
5048 | m = init_user_mstate(tbase, tsize); | |
5049 | m->seg.sflags = IS_MMAPPED_BIT; | |
5050 | set_lock(m, locked); | |
5051 | } | |
5052 | } | |
5053 | return (mspace)m; | |
5054 | } | |
5055 | ||
5056 | mspace create_mspace_with_base(void* base, size_t capacity, int locked) { | |
5057 | mstate m = 0; | |
5058 | size_t msize; | |
5059 | ensure_initialization(); | |
5060 | msize = pad_request(sizeof(struct malloc_state)); | |
5061 | if (capacity > msize + TOP_FOOT_SIZE && | |
5062 | capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { | |
5063 | m = init_user_mstate((char*)base, capacity); | |
5064 | m->seg.sflags = EXTERN_BIT; | |
5065 | set_lock(m, locked); | |
5066 | } | |
5067 | return (mspace)m; | |
5068 | } | |
5069 | ||
5070 | int mspace_mmap_large_chunks(mspace msp, int enable) { | |
5071 | int ret = 0; | |
5072 | mstate ms = (mstate)msp; | |
5073 | if (!PREACTION(ms)) { | |
5074 | if (use_mmap(ms)) | |
5075 | ret = 1; | |
5076 | if (enable) | |
5077 | enable_mmap(ms); | |
5078 | else | |
5079 | disable_mmap(ms); | |
5080 | POSTACTION(ms); | |
5081 | } | |
5082 | return ret; | |
5083 | } | |
5084 | ||
5085 | size_t destroy_mspace(mspace msp) { | |
5086 | size_t freed = 0; | |
5087 | mstate ms = (mstate)msp; | |
5088 | if (ok_magic(ms)) { | |
5089 | msegmentptr sp = &ms->seg; | |
5090 | while (sp != 0) { | |
5091 | char* base = sp->base; | |
5092 | size_t size = sp->size; | |
5093 | flag_t flag = sp->sflags; | |
5094 | sp = sp->next; | |
5095 | if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) && | |
5096 | CALL_MUNMAP(base, size) == 0) | |
5097 | freed += size; | |
5098 | } | |
5099 | } | |
5100 | else { | |
5101 | USAGE_ERROR_ACTION(ms,ms); | |
5102 | } | |
5103 | return freed; | |
5104 | } | |
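/*
  Editor's note (minimal usage sketch): an mspace is an independent
  heap, so everything allocated from it can be reclaimed at once by
  destroying the space rather than freeing block by block:

    mspace arena = create_mspace(0, 0);  // default capacity, unlocked
    if (arena != 0) {
      void* a = mspace_malloc(arena, 128);
      void* b = mspace_calloc(arena, 10, sizeof(double));
      // ... use a and b ...
      destroy_mspace(arena);             // releases a, b, and the arena
    }
*/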
5105 | ||
5106 | /* | |
5107 | mspace versions of routines are near-clones of the global | |
5108 | versions. This is not so nice but better than the alternatives. | |
5109 | */ | |
5110 | ||
5111 | ||
5112 | void* mspace_malloc(mspace msp, size_t bytes) { | |
5113 | mstate ms = (mstate)msp; | |
5114 | if (!ok_magic(ms)) { | |
5115 | USAGE_ERROR_ACTION(ms,ms); | |
5116 | return 0; | |
5117 | } | |
5118 | if (!PREACTION(ms)) { | |
5119 | void* mem; | |
5120 | size_t nb; | |
5121 | if (bytes <= MAX_SMALL_REQUEST) { | |
5122 | bindex_t idx; | |
5123 | binmap_t smallbits; | |
5124 | nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); | |
5125 | idx = small_index(nb); | |
5126 | smallbits = ms->smallmap >> idx; | |
5127 | ||
5128 | if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */ | |
5129 | mchunkptr b, p; | |
5130 | idx += ~smallbits & 1; /* Uses next bin if idx empty */ | |
5131 | b = smallbin_at(ms, idx); | |
5132 | p = b->fd; | |
5133 | assert(chunksize(p) == small_index2size(idx)); | |
5134 | unlink_first_small_chunk(ms, b, p, idx); | |
5135 | set_inuse_and_pinuse(ms, p, small_index2size(idx)); | |
5136 | mem = chunk2mem(p); | |
5137 | check_malloced_chunk(ms, mem, nb); | |
5138 | goto postaction; | |
5139 | } | |
5140 | ||
5141 | else if (nb > ms->dvsize) { | |
5142 | if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ | |
5143 | mchunkptr b, p, r; | |
5144 | size_t rsize; | |
5145 | bindex_t i; | |
5146 | binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); | |
5147 | binmap_t leastbit = least_bit(leftbits); | |
5148 | compute_bit2idx(leastbit, i); | |
5149 | b = smallbin_at(ms, i); | |
5150 | p = b->fd; | |
5151 | assert(chunksize(p) == small_index2size(i)); | |
5152 | unlink_first_small_chunk(ms, b, p, i); | |
5153 | rsize = small_index2size(i) - nb; | |
5154 | /* Fit here cannot be remainderless if 4byte sizes */ | |
5155 | if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) | |
5156 | set_inuse_and_pinuse(ms, p, small_index2size(i)); | |
5157 | else { | |
5158 | set_size_and_pinuse_of_inuse_chunk(ms, p, nb); | |
5159 | r = chunk_plus_offset(p, nb); | |
5160 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
5161 | replace_dv(ms, r, rsize); | |
5162 | } | |
5163 | mem = chunk2mem(p); | |
5164 | check_malloced_chunk(ms, mem, nb); | |
5165 | goto postaction; | |
5166 | } | |
5167 | ||
5168 | else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) { | |
5169 | check_malloced_chunk(ms, mem, nb); | |
5170 | goto postaction; | |
5171 | } | |
5172 | } | |
5173 | } | |
5174 | else if (bytes >= MAX_REQUEST) | |
5175 | nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ | |
5176 | else { | |
5177 | nb = pad_request(bytes); | |
5178 | if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) { | |
5179 | check_malloced_chunk(ms, mem, nb); | |
5180 | goto postaction; | |
5181 | } | |
5182 | } | |
5183 | ||
5184 | if (nb <= ms->dvsize) { | |
5185 | size_t rsize = ms->dvsize - nb; | |
5186 | mchunkptr p = ms->dv; | |
5187 | if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ | |
5188 | mchunkptr r = ms->dv = chunk_plus_offset(p, nb); | |
5189 | ms->dvsize = rsize; | |
5190 | set_size_and_pinuse_of_free_chunk(r, rsize); | |
5191 | set_size_and_pinuse_of_inuse_chunk(ms, p, nb); | |
5192 | } | |
5193 | else { /* exhaust dv */ | |
5194 | size_t dvs = ms->dvsize; | |
5195 | ms->dvsize = 0; | |
5196 | ms->dv = 0; | |
5197 | set_inuse_and_pinuse(ms, p, dvs); | |
5198 | } | |
5199 | mem = chunk2mem(p); | |
5200 | check_malloced_chunk(ms, mem, nb); | |
5201 | goto postaction; | |
5202 | } | |
5203 | ||
5204 | else if (nb < ms->topsize) { /* Split top */ | |
5205 | size_t rsize = ms->topsize -= nb; | |
5206 | mchunkptr p = ms->top; | |
5207 | mchunkptr r = ms->top = chunk_plus_offset(p, nb); | |
5208 | r->head = rsize | PINUSE_BIT; | |
5209 | set_size_and_pinuse_of_inuse_chunk(ms, p, nb); | |
5210 | mem = chunk2mem(p); | |
5211 | check_top_chunk(ms, ms->top); | |
5212 | check_malloced_chunk(ms, mem, nb); | |
5213 | goto postaction; | |
5214 | } | |
5215 | ||
5216 | mem = sys_alloc(ms, nb); | |
5217 | ||
5218 | postaction: | |
5219 | POSTACTION(ms); | |
5220 | return mem; | |
5221 | } | |
5222 | ||
5223 | return 0; | |
5224 | } | |
5225 | ||
5226 | void mspace_free(mspace msp, void* mem) { | |
5227 | if (mem != 0) { | |
5228 | mchunkptr p = mem2chunk(mem); | |
5229 | #if FOOTERS | |
5230 | mstate fm = get_mstate_for(p); | |
5231 | #else /* FOOTERS */ | |
5232 | mstate fm = (mstate)msp; | |
5233 | #endif /* FOOTERS */ | |
5234 | if (!ok_magic(fm)) { | |
5235 | USAGE_ERROR_ACTION(fm, p); | |
5236 | return; | |
5237 | } | |
5238 | if (!PREACTION(fm)) { | |
5239 | check_inuse_chunk(fm, p); | |
5240 | if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) { | |
5241 | size_t psize = chunksize(p); | |
5242 | mchunkptr next = chunk_plus_offset(p, psize); | |
5243 | if (!pinuse(p)) { | |
5244 | size_t prevsize = p->prev_foot; | |
5245 | if ((prevsize & IS_MMAPPED_BIT) != 0) { | |
5246 | prevsize &= ~IS_MMAPPED_BIT; | |
5247 | psize += prevsize + MMAP_FOOT_PAD; | |
5248 | if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) | |
5249 | fm->footprint -= psize; | |
5250 | goto postaction; | |
5251 | } | |
5252 | else { | |
5253 | mchunkptr prev = chunk_minus_offset(p, prevsize); | |
5254 | psize += prevsize; | |
5255 | p = prev; | |
5256 | if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ | |
5257 | if (p != fm->dv) { | |
5258 | unlink_chunk(fm, p, prevsize); | |
5259 | } | |
5260 | else if ((next->head & INUSE_BITS) == INUSE_BITS) { | |
5261 | fm->dvsize = psize; | |
5262 | set_free_with_pinuse(p, psize, next); | |
5263 | goto postaction; | |
5264 | } | |
5265 | } | |
5266 | else | |
5267 | goto erroraction; | |
5268 | } | |
5269 | } | |
5270 | ||
5271 | if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { | |
5272 | if (!cinuse(next)) { /* consolidate forward */ | |
5273 | if (next == fm->top) { | |
5274 | size_t tsize = fm->topsize += psize; | |
5275 | fm->top = p; | |
5276 | p->head = tsize | PINUSE_BIT; | |
5277 | if (p == fm->dv) { | |
5278 | fm->dv = 0; | |
5279 | fm->dvsize = 0; | |
5280 | } | |
5281 | if (should_trim(fm, tsize)) | |
5282 | sys_trim(fm, 0); | |
5283 | goto postaction; | |
5284 | } | |
5285 | else if (next == fm->dv) { | |
5286 | size_t dsize = fm->dvsize += psize; | |
5287 | fm->dv = p; | |
5288 | set_size_and_pinuse_of_free_chunk(p, dsize); | |
5289 | goto postaction; | |
5290 | } | |
5291 | else { | |
5292 | size_t nsize = chunksize(next); | |
5293 | psize += nsize; | |
5294 | unlink_chunk(fm, next, nsize); | |
5295 | set_size_and_pinuse_of_free_chunk(p, psize); | |
5296 | if (p == fm->dv) { | |
5297 | fm->dvsize = psize; | |
5298 | goto postaction; | |
5299 | } | |
5300 | } | |
5301 | } | |
5302 | else | |
5303 | set_free_with_pinuse(p, psize, next); | |
5304 | ||
5305 | if (is_small(psize)) { | |
5306 | insert_small_chunk(fm, p, psize); | |
5307 | check_free_chunk(fm, p); | |
5308 | } | |
5309 | else { | |
5310 | tchunkptr tp = (tchunkptr)p; | |
5311 | insert_large_chunk(fm, tp, psize); | |
5312 | check_free_chunk(fm, p); | |
5313 | if (--fm->release_checks == 0) | |
5314 | release_unused_segments(fm); | |
5315 | } | |
5316 | goto postaction; | |
5317 | } | |
5318 | } | |
5319 | erroraction: | |
5320 | USAGE_ERROR_ACTION(fm, p); | |
5321 | postaction: | |
5322 | POSTACTION(fm); | |
5323 | } | |
5324 | } | |
5325 | } | |
5326 | ||
5327 | void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) { | |
5328 | void* mem; | |
5329 | size_t req = 0; | |
5330 | mstate ms = (mstate)msp; | |
5331 | if (!ok_magic(ms)) { | |
5332 | USAGE_ERROR_ACTION(ms,ms); | |
5333 | return 0; | |
5334 | } | |
5335 | if (n_elements != 0) { | |
5336 | req = n_elements * elem_size; | |
5337 | if (((n_elements | elem_size) & ~(size_t)0xffff) && | |
5338 | (req / n_elements != elem_size)) | |
5339 | req = MAX_SIZE_T; /* force downstream failure on overflow */ | |
5340 | } | |
5341 | mem = internal_malloc(ms, req); | |
5342 | if (mem != 0 && calloc_must_clear(mem2chunk(mem))) | |
5343 | memset(mem, 0, req); | |
5344 | return mem; | |
5345 | } | |
5346 | ||
5347 | void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) { | |
5348 | if (oldmem == 0) | |
5349 | return mspace_malloc(msp, bytes); | |
5350 | #ifdef REALLOC_ZERO_BYTES_FREES | |
5351 | if (bytes == 0) { | |
5352 | mspace_free(msp, oldmem); | |
5353 | return 0; | |
5354 | } | |
5355 | #endif /* REALLOC_ZERO_BYTES_FREES */ | |
5356 | else { | |
5357 | #if FOOTERS | |
5358 | mchunkptr p = mem2chunk(oldmem); | |
5359 | mstate ms = get_mstate_for(p); | |
5360 | #else /* FOOTERS */ | |
5361 | mstate ms = (mstate)msp; | |
5362 | #endif /* FOOTERS */ | |
5363 | if (!ok_magic(ms)) { | |
5364 | USAGE_ERROR_ACTION(ms,ms); | |
5365 | return 0; | |
5366 | } | |
5367 | return internal_realloc(ms, oldmem, bytes); | |
5368 | } | |
5369 | } | |
5370 | ||
5371 | void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) { | |
5372 | mstate ms = (mstate)msp; | |
5373 | if (!ok_magic(ms)) { | |
5374 | USAGE_ERROR_ACTION(ms,ms); | |
5375 | return 0; | |
5376 | } | |
5377 | return internal_memalign(ms, alignment, bytes); | |
5378 | } | |
5379 | ||
5380 | void** mspace_independent_calloc(mspace msp, size_t n_elements, | |
5381 | size_t elem_size, void* chunks[]) { | |
5382 | size_t sz = elem_size; /* serves as 1-element array */ | |
5383 | mstate ms = (mstate)msp; | |
5384 | if (!ok_magic(ms)) { | |
5385 | USAGE_ERROR_ACTION(ms,ms); | |
5386 | return 0; | |
5387 | } | |
5388 | return ialloc(ms, n_elements, &sz, 3, chunks); | |
5389 | } | |
5390 | ||
5391 | void** mspace_independent_comalloc(mspace msp, size_t n_elements, | |
5392 | size_t sizes[], void* chunks[]) { | |
5393 | mstate ms = (mstate)msp; | |
5394 | if (!ok_magic(ms)) { | |
5395 | USAGE_ERROR_ACTION(ms,ms); | |
5396 | return 0; | |
5397 | } | |
5398 | return ialloc(ms, n_elements, sizes, 0, chunks); | |
5399 | } | |
5400 | ||
5401 | int mspace_trim(mspace msp, size_t pad) { | |
5402 | int result = 0; | |
5403 | mstate ms = (mstate)msp; | |
5404 | if (ok_magic(ms)) { | |
5405 | if (!PREACTION(ms)) { | |
5406 | result = sys_trim(ms, pad); | |
5407 | POSTACTION(ms); | |
5408 | } | |
5409 | } | |
5410 | else { | |
5411 | USAGE_ERROR_ACTION(ms,ms); | |
5412 | } | |
5413 | return result; | |
5414 | } | |
5415 | ||
5416 | void mspace_malloc_stats(mspace msp) { | |
5417 | mstate ms = (mstate)msp; | |
5418 | if (ok_magic(ms)) { | |
5419 | internal_malloc_stats(ms); | |
5420 | } | |
5421 | else { | |
5422 | USAGE_ERROR_ACTION(ms,ms); | |
5423 | } | |
5424 | } | |
5425 | ||
5426 | size_t mspace_footprint(mspace msp) { | |
5427 | size_t result = 0; | |
5428 | mstate ms = (mstate)msp; | |
5429 | if (ok_magic(ms)) { | |
5430 | result = ms->footprint; | |
5431 | } | |
5432 | else { | |
5433 | USAGE_ERROR_ACTION(ms,ms); | |
5434 | } | |
5435 | return result; | |
5436 | } | |
5437 | ||
5438 | ||
5439 | size_t mspace_max_footprint(mspace msp) { | |
5440 | size_t result = 0; | |
5441 | mstate ms = (mstate)msp; | |
5442 | if (ok_magic(ms)) { | |
5443 | result = ms->max_footprint; | |
5444 | } | |
5445 | else { | |
5446 | USAGE_ERROR_ACTION(ms,ms); | |
5447 | } | |
5448 | return result; | |
5449 | } | |
5450 | ||
5451 | ||
5452 | #if !NO_MALLINFO | |
5453 | struct mallinfo mspace_mallinfo(mspace msp) { | |
5454 | mstate ms = (mstate)msp; | |
5455 | if (!ok_magic(ms)) { | |
5456 | USAGE_ERROR_ACTION(ms,ms); | |
5457 | } | |
5458 | return internal_mallinfo(ms); | |
5459 | } | |
5460 | #endif /* NO_MALLINFO */ | |
5461 | ||
5462 | size_t mspace_usable_size(void* mem) { | |
5463 | if (mem != 0) { | |
5464 | mchunkptr p = mem2chunk(mem); | |
5465 | if (cinuse(p)) | |
5466 | return chunksize(p) - overhead_for(p); | |
5467 | } | |
5468 | return 0; | |
5469 | } | |
5470 | ||
5471 | int mspace_mallopt(int param_number, int value) { | |
5472 | return change_mparam(param_number, value); | |
5473 | } | |
5474 | ||
5475 | #endif /* MSPACES */ | |

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
    but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
    instead return one past the end address of memory from the previous
    nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
    arguments should return increasing addresses, indicating that
    space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
    addresses, it must be OK for malloc'ed chunks to span multiple
    regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
    just return MFAIL when given them. Negative arguments are always
    multiples of pagesize. MORECORE must not misinterpret negative args
    as large positive unsigned args. You can suppress all such calls
    from even occurring by defining MORECORE_CANNOT_TRIM. A minimal
    sketch following these rules appears below, ahead of a fuller
    contributed example.

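  Here is that minimal sketch: a MORECORE backed by a fixed static arena.
  It is illustrative only; the names arena_morecore, static_arena,
  arena_brk, and ARENA_SIZE are hypothetical, and a real port would pick
  an arena size suited to the platform:

  #define MORECORE arena_morecore
  #define MORECORE_CANNOT_TRIM  // so we are never called with negative args

  #define ARENA_SIZE (1024u * 1024u)
  static char static_arena[ARENA_SIZE];
  static size_t arena_brk = 0;

  void *arena_morecore(int increment)
  {
    if (increment == 0)                 // probe: one past end of last grant
      return static_arena + arena_brk;
    if (increment > 0 && (size_t)increment <= ARENA_SIZE - arena_brk)
    {
      void *prev = static_arena + arena_brk;
      arena_brk += (size_t)increment;   // extend contiguously within arena
      return prev;                      // old break, per sbrk convention
    }
    return (void *) MFAIL;              // exhausted (negatives never occur)
  }

  Because successive grants come from one array, consecutive calls always
  return increasing, contiguous addresses -- the fast path noted above.
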
  As a fuller example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in, present,
  and won't get swapped out). You can use it by uncommenting this section,
  adding some #includes, and setting up the appropriate defines above:

  #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
        size = MINIMUM_MORECORE_SIZE;
      if (next_os_pool >= MAX_POOL_ENTRIES) // pool table full; fail cleanly
        return (void *) MFAIL;
      if (CurrentExecutionLevel() == kTaskLevel)
        ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      // round up to a page boundary; the extra RM_PAGE_SIZE requested
      // above guarantees the aligned region still covers `size' bytes
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
        PoolDeallocate(*ptr);
        *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    V2.8.4 (not yet released)
      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
      * Reentrant spin locks, courtesy of Earl Chew and others
      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raise default trim and mmap thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * New WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * Return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/