- May 04, 2016
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Depending on virtual memory resource limits, it is necessary to attempt allocating three maximally sized objects to trigger OOM rather than just two, since the maximum supported size is slightly less than half the total virtual memory address space. This fixes a test failure that was introduced by 0c516a00 (Make *allocx() size class overflow behavior defined.). This resolves #379.
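The arithmetic behind needing a third allocation can be sketched with hypothetical numbers (a 48-bit address space and a maximum size class of half that minus one 2 MiB chunk; the real values depend on the platform and jemalloc's size-class configuration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical numbers for illustration: a 48-bit virtual address space
 * and a maximum size class just under half of it (half minus one chunk). */
#define VA_SPACE  (1ULL << 48)
#define MAX_CLASS ((VA_SPACE >> 1) - (1ULL << 21))

/* How many maximally sized requests are needed before a request is
 * guaranteed to exceed the address space? */
static unsigned allocs_to_oom(void) {
    unsigned n = 0;
    uint64_t total = 0;
    while (total <= VA_SPACE - MAX_CLASS) { /* another one could still fit */
        total += MAX_CLASS;
        n++;
    }
    return n + 1; /* the next request cannot possibly be satisfied */
}
```

Because each maximal object is slightly smaller than half the address space, two of them together still leave room, so only the third request is guaranteed to fail.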
-
Jason Evans authored
-
- May 03, 2016
-
-
Jason Evans authored
-
Jason Evans authored
This ensures that side effects of internal allocation don't impact tests.
-
hitstergtd authored
-
Jason Evans authored
This regression was caused by 8f683b94 (Make opt_narenas unsigned rather than size_t.).
-
Jason Evans authored
Fix bitmap_sfu() to shift by LG_BITMAP_GROUP_NBITS rather than hard-coded 6 when using linear (non-USE_TREE) bitmap search. In practice this affects only 64-bit systems for which sizeof(long) is not 8 (i.e. Windows), since USE_TREE is defined for 32-bit systems. This regression was caused by b8823ab0 (Use linear scan for small bitmaps). This resolves #368.
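A minimal sketch of the linear set-first-unset scan, illustrating why the group-index shift must be derived from the group width rather than hard-coded (the constants and `__builtin_ctzl` are stand-ins for jemalloc's actual macros and ffs wrappers):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* One "group" is one unsigned long, so the shift amount must track
 * sizeof(long): 6 when long is 8 bytes, 5 when it is 4 bytes. */
#define LG_BITMAP_GROUP_NBITS (sizeof(unsigned long) == 8 ? 6 : 5)
#define BITMAP_GROUP_NBITS    (1U << LG_BITMAP_GROUP_NBITS)

/* Set-first-unset: find the lowest 0 bit, set it, return its index. */
static size_t bitmap_sfu(unsigned long *groups, size_t ngroups) {
    for (size_t i = 0; i < ngroups; i++) {
        unsigned long inverted = ~groups[i];
        if (inverted != 0) {
            unsigned bit = __builtin_ctzl(inverted);
            groups[i] |= 1UL << bit;
            /* A hard-coded "i << 6" here is wrong whenever
             * sizeof(long) != 8, e.g. 64-bit Windows. */
            return (i << LG_BITMAP_GROUP_NBITS) + bit;
        }
    }
    return (size_t)-1; /* bitmap full */
}
```

`__builtin_ctzl` is a GCC/Clang intrinsic used here for brevity; jemalloc abstracts this behind its own ffs wrappers.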
-
Jason Evans authored
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(), so that if the dalloc hook fails, proper decommit/purge/retain cascading occurs. This fixes three potential chunk leaks on OOM paths, one during dss-based chunk allocation, one during chunk header commit (currently relevant only on Windows), and one during rtree write (e.g. if rtree node allocation fails). Merge chunk_purge_arena() into chunk_purge_default() (refactor, no change to functionality).
-
Rajeev Misra authored
-
Dmitri Smirnov authored
Fix stack corruption on x64. This resolves #347.
-
rustyx authored
-
Dmitri Smirnov authored
-
- Feb 28, 2016
-
-
Jason Evans authored
-
Jason Evans authored
-
rustyx authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Prior to 767d8506 (Refactor arenas array (fixes deadlock).), it was possible under some circumstances for arena_get() to trigger recreation of the arenas cache during tsd cleanup, and the arenas cache would then be leaked. In principle a similar issue could still occur as a side effect of decay-based purging, which calls arena_tdata_get(). Fix arenas_tdata_cleanup() by setting tsd->arenas_tdata_bypass to true, so that arena_tdata_get() will gracefully fail (an expected behavior) rather than recreating tsd->arena_tdata. Reported by Christopher Ferris <cferris@google.com>.
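The bypass mechanism can be sketched as follows; the struct and helpers are simplified stand-ins for jemalloc's tsd machinery, not its actual API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified tsd with a bypass flag consulted on every tdata lookup. */
typedef struct {
    bool arenas_tdata_bypass;
    int *arenas_tdata;
} tsd_t;

static int tdata_storage;

/* With bypass set, return NULL instead of recreating the cache, so a
 * lookup during tsd cleanup cannot leak a freshly allocated cache. */
static int *arena_tdata_get(tsd_t *tsd) {
    if (tsd->arenas_tdata != NULL)
        return tsd->arenas_tdata;
    if (tsd->arenas_tdata_bypass)
        return NULL; /* expected graceful failure during cleanup */
    tsd->arenas_tdata = &tdata_storage; /* stand-in for real allocation */
    return tsd->arenas_tdata;
}

static void arenas_tdata_cleanup(tsd_t *tsd) {
    tsd->arenas_tdata_bypass = true;
    tsd->arenas_tdata = NULL; /* freed in the real implementation */
}
```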
-
Jason Evans authored
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time} initialization. Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of the arena mutex.
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Fix stats.cactive accounting to always increase/decrease by multiples of the chunk size, even for huge size classes that are not multiples of the chunk size, e.g. {2.5, 3, 3.5, 5, 7} MiB with 2 MiB chunk size. This regression was introduced by 155bfa7d (Normalize size classes.) and first released in 4.0.0. This resolves #336.
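The intended accounting can be sketched by rounding huge sizes up to a chunk multiple before adjusting the counter (2 MiB chunk size as in the example above; the macro names mirror jemalloc's but are written out here as assumptions):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical 2 MiB chunk size, as in the example sizes above. */
#define LG_CHUNK 21
#define CHUNKSIZE ((size_t)1 << LG_CHUNK)
#define CHUNK_CEILING(s) (((s) + CHUNKSIZE - 1) & ~(CHUNKSIZE - 1))

/* cactive must change by whole chunks, so a huge size class that is not
 * a chunk multiple (e.g. 2.5 MiB) is rounded up first. */
static size_t cactive_delta(size_t huge_usize) {
    return CHUNK_CEILING(huge_usize);
}
```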
-
- Feb 27, 2016
-
-
Jason Evans authored
-
Jason Evans authored
This removes an implicit conversion from size_t to ssize_t. For cactive decreases, the size_t value was intentionally underflowed to generate "negative" values (actually positive values above the positive range of ssize_t), and the conversion to ssize_t was undefined according to C language semantics. This regression was perpetuated by 1522937e (Fix the cactive statistic.) and first released in 4.0.0, which in retrospect only fixed one of two problems introduced by aa5113b1 (Refactor overly large/complex functions) and first released in 3.5.0.
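One way to avoid ever forming a "negative" size_t is to keep separate add/sub paths, so no out-of-range unsigned value is ever converted to a signed type. This is a minimal sketch of that idea, not jemalloc's actual stats code:

```c
#include <assert.h>
#include <stddef.h>

static size_t cactive = 0;

/* Separate add/sub paths: the counter only ever holds true magnitudes,
 * so there is no underflowed size_t to reinterpret as ssize_t. */
static void stats_cactive_add(size_t n) {
    cactive += n;
}

static void stats_cactive_sub(size_t n) {
    assert(n <= cactive); /* never allowed to go "negative" */
    cactive -= n;
}
```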
-
Jason Evans authored
Remove invalid tests that were intended to be tests of (hugemax+1) OOM, for which tests already exist.
-
buchgr authored
This fixes chunk allocation to reuse retained memory even if an application-provided chunk allocation function is in use. This resolves #307.
-
- Feb 26, 2016
-
-
Jason Evans authored
-
Dave Watson authored
For small bitmaps, a linear scan of the bitmap is slightly faster than a tree search: bitmap_t is more compact, and there are fewer writes since we don't have to propagate state transitions up the tree. On x86_64 with the current settings, I'm seeing ~0.5-1% CPU improvement in production canaries with this change. The old tree code is left in place since bitmaps are much larger on 32-bit systems (and ffsl smaller), and the run sizes may change in the future. This resolves #339.
-
Jason Evans authored
-
rustyx authored
-
rustyx authored
-
Jason Evans authored
This resolves #341.
-
Jason Evans authored
-
Jason Evans authored
Add HUGE_MAXCLASS overflow checks that are specific to heap profiling code paths. This fixes test failures that were introduced by 0c516a00 (Make *allocx() size class overflow behavior defined.).
-
Jason Evans authored
This fixes compilation warnings regarding integer overflow that were introduced by 0c516a00 (Make *allocx() size class overflow behavior defined.).
-
Jason Evans authored
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
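A sketch of the kind of bounds check this implies, with the bound written out as an assumption (jemalloc's real HUGE_MAXCLASS is a size class derived from its configuration, here approximated by PTRDIFF_MAX):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bound: the largest supported size must stay below
 * PTRDIFF_MAX so pointer arithmetic within one object cannot overflow. */
#define HUGE_MAXCLASS ((size_t)PTRDIFF_MAX)

/* Reject requests whose size (plus worst-case alignment padding) would
 * exceed the supported maximum, making overflow behavior defined. */
static int size_ok(size_t size, size_t alignment) {
    if (size == 0 || size > HUGE_MAXCLASS)
        return 0;
    if (alignment != 0 && size > HUGE_MAXCLASS - (alignment - 1))
        return 0; /* size + padding would overflow the bound */
    return 1;
}
```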
-
- Feb 25, 2016
-
-
Jason Evans authored
Refactor the arenas array, which contains pointers to all extant arenas, such that it starts out as a sparse array of maximum size, and use double-checked atomics-based reads as the basis for fast and simple arena_get(). Additionally, reduce arenas_lock's role such that it only protects against arena initialization races. These changes remove the possibility for arena lookups to trigger locking, which resolves at least one known (fork-related) deadlock. This resolves #315.
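The double-checked pattern described above can be sketched with C11 atomics and a pthread mutex; the arena type and initialization are stand-ins, and jemalloc's real code uses its own atomics layer:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NARENAS_MAX 64

/* Sparse array of maximum size; slots are populated lazily. */
static _Atomic(int *) arenas[NARENAS_MAX];
static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;
static int arena_storage[NARENAS_MAX];

/* Fast path: a single acquire load, no lock. The lock is taken only to
 * serialize initialization of a still-NULL slot (the "double check"). */
static int *arena_get(unsigned ind) {
    int *a = atomic_load_explicit(&arenas[ind], memory_order_acquire);
    if (a != NULL)
        return a;
    pthread_mutex_lock(&arenas_lock);
    a = atomic_load_explicit(&arenas[ind], memory_order_acquire);
    if (a == NULL) {
        a = &arena_storage[ind]; /* stand-in for real arena creation */
        atomic_store_explicit(&arenas[ind], a, memory_order_release);
    }
    pthread_mutex_unlock(&arenas_lock);
    return a;
}
```

Since the common-case lookup never touches the mutex, a lookup can no longer block on (or deadlock with) the initialization lock.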
-
Dave Watson authored
Fix the arena_size computation in arena_new() to incorporate runs_avail_nclasses elements for runs_avail, rather than (runs_avail_nclasses - 1) elements. Since offsetof(arena_t, runs_avail) is used rather than sizeof(arena_t) for the first term of the computation, all of the runs_avail elements must be added into the second term. This bug was introduced (by Jason Evans) while merging pull request #330 as 3417a304 (Separate arena_avail trees).
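The off-by-one can be illustrated with a toy struct ending in a one-element trailing array (a stand-in for arena_t's runs_avail heaps, not the real layout):

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int v; } heap_t;

typedef struct {
    int other_fields[4];
    heap_t runs_avail[1]; /* really runs_avail_nclasses elements */
} arena_t;

/* Because the first term stops at offsetof(..., runs_avail), the second
 * term must count ALL n elements; using (n - 1) undercounts by one. */
static size_t arena_size(size_t runs_avail_nclasses) {
    return offsetof(arena_t, runs_avail)
        + runs_avail_nclasses * sizeof(heap_t);
}
```

Had the computation started from sizeof(arena_t) instead, the declared one-element array would already be included and (n - 1) would be correct; mixing the two conventions is what produced the bug.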
-