- Jun 08, 2016
-
-
Jason Evans authored
-
- Jun 07, 2016
-
-
Jason Evans authored
Revert 245ae603 (Support --with-lg-page values larger than actual page size), because it could cause VM map fragmentation if the kernel grows mmap()ed memory downward. This resolves #391.
-
Elliot Ronaghan authored
Fix a mixed declaration in the gettimeofday() branch of nstime_update().
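For illustration only (the real nstime_update() differs), this is the general shape of such a fix in a gettimeofday() fallback: every declaration is hoisted above the first statement so strict compilers (-Wdeclaration-after-statement, C89 mode) accept it.

    #include <sys/time.h>
    #include <stdint.h>

    /* Hypothetical fallback clock; all declarations precede statements. */
    static uint64_t
    wall_clock_ns(void)
    {
        struct timeval tv;  /* previously declared after the first statement */
        uint64_t ns;

        gettimeofday(&tv, NULL);
        ns = (uint64_t)tv.tv_sec * 1000000000ULL +
            (uint64_t)tv.tv_usec * 1000ULL;
        return (ns);
    }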
-
Jason Evans authored
This avoids bootstrapping issues for configurations that require allocation during tsd initialization. This resolves #390.
-
Jason Evans authored
As a side effect, this causes the extent's 'committed' flag to be updated.
-
- Jun 06, 2016
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Fix a fundamental extent_split_wrapper() bug in an error path. Fix extent_recycle() to deregister unsplittable extents before leaking them. Relax xallocx() test assertions so that unsplittable extents don't cause test failures.
-
Jason Evans authored
Deregister extents before deallocation, so that subsequent reallocation/registration doesn't race with deregistration.
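A minimal sketch of that ordering, with placeholder names (registry_remove() and pages_unmap() stand in for the internal deregistration and unmapping code; they are not jemalloc's actual functions):

    #include <stddef.h>

    /* Placeholders for the internal registry and unmap primitives. */
    void registry_remove(void *addr, size_t size);
    void pages_unmap(void *addr, size_t size);

    static void
    extent_release(void *addr, size_t size)
    {
        /*
         * Deregister before deallocating.  If the pages were unmapped
         * first, another thread could mmap() the same range and register
         * it, and the late deregistration would then remove the new
         * owner's entry.
         */
        registry_remove(addr, size);
        pages_unmap(addr, size);
    }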
-
Jason Evans authored
Page-align the gap, if any, and add/use extent_dalloc_gap(), which registers the gap extent before deallocation.
-
Jason Evans authored
Now that extents are not multiples of chunksize, it's necessary to track pages rather than chunks.
-
Jason Evans authored
With the removal of subchunk size class infrastructure, there are no large size classes that are guaranteed to be re-expandable in place unless munmap() is disabled. Work around these legitimate failures with rallocx() fallback calls. If there were no test configuration for which the xallocx() calls succeeded, it would be important to override the extent hooks for testing purposes, but by default these tests don't use the rallocx() fallbacks on Linux, so test coverage is still sufficient.
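The fallback pattern used by the tests looks roughly like this (the helper and sizes are made up; xallocx() and rallocx() are jemalloc's public API): attempt in-place expansion, and fall back to a full reallocation if the resulting usable size falls short of the request.

    #include <jemalloc/jemalloc.h>

    /* Grow an allocation, preferring in-place expansion. */
    static void *
    grow(void *ptr, size_t new_size)
    {
        /* xallocx() resizes in place and returns the resulting usable size. */
        if (xallocx(ptr, new_size, 0, 0) >= new_size)
            return (ptr);
        /* In-place expansion failed (legitimate when munmap() is enabled). */
        return (rallocx(ptr, new_size, 0));
    }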
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
This facilitates the application accessing its own extent allocator metadata during hook invocations. This resolves #259.
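The enabling pattern looks like this sketch (the hook signature here is abbreviated and hypothetical; the real extent_hooks_t hooks take more arguments): because each hook now receives the registered extent_hooks_t pointer, an application can embed that struct in a larger one and recover its own metadata with a container-style cast.

    #include <stddef.h>

    /* Abbreviated, hypothetical hook type for illustration only. */
    typedef struct extent_hooks_s extent_hooks_t;
    struct extent_hooks_s {
        void *(*alloc)(extent_hooks_t *hooks, size_t size);
    };

    /* Application wrapper: hooks plus private allocator state. */
    typedef struct {
        extent_hooks_t hooks;   /* must be the first member for the cast */
        void *arena_base;       /* application metadata */
    } app_hooks_t;

    static void *
    app_alloc(extent_hooks_t *hooks, size_t size)
    {
        app_hooks_t *app = (app_hooks_t *)hooks;  /* recover metadata */

        (void)app->arena_base;
        (void)size;
        return (NULL);  /* allocation logic elided in this sketch */
    }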
-
Jason Evans authored
rtree-based extent lookups remain more expensive than chunk-based run lookups, but with this optimization the fast path slowdown is ~3 CPU cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles prior. The path caching speedup tends to degrade gracefully unless allocated memory is spread far apart (as is the case when using a mixture of sbrk() and mmap()).
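The caching idea can be illustrated generically (this is not jemalloc's rtree code; the names, the two-level layout, and the assumption that keys fit in 32 bits are all simplifications): remember the leaf visited by the previous lookup and reuse it when the next key falls under the same subtree, skipping the upper level of the walk.

    #include <stdint.h>
    #include <stddef.h>

    #define L1_BITS 16
    #define L2_BITS 16

    typedef struct { void *slots[1u << L2_BITS]; } leaf_t;
    typedef struct {
        leaf_t   *root[1u << L1_BITS];
        uintptr_t cached_prefix;   /* key >> L2_BITS of the last lookup */
        leaf_t   *cached_leaf;     /* leaf that served the last lookup */
    } rtree_t;

    static void *
    rtree_lookup(rtree_t *rt, uintptr_t key)
    {
        uintptr_t prefix = key >> L2_BITS;
        leaf_t *leaf;

        /* Fast path: same subtree as the previous lookup. */
        if (prefix == rt->cached_prefix && rt->cached_leaf != NULL) {
            leaf = rt->cached_leaf;
        } else {
            leaf = rt->root[prefix & ((1u << L1_BITS) - 1)];
            if (leaf == NULL)
                return (NULL);
            rt->cached_prefix = prefix;
            rt->cached_leaf = leaf;
        }
        return (leaf->slots[key & ((1u << L2_BITS) - 1)]);
    }

The cache pays off as long as consecutive lookups tend to land in the same subtree, which is why the speedup degrades when allocated memory is spread far apart.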
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
rallocx() for an alignment-constrained request may end up with a smaller-than-worst-case size if in-place reallocation succeeds due to serendipitous alignment. In such cases, sampling may not happen.
-
Jason Evans authored
In the case where prof_alloc_prep() is called with an over-estimate of allocation size, and sampling doesn't end up being triggered, the tctx must be discarded.
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Rename the stats.arenas.<i>.metadata.allocated mallctl to stats.arenas.<i>.metadata.
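Reading the renamed statistic through the standard mallctl() interface looks like this (arena index 0 is just an example, and the statistic requires --enable-stats):

    #include <jemalloc/jemalloc.h>
    #include <stdio.h>

    int
    main(void)
    {
        size_t metadata, sz = sizeof(metadata);

        /* Renamed from stats.arenas.0.metadata.allocated. */
        if (mallctl("stats.arenas.0.metadata", &metadata, &sz, NULL, 0) == 0)
            printf("arena 0 metadata: %zu bytes\n", metadata);
        return (0);
    }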
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
When an allocation is large enough to trigger multiple dumps, use modular math rather than subtraction to reset the interval counter. Prior to this change, it was possible for a single allocation to cause many subsequent allocations to all trigger profile dumps. When updating usable size for a sampled object, try to cancel out the difference between LARGE_MINCLASS and usable size from the interval counter.
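In arithmetic terms the change is from subtraction to a single modulo; this illustrative counter (not jemalloc's actual profiling code) shows why it matters for an allocation spanning several intervals:

    #include <stdint.h>

    static uint64_t interval_bytes = 1 << 20;   /* dump every 1 MiB */
    static uint64_t accum;                      /* bytes since last dump */

    static void
    prof_accum(uint64_t alloc_bytes)
    {
        accum += alloc_bytes;
        if (accum >= interval_bytes) {
            /* ... trigger one dump here ... */
            /*
             * Old: accum -= interval_bytes; an 8 MiB allocation would leave
             * accum about 7 MiB above zero, so each of the next several
             * allocations would trigger yet another dump.
             * New: modular reset accounts for all spanned intervals at once.
             */
            accum %= interval_bytes;
        }
    }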
-
Jason Evans authored
-
- Jun 03, 2016
-
-
Jason Evans authored
-
Jason Evans authored
Precisely size extents for huge size classes that aren't multiples of chunksize.
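A rough before/after of the sizing rule (the PAGE and CHUNKSIZE values are assumptions): extents backing huge size classes are now rounded only to a page multiple rather than up to the next chunk multiple.

    #include <stddef.h>

    #define PAGE      ((size_t)4096)
    #define CHUNKSIZE ((size_t)(2 * 1024 * 1024))

    /* Before: every huge extent occupied a whole number of chunks. */
    static size_t
    huge_extent_size_old(size_t usize)
    {
        return ((usize + CHUNKSIZE - 1) & ~(CHUNKSIZE - 1));
    }

    /* After: the extent is sized precisely, to page granularity. */
    static size_t
    huge_extent_size_new(size_t usize)
    {
        return ((usize + PAGE - 1) & ~(PAGE - 1));
    }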
-