- Sep 10, 2015
-
-
Jason Evans authored
Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
-
- Sep 04, 2015
-
-
Dmitry-Me authored
-
Mike Hommey authored
This resolves #269.
-
Jason Evans authored
This didn't cause bad code generation in the one case spot-checked (gcc 4.8.1), but had the potential to do so. This bug was introduced by 594c759f (Optimize arena_prof_tctx_set().).
-
- Sep 02, 2015
-
-
Jason Evans authored
Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding whether it's actually necessary to write.
-
Jason Evans authored
Fix TLS configuration such that it is enabled by default for platforms on which it works correctly. This regression was introduced by ac5db020 (Make --enable-tls and --enable-lazy-lock take precedence over configure.ac-hardcoded defaults).
-
- Aug 28, 2015
-
-
Mike Hommey authored
When junk filling is enabled, shrinking an allocation fills the bytes that were previously allocated but now aren't. Purging the chunk before doing that is just a waste of time. This resolves #260.
-
Mike Hommey authored
Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl. This resolves #264.
-
Jason Evans authored
Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. This is a more general fix that complements 45e9f66c (Fix arenas_cache_cleanup().).
-
- Aug 26, 2015
-
-
Jason Evans authored
Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to match glibc and avoid compilation errors when including both jemalloc/jemalloc.h and malloc.h in C++ code. This change was unintentionally omitted from ae93d6bf (Avoid function prototype incompatibilities.).
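A minimal sketch of the pattern being matched (not jemalloc's exact header): glibc declares memalign() with a C++ exception specification, so a replacement prototype must carry the same annotation or C++ translation units that include both headers fail to compile.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Under C++, the annotation must expand to the same exception
 * specification glibc uses; under C it disappears entirely. */
#ifdef __cplusplus
#  define JEMALLOC_CXX_THROW throw()
#else
#  define JEMALLOC_CXX_THROW
#endif

/* Matches the glibc prototype; links against the C library's memalign. */
void *memalign(size_t alignment, size_t size) JEMALLOC_CXX_THROW;
```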
-
- Aug 21, 2015
-
-
Christopher Ferris authored
Fix arenas_cache_cleanup() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down.
-
- Aug 20, 2015
-
-
Jason Evans authored
Reported by Ingvar Hagelund.
-
Jason Evans authored
This resolves #256.
-
- Aug 19, 2015
-
-
Jason Evans authored
Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
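A hedged illustration of the hazard (not jemalloc's actual encoding): packing a run size into a map-entry field involves a shift whose count depends on the page size, and on large-page systems that count can go negative. Shifting by a negative amount is undefined behavior in C, so the shift direction has to be chosen by the sign.

```c
#include <stddef.h>

/* Illustrative helper: "shift" may be negative on systems with large
 * pages, so branch on its sign rather than shifting blindly. */
size_t
encode_run_size(size_t size, int shift)
{
    return (shift >= 0) ? (size >> shift) : (size << -shift);
}
```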
-
- Aug 17, 2015
-
-
Jason Evans authored
-
Jason Evans authored
-
- Aug 14, 2015
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
- Aug 13, 2015
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
- Aug 12, 2015
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
This is no longer necessary because of the more general chunk merge/split approach to dealing with map coalescing.
-
Jason Evans authored
Always leave decommit disabled on non-Windows systems.
-
Jason Evans authored
Fix arena_run_split_large_helper() to treat newly committed memory as zeroed.
-
- Aug 11, 2015
-
-
Jason Evans authored
This regression was introduced by de249c86 (Arena chunk decommit cleanups and fixes.). This resolves #254.
-
Mike Hommey authored
-
Jason Evans authored
Only set the unzeroed flag when initializing the entire mapbits entry, rather than mutating just the unzeroed bit. This simplifies the possible mapbits state transitions.
-
Jason Evans authored
Decommit arena chunk header during chunk deallocation if the rest of the chunk is decommitted.
-
- Aug 07, 2015
-
-
Jason Evans authored
-
Jason Evans authored
Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
-
Jason Evans authored
Fix arena_ralloc_large_grow() to properly account for large_pad, so that in-place large reallocation succeeds when possible, rather than always failing. This regression was introduced by 8a03cf03 (Implement cache index randomization for large allocations.).
-
- Aug 04, 2015
-
-
Daniel Micay authored
In builds with profiling disabled (the default), the opt_prof_prefix array has a one-byte length as a micro-optimization. This causes the use of write() in the unused profiling code to be statically flagged as a buffer overflow by Bionic's _FORTIFY_SOURCE implementation, as it tries to detect read overflows in addition to write overflows. This works around the problem by informing the compiler that not_reached() means the code is unreachable in release builds.
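A hedged sketch of the workaround (macro shape illustrative, not jemalloc's exact source): in release builds, not_reached() expands to a compiler hint that control never gets there, so the fortified analysis does not treat the dead profiling write as a potential overflow; debug builds keep the loud abort.

```c
#include <stdio.h>
#include <stdlib.h>

#ifdef JEMALLOC_DEBUG
#  define not_reached() do {                            \
        fprintf(stderr, "Unreachable code reached\n");  \
        abort();                                        \
} while (0)
#else
#  define not_reached() __builtin_unreachable()  /* GCC/Clang builtin */
#endif

int sign(int x) {
    if (x > 0) return 1;
    if (x < 0) return -1;
    if (x == 0) return 0;
    not_reached();  /* compiler now knows there is no fall-through here */
}
```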
-
Matthijs authored
- Decorate public functions with __declspec(allocator) and __declspec(restrict), just like MSVC 1900
- Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword
- Move __declspec(nothrow) between 'void' and '*' so it compiles once more
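A hedged sketch of the decoration pattern for MSVC 2015 (cl 1900); the macro names are illustrative, not jemalloc's, and expand to nothing on other compilers so the prototype stays portable.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#if defined(_MSC_VER) && _MSC_VER >= 1900
#  define ATTR_ALLOCATOR __declspec(allocator) __declspec(restrict)
#  define ATTR_NOTHROW   __declspec(nothrow)
#else
#  define ATTR_ALLOCATOR
#  define ATTR_NOTHROW
#endif

/* Note ATTR_NOTHROW sits between 'void' and '*', as cl requires.
 * my_alloc is a hypothetical stand-in for a public entry point. */
ATTR_ALLOCATOR void ATTR_NOTHROW *
my_alloc(size_t size)
{
    return malloc(size);
}
```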
-