- Sep 15, 2015
  - Jason Evans
  - Jason Evans
  - Jason Evans: Fix ixallocx_prof() to clamp the extra parameter if
    size + extra would overflow HUGE_MAXCLASS.
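
    A minimal sketch of the clamping idea (hypothetical helper; the
    HUGE_MAXCLASS value below is a stand-in, not jemalloc's actual constant):

        #include <stddef.h>

        /* Stand-in for jemalloc's largest huge size class. */
        #define HUGE_MAXCLASS   ((size_t)1 << 62)

        /* Clamp "extra" so that size + extra cannot exceed HUGE_MAXCLASS;
         * without the clamp, the unchecked addition could wrap around or
         * request an unsatisfiable size.  Assumes size <= HUGE_MAXCLASS. */
        static size_t
        clamp_extra(size_t size, size_t extra)
        {
                if (extra > HUGE_MAXCLASS - size)
                        extra = HUGE_MAXCLASS - size;
                return (extra);
        }
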
- Sep 12, 2015
  - Jason Evans: Prior to this change the debug build/test command needed to
    look like:

        make all tests && make check_unit && make check_integration && \
            make check_integration_prof

    This is now simply:

        make check

    Also rename the check_stress target to stress.
  - Jason Evans: arena_maxclass is no longer an appropriate name, because
    arenas also manage huge allocations.
  - Jason Evans: Fix xallocx() bugs related to the 'extra' parameter when it
    is specified as non-zero.
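
    For reference, a small usage example of the affected interface; xallocx()
    attempts to resize an allocation in place to at least size bytes, with up
    to extra bytes of slack, and returns the resulting real size:

        #include <stddef.h>
        #include <jemalloc/jemalloc.h>

        int
        main(void)
        {
                void *p = mallocx(4096, 0);
                if (p == NULL)
                        return (1);
                /* Try to grow p in place to at least 8192 bytes, accepting
                 * anything up to 8192 + 4096.  xallocx() never moves the
                 * allocation; if the request cannot be satisfied in place,
                 * the returned real size is simply unchanged. */
                size_t usize = xallocx(p, 8192, 4096, 0);
                (void)usize;
                dallocx(p, 0);
                return (0);
        }
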
- Sep 10, 2015
  - Jason Evans: Fix heap profiling to distinguish among otherwise identical
    sample sites with interposed resets (triggered via the "prof.reset"
    mallctl). This bug could cause data structure corruption that would most
    likely result in a segfault.
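
    For context, such a reset is triggered through the mallctl interface,
    roughly as follows (requires a build with heap profiling enabled):

        #include <jemalloc/jemalloc.h>

        /* Reset all memory profile statistics without changing the sample
         * rate; returns 0 on success. */
        static int
        profile_reset(void)
        {
                return (mallctl("prof.reset", NULL, NULL, NULL, 0));
        }
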
- Sep 04, 2015
  - Dmitry-Me
  - Mike Hommey: This resolves #269.
  - Jason Evans: This didn't cause bad code generation in the one case
    spot-checked (gcc 4.8.1), but had the potential to do so. This bug was
    introduced by 594c759f (Optimize arena_prof_tctx_set().).
- Sep 02, 2015
  - Jason Evans: Optimize arena_prof_tctx_set() to avoid reading run metadata
    when deciding whether it's actually necessary to write.
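
    A rough sketch of the pattern (hypothetical names; the real function must
    also distinguish small from large runs):

        #include <stdint.h>

        typedef struct prof_tctx_s prof_tctx_t;

        /* Sentinel used for non-sampled allocations (an assumption of this
         * sketch). */
        #define PROF_TCTX_DUMMY ((prof_tctx_t *)(uintptr_t)1U)

        static void
        prof_tctx_set_sketch(void *ptr, prof_tctx_t *tctx)
        {
                /* Common case: a non-sampled allocation's metadata already
                 * holds the dummy value, so skip the run-metadata read and
                 * only pay for it when there is something to write. */
                if (tctx == PROF_TCTX_DUMMY)
                        return;
                /* ... read the run's metadata and store tctx for the
                 * sampled allocation (elided) ... */
                (void)ptr;
        }
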
  - Jason Evans: Fix TLS configuration such that it is enabled by default for
    platforms on which it works correctly. This regression was introduced by
    ac5db020 (Make --enable-tls and --enable-lazy-lock take precedence over
    configure.ac-hardcoded defaults).
- Aug 28, 2015
  - Mike Hommey: When junk filling is enabled, shrinking an allocation fills
    the bytes that were previously allocated but now aren't. Purging the
    chunk before doing that is just a waste of time. This resolves #260.
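
    A minimal sketch of what shrink-time junk filling does (illustrative;
    0x5a is jemalloc's freed-junk byte):

        #include <string.h>

        /* Junk-fill the tail that is given up when shrinking from old_usize
         * to new_usize.  Purging those pages first would be wasted work,
         * since the fill dirties them again immediately. */
        static void
        junk_fill_tail(void *ptr, size_t old_usize, size_t new_usize)
        {
                memset((char *)ptr + new_usize, 0x5a,
                    old_usize - new_usize);
        }
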
  - Mike Hommey: Fix chunk purge hook calls for in-place huge shrinking
    reallocation to specify the old chunk size rather than the new chunk
    size. This bug caused no correctness issues for the default chunk purge
    function, but was visible to custom functions set via the
    "arena.<i>.chunk_hooks" mallctl. This resolves #264.
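
    To see why only custom hooks noticed, consider a skeletal purge hook in
    the shape of jemalloc 4.x's chunk_purge_t (the return-value convention in
    the comment is an assumption of this sketch):

        #include <assert.h>
        #include <stdbool.h>
        #include <stddef.h>

        static bool
        my_chunk_purge(void *chunk, size_t size, size_t offset,
            size_t length, unsigned arena_ind)
        {
                (void)chunk; (void)arena_ind;
                /* Before the fix, an in-place huge shrink passed the new
                 * chunk size as "size", so this documented invariant could
                 * appear violated from inside a custom hook. */
                assert(offset + length <= size);
                /* Return true to report that the pages were not purged. */
                return (true);
        }
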
  - Jason Evans: Fix arenas_cache_cleanup() and arena_get_hard() to handle
    allocation/deallocation within the application's thread-specific data
    cleanup functions even after arenas_cache is torn down. This is a more
    general fix that complements 45e9f66c (Fix arenas_cache_cleanup().).
- Aug 26, 2015
  - Jason Evans: Add JEMALLOC_CXX_THROW to the memalign() function prototype,
    in order to match glibc and avoid compilation errors when including both
    jemalloc/jemalloc.h and malloc.h in C++ code. This change was
    unintentionally omitted from ae93d6bf (Avoid function prototype
    incompatibilities.).
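
    A reduced illustration of the mismatch (the macro expansion shown is an
    assumption of this sketch):

        #include <stddef.h>

        /* Under C++, glibc's malloc.h declares memalign() with a throw()
         * exception specifier; if jemalloc/jemalloc.h redeclares it without
         * one, g++ rejects the mismatched redeclaration. */
        #ifdef __cplusplus
        #  define JEMALLOC_CXX_THROW throw()
        #else
        #  define JEMALLOC_CXX_THROW
        #endif

        void    *memalign(size_t alignment, size_t size) JEMALLOC_CXX_THROW;
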
- Aug 21, 2015
  - Christopher Ferris: Fix arenas_cache_cleanup() to handle
    allocation/deallocation within the application's thread-specific data
    cleanup functions even after arenas_cache is torn down.
- Aug 20, 2015
  - Jason Evans: Reported by Ingvar Hagelund.
  - Jason Evans: This resolves #256.
- Aug 19, 2015
  - Jason Evans: Don't bitshift by negative amounts when encoding/decoding
    run sizes in chunk header maps. This affected systems with page sizes
    greater than 8 KiB. Reported by Ingvar Hagelund
    <ingvar@redpill-linpro.com>.
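
    A sketch of the guarded-shift pattern (illustrative constants, not
    jemalloc's): the shift distance is a compile-time function of the page
    size and goes negative once pages exceed 8 KiB, and shifting by a
    negative amount is undefined behavior in C, so the direction has to be
    selected explicitly:

        #include <stddef.h>

        #define LG_PAGE         14              /* e.g. 16 KiB pages */
        #define MAP_SIZE_SHIFT  (13 - LG_PAGE)  /* negative here */

        static size_t
        map_size_encode(size_t size)
        {
        #if MAP_SIZE_SHIFT > 0
                return (size << MAP_SIZE_SHIFT);
        #elif MAP_SIZE_SHIFT == 0
                return (size);
        #else
                return (size >> -MAP_SIZE_SHIFT);
        #endif
        }
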
- Aug 17, 2015
  - Jason Evans
  - Jason Evans
- Aug 14, 2015
  - Jason Evans
  - Jason Evans
  - Jason Evans
- Aug 13, 2015
  - Jason Evans
  - Jason Evans
  - Jason Evans
  - Jason Evans
  - Jason Evans
  - Jason Evans
- Aug 12, 2015
  - Jason Evans
  - Jason Evans
  - Jason Evans
  - Jason Evans: This is no longer necessary because of the more general
    chunk merge/split approach to dealing with map coalescing.
  - Jason Evans: Always leave decommit disabled on non-Windows systems.
  - Jason Evans: Fix arena_run_split_large_helper() to treat newly committed
    memory as zeroed.
- Aug 11, 2015
  - Jason Evans: This regression was introduced by de249c86 (Arena chunk
    decommit cleanups and fixes.). This resolves #254.
  - Mike Hommey
  - Jason Evans: Only set the unzeroed flag when initializing the entire
    mapbits entry, rather than mutating just the unzeroed bit. This
    simplifies the possible mapbits state transitions.