- Sep 17, 2015
Jason Evans authored
Fix ixallocx_prof_sample() to never modify nor create sampled small allocations. xallocx() is in general incapable of moving small allocations, so this fix removes buggy code without loss of generality.
- Sep 16, 2015
Jason Evans authored
- Sep 15, 2015
Jason Evans authored
Jason Evans authored
Fix prof_realloc() to call prof_free_sampled_object() after calling prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were the same, the tctx could have been prematurely destroyed.
Jason Evans authored
Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample() in the correct order.
Jason Evans authored
Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes.
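For reference, the user-visible switch behind prof_active is the "prof.active" mallctl; a minimal sketch of reading and toggling it, assuming a build configured with --enable-prof:

```c
#include <stdbool.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Read the current "prof.active" value, then disable sampling. Concurrent
 * writers of this flag are what the fix above guards allocation events
 * against. Requires a jemalloc built with --enable-prof. */
int
main(void)
{
	bool active;
	size_t sz = sizeof(active);

	if (mallctl("prof.active", &active, &sz, NULL, 0) != 0) {
		fprintf(stderr, "prof.active unavailable\n");
		return (1);
	}
	printf("prof.active was %s\n", active ? "true" : "false");

	active = false;	/* Turn sampling off for subsequent allocations. */
	mallctl("prof.active", NULL, NULL, &active, sizeof(active));
	return (0);
}
```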
Jason Evans authored
- Sep 12, 2015
Jason Evans authored
Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
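For context, xallocx() only ever resizes in place; a minimal sketch of a call with a non-zero 'extra' argument:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Try to grow an allocation in place to at least 5000 bytes, and
 * opportunistically up to 5000 + 3000 bytes. xallocx() never moves the
 * pointer; it returns the allocation's real size afterward, which is
 * simply the old size if the resize could not be performed. */
int
main(void)
{
	void *p = mallocx(4096, 0);
	if (p == NULL)
		return (1);

	size_t usable = xallocx(p, 5000, 3000, 0);
	printf("usable size after xallocx: %zu\n", usable);

	dallocx(p, 0);
	return (0);
}
```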
- Sep 10, 2015
Jason Evans authored
Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
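For reference, a minimal sketch of triggering such a reset from application code, assuming a build configured with --enable-prof:

```c
#include <jemalloc/jemalloc.h>

/* Discard accumulated heap-profile sample data via the "prof.reset"
 * mallctl, leaving the sample rate unchanged. Interposing such resets
 * between otherwise identical sample sites is what exposed the bug above. */
static void
reset_heap_profile(void)
{
	mallctl("prof.reset", NULL, NULL, NULL, 0);
}
```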
- Sep 02, 2015
Jason Evans authored
Fix TLS configuration such that it is enabled by default for platforms on which it works correctly. This regression was introduced by ac5db020 (Make --enable-tls and --enable-lazy-lock take precedence over configure.ac-hardcoded defaults).
- Aug 28, 2015
Mike Hommey authored
Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl. This resolves #264.
Jason Evans authored
Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. This is a more general fix that complements 45e9f66c (Fix arenas_cache_cleanup().).
- Aug 26, 2015
Jason Evans authored
Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to match glibc and avoid compilation errors when including both jemalloc/jemalloc.h and malloc.h in C++ code. This change was unintentionally omitted from ae93d6bf (Avoid function prototype incompatibilities.).
- Aug 21, 2015
Christopher Ferris authored
Fix arenas_cache_cleanup() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down.
- Aug 20, 2015
Jason Evans authored
This resolves #256.
- Aug 19, 2015
Jason Evans authored
Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
- Aug 17, 2015
Jason Evans authored
- Aug 12, 2015
Jason Evans authored
- Aug 04, 2015
Jason Evans authored
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies. Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks. Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used. This resolves #176 and #201.
- Jul 18, 2015
Jason Evans authored
- Jul 09, 2015
Jason Evans authored
- Jul 07, 2015
charsyam authored
Fix typos in ChangeLog.
Jason Evans authored
- Jun 24, 2015
Jason Evans authored
Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. Remove an assertion that erroneously caused arena_sdalloc() to fail when profiling was enabled. This resolves #232.
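To illustrate the overflow cases involved, a minimal sketch of requests that must fail cleanly rather than wrap around size-class computation:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Huge requests near SIZE_MAX cannot be satisfied; malloc() must return
 * NULL (setting errno to ENOMEM on POSIX systems) and posix_memalign()
 * must return ENOMEM, instead of overflowing internally. */
int
main(void)
{
	errno = 0;
	void *p = malloc(SIZE_MAX);
	printf("malloc(SIZE_MAX): %p errno=%d\n", p, errno);

	void *q = NULL;
	int err = posix_memalign(&q, 64, SIZE_MAX - 64);
	printf("posix_memalign: %p err=%d\n", q, err);
	return (0);
}
```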
- May 06, 2015
Jason Evans authored
Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs.

For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines this results in a uniform distribution among 64 page boundary offsets.

Add the --disable-cache-oblivious option, primarily intended for performance testing.

This resolves #13.
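A small sketch of the offset arithmetic described above, using hypothetical names rather than jemalloc internals and assuming 4-KiB pages and 64-byte cache lines:

```c
#include <stdio.h>

/* With 4096-byte pages and 64-byte cache lines there are 4096 / 64 = 64
 * cache-line-aligned offsets within a page; picking one pseudo-randomly
 * per large allocation spreads cache index usage uniformly. */
#define PAGE_SIZE	4096
#define CACHELINE	64
#define NOFFSETS	(PAGE_SIZE / CACHELINE)		/* 64 */

static size_t
random_large_offset(unsigned prng_value)
{
	return ((size_t)(prng_value % NOFFSETS) * CACHELINE);
}

int
main(void)
{
	for (unsigned i = 0; i < 4; i++)
		printf("sample %u -> offset %zu bytes\n", i,
		    random_large_offset(i * 17 + 3));
	return (0);
}
```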
- May 01, 2015
Jason Evans authored
This rename avoids installation collisions with the upstream gperftools. Additionally, jemalloc's per thread heap profile functionality introduced an incompatible file format, so it's now worthwhile to clearly distinguish jemalloc's version of this script from the upstream version. This resolves #229.
- Apr 30, 2015
Qinfan Wu authored
- Mar 25, 2015
Jason Evans authored
- Mar 24, 2015
Jason Evans authored
- Mar 19, 2015
Jason Evans authored
However, unlike before it was removed, do not force --enable-ivsalloc when Darwin zone allocator integration is enabled, since the zone allocator code uses ivsalloc() regardless of whether malloc_usable_size() and sallocx() do. This resolves #211.
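For reference, a minimal usage sketch of the two query functions mentioned above; both report an allocation's real (usable) size, and --enable-ivsalloc controls whether the pointer is validated first:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Both calls below return the usable size backing p; with --enable-ivsalloc
 * the implementation additionally checks that p was issued by jemalloc. */
int
main(void)
{
	void *p = mallocx(100, 0);
	if (p == NULL)
		return (1);
	printf("malloc_usable_size: %zu\n", malloc_usable_size(p));
	printf("sallocx:            %zu\n", sallocx(p, 0));
	dallocx(p, 0);
	return (0);
}
```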
- Mar 10, 2015
Jason Evans authored
- Mar 31, 2014
Jason Evans authored
- Feb 26, 2014
Jason Evans authored
- Jan 22, 2014
Jason Evans authored
Jason Evans authored
- Jan 18, 2014
Jason Evans authored
- Dec 06, 2013
Jason Evans authored
Unless heap profiling is enabled, disable floating point code and don't link with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64 systems, makes it possible to completely disable floating point register use. Some versions of glibc neglect to save/restore caller-saved floating point registers during dynamic lazy symbol loading, and the symbol loading code uses whatever malloc the application happens to have linked/loaded with, the result being potential floating point register corruption.
- Dec 04, 2013
Jason Evans authored
Refactor the test harness to support three types of tests:
- unit: White box unit tests. These tests have full access to all internal jemalloc library symbols. Though in actuality all symbols are prefixed by jet_, macro-based name mangling abstracts this away from test code.
- integration: Black box integration tests. These tests link with the installable shared jemalloc library, and with the exception of some utility code and configure-generated macro definitions, they have no access to jemalloc internals.
- stress: Black box stress tests. These tests link with the installable shared jemalloc library, as well as with an internal allocator with symbols prefixed by jet_ (same as for unit tests) that can be used to allocate data structures that are internal to the test code.

Move existing tests into test/{unit,integration}/ as appropriate.

Split out internal parts of jemalloc_defs.h.in and put them in jemalloc_internal_defs.h.in. This reduces internals exposure to applications that #include <jemalloc/jemalloc.h>.

Refactor jemalloc.h header generation so that a single header file results, and the prototypes can be used to generate jet_ prototypes for tests. Split jemalloc.h.in into multiple parts (jemalloc_defs.h.in, jemalloc_macros.h.in, jemalloc_protos.h.in, jemalloc_mangle.h.in) and use a shell script to generate a unified jemalloc.h at configure time.

Change the default private namespace prefix from "" to "je_". Add missing private namespace mangling. Remove hard-coded private_namespace.h. Instead generate it and private_unnamespace.h from private_symbols.txt. Use similar logic for public symbols, which aids in name mangling for jet_ symbols.

Add test_warn() and test_fail(). Replace existing exit(1) calls with test_fail() calls.
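To illustrate the jet_ name mangling mentioned above, a hypothetical sketch of what such macro-based mangling looks like (this is not jemalloc's actual generated header):

```c
/* Hypothetical mangling header for unit tests: test code keeps calling the
 * public allocator names, and the preprocessor redirects the calls to the
 * jet_-prefixed symbols built into the test library. */
#ifndef TEST_MANGLE_H	/* hypothetical guard */
#define TEST_MANGLE_H

#define	malloc		jet_malloc
#define	calloc		jet_calloc
#define	realloc		jet_realloc
#define	free		jet_free
#define	mallocx		jet_mallocx
#define	dallocx		jet_dallocx

#endif
```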
- Oct 21, 2013
Jason Evans authored
- Oct 20, 2013
Jason Evans authored
Fix a race condition in the "arenas.extend" mallctl that could lead to internal data structure corruption. The race could be hit if one thread called the "arenas.extend" mallctl while another thread concurrently triggered initialization of one of the lazily created arenas.