- Aug 14, 2010
Jason Evans authored
Make it possible for each thread to manage which arena it is associated with. Implement the 'tests' and 'check' build targets.
-
- Aug 05, 2010
Jason Evans authored
Move assert() calls up in arena_run_reg_alloc(), so that a corrupt pointer will likely be caught by an assertion *before* it is dereferenced.
-
- Jul 22, 2010
Jason Evans authored
If multiple threads race to initialize malloc, the loser(s) busy-wait until initialization is complete. Add a missing mutex lock so that the loser(s) properly release the initialization mutex. Under some race conditions, this flaw could have caused one or more threads to become permanently blocked. Reported by Terrell Magee.
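The protocol this fix describes can be sketched with an illustrative bootstrap routine; all names here are hypothetical, not jemalloc's actual code. The key point is that a losing thread must hold the initialization mutex when it finally releases it, rather than unlocking a mutex it does not own:

```c
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static bool malloc_initialized = false;
static bool initializer_busy = false;

/* Returns true if this call performed initialization, false if another
 * thread did (or already had). */
static bool
malloc_init_hard(void)
{
    pthread_mutex_lock(&init_lock);
    if (malloc_initialized) {
        pthread_mutex_unlock(&init_lock);
        return false;
    }
    if (initializer_busy) {
        /* Loser: busy-wait for the winner, dropping the lock while
         * spinning.  The loop re-acquires the lock each iteration, so
         * the final unlock below releases a mutex this thread owns. */
        do {
            pthread_mutex_unlock(&init_lock);
            sched_yield();
            pthread_mutex_lock(&init_lock);  /* the missing lock */
        } while (!malloc_initialized);
        pthread_mutex_unlock(&init_lock);
        return false;
    }
    initializer_busy = true;
    pthread_mutex_unlock(&init_lock);
    /* ... perform initialization without the lock held ... */
    pthread_mutex_lock(&init_lock);
    malloc_initialized = true;
    pthread_mutex_unlock(&init_lock);
    return true;
}
```

Without the re-acquisition inside the loop, a loser would unlock an unowned mutex, which is undefined behavior and can leave other waiters blocked forever.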
-
- Jun 05, 2010
Jason Evans authored
Fix the libunwind version of prof_backtrace() to set the backtrace depth for all possible code paths. This fixes the zero-length backtrace problem when using libunwind.
-
- May 12, 2010
Jason Evans authored
When heap profiling is enabled but deactivated, there is no need to call isalloc(ptr) in prof_{malloc,realloc}(). Avoid these calls, so that profiling overhead under such conditions is negligible.
-
- May 11, 2010
Jason Evans authored
If there is more than one arena, initialize next_arena so that the first and second threads to allocate memory use arenas 0 and 1, rather than both using arena 0.
-
Jordan DeLong authored
Add MAP_NORESERVE to the chunk_mmap() case being used by chunk_swap_enable(), if the system supports it.
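A minimal sketch of the idea, not jemalloc's actual chunk_mmap(): request the mapping with MAP_NORESERVE when the platform defines it, so address space is reserved without committing swap:

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Illustrative helper: map anonymous memory, avoiding swap reservation
 * where the system supports it. */
static void *
chunk_mmap_sketch(size_t size)
{
    int flags = MAP_PRIVATE | MAP_ANONYMOUS;
#ifdef MAP_NORESERVE
    flags |= MAP_NORESERVE;   /* don't reserve backing store up front */
#endif
    void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
    return (ret == MAP_FAILED) ? NULL : ret;
}
```

The #ifdef guard is the portable pattern: on systems without MAP_NORESERVE the flag is simply omitted and behavior is unchanged.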
-
- Apr 28, 2010
Jason Evans authored
Use the size argument to tcache_dalloc_large() to control the number of bytes set to 0x5a when junk filling is enabled, rather than accessing a non-existent arena bin. This bug was capable of corrupting an arbitrarily large memory region, depending on what followed the arena data structure in memory (typically zeroed memory, another arena_t, or a red-black tree node for a huge object).
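The shape of the fix can be sketched as follows (names are illustrative, not jemalloc's): the fill length comes from the caller-supplied size, never from bin metadata that does not exist for large allocations:

```c
#include <stddef.h>
#include <string.h>

/* Junk-fill exactly `size` bytes with the 0x5a pattern. */
static void
junk_fill(void *ptr, size_t size)
{
    memset(ptr, 0x5a, size);
}

/* Demonstration helper: returns 1 iff exactly `size` bytes of a zeroed
 * 64-byte buffer were junk-filled, with no overrun. */
static int
junk_fill_exact(size_t size)
{
    unsigned char buf[64] = {0};
    junk_fill(buf, size);
    for (size_t i = 0; i < sizeof(buf); i++) {
        if ((i < size) != (buf[i] == 0x5a))
            return 0;
    }
    return 1;
}
```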
-
- Apr 14, 2010
Jason Evans authored
Properly maintain tcache_bin_t's avail pointer such that it is NULL if no objects are cached. This only caused problems during thread cache destruction, since cache flushing otherwise never occurs on an empty bin.
-
Jason Evans authored
Properly set the context associated with each allocated object, even when the object is not sampled. Remove debug print code that slipped in.
-
Jason Evans authored
-
Jason Evans authored
Fix arena_chunk_dealloc() to put the new spare in a consistent state before dropping the arena mutex to deallocate the previous spare. Fix arena_run_dalloc() to insert a newly dirtied chunk into the chunks_dirty list before potentially deallocating the chunk, so that dirty page accounting is self-consistent.
-
Jason Evans authored
Initialize bt2cnt_tsd so that cleanup at thread exit actually happens. Associate (prof_ctx_t *) with allocated objects, rather than (prof_thr_cnt_t *). Each thread must always operate on its own (prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
-
Jason Evans authored
Fix a compilation error due to stale data structure access code in tcache_dalloc_large() for junk filling.
-
- Apr 12, 2010
Jason Evans authored
-
- Apr 09, 2010
Jason Evans authored
Generalize ExtractSymbols to handle all cases of library address overlap with the main binary.
-
Jason Evans authored
Linux kernels have been capable of concurrent page table access since 2.6.27, so this hack is not necessary for modern kernels.
-
Jason Evans authored
Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to be called, opt_stats_print must actually be checked when reporting the state of the P/p option.
-
- Apr 08, 2010
Jason Evans authored
Don't build with -march=native by default, because the generated code may perform especially poorly on ABI-compatible, but internally different, systems.
-
Jason Evans authored
Split library build rules up so that parallel building works. Fix autoconf-related dependencies. Remove obsolete JEMALLOC_VERSION definition.
-
Jason Evans authored
Iterate downward through both libraries and PCs. This allows PCs to resolve even when library address ranges overlap.
-
- Apr 06, 2010
Jason Evans authored
Remove a duplicate prof_leave() call in an error path through prof_dump().
-
Jason Evans authored
-
- Apr 03, 2010
Jason Evans authored
Modify ExtractSymbols to operate on sorted PCs and libraries, in order to reduce computational complexity from O(N*M) to O(N+M).
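The complexity reduction comes from a two-pointer scan over the sorted inputs; a hedged sketch in C (ExtractSymbols itself lives in the pprof script, so these names are illustrative only):

```c
#include <stddef.h>

typedef struct {
    unsigned long base;  /* inclusive start of the library's range */
    unsigned long end;   /* exclusive end of the library's range */
} lib_t;

/* With both PCs and library ranges sorted by address, each list is
 * traversed once, so matching is O(N+M) rather than the O(N*M) of
 * testing every PC against every library.  Returns the number of PCs
 * matched; lib_of_pc[i] receives the matching library index. */
static size_t
match_pcs(const unsigned long *pcs, size_t npcs,
    const lib_t *libs, size_t nlibs, size_t *lib_of_pc)
{
    size_t matched = 0, j = 0;
    for (size_t i = 0; i < npcs; i++) {
        while (j < nlibs && libs[j].end <= pcs[i])
            j++;  /* library ends below this PC; never revisit it */
        if (j < nlibs && libs[j].base <= pcs[i]) {
            lib_of_pc[i] = j;
            matched++;
        }
    }
    return matched;
}

/* Small demonstration: two libraries, four PCs, two of which fall
 * inside a library's range. */
static size_t
match_pcs_demo(void)
{
    unsigned long pcs[] = {0x10, 0x25, 0x40, 0x55};
    lib_t libs[] = {{0x00, 0x20}, {0x30, 0x50}};
    size_t who[4];
    return match_pcs(pcs, 4, libs, 2, who);
}
```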
-
Jason Evans authored
-
- Apr 02, 2010
Jason Evans authored
Fix divide-by-zero error in pprof. It is possible for a sample context to currently have no associated objects, but its cumulative statistics are still useful, depending on how the user invokes pprof. Since jemalloc intentionally does not filter such contexts, take care not to divide by 0 when re-scaling for v2 heap sampling. Install pprof as part of 'make install'. Update pprof documentation.
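The essence of the guard, sketched in C (pprof itself is a script; these names are illustrative): any per-object average must check for a zero object count before dividing:

```c
/* A sample context can currently have zero associated objects, so the
 * mean object size used during re-scaling must guard the division
 * rather than filtering such contexts out. */
static double
mean_object_size(double bytes, double count)
{
    return (count == 0.0) ? 0.0 : bytes / count;
}
```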
-
Jason Evans authored
Leak reporting is useful even if sampling is enabled; some leaks may not be reported, but those reported are still genuine leaks.
-
- Apr 01, 2010
Jason Evans authored
Add the E/e options to control whether the application starts with sampling active/inactive (secondary control to F/f). Add the prof.active mallctl so that the application can activate/deactivate sampling on the fly.
-
Jason Evans authored
Make it possible to disable interval-triggered profile dumping, even if profiling is enabled. This is useful if the user only wants a single dump at exit, or if the application manually triggers profile dumps.
-
Jason Evans authored
If the mean heap sampling interval is larger than one page, simulate sampled small objects with large objects. This allows profiling context pointers to be omitted for small objects. As a result, the memory overhead for sampling decreases as the sampling interval is increased. Fix a compilation error in the profiling code.
-
- Mar 27, 2010
Jason Evans authored
-
- Mar 22, 2010
Jason Evans authored
Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to whether the pages are (potentially) file-backed or anonymous. This was merely a performance pessimization for the anonymous mapping case, but was a calloc()-related bug for the swap_enabled case.
-
- Mar 19, 2010
Jason Evans authored
Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and preferentially allocate dirty runs.
-
- Mar 18, 2010
Jason Evans authored
Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes). Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
-
- Mar 16, 2010
Jason Evans authored
Initialize the small run header before dropping arena->lock; arena_chunk_purge() relies on valid small run headers during run iteration. Add some assertions.
-
Jason Evans authored
Check for interior pointers in arena_[ds]alloc(). Check for corrupt pointers in tcache_alloc().
-
- Mar 15, 2010
Jason Evans authored
-
Jason Evans authored
Update arena->nactive when pseudo-allocating runs in arena_chunk_purge(), since arena_run_dalloc() subtracts from arena->nactive.
-
Jason Evans authored
-
Jason Evans authored
-