- Jul 08, 2015
-
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
Conditionally define ENOENT, EINVAL, etc. (previously they were defined unconditionally). Add and use PRIzu, PRIzd, and PRIzx in malloc_printf() calls. gcc issued (harmless) warnings because e.g. "%zu" must be written as "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes that make gcc check for type mismatches in formatted printing calls.
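A minimal sketch of the PRIzu/PRIzd/PRIzx workaround described above, assuming _WIN32 as the platform test (the commit message does not name the exact guard); adjacent string literals concatenate, so one call site compiles cleanly on both Windows and POSIX:

    /* PRIzu/PRIzd/PRIzx sketch: the macro names come from the commit message;
     * the _WIN32 guard is an assumption for illustration. */
    #include <stdio.h>
    #include <stddef.h>

    #ifdef _WIN32
    /* MSVCRT's printf wants the I width modifier for size_t/ssize_t values. */
    #  define PRIzu "Iu"
    #  define PRIzd "Id"
    #  define PRIzx "Ix"
    #else
    /* C99-conformant libcs accept the z length modifier directly. */
    #  define PRIzu "zu"
    #  define PRIzd "zd"
    #  define PRIzx "zx"
    #endif

    int main(void) {
        size_t n = 4096;
        /* The same format string works on both platforms without triggering
         * gcc's format-attribute warnings. */
        printf("mapped %" PRIzu " bytes (0x%" PRIzx ")\n", n, n);
        return 0;
    }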
-
Jason Evans authored
-
- Jul 07, 2015
-
-
charsyam authored
Fix typos in ChangeLog.
-
Jason Evans authored
-
Jason Evans authored
-
- Jun 25, 2015
-
-
Matthijs authored
- Set opt_lg_chunk based on run-time OS setting
- Verify LG_PAGE is compatible with run-time OS setting
- When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION (see the sketch below)
- When targeting Windows Vista or newer, statically initialize init_lock
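A hedged sketch of the Vista-or-newer lock selection from the last two items; SRWLOCK, SRWLOCK_INIT, AcquireSRWLockExclusive(), and the CRITICAL_SECTION calls are real Win32 APIs, but the malloc_mutex_t wrapper and the _WIN32_WINNT guard are assumptions for illustration:

    /* Illustrative only: SRWLOCK can be statically initialized, so init_lock
     * needs no runtime setup on Vista+; CRITICAL_SECTION cannot. */
    #include <windows.h>

    #if defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0600  /* Vista or newer */
    typedef SRWLOCK malloc_mutex_t;                      /* hypothetical wrapper */
    #  define MALLOC_MUTEX_INITIALIZER SRWLOCK_INIT
    #  define malloc_mutex_lock(m)   AcquireSRWLockExclusive(m)
    #  define malloc_mutex_unlock(m) ReleaseSRWLockExclusive(m)
    static malloc_mutex_t init_lock = MALLOC_MUTEX_INITIALIZER;
    #else
    typedef CRITICAL_SECTION malloc_mutex_t;
    #  define malloc_mutex_lock(m)   EnterCriticalSection(m)
    #  define malloc_mutex_unlock(m) LeaveCriticalSection(m)
    /* CRITICAL_SECTION has no static initializer, so init_lock must be set up
     * at runtime with InitializeCriticalSection(&init_lock). */
    static malloc_mutex_t init_lock;
    #endif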
-
- Jun 24, 2015
-
-
Jason Evans authored
Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. Remove an assertion that erroneously caused arena_sdalloc() to fail when profiling was enabled. This resolves #232.
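The overflow hazard here is that rounding a near-SIZE_MAX request up to its size class can wrap to a small value and silently under-allocate. A hedged sketch of the kind of check involved, with s2u() and LARGE_MAXCLASS as stand-in names rather than jemalloc's exact internals:

    #include <stddef.h>

    #define LARGE_MAXCLASS ((size_t)1 << 62)  /* assumed largest supported class */

    /* Stand-in for size-class rounding: round up to a 4-KiB multiple.  For
     * sizes near SIZE_MAX the addition wraps and the result becomes tiny. */
    static size_t
    s2u(size_t size) {
        return (size + 4095) & ~(size_t)4095;
    }

    void *
    checked_malloc(size_t size) {
        size_t usize = s2u(size);
        /* usize == 0 means the rounding wrapped past SIZE_MAX; either way the
         * request must fail instead of handing back a too-small region. */
        if (usize == 0 || usize > LARGE_MAXCLASS)
            return NULL;
        /* ... allocate usize bytes (elided in this sketch) ... */
        return NULL;
    }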
-
- Jun 23, 2015
-
-
Jason Evans authored
This resolves #235.
-
Jason Evans authored
-
- Jun 22, 2015
-
-
Jason Evans authored
The regressions were never merged into the master branch.
-
- Jun 15, 2015
-
-
Jason Evans authored
-
- May 30, 2015
-
-
Jason Evans authored
-
Jason Evans authored
This avoids the potential surprise of deallocating an object with one tcache specified, and having the object cached in a different tcache once it drains from the quarantine.
-
- May 28, 2015
-
-
Chi-hung Hsieh authored
-
- May 20, 2015
-
-
Jason Evans authored
Now that small allocation runs have fewer regions due to run metadata residing in chunk headers, an explicit minimum tcache count is needed to make sure that tcache adequately amortizes synchronization overhead.
-
Jason Evans authored
Take into account large_pad when computing whether to pass the deallocation request to tcache_dalloc_large(), so that the largest cacheable size makes it back to tcache. This regression was introduced by 8a03cf03 (Implement cache index randomization for large allocations.).
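A heavily hedged sketch of the comparison this fix describes; run_size, large_pad, and tcache_maxclass follow the commit message, but the surrounding deallocation path is an assumption, not jemalloc's exact code:

    /* With cache index randomization, a large run carries large_pad bytes of
     * padding, so its usable size is run_size - large_pad.  Comparing the
     * padded size against tcache_maxclass pushed the largest cacheable class
     * over the limit; subtract the padding before the check. */
    static void
    dalloc_large_sketch(void *ptr, size_t run_size, size_t large_pad,
        size_t tcache_maxclass, int have_tcache) {
        if (have_tcache && run_size - large_pad <= tcache_maxclass) {
            /* cache it: tcache_dalloc_large(tcache, ptr, run_size - large_pad) */
        } else {
            /* hand the run back to the arena's large-deallocation path */
        }
        (void)ptr;
    }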
-
Jason Evans authored
Pass large allocation requests to arena_malloc() when possible. This regression was introduced by 155bfa7d (Normalize size classes.).
-
Jason Evans authored
This regression was introduced by 155bfa7d (Normalize size classes.).
-
- May 16, 2015
-
-
Jason Evans authored
-
- May 08, 2015
-
-
Jason Evans authored
-
- May 06, 2015
-
-
Jason Evans authored
Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs. For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines, this results in a uniform distribution among 64 page boundary offsets. Add the --disable-cache-oblivious option, primarily intended for performance testing. This resolves #13.
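A sketch of the randomized-offset idea under the stated assumptions (4-KiB pages, 64-byte cache lines); the LCG used here is illustrative, not jemalloc's actual PRNG:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE      4096u
    #define CACHELINE 64u

    static uint64_t prng_state = 0x9e3779b97f4a7c15ULL;  /* arbitrary seed */

    /* Return a cache-line-multiple offset within the first backing page.
     * PAGE / CACHELINE = 64 possible slots, so otherwise-unaligned large
     * allocations land on one of 64 distinct cache index positions instead of
     * always starting at a page boundary. */
    static size_t
    random_cacheline_offset(void) {
        prng_state = prng_state * 6364136223846793005ULL
            + 1442695040888963407ULL;                        /* cheap LCG step */
        size_t slot = (size_t)((prng_state >> 32) % (PAGE / CACHELINE)); /* 0..63 */
        return slot * CACHELINE;                             /* 0, 64, ..., 4032 */
    }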
-
Jason Evans authored
-
- May 01, 2015
-
-
Jason Evans authored
This rename avoids installation collisions with the upstream gperftools. Additionally, jemalloc's per thread heap profile functionality introduced an incompatible file format, so it's now worthwhile to clearly distinguish jemalloc's version of this script from the upstream version. This resolves #229.
-
Jason Evans authored
This resolves #227.
-
Jason Evans authored
This resolves #228.
-
- Apr 30, 2015
-
-
Igor Podlesny authored
-
Qinfan Wu authored
-
- Apr 07, 2015
-
-
Sébastien Marie authored
Under some compilers (gcc 4.8.4 in particular), the auto-detection of TLS does not work properly; force TLS to be disabled. The test suite passes under gcc 4.8.4 and gcc 4.2.1.
-
- Mar 26, 2015
-
-
Jason Evans authored
Fix the shrinking case of huge_ralloc_no_move_similar() to purge the correct number of pages, at the correct offset. This regression was introduced by 8d6a3e83 (Implement dynamic per arena control over dirty page purging.). Fix huge_ralloc_no_move_shrink() to purge the correct number of pages. This bug was introduced by 96739834 (Purge/zero sub-chunk huge allocations as necessary.).
-
- Mar 25, 2015
-
-
Jason Evans authored
-
- Mar 24, 2015
-
-
Jason Evans authored
-
Jason Evans authored
Fix arena_get() calls that specify refresh_if_missing=false. In ctl_refresh() and ctl.c's arena_purge(), these calls attempted to only refresh once, but did so in an unreliable way. arena_i_lg_dirty_mult_ctl() was simply wrong to pass refresh_if_missing=false.
-
Igor Podlesny authored
-
Jason Evans authored
-
- Mar 22, 2015
-
-
Igor Podlesny authored
-
- Mar 21, 2015
-
-
Qinfan Wu authored
-
Jason Evans authored
This regression was introduced by 8d6a3e83 (Implement dynamic per arena control over dirty page purging.). This resolves #215.
-
- Mar 19, 2015
-
-
Jason Evans authored
However, unlike before the option was removed, do not force --enable-ivsalloc when Darwin zone allocator integration is enabled, since the zone allocator code uses ivsalloc() regardless of whether malloc_usable_size() and sallocx() do. This resolves #211.
-