- Jul 18, 2015
  Jason Evans authored
- Jul 16, 2015
  Dave Rigby authored
    Fixes warning with newer GCCs: include/jemalloc/jemalloc.h:229:2: warning: extra ';' [-Wpedantic]
  Jason Evans authored
    This change improves interaction with transparent huge pages, e.g. reduced page faults (at least in the absence of unused dirty page purging).
  Jason Evans authored
    This effectively reverts 97c04a93 (Use first-fit rather than first-best-fit run/chunk allocation.). In some pathological cases, first-fit search dominates allocation time, and it also tends not to converge as readily on a steady state of memory layout, since precise allocation order has a larger effect than it does for first-best-fit.
- Jul 13, 2015
  Jason Evans authored
- Jul 11, 2015
  Jason Evans authored
  Jason Evans authored
  Jason Evans authored
    Add various function attributes to the exported functions to give the compiler more information to work with during optimization, and also specify throw() when compiling with C++ on Linux, in order to adequately match what __THROW does in glibc. This resolves #237.
- Jul 10, 2015
  Jason Evans authored
    This {bug,regression} was introduced by 155bfa7d (Normalize size classes.). This resolves #241.
  Jason Evans authored
- Jul 09, 2015
  Jason Evans authored
- Jul 08, 2015
  Jason Evans authored
  Jason Evans authored
  Jason Evans authored
    Conditionally define ENOENT, EINVAL, etc. (previously these were defined unconditionally). Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls; gcc issued (harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes that cause gcc to check for type mismatches in formatted printing function calls.
  Jason Evans authored
- Jul 07, 2015
  charsyam authored
    Fix typos in ChangeLog.
  Jason Evans authored
  Jason Evans authored
- Jun 25, 2015
  Matthijs authored
    - Set opt_lg_chunk based on the run-time OS setting.
    - Verify LG_PAGE is compatible with the run-time OS setting.
    - When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION.
    - When targeting Windows Vista or newer, statically initialize init_lock.
- Jun 24, 2015
  Jason Evans authored
    Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. Remove an assertion that erroneously caused arena_sdalloc() to fail when profiling was enabled. This resolves #232.
- Jun 23, 2015
  Jason Evans authored
    This resolves #235.
  Jason Evans authored
- Jun 22, 2015
  Jason Evans authored
    The regressions were never merged into the master branch.
- Jun 15, 2015
  Jason Evans authored
- May 30, 2015
  Jason Evans authored
  Jason Evans authored
    This avoids the potential surprise of deallocating an object with one tcache specified, and having the object cached in a different tcache once it drains from the quarantine.
- May 28, 2015
  Chi-hung Hsieh authored
- May 20, 2015
  Jason Evans authored
    Now that small allocation runs have fewer regions due to run metadata residing in chunk headers, an explicit minimum tcache count is needed to make sure that tcache adequately amortizes synchronization overhead.
  Jason Evans authored
    Take into account large_pad when computing whether to pass the deallocation request to tcache_dalloc_large(), so that the largest cacheable size makes it back to tcache. This regression was introduced by 8a03cf03 (Implement cache index randomization for large allocations.).
  Jason Evans authored
    Pass large allocation requests to arena_malloc() when possible. This regression was introduced by 155bfa7d (Normalize size classes.).
  Jason Evans authored
    This regression was introduced by 155bfa7d (Normalize size classes.).
- May 16, 2015
  Jason Evans authored
- May 08, 2015
  Jason Evans authored
- May 06, 2015
  Jason Evans authored
    Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs. For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines this results in a uniform distribution among 64 page boundary offsets. Add the --disable-cache-oblivious option, primarily intended for performance testing. This resolves #13.
  Jason Evans authored
- May 01, 2015
  Jason Evans authored
    This rename avoids installation collisions with the upstream gperftools. Additionally, jemalloc's per thread heap profile functionality introduced an incompatible file format, so it's now worthwhile to clearly distinguish jemalloc's version of this script from the upstream version. This resolves #229.
  Jason Evans authored
    This resolves #227.
  Jason Evans authored
    This resolves #228.
- Apr 30, 2015
  Igor Podlesny authored
  Qinfan Wu authored