- Mar 24, 2015
Jason Evans authored
- Mar 22, 2015
Igor Podlesny authored
- Mar 21, 2015
Qinfan Wu authored
Jason Evans authored
This regression was introduced by 8d6a3e83 (Implement dynamic per arena control over dirty page purging.). This resolves #215.
- Mar 19, 2015
Jason Evans authored
However, unlike before it was removed, do not force --enable-ivsalloc when Darwin zone allocator integration is enabled, since the zone allocator code uses ivsalloc() regardless of whether malloc_usable_size() and sallocx() do. This resolves #211.
Jason Evans authored
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be modified to change the initial lg_dirty_mult setting for newly created arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page purging threshold, and synchronously triggers any purging that may be necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function to be replaced.
This resolves #93.
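As a usage sketch (not part of the commit itself): the new controls are driven through the public mallctl() interface; the ssize_t type mirrors opt.lg_dirty_mult, and arena index 0 is just an example.

    #include <stdio.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* Read the default threshold applied to newly created arenas. */
        ssize_t lg_dirty_mult;
        size_t sz = sizeof(lg_dirty_mult);
        if (mallctl("arenas.lg_dirty_mult", &lg_dirty_mult, &sz, NULL, 0) == 0)
            printf("arenas.lg_dirty_mult: %zd\n", lg_dirty_mult);

        /* Lower arena 0's threshold; per the message above, this
         * synchronously triggers any purging needed to maintain it. */
        ssize_t new_mult = 2;
        mallctl("arena.0.lg_dirty_mult", NULL, NULL, &new_mult,
            sizeof(new_mult));
        return (0);
    }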
- Mar 17, 2015
Mike Hommey authored
- Mar 16, 2015
Jason Evans authored
Remove the prof_tctx_state_destroying transitory state and instead add the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely identifies a tctx. This assures that tctx's are well ordered even when more than two with the same thr_uid coexist. A previous attempted fix based on prof_tctx_state_destroying was only sufficient for protecting against two coexisting tctx's, but it also introduced a new dumping race. These regressions were introduced by 602c8e09 (Implement per thread heap profiling.) and 764b0002 (Fix a heap profiling regression.).
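A hypothetical distillation of that ordering (the real prof_tctx_t carries more state than this): comparing the <thr_uid, tctx_uid> tuple lexicographically keeps any number of coexisting tctx's with the same thr_uid well ordered.

    #include <stdint.h>

    typedef struct {
        uint64_t thr_uid;
        uint64_t tctx_uid;  /* unique per tctx, so ties are impossible */
    } tctx_key_t;

    static int
    tctx_key_comp(const tctx_key_t *a, const tctx_key_t *b) {
        if (a->thr_uid != b->thr_uid)
            return ((a->thr_uid < b->thr_uid) ? -1 : 1);
        if (a->tctx_uid != b->tctx_uid)
            return ((a->tctx_uid < b->tctx_uid) ? -1 : 1);
        return (0);
    }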
- Mar 14, 2015
Jason Evans authored
Jason Evans authored
Add the prof_tctx_state_destroying transitory state to fix a race between a thread destroying a tctx and another thread creating a new equivalent tctx. This regression was introduced by 602c8e09 (Implement per thread heap profiling.).
- Mar 13, 2015
Daniel Micay authored
Linux sets _POSIX_MONOTONIC_CLOCK to 0, meaning the monotonic clock *might* be available, so a sysconf check is necessary at runtime, with a fallback to the mandatory CLOCK_REALTIME clock.
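A sketch of the check the message describes (not the literal patch): when _POSIX_MONOTONIC_CLOCK is 0, only sysconf() can say whether CLOCK_MONOTONIC actually exists on the running system.

    #include <time.h>
    #include <unistd.h>

    static clockid_t
    choose_clockid(void) {
    #if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
        /* Guaranteed available at compile time. */
        return (CLOCK_MONOTONIC);
    #elif defined(_POSIX_MONOTONIC_CLOCK)
        /* Might be available; confirm at runtime. */
        if (sysconf(_SC_MONOTONIC_CLOCK) > 0)
            return (CLOCK_MONOTONIC);
        return (CLOCK_REALTIME);
    #else
        /* Fall back to the mandatory clock. */
        return (CLOCK_REALTIME);
    #endif
    }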
Mike Hommey authored
a14bce85 made buferror not take an error code, and made the Windows code path for buferror use GetLastError, while the alternative code paths used errno. Then 2a83ed02 made buferror take an error code again, and while it changed the non-Windows code paths to use that error code, the Windows code path was not changed accordingly.
Jason Evans authored
Fix prof_tctx_comp() to incorporate tctx state into the comparison. During a dump it is possible for both a purgatory tctx and an otherwise equivalent nominal tctx to reside in the tree at the same time. This regression was introduced by 602c8e09 (Implement per thread heap profiling.).
- Mar 12, 2015
Jason Evans authored
These bugs only affected tests and debug builds.
Jason Evans authored
- Mar 11, 2015
Jason Evans authored
Jason Evans authored
- Mar 10, 2015
Jason Evans authored
- Mar 07, 2015
Jason Evans authored
This regression was introduced by 97c04a93 (Use first-fit rather than first-best-fit run/chunk allocation.).
Jason Evans authored
This tends to more effectively pack active memory toward low addresses. However, additional tree searches are required in many cases, so whether this change stands the test of time will depend on real-world benchmarks.
Jason Evans authored
Treat sizes that round down to the same size class as size-equivalent in trees that are used to search for first best fit, so that there are only as many "firsts" as there are size classes. This comes closer to the ideal of first fit.
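Sketch of such a comparator (size_class_index() is an assumed stand-in for jemalloc's size-class lookup, not its real name): sizes in the same class compare equal, and ties fall back to address order, so the leftmost tree match for a class is its lowest-addressed "first".

    #include <stddef.h>
    #include <stdint.h>

    extern size_t size_class_index(size_t size);  /* assumed helper */

    static int
    run_avail_comp(size_t a_size, uintptr_t a_addr, size_t b_size,
        uintptr_t b_addr) {
        size_t a_class = size_class_index(a_size);
        size_t b_class = size_class_index(b_size);

        if (a_class != b_class)
            return ((a_class < b_class) ? -1 : 1);
        /* Size-equivalent: order by address instead. */
        if (a_addr != b_addr)
            return ((a_addr < b_addr) ? -1 : 1);
        return (0);
    }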
Jason Evans authored
Recent changes have improved huge allocation scalability, which removes upward pressure to set the chunk size so large that huge allocations are rare. Smaller chunks are more likely to completely drain, so set the default to the smallest size that doesn't leave excessive unusable trailing space in chunk headers.
- Mar 04, 2015
Mike Hommey authored
TlsGetValue has a semantic difference from pthread_getspecific, in that it can return NULL without that being an error, so it always sets the last error to let callers tell the two cases apart. But allocator callers may not expect a call to e.g. free() to change the value of the last error, so preserve it.
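The fix amounts to a save/restore around the TLS read; a minimal sketch using only documented Win32 calls (not the commit's exact code):

    #include <windows.h>

    /* TlsGetValue() always calls SetLastError(), because a NULL return
     * can be a legitimate stored value. Save and restore the last error
     * so allocator entry points such as free() leave it untouched. */
    static void *
    tls_get_preserving_last_error(DWORD key) {
        DWORD saved = GetLastError();
        void *value = TlsGetValue(key);

        SetLastError(saved);
        return (value);
    }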
Mike Hommey authored
9906660e added a --without-export configure option to avoid exporting jemalloc symbols, but the option didn't actually work.
- Feb 26, 2015
Dave Huseby authored
- Feb 19, 2015
Jason Evans authored
Jason Evans authored
These regressions were introduced by ee41ad40 (Integrate whole chunks into unused dirty page purging machinery.).
- Feb 18, 2015
Jason Evans authored
Rename "dirty chunks" to "cached chunks", in order to avoid overloading the term "dirty". Fix the regression caused by 339c2b23 (Fix chunk_unmap() to propagate dirty state.), and actually address what that change attempted, which is to only purge chunks once, and propagate whether zeroed pages resulted into chunk_record().
Jason Evans authored
Fix chunk_unmap() to propagate whether a chunk is dirty, and modify dirty chunk purging to record this information so it can be passed to chunk_unmap(). Since the broken version of chunk_unmap() claimed that all chunks were clean, this resulted in potential memory corruption for purging implementations that do not zero (e.g. MADV_FREE). This regression was introduced by ee41ad40 (Integrate whole chunks into unused dirty page purging machinery.).
Jason Evans authored
Jason Evans authored
Jason Evans authored
- Feb 17, 2015
Jason Evans authored
Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case). Refactor extent_node_t to provide accessor functions.
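The accessor refactor looks roughly like the following sketch; the field names are assumptions for illustration, not the actual extent_node_t layout.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct extent_node_s {
        void    *addr;    /* chunk address */
        size_t  size;     /* chunk size */
        bool    zeroed;   /* pages known to be zeroed */
    } extent_node_t;

    /* Callers use accessors rather than reaching into the struct,
     * so the representation can change without touching call sites. */
    static inline void *
    extent_node_addr_get(const extent_node_t *node) {
        return (node->addr);
    }

    static inline void
    extent_node_size_set(extent_node_t *node, size_t size) {
        node->size = size;
    }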
- Feb 16, 2015
Jason Evans authored
This regression was introduced by 88fef7ce (Refactor huge_*() calls into arena internals.), and went undetected because of the --enable-debug regression.
Jason Evans authored
This regression was introduced by 88fef7ce (Refactor huge_*() calls into arena internals.), and went undetected because of the --enable-debug regression.
Jason Evans authored
Fix --enable-debug to actually enable debug mode. This regression was introduced by cbf3a6d7 (Move centralized chunk management into arenas.).
Jason Evans authored
- Feb 15, 2015
Jason Evans authored
- Feb 14, 2015
Jason Evans authored
- Feb 13, 2015
Abhishek Kulkarni authored
Signed-off-by: Abhishek Kulkarni <adkulkar@umail.iu.edu>