  1. Nov 06, 2012
    • Purge unused dirty pages in a fragmentation-reducing order. · e3d13060
      Jason Evans authored
      Purge unused dirty pages in an order that performs clean/dirty run
      defragmentation first, so as to mitigate fragmentation of the
      available runs.
      
      Remove the limitation that prevented purging unless at least one
      chunk's worth of dirty pages had accumulated in an arena.  This
      limitation was intended to avoid excessive purging for small
      applications, but the threshold was arbitrary, and its effect was of
      questionable utility.
      
      Relax opt_lg_dirty_mult from 5 to 3.  This compensates for the
      increased likelihood of allocating clean runs, given the same ratio
      of clean:dirty
      runs, and reduces the potential for repeated purging in pathological
      large malloc/free loops that push the active:dirty page ratio just over
      the purge threshold.
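      
      As a rough illustration of the threshold this tunable controls, here
      is a minimal C sketch of the active:dirty purge test, with made-up
      names standing in for jemalloc's internal per-arena counters:
      
        #include <stdbool.h>
        #include <stddef.h>
        
        static const size_t opt_lg_dirty_mult = 3; /* was 5 */
        
        /* Purge once dirty pages exceed 1/(2^opt_lg_dirty_mult) of active
         * pages.  Lowering the exponent from 5 to 3 raises the tolerated
         * dirty fraction from 1/32 to 1/8 of active pages, so tight
         * malloc/free loops hover below the threshold instead of
         * repeatedly crossing it. */
        static bool
        arena_should_purge(size_t nactive, size_t ndirty)
        {
            return (ndirty > (nactive >> opt_lg_dirty_mult));
        }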
  2. Oct 13, 2012
    • Add arena-specific and selective dss allocation. · 609ae595
      Jason Evans authored
      Add the "arenas.extend" mallctl, so that it is possible to create new
      arenas that are outside the set that jemalloc automatically multiplexes
      threads onto.
      
      Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
      to explicitly allocate from a particular arena.
      
      Add the "opt.dss" mallctl, which controls the default precedence of dss
      allocation relative to mmap allocation.
      
      Add the "arena.<i>.dss" mallctl, which makes it possible to set the
      default dss precedence on a per arena or global basis.
      
      Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
      
      Add the "stats.arenas.<i>.dss" mallctl.
  3. Oct 10, 2012
    • mark _pthread_mutex_init_calloc_cb as public explicitly · d0ffd8ed
      Jan Beich authored
      The Mozilla build hides everything by default using a visibility
      pragma and unhides only symbols from explicitly listed headers.  This
      doesn't work on FreeBSD, however, because
      _pthread_mutex_init_calloc_cb is neither documented nor exposed via
      any header.
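      
      A minimal sketch of the fix, assuming a GCC-style visibility pragma
      like the one the Mozilla build applies; the callback's prototype
      matches the one jemalloc provides for FreeBSD:
      
        #include <pthread.h>
        
        #pragma GCC visibility push(hidden) /* everything hidden by default */
        
        /* No public header declares this symbol, so it must be re-exported
         * explicitly or the pragma above hides it. */
        __attribute__((visibility("default")))
        int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
            void *(calloc_cb)(size_t, size_t));
        
        #pragma GCC visibility pop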
    • Make malloc_usable_size() implementation consistent with prototype. · 2cc11ff8
      Jason Evans authored
      Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size()
      implementation as well as the prototype, for consistency's sake.
    • Drop const from malloc_usable_size() argument on Linux. · 247d1248
      Jason Evans authored
      Drop const from malloc_usable_size() argument on Linux, in order to
      match the prototype in Linux's malloc.h.
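      
      Taken together, the two changes amount to the pattern sketched below:
      the qualifier is abstracted behind JEMALLOC_USABLE_SIZE_CONST and
      compiled out on Linux, whose malloc.h declares the argument without
      const.  The #ifdef here is an illustrative stand-in for the
      build-time detection:
      
        #include <stddef.h>
        
        #ifdef __linux__
        #  define JEMALLOC_USABLE_SIZE_CONST       /* match Linux malloc.h */
        #else
        #  define JEMALLOC_USABLE_SIZE_CONST const
        #endif
        
        /* Prototype and implementation now agree on the qualifier. */
        size_t malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr);
        
        size_t
        malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr)
        {
            return (0); /* placeholder body for this sketch */
        }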
    • Fix fork(2)-related mutex acquisition order. · b5225928
      Jason Evans authored
      Fix mutex acquisition order inversion for the chunks rtree and the base
      mutex.  Chunks rtree acquisition was introduced by the previous commit,
      so this bug was short-lived.
    • Fix fork(2)-related deadlocks. · 20f1fc95
      Jason Evans authored
      Add a library constructor for jemalloc that initializes the allocator.
      This fixes a race that could occur if threads were created by the main
      thread prior to any memory allocation, followed by fork(2), and then
      memory allocation in the child process.
      
      Fix the prefork/postfork functions to acquire/release the ctl, prof, and
      rtree mutexes.  This fixes various fork() child process deadlocks, but
      one possible deadlock remains (intentionally) unaddressed: prof
      backtracing can acquire runtime library mutexes, so deadlock is still
      possible if heap profiling is enabled during fork().  This deadlock is
      known to be a real issue in at least the case of libgcc-based
      backtracing.
      
      Reported by tfengjun.
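      
      A minimal sketch of the two mechanisms, with a single mutex standing
      in for the ctl, prof, and rtree locks that the real prefork/postfork
      functions handle:
      
        #include <pthread.h>
        
        static pthread_mutex_t ctl_mtx = PTHREAD_MUTEX_INITIALIZER;
        
        /* Hold every allocator mutex across fork() so the child cannot
         * inherit a lock frozen in the locked state by another thread. */
        static void
        prefork(void)
        {
            pthread_mutex_lock(&ctl_mtx);
        }
        
        static void
        postfork_parent(void)
        {
            pthread_mutex_unlock(&ctl_mtx);
        }
        
        static void
        postfork_child(void)
        {
            /* The child is single-threaded: reinitialize, don't unlock. */
            pthread_mutex_init(&ctl_mtx, NULL);
        }
        
        /* Constructor runs before main(), so the handlers are registered
         * even if the application forks before its first allocation. */
        __attribute__((constructor))
        static void
        jemalloc_constructor(void)
        {
            pthread_atfork(prefork, postfork_parent, postfork_child);
        }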
  4. May 11, 2012
    • Fix large calloc() zeroing bugs. · d8ceef6c
      Jason Evans authored
      Refactor code such that arena_mapbits_{large,small}_set() always
      preserves the unzeroed flag, and manually manipulate the unzeroed flag
      in the one case where it actually gets reset (in arena_chunk_purge()).
      This fixes unzeroed preservation bugs in arena_run_split() and
      arena_ralloc_large_grow().  These bugs caused large calloc() to return
      non-zeroed memory under some circumstances.
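      
      The symptom is easy to probe for; this hypothetical regression check
      simply verifies that a large calloc() result is fully zeroed (the
      2 MiB size is a guess at a "large" allocation under default
      settings):
      
        #include <assert.h>
        #include <stdlib.h>
        
        int
        main(void)
        {
            size_t n = 2 * 1024 * 1024;
            unsigned char *p = calloc(n, 1);
            assert(p != NULL);
            for (size_t i = 0; i < n; i++)
                assert(p[i] == 0); /* fails if the unzeroed flag leaks */
            free(p);
            return (0);
        }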
    • Add arena chunk map assertions. · 30fe12b8
      Jason Evans authored
    • Refactor arena_run_alloc(). · 5b0c9964
      Jason Evans authored
      Refactor duplicated arena_run_alloc() code into
      arena_run_alloc_helper().
  5. May 09, 2012
    • Fix chunk_recycle() to stop leaking trailing chunks. · 374d26a4
      Jason Evans authored
      Fix chunk_recycle() to correctly compute trailsize and re-insert
      trailing chunks.  This fixes a major virtual memory leak.
      
      Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.
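      
      A sketch of the arithmetic involved, with hypothetical names: when an
      aligned request is carved from a recycled extent, both the leading
      and the trailing remainder must be computed and re-inserted, and the
      trailing piece is what had been leaking:
      
        #include <stddef.h>
        #include <stdint.h>
        
        /* Round addr up to a power-of-two alignment (jemalloc has an
         * ALIGNMENT_CEILING macro for this). */
        static uintptr_t
        align_up(uintptr_t addr, size_t alignment)
        {
            return ((addr + alignment - 1) & ~((uintptr_t)alignment - 1));
        }
        
        /* Split a recycled extent [base, base+extent_size) around an
         * aligned allocation of `size` bytes. */
        static void
        split_extent(uintptr_t base, size_t extent_size, size_t size,
            size_t alignment, size_t *leadsize, size_t *trailsize)
        {
            *leadsize = align_up(base, alignment) - base;
            /* Everything past the returned chunk must be recorded and
             * re-inserted into the free-chunk trees, not dropped. */
            *trailsize = extent_size - *leadsize - size;
        }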
    • Fix chunk_alloc_mmap() bugs. · de6fbdb7
      Jason Evans authored
      Simplify chunk_alloc_mmap() to no longer attempt map extension.  The
      extra complexity isn't warranted, because although in the success case
      it saves one system call as compared to immediately falling back to
      chunk_alloc_mmap_slow(), it also makes the failure case even more
      expensive.  This simplification removes two bugs:
      
      - For Windows platforms, pages_unmap() wasn't being called for unaligned
        mappings prior to falling back to chunk_alloc_mmap_slow().  This
        caused permanent virtual memory leaks.
      - For non-Windows platforms, alignment greater than chunksize caused
        pages_map() to be called with size 0 when attempting map extension.
        This always resulted in an mmap() error, and subsequent fallback to
        chunk_alloc_mmap_slow().
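      
      For context, a rough sketch of the slow path that the simplified code
      now falls back to: over-map by the alignment, then trim the unaligned
      lead and trail.  The 4 KiB page size and all names here are
      illustrative, not the real internals:
      
        #include <stddef.h>
        #include <stdint.h>
        #include <sys/mman.h>
        
        static void *
        aligned_mmap_sketch(size_t size, size_t alignment)
        {
            /* Map size+alignment-page bytes so an aligned block of `size`
             * bytes is guaranteed to fit somewhere in the mapping. */
            const size_t page = 4096;
            size_t alloc_size = size + alignment - page;
            void *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON, -1, 0);
            if (pages == MAP_FAILED)
                return (NULL);
            uintptr_t ret = (uintptr_t)pages + alignment - 1;
            ret &= ~((uintptr_t)alignment - 1);
            size_t lead = ret - (uintptr_t)pages;
            size_t trail = alloc_size - lead - size;
            if (lead != 0)
                munmap(pages, lead);                 /* trim unaligned head */
            if (trail != 0)
                munmap((void *)(ret + size), trail); /* trim excess tail */
            return ((void *)ret);
        }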