  1. Aug 12, 2015
  2. Aug 04, 2015
      Generalize chunk management hooks. · b49a334a
      Jason Evans authored
      Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
      the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls.  The chunk hooks
      allow control over chunk allocation/deallocation, decommit/commit,
      purging, and splitting/merging, such that the application can rely on
      jemalloc's internal chunk caching and retaining functionality, yet
      implement a variety of chunk management mechanisms and policies.
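      As a sketch of the resulting interface (following the chunk_hooks_t
      layout and mallctl semantics as documented for jemalloc 4.x; the
      wrapper names here are illustrative), an application can read the
      default hooks, override selected members, and install the result:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        static chunk_hooks_t default_hooks;    /* saved defaults */

        /* Illustrative wrapper: delegate to the default alloc hook and log
         * each chunk allocation.  (A real hook should avoid calling code
         * that may itself allocate; fprintf is for illustration only.) */
        static void *
        logging_chunk_alloc(void *new_addr, size_t size, size_t alignment,
            bool *zero, bool *commit, unsigned arena_ind)
        {
            void *ret = default_hooks.alloc(new_addr, size, alignment, zero,
                commit, arena_ind);
            fprintf(stderr, "chunk alloc: %p (%zu bytes)\n", ret, size);
            return ret;
        }

        static int
        install_logging_hooks(void)
        {
            chunk_hooks_t hooks;
            size_t sz = sizeof(chunk_hooks_t);

            /* Read arena 0's current (default) hooks... */
            if (mallctl("arena.0.chunk_hooks", &default_hooks, &sz,
                NULL, 0) != 0)
                return 1;
            /* ...override just the alloc member, and write the result
             * back. */
            hooks = default_hooks;
            hooks.alloc = logging_chunk_alloc;
            return mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sz);
        }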
      
      Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
      chunks_[sz]ad_retained.  This slightly reduces how hard jemalloc tries
      to honor the dss precedence setting; prior to this change the precedence
      setting was also consulted when recycling chunks.
      
      Fix chunk purging.  Don't purge chunks in arena_purge_stashed(); instead
      deallocate them in arena_unstash_purged(), so that the dirty memory
      linkage remains valid until after the last time it is used.
      
      This resolves #176 and #201.
  3. Jul 18, 2015
  4. Jul 09, 2015
  5. Jul 07, 2015
  6. Jun 24, 2015
      Fix size class overflow handling when profiling is enabled. · 241abc60
      Jason Evans authored
      Fix size class overflow handling for malloc(), posix_memalign(),
      memalign(), calloc(), and realloc() when profiling is enabled.
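      The expected behavior is easiest to see from the caller's side; a
      minimal sketch (ordinary libc-level usage, not one of jemalloc's own
      tests):

        #include <assert.h>
        #include <stdint.h>
        #include <stdlib.h>

        int
        main(void)
        {
            /* A request this large overflows size class computation; with
             * profiling enabled it must still fail cleanly (return NULL)
             * rather than return an undersized allocation. */
            void *p = malloc(SIZE_MAX);
            assert(p == NULL);

            /* posix_memalign() reports the same failure via its return
             * value. */
            assert(posix_memalign(&p, 64, SIZE_MAX) != 0);
            return 0;
        }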
      
      Remove an assertion that erroneously caused arena_sdalloc() to fail when
      profiling was enabled.
      
      This resolves #232.
  7. May 06, 2015
      Implement cache index randomization for large allocations. · 8a03cf03
      Jason Evans authored
      Extract szad size quantization into {extent,run}_quantize(), and
      quantize szad run sizes to the union of valid small region run sizes
      and large run sizes.
      
      Refactor iteration in arena_run_first_fit() to use
      run_quantize{,_first,_next}(), and add support for padded large runs.
      
      For large allocations that have no specified alignment constraints,
      compute a pseudo-random offset from the beginning of the first backing
      page that is a multiple of the cache line size.  Under typical
      configurations with 4-KiB pages and 64-byte cache lines this results in
      a uniform distribution among 64 page boundary offsets.
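      A sketch of that computation under the typical parameters (names are
      illustrative, not jemalloc's internals):

        #include <stddef.h>
        #include <stdint.h>

        #define PAGE        4096    /* typical page size */
        #define CACHELINE   64      /* typical cache line size */

        /* Map pseudo-random bits to one of PAGE/CACHELINE (here 64)
         * cache-line-aligned offsets within the first backing page. */
        static size_t
        random_run_offset(uint64_t prng_bits)
        {
            return ((size_t)(prng_bits % (PAGE / CACHELINE))) * CACHELINE;
        }

      The padded large runs mentioned above supply the extra space needed so
      that the requested size still fits after such an offset is applied.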
      
      Add the --disable-cache-oblivious option, primarily intended for
      performance testing.
      
      This resolves #13.
  8. May 01, 2015
      Rename pprof to jeprof. · 7041720a
      Jason Evans authored
      This rename avoids installation collisions with the upstream gperftools.
      Additionally, jemalloc's per thread heap profile functionality
      introduced an incompatible file format, so it's now worthwhile to
      clearly distinguish jemalloc's version of this script from the upstream
      version.
      
      This resolves #229.
  9. Apr 30, 2015
  10. Mar 25, 2015
  11. Mar 24, 2015
  12. Mar 19, 2015
      Restore --enable-ivsalloc. · e0a08a14
      Jason Evans authored
      However, unlike before its removal, do not force --enable-ivsalloc
      when Darwin zone allocator integration is enabled, since the zone
      allocator code uses ivsalloc() regardless of whether
      malloc_usable_size() and sallocx() do.
      
      This resolves #211.
  13. Mar 10, 2015
  14. Mar 31, 2014
  15. Feb 26, 2014
  16. Jan 22, 2014
  17. Jan 18, 2014
  18. Dec 06, 2013
      Disable floating point code/linking when possible. · d37d5ade
      Jason Evans authored
      Unless heap profiling is enabled, disable floating point code and don't
      link with libm.  This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on
      x64 systems, makes it possible to completely disable floating point
      register use.  Some versions of glibc neglect to save/restore
      caller-saved floating point registers during dynamic lazy symbol
      loading, and because the symbol loading code uses whatever malloc the
      application happens to have linked/loaded, the result is potential
      floating point register corruption.
  19. Dec 04, 2013
      Refactor to support more varied testing. · 86abd0dc
      Jason Evans authored
      Refactor the test harness to support three types of tests:
      - unit: White box unit tests.  These tests have full access to all
        internal jemalloc library symbols.  Though in actuality all symbols
        are prefixed by jet_, macro-based name mangling abstracts this away
        from test code.
      - integration: Black box integration tests.  These tests link with
        the installable shared jemalloc library, and with the exception of
        some utility code and configure-generated macro definitions, they have
        no access to jemalloc internals.
      - stress: Black box stress tests.  These tests link with the installable
        shared jemalloc library, as well as with an internal allocator with
        symbols prefixed by jet_ (same as for unit tests) that can be used to
        allocate data structures that are internal to the test code.
      
      Move existing tests into test/{unit,integration}/ as appropriate.
      
      Split out internal parts of jemalloc_defs.h.in and put them in
      jemalloc_internal_defs.h.in.  This reduces internals exposure to
      applications that #include <jemalloc/jemalloc.h>.
      
      Refactor jemalloc.h header generation so that a single header file
      results, and the prototypes can be used to generate jet_ prototypes for
      tests.  Split jemalloc.h.in into multiple parts (jemalloc_defs.h.in,
      jemalloc_macros.h.in, jemalloc_protos.h.in, jemalloc_mangle.h.in) and
      use a shell script to generate a unified jemalloc.h at configure time.
      
      Change the default private namespace prefix from "" to "je_".
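      The mangling itself is macro-based; roughly (a simplified sketch, not
      the verbatim contents of the generated headers):

        /* Generated from private_symbols.txt at configure time. */
        #define JEMALLOC_N(n)  je_##n  /* or jet_##n for test builds */

        /* One such line per private symbol... */
        #define arena_malloc   JEMALLOC_N(arena_malloc)

        /* ...so internal code that calls arena_malloc() actually links
         * against je_arena_malloc(), or jet_arena_malloc() in the jet_
         * build that unit tests use. */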
      
      Add missing private namespace mangling.
      
      Remove hard-coded private_namespace.h.  Instead generate it and
      private_unnamespace.h from private_symbols.txt.  Use similar logic for
      public symbols, which aids in name mangling for jet_ symbols.
      
      Add test_warn() and test_fail().  Replace existing exit(1) calls with
      test_fail() calls.
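      Assuming printf-style signatures for the new helpers (hypothetical
      usage, not taken from the test suite):

        static void
        expect_nonnull(void *p, const char *what)
        {
            /* Fail the current test with a diagnostic instead of exit(1). */
            if (p == NULL)
                test_fail("%s: unexpected NULL", what);
        }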
  20. Oct 21, 2013
  21. Oct 20, 2013
      Fix a race condition in the "arenas.extend" mallctl. · 7b65180b
      Jason Evans authored
      Fix a race condition in the "arenas.extend" mallctl that could lead to
      internal data structure corruption.  The race could be hit if one
      thread called the "arenas.extend" mallctl while another thread
      concurrently triggered initialization of one of the lazily created
      arenas.
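      For context, the mallctl in question creates a new arena and returns
      its index; a sketch of typical use, per the jemalloc 3.x interface,
      with an illustrative error sentinel:

        #include <limits.h>
        #include <stddef.h>
        #include <jemalloc/jemalloc.h>

        static unsigned
        create_arena(void)
        {
            unsigned arena_ind;
            size_t sz = sizeof(arena_ind);

            /* Extend the arena array by one and return the new index. */
            if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
                return UINT_MAX;    /* illustrative error sentinel */
            return arena_ind;
        }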
      Fix a Valgrind integration flaw. · dda90f59
      Jason Evans authored
      Fix a Valgrind integration flaw that caused Valgrind warnings about
      reads of uninitialized memory in internal zero-initialized data
      structures (relevant to tcache and prof code).
      Update ChangeLog. · ff08ef70
      Jason Evans authored
  22. Oct 03, 2013
  23. Jun 03, 2013
  24. Mar 06, 2013
  25. Feb 06, 2013
  26. Feb 01, 2013
      Fix Valgrind integration. · 06912756
      Jason Evans authored
      Fix Valgrind integration to annotate all internally allocated memory in
      a way that keeps Valgrind happy about internal data structure access.
      Fix a chunk recycling bug. · a7a28c33
      Jason Evans authored
      Fix a chunk recycling bug that could cause the allocator to lose track
      of whether a chunk was zeroed.  On FreeBSD, NetBSD, and OS X, it could
      cause corruption if allocating via sbrk(2) (unlikely unless running with
      the "dss:primary" option specified).  This was completely harmless on
      Linux unless using mlockall(2) (and unlikely even then, unless the
      --disable-munmap configure option or the "dss:primary" option was
      specified).  This regression was introduced in 3.1.0 by the
      mlockall(2)/madvise(2) interaction fix.
  27. Jan 31, 2013
      Fix two quarantine bugs. · d0e942e4
      Jason Evans authored
      Internal reallocation of the quarantined object array leaked the old array.
      
      Reallocation failure for internal reallocation of the quarantined object
      array (very unlikely) resulted in memory corruption.
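      Both failure modes are classic reallocation pitfalls; schematically
      (the general pattern, not the actual quarantine code):

        #include <stdlib.h>

        /* The leak corresponds to allocating a new array without freeing
         * the old one; the corruption corresponds to using the result of
         * a failed reallocation.  A correct realloc-style pattern avoids
         * both by keeping the old pointer until the new one is known
         * good: */
        static int
        grow_array(void **objp, size_t new_size)
        {
            void *new_obj = realloc(*objp, new_size);

            if (new_obj == NULL)
                return 1;       /* old array intact: no leak, no corruption */
            *objp = new_obj;    /* realloc freed or reused the old array */
            return 0;
        }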
      Fix potential TLS-related memory corruption. · bbe29d37
      Jason Evans authored
      Avoid writing to uninitialized TLS as a side effect of deallocation.
      Initializing TLS during deallocation is unsafe because it is possible
      that a thread never did any allocation, and that TLS has already been
      deallocated by the threads library, resulting in write-after-free
      corruption.  These fixes affect prof_tdata and quarantine; all other
      uses of TLS are already safe, whether intentionally (as for tcache) or
      unintentionally (as for arenas).
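      The safe pattern the fix adopts can be sketched as follows
      (illustrative names, not the actual prof_tdata/quarantine code):

        static __thread void *quarantine_tls;  /* NULL until an allocation
                                                * path initializes it */

        static void
        quarantine_on_dalloc(void *ptr)
        {
            /* Never initialize TLS here: during thread teardown the slot
             * may already have been freed by the threads library, so a
             * lazy write would be write-after-free.  A thread that never
             * allocated has nothing quarantined in any case. */
            if (quarantine_tls == NULL)
                return;
            /* ... normal quarantine handling of ptr ... */
        }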
  28. Jan 23, 2013
  29. Dec 12, 2012
      Fix chunk_recycle() Valgrind integration. · 1271185b
      Jason Evans authored
      Fix chunk_recycle() to unconditionally inform Valgrind that returned
      memory is undefined.  This fixes Valgrind warnings that would result
      from a huge allocation being freed, then recycled for use as an arena
      chunk.  The arena code would write metadata to the chunk header, and
      Valgrind would consider these invalid writes.
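      The annotation involved is the standard memcheck client request
      (jemalloc wraps such macros so they compile away when Valgrind support
      is disabled); a sketch:

        #include <stddef.h>
        #include <valgrind/memcheck.h>

        /* On handing out a recycled chunk, mark its contents undefined so
         * Valgrind permits, and then tracks, the subsequent metadata
         * writes. */
        static void
        annotate_recycled_chunk(void *chunk, size_t size)
        {
            VALGRIND_MAKE_MEM_UNDEFINED(chunk, size);
        }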
  30. Nov 30, 2012
  31. Nov 09, 2012
  32. Nov 06, 2012
      Purge unused dirty pages in a fragmentation-reducing order. · e3d13060
      Jason Evans authored
      Purge unused dirty pages in an order that first performs clean/dirty
      run defragmentation, to mitigate available run fragmentation.
      
      Remove the limitation that prevented purging unless at least one chunk
      worth of dirty pages had accumulated in an arena.  This limitation was
      intended to avoid excessive purging for small applications, but the
      threshold was arbitrary, and the effect was of questionable utility.
      
      Relax opt_lg_dirty_mult from 5 to 3, i.e. allow up to 1/8 of active
      pages to be dirty before purging rather than 1/32.  This compensates
      for increased
      likelihood of allocating clean runs, given the same ratio of clean:dirty
      runs, and reduces the potential for repeated purging in pathological
      large malloc/free loops that push the active:dirty page ratio just over
      the purge threshold.
  33. Oct 17, 2012
  34. Oct 16, 2012