  1. Mar 14, 2011
    • Fix a thread cache stats merging bug. · a8118233 (Jason Evans)
      When a thread cache flushes objects to their arenas due to an abundance
      of cached objects, it merges the allocation request count for the
      associated size class, and increments a flush counter.  If none of the
      flushed objects came from the thread's assigned arena, then the merging
      wouldn't happen (though the counter would typically eventually be
      merged), nor would the flush counter be incremented (a hard bug).  Fix
      this via extra conditional code just after the flush loop.
    • Fix a "thread.arena" mallctl bug. · a7153a0d (Jason Evans)
      Fix a variable reversal bug in mallctl("thread.arena", ...).
  2. Mar 07, 2011
  3. Mar 02, 2011
    • Update ChangeLog for 2.1.2. · 6e56e5ec (je)
    • Build both PIC and no PIC static libraries · af5d6987 (Arun Sharma)
      When jemalloc is linked into an executable (as opposed to a shared
      library), compiling with -fno-pic can have significant advantages,
      mainly because we don't have to go through the GOT (global offset
      table).
      
      Users who want to link jemalloc into a shared library that could
      be dlopened need to link with libjemalloc_pic.a or libjemalloc.so.
  4. Feb 14, 2011
    • Fix style nits. · 655f04a5 (Jason Evans)
    • Fix "thread.{de,}allocatedp" mallctl. · 9dcad2df (Jason Evans)
      For the non-TLS case (as on OS X), if the "thread.{de,}allocatedp"
      mallctl was called before any allocation occurred for that thread, the
      TSD was still NULL, thus putting the application at risk of
      dereferencing NULL.  Fix this by refactoring the initialization code,
      and making it part of the conditional logic for all per thread
      allocation counter accesses.
  5. Feb 08, 2011
  6. Feb 01, 2011
  7. Jan 26, 2011
    • Fix ALLOCM_LG_ALIGN definition. · f256680f (Jason Evans)
      Fix ALLOCM_LG_ALIGN to take a parameter and use it.  Apparently, an
      editing error left ALLOCM_LG_ALIGN with the same definition as
      ALLOCM_LG_ALIGN_MASK.
  8. Jan 15, 2011
    • Fix assertion typos. · dbd3832d (Jason Evans)
      s/=/==/ in several assertions, as well as fixing spelling errors.
    • Fix a heap dumping deadlock. · 10e45230 (Jason Evans)
      Restructure the ctx initialization code such that the ctx isn't locked
      across portions of the initialization code where allocation could occur.
      Instead artificially inflate the cnt_merged.curobjs field, just as is
      done elsewhere to avoid similar races to the one that would otherwise be
      created by the reduction in locking scope.
      
      This bug affected interval- and growth-triggered heap dumping, but not
      manual heap dumping.
  9. Dec 29, 2010
    • Fix a "thread.arena" mallctl bug. · 624f2f3c (Jason Evans)
      When setting a new arena association for the calling thread, also update
      the tcache's cached arena pointer, primarily so that
      tcache_alloc_small_hard() uses the intended arena.
  10. Dec 18, 2010
  11. Dec 16, 2010
  12. Dec 04, 2010
  13. Dec 01, 2010
    • Use mremap(2) for huge realloc(). · cfdc8cfb (Jason Evans)
      If mremap(2) is available and supports MREMAP_FIXED, use it for huge
      realloc().
      
      Initialize rtree later during bootstrapping, so that --enable-debug
      --enable-dss works.
      
      Fix a minor swap_avail stats bug.
  14. Nov 27, 2010
    • Convert man page from roff to DocBook. · aee7fd2b (Jason Evans)
      Convert the man page source from roff to DocBook, and generate html and
      roff output.  Modify the build system such that the documentation can be
      built as part of the release process, so that users need not have
      DocBook tools installed.
  15. Nov 25, 2010
    • Push down ctl_mtx. · fc4dcfa2 (Jason Evans)
      Many mallctl*() endpoints require no locking, so push the locking down
      to just the functions that need it.  This is particularly important for
      "thread.allocated" and "thread.deallocated", which are intended as a
      low-overhead way to introspect per thread allocation activity.
  16. Nov 05, 2010
  17. Oct 30, 2010
  18. Oct 28, 2010
    • Fix prof bugs. · b04a940e (Jason Evans)
      Fix a race condition in ctx destruction that could cause undefined
      behavior (deadlock observed).
      
      Add mutex unlocks to some OOM error paths.
  19. Oct 25, 2010
  20. Oct 24, 2010
    • Use madvise(..., MADV_FREE) on OS X. · ce93055c (Jason Evans)
      Use madvise(..., MADV_FREE) rather than msync(..., MS_KILLPAGES) on OS
      X, since it works for at least OS X 10.5 and 10.6.
    • Edit manpage. · 0d38791e (Jason Evans)
      Make various minor edits to the manpage.
    • Re-format size class table. · 8da141f4 (Jason Evans)
      Use a more compact layout for the size class table in the man page.
      This avoids layout glitches due to approaching the single-page table
      size limit.
    • Add missing #ifdef JEMALLOC_PROF. · 49d0293c (Jason Evans)
      Only call prof_boot0() if profiling is enabled.
    • Replace JEMALLOC_OPTIONS with MALLOC_CONF. · e7339706 (Jason Evans)
      Replace the single-character run-time flags with key/value pairs, which
      can be set via the malloc_conf global, /etc/malloc.conf, and the
      MALLOC_CONF environment variable.
      
      Replace the JEMALLOC_PROF_PREFIX environment variable with the
      "opt.prof_prefix" option.
      
      Replace umax2s() with u2s().
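The three configuration sources from the commit message, sketched as a shell fragment. The option names shown (abort, narenas) are examples; consult the man page for the set a given build supports.

```shell
# 1. Environment variable, scoped to one invocation:
export MALLOC_CONF="abort:true,narenas:2"
./myprog

# 2. System-wide: /etc/malloc.conf is a symlink whose *target name* holds
#    the option string (the link need not resolve to a real file):
#      ln -s 'abort:true' /etc/malloc.conf

# 3. Compile-time: the application defines the malloc_conf global, e.g.
#    in C:  const char *malloc_conf = "narenas:2";
```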
  21. Oct 22, 2010
    • Fix heap profiling bugs. · e4f7846f (Jason Evans)
      Fix a regression due to the recent heap profiling accuracy improvements:
      prof_{m,re}alloc() must set the object's profiling context regardless of
      whether it is sampled.
      
      Fix management of the CHUNK_MAP_CLASS chunk map bits, such that all
      large object (re-)allocation paths correctly initialize the bits.  Prior
      to this fix, in-place realloc() cleared the bits, resulting in incorrect
      reported object size from arena_salloc_demote().  After this fix the
      non-demoted bit pattern is all zeros (instead of all ones), which makes
      it easier to assure that the bits are properly set.
  22. Oct 21, 2010
    • Fix a heap profiling regression. · 81b4e6eb (Jason Evans)
      Call prof_ctx_set() in all paths through prof_{m,re}alloc().
      
      Inline arena_prof_ctx_get().
    • Inline the fast path for heap sampling. · 4d6a134e (Jason Evans)
      Inline the heap sampling code that is executed for every allocation
      event (regardless of whether a sample is taken).
      
      Combine all prof TLS data into a single data structure, in order to
      reduce the TLS lookup volume.
    • Add per thread allocation counters, and enhance heap sampling. · 93443689 (Jason Evans)
      Add the "thread.allocated" and "thread.deallocated" mallctls, which can
      be used to query the total number of bytes ever allocated/deallocated by
      the calling thread.
      
      Add s2u() and sa2u(), which can be used to compute the usable size that
      will result from an allocation request of a particular size/alignment.
      
      Re-factor ipalloc() to use sa2u().
      
      Enhance the heap profiler to trigger samples based on usable size,
      rather than request size.  This has a subtle, but important, impact on
      the accuracy of heap sampling.  For example, previous to this change,
      16- and 17-byte objects were sampled at nearly the same rate, but
      17-byte objects actually consume 32 bytes each.  Therefore it was
      possible for the sample to be somewhat skewed compared to actual memory
      usage of the allocated objects.