  1. Mar 23, 2011
      Fix error detection for ipalloc() when profiling. · 38d9210c
      Jason Evans authored
      sa2u() returns 0 on overflow, but the profiling code was blindly calling
      sa2u() and allowing the error to silently propagate, ultimately ending
      in a later assertion failure.  Refactor all ipalloc() callers to call
      sa2u(), check for overflow before calling ipalloc(), and pass usize
      rather than size.  This allows ipalloc() to avoid calling sa2u() in the
      common case.
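The refactored call pattern can be sketched with toy stand-ins for sa2u() and ipalloc(); the rounding logic and the ipalloc() body below are illustrative assumptions, not jemalloc's actual implementations:

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy model of sa2u(): round size up to a multiple of alignment
 * (alignment assumed to be a power of two), returning 0 on overflow. */
static size_t sa2u(size_t size, size_t alignment) {
    size_t usize = (size + alignment - 1) & ~(alignment - 1);
    return (usize < size) ? 0 : usize;
}

/* Toy ipalloc(): trusts the caller-computed usize, so it need not
 * call sa2u() itself in the common case. */
static void *ipalloc(size_t usize) {
    return malloc(usize);
}

/* The refactored caller-side pattern: compute usize, detect overflow
 * before calling ipalloc(), and pass usize rather than size. */
static void *checked_ipalloc(size_t size, size_t alignment) {
    size_t usize = sa2u(size, alignment);
    if (usize == 0)
        return NULL; /* size/alignment combination overflows */
    return ipalloc(usize);
}
```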
      Fix rallocm() rsize bug. · eacb896c
      Jason Evans authored
      Add code to set *rsize even when profiling is enabled.
      Fix bootstrapping order bug. · c957398b
      Jason Evans authored
      Initialize arenas_tsd earlier, so that the non-TLS case works when
      profiling is enabled.
  2. Mar 22, 2011
      Avoid overflow in arena_run_regind(). · 47e57f9b
      Jason Evans authored
      Fix a regression due to:
          Remove an arena_bin_run_size_calc() constraint.
          2a6f2af6
      The removed constraint required that small run headers fit in one page,
      which indirectly limited runs such that they would not cause overflow in
      arena_run_regind().  Add an explicit constraint to
      arena_bin_run_size_calc() based on the largest number of regions that
      arena_run_regind() can handle (2^11 as currently configured).
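A minimal sketch of the added constraint (RUN_MAXREGS mirrors the 2^11 limit quoted above; the helper function and its name are illustrative, not jemalloc's code):

```c
#include <stddef.h>

/* arena_run_regind() can only decode a bounded number of regions per
 * run (2^11 as configured in this commit), so run size calculation
 * must never yield more regions than that. */
#define RUN_MAXREGS (1U << 11)

static unsigned run_nregs(size_t run_size, size_t reg_size) {
    unsigned nregs = (unsigned)(run_size / reg_size);
    return (nregs > RUN_MAXREGS) ? RUN_MAXREGS : nregs;
}
```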
  3. Mar 21, 2011
      Dynamically adjust tcache fill count. · 1dcb4f86
      Jason Evans authored
      Dynamically adjust tcache fill count (number of objects allocated per
      tcache refill) such that if GC has to flush inactive objects, the fill
      count gradually decreases.  Conversely, if refills occur while the fill
      count is depressed, the fill count gradually increases back to its
      maximum value.
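The adjustment policy reads naturally as a shift-based divisor. The sketch below models it with hypothetical field and function names; jemalloc's real code differs in detail:

```c
/* fill count = ncached_max >> lg_fill_div; GC flushes raise the
 * divisor (smaller fills), refills lower it back toward the maximum. */
typedef struct {
    unsigned ncached_max; /* maximum cache capacity for this bin */
    unsigned lg_fill_div; /* log2 of the fill-count divisor, >= 1 */
} tbin_model_t;

static unsigned fill_count(const tbin_model_t *t) {
    unsigned n = t->ncached_max >> t->lg_fill_div;
    return (n == 0) ? 1 : n;
}

/* GC had to flush inactive objects: fill less on the next refill. */
static void on_gc_flush(tbin_model_t *t) {
    if ((t->ncached_max >> (t->lg_fill_div + 1)) >= 1)
        t->lg_fill_div++;
}

/* A refill occurred while depressed: creep back toward the maximum. */
static void on_refill(tbin_model_t *t) {
    if (t->lg_fill_div > 1)
        t->lg_fill_div--;
}
```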
  4. Mar 19, 2011
  5. Mar 18, 2011
      Improve thread-->arena assignment. · 597632be
      Jason Evans authored
      Rather than blindly assigning threads to arenas in round-robin fashion,
      choose the lowest-numbered arena that currently has the smallest number
      of threads assigned to it.
      
      Add the "stats.arenas.<i>.nthreads" mallctl.
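The selection rule amounts to a single linear scan. A sketch, where the nthreads array stands in for the per-arena counts exposed by the new "stats.arenas.<i>.nthreads" mallctl:

```c
/* Pick the lowest-numbered arena with the fewest assigned threads.
 * Starting the scan at 0 and using strict '<' makes ties resolve to
 * the lower index, as the commit message specifies. */
static unsigned choose_arena(const unsigned *nthreads, unsigned narenas) {
    unsigned choice = 0;
    for (unsigned i = 1; i < narenas; i++) {
        if (nthreads[i] < nthreads[choice])
            choice = i;
    }
    return choice;
}
```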
      Reverse tcache fill order. · 9c43c13a
      Jason Evans authored
      Refill the thread cache such that low regions get used first.  This
      fixes a regression due to the recent transition to bitmap-based region
      management.
      Use bitmaps to track small regions. · 84c8eefe
      Jason Evans authored
      The previous free list implementation, which embedded singly linked
      lists in available regions, had the unfortunate side effect of causing
      many cache misses during thread cache fills.  Fix this in two places:
      
        - arena_run_t: Use a new bitmap implementation to track which regions
                       are available.  Furthermore, revert to preferring the
                       lowest available region (as jemalloc did with its old
                       bitmap-based approach).
      
        - tcache_t: Move read-only tcache_bin_t metadata into
                    tcache_bin_info_t, and add a contiguous array of pointers
                    to tcache_t in order to track cached objects.  This
                    substantially increases the size of tcache_t, but results
                    in much higher data locality for common tcache operations.
                    As a side benefit, it is again possible to efficiently
                    flush the least recently used cached objects, so this
                    change switches flushing from MRU to LRU.
      
      The new bitmap implementation uses a multi-level summary approach to
      make finding the lowest available region very fast.  In practice,
      bitmaps only have one or two levels, though the implementation is
      general enough to handle extremely large bitmaps, mainly so that large
      page sizes can still be entertained.
      
      Fix tcache_bin_flush_large() to always flush statistics, in the same way
      that tcache_bin_flush_small() was recently fixed.
      
      Use JEMALLOC_DEBUG rather than NDEBUG.
      
      Add dassert(), and use it for debug-only asserts.
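A two-level version of the summary idea can be sketched as follows. This is a simplified model, not jemalloc's bitmap code, and __builtin_ctzll assumes a GCC/Clang toolchain:

```c
#include <stdint.h>

/* Bit g of `summary` is set iff groups[g] has at least one set
 * (available) bit, so the lowest available region is found with two
 * count-trailing-zeros operations instead of a linear scan. */
typedef struct {
    uint64_t summary;
    uint64_t groups[64]; /* up to 64 * 64 = 4096 regions */
} bitmap2_t;

/* Find the lowest available region, mark it allocated (clear its
 * bit), and keep the summary consistent.  Returns -1 if full. */
static int bitmap_alloc_lowest(bitmap2_t *b) {
    if (b->summary == 0)
        return -1;
    unsigned g = (unsigned)__builtin_ctzll(b->summary);
    unsigned r = (unsigned)__builtin_ctzll(b->groups[g]);
    b->groups[g] &= b->groups[g] - 1; /* clear lowest set bit */
    if (b->groups[g] == 0)
        b->summary &= ~(1ULL << g);
    return (int)(g * 64 + r);
}
```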
  6. Mar 16, 2011
  7. Mar 15, 2011
  8. Mar 14, 2011
      Fix a thread cache stats merging bug. · a8118233
      Jason Evans authored
      When a thread cache flushes objects to their arenas due to an abundance
      of cached objects, it merges the allocation request count for the
      associated size class, and increments a flush counter.  If none of the
      flushed objects came from the thread's assigned arena, then the merging
      wouldn't happen (though the counter would typically eventually be
      merged), nor would the flush counter be incremented (a hard bug).  Fix
      this via extra conditional code just after the flush loop.
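The shape of the fix can be modeled as an epilogue run just after the flush loop; the struct and function names here are hypothetical:

```c
typedef struct {
    unsigned long nrequests; /* allocation requests for the size class */
    unsigned long nflushes;  /* number of flush events */
} bin_stats_model_t;

/* After the flush loop: if no flushed object belonged to this
 * thread's assigned arena, the loop never merged the request count
 * nor bumped the flush counter, so do both here. */
static void flush_epilogue(bin_stats_model_t *stats,
    unsigned long tbin_nrequests, int merged_in_loop) {
    if (!merged_in_loop) {
        stats->nrequests += tbin_nrequests;
        stats->nflushes++;
    }
}
```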
      Fix a "thread.arena" mallctl bug. · a7153a0d
      Jason Evans authored
      Fix a variable reversal bug in mallctl("thread.arena", ...).
  9. Mar 07, 2011
  10. Mar 02, 2011
      Update ChangeLog for 2.1.2. · 6e56e5ec
      je authored
      Build both PIC and no PIC static libraries · af5d6987
      Arun Sharma authored
      When jemalloc is linked into an executable (as opposed to a shared
      library), compiling with -fno-pic can have significant advantages,
      mainly because we don't have to go through the GOT (global offset
      table).
      
      Users who want to link jemalloc into a shared library that could
      be dlopened need to link with libjemalloc_pic.a or libjemalloc.so.
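In practice this looks something like the following (illustrative commands; object and library file names depend on the build):

```shell
# Executable: the default static archive is built without -fpic,
# avoiding GOT indirection for jemalloc's internal symbols.
cc -o app app.o -ljemalloc

# Shared library that may be dlopen()ed: use the PIC archive
# (or link against libjemalloc.so instead).
cc -shared -o libplugin.so plugin.o libjemalloc_pic.a
```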
  11. Feb 14, 2011
      Fix style nits. · 655f04a5
      Jason Evans authored
      Fix "thread.{de,}allocatedp" mallctl. · 9dcad2df
      Jason Evans authored
      For the non-TLS case (as on OS X), if the "thread.{de,}allocatedp"
      mallctl was called before any allocation occurred for that thread, the
      TSD was still NULL, thus putting the application at risk of
      dereferencing NULL.  Fix this by refactoring the initialization code,
      and making it part of the conditional logic for all per thread
      allocation counter accesses.
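The fix pattern can be sketched with POSIX TSD (type and key names are assumptions): every counter access goes through a getter that initializes the TSD on first use, so the mallctl can no longer observe NULL.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    unsigned long allocated;
    unsigned long deallocated;
} thread_allocated_model_t;

static pthread_key_t thread_allocated_key;
static pthread_once_t thread_allocated_once = PTHREAD_ONCE_INIT;

static void thread_allocated_key_init(void) {
    pthread_key_create(&thread_allocated_key, free);
}

/* Lazily create the per-thread counters on first access, whether
 * that access comes from an allocation or from the mallctl. */
static thread_allocated_model_t *thread_allocated_get(void) {
    thread_allocated_model_t *t;
    pthread_once(&thread_allocated_once, thread_allocated_key_init);
    t = pthread_getspecific(thread_allocated_key);
    if (t == NULL) {
        t = calloc(1, sizeof(*t));
        pthread_setspecific(thread_allocated_key, t);
    }
    return t;
}
```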
  12. Feb 08, 2011
  13. Feb 01, 2011
  14. Jan 26, 2011
      Fix ALLOCM_LG_ALIGN definition. · f256680f
      Jason Evans authored
      Fix ALLOCM_LG_ALIGN to take a parameter and use it.  Apparently, an
      editing error left ALLOCM_LG_ALIGN with the same definition as
      ALLOCM_LG_ALIGN_MASK.
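The intended relationship between the two macros, shown for contrast (values as in jemalloc's allocm() API of this era):

```c
/* Before the fix, ALLOCM_LG_ALIGN had mistakenly been given the MASK
 * definition; the corrected macro takes a parameter and uses it. */
#define ALLOCM_LG_ALIGN(la)   (la)
#define ALLOCM_LG_ALIGN_MASK  ((int)0x3f)
```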
  15. Jan 15, 2011
      Fix assertion typos. · dbd3832d
      Jason Evans authored
      s/=/==/ in several assertions, as well as fixing spelling errors.
      Fix a heap dumping deadlock. · 10e45230
      Jason Evans authored
      Restructure the ctx initialization code such that the ctx isn't locked
      across portions of the initialization code where allocation could occur.
      Instead, artificially inflate the cnt_merged.curobjs field, just as is
      done elsewhere to avoid similar races to the one that would otherwise be
      created by the reduction in locking scope.
      
      This bug affected interval- and growth-triggered heap dumping, but not
      manual heap dumping.
  16. Dec 29, 2010
      Fix a "thread.arena" mallctl bug. · 624f2f3c
      Jason Evans authored
      When setting a new arena association for the calling thread, also update
      the tcache's cached arena pointer, primarily so that
      tcache_alloc_small_hard() uses the intended arena.
  17. Dec 18, 2010
  18. Dec 16, 2010