  1. Aug 31, 2011
    • Fix a prof-related race condition. · a9076c94
      Jason Evans authored
      Fix prof_lookup() to artificially raise curobjs for all paths through
      the code that creates a new entry in the per thread bt2cnt hash table.
      This fixes a race condition that could corrupt memory if prof_accum were
      false, and a non-default lg_prof_tcmax were used and/or threads were
      destroyed.
    • Fix a prof-related bug in realloc(). · 46405e67
      Jason Evans authored
      Fix realloc() such that it only records the object passed in as freed if
      no OOM error occurs.
  2. Aug 13, 2011
  3. Aug 12, 2011
    • Fix off-by-one backtracing issues. · a507004d
      Jason Evans authored
      Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to
      remove any doubt as to whether an additional stack frame is created.
      Prior to this change, it was assumed that inlining would reduce the
      total number of frames in the backtrace, but in practice behavior wasn't
      completely predictable.
      
      Create imemalign() and call it from posix_memalign(), memalign(), and
      valloc(), so that all entry points require the same number of stack
      frames to be ignored during backtracing.
    • Document swap.fds mallctl as read-write. · 745e30b1
      Jason Evans authored
      Fix the manual page to document the swap.fds mallctl as read-write,
      rather than read-only.
    • Conditionalize an isalloc() call in rallocm(). · b493ce22
      Jason Evans authored
      Conditionalize an isalloc() call in rallocm() that may be unnecessary.
    • Fix two prof-related bugs in rallocm(). · 183ba50c
      Jason Evans authored
      Properly handle boundary conditions for sampled region promotion in
      rallocm().  Prior to this fix, some combinations of 'size' and 'extra'
      values could cause erroneous behavior.  Additionally, size class
      recording for promoted regions was incorrect.
  4. Aug 10, 2011
    • Clean up prof-related comments. · 0cdd42eb
      Jason Evans authored
      Clean up some prof-related comments to more accurately reflect how the
      code works.
      
      Simplify OOM handling code in a couple of prof-related error paths.
  5. Aug 09, 2011
  6. Jul 31, 2011
  7. Jun 13, 2011
    • Fix assertions in arena_purge(). · f9a8edbb
      Jason Evans authored
      Fix assertions in arena_purge() to accurately reflect the constraints in
      arena_maybe_purge().  There were two bugs here, one of which merely
      weakened the assertion, and the other of which referred to an
      uninitialized variable (typo; used npurgatory instead of
      arena->npurgatory).
  8. May 22, 2011
  9. May 11, 2011
  10. Apr 01, 2011
  11. Mar 31, 2011
  12. Mar 25, 2011
  13. Mar 23, 2011
  14. Mar 24, 2011
  15. Mar 23, 2011
    • Fix error detection for ipalloc() when profiling. · 38d9210c
      Jason Evans authored
      sa2u() returns 0 on overflow, but the profiling code was blindly calling
      sa2u() and allowing the error to silently propagate, ultimately ending
      in a later assertion failure.  Refactor all ipalloc() callers to call
      sa2u(), check for overflow before calling ipalloc(), and pass usize
      rather than size.  This allows ipalloc() to avoid calling sa2u() in the
      common case.
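      The pattern described above can be sketched in a few lines. This is an illustrative toy, not jemalloc's actual code: sa2u_toy() stands in for sa2u(), mapping a (size, alignment) request to a usable size and returning 0 on overflow, and alloc_checked() shows the caller-side check that detects the error before the allocation path runs.

      ```c
      #include <stddef.h>
      #include <stdint.h>

      /* Toy stand-in for sa2u(): round size up to a multiple of alignment,
       * returning 0 if the rounding overflows size_t.  Alignment is assumed
       * to be a power of two. */
      static size_t
      sa2u_toy(size_t size, size_t alignment)
      {
          size_t usize = (size + alignment - 1) & ~(alignment - 1);

          if (usize < size)
              return (0);     /* Rounding up wrapped around: overflow. */
          return (usize);
      }

      /* Caller-side check: detect overflow *before* allocating, rather than
       * letting a zero usize silently propagate into a later assertion
       * failure.  On success, pass usize down instead of recomputing it. */
      static int
      alloc_checked(size_t size, size_t alignment, size_t *usize_out)
      {
          size_t usize = sa2u_toy(size, alignment);

          if (usize == 0)
              return (1);     /* Report the error to the caller. */
          *usize_out = usize;
          return (0);
      }
      ```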
    • Fix rallocm() rsize bug. · eacb896c
      Jason Evans authored
      Add code to set *rsize even when profiling is enabled.
    • Fix bootstrapping order bug. · c957398b
      Jason Evans authored
      Initialize arenas_tsd earlier, so that the non-TLS case works when
      profiling is enabled.
  16. Mar 22, 2011
    • Update ChangeLog for 2.2.0. · 4bcd9872
      Jason Evans authored
    • Avoid overflow in arena_run_regind(). · 47e57f9b
      Jason Evans authored
      Fix a regression due to:
          Remove an arena_bin_run_size_calc() constraint.
          2a6f2af6
      The removed constraint required that small run headers fit in one page,
      which indirectly limited runs such that they would not cause overflow in
      arena_run_regind().  Add an explicit constraint to
      arena_bin_run_size_calc() based on the largest number of regions that
      arena_run_regind() can handle (2^11 as currently configured).
  17. Mar 21, 2011
    • Dynamically adjust tcache fill count. · 1dcb4f86
      Jason Evans authored
      Dynamically adjust tcache fill count (number of objects allocated per
      tcache refill) such that if GC has to flush inactive objects, the fill
      count gradually decreases.  Conversely, if refills occur while the fill
      count is depressed, the fill count gradually increases back to its
      maximum value.
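      A toy model of the adaptive policy described above (field and function names here are illustrative, not jemalloc's): the effective fill count is the maximum shifted right by a divisor exponent, so adjustments move gradually in power-of-two steps in both directions.

      ```c
      /* Toy adaptive fill count.  FILL_MAX is the maximum number of objects
       * allocated per refill; lg_fill_div depresses it in halving steps. */
      #define FILL_MAX    64
      #define LG_FILL_MAX 6   /* log2(FILL_MAX); lower bound on fill is 1. */

      typedef struct {
          unsigned lg_fill_div;   /* 0 => fill FILL_MAX objects per refill. */
      } tbin_toy_t;

      static unsigned
      fill_count(const tbin_toy_t *tbin)
      {
          return (FILL_MAX >> tbin->lg_fill_div);
      }

      /* GC had to flush inactive objects: the cache is over-provisioned for
       * current demand, so halve the fill count (down to a minimum of 1). */
      static void
      on_gc_flush(tbin_toy_t *tbin)
      {
          if (tbin->lg_fill_div < LG_FILL_MAX)
              tbin->lg_fill_div++;
      }

      /* A refill occurred while the fill count was depressed: demand has
       * returned, so step the fill count back toward its maximum. */
      static void
      on_refill(tbin_toy_t *tbin)
      {
          if (tbin->lg_fill_div > 0)
              tbin->lg_fill_div--;
      }
      ```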
  18. Mar 19, 2011
  19. Mar 18, 2011
    • Improve thread-->arena assignment. · 597632be
      Jason Evans authored
      Rather than blindly assigning threads to arenas in round-robin fashion,
      choose the lowest-numbered arena that currently has the smallest number
      of threads assigned to it.
      
      Add the "stats.arenas.<i>.nthreads" mallctl.
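      The selection policy reduces to a linear scan, sketched below with toy data structures and no locking (this is an illustration of the policy, not jemalloc's code): pick the arena with the smallest nthreads count, with ties going to the lower index.

      ```c
      #include <stddef.h>

      typedef struct {
          unsigned nthreads;  /* Threads currently assigned to this arena. */
      } arena_toy_t;

      /* Choose the lowest-numbered arena with the fewest assigned threads,
       * and record the new assignment. */
      static size_t
      choose_arena(arena_toy_t *arenas, size_t narenas)
      {
          size_t choose = 0;
          size_t i;

          for (i = 1; i < narenas; i++) {
              /* Strict '<' keeps the lowest index on ties. */
              if (arenas[i].nthreads < arenas[choose].nthreads)
                  choose = i;
          }
          arenas[choose].nthreads++;
          return (choose);
      }
      ```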
    • Reverse tcache fill order. · 9c43c13a
      Jason Evans authored
      Refill the thread cache such that low regions get used first.  This
      fixes a regression due to the recent transition to bitmap-based region
      management.
    • Use bitmaps to track small regions. · 84c8eefe
      Jason Evans authored
      The previous free list implementation, which embedded singly linked
      lists in available regions, had the unfortunate side effect of causing
      many cache misses during thread cache fills.  Fix this in two places:
      
        - arena_run_t: Use a new bitmap implementation to track which regions
                       are available.  Furthermore, revert to preferring the
                       lowest available region (as jemalloc did with its old
                       bitmap-based approach).
      
        - tcache_t: Move read-only tcache_bin_t metadata into
                    tcache_bin_info_t, and add a contiguous array of pointers
                    to tcache_t in order to track cached objects.  This
                    substantially increases the size of tcache_t, but results
                    in much higher data locality for common tcache operations.
                     As a side benefit, it is again possible to efficiently
                     flush the least recently used cached objects, so flushing
                     order changes from MRU to LRU.
      
      The new bitmap implementation uses a multi-level summary approach to
      make finding the lowest available region very fast.  In practice,
      bitmaps only have one or two levels, though the implementation is
      general enough to handle extremely large bitmaps, mainly so that large
      page sizes can still be entertained.
      
      Fix tcache_bin_flush_large() to always flush statistics, in the same way
      that tcache_bin_flush_small() was recently fixed.
      
      Use JEMALLOC_DEBUG rather than NDEBUG.
      
      Add dassert(), and use it for debug-only asserts.
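      The multi-level summary idea can be sketched with a two-level toy bitmap (a simplification, not jemalloc's bitmap implementation): level 0 has one bit per region, and each summary bit at level 1 is set iff the corresponding level-0 word has any bit set, so finding the lowest available region inspects at most two words.

      ```c
      #include <stdint.h>

      #define GROUPS 64   /* Up to 64 * 64 = 4096 regions. */

      typedef struct {
          uint64_t summary;       /* Bit g set => bits[g] != 0. */
          uint64_t bits[GROUPS];  /* Bit set => region available. */
      } bitmap_toy_t;

      /* Portable find-first-set: index of the lowest set bit (x != 0). */
      static unsigned
      ffs64(uint64_t x)
      {
          unsigned i = 0;

          while ((x & 1) == 0) {
              x >>= 1;
              i++;
          }
          return (i);
      }

      /* Mark a region available, keeping the summary coherent. */
      static void
      bitmap_set(bitmap_toy_t *b, unsigned region)
      {
          b->bits[region / 64] |= (uint64_t)1 << (region % 64);
          b->summary |= (uint64_t)1 << (region / 64);
      }

      /* Find and clear the lowest available region; the caller must ensure
       * the bitmap is nonempty. */
      static unsigned
      bitmap_sfu(bitmap_toy_t *b)
      {
          unsigned g = ffs64(b->summary);     /* Lowest nonempty group. */
          unsigned r = ffs64(b->bits[g]);     /* Lowest bit within it. */

          b->bits[g] &= ~((uint64_t)1 << r);
          if (b->bits[g] == 0)                /* Group emptied: update summary. */
              b->summary &= ~((uint64_t)1 << g);
          return (g * 64 + r);
      }
      ```

      A real implementation would use a hardware find-first-set instruction and add further summary levels for very large bitmaps; one or two levels suffice in practice, as noted above.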
  20. Mar 16, 2011