  Oct 28, 2010
    • Fix prof bugs. · b04a940e
      Jason Evans authored
      Fix a race condition in ctx destruction that could cause undefined
      behavior (deadlock observed).
      
      Add mutex unlocks to some OOM error paths.
  Oct 24, 2010
    • Use madvise(..., MADV_FREE) on OS X. · ce93055c
      Jason Evans authored
      Use madvise(..., MADV_FREE) rather than msync(..., MS_KILLPAGES) on OS
      X, since it works for at least OS X 10.5 and 10.6.
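      
      A minimal sketch of the purging pattern this commit describes (the
      wrapper function is illustrative, not jemalloc's actual code):
      
          #include <sys/mman.h>
          
          /* Release the physical pages backing a mapped range while
           * keeping the mapping itself valid.  addr must be page-aligned
           * and length a multiple of the page size. */
          static void
          pages_purge(void *addr, size_t length)
          {
          #ifdef MADV_FREE
          	/* OS X 10.5/10.6 (and the BSDs): pages are reclaimed
          	 * lazily, and the range remains usable. */
          	madvise(addr, length, MADV_FREE);
          #else
          	/* The older OS X path that this commit replaces. */
          	msync(addr, length, MS_KILLPAGES);
          #endif
          }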
    • Edit manpage. · 0d38791e
      Jason Evans authored
      Make various minor edits to the manpage.
    • Re-format size class table. · 8da141f4
      Jason Evans authored
      Use a more compact layout for the size class table in the man page.
      This avoids layout glitches due to approaching the single-page table
      size limit.
    • Add missing #ifdef JEMALLOC_PROF. · 49d0293c
      Jason Evans authored
      Only call prof_boot0() if profiling is enabled.
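      
      The fix amounts to compiling out the bootstrap call when profiling
      support is absent; a minimal sketch (the call site shown is
      illustrative):
      
          #ifdef JEMALLOC_PROF
          	prof_boot0();
          #endif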
    • Replace JEMALLOC_OPTIONS with MALLOC_CONF. · e7339706
      Jason Evans authored
      Replace the single-character run-time flags with key/value pairs, which
      can be set via the malloc_conf global, /etc/malloc.conf, and the
      MALLOC_CONF environment variable.
      
      Replace the JEMALLOC_PROF_PREFIX environment variable with the
      "opt.prof_prefix" option.
      
      Replace umax2s() with u2s().
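      
      A sketch of the three configuration channels described above,
      assuming an unprefixed build; "prof_prefix" is the option this
      commit introduces, and the other spellings are illustrative:
      
          /* 1) Application default, read by jemalloc during
           *    initialization: */
          const char *malloc_conf = "prof:true,prof_prefix:jeprof.out";
          
          /* 2) System-wide: the same string stored as the target of a
           *    symbolic link named /etc/malloc.conf.
           * 3) Per run, via the environment:
           *        MALLOC_CONF="prof:true,prof_prefix:jeprof.out" ./app
           */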
  Oct 22, 2010
    • Fix heap profiling bugs. · e4f7846f
      Jason Evans authored
      Fix a regression due to the recent heap profiling accuracy improvements:
      prof_{m,re}alloc() must set the object's profiling context regardless of
      whether it is sampled.
      
      Fix management of the CHUNK_MAP_CLASS chunk map bits, such that all
      large object (re-)allocation paths correctly initialize the bits.  Prior
      to this fix, in-place realloc() cleared the bits, causing
      arena_salloc_demote() to report an incorrect object size.  After this
      fix the
      non-demoted bit pattern is all zeros (instead of all ones), which makes
      it easier to assure that the bits are properly set.
  Oct 21, 2010
    • Fix a heap profiling regression. · 81b4e6eb
      Jason Evans authored
      Call prof_ctx_set() in all paths through prof_{m,re}alloc().
      
      Inline arena_prof_ctx_get().
    • Inline the fast path for heap sampling. · 4d6a134e
      Jason Evans authored
      Inline the heap sampling code that is executed for every allocation
      event (regardless of whether a sample is taken).
      
      Combine all prof TLS data into a single data structure, in order to
      reduce the TLS lookup volume.
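      
      A hypothetical sketch of that layout (the field names and trigger
      logic are invented for illustration):
      
          #include <stdbool.h>
          #include <stddef.h>
          #include <stdint.h>
          
          /* All per-thread profiling state behind one __thread variable,
           * so the per-allocation fast path costs a single TLS lookup. */
          typedef struct {
          	uint64_t accum_bytes;	/* Bytes since the last sample. */
          	uint64_t threshold;	/* Bytes until the next sample. */
          	/* ...remaining per-thread profiling state... */
          } prof_tls_t;
          
          static __thread prof_tls_t prof_tls;
          
          /* Inlined fast path, executed for every allocation event. */
          static inline bool
          prof_sample_check(size_t usize)
          {
          	prof_tls.accum_bytes += usize;
          	return (prof_tls.accum_bytes >= prof_tls.threshold);
          }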
    • Add per thread allocation counters, and enhance heap sampling. · 93443689
      Jason Evans authored
      Add the "thread.allocated" and "thread.deallocated" mallctls, which can
      be used to query the total number of bytes ever allocated/deallocated by
      the calling thread.
      
      Add s2u() and sa2u(), which can be used to compute the usable size that
      will result from an allocation request of a particular size/alignment.
      
      Re-factor ipalloc() to use sa2u().
      
      Enhance the heap profiler to trigger samples based on usable size,
      rather than request size.  This has a subtle, but important, impact on
      the accuracy of heap sampling.  For example, prior to this change,
      16- and 17-byte objects were sampled at nearly the same rate, but
      17-byte objects actually consume 32 bytes each.  Therefore it was
      possible for the sample to be somewhat skewed compared to actual memory
      usage of the allocated objects.
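      
      A minimal usage sketch for the new counters (assumes an unprefixed
      build exposing the standard mallctl() entry point):
      
          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <jemalloc/jemalloc.h>
          
          int
          main(void)
          {
          	uint64_t allocated, deallocated;
          	size_t sz = sizeof(uint64_t);
          	void *p = malloc(17);	/* Usable size: 32 bytes, per above. */
          
          	mallctl("thread.allocated", &allocated, &sz, NULL, 0);
          	mallctl("thread.deallocated", &deallocated, &sz, NULL, 0);
          	printf("allocated %" PRIu64 ", deallocated %" PRIu64 "\n",
          	    allocated, deallocated);
          	free(p);
          	return (0);
          }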
  Oct 19, 2010
    • Fix a bug in arena_dalloc_bin_run(). · 21fb95bb
      Jason Evans authored
      Fix the newsize argument to arena_run_trim_tail() that
      arena_dalloc_bin_run() passes.  Previously, oldsize-newsize (i.e. the
      complement) was passed, which could erroneously cause dirty pages to be
      returned to the clean available runs tree.  Prior to the
      CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED conversion, this bug merely
      caused dirty pages to be unaccounted for (and therefore never get
      purged), but with CHUNK_MAP_UNZEROED, this could cause dirty pages to be
      treated as zeroed (i.e. memory corruption).
  Oct 18, 2010
    • Fix arena bugs. · 088e6a0a
      Jason Evans authored
      Split arena_dissociate_bin_run() out of arena_dalloc_bin_run(), so that
      arena_bin_malloc_hard() can avoid dissociation when recovering from
      losing a race.  This fixes a bug introduced by a recent attempted fix.
      
      Fix a regression in arena_ralloc_large_grow() that was introduced by
      recent fixes.
    • Fix arena bugs. · 8de6a028
      Jason Evans authored
      Move part of arena_bin_lower_run() into the callers, since the
      conditions under which it should be called differ slightly between
      callers.
      
      Fix arena_chunk_purge() to omit run size in the last map entry for each
      run it temporarily allocates.
    • Add assertions to run coalescing. · 12ca9140
      Jason Evans authored
      Assert that the chunk map bits at the ends of the runs that participate
      in coalescing are self-consistent.
    • Fix numerous arena bugs. · 940a2e02
      Jason Evans authored
      In arena_ralloc_large_grow(), update the map element for the end of the
      newly grown run, rather than the interior map element that was the
      beginning of the appended run.  This is a long-standing bug, and it had
      the potential to cause massive corruption, but triggering it required
      roughly the following sequence of events:
        1) Large in-place growing realloc(), with left-over space in the run
           that followed the large object.
        2) Allocation of the remainder run left over from (1).
        3) Deallocation of the remainder run *before* deallocation of the
           large run, with unfortunate interior map state left over from
           previous run allocation/deallocation activity, such that one or
           more pages of allocated memory would be treated as part of the
           remainder run during run coalescing.
      In summary, this was a bad bug, but it was difficult to trigger.
      
      In arena_bin_malloc_hard(), if another thread wins the race to allocate
      a bin run, dispose of the spare run via arena_bin_lower_run() rather
      than arena_run_dalloc(), since the run has already been prepared for use
      as a bin run.  This bug has existed since March 14, 2010:
          e00572b3
          mmap()/munmap() without arena->lock or bin->lock.
      
      Fix bugs in arena_dalloc_bin_run(), arena_trim_head(),
      arena_trim_tail(), and arena_ralloc_large_grow() that could cause the
      CHUNK_MAP_UNZEROED map bit to become corrupted.  These are all
      long-standing bugs, but the chances of them actually causing problems
      were much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED
      conversion.
      
      Fix a large run statistics regression in arena_ralloc_large_grow() that
      was introduced on September 17, 2010:
          8e3c3c61
          Add {,r,s,d}allocm().
      
      Add debug code to validate that supposedly pre-zeroed memory really is.
  Oct 17, 2010
    • Preserve CHUNK_MAP_UNZEROED for small runs. · 397e5111
      Jason Evans authored
      Preserve CHUNK_MAP_UNZEROED when allocating small runs, because it is
      possible that untouched pages will be returned to the tree of clean
      runs, where the CHUNK_MAP_UNZEROED flag matters.  Prior to the
      conversion from CHUNK_MAP_ZEROED, this was already a bug, but in the
      worst case extra zeroing occurred.  After the conversion, this bug made
      it possible to incorrectly treat pages as pre-zeroed.
  Oct 14, 2010
    • Fix a regression in CHUNK_MAP_UNZEROED change. · 004ed142
      Jason Evans authored
      Fix a regression added by revision:
      
      	3377ffa1
      	Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED.
      
      A modified chunk->map dereference was missing the subtraction of
      map_bias, which caused incorrect chunk map initialization, as well as
      potential corruption of the first non-header page of memory within each
      chunk.
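      
      An illustrative model of the indexing rule involved (the types and
      accessor are hypothetical; only the map_bias arithmetic comes from
      the commit):
      
          #include <stddef.h>
          
          typedef struct { size_t bits; } arena_chunk_map_t;
          
          typedef struct {
          	/* map[] has no entries for the chunk's header pages;
          	 * entry i describes page (i + map_bias). */
          	arena_chunk_map_t map[1];
          } arena_chunk_t;
          
          static arena_chunk_map_t *
          arena_map_get(arena_chunk_t *chunk, size_t pageind,
              size_t map_bias)
          {
          	/* Fixed form.  The regression indexed map[pageind]
          	 * directly, initializing the wrong entries and potentially
          	 * clobbering the first non-header page. */
          	return (&chunk->map[pageind - map_bias]);
          }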
  Sep 21, 2010
    • Fix compiler warnings and errors. · 075e77ca
      Jason Evans authored
      Use INT_MAX instead of MAX_INT in ALLOCM_ALIGN(), and #include
      <limits.h> in order to get its definition.
      
      Modify prof code related to hash tables to avoid aliasing warnings from
      gcc 4.1.2 (gcc 4.4.0 and 4.4.3 do not warn).
    • Fix compiler warnings. · 355b438c
      Jason Evans authored
      Add --enable-cc-silence, which can be used to silence harmless warnings.
      
      Fix an aliasing bug in ckh_pointer_hash().
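      
      A sketch of the aliasing-safe idiom for hashing a pointer's bit
      pattern: casting &ptr to an integer pointer and dereferencing
      violates C's strict-aliasing rules, while copying through a union
      does not (the mixer below is a stand-in, not jemalloc's hash):
      
          #include <stdint.h>
          
          static uint64_t
          pointer_hash(const void *ptr)
          {
          	union {
          		const void *v;
          		uint64_t i;
          	} u;
          
          	u.i = 0;	/* Defined even where pointers are 32 bits. */
          	u.v = ptr;
          	/* Stand-in multiplicative mixer. */
          	return (u.i * UINT64_C(11400714819323198485));
          }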