  1. Mar 19, 2015
    • Restore --enable-ivsalloc. · e0a08a14
      Jason Evans authored
      However, unlike before the option was removed, do not force
      --enable-ivsalloc when Darwin zone allocator integration is enabled,
      since the zone allocator code uses ivsalloc() regardless of whether
      malloc_usable_size() and sallocx() do.
      
      This resolves #211.
    • Implement dynamic per arena control over dirty page purging. · 8d6a3e83
      Jason Evans authored
      Add mallctls (see the sketch after this list):
      - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
        modified to change the initial lg_dirty_mult setting for newly created
        arenas.
      - arena.<i>.lg_dirty_mult controls an individual arena's dirty page
        purging threshold, and synchronously triggers any purging that may be
        necessary to maintain the constraint.
      - arena.<i>.chunk.purge allows the per arena dirty page purging function
        to be replaced.
      
      This resolves #93.
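
      A minimal sketch of driving these controls through jemalloc's
      mallctl() interface; the value 5 and arena index 0 are illustrative,
      and error handling is mostly elided:

        #include <stdio.h>
        #include <sys/types.h>
        #include <jemalloc/jemalloc.h>

        int main(void) {
            ssize_t lg_dirty_mult = 5; /* illustrative: purge when dirty > active/2^5 */
            unsigned arena_ind = 0;    /* illustrative: first arena */
            char name[64];

            /* Change the default inherited by newly created arenas. */
            mallctl("arenas.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
                sizeof(lg_dirty_mult));

            /* Adjust one arena's threshold; this may synchronously purge. */
            snprintf(name, sizeof(name), "arena.%u.lg_dirty_mult", arena_ind);
            mallctl(name, NULL, NULL, &lg_dirty_mult, sizeof(lg_dirty_mult));

            /* Read the per-arena setting back. */
            ssize_t cur;
            size_t sz = sizeof(cur);
            if (mallctl(name, &cur, &sz, NULL, 0) == 0)
                printf("%s = %zd\n", name, cur);
            return 0;
        }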
  2. Mar 16, 2015
    • Fix heap profiling regressions. · 04211e22
      Jason Evans authored
      Remove the prof_tctx_state_destroying transitory state and instead add
      the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely
      identifies a tctx.  This assures that tctx's are well ordered even when
      more than two with the same thr_uid coexist.  A previous attempted fix
      based on prof_tctx_state_destroying was only sufficient for protecting
      against two coexisting tctx's, but it also introduced a new dumping
      race.
      
      These regressions were introduced by
      602c8e09 (Implement per thread heap
      profiling.) and 764b0002 (Fix a heap
      profiling regression.).
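
      A minimal sketch of the tuple ordering described above, with a
      pared-down struct whose fields follow the commit message rather than
      jemalloc's full definitions; comparing <thr_uid, tctx_uid>
      lexicographically keeps any number of coexisting tctx's with the
      same thr_uid well ordered:

        #include <stdint.h>

        typedef struct {
            uint64_t thr_uid;  /* owning thread's unique id */
            uint64_t tctx_uid; /* distinguishes tctx's within a thread */
        } prof_tctx_t;

        static int
        prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b)
        {
            if (a->thr_uid != b->thr_uid)
                return ((a->thr_uid < b->thr_uid) ? -1 : 1);
            if (a->tctx_uid != b->tctx_uid)
                return ((a->tctx_uid < b->tctx_uid) ? -1 : 1);
            return (0);
        }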
  3. Mar 13, 2015
    • use CLOCK_MONOTONIC in the timer if it's available · d6384b09
      Daniel Micay authored
      Linux sets _POSIX_MONOTONIC_CLOCK to 0, meaning it *might* be
      available, so a sysconf check is necessary at runtime, with a
      fallback to the mandatory CLOCK_REALTIME clock.
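
      A sketch of that runtime check; the helper name is illustrative:

        #include <time.h>
        #include <unistd.h>

        static clockid_t
        timer_clockid(void)
        {
        #if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
            return CLOCK_MONOTONIC;      /* guaranteed at compile time */
        #elif defined(_POSIX_MONOTONIC_CLOCK)
            /* _POSIX_MONOTONIC_CLOCK == 0: ask sysconf at runtime. */
            return (sysconf(_SC_MONOTONIC_CLOCK) > 0) ?
                CLOCK_MONOTONIC : CLOCK_REALTIME;
        #else
            return CLOCK_REALTIME;       /* monotonic clock unavailable */
        #endif
        }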
    • Use the error code given to buferror on Windows · f69e2f6f
      Mike Hommey authored
      a14bce85 made buferror not take an error code, and made the Windows
      code path for buferror use GetLastError, while the alternative code
      paths used errno. Then 2a83ed02 made buferror take an error code
      again, and while it changed the non-Windows code paths to use that
      error code, the Windows code path was not changed accordingly.
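
      A hedged sketch of the shape of the fix (not jemalloc's exact code):
      the Windows path must format the error code it was handed, rather
      than re-reading GetLastError() at formatting time:

        #include <stddef.h>
        #ifdef _WIN32
        #include <windows.h>
        #else
        #include <string.h>
        #endif

        static int
        buferror(int err, char *buf, size_t buflen)
        {
        #ifdef _WIN32
            /* Use the caller-supplied code, not GetLastError(). */
            FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, (DWORD)err, 0,
                buf, (DWORD)buflen, NULL);
            return (0);
        #else
            /* XSI strerror_r; the GNU variant returns char * instead. */
            return (strerror_r(err, buf, buflen));
        #endif
        }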
    • Fix a heap profiling regression. · d69964bd
      Jason Evans authored
      Fix prof_tctx_comp() to incorporate tctx state into the comparison.
      During a dump it is possible for both a purgatory tctx and an otherwise
      equivalent nominal tctx to reside in the tree at the same time.
      
      This regression was introduced by
      602c8e09 (Implement per thread heap
      profiling.).
  4. Mar 07, 2015
    • Fix a chunk_recycle() regression. · 04ca7580
      Jason Evans authored
      This regression was introduced by
      97c04a93 (Use first-fit rather than
      first-best-fit run/chunk allocation.).
    • Use first-fit rather than first-best-fit run/chunk allocation. · 97c04a93
      Jason Evans authored
      This tends to more effectively pack active memory toward low addresses.
      However, additional tree searches are required in many cases, so whether
      this change stands the test of time will depend on real-world
      benchmarks.
    • Quantize szad trees by size class. · 5707d6f9
      Jason Evans authored
      Treat sizes that round down to the same size class as size-equivalent
      in trees that are used to search for first best fit, so that there are
      only as many "firsts" as there are size classes.  This comes closer to
      the ideal of first fit.
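
      A hypothetical sketch of that quantized comparison; run_quantize()
      stands in for jemalloc's size-class rounding, and the key struct is
      pared down for illustration:

        #include <stddef.h>
        #include <stdint.h>

        typedef struct {
            void   *addr;
            size_t  size;
        } extent_key_t;

        static size_t run_quantize(size_t size); /* rounds down to a size class */

        static int
        extent_szad_comp(const extent_key_t *a, const extent_key_t *b)
        {
            size_t a_qsize = run_quantize(a->size);
            size_t b_qsize = run_quantize(b->size);

            /* Sizes in the same class compare equal... */
            if (a_qsize != b_qsize)
                return ((a_qsize < b_qsize) ? -1 : 1);
            /* ...so the tie-break by address makes each class's "first"
             * its lowest-addressed extent. */
            if ((uintptr_t)a->addr != (uintptr_t)b->addr)
                return (((uintptr_t)a->addr < (uintptr_t)b->addr) ? -1 : 1);
            return (0);
        }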
    • Change default chunk size from 4 MiB to 256 KiB. · f044bb21
      Jason Evans authored
      Recent changes have improved huge allocation scalability, which removes
      upward pressure to set the chunk size so large that huge allocations are
      rare.  Smaller chunks are more likely to completely drain, so set the
      default to the smallest size that doesn't leave excessive unusable
      trailing space in chunk headers.
  5. Mar 04, 2015
    • Preserve LastError when calling TlsGetValue · 4d871f73
      Mike Hommey authored
      TlsGetValue differs semantically from pthread_getspecific in that it
      can return a non-error NULL value, so it always sets LastError.
      But allocator callers may not expect a call to e.g. free() to change
      the value of the last error, so preserve it.
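
      A minimal sketch of that save/restore pattern; tsd_key is a
      hypothetical TLS slot allocated elsewhere with TlsAlloc():

        #ifdef _WIN32
        #include <windows.h>

        static DWORD tsd_key; /* hypothetical; assigned by TlsAlloc() */

        static void *
        tsd_get(void)
        {
            DWORD error = GetLastError();
            void *ret = TlsGetValue(tsd_key);
            /* TlsGetValue sets LastError even on success; restore it so
             * callers of e.g. free() don't observe a changed LastError. */
            SetLastError(error);
            return (ret);
        }
        #endif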
    • Make --without-export actually work · 7c46fd59
      Mike Hommey authored
      9906660e added a --without-export configure option to avoid exporting
      jemalloc symbols, but the option didn't actually work.
  6. Feb 17, 2015
    • Integrate whole chunks into unused dirty page purging machinery. · ee41ad40
      Jason Evans authored
      Extend per arena unused dirty page purging to manage unused dirty chunks
      in addition to unused dirty runs.  Rather than immediately unmapping
      deallocated chunks (or purging them in the --disable-munmap case), store
      them in a separate set of trees, chunks_[sz]ad_dirty.  Preferentially
      allocate dirty chunks.  When excessive unused dirty pages accumulate,
      purge runs and chunks in integrated LRU order (and unmap chunks in the
      --enable-munmap case).
      
      Refactor extent_node_t to provide accessor functions.
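
      A miniature of that preference, with plain lists standing in for the
      chunks_[sz]ad_dirty and clean trees; names and structure are
      illustrative rather than jemalloc's actual internals:

        #include <stddef.h>

        typedef struct chunk_s {
            void           *addr;
            size_t          size;
            struct chunk_s *next;
        } chunk_t;

        typedef struct {
            chunk_t *dirty; /* deallocated, pages still dirty */
            chunk_t *clean; /* previously purged or never touched */
        } chunk_cache_t;

        static chunk_t *
        chunk_pop(chunk_t **head)
        {
            chunk_t *c = *head;
            if (c != NULL)
                *head = c->next;
            return (c);
        }

        /* Prefer dirty chunks so their pages are reused before any
         * purging becomes necessary; fall back to clean chunks, else
         * the caller maps a new chunk. */
        static chunk_t *
        chunk_alloc_cached(chunk_cache_t *cache)
        {
            chunk_t *c = chunk_pop(&cache->dirty);
            return ((c != NULL) ? c : chunk_pop(&cache->clean));
        }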