  1. May 01, 2015
  2. Apr 30, 2015
  3. Apr 07, 2015
    • OpenBSD doesn't support TLS · b80fbcbb
      Sébastien Marie authored
      Under some compilers (gcc 4.8.4 in particular), the auto-detection of
      TLS doesn't work properly.
      
      Force TLS to be disabled.
      
      The test suite passes under gcc 4.8.4 and gcc 4.2.1.
  4. Mar 26, 2015
    • Fix in-place shrinking huge reallocation purging bugs. · 65db63cf
      Jason Evans authored
      Fix the shrinking case of huge_ralloc_no_move_similar() to purge the
      correct number of pages, at the correct offset.  This regression was
      introduced by 8d6a3e83 (Implement
      dynamic per arena control over dirty page purging.).
      
      Fix huge_ralloc_no_move_shrink() to purge the correct number of pages.
      This bug was introduced by 96739834
      (Purge/zero sub-chunk huge allocations as necessary.).
  5. Mar 25, 2015
  6. Mar 24, 2015
  7. Mar 22, 2015
  8. Mar 21, 2015
  9. Mar 19, 2015
    • Restore --enable-ivsalloc. · e0a08a14
      Jason Evans authored
      However, unlike before the option was removed, do not force
      --enable-ivsalloc when Darwin zone allocator integration is enabled,
      since the zone allocator code uses ivsalloc() regardless of whether
      malloc_usable_size() and sallocx() do.
      
      This resolves #211.
    • Implement dynamic per arena control over dirty page purging. · 8d6a3e83
      Jason Evans authored
      Add mallctls:
      - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
        modified to change the initial lg_dirty_mult setting for newly created
        arenas.
      - arena.<i>.lg_dirty_mult controls an individual arena's dirty page
        purging threshold, and synchronously triggers any purging that may be
        necessary to maintain the constraint.
      - arena.<i>.chunk.purge allows the per arena dirty page purging function
        to be replaced.
      
      This resolves #93.
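      
      A minimal usage sketch (not part of this commit) of the new mallctls,
      assuming jemalloc's unprefixed public mallctl() interface and that
      arena index 0 exists:
      
        #include <stdio.h>
        #include <sys/types.h>
        #include <jemalloc/jemalloc.h>
        
        int main(void) {
            ssize_t lg_dirty_mult;
            size_t sz = sizeof(lg_dirty_mult);
        
            /* Read the default applied to newly created arenas. */
            if (mallctl("arenas.lg_dirty_mult", &lg_dirty_mult, &sz, NULL, 0) == 0)
                printf("arenas.lg_dirty_mult: %zd\n", lg_dirty_mult);
        
            /* Change arena 0's purging threshold; per the commit message this
             * synchronously triggers any purging needed to maintain it. */
            ssize_t new_mult = 5;
            if (mallctl("arena.0.lg_dirty_mult", NULL, NULL, &new_mult,
                sizeof(new_mult)) != 0)
                fprintf(stderr, "failed to set arena.0.lg_dirty_mult\n");
            return 0;
        }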
  10. Mar 17, 2015
  11. Mar 16, 2015
    • Fix heap profiling regressions. · 04211e22
      Jason Evans authored
      Remove the prof_tctx_state_destroying transitory state and instead add
      the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely
      identifies a tctx.  This ensures that tctx's are well ordered even when
      more than two with the same thr_uid coexist.  A previous attempted fix
      based on prof_tctx_state_destroying was only sufficient for protecting
      against two coexisting tctx's, but it also introduced a new dumping
      race.
      
      These regressions were introduced by
      602c8e09 (Implement per thread heap
      profiling.) and 764b0002 (Fix a heap
      profiling regression.).
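      
      An illustrative sketch of the ordering described above (the struct and
      field names here are hypothetical, not jemalloc's): ties on thr_uid are
      broken by tctx_uid, so any number of coexisting tctx's stay well
      ordered.
      
        #include <stdint.h>
        
        typedef struct {
            uint64_t thr_uid;   /* owning thread */
            uint64_t tctx_uid;  /* unique per tctx within that thread */
        } tctx_key_t;
        
        static int
        tctx_key_comp(const tctx_key_t *a, const tctx_key_t *b)
        {
            if (a->thr_uid != b->thr_uid)
                return (a->thr_uid < b->thr_uid) ? -1 : 1;
            if (a->tctx_uid != b->tctx_uid)
                return (a->tctx_uid < b->tctx_uid) ? -1 : 1;
            return 0;
        }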
  12. Mar 14, 2015
  13. Mar 13, 2015
    • Use CLOCK_MONOTONIC in the timer if it's available · d6384b09
      Daniel Micay authored
      Linux sets _POSIX_MONOTONIC_CLOCK to 0, meaning it *might* be available,
      so a sysconf check is necessary at runtime with a fallback to the
      mandatory CLOCK_REALTIME clock.
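      
      A sketch (not the commit's code) of the runtime detection described
      above, using only POSIX interfaces:
      
        #include <time.h>
        #include <unistd.h>
        
        /* Trust CLOCK_MONOTONIC only when POSIX guarantees it; when
         * _POSIX_MONOTONIC_CLOCK is 0, ask sysconf() at runtime. */
        static clockid_t
        timer_clock(void)
        {
        #if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
            return CLOCK_MONOTONIC;
        #elif defined(_POSIX_MONOTONIC_CLOCK)
            return (sysconf(_SC_MONOTONIC_CLOCK) > 0) ?
                CLOCK_MONOTONIC : CLOCK_REALTIME;
        #else
            return CLOCK_REALTIME;
        #endif
        }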
    • Use the error code given to buferror on Windows · f69e2f6f
      Mike Hommey authored
      a14bce85 made buferror not take an error code, and made the Windows
      code path for buferror use GetLastError, while the alternative code
      paths used errno. Then 2a83ed02 made buferror take an error code
      again, and while it changed the non-Windows code paths to use that
      error code, the Windows code path was not changed accordingly.
    • Fix a heap profiling regression. · d69964bd
      Jason Evans authored
      Fix prof_tctx_comp() to incorporate tctx state into the comparison.
      During a dump it is possible for both a purgatory tctx and an otherwise
      equivalent nominal tctx to reside in the tree at the same time.
      
      This regression was introduced by
      602c8e09 (Implement per thread heap
      profiling.).
  14. Mar 12, 2015
  15. Mar 11, 2015
  16. Mar 10, 2015
  17. Mar 07, 2015
    • Fix a chunk_recycle() regression. · 04ca7580
      Jason Evans authored
      This regression was introduced by
      97c04a93 (Use first-fit rather than
      first-best-fit run/chunk allocation.).
    • Use first-fit rather than first-best-fit run/chunk allocation. · 97c04a93
      Jason Evans authored
      This tends to more effectively pack active memory toward low addresses.
      However, additional tree searches are required in many cases, so whether
      this change stands the test of time will depend on real-world
      benchmarks.
    • Quantize szad trees by size class. · 5707d6f9
      Jason Evans authored
      Treat sizes that round down to the same size class as size-equivalent
      in trees that are used to search for first best fit, so that there are
      only as many "firsts" as there are size classes.  This comes closer to
      the ideal of first fit.
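      
      A simplified illustration of the quantization (the node type and
      bucketing below are stand-ins, not jemalloc's real size classes): sizes
      in the same bucket compare equal, and address order breaks the tie.
      
        #include <stdint.h>
        #include <stddef.h>
        
        typedef struct {
            void   *addr;
            size_t  size;
        } node_t;
        
        /* Bucket by floor(log2(size)); jemalloc's size classes are finer. */
        static unsigned
        size_bucket(size_t size)
        {
            unsigned b = 0;
            while (size >>= 1)
                b++;
            return b;
        }
        
        static int
        szad_comp(const node_t *a, const node_t *b)
        {
            unsigned ba = size_bucket(a->size), bb = size_bucket(b->size);
            if (ba != bb)
                return (ba < bb) ? -1 : 1;
            /* Size-equivalent: lowest address wins, approximating first fit. */
            uintptr_t pa = (uintptr_t)a->addr, pb = (uintptr_t)b->addr;
            return (pa < pb) ? -1 : (pa > pb) ? 1 : 0;
        }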
    • Change default chunk size from 4 MiB to 256 KiB. · f044bb21
      Jason Evans authored
      Recent changes have improved huge allocation scalability, which removes
      upward pressure to set the chunk size so large that huge allocations are
      rare.  Smaller chunks are more likely to completely drain, so set the
      default to the smallest size that doesn't leave excessive unusable
      trailing space in chunk headers.
  18. Mar 04, 2015
    • Preserve LastError when calling TlsGetValue · 4d871f73
      Mike Hommey authored
      TlsGetValue differs semantically from pthread_getspecific in that it
      can return a non-error NULL value, so it always sets the LastError.
      But allocator callers may not expect a call to e.g. free() to change
      the value of the last error, so preserve it.
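      
      A sketch of the preservation described above (the helper name is
      illustrative): save the caller's last error around the lookup and
      restore it afterwards.
      
        #include <windows.h>
        
        /* TlsGetValue() always calls SetLastError(), because NULL is also a
         * legitimate stored value; restore the caller's last error. */
        static void *
        tls_get_preserving_last_error(DWORD tls_index)
        {
            DWORD saved = GetLastError();
            void *value = TlsGetValue(tls_index);
            SetLastError(saved);
            return value;
        }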
    • Make --without-export actually work · 7c46fd59
      Mike Hommey authored
      9906660e added a --without-export configure option to avoid exporting
      jemalloc symbols, but the option didn't actually work.
  19. Feb 26, 2015
  20. Feb 19, 2015
  21. Feb 18, 2015
    • Rename "dirty chunks" to "cached chunks". · 738e089a
      Jason Evans authored
      Rename "dirty chunks" to "cached chunks", in order to avoid overloading
      the term "dirty".
      
      Fix the regression caused by 339c2b23
      (Fix chunk_unmap() to propagate dirty state.), and actually address what
      that change attempted, which is to purge chunks only once and to
      propagate into chunk_record() whether zeroed pages resulted.
    • Fix chunk_unmap() to propagate dirty state. · 339c2b23
      Jason Evans authored
      Fix chunk_unmap() to propagate whether a chunk is dirty, and modify
      dirty chunk purging to record this information so it can be passed to
      chunk_unmap().  Since the broken version of chunk_unmap() claimed that
      all chunks were clean, this resulted in potential memory corruption for
      purging implementations that do not zero (e.g. MADV_FREE).
      
      This regression was introduced by
      ee41ad40 (Integrate whole chunks into
      unused dirty page purging machinery.).
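      
      An illustrative sketch (not the commit's code) of why the purge backend
      must report whether pages were zeroed: MADV_DONTNEED purging yields
      zero-filled pages on the next touch on Linux, while MADV_FREE may hand
      back the old contents.
      
        #include <stdbool.h>
        #include <stddef.h>
        #include <sys/mman.h>
        
        /* Purge a page range and report whether its contents were zeroed. */
        static bool
        pages_purge(void *addr, size_t length)
        {
        #if defined(MADV_FREE)
            (void)madvise(addr, length, MADV_FREE);
            return false;   /* old contents may survive until reclaimed */
        #elif defined(MADV_DONTNEED)
            (void)madvise(addr, length, MADV_DONTNEED);
            return true;    /* next access maps fresh zero pages (Linux) */
        #else
            return false;
        #endif
        }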