  1. Jun 23, 2015
  2. Jun 22, 2015
  3. Jun 15, 2015
  4. May 30, 2015
  5. May 28, 2015
  6. May 20, 2015
  7. May 16, 2015
  8. May 08, 2015
  9. May 06, 2015
    • Implement cache index randomization for large allocations. · 8a03cf03
      Jason Evans authored
      Extract szad size quantization into {extent,run}_quantize(), and
      quantize szad run sizes to the union of valid small region run sizes
      and large run sizes.
      
      Refactor iteration in arena_run_first_fit() to use
      run_quantize{,_first,_next}(), and add support for padded large runs.
      
      For large allocations that have no specified alignment constraints,
      compute a pseudo-random offset from the beginning of the first backing
      page that is a multiple of the cache line size.  Under typical
      configurations with 4-KiB pages and 64-byte cache lines this results in
      a uniform distribution among 64 page boundary offsets.
      
      Add the --disable-cache-oblivious option, primarily intended for
      performance testing.
      
      This resolves #13.
    • 6bb54cb9
  10. May 01, 2015
  11. Apr 30, 2015
  12. Apr 07, 2015
    • OpenBSD doesn't support TLS · b80fbcbb
      Sébastien Marie authored
      Under some compilers (gcc 4.8.4 in particular), the auto-detection of
      TLS doesn't work properly.
      
      Force TLS to be disabled.
      
      The test suite passes under gcc 4.8.4 and gcc 4.2.1.
  13. Mar 26, 2015
    • Fix in-place shrinking huge reallocation purging bugs. · 65db63cf
      Jason Evans authored
      Fix the shrinking case of huge_ralloc_no_move_similar() to purge the
      correct number of pages, at the correct offset.  This regression was
      introduced by 8d6a3e83 (Implement
      dynamic per arena control over dirty page purging.).
      
      Fix huge_ralloc_no_move_shrink() to purge the correct number of pages.
      This bug was introduced by 96739834
      (Purge/zero sub-chunk huge allocations as necessary.).
  14. Mar 25, 2015
  15. Mar 24, 2015
  16. Mar 22, 2015
  17. Mar 21, 2015
  18. Mar 19, 2015
    • Restore --enable-ivsalloc. · e0a08a14
      Jason Evans authored
      However, unlike before the option was removed, do not force
      --enable-ivsalloc when Darwin zone allocator integration is enabled,
      since the zone allocator code uses ivsalloc() regardless of whether
      malloc_usable_size() and sallocx() do.
      
      This resolves #211.
    • Implement dynamic per arena control over dirty page purging. · 8d6a3e83
      Jason Evans authored
      Add mallctls:
      - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
        modified to change the initial lg_dirty_mult setting for newly created
        arenas.
      - arena.<i>.lg_dirty_mult controls an individual arena's dirty page
        purging threshold, and synchronously triggers any purging that may be
        necessary to maintain the constraint.
      - arena.<i>.chunk.purge allows the per arena dirty page purging function
        to be replaced.
      
      This resolves #93.
  19. Mar 17, 2015
  20. Mar 16, 2015
    • Fix heap profiling regressions. · 04211e22
      Jason Evans authored
      Remove the prof_tctx_state_destroying transitory state and instead add
      the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely
      identifies a tctx.  This assures that tctx's are well ordered even when
      more than two with the same thr_uid coexist.  A previous attempted fix
      based on prof_tctx_state_destroying was only sufficient for protecting
      against two coexisting tctx's, but it also introduced a new dumping
      race.
      
      These regressions were introduced by
      602c8e09 (Implement per thread heap
      profiling.) and 764b0002 (Fix a heap
      profiling regression.).
  21. Mar 14, 2015
  22. Mar 13, 2015
    • use CLOCK_MONOTONIC in the timer if it's available · d6384b09
      Daniel Micay authored
      Linux sets _POSIX_MONOTONIC_CLOCK to 0 meaning it *might* be available,
      so a sysconf check is necessary at runtime with a fallback to the
      mandatory CLOCK_REALTIME clock.
    • Use the error code given to buferror on Windows · f69e2f6f
      Mike Hommey authored
      a14bce85 made buferror not take an error code, and made the Windows
      code path for buferror use GetLastError, while the alternative code
      paths used errno.  Then 2a83ed02 made buferror take an error code
      again, and while it changed the non-Windows code paths to use that
      error code, the Windows code path was not changed accordingly.
    • Fix a heap profiling regression. · d69964bd
      Jason Evans authored
      Fix prof_tctx_comp() to incorporate tctx state into the comparison.
      During a dump it is possible for both a purgatory tctx and an otherwise
      equivalent nominal tctx to reside in the tree at the same time.
      
      This regression was introduced by
      602c8e09 (Implement per thread heap
      profiling.).
  23. Mar 12, 2015