  1. Aug 28, 2015
      Don't purge junk filled chunks when shrinking huge allocations · 4a2a3c9a
      Mike Hommey authored
      When junk filling is enabled, shrinking an allocation fills the bytes
      that were previously allocated but now aren't. Purging the chunk before
      doing that is just a waste of time.
      
      This resolves #260.
      Fix chunk purge hook calls for in-place huge shrinking reallocation. · 6d8075f1
      Mike Hommey authored
      Fix chunk purge hook calls for in-place huge shrinking reallocation to
      specify the old chunk size rather than the new chunk size.  This bug
      caused no correctness issues for the default chunk purge function, but
      was visible to custom functions set via the "arena.<i>.chunk_hooks"
      mallctl.
      
      This resolves #264.
      Fix arenas_cache_cleanup() and arena_get_hard(). · 30949da6
      Jason Evans authored
      Fix arenas_cache_cleanup() and arena_get_hard() to handle
      allocation/deallocation within the application's thread-specific data
      cleanup functions even after arenas_cache is torn down.
      
      This is a more general fix that complements
      45e9f66c (Fix arenas_cache_cleanup().).
  2. Aug 26, 2015
  3. Aug 21, 2015
      Fix arenas_cache_cleanup(). · 45e9f66c
      Christopher Ferris authored
      Fix arenas_cache_cleanup() to handle allocation/deallocation within the
      application's thread-specific data cleanup functions even after
      arenas_cache is torn down.
  4. Aug 20, 2015
  5. Aug 19, 2015
      Don't bitshift by negative amounts. · 5ef33a9f
      Jason Evans authored
      Don't bitshift by negative amounts when encoding/decoding run sizes in
      chunk header maps.  This affected systems with page sizes greater than 8
      KiB.
      
      Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
  6. Aug 17, 2015
  7. Aug 14, 2015
  8. Aug 13, 2015
  9. Aug 12, 2015
  10. Aug 11, 2015
  11. Aug 07, 2015
  12. Aug 04, 2015
      work around _FORTIFY_SOURCE false positive · 67c46a9e
      Daniel Micay authored
      In builds with profiling disabled (the default), the opt_prof_prefix
      array has a one byte length as a micro-optimization. This causes the
      use of write in the unused profiling code to be statically flagged as
      a buffer overflow by Bionic's _FORTIFY_SOURCE implementation, which
      tries to detect read overflows in addition to write overflows.
      
      This works around the problem by informing the compiler that code
      following not_reached() is unreachable in release builds.
      MSVC compatibility changes · c1a6a51e
      Matthijs authored
      - Decorate public functions with __declspec(allocator) and __declspec(restrict), just like MSVC 1900
      - Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword
      - Move __declspec(nothrow) between 'void' and '*' so it compiles once more
      Generalize chunk management hooks. · b49a334a
      Jason Evans authored
      Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
      the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls.  The chunk hooks
      allow control over chunk allocation/deallocation, decommit/commit,
      purging, and splitting/merging, such that the application can rely on
      jemalloc's internal chunk caching and retaining functionality, yet
      implement a variety of chunk management mechanisms and policies.
      
      Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
      chunks_[sz]ad_retained.  This slightly reduces how hard jemalloc tries
      to honor the dss precedence setting; prior to this change the precedence
      setting was also consulted when recycling chunks.
      
      Fix chunk purging.  Don't purge chunks in arena_purge_stashed(); instead
      deallocate them in arena_unstash_purged(), so that the dirty memory
      linkage remains valid until after the last time it is used.
      
      This resolves #176 and #201.
  13. Jul 25, 2015
      Implement support for non-coalescing maps on MinGW. · d059b9d6
      Jason Evans authored
      - Do not reallocate huge objects in place if the number of backing
        chunks would change.
      - Do not cache multi-chunk mappings.
      
      This resolves #213.
      Fix huge_ralloc_no_move() to succeed more often. · 40cbd30d
      Jason Evans authored
      Fix huge_ralloc_no_move() to succeed if an allocation request results in
      the same usable size as the existing allocation, even if the request
      size is smaller than the usable size.  This bug did not cause
      correctness issues, but it could cause unnecessary moves during
      reallocation.
  14. Jul 24, 2015
      Fix huge_palloc() to handle size rather than usize input. · 87ccb555
      Jason Evans authored
      huge_ralloc() passes a size that may not be precisely a size class, so
      make huge_palloc() handle the more general case of a size input rather
      than usize.
      
      This regression appears to have been introduced by the addition of
      in-place huge reallocation; as such it was never incorporated into a
      release.
      Fix sa2u() regression. · 4becdf21
      Jason Evans authored
      Take large_pad into account when determining whether an aligned
      allocation can be satisfied by a large size class.
      
      This regression was introduced by
      8a03cf03 (Implement cache index
      randomization for large allocations.).
      Change arena_palloc_large() parameter from size to usize. · 50883deb
      Jason Evans authored
      This change merely documents that arena_palloc_large() always receives
      usize as its argument.