  1. Apr 23, 2012
      Fix heap profiling bugs. · 52386b2d
      Jason Evans authored
      Fix a potential deadlock that could occur during interval- and
      growth-triggered heap profile dumps.
      
      Fix an off-by-one heap profile statistics bug that could be observed in
      interval- and growth-triggered heap profiles.
      
      Fix heap profile dump filename sequence numbers (a regression
      introduced during the conversion to malloc_snprintf()).
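      For context, heap profile dumps carry monotonically increasing
      sequence numbers in their filenames. The sketch below is a
      hypothetical reconstruction of that naming scheme using libc
      snprintf() rather than jemalloc's malloc_snprintf(); the names
      prof_prefix, seq, and iseq are assumptions for illustration.

          #include <stdio.h>
          #include <stdint.h>
          #include <inttypes.h>

          /* Hypothetical sketch: build "<prefix>.<pid>.<seq>.i<iseq>.heap"
           * for an interval-triggered dump.  A wrong conversion specifier
           * or a misplaced increment here would misnumber every dump that
           * follows. */
          static void
          prof_dump_filename(char *buf, size_t size, const char *prof_prefix,
              int pid, uint64_t seq, uint64_t iseq)
          {
              snprintf(buf, size, "%s.%d.%" PRIu64 ".i%" PRIu64 ".heap",
                  prof_prefix, pid, seq, iseq);
          }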
  2. Apr 22, 2012
      Remove unused #includes · a5288ca9
      Mike Hommey authored
      Remove #includes in tests · 834f8770
      Mike Hommey authored
      Since we're now including jemalloc_internal.h, all the required headers
      are already pulled in. This avoids having to fiddle with which headers
      can or can't be used with MSVC. Also, now that we use malloc_printf, we
      can use util.h's definition of assert instead of assert.h's.
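      A minimal sketch of how an assert can be layered on the allocator's
      own printer, so tests need not include assert.h. This is an assumed
      shape, not jemalloc's exact macro; malloc_printf() is taken to be
      declared by jemalloc_internal.h, and abort() comes from stdlib.h.

          #include <stdlib.h> /* abort() */

          /* Assumed sketch of a util.h-style assert built on
           * malloc_printf() instead of the assert.h machinery. */
          #ifndef assert
          #define assert(e) do {                                          \
              if (!(e)) {                                                 \
                  malloc_printf("<jemalloc>: %s:%d: Failed assertion: "   \
                      "\"%s\"\n", __FILE__, __LINE__, #e);                \
                  abort();                                                \
              }                                                           \
          } while (0)
          #endif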
      Fix intmax_t configure error message · 14103d35
      Mike Hommey authored
      Remove leftovers from the vsnprintf check in malloc_vsnprintf · 08e2221e
      Mike Hommey authored
      Commit 4eeb52f0 removed the vsnprintf validation but left behind a
      now-unused va_copy. As it happens, MSVC doesn't support va_copy.
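      The usual portability shim for this gap, sketched here as an
      assumption about how it is commonly papered over (this commit
      instead simply removed the va_copy), looks like:

          #include <stdarg.h>

          /* On MSVC, where va_list is effectively a pointer into the
           * argument area, plain assignment is the customary substitute
           * for the missing va_copy(). */
          #if defined(_MSC_VER) && !defined(va_copy)
          #  define va_copy(dst, src) ((dst) = (src))
          #endif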
      Add support for Mingw · a19e87fb
      Mike Hommey authored
      Remove mmap_unaligned. · a8f8d754
      Jason Evans authored
      Remove mmap_unaligned, which was used to heuristically decide whether
      to optimistically call mmap() in a way that could reduce the total
      number of system calls.  If I remember correctly, the intention of
      mmap_unaligned was to avoid always executing the slow path in the
      presence of ASLR.  However, that reasoning seems to have been based on a
      flawed understanding of how ASLR actually works.  Although ASLR
      apparently causes mmap() to ignore address requests, it does not cause
      total placement randomness, so there is a reasonable expectation that
      iterative mmap() calls will start returning chunk-aligned mappings once
      the first chunk has been properly aligned.
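      The expectation described above can be made concrete as an
      optimistic allocation with a slow-path fallback. The following is an
      illustrative sketch for POSIX-ish systems, not the removed code;
      CHUNK_SIZE, the function name, and the trim logic are simplifying
      assumptions.

          #include <stdint.h>
          #include <stddef.h>
          #include <sys/mman.h>

          #define CHUNK_SIZE ((size_t)1 << 22) /* assumed 4 MiB chunks */

          /* Optimistically mmap() exactly one chunk.  If the kernel hands
           * back a chunk-aligned address (likely once a first aligned
           * mapping exists), one system call suffices; otherwise fall
           * back to over-allocating and trimming to alignment. */
          static void *
          chunk_alloc_mmap_sketch(void)
          {
              void *ret = mmap(NULL, CHUNK_SIZE, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (ret == MAP_FAILED)
                  return (NULL);
              if (((uintptr_t)ret & (CHUNK_SIZE - 1)) == 0)
                  return (ret); /* fast path: already chunk-aligned */

              /* Slow path: release, then over-allocate so an aligned
               * chunk is guaranteed to fit somewhere in the mapping. */
              munmap(ret, CHUNK_SIZE);
              char *base = mmap(NULL, CHUNK_SIZE * 2, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (base == MAP_FAILED)
                  return (NULL);
              char *aligned = (char *)(((uintptr_t)base + CHUNK_SIZE - 1)
                  & ~((uintptr_t)CHUNK_SIZE - 1));
              size_t lead = (size_t)(aligned - base);
              if (lead != 0)
                  munmap(base, lead);                  /* leading slack */
              munmap(aligned + CHUNK_SIZE, CHUNK_SIZE - lead); /* trailing */
              return (aligned);
          }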
      Fix chunk allocation/deallocation bugs. · 7ad54c1c
      Jason Evans authored
      Fix chunk_alloc_dss() to zero memory when requested.
      
      Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
      memory.
      
      Fix huge_palloc() to always junk fill when requested.
      
      Improve chunk_recycle() to report that memory is zeroed as a side effect
      of pages_purge().
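      The chunk_recycle() improvement hinges on whether purged pages are
      known to read back as zeros (as with madvise(MADV_DONTNEED) on
      Linux). Below is a simplified sketch of the idea; the names and the
      *zero out-parameter convention mirror jemalloc's style but are
      assumptions here.

          #include <stdbool.h>
          #include <stddef.h>
          #include <string.h>

          /* Sketch: when handing out a recycled chunk, report zeroed
           * status honestly so callers can skip redundant memset(). */
          static void *
          chunk_recycle_sketch(void *chunk, size_t size, bool was_purged,
              bool purge_zeroes, bool *zero)
          {
              if (was_purged && purge_zeroes) {
                  /* pages_purge()'s side effect already zeroed the
                   * pages; report that as a freebie. */
                  *zero = true;
              } else if (*zero) {
                  /* Caller requires zeroed memory and purging can't
                   * vouch for it; zero explicitly. */
                  memset(chunk, 0, size);
              }
              return (chunk);
          }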
  3. Apr 13, 2012
      Disable munmap() if it causes VM map holes. · 7ca0fdfb
      Jason Evans authored
      Add a configure test to determine whether common mmap()/munmap()
      patterns cause VM map holes, and only use munmap() to discard unused
      chunks if the problem does not exist.
      
      Unify the chunk caching for mmap and dss.
      
      Fix options processing to require that lg_chunk be large enough for
      redzones to always fit.
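      An illustrative guess at the shape of such a configure probe (the
      real test added by this commit may differ): map a span, punch a hole
      in the middle with munmap(), then check whether a hinted mmap() can
      refill it. Exit status 0 means the hole was refilled cleanly.

          #include <stddef.h>
          #include <sys/mman.h>

          int
          main(void)
          {
              size_t sz = (size_t)1 << 20;
              char *a = mmap(NULL, 3 * sz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (a == MAP_FAILED)
                  return (1);
              /* Punch a hole in the middle of the mapping... */
              if (munmap(a + sz, sz) != 0)
                  return (1);
              /* ...then ask for it back with a hinted, non-MAP_FIXED
               * mmap().  A well-behaved VM system refills the hole at
               * the hinted address. */
              char *b = mmap(a + sz, sz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              return ((b == a + sz) ? 0 : 1);
          }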
      Always disable redzone by default. · d6abcbb1
      Jason Evans authored
      Always disable redzone by default, even when --enable-debug is
      specified.  The memory overhead of redzones can be substantial, which
      makes this a feature that should be explicitly opted into.
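      Opting back in is then a runtime decision: redzones can be
      re-enabled with the MALLOC_CONF environment variable (e.g.
      MALLOC_CONF=redzone:true), or compiled into a program via the
      malloc_conf global, assuming a build with fill support:

          /* Assumed sketch: jemalloc reads this global string, if
           * defined by the application, at options-processing time. */
          const char *malloc_conf = "redzone:true";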