  1. Nov 04, 2016
    • Fix arena data structure size calculation. · 28b7e42e
      Jason Evans authored
      Fix paren placement so that QUANTUM_CEILING() applies to the correct
      portion of the expression that computes how much memory to base_alloc().
      In practice this bug had no impact.  This was caused by
      5d8db15d (Simplify run quantization.),
      which in turn fixed an over-allocation regression caused by
      3c4d92e8 (Add per size class huge
      allocation statistics.).
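      As a purely hypothetical illustration of this class of bug (the
      identifiers below are invented, not the actual jemalloc expression), a
      misplaced closing paren can leave QUANTUM_CEILING() rounding only part
      of the size passed to base_alloc():

        /* Broken: only the element count is quantum-rounded. */
        sz = hdr_sz + QUANTUM_CEILING(nelms) * elm_sz;
        /* Intended: the whole array size is quantum-rounded. */
        sz = hdr_sz + QUANTUM_CEILING(nelms * elm_sz);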
    • Fixes to Visual Studio Project files · 77635bf5
      Matthew Parkinson authored
    • Use -std=gnu11 if available. · cb3ad659
      Jason Evans authored
      This supersedes -std=gnu99, and enables C11 atomics.
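      For context, a small example of what C11 atomics (unavailable under
      -std=gnu99) look like; the counter here is illustrative only:

        #include <stdatomic.h>

        static atomic_uint nrequests;

        void note_request(void) {
            atomic_fetch_add_explicit(&nrequests, 1, memory_order_relaxed);
        }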
    • Update ChangeLog for 4.3.0. · 213667fe
      Jason Evans authored
    • Fix large allocation to search optimal size class heap. · 32896a90
      Jason Evans authored
      Fix arena_run_alloc_large_helper() to not convert size to usize when
      searching for the first best fit via arena_run_first_best_fit().  This
      allows the search to consider the optimal quantized size class, so that
      e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
      same memory.
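      A minimal sketch of the access pattern described above (function name
      invented); with the fix, each iteration can reuse the run freed by the
      previous one:

        #include <stdlib.h>

        void churn_40k(void) {
            for (int i = 0; i < 1000; i++) {
                void *p = malloc(40 * 1024);  /* 40 KiB large allocation */
                /* ... use p ... */
                free(p);
            }
        }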
      
      This regression was nominally caused by
      5707d6f9 (Quantize szad trees by size
      class.), but it did not commonly cause problems until
      8a03cf03 (Implement cache index
      randomization for large allocations.).  These regressions were first
      released in 4.0.0.
      
      This resolves #487.
    • Fix chunk_alloc_cache() to support decommitted allocation. · e9012630
      Jason Evans authored
      Fix chunk_alloc_cache() to support decommitted allocation, and use this
      ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
      chunks don't get permanently stuck in a hybrid state.
      
      This resolves #487.
  2. Nov 03, 2016
  3. Nov 02, 2016
    • Force no lazy-lock on Windows. · 07ee4c5f
      Jason Evans authored
      Monitoring thread creation is unimplemented for Windows, which means
      lazy-lock cannot function correctly.
      
      This resolves #310.
  4. Nov 01, 2016
  5. Oct 31, 2016
  6. Oct 30, 2016
  7. Oct 29, 2016
    • Do not mark malloc_conf as weak on Windows. · e46f8f97
      Jason Evans authored
      This works around malloc_conf not being properly initialized by at
      least the Cygwin toolchain.  Prior build system changes to use
      -Wl,--[no-]whole-archive may be necessary for malloc_conf resolution to
      work properly as a non-weak symbol (not tested).
    • Do not mark malloc_conf as weak for unit tests. · 35799a50
      Jason Evans authored
      This is generally correct (no need for weak symbols since no jemalloc
      library is involved in the link phase), and avoids linking problems
      (apparently uninitialized non-NULL malloc_conf) when using Cygwin with
      gcc.
    • Support static linking of jemalloc with glibc · ed84764a
      Dave Watson authored
      glibc defines its malloc implementation with several weak and strong
      symbols:
      
      strong_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
      strong_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
      strong_alias (__libc_free, __free) strong_alias (__libc_free, free)
      strong_alias (__libc_malloc, __malloc) strong_alias (__libc_malloc, malloc)
      
      The issue is not with the weak symbols, but that other parts of glibc
      depend on __libc_malloc explicitly.  Defining them in terms of jemalloc
      APIs allows the linker to drop glibc's malloc.o completely from the
      link, and static linking no longer results in symbol collisions.
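      A minimal sketch of the forwarding idea, assuming jemalloc already
      provides malloc/calloc/free in the link (jemalloc's actual definitions
      use its own aliasing macros):

        #include <stdlib.h>

        /* Entry points other parts of glibc call directly; forwarding them
         * to the jemalloc-provided allocator lets the linker drop glibc's
         * malloc.o from a static link. */
        void *__libc_malloc(size_t size) { return malloc(size); }
        void *__libc_calloc(size_t n, size_t size) { return calloc(n, size); }
        void  __libc_free(void *ptr) { free(ptr); }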
      
      Another wrinkle: during initialization jemalloc calls sysconf() to get
      the number of CPUs.  glibc can allocate for the first time before
      setting up the isspace() (and other related) tables that sysconf()
      relies on.  Instead, use the pthread API to get the number of CPUs
      under glibc, which seems to work.
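      One way to count CPUs via the pthread API rather than sysconf();
      whether this matches jemalloc's exact call is an assumption:

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>

        static unsigned ncpus_pthread(void) {
            cpu_set_t set;
            if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) != 0)
                return 1;  /* conservative fallback */
            return (unsigned)CPU_COUNT(&set);
        }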
      
      This resolves #442.
  8. Oct 28, 2016
  9. Oct 26, 2016
    • Use --whole-archive when linking integration tests on MinGW. · 5569b4a4
      Jason Evans authored
      Prior to this change, the malloc_conf weak symbol provided by the
      jemalloc dynamic library was always used, even if the application
      provided a malloc_conf symbol.  Use the --whole-archive linker option
      to allow the weak symbol to be overridden.
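      For reference, the application-provided symbol in question looks like
      this (the option string is arbitrary); --whole-archive lets it override
      the library's weak definition:

        /* Read by jemalloc at startup. */
        const char *malloc_conf = "junk:true,tcache:false";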
  10. Oct 21, 2016
  11. Oct 14, 2016
    • Make dss operations lockless. · e2bcf037
      Jason Evans authored
      Rather than protecting dss operations with a mutex, use atomic
      operations.  This has negligible impact on synchronization overhead
      during typical dss allocation, but is a substantial improvement for
      chunk_in_dss() and the newly added chunk_dss_mergeable(), which can be
      called multiple times during chunk deallocations.
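      A minimal sketch of the lockless query, assuming the dss maximum only
      ever grows (names and structure are illustrative, not jemalloc's):

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdint.h>

        static uintptr_t dss_base;         /* set once at initialization */
        static _Atomic uintptr_t dss_max;  /* only ever advances */

        /* chunk_in_dss()-style check: one atomic load, no mutex, because an
         * address below the observed maximum stays below all later maxima. */
        static bool in_dss(uintptr_t addr) {
            return addr >= dss_base &&
                addr < atomic_load_explicit(&dss_max, memory_order_acquire);
        }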
      
      This change also has the advantage of avoiding tsd in deallocation paths
      associated with purging, which resolves potential deadlocks during
      thread exit due to attempted tsd resurrection.
      
      This resolves #425.
  12. Oct 13, 2016
    • Add/use adaptive spinning. · 97376859
      Jason Evans authored
      Add spin_t and spin_{init,adaptive}(), which provide a simple
      abstraction for adaptive spinning.
      
      Adaptively spin during busy waits in bootstrapping and rtree node
      initialization.
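      A minimal sketch of the spin_t / spin_init() / spin_adaptive()
      interface named above; the exact back-off policy is an assumption:

        #include <sched.h>

        typedef struct { unsigned iteration; } spin_t;

        static void spin_init(spin_t *s) { s->iteration = 0; }

        static void spin_adaptive(spin_t *s) {
            if (s->iteration < 5) {
                /* Cheap busy-wait first, doubling the pause each round. */
                for (unsigned i = 0; i < (1U << s->iteration); i++) {
        #if defined(__x86_64__) || defined(__i386__)
                    __asm__ __volatile__("pause");
        #endif
                }
                s->iteration++;
            } else {
                /* Once spinning stops paying off, yield the CPU instead. */
                sched_yield();
            }
        }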
    • Disallow 0x5a junk filling when running in Valgrind. · a2539fab
      Jason Evans authored
      Explicitly disallow junk:true and junk:free runtime settings when
      running in Valgrind, since deallocation-time junk filling and redzone
      validation cause false positive Valgrind reports.
      
      This resolves #470.
  13. Oct 12, 2016
    • Fix and simplify decay-based purging. · d419bb09
      Jason Evans authored
      Simplify decay-based purging so that purge attempts are only triggered
      when the epoch advances, rather than every time purgeable memory
      increases.  In a correctly functioning system (not previously the case;
      see below), this only causes a behavior difference if, during
      subsequent purge attempts, the least recently used (LRU) purgeable
      memory extent is initially too large to be purged, but that memory is
      reused between attempts and one or more of the next LRU purgeable
      memory extents are small enough to be purged.  In practice this is an
      arbitrary behavior change that is within the set of acceptable
      behaviors.
      
      As for the purging fix, ensure that arena->decay.ndirty is recorded
      *after* the epoch advance and associated purging occur.  Prior to this
      fix, it was possible for purging during the epoch advance to make the
      difference (arena->ndirty - arena->decay.ndirty) substantially
      underrepresentative, i.e. the number of dirty pages attributed to the
      current epoch was too low, and a series of unintended purges could
      result.  This fix is also relevant in the context of the simplification
      described above, but the bug's impact would be limited to over-purging
      at epoch advances.
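      A simplified sketch of the ordering the fix establishes (the types and
      purge helper are stand-ins; only the field names come from the message
      above):

        #include <stddef.h>

        typedef struct { size_t ndirty; } arena_decay_t;
        typedef struct { size_t ndirty; arena_decay_t decay; } arena_t;

        static void purge_for_epoch(arena_t *arena) { (void)arena; }

        static void decay_epoch_advance(arena_t *arena) {
            purge_for_epoch(arena);  /* decay-driven purging first */
            /* Snapshot the baseline only after purging, so that
             * arena->ndirty - arena->decay.ndirty counts just the pages
             * dirtied during the new epoch. */
            arena->decay.ndirty = arena->ndirty;
        }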
    • a14712b4
  14. Oct 11, 2016