  1. Mar 03, 2016
    • Avoid a potential innocuous compiler warning. · 022f6891
      Jason Evans authored
      Add a cast to avoid comparing a ssize_t value to a uint64_t value that
      is always larger than a 32-bit ssize_t.  This silences an innocuous
      compiler warning from e.g. gcc 4.2.1 about the comparison always having
      the same result.
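      For illustration, a minimal sketch of the pattern this describes; the
      function name and the MAX_NS limit are hypothetical, not jemalloc's
      actual code:

          #include <stdint.h>
          #include <sys/types.h>

          /* Hypothetical limit; larger than any 32-bit ssize_t value. */
          #define MAX_NS UINT64_C(0x3FFFFFFFFFFFFFFF)

          static int
          time_valid(ssize_t t)
          {
              /*
               * Cast so both operands are uint64_t.  Without the cast, on
               * targets where ssize_t is 32 bits the comparison against a
               * constant above SSIZE_MAX has a fixed result, which e.g.
               * gcc 4.2.1 warns about.
               */
              return (t >= 0 && (uint64_t)t <= MAX_NS);
          }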
  2. Feb 27, 2016
    • Update ChangeLog in preparation for 4.1.0. · 14be4a7c
      Jason Evans authored
    • Refactor arena_cactive_update() into arena_cactive_{add,sub}(). · 3763d3b5
      Jason Evans authored
      This removes an implicit conversion from size_t to ssize_t.  For cactive
      decreases, the size_t value was intentionally underflowed to generate
      "negative" values (actually positive values above the positive range of
      ssize_t), and the conversion to ssize_t was implementation-defined
      according to C language semantics.
      
      This regression was perpetuated by
      1522937e (Fix the cactive statistic.)
      and first released in 4.0.0, which in retrospect only fixed one of two
      problems introduced by aa5113b1
      (Refactor overly large/complex functions) and first released in 3.5.0.
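      For illustration, a minimal sketch of the add/sub split; the stats
      field and any locking are simplified, so this is not jemalloc's actual
      code:

          #include <stddef.h>

          /* Hypothetical stand-in for the arena's cactive statistic. */
          static size_t cactive;

          /*
           * Separate add/sub entry points keep the arithmetic entirely in
           * size_t, instead of synthesizing "negative" size_t values and
           * converting them to ssize_t.
           */
          static void
          arena_cactive_add(size_t size)
          {
              cactive += size;
          }

          static void
          arena_cactive_sub(size_t size)
          {
              cactive -= size;
          }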
    • Remove invalid tests. · a62e94ca
      Jason Evans authored
      Remove invalid tests that were intended to exercise (hugemax+1) OOM,
      which is already covered by existing tests.
    • Move retaining out of default chunk hooks · d412624b
      buchgr authored
      This fixes chunk allocation to reuse retained memory even if an
      application-provided chunk allocation function is in use.
      
      This resolves #307.
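      For context, a minimal sketch of an application-provided chunk
      allocation hook, assuming the jemalloc 4.x chunk_alloc_t signature;
      alignment handling is elided for brevity.  After this change, jemalloc
      reuses retained memory itself before invoking such a hook:

          #include <stdbool.h>
          #include <stddef.h>
          #include <sys/mman.h>

          static void *
          my_chunk_alloc(void *new_addr, size_t size, size_t alignment,
              bool *zero, bool *commit, unsigned arena_ind)
          {
              void *ret = mmap(new_addr, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (ret == MAP_FAILED)
                  return NULL;
              if (new_addr != NULL && ret != new_addr) {
                  /* Hint not honored; fail rather than return a
                   * mismatched address. */
                  munmap(ret, size);
                  return NULL;
              }
              *zero = true;   /* Fresh anonymous pages are zeroed. */
              *commit = true; /* And already committed. */
              (void)alignment; (void)arena_ind;
              return ret;     /* A NULL return lets jemalloc fall back. */
          }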
  3. Feb 25, 2016
    • Refactor arenas array (fixes deadlock). · 767d8506
      Jason Evans authored
      Refactor the arenas array, which contains pointers to all extant arenas,
      such that it starts out as a sparse array of maximum size, and use
      double-checked atomics-based reads as the basis for fast and simple
      arena_get().  Additionally, reduce arenas_lock's role such that it only
      protects against arena initialization races.  These changes remove the
      possibility for arena lookups to trigger locking, which resolves at
      least one known (fork-related) deadlock.
      
      This resolves #315.
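      For illustration, a minimal sketch of the double-checked, atomics-based
      lookup pattern using C11 atomics; the names and bound are hypothetical,
      not jemalloc's internals:

          #include <pthread.h>
          #include <stdatomic.h>
          #include <stddef.h>

          typedef struct arena_s arena_t;

          #define NARENAS_MAX 4096 /* hypothetical fixed maximum */
          static _Atomic(arena_t *) arenas[NARENAS_MAX]; /* sparse array */
          static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;

          arena_t *arena_init_locked(unsigned ind); /* slow path, elsewhere */

          static arena_t *
          arena_get(unsigned ind)
          {
              /* Fast path: one atomic load, never any locking. */
              arena_t *a = atomic_load_explicit(&arenas[ind],
                  memory_order_acquire);
              if (a != NULL)
                  return a;
              /* Slow path: lock only to resolve initialization races. */
              pthread_mutex_lock(&arenas_lock);
              a = atomic_load_explicit(&arenas[ind], memory_order_acquire);
              if (a == NULL)
                  a = arena_init_locked(ind); /* publishes via release store */
              pthread_mutex_unlock(&arenas_lock);
              return a;
          }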
    • Fix arena_size computation. · 38127291
      Dave Watson authored
      Fix the arena_size computation in arena_new() to incorporate
      runs_avail_nclasses elements for runs_avail, rather than
      (runs_avail_nclasses - 1) elements.  Since offsetof(arena_t, runs_avail)
      is used rather than sizeof(arena_t) for the first term of the
      computation, all of the runs_avail elements must be added into the
      second term.
      
      This bug was introduced (by Jason Evans) while merging pull request #330
      as 3417a304 (Separate arena_avail
      trees).
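      For illustration, a minimal sketch of the sizing pattern the message
      describes, with hypothetical type and field names:

          #include <stddef.h>
          #include <stdlib.h>

          typedef struct { int dummy; } arena_run_heap_t; /* hypothetical */

          typedef struct arena_s {
              int              other_fields;  /* stand-in for the rest */
              arena_run_heap_t runs_avail[1]; /* trailing array */
          } arena_t;

          static arena_t *
          arena_new(unsigned runs_avail_nclasses)
          {
              /*
               * The base is offsetof(arena_t, runs_avail), which excludes
               * every runs_avail element, so the second term must count all
               * runs_avail_nclasses of them, not (runs_avail_nclasses - 1).
               */
              size_t arena_size = offsetof(arena_t, runs_avail) +
                  runs_avail_nclasses * sizeof(arena_run_heap_t);
              return malloc(arena_size);
          }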
    • Fix arena_run_first_best_fit · cd86c148
      Dave Watson authored
      The merge of 3417a304 introduced a small
      bug: arena_run_first_best_fit() doesn't scan through all the size
      classes, since ind is offset from runs_avail_nclasses by
      run_avail_bias.
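      For illustration, a hedged sketch of the scan with hypothetical names;
      the point is that a start index carrying run_avail_bias must be
      matched by a loop bound that carries the same bias:

          #include <stddef.h>

          typedef struct run_s run_t;

          extern unsigned run_avail_bias;      /* bias already added to ind */
          extern unsigned runs_avail_nclasses;
          run_t *runs_avail_first(unsigned i); /* hypothetical lookup */

          static run_t *
          first_best_fit(unsigned ind) /* ind includes run_avail_bias */
          {
              /*
               * Bounding the loop at runs_avail_nclasses alone would stop
               * run_avail_bias classes early and miss usable runs.
               */
              unsigned end = runs_avail_nclasses + run_avail_bias;
              for (unsigned i = ind; i < end; i++) {
                  run_t *run = runs_avail_first(i);
                  if (run != NULL)
                      return run;
              }
              return NULL;
          }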
    • Attempt mmap-based in-place huge reallocation. · c7a9a6c8
      Jason Evans authored
      Attempt mmap-based in-place huge reallocation by plumbing new_addr into
      chunk_alloc_mmap().  This can dramatically speed up incremental huge
      reallocation.
      
      This resolves #335.
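      For illustration, a minimal sketch of the in-place growth idea with a
      hypothetical helper, not jemalloc's code: mmap() is passed the address
      just past the existing chunk, and the mapping is discarded unless the
      hint was honored:

          #include <stddef.h>
          #include <stdint.h>
          #include <sys/mman.h>

          static void *
          extend_in_place(void *base, size_t oldsize, size_t growsize)
          {
              void *hint = (void *)((uintptr_t)base + oldsize);
              void *p = mmap(hint, growsize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (p == MAP_FAILED)
                  return NULL;
              if (p != hint) {
                  /* The kernel placed the mapping elsewhere; in-place
                   * growth failed, so unmap and let the caller copy. */
                  munmap(p, growsize);
                  return NULL;
              }
              return base; /* now oldsize + growsize contiguous bytes */
          }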
    • Document the heap profile format. · 5ec703dd
      Jason Evans authored
      This resolves #258.
  4. Feb 24, 2016