  1. Jan 22, 2015
    • Refactor bootstrapping to delay tsd initialization. · 10aff3f3
      Jason Evans authored
      Refactor bootstrapping to delay tsd initialization, primarily to support
      integration with FreeBSD's libc.
      
      Refactor a0*() for internal-only use, and add the
      bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc.  This
      separation limits use of the a0*() functions to metadata allocation,
      which doesn't require malloc/calloc/free API compatibility.
      
      This resolves #170.
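
      A minimal sketch of the resulting split (not from the commit; the
      a0*() prototypes below are stand-ins for the jemalloc internals the
      message refers to, and the wrapper bodies are illustrative):

          #include <stddef.h>
          #include <string.h>

          /* Stand-ins for the internal a0*() functions, which allocate
           * arena-0-backed memory without touching tsd. */
          void *a0malloc(size_t size);
          void a0dalloc(void *ptr);

          /* Thin, tsd-free entry points that FreeBSD's libc can call
           * before thread-local storage is usable. */
          void *
          bootstrap_malloc(size_t size)
          {
              if (size == 0)
                  size = 1;    /* preserve malloc(0) semantics */
              return a0malloc(size);
          }

          void *
          bootstrap_calloc(size_t num, size_t size)
          {
              size_t total = num * size;    /* overflow check elided */
              void *ret = a0malloc(total == 0 ? 1 : total);

              if (ret != NULL)
                  memset(ret, 0, total);
              return ret;
          }

          void
          bootstrap_free(void *ptr)
          {
              if (ptr != NULL)
                  a0dalloc(ptr);
          }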
    • Fix arenas_cache_cleanup(). · bc96876f
      Jason Evans authored
      Fix arenas_cache_cleanup() to check whether arenas_cache is NULL before
      deallocation, rather than checking arenas.
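
      The shape of the fix, sketched as a fragment (the tsd accessor name
      follows jemalloc's conventions but the body is illustrative):

          /* Sketch: test the thread's own arenas_cache pointer before
           * freeing it, rather than the unrelated global arenas array. */
          void
          arenas_cache_cleanup(tsd_t *tsd)
          {
              arena_t **arenas_cache = tsd_arenas_cache_get(tsd);

              if (arenas_cache != NULL)    /* was: if (arenas != NULL) */
                  a0dalloc(arenas_cache);
          }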
  2. Dec 18, 2014
    • Make mixed declarations an error · b7b44dfa
      Mike Hommey authored
      It often happens that code changes introduce mixed declarations, which
      then break the build with Visual Studio. Since the code style disallows
      mixed declarations anyway, we might as well enforce that with -Werror.
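
      For example (legal C99, but older Visual Studio C compilers reject it;
      with GCC and Clang the corresponding switch is
      -Werror=declaration-after-statement):

          #include <stdio.h>

          static void do_work(void) { printf("work\n"); }

          /* Mixed declaration: a declaration that follows a statement
           * inside a block. */
          static void
          mixed(void)
          {
              do_work();
              int x = 42;        /* declaration after a statement */
              printf("%d\n", x);
          }

          /* Hoisting the declaration to the top of the block satisfies
           * both the project style and Visual Studio. */
          static void
          unmixed(void)
          {
              int x;

              do_work();
              x = 42;
              printf("%d\n", x);
          }

          int
          main(void)
          {
              mixed();
              unmixed();
              return 0;
          }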
  3. Dec 05, 2014
    • Fix OOM cleanup in huge_palloc(). · 1036ddbf
      Jason Evans authored
      Fix OOM cleanup in huge_palloc() to call idalloct() rather than
      base_node_dalloc().  This bug is a result of incomplete refactoring, and
      has no impact other than leaking memory during OOM.
  4. Nov 29, 2014
    • teach the dss chunk allocator to handle new_addr · 879e76a9
      Daniel Micay authored
      This provides in-place expansion of huge allocations when the end of the
      allocation is at the end of the sbrk heap. There's already the ability
      to extend in place via recycled chunks, but this handles the initial
      growth of the heap, e.g. via repeated vector/string reallocations.
      
      A possible future extension could allow realloc to go from the following:
      
          | huge allocation | recycled chunks |
                                              ^ dss_end
      
      To a larger allocation built from recycled *and* new chunks:
      
          |                      huge allocation                      |
                                                                      ^ dss_end
      
      Doing that would involve teaching the chunk recycling code to request
      new chunks to satisfy the request. The chunk_dss code wouldn't require
      any further changes.
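
      The following microbenchmark grows an allocation by repeated doubling,
      exercising the initial heap growth path described above: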
      
          #include <stdlib.h>

          int main(void) {
              size_t chunk = 4 * 1024 * 1024;
              void *ptr = NULL;
              /* Double the allocation from 1 chunk up to 64 chunks; each
               * step can extend in place at the end of the sbrk heap. */
              for (size_t size = chunk; size < chunk * 128; size *= 2) {
                  ptr = realloc(ptr, size);
                  if (!ptr) return 1;
              }
              free(ptr);
              return 0;
          }
      
      Before:
      
      dss:secondary: 0.083s
      dss:primary: 0.083s
      
      After:
      
      dss:secondary: 0.083s
      dss:primary: 0.003s
      
      The dss heap grows upwards, so the oldest chunks are at the low
      addresses and are used first. Linux prefers to grow the mmap heap
      downwards, so the trick will not work in the *current* mmap chunk
      allocator, as a huge allocation would only end up at the top of that
      heap in a contrived case.
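
      A sketch of the new_addr handling in an sbrk-backed chunk allocator
      (simplified and not the commit's code; the real chunk_dss logic also
      deals with alignment, concurrent extension, and dss being disabled):

          #include <stddef.h>
          #include <stdint.h>
          #include <unistd.h>

          /* Sketch: a new_addr request can only be satisfied in place
           * when it is exactly the current heap break, i.e. when the
           * allocation being expanded ends at the end of the dss heap. */
          static void *
          dss_chunk_alloc(void *new_addr, size_t size)
          {
              void *dss_max = sbrk(0);    /* current end of the dss heap */

              if (new_addr != NULL && new_addr != dss_max)
                  return NULL;    /* not at the heap end; caller must copy */
              if (sbrk((intptr_t)size) == (void *)-1)
                  return NULL;    /* dss exhausted */
              return dss_max;     /* old break is the start of the chunk */
          }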
  5. Nov 05, 2014
    • Fix two quarantine regressions. · c002a5c8
      Jason Evans authored
      Fix quarantine to actually update tsd when expanding, and to avoid
      double initialization (leaking the first quarantine) due to recursive
      initialization.
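
      A sketch of the expansion half of the fix (field and helper names are
      illustrative, grounded only in the message's description that tsd must
      be updated when the quarantine grows):

          /* Sketch: quarantine_grow() returns a new, larger quarantine;
           * the result must be stored back into tsd, or the thread keeps
           * re-reading the stale pointer and the old quarantine leaks. */
          if (quarantine->curobjs >= quarantine->size) {
              quarantine = quarantine_grow(tsd, quarantine);
              tsd_quarantine_set(tsd, quarantine);  /* the missing update */
          }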
      
      This resolves #161.
  6. Oct 16, 2014
    • Merge pull request #151 from thestinger/ralloc · 8f47e3d8
      Jason Evans authored
      use sized deallocation internally for ralloc
    • use sized deallocation internally for ralloc · a9ea10d2
      Daniel Micay authored
      The size of the source allocation is known at this point, so reading
      the chunk header can be avoided on the small size class fast path. This
      is not very useful right now, but it provides a significant performance
      boost with an alternate ralloc entry point that takes the old size.
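
      The idea, sketched against the public API rather than the internal
      entry points the commit actually uses (sdallocx() is jemalloc's public
      sized-deallocation call; assumes the program is linked against
      jemalloc):

          #include <stdlib.h>
          #include <string.h>
          #include <jemalloc/jemalloc.h>

          /* Sketch of a ralloc-style move: the old size is already known,
           * so the source can be freed with sized deallocation, skipping
           * the metadata read a plain free() needs to rediscover the size
           * class. */
          static void *
          ralloc_move(void *old_ptr, size_t old_size, size_t new_size)
          {
              void *new_ptr = malloc(new_size);

              if (new_ptr == NULL)
                  return NULL;
              memcpy(new_ptr, old_ptr,
                  old_size < new_size ? old_size : new_size);
              sdallocx(old_ptr, old_size, 0);    /* sized free */
              return new_ptr;
          }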
    • Initialize chunks_mtx for all configurations. · c83bccd2
      Jason Evans authored
      This resolves #150.
    • Purge/zero sub-chunk huge allocations as necessary. · 96739834
      Jason Evans authored
      Purge trailing pages during shrinking huge reallocation when the
      resulting size is not a multiple of the chunk size.  Similarly, zero
      pages if necessary during growing huge reallocation when the resulting
      size is not a multiple of the chunk size.
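
      A sketch of the shrink-side purge (not the commit's code; the page
      size, rounding, and madvise() call are illustrative of the technique):

          #include <stddef.h>
          #include <stdint.h>
          #include <sys/mman.h>

          #define PAGE ((size_t)4096)    /* illustrative page size */

          /* Sketch: when a huge allocation shrinks and the new size is
           * not a chunk multiple, the pages between the new end and the
           * old end stay mapped but should stop counting against RSS. */
          static void
          purge_trailing(void *base, size_t oldsize, size_t newsize)
          {
              uintptr_t start = ((uintptr_t)base + newsize + PAGE - 1) &
                  ~(PAGE - 1);
              uintptr_t end = ((uintptr_t)base + oldsize) & ~(PAGE - 1);

              if (end > start)
                  madvise((void *)start, end - start, MADV_DONTNEED);
          }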
    • Add small run utilization to stats output. · bf8d6a10
      Jason Evans authored
      Add the 'util' column, which reports the proportion of available
      regions that are currently in use for each small size class.  Small run
      utilization is the complement of external fragmentation.  For example,
      utilization of 0.75 indicates that 25% of small run memory is consumed
      by external fragmentation; in other (more obtuse) words, that is a 33%
      external fragmentation overhead.
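
      The arithmetic behind that example, worked through:

          #include <stdio.h>

          int
          main(void)
          {
              double util = 0.75;            /* in-use / available regions */
              double frag = 1.0 - util;      /* fraction lost to external
                                              * fragmentation */
              double overhead = frag / util; /* waste relative to live data */

              /* Prints: util 0.75 -> 25% fragmented, 33% overhead */
              printf("util %.2f -> %.0f%% fragmented, %.0f%% overhead\n",
                  util, frag * 100.0, overhead * 100.0);
              return 0;
          }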
      
      This resolves #27.