- Nov 07, 2016
-
Jason Evans authored
-
Jason Evans authored
This change conforms to naming conventions throughout the codebase.
-
Jason Evans authored
This reverts commit c2942e2c. This resolves #495.
-
Jason Evans authored
This resolves #495.
-
- Nov 04, 2016
-
Jason Evans authored
-
Matthew Parkinson authored
-
Jason Evans authored
This supersedes -std=gnu99, and enables C11 atomics.
-
Jason Evans authored
-
Jason Evans authored
Do not call s2u() during alloc_size computation, since any necessary ceiling increase is taken care of later by extent_first_best_fit() --> extent_size_quantize_ceil(), and the s2u() call may erroneously cause a higher quantization result. Remove an overly strict overflow check that was added in 4a785213 (Fix extent_recycle()'s cache-oblivious padding support.).
-
Jason Evans authored
Add padding *after* computing the size class, so that the optimal size class isn't skipped during search for a usable extent. This regression was caused by b46261d5 (Implement cache-oblivious support for huge size classes.).
-
Jason Evans authored
Add an "over-size" extent heap in which to store extents which exceed the maximum size class (plus cache-oblivious padding, if enabled). Remove psz2ind_clamp() and use psz2ind() instead so that trying to allocate the maximum size class can in principle succeed. In practice, this allows assertions to hold so that OOM errors can be successfully generated.
-
Jason Evans authored
Fix extent_alloc_cache[_locked]() to support decommitted allocation, and use this ability in arena_stash_dirty(), so that decommitted extents are not needlessly committed during purging. In practice this does not happen on any currently supported systems, because both extent merging and decommit must be implemented; all supported systems implement one xor the other.
-
- Nov 03, 2016
-
Jason Evans authored
-
Jason Evans authored
-
Samuel Moritz authored
Treat it exactly like Linux since they both use GNU libc.
-
Dave Watson authored
rtree_node_init spinlocks the node, allocates, and then sets the node. This is under heavy contention at the top of the tree if many threads start to allocate at the same time. Instead, take a per-rtree sleeping mutex to reduce spinning. Tested both pthreads and OS X OSSpinLock; both reduce spinning adequately.
Previous benchmark time: ./ttest1 500 100: ~15s
New benchmark time: ./ttest1 500 100: 0.57s
-
Dave Watson authored
This resolves #485.
-
Jason Evans authored
-
Jason Evans authored
-
Jason Evans authored
OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended replacement.
-
Jason Evans authored
Fix zone_force_unlock() to reinitialize, rather than unlock, mutexes, since OS X 10.12 cannot tolerate a child unlocking mutexes that were locked by its parent. This refactoring was a side effect of experimenting with zone {de,re}registration during fork(2).
-
Jason Evans authored
_exit(2) is async-signal-safe, whereas exit(3) is not.
-
- Nov 02, 2016
-
Jason Evans authored
Monitoring thread creation is unimplemented for Windows, which means lazy-lock cannot function correctly. This resolves #310.
-
- Nov 01, 2016
-
Jason Evans authored
Fix and clean up various malloc_stats_print() issues caused by 0ba5b9b6 (Add "J" (JSON) support to malloc_stats_print().).
-
Jason Evans authored
-
Jason Evans authored
This resolves #474.
-
Jason Evans authored
This resolves #480.
-
- Oct 31, 2016
-
Jason Evans authored
-
Jason Evans authored
This resolves #396.
-
- Oct 30, 2016
-
Jason Evans authored
The raw clock variant is slow (even relative to plain CLOCK_MONOTONIC), whereas the coarse clock variant is faster than CLOCK_MONOTONIC, but still has resolution (~1ms) that is adequate for our purposes. This resolves #479.
-
Jason Evans authored
Some applications wrap various system calls, and if they call the allocator in their wrappers, unexpected reentry can result. This is not a general solution (many other syscalls are spread throughout the code), but this resolves a bootstrapping issue that is apparently common. This resolves #443.
-
Jason Evans authored
-
- Oct 29, 2016
-
Jason Evans authored
This works around malloc_conf not being properly initialized by at least the cygwin toolchain. Prior build system changes to use -Wl,--[no-]whole-archive may be necessary for malloc_conf resolution to work properly as a non-weak symbol (not tested).
-
Jason Evans authored
This is generally correct (no need for weak symbols since no jemalloc library is involved in the link phase), and avoids linking problems (apparently uninitialized non-NULL malloc_conf) when using cygwin with gcc.
-
Dave Watson authored
glibc defines its malloc implementation with several weak and strong symbols:
strong_alias (__libc_calloc, __calloc)
weak_alias (__libc_calloc, calloc)
strong_alias (__libc_free, __cfree)
weak_alias (__libc_free, cfree)
strong_alias (__libc_free, __free)
strong_alias (__libc_free, free)
strong_alias (__libc_malloc, __malloc)
strong_alias (__libc_malloc, malloc)
The issue is not with the weak symbols, but that other parts of glibc depend on __libc_malloc explicitly. Defining them in terms of jemalloc APIs allows the linker to drop glibc's malloc.o completely from the link, and static linking no longer results in symbol collisions. Another wrinkle: jemalloc during initialization calls sysconf to get the number of CPUs. glibc allocates for the first time before setting up isspace (and other related) tables, which are used by sysconf. Instead, use the pthread API to get the number of CPUs with glibc, which seems to work. This resolves #442.
-
- Oct 28, 2016
-
Jason Evans authored
This is intended to drop memory usage to a level that AppVeyor test instances can handle. This resolves #393.
-
Jason Evans authored
This resolves #393.
-
Jason Evans authored
This resolves #393.
-
Jason Evans authored
Use the correct level metadata when allocating child nodes so that leaf nodes don't end up over-sized (2^16 elements vs 2^4 elements).
-
Jason Evans authored
This avoids warnings in some cases, and is otherwise generally good hygiene.
-