hanchenye-llvm-project/libcxx/include/__hash_table

// -*- C++ -*-
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
#ifndef _LIBCPP__HASH_TABLE
#define _LIBCPP__HASH_TABLE
#include <__config>
#include <initializer_list>
#include <memory>
#include <iterator>
#include <algorithm>
#include <cmath>
#include <utility>
#include <__undef_min_max>
#include <__undef___deallocate>
#include <__debug>
#if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER)
#pragma GCC system_header
#endif
_LIBCPP_BEGIN_NAMESPACE_STD
#ifndef _LIBCPP_CXX03_LANG
template <class _Key, class _Tp>
union __hash_value_type;
#else
template <class _Key, class _Tp>
struct __hash_value_type;
#endif
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp>
struct __is_hash_value_type_imp : false_type {};
template <class _Key, class _Value>
struct __is_hash_value_type_imp<__hash_value_type<_Key, _Value>> : true_type {};
template <class ..._Args>
struct __is_hash_value_type : false_type {};
template <class _One>
struct __is_hash_value_type<_One> : __is_hash_value_type_imp<typename __uncvref<_One>::type> {};
#endif
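// __is_hash_value_type<_Args...> is true only when the pack is a single
// argument whose decayed type is a __hash_value_type<_Key, _Tp>.  This lets
// __hash_table tell a node's stored value apart from ordinary constructor
// arguments when dispatching emplace-style calls.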
_LIBCPP_FUNC_VIS
size_t __next_prime(size_t __n);
template <class _NodePtr>
struct __hash_node_base
{
typedef __hash_node_base __first_node;
_NodePtr __next_;
_LIBCPP_INLINE_VISIBILITY __hash_node_base() _NOEXCEPT : __next_(nullptr) {}
};
template <class _Tp, class _VoidPtr>
struct __hash_node
: public __hash_node_base
<
typename __rebind_pointer<_VoidPtr, __hash_node<_Tp, _VoidPtr> >::type
>
{
typedef _Tp __node_value_type;
size_t __hash_;
__node_value_type __value_;
};
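// __hash_node::__hash_ caches the hash of the stored value so that bucket
// membership can be rechecked (see __constrain_hash and the local iterators
// below) without re-invoking the hash function.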
inline _LIBCPP_INLINE_VISIBILITY
bool
__is_hash_power2(size_t __bc)
{
return __bc > 2 && !(__bc & (__bc - 1));
}
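// Map a hash value onto a bucket index: a mask when the bucket count is a
// power of two, a modulo otherwise.  For example, __constrain_hash(__h, 8)
// yields (__h & 7) while __constrain_hash(__h, 7) yields (__h % 7).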
inline _LIBCPP_INLINE_VISIBILITY
size_t
__constrain_hash(size_t __h, size_t __bc)
{
return !(__bc & (__bc - 1)) ? __h & (__bc - 1) : __h % __bc;
}
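// Smallest power of two that is not less than __n (for __n >= 2), e.g.
// __next_hash_pow2(5) == 8 and __next_hash_pow2(8) == 8.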
inline _LIBCPP_INLINE_VISIBILITY
size_t
__next_hash_pow2(size_t __n)
{
return size_t(1) << (std::numeric_limits<size_t>::digits - __clz(__n-1));
}
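// Together with __next_prime, the helpers above implement the bucket-count
// policy: a prime number of buckets is chosen by default, while a client who
// requests a power-of-two bucket count gets the cheaper mask-based
// __constrain_hash path.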
#ifndef _LIBCPP_CXX03_LANG
struct __extract_key_fail_tag {};
struct __extract_key_self_tag {};
struct __extract_key_first_tag {};
template <class _ValTy, class _Key,
class _RawValTy = typename __unconstref<_ValTy>::type>
struct __can_extract_key
: conditional<is_same<_RawValTy, _Key>::value, __extract_key_self_tag,
__extract_key_fail_tag>::type {};
template <class _Pair, class _Key, class _First, class _Second>
struct __can_extract_key<_Pair, _Key, pair<_First, _Second>>
: conditional<is_same<typename remove_const<_First>::type, _Key>::value,
__extract_key_first_tag, __extract_key_fail_tag>::type {};
#endif
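// __can_extract_key classifies an argument for the insert/emplace paths:
//   __extract_key_self_tag  - the argument is the key itself,
//   __extract_key_first_tag - the argument is a pair whose first member is the key,
//   __extract_key_fail_tag  - the key cannot be recovered without building a node.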
template <class _Tp, class _Hash, class _Equal, class _Alloc> class __hash_table;
template <class _NodePtr> class _LIBCPP_TYPE_VIS_ONLY __hash_iterator;
template <class _ConstNodePtr> class _LIBCPP_TYPE_VIS_ONLY __hash_const_iterator;
template <class _NodePtr> class _LIBCPP_TYPE_VIS_ONLY __hash_local_iterator;
template <class _ConstNodePtr> class _LIBCPP_TYPE_VIS_ONLY __hash_const_local_iterator;
template <class _HashIterator> class _LIBCPP_TYPE_VIS_ONLY __hash_map_iterator;
template <class _HashIterator> class _LIBCPP_TYPE_VIS_ONLY __hash_map_const_iterator;
template <class _Tp>
struct __hash_key_value_types {
static_assert(!is_reference<_Tp>::value && !is_const<_Tp>::value, "");
typedef _Tp key_type;
typedef _Tp __node_value_type;
typedef _Tp __container_value_type;
static const bool __is_map = false;
_LIBCPP_INLINE_VISIBILITY
static key_type const& __get_key(_Tp const& __v) {
return __v;
}
_LIBCPP_INLINE_VISIBILITY
static __container_value_type const& __get_value(__node_value_type const& __v) {
return __v;
}
_LIBCPP_INLINE_VISIBILITY
static __container_value_type* __get_ptr(__node_value_type& __n) {
return _VSTD::addressof(__n);
}
#ifndef _LIBCPP_CXX03_LANG
_LIBCPP_INLINE_VISIBILITY
static __container_value_type&& __move(__node_value_type& __v) {
return _VSTD::move(__v);
}
#endif
};
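// In the set-like case (__is_map == false) the key, the value stored in the
// node, and the value exposed by the container are all the same type, so
// __get_key and __get_value are identity functions.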
template <class _Key, class _Tp>
struct __hash_key_value_types<__hash_value_type<_Key, _Tp> > {
typedef _Key key_type;
typedef _Tp mapped_type;
typedef __hash_value_type<_Key, _Tp> __node_value_type;
typedef pair<const _Key, _Tp> __container_value_type;
typedef pair<_Key, _Tp> __nc_value_type;
typedef __container_value_type __map_value_type;
static const bool __is_map = true;
_LIBCPP_INLINE_VISIBILITY
static key_type const& __get_key(__container_value_type const& __v) {
return __v.first;
}
template <class _Up>
_LIBCPP_INLINE_VISIBILITY
static typename enable_if<__is_same_uncvref<_Up, __node_value_type>::value,
__container_value_type const&>::type
__get_value(_Up& __t) {
return __t.__cc;
}
template <class _Up>
_LIBCPP_INLINE_VISIBILITY
static typename enable_if<__is_same_uncvref<_Up, __container_value_type>::value,
__container_value_type const&>::type
__get_value(_Up& __t) {
return __t;
}
_LIBCPP_INLINE_VISIBILITY
static __container_value_type* __get_ptr(__node_value_type& __n) {
return _VSTD::addressof(__n.__cc);
}
#ifndef _LIBCPP_CXX03_LANG
_LIBCPP_INLINE_VISIBILITY
static __nc_value_type&& __move(__node_value_type& __v) {
return _VSTD::move(__v.__nc);
}
#endif
};
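// In the map case the node stores a __hash_value_type<_Key, _Tp>: its __cc
// member is the pair<const _Key, _Tp> exposed to users, and its __nc member
// is a pair<_Key, _Tp> from which the key may be moved.  Illustrative
// instantiation (not part of this header):
//   typedef __hash_key_value_types<__hash_value_type<int, long> > _KV;
//   // _KV::key_type               is int
//   // _KV::__container_value_type is pair<const int, long>
//   // _KV::__nc_value_type        is pair<int, long>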
template <class _Tp, class _AllocPtr, class _KVTypes = __hash_key_value_types<_Tp>,
bool = _KVTypes::__is_map>
struct __hash_map_pointer_types {};
template <class _Tp, class _AllocPtr, class _KVTypes>
struct __hash_map_pointer_types<_Tp, _AllocPtr, _KVTypes, true> {
typedef typename _KVTypes::__map_value_type _Mv;
typedef typename __rebind_pointer<_AllocPtr, _Mv>::type
__map_value_type_pointer;
typedef typename __rebind_pointer<_AllocPtr, const _Mv>::type
__const_map_value_type_pointer;
};
template <class _NodePtr, class _NodeT = typename pointer_traits<_NodePtr>::element_type>
struct __hash_node_types;
template <class _NodePtr, class _Tp, class _VoidPtr>
struct __hash_node_types<_NodePtr, __hash_node<_Tp, _VoidPtr> >
: public __hash_key_value_types<_Tp>, __hash_map_pointer_types<_Tp, _VoidPtr>
{
typedef __hash_key_value_types<_Tp> __base;
public:
typedef ptrdiff_t difference_type;
typedef size_t size_type;
typedef typename __rebind_pointer<_NodePtr, void>::type __void_pointer;
typedef typename pointer_traits<_NodePtr>::element_type __node_type;
typedef _NodePtr __node_pointer;
typedef __hash_node_base<__node_pointer> __node_base_type;
typedef typename __rebind_pointer<_NodePtr, __node_base_type>::type
__node_base_pointer;
typedef _Tp __node_value_type;
typedef typename __rebind_pointer<_VoidPtr, __node_value_type>::type
__node_value_type_pointer;
typedef typename __rebind_pointer<_VoidPtr, const __node_value_type>::type
__const_node_value_type_pointer;
private:
static_assert(!is_const<__node_type>::value,
"_NodePtr should never be a pointer to const");
static_assert((is_same<typename pointer_traits<_VoidPtr>::element_type, void>::value),
"_VoidPtr does not point to unqualified void type");
static_assert((is_same<typename __rebind_pointer<_VoidPtr, __node_type>::type,
_NodePtr>::value), "_VoidPtr does not rebind to _NodePtr.");
};
template <class _HashIterator>
struct __hash_node_types_from_iterator;
template <class _NodePtr>
struct __hash_node_types_from_iterator<__hash_iterator<_NodePtr> > : __hash_node_types<_NodePtr> {};
template <class _NodePtr>
struct __hash_node_types_from_iterator<__hash_const_iterator<_NodePtr> > : __hash_node_types<_NodePtr> {};
template <class _NodePtr>
struct __hash_node_types_from_iterator<__hash_local_iterator<_NodePtr> > : __hash_node_types<_NodePtr> {};
template <class _NodePtr>
struct __hash_node_types_from_iterator<__hash_const_local_iterator<_NodePtr> > : __hash_node_types<_NodePtr> {};
template <class _NodeValueTp, class _VoidPtr>
struct __make_hash_node_types {
typedef __hash_node<_NodeValueTp, _VoidPtr> _NodeTp;
typedef typename __rebind_pointer<_VoidPtr, _NodeTp>::type _NodePtr;
typedef __hash_node_types<_NodePtr> type;
};
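// __make_hash_node_types computes, from a node value type and the allocator's
// void_pointer, the corresponding __hash_node type, its pointer type, and the
// __hash_node_types bundle used by the iterators and by __hash_table itself.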
template <class _NodePtr>
class _LIBCPP_TYPE_VIS_ONLY __hash_iterator
{
typedef __hash_node_types<_NodePtr> _NodeTypes;
typedef _NodePtr __node_pointer;
__node_pointer __node_;
public:
typedef forward_iterator_tag iterator_category;
typedef typename _NodeTypes::__node_value_type value_type;
typedef typename _NodeTypes::difference_type difference_type;
typedef value_type& reference;
typedef typename _NodeTypes::__node_value_type_pointer pointer;
_LIBCPP_INLINE_VISIBILITY __hash_iterator() _NOEXCEPT
#if _LIBCPP_STD_VER > 11
: __node_(nullptr)
#endif
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__insert_i(this);
#endif
}
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_iterator(const __hash_iterator& __i)
: __node_(__i.__node_)
{
__get_db()->__iterator_copy(this, &__i);
}
_LIBCPP_INLINE_VISIBILITY
~__hash_iterator()
{
__get_db()->__erase_i(this);
}
_LIBCPP_INLINE_VISIBILITY
__hash_iterator& operator=(const __hash_iterator& __i)
{
if (this != &__i)
{
__get_db()->__iterator_copy(this, &__i);
__node_ = __i.__node_;
}
return *this;
}
#endif // _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
reference operator*() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container iterator");
#endif
return __node_->__value_;
}
_LIBCPP_INLINE_VISIBILITY
pointer operator->() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container iterator");
#endif
return pointer_traits<pointer>::pointer_to(__node_->__value_);
}
_LIBCPP_INLINE_VISIBILITY
__hash_iterator& operator++()
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to increment non-incrementable unordered container iterator");
#endif
__node_ = __node_->__next_;
return *this;
}
_LIBCPP_INLINE_VISIBILITY
__hash_iterator operator++(int)
{
__hash_iterator __t(*this);
++(*this);
return __t;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator==(const __hash_iterator& __x, const __hash_iterator& __y)
{
return __x.__node_ == __y.__node_;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator!=(const __hash_iterator& __x, const __hash_iterator& __y)
{return !(__x == __y);}
private:
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_iterator(__node_pointer __node, const void* __c) _NOEXCEPT
: __node_(__node)
{
__get_db()->__insert_ic(this, __c);
}
#else
_LIBCPP_INLINE_VISIBILITY
__hash_iterator(__node_pointer __node) _NOEXCEPT
: __node_(__node)
{}
#endif
template <class, class, class, class> friend class __hash_table;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_const_iterator;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_map_iterator;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_map;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_multimap;
};
template <class _NodePtr>
class _LIBCPP_TYPE_VIS_ONLY __hash_const_iterator
{
static_assert(!is_const<typename pointer_traits<_NodePtr>::element_type>::value, "");
typedef __hash_node_types<_NodePtr> _NodeTypes;
typedef _NodePtr __node_pointer;
__node_pointer __node_;
public:
typedef __hash_iterator<_NodePtr> __non_const_iterator;
typedef forward_iterator_tag iterator_category;
typedef typename _NodeTypes::__node_value_type value_type;
typedef typename _NodeTypes::difference_type difference_type;
typedef const value_type& reference;
typedef typename _NodeTypes::__const_node_value_type_pointer pointer;
_LIBCPP_INLINE_VISIBILITY __hash_const_iterator() _NOEXCEPT
#if _LIBCPP_STD_VER > 11
: __node_(nullptr)
#endif
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__insert_i(this);
#endif
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator(const __non_const_iterator& __x) _NOEXCEPT
: __node_(__x.__node_)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__iterator_copy(this, &__x);
#endif
}
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator(const __hash_const_iterator& __i)
: __node_(__i.__node_)
{
__get_db()->__iterator_copy(this, &__i);
}
_LIBCPP_INLINE_VISIBILITY
~__hash_const_iterator()
{
__get_db()->__erase_i(this);
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator& operator=(const __hash_const_iterator& __i)
{
if (this != &__i)
{
__get_db()->__iterator_copy(this, &__i);
__node_ = __i.__node_;
}
return *this;
}
#endif // _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
reference operator*() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container const_iterator");
#endif
return __node_->__value_;
}
_LIBCPP_INLINE_VISIBILITY
pointer operator->() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container const_iterator");
#endif
return pointer_traits<pointer>::pointer_to(__node_->__value_);
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator& operator++()
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to increment non-incrementable unordered container const_iterator");
#endif
__node_ = __node_->__next_;
return *this;
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator operator++(int)
{
__hash_const_iterator __t(*this);
++(*this);
return __t;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator==(const __hash_const_iterator& __x, const __hash_const_iterator& __y)
{
return __x.__node_ == __y.__node_;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator!=(const __hash_const_iterator& __x, const __hash_const_iterator& __y)
{return !(__x == __y);}
private:
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator(__node_pointer __node, const void* __c) _NOEXCEPT
: __node_(__node)
{
__get_db()->__insert_ic(this, __c);
}
#else
_LIBCPP_INLINE_VISIBILITY
__hash_const_iterator(__node_pointer __node) _NOEXCEPT
: __node_(__node)
{}
#endif
template <class, class, class, class> friend class __hash_table;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_map_const_iterator;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_map;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_multimap;
};
template <class _NodePtr>
class _LIBCPP_TYPE_VIS_ONLY __hash_local_iterator
{
typedef __hash_node_types<_NodePtr> _NodeTypes;
typedef _NodePtr __node_pointer;
__node_pointer __node_;
size_t __bucket_;
size_t __bucket_count_;
public:
typedef forward_iterator_tag iterator_category;
typedef typename _NodeTypes::__node_value_type value_type;
typedef typename _NodeTypes::difference_type difference_type;
typedef value_type& reference;
typedef typename _NodeTypes::__node_value_type_pointer pointer;
_LIBCPP_INLINE_VISIBILITY __hash_local_iterator() _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__insert_i(this);
#endif
}
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator(const __hash_local_iterator& __i)
: __node_(__i.__node_),
__bucket_(__i.__bucket_),
__bucket_count_(__i.__bucket_count_)
{
__get_db()->__iterator_copy(this, &__i);
}
_LIBCPP_INLINE_VISIBILITY
~__hash_local_iterator()
{
__get_db()->__erase_i(this);
}
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator& operator=(const __hash_local_iterator& __i)
{
if (this != &__i)
{
__get_db()->__iterator_copy(this, &__i);
__node_ = __i.__node_;
__bucket_ = __i.__bucket_;
__bucket_count_ = __i.__bucket_count_;
}
return *this;
}
#endif // _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
reference operator*() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container local_iterator");
#endif
return __node_->__value_;
}
_LIBCPP_INLINE_VISIBILITY
pointer operator->() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container local_iterator");
#endif
return pointer_traits<pointer>::pointer_to(__node_->__value_);
}
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator& operator++()
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to increment non-incrementable unordered container local_iterator");
#endif
__node_ = __node_->__next_;
if (__node_ != nullptr && __constrain_hash(__node_->__hash_, __bucket_count_) != __bucket_)
__node_ = nullptr;
return *this;
}
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator operator++(int)
{
__hash_local_iterator __t(*this);
++(*this);
return __t;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator==(const __hash_local_iterator& __x, const __hash_local_iterator& __y)
{
return __x.__node_ == __y.__node_;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator!=(const __hash_local_iterator& __x, const __hash_local_iterator& __y)
{return !(__x == __y);}
private:
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator(__node_pointer __node, size_t __bucket,
size_t __bucket_count, const void* __c) _NOEXCEPT
: __node_(__node),
__bucket_(__bucket),
__bucket_count_(__bucket_count)
{
__get_db()->__insert_ic(this, __c);
if (__node_ != nullptr)
__node_ = __node_->__next_;
}
#else
_LIBCPP_INLINE_VISIBILITY
__hash_local_iterator(__node_pointer __node, size_t __bucket,
size_t __bucket_count) _NOEXCEPT
: __node_(__node),
__bucket_(__bucket),
__bucket_count_(__bucket_count)
{
if (__node_ != nullptr)
__node_ = __node_->__next_;
}
#endif
template <class, class, class, class> friend class __hash_table;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_const_local_iterator;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_map_iterator;
};
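// Note how per-bucket ("local") iteration works on top of the single linked
// list: operator++ follows the global __next_ chain, but resets to nullptr as
// soon as the next node's cached hash constrains to a different bucket.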
template <class _ConstNodePtr>
class _LIBCPP_TYPE_VIS_ONLY __hash_const_local_iterator
{
typedef __hash_node_types<_ConstNodePtr> _NodeTypes;
typedef _ConstNodePtr __node_pointer;
__node_pointer __node_;
size_t __bucket_;
size_t __bucket_count_;
typedef pointer_traits<__node_pointer> __pointer_traits;
typedef typename __pointer_traits::element_type __node;
typedef typename remove_const<__node>::type __non_const_node;
typedef typename __rebind_pointer<__node_pointer, __non_const_node>::type
__non_const_node_pointer;
public:
typedef __hash_local_iterator<__non_const_node_pointer>
__non_const_iterator;
typedef forward_iterator_tag iterator_category;
typedef typename _NodeTypes::__node_value_type value_type;
typedef typename _NodeTypes::difference_type difference_type;
typedef const value_type& reference;
typedef typename _NodeTypes::__const_node_value_type_pointer pointer;
_LIBCPP_INLINE_VISIBILITY __hash_const_local_iterator() _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__insert_i(this);
#endif
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator(const __non_const_iterator& __x) _NOEXCEPT
: __node_(__x.__node_),
__bucket_(__x.__bucket_),
__bucket_count_(__x.__bucket_count_)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__iterator_copy(this, &__x);
#endif
}
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator(const __hash_const_local_iterator& __i)
: __node_(__i.__node_),
__bucket_(__i.__bucket_),
__bucket_count_(__i.__bucket_count_)
{
__get_db()->__iterator_copy(this, &__i);
}
_LIBCPP_INLINE_VISIBILITY
~__hash_const_local_iterator()
{
__get_db()->__erase_i(this);
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator& operator=(const __hash_const_local_iterator& __i)
{
if (this != &__i)
{
__get_db()->__iterator_copy(this, &__i);
__node_ = __i.__node_;
__bucket_ = __i.__bucket_;
__bucket_count_ = __i.__bucket_count_;
}
return *this;
}
#endif // _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
reference operator*() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container const_local_iterator");
#endif
return __node_->__value_;
}
_LIBCPP_INLINE_VISIBILITY
pointer operator->() const
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to dereference a non-dereferenceable unordered container const_local_iterator");
#endif
return pointer_traits<pointer>::pointer_to(__node_->__value_);
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator& operator++()
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__dereferenceable(this),
"Attempted to increment non-incrementable unordered container const_local_iterator");
#endif
__node_ = __node_->__next_;
if (__node_ != nullptr && __constrain_hash(__node_->__hash_, __bucket_count_) != __bucket_)
__node_ = nullptr;
return *this;
}
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator operator++(int)
{
__hash_const_local_iterator __t(*this);
++(*this);
return __t;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator==(const __hash_const_local_iterator& __x, const __hash_const_local_iterator& __y)
{
return __x.__node_ == __y.__node_;
}
friend _LIBCPP_INLINE_VISIBILITY
bool operator!=(const __hash_const_local_iterator& __x, const __hash_const_local_iterator& __y)
{return !(__x == __y);}
private:
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator(__node_pointer __node, size_t __bucket,
size_t __bucket_count, const void* __c) _NOEXCEPT
: __node_(__node),
__bucket_(__bucket),
__bucket_count_(__bucket_count)
{
__get_db()->__insert_ic(this, __c);
if (__node_ != nullptr)
__node_ = __node_->__next_;
}
#else
_LIBCPP_INLINE_VISIBILITY
__hash_const_local_iterator(__node_pointer __node, size_t __bucket,
size_t __bucket_count) _NOEXCEPT
: __node_(__node),
__bucket_(__bucket),
__bucket_count_(__bucket_count)
{
if (__node_ != nullptr)
__node_ = __node_->__next_;
}
#endif
template <class, class, class, class> friend class __hash_table;
template <class> friend class _LIBCPP_TYPE_VIS_ONLY __hash_map_const_iterator;
};
template <class _Alloc>
class __bucket_list_deallocator
{
typedef _Alloc allocator_type;
typedef allocator_traits<allocator_type> __alloc_traits;
typedef typename __alloc_traits::size_type size_type;
__compressed_pair<size_type, allocator_type> __data_;
public:
typedef typename __alloc_traits::pointer pointer;
_LIBCPP_INLINE_VISIBILITY
__bucket_list_deallocator()
_NOEXCEPT_(is_nothrow_default_constructible<allocator_type>::value)
: __data_(0) {}
_LIBCPP_INLINE_VISIBILITY
__bucket_list_deallocator(const allocator_type& __a, size_type __size)
_NOEXCEPT_(is_nothrow_copy_constructible<allocator_type>::value)
: __data_(__size, __a) {}
#ifndef _LIBCPP_HAS_NO_RVALUE_REFERENCES
_LIBCPP_INLINE_VISIBILITY
__bucket_list_deallocator(__bucket_list_deallocator&& __x)
_NOEXCEPT_(is_nothrow_move_constructible<allocator_type>::value)
: __data_(_VSTD::move(__x.__data_))
2010-05-12 03:42:16 +08:00
{
__x.size() = 0;
}
#endif // _LIBCPP_HAS_NO_RVALUE_REFERENCES
_LIBCPP_INLINE_VISIBILITY
size_type& size() _NOEXCEPT {return __data_.first();}
_LIBCPP_INLINE_VISIBILITY
size_type size() const _NOEXCEPT {return __data_.first();}
_LIBCPP_INLINE_VISIBILITY
allocator_type& __alloc() _NOEXCEPT {return __data_.second();}
_LIBCPP_INLINE_VISIBILITY
const allocator_type& __alloc() const _NOEXCEPT {return __data_.second();}
_LIBCPP_INLINE_VISIBILITY
void operator()(pointer __p) _NOEXCEPT
{
__alloc_traits::deallocate(__alloc(), __p, size());
}
};
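// __bucket_list_deallocator serves as the deleter of the unique_ptr that owns
// the bucket array (__bucket_list below).  It keeps the array length in a
// __compressed_pair next to the allocator so operator() can pass the matching
// size to allocator_traits<>::deallocate.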
template <class _Alloc> class __hash_map_node_destructor;
template <class _Alloc>
class __hash_node_destructor
{
typedef _Alloc allocator_type;
typedef allocator_traits<allocator_type> __alloc_traits;
public:
typedef typename __alloc_traits::pointer pointer;
private:
typedef __hash_node_types<pointer> _NodeTypes;
allocator_type& __na_;
__hash_node_destructor& operator=(const __hash_node_destructor&);
public:
bool __value_constructed;
_LIBCPP_INLINE_VISIBILITY
explicit __hash_node_destructor(allocator_type& __na,
bool __constructed = false) _NOEXCEPT
: __na_(__na),
__value_constructed(__constructed)
{}
_LIBCPP_INLINE_VISIBILITY
void operator()(pointer __p) _NOEXCEPT
{
if (__value_constructed)
__alloc_traits::destroy(__na_, _NodeTypes::__get_ptr(__p->__value_));
if (__p)
__alloc_traits::deallocate(__na_, __p, 1);
}
template <class> friend class __hash_map_node_destructor;
};
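// __hash_node_destructor is the deleter used while a node is being assembled:
// as long as __value_constructed is false only the raw node memory is
// deallocated, so an exception thrown while constructing the value cannot
// leak the allocation; once the value exists the flag is set and operator()
// also destroys it.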
template <class _Tp, class _Hash, class _Equal, class _Alloc>
class __hash_table
{
public:
typedef _Tp value_type;
typedef _Hash hasher;
typedef _Equal key_equal;
typedef _Alloc allocator_type;
private:
typedef allocator_traits<allocator_type> __alloc_traits;
typedef typename
__make_hash_node_types<value_type, typename __alloc_traits::void_pointer>::type
_NodeTypes;
public:
typedef typename _NodeTypes::__node_value_type __node_value_type;
typedef typename _NodeTypes::__container_value_type __container_value_type;
typedef typename _NodeTypes::key_type key_type;
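// _NodeTypes distinguishes the container's value type from the value type
// stored in a node: for a map-like table __container_value_type is
// pair<const Key, T> while __node_value_type is the internal
// __hash_value_type wrapper; for set-like tables the two coincide.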
typedef value_type& reference;
typedef const value_type& const_reference;
typedef typename __alloc_traits::pointer pointer;
typedef typename __alloc_traits::const_pointer const_pointer;
#ifndef _LIBCPP_ABI_FIX_UNORDERED_CONTAINER_SIZE_TYPE
typedef typename __alloc_traits::size_type size_type;
#else
typedef typename _NodeTypes::size_type size_type;
#endif
typedef typename _NodeTypes::difference_type difference_type;
public:
// Node type and its allocator: the user's allocator is rebound so that
// whole __node objects, not bare value_type objects, are allocated.
typedef typename _NodeTypes::__node_type __node;
typedef typename __rebind_alloc_helper<__alloc_traits, __node>::type __node_allocator;
typedef allocator_traits<__node_allocator> __node_traits;
typedef typename _NodeTypes::__void_pointer __void_pointer;
typedef typename _NodeTypes::__node_pointer __node_pointer;
typedef typename _NodeTypes::__node_pointer __node_const_pointer;
typedef typename _NodeTypes::__node_base_type __first_node;
typedef typename _NodeTypes::__node_base_pointer __node_base_pointer;
private:
// check for sane allocator pointer rebinding semantics. Rebinding the
// allocator for a new pointer type should be exactly the same as rebinding
// the pointer using 'pointer_traits'.
static_assert((is_same<__node_pointer, typename __node_traits::pointer>::value),
"Allocator does not rebind pointers in a sane manner.");
typedef typename __rebind_alloc_helper<__node_traits, __first_node>::type
__node_base_allocator;
typedef allocator_traits<__node_base_allocator> __node_base_traits;
static_assert((is_same<__node_base_pointer, typename __node_base_traits::pointer>::value),
"Allocator does not rebind pointers in a sane manner.");
private:
typedef typename __rebind_alloc_helper<__node_traits, __node_pointer>::type __pointer_allocator;
typedef __bucket_list_deallocator<__pointer_allocator> __bucket_list_deleter;
typedef unique_ptr<__node_pointer[], __bucket_list_deleter> __bucket_list;
typedef allocator_traits<__pointer_allocator> __pointer_alloc_traits;
typedef typename __bucket_list_deleter::pointer __node_pointer_pointer;
// --- Member data begin ---
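// __bucket_list_ owns the bucket array; the three __compressed_pair members
// pair a potentially empty component with a plain data member: __p1_ holds
// the list head node (__first_node) and the node allocator, __p2_ holds the
// element count and the hasher, and __p3_ holds the max load factor and the
// key-equality predicate (see the accessors below).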
__bucket_list __bucket_list_;
__compressed_pair<__first_node, __node_allocator> __p1_;
__compressed_pair<size_type, hasher> __p2_;
__compressed_pair<float, key_equal> __p3_;
// --- Member data end ---
_LIBCPP_INLINE_VISIBILITY
size_type& size() _NOEXCEPT {return __p2_.first();}
public:
_LIBCPP_INLINE_VISIBILITY
size_type size() const _NOEXCEPT {return __p2_.first();}
_LIBCPP_INLINE_VISIBILITY
hasher& hash_function() _NOEXCEPT {return __p2_.second();}
_LIBCPP_INLINE_VISIBILITY
const hasher& hash_function() const _NOEXCEPT {return __p2_.second();}
_LIBCPP_INLINE_VISIBILITY
float& max_load_factor() _NOEXCEPT {return __p3_.first();}
_LIBCPP_INLINE_VISIBILITY
float max_load_factor() const _NOEXCEPT {return __p3_.first();}
_LIBCPP_INLINE_VISIBILITY
key_equal& key_eq() _NOEXCEPT {return __p3_.second();}
_LIBCPP_INLINE_VISIBILITY
const key_equal& key_eq() const _NOEXCEPT {return __p3_.second();}
_LIBCPP_INLINE_VISIBILITY
__node_allocator& __node_alloc() _NOEXCEPT {return __p1_.second();}
_LIBCPP_INLINE_VISIBILITY
const __node_allocator& __node_alloc() const _NOEXCEPT
{return __p1_.second();}
public:
typedef __hash_iterator<__node_pointer> iterator;
typedef __hash_const_iterator<__node_pointer> const_iterator;
typedef __hash_local_iterator<__node_pointer> local_iterator;
typedef __hash_const_local_iterator<__node_pointer> const_local_iterator;
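// Both iterator flavors wrap a __node_pointer. The local iterators also
// record the bucket index and bucket count (and, at debug level 2, the
// owning container) so that traversal stays within a single bucket; see the
// begin(__n)/end(__n) overloads below.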
_LIBCPP_INLINE_VISIBILITY
__hash_table()
_NOEXCEPT_(
is_nothrow_default_constructible<__bucket_list>::value &&
is_nothrow_default_constructible<__first_node>::value &&
is_nothrow_default_constructible<__node_allocator>::value &&
is_nothrow_default_constructible<hasher>::value &&
is_nothrow_default_constructible<key_equal>::value);
_LIBCPP_INLINE_VISIBILITY
__hash_table(const hasher& __hf, const key_equal& __eql);
__hash_table(const hasher& __hf, const key_equal& __eql,
const allocator_type& __a);
explicit __hash_table(const allocator_type& __a);
__hash_table(const __hash_table& __u);
__hash_table(const __hash_table& __u, const allocator_type& __a);
#ifndef _LIBCPP_CXX03_LANG
__hash_table(__hash_table&& __u)
_NOEXCEPT_(
is_nothrow_move_constructible<__bucket_list>::value &&
is_nothrow_move_constructible<__first_node>::value &&
is_nothrow_move_constructible<__node_allocator>::value &&
is_nothrow_move_constructible<hasher>::value &&
is_nothrow_move_constructible<key_equal>::value);
__hash_table(__hash_table&& __u, const allocator_type& __a);
#endif // _LIBCPP_CXX03_LANG
~__hash_table();
__hash_table& operator=(const __hash_table& __u);
#ifndef _LIBCPP_CXX03_LANG
_LIBCPP_INLINE_VISIBILITY
__hash_table& operator=(__hash_table&& __u)
_NOEXCEPT_(
__node_traits::propagate_on_container_move_assignment::value &&
is_nothrow_move_assignable<__node_allocator>::value &&
is_nothrow_move_assignable<hasher>::value &&
is_nothrow_move_assignable<key_equal>::value);
#endif
template <class _InputIterator>
void __assign_unique(_InputIterator __first, _InputIterator __last);
template <class _InputIterator>
void __assign_multi(_InputIterator __first, _InputIterator __last);
_LIBCPP_INLINE_VISIBILITY
size_type max_size() const _NOEXCEPT
{
return allocator_traits<__pointer_allocator>::max_size(
__bucket_list_.get_deleter().__alloc());
}
pair<iterator, bool> __node_insert_unique(__node_pointer __nd);
iterator __node_insert_multi(__node_pointer __nd);
iterator __node_insert_multi(const_iterator __p,
__node_pointer __nd);
#ifndef _LIBCPP_CXX03_LANG
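// "try-emplace" style entry point: __k is used only for the hash/equality
// lookup, and a node is constructed from __args only when no element with
// an equivalent key is present, so inserting a duplicate does not allocate.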
template <class _Key, class ..._Args>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __emplace_unique_key_args(_Key const& __k, _Args&&... __args);
template <class... _Args>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __emplace_unique_impl(_Args&&... __args);
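// __emplace_unique(_Pp&&) dispatches on __can_extract_key<_Pp, key_type>:
// when the key is the argument itself (set case) or its .first member (map
// case), the overloads below route through __emplace_unique_key_args so the
// lookup can run before any node is built; the __extract_key_fail_tag
// overload falls back to __emplace_unique_impl, which constructs the node
// before searching.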
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __emplace_unique(_Pp&& __x) {
return __emplace_unique_extract_key(_VSTD::forward<_Pp>(__x),
__can_extract_key<_Pp, key_type>());
}
template <class... _Args>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __emplace_unique(_Args&&... __args) {
return __emplace_unique_impl(_VSTD::forward<_Args>(__args)...);
}
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool>
__emplace_unique_extract_key(_Pp&& __x, __extract_key_fail_tag) {
return __emplace_unique_impl(_VSTD::forward<_Pp>(__x));
}
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool>
__emplace_unique_extract_key(_Pp&& __x, __extract_key_self_tag) {
return __emplace_unique_key_args(__x, _VSTD::forward<_Pp>(__x));
}
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool>
__emplace_unique_extract_key(_Pp&& __x, __extract_key_first_tag) {
return __emplace_unique_key_args(__x.first, _VSTD::forward<_Pp>(__x));
}
template <class... _Args>
_LIBCPP_INLINE_VISIBILITY
iterator __emplace_multi(_Args&&... __args);
template <class... _Args>
_LIBCPP_INLINE_VISIBILITY
iterator __emplace_hint_multi(const_iterator __p, _Args&&... __args);
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool>
__insert_unique(__container_value_type&& __x) {
return __emplace_unique_key_args(_NodeTypes::__get_key(__x), _VSTD::move(__x));
}
template <class _Pp, class = typename enable_if<
!__is_same_uncvref<_Pp, __container_value_type>::value
>::type>
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __insert_unique(_Pp&& __x) {
return __emplace_unique(_VSTD::forward<_Pp>(__x));
}
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
iterator __insert_multi(_Pp&& __x) {
return __emplace_multi(_VSTD::forward<_Pp>(__x));
}
template <class _Pp>
_LIBCPP_INLINE_VISIBILITY
iterator __insert_multi(const_iterator __p, _Pp&& __x) {
return __emplace_hint_multi(__p, _VSTD::forward<_Pp>(__x));
}
#else // _LIBCPP_CXX03_LANG
template <class _Key, class _Args>
pair<iterator, bool> __emplace_unique_key_args(_Key const&, _Args& __args);
iterator __insert_multi(const __container_value_type& __x);
iterator __insert_multi(const_iterator __p, const __container_value_type& __x);
#endif
_LIBCPP_INLINE_VISIBILITY
pair<iterator, bool> __insert_unique(const __container_value_type& __x) {
return __emplace_unique_key_args(_NodeTypes::__get_key(__x), __x);
}
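// Illustrative sketch only (not part of this header): an unordered_map-style
// wrapper holding a member "__table_" of this type would typically forward
// its inserts to the entry points above, e.g.
//
//     pair<iterator, bool> insert(const value_type& __v)
//         { return __table_.__insert_unique(__v); }
//
// The member name "__table_" and the exact forwarding shown are assumptions
// made for illustration.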
void clear() _NOEXCEPT;
void rehash(size_type __n);
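// reserve(__n) is expressed in terms of rehash: it requests enough buckets
// for __n elements to be held without exceeding max_load_factor().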
_LIBCPP_INLINE_VISIBILITY void reserve(size_type __n)
{rehash(static_cast<size_type>(ceil(__n / max_load_factor())));}
_LIBCPP_INLINE_VISIBILITY
size_type bucket_count() const _NOEXCEPT
{
return __bucket_list_.get_deleter().size();
}
_LIBCPP_INLINE_VISIBILITY
iterator begin() _NOEXCEPT;
_LIBCPP_INLINE_VISIBILITY
iterator end() _NOEXCEPT;
_LIBCPP_INLINE_VISIBILITY
const_iterator begin() const _NOEXCEPT;
_LIBCPP_INLINE_VISIBILITY
const_iterator end() const _NOEXCEPT;
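// bucket(__k) maps the hash of __k to an index in [0, bucket_count()) via
// __constrain_hash; calling it while bucket_count() == 0 violates the
// precondition checked by the assertion below.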
template <class _Key>
_LIBCPP_INLINE_VISIBILITY
size_type bucket(const _Key& __k) const
{
_LIBCPP_ASSERT(bucket_count() > 0,
"unordered container::bucket(key) called when bucket_count() == 0");
return __constrain_hash(hash_function()(__k), bucket_count());
}
template <class _Key>
iterator find(const _Key& __x);
template <class _Key>
const_iterator find(const _Key& __x) const;
typedef __hash_node_destructor<__node_allocator> _Dp;
typedef unique_ptr<__node, _Dp> __node_holder;
iterator erase(const_iterator __p);
iterator erase(const_iterator __first, const_iterator __last);
template <class _Key>
size_type __erase_unique(const _Key& __k);
template <class _Key>
size_type __erase_multi(const _Key& __k);
__node_holder remove(const_iterator __p) _NOEXCEPT;
template <class _Key>
_LIBCPP_INLINE_VISIBILITY
size_type __count_unique(const _Key& __k) const;
template <class _Key>
size_type __count_multi(const _Key& __k) const;
template <class _Key>
pair<iterator, iterator>
__equal_range_unique(const _Key& __k);
template <class _Key>
pair<const_iterator, const_iterator>
__equal_range_unique(const _Key& __k) const;
template <class _Key>
pair<iterator, iterator>
__equal_range_multi(const _Key& __k);
template <class _Key>
pair<const_iterator, const_iterator>
__equal_range_multi(const _Key& __k) const;
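// In C++11 mode swap's exception specification also accounts for allocator
// propagation on swap; from C++14 on it depends only on the hasher and the
// key-equality predicate being nothrow-swappable.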
void swap(__hash_table& __u)
#if _LIBCPP_STD_VER <= 11
_NOEXCEPT_(
__is_nothrow_swappable<hasher>::value && __is_nothrow_swappable<key_equal>::value
&& (!allocator_traits<__pointer_allocator>::propagate_on_container_swap::value
|| __is_nothrow_swappable<__pointer_allocator>::value)
&& (!__node_traits::propagate_on_container_swap::value
|| __is_nothrow_swappable<__node_allocator>::value)
);
#else
_NOEXCEPT_(__is_nothrow_swappable<hasher>::value && __is_nothrow_swappable<key_equal>::value);
#endif
_LIBCPP_INLINE_VISIBILITY
size_type max_bucket_count() const _NOEXCEPT
{return __pointer_alloc_traits::max_size(__bucket_list_.get_deleter().__alloc());}
size_type bucket_size(size_type __n) const;
_LIBCPP_INLINE_VISIBILITY float load_factor() const _NOEXCEPT
{
size_type __bc = bucket_count();
return __bc != 0 ? (float)size() / __bc : 0.f;
}
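// Setting a new max load factor never lets it drop below the current
// load_factor(), so the invariant load_factor() <= max_load_factor() holds
// without forcing an immediate rehash.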
_LIBCPP_INLINE_VISIBILITY void max_load_factor(float __mlf) _NOEXCEPT
{
_LIBCPP_ASSERT(__mlf > 0,
"unordered container::max_load_factor(lf) called with lf <= 0");
max_load_factor() = _VSTD::max(__mlf, load_factor());
}
_LIBCPP_INLINE_VISIBILITY
local_iterator
begin(size_type __n)
{
_LIBCPP_ASSERT(__n < bucket_count(),
"unordered container::begin(n) called with n >= bucket_count()");
#if _LIBCPP_DEBUG_LEVEL >= 2
return local_iterator(__bucket_list_[__n], __n, bucket_count(), this);
#else
return local_iterator(__bucket_list_[__n], __n, bucket_count());
#endif
}
_LIBCPP_INLINE_VISIBILITY
local_iterator
end(size_type __n)
{
_LIBCPP_ASSERT(__n < bucket_count(),
"unordered container::end(n) called with n >= bucket_count()");
#if _LIBCPP_DEBUG_LEVEL >= 2
return local_iterator(nullptr, __n, bucket_count(), this);
#else
return local_iterator(nullptr, __n, bucket_count());
#endif
}
_LIBCPP_INLINE_VISIBILITY
const_local_iterator
cbegin(size_type __n) const
{
_LIBCPP_ASSERT(__n < bucket_count(),
"unordered container::cbegin(n) called with n >= bucket_count()");
#if _LIBCPP_DEBUG_LEVEL >= 2
return const_local_iterator(__bucket_list_[__n], __n, bucket_count(), this);
#else
return const_local_iterator(__bucket_list_[__n], __n, bucket_count());
#endif
}
_LIBCPP_INLINE_VISIBILITY
const_local_iterator
cend(size_type __n) const
{
_LIBCPP_ASSERT(__n < bucket_count(),
"unordered container::cend(n) called with n >= bucket_count()");
#if _LIBCPP_DEBUG_LEVEL >= 2
return const_local_iterator(nullptr, __n, bucket_count(), this);
#else
return const_local_iterator(nullptr, __n, bucket_count());
#endif
}
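// At debug level 2 the local iterators also capture the owning container
// (this) so that mismatched-container uses can be diagnosed; otherwise they
// carry only the node pointer for the bucket, the bucket index, and the
// bucket count.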
#if _LIBCPP_DEBUG_LEVEL >= 2
bool __dereferenceable(const const_iterator* __i) const;
bool __decrementable(const const_iterator* __i) const;
bool __addable(const const_iterator* __i, ptrdiff_t __n) const;
bool __subscriptable(const const_iterator* __i, ptrdiff_t __n) const;
#endif // _LIBCPP_DEBUG_LEVEL >= 2
private:
void __rehash(size_type __n);
#ifndef _LIBCPP_CXX03_LANG
template <class ..._Args>
__node_holder __construct_node(_Args&& ...__args);
template <class _First, class ..._Rest>
__node_holder __construct_node_hash(size_t __hash, _First&& __f, _Rest&&... __rest);
#else // _LIBCPP_CXX03_LANG
__node_holder __construct_node(const __container_value_type& __v);
__node_holder __construct_node_hash(size_t __hash, const __container_value_type& __v);
#endif
_LIBCPP_INLINE_VISIBILITY
void __copy_assign_alloc(const __hash_table& __u)
{__copy_assign_alloc(__u, integral_constant<bool,
__node_traits::propagate_on_container_copy_assignment::value>());}
void __copy_assign_alloc(const __hash_table& __u, true_type);
_LIBCPP_INLINE_VISIBILITY
void __copy_assign_alloc(const __hash_table&, false_type) {}
#ifndef _LIBCPP_CXX03_LANG
void __move_assign(__hash_table& __u, false_type);
void __move_assign(__hash_table& __u, true_type)
_NOEXCEPT_(
is_nothrow_move_assignable<__node_allocator>::value &&
is_nothrow_move_assignable<hasher>::value &&
is_nothrow_move_assignable<key_equal>::value);
_LIBCPP_INLINE_VISIBILITY
void __move_assign_alloc(__hash_table& __u)
_NOEXCEPT_(
!__node_traits::propagate_on_container_move_assignment::value ||
(is_nothrow_move_assignable<__pointer_allocator>::value &&
is_nothrow_move_assignable<__node_allocator>::value))
{__move_assign_alloc(__u, integral_constant<bool,
__node_traits::propagate_on_container_move_assignment::value>());}
_LIBCPP_INLINE_VISIBILITY
void __move_assign_alloc(__hash_table& __u, true_type)
_NOEXCEPT_(
is_nothrow_move_assignable<__pointer_allocator>::value &&
is_nothrow_move_assignable<__node_allocator>::value)
{
__bucket_list_.get_deleter().__alloc() =
_VSTD::move(__u.__bucket_list_.get_deleter().__alloc());
__node_alloc() = _VSTD::move(__u.__node_alloc());
}
_LIBCPP_INLINE_VISIBILITY
void __move_assign_alloc(__hash_table&, false_type) _NOEXCEPT {}
#endif // _LIBCPP_CXX03_LANG
void __deallocate(__node_pointer __np) _NOEXCEPT;
__node_pointer __detach() _NOEXCEPT;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_map;
template <class, class, class, class, class> friend class _LIBCPP_TYPE_VIS_ONLY unordered_multimap;
};
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table()
_NOEXCEPT_(
is_nothrow_default_constructible<__bucket_list>::value &&
is_nothrow_default_constructible<__first_node>::value &&
is_nothrow_default_constructible<__node_allocator>::value &&
is_nothrow_default_constructible<hasher>::value &&
is_nothrow_default_constructible<key_equal>::value)
: __p2_(0),
__p3_(1.0f)
{
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(const hasher& __hf,
const key_equal& __eql)
: __bucket_list_(nullptr, __bucket_list_deleter()),
__p1_(),
__p2_(0, __hf),
__p3_(1.0f, __eql)
{
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(const hasher& __hf,
const key_equal& __eql,
const allocator_type& __a)
: __bucket_list_(nullptr, __bucket_list_deleter(__pointer_allocator(__a), 0)),
__p1_(__node_allocator(__a)),
__p2_(0, __hf),
__p3_(1.0f, __eql)
{
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(const allocator_type& __a)
: __bucket_list_(nullptr, __bucket_list_deleter(__pointer_allocator(__a), 0)),
__p1_(__node_allocator(__a)),
__p2_(0),
__p3_(1.0f)
{
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(const __hash_table& __u)
: __bucket_list_(nullptr,
__bucket_list_deleter(allocator_traits<__pointer_allocator>::
select_on_container_copy_construction(
__u.__bucket_list_.get_deleter().__alloc()), 0)),
__p1_(allocator_traits<__node_allocator>::
select_on_container_copy_construction(__u.__node_alloc())),
__p2_(0, __u.hash_function()),
__p3_(__u.__p3_)
{
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(const __hash_table& __u,
const allocator_type& __a)
: __bucket_list_(nullptr, __bucket_list_deleter(__pointer_allocator(__a), 0)),
__p1_(__node_allocator(__a)),
__p2_(0, __u.hash_function()),
__p3_(__u.__p3_)
{
}
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(__hash_table&& __u)
_NOEXCEPT_(
is_nothrow_move_constructible<__bucket_list>::value &&
is_nothrow_move_constructible<__first_node>::value &&
is_nothrow_move_constructible<__node_allocator>::value &&
is_nothrow_move_constructible<hasher>::value &&
is_nothrow_move_constructible<key_equal>::value)
: __bucket_list_(_VSTD::move(__u.__bucket_list_)),
__p1_(_VSTD::move(__u.__p1_)),
__p2_(_VSTD::move(__u.__p2_)),
__p3_(_VSTD::move(__u.__p3_))
{
if (size() > 0)
{
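// The moved-in node list now hangs off this table's before-begin node, so the
// first occupied bucket must be repointed at __p1_.first() of *this rather
// than at the corresponding node of __u.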
__bucket_list_[__constrain_hash(__p1_.first().__next_->__hash_, bucket_count())] =
static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
__u.__p1_.first().__next_ = nullptr;
__u.size() = 0;
}
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__hash_table(__hash_table&& __u,
const allocator_type& __a)
: __bucket_list_(nullptr, __bucket_list_deleter(__pointer_allocator(__a), 0)),
__p1_(__node_allocator(__a)),
__p2_(0, _VSTD::move(__u.hash_function())),
__p3_(_VSTD::move(__u.__p3_))
{
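// The bucket array and node list can only be adopted wholesale when the
// supplied allocator compares equal to __u's; otherwise this constructor
// leaves the table empty and the caller is expected to move the elements
// over individually.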
if (__a == allocator_type(__u.__node_alloc()))
{
__bucket_list_.reset(__u.__bucket_list_.release());
__bucket_list_.get_deleter().size() = __u.__bucket_list_.get_deleter().size();
__u.__bucket_list_.get_deleter().size() = 0;
if (__u.size() > 0)
{
__p1_.first().__next_ = __u.__p1_.first().__next_;
__u.__p1_.first().__next_ = nullptr;
__bucket_list_[__constrain_hash(__p1_.first().__next_->__hash_, bucket_count())] =
static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
size() = __u.size();
__u.size() = 0;
}
}
}
#endif // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::~__hash_table()
{
__deallocate(__p1_.first().__next_);
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__erase_c(this);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__copy_assign_alloc(
const __hash_table& __u, true_type)
{
if (__node_alloc() != __u.__node_alloc())
{
clear();
__bucket_list_.reset();
__bucket_list_.get_deleter().size() = 0;
}
__bucket_list_.get_deleter().__alloc() = __u.__bucket_list_.get_deleter().__alloc();
__node_alloc() = __u.__node_alloc();
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
__hash_table<_Tp, _Hash, _Equal, _Alloc>&
__hash_table<_Tp, _Hash, _Equal, _Alloc>::operator=(const __hash_table& __u)
{
if (this != &__u)
{
__copy_assign_alloc(__u);
hash_function() = __u.hash_function();
key_eq() = __u.key_eq();
max_load_factor() = __u.max_load_factor();
__assign_multi(__u.begin(), __u.end());
}
return *this;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__deallocate(__node_pointer __np)
_NOEXCEPT
{
__node_allocator& __na = __node_alloc();
while (__np != nullptr)
{
__node_pointer __next = __np->__next_;
#if _LIBCPP_DEBUG_LEVEL >= 2
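// Debug mode: invalidate any outstanding iterators that still refer to the
// node about to be destroyed, removing them from the container's tracking list.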
__c_node* __c = __get_db()->__find_c_and_lock(this);
for (__i_node** __p = __c->end_; __p != __c->beg_; )
{
--__p;
iterator* __i = static_cast<iterator*>((*__p)->__i_);
if (__i->__node_ == __np)
{
(*__p)->__c_ = nullptr;
if (--__c->end_ != __p)
memmove(__p, __p+1, (__c->end_ - __p)*sizeof(__i_node*));
}
}
__get_db()->unlock();
#endif
__node_traits::destroy(__na, _NodeTypes::__get_ptr(__np->__value_));
__node_traits::deallocate(__na, __np, 1);
__np = __next;
}
}
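// __detach() unlinks every node from the table: all buckets are cleared and
// size() is reset to zero, and the old node chain is returned so the caller
// can recycle or deallocate those nodes.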
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_pointer
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__detach() _NOEXCEPT
{
size_type __bc = bucket_count();
for (size_type __i = 0; __i < __bc; ++__i)
__bucket_list_[__i] = nullptr;
size() = 0;
__node_pointer __cache = __p1_.first().__next_;
__p1_.first().__next_ = nullptr;
return __cache;
}
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__move_assign(
__hash_table& __u, true_type)
_NOEXCEPT_(
is_nothrow_move_assignable<__node_allocator>::value &&
is_nothrow_move_assignable<hasher>::value &&
is_nothrow_move_assignable<key_equal>::value)
{
clear();
__bucket_list_.reset(__u.__bucket_list_.release());
__bucket_list_.get_deleter().size() = __u.__bucket_list_.get_deleter().size();
__u.__bucket_list_.get_deleter().size() = 0;
__move_assign_alloc(__u);
size() = __u.size();
hash_function() = _VSTD::move(__u.hash_function());
max_load_factor() = __u.max_load_factor();
key_eq() = _VSTD::move(__u.key_eq());
__p1_.first().__next_ = __u.__p1_.first().__next_;
if (size() > 0)
{
__bucket_list_[__constrain_hash(__p1_.first().__next_->__hash_, bucket_count())] =
static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
__u.__p1_.first().__next_ = nullptr;
__u.size() = 0;
}
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->swap(this, &__u);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__move_assign(
__hash_table& __u, false_type)
{
if (__node_alloc() == __u.__node_alloc())
__move_assign(__u, true_type());
else
{
hash_function() = _VSTD::move(__u.hash_function());
key_eq() = _VSTD::move(__u.key_eq());
max_load_factor() = __u.max_load_factor();
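// Reuse this table's existing nodes as a cache: move __u's values into them
// one at a time, relinking each node as it is refilled; leftover cached nodes
// are deallocated, and any remaining elements of __u get freshly built nodes.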
if (bucket_count() != 0)
{
__node_pointer __cache = __detach();
#ifndef _LIBCPP_NO_EXCEPTIONS
try
{
#endif // _LIBCPP_NO_EXCEPTIONS
const_iterator __i = __u.begin();
while (__cache != nullptr && __u.size() != 0)
{
__cache->__value_ = _VSTD::move(__u.remove(__i++)->__value_);
__node_pointer __next = __cache->__next_;
__node_insert_multi(__cache);
__cache = __next;
}
#ifndef _LIBCPP_NO_EXCEPTIONS
}
catch (...)
{
__deallocate(__cache);
throw;
}
#endif // _LIBCPP_NO_EXCEPTIONS
__deallocate(__cache);
}
const_iterator __i = __u.begin();
while (__u.size() != 0)
{
__node_holder __h = __construct_node(_NodeTypes::__move(__u.remove(__i++)->__value_));
__node_insert_multi(__h.get());
__h.release();
}
}
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
__hash_table<_Tp, _Hash, _Equal, _Alloc>&
__hash_table<_Tp, _Hash, _Equal, _Alloc>::operator=(__hash_table&& __u)
_NOEXCEPT_(
__node_traits::propagate_on_container_move_assignment::value &&
is_nothrow_move_assignable<__node_allocator>::value &&
is_nothrow_move_assignable<hasher>::value &&
is_nothrow_move_assignable<key_equal>::value)
2010-05-12 03:42:16 +08:00
{
__move_assign(__u, integral_constant<bool,
__node_traits::propagate_on_container_move_assignment::value>());
return *this;
}
#endif // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _InputIterator>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__assign_unique(_InputIterator __first,
_InputIterator __last)
{
typedef iterator_traits<_InputIterator> _ITraits;
typedef typename _ITraits::value_type _ItValueType;
static_assert((is_same<_ItValueType, __container_value_type>::value),
"__assign_unique may only be called with the containers value type");
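// Recycle the current nodes where possible: detach them into a cache,
// overwrite their values from the input range, and relink them; leftover
// cached nodes are freed and any remaining inputs are inserted normally.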
if (bucket_count() != 0)
{
__node_pointer __cache = __detach();
#ifndef _LIBCPP_NO_EXCEPTIONS
try
{
#endif // _LIBCPP_NO_EXCEPTIONS
for (; __cache != nullptr && __first != __last; ++__first)
{
__cache->__value_ = *__first;
__node_pointer __next = __cache->__next_;
__node_insert_unique(__cache);
__cache = __next;
}
#ifndef _LIBCPP_NO_EXCEPTIONS
}
catch (...)
{
__deallocate(__cache);
throw;
}
#endif // _LIBCPP_NO_EXCEPTIONS
__deallocate(__cache);
}
for (; __first != __last; ++__first)
__insert_unique(*__first);
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _InputIterator>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__assign_multi(_InputIterator __first,
_InputIterator __last)
{
typedef iterator_traits<_InputIterator> _ITraits;
typedef typename _ITraits::value_type _ItValueType;
static_assert((is_same<_ItValueType, __container_value_type>::value ||
is_same<_ItValueType, __node_value_type>::value),
"__assign_multi may only be called with the containers value type"
" or the nodes value type");
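// Same node-recycling strategy as __assign_unique, but duplicate keys are kept.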
if (bucket_count() != 0)
{
__node_pointer __cache = __detach();
#ifndef _LIBCPP_NO_EXCEPTIONS
try
{
#endif // _LIBCPP_NO_EXCEPTIONS
for (; __cache != nullptr && __first != __last; ++__first)
{
__cache->__value_ = *__first;
__node_pointer __next = __cache->__next_;
__node_insert_multi(__cache);
__cache = __next;
}
#ifndef _LIBCPP_NO_EXCEPTIONS
}
catch (...)
{
__deallocate(__cache);
throw;
}
#endif // _LIBCPP_NO_EXCEPTIONS
__deallocate(__cache);
}
for (; __first != __last; ++__first)
__insert_multi(_NodeTypes::__get_value(*__first));
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::begin() _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator(__p1_.first().__next_, this);
#else
return iterator(__p1_.first().__next_);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::end() _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator(nullptr, this);
#else
return iterator(nullptr);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::begin() const _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
return const_iterator(__p1_.first().__next_, this);
#else
return const_iterator(__p1_.first().__next_);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::end() const _NOEXCEPT
{
#if _LIBCPP_DEBUG_LEVEL >= 2
return const_iterator(nullptr, this);
#else
return const_iterator(nullptr);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::clear() _NOEXCEPT
{
if (size() > 0)
{
__deallocate(__p1_.first().__next_);
__p1_.first().__next_ = nullptr;
size_type __bc = bucket_count();
for (size_type __i = 0; __i < __bc; ++__i)
__bucket_list_[__i] = nullptr;
size() = 0;
}
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator, bool>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_insert_unique(__node_pointer __nd)
{
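// Cache the hash in the node, then scan the target bucket for an element
// with an equivalent key; if one is found the new node is not linked in.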
__nd->__hash_ = hash_function()(__nd->__value_);
size_type __bc = bucket_count();
bool __inserted = false;
__node_pointer __ndptr;
size_t __chash;
if (__bc != 0)
{
__chash = __constrain_hash(__nd->__hash_, __bc);
__ndptr = __bucket_list_[__chash];
if (__ndptr != nullptr)
{
for (__ndptr = __ndptr->__next_; __ndptr != nullptr &&
__constrain_hash(__ndptr->__hash_, __bc) == __chash;
__ndptr = __ndptr->__next_)
{
if (key_eq()(__ndptr->__value_, __nd->__value_))
goto __done;
}
}
}
{
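        // Rehash if one more element would exceed max_load_factor(): request
        // roughly double the bucket count (2*__bc when __bc is a power of two,
        // otherwise 2*__bc + 1), or whatever count the load factor demands,
        // whichever is larger.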
if (size()+1 > __bc * max_load_factor() || __bc == 0)
{
rehash(_VSTD::max<size_type>(2 * __bc + !__is_hash_power2(__bc),
size_type(ceil(float(size() + 1) / max_load_factor()))));
__bc = bucket_count();
__chash = __constrain_hash(__nd->__hash_, __bc);
}
// insert_after __bucket_list_[__chash], or __first_node if bucket is null
__node_pointer __pn = __bucket_list_[__chash];
if (__pn == nullptr)
{
__pn = static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
__nd->__next_ = __pn->__next_;
__pn->__next_ = __nd;
// fix up __bucket_list_
__bucket_list_[__chash] = __pn;
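            // __nd is now the head of the node list; if there was an old head,
            // it now follows __nd, so the before-the-first pointer of the old
            // head's bucket must be redirected to __nd.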
if (__nd->__next_ != nullptr)
__bucket_list_[__constrain_hash(__nd->__next_->__hash_, __bc)] = __nd;
}
else
{
__nd->__next_ = __pn->__next_;
__pn->__next_ = __nd;
}
__ndptr = __nd;
// increment size
++size();
__inserted = true;
}
__done:
#if _LIBCPP_DEBUG_LEVEL >= 2
return pair<iterator, bool>(iterator(__ndptr, this), __inserted);
#else
return pair<iterator, bool>(iterator(__ndptr), __inserted);
#endif
}

template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_insert_multi(__node_pointer __cp)
{
__cp->__hash_ = hash_function()(__cp->__value_);
size_type __bc = bucket_count();
if (size()+1 > __bc * max_load_factor() || __bc == 0)
{
rehash(_VSTD::max<size_type>(2 * __bc + !__is_hash_power2(__bc),
size_type(ceil(float(size() + 1) / max_load_factor()))));
__bc = bucket_count();
}
size_t __chash = __constrain_hash(__cp->__hash_, __bc);
__node_pointer __pn = __bucket_list_[__chash];
if (__pn == nullptr)
{
__pn = static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
__cp->__next_ = __pn->__next_;
__pn->__next_ = __cp;
// fix up __bucket_list_
__bucket_list_[__chash] = __pn;
if (__cp->__next_ != nullptr)
__bucket_list_[__constrain_hash(__cp->__next_->__hash_, __bc)] = __cp;
}
else
{
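        // Walk the bucket to find the end of any existing run of elements
        // equivalent to __cp, so that equal keys remain adjacent in the list.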
for (bool __found = false; __pn->__next_ != nullptr &&
__constrain_hash(__pn->__next_->__hash_, __bc) == __chash;
__pn = __pn->__next_)
{
// __found key_eq() action
// false false loop
// true true loop
// false true set __found to true
// true false break
if (__found != (__pn->__next_->__hash_ == __cp->__hash_ &&
key_eq()(__pn->__next_->__value_, __cp->__value_)))
{
if (!__found)
__found = true;
else
break;
}
}
__cp->__next_ = __pn->__next_;
__pn->__next_ = __cp;
if (__cp->__next_ != nullptr)
{
size_t __nhash = __constrain_hash(__cp->__next_->__hash_, __bc);
if (__nhash != __chash)
__bucket_list_[__nhash] = __cp;
}
}
++size();
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator(__cp, this);
#else
return iterator(__cp);
#endif
}

template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_insert_multi(
const_iterator __p, __node_pointer __cp)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__p) == this,
"unordered container::emplace_hint(const_iterator, args...) called with an iterator not"
" referring to this unordered container");
#endif
if (__p != end() && key_eq()(*__p, __cp->__value_))
{
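        // The hint is correct: reuse the hinted node's hash and splice __cp in
        // directly before it, keeping equal keys adjacent.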
__node_pointer __np = __p.__node_;
__cp->__hash_ = __np->__hash_;
size_type __bc = bucket_count();
if (size()+1 > __bc * max_load_factor() || __bc == 0)
{
rehash(_VSTD::max<size_type>(2 * __bc + !__is_hash_power2(__bc),
size_type(ceil(float(size() + 1) / max_load_factor()))));
__bc = bucket_count();
}
size_t __chash = __constrain_hash(__cp->__hash_, __bc);
__node_pointer __pp = __bucket_list_[__chash];
while (__pp->__next_ != __np)
__pp = __pp->__next_;
__cp->__next_ = __np;
__pp->__next_ = __cp;
++size();
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator(__cp, this);
#else
return iterator(__cp);
#endif
}
return __node_insert_multi(__cp);
}
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key, class ..._Args>
_LIBCPP_INLINE_VISIBILITY
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator, bool>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__emplace_unique_key_args(_Key const& __k, _Args&&... __args)
#else
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key, class _Args>
_LIBCPP_INLINE_VISIBILITY
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator, bool>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__emplace_unique_key_args(_Key const& __k, _Args& __args)
#endif
{
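    // Two-phase insert: first search for an equivalent key without allocating;
    // a node is constructed and linked in only if the key is absent.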
size_t __hash = hash_function()(__k);
size_type __bc = bucket_count();
bool __inserted = false;
__node_pointer __nd;
size_t __chash;
if (__bc != 0)
{
__chash = __constrain_hash(__hash, __bc);
__nd = __bucket_list_[__chash];
if (__nd != nullptr)
{
for (__nd = __nd->__next_; __nd != nullptr &&
__constrain_hash(__nd->__hash_, __bc) == __chash;
__nd = __nd->__next_)
{
if (key_eq()(__nd->__value_, __k))
goto __done;
}
}
}
{
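        // No equivalent key was found: construct the node from __args (the
        // hash of __k was already computed above), rehash if needed, and link
        // the new node into its bucket.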
#ifndef _LIBCPP_CXX03_LANG
__node_holder __h = __construct_node_hash(__hash, _VSTD::forward<_Args>(__args)...);
#else
__node_holder __h = __construct_node_hash(__hash, __args);
#endif
if (size()+1 > __bc * max_load_factor() || __bc == 0)
{
rehash(_VSTD::max<size_type>(2 * __bc + !__is_hash_power2(__bc),
size_type(ceil(float(size() + 1) / max_load_factor()))));
__bc = bucket_count();
__chash = __constrain_hash(__hash, __bc);
}
// insert_after __bucket_list_[__chash], or __first_node if bucket is null
__node_pointer __pn = __bucket_list_[__chash];
if (__pn == nullptr)
{
__pn = static_cast<__node_pointer>(static_cast<__void_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first())));
__h->__next_ = __pn->__next_;
__pn->__next_ = __h.get();
// fix up __bucket_list_
__bucket_list_[__chash] = __pn;
if (__h->__next_ != nullptr)
__bucket_list_[__constrain_hash(__h->__next_->__hash_, __bc)] = __h.get();
}
else
{
__h->__next_ = __pn->__next_;
__pn->__next_ = __h.get();
}
__nd = __h.release();
// increment size
++size();
__inserted = true;
}
__done:
#if _LIBCPP_DEBUG_LEVEL >= 2
return pair<iterator, bool>(iterator(__nd, this), __inserted);
#else
return pair<iterator, bool>(iterator(__nd), __inserted);
#endif
}
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class... _Args>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator, bool>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__emplace_unique_impl(_Args&&... __args)
{
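    // Construct the node first, then try the unique insert; if an equivalent
    // key already exists, __h still owns the node and will deallocate it.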
__node_holder __h = __construct_node(_VSTD::forward<_Args>(__args)...);
pair<iterator, bool> __r = __node_insert_unique(__h.get());
if (__r.second)
__h.release();
return __r;
}

template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class... _Args>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__emplace_multi(_Args&&... __args)
{
__node_holder __h = __construct_node(_VSTD::forward<_Args>(__args)...);
iterator __r = __node_insert_multi(__h.get());
__h.release();
return __r;
}

template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class... _Args>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__emplace_hint_multi(
const_iterator __p, _Args&&... __args)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__p) == this,
"unordered container::emplace_hint(const_iterator, args...) called with an iterator not"
" referring to this unordered container");
#endif
__node_holder __h = __construct_node(_VSTD::forward<_Args>(__args)...);
iterator __r = __node_insert_multi(__p, __h.get());
__h.release();
return __r;
}
#else // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__insert_multi(const __container_value_type& __x)
{
__node_holder __h = __construct_node(__x);
iterator __r = __node_insert_multi(__h.get());
__h.release();
return __r;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__insert_multi(const_iterator __p,
const __container_value_type& __x)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__p) == this,
"unordered container::insert(const_iterator, lvalue) called with an iterator not"
" referring to this unordered container");
#endif
__node_holder __h = __construct_node(__x);
iterator __r = __node_insert_multi(__p, __h.get());
__h.release();
return __r;
}
#endif // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::rehash(size_type __n)
{
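    // Bucket-count policy: a request of 1 becomes 2, and any count that is
    // not a power of two is rounded up to the next prime. Power-of-two counts
    // are kept as requested, which lets __constrain_hash mask the hash
    // instead of taking it modulo the bucket count.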
if (__n == 1)
__n = 2;
else if (__n & (__n - 1))
__n = __next_prime(__n);
size_type __bc = bucket_count();
if (__n > __bc)
__rehash(__n);
else if (__n < __bc)
{
__n = _VSTD::max<size_type>
(
__n,
__is_hash_power2(__bc) ? __next_hash_pow2(size_t(ceil(float(size()) / max_load_factor()))) :
__next_prime(size_t(ceil(float(size()) / max_load_factor())))
);
if (__n < __bc)
__rehash(__n);
}
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__rehash(size_type __nbc)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->__invalidate_all(this);
#endif // _LIBCPP_DEBUG_LEVEL >= 2
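    // Only the bucket array is rebuilt; the nodes stay in the single forward
    // list hanging off __p1_. The walk below repoints each bucket at the node
    // preceding the first element that hashes into it.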
__pointer_allocator& __npa = __bucket_list_.get_deleter().__alloc();
__bucket_list_.reset(__nbc > 0 ?
__pointer_alloc_traits::allocate(__npa, __nbc) : nullptr);
__bucket_list_.get_deleter().size() = __nbc;
if (__nbc > 0)
{
for (size_type __i = 0; __i < __nbc; ++__i)
__bucket_list_[__i] = nullptr;
__node_pointer __pp(static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first())));
__node_pointer __cp = __pp->__next_;
if (__cp != nullptr)
{
size_type __chash = __constrain_hash(__cp->__hash_, __nbc);
__bucket_list_[__chash] = __pp;
size_type __phash = __chash;
for (__pp = __cp, __cp = __cp->__next_; __cp != nullptr;
__cp = __pp->__next_)
{
__chash = __constrain_hash(__cp->__hash_, __nbc);
if (__chash == __phash)
__pp = __cp;
else
{
if (__bucket_list_[__chash] == nullptr)
{
__bucket_list_[__chash] = __pp;
__pp = __cp;
__phash = __chash;
}
else
{
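                        // This bucket already has a first node: splice the
                        // run of nodes equivalent to __cp out of the list and
                        // re-link it at the front of the bucket's chain so
                        // that equivalent keys stay adjacent.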
__node_pointer __np = __cp;
for (; __np->__next_ != nullptr &&
key_eq()(__cp->__value_, __np->__next_->__value_);
__np = __np->__next_)
;
__pp->__next_ = __np->__next_;
__np->__next_ = __bucket_list_[__chash]->__next_;
__bucket_list_[__chash]->__next_ = __cp;
}
}
}
}
}
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::find(const _Key& __k)
{
size_t __hash = hash_function()(__k);
size_type __bc = bucket_count();
if (__bc != 0)
{
size_t __chash = __constrain_hash(__hash, __bc);
__node_pointer __nd = __bucket_list_[__chash];
if (__nd != nullptr)
{
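            // __nd is the node preceding the first element of the bucket, so
            // the scan starts at __nd->__next_ and stops once a node's
            // constrained hash belongs to a different bucket.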
for (__nd = __nd->__next_; __nd != nullptr &&
__constrain_hash(__nd->__hash_, __bc) == __chash;
__nd = __nd->__next_)
{
if (key_eq()(__nd->__value_, __k))
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator(__nd, this);
#else
return iterator(__nd);
#endif
}
}
}
return end();
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::find(const _Key& __k) const
{
size_t __hash = hash_function()(__k);
size_type __bc = bucket_count();
if (__bc != 0)
{
size_t __chash = __constrain_hash(__hash, __bc);
__node_const_pointer __nd = __bucket_list_[__chash];
if (__nd != nullptr)
{
for (__nd = __nd->__next_; __nd != nullptr &&
__constrain_hash(__nd->__hash_, __bc) == __chash;
__nd = __nd->__next_)
{
if (key_eq()(__nd->__value_, __k))
#if _LIBCPP_DEBUG_LEVEL >= 2
return const_iterator(__nd, this);
#else
return const_iterator(__nd);
#endif
}
}
}
return end();
}
#ifndef _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class ..._Args>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_holder
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__construct_node(_Args&& ...__args)
{
static_assert(!__is_hash_value_type<_Args...>::value,
"Construct cannot be called with a hash value type");
__node_allocator& __na = __node_alloc();
__node_holder __h(__node_traits::allocate(__na, 1), _Dp(__na));
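    // Two-phase construction: __h owns the raw node; only after the value is
    // constructed below is the deleter told to destroy it as well, so an
    // exception at any point cleans up exactly what was built.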
__node_traits::construct(__na, _NodeTypes::__get_ptr(__h->__value_), _VSTD::forward<_Args>(__args)...);
__h.get_deleter().__value_constructed = true;
__h->__hash_ = hash_function()(__h->__value_);
__h->__next_ = nullptr;
return __h;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _First, class ..._Rest>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_holder
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__construct_node_hash(
size_t __hash, _First&& __f, _Rest&& ...__rest)
{
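    // Same as __construct_node, except the caller passes the hash it has
    // already computed for the key, so the value is not hashed again here.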
static_assert(!__is_hash_value_type<_First, _Rest...>::value,
"Construct cannot be called with a hash value type");
__node_allocator& __na = __node_alloc();
__node_holder __h(__node_traits::allocate(__na, 1), _Dp(__na));
__node_traits::construct(__na, _NodeTypes::__get_ptr(__h->__value_),
_VSTD::forward<_First>(__f),
_VSTD::forward<_Rest>(__rest)...);
__h.get_deleter().__value_constructed = true;
__h->__hash_ = __hash;
__h->__next_ = nullptr;
return __h;
}
#else // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_holder
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__construct_node(const __container_value_type& __v)
{
__node_allocator& __na = __node_alloc();
__node_holder __h(__node_traits::allocate(__na, 1), _Dp(__na));
__node_traits::construct(__na, _NodeTypes::__get_ptr(__h->__value_), __v);
__h.get_deleter().__value_constructed = true;
__h->__hash_ = hash_function()(__h->__value_);
__h->__next_ = nullptr;
return _LIBCPP_EXPLICIT_MOVE(__h); // explicitly moved for C++03
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_holder
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__construct_node_hash(size_t __hash,
const __container_value_type& __v)
{
__node_allocator& __na = __node_alloc();
__node_holder __h(__node_traits::allocate(__na, 1), _Dp(__na));
__node_traits::construct(__na, _NodeTypes::__get_ptr(__h->__value_), __v);
__h.get_deleter().__value_constructed = true;
__h->__hash_ = __hash;
__h->__next_ = nullptr;
return _LIBCPP_EXPLICIT_MOVE(__h); // explicitly moved for C++03
}
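// The __construct_node helpers above follow the same exception-safety pattern:
// allocate a node, construct only the contained value through
// _NodeTypes::__get_ptr, and record __value_constructed so that the
// __node_holder's deleter destroys the partially built value only when it was
// actually constructed, while the node's memory is returned to the allocator
// whenever the holder still owns it.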
#endif // _LIBCPP_CXX03_LANG
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::erase(const_iterator __p)
{
__node_pointer __np = __p.__node_;
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__p) == this,
"unordered container erase(iterator) called with an iterator not"
" referring to this container");
_LIBCPP_ASSERT(__p != end(),
"unordered container erase(iterator) called with a non-dereferenceable iterator");
iterator __r(__np, this);
#else
iterator __r(__np);
#endif
++__r;
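// remove() unlinks the node from the bucket list and returns it in a
// __node_holder; letting that temporary holder expire here destroys the
// stored value and deallocates the node.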
remove(__p);
return __r;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator
__hash_table<_Tp, _Hash, _Equal, _Alloc>::erase(const_iterator __first,
const_iterator __last)
{
#if _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__first) == this,
"unodered container::erase(iterator, iterator) called with an iterator not"
" referring to this unodered container");
_LIBCPP_ASSERT(__get_const_db()->__find_c_from_i(&__last) == this,
"unodered container::erase(iterator, iterator) called with an iterator not"
" referring to this unodered container");
#endif
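// Advance __first before erasing so that erase(__p) never invalidates the
// iterator used to continue the walk; each pass removes exactly one element.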
for (const_iterator __p = __first; __first != __last; __p = __first)
{
++__first;
erase(__p);
}
__node_pointer __np = __last.__node_;
#if _LIBCPP_DEBUG_LEVEL >= 2
return iterator (__np, this);
#else
return iterator (__np);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::size_type
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__erase_unique(const _Key& __k)
{
iterator __i = find(__k);
if (__i == end())
return 0;
erase(__i);
return 1;
}
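// The *_multi members below rely on the container invariant that elements with
// equivalent keys are kept adjacent in the singly linked list, so a single
// forward scan from find(__k) visits the whole group. For example, in a
// hypothetical multi-container holding keys {1, 1, 2}, __count_multi(1) walks
// forward until key_eq() first fails and returns 2.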
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::size_type
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__erase_multi(const _Key& __k)
{
size_type __r = 0;
iterator __i = find(__k);
if (__i != end())
{
iterator __e = end();
do
{
erase(__i++);
++__r;
} while (__i != __e && key_eq()(*__i, __k));
}
return __r;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::__node_holder
__hash_table<_Tp, _Hash, _Equal, _Alloc>::remove(const_iterator __p) _NOEXCEPT
{
// current node
__node_pointer __cn = __p.__node_;
size_type __bc = bucket_count();
size_t __chash = __constrain_hash(__cn->__hash_, __bc);
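// __bucket_list_[__chash] stores a pointer to the node *preceding* the first
// element of this bucket (for the first non-empty bucket that predecessor is
// the container's before-begin node), so following __next_ pointers from it
// necessarily reaches __cn.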
// find previous node
__node_pointer __pn = __bucket_list_[__chash];
for (; __pn->__next_ != __cn; __pn = __pn->__next_)
;
// Fix up __bucket_list_
// if __pn is not in same bucket (before begin is not in same bucket) &&
// if __cn->__next_ is not in same bucket (nullptr is not in same bucket)
if (__pn == static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()))
|| __constrain_hash(__pn->__hash_, __bc) != __chash)
{
if (__cn->__next_ == nullptr || __constrain_hash(__cn->__next_->__hash_, __bc) != __chash)
__bucket_list_[__chash] = nullptr;
}
// if __cn->__next_ is not in same bucket (nullptr is in same bucket)
if (__cn->__next_ != nullptr)
{
size_t __nhash = __constrain_hash(__cn->__next_->__hash_, __bc);
if (__nhash != __chash)
__bucket_list_[__nhash] = __pn;
}
// remove __cn
__pn->__next_ = __cn->__next_;
__cn->__next_ = nullptr;
--size();
#if _LIBCPP_DEBUG_LEVEL >= 2
__c_node* __c = __get_db()->__find_c_and_lock(this);
for (__i_node** __p = __c->end_; __p != __c->beg_; )
{
--__p;
iterator* __i = static_cast<iterator*>((*__p)->__i_);
if (__i->__node_ == __cn)
{
(*__p)->__c_ = nullptr;
if (--__c->end_ != __p)
memmove(__p, __p+1, (__c->end_ - __p)*sizeof(__i_node*));
}
}
__get_db()->unlock();
#endif
return __node_holder(__cn, _Dp(__node_alloc(), true));
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
inline
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::size_type
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__count_unique(const _Key& __k) const
{
return static_cast<size_type>(find(__k) != end());
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::size_type
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__count_multi(const _Key& __k) const
{
size_type __r = 0;
const_iterator __i = find(__k);
if (__i != end())
{
const_iterator __e = end();
do
{
++__i;
++__r;
} while (__i != __e && key_eq()(*__i, __k));
}
return __r;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator,
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__equal_range_unique(
const _Key& __k)
{
iterator __i = find(__k);
iterator __j = __i;
if (__i != end())
++__j;
return pair<iterator, iterator>(__i, __j);
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator,
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__equal_range_unique(
const _Key& __k) const
{
const_iterator __i = find(__k);
const_iterator __j = __i;
if (__i != end())
++__j;
return pair<const_iterator, const_iterator>(__i, __j);
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator,
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::iterator>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__equal_range_multi(
const _Key& __k)
{
iterator __i = find(__k);
iterator __j = __i;
if (__i != end())
{
iterator __e = end();
do
{
++__j;
} while (__j != __e && key_eq()(*__j, __k));
}
return pair<iterator, iterator>(__i, __j);
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
template <class _Key>
pair<typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator,
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::const_iterator>
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__equal_range_multi(
const _Key& __k) const
{
const_iterator __i = find(__k);
const_iterator __j = __i;
if (__i != end())
{
const_iterator __e = end();
do
{
++__j;
} while (__j != __e && key_eq()(*__j, __k));
}
return pair<const_iterator, const_iterator>(__i, __j);
}
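// For C++14 and later the noexcept specification of swap below is reduced to
// the hasher and key-equality predicate being nothrow swappable; the allocator
// terms are dropped on the assumption that allocators are only exchanged when
// propagate_on_container_swap is true and that such exchanges do not throw.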
template <class _Tp, class _Hash, class _Equal, class _Alloc>
void
__hash_table<_Tp, _Hash, _Equal, _Alloc>::swap(__hash_table& __u)
#if _LIBCPP_STD_VER <= 11
_NOEXCEPT_(
__is_nothrow_swappable<hasher>::value && __is_nothrow_swappable<key_equal>::value
&& (!allocator_traits<__pointer_allocator>::propagate_on_container_swap::value
|| __is_nothrow_swappable<__pointer_allocator>::value)
&& (!__node_traits::propagate_on_container_swap::value
|| __is_nothrow_swappable<__node_allocator>::value)
)
#else
_NOEXCEPT_(__is_nothrow_swappable<hasher>::value && __is_nothrow_swappable<key_equal>::value)
#endif
{
{
__node_pointer_pointer __npp = __bucket_list_.release();
__bucket_list_.reset(__u.__bucket_list_.release());
__u.__bucket_list_.reset(__npp);
}
_VSTD::swap(__bucket_list_.get_deleter().size(), __u.__bucket_list_.get_deleter().size());
__swap_allocator(__bucket_list_.get_deleter().__alloc(),
__u.__bucket_list_.get_deleter().__alloc());
__swap_allocator(__node_alloc(), __u.__node_alloc());
_VSTD::swap(__p1_.first().__next_, __u.__p1_.first().__next_);
__p2_.swap(__u.__p2_);
__p3_.swap(__u.__p3_);
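// The before-begin node (__p1_.first()) is embedded in each container and is
// not exchanged above, so the bucket that owns the first element must be
// re-pointed at this container's own before-begin node (and likewise for __u).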
if (size() > 0)
__bucket_list_[__constrain_hash(__p1_.first().__next_->__hash_, bucket_count())] =
static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__p1_.first()));
if (__u.size() > 0)
__u.__bucket_list_[__constrain_hash(__u.__p1_.first().__next_->__hash_, __u.bucket_count())] =
static_cast<__node_pointer>(pointer_traits<__node_base_pointer>::pointer_to(__u.__p1_.first()));
#if _LIBCPP_DEBUG_LEVEL >= 2
__get_db()->swap(this, &__u);
#endif
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
typename __hash_table<_Tp, _Hash, _Equal, _Alloc>::size_type
__hash_table<_Tp, _Hash, _Equal, _Alloc>::bucket_size(size_type __n) const
{
_LIBCPP_ASSERT(__n < bucket_count(),
"unordered container::bucket_size(n) called with n >= bucket_count()");
__node_const_pointer __np = __bucket_list_[__n];
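// When non-null, __np is the node just before the first element of bucket __n,
// so counting starts at __np->__next_ and stops at the end of the chain or at
// the first node that hashes to a different bucket.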
size_type __bc = bucket_count();
size_type __r = 0;
if (__np != nullptr)
{
for (__np = __np->__next_; __np != nullptr &&
__constrain_hash(__np->__hash_, __bc) == __n;
__np = __np->__next_, ++__r)
;
}
return __r;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
inline _LIBCPP_INLINE_VISIBILITY
void
swap(__hash_table<_Tp, _Hash, _Equal, _Alloc>& __x,
__hash_table<_Tp, _Hash, _Equal, _Alloc>& __y)
_NOEXCEPT_(_NOEXCEPT_(__x.swap(__y)))
{
__x.swap(__y);
}
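// A minimal usage sketch (assuming a map-like wrapper over this table; the
// std::unordered_map spelling below is illustrative, not part of this header):
//
//     std::unordered_map<int, int> a{{1, 10}}, b{{2, 20}};
//     a.swap(b); // forwards to __hash_table::swap, which exchanges the bucket
//                // array, allocators (as permitted), hash/equality functors,
//                // and size/max_load_factor state, then fixes up the bucket
//                // that owns each container's first node.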
#if _LIBCPP_DEBUG_LEVEL >= 2
template <class _Tp, class _Hash, class _Equal, class _Alloc>
bool
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__dereferenceable(const const_iterator* __i) const
{
return __i->__node_ != nullptr;
}
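// Hash table iterators are forward iterators: they cannot be decremented,
// advanced by an arbitrary offset, or subscripted, so the remaining debug-mode
// hooks unconditionally report those operations as invalid.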
template <class _Tp, class _Hash, class _Equal, class _Alloc>
bool
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__decrementable(const const_iterator*) const
{
return false;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
bool
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__addable(const const_iterator*, ptrdiff_t) const
{
return false;
}
template <class _Tp, class _Hash, class _Equal, class _Alloc>
bool
__hash_table<_Tp, _Hash, _Equal, _Alloc>::__subscriptable(const const_iterator*, ptrdiff_t) const
{
return false;
}
#endif // _LIBCPP_DEBUG_LEVEL >= 2
_LIBCPP_END_NAMESPACE_STD
#endif // _LIBCPP__HASH_TABLE