Commit Graph

70 Commits

Author SHA1 Message Date
Mark Rousskov 02f1930595 step cfgs 2024-03-20 08:49:13 -04:00
The 8472 be33586adc fix unsoundness in Step::forward_unchecked for signed integers 2024-03-14 00:57:02 +01:00
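A hedged sketch of the issue behind the fix above: with signed item types, stepping forward via a plain signed `unchecked_add` can overflow even when the final result is in range. One sound formulation (illustrative only, not the exact std code) routes the addition through the unsigned representation:

```rust
// Illustrative, not std's exact implementation. The caller guarantees
// that `start + n` is in range for i8.
unsafe fn forward_unchecked_i8(start: i8, n: usize) -> i8 {
    // Two's complement: reinterpret as unsigned, wrapping-add, reinterpret
    // back. This yields the right value whenever the result is in range,
    // and never performs a signed addition that could overflow.
    (start as u8).wrapping_add(n as u8) as i8
}
```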
bors b0d3e04ca9 Auto merge of #120393 - Urgau:rfc3373-non-local-defs, r=WaffleLapkin
Implement RFC 3373: Avoid non-local definitions in functions

This PR implements [RFC 3373: Avoid non-local definitions in functions](https://github.com/rust-lang/rust/issues/120363).
2024-02-25 19:11:06 +00:00
Urgau 1b733558bf Allow newly added non_local_definitions in std 2024-02-17 13:59:46 +01:00
Josh Stone 974bc455ee Specialize flattening iterators with only one inner item
For iterators like `Once` and `option::IntoIter` that only ever have a
single item at most, the front and back iterator states in `FlatMap` and
`Flatten` are a waste, as they're always consumed already. We can use
specialization for these types to simplify the iterator methods.

It's a somewhat common pattern to use `flatten()` for options and
results, even recommended by [multiple][1] [clippy][2] [lints][3]. The
implementation is more efficient with `filter_map`, as mentioned in
[clippy#9377], but this new specialization should close some of that
gap for existing code that flattens.

[1]: https://rust-lang.github.io/rust-clippy/master/#filter_map_identity
[2]: https://rust-lang.github.io/rust-clippy/master/#option_filter_map
[3]: https://rust-lang.github.io/rust-clippy/master/#result_filter_map
[clippy#9377]: https://github.com/rust-lang/rust-clippy/issues/9377
2024-02-16 13:49:29 -08:00
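A hypothetical sketch of the specialization idea (the marker-trait name and bounds are illustrative; std's internal trait differs): for inner iterators known to yield at most one item, flattening degenerates into `filter_map`, with no front/back state to manage.

```rust
// Hypothetical marker for iterators that yield at most one item.
trait AtMostOne: Iterator {}

impl<T> AtMostOne for std::option::IntoIter<T> {}
impl<T> AtMostOne for std::iter::Once<T> {}

// With that guarantee, flattening needs no saved front/back inner state:
fn flatten_oneshot<I, U>(outer: I) -> impl Iterator<Item = U::Item>
where
    I: Iterator<Item = U>,
    U: AtMostOne,
{
    // Each inner iterator holds at most one item, so flattening is just
    // pulling that single item out of each, as the filter_map lints suggest.
    outer.filter_map(|mut inner| inner.next())
}

// e.g. flatten_oneshot([Some(1), None, Some(3)].into_iter().map(Option::into_iter))
// yields 1, 3.
```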
Markus Reiter a90cc05233 Replace `NonZero::<_>::new` with `NonZero::new`. 2024-02-15 08:09:42 +01:00
Markus Reiter 746a58d435 Use generic `NonZero` internally. 2024-02-15 08:09:42 +01:00
bors fa404339c9 Auto merge of #85528 - the8472:iter-markers, r=dtolnay
Implement iterator specialization traits on more adapters

This adds

* `TrustedLen` to `Skip` and `StepBy`
* `TrustedRandomAccess` to `Skip`
* `InPlaceIterable` and `SourceIter` to `Copied` and `Cloned`

The first two might improve performance in the compiler itself since `skip` is used in several places. Scenarios that would exercise the last point are probably rare, since they would require an owning iterator that has references as items somewhere in its iterator pipeline.

Improvements for `Skip`:

```
# old
test iter::bench_skip_trusted_random_access                     ... bench:       8,335 ns/iter (+/- 90)

# new
test iter::bench_skip_trusted_random_access                     ... bench:       2,753 ns/iter (+/- 27)
```
2024-01-21 11:17:46 +00:00
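For context, a sketch of what the `TrustedLen` marker buys (nightly-only `trusted_len` feature; the helper below is illustrative): it is an unsafe promise that `size_hint()` is exact, which lets consumers such as `Vec::extend` reserve exactly once.

```rust
#![feature(trusted_len)]
use std::iter::TrustedLen;

// Illustrative helper: for TrustedLen iterators the upper bound of
// size_hint() is trusted to be exact (None meaning more than usize::MAX).
fn exact_len<I: TrustedLen>(iter: I) -> Option<usize> {
    iter.size_hint().1
}

fn main() {
    // With the impls from this merge, `skip` preserves the guarantee:
    assert_eq!(exact_len((0..100).skip(10)), Some(90));
}
```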
Nadrieril e8d1c2ef9c Rollup merge of #118811 - EbbDrop:is-sorted-by-bool, r=Mark-Simulacrum
Use `bool` instead of `PartialOrd` as the return value of the comparison closure in `{slice,Iterator}::is_sorted_by`

Changes the function signature of the closure given to `{slice,Iterator}::is_sorted_by` to return a `bool` instead of a `PartialOrd`, as suggested by the libs-api team here: https://github.com/rust-lang/rust/issues/53485#issuecomment-1766411980.

This means these functions now return `true` if the closure returns `true` for every consecutive pair of values.
2024-01-21 06:38:35 +01:00
EbbDrop 606eeb84ad Use bool instead of PartialOrd in is_sorted_by 2024-01-20 21:38:34 +01:00
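A usage sketch of the new shape (the closure returns `bool`; `is_sorted_by` has since been stabilized, but was feature-gated at the time of these commits):

```rust
fn main() {
    let v = [1, 2, 2, 9];
    // true iff every consecutive pair satisfies the closure:
    assert!(v.is_sorted_by(|a, b| a <= b));
    assert!(!v.is_sorted_by(|a, b| a < b)); // the equal pair (2, 2) fails

    // Same shape on iterators:
    assert!(v.iter().is_sorted_by(|a, b| a <= b));
}
```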
klensy aa696c5a22 apply fmt 2024-01-11 15:04:48 +03:00
The8472 a2a7caacf7 implement TrustedLen for StepBy 2024-01-10 18:55:34 +01:00
surechen 40ae34194c remove redundant imports
Detects redundant imports that can be eliminated.

For #117772:

To facilitate review and modification, the checking code and the code that
removes redundant imports are split into two PRs.
2023-12-10 10:56:22 +08:00
The 8472 b018ad3d41 optimize zipping over array iterators 2023-10-06 18:33:25 +02:00
ltdk ef3305449b Implement Step for AsciiChar 2023-08-14 01:34:47 -04:00
Frank King 97c953f561 Add Iterator::map_windows
This is inherited from the old PR.

This method returns an iterator over mapped windows of the starting
iterator. Adding the more straightforward `Iterator::windows` is not
easily possible right now as the items are stored in the iterator type,
meaning the `next` call would return references to `self`. This is not
allowed by the current `Iterator` trait design. This might change once
GATs have landed.

The idea has been brought up by @m-ou-se here:
https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Iterator.3A.3A.7Bpairwise.2C.20windows.7D/near/224587771

Co-authored-by: Lukas Kalbertodt <lukas.kalbertodt@gmail.com>
2023-08-11 07:26:51 +08:00
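A usage sketch (nightly `iter_map_windows` feature): the window size is inferred from the closure's array pattern.

```rust
#![feature(iter_map_windows)]

fn main() {
    // Each overlapping window of two chars is mapped to a new value.
    let pairs: Vec<String> = "abcd"
        .chars()
        .map_windows(|[a, b]| format!("{a}{b}"))
        .collect();
    assert_eq!(pairs, ["ab", "bc", "cd"]);
}
```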
bors a5e2eca40e Auto merge of #112699 - bluebear94:mf/more-is-sorted-tests, r=cuviper
Add more comprehensive tests for is_sorted and friends

See #53485 and #55045.
2023-07-21 23:25:04 +00:00
The 8472 070ce235f2 Specialize StepBy<Range<{integer}>>
For ranges over item types narrower than usize, we determine
the number of items StepBy would yield and then store that in
the range.end instead of the actual end. This significantly
simplifies calculation of the loop induction variable,
especially in cases where StepBy::step (a usize)
could overflow the Range's item type.
2023-06-23 00:17:34 +02:00
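A conceptual sketch of the transformation (illustrative, not std's exact code): precompute the item count in `usize` and drive the loop with it, reconstructing each value, so the induction variable can never overflow the narrow item type.

```rust
// Illustrative model of (start..end).step_by(step) for a u8 range.
fn step_by_u8(start: u8, end: u8, step: usize) -> impl Iterator<Item = u8> {
    assert!(step >= 1);
    let span = (end as usize).saturating_sub(start as usize);
    let n = span.div_ceil(step); // exact number of items yielded
    // The loop counts 0..n in usize; each yielded value stays in range
    // because (n - 1) * step < span <= u8::MAX.
    (0..n).map(move |i| start + (i * step) as u8)
}

// step_by_u8(0, 250, 100) yields 0, 100, 200.
```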
+merlan #flirora c2e4e981b3 Add more comprehensive tests for is_sorted and friends
See #53485 and #55045.
2023-06-16 03:04:34 -04:00
Deadbeef 04a5d61161 Revert "Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range."
This reverts commit 8a9d6bf4fd.
2023-04-08 08:18:29 +00:00
The 8472 e29b27b4a4 replace advance_by returning usize with Result<(), NonZeroUsize> 2023-03-27 16:03:14 +02:00
The 8472 69db91b8b2 Change advance(_back)_by to return `usize` instead of `Result<(), usize>`
A successful advance is now signalled by returning `0`, and other values now represent the remaining number
of steps that could not be advanced, as opposed to the number of steps that were advanced during a partial advance_by.

This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building-block for other iterator methods and adapters or whether
we also see uses by users where `Result` might be more useful.
2023-03-27 14:11:49 +02:00
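A usage sketch of the shape these two `advance_by` commits converge on (nightly `iter_advance_by` feature; `NonZeroUsize` is nowadays an alias for the generic `NonZero<usize>`):

```rust
#![feature(iter_advance_by)]
use std::num::NonZeroUsize;

fn main() {
    let mut iter = [1, 2, 3].into_iter();
    // Fully advanced: Ok(())
    assert_eq!(iter.advance_by(2), Ok(()));
    assert_eq!(iter.next(), Some(3));
    // Nothing left: Err carries how many of the requested steps remained.
    assert_eq!(iter.advance_by(4), Err(NonZeroUsize::new(4).unwrap()));
}
```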
onestacked 8a9d6bf4fd Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range. 2023-03-18 09:17:37 +01:00
est31 999405059c Match unmatched backticks in library/ 2023-03-03 03:03:29 +01:00
bors 2d91939bb7 Auto merge of #107634 - scottmcm:array-drain, r=thomcc
Improve the `array::map` codegen

The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.

There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.

(No libs-api changes; everything should be completely implementation-detail-internal.)

It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before.  And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞

r? `@thomcc`

As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
  %array.i.i.i.i = alloca [64 x i32], align 4
  %_3.sroa.5.i.i.i = alloca [65 x i32], align 4
  %_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)

But with this PR it's only `sub rsp, 520`
```llvm
start:
  %array.i.i.i.i.i.i = alloca [64 x i32], align 4
  %array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```

Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
        mov     esi, 64
        mov     edi, 0
        cmp     rdx, 64
        je      .LBB0_3
        lea     rsi, [rdx + 1]
        mov     qword ptr [rsp + 784], rsi
        mov     r8d, dword ptr [rsp + 4*rdx + 528]
        mov     edi, 1
        lea     edx, [r8 + 2*r8]
        lea     r8d, [r8 + 4*rdx]
        add     r8d, 7
.LBB0_3:
        test    edi, edi
        je      .LBB0_11
        mov     dword ptr [rsp + 4*rcx + 272], r8d
        cmp     rsi, 64
        jne     .LBB0_6
        xor     r8d, r8d
        mov     edx, 64
        test    r8d, r8d
        jne     .LBB0_8
        jmp     .LBB0_11
.LBB0_6:
        lea     rdx, [rsi + 1]
        mov     qword ptr [rsp + 784], rdx
        mov     edi, dword ptr [rsp + 4*rsi + 528]
        mov     r8d, 1
        lea     esi, [rdi + 2*rdi]
        lea     edi, [rdi + 4*rsi]
        add     edi, 7
        test    r8d, r8d
        je      .LBB0_11
.LBB0_8:
        mov     dword ptr [rsp + 4*rcx + 276], edi
        add     rcx, 2
        cmp     rcx, 64
        jne     .LBB0_1
```

whereas with this PR it's unrolled and vectorized
```nasm
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 64]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 328], ymm1
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 96]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
2023-02-13 10:18:48 +00:00
Scott McMurray 5bc328fdef Allow canonicalizing the `array::map` loop in trusted cases 2023-02-04 16:44:51 -08:00
Lukas Markeffsky 76e216f29b Use associated items of `char` instead of freestanding items in `core::char` 2023-01-14 11:58:41 +01:00
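A small sketch of the style this commit switches to:

```rust
fn main() {
    // Associated items on `char`...
    assert_eq!(char::from_u32(0x1F980), Some('🦀'));
    assert_eq!(char::MAX, '\u{10FFFF}');
    // ...instead of the freestanding `core::char::from_u32` / `core::char::MAX`.
}
```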
Scott McMurray 9d68a1a74c Tune RepeatWith::try_fold and Take::for_each and Vec::extend_trusted 2022-11-24 19:14:19 -08:00
Scott McMurray d62b903892 `VecDeque::resize` should re-use the buffer in the passed-in element
Today it always copies it for *every* appended element, but one of those clones is avoidable.
2022-11-15 00:53:26 -08:00
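A conceptual sketch of the saving (illustrative, not the actual `VecDeque` internals): when growing by `n` slots, clone the value for the first `n - 1` and move it into the last.

```rust
use std::collections::VecDeque;

// Illustrative model of the resize growth path.
fn resize_with_move<T: Clone>(dq: &mut VecDeque<T>, new_len: usize, value: T) {
    let len = dq.len();
    if new_len > len {
        for _ in 1..(new_len - len) {
            dq.push_back(value.clone());
        }
        dq.push_back(value); // the last slot re-uses the passed-in element
    } else {
        dq.truncate(new_len);
    }
}
```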
The 8472 43c353fff7 simplification: do not process the ArrayChunks remainder in fold() 2022-11-07 21:44:25 +01:00
Matthias Krüger 6deca5f067 Rollup merge of #100220 - scottmcm:fix-by-ref-sized, r=joshtriplett
Properly forward `ByRefSized::fold` to the inner iterator

cc ``@timvermeulen``, who noticed this mistake in https://github.com/rust-lang/rust/pull/100214#issuecomment-1207317625
2022-08-24 18:20:08 +02:00
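A simplified sketch of the fix's shape (not std's exact code): forward `fold` to the inner iterator's internal iteration instead of letting the default `next` loop run. Since `I::fold` needs `I` by value, the forwarding goes through `try_fold`, which only needs `&mut I`.

```rust
use std::convert::Infallible;

struct ByRefSized<'a, I>(&'a mut I);

impl<I: Iterator> Iterator for ByRefSized<'_, I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        self.0.next()
    }

    // Forward internal iteration to the inner iterator.
    fn fold<B, F>(self, init: B, mut f: F) -> B
    where
        F: FnMut(B, Self::Item) -> B,
    {
        // `I::fold` would need the iterator by value; `try_fold` only
        // needs `&mut I`, and an Infallible error type makes it total.
        self.0
            .try_fold(init, |acc, x| Ok::<B, Infallible>(f(acc, x)))
            .unwrap()
    }
}
```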
bors 6c943bad02 Auto merge of #99541 - timvermeulen:flatten_cleanup, r=the8472
Refactor iteration logic in the `Flatten` and `FlatMap` iterators

The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.

I have extracted this common logic of iterating the inner iterators, with the option to exit early, into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }

    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).

This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).

Benchmark results:
```
                                             before                after
bench_flat_map_sum                       423,255 ns/iter      414,338 ns/iter
bench_flat_map_ref_sum                 1,942,139 ns/iter    2,216,643 ns/iter
bench_flat_map_chain_sum               1,616,840 ns/iter    1,246,445 ns/iter
bench_flat_map_chain_ref_sum           4,348,110 ns/iter    3,574,775 ns/iter
bench_flat_map_chain_option_sum          780,037 ns/iter      780,679 ns/iter
bench_flat_map_chain_option_ref_sum    2,056,458 ns/iter      834,932 ns/iter
```

I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
2022-08-19 02:34:30 +00:00
Scott McMurray 7680c8b690 Properly forward `ByRefSized::fold` to the inner iterator 2022-08-14 22:55:30 -07:00
austinabell 00bc9e8ac4 fix(iter::skip): Optimize `next` and `nth` implementations of `Skip` 2022-08-14 13:25:13 -04:00
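A simplified sketch of the optimization idea (illustrative, not std's exact code): perform the deferred skip with a single `nth` call on first use, so the inner iterator's own optimized `nth` does the work instead of a `next` loop.

```rust
struct Skip<I> {
    iter: I,
    n: usize,
}

impl<I: Iterator> Iterator for Skip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            // nth(n) discards n items and returns the one after them.
            let n = std::mem::take(&mut self.n);
            self.iter.nth(n)
        } else {
            self.iter.next()
        }
    }
}
```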
Tim Vermeulen 3f7004920c Move `fold` logic to `iter_fold` method and reuse it in `count` and `last` 2022-08-05 03:43:39 +02:00
Maybe Waffle 4db628a801 Remove incorrect impl `TrustedLen` for `ArrayChunks`
As explained in the review of the previous attempt to add `ArrayChunks`,
adapters that shrink the length can't implement `TrustedLen`.
2022-08-01 19:16:24 +04:00
Ross MacArthur f5485181ca Use `array::IntoIter` for the `ArrayChunks` remainder 2022-08-01 16:39:30 +04:00
Ross MacArthur ca3d1010bb Add `Iterator::array_chunks()` 2022-08-01 16:39:27 +04:00
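A usage sketch (nightly `iter_array_chunks` feature), mirroring the std docs:

```rust
#![feature(iter_array_chunks)]

fn main() {
    let mut chunks = "lorem".chars().array_chunks::<2>();
    assert_eq!(chunks.next(), Some(['l', 'o']));
    assert_eq!(chunks.next(), Some(['r', 'e']));
    assert_eq!(chunks.next(), None);
    // The leftover element comes back via the array::IntoIter remainder.
    assert_eq!(chunks.into_remainder().unwrap().as_slice(), &['m']);
}
```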
Tim Vermeulen e52837c362 Add note to test about `Unfuse` 2022-07-18 21:53:35 +02:00
Tim Vermeulen 50c612faef Fix `Skip::next` for non-fused inner iterators 2022-07-18 21:10:47 +02:00
Ross MacArthur bbdff1fff4 Add `Iterator::next_chunk` 2022-06-21 08:57:02 +02:00
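A usage sketch (nightly `iter_next_chunk` feature): the chunk size is a const generic, inferred where possible.

```rust
#![feature(iter_next_chunk)]

fn main() {
    let mut iter = "lorem".chars();
    assert_eq!(iter.next_chunk().unwrap(), ['l', 'o']);      // N inferred as 2
    assert_eq!(iter.next_chunk().unwrap(), ['r', 'e', 'm']); // N inferred as 3
    // Too few items left: Err is an iterator over the remainder.
    assert_eq!(iter.next_chunk::<4>().unwrap_err().as_slice(), &[]);
}
```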
est31 cdb8e64bc7 Use Box::new() instead of box syntax in core tests 2022-05-29 01:44:11 +02:00
Matthias Krüger c183d4a510 Rollup merge of #94115 - scottmcm:iter-process-by-ref, r=yaahc
Let `try_collect` take advantage of `try_fold` overrides

No public API changes.

With this change, `try_collect` (#94047) is no longer going through the `impl Iterator for &mut impl Iterator`, and thus will be able to use `try_fold` overrides instead of being forced through `next` for every element.

Here's the test added, to see that it fails before this PR (once a new enough nightly is out): https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=462f2896f2fed2c238ee63ca1a7e7c56

This might as well go to the same person as my last `try_process` PR (#93572), so
r? ``@yaahc``
2022-03-18 21:50:44 +01:00
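A usage sketch of `try_collect` itself (nightly `iterator_try_collect` feature):

```rust
#![feature(iterator_try_collect)]

fn main() {
    let attempts = vec![Some(1), Some(2), Some(3)];
    // Short-circuits on the first None; with this change, internal
    // iteration (try_fold overrides) is used rather than per-element next().
    let collected: Option<Vec<i32>> = attempts.into_iter().try_collect();
    assert_eq!(collected, Some(vec![1, 2, 3]));
}
```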
bors 21b0325c68 Auto merge of #94738 - Urgau:rustbuild-check-cfg-values, r=Mark-Simulacrum
Enable conditional checking of values in the Rust codebase

This pull request enables conditional checking of (well known) values in the Rust codebase.

Well known values were added in https://github.com/rust-lang/rust/pull/94362. All the `target_*` values are taken from all the built-in targets, which is why some extra values needed to be added, as they are not (yet?) defined in any built-in target.

r? `@Mark-Simulacrum`
2022-03-13 18:34:00 +00:00
Scott McMurray 7ef74bc8b9 Let `try_collect` take advantage of `try_fold` overrides
Without this change it's going through `&mut impl Iterator`, which supports `?Sized` and thus currently can't forward the generic methods.

Here's the test added, to see that it fails before this PR (once a new enough nightly is out): https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=462f2896f2fed2c238ee63ca1a7e7c56
2022-03-10 00:16:06 -08:00
Loïc BRANSTETT e3ea59ada5 Remove unexpected #[cfg(target_pointer_width = "8")] in tests 2022-03-09 00:30:17 +01:00
fren_gor 04b3162764 Add collect_into 2022-02-20 01:57:32 +01:00
Arthur Lafrance 47d5196a00 Add a `try_collect()` helper method to `Iterator`
Tweaked `try_collect()` to accept more `Try` types

Updated feature attribute for tracking issue
2022-02-16 14:26:39 -08:00
tamaron 83242897fb add tests 2022-02-02 23:07:02 +09:00
Lucas Kent 08829853d3 Replace usages of vec![].into_iter with [].into_iter 2022-01-09 14:09:25 +11:00