Commit Graph

2641 Commits

Author SHA1 Message Date
Matthias Krüger f775fffac5
Rollup merge of #125561 - Cyborus04:stabilize-slice-flatten, r=scottmcm
Stabilize `slice_flatten`
2024-05-26 13:43:07 +02:00
Cyborus 824ffd29ee
Stabilize `slice_flatten` 2024-05-26 01:26:24 -04:00
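A minimal usage sketch of what this stabilization enables, assuming the methods landed under the names `as_flattened`/`as_flattened_mut` (their names at stabilization):

```rust
fn main() {
    let grid = [[1, 2, 3], [4, 5, 6]];
    // View the nested arrays as one contiguous slice, without copying.
    let flat: &[i32] = grid.as_flattened();
    assert_eq!(flat, &[1, 2, 3, 4, 5, 6]);
}
```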
Matthias Krüger 7fb81229d3
Rollup merge of #123803 - Sp00ph:shrink_to_fix, r=Mark-Simulacrum
Fix `VecDeque::shrink_to` UB when `handle_alloc_error` unwinds.

Fixes #123369

For `VecDeque` it's relatively simple to restore the buffer to a consistent state, so this PR does just that.

Note that with its current implementation, `shrink_to` may change the internal arrangement of elements in the buffer, so e.g. `[D, <uninit>, A, B, C]` will become `[<uninit>, A, B, C, D]` and `[<uninit>, <uninit>, A, B, C]` may become `[B, C, <uninit>, <uninit>, A]` if `shrink_to` unwinds. This shouldn't be an issue though as we don't make any guarantees about the stability of the internal buffer arrangement (and this case is impossible to hit on stable anyways).

This PR also includes a test with code adapted from #123369 which fails without the new `shrink_to` code. Does this suffice or do we maybe need more exhaustive tests like in #108475?

cc `@Amanieu`

`@rustbot` label +T-libs
2024-05-25 22:15:17 +02:00
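For context, a minimal sketch of the stable `VecDeque::shrink_to` API whose unwind path the fix above hardens:

```rust
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<u32> = VecDeque::with_capacity(64);
    dq.extend([1, 2, 3, 4, 5]);
    // Request a smaller buffer; the resulting capacity stays at least as
    // large as both the length and the requested minimum.
    dq.shrink_to(8);
    assert!(dq.capacity() >= 8);
    assert!(dq.iter().copied().eq([1, 2, 3, 4, 5]));
}
```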
Urgau 02eada8f8d Remove now outdated comment since we bumped stage0 2024-05-24 08:08:41 +02:00
Lzu Tao c7d2f4592f address reviews 2024-05-21 18:17:55 +00:00
Lzu Tao 0734ae22f5 Update check-cfg lists for alloc 2024-05-21 18:17:55 +00:00
bors 6715446db6 Auto merge of #125358 - matthiaskrgr:rollup-mx841tg, r=matthiaskrgr
Rollup of 7 pull requests

Successful merges:

 - #124570 (Miscellaneous cleanups)
 - #124772 (Refactor documentation for Apple targets)
 - #125011 (Add opt-for-size core lib feature flag)
 - #125218 (Migrate `run-make/no-intermediate-extras` to new `rmake.rs`)
 - #125225 (Use functions from `crt_externs.h` on iOS/tvOS/watchOS/visionOS)
 - #125266 (compiler: add simd_ctpop intrinsic)
 - #125348 (Small fixes to `std::path::absolute` docs)

Failed merges:

 - #125296 (Fix `unexpected_cfgs` lint on std)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-05-21 12:50:09 +00:00
Matthias Krüger 4abf179b14
Rollup merge of #125011 - diondokter:opt-for-size, r=Amanieu,kobzol
Add opt-for-size core lib feature flag

Adds a feature flag to the core library that makes it possible to use smaller implementations for certain algorithms.

So far, the core lib has traded binary size for performance. This is likely what most people want, since they have big SIMD-capable machines. However, people on small machines, like embedded devices, don't enjoy the potential speedup of the bigger algorithms, but do have to pay for them. These microcontrollers often have only 16-1024 kB of flash memory.

This PR is the result of some talks with project members like `@Amanieu` at RustNL.
There are some open questions about how this will eventually be stabilized, but it's a similar question to the one for the existing `panic_immediate_abort` feature.

Speaking as someone from the embedded side, we'd rather have this unstable for a while as opposed to not having it at all. In the meantime we can try to use it and also add further PRs to the core lib that use the feature flag in areas where we find benefit.

Open questions from my side:
- Is this a good feature name?
  - `panic_immediate_abort` is fairly verbose, so I went with something equally verbose
  - It's easy to refactor later
- I've added the feature to `std` and `alloc` as well as they might benefit too. Do we agree?
  - I expect these to get less usage out of the flag, since most size-constrained projects don't use these libraries often.
2024-05-21 12:47:04 +02:00
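A hypothetical sketch, not the actual core source, of how a cargo feature in this spirit can gate a smaller code path with identical semantics (the feature name `optimize_for_size` is assumed here):

```rust
// Hypothetical illustration of gating a size-optimized path behind a feature.
#[cfg(not(feature = "optimize_for_size"))]
pub fn checksum(data: &[u8]) -> u32 {
    // Stand-in for a larger implementation tuned for speed.
    data.chunks(8)
        .flatten()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

#[cfg(feature = "optimize_for_size")]
pub fn checksum(data: &[u8]) -> u32 {
    // Stand-in for a smaller, simpler implementation tuned for code size.
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}
```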
Michael Goulet 1a81092531 Add the impls for Box<[T]>: IntoIterator
Co-authored-by: ltdk <usr@ltdk.xyz>
2024-05-20 19:21:30 -04:00
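A minimal sketch of what the new impl allows. The fully qualified call is used because, by analogy with the array `IntoIterator` change (an assumption here), plain method syntax may keep resolving to the by-reference slice iterator on older editions:

```rust
fn main() {
    let b: Box<[String]> = vec![String::from("a"), String::from("b")].into_boxed_slice();
    // The by-value impl yields owned Strings rather than references.
    let joined: String = IntoIterator::into_iter(b).collect();
    assert_eq!(joined, "ab");
}
```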
Matthias Krüger d1da2387a4
Rollup merge of #125283 - zachs18:arc-default-shared, r=dtolnay
Use a single static for all default slice Arcs.

Also adds debug_asserts in Drop for Weak/Arc that the shared static is not being "dropped"/"deallocated".

As per https://github.com/rust-lang/rust/pull/124640#pullrequestreview-2064962003

r? dtolnay
2024-05-20 14:26:53 +02:00
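A sketch of the observable behavior, assuming the `Default` impls for empty slice/str `Arc`s covered by this series; the allocation sharing itself is an implementation detail, not a guarantee:

```rust
use std::sync::Arc;

fn main() {
    let a: Arc<[u64]> = Arc::default();
    let b: Arc<[u64]> = Arc::default();
    assert!(a.is_empty() && b.is_empty());
    // With a shared static backing the empty defaults, no allocation is needed
    // and the two may even point at the same ArcInner (not guaranteed).
    let _maybe_shared = Arc::ptr_eq(&a, &b);
}
```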
Matthias Krüger 7389416284
Rollup merge of #125093 - zachs18:rc-into-raw-with-allocator-only, r=Mark-Simulacrum
Add `fn into_raw_with_allocator` to Rc/Arc/Weak.

Split out from #119761

Add `fn into_raw_with_allocator` for `Rc`/`rc::Weak`[^1]/`Arc`/`sync::Weak`.
* Pairs with `from_raw_in` (which already exists on all 4 types).
* Name matches `Box::into_raw_with_allocator`.
* Associated fns on `Rc`/`Arc`, methods on `Weak`s.

<details> <summary>Future PR/ACP</summary>

As a follow-on to this PR, I plan to make a PR/ACP later to move `into_raw(_parts)` from `Container<_, A: Allocator>` to only `Container<_, Global>` (where `Container` = `Vec`/`Box`/`Rc`/`rc::Weak`/`Arc`/`sync::Weak`) so that users of non-`Global` allocators have to explicitly handle the allocator when using `into_raw`-like APIs.

The current behaviors of stdlib containers are inconsistent with respect to what happens to the allocator when `into_raw` is called (which does not return the allocator):

| Type | `into_raw` currently callable with | behavior of `into_raw`|
| --- | --- | --- |
| `Box` | any allocator | allocator is [dropped](https://doc.rust-lang.org/nightly/src/alloc/boxed.rs.html#1060) |
| `Vec` | any allocator | allocator is [forgotten](https://doc.rust-lang.org/nightly/src/alloc/vec/mod.rs.html#884) |
| `Arc`/`Rc`/`Weak` | any allocator | allocator is [forgotten](https://doc.rust-lang.org/src/alloc/sync.rs.html#1487)(Arc) [(sync::Weak)](https://doc.rust-lang.org/src/alloc/sync.rs.html#2726) [(Rc)](https://doc.rust-lang.org/src/alloc/rc.rs.html#1352) [(rc::Weak)](https://doc.rust-lang.org/src/alloc/rc.rs.html#2993) |

In my opinion, neither implicitly dropping nor implicitly forgetting the allocator is ideal; dropping it could immediately invalidate the returned pointer, and forgetting it could unintentionally leak memory. My (to-be) proposed solution is to just forbid calling `into_raw(_parts)` on containers with non-`Global` allocators, and require calling `into_raw_with_allocator` (or `Vec::into_raw_parts_with_alloc`) instead.

</details>

[^1]:  Technically, `rc::Weak::into_raw_with_allocator` is not newly added, as it was modified and renamed from `rc::Weak::into_raw_and_alloc`.
2024-05-20 08:31:41 +02:00
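A nightly-only sketch of the pairing described above, assuming the new function is gated under `allocator_api` with the signature implied by the PR (raw pointer plus allocator):

```rust
#![feature(allocator_api)]

use std::alloc::Global;
use std::sync::Arc;

fn main() {
    let a: Arc<str> = Arc::from("hi");
    // Hand back both the raw pointer and the allocator instead of forgetting it.
    let (ptr, alloc): (*const str, Global) = Arc::into_raw_with_allocator(a);
    // SAFETY: `ptr` was just produced by `into_raw_with_allocator` with `alloc`.
    let a = unsafe { Arc::from_raw_in(ptr, alloc) };
    assert_eq!(&*a, "hi");
}
```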
bors 12075f04e6 Auto merge of #123878 - jwong101:inplacecollect, r=jhpratt
optimize inplace collection of Vec

This PR has the following changes:

1. Using `usize::unchecked_mul` in 79424056b0/library/alloc/src/vec/in_place_collect.rs (L262), as LLVM does not know that the operation can't wrap, since the result is the size of the original allocation.

Given the following:

```rust

pub struct Foo([usize; 3]);

pub fn unwrap_copy(v: Vec<Foo>) -> Vec<[usize; 3]> {
    v.into_iter().map(|f| f.0).collect()
}
```

<details>
<summary>Before this commit:</summary>

```llvm
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24

; Unnecessary calculation
  %_16.i.i = mul i64 %me.sroa.0.0.copyload.i, 24
  %dst_cap.i.i = udiv i64 %_16.i.i, 24

  store i64 %dst_cap.i.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8
  ret void
}
```
</details>

<details>
<summary>After:</summary>

```llvm
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  store i64 %me.sroa.0.0.copyload.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8, !alias.scope !9, !noalias !14
  ret void
}
```
</details>

Note that there is still one more `mul,udiv` pair that I couldn't get
rid of. The root cause is the same issue as https://github.com/rust-lang/rust/issues/121239, the `nuw` gets
stripped off of `ptr::sub_ptr`.

2. `Iterator::try_fold` gets called on the underlying Iterator in
`SpecInPlaceCollect::collect_in_place` whenever it does not implement
`TrustedRandomAccess`. For types that impl `Drop`, LLVM currently can't
tell that the drop can never occur, when using the default
`Iterator::try_fold` implementation.

For example, given the following code from #120493

```rust
#[repr(transparent)]
struct WrappedClone {
    inner: String
}

#[no_mangle]
pub fn unwrap_clone(list: Vec<WrappedClone>) -> Vec<String> {
    list.into_iter().map(|s| s.inner).collect()
}
```

<details>
<summary>The asm for the `unwrap_clone` method is currently:</summary>

```asm
unwrap_clone:
        push    rbp
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbx
        push    rax
        mov     rbx, rdi
        mov     r12, qword ptr [rsi]
        mov     rdi, qword ptr [rsi + 8]
        mov     rax, qword ptr [rsi + 16]
        movabs  rsi, -6148914691236517205
        mov     r14, r12
        test    rax, rax
        je      .LBB0_10
        lea     rcx, [rax + 2*rax]
        lea     r14, [r12 + 8*rcx]
        shl     rax, 3
        lea     rax, [rax + 2*rax]
        xor     ecx, ecx
.LBB0_2:
        cmp     qword ptr [r12 + rcx], 0
        je      .LBB0_4
        add     rcx, 24
        cmp     rax, rcx
        jne     .LBB0_2
        jmp     .LBB0_10
.LBB0_4:
        lea     rdx, [rax - 24]
        lea     r14, [r12 + rcx]
        cmp     rdx, rcx
        je      .LBB0_10
        mov     qword ptr [rsp], rdi
        sub     rax, rcx
        add     rax, -24
        mul     rsi
        mov     r15, rdx
        lea     rbp, [r12 + rcx]
        add     rbp, 32
        shr     r15, 4
        mov     r13, qword ptr [rip + __rust_dealloc@GOTPCREL]
        jmp     .LBB0_6
.LBB0_8:
        add     rbp, 24
        dec     r15
        je      .LBB0_9
.LBB0_6:
        mov     rsi, qword ptr [rbp]
        test    rsi, rsi
        je      .LBB0_8
        mov     rdi, qword ptr [rbp - 8]
        mov     edx, 1
        call    r13
        jmp     .LBB0_8
.LBB0_9:
        mov     rdi, qword ptr [rsp]
        movabs  rsi, -6148914691236517205
.LBB0_10:
        sub     r14, r12
        mov     rax, r14
        mul     rsi
        shr     rdx, 4
        mov     qword ptr [rbx], r12
        mov     qword ptr [rbx + 8], rdi
        mov     qword ptr [rbx + 16], rdx
        mov     rax, rbx
        add     rsp, 8
        pop     rbx
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        pop     rbp
        ret
```
</details>

<details>
<summary>After this PR:</summary>

```asm
unwrap_clone:
	mov	rax, rdi
	movups	xmm0, xmmword ptr [rsi]
	mov	rcx, qword ptr [rsi + 16]
	movups	xmmword ptr [rdi], xmm0
	mov	qword ptr [rdi + 16], rcx
	ret
```
</details>

Fixes https://github.com/rust-lang/rust/issues/120493
2024-05-20 00:51:12 +00:00
Zachary S 3299823d62 Fix typo in assert message 2024-05-19 13:29:45 -05:00
Zachary S 58f8ed122a cfg-out unused code under no_global_oom_handling 2024-05-19 13:27:17 -05:00
Zachary S 6fae171e54 fmt 2024-05-19 13:21:53 -05:00
Zachary S 2dacd70e1e Fix stacked borrows violation 2024-05-19 11:42:35 -05:00
Zachary S e6396bca01 Use a single static for all default slice Arcs.
Also adds debug_asserts in Drop for Weak/Arc that the shared static is not being "dropped"/"deallocated".
2024-05-19 11:02:22 -05:00
bors 496f7310c8 Auto merge of #124640 - Billy-Sheppard:master, r=dtolnay
Fix #124275: Implemented Default for `Arc<str>`

With added implementations.

```
GOOD    Arc<CStr>
BROKEN  Arc<OsStr> // removed
GOOD    Rc<str>
GOOD    Rc<CStr>
BROKEN  Rc<OsStr> // removed

GOOD    Rc<[T]>
GOOD    Arc<[T]>
```

For discussion of https://github.com/rust-lang/rust/pull/124367#issuecomment-2091940137.

Key pain points currently:
> I've had a guess at the best locations/feature attrs for them but they might not be correct.

> However I'm unclear how to get the OsStr impl to compile, which file should they go in to avoid the error below? Is it possible, perhaps with some special std rust lib magic?
2024-05-19 06:25:20 +00:00
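A minimal sketch of the added `Default` impls (the ones listed as GOOD above), which produce empty values:

```rust
use std::ffi::CStr;
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    let s: Arc<str> = Arc::default();
    let c: Arc<CStr> = Arc::default();
    let r: Rc<str> = Rc::default();
    assert!(s.is_empty());
    assert!(c.is_empty());
    assert!(r.is_empty());
}
```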
bors bfa3635df9 Auto merge of #99969 - calebsander:feature/collect-box-str, r=dtolnay
alloc: implement FromIterator for Box<str>

`Box<[T]>` implements `FromIterator<T>` using `Vec<T>` + `into_boxed_slice()`.
Add analogous `FromIterator` implementations for `Box<str>`
matching the current implementations for `String`.
Remove the `Global` allocator requirement for `FromIterator<Box<str>>` too.

ACP: https://github.com/rust-lang/libs-team/issues/196
2024-05-19 02:13:06 +00:00
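A minimal sketch, assuming the added impls include `FromIterator<char> for Box<str>` to match `String`:

```rust
fn main() {
    // Collect characters directly into a boxed str, as was already possible for String.
    let b: Box<str> = "hello world".chars().filter(|c| !c.is_whitespace()).collect();
    assert_eq!(&*b, "helloworld");
}
```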
Joshua Wong 65e302fc36 use `Result::into_ok` on infallible result. 2024-05-18 19:15:21 -05:00
Joshua Wong 9d6b93c3e6 specialize `Iterator::fold` for `vec::IntoIter`
LLVM currently adds a redundant check for the returned option, in addition
to the `self.ptr != self.end` check when using the default
`Iterator::fold` method that calls `vec::IntoIter::next` in a loop.
2024-05-18 18:30:20 -05:00
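User code does not change with this specialization; a minimal sketch of the pattern it speeds up:

```rust
fn main() {
    let v = vec![1u64, 2, 3, 4];
    // With the specialized fold, the loop no longer re-checks the Option
    // returned by next() on every iteration.
    let sum = v.into_iter().fold(0u64, |acc, x| acc + x);
    assert_eq!(sum, 10);
}
```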
Joshua Wong 6165dca6db optimize in_place_collect with vec::IntoIter::try_fold
`Iterator::try_fold` gets called on the underlying Iterator in
`SpecInPlaceCollect::collect_in_place` whenever it does not implement
`TrustedRandomAccess`. For types that impl `Drop`, LLVM currently can't
tell that the drop can never occur, when using the default
`Iterator::try_fold` implementation.

For example, the asm from the `unwrap_clone` method is currently:

```
unwrap_clone:
        push    rbp
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbx
        push    rax
        mov     rbx, rdi
        mov     r12, qword ptr [rsi]
        mov     rdi, qword ptr [rsi + 8]
        mov     rax, qword ptr [rsi + 16]
        movabs  rsi, -6148914691236517205
        mov     r14, r12
        test    rax, rax
        je      .LBB0_10
        lea     rcx, [rax + 2*rax]
        lea     r14, [r12 + 8*rcx]
        shl     rax, 3
        lea     rax, [rax + 2*rax]
        xor     ecx, ecx
.LBB0_2:
        cmp     qword ptr [r12 + rcx], 0
        je      .LBB0_4
        add     rcx, 24
        cmp     rax, rcx
        jne     .LBB0_2
        jmp     .LBB0_10
.LBB0_4:
        lea     rdx, [rax - 24]
        lea     r14, [r12 + rcx]
        cmp     rdx, rcx
        je      .LBB0_10
        mov     qword ptr [rsp], rdi
        sub     rax, rcx
        add     rax, -24
        mul     rsi
        mov     r15, rdx
        lea     rbp, [r12 + rcx]
        add     rbp, 32
        shr     r15, 4
        mov     r13, qword ptr [rip + __rust_dealloc@GOTPCREL]
        jmp     .LBB0_6
.LBB0_8:
        add     rbp, 24
        dec     r15
        je      .LBB0_9
.LBB0_6:
        mov     rsi, qword ptr [rbp]
        test    rsi, rsi
        je      .LBB0_8
        mov     rdi, qword ptr [rbp - 8]
        mov     edx, 1
        call    r13
        jmp     .LBB0_8
.LBB0_9:
        mov     rdi, qword ptr [rsp]
        movabs  rsi, -6148914691236517205
.LBB0_10:
        sub     r14, r12
        mov     rax, r14
        mul     rsi
        shr     rdx, 4
        mov     qword ptr [rbx], r12
        mov     qword ptr [rbx + 8], rdi
        mov     qword ptr [rbx + 16], rdx
        mov     rax, rbx
        add     rsp, 8
        pop     rbx
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        pop     rbp
        ret
```

After this PR:

```
unwrap_clone:
	mov	rax, rdi
	movups	xmm0, xmmword ptr [rsi]
	mov	rcx, qword ptr [rsi + 16]
	movups	xmmword ptr [rdi], xmm0
	mov	qword ptr [rdi + 16], rcx
	ret
```

Fixes #120493
2024-05-18 18:30:20 -05:00
Joshua Wong c585541e67 optimize in-place collection of `Vec`
LLVM does not know that the multiplication never overflows, which causes
it to generate unnecessary instructions. Use `usize::unchecked_mul`, so
that it can fold the `dst_cap` calculation when `size_of::<I::SRC>() ==
size_of::<T>()`.

Running:

```
rustc -C llvm-args=-x86-asm-syntax=intel -O src/lib.rs --emit asm
```

```rust

pub struct Foo([usize; 3]);

pub fn unwrap_copy(v: Vec<Foo>) -> Vec<[usize; 3]> {
    v.into_iter().map(|f| f.0).collect()
}
```

Before this commit:

```
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  %_16.i.i = mul i64 %me.sroa.0.0.copyload.i, 24
  %dst_cap.i.i = udiv i64 %_16.i.i, 24
  store i64 %dst_cap.i.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8
  ret void
}
```

After:

```
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  store i64 %me.sroa.0.0.copyload.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8, !alias.scope !9, !noalias !14
  ret void
}
```

Note that there is still one more `mul,udiv` pair that I couldn't get
rid of. The root cause is the same issue as #121239, the `nuw` gets
stripped off of `ptr::sub_ptr`.
2024-05-18 18:30:20 -05:00
Jon Gjengset 0beba9699c Clarify how String::leak and into_boxed_str differ 2024-05-18 19:17:43 +02:00
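A minimal sketch of the difference being clarified, assuming `leak` hands back the allocation as-is (excess capacity included) while `into_boxed_str` shrinks it to fit:

```rust
fn main() {
    // into_boxed_str returns an owned Box<str> and drops excess capacity.
    let boxed: Box<str> = String::with_capacity(64).into_boxed_str();
    // leak never frees the allocation and returns a 'static mutable reference.
    let leaked: &'static mut str = String::with_capacity(64).leak();
    assert_eq!(&*boxed, "");
    assert_eq!(&*leaked, "");
}
```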
Zachary S c895f6e958 Access alloc field directly in Arc/Rc::into_raw_with_allocator.
... since fn allocator doesn't exist yet.
2024-05-16 21:09:05 -05:00
Zachary S 28cb2d7dfb Add fn into_raw_with_allocator to Rc/Arc/Weak. 2024-05-13 17:49:41 -05:00
Dion Dokter b8b68983f3
Forward alloc features to core 2024-05-13 22:20:32 +02:00
Zachary S f27d1e114c Use shared statics for the ArcInner for Arc<str, CStr>::default, and for Arc<[T]>::default where alignof(T) <= 16. 2024-05-12 20:29:08 -05:00
Zachary S 0b3ebb546f Add note about possible allocation-sharing to Arc/Rc<str/[T]/CStr>::default. 2024-05-12 20:27:29 -05:00
Billy Sheppard 5c6326ad79 added Default impls
reorganised attrs

removed OsStr impls

added backticks
2024-05-12 20:27:28 -05:00
bors 4fd98a4b1b Auto merge of #125012 - RalfJung:format-error, r=Mark-Simulacrum,workingjubilee
io::Write::write_fmt: panic if the formatter fails when the stream does not fail

Follow-up to https://github.com/rust-lang/rust/pull/124954
2024-05-12 08:34:32 +00:00
Matthias Krüger e3fca20eae
Rollup merge of #124981 - zachs18:rc-allocator-generalize-1, r=Mark-Simulacrum
Relax allocator requirements on some Rc/Arc APIs.

Split out from #119761

* Remove `A: Clone` bound from `Rc::assume_init`(s), `Rc::downcast`, and `Rc::downcast_unchecked` (`Arc` methods were already relaxed by #120445)
* Make `From<Rc<[T; N]>> for Rc<[T]>` allocator-aware (`Arc`'s already is).
* Remove `A: Clone` from `Rc/Arc::unwrap_or_clone`

Internal changes:

* Made `Arc::internal_into_inner_with_allocator` method into `Arc::into_inner_with_allocator` associated fn.
* Add private `Rc::into_inner_with_allocator` (to match Arc), so other fns don't have to juggle `ManuallyDrop`.
2024-05-11 23:43:25 +02:00
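For context, a minimal sketch of one of the relaxed APIs, `Arc::unwrap_or_clone`, in its stable `Global`-allocator form (the dropped `A: Clone` bound only matters with custom allocators):

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(String::from("hi"));
    let b = Arc::clone(&a);
    // Returns the inner value, cloning it here only because `b` still shares it.
    let owned: String = Arc::unwrap_or_clone(a);
    assert_eq!(owned, "hi");
    assert_eq!(*b, "hi");
}
```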
Ralf Jung e00f27b7be io::Write::write_fmt: panic if the formatter fails when the stream does not fail 2024-05-11 15:13:18 +02:00
Dion Dokter 579266a6be Add flag to std and alloc too 2024-05-11 14:20:03 +02:00
Zachary S 8d8eb505b0 Relax A: Clone requirement on Rc/Arc::unwrap_or_clone. 2024-05-10 14:34:19 -05:00
Zachary S d6122f1924 Relax allocator requirements on some Rc APIs.
* Remove A: Clone bound from Rc::assume_init, Rc::downcast, and Rc::downcast_unchecked.
* Make From<Rc<[T; N]>> for Rc<[T]> allocator-aware.

Internal changes:

* Made Arc::internal_into_inner_with_allocator method into Arc::into_inner_with_allocator associated fn.
* Add private Rc::into_inner_with_allocator (to match Arc), so other fns don't have to juggle ManuallyDrop.
2024-05-10 14:11:23 -05:00
Kevin Reid c21c5baad9 Document proper usage of `fmt::Error` and `fmt()`'s `Result`.
Documentation of these properties previously existed in a lone paragraph
in the `fmt` module's documentation:
<https://doc.rust-lang.org/1.78.0/std/fmt/index.html#formatting-traits>
However, users looking to implement a formatting trait won't necessarily
look there. Therefore, let's add the critical information (that
formatting per se is infallible) to all the involved items.
2024-05-09 17:58:38 -07:00
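A minimal sketch of the guidance being documented: a `Display` impl only forwards errors that originate from the formatter and never invents its own:

```rust
use std::fmt;

struct Celsius(f64);

impl fmt::Display for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Only propagate errors coming from `f` itself (via write!);
        // formatting per se is infallible, so never return Err for other reasons.
        write!(f, "{:.1} degrees", self.0)
    }
}

fn main() {
    assert_eq!(Celsius(21.54).to_string(), "21.5 degrees");
}
```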
Marcondiro bbdf97254a
fix #124714 str.to_lowercase sigma handling 2024-05-08 17:05:10 +02:00
Markus Everling 5cb53bc34d Move `test_shrink_to_unwind` to its own file.
This way, no other test can be tripped up by `test_shrink_to_unwind` changing the alloc error hook.
2024-05-07 19:43:54 +00:00
Markus Everling ffe8510e3d Fix `VecDeque::shrink_to` UB when `handle_alloc_error` unwinds.
Luckily it's comparatively simple to just restore the `VecDeque` into a valid state on unwinds.
2024-05-07 19:30:34 +00:00
Caleb Sander c92c228260 alloc: implement FromIterator for Box<str>
Box<[T]> implements FromIterator<T> using Vec<T> + into_boxed_slice().
Add analogous FromIterator implementations for Box<str>
matching the current implementations for String.
Remove the Global allocator requirement for FromIterator<Box<str>> too.
2024-05-05 10:29:57 -07:00
Guillaume Gomez d3e042dc4e
Rollup merge of #124749 - RossSmyth:stable_range, r=davidtwco
Stabilize exclusive_range_pattern (v2)

This PR is identical to #124459, which was approved and merged but then removed from master by a force-push due to a [CI bug](https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/ci.20broken.3F).

r? ghost

Original PR description:

---

Stabilization report: https://github.com/rust-lang/rust/issues/37854#issuecomment-1842398130
FCP: https://github.com/rust-lang/rust/issues/37854#issuecomment-1872520294

Stabilization was blocked by a lint that was merged here: #118879

Documentation PR is here: rust-lang/reference#1484

`@rustbot` label +F-exclusive_range_pattern +T-lang
2024-05-05 16:42:48 +02:00
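A minimal sketch of the now-stable exclusive range patterns:

```rust
fn classify(n: u32) -> &'static str {
    match n {
        // `a..b` (half-open) is now accepted alongside `a..=b`.
        0..10 => "single digit",
        10..100 => "double digit",
        _ => "large",
    }
}

fn main() {
    assert_eq!(classify(7), "single digit");
    assert_eq!(classify(42), "double digit");
    assert_eq!(classify(1000), "large");
}
```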
Matthias Krüger 1ff247c404
Rollup merge of #124593 - GKFX:cstr-literals-in-api-docs, r=workingjubilee
Describe and use CStr literals in CStr and CString docs

Mention CStr literals in the description of both types, and use them in some of the code samples for CStr. This is intended to make C string literals more discoverable.

Additionally, I don't think the orange "This example is not tested" warnings are very encouraging, so I have made the examples on `CStr` build.
2024-05-03 20:33:46 +02:00
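A minimal sketch of the C string literals the docs now showcase (assumed stable since Rust 1.77):

```rust
use std::ffi::CStr;

fn main() {
    // A C string literal evaluates to a &'static CStr with a trailing NUL.
    let greeting: &CStr = c"hello";
    assert_eq!(greeting.to_bytes(), b"hello");
}
```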
Matthias Krüger d7a8936b78
Rollup merge of #124441 - bravequickcleverfibreyarn:string.rs, r=Amanieu
String.truncate comment microfix (greater or equal)

String.truncate calls Vec.truncate in turn, and the latter's docs state "is greater or equal to". Besides, it's common sense.
2024-05-03 06:04:20 +02:00
Matthias Krüger e17a222a0a
Rollup merge of #123480 - Nadrieril:impl-all-derefpures, r=compiler-errors
deref patterns: impl `DerefPure` for more std types

Context: [deref patterns](https://github.com/rust-lang/rust/issues/87121). The requirements of `DerefPure` aren't precise yet, but these types unambiguously satisfy them.

Interestingly, a hypothetical `impl DerefMut for Cow` that does a `Clone` would *not* be eligible for `DerefPure` if we allow mixing deref patterns with normal patterns. If the following is exhaustive then the `DerefMut` would cause UB:
```rust
match &mut Cow::Borrowed(&()) {
    Cow::Owned(_) => ..., // Doesn't match
    deref!(_x) if false => ..., // Causes the variant to switch to `Owned`
    Cow::Borrowed(_) => ..., // Doesn't match
    // We reach unreachable
}
```
2024-05-03 06:04:19 +02:00
Ross Smyth 6967d1c0fc Stabilize exclusive_range 2024-05-02 19:42:31 -04:00
Mark Rousskov a64f941611 Step bootstrap cfgs 2024-05-01 22:19:11 -04:00
Mark Rousskov bd7d328807 Replace version placeholders for 1.79 2024-05-01 21:01:51 -04:00
George Bateman e610a52a62
Describe and use CStr literals in CStr and CString docs 2024-05-01 19:59:00 +01:00
Trevor Gross e0f8202ed5 Stabilize `non_null_convenience`
Fully stabilize the following API, including const where applicable:

    impl <T> NonNull<T> {
        pub const unsafe fn offset(self, count: isize) -> Self;
        pub const unsafe fn add(self, count: usize) -> Self;
        pub const unsafe fn sub(self, count: usize) -> Self;
        pub const unsafe fn offset_from(self, origin: NonNull<T>) -> isize;
        pub const unsafe fn read(self) -> T;
        pub unsafe fn read_volatile(self) -> T;
        pub const unsafe fn read_unaligned(self) -> T;
        pub unsafe fn write_volatile(self, val: T);
        pub unsafe fn replace(self, src: T) -> T;
    }

    impl<T: ?Sized> NonNull<T> {
        pub const unsafe fn byte_offset(self, count: isize) -> Self;
        pub const unsafe fn byte_add(self, count: usize) -> Self;
        pub const unsafe fn byte_sub(self, count: usize) -> Self;
        pub const unsafe fn byte_offset_from<U: ?Sized>(self, origin: NonNull<U>) -> isize;
        pub unsafe fn drop_in_place(self);
    }

Stabilize the following without const:

    impl <T> NonNull<T> {
        // const under `const_intrinsic_copy`
        pub const unsafe fn copy_to(self, dest: NonNull<T>, count: usize);
        pub const unsafe fn copy_to_nonoverlapping(self, dest: NonNull<T>, count: usize);
        pub const unsafe fn copy_from(self, src: NonNull<T>, count: usize);
        pub const unsafe fn copy_from_nonoverlapping(self, src: NonNull<T>, count: usize);

        // const under `const_ptr_write`
        pub const unsafe fn write(self, val: T);
        pub const unsafe fn write_bytes(self, val: u8, count: usize);
        pub const unsafe fn write_unaligned(self, val: T);

        // const under `const_swap`
        pub const unsafe fn swap(self, with: NonNull<T>);

        // const under `const_align_offset`
        pub const fn align_offset(self, align: usize) -> usize;

        // const under `const_pointer_is_aligned`
        pub const fn is_aligned(self) -> bool;
    }

Left the following unstable:

    impl <T> NonNull<T> {
        // moved gate to `ptr_sub_ptr`
        pub const unsafe fn sub_ptr(self, subtracted: NonNull<T>) -> usize;
    }

    impl <T: ?Sized> NonNull<T> {
        // moved gate to `pointer_is_aligned_to`
        pub const fn is_aligned_to(self, align: usize) -> bool;
    }

Fixes: https://github.com/rust-lang/rust/issues/117691
2024-04-28 16:19:53 -05:00
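A minimal sketch exercising a few of the methods stabilized above (`add`, `read`, `offset_from`):

```rust
use std::ptr::NonNull;

fn main() {
    let mut data = [10u32, 20, 30];
    // Pointer derived from the whole array, so offsets 0..3 stay in bounds.
    let base = NonNull::new(data.as_mut_ptr()).unwrap();
    // SAFETY: offsets are in bounds and the elements are initialized.
    unsafe {
        assert_eq!(base.add(2).read(), 30);
        assert_eq!(base.add(2).offset_from(base), 2);
    }
}
```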