Rollup merge of #89216 - r00ster91:bigo, r=dtolnay

Consistent big O notation

This makes the big O time complexity notation more consistent in places with Markdown support.
Inspired by #89210
Manish Goregaokar 2021-09-25 18:22:20 -07:00 committed by GitHub
commit 653dcaac2b
9 changed files with 26 additions and 25 deletions

View File

@@ -5170,7 +5170,7 @@ Libraries
- [Upgrade to Unicode 10.0.0][42999]
- [Reimplemented `{f32, f64}::{min, max}` in Rust instead of using CMath.][42430]
- [Skip the main thread's manual stack guard on Linux][43072]
- [Iterator::nth for `ops::{Range, RangeFrom}` is now done in O(1) time][43077]
- [Iterator::nth for `ops::{Range, RangeFrom}` is now done in *O*(1) time][43077]
- [`#[repr(align(N))]` attribute max number is now 2^31 - 1.][43097] This was
previously 2^15.
- [`{OsStr, Path}::Display` now avoids allocations where possible][42613]
@@ -8473,7 +8473,7 @@ Libraries
algorithm][s].
* [`std::io::copy` allows `?Sized` arguments][cc].
* The `Windows`, `Chunks`, and `ChunksMut` iterators over slices all
[override `count`, `nth` and `last` with an O(1)
[override `count`, `nth` and `last` with an *O*(1)
implementation][it].
* [`Default` is implemented for arrays up to `[T; 32]`][d].
* [`IntoRawFd` has been added to the Unix-specific prelude,
@@ -8995,7 +8995,7 @@ Libraries
* The `Default` implementation for `Arc` [no longer requires `Sync +
Send`][arc].
* [The `Iterator` methods `count`, `nth`, and `last` have been
overridden for slices to have O(1) performance instead of O(n)][si].
overridden for slices to have *O*(1) performance instead of *O*(*n*)][si].
* Incorrect handling of paths on Windows has been improved in both the
compiler and the standard library.
* [`AtomicPtr` gained a `Default` implementation][ap].

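The two big-O entries above are easy to see in action: `nth` on a `Range` can compute its result arithmetically instead of stepping through the skipped elements, and the slice iterators (`Chunks`, `Windows`, `ChunksMut`) answer `count`, `nth`, and `last` from the stored slice length. A small illustrative sketch with arbitrary values:

```rust
fn main() {
    // `nth` on a range jumps straight to the requested offset, so this
    // does not iterate a billion times.
    let mut big = 0u64..10_000_000_000;
    assert_eq!(big.nth(1_000_000_000), Some(1_000_000_000));

    // The slice iterators know the slice length, so `count` and `last`
    // do not need to walk every element.
    let data = [1, 2, 3, 4, 5];
    assert_eq!(data.chunks(2).count(), 3);
    assert_eq!(data.windows(3).last(), Some(&[3, 4, 5][..]));
}
```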
View File

@@ -3,7 +3,7 @@
//! Also computes the resulting DAG if each SCC is replaced with a
//! node in the graph. This uses [Tarjan's algorithm](
//! https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm)
//! that completes in *O(n)* time.
//! that completes in *O*(*n*) time.
use crate::fx::FxHashSet;
use crate::graph::vec_graph::VecGraph;

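For context, the following is a compact, hypothetical sketch of the textbook recursive form of Tarjan's algorithm, showing why a single depth-first pass over the nodes and edges yields every SCC in *O*(*n*) time. The `tarjan_sccs` function and `State` struct are invented for this sketch; the module's own implementation is separate and more elaborate.

```rust
/// Illustration only: classic recursive Tarjan over an adjacency list.
/// Every node and edge is visited once, hence *O*(*n*).
fn tarjan_sccs(adj: &[Vec<usize>]) -> Vec<Vec<usize>> {
    struct State<'a> {
        adj: &'a [Vec<usize>],
        next_index: usize,
        index: Vec<Option<usize>>, // discovery index of each node
        lowlink: Vec<usize>,       // smallest index reachable from each node
        on_stack: Vec<bool>,
        stack: Vec<usize>,
        sccs: Vec<Vec<usize>>,
    }

    fn visit(s: &mut State<'_>, v: usize) {
        s.index[v] = Some(s.next_index);
        s.lowlink[v] = s.next_index;
        s.next_index += 1;
        s.stack.push(v);
        s.on_stack[v] = true;

        let adj = s.adj;
        for &w in &adj[v] {
            if s.index[w].is_none() {
                visit(s, w);
                s.lowlink[v] = s.lowlink[v].min(s.lowlink[w]);
            } else if s.on_stack[w] {
                s.lowlink[v] = s.lowlink[v].min(s.index[w].unwrap());
            }
        }

        // `v` is the root of an SCC: pop the whole component off the stack.
        if s.lowlink[v] == s.index[v].unwrap() {
            let mut scc = Vec::new();
            loop {
                let w = s.stack.pop().unwrap();
                s.on_stack[w] = false;
                scc.push(w);
                if w == v {
                    break;
                }
            }
            s.sccs.push(scc);
        }
    }

    let n = adj.len();
    let mut state = State {
        adj,
        next_index: 0,
        index: vec![None; n],
        lowlink: vec![0; n],
        on_stack: vec![false; n],
        stack: Vec::new(),
        sccs: Vec::new(),
    };
    for v in 0..n {
        if state.index[v].is_none() {
            visit(&mut state, v);
        }
    }
    state.sccs
}
```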
View File

@@ -9,7 +9,7 @@ mod index_map;
pub use index_map::SortedIndexMultiMap;
/// `SortedMap` is a data structure with similar characteristics to `BTreeMap` but
/// slightly different trade-offs: lookup, insertion, and removal are O(log(N))
/// slightly different trade-offs: lookup, insertion, and removal are *O*(log(*n*))
/// and elements can be iterated in order cheaply.
///
/// `SortedMap` can be faster than a `BTreeMap` for small sizes (<50) since it

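A simplified sketch of the underlying idea (entries kept in a vector sorted by key, with binary search for lookup) may help. This is not the actual `SortedMap` code; `TinySortedMap` below is invented purely for illustration.

```rust
/// Sketch of a sorted-vector map: lookup is a binary search, *O*(log(*n*)),
/// and in-order iteration is just a walk over the underlying vector.
struct TinySortedMap<K: Ord, V> {
    data: Vec<(K, V)>,
}

impl<K: Ord, V> TinySortedMap<K, V> {
    fn get(&self, key: &K) -> Option<&V> {
        self.data
            .binary_search_by(|(k, _)| k.cmp(key))
            .ok()
            .map(|i| &self.data[i].1)
    }

    fn insert(&mut self, key: K, value: V) {
        match self.data.binary_search_by(|(k, _)| k.cmp(&key)) {
            Ok(i) => self.data[i].1 = value,
            // Shifting the tail is what makes this attractive only for the
            // small maps that `SortedMap` is aimed at.
            Err(i) => self.data.insert(i, (key, value)),
        }
    }

    fn iter(&self) -> impl Iterator<Item = &(K, V)> {
        self.data.iter()
    }
}
```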
View File

@@ -3,7 +3,7 @@
//! Insertion and popping the largest element have *O*(log(*n*)) time complexity.
//! Checking the largest element is *O*(1). Converting a vector to a binary heap
//! can be done in-place, and has *O*(*n*) complexity. A binary heap can also be
//! converted to a sorted vector in-place, allowing it to be used for an *O*(*n* \* log(*n*))
//! converted to a sorted vector in-place, allowing it to be used for an *O*(*n* * log(*n*))
//! in-place heapsort.
//!
//! # Examples
@@ -243,9 +243,9 @@ use super::SpecExtend;
///
/// # Time complexity
///
/// | [push] | [pop] | [peek]/[peek\_mut] |
/// |--------|-----------|--------------------|
/// | O(1)~ | *O*(log(*n*)) | *O*(1) |
/// | [push] | [pop] | [peek]/[peek\_mut] |
/// |---------|---------------|--------------------|
/// | *O*(1)~ | *O*(log(*n*)) | *O*(1) |
///
/// The value for `push` is an expected cost; the method documentation gives a
/// more detailed analysis.

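A short example tying the table to actual `BinaryHeap` usage, including the in-place heapsort mentioned at the top of the module docs:

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::new();

    // `push` has an expected cost of O(1); `pop` is O(log(n)).
    for x in [3, 1, 4, 1, 5] {
        heap.push(x);
    }
    assert_eq!(heap.peek(), Some(&5)); // O(1)
    assert_eq!(heap.pop(), Some(5));   // O(log(n))

    // Building a heap from a Vec is O(n); `into_sorted_vec` then gives an
    // O(n * log(n)) in-place heapsort.
    let sorted = BinaryHeap::from(vec![3, 1, 4, 1, 5]).into_sorted_vec();
    assert_eq!(sorted, [1, 1, 3, 4, 5]);
}
```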
View File

@@ -1,8 +1,8 @@
//! A contiguous growable array type with heap-allocated contents, written
//! `Vec<T>`.
//!
//! Vectors have `O(1)` indexing, amortized `O(1)` push (to the end) and
//! `O(1)` pop (from the end).
//! Vectors have *O*(1) indexing, amortized *O*(1) push (to the end) and
//! *O*(1) pop (from the end).
//!
//! Vectors ensure they never allocate more than `isize::MAX` bytes.
//!
@@ -1270,7 +1270,7 @@ impl<T, A: Allocator> Vec<T, A> {
///
/// The removed element is replaced by the last element of the vector.
///
/// This does not preserve ordering, but is O(1).
/// This does not preserve ordering, but is *O*(1).
///
/// # Panics
///

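A small example of the trade-off `swap_remove` makes, contrasted with the order-preserving (but *O*(*n* - *i*)) `remove`:

```rust
fn main() {
    let mut v = vec!["a", "b", "c", "d", "e"];

    // `swap_remove` fills the hole with the last element instead of
    // shifting the tail, so it is O(1)...
    let removed = v.swap_remove(1);
    assert_eq!(removed, "b");

    // ...but the relative order of the remaining elements changes.
    assert_eq!(v, ["a", "e", "c", "d"]);

    // `remove` preserves order at the cost of shifting everything after
    // the index.
    let mut w = vec!["a", "b", "c", "d", "e"];
    w.remove(1);
    assert_eq!(w, ["a", "c", "d", "e"]);
}
```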
View File

@@ -1795,10 +1795,11 @@ pub trait Iterator {
/// The relative order of partitioned items is not maintained.
///
/// # Current implementation
///
/// The current algorithm tries to find the first element for which the predicate evaluates
/// to false and the last element for which it evaluates to true, and repeatedly swaps them.
///
/// Time Complexity: *O*(*N*)
/// Time complexity: *O*(*n*)
///
/// See also [`is_partitioned()`] and [`partition()`].
///

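A small example of the behaviour described above. Note that `partition_in_place` was still a nightly-only method behind the `iter_partition_in_place` feature gate at the time of this change:

```rust
#![feature(iter_partition_in_place)]

fn main() {
    let mut a = [1, 2, 3, 4, 5, 6, 7];

    // Mismatched elements are swapped from the two ends toward the middle,
    // so each element is examined at most once: O(n).
    let split = a.iter_mut().partition_in_place(|&n| n % 2 == 0);

    assert_eq!(split, 3); // three elements satisfied the predicate
    assert!(a[..split].iter().all(|&n| n % 2 == 0));
    assert!(a[split..].iter().all(|&n| n % 2 != 0));
}
```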
View File

@@ -97,11 +97,11 @@
//!
//! ## Sequences
//!
//! | | get(i) | insert(i) | remove(i) | append | split_off(i) |
//! |----------------|----------------|-----------------|----------------|--------|----------------|
//! | [`Vec`] | O(1) | O(n-i)* | O(n-i) | O(m)* | O(n-i) |
//! | [`VecDeque`] | O(1) | O(min(i, n-i))* | O(min(i, n-i)) | O(m)* | O(min(i, n-i)) |
//! | [`LinkedList`] | O(min(i, n-i)) | O(min(i, n-i)) | O(min(i, n-i)) | O(1) | O(min(i, n-i)) |
//! | | get(i) | insert(i) | remove(i) | append | split_off(i) |
//! |----------------|------------------------|-------------------------|------------------------|-----------|------------------------|
//! | [`Vec`] | *O*(1) | *O*(*n*-*i*)* | *O*(*n*-*i*) | *O*(*m*)* | *O*(*n*-*i*) |
//! | [`VecDeque`] | *O*(1) | *O*(min(*i*, *n*-*i*))* | *O*(min(*i*, *n*-*i*)) | *O*(*m*)* | *O*(min(*i*, *n*-*i*)) |
//! | [`LinkedList`] | *O*(min(*i*, *n*-*i*)) | *O*(min(*i*, *n*-*i*)) | *O*(min(*i*, *n*-*i*)) | *O*(1) | *O*(min(*i*, *n*-*i*)) |
//!
//! Note that where ties occur, [`Vec`] is generally going to be faster than [`VecDeque`], and
//! [`VecDeque`] is generally going to be faster than [`LinkedList`].
@@ -110,10 +110,10 @@
//!
//! For Sets, all operations have the cost of the equivalent Map operation.
//!
//! | | get | insert | remove | range | append |
//! |--------------|-----------|-----------|-----------|-----------|--------|
//! | [`HashMap`] | O(1)~ | O(1)~* | O(1)~ | N/A | N/A |
//! | [`BTreeMap`] | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n+m) |
//! | | get | insert | remove | range | append |
//! |--------------|---------------|---------------|---------------|---------------|--------------|
//! | [`HashMap`] | *O*(1)~ | *O*(1)~* | *O*(1)~ | N/A | N/A |
//! | [`BTreeMap`] | *O*(log(*n*)) | *O*(log(*n*)) | *O*(log(*n*)) | *O*(log(*n*)) | *O*(*n*+*m*) |
//!
//! # Correct and Efficient Usage of Collections
//!

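One way to read the sequence table: insertion at the front (`i` = 0) is the worst case for `Vec`, since the whole tail has to shift, while `VecDeque` handles the same operation in amortized constant time. A small sketch with arbitrary values:

```rust
use std::collections::VecDeque;

fn main() {
    // Inserting at the front of a `Vec` shifts every existing element:
    // O(n - i) with i = 0, i.e. O(n) per insertion.
    let mut v: Vec<u32> = Vec::new();
    for x in 0..5 {
        v.insert(0, x);
    }
    assert_eq!(v, [4, 3, 2, 1, 0]);

    // `VecDeque` does the same in amortized O(1), matching the
    // O(min(i, n-i)) entry with i = 0.
    let mut d: VecDeque<u32> = VecDeque::new();
    for x in 0..5 {
        d.push_front(x);
    }
    assert_eq!(d, [4, 3, 2, 1, 0]);
}
```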
View File

@@ -43,8 +43,8 @@
//! terminator, so the buffer length is really `len+1` characters.
//! Rust strings don't have a nul terminator; their length is always
//! stored and does not need to be calculated. While in Rust
//! accessing a string's length is a `O(1)` operation (because the
//! length is stored); in C it is an `O(length)` operation because the
//! accessing a string's length is an *O*(1) operation (because the
//! length is stored); in C it is an *O*(*n*) operation because the
//! length needs to be computed by scanning the string for the nul
//! terminator.
//!

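The difference is easy to demonstrate side by side. The scan below imitates what C's `strlen` has to do; it is only an illustration, not how the standard library computes anything:

```rust
use std::ffi::CString;

fn main() {
    // Rust strings store their length, so `len` is O(1).
    let s = String::from("hello");
    assert_eq!(s.len(), 5);

    // A C string's length has to be found by scanning for the nul
    // terminator, which is the O(n) work `strlen` performs.
    let c = CString::new("hello").unwrap();
    let bytes = c.as_bytes_with_nul();
    let mut len = 0;
    while bytes[len] != 0 {
        len += 1;
    }
    assert_eq!(len, 5);
}
```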
View File

@@ -995,7 +995,7 @@ declare_clippy_lint! {
declare_clippy_lint! {
/// ### What it does
/// Checks for use of `.iter().nth()` (and the related
/// `.iter_mut().nth()`) on standard library types with O(1) element access.
/// `.iter_mut().nth()`) on standard library types with *O*(1) element access.
///
/// ### Why is this bad?
/// `.get()` and `.get_mut()` are more efficient and more
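A minimal example of the pattern this lint targets, with arbitrary values:

```rust
fn main() {
    let v = vec![10, 20, 30, 40];

    // Flagged by the lint: `.iter().nth(2)` on a type with O(1) element access.
    let third = v.iter().nth(2);

    // Suggested instead: `.get(2)` expresses the O(1) access directly.
    let also_third = v.get(2);

    assert_eq!(third, also_third);
}
```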