Given nondecreasing \(f, g : \N \to \R^+\), we write \(f = O(g)\) if there is \(c \in \N\) so that, for all \(n \in \N\), \[ f(n) \le c \cdot g(n). \] We call \(g\) an asymptotic upper bound for \(f\). (Like saying \(f\) “\(\le\)” \(g\))
It would be more technically accurate to write \(f \in O(g)\): many functions are \(O(g)\), and \(f\) is only one of them.
Given nondecreasing \(f, g : \N \to \R^+\), we write \(f = o(g)\) if \(\lim\limits_{n \to \infty} \frac{f(n)}{g(n)} = 0\). (Like saying \(f\) “\(<\)” \(g\))
\[ \begin{aligned} f = O(g) &\iff (\exists c \in \N) (\forall n)\ f(n) \le c \cdot g(n) & f \text{ "$\le$" } g \\ f = o(g) &\iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0 & f \text{ "$<$" } g \\ f = \Omega(g) &\iff g = O(f) & f \text{ "$\ge$" } g \\ f = \omega(g) &\iff g = o(f) & f \text{ "$>$" } g \\ f = \Theta(g) &\iff f = O(g) \text{ and } f = \Omega(g) & f \text{ "$=$" } g \end{aligned} \]
| Notation | Meaning |
|---|---|
| \(\quad f = O(g)\) | \(f \text{ "$\le$" } g\) |
| \(\quad f = o(g)\) | \(f \text{ "$<$" } g\) |
| \(\quad f = \Omega(g)\) | \(f \text{ "$\ge$" } g\) |
| \(\quad f = \omega(g)\) | \(f \text{ "$>$" } g\) |
| \(\quad f = \Theta(g)\) | \(f \text{ "$=$" } g\) |
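As a quick numerical sanity check (a sketch, not a proof), one can watch the ratio \(f(n)/g(n)\) for growing \(n\): if \(f = o(g)\), the ratio should tend to \(0\).

```python
import math

# Compare f(n) = n log n against g(n) = n^2: the ratio
# f(n)/g(n) = log(n)/n tends to 0, so f = o(g).
def f(n): return n * math.log(n)
def g(n): return n * n

ratios = [f(n) / g(n) for n in (10, 100, 1000, 10**6)]
print(ratios)  # a strictly decreasing sequence approaching 0
```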
Can every problem be solved in \(O(n)\), \(O(n^2)\), or maybe \(O(n^{100})\) time
if we simply find the right algorithm?
It turns out, no:
for any time bound \(t\), there are problems that can’t be solved in \(O(t)\),
but can be if we allow more time.
“Given more time, a TM can solve more problems!”
We formalize this by introducing notation for “problems solvable in \(O(t)\) time.”
Let \(t: \N \to \N^+\) be a time bound. We define the time complexity class \[ \time{t} = \setbuild{L \subseteq \binary^*}{L \text{ is decided by a TM with running time } O(t)} \]
\(\time{t} \subseteq \powerset(\binary^*)\)
Let \(\REG\) denote the class of regular languages.
True or false: \(\REG \subseteq \time{n}\)
For all time bounds \(t_1\) and \(t_2\) such that \(t_1 = O(t_2)\),
\(\time{t_1} \subseteq \time{t_2}\)
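This follows by chaining constants: if a TM decides \(L\) with running time at most \(c_1 \cdot t_1(n)\), and \(t_1 = O(t_2)\) gives \(t_1(n) \le c_2 \cdot t_2(n)\) for all \(n\), then the running time is at most \[ c_1 \cdot t_1(n) \le c_1 c_2 \cdot t_2(n) = O(t_2(n)), \] so the same TM witnesses \(L \in \time{t_2}\).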
Time Hierarchy Theorem:
Let \(t_1(n)\) and \(t_2(n)\) be time bounds such that \(t_1(n) \log(n) = o(t_2(n))\).
Then \(\time{t_1} \subsetneq \time{t_2}\).
That is, there is a problem that can be solved in \(O(t_2(n))\) time, but not in \(O(t_1(n))\) time!
Consequences: for example, since \(n^k \log n = o(n^{k+1})\), we get \(\time{n^k} \subsetneq \time{n^{k+1}}\) for every \(k \ge 1\).
A special time complexity class containing problems with “efficient” solutions:
We denote by \(\P\) the class of languages decidable in polynomial time by a (deterministic) Turing machine: \[ \P = \bigcup_{k = 1}^\infty \time{n^k} \]
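For instance (an illustrative sketch, with Python standing in for a TM), the language of binary palindromes is decided by a scan that makes at most \(n/2\) comparisons, i.e. \(O(n)\) steps, so it lies in \(\time{n} \subseteq \P\):

```python
# Decider for PAL = { x in {0,1}* : x is a palindrome }.
# The loop runs at most len(x)/2 times, i.e. O(n) "steps",
# so PAL is in TIME(n), hence in P.
def decides_pal(x: str) -> bool:
    i, j = 0, len(x) - 1
    while i < j:
        if x[i] != x[j]:
            return False  # reject
        i += 1
        j -= 1
    return True  # accept

print(decides_pal("0110"), decides_pal("0100"))  # True False
```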
Remember our model of compute time:
The time complexity of a decider \(M\) is the time bound function \(t : \mathbb{N} \to \mathbb{N}^+\) defined as \[ t(n) = \max_{x \in \binary^n} \texttt{time}_M(x), \]
where \(\texttt{time}_M(x)\) is the number of configurations that \(M\) visits on input \(x\).
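For small \(n\), this worst-case maximum can be computed by brute force over all inputs of length \(n\) (a sketch: here `steps` counts loop iterations of a toy decider, standing in for the number of TM configurations):

```python
from itertools import product

# Toy "machine": reads its input one symbol at a time and
# returns the number of steps taken (one step per symbol read).
def time_M(x: str) -> int:
    steps = 0
    for _ in x:
        steps += 1
    return steps

# t(n) = max over all x in {0,1}^n of time_M(x)
def t(n: int) -> int:
    return max(time_M("".join(bits)) for bits in product("01", repeat=n))

print([t(n) for n in range(1, 5)])  # [1, 2, 3, 4]
```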
Let \(\encoding{\mathcal{X}}_1\) and \(\encoding{\mathcal{X}}_2\) be two different encodings of input object \(\mathcal{X}\) into binary strings.
Define an “encoding blowup factor” from \(\encoding{\mathcal{X}}_1\) to \(\encoding{\mathcal{X}}_2\) as: \[
f_{1 \to 2}(n) = \max_{\mathcal{X} \text{ s.t. } |\encoding{\mathcal{X}}_1| = n} |\encoding{\mathcal{X}}_2|
\]
If \(f_{1 \to 2}(n) = O(n^c)\) and \(f_{2 \to 1}(n) = O(n^c)\) (i.e., each encoding length is within a polynomial of the other), each encoding yields the same definition of \(\P\).
(Assuming encoding/decoding takes polynomial time.)
Example of reasonable encodings for graph \(G = (V, E)\):
Node and edge lists:
\(\encoding{G}_1 = \texttt{ascii\_to\_binary}(((1,2,3,4), ((1,2),(2,3),(3,1),(1,4))))\)
Binary adjacency matrix: \[ \begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{bmatrix} \quad \Longrightarrow \quad \encoding{G}_2 = 0111101011001000 \]
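Converting between the two encodings above is straightforward; a sketch (the helper name is my own) that builds \(\encoding{G}_2\) from the node/edge list:

```python
# Build the adjacency-matrix encoding <G>_2 from a node/edge list,
# for the example graph with nodes 1..4 and the edges shown above.
def adjacency_bits(nodes, edges):
    n = len(nodes)
    matrix = [[0] * n for _ in range(n)]
    for (u, v) in edges:
        matrix[u - 1][v - 1] = 1
        matrix[v - 1][u - 1] = 1  # undirected graph
    return "".join(str(bit) for row in matrix for bit in row)

enc2 = adjacency_bits((1, 2, 3, 4), ((1, 2), (2, 3), (3, 1), (1, 4)))
print(enc2)  # 0111101011001000
```

Note that \(|\encoding{G}_2| = n^2\), and each encoding's length is polynomial in the other's, so both yield the same class \(\P\).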