> See also:
> - [[Computer Algorithms]]
> - [[The Master Theorem]]
# Algorithm Analysis
![[Pasted image 20230210192026.png|400]]
An algorithm's **efficiency** is often measured in terms of its run time. Because the amount of input that an algorithm would handle in the real world can vary, an approximation of this efficiency is used instead of an exact measurement.
We often assume that primitive operations (e.g., binary arithmetic operators) occur in constant time, aka order $O(1)$ time
- Ex: arithmetic operators, data movement, control statements (method calls, conditionals)
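As a rough sketch of this cost model (the function below is an illustrative example of my own, not from a specific source), every primitive operation inside the loop costs $O(1)$, so the total run time is driven by how many times the loop body repeats:
```python
def sum_list(values):
    """Unit-cost model sketch: each primitive operation counts as O(1)."""
    total = 0              # assignment -> O(1)
    for v in values:       # loop body executes n times
        total += v         # arithmetic + data movement -> O(1) per iteration
    return total           # return -> O(1); overall run time is O(n)
```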
---
## Asymptotic Notation
> See also:
> - [[Asymptotes]]
When analyzing a functional representation of an algorithm’s performance, we are interested in which term grows at the fastest rate as the input size increases.
There are 5 asymptotic notations that assist with describing an algorithm’s behavior; the three major ones (Big-Omicron, Big-Omega, and Big-Theta) are covered in the sections below. The most common growth rates are:
| Type | Notation |
| :--: | :--: |
| Exponential Time | $O(2^n)$ |
| Polynomial Time | $O(n^2)$ |
| Linearithmic Time | $O(n \log n)$ |
| Linear Time | $O(n)$ |
| Logarithmic Time | $O(\log n)$ |
| Constant Time | $O(1)$ |
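A hedged sketch of what each growth rate tends to look like in code (the function names and bodies here are illustrative choices, not canonical examples):
```python
def constant(items):        # O(1): a fixed number of primitive operations
    return items[0]

def logarithmic(n):         # O(log n): the remaining work halves each step
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear(items):          # O(n): one pass over the input
    return sum(items)

def linearithmic(items):    # O(n log n): e.g. comparison-based sorting
    return sorted(items)

def quadratic(items):       # O(n^2): a full pass over the input per element
    return [(a, b) for a in items for b in items]

def exponential(n):         # O(2^n): naive Fibonacci branches twice per call
    return 1 if n < 2 else exponential(n - 1) + exponential(n - 2)
```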
> [!NOTE] **Limitations of Asymptotic Notation**
> - algorithms can contain strange behaviors/irregularities that make any predicted performance growth rate inaccurate
> - I feel like an example would be running out of memory and needing to wait for space to be freed up, but it’s fair to isolate the algorithm’s input-to-performance rate from factors such as this (we have space complexity analysis for a reason)
thoughts
- some of this is
### Big-Omicron Notation
Big-Omicron Notation (the head honcho):
- Upper Bound
> [!NOTE]+ Formal Definition of Big-Omicron
> Let $c, n_0 \in \mathbb{R}^+$ (we cannot have a negative input)
> $T(n) \in O(f(n))$ means there exist some $c, n_0 > 0$ such that, for all $n \ge n_0$, $T(n) \le c \cdot f(n)$
>
> $O(f(n))=\{T(n):$ there exists a positive constant $c$ ($c > 0$) and $n_0 \ge 0$ such that $0 \le T(n) \le c \cdot f(n)$ for all $n \ge n_0\}$
>
> ![[Pasted image 20240111183233.png|200]]
>
> ---
> Note that the $f(n)$ function describes the “growth rate” of the asymptotic notation.
Based on the value of $c$ that is chosen, you have to ensure that the relation $T(n) \le c \cdot f(n)$ holds true for all values of $n$ after $n_0$.
This can be done by evaluating both sides of the inequality described in the formal definition above.
We want to find values of $n$ (which must be $\ge n_0$) that keep the inequality true
- Ex: $16 \le n$ would imply an $n_0$ of 16
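A worked sketch (this specific $T(n)$ is my own choice, picked so that it reproduces the $16 \le n$ example above): to show $T(n) = 3n + 16 \in O(n)$, pick $c = 4$ and solve for the values of $n$ that keep the inequality true:
$$
\begin{aligned}
T(n) &\le c \cdot f(n) \\
3n + 16 &\le 4n \\
16 &\le n
\end{aligned}
$$
The inequality holds for all $n \ge 16$, so $n_0 = 16$.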
### Big-Omega Notation
Big-$\Omega$ Notation:
- Lower Bound
Very similar to Big-Omicron notation, but it represents the *lower bound* of the function: there exist some $c, n_0 > 0$ such that, for all $n \ge n_0$,
$0 \le c \cdot f(n) \le T(n)$
![[Pasted image 20240111185621.png|200]]
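For example (an illustrative $T(n)$, not one taken from these notes): $T(n) = 3n^2 + 2n \in \Omega(n^2)$, since picking $c = 3$ gives
$$
0 \le 3n^2 \le 3n^2 + 2n \quad \text{for all } n \ge 1
$$
so $n_0 = 1$ works.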
### Big-Theta Notation
Big-$\Theta$ Notation:
- Upper & Lower Bound (Strict-Tight Bound)
Big-Theta ($\Theta$) is often considered the best representation of an algorithm’s performance
> [!NOTE]+ Formal Definition of Big-Theta
> $T(n)\in \Theta (f(n))$ means there exist *two different positive constants* $c_1$ and $c_2$, such that:
> $0 \le c_1 \cdot f(n) \le T(n) \le c_2 \cdot f(n)$
>
> for values of $n$ greater than some $n_0$ (initial input size for which the bounds apply)
Note that the function $f(n)$ used in both the upper and lower bound applied by Big-Theta is identical. Only the constants, $c_1$ & $c_2$, are changed, which affects the “scaling” of the bounds and adds a sort of “restriction” that describes the behavior of the algorithm’s function
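Continuing the same illustrative $T(n) = 3n^2 + 2n$ from the Big-Omega example: the same $f(n) = n^2$ works for both bounds, and only the constants differ, e.g. $c_1 = 3$ and $c_2 = 5$:
$$
0 \le 3n^2 \le 3n^2 + 2n \le 5n^2 \quad \text{for all } n \ge 1
$$
so $T(n) \in \Theta(n^2)$ with $n_0 = 1$.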
It’s important to note that the tight-bound performance of an algorithm is NOT the same as its average performance.
- This is especially obvious in probabilistic algorithms/data structures
Spatial Complexity = Auxiliary Space + Input Space
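A minimal sketch of the distinction (the function here is a hypothetical example):
```python
def sorted_copy(values):
    """Spatial complexity = input space + auxiliary space."""
    # Input space: the n-element list `values` itself -> O(n)
    result = list(values)   # auxiliary space: a second n-element list -> O(n)
    result.sort()           # sorting may use additional O(n) auxiliary space
    return result           # total spatial complexity: O(n)
```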