# Dynamic Programming

> it's like recursion but on steroids (and with more discrete math 🤢)

**Dynamic programming** is very similar to the divide-and-conquer technique, but it expands on it by *storing the solutions to subproblems* so that overlapping subproblems are never recomputed.

In dynamic programming, the problem is broken down into simpler subproblems. A DP solution is typically developed in four steps:

1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution, typically in a bottom-up approach (tabulation)
4. Construct an optimal solution from the computed values (optional)
    - Ex: (make-change) when n = 30 and no nickels are available, the constructed solution should be 3 dimes (see the sketch at the end of this section)

https://www.baeldung.com/cs/tabulation-vs-memoization

## Memoization (Top-Down Approach)

- Top-down approach
- Typically recursive
- Used when sub-problems *overlap* with other sub-problems

**Use Cases:**

- When the sub-problems overlap with each other
- When the solution naturally involves recursion

Start from the original problem and break it down into smaller subproblems. During this process, we store the solutions to the subproblems we've already encountered in a cache, which lets us avoid redundant computation.

## Tabulation (Bottom-Up Approach)

- Bottom-up approach
- Typically iterative

**Use Cases:**

- When the sub-problems still overlap, but solutions to *all* of them are needed anyway, so the whole table is filled in a fixed order
- Start from the simplest subproblems and build up to the original problem

(Sketches of both approaches appear at the end of this section.)

## Common Applications

### Matrix Chain Multiplication (MCM)

> See also:
> - [[Matrices]]

Multiplying an $m \times n$ matrix by an $n \times p$ matrix requires $m \times n \times p$ scalar multiplications.

- Scalar refers to a single value (as opposed to a vector, matrix, or tensor)

The MCM problem is to choose the parenthesization of a chain $A_1 A_2 \cdots A_n$ that minimizes the total number of scalar multiplications.

### Longest Common Subsequence

Find the longest subsequence that appears (not necessarily contiguously) in both of two given sequences; the DP table stores the LCS length for every pair of prefixes.

### Finding The Shortest Path

Shortest-path algorithms such as Bellman–Ford and Floyd–Warshall are dynamic programming: the best path to a vertex is built from the best paths to its predecessors.
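As one illustration, here is a minimal sketch of the Floyd–Warshall all-pairs shortest-path DP. The adjacency-matrix representation, function name, and example graph are assumptions made for illustration, not something fixed by these notes.

```python
from math import inf

def floyd_warshall(weights):
    """All-pairs shortest paths via DP (illustrative sketch).

    After iteration k, dist[i][j] is the length of the shortest i -> j path
    that only uses vertices {0, ..., k} as intermediate stops.
    `weights` is an n x n matrix; weights[i][j] = inf means no direct edge.
    """
    n = len(weights)
    dist = [row[:] for row in weights]          # start from the direct-edge costs
    for k in range(n):                          # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                # either skip k, or go i -> k -> j
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist

# Hypothetical 4-vertex directed graph, just for demonstration
W = [
    [0,   3,   inf, 7],
    [8,   0,   2,   inf],
    [5,   inf, 0,   1],
    [2,   inf, inf, 0],
]
print(floyd_warshall(W))  # e.g. dist[1][3] == 3 via 1 -> 2 -> 3
```

The table `dist` is exactly a DP table: the subproblem behind entry `(i, j)` at iteration `k` is "shortest i-to-j path using only the first k vertices as stopovers."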
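Returning to the memoization vs. tabulation comparison above: a minimal sketch of both approaches, using Fibonacci numbers as the running example (the example problem and function names are assumptions for illustration, not part of the original notes).

```python
from functools import lru_cache

# Top-down (memoization): recurse from the original problem,
# caching each subproblem's answer so overlapping calls are not recomputed.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the simplest subproblems
# (fib(0), fib(1)) up to the one we actually want.
def fib_tab(n: int) -> int:
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(30) == fib_tab(30) == 832040
```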
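For the make-change example from step 4 of the list above: a sketch of a bottom-up (tabulation) solution that also reconstructs the answer. The coin denominations {1, 10, 25} (pennies, dimes, quarters, i.e. "no nickels") and the function name are assumptions; with n = 30 the reconstructed solution is three dimes, matching the note above.

```python
def make_change(n: int, coins=(1, 10, 25)):
    """Minimum-coin change via bottom-up DP (illustrative sketch).

    min_coins[a] = fewest coins that sum to amount a.
    choice[a]    = the last coin used for amount a, which lets us
                   construct an optimal solution (step 4 above).
    """
    INF = float("inf")
    min_coins = [0] + [INF] * n
    choice = [0] * (n + 1)
    for amount in range(1, n + 1):
        for c in coins:
            if c <= amount and min_coins[amount - c] + 1 < min_coins[amount]:
                min_coins[amount] = min_coins[amount - c] + 1
                choice[amount] = c
    # Walk the recorded choices backwards to construct an optimal solution
    picked = []
    a = n
    while a > 0:
        picked.append(choice[a])
        a -= choice[a]
    return min_coins[n], picked

# With no nickels available, the optimal way to make 30 is three dimes,
# even though a greedy "take the quarter first" strategy would need 6 coins.
print(make_change(30))  # (3, [10, 10, 10])
```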
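For the matrix chain multiplication section: a sketch of the standard $O(n^3)$ DP that uses the $m \times n \times p$ cost above to pick the cheapest parenthesization. The dimension list and function name here are assumptions for illustration.

```python
import sys

def matrix_chain_order(dims):
    """Minimum scalar multiplications to compute A1 * A2 * ... * An,
    where matrix Ai has dimensions dims[i-1] x dims[i] (illustrative sketch).

    m[i][j] = cheapest cost of multiplying the chain Ai..Aj.
    Splitting at k costs m[i][k] + m[k+1][j] + dims[i-1]*dims[k]*dims[j];
    the last term is the m x n x p cost of the final multiplication.
    """
    n = len(dims) - 1                      # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # chain length being solved
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):          # try every split point
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

# Hypothetical chain A1 (10x30), A2 (30x5), A3 (5x60):
# (A1*A2)*A3 costs 10*30*5 + 10*5*60 = 4500 scalar multiplications,
# while A1*(A2*A3) costs 30*5*60 + 10*30*60 = 27000.
print(matrix_chain_order([10, 30, 5, 60]))  # 4500
```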