You may have heard about it while preparing for technical interviews, fought through it in a data structures course, or run into it while learning to code on your own; somewhere along the line, someone told you that dynamic programming is crucial to understand.

Yes, dynamic programming is as important as it is feared. But don’t worry: you’ve come to the right place, and believe me, reading on will be worth your time.

In this post, we will cover everything from the ground up: a quick overview of dynamic programming and the ordered method for solving its problems, including recursion, memoization, and tabulation, illustrated with an example. We will also list the major differences between the dynamic programming approach and the greedy approach.

Let us begin with the most essential question:

## What is Dynamic Programming?

Dynamic programming is a technique for solving problems by breaking them down into smaller subproblems and solving those subproblems in a way that their solutions can be reused later. This approach is used in optimization problems where the goal is to find the best solution given some constraints.

Dynamic programming uses a table or an array to store the solutions to subproblems. This table is called a memo table or a cache. When a subproblem is solved, its solution is stored in the memo table so that it can be reused later instead of recomputing the solution.

The key idea behind dynamic programming is to avoid redundant calculations by remembering the solutions to subproblems that have already been computed. This significantly reduces the computational time needed to solve a problem, especially for large problems.

Dynamic programming can be applied in two ways: top-down and bottom-up. In the top-down approach, a problem is solved by breaking it down into sub-problems and recursively solving those subproblems. In the bottom-up approach, the subproblems are solved first, and the solution to the problem is obtained by combining the solutions to the subproblems.

Dynamic programming is often used in problems involving sequence or chain-like structures. For example, the Longest Common Subsequence problem can be solved using dynamic programming. It involves finding the longest subsequence that appears in both strings. Another example is the Knapsack problem, where the goal is to find the items to be included in a knapsack to maximize its value while not exceeding its capacity.
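To make the Longest Common Subsequence example concrete, here is a minimal bottom-up sketch in Java (the class and method names are illustrative). The idea is that `dp[i][j]` holds the LCS length of the first `i` characters of `a` and the first `j` characters of `b`:

```java
class LCSExample {
    static int lcs(String a, String b) {
        // dp[i][j] = length of the LCS of a[0..i) and b[0..j)
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1))
                    dp[i][j] = dp[i - 1][j - 1] + 1;              // extend the common subsequence
                else
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop one character
            }
        }
        return dp[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(lcs("ABCBDAB", "BDCABA")); // prints 4
    }
}
```

Each cell depends only on cells above and to the left, so filling the table row by row solves every subproblem exactly once.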

To solve a dynamic programming problem, we should follow a defined path to reach the optimal solution. An ordered way to solve is given below:

**Step 1:** Write a recursive solution along with the appropriate base cases.

**Step 2:** Memoize the recursive solution.

**Step 3:** Convert the memoized solution to tabulation.

These Steps are explained below briefly:

### Recursion

In recursion, a problem is solved by breaking it down into smaller subproblems and solving those subproblems recursively. The solution to the original problem is obtained by combining the solutions to the subproblems.

Recursion is the first step in solving any dynamic programming problem. The recursive solution obtained in this step can be further optimized by applying the concepts of memoization and tabulation.

### Memoization

Memoization is a technique used in dynamic programming to reduce the computational time needed to solve a problem by avoiding redundant calculations. The idea behind memoization is to store the solutions to subproblems in a cache or a memo table so that they can be reused later instead of recomputing the solution.

In memoization, when a subproblem is solved, its solution is stored in the memo table. When the same subproblem is encountered again, its solution is retrieved from the memo table instead of being recomputed. This significantly reduces the computational time needed to solve the problem, especially for large problems.

### Tabulation

In dynamic programming, tabulation refers to a technique where the intermediate results of subproblems are stored in a table to avoid redundant computations. This approach is used to solve optimization problems where the solution can be constructed from solutions to subproblems. By storing the results of subproblems in a table, we can avoid recalculating them when they are needed again, leading to a significant improvement in efficiency.

The process of tabulation in dynamic programming starts by breaking down the problem into smaller subproblems and solving them in a bottom-up manner. The solutions to subproblems are stored in a table, and the final solution to the main problem is constructed using these stored solutions.

Now, let us understand each of these steps using the example of the Fibonacci Series.

## The Fibonacci Series

The Fibonacci series is defined as the sequence:

`0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 ....`

We can notice that, except for the first two numbers, every number in the series is the sum of the preceding two. The first and second numbers are 0 and 1, respectively. The third number is 1, the sum of the two preceding numbers, 0 and 1. The fourth number is 2, the sum of 1 and 1. The series continues in this manner.

Now consider the function F(n), which returns the nth integer in the Fibonacci series. For this, we may devise a recursive formula as

```
F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)
```

### Solving Fibonacci Series Recursively

To solve a problem recursively, we must divide it into related subproblems and define base cases. Here, the base cases are F(0) = 0 and F(1) = 1.

The Java Code for this looks like

```java
class Main {
    public static int F(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return F(n - 1) + F(n - 2);
    }

    public static void main(String[] args) {
        int n = 5;
        System.out.println(F(n));
    }
}
```

**Output:**

`5`

In this implementation, the function makes two recursive calls per invocation, leading to an exponential increase in computation time as n grows. This is easier to see from the recursion tree:

## Recursion Tree for finding F(5)

F(5) calls F(4) and F(3); F(4) in turn calls F(3) and F(2); F(3) calls F(2) and F(1); and so on down to the base cases. Drawing these calls out as a tree makes the redundancy obvious: F(3) is computed twice, F(2) three times, F(1) five times, and F(0) three times, all to obtain the single value F(5).

Let us get rid of this recalculation of the same Fibonacci number multiple times.

## Memoization Approach (Top-Down) of Dynamic Programming

Now to get rid of the redundant calculation, we will store the value of subproblems in an array, and later we will use these values stored in the array instead of recalculating the number using the recursion. This helps in the significant reduction of runtime from exponential to linear time complexity.

The Java Code for Memoized solution is given below:

```java
import java.util.*;

class Main {
    public static int F(int n, int[] dp) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        if (dp[n] != -1) return dp[n];
        return dp[n] = F(n - 1, dp) + F(n - 2, dp);
    }

    public static void main(String[] args) {
        int n = 5;
        int[] dp = new int[n + 1];
        Arrays.fill(dp, -1);
        System.out.println(F(n, dp));
    }
}
```

**Output:**

`5`

In this implementation, the `dp` array stores the intermediate results. Each time the function is called, it checks the `dp` array before recomputing, which reduces the time complexity from exponential to linear.

With memoization, the tree of subproblems shrinks dramatically: each of F(0) through F(n) is computed only once, and every repeated call returns immediately from the `dp` array.

## Tabulation Approach (Bottom Up) of Dynamic Programming

We’ll restructure the sequence in which we tackle the subproblems using the bottom-up dynamic programming technique.

We’ll calculate F(0), F(1), F(2), and so on, in increasing order of the index.

This allows us to compute the answer to each subproblem exactly once and keep only two intermediate results at a time.

For example, while attempting to determine F(2), we only require the answers to F(1) and F(0). Similarly, for F(3) we only require the solutions to F(2) and F(1). This allows our programs to use less memory.

```java
class Main {
    public static void main(String[] args) {
        int n = 5;
        int[] dp = new int[n + 1];
        dp[0] = 0; // base cases
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2]; // build up from smaller subproblems
        }
        System.out.println(dp[n]);
    }
}
```

**Output:**

`5`

In this implementation, we use an array dp to store the intermediate results and use a loop to build up the solution bottom-up, starting from the base cases and progressing towards the solution for n. This approach is more efficient than the recursive solution, as it avoids the overhead of repeated function calls and has a time complexity of O(n).
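Since dp[i] depends only on dp[i-1] and dp[i-2], the table can be replaced by two variables, reducing the space from O(n) to O(1). A space-optimized sketch (the class and variable names are illustrative):

```java
class FibOptimized {
    static int fib(int n) {
        if (n <= 1) return n;
        int prev = 0, curr = 1; // F(0) and F(1)
        for (int i = 2; i <= n; i++) {
            int next = prev + curr; // F(i) = F(i-1) + F(i-2)
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(5)); // prints 5
    }
}
```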

Now, let us list the major points of difference between the top-down and bottom-up approaches of dynamic programming.

## Top-Down vs Bottom-Up Approach

The table below summarizes the differences between the two approaches of dynamic programming, i.e., Top-Down and Bottom-Up.

| | Top-Down | Bottom-Up |
|---|---|---|
| Definition | A recursive approach that caches results (memoization) | An iterative approach that builds up the solution (tabulation) |
| Order of solving subproblems | Starts from the original problem and recurses down to the base cases | Starts from the base cases and iterates up to the original problem |
| Data structure | Cache (array or hash table) | Table (array) |
| Time complexity | O(n) for Fibonacci, since each subproblem is solved only once | O(n) for Fibonacci, since each subproblem is solved only once |
| Space complexity | O(n) for the cache, plus O(n) for the recursion stack | O(n) for the table, reducible to O(1) by keeping only the last two values |

## Difference between Dynamic Programming and Greedy Algorithms

Dynamic programming and greedy algorithms are both optimization techniques used in computer science, but they differ in their approach to solving problems.

Dynamic programming is an approach where a problem is divided into smaller subproblems, and the solutions to these subproblems are used to find the solution to the original problem. On the other hand, greedy algorithms are a technique where the solution to a problem is obtained by making the best choice at each step.
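A classic way to see this difference is coin change with the hypothetical denominations {1, 3, 4}: for amount 6, a greedy strategy that always takes the largest coin uses 4 + 1 + 1 (three coins), while dynamic programming finds 3 + 3 (two coins). A minimal sketch in Java (class and method names are illustrative):

```java
import java.util.Arrays;

class CoinChange {
    // Greedy: always take the largest coin that fits (coins sorted descending).
    static int greedy(int[] coins, int amount) {
        int count = 0;
        for (int c : coins)
            while (amount >= c) { amount -= c; count++; }
        return count;
    }

    // DP: dp[i] = minimum number of coins needed to make amount i.
    static int dp(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0; // zero coins make amount 0
        for (int i = 1; i <= amount; i++)
            for (int c : coins)
                if (c <= i && dp[i - c] != Integer.MAX_VALUE)
                    dp[i] = Math.min(dp[i], dp[i - c] + 1);
        return dp[amount];
    }

    public static void main(String[] args) {
        int[] coins = {4, 3, 1};
        System.out.println(greedy(coins, 6)); // 3 coins (4 + 1 + 1)
        System.out.println(dp(coins, 6));     // 2 coins (3 + 3)
    }
}
```

The greedy choice of 4 looks best locally but forces two extra coins, while the DP table considers every combination of subproblems before committing.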

The differences between the two approaches are given below in the tabular form:

| Dynamic Programming | Greedy Algorithms |
|---|---|
| Solves problems by breaking them down into smaller subproblems | Solves problems by making the locally optimal choice at each step |
| Uses a memo table or cache to store the solutions to subproblems | Does not use a memo table or cache |
| Can be applied using a top-down (memoization) or bottom-up (tabulation) approach | Builds the solution iteratively, one choice at a time |
| Examines the consequences of all choices before committing to one | Commits to the locally best choice and never revisits it |
| Used for optimization problems with overlapping subproblems, often with sequential or chain-like structures | Used when a sequence of locally optimal choices leads to a globally optimal solution |
| Usually slower, but guarantees an optimal solution when the problem has optimal substructure | Faster, but may not produce an optimal solution |

## Conclusion

To conclude, Dynamic Programming is a method for solving problems by breaking them down into smaller subproblems and combining their solutions to obtain the solution to the original problem. It is mainly used for solving optimization problems and is applicable when the problem exhibits both overlapping subproblems and an optimal substructure. Dynamic Programming can significantly reduce the time complexity of a problem by avoiding redundant computations and caching the intermediate results. Overall, Dynamic Programming is a powerful technique that can be used to solve a wide range of problems in various fields such as computer science, mathematics, and operations research.