
Dynamic Programming

Last Updated on July 3, 2023 by Mayank Dham


Whether you heard about it while preparing for technical interviews, struggled through it in a data structures course, or learned to code on your own, somewhere along the way someone must have told you that dynamic programming is critical to understand. Yes, dynamic programming methods are both important and feared. But do not worry, you have come to the right place, and believe me, it will be worth the read. Today, with the help of an example, we will cover everything from the ground up: a quick overview of dynamic programming and the ordered method for solving its problems, namely recursion, memoization, and tabulation. We will also discuss the key differences between the dynamic programming and greedy approaches.

Let us begin with the most essential question:

What is Dynamic Programming?

Dynamic programming is a technique for solving problems by breaking them down into smaller subproblems and solving those subproblems in a way that their solutions can be reused later. This approach is used in optimization problems where the goal is to find the best solution given some constraints.

Dynamic programming uses a table or an array to store the solutions to subproblems. This table is called a memo table or a cache. When a subproblem is solved, its solution is stored in the memo table so that it can be reused later instead of recomputing the solution.

The key idea behind dynamic programming is to avoid redundant calculations by remembering the solutions to subproblems that have already been computed. This significantly reduces the computational time needed to solve a problem, especially for large problems.

Dynamic programming can be applied in two ways: top-down and bottom-up. In the top-down approach, a problem is solved by breaking it down into sub-problems and recursively solving those subproblems. In the bottom-up approach, the subproblems are solved first, and the solution to the problem is obtained by combining the solutions to the subproblems.

Dynamic programming is often used in problems involving sequences or chain-like structures. For example, the Longest Common Subsequence problem, which asks for the longest subsequence that appears in two given strings, can be solved using dynamic programming. Another example is the Knapsack problem, where the goal is to choose the items to include in a knapsack so as to maximize their total value without exceeding its capacity.
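To make the first example concrete, here is a minimal bottom-up (tabulation) sketch of the Longest Common Subsequence problem in Java. The sample strings "abcde" and "ace" are our own illustrative choices:

class Main{
    public static int lcs(String a, String b){
        int m = a.length(), n = b.length();
        // dp[i][j] = length of the LCS of the first i characters of a
        // and the first j characters of b
        int[][] dp = new int[m+1][n+1];

        for(int i=1; i<=m; i++){
            for(int j=1; j<=n; j++){
                if(a.charAt(i-1) == b.charAt(j-1))
                    dp[i][j] = dp[i-1][j-1] + 1;                 // characters match: extend the subsequence
                else
                    dp[i][j] = Math.max(dp[i-1][j], dp[i][j-1]); // skip a character from one string
            }
        }
        return dp[m][n];
    }

    public static void main (String[] args){
        System.out.println(lcs("abcde", "ace"));   // prints 3 (the subsequence "ace")
    }
}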

To solve a dynamic programming problem, we should follow a defined path to arrive at the optimal solution. An ordered way to solve one is given below:

Step 1: Write a recursive solution first, along with the appropriate base cases.
Step 2: Memoize the recursive solution.
Step 3: Convert the memoized solution to tabulation.

These steps are briefly explained below:

Recursion

In recursion, a problem is solved by breaking it down into smaller subproblems and solving those subproblems recursively. The solution to the original problem is obtained by combining the solutions to the subproblems.

Recursion is the first step in solving any dynamic programming problem. The recursive solution obtained in this step can be further optimized by applying the concepts of memoization and tabulation.

Memoization

Memoization is a technique used in dynamic programming to reduce the computational time needed to solve a problem by avoiding redundant calculations. The idea behind memoization is to store the solutions to subproblems in a cache or a memo table so that they can be reused later instead of recomputing the solution.

In memoization, when a subproblem is solved, its solution is stored in the memo table. When the same subproblem is encountered again, its solution is retrieved from the memo table instead of being recomputed. This significantly reduces the computational time needed to solve the problem, especially for large problems.

Tabulation

In dynamic programming, tabulation refers to a technique where the intermediate results of subproblems are stored in a table to avoid redundant computations. This approach is used to solve optimization problems where the solution can be constructed from solutions to subproblems. By storing the results of subproblems in a table, we can avoid recalculating them when they are needed again, leading to a significant improvement in efficiency.

The process of tabulation in dynamic programming starts by breaking down the problem into smaller subproblems and solving them in a bottom-up manner. The solutions to subproblems are stored in a table, and the final solution to the main problem is constructed using these stored solutions.

Now, let us understand each of these steps using the example of the Fibonacci Series.

The Fibonacci Series

The Fibonacci series is defined as the sequence:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 ....

We can notice that, except for the first two numbers, every number in the series is the sum of the preceding two numbers. The first and second numbers are 0 and 1, respectively. The third number is 1, the sum of the two preceding numbers, 0 and 1. The fourth number is 2, the sum of the two preceding numbers, 1 and 1. The series continues in this manner.

Now consider the function F(n), which returns the nth number in the Fibonacci series. For this, we can devise the recursive formula:

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)

Solving Fibonacci Series Recursively

To solve a problem recursively, we must divide it into related subproblems and define base cases. Here, the base cases are F(0) = 0 and F(1) = 1.
The Java code for this looks like:

class Main{
    // Returns the nth Fibonacci number using plain recursion
    public static int F(int n){
        if(n==0) return 0;   // base case: F(0) = 0
        if(n==1) return 1;   // base case: F(1) = 1

        // recursive case: F(n) = F(n-1) + F(n-2)
        return F(n-1) + F(n-2);
    }

    public static void main (String[] args) {
        int n = 5;
        System.out.println(F(n));
    }
}

Output:

5

In this implementation, the function calls itself twice on each call, leading to an exponential increase in computation time as n grows. This is better understood using the recursion tree given below:

Recursion Tree for finding F(5)

The recursion tree for finding the value of F(5) will look like:

It is clearly visible from the above recursion tree that we are computing the same Fibonacci numbers multiple times on the way to our final solution.

Let us get rid of this repeated recalculation.

Memoization Approach (Top-Down) of Dynamic Programming

Now, to get rid of the redundant calculation, we will store the values of subproblems in an array, and later we will use these stored values instead of recalculating them through recursion. This reduces the runtime significantly, from exponential to linear time complexity.

The Java code for the memoized solution is given below:

import java.util.*;

class Main{
    // Returns the nth Fibonacci number, caching results in dp[]
    public static int F(int n, int[] dp){
        if(n==0) return 0;   // base case: F(0) = 0
        if(n==1) return 1;   // base case: F(1) = 1

        // if this subproblem has already been solved, reuse the cached answer
        if(dp[n] != -1) return dp[n];

        // otherwise compute it, store it in the memo table, and return it
        return dp[n] = F(n-1,dp) + F(n-2,dp);
    }

    public static void main (String[] args){
        int n = 5;
        int dp[] = new int[n+1];
        Arrays.fill(dp,-1);   // -1 marks subproblems that are not yet solved
        System.out.println(F(n,dp));
    }
}

Output:

5

In this implementation, the “dp” array is used to store intermediate results and avoid redundant calculations. Each time the function is called, it checks the “dp” array before computing the result, reducing the time complexity from exponential to linear.

By memoization, the tree of subproblems shrinks as shown below:

Tabulation Approach (Bottom-Up) of Dynamic Programming

We’ll restructure the order in which we tackle the subproblems using the bottom-up dynamic programming technique.

We’ll calculate F(0), F(1), F(2), and so on, as shown below:

This allows us to compute the answer to each subproblem exactly once and keep only two intermediate results at a time.

For example, while attempting to determine F(2), we just require the answers to F(1) and F(0). Similarly, for F(3), we just require the solutions to F(2) and F(1). This allows us to use less memory in our programs (a space-optimized sketch is given after the tabulation code below).

class Main{
    public static void main (String[] args){
        int n = 5;
        int dp[] = new int[n+1];

        dp[0] = 0;   // base case: F(0) = 0
        dp[1] = 1;   // base case: F(1) = 1

        // build the table bottom-up: each entry is the sum of the previous two
        for(int i=2; i<=n; i++){
            dp[i] = dp[i-1] + dp[i-2];
        }

        System.out.println(dp[n]);
    }
}

Output:

5

In this implementation, we use an array dp to store the intermediate results and use a loop to build up the solution bottom-up, starting from the base cases and progressing towards the solution for n. This approach is more efficient than the recursive solution, as it avoids the overhead of repeated function calls and has a time complexity of O(n).
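As noted above, each entry of the table depends only on the previous two, so the full array is not strictly necessary. Here is a minimal space-optimized sketch of the same bottom-up idea, using O(1) extra space; the variable names prev1, prev2, and curr are our own illustrative choices:

class Main{
    public static void main (String[] args){
        int n = 5;

        // prev2 holds F(i-2) and prev1 holds F(i-1); start from the base cases
        int prev2 = 0, prev1 = 1;

        for(int i=2; i<=n; i++){
            int curr = prev1 + prev2;   // F(i) = F(i-1) + F(i-2)
            prev2 = prev1;              // slide the two-value window forward
            prev1 = curr;
        }

        System.out.println(n == 0 ? 0 : prev1);   // prints 5 for n = 5
    }
}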

Now, let us enlist major points of difference between Top-Down and Bottom-Up Approaches of Dynamic Programming.

Top-Down vs Bottom-Up Approach

The table below summarizes the differences between the two approaches of dynamic programming, i.e., Top-Down and Bottom-Up.

| Aspect | Top-Down | Bottom-Up |
|---|---|---|
| Definition | A recursive approach that caches results | An iterative approach that builds up the solution |
| Order of solving subproblems | Solves subproblems recursively, as they are encountered | Solves the smallest subproblems first, in a bottom-up fashion |
| Data structure | Cache (array or hash table) | Table (array) |
| Time complexity | O(n) when each of the n subproblems is solved once, plus recursion overhead | O(n) for n subproblems |
| Space complexity | O(n) for the cache, plus the recursion stack | O(n) for the table |

Difference between Dynamic Programming and Greedy Algorithms

Dynamic programming and greedy algorithms are both optimization techniques used in computer science, but they differ in their approach to solving problems.

Dynamic programming is an approach where a problem is divided into smaller subproblems, and the solutions to these subproblems are used to find the solution to the original problem. On the other hand, greedy algorithms are a technique where the solution to a problem is obtained by making the best choice at each step.
The differences between the two approaches are given below in tabular form:

| Dynamic Programming | Greedy Algorithms |
|---|---|
| Solves problems by breaking them down into smaller subproblems | Solves problems by making the locally optimal choice at each step |
| Uses a memo table or cache to store the solutions to subproblems | Does not use a memo table or cache |
| Can be applied using a top-down or bottom-up approach | Typically proceeds one choice at a time, from the start of the problem |
| Focuses on finding the best solution given some constraints | Focuses on finding a solution that optimizes a specific criterion at each step |
| Used for optimization problems with sequential or chain-like structures | Used for optimization problems with multiple choices at each step |
| Can be slower but guarantees an optimal solution when the problem has optimal substructure | Faster but may not provide an optimal solution |
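As a concrete illustration of the last row, here is a small Java sketch using the classic coin-change problem; the denominations {1, 3, 4} and the target amount 6 are our own illustrative choices. Greedy (always take the largest coin that fits) uses three coins (4 + 1 + 1), while dynamic programming finds the optimal two coins (3 + 3):

import java.util.*;

class Main{
    // Greedy: repeatedly take the largest coin that still fits
    static int greedyCoins(int[] coins, int amount){
        int count = 0;
        for(int i = coins.length - 1; i >= 0; i--){   // coins are sorted ascending
            while(amount >= coins[i]){
                amount -= coins[i];
                count++;
            }
        }
        return count;
    }

    // Dynamic programming (tabulation): dp[a] = minimum coins needed for amount a
    static int dpCoins(int[] coins, int amount){
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0;                                    // base case: 0 coins make amount 0
        for(int a = 1; a <= amount; a++){
            for(int c : coins){
                if(a >= c && dp[a - c] != Integer.MAX_VALUE)
                    dp[a] = Math.min(dp[a], dp[a - c] + 1);
            }
        }
        return dp[amount];
    }

    public static void main (String[] args){
        int[] coins = {1, 3, 4};
        System.out.println(greedyCoins(coins, 6));   // 3 coins: 4 + 1 + 1
        System.out.println(dpCoins(coins, 6));       // 2 coins: 3 + 3
    }
}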

Conclusion
To summarize, Dynamic Programming is a problem-solving method that divides problems into smaller subproblems and then combines their solutions to obtain the solution to the original problem. It is primarily used to solve optimization problems and is useful when the problem has overlapping subproblems as well as an optimal substructure. By avoiding redundant computations and caching intermediate results, Dynamic Programming can significantly reduce the time complexity of a problem. Overall, Dynamic Programming is a powerful technique for solving a wide range of problems in fields such as computer science, mathematics, and operations research.

Frequently Asked Questions (FAQs)

Q1. What is dynamic programming, and when should I consider using it?
Dynamic programming is a technique used in computer programming to solve complex problems by breaking them down into simpler overlapping subproblems. It is most effective when the problem exhibits optimal substructure and overlapping subproblems. Consider using dynamic programming when you encounter a problem that can be solved recursively, and you can efficiently store and retrieve the results of subproblems.

Q2. What is the difference between dynamic programming and divide-and-conquer?
Dynamic programming and divide-and-conquer are both problem-solving techniques. The key difference is that dynamic programming breaks a problem into overlapping subproblems and solves each subproblem only once, storing the result for future use. In contrast, divide-and-conquer solves a problem by breaking it into non-overlapping subproblems, often solving each subproblem independently.

Q3. How can I recognize if a problem can be solved using dynamic programming?
Problems that exhibit two main characteristics are good candidates for dynamic programming: optimal substructure and overlapping subproblems. Optimal substructure means that an optimal solution to the problem can be constructed from optimal solutions of its subproblems. Overlapping subproblems occur when the same subproblem is solved multiple times. Identifying these characteristics in a problem often indicates that dynamic programming can be applied.

Q4. What is memoization in dynamic programming?
Memoization is a technique used in dynamic programming to optimize the computation of subproblems. It involves storing the results of expensive function calls and retrieving the cached result when the same inputs occur again. By avoiding redundant computations, memoization can significantly improve the performance of dynamic programming algorithms.

Q5. Are there any downsides or limitations to using dynamic programming?
While dynamic programming is a powerful technique, it may not be suitable for every problem. Some potential downsides include the need for additional memory to store the results of subproblems, the complexity of identifying optimal substructure and overlapping subproblems, and the requirement for problems to have an inherent recursive structure. Additionally, dynamic programming may not always be the most efficient approach for solving a problem, so it’s essential to consider other algorithms and techniques as well.
