What is the difference between dynamic programming and recursion? I have gone through many articles on GeeksforGeeks, TutorialsPoint, and Wikipedia, but it seems to me that they are the same thing.
Can you please explain the difference between dynamic programming and recursion with the example of the Fibonacci series?
Calculating terms in the Fibonacci sequence is very easy, since in fact you only need to remember fib(n-2) and fib(n-1) in order to calculate fib(n). Because it is so easy, any algorithm is going to be extremely simple, so this example blurs the nuances between different dynamic programming paradigms. That being said, the Wikipedia page you mentioned has a nice explanation about Fibonacci: https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_sequence
A function is called recursive if it calls itself during its execution.
A dynamic programming algorithm might be implemented with or without using recursion.
The core of dynamic programming is exploiting the two following facts to write an algorithm:

1. Optimal substructure: if an optimal solution S to a problem P is broken into solutions s1, s2, ... to subproblems p1, p2, ..., then s1, s2, ... are all optimal solutions to their respective subproblems.
2. Overlapping subproblems: the same subproblems come up again and again, so each one can be solved once and its solution reused.

Note that these two facts are not true of all problems. A problem only lends itself to dynamic programming if those two facts apply to it.
A simple example is finding the shortest path from point A to point B: if a shortest path from A to B goes through point C, then the two halves it is made of, from A to C and from C to B, are also shortest paths.
In most situations, you could make recursive calls to solve the subproblems. But a "naive" recursive approach can easily result in an exponential algorithm, because the cascading "in order to solve this problem, I need to solve these two (or more) subproblems" can quickly escalate the number of problems you have to solve. Here is an example with Fibonacci:
fib(5) = fib(4) + fib(3)
    fib(4) = fib(3) + fib(2)
        fib(3) = fib(2) + fib(1)
            fib(2) = fib(1) + fib(0)
                fib(1) = 1
                fib(0) = 0
            fib(1) = 1
        fib(2) = fib(1) + fib(0)
            fib(1) = 1
            fib(0) = 0
    fib(3) = fib(2) + fib(1)
        fib(2) = fib(1) + fib(0)
            fib(1) = 1
            fib(0) = 0
        fib(1) = 1
Here we had to make 15 recursive calls to find fib(5). But notice that there are only 6 different terms in total, fib(0) through fib(5). Surely we could be more efficient by avoiding repeating the same calculations again and again.
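For reference, the naive approach traced above can be sketched in C as follows (the function name fib_naive is my own choice):

```c
#include <assert.h>

/* Naive recursion: recomputes the same subproblems over and over,
 * which gives exponential running time. */
int fib_naive(int n) {
    if (n < 2)
        return n;  /* fib(0) = 0, fib(1) = 1 */
    return fib_naive(n - 2) + fib_naive(n - 1);
}
```

Each call spawns two more calls until it bottoms out at fib(0) or fib(1), which is exactly the tree shown above.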
To avoid this, dynamic programming algorithms most often amount to filling an array with the solutions to the subproblems. Once you've identified the list of subproblems and the array, there might not be much incentive to go "top-down" with recursive calls that start with the largest problem and successively break it down into smaller subproblems. Instead, you can fill the array "bottom-up" starting from the most trivial problems and then using those to solve the more complex problems, until you've made it up to the problem you originally wanted to solve. In the case of the fibonacci sequence, you might end up with the following code:
int fib(int n) {
    int f[n + 2];  /* n + 2 so f[1] exists even when n is 0 */
    f[0] = 0;
    f[1] = 1;
    for (int k = 2; k <= n; k++)
        f[k] = f[k - 2] + f[k - 1];
    return f[n];
}
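The "top-down" alternative mentioned above would keep the recursive calls but cache each result so every subproblem is solved at most once. A minimal sketch, assuming a fixed upper bound MAX_N and names of my own choosing:

```c
#include <assert.h>

#define MAX_N 1000  /* assumed upper bound, for illustration only */

static int memo[MAX_N + 1];
static int memo_set[MAX_N + 1];  /* 1 if memo[n] has been computed */

/* Top-down dynamic programming (memoization): recurse as in the
 * naive version, but store each result the first time it is found. */
int fib_memo(int n) {
    if (n < 2)
        return n;
    if (!memo_set[n]) {
        memo[n] = fib_memo(n - 2) + fib_memo(n - 1);
        memo_set[n] = 1;
    }
    return memo[n];
}
```

This does the same work as the bottom-up loop, just in the order dictated by the recursion rather than by the loop index.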
However, in the case of the Fibonacci sequence, you only need to remember the last two terms at any time, so instead of filling a full array with all terms from fib(0) to fib(n), you can simply keep two variables (or a size-2 array) with the previous two results. Arguably this is still dynamic programming, although it ends up simply being a loop that calculates the terms of the sequence in order, and it's hard to see anything "dynamic" about it.
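That two-variable loop might look like this (a sketch; fib_iter is an assumed name):

```c
#include <assert.h>

/* Bottom-up with O(1) space: only the previous two terms are kept. */
int fib_iter(int n) {
    int prev = 0, curr = 1;  /* fib(0) and fib(1) */
    for (int k = 2; k <= n; k++) {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return n == 0 ? 0 : curr;
}
```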