Dynamic programming
Dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions, ideally in a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to subproblems instead of recomputing them is called “memoization”. Dynamic programming thus trades a modest amount of computer memory for a potentially large saving in computation time.
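To make memoization concrete, here is a minimal sketch (the Fibonacci function is only an illustrative choice, not part of the text above): the naive recursion recomputes the same subproblems exponentially often, while a cache ensures each is solved exactly once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Return the n-th Fibonacci number.

    Without the cache, fib(n - 1) and fib(n - 2) would redo overlapping
    work; memoization stores each result, indexed by the input parameter
    n, so the running time drops from exponential to O(n) at the cost of
    O(n) extra memory.
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(60))  # 1548008755920, computed with only 61 distinct calls
```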
Dynamic programming algorithms are often used for optimization. A dynamic programming algorithm examines the previously solved subproblems and combines their solutions to give the best solution for the given problem. In comparison, a greedy algorithm treats the solution as a sequence of steps and picks the locally optimal choice at each step. A greedy algorithm does not guarantee an optimal solution, because locally optimal choices may lead to a poor global solution, but it is often faster. Fortunately, some greedy algorithms (such as Kruskal’s or Prim’s for minimum spanning trees) are proven to lead to the optimal solution.
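To see the greedy pitfall concretely, consider a hedged sketch (the denominations {1, 3, 4} and the amount 6 are assumptions chosen purely for illustration): always taking the largest coin that fits pays 6 as 4 + 1 + 1, three coins, even though 3 + 3 needs only two.

```python
def greedy_coin_count(amount: int, denominations: list[int]) -> int:
    """Count the coins used by a largest-coin-first greedy strategy.

    Each step makes the locally optimal choice (the biggest coin that
    still fits); for arbitrary denominations this can miss the global
    optimum.
    """
    coins = 0
    for d in sorted(denominations, reverse=True):
        coins += amount // d
        amount %= d
    return coins

print(greedy_coin_count(6, [1, 3, 4]))  # 3 coins (4 + 1 + 1); optimum is 2 (3 + 3)
```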
For example, in the coin change problem of finding the minimum number of coins of given denominations needed to make a given amount, a dynamic programming algorithm would find an optimal solution for each amount by first finding an optimal solution for each smaller amount and then using these solutions to construct an optimal solution for the larger amount. In addition to finding optimal solutions to some problems, dynamic programming can also be used for counting the number of solutions, for example counting the number of ways a certain amount of change can be made from a given collection of coins, or counting the number of optimal solutions to the coin change problem described above.
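Both uses can be sketched with small bottom-up tables (a sketch under the illustrative denominations above; the function names are assumptions, and amounts that cannot be made are reported as infinity):

```python
import math

def min_coins(amount: int, denominations: list[int]) -> float:
    """Minimum number of coins summing to `amount` (math.inf if impossible).

    best[a] is the optimal answer for the smaller amount a; each entry is
    built from already-solved smaller subproblems, exactly as described
    above.
    """
    best = [0] + [math.inf] * amount
    for a in range(1, amount + 1):
        for d in denominations:
            if d <= a:
                best[a] = min(best[a], best[a - d] + 1)
    return best[amount]

def count_ways(amount: int, denominations: list[int]) -> int:
    """Number of distinct multisets of coins that sum to `amount`."""
    ways = [1] + [0] * amount
    for d in denominations:              # coins outermost, so each
        for a in range(d, amount + 1):   # combination is counted once
            ways[a] += ways[a - d]
    return ways[amount]

print(min_coins(6, [1, 3, 4]))   # 2  (3 + 3)
print(count_ways(6, [1, 3, 4]))  # 4  (1*6, 1+1+1+3, 1+1+4, 3+3)
```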
Sometimes, applying memoization to the naive recursive algorithm (namely the one obtained by a direct translation of the problem into recursive form) already results in a dynamic programming algorithm with asymptotically optimal time complexity, but for optimization problems in general the asymptotically optimal algorithm may require a more sophisticated approach. Some such algorithms may still be recursive (and hence can be memoized) but parametrized differently from the naive algorithm. For other problems the optimal algorithm may not even be a memoized recursive algorithm in any reasonably natural sense. An example of such a problem is the Egg Dropping puzzle.
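To illustrate the first case, here is a top-down sketch (reusing the illustrative coin change problem from above): memoizing the naive recursion over amounts already gives the same asymptotic cost as the bottom-up table, O(amount × number of denominations).

```python
from functools import lru_cache
import math

def min_coins_top_down(amount: int, denominations: tuple[int, ...]) -> float:
    """Naive recursive formulation of min-coin change, made efficient by
    memoization alone.

    Each amount a is solved at most once, so the memoized recursion
    matches the bottom-up table's time complexity.
    """
    @lru_cache(maxsize=None)
    def solve(a: int) -> float:
        if a == 0:
            return 0
        return min((solve(a - d) + 1 for d in denominations if d <= a),
                   default=math.inf)

    return solve(amount)

print(min_coins_top_down(6, (1, 3, 4)))  # 2
```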
Dynamic programming is related to many different fields, including:
- Mathematics
- Management science
- Economics
- Computer science
- Bioinformatics
- Control theory
- Robotics
- …
Algorithms based on dynamic programming
- Recurrent solutions to lattice models for protein-DNA binding
- Backward induction as a solution method for finite-horizon discrete-time dynamic optimization problems
- Method of undetermined coefficients can be used to solve the Bellman equation in infinite-horizon, discrete-time, discounted, time-invariant dynamic optimization problems
- Longest common subsequence, a string algorithm (see the sketch after this list)
- Longest increasing subsequence, a sequence algorithm
- Longest common substring, a string algorithm
- Levenshtein distance (edit distance), a string algorithm
- Many algorithmic problems on graphs can be solved efficiently for graphs of bounded treewidth or bounded clique-width by using dynamic programming on a tree decomposition of the graph.
- The Cocke-Younger-Kasami (CYK) algorithm which determines whether and how a given string can be generated by a given context-free grammar
- Knuth’s word wrapping algorithm that minimizes raggedness when word wrapping text
- The use of transposition tables and refutation tables in computer chess
- The Viterbi algorithm (used for hidden Markov models)
- The Earley algorithm (a type of chart parser)
- The Needleman–Wunsch and other algorithms used in bioinformatics, including sequence alignment, structural alignment, RNA structure prediction
- Floyd’s all-pairs shortest path algorithm (also known as Floyd–Warshall)
- Optimizing the order for chain matrix multiplication
- Pseudo-polynomial time algorithms for the subset sum and knapsack and partition problems
- The dynamic time warping algorithm for computing the global distance between two time series
- The Selinger (a.k.a. System R) algorithm for relational database query optimization
- De Boor algorithm for evaluating B-spline curves
- The Duckworth–Lewis method for resetting run targets when cricket matches are interrupted
- The value iteration method for solving Markov decision processes
- Some graphic image edge following selection methods such as the “magnet” selection tool in Photoshop
- Some methods for solving interval scheduling problems
- Some methods for solving the travelling salesman problem, either exactly (in exponential time) or approximately (e.g. via the bitonic tour)
- Recursive least squares method
- Beat tracking in music information retrieval
- Adaptive-critic training strategy for artificial neural networks
- Stereo algorithms for solving the correspondence problem used in stereo vision
- Seam carving (content aware image resizing)
- The Bellman–Ford algorithm for finding single-source shortest paths, even in graphs with negative edge weights
- Some approximate solution methods for the linear search problem
- Kadane’s algorithm for the maximum subarray problem
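Most entries above fill a table of subproblem solutions in the same spirit; as one representative, here is a hedged sketch of the longest common subsequence computation referenced earlier in this list (the standard O(mn) prefix table; names are illustrative):

```python
def lcs_length(x: str, y: str) -> int:
    """Length of the longest common subsequence of x and y.

    L[i][j] is the LCS length of the prefixes x[:i] and y[:j]; every
    cell is derived from already-computed smaller prefixes, so each of
    the (m + 1) * (n + 1) subproblems is solved exactly once.
    """
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```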
See also
Material
- http://20bits.com/article/introduction-to-dynamic-programming
- Stuart Dreyfus. Richard Bellman on the Birth of Dynamic Programming.
- A Gentle Introduction to Dynamic Programming and the Viterbi Algorithm
- http://apmonitor.com/do/
- http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html
- http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=dynProg
Papers
- Sniedovich, M. (2006), Dijkstra’s algorithm revisited: the dynamic programming connexion, Control and Cybernetics 35 (3): 599–620
Books
- Bertsekas, Dimitri P. (2005). Dynamic Programming And Optimal Control, Vol. 1. Athena Scientific
- Bertsekas, Dimitri P. (2007). Dynamic Programming And Optimal Control, Vol. 2. Athena Scientific