Dynamic Programming

          Dynamic programming is an algorithmic technique that simplifies a complex problem by breaking it into smaller, simpler subproblems. This might sound familiar: the definition is awfully close to that of recursion, and dynamic programming does use recursion from time to time. The main difference is that a recursive algorithm repeats the same procedure on subproblems of the same type, whereas a dynamic programming algorithm caches the result of each subproblem so that every subproblem is solved only once (1). Plain recursion can end up solving the same subproblem many times as it works down the call stack, which for obvious reasons is inefficient; dynamic programming uses various techniques to avoid this.
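To make that difference concrete, here is a quick sketch in Python using the classic Fibonacci numbers (my own choice of illustration, not an example from the sources): the naive recursive version re-solves the same subproblems over and over, while the cached version solves each one exactly once.

```python
from functools import lru_cache

call_count = 0

def fib_naive(n):
    """Plain recursion: the same subproblem is recomputed many times."""
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Dynamic programming via caching: each subproblem is solved once."""
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

fib_naive(20)          # makes 21891 recursive calls for just 21 distinct inputs
print(call_count)      # 21891
print(fib_cached(20))  # 6765, with only 21 subproblems actually computed
```

Same answer, wildly different amounts of work; that gap is the whole motivation for the steps below.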

Step 1

            The steps to creating a dynamic programming algorithm can vary, but the following few will help you get started. First, identify the subproblem; this is likely the most important step. As with recursion, you must think about the problem from the bottom up: how far can you reasonably and efficiently reduce the problem into subproblems (2)? This is easy to mishandle, so I'd advise sketching out graphs and trees on paper. It is easy to make mistakes if you skip straight to coding; you must understand the problem thoroughly before you can code the solution.
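As a running example for these steps (my own illustration, not from the sources): count the ways to climb n stairs taking 1 or 2 steps at a time. Step 1 is noticing the subproblem hiding inside the problem: the answer for n stairs depends only on the answers for n - 1 and n - 2 stairs, because the last move was either a 1-step or a 2-step.

```python
def ways(n):
    """Ways to climb n stairs using steps of size 1 or 2.

    The subproblem: ways(n) = ways(n - 1) + ways(n - 2), since the final
    move either came from stair n - 1 or from stair n - 2.
    """
    if n <= 1:
        return 1  # one way to stand still, one way to climb a single stair
    return ways(n - 1) + ways(n - 2)

print(ways(5))  # 8
```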

Step 2

            Second, find the recurring operations you will be conducting on each subproblem. Whether the algorithm is recursive or not, this step seems more difficult than it is, provided that you completed step 1 successfully (2). This is where drawing out the tree of executions comes in handy: it becomes clear what the base case is going to be and what the recursive case, or sequence of operations, is going to be.
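You can even have the program draw the tree for you. This sketch (again using my hypothetical stair-climbing example) prints the call tree while computing the answer, which makes the base case, the recursive case, and the repeated subtrees easy to spot on screen, just as they would be on paper.

```python
def show_tree(n, depth=0):
    """Compute ways(n) for the stair-climbing recurrence while printing
    the call tree; the repeated subtrees are what step 4 will cache."""
    print("  " * depth + f"ways({n})")
    if n <= 1:  # base case: 0 or 1 stairs left
        return 1
    # recursive case: the last move was either a 1-step or a 2-step
    return show_tree(n - 1, depth + 1) + show_tree(n - 2, depth + 1)

print(show_tree(4))  # 5
```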

Step 3

            Third, use the preceding steps to solve the entire problem. This is a slightly harder step than it seems, as connecting the subproblem logic to solve the larger problem can feel daunting, and edge cases commonly slip through the cracks. But if you have been diligent in the earlier steps, it should not be too difficult.
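Continuing the stair-climbing illustration, here is the full solution assembled from the subproblem logic, with the edge cases that tend to slip through the cracks (zero stairs, and an invalid negative count) handled explicitly.

```python
def ways(n):
    """Ways to climb n stairs taking steps of 1 or 2."""
    if n < 0:
        return 0  # edge case: an invalid state has no ways
    if n <= 1:
        return 1  # edge case: n = 0 counts as one way (do nothing)
    # the subproblem logic from steps 1 and 2, joined to solve the whole problem
    return ways(n - 1) + ways(n - 2)

print(ways(10))  # 89
```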

Step 4

            Lastly, caching, or memoization: this is the step where dynamic programming clearly differs from plain recursion. The cache is usually called a memoization data structure, and is commonly a hash table or an array. It stores the results of the expensive computations in the algorithm, so if you find yourself recomputing results that previous computations already produced, you should set up your memoization structure to save them. The difficult part is deciding the structure's dimensions and the direction in which it should be filled (2). This can be tricky: the dimensions depend on the number and range of the variables in the operation from step 2, and the direction derives from whether you are solving the problem bottom up or top down.
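For the stair-climbing illustration, one variable (n) means a one-dimensional memoization array, and since each entry depends on the two entries before it, the array is filled bottom up, from left to right.

```python
def ways_bottom_up(n):
    """Bottom-up DP for the stair-climbing problem.

    table[i] holds the number of ways to climb i stairs; it is filled
    left to right because table[i] needs only table[i-1] and table[i-2].
    """
    if n <= 1:
        return 1
    table = [0] * (n + 1)
    table[0] = table[1] = 1  # base cases seed the array
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(ways_bottom_up(10))  # 89, same answer, no repeated work
```

Each subproblem is now computed exactly once, so the running time drops from exponential to linear in n.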

Dynamic programming as a concept can be a little tricky to explain, but it makes sense once you start coding an algorithm. So, for next week I will walk through an algorithm and show how to create a dynamic programming version of it. Thanks for reading!

Sources:

  1. https://weibeld.net/algorithms/recursion.html#:~:text=Recursion%3A%20repeated%20application%20of%20the,subproblem%20is%20solved%20only%20once.
  2. https://www.freecodecamp.org/news/demystifying-dynamic-programming-3efafb8d4296/
