Deep Learning Essentials

Optimization algorithms

Optimization is the key to how a network learns. Learning is essentially an optimization process: it minimizes the error (or cost) by adjusting the network coefficients step by step until it reaches a point of least error. A very basic optimization approach is the gradient descent we used in the previous section. However, there are multiple variations that do a similar job with some improvements added. TensorFlow provides multiple optimizers for you to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:

https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers.
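For example, here is a minimal sketch of how one of these optimizers can be plugged into a training loop, assuming a simple linear model with a mean squared error cost; the placeholder names, model variables, and dummy data are illustrative only, not taken from earlier chapters:

import numpy as np
import tensorflow as tf

# Placeholders for input features and target values (names are illustrative)
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
y = tf.placeholder(tf.float32, shape=[None, 1], name="y")

# A single-weight linear model: prediction = x * w + b
w = tf.Variable(tf.zeros([1, 1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")
prediction = tf.matmul(x, w) + b

# Mean squared error as the cost to minimize
loss = tf.reduce_mean(tf.square(prediction - y))

# Any optimizer from tf.train can be swapped in here, for example
# tf.train.AdamOptimizer(0.001) or tf.train.RMSPropOptimizer(0.001)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_step = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy data for illustration only; replace with real training batches
    xs = np.array([[1.0], [2.0], [3.0]])
    ys = np.array([[2.0], [4.0], [6.0]])
    for step in range(100):
        _, current_loss = sess.run([train_step, loss],
                                   feed_dict={x: xs, y: ys})

Swapping in a different optimizer only changes the single line where the optimizer object is constructed; the rest of the training loop stays the same.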

These optimizers should be sufficient for most deep learning techniques. If you aren't sure which one to use, GradientDescentOptimizer is a good starting point.