Given a set of parameters and the data (a matrix of features and a response), gradient descent works out a solution by starting from a random set of parameter values. It then proceeds through successive iterations, using feedback from the cost function to change the parameters with values that gradually improve the initial random solution and lower the error.
Even though the optimization may take a large number of iterations before reaching a good mapping, at each iteration it relies on the changes that improve the cost function most (lower the error). Picture a complex optimization process with many local minima (the low points on the curve, marked with letters) where the process can get stuck: after it reaches the deep minimum marked with an asterisk, it no longer continues its descent.

You can visualize the optimization process as a walk in high mountains, with the parameters being the different paths down to the valley. At each iteration, the algorithm chooses the path that reduces the error the most, regardless of the direction taken. The idea is that if the steps aren't too large (causing the algorithm to jump over the target), always following the most downward direction will result in finding the lowest place.
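To make the mechanics concrete, here is a minimal sketch in Python; the cost curve, learning rate, and iteration count are illustrative assumptions, not part of any particular library. It repeatedly nudges a single parameter against the gradient of a bumpy cost curve, the kind of curve that has several local minima.

import numpy as np

def cost(w):
    # Hypothetical one-dimensional cost surface with several local minima.
    return np.sin(3 * w) + 0.5 * w ** 2

def gradient(w):
    # Analytic derivative of the cost above.
    return 3 * np.cos(3 * w) + w

def gradient_descent(start, learning_rate=0.05, iterations=200):
    w = start
    for _ in range(iterations):
        # Step against the gradient: the direction that lowers the cost most.
        w -= learning_rate * gradient(w)
    return w

w_final = gradient_descent(start=2.0)
print(f"final parameter: {w_final:.3f}, cost: {cost(w_final):.3f}")

If the learning rate is too large, the updates overshoot the minimum (the jumping over the target described above); if it is too small, the descent needs many more iterations to arrive.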
Unfortunately, this result doesn't always occur because the algorithm can arrive at intermediate valleys, creating the illusion that it has reached the target. In most cases, however, gradient descent leads the machine learning algorithm to discover the right hypothesis for successfully mapping the problem. A different starting point can make all the difference: starting point x1 ends in a local minimum, whereas points x2 and x3 reach the global minimum.
In an optimization process, you distinguish between different outcomes. You can have a global minimum that's truly the minimum error from the cost function, and you can have many local minima, solutions that seem to produce the minimum error but actually don't (the intermediate valleys where the algorithm gets stuck). As a remedy, given the optimization process's random initialization, running the optimization many times is good practice. Doing so means trying different sequences of descending paths and avoiding getting stuck in the same local minimum.
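The restart remedy is easy to sketch as well, again under the assumptions of the previous example (it reuses the hypothetical cost and gradient_descent functions defined there): the optimization runs from several random starting points and only the lowest-cost solution is kept.

rng = np.random.default_rng(0)

def best_of_restarts(n_restarts=10):
    # Run gradient descent from several random starting points and keep
    # the solution with the lowest cost, so a single unlucky start
    # (a local minimum) does not decide the final answer.
    best_w, best_cost = None, np.inf
    for _ in range(n_restarts):
        start = rng.uniform(-4.0, 4.0)    # random initialization
        w = gradient_descent(start)       # reuses the earlier sketch
        if cost(w) < best_cost:
            best_w, best_cost = w, cost(w)
    return best_w, best_cost

w_best, c_best = best_of_restarts()
print(f"best parameter: {w_best:.3f}, best cost: {c_best:.3f}")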