Convergence refers to the limit of a process and can be a useful analytical tool when assessing the expected performance of an optimization algorithm.

It can also be a useful empirical tool when studying the learning dynamics of an optimization algorithm, and of machine learning models trained using an optimization algorithm, such as deep learning neural networks. This motivates the investigation of learning curves and techniques such as early stopping.

If optimization is a process that produces candidate solutions, convergence is a stable point at the end of the process when no new changes or improvements are expected. Premature convergence refers to the failure state of the optimization algorithm where the process stops at a stable point that does not represent a globally optimal solution.

In this tutorial, you will discover a gentle introduction to premature convergence in machine learning.

After completing this tutorial, you will know:

  • Convergence refers to the stable point found at the end of a sequence of solutions via an iterative optimization algorithm.
  • Premature convergence refers to a stable point found too soon, perhaps close to the starting point of the search, and with a worse evaluation than expected.
  • The greediness of an optimization algorithm provides a control over the rate of convergence of the algorithm.

Let’s get started.

A Gentle Introduction to Premature Convergence
Photo by Don Graham, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Convergence in machine learning
  2. Premature convergence
  3. Addressing premature convergence

Convergence in machine learning

Convergence generally refers to the values of a process that tend toward a stable point over time.

It is a useful idea when working with optimization algorithms.

Optimization refers to a type of problem that requires finding a set of inputs that results in the maximum or minimum value of an objective function. Optimization is an iterative process that produces a sequence of candidate solutions until a final solution is reached at the end of the process.

The behavior or dynamics of an optimization algorithm as it settles on a stable-point final solution is referred to as convergence, e.g., the convergence of the optimization algorithm. In this way, convergence defines the termination of the optimization algorithm.

Local descent involves iteratively selecting a descent direction and then taking a step in that direction and repeating that process until convergence or some termination condition is met.

– page 13, Algorithms for Optimization, 2019.

  • Convergence: The stopping condition of an optimization algorithm where a stable point has been located and further iterations of the algorithm are unlikely to result in further improvement.

We can measure and study the convergence of an optimization algorithm empirically, such as by using learning curves. Additionally, we can study the convergence of an optimization algorithm analytically, such as via a convergence proof or average-case computational complexity.
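To make this concrete, the small sketch below (an illustrative example for this tutorial, not taken from any of the referenced books) runs basic gradient descent on the convex objective f(x) = x^2 and records the objective value at each iteration. The recorded values form a learning curve, and the run is treated as converged once the improvement per iteration falls below a small tolerance.

```python
# Minimal sketch: monitor convergence of gradient descent empirically.
# Assumes a simple convex objective f(x) = x^2 with gradient 2x.

def objective(x):
    return x ** 2

def gradient(x):
    return 2.0 * x

step_size = 0.1       # learning rate (controls greediness of the search)
tolerance = 1e-8      # stop when the improvement falls below this value
x = 5.0               # starting point of the search
history = [objective(x)]

for i in range(1000):
    x = x - step_size * gradient(x)          # take a step downhill
    history.append(objective(x))
    improvement = history[-2] - history[-1]  # change in the objective
    if improvement < tolerance:              # stable point reached: convergence
        print(f"Converged after {i + 1} iterations, f(x) = {history[-1]:.8f}")
        break

# the recorded history is a learning curve that can be plotted to study convergence
```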

Strong selection pressure will result in rapid but possibly premature convergence. Weakening the selection pressure slows down the search process…

– page 78, Evolutionary Computation: A Unified Approach, 2002.

Optimization and the convergence of optimization algorithms are important concepts in machine learning for those algorithms that fit (learn from) a training dataset via an iterative optimization algorithm, such as logistic regression and artificial neural networks.

As such, we may choose optimization algorithms that result in better convergence behavior than other algorithms, or we may spend a lot of time tuning the convergence dynamics (learning dynamics) of an optimization algorithm via its hyperparameters (e.g., the learning rate).

Convergence behavior can be compared, often in terms of the number of iterations of the algorithm required until convergence, the evaluation of the objective function at the stable point found at convergence, and combinations of these concerns.

Premature convergence

Premature convergence refers to the convergence of a process that took place too soon.

In optimization, it refers to an algorithm converging on a stable point with worse performance than expected.

Premature convergence typically affects complex optimization problems where the objective function is non-convex, meaning that the response surface contains many different good solutions (stable points), perhaps with one (or a few) best solutions.

If we take the response surface of an objective function being optimized as a geometrical landscape and we are seeking a minimum of the function, then premature convergence refers to finding a valley close to the starting point of the search that is not as deep as the deepest valley in the problem domain.

For problems with highly multimodal (rugged) fitness landscapes or landscapes that change over time, over-exploitation usually leads to premature convergence to suboptimal peaks in the space.

– page 60, Evolutionary Computation: A Unified Approach, 2002.

In this way, premature convergence describes an optimization algorithm finding a locally optimal solution instead of the globally optimal solution. It is a specific failure case for an optimization algorithm.

  • Premature convergence: Convergence of an optimization algorithm to a worse-than-optimal stable point that is likely close to the starting point of the search.

In other words, convergence signifies the end of the search process, e.g., a stable point was found and further iterations of the algorithm are unlikely to improve upon the solution. Premature convergence refers to reaching this stopping condition of an optimization algorithm at a less-than-desirable stable point.
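As a hedged illustration (a toy example for this tutorial, not drawn from any referenced book), the sketch below runs plain gradient descent on a hypothetical one-dimensional multimodal objective. Starting far from the global optimum, the greedy downhill search settles in the nearest shallow valley rather than the deepest one.

```python
import math

# Hypothetical multimodal objective (1D Rastrigin-like function):
# many local minima, single global minimum at x = 0 with f(0) = 0.
def objective(x):
    return x ** 2 - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def gradient(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

x = 3.0               # starting point, far from the global optimum
step_size = 0.001     # small, greedy downhill steps only

for _ in range(2000):
    x = x - step_size * gradient(x)

# The search settles in the nearest valley (around x = 3, f(x) close to 9)
# rather than the deepest valley at x = 0: premature convergence.
print(f"converged to x = {x:.3f}, f(x) = {objective(x):.3f}")
print(f"global optimum: x = 0.000, f(x) = {objective(0.0):.3f}")
```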

Addressing premature convergence

Premature convergence can be a significant concern in reasonably challenging optimization tasks.

For example, a great deal of research in evolutionary computation and genetic algorithms involves identifying and overcoming the premature convergence of the algorithm on an optimization task.

If selection focuses on the most-fit individuals, the selection pressure may cause premature convergence due to reduced diversity of the new populations.

– page 139, Computational Intelligence: An Introduction, 2nd edition, 2007.

Population-based optimization algorithms, such as evolutionary algorithms and swarm intelligence, often describe their dynamics in terms of the interplay between selective pressure and convergence. For example, strong selective pressure results in faster convergence and likely premature convergence. Weaker selective pressure may result in slower convergence (greater computational cost), although perhaps a better or even globally optimal solution will be located.
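As a concrete, hedged illustration (a toy sketch for this tutorial, not code from any referenced book), tournament selection in a simple genetic algorithm exposes selective pressure through a single parameter, the tournament size: larger tournaments pick the fittest individuals more often and reduce the diversity of the selected parents more quickly.

```python
import random

# Toy sketch of tournament selection: the tournament size k controls
# selective pressure. Larger k means stronger pressure (faster loss of
# diversity, higher risk of premature convergence).

def tournament_select(population, fitness, k):
    # sample k candidates at random and return the fittest one
    candidates = random.sample(population, k)
    return max(candidates, key=fitness)

# hypothetical population of candidate solutions and a toy fitness function
population = [random.uniform(-5.0, 5.0) for _ in range(100)]
fitness = lambda x: -x ** 2   # maximizing fitness = minimizing x^2

for k in (2, 10, 50):
    # select a new generation of parents with tournament size k
    parents = [tournament_select(population, fitness, k) for _ in range(100)]
    diversity = len(set(parents))
    print(f"tournament size {k}: {diversity} unique parents selected")
```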

An operator with high selective pressure decreases diversity in the population more rapidly than operators with low selective pressure, which may lead to premature convergence to suboptimal solutions. A high selective pressure limits the exploration abilities of the population.

– page 135, Computational Intelligence: An Introduction, 2nd edition, 2007.

This idea of selective pressure is helpful more generally in understanding the learning dynamics of optimization algorithms. For example, an optimization algorithm configured to be excessively greedy (e.g., via hyperparameters such as the step size or learning rate) may fail due to premature convergence, whereas the same algorithm configured to be less greedy may overcome premature convergence and discover a better or even globally optimal solution.
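To make the effect of greediness concrete, the sketch below (another toy example for this tutorial, reusing the hypothetical multimodal objective from earlier) runs a simple stochastic hill climber twice: once with very small perturbations that can only exploit the nearest valley, and once with larger perturbations that can step over shallow valleys and typically locate a deeper one.

```python
import math
import random

def objective(x):
    # same hypothetical 1D multimodal objective as in the earlier sketch
    return x ** 2 - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def hill_climb(step_std, iterations=2000, start=3.0, seed=1):
    # stochastic hill climbing: accept a random move only if it improves
    rng = random.Random(seed)
    best_x, best_f = start, objective(start)
    for _ in range(iterations):
        candidate = best_x + rng.gauss(0.0, step_std)
        f = objective(candidate)
        if f < best_f:                     # greedy acceptance rule
            best_x, best_f = candidate, f
    return best_x, best_f

# tiny steps: the search typically stays in the valley near the start point
print("greedy (std=0.01):", hill_climb(0.01))
# larger steps: the search can jump over shallow valleys toward the global optimum
print("less greedy (std=1.0):", hill_climb(1.0))
```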

Premature convergence can occur when using stochastic gradient descent to train a neural network model, signified by a learning curve that drops exponentially quickly and then stops improving.

The number of updates required to reach convergence usually increases with training set size. However, as m approaches infinity, the model will eventually converge to its best possible test error before SGD has sampled every example in the training set.

– page 153, Deep Learning, 2016.

Premature convergence when fitting neural networks motivates the use of methods such as learning curves to monitor and diagnose issues with the convergence of a model on the training and validation datasets, and techniques such as early stopping, which halts the optimization algorithm before a stable point is located if continued training comes at the expense of worse performance on a holdout dataset.
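For example, a minimal sketch of early stopping with the Keras API is shown below; the network architecture, synthetic data, and hyperparameter values are illustrative assumptions for this tutorial, not recommendations.

```python
# Minimal sketch: early stopping on a small Keras model trained on
# synthetic data (all names and values here are illustrative assumptions).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# synthetic binary classification data
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")
X_train, X_val = X[:800], X[800:]
y_train, y_val = y[:800], y[800:]

model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = EarlyStopping(
    monitor="val_loss",          # watch the holdout (validation) loss
    patience=10,                 # tolerate 10 epochs without improvement
    restore_best_weights=True,   # roll back to the best weights seen
)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=500,
    callbacks=[early_stop],
    verbose=0,
)

# history.history["loss"] and history.history["val_loss"] are learning
# curves that can be plotted to study convergence and diagnose problems.
print("stopped after", len(history.history["loss"]), "epochs")
```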

As such, a lot of research into deep learning neural networks is ultimately directed at overcoming premature convergence.

Empirically, it is often found that ‘tanh’ activation functions give rise to faster convergence of training algorithms than logistic functions.

– page 127, Neural Networks for Pattern Recognition, 1995.

This includes techniques such as weight initialization, which is critical because the initial weights of a neural network define the starting point for the optimization process, and poor initialization can lead to premature convergence.

The initial point can determine whether the algorithm converges at all, with some initial points being so unstable that the algorithm encounters numerical difficulties and fails altogether.

– page 301, Deep Learning, 2016.
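As a small, hedged example, in Keras the starting point of the search can be influenced via the kernel_initializer argument of a layer; the specific scheme below is an illustrative assumption, not a prescription.

```python
# Sketch: choosing the weight initialization scheme for a layer in Keras,
# which sets the starting point of the optimization process.
from tensorflow.keras.layers import Dense

# He initialization is commonly paired with ReLU activations; a poor choice
# (e.g. all zeros) can stall the search or encourage premature convergence.
layer = Dense(64, activation="relu", kernel_initializer="he_normal")
```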

This also includes the large number of variations and extensions of the stochastic gradient descent optimization algorithm, such as the addition of momentum so that the algorithm does not overshoot the optimum (stable point), and Adam, which adds an automatically adapted step size hyperparameter (learning rate) for each parameter being optimized, dramatically accelerating convergence.
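As a sketch of how these variations are commonly configured (assuming the Keras API; the hyperparameter values are illustrative assumptions), SGD with momentum and Adam can be set up as follows.

```python
# Sketch: configuring SGD with momentum and Adam in Keras; the specific
# hyperparameter values are illustrative, not recommendations.
from tensorflow.keras.optimizers import SGD, Adam

# momentum smooths the update direction and helps avoid overshooting
sgd_momentum = SGD(learning_rate=0.01, momentum=0.9)

# Adam adapts a per-parameter step size automatically
adam = Adam(learning_rate=0.001)

# either optimizer can then be passed to model.compile(optimizer=..., loss=...)
```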

Further reading

This section provides more resources on the topic if you want to go deeper.

Guides

Books

  • Algorithms for Optimization, 2019.
  • Evolutionary Computation: A Unified Approach, 2002.
  • Computational Intelligence: An Introduction, 2nd edition, 2007.
  • Deep Learning, 2016.
  • Neural Networks for Pattern Recognition, 1995.

Articles

Summary

In this tutorial, you discovered a gentle introduction to premature convergence in machine learning.

Specifically, you learned:

  • Convergence refers to the stable point found at the end of a sequence of solutions via an iterative optimization algorithm.
  • Premature convergence refers to a stable point found too soon, perhaps close to the starting point of the search, and with a worse evaluation than expected.
  • The greediness of an optimization algorithm provides a control over the rate of convergence of the algorithm.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
