Optimization

We provide a collection of functions that are commonly used for testing and benchmarking in the numerical optimization literature.


temfpy.optimization.ackley(x, a=20, b=0.2, c=6.283185307179586)[source]

Ackley function.

\[f(x) = -a \exp\left(-b \sqrt{\frac{1}{p} \sum_{i=1}^p x_i^2}\right) - \exp\left(\frac{1}{p} \sum_{i=1}^p \cos(c x_i)\right) + a + \exp(1)\]
Parameters
  • x (array_like) – Input vector with dimension \(p\). The function is usually evaluated on the hypercube \(x_i \in [-32.768, 32.768]\) for all \(i = 1, \dots, p\).

  • a (float, optional) – The default value is 20.

  • b (float, optional) – The default value is 0.2.

  • c (float, optional) – The default value is \(2\pi\).

Returns

Output value

Return type

float

Notes

This function was proposed by David Ackley in [A1987] and used in [B1996] and [M2005]. It is characterized by a nearly flat outer region and a large hole at the center, where the cosine modulations become increasingly influential. The function has its global minimum \(f(x) = 0\) at \(x = \begin{pmatrix}0 & \dots & 0 \end{pmatrix}^T\).

[Figure: Ackley function (fig-ackley.png)]

References

A1987

Ackley, D. H. (1987). A connectionist machine for genetic hillclimbing. Kluwer Academic Publishers.

B1996

Bäck, T. (1996). Evolutionary algorithms in theory and practice: Evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press.

M2005

Molga, M., and Smutnicki, C. (2005). Test functions for optimization needs. Retrieved June 2020, from http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf.

Examples

>>> from temfpy.optimization import ackley
>>> import numpy as np
>>>
>>> x = [0, 0]
>>> y = ackley(x)
>>> np.testing.assert_almost_equal(y, 0)
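
The global minimum can also be recovered numerically. The following sketch continues the example above and assumes that scipy is installed; it uses the derivative-free Nelder-Mead method with a starting point inside the central basin:

>>> from scipy.optimize import minimize
>>>
>>> rslt = minimize(ackley, x0=[0.3, -0.2], method="Nelder-Mead")
>>> np.testing.assert_allclose(rslt.x, [0, 0], atol=1e-3)
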
temfpy.optimization.carlberg(x, a, b)[source]

Carlberg function.

\[f(x) = \frac{1}{2}\sum_{i=1}^p a_i (x_i - 1)^2 + b \left[p - \sum_{i=1}^p \cos(2 \pi(x_i-1)) \right]\]
Parameters
  • x (array_like) – Input vector with dimension \(p\).

  • a (array_like) – Input vector with dimension \(p\).

  • b (float) – Must be non-negative. For more information, see Notes.

Returns

Output value

Return type

float

Notes

If the values in \(a\) are widely spread, the function is ill-conditioned, which makes it hard for Hessian-free numerical methods to make progress in some directions. If \(b=0\) (see the second graph below), the function is convex and smooth and has its minimum at \(x = \begin{pmatrix}1 & \dots & 1 \end{pmatrix}^T\). For \(b>0\) the function is no longer convex and has many local minima (see the first graph below). This makes it hard for local optimization methods to find the global minimum, which remains at \(x = \begin{pmatrix}1 & \dots & 1 \end{pmatrix}^T\).

[Figure: Carlberg function with \(b > 0\) (fig-carlberg-noise.png)]
[Figure: Carlberg function with \(b = 0\) (fig-carlberg-no-noise.png)]

References

C2019

Carlberg, K. (2019). Optimization in Python. Fundamentals of Data Science Summer Workshops, Stanford.

Examples

>>> from temfpy.optimization import carlberg
>>> import numpy as np
>>>
>>> x = [1, 1]
>>> a = [1, 1]
>>> b = 1
>>> y = carlberg(x,a,b)
>>> np.testing.assert_almost_equal(y, 0)
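
For \(b = 0\) the objective reduces to a weighted quadratic, so even a plain gradient-based solver recovers the minimum when the curvatures in \(a\) are widely spread. A minimal sketch, continuing the example above and assuming that scipy is installed:

>>> from scipy.optimize import minimize
>>>
>>> a = [1, 100]  # widely spread curvatures: ill-conditioned but convex for b = 0
>>> rslt = minimize(lambda z: carlberg(z, a, 0), x0=[3, -2])
>>> np.testing.assert_allclose(rslt.x, [1, 1], atol=1e-3)
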
temfpy.optimization.rastrigin(x, a=10)[source]

Rastrigin function.

\[f(x) = a p + \sum_{i=1}^p \left(x_i^2 - a \cos(2\pi x_i)\right)\]
Parameters
  • x (array_like) – Input vector with dimension \(p\). The function is usually evaluated on the hypercube \(x_i\in [-5.12, 5.12]\) for all \(i = 1, \dots, p\).

  • a (float, optional) – The default value is 10.

Returns

Output value

Return type

float

Notes

The function was first proposed by Leonard Rastrigin in [R1974]. It is highly multimodal and produces many local minima, but their locations are regularly distributed. The function has its global minimum \(f(x) = 0\) at \(x = \begin{pmatrix}0 & \dots & 0 \end{pmatrix}^T\).

[Figure: Rastrigin function (fig-rastrigin.png)]

References

R1974

Rastrigin, L. A. (1974). Systems of extremal control. Moscow, Russia.

Examples

>>> from temfpy.optimization import rastrigin
>>> import numpy as np
>>>
>>> x = [0, 0]
>>> y = rastrigin(x)
>>> np.testing.assert_almost_equal(y, 0)
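
With a purely local search, the many local minima are easy to run into. In the sketch below (continuing the example above and assuming that scipy is installed), Nelder-Mead started away from the origin stalls in a nearby local minimum instead of reaching the global minimum at the origin:

>>> from scipy.optimize import minimize
>>>
>>> rslt = minimize(rastrigin, x0=[2.2, 2.2], method="Nelder-Mead")
>>> assert rslt.fun > 1  # trapped in a local minimum, well above the global minimum of 0
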
temfpy.optimization.rosenbrock(x)[source]

Rosenbrock function.

\[f(x) = \sum^{p-1}_{i = 1} \left[100(x_{i+1}-x_i^2)^2 + (1-x_i)^2 \right]\]
Parameters

x (array_like) – Input vector with dimension \(p > 1\).

Returns

Output value

Return type

float

Notes

The function was first proposed by Howard H. Rosenbrock in [R1960] and is often referred to as Rosenbrock’s valley or Rosenbrock’s banana function because of its shape. The function has its global minimum \(f(x) = 0\) at \(x = \begin{pmatrix}1 & \dots & 1 \end{pmatrix}^T\), which lies inside a long, narrow, parabolically shaped valley; finding the valley is easy, but converging to the minimum along it is difficult.

[Figure: Rosenbrock function (fig-rosenbrock.png)]

References

R1960

Rosenbrock, H. H. (1960). An automatic method for finding the greatest or least value of a function. The Computer Journal, 3(3): 175-184.

Examples

>>> from temfpy.optimization import rosenbrock
>>> import numpy as np
>>>
>>> x = [1, 1]
>>> y = rosenbrock(x)
>>> np.testing.assert_almost_equal(y, 0)
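
The narrow, curved valley makes progress slow for many solvers, but the minimum can still be located from the classical starting point \((-1.2, 1)\). A minimal sketch, continuing the example above and assuming that scipy is installed:

>>> from scipy.optimize import minimize
>>>
>>> rslt = minimize(rosenbrock, x0=[-1.2, 1], method="Nelder-Mead")
>>> np.testing.assert_allclose(rslt.x, [1, 1], atol=1e-3)
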