Bruno Peixoto edited this page Feb 22, 2019 · 16 revisions

Welcome to the Barycenter method wiki!

Overview

This repository implements the method described in https://arxiv.org/abs/1801.10533. The method is categorized as a direct (derivative-free) optimization method, since it requires only pointwise evaluations of the objective function rather than its gradient.

In words, the batch version, given a collection of investigation points, takes the weighted average of their x-coordinates, where each weight is the exponential of the negative function value at that point multiplied by a constant (a method hyperparameter), and returns this average as the optimum estimate for those points.
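The batch rule above can be sketched in a few lines. The repository itself is MATLAB; this is an illustrative Python version, where `nu` stands for the exponential hyperparameter and the function and points are made up for the example:

```python
import numpy as np

def barycenter_batch(f, points, nu):
    """Batch barycenter rule: weighted average of the investigation points,
    with weights exp(-nu * f(x)) favoring points with low function values."""
    points = np.atleast_2d(points)           # shape (n_points, dim)
    values = np.array([f(x) for x in points])
    weights = np.exp(-nu * values)           # nu controls how sharply low values dominate
    return weights @ points / weights.sum()

# Example: quadratic bowl with minimum at the origin
f = lambda x: x @ x
pts = np.array([[-2.0, 1.0], [0.5, -0.5], [1.0, 2.0]])
x_star = barycenter_batch(f, pts, nu=1.0)
```

Note that the result is a convex combination of the inputs, which is exactly why it cannot leave their convex hull.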

Intuitively, the estimated optimum stays within the convex hull of the provided points. To compensate for this weakness, the method also has a recursive version. It requires an initial point, arbitrary for a generic application, and considers a "curiosity factor", which increments the barycenter found on the previous iteration by a Gaussian variable with known mean and standard deviation. One can "help" the curiosity factor by providing, as the mean of the Gaussian variable, a value proportional to, for instance, the previous barycenter. Given the required constants, the method then has an asymptotic behaviour resembling classical gradient methods such as Newton-Raphson, Conjugate Gradient, or Gradient Descent.
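A minimal sketch of the recursive version follows, assuming a zero-mean Gaussian curiosity factor and an online (running-sum) update of the weighted average; the exact update and curiosity schedule in the repository may differ, and the names below are illustrative only:

```python
import numpy as np

def barycenter_recursive(f, x0, nu, sigma, n_iter=200, rng=None):
    """Recursive sketch: each iteration probes the current barycenter perturbed
    by a Gaussian 'curiosity' term, then updates the running weighted average."""
    rng = np.random.default_rng(rng)
    x_bar = np.asarray(x0, dtype=float)
    num = np.zeros_like(x_bar)   # running weighted sum of probe points
    den = 0.0                    # running sum of weights
    for _ in range(n_iter):
        z = rng.normal(0.0, sigma, size=x_bar.shape)  # curiosity factor
        x = x_bar + z                                 # probe near current barycenter
        w = np.exp(-nu * f(x))
        num += w * x
        den += w
        x_bar = num / den
    return x_bar

# Example: start far from the minimum of a quadratic bowl
f = lambda x: np.sum(x**2)
x0 = np.array([5.0, 5.0])
x_bar = barycenter_recursive(f, x0, nu=1.0, sigma=0.5, n_iter=200, rng=0)
```

Because new probes are generated around the current barycenter rather than drawn from a fixed set, the iterate can escape the convex hull of any initial sample.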

Finally, the objective function may be noisy; in that case it might be necessary to explore the neighborhood of the found solution to assure optimality and discard noise effects. For this, a relaxation factor is implemented.

How to use

This tutorial is for Linux users. The first step is to clone the repository on the desired machine. To do so, navigate to the folder where one wants to clone it and type, on a terminal:

git clone git@github.com:brunolnetto/baryopt.git

Open Matlab and add the files to its path by typing:

addpath('$BARYOPT_PATH')
savepath

The string '$BARYOPT_PATH' stands for the path where one cloned the repository. Afterwards, the files can be accessed from wherever the project requires them.

So far, the author implements the optimization of a convex function here. The hyperparameters described in the section "Overview" were chosen iteratively: one must take care to avoid overflow of the solution due to the exponential; moreover, depending on the number of iterations the method requires, it may not suffice to obtain the desired precision.

The script implements both the batch and recursive versions of the method. As investigation points, the author chooses a uniform distribution of points over [-10, 10]x[-10, 10]. Furthermore, it considers a factor, namely the shape factor, proposed by Guilherme Scabin here. To run only the method originally proposed in the article mentioned in section "Overview", use the 'normal' option.

Finally, the script plots the contour of the function and each step the method takes to reach a neighborhood of the optimum point.
