Caution: the current master branch and `0.4.2.postX` versions introduce tentative APIs which may be removed in the near future. Use version 0.4.2 for a more stable release.
- As an experimental feature, the `tell` method can now receive a list/array of losses for multi-objective optimization #775 (see the sketch after this list). For now it is neither robust, nor scalable, nor stable, nor optimal, so be careful when using it. More information in the documentation.
- `DE` and its variants have been updated to make use of the multi-objective losses #789. This is a preliminary fix since the initial `DE` implementation was ill-suited for this use case.
- The `tell` argument `value` is renamed to `loss` for clarification #774. This can be breaking when using named arguments!
- `recommend` now provides an evaluated candidate when possible. For non-deterministic parametrizations like `Choice`, this means we won't resample, and we will actually recommend the best past evaluated candidate #668. Still, some optimizers (like `TBPSA`) may recommend a non-evaluated point.
- `Choice` and `TransitionChoice` can now take a `repetitions` parameter for sampling several times. It is equivalent to `Tuple(*[Choice(options) for _ in range(repetitions)])` but can be up to 30x faster for large numbers of repetitions #670 #696.
- The default bound method for `Array` is now `bouncing`, which is a variant of `clipping` avoiding over-sampling on the bounds #684 and #691.
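A minimal sketch of the experimental multi-objective `tell` and of `repetitions`; the two quadratic losses below are made-up examples:

```python
import numpy as np
import nevergrad as ng

optimizer = ng.optimizers.DE(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    x = candidate.value  # np.ndarray, since an int parametrization means an Array
    # experimental: a list/array of losses instead of a single float
    optimizer.tell(candidate, [float(np.sum(x**2)), float(np.sum((x - 1) ** 2))])

# `repetitions` samples the same options several times, much faster than
# building Tuple(*[Choice(options) for _ in range(50)]) explicitly
tokens = ng.p.Choice(["a", "b", "c"], repetitions=50)
```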
This version should be robust. Following versions may become more unstable as we add more native multiobjective optimization as an experimental feature. We are also in the process of simplifying the naming pattern for the "NGO/Shiwa" type optimizers, which may cause some changes in the future.
- `Archive` now stores the best corresponding candidate. This requires twice the memory compared to before the change #594.
- `Parameter` now holds a `loss: Optional[float]` attribute which is set and used by optimizers after the `tell` method.
- Quasi-random samplers (`LHSSearch`, `HammersleySearch`, `HaltonSearch`, etc.) now sample in the full range of bounded variables when `full_range_sampling` is `True` #598. This required some ugly hacks; help is most welcome to find nicer solutions. `full_range_sampling` is activated by default if both bounds are provided in `Array.set_bounds` (see the sketch after this list).
- Propagate parametrization system features (generation tracking, ...) to `OnePlusOne`-based algorithms #599.
- Moved the `Selector` dataframe overlay so that basic requirements do not include `pandas` (only necessary for benchmarks) #609.
- Changed the version name pattern (removed the `v`) to unify with `pypi` versions. Expect more frequent intermediary versions to be pushed (deployment has now been made pseudo-automatic).
- Started implementing more ML-oriented testbeds #642.
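A minimal sketch of creating a bounded `Array`, assuming the `lower`/`upper` keywords of `set_bounds` in this version:

```python
import nevergrad as ng

# providing both bounds activates full_range_sampling by default (#598),
# so one-shot samplers cover the whole [-2, 2]^3 box instead of only the
# region around the initial point
param = ng.p.Array(shape=(3,)).set_bounds(lower=-2.0, upper=2.0)
```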
- Removed all deprecated code #499 (a sketch of the remaining API follows this list). That includes:
  - `instrumentation` as init parameter of an `Optimizer` (replaced by `parametrization`)
  - `instrumentation` as attribute of an `Optimizer` (replaced by `parametrization`)
  - `candidate_maker` (not needed anymore)
  - the `optimize` method of `Optimizer` (renamed to `minimize`)
  - all of the `instrumentation` subpackage (replaced by `parametrization`) and its legacy methods (`set_cheap_constraint_checker` etc.)
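For reference, a minimal example using only the non-deprecated API; the quadratic objective is a made-up example:

```python
import nevergrad as ng

def square(x: float) -> float:
    return (x - 0.5) ** 2

# `parametrization` (not `instrumentation`) and `minimize` (not `optimize`)
optimizer = ng.optimizers.OnePlusOne(parametrization=ng.p.Scalar(), budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)
```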
- Removed `ParametrizedOptimizer` and `OptimizerFamily` in favor of `ConfiguredOptimizer` with simpler usage #518 #521.
- Some variants of algorithms have been removed from the `ng.optimizers` namespace to simplify it. All such variants can be easily created using the corresponding `ConfiguredOptimizer`. Also, adding `import nevergrad.optimization.experimentalvariants` will populate `ng.optimizers.registry` with all variants, and they are all available for benchmarks #528 (see the sketch after this list).
- Renamed `a_min` and `a_max` in `Array`, `Scalar` and `Log` parameters for clarity. Using the old names will raise a deprecation warning for the time being.
- `archive` is pruned much more often (e.g. for `num_workers=1`, it is usually pruned to 100 elements when reaching 1000), so you should not rely on it for storing all results; use a callback instead #571. If this is a problem for you, let us know why and we'll find a solution!
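A sketch of the registry import, together with a callback-based alternative to the pruned `archive`; the file path and budget are arbitrary:

```python
import nevergrad as ng
import nevergrad.optimization.experimentalvariants  # noqa: F401, populates the registry

print(len(ng.optimizers.registry))  # now also lists the experimental variants

# since the archive is pruned (#571), log every evaluation through a callback
optimizer = ng.optimizers.TBPSA(parametrization=2, budget=300)
logger = ng.callbacks.ParametersLogger("./params.json")
optimizer.register_callback("tell", logger)
optimizer.minimize(lambda x: float(sum(x**2)))
```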
- Propagate parametrization system features (generation tracking, ...) to `TBPSA`, `PSO` and `EDA` based algorithms.
- Rewrote multiobjective core system #484.
- Activated Windows CI (still a bit flaky, with a few deactivated tests).
- Better callbacks in `ng.callbacks`, including exporting to `hiplot`.
- Activated documentation on GitHub pages.
- `Scalar` now takes optional `lower` and `upper` bounds at initialization, and `sigma` (and optionally `init`) is automatically set to a sensible default #536 (see the sketch after this list).
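A minimal sketch of the new `Scalar` bounds; the bound values are arbitrary:

```python
import nevergrad as ng

# init and sigma are derived from the bounds instead of being set by hand
learning_rate = ng.p.Scalar(lower=1e-3, upper=1.0)
print(learning_rate.value)  # starts at a sensible default within the bounds
```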
- First argument of optimizers is renamed to `parametrization` instead of `instrumentation` for consistency #497. There is currently a deprecation warning, but this will be breaking in v0.4.0.
- Old `instrumentation` classes now raise deprecation warnings and will disappear in versions >0.3.2. Hence, prefer using parameters from `ng.p` over `ng.var`, and avoid using `ng.Instrumentation` altogether if you don't need it anymore (or import it through `ng.p.Instrumentation`).
- `CandidateMaker` (`optimizer.create_candidate`) raises `DeprecationWarning`s since new candidates/parameters can now be created straightforwardly (`parameter.spawn_child(new_value=new_value)`, see the sketch after this list).
- The `Candidate` class is completely removed and replaced by `Parameter` #459. This should not break existing code since `Parameter` can be straightforwardly used as a `Candidate`.
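A minimal sketch of `spawn_child` as the replacement for `create_candidate`; the value is arbitrary:

```python
import nevergrad as ng

param = ng.p.Scalar(init=0.0)
# replaces optimizer.create_candidate: the child keeps the parametrization's
# settings and tracks its parent's generation
child = param.spawn_child(new_value=1.5)
print(child.value)  # 1.5
```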
- New parametrization is now as efficient as in v0.3.0 (see the CHANGELOG for v0.3.1 for context).
- Optimizers can now hold any parametrization, not just `Instrumentation`. This for instance means that when you do `OptimizerClass(instrumentation=12, budget=100)`, the instrumentation (and therefore the candidates) will be of class `ng.p.Array` (and not `ng.p.Instrumentation`), and their attribute `value` will be the corresponding `np.ndarray` value (see the sketch after this list). You can still use `args` and `kwargs` if you want, but it's no longer needed!
- Added experimental evolution-strategy-like algorithms using the new parametrization #471 (the behavior and API of these optimizers will probably evolve in the near future).
- `DE` algorithms comply with the new parametrization system and can be set to use the parameter's recombination.
- Fixed arrays as bounds in `Array` parameters.
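A sketch of the int shortcut described above; note that the first argument is renamed to `parametrization` in later versions:

```python
import nevergrad as ng

# an int is a shortcut for an ng.p.Array parametrization of that dimension
optimizer = ng.optimizers.TwoPointsDE(instrumentation=12, budget=100)
candidate = optimizer.ask()
print(type(candidate.value), candidate.value.shape)  # np.ndarray of shape (12,)
```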
Note: this is the first step to propagate the instrumentation/parametrization framework. Learn more on the Facebook user group. If you are looking for stability, wait for version 0.4.0, but the intermediary releases will help by providing deprecation warnings.
- `FolderFunction` must now be accessed through `nevergrad.parametrization.FolderFunction`.
- Instrumentation names are changed (possibly breaking for benchmarks records).
- Old instrumentation classes now all inherit from the new parametrization classes #391. Both systems coexist, but optimizers use the old API at this point (they will use the new one in version 0.3.2).
- Temporary performance loss is expected in order to keep compatibility between the `Variable` and `Parameter` frameworks.
- `PSO` now uses initialization by sampling the parametrization, instead of sampling all the real space. A new `WidePSO` optimizer was created, using the previous initial sampling method #467.
Note: this version is stable, but the following versions will include breaking changes which may cause instability. The aim of these changes is to update the instrumentation system for more flexibility. See PR #323 and the Facebook user group for more information.
- `Instrumentation` is now a `Variable` for simplicity and flexibility. The `Variable` API has therefore heavily changed, and bigger changes are coming (`instrumentation` will become `parametrization` with a different API). This should only impact custom-made variables.
- `InstrumentedFunction` has been aggressively deprecated to solve bugs and simplify code, in favor of using the `Instrumentation` directly at the optimizer initialization, and of using `ExperimentFunction` to define functions to be used in benchmarks (see the sketch after this list). The main differences are:
  - the `instrumentation` attribute is renamed to `parametrization` for forward compatibility;
  - `__init__` takes exactly two arguments (main function and parametrization/instrumentation);
  - calls to `__call__` are directly forwarded to the main function (instead of converting from data space).
- `Candidates` now have a `uid` instead of a `uuid` for compatibility reasons.
- Updated archive `keys/items_as_array` methods to `keys/items_as_arrays` for consistency.
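A sketch of an `ExperimentFunction` built from a plain function, assuming the two-argument signature described above and the modern `ng.p` parametrization; the quadratic objective is a made-up example:

```python
import numpy as np
import nevergrad as ng
from nevergrad.functions import ExperimentFunction

def quadratic(x: np.ndarray) -> float:
    return float(np.sum((x - 1.0) ** 2))

# exactly two arguments: the main function and its parametrization
func = ExperimentFunction(quadratic, ng.p.Array(shape=(4,)))
print(func(np.zeros(4)))  # __call__ forwards directly to quadratic
```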
- Benchmark plots now show a confidence area (using partially transparent lines).
- `Chaining` optimizer family enables chaining of algorithms.
- Cleaner installation.
- New simplified `Log` variable for log-distributed scalars.
- Cheap constraints can now be provided through the `Instrumentation`.
- Added preliminary multiobjective function support (may be buggy for the time being, and the API will change).
- New callback for dumping parameters and loss, and loading them back easily for display (display yet to come).
- Added a new parametrization module which is expected to soon replace the instrumentation module.
- Added new test cases: games, power system, etc. (experimental).
- Added new algorithms: quasi-opposite one-shot optimizers.
- Instrumentations now hold a `random_state` attribute which can be seeded (`optimizer.instrumentation.random_state.seed(12)`, see the sketch after this list). Seeding `numpy`'s global random state before using the instrumentation still works (but if not, this change can break reproducibility). The random state is used by the optimizers through the `optimizer._rng` property.
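A minimal reproducibility sketch using the new `random_state`; note that `instrumentation` becomes `parametrization` in later versions:

```python
import nevergrad as ng

optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
# seed the instrumentation's own random state rather than numpy's global one
optimizer.instrumentation.random_state.seed(12)
# the optimizer draws through optimizer._rng, so runs are now reproducible
```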
- added a `Scalar` variable as a shortcut to `Array(1).asscalar(dtype)` to simplify specifying instrumentation.
- added a `suggest` method to optimizers in order to manually provide the next `Candidate` from the `ask` method (experimental feature, name and behavior may change).
- populated `nevergrad`'s namespace so that `import nevergrad as ng` gives access to `ng.Instrumentation`, `ng.var` and `ng.optimizers` (see the sketch after this list). The `optimizers` namespace is quite messy; some non-optimizer objects will eventually be removed from there.
- renamed `optimize` to `minimize` to be more explicit. Using `optimize` will raise a `DeprecationWarning` for the time being.
- added a first game-oriented testbed function in the `functions.rl` module. This is still experimental and will require refactoring before the API becomes stable.
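A sketch of the populated namespace and the renamed `minimize`, using the `ng.var` API of this version; the quadratic objective is a made-up example:

```python
import nevergrad as ng  # gives access to ng.Instrumentation, ng.var, ng.optimizers

instrum = ng.Instrumentation(ng.var.Array(2))
optimizer = ng.optimizers.OnePlusOne(instrumentation=instrum, budget=100)
# `optimize` still works but now raises a DeprecationWarning
recommendation = optimizer.minimize(lambda x: float(sum(x**2)))
```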
- changed `tanh` to `arctan` as default for bounded variables (much wider range, see the sketch after this list).
- changed cumulative Gaussian density to `arctan` for rescaling in `BO` (much wider range).
- renamed the `Array.asfloat` method to `Array.asscalar` and allowed casting to `int` as well through an argument.
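An illustrative numpy sketch (not nevergrad's exact internal code) of why an `arctan` transform gives a much wider usable range than `tanh` when mapping standardized data into bounds:

```python
import numpy as np

def arctan_bound(x: np.ndarray, a_min: float, a_max: float) -> np.ndarray:
    # maps R -> (a_min, a_max); arctan saturates far more slowly than tanh
    # (tanh(3) is already ~0.995), so distant samples stay distinguishable
    return a_min + (a_max - a_min) * (np.arctan(x) / np.pi + 0.5)

print(arctan_bound(np.array([-10.0, 0.0, 10.0]), 0.0, 1.0))  # ~[0.03, 0.5, 0.97]
```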
- fixed `tell_not_asked` for the `DE` family of optimizers.
- added `dump` and `load` methods to `Optimizer` (see the sketch after this list).
- added warnings against inefficient settings: `BO` algorithms with discontinuous or noisy instrumentations without appropriate parametrization, `PSO` and `DE` for low budget.
- improved benchmark plots legend.
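A minimal sketch of `dump`/`load` (pickle-based); the file path is arbitrary:

```python
import nevergrad as ng

optimizer = ng.optimizers.OnePlusOne(instrumentation=2, budget=100)
optimizer.dump("optimizer.pkl")  # serialize the full optimizer state

# restore it later and continue optimizing from where it stopped
restored = ng.optimizers.OnePlusOne.load("optimizer.pkl")
```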
- first parameter of optimizers is now `instrumentation` instead of `dimension`. This allows the optimizer to have information on the underlying structure. `int`s are still allowed as before and will set the instrumentation to `Instrumentation(var.Array(n))` (which is basically the identity).
- removed `BaseFunction` in favor of `InstrumentedFunction` and used instrumentation instead of defining specific transforms (breaking change for benchmark function implementation).
- `ask()` and `provide_recommendation()` now return a `Candidate` with attributes `args`, `kwargs` (depending on the instrumentation) and `data` (the array which was formerly returned). `tell` must now receive this candidate as well, instead of the array (see the sketch after this list).
- removed `tell_not_asked` in favor of `tell`. A new `num_tell_not_asked` attribute is added to check the number of `tell` calls with non-asked points.
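A sketch of the candidate-based ask/tell loop of this version; the objective is a made-up example, and `instrumentation` is renamed to `parametrization` in later versions:

```python
import nevergrad as ng

def objective(x):
    return float(sum((x - 0.5) ** 2))

instrum = ng.Instrumentation(ng.var.Array(2))
optimizer = ng.optimizers.OnePlusOne(instrumentation=instrum, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()                      # a Candidate, not a raw array
    loss = objective(*candidate.args, **candidate.kwargs)
    optimizer.tell(candidate, loss)                  # tell receives the candidate back
```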
- updated `bayesian-optimization` version to 1.0.1.
- from now on, optimizers should preferably implement `_internal_ask_candidate` and `_internal_tell_candidate` instead of `_internal_ask` and `_internal_tell` (see the sketch after this list). This should take at most one more line: `x = candidate.data`.
- added an `_asked` private attribute to register the uuids of candidates that were asked for.
- solved an `ArtificialFunction` delay bug.
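A hypothetical sketch of the candidate-based internal API; the class and its Gaussian sampling are invented for illustration, and `create_candidate` is this era's `CandidateMaker`:

```python
from nevergrad.optimization import base

class MyRandomSearch(base.Optimizer):
    """Illustrative only: asks standard Gaussian candidates."""

    def _internal_ask_candidate(self):
        data = self._rng.normal(0, 1, self.dimension)
        return self.create_candidate.from_data(data)

    def _internal_tell_candidate(self, candidate, value):
        x = candidate.data  # at most one more line to recover the raw array
```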
- corrected a bug introduced by v0.1.5 for `PSO`.
- activated `tell_not_asked` for `PSO`, `TBPSA` and differential evolution algorithms.
- added a pruning mechanism for the optimizers' archive in order to avoid using a huge amount of memory.
- corrected typing after activating `numpy-stubs`.
- provided different install procedures for optimization, benchmark and dev (requirements differ).
- added an experimental `tell_not_asked` method to optimizers.
- switched to `pytest` for testing, and removed the dependency on `nosetests` and `genty`.
- made the archive more memory efficient by using bytes as keys instead of tuples of floats.
- started rewriting some optimizers as instances of a family of optimizers (experimental).
- added pseudotime in benchmarks for both steady mode and batch mode.
- made the whole chain from `Optimizer` to `BenchmarkChunk` stateful and able to restart from where it was stopped.
- started introducing a `tell_not_asked` method (experimental).
- fixed `PSO` in the asynchronous case
- started refactoring `instrumentation` in depth, more specifically the instantiation of external code (breaking change)
- added Photonics and ARCoating test functions
- added variants of algorithms
- multiple bug fixes
- multiple typo corrections (including modules changing names)
- added MLDA functions
- allowed steady state in experiments
- allowed custom file types for external code instantiation
- added a dissymmetric noise case to `ArtificialFunction`
- prepared an `Instrumentation` class to simplify instrumentation (breaking changes will come)
- added new algorithms and benchmarks
- improved plotting
- added a transform method to `BaseFunction` (more breaking changes will come)
Work on `instrumentation` will continue, and breaking changes will be pushed in the following versions.
Initial version