Issue
When performing a single-objective optimization with Optuna, the best parameters of the study are accessible using:
import optuna
def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=100)
study.best_params  # E.g. {'x': 2.002108042}
If I want to perform a multi-objective optimization instead, this would become, for example:
import optuna
def multi_objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    f1 = (x - 2) ** 2
    f2 = -f1
    return f1, f2

study = optuna.create_study(directions=['minimize', 'maximize'])
study.optimize(multi_objective, n_trials=100)
This works, but calling study.best_params now fails with:
RuntimeError: The best trial of a 'study' is only supported for single-objective optimization.
How can I get the best parameters for a multi-objective optimization?
Solution
In multi-objective optimization, you usually don't end up with a single best trial, but rather with a set of trials. This set is often referred to as the Pareto front. You can get this Pareto front, i.e. the list of trials, via study.best_trials, and then look at the parameters of each individual trial, e.g. study.best_trials[some_index].params.
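Continuing the multi-objective example above, a minimal sketch of inspecting the Pareto front could look like this (the print formatting is just for illustration):

# study.best_trials returns the list of Pareto-optimal FrozenTrial objects.
for trial in study.best_trials:
    # trial.values holds the objective values (f1, f2); trial.params holds {'x': ...}.
    print(trial.number, trial.values, trial.params)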
For instance, given your directions of minimizing f1 and maximizing f2, you might end up with one trial that has a small value for f1 (good) but at the same time a small value for f2 (bad), while another trial has a large value for f1 (bad) and a large value for f2 (good). Both of these trials could be returned by study.best_trials.
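If you ultimately need a single set of parameters, you have to pick one trial from the Pareto front according to your own preference. As a hedged sketch, assuming you care most about a low f1 (the first objective), you could do something like:

# Pick the Pareto-optimal trial with the smallest first objective value (f1).
best_f1_trial = min(study.best_trials, key=lambda t: t.values[0])
print(best_f1_trial.params)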
Answered By - hvy