smt_optim.core package#

class smt_optim.core.ConstraintConfig(constraint: list[Callable], type: str = 'less', value: float = 0, surrogate: Surrogate = None, surrogate_kwargs: dict | None = None)[source]#

Bases: object

Configuration of a constraint function used in the optimization problem.

This class stores one or more constraint callables together with surrogate modeling information used to approximate the constraint during optimization.

constraint#

List of constraint functions. Each callable must accept a decision variable vector x and return a scalar constraint value. The functions must be ordered in increasing level of fidelity.

Type:

list[Callable]

type#

Specifies whether the feasible domain is defined by the constraint value being less than, greater than, or equal to the configuration value.

Type:

{“less”, “greater”, “equal”}, default=”less”

value#

Specifies the value defining the feasible domain.

Type:

float, default=0

surrogate#

Surrogate model used to approximate the constraint function.

Type:

Surrogate or None, default=None

surrogate_kwargs#

Optional keyword arguments passed to the surrogate model.

Type:

dict or None, default=None

constraint: list[Callable]#
surrogate: Surrogate = None#
surrogate_kwargs: dict | None = None#
type: str = 'less'#
value: float = 0#
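The feasibility semantics defined by the `type` and `value` attributes above can be sketched as follows. This is an illustrative stand-in, not the library's code; the function name, the `ctol` tolerance, and its default are assumptions:

```python
def is_feasible(g: float, cstr_type: str = "less", value: float = 0.0,
                ctol: float = 1e-4) -> bool:
    """Return True if the constraint value g satisfies the configured bound."""
    if cstr_type == "less":
        return g <= value + ctol
    if cstr_type == "greater":
        return g >= value - ctol
    if cstr_type == "equal":
        return abs(g - value) <= ctol
    raise ValueError(f"unknown constraint type: {cstr_type!r}")

# A "less" constraint g(x) <= 1.0:
print(is_feasible(0.8, "less", 1.0))   # True
print(is_feasible(1.2, "less", 1.0))   # False
```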
class smt_optim.core.Driver(problem, config, strategy, strategy_kwargs={})[source]#

Bases: object

call_loggers(state)[source]#
finalize()[source]#
iteration(state)[source]#

Perform an optimization iteration. An iteration consists of:
  • scaling all the training data
  • building the surrogate models
  • acquiring points to sample (and their associated fidelity level)
  • sampling the expensive-to-evaluate functions

Parameters:

state (State) – Optimization state on which to perform an iteration.

Returns:

Return optimization state on which an iteration was performed.

Return type:

State

make_res_dir(res_dir: str | None) str | None[source]#
optimize()[source]#

Perform the optimization process, which consists of:
  • generating a DoE if the initial dataset is empty
  • performing iterations while no termination criterion is met

Returns:

optimization state on which the optimization process was performed.

Return type:

State
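The optimize() workflow (an initial DoE if the dataset is empty, then iterations until a termination criterion is met) can be sketched structurally as below. All names and stubs here are illustrative stand-ins, not the smt_optim API:

```python
import math

def optimize_skeleton(max_iter=5, max_budget=math.inf, cost_per_iter=1.0):
    """Hedged sketch of the optimize() control flow described above."""
    dataset = []          # stands in for OptimizationDataset
    budget, it = 0.0, 0

    if not dataset:       # generate an initial DoE if the dataset is empty
        dataset.extend(f"doe_{i}" for i in range(3))

    # iterate until a termination criterion (max_iter or max_budget) is met
    while it < max_iter and budget < max_budget:
        # one iteration: scale data, build surrogates, acquire infill, sample
        dataset.append(f"infill_{it}")
        budget += cost_per_iter
        it += 1
    return it, budget, dataset

iters, budget, data = optimize_skeleton(max_iter=3)
print(iters, budget)  # 3 3.0
```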

class smt_optim.core.DriverConfig(ctol: float = 0.0001, max_iter: int | None = None, max_budget: float = inf, max_time: float = inf, nt_init: int | None = None, xt_init: ndarray | None = None, results_dir: str | None = 'bo_result', verbose: bool = False, log_doe: bool = False, log_stats: bool = False, callback_func: list[Callable] | Callable | None = None, scaling: bool = True, seed: None = None)[source]#

Bases: object

Optimization driver configuration

max_iter#

Maximum number of iterations.

Type:

int or None, default=None

max_budget#

Maximum budget before termination of the optimization process

Type:

float, default=inf

max_time#

Maximum time before termination of the optimization process

Type:

float, default=inf

nt_init#

Number of samples in the initial DoE

Type:

int

xt_init#

Initial DoE to use. The NumPy array must be of shape (num_sample, num_dimension). If an initial DoE is provided, the driver will not generate one. Cannot be used together with nt_init.

Type:

np.ndarray or None, default=None

results_dir#

Name of the logging directory

Type:

str or None, default=”bo_result”

verbose#

Print optimization information.

Type:

bool, default=False

log_doe#

Log the function values as soon as they are sampled. The values are stored in a .csv file.

Type:

bool, default=False

log_stats#

Log optimization statistics at the end of each iteration. The stats are stored in a .jsonl file.

Type:

bool, default=False

scaling#

Scale the data. The objective is standardized. The constraints are divided by their standard deviation. The design variables are normalized between 0 and 1.

Type:

bool, default=True

seed#

Seed for experiment reproducibility

Type:

int or None, default=None

callback_func: list[Callable] | Callable | None = None#
ctol: float = 0.0001#
log_doe: bool = False#
log_stats: bool = False#
max_budget: float = inf#
max_iter: int | None = None#
max_time: float = inf#
nt_init: int | None = None#
results_dir: str | None = 'bo_result'#
scaling: bool = True#
seed: None = None#
verbose: bool = False#
xt_init: ndarray | None = None#
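The scaling behavior described for the `scaling` attribute (objective standardized, design variables normalized to [0, 1]) can be sketched as follows. This is a minimal illustration under stated assumptions, not the library's implementation:

```python
import statistics

def scale(xt, yt, bounds):
    """Hedged sketch of the scaling described above.

    xt: list of design points, yt: list of objective values,
    bounds: (lower, upper) pair per dimension.
    """
    mu, sigma = statistics.mean(yt), statistics.pstdev(yt)
    yt_scaled = [(y - mu) / sigma for y in yt]          # standardize objective
    xt_scaled = [
        [(xi - lo) / (hi - lo) for xi, (lo, hi) in zip(x, bounds)]
        for x in xt
    ]                                                   # normalize x to [0, 1]
    return xt_scaled, yt_scaled

xs, ys = scale([[0.0, 5.0], [10.0, 15.0]], [1.0, 3.0],
               [(0.0, 10.0), (5.0, 15.0)])
print(xs)  # [[0.0, 0.0], [1.0, 1.0]]
print(ys)  # [-1.0, 1.0]
```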
class smt_optim.core.Evaluator(problem, res_path: str | None = None)[source]#

Bases: object

Evaluate the expensive-to-evaluate functions.

problem#

Optimization problem.

Type:

Problem

res_path#

Logging directory path.

Type:

str or None, default=None

log_sample(sample) None[source]#

Log the sample data.

Parameters:

sample (Sample) – The sample to log.

Return type:

None

sample_func(infill: list[ndarray | None], state) None[source]#

Sample the problem functions at requested query points.

Parameters:
  • infill (list[np.ndarray | None]) – Query points. Each np.ndarray in the list corresponds to a fidelity level. The np.ndarray must have the shape (num_points, num_dim).

  • state (State) – Optimization state.

Return type:

None

class smt_optim.core.ObjectiveConfig(objective: list[Callable], surrogate: type[Surrogate], type: str = 'minimize', surrogate_kwargs: dict | None = None)[source]#

Bases: object

Configuration of the objective function used in the optimization problem.

This class stores one or more objective callables together with surrogate modeling information used to approximate the objective during optimization.

objective#

List of objective functions. Each callable must accept a decision variable vector x and return a scalar objective value. The functions must be ordered in increasing level of fidelity.

Type:

list[Callable]

type#

Specifies whether the objective should be minimized or maximized.

Type:

{“minimize”, “maximize”}, default=”minimize”

surrogate#

Surrogate model used to approximate the objective function.

Type:

type[Surrogate]

surrogate_kwargs#

Optional keyword arguments passed to the surrogate model.

Type:

dict or None, default=None

objective: list[Callable]#
surrogate: type[Surrogate]#
surrogate_kwargs: dict | None = None#
type: str = 'minimize'#
class smt_optim.core.OptimizationDataset[source]#

Bases: object

Store samples.

samples#
Type:

list[Sample]

num_obj#

Number of objectives

Type:

int

num_cstr#

Number of constraints

Type:

int

num_fidelity#

Number of fidelity levels

Type:

int

fidelities#

Fidelity levels sorted in increasing order.

Type:

list

num_samples#

Number of samples for each fidelity level.

Type:

dict

add(sample: Sample)[source]#
export_as_dict() dict[source]#

Exports the sample data as a dictionary. Each attribute corresponds to a key in the dictionary.

Returns:

Dictionary containing all sample data.

Return type:

dict

export_data(idx: int | list[int], lvl: int) ndarray[source]#
get_by_fidelity(lvl: int)[source]#
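The behavior of get_by_fidelity can be sketched as filtering the stored samples on their fidelity level. The names below (`TinySample`, the filter function) are illustrative stand-ins, not the library's code:

```python
from dataclasses import dataclass

@dataclass
class TinySample:
    """Illustrative stand-in for smt_optim's Sample."""
    x: float
    fidelity: int

def get_by_fidelity(samples, lvl):
    """Hedged sketch: return the samples recorded at fidelity level lvl."""
    return [s for s in samples if s.fidelity == lvl]

data = [TinySample(0.1, 0), TinySample(0.2, 1), TinySample(0.3, 1)]
print(len(get_by_fidelity(data, 1)))  # 2
```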
class smt_optim.core.Problem(obj_configs: list, design_space, cstr_configs: list = [], costs: list[float] | None = None)[source]#

Bases: object

Problem configuration

num_dim#

Number of dimensions.

Type:

int

num_obj#

Number of objectives.

Type:

int

num_cstr#

Number of constraints.

Type:

int

num_fidelity#

Number of fidelities.

Type:

int

design_space#

Problem design space.

Type:

np.ndarray

costs#

Fidelity level costs.

Type:

list

obj_configs#

Objective configurations.

Type:

list

obj_funcs#

Objective functions.

Type:

list

cstr_configs#

Constraint configurations.

Type:

list

cstr_funcs#

Constraint functions.

Type:

list

class smt_optim.core.Sample(x: ndarray, fidelity: int, obj: ndarray | None, cstr: ndarray | None, eval_time: ndarray | None, metadata: dict = <factory>)[source]#

Bases: object

Store sample data.

x#

Variable

Type:

np.ndarray

obj#

Objective value(s). Array dimension: (num_obj,)

Type:

np.ndarray

cstr#

Constraint value(s). Array dimension: (num_cstr,)

Type:

np.ndarray

eval_time#

Evaluation times of each QoI. Array dimension: (num_obj+num_cstr,)

Type:

np.ndarray

metadata#

Dictionary with sample metadata such as iter, budget and fidelity.

Type:

dict

cstr: ndarray | None#
eval_time: ndarray | None#
fidelity: int#
metadata: dict#
obj: ndarray | None#
x: ndarray#
class smt_optim.core.State(problem)[source]#

Bases: object

State of the optimization process at a given moment.

The optimization state holds information about the optimization process at a given moment.

Parameters:

problem (Problem) – The problem to be optimized.

problem#

The problem to be optimized.

Type:

Problem

iter#

Current iteration number.

Type:

int

budget#

Current used budget.

Type:

float

bo_start#

Start time of the optimization process.

Type:

float

bo_time#

Elapsed time of the optimization driver.

Type:

float

obj_models#

List containing the surrogates modeling the objective function(s).

Type:

list[Surrogate]

cstr_models#

List containing the surrogates modeling the constraint function(s).

Type:

list[Surrogate]

cstr_types#

List containing the constraint types.

Type:

list[str]

dataset#

The dataset containing all samples from the expensive-to-evaluate functions.

Type:

OptimizationDataset

scaled_dataset#

The scaled dataset.

Type:

OptimizationDataset

iter_log#

Dictionary containing logging data.

Type:

dict

scale_dataset(unit_std: bool)[source]#

Scale data in the dataset. The scaled dataset is accessible using the `scaled_dataset` attribute.

build_models()[source]#

Builds the surrogate models based on the scaled dataset.

get_best_sample()[source]#

Returns the best sample in the dataset.

build_models()[source]#

Builds the surrogate models.

Return type:

None

get_best_sample(ctol=0.0001, fidelity=-1)[source]#

Returns the best sample based on the objective function value.

Parameters:
  • ctol (float, optional) – Tolerance for constraint violation. Default is 1e-4.

  • fidelity (int, optional) – Fidelity level to consider. If -1, uses the highest fidelity. Default is -1.

Returns:

sample – The best sample based on the objective function value.

Return type:

Sample
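The selection performed by get_best_sample (minimum objective among feasible samples, with ctol as the violation tolerance) can be sketched as below. The tuple representation and "less than 0" constraint convention are assumptions for illustration:

```python
def get_best_sample(samples, ctol=1e-4):
    """Hedged sketch: minimum-objective sample among feasible ones.

    Each sample is (obj, cstr_list); constraints are assumed feasible
    when their value is at most ctol.
    """
    feasible = [s for s in samples if all(g <= ctol for g in s[1])]
    return min(feasible, key=lambda s: s[0]) if feasible else None

samples = [(1.0, [0.5]),    # infeasible: constraint violated
           (2.0, [-0.1]),   # feasible
           (3.0, [0.0])]    # feasible (within ctol)
print(get_best_sample(samples))  # (2.0, [-0.1])
```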

scale_dataset(unit_std: bool = False)[source]#

Scales the dataset.

Parameters:

unit_std (bool, optional) – If True, normalize by standard deviation.

Return type:

None

Submodules#

smt_optim.core.driver module#

class smt_optim.core.driver.ConstraintConfig(constraint: list[Callable], type: str = 'less', value: float = 0, surrogate: Surrogate = None, surrogate_kwargs: dict | None = None)[source]#

Bases: object

Configuration of a constraint function used in the optimization problem.

This class stores one or more constraint callables together with surrogate modeling information used to approximate the constraint during optimization.

constraint#

List of constraint functions. Each callable must accept a decision variable vector x and return a scalar constraint value. The functions must be ordered in increasing level of fidelity.

Type:

list[Callable]

type#

Specifies whether the feasible domain is defined by the constraint value being less than, greater than, or equal to the configuration value.

Type:

{“less”, “greater”, “equal”}, default=”less”

value#

Specifies the value defining the feasible domain.

Type:

float, default=0

surrogate#

Surrogate model used to approximate the constraint function.

Type:

Surrogate or None, default=None

surrogate_kwargs#

Optional keyword arguments passed to the surrogate model.

Type:

dict or None, default=None

constraint: list[Callable]#
surrogate: Surrogate = None#
surrogate_kwargs: dict | None = None#
type: str = 'less'#
value: float = 0#
class smt_optim.core.driver.Driver(problem, config, strategy, strategy_kwargs={})[source]#

Bases: object

call_loggers(state)[source]#
finalize()[source]#
iteration(state)[source]#

Perform an optimization iteration. An iteration consists of:
  • scaling all the training data
  • building the surrogate models
  • acquiring points to sample (and their associated fidelity level)
  • sampling the expensive-to-evaluate functions

Parameters:

state (State) – Optimization state on which to perform an iteration.

Returns:

Return optimization state on which an iteration was performed.

Return type:

State

make_res_dir(res_dir: str | None) str | None[source]#
optimize()[source]#

Perform the optimization process, which consists of:
  • generating a DoE if the initial dataset is empty
  • performing iterations while no termination criterion is met

Returns:

optimization state on which the optimization process was performed.

Return type:

State

class smt_optim.core.driver.DriverConfig(ctol: float = 0.0001, max_iter: int | None = None, max_budget: float = inf, max_time: float = inf, nt_init: int | None = None, xt_init: ndarray | None = None, results_dir: str | None = 'bo_result', verbose: bool = False, log_doe: bool = False, log_stats: bool = False, callback_func: list[Callable] | Callable | None = None, scaling: bool = True, seed: None = None)[source]#

Bases: object

Optimization driver configuration

max_iter#

Maximum number of iterations.

Type:

int or None, default=None

max_budget#

Maximum budget before termination of the optimization process

Type:

float, default=inf

max_time#

Maximum time before termination of the optimization process

Type:

float, default=inf

nt_init#

Number of samples in the initial DoE

Type:

int

xt_init#

Initial DoE to use. The NumPy array must be of shape (num_sample, num_dimension). If an initial DoE is provided, the driver will not generate one. Cannot be used together with nt_init.

Type:

np.ndarray or None, default=None

results_dir#

Name of the logging directory

Type:

str or None, default=”bo_result”

verbose#

Print optimization information.

Type:

bool, default=False

log_doe#

Log the function values as soon as they are sampled. The values are stored in a .csv file.

Type:

bool, default=False

log_stats#

Log optimization statistics at the end of each iteration. The stats are stored in a .jsonl file.

Type:

bool, default=False

scaling#

Scale the data. The objective is standardized. The constraints are divided by their standard deviation. The design variables are normalized between 0 and 1.

Type:

bool, default=True

seed#

Seed for experiment reproducibility

Type:

int or None, default=None

callback_func: list[Callable] | Callable | None = None#
ctol: float = 0.0001#
log_doe: bool = False#
log_stats: bool = False#
max_budget: float = inf#
max_iter: int | None = None#
max_time: float = inf#
nt_init: int | None = None#
results_dir: str | None = 'bo_result'#
scaling: bool = True#
seed: None = None#
verbose: bool = False#
xt_init: ndarray | None = None#
class smt_optim.core.driver.ObjectiveConfig(objective: list[Callable], surrogate: type[Surrogate], type: str = 'minimize', surrogate_kwargs: dict | None = None)[source]#

Bases: object

Configuration of the objective function used in the optimization problem.

This class stores one or more objective callables together with surrogate modeling information used to approximate the objective during optimization.

objective#

List of objective functions. Each callable must accept a decision variable vector x and return a scalar objective value. The functions must be ordered in increasing level of fidelity.

Type:

list[Callable]

type#

Specifies whether the objective should be minimized or maximized.

Type:

{“minimize”, “maximize”}, default=”minimize”

surrogate#

Surrogate model used to approximate the objective function.

Type:

type[Surrogate]

surrogate_kwargs#

Optional keyword arguments passed to the surrogate model.

Type:

dict or None, default=None

objective: list[Callable]#
surrogate: type[Surrogate]#
surrogate_kwargs: dict | None = None#
type: str = 'minimize'#
smt_optim.core.driver.check_bounds(x: ndarray, bounds: ndarray) ndarray[source]#

Apply an L1 correction to the point x to ensure it lies within the problem’s bounds.

Parameters:
  • x (np.ndarray) – Infill point.

  • bounds (np.ndarray) – Problem boundaries.

Returns:

The bounds-corrected infill point.

Return type:

np.ndarray
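The effect of a bound correction like check_bounds can be illustrated as bringing each coordinate back inside its interval. The sketch below uses simple componentwise clamping; the exact L1 correction performed by the library may differ, and the function name here is hypothetical:

```python
def clip_to_bounds(x, bounds):
    """Hedged sketch: clamp each coordinate of an infill point into
    its (lower, upper) interval (an assumption about the correction)."""
    return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, bounds)]

print(clip_to_bounds([-0.5, 2.0, 0.3], [(0.0, 1.0)] * 3))  # [0.0, 1.0, 0.3]
```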

smt_optim.core.driver.compute_rscv(cstr_array: ndarray, cstr_config: list, g_tol: float = 0.0, h_tol: float = 0.0) ndarray[source]#
smt_optim.core.driver.wrap_array(array: ndarray, factor: float | ndarray = 1.0, step: float | ndarray = 0.0) ndarray[source]#
smt_optim.core.driver.wrap_func(func: Callable, factor: float = 1, step: float = 0) Callable[source]#

Wrap function to return factor * (func - step).

Parameters:
  • func (Callable) – Function to wrap.

  • factor (float) – Multiplicative factor.

  • step (float) – Additive factor.

Returns:

Wrapped function.

Return type:

Callable
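The documented behavior, factor * (func - step), can be reproduced with a small closure. This is a behavioral sketch, not the library's source:

```python
def wrap_func(func, factor=1.0, step=0.0):
    """Return a callable computing factor * (func(x) - step),
    matching the documented behavior of wrap_func."""
    def wrapped(x):
        return factor * (func(x) - step)
    return wrapped

# One use case: turn a maximization objective into a minimization one
# by negating it with factor=-1:
neg = wrap_func(lambda x: x * x, factor=-1.0)
print(neg(3.0))  # -9.0
```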

smt_optim.core.problem module#

class smt_optim.core.problem.Problem(obj_configs: list, design_space, cstr_configs: list = [], costs: list[float] | None = None)[source]#

Bases: object

Problem configuration

num_dim#

Number of dimensions.

Type:

int

num_obj#

Number of objectives.

Type:

int

num_cstr#

Number of constraints.

Type:

int

num_fidelity#

Number of fidelities.

Type:

int

design_space#

Problem design space.

Type:

np.ndarray

costs#

Fidelity level costs.

Type:

list

obj_configs#

Objective configurations.

Type:

list

obj_funcs#

Objective functions.

Type:

list

cstr_configs#

Constraint configurations.

Type:

list

cstr_funcs#

Constraint functions.

Type:

list

smt_optim.core.sample module#

class smt_optim.core.sample.Evaluator(problem, res_path: str | None = None)[source]#

Bases: object

Evaluate the expensive-to-evaluate functions.

problem#

Optimization problem.

Type:

Problem

res_path#

Logging directory path.

Type:

str or None, default=None

log_sample(sample) None[source]#

Log the sample data.

Parameters:

sample (Sample) – The sample to log.

Return type:

None

sample_func(infill: list[ndarray | None], state) None[source]#

Sample the problem functions at requested query points.

Parameters:
  • infill (list[np.ndarray | None]) – Query points. Each np.ndarray in the list corresponds to a fidelity level. The np.ndarray must have the shape (num_points, num_dim).

  • state (State) – Optimization state.

Return type:

None

class smt_optim.core.sample.OptimizationDataset[source]#

Bases: object

Store samples.

samples#
Type:

list[Sample]

num_obj#

Number of objectives

Type:

int

num_cstr#

Number of constraints

Type:

int

num_fidelity#

Number of fidelity levels

Type:

int

fidelities#

Fidelity levels sorted in increasing order.

Type:

list

num_samples#

Number of samples for each fidelity level.

Type:

dict

add(sample: Sample)[source]#
export_as_dict() dict[source]#

Exports the sample data as a dictionary. Each attribute corresponds to a key in the dictionary.

Returns:

Dictionary containing all sample data.

Return type:

dict

export_data(idx: int | list[int], lvl: int) ndarray[source]#
fidelities: list#
get_by_fidelity(lvl: int)[source]#
num_cstr: int | None#
num_fidelity: int#
num_obj: int | None#
num_samples: dict#
samples: list[Sample]#
class smt_optim.core.sample.Sample(x: ndarray, fidelity: int, obj: ndarray | None, cstr: ndarray | None, eval_time: ndarray | None, metadata: dict = <factory>)[source]#

Bases: object

Store sample data.

x#

Variable

Type:

np.ndarray

obj#

Objective value(s). Array dimension: (num_obj,)

Type:

np.ndarray

cstr#

Constraint value(s). Array dimension: (num_cstr,)

Type:

np.ndarray

eval_time#

Evaluation times of each QoI. Array dimension: (num_obj+num_cstr,)

Type:

np.ndarray

metadata#

Dictionary with sample metadata such as iter, budget and fidelity.

Type:

dict

cstr: ndarray | None#
eval_time: ndarray | None#
fidelity: int#
metadata: dict#
obj: ndarray | None#
x: ndarray#
smt_optim.core.sample.sample_func(x_new: ndarray, func: Callable) tuple[float, float][source]#

Evaluates func at x_new and returns the function value along with the elapsed time. If the function output is of type np.ndarray, it is converted to a float.

Parameters:
  • x_new (np.ndarray) – Point to sample.

  • func – Function to sample.

Returns:

  • float – Function value at x_new.

  • float – Elapsed time for sampling the function.
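The evaluate-and-time behavior described above can be sketched with a timer around the function call. The function name below is an illustrative stand-in:

```python
import time

def sample_func_sketch(x_new, func):
    """Hedged sketch of sample_func: evaluate func at x_new and
    return the value together with the elapsed wall-clock time."""
    t0 = time.perf_counter()
    y = func(x_new)
    elapsed = time.perf_counter() - t0
    return float(y), elapsed

val, dt = sample_func_sketch(2.0, lambda x: x ** 2)
print(val)        # 4.0
print(dt >= 0.0)  # True
```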

smt_optim.core.state module#

class smt_optim.core.state.State(problem)[source]#

Bases: object

State of the optimization process at a given moment.

The optimization state holds information about the optimization process at a given moment.

Parameters:

problem (Problem) – The problem to be optimized.

problem#

The problem to be optimized.

Type:

Problem

iter#

Current iteration number.

Type:

int

budget#

Current used budget.

Type:

float

bo_start#

Start time of the optimization process.

Type:

float

bo_time#

Elapsed time of the optimization driver.

Type:

float

obj_models#

List containing the surrogates modeling the objective function(s).

Type:

list[Surrogate]

cstr_models#

List containing the surrogates modeling the constraint function(s).

Type:

list[Surrogate]

cstr_types#

List containing the constraint types.

Type:

list[str]

dataset#

The dataset containing all samples from the expensive-to-evaluate functions.

Type:

OptimizationDataset

scaled_dataset#

The scaled dataset.

Type:

OptimizationDataset

iter_log#

Dictionary containing logging data.

Type:

dict

scale_dataset(unit_std: bool)[source]#

Scale data in the dataset. The scaled dataset is accessible using the `scaled_dataset` attribute.

build_models()[source]#

Builds the surrogate models based on the scaled dataset.

get_best_sample()[source]#

Returns the best sample in the dataset.

build_models()[source]#

Builds the surrogate models.

Return type:

None

get_best_sample(ctol=0.0001, fidelity=-1)[source]#

Returns the best sample based on the objective function value.

Parameters:
  • ctol (float, optional) – Tolerance for constraint violation. Default is 1e-4.

  • fidelity (int, optional) – Fidelity level to consider. If -1, uses the highest fidelity. Default is -1.

Returns:

sample – The best sample based on the objective function value.

Return type:

Sample

scale_dataset(unit_std: bool = False)[source]#

Scales the dataset.

Parameters:

unit_std (bool, optional) – If True, normalize by standard deviation.

Return type:

None