smt_optim.core
Core#
- class ConstraintConfig(constraint: list[Callable], lower: float = None, upper: float = None, equal: float = None, surrogate: type[Surrogate] = None, surrogate_kwargs: dict | None = None)[source]#
Bases:
object
Configuration of a constraint function used in the optimization problem.
This class stores the constraint callable(s) together with surrogate modeling information used to approximate the constraint during optimization.
- constraint#
List of constraint functions. Each callable must accept a decision variable vector x and return a scalar constraint value. The functions must be ordered in increasing level of fidelity.
- Type:
list[Callable]
- lower#
Lower bound of the constraint. If not specified, the constraint is considered unconstrained in this direction.
- Type:
float | None
- upper#
Upper bound of the constraint. If not specified, the constraint is considered unconstrained in this direction.
- Type:
float | None
- equal#
Equality constraint value. If specified, the constraint must equal this value.
- Type:
float | None
- surrogate#
Surrogate model used to approximate the constraint function.
- Type:
Surrogate or None, default=None
- surrogate_kwargs#
Optional keyword arguments passed to the surrogate model.
- Type:
dict or None, default=None
Notes
A constraint must either be an inequality constraint or an equality constraint. For inequality constraints, it is possible to define a lower and upper bound.
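The bound semantics above can be sketched as a feasibility test. This is an illustrative helper, not part of the API; the tolerance handling mirrors the `ctol` parameter used elsewhere in this module:

```python
import math

def is_feasible(value, lower=None, upper=None, equal=None, ctol=1e-4):
    """Check a scalar constraint value against configured bounds (sketch)."""
    if equal is not None:
        # Equality constraint: |g(x) - target| must stay within the tolerance.
        return math.isclose(value, equal, abs_tol=ctol)
    ok = True
    if lower is not None:
        ok = ok and value >= lower - ctol
    if upper is not None:
        ok = ok and value <= upper + ctol
    return ok
```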
- constraint: list[Callable]#
- equal: float = None#
- lower: float = None#
- surrogate_kwargs: dict | None = None#
- upper: float = None#
- class Driver(problem: Problem, config: DriverConfig, strategy: AcquisitionStrategy, strategy_kwargs: dict = {})[source]#
Bases:
object
- iteration(state)[source]#
Performs an optimization iteration on the given state.
The iteration process involves the following steps:
1. Scaling the training data according to the specified scaling configuration.
2. Building surrogate models for approximating the expensive-to-evaluate functions.
3. Acquiring points to sample and their associated fidelity levels using the infill strategy.
4. Sampling the original, unmodified functions with the acquired points.
- make_res_dir(res_dir: str | None) str | None[source]#
Creates a unique results directory path based on the provided input.
If a directory with the same name already exists, the method will append an incrementing index to it (e.g., “results” -> “results_1”, “results_2”, etc.) until a unique name is found. No directory will be created if res_dir is set to None.
- Parameters:
res_dir (Optional[str]) – The desired results directory path. If None, no directory will be created.
- Returns:
The unique results directory path, or None if the input was None.
- Return type:
str or None
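The incrementing-index naming logic described above can be sketched as a standalone function (illustrative; the actual method also creates the directory):

```python
import os

def unique_dir_name(res_dir):
    """Return res_dir, or the first 'res_dir_<i>' that does not exist yet."""
    if res_dir is None:
        return None
    candidate, i = res_dir, 0
    while os.path.exists(candidate):
        i += 1
        candidate = f"{res_dir}_{i}"
    return candidate
```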
- optimize()[source]#
Performs an optimization process on the current state.
The process consists of two stages:
1. If the initial dataset is empty, it generates a Design of Experiment (DoE).
2. Iteratively performs optimization iterations until termination criteria are met.
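The two-stage flow can be sketched generically. Names such as `sample_doe`, `iterate`, and `terminated` are illustrative placeholders, not the library's API:

```python
def run_optimization(dataset, sample_doe, iterate, terminated):
    """Generic two-stage loop: initial DoE if the dataset is empty,
    then optimization iterations until a termination criterion is met."""
    if len(dataset) == 0:            # stage 1: Design of Experiment
        dataset.extend(sample_doe())
    while not terminated(dataset):   # stage 2: optimization iterations
        dataset.extend(iterate(dataset))
    return dataset
```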
- Returns:
The updated optimization state after completing the optimization process.
- Return type:
State
- start_optim()[source]#
Initializes the optimization process by creating an initial Design of Experiment (DoE) if necessary.
If the State dataset is empty (i.e., contains no samples), a new DoE will be generated. Otherwise, no action is taken to avoid modifying existing sampling points.
- Return type:
None
- class DriverConfig(ctol: float = 0.0001, max_iter: int | None = None, max_budget: float = inf, max_time: float = inf, nt_init: int | None = None, xt_init: ndarray | None = None, results_dir: str | None = 'bo_result', verbose: bool = False, log_doe: bool = False, log_stats: bool = False, callback_func: list[Callable] | Callable | None = None, scaling: bool = True, seed: None = None)[source]#
Bases:
object
Optimization driver configuration.
- max_iter#
Maximum number of iterations
- Type:
int or None, default=None
- max_budget#
Maximum budget before termination of the optimization process
- Type:
float, default=inf
- max_time#
Maximum time before termination of the optimization process
- Type:
float, default=inf
- nt_init#
Number of samples in the initial DoE
- Type:
int or None, default=None
- xt_init#
Initial DoE to use. The Numpy array must be of shape (num_sample, num_dimension). By providing an initial DoE, the driver will not generate an initial DoE. Cannot be used with nt_init
- Type:
np.ndarray or None, default=None
- results_dir#
Name of the logging directory
- Type:
str or None, default='bo_result'
- verbose#
Print optimization information.
- Type:
bool, default=False
- log_doe#
Log the function values as soon as they are sampled. The values are stored in a .csv file.
- Type:
bool, default=False
- log_stats#
Log optimization statistics at the end of each iteration. The stats are stored in a .jsonl file.
- Type:
bool, default=False
- scaling#
Scale the data. The objective is standardized. The constraints are divided by their standard deviation. The design variables are normalized between 0 and 1.
- Type:
bool, default=True
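The scaling described above can be sketched with NumPy (an illustrative helper, not the library's internal implementation):

```python
import numpy as np

def scale_training_data(x, obj, cstr, bounds):
    """Scale as described: x normalized to [0, 1], objective standardized,
    constraints divided by their standard deviation (sketch)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x_s = (x - lo) / (hi - lo)
    obj_s = (obj - obj.mean(axis=0)) / obj.std(axis=0)
    cstr_s = cstr / cstr.std(axis=0)
    return x_s, obj_s, cstr_s
```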
- seed#
Seed for experiment reproducibility
- Type:
int or None, default=None
- callback_func: list[Callable] | Callable | None = None#
- ctol: float = 0.0001#
- log_doe: bool = False#
- log_stats: bool = False#
- max_budget: float = inf#
- max_iter: int | None = None#
- max_time: float = inf#
- nt_init: int | None = None#
- results_dir: str | None = 'bo_result'#
- scaling: bool = True#
- seed: None = None#
- verbose: bool = False#
- xt_init: ndarray | None = None#
- class Evaluator(problem, res_path: str | None = None)[source]#
Bases:
object
Evaluate the expensive-to-evaluate functions.
- res_path#
DOE logging directory path.
- Type:
str | None
- log_sample(sample) None[source]#
Append the sample data to the DOE CSV file.
This method appends new rows to the existing file at the specified path. If the file does not exist, it will be created with a header row.
- Parameters:
sample (Sample) – The sample to log.
- Return type:
None
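The append-with-header behavior described above can be sketched as follows (an illustrative helper; the actual method works on Sample objects):

```python
import csv
import os

def append_sample_row(path, header, row):
    """Append one row to a CSV file; write the header only when the
    file is first created (sketch of the described behavior)."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(header)
        writer.writerow(row)
```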
- sample_func(infill: list[ndarray | None], state) None[source]#
Sample the problem functions at requested query points and add the samples to the optimization state’s dataset.
- Parameters:
infill (list[np.ndarray | None]) – Query points: each numpy array in the list represents a fidelity level and must have shape (num_points, num_dim); if a level is set to None, it will be skipped.
state (State) – Optimization state object.
- Return type:
None
- class ObjectiveConfig(objective: list[Callable], surrogate: type[Surrogate], type: str = 'minimize', surrogate_kwargs: dict | None = None)[source]#
Bases:
object
Configuration of the objective function used in the optimization problem.
This class stores the objective callable(s) together with surrogate modeling information used to approximate the objective during optimization.
- objective#
List of objective functions. Each callable must accept a decision variable vector x and return a scalar objective value. The functions must be ordered in increasing level of fidelity.
- Type:
list[Callable]
- type#
Specifies whether the objective should be minimized or maximized.
- Type:
{“minimize”, “maximize”}, default=”minimize”
- surrogate#
Surrogate model used to approximate the objective function.
- Type:
Surrogate or None, default=None
- surrogate_kwargs#
Optional keyword arguments passed to the surrogate model.
- Type:
dict or None, default=None
- objective: list[Callable]#
- surrogate_kwargs: dict | None = None#
- type: str = 'minimize'#
- class OptimizationDataset[source]#
Bases:
object
Store samples.
- num_obj#
Number of objectives
- Type:
int
- num_cstr#
Number of constraints
- Type:
int
- num_fidelity#
Number of fidelity levels
- Type:
int
- fidelities#
Fidelity levels sorted in increasing order.
- Type:
list
- num_samples#
Number of samples at each fidelity level.
- Type:
dict
- add(sample: Sample)[source]#
Add a new sample to the dataset.
- Parameters:
sample (Sample) – The sample to be added. It should contain objective function values (obj) and/or constraint function values (cstr) for each variable in the problem.
Notes
If no samples have been added yet, the number of objectives and constraints are set to the lengths of sample.obj and sample.cstr, respectively. Subsequent samples must have the same number of objectives and constraints as the first sample.
If the fidelity level of the new sample is not already in the dataset, it is added, along with a counter for the number of samples at that fidelity.
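The bookkeeping described in the notes can be sketched with plain dictionaries (illustrative only; the actual class stores Sample objects):

```python
def add_sample(dataset, sample):
    """Register a sample, inferring num_obj/num_cstr from the first one
    and tracking per-fidelity counts (sketch of the described logic)."""
    if not dataset["samples"]:
        dataset["num_obj"] = len(sample["obj"])
        dataset["num_cstr"] = len(sample["cstr"])
    lvl = sample["fidelity"]
    if lvl not in dataset["num_samples"]:
        # New fidelity level: record it and start its counter.
        dataset["fidelities"] = sorted(dataset["fidelities"] + [lvl])
        dataset["num_samples"][lvl] = 0
    dataset["num_samples"][lvl] += 1
    dataset["samples"].append(sample)
```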
- export_as_dict() dict[source]#
Exports the samples data as a dictionary, including fidelity levels, evaluation times, input values, objective function values, constraint function values, and RSCV values.
- Returns:
- A dictionary containing the following keys:
"cstr": an array of shape (num_samples, num_cstr) representing the constraint function values for each sample.
"eval_time": an array of shape (num_samples,) representing the total evaluation time for each sample.
"fidelity": an array of shape (num_samples,) representing the fidelity level of each sample.
"obj": an array of shape (num_samples, num_obj) representing the objective function values for each sample.
"rscv": an array of shape (num_samples,) representing the Root Square Constraint Violation (RSCV) value for each sample.
"x": an array of shape (num_samples, nvar) representing the input values for each sample.
- Return type:
dict
- get_by_fidelity(lvl: int) list[Sample][source]#
Fetches all the samples corresponding to the specified fidelity level.
- Parameters:
lvl (int) – Fidelity level (starting at 0 for the lowest fidelity level) from which to retrieve samples.
- Returns:
A list of samples of the corresponding fidelity level.
- Return type:
list[Sample]
- class Problem(obj_configs: list, design_space: ndarray | DesignSpace, cstr_configs: list = [], costs: list[float] | None = None)[source]#
Bases:
object
Problem configuration.
- num_dim#
Number of dimensions.
- Type:
int
- num_obj#
Number of objectives.
- Type:
int
- num_cstr#
Number of constraints.
- Type:
int
- num_fidelity#
Number of fidelities.
- Type:
int
- design_space#
Problem design space.
- Type:
np.ndarray
- costs#
Fidelity level costs.
- Type:
list
- obj_configs#
Objective configurations.
- Type:
list
- obj_funcs#
Objective functions.
- Type:
list
- cstr_configs#
Constraint configurations.
- Type:
list
- cstr_funcs#
Constraint functions.
- Type:
list
- class Sample(x: ndarray, fidelity: int, obj: ndarray | None, cstr: ndarray | None, eval_time: ndarray | None, metadata: dict = <factory>)[source]#
Bases:
object
Store sample data.
- x#
Design variable vector.
- Type:
np.ndarray
- obj#
Objective value(s). Array dimension: (num_obj,)
- Type:
np.ndarray
- cstr#
Constraint value(s). Array dimension: (num_cstr,)
- Type:
np.ndarray
- eval_time#
Evaluation times of each QoI. Array dimension: (num_obj+num_cstr,)
- Type:
np.ndarray
- metadata#
Dictionary with sample metadata such as iter, budget and fidelity.
- Type:
dict
- cstr: ndarray | None#
- eval_time: ndarray | None#
- fidelity: int#
- metadata: dict#
- obj: ndarray | None#
- x: ndarray#
- class State(problem)[source]#
Bases:
object
State of the optimization process at a given moment.
- Parameters:
problem (Problem) – The problem to be optimized.
- iter#
Current iteration number.
- Type:
int
- budget#
Current used budget.
- Type:
float
- bo_start#
Start time of the optimization process.
- Type:
float
- bo_time#
Elapsed time of the optimization driver.
- Type:
float
- obj_models#
List of surrogates modeling the objective function(s).
- Type:
list[Surrogate]
- cstr_models#
List of surrogates modeling the constraint function(s).
- Type:
list[Surrogate]
- dataset#
The dataset containing all samples from the expensive-to-evaluate functions.
- Type:
OptimizationDataset
- scaled_dataset#
The scaled dataset.
- Type:
OptimizationDataset
- iter_log#
Dictionary containing logging data.
- Type:
dict
- scale_dataset(unit_std: bool)[source]#
Scale data in the dataset. The scaled dataset is accessible via the `scaled_dataset` attribute.
- get_best_sample(ctol: float = 0.0001, fidelity: int = -1, scaled: bool = False) Sample[source]#
Returns the best sample based on the objective function value.
- Parameters:
ctol (float, optional) – Tolerance for constraint violation. Default is 1e-4.
fidelity (int, optional) – Fidelity level to consider. If -1, uses the highest fidelity. Default is -1.
- Returns:
sample – The best sample based on the objective function value.
- Return type:
Sample
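The selection rule can be sketched as follows. This is illustrative only: the fallback choice when no sample satisfies the tolerance is an assumption, not documented behavior, and the dict fields stand in for Sample attributes:

```python
def best_sample(samples, ctol=1e-4):
    """Pick the lowest-objective sample among those whose constraint
    violation (RSCV) is within ctol (sketch)."""
    feasible = [s for s in samples if s["rscv"] <= ctol]
    if feasible:
        return min(feasible, key=lambda s: s["obj"])
    # Assumed fallback: least-violating sample when none is feasible.
    return min(samples, key=lambda s: s["rscv"])
```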