Non-dominated sorting optimization with SciPy

Adjoint (gradient-based) optimization

class glennopt.optimizers.nsopt.NSOPT(eval_command: str = 'python evaluation.py', eval_folder: str = 'Evaluation', optimization_folder: Optional[str] = None, single_folder_eval: bool = False, overwrite_input_file: bool = False, linear_network: List[int] = [64, 64, 64, 64], epochs: int = 200, train_test_split: float = 0.8, pareto_resolution: int = 32, min_method: str = 'Powell')[source]
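
A minimal instantiation sketch using the keyword arguments from the signature above; the folder names and values are illustrative, not required defaults.

    from glennopt.optimizers.nsopt import NSOPT

    # Set up the optimizer. eval_command is run inside a copy of eval_folder for
    # every individual; results are collected under optimization_folder.
    ns_optimizer = NSOPT(
        eval_command="python evaluation.py",
        eval_folder="Evaluation",
        optimization_folder="nsopt_run",   # illustrative folder name
        epochs=200,                        # surrogate training epochs
        pareto_resolution=32,              # number of reference points on the Pareto front
        min_method="Powell",               # minimization method (presumably passed to scipy.optimize.minimize)
    )
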
add_eval_parameters(eval_params: List[Parameter])[source]

Add evaluation parameters. This is part of the initialization

Parameters

eval_params (List[Parameter]) – List of evaluation parameters to add
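
A sketch of registering the evaluation (design) variables. The Parameter import path and constructor arguments shown here are assumptions and may differ between glennopt versions.

    from glennopt.base import Parameter   # import path is an assumption

    eval_params = [
        Parameter(name="x1", min_value=-5.0, max_value=5.0),  # assumed constructor arguments
        Parameter(name="x2", min_value=-5.0, max_value=5.0),
    ]
    ns_optimizer.add_eval_parameters(eval_params=eval_params)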

add_objectives(objectives: List[Parameter])[source]

Add the objectives

Parameters

objectives (List[Parameter]) – List of objectives
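
A sketch of registering objectives. Only the names are given here (an assumption based on the signature), since the objective values come from the evaluation output.

    objectives = [
        Parameter(name="objective1"),
        Parameter(name="objective2"),
    ]
    ns_optimizer.add_objectives(objectives=objectives)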

add_performance_parameters(performance_params: Optional[List[Parameter]] = None)[source]

Add performance parameters

Parameters

performance_params (List[Parameter], optional) – List of performance parameters. Defaults to None.
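
A sketch of registering performance parameters, i.e. quantities tracked for each evaluation but not optimized (an assumption based on their optional status); the names are illustrative.

    perf_params = [
        Parameter(name="pressure_ratio"),
        Parameter(name="efficiency"),
    ]
    ns_optimizer.add_performance_parameters(performance_params=perf_params)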

optimize_from_population(pop_start: int, n_generations: int)[source]

Starts the optimization by reading the values of a population. The population can come from a DOE or from a previous evaluation.

Parameters
  • pop_start (int) – Population to start from. Use pop_start=-1 to start from the DOE; the population folder is read and the optimization begins at pop_start+1

  • n_generations (int) – Number of generations to iterate for

Raises

Exception – [description]
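
A sketch of continuing an optimization from existing results, assuming a DOE (or a previous run) has already populated the optimization folder.

    # Start right after the DOE and run 10 generations; use a non-negative
    # pop_start to resume after a specific population folder instead.
    ns_optimizer.optimize_from_population(pop_start=-1, n_generations=10)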

start_doe(doe_individuals: Optional[List[Individual]] = None, doe_size: int = 128)[source]

Starts a design of experiments. This generates the parameters for the individuals to be evaluated and executes each case. If the DOE has already started and an output file exists for an individual, that individual will not be re-evaluated.

Parameters
  • doe_individuals (List[Individual], optional) – List of individuals to evaluate. Defaults to None.

  • doe_size (int, optional) – Number of individuals to evaluate in the design of experiments. This is only used if doe_individuals is None. Defaults to 128.
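
A sketch of running a design of experiments with generated individuals; because individuals with an existing output file are skipped, the call can be safely restarted.

    # Generate and evaluate 128 individuals spanning the evaluation parameters.
    ns_optimizer.start_doe(doe_size=128)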

train(individuals: List[Individual], retrain: bool = False)[source]

Trains the neural network to predict the output given an input

Parameters
  • individuals (List[Individual]) – List of individuals used to train the neural network

  • retrain (bool, optional) – If True, retrains the existing model on the new data; if False, creates a new model for every population. Setting it to False makes training take longer, but the model is more accurate because the inputs and outputs are renormalized. Defaults to False.

Returns

Training loss and test loss

Return type

Tuple[float, float]
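
Training is normally invoked internally during the optimization, but it can also be called directly on a list of evaluated individuals. The read_calculation_folder helper used below to load results is an assumption about the optimizer base class; substitute however you collect evaluated individuals.

    populations = ns_optimizer.read_calculation_folder()   # assumed helper returning a list of populations
    train_loss, test_loss = ns_optimizer.train(individuals=populations[-1], retrain=False)
    print(f"train loss: {train_loss:.4e}  test loss: {test_loss:.4e}")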

glennopt.optimizers.nsopt.surrogate_objective_func(x0: ndarray, model: Module, reference_points: List[ndarray], dist_index: int, labels_scaler: List[MinMaxScaler], features_scaler: List[MinMaxScaler])[source]

Objective function for the adjoint (gradient-based) step, evaluated using the neural network surrogate. The goal is to find values of x0 that minimize the distance to a reference point.

Parameters
  • x0 (np.ndarray) – initial guess

  • model (nn.Module) – neural network model used for prediction

  • reference_points (List[np.ndarray]) – List of reference points along the Pareto front

  • dist_index (int) – Index of the reference point whose distance is being minimized

  • labels_scaler (List[MinMaxScaler]) – Fitted scaler(s) for the objectives (labels)

  • features_scaler (List[MinMaxScaler]) – Fitted scaler(s) for the evaluation parameters (features)

Returns

Sum of all the fitnesses; this is the value to be minimized

Return type

(float)
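
A sketch of how this objective could be driven by scipy.optimize.minimize, assuming a trained model and fitted scalers produced by NSOPT.train(); the reference points and all variable names here are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from glennopt.optimizers.nsopt import surrogate_objective_func

    # Assumed to exist from a prior training step:
    #   model            - trained torch nn.Module mapping scaled inputs to scaled objectives
    #   labels_scaler    - fitted MinMaxScaler(s) for the objectives
    #   features_scaler  - fitted MinMaxScaler(s) for the evaluation parameters
    reference_points = [np.array([0.25, 0.75]), np.array([0.75, 0.25])]  # illustrative 2-objective points
    dist_index = 0    # minimize the distance to the first reference point

    result = minimize(
        surrogate_objective_func,
        x0=np.zeros(2),   # one entry per evaluation parameter
        args=(model, reference_points, dist_index, labels_scaler, features_scaler),
        method="Powell",  # matches the class's min_method default
    )
    print(result.x, result.fun)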