GlennOPT Helpers

Converting to Numpy Array

glennopt.helpers.convert_to_ndarray.convert_to_ndarray(t) ndarray[source]

Converts a scalar or list to a numpy array

Parameters

t (float,list) – scalar or list to convert to a numpy array

Returns

variable as an array

Return type

np.ndarray
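
A minimal usage sketch (the import path follows the qualified name above):

    from glennopt.helpers.convert_to_ndarray import convert_to_ndarray

    a = convert_to_ndarray(3.0)         # scalar wrapped as a numpy array
    b = convert_to_ndarray([1.0, 2.0])  # list converted to a numpy array
    print(type(a), type(b))             # both are numpy.ndarray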

Copy

Copies a directory

Parameters
  • src (str) – source directory

  • dest (str) – destination directory

  • symlinks (bool, optional) – If True, symbolic links in the source tree are copied as symbolic links in the new tree, along with the metadata of the original links; if False, the contents and metadata of the linked files are copied instead. Defaults to False.

  • ignore (callable, optional) – If given, it must be a callable that receives as its arguments the directory being visited by copytree() and a list of its contents, as returned by os.listdir(). Defaults to None.
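
A usage sketch. The exact callable name is not shown above, so the import below assumes the helper is exposed as copy inside glennopt.helpers.copy; the directory names are placeholders.

    from glennopt.helpers.copy import copy  # NOTE: function name assumed from the module path

    # Mirror a template evaluation directory into a new calculation folder,
    # copying linked file contents (symlinks=False) and skipping nothing (ignore=None).
    copy('Evaluation', 'Calculation/IND000', symlinks=False, ignore=None)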

Mutations

glennopt.helpers.mutate.crossover(x1: ndarray, x2: ndarray) Tuple[ndarray, ndarray][source]

Performs a simple crossover on two arrays

Parameters
  • x1 (np.ndarray) – array of evaluation parameters from an individual

  • x2 (np.ndarray) – array of evaluation parameters from a different individual

Returns

containing the following

y1 (np.ndarray): new set of evaluation parameters after crossover

y2 (np.ndarray): new set of evaluation parameters after crossover

Return type

(Tuple)
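
A sketch of calling crossover on two parameter vectors (plain numpy arrays, matching the signature above):

    import numpy as np
    from glennopt.helpers.mutate import crossover

    x1 = np.array([0.1, 0.5, 0.9])  # evaluation parameters of one individual
    x2 = np.array([0.4, 0.2, 0.7])  # evaluation parameters of another individual
    y1, y2 = crossover(x1, x2)      # two new parameter vectors after crossover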

glennopt.helpers.mutate.de_best_1_bin(best: Individual, individuals: List[Individual], objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter], F: float = 0.6, C: float = 0.7)[source]

Applies mutation and crossover using de_best_1_bin to a list of individuals

This type of mutation and crossover strategy is well suited to single-objective problems, but it can converge to local minima.

Citations:

https://gist.github.com/martinus/7434625df79d820cd4d9

Storn, R., & Price, K. (1997). Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4), 341–359. https://doi.org/10.1023/A:1008202821328

Ao, Y., & Chi, H. (2009). Multi-parent Mutation in Differential Evolution for Multi-objective Optimization. 2009 Fifth International Conference on Natural Computation, 4, 618–622. https://doi.org/10.1109/ICNC.2009.149

Parameters
  • best (Individual) – Best individual

  • individuals (List[Individual]) – list of all individuals

  • objectives (List[Parameter]) – list of objectives of those individuals

  • eval_parameters (List[Parameter]) – list of evaluation parameters

  • performance_parameters (List[Parameter]) – list of performance parameters

  • F (float, optional) – Amplification Factor [0,2]. Defaults to 0.6.

  • C (float, optional) – Crossover factor [0,1]. Defaults to 0.7.

Returns

New list of individuals all mutated and crossovered

Return type

List[Individual]

glennopt.helpers.mutate.de_dmp(individuals: List[Individual], objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter])[source]

Difference Mean Based Perturbation. Less greedy than DE/best/1, so it is less likely to get stuck in local minima and favors exploration.

  • F – Amplification factor, randomly switched between 0.5 and 2

  • C – Crossover factor, sampled uniformly at random from 0.3 to 1

  • b – Crossover blending rate, randomly chosen from 0.1, 0.5 (median), and 0.9

Citations:

Ghosh, A., Das, S., Mallipeddi, R., Das, A. K., & Dash, S. S. (2017). A Modified Differential Evolution with Distance-based Selection for Continuous Optimization in Presence of Noise. IEEE Access, 5, 26944–26964. https://doi.org/10.1109/ACCESS.2017.2773825

Parameters
  • individuals (List[Individual]) – list of all individuals, sorted in terms of best performing

  • objectives (List[Parameter]) – list of objectives

  • eval_parameters (List[Parameter]) – list of evaluation parameters

  • performance_parameters (List[Parameter]) – list of performance parameters

Returns

New list of individuals all mutated and crossovered

Return type

List[Individual]

glennopt.helpers.mutate.de_dmp_bak(best: Individual, individuals: List[Individual], objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter], num_children: int, C: float = 0.5)[source]

Difference Mean Based Perturbation. Less greedy than DE/best/1, so it is less likely to get stuck in local minima and favors exploration.

This version is archived; it uses the best individuals to generate the next generation.

  • F – Amplification factor, randomly switched between 0.5 and 2

  • C – Crossover factor, sampled uniformly at random from 0.3 to 1

  • b – Crossover blending rate, randomly chosen from 0.1, 0.5 (median), and 0.9

Citations:

Ghosh, A., Das, S., Mallipeddi, R., Das, A. K., & Dash, S. S. (2017). A Modified Differential Evolution with Distance-based Selection for Continuous Optimization in Presence of Noise. IEEE Access, 5, 26944–26964. https://doi.org/10.1109/ACCESS.2017.2773825

Parameters
  • best (Individual) – best individuals

  • individuals (List[Individual]) – individuals

  • objectives (List[Parameter]) – list of objectives

  • eval_parameters (List[Parameter]) – list of evaluation parameters

  • performance_parameters (List[Parameter]) – list of performance parameters

  • num_children (int) – number of children to generate

  • C (float, optional) – Crossover factor sampled uniform at random from 0.3 to 1. Defaults to 0.5.

Returns

New list of individuals all mutated and crossovered

Return type

List[Individual]

class glennopt.helpers.mutate.de_mutation_type(value)[source]

Differential evolution mutation type. Users can select which kind of mutation to use.

Parameters
  • de_rand_1_bin – Differential evolution mutation and crossover using randomly selected individuals

  • de_best_1_bin – Differential evolution mutation and crossover using the best individual

  • simple – Simple mutation and crossover using the best individual

  • de_rand_1_bin_spawn – Applies mutation and crossover using de_rand_1_bin to a list of individuals to spawn even more individual combinations

  • de_dmp – Uses Difference Mean Based Perturbation style crossover and mutation

glennopt.helpers.mutate.de_rand_1_bin(individuals: List[Individual], objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter], min_parents: int = 3, max_parents: int = 3, F: float = 0.6, C: float = 0.7) List[Individual][source]

Applies mutation and crossover using de_rand_1_bin to a list of individuals

Citations:

https://gist.github.com/martinus/7434625df79d820cd4d9

Storn, R., & Price, K. (1997). Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4), 341–359. https://doi.org/10.1023/A:1008202821328

Ao, Y., & Chi, H. (2009). Multi-parent Mutation in Differential Evolution for Multi-objective Optimization. 2009 Fifth International Conference on Natural Computation, 4, 618–622. https://doi.org/10.1109/ICNC.2009.149

Parameters
  • individuals (List[Individual]) – list of individuals. Takes the best individual[0] (sorted lowest to highest)

  • objectives (List[Parameter]) – list of objectives

  • eval_parameters (List[Parameter]) – list of evaluation parameters

  • performance_parameters (List[Parameter]) – list of performance parameters

  • min_parents (int, optional) – Minimum number of parents. Defaults to 3.

  • max_parents (int, optional) – Maximum number of parents. Defaults to 3.

  • F (float, optional) – Amplification Factor. Range [0,2]. Defaults to 0.6.

  • C (float, optional) – Crossover factor. Range [0,1]. Defaults to 0.7.

Returns

New list of individuals all mutated and crossovered

Return type

List[Individual]

glennopt.helpers.mutate.de_rand_1_bin_spawn(individuals: List[Individual], objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter], num_children: int, F: float = 0.6, C: float = 0.7) List[Individual][source]

Applies mutation and crossover using de_rand_1_bin to a list of individuals to spawn even more individual combinations

Parameters
  • individuals (List[Individual]) – list of individuals. Takes the best individual[0] (sorted lowest to highest)

  • objectives (List[Parameter]) – list of objectives

  • eval_parameters (List[Parameter]) – Evaluation parameters, i.e. the x in F(x)

  • performance_parameters (List[Parameter]) – parameters you want to keep track of

  • num_children (int) – Number of children to spawn

  • F (float, optional) – Amplification Factor. Defaults to 0.6.

  • C (float, optional) – Crossover Factor. Defaults to 0.7.

Returns

List of individuals

Return type

List[Individual]

glennopt.helpers.mutate.get_eval_param_matrix(individuals: List[Individual]) Tuple[ndarray, float, float][source]

Gets the evaluation parameter as a matrix

Parameters

individuals (List[Individual]) – List of individuals

Returns

containing the following

population (np.ndarray): population evaluation parameters

xmin (np.ndarray): min evaluation parameters for the first individual

xmax (np.ndarray): max evaluation parameters for the first individual

Return type

(Tuple)

glennopt.helpers.mutate.get_pairs(nIndividuals: int, nParents: int, parent_indx_seed=[]) List[int][source]

Gets a list of integers that are not in the parent_indx_seed array

Parameters
  • nIndividuals (int) – number of individuals

  • nParents (int) – number of parents. This controls the length of the returned list

  • parent_indx_seed (list, optional) – pre-populate the parent index array. Defaults to [].

Returns

list of individual indices that can be paired with

Return type

List[int]
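
A sketch of drawing parent indices while excluding an already-chosen index:

    from glennopt.helpers.mutate import get_pairs

    # Pick 3 distinct parent indices from a population of 20,
    # excluding index 0, which is already used as the base individual.
    parents = get_pairs(nIndividuals=20, nParents=3, parent_indx_seed=[0])
    print(parents)  # e.g. [7, 12, 3]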

glennopt.helpers.mutate.mutate(x1: ndarray, xmin: ndarray, xmax: ndarray, mu: float = 0.02, sigma: float = 0.2) ndarray[source]

Mutate the evaluation parameters

Parameters
  • x1 (np.ndarray) – array of evaluation parameters

  • xmin (np.ndarray) – array of minimum evaluation parameter values

  • xmax (np.ndarray) – array of maximum evaluation parameter values

  • mu (float, optional) – percentage of population to mutate. Defaults to 0.02.

  • sigma (float, optional) – mutation step size. Defaults to 0.2.

Returns

array of mutated values

Return type

np.ndarray
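
A sketch of mutating a parameter vector within its bounds using the defaults above:

    import numpy as np
    from glennopt.helpers.mutate import mutate

    x    = np.array([0.2, 0.8, 0.5])  # current evaluation parameters
    xmin = np.zeros(3)                # lower bounds
    xmax = np.ones(3)                 # upper bounds
    x_new = mutate(x, xmin, xmax, mu=0.02, sigma=0.2)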

class glennopt.helpers.mutate.mutation_parameters(mutation_type: de_mutation_type = de_mutation_type.de_rand_1_bin, sigma: float = 0.2, mu: float = 0.02, F: float = 0.6, C: float = 0.8, nParents: int = 16)[source]

Data class for storing the mutation parameters used for NSGA and differential evolution problems

Parameters
  • mutation_type (de_mutation_type) – type of mutation to use

  • sigma (float) – mutation step size. Defaults to 0.2.

  • mu (float) – mutation rate. Defaults to 0.02.

  • F (float) – Amplification Factor [0,2]

  • C (float) – Crossover factor [0,1]

  • nParents (int) – number of parents. Defaults to 16.

__eq__(other)

Return self==value.

__hash__ = None
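
A configuration sketch; the keyword arguments mirror the constructor signature above. Attaching the object to an optimizer (for example as opt.mutation_params) is an assumption about the surrounding API and is only shown as a comment.

    from glennopt.helpers.mutate import mutation_parameters, de_mutation_type

    mp = mutation_parameters(
        mutation_type=de_mutation_type.de_best_1_bin,  # strategy from the enum above
        sigma=0.2,    # mutation step size
        mu=0.02,      # mutation rate
        F=0.6,        # amplification factor, range [0, 2]
        C=0.8,        # crossover factor, range [0, 1]
        nParents=16,
    )
    # mp would then be handed to the optimizer, e.g. opt.mutation_params = mp  (assumed attribute)
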
glennopt.helpers.mutate.set_eval_parameters(eval_parameters: List[Parameter], x: ndarray)[source]

Set the evaluation parameters

Parameters
  • eval_parameters (List[Parameter]) – list of parameters as a class. x is mapped to eval_parameter.value

  • x (np.ndarray) – the mutated value/array to be evaluated

Returns

returns the new parameter

Return type

Parameter

glennopt.helpers.mutate.simple(individuals: List[Individual], nCrossover: int, nMutation: int, objectives: List[Parameter], eval_parameters: List[Parameter], performance_parameters: List[Parameter], mu: float, sigma: float)[source]

Performs a simple mutation and crossover on the individuals

Parameters
  • individuals (List[Individual]) – list of individuals

  • nCrossover (int) – number of individuals to use for crossover

  • nMutation (int) – number of individuals to use for mutation

  • objectives (List[Parameter]) – Objectives defined as a list of Parameters

  • eval_parameters (List[Parameter]) – Evaluation parameters

  • performance_parameters (List[Parameter]) – Performance Parameters

  • mu (float) – Mutation rate

  • sigma (float) – Mutation step size

Returns

New list of individuals after crossover and mutation

Return type

List[Individual]

Non Dominated Sorting

Loops through the list of individuals and sorts them into non-dominated fronts.

Citation:

Yuan, Y., Xu, H., & Wang, B. (2014). An Improved NSGA-III Procedure for Evolutionary Many-objective Optimization. Genetic and Evolutionary Computation Conference (GECCO 2014), 661–668. https://doi.org/10.1145/2576768.2598342

Parameters
  • individuals (List[Individual]) – all individuals of a population

  • k (int) – number of individuals to select

  • first_front_only (bool, optional) – gets only the best individuals. Defaults to False.

Returns

List containing the fronts, i.e. a list of lists of individuals.

Return type

List[Individual]
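
A usage sketch. The callable name non_dominated_sorting is assumed from the module path, and individuals stands in for a List[Individual] gathered from a finished run:

    from glennopt.helpers.non_dominated_sorting import non_dominated_sorting  # name assumed

    # individuals: List[Individual], e.g. collected from the optimizer's calculation folder
    fronts = non_dominated_sorting(individuals, k=len(individuals), first_front_only=False)
    pareto_front = fronts[0]  # the first (non-dominated) front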

Parallel Settings

class glennopt.helpers.parallel_settings.parallel_settings(concurrent_executions: int = 1, cores_per_execution: int = 1, execution_timeout: int = 10)[source]

These settings control how the optimizer will execute.

Parameters
  • concurrent_executions (int) – Number of concurrent executions per machine

  • cores_per_execution (int) – number of cores per execution. Defaults to 1. Set it to 0 to ignore the per-execution core count and rely only on concurrent executions

  • execution_timeout (int) – execution timeout in minutes. Defaults to 10 minutes

  • machine_filename (str) – path to the machine file. Defaults to ‘machinefile.txt’

  • database_filename (str) – path to the database. Defaults to ‘database.csv’

  • ignore_cores (bool) – indicate whether to ignore the number of cores per execution and just execute the code

__eq__(other)

Return self==value.

__hash__ = None
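
A configuration sketch; the keyword arguments mirror the constructor signature above (wiring the object into an optimizer is not shown here):

    from glennopt.helpers.parallel_settings import parallel_settings

    ps = parallel_settings(
        concurrent_executions=4,  # run 4 evaluations at a time on this machine
        cores_per_execution=2,    # cores reserved per evaluation (0 = ignore core count)
        execution_timeout=30,     # minutes before an evaluation is considered timed out
    )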

Population Distance

glennopt.helpers.population_distance.distance(individuals: List[Individual], newIndividuals: List[Individual])[source]

Calculates the distance between individuals

Parameters
  • individuals (List[Individual]) – past list of individuals

  • newIndividuals (List[Individual]) – updated list of individuals

Returns

distance

Return type

float

glennopt.helpers.population_distance.diversity(individuals: List[Individual]) float[source]

Computes the diversity of a population. A higher diversity value is better.

Citations:
  1. Hu, J. Zeng, and Y. Tan, ‘‘A diversity-guided particle swarm optimizer for dynamic environments,’’ in Bio-Inspired Computational Intelligence and Applications. Berlin, Germany: Springer, 2007, pp. 239–247

Parameters

individuals (List[Individual]) – List of individuals of a population

Returns

diversity value

Return type

float
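
A sketch of the two population metrics. previous_pop and current_pop are placeholders for List[Individual] objects from consecutive generations:

    from glennopt.helpers.population_distance import distance, diversity

    # previous_pop, current_pop: List[Individual] from two consecutive generations (placeholders)
    d = distance(previous_pop, current_pop)  # how far the population has moved
    v = diversity(current_pop)               # spread of the current population; higher is better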

Post Processing

glennopt.helpers.post_processing.get_best(individuals: List[Individual], pop_size: int)[source]

Gets the best individual as a function of population. Some populations won’t generate a better design, but the best design will always be carried to the next population for crossover + mutation

Important: Call this function with inputs from individuals = ns.read_calculation_folder()

Parameters
  • individuals (List[Individual]) – Takes in a list of the individuals

  • pop_size (int) – population size i.e. how many individuals

Returns

tuple containing:

objectives (List[Parameter]): numpy array of best objective values for each population. For multi-objective problems use best fronts for better representation of design space

pop_folders (List[int]): this is a list of populations

best_fronts (List[List[Individual]]): List of individuals contained in best fronts, empty list if single objective

Return type

(tuple)

glennopt.helpers.post_processing.get_pop_best(individuals: List[Individual])[source]

Gets the best individuals from each population (not the rolling best). Typically you would call this with opt.read_calculation_folder(), where opt is an object representing your nsga3 or sode class.

Important: Call this function with inputs from individuals = ns.read_calculation_folder()

Returns

tuple containing:

best_individuals (List[Dict[str,List[Individual]]]): this is an array of individuals that are best at each objective

[
POP001: [best_individual_objective1, best_individual_objective2, best_individual_objective3], best_individual_compromise
POP002: [best_individual_objective1, best_individual_objective2, best_individual_objective3], best_individual_compromise
POP003: [best_individual_objective1, best_individual_objective2, best_individual_objective3], best_individual_compromise
]

comp_individuals (List[Individual]): this is an array of individuals that are the best compromise between all the objectives

Return type

(tuple)

glennopt.helpers.post_processing.plot_pareto(best_fronts, pop, objective1_index, objective2_index)[source]

Plots the Pareto front of two selected objectives for a given population.
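
A post-processing sketch following the notes above. opt stands for an already-configured nsga3 or sode optimizer pointed at a finished run, and pop_size is the population size that was used; both are placeholders.

    from glennopt.helpers.post_processing import get_best, get_pop_best, plot_pareto

    individuals = opt.read_calculation_folder()  # opt: nsga3/sode instance (placeholder)

    # Rolling best objective values per population, plus the best fronts (multi-objective runs)
    objectives, pop_folders, best_fronts = get_best(individuals, pop_size=32)

    # Best individuals per population and the best compromise individuals
    best_individuals, comp_individuals = get_pop_best(individuals)

    # For a multi-objective run, plot objective 0 against objective 1 for the last population
    # (argument meanings assumed from the parameter names above)
    plot_pareto(best_fronts, pop=pop_folders[-1], objective1_index=0, objective2_index=1)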