Bingo Tutorial 3: Archipelagos and Logging#
Goal: Use an archipelago in evolution to find a list of numbers with zero magnitude. Also use logging to track the progress of evolution.#
Pre-Requisites#
It is assumed that the reader is familiar with the setup of the second tutorial before continuing.
Zero Min Problem Setup#
We will be working with the same problem as in the second tutorial: finding a list of numbers with zero magnitude through genetic optimization. So, the setup is roughly the same.
Chromosome Generator#
[1]:
import numpy as np
from bingo.chromosomes.multiple_floats import MultipleFloatChromosomeGenerator
VALUE_LIST_SIZE = 8
np.random.seed(0)
def get_random_float():
    return np.random.random_sample()
generator = MultipleFloatChromosomeGenerator(get_random_float, VALUE_LIST_SIZE, [1, 3, 4])
Chromosome Variation#
[2]:
from bingo.chromosomes.multiple_values import SinglePointCrossover
from bingo.chromosomes.multiple_values import SinglePointMutation
crossover = SinglePointCrossover()
mutation = SinglePointMutation(get_random_float)
Fitness and Evaluation#
[3]:
from bingo.evaluation.fitness_function import FitnessFunction
from bingo.local_optimizers.scipy_optimizer import ScipyOptimizer
from bingo.local_optimizers.local_opt_fitness import LocalOptFitnessFunction
from bingo.evaluation.evaluation import Evaluation
class ZeroMinFitnessFunction(FitnessFunction):
    def __call__(self, individual):
        return np.linalg.norm(individual.values)
fitness = ZeroMinFitnessFunction()
optimizer = ScipyOptimizer(fitness)
local_opt_fitness = LocalOptFitnessFunction(fitness, optimizer)
evaluator = Evaluation(local_opt_fitness) # evaluates a population (list of chromosomes)
Selection#
[4]:
from bingo.selection.tournament import Tournament
GOAL_POPULATION_SIZE = 25
selection = Tournament(GOAL_POPULATION_SIZE)
Evolutionary Algorithm#
[5]:
from bingo.evolutionary_algorithms.mu_plus_lambda import MuPlusLambda
MUTATION_PROBABILITY = 0.4
CROSSOVER_PROBABILITY = 0.4
NUM_OFFSPRING = GOAL_POPULATION_SIZE
evo_alg = MuPlusLambda(evaluator,
                       selection,
                       crossover,
                       mutation,
                       CROSSOVER_PROBABILITY,
                       MUTATION_PROBABILITY,
                       NUM_OFFSPRING)
Hall of Fame#
[6]:
from bingo.stats.hall_of_fame import HallOfFame
def similar_mfcs(mfc_1, mfc_2):
    """identifies if two MultipleFloatChromosomes have similar values"""
    difference_in_values = 0
    for i, j in zip(mfc_1.values, mfc_2.values):
        difference_in_values += abs(i - j)
    return difference_in_values < 1e-4
hof = HallOfFame(max_size=5, similarity_function=similar_mfcs)
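To get a feel for what this similarity function treats as "similar", here is a small illustrative check that is not part of the tutorial's pipeline; types.SimpleNamespace is used only to give the stand-in objects a values attribute like a chromosome:

from types import SimpleNamespace

# Lightweight stand-ins with a .values attribute, purely for illustration.
chrom_a = SimpleNamespace(values=[0.1, 0.2, 0.3])
chrom_b = SimpleNamespace(values=[0.1, 0.2, 0.3 + 1e-6])
chrom_c = SimpleNamespace(values=[0.1, 0.2, 0.4])

print(similar_mfcs(chrom_a, chrom_b))  # True: total absolute difference ~1e-6 < 1e-4
print(similar_mfcs(chrom_a, chrom_c))  # False: total absolute difference is 0.1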
Evolutionary Optimizer: Archipelago#
In this experiment, we will use a different evolutionary optimizer than in tutorial 2. Tutorial 2 uses an Island to coordinate the evolutionary process; here, we will use an Archipelago instead. An Archipelago is an EvolutionaryOptimizer (an object that starts and progresses the evolutionary process) which performs evolution on multiple Islands and periodically migrates the populations of randomly selected pairs of Islands.
There are currently two Archipelagos implemented in Bingo: a SerialArchipelago, which performs evolution on its islands consecutively, and a ParallelArchipelago, which performs evolution on each island in parallel.
Here we'll be using a SerialArchipelago, which takes an Island to use in the Archipelago, the total number of islands in the Archipelago, and an optional HallOfFame.
Setting Up the Island#
We can set up an Island in the same way we did in tutorial 2, but note that we're leaving the HallOfFame out in favor of putting it in the Archipelago.
[7]:
from bingo.evolutionary_optimizers.island import Island
POPULATION_SIZE = 10
island = Island(evo_alg, generator, POPULATION_SIZE)
Setting Up the Archipelago#
As mentioned before, we're using a SerialArchipelago, which requires an Island and optionally takes a total number of Islands and a HallOfFame.
[8]:
from bingo.evolutionary_optimizers.serial_archipelago import SerialArchipelago
archipelago = SerialArchipelago(island, num_islands=4, hall_of_fame=hof)
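The ParallelArchipelago mentioned above can be constructed in much the same way. The sketch below is only illustrative: it assumes an MPI environment via mpi4py and that the constructor accepts the island and hall_of_fame arguments shown, so check Bingo's API documentation for the exact signature.

from bingo.evolutionary_optimizers.parallel_archipelago import ParallelArchipelago

# Sketch only: distributes islands across MPI ranks (requires mpi4py).
# The keyword argument below mirrors SerialArchipelago and is an assumption.
parallel_archipelago = ParallelArchipelago(island, hall_of_fame=hof)
# A parallel run is typically launched under MPI, for example:
#   mpiexec -n 4 python zero_min_parallel.py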
Logging#
Before we start evolution, we can set up a log using configure_logging to track the progress of evolution. configure_logging takes an optional verbosity ("quiet", "standard", "detailed", "debug", or an integer from 0 to 100 that corresponds to a typical Python log level); an optional module, which will include the module's name in the logging output if set to True; an optional timestamp, which will add a timestamp to each log entry if set to True; an optional stats_file, a path (str) to a file that will be used to log evolution stats; and an optional logfile, a path (str) to a file that will be used for non-stats logs.
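For example, a more detailed configuration exercising the optional arguments above might look like the following sketch; the file names are placeholders, not part of this tutorial:

from bingo.util.log import configure_logging
# Sketch: verbose logging with module names and timestamps, writing evolution
# stats and general log messages to separate files (placeholder paths).
configure_logging(verbosity="detailed",
                  module=True,
                  timestamp=True,
                  stats_file="evolution_stats.log",
                  logfile="evolution.log")

In this tutorial we keep things simpler and log at standard verbosity to a temporary file: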
[9]:
import tempfile
from bingo.util.log import configure_logging
temp_file = tempfile.NamedTemporaryFile(mode="w+", delete=False)
# close file so we can use it for logging
temp_file.close()
configure_logging(verbosity="standard", logfile=temp_file.name)
You can also use Python’s standard logging module for logging. See the logging module’s docs for more details.
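As a minimal sketch of that approach (the logger name and format string here are just examples):

import logging

# In a standalone script, configure the standard logging module directly
# and emit a message through it.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logging.getLogger("zero_min_experiment").info("evolution run starting")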
Evolution#
As mentioned in the previous tutorial, there are two mechanisms for performing evolution in Bingo. An Archipelago can be evolved in the same way as an Island, either by
Manually stepping through a set number of generations
[10]:
print("Archipelago age:", archipelago.generational_age,
" with best fitness:", archipelago.get_best_fitness())
archipelago.evolve(num_generations=10)
print("Archipelago age:", archipelago.generational_age,
" with best fitness:", archipelago.get_best_fitness())
Archipelago age: 0 with best fitness: 0.4794205717163302
Archipelago age: 10 with best fitness: 0.12439522102582232
or by
Evolving until convergence criteria are met
[11]:
archipelago.evolve_until_convergence(max_generations=1000,
                                     fitness_threshold=0.05)
print("Archipelago age:", archipelago.generational_age,
      " with best fitness:", archipelago.get_best_fitness(), "\n")
print("Best indv: ", archipelago.get_best_individual())
Generation: 10 Elapsed time: 0.000156 Best training fitness: 1.243952e-01
Generation: 11 Elapsed time: 1.014481 Best training fitness: 8.348861e-02
Generation: 12 Elapsed time: 2.075975 Best training fitness: 5.877228e-02
Generation: 13 Elapsed time: 3.211962 Best training fitness: 5.877228e-02
Generation: 14 Elapsed time: 4.374839 Best training fitness: 5.877228e-02
Generation: 15 Elapsed time: 5.509353 Best training fitness: 5.710052e-02
Generation: 16 Elapsed time: 6.650868 Best training fitness: 5.621542e-02
Generation: 17 Elapsed time: 7.807022 Best training fitness: 5.446524e-02
Generation: 18 Elapsed time: 8.937985 Best training fitness: 5.446524e-02
Generation: 19 Elapsed time: 10.083138 Best training fitness: 4.966088e-02
/home/runner/work/bingo/bingo/bingo/evolutionary_algorithms/ea_diagnostics.py:90: RuntimeWarning: invalid value encountered in scalar divide
cross_mut_tots[1] / cross_mut_tots[0],
/home/runner/work/bingo/bingo/bingo/evolutionary_algorithms/ea_diagnostics.py:91: RuntimeWarning: invalid value encountered in scalar divide
cross_mut_tots[2] / cross_mut_tots[0],
Evolution successfully converged.
Absolute convergence occurred with best fitness < 0.05
Hall of Fame:
0.04966088398508853 [0.005346495273260032, np.float64(4.0483858360739866e-09), 0.011155833613863075, np.float64(-2.3534367221479328e-08), np.float64(-4.1033976106331554e-08), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.054465241825757245 [0.005346495273260032, np.float64(-5.455830222330119e-09), 0.024994235186936442, np.float64(-1.622906248296656e-08), np.float64(-7.819758106745893e-09), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.05621542353918716 [0.005346495273260032, np.float64(-5.493559961761435e-08), 0.024994235186936442, np.float64(8.607294979766769e-09), np.float64(-4.0550304084631635e-08), 0.017091252958985836, 0.01632850268370789, 0.04413780818978985]
0.057100521607361794 [0.017960846650472484, np.float64(-4.913620481315896e-09), 0.024994235186936442, np.float64(-1.0267897576982558e-08), np.float64(-1.5947222160857607e-09), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.05877227955746177 [0.017960846650472484, np.float64(-7.774114548841368e-09), 0.024994235186936442, np.float64(-7.512621638855513e-09), np.float64(-7.957486683199423e-09), 0.017091252958985836, 0.01632850268370789, 0.04413780818978985]
Archipelago age: 19 with best fitness: 0.04966088398508853
Best indv: [0.005346495273260032, np.float64(4.0483858360739866e-09), 0.011155833613863075, np.float64(-2.3534367221479328e-08), np.float64(-4.1033976106331554e-08), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
Getting the Best Individuals#
After evolution is finished, we can use the HallOfFame in the same way as in the previous tutorial.
[12]:
print("RANK FITNESS")
for i, member in enumerate(hof):
    print(" ", i, " ", member.fitness)
RANK FITNESS
0 0.04966088398508853
1 0.054465241825757245
2 0.05621542353918716
3 0.057100521607361794
4 0.05877227955746177
Viewing the Log#
We can view the contents of our log to see more detailed information on what happened during the evolution.
[13]:
with open(temp_file.name, "r") as f:
    print(f.read())
Generation: 10 Elapsed time: 0.000156 Best training fitness: 1.243952e-01
Generation: 11 Elapsed time: 1.014481 Best training fitness: 8.348861e-02
Generation: 12 Elapsed time: 2.075975 Best training fitness: 5.877228e-02
Generation: 13 Elapsed time: 3.211962 Best training fitness: 5.877228e-02
Generation: 14 Elapsed time: 4.374839 Best training fitness: 5.877228e-02
Generation: 15 Elapsed time: 5.509353 Best training fitness: 5.710052e-02
Generation: 16 Elapsed time: 6.650868 Best training fitness: 5.621542e-02
Generation: 17 Elapsed time: 7.807022 Best training fitness: 5.446524e-02
Generation: 18 Elapsed time: 8.937985 Best training fitness: 5.446524e-02
Generation: 19 Elapsed time: 10.083138 Best training fitness: 4.966088e-02
Evolution successfully converged.
Absolute convergence occurred with best fitness < 0.05
Hall of Fame:
0.04966088398508853 [0.005346495273260032, np.float64(4.0483858360739866e-09), 0.011155833613863075, np.float64(-2.3534367221479328e-08), np.float64(-4.1033976106331554e-08), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.054465241825757245 [0.005346495273260032, np.float64(-5.455830222330119e-09), 0.024994235186936442, np.float64(-1.622906248296656e-08), np.float64(-7.819758106745893e-09), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.05621542353918716 [0.005346495273260032, np.float64(-5.493559961761435e-08), 0.024994235186936442, np.float64(8.607294979766769e-09), np.float64(-4.0550304084631635e-08), 0.017091252958985836, 0.01632850268370789, 0.04413780818978985]
0.057100521607361794 [0.017960846650472484, np.float64(-4.913620481315896e-09), 0.024994235186936442, np.float64(-1.0267897576982558e-08), np.float64(-1.5947222160857607e-09), 0.017091252958985836, 0.008538660512296792, 0.04413780818978985]
0.05877227955746177 [0.017960846650472484, np.float64(-7.774114548841368e-09), 0.024994235186936442, np.float64(-7.512621638855513e-09), np.float64(-7.957486683199423e-09), 0.017091252958985836, 0.01632850268370789, 0.04413780818978985]
Finally, let's delete the log file to clean up (this step is only needed because we logged to a temporary file).
[14]:
import os
import logging
# remove all log handlers so the temp file is no longer in use
logger = logging.getLogger()
logger.handlers = []
temp_file.close()
os.unlink(temp_file.name)
Animation of Evolution#
[15]:
# Reinitialize and rerun archipelago while documenting best individual
archipelago = SerialArchipelago(island, num_islands=4)
best_indv_values = []
best_indv_values.append(archipelago.get_best_individual().values)
for i in range(50):
    archipelago.evolve(1)
    best_indv_values.append(archipelago.get_best_individual().values)
[16]:
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def animate_data(list_of_best_indv_values):
    """Animate the values of the best individual across generations."""
    fig, ax = plt.subplots()
    num_generations = len(list_of_best_indv_values)
    x = np.arange(0, len(list_of_best_indv_values[0]))
    y = list_of_best_indv_values
    zero = [0]*len(x)
    polygon = ax.fill_between(x, zero, y[0], color='b', alpha=0.3)
    points, = ax.plot(x, y[0], 'bs')
    points.set_label('Generation :' + str(0))
    legend = ax.legend(loc='upper right', shadow=True)

    def animate(i):
        # redraw the shaded region and markers for generation i
        for artist in ax.collections:
            artist.remove()
        polygon = ax.fill_between(x, zero, y[i], color='b', alpha=0.3)
        points.set_ydata(y[i])  # update the data
        points.set_label('Generation :' + str(i))
        legend = ax.legend(loc='upper right')
        return points, polygon, legend

    # Init only required for blitting to give a clean slate.
    def init():
        points.set_ydata(np.ma.array(x, mask=True))
        return points, polygon, points

    plt.xlabel('Chromosome Value Index', fontsize=15)
    plt.ylabel('Value Magnitude', fontsize=15)
    plt.title("Values of Best Individual in Archipelago", fontsize=15)
    plt.ylim(-0.01, 0.5)
    ax.tick_params(axis='y', labelsize=15)
    ax.tick_params(axis='x', labelsize=15)
    plt.close()
    return animation.FuncAnimation(fig, animate, num_generations, init_func=init,
                                   interval=250, blit=True)
[17]:
from IPython.display import HTML
HTML(animate_data(best_indv_values).to_jshtml())
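If you want to keep the animation, you could also save it to a file instead of (or in addition to) displaying it inline. This is a sketch: it assumes the Pillow package is installed, and the file name is a placeholder.

# Save the animation as a GIF (requires Pillow; file name is a placeholder).
anim = animate_data(best_indv_values)
anim.save("zero_min_evolution.gif", writer=animation.PillowWriter(fps=4))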