Title: Evolutionary Computation in R
Description: Framework for building evolutionary algorithms for both single- and multi-objective continuous or discrete optimization problems. A set of predefined evolutionary building blocks and operators is included. Moreover, the user can easily set up custom objective functions, operators, building blocks and representations sticking to a few conventions. The package allows both a black-box approach for standard tasks (plug-and-play style) and a much more flexible white-box approach where the evolutionary cycle is written by hand.
Authors: Jakob Bossek [aut, cre, cph], Michael H. Buselli [ctb, cph], Wessel Dankers [ctb, cph], Carlos M. Fonseca [ctb, cph], Manuel Lopez-Ibanez [ctb, cph], Luis Paquete [ctb, cph], Joshua Knowles [ctb, cph], Eckart Zitzler [ctb, cph], Olaf Mersmann [ctb]
Maintainer: Jakob Bossek <[email protected]>
License: GPL-3
Version: 2.1.1
Built: 2024-11-22 04:23:06 UTC
Source: https://github.com/jakobbossek/ecr2
Consider a data frame with results of multi-objective stochastic optimizers on
a set of problems from different categories/groups (say indicated by column “group”).
Occasionally, it is useful to unite the results of several groups into a meta-group.
The function addUnionGroup aids in the generation of such a meta-group, while function addAllGroup is a wrapper around the former which generates the union of all groups.
addUnionGroup(df, col, group, values) addAllGroup(df, col, group = "all")
Arguments: df, col, group, values.
Value: [data.frame] Modified data frame.
df = data.frame(
  group = c("A1", "A1", "A2", "A2", "B"),
  perf = runif(5),
  stringsAsFactors = FALSE)
df2 = addUnionGroup(df, col = "group", group = "A", values = c("A1", "A2"))
df3 = addAllGroup(df, col = "group", group = "ALL")
Helper functions to compute nadir or ideal point from sets of points, e.g., multiple approximation sets.
approximateNadirPoint(..., sets = NULL) approximateIdealPoint(..., sets = NULL)
Arguments: ..., sets.
Value: [numeric] Reference point.
Other EMOA performance assessment tools: approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
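A minimal usage sketch (not part of the original manual), assuming the usual ecr convention that each column of a matrix holds one objective vector:
A = matrix(c(1, 3, 2, 2, 3, 1), nrow = 2L)    # three points of set A
B = matrix(c(1.5, 2.5, 2.5, 1.5), nrow = 2L)  # two points of set B
approximateNadirPoint(A, B)  # component-wise maximum, here c(3, 3)
approximateIdealPoint(A, B)  # component-wise minimum, here c(1, 1)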
Helper to compute reference points for a set of problems, e.g., for the calculation of dominated hypervolume.
approximateRefPoints(df, obj.cols = c("f1", "f2"), offset = 0, as.df = FALSE)
Arguments: df, obj.cols, offset, as.df.
Value: [list | data.frame]
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
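A hedged sketch using the bundled mcMST data set, which provides the "prob" column expected by the EMOA assessment helpers:
data(mcMST)
ref.points = approximateRefPoints(mcMST, obj.cols = c("f1", "f2"), offset = 0.1)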
The function takes a data frame with at least the columns specified by obj.cols
and “prob”. The reference set for
each unique problem in column “prob” is then obtained by
combining all approximation sets generated by all considered algorithms
for the corresponding problem and filtering the non-dominated solutions.
approximateRefSets(df, obj.cols, as.df = FALSE)
Arguments: df, obj.cols, as.df.
Value: [list | data.frame] Named list of matrices (names are the problems) or data frame with columns obj.cols and “prob”.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
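A hedged sketch, again based on the bundled mcMST data:
data(mcMST)
ref.sets = approximateRefSets(mcMST, obj.cols = c("f1", "f2"), as.df = TRUE)
head(ref.sets)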
The AS-EMOA, short for aspiration set evolutionary multi-objective algorithm, aims to incorporate expert knowledge into multi-objective optimization [1]. The algorithm expects an aspiration set, i.e., a set of reference points. It then creates an approximation of the Pareto front close to the aspiration set utilizing the average Hausdorff distance.
asemoa(fitness.fun, n.objectives = NULL, minimize = NULL, n.dim = NULL,
  lower = NULL, upper = NULL, mu = 10L, aspiration.set = NULL,
  normalize.fun = NULL, dist.fun = computeEuclideanDistance, p = 1,
  parent.selector = setup(selSimple),
  mutator = setup(mutPolynomial, eta = 25, p = 0.2, lower = lower, upper = upper),
  recombinator = setup(recSBX, eta = 15, p = 0.7, lower = lower, upper = upper),
  terminators = list(stopOnIters(100L)))
Arguments: fitness.fun, n.objectives, minimize, n.dim, lower, upper, mu, aspiration.set, normalize.fun, dist.fun, p, parent.selector, mutator, recombinator, terminators.
Value: [ecr_multi_objective_result]
This is a pure R implementation of the AS-EMOA algorithm. It hides the regular ecr interface and offers a more R-like interface while still being quite adaptable.
[1] Rudolph, G., Schuetze, O., Grimme, C., Trautmann, H.: An Aspiration Set EMOA Based on Averaged Hausdorff Distances. LION 2014: 153-156. [2] G. Rudolph, O. Schuetze, C. Grimme, and H. Trautmann: A Multiobjective Evolutionary Algorithm Guided by Averaged Hausdorff Distance to Aspiration Sets, pp. 261-273 in A.-A. Tantar et al. (eds.): Proceedings of EVOLVE - A Bridge between Probability, Set Oriented Numerics and Evolutionary Computation V, Springer: Berlin Heidelberg 2014.
Given a data frame and a grouping column of type factor or character, this function generates a new grouping column which aggregates the existing groups into categories.
categorize(df, col, categories, cat.col, keep = TRUE, overwrite = FALSE)
Arguments: df, col, categories, cat.col, keep, overwrite.
Value: [data.frame]
df = data.frame(
group = c("A1", "A1", "A2", "A2", "B1", "B2"),
perf = runif(6),
stringsAsFactors = FALSE)
df2 = categorize(df, col = "group", categories = list(A = c("A1", "A2"), B = c("B1", "B2")), cat.col = "group2")
Computes the average Hausdorff distance measure between two point sets.
computeAverageHausdorffDistance( A, B, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance )
Arguments: A, B, p, normalize, dist.fun.
Value: [numeric(1)] Average Hausdorff distance of sets A and B.
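A small sketch (assuming points are stored column-wise as in the other ecr helpers):
A = matrix(c(1, 3, 2, 2, 3, 1), nrow = 2L)
B = matrix(c(1, 2.8, 2.8, 1), nrow = 2L)
computeAverageHausdorffDistance(A, B, p = 1)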
The crowding distance is a measure of spread of solutions in the approximation of the Pareto front. It is used, e.g., in the NSGA-II algorithm as a second selection criterion.
computeCrowdingDistance(x)
Arguments: x.
Value: [numeric] Vector of crowding distance values.
K. Deb, A. Pratap, S. Agarwal, T. Meyarivan: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2 (April 2002), pp. 182-197, doi:10.1109/4235.996017
Helper to compute distance between a single point and a point set.
computeDistanceFromPointToSetOfPoints( a, B, dist.fun = computeEuclideanDistance )
Arguments: a, B, dist.fun.
Value: [numeric(1)]
Ranking is performed by merging all approximation sets over all algorithms and runs per instance. Next, each approximation set is assigned a rank which is 1 plus the number of approximation sets that are better. Here, a set A is better than a set B if for each point b in B there exists a point in A which weakly dominates b. Thus, each approximation set is reduced to a number, its rank. This rank distribution may serve as a first comparison of multi-objective stochastic optimizers. See [1] for more details.
This function makes use of
parallelMap
to
parallelize the computation of dominance ranks.
computeDominanceRanking(df, obj.cols)
Arguments: df, obj.cols.
Value: [data.frame] Reduced df with columns “prob”, “algorithm”, “repl” and “rank”.
Since pairwise non-domination checks are performed over all algorithms and algorithm runs this function may take some time if the number of problems, algorithms and/or replications is high.
[1] Knowles, J., Thiele, L., & Zitzler, E. (2006). A Tutorial on the Performance Assessment of Stochastic Multiobjective Optimizers. Retrieved from https://sop.tik.ee.ethz.ch/KTZ2005a.pdf
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
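A hedged sketch using the bundled mcMST data set, which contains the required "prob", "algorithm" and "repl" columns:
data(mcMST)
ranking = computeDominanceRanking(mcMST, obj.cols = c("f1", "f2"))
head(ranking)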
Helper to compute the Generational Distance (GD) between two sets of points.
computeGenerationalDistance( A, B, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance )
Arguments: A, B, p, normalize, dist.fun.
Value: [numeric(1)]
The function computeHV
computes the dominated
hypervolume of a set of points given a reference set whereby
computeHVContr
computes the hypervolume contribution
of each point.
If no reference point is given the nadir point of the set x is determined and a positive offset with default 1 is added. This is to ensure that the reference point is dominated by all of the points in the set.
computeHV(x, ref.point = NULL, ...) computeHVContr(x, ref.point = NULL, offset = 1)
Arguments: x, ref.point, ..., offset.
Value: [numeric(1)] Dominated hypervolume in the case of computeHV and the dominated hypervolume contributions for each point in the case of computeHVContr.
Note: Keep in mind that this function assumes all objectives to be minimized. In case at least one objective is to be maximized, the matrix x needs to be transformed accordingly in advance.
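A minimal sketch (minimization assumed, one point per column):
x = matrix(c(1, 10, 5, 5, 10, 1), nrow = 2L)
computeHV(x, ref.point = c(11, 11))       # dominated hypervolume of the whole set
computeHVContr(x, ref.point = c(11, 11))  # hypervolume contribution of each point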
Given a data.frame of Pareto-front approximations for different
sets of problems, algorithms and replications, the function computes sets
of unary and binary EMOA performance indicators.
This function makes use of parallelMap
to
parallelize the computation of indicators.
computeIndicators( df, obj.cols = c("f1", "f2"), unary.inds = NULL, binary.inds = NULL, normalize = FALSE, offset = 0, ref.points = NULL, ref.sets = NULL )
Arguments: df, obj.cols, unary.inds, binary.inds, normalize, offset, ref.points, ref.sets.
Value: [list] List with components “unary” (data frame of unary indicators), “binary” (list of matrices of binary indicators), “ref.points” (list of reference points used) and “ref.sets” (reference sets used).
[1] Knowles, J., Thiele, L., & Zitzler, E. (2006). A Tutorial on the Performance Assessment of Stochastic Multiobjective Optimizers. Retrieved from https://sop.tik.ee.ethz.ch/KTZ2005a.pdf [2] Knowles, J., & Corne, D. (2002). On Metrics for Comparing Non-Dominated Sets. In Proceedings of the 2002 Congress on Evolutionary Computation Conference (CEC02) (pp. 711–716). Honolulu, HI, USA: Institute of Electrical and Electronics Engineers. [3] Okabe, T., Yaochu, Y., & Sendhoff, B. (2003). A Critical Survey of Performance Indices for Multi-Objective Optimisation. In Proceedings of the 2003 Congress on Evolutionary Computation Conference (CEC03) (pp. 878–885). Canberra, ACT, Australia: IEEE.
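A hedged sketch on the bundled mcMST data set, relying on the default indicator sets:
data(mcMST)
inds = computeIndicators(mcMST, obj.cols = c("f1", "f2"))
head(inds$unary)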
Helper to compute the Inverted Generational Distance (IGD) between two sets of points.
computeInvertedGenerationalDistance( A, B, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance )
Arguments: A, B, p, normalize, dist.fun.
Value: [numeric(1)]
These functions take a numeric matrix x
where each column corresponds to
a point and return a logical vector. The i-th position of the latter is
TRUE if the i-th point is dominated by at least one other point in the case of dominated; nondominated returns the negation.
dominated(x) nondominated(x)
Arguments: x.
Value: [logical]
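A minimal sketch for four bi-objective points (minimization assumed):
x = matrix(c(1, 3, 2, 2, 3, 1, 3, 3), nrow = 2L)
dominated(x)     # only the last point (3, 3) is dominated
nondominated(x)  # the logical negation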
Check if a vector dominates another (dominates
) or is
dominated by another (isDominated
). There are corresponding infix operators %dominates% and %isDominatedBy%.
dominates(x, y) isDominated(x, y) x %dominates% y x %isDominatedBy% y
Arguments: x, y.
Value: [logical(1)]
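A minimal sketch (minimization assumed):
dominates(c(1, 2), c(2, 3))        # TRUE
isDominated(c(1, 2), c(2, 3))      # FALSE
c(1, 2) %dominates% c(2, 3)        # TRUE
c(2, 3) %isDominatedBy% c(1, 2)    # TRUE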
Fast non-dominated sorting algorithm proposed by Deb. Non-dominated sorting expects a set of points and returns a set of non-dominated fronts. In short this is done as follows: the non-dominated points of the entire set are determined and assigned rank 1. Afterwards all points with the current rank are removed, the rank is increased by one and the procedure starts again. This is done until the set is empty, i.e., each point is assigned a rank.
doNondominatedSorting(x)
Arguments: x.
Value: [list] List with the following components: an integer vector of ranks of length ncol(x) (the higher the rank, the higher the domination front the corresponding point is located on) and an integer vector of length ncol(x) whose i-th element is the domination number of the i-th point.
This procedure is the key survival selection of the famous NSGA-II multi-objective
evolutionary algorithm (see nsga2
).
[1] Deb, K., Pratap, A., and Agarwal, S. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6 (8) (2002), 182-197.
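A minimal sketch (one point per column; the returned component names ranks and dom.counter are assumptions):
x = matrix(c(1, 3, 2, 2, 3, 1, 3, 3, 4, 4), nrow = 2L)
res = doNondominatedSorting(x)
res$ranks        # front membership of each point
res$dom.counter  # number of points dominating each point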
The most flexible way to set up evolutionary algorithms with ecr is by explicitly writing the evolutionary loop utilizing various ecr utility functions. However, in everyday life R users frequently need to optimize a single-objective R function. The ecr function thus provides a more R-like interface for single-objective optimization similar to the interface of the optim function.
ecr(fitness.fun, minimize = NULL, n.objectives = NULL, n.dim = NULL,
  lower = NULL, upper = NULL, n.bits, representation, mu, lambda, perm = NULL,
  p.recomb = 0.7, p.mut = 0.3, survival.strategy = "plus", n.elite = 0L,
  log.stats = list(fitness = list("min", "mean", "max")), log.pop = FALSE,
  monitor = NULL, initial.solutions = NULL, parent.selector = NULL,
  survival.selector = NULL, mutator = NULL, recombinator = NULL,
  terminators = list(stopOnIters(100L)), ...)
Arguments: fitness.fun, minimize, n.objectives, n.dim, lower, upper, n.bits, representation, mu, lambda, perm, p.recomb, p.mut, survival.strategy, n.elite, log.stats, log.pop, monitor, initial.solutions, parent.selector, survival.selector, mutator, recombinator, terminators, ....
fn = function(x) {
  sum(x^2)
}
lower = c(-5, -5); upper = c(5, 5)
res = ecr(fn, n.dim = 2L, n.objectives = 1L, lower = lower, upper = upper,
  representation = "float", mu = 20L, lambda = 10L,
  mutator = setup(mutGauss, lower = lower, upper = upper))
In ecr it is possible to parallelize the fitness function evaluation to make use, e.g., of multiple CPU cores or nodes in an HPC cluster. For maximal flexibility this is realized by means of the parallelMap package (see the official GitHub page for instructions on how to set up parallelization). The different levels of parallelization can be specified in the parallelStart* function. At the moment only the level “ecr.evaluateFitness” is supported.
Keep in mind that parallelization comes along with some overhead. Thus activating parallelization, e.g., for a fitness function which is evaluated lightning-fast, may result in higher computation time. However, if the function evaluations are computationally more expensive, parallelization leads to significant running time benefits.
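A hedged sketch of how such a setup might look, reusing fn, lower and upper from the example above (assuming a multicore backend; on Windows a socket backend would be used instead):
library(parallelMap)
parallelStartMulticore(cpus = 2L, level = "ecr.evaluateFitness")
res = ecr(fn, n.dim = 2L, n.objectives = 1L, lower = lower, upper = upper,
  representation = "float", mu = 20L, lambda = 10L)
parallelStop()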
S3 object returned by ecr
containing the best found
parameter setting and value in the single-objective case and the Pareto-front/-set
in case of a multi-objective optimization problem. Moreover a set of further
information, e.g., reason of termination, the control object etc. are returned.
The single objective result object contains the following fields:
The ecr_optimization_task
.
Overall best parameter setting.
Overall best objective value.
Logger object.
Last population.
Numeric vector of fitness values of the last population.
Character string describing the reason of termination.
In case of a solved multi-objective function the result object contains the following fields:
The ecr_optimization_task
.
Logger object.
Indices of the non-dominated solutions in the last population.
(n x d) matrix of the approximated non-dominated front where n is the number of non-dominated points and d is the number of objectives.
Matrix of decision space values corresponding to the objective values given in pareto.front.
Last population.
Character string describing the reason of termination.
Functions for the computation of unary and binary measures which are useful for the evaluation of the performance of EMOAs. See the references section for literature on these indicators.
Given a set of points points
, emoaIndEps
computes the
unary epsilon-indicator provided a set of reference points ref.points
.
The emoaIndHV
function computes the hypervolume indicator
Hyp(X, R, r). Given a set of points X (points
), another set of reference
points R (ref.points
) (which maybe the true Pareto front) and a reference
point r (ref.point
) it is defined as Hyp(X, R, r) = HV(R, r) - HV(X, r).
Function emoaIndR1
, emoaIndR2
and emoaIndR3
calculate the
R1, R2 and R3 indicator respectively.
Function emoaIndMD
computes the minimum distance indicator, i.e., the minimum
Euclidean distance between two points of the set points
while function
emoaIndM1
determines the mean Euclidean distance between points
and points from a reference set ref.points
.
Function emoaIndC
calculates the coverage of the sets points
(A) and
ref.points
(B). This is the ratio of points in B which are dominated by
at least one solution in A.
emoaIndONVG
calculates the “Overall Non-dominated Vector Generation”
indicator. Despite its complicated name it is just the number of non-dominated points
in points
.
Functions emoaIndSP and emoaIndDelta calculate spacing indicators. The former was proposed by Schott: for each point compute the minimal distance to all other points, then sum the squared deviations of these minimal distances from their mean; finally, normalize by the number of points minus 1 and take the square root. The Delta-indicator is an alternative measure of spread.
emoaIndEps(points, ref.points, ...)
emoaIndHV(points, ref.points, ref.point = NULL, ...)
emoaIndR1(points, ref.points, ideal.point = NULL, nadir.point = NULL, lambda = NULL, utility = "tschebycheff", ...)
emoaIndR2(points, ref.points, ideal.point = NULL, nadir.point = NULL, lambda = NULL, utility = "tschebycheff", ...)
emoaIndR3(points, ref.points, ideal.point = NULL, nadir.point = NULL, lambda = NULL, utility = "tschebycheff", ...)
emoaIndMD(points, ...)
emoaIndC(points, ref.points, ...)
emoaIndM1(points, ref.points, ...)
emoaIndONVG(points, ...)
emoaIndGD(points, ref.points, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance, ...)
emoaIndIGD(points, ref.points, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance, ...)
emoaIndDeltap(points, ref.points, p = 1, normalize = FALSE, dist.fun = computeEuclideanDistance, ...)
emoaIndSP(points, ...)
emoaIndDelta(points, ...)
Arguments: points, ref.points, ..., ref.point, ideal.point, nadir.point, lambda, utility, p, normalize, dist.fun.
Value: [numeric(1)] Epsilon indicator.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
This function expects a list of individuals, computes the fitness and always
returns a matrix of fitness values; even in single-objective optimization a
(1 x n) matrix is returned for consistency, where n is the number of individuals.
This function makes use of parallelMap
to
parallelize the fitness evaluation.
evaluateFitness(control, inds, ...)
Arguments: control, inds, ....
Value: [matrix].
Given a data frame and a column name, function explode splits the content of a column by a specified delimiter (thus “exploding” it) into multiple columns. Function implode does the opposite, i.e., given a non-empty set of column names or numbers, the function glues together the columns. Hence, functions explode and implode are essentially inverse to each other.
explode(df, col, by = ".", keep = FALSE, col.names = NULL) implode(df, cols, by = ".", keep = FALSE, col.name)
Arguments: df, col, by, keep, col.names, cols, col.name.
Value: [data.frame] Modified data frame.
df = data.frame(x = 1:3, y = c("a.c", "a.b", "a.c"))
df.ex = explode(df, col = "y", col.names = c("y1", "y2"))
df.im = implode(df.ex, cols = c("y1", "y2"), by = "---", col.name = "y", keep = TRUE)
Filter approximation sets by duplicate objective vectors.
filterDuplicated(x, ...)
## S3 method for class 'data.frame'
filterDuplicated(x, ...)
## S3 method for class 'matrix'
filterDuplicated(x, ...)
## S3 method for class 'ecr_multi_objective_result'
filterDuplicated(x, ...)
## S3 method for class 'list'
filterDuplicated(x, ...)
Arguments: x, ....
Value: [object] Modified input x.
Note that this may be misleading if there can be solutions with identical objective function values but different values in decision space.
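A minimal sketch for the data.frame method:
df = data.frame(f1 = c(1, 1, 2), f2 = c(2, 2, 1))
filterDuplicated(df)  # drops the second row, which repeats the first objective vector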
Function mutate
expects a control object, a list of individuals, and a mutation
probability. The mutation operator registered in the control object is then applied
with the given probability to each individual.
Function recombinate
expects a control object, a list of individuals as well as
their fitness matrix and creates lambda
offspring individuals by recombining parents
from inds
. Which parents take place in the parent selection depends on
the parent.selector
registered in the control object.
Finally, function generateOffspring is a wrapper for both recombinate and mutate. Both functions are applied in sequence to generate new individuals by variation and mutation.
generateOffspring(control, inds, fitness, lambda, p.recomb = 0.7, p.mut = 0.1) mutate(control, inds, p.mut = 0.1, slot = "mutate", ...) recombinate( control, inds, fitness, lambda = length(inds), p.recomb = 0.7, slot = "recombine", ... )
Arguments: control, inds, fitness, lambda, p.recomb, p.mut, slot, ....
Value: [list] List of individuals.
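A hedged sketch of calling mutate directly (the operator parameters are illustrative only):
control = initECRControl(function(x) sum(x^2), minimize = TRUE, n.objectives = 1L)
control = registerECROperator(control, "mutate", mutGauss,
  sdev = 0.1, lower = rep(-5, 2L), upper = rep(5, 2L))
population = genReal(10L, n.dim = 2L, lower = rep(-5, 2L), upper = rep(5, 2L))
mutants = mutate(control, population, p.mut = 0.3)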
Returns whether the recombinator generates multiple children.
generatesMultipleChildren(recombinator)
Arguments: recombinator.
Value: [logical] Boolean.
Utility functions to build a set of individuals. The function
gen
expects an R expression and a number n in order to create a list
of n individuals based on the given expression. Functions genBin
,
genPerm
and genReal
are shortcuts for initializing populations
of binary strings, permutations or real-valued vectors respectively.
gen(expr, n) genBin(n, n.dim) genPerm(n, n.dim) genReal(n, n.dim, lower, upper)
Arguments: expr, n, n.dim, lower, upper.
Value: [list]
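A minimal sketch of the four generators (the custom expression passed to gen is purely illustrative):
pop.bin = genBin(5L, n.dim = 10L)    # 5 binary strings of length 10
pop.perm = genPerm(5L, n.dim = 10L)  # 5 permutations of 1:10
pop.real = genReal(5L, n.dim = 2L, lower = c(-5, -5), upper = c(5, 5))
pop.custom = gen(expr = runif(3L), n = 5L)  # 5 individuals built from a custom expression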
Get all non-dominated points in objective space, i.e., an (m x n) matrix of fitness with m being the number of objectives and n being the number of non-dominated points in the Pareto archive.
getFront(x)
Arguments: x.
Value: [matrix]
Get the non-dominated individuals logged in the Pareto archive.
getIndividuals(x)
Arguments: x.
Value: [list]
Other ParetoArchive: getSize(), initParetoArchive(), updateParetoArchive()
Returns the number of children generated by the recombinator.
getNumberOfChildren(recombinator)
Arguments: recombinator.
Value: [numeric] Number of children generated.
Returns the number of parents needed for mating.
getNumberOfParentsNeededForMating(recombinator)
Arguments: recombinator.
Value: [numeric] Number of parents needed for mating.
Returns the fitness values of all individuals as a data.frame with columns f1, ..., fo, where o is the number of objectives and column “gen” for generation.
getPopulationFitness(log, trim = TRUE)
Arguments: log, trim.
Value: [list] List of populations.
Other logging: getPopulations(), getStatistics(), initLogger(), updateLogger()
Simple getter for the logged populations.
getPopulations(log, trim = TRUE)
Arguments: log, trim.
This function throws an error if the logger was initialized with log.pop = FALSE (see initLogger).
Value: [list] List of populations.
Other logging: getPopulationFitness(), getStatistics(), initLogger(), updateLogger()
Returns the number of stored individuals in Pareto archive.
getSize(x)
Arguments: x.
Value: [integer(1)]
Other ParetoArchive: getIndividuals(), initParetoArchive(), updateParetoArchive()
Simple getter for the logged fitness statistics.
getStatistics(log, trim = TRUE)
Arguments: log, trim.
Value: [data.frame] Logged statistics.
Other logging: getPopulationFitness(), getPopulations(), initLogger(), updateLogger()
Returns the character vector of representations which the operator supports.
getSupportedRepresentations(operator)
Arguments: operator.
Value: [character] Vector of representation types.
The control object keeps information on the objective function and a set of evolutionary components, i.e., operators.
initECRControl(fitness.fun, n.objectives = NULL, minimize = NULL)
Arguments: fitness.fun, n.objectives, minimize.
Value: [ecr_control]
Logging is a central aspect of each EA. Besides the final solution(s), especially in research, one often needs to keep track of different aspects of the evolutionary process, e.g., fitness statistics. The logger of ecr keeps track of different user-defined statistics and the population. It may also be used to check stopping conditions (see makeECRTerminator). Most importantly, this logger is used internally by the ecr black-box interface.
initLogger( control, log.stats = list(fitness = list("min", "mean", "max")), log.extras = NULL, log.pop = FALSE, init.size = 1000L )
Arguments: control, log.stats, log.extras, log.pop, init.size.
Value: [ecr_logger] An S3 object of class ecr_logger with the following components: the log.stats list, the log.pop parameter, the initial size of the log, and the actual log. The latter is an R environment which ensures that in-place modification is possible. Statistics are logged in a data.frame.
Other logging: getPopulationFitness(), getPopulations(), getStatistics(), updateLogger()
control = initECRControl(function(x) sum(x), minimize = TRUE, n.objectives = 1L)
control = registerECROperator(control, "mutate", mutBitflip, p = 0.1)
control = registerECROperator(control, "selectForMating", selTournament, k = 2)
control = registerECROperator(control, "selectForSurvival", selGreedy)

log = initLogger(control,
  log.stats = list(
    fitness = list("mean", "myRange" = function(x) max(x) - min(x)),
    age = list("min", "max")
  ), log.pop = TRUE, init.size = 1000L)

# simply pass stuff down to control object constructor
population = initPopulation(mu = 10L, genBin, n.dim = 10L)
fitness = evaluateFitness(control, population)

# append fitness to individuals and init age
for (i in seq_along(population)) {
  attr(population[[i]], "fitness") = fitness[, i]
  attr(population[[i]], "age") = 1L
}

for (iter in seq_len(10)) {
  # generate offspring
  offspring = generateOffspring(control, population, fitness, lambda = 5)
  fitness.offspring = evaluateFitness(control, offspring)

  # update age of population
  for (i in seq_along(population)) {
    attr(population[[i]], "age") = attr(population[[i]], "age") + 1L
  }

  # set offspring attributes
  for (i in seq_along(offspring)) {
    attr(offspring[[i]], "fitness") = fitness.offspring[, i]
    # update age
    attr(offspring[[i]], "age") = 1L
  }

  sel = replaceMuPlusLambda(control, population, offspring)
  population = sel$population
  fitness = sel$fitness

  # do some logging
  updateLogger(log, population, n.evals = 5)
}
head(getStatistics(log))
A Pareto archive is usually used to store all or a subset of the non-dominated points found during a run of a multi-objective evolutionary algorithm.
initParetoArchive(control, max.size = Inf, trunc.fun = NULL)
Arguments: control, max.size, trunc.fun.
Value: [ecr_pareto_archive]
Other ParetoArchive: getIndividuals(), getSize(), updateParetoArchive()
Generates the initial population. Optionally a set of initial solutions can be passed.
initPopulation(mu, gen.fun, initial.solutions = NULL, ...)
Arguments: mu, gen.fun, initial.solutions, ....
Value: [ecr_population]
Check if the given operator supports a certain representation, e.g., “float”.
is.supported(operator, representation)
Arguments: operator, representation.
Value: [logical(1)] TRUE, if the operator supports the representation type.
Checks if the passed object is of type ecr_operator
.
isEcrOperator(obj)
Arguments: obj.
Value: [logical(1)]
Monitor objects serve for monitoring the optimization process, e.g., printing some status messages to the console. Each monitor includes the functions before, step and after, each being a function and expecting a logger log of type ecr_logger and ... as the only parameters. This way the monitor has access to the entire log.
makeECRMonitor(before = NULL, step = NULL, after = NULL, ...)
Arguments: before, step, after, ....
Value: [ecr_monitor] Monitor object.
Simple wrapper for functions which compute performance indicators for multi-objective stochastic algorithms. Basically this function appends some meta information to the passed function fun.
makeEMOAIndicator(fun, minimize, name, latex.name)
Arguments: fun, minimize, name, latex.name.
Value: [function(points, ...)] Argument fun with all other arguments appended.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
Helper function which constructs a mutator, i.e., a mutation operator.
makeMutator(mutator, supported = getAvailableRepresentations())
Arguments: mutator, supported.
Value: [ecr_mutator] Mutator object.
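A hedged sketch of wrapping a custom mutation function (the uniform-noise mutator is purely illustrative):
myUniformNoise = makeMutator(
  mutator = function(ind, ...) ind + runif(length(ind), min = -0.1, max = 0.1),
  supported = "float")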
Helper function which constructs an evolutionary operator.
makeOperator(operator, supported = getAvailableRepresentations())
Arguments: operator, supported.
Value: [ecr_operator] Operator object.
In general you will not need this function, but rather one of its derivatives like makeMutator or makeSelector.
An optimization task consists of the fitness/objective function, the number of objectives, the “direction” of optimization, i.e., which objectives should be minimized/maximized and the names of the objectives.
makeOptimizationTask( fun, n.objectives = NULL, minimize = NULL, objective.names = NULL )
Arguments: fun, n.objectives, minimize, objective.names.
Value: [ecr_optimization_task]
Helper function which constructs a recombinator, i.e., a recombination operator.
makeRecombinator( recombinator, supported = getAvailableRepresentations(), n.parents = 2L, n.children = NULL )
Arguments: recombinator, supported, n.parents, n.children.
Value: [ecr_recombinator] Recombinator object.
If a recombinator returns more than one child, the multiple.children
parameter needs to be TRUE
, which is the default. In case of multiple
children produced these have to be placed within a list.
Helper function which defines a selector method, i.e., an operator which takes the population and returns a part of it for mating or survival.
makeSelector( selector, supported = getAvailableRepresentations(), supported.objectives, supported.opt.direction = "minimize" )
Arguments: selector, supported, supported.objectives, supported.opt.direction.
Value: [ecr_selector] Selector object.
Wrap a function within a stopping condition object.
makeTerminator(condition.fun, name, message)
Arguments: condition.fun, name, message.
Value: [ecr_terminator]
Pareto-front approximations for some graph problems obtained by several algorithms for the multi-criteria minimum spanning tree (mcMST) problem.
mcMST
A data frame with five variables:
f1
First objective (to be minimized).
f2
Second objective (to be minimized).
algorithm
Short name of algorithm used.
prob
Short name of problem instance.
repl
Algorithm run.
The data is based on the mcMST package.
This operator works only on binary representation and flips each bit with a given probability p. Usually it is recommended to set p = 1/n where n is the number of bits in the representation.
mutBitflip(ind, p = 0.1)
Arguments: ind, p.
Value: [binary]
[1] Eiben, A. E. & Smith, James E. (2015). Introduction to Evolutionary Computing (2nd ed.). Springer Publishing Company, Incorporated. 52.
Other mutators: mutGauss(), mutInsertion(), mutInversion(), mutJump(), mutPolynomial(), mutScramble(), mutSwap(), mutUniform()
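A minimal sketch:
ind = rbinom(10L, size = 1L, prob = 0.5)  # a random binary individual
mutBitflip(ind, p = 1 / length(ind))      # flip each bit with probability 1/n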
Default Gaussian mutation operator known from Evolutionary Algorithms. This mutator is applicable only for representation="float". Given an individual x this mutator adds a Gaussian distributed random value delta to each component of x, i.e., x_i' = x_i + delta with delta ~ N(0, sdev).
mutGauss(ind, p = 1L, sdev = 0.05, lower, upper)
Arguments: ind, p, sdev, lower, upper.
Value: [numeric]
[1] Beyer, Hans-Georg & Schwefel, Hans-Paul (2002). Evolution strategies. Kluwer Academic Publishers.
[2] Mateo, P. M. & Alberto, I. (2011). A mutation operator based on a Pareto ranking for multi-objective evolutionary algorithms. Springer Science+Business Media. 57.
Other mutators: mutBitflip(), mutInsertion(), mutInversion(), mutJump(), mutPolynomial(), mutScramble(), mutSwap(), mutUniform()
The Insertion mutation operator selects a position at random and moves the element at that position to another randomly chosen position.
mutInsertion(ind)
Arguments: ind.
Value: [integer]
Other mutators: mutBitflip(), mutGauss(), mutInversion(), mutJump(), mutPolynomial(), mutScramble(), mutSwap(), mutUniform()
The Inversion mutation operator selects two positions within the chromosome at random and inverts the elements in between.
mutInversion(ind)
Arguments: ind.
Value: [integer]
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutJump(), mutPolynomial(), mutScramble(), mutSwap(), mutUniform()
The jump mutation operator selects two positions within the chromosome at random, say a and b with a < b. Next, all elements at positions a, ..., b - 1 are shifted to the right by one position and finally the element formerly at position b is assigned to position a.
mutJump(ind)
Arguments: ind.
Value: [integer]
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutInversion(), mutPolynomial(), mutScramble(), mutSwap(), mutUniform()
Performs a polynomial mutation as used in the SMS-EMOA algorithm. Polynomial mutation tries to simulate the distribution of the offspring of binary-encoded bit flip mutations based on real-valued decision variables. Polynomial mutation favors offspring nearer to the parent.
mutPolynomial(ind, p = 0.2, eta = 10, lower, upper)
Arguments: ind, p, eta, lower, upper.
Value: [numeric]
[1] Deb, Kalyanmoy & Goyal, Mayank. (1999). A Combined Genetic Adaptive Search (GeneAS) for Engineering Design. Computer Science and Informatics. 26. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.767&rep=rep1&type=pdf
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutInversion(), mutJump(), mutScramble(), mutSwap(), mutUniform()
The Scramble mutation operator selects two positions within the chromosome at random and randomly intermixes the subsequence between these positions.
mutScramble(ind)
Arguments: ind.
Value: [integer]
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutInversion(), mutJump(), mutPolynomial(), mutSwap(), mutUniform()
Chooses two positions at random and swaps the genes.
mutSwap(ind)
Arguments: ind.
Value: [integer]
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutInversion(), mutJump(), mutPolynomial(), mutScramble(), mutUniform()
This mutation operator works on real-valued genotypes only. It selects a position in the solution vector at random and replaces it with a uniformly chosen value within the box constraints of the corresponding parameter. This mutator may prove beneficial in early stages of the optimization process, since it distributes points widely within the box constraints and thus may hinder premature convergence. However, in later stages - when fine-tuning is necessary - this feature is disadvantageous.
mutUniform(ind, lower, upper)
Arguments: ind, lower, upper.
Value: [numeric]
Other mutators: mutBitflip(), mutGauss(), mutInsertion(), mutInversion(), mutJump(), mutPolynomial(), mutScramble(), mutSwap()
This formatter function should be applied to tables where each table cell contains a p-value of a statistical significance test. See toLatex for an application.
niceCellFormater(cell, alpha = 0.05)
Arguments: cell, alpha.
Value: Formatted output.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
Normalization is done by subtracting the min.value for each dimension and dividing by the difference max.value - min.value for each dimension. Certain EMOA indicators require all elements to be strictly positive. Hence, an optional offset is added to each element, which defaults to zero.
normalize(x, obj.cols, min.value = NULL, max.value = NULL, offset = NULL)
Arguments: x, obj.cols, min.value, max.value, offset.
Value: [matrix | data.frame]
In case a data.frame is passed and a “prob” column exists, normalization is performed for each unique element of the “prob” column independently.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
The NSGA-II merges the current population and the generated offspring and reduces it by means of the following procedure: It first applies the non-dominated sorting algorithm to obtain the non-dominated fronts. Starting with the first front, it fills the new population until the i-th front does not fit. It then applies the secondary crowding distance criterion to select the missing individuals from the i-th front.
nsga2(fitness.fun, n.objectives = NULL, n.dim = NULL, minimize = NULL,
  lower = NULL, upper = NULL, mu = 100L, lambda = mu,
  mutator = setup(mutPolynomial, eta = 25, p = 0.2, lower = lower, upper = upper),
  recombinator = setup(recSBX, eta = 15, p = 0.7, lower = lower, upper = upper),
  terminators = list(stopOnIters(100L)), ...)
Arguments: fitness.fun, n.objectives, n.dim, minimize, lower, upper, mu, lambda, mutator, recombinator, terminators, ....
Value: [ecr_multi_objective_result]
This is a pure R implementation of the NSGA-II algorithm. It hides the regular ecr interface and offers a more R-like interface while still being quite adaptable.
Deb, K., Pratap, A., and Agarwal, S. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6 (8) (2002), 182-197.
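A hedged sketch on a simple bi-objective test function (accessing the pareto.front field of the result as described above for multi-objective results):
fn = function(x) c(sum(x^2), sum((x - 2)^2))
res = nsga2(fn, n.objectives = 2L, n.dim = 2L,
  lower = c(-5, -5), upper = c(5, 5), mu = 20L, lambda = 20L,
  terminators = list(stopOnIters(50L)))
head(res$pareto.front)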
Visualizes empirical distributions of unary EMOA indicators based on the results of computeIndicators.
plotDistribution( inds, plot.type = "boxplot", fill = "algorithm", facet.type = "grid", facet.args = list(), logscale = character() )
Arguments: inds, plot.type, fill, facet.type, facet.args, logscale.
Value: [ggplot]
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()
The function expects a data.frame or a matrix. By default the first
2 or 3 columns/rows are assumed to contain the elements of the approximation sets.
Depending on the number of numeric columns (in case of a data.frame) or the
number of rows (in case of a matrix) the function internally calls
plotScatter2d
or plotScatter3d
.
plotFront(x, obj.names = NULL, minimize = TRUE, ...)
Arguments: x, obj.names, minimize, ....
Value: [ggplot] ggplot object.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotScatter2d(), plotScatter3d(), toLatex()
Given a matrix or list of matrices x this function visualizes each matrix with a heatmap.
plotHeatmap(x, value.name = "Value", show.values = FALSE)
Arguments: x, value.name, show.values.
Value: [ggplot] ggplot object.
# simulate two (correlation) matrices
x = matrix(runif(100), ncol = 10)
y = matrix(runif(100), ncol = 10)
## Not run:
pl = plotHeatmap(x)
pl = plotHeatmap(list(x, y), value.name = "Correlation")
pl = plotHeatmap(list(MatrixX = x, MatrixY = y), value.name = "Correlation")
## End(Not run)
Given a data frame with the results of (multiple) runs of (multiple)
different multi-objective optimization algorithms on (multiple) problem instances
the function generates ggplot
plots of the obtained
Pareto-front approximations.
plotScatter2d(df, obj.cols = c("f1", "f2"), shape = "algorithm", colour = NULL,
  highlight.algos = NULL, offset.highlighted = 0, title = NULL, subtitle = NULL,
  facet.type = "wrap", facet.args = list())
Arguments: df, obj.cols, shape, colour, highlight.algos, offset.highlighted, title, subtitle, facet.type, facet.args.
Value: [ggplot] A ggplot object.
At the moment only approximations of bi-objective functions are supported.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter3d(), toLatex()
## Not run:
# load exemplary data
data(mcMST)
print(head(mcMST))
# no customization; use the defaults
pl = plotFronts(mcMST)
# algo PRIM is obtained by weighted sum scalarization
# Since the front is (mainly) convex we highlight these solutions
pl = plotFronts(mcMST, highlight.algos = "PRIM")
# customize layout
pl = plotFronts(mcMST, title = "Pareto-approximations",
  subtitle = "based on different mcMST algorithms.", facet.args = list(nrow = 2))
## End(Not run)
Given a data frame with the results of (multiple) runs of (multiple) different three-objective optimization algorithms on (multiple) problem instances the function generates 3D scatterplots of the obtained Pareto-front approximations.
plotScatter3d( df, obj.cols = c("f1", "f2", "f3"), max.in.row = 4L, package = "scatterplot3d", ... )
Arguments: df, obj.cols, max.in.row, package, ....
Value: Nothing.
Other EMOA performance assessment tools: approximateNadirPoint(), approximateRefPoints(), approximateRefSets(), computeDominanceRanking(), emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(), plotDistribution(), plotFront(), plotScatter2d(), toLatex()
Expects a data.frame of logged statistics, e.g., extracted from
a logger object by calling getStatistics
, and generates a basic
line plot. The plot is generated with the ggplot2 package and the ggplot
object is returned.
plotStatistics(x, drop.stats = character(0L))
Arguments: x, drop.stats.
The one-point crossover recombinator is defined for float and binary representations. Given two real-valued/binary vectors of length n, the selector samples a random position i between 1 and n-1. In the next step it creates two children. The first part of the first child consists of the subvector from position 1 to position i of the first parent; the second part, from position i+1 to n, is taken from the second parent. The second child is built analogously. If the parents are lists of real-valued/binary vectors, the procedure described above is applied to each element of the list.
recCrossover(inds)
Arguments: inds.
Value: [list]
Other recombinators: recIntermediate(), recOX(), recPMX(), recSBX(), recUnifCrossover()
Intermediate recombination computes the component-wise mean value of the
k
given parents. It is applicable only for float representation.
recIntermediate(inds)
Arguments: inds.
Value: [numeric] Single offspring.
Other recombinators: recCrossover(), recOX(), recPMX(), recSBX(), recUnifCrossover()
This recombination operator is specifically designed for permutations. The operator chooses two cut-points at random and generates two offspring as follows: a) copy the subsequence between the cut-points of one parent, b) remove the copied indices from the entire sequence of the second parent, starting from the second cut-point, and c) fill the remaining gaps with this trimmed sequence.
recOX(inds)
Arguments: inds.
Value: [list]
Other recombinators: recCrossover(), recIntermediate(), recPMX(), recSBX(), recUnifCrossover()
This recombination operator is specifically designed for permutations. The operator chooses two cut-points at random and generates two offspring as follows: a) copy the subsequence of one parent and b) fill the remaining positions while preserving the order and position of as many genes as possible.
recPMX(inds)
Arguments: inds.
Value: [ecr_recombinator]
Other recombinators: recCrossover(), recIntermediate(), recOX(), recSBX(), recUnifCrossover()
The Simulated Binary Crossover was first proposed by [1]. It is suited for float representation only and creates two offspring. Given parents x1 and x2 the offspring are generated as c1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2) and c2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2), where beta is a random spread factor whose distribution is controlled by eta. This way (c1 + c2) / 2 = (x1 + x2) / 2 is assured.
recSBX(inds, eta = 5, p = 1, lower, upper)
Arguments: inds, eta, p, lower, upper.
Value: [ecr_recombinator]
This is the default recombination operator used in the NSGA-II EMOA (see nsga2
).
[1] Deb, K. and Agrawal, R. B. (1995). Simulated binary crossover for continuous search space. Complex Systems 9(2), 115-148.
Other recombinators: recCrossover(), recIntermediate(), recOX(), recPMX(), recUnifCrossover()
Produces two child individuals. The i-th gene is from parent1 with probability
p
and from parent2 with probability 1-p
.
recUnifCrossover(inds, p = 0.5)
Arguments: inds, p.
Value: [list]
Other recombinators: recCrossover(), recIntermediate(), recOX(), recPMX(), recSBX()
Combine multiple data frames into a single data.frame.
reduceToSingleDataFrame(res = list(), what = NULL, group.col.name)
Arguments: res, what, group.col.name.
In ecr the control object stores information on the fitness function and serves as a storage for evolutionary components used by your evolutionary algorithm. This function handles the registration process.
registerECROperator(control, slot, fun, ...)
Arguments: control, slot, fun, ....
Value: [ecr_control]
Takes a population of mu individuals and another set of lambda offspring individuals and selects mu individuals out of the union set according to the survival selection strategy stored in the control object.
replaceMuPlusLambda(control, population, offspring, fitness = NULL, fitness.offspring = NULL)
replaceMuCommaLambda(control, population, offspring, fitness = NULL, fitness.offspring = NULL,
  n.elite = base::max(ceiling(length(population) * 0.1), 1L))
Arguments: control, population, offspring, fitness, fitness.offspring, n.elite.
Value: [list] List with selected population and corresponding fitness matrix.
Performs non-dominated sorting and drops the individual from the last front
with minimal hypervolume contribution. This selector is the basis of the
S-Metric Selection Evolutionary Multi-Objective Algorithm, termed SMS-EMOA
(see smsemoa
).
selDomHV(fitness, n.select, ref.point)
Arguments: fitness, n.select, ref.point.
Value: [integer] Vector of survivor indices.
Note that the current implementation expects n.select = ncol(fitness) - 1
and the selection process quits with an error message if n.select
is greater
than 1.
Other selectors:
selDomNumberPlusHV()
,
selGreedy()
,
selNondom()
,
selRanking()
,
selRoulette()
,
selSimple()
,
selTournament()
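A minimal sketch (fitness values made up, minimization assumed, one fitness column per individual) that drops the individual with the smallest hypervolume contribution:

# 2 objectives, 10 individuals with fitness values in [0, 1]
fitness = matrix(runif(20), nrow = 2L)
# select all but one individual relative to the reference point (1.1, 1.1)
surv = selDomHV(fitness, n.select = ncol(fitness) - 1L, ref.point = c(1.1, 1.1))
print(surv)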
Alternative SMS-EMOA survival selection as proposed in Algorithm 3 of [1]. Performs non-dominated
sorting first. If the number of non-domination levels is at least two the algorithm
drops the individual with the highest number of dominating points (ties are
broken at random) from the last layer. If there is just one non-domination layer,
i.e., all points are non-dominated, the method drops the individual with minimal
hypervolume contribution. This selector is the basis of the
S-Metric Selection Evolutionary Multi-Objective Algorithm, termed SMS-EMOA
(see smsemoa
).
selDomNumberPlusHV(fitness, n.select, ref.point)
selDomNumberPlusHV(fitness, n.select, ref.point)
fitness |
[ |
n.select |
[ |
ref.point |
[ |
[integer
] Vector of survivor indices.
Note that the current implementation expects n.select = ncol(fitness) - 1,
i.e., exactly one individual is dropped, and the selection process quits
with an error message otherwise.
[1] Beume, Nicola, Boris Naujoks and M. Emmerich. "SMS-EMOA: Multiobjective selection based on dominated hypervolume." European Journal of Operational Research 181 (2007): 1653-1669.
Other selectors:
selDomHV()
,
selGreedy()
,
selNondom()
,
selRanking()
,
selRoulette()
,
selSimple()
,
selTournament()
These utility functions expect a control object, a matrix of
fitness values - each column containing the fitness value(s) of one individual -
and the number of individuals to select.
The corresponding selector, i.e., the mating selector for selectForMating
or the survival selector for selectForSurvival, is then called internally
and a vector of indices of selected individuals is returned.
selectForMating(control, fitness, n.select) selectForSurvival(control, fitness, n.select)
selectForMating(control, fitness, n.select) selectForSurvival(control, fitness, n.select)
control |
[ |
fitness |
[ |
n.select |
[ |
Both functions check the optimization directions stored in the task
inside the control object, i.e., whether to minimize or maximize each objective,
and transparently prepare/transform the fitness
matrix for the selector.
[integer
] Integer vector with the indices of selected individuals.
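A minimal sketch of mating selection via the control object; the slot name "selectForMating" and the tournament parameter are assumptions based on common ecr usage:

library(ecr)
ctrl = initECRControl(fitness.fun = function(x) sum(x^2), n.objectives = 1L)
ctrl = registerECROperator(ctrl, "selectForMating", selTournament, k = 2L)
population = genReal(10L, n.dim = 2L, lower = -5, upper = 5)
fitness = evaluateFitness(ctrl, population)
# indices of 5 parents chosen for reproduction
idx = selectForMating(ctrl, fitness, n.select = 5L)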
Sorts the individuals according to their fitness value in increasing order and selects the best ones.
selGreedy(fitness, n.select)
selGreedy(fitness, n.select)
fitness |
[ |
n.select |
[ |
[integer
] Vector of survivor indices.
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selNondom()
,
selRanking()
,
selRoulette()
,
selSimple()
,
selTournament()
Applies non-dominated sorting of the objective vectors and subsequent crowding
distance computation to select a subset of individuals. This is the selector used
by the famous NSGA-II EMOA (see nsga2
).
selNondom(fitness, n.select)
selNondom(fitness, n.select)
fitness |
[ |
n.select |
[ |
[setOfIndividuals
]
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selGreedy()
,
selRanking()
,
selRoulette()
,
selSimple()
,
selTournament()
Rank-based selection preserves a constant selection pressure by sorting the population on the basis of fitness, and then allocating selection probabilities to individuals according to their rank, rather than according to their actual fitness values.
selRanking(fitness, n.select, s = 1.5, scheme = "linear")
selRanking(fitness, n.select, s = 1.5, scheme = "linear")
fitness |
[ |
n.select |
[ |
s |
[ |
scheme |
[ |
[setOfIndividuals
]
Eiben, A. E., & Smith, J. E. (2007). Introduction to evolutionary computing. Berlin: Springer.
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selGreedy()
,
selNondom()
,
selRoulette()
,
selSimple()
,
selTournament()
The chance of an individual to get selected is proportional to its fitness, i.e., better individuals get a higher chance to take part in the reproduction process. Low-fitness individuals, however, have a positive probability of being selected as well.
selRoulette(fitness, n.select, offset = 0.1)
selRoulette(fitness, n.select, offset = 0.1)
fitness |
[ |
n.select |
[ |
offset |
[ |
Fitness proportional selection can be naturally applied to single objective
maximization problems. However, negative fitness values are problematic.
The Roulette-Wheel selector thus works with the following heuristic: if
negative values occur, the negative of the smallest fitness value is added
to each fitness value. In this case, to avoid the smallest shifted fitness
value being zero (and thus having a zero probability of being selected), an additional
positive constant offset
is added (see parameters).
[setOfIndividuals
]
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selGreedy()
,
selNondom()
,
selRanking()
,
selSimple()
,
selTournament()
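A minimal sketch (fitness values made up) calling the selector directly on a single-objective fitness matrix; note the negative value, which triggers the shifting heuristic described above:

fitness = matrix(c(-2, 0.5, 1, 4), nrow = 1L)  # one row = one objective
idx = selRoulette(fitness, n.select = 2L, offset = 0.1)
print(idx)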
Just for testing. This selector does not really select, but instead returns a random
sample of the ncol(fitness)
indices.
selSimple(fitness, n.select)
selSimple(fitness, n.select)
fitness |
[ |
n.select |
[ |
[setOfIndividuals
]
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selGreedy()
,
selNondom()
,
selRanking()
,
selRoulette()
,
selTournament()
k individuals from the population are chosen randomly and the best one is selected to be included into the mating pool. This process is repeated until the desired number of individuals for the mating pool is reached.
selTournament(fitness, n.select, k = 3L)
selTournament(fitness, n.select, k = 3L)
fitness |
[ |
n.select |
[ |
k |
[ |
[integer
] Vector of survivor indices.
Other selectors:
selDomHV()
,
selDomNumberPlusHV()
,
selGreedy()
,
selNondom()
,
selRanking()
,
selRoulette()
,
selSimple()
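A minimal sketch (fitness values made up) with tournament size k = 2:

fitness = matrix(runif(10), nrow = 1L)  # single-objective fitness of 10 individuals
idx = selTournament(fitness, n.select = 5L, k = 2L)
print(idx)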
The function checks whether each point of the second set is dominated by at least one point from the first set.
setDominates(x, y)
setDominates(x, y)
x |
[ |
y |
[ |
[logical(1)
]
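A minimal sketch (values made up; minimization and one point per column are assumed):

x = matrix(c(1, 1, 2, 0), nrow = 2L)  # first approximation set
y = matrix(c(3, 3, 4, 2), nrow = 2L)  # second approximation set
# TRUE if every point of y is dominated by some point of x
setDominates(x, y)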
This function builds a simple wrapper around an evolutionary operator, i.e.,
a mutator, recombinator or selector, and fixes some of its parameters. The result is a
function that no longer depends on these parameters. E.g., fun = setup(mutBitflip, p = 0.3)
initializes a bitflip mutator with mutation probability 0.3. Thus,
the following calls have the same behaviour: fun(c(1, 0, 0))
and
mutBitflip(c(1, 0, 0), p = 0.3)
.
Basically, this type of preinitialization is only necessary if operators
with additional parameters shall be initialized in order to use the black-box
ecr
.
setup(operator, ...)
setup(operator, ...)
operator |
[ |
... |
[any] |
[function
] Wrapper evolutionary operator with parameters x
and ...
.
# initialize bitflip mutator with p = 0.3 bf = setup(mutBitflip, p = 0.3) # sample binary string x = sample(c(0, 1), 100, replace = TRUE) set.seed(1) # apply preinitialized function print(bf(x)) set.seed(1) # apply raw function print(mutBitflip(x, p = 0.3)) # overwrite preinitialized values with mutate ctrl = initECRControl(fitness.fun = function(x) sum(x), n.objectives = 1L) # here we define a mutation probability of 0.3 ctrl = registerECROperator(ctrl, "mutate", setup(mutBitflip, p = 0.3)) # here we overwrite with 1, i.e., each bit is flipped print(x) print(mutate(ctrl, list(x), p.mut = 1, p = 1)[[1]])
# initialize bitflip mutator with p = 0.3 bf = setup(mutBitflip, p = 0.3) # sample binary string x = sample(c(0, 1), 100, replace = TRUE) set.seed(1) # apply preinitialized function print(bf(x)) set.seed(1) # apply raw function print(mutBitflip(x, p = 0.3)) # overwrite preinitialized values with mutate ctrl = initECRControl(fitness.fun = function(x) sum(x), n.objectives = 1L) # here we define a mutation probability of 0.3 ctrl = registerECROperator(ctrl, "mutate", setup(mutBitflip, p = 0.3)) # here we overwrite with 1, i.e., each bit is flipped print(x) print(mutate(ctrl, list(x), p.mut = 1, p = 1)[[1]])
Default monitor object that outputs messages to the console
based on a default logger (see initLogger
).
setupECRDefaultMonitor(step = 10L)
setupECRDefaultMonitor(step = 10L)
step |
[ |
[ecr_monitor
]
Pure R implementation of the SMS-EMOA. This algorithm belongs to the group of indicator based multi-objective evolutionary algorithms. In each generation, the SMS-EMOA selects two parents uniformly at random, applies recombination and mutation and finally selects the best subset of individuals among all subsets by maximizing the Hypervolume indicator.
smsemoa( fitness.fun, n.objectives = NULL, n.dim = NULL, minimize = NULL, lower = NULL, upper = NULL, mu = 100L, ref.point = NULL, mutator = setup(mutPolynomial, eta = 25, p = 0.2, lower = lower, upper = upper), recombinator = setup(recSBX, eta = 15, p = 0.7, lower = lower, upper = upper), terminators = list(stopOnIters(100L)), ... )
smsemoa( fitness.fun, n.objectives = NULL, n.dim = NULL, minimize = NULL, lower = NULL, upper = NULL, mu = 100L, ref.point = NULL, mutator = setup(mutPolynomial, eta = 25, p = 0.2, lower = lower, upper = upper), recombinator = setup(recSBX, eta = 15, p = 0.7, lower = lower, upper = upper), terminators = list(stopOnIters(100L)), ... )
fitness.fun |
[ |
n.objectives |
[ |
n.dim |
[ |
minimize |
[ |
lower |
[ |
upper |
[ |
mu |
[ |
ref.point |
[ |
mutator |
[ |
recombinator |
[ |
terminators |
[ |
... |
[any] |
[ecr_multi_objective_result
]
This helper function hides the regular ecr interface and offers a more R-like interface to this state-of-the-art EMOA.
Beume, N., Naujoks, B., Emmerich, M., SMS-EMOA: Multiobjective selection based on dominated hypervolume, European Journal of Operational Research, Volume 181, Issue 3, 16 September 2007, Pages 1653-1669.
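A minimal sketch (the toy objective function and the result field accessed below are assumptions, not package examples):

library(ecr)
# hypothetical bi-objective toy function on [0, 1]
fn = function(x) c(x[1], 1 - sqrt(x[1]))
res = smsemoa(fn, n.objectives = 2L, n.dim = 1L, lower = 0, upper = 1,
  mu = 20L, ref.point = c(2, 2), terminators = list(stopOnIters(50L)))
# res is an ecr_multi_objective_result; inspect the approximated Pareto front
head(res$pareto.front)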
Sort Pareto-front approximation by objective.
sortByObjective(x, obj = 1L, ...) ## S3 method for class 'data.frame' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'matrix' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'ecr_multi_objective_result' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'list' sortByObjective(x, obj = 1L, ...)
sortByObjective(x, obj = 1L, ...) ## S3 method for class 'data.frame' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'matrix' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'ecr_multi_objective_result' sortByObjective(x, obj = 1L, ...) ## S3 method for class 'list' sortByObjective(x, obj = 1L, ...)
x |
[ |
obj |
[ |
... |
[any] |
Modified object.
Stop the EA after a fixed number of fitness function evaluations, after a predefined number of generations/iterations, after a given cutoff time, or if the known optimal function value is approximated (only for single-objective optimization).
stopOnEvals(max.evals = NULL) stopOnIters(max.iter = NULL) stopOnOptY(opt.y, eps) stopOnMaxTime(max.time = NULL)
stopOnEvals(max.evals = NULL) stopOnIters(max.iter = NULL) stopOnOptY(opt.y, eps) stopOnMaxTime(max.time = NULL)
max.evals |
[ |
max.iter |
[ |
opt.y |
[ |
eps |
[ |
max.time |
[ |
[ecr_terminator
]
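A minimal sketch bundling several stopping conditions into one list (the time limit is assumed to be given in seconds); such a list is passed via the terminators argument of ecr() or one of the wrappers such as smsemoa():

terminators = list(stopOnIters(100L), stopOnEvals(2000L), stopOnMaxTime(30))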
Transform the data.frame of logged statistics from wide to ggplot2-friendly long format.
toGG(x, drop.stats = character(0L))
toGG(x, drop.stats = character(0L))
x |
[ |
drop.stats |
[ |
[data.frame
]
Returns high-quality LaTeX tables of the results of statistical tests
performed with function test
on a per-instance basis, i.e., a table is returned for each instance, combining
the results of different indicators.
toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL ) ## S3 method for class 'list' toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL ) ## S3 method for class 'data.frame' toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL )
toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL ) ## S3 method for class 'list' toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL ) ## S3 method for class 'data.frame' toLatex( stats, stat.cols = NULL, probs = NULL, type = "by.instance", cell.formatter = NULL )
stats |
[ |
stat.cols |
[ |
probs |
[ |
type |
[ |
cell.formatter |
[ |
[list
] Named list of strings (LaTeX tables). Names correspond to the
selected problem instances in probs
.
Other EMOA performance assessment tools:
approximateNadirPoint()
,
approximateRefPoints()
,
approximateRefSets()
,
computeDominanceRanking()
,
emoaIndEps()
,
makeEMOAIndicator()
,
niceCellFormater()
,
normalize()
,
plotDistribution()
,
plotFront()
,
plotScatter2d()
,
plotScatter3d()
Inside ecr EMOA algorithms the fitness is maintained in an (o x n) matrix,
where o is the number of objectives and n is the number of individuals.
This function basically transposes such a matrix and converts it into a data frame.
toParetoDf(x, filter.dups = FALSE)
toParetoDf(x, filter.dups = FALSE)
x |
[ |
filter.dups |
[ |
[data.frame
]
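A minimal sketch (the fitness matrix is made up) converting a 2 x 5 fitness matrix into a data frame of points:

fitness = matrix(runif(10), nrow = 2L)  # 2 objectives, 5 individuals
df = toParetoDf(fitness, filter.dups = TRUE)
str(df)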
Some selectors support maximization only, e.g., the roulette-wheel selector, while most others support minimization. This function computes a factor of -1 or 1 for each objective to match the optimization directions supported by the selector with the actual objectives of the task.
transformFitness(fitness, task, selector)
transformFitness(fitness, task, selector)
fitness |
[matrix] Matrix of fitness values with the fitness vector of individual i in the i-th column. |
task |
[ecr_optimization_task] Optimization task. |
selector |
[ecr_selector] Selector object. |
[matrix] Transformed / scaled fitness matrix.
This function modifies the log in-place, i.e., without making copies.
updateLogger(log, population, fitness = NULL, n.evals, extras = NULL, ...)
updateLogger(log, population, fitness = NULL, n.evals, extras = NULL, ...)
log |
[ |
population |
[ |
fitness |
[ |
n.evals |
[ |
extras |
[ |
... |
[any] |
Other logging:
getPopulationFitness()
,
getPopulations()
,
getStatistics()
,
initLogger()
This function updates a Pareto archive, i.e., an archive of non-dominated
points. It expects the archive, a set of individuals, a matrix of fitness values
(each column corresponds to the fitness vector of one individual) and updates
the archive “in-place”. If the archive has unlimited capacity all non-dominated points of
the union of archive and passed individuals are stored. Otherwise, i.e., in case
the archive is limited in capacity (argument max.size
of
initParetoArchive
was set to an integer value greater than zero), the
trunc.fun
function passed to initParetoArchive
is applied to
all non-dominated points to determine which points should be dropped.
updateParetoArchive(archive, inds, fitness, ...)
updateParetoArchive(archive, inds, fitness, ...)
archive |
[ |
inds |
[ |
fitness |
[ |
... |
[any] |
Other ParetoArchive:
getIndividuals()
,
getSize()
,
initParetoArchive()
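A minimal sketch of maintaining an unbounded archive during a run; the argument order of initParetoArchive and the use of genReal and evaluateFitness are assumptions based on common ecr usage:

library(ecr)
ctrl = initECRControl(function(x) c(sum(x^2), sum((x - 2)^2)), n.objectives = 2L)
archive = initParetoArchive(ctrl)  # unlimited capacity
population = genReal(10L, n.dim = 2L, lower = -5, upper = 5)
fitness = evaluateFitness(ctrl, population)
updateParetoArchive(archive, population, fitness)
getSize(archive)  # number of non-dominated points stored so far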
Given a matrix with one point per column, which.dominated
returns the
column numbers of the dominated points and which.nondominated
the column
numbers of the nondominated points. Function isMaximallyDominated
returns
a logical vector with TRUE
for each point which is located on the last
non-domination level.
which.dominated(x) which.nondominated(x) isMaximallyDominated(x)
which.dominated(x) which.nondominated(x) isMaximallyDominated(x)
x |
[ |
[integer
]
data(mtcars) # assume we want to maximize horsepower and minimize gas consumption cars = mtcars[, c("mpg", "hp")] cars$hp = -cars$hp idxs = which.nondominated(as.matrix(cars)) print(mtcars[idxs, ])
data(mtcars) # assume we want to maximize horsepower and minimize gas consumption cars = mtcars[, c("mpg", "hp")] cars$hp = -cars$hp idxs = which.nondominated(as.matrix(cars)) print(mtcars[idxs, ])
Should be used if the recombinator returns multiple children.
wrapChildren(...)
wrapChildren(...)
... |
[any] |
[list
] List of individuals.
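A minimal sketch of a custom recombinator that returns two children; the makeRecombinator constructor and its n.parents/n.children arguments are assumed here and should be checked against the package:

library(ecr)
# hypothetical one-point crossover for binary strings returning both children
myOnePointCrossover = makeRecombinator(
  recombinator = function(inds, ...) {
    p1 = inds[[1L]]; p2 = inds[[2L]]
    cut = sample(seq_len(length(p1) - 1L), 1L)  # crossover point
    c1 = c(p1[1:cut], p2[(cut + 1L):length(p2)])
    c2 = c(p2[1:cut], p1[(cut + 1L):length(p1)])
    wrapChildren(c1, c2)  # wrap both children into a list
  },
  n.parents = 2L, n.children = 2L)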