Optimizer - Algorithm Settings
Simulation Optimizer Properties
Simulation Start Simulation Optimizer Properties
Depending on which algorithm has been chosen on the optimizer's settings page, this dialog box offers the possibility to change the optimization settings for the CMA Evolution Strategy, the Nelder Mead Simplex Algorithm, the Genetic Algorithm, or the Particle Swarm Optimization.
Nelder Mead Simplex Algorithm
This algorithm features several stopping criteria because it combines properties of local and global optimization techniques. Like the global optimization algorithms, it starts with a set of points distributed in the parameter space; as the optimization run proceeds, it converges to some optimum, which may be a local extremum.
Goal Function Level: A desired goal function level can be specified for the Nelder Mead Simplex Algorithm. The algorithm stops if the goal function value falls below the specified level. However, if the optimization is run distributed, this criterion is only checked after all parallel evaluations have been calculated. If the desired level is set to zero, the algorithm stops only when one of the other criteria is satisfied.
Minimal Simplex Size: For the optimization, the parameter space is mapped onto the unit cube. The simplex is a geometrical figure that moves through this multidimensional space. The algorithm stops as soon as the largest edge of the simplex becomes smaller than the specified size. If the optimization is defined over just one parameter in the interval [0;1], this setting corresponds to the desired accuracy in the parameter space.
Maximal Number of Evaluations: Depending on the optimization problem definition, it is possible that the specified goal function level cannot be reached. If, in addition, a small value is chosen for the minimal simplex size, the algorithm may need many goal function evaluations to converge. In this case it is convenient to define a maximal number of function evaluations to limit the optimization time a priori. This number has to be greater than one.
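The minimal-simplex-size criterion above can be illustrated with a short sketch (a minimal illustration under the stated unit-cube assumption, not the tool's actual implementation):

```python
import numpy as np

def largest_simplex_edge(vertices):
    """Return the length of the longest edge of a simplex.

    `vertices` is an (n+1, n) array: n+1 points in the n-dimensional
    unit cube (all parameters assumed rescaled to [0, 1]).
    """
    v = np.asarray(vertices, dtype=float)
    diffs = v[:, None, :] - v[None, :, :]          # pairwise difference vectors
    return float(np.linalg.norm(diffs, axis=-1).max())

# One parameter on [0, 1]: the simplex is an interval, so the largest
# edge is the interval length, i.e. the achieved parameter accuracy.
simplex_1d = [[0.20], [0.25]]
print(largest_simplex_edge(simplex_1d))            # ~0.05: stop once below the minimal size
```

The algorithm would terminate as soon as this value drops below the configured minimal simplex size.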
CMA Evolution Strategy
CMA Evolution Strategy is the abbreviation for Covariance Matrix Adaptation Evolution Strategy. The algorithm uses a statistical model to sample the design space efficiently. The parameter space is sampled from a multivariate normal distribution whose covariance matrix is adapted during the optimization. This statistical approach gives the algorithm the robustness of a global optimization method, avoiding early convergence to a local minimum. The algorithm also has an internal step size that controls convergence, so the method shares properties of local and global optimization.
Maximal Number of Evaluations: Depending on the optimization problem definition, it is possible that the specified goal function level cannot be reached, so the algorithm may need many goal function evaluations to converge. In this case it is convenient to define a maximal number of function evaluations to limit the optimization time a priori. This number has to be greater than one.
Sigma: The chosen value scales the algorithm's initial step size. A small value (close to zero) makes the method more local and improves its convergence properties. A large value (close to one) makes the method more global and improves the robustness against early convergence to a local minimum.
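The sampling step described above can be sketched as follows (a simplified illustration of one CMA-ES generation; the function and parameter names are hypothetical, and the covariance adaptation itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_generation(mean, cov, sigma, pop_size):
    """Draw one generation of candidate points: samples from a
    multivariate normal around `mean`, whose shape is given by the
    (adapted) covariance matrix `cov`, scaled by the step size `sigma`."""
    return rng.multivariate_normal(mean, (sigma ** 2) * cov, size=pop_size)

# Two parameters mapped to the unit cube; start at the centre.
mean = np.array([0.5, 0.5])
cov = np.eye(2)                        # initial covariance: isotropic
candidates = sample_generation(mean, cov, sigma=0.2, pop_size=6)
print(candidates.shape)                # (6, 2): six candidates, two parameters
```

A small `sigma` concentrates the samples near the current mean (local search); a larger `sigma` spreads them across the parameter space (global search), matching the description of the Sigma setting.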
Genetic Algorithm
Population Size: It is possible to specify the population size for the algorithm. Keep in mind that choosing a small population size increases the risk that the gene pool is depleted. If a large population size is chosen, more solver evaluations are necessary for the calculation of each generation.
Maximal Number of Iterations: The Genetic Algorithm stops after the maximal number of iterations has been performed. This makes it possible to estimate the maximal optimization time a priori. If "n" is the population size and "m" is the maximal number of iterations, "(m+1)*n/2 + 1" solver runs will be performed. However, this estimate is not valid if the Interpolation feature (see Optimizer - Settings) is switched on, the optimization is aborted, or the desired accuracy is reached.
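The run-count formula quoted above can be evaluated directly, for example to budget solver time before starting (a small helper sketch; the function name is hypothetical):

```python
def ga_max_solver_runs(n, m):
    """Upper bound on solver runs for the Genetic Algorithm,
    (m + 1) * n / 2 + 1, where n is the population size and
    m is the maximal number of iterations (formula from the text)."""
    return (m + 1) * n // 2 + 1

# Population of 30 over at most 9 iterations:
print(ga_max_solver_runs(n=30, m=9))   # 151 solver runs at most
```

Multiplying this count by the time of a single solver run gives a rough a-priori estimate of the total optimization time.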
Goal Function Level: A desired goal function level can be specified for the Genetic Algorithm. The algorithm stops if the goal function value falls below the specified level. However, if the optimization is run distributed, this criterion is only checked after the complete population has been calculated. If the desired level is set to zero, the Maximal Number of Iterations is the only stopping condition.
Mutation Rate: If the genes of two parents are sufficiently similar, the mutation rate specifies the probability that a mutation occurs.
Particle Swarm Optimization
Swarm Size: It is possible to specify the swarm size for the algorithm. Keep in mind that choosing a small swarm size increases the risk that the desired improvement needs more iterations. If a large swarm size is chosen, more solver evaluations are necessary for each iteration.
Maximal Number of Iterations: The Particle Swarm Algorithm stops after the maximal number of iterations has been performed. This makes it possible to estimate the maximal optimization time a priori. If "n" is the swarm size and "m" is the maximal number of iterations, at most "m*n + 1" solver runs will be performed. However, this estimate is not valid if the Interpolation feature is switched on, the optimization is aborted, or the desired accuracy is reached.
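As for the Genetic Algorithm, the stated bound can be computed up front (a small helper sketch; the function name is hypothetical):

```python
def pso_max_solver_runs(n, m):
    """Upper bound on solver runs for Particle Swarm Optimization,
    m * n + 1, where n is the swarm size and m is the maximal
    number of iterations (formula from the text)."""
    return m * n + 1

# Swarm of 20 particles over at most 10 iterations:
print(pso_max_solver_runs(n=20, m=10))  # 201 solver runs at most
```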
Goal Function Level: A desired goal function level can be specified for the Particle Swarm Algorithm. The algorithm stops if the goal function value falls below the specified level. However, if the optimization is run distributed, this criterion is only checked after the complete swarm has been calculated. If the desired level is set to zero, the Maximal Number of Iterations is the only stopping condition.
General Settings for Optimization:
Choice of the Initial Data Set
For the featured global optimization techniques and the Nelder Mead Simplex Algorithm, a set of initial points in the parameter space is necessary. These points are generated automatically, either by a uniform random distribution generator or by the Latin Hypercube approach.
Uniform Random Numbers: For each starting point, a pseudo-random number generator chooses uniformly distributed points in the parameter space.
Latin Hypercube: Randomly chosen points sometimes have the disadvantage that they do not have optimal space-filling properties in the parameter space. Latin Hypercube sampling has the special property that a projection onto each parameter interval yields an equidistant sampling.
Noisy Latin Hypercube Distribution: The initial points are distributed similarly to the Latin Hypercube Distribution, but a perturbation is added to each point. This distribution type has similar space-filling properties to the Latin Hypercube Distribution, but the generated point set is less regular. This distribution type is only available for the Nelder Mead Simplex Algorithm.
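The equidistant-projection property of the Latin Hypercube, and the jittered "noisy" variant, can be sketched as follows (a minimal illustration, not the tool's actual generator; the function and parameter names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_points, n_params, noise=0.0):
    """Latin Hypercube sample in the unit cube: each parameter axis is
    divided into n_points equal strata and each stratum is used exactly
    once per axis, so the projection onto every axis is equidistant.
    `noise > 0` perturbs the points within their strata, giving the
    'noisy' variant described above."""
    samples = np.empty((n_points, n_params))
    for j in range(n_params):
        # centres of the n_points strata, visited in random order
        centres = (np.arange(n_points) + 0.5) / n_points
        rng.shuffle(centres)
        samples[:, j] = centres
    if noise > 0.0:
        jitter = rng.uniform(-0.5, 0.5, samples.shape) * noise / n_points
        samples = np.clip(samples + jitter, 0.0, 1.0)
    return samples

pts = latin_hypercube(5, 2)
# projection onto each axis hits each of the 5 strata exactly once
print(sorted(pts[:, 0].tolist()))  # [0.1, 0.3, 0.5, 0.7, 0.9]
```

With `noise=0` the projections are exactly equidistant; with a small positive `noise` the set keeps its space-filling character but becomes less regular, matching the description of the noisy variant.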
Cancel
Closes the optimizer dialog box without applying any changes.
OK
Stores the current settings and closes the dialog box. Please note that the settings made in the dialog box for the Genetic Algorithm or the Particle Swarm Optimization only take effect if Start or Apply in the optimizer dialog is pressed afterwards.
Help
Shows this help text.
See also
Optimizer Overview, Optimizer - Interpolation of Primary Data, Optimizer Settings, Optimizer Goals, Optimizer Info Page, Solver Overview