Ansoft Designer / Ansys Designer Online Help:


Optimetrics >
   Optimization Overview >
       Choosing an Optimizer           


Choosing an Optimizer

When running an optimization analysis, you can choose from six optimizers; in most cases, the Sequential Non-Linear Programming optimizer is recommended:

• Quasi Newton

If the Sequential Non-Linear Programming optimizer has difficulty, and the numerical noise is insignificant during the solution process, use the Quasi Newton optimizer to obtain the results. This optimizer searches for the location of the minimum of a user-defined cost function by approximating its gradient. The gradient approximation is accurate only if there is little noise in the cost-function calculation, and that calculation involves FEA, which has finite accuracy.
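As a minimal illustration of why noise matters here, the sketch below (in Python, with a hypothetical cost function; none of the names come from Designer) approximates a gradient with forward differences. If each cost evaluation carries noise of size eps, every gradient component inherits an error of roughly eps/h, which is what degrades the Quasi Newton search on noisy FEA results.

import numpy as np

def approx_gradient(cost, x, h=1e-3):
    # Forward-difference approximation of the gradient of a cost function.
    # Noise of size eps in each cost evaluation produces an error of about
    # eps / h in every component, so the estimate is useful only when the
    # cost calculation (here a cheap stand-in for an FEA solve) is quiet.
    g = np.zeros_like(x, dtype=float)
    f0 = cost(x)
    for i in range(len(x)):
        xi = x.copy()
        xi[i] += h
        g[i] = (cost(xi) - f0) / h
    return g

# Hypothetical smooth cost: a quadratic bowl centered at (1, 2).
cost = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
print(approx_gradient(cost, np.array([0.0, 0.0])))   # approximately [-2, -4]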

• Pattern Search

If the noise is significant in the nominal project, use the Pattern Search optimizer to obtain the results. It performs a grid-based simplex search, which makes use of simplices: triangles in 2D space or tetrahedra in 3D space. The cost value is calculated at the vertices of the simplex. The optimizer mirrors the simplex across one of its faces, based on mathematical guidelines, and determines whether the new simplex provides better results. If it does not, the next face is used for mirroring and the pattern continues; if no face yields an improvement, the grid is refined. If an improvement occurs, the step is accepted and the new simplex replaces the original one. Pattern search algorithms are less sensitive to noise.
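The sketch below shows one generic simplex-reflection step of the kind described above; it is an illustration in Python, not Designer's exact grid-based scheme, and all names in it are hypothetical.

import numpy as np

def reflect_worst_vertex(simplex, cost):
    # One simplex-reflection step: mirror the worst vertex across the
    # centroid of the opposite face and accept the move if it improves.
    values = np.array([cost(v) for v in simplex])
    worst = np.argmax(values)
    centroid = (simplex.sum(axis=0) - simplex[worst]) / (len(simplex) - 1)
    mirrored = 2.0 * centroid - simplex[worst]
    if cost(mirrored) < values[worst]:          # improvement: accept the new simplex
        new_simplex = simplex.copy()
        new_simplex[worst] = mirrored
        return new_simplex, True
    return simplex, False                       # no improvement: the caller refines the grid

# Hypothetical 2D cost and a triangular simplex.
cost = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(reflect_worst_vertex(triangle, cost))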

• Merit-based Sequential Quadratic Programming

The sequential quadratic programming (sequential QP) algorithm is a generalization of Newton's method for unconstrained optimization in that it finds a step away from the current point by minimizing a quadratic model of the problem. The Lagrange multiplier estimates that are needed to set up the second-order term can be obtained by solving an auxiliary problem or by simply using the optimal multipliers for the quadratic subproblem at the previous iteration.

The sequential QP algorithm replaces the objective function with the quadratic approximation and replaces the constraint functions with linear approximations. In this formulation, the step is calculated by solving a quadratic subprogram.
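In standard SQP notation (the symbols below are ours for illustration; the help topic does not define them), the step d_k taken from the iterate x_k solves the quadratic subprogram

\min_{d}\ \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} B_k\, d
\quad \text{subject to} \quad
c_i(x_k) + \nabla c_i(x_k)^{T} d = 0 \;(i \in \mathcal{E}), \qquad
c_i(x_k) + \nabla c_i(x_k)^{T} d \ge 0 \;(i \in \mathcal{I}),

where f is the cost function, the c_i are the constraints, and B_k is the (approximate) Hessian of the Lagrangian.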

The sequential QP approach outlined above requires the computation of the second derivative of the corresponding Lagrangian function. In this work, this (Hessian) matrix is replaced with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) approximation, which is updated at each iteration.
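The standard BFGS update used to maintain such an approximation (textbook form, not quoted from the help topic) is

B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k},
\qquad s_k = x_{k+1} - x_k, \quad
y_k = \nabla_x \mathcal{L}(x_{k+1}, \lambda_{k+1}) - \nabla_x \mathcal{L}(x_k, \lambda_{k+1}).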

However, one of the properties that makes Broyden-class methods appealing for unconstrained problems, the maintenance of positive definiteness, is no longer assured in the constrained setting, since the Hessian of the Lagrangian is usually positive definite only on a subspace. When this happens, we overcome the difficulty by modifying the BFGS estimate with the identity matrix, which makes the algorithm behave more like a conjugate gradient algorithm.

The convergence properties of the basic sequential QP algorithm can be improved by using a line search. The choice of distance to move along the direction generated by the subproblem is not as clear as in the unconstrained case, where we simply choose a step length that approximately minimizes the cost function along the search direction.

For constrained problems, we would like the next iterate not only to decrease the cost function but also to come closer to satisfying the constraints. Often these two aims conflict, so it is necessary to weigh their relative importance and define a merit or penalty function, which we can use as a criterion for determining whether or not one point is better than another. In this work, we use a Fibonacci search algorithm when the nonlinear cost function is smooth and a Wolfe search algorithm when the cost function is not smooth enough. Hence, we refer to this algorithm as the merit-based sequential quadratic program (MSQP).
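A common choice of merit function for this purpose (shown only as an illustration; the help topic does not state the exact form used) is the \ell_1 penalty

\phi(x; \mu) = f(x) + \mu \sum_{i \in \mathcal{E}} |c_i(x)| + \mu \sum_{i \in \mathcal{I}} \max\bigl(0, -c_i(x)\bigr),

where a larger penalty weight \mu places more emphasis on satisfying the constraints than on decreasing the cost f.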

Given the current iterate and the search direction, MSQP computes the next iterate. When the merit function is used, the step length is chosen to approximately minimize it, and the search direction is found by solving the quadratic programming subproblem associated with the Lagrange multiplier estimates.

• Sequential Non-Linear Programming

The main advantage of SNLP over quasi Newton is that it handles the optimization problem in more depth. This optimizer assumes that the optimization variables span a continuous space.

Like the Quasi Newton optimizer, the SNLP optimizer assumes that the noise is not significant. It does reduce the effect of the noise, but the noise filtering is not strong. The SNLP optimizer approximates the FEA characterization with response surfaces. Combining this FEA approximation with the inexpensive evaluation of the rest of the cost function, SNLP obtains a good approximation of the cost function in terms of the optimization variables. This approximation allows the SNLP optimizer to estimate the location of improving points, and the overall cost approximations are more accurate. As a result, the SNLP optimizer converges faster in practice than the Quasi Newton optimizer.

The SNLP Optimizer attempts to solve a series of NLP problems on a series of inexpensive, local surrogates. Direct application of a Nonlinear Programming (NLP) solver is impractical because the cost evaluation involves finite element analysis (FEA), which uses extensive computational resources.

The SNLP method is similar to the Sequential Quadratic Programming (SQP) method in two ways: both are sequential, and both use local, inexpensive surrogates. However, in the SNLP case the surrogate can be of a higher order and can carry more general constraints. The inexpensive surrogate model is obtained by response surface (RS) techniques. The goal is a surrogate model that is accurate enough on a wider scale, so that the search procedure is well led by the surrogate, even for relatively large steps. All functions calculated by the supporting finite element product (for example, Maxwell 3D or Designer) are assumed to be expensive, while the rest of the cost calculation (for example, an extra user-defined expression), which is implemented in Optimetrics, is assumed to be inexpensive. For this reason, it makes sense to remove inexpensive evaluations from the finite element problem and, instead, implement them in Optimetrics. This optimizer holds several advantages over the Quasi Newton and Pattern Search optimizers.
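The sketch below illustrates the response-surface idea in Python on a one-dimensional toy problem: a few expensive samples are fitted with an inexpensive quadratic model, and the model is minimized instead of the expensive function. The sampling and fitting details are illustrative assumptions, not Designer's actual RS machinery.

import numpy as np

def fit_quadratic_surrogate(samples, values):
    # Least-squares fit of y ≈ a + b*x + c*x^2 to a handful of expensive samples.
    A = np.column_stack([np.ones_like(samples), samples, samples ** 2])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs   # [a, b, c]

def surrogate_minimum(coeffs):
    # Minimizer of the fitted quadratic (assumes positive curvature c > 0).
    a, b, c = coeffs
    return -b / (2.0 * c)

# Hypothetical expensive cost: a smooth bowl plus a small noisy-looking ripple.
expensive_cost = lambda x: (x - 0.7) ** 2 + 0.01 * np.sin(40 * x)
x_samples = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # only a few expensive evaluations
y_samples = expensive_cost(x_samples)

coeffs = fit_quadratic_surrogate(x_samples, y_samples)
print(surrogate_minimum(coeffs))   # close to 0.7; the fit also smooths the ripple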

Most importantly, due to the separation of expensive and inexpensive evaluations in the cost calculation, the SNLP optimizer is more tightly integrated with the supporting FEA tools. This tight integration provides more insight into the optimization problem, resulting in a significantly faster optimization process. A second advantage is that the SNLP optimizer does not require cost derivatives to be approximated, protecting against uncertainties (noise) in cost evaluations. In addition to making the RS-based SNLP derivative-free, the RS technique also has noise-suppression properties. Finally, this optimizer allows you to use nonlinear constraints, making this approach much more general than either of the other two optimizers.

• Sequential Mixed Integer Non-Linear Programming

To optimize on the number of turns or quarter turns, the optimizer must handle discrete optimization variables. This optimizer can mix continuous variables with integer variables, can work with integer variables only, and also works if all variables are continuous. The setup resembles that for SNLP, except that you must flag the integer variables. You can set up internal variables based on an integer optimization variable.

For example, consider N to be an integer optimization variable; by definition, it can only assume integer values. You can establish another variable that depends on it, such as K = 2.345 * N or K = sin(30 * N). In this way K takes discrete values but is not necessarily an integer. Alternatively, N can be used directly as a design parameter.
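A small sketch of this dependency in Python is shown below; N and K match the example above, the 30 * N argument is treated as degrees purely for illustration, and everything else is hypothetical.

import math

# N is the integer optimization variable; K is a dependent design parameter.
for N in range(1, 6):                          # integer candidates only
    K_linear = 2.345 * N                       # K = 2.345 * N: discrete, but not an integer
    K_trig = math.sin(math.radians(30 * N))    # K = sin(30 * N), argument taken in degrees here
    print(N, round(K_linear, 3), round(K_trig, 3))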

• Genetic Algorithm

The Genetic Algorithm (GA) search is an iterative process that runs through a number of generations. In each generation, some new individuals (Children / Number of Individuals) are created, and the enlarged population then participates in a selection (natural-selection) process that reduces the population back to the desired size (Next Generation / Number of Individuals).

When a smaller set of individuals must be created from a larger set, the GA selects individuals from the original set, preferring those that are better fit with respect to the cost function. With elitist selection, simply the required number of best individuals are selected. If you turn on roulette selection, the selection process is relaxed: individuals are still selected iteratively to fill the resulting set, but instead of always taking the best, a roulette wheel is used whose divisions are proportional to each candidate's fitness level (relative to the cost function). This means that the fitter an individual is, the larger its probability of survival.
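The sketch below shows fitness-proportional (roulette-wheel) selection in Python. The cost-to-fitness transform, the sampling with replacement, and all names are illustrative assumptions rather than Designer's exact rule.

import random

def roulette_select(population, costs, n_survivors):
    # Pick n_survivors individuals with probability proportional to fitness.
    # Lower cost is mapped to a larger wheel division; the small offset keeps
    # every weight positive. random.choices samples with replacement, which a
    # real GA selection step may or may not allow.
    worst = max(costs)
    fitness = [worst - c + 1e-12 for c in costs]
    return random.choices(population, weights=fitness, k=n_survivors)

# Hypothetical population of design-variable tuples and their cost values.
population = [(0.1, 2.0), (0.4, 1.5), (0.9, 1.1), (1.2, 0.8)]
costs = [5.0, 3.2, 1.4, 0.6]
print(roulette_select(population, costs, n_survivors=2))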

 

Related Topics

Optimization Variables in Design Space

Cost Function

Advanced Genetic Algorithm Optimizer Options



