Study Types

Design Manager offers several types of design study by which you can explore the design space of your product.

While CAD robustness studies and performance assessment studies do not require specific licensing, design optimization studies are only available with Simcenter STAR-CCM+ Intelligent Design Exploration licensing.

CAD Robustness

A CAD Robustness study allows you to check whether the CAD geometries that are created from geometric input parameters are valid. Invalid geometries can cause meshing problems in subsequent studies. Before running a performance assessment or an optimization study, you can run a CAD robustness study to check the geometric validity of designs in your design space.

You supply the input parameter values using one of the following sampling methods:

  • Manual
  • Sweep
  • Latin Hypercube

For details on how to define the input parameter values, see the Manual, Sweep, and Latin Hypercube Sampling descriptions under DOE study.
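The Latin Hypercube method places exactly one sample in each of n equal strata per dimension, which spreads a small number of designs evenly over the parameter space. The following is a minimal pure-Python sketch of the idea; the parameter names and ranges are illustrative assumptions, not Design Manager identifiers (libraries such as `scipy.stats.qmc.LatinHypercube` provide a production-quality implementation):

```python
import random

def latin_hypercube(n, dims, seed=42):
    """Latin Hypercube sampling: one sample per stratum in each dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        # place one point in each of the n equal strata, then shuffle across designs
        column = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(column)
        columns.append(column)
    return [list(point) for point in zip(*columns)]

# 8 designs over 2 hypothetical input parameters, scaled to their ranges
unit = latin_hypercube(n=8, dims=2)
lower, upper = [10.0, 0.0], [20.0, 45.0]
designs = [[lo + u * (hi - lo) for u, lo, hi in zip(row, lower, upper)]
           for row in unit]
```

Because every stratum contains exactly one sample, each one-dimensional projection of the design set is evenly spread, unlike plain random sampling.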

A CAD Robustness study changes the parameter values and updates the CAD geometry; it does not push the modified geometry through the meshing pipeline or run a simulation. If a change to an input parameter causes the CAD geometry to become invalid, Design Manager sets the status of the design to failed.

Usually, you perform a CAD Robustness study before you run a performance assessment or optimization study, in order to check that the CAD geometry updates successfully over the given range of parameters. If the CAD robustness study shows a high failure rate, adjust the parameter ranges accordingly, simplify the geometry, or re-parametrize the geometry.

Performance Assessment

Manual
In a Manual study, you define a set of designs using tabulated data, where each design is a certain combination of input parameter values. You provide the set of designs before starting the analysis.
A Manual study allows you to automate the process of running a collection of specific designs.
Sweep
In Sweep mode, before starting the analysis, you provide values for the input parameters. For each input parameter, you can either provide a constant value or a list of discrete values. You can also define a lower and an upper bound for a parameter in conjunction with a specified increment or resolution. Design Manager then creates a full factorial design sweep that goes through all combinations of parameter values.
A Sweep allows you to replicate test results, create aerodynamic load databases, create compressor maps, or run any application where you know the configuration of each job in advance.
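A full factorial sweep enumerates every combination of the listed parameter values. The following sketch shows the combinatorics with `itertools.product`; the parameter names and values are illustrative assumptions, not Design Manager identifiers:

```python
from itertools import product

# Each entry is either a list of discrete values or a single constant value.
parameters = {
    "inlet_velocity": [5.0, 10.0, 15.0],  # discrete values
    "pipe_diameter": [0.02, 0.04],        # discrete values
    "fluid_density": [1000.0],            # constant value
}

# Full factorial: one design per combination of parameter values.
designs = [dict(zip(parameters, combo))
           for combo in product(*parameters.values())]
print(len(designs))  # 3 * 2 * 1 = 6 designs
```

Note that the design count grows multiplicatively with each added parameter, which is why full factorial sweeps are best suited to small numbers of parameters with few levels each.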
Smart Sweep
A Smart Sweep study is an advanced sweep study where you create design runs along specified operating conditions. At each sweep, the step between two designs is controlled through a stepping parameter, which is one of the input parameters of the smart sweep study. The end of each individual sweep is determined through stopping criteria.

One example application for the smart sweep is in generating the performance map for compressors. The smart sweep intelligently finds the bounds of the performance map based on the criteria supplied.

You supply the initial values for the input parameters at the start of each sweep. You define the values and combinations of input parameters through the Design Study - Design Table. A smart sweep stops when all the defined sweeps have run. For more details, refer to: Setting Up a Smart Sweep Study.

Design Optimization (Simcenter STAR-CCM+ Intelligent Design Exploration)

Optimization
In an Optimization study, you define one or more objectives that the analysis must meet as best it can. Along with the objectives, you define input parameters in the same manner as for a Sweep study. During the analysis, a search algorithm chooses the parameter values for the designs so as to best meet the analysis objectives.
Design Manager provides the following types of optimization study:
  • Weighted sum of all objectives

    This type of Optimization study allows you to optimize based on a single objective or based on multiple objectives. If you define multiple objectives, a linear weighting is used that combines all objectives into a single performance function. Running a weighted sum of all objectives analysis with multiple objectives returns a single best design:



    For this type of Optimization study, typical goals include minimizing pressure drop, maximizing outlet uniformity, minimizing component temperatures, or minimizing stress.

    Weighted sum of all objectives optimization uses the SHERPA search algorithm.

  • Multiple objective tradeoff study (Pareto front)

    In this type of Optimization study, you optimize based on competing objectives.

    Pareto optimization is suited to cases where there are two objectives that are competitive in nature. In such cases, there is no single optimum design. Instead, the optimization returns a curve along which every design is optimum in one objective for a given value of the other objective, a condition known as non-dominance. This curve, known as the Pareto front, expresses the optimum trade-off relationship between two competing objectives:



    For a Pareto optimization study, the archive size is the number of non-dominated designs that you want the algorithm to find during the search in order to resolve the Pareto front at the end of the study.

    During the run of a Pareto optimization study, Design Manager runs several cycles, where each cycle runs a number of designs that is equal to the archive size. After each cycle, Design Manager places all the non-dominated designs into the first rank. Compared to the other designs in the cycle, non-dominated designs are the designs that perform better in at least one objective and no worse in all other objectives.

    In the next cycle, Design Manager re-ranks the non-dominated designs and establishes the final first rank of the Pareto study, which determines the Pareto front.

    Finding the Pareto front for cases where there are more than two competing objectives requires a large number of design evaluations. In such cases, you are advised to investigate if all objectives are truly competing or if possibly some of them are analogous and can be combined. Alternatively, you can try to pose the problem in terms of two objectives and constraints.

    You can optionally choose to define a weighting for the objectives before you run the analysis.

    A multiple objective tradeoff study (Pareto front) uses the MO (Multi Objective)-SHERPA search algorithm.

    For information on the search algorithm, see The SHERPA Algorithm.

  • Sequential Quadratic Programming (SQP)

    SQP is a gradient-based iterative method for constrained nonlinear optimization. It solves a sequence of optimization sub-problems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. See Gradient-Based Optimization.

    The optimization line search is made along the direction (gradient-based sensitivity) from the quadratic model to reduce the objective function. A step size is computed to determine how far to move along that direction. See Optimization Type Reference.

    The following example shows the optimization path when searching for the minimum pressure drop.



Compared to SHERPA, SQP can handle a larger number of parameters. However, it requires smooth gradients and physics that are compatible with the Adjoint solver.
DOE
A DOE (Design Of Experiments) study examines how sensitive a given design is to the input parameters, either locally around the best design or across a larger area of the design space.

In a DOE study, the two settings DOE Type and Parameter Type specify the final input parameter values. The DOE Type determines the sampling method, that is, how design points are selected from the design space.

The available DOE Type options are shown below:

  • 2 Level Full Factorial
  • 3 Level Full Factorial
  • Latin Hypercube Sampling

For more details, see DOE Type Reference and Sampling Methods for DOE Studies.

Adaptive Sampling
An Adaptive Sampling study intelligently characterizes the design space guided by your goals for the study. Adaptive sampling uses the Adaptive Strategy you select to give the best possible sampling for the number of evaluations you specify. The result is an efficiently constructed set of designs that you can use to create surrogates and train other kinds of metamodels. This iterative algorithm is an efficient method for finding design points within the number of Designs to Run.

An Adaptive Sampling study allows you to specify design seeds which are used for the initial sampling. See also Seeding a Study with Predefined Designs and Adaptive Sampling Reference.

If you do not provide design seeds, the adaptive sampling study uses an Optimal Latin Hypercube method to generate initial sample points.

Compared to a DOE study, the adaptive sampling strategy is faster at finding a precise description of the design space of interest. In the case of surrogate generation, the adaptive sampling algorithm specifies the best fit surrogate properties automatically. You can apply the Adaptive Sampling study type to:

  • Create a surrogate with a precise global description of the design space.
  • Create a surrogate with a precise local description of a specific area, for example, where a response is the largest.
  • Find more designs in a specific area, for example, where a sharp change occurs in a response.
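One simple adaptive-sampling idea, shown here only to illustrate the iterative principle (Design Manager's Adaptive Strategy options are internal and more sophisticated), is a greedy space-filling rule: each new design is the candidate farthest from all existing samples. Function names, seed points, and the 2D unit square are assumptions for the sketch:

```python
import random

def maximin_adaptive(samples, n_new, seed=7, n_candidates=200):
    """Greedy space-filling sampling in the 2D unit square: each new point
    maximizes its squared distance to the nearest existing sample."""
    rng = random.Random(seed)
    samples = list(samples)
    for _ in range(n_new):
        candidates = [(rng.random(), rng.random()) for _ in range(n_candidates)]

        def gap(c):
            # squared distance from candidate c to its nearest existing sample
            return min((c[0] - s[0]) ** 2 + (c[1] - s[1]) ** 2 for s in samples)

        samples.append(max(candidates, key=gap))
    return samples

seeds = [(0.5, 0.5)]                      # a single design seed
points = maximin_adaptive(seeds, n_new=9)
```

A goal-driven strategy would instead rank candidates by surrogate uncertainty or by predicted response, concentrating designs where the study's goals need resolution.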
Robustness and Reliability
A Robustness and Reliability study helps you assess the robustness and reliability of a design, that is, how minor variations in the input parameter values affect the outputs.
In such an analysis, you apply a probability distribution to the input parameters. This probability distribution describes the likelihood that an input parameter will take a particular value.
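The idea of propagating input distributions through a response can be sketched with a small Monte Carlo loop. The distribution, the toy response, and the constraint limit below are illustrative assumptions; in a real study, each sample would be a full simulation run:

```python
import random

def monte_carlo_robustness(n=10000, seed=1):
    """Propagate a normally distributed input variation through a toy
    response and estimate the fraction of infeasible designs."""
    rng = random.Random(seed)
    infeasible = 0
    for _ in range(n):
        # nominal length 10 mm with 0.1 mm manufacturing tolerance (1 sigma)
        length = rng.gauss(10.0, 0.1)
        # toy response; a real study runs the simulation instead
        stress = 250.0 * (10.0 / length) ** 2
        if stress > 255.0:  # constraint: stress must stay below 255 MPa
            infeasible += 1
    return infeasible / n

rate = monte_carlo_robustness()
```

Even though the nominal design satisfies the constraint with margin, a noticeable fraction of the perturbed designs violates it, which is exactly the effect a Robustness and Reliability study quantifies.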

In a Robustness and Reliability study, the parameter distribution type and sampling method specify the final input parameter values. For more details, refer to Sampling Methods for Robustness and Reliability Studies.

The motivation for a Robustness and Reliability study stems from the fact that Optimization studies seek to find designs that are optimal without accounting for small perturbations, such as installation or manufacturing tolerances. When constraints limit the design space, these deterministic optima often lie on constraint boundaries (see Study Outputs). However, small changes in input parameter values can affect the values of the outputs in a non-trivial way, resulting in a large number of infeasible designs. Accounting for variations leads to more reliable designs, which effectively pushes the optimal designs away from the constraint boundaries:


When you increase Designs to Run for a Robustness and Reliability study without clearing the study, the additional designs are appended to the previously executed designs. For example, if you increase Designs to Run from 100 to 200, the second run starts with design number 101 and ends with design number 300. To avoid this, you are advised to use the right-click Clear Study action when changing the Designs to Run property.

Usually, the process of design optimization combines Optimization, DOE, and Robustness and Reliability studies:
  1. An Optimization study evaluates designs over a broad range of input parameters and parameter values. This study gives you the best design.
  2. A DOE study evaluates the input parameters in a small range of parameter values about the best design. This approach gives you the parameters that have the greatest impact on the performance of the design.
  3. A Robustness and Reliability study uses the sensitive parameters identified in the DOE study to determine the robustness of the best design.