Abstract
Recent advances in design optimization have significant potential to improve the function of mechanical components and systems. Coupled with additive manufacturing, topology optimization is one category of numerical methods that produces algorithmically generated optimized designs, and it is already making a difference in mechanical hardware being introduced to the market. Unfortunately, many of these algorithms require extensive manual setup and control, particularly of the tuning parameters that govern algorithmic function and convergence. This paper introduces a framework, based on machine learning approaches, that recommends tuning parameters to a user in order to avoid the costly trial and error of manual tuning. The algorithm retrieves tuning parameters from a repository of prior, similar problems, identified using a dissimilarity metric based on problem metadata, and refines them for the current problem using a Bayesian optimization approach. The approach is demonstrated for a simple topology optimization problem, first with the objective of achieving good solution quality and then with the additional objective of finding an optimal "trade" between solution quality and required computational time. The goal is to reduce the total number of "wasted" tuning runs that purely manual tuning would require. With further development, the framework may ultimately be useful on an enterprise level for analysis and optimization problems (topology optimization is one example, but the framework also applies to other optimization problems such as shape and sizing, and to high-fidelity physics-based analysis models) and enable these types of advanced approaches to be used more efficiently.
1 Introduction
Recent advances in design optimization have significant potential to improve the function of mechanical products. Coupled with progress in additive and advanced manufacturing, these advances open the possibility of improving performance in many industries, including applications in aerospace [1,2], thermal management [3,4], and medicine [5]. Topology optimization (TO) [6–9] is one category of numerical methods used to produce algorithmically generated optimized structures. The promise of TO is that the algorithm can create a design led by physics when supplied with basic information that defines the problem. Due to its effectiveness, there has been an expansion into disciplines beyond the traditional core of static structural mechanics, such as crashworthiness [10], active composites [11], fluid flow [12–14], and heat transfer [15–17], as well as adoption in industry.
Poorly tuned numerical parameters can result in inferior-quality or nonsensical results. Take as an example the standard cantilevered beam problem (Fig. 1). Oversmoothing (Fig. 2(a)) occurs if the wP term is set too high, leading to a smeared result of intermediate density and an ill-defined structure. At a lower value of wP, an acceptable balance can be found (Fig. 2(b)). As wP is decreased further, its contribution becomes negligible, leading to an isoline that is stair-stepped and rough (Fig. 2(c)). Even further reduction gives rise to mesh-dependent results and numerical instabilities, often referred to as checker-boarding [20].
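For orientation, the role of wP can be seen in a generic weighted-sum SIMP formulation. The sketch below is consistent with the weights listed in Table 1 but is not necessarily the exact objective of Eq. (4), which is defined in the full formulation:

```latex
\min_{\boldsymbol{\rho}} \; F(\boldsymbol{\rho}) \;=\; w_C \, C(\boldsymbol{\rho}) \;+\; w_P \, \mathrm{Per}(\boldsymbol{\rho}),
\qquad \text{s.t.} \quad \frac{1}{|\Omega|}\int_\Omega \rho \,\mathrm{d}\Omega \le v_f, \quad 0 < \rho_{\min} \le \rho \le 1,
```

where C is the strain energy (compliance) computed with the SIMP stiffness interpolation E(ρ) = ρ^P E0, Per is a perimeter measure of the density field, and vf is the volume fraction constraint. A larger wP smooths the design at the expense of crispness, which is the behavior seen in Fig. 2.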
A list of other potential tuning parameters for the cantilevered beam problem is given in Table 1. Other types of topology optimization problems may have tuning parameters in addition to, or instead of, the ones shown; for example, level set TO [21] may require choices regarding initial seeding [22].
| Tuning parameters |
| --- |
| Weight of perimeter objective, wP |
| Weight of strain energy objective, wC |
| Mesh size, h |
| Constraint scaling |
| Density initial values |
| SIMP exponent penalty, P |
| Objective tolerance |
| Maximum number of forward problem evaluations |
| Minimum allowable stiffness |
| Meshing method/element type |
The purpose of this contribution is to introduce an approach where problem definition and tuning parameter history can be captured and, possibly on an enterprise level, leveraged to mitigate the issue of tuning. The state-of-the-art process to establish appropriate values for tuning parameters is manual and tedious, requiring trial and error and multiple TO runs. Similar efforts may be repeated many times amongst different practitioners. Ideally, the solution would not need extensive data to begin but would reduce the number of required TO runs over time by leveraging built-up experience (Fig. 3). A two-stage framework, based on machine learning (ML) approaches, is presented in Sec. 2 and applied to simple example problems in Sec. 3. The limitations and outlook for the future are discussed in Sec. 4.
2 Approach
2.1 Framework.
The approach is divided into two stages due to the envisioned need to account for data-rich and data-poor scenarios. The first stage provides a recommendation for tuning parameters based on the similarity between the current problem and previous problems stored in a database (data-rich scenario). The second stage refines the tuning parameters for the specific problem and would dominate when insufficient prior data are present, such as when a new TO feature is introduced (data-poor scenario). The balance between the two stages would be expected to shift over time as experience builds. The overall process is depicted in Fig. 4.
2.1.1 Stage 1: Initiation of Tuning Parameters From Existing Designs Via Metalearning.
The search can be accelerated if the optimization is started with near-optimal sets of tuning parameters, θi. If a design problem can be described by a set of metafeatures (m) that define the problem, the value of θi for the initiation is chosen based on proximity as measured by a distance metric on the space of m. At a high level, problems with the same types of metafeatures are the closest to one another (e.g., variations of a structural problem), and problems with dissimilar categories of metafeatures are not close (e.g., structural versus fluid).
This paper focuses on the case where the metafeatures are of consistent type but numerically different, e.g., the orientation of a load or the volume fraction, and uses a simple Euclidean distance (the L2 norm of the difference between metafeature vectors) between two design problems. A functional distance metric between two relatively similar design problems, e.g., one based on the negative Spearman correlation coefficient [23], could be used in more complex situations but is reserved for future work.
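As a concrete illustration, a minimal sketch of the Stage 1 retrieval is shown below. The repository layout, field names, and placeholder values are assumptions made for illustration; only the plain Euclidean distance over the metafeature vector m follows the description above.

```python
import numpy as np

# Illustrative repository: each record pairs a metafeature vector m (here,
# volume fraction and force angle in degrees) with tuning parameters that
# previously produced a good TO result.  The values are placeholders.
repository = [
    {"m": np.array([0.30, 90.0]), "theta": {"log10_wP": -4.0}},
    {"m": np.array([0.50, 30.0]), "theta": {"log10_wP": -4.5}},
    {"m": np.array([0.40, 60.0]), "theta": {"log10_wP": -5.5}},
]

def recommend_tuning(m_new, repository, k=3):
    """Return the tuning parameters of the k repository entries whose
    metafeature vectors are closest to m_new under a plain L2 distance."""
    m_new = np.asarray(m_new, dtype=float)
    dists = [np.linalg.norm(entry["m"] - m_new) for entry in repository]
    nearest = np.argsort(dists)[:k]
    return [repository[i]["theta"] for i in nearest]

# Example: three seed values of log10(wP) for a new metafeature combination.
seeds = recommend_tuning([0.41, 46.0], repository, k=3)
```

These k nearest sets of tuning parameters serve as the initial values θi handed to Stage 2.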
2.1.2 Stage 2: Metamodel-Based Tuning Parameter Search.
For a bi-objective case, an exact computation of the expected hypervolume improvement (EHI) can be performed [26] with time complexity O(n), where n denotes the number of points in the current set of nondominated observations.
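The reference above is to an exact algorithm; purely as an illustration, the sketch below instead estimates EHI for a minimization problem by Monte Carlo sampling from a candidate's Gaussian posterior over the two objectives, rather than by the exact O(n) computation of [26]. All names are illustrative.

```python
import numpy as np

def pareto_front(points):
    """Nondominated subset of a set of 2D objective vectors (both minimized)."""
    pts = np.unique(np.asarray(points, dtype=float), axis=0)
    keep = [p for p in pts
            if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
    return np.array(keep)

def hypervolume_2d(front, ref):
    """Exact hypervolume of a 2D nondominated front w.r.t. reference point ref;
    the points are assumed to lie inside the box dominated by ref."""
    front = front[np.argsort(front[:, 0])]          # sort by the first objective
    edges = np.append(front[:, 0], ref[0])          # strip boundaries along f1
    return float(sum((edges[i + 1] - edges[i]) * (ref[1] - front[i, 1])
                     for i in range(len(front))))

def expected_hv_improvement(mu, sigma, front, ref, n_samples=512, seed=0):
    """Monte Carlo estimate of the EHI of a candidate whose two predicted
    objectives are independent Gaussians N(mu_j, sigma_j^2)."""
    ref = np.asarray(ref, dtype=float)
    rng = np.random.default_rng(seed)
    base = hypervolume_2d(front, ref)
    # Clip samples to the reference box so values worse than ref add no volume.
    samples = np.minimum(rng.normal(mu, sigma, size=(n_samples, 2)), ref)
    gains = [hypervolume_2d(pareto_front(np.vstack([front, s])), ref) - base
             for s in samples]
    return float(np.mean(np.maximum(gains, 0.0)))
```

In the framework, the candidate set of tuning parameters that maximizes this acquisition would be selected as the next TO run to execute.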
2.2 Case Study.
The 2D cantilever beam of Fig. 1 is used as the primary TO example in this section. Two of the possible metafeatures, m, that define the problem are illustrated in Fig. 6: the angle at which a force is applied (Fig. 6(a)) and the volume fraction constraint (Fig. 6(b)).
The metric f used in the metamodeling step represents the quality of the TO result produced by a set of tuning parameters θ. It is intended to translate the qualitative human perception of solution quality into a quantitative value upon which ML-based optimization may be performed. In the present case, it quantifies a solution as being neither too diffuse nor too rough.
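The exact definition of f belongs to the full formulation; purely as an illustrative stand-in (the weights, thresholds, and normalization below are assumptions, not the paper's definition), a metric of this flavor could combine a "gray fraction" term that penalizes diffuse density fields with a discrete perimeter term that penalizes rough, stair-stepped boundaries:

```python
import numpy as np

def quality_metric(rho, w_gray=1.0, w_perim=1.0):
    """Illustrative stand-in for the quality metric f (lower is better).

    rho is a 2D array of element densities in [0, 1] from a converged TO run.
    The first term penalizes diffuse solutions (a large fraction of elements
    with intermediate "gray" densities); the second penalizes rough boundaries
    via a discrete perimeter count of the thresholded design, normalized by
    the shorter domain dimension.  Weights and thresholds are placeholders.
    """
    rho = np.asarray(rho, dtype=float)
    gray_fraction = np.mean((rho > 0.1) & (rho < 0.9))
    solid = rho >= 0.5
    interfaces = (np.sum(solid[:, 1:] != solid[:, :-1])     # horizontally adjacent pairs
                  + np.sum(solid[1:, :] != solid[:-1, :]))  # vertically adjacent pairs
    perimeter = interfaces / max(min(rho.shape), 1)
    return w_gray * gray_fraction + w_perim * perimeter
```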
2.3 Implementation.
This two-stage ML approach was implemented in a modular style. The ML framework was created as a stand-alone module in Python. The commercial finite element software COMSOL was used for TO, controlled via its MATLAB API. A Python translator was created to convert the directives of the ML package into instructions for MATLAB/COMSOL.
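The modular split can be pictured with a minimal interface sketch: the ML module calls an abstract runner and never sees solver details, while a translator class does the conversion. The class and method names below are hypothetical, and the actual MATLAB/COMSOL commands are specific to the authors' setup and are not shown.

```python
from abc import ABC, abstractmethod

class TopologyOptimizationRunner(ABC):
    """Interface seen by the ML module; solver-specific details stay hidden."""

    @abstractmethod
    def run(self, theta: dict) -> dict:
        """Run one TO job with tuning parameters theta and return the observed
        objectives, e.g., {"quality": ..., "elapsed_time": ...}."""

class ComsolMatlabRunner(TopologyOptimizationRunner):
    """Hypothetical translator: would write theta into a MATLAB script that
    drives COMSOL, launch it, and parse the results back into Python."""

    def run(self, theta: dict) -> dict:
        raise NotImplementedError("solver-specific translation layer not shown")
```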
3 Results
3.1 Single Tuning Parameter, Single Objective.
The framework was first applied to the 2D cantilevered beam problem (Sec. 2.2). Varying the metafeatures (force angle and volume fraction) resulted in dramatically different TO results (Fig. 8) and required different values of the tuning parameters to achieve the best TO results. The state of the art is for the appropriate value of wP (Eq. (4)) to be established manually for each case in the figure.
In the envisioned use case, a repository would be populated over time by designers working on a variety of similar problems. The ML algorithm would refer to the repository to choose problems with a small distance from the current problem, and the tuning parameters from those cases would be taken as the initial set θi. Because the algorithm is new, no such pre-existing repository was available; instead, a set of manual line searches over wP was performed at different metafeature settings in order to populate a repository for demonstration.
The line search for a single combination of metafeatures is shown in Fig. 9. The quality metric has a minimum at around log10(wP) = −3.8, indicating the best value to use for that particular combination of metafeatures. Other values of wP produce a higher f, indicating that the TO results are either more diffuse or more stair-stepped than the optimum.
Eight different combinations of the metafeatures were explored (Fig. 10(a)). The best values of wP were then extracted from the individual line searches (thus forming the repository) and plotted in Fig. 10(b). The best value of wP varies as a function of the metafeatures.
A new unique combination of the metafeatures (“New point” in Fig. 11(a)) was then specified with the intention of using the framework to obtain a recommendation of wP. In the envisioned use case, this is analogous to a new mission profile. The metalearning step (M-L) produced wP recommendations based on the distance (Euclidean norm) in the metafeature space, m, between the new point and the three closest points in the repository. These closest prior points are labeled M-L Rec1, 2, and 3 in Fig. 11(a) and listed in Table 2.
| Topology optimization volume fraction | Force angle (deg) | Metalearning log10(wP) recommendation 1 | Metalearning log10(wP) recommendation 2 | Metalearning log10(wP) recommendation 3 | log10(wP) from metamodeling step |
| --- | --- | --- | --- | --- | --- |
| 0.41 | 46 | −4.80 | −5.40 | −4.60 | −5.01 |
In order to understand the quality of these recommendations and to provide an illustration, a manual line search of wP was performed at the new point (Fig. 11(b)) and indicated a minimum in the quality metric at approximately log10(wP) = −5. The recommended wP values are superimposed on the plot as arrows pointing to the wP axis. All three recommendations are located near, though not exactly at, the minimum. This indicates that the recommendations were indeed a good starting point for the metamodeling stage of the framework.
The metamodeling operation then received the three recommended wP values as inputs for the Bayesian optimization, which served to refine the recommendations and seek the optimal wP for this specific metafeature combination. The value of log10(wP) = −5 apparent from the line search was recovered after only two incremental metamodeling cycles. These values are also indicated by arrows in Fig. 11(b) (M-M1 and 2), and the final value is provided in Table 2. The Bayesian search required only five executions of the TO (three from the recommended points to initialize the GP model and two in the incremental Bayesian optimization), as opposed to the 12 required to establish the line search.
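A minimal sketch of this refinement step for the single-parameter, single-objective case is given below, assuming scikit-learn and SciPy are available. Expected improvement is used as a representative acquisition function; the kernel, bounds, and names (e.g., evaluate_f standing in for one full TO run returning the quality metric) are illustrative choices rather than the settings used in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_cand, y_best):
    """Expected improvement acquisition for a minimization problem."""
    mu, std = gp.predict(X_cand, return_std=True)
    std = np.maximum(std, 1e-9)
    z = (y_best - mu) / std
    return (y_best - mu) * norm.cdf(z) + std * norm.pdf(z)

def refine_wP(evaluate_f, seeds_log10_wP, bounds=(-7.0, -2.0), n_iter=2):
    """Seed a GP with the metalearning recommendations, then run a few
    EI-driven Bayesian optimization steps over log10(wP)."""
    X = np.array(seeds_log10_wP, dtype=float).reshape(-1, 1)
    y = np.array([evaluate_f(x[0]) for x in X])        # one TO run per seed
    grid = np.linspace(*bounds, 200).reshape(-1, 1)    # candidate log10(wP) values
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        x_next = grid[np.argmax(expected_improvement(gp, grid, y.min()))]
        X = np.vstack([X, [x_next]])
        y = np.append(y, evaluate_f(x_next[0]))        # one additional TO run
    return X[np.argmin(y), 0], y.min()
```

Called with three seeds and n_iter = 2, this mirrors the five-run budget described above (three seed evaluations to initialize the GP plus two incremental refinement steps).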
3.2 Dual Tuning Parameters, Dual Objectives.
The metamodeling framework was evaluated for the case of two tuning parameters and two objectives. The two tuning parameters used were wP and the finite element mesh size. The two objectives were the quality metric, used previously, and the elapsed time for the TO problem to run. The goal of the ML optimization, therefore, was to establish tuning parameters leading to the optimal trade between solution quality and the time required to obtain it, which enables effective use of an engineering or computational budget. The first step, metalearning, was not performed in this example. Instead, the metafeatures of the cantilevered beam problem were fixed at vf = 0.3 and a force angle of 90 deg. The Bayesian optimization was then seeded randomly with four initial sets of tuning parameters. This was a more difficult setup than if strong recommendations of tuning parameters had been provided from the metalearning step.
The results of the tuning parameter optimization are shown in Fig. 12. Circles indicate the quality and elapsed time of the four initial points. One of the initial points had a very high value of quality metric (∼27, indicating poor quality) and long elapsed time (∼450 s). The inset picture of the TO solution shows that the result was little more than a diffuse density field. Another initial point had better quality (∼12) and lower time (∼250 s), but the inset image also shows a qualitatively poor quality TO result.
The metamodeling algorithm created and advanced a Pareto front (see Fig. 5) using the initial points as a basis. The x markers in Fig. 12 indicate the final set of nondominated solutions after 15 ML iterations. TO solutions requiring long times to achieve high quality are located in the upper left corner. Moving to the right, the elapsed time decreases while the solution quality gets worse (metric increases) allowing a designer to make a time-quality tradeoff.
3.3 Scalability and Adaptability.
The metamodeling framework was extended to a simply supported beam problem (Fig. 13) with a symmetric 3 × 1 aspect ratio, and TO was performed using software developed by the University of Colorado Boulder. Three tuning parameters governing a SIMP continuation scheme were used (Table 3), where the SIMP penalty parameter P is increased by ΔP prior to each continuation step; a sketch of this schedule is given after Table 3. The quality metric in this case was tied to the histogram of the density distribution only (without the perimeter term). The initial sampling was populated with 7 random points, followed by 30 iterations of metamodeling optimization, for a total of 37 explored points. The resulting Pareto front (Fig. 13) represents an optimal trade between solution quality and time and was obtained in the same way as for the simpler 2D problem in Sec. 3.2. The result in the upper left corner had a more binary density distribution and cleaner structural features, indicating higher quality, whereas the design in the lower right corner had a more diffuse density and less-stiff structural features, indicating worse quality.
| Tuning parameter | Description | Minimum | Maximum |
| --- | --- | --- | --- |
| Nit | Optimization iterations per continuation step | 10 | 70 |
| ΔP | Change of SIMP exponent, P, per continuation step | 0.1 | 3.0 |
| Ncs | Number of continuation steps | 3 | 8 |
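As referenced above, the sketch below shows how the Table 3 parameters would drive a SIMP continuation schedule. The inner optimizer run_simp_iterations is a hypothetical stand-in; only the schedule itself follows the description above, and the starting exponent P0 is an assumed value.

```python
def simp_continuation(run_simp_iterations, rho0, N_it=40, dP=0.5, N_cs=5, P0=1.0):
    """Sketch of a SIMP continuation schedule driven by the Table 3 parameters.

    run_simp_iterations(rho, P, n_iter) is a hypothetical stand-in for the
    inner TO solver: it performs n_iter design updates at a fixed SIMP
    exponent P and returns the updated density field.
    """
    rho, P = rho0, P0
    for _ in range(N_cs):                          # Ncs continuation steps
        P += dP                                    # raise the exponent by dP first
        rho = run_simp_iterations(rho, P, N_it)    # Nit iterations at this exponent
    return rho
```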
4 Discussion
The problem of manually tuning TO algorithms prevents true design automation. In this contribution, we showed for a simple example that tuning parameters may need to be changed with the TO problem specification and may require manual rework when a problem is altered. We also introduced one possible algorithm, based on machine learning approaches, that could automate tuning and result in fewer TO runs. The proposed framework does not itself optimize mechanical part designs. Rather, it mitigates the large amount of human intervention required to run algorithms like TO and obtain meaningful results. The overall idea is not limited to TO; it is applicable to a broad class of numerical analysis and optimization methods. We focused on TO due to its recent increase in popularity and the prevalence of tuning parameters that are difficult to adjust without expert experience.
The examples and algorithmic configurations selected for this paper, including the use of the perimeter penalty, were simple by intention in order to clearly introduce and demonstrate the proposed algorithm. In order to be useful in a real-world scenario, the algorithm needs to be scaled up and demonstrated on different and more complicated problems. This includes TO for fluid flow and multiphysics phenomena as well as problems with a greater number of tuning parameters.
As the scope of problems expands, we anticipate a need to further differentiate problem types and their accompanying metafeatures. For instance, a simple Euclidean distance metric is not appropriate for assessing the distance between a cantilevered beam problem and a fluid flow problem. Thus, there is a need to develop more abstract distance metrics. We envision one possibility being to split Step 1 (metalearning) into two substeps. The first, Step 1a, could be a classification-based metric that determines the class of a problem in terms of its physics. Step 1b could be a further evaluation among relatively similar problems in terms of a Euclidean or functional distance metric, similar to the approach demonstrated above.
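A minimal sketch of that two-substep metric is given below, assuming an illustrative record format with a physics-class label and a numeric metafeature vector m; the infinite cross-class distance and the within-class Euclidean metric are illustrative choices, not a finalized design.

```python
import numpy as np

def problem_distance(problem_a, problem_b):
    """Sketch of the proposed two-substep metric.  Step 1a: problems of
    different physics classes (e.g., "structural" vs. "fluid") are treated as
    infinitely far apart.  Step 1b: within a class, fall back to a Euclidean
    distance on the numeric metafeature vectors m.  Field names are illustrative."""
    if problem_a["physics_class"] != problem_b["physics_class"]:
        return np.inf
    return float(np.linalg.norm(np.asarray(problem_a["m"]) - np.asarray(problem_b["m"])))
```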
One eventual challenge will be the introduction of multiphysics into our framework. The presence of multiple physics may introduce interactions not present in single-physics problems, which would complicate the calculation of the distance metric. We reserve this challenge for future work, specifically in defining a high-level, classification-based metric assessment.
In the near term, improvements to the framework will aid functionality and efficiency. One example is preventing points from bunching on the Pareto front during the metamodeling optimization, which will ensure that the space is explored efficiently. In addition, multifidelity metamodeling would support thorough and efficient exploration. A final example is a robust definition of metafeatures for structural mechanics problems, which would enable a wide variety of problems sharing a single physics to be stored and assessed.
One eventual future application of this work is to enable an enterprise-wide approach for capturing and using knowledge associated with automated design and numerical analysis. Besides TO, there are other fields, such as the general areas of computational fluid dynamics and finite element analysis, where tuning parameters that control solver settings are regularly used. The current state of the art in large organizations is for individual designers to tune these parameters manually.
5 Conclusion
Algorithmic design optimization is a promising means to generate effective mechanical components and systems. Topology optimization is one category of such methods but can require extensive manual setup and control, particularly of the tuning parameters that control algorithmic function and convergence. This paper introduced a machine learning framework that recommends tuning parameters to a user in order to avoid the costly trial and error of manual tuning. The framework consists of two steps: a metalearning step, in which a recommendation is drawn from similar prior problems, and a metamodeling step, in which Bayesian optimization is used to efficiently optimize the parameters for the specific TO problem. A quality metric was developed to quantify a human's perception of solution quality. The framework was then demonstrated on relatively simple problems with one to three tuning parameters, using single (quality) and dual (quality and time) objectives. The approach required fewer TO runs than a manual line search. Future work should center on handling more complex TO problems and multiphysics, developing a classification-based metric for very different problem types, and scaling up.
Acknowledgment
This work was funded by the DARPA TRAnsformative DESign (TRADES) program (Contract Grant No. HR0011-17-2-0022; Funder ID: 10.13039/100000185). The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing computing and collaboration resources that have contributed to the research results reported within this paper.