Abstract

While the fundamentals of DFMA (design for manufacturing and assembly) are widely accepted and used in the engineering community, many CAD environments lack tools that address manufacturing concerns by providing rapid feedback about the costs resulting from design choices. This article presents experiment-based testing and validation of a rapid feedback tool that provides users with a history-based prediction of manufacturing time based on the current state of the design. A between-subject experiment is designed to evaluate the impact of the tool on design outcomes in terms of modeling time, part mass, and manufacturing time. Participants in the study included mechanical engineering graduate and undergraduate students with at least one semester of experience using SolidWorks. The experiment included three modeling activities and three tool conditions, and participants completed up to three sessions with different experimental conditions. Analysis of the collected data shows that use of the design tool results in a small but nonsignificant increase in modeling time. Moreover, use of the tool reduces part mass on average (both between subjects and within subjects). Tool use reduced manufacturing time in open-ended activities but increased manufacturing time when activities focused more on mass reduction. Participant feedback suggests that the tool helped guide material removal actions by showing their impact on manufacturing time. Finally, potential improvements and future expansions of the tool are discussed.

1 Computer-Aided Design Tools for DFMA

A systematic engineering design process employs a variety of tools to structure the approach and guide engineers in the development and analysis of solutions [1–5]. Computer-aided design (CAD) software is one such tool, with examples including CATIA, PTC Creo, SolidWorks, and others, used both for prototyping preliminary design concepts and for creating production-quality solid models and engineering drawings. While traditional CAD packages provide a variety of tools for geometrical and mechanical evaluation of designs, late-stage concerns such as manufacturability and environmental impact are often not tightly interconnected [6–8]. Consideration of manufacturing aspects is either done outside of CAD using traditional design for manufacturing and assembly (DFMA) methods [9–12] or ignored altogether during the design process. A recent study explored whether these DFMA methods, specifically focusing on additive manufacturing, resulted in premature fixation on features [13]. It was found that practicing engineers, without CAD tools to supplement the evaluation process, fixated on subfeatures without considering other system requirements, causing manufacturing challenges for the project.

Supplementary CAD tools have been developed by both professional and academic users of CAD software to support design for manufacturing [14]. Some tools are focused on analyzing completed parts for manufacturability, providing scores for accessibility or machinability [15], while others focus on providing a final cost estimate for the final part model [16]. DFMA tools can be classified as knowledge based or history based. Knowledge-based approaches seek to provide design guidelines and offer recommendations to change the product to make it more manufacturable [17–20]. Alternatively, a history-based approach uses past manufacturing or assembly costs to create predictive models based on product complexity [11,21–23], features [9], or assembly movements [24–26].

One example of previous work to support DFMA includes the prediction of late-stage production information, such as assembly time and market value, using early-stage information such as function structure models and assembly models [27–29]. These predictions are generated using artificial neural network models and require a representative sample of products for training. Alternatively, researchers have integrated manufacturing information into CAD/CAM environments to guide designers toward geometrical features that are better suited to the manufacturing capabilities available for production. A feature library is imported into the CAD environment that includes a hierarchical list of features linked to machining information, functionality, design limitations, and design for manufacturing (DFM) guidelines [30]. In the field of architectural design, researchers have proposed entirely new CAD environments that emphasize manufacturing and environmental concerns by providing rapid feedback of relevant scientific data. This allows designers to evaluate the effects of their design decisions on a variety of metrics, including airflow, energy consumption, lighting, and environmental impact through life cycle analysis [31].

Commercially available solutions such as DFMPro offer add-ons that analyze part geometry and features to provide suggestions for manufacturing-focused improvements. These add-ons provide feedback that is easy to understand and implement with little impact on the modeling process. Alternatively, more comprehensive solutions such as aPriori use a broader scope of engineering information (CAD models, BOM, materials, volume, factory capabilities) to provide decision support, including costing information. While these tools take a more holistic approach to design for manufacturing, they are less suited for rapid feedback in conceptual design. DFM Concurrent Costing is a tool, integrated into various software packages, that requires user interaction to answer questions about the parts in question. It is primarily used to estimate a completely modeled part rather than to illustrate how a modeling action changes the cost. EasyKost is a solution that compares designed products to a database of available products and their associated costs. Finally, LeanDesigner automatically identifies regions of the solid model that conflict with predefined design for manufacturing rules (an expert system) and estimates the manufacturing cost based on general geometric properties (dimensions, mass, and volume). While these tools are presented in the literature as design enablers, their effect on the design and modeling processes has not been explored through experimental user studies.

2 Evaluating Design Tools

For design tools to be useful in practice, they must be validated not only for the accuracy of the information they provide but also for their usefulness to designers. Moreover, the tool should be logical, use meaningful and reliable information, and must not bias the designer [32]. Unlike descriptive mathematical models, engineering design tools are generally prescriptive, meaning they aim to predict future outcomes. Additionally, early-stage design tools are often subjective, meaning different users may draw different conclusions from the use of a tool. This may be due to the level of uncertainty present in the input from human designers and the qualitative nature of the design. As such, design tools are experimentally tested to confirm that they produce the expected outcomes and to ascertain whether designers find the tool useful. The comparison could be subjective and qualitative or may be objective and quantitative.

Experimental user studies have been conducted to explore how different design tools influence final outcomes. These range from studying how the size and shape of a morphological matrix affect how designers explore the design space [33], to how the user interface of a CAD system influences the errors made in modeling activities [34], to how different CAD systems result in different modeling approaches for users [35]. Even so, these studies are limited to comparisons between different representations, different interfaces, or different CAD systems. The literature is notably lacking with respect to studying how a design tool changes the way users create models.

Toward filling this gap, this article will discuss the experimental validation of a CAD tool developed by a space and aeronautics defense contractor. The tool aims to provide rapid feedback to designers about manufacturing time. A history-based estimation of manufacturing time generated from a representative set of parts is used. It is assumed that parts are produced using traditional machining or milling techniques. Finally, it should be noted that the accuracy of the tool has been verified by the industry partner. The experiment discussed here focuses on the effect of tool use. For brevity, the tool will be referred to as the DAX tool [36].

3 Experimental Design

The DAX tool was developed to assist designers in creating CAD models by providing on-demand prediction of different types of costs associated with producing the part being designed. In this case, we are specifically interested in the labor hours (as predicted by the tool) needed to manufacture a part. Three conditions of tool availability are studied: live tool, placebo tool, and no tool. The live tool is based on a historically trained surrogate model that considers the connectivity graph (boundary representation model) of the part being modeled. The placebo tool behaves like the DAX tool but lacks the added analytical value provided by the live tool. Specifically, the placebo tool presents a cost estimate to the user as a function of the overall mass of the part modeled. Both the live tool and the placebo tool have identical interfaces, and the source of the cost estimate is hidden from the user. The final “no tool” condition provides no cost-estimation feature to the participants.

3.1 Independent Variables.

To investigate the effects of the DAX tool, a between-subject replication experiment was designed. Two independent variables were modulated: (1) the type of tool being used and (2) the type of modeling activity performed by the participant. With three tool types (no tool, placebo tool, and live tool) and three modeling activities (part creation, lightweight redesign, and interface design), nine total conditions were explored. For each activity, participants were asked to reduce part mass and minimize the expected manufacturing time. Manufacturing time (cost) and mass reduction are generally conflicting objectives for a machined part; this was selected as a representative scenario that could capture tradeoffs experienced by designers. A sample of the instructions provided to participants is found in Table 1; other detailed task instructions are omitted here for brevity. Participants were randomly assigned to one of the nine conditions while ensuring that no participant repeated a level of either variable across sessions. This was done to mitigate the effects of task sequence.
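The assignment constraint above (no repeated tool type and no repeated activity across a participant's sessions) can be sketched as a random Latin-square row. This is an illustrative reconstruction, not the authors' actual randomization procedure; only the condition names are taken from the text.

```python
import random

TOOLS = ["no tool", "placebo tool", "live tool"]
ACTIVITIES = ["part creation", "lightweight redesign", "interface design"]

def assign_sessions(rng):
    """Draw three (activity, tool) conditions for one participant such
    that no activity and no tool type repeats across the sessions."""
    tools = TOOLS[:]
    activities = ACTIVITIES[:]
    rng.shuffle(tools)
    rng.shuffle(activities)
    # Pairing two independently shuffled lists element-wise yields one
    # random Latin-square row: each session gets a distinct tool and a
    # distinct activity, while the pairing itself remains random.
    return list(zip(activities, tools))

sessions = assign_sessions(random.Random(0))
```

Drawing the pairing per participant (rather than fixing one square for everyone) also spreads the nine conditions across the participant pool.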

Table 1

Instructions provided for modeling activities (interface design)

Interface design with tool (or placebo)
Problem summary
In this session, you will be provided with a drawing showing the desired location of multiple system components. You are to create a part in CAD that will connect each of these components. This part should be lightweight, but ease and cost of manufacturing are the most important qualities of the part. The final part must follow the needs and requirements listed below.

Needs
  • The part must interface with each of the openings shown in the existing drawing.

  • The part must be manufacturable out of metal using traditional milling techniques.

  • The part mass should be minimized.

  • The machining cost of the part should be minimized.


Geometric Requirements
Keep in mind that the measurements shown below are in the inch-pound-second unit system. The unit system you use to create the parts may be different. In that case, make sure that you apply these requirements appropriately.
  • The cross-sectional area of the part must be greater than 0.5 in.² in all directions.

    ◦ This is a restriction on the total area of the cross section, not a restriction for each member.

  • The distance between features that remove material and an outside edge must be greater than 0.375 in.

  • The distance between two features that remove material must be greater than 0.25 in.

  • The thickness of the part must be greater than 0.125 in. at all points and in all directions.

  • The machine is not able to cut inside corners with a radius smaller than 0.125 in.


Tool Usage
To help with this problem, you have been provided a tool. This tool will assist in determining the manufacturing time of the part in labor hours. The following information will help you use the tool.

By clicking the “Update” button, the tool will calculate the labor hours for the current part. A graph will be shown that displays the current and all previous costs of the part. Note that the graph will only update after the “Update” button is clicked. There are no limits to how often you can generate this graph. Keep in mind that the underlying computation needed to generate the graph may take anywhere from 15 to 60 s.

3.2 Tool Integration in Computer-Aided Design Software.

The DAX tool processes CAD data to provide estimates of manufacturing time. The solid modeling data are sent, upon request from a SolidWorks macro, to an external server that generates cost estimates. The estimates are displayed in a graph within a browser window, as shown in Fig. 1. The results of every request are displayed so that the designer can see the impact of the changes between estimate requests. The mechanism of cost estimation is a derivative of previous work on graph/complexity-based assembly time estimation [22,23,37,38]. The internal components of this mechanism are proprietary and not detailed here.

Fig. 1
Visualization of tool output

3.3 Participants.

Participants were recruited from engineering programs at Clemson University through an “invitation to participate” survey, which was subsequently used to filter candidates based on their experience with SolidWorks. A total of 104 students responded to the initial call for participation by completing the survey, 98 of whom qualified to participate in the study. The qualified participants were scheduled for experimental sessions based on their reported preferences. Each experimental session included a single part-modeling activity. Of the 98 students contacted, 72 appeared at their scheduled sessions and completed at least one session of the study, though some data files were corrupted, reducing the number of usable data sets. Data from 67 unique participants were fully analyzed. Among these, 20 participants completed only 1 session, 10 completed only 2 sessions, and 37 completed all 3 sessions, for a total of 151 sessions. Finally, participants were incentivized with cash-value gift cards ($25 for completing the first session and $75 for completing the subsequent two sessions). The study was reviewed and approved by the Clemson University institutional review board.

3.4 Execution Procedure.

Due to restrictions stemming from the COVID-19 pandemic, the experiment was conducted virtually using remote access through Citrix Workspace. As a result, most participants likely completed their sessions on small laptop screens and with the added layer of a remote desktop connection. While this posed challenges for hardware consistency across participants, it allowed participants to use a familiar computer and allowed for scheduling flexibility. Unique virtual sessions were convened through MS Teams to record the sessions and to communicate questions and instructions. A batch script was created to launch SolidWorks (for the activity), Camtasia (for recording the workstation screen), and a Python script to log cursor location and button presses. If the experimental condition included one of the two tools (live or placebo), participants directly launched the SolidWorks macro to request tool updates. When participants completed the activity, they informed the experimenter, who then confirmed that the necessary files (a SolidWorks part file, a screen recording of the session, a text log of mouse/keyboard activity, and a *.jsonl file with a log of tool use) were saved in an appropriate location. These files were transferred from the local workstations to a shared drive accessible only to the researchers and anonymized.

3.5 Data Collection and Postprocessing.

The planned experiment procedure included collecting four key types of raw data from each session: a SolidWorks part file, a screen recording of the session, a text log of mouse/keyboard activity, and a *.jsonl file with a log of tool use. The part file, screen recording, and text log were collected from all participants, whereas the *.jsonl file was collected only when participants were assigned either the live tool or the placebo tool. The raw data were then processed into the outcome metrics used for analysis. The procedure for generating each metric (modeling time, part mass, tool use, and labor hours) is discussed in this section.

3.5.1 Modeling Time.

Modeling time is intended to measure the time taken to complete the assigned activity. The simplest measure of modeling time is the length of the video recorded during the experiment session. However, the video length includes time for initial setup before the modeling activity is started. Additionally, due to the remote nature of the experiment, participants may have stepped away from the computer or taken a break from the activity. While participants were advised to complete each session in one sitting without distractions, it was not possible to control the environment, as most participants joined the remote sessions from home. Therefore, the text log of keyboard and mouse activity was used to identify instances where participants were inactive. Any period of no activity lasting longer than 3 min was tagged and removed from the total modeling time. Finally, in sessions where participants were given either the placebo tool or the live tool, they spent time setting up the tool before beginning the modeling activity. The exact amount of time spent in setup was difficult to assess from the text log of keyboard and mouse activity. Therefore, 20 sessions (out of 96) were randomly selected and reviewed to estimate the time taken for tool setup. To emulate individual differences among participants, the average setup time was made fuzzy by applying a randomizing factor (up to ±25%), resulting in a setup time between 3 and 5 min. The magnitude of this factor was informed by the variance in setup time observed in the 20 reviewed sessions. Once all the adjustments were applied, the resulting time was recorded as the modeling time.
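The adjustments above can be sketched as follows. The function names and the event-log representation (a sorted list of timestamps in minutes) are illustrative assumptions rather than the study's actual scripts; the 4-min mean setup time is inferred from the reported 3–5 min range around ±25%.

```python
import random

def modeling_time_minutes(event_times, idle_cutoff=3.0, setup_time=0.0):
    """Active modeling time (minutes) from sorted input-event timestamps:
    the raw session span, minus any inactivity gap longer than
    `idle_cutoff` minutes, minus an estimated tool `setup_time` for
    sessions assigned the live or placebo tool."""
    total = event_times[-1] - event_times[0]
    for earlier, later in zip(event_times, event_times[1:]):
        gap = later - earlier
        if gap > idle_cutoff:
            total -= gap  # participant stepped away; exclude the break
    return total - setup_time

def fuzzed_setup_time(rng, mean_setup=4.0, spread=0.25):
    """Mean observed setup time perturbed by up to +/-25%, yielding a
    per-session value between 3 and 5 minutes."""
    return mean_setup * (1.0 + rng.uniform(-spread, spread))
```

For example, a log with events at minutes 0, 1, 2, 10, and 11 spans 11 min but contains an 8-min break, giving 3 min of active modeling.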

3.5.2 Part Mass.

Next, the part mass is extracted from the solidworks files. A solidworks macro was created to open each part file, check the unit system, and record the part mass, volume, and surface area. In cases where the participants used an incorrect unit system, the instances were tagged. These models were then scaled to reflect the correct units before recording geometric properties. An initial review of the mass, volume, and surface area showed that volume and surface area trend closely with mass. As a result, only the part mass is used for analysis.
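Because mass at fixed density scales with the cube of linear dimensions, rescaling a model recorded in the wrong unit system corrects its mass as sketched below. The actual study used a SolidWorks macro; this standalone fragment illustrates only the scaling arithmetic, and the function name is hypothetical.

```python
# Length scale when a model dimensioned in millimetres should have been
# dimensioned in inches: every linear dimension grows by 25.4.
MM_TO_IN_SCALE = 25.4

def corrected_mass(recorded_mass, length_scale):
    """Mass after rescaling all linear dimensions by `length_scale`;
    volume, and hence mass at constant density, scales with the cube."""
    return recorded_mass * length_scale ** 3
```

Surface area would scale with the square of the same factor, which is consistent with area and volume trending closely with mass.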

3.5.3 Predicted Manufacturing Time.

After recording the part mass values, the part files were converted into STEP files. The live DAX tool was then used to estimate the labor hours needed to manufacture each part. The tool provides four estimates for each part, at confidence levels of 90%, 75%, 50%, and 25%; predictions become more conservative as the confidence level increases, so the 90% predictions are the most conservative and the 25% predictions the most optimistic. For the analysis presented in this study, the 25% and 90% estimates are used to compare the low and high ends of the predicted labor hours.

3.5.4 Tool Use Count.

Finally, the *.jsonl files were processed to identify the number of times a tool was used during the modeling activity. This was done by counting the number of lines in the *.jsonl file associated with the session. In cases where duplicate lines occur in succession, the duplicates were removed from the count. This is because duplicate lines in the *.jsonl files are likely due to technical issues or participant misuse of the tool. A few participants appeared to impatiently “click” the cost estimation tool multiple times before the estimate graphs were updated. It should be noted that if duplicate lines are found with other lines between them, they are not removed. This is because participants may have reverted to a previous state of the model, which is a common phenomenon in CAD and not likely to be user/technical error with the tool.
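The de-duplication rule above can be sketched as follows; the function name and the representation of the log as a list of lines are assumptions for illustration.

```python
def tool_use_count(jsonl_lines):
    """Count tool invocations in a session's *.jsonl log. Lines repeated
    in immediate succession (likely impatient double-clicks or technical
    glitches) are collapsed into one, while identical lines separated by
    other entries are kept, since they may reflect a legitimately
    reverted model state producing the same estimate again."""
    count = 0
    previous = object()  # sentinel that never equals a real log line
    for line in jsonl_lines:
        if line != previous:
            count += 1
        previous = line
    return count
```

For instance, the sequence a, a, b, a, c, c collapses to a, b, a, c and counts as four uses.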

4 Summary of Data Collected

This section summarizes the data collected from the initial survey and provides an overview of the data collected from the experiment sessions. All data collected were reviewed to identify missing or corrupted files resulting from technical issues or user errors. Ultimately, data from 67 unique participants were analyzed. Among these, 20 participants completed only 1 session, 10 completed only 2 sessions, and 37 completed all 3 sessions, for a total of 151 sessions. It should be noted that the analyses presented in this article consider only the successfully completed experiment sessions for which complete, uncorrupted data sets were available.

4.1 Survey Responses.

Student responses to the survey were reviewed to understand the demographics of the students participating in the study. It should be noted that classical demographics (e.g., gender, age, ethnicity) were not collected; instead, characteristics of participants expected to be relevant to the study were recorded. In addition to scheduling, the initial survey asked students about three topics: (1) information about their engineering graphics course or equivalent, (2) their experience using CAD software in academic settings, and (3) their experience using CAD software in professional settings.

Participants were asked if, and when, they completed an engineering graphics course. Responses ranged from Spring 2014 to Summer 2020, with 43 of the 68 participants having completed the course between 2017 and 2019. Participants were also asked to report their grades in the course, which shows that 58 of the participants received an A (90% or above) in the course, 7 received a B (80–90%), 1 received a C (70–80%), and 2 did not report a grade.

The initial survey included questions about participants’ experience with CAD software in academic and professional environments. Table 2 shows a count of participants who had some experience using select CAD software. Values under the “Academic” column represent the number of participants who have at least one semester of high school, technical college, or university experience using the respective software. Similarly, values under the “Professional” column represent the number of students who have used the respective software in a professional setting for more than 3 months.

Table 2

Participant experience with CAD software

Software              Experience
                      Academic    Professional
Autodesk/AutoCAD          44          26
CATIA                     17          13
PTC Creo                  13          13
IronCAD                    1           1
Siemens NX                10          10
Solid Edge                 3           4
SolidWorks                68          51
Other                     14           7

All participants had experience using SolidWorks in an academic setting, with 75% of them also having used it in a professional setting for more than 3 months. The next most popular choice among students was Autodesk and/or AutoCAD, with 65% and 38% of students having used it in academic and professional settings, respectively. It should be noted that the institution where the experiment was conducted offers an engineering graphics course with AutoCAD, which is required for students enrolled in certain programs, and a SolidWorks class that is required of mechanical engineering students. CATIA was used by 25% of participants in an academic setting and 19% in a professional setting. PTC Creo was used by 19% of participants in both academic and professional settings, whereas 15% of participants used Siemens NX.

Each participant also reported how long they had used different CAD software in professional settings. As SolidWorks is the software used by all participants, only data for SolidWorks are shown in Table 3.

Table 3

Participant experience with solidworks

Experience with SolidWorks    Count of participants
<3 months                     17
3–6 months                    11
6–12 months                   11
1–2 years                     13
2–4 years                     10
>4 years                       6

Nearly 60% of participants have 1 year or less of professional experience using SolidWorks. This can be largely attributed to the use of SolidWorks during internships and co-ops. Conversely, participants with more than 1 year of professional experience are more likely to be graduate students and/or students with technical degrees. Finally, it should be noted that survey responses are self-reported, and it is therefore assumed that participants understood the questions and responded honestly. To mitigate linguistic and semantic issues, the survey was pilot tested with domestic and international graduate students to ensure that it was easy to comprehend.

4.2 Data Collected From Experiment Sessions.

Following the postprocessing, the modeling time, the final part mass, the frequency of tool use, and predicted labor hours for manufacturing the part are recorded and analyzed. Table 4 shows the mean (µ) and standard deviation (σ) of the outcomes based on the type of activity. All values for modeling time are provided in minutes. Similarly, all values for labor hours are provided in hours. Tool use values are simple counts, and part mass is provided in grams. It should be noted that the interface design activity used significantly larger parts; hence, a “k” is added to the mass value to indicate those values are in kilograms.

Table 4

Summary of session data by activity type

Outcome metric         Part creation       Lightweighting      Interface design
                       µ        σ          µ        σ          µ         σ
Modeling time (min)    86.78    33.66      67.90    29.01      97.62     46.75
Part mass (g)          203.82   21.14      562.45   86.18      24.74 k   13.56 k
Tool use               4.94     5.71       6.03     5.75       3.91      3.55
Labor hours (25%)      14.25    3.90       29.33    6.06       44.55     6.16
Labor hours (90%)      25.60    8.58       62.25    8.45       184.33    37.49

As shown in Table 4, the mean values for all the metrics are different based on the activity. This is expected because the activities were designed to reflect the different types of tasks that designers encounter in their work. Part mass and labor hours show the largest differences between activities. Meanwhile, the modeling time and tool use metrics are less clearly different.

In addition to the different activities, the study also uses three types of tool assignments: no tool, placebo tool, and live tool. Table 5 shows the mean and standard deviation for all sessions grouped by the type of tool used. Part mass is not considered here because the different activities are expected to have significantly different part masses (a factor of roughly 100), so averaging by tool type would yield values that are not meaningful for analysis. This is also true for labor hours, albeit to a smaller extent (a factor of roughly 10).

Table 5

Summary of session data by tool type

Outcome metric         No tool             Placebo tool        Live tool
                       µ        σ          µ        σ          µ         σ
Modeling time (min)    82.23    46.91      88.21    40.36      84.20     26.81
Tool use               —        —          5.54     6.80       4.23      5.54
Labor hours (25%)      29.36    13.87      30.47    13.89      27.99     13.75
Labor hours (90%)      91.08    73.85      97.61    75.58      85.62     69.95

As shown in Table 5, no clear differences are observed between the tool types when considering modeling time. For tool use, the mean frequency of use is higher for the placebo tool; however, the high variance in data suggests that the difference in mean values may not be significant. Further analysis of the effects of tool type and differences based on the type of activity are discussed in the following sections. Finally, it should be noted that data presented in Tables 4 and 5 include potential outliers, which are also discussed in subsequent sections.

5 Evaluation of Observed Metrics

Prior to the statistical analysis, the data were reviewed for outliers. Outliers were identified using the scaled median absolute deviation approach. While the total number of sessions is high (151), replication for each unique condition was unbalanced, ranging from 12 to 20. Additionally, human behavioral data are typically highly varied and prone to outliers due to the inherent subjectivity and complexity of participants. As such, not all statistical outliers were removed from the data; rather, only the extreme outliers were removed. These are defined as values whose removal changes the mean by more than 10% of its original value. The search for outliers was performed within the subsets sorted for analysis, not over the entire data set. For each subset, at most one outlier was removed. These outliers were often two to three times larger than the second largest value. No outliers were observed on the lower end of the data.
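The two-part criterion above (a scaled-MAD flag plus the 10% mean-shift test) can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the function name, the cutoff of 3 scaled MADs, and the 1.4826 consistency constant for normal data are not specified in the article.

```python
import statistics

def extreme_high_outlier(values, k=3.0, mean_shift=0.10):
    """Return the single extreme high-end outlier of `values`, or None.
    A value qualifies only if it exceeds the median by more than `k`
    scaled median absolute deviations AND removing it changes the mean
    by more than `mean_shift` (10%) of the original mean."""
    if len(values) < 3:
        return None
    med = statistics.median(values)
    # 1.4826 scales the MAD to be consistent with the standard deviation
    # for normally distributed data.
    mad = 1.4826 * statistics.median([abs(v - med) for v in values])
    candidate = max(values)
    if mad > 0 and candidate > med + k * mad:
        rest = list(values)
        rest.remove(candidate)  # drop one instance only
        mu = statistics.mean(values)
        if abs(statistics.mean(rest) - mu) > mean_shift * abs(mu):
            return candidate
    return None
```

Returning at most one value per subset mirrors the "at most one outlier removed" rule in the text.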

In addition to the search for outliers, the dependent variables were tested for mutual statistical independence. Correlation tests show that modeling time was statistically independent of all other dependent variables (maximum correlation magnitude of 0.05). Product mass was moderately correlated with the manufacturing time estimates (R = 0.72 and 0.86). This is expected because the estimation of manufacturing time depends on part mass and geometry. Additionally, the two manufacturing time estimates are moderately correlated with each other (R = 0.79). This is not unexpected, as the manufacturing time estimates are computed using the same inputs but with different confidence intervals.
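This independence screening amounts to computing pairwise Pearson correlation coefficients between the outcome metrics. A minimal, self-contained sketch (the helper name and example data are hypothetical):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unnormalized covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied to columns such as modeling time versus part mass across sessions, values near zero indicate independence, while values near ±1 indicate strong linear association.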

5.1 Effect of Tool Existence.

The session data are sorted into two groups: (1) sessions where participants were given no tool and (2) sessions where participants were given either the live tool or the placebo tool. Mean values are computed for each relevant outcome metric and are presented, along with p-values comparing the "no tool" and "with tool" conditions, in Table 6. The conditions that performed better, on average, are given in bold. Note that PC, LW, and ID refer to part creation, lightweighting, and interface design, respectively.

Table 6

Metrics compared based on tool existence

Metric               Condition   Part creation  Lightweighting  Interface design  p (PC)  p (LW)  p (ID)
Modeling time (min)  No tool     77.78          66.09           89.90             0.11    0.74    0.52
                     With tool   91.42          69.18           94.38
Part mass (g)        No tool     206.23         563.04          25.5 k            0.61    0.99    0.79
                     With tool   202.59         563.23          24,350
Labor hours (25%)    No tool     13.94          29.32           44.84             0.61    0.99    0.81
                     With tool   14.41          29.35           44.40
Labor hours (90%)    No tool     24.17          62.60           188.09            0.25    0.83    0.61
                     With tool   26.35          62.00           182.34

Note: The conditions that performed better, on average, are given in bold.

The existence of either tool increased the modeling time for all activity types, with the part creation activity showing the largest change in average modeling time. The slight increase in modeling time under tool use is likely due to the time participants spent calling the tool, waiting for predictions, and reviewing the graph (30–90 s per request). For part mass, using a tool resulted in lighter parts for the part creation and interface design activities but had negligible impact for the lightweighting activity. Cost estimates at the 25% confidence level showed no noticeable differences in mean values. However, cost estimates at the 90% confidence level showed positive effects for interface design and negative effects for part creation.

In all cases compared in Table 6, the mean differences identified are smaller than the standard deviations; therefore, it is suspected that the differences may not be statistically significant. A set of two-sample t-tests is conducted to evaluate the significance of the mean differences at a significance level of 0.05. All but two of the resulting p-values are larger than 0.50, with the smallest p-value being 0.11. Therefore, there is insufficient evidence to conclude that the existence of a tool has a significant impact on the outcome metrics considered.
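The comparisons above use the standard two-sample t-test. As a sketch, the pooled-variance t statistic can be computed as follows; this assumes the equal-variance form of the test (the article does not state which variant was used), and converting t to a p-value requires the t distribution (e.g., via a statistics library), which is omitted here.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic and its degrees of freedom.

    The p-value is obtained by comparing |t| against a t distribution
    with len(a) + len(b) - 2 degrees of freedom.
    """
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

Identical samples yield t = 0 (no evidence of a mean difference), while a shifted sample yields a nonzero t to be compared against the critical value at the chosen significance level.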

5.2 Comparing Placebo and Live Tools.

Table 7 shows a comparison of mean values for each of the outcome metrics based on the type of tool, sorted by activity. For this comparison, tool use frequency is also included as an outcome metric. Data in bold identify better performance between the tools. Again, extreme outliers were removed. p-Values for each comparison of the live and placebo tools are also presented.

Table 7

Mean metrics for placebo and live tool

Metric               Tool     Part creation  Lightweighting  Interface design  p (PC)  p (LW)  p (ID)
Modeling time (min)  Live     93.58          73.67           83.50             0.15    0.32    0.07
                     Placebo  79.80          63.58           102.97
Part mass (g)        Live     199.97         555.24          22.6 k            0.37    0.60    0.51
                     Placebo  205.37         573.22          25.7 k
Tool use count       Live     4.56           4.74            3.34              0.19    0.55    0.64
                     Placebo  3.50           5.55            3.83
Labor hours (25%)    Live     13.00          30.37           43.61             0.04    0.18    0.50
                     Placebo  14.53          28.07           45.03
Labor hours (90%)    Live     25.96          63.30           179.56            0.40    0.23    0.71
                     Placebo  23.99          60.39           184.54

Note: Data in bold identify better performance between the tools.

The placebo tool yielded lower average modeling times for the part creation and lightweighting activities, while the live tool showed lower average modeling times for the interface design activity. This suggests that the live tool may be more helpful when participants are creating novel parts than when they are modifying existing parts. In the case of part mass, the live tool resulted in a lower part mass in each of the activities. When comparing tool use per session, the live tool is used more often in part creation and less frequently in lightweighting and interface design, but the differences are small. For predicted labor hours (cost estimation), the live tool yields fewer hours for the interface design activity, while the placebo tool performs better for the lightweighting activity. For part creation, the optimistic (25%) predictions favor the live tool, whereas the pessimistic (90%) predictions favor the placebo tool. The differences in mean values fall within one standard deviation of the mean. To test significance, a set of two-sample t-tests is conducted with a null hypothesis of no difference in means and a significance level of 0.05. A statistically significant result is found for the part creation activity for the cost estimate at 25% confidence (p = 0.04; see Fig. 2).

Fig. 2 Distribution of predicted labor hours (25%) for part creation activity

This suggests that using the live tool may help participants think more about modifying parts so that they are lighter yet easier to manufacture. However, the data collected in this study do not provide sufficient evidence to support that claim, as the differences in part mass values are not statistically significant. The modeling time comparison for the interface design activity yielded a p-value of 0.07; given that experiments in human behavior tend to show higher variances, the use of a higher significance level may be justified. All other t-tests resulted in p-values greater than 0.15.

Modeling time is further investigated to understand its relation to tool type. An analysis of variance comparing the lumped modeling times (combining tool types) between activities showed that modeling times for the different activities were statistically different at a significance level of 0.05. Next, the modeling times were investigated within each activity. Figure 3 shows boxplots of modeling time; the lightweighting activity is omitted because modeling time in that activity is similar across both tool types. In the case of part creation, the live tool shows a wider spread, skewed toward the higher end, suggesting that the live tool is more likely to yield longer modeling times. Conversely, the interface design activity shows that the placebo tool is more likely to produce longer modeling times.
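The one-way analysis of variance used here reduces to an F ratio of between-group to within-group mean squares, compared against an F distribution with (k − 1, N − k) degrees of freedom. A minimal sketch (illustrative data and function name only, not the authors' code; the p-value lookup against the F distribution is omitted):

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic for k independent groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Variation of observations around their own group mean
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within
```

Groups with well-separated means (e.g., modeling times that differ systematically between activities) produce a large F, while identical groups produce F = 0.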

Fig. 3 Distribution of modeling time

In the part creation and lightweighting activities, the end goal is more discernible to the participants, such as recreating geometry or achieving a mass reduction. In contrast, the interface design activity asks the students to design the geometry with minimal guidance. In this case, a reliable tool that provides useful feedback on design decisions may help participants reach a satisfactory solution. This is further supported by the part mass and predicted labor hour values, which show that use of the live tool is correlated with lower part mass and lower manufacturing time. Again, insufficient evidence is available to demonstrate statistical significance.

In summary, while the two tools were not found to be statistically different with respect to modeling time and part mass, the live tool was generally more helpful in creating lighter parts and showed quicker modeling times for open-ended activities.

6 Discussion

Based on the data collected and analyzed, the expectations set up prior to the experiment are evaluated. First, the tool does not negatively impact design outcomes. Second, more frequent use of the tool improves design outcomes. It should be noted that “design outcomes” in this case are evaluated with respect to modeling time, part mass, and manufacturing time or predicted labor hours. Tool use count is not viewed as a design outcome as it is more a measure of participant behavior.

6.1 Modeling Time.

The live tool adds a set of actions that participants need to perform beyond normal modeling behaviors. Combined with the processing time of the tool, tool use was expected to increase modeling time. This was seen in an 18% increase for the part creation activity and a 5% increase for the lightweighting and interface design activities. Notably, modeling time was absent from participant feedback in the exit survey: participants provided a variety of positive and negative remarks about the tool, but none of the 45 participants mentioned modeling time as a concern. This suggests that while use of the tool showed an increase in average modeling time, participants did not necessarily perceive it as a noteworthy change.

6.2 Part Mass.

The existence of either tool (live or placebo) did not significantly affect the final part mass on average. The largest change was observed in the interface design activity, where the use of a tool resulted in a 4% lower mass. Using the live tool resulted in lighter parts (3% lower mass in the part creation and lightweighting activities and 12% lower mass in the interface design activity); thus, the live tool produced the lowest part mass on average. While these differences were not found to be statistically significant, the rank order of average part mass indicates that the live tool does not negatively impact the design outcome in this case.

The two mass reduction values for each participant are compared to identify whether a given mass reduction value is higher than, lower than, or similar to its counterpart. Results for the lightweighting and part creation activities are presented in Table 8. More participants achieved better mass reduction when using the live tool, while more participants showed worse mass reduction when using no tool. The placebo tool produces higher mass reduction at a frequency similar to using no tool but shows fewer cases of lower mass reduction. These results suggest that, for any given participant, using the placebo tool is not likely to affect mass reduction negatively, while using the live tool is more likely to improve mass reduction.

Table 8

Change in mass reduction by tool type

Outcome                          No tool  Placebo  Live
Better (higher mass reduction)   7        7        17
Worse (lower mass reduction)     14       11       6
Similar                          4        3        1
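The within-subject comparison behind Table 8 can be sketched as a classification and tally. This is an illustrative sketch only: the 5% relative tolerance used to label a pair of sessions as "similar," the record layout, and the function names are assumptions, as the article does not specify how ties were defined.

```python
from collections import Counter

def classify(value, counterpart, tol=0.05):
    """Label one mass-reduction value against its counterpart session."""
    if counterpart == 0:
        return "similar" if value == 0 else "better" if value > 0 else "worse"
    rel = (value - counterpart) / abs(counterpart)
    if rel > tol:
        return "better"
    if rel < -tol:
        return "worse"
    return "similar"

def tally_by_tool(records, tol=0.05):
    """records: (tool, reduction, counterpart_reduction) tuples per participant.

    Returns per-tool counts of better / worse / similar outcomes,
    mirroring the structure of Table 8."""
    counts = {}
    for tool, reduction, counterpart in records:
        counts.setdefault(tool, Counter())[classify(reduction, counterpart, tol)] += 1
    return counts
```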

Finally, reviewing participant feedback from exit surveys suggests that participants used the labor hour predictions as a check on their material removal actions. Several participants observed that when certain design changes resulted in a large increase in the predicted labor hours, they re-evaluated the changes to achieve a similar reduction in mass without a large increase in manufacturing time. This suggests that the tool can have an impact on the design decisions, provided participants prioritize manufacturing time or try to find a balance between mass reduction and manufacturing time.

6.3 Manufacturing Time.

For the interface design activity, using the live tool resulted in the lowest average predictions of labor hours for both optimistic (25%) and pessimistic (90%) predictions. Conversely, using the live tool resulted in the highest average prediction of labor hours for the lightweighting activity. In the part creation activity, the live tool produced average optimistic predictions lower than the other tool conditions.

For pessimistic predictions of labor hours in the part creation activity, the live tool resulted in an increase compared to the other tool treatments. Overall, this suggests that the impact of tool use on manufacturing time depends on the type of activity. It should also be noted that, for the types of activities used in this experiment, manufacturing time and part mass are inversely related; therefore, if participants prioritize mass reduction, they are likely to produce parts that require a longer manufacturing time.

6.4 Tool Frequency.

For the part creation and lightweighting activities, increasing tool use trends with increasing predicted labor hours. This trend is reversed for the interface design activity, where increased use of the live tool is associated with a reduction in the predicted labor hours. While these are weak trends, they suggest that using the tool more frequently during a design activity may be more beneficial when the designer is creating a novel part with minimal restrictions on geometry. Feedback from participants further supports this: participants reported that they "tried to make changes that did not substantially increase manufacturing time" and that the tool helped them see the "efficiency of your material removal." Moreover, some participants noted that the tool did not "work correctly" across their two sessions, inadvertently identifying the difference between the predicted labor hours from the live and placebo tools.

7 Summary

This article presented an experimental testing and validation of a CAD tool designed to support design for manufacture by providing designers with an estimate of machining time. To test the effects of the tool on CAD modeling and design outcomes, an experiment was developed with between-subject replication. Three different design activities were created, and three different tool treatments were provided. The goal of the experiment was to test the use of the tool in different CAD activities and compare the effects of the DAX tool with a placebo tool and with a control condition where no tool was provided. Data collected from the experiment were processed to obtain four metrics. Three of these metrics (modeling time, final part mass, and predicted labor hours) are used as a measure of the design outcome.

The live tool was found to increase the modeling time on average for all modeling activities, but these differences were not statistically significant. Participants did not perceive the tool to increase modeling time. Analysis of the final part mass shows that using the tool produces lighter parts on average compared to using no tool or using a placebo. While the change in part mass is not found to be significantly different, participants reported that they used the tool to guide their design decisions with respect to material removal. Finally, mixed results are observed when tool use is compared with manufacturing time. In generative design activities, the tool helped participants design parts that are quicker to manufacture; however, in other activities, the tool either did not have a notable impact on the manufacturing time or increased manufacturing time compared to using the placebo or using no tool. Participants found reviewing the labor hours predicted by the tool helped guide them toward parts that they perceived to be easier to manufacture.

More generally, a methodological approach is presented that can identify how a design automation or augmentation tool influences the design process. As new tools are introduced, companies are challenged to understand the impact these tools will have on their current design processes and activities. The project presented here was initiated by industry partners as a means to evaluate how a proposed tool might influence the performance of their designers, in terms of both effectiveness (part mass, part cost) and efficiency (modeling time). Without a clear means of investigation, such as that presented here, the tool's impact remains speculation based on experience and anecdotal evidence, amounting to little more than marketing material.

8 Future Work

While this study shows that the tool did not produce a significant negative impact on the design outcomes, feedback from participants and analysis of the data collected in the experiment provide avenues for potential improvements to the tool. One common theme in participant feedback concerned the integration of the DAX tool into the CAD environment. While the implementation described in this article provided a simple and usable interface for participants, a more holistic integration is preferred. This preference may have been due in part to participants using small laptop screens and being unable to keep the prediction graph, tool user interface (UI), and modeling instructions all in view simultaneously. Future iterations of the experiment should provide the tool as a feature integrated into the CAD environment so that the tool UI does not interfere with the workspace. In addition to UI improvements, future work should include a tutorial or help module for the tool to ensure consistency in how participants interpret its output. Moreover, future experiments should include a within-subject replication component to test whether participants become more familiar and/or comfortable with the tool and how that changes the tool's effectiveness.

Additionally, the scope of the tool can be expanded to cover assembly modeling by providing predictions of assembly time. The tool can also be improved by integrating other manufacturing methods: as previously mentioned, the DAX tool assumes that parts are manufactured using traditional machining techniques, so it could be extended to cover more advanced manufacturing methods as well as additive manufacturing. Negative feedback from participants about the tool often cited its lack of applicability to nontraditional manufacturing processes.

Finally, the modeling behaviors can be studied to determine what behavioral sequences are encouraged or discouraged through use of the tool. This is similar to the function modeling work that analyzed the steps designers used in creating systems models [39]. This type of analysis can support a deeper exploration into designer cognition as influenced by augmentation tools.

Conflict of Interest

There are no conflicts of interest. All procedures performed for studies involving human participants were in accordance with the ethical standards stated in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all participants; documentation is available upon request. This article does not include any research in which animal participants were involved.

Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

References

1. Pahl, G., Beitz, W., Blessing, L., Feldhusen, J., Grote, K.-H., and Wallace, K., 2013, Engineering Design: A Systematic Approach, Springer-Verlag London Limited, London.
2. Dieter, G. E., and Schmidt, L. C., 2013, Engineering Design, McGraw-Hill, New York.
3. Dym, C. L., and Little, P., 2004, Engineering Design: A Project-Based Introduction, John Wiley and Sons, New York.
4. Ulrich, K. T., and Eppinger, S. D., 2016, Product Design and Development, McGraw-Hill, Boston, MA.
5. Ullman, D. G., 2018, The Mechanical Design Process, David Ullman LLC, Independence, OR.
6. Terzi, S., Bouras, A., Dutta, D., Garetti, M., and Kiritsis, D., 2010, "Product Lifecycle Management—From Its History to Its New Role," Int. J. Prod. Lifecycle Manag., 4(4), pp. 360–389.
7. Urban, S. S., and Rangan, R., 2004, "From Engineering Information Management (EIM) to Product Lifecycle Management (PLM)," ASME J. Comput. Inf. Sci. Eng., 4(4), pp. 279–280.
8. Herrmann, J. W., Cooper, J., Gupta, S. K., Hayes, C. C., Ishii, K., Kazmer, D., Sandborn, P. A., and Wood, W. H., 2004, "New Directions in Design for Manufacturing," ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Salt Lake City, UT, Sept. 28–Oct. 2, American Society of Mechanical Engineers Digital Collection, pp. 853–861.
9. Owensby, E., Shanthakumar, A., Rayate, V., Namouz, E. Z., and Summers, J. D., 2011, "Evaluation and Comparison of Two Design for Assembly Methods: Subjectivity of Information," ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, Aug. 28–31, ASME, DETC2011, Vol. 47530.
10. Boothroyd, G., Dewhurst, P., and Knight, W., 2010, Product Design for Manufacture and Assembly, Marcel Dekker, New York.
11. Poli, C., 2001, Design for Manufacturing: A Structured Approach, Butterworth-Heinemann, Boston, MA.
12. Patterson, A. E., and Allison, J. T., 2022, "Mapping and Enforcement of Minimally Restrictive Manufacturability Constraints in Mechanical Design," ASME Open J. Eng., 1, p. 014502.
13. Bracken Brennan, J., Simpson, T. W., Miney, W. B., and Jablokow, K. W., 2021, "The Impact of Manufacturing Fixation in Design: Insights From Interviews With Engineering Professionals," ASME International Mechanical Engineering Congress and Exposition, Virtual Online, Nov. 1–5, American Society of Mechanical Engineers, p. V02BT02A025.
14. Niazi, A., Dai, J. S., Balabani, S., and Seneviratne, L., 2006, "Product Cost Estimation: Technique Classification and Methodology Review," ASME J. Manuf. Sci. Eng., 128(2), pp. 563–575.
15. Chen, N., and Frank, M. C., 2021, "Design for Manufacturing: Geometric Manufacturability Evaluation for Five-Axis Milling," ASME J. Manuf. Sci. Eng., 143(8), p. 081007.
16. Xu, T., Xue, J., Chen, Z., Li, J., and Jiao, X., 2022, "A Systematic Method for Automated Manufacturability Analysis of Machining Parts," Int. J. Adv. Manuf. Technol., 122(1), pp. 391–407.
17. Phelan, K., Wilson, C., and Summers, J. D., 2014, "Development of a Design for Manufacturing Rules Database for Use in Instruction of DFM Practices," Proceedings of the ASME Design Engineering Technical Conference, Buffalo, NY, Aug. 17–20.
18. Rayate, V. C., and Summers, J. D., 2012, "Representations: Reconciling Design for Disassembly Rules With Design for Manufacturing Rules," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago, IL, Aug. 12–15, American Society of Mechanical Engineers, pp. 369–379.
19. Zhao, Z., and Shah, J. J., 2005, "Domain Independent Shell for DfM and Its Application to Sheet Metal Forming and Injection Molding," Comput. Des., 37(9), pp. 881–898.
20. Favi, C., Mandolini, M., Campi, F., Cicconi, P., and Germani, M., 2021, "Design for Additive Manufacturing: A Framework to Collect and Reuse Engineering Knowledge Towards a CAD-Based Tool," ASME International Mechanical Engineering Congress and Exposition, Virtual Online, Nov. 1–5, American Society of Mechanical Engineers, p. V006T06A015.
21. Fagade, A. A., and Kazmer, D. O., 2000, "Early Cost Estimation for Injection Molded Components," J. Inject. Molding Technol., 4(3), pp. 97–106.
22. Namouz, E. Z., and Summers, J. D., 2014, "Comparison of Graph Generation Methods for Structural Complexity Based Assembly Time Estimation," ASME J. Comput. Inf. Sci. Eng., 14(2), p. 021003.
23. Mathieson, J. L., Wallace, B. A., and Summers, J. D., 2013, "Assembly Time Modelling Through Connective Complexity Metrics," Int. J. Comput. Integr. Manuf., 26(10), pp. 955–967.
24. Dewhurst, P., and Boothroyd, G., 1983, Design for Assembly Handbook, Amherst, MA.
25. Renu, R., Peterson, M., Mocko, G., and Summers, J., 2013, "Automated Navigation of Method Time Measurement Tables for Automotive Assembly Line Planning," Proceedings of the ASME Design Engineering Technical Conference, Portland, OR, Aug. 4–7.
26. Maynard, H., Stegemerten, G. J., and Schwab, J. L., 1948, Methods Time Measurement, McGraw-Hill, New York.
27. Namouz, E. Z., and Summers, J. D., 2013, "Complexity Connectivity Metrics-Predicting Assembly Times With Abstract Assembly Models," Smart Product Engineering, M. Abramovici and R. Stark, eds., Springer Berlin Heidelberg, Bochum, Germany, pp. 77–786.
28. Mathieson, J. L., Shanthakumar, A., Sen, C., Arlitt, R., Summers, J. D., and Stone, R., 2011, "Complexity as a Surrogate Mapping Between Function Models and Market Value," Proceedings of the ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, Aug. 28–31, Vol. 9, pp. 55–64.
29. Mohinder, C. V. S., Gill, A., and Summers, J. D., 2016, "Using Graph Complexity Connectivity Method to Predict Information From Design Representations," Design Computing and Cognition '16, J. S. Gero, ed., Springer, Chicago, IL, p. 73.
30. Hoque, A. S. M., Halder, P. K., Parvez, M. S., and Szecsi, T., 2013, "Integrated Manufacturing Features and Design-for-Manufacture Guidelines for Reducing Product Cost Under CAD/CAM Environment," Comput. Ind. Eng., 66(4), pp. 988–1003.
31. Pitman, M. W., and Watts, C. E., 2011, "A Building Simulation Tool for the Rapid Feedback of Scientific Data in Architectural Design," 45th Annual Conference of the Architectural Science Association, ANZAScA 2011, Sydney, Australia, Nov. 16–18.
32. Olewnik, A. T., and Lewis, K., 2005, "On Validating Engineering Design Decision Support Tools," Concurr. Eng., 13(2), pp. 111–122.
33. Smith, G., Richardson, J., Summers, J. D., and Mocko, G. M., 2012, "Concept Exploration Through Morphological Charts: An Experimental Study," ASME J. Mech. Des., 134(5), p. 051004.
34. Summers, J. D., Bayanker, S., and Gramopadhye, A., 2009, "Experimental Comparison of CAD Input Devices in Synthesis, Analysis, and Interrogation Tasks," Comput. Aided. Des. Appl., 6(5), pp. 595–612.
35. Rosso, P., Gopsill, J., Burgess, S., and Hicks, B., 2021, "Investigating and Characterising Variability in CAD Modelling and Its Potential Impact on Editability: An Exploratory Study," Comput. Des. Appl., 18, pp. 1306–1326.
36. Patel, A., Summers, J. D., Patel, A., Mathieson, J., Sbarra, M., and Ortiz, J. B., 2021, "Testing and Validation of a Custom CAD Tool to Support Design for Manufacturing: An Experimental Study," ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, ASME, Virtual, Paper No. IDETC2021-69820.
37. Owensby, J. E., and Summers, J. D., 2014, "Assembly Time Estimation: Assembly Mate Based Structural Complexity Metric Predictive Modeling," ASME J. Comput. Inf. Sci. Eng., 14(1), p. 011004.
38. Miller, M. G., Summers, J. D., Mathieson, J. L., and Mocko, G. M., 2014, "Manufacturing Assembly Time Estimation Using Structural Complexity Metric Trained Artificial Neural Networks," ASME J. Comput. Inf. Sci. Eng., 14(1), p. 011005.
39. Patel, A., Kramer, W. S., Flynn, M., Summers, J. D., and Shuffler, M. L., 2020, "Function Modeling: A Modeling Behavior Analysis of Pause Patterns," ASME J. Mech. Des., 142(11), p. 111402.