Abstract

Function is defined as the ensemble of tasks that enable a product to complete its designed purpose. Functional tools, such as functional modeling, offer decision guidance in the early phase of product design, where explicit design decisions are yet to be made. Function-based design data is often sparse and grounded in individual interpretation. As such, function-based design tools can benefit from automatic function classification to increase data fidelity and provide function representation models that enable function-based intelligent design agents. Function-based design data is commonly stored in manually generated design repositories. These design repositories are a collection of expert knowledge and interpretations of function in product design, bounded by function-flow and component taxonomies. In this work, we represent a structured taxonomy-based design repository as assembly-flow graphs, then leverage a graph neural network (GNN) model to perform automatic function classification. We support automated function classification by learning from repository data to establish the ground truth of component function assignment. Experimental results show that our GNN model achieves a micro-average F1-score of 0.617 for tier 1 (broad), 0.624 for tier 2, and 0.415 for tier 3 (specific) functions. Given the imbalance of data features and the subjectivity in the definition of product function, the results are encouraging. Our efforts in this paper can serve as a starting point for more sophisticated applications in knowledge-based CAD systems and Design-for-X considerations in function-based design.

1 Introduction

Function-based design is a foundational tenet in product design [1]. Function is defined as the application of the product purpose toward solving a design problem [2,3]. Components within the product complete sub-functions necessary to materialize the overarching product function. In product design, functional modeling is used to support and guide designers during early conceptual design phases [4,5]. Here, a designer determines the sub-functions needed to complete the primary product function and purpose [1]. These sub-functions are connected through flows that capture their interactions. In practice, these flows represent material, energy, and signal transfer [6].

Currently, function-based design suffers from subjectivity caused by the designer’s interpretation of function and flow as it applies to a design. Efforts have been made to standardize function and flow into taxonomies to limit subjectivity while increasing shared domain understanding [6]. The standardization of function-based design principles has led to meaningful curation of taxonomy-based design repositories [7–9]. While these design repositories have been widely accepted in the literature, challenges remain in function interpretation, which depends on designer expertise in function-based design. The human interpretation and assignment of function have produced repositories that are often unorganized, sparse, and unbalanced.

Low data quality and the scarcity of design repositories have led to an under-utilization of deep learning methods in the data-driven product design field [10,11], as such methods require large amounts of data [12,13]. Prior work addressed the issue of scarce structured design knowledge datasets by automatically extracting function knowledge from a corpus of mechanical engineering text to construct a design knowledge base [14]. For modalities of data other than text, researchers have relied on synthetic design data [15], small amounts of curated knowledge [16], or scraped public online design repositories with manually labeled design knowledge [17,18]. Yet, it remains challenging to apply data-driven methods in the field of mechanical design [19,20]. However, recent progress in graph representation learning and graph neural networks (GNNs) shows promise for knowledge discovery in sparse datasets [21,22]. Rapid advancements in deep learning for sparse datasets present an opportunity to apply such methods to design repository data to advance the state-of-the-art in data-driven design, specifically in the context of function-based design shared understanding, standardization, and computer representation.

In this paper, we use GNNs and sparse data from a design repository to classify component function based on assembly and flow relationships. We represent data from a hierarchical taxonomy-based design repository through graphs. The focus of these graphs is to capture function-flow-assembly relationships within products housed in the design repository. We then introduce a hierarchical GNN framework that capitalizes on the three-tier hierarchical nature of the repository data. Using the hierarchical GNN, we classify component function in three tiers ranging from broad primary functions to detailed tertiary functions as introduced in previous literature [6]. We exhaustively evaluate our GNN framework using four types of GNN layers and compare its results against other feed-forward networks to determine the fidelity of our proposed GNN architecture. We also compare our hierarchical GNN architecture against independently trained GNNs for each component function tier. The performance of our GNN framework is presented and subsequently explored through confusion matrices and feature importance analysis.

1.1 Specific Contributions.

The research presented here contributes to the area of function-based data-driven product design by leveraging recent developments in graph representation learning to enable a more descriptive shared understanding between humans and computers about the function of parts in an assembly. Our interest in function classification stems from recent work that applies data-driven approaches to various engineering design tasks [10], such as searching a design space [23], model-based systems engineering [24], or selecting appropriate manufacturing methods [17]. Such work points toward intelligent design agents enabled by knowledge-based design systems, which have been explored by the design research community over many years [10,25].

In the context of our work, functional modeling supports the use of automated reasoning systems, as well as facilitating communication and understanding between designers and co-creative agents, both of which could benefit from a better shared understanding of the problem when working on a creative task [15,26,27]. We see function as an important theoretical element for allowing an intelligent design agent to better understand the designer’s intent when co-creating with the designer. Predicting low-level functions of a design is an initial step toward this vision. The work we present here contributes the following:

  1. A novel approach to automatically predict the function of a part in an assembly using graph neural networks.

  2. A publicly available relational assembly graph model to represent design repository data.

  3. Experimental results of part function classification from a graph representation of the assembly.

In this body of work, we use the Oregon State Design Repository (OSDR) as our structured taxonomy-based data source [28,29]. We provide a publicly available subset of the OSDR dataset used in our work, the assembly graphs representing the OSDR data, and the GNN implementation for the research community to leverage in future work.1 Furthermore, the graph representation of the OSDR can be leveraged in data searching tasks and other GNN tasks beyond function classification. GNNs and OSDR graph representations can be used to ascertain design knowledge on material choice, assembly order, failure modes, Design for the Human Element, Design for the Environment, and component-system classification tasks.

2 Background

In this section, we introduce fundamental concepts and research supporting the extraction of functional knowledge using GNNs. Here, we introduce literature in function-based product design in the context of design support and deep learning. Next, design repositories are discussed as a source of semantic product data that can be used in modern deep learning techniques. Finally, literature and background are established for graph representation and GNNs.

2.1 Function-Based Product Design.

Function-based design has been used as a bridge to bring Design-for-X (DfX) objectives, such as Design for the Environment, from post-design analysis to the earlier design phases of product development. To this end, function-based design has been used with life-cycle assessment data to provide function-based sustainable design knowledge to designers [30–32]. In human-centered product design, function has been related to human error and interaction points to determine which functions need special consideration for ergonomics [33,34]. These recent developments in function-based design for meeting DfX objectives suggest a need to predict, learn from, and model function in components as a means to bring further curated data to early design phases.

Previous efforts have been made to use machine learning to improve function-based design methodologies. In other research, association rules and weighted confidence have been used to determine the function of a component within product configurations [35–37]. Decision trees have proved useful in reducing the feasible design space of functional assignment when considering product assembly [38]. Furthermore, deep learning approaches have been used to disambiguate customer reviews based on function, form, and behavior [39].

2.2 Taxonomy-Based Design Repositories and Knowledge Discovery.

Taxonomy-based design repositories store product design data relevant to design engineers [28,40]. This type of design repository is generated through expert taxonomy descriptions of classical product life cycle inventory (LCI) data. For example, given common product data such as a bill of materials, specialized taxonomy data can be appended to LCI data. The OSDR is a taxonomy-based repository that houses product LCI data along with assigned specialized taxonomy descriptions [7,29]. In the OSDR case, specialized taxonomy data includes product assembly child-parent notation, functional-flow basis assignment to components, and a standardized component naming schema.

Adoption of design repositories in research and industry has been slow due to resource commitment, human curation, intuition-based knowledge extraction, and lack of well-structured product data. Efforts have been made to improve design repository generation by limiting subjectivity through taxonomy standardization [6,41–44]. Furthermore, recent approaches have been introduced to streamline data addition to design repositories [45]. Despite the described challenges, design repositories have been shown to be useful in data-driven design approaches, particularly in machine learning and knowledge extraction tasks.

Design repositories are useful in knowledge discovery tasks and have been effectively employed within machine learning approaches [46–49]. Specifically related to function-based design, design repositories were used in the automated extraction of function knowledge from text [14]. In our work, we assert that recent advancements in graph representation learning have allowed for the ability to generate predictive models from sparse, incomplete, subjective, and otherwise unbalanced repository data.

2.3 Graphs in Product Design.

Graphs are robust data structures that represent interactions (i.e., edges) among constituents (i.e., nodes) of a system. They can also capture the direction of interactions, properties of interactions (i.e., edge attributes), and properties of the system constituents (i.e., node attributes). In product design, knowledge graphs [50,51], a specific type of graph that represents structural relations between entities of a domain, are widely used. Classically, they are most often used in natural language processing tasks [52–54]. Current efforts in knowledge graphs have facilitated robust graph representation of domain-specific semantic relationships in product design [55]. TechNet was developed in 2019 by mining semantic relationships of elemental concepts found in US patent data. B-link was introduced in 2017 by mining engineering domain knowledge from engineering-focused academic literature [56]. Knowledge graphs have supported product design by providing language and design relationships. Specifically, engineering design knowledge graphs have been used in concept generation and evaluation [57]. However, there is a need to expand on knowledge graphs with standardized product design resources, such as design repositories.

Recently, a knowledge graph framework has been introduced to create rich node and edge features based upon taxonomy-based product design models [58]. Here, graphs generated with product design taxonomies capture meaningful relationships between product materials, manufacturing method, tolerance, function, and other product features. Specifically, product design knowledge graphs have been useful for case-based reasoning and concept similarity search. In this work, we expand on taxonomy-based graphs with the representation of repository data in a graph structure, with a focus on function, flow, and assembly representations. The generated graphs are then used in prediction tasks using GNNs.

2.4 Graph Neural Networks.

Standard deep learning architectures such as convolutional neural networks (CNN) and recurrent neural networks (RNN) operate on regular-structured inputs such as grids (e.g., images, volumetric data) and sequences (e.g., signals, text). Nevertheless, many real-world applications deal with irregular data structures. For instance, molecular structures, interaction among sub-atomic particles, or robotic configurations cannot be reduced to a sequence or grid representation. Such data can be represented as graphs, which allow for jointly modeling constituents of a system, their properties, and interactions among them. GNNs [59–65] can directly take in data structured as graphs and use the graph connectivity as well as node and edge features to learn a representation vector for every node in the graph. Because GNNs utilize the strong inductive bias of connectivity information, they are more data-efficient compared to other deep architectures. GNNs have been successfully applied to point clouds and meshes [66,67], robot designs [68], physical simulations [69,70], particle physics [71], material design [72], power estimation [73], and molecule classification [65].

Let $G = (V, E)$ denote a graph with nodes $V$, edges $E$, node attributes $X_v$ for $v \in V$, and edge attributes $e_{uv}$ for $(u, v) \in E$. Given a set of graphs $\{G_1, \ldots, G_N\}$ and their node labels $\{y_{v_1}^1, \ldots, y_{v_m}^1, \ldots, y_{v_1}^N, \ldots, y_{v_k}^N\}$, the task of supervised node classification is to learn a representation vector (i.e., embedding) $h_v$ for every node $v \in G$ that helps predict its label. GNNs use a neighborhood aggregation approach, where the representation of node $v$ is iteratively updated by aggregating the representations of neighboring nodes and edges. After $k$ iterations of aggregation, the representation captures the structural information within its $k$-hop network neighborhood [74]. Formally, the $k$th layer of a GNN is defined as

$$h_v^{(k)} = f_\theta\left(h_v^{(k-1)},\; g_\phi\big(\{(h_u^{(k-1)}, e_{uv}) : u \in \mathcal{N}(v)\}\big)\right) \tag{1}$$

where $h_v^{(k)}$ is the representation of node $v$ at the $k$th layer, $e_{uv}$ is the edge feature between nodes $u$ and $v$, and $\mathcal{N}(v)$ denotes the neighbors of $v$. $f_\theta(\cdot)$ denotes a parametric combination function and $g_\phi(\cdot)$ denotes an aggregation function. We initialize $h_v^{(0)} = X_v$.

Different instantiations of the $f_\theta(\cdot)$ and $g_\phi(\cdot)$ functions result in different variants of GNNs. In this paper, we compare the performance of four well-known variants of GNNs: GraphSAGE [61], graph convolution network (GCN) [62], graph attention network (GAT) [63], and graph isomorphism network (GIN) [64]. For an overview of GNNs, see Refs. [21,22].
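As a concrete illustration of Eq. (1), the following is a minimal sketch of one neighborhood-aggregation layer in PyTorch, using sum aggregation and learned linear maps for $g_\phi(\cdot)$ and $f_\theta(\cdot)$; the class and tensor names are illustrative, not from our released implementation:

    import torch
    import torch.nn as nn

    class SimpleGNNLayer(nn.Module):
        """One aggregation step: h_v <- f_theta(h_v, sum_{u in N(v)} g_phi(h_u, e_uv))."""

        def __init__(self, node_dim, edge_dim, out_dim):
            super().__init__()
            self.g_phi = nn.Linear(node_dim + edge_dim, out_dim)   # aggregation map
            self.f_theta = nn.Linear(node_dim + out_dim, out_dim)  # combination map

        def forward(self, h, edge_index, edge_attr):
            # h: [n, node_dim]; edge_index: [2, m] (source, target); edge_attr: [m, edge_dim]
            src, dst = edge_index
            msgs = self.g_phi(torch.cat([h[src], edge_attr], dim=-1))  # per-edge messages
            agg = torch.zeros(h.size(0), msgs.size(-1), device=h.device)
            agg.index_add_(0, dst, msgs)                               # sum over neighbors
            return torch.relu(self.f_theta(torch.cat([h, agg], dim=-1)))

Stacking $k$ such layers gives each node a receptive field over its $k$-hop neighborhood, which is the mechanism all four GNN variants compared below share.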

3 Methods

In this section, we describe the methodology for function classification using a GNN on repository data that is represented by graphs. First, the data selection and processing is presented. Then, we describe graph schema and graph construction. Finally, a GNN and related parameters are introduced to predict the hierarchical functions.

3.1 Data Selection and Processing

3.1.1 Data Selection.

The OSDR is a function-based relational framework built upon consumer product component (artifact) information [9,28,29]. The repository schema includes system-level bills of materials, system type, component function and flow, material, and assembly relationships. The data within the OSDR utilizes published standard taxonomies for component, function, and flow naming. These standard taxonomies are referred to as basis terms [6,43]. The basis taxonomies feature a hierarchy system that allows for broad-to-specific identification of component name, function, and flow per component artifact in the OSDR. In the taxonomy hierarchy systems within the OSDR, tier 1 basis terms encompass the broadest description of the basis term; in ascending order, tiers 2 and 3 increase the specificity, information, and accuracy of the basis definition. An example of the basis term hierarchy from each taxonomy is shown in Table 1. This example is not representative of any product within the OSDR and only demonstrates the hierarchical structure of taxonomy basis terms.

Table 1

Example hierarchy for component (Supporter), function (Branch), and flow (Signal) basis terms

Taxonomy    Primary (Tier 1)   Secondary (Tier 2)   Tertiary (Tier 3)
Component   Supporter          Stabilizer           Insert, Support
                               Positioner           Washer, Handle
                               Securer              Bracket
Function    Branch             Separate             Divide, Extract, Remove
                               Distribute           -
Flow        Signal             Status               Tactile, Taste, Visual
                               Control              Analog, Discrete

The OSDR encapsulates the data of 184 consumer products and 7275 related artifacts. Artifacts are generally components but can also represent sub-assemblies and systems. Artifacts are related through a parent-child familial hierarchy (hypernym and hyponym relations). Functional relationships and product-level functional models are captured through component-level function, input flow, and output flow. In this regard, the OSDR houses 19,627 component-related function data points with 19,667 corresponding flow data points.

3.1.2 Processing.

For our methodology, the data from the OSDR needed to be filtered and processed prior to developing the product graphs. We removed 24 consumer products from the dataset due to incomplete function, flow, or assembly definitions. For the remaining 160 products, data points are represented by a single component defined by material, component basis, parent component, functional basis hierarchy, input flow, output flow, input component, and output component. Each data point is a unique representation of a component defined by flow attributes; concisely, there are many data points per component, depending on the number of functions and related flows managed by that component. The processing and filtering of the data within the OSDR resulted in 15,636 data points represented by 137 component basis terms, 51 function basis terms, 36 flow basis terms, and 16 material categories. An example vegetable peeler product with component, function, and flow data is shown in Table 6 in Appendix A.

3.2 Assembly-Flow Graph Generation.

The processed OSDR data is represented through relational graphs. Relational graphs are a specific type of graph with the following properties: (1) they are directed, meaning edges between nodes have directions; (2) they are attributed, meaning that the graphs contain node and edge attributes; and (3) they are multi-graphs, as more than one edge is allowed between any two nodes. Graph representation of the OSDR is needed to apply our proposed GNN architecture outlined in Sec. 3.3. In the context of the OSDR, the relational graphs are generated per system to represent the assembly and flow relations within the system. These assembly-flow graphs are defined by nodes and connecting edges. Figure 1 shows the graph structure. The graphs are generated using predefined schema definitions from the OSDR. These definitions can be found in related literature and explored through the hosted version of the OSDR [75].2

Fig. 1: Simplified relational assembly graph example of components from Table 6

The nodes are representative of each artifact data point and carry the following features: system name, system type, component basis term, material, and functional basis hierarchy. The nodes are connected through two edge types denoted as flow edges and assembly edges.

A flow edge captures the connection between functions within the system: the movement of material, energy, and signals through the system as modified by a function or set of functions. A function has an input flow and an output flow. Flow edges are directional and defined by the flow basis term specified in the data, regardless of flow basis hierarchy; we do not back out higher-tier flow labels. Flow edges are represented only once per input-output relationship. Assembly edges are non-directional physical connections between artifacts in the classical product assembly sense. Both assembly and flow edges are used to capture the totality of physical-functional interaction between artifacts. By representing physical connections (assembly) and functional connections (flow) as edges, we aim to classify component function using both the physical product assembly defined in late-stage design and the function-flow definitions created in early-stage design. Considering the totality of the product design process increases the breadth of contribution of this work.
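As an illustration, a product can be materialized as such a relational graph with NetworkX roughly as follows; the attribute names ("kind", "in_flow", etc.) are our own shorthand rather than the exact OSDR schema:

    import networkx as nx

    # Directed multigraph: flow edges are directional; non-directional assembly
    # edges are stored as a pair of reciprocal directed edges (see Sec. 4.5).
    G = nx.MultiDiGraph(system="vegetable peeler", system_type="kitchenware")

    G.add_node("blade", component_basis="blade", material="steel")
    G.add_node("handle", component_basis="handle", material="plastic")

    # Flow edge: the handle converts human energy into mechanical energy for the blade.
    G.add_edge("handle", "blade", kind="flow",
               in_flow="human energy", out_flow="mechanical")

    # Assembly edge: physical connection, added in both directions.
    G.add_edge("handle", "blade", kind="assembly")
    G.add_edge("blade", "handle", kind="assembly")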

3.2.1 Dataset Metrics.

We generated 160 assembly graphs representative of the 160 non-filtered products from the OSDR. NetworkX [76], a Python library for graph processing, is used to materialize the relational graphs [77]. Per graph, there is an average of 98 nodes and 791 edges. When singling out the flow edge type, there is an average of 537 flow edges per graph; for assembly edges, the average is 262 per graph.
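These statistics can be recomputed directly from the graph objects (a sketch, reusing the "kind" edge attribute from the construction sketch above; graphs is the list of NetworkX graphs):

    import numpy as np

    n_nodes = [g.number_of_nodes() for g in graphs]
    n_flow = [sum(1 for _, _, d in g.edges(data=True) if d["kind"] == "flow")
              for g in graphs]
    n_assembly = [sum(1 for _, _, d in g.edges(data=True) if d["kind"] == "assembly")
                  for g in graphs]
    print(np.mean(n_nodes), np.mean(n_flow), np.mean(n_assembly))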

3.2.2 Assembly-Flow Graph Processing.

We pre-process the data as follows. For initial node attributes, we concatenate one-hot encoding of component basis, system name, system type, and material features resulting in a 316-dimensional multi-hot initial node feature. For edge attributes, we concatenate one-hot encoding of input flow, output flow, and an indicator of whether the edge represents an assembly connection. This results in a 75-dimensional initial edge feature. The dataset contains 9, 22, and 23 category labels for tiers 1, 2, and 3 functions, respectively. It is also noteworthy that label distribution in all three tiers is highly skewed. The label frequencies are shown in Fig. 3.
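A hedged sketch of this encoding follows; the vocabulary sizes passed in are placeholders that must add up to the 316-dimensional node feature and the 75-dimensional (2N + 1) edge feature described above:

    import numpy as np

    def one_hot(index, size):
        v = np.zeros(size, dtype=np.float32)
        v[index] = 1.0
        return v

    def node_features(comp_idx, name_idx, type_idx, mat_idx, sizes):
        # Concatenated one-hot blocks form one multi-hot node vector (316-dim here).
        return np.concatenate([one_hot(comp_idx, sizes["component"]),
                               one_hot(name_idx, sizes["system_name"]),
                               one_hot(type_idx, sizes["system_type"]),
                               one_hot(mat_idx, sizes["material"])])

    def edge_features(in_flow_idx, out_flow_idx, is_assembly, n_flows=37):
        # In-flow one-hot, out-flow one-hot, assembly indicator: 2N + 1 dims
        # (n_flows=37 is an assumption chosen so that 2N + 1 = 75).
        return np.concatenate([one_hot(in_flow_idx, n_flows),
                               one_hot(out_flow_idx, n_flows),
                               np.array([float(is_assembly)], dtype=np.float32)])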

3.3 Learning Architecture.

Inspired by recent advances in graph representation learning, our approach learns dedicated node representations for each functional tier prediction task. As shown in Fig. 2, our method consists of three GNN encoders that take in graph connectivity information along with initial node and edge features and produce dedicated node embeddings for each tier. Each GNN is then followed by a dedicated multilayer perceptron (MLP) that acts as a specialized classifier for that tier. Furthermore, we utilize the hierarchical nature of the function tiers to augment the predictions, using hierarchically structured local classifiers with one local classifier per tier.

Fig. 2: The proposed hierarchical graph neural network framework

Fig. 3: Distribution of class frequencies in (a) tier 1, (b) tier 2, and (c) tier 3 function categories

Assume a training set $\mathcal{D} = \{G_1, G_2, \ldots, G_N\}$ of $N$ graphs, where each graph is represented as $G = (A, X, E)$ with adjacency matrix $A \in \{0, 1\}^{n \times n}$ (one-hop connectivity information), initial node features $X \in \mathbb{R}^{n \times d_x}$, and initial edge features $E \in \mathbb{R}^{n \times n \times d_e}$. We define three GNNs $g_{\theta_k}(\cdot): \{0,1\}^{n \times n} \times \mathbb{R}^{n \times d_x} \times \mathbb{R}^{n \times n \times d_e} \to \mathbb{R}^{n \times d_h}$, $k \in \{1, 2, 3\}$, parametrized by $\{\theta_k\}_{k=1}^3$ and corresponding to tier 1 to 3 functions, respectively. This results in three sets of dedicated node embeddings $H^{t_1}, H^{t_2}, H^{t_3} \in \mathbb{R}^{n \times d_h}$. The GNNs are essentially learning to extract strong representations for the downstream classifiers. We use three MLPs $f_{\psi_k}(\cdot): \mathbb{R}^{n \times (d_h + |Y_{k-1}|)} \to \mathbb{R}^{n \times |Y_k|}$, $k \in \{1, 2, 3\}$, parameterized by $\{\psi_k\}_{k=1}^3$, where the $k$th MLP is the dedicated classifier for predicting tier $k$ function classes. The $k$th MLP receives the learned node representations from its dedicated GNN $g_{\theta_k}(\cdot)$ (i.e., $H^{t_k}$) and the predictions of the predecessor MLP in the hierarchy, $f_{\psi_{k-1}}(\cdot)$, to predict the function classes for the $k$th tier. Because the first MLP does not have any predecessor (i.e., it is the first tier in the hierarchy), we simply pass a vector of zeros to emulate the input predictions.

During the training phase, we utilize teacher forcing [78] to enhance the training process. Teacher forcing is a procedure in which, during training, the model receives the ground truth output (rather than the predicted output) as input at the next step. In other words, rather than feeding the $k$th MLP with the actual predictions of the $(k-1)$th MLP, we feed it with the ground truth labels of the $(k-1)$th tier. During inference, however, we do not have access to the ground truth labels. Therefore, we feed the subsequent MLP with the probability distribution of the predicted labels, using the Softmax function over the MLP outputs to transform the raw predictions into proper probability distributions. Furthermore, we use frequency-based weighting to address the data imbalance during training: we compute the loss such that less frequent classes contribute more to the total loss than frequent classes. This practice prevents the model from paying more attention to frequent classes and ignoring the rare ones. We jointly optimize the model parameters with respect to the aggregated and weighted cross-entropy losses of all three functional tier predictions using mini-batch stochastic gradient descent. The process of training for one mini-batch is shown in Algorithm 1.

Algorithm 1: Training on one mini-batch

Input: cross-entropy loss $\mathcal{L}$, GNNs $g_{\theta_k}(\cdot)$, MLPs $f_{\psi_k}(\cdot)$, sampled batch of $N$ graphs $\{G_j\}_{j=1}^N$, concatenation operator $\|$

$L \leftarrow 0$
for $G$ in the batch $\{G_j\}_{j=1}^N$ do
  // Compute dedicated node embeddings
  $H^{t_1} \leftarrow g_{\theta_1}(G)$;  $H^{t_2} \leftarrow g_{\theta_2}(G)$;  $H^{t_3} \leftarrow g_{\theta_3}(G)$
  // Compute tier predictions
  $Y_p^{t_1} \leftarrow \mathrm{Softmax}(f_{\psi_1}([H^{t_1} \| \mathbf{0}]))$
  $Y_p^{t_2} \leftarrow \mathrm{Softmax}(f_{\psi_2}([H^{t_2} \| Y_g^{t_1}]))$
  $Y_p^{t_3} \leftarrow \mathrm{Softmax}(f_{\psi_3}([H^{t_3} \| Y_g^{t_2}]))$
  // Compute joint loss across all tiers
  $L \leftarrow L + \mathcal{L}(Y_p^{t_1}, Y_g^{t_1}) + \mathcal{L}(Y_p^{t_2}, Y_g^{t_2}) + \mathcal{L}(Y_p^{t_3}, Y_g^{t_3})$
end
// Compute gradients and update parameters
$\{\theta_k, \psi_k\}_{k=1}^3 \leftarrow \{\theta_k, \psi_k\}_{k=1}^3 - \gamma \nabla_{\{\theta_k, \psi_k\}} \frac{1}{N} L$
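A condensed PyTorch sketch of one such mini-batch step follows, assuming gnns and mlps hold the three per-tier modules and class_weights the inverse-frequency weight vectors (all names illustrative; F.cross_entropy folds the Softmax of Algorithm 1 into the loss):

    import torch
    import torch.nn.functional as F

    def training_step(graph, y1, y2, y3, gnns, mlps, class_weights, optimizer):
        h1, h2, h3 = (g(graph) for g in gnns)  # dedicated node embeddings per tier
        n1, n2 = class_weights[0].numel(), class_weights[1].numel()

        # Tier 1 has no predecessor: pad with zeros in place of prior predictions.
        zeros = torch.zeros(h1.size(0), n1, device=h1.device)
        logits1 = mlps[0](torch.cat([h1, zeros], dim=-1))
        # Teacher forcing: condition tiers 2 and 3 on ground-truth labels of the tier above.
        logits2 = mlps[1](torch.cat([h2, F.one_hot(y1, n1).float()], dim=-1))
        logits3 = mlps[2](torch.cat([h3, F.one_hot(y2, n2).float()], dim=-1))

        # Frequency-weighted cross-entropy, aggregated across all three tiers.
        loss = (F.cross_entropy(logits1, y1, weight=class_weights[0])
                + F.cross_entropy(logits2, y2, weight=class_weights[1])
                + F.cross_entropy(logits3, y3, weight=class_weights[2]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

At inference time, the one-hot ground truth in each concatenation would be replaced by the Softmax output of the preceding tier's classifier.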

4 Results and Method Validation

In this section, we introduce the GNN architecture implementation and results. We explore the results further with confusion matrices to determine function-specific performance. We then validate the results of the GraphSAGE-based GNN against three other state-of-the-art GNN layer types: GCN, GAT, and GIN [62–64]. In closing, we highlight feature importance to determine the most consequential taxonomy-based data features for classifying component function, and we investigate how our proposed hierarchical GNN architecture compares to a group of independently trained GNNs.

4.1 Experimental Protocol.

Given the small size of the dataset, we use a 10-fold cross-validation procedure by dividing the data into 10 folds, holding one fold out as the validation set, and using the remaining nine folds for training. We report the mean and standard deviation of the metrics after running the experiments 10 times. In each run, we train a model on the training folds and report the results on the validation fold. This allows us to investigate the model’s performance without bias toward particular train/test splits. Also, given the imbalanced nature of the labels in all three functional tiers, we use precision (P), recall (R), and F1-score metrics to report the results. These metrics are defined as follows:
$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F1 = \frac{2PR}{P + R} \tag{2}$$
where TP, FP, and FN denote the number of true positive, false positive, and false negative predictions. Moreover, we report the metrics with three types of averaging: micro, macro, and weighted averaging. Micro-averaging computes the F1-score from the total numbers of TP, FP, and FN, whereas macro-averaging computes the F1-score for each label and averages these without considering label frequency. Weighted-averaging computes the F1-score for each label and returns the average weighted by the frequency of each label in the dataset. In practice, micro-averaging is useful in heavily imbalanced datasets, macro-averaging is useful in balanced datasets, and weighted-averaging is useful in datasets where some classes are balanced and some are not. On all accounts, we aim to maximize the precision (P), recall (R), and F1-score for each function tier prediction. However, we are most interested in micro-average performance, as our dataset is unbalanced and sparse.
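For example, all three averages can be computed with scikit-learn (a sketch with toy labels):

    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = [0, 0, 1, 2, 2, 2]   # toy ground-truth labels
    y_pred = [0, 1, 1, 2, 2, 0]   # toy predictions

    for avg in ("micro", "macro", "weighted"):
        p = precision_score(y_true, y_pred, average=avg, zero_division=0)
        r = recall_score(y_true, y_pred, average=avg, zero_division=0)
        f1 = f1_score(y_true, y_pred, average=avg, zero_division=0)
        print(f"{avg}: P={p:.3f} R={r:.3f} F1={f1:.3f}")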

We initialize the network parameters using Xavier initialization [79] and train the model using the Adam optimizer [80] with an initial learning rate of 1e-3. We use a cosine scheduler [81] for the learning rate and early stopping with a patience of 50. We also apply a leaky rectified linear unit (ReLU) non-linearity [82] with a negative slope of 0.2, and dropout [83] with a probability of 0.1 after each GNN layer. We choose the number of GNN layers and the hidden dimension size from the ranges [1, 2, 3] and [64, 128, 256], respectively. Finally, we choose the GNN layer type from GraphSAGE [61], GCN [62], GAT [63], and GIN [64] layers. We implemented the experiments using PyTorch [84] and used PyTorch Geometric [85] to implement the GNNs. The experiments are run on a single RTX 6000 GPU, where on average one epoch of training takes about 1 s and 1.5 GB of GPU memory.
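A minimal sketch of this training configuration; model, train_one_epoch, and evaluate are hypothetical stand-ins for the hierarchical GNN and its training/validation loops, and T_max is illustrative:

    import torch

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

    best_val, patience, bad_epochs = float("inf"), 50, 0
    for epoch in range(1000):                 # upper bound; early stopping ends sooner
        train_one_epoch(model, optimizer)     # e.g., Algorithm 1 over all mini-batches
        val_loss = evaluate(model)
        scheduler.step()
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:        # early stopping with patience of 50
                break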

4.2 Results.

To investigate the performance of the GNNs on the dataset and compare it with other feed-forward networks, we trained an MLP, a logistic regression (linear) model, and four types of GNNs: GraphSAGE [61], GCN [62], GAT [63], and GIN [64]. To train the GNNs, we used the connectivity information along with the initial node and edge features, whereas for the MLP and linear models, we only used the initial features. The results are shown in Table 2. We observe a classification weighted precision of 0.617, 0.466, and 0.401 for tier 1, tier 2, and tier 3 functions, respectively. When strongly accounting for the data imbalance (micro-average), we observe a precision of 0.595, 0.445, and 0.363 for tiers 1, 2, and 3, respectively; ignoring the data imbalance (macro-average), the precision is 0.544, 0.349, and 0.235. Moreover, the results suggest that: (1) GNNs outperform the MLP and linear models in all tiers across all metrics. For example, the best-performing GNN in tier 1 function prediction outperforms the MLP model by an absolute F1-score of 0.119, a relative improvement of 25%. This implies that connectivity information plays an important role in the predictions. (2) Among GNNs, the GNN with GraphSAGE layers performs slightly better in tier 1 function predictions, whereas for tier 2 and 3 predictions, GNNs with GIN layers perform better. This shows the importance of treating the GNN layer type as a hyper-parameter that can yield better performance.

Table 2

Mean and standard deviation of precision, recall, and F1-score on validation folds after 10-fold cross-validation

Tier 1
Method   Micro (P / R / F1)                     Macro (P / R / F1)                     Weighted (P / R / F1)
Linear   0.451±0.05 / 0.451±0.05 / 0.451±0.05   0.445±0.05 / 0.345±0.03 / 0.352±0.04   0.468±0.05 / 0.451±0.05 / 0.429±0.05
MLP      0.476±0.05 / 0.476±0.05 / 0.476±0.05   0.443±0.05 / 0.360±0.04 / 0.369±0.04   0.488±0.05 / 0.476±0.05 / 0.461±0.05
SAGE     0.595±0.02 / 0.595±0.02 / 0.595±0.02   0.544±0.04 / 0.444±0.03 / 0.465±0.03   0.617±0.02 / 0.595±0.02 / 0.589±0.03
GCN      0.580±0.04 / 0.580±0.04 / 0.580±0.04   0.522±0.04 / 0.443±0.04 / 0.454±0.04   0.613±0.05 / 0.580±0.04 / 0.576±0.05
GAT      0.593±0.04 / 0.593±0.04 / 0.593±0.04   0.518±0.04 / 0.448±0.04 / 0.464±0.05   0.604±0.03 / 0.593±0.04 / 0.587±0.04
GIN      0.594±0.04 / 0.594±0.04 / 0.594±0.04   0.486±0.05 / 0.438±0.05 / 0.450±0.05   0.603±0.04 / 0.594±0.04 / 0.591±0.05

Tier 2
Linear   0.294±0.04 / 0.294±0.04 / 0.294±0.04   0.293±0.03 / 0.181±0.02 / 0.174±0.02   0.396±0.05 / 0.294±0.04 / 0.285±0.05
MLP      0.328±0.04 / 0.328±0.04 / 0.328±0.04   0.285±0.03 / 0.196±0.02 / 0.195±0.02   0.390±0.04 / 0.328±0.04 / 0.320±0.05
SAGE     0.431±0.05 / 0.431±0.05 / 0.431±0.05   0.346±0.04 / 0.283±0.03 / 0.280±0.03   0.466±0.05 / 0.431±0.05 / 0.423±0.04
GCN      0.427±0.05 / 0.427±0.05 / 0.427±0.05   0.349±0.04 / 0.282±0.04 / 0.285±0.04   0.458±0.04 / 0.427±0.05 / 0.421±0.05
GAT      0.440±0.04 / 0.440±0.04 / 0.440±0.04   0.336±0.03 / 0.289±0.03 / 0.285±0.03   0.459±0.04 / 0.440±0.04 / 0.431±0.05
GIN      0.445±0.05 / 0.445±0.05 / 0.445±0.05   0.322±0.04 / 0.287±0.03 / 0.286±0.04   0.456±0.04 / 0.445±0.05 / 0.440±0.05

Tier 3
Linear   0.300±0.19 / 0.300±0.19 / 0.300±0.19   0.204±0.12 / 0.218±0.17 / 0.190±0.13   0.327±0.18 / 0.300±0.19 / 0.281±0.17
MLP      0.287±0.16 / 0.287±0.16 / 0.287±0.16   0.188±0.11 / 0.218±0.16 / 0.179±0.11   0.279±0.19 / 0.287±0.16 / 0.254±0.16
SAGE     0.329±0.13 / 0.329±0.13 / 0.329±0.13   0.191±0.11 / 0.192±0.12 / 0.168±0.09   0.367±0.19 / 0.329±0.13 / 0.313±0.14
GCN      0.325±0.18 / 0.325±0.18 / 0.325±0.18   0.235±0.16 / 0.231±0.19 / 0.204±0.15   0.397±0.25 / 0.325±0.18 / 0.323±0.19
GAT      0.283±0.14 / 0.283±0.14 / 0.283±0.14   0.164±0.09 / 0.175±0.11 / 0.155±0.09   0.294±0.19 / 0.283±0.14 / 0.265±0.14
GIN      0.363±0.18 / 0.363±0.18 / 0.363±0.18   0.227±0.13 / 0.237±0.17 / 0.214±0.14   0.401±0.21 / 0.363±0.18 / 0.351±0.18

Note: Bolded values are the highest result for each tier.

4.2.1 Function-Specific Performance.

We also investigate the performance of models on individual labels using confusion matrices. Figure 4 shows the confusion matrix for each function tier. These matrices show the accuracy of a function being correctly classified and, when incorrectly classified, which functions are selected instead of the true function. The color axis determines the occurrence ratio of the function classification. Ideally, high classification occurrence should be observed in matching indices (i.e., denser diagonal), indicating correct classification. As an example, we can observe that the model sometimes confuses the “divide” class with the “remove” class in tier 3 function predictions.

Fig. 4: Confusion matrices of (a) tier 1, (b) tier 2, and (c) tier 3 function predictions. The rows represent the ground truth, whereas the columns represent the predictions.

4.3 Feature Importance.

To investigate the contribution of the node and edge features to algorithm performance, we systematically drop features and observe the changes in F1-scores. Specifically, in this analysis, we drop individual node features, eliminate edge types, and lastly remove all features from nodes and edges. By eliminating edge types and features, we look to discover whether assembly edges or flow edges are more important to prediction accuracy; in the edge importance analysis, we also consider retaining all edges without any features. Finally, we eliminate all node and edge features to determine whether graph topology alone impacts function predictions. Table 3 shows the feature importance analysis per function tier. The results suggest that the component basis has the highest impact on performance among node features, whereas flow is the most influential edge feature. We also observe that initial node and edge features contribute more to the performance than topological information.
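Conceptually, the ablation re-trains the model once per feature configuration (a sketch; build_dataset and run_cv are hypothetical helpers wrapping the pipeline of Secs. 3 and 4.1):

    node_sets = ["component_basis", "system_name", "system_type", "material", "none"]
    edge_sets = ["flow", "assembly", "none"]

    results = {}
    for node_feat in node_sets:     # vary node features, keep all edges and edge features
        results[(node_feat, "all")] = run_cv(build_dataset(node_feat, "all"))
    for edge_feat in edge_sets:     # vary edge types/features, keep all node features
        results[("all", edge_feat)] = run_cv(build_dataset("all", edge_feat))
    results[("none", "none")] = run_cv(build_dataset("none", "none"))  # topology only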

Table 3

Feature importance for function prediction, when using GraphSAGE

Node feature   Edge feature   Tier 1 F1 (Micro / Macro / Weighted)   Tier 2 F1 (Micro / Macro / Weighted)   Tier 3 F1 (Micro / Macro / Weighted)
Com. Basis     All            0.615±0.03 / 0.478±0.03 / 0.612±0.03   0.468±0.04 / 0.301±0.03 / 0.464±0.04   0.351±0.25 / 0.245±0.26 / 0.357±0.26
Sys. Name      All            0.464±0.05 / 0.287±0.04 / 0.473±0.05   0.332±0.04 / 0.165±0.03 / 0.342±0.04   0.184±0.16 / 0.077±0.05 / 0.191±0.17
Sys. Type      All            0.459±0.05 / 0.338±0.04 / 0.452±0.06   0.329±0.04 / 0.179±0.02 / 0.327±0.05   0.132±0.09 / 0.051±0.03 / 0.144±0.08
Material       All            0.523±0.04 / 0.383±0.04 / 0.507±0.04   0.391±0.04 / 0.221±0.03 / 0.393±0.05   0.157±0.17 / 0.077±0.08 / 0.162±0.18
None           All            0.472±0.06 / 0.331±0.04 / 0.467±0.06   0.342±0.04 / 0.183±0.03 / 0.342±0.05   0.132±0.09 / 0.067±0.07 / 0.122±0.10
All            Flow           0.625±0.04 / 0.471±0.03 / 0.624±0.04   0.469±0.04 / 0.305±0.04 / 0.465±0.04   0.353±0.31 / 0.205±0.28 / 0.351±0.31
All            Assem.         0.447±0.03 / 0.344±0.03 / 0.435±0.04   0.287±0.03 / 0.173±0.02 / 0.279±0.04   0.370±0.16 / 0.196±0.06 / 0.400±0.18
All            None           0.497±0.03 / 0.360±0.03 / 0.487±0.03   0.359±0.03 / 0.219±0.02 / 0.353±0.03   0.347±0.16 / 0.172±0.10 / 0.310±0.17
All            All            0.595±0.02 / 0.465±0.03 / 0.589±0.03   0.445±0.05 / 0.286±0.04 / 0.440±0.05   0.363±0.18 / 0.214±0.14 / 0.351±0.18
None           None           0.239±0.03 / 0.155±0.01 / 0.223±0.03   0.161±0.04 / 0.050±0.01 / 0.181±0.06   0.132±0.12 / 0.045±0.04 / 0.125±0.12

Note: Bolded values are the highest result for each tier.

4.4 Hierarchical Versus Independent Graph Neural Networks.

We investigate the contribution of introducing hierarchy by comparing our hierarchical GNN framework with independently trained GNNs (i.e., with no input from previous predictions). The results in Table 4 suggest that introducing hierarchical training significantly improves the performance on tier 3, where we observe an absolute 0.068 increase in micro F1-score, a relative improvement of 23%. Tier 2 predictions are comparable between the two settings, with the independent GNNs performing marginally better. Because tier 1 GNNs do not have any predecessors in the hierarchy, they produce almost identical results in both cases.

Table 4

Performance of hierarchical and independent GNNs

Tier   Method         F1 (Micro / Macro / Weighted)
1      Hierarchical   0.625±0.04 / 0.471±0.03 / 0.624±0.04
       Independent    0.622±0.03 / 0.465±0.04 / 0.622±0.04
2      Hierarchical   0.469±0.04 / 0.305±0.04 / 0.465±0.04
       Independent    0.478±0.04 / 0.321±0.04 / 0.475±0.04
3      Hierarchical   0.370±0.16 / 0.196±0.06 / 0.400±0.18
       Independent    0.302±0.33 / 0.187±0.30 / 0.305±0.33

4.5 Assumptions and Limitation.

The OSDR is a multi-decade-long project that has been manually curated by many organizations and design engineers. Knowing this, we reiterate that the data from the repository is unbalanced, sparse, and often non-congruent. We observe a cascading label imbalance and label absence moving from higher-tier to lower-tier taxonomy terms: tier 3 classifications in function, flow, and component basis terms are often missing per component. In short, the OSDR shows a user-curated bias toward defining higher-order basis terms in both function and flow, as shown in Figs. 3 and 5. Although the data is incomplete and imbalanced, we maintain that the compilation of the OSDR is representative of knowledge from many design engineers with varying expertise. As such, the OSDR can be thought of as a sampling of function-based domain knowledge ranging from novice to expert design engineers.

Fig. 5: Distribution of flow edges

In using a hierarchical GNN model, we inherit the assumptions that were used to create and facilitate the propagation of such taxonomies. Function, flow, and component taxonomies are directional, but parent-child assembly relationships are not. As such, in our graph representations, flow edges are directional, whereas assembly edges are not. The GNN model takes direction information into account and models non-directional assembly edges as bi-directional. We recognize that the OSDR taxonomy approach is one of many adopted function, flow, and component standardizations. These taxonomies are applied to a wide breadth of consumer products. We choose to adopt the OSDR taxonomies as a starting point, but we realize that this definition of function might not be generalizable to all design problems and domains.

5 Discussion

As shown in Sec. 4.2, the overall advantage of the GNN with GraphSAGE layers is marginal, but it is strongest in tier 1 function prediction. Its tier 2 and 3 function predictions are competitive against the other GNN types, coming third in tier 2 (behind the GIN- and GAT-based GNNs) and second to the GIN-based GNN in tier 3. These results are reasonable given the unbalanced and very sparse product design data. With a repository of just 160 products spanning various industries (automotive, consumer goods, furniture), it is encouraging that the proposed GNN architecture was able to ascertain part-level functional classification with micro-average F1-scores of 0.595, 0.445, and 0.363 for tiers 1, 2, and 3, respectively.

Where the model fails can be identified through the relative performance between function classes. The confusion matrices shown in Fig. 4 could serve as a valuable tool for a practitioner wanting to adopt our method, as they explicitly show the relative performance of all classes of function we can predict. The confusion matrices also suggest cascading false negatives and false positives as our model moves from tier 1 through tier 3 function predictions. We theorize this is caused by the significant scarcity of tier 3 function data, as shown in Fig. 3. Moreover, tiers 2 and 3 suffer from more significant data imbalance than tier 1. In context, the combination of a high number of classification labels and the data imbalance found in tier 2 and 3 functions resulted in meaningful false negatives and false positives during testing.

Why the model fails could be attributed to subjectivity in the function definitions and to data imbalance caused by the OSDR’s overall embedded bias toward defining tier 1 functions and solid flows. In the tier 3 predictions shown in Fig. 4, we observe that the model often confuses function labels that are related. For example, “decrement” is often confused with “extract.” In the same regard, “transmit” is often confused with “collect,” “display,” “link,” and “remove.” The model appears to somewhat ascertain the contextual function correctly but has trouble discerning the details that individualize some tier 3 functions. In this example, the GNN model finds that these unlabeled components are generally “moving” flow; however, it cannot classify whether the component is “extracting,” “decrementing,” or “removing” a material flow, or “transmitting” a signal or energy flow. These findings can be indicative of fuzzy human assignment of functions that are similar and subjective. Confusion in low-frequency function classes can also be attributed to conflicting knowledge caused by sparse edges, especially considering the confusion between material flows and the other flows.

Moreover, the results in Tables 2 and 3 show that macro scores are usually lower than micro scores, indicating that the least populated classes are poorly classified relative to the more populated classes. Based on this, further applications of this work should collect additional data for the least populated classes. Researchers looking to apply these methods would benefit from augmenting the current dataset to address data imbalance and scarcity, guided by Fig. 3, while also modifying the dictionary of functions to suit their task.

Looking at Table 3, we anticipated that function would be product family-specific and would cause model confusion between industry domains. This is evident in that isolating the system type feature led to a lower micro-average F1-score than having no node features at all. Table 3 also shows an adversarial effect between flow edges and assembly edges. When only considering flow edges, the GNN model performed better than with both edge types; conversely, when only considering assembly edges, performance sharply declines. Upon discovering this effect, we theorized that energy and signal flows are not always correlated with the physical assembly or “closeness” of the components that input or output these flows. While there is significant overlap between the two edge types, the slight differences between flow and assembly edges are enough to cause the adversarial effect. As noted in Fig. 5 in Appendix B, a majority of our flow edges are labeled “solid.” This finding is advantageous for future work considering geometric and CAD embeddings: whereas it might be challenging to capture energy and signal flows in CAD data, it is more promising to capture solid flows. As such, solid flows are likely the most analogous bridge between assembly and function.

Upon identifying the effects of GNN selection and feature importance, we ran an experiment to hyper-parameterize our GNN architecture, selecting the best GNN layer type and set of features per hierarchy tier. Table 5 shows the best results: micro-average F1-scores of 0.617, 0.624, and 0.415 for tiers 1, 2, and 3, respectively. The hyper-parameterization of the GNN architecture demonstrates a marked improvement in tier 2 and 3 predictions over the original GraphSAGE-based GNN architecture. However, understanding that the repository dataset is a curation of human knowledge of functional assignment, we also examined the effect of fuzzy human assignment of function. To evaluate this effect, we looked at the top-k predictions of the GNN, as this is more analogous to how the method would be applied in a use case. By selecting the correct result from the top-3 and top-5 predictions, as shown in Table 5, we achieve per-tier micro-average F1-scores of 0.932, 0.948, and 0.825 for top-3, and 0.974, 0.975, and 0.955 for top-5 predictions. These results are a significant improvement over those considering only top-1 predictions. Future work could benefit from implementing a temperature-based probability sampling approach over the top-3 predictions to further improve the quality of the method. In addition, future work should establish a comparison between the method described here and a human baseline, as this would also provide insights into the biases of the current dataset and ambiguities around function definition. Future work could also improve performance by collecting more data, including more complex products, possibly from different industries.
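Top-k correctness can be read directly off the predicted class scores (a sketch; logits is an [n, classes] tensor of raw per-node predictions for one tier):

    import torch

    def topk_hits(logits, y_true, k):
        # A node counts as correct if its true label is among the k highest-scoring classes.
        topk = logits.topk(k, dim=-1).indices              # [n, k]
        return (topk == y_true.unsqueeze(-1)).any(dim=-1)  # [n] boolean hits

    logits = torch.randn(8, 23)             # illustrative: 8 nodes, 23 tier-3 classes
    y_true = torch.randint(0, 23, (8,))
    print(topk_hits(logits, y_true, k=3).float().mean().item())  # top-3 hit rate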

Table 5

Top-K results using the best combination of features and hyper-parameters

Tier   Top-K   F1 (Micro / Macro / Weighted)
1      Top-1   0.617±0.03 / 0.466±0.05 / 0.619±0.03
       Top-3   0.932±0.02 / 0.763±0.04 / 0.930±0.02
       Top-5   0.974±0.01 / 0.877±0.04 / 0.972±0.01
2      Top-1   0.624±0.03 / 0.481±0.05 / 0.626±0.03
       Top-3   0.948±0.01 / 0.790±0.04 / 0.945±0.01
       Top-5   0.975±0.01 / 0.883±0.05 / 0.973±0.01
3      Top-1   0.415±0.04 / 0.325±0.03 / 0.432±0.04
       Top-3   0.825±0.03 / 0.629±0.04 / 0.841±0.02
       Top-5   0.955±0.01 / 0.819±0.03 / 0.957±0.01

6 Conclusion

In this work, we use graph neural networks to classify the function of parts in an assembly given design knowledge about the part, such as the semantic name, the material, the assembly connections, and the energy flowing into and out of the part. We extract data from 160 products in the OSDR and represent it as 160 graphs with a total of 15,636 nodes, each node containing design knowledge about a part in a multi-dimensional feature vector. With this data, we are able to train a GNN to predict the top-1 function of a part with a micro-average F1-score of 0.617 for tier 1 (broad), 0.624 for tier 2, and 0.415 for tier 3 (specific) functions. When considering top-3 functions, the GNN architecture predicts function with a micro-average F1-score of 0.932 for tier 1, 0.948 for tier 2, and 0.825 for tier 3. Our results suggest that the hierarchical structure of products and the relevant design knowledge describing sub-components can be learned effectively with graph neural networks. The quality of these results shows promise for supporting the development of a larger function dataset from a more extensive set of products. Our method could be further developed by learning from the geometric data of the part, a prominent design feature missing from the current work, in place of the semantic name of the part.

There are several research directions to expand on this work. By inferring the function of a design at any point in the design process, an intelligent design agent could better support the designer throughout various tasks, such as the design of complex systems like industrial machinery. The quality required from the predictive system will be dictated by the task being performed. For some use cases, high-quality top-3 predictions could be sufficient to give enough context to the design agent to support the designer. For example, function data could support the designer during the conceptual design stage in assessing the feasibility of a design [24], searching for functionally similar parts [20], or by enabling automated functional modeling [36,37,86]. In the detail design stages, it could aid in verifying the satisfaction of higher-level design requirements [26]. Furthermore, this work could further the development of function-based sustainability methods and other function-related environmental considerations during the early design phases [30,31,87]. A human-centric case study should be conducted to establish a baseline against which the method presented in this work can be evaluated.

In future work, we look to enable knowledge-based CAD systems through automated function inference by bridging a gap in understanding between the designer and an intelligent design agent. We envision design tools extending beyond documentation, simulation, and optimization towards intelligent reasoning tasks that help designers make informed design decisions.


Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. CMMI-1826469. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Conflict of Interest

There are no conflicts of interest.

Appendix A: Data Example From the Oregon State Design Repository

Table 6

Vegetable peeler example product data

System Vegetable peelerID 1Component UnclassifiedChild of –Material –Input Flow – From –Output Flow – To –Function Tier 1/2/3 –
2Blade1SteelSolid - IntaSolid - IntBranch/Separate/-
2Blade1SteelSolid - ExtbSolid - IntChannel/Import/-
2Blade1SteelSolid - IntSolid - ExtChannel/Export/-
2Blade1SteelSolid - IntSolid - ExtChannel/Export/-
2Blade1SteelMechanical - 3Mechanical - ExtChannel/Export/-
2Blade1SteelSolid - IntSolid - IntChannel/Guide/-
2Blade1SteelStatus - IntStatus - ExtSignal/Indicate/-
2Blade1SteelSolid - 1Solid - intSupport/Secure/-
3Handle1PlasticControl - ExtControl - IntChannel/Import/-
3Handle1PlasticHuman - ExtHuman - IntChannel/Import/-
3Handle1PlasticHuman Energy - ExtHuman Energy - IntChannel/Import/-
3Handle1PlasticHuman - IntHuman - ExtChannel/Import/-
3Handle1PlasticHuman Energy - IntMechanical - 2Convert/-/-
3Handle1PlasticSolid - 2Solid - IntSupport/Secure/-
System Vegetable peelerID 1Component UnclassifiedChild of –Material –Input Flow – From –Output Flow – To –Function Tier 1/2/3 –
2Blade1SteelSolid - IntaSolid - IntBranch/Separate/-
2Blade1SteelSolid - ExtbSolid - IntChannel/Import/-
2Blade1SteelSolid - IntSolid - ExtChannel/Export/-
2Blade1SteelSolid - IntSolid - ExtChannel/Export/-
2Blade1SteelMechanical - 3Mechanical - ExtChannel/Export/-
2Blade1SteelSolid - IntSolid - IntChannel/Guide/-
2Blade1SteelStatus - IntStatus - ExtSignal/Indicate/-
2Blade1SteelSolid - 1Solid - intSupport/Secure/-
3Handle1PlasticControl - ExtControl - IntChannel/Import/-
3Handle1PlasticHuman - ExtHuman - IntChannel/Import/-
3Handle1PlasticHuman Energy - ExtHuman Energy - IntChannel/Import/-
3Handle1PlasticHuman - IntHuman - ExtChannel/Import/-
3Handle1PlasticHuman Energy - IntMechanical - 2Convert/-/-
3Handle1PlasticSolid - 2Solid - IntSupport/Secure/-
(a) Int (Internal) denotes a nonspecific flow from inside the system.

(b) Ext (External) denotes a nonspecific flow from outside the system.
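
To make the mapping from repository rows to assembly-flow graphs concrete, the following minimal sketch builds a graph from two rows distilled from Table 6 using NetworkX [76,77]. The tuple layout, attribute names, and lower-cased labels are illustrative simplifications of ours, not the repository schema; in this sketch the nonspecific endpoints simply become shared "int"/"ext" nodes.

    import networkx as nx

    # Illustrative rows distilled from Table 6:
    # (id, component, child_of, material, in_flow, from_, out_flow, to)
    rows = [
        (2, "blade", 1, "steel", "solid", "ext", "solid", "int"),
        (3, "handle", 1, "plastic", "human energy", "ext", "mechanical", 2),
    ]

    G = nx.MultiDiGraph()
    G.add_node(1, component="vegetable peeler (system)")
    for cid, comp, parent, material, f_in, src, f_out, dst in rows:
        G.add_node(cid, component=comp, material=material)
        G.add_edge(parent, cid, kind="assembly")       # parent-child assembly link
        G.add_edge(src, cid, kind="flow", flow=f_in)   # incoming flow edge
        G.add_edge(cid, dst, kind="flow", flow=f_out)  # outgoing flow edge

    # Function labels (tier 1/2/3) become per-node classification targets:
    G.nodes[2]["function"] = ("branch", "separate", None)
    G.nodes[3]["function"] = ("convert", None, None)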

Appendix B: Statistics

Table 7

Statistics of graphs

Metric | Mean | STD | Min | Max | 0.25 Quantile | 0.5 Quantile | 0.75 Quantile | Skewness | Kurtosis
Nodes | 97.72 | 100.01 | 3.00 | 930.00 | 42.50 | 79.50 | 125.50 | 4.62 | 31.51
Edges | 790.71 | 1039.16 | 0.00 | 9634.00 | 180.50 | 461.50 | 981.25 | 4.45 | 31.42
Density | 0.11 | 0.13 | 0.00 | 1.50 | 0.06 | 0.08 | 0.13 | 7.51 | 73.52
Degree | 13.29 | 7.82 | 0.00 | 58.73 | 8.44 | 11.13 | 17.10 | 2.33 | 9.61
  • 30.82% of edges are assembly links and 69.18% are flow links.

  • Three of the 160 graphs are directed acyclic graphs (DAGs). A sketch for recomputing these statistics follows.
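
The statistics in Table 7 can be recomputed along the following lines, assuming the converted graphs are available as NetworkX multi-digraphs; the helper shown is an illustrative sketch of ours, not the exact script used.

    import networkx as nx
    import numpy as np
    from scipy import stats

    def summarize(graphs):
        """Per-graph node/edge counts, density, and mean degree, aggregated
        as in Table 7 (mean, std, min, max, quartiles, skewness, kurtosis)."""
        metrics = {
            "nodes": [g.number_of_nodes() for g in graphs],
            "edges": [g.number_of_edges() for g in graphs],
            "density": [nx.density(g) for g in graphs],
            "degree": [np.mean([d for _, d in g.degree()]) for g in graphs],
        }
        for name, values in metrics.items():
            v = np.asarray(values, dtype=float)
            q1, q2, q3 = np.quantile(v, [0.25, 0.5, 0.75])
            print(f"{name}: mean={v.mean():.2f} std={v.std(ddof=1):.2f} "
                  f"min={v.min():.2f} max={v.max():.2f} "
                  f"q25={q1:.2f} q50={q2:.2f} q75={q3:.2f} "
                  f"skew={stats.skew(v):.2f} kurt={stats.kurtosis(v):.2f}")

    # Count of directed acyclic graphs (3 of 160 in our data):
    # sum(nx.is_directed_acyclic_graph(g) for g in graphs)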

Appendix C: Hyperparameters and Architecture

For GNNs, we represent the edge features as follows. The in-flow and out-flow features are encoded as one-hot vectors, and the assembly link as a single indicator. Thus, if there are N unique flows in the dataset, the initial edge feature is a vector of length 2N + 1. The initial node feature is the concatenation of the one-hot representations of the node attributes. For the linear and MLP baselines, we first project the initial node and edge features to embeddings of the same size using two dedicated linear layers. Each node is then represented by summing its embedding with the sum of the dot products over all its neighboring node-edge embedding pairs, and the resulting embeddings are passed to the MLP or linear model. As mentioned, we choose the number of GNN layers and the hidden dimension size from {1, 2, 3} and {64, 128, 256}, respectively. The best-performing hyperparameters are reported in Table 8.
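
As a concrete illustration of this encoding, the sketch below builds the (2N + 1)-dimensional edge feature. The flow vocabulary shown is a hypothetical stand-in for the repository's flow taxonomy.

    import torch

    # Hypothetical flow vocabulary; the real one is the repository taxonomy.
    flows = ["solid", "human energy", "mechanical", "control", "status"]
    flow_idx = {f: i for i, f in enumerate(flows)}
    N = len(flows)

    def edge_feature(in_flow=None, out_flow=None, is_assembly=False):
        """One-hot in-flow (first N dims), one-hot out-flow (next N dims),
        and a single assembly-link indicator (last dim)."""
        x = torch.zeros(2 * N + 1)
        if in_flow is not None:
            x[flow_idx[in_flow]] = 1.0
        if out_flow is not None:
            x[N + flow_idx[out_flow]] = 1.0
        x[2 * N] = float(is_assembly)
        return x

    edge_feature("solid", "solid")    # flow edge: two one-hots, indicator 0
    edge_feature(is_assembly=True)    # assembly edge: all zeros, indicator 1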

Table 8

Selected hyperparameters

Tier | Model | Node feature | Edge feature | Batch size | Hidden dimension | Num. layers | Learning rate
1 | GAT | All | Flow | 64 | 256 | 3 | 0.01
2 | GIN | All | Flow | 64 | 64 | 2 | 0.01
3 | GIN | All | Assembly | 64 | 128 | 3 | 0.01
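
For orientation, the tier 3 configuration (GIN, hidden dimension 128, three layers, learning rate 0.01) could be instantiated with PyTorch Geometric [85] roughly as sketched below. The input dimension, class count, activation, and dropout rate are placeholders rather than values from Table 8, and the sketch omits edge features (which GINConv does not consume) for brevity.

    import torch
    from torch import nn
    import torch.nn.functional as F
    from torch_geometric.nn import GINConv

    class GINClassifier(nn.Module):
        def __init__(self, in_dim, hidden=128, num_layers=3, num_classes=50):
            super().__init__()
            self.convs = nn.ModuleList()
            dims = [in_dim] + [hidden] * num_layers
            for d_in, d_out in zip(dims[:-1], dims[1:]):
                # Each GIN layer wraps a small MLP, per Xu et al. [64].
                mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.LeakyReLU(),
                                    nn.Linear(d_out, d_out))
                self.convs.append(GINConv(mlp))
            self.dropout = nn.Dropout(0.5)           # placeholder rate
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, x, edge_index):
            for conv in self.convs:
                x = F.leaky_relu(conv(x, edge_index))
            return self.head(self.dropout(x))        # per-node class logits

    model = GINClassifier(in_dim=32)                 # in_dim is a placeholder
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr from Table 8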

References

1. Ullman, D., 2003, The Mechanical Design Process, McGraw-Hill Science/Engineering/Math, New York.
2. Gero, J. S., and Kannengiesser, U., 2004, "The Situated Function-Behaviour-Structure Framework," Des. Studies, 25(4), pp. 373–391.
3. Rosenman, M. A., and Gero, J. S., 1998, "Purpose and Function in Design: From the Socio-Cultural to the Technophysical," Des. Studies, 19(2), pp. 161–186.
4. Eisenbart, B., Gericke, K., and Blessing, L., 2011, "A Framework for Comparing Design Modelling Approaches Across Disciplines," DS 68-2: Proceedings of the 18th International Conference on Engineering Design (ICED 11), Copenhagen, Denmark, Aug. 15–18.
5. Eisenbart, B., Gericke, K., and Blessing, L., 2013, "An Analysis of Functional Modeling Approaches Across Disciplines," Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM, 27(3), pp. 281–289.
6. Hirtz, J., Stone, R. B., McAdams, D. A., Szykman, S., and Wood, K. L., 2002, "A Functional Basis for Engineering Design: Reconciling and Evolving Previous Efforts," Res. Eng. Des. – Theory Appl. Concurrent Eng., 13(2), pp. 65–82.
7. Ferrero, V., Wisthoff, A., Huynh, T., Ross, D., and DuPont, B., 2018, "A Sustainable Design Repository for Influencing the Eco-Design of New Consumer Products," EngrXiv, under review.
8. Oman, S., Gilchrist, B., Tumer, I. Y., and Stone, R., 2014, "The Development of a Repository of Innovative Products (RIP) for Inspiration in Engineering Design," Int. J. Des. Creativity Innovation, 2(4), pp. 186–202.
9. Szykman, S., Sriram, R. D., Bochenek, C., and Racz, J. W., 1999, "The NIST Design Repository Project," Adv. Soft Computing – Eng. Des. Manuf., R. Roy, T. Furuhashi, and P. K. Chawdry, eds., Springer-Verlag, London, pp. 5–19.
10. Feng, Y., Zhao, Y., Zheng, H., Li, Z., and Tan, J., 2020, "Data-Driven Product Design Toward Intelligent Manufacturing: A Review," Int. J. Adv. Robot. Syst., 17(2), pp. 1–18.
11. Bertoni, A., 2020, "Data-Driven Design in Concept Development: Systematic Review and Missed Opportunities," Proceedings of the Design Society: DESIGN Conference, Cavtat, Croatia, Oct. 26–29, Vol. 1, pp. 101–110.
12. Halevy, A., Norvig, P., and Pereira, F., 2009, "The Unreasonable Effectiveness of Data," IEEE Intelligent Syst., 24(2), pp. 8–12.
13. Sun, C., Shrivastava, A., Singh, S., and Gupta, A., 2017, "Revisiting Unreasonable Effectiveness of Data in Deep Learning Era," IEEE International Conference on Computer Vision (ICCV), Venice, Italy, Oct. 22–29, pp. 843–852.
14. Cheong, H., Li, W., Cheung, A., Nogueira, A., and Iorio, F., 2017, "Automated Extraction of Function Knowledge From Text," ASME J. Mech. Des., 139(11), p. 111407.
15. Law, M. V., Kwatra, A., Dhawan, N., Einhorn, M., Rajesh, A., and Hoffman, G., 2020, "Design Intention Inference for Virtual Co-Design Agents," Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Virtual, Oct. 20–22.
16. Zhang, Y., Liu, X., Jia, J., and Luo, X., 2019, "Knowledge Representation Framework Combining Case-Based Reasoning With Knowledge Graphs for Product Design," Comput.-Aided Des. Appl., 17(4), pp. 763–782.
17. Angrish, A., Craver, B., and Starly, B., 2019, "FabSearch: A 3D CAD Model-Based Search Engine for Sourcing Manufacturing Services," J. Comput. Inf. Sci. Eng., 19(4), p. 041006.
18. Dering, M. L., and Tucker, C. S., 2017, "A Convolutional Neural Network Model for Predicting a Product's Function, Given Its Form," ASME J. Mech. Des., 139(11), p. 111408.
19. Han, J., Sarica, S., Shi, F., and Luo, J., 2020, "Semantic Networks for Engineering Design: A Survey," Proceedings of the Design Society, pp. 2621–2630.
20. Lupinetti, K., Pernot, J.-P., Monti, M., and Giannini, F., 2019, "Content-Based CAD Assembly Model Retrieval: Survey and Future Challenges," Comput.-Aided Des., 113, pp. 62–81.
21. Zhang, Z., Cui, P., and Zhu, W., 2020, "Deep Learning on Graphs: A Survey," IEEE Trans. Knowledge Data Eng., 1(1), p. 99.
22. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y., 2019, "A Comprehensive Survey on Graph Neural Networks," IEEE Trans. Neural Netw. Learning Syst.
23. Bang, H., Martin, A. V., Prat, A., and Selva, D., 2018, "Daphne: An Intelligent Assistant for Architecting Earth Observing Satellite Systems," 2018 AIAA Information Systems–AIAA Infotech @ Aerospace.
24. Berquand, A., Murdaca, F., Riccardi, A., Soares, T., Genere, S., Brauer, N., and Kumar, K., 2019, "Artificial Intelligence for the Early Design Phases of Space Missions," 2019 IEEE Aerospace Conference, Big Sky, MT, Mar. 2–9.
25. Coyne, R. D., Rosenman, M. A., Radford, A. D., and Gero, J. S., 1990, Knowledge-Based Design Systems, Addison-Wesley, Boston, MA.
26. Erden, M., Komoto, H., Beek, T. V., Damelio, V., Echavarria, E., and Tomiyama, T., 2008, "A Review of Function Modeling: Approaches and Applications," Artif. Intell. Eng. Des. Anal. Manuf., 22(2), pp. 147–169.
27. Davis, N., Hsiao, C.-P., Popova, Y., and Magerko, B., 2015, "An Enactive Model of Creativity for Computational Collaboration and Co-Creation," Creativity in the Digital Age, Springer Series on Cultural Computing, pp. 109–133.
28. Bohm, M. R., and Stone, R. B., 2004, "Product Design Support: Exploring a Design Repository System," ASME International Mechanical Engineering Congress and Exposition, Anaheim, CA, Aug. 13–19, CED, pp. 55–65.
29. Bohm, M. R., Stone, R. B., Simpson, T. W., and Steva, E. D., 2008, "Introduction of a Data Schema to Support a Design Repository," CAD Comput. Aided Des., 40(7), pp. 801–811.
30. Arlitt, R., Van Bossuyt, D. L., Stone, R. B., and Tumer, I. Y., 2017, "The Function-Based Design for Sustainability Method," ASME J. Mech. Des., 139(4), pp. 1–12.
31. Devanathan, S., Ramanujan, D., Bernstein, W. Z., Zhao, F., and Ramani, K., 2010, "Integration of Sustainability Into Early Design Through the Function Impact Matrix," ASME J. Mech. Des., 132(8), p. 081004.
32. Gilchrist, B. P., Tumer, I. Y., Stone, R. B., Gao, Q., and Haapala, K. R., 2012, "Comparison of Environmental Impacts of Innovative and Common Products," International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Chicago, IL, Aug. 12–15, ASME, pp. 1–10.
33. Soria Zurita, N. F., Stone, R. B., Onan Demirel, H., and Tumer, I. Y., 2020, "Identification of Human–System Interaction Errors During Early Design Stages Using a Functional Basis Framework," ASCE-ASME J. Risk Uncert. Engrg. Sys. Part B Mech. Engrg., 6(1).
34. Soria Zurita, N. F., Stone, R. B., Demirel, O., and Tumer, I. Y., 2018, "The Function-Human Error Design Method (FHEDM)," ASME International Design Engineering Technical Conferences, Quebec City, Quebec, Canada, Aug. 26–29.
35. Tensa, M., Edmonds, K., Ferrero, V., Mikes, A., Soria Zurita, N., Stone, R., and DuPont, B., 2019, "Toward Automated Functional Modeling: An Association Rules Approach for Mining the Relationship Between Product Components and Function," Proc. Des. Soc.: Int. Conf. Eng. Des., 1(1), pp. 1713–1722.
36. Mikes, A., Edmonds, K., Stone, R. B., and DuPont, B., 2020, "Optimizing an Algorithm for Data Mining a Design Repository to Automate Functional Modeling," ASME International Design Engineering Technical Conferences, Virtual, Aug. 17–19, pp. 1–12.
37. Edmonds, K., Mikes, A., DuPont, B., and Stone, R. B., 2020, "A Weighted Confidence Metric to Improve Automated Functional Modeling," Proceedings of the ASME Design Engineering Technical Conference, Virtual, Aug. 17–19, pp. 1–13.
38. Ferrero, V. J., Alqseer, N., Tensa, M., and DuPont, B., 2020, "Using Decision Trees Supported by Data Mining to Improve Function-Based Design," ASME International Design Engineering Technical Conferences, Virtual, Aug. 17–19, pp. 1–11.
39. Singh, A., and Tucker, C. S., 2017, "A Machine Learning Approach to Product Review Disambiguation Based on Function, Form and Behavior Classification," Decision Support Syst., 97, pp. 81–91.
40. Szykman, S., Sriram, R. D., Bochenek, C., Racz, J. W., and Senfaute, J., 2000, "Design Repositories: Engineering Design's New Knowledge Base," IEEE Intell. Syst. Appl., 15(3), pp. 48–55.
41. Phelan, K., Wilson, C., and Summers, J. D., 2014, "Development of a Design for Manufacturing Rules Database for Use in Instruction of DFM Practices," Proceedings of the ASME International Design Engineering Technical Conference, Buffalo, NY, Aug. 17–20, Vol. 1A, pp. 1–7.
42. Bharadwaj, A., Xu, Y., Angrish, A., Chen, Y., and Starly, B., 2019, "Development of a Pilot Manufacturing Cyberinfrastructure With an Information Rich Mechanical CAD 3D Model Repository," ASME 2019 14th International Manufacturing Science and Engineering Conference (MSEC 2019), Erie, PA, June 10–14, pp. 1–8.
43. Kurtoglu, T., Campbell, M. I., Bryant, C. R., Stone, R. B., and McAdams, D. A., 2005, "Deriving a Component Basis for Computational Functional Synthesis," ICED 05: 15th International Conference on Engineering Design: Engineering Design and the Global Economy, Melbourne, Australia, Aug. 15–18.
44. Cheong, H., Chiu, I., Shu, L. H., Stone, R. B., and McAdams, D. A., 2011, "Biologically Meaningful Keywords for Functional Terms of the Functional Basis," ASME J. Mech. Des., 133(2), p. 021007.
45. Ferrero, V., 2020, PyDamp: Python-Based Data Addition and Management of PSQL, doi: 10.5281/zenodo.3873370.
46. Fayyad, U., Piatetsky-Shapiro, G., and Smyth, P., 1996, "The KDD Process for Extracting Useful Knowledge From Volumes of Data," Commun. ACM, 39(11), pp. 27–34.
47. Fayyad, U., Piatetsky-Shapiro, G., and Smyth, P., 1996, "Knowledge Discovery and Data Mining: Towards a Unifying Framework," AAAI KDD-96 Proceedings, Portland, OR, Aug. 2–4, Vol. 14, pp. 82–88.
48. Fayyad, U., Piatetsky-Shapiro, G., and Smyth, P., 1996, "From Data Mining to Knowledge Discovery in Databases," AI Magazine, 17(2), pp. 37–54.
49. Williams, G., Meisel, N. A., Simpson, T. W., and McComb, C., 2019, "Design Repository Effectiveness for 3D Convolutional Neural Networks: Application to Additive Manufacturing," ASME J. Mech. Des., 141(11), p. 111701.
50. Wang, Q., Mao, Z., Wang, B., and Guo, L., 2017, "Knowledge Graph Embedding: A Survey of Approaches and Applications," IEEE Trans. Knowl. Data Eng., 29(12), pp. 2724–2743.
51. Ji, S., Pan, S., Cambria, E., Marttinen, P., and Yu, P. S., 2020, "A Survey on Knowledge Graphs: Representation, Acquisition and Applications," arXiv:2002.00388.
52. Miller, G. A., 1995, "WordNet," Commun. ACM, 38(11), pp. 39–41.
53. Liu, H., and Singh, P., 2004, "ConceptNet – A Practical Commonsense Reasoning Tool-Kit," BT Technol. J., 22(4), pp. 211–226.
54. Sarica, S., Luo, J., and Wood, K. L., 2020, "TechNet: Technology Semantic Network Based on Patent Data," Expert Syst. Appl., 142.
55. Sarica, S., Song, B., Luo, J., and Wood, K., 2019, "Technology Knowledge Graph for Design Exploration: Application to Designing the Future of Flying Cars," Proceedings of the ASME International Design Engineering Technical Conference, Anaheim, CA, Aug. 18–21, Vol. 1, pp. 1–8.
56. Shi, F., Chen, L., Han, J., and Childs, P., 2017, "A Data-Driven Text Mining and Semantic Network Analysis for Design Information Retrieval," ASME J. Mech. Des., 139(11), p. 111402.
57. Han, J., Forbes, H., Shi, F., Hao, J., and Schaefer, D., 2020, "A Data-Driven Approach for Creative Concept Generation and Evaluation," Proc. Des. Soc.: Des. Conf., 1, pp. 167–176.
58. Zhang, Y., Liu, X., Jia, J., and Luo, X., 2019, "Knowledge Representation Framework Combining Case-Based Reasoning With Knowledge Graphs for Product Design," Comput.-Aided Des. Appl., 17(4), pp. 763–782.
59. Hassani, K., and Khasahmadi, A. H., 2020, "Contrastive Multi-View Representation Learning on Graphs," International Conference on Machine Learning, Vienna, Austria, July 13–18, pp. 4116–4126.
60. Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R., 2015, "Gated Graph Sequence Neural Networks," International Conference on Learning Representations, San Juan, Puerto Rico, May 2–4.
61. Hamilton, W., Ying, Z., and Leskovec, J., 2017, "Inductive Representation Learning on Large Graphs," Advances in Neural Information Processing Systems, Long Beach, CA, Dec. 4–9, pp. 1024–1034.
62. Kipf, T. N., and Welling, M., 2017, "Semi-Supervised Classification With Graph Convolutional Networks," International Conference on Learning Representations, Toulon, France, Apr. 24–26.
63. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., and Bengio, Y., 2018, "Graph Attention Networks," International Conference on Learning Representations, Vancouver, Canada, Apr. 30–May 3.
64. Xu, K., Hu, W., Leskovec, J., and Jegelka, S., 2019, "How Powerful Are Graph Neural Networks?," International Conference on Learning Representations, New Orleans, LA, May 6–9.
65. Duvenaud, D. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. P., 2015, "Convolutional Networks on Graphs for Learning Molecular Fingerprints," Advances in Neural Information Processing Systems, Montreal, Quebec, Canada, Dec. 7–12, pp. 2224–2232.
66. Hanocka, R., Hertz, A., Fish, N., Giryes, R., Fleishman, S., and Cohen-Or, D., 2019, "MeshCNN: A Network With an Edge," ACM Trans. Graphics (TOG), 38(4), pp. 1–12.
67. Hassani, K., and Haley, M., 2019, "Unsupervised Multi-Task Feature Learning on Point Clouds," International Conference on Computer Vision, Seoul, South Korea, Oct. 27–Nov. 2, pp. 8160–8171.
68. Wang, T., Zhou, Y., Fidler, S., and Ba, J., 2019, "Neural Graph Evolution: Automatic Robot Design," International Conference on Learning Representations, New Orleans, LA, May 6–9.
69. Sanchez-Gonzalez, A., Heess, N., Springenberg, J. T., Merel, J., Riedmiller, M., Hadsell, R., and Battaglia, P., 2018, "Graph Networks as Learnable Physics Engines for Inference and Control," International Conference on Machine Learning, Stockholm, Sweden, July 10–15, pp. 4470–4479.
70. Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J., and Battaglia, P., 2020, "Learning to Simulate Complex Physics With Graph Networks," International Conference on Machine Learning (PMLR), Vienna, Austria, July 12–18, pp. 8459–8468.
71. Shlomi, J., Battaglia, P., and Vlimant, J.-R., 2020, "Graph Neural Networks in Particle Physics," Mach. Learning: Sci. Technol., 2(2), pp. 1–19.
72. Guo, K., and Buehler, M. J., 2020, "A Semi-Supervised Approach to Architected Materials Design Using Graph Neural Networks," Extreme Mech. Lett., 41, p. 101029.
73. Park, J., and Park, J., 2019, "Physics-Induced Graph Neural Network: An Application to Wind-Farm Power Estimation," Energy, 187, p. 115883.
74. Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E., 2017, "Neural Message Passing for Quantum Chemistry," International Conference on Machine Learning, Sydney, Australia, Aug. 6–11, pp. 1263–1272.
75. Oregon State Design Repository, 2020, https://design.engr.oregonstate.edu/repo
76. Hagberg, A. A., Schult, D. A., and Swart, P. J., 2008, "Exploring Network Structure, Dynamics, and Function Using NetworkX," Proceedings of the 7th Python in Science Conference, G. Varoquaux, T. Vaught, and J. Millman, eds., pp. 11–15.
77. Hagberg, A., Swart, P., and S Chult, D., 2008, "Exploring Network Structure, Dynamics, and Function Using NetworkX," Technical Report, Los Alamos National Lab. (LANL), Los Alamos, NM.
78. Williams, R. J., and Zipser, D., 1989, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Comput., 1(2), pp. 270–280.
79. Glorot, X., and Bengio, Y., 2010, "Understanding the Difficulty of Training Deep Feedforward Neural Networks," International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, May 13–15, pp. 249–256.
80. Kingma, D. P., and Ba, J. L., 2014, "Adam: A Method for Stochastic Optimization," International Conference on Learning Representations, Banff, Canada, Apr. 14–16.
81. Loshchilov, I., and Hutter, F., 2016, "SGDR: Stochastic Gradient Descent With Warm Restarts," International Conference on Learning Representations, San Juan, Puerto Rico, May 2–4.
82. Maas, A. L., Hannun, A. Y., and Ng, A. Y., 2013, "Rectifier Nonlinearities Improve Neural Network Acoustic Models," International Conference on Machine Learning, Atlanta, GA, June 16–21.
83. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., 2014, "Dropout: A Simple Way to Prevent Neural Networks From Overfitting," J. Mach. Learn. Res., 15(56), pp. 1929–1958.
84. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., et al., 2019, "PyTorch: An Imperative Style, High-Performance Deep Learning Library," Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds., Curran Associates, Inc., pp. 8024–8035.
85. Fey, M., and Lenssen, J. E., 2019, "Fast Graph Representation Learning With PyTorch Geometric," ICLR Workshop on Representation Learning on Graphs and Manifolds, New Orleans, LA, May 6–9.
86. Cheng, Z., and Ma, Y., 2017, "Explicit Function-Based Design Modelling Methodology With Features," J. Eng. Des., 28(3), pp. 205–231.
87. Bohm, M. R., Haapala, K. R., Poppa, K., Stone, R. B., and Tumer, I. Y., 2010, "Integrating Life Cycle Assessment Into the Conceptual Phase of Design Using a Design Repository," ASME J. Mech. Des., 132(9), p. 091005.