Chapter 9 Implementation and numerical results

The proposed CCNNs can be used to construct different neural network architectures for diverse learning tasks. In this section, we demonstrate the generality and efficacy of CCNNs by evaluating their predictive performance on shape analysis and graph learning tasks. In our geometric processing experiments, we compare CCNNs against state-of-the-art methods, which are highly engineered and trained towards specific tasks. Furthermore, we conduct our experiments on various data modalities commonly studied in geometric data processing, namely point clouds and 3D meshes. We also perform experiments on graph data. In our experiments, we tune three main components: the choice of CCNN architecture, the learning rate, and the number of replications in data augmentation. We justify the choice of CCNN architecture for each learning task. We have implemented our pipeline in PyTorch and have run the experiments on a single NVIDIA GeForce RTX 3060 Ti GPU using a Microsoft Windows back-end.

9.1 Software: TopoNetX, TopoEmbedX, and TopoModelX

All our software development and experimental analysis have been carried out using Python. We have developed three Python packages, which we have also used to run our experiments: TopoNetX, TopoEmbedX and TopoModelX. TopoNetX supports the construction of several topological structures, including cell complex, simplicial complex, and combinatorial complex classes. These classes provide methods for computing boundary operators, Hodge Laplacians and higher-order adjacency operators on cell, simplicial, and combinatorial complexes, respectively. TopoEmbedX supports representation learning of higher-order relations on cell complexes, simplicial complexes, and combinatorial complexes. TopoModelX supports computing deep learning models defined on these topological domains.

In addition to our software package implementations, we have utilized PyTorch (Paszke et al. 2017) for training the neural networks reported in this section. Also, we have utilized Scikit-learn (Pedregosa et al. 2011) to compute the eigenvectors of the 1-Hodge Laplacians. The normal vectors of point clouds have been computed using the Point Cloud Utils package (Williams 2022). Finally, we have used NetworkX (Hagberg, Swart, and Schult 2008) and HyperNetX (Joslyn et al. 2021), both in the development of our software packages and in our computations.

9.2 Datasets

In our evaluation of CCNNs, we use four datasets: the Human Body, the COSEG, the SHREC11, and a benchmark dataset for graph classification taken from (Bianchi, Gallicchio, and Micheli 2022). A summary of these datasets follows.

Human Body segmentation dataset. The original Human Body segmentation dataset presented in (Atzmon, Maron, and Lipman 2018) contains relatively large meshes with a size up to 12,000 vertices. The segmentation labels provided in this dataset are set per-face, and the segmentation accuracy is defined to be the ratio of the correctly classified faces over the total number of faces in the entire dataset. In this work, we use a simplified version of the original Human Body dataset, as provided by (Hanocka et al. 2019), in which meshes have less than 1,000 nodes and segmentation labels are remapped to edges. We use this simplified version of the Human Body dataset for the shape analysis (i.e., mesh segmentation) task in Section 9.3.1.

COSEG segmentation dataset. The original COSEG dataset (Y. Wang et al. 2012) contains eleven sets of shapes with ground-truth segmentation. In this work, we use a subset of the original COSEG dataset that contains relatively large sets of aliens, vases, and chairs. These three sets consist of 200, 300, and 400 shapes, respectively. We use this custom subset of the COSEG dataset for the shape analysis (i.e., mesh segmentation) task in Section 9.3.1.

SHREC11 classification dataset. SHREC 2011 (Lian et al. 2011), abbreviated as SHREC11, is a large-scale dataset that contains 600 nonrigid deformed shapes (watertight triangle meshes) from 30 categories, where each category contains an equal number of objects. Examples of these categories include hand, lamp, woman, man, flamingo, and rabbit. The dataset is available as a training and test set consisting of 480 and 120 shapes, respectively. We use the SHREC11 dataset for the shape analysis tasks in Sections 9.3.2 and 9.4.

Benchmark dataset for graph classification. This dataset has graphs belonging to three different classes (Bianchi, Gallicchio, and Micheli 2022). For each graph, the feature vector on each vertex (the 0-cochain) is a one-hot vector of size five, and it stores the relative position of the vertices on the graph. This dataset has easy and hard versions. The easy version has highly connected graphs, while the hard version has sparse graphs. We use this dataset for the graph classification task in Section 9.3.3.

9.3 Shape analysis: mesh segmentation and classification

The CC structure used for the shape analysis experiments (mesh segmentation and classification) is simply induced by the triangulation of the meshes. Specifically, the 0-, 1-, and 2-cells are the vertices, edges, and faces of the mesh, respectively. The matrices used for the CCNNs are \(B_{0,1},~B_{0,2}\), their transpose matrices, and the (co)adjacency matrices \(A_{1,1}\), \(coA_{1,1}\), and \(coA_{2,1}\).
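To make the induced CC structure concrete, the following sketch builds the unsigned incidence matrices \(B_{0,1}\) and \(B_{0,2}\) and the coadjacency matrix \(coA_{1,1}\) for a toy two-triangle mesh. This is an illustration of the construction described above, not code from our pipeline; all variable names are hypothetical.

```python
import numpy as np

# Two triangles sharing an edge: the CC induced by a mesh triangulation
# (0-cells = vertices, 1-cells = edges, 2-cells = faces).
faces = [(0, 1, 2), (1, 2, 3)]
edges = sorted({tuple(sorted((f[i], f[j]))) for f in faces for i, j in [(0, 1), (1, 2), (0, 2)]})
n_v = 4

B01 = np.zeros((n_v, len(edges)), dtype=int)   # vertex-edge incidence
for j, (u, v) in enumerate(edges):
    B01[u, j] = B01[v, j] = 1

B02 = np.zeros((n_v, len(faces)), dtype=int)   # vertex-face incidence
for j, f in enumerate(faces):
    B02[list(f), j] = 1

# Coadjacency of edges through shared vertices: coA_{1,1}.
coA11 = (B01.T @ B01 > 0).astype(int)
np.fill_diagonal(coA11, 0)                      # an edge is not coadjacent to itself
```

The remaining neighborhood matrices (e.g., \(A_{1,1}\) through faces) follow the same pattern with the roles of the incidence matrices exchanged.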

A CCNN takes a vector of cochains as input features. For shape analysis tasks, we consider cochains whose features are built directly from the vertex coordinates of the underlying mesh. We note that other choices (e.g., spectral-based cochains as in (Mejia, Ruiz-Salguero, and Cadavid 2017)) can also be included. Our shape analysis tasks have three input cochains: the vertex, edge and face cochains. Each vertex cochain has two input features: the position and the normal vectors associated with the vertex. Similar to (Hanocka et al. 2019), each edge cochain consists of five features: the length of the edge, the dihedral angle, two inner angles, and two edge-length ratios for each face. Finally, each input face cochain consists of three input features: the face area, face normal, and the three face angles.
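The face features above can be computed directly from the vertex coordinates. A minimal sketch of one such construction (the exact feature extraction used in our pipeline may differ in details):

```python
import numpy as np

def face_features(p0, p1, p2):
    """Area, unit normal, and interior angles of a triangle, one possible
    realization of the 2-cochain input features described above."""
    e1, e2 = p1 - p0, p2 - p0
    cross = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(cross)
    normal = cross / np.linalg.norm(cross)

    def angle(a, b, c):  # interior angle at vertex a
        u, v = b - a, c - a
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    angles = np.array([angle(p0, p1, p2), angle(p1, p2, p0), angle(p2, p0, p1)])
    return area, normal, angles

# Right triangle in the xy-plane.
area, normal, angles = face_features(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```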

9.3.1 Mesh segmentation

For the Human Body dataset (Maron et al. 2017), we built a CCNN that predicts an edge class. The tensor diagram of the architecture is shown in Figure 9.1(a). For the COSEG dataset (Y. Wang et al. 2012), we built a CCNN that combines our proposed feature vectors defined on vertices, edges, and faces to learn the final face class. The architecture uses incidence matrices as well as (co)adjacency matrices to construct the signal flow demonstrated in Figure 9.1(a). Specifically, the tensor diagram displays three non-squared attention blocks and three squared attention blocks. The depth of the model is chosen to be two, as indicated in Figure 9.1(a).


FIGURE 9.1: The tensor diagrams of the CCNNs used in our experiments. (a): The CCNNs used in the mesh segmentation tasks. In particular, \(\mbox{CCNN}_{HB}\) and \(\mbox{CCNN}_{COSEG}\) are the architectures used on the Human Body dataset (Atzmon, Maron, and Lipman 2018) and on the COSEG dataset (Y. Wang et al. 2012), respectively. (b): The mesh classification CCNN used on the SHREC11 dataset (Lian et al. 2011). (c): The graph classification CCNN used on the dataset provided in (Bianchi, Gallicchio, and Micheli 2022). (d): The mesh/point cloud classification CCNNs used in conjunction with the MOG algorithm on the SHREC11 dataset.

Note that the architectures chosen for the COSEG and the Human Body datasets have the same number and types of building blocks; compare the two tensor diagrams in Figure 9.1(a). We use a random 85%-15% train-test split. For both architectures, a softmax activation is applied to the output tensor. All our segmentation models are trained for 600 epochs using a learning rate of 0.0001 and the standard cross-entropy loss. These settings are consistent across the Human Body and Shape COSEG datasets.
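For reference, the softmax output and cross-entropy loss used by the segmentation heads amount to the following; this is a numpy sketch for exposition only, as our experiments use PyTorch's built-in implementations.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over per-cell class logits."""
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Standard cross-entropy: mean negative log-probability of the true class."""
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

# Two cells (e.g., edges or faces), three segmentation classes.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
loss = cross_entropy(logits, np.array([0, 2]))
```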

We test the proposed CCNNs on mesh segmentation using the Human Body (Maron et al. 2017) and the Shape COSEG (vase, chair, and alien) (Y. Wang et al. 2012) datasets. For each mesh in these datasets, the utilized CC structure is the one induced by the triangulation of the meshes, although other variations in the CC structure yield comparable results. Further, three \(k\)-cochains are constructed for \(0\leq k \leq 2\) and are utilized in CCNN training. As shown in Table 9.1, CCNNs outperform three neural networks tailored to mesh analysis (HodgeNet (Smirnov and Solomon 2021), PD-MeshNet (Milano et al. 2020) and MeshCNN (Hanocka et al. 2019)) on two out of four datasets, and are among the best two neural networks on all four datasets.

TABLE 9.1: Predictive accuracy on test sets related to shape analysis, namely on Human Body and COSEG (vase, chair, alien) datasets. The results reported here are based on the \(\mbox{CCNN}_{COSEG}\) and \(\mbox{CCNN}_{HB}\) architectures. In particular, the result for \(\mbox{CCNN}_{HB}\) is reported in the first column, whereas the results for \(\mbox{CCNN}_{COSEG}\) are reported in the second, third and fourth columns.
Method Human Body COSEG vase COSEG chair COSEG alien
HodgeNet 85.03 90.30 95.68 96.03
PD-MeshNet 85.61 95.36 97.23 98.18
MeshCNN 85.39 92.36 92.99 96.26
CCNN 87.30 93.40 98.30 93.70

Architecture of \(\mbox{CCNN}_{COSEG}\) and \(\mbox{CCNN}_{HB}\). In \(\mbox{CCNN}_{COSEG}\), as shown in Figure 9.1(a), we choose a CCNN pooling architecture as given in Definition 7.5, which pushes signals from vertices, edges and faces, and aggregates their information towards the final face prediction class. We choose \(\mbox{CCNN}_{HB}\) similarly, except that the predicted signal is an edge class. The reason for this choice is that the Human Body dataset (Atzmon, Maron, and Lipman 2018) encodes the segmentation information on edges.

9.3.2 Mesh and point cloud classification

We evaluate our method on mesh classification using the SHREC11 dataset (Lian et al. 2011) based on the same cochains and CC structure used in the segmentation experiment of Section 9.3.1. The CCNN architecture for our mesh classification task, denoted by \(\mbox{CCNN}_{SHREC}\), is demonstrated in Figure 9.1(b). The final layer of \(\mbox{CCNN}_{SHREC}\), depicted as a grey node in Figure 9.1(b), is a simple pooling operation that sums all embeddings of the CC after mapping them to the same Euclidean space. The \(\mbox{CCNN}_{SHREC}\) is trained for 40 epochs with both tanh and identity activation functions using a learning rate of 0.005 and the standard cross-entropy loss. We use anisotropic scaling and random rotations for data augmentation. Each mesh is augmented 30 times, centered around the vertex center of mass, and rescaled to fit inside the unit cube.
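One augmentation pass, anisotropic scaling and a random rotation followed by centering and rescaling into the unit cube, can be sketched as follows. The QR-based rotation sampler is an assumption for illustration; any uniform rotation sampler would do.

```python
import numpy as np

def augment(verts, rng):
    """One data-augmentation pass as described above (sketch)."""
    verts = verts * rng.uniform(0.8, 1.2, size=3)   # anisotropic scaling per axis
    # Random orthogonal transform via QR decomposition of a Gaussian matrix.
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.diag(R))                        # fix column signs
    verts = verts @ Q
    verts = verts - verts.mean(axis=0)              # center at vertex center of mass
    return verts / np.abs(verts).max() / 2 + 0.5    # fit inside the unit cube [0, 1]^3

rng = np.random.default_rng(0)
out = augment(rng.normal(size=(100, 3)), rng)
```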

The \(\mbox{CCNN}_{SHREC}\) with identity activations and \(\tanh\) activations achieve predictive accuracies of 96.67% and 99.17%, respectively. Table 9.2 shows that CCNNs outperform two neural networks tailored to mesh analysis (HodgeNet and MeshCNN), being the second best model behind PD-MeshNet in mesh and point cloud classification. It is worth mentioning that the mesh classification CCNN requires significantly fewer epochs to train (40 epochs) than the mesh segmentation CCNNs (600 epochs).

TABLE 9.2: Predictive accuracy on the SHREC11 test dataset. The left and right column report the mesh and point cloud classification results, respectively. The CCNN for mesh classification is \(\mbox{CCNN}_{SHREC}\), while the CCNN for point cloud classification is \(\mbox{CCNN}_{MOG2}\).
Method Mesh Point cloud
HodgeNet 99.10 94.70
PD-MeshNet 99.70 99.10
MeshCNN 98.60 91.00
CCNN 99.17 95.20

Architecture of \(\mbox{CCNN}_{SHREC}\). The \(\mbox{CCNN}_{SHREC}\) has two layers and is chosen as a pooling CCNN in the sense of Definition 7.5, similar to \(\mbox{CCNN}_{COSEG}\) and \(\mbox{CCNN}_{HB}\). The main difference is that the final layer of \(\mbox{CCNN}_{SHREC}\), represented by the grey point in Figure 9.1(b), is a global pooling function that sums all embeddings of all dimensions (zero, one and two) of the underlying CC after mapping them to the same Euclidean space.

9.3.3 Graph classification

For the graph classification task, we use the graph classification benchmark provided in (Bianchi, Gallicchio, and Micheli 2022); the dataset consists of graphs with three different labels. To construct the CC structure, we use the 2-clique complex of the input graph. We then build the CCNN for graph classification, denoted by \(\mbox{CCNN}_{Graph}\), which is visualized in Figure 9.1(c). The matrices used for the construction of \(\mbox{CCNN}_{Graph}\) are \(B_{0,1},~B_{1,2},~B_{0,2}\), their transpose matrices, and the (co)adjacency matrices \(A_{0,1},~A_{1,1},~coA_{2,1}\). The cochains of \(\mbox{CCNN}_{Graph}\) are constructed as follows. For each graph in the dataset, we set the 0-cochain to be the one-hot vector of size five provided by the dataset, which stores the relative position of each vertex on the graph. We then construct the 1-cochain and 2-cochain on the 2-clique complex of the graph by taking the coordinate-wise maximum of the one-hot vectors attached to the vertices of each cell. The input to \(\mbox{CCNN}_{Graph}\) consists of the 0-cochain provided as part of the dataset as well as the constructed 1- and 2-cochains. The grey node in Figure 9.1(c) indicates a simple mean pooling operation. We train this network with a learning rate of 0.005 and no data augmentation.
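The lifting of the 0-cochain to higher-order cochains via a coordinate-wise maximum can be sketched as follows; the helper name is hypothetical.

```python
import numpy as np

def lift_cochain(x0, cells):
    """Build a k-cochain from the 0-cochain x0 by taking the coordinate-wise
    max of the feature vectors on each cell's vertices, as described above."""
    return np.stack([x0[list(cell)].max(axis=0) for cell in cells])

x0 = np.eye(5)[[0, 1, 1, 4]]        # one-hot 0-cochain on 4 vertices
edges = [(0, 1), (1, 2), (2, 3)]    # 1-cells of the 2-clique complex
x1 = lift_cochain(x0, edges)        # 1-cochain: one 5-dim vector per edge
```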

Table 9.3 reports the results on the easy and the hard versions of the dataset¹, and compares them to six state-of-the-art GNNs. As shown in Table 9.3, CCNNs outperform all six GNNs on the hard dataset, and five of the six on the easy dataset. In particular, the proposed CCNN outperforms MinCutPool on the hard dataset, while attaining comparable performance to MinCutPool on the easy dataset.

TABLE 9.3: Predictive accuracy on the test set of (Bianchi, Gallicchio, and Micheli 2022) related to graph classification. All results are reported using the \(\mbox{CCNN}_{Graph}\) architecture.
Dataset Graclus NDP DiffPool Top-K SAGPool MinCutPool CCNN
Easy 97.81 97.93 98.64 82.47 84.23 99.02 98.90
Hard 69.08 72.67 69.98 42.80 37.71 73.80 75.59

Architecture of \(\mbox{CCNN}_{Graph}\). In the \(\mbox{CCNN}_{Graph}\) displayed in Figure 9.1(c), we choose a CCNN pooling architecture as given in Definition 7.5 that pushes signals from vertices, edges and faces, and aggregates their information towards the higher-order cells before making the final prediction. For the dataset of (Bianchi, Gallicchio, and Micheli 2022), we experiment with two architectures; the first is identical to the \(\mbox{CCNN}_{SHREC}\) shown in Figure 9.1(b), and the second is the \(\mbox{CCNN}_{Graph}\) shown in Figure 9.1(c). We report the results for \(\mbox{CCNN}_{Graph}\), as it provides superior performance. Note that when this neural network is run on an underlying simplicial complex, the neighborhood matrix \(B_{0,2}\) is typically not considered; hence, the CC structure equipped with this additional incidence matrix improves the generalization performance of \(\mbox{CCNN}_{Graph}\).

9.4 Pooling with mapper on graphs and data classification

We perform experiments to measure the effectiveness of the MOG pooling strategy discussed in Section 7.4. Recall that the MOG algorithm requires two pieces of input: the 1-skeleton of a CC \(\mathcal{X}\), and a scalar function on the vertices of \(\mathcal{X}\). Our choice for the input scalar function is the average geodesic distance (AGD) (V. G. Kim et al. 2010), which is well suited to shape detection as it is invariant to reflection and rotation. For two vertices \(u\) and \(v\) of a graph, the geodesic distance between \(u\) and \(v\), denoted by \(d(v,u)\), is computed using Dijkstra’s shortest path algorithm. The AGD is given by the following equation: \[\begin{equation} AGD(v)=\frac{1}{|V|}\sum_{u\in V}d(v,u). \tag{9.1} \end{equation}\]
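Equation (9.1) can be computed with a standard Dijkstra routine; a self-contained sketch (our pipeline relies on library implementations, so the function names here are illustrative):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths on a weighted graph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def agd(adj):
    """Average geodesic distance of Equation (9.1) for every vertex."""
    n = len(adj)
    return {v: sum(dijkstra(adj, v).values()) / n for v in adj}

# Path graph 0-1-2: the central vertex attains the smallest AGD.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0)]}
scores = agd(adj)
```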

From Equation (9.1), it is immediate that the vertices near the center of the graph are likely to have low function values, while points on the periphery are likely to have high values. This observation has been utilized to study graph symmetry (V. G. Kim et al. 2010), and it provides a justification for selecting the AGD for the MOG pooling strategy. Figure 9.2 presents a few examples of applying the MOG pooling strategy using AGD on the SHREC11 dataset.


FIGURE 9.2: Examples of applying the MOG algorithm on the SHREC11 dataset (Lian et al. 2011). In each figure, we show the original mesh graph on the left and the mapper graph on the right. The scalar function chosen for the MOG algorithm is the average geodesic distance (AGD). We observe that the pooled mapper graph has similar overall shape to the original graphs.

In order to demonstrate the effectiveness of our MOG pooling approach, we conduct three experiments on the SHREC11 dataset: mesh classification based on CC-pooling with input vertex and edge features (Section 9.4.1), mesh classification based on CC-pooling with input vertex features only (Section 9.4.2), and point cloud classification based on CC-pooling with input vertex features only (Section 9.4.3). The experiments in Sections 9.4.1 and 9.4.2 utilize the mesh structure in the SHREC11 dataset, whereas the experiment in Section 9.4.3 utilizes its own point cloud version. In particular, we choose two simple CCNN architectures shown in Figure 9.1(d), denoted by \(\mbox{CCNN}_{MOG1}\) and \(\mbox{CCNN}_{MOG2}\), as opposed to the more complicated architecture of \(\mbox{CCNN}_{SHREC}\) in Figure 9.1(b). The main difference between \(\mbox{CCNN}_{MOG1}\) and \(\mbox{CCNN}_{MOG2}\) is the choice of the input feature vectors as described next.

9.4.1 Mesh classification: CC-pooling with input vertex and edge features

In this experiment, we consider the vertex feature vector to be the position concatenated with the normal vector for each vertex in the underlying mesh. For the edge features, we compute the first ten eigenvectors of the 1-Hodge Laplacian (Dodziuk 1976; Eckmann 1944) and attach a 10-dimensional feature vector to the edges of the underlying mesh. The CC that we consider here is 3-dimensional, as it consists of the triangular mesh (vertices, edges and faces) and of 3-cells. The 3-cells are obtained via the MOG algorithm, using the AGD scalar function as input, and are used for augmenting each mesh. We conduct this experiment using the CCNN defined via the tensor diagram \(\mbox{CCNN}_{MOG1}\) given in Figure 9.1(d). During training, we augment each mesh with ten additional meshes, each obtained by a random rotation as well as a 0.1% noise perturbation of the vertex positions. We train \(\mbox{CCNN}_{MOG1}\) for 100 epochs using a learning rate of 0.0002 and the standard cross-entropy loss, and obtain an accuracy of 98.1%. While the accuracy of \(\mbox{CCNN}_{MOG1}\) is lower than the one we report for \(\mbox{CCNN}_{SHREC}\) (99.17%) in Table 9.2, we note that \(\mbox{CCNN}_{MOG1}\) requires a significantly smaller number of replications for mesh augmentation to achieve a similar accuracy (10 replications, as opposed to the 30 required by \(\mbox{CCNN}_{SHREC}\)).
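The 1-Hodge Laplacian whose eigenvectors serve as edge features is assembled from the signed boundary matrices of the complex. A minimal sketch on a single triangle (in our experiments the eigencomputation is delegated to Scikit-learn; the small dense example below uses numpy):

```python
import numpy as np

# Signed boundary matrices of one triangle with vertices 0, 1, 2 and
# edges ordered (0,1), (0,2), (1,2).
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])        # vertices x edges
B2 = np.array([[ 1], [-1], [ 1]])   # edges x faces: boundary of the 2-cell

# 1-Hodge Laplacian: down part from B1, up part from B2.
L1 = B1.T @ B1 + B2 @ B2.T
eigvals, eigvecs = np.linalg.eigh(L1)  # eigenvectors used as edge features
```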

Architecture of \(\mbox{CCNN}_{MOG1}\). The tensor diagram \(\mbox{CCNN}_{MOG1}\) of Figure 9.1(d) corresponds to a pooling CCNN. In particular, \(\mbox{CCNN}_{MOG1}\) pushes forward the signal towards two different higher-order cells: the faces of the mesh as well as the 3-cells obtained from the MOG algorithm.

9.4.2 Mesh classification: CC-pooling with input vertex features only

In this experiment, we consider the position and the normal vectors of the input vertices. The CC structure that we consider is the underlying graph structure obtained from each mesh; i.e., we only use the vertices and the edges, and ignore the faces. We augment this structure by 2-cells obtained via the MOG algorithm using the AGD scalar function as input. We choose the network architecture to be relatively simpler than \(\mbox{CCNN}_{MOG1}\), and report it in Figure 9.1(d) as \(\mbox{CCNN}_{MOG2}\). During training we augment each mesh with 10 additional meshes, with each of these additional meshes being obtained by a random rotation as well as 0.05% noise perturbation to the vertex positions. We train \(\mbox{CCNN}_{MOG2}\) for 100 epochs using a learning rate of 0.0003 and the standard cross-entropy loss, and obtain an accuracy of 97.1%.

Architecture of \(\mbox{CCNN}_{MOG2}\) for mesh classification. The tensor diagram \(\mbox{CCNN}_{MOG2}\) of Figure 9.1(d) corresponds to a pooling CCNN. In particular, \(\mbox{CCNN}_{MOG2}\) pushes forward the signal towards a single 2-cell obtained from the MOG algorithm. Observe that the overall architecture of \(\mbox{CCNN}_{MOG2}\) is similar in principle to AlexNet (Krizhevsky, Sutskever, and Hinton 2017), where convolutional layers are followed by pooling layers.

9.4.3 Point cloud classification: CC-pooling with input vertex features only

In this experiment, we consider point cloud classification on the SHREC11 dataset. The setup is similar in principle to the one studied in Section 9.4.2 where we consider only the features supported on the vertices of the point cloud as input. Specifically, for each mesh in the SHREC11 dataset, we sample 1,000 points from the surface of the mesh. Additionally, we estimate the normal vectors of the resulting point clouds using the Point Cloud Utils package (Williams 2022). To build the CC structure, we first consider the \(k\)-nearest neighborhood graph obtained from each point cloud using \(k=7\). We then augment this graph by 2-cells obtained via the MOG algorithm using the AGD scalar function as input. We train the \(\mbox{CCNN}_{MOG2}\) shown in Figure 9.1(d). During training, we augment each point cloud with 12 additional instances, each one of these instances being obtained by random rotation. We train \(\mbox{CCNN}_{MOG2}\) for 100 epochs using a learning rate of 0.0003 and the standard cross-entropy loss, and obtain an accuracy of 95.2% (see Table 9.2).
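The \(k\)-nearest-neighbor graph underlying this experiment can be sketched as follows (an illustration with a brute-force distance matrix; our pipeline's implementation details may differ, and symmetrization is one common convention):

```python
import numpy as np

def knn_graph(points, k=7):
    """Symmetrized k-nearest-neighbor graph of a point cloud,
    returned as a 0/1 adjacency matrix."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-loops
    A = np.zeros(d.shape, dtype=int)
    idx = np.argsort(d, axis=1)[:, :k]     # indices of the k closest neighbors
    np.put_along_axis(A, idx, 1, axis=1)
    return A | A.T                          # symmetrize

rng = np.random.default_rng(1)
A = knn_graph(rng.normal(size=(50, 3)), k=7)
```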

9.5 Ablation studies

In this section, we perform two ablation studies. The first ablation study reveals that pooling strategies in CCNNs have a crucial effect on predictive performance. The second ablation study demonstrates that CCNNs have better predictive capacity than GNNs; the advantage of CCNNs arises from their topological pooling operations and from their ability to learn from topological features.

Pooling strategies in CCNNs. To evaluate the impact of the choice of pooling strategy on predictive performance, we experiment with two pooling strategies using the SHREC11 classification dataset. The first pooling strategy is the MOG algorithm described in Section 9.4; the results of this pooling strategy based on \(\mbox{CCNN}_{MOG2}\) are discussed in Section 9.4.2 (97.1%). The second pooling strategy is as follows: for each mesh, we consider the 2-dimensional CC obtained by taking the 1-hop neighborhoods of vertices as the 1-cells and the 2-hop neighborhoods as the 2-cells. We train \(\mbox{CCNN}_{MOG2}\) on this CC and obtain an accuracy of 89.2%, notably lower than the 97.1% obtained with MOG pooling. These experiments suggest that the choice of pooling strategy has a crucial effect on predictive performance.
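The hop-neighborhood cells of the second strategy can be read off powers of the adjacency matrix; a small sketch (the helper name is hypothetical):

```python
import numpy as np

def hop_neighborhoods(A, hops):
    """For each vertex, the set of vertices within `hops` steps: the cells of
    the alternative pooling CC described above (sketch)."""
    reach = np.linalg.matrix_power(A + np.eye(len(A), dtype=int), hops) > 0
    return [frozenset(np.flatnonzero(row)) for row in reach]

# Path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
one_hop = hop_neighborhoods(A, 1)   # 1-cells
two_hop = hop_neighborhoods(A, 2)   # 2-cells
```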

Comparing CCNNs to GNNs in terms of predictive performance. Observe that \(\mbox{CCNN}_{SHREC}\) has topological features of dimension one and two as inputs. On the other hand, \(\mbox{CCNN}_{MOG2}\) has only vertex features as input, but it learns the higher-order cell latent features by using the push-forward operation that pushes the signal from 0-cells to the 2-cells obtained from the MOG algorithm. In both cases, using a higher-order structure is essential for improving predictive performance, even though two different strategies towards exploiting the higher-order structures are utilized. To support our claim, we run an experiment in which we replace the pooling layer in \(\mbox{CCNN}_{MOG2}\) by the cochain operator induced by \(A_{0,1}\), effectively rendering the neural network a GNN. In this setting, using the same setup as in the experiment of Section 9.4.2, we obtain an accuracy of 84.56%. This experiment reveals the performance advantages of employing higher-order structures, either by utilizing the input topological features supported on higher-order cells or via pooling strategies that augment higher-order cells.
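The push-forward operation at the heart of this comparison amounts to multiplying a cochain by a neighborhood matrix and applying a learnable map. A minimal numpy sketch under the assumptions of a tanh nonlinearity and a dense 0/1 neighborhood matrix (our experiments use trained PyTorch layers):

```python
import numpy as np

def push_forward(B, x, W):
    """Push a cochain x from r-cells to k-cells through a neighborhood matrix
    B (k-cells x r-cells), then apply a weight matrix W and a nonlinearity."""
    return np.tanh(B @ x @ W)

rng = np.random.default_rng(2)
B = (rng.random((3, 6)) < 0.5).astype(float)  # 3 higher-order cells over 6 vertices
x0 = rng.normal(size=(6, 4))                  # 0-cochain: 4 features per vertex
W = rng.normal(size=(4, 2))                   # learnable weights (random here)
x2 = push_forward(B, x0, W)                   # latent 2-cochain: 3 cells x 2 features
```

Replacing \(B\) by the vertex adjacency \(A_{0,1}\) recovers exactly the GNN-style message passing used in the ablation above.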

References

Atzmon, Matan, Haggai Maron, and Yaron Lipman. 2018. “Point Convolutional Neural Networks by Extension Operators.” ACM Transactions on Graphics 37 (4).
Bianchi, Filippo Maria, Claudio Gallicchio, and Alessio Micheli. 2022. “Pyramidal Reservoir Graph Neural Network.” Neurocomputing 470: 389–404.
Dodziuk, Jozef. 1976. “Finite-Difference Approach to the Hodge Theory of Harmonic Forms.” American Journal of Mathematics 98 (1): 79–104.
Eckmann, Beno. 1944. “Harmonische Funktionen Und Randwertaufgaben in Einem Komplex.” Commentarii Mathematici Helvetici 17 (1): 240–55.
Hagberg, Aric, Pieter Swart, and Daniel Schult. 2008. “Exploring Network Structure, Dynamics, and Function Using NetworkX.” Los Alamos National Laboratory (LANL), Los Alamos, NM (United States).
Hanocka, Rana, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. 2019. “MeshCNN: A Network with an Edge.” ACM Transactions on Graphics 38 (4): 1–12.
Joslyn, Cliff A, Sinan G Aksoy, Tiffany J Callahan, Lawrence E Hunter, Brett Jefferson, Brenda Praggastis, Emilie Purvine, and Ignacio J Tripodi. 2021. “Hypernetwork Science: From Multidimensional Networks to Computational Topology.” In Unifying Themes in Complex Systems x: Proceedings of the Tenth International Conference on Complex Systems, 377–92. Springer.
Kim, Vladimir G, Yaron Lipman, Xiaobai Chen, and Thomas Funkhouser. 2010. “Möbius Transformations for Global Intrinsic Symmetry Analysis.” Computer Graphics Forum 29 (5): 1689–1700.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. 2017. “Imagenet Classification with Deep Convolutional Neural Networks.” Communications of the ACM 60 (6): 84–90.
Lian, Z., A. Godil, B. Bustos, M. Daoudi, J. Hermans, S. Kawamura, Y. Kurita, G. Lavoué, P. Suetens, et al. 2011. “Shape Retrieval on Non-Rigid 3D Watertight Meshes.” In Eurographics Workshop on 3D Object Retrieval (3DOR).
Maron, Haggai, Meirav Galun, Noam Aigerman, Miri Trope, Nadav Dym, Ersin Yumer, Vladimir G Kim, and Yaron Lipman. 2017. “Convolutional Neural Networks on Surfaces via Seamless Toric Covers.” ACM Transactions on Graphics 36 (4): 71–71.
Mejia, Daniel, Oscar Ruiz-Salguero, and Carlos A. Cadavid. 2017. “Spectral-Based Mesh Segmentation.” International Journal on Interactive Design and Manufacturing 11 (3): 503–14.
Milano, Francesco, Antonio Loquercio, Antoni Rosinol, Davide Scaramuzza, and Luca Carlone. 2020. “Primal-Dual Mesh Convolutional Neural Networks.” Conference on Neural Information Processing Systems 33: 952–63.
Paszke, Adam, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. “Automatic Differentiation in PyTorch.” In NIPS Workshop.
Pedregosa, F., G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, et al. 2011. “Scikit-Learn: Machine Learning in Python.” Journal of Machine Learning Research 12: 2825–30.
Smirnov, Dmitriy, and Justin Solomon. 2021. “HodgeNet: Learning Spectral Geometry on Triangle Meshes.” ACM Transactions on Graphics 40 (4): 1–11.
Wang, Yunhai, Shmulik Asafi, Oliver Van Kaick, Hao Zhang, Daniel Cohen-Or, and Baoquan Chen. 2012. “Active Co-Analysis of a Set of Shapes.” ACM Transactions on Graphics 31 (6): 1–10.
Williams, Francis. 2022. “Point Cloud Utils.”

  1. The difficulty in these datasets is controlled by the compactness of the graph clusters; clusters in the ‘easy’ data have more in-between cluster connections, while clusters in the ‘hard’ data are more isolated (Bianchi, Gallicchio, and Micheli 2022).↩︎