Chapter 2 Motivation

Combinatorial complex neural networks (CCNNs), as presented in this work, generalize graph neural networks (GNNs) and their higher-order analogues via topological constructions. In this chapter, we motivate the use of topological constructions in machine learning, and specifically in deep learning, from three angles. First, data modeling: how topological abstraction can help us reason and compute with various data types supported on topological spaces. Second, utility: how accounting for topology improves performance. Third, unification: how topology can be used to synthesize and abstract disparate concepts.

2.1 Modeling and learning from data on topological spaces

In the context of machine learning, the domain on which the data is supported is typically a (linear) vector space. This underlying vector space facilitates many computations and is often implicitly assumed. However, as has been recognized within geometric deep learning, it is often essential to consider data supported on different domains. Indeed, explicitly considering and modeling the domain on which the data is supported can be crucial for various reasons.

First, different domains can have distinct characteristics and properties that require different types of deep learning models to effectively process and analyze the data. For example, a graph domain may require a graph convolutional network (Kipf and Welling 2016) to account for the non-Euclidean structure, while a point cloud domain may require a PointNet-like architecture (Qi et al. 2017; Zaheer et al. 2017) to handle unordered sets of points.
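
To make the contrast concrete, the following minimal sketch (plain NumPy, with toy features; all names are illustrative, not taken from any particular library) contrasts a graph-convolution-style update, which exploits adjacency, with a DeepSets-style update, which can only use permutation-invariant pooling because a point cloud carries no adjacency structure.

```python
import numpy as np

# Toy graph on 4 vertices: adjacency matrix A and vertex features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)  # one 3-dimensional feature per vertex

# Graph-convolution-style update (simplified): each vertex averages its
# neighbors' features, explicitly using the graph's adjacency structure.
deg = A.sum(axis=1, keepdims=True)
X_updated = (A @ X) / deg

# DeepSets-style update (simplified): a point cloud has no adjacency, so
# the structure-respecting reduction is a permutation-invariant pooling
# over the whole unordered set of points.
points = np.random.rand(100, 3)
pooled = points.mean(axis=0)
```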

Second, the specific data within each domain can vary widely in terms of size, complexity and noise. By understanding specific data properties within a given domain, researchers can develop models that are tailored to those particular properties. For example, a model designed for point clouds with high levels of noise may incorporate techniques such as outlier removal or local feature aggregation.

Third, accurately distinguishing between domain and data is important for developing models that generalize well to new, unseen data. By identifying the underlying structure of the domain, researchers can develop models that are robust to variations in the data while still being able to capture relevant geometric features. To summarize, by considering both the domain and data together, researchers can develop models that are better suited to handle the specific challenges and complexities of different geometric learning tasks.

Different perspectives on data: domains and relations. When referring to topological data or topological data analysis in the literature, there are different views on what the actual data consists of and what it is supposed to model. Here, we follow Bick et al. (2021) and distinguish between different types of data and goals of our learning procedures, even though the distinction between these types of data can sometimes be blurrier than presented here.

Relational data. Relational data describes the relations between different entities or objects. This fundamental concept can manifest in various ways, such as connections between users in social networks, where friendships or follower relationships are examples of such relations. Traditionally, these relations are understood through graphs, with vertices and edges representing entities and their pairwise connections, respectively. Stated differently, the edges of the graph are the basic units of measurement; i.e., every piece of relational data involves exactly two vertices.

However, many real-world phenomena naturally involve complex, multi-party interactions, where more than two entities interact with each other in intricate ways. For instance, groups of individuals within a society, sets of co-authors, and genes or proteins that interact with each other are all examples of such higher-order relations. Such dependencies, which go beyond pairwise interactions, can be modeled by higher-order relations and aptly abstracted as hypergraphs, cell complexes, or CCs. Clearly, topological ideas can play an important role in understanding this data, and crucially enable us to move from graphs modeling pairwise relations to richer representations.
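
The following minimal sketch (plain Python; the co-authorship example is illustrative) shows why keeping relations higher-order matters: flattening a triple collaboration into pairwise edges discards information that a hypergraph or CC retains.

```python
from itertools import combinations

# A co-authorship record: one paper written jointly by three authors.
paper_authors = {"ana", "bo", "carla"}

# Graph view: the triple is flattened into three pairwise edges, losing
# the information that all three authors collaborated *together*.
edges = set(map(frozenset, combinations(paper_authors, 2)))

# Hypergraph view: the triple is kept as a single higher-order relation.
hyperedges = {frozenset(paper_authors)}

# The distinction matters: three separate papers by the pairs (ana, bo),
# (bo, carla), (ana, carla) produce the same graph as one triple-authored
# paper, but a different hypergraph.
```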

Data as an instance of a domain. A related conceptualization of what our observed data is supposed to represent is adopted in TDA. Namely, the observed data as a whole is assumed to correspond to a noisy instance of a topological object itself; i.e., we aim to learn the 'topological shape' of the data. Note that in this context, we typically do not directly observe relational data; instead, we construct relations from the observed data, which we then use to classify the observed dataset. Persistent homology performed on point cloud data is a mainstream example of this viewpoint.
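
As an illustration of this viewpoint, the sketch below builds a Vietoris-Rips complex at a single scale from a noisy point cloud; persistent homology would track the resulting topological features across all scales rather than at one. The construction is standard, but the code is a didactic sketch, not a library implementation.

```python
import numpy as np
from itertools import combinations

def vietoris_rips(points, eps, max_dim=2):
    """Vietoris-Rips complex at scale eps: a subset of points spans a
    simplex whenever all its pairwise distances are at most eps."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [[(i,) for i in range(n)]]  # 0-simplices: the points
    for dim in range(1, max_dim + 1):
        simplices.append([
            s for s in combinations(range(n), dim + 1)
            if all(dist[i, j] <= eps for i, j in combinations(s, 2))
        ])
    return simplices

# Noisy samples from a circle: at a suitable scale, the complex recovers
# the circle's 'topological shape' (a single 1-dimensional hole).
theta = np.random.rand(30) * 2 * np.pi
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(30, 2)
complex_at_eps = vietoris_rips(pts, eps=0.7)
```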

Data supported on a topological domain. Topological ideas also play an important role when considering data that is defined on top of a domain such as a graph. This could be particular dynamics, such as epidemic spreading, or any other type of data or signals supported on the vertices (signals supported on the edges of a graph are considered less often, though exceptions exist (Schaub and Segarra 2018; Schaub et al. 2021)). In the language of graph signal processing, a variety of functions or signals can be defined on a graph domain, and these are said to be supported on the domain. Crucially, the graph itself can be arbitrarily complex but is typically considered to be fixed; the observed data is not relational, but supported on the vertices of the graph.

Moving beyond graphs, more general topological constructions, such as CCs, can support data not just on vertices and edges, but also on other 'higher-order' entities, as demonstrated in Figure 2.1(a-b). For example, vector fields defined on meshes in computer graphics are often data supported on edges and faces, and can be conveniently modeled as data supported on a higher-order domain (Goes, Desbrun, and Tong 2016). Similarly, class-labeled data can be provided on the edges and faces of a given mesh; see Figure 2.1(c) for an example. To process such data, it is again relevant to take into account the structure of the underlying topological domain.
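
A minimal sketch of this idea, with an illustrative toy mesh and random features: data on a higher-order domain amounts to one feature array per rank (a cochain per rank), rather than a single vertex-feature matrix.

```python
import numpy as np

# A tiny triangulated domain: 4 vertices, 5 edges, 2 triangular faces.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
faces = [(0, 1, 2), (1, 2, 3)]

# Data supported on the domain lives on *all* ranks, not just vertices:
# one feature vector per vertex, per edge, and per face.
signals = {
    0: np.random.rand(len(vertices), 8),  # e.g. positions, temperatures
    1: np.random.rand(len(edges), 8),     # e.g. flows, edge vector fields
    2: np.random.rand(len(faces), 8),     # e.g. face vector fields, labels
}
```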


FIGURE 2.1: Data might be supported naturally on higher-order relations. (a): An edge-based vector field. (b): A face-based vector field. Both vector fields in (a) and (b) are defined on a cell complex torus; an interactive visualization of (a-b) is provided at https://app.vectary.com/p/5uLAflZj6U2kvACv2kk2tN. (c): Class-labeled topological data might naturally be supported on higher-order relations. For instance, mesh segmentation labels for 2-faces are depicted by different colors (blue, green, turquoise, pink, brown) to represent different parts (head, neck, body, legs, tail) of the horse.

Modeling and processing data beyond graphs: illustrative examples. Graphs are well-suited for modeling systems that exhibit pairwise interactions. For instance, particle-based fluid simulations can be effectively represented using graphs as shown in Figure 2.2(a), with message passing used to update the physical state of fluid molecules. In this approach, each molecule is represented as a vertex containing its physical state, and the edges connecting them are utilized to calculate their interactions (Sanchez-Gonzalez et al. 2020; Shlomi, Battaglia, and Vlimant 2020).
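
A minimal sketch of this scheme follows, with hand-coded message and update rules standing in for the learned networks of Sanchez-Gonzalez et al. (2020); the toy particle system and step size are illustrative.

```python
import numpy as np

def message_passing_step(positions, velocities, edges):
    """One simplified message-passing update: each particle aggregates
    messages from its neighbors and updates its own state. In learned
    simulators, the message and update functions are neural networks."""
    messages = np.zeros_like(velocities)
    for i, j in edges:  # edges carry pairwise particle interactions
        msg = positions[j] - positions[i]  # stand-in for a learned message
        messages[i] += msg
        messages[j] -= msg
    return velocities + 0.1 * messages     # stand-in for a learned update

positions = np.random.rand(5, 3)
velocities = np.zeros((5, 3))
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # e.g. particles within a cutoff
velocities = message_passing_step(positions, velocities, edges)
```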


FIGURE 2.2: Examples of processing data supported on graphs or on higher-order networks. (a): Graphs can be used to model particle interactions in fluid dynamics. Vertices represent particles, whereas particle-to-particle interactions are modeled via message passing among vertices (Sanchez-Gonzalez et al. 2020; Shlomi, Battaglia, and Vlimant 2020). (b): When modeling springs and self-collision, it is natural to work with edges rather than vertices. This is because the behavior of the cloth is determined by the tension and compression forces acting along the edges, and not only by the positions of individual particles. To model the interactions among multiple edges, polygonal faces can be used to represent the local geometry of the cloth. The polygonal faces provide a way to compute higher-order message passing among the edges.

Graphs may not be adequate for modeling more complex systems such as cloth simulation, where state variables are associated with relations like edges or triangular faces rather than vertices. In such cases, higher-order message passing is required to compute and update physical states, as depicted in Figure 2.2(b). A similar challenge arises in natural language processing, where language exhibits multiple layers of syntax, semantics and context. Although GNNs may capture basic syntactic and semantic relations between words, more complex relations such as negation, irony or sarcasm may be difficult to represent (Girault, Narayanan, and Ortega 2017). Including higher-order and hierarchical relations can provide a better model for these relations, and can enable a more nuanced and accurate understanding of language.
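
The sketch below illustrates such higher-order message passing in the cloth setting, under illustrative assumptions (toy mesh, random edge features, mean aggregation in place of learned functions): faces aggregate the features of their boundary edges and broadcast the result back, so edges that share a face exchange information directly.

```python
import numpy as np

def edge_update_via_faces(edge_feats, faces, edge_index):
    """Edges exchange information through the faces they bound: each face
    aggregates its boundary edges' features, and each edge then receives
    the aggregate of every face containing it."""
    face_feats = np.stack([
        np.mean([edge_feats[edge_index[e]] for e in face_edges], axis=0)
        for face_edges in faces
    ])
    new_edge_feats = edge_feats.copy()
    for f, face_edges in enumerate(faces):
        for e in face_edges:
            new_edge_feats[edge_index[e]] += face_feats[f]
    return new_edge_feats

# A strip of cloth with two triangular faces sharing the edge (1, 2).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
edge_index = {e: k for k, e in enumerate(edges)}
faces = [[(0, 1), (0, 2), (1, 2)], [(1, 2), (1, 3), (2, 3)]]
edge_feats = np.random.rand(len(edges), 4)  # e.g. tension along each edge
edge_feats = edge_update_via_faces(edge_feats, faces, edge_index)
```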

2.2 The utility of topology

In addition to providing a versatile framework for modeling complex systems, TDL models have broad utility: they can induce meaningful inductive biases for learning, facilitate efficient message propagation over longer ranges in the underlying domain, enhance the expressive power of existing graph-based models, and have the potential to unveil crucial characteristics of deep networks themselves, as described next.

Building hierarchical representations of data. While leveraging higher-order information is important for learning representations, it is also crucial to preserve the complex higher-order relations between entities in order to achieve powerful and versatile representations. For instance, humans can form abstractions and analogies by building relations among relations hierarchically. However, graph-based models, which are commonly used for relational reasoning among various entities (Santoro et al. 2017; Zhang et al. 2020; Chen et al. 2019; Schlichtkrull et al. 2018), are limited in their ability to build higher-order relations among these entities.

The need for hierarchical relational reasoning (Xu et al. 2022) demands methods that can capture deeper abstract relations among relations, beyond modeling relations between raw entities. Higher-order network models provide a promising solution for addressing the challenge of higher-order relational reasoning, as depicted in Figure 2.3.


FIGURE 2.3: An illustration of representing hierarchical data via higher-order networks. Black arrows indicate graph augmentation by higher-order relations, whereas orange arrows indicate coarsened graph extraction. (a): A graph encodes binary relations (pink edges) among abstract entities (yellow vertices). (b): Higher-order relations, represented by blue cells, can be thought of as relations among vertices or edges of the original graph. (c): Extraction of a coarsened version of the original graph. In the coarsened graph, vertices represent the higher-order relations (blue cells) of the original graph, and edges represent the intersections of these blue cells. (d-e): The same process repeats to obtain a more coarsened version of the original graph. The entire process corresponds to hierarchical higher-order relations, that is, relations among relations, which extract meaning and content (including the 'shape of data'), a common task in topological data analysis (Carlsson 2009; Dey and Wang 2022a).
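
A minimal sketch of the coarsening step in Figure 2.3 (plain Python; the cells are illustrative): each higher-order cell becomes a vertex of the coarsened graph, and two coarse vertices are joined whenever their cells intersect.

```python
from itertools import combinations

def coarsen(cells):
    """One coarsening step: cells become vertices, and two coarse
    vertices are joined whenever the corresponding cells intersect."""
    cells = [frozenset(c) for c in cells]
    vertices = list(range(len(cells)))
    edges = [(i, j) for i, j in combinations(vertices, 2)
             if cells[i] & cells[j]]
    return vertices, edges

# Three higher-order cells (the 'blue cells') covering a small graph.
cells = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]
coarse_vertices, coarse_edges = coarsen(cells)
# coarse_edges == [(0, 1), (0, 2), (1, 2)]: every pair of cells overlaps.
# Iterating coarsen() on cells of the coarsened graph yields the hierarchy
# of 'relations among relations' in Figure 2.3(d-e).
```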

Improving performance through inductive bias. A topological deep learning model can be leveraged to enhance the predictive performance on graph-based learning tasks by providing a well-defined procedure to lift a graph to a given higher-order network. The incorporation of multi-vertex relations via this approach can be viewed as an inductive bias, and is utilized to improve predictive performance. An inductive bias allows a learning algorithm to prioritize one solution over another, based on factors beyond the observed data (Mitchell 1980). Figure 2.4 illustrates two forms of augmentation on graphs using higher-order cells.


FIGURE 2.4: The graph in the middle can be augmented with higher-order relations to improve a learning task. In (a), cells have been added to the missing faces; a similar inductive bias has been considered by Bodnar et al. (2021) to improve classification on molecular data. In (b), one-hop neighborhoods have been added to some of the vertices in the graph; Feng et al. (2019) use such an inductive bias, based on lifting a graph to its corresponding one-hop neighborhood hypergraph, to improve performance on vertex-based tasks.
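
As a concrete example of such a lifting, the sketch below implements the one-hop neighborhood construction in the spirit of Feng et al. (2019): each vertex contributes one hyperedge consisting of itself and its neighbors (the function name and toy graph are illustrative).

```python
def lift_to_neighborhood_hypergraph(n_vertices, edges):
    """Lift a graph to its one-hop neighborhood hypergraph: one hyperedge
    per vertex, given by that vertex's closed neighborhood."""
    neighbors = {v: {v} for v in range(n_vertices)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    return [frozenset(nbhd) for nbhd in neighbors.values()]

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
hyperedges = lift_to_neighborhood_hypergraph(4, edges)
# hyperedges[2] == frozenset({0, 1, 2, 3}): vertex 2 and its neighborhood.
```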

Constructing efficient long-range message passing. The hierarchical structure of TDL networks facilitates the construction of long-range interactions in an efficient manner. By leveraging such structures, signals defined on the vertices of a domain can be propagated efficiently between distant vertices that are connected only by long edge paths. Figure 2.5 illustrates the process of lifting a signal defined on graph vertices by augmenting the graph with additional topological relations, and unpooling the signal back to the vertices. Hajij, Ramamurthy, et al. (2022) employ such pooling and unpooling operations to construct models capable of capturing distant dependencies, leading to more effective and adaptive learning of both the local and global structure of the underlying domain. For related work on graph-based pooling, we refer the reader to (Su, Hu, and Li 2021; Itoh, Kubo, and Ikeda 2022).


FIGURE 2.5: Illustration of message-passing among binary or higher-order cells. (a): Using a graph-based message-passing scheme, some information that starts at vertex \(x\) needs to travel a long distance, that is, a long edge path, before reaching vertex \(y\). (b): Using a higher-order cell structure, indicated by the blue cell, the signal can be lifted from the vertices to a higher-order cell and thus propagates back and forth between vertices \(x\) and \(y\) in fewer steps. This shortcut allows information to be shared efficiently, in fewer computational steps, among all vertices that belong to the blue cell (Hajij, Ramamurthy, et al. 2022).
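
A minimal sketch of this pooling/unpooling shortcut (mean pooling and additive broadcast are illustrative stand-ins for learned operations): vertex features are pooled onto a higher-order cell and broadcast back, so all vertices of the cell exchange information in two steps regardless of their graph distance.

```python
import numpy as np

def pool_unpool(vertex_feats, cell):
    """Lift vertex features to a higher-order cell and back: the cell's
    feature is a pooled summary of its vertices, which is then broadcast
    back to them, creating a two-step shortcut across the domain."""
    cell = list(cell)
    cell_feat = vertex_feats[cell].mean(axis=0)  # pool (lift)
    out = vertex_feats.copy()
    out[cell] += cell_feat                       # unpool (broadcast)
    return out

vertex_feats = np.random.rand(10, 4)
blue_cell = {0, 9}  # e.g. vertices x and y, far apart along edge paths
vertex_feats = pool_unpool(vertex_feats, blue_cell)
```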

Improved expressive power. TDL can capture intricate and subtle dependencies that elude traditional graph-based models. By allowing richer higher-order interactions, higher-order networks can have improved expressive power and lead to superior performance in numerous tasks (Morris et al. 2019; Bodnar et al. 2021).

2.3 A unifying perspective on deep learning and structured computations

Topology provides a framework to generalize numerous discrete domains that are typically encountered in scientific computations. Examples of these domains include graphs, simplicial complexes, and cell complexes; see Figure 2.6. Computational models supported on such domains can be seen as generalizations of various deep learning architectures; i.e., they can provide a unifying framework in which to compare and design TDL architectures. Further, understanding the connections between different types of TDL via a unifying theory can provide valuable insights into the underlying structure and behavior of complex systems, where higher-order relations naturally occur (Hansen and Ghrist 2019; Hajij, Istvan, and Zamzmi 2020; Bick et al. 2021; Majhi, Perc, and Ghosh 2022).


FIGURE 2.6: This work introduces combinatorial complexes, which are higher-order networks that generalize most discrete domains typically encountered in scientific computing, including (a) sequences and images, (b) graphs, (c) 3D shapes and simplicial complexes, (d) cubical and cellular complexes, (e) discrete manifolds, and (f) hypergraphs.

Understanding why deep networks work. A remarkable aspect of contemporary deep neural networks is their capacity to generalize very well to unseen data, even though they have an enormous number of parameters. This apparent contradiction with classical statistical learning theory has prompted researchers to seek novel approaches to understanding the inner workings of deep neural networks (Neyshabur et al. 2019). Recent studies have revealed that the topology of graphs induced by deep neural networks, or of their corresponding training trajectories modeled as Markov chains, exhibits strong correlations with generalization performance (Birdal et al. 2021; Dupuis, Deligiannidis, and Şimşekli 2023). An open question is how far such correlations between topological structure, computational function, and generalization properties can be found in learning architectures supported on higher-order domains. We believe that by studying these questions using our novel topological constructs, such as CCs and CCNNs, we can gain deeper insights not only into TDL but also into deep learning more generally.

References

Bick, Christian, Elizabeth Gross, Heather A Harrington, and Michael T Schaub. 2021. “What Are Higher-Order Networks?” arXiv Preprint arXiv:2104.11329.
Birdal, Tolga, Aaron Lou, Leonidas J Guibas, and Umut Simsekli. 2021. “Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks.” Advances in Neural Information Processing Systems.
Bodnar, Cristian, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. 2021. “Weisfeiler and Lehman Go Cellular: CW Networks.” In Advances in Neural Information Processing Systems.
Carlsson, Gunnar. 2009. “Topology and Data.” Bulletin of the American Mathematical Society 46 (2): 255–308.
Chen, Yunpeng, Marcus Rohrbach, Zhicheng Yan, Yan Shuicheng, Jiashi Feng, and Yannis Kalantidis. 2019. “Graph-Based Global Reasoning Networks.” In Conference on Computer Vision and Pattern Recognition.
Dey, Tamal K., and Yusu Wang. 2022a. Computational Topology for Data Analysis. Cambridge University Press.
Dupuis, Benjamin, George Deligiannidis, and Umut Şimşekli. 2023. “Generalization Bounds with Data-Dependent Fractal Dimensions.” arXiv Preprint arXiv:2302.02766.
Feng, Yifan, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. 2019. “Hypergraph Neural Networks.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 3558–65.
Girault, Benjamin, Shrikanth S. Narayanan, and Antonio Ortega. 2017. “Towards a Definition of Local Stationarity for Graph Signals.” In IEEE International Conference on Acoustics, Speech and Signal Processing.
Goes, Fernando de, Mathieu Desbrun, and Yiying Tong. 2016. “Vector Field Processing on Triangle Meshes.” In ACM SIGGRAPH 2016 Courses, 1–49. Association for Computing Machinery.
Hajij, Mustafa, Kyle Istvan, and Ghada Zamzmi. 2020. “Cell Complex Neural Networks.” In NeurIPS 2020 Workshop TDA and Beyond.
Hajij, Mustafa, Karthikeyan Natesan Ramamurthy, Aldo Saenz, and Ghada Zamzmi. 2022. “High Skip Networks: A Higher Order Generalization of Skip Connections.” In ICLR 2022 Workshop on Geometrical and Topological Representation Learning.
Hansen, Jakob, and Robert Ghrist. 2019. “Toward a Spectral Theory of Cellular Sheaves.” Journal of Applied and Computational Topology 3 (4): 315–58.
Itoh, Takeshi D., Takatomi Kubo, and Kazushi Ikeda. 2022. “Multi-Level Attention Pooling for Graph Neural Networks: Unifying Graph Representations with Multiple Localities.” Neural Networks 145: 356–73.
Kipf, Thomas N., and Max Welling. 2016. “Semi-Supervised Classification with Graph Convolutional Networks.” arXiv Preprint arXiv:1609.02907.
Majhi, Soumen, Matjaž Perc, and Dibakar Ghosh. 2022. “Dynamics on Higher-Order Networks: A Review.” Journal of the Royal Society Interface 19 (188): 20220043.
Mitchell, Tom M. 1980. “The Need for Biases in Learning Generalizations.” Technical Report, Rutgers University.
Morris, Christopher, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. 2019. “Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks.” In Proceedings of the AAAI Conference on Artificial Intelligence.
Neyshabur, Behnam, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. 2019. “The Role of Over-Parametrization in Generalization of Neural Networks.” In International Conference on Learning Representations.
Qi, Charles R., Hao Su, Kaichun Mo, and Leonidas J. Guibas. 2017. “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation.” In Conference on Computer Vision and Pattern Recognition, 652–60.
Sanchez-Gonzalez, Alvaro, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. 2020. “Learning to Simulate Complex Physics with Graph Networks.” In International Conference on Machine Learning.
Santoro, Adam, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. “A Simple Neural Network Module for Relational Reasoning.” In Advances in Neural Information Processing Systems.
Schaub, Michael T., and Santiago Segarra. 2018. “Flow Smoothing and Denoising: Graph Signal Processing in the Edge-Space.” In 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 735–39.
Schaub, Michael T., Yu Zhu, Jean-Baptiste Seby, T. Mitchell Roddenberry, and Santiago Segarra. 2021. “Signal Processing on Higher-Order Networks: Livin' on the Edge... and Beyond.” Signal Processing 187: 108149.
Schlichtkrull, Michael, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. “Modeling Relational Data with Graph Convolutional Networks.” In European Semantic Web Conference.
Shlomi, Jonathan, Peter Battaglia, and Jean-Roch Vlimant. 2020. “Graph Neural Networks in Particle Physics.” Machine Learning: Science and Technology 2 (2): 021001.
Su, Zidong, Zehui Hu, and Yangding Li. 2021. “Hierarchical Graph Representation Learning with Local Capsule Pooling.” In ACM International Conference on Multimedia in Asia.
Xu, Chenxin, Maosen Li, Zhenyang Ni, Ya Zhang, and Siheng Chen. 2022. “GroupNet: Multiscale Hypergraph Neural Networks for Trajectory Prediction with Relational Reasoning.” In Conference on Computer Vision and Pattern Recognition.
Zaheer, Manzil, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. 2017. “Deep Sets.” In Advances in Neural Information Processing Systems.
Zhang, Shi-Xue, Xiaobin Zhu, Jie-Bo Hou, Chang Liu, Chun Yang, Hongfa Wang, and Xu-Cheng Yin. 2020. “Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection.” In Conference on Computer Vision and Pattern Recognition, 9699–9708.
