D Learning discrete exterior calculus operators with CCANNs
The operator \(G_{tr}=G\odot att\) of Equations (5.4) and (5.5) has an advantageous cross-cutting interpretation. First, recall that \(G_{tr}\) has the same shape as the original operator \(G\). More importantly, \(G_{tr}\) can be viewed as a learnt version of \(G\). For instance, if \(G\) is the \(k\)-Hodge Laplacian \(\mathbf{L}_k\), then its learnt attention version \(G_{tr}\) represents a \(k\)-Hodge Laplacian adapted to the domain \(\mathcal{X}\) and to the learning task at hand. This perspective turns our attention framework into a tool for learning discrete exterior calculus (DEC) operators (Desbrun, Kanso, and Tong 2008). We refer the interested reader to recent works along these lines (Smirnov and Solomon 2021; Trask, Huang, and Hu 2022), in which neural networks are used to learn Laplacian operators for various shape analysis tasks.
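For concreteness, the following minimal NumPy sketch illustrates this masking interpretation; it is not the CCANN implementation, and the matrices and the random `att` merely stand in for a fixed operator \(G\) and a learnt attention tensor (Equations (5.4) and (5.5) are not reproduced here).

```python
import numpy as np

# Illustrative fixed operator G: the graph Laplacian of a path graph 0 - 1 - 2.
G = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])

rng = np.random.default_rng(0)
att = rng.uniform(0.5, 1.5, size=G.shape)  # stands in for learnt attention scores
att[G == 0] = 0.0                          # attention is only defined on the support of G

G_tr = G * att                             # elementwise (Hadamard) product, as in G ⊙ att

assert G_tr.shape == G.shape               # same shape as the original operator
assert np.all((G_tr != 0) <= (G != 0))     # support of G_tr is contained in that of G
```

The learnt operator thus reweights the nonzero entries of \(G\) while keeping its shape and sparsity pattern, which is what allows it to be read as a task-adapted Laplacian.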
Concretely, one of the main building blocks of DEC is a collection of linear operators of the form \(\mathcal{A} \colon \mathcal{C}^i(\mathcal{X}) \to \mathcal{C}^j(\mathcal{X})\) that act on a cochain \(\mathbf{H}\) to produce another cochain \(\mathcal{A}(\mathbf{H})\). An example of such an operator \(\mathcal{A}\) is the graph Laplacian. There are seven primitive DEC operators, including the discrete exterior derivative, the Hodge star and the wedge product; these seven primitive operators can be combined to form other operators. In our setting, the discrete exterior derivatives are precisely a signed version of the incidence matrices defined in the context of cell/simplicial complexes. We denote the \(k\)-signed incidence matrix defined on a cell/simplicial complex by \(\mathbf{B}_k\). It is common in discrete exterior calculus (Desbrun, Kanso, and Tong 2008) to refer to \(\mathbf{B}_k^T\) as the \(k^{th}\) discrete exterior derivative \(d^k\). So, from a DEC point of view, the matrices \(\mathbf{B}_0^T\), \(\mathbf{B}_1^T\) and \(\mathbf{B}_2^T\) are regarded as the discrete exterior derivatives \(d^0\), \(d^1\) and \(d^2\); applied to 0-, 1-, and 2-cochains \(\mathbf{H}\) defined on \(\mathcal{X}\), they yield \(d^0(\mathbf{H})\), \(d^1(\mathbf{H})\), and \(d^2(\mathbf{H})\), the discrete analogs of the gradient \(\nabla \mathbf{H}\), curl \(\nabla\times \mathbf{H}\), and divergence \(\nabla \cdot \mathbf{H}\) of a smooth function defined on a smooth surface. We refer the reader to (Desbrun, Kanso, and Tong 2008) for a comprehensive list of DEC operators and their interpretations. Together, cochains and the operators that act on them provide a concrete framework for computing a cochain of interest, such as a cochain obtained by solving a partial differential equation on a discrete surface.
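As a small worked example, the sketch below builds the signed incidence matrices of a single filled triangle and assembles combinatorial Hodge Laplacians from them. It assumes the indexing convention used above, in which \(\mathbf{B}_k\) relates \(k\)-cells to \((k+1)\)-cells so that \(\mathbf{B}_k^T\) plays the role of \(d^k\) (and identity Hodge stars are used throughout).

```python
import numpy as np

# One filled triangle on vertices {0, 1, 2}:
# edges e0 = (0, 1), e1 = (0, 2), e2 = (1, 2); face f0 = (0, 1, 2).
B0 = np.array([[-1, -1,  0],      # vertex-to-edge signed incidence, shape |V| x |E|
               [ 1,  0, -1],
               [ 0,  1,  1]])
B1 = np.array([[ 1],              # edge-to-face signed incidence, shape |E| x |F|
               [-1],
               [ 1]])

d0, d1 = B0.T, B1.T               # discrete exterior derivatives d^0, d^1

# d^1 after d^0 vanishes: the discrete analogue of curl(grad f) = 0.
assert np.all(d1 @ d0 == 0)

# Combinatorial Hodge Laplacians assembled from the incidence matrices.
L0 = B0 @ B0.T                    # 0-Hodge Laplacian = graph Laplacian on vertices
L1 = B0.T @ B0 + B1 @ B1.T        # 1-Hodge Laplacian on edges

H = np.array([0.0, 1.0, 3.0])     # a 0-cochain (one value per vertex)
grad_H = d0 @ H                   # discrete gradient of H, a 1-cochain on edges
```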
Our attention framework can thus be viewed as a non-linear counterpart of the linear DEC operators \(\mathcal{A}\), and it can be used to learn DEC operators on a domain \(\mathcal{X}\) for a particular learning task. Specifically, a linear operator \(\mathcal{A}\), as it appears in classical DEC, is a special case of Equations (5.4) and (5.5). Unlike existing work (Smirnov and Solomon 2021; Trask, Huang, and Hu 2022), our DEC learning approach based on CCANNs generalizes to all domains on which DEC is typically defined; examples of such domains include triangular and polygonal meshes (Crane et al. 2013). In contrast, existing operator learning methods are tailored to particular types of DEC operators, and therefore cannot be used to learn arbitrary DEC operators.
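To illustrate the special-case claim, the sketch below assumes a CCANN-style update of the schematic form \(\mathbf{H} \mapsto \phi\big((G \odot att)\,\mathbf{H}\,W\big)\) (the precise form is given by Equations (5.4) and (5.5), which are not reproduced here). With the attention fixed to ones, the weights set to the identity, and a linear activation, such a layer collapses to the linear operator \(\mathbf{H} \mapsto \mathcal{A}(\mathbf{H})\).

```python
import numpy as np

def ccann_like_layer(G, att, H, W, phi=lambda x: x):
    """Schematic attention layer H -> phi((G * att) @ H @ W).

    Only a sketch of the general shape of Equations (5.4)-(5.5),
    not the actual CCANN update rule.
    """
    return phi((G * att) @ H @ W)

# A fixed DEC operator A (here the graph Laplacian of a triangle graph)
# and a batch of 0-cochain features with two channels per vertex.
A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
H = np.arange(6, dtype=float).reshape(3, 2)

# Special case: all-ones attention, identity weights, identity activation.
att = np.ones_like(A)
W = np.eye(H.shape[1])
out = ccann_like_layer(A, att, H, W)

assert np.allclose(out, A @ H)   # the layer reduces to the linear DEC operator A
```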