Talks
2024
PhD Viva
Imperial College London
Date: Monday, 2nd December 2024, 10:30am GMT
Title: Diagrammatic Algebra for Equivariant Neural Network Architectures
Twenty-Seventh European Conference on Artificial Intelligence, Santiago de Compostela, Spain
Date: Thursday, 24th October 2024, 11:00am CET
Title: Connecting Permutation Equivariant Neural Networks and Partition Diagrams
Permutation equivariant neural networks are often constructed using tensor powers of $\mathbb{R}^{n}$ as their layer spaces. We show that all of the weight matrices that appear in these neural networks can be obtained from Schur-Weyl duality between the symmetric group and the partition algebra. In particular, we adapt Schur-Weyl duality to derive a simple, diagrammatic method for calculating the weight matrices themselves.
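As a concrete illustration of that diagrammatic method, here is a minimal NumPy sketch (an illustration for this page, not code from the talk; the function names `set_partitions` and `diagram_basis` are placeholders): each set partition of the $k+l$ tensor indices gives one spanning matrix for the permutation equivariant linear maps $(\mathbb{R}^{n})^{\otimes k} \to (\mathbb{R}^{n})^{\otimes l}$, with entry $1$ wherever indices lying in the same block agree.

```python
# Minimal sketch (illustration only): the diagram spanning set for permutation
# equivariant linear maps (R^n)^{(x)k} -> (R^n)^{(x)l}, one matrix per set
# partition of the k+l tensor indices.
import itertools
import numpy as np

def set_partitions(elements):
    """Yield every set partition of `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # place `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + partition

def diagram_basis(n, k, l):
    """Spanning matrices for the S_n-equivariant maps (R^n)^{(x)k} -> (R^n)^{(x)l}."""
    basis = []
    for partition in set_partitions(list(range(k + l))):
        B = np.zeros((n,) * (k + l))
        for idx in itertools.product(range(n), repeat=k + l):
            # entry is 1 iff indices lying in the same block are all equal
            if all(idx[a] == idx[b] for block in partition for a in block for b in block):
                B[idx] = 1.0
        basis.append(B.reshape(n ** l, n ** k))  # first l axes index the output
    return basis

# Example: k = l = 1 recovers the two Deep Sets matrices, the identity and the all-ones matrix.
for B in diagram_basis(3, 1, 1):
    print(B)
```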
Workshop on Symmetry and Equivariance in Deep Learning, Paris, France
Date: Wednesday, 4th September 2024, 2:00pm CET
Title: Diagrammatic Mathematics for Equivariant Deep Learning Architectures
Video (YouTube), Slides
In deep learning, we would like to develop principled approaches for constructing neural network architectures. One important approach involves encoding symmetries into neural network architectures using representations of groups such that the learned functions are equivariant to the group. In this talk, we show how certain group equivariant neural network architectures can be built using set partition diagrams. In many cases, we can establish a category theory framework both for the set partition diagrams and for the equivariant linear maps between layer spaces. We extend this framework to characterise the weight matrices that appear in neural networks that are equivariant to the automorphism group of a graph.
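For orientation, the constraint being imposed on each learnable linear layer is the standard equivariance condition (stated here in general terms; it is not specific to this talk):

$$ W \rho_{\mathrm{in}}(g) = \rho_{\mathrm{out}}(g)\, W \quad \text{for all } g \in G, $$

where $\rho_{\mathrm{in}}$ and $\rho_{\mathrm{out}}$ are the representations of the group $G$ on the input and output layer spaces, so that the layer $x \mapsto Wx$ satisfies $W(g \cdot x) = g \cdot (Wx)$.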
Quantum Groups Seminar (Online)
Date: Monday, 20th May 2024, 4:00pm CET
Title: Compact Matrix Quantum Group Equivariant Neural Networks
In deep learning, we would like to develop principled approaches for constructing neural networks. One important approach involves identifying symmetries that are inherent in data and then encoding them into neural network architectures using representations of groups. However, there exist so-called “quantum symmetries” that cannot be understood formally by groups. In this talk, we show how to construct neural networks that are equivariant to compact matrix quantum groups using Woronowicz’s version of Tannaka-Krein duality. We go on to characterise the linear weight matrices that appear in these neural networks for a class of compact matrix quantum groups known as “easy”. In particular, we show that every compact matrix group equivariant neural network is a compact matrix quantum group equivariant neural network.
The Royal Institution of Great Britain, London, UK
Date: Thursday, 29th February 2024, 2:00pm GMT
Title: Exploring Group Equivariant Neural Networks Using Set Partition Diagrams
2023
Fortieth International Conference on Machine Learning, Honolulu, Hawaii, United States
Date: Wednesday, July 26th, 5:12pm HST (Thursday, July 27th, 4:12am BST)
Title: Brauer's Group Equivariant Neural Networks
Live Presentation (SlidesLive)
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$ for three symmetry groups that are missing from the machine learning literature: $O(n)$, the orthogonal group; $SO(n)$, the special orthogonal group; and $Sp(n)$, the symplectic group. In particular, we find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces in the standard basis of $\mathbb{R}^{n}$ when the group is $O(n)$ or $SO(n)$, and in the symplectic basis of $\mathbb{R}^{n}$ when the group is $Sp(n)$.
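To make the $O(n)$ case concrete, here is a minimal NumPy sketch (an illustration for this page, not the paper's code) of the smallest interesting example: the three Brauer diagrams on $2 + 2$ nodes, namely the identity pairing, the crossing, and the cup-cap, give a spanning set for the equivariant linear maps $(\mathbb{R}^{n})^{\otimes 2} \to (\mathbb{R}^{n})^{\otimes 2}$, and each spanning matrix commutes with $g \otimes g$ for every orthogonal $g$.

```python
# Minimal sketch (illustration only): the three Brauer-diagram matrices spanning the
# O(n)-equivariant linear maps (R^n)^{(x)2} -> (R^n)^{(x)2}, checked numerically.
import numpy as np

n = 4
I = np.eye(n * n)                                   # identity pairing: each input leg to its output leg
S = I.reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)  # crossing: swap the two legs
v = np.eye(n).reshape(-1)                           # sum_i e_i (x) e_i
C = np.outer(v, v)                                  # cup-cap: contract the inputs, then copy to the outputs

rng = np.random.default_rng(0)
g, _ = np.linalg.qr(rng.standard_normal((n, n)))    # a random orthogonal matrix
gg = np.kron(g, g)                                  # its action on (R^n)^{(x)2}
for W in (I, S, C):
    assert np.allclose(gg @ W, W @ gg)              # equivariance: W commutes with g (x) g
print("all three diagram matrices are O(n)-equivariant")
```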
Date: Wednesday, June 21st, 10am BST
Title: Exploring Group Equivariant Neural Networks Using Set Partition Diagrams
Video (YouTube), Slides
What do jellyfish and an 11th-century Japanese novel have to do with neural networks? In recent years, much attention has been given to developing neural network architectures that can efficiently learn from data with underlying symmetries. These architectures ensure that the learned functions satisfy a geometric property called group equivariance, which specifies how the output transforms when the input is transformed by the action of a symmetry group. In this talk, we will describe a number of new group equivariant neural network architectures that are built using tensor power spaces of $\mathbb{R}^{n}$ as their layers. We will show that the learnable, linear functions between these layers can be characterised by certain subsets of set partition diagrams. This talk will be based on several papers that are to appear in ICML 2023.
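For orientation, the number of diagrams, and hence the size of the spanning set, in the symmetric group case is a Bell number (a standard count, stated here for context rather than taken from the talk):

$$ \dim \operatorname{Hom}_{S_n}\!\left( (\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l} \right) = B(k+l) \quad \text{for } n \geq k + l, $$

so, for example, $B(1+1) = 2$ gives the two Deep Sets matrices, while $B(2+2) = 15$ gives the fifteen spanning matrices for linear layers between order-two tensors.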