Giovanni Luca Marchetti
I am a postdoctoral researcher at the Department of Mathematics of the Royal Institute of Technology (KTH) in Stockholm, Sweden.
I apply tools from pure mathematics (algebra, geometry, topology, ...) to machine learning and high-dimensional statistics. More specifically, I am interested in algebro-geometric aspects of deep neural networks, manifold/representation learning, geometric density estimation, and topological data analysis. Check these slides for a high-level presentation of my research so far.
Resources: CV, Google Scholar, GitHub, X/Twitter, LinkedIn
Publications
Below is a selection of my academic works, organized by topic. For a complete list, please visit my Google Scholar profile. The symbol * denotes equal contribution.
Algebraic Geometry of Deep Learning: these works explore the (algebraic) geometry of function spaces defined by neural networks.
- An Invitation to Neuroalgebraic Geometry
Marchetti*, Shahverdi*, Mereta*, Trager*, Kohn*
ICML Spotlight, 2025
- Learning on a Razor's Edge: The Singularity Bias of Polynomial Neural Networks
Shahverdi*, Marchetti*, Kohn*
Preprint, 2025
- Alternating Gradient Flows: A Theory of Feature Learning in Two-layer Neural Networks
Kunin, Marchetti, Chen, Karkada, Simon, DeWeese, Ganguli, Miolane
Preprint, 2025
- Geometry of Lightning Self-Attention: Identifiability and Dimension
Henry*, Marchetti*, Kohn*
ICLR, 2025
- On the Geometry and Optimization of Polynomial Convolutional Networks
Shahverdi*, Marchetti*, Kohn*
AISTATS, 2025
Equivariant/Invariant Deep Learning: these works explore the interaction between symmetry and deep learning, ranging from the invariant theory of neural networks to equivariant representation learning, with applications to robotics.
- Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks
Marchetti, Hillar, Kragic, Sanborn
COLT, 2024
- Equivariant Representation Learning via Class-Pose Decomposition
Marchetti*, Tegner*, Varava, Kragic
AISTATS, 2023
- Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach
Marchetti, Cesa, Kumar, Behboodi
TMLR, 2025
- Equivariant Representation Learning in the Presence of Stabilizers
Rey*, Marchetti*, Kragic, Jarnikov, Holenderski
ECML-PKDD, 2023
- Learning Geometric Representations of Objects via Interaction
Reichlin*, Marchetti*, Yin, Varava, Kragic
ECML-PKDD, 2023
- Back to the Manifold: Recovering from Out-of-Distribution States
Reichlin, Marchetti, Yin, Ghadirzadeh, Kragic
IROS, 2022
- Relative Representations: Topological and Geometric Perspectives
Garcia-Castellanos, Marchetti, Kragic, Scolamiero
NeurIPS Workshop (UniReps), 2024
Computational Geometry: these works concern high-dimensional Voronoi tessellations and Delaunay triangulations, with applications to density estimation and active learning.
- Active Nearest Neighbor Regression Through Delaunay Refinement
Kravberg*, Marchetti*, Polianskii*, Varava, Pokorny, Kragic
ICML, 2022
- An Efficient and Continuous Voronoi Density Estimator
Marchetti, Polianskii, Varava, Pokorny, Kragic
AISTATS Oral, 2023
- Voronoi Density Estimator: Computation, Compactification and Convergence
Polianskii*, Marchetti*, Kravberg, Varava, Pokorny, Kragic
UAI, 2022
- Hyperbolic Delaunay Geometric Alignment
Medbouhi, Marchetti, Polianskii, Kravberg, Poklukar, Varava, Kragic
ECML-PKDD, 2024
- HyperSteiner: Computing Heuristic Hyperbolic Steiner Minimal Trees
Garcia-Castellanos*, Medbouhi*, Marchetti, Bekkers, Kragic
ALENEX, 2025
Other: these works concern various topics in pure mathematics, e.g., category theory and combinatorics.
Thesis: I obtained my doctoral degree from KTH in 2024 under the supervision of Prof. Danica Kragic. The thesis is available for download below.