Petra Poklukar

Mathematics and machine learning

I am a PhD student working with Prof. Danica Kragic at the Division of Robotics, Perception and Learning at KTH Royal Institute of Technology in Stockholm, Sweden.

About me

I have been a PhD student at the Division of Robotics, Perception and Learning at KTH Royal Institute of Technology in Stockholm, Sweden since December 2018, supervised by Prof. Danica Kragic. My research focuses on representation learning and deep generative models, from both theoretical and applied perspectives. I am especially interested in connections between geometric/topological methods and machine learning, which is the core of my recent work on the evaluation of data representations.

I obtained both my Bachelor's and Master's degrees in Theoretical Mathematics from the University of Ljubljana, in September 2014 and September 2016, respectively. My Master's thesis, On the real spectrum compactification of Teichmüller space, was written at ETH Zürich under the supervision of Prof. Dr. Marc Burger and Prof. Dr. Franc Forstnerič, and was awarded the dr. France Prešeren Award. A full (though possibly not up-to-date) CV can be found here.

News

July, 2022: My internship at Google went by way too fast! Thanks to JP Vert, Felipe Llinares, Laetitia Meng-Papaxanthos, Sergii Kashubin and the rest of the Combini team for hosting me, it's been great to work with you guys. Next stop: ICML!

June, 2022: I am happy to say that I successfully defended my thesis Learning and Evaluating the Geometric Structure of Representation Spaces on June 13th. I would like to thank my opponent Søren Hauberg for a great discussion, my committee members Bei Wang, Jana Košecká and Fredrik Kahl for their insightful questions, and my supervisor Danica Kragic for all the support during the past three years. Slides of my presentation can be found here.

May, 2022: Our paper GMC - Geometric Multimodal Contrastive Representation Learning got accepted to ICML 2022.

April, 2022: Happy to share that I started as a research intern at Google Brain!

January, 2022: My paper Delaunay Component Analysis for Evaluation of Data Representations got accepted to ICLR 2022.

December, 2021: Gave a talk, Learning and Evaluating Representations: Geometric and Topological Aspects, to the Data Science Community at SEB bank.

November, 2021: Had my 80% seminar on Learning and Evaluating Representations: Geometric and Topological Aspects at our division (RPL at KTH). Slides can be found here.

June, 2021: Gave a talk Learning and Evaluating Data Representations at Women in Data Science Conference 2021, Ljubljana. Slides can be found here.

June, 2021: My paper GeomCA: Geometric Evaluation of Data Representations got accepted to ICML 2021.

November 11, 2020: Had my 50% seminar on Representation Learning with Deep Generative Models at our division (RPL at KTH). Slides can be found here.

July 1, 2020: Our paper Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation got accepted to IROS 2020. Good job, everybody!

April 7, 2020: Had my 30% seminar on Deep Generative Models at our division (RPL at KTH). Slides can be found here.

December 8, 2019: Had a poster session presenting our workshop paper Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models at 2nd Symposium on Advances in Approximate Bayesian Inference in Vancouver, Canada.

November 28, 2019: Gave a talk Variational autoencoders: from theory to implementation at Kidbrooke Advisory. Slides are available here.

November 1-3, 2019: Participated at Openhack Stockholm hackathon. Our team was working on the problem presented by Engineers Without Borders Sweden, see our solution here.

September 1-3, 2019: Participated at STHLM TECH Fest hackathon. Our team won the Stora Enso challenge and presented our solution to the rest of the teams in the second round at the closing event in the City Hall. Slides are available here.

June 17, 2019: Had a poster session presenting our workshop paper Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models at the CVPR Uncertainty and Robustness in Deep Visual Learning workshop in Long Beach, California.

June 14, 2019: Had a poster session presenting our workshop paper Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models at the ICML Uncertainty & Robustness in Deep Learning workshop in Long Beach, California.

Publications

Journal papers

Enabling Visual Action Planning for Object Manipulation through Latent Space Roadmap, Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic, Accepted to IEEE Transactions on Robotics 2021.

TL;DR We present a journal extension of our Latent Space Roadmap (LSR) framework for visual action planning of complex manipulation tasks with high-dimensional state spaces. We extend the original LSR (see the IROS submission below) with several improvements and conduct a thorough ablation study.

[pdf] [bibtex] [website & code]

Data-efficient visuomotor policy training using reinforcement learning and generative models, Ali Ghadirzadeh*, Petra Poklukar*, Ville Kyrki, Danica Kragic, Mårten Björkman, Under review for Journal of Machine Learning Research (JMLR).

TL;DR We present a data-efficient framework for solving visuomotor sequential decision-making problems which exploits the combination of reinforcement learning (RL) and latent variable generative models. Our approach enables safe exploration and alleviates the data-inefficiency problem as it exploits prior knowledge about valid sequences of motor actions. Moreover, we provide a set of measures for evaluation of generative models such that we are able to predict the performance of the RL policy training prior to the actual training on a physical robot.

[pdf] [bibtex] [code - coming soon]

Conference papers

GMC - Geometric Multimodal Contrastive Representation Learning, Petra Poklukar*, Miguel Vasco*, Hang Yin, Francisco S. Melo, Ana Paiva, Danica Kragic, International Conference on Machine Learning (ICML), 2022

TL;DR We present a novel Geometric Multimodal Contrastive (GMC) representation learning method for learning representations of multimodal data that provide robust performance in downstream tasks under missing modalities at test time. GMC comprises two main components: i) a two-level architecture consisting of modality-specific base encoders, which process an arbitrary number of modalities into intermediate representations of fixed dimensionality, and a shared projection head, which maps the intermediate representations to a latent representation space; ii) a multimodal contrastive loss function that encourages the geometric alignment of the learned representations. We experimentally demonstrate that GMC representations are semantically rich and achieve state-of-the-art performance with missing modality information on three different learning problems, including prediction and reinforcement learning tasks.

[pdf] [bibtex - coming soon!] [code - coming soon!]
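The two-level design described above (modality-specific base encoders, a shared projection head, and a contrastive alignment loss) can be sketched with a toy NumPy example. All dimensions, the linear encoders, and the temperature value below are hypothetical placeholders for illustration, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two modalities with different input dimensions,
# mapped to a shared intermediate size, then projected to a common latent space.
D_IMG, D_AUDIO, D_INTER, D_LATENT = 64, 32, 16, 8

# Modality-specific base encoders (linear, for illustration only).
W_img = rng.normal(size=(D_IMG, D_INTER))
W_audio = rng.normal(size=(D_AUDIO, D_INTER))

# Shared projection head mapping intermediate -> latent representation space.
W_shared = rng.normal(size=(D_INTER, D_LATENT))

def encode(x, W_base):
    """Two-level encoding: modality-specific base, then the shared head."""
    intermediate = np.tanh(x @ W_base)
    z = intermediate @ W_shared
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit-normalized

def alignment_loss(z_a, z_b, temperature=0.1):
    """Contrastive loss encouraging geometric alignment: each sample's
    paired representation (the diagonal) should be its most similar match."""
    logits = z_a @ z_b.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_a))
    return -log_probs[idx, idx].mean()

# A batch of 4 paired (image, audio) samples, encoded into the shared space.
x_img = rng.normal(size=(4, D_IMG))
x_audio = rng.normal(size=(4, D_AUDIO))
loss = alignment_loss(encode(x_img, W_img), encode(x_audio, W_audio))
print(f"alignment loss: {float(loss):.4f}")
```

Because every modality lands in the same latent space, a downstream model trained on one modality's projections can still be queried when another modality is missing at test time.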

GraphDCA - a Framework for Node Distribution Comparison in Real and Synthetic Graphs, Ciwan Ceylan*, Petra Poklukar*, Hanna Hultin, Alexander Kravchenko, Anastasia Varava, Danica Kragic, Preprint.

TL;DR We present GraphDCA - a framework for evaluating similarity between graphs based on the alignment of their respective node representation sets. The sets are compared using a recently proposed method for comparing representation spaces, called Delaunay Component Analysis (DCA), which we extend to graph data. We apply GraphDCA to evaluate three publicly available real-world graph datasets and demonstrate, using gradual edge perturbations, that GraphDCA satisfyingly captures gradually decreasing similarity, unlike global statistics. Finally, we use GraphDCA to evaluate two state-of-the-art graph generative models, NetGAN and CELL, and conclude that further improvements are needed for these models to adequately reproduce local structural features.

[pdf] [bibtex - coming soon!] [code - coming soon!]

Delaunay Component Analysis for Evaluation of Data Representations, Petra Poklukar, Vladislav Polianskii, Anastasia Varava, Florian Pokorny, Danica Kragic, International Conference on Learning Representations (ICLR), 2022

TL;DR We introduce an algorithm for evaluating data representations, called Delaunay Component Analysis (DCA), which approximates the data manifold using a more suitable neighbourhood graph called the Delaunay graph. This yields a reliable manifold estimation even for challenging geometric arrangements of representations, such as clusters with varying shape and density as well as outliers. Moreover, we exploit the nature of Delaunay graphs and introduce a framework for assessing the quality of individual novel data representations. We experimentally validate the proposed DCA method on representations obtained from neural networks trained with contrastive objectives, as well as from supervised and generative models, and demonstrate various use cases of our extended single-point evaluation framework.

[pdf] [bibtex] [code]
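The core idea — approximating the manifold with a Delaunay-based neighbourhood graph and comparing how two representation sets share its connected components — can be illustrated with a toy 2D sketch. SciPy's exact triangulation, the point sets, and the edge-length threshold here are all hypothetical stand-ins for the approximate high-dimensional construction used in the paper:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Two hypothetical representation sets: R (reference) and E (evaluated).
R = rng.normal(loc=0.0, size=(30, 2))
E = rng.normal(loc=0.3, size=(30, 2))
points = np.vstack([R, E])
labels = np.array([0] * len(R) + [1] * len(E))  # 0 = R, 1 = E

# Delaunay graph: the edges of the Delaunay triangulation of all points.
tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

# Distill the graph by dropping unusually long edges (toy threshold:
# twice the median edge length), then find connected components.
lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
threshold = 2 * np.median(list(lengths.values()))
kept = [e for e in edges if lengths[e] <= threshold]

parent = list(range(len(points)))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x
for a, b in kept:
    parent[find(a)] = find(b)

# Which sets (R, E, or both) does each component contain?
components = {}
for i in range(len(points)):
    components.setdefault(find(i), set()).add(int(labels[i]))

# Components containing points from both sets indicate geometric overlap
# between the two representation sets.
mixed = sum(1 for s in components.values() if s == {0, 1})
print(f"{len(components)} components, {mixed} containing both sets")
```

In this sketch, heavily overlapping point sets end up sharing components, while well-separated sets would split into components containing only one label each.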

GeomCA: Geometric Evaluation of Data Representations, Petra Poklukar, Anastasiia Varava, Danica Kragic, International Conference on Machine Learning (ICML), 2021

TL;DR We present the Geometric Component Analysis (GeomCA) algorithm for assessing the quality of data representations by leveraging their geometric and topological properties. We demonstrate that GeomCA can be applied to representations of any dimension, independently of the model that generated them, by analyzing representations obtained in several scenarios, including contrastive learning models, generative models and supervised learning models.

[pdf] [bibtex] [code]

Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms, Ali Ghadirzadeh*, Xi Chen*, Petra Poklukar, Chelsea Finn, Mårten Björkman, Danica Kragic, International Conference on Intelligent Robots and Systems (IROS) 2021.

TL;DR We address the challenging problem of adapting a policy, trained to perform a task, to a novel robotic hardware platform given only a few demonstrations of robot motion trajectories on the target robot. We present a learning framework consisting of a probabilistic gradient-based meta-learning algorithm that models the uncertainty arising from the few-shot setting with a low-dimensional latent variable. Our results show that the proposed method can successfully adapt a trained policy to different robotic platforms with novel physical parameters.

[pdf] [bibtex]

Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation, Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic, International Conference on Intelligent Robots and Systems (IROS) 2020.

TL;DR We present a novel framework for visual action planning of complex manipulation tasks with high-dimensional state spaces which is based on a graph, called Latent Space Roadmap, built in a low-dimensional latent state space encoding the system's dynamics.

[pdf] [bibtex] [website & code]

Workshop papers

Batch Curation for Unsupervised Contrastive Representation Learning, Petra Poklukar*, Michael C. Welle*, Danica Kragic, International Conference on Machine Learning (ICML) 2021, Workshop on Self-Supervised Learning for Reasoning and Perception.

TL;DR In visual contrastive representation learning, similar pairs are constructed by randomly extracting patches from the same image and applying several other transformations such as color jittering or blurring, while transformed patches from different image instances in a given batch are regarded as dissimilar pairs. We argue that this approach can result in similar pairs that are semantically dissimilar, and introduce a batch curation scheme that selects batches during the training process that are more in line with the underlying contrastive objective. We provide insights into what constitutes beneficial similar and dissimilar pairs, and validate batch curation on CIFAR10 by integrating it into the SimCLR model.

[pdf] [bibtex]

Few-Shot Learning with Weak Supervision, Ali Ghadirzadeh*, Petra Poklukar*, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic, International Conference on Learning Representations (ICLR), 2021, Learning to Learn Workshop

TL;DR We propose a Bayesian gradient-based meta-learning algorithm that can readily incorporate weak labels to reduce the task ambiguity inherent in few-shot learning problems, as well as improve performance. Our approach is cast in the framework of amortized variational inference and trained by optimizing a variational lower bound. We demonstrate that our method is competitive with state-of-the-art methods on few-shot regression and image classification problems and achieves significant performance gains in settings where weak labels are available.

[pdf] [bibtex]

Analyzing Representations through Interventions, Petra Poklukar*, Michael C. Welle*, Anastasia Varava, Danica Kragic, 32nd annual workshop of the Swedish Artificial Intelligence Society (SAIS), 2020.

TL;DR Designing procedures for evaluating data representations independently of the task at hand remains one of the key challenges in machine learning. Inspired by the theory of causal reasoning, we consider interventions for analysing data representations. Specifically, we leverage them to introduce the Interventional Score for measuring disentanglement of latent representations.

[pdf] [bibtex]

Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models, Petra Poklukar*, Judith Butepage*, Danica Kragic, 2nd Symposium on Advances in Approximate Bayesian Inference 2019.

TL;DR We develop a novel likelihood function for Variational Autoencoders (VAEs) that is based not only on the parameters returned by the VAE but also on features of the data learned in a self-supervised fashion, so that the model additionally captures the semantic information that is disregarded by the usual VAE likelihood function.

[pdf]

Modeling assumptions and evaluation schemes: On the assessment of deep latent variable models, Judith Butepage*, Petra Poklukar*, Danica Kragic, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019, Workshop on Uncertainty and Robustness in Deep Visual Learning.

TL;DR We present theoretical findings on two factors that contribute to the unreasonably high likelihoods that deep generative models assign to out-of-distribution data points: 1) modeling assumptions, such as the choice of the likelihood parametrization, and 2) evaluation under local posterior distributions vs. the global prior distribution.

[pdf] [bibtex]

Teaching

Geometry and Machine Learning Reading Group (PhD) [Organizer]: Ongoing since Spring 2020

Machine Learning Reading Group (PhD) [Organizer]: Ongoing since Fall 2019

Machine Learning (MSc) [Teaching Assistant]: Fall 2019, Fall 2020, Fall 2021

Supervision

Samuel Norling, Probabilistic Forecasting through Reformer Conditioned Normalizing Flows [Master thesis, Autumn 2021]

Simon Westberg, Investigating Learning Behavior of Generative Adversarial Networks [Master thesis, Spring 2021]

Contact

poklukar{at}kth.se
Teknikringen 14, SE-100 44 Stockholm, Sweden