Petra Poklukar

Mathematics and machine learning

I am a PhD student working with Prof. Danica Kragic at the Division of Robotics, Perception and Learning at KTH Royal Institute of Technology in Stockholm, Sweden.

About me

I have been a PhD student at the Division of Robotics, Perception and Learning at KTH Royal Institute of Technology in Stockholm, Sweden, since December 2018, supervised by Prof. Danica Kragic. My research focuses on representation learning and deep generative models, from both theoretical and applied perspectives. Currently, I am exploring combinations of meta-learning with these areas. On the side, I am also interested in connections between geometric and topological methods and machine learning.

I obtained both my Bachelor's and Master's degrees in Theoretical Mathematics from the University of Ljubljana, in September 2014 and September 2016, respectively. My Master's thesis, On the real spectrum compactification of Teichmüller space, was awarded the dr. France Prešeren Award and was written at ETH Zürich under the supervision of Prof. Dr. Marc Burger and Prof. Dr. Franc Forstnerič. My full (though possibly not up-to-date) CV can be found here.

News

June 2021: Gave a talk at the Women in Data Science Conference 2021, Ljubljana. Slides can be found here.

June 2021: My paper GeomCA: Geometric Evaluation of Data Representations was accepted to ICML 2021.

November 11, 2020: Gave my 50% seminar on Representation Learning with Deep Generative Models at our division (RPL at KTH). Slides can be found here.

July 1, 2020: Our paper Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation was accepted to IROS 2020. Good job everybody!

April 7, 2020: Gave my 30% seminar on Deep Generative Models at our division (RPL at KTH). Slides can be found here.

December 8, 2019: Presented our workshop paper Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models in a poster session at the 2nd Symposium on Advances in Approximate Bayesian Inference in Vancouver, Canada.

November 28, 2019: Gave a talk, Variational autoencoders: from theory to implementation, at Kidbrooke Advisory. Slides are available here.

November 1-3, 2019: Participated in the Openhack Stockholm hackathon. Our team worked on a problem presented by Engineers Without Borders Sweden; see our solution here.

September 1-3, 2019: Participated in the STHLM TECH Fest hackathon. Our team won the Stora Enso challenge and presented our solution in the second round to the rest of the teams at the closing event in the City Hall. Slides are available here.

June 17, 2019: Presented our workshop paper Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models in a poster session at the CVPR Workshop on Uncertainty and Robustness in Deep Visual Learning in Long Beach, California.

June 14, 2019: Presented our workshop paper Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models in a poster session at the ICML Workshop on Uncertainty & Robustness in Deep Learning in Long Beach, California.

Publications

Delaunay Component Analysis for Evaluation of Data Representations, Petra Poklukar, Vladislav Polianskii, Anastasiia Varava, Florian Pokorny, Danica Kragic (under review)

TL;DR We introduce an algorithm for evaluating data representations, called Delaunay Component Analysis (DCA), which approximates the data manifold using a more suitable neighbourhood graph called the Delaunay graph. This yields reliable manifold estimation even for challenging geometric arrangements of representations, such as clusters of varying shape and density as well as outliers. Moreover, we exploit the nature of Delaunay graphs and introduce a framework for assessing the quality of individual novel data representations. We experimentally validate the proposed DCA method on representations obtained from neural networks trained with contrastive objectives, supervised models and generative models, and demonstrate various use cases of our extended single-point evaluation framework. A toy sketch of the graph-based idea is given below.

[pdf - TBA]
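The following is a minimal, illustrative sketch of the general idea behind this kind of graph-based evaluation: build a Delaunay graph over the union of two representation sets and inspect how the two sets mix within each connected component. The sets R and E, the edge-pruning rule and the printed per-component counts are assumptions for illustration only; they are not the approximated Delaunay construction or the scores defined in the paper.

```python
# Toy sketch (assumed setup): compare a "reference" set R and an "evaluated"
# set E of 2D representations via a Delaunay graph over their union.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
R = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # reference representations
E = rng.normal(loc=0.5, scale=1.0, size=(200, 2))   # evaluated representations
points = np.vstack([R, E])
labels = np.array([0] * len(R) + [1] * len(E))      # 0 = R, 1 = E

# Build the Delaunay graph: every edge of every simplex becomes a graph edge.
tri = Delaunay(points)
G = nx.Graph()
G.add_nodes_from(range(len(points)))
for simplex in tri.simplices:
    for i in range(len(simplex)):
        for j in range(i + 1, len(simplex)):
            G.add_edge(simplex[i], simplex[j])

# Prune unusually long edges so that distant groups of points stay separated.
edges = list(G.edges())
lengths = np.array([np.linalg.norm(points[u] - points[v]) for u, v in edges])
threshold = np.percentile(lengths, 95)
G.remove_edges_from([e for e, d in zip(edges, lengths) if d > threshold])

# Components containing points from both R and E indicate regions where the
# two sets of representations overlap; one-sided components do not.
for k, comp in enumerate(nx.connected_components(G)):
    idx = np.array(sorted(comp))
    n_r = int((labels[idx] == 0).sum())
    n_e = int((labels[idx] == 1).sum())
    print(f"component {k}: |R| = {n_r}, |E| = {n_e}")
```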

GeomCA: Geometric Evaluation of Data Representations, Petra Poklukar, Anastasiia Varava, Danica Kragic, International Conference on Machine Learning (ICML), 2021

TL;DR We present the Geometric Component Analysis (GeomCA) algorithm for assessing the quality of data representations by leveraging their geometric and topological properties. We demonstrate that GeomCA can be applied to representations of any dimension and independently of the model that generated them, by analyzing representations obtained in several scenarios including contrastive learning models, generative models and supervised learning models.

[pdf] [bibtex] [code]

Tackling Ambiguity in Few-Shot Learning with Variational Task Embeddings and Weak Supervision, Ali Ghadirzadeh*, Petra Poklukar*, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic (under review)

TL;DR We propose a Bayesian gradient-based meta-learning algorithm that can readily incorporate weak labels to reduce the task ambiguity inherent in few-shot learning problems as well as improve performance. Our approach is cast in the framework of amortized variational inference and trained by optimizing a variational lower bound (a generic form of such a bound is sketched below). We demonstrate that our method is competitive with state-of-the-art methods on few-shot regression and image classification problems and achieves significant performance gains in settings where weak labels are available.

[pdf - TBA]
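As a rough sketch of the amortized variational inference framing mentioned above, with generic notation that is assumed here rather than taken from the paper: for a task with support set D^s, query set D^q, weak label w and latent task embedding z, one can optimize a conditional lower bound of the following shape.

```latex
\log p_{\theta}\!\left(\mathcal{D}^{q} \mid \mathcal{D}^{s}, w\right)
\;\geq\;
\mathbb{E}_{q_{\phi}\left(z \mid \mathcal{D}^{s}, w\right)}
\!\left[\log p_{\theta}\!\left(\mathcal{D}^{q} \mid z\right)\right]
\;-\;
\mathrm{KL}\!\left(q_{\phi}\!\left(z \mid \mathcal{D}^{s}, w\right)
\,\middle\|\, p_{\theta}\!\left(z \mid \mathcal{D}^{s}, w\right)\right)
```

The inequality holds for any choice of the amortized posterior q_phi, assuming the query set depends on the task only through z; the exact conditioning structure and parametrization used in the paper may differ.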

Enabling Visual Action Planning for Object Manipulation through Latent Space Roadmap, Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic, Submitted to IEEE Transactions on Robotics 2021.

TL;DR We present a journal extension of our Latent Space Roadmap (LSR) framework for visual action planning of complex manipulation tasks with high-dimensional state spaces. We extend the original LSR (see the IROS paper below) with several improvements and conduct a thorough ablation study.

[pdf] [bibtex] [website & code]

Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation, Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic, International Conference on Intelligent Robots and Systems (IROS 2020).

TL;DR We present a novel framework for visual action planning of complex manipulation tasks with high-dimensional state spaces. It is based on a graph, called the Latent Space Roadmap, built in a low-dimensional latent state space that encodes the system's dynamics; a minimal toy sketch of the planning idea is given below.

[pdf] [bibtex] [website & code]
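A minimal, hypothetical sketch of the planning idea described above: cluster latent codes of observed states into nodes, connect nodes whenever an observed action moves the system from one cluster to another, and plan by searching the resulting graph. The encoder stand-in, the clustering choice and the edge rule are illustrative assumptions, not the trained VAE and roadmap construction used in the paper.

```python
# Hypothetical toy version of a latent-space roadmap: graph nodes are clusters
# of latent codes, edges correspond to observed transitions between clusters.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 8))                   # stand-in for encoded observations z = enc(x)
transitions = [(i, i + 1) for i in range(0, 498, 2)]  # stand-in (z_t, z_{t+1}) action pairs

# Nodes: a small set of "latent states" obtained by clustering the codes.
n_nodes = 20
assign = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit(latents).labels_

# Edges: connect two clusters whenever some observed action links them.
G = nx.Graph()
G.add_nodes_from(range(n_nodes))
for t, t_next in transitions:
    a, b = assign[t], assign[t_next]
    if a != b:
        G.add_edge(a, b)

# Visual action planning: map start/goal observations to clusters and read off
# a path of latent states; each edge stands for an action to be executed.
start, goal = assign[0], assign[-1]
if nx.has_path(G, start, goal):
    print("latent-state plan:", nx.shortest_path(G, start, goal))
else:
    print("no path between start and goal clusters")
```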

Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models, Petra Poklukar*, Judith Butepage*, Danica Kragic, 2nd Symposium on Advances in Approximate Bayesian Inference 2019

TL;DR We develop a novel likelihood function for Variational Autoencoders (VAEs) that is based not only on the parameters returned by the VAE but also on features of the data learned in a self-supervised fashion, so that the model additionally captures semantic information that is disregarded by the usual VAE likelihood function (a rough sketch of this idea is given below).

[pdf] [bibtex - TBA] [code - TBA]
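A hedged sketch of the general idea: augment the usual pixel-wise VAE reconstruction term with a distance measured in the feature space of a separately trained, self-supervised encoder, so that reconstructions are also judged on semantic content. The function name, the `feature_net` argument and the weighting are placeholders for illustration; they are not the exact likelihood defined in the paper.

```python
# Illustrative (assumed) feature-aware reconstruction term for a VAE.
import torch
import torch.nn.functional as F

def feature_aware_recon_loss(x, x_recon, feature_net, feature_weight=0.5):
    """Pixel-wise reconstruction term plus a self-supervised feature-space term."""
    # Standard Gaussian-likelihood-style pixel term (up to constants).
    pixel_term = F.mse_loss(x_recon, x, reduction="mean")
    # Features of the original input come from a frozen encoder (no gradients),
    # while gradients still flow through the reconstruction's features.
    with torch.no_grad():
        feats_x = feature_net(x)
    feats_recon = feature_net(x_recon)
    feature_term = F.mse_loss(feats_recon, feats_x, reduction="mean")
    return pixel_term + feature_weight * feature_term

# Example usage with a hypothetical frozen self-supervised encoder `ssl_encoder`:
# recon_loss = feature_aware_recon_loss(x, decoder(z), feature_net=ssl_encoder)
```

In a full VAE objective, a term of this kind would stand in for the reconstruction part of the ELBO, with the KL term left unchanged.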

Modeling assumptions and evaluation schemes: On the assessment of deep latent variable models, Judith Butepage*, Petra Poklukar*, Danica Kragic, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019, Workshop on Uncertainty and Robustness in Deep Visual Learning.

TL;DR We present theoretical findings on two factors that contribute to the unreasonably high likelihoods that deep generative models assign to out-of-distribution data points: 1) modeling assumptions, such as the choice of the likelihood parametrization, and 2) evaluation under local posterior distributions versus the global prior distribution (the distinction is made explicit below).

[pdf] [bibtex]
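To make the local-versus-global distinction above concrete, here are the standard VAE quantities involved (standard identities, not notation taken from the paper): the marginal likelihood averages the decoder over the global prior, while the ELBO averages it only over latent codes proposed locally for the evaluated input.

```latex
\log p_{\theta}(x) \;=\; \log \mathbb{E}_{p(z)}\!\left[\, p_{\theta}(x \mid z) \,\right],
\qquad
\mathrm{ELBO}(x) \;=\; \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right]
\;-\; \mathrm{KL}\!\left(q_{\phi}(z \mid x) \,\middle\|\, p(z)\right).
```

Evaluating an out-of-distribution input with the second expression only probes the decoder around codes suggested by the encoder for that specific input, which can yield high scores that the prior-weighted average would not.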

Teaching

Geometry and Machine Learning Reading Group (PhD) [Organizer]: Ongoing since Spring 2020

Machine Learning Reading Group (PhD) [Organizer]: Ongoing since Fall 2019

Machine Learning (MSc) [Teaching Assistant]: Fall 2019, Fall 2020, Fall 2021

Supervision

Tommy Wallin, Structural comparison of data representations obtained from generative and supervised models [Master thesis, ongoing since Spring 2021]

David Norrman, Semantic segmentation for improving OOD detection using VAEs and normalizing flow models [Master thesis, ongoing since Spring 2021]

Samuel Norling, Reformer Conditioned Masked Autoregressive Flow [Master thesis, ongoing since Spring 2021]

Simon Westberg, Investigating Learning Behavior of Generative Adversarial Networks [Master thesis, Spring 2021]

Joakim Dahl, Analysis of the Effect of Latent Dimensions on Disentanglement in Variational Autoencoders [Master thesis, Fall 2020]

Contact

poklukar{at}kth.se
Teknikringen 14, SE-100 44 Stockholm, Sweden