Michael C. Welle

Robotics and Representation Learning

I'm a final-year PhD student working with Danica Kragic, Anastasiia Varava, and Hang Yin at the Robotics, Perception and Learning Lab (RPL), EECS, KTH, in Stockholm, Sweden.

I work on learning representations for rigid and deformable object manipulation. My work includes partial caging as well as manipulation of highly deformable objects (clothing).

My CV can be found here (not necessarily up to date).

A general overview of my work up to 08.04.2021 can be found in my 80% seminar: ( Slides )

Recent Work

State Representation Learning with Task-Irrelevant Factors of Variation in Robotics
Submitted to Conference on Robot Learning (CoRL) 2021
Constantinos Chamzas*, Martina Lippi*, Michael C. Welle*, Anastasiia Varava, Lydia Kavraki, and Danica Kragic

Abstract Learning state representations enables robotic planning directly from raw observations such as images. Most methods learn state representations by utilizing losses based on the reconstruction of the raw observations from a lower-dimensional latent space. The similarity between observations in the space of images is often assumed and used as a proxy for estimating similarity between the underlying states of the system. However, observations commonly contain task-irrelevant factors of variation which are nonetheless important for reconstruction, such as varying lighting and different camera viewpoints. In this work, we define relevant evaluation metrics and perform a thorough study of different loss functions for state representation learning. We show that models exploiting weak supervision, such as Siamese networks with a simple contrastive loss, outperform reconstruction-based representations in visual planning.
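For intuition, here is a minimal PyTorch sketch of the weakly supervised setup studied in the paper, not the released code: a small Siamese encoder trained with a simple contrastive loss on image pairs labeled only as showing the same or a different underlying state. The architecture, input size, and all names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Small CNN mapping 64x64 RGB observations to a latent state."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_state, margin=1.0):
    """Pull same-state pairs together; push different-state pairs
    at least `margin` apart in latent space."""
    d = F.pairwise_distance(z1, z2)
    loss_same = same_state * d.pow(2)
    loss_diff = (1.0 - same_state) * F.relu(margin - d).pow(2)
    return (loss_same + loss_diff).mean()

# Weak supervision: the label only says whether two images show the same
# state, regardless of task-irrelevant factors (lighting, viewpoint, ...).
encoder = SiameseEncoder()
x1, x2 = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
same_state = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(x1), encoder(x2), same_state)
loss.backward()

Because no reconstruction term is involved, the encoder is not forced to retain task-irrelevant factors such as lighting or viewpoint in the latent state.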

Enabling Visual Action Planning for Object Manipulation through Latent Space Roadmap
Submitted to IEEE Transactions on Robotics
Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, and Danica Kragic

Abstract We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, focusing on manipulation of deformable objects. We propose a Latent Space Roadmap (LSR) for task planning, a graph-based structure capturing globally the system dynamics in a low-dimensional latent space. Our framework consists of three parts: (1) a Mapping Module (MM) that maps observations, given in the form of images, into a structured latent space extracting the respective states, as well as generates observations from the latent states, (2) the LSR which builds and connects clusters containing similar states in order to find the latent plans between start and goal states extracted by MM, and (3) the Action Proposal Module that complements the latent plan found by the LSR with the corresponding actions. We present a thorough investigation of our framework on two simulated box stacking tasks and a folding task executed on a real robot.

Dedicated website for Visual Action Planning
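For intuition, here is a minimal sketch of the LSR idea under simplifying assumptions, not the implementation from the paper: encoded observations are clustered, clusters linked by observed transitions become graph edges, and a latent plan is a shortest path between the clusters containing the start and goal states. The clustering method, edge criterion, and all names are placeholders.

import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def build_lsr(z_pairs, n_clusters=20):
    """z_pairs: list of (z_before, z_after) latent transitions
    collected from action-labeled training data."""
    all_z = np.vstack([z for pair in z_pairs for z in pair])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(all_z)
    g = nx.Graph()
    g.add_nodes_from(range(n_clusters))
    # Connect two clusters if at least one observed action links them.
    for z0, z1 in z_pairs:
        c0 = int(km.predict([z0])[0])
        c1 = int(km.predict([z1])[0])
        if c0 != c1:
            g.add_edge(c0, c1)
    return g, km

def latent_plan(g, km, z_start, z_goal):
    """Shortest sequence of cluster centroids from start to goal;
    raises NetworkXNoPath if the roadmap does not connect them."""
    c_start = int(km.predict([z_start])[0])
    c_goal = int(km.predict([z_goal])[0])
    path = nx.shortest_path(g, c_start, c_goal)
    return [km.cluster_centers_[c] for c in path]

In the full framework, the latent states come from the Mapping Module, and a plan like the one returned here is handed to the Action Proposal Module, which supplies the actions needed to move between consecutive states.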

Publications

Journal Papers:


Partial Caging: A Clearance-Based Definition, Datasets and Deep Learning

Michael Welle, Anastasiia Varava, Jeffrey Mahler, Ken Goldberg, Danica Kragic, and Florian T. Pokorny
Published in Autonomous Robots, Special Issue on Topological Methods in Robotics, 2021

Benchmarking Bimanual Cloth Manipulation

Irene Garcia-Camacho*, Martina Lippi*, Michael C. Welle, Hang Yin, Rika Antonova, Anastasiia Varava, Júlia Borràs, Carme Torras, Alessandro Marino, Guillem Alenyà, Danica Kragic
Published in IEEE Robotics and Automation Letters 5.2 (2020)

From Visual Understanding to Complex Object Manipulation

Judith Bütepage, Silvia Cruciani, Mia Kokic, Michael Welle, and Danica Kragic
Published in Annual Review of Control, Robotics, and Autonomous Systems (2019)


Conference Papers:


Textile Taxonomy and Classification Using Pulling and Twisting

Alberta Longhini, Michael C. Welle, Ioanna Mitsioni, and Danica Kragic
Accepted at the International Conference on Intelligent Robots and Systems (IROS 2021)

Learning Task Constraints in Visual-Action Planning from Demonstrations

Francesco Esposito, Christian Pek, Michael C. Welle and Danica Kragic
Published in IEEE Int. Conf. on Robot and Human Interactive Communication (RO-MAN 2021)

Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, and Danica Kragic
Published in International Conference on Intelligent Robots and Systems (IROS 2020)

Fashion Landmark Detection and Category Classification for Robotics

Thomas Ziegler, Judith Bütepage, Michael C. Welle, Anastasiia Varava, Tonci Novkovic, and Danica Kragic
Published in IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC 2020)

Partial Caging: A Clearance-Based Definition and Deep Learning

Anastasiia Varava*, Michael Welle*, Jeffrey Mahler, Ken Goldberg, Danica Kragic, and Florian T. Pokorny
Published in International Conference on Intelligent Robots and Systems (IROS 2019)

On the use of Unmanned Aerial Vehicles for Autonomous Object Modeling

Michael Welle, Ludvig Ericson, Rares Ambrus, and Patric Jensfelt
Published in European Conference on Mobile Robots (ECMR 2017)



Workshops and Projects

Organisation:

Representing and Manipulating Deformable Objects Workshop @ ICRA 2021

Martina Lippi*, Michael C. Welle*, Anastasiia Varava*, Hang Yin, Rika Antonova, Florian T. Pokorny, Danica Kragic, Yiannis Karayiannidis, Ville Kyrki, Alessandro Marino, Júlia Borràs, Guillem Alenyà, Carme Torras
Workshop held at ICRA 2021



Contributions:

Batch Curation for Unsupervised Contrastive Representation Learning

Michael C. Welle*, Petra Poklukar*, and Danica Kragic
Workshop on Self-Supervised Learning for Reasoning and Perception at the International Conference on Machine Learning (ICML) 2021

State Representations in Robotics: Identifying Relevant Factors of Variation using Weak Supervision

Constantinos Chamzas*, Martina Lippi*, Michael C. Welle*, Anastasiia Varava, Lydia Kavraki, Alessandro Marino, and Danica Kragic
NeurIPS 2020 Workshop on Robot Learning

Latent Space Roadmap for Visual Action Planning

Martina Lippi*, Petra Poklukar*, Michael C. Welle*, Anastasiia Varava, Hang Yin, Alessandro Marino, and Danica Kragic
RSS 2020 Workshop on Visual Learning and Reasoning for Robotic Manipulation

Analyzing Representations through Interventions

Petra Poklukar*, Michael C. Welle*, Anastasiia Varava and Danica Kragic
32nd Annual Workshop of the Swedish Artificial Intelligence Society (SAIS)

Projects:

Baxter plays Tic-Tac-Toe while shopping with Simtrack

Summer internship at HKUST, Hong Kong. Supervisors: Michael Wang and Kaiyu Hang

Open Master Theses

Incorporation of Force/Torque Sensors for Dual Arm Dressing Tasks

Putting clothing on mannequins or humans is a challenging task: it is important that neither the garment nor the mannequin is damaged by exerting too much force on either. At KTH, a partial dressing baseline for a benchmark was developed [1] using a dual-arm setup composed of two Franka Emika Panda robots. The baseline follows a predefined control strategy to put a T-shirt over a head and makes only limited use of the force estimates at the robots' end effectors (available from the Franka libraries). In this project we want to improve this method in two ways: (1) installation/calibration of OptoForce/ATI force/torque sensors and their integration into the dressing strategy, and (2) improving the current strategy with a Reinforcement Learning approach, initialized with learning from demonstrations using DAgger [2] or similar (see the sketch below). The method's performance can be compared to the baseline performance in [1], as well as contextualized with other RL-based approaches such as [3] or [4].
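For reference, a minimal sketch of the DAgger loop from [2], assuming a generic env with reset/step, an expert policy that can be queried for corrective actions, and a supervised fit routine; these interfaces are placeholders, not an existing dressing-task API.

def dagger(env, expert, policy, n_iters=10, horizon=200):
    """Dataset-aggregation imitation learning (Ross et al. [2])."""
    states, actions = [], []
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            # Roll out the *learner's* policy so that the training states
            # match the distribution the learner will actually visit...
            a = policy.act(s)
            # ...but label every visited state with the expert's action.
            states.append(s)
            actions.append(expert.act(s))
            s, done = env.step(a)
            if done:
                break
        # Retrain on the aggregated dataset of all corrections so far.
        policy.fit(states, actions)
    return policy

In this project, the demonstrations would provide the expert labels, and the force/torque readings from the newly installed sensors could be included in the state.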

Required qualifications: proficiency in ROS and basic reinforcement learning algorithms. Applicants are expected to have passed KTH courses such as Introduction to Robotics, Machine Learning, Project Course in Data Science, Control Theory and Practice, Advanced Course, or equivalent.

Contact person: Anastasiia Varava (KTH) varava AT kth.se and Michael Welle (KTH) mwelle AT kth.se

References:
[1] Garcia-Camacho, I., Lippi, M., Welle, M. C., Yin, H., Antonova, R., Varava, A., ... & Kragic, D. (2020). Benchmarking bimanual cloth manipulation. IEEE Robotics and Automation Letters, 5(2), 1111-1118.
[2] Ross, S., Gordon, G., & Bagnell, D. (2011, June). A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (pp. 627-635).
[3] Clegg, A., Yu, W., Tan, J., Liu, C. K., & Turk, G. (2018). Learning to dress: Synthesizing human dressing motion via deep reinforcement learning. ACM Transactions on Graphics (TOG), 37(6), 1-10.
[4] Tamei, T., Matsubara, T., Rai, A., & Shibata, T. (2011, October). Reinforcement learning of clothing assistance with a dual-arm robot. In 2011 11th IEEE-RAS International Conference on Humanoid Robots (pp. 733-738). IEEE.