Vladimir Vlassov

Professor, PhD, ACM member, IEEE member

Vladimir (Vlad) Vlassov is a full Professor in Computer Systems at the Division of Software and Computer Systems (SCS), Department of Computer Science (CS), School of Electrical Engineering and Computer Science (EECS), KTH Royal Institute of Technology, Stockholm, Sweden. He is a member of the Distributed Computing research group. He was a visiting scientist at the Massachusetts Institute of Technology (MIT) in 1998 and at the University of Massachusetts (UMass) Amherst in 2004. Vladimir has participated in EU projects such as Grid4All (2006-2009, FP6), SELFMAN (2006-2009, FP6), ENCORE (2010-2013, FP7), PaPP (2012-2015, FP7), CLOMMUNITY (2013-2015, FP7), EMJD-DC (2011-2020), and ExtremeEarth (2019-2021, H2020). Currently, he is a principal investigator in the project "ALEC2 - Adaptive Level of Effective and Continuous Care to Common Mental Health Disorders" (2021-2025), which aims at the development of AVA (Automated Vigilance Assistant), an AI-supported system for online cognitive behavioral therapy. At KTH, he teaches courses on Data Mining, Distributed Systems, Concurrent Programming, and Stream Processing. His current research interests include Cloud computing, data-intensive computing, stream processing, scalable distributed deep learning, autonomic computing, and distributed systems.


Research

Current research interests: Distributed systems; Cloud computing; Autonomic computing; Data-intensive computing; Stream processing; Scalable distributed deep learning; Reinforcement learning; NLP and NLU

ALEC2 - Adaptive Level of Effective and Continuous Care to common mental health disorders (CMDs)

Research Council of Norway, PN 321561, 2021-2025

Abstract The ALEC2 project combines expertise from Clinical Psychology and Computer Science to develop and implement ML methods that improve adherence, clinical efficiency, and treatment efficacy in the delivery of Braive's digital, iCBT-based (Internet-based Cognitive Behavioral Therapy) psychotherapy solutions. Braive, the company coordinating the project, will bring to market a new patient-centric and R&D-driven solution that addresses the shortcomings of current iCBT solutions. To meet this goal, we will create a new generation of iCBT that gradually automates the timely response to a patient's development in treatment. Our new system AVA (Automated Vigilance Assistant) - an AI-supported system for online cognitive behavioral therapy for patients with common mental health disorders - will use AI technologies, namely NLP, NLU, and DL, to automatically diagnose, predict, and monitor mental health conditions and to support patients towards full recovery. AVA will be able to: (i) take a patient's guided inputs from the clinically validated Mental Health Check (MHC) tool and support clinical decision-making by remote therapists, using quantitative scores and qualitative analysis; (ii) understand patients' notes and queries through Deep Learning (DL) and Natural Language Understanding (NLU) systems; (iii) monitor and detect deviations from treatment trajectories by interpreting written input and analyzing sporadic queries with patients to assess compliance and perform sentiment analysis; (iv) trigger human- or AI-led interventions targeted to each patient and the observed deviation from the treatment trajectory.
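
As an illustration only (not AVA's actual implementation), the sketch below shows how a pretrained transformer sentiment classifier could flag negatively worded patient notes for follow-up; the model name, threshold, and helper function are assumptions made for this example.

```python
# Illustrative sketch only: a generic transformer-based sentiment check on
# free-text patient notes. This is NOT the AVA system; the model choice and
# threshold are assumptions made for the example.
from transformers import pipeline

# A general-purpose English sentiment model (assumption, not AVA's model).
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def flag_for_follow_up(note: str, threshold: float = 0.9) -> bool:
    """Return True if a patient's note reads strongly negative,
    suggesting a possible deviation worth a human- or AI-led intervention."""
    result = classifier(note)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    return result["label"] == "NEGATIVE" and result["score"] >= threshold

if __name__ == "__main__":
    note = "I skipped the exercises this week and feel like nothing is improving."
    print(flag_for_follow_up(note))  # True for this example
```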

ExtremeEarth: From Copernicus Big Data to Extreme Earth Analytics

Abstract Copernicus is the European program for monitoring the Earth. The geospatial data produced by the Sentinel satellites puts Copernicus at the forefront of the Big Data paradigm, giving rise to all the relevant challenges: volume, velocity, variety, veracity, and value. ExtremeEarth concentrates on developing the technologies that will make Europe a pioneer in Extreme Earth Analytics, i.e., the Remote Sensing and Artificial Intelligence techniques needed for extracting information and knowledge out of the petabytes of Copernicus data. The ExtremeEarth consortium comprises Remote Sensing and Artificial Intelligence researchers and technologists with outstanding scientific track records and relevant commercial expertise. The research and innovation activities undertaken in ExtremeEarth will significantly advance the frontiers in Big Data, Earth Analytics, and Deep Learning for Copernicus data and Linked Geospatial Data, making Europe the top player internationally. The ExtremeEarth technologies will be demonstrated in two use cases with societal, environmental, and financial value: the Food Security and Polar use cases. ExtremeEarth will bring together the Food Security and Polar communities and work with them to develop technologies that these communities can use in the respective application areas. The results of ExtremeEarth will be exploited commercially by the industrial partners of the consortium.

EMJD-DC: Erasmus Mundus Joint Doctorate in Distributed Computing

EU/EACEA EMJD, GA 2012-0030, 2011-2020

Abstract EMJD-DC is an international doctoral programme in Distributed Computing. Students conduct their research over up to four years at two universities in different countries, with additional mobility to industry in most projects. Joint training schools cover scientific topics and transferable skills, such as project and scientific management, communication, and innovation techniques. EMJD-DC initially awards double degrees; however, a dedicated task is evaluating the implementation of a joint degree. The research projects address some of the key technological challenges of our time, mainly but not exclusively: ubiquitous data-intensive applications; scalable distributed systems (including Cloud computing and P2P models); adaptive distributed systems (autonomic computing, green computing, decentralized and voluntary computing); and applied distributed systems (distributed algorithms and systems, working in an interdisciplinary manner in existing and emerging fields to address industrial and societal needs in the European and worldwide context). The consortium partners assembled in EMJD-DC have a high international reputation in the research fields described above, and they complement each other very well in their fields of specialization and the corresponding training offers. The first language of all training and research activities is English, but students will also be exposed to local languages.

CLOMMUNITY: A Community networking Cloud in a Box

Objective Community networking is an emerging model for the Future Internet across Europe and beyond, in which communities of citizens build, operate, and own open IP-based networks, a key infrastructure for individual and collective digital participation. The CLOMMUNITY project aims to address the obstacles that communities of citizens face in bootstrapping, running, and expanding community-owned networks that provide community services organized as community clouds. This requires solving the research challenges posed by three requirements: self-managing, scalable (decentralized) infrastructure services for managing and aggregating a large number of widespread, low-cost, unreliable networking, storage, and home computing resources; distributed platform services that support and facilitate the design and operation of elastic, resilient, and scalable service overlays; and user-oriented services built over these underlying services, providing a good quality of experience at the lowest economic and environmental cost. This will be achieved through experimentally-driven research using the FIRE CONFINE community networking testbed, with the participation of large user communities (20,000+ people) and software developers from several community networks, by extending existing cloud service prototypes in a cyclic participatory process of design, development, experimentation, evaluation, and optimization for each challenge. The consortium includes two representative community networks with a large number of end-users and developers who use diverse applications (e.g., content distribution, multimedia communication, community participation) and also service providers, research institutions with experience and prototypes in the key related areas, and a recognized international organization for the dissemination of the outcome.

E2E-Clouds: End-to-End Distributed Clouds

Summary The E2E-Clouds project proposes to develop a distributed and federated cloud infrastructure that meets the challenge of scale and performance for data-intensive services by aggregating, provisioning, and managing computational, storage, and network resources from multiple centers and providers. It is an open, secure, and integrated network, storage, and computing infrastructure in which different organizations own different nodes and may combine the roles of provider and user. The management of network resources is integrated with the management of computation and storage, enabling good performance of applications running across multiple centers and timely, efficient delivery of content to end-users, encouraging further digital convergence between Telco, Media, and ICT. The E2E-Cloud is based on an open, self-managing, decentralized architecture that aggregates and manages distributed resources in a secure and fault-tolerant manner. It will facilitate the construction of novel data-intensive and media-intensive Internet services that use resources on demand in short, intense bursts, using and generating massive amounts of data. Several demonstrators will be developed with industrial collaborators to test, evaluate, and illustrate the E2E-Cloud platform, ranging from telecom to media production and distribution, large-scale analysis of Web content, and software testing as a service.

PaPP: Portable and Predictable Performance on Heterogeneous Embedded Manycores

Objective Modern advanced products use embedded computing systems with exacting requirements on execution speed, timeliness, and power consumption. It is a grand challenge to guarantee these requirements across product families and in the face of rapid technological evolution, as current development practices cannot manage performance requirements the same way they manage functional requirements. Even worse, with the proliferation of complex parallel target platforms, designing a system that reaches a given performance goal with a minimum of correctly managed resources becomes ever more difficult. Today, the only solution to this problem is to over-design systems: systems are pragmatically equipped with an overcapacity that likely avoids under-performance, but for this very reason they are more expensive and consume more resources than necessary. The proposed project aims to make performance predictable in every development phase, from system modeling through implementation to execution, by allowing for early specification and analysis of system performance and its adaptation to different hardware platforms, including an adaptive runtime system. The developed methods and tools will be evaluated during the project on several industrial use cases and demonstrators in three application domains important to European industry: Multimedia, Avionics and space, and Mobile communication. This approach will ensure that the methods and tools developed are usable and effective. To achieve our goals, we have built a highly skilled European consortium consisting of a balanced mix of problem owners, domain experts, and technology providers: large enterprises as application drivers, platform providers, and system integrators; SMEs as key technology innovators; and research institutes and universities bringing leading-edge perspectives.

ENCORE: ENabling technologies for a programmable many-CORE

Objective Design complexity and power density implications stopped the trend towards faster single-core processors. The trend is now to double the core count every 18 months, leading to chips with 100+ cores in 10-15 years. Developing parallel applications that harness such multicores is the key challenge for scalable computing systems. The ENCORE project aims to achieve a breakthrough in the usability, reliability, code portability, and performance scalability of such multicores through three main contributions. First, it defines an easy-to-use parallel programming model that offers code portability across several architectures. Second, it develops a runtime management system that dynamically detects, manages, and exploits parallelism, data locality, and shared resources. Third, it provides adequate hardware support for the parallel programming model and runtime environment that ensures scalability, performance, and cost-efficiency. The technology will be developed and evaluated using multiple applications provided by the partners or drawn from industry-standard benchmarks, ranging from massively parallel high-performance computing codes, where performance and efficiency are paramount, to embedded parallel workloads with strong real-time and energy constraints. The project integrates the work of all partners under a common runtime system on real multicore platforms, a shared FPGA architecture prototype, and a large-scale software-simulated architecture. Architecture features will be validated through implementation on ARM's detailed development infrastructure. ENCORE takes a holistic approach to parallelization and programmability: it analyzes the requirements of several relevant applications ranging from High-Performance Computing to embedded multicore, parallelizes these applications using the proposed programming model, optimizes the runtime system for a range of parallel architectures, and develops hardware support for the runtime system.
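
As a rough analogue only (ENCORE targets C/C++ with annotation-based task models, not Python), the sketch below illustrates the general idea of expressing work as independent tasks and letting a runtime scheduler map them onto available cores; the function and data are invented for the example.

```python
# Rough, hypothetical analogue of a task-based parallel programming model:
# the programmer expresses independent tasks, and a runtime (here a process
# pool) decides where and when they run. This is NOT ENCORE's actual model.
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    # Placeholder per-task computation (e.g., one tile of a larger problem).
    return sum(x * x for x in block)

def run(blocks):
    # The executor plays the role of the runtime system: it discovers the
    # available workers and schedules independent tasks onto them.
    with ProcessPoolExecutor() as runtime:
        futures = [runtime.submit(process_block, b) for b in blocks]
        return [f.result() for f in futures]

if __name__ == "__main__":
    data = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]
    print(run(data))
```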

SELFMAN: Self Management for Large-Scale Distributed Systems

Objective The goal of SELFMAN is to make large-scale distributed applications self-managing by combining the strong points of component models and structured overlay networks. One of the key obstacles to deploying large-scale applications on networks such as the Internet is management: currently, many specialized personnel are needed to keep large Internet applications running. SELFMAN will contribute to removing this obstacle and thus enable the development of many more Internet applications. In the context of SELFMAN, we define self-management along four axes: self-configuration (systems configure themselves according to high-level management policies), self-healing (systems automatically detect and repair faults), self-tuning (systems continuously monitor their performance and adjust their behavior to optimize resource usage and meet service level agreements), and self-protection (systems protect themselves against security attacks). SELFMAN will provide self-management by combining a component model with a structured overlay network.
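
For illustration only, the sketch below shows a generic self-tuning control loop in the monitor-analyze-plan-execute style often used in autonomic computing; the metric, thresholds, and replica-count knob are assumptions for the example, not SELFMAN's design.

```python
# Minimal sketch of a self-tuning (autonomic) control loop. The latency
# metric, target values, and scaling knob are illustrative assumptions.
import random
import time

def monitor():
    # Stand-in for reading a live performance metric, e.g. average request
    # latency in milliseconds.
    return random.uniform(50, 300)

def plan(latency_ms, replicas, target_ms=150):
    # Scale out when the service is too slow, scale in when it has headroom.
    if latency_ms > target_ms and replicas < 10:
        return replicas + 1
    if latency_ms < target_ms / 2 and replicas > 1:
        return replicas - 1
    return replicas

def execute(old, new):
    if new != old:
        print(f"adjusting replicas: {old} -> {new}")

def control_loop(iterations=5):
    replicas = 2
    for _ in range(iterations):
        latency = monitor()
        desired = plan(latency, replicas)
        execute(replicas, desired)
        replicas = desired
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```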

Grid4All: Self-* Grid: Dynamic Virtual Organizations for schools, families, and all

Objective Grid4All aims to enable domestic users and non-profit organizations, such as schools and small enterprises, to share their resources and to access massive grid resources when needed, envisioning a future in which access to resources is democratized and cooperative. Examples include home users of image-editing applications, school projects such as volcanic-eruption simulations, or small businesses doing data mining. Cooperation examples include joint homework between pupils or international collaboration. The Grid4All goals entail a system that pools large amounts of cheap resources (connecting to commercial cluster providers when needed); a dynamic system that satisfies spikes of demand; the use of self-management techniques to scale; and support for isolated, secure, dynamic, geographically distributed user groups, using secure peer-to-peer techniques to federate large numbers of small-scale resources into large-scale grids. We target small communities such as domestic users, schools, and SMEs (for-profit or non-profit), harnessing their resources together with resources from operated IT centers to form on-demand, service-oriented grids and avoid pre-configured infrastructures. The technical issues addressed are security, support for multiple administrative and management authorities, P2P techniques for self-management, adaptivity, and dynamicity, on-demand resource allocation, heterogeneity, and fault tolerance. The proof-of-concept applications include e-learning tools for collaborative editing in schools and digital content processing services accessible to residential end-users.

CoreGrid: European research network on foundations, software infrastructures and applications for large-scale distributed, grid and peer-to-peer technologies

Objective CoreGRID aims at strengthening and advancing scientific and technological excellence in the area of Grid and Peer-to-Peer technologies. To achieve this objective, the Network brings together 119 permanent researchers and 165 PhD students from 42 institutions. An ambitious joint programme of activities will be conducted around six complementary research areas, selected on the basis of their strategic importance, their research challenges, and the recognized European expertise needed to develop next-generation Grid middleware, namely: knowledge & data management; programming models; system architecture; Grid information and monitoring services; resource management and scheduling; and problem-solving environments, tools, and GRID systems.

EVERGROW: EVER-GROWing Global Scale-Free Networks, Their Provisioning, Repair and Unique Functions

The project's goal is to build the science-based foundations for the global information networks of the future. Not only will networks soon provide us access to all the world's knowledge, but society will become network-based, from private life and business to industry and government processes. The demands on the future Internet will be high. We can already see how the complexity of the Internet is continually increasing, and we know a great deal about the problems this will cause. Above all, several of today's highly manual processes must be automated, such as network management, network provisioning, and network repair on all levels.

PEPITO: Peer-To-Peer-Implementation-and-TheOry

Traditional centralized system architectures are increasingly inadequate, yet future decentralized peer-to-peer (P2P) models for collaboration and computing are still poorly understood, both in how to build them robustly and in what can be built with them. The PEPITO project will investigate completely decentralized models of P2P computing.




PhD Students

Primary supervisor

Co-supervisor


Publications

2023

2022

2021

2020

2019

2018

2017

2016

2015

2014

2013

Selected papers 2012 and before

Complete List of Publications: DBLP | Google Scholar | KTH DiVA

Services

Selected Program Committees

Selected Conference Organisations


Contact

  • Email: vladv@kth.se

  • Phone: +46 8 7904115

  • Mobile phone: +46 73 6441465

  • Postal address: Vladimir Vlassov, KTH/EECS/SCS, Electrum 229, SE-164 40, Kista, Sweden

  • Visiting address: Kistagången 16, Electrum, elevator A, level 4, Software and Computer Systems, room 2490