Seif Haridi

"The History of Oz, a Multiparadigm Programming Language" at ACM HOPL IV 2021

The video presentation of the talk can be accessed here.

Seif Haridi is Chair Professor of Computer Systems, specializing in parallel and distributed computing, and head of the Distributed Computing group (DC@KTH) at KTH Royal Institute of Technology, Stockholm, Sweden. He was Chief Scientific Advisor of RISE SICS until December 2019. He led a European research program on Cloud Computing and Big Data by EIT Digital between 2010 and 2013, and is a co-founder of a number of start-ups in the area of distributed and cloud computing, including HiveStreaming and LogicalClocks. He is well known for the textbook "Concepts, Techniques, and Models of Computer Programming", which explains many difficult programming concepts in a simple and insightful way. For more than ten years he has been teaching two popular courses at KTH, on Distributed Algorithms and on Peer-to-Peer Computing. His research focuses on the combination of systems research and theory in the areas of programming systems and distributed computing. He is a co-designer of SICStus Prolog, the most well-known logic programming system, and of the Mozart Programming System, a high-quality open-source development platform based on the Oz multi-paradigm programming language. Recently, together with his research group, he has been contributing to the design of Apache Flink, a distributed Big Data engine for streaming analytics, and of HOPS, the first complete European platform for data analytics and machine learning. HopsFS, the file system of HOPS, won the IEEE Scale Prize 2017 as the most scalable HDFS file system. Together with LogicalClocks and Hopsworks, he also won the award for European Data Science Technology Innovation in 2019.


Publications


  • Book: Concepts, Techniques, and Models of Computer Programming, MIT Press, 2004: details

  • Publications on KTH DiVA: KTH DiVA

  • Publications on Semantic Scholar: Semantic Scholar

  • Publications on Google Scholar: Google Scholar

  • Publications on DBLP: DBLP


Projects

Continuous Deep Analytics (CDA)

SSF, 2018-2022

Modern end-to-end data pipelines are highly complex and unoptimized. They combine code from different frontends (e.g., SQL, Beam, Keras), declared in different programming languages (e.g., Python, Scala), and execute across many backend runtimes (e.g., Spark, Flink, TensorFlow). Data and intermediate results take a long and slow path through excessive materialization and conversion steps down to different, partially supported hardware accelerators. End-to-end guarantees are typically hard to reason about due to the mismatch of processing semantics across runtimes. The Continuous Deep Analytics (CDA) project aims to shape the next generation of software for scalable, data-driven applications and pipelines. Our work combines state-of-the-art mechanisms from compiler and database technology with hardware-accelerated machine learning and distributed stream processing.

ExtremeEarth

H2020, 2019-2021

ExtremeEarth concentrates on developing techniques and software that enable the extraction of information and knowledge from big Copernicus data using deep learning and extreme geospatial analytics, and on developing two use cases that combine this information and knowledge with other relevant non-EO data sets. ExtremeEarth will impact developments in the Integrated Ground Segment of Copernicus and the Sentinel Collaborative Ground Segment. ExtremeEarth tools and techniques can be used to extract information and knowledge from big Copernicus data and make it available as linked data, thereby allowing developers with minimal or no knowledge of EO techniques, file formats, data access protocols, etc. to easily build applications.

A Big Data Analytics Framework for a Smart Society (BIDAF)

KK-stiftelsen (KKS), 2014-2019

The overall aim of the BIDAF project is to significantly advance research in massive data analysis by means of machine learning, in response to the increasing demand for retrieving value from data in all parts of society. This will be done by creating a strong distributed research environment for big data analytics. Challenges must be addressed on several levels: (i) platforms to store and process the data, (ii) machine learning algorithms to analyze the data, and (iii) high-level tools to access the results.

StreamLine

H2020, 2016-2018

Streamline is funded by the European Union’s Horizon 2020 research and innovation programme to enhance the European data platform Apache Flink to handle both stream and batch data in a unified way. The project includes both research and use cases to validate the results. It has the following objectives: (i) to research, design, and develop a massively scalable, robust, and efficient processing platform for data at rest and data in motion in a single system, (ii) to develop a highly accurate, massively scalable, stream-oriented machine learning library based on new algorithms and approximate data structures, (iii) to provide a unified, user-friendly, interactive programming environment that is easy to deploy in the cloud, and to validate its success as measured by well-defined KPIs, (iv) to implement a real-time contextualization engine, enabling analytical and predictive models to take real-world context into account, and (v) to pursue multi-faceted, effective dissemination of Streamline results to the research, academic, and international community, especially targeting SMEs, developers, data analysts, and the open-source community.

iSocial

FP7 Marie Curie ITN, 2013-2017

The rapid proliferation of Online Social Networking (OSN) sites is expected to reshape the Internet’s structure, design, and utility. We believe that OSNs create a potentially transformational change in consumer behavior and will have a far-reaching impact on traditional industries of content, media, and communications. The iSocial ITN aspires to bring a transformational change in OSN provision, pushing the state of the art from centralized services towards totally decentralized systems that will pervade our environment and seamlessly integrate with future Internet and media services. OSN decentralization can address privacy considerations and improve service scalability, performance, and fault tolerance in the presence of an expanding base of users and applications. The project pursues the vision of a decentralized ubiquitous social networking layer and the development of a novel distributed computing substrate that provides Decentralized Online Social Networking (DOSN) services and supports the seamless development and deployment of new social applications and services in the absence of central management and control. The iSocial consortium envisions the emergence of distributed and scalable overlay networking and distributed storage infrastructures that will provide support for open social networks and for innovative social network applications, preserving end-user privacy and information ownership. The main objective of iSocial is to provide world-class training for a next generation of researchers, computer scientists, and Web engineers, with an emphasis on a strong combination of advanced understanding of both theoretical and experimental approaches, methodologies, and tools required to develop DOSN platforms. iSocial is divided into four interconnected research topics, each comprising important research challenges with high exploitation potential: (i) overlay infrastructure for decentralized online social networking services, (ii) data storage and distribution, (iii) security, privacy, and trust, and (iv) modelling and simulation.

A Community networking Cloud in a Box (CLOMMUNITY)

FP7 EU-project, 2013-2015

Community networking is an emerging model for the Future Internet across Europe and beyond, where communities of citizens build, operate, and own open IP-based networks: a key infrastructure for individual and collective digital participation. The CLOMMUNITY project aims at addressing the obstacles that communities of citizens face in bootstrapping, running, and expanding community-owned networks that provide community services organised as community clouds. This requires solving specific research challenges imposed by the requirements of: self-managing and scalable (decentralized) infrastructure services for the management and aggregation of a large number of widespread, low-cost, unreliable networking, storage, and home computing resources; and distributed platform services that support and facilitate the design and operation of elastic, resilient, and scalable service overlays and user-oriented services built on top of them, providing a good quality of experience at the lowest economic and environmental cost. This will be achieved through experimentally driven research, using the FIRE CONFINE community networking testbed, with the participation of large user communities (20,000+ people) and software developers from several community networks, by extending existing cloud service prototypes in a cyclic participatory process of design, development, experimentation, evaluation, and optimization for each challenge. The consortium includes two representative community networks with a large number of end users and developers, who use diverse applications (e.g., content distribution, multimedia communication, community participation) and also act as service providers; research institutions with experience and prototypes in the key related areas; and a recognized international organisation for the dissemination of the outcome.

E2E-Clouds

SSF, 2012-2017

E2E-Clouds was a five-year research project financed by the Swedish Foundation for Strategic Research. The goal of the project was to develop an End-to-End information-centric Cloud (E2E-Cloud) for data-intensive services and applications. The E2E-Cloud is a distributed and federated cloud infrastructure that meets the challenge of scale by aggregating, provisioning, and managing computational, storage, and networking resources from multiple centers and providers. Like some current data-center clouds it manages computation and storage in an integrated fashion for efficiency, but it adds wide-scale distribution.

Peer-to-Peer Live Streaming (PeerTV)

Vinnova, 2007-2010

The PeerTV project aims to develop, deploy, and validate peer-to-peer media streaming platforms that address three key requirements not currently met by existing broadband infrastructures: (i) efficient utilization of the upload bandwidth available at peers, to reduce the amount of bandwidth that needs to be centrally provisioned and paid for by TV broadcasters, (ii) reduced playback latency and increased playback continuity of video, through constructing novel topologies, and (iii) minimal amount (and cost) of network traffic for Internet Service Providers (ISPs), through building an autonomous-system-aware infrastructure.

Self Management for Large-Scale Distributed Systems (SELFMAN)

FP6 EU-project, 2006-2009

The goal of SELFMAN is to make large-scale distributed applications self-managing by combining the strong points of component models and structured overlay networks. One of the key obstacles to deploying large-scale applications running on networks such as the Internet is the issue of management. Currently, many specialized personnel are needed to keep large Internet applications running. SELFMAN will contribute to removing this obstacle, and thus enable the development of many more Internet applications. In the context of SELFMAN, we define self-management along four axes: self-configuration (systems configure themselves according to high-level management policies), self-healing (systems automatically detect and repair faults), self-tuning (systems continuously monitor their performance and adjust their behaviour to optimize resource usage and meet service-level agreements), and self-protection (systems protect themselves against security attacks). SELFMAN will provide self-management by combining a component model with a structured overlay network.

Peer-To-Peer-Implementation-and-TheOry (PEPITO)

FP5 EU-project, 2002-2004

Traditional centralised system architectures are ever more inadequate, yet a good understanding of future decentralised peer-to-peer (P2P) models for collaboration and computing is lacking: both of how to build them robustly and of what can be built. The PEPITO project will investigate completely decentralised models of P2P computing.

Information Cities (ICities)

EU-project, 2000-2003

The Information Cities project models the aggregation and segregation patterns in a virtual world of infohabitants (humans, virtual firms, on-line communities, and software agents acting on their behalf). The objective is to capture aggregate patterns of virtual organisation emerging from interaction over the information infrastructure: a virtual place where millions (or billions) of infohabitants meet, co-operate, and trade; a stable and scalable micro-environment that supports the efficient provision of many e-commerce and personal services, and allows for the continuous creation of new activities and relationships. To investigate the conditions for the emergence and evolution of Information Cities, we will develop an open multi-agent environment, flexible and adaptive to the dynamic nature of the Information Society.

EVERGROW, a European Research Project on the Future Internet

The goal of the project is to build the science-based foundations for the global information networks of the future. Not only will networks soon provide us with access to all the world's knowledge, but society as a whole will become network-based, from private life and business to industry and the processes of government. The demands on the future Internet will be high. We can already see how the complexity of the Internet is continually increasing, and we know a great deal about the problems this will cause. Above all, a number of today's highly manual processes must be automated, such as network management, network provisioning and network repair on all levels.

CoreGrid

The CoreGRID Network of Excellence (NoE) aims at strengthening and advancing scientific and technological excellence in the area of Grid and Peer-to-Peer technologies. To achieve this objective, the Network brings together a critical mass of well-established researchers (161 permanent researchers and 164 PhD students) from forty-one institutions that have constructed an ambitious joint programme of activities. This joint programme is structured around six complementary research areas, selected on the basis of their strategic importance, their research challenges, and the recognised European expertise available to develop next-generation Grid middleware.


Systems

HOPS

Hops is a next-generation distribution of Apache Hadoop with a heavily adapted implementation of HDFS, called HopsFS. HopsFS is a new implementation of the Hadoop Filesystem (HDFS), based on Apache Hadoop 2.8, that supports multiple stateless NameNodes, with the metadata stored in an in-memory distributed database (NDB). HopsFS enables NameNode metadata to be both customized and analyzed, because it can easily be accessed via SQL or the native NDB API.
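
Because the NameNode metadata lives in an SQL-accessible database, ad hoc analysis of the namespace is essentially a query away. Below is a minimal Scala sketch of such a query via JDBC; the connection URL, the credentials, and the hdfs_inodes table and column names are illustrative assumptions, not the exact Hops schema.

    import java.sql.DriverManager

    object MetadataQuery {
      def main(args: Array[String]): Unit = {
        // Connect to a MySQL server fronting the NDB cluster (placeholder URL).
        val conn = DriverManager.getConnection(
          "jdbc:mysql://ndb-host:3306/hops", "user", "password")
        try {
          // Table and column names are assumptions, for illustration only.
          val rs = conn.createStatement().executeQuery(
            "SELECT name, size FROM hdfs_inodes ORDER BY size DESC LIMIT 10")
          while (rs.next())
            println(s"${rs.getString("name")}: ${rs.getLong("size")} bytes")
        } finally conn.close()
      }
    }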

Flink

Apache Flink is a platform for efficient, distributed, general-purpose data processing. It features powerful programming abstractions in Java and Scala, a high-performance runtime, and automatic program optimization. It has native support for iterations, incremental iterations, and programs consisting of large DAGs of operations. Flink Streaming is an extension of the core Flink API for high-throughput, low-latency data stream processing. The system can connect to and process data streams from many sources, such as RabbitMQ, Flume, Twitter, and ZeroMQ, as well as from any user-defined data source.
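
As a taste of the API, here is a minimal streaming word count using Flink's classic Scala DataStream API (since deprecated in newer Flink releases); the host and port of the socket source are placeholders.

    import org.apache.flink.streaming.api.scala._

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env
          .socketTextStream("localhost", 9999)   // placeholder text source
          .flatMap(_.toLowerCase.split("\\W+"))  // tokenize each line into words
          .filter(_.nonEmpty)
          .map((_, 1))                           // pair each word with a count
          .keyBy(_._1)                           // partition the stream by word
          .sum(1)                                // maintain a running count
          .print()
        env.execute("Streaming word count")
      }
    }

Running `nc -lk 9999` first and then the job, every line typed into the socket updates the per-word counts.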

Kompics

Kompics is a message-passing component model for building distributed systems by putting together protocols programmed as event-driven components. Systems built with Kompics leverage multi-core machines out of the box and can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic debugging and reproducible performance evaluation of unmodified Kompics distributed systems.
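
To convey the programming style without reproducing the actual Kompics API, the following self-contained Scala sketch mimics the core idea: components react to events arriving on ports and trigger events in response. Real Kompics components declare provided/required ports and subscribe handlers to them.

    // Illustrative sketch of the event-driven component style; NOT the
    // real Kompics API, whose Java/Scala DSL differs in names and types.
    object ComponentSketch extends App {
      sealed trait Event
      case object Ping extends Event
      case object Pong extends Event

      // A component handles incoming events and triggers outgoing ones.
      class Ponger(trigger: Event => Unit) {
        def handle(e: Event): Unit = e match {
          case Ping => trigger(Pong) // answer every Ping with a Pong
          case _    =>               // ignore other events
        }
      }

      val ponger = new Ponger(e => println(s"triggered: $e"))
      ponger.handle(Ping) // prints: triggered: Pong
    }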

Scalaris

Scalaris is a scalable, transactional, distributed key-value store. It was the first NoSQL database that supported ACID properties for multi-key transactions. It can be used for building scalable Web 2.0 services. Scalaris uses a structured overlay with a non-blocking Paxos commit protocol for transaction processing with strong consistency over replicas. Scalaris is implemented in Erlang.
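
To illustrate the multi-key ACID usage pattern (a sketch only; the real Scalaris client API, e.g. its Java bindings, uses different names and signatures), consider a transfer between two keys that must commit atomically:

    // Hypothetical transactional key-value interface; not the Scalaris API.
    trait Txn {
      def read(key: String): Option[String]
      def write(key: String, value: String): Unit
      def commit(): Boolean // atomically applies all writes, or aborts
    }

    object Transfer {
      def transfer(tx: Txn, from: String, to: String, amount: Long): Boolean = {
        val a = tx.read(from).map(_.toLong).getOrElse(0L)
        val b = tx.read(to).map(_.toLong).getOrElse(0L)
        if (a < amount) false // insufficient balance: touch nothing
        else {
          tx.write(from, (a - amount).toString)
          tx.write(to, (b + amount).toString)
          tx.commit() // both updates become visible atomically, or neither does
        }
      }
    }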

CATS

Distributed key-value stores provide scalable, fault-tolerant, and self-organizing storage services, but fall short of guaranteeing linearizable consistency in partially synchronous, lossy, partitionable, and dynamic networks, when data is distributed and replicated automatically by the principle of consistent hashing. CATS is a distributed key-value store that uses consistent quorums to guarantee linearizability and partition tolerance in such adverse and dynamic network conditions. CATS is scalable, elastic, and self-organizing; key properties for modern cloud storage middleware.
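
The basic quorum intuition behind such guarantees fits in a few lines (a deliberate simplification: CATS's consistent quorums additionally keep quorums correct while the replica group itself is being reconfigured under churn):

    object QuorumIntuition extends App {
      // With n replicas, any two majorities overlap in at least one node,
      // so a read quorum always intersects the latest write quorum.
      def majority(n: Int): Int = n / 2 + 1
      def intersect(n: Int): Boolean = 2 * majority(n) > n
      assert((1 to 9).forall(intersect)) // holds for every group size
    }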

MOZART

The Mozart Programming System combines ongoing research in programming language design and implementation, constraint logic programming, distributed computing, and human-computer interfaces. Mozart implements the Oz language and provides both expressive power and advanced functionality.

AKL

AKL (AGENTS Kernel Language) is a concurrent constraint programming language developed at the Swedish Institute of Computer Science (SICS). In AKL, computation is performed by agents interacting through stores of constraints. This notion accommodates multiple programming paradigms; in appropriate contexts, AKL agents may be thought of as processes, objects, functions, relations, or constraints.

SICStus Prolog

SICStus is an ISO-standard-compliant Prolog development system. See our article. SICStus is built around a high-performance Prolog engine that can use the full virtual memory space on 32- and 64-bit architectures alike. SICStus is efficient and robust for large amounts of data and large applications.


Teaching

  • EdX - Reliable Distributed Algorithms (Part I) [link]

  • EdX - Reliable Distributed Algorithms (Part II) [link]

  • Youtube - Distributed Algorithms [link]

  • Advanced Topics in Distributed Systems (ID2220) [link]

  • Distributed Systems, Advanced Course (ID2203) [link]

  • Distributed Computing, Peer-to-Peer and GRIDS (ID2210) [link]

  • Distributed Computer Systems (2G1126)

  • Computer Science II (2G1512)


PhD Students

  • Paris Carbone, Scalable and Reliable Data Stream Processing, 2018, KTH

  • Fatemeh Rahimian, Gossip-Based Algorithms for Information Dissemination and Graph Clustering, 2014, KTH

  • Raul Jimenez, Distributed Peer Discovery in Large-Scale P2P Streaming Systems, 2013, KTH

  • Roberto Roverso, A System, Tools and Algorithms for Adaptive HTTP-live Streaming on Peer-to-peer Overlays, 2013, KTH

  • John Ardelius, On the Performance Analysis of Large Scale, Dynamic, Distributed and Parallel Systems, 2013, KTH

  • Amir H. Payberah, Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet, 2013, KTH

  • Cosmin Arad, Programming Model and Protocols for Reconfigurable Distributed Systems, 2013, KTH

  • Tallat Shafaat, Partition Tolerance and Data Consistency in Structured Overlay Networks, 2013, KTH

  • Ali Ghodsi, Distributed k-ary System: Algorithms for Distributed Hash Tables, 2006, KTH

  • Erik Klintskog, Generic Distribution Support for Programming Systems, 2005, KTH

  • Sameh El-Ansary, Designs and Analyses in Structured Peer-To-Peer Systems, 2005, KTH

  • Per Brand, Design Philosophy of Distributed Programming Systems: the Mozart Experience, 2005, KTH

  • Joe Armstrong, Making Reliable Distributed Systems in the Presence of Software Errors, 2003, KTH

  • Ashley Saulsbury, Attacking Latency Bottlenecks in Distributed Memory Systems, 1999, KTH

  • Johan Montelius, Exploiting Fine-grain Parallelism in Concurrent Constraint Languages, 1997, Uppsala University

  • Björn Carlsson, Compiling and Executing Finite Domain Constraints, 1995, Uppsala University

  • Torbjörn Keisu, Tree Constraints, 1994, KTH

  • Sverker Janson, AKL - a Multiparadigm Programming Language, 1994, Uppsala University

  • Erik Hagersten, Towards Scalable Cache-Only Memory Architectures, 1992, KTH

  • Roland Karlsson, A High Performance OR-parallel Prolog System, 1992, KTH

  • Dan Sahlin, An Automatic Partial Evaluator for Full Prolog, 1991, KTH

  • Nabiel El Shiewy, Robust Coordinated Reactive Computing in SANDRA, 1990, KTH

  • Bogumil Hausman, Pruning and Speculative Work in OR-Parallel Prolog, 1990, KTH

  • Mats Carlsson, Design and Implementation of an OR-parallel Prolog Engine, 1990, KTH


Contact

  • Email: haridi@kth.se

  • Phone: +46-(0)8-790 41 22

  • Address: Electrum 229, Kistagången 16, 164 40 Kista, Sweden