Data-driven character animation holds great promise for enhancing realism and creativity in games, film, virtual avatars and social robots. However, due to the high bar on visual quality, most existing AI animation solutions focus narrowly on a specific task and do not generalise to different motion types. This talk makes the case that 1) machine learning has now advanced far enough that strong, task-agnostic motion models are possible, and that 2) these models should be probabilistic in nature, to accommodate the great diversity of human behaviour. We present MoGlow, a new, award-winning deep-learning architecture that leverages normalising flows and satisfies both criteria. MoGlow is competitive with the state of the art in locomotion generation for humans and dogs, as well as in speech-driven gesture generation. We also combine MoGlow with our research on synthesising spontaneous-sounding speech to make a virtual character walk, talk, and gesticulate from text input alone.

For a longer introduction showing our models in action, please see the following YouTube videos:
Locomotion synthesis: https://youtu.be/pe-YTvavbtA
Co-speech gesture generation: https://youtu.be/egf3tjbWBQE
Joint synthesis of speech and motion: https://youtu.be/4_Gq9rU_yWg
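
To make the "probabilistic, flow-based" idea concrete, the sketch below shows a single conditional affine-coupling step of the kind used in many normalising-flow models. It is a minimal illustration only, not the MoGlow implementation: the class name, dimensions and coupling design are assumptions for exposition.

```python
# Minimal sketch (not MoGlow itself) of a conditional normalising flow step:
# an invertible map between motion features x and latent noise z, given a
# control signal c. Exact log-likelihoods come from the change-of-variables
# formula; diverse motion is sampled by drawing z ~ N(0, I) and inverting.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Half of x is transformed with a scale/shift predicted from the
    other half plus the control input (all sizes are illustrative)."""
    def __init__(self, dim, ctrl_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctrl_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, c):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([x1, c], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)          # keep scales well-behaved
        z2 = x2 * torch.exp(log_s) + t     # invertible affine transform
        logdet = log_s.sum(dim=-1)         # change-of-variables term
        return torch.cat([x1, z2], dim=-1), logdet

    def inverse(self, z, c):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(torch.cat([z1, c], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x2 = (z2 - t) * torch.exp(-log_s)  # exact inverse of forward()
        return torch.cat([z1, x2], dim=-1)

# Training maximises the exact log-likelihood log p(x|c) = log N(z; 0, I) + logdet.
# Sampling draws z ~ N(0, I) and runs inverse(), so the same control input can
# yield many different motions -- the probabilistic behaviour argued for above.
```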