Principles of Probabilistic Programming

Joost-Pieter Katoen

RWTH Aachen University, Germany


Abstract: Probabilistic programming is en vogue. It is used to describe complex Bayesian networks, quantum programs, security protocols and biological systems. Programming languages like C, C#, Java, Prolog, Scala, etc. all have their probabilistic version. Key features are random sampling and means to adjust distributions based on information obtained from system observations. We show some semantic intricacies, argue that termination is more involved than the halting problem, and discuss recursion and run-time analysis. We also show some applications to the analysis of randomised algorithms and Bayesian networks.
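The two key features named in the abstract, random sampling and conditioning on observations, can be illustrated with a minimal sketch. The program below is not from the talk; it is a generic two-coin example, where the `observe` step is realised by rejection sampling (runs violating the observation are discarded).

```python
import random

def run_once():
    """One run of a tiny probabilistic program: sample two fair
    coins, then condition on observing at least one heads."""
    c1 = random.random() < 0.5   # random sampling
    c2 = random.random() < 0.5
    if not (c1 or c2):           # observe(c1 or c2)
        return None              # rejected run
    return c1

def infer(trials=100_000):
    """Estimate the posterior P(c1 | c1 or c2) by rejection sampling."""
    accepted = [r for r in (run_once() for _ in range(trials)) if r is not None]
    return sum(accepted) / len(accepted)

print(infer())  # estimate close to 2/3
```

Without the observation, `P(c1)` is 1/2; conditioning on "at least one heads" leaves three equally likely worlds, two of which have `c1` true, so the posterior shifts to 2/3.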

June Andronick, UNSW, Australia

Rustan M. Leino, Microsoft Research, Redmond, USA


Towards a Contract-Based Model of Evolving Software Architecture

Zhiming Liu

Centre for Research and Innovation in Software Engineering (RISE)

Southwest University, Chongqing, China

Abstract: The sustainable development of most economies and the quality of life of their citizens largely depend on the development and application of evolving digital ecosystems. The characteristic features of these systems are reflected in the so-called Internet of Things (IoT), Smart Cities, Cyber-Physical Systems (CPS), Systems of Human-Machine-Things, and Data Centers. Compared to the challenges engineers used to face in traditional ICT applications, system and software development now means developing and integrating new components or subsystems, new applications and front-end services that depend on the infrastructures of existing systems. It has to deal with the complexity of ever-evolving architectures of digital components and physical components, together with sensors and smart devices controlled and coordinated by software. The architectural components are designed with different technologies, run on different platforms and interact through different communication technologies. Software runs in these systems for data processing and analytics, computation, and intelligent control. The critical requirements of applications of these systems should not be compromised, and thus critical components need to be “provably correct”. In this tutorial we argue, for developing software in these emerging systems, the need for a foundation combining traditional software engineering, AI (or knowledge-based engineering) and the emerging Big Data technologies. We discuss how an interface theory could play a core role in this foundation, enabling a seamless combination of different models, methods and tools for software development, AI and Big Data, as well as for system integration. Based on this motivating discussion, we propose a research agenda towards the development of a contract-based model of component-based design for continuously evolving software architectures of inter-networking systems.
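The contract-based view the abstract argues for can be sketched in a few lines. The following is a generic assume/guarantee reading of interface contracts, not the author's model: a component's environment must satisfy the assumption, and the component then ensures the guarantee; refinement means weakening the assumption while strengthening the guarantee. All names and the sampling-based checks are illustrative.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """Assume/guarantee interface contract over integer data."""
    assume: Callable[[int], bool]          # obligation on the environment
    guarantee: Callable[[int, int], bool]  # obligation on the component

def satisfies(component, contract, inputs):
    """A component satisfies the contract if, for every input meeting
    the assumption, its output meets the guarantee (checked here only
    over a finite sample of inputs)."""
    return all(contract.guarantee(x, component(x))
               for x in inputs if contract.assume(x))

def refines(c_new, c_old, inputs, outputs):
    """c_new refines c_old: weaker assumption, stronger guarantee,
    again checked only over finite sample sets."""
    weaker = all(c_new.assume(x) for x in inputs if c_old.assume(x))
    stronger = all(c_old.guarantee(x, y)
                   for x in inputs for y in outputs
                   if c_new.assume(x) and c_new.guarantee(x, y))
    return weaker and stronger

# Hypothetical example: a sensor-scaling component and its contract.
scale = Contract(assume=lambda x: 0 <= x <= 100,
                 guarantee=lambda x, y: y == 2 * x)
print(satisfies(lambda x: 2 * x, scale, range(101)))  # True
```

In a full interface theory the checks would be symbolic rather than sampled, and contracts would compose across the heterogeneous components, platforms and communication technologies the abstract describes; this sketch only fixes the shape of the idea.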