Motivation & Scope
The fields of embedded computing, wireless communication, data mining and artificial intelligence are all advancing rapidly. Their combination fosters the emergence of "smart environments": systems made of networked physical objects embedded both in public places and in the private spheres of everyday life. This trend supports the rise of a broad variety of data-driven services that are highly customized to various aspects of our lives and hold great social and economic potential. Examples include wearable computing systems and applications for monitoring personal health and physical/social activities; Intelligent Transport Systems (ITS) relying on cars that are becoming increasingly aware of their environment and drivers; and home automation systems that even support face and emotion recognition applications and provide web access to entirely novel types of content.
Such disruptive technologies are expected to rely increasingly on sophisticated machine learning and statistical inference techniques to obtain a much clearer semantic understanding of people's states, activities, environments, contexts and goals. However, these developments also raise new technical, social, ethical and legal privacy challenges which, if left unaddressed, will jeopardize wider deployment and thus undermine the potential social and economic benefits of these emerging technologies. Indeed, the algorithms increasingly used for complex information processing in today's hyperconnected society are rarely designed with privacy and data protection in mind. On the other hand, privacy researchers are increasingly interested in leveraging machine learning and inference models when designing both attacks and innovative privacy-enhancing tools.
Aiming to foster an exchange of ideas and an interdisciplinary discussion on the theoretical and practical issues that arise when inference models are applied to jeopardize or enhance data protection and privacy, this workshop provides researchers and practitioners with a unique opportunity to share their perspectives with others interested in the various aspects of privacy and inference. Topics of interest include (but are not limited to):
- Adversarial learning and emerging privacy threats
- Anonymous communication
- Discrimination-aware learning
- Privacy-preserving deep learning models
- Deep learning models for privacy
- Privacy-preserving clustering, ranking, regression, etc.
- Privacy and anonymity metrics
- Statistical disclosure control
- Differential privacy and relaxations
- Machine learning and statistical inference on encrypted data
- Machine learning and statistical inference for cybersecurity (e.g., for malware and misbehaviour detection, analysis, prevention)
- Social graph matching and de-anonymization techniques
- Private information retrieval
- Algorithms and accountability
- Case studies and experimental datasets
- Legal, regulatory, and ethical issues