AI systems are increasingly used to enhance decision-making. One of the main building blocks of AI is data: machine and statistical learning methods extract knowledge from the underlying data in the form of models and inferences. The typical workflow consists of feeding pre-processed data into machine learning algorithms, which transform the data into models; these models are then embedded into AI systems for decision-making.
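As a minimal sketch of this workflow, the following example trains a model on synthetic data and uses it to make decisions; the dataset, scikit-learn estimators and all parameters are illustrative assumptions, not the group's actual pipeline.

```python
# Minimal sketch of the data -> model -> decision workflow described above.
# Synthetic data and scikit-learn estimators are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 1. Pre-processed data: a synthetic labelled dataset stands in for real data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Learning: a machine learning algorithm transforms the data into a model.
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

# 3. Decision-making: the model is embedded into a system that makes decisions.
decisions = model.predict(scaler.transform(X_test))
print("positive decisions:", int(decisions.sum()), "out of", len(decisions))
```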
Machine learning and AI have spread into numerous domains where sensitive personal data are collected from users, including healthcare, personal financial services, social networking, e-commerce, location services and recommender systems. Data from these domains are continuously collected and analysed to derive useful decisions and inferences. However, the sensitive nature of these data raises privacy concerns that cannot be successfully addressed through naive anonymization alone.
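One standard family of techniques offering formal guarantees beyond naive anonymization is differential privacy. Below is a minimal sketch of its Laplace mechanism applied to a count query; the cohort, query and epsilon value are assumptions for illustration, not a description of the group's methods.

```python
# Minimal sketch of the Laplace mechanism of differential privacy, one
# standard way to release an aggregate (here, a count) with a formal
# privacy guarantee. Epsilon and the query are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def private_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of records satisfying predicate.

    The true count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many people in a (synthetic) cohort are over 65?
ages = rng.integers(18, 95, size=500)
print(private_count(ages, lambda a: a > 65, epsilon=0.5))
```

The noise is calibrated so that the released value reveals little about whether any single individual's record is in the data.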
Not only data but also models and aggregates can lead to disclosure, as they contain traces of the data used in their computation. Attacks on data (e.g., reidentification and transparency attacks) and on models (e.g., membership inference, model inversion) have demonstrated the need for appropriate protection mechanisms.
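To illustrate how a trained model itself can leak information, here is a hedged sketch of a simple confidence-thresholding membership inference baseline; the dataset, classifier and threshold are illustrative assumptions, not the group's method.

```python
# Minimal sketch of a confidence-thresholding membership inference attack:
# an overfitted model tends to be more confident on its training records
# ("members") than on unseen records ("non-members"). Dataset, model and
# threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=1)

# Train an (easily overfitted) model on the "member" records only.
model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_member, y_member)

def guess_membership(records, threshold=0.9):
    """Guess 'member' whenever the model's top-class confidence is high."""
    confidence = model.predict_proba(records).max(axis=1)
    return confidence >= threshold

# The attack succeeds to the extent that members are flagged more often.
print("members flagged:     %.2f" % guess_membership(X_member).mean())
print("non-members flagged: %.2f" % guess_membership(X_nonmember).mean())
```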
Our research is mainly supported by the Wallenberg AI, Autonomous Systems and Software Programme, WASP, funded by the Knut and Alice Wallenberg Foundation. Additional support is provided by Kempestiftelserna, the Swedish Research Council and Forte, the Swedish Research Council for Health, Working Life and Welfare.
The NAUSICA research group develops techniques so that data are processed, models are built, and decisions are made with appropriate privacy guarantees. AI systems must also be able to handle uncertainty to be usable in the real world, where ambiguity, vagueness and randomness are rarely absent. Approximate reasoning studies models of reasoning under uncertainty, such as probability-based, proof theory-based and fuzzy set-based models.
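As a small illustration of the fuzzy set-based models mentioned above, the sketch below defines a graded membership function for "warm" temperatures; the set and its breakpoints are assumptions chosen for illustration.

```python
# Minimal sketch of a fuzzy set, one of the approximate-reasoning models
# mentioned above: graded membership instead of a crisp yes/no answer.
# The "warm temperature" set and its breakpoints are illustrative assumptions.
def warm_membership(temp_celsius: float) -> float:
    """Degree (0..1) to which a temperature counts as 'warm'."""
    if temp_celsius <= 15.0:
        return 0.0
    if temp_celsius >= 25.0:
        return 1.0
    # Linear ramp between 15 and 25 degrees.
    return (temp_celsius - 15.0) / 10.0

for t in (10, 18, 22, 30):
    print(f"{t} C -> warm to degree {warm_membership(t):.2f}")
```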
In line with trustworthy AI guidelines, AI systems have fairness, accountability, explainability and transparency as fundamental requirements. These requirements affect the whole process of designing and building AI systems, from data to decisions. Data privacy, machine and statistical learning, and approximate reasoning models are basic components of this process, but they need to be combined to provide a holistic solution.
Our research group is interested in privacy-aware transparent AI systems. We want to understand the fundamental principles that permit us to build these systems, and develop algorithms for this purpose. We focus on data privacy for data processing, privacy-aware machine learning for building models and data analytics, and decision models for making decisions.
The group collaborates with several national and international research groups at Tamagawa University, Osaka University, and Tsukuba University in Japan, as well as Maynooth University in Ireland and Universitat Autònoma de Barcelona, and has links with industry and governmental organisations.
Research areas: data privacy and machine learning; approximate reasoning.
Related news:
- Sonakshi Garg shows that privacy is not a barrier to progress.
- 25 young researchers are selected to meet, to be inspired and to inspire others.
- Vicenç Torra is a new professor at the Department of Computing Science.