The Sustainable Artificial Intelligence (SAI) Research Group explores the development and application of explainable, reliable, and ethically sound artificial intelligence methods, with a focus on the United Nations Sustainable Development Goals (SDGs). The group investigates how artificial intelligence can be responsibly integrated into complex societal and environmental systems, ensuring transparency, accountability, and long-term positive impact.
Explainable AI (XAI) is at the core of the group's research, alongside ethical AI frameworks and trustworthy AI systems aligned with EU values and legislation, including the AI Act and the GDPR. The group aims to develop AI models and decision-making algorithms that humans can interpret, so that end-users, regulators, and stakeholders can understand how and why decisions are made. This is particularly important in sensitive domains such as healthcare, energy, and public policy, where trust, fairness, and the reduction of bias are crucial.
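To make the idea of human-interpretable decisions concrete, the following minimal sketch shows one of the simplest forms of explanation: per-feature additive attributions for a linear model. All names and values here are hypothetical illustrations, not the group's actual tooling.

```python
# Illustrative sketch: additive per-feature attributions for a linear model.
# Each contribution shows how much a feature pushed the prediction up or
# down, which an end-user or regulator can inspect directly.

def explain_linear(weights, bias, features, feature_names):
    """Return the prediction and each feature's additive contribution."""
    contributions = {
        name: w * x for name, w, x in zip(feature_names, weights, features)
    }
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical energy-demand model with two inputs.
weights = [0.8, -0.5]
bias = 2.0
features = [10.0, 4.0]  # e.g. outdoor temperature, insulation score
names = ["temperature", "insulation"]

pred, contribs = explain_linear(weights, bias, features, names)
```

For linear models these attributions are exact; for non-linear models, methods in the same spirit (e.g. Shapley-value approximations) recover comparable additive explanations.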
SAI explores how algorithmic transparency and societal alignment can be embedded into AI systems from the outset, rather than treated as an afterthought. To support transparency and reproducibility, the group also contributes to the creation and dissemination of annotated open datasets for research purposes, with a strong emphasis on privacy-preserving annotation practices. In line with sustainable computing practices, the group investigates Tiny Machine Learning (TinyML) approaches and algorithms that run without high-performance computing (HPC) infrastructure, targeting edge devices and resource-constrained environments. These lightweight AI models reduce computational energy demands and enable real-time, low-power applications across sectors while minimizing their environmental impact.
The group's activities are interdisciplinary, combining insights from computer science, philosophy, social sciences, and sustainability studies. Applications focus on areas where AI can serve the public good, such as energy optimization, climate adaptation, smart mobility, digital health, and participatory urban governance, while ensuring that technological progress remains inclusive, equitable, and environmentally conscious.
Research topics:
- Human-interpretable and explainable AI systems
- Sustainable algorithm engineering
- TinyML and low-power AI model design for edge-based sustainable intelligence
- Privacy-preserving AI models and datasets
- Bias-aware learning algorithms
- Ethical risk assessment frameworks for AI systems aligned with the EU AI Act
- The transparent use of synthetic data in AI
Research services:
- Development of explainable AI systems
- Curation and deployment of privacy-preserving annotated open datasets (including synthetic data)
- Verification of AI system compliance through auditable indicators (e.g., aligned with the EU AI Act and GDPR)
- Development of TinyML and low-power AI models for edge computing in resource-constrained environments
- Development of AI robustness testing protocols under adversarial and uncertain conditions
- Sustainability assessments of AI systems, including carbon footprint modeling and energy efficiency evaluation