
Technologies based on artificial intelligence are increasingly shaping our daily lives, from healthcare and medicine to industry and finance. In particular, the analysis of time series data and complex medical datasets presents both opportunities and challenges, especially when AI-driven decisions must be transparent and trustworthy.
The Applied AI-Lab, led by Prof. Dr. Jennifer Hannig, is a young and dynamic research group dedicated to Explainable Artificial Intelligence (XAI). We bring together expertise from computer science, statistics, and applied mathematics, with the goal of developing AI systems that are interpretable, reliable, and applicable to real-world problems.
While the technical aspects of our research focus on designing, implementing, and validating AI models, our applied perspective ensures that these models can be used responsibly in domains such as healthcare, epidemiology, and industrial monitoring. We emphasize explainability, interpretability, and robustness, so that users and decision-makers can trust AI-driven insights.
Through interdisciplinary collaborations, innovative modeling approaches, and advanced visualization techniques, we aim to make AI not only powerful but also transparent and actionable. Our projects span time series forecasting, medical data analysis, and algorithmic transparency, always striving to bridge the gap between cutting-edge methods and real-world applications.
More details on our projects, publications, and research activities can be found on our website. We welcome collaborations and discussions with researchers, industry partners, and practitioners interested in applied AI and explainable machine learning.