Author: Yang Yang (University of Surrey)

Self-driving laboratories (SDLs) are transformative platforms that integrate artificial intelligence (AI) and robotics to autonomously design, execute, and optimise scientific experiments. However, the opaque nature of many AI systems in SDLs poses a critical barrier to effective human–AI collaboration, and in some cases AI decisions can affect research quality, safety, accountability, and regulatory compliance. In this study, we present the first comprehensive, human-centred framework for explainability in SDLs. We identify and categorise the explanation needs of both internal and external stakeholders, emphasising how these needs vary by role and responsibility. We then propose a novel three-stage structure for SDL campaigns—pre-loop, in-loop, and post-loop—and analyse how explainability contributes to each stage, from parameter definition to real-time decision-making and post-hoc knowledge extraction. To bridge explainability theory and SDL practice, we provide a structured overview of explainable AI (XAI) techniques, classified into pre-modelling, interpretable modelling, and post-hoc explainability, together with the various types of explanation they produce. Finally, we outline design principles for delivering human-centred explanations, advocating for user-adaptive, layered, and context-aware communication strategies that balance interpretability with usability. Our insights aim to support the responsible design, implementation, and operation of SDLs by positioning explainability as a foundational capability. This work calls for interdisciplinary collaboration to develop tailored XAI solutions that enable trust, transparency, and responsible innovation in autonomous science.