Call for Contributions
This workshop aims to bring together researchers, professionals, and academics to discuss and advance the state of the art and best practices in observability and explainability of AI systems. An explainable AI system allows users and stakeholders to understand its automated decision-making process, ensuring trust and compliance with regulations and ethical standards. Observability involves the ability to measure, monitor, and understand the internal states and behaviors of AI systems. Together, observability and explainability form the data-centric foundation for responsible AI development. This workshop invites contributions that explore the intersection of observability and explainability in AI, focusing on methods, frameworks, and practices that enhance our understanding and development of AI-driven systems.
The rapid advancements in complex neural networks enable AI systems to process and integrate multiple modalities of data. Inevitably, the adoption of AI systems is growing across many domains of society. Meanwhile, the constant growth in the scale and complexity of neural networks raises the cost barrier of developing AI systems through original training. Common practice is therefore to adopt cloud-based AI services or to deploy open-source AI models as services. Retrieving domain-related data and augmenting prompts for foundation models has become a data-driven process at varying scales. In this context, observability provides a trace of data usage within models. Explainability refers to the ability to provide understandable and interpretable explanations of how AI systems make decisions. The combination of observability and explainability allows users and stakeholders to understand how input data influences automated decisions. This transparency is essential for building trust in AI systems, especially in critical applications such as healthcare, finance, cybersecurity, legal systems, and government services. Developing both capabilities extends beyond the foundations of AI models, further involving system design for services and cloud computing, software engineering, and data management. The topics covered in this workshop are supplementary to CASCON's main program. In particular, this workshop offers opportunities for researchers, practitioners, and academics to share insights, discoveries, and best practices that connect observability and explainability of AI systems.
We invite workshop talks on topics related to observability and explainability of AI systems, including but not limited to:
- Observability in AI Systems: Techniques for monitoring and logging AI behaviors, tools for real-time analysis of AI systems, metrics for assessing system performance, and frameworks for AI system observability.
- Explainability in AI: Methods for generating interpretable models, techniques for explaining AI decisions, human-centered approaches to AI explainability, and evaluation metrics for explainability.
- Integrating Observability and Explainability: Approaches to combining observability and explainability to enhance AI system transparency, methods for aligning explainability with observability metrics, and frameworks for AI system management.
- Ethical and Social Implications: The role of observability and explainability in ensuring ethical AI, regulatory requirements, and impacts of opaque AI systems.
- Case Studies and Applications: Real-world examples of observability and explainability in AI systems, challenges encountered, and lessons learned in deploying observable and explainable AI.
Important Dates
- Talk abstract submission: October 10, 2024, AoE.
- Author notification: October 15, 2024.
- Submission of presentation slides: November 5, 2024.
How to submit: Please submit a one-page PDF with the title of your talk, the names and affiliations of the authors, and a short bio of the presenter. Please send your file by email to the workshop organizers: Yan Liu (yan.liu@concordia.ca), Wahab Hamou-Lhadj (wahab.hamou-lhadj@concordia.ca), and Naser Ezzati (nezzatijivan@brocku.ca).
Proceedings: Please note that OXAI does not publish workshop proceedings. Presenters should submit an electronic copy of their slides, which will be posted on this website shortly after the event. There is no requirement for originality in the work presented; previously published or submitted research is also welcome. If applicable, please include a link to the prior publication or submission with your presentation.
Organization:
- Yan Liu, Concordia University, Canada
- Wahab Hamou-Lhadj, Concordia University, Canada
- Naser Ezzati-Jivan, Brock University, Canada
Related Events:
- IEEE Software Special Issue on Observability and Explainability for Software Systems Decision Making