Call for Contributions

This workshop aims to bring together researchers, professionals, and academics to discuss and advance the state of the art and best practices in observability and explainability of AI systems. An explainable AI system allows users and stakeholders to understand its automated decision-making process, which is essential for ensuring trust and compliance with regulations and ethical standards. Observability is the ability to measure, monitor, and understand the internal states and behaviors of AI systems. Together, observability and explainability form the data-centric foundation for responsible AI development. This workshop invites contributions that explore the intersection of observability and explainability in AI, focusing on methods, frameworks, and practices that enhance our understanding and development of AI-driven systems.

Rapid advances in complex neural networks enable AI systems to process and integrate multiple modalities of data. Inevitably, AI systems are being adopted across a wide range of domains in society. Meanwhile, the constant growth in the scale and complexity of neural networks raises a high cost barrier to developing AI systems through training from scratch. Common practice is therefore to adopt cloud-based AI services or to deploy open-source AI models as services. Retrieving domain-related data and augmenting prompts for foundation models has become a data-driven practice at varying scales. In this context, observability provides traces of data usage within models, while explainability refers to the ability to provide understandable and interpretable explanations of how AI systems make decisions. The combination of observability and explainability allows users and stakeholders to understand how input data influences automated decisions. This transparency is essential for building trust in AI systems, especially in critical applications such as healthcare, finance, cybersecurity, legal systems, and government services. Developing both capabilities goes beyond the foundations of AI models and further involves system design for services and cloud computing, software engineering, and data management. The topics covered in this workshop are supplementary to CASCON’s main program. In particular, this workshop offers opportunities for researchers, practitioners, and academics to share insights, discoveries, and best practices that connect observability and explainability of AI systems.

We invite workshop talks on topics related to observability and explainability of AI systems, including but not limited to:


Program

Date: Tuesday, November 12, 2024
Venue: The Second Student Centre (SCC) on the Keele Campus of York University, 15 Library Lane, Toronto, ON M3J 1P3, Canada
Room: D2


Important Dates

How to submit: Please submit a one-page PDF with the title and abstract of your talk, the names and affiliations of the authors, and a short bio of the presenter. Please send the file by email to the workshop organizers: Yan Liu (yan.liu@concordia.ca), Wahab Hamou-Lhadj (wahab.hamou-lhadj@concordia.ca), and Naser Ezzati (nezzatijivan@brocku.ca).

Proceedings: Please note that OXAI does not publish workshop proceedings. Presenters should submit an electronic copy of their slides, which will be posted on this website shortly after the event. There is no requirement for originality in the work presented; previously published or submitted research is also welcome. If applicable, please include a link to the prior publication or submission with your presentation.


Organization


Related Events