1st Workshop on Attributing Model Behavior at Scale
New Orleans Convention Center (Room TBD)
Quick details: OpenReview submission portal, format guidelines here, deadline September 29 (now extended!)
Contact info: attrib-neurips23 [at] googlegroups [dot] com
What makes ML models tick? How do we attribute model behavior to the training data, algorithm, architecture, or scale used in training?
Recently developed algorithmic innovations and large-scale datasets have given rise to machine learning models with impressive capabilities. However, much remains to be understood about how these different factors combine to produce observed behaviors. For example, we still do not fully understand how the composition of training datasets influences downstream model capabilities, how to attribute model capabilities to subcomponents inside the model, or which algorithmic choices really drive performance.
A common theme underlying all of these challenges is model behavior attribution: the need to tie model behavior back to factors in the machine learning pipeline, such as the choice of training dataset or the particular training algorithm, that we can control or reason about. This workshop aims to bring together researchers and practitioners with the goal of advancing our understanding of model behavior attribution.
September 29 (extended from September 23): Deadline for both idea track and main track papers