Abstract Scope
Developing interpretable and explainable machine-learned (ML) models will enhance our trust in the algorithms, expose their shortcomings, and guide us towards improved solutions. We will draw from five recent case studies on ML-derived relationships in process-structure-property data. These five studies will illustrate distinct strategies for interpreting ML models. The studies span two manufacturing methods, physical vapor deposition and laser powder bed fusion, and numerous data sources, including both experimental data and synthetic data from simulations. Causality is interpreted through embedded physical models, through saliency maps in image data, and through unsupervised clustering in multi-modal datasets. In addition, traditional forensic materials science, though laborious, provides a pathway to unravel cause and effect in high-performing materials identified by high-throughput exploration. Sandia is a multiprogram laboratory managed and operated by NTESS under DOE NNSA contract DE-NA0003525.