About this Abstract

Meeting: 2021 TMS Annual Meeting & Exhibition
Symposium: AI/Data Informatics: Applications and Uncertainty Quantification at Atomistics and Mesoscales
Presentation Title: Accuracy, Uncertainty, Inspectability: The Benefits of Compositionally-restricted Attention-based Networks
Author(s): Taylor D. Sparks, Steven K. Kauwe, Ryan J. Murdock, Anthony Yu-Tung Wang
On-Site Speaker (Planned): Taylor D. Sparks
Abstract Scope:
We describe a new model architecture, the Compositionally-Restricted Attention-Based Network (CrabNet). CrabNet generates high-fidelity predictions using the self-attention mechanism, a fundamental component of the transformer architecture that revolutionized natural language processing. The transformer encoder uses self-attention to encode the context-dependent behavior of the components within a system. In physical systems, elements contribute differently to a material's property depending on the materials system itself: boron, for example, may act as an electrical dopant in one system while serving as a mechanical strengthener through bond modification in another. CrabNet's potential to capture this type of context-dependent behavior allows for highly accurate model predictions. Importantly, CrabNet generates simple, inspectable self-attention maps. These attention maps represent the element importances and inter-element interactions that govern the learned material property. Visualization and analysis of these attention maps are available during both training and inference.
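To make the mechanism concrete, below is a minimal sketch of the scaled dot-product self-attention that underlies such attention maps. The embedding dimension, the three-element composition, and all variable names are illustrative assumptions for exposition, not CrabNet's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical embeddings for a three-element composition
# (rows: elements, columns: embedding dimensions). In a trained model these
# would be learned, composition-aware representations; random values here
# are a stand-in.
d_model = 8
element_embeddings = torch.randn(3, d_model)  # e.g. one row each for B, Fe, O

# Learned query/key/value projections (randomly initialized in this sketch).
W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

Q = W_q(element_embeddings)
K = W_k(element_embeddings)
V = W_v(element_embeddings)

# Scaled dot-product self-attention: each element attends to every element
# in the composition, producing context-dependent representations.
scores = Q @ K.transpose(0, 1) / (d_model ** 0.5)
attention_map = F.softmax(scores, dim=-1)  # rows sum to 1

# The attention map is the inspectable artifact: entry [i, j] indicates how
# strongly element i attends to element j when forming its representation.
print(attention_map)

contextual = attention_map @ V  # context-aware element representations
```

In this picture, the same element embedding yields different contextual representations in different compositions, because the softmax-normalized attention weights depend on all elements present, which is the context dependence the abstract describes.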
Proceedings Inclusion? Planned: