About this Abstract

Meeting
MS&T21: Materials Science & Technology

Symposium
Materials Informatics for Images and Multi-dimensional Datasets

Presentation Title
Training Deep-learning Models with 3D Microstructure Images to Predict Location-dependent Mechanical Properties in Additive Manufacturing

Author(s)
Ashley D. Spear, Carl Herriott

On-Site Speaker (Planned)
Ashley D. Spear

Abstract Scope
Three-dimensional images of additively manufactured (AM) microstructures were used to train deep-learning models to predict effective mechanical properties and their spatial variability throughout AM builds. Images were acquired from high-fidelity, multi-physics simulations of SS316L produced by directed energy deposition under different build conditions. Microstructural subvolumes and corresponding homogenized yield-strength values (~7700 data points) were then used to train convolutional neural network (CNN) models. For comparison, two types of machine-learning (ML) models (Ridge regression and XGBoost) were trained using the same dataset. The ML models required substantial pre-processing to extract volume-averaged microstructural descriptors, whereas 3D image data comprising basic microstructural information were input directly to the CNN models. Among all models tested, CNN models that used crystal orientation as input provided the best predictions, required little pre-processing, and predicted spatial-property maps in a matter of seconds. Results demonstrate that suitably trained data-driven models can complement physics-driven modeling by massively expediting structure-property predictions.
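To make the modeling approach concrete, the sketch below shows one way a 3D CNN can regress a homogenized yield strength from a voxelized crystal-orientation field. This is not the authors' implementation: the orientation encoding (three channels, e.g., Euler angles), the 32^3 subvolume size, and the layer sizes are illustrative assumptions only.

```python
# Minimal sketch, assuming orientations are encoded as 3 channels on a 32^3
# voxel grid; architecture and hyperparameters are hypothetical, not from the work.
import torch
import torch.nn as nn


class YieldStrengthCNN(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),              # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),              # 16^3 -> 8^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # global average pooling over the subvolume
        )
        self.regressor = nn.Linear(64, 1)  # single scalar output: yield strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.regressor(h).squeeze(-1)


if __name__ == "__main__":
    model = YieldStrengthCNN()
    # Batch of 4 hypothetical orientation subvolumes: (N, C, D, H, W)
    subvolumes = torch.randn(4, 3, 32, 32, 32)
    print(model(subvolumes).shape)  # torch.Size([4])
```

Once trained, such a model can be evaluated on overlapping subvolumes extracted across a build to assemble a spatial map of predicted yield strength, which is the kind of rapid structure-property prediction the abstract describes.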