Abstract Scope
Many models have been developed that continue to improve predictive performance on materials property prediction tasks such as bulk modulus, band gap, formation energy, and metallicity. As new algorithms are developed, it has become clear that certain algorithms perform better on certain tasks. For example, CGCNN excels at formation energy and metallicity prediction, while CrabNet offers best-in-class performance on experimental and computational band gap prediction. In the last few years, progress has been incremental, with many algorithms reporting results only slightly better than prior work on certain tasks. This raises the question: does each algorithm learn the same information, or does each make its own contribution in terms of what is learned? We explore and compare compound-wise and chemical class-wise (rather than task-wise) performance for several machine learning models, including CGCNN, CrabNet, Automatminer, MODNet, MEGNet, and DimeNet++, and present an ensemble of these state-of-the-art models.
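As a minimal sketch of the kind of compound-wise comparison and ensembling described above — assuming per-compound predictions from each model are already collected as arrays over the same set of compounds (the synthetic data, model names as dictionary keys, and unweighted mean ensemble here are illustrative assumptions, not the actual pipeline):

```python
# Sketch: compound-wise error comparison and a simple mean ensemble.
# Synthetic predictions stand in for real output from CGCNN, CrabNet, etc.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_compounds = 100
y_true = rng.normal(0.0, 1.0, n_compounds)  # e.g., formation energies (eV/atom)

# Hypothetical per-model predictions: true value plus model-specific noise.
preds = {
    "CGCNN": y_true + rng.normal(0.0, 0.20, n_compounds),
    "CrabNet": y_true + rng.normal(0.0, 0.25, n_compounds),
    "MODNet": y_true + rng.normal(0.0, 0.30, n_compounds),
}

df = pd.DataFrame(preds)
df["ensemble"] = df.mean(axis=1)  # unweighted average across models

# Compound-wise absolute errors: one row per compound, one column per model,
# so rows can be grouped by chemical class rather than aggregated per task.
errors = df.sub(y_true, axis=0).abs()
print(errors.mean().sort_values())  # MAE per model and for the ensemble
```

Keeping errors at per-compound granularity, rather than collapsing to a single task-level metric, is what allows grouping by chemical class and checking whether different models fail on different compounds.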