MA5: Machine Learning in Engineering Simulations

Room: Old Main Academic Center 3110


Organizer: Ted Dickel, Mississippi State University


William Curtin, EPFL (Swiss Federal Institute of Technology in Lausanne, Switzerland)        

Time: 10:00 am - 10:25 am (CST)

Title: Machine Learning for Metallurgy

Abstract: Industrial processing and application of advanced materials require exquisite control of both composition and processing path to achieve optimal performance. Bridging the chasm between chemical interactions and macroscopic material behavior can be aided by emerging integrated multiscale materials modeling approaches. Machine-learned interatomic potentials for atomistic simulations are an essential part of the path from atoms to metallurgical properties. Here, we illustrate the pathway from first-principles modeling, through alloy evolution during processing, to alloy yield strength for the Al-Mg-Si (Al-6xxx) alloy system. A first-principles database of many metallurgically relevant structures is used to develop a neural network interatomic potential (NNP) for the Al-Mg-Si system. The broad applicability of the NNP across metallurgical and alloy properties is demonstrated. The NNP is then used in a kinetic Monte Carlo study of natural aging to show that early-stage clusters trap vacancies and delay further evolution at room temperature. The NNP is further used in direct atomistic simulations of precipitate strengthening at experimental scales, revealing the shearing and Orowan looping that control alloy strength. These simulations are used to validate analytic models of shearing, and a mesoscale discrete dislocation dynamics method, calibrated to atomistic NNP quantities, is used to simulate Orowan looping. Theory and simulation then lead to predictions of peak-aged strength in Al-6xxx in good agreement with experiments. This example thus demonstrates many parts of a general multiscale path connecting quantum to meso to macro scale performance, with machine-learned potentials providing a critical link in this path.
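The shearing-versus-looping competition mentioned above is typically summarized by analytic strengthening relations of the Orowan type. As a rough, illustrative aside (not taken from the talk), the sketch below evaluates the textbook Orowan estimate; the shear modulus and Burgers vector are approximate literature values for Al, and the precipitate spacing is an assumed placeholder:

```python
# Illustrative Orowan strengthening estimate (parameter values are placeholders,
# not results from the talk).  tau_orowan ~ G * b / L, where L is the spacing
# between precipitates along the dislocation line.

G = 26.0e9      # shear modulus of Al, Pa (approximate literature value)
b = 2.86e-10    # Burgers vector of Al, m (approximate)
L = 50.0e-9     # assumed precipitate spacing, m (hypothetical)

tau_orowan = G * b / L          # stress to bow a dislocation between obstacles, Pa
print(f"Orowan stress ~ {tau_orowan / 1e6:.0f} MPa")
```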


Danny Perez, Los Alamos National Laboratory

Time: 10:25 am - 10:50 am (CST)

Title: An Entropy-maximization Approach for the Generation of Training Sets for Machine-learned Potentials

Abstract: The last few years have seen considerable advances in the development of machine-learned interatomic potentials. A very important aspect of the parameterization of transferable potentials is the generation of training sets that are sufficiently diverse, yet compact enough to be affordably characterized with high-fidelity reference methods. We formulate the generation of a training set as an optimization problem where the figure of merit is the entropy of the distribution of atom-wise descriptors. This can be used to create a fictitious potential to explicitly drive the generation of new configurations that maximally improve the diversity of the training set in a completely autonomous fashion. We show that this strategy yields potentials that are much more transferable than conventional domain-expertise-based approaches to training set generation, while still exhibiting low errors on expert-selected configurations. In contrast, potentials trained on expert-curated datasets do well at representing qualitatively similar configurations, but can dramatically fail at capturing the physics of qualitatively different classes of configurations, as well as that of entropy-generated configurations.
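As an illustration of the kind of figure of merit described in this abstract, the sketch below scores candidate configurations by the estimated entropy gain of the pooled atom-wise descriptor distribution. The synthetic descriptor data, the nearest-neighbor entropy estimator, and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def knn_entropy(X, k=1):
    """Kozachenko-Leonenko-style entropy estimate (up to additive constants) for
    descriptor vectors X of shape (n_atoms, n_features).  This is one common
    nonparametric estimator; the talk's exact formulation may differ."""
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                  # exclude self-distances
    eps = np.sort(dists, axis=1)[:, k - 1]           # distance to k-th neighbor
    return d * np.mean(np.log(eps + 1e-12)) + np.log(n)

def entropy_gain(train_descriptors, candidate_descriptors):
    """Score a candidate configuration by how much it increases the entropy of the
    pooled descriptor distribution -- the figure of merit driving autonomous
    training-set generation."""
    pooled = np.vstack([train_descriptors, candidate_descriptors])
    return knn_entropy(pooled) - knn_entropy(train_descriptors)

# Hypothetical usage: pick the candidate that most diversifies the training set.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 8))                    # existing atom-wise descriptors
candidates = [rng.normal(loc=m, size=(30, 8)) for m in (0.0, 2.0, 5.0)]
best = max(candidates, key=lambda c: entropy_gain(train, c))
```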


Christopher Barrett, Mississippi State University

Time: 10:50 am - 11:15 am (CST)

Title: Rapid Atomistic Neural Network Potentials: Development and Applications for Atomic Scale Materials Science

Abstract: Atomistic modeling serves as a critical length-scale bridge between high-accuracy first-principles calculations and larger-scale continuum material models. Despite its importance, quantitatively accurate potentials for complex and diverse properties of interest are rare and time-consuming to produce. Using machine learning, we have developed a new formulation which can rapidly generate new atomic potentials from first-principles training data. The resulting potentials demonstrate remarkable predictive power at speeds approaching those of empirical methods, while their accuracy is much greater due to the large number of fitted parameters. This new method comes with the additional challenge that extrapolation beyond the training set can lead to widely divergent results. We have addressed this issue by designing an architecture which limits the variation of the network from an empirical equation of state, keeping it within reasonable bounds. Results from our RANN potentials show accurate predictions of thermodynamics and plasticity, including full phase-diagram reproduction and accurate dislocation and twin interactions. Implementation of our potentials in the open-source LAMMPS atomistic software ensures that they are widely usable by the community.

This work is in collaboration with Doyl Dickel and Mashroor Nitol.
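One way to picture the equation-of-state anchoring described in the abstract is sketched below: a learned correction is squashed so that the total energy cannot stray more than a fixed fraction from an empirical baseline, which keeps extrapolation outside the training set bounded. The Rose EOS, the parameter values, and the bounding scheme are illustrative assumptions, not the actual RANN architecture:

```python
import numpy as np

def rose_eos(r, E0=3.36, r0=2.86, alpha=4.6):
    """Rose universal equation of state for cohesive energy vs. near-neighbor
    distance r (rough Al-like parameters, for illustration only)."""
    a = alpha * (r / r0 - 1.0)
    return -E0 * (1.0 + a) * np.exp(-a)

def bounded_potential(r, nn_correction, max_frac=0.1):
    """Hypothetical sketch of anchoring a learned correction to an empirical EOS:
    the raw network output is squashed so the total energy deviates from the EOS
    baseline by at most max_frac, no matter how far the input lies from the
    training data.  This illustrates the idea, not the RANN internals."""
    baseline = rose_eos(r)
    return baseline * (1.0 + max_frac * np.tanh(nn_correction))

# Usage: even an absurdly large raw network output stays within 10% of the baseline.
print(bounded_potential(2.9, nn_correction=50.0))
print(rose_eos(2.9))
```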


Eric Collins, CAVS, Mississippi State University

Time: 11:15 am - 11:40 am (CST)

Title: libANDI: Adaptive N-Dimensional Interpolation Library

Abstract: There exists a practical need for fast interpolation of sampled data points for complex high-dimensional fields. In recent years, deep learning techniques have demonstrated an impressive capacity for approximating high-dimensional data; however, these methods also require massive amounts of training data and significant computational resources to store and run. In addition, such deep networks are somewhat inscrutable in their use of hidden variables and hyperparameters. There also exists the possibility that the resulting inferences may not have the desired continuity properties, especially in regions that may not have been adequately covered by the training data. In many engineering applications, we need a light-weight, interpretable representation that is fast to evaluate and has some guarantees on the continuity of the function approximation. In this work, we demonstrate the capabilities of an Adaptive N-Dimensional Interpolation library (libANDI). This library utilizes a set of high-order polynomial splines to approximate a given data set to a desired error tolerance. The library routines generate a recursively refined table of spline patches from adaptively sampled data (samples are queried where additional data is needed). We demonstrate the accuracy and efficiency of the table generation and evaluation, and compare its storage and performance against other well-known methods.
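To make the recursively refined table of patches concrete, the sketch below shows a hypothetical one-dimensional analogue: fit a local polynomial on a patch, query extra samples to estimate the error, and subdivide until a tolerance is met. libANDI itself is N-dimensional and spline-based; this is only an illustration of the adaptive-refinement idea, not the library's API:

```python
import numpy as np

def build_table(f, a, b, tol=1e-4, degree=3, max_depth=12):
    """Hypothetical 1-D sketch of adaptive patch refinement: fit a local
    polynomial on [a, b], query extra samples to estimate the error, and
    recursively subdivide any patch that misses the tolerance."""
    xs = np.linspace(a, b, degree + 1)           # samples used to fit this patch
    coeffs = np.polyfit(xs, f(xs), degree)
    probe = np.linspace(a, b, 4 * (degree + 1))  # extra queried samples for error check
    err = np.max(np.abs(np.polyval(coeffs, probe) - f(probe)))
    if err <= tol or max_depth == 0:
        return [(a, b, coeffs)]                  # accepted patch
    mid = 0.5 * (a + b)
    return (build_table(f, a, mid, tol, degree, max_depth - 1)
            + build_table(f, mid, b, tol, degree, max_depth - 1))

def evaluate(table, x):
    """Look up the patch containing x and evaluate its local polynomial."""
    for a, b, coeffs in table:
        if a <= x <= b:
            return np.polyval(coeffs, x)
    raise ValueError("x outside tabulated range")

table = build_table(np.sin, 0.0, 10.0, tol=1e-6)
print(len(table), evaluate(table, 3.3), np.sin(3.3))
```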


Mashroor Nitol, Los Alamos National Laboratory

Time: 11:40 am - 12:05 pm (CST)

Title: Machine Learning Models for Predictive Materials Science from Fundamental Physics: An Application to Titanium and Zirconium

Abstract: Machine learning techniques using rapid artificial neural networks (RANN) have proven to be effective tools to rapidly mimic first-principles calculations. New neural network potentials are capable of accurately modeling the transformations between the α, β, and ω phases of titanium and zirconium, including accurate prediction of the equilibrium phase diagram. These potentials show remarkable accuracy beyond their first-principles dataset, indicating that they reliably parameterize the underlying physics. Transitions between each of the phase pairs are observed in dynamic simulation using calculations of the Gibbs free energy. The calculated triple points are 8.67 GPa, 1058 K for Ti and 5.04 GPa, 988.35 K for Zr, close to their experimentally observed values. The success of the RANN potentials with single-element phase transitions suggests the promise of this method for robust alloy phase-diagram calculations, such as for TiAl. This can augment or anticipate experiments to accelerate materials discovery.
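For readers unfamiliar with how a triple point falls out of Gibbs free-energy calculations, the sketch below solves G_α = G_β = G_ω for (P, T) using deliberately made-up linear free-energy models; the coefficients and the resulting toy triple point are illustrative only and have no connection to the Ti or Zr values reported above:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy linearized Gibbs free energies G(P, T) for three phases (made-up coefficients,
# purely to illustrate how a triple point is extracted once G(P, T) is known for each
# phase; the talk computes the real surfaces from RANN-driven simulations).
def G_alpha(P, T): return -1.00 - 0.00080 * T + 0.010 * P
def G_beta(P, T):  return -0.92 - 0.00090 * T + 0.012 * P
def G_omega(P, T): return -1.04 - 0.00074 * T + 0.008 * P

def equal_G(x):
    P, T = x
    # At the triple point all three phases have equal Gibbs free energy.
    return [G_alpha(P, T) - G_beta(P, T),
            G_alpha(P, T) - G_omega(P, T)]

P_triple, T_triple = fsolve(equal_G, x0=[5.0, 1000.0])  # initial guess (GPa, K)
print(f"toy triple point: {P_triple:.2f} GPa, {T_triple:.0f} K")
```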