Author

Lars Pastewka

Chapter 3: Pair potentials

Context: Interatomic forces or interatomic potentials determine the material that we want to study. There is a plethora of interatomic potentials of varying accuracy, transferability and computational cost available in the literature. We here discuss simple pair potentials and point out algorithmic considerations.

Additional resources:

3.1 Introduction

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=42ad47da-d1c1-48bc-9eda-ad2301538f4f

The expression for \(E_\text{pot}(\{\vec{r}_i\})\) is the model for the material that we use in our molecular dynamics calculations. It determines whether we model water, proteins, metals, or any other physical object. Models are typically characterized by their accuracy, their transferability and the computational cost involved. (Computational cost also includes the computational complexity.) At constant computational cost, there is always a tradeoff between accuracy and transferability. Accuracy and transferability can typically only be improved at the expense of additional computational cost.

  • Accuracy: Accuracy describes how close we can get to a reference metric, which may be experimentally measured or theoretical. For example, we can compare the vacancy formation energy to experimental values and compute the accuracy as the absolute value of the energy difference \(|E_\text{v} - E_\text{v}^\text{exp}|\), which can be of order \(1\,\text{eV}\), \(0.1\,\text{eV}\) (typical), or \(0.01\,\text{eV}\) (computationally expensive!). (The vacancy formation energy is the energy required to remove a single atom from a solid. The resulting “hole” in the solid is called a vacancy.)
  • Transferability: Transferability describes the ability of a model to satisfy different accuracy metrics. Let’s assume we get the vacancy formation energy right to within \(0.1\,\text{eV}\) of the experimental value. Does the interstitial formation energy, i.e. the energy required to insert an additional atom between lattice sites, come out to the same accuracy? If so, then the potential is transferable between these two situations. Most interatomic potentials are not generally transferable, and they need to be tested when used in new situations, e.g. when the potential has been used to study crystals, but you want to use it to study a glass.
  • Computational cost: Computational cost describes the number of floating point operations required to compute an energy or a force. (Nowadays, the actual electrical energy required for the calculation would be a better measure.) It is related to the computational complexity, which describes how the computational cost scales with the number of atoms. Ideally we would like \(O(N)\) complexity (i.e. a system with twice as many particles takes twice the computing time), but many methods do not scale linearly. Quantum methods (tight-binding, density-functional theory) are usually \(O(N^{3})\) or worse.

3.2 Pair potentials

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=35001264-ad07-4873-89f1-ad2301538f75

We have already encountered the simplest (and oldest) form of interaction potential, the pair potential. The total energy for a system interacting in pairs can be written quite generally as
\[
E_\text{pot} = \frac{1}{2} \sum_{i \ne j} V(r_{ij}) = \sum_{i < j} V(r_{ij}),
\]
where \(r_{ij} = |\vec{r}_i - \vec{r}_j|\) is the distance between atom \(i\) and atom \(j\). \(V(r_{ij})\) is the pair interaction energy, or just the pair potential, and we assume that the interaction is pair-wise additive. The sum on the right (\(\sum_{i<j}\)) runs over all pairs of atoms and counts each pair only once, while the sum on the left runs over all \(i \ne j\) and therefore requires the factor \(1/2\) to avoid double counting. Evaluating this double sum naively requires visiting all \(N(N-1)/2\) pairs and hence has \(O(N^2)\) complexity. Most pair potentials, however, are short-ranged: the interaction is negligible (or explicitly set to zero) beyond a cutoff distance \(r_c\), so only pairs with \(r_{ij} < r_c\) contribute. Finding these pairs efficiently requires a neighbor search, for which we subdivide the simulation domain into cells of size \(b\). If the cell size is larger than the cutoff (\(b > r_c\)), then we only need to look in the cell of an atom and its directly neighboring cells.
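To make the pair sum and the role of the cutoff concrete, here is a minimal sketch (not code from this course) of a naive \(O(N^2)\) energy evaluation. The Lennard-Jones form chosen for \(V(r)\), and all function and parameter names, are illustrative assumptions only:

```cpp
// Minimal sketch: naive O(N^2) evaluation of E_pot = sum_{i<j} V(r_ij) with a
// cutoff r_c. The Lennard-Jones form of V(r) is just an example choice.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Example pair potential: Lennard-Jones, V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
double V(double r, double eps = 1.0, double sigma = 1.0) {
    double sr6 = std::pow(sigma / r, 6);
    return 4.0 * eps * (sr6 * sr6 - sr6);
}

// Total energy: loop over all pairs i < j, skip pairs beyond the cutoff r_c.
double potential_energy(const std::vector<Vec3>& r, double r_c) {
    double e_pot = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i) {
        for (std::size_t j = i + 1; j < r.size(); ++j) {
            double dx = r[i].x - r[j].x, dy = r[i].y - r[j].y, dz = r[i].z - r[j].z;
            double rij = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (rij < r_c) e_pot += V(rij);
        }
    }
    return e_pot;
}
```

Every pair is visited exactly once (the inner loop starts at \(j = i+1\)), which corresponds to the sum over \(i < j\) above; pairs beyond the cutoff \(r_c\) are simply skipped, but the double loop itself still scales as \(O(N^2)\), which is what the cell-based neighbor search below avoids.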

Figure 3.1: Illustration of the typical data structure used for an \(O(N)\) neighbor search in a molecular dynamics simulation. For searching the neighbors within a cutoff \(r_c\) of the red atom, we only need to consider the candidate atoms that are in the cells adjacent to the red atom.

We will here illustrate a typical neighbor search using the two-dimensional example shown in Fig. 3.1. Let us assume that each atom has a unique index \(i\) with \(1 \le i \le N\), where \(N\) is the total number of atoms. (Note: in C++ and other common languages, indices start at \(0\) and run to \(N-1\).) A neighbor search algorithm first builds individual lists \(\{B_{k,mn}\}\) that contain the indices of all atoms in cell \((m,n)\), i.e. \(1 \le k \le N_{mn}\), where \(N_{mn}\) is the number of atoms in this cell. The cell can simply be determined by dividing the position of the atom by the cell size \(b\), i.e. atom \(i\) resides in cell \(m_i = \lfloor x_i/b \rfloor\) and \(n_i = \lfloor y_i/b \rfloor\), where \(\lfloor \cdot \rfloor\) indicates the closest smaller integer. The lists \(\{B_{k,mn}\}\) are most conveniently stored in a single contiguous array; for purposes of accessing individual cells, a second array is required that stores the index of the first entry of cell \((m,n)\). Note that this second array’s size is equal to the number of cells, which can become prohibitively large when the system contains a lot of vacuum.
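The following is a hedged sketch of how this binning step could look in two dimensions. The non-periodic setup, the assumption that all atoms lie inside the box, the extra end marker in the start-offset array, and all names are choices made for this example, not part of the text above:

```cpp
// Minimal sketch: bin atom indices into cells of size b and store them in a
// single contiguous array plus an array of per-cell start offsets (2-D,
// non-periodic, atoms assumed to lie in [0, lx) x [0, ly)).
#include <algorithm>
#include <cmath>
#include <vector>

struct CellList {
    int n_cells_x = 0, n_cells_y = 0;  // number of cells in x and y
    std::vector<int> cell_start;       // start offset of cell (m,n) in `sorted_atoms`
                                       // (one extra entry marks the end of the last cell)
    std::vector<int> sorted_atoms;     // atom indices, grouped cell by cell
};

CellList build_cell_list(const std::vector<double>& x, const std::vector<double>& y,
                         double lx, double ly, double b) {
    CellList cl;
    cl.n_cells_x = std::max(1, static_cast<int>(lx / b));
    cl.n_cells_y = std::max(1, static_cast<int>(ly / b));
    const int n_cells = cl.n_cells_x * cl.n_cells_y;
    const int n_atoms = static_cast<int>(x.size());

    // Cell of atom i: m = floor(x_i/b), n = floor(y_i/b), flattened to m*n_cells_y + n.
    // Clamping guards against atoms sitting exactly on the upper domain boundary.
    std::vector<int> cell_of_atom(n_atoms);
    std::vector<int> count(n_cells, 0);
    for (int i = 0; i < n_atoms; ++i) {
        int m = std::min(cl.n_cells_x - 1, static_cast<int>(std::floor(x[i] / b)));
        int n = std::min(cl.n_cells_y - 1, static_cast<int>(std::floor(y[i] / b)));
        cell_of_atom[i] = m * cl.n_cells_y + n;
        ++count[cell_of_atom[i]];
    }

    // Exclusive prefix sum over the per-cell counts gives the start offsets.
    cl.cell_start.assign(n_cells + 1, 0);
    for (int c = 0; c < n_cells; ++c)
        cl.cell_start[c + 1] = cl.cell_start[c] + count[c];

    // Scatter the atom indices into the contiguous array (a counting sort).
    cl.sorted_atoms.resize(n_atoms);
    std::vector<int> insert_pos(cl.cell_start.begin(), cl.cell_start.end() - 1);
    for (int i = 0; i < n_atoms; ++i)
        cl.sorted_atoms[insert_pos[cell_of_atom[i]]++] = i;

    return cl;
}
```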

The neighbor search then proceeds as follows: for atom \(i\), compute the cell \((m_i,n_i)\) in which this atom resides and then loop over all atoms in this cell and in the adjacent cells \((m_i \pm 1, n_i)\), \((m_i, n_i \pm 1)\) and \((m_i \pm 1, n_i \pm 1)\). In two dimensions, this yields a loop over \(9\) cells; in three dimensions, the loop runs over \(27\) cells. Whenever the distance between atom \(i\) and one of these candidate atoms is smaller than the cutoff \(r_c\), we add the pair to the neighbor list. Note that if the cell size \(b\) is smaller than \(r_c\), we need to include more than just the adjacent cells in the search.
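Building on the binned data structure sketched above, a neighbor query for a single atom could then look as follows (again an illustrative sketch with invented names, assuming \(b \ge r_c\) and non-periodic boundaries):

```cpp
// Minimal sketch: find all neighbors of atom i within the cutoff r_c using a
// cell list. `cell_start` and `sorted_atoms` are the contiguous arrays produced
// by a binning step such as the one sketched above.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<int> neighbors_of(int i,
                              const std::vector<double>& x, const std::vector<double>& y,
                              const std::vector<int>& cell_start,
                              const std::vector<int>& sorted_atoms,
                              int n_cells_x, int n_cells_y, double b, double r_c) {
    std::vector<int> neighbors;
    // Cell (m_i, n_i) in which atom i resides.
    const int m_i = std::min(n_cells_x - 1, static_cast<int>(std::floor(x[i] / b)));
    const int n_i = std::min(n_cells_y - 1, static_cast<int>(std::floor(y[i] / b)));

    // Loop over the cell of atom i and the 8 adjacent cells (9 cells in 2-D, 27 in 3-D).
    for (int m = m_i - 1; m <= m_i + 1; ++m) {
        for (int n = n_i - 1; n <= n_i + 1; ++n) {
            if (m < 0 || m >= n_cells_x || n < 0 || n >= n_cells_y)
                continue;  // no periodic boundaries in this sketch
            const int cell = m * n_cells_y + n;
            // All atoms of cell (m,n) sit contiguously in `sorted_atoms`,
            // starting at cell_start[cell] and ending before cell_start[cell + 1].
            for (int k = cell_start[cell]; k < cell_start[cell + 1]; ++k) {
                const int j = sorted_atoms[k];
                if (j == i) continue;
                const double dx = x[i] - x[j], dy = y[i] - y[j];
                if (dx * dx + dy * dy < r_c * r_c)
                    neighbors.push_back(j);
            }
        }
    }
    return neighbors;
}
```

In a real molecular dynamics code, this query would be carried out for all atoms to build the full neighbor list, and periodic boundary conditions would have to be taken into account when looping over the adjacent cells.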
