Chapter 7
Domain decomposition

Context: Parallelization in molecular dynamics typically occurs through domain decomposition. The simulation domain is divided into subdomains, each of which runs within its own MPI process. This distributes the workload among different compute units. Communication occurs only at the interfaces of the subdomains, either to exchange atoms between subdomains or to communicate ghost atoms that are required for the computation of correct forces in short-range interatomic potentials.

7.1 Simulation domain

Our atomic system has so far lived in an infinite space consisting of vacuum. We have made no reference to a simulation domain, and the code developed up to Milestone 07 makes no reference to such a domain. We now introduce domain decomposition, and for this we need a simulation domain, i.e. a region of space \(\Omega \) in which our atoms can reside. This domain can be periodic, which we will discuss in more detail in the next chapter.

We will assume that the simulation domain has its origin at \((0,0,0)\) and is spanned by three linearly independent vectors \(\vec {a}_1\), \(\vec {a}_2\) and \(\vec {a}_3\). Any atomic position can then be expressed as \begin {equation} \vec {r}_i = s_{i,1} \vec {a}_1 + s_{i,2}\vec {a}_2 + s_{i,3} \vec {a}_3 \end {equation} with \(s_{i,\alpha} \in [0,1)\). \(s_{i,\alpha}\) must remain in this interval since we do not allow atoms outside of the simulation domain. The vector \(\vec {s}_i\) is the scaled position of the atom \(i\). Using the domain matrix \(\underline {h}=(\vec {a}_1, \vec {a}_2, \vec {a}_3)\), we can express this more compactly as \(\vec {r}_i=\underline {h}\cdot \vec {s}_i\). Conversely, we obtain the scaled positions from \(\vec {s}_i=\underline {h}^{-1}\cdot \vec {r}_i\).
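
The conversion between Cartesian and scaled positions is a small linear-algebra exercise. The following sketch illustrates it for a rectilinear domain; the use of Eigen and the function names are assumptions made for this example only, not part of the milestone code.

#include <Eigen/Dense>

// Domain matrix h whose columns are the domain vectors a1, a2 and a3.
// Shown here for a rectilinear domain with linear dimensions Lx, Ly, Lz.
Eigen::Matrix3d domain_matrix(double Lx, double Ly, double Lz) {
    Eigen::Matrix3d h;
    h << Lx, 0,  0,
         0,  Ly, 0,
         0,  0,  Lz;
    return h;
}

// r = h . s: Cartesian position from scaled position
Eigen::Vector3d to_cartesian(const Eigen::Matrix3d &h, const Eigen::Vector3d &s) {
    return h * s;
}

// s = h^{-1} . r: scaled position from Cartesian position
Eigen::Vector3d to_scaled(const Eigen::Matrix3d &h, const Eigen::Vector3d &r) {
    return h.inverse() * r;
}

For the rectilinear domains used below, the inverse reduces to a division by \(L_x\), \(L_y\) and \(L_z\); the matrix form becomes useful for tilted domains.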

In what follows, we assume rectilinear domains, i.e. \(\vec {a}_1=(L_x, 0, 0)\), \(\vec {a}_2=(0, L_y, 0)\) and \(\vec {a}_3=(0, 0, L_z)\), where \(L_x\), \(L_y\) and \(L_z\) are the linear dimensions of the domain. The methods described in the following can be straightforwardly extended to arbitrary (tilted) domains.

7.2 Decomposition into Cartesian domains

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=cee045e7-7cdb-4cda-ad4a-ad6b007de343

We decompose the full system into \(N_x\times N_y\times N_z\) subdomains. For a rectilinear domain, this means each subdomain has linear dimensions of \(L_x/N_x\), \(L_y/N_y\) and \(L_z/N_z\). Each subdomain propagates its own atoms. When atoms leave the subdomain, they are transferred to the respective neighboring subdomain. We call this process atom exchange.
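
MPI provides Cartesian topology routines that are convenient for setting up such a grid of subdomains. The sketch below is a minimal illustration under the rectilinear assumption above; the function name and output variables are chosen for this example and are not taken from the milestone code.

#include <mpi.h>

// Set up an Nx x Ny x Nz Cartesian process grid and compute the bounds of the
// subdomain owned by the calling rank (rectilinear domain of size Lx, Ly, Lz).
void setup_decomposition(int Nx, int Ny, int Nz, double Lx, double Ly, double Lz,
                         MPI_Comm &cart_comm, double lower[3], double upper[3]) {
    int dims[3] = {Nx, Ny, Nz};
    int periods[3] = {0, 0, 0};  // non-periodic for now; periodicity follows in the next chapter
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart_comm);

    int rank, coords[3];
    MPI_Comm_rank(cart_comm, &rank);
    MPI_Cart_coords(cart_comm, rank, 3, coords);

    double domain_length[3] = {Lx, Ly, Lz};
    int n[3] = {Nx, Ny, Nz};
    for (int d = 0; d < 3; ++d) {
        double l = domain_length[d] / n[d];  // linear size of each subdomain
        lower[d] = coords[d] * l;            // lower bound of the local subdomain
        upper[d] = lower[d] + l;             // upper bound of the local subdomain
    }
}

An atom with scaled position \(\vec{s}_i\) then belongs to the subdomain with grid coordinates \((\lfloor s_{i,1} N_x\rfloor, \lfloor s_{i,2} N_y\rfloor, \lfloor s_{i,3} N_z\rfloor)\), which is the test used to decide where an atom must be sent during atom exchange.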

Domain decomposition algorithms for MD simulations have started to appear in the literature around 1990. Some of the earliest references to this type of algorithm are Brugè and Fornili (1990); Liem et al. (1991); Chynoweth et al. (1991); Pinches et al. (1991); Brown et al. (1993); Plimpton (1995).

7.3 Ghost atoms

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=03a83e2f-54d9-4423-acb8-ad6b007de308

The atoms within each subdomain are not sufficient to compute the forces on these atoms. In order to compute forces for atoms near the domain boundary, we need to transfer atoms that sit outside of the subdomain from the neighboring subdomains. These atoms are called ghost atoms. All atoms up to a distance \(r_\text {G}\) from the subdomain boundary are transferred. For a Lennard-Jones potential, \(r_\text {G}=r_c\), but for the EAM potential discussed here \(r_\text {G}=2 r_c\). This is because a force in the EAM potential is affected by atoms that sit up to twice the cutoff radius \(r_c\) away: the embedding term couples the force on an atom to the electron density on its neighbors, which in turn depends on the neighbors of those neighbors.
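
As a concrete illustration, the ghost atoms to be sent to the left neighbor along one direction can be selected by a simple distance test against the lower subdomain boundary. The container and function names below are assumptions for this sketch.

#include <array>
#include <vector>

// Collect indices of atoms whose coordinate along direction `dim` lies within
// the ghost distance r_G of the lower (left) subdomain boundary `lower`.
std::vector<int> find_left_ghosts(const std::vector<std::array<double, 3>> &positions,
                                  int dim, double lower, double r_G) {
    std::vector<int> ghosts;
    for (int i = 0; i < static_cast<int>(positions.size()); ++i) {
        if (positions[i][dim] < lower + r_G)  // atom sits within r_G of the boundary
            ghosts.push_back(i);
    }
    return ghosts;
}

The analogous test against the upper boundary selects the atoms sent to the right. Ghost atoms already received from a neighbor must be included in these tests as well, since they may lie within \(r_\text {G}\) of a boundary in another direction.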

7.4 Communication pattern

https://uni-freiburg.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=2ff96a92-b994-4cee-84b6-ad6b008fc0d3

The basic communication pattern involves two MPI_Sendrecv commands per Cartesian direction. The atoms that are sent (either exchanged or as ghost atoms) must be serialized into a send buffer. Given that serialization has occurred into the buffers send_left and send_right, the communication pattern looks schematically as follows (MPI count, datatype, tag and status arguments are omitted):

MPI_Sendrecv(&send_left, left_, &recv_right, right_, comm_);
MPI_Sendrecv(&send_right, right_, &recv_left, left_, comm_);

Here comm_ contains the MPI communicator and left_ and right_ the MPI ranks of the processes that host the subdomain to the left and the right, respectively, of the current subdomain. The buffers recv_left and recv_right hold the serialized atomic information received from the left and right, respectively. This information needs to be deserialized into the respective atom data type.
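
In full MPI calls, one complication is that the number of atoms crossing a boundary is not known in advance, so the receive buffers cannot be sized beforehand. A common solution is to first exchange the buffer lengths and then the serialized data. The sketch below follows this approach with std::vector<double> buffers (e.g. positions and velocities packed as consecutive doubles); the helper function is an assumption for illustration, not the course's actual implementation.

#include <mpi.h>
#include <vector>

// Exchange serialized atom data with the left and right neighbors along one
// Cartesian direction. Buffer lengths are communicated before the data itself.
void sendrecv_neighbors(const std::vector<double> &send_left,
                        const std::vector<double> &send_right,
                        std::vector<double> &recv_left,
                        std::vector<double> &recv_right,
                        int left_, int right_, MPI_Comm comm_) {
    int nsend_left = static_cast<int>(send_left.size());
    int nsend_right = static_cast<int>(send_right.size());
    int nrecv_left = 0, nrecv_right = 0;

    // First exchange the number of values so the receive buffers can be sized.
    MPI_Sendrecv(&nsend_left, 1, MPI_INT, left_, 0,
                 &nrecv_right, 1, MPI_INT, right_, 0, comm_, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&nsend_right, 1, MPI_INT, right_, 1,
                 &nrecv_left, 1, MPI_INT, left_, 1, comm_, MPI_STATUS_IGNORE);

    recv_right.resize(nrecv_right);
    recv_left.resize(nrecv_left);

    // Then exchange the serialized atomic data itself.
    MPI_Sendrecv(send_left.data(), nsend_left, MPI_DOUBLE, left_, 2,
                 recv_right.data(), nrecv_right, MPI_DOUBLE, right_, 2,
                 comm_, MPI_STATUS_IGNORE);
    MPI_Sendrecv(send_right.data(), nsend_right, MPI_DOUBLE, right_, 3,
                 recv_left.data(), nrecv_left, MPI_DOUBLE, left_, 3,
                 comm_, MPI_STATUS_IGNORE);
}

At a non-periodic domain boundary, left_ or right_ can be set to MPI_PROC_NULL, in which case the corresponding sends and receives complete without effect. The same pattern is then repeated for the y and z directions.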

Bibliography

   D. Brown, J. H. R. Clarke, M. Okuda, and T. Yamazaki. A domain decomposition parallelization strategy for molecular dynamics simulations on distributed memory machines. Comput. Phys. Comm., 74(1):67–80, 1993. URL https://doi.org/10.1016/0010-4655(93)90107-N.

   F. Brugè and S. L. Fornili. Concurrent molecular dynamics simulation of spinodal phase transition on transputer arrays. Comput. Phys. Comm., 60(1):31–38, 1990. URL https://doi.org/10.1016/0010-4655(90)90076-D.

   S. Chynoweth, U. C. Klomp, and L. E. Scales. Simulation of organic liquids using pseudo-pairwise interatomic forces on a toroidal transputer array. Comput. Phys. Comm., 62(2):297–306, 1991. URL https://doi.org/10.1016/0010-4655(91)90102-Q.

   S. Y. Liem, D. Brown, and J. H. R. Clarke. Molecular dynamics simulations on distributed memory machines. Comput. Phys. Comm., 67(2):261–267, 1991. URL https://doi.org/10.1016/0010-4655(91)90021-C.

   M. R. S. Pinches, D. J. Tildesley, and W. Smith. Large scale molecular dynamics on parallel computers using the link-cell algorithm. Molecular Simulation, 6(1-3):51–87, 1991. URL https://doi.org/10.1080/08927029108022139.

   S. Plimpton. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys., 117(1):1–19, 1995. URL https://doi.org/10.1006/jcph.1995.1039.


Copyright © 2021-2023 Lars Pastewka, Wolfram Nöhring, Lucas Frérot. All material licensed under CC BY SA 4.0 unless otherwise noted.