Kinetic Monte Carlo Notes

Stefan Bringuier

Monte Carlo Methods

  • Solve complex problems using random sampling from a probability distribution (i.e., a stochastic description).
  • Useful for evolving a physical system to a new state drawn from an ensemble of potential future states.

Integrating a function with MC sampling

  • If we want to evaluate the integral of a function over some domain, we can numerically approximate it using the midpoint rule: \[ \int_a^b f(x) dx \approx \frac{b-a}{N} \sum_{i=1}^N f(x_i) \qquad(1)\]

  • There is an alternative way to do this using probability theory: the expectation value of a function \(f(x)\) for a random variable \(x\) with PDF \(p(x)\) is \(\int_a^b p(x) f(x) dx \approx \frac{1}{N} \sum_{i=1}^N f(x_i)\), where the \(x_i\) are sampled from \(p(x)\). Taking \(p(x) = \frac{1}{b-a}\), a uniform distribution over \([a,b]\), gives: \[ \int_a^b f(x) dx \approx \frac{b-a}{N} \sum_{i=1}^N f(x_i) \qquad(2)\]

  • The difference between numerically evaluating Equation 1 and Equation 2 is that Equation 1 is evaluated over a grid of points, whereas Equation 2 uses randomly sampled points.

  • The error of MC integration is \(\propto \frac{1}{\sqrt{N}}\) as a consequence of the central limit theorem.
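As a minimal sketch of Equation 2, the following estimates \(\int_1^4 \log(x)\,dx\) (the integral shown in Figure 2) by uniform sampling; the sample size and seed are illustrative choices:

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] by uniform random sampling (Eq. 2)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Integral of log(x) on [1, 4]; the analytic value is 4*log(4) - 3.
estimate = mc_integrate(math.log, 1.0, 4.0, 100_000)
exact = 4 * math.log(4) - 3
```

Doubling the accuracy requires roughly four times as many samples, consistent with the \(1/\sqrt{N}\) error scaling.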

Example integrating a function using MC sampling1

Figure 1: Randomly sampled points from a uniform distribution over the interval \([1,4]\). The black points are those that are accepted.

Example integrating a function using MC sampling

Figure 2: Integration of \(\log(x)\) using MC.

Statistical Thermodynamics & Ensemble Properties

  • Microscopic → Macroscopic description
    • How positions and momenta of \(10^{23}\) particles relates to bulk temperature, pressure, or volume.
  • Ensembles use the probability of specific microstates. Probability theory provides the average of a function or variable, \(\langle X \rangle\): \[ \langle X \rangle = \frac{1}{N} \sum_{i=1}^N n_i \, X_i = \sum_{i=1}^N \underbrace{p_i}_{\text{PDF}} X_i \qquad(3)\] where \(n_i\) is the number of times state \(i\) is observed and \(p_i = n_i/N\).
  • If \(X\) is continuous, the sum in Equation 3 becomes an integral.
  • \(p_i\) is the probability the system is in state \(i\). The probability density function (PDF) has the property that it is normalized, i.e., \(\sum_{i=1}^N p_i = 1\)
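A small sketch of Equation 3 with hypothetical, illustrative numbers for \(p_i\) and \(X_i\):

```python
def ensemble_average(p, X):
    """<X> = sum_i p_i * X_i (Eq. 3), with p a normalized PDF."""
    assert abs(sum(p) - 1.0) < 1e-12, "PDF must satisfy sum_i p_i = 1"
    return sum(pi * xi for pi, xi in zip(p, X))

# Illustrative three-state system (values are assumptions, not from the notes).
p = [0.5, 0.3, 0.2]
X = [1.0, 2.0, 3.0]
avg = ensemble_average(p, X)  # 0.5*1 + 0.3*2 + 0.2*3 = 1.7
```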

Statistical Thermodynamics & Ensemble Properties

  • The consequence of Equation 3 is that an ensemble of microscopic systems can be used to calculate macroscopic properties.
  • The choice of \(p_i=\frac{z_i}{Z}\) depends on the macroscopic conditions, which manifest through the partition function: \[ Z = \sum_i e^{-\beta\,X_i} \qquad(4)\]
  • For a macroscopic system with constant particle number, volume, and temperature, i.e., the canonical ensemble:
    • \(\beta = \frac{1}{k_B\,T}\) and \(X_i=E_i\), where the Boltzmann factor is \(z_i = e^{-\frac{E_i}{k_B\,T}}\): \[ \langle E \rangle = \frac{1}{Z} \sum_i e^{-\frac{E_i}{k_B\,T}} E_i \qquad(5)\]
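Equation 5 can be evaluated directly when the states are few and discrete. A sketch for a hypothetical two-level system (the energy levels and \(k_B T\) value are illustrative assumptions):

```python
import math

def average_energy(energies, kT):
    """<E> = (1/Z) * sum_i exp(-E_i/kT) * E_i  (Eq. 5), discrete states."""
    weights = [math.exp(-E / kT) for E in energies]  # Boltzmann factors z_i
    Z = sum(weights)                                  # partition function (Eq. 4)
    return sum(w * E for w, E in zip(weights, energies)) / Z

# Hypothetical two-level system with E = 0 and 1 in units where kT = 1;
# analytically <E> = e^{-1} / (1 + e^{-1}) = 1 / (e + 1).
E_avg = average_energy([0.0, 1.0], kT=1.0)
```

For a real material the sum over states is intractable, which is exactly the motivation for the sampling methods below.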

Statistical Thermodynamics & Ensemble Properties

  • The biggest challenge in evaluating Equation 5 is that it requires knowledge of all possible configurations.
  • If \(Z\) is a configurational integral, e.g., \(Z = \int e^{-U(\mathbf{r}^N)/k_B\,T} d\mathbf{r}^N\), then the integral is over \(3N\) dimensions!
  • The key insight is that most configurations are not probable:
    • If two atoms are extremely close at moderate \(T\), the term \(U(\mathbf{r}^N)\) is large and hence the probability is low.
  • The question then becomes: can we efficiently sample \(p_i = \frac{1}{Z}e^{-\frac{E_i}{k_B\,T}}\), i.e., the states of highest probability centered around \(\langle E \rangle\), given that \(Z\) is not accessible?

Metropolis Monte Carlo

  • If we wanted to evaluate Equation 5 for an atomic system (i.e., with the discrete states replaced by continuous atomic configurations), we could use MC sampling as in Equation 2.
  • However, we would need to integrate over \(3N\) dimensions!
  • This eliminates the feasibility of determining the partition function \(Z\), which is required to know the probability of any specific configuration \(p_i\)

Metropolis Monte Carlo

  • The Metropolis algorithm is a procedure to sample states \(i\) with probability \(p_i\)
  • This is achieved by using relative probabilities, i.e., \(\frac{p_i}{p_j}\)
  • From these we get the correct average quantities.
  • This works because, even though we don’t know \(Z\) and can’t determine \(p_i\) directly, sampling according to \(\frac{p_i}{p_j}\) gives the correct distribution
  • Relative probabilities are given as: \[ \frac{p_i}{p_j} = \frac{e^{-E_i/\,k_B\,T}}{Z} \frac{Z}{e^{-E_j/\,k_B\,T}} = e^{-(E_i - E_j)/\,k_B\,T} \qquad(6)\]
  • This depends only on the energy difference between states, as shown in Figure 3
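Because \(Z\) cancels in Equation 6, the ratio is computable from energies alone. A one-line sketch (the energies and \(k_B T\) are illustrative):

```python
import math

def boltzmann_ratio(E_i, E_j, kT):
    """p_i / p_j = exp(-(E_i - E_j) / kT); the partition function Z cancels (Eq. 6)."""
    return math.exp(-(E_i - E_j) / kT)

# A state higher in energy by 1 kT is e^{-1} ≈ 0.37 times as probable.
ratio = boltzmann_ratio(2.0, 1.0, kT=1.0)
```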

Metropolis Monte Carlo

Figure 3: Relative probability for two states.

Metropolis Monte Carlo

  • In the Metropolis MC approach we use the relations in Figure 3 to create a trajectory of states.
  • The steps for MMC are:
    1. Generate configuration \(i\) with energy \(E_i\)
    2. Generate a random trial configuration, \(i+1\), and calculate \(E_{i+1}\)
    3. Get the relative probability \(\frac{p_{i+1}}{p_i}\) via Equation 6.
    4. Use the relations in Figure 3: accept if \(\frac{p_{i+1}}{p_i} \ge 1\), otherwise accept with probability \(\frac{p_{i+1}}{p_i}\) by comparing against a randomly generated number in \((0,1)\)
    5. If accepted, add \(i+1\) to the trajectory, otherwise add \(i\) again2. Repeat until the quantity \(\langle X \rangle = \frac{1}{N} \sum_{i=1}^N X_i\) converges

Metropolis Monte Carlo

Figure 4: Acceptance and rejection of Metropolis MC (see LeSar (2013)).

Backmatter

Connect with me!

stefanbringuier@gmail.com

Note

This presentation can be viewed online at https://stefanbringuier.github.io/KMCNotes. A report-formatted PDF of this presentation can be downloaded here.

Tip

To export revealjs presentations to pdf, press ‘e’ then ‘ctrl-p’ ➡ ‘save as pdf’

References

Andersen, Mie, Chiara Panosetti, and Karsten Reuter. 2019. “A Practical Guide to Surface Kinetic Monte Carlo Simulations.” Frontiers in Chemistry 7. https://doi.org/10.3389/fchem.2019.00202.
LeSar, R. 2013. Introduction to Computational Materials Science: Fundamentals to Applications. Cambridge University Press. https://books.google.com/books?id=QzkhAwAAQBAJ.
“Numerical Integration - Midpoint, Trapezoid, Simpson’s Rule.” 2021.

Footnotes

  1. A more detailed notebook implementing the code can be viewed here

  2. It is required to add the previous configuration \(i\) to the trajectory if configuration \(i+1\) is rejected in order to ensure the sampled distribution is valid