780.20: 1094 Session 13

Handouts: Excerpts on autocorrelation functions and variational Monte Carlo, printouts of codes

In this session, we'll continue our survey of Monte Carlo computational methods with a look at autocorrelation and variational Monte Carlo applied to simple problems. The discussion is easily generalized to the more complex problems for which these approaches are well suited.

Session 12 (cont.)

Continue with "The Two-D Ising Model", but move on to Session 13 "Autocorrelation" after an hour. If you have time later (possibly next week), go back and finish it.


Autocorrelation

When evaluating an average over Monte Carlo configurations, we want to skip the first n1 steps (to allow the walk to equilibrate) and then use data taken only every n0 Monte Carlo steps (to reduce correlations between samples). How do we determine how large to take n0 and n1? We'll use a simple integration problem to explore these issues: integrating x^2 e^(-x^2) from 0 to 10 and dividing by the integral of e^(-x^2) over the same interval (so this is like <x^2> with a probability distribution proportional to e^(-x^2)).

  1. Calculate the (normalized) integral in Mathematica for reference.
  2. Consider first random sampling as a review/recap of our discussions.
  3. Next consider the Metropolis algorithm. There are three parameters for you to adjust (besides the total number of iterations): "max_step", "initial_skip" and "skip".

  4. To figure out a good value for "skip", we'll calculate the "autocorrelation function". This is defined in the Pang handout as equation (9.21). For us, "A" is "x2". Your task is to generate the analog of Fig. 9.1 for the current problem.
  5. Now that you have reliable values for the parameters, repeat the analysis of step 2. above, but using Metropolis sampling instead of random sampling.

Variational Monte Carlo

We'll do a simple example of variational Monte Carlo to illustrate the basic idea. Generalizing to more complex systems is straightforward (but takes a lot longer to run!).

  1. Take a look at the "variational_SHO.cpp" code and how it implements variational Monte Carlo for a one-dimensional harmonic oscillator using the VariationalMC class. Make a variational_SHO project and add variational_SHO.cpp, random_seed.cpp, VariationalMC.h, and VariationalMC.cpp. Run it. You'll be asked to supply a range and step size for the variational parameter "a". This will require some experimentation to make sure the minimum with respect to "a" is in the interval you select.

  2. The variational_SHO.dat file is suitable for plotting in gnuplot. The plotfile "variational_SHO.plt" is provided to illustrate how to plot it with error bars. Try it out (and attach a plot). Does the graph make sense? How might you modify the code to find the minimum automatically (rather than graphically)?

  3. Does this code implement the features explored in the "Autocorrelation" section? If not, how would you improve this code?

  4. Try modifying the trial function to another reasonable form with one variational parameter "a" (use your imagination!). You'll have to modify the "Psi" and "E_local" functions. Find the minimum graphically and compare to the exact solution.
