Creating your own Jastrow Factor

We present here how to create your own electron-electron Jastrow factor and use it in QMCTorch. To do so we must import the base class of the electron-electron Jastrow kernel. We also create a H2 molecule.

[1]:
import torch
from qmctorch.scf import Molecule
from qmctorch.wavefunction import SlaterJastrow
from qmctorch.wavefunction.jastrows.elec_elec.jastrow_factor_electron_electron import JastrowFactorElectronElectron
from qmctorch.wavefunction.jastrows.elec_elec.kernels import JastrowKernelElectronElectronBase
INFO:QMCTorch|  ____    __  ______________             _
INFO:QMCTorch| / __ \  /  |/  / ___/_  __/__  ________/ /
INFO:QMCTorch|/ /_/ / / /|_/ / /__  / / / _ \/ __/ __/ _ \
INFO:QMCTorch|\___\_\/_/  /_/\___/ /_/  \___/_/  \__/_//_/
[2]:
mol = Molecule(atom='H 0. 0. 0; H 0. 0. 1.', calculator='pyscf', unit='bohr', redo_scf=True)
INFO:QMCTorch|
INFO:QMCTorch| SCF Calculation
INFO:QMCTorch|  Removing H2_pyscf_dzp.hdf5 and redo SCF calculations
INFO:QMCTorch|  Running scf  calculation
converged SCF energy = -1.07280585930373
INFO:QMCTorch|  Molecule name       : H2
INFO:QMCTorch|  Number of electrons : 2
INFO:QMCTorch|  SCF calculator      : pyscf
INFO:QMCTorch|  Basis set           : dzp
INFO:QMCTorch|  SCF                 : HF
INFO:QMCTorch|  Number of AOs       : 10
INFO:QMCTorch|  Number of MOs       : 10
INFO:QMCTorch|  SCF Energy          : -1.073 Hartree

We can then use this base class to create a new Jastrow factor. This is done in the same way one would create a new neural network layer in PyTorch.

[3]:
from torch import nn
class MyJastrowKernel(JastrowKernelElectronElectronBase):
    def __init__(self, nup, ndown, cuda, size=16):
        super().__init__(nup, ndown, cuda)
        self.fc1 = nn.Linear(1, size, bias=False)
        self.fc2 = nn.Linear(size, 1, bias=False)
    def forward(self, x):
        # x: electron-electron distances, shape [Nbatch, Npair]
        nbatch, npair = x.shape
        x = x.reshape(-1, 1)             # flatten to [Nbatch*Npair, 1]
        x = self.fc2(self.fc1(x))        # transform each pair distance
        return x.reshape(nbatch, npair)  # restore [Nbatch, Npair]

As seen above the prototype of the class constructor must be:

def __init__(self, nup, ndown, cuda, **kwargs)

The keyword arguments can contain any pairs, such as size=16; their values are supplied through the kernel_kwargs argument of the Jastrow factor.
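Schematically, the factor forwards its kernel_kwargs to the kernel constructor. The sketch below is a hypothetical illustration of that mechanism only (the names FactorSketch and KernelSketch are ours, and this is not QMCTorch's actual implementation):

```python
# Hypothetical sketch of how a factor class can forward kernel_kwargs
# to the kernel constructor. Not QMCTorch's actual code.
class KernelSketch:
    def __init__(self, nup, ndown, cuda, size=16):
        # extra keyword arguments (here: size) arrive with their defaults
        # unless overridden through kernel_kwargs
        self.size = size

class FactorSketch:
    def __init__(self, kernel_cls, nup, ndown, cuda=False, kernel_kwargs=None):
        kernel_kwargs = kernel_kwargs or {}
        # the mandatory arguments come first, the keyword pairs are unpacked after
        self.kernel = kernel_cls(nup, ndown, cuda, **kernel_kwargs)

f = FactorSketch(KernelSketch, 1, 1, kernel_kwargs={'size': 64})
print(f.kernel.size)  # 64
```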

This Jastrow kernel uses two fully connected layers. The size of the hidden layer is controlled by the keyword argument size, whose default value is 16.

It is important to note that the first and second derivatives of the Jastrow kernel with respect to the electronic positions are computed via automatic differentiation, as implemented in the JastrowKernelElectronElectronBase class. Hence there is no need to derive and implement these derivatives by hand. However, the forward function, which takes as input a torch.Tensor of dimension [Nbatch, Npair], must first reshape this tensor to [Nbatch*Npair, 1], then apply the transformation on this tensor and finally reshape the output tensor back to [Nbatch, Npair].
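To see why a forward pass alone is enough, here is a standalone sketch in plain PyTorch, without the QMCTorch base class, showing that first and second derivatives of such a kernel come for free from autograd. The class ToyKernel is ours, and we add a sigmoid activation (absent from the kernel above) so that the second derivative is non-trivial:

```python
import torch
from torch import nn

class ToyKernel(nn.Module):
    """Standalone mock of a Jastrow kernel (ours, not part of QMCTorch)."""
    def __init__(self, size=8):
        super().__init__()
        self.fc1 = nn.Linear(1, size, bias=False)
        self.fc2 = nn.Linear(size, 1, bias=False)

    def forward(self, x):
        # x: electron-electron distances, shape [Nbatch, Npair]
        nbatch, npair = x.shape
        x = x.reshape(-1, 1)                         # [Nbatch*Npair, 1]
        x = self.fc2(torch.sigmoid(self.fc1(x)))     # nonlinearity added for illustration
        return x.reshape(nbatch, npair)              # [Nbatch, Npair]

kernel = ToyKernel()
r = torch.rand(5, 1, requires_grad=True)  # 5 samples, 1 electron pair
k = kernel(r)

# first derivative w.r.t. the pair distances, keeping the graph
dk, = torch.autograd.grad(k.sum(), r, create_graph=True)
# second derivative
d2k, = torch.autograd.grad(dk.sum(), r)
print(dk.shape, d2k.shape)  # torch.Size([5, 1]) torch.Size([5, 1])
```

This mirrors what the base class does internally for you: only the forward pass must be written by hand.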

To use this new Jastrow kernel in the SlaterJastrow wave function ansatz we first need to instantiate a Jastrow factor that uses the kernel.

[4]:
jastrow = JastrowFactorElectronElectron(mol, MyJastrowKernel, kernel_kwargs={'size': 64})

This Jastrow factor can then be passed as an argument to the SlaterJastrow wave function.

[5]:
wf = SlaterJastrow(mol, jastrow=jastrow)
INFO:QMCTorch|
INFO:QMCTorch| Wave Function
INFO:QMCTorch|  Jastrow factor      : True
INFO:QMCTorch|  Jastrow kernel      : ee -> MyJastrowKernel
INFO:QMCTorch|  Highest MO included : 10
INFO:QMCTorch|  Configurations      : ground_state
INFO:QMCTorch|  Number of confs     : 1
INFO:QMCTorch|  Kinetic energy      : jacobi
INFO:QMCTorch|  Number var  param   : 249
INFO:QMCTorch|  Cuda support        : False
[6]:
pos = torch.rand(10, wf.nelec*3)
print(wf(pos))
tensor([[0.3465],
        [0.2254],
        [0.1533],
        [0.2485],
        [0.4022],
        [0.2991],
        [0.2480],
        [0.3140],
        [0.3298],
        [0.1233]], grad_fn=<MulBackward0>)