Creating your own Backflow transformation

We present here how to create your own backflow transformation and use it in QMCTorch. To define a custom kernel we must import the base class of the backflow kernel. We also create an H2 molecule.

[1]:
import torch
from qmctorch.scf import Molecule
from qmctorch.wavefunction import SlaterJastrow
from qmctorch.wavefunction.orbitals.backflow.kernels import BackFlowKernelBase
from qmctorch.wavefunction.orbitals.backflow import BackFlowTransformation

mol = Molecule(atom='H 0. 0. 0; H 0. 0. 1.', unit='bohr', redo_scf=True)
INFO:QMCTorch|  ____    __  ______________             _
INFO:QMCTorch| / __ \  /  |/  / ___/_  __/__  ________/ /
INFO:QMCTorch|/ /_/ / / /|_/ / /__  / / / _ \/ __/ __/ _ \
INFO:QMCTorch|\___\_\/_/  /_/\___/ /_/  \___/_/  \__/_//_/
INFO:QMCTorch|
INFO:QMCTorch| SCF Calculation
INFO:QMCTorch|  Removing H2_adf_dzp.hdf5 and redo SCF calculations
INFO:QMCTorch|  Running scf  calculation
[05.12|11:49:03] PLAMS working folder: /home/nico/QMCTorch/docs/notebooks/plams_workdir
INFO:QMCTorch|  Molecule name       : H2
INFO:QMCTorch|  Number of electrons : 2
INFO:QMCTorch|  SCF calculator      : adf
INFO:QMCTorch|  Basis set           : dzp
INFO:QMCTorch|  SCF                 : HF
INFO:QMCTorch|  Number of AOs       : 10
INFO:QMCTorch|  Number of MOs       : 10
INFO:QMCTorch|  SCF Energy          : -1.082 Hartree

We can then use this base class to create a new backflow transformation kernel. This is done in the same way one would create a new neural network layer in PyTorch:

[2]:
from torch import nn

class MyBackflowKernel(BackFlowKernelBase):
    def __init__(self, mol, cuda, size=16):
        super().__init__(mol, cuda)
        # two fully connected layers applied element-wise to the distances
        self.fc1 = nn.Linear(1, size, bias=False)
        self.fc2 = nn.Linear(size, 1, bias=False)

    def forward(self, x):
        original_shape = x.shape
        # flatten the input, apply both layers, then restore the shape
        x = x.reshape(-1, 1)
        x = self.fc2(self.fc1(x))
        return x.reshape(*original_shape)
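
As a quick sanity check, we can apply the kernel directly to a dummy tensor standing in for the electron-electron distances. This is only a sketch: the shape used here (nbatch x nelec x nelec) is an assumption for illustration, not the exact tensor the layer receives internally.

kernel = MyBackflowKernel(mol, cuda=False, size=8)
ree = torch.rand(10, 2, 2)  # stand-in for a batch of electron-electron distances
print(kernel(ree).shape)    # torch.Size([10, 2, 2]): the input shape is preserved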

This backflow transformation consists of two fully connected layers. The calculation of the first and second derivatives of the kernel is then done via automatic differentiation, as implemented in the BackFlowKernelBase class. To use this new kernel in the SlaterJastrow wave function ansatz, we first need to instantiate a backflow transformation using this kernel:

[3]:
backflow = BackFlowTransformation(mol, MyBackflowKernel, backflow_kernel_kwargs={'size': 8})
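
To illustrate the automatic differentiation mentioned above, here is a minimal autograd sketch computing the first derivative of the kernel with respect to its input. This is an illustration using the standard PyTorch API only, not the actual internal code of BackFlowKernelBase.

kernel = MyBackflowKernel(mol, cuda=False, size=8)
ree = torch.rand(10, 2, 2, requires_grad=True)  # dummy distance matrix
dout = torch.autograd.grad(kernel(ree).sum(), ree)[0]
print(dout.shape)  # same shape as the input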

We can then use this backflow transformation when instantiating the wave function:

[4]:
wf = SlaterJastrow(mol, backflow=backflow)
INFO:QMCTorch|
INFO:QMCTorch| Wave Function
INFO:QMCTorch|  Jastrow factor      : True
INFO:QMCTorch|  Jastrow kernel      : ee -> PadeJastrowKernel
INFO:QMCTorch|  Highest MO included : 10
INFO:QMCTorch|  Configurations      : ground_state
INFO:QMCTorch|  Number of confs     : 1
INFO:QMCTorch|  Kinetic energy      : jacobi
INFO:QMCTorch|  Number var  param   : 134
INFO:QMCTorch|  Cuda support        : False
We can finally evaluate the wave function at a batch of random electron positions:

[5]:
pos = torch.rand(10, wf.nelec*3)
print(wf(pos))
tensor([[0.0871],
        [0.0390],
        [0.0783],
        [0.1098],
        [0.0740],
        [0.0394],
        [0.1762],
        [0.0719],
        [0.0748],
        [0.0882]], grad_fn=<MulBackward0>)
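
Since the custom kernel is a regular torch module, its weights are part of the variational parameters of the wave function. As a last check (a sketch relying only on standard PyTorch autograd), we can backpropagate through the wave function values and verify that gradients are populated:

# backpropagate through the wave function; the two linear layers of the
# custom backflow kernel receive gradients like any other variational parameter
wf(pos).sum().backward()
print(any(p.grad is not None for p in wf.parameters()))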