
Commit

Merge branch 'master' of github.com:GPUE-group/GPUE
leios committed Dec 10, 2018
2 parents df59862 + 9ef03a3 commit 9efad61
Showing 6 changed files with 76 additions and 24 deletions.
5 changes: 4 additions & 1 deletion paper.bib
@@ -186,14 +186,17 @@ @article{ORiordan2016b
@misc{documentation,
title = {{GPUE documentation website}},
author = {Schloss, J and O'Riordan, L. J.},
year = {2018},
howpublished = {\url{https://gpue-group.github.io/}}
}


@misc{WittekGPE2016,
title = {{Comparing three numerical solvers of the Gross-Pitaevskii equation}},
author = {Wittek, P.},
year = {2016},
howpublished = {\url{https://web.archive.org/web/20171120181431/https://peterwittek.com/gpe-comparison.html}},
note = {Accessed: 2018-10-04}
note = {Updated: 18/01/2017. Accessed: 2018-10-04}
}

@phdthesis {ORiordan2017,
19 changes: 10 additions & 9 deletions paper.md
@@ -17,39 +17,40 @@ authors:
affiliations:
- name: Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan.
index: 1
date: 21 September 2018
date: 10 December 2018
bibliography: paper.bib
---

# Summary

Bose--Einstein Condensates (BECs) are superfluid systems consisting of bosonic atoms that have been cooled and condensed into a single, macroscopic ground state [@PethickSmith2008; @FetterRMP2009].
These systems can be created in an experimental laboratory and allow for the exploration of many interesting physical phenomena, such as superfluid turbulence [@Roche2008; @White2014; @Navon2016], chaotic dynamics [@Gardiner2002; @Kyriakopoulos2014; @Zhang2017], and other analogous quantum systems [@DalibardRMP2011].
Numerical simulations of BECs that directly mimic what can be seen in experiments are valuable for fundamental research in these areas.
These systems can be created in an experimental laboratory and allow for the exploration of physical phenomena such as superfluid turbulence [@Roche2008; @White2014; @Navon2016], chaotic dynamics [@Gardiner2002; @Kyriakopoulos2014; @Zhang2017], and analogues of other quantum systems [@DalibardRMP2011].
Numerical simulations of BECs that directly mimic experiments are valuable to fundamental research in these areas and allow for theoretical advances before experimental validation.
The dynamics of BEC systems can be found by solving the non-linear Schrödinger equation known as the Gross--Pitaevskii Equation (GPE),

$$
i\hbar \frac{\partial\Psi(\mathbf{r},t)}{\partial t} = \left( -\frac{\hbar^2}{2m} {\nabla^2} + V(\mathbf{r}) + g|\Psi(\mathbf{r},t)|^2\right)\Psi(\mathbf{r},t),
$$

where $\Psi(\mathbf{r},t)$ is the three-dimensional many-body wavefunction of the quantum system, $\mathbf{r} = (x,y,z)$, $m$ is the atomic mass, $V(\mathbf{r})$ is an external potential, $g = \frac{4\pi\hbar^2a_s}{m}$ is a coupling factor, and $a_s$ is the scattering length of the atomic species.
Here, the GPE is shown in three dimensions, but it can easily be modified for one or two dimensions [@PethickSmith2008].
The split-operator method is one straightforward technique to solve the GPE and has previously been accelerated with GPU devices [@Ruf2009; @Bauke2011].
No generalized software packages are available using this method on GPU devices; however, software packages have been designed to simulate BECs with other methods, including GPELab [@Antoine2014], the Massively Parallel Trotter-Suzuki Solver [@Wittek2013], and XMDS [@xmds].
Here, the GPE is shown in three dimensions, but it can easily be modified to one or two dimensions [@PethickSmith2008].
One of the most straightforward methods for solving the GPE is the split-operator method, which has previously been accelerated with GPU devices [@Ruf2009; @Bauke2011].
No generalized software packages are available using this method on GPU devices that allow for user-configurable simulations and a variety of different system types; however,
several software packages exist to simulate BECs with other methods and on different architectures, including GPELab [@Antoine2014], the Massively Parallel Trotter-Suzuki Solver [@Wittek2013], and XMDS [@xmds].
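The split-operator scheme named above alternates position-space and momentum-space updates via FFTs. A minimal one-dimensional NumPy sketch of a single GPE time step follows (illustrative units with hbar = m = 1 and arbitrary grid/trap parameters; this is not GPUE's CUDA implementation):

```python
import numpy as np

# 1D split-operator step for the GPE (sketch; hbar = m = 1, parameters illustrative).
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers
dt, g = 1e-3, 1.0

V = 0.5 * x**2                            # harmonic trap
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

def step(psi):
    # Half step with the potential + nonlinear term in position space
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    # Full kinetic step in momentum space
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    # Second half step in position space
    return psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))

for _ in range(100):
    psi = step(psi)

norm = np.sum(np.abs(psi)**2) * dx
print(f"norm after 100 steps: {norm:.6f}")   # each factor is unitary, so the norm is preserved
```

Because every factor in the split is a pure phase in its own basis, real-time evolution preserves the norm to machine precision, which is a convenient sanity check for any implementation.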

GPUE is a GPU-based GPE solver via the split-operator method for superfluid simulations of both linear and non-linear Schrödinger equations, emphasizing Bose--Einstein Condensates with vortex dynamics in 2 and 3 dimensions. GPUE provides a fast, robust, and accessible method to simulate superfluid physics for fundamental research in the area and has been used to simulate and manipulate large vortex lattices in two dimensions [@ORiordan2016; @ORiordan2016b], along with ongoing studies on quantum vortex dynamics in two and three dimensions.
GPUE is a GPU-based Gross--Pitaevskii Equation solver via the split-operator method for superfluid simulations of both linear and non-linear Schrödinger equations, emphasizing superfluid vortex dynamics in two and three dimensions. GPUE is a fast, robust, and accessible software suite to simulate physics for fundamental research in the area of quantum systems and has been used to manipulate large vortex lattices in two dimensions [@ORiordan2016; @ORiordan2016b] along with ongoing studies of vortex dynamics.

For these purposes, GPUE provides a number of unique features:
1. Dynamic field generation for trapping potentials and other variables on the GPU device.
2. Vortex tracking in 2D and vortex highlighting in 3D.
3. Configurable gauge fields for the generation of artificial magnetic fields and corresponding vortex distributions [@DalibardRMP2011; @Ghosh2014].
4. Vortex manipulation via direct control of the wavefunction phase [@Dobrek1999].

All of these features enable GPUE to simulate a wide variety of linear and non-linear (BEC) dynamics of quantum systems. The above features enable configurable physical system parameters and GPUE’s high-performance numerical solver improves over other suites [@WittekGPE2016; @ORiordan2017]. All GPUE features and functionalities have been described in further detail in the documentation [@documentation].
All of these features enable GPUE to simulate a wide variety of linear and non-linear dynamics of quantum systems. GPUE additionally features a numerical solver with improvements over other suites [@WittekGPE2016; @ORiordan2017]. All of GPUE's features and functionality have been described in further detail in the documentation [@documentation].
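Feature 4 above (phase imprinting, following @Dobrek1999) amounts to multiplying the wavefunction by a prescribed phase field; a minimal NumPy sketch of imprinting a singly charged vortex is shown below (illustrative toy cloud, not GPUE's GPU implementation):

```python
import numpy as np

# Imprint a singly charged vortex at the origin by multiplying the
# wavefunction by exp(i * theta), theta being the azimuthal angle.
# Toy 2D cloud for illustration only.
N = 128
x = np.linspace(-5, 5, N)
X, Y = np.meshgrid(x, x, indexing="ij")

psi = np.exp(-(X**2 + Y**2) / 4).astype(complex)  # smooth test cloud
theta = np.arctan2(Y, X)                          # azimuthal angle
psi_v = psi * np.exp(1j * theta)                  # phase-imprinted vortex

# The density is untouched; only the phase winds by 2*pi around the core.
print(np.allclose(np.abs(psi_v), np.abs(psi)))    # True
```

In a real simulation the density then relaxes to form the vortex core during subsequent (imaginary- or real-time) evolution; the imprint only sets the phase winding.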

# Acknowledgements
This work has been supported by the Okinawa Institute of Science and Technology Graduate University and by JSPS KAKENHI Grant Number JP17J01488.
We would also like to thank Thomas Busch, Rashi Sachdeva, Tiantian Zhang, Albert Benseney, and Angela White for discussions on useful physical systems to simulate with the GPUE codebase, along with Peter Wittek and Tadhg Morgan for contributions to the code itself.
These acknowledgements can be found in `acknowledgements.md`.
These acknowledgements can be found in `GPUE/acknowledgements.md`.

# References
45 changes: 45 additions & 0 deletions py/plot.py
@@ -190,6 +190,48 @@ def plot_wfc_phase(xDim, yDim, data_dir, pltval, start, end, incr):
#fig = plt.figure()
#fig.savefig('wfc.png')

# Function to plot a thresholded cut of the wavefunction density
def plot_wfc_cut(xDim, yDim, data_dir, pltval, start, end, incr):
    if data_dir[0] != "/":
        data_dir = "../" + data_dir
    for i in range(start, end, incr):
        print(i)
        data_real = data_dir + "/wfc_0_const_%s" % i
        data_im = data_dir + "/wfc_0_consti_%s" % i
        if pltval == "wfc_cut_ev":
            data_real = data_dir + "/wfc_ev_%s" % i
            data_im = data_dir + "/wfc_evi_%s" % i

        lines_real = np.loadtxt(data_real)
        lines_im = np.loadtxt(data_im)
        wfc_real = np.reshape(lines_real, (xDim, yDim))
        wfc_im = np.reshape(lines_im, (xDim, yDim))

        # Density |wfc|^2
        wfc = abs(wfc_real + 1j * wfc_im)
        wfc = wfc * wfc

        # Threshold at 40% of the peak density (vectorized; also avoids
        # shadowing the builtin "max")
        max_val = wfc.max()
        print("Max value is: ", max_val)
        wfc = np.where(wfc > max_val * 0.4, 1.0, 0.0)

        plt.imshow(wfc,
                   extent=(-6.9804018707623236e-04, 6.9804018707623236e-04,
                           -6.9804018707623236e-04, 6.9804018707623236e-04),
                   interpolation='nearest', cmap=cm.jet)
        plt.colorbar()
        plt.show()
        #fig = plt.figure()
        #fig.savefig('wfc.png')



# Function to parse arguments for plotting
# Note: We assume that the parameters come in sets
@@ -233,6 +275,9 @@ def plot(par):
    elif (par.item == "GK" or par.item == "GV"):
        plot_complex(par.xDim, par.yDim, par.data_dir, par.item,
                     par.start, par.end, par.incr)
    elif (par.item == "wfc_cut" or par.item == "wfc_cut_ev"):
        plot_wfc_cut(par.xDim, par.yDim, par.data_dir, par.item,
                     par.start, par.end, par.incr)
    elif (par.end != 1):
        plot_var_range(par.xDim, par.yDim, par.data_dir, par.item,
                       par.start, par.end, par.incr)
14 changes: 7 additions & 7 deletions py/vort.py
@@ -245,13 +245,13 @@ def run(start,fin,incr): #Performs the tracking
v0c = vorts_c.element(index_r[0]).sign #Get the sign of the smallest distance vortex
v0p = vorts_p.element(i3).sign # Get the sign of the current vortex at index i3
v1c = vorts_c.element(index_r[0]).uid #Get uid of current vortex
#Check if distance is less than 7 grid points, and that the sign is matched between previous and current vortices, and that the current vortex has a negative uid, indicating that a pair has not yet been found. If true, then update the current vortex index to that of the previous vortex index, and turn vortex on --- may be dangerous
if (index_r[1] < 30) and (vorts_c.element(index_r[0]).sign == vorts_p.element(i3).sign) and (vorts_c.element(index_r[0]).uid < 0) and (vorts_p.element(i3).isOn == True):
vorts_c.element(index_r[0]).update_uid(vorts_p.element(i3).uid)
vorts_c.element(index_r[0]).update_on(True)
else:
print "Failed to find any matching vortex. Entering interactive mode. Exit with Ctrl+D"
from IPython import embed; embed()
#Check if distance is less than 7 grid points, and that the sign is matched between previous and current vortices, and that the current vortex has a negative uid, indicating that a pair has not yet been found. If true, then update the current vortex index to that of the previous vortex index, and turn vortex on --- may be dangerous
if (index_r[1] < 30) and (vorts_c.element(index_r[0]).sign == vorts_p.element(i3).sign) and (vorts_c.element(index_r[0]).uid < 0) and (vorts_p.element(i3).isOn == True):
vorts_c.element(index_r[0]).update_uid(vorts_p.element(i3).uid)
vorts_c.element(index_r[0]).update_on(True)
else:
print "Failed to find any matching vortex. Entering interactive mode. Exit with Ctrl+D"
from IPython import embed; embed()
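The matching logic in this hunk pairs each vortex from the previous frame with its nearest same-sign, not-yet-claimed candidate in the current frame. A minimal sketch of that idea with plain tuples (the positions, threshold value, and data layout here are hypothetical; the real code uses `vorts_c`/`vorts_p` element objects with `sign`, `uid`, and `isOn` fields):

```python
import numpy as np

# Nearest-neighbor, sign-matched vortex pairing between two frames
# (sketch with made-up coordinates; threshold mirrors the distance
# check in the tracking loop above).
prev = [((10.0, 12.0), +1), ((40.0, 41.0), -1)]   # (position, circulation sign)
curr = [((40.5, 40.2), -1), ((10.3, 11.8), +1)]

MAX_DIST = 30.0  # maximum allowed displacement between frames

pairs = []
matched = set()
for p_pos, p_sign in prev:
    # Distance from this previous-frame vortex to every current candidate
    dists = [np.hypot(c[0][0] - p_pos[0], c[0][1] - p_pos[1]) for c in curr]
    j = int(np.argmin(dists))
    # Accept only a close, same-sign, unclaimed candidate
    if dists[j] < MAX_DIST and curr[j][1] == p_sign and j not in matched:
        matched.add(j)
        pairs.append((p_pos, curr[j][0]))

print(pairs)  # each previous vortex paired with its nearest same-sign match
```

When no candidate satisfies all three conditions, the real tracker drops into an interactive IPython session, as shown in the diff, so the user can inspect the frame by hand.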


#You will never remember why this works
5 changes: 3 additions & 2 deletions src/ds.cu
@@ -91,8 +91,9 @@ void generate_plan_other3d(cufftHandle *plan_fft1d, Grid &par, int axis){

if(result != CUFFT_SUCCESS){
printf("Result:=%d\n",result);
printf("Error: Could not execute cufftPlan3d(%s ,%d ,%d ).\n",
"plan_1d", (unsigned int)xDim, (unsigned int)yDim);
printf("Error: Could not execute cufftPlan3d(%s, %d, %d, %d).\n",
"plan_3d", (unsigned int)xDim, (unsigned int)yDim,
(unsigned int)zDim);
exit(1);
}

12 changes: 7 additions & 5 deletions src/init.cu
@@ -309,11 +309,15 @@ int init(Grid &par){
(unsigned int)xDim, (unsigned int)yDim);
exit(1);
}
generate_plan_other2d(&plan_other2d, par);

generate_plan_other3d(&plan_1d, par, 0);
generate_plan_other3d(&plan_dim2, par, 1);
generate_plan_other3d(&plan_dim3, par, 2);
if (dimnum == 2){
generate_plan_other2d(&plan_other2d, par);
}
if (dimnum == 3){
generate_plan_other3d(&plan_dim3, par, 2);
generate_plan_other3d(&plan_dim2, par, 1);
}
result = cufftPlan3d(&plan_3d, xDim, yDim, zDim, CUFFT_Z2Z);
if(result != CUFFT_SUCCESS){
printf("Result:=%d\n",result);
@@ -526,7 +530,6 @@ void set_variables(Grid &par, bool ev_type){

// Special variables / instructions for 2/3d case
if (dimnum > 1 && !par.bval("Ay_time")){
pAy_gpu = par.cufftDoubleComplexval("pAy_gpu");
EpAy = par.cufftDoubleComplexval("EpAy");
err=cudaMemcpy(pAy_gpu, EpAy, sizeof(cufftDoubleComplex)*gsize,
cudaMemcpyHostToDevice);
@@ -538,7 +541,6 @@
}

if (dimnum > 2 && !par.bval("Az_time")){
pAz_gpu = par.cufftDoubleComplexval("pAz_gpu");
EpAz = par.cufftDoubleComplexval("EpAz");
err=cudaMemcpy(pAz_gpu, EpAz, sizeof(cufftDoubleComplex)*gsize,
cudaMemcpyHostToDevice);
