k-Wave Toolbox

Optimising k-Wave Performance Example

Overview

This example demonstrates how to increase the computational performance of k-Wave using optional input parameters and data casting. A separate standardised benchmarking script, benchmark, is also included within the k-Wave toolbox to allow computation times to be compared across different computers and GPUs.


Controlling input options

To investigate where the computational effort is spent during a k-Wave simulation, it is useful to use the inbuilt MATLAB profiler, which examines the execution times of the various k-Wave and inbuilt functions. Running the profiler on a typical forward simulation using kspaceFirstOrder2D with a Cartesian sensor mask and no optional inputs gives the following command line output (set example_number = 1 within the example m-file):

Running k-space simulation...
  dt: 3.9063ns, t_end: 9.4258us, time steps: 2414
  input grid size: 512 by 512 pixels (10 by 10mm)
  maximum supported frequency: 38.4MHz
  smoothing p0 distribution...
  calculating Delaunay triangulation (TriScatteredInterp)...
  precomputation completed in 5.7285s
  starting time loop...
  estimated simulation time 3min 0.11431s...
  simulation completed in 3min 24.3161s
  total computation time 3min 30.093s
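
As an illustrative sketch (the grid, medium, source, and sensor defined here are stand-ins, not the exact setup in the example m-file), a profiling run of this kind might look like:

```matlab
% illustrative 512 x 512 simulation (stand-in values, not the example m-file)
kgrid = makeGrid(512, 10e-3/512, 512, 10e-3/512);
medium.sound_speed = 1500;                      % [m/s]
source.p0 = makeDisc(512, 512, 256, 256, 8);    % initial pressure distribution
sensor.mask = makeCartCircle(4e-3, 50);         % Cartesian sensor mask

% run the simulation with the MATLAB profiler enabled
profile on;
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor);
profile viewer;                                 % examine where the time is spent
```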

The corresponding profiler output is given below.

Aside from computations within the parent functions, it is clear that the majority of the time is spent running ifft2 and fft2. Several seconds are also spent computing the Delaunay triangulation used for interpolating the pressure onto the Cartesian sensor mask. The triangulation is calculated once during the precomputations, and this time is included within the precomputation time printed to the command line (in this case 5.7285s). The Delaunay triangulation can be avoided by using a binary sensor mask, or by setting the optional input 'CartInterp' to 'nearest'. Several seconds are also spent running the various functions associated with the animated visualisation (imagesc, newplot, cla, etc). This visualisation can be switched off by setting the optional input 'PlotSim' to false. Re-running the profiler with these two changes gives the following command line output (set example_number = 2 within the example m-file):

Running k-space simulation...
  dt: 3.9063ns, t_end: 9.4258us, time steps: 2414
  input grid size: 512 by 512 pixels (10 by 10mm)
  maximum supported frequency: 38.4MHz
  smoothing p0 distribution...
  precomputation completed in 1.2082s
  starting time loop...
  estimated simulation time 2min 58.2092s...
  simulation completed in 3min 13.6012s
  reordering Cartesian measurement data...
  total computation time 3min 14.828s
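
The two changes are made via optional input parameters; as a minimal sketch, assuming kgrid, medium, source, and sensor structures have already been defined for the simulation, the call might look like:

```matlab
% avoid the Delaunay triangulation and switch off the animated visualisation
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, ...
    'CartInterp', 'nearest', 'PlotSim', false);
```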

The precomputation time has been reduced, and the loop computation time has also been reduced by several seconds. The corresponding profiler output is given below.

Data casting

Even after the modifications above, the majority of the computation time is still spent computing the FFT and the point-wise multiplication of large matrices (within the function kspaceFirstOrder2D). It is possible to decrease this burden by capitalising on MATLAB's use of overloaded functions for different data types. For example, computing the FFT of a matrix of single type takes less time than for double (the standard data format used within MATLAB). For most computations, the loss in precision as a result of doing the computations in single type is negligible. Within the kspaceFirstOrder1D, kspaceFirstOrder2D, and kspaceFirstOrder3D codes, the data type used for the variables within the time loop can be controlled via the optional input parameter 'DataCast'. Re-running the profiler with 'DataCast' set to 'single' gives the following command line output (set example_number = 3 within the example m-file):

Running k-space simulation...
  dt: 3.9063ns, t_end: 9.4258us, time steps: 2414
  input grid size: 512 by 512 pixels (10 by 10mm)
  maximum supported frequency: 38.4MHz
  smoothing p0 distribution...
  casting variables to single type...
  precomputation completed in 1.2327s
  starting time loop...
  estimated simulation time 2min 23.6508s...
  simulation completed in 2min 35.11s
  reordering Cartesian measurement data...
  total computation time 2min 36.359s
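
As a minimal sketch, assuming kgrid, medium, source, and sensor structures have already been defined for the simulation, the call producing this kind of output might look like:

```matlab
% cast the variables used within the time loop to single precision
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, ...
    'CartInterp', 'nearest', 'PlotSim', false, 'DataCast', 'single');
```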

The overall computation time has been significantly reduced, in this example by around 25%. The corresponding profiler output is given below.


Running k-Wave on the GPU

The computation time can be further improved by using other data types, in particular those which force program execution on the GPU (Graphics Processing Unit). There are now several MATLAB toolboxes available which contain overloaded MATLAB functions (such as the FFT) that work with any NVIDIA CUDA-enabled GPU. These toolboxes utilise an interface developed by NVIDIA called the CUDA SDK, which allows programs written in C to run on the GPU, and a MEX interface to allow these C programs to be called from MATLAB. Within MATLAB, execution on the GPU is then as simple as casting the variables to the required data type. For example, a free toolbox called GPUmat has been released by GP-You (http://www.gp-you.org). To use this toolbox within k-Wave, the optional input parameter 'DataCast' is set to 'GPUsingle' or 'GPUdouble'. Note, for 3D computations, two additional wrapper functions are needed (see Getting started with GPUmat on the k-Wave Forum for more details on using k-Wave with GPUmat).
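
With GPUmat installed, GPU execution is again selected via 'DataCast'; a minimal sketch, assuming kgrid, medium, source, and sensor structures have already been defined for the simulation, is:

```matlab
% run the time loop on the GPU via GPUmat (requires a CUDA-enabled NVIDIA GPU)
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, ...
    'CartInterp', 'nearest', 'PlotSim', false, 'DataCast', 'GPUsingle');
```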

To illustrate, the command line output obtained by setting 'DataCast' to 'GPUsingle' is given below (set example_number = 4 within the example m-file). The computational speed has increased by more than 5 times compared to the standard execution, and by 4 times compared to setting 'DataCast' to 'single'. Note, the interpolation function used within kspaceFirstOrder2D and kspaceFirstOrder3D does not currently support GPU execution, so the optional input parameter 'CartInterp' should be set to 'nearest' if using a Cartesian sensor mask.

Running k-space simulation...
  dt: 3.9063ns, t_end: 9.4258us, time steps: 2414
  input grid size: 512 by 512 pixels (10 by 10mm)
  maximum supported frequency: 38.4MHz
  smoothing p0 distribution...
  casting variables to kWaveGPUsingle type...
  precomputation completed in 1.2527s
  starting time loop...
  estimated simulation time 48.3867s...
  simulation completed in 35.8639s
  reordering Cartesian measurement data...
  total computation time 37.14s

The corresponding profiler output is given below. The majority of the time is now spent computing matrix operations and the FFT on the GPU. Further details on the speed-up obtained when using different NVIDIA GPUs are given in benchmark.


Multicore support

The command line and profiler outputs shown here were generated using MATLAB R2010b. Some earlier MATLAB versions do not include multicore support for parallelisable functions such as the FFT. If using an earlier version of MATLAB, it may be possible to get a noticeable increase in computational speed simply by upgrading to a newer MATLAB version.



© 2009, 2010, 2011 Bradley Treeby and Ben Cox.