Hi,
I need to run simulations on very large 3D domains at a frequency of 1 MHz, which translates into high RAM usage (>128 GB) when kspaceFirstOrder3D tries to create the sensor.record matrix. To overcome this problem, my strategy is to define the sensor.record matrix as small as possible, exactly where I need to observe the simulation results. To compute the solution over the whole domain, I was thinking about creating multiple cuboids iteratively and saving the solution within each one once the computation is completed. My question is: is there a way to run kspaceFirstOrder3D once and then store the solution in each matrix one at a time?
Or should I run kspaceFirstOrder3D iteratively (in a for loop) with a different sensor.record matrix each time, saving the results and deleting the variables from the workspace? The second option will of course take longer, because I would need to run the simulation N times to cover my whole 3D domain with different sensor.record matrices.
Thank you in advance, you are doing such a great work with this toolbox!
Federico
k-Wave
A MATLAB toolbox for the time-domain
simulation of acoustic wave fields
Question regarding multiple sensor.record within one simulation
(3 posts) (2 voices)
Posted 2 years ago #
Hi Federico,
You can only define one sensor.mask region using the 'cuboid corners' approach, but you can define sensor.mask as a binary sensor mask with any set of points. If you want to record very many time series, then the output will get quite large. It may be worth thinking about whether you need the full time series, or whether you could define another quantity to be recorded, e.g. by using sensor.record = {'p_max'}.
Best wishes,
Ben
Posted 2 years ago #
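Ben's point about recording 'p_max' instead of the full time series can be made concrete with a back-of-the-envelope estimate. The sketch below is Python rather than MATLAB, and the grid size (512^3) and time-step count (3000) are hypothetical values chosen only for illustration; it assumes single-precision (4-byte) storage, matching 'DataCast', 'single'.

```python
# Rough memory estimate for a k-Wave sensor recording.
# Assumes single precision (4 bytes per value); grid size and
# time-step count below are hypothetical illustration values.
def sensor_bytes(n_sensor_points, n_time_steps=1):
    return n_sensor_points * n_time_steps * 4

n_points = 512 ** 3   # every grid point used as a sensor point
n_t = 3000            # number of simulation time steps

full_series_gb = sensor_bytes(n_points, n_t) / 1024 ** 3
p_max_gb = sensor_bytes(n_points) / 1024 ** 3
print(f"full time series: {full_series_gb:.0f} GB")  # 1500 GB
print(f"p_max only:       {p_max_gb:.1f} GB")        # 0.5 GB
```

Recording only 'p_max' stores one value per sensor point instead of one per point per time step, so the output shrinks by a factor equal to the number of time steps.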
Hi Ben,
Thank you very much for your answer. Luckily, in the meantime, I was able to run large simulations without running out of RAM by dividing the sensor.record matrix into different parts and then using MATLAB's 'cat' function to concatenate all the partial result matrices.
I will attach my code here:

parts = n; % number of slabs the domain is split into along z
for i = 1:parts % loop over the slabs along z
    x_1 = 1;
    x_2 = Nx;
    y_1 = 1;
    y_2 = Ny;
    z_1 = floor((Nz*i)/parts) - floor(Nz/parts) + 1;
    z_2 = ceil(Nz/parts) + ceil((Nz*(i-1))/parts);
    if z_2 > ceil(Nz*i/parts)
        z_2 = ceil(Nz*i/parts); % clamp the slab to the domain
    end
    sensor_cuboid = [x_1, y_1, z_1, x_2, y_2, z_2].';
    sensor.mask = sensor_cuboid;
    sensor.record = {'p_max'};

    % Start the simulation
    reset(gpuDevice(1)); % reset GPU memory
    input_args = {'DataCast', 'single', 'PMLInside', false};
    sensor_data = kspaceFirstOrder3D(kgrid, medium, source, sensor, input_args{:}, 'PlotSim', false);

    % Save the partial result together with the grid metadata
    data = struct('P_max', sensor_data.p_max, 'Nx', Nx, 'Ny', Ny, 'Nz', Nz, ...
        'dx', dx, 'dy', dy, 'dz', dz, 'signal', signal);
    save(['your_folder\data_', num2str(i), '.mat'], 'data', '-v7.3', '-nocompression');
    clear data sensor_data
    % 'clear' cannot remove struct fields; use rmfield instead
    sensor = rmfield(sensor, {'record', 'mask'});
end

Surprisingly, dividing the domain led to less computation time.
The script cannot divide the domain into exactly equal parts, so there may be an overlap in the results. That is easy to correct: just delete the duplicated slices with a for loop.
Best regards,
Federico
Posted 2 years ago #
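To see where the overlap comes from, here is a small Python sketch (not MATLAB) of the same slab index arithmetic, using Nz = 10 and parts = 3 as toy values; exact_slabs is a hypothetical alternative partition that covers the z range with no gaps or overlaps.

```python
import math

def slab_bounds(Nz, parts):
    """1-based slab indices computed as in the MATLAB loop above."""
    bounds = []
    for i in range(1, parts + 1):
        z1 = math.floor(Nz * i / parts) - math.floor(Nz / parts) + 1
        z2 = min(math.ceil(Nz / parts) + math.ceil(Nz * (i - 1) / parts),
                 math.ceil(Nz * i / parts))  # clamp, as in the if-branch
        bounds.append((z1, z2))
    return bounds

def exact_slabs(Nz, parts):
    """Overlap-free alternative: slab i covers
    floor(Nz*(i-1)/parts)+1 .. floor(Nz*i/parts)."""
    return [(Nz * (i - 1) // parts + 1, Nz * i // parts)
            for i in range(1, parts + 1)]

print(slab_bounds(10, 3))  # [(1, 4), (4, 7), (8, 10)]  -> z = 4 recorded twice
print(exact_slabs(10, 3))  # [(1, 3), (4, 6), (7, 10)]  -> no gaps, no overlaps
```

With the exact partition, no post-hoc deletion of duplicated slices is needed before concatenating the saved parts.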