Hi
I am trying to simulate the synthetic transmit aperture (STA) method using spatial encoding and coded excitations. This is a big challenge for me because I want to use k-Wave for my simulations, but in the papers I have read, most authors used commercial ultrasound equipment to test the excitation methods. So I want to know: is it possible to simulate STA with coded excitations and spatial encoding using k-Wave? Is it possible in k-Wave to excite each transducer element with a different excitation signal simultaneously?
Thank you in advance for your help.
Sepand
k-Wave
A MATLAB toolbox for the time-domain
simulation of acoustic wave fields
synthetic transmit aperture
(11 posts) (2 voices)

Posted 9 years ago #
Hi Sepand,
If I understand your question properly, it is possible to do it (see user manual for more details):
1) define your transducer as a binary mask
2) Let N denote the number of voxels representing your transducer and Nt the number of time steps. You can define your source term as an N*Nt matrix containing the excitation signal at each voxel of your transducer. Maybe the development team has a more straightforward solution, but this should work.
Best regards,
Anthony

PS: by the way, I suggest you take the time to read the user manual in full; it has a lot of helpful information on source terms (e.g. you probably want to set the source as a Dirichlet boundary condition...).
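A minimal sketch of the approach described above (grid sizes, element positions, and the two coded signals are illustrative, not taken from the k-Wave examples). From the user manual, the rows of source.uy are matched to the active points of source.u_mask in linear (column-wise) index order, not to arbitrary element numbers:

```matlab
% --- sketch: one excitation signal per active source grid point ---
Nx = 64; Ny = 64; Nz = 64;          % illustrative grid size
Nt = 500;                            % number of time steps
t  = (0:Nt-1) * 20e-9;               % illustrative time axis [s]
f0 = 3e6;                            % centre frequency [Hz]

% two coded excitation signals (e.g. phase-inverted, as in Hadamard coding)
sig_plus  = sin(2*pi*f0*t);
sig_minus = sin(2*pi*f0*t + pi);

% velocity source mask: two single-point "elements" on the x = 1 plane
source.u_mask = zeros(Nx, Ny, Nz);
source.u_mask(1, 30, Nz/2) = 1;
source.u_mask(1, 34, Nz/2) = 1;

% one row of source.uy per active mask point, in mask (find) order
source.uy      = zeros(sum(source.u_mask(:)), Nt);
source.uy(1,:) = sig_plus;           % first active point (lowest linear index)
source.uy(2,:) = sig_minus;          % second active point
```

The key point is that the number of rows in source.uy must equal the number of nonzero points in source.u_mask, and their order follows the mask's linear indexing.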
Posted 9 years ago #
Dear Anthony
thanks for your help.
Sepand

Posted 9 years ago #
Hi
I am trying to define a binary mask for the transducer. After making the transducer, the makeTransducer command gives me the binary mask of all and active elements automatically. But it is not possible for me to define a binary mask for the input signal using transducer.input_signal_mask. Is there any way to define a binary mask for the input signal of the transducer?
Because of this limitation, I was forced to use a "source" definition instead of a "transducer". I want to have a 32-element transducer at the middle of the z-y plane, so I define a binary mask for the source term. The element width is in the y direction and the height is in the z direction.
I have used the following commands to apply 2 types of input signal to excite 2 elements simultaneously, but I get an error from kspaceFirstOrder3D.
My commands are:

source1.mask = zeros(Nx, Ny, Nz);
source1.mask(1, (Ny/2-16):(Ny/2+15),(Nz/2-6):(Nz/2+5)) = 1;
source1.u_mask=zeros(Nx,Ny,Nz);
source1.u_mask(1,(Ny/2-16):(Ny/2+15),(Nz/2-6):(Nz/2+5))=1;
for i=1:2:31
source1.uy(i,:)=input_signal1;
end
for j=2:2:32
source1.uy(j,:)=input_signal2;
end
source1.focus_distance = 20e-3; % focus distance [m]
source1.elevation_focus_distance = 19e-3;
source1.steering_angle = 0; % steering angle [degrees]
% apodization
source1.transmit_apodization = 'Rectangular';
source1.receive_apodization = 'Rectangular';
display_mask = source1.mask;
voxelPlot(source1.mask);
%---multielement synthetic aperture----
sensor_data=cell(8,1);
j=1;
number_subaperture=2;
number_elements=32;
for i=0:number_subaperture:( number_elements-number_subaperture+1)
source1.mask(1,(Ny/2-number_elements/2):(Ny/2+number_elements/2-1),(Nz/2-6):(Nz/2+5))=0;
source1.mask(:,(Ny/2-number_elements/2+i):(Ny/2-number_elements/2+i+1),:)=1;
% source1.uy((Ny/2-number_elements/2+i),:)=input_signal1;
% source1.uy((Ny/2-number_elements/2+i+1),:)=input_signal2;
sensor_data{j,1} = kspaceFirstOrder3D (kgrid, medium, source1, transducer);
j=j+1;
end

So, my questions are:
1) It is not possible for me to define a binary mask for the input signal using transducer.input_signal_mask. Is there any way to define a binary mask for the input signal of a "transducer" instead of a source?
2) The big difficulty for me is with using "source.uy"; is it possible to give me an example? Is the definition in the commands above correct?
3) Another problem with using "source" concerns "apodization" and "focusing". I think these inputs are only for the "transducer" class, not the "source" class. How can I define these inputs for a source term?
4) Are the binary masks for the source term (source1.mask and source1.u_mask) defined correctly in my commands?
Regards
Sepand

Posted 9 years ago #
Hi Sepand,
Actually, I always use "source" in my codes, so I am not familiar with the possibilities of the "transducer" class. I can still try to answer some of your questions...
3) From what I understand, with the transducer class you can set the apodization window, focusing depth, etc., and it is automatically applied to every source point. If you define your source without the transducer class, you will have to change the amplitude and phase of your signal manually for each source point.
From your question, I guess that your excitation signal "input_signal1" is the same for each source point (except for those set to "input_signal2", of course). Instead, you have to set a different input signal at each of your source points, calculating the phase yourself to obtain the focusing depth and steering angle you want, and the amplitude that corresponds to the apodization you want. It is fully manual.
It could result in something like this:

for i = 1:N_source_points
    source.uy(i,:) = source_mag(i) * sin(2*pi*source_freq*kgrid.t_array + phase(i));
end
with source_mag and phase spatially varying.

The fields focus_distance, elevation_focus_distance, steering_angle, etc. should therefore not be set at all: they won't be used, as you are not using the transducer class.

4) Your definition of the binary masks seems correct to me...
I am not sure to be clear but... hope it helps !
Anthony

PS: when you read this post, mentally add something like "from what I know" at the beginning of each sentence. I am not 100% sure of what I say; it is just the way I use it ;-)
Posted 9 years ago #
Dear Anthony
Thank you so much for your kind help.
I have used your suggestions about working with the source and sensor classes instead of the transducer class. I have defined masks for the source and sensor that match my transducer geometry, then defined source.uy as a sine wave. My transducer has 64 elements with 0.1 mm element width (1 grid point) and 0.1 mm height. I want to have 4 active elements for each transmission. Each of the 4 elements (the 16th, 31st, 46th, and 61st) should be excited by a coded version of the sine wave. For example:
For one transmission, the excitations for the 4 mentioned elements are, respectively, sine waves with phases (pi/2, 0, pi/2, 0). These 4 elements should be excited and transmit simultaneously, without any steering or focusing. I have written the following code, which is part of a Hadamard encoding technique.
source.u_mask = zeros(Nx, Ny, Nz);
source.u_mask(1,16,(Nz/2)) = 1;
source.u_mask(1,31,(Nz/2)) = 1;
source.u_mask(1,46,(Nz/2)) = 1;
source.u_mask(1,61,(Nz/2)) = 1;

phase=[pi/2 0 pi/2 0]';
k=1;
for j=16:15:61
source.uy(j,:)=sin(2*pi*source_freq*kgrid.t_array+phase(k,:));
k=k+1;
end
input_args = {'PlotLayout', true, 'PlotPML', false, ...
'DataCast', 'single', 'CartInterp', 'nearest'};
sensor_data = kspaceFirstOrder3D(kgrid, medium, source, sensor, input_args{:});

But when calling kspaceFirstOrder3D I got these errors:
“Error using ==> kspaceFirstOrder_inputChecking at 540
The number of time series in source.ux (etc) must match the number of source elements in source.u_mask
Error in ==> kspaceFirstOrder3D at 495
kspaceFirstOrder_inputChecking;
Error in ==> SA_hadamard_source at 94
sensor_data = kspaceFirstOrder3D(kgrid, medium, source, sensor, input_args{:});”

My problems with using source and sensor are:
1- Why does MATLAB give me this error? How should I set these phase shifts in source.uy? From the error, I think that source.uy should be just a one-dimensional signal and cannot be of size (Ny, Nt).
2- Could you please give me some examples of using these techniques? It is a really critical point for me to know whether k-Wave has this capability (I mean setting different excitations for specific elements and exciting them simultaneously).
3- In the code above I used elements of size dx, dy, dz only, but if I want a specific transducer size with a 0.1 mm grid, I have to set each group of 5 grid points as one element. In the receive phase, however, every grid point that I set as a sensor (receiver) gives me sensor data, and that is not what I want, because each group of 5 grid points is one element of the transducer (in the z-y plane). I mean that the size of the sensor data corresponds to the number of active grid points instead of the number of transducer elements. How can I solve this problem?
4- Another problem concerns the other properties of source and sensor mentioned in the user's manual. Is it necessary to set inputs for all of these properties, such as source.p0, source.u_mask, source.u_mode, source.ux, source.uz, sensor.record, sensor.record_start_index, sensor.time_reversal_boundary_data, sensor.frequency_response, sensor.directivity_size, ...?
5- In the user's manual, all the ultrasound examples use the transducer class, so I am not really familiar with using source and sensor. Could you please give me some examples?
It is really critical for me to know the answers as soon as possible; please help me to run my simulations.
Thank you in advance for your help.
Best regards
Sepand

Posted 9 years ago #
Hi Sepand,
I am sorry, I don't have time to answer all your questions today but I wrote you a little example. Just change the value of the variable called FOCUSED to see how I manage steering.
Hope it helps :-)
Anthony

clear all
close all
clc

% Play with this parameter
FOCUSED = true;

% create the computational grid
Nx = 128-40;
Ny = 128-40;
dx = 0.1e-3;
dy = 0.1e-3;
kgrid = makeGrid(Nx, dx, Ny, dy);

% define the properties of the propagation medium
medium.sound_speed = 1500;
medium.density = 1000;

% create time array
kgrid.t_array = makeTime(kgrid, medium.sound_speed);

% create source
distFromBorder = 5;
source.p_mask = zeros(Nx, Ny);
xc = round(Nx/2);
yc = round(Ny/2);
xgrid = (1:Nx)-xc;
source.p_mask(abs(xgrid)<0.4*Nx, distFromBorder) = 1;
source.p_mode = 'dirichlet';

% define source
source_mag = 0.1e6;
source_f0 = 3e6;
if ~FOCUSED
    source.p = source_mag*cos(2 * pi * kgrid.t_array * source_f0);
else
    distToFocus = sqrt( (dx*(abs(xgrid(abs(xgrid)<0.4*Nx)))).^2 + (dy*(yc-distFromBorder)).^2 );
    distDiff = distToFocus - (dy*(yc-distFromBorder));   % phase difference = 0 in the middle
    lambda = medium.sound_speed/source_f0;
    Phase = 2*pi*distDiff/lambda;
    source.p = source_mag*cos(bsxfun(@plus, 2 * pi * kgrid.t_array * source_f0, Phase'));
end

% define sensor
sensor.mask = ones(Nx, Ny);
sensor.record = {'p', 'u_non_staggered'};

% run the simulation
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, ...
    'DisplayMask', 'off', 'PlotScale', [-0.5, 0.5]*source_mag, 'PMLInside', false, 'PlotPML', false);
Posted 9 years ago #
About question 3:
You can define a sensor mask, exactly as you did for your source mask.
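If each physical element spans several grid points, the per-point recordings can be grouped back into per-element signals, e.g. by averaging; a sketch (the element width of 5 grid points and the data sizes are illustrative, and it assumes each element's points are contiguous in the mask's linear-index ordering):

```matlab
% --- sketch: average per-point sensor data into per-element signals ---
num_elements  = 32;                  % illustrative
points_per_el = 5;                   % grid points per physical element
Nt            = 500;                 % number of recorded time steps

% sensor_data as returned by kspaceFirstOrder3D: one row per sensor point,
% ordered by the linear index of the sensor mask (placeholder data here)
sensor_data = rand(num_elements * points_per_el, Nt);

element_data = zeros(num_elements, Nt);
for el = 1:num_elements
    rows = (el-1)*points_per_el + (1:points_per_el);   % points of this element
    element_data(el, :) = mean(sensor_data(rows, :), 1);
end
```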
If I understand properly, you have several "physical" sensors which are bigger than one voxel. For each "physical" sensor, you can simply take the mean of the signals recorded over all of its voxels. There are probably better approaches than simply taking the mean, but it should do the job...

About question 4:
No, you don't have to set all the properties: if you have an initial pressure distribution, set source.p0; if it is a time-varying pressure source, set source.p, etc. If you don't have a velocity source, for example, you don't need to set source.u_mask at all.

Best regards,
Anthony

Posted 9 years ago #
Dear Anthony
I am really sorry for my long questions and I am really thankful for your complete answers.
But I have some problems with image reconstruction.
I am using the source and sensor classes in 2D (at the same place and with the same geometry for transmitter and receiver) for ultrasound synthetic aperture imaging. I want to use dynamic focusing in transmit and receive for image reconstruction from sensor_data. I mean that, according to the delay times for transmit and receive over all transmissions and receptions, I can find the samples of sensor_data corresponding to each pixel (conventional beamforming for calculating the image value at each pixel). But my reconstructed image is not correct for a point target.
- I am not sure whether I can calculate the image values directly from samples of sensor_data, because sensor_data is pressure, not voltage.
- Is sensor_data the echo signal from the defined medium as sensed by the defined sensor, or not?

Other questions:
- The k-Wave toolbox does not assign an impulse response to transducers. Is it necessary to convolve the output data with an impulse response?
- What is the difference between photoacoustic simulation and ultrasound simulation? Is it correct to use the source and sensor classes for ultrasound imaging? Are any additional commands needed when using source and sensor for ultrasound simulation?

Thank you in advance
Best Regards
Sepand

Posted 9 years ago #
Hi Sepand,
I hope someone more qualified than me will have a look at your questions ^^. However, according to my limited knowledge, what I can say is that k-Wave directly computes the pressure field at each point of the domain by solving the mass conservation, momentum conservation, etc. equations.
For your sensor, I would say that you need to convert the pressure into voltage yourself, provided you know the behaviour of your "real world" sensor :-)
If your question about "convolution with an impulse response" relates to the source transducer, what you need to set as the source term is definitely the acoustic source term, i.e. what will effectively be generated by your transducer, and not the electric signal you send to it. Starting from the input electric signal, you will have to take into account your transducer's efficiency, frequency response, etc., and set the resulting acoustic signal as the k-Wave source term.

Hope it helps!
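As a sketch of that last point: if you have (or assume) an electro-acoustic impulse response for your transducer, the acoustic source signal can be obtained by convolving the electrical excitation with it. The Gaussian-windowed impulse response below is purely illustrative (it is not provided by k-Wave):

```matlab
% --- sketch: electrical signal -> acoustic source signal (illustrative) ---
fs = 100e6;                           % sampling frequency [Hz]
t  = (0:499) / fs;                    % time axis
f0 = 3e6;                             % transducer centre frequency [Hz]

elec_signal = sin(2*pi*f0*t);         % electrical excitation (e.g. coded burst)

% assumed impulse response: Gaussian-windowed sinusoid (short pulse)
t_ir = (0:199) / fs;
bw   = 0.6 * f0;                      % illustrative bandwidth
ir   = sin(2*pi*f0*t_ir) .* exp(-((t_ir - mean(t_ir)) * bw * pi).^2);

% acoustic signal to use as the k-Wave source term, normalised
acoustic_signal = conv(elec_signal, ir, 'same');
acoustic_signal = acoustic_signal / max(abs(acoustic_signal));
```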
Anthony

Posted 9 years ago #
Hi Anthony
Thank you so much for your answers. They were really helpful for my simulations, and I am sure that you are a qualified person in this field. As you suggested, I will get help from other members too.
thank you so much
good luck
best regards
Sepand

Posted 9 years ago #