mismatch between MATLAB and C++ code

I run k-Wave version 1.0 on MATLAB 2012a under 64-bit Windows 7. Following the toolbox manual, I replaced kspaceFirstOrder3D with kspaceFirstOrder3DC in the native examples of the toolbox, such as the B-mode example and the phased-array example, to speed them up, but I found that the main output (sensor_data) does not match between the two supposedly equivalent functions.
The sensor_data from the MATLAB function has the expected dimensions and seemingly correct values, whereas the corresponding output of the C++ code is oversized and its values look wrong. I checked whether the former is a subset of the latter, but it is NOT.
Is anything more than replacing the two function names necessary to use the C++ code, or is there some kind of bug?
Posted 11 years ago # -
Hi hhamtaii,
When you use an object of the kWaveTransducer class as the sensor, the MATLAB implementation of kspaceFirstOrder3D will automatically average the signals across the grid points belonging to each transducer element using the appropriate delays. The output sensor_data will then contain as many time series as there are transducer elements.
Unfortunately this behaviour is not implemented in the C++ code, so instead the output sensor_data will contain as many time series as the number of grid points that form the transducer. These can still be combined to form the same output as the MATLAB code, but it is a little fiddly. We are planning on adding an extra function to the kWaveTransducer class to automatically do this with the next release.
I hope that explains things - apologies for not documenting this behaviour more comprehensively!
Brad.
Posted 11 years ago # -
Brad:
I'm not quite sure if what I encountered is the same as what you mentioned above. I ran the us_bmode_linear example. In the MATLAB version (kspaceFirstOrder3D), the size of sensor_data is just number_elements x samples = 32 x 1585. However, for the C++ version (kspaceFirstOrder3DC), the size of sensor_data is (number_elements*element_width*element_length) x samples = (32*2*24) x 1585 = 1536 x 1585. The first rows, sensor_data_3DC(1:32,:), are not the same as sensor_data_3D(:,:), so the resulting image is not the same either.
If that is the case, which one is more realistic, and why? Or did I not do something right?
Thanks.
Zhili
Posted 11 years ago # -
Hi Zhili,
This is the same problem as mentioned above, i.e., the output from the MATLAB code is a single time series per physical element of the transducer, while the output from the C++ code is a single time series per grid point in the sensor mask. We plan to add a new method to the kWaveTransducer class that converts the sensor data returned by the C++ code to the same format as the MATLAB code in the next release.
Brad.
Posted 11 years ago # -
Hi Brad,
Thank you very much for the answer; I'm looking forward to the new release. For the current simulation, which solution do you suggest, given that the C++ version is much faster:
1. Sum these element_width*element_length signals together to become one output per element, as in the MATLAB version.
2. Pick one of them as the output, say rows 1, 3, 5, ..., 61, 63 (because element_width = 2) in the above case.
3. Just set element_width and element_length to 1, though the dy resolution will be lower.
4. Or another way you suggest.
Thanks a lot.
Posted 11 years ago # -
Hi Zhili,
If there is no elevation focus, then I would suggest averaging the signals over blocks of element_width*element_length. This will match what happens in the MATLAB version. If you are not interested in the spatial averaging effects that arise due to a finite element size, then you could also use your option 2 or 3.
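For reference, a minimal MATLAB sketch of that averaging, assuming no elevation focus, zero element spacing, and that the C++ code returns one time series per sensor-mask grid point in column-major order (lateral points varying fastest within each elevation row). The hard-coded sizes and the variable sensor_data are placeholders taken from the us_bmode_linear numbers above, so please check the row ordering against your own transducer definition:
% combine the per-grid-point C++ output into one averaged trace per element
num_elements   = 32;            % transducer.number_elements
element_width  = 2;             % grid points per element, lateral (y) direction
element_length = 24;            % grid points per element, elevation (z) direction
Nt = size(sensor_data, 2);      % number of time samples
% with the assumed row ordering, the grid points group as [width, element, length]
tmp = reshape(sensor_data, element_width, num_elements, element_length, Nt);
% average over the grid points belonging to each element
sensor_data_elements = squeeze(mean(mean(tmp, 1), 3));   % num_elements x Nt
Your option 2 would then correspond to taking squeeze(tmp(1, :, 1, :)) instead of the mean.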
Brad.
Posted 11 years ago # -
Hi Brad:
Thank you very much for the suggestion; I'll do as you suggested. On the other hand, if I'd like to use focus_distance and steering at the same time, say a focus at 20 mm and steering at 15 degrees, will the final focus point be at (x,y) = (20mm, 20*sin(15*pi/180)), or at (x,y) = (20*cos(15*pi/180), 20*sin(15*pi/180))?
Actually, what I'd like to do is to directly give the simulator the beamforming_delays, which I have calculated on my own for an arbitrary focus point (x,y). Is there any convenient way to assign these beamforming_delays to the engine?
Thanks a lot.
Posted 11 years ago # -
Do you want to set the beamforming delays on receive, i.e., a modification of the scan_line method, or on transmit?
Posted 11 years ago # -
On the transmit side, thanks.
Posted 11 years ago # -
Hi zhili,
Unfortunately manually defining the beamforming delays on transmit is not currently supported. However, I can certainly take a look at adding this functionality for a future release.
In the meantime, if you want to try to modify the code yourself, take a look at the delay_mask method of the kWaveTransducer class, and the line delay_mask = source.delay_mask; in /private/kspaceFirstOrder_inputChecking.m.
Hope that helps,
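As an illustration only (this is standard geometric beamforming, not a documented k-Wave interface), the per-element delays that such a modified delay_mask would need to produce for an arbitrary focal point could be computed along these lines; element_y, focus_x, focus_y, c0 and dt are assumed variables describing your own element positions, focal point, sound speed and simulation time step:
% per-element transmit delays (in integer time steps) to focus at (focus_x, focus_y)
% element_y : lateral centre position of each element [m]
% focus_x   : focal depth [m];  focus_y : lateral focal position [m]
distances = sqrt(focus_x.^2 + (focus_y - element_y).^2);   % element-to-focus path lengths
delays_s  = (max(distances) - distances) / c0;             % farthest element fires first (zero delay)
delays    = round(delays_s / dt);                          % convert from seconds to time steps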
Brad.
Posted 11 years ago # -
Hi Brad:
Thank you very much for the info and I will try it later.
Posted 11 years ago #