Hi there,
Just passing by to report what I think may be a bug: when I compile kspaceFirstOrder-CUDA (__KWAVE_GIT_HASH__="468dc31c2842a7df5f2a07c3a13c16c9b0b2b770") on Linux with gcc 13.2.1, the compilation succeeds and the binary appears to run successfully. But only appears to: in some of my simulations, after running kspaceFirstOrder-CUDA, MATLAB's h5read() reads the /p_final and /p fields of the output HDF5 file as all-NaN arrays (of the expected size), the three /u[xyz]_non_staggered fields as all-zeros arrays, and /p_min as an array with every entry set to 3.4028e+38 (which equals realmax('single')). The compiled binary produces no unusual messages at runtime.
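Incidentally, the /p_min value of realmax('single') looks consistent with the pressure field itself going NaN: a running-minimum reduction that starts from FLT_MAX never updates when every comparison involves NaN, because comparisons with NaN are always false. A minimal sketch of that effect (my own illustration, not k-Wave's actual kernel):

```cpp
#include <cmath>
#include <limits>

// Running minimum over a float buffer, initialized to FLT_MAX
// (the value MATLAB reports as realmax('single'), 3.4028e+38).
// Sketch only -- not the actual k-Wave reduction code.
float running_min(const float* data, int n) {
    float m = std::numeric_limits<float>::max();
    for (int i = 0; i < n; ++i) {
        // Any comparison involving NaN is false, so NaN samples
        // never replace the initial FLT_MAX.
        if (data[i] < m) m = data[i];
    }
    return m;
}
```

So an all-NaN pressure field would leave /p_min stuck at exactly the value I'm seeing.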
However, when I revert to gcc version 12.3.1 and compile kspaceFirstOrder-CUDA anew, keeping all else unchanged, the resulting binary works properly; the output HDF5 file contains meaningful information.
I tried fiddling with the NVIDIA drivers and the CUDA Toolkit, but I don't think they are to blame. The main reason is that, as I found out later, the kspaceFirstOrder-OMP (__KWAVE_GIT_HASH__="0ba023063e3f29685e1e346f56883378d961f9f1") binary suffers from the same symptoms (except that /p_min comes out as an all-NaN array instead) when compiled with the more recent gcc 13.2.1, but not with the older gcc 12.3.1.
I am not sure whether this is a problem with my particular system or whether the bug can be reproduced elsewhere; maybe this info is useful to someone. In any case, I think the external binaries could do with some sort of sanity check on the output: something inexpensive, just to catch egregious conditions.
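To make the suggestion concrete, here is the kind of cheap check I have in mind (a sketch under my own naming, not k-Wave's actual API): a single linear scan of each output buffer before it is written, flagging non-finite values or an all-zeros array.

```cpp
#include <cmath>
#include <cstddef>

// Inexpensive post-run sanity check: one pass over an output
// buffer, flagging the egregious conditions I ran into.
// Sketch only -- SanityReport/checkBuffer are hypothetical names.
struct SanityReport {
    bool hasNonFinite = false;  // any NaN or Inf present
    bool allZero = true;        // every sample exactly zero
};

SanityReport checkBuffer(const float* data, std::size_t n) {
    SanityReport r;
    for (std::size_t i = 0; i < n; ++i) {
        if (!std::isfinite(data[i])) r.hasNonFinite = true;
        if (data[i] != 0.0f) r.allZero = false;
    }
    return r;
}
```

A warning printed whenever hasNonFinite (or allZero for a field that shouldn't be silent) is set would have caught my all-NaN /p_final immediately, instead of the failure only surfacing later in MATLAB.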
Thanks for your great software!