Anyone routinely increasing the frequency matrix in your sequences while keeping the phase matrix lower to decrease scan time? Say, 320 x 192 or 320 x 224, with 320 being the frequency matrix.
The size of your matrix in any direction doesn't actually affect scan time. With regard to matrices and k-space, scan time is determined entirely by resolution, because resolution is determined by the amount of time spent reading the signal. For example, if you have two sinusoids with the same phase and very similar frequencies, you would have to sample them for a very long time before a difference in their values could evolve. How fast you sample them (which determines FOV) sets the maximum frequency you can measure, but says nothing about which frequencies you can discriminate between.
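Here's a quick NumPy sketch of that sinusoid analogy (the sampling rate and window length are made-up illustration numbers, not from any sequence): the observation window sets the frequency resolution (1/T), while the sampling rate only sets the Nyquist limit, which is the analogue of FOV.

Code:
import numpy as np

fs = 1000.0              # sampling rate in Hz (made-up value)
T = 0.5                  # observation window in seconds (made-up value)
n = int(fs * T)
t = np.arange(n) / fs

# two sinusoids only 1 Hz apart -- closer than the 1/T = 2 Hz bin spacing
f1, f2 = 50.0, 51.0
spec1 = np.abs(np.fft.rfft(np.sin(2 * np.pi * f1 * t)))
spec2 = np.abs(np.fft.rfft(np.sin(2 * np.pi * f2 * t)))

df = 1.0 / T             # frequency resolution: set only by how long you read the signal
nyquist = fs / 2.0       # maximum measurable frequency: set only by how fast you sample
print("resolution:", df, "Hz, Nyquist:", nyquist, "Hz")
# both peaks land within one 2 Hz bin of each other -- not separable at this window length
print("peak 1 at", np.argmax(spec1) * df, "Hz, peak 2 at", np.argmax(spec2) * df, "Hz")

Doubling the window to 1 s drops the bin spacing to 1 Hz and the two peaks separate; doubling fs instead only raises the Nyquist limit and changes nothing about separating them.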
(07-13-2015 12:42 AM)AndrewBworth Wrote: The size of your matrix in any direction doesn't actually affect scan time. [...]
Oops... that is only true in the frequency direction. In the phase direction every extra line costs another TR, so I do sometimes reduce the number of phase encodes to save time, yeah.
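For what it's worth, a rough back-of-the-envelope sketch (the 600 ms TR is just an illustrative number, and the formula assumes a conventional 2D acquisition, scan time ~ TR x phase encodes x NEX / ETL):

Code:
def scan_time_s(tr_ms, n_phase, nex=1.0, etl=1):
    # rough 2D acquisition time: TR * phase-encode steps * NEX / echo train length
    return (tr_ms / 1000.0) * n_phase * nex / etl

tr = 600.0  # ms, made-up TR
for n_phase in (320, 224, 192):
    print("320 x", n_phase, ":", round(scan_time_s(tr, n_phase)), "s")
# -> 192 s, 134 s, 115 s; the 320 frequency points never enter the formula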