WO2015164825A1 - Dual space dictionary learning for magnetic resonance (mr) image reconstruction - Google Patents


Info

Publication number
WO2015164825A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
data
domain
weights
dictionary
Application number
PCT/US2015/027648
Other languages
French (fr)
Inventor
Chun Yuan
Zechen ZHOU
Jinnan Wang
Niranjan BALU
Original Assignee
Chun Yuan
Zhou Zechen
Jinnan Wang
Balu Niranjan
Application filed by Chun Yuan, Zhou Zechen, Jinnan Wang, Balu Niranjan
Publication of WO2015164825A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/54Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
    • G01R33/56Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/5608Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • G01R33/561Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution by reduction of the scanning time, i.e. fast acquiring systems, e.g. using echo-planar pulse sequences
    • G01R33/5611Parallel magnetic resonance imaging, e.g. sensitivity encoding [SENSE], simultaneous acquisition of spatial harmonics [SMASH], unaliasing by Fourier encoding of the overlaps using the temporal dimension [UNFOLD], k-t-broad-use linear acquisition speed-up technique [k-t-BLAST], k-t-SENSE

Definitions

  • a magnetic resonance (MR) device can emit magnetic fields and radio waves.
  • the emitted magnetic fields can cause protons in a body under observation to be uniformly aligned.
  • the uniform proton alignment creates a magnetic vector that can be reflected by the emitted radio waves.
  • the strength of the magnetic fields can be altered electronically during an MR scan. By altering the local magnetic field by these small increments, different slices of the body will resonate as different frequencies are applied.
  • when the radio waves are no longer emitted, the magnetic vector returns to its resting state, causing a signal (also a radio wave) to be emitted.
  • the emitted signal may be captured by one or more of k (k > 0) receiver coils in the MR device, and the emitted signal can be used to create MR images.
  • a computing device receives a data matrix A.
  • the computing device performs one or more iterations to determine a reconstruction matrix A2 of the data matrix.
  • An iteration of the one or more iterations includes: (1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A, with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1 …
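A minimal numpy sketch of steps (1)-(3) above. The sparse-coding step here is a crude least-squares-plus-hard-thresholding stand-in, since the extract does not specify the dictionary learning algorithm, and the patch size and dictionary size are arbitrary assumptions:

```python
import numpy as np

def reshape_to_patches(A, p):
    """Step (1): reshape matrix A into X whose columns are vectorized,
    non-overlapping p x p patches covering A (A's sides are assumed
    divisible by p in this sketch)."""
    m, n = A.shape
    cols = [A[i:i+p, j:j+p].reshape(-1)
            for i in range(0, m, p) for j in range(0, n, p)]
    return np.stack(cols, axis=1)

def patches_to_matrix(X, shape, p):
    """Inverse of reshape_to_patches."""
    m, n = shape
    A = np.empty(shape, dtype=X.dtype)
    k = 0
    for i in range(0, m, p):
        for j in range(0, n, p):
            A[i:i+p, j:j+p] = X[:, k].reshape(p, p)
            k += 1
    return A

def sparse_code(X, D, s):
    """Step (2), crudely: least-squares fit per patch, then keep only the
    s largest-magnitude coefficients per column (the sparse weights W1)."""
    W, *_ = np.linalg.lstsq(D, X, rcond=None)
    for j in range(W.shape[1]):
        small = np.argsort(np.abs(W[:, j]))[:-s]
        W[small, j] = 0.0
    return W

A = np.arange(36.0).reshape(6, 6)
X = reshape_to_patches(A, 2)                     # step (1)
D1 = np.linalg.qr(np.random.default_rng(2).standard_normal((4, 4)))[0]
W1 = sparse_code(X, D1, s=2)                     # step (2)
X1 = D1 @ W1                                     # step (3): estimated matrix
A_est = patches_to_matrix(X1, A.shape, 2)
```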
  • a computing device includes a processor and a tangible computer-readable medium.
  • the tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A, with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms …
  • a magnetic resonance system in another aspect, includes one or more coils and a computing device.
  • the computing device includes a processor and a tangible computer-readable medium.
  • the tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A, with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1 …
  • a tangible computer-readable medium configured to include instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A, with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms …
  • a device in another aspect, includes: (a) means for receiving a data matrix A; (b) means for performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A, with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1 …
  • a computing device receives a data matrix A.
  • the computing device performs one or more iterations to determine reconstruction matrix A2 of the data matrix A.
  • An iteration of the one or more iterations includes: (1) partitioning the data matrix A into a plurality of data blocks; (2) for a particular data block L of the plurality of data blocks: (2a) extracting the data block L from the data matrix A; (2b) vectorizing the data block L into a data sub-matrix AL, and (2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (4) generating the reconstruction matrix A2 based on the weighted data block.
  • the computing device provides the reconstruction matrix A2.
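The block-wise iteration above can be sketched as follows. This is a hedged numpy sketch: the claim's vectorization of each block into a structured sub-matrix AL is skipped, and each block is factored directly, using a truncated SVD for the low-rank property and soft-thresholding for the sparsity property:

```python
import numpy as np

def factor_block(AL, rank, thresh):
    """Factor AL ~ DL @ WL, enforcing a low-rank property (truncated SVD)
    and a sparsity property (soft-thresholding the weights)."""
    U, s, Vh = np.linalg.svd(AL, full_matrices=False)
    DL = U[:, :rank]                      # subspace basis (dictionary)
    WL = s[:rank, None] * Vh[:rank]       # weights
    WL = np.sign(WL) * np.maximum(np.abs(WL) - thresh, 0.0)  # sparsify
    return DL, WL

def reconstruct(A, block, rank=2, thresh=0.0):
    """One iteration: partition A into blocks, factor each block, and
    reassemble the reconstruction A2 from the factored blocks."""
    m, n = A.shape
    A2 = np.zeros_like(A)
    for i in range(0, m, block):
        for j in range(0, n, block):
            AL = A[i:i+block, j:j+block]
            DL, WL = factor_block(AL, min(rank, *AL.shape), thresh)
            A2[i:i+block, j:j+block] = DL @ WL
    return A2
```

With a zero threshold and sufficient rank, each block is reproduced exactly; lowering the rank or raising the threshold trades fidelity for noise suppression.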
  • a computing device includes a processor and a tangible computer-readable medium.
  • the tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
  • a magnetic resonance system in another aspect, includes one or more coils and a computing device.
  • the computing device includes a processor and a tangible computer-readable medium.
  • the tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
  • a tangible computer-readable medium configured to include instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform functions.
  • the functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
  • a device in another aspect, includes: (a) means for receiving a data matrix A; (b) means for performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) means for providing the reconstruction matrix A2.
  • FIG. 1 is a flowchart depicting the KDL technique, in accordance with an example embodiment
  • FIG. 2 illustrates a performance comparison at x3 acceleration with insufficient ACS data between SAKE and KDL, in accordance with an example embodiment
  • FIG. 3 illustrates a performance comparison at x4 acceleration with ACS data between SPIRiT, SAKE, and KDL, in accordance with an example embodiment
  • FIG. 4 is a flowchart depicting the DSDL technique, in accordance with an example embodiment
  • FIG. 5 illustrates a performance comparison at x4 retrospective acceleration between L1-SPIRiT and DSDL, in accordance with an example embodiment
  • FIG. 6 illustrates a performance comparison at x2.67 prospective acceleration between L1-SPIRiT and DSDL, in accordance with an example embodiment
  • FIG. 7 is a flowchart of the STEP procedure for iterative imaging reconstruction, in accordance with an example embodiment
  • FIG. 8 demonstrates a local subspace basis and a Laplace a priori model, in accordance with an example embodiment
  • FIG. 9 shows performance improvement by using a local signal characteristic promotion model at three-fold acceleration, in accordance with an example embodiment
  • FIG. 10 shows a comparison of different reconstruction algorithms in reconstruction result with 5 times magnified mean error and an empirical g-factor map in a uniformly under-sampled pattern with 30 x 30 calibration area, in accordance with an example embodiment
  • FIG. 11 illustrates a performance comparison between SAKE and STEP from fourfold vd-PDS calibration-less sampling with elliptical shutter, in accordance with an example embodiment
  • FIG. 12 illustrates a performance comparison between the SPIRiT, SAKE, and STEP reconstruction procedures for a 3D brain dataset, in accordance with an example embodiment
  • FIG. 13 illustrates a performance comparison between the SPIRiT, L1-SPIRiT, STEP, and L1-STEP reconstruction procedures for a 3D knee dataset and an artery wall in the knee from 5.6-fold vd-PDS auto-calibrating sampling with elliptical shutter, in accordance with an example embodiment
  • FIG. 14A is a block diagram of a computing environment, in accordance with an example embodiment
  • FIG. 14B is a block diagram of an example computing device, in accordance with an example embodiment
  • FIG. 14C is a block diagram of an example magnetic resonance device, in accordance with an example embodiment
  • FIGS. 15A and 15B are a flowchart of a method, in accordance with an example embodiment.
  • FIG. 16 is a flowchart of another method, in accordance with an example embodiment.
  • Data from an MR device can be related to one of at least two domains: a signal acquisition or k-space domain, where input (radio) signals are received from a body of interest at some or all of k coils of the MR device, and a signal observation or image domain, related to images generated by the MR device.
  • a data matrix A can be sampled in the signal acquisition domain, where A includes data values captured using one or more coils of a magnetic resonance system. Data from each coil of the MR system can be captured separately and combined to form the data matrix A.
  • Data from the data matrix A in the signal acquisition / k-space domain can be transformed into data in a second domain, such as a signal observation / image domain.
  • the transformation to data in the signal observation domain, such as an MR image, can be a Fourier transformation or other transformation.
  • the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
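The forward and inverse transformations between the two domains can be illustrated with a discrete Fourier round trip (a minimal numpy sketch; the fftshift conventions and array size here are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def kspace_to_image(kspace):
    """Transform signal-acquisition (k-space) data to the image domain."""
    return np.fft.ifftshift(np.fft.ifft2(np.fft.fftshift(kspace)))

def image_to_kspace(image):
    """Inverse transformation: image domain back to k-space."""
    return np.fft.ifftshift(np.fft.fft2(np.fft.fftshift(image)))

# A round trip recovers the original data up to floating-point error.
rng = np.random.default_rng(0)
k = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
recovered = image_to_kspace(kspace_to_image(k))
```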
  • a dictionary can be used to approximate an image in the image domain (or data matrix in the signal observation domain), where a portion of an image is approximated by a weighted sum of dictionary entries, and so the whole image can be approximated by a combination of weighted sums.
  • the dictionary entries can be stored in one or more dictionary matrices, and the corresponding weights, or coefficients, can be stored in one or more coefficient matrices.
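As a concrete illustration of a dictionary approximation, each vectorized patch can be written as the product of a dictionary matrix (one atom per column) with a sparse weight vector. This is a minimal numpy sketch; the patch size, atom count, and weight values are arbitrary assumptions:

```python
import numpy as np

NP, K = 16, 8                             # entries per patch, atoms in dictionary
rng = np.random.default_rng(1)
D = rng.standard_normal((NP, K))          # dictionary: one atom per column
w = np.zeros(K)
w[[1, 5]] = [0.7, -0.3]                   # sparse weights: only 2 of K atoms used
patch_approx = D @ w                      # patch ~ weighted sum of atoms

# The whole image is then approximated patch-by-patch, with one sparse
# weight vector per patch, stored together in a coefficient matrix W.
```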
  • the herein-described techniques include a k-space dictionary learning (KDL) technique, a dual space dictionary learning (DSDL) technique, and a self-supporting tailored k- space estimation for parallel imaging (STEP) technique.
  • KDL k-space dictionary learning
  • DSDL dual space dictionary learning
  • STEP self-supporting tailored k- space estimation for parallel imaging
  • the KDL technique includes training of dictionaries in a signal acquisition (k-space) domain.
  • the DSDL technique includes training of dictionaries in both a signal acquisition (k-space) domain and an image domain.
  • a dictionary trained in the image domain can represent an information structure corresponding to a high frequency spectrum of k-space, which can compensate for a low signal-to-noise ratio (SNR) scenario in the high frequency spectrum of k-space.
  • a dictionary trained in a k-space domain can preserve image contrast information accumulated in a low frequency spectrum, which can be regarded as a global energy distribution constraint in image domain.
  • the STEP technique is a self-supporting tailored parallel imaging reconstruction method that enhances representation of local data features in k-space.
  • STEP can be implemented in both auto-calibrating and calibration-less conditions.
  • STEP takes advantage of properties of locality, sparsity, and rank deficiency in its iterative reconstruction.
  • two tailored procedures, k-space partitioning and basis selection, are used to promote a spatially variant signal subspace and to incorporate it into a self-supporting structured low rank model that enforces the locality, sparsity, and rank deficiency properties; this model can be formulated as a constrained optimization problem and solved by an iterative algorithm.
  • arbitrary sampling trajectory and flexible regularization constraints can be integrated into STEP.
  • Experimental results using simulated and in vivo datasets demonstrate that STEP can reduce noise amplification and residual aliasing artifacts while preserving structural information and tissue contrast.
  • a parallel reconstruction technique has been developed that uses Singular Value Decomposition (SVD) to impose a low rank property without using auto-calibrating signal (ACS) data for MR devices, and that can further improve results when a sufficiently sampled ACS area is provided.
  • SVD Singular Value Decomposition
  • ACS auto-calibrating signal
  • KDL patch-based k-space dictionary learning
  • FIG. 1 is a flowchart depicting the KDL technique, in accordance with an example embodiment.
  • the weighting coefficient matrix W can be a sparse matrix due to the low rank property of the structured data matrix.
  • a dictionary can be used to describe this low rank matrix with sparse coefficients.
  • the KDL technique includes defining: • a division of k-space that includes overlapping patches;
  • the KDL technique can be formulated as follows:
  • POCS Projection onto Convex Sets
  • the reconstruction result of the KDL technique was compared with results from both a SPIRiT technique and a SAKE technique based on normalized root mean square error (nRMSE) values. All three techniques used the same 7 x 7 kernel / patch size.
  • nRMSE normalized root mean square error
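The nRMSE metric used in these comparisons can be computed as follows. Normalizing by the reference norm is one common convention; the extract does not state which normalization was actually used:

```python
import numpy as np

def nrmse(reconstruction, reference):
    """Normalized root mean square error: l2 norm of the error between
    the reconstruction and a fully sampled reference, divided by the
    l2 norm of the reference."""
    return (np.linalg.norm(reconstruction - reference)
            / np.linalg.norm(reference))
```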
  • the singular-value threshold was fixed at 60.
  • dictionary size varied from 60 to 256 to test the impact of dictionary size, where the initial dictionary was trained using SVD. Other relevant parameters were optimized to achieve best performance for each reconstruction technique.
  • FIG. 2 illustrates a performance comparison at x3 acceleration with insufficient ACS data between the SAKE and KDL techniques, in accordance with an example embodiment.
  • An image of a PDS sampling mask used in the performance comparison is shown at left of FIG. 2.
  • an image generated using the SAKE technique is shown, and at right of FIG. 2, an image generated using the KDL technique is shown.
  • the KDL technique outperforms the reconstruction result of SAKE, as KDL has an nRMSE of 0.0255, which is smaller than SAKE's corresponding nRMSE of 0.0258.
  • FIG. 3 illustrates a performance comparison at x4 acceleration with ACS data between the SPIRiT, SAKE, and KDL techniques, in accordance with an example embodiment.
  • the top row of FIG. 3 includes, from left to right, a full image, an image generated using the SPIRiT technique, an image generated using the SAKE technique, an image generated using the KDL technique with a dictionary size of 60, an image generated using the KDL technique with a dictionary size of 128, and an image generated using the KDL technique with a dictionary size of 256.
  • RE reconstruction error
  • the KDL technique also provides less error than either the SPIRiT or SAKE technique with x4 acceleration factor (FIG. 3), even for the smallest dictionary size (60) used in the comparison results.
  • the comparison results also indicate that KDL estimation error can be further reduced by increasing the KDL dictionary size.
  • an adaptive sparse transform/learned dictionary in k-space can improve the image quality for parallel reconstruction.
  • KDL can be combined with image domain information for tailored image reconstruction of small structures.
  • a particular example is vessel wall MRI used for atherosclerotic plaque screening where small structures within the plaque have to be resolved.
  • 3D large coverage multi-contrast Black Blood (BB) sequences used for joint intracranial and carotid plaque screening can have relatively long scan durations. Acceleration of 3D MRI sequences can be achieved by partial parallel imaging techniques such as SPIRiT or SAKE, but this process can degrade vessel wall sharpness in accelerated images.
  • a tailored image reconstruction technique that can preserve image sharpness for complicated plaque morphology and even small lesions can be used for this type of vessel wall screening.
  • dictionary learning techniques can adaptively represent the signal and enhance image quality.
  • FIG. 4 is a flowchart depicting the DSDL technique, in accordance with an example embodiment.
  • for the DSDL technique, let:
  • R and R^T be respective forward and backward Hankel structure matrix reshape operations;
  • Da and Do be dictionary matrices for the respective acquisition (k-space) and observation (image) domains, where every column d_k of each of Da and Do satisfies ||d_k||_2 <= 1 for k = 1, 2, ..., K; and
  • Wa and Wo be the respective corresponding sparse coefficient matrices for the acquisition and observation domains.
  • the DSDL technique can then be formulated as the pair of objectives:
  • min over (Da, Wa) of ||Da Wa - R(A)||_F^2 + lambda ||Wa||_0
  • min over (Do, Wo) of ||Do Wo - F^H R^T(Da Wa)||_F^2 + lambda ||Wo||_0
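A toy numpy sketch of the two DSDL fitting terms (acquisition-domain and observation-domain) together with the unit-norm atom constraint. The sparsity penalties, the Hankel reshape R, and the actual dictionary update are omitted, and a small 1D unitary DFT stands in for the MR Fourier operator; sizes and random data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize_atoms(D):
    """Enforce the dictionary constraint ||d_k||_2 <= 1 on every column."""
    norms = np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D / norms

# Toy dual-space setup: Da/Wa model k-space patch data, Do/Wo model the
# image-domain view of the same data.
N, K = 8, 8
Xa = rng.standard_normal((N, 4)) + 1j * rng.standard_normal((N, 4))
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
Xo = F.conj().T @ Xa                          # image-domain view (F^H applied)

Da = normalize_atoms(rng.standard_normal((N, K)) + 0j)
Wa, *_ = np.linalg.lstsq(Da, Xa, rcond=None)  # acquisition-domain fit
Do = normalize_atoms(rng.standard_normal((N, K)) + 0j)
Wo, *_ = np.linalg.lstsq(Do, Xo, rcond=None)  # observation-domain fit

res_a = np.linalg.norm(Da @ Wa - Xa)          # first objective's data term
res_o = np.linalg.norm(Do @ Wo - Xo)          # second objective's data term
```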
  • MERGE data was fully sampled and retrospectively under-sampled by a factor of 4 using a PDS pattern (calibration area 26 x 26).
  • the result reconstructed by the DSDL technique was compared with a result using an L1-SPIRiT technique based on nRMSE values.
  • SNAP data was prospectively x2.67 accelerated with the PDS pattern (calibration area 26 x 26) and performance compared with L1-SPIRiT. Both DSDL and L1-SPIRiT used the same 7 x 7 kernel/patch size.
  • the dictionary size was 60 for the k-space domain and 256 for the image domain - the larger image domain dictionary was used since the image domain can include more complex structures than a k-space domain.
  • the initial dictionaries for both domains were generated using singular value decomposition techniques.
  • FIG. 5 shows a performance comparison between L1-SPIRiT and DSDL at x4 retrospective acceleration for imagery of a common carotid artery.
  • FIG. 5 includes a full image, an L1-SPIRiT image generated using L1-SPIRiT having an nRMSE value of 0.0326, and a DSDL image generated using DSDL having an nRMSE value of 0.0316.
  • the white square in each of the full, L1-SPIRiT, and DSDL images of FIG. 5 indicates a lesion region.
  • the DSDL technique with an nRMSE of 0.0316 outperforms the reconstruction result with an nRMSE of 0.0326 for L1-SPIRiT. Beyond the overall noise level reduction, DSDL better preserves small structure information around the lesion region than L1-SPIRiT.
  • FIG. 6 shows results of a performance comparison at x2.67 prospective acceleration.
  • a top row of three images of FIG. 6 show results using the Ll-SPIRiT technique and a bottom row of three images of FIG. 6 show results from using the DSDL technique.
  • the left images of both rows show magnitudes of phase-sensitive images
  • the center images of both rows show magnitude of reference (MoR) images
  • the right images of both rows show phase-corrected real images.
  • FIG. 6 indicates that SNAP images can also be better restored using the DSDL technique.
  • DSDL can be used for acceleration of 3D multi-contrast plaque screening.
  • DSDL has been validated for feasibility on patients.
  • jointly using adaptive k-space and image domain sparse transform / learned dictionaries can better preserve image structure and reduce overall noise level compared to a universal sparse transform.
  • Thus DSDL extends KDL by including image domain information for image information preserving reconstruction.
  • the Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP) technique is described herein.
  • Parallel imaging has been the most commonly used technique to reduce image acquisition time in clinical MR scans.
  • k-space data are simultaneously acquired from multiple receiver coils with different coil sensitivity information that can establish data correlation among k-space neighbors and across multichannel coil elements. This data redundancy can be used to restore under-sampled k-space data through a reconstruction algorithm, but at the cost of reduced measurement accuracy and amplified noise.
  • Some existing subspace techniques assume that a subspace basis remains invariant across the entire k-space.
  • observations of signal characteristics in k-space reveal high magnitude with large variations in a central low-spatial-frequency area, but low magnitude with small variation in a peripheral high-spatial-frequency area.
  • the difference of signal characteristics in distinct k-space areas can be further enlarged in particular imaging sequences, such as inversion-recovery-prepared imaging sequences, fast spin echo sequences configured with long echo trains, and MR scans with contrast agent injection.
  • the spatially invariant subspace basis/interpolation weights may lead to increased noise and artifact levels.
  • the STEP technique can address issues related to subspace parallel imaging reconstruction in the case of insufficient data for calibration, arbitrary Cartesian under-sampling and spatially varying signal characteristics.
  • STEP establishes a self-supporting low rank model in which several supporting subspace bases are extracted from structured k-space data.
  • STEP uses tailored procedures including k-space partitioning and basis selection to adapt for spatially variant signal characteristics.
  • the STEP procedure uses a spatially variant subspace basis to capture local k-space signal characteristics, to account for small variations in signal characteristics in a peripheral high-spatial-frequency area and for large variations in signal characteristics in a central low-spatial-frequency area.
  • tailored k-space partitioning and basis selection procedures are used for spatially variant signal subspace promotion and incorporated into the factorization of the data matrix while maintaining a low rank matrix structure.
  • the k-space partitioning procedure guarantees locality for the extraction and representation of subspace bases, which is equivalent to using a spatially variant convolution kernel for different k-space data blocks.
  • the basis selection procedure promotes sparsity for coefficients in a weighting matrix and further enhances the adaptability to local signal variations within one k-space data block.
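One way the basis selection idea can be sketched in numpy is below; the energy-ranking criterion used here is an illustrative assumption, not the patent's stated selection rule:

```python
import numpy as np

def select_basis(block_cols, basis, n_select):
    """Basis selection for one k-space data block: keep only the n_select
    basis vectors that carry the most energy for this block, so the
    block's weighting matrix stays sparse and locally adapted."""
    energy = np.linalg.norm(basis.conj().T @ block_cols, axis=1)
    keep = np.sort(np.argsort(energy)[::-1][:n_select])
    return basis[:, keep]

# Toy block whose patches live entirely in directions 1 and 4 of the basis.
basis = np.eye(6)
block_cols = np.zeros((6, 3))
block_cols[1] = [1.0, 2.0, 3.0]
block_cols[4] = [4.0, 5.0, 6.0]
local_basis = select_basis(block_cols, basis, n_select=2)
```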
  • a self-supporting low rank model provided by the basis selection procedure can provide flexibility for an arbitrary under-sampling pattern with or without calibration signals.
  • STEP therefore combines the properties of locality, sparsity, and rank deficiency during image reconstruction.
  • STEP can provide improved parallel imaging reconstruction with reduction of noise amplification and/or residual aliasing artifacts in both calibration-less and auto-calibrating conditions.
  • STEP and its extension with compressed sensing, L1-STEP, show more accurate structures (vessel wall boundaries) and tissue contrast preservation (measured using wall-lumen signal ratios) over other methodologies.
  • Subspace Parallel Imaging. Parallel reconstruction methods are discussed from a signal/noise subspace perspective. For simplicity, only two-dimensional (2D) Cartesian k-space is considered here, but the same theory can be extended to 3D reconstruction in a similar manner.
  • a block of k-space dataset x ∈ C^(N1 x N2 x Nc), with N1 frequency encodes, N2 phase encodes, and Nc coil elements, can generate a structured data matrix A ∈ C^(w^2 Nc x (N1 - w + 1)(N2 - w + 1)) whose columns are vectorized patches extracted by sliding a w x w x Nc sized multichannel window across the block of k-space dataset x.
  • A becomes a rank-deficient matrix with an appropriately chosen window size w x w and hence has a nontrivial left null space. Furthermore, a signal subspace and a noise subspace can be distinguished from each other according to the order of their corresponding singular values using an SVD-based approach. A can therefore be approximated by its first K dominant singular values and singular vectors as indicated by Eq.
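A hedged numpy sketch of the structured data matrix and its rank-K approximation. The row/column orientation in the extracted text is garbled, so this sketch follows the common convention that each column is one vectorized w x w x Nc patch; the array sizes are arbitrary:

```python
import numpy as np

def structured_matrix(x, w):
    """Build the structured data matrix A from a multichannel k-space
    block x of shape (N1, N2, Nc): each column is a vectorized
    w x w x Nc patch from one of the (N1-w+1)(N2-w+1) window positions."""
    N1, N2, Nc = x.shape
    cols = [x[i:i+w, j:j+w, :].reshape(-1)
            for i in range(N1 - w + 1) for j in range(N2 - w + 1)]
    return np.stack(cols, axis=1)     # shape: (w*w*Nc, (N1-w+1)*(N2-w+1))

def rank_k_approx(A, K):
    """Approximate A by its first K dominant singular values/vectors."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U[:, :K] @ np.diag(s[:K]) @ Vh[:K]

rng = np.random.default_rng(4)
x = rng.standard_normal((10, 10, 4)) + 1j * rng.standard_normal((10, 10, 4))
A = structured_matrix(x, w=3)
```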
• When A only contains acquired auto-calibration signals, two distinct approaches can be used for a calibration process.
• Auto-calibrating methods can derive a set of representative nulling vectors in the noise subspace from auto-calibration signals. Some self-calibrating methods directly extract coil sensitivities using support vectors in a signal subspace from auto-calibration signals. Then, in the reconstruction process, the calibrated vectors can be applied throughout k-space as interpolation weights. If auto-calibration signal data has not been sufficiently acquired, a joint calibration and reconstruction method can iteratively solve the resulting calibration-less problem using the low rank property of the structured data matrix A formed by the entire k-space data, regardless of whether or not auto-calibration signal data was acquired.
• Auto-calibrating/self-calibrating methods can suffer from a mismatch: a subspace basis calibrated from the central k-space may not be optimal for the peripheral k-space.
  • One attempt to solve this problem is based on the global subspace established for the entire k-space data.
• the signal characteristics vary in k-space, particularly for imaging sequences with gradually evolving signals due to T1 and/or T2 relaxation.
• When magnetization-prepared imaging and fast spin echo scans are used, data acquisition in the low-spatial-frequency region can be designed to maintain a certain image contrast, while the data in the high-spatial-frequency region is sampled during the signal evolution period to reduce scan time.
  • a resulting magnetization modulation can aggravate differences of signal characteristics between distinct areas in k-space.
• the global subspace may not represent the local signal variation pattern accurately while maintaining a low-rank property. In addition, the estimated correlation relation determined from a limited kernel size does not fully represent the true coil sensitivity profiles, leading to aliasing artifacts and noise amplification.
  • the STEP technique constructs and optimizes multiple subspace bases for local k-space signal representation.
• the k-space partition procedure can be flexible; for example, a non-uniform partitioning procedure can be used to generate a non-overlapping k-space partition.
• a uniform k-space partition can be used by the STEP procedure for simplicity; in other embodiments, the STEP procedure can use a non-uniform k-space partition.
• a local k-space data block can still generate a rank-deficient data matrix, and its local dictionary D_l can be expected to provide more local data features in k-space.
  • distinct subspace bases can be extracted from different local k-space regions.
• k-space in an overlapping area between data blocks can be computed as an average of the different overlapped k-space blocks after low rank approximation.
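The averaging over overlapping areas can be sketched as below. The accumulate-and-count scheme and the two-block layout are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def combine_blocks(blocks, positions, shape):
    """Place processed (e.g. low-rank approximated) k-space blocks back
    on the full grid, averaging wherever blocks overlap."""
    acc = np.zeros(shape, dtype=complex)
    count = np.zeros(shape)
    for blk, (rows, cols) in zip(blocks, positions):
        acc[rows, cols] += blk
        count[rows, cols] += 1
    return acc / np.maximum(count, 1)  # avoid division by zero off-block

# Two 8 x 8 blocks on an 8 x 12 grid, overlapping in columns 4-7.
shape = (8, 12)
positions = [(slice(0, 8), slice(0, 8)), (slice(0, 8), slice(4, 12))]
blocks = [np.ones((8, 8), dtype=complex), 3 * np.ones((8, 8), dtype=complex)]
combined = combine_blocks(blocks, positions, shape)
print(combined[0, 2], combined[0, 5], combined[0, 10])
```

Entries covered by only one block keep that block's value, while the overlap columns receive the average of the two blocks.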
  • the basis selection procedure can apply a statistical a priori analysis in favor of basis selection on each element in W.
  • each column vector in A can contain one specified signal structure and so be represented by a few basis vectors.
  • the result is that only a few weighting coefficients in W contain large absolute values while the rest of the weighting coefficients contain relatively small absolute values.
• the distribution of coefficient values in W can form a peak around zero with a long tail representing the remaining weights.
  • the retained weighting coefficients function as a basis selection to adapt the subspace basis for signal representation in different k-space locations.
• the STEP procedure can use a Laplace distribution to approximate the prior distribution of signal coefficients, which can be formulated into an L1-norm regularization based on Bayesian theory.
  • the weighting coefficients can also improve noise suppression with sparsity-promoting prior information.
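Under a Laplace prior, the maximum a posteriori estimate of the weights reduces to an L1-norm penalty, whose proximal operator is complex soft-thresholding. A sketch follows; the threshold value and example coefficients are arbitrary.

```python
import numpy as np

def soft_threshold(W, lam):
    """Complex soft-thresholding: shrink coefficient magnitudes by lam and
    zero out coefficients below lam -- the basis selection step implied by
    an L1-norm (Laplace prior) regularization on the weighting matrix W."""
    mag = np.maximum(np.abs(W), 1e-12)
    return np.where(mag > lam, (1.0 - lam / mag) * W, 0.0)

W = np.array([0.05 + 0.05j, 2.0 - 1.0j, -0.01 + 0j])
W_selected = soft_threshold(W, lam=0.2)
print(W_selected)
```

Small-magnitude coefficients (noise-like weights) are set to zero, while large coefficients survive with a slight magnitude reduction, matching the peaked-with-long-tail distribution described above.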
• FIG. 7 is a flowchart of the STEP procedure for iterative imaging reconstruction, in accordance with an example embodiment.
  • the STEP procedure can receive as input a "k-space input" from k coils of a magnetic resonance device and generate a reconstructed matrix A2 as an output.
  • Data blocks can be extracted from the k-space input, where the data blocks can be associated with coils; e.g., one or more data blocks can have data from, and so be associated with, coil 1, one or more data blocks can have data from, and so be associated with, coil 2, ... and one or more data blocks can have data from, and so be associated with, coil k.
• the data blocks can be vectorized to form a data matrix A.
  • Data matrix A can be factored, and perhaps partitioned, into matrices D and W, where D can include one or more sub-space bases, and W can include one or more weights for the sub-space bases.
  • the factorization can enforce various properties, such as but not limited to, a low-rank property, a sparsity property and/or a joint sparsity property for a wavelet domain.
  • Weighted data blocks can be constructed from the D and W matrices, and the weighted data blocks can be combined to form a reconstruction matrix A2. In some cases, reconstruction matrix A2 can take less storage than matrix A.
  • the STEP parallel imaging reconstruction procedure can improve the local signal representation in k-space with both k-space partition and basis selection constraints.
• An objective function for STEP is indicated in Eq. [3] below: f(x, D_1, W_1, …, D_L, W_L) = Σ_{l=1}^{L} ( ‖R_l x − D_l W_l‖_F² + λ‖W_l‖_1 ), subject to the acquired samples of x remaining consistent with the measured data   [3]
• P_Ω and P_Ω̄ are operators to project the acquired and un-acquired data, respectively, onto Cartesian grids in k-space;
• R_l is an operator to extract the l-th data block in Cartesian k-space and reshape the data block into a structured low rank data matrix
• the STEP procedure can attempt to restore the un-acquired k-space data using Eq. [4] below: min_{x, D_1, W_1, …, D_L, W_L} f(x, D_1, W_1, …, D_L, W_L)   [4]
• the block coordinate descent method can be adopted to solve the problem defined by Eq. [4].
• Eqs. [5A] and [5B] are first looped on subscript l from 1 to L, and then Eq. [5C] is applied to update x.
• the STEP procedure can sequentially enforce properties of locality, sparsity, and rank deficiency. Loop iterations are repeated until stopping criteria are met; examples of the stopping criteria include, but are not limited to, reaching a maximum number of iterations or a difference between adjacent updates falling within a pre-determined limit value.
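The alternating structure described above can be sketched as a self-contained numpy loop. This is a schematic sketch of the block coordinate descent idea (per-block truncated factorization, thresholded weights, then data consistency), with helper functions, sizes, and parameter values chosen for illustration rather than taken from the patent.

```python
import numpy as np

def to_matrix(blk, w):
    """Structured data matrix: columns are vectorized w x w x Nc patches."""
    N1, N2, Nc = blk.shape
    return np.stack([blk[i:i + w, j:j + w, :].ravel()
                     for i in range(N1 - w + 1)
                     for j in range(N2 - w + 1)], axis=1)

def from_matrix(A, shape, w):
    """Put patch columns back into a block, averaging overlapped entries."""
    N1, N2, Nc = shape
    acc = np.zeros(shape, dtype=complex)
    cnt = np.zeros(shape)
    k = 0
    for i in range(N1 - w + 1):
        for j in range(N2 - w + 1):
            acc[i:i + w, j:j + w, :] += A[:, k].reshape(w, w, Nc)
            cnt[i:i + w, j:j + w, :] += 1
            k += 1
    return acc / cnt

def soft(W, lam):
    """Complex soft-thresholding (L1 / Laplace prior proximal operator)."""
    mag = np.maximum(np.abs(W), 1e-12)
    return np.where(mag > lam, (1.0 - lam / mag) * W, 0.0)

def step_sketch(y, mask, blocks, w=3, K=4, lam=0.01, n_iter=5):
    """Schematic block coordinate descent: per-block rank-K factorization
    into D_l and W_l with thresholded weights, looped over blocks l = 1..L,
    then data consistency enforced on the acquired k-space samples."""
    x = y.copy()
    for _ in range(n_iter):
        for rows, cols in blocks:                 # loop l = 1..L
            A = to_matrix(x[rows, cols, :], w)
            U, s, Vh = np.linalg.svd(A, full_matrices=False)
            D, W = U[:, :K], s[:K, None] * Vh[:K, :]
            x[rows, cols, :] = from_matrix(D @ soft(W, lam),
                                           x[rows, cols, :].shape, w)
        x[mask] = y[mask]                         # restore acquired data
    return x

rng = np.random.default_rng(1)
full = rng.standard_normal((10, 10, 2)) + 1j * rng.standard_normal((10, 10, 2))
mask = rng.random((10, 10, 2)) > 0.5              # acquired sample locations
y = full * mask                                   # under-sampled k-space
blocks = [(slice(0, 7), slice(0, 10)), (slice(3, 10), slice(0, 10))]
x = step_sketch(y, mask, blocks)
print(x.shape)
```

The final data-consistency step guarantees that acquired samples are passed through unchanged while un-acquired samples are estimated from the thresholded local low-rank models.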
• the STEP procedure can be extended to integrate with other regularization terms. Considering that some MR images have a sparse representation in a transformed domain, an L1-STEP procedure can be formulated by adding a joint sparsity property in a wavelet domain for multichannel images, such as indicated by Eq. [6] below:
• a regularization parameter balances between a conventional STEP constraint and a compressed sensing penalty.
  • Another two 3D datasets were scanned from one healthy volunteer.
• One brain dataset was acquired by an eight-channel head coil using an inversion recovery prepared 3D spoiled gradient echo [T1 contrast-enhanced turbo field echo (TFE)] sequence.
  • the other knee dataset was acquired using the same knee coil as above with a 3D Volumetric isotropic Turbo spin echo Acquisition (VISTA) sequence.
• Image Quality Evaluation. Two criteria for image quality evaluation, the nRMSE and a mean structural similarity (mSSIM) index, were measured in a region-of-interest (ROI) of a multichannel combined image. These criteria were computed using a point-wise root sum-of-squares (rSoS).
• nRMSE = ‖M_ROI ∘ (SOS_rec − SOS)‖_2 / ‖M_ROI ∘ SOS‖_2 × 100%, where SOS_rec denotes the reconstructed channel-combined image
• SOS denotes a channel-combined image from fully-acquired data
• M_ROI denotes an ROI mask that restricts the nRMSE value measured from a tissue area.
• the mSSIM index attempts to quantify the visible differences between the reconstructed and reference images using a variety of known properties of the human visual system, with the mean operation applied within the given ROI. A more accurate reconstruction result is indicated by a smaller nRMSE value and/or a higher mSSIM value.
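The ROI-restricted nRMSE criterion can be sketched as follows; the image values, ROI shape, and variable names are illustrative.

```python
import numpy as np

def nrmse(recon, reference, roi_mask):
    """ROI-restricted normalized RMSE (in percent) between a reconstructed
    image and a fully-sampled channel-combined reference image."""
    diff = (recon - reference)[roi_mask]
    return 100.0 * np.linalg.norm(diff) / np.linalg.norm(reference[roi_mask])

reference = np.ones((4, 4))
recon = reference + 0.1            # uniform 10% error everywhere
roi_mask = np.zeros((4, 4), dtype=bool)
roi_mask[1:3, 1:3] = True          # restrict the measure to a tissue area
print(round(nrmse(recon, reference, roi_mask), 3))
```

Because the mask zeroes out contributions from outside the ROI, background noise does not inflate (or dilute) the error measure.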
• Model Performance Assessment. To demonstrate the proposed model, a ground-truth dataset was constructed using a 200 × 200 Shepp-Logan phantom and a set of eight-channel sensitivity maps simulated based on the Biot-Savart law. Additional zero-mean Gaussian white noise was added to adjust the SNR to 20 dB for the lowest intensity of the Shepp-Logan phantom in the image domain. Without loss of generality, a kernel/patch size of 7 × 7 for all algorithms and an overlapping area of 20% for k-space partitioning were used in this assessment. Nine equal, overlapping blocks covering k-space (3 blocks × 3 blocks on each side) were used to examine rank value distribution and subspace basis differences in various parts of k-space.
• σ_n denotes a standard deviation computed for each pixel across reconstructed images
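The per-pixel standard deviation, and the empirical g-factor map built from it, can be sketched as below. The pseudo-multiple-replica style ratio σ_accelerated / (√R · σ_full) is a common convention assumed here rather than quoted from the text, and the stacked images are synthetic.

```python
import numpy as np

def pixelwise_std(recons):
    """Standard deviation for each pixel across a stack of reconstructed
    images (stack axis first), e.g. from repeated noisy reconstructions."""
    return np.std(recons, axis=0)

def empirical_g(std_accel, std_full, R):
    """Empirical g-factor: per-pixel noise amplification of an accelerated
    reconstruction relative to full sampling at reduction factor R."""
    return std_accel / (np.sqrt(R) * np.maximum(std_full, 1e-12))

recons = np.stack([np.full((3, 3), v) for v in (1.0, 2.0, 3.0)])
sigma = pixelwise_std(recons)
g_map = empirical_g(sigma, np.full((3, 3), sigma[0, 0]), R=4)
print(sigma[0, 0], g_map[0, 0])
```

A g-factor near (or below) one indicates little noise amplification from acceleration, which is how the empirical g-factor maps in FIG. 10 are interpreted.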
  • the k-space was divided into 5 x 5 data blocks.
• SAKE and STEP shared the same rank value of 60, and different values of the STEP regularization parameter λ ranging from 0.2 to 5 were tested to determine the impact of the Laplace a priori penalty.
• STEP can also be used for auto-calibrating parallel reconstruction.
• the 3D brain dataset was used to compare performance between SPIRiT, SAKE, and STEP.
• the 3D brain dataset was uniformly under-sampled by a factor of 4 (2 × 2 in each direction) and a central 26 × 26 area was fully sampled for a SPIRiT calibration process.
• a rank value of 60 was used for SAKE and each local subspace in STEP.
• the impact of the k-space partition and basis selection procedures was evaluated by separately and jointly adding a 5 × 5 overlapping k-space partition and setting a SAKE regularization parameter λ value to 1.
• For SAKE+Laplace a priori, SAKE+Partition, and STEP, 7 iterations of a CG SPIRiT reconstruction result were used to initialize each slice.
• Lumen contour and vessel wall pixels were manually selected on 120 axial slices from fully sampled reference images covering the popliteal artery using ImageJ software (version 1.48, National Institutes of Health, Bethesda, MD).
• FIG. 8 demonstrates a local subspace basis and a Laplace a priori model, in accordance with an example embodiment.
• Image (b) at upper right of FIG. 8 includes a comparison of the global, central, and peripheral subspace bases computed by SVD. The singular values calculated from each data block are normalized by their largest singular value for rank estimation, and only partial subspace basis vectors representing the 3rd coil are shown for comparison. Images (c) and (d), respectively at lower left and lower right of FIG. 8, show the Laplace and Gaussian probability density functions that model the complex signal and noise coefficients, respectively, in a weighting matrix.
  • Image (a) of FIG. 8 shows one example of k-space partition with 3 x 3 overlapping data blocks.
  • three types of data matrices generated from different data blocks in distinct colors have been factorized by SVD.
• As shown in image (b) of FIG. 8, different data blocks in k-space share a similar rank value but might have different subspace bases.
  • the global subspace basis fails to represent central and peripheral subspace basis well particularly for those vectors corresponding to smaller singular values.
  • the subspace difference between global and peripheral basis is larger than the difference between global and central basis.
  • Images (c) and (d) of FIG. 8 respectively illustrate the 2D histogram of complex signal/noise coefficients in weighting matrix and the fitted Laplace/Gaussian probability density function.
  • the histogram of signal and noise coefficients can differ significantly while the histogram of signal coefficients can be approximated by a Laplace distribution.
• FIG. 9 shows performance improvement by using a local signal characteristic promotion model at three-fold acceleration, in accordance with an example embodiment.
• FIG. 9 demonstrates the performance improvement of the two tailored procedures in STEP: k-space partitioning and basis selection.
• Image (a) at upper-left of FIG. 9 shows a vd-PDS pattern P1 used for under-sampling a dataset. Reconstruction results of the dataset under-sampled using vd-PDS pattern P1 are shown in image (b) at upper-right of FIG. 9.
• Image (b) of FIG. 9 has two rows: a top row with four reconstructed images, and a bottom row of corresponding error maps that have been 5× magnified. The four reconstructed images on the top row of image (b) of FIG. 9 were respectively generated (from left to right in FIG. 9) using the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure.
  • the four error maps on the bottom row of image (b) of FIG. 9 show error between a reference image and respective images (shown immediately above) generated by (from left to right) the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure.
• Image (c) at lower-left of FIG. 9 shows an elliptical vd-PDS pattern E1 used for under-sampling a dataset. Reconstruction results of the dataset under-sampled using elliptical vd-PDS pattern E1 are shown in image (d) at lower-right of FIG. 9.
  • Image (d) of FIG. 9 has two rows: a top row with four reconstructed images, and a bottom row of corresponding error maps that have been 5x magnified.
  • the four reconstructed images on the top row of image (d) of FIG. 9 were respectively generated (from left to right in FIG. 9) using the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure.
• the four error maps on the bottom row of image (d) of FIG. 9 show error between a reference image and respective images (shown immediately above) generated by (from left to right) the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure. Respective nRMSE and mSSIM values corresponding to the error maps in image (d) of FIG. 9 are also shown.
• FIG. 10 shows a comparison of different reconstruction algorithms, with reconstruction results, 5× magnified mean error maps, and empirical g-factor maps for a uniformly under-sampled pattern with a 30 × 30 calibration area, in accordance with an example embodiment.
  • Two k-space partitions and two initialization strategies are also examined for the impact of parameters in the STEP procedure.
• White arrows are used in FIG. 10 to illustrate aliasing artifacts in the reconstruction results of SPIRiT and SAKE.
  • FIG. 10 is made up of seven columns and three rows of images.
  • the leftmost column of the top row of FIG. 10 shows a reference image made up of an average of 100 fully sampled rSoS images.
• the remaining six columns of images of FIG. 10 indicate results from six different reconstruction procedures. From left to right in FIG. 10, these procedures are SPIRiT, SAKE, STEP using a zero-filled initialization and a 3×3 k-space partitioning, STEP using a zero-filled initialization and a 5×5 k-space partitioning, STEP using a heuristic initialization and a 3×3 k-space partitioning, and STEP using a heuristic initialization and a 5×5 k-space partitioning. These procedures were used on each under-sampled dataset to estimate un-acquired k-space data.
  • the top row of images of FIG. 10 are, from left to right, the reference image and the average of reconstructed images generated by the six different reconstruction procedures in the left-to-right order discussed in the paragraph above.
• the middle row of images of FIG. 10 shows 5 times magnified residual error maps for the six different reconstruction procedures in the left-to-right order discussed in the paragraph above.
  • the bottom row of images of FIG. 10 provide empirical g-factors estimated for the six different reconstruction procedures in the left-to-right order discussed in the paragraph above.
• STEP can provide fewer aliasing artifacts in its reconstruction results and smaller noise amplification in its empirical g-factor maps. Residual errors for some boundary information may arise without use of a relatively good initialization, such as the heuristic initialization used to generate the right-most columns of images in FIG. 10. Heuristic initialization can be particularly useful in the case where more data blocks have been divided. Once a relatively good initialization is adopted, STEP can generate fewer residual errors with more data blocks divided and maintain its benefit on the empirical g-factor map in comparison to SPIRiT and SAKE.
• FIG. 11 illustrates a performance comparison between SAKE and STEP from four-fold vd-PDS calibration-less sampling with elliptical shutter, in accordance with an example embodiment.
• FIG. 11 has a top row and a middle row, both having five images, and a bottom row with two graphs.
  • the axial slice reference image is a 2D axial slice generated from the 3D brain dataset - the SAKE and STEP procedures were attempting image reconstruction after the axial slice reference image was under-sampled by four-fold.
  • the bottom row of FIG. 11 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SAKE and various STEP procedures across all slices as a quantitative evaluation of nRMSE and mSSIM from other 2D axial slices.
• FIG. 12 illustrates a performance comparison between the SPIRiT, SAKE, and STEP reconstruction procedures for a 3D brain dataset, in accordance with an example embodiment.
• FIG. 12 has a top row and a middle row, both having six images, and a bottom row with two graphs.
• the top row of FIG. 12 shows, from left to right, a reference image of the axial slice discussed below in more detail, and five reconstruction images of the axial slice reference image generated using, respectively, a SPIRiT procedure, the SAKE procedure, a SAKE with Laplace procedure, a SAKE with partitioning procedure, and a STEP procedure.
• the axial slice reference image is a 2D axial slice generated from the 3D brain dataset; the SPIRiT, SAKE, and STEP procedures were attempting image reconstruction after the axial slice reference image was uniformly sampled at a net reduction factor of 3.8.
  • the white arrows in the SPIRiT image indicate reconstruction aliasing artifacts.
• the middle row of FIG. 12 shows, from left to right, a reference image of a zoomed-in portion of the axial slice and five reconstruction images of the zoomed-in reference image generated using, respectively, the SPIRiT procedure, the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and a STEP procedure.
• the reconstruction results of STEP can provide fewer residual aliasing artifacts compared with SPIRiT and SAKE.
• STEP without either of the k-space partitioning and basis selection tailored procedures reduces to SAKE.
• FIG. 12 also illustrates that the k-space partitioning and basis selection tailored procedures separately combined with SAKE can reduce aliasing artifacts, but not as much as STEP does.
• the bottom row of FIG. 12 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SPIRiT, STEP, and various SAKE procedures across all slices as a quantitative evaluation of nRMSE and mSSIM for the other 2D axial slices.
  • STEP can achieve consistent improvement of image quality for all slices compared with SPIRiT as shown in Table 2 below.
• FIG. 13 illustrates a performance comparison between the SPIRiT, L1-SPIRiT, STEP, and L1-STEP reconstruction procedures for a 3D knee dataset and an artery wall in the knee from 5.6-fold vd-PDS auto-calibrating sampling with elliptical shutter, in accordance with an example embodiment.
• the L1-SPIRiT and L1-STEP procedures include an L1-norm penalty in a wavelet domain, while the SPIRiT and STEP procedures do not include the L1-norm penalty.
• FIG. 13 has a top row and a middle row, both having five images, and a bottom row with two graphs.
• the top row of FIG. 13 shows, from left to right, a reference image of an axial slice of the reconstructed 3D knee dataset and four reconstruction images of the axial slice reference image generated using, respectively, the SPIRiT procedure, an L1-SPIRiT procedure, the STEP procedure, and an L1-STEP procedure.
• the middle row of FIG. 13 shows, from left to right, a reference image of a zoomed-in portion of a popliteal artery area shown in the axial slice and four reconstruction images of the zoomed-in reference image generated using, respectively, the SPIRiT procedure, the L1-SPIRiT procedure, the STEP procedure, and the L1-STEP procedure.
  • the lower-left corner of the zoomed-in reference image includes an example of manual selection for lumen area and wall pixels.
  • the zoomed-in images demonstrate that STEP more clearly preserves vessel wall boundary information than do SPIRiT based approaches.
• the bottom row of FIG. 13 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SPIRiT, L1-SPIRiT, STEP, and L1-STEP procedures across all slices as a quantitative image quality comparison.
  • the nRMSE and mSSIM graphs suggest the benefit of STEP based approaches for various slices of MR images.
• Table 3 below shows a comparison of wall-lumen signal ratios computed on all slices from the SPIRiT-based and STEP-based procedures, which further indicates that the STEP-based procedures can provide more accurate tissue contrast between vessel wall and lumen.
• STEP performance is related to a number of parameters, including: a number of data blocks in a k-space partition, a rank value estimated for each subspace, and a regularization weight of the statistical a priori analysis.
• Increasing the number of data blocks can improve the capture of local data features, but more data blocks lead to the establishment of more local subspaces, each with less k-space data, and so might cause overfitting, particularly in low-SNR data blocks.
• both 3 × 3 and 5 × 5 uniform overlapping k-space partitions were found to be a good compromise between computational complexity and performance.
  • the k-space partitioning can be further optimized in a non-uniform manner with consideration of k-space magnitude variation information and/or application of a high-pass filter to transform the signal characteristics before k-space partition.
  • the rank value and the regularization weight can either be identical or distinct in different partitioned data blocks.
  • rank values were found to slightly vary within blocks due to varying signal-to-noise characteristics (see FIG. 8).
  • the rank value was estimated from a central k-space area with sufficient SNR or other scans using the same coil and applied to peripheral k-space areas.
• The same regularization weight was also used in different data blocks, with the assumption of a spatially invariant threshold for basis selection and uniformly distributed Gaussian noise in k-space. The regularization weight, configured within a proper range, can improve the subspace basis selection for one data block and gives rise to a visible enhancement of noise suppression across all imaging slices. However, overly weighted regularization can cause over-smoothing of sharp edges.
• Each STEP iteration can include an SVD for each data block.
  • k-space partitioning can reduce the dimension for each data block and thus the computational complexity of SVD.
• the average reconstruction time of SAKE was 221 s, while STEP took 140 s.
  • the sequential computation of SVD still dominates most of the reconstruction time.
  • SVD implementations can be accelerated and/or can utilize multicore parallel computing for different data blocks.
  • heuristic initialization can lead to fast convergence as the optimization problem in Eq. [4] is not jointly convex.
• five to eight iterations of the SPIRiT method can initialize k-space for STEP operating on auto-calibrating image data, while similar information embedded between adjacent slices can be used with calibration-less image data.
• a fully sampled central k-space area in auto-calibrating experiments can be used to implement SPIRiT, which can provide a fast and good initialization for STEP; however, this strategy cannot be applied for calibration-less image data having an under-sampled central k-space.
  • FIG. 14A depicts a network 1400 in accordance with an example embodiment.
  • servers 1408 and 1410 are configured to communicate, via a network 1406, with client devices 1404a, 1404b, and 1404c.
  • client devices can include a personal computer 1404a, a laptop computer 1404b, and a smart-phone 1404c.
• client devices 1404a-1404c can be any sort of computing device, such as a workstation, network terminal, desktop computer, laptop computer, wireless communication device (e.g., a cell phone or smart phone), and so on.
  • the network 1406 can correspond to a local area network, a wide area network, a corporate intranet, the public Internet, combinations thereof, or any other type of network(s) configured to provide communication between networked computing devices.
  • Servers 1408 and 1410 can share content and/or provide content to client devices 1404a-1404c. As shown in FIG. 14A, servers 1408 and 1410 are not physically at the same location. Alternatively, servers 1408 and 1410 can be co-located, and/or can be accessible via a network separate from network 1406. Although FIG. 14A shows three client devices and two servers, network 1406 can service more or fewer than three client devices and/or more or fewer than two servers.
• FIG. 14B is a block diagram of an example computing device 1420 including user interface module 1421, network-communication interface module 1422, one or more processors 1423, and data storage 1424, in accordance with embodiments of the invention.
• computing device 1420 shown in FIG. 14B can be configured to perform one or more functions of client devices 1404a-1404c, servers 1408, 1410, computing device 1432, method 1500, and method 1600, and/or to implement at least part of the herein-described techniques, including but not limited to, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique.
  • Computing device 1420 can include a user interface module 1421, a network-communication interface module 1422, one or more processors 1423, and data storage 1424, all of which may be linked together via a system bus, network, or other connection mechanism 1425.
  • Computing device 1420 can be a desktop computer, laptop or notebook computer, personal data assistant (PDA), mobile phone, embedded processor, or any similar device that is equipped with at least one processing unit capable of executing machine-language instructions that implement at least part of the herein-described techniques and methods, including but not limited to functionality related to method 1500, method 1600, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique.
  • User interface 1421 can receive input and/or provide output, perhaps to a user.
  • User interface 1421 can be configured to send and/or receive data to and/or from user input from input device(s), such as a keyboard, a keypad, a touch screen, a computer mouse, a track ball, a joystick, and/or other similar devices configured to receive user input from a user of the computing device 1420.
• User interface 1421 can be configured to provide output to output display devices.
• User interface module 1421 can also be configured to generate audible output(s) via a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices configured to convey sound and/or audible information to a user of computing device 1420.
  • user interface module 1421 can be configured to provide haptic outputs such as tactile feedback, vibrations, forces, motions, and/or other touch-related outputs.
  • Network-communication interface module 1422 can be configured to send and receive data over wireless interfaces 1427 and/or wired interfaces 1428 via a network, such as network 1406.
• Wired interface(s) 1428, if present, can include a wire, cable, fiber-optic link and/or similar physical connection to a data network, such as a wide area network (WAN), a local area network (LAN), one or more public data networks, such as the Internet, one or more private data networks, or any combination of such networks.
  • Wireless interface(s) 1427 can utilize an air interface, such as a ZigBee, Wi-Fi, and/or WiMAX interface to a data network, such as a WAN, a LAN, one or more public data networks (e.g., the Internet), one or more private data networks, or any combination of public and private data networks.
  • network-communication interface module 1422 can be configured to provide reliable, secured, and/or authenticated communications. For each communication described herein, information for ensuring reliable communications (i.e., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation header(s) and/or footer(s), size/time information, and transmission verification information such as CRC and/or parity check values).
  • Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, DES, AES, RSA, Diffie-Hellman, and/or DSA.
• Other cryptographic protocols and/or algorithms can be used as well as, or in addition to, those listed herein to secure (and then decrypt/decode) communications.
  • Processor(s) 1423 can include one or more central processing units, computer processors, mobile processors, digital signal processors (DSPs), GPUs, microprocessors, computer chips, and/or other processing units configured to execute machine-language instructions and process data.
  • Processor(s) 1423 can be configured to execute computer-readable program instructions 1426 that are contained in data storage 1424 and/or other instructions as described herein.
  • Data storage 1424 can include one or more physical and/or non-transitory storage devices, such as read-only memory (ROM), random access memory (RAM), removable-disk- drive memory, hard-disk memory, magnetic-tape memory, flash memory, and/or other storage devices.
  • Data storage 1424 can include one or more physical and/or non-transitory storage devices with at least enough combined storage capacity to contain computer-readable program instructions 1426 and any associated/related data structures.
• Computer-readable program instructions 1426 and any data structures contained in data storage 1424 include computer-readable program instructions executable by processor(s) 1423 and any storage required, respectively, at least to perform functionality related to the herein-described techniques and/or methods, including but not limited to, functionality related to method 1500, method 1600, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique.
  • FIG. 14C is a block diagram of an example magnetic resonance device 1430 including computing device 1432 and k coils 1434, where k > 0, in accordance with embodiments of the invention.
  • Computing device 1432 can be a computing device with one or more processors and data storage, such as computing device 1420 discussed above in the context of FIG. 14B.
  • Coils 1434 can include one or more (k) radio frequency, gradient, and/or other coils. Coils 1434 can be configured to transmit and/or receive electromagnetic radiation and/or magnetic forces between magnetic resonance device 1430 and a body under observation; e.g., part or all of a body of a patient. Coils 1434 can include one or more surface coils, body coils, Helmholtz pair coils, paired saddle coils, bird cage coils, and/or other types of coils.
  • FIGS. 15A and 15B describe a method 1500 for determining a reconstruction matrix A2 of a data matrix A.
  • the data matrix A can be sampled in a first domain, such as a signal acquisition domain.
  • A can be a matrix of data values captured using one or more coils of a magnetic resonance system, where the data from each coil are captured separately and combined to form the data matrix A.
  • the first domain can be related to a second domain, such as a signal observation domain.
  • the signal observation domain can be an image, such as an MR image, related to the first domain by a transformation, such as a Fourier transformation or other transformation.
  • the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
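By way of illustration, the relationship between the two domains can be sketched with a discrete Fourier transform pair. This is a minimal example using NumPy; the shift conventions, function names, and the 8 x 8 size are assumptions introduced here, not part of the disclosed embodiments:

```python
import numpy as np

def to_image_domain(kspace):
    """First domain (k-space) -> second domain (image): an inverse 2-D FFT."""
    return np.fft.ifft2(np.fft.ifftshift(kspace))

def to_kspace_domain(image):
    """Second domain (image) -> first domain (k-space): a forward 2-D FFT."""
    return np.fft.fftshift(np.fft.fft2(image))

# A small complex-valued "acquired" matrix; transforming it to the image
# domain and back recovers the original k-space data.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
roundtrip = to_kspace_domain(to_image_domain(A))
```

The round trip illustrates why the second domain "can be related back to the first domain" exactly by the inverse transformation.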
  • FIG. 15A indicates that method 1500 can begin at block 1510, where a computing device can receive a data matrix A, where the computing device can be, for example, computing device 1420 discussed above in the context of FIG. 14B.
  • the data matrix A can include data captured from a magnetic resonance system.
  • the data matrix A can be sampled from a first domain and can be transformed by a transform Fl to a second domain, where the second domain differs from the first domain.
  • the first domain can be a signal acquisition domain, such as a k-space domain.
  • the second domain can be a signal observation domain, such as an image domain.
  • the computing device can perform one or more iterations to determine reconstruction matrix A2 of the data matrix A.
  • An iteration of the one or more iterations can include the procedures of blocks 1520 through 1584 described immediately below and shown on FIG. 15B.
  • the computing device can reshape the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A.
  • Each patch can include at most NP entries of the data matrix A, where NP > 0.
  • Each column j of the matrix X can be based on a patch P1(j) of the plurality P1 of non-overlapping patches.
  • each of the patches in P1 may have at most NP entries. For example, if A were a 7 x 7 matrix and NP is 25, based on a 5 x 5 patch, then one example set of 4 non-overlapping patches covers A with a 5 x 5 patch, a 5 x 2 patch, a 2 x 5 patch, and a 2 x 2 patch.
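A minimal sketch of such a non-overlapping partition follows; the function name `reshape_into_patches` and the use of NumPy are illustrative assumptions, not part of the disclosed embodiments:

```python
import numpy as np

def reshape_into_patches(A, patch_size):
    """Partition A into non-overlapping patches with at most
    patch_size * patch_size entries each (edge patches may be smaller)."""
    return [A[r:r + patch_size, c:c + patch_size]
            for r in range(0, A.shape[0], patch_size)
            for c in range(0, A.shape[1], patch_size)]

# A 7 x 7 matrix with NP = 25 (5 x 5 patches) yields 4 non-overlapping
# patches -- 5x5, 5x2, 2x5, and 2x2 -- that together cover A exactly.
A = np.arange(49).reshape(7, 7)
patches = reshape_into_patches(A, 5)
```

Each patch, vectorized (and, where patches have fewer than NP entries, zero-padded), could then form one column j of the matrix X.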
  • the computing device can determine a dictionary D1 and weights W1.
  • the dictionary D1 can include K atoms, with K > 0, and where each atom can have NP entries.
  • the weights W1 can include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms.
  • each atom in dictionary D1 can be a patch from A, a representation of a patch, or some other data.
  • D1 is arranged as a matrix with the K atoms stored as columns in the matrix. Then, for patch P1(i), the product D1*W1(i) can represent a linear combination of the atoms in D1. In some other embodiments, the product D1*W1(i) can equal P1(i); while in still other embodiments, the product D1*W1(i) can approximate P1(i).
  • determining a dictionary D1 and weights W1 can include determining the dictionary D1 and the weights W1 iteratively.
  • an iteration for determining D1 and W1 iteratively can involve first fixing the dictionary D1 and determining weights W1 and then fixing the weights W1 and determining dictionary D1 (or vice versa).
  • the dictionary D1 and the weights W1 can be determined by minimizing a sum of residual values, where each residual value of the residual values includes a residual value for a patch. For an example of minimizing the sum, see the equation for formulating the KDL technique in the discussion of FIG. 1 above.
  • minimizing the sum of the residual values can include determining a residual value for a patch P1(j) of the plurality P1 of non-overlapping patches by determining a difference between data values y(j), which are the jth column of data values in the data matrix A corresponding to patch P1(j), and a product of the dictionary D1 and sparse vector W1(j) of the weights W1.
  • the product of the dictionary D1 and sparse vector W1(j) of the weights W1 can include a product of a patch P1(j), the dictionary D1, and the sparse vector W1(j).
  • method 1500 can include reducing the weights W1(i) so that an absolute magnitude of a sum of the weights W1(i) approaches a predetermined value.
  • the predetermined value can be 0.
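A hedged sketch of determining D1 and W1 iteratively, alternating between fixing the dictionary and fixing the weights as described above, is shown below. Matching pursuit for the sparse weights and a least-squares dictionary update are illustrative choices only; the disclosed embodiments need not use these particular algorithms, and all names and sizes here are assumptions:

```python
import numpy as np

def sparse_code(D, X, n_nonzero=2):
    """Fix the dictionary D; determine sparse weights W by matching pursuit,
    so each column of X is approximated by at most n_nonzero atoms."""
    W = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        residual = X[:, j].copy()
        for _ in range(n_nonzero):
            k = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
            w = D[:, k] @ residual
            W[k, j] += w
            residual = residual - w * D[:, k]
    return W

def update_dictionary(X, W):
    """Fix the weights W; re-estimate D by least squares, renormalizing atoms."""
    D = X @ np.linalg.pinv(W)
    norms = np.linalg.norm(D, axis=0)
    return D / np.where(norms > 0, norms, 1.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 40))   # NP = 25 entries per vectorized patch
D1 = rng.standard_normal((25, 8))   # K = 8 atoms
D1 /= np.linalg.norm(D1, axis=0)
for _ in range(5):                  # alternate: fix D1, solve W1; then fix W1, solve D1
    W1 = sparse_code(D1, X)
    D1 = update_dictionary(X, W1)
W1 = sparse_code(D1, X)
X1 = D1 @ W1                        # estimated matrix X1 of X
```

The final product D1 @ W1 plays the role of the estimated matrix X1, with each column of W1 sparse as required.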
  • a dictionary D can include the dictionary D1 for the first domain and the dictionary D2 for the second domain.
  • weights W can include weights W1 for the first domain and weights W2 for the second domain.
  • the dictionary D1 and the weights W1 can be determined by at least: minimizing a sum of a first plurality of residual values, where a residual value of the first plurality of residual values includes a difference between a product of D1 and W1 and the second transform F2 of a product of D2 and W2; and, after determining the dictionary D1 and the weights W1, minimizing a sum of a second plurality of residual values, where a residual value of the second plurality of residual values includes a difference between a product of D2 and W2 and the first transform F1 of a product of D1 and W1.
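The two coupled minimizations in the preceding bullet can be sketched in notation as follows; the patchwise indexing [.](j) and the sparsity bounds T1 and T2 are assumptions introduced here for illustration, to be compared with the KDL formulation referenced in the discussion of FIG. 1:

```latex
% First: fix D_2, W_2 and solve in the first domain
\min_{D_1, W_1} \sum_j \big\| D_1 W_1(j) - \big[F_2(D_2 W_2)\big](j) \big\|_2^2
  \quad \text{s.t. } \|W_1(j)\|_0 \le T_1
% Then: fix the updated D_1, W_1 and solve in the second domain
\min_{D_2, W_2} \sum_j \big\| D_2 W_2(j) - \big[F_1(D_1 W_1)\big](j) \big\|_2^2
  \quad \text{s.t. } \|W_2(j)\|_0 \le T_2
```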
  • the computing device can generate an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1.
  • the computing device can reshape the estimated matrix X1 to form a data matrix A1.
  • the computing device can apply a first transform F1 to the data matrix A1 to obtain a data matrix I.
  • the data matrix A1 can include values in a first domain and the data matrix I can include values in a second domain that is different from the first domain.
  • the computing device can reshape the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 can include at most NP2 entries of the data matrix I, and where NP2 > 0.
  • the computing device can determine a dictionary D2 and weights W2 in the second domain.
  • the dictionary D2 can include K2 atoms, where K2 > 0.
  • Each atom can have NP2 entries.
  • the weights W2 can include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms.
  • the computing device can generate an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2.
  • the computing device can apply a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2.
  • the reconstruction matrix A2 can include values in the first domain.
  • the computing device can, for each value A(t) in the data matrix A on an acquisition trajectory, replace a corresponding value A2(t) with the value A(t).
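The replacement of values on the acquisition trajectory (a data-consistency step) can be sketched as follows; representing the trajectory as a boolean mask is an assumption introduced here for illustration:

```python
import numpy as np

def enforce_data_consistency(A2, A, trajectory):
    """For each value A(t) on the acquisition trajectory, replace the
    corresponding reconstructed value A2(t) with the acquired value A(t)."""
    out = A2.copy()
    out[trajectory] = A[trajectory]
    return out

A = np.arange(16.0).reshape(4, 4)          # acquired data matrix
A2 = np.zeros((4, 4))                      # current reconstruction
trajectory = np.zeros((4, 4), dtype=bool)  # sampled k-space locations
trajectory[::2, :] = True                  # e.g., every other line acquired
A2c = enforce_data_consistency(A2, A, trajectory)
```

After this step, the reconstruction agrees exactly with the acquired data wherever data were actually sampled, while the remaining entries keep their estimated values.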
  • the computing device can determine if all iterations are complete. If all of the iterations are complete, method 1500 can proceed to block 1592. However, if not all of the iterations are complete, method 1500 can return to block 1520 and perform another iteration.
  • the computing device can provide the reconstruction matrix A2.
  • the computing device can be embedded in an MR device that also has one or more coils, such as MR device 1430.
  • the data matrix A can be determined using the one or more coils of the MR device.
  • a device can be provided with means to perform any or all of the above-disclosed procedures, techniques, and/or blocks associated with method 1500.
  • FIG. 16 describes a method 1600 for determining a reconstruction matrix A2 of a data matrix A.
  • the data matrix A can be sampled in a first domain, such as a signal acquisition domain.
  • A can be a matrix of data values captured using one or more coils of an MR system, where the data from each coil are captured separately and combined to form the data matrix A.
  • the first domain can be related to a second domain, such as a signal observation domain.
  • the signal observation domain can be an image, such as an MR image, related to the first domain by a transformation, such as a Fourier transformation or other transformation.
  • the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
  • FIG. 16 indicates that method 1600 can begin at block 1610, where a computing device can receive a data matrix A, where the computing device can be, for example, computing device 1420 discussed above in the context of FIG. 14B.
  • the data matrix A can include data captured from a magnetic resonance system.
  • the data matrix A can be sampled from a first domain and can be transformed by a transform Fl to a second domain, where the second domain differs from the first domain.
  • the first domain can be a signal acquisition domain, such as a k-space domain.
  • the second domain can be a signal observation domain, such as an image domain.
  • the computing device can perform one or more iterations to determine reconstruction matrix A2 of the data matrix A.
  • An iteration of the one or more iterations can include the procedures of blocks 1620 through 1660 described immediately below.
  • the computing device can partition the data matrix A into a plurality of data blocks, such as discussed above in the context of at least FIGS. 7-14.
  • the computing device can, for a particular data block L of the plurality of data blocks, perform at least the procedures of blocks 1640, 1642, and 1644.
  • At block 1640, the computing device can extract the data block L from the data matrix A.
  • the computing device can vectorize the data block L into a data sub-matrix AL.
  • the computing device can factor AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property.
  • factoring A L into at least the matrices D L and W L can include determining D L using SVD.
  • factoring AL into at least matrices DL and WL while enforcing at least the low-rank property and the sparsity property can include factoring AL into at least matrices DL and WL while enforcing at least the low-rank property, the sparsity property, and a joint sparsity property in a wavelet domain.
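A hedged sketch of factoring AL with a truncated SVD, enforcing the low-rank property by keeping K singular components, appears below. The shapes and the synthetic rank-3 block are illustrative assumptions; the disclosed embodiments may enforce additional properties (e.g., sparsity) not shown here:

```python
import numpy as np

def factor_block(AL, K):
    """Factor a vectorized data block AL into DL and WL with a truncated SVD,
    enforcing the low-rank property rank(DL @ WL) <= K."""
    U, s, Vh = np.linalg.svd(AL, full_matrices=False)
    DL = U[:, :K]                 # basis for a K-dimensional signal subspace
    WL = s[:K, None] * Vh[:K, :]  # coefficients of AL in that basis
    return DL, WL

rng = np.random.default_rng(1)
# A synthetic data sub-matrix whose true rank is 3.
AL = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))
DL, WL = factor_block(AL, K=3)
```

Because the synthetic block has rank 3, keeping K = 3 components reconstructs it essentially exactly; for noisy blocks the truncation instead projects onto the dominant K-dimensional subspace.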
  • the computing device can form a weighted data block by averaging values of at least the factored matrices DL and WL.
  • the computing device can generate the reconstruction matrix A2 based on the weighted data block.
  • method 1600 additionally includes determining an L1-norm of the matrix WL, and determining a basis selection based on the L1-norm of the matrix WL.
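One illustrative way to base a basis selection on the L1-norm of WL is to drop basis vectors whose coefficient rows carry little L1 energy. The relative threshold and the function name `select_basis` are assumptions introduced here, not the claimed procedure:

```python
import numpy as np

def select_basis(DL, WL, rel_threshold=0.05):
    """Basis selection from the L1-norm of WL: keep the columns of DL whose
    coefficient rows in WL have L1-norm above a relative threshold."""
    l1 = np.abs(WL).sum(axis=1)           # L1-norm of each row of WL
    keep = l1 >= rel_threshold * l1.max()
    return DL[:, keep], WL[keep, :]

DL = np.eye(4)                            # four candidate basis vectors
WL = np.array([[5.0, -3.0, 2.0],
               [0.0,  0.0, 0.0],          # never used: L1-norm is 0
               [1.0,  1.0, 1.0],
               [0.0,  0.1, 0.0]])         # nearly unused: small L1-norm
D_sel, W_sel = select_basis(DL, WL)
```

Here the zero-energy and near-zero-energy bases are discarded, leaving a tailored two-vector basis; such tailoring is one way a spatially variant signal subspace could be promoted per block.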
  • method 1600 additionally includes determining a rank value K; in these embodiments, enforcing at least the low-rank property and the sparsity property can include enforcing that a rank of DL is less than or equal to K.
  • the computing device can determine if all iterations are complete. If all of the iterations are complete, method 1600 can proceed to block 1680. However, if not all of the iterations are complete, method 1600 can return to block 1620 and perform another iteration.
  • the computing device can provide the reconstruction matrix A2.
  • the computing device can be embedded in an MR device that also has one or more coils, such as MR device 1430.
  • block 1610 for receiving data matrix A can include determining the data matrix A using the one or more coils.
  • a device can be provided with means to perform any or all of the above- disclosed procedures, techniques, and/or blocks associated with method 1600.

Abstract

This disclosure relates to techniques for magnetic resonance (MR) image reconstruction. A herein-described k-space dictionary learning (KDL) technique and a herein-described dual space dictionary learning (DSDL) technique can use dictionaries to approximate an image and/or a data matrix in a signal observation domain for MR imaging by a weighted sum of dictionary entries. A herein-described self-supporting tailored k-space estimation for parallel imaging (STEP) technique can use k-space partitioning and tailored basis selection procedures to promote a spatially variant signal subspace, which is incorporated into a self-supporting structured low-rank model that enforces locality, sparsity, and rank-deficiency properties. The model can be formulated as a constrained optimization problem solvable by an iterative algorithm.

Description

DUAL SPACE DICTIONARY LEARNING FOR MAGNETIC RESONANCE (MR)
IMAGE RECONSTRUCTION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Provisional Patent Application No. 61/983,978 entitled "Dual Space Dictionary Learning for Magnetic Resonance (MR) Image Reconstruction", filed April 24, 2014, which is entirely incorporated by reference herein for all purposes.
STATEMENT OF GOVERNMENT RIGHTS
[0002] This invention was made with government support under Grant Nos. R21 NS072464 and R01 HL103609 from the National Institutes of Health. The government has certain rights in the invention.
BACKGROUND OF THE INVENTION
[0003] Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
[0004] A magnetic resonance (MR) device can emit magnetic fields and radio waves. The emitted magnetic fields can cause protons in a body under observation to be uniformly aligned. The uniform proton alignment creates a magnetic vector that can be reflected by the emitted radio waves. The strength of the magnetic fields can be altered electronically during an MR scan. By altering the local magnetic field by these small increments, different slices of the body will resonate as different frequencies are applied. When the radio waves are not emitted, the magnetic vector returns to its resting state, causing a signal (also a radio wave) to be emitted. The emitted signal may be captured by one or more of k (k > 0) receiver coils in the MR device, and the captured signal can be used to create MR images.
SUMMARY
[0005] In one aspect, a method is provided. A computing device receives a data matrix A. The computing device performs one or more iterations to determine a reconstruction matrix A2 of the data matrix. An iteration of the one or more iterations includes: (1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1; (4) reshaping the estimated matrix X1 to form a data matrix A1; (5) applying a first transform F1 to the data matrix A1 to obtain a data matrix I, where the data matrix A1 includes values in a first domain, and where the data matrix I includes values in a second domain different from the first domain; (6) reshaping the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 includes at most NP2 entries of the data matrix I, NP2 > 0; (7) determining a dictionary D2 and weights W2 in the second domain, where the dictionary D2 includes K2 atoms, K2 > 0, each atom having NP2 entries, and where the weights W2 include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms; (8) generating an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2; (9) applying a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2, where the reconstruction matrix A2 includes values in the first domain; and (10) for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t). The computing device can determine whether the one or more iterations are complete. After determining that the one or more iterations are complete, the computing device can provide the reconstruction matrix A2.
[0006] In another aspect, a computing device is provided. The computing device includes a processor and a tangible computer-readable medium. The tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1; (b4) reshaping the estimated matrix X1 to form a data matrix A1; (b5) applying a first transform F1 to the data matrix A1 to obtain a data matrix I, where the data matrix A1 includes values in a first domain, and where the data matrix I includes values in a second domain different from the first domain; (b6) reshaping the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 includes at most NP2 entries of the data matrix I, NP2 > 0; (b7) determining a dictionary D2 and weights W2 in the second domain, where the dictionary D2 includes K2 atoms, K2 > 0, each atom having NP2 entries, and where the weights W2 include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms; (b8) generating an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2; (b9) applying a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2, where the reconstruction matrix A2 includes values in the first domain; and (b10) for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t); (c) determining whether the one or more iterations are complete; and (d) after determining that the one or more iterations are complete, providing the reconstruction matrix A2.
[0007] In another aspect, a magnetic resonance system is provided. The magnetic resonance system includes one or more coils and a computing device. The computing device includes a processor and a tangible computer-readable medium. The tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1; (b4) reshaping the estimated matrix X1 to form a data matrix A1; (b5) applying a first transform F1 to the data matrix A1 to obtain a data matrix I, where the data matrix A1 includes values in a first domain, and where the data matrix I includes values in a second domain different from the first domain; (b6) reshaping the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 includes at most NP2 entries of the data matrix I, NP2 > 0; (b7) determining a dictionary D2 and weights W2 in the second domain, where the dictionary D2 includes K2 atoms, K2 > 0, each atom having NP2 entries, and where the weights W2 include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms; (b8) generating an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2; (b9) applying a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2, where the reconstruction matrix A2 includes values in the first domain; and (b10) for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t); (c) determining whether the one or more iterations are complete; and (d) after determining that the one or more iterations are complete, providing the reconstruction matrix A2.
[0008] In another aspect, a tangible computer-readable medium is provided. The tangible computer-readable medium is configured to include instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1; (b4) reshaping the estimated matrix X1 to form a data matrix A1; (b5) applying a first transform F1 to the data matrix A1 to obtain a data matrix I, where the data matrix A1 includes values in a first domain, and where the data matrix I includes values in a second domain different from the first domain; (b6) reshaping the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 includes at most NP2 entries of the data matrix I, NP2 > 0; (b7) determining a dictionary D2 and weights W2 in the second domain, where the dictionary D2 includes K2 atoms, K2 > 0, each atom having NP2 entries, and where the weights W2 include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms; (b8) generating an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2; (b9) applying a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2, where the reconstruction matrix A2 includes values in the first domain; and (b10) for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t); (c) determining whether the one or more iterations are complete; and (d) after determining that the one or more iterations are complete, providing the reconstruction matrix A2.
[0009] In another aspect, a device is provided. The device includes: (a) means for receiving a data matrix A; (b) means for performing one or more iterations to determine a reconstruction matrix A2 of the data matrix, where an iteration of the one or more iterations includes: (b1) reshaping the data matrix A into a matrix X using a plurality P1 of non-overlapping patches that cover the data matrix A with each patch including at most NP entries of the data matrix A and with NP > 0, where each column j of the matrix X is based on a patch P1(j) of the plurality P1 of non-overlapping patches; (b2) determining a dictionary D1 and weights W1, where the dictionary D1 includes K atoms, K > 0, each atom having NP entries, and where the weights W1 include, for each patch P1(i) of the plurality P1 of non-overlapping patches, a sparse vector of weights W1(i) for combining the K atoms; (b3) generating an estimated matrix X1 of the matrix X based on a combination of the dictionary D1 and the weights W1; (b4) reshaping the estimated matrix X1 to form a data matrix A1; (b5) applying a first transform F1 to the data matrix A1 to obtain a data matrix I, where the data matrix A1 includes values in a first domain, and where the data matrix I includes values in a second domain different from the first domain; (b6) reshaping the data matrix I into a matrix I2 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 includes at most NP2 entries of the data matrix I, NP2 > 0; (b7) determining a dictionary D2 and weights W2 in the second domain, where the dictionary D2 includes K2 atoms, K2 > 0, each atom having NP2 entries, and where the weights W2 include, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms; (b8) generating an estimated matrix I1 of the matrix I2 based on a combination of the dictionary D2 and the weights W2; (b9) applying a second transform F2 to the estimated matrix I1 to obtain reconstruction matrix A2, where the reconstruction matrix A2 includes values in the first domain; and (b10) for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t); (c) means for determining whether the one or more iterations are complete; and (d) means for, after determining that the one or more iterations are complete, providing the reconstruction matrix A2.
[0010] In another aspect, a method is provided. A computing device receives a data matrix A. The computing device performs one or more iterations to determine reconstruction matrix A2 of the data matrix A. An iteration of the one or more iterations includes: (1) partitioning the data matrix A into a plurality of data blocks; (2) for a particular data block L of the plurality of data blocks: (2a) extracting the data block L from the data matrix A; (2b) vectorizing the data block L into a data sub-matrix AL, and (2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (4) generating the reconstruction matrix A2 based on the weighted data block. The computing device provides the reconstruction matrix A2.
[0011] In another aspect, a computing device is provided. The computing device includes a processor and a tangible computer-readable medium. The tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
[0012] In another aspect, a magnetic resonance system is provided. The magnetic resonance system includes one or more coils and a computing device. The computing device includes a processor and a tangible computer-readable medium. The tangible computer-readable medium is configured to include instructions that, when executed by the processor, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
[0013] In another aspect, a tangible computer-readable medium is provided. The tangible computer-readable medium is configured to include instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform functions. The functions include: (a) receiving a data matrix A; (b) performing one or more iterations to determine reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL, and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) providing the reconstruction matrix A2.
[0014] In another aspect, a device is provided. The device includes: (a) means for receiving a data matrix A; (b) means for performing one or more iterations to determine a reconstruction matrix A2 of the data matrix A, where an iteration of the one or more iterations includes: (b1) partitioning the data matrix A into a plurality of data blocks; (b2) for a particular data block L of the plurality of data blocks: (b2a) extracting the data block L from the data matrix A; (b2b) vectorizing the data block L into a data sub-matrix AL; and (b2c) factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; (b3) forming a weighted data block by averaging values of at least the factored matrices DL and WL; and (b4) generating the reconstruction matrix A2 based on the weighted data block; and (c) means for providing the reconstruction matrix A2.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Various examples of particular embodiments are described herein with reference to the following drawings, wherein like numerals denote like entities, in which:
[0016] FIG. 1 is a flowchart depicting the KDL technique, in accordance with an example embodiment;
[0017] FIG. 2 illustrates a performance comparison at x3 acceleration with insufficient ACS data between SAKE and KDL, in accordance with an example embodiment;
[0018] FIG. 3 illustrates a performance comparison at x4 acceleration with ACS data between SPIRiT, SAKE, and KDL, in accordance with an example embodiment;
[0019] FIG. 4 is a flowchart depicting the DSDL technique, in accordance with an example embodiment;
[0020] FIG. 5 illustrates a performance comparison at x4 retrospective acceleration between L1-SPIRiT and DSDL, in accordance with an example embodiment;
[0021] FIG. 6 illustrates a performance comparison at x2.67 prospective acceleration between L1-SPIRiT and DSDL, in accordance with an example embodiment;
[0022] FIG. 7 is a flowchart of the STEP procedure for iterative imaging reconstruction, in accordance with an example embodiment;
[0023] FIG. 8 demonstrates a local subspace basis and a Laplace a priori model, in accordance with an example embodiment;

[0024] FIG. 9 shows performance improvement by using a local signal characteristic promotion model at three-fold acceleration, in accordance with an example embodiment;
[0025] FIG. 10 shows a comparison of different reconstruction algorithms in reconstruction result with 5 times magnified mean error and an empirical g-factor map in a uniformly under- sampled pattern with 30 x 30 calibration area, in accordance with an example embodiment;
[0026] FIG. 11 illustrates a performance comparison between SAKE and STEP from fourfold vd-PDS calibration-less sampling with elliptical shutter, in accordance with an example embodiment;
[0027] FIG. 12 illustrates a performance comparison between the SPIRiT, SAKE, and STEP reconstruction procedures for a 3D brain dataset, in accordance with an example embodiment;
[0028] FIG. 13 illustrates a performance comparison between the SPIRiT, L1-SPIRiT, STEP, and L1-STEP reconstruction procedures for a 3D knee dataset and an artery wall in the knee from 5.6-fold vd-PDS auto-calibrating sampling with elliptical shutter, in accordance with an example embodiment;
[0029] FIG. 14A is a block diagram of a computing environment, in accordance with an example embodiment;
[0030] FIG. 14B is a block diagram of an example computing device, in accordance with an example embodiment;
[0031] FIG. 14C is a block diagram of an example magnetic resonance device, in accordance with an example embodiment;
[0032] FIGS. 15A and 15B are a flowchart of a method, in accordance with an example embodiment; and
[0033] FIG. 16 is a flowchart of another method, in accordance with an example embodiment.
DETAILED DESCRIPTION
[0034] This disclosure relates to techniques for magnetic resonance imaging (MRI). Data from an MR device can be related to one of at least two domains: a signal acquisition or k-space domain, where input (radio) signals are received from a body of interest at some or all of k coils of the MR device, and a signal observation or image domain related to images generated by the MR device. A data matrix A can be sampled in the signal acquisition domain, where A includes data values captured using one or more coils of a magnetic resonance system. Data from each coil of the MR system can be captured separately and combined to form the data matrix A. Data from data matrix A in the signal acquisition / k-space domain can be transformed into data in a second domain, such as the signal observation / image domain. The transformation to data in the signal observation domain, such as an MR image, can be a Fourier transformation or another transformation. In some embodiments, the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
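By way of illustration only, the two domains can be related with standard centered discrete Fourier transforms. The following NumPy sketch (array sizes and function names are hypothetical, not part of the disclosed embodiments) shows the round trip between a k-space matrix and its image-domain counterpart:

```python
import numpy as np

def kspace_to_image(kspace):
    """Signal acquisition (k-space) domain -> signal observation (image)
    domain via a centered 2D inverse Fourier transform."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def image_to_kspace(image):
    """Image domain -> k-space domain (the inverse transformation)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
# The two domains are related invertibly: a round trip restores the data.
assert np.allclose(kspace_to_image(image_to_kspace(img)), img)
```

In a multichannel system the same transform pair would be applied per coil before combining the coil data.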
[0035] A dictionary can be used to approximate an image in the image domain (or data matrix in the signal observation domain), where a portion of an image is approximated by a weighted sum of dictionary entries, and so the whole image can be approximated by a combination of weighted sums. The dictionary entries can be stored in one or more dictionary matrices, and the corresponding weights, or coefficients, can be stored in one or more coefficient matrices.
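As an illustration of representing an image portion by a weighted sum of dictionary entries, the following sketch (hypothetical sizes and data, not the claimed method) recovers the coefficient vector for a patch that lies in the span of a few dictionary atoms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary: K atoms, each a vectorized 8x8 patch.
K, patch_len = 32, 64
D = rng.standard_normal((patch_len, K))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary entries

# Build a patch that truly is a weighted sum of a few entries.
w_true = np.zeros(K)
w_true[[3, 10, 27]] = [1.5, -0.7, 0.4]
patch = D @ w_true

# Recover the weights by least squares: the patch is represented as
# D @ w, i.e., a weighted sum of dictionary entries.
w, *_ = np.linalg.lstsq(D, patch, rcond=None)
assert np.allclose(D @ w, patch)
```

A whole image would be approximated by repeating this per patch and storing the coefficients in a coefficient matrix.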
[0036] The herein-described techniques include a k-space dictionary learning (KDL) technique, a dual space dictionary learning (DSDL) technique, and a self-supporting tailored k-space estimation for parallel imaging (STEP) technique. In the discussion below, the variable "k" represents a number of imaging coils used by an MRI device of interest, and k-space represents input data from the k imaging coils in a signal acquisition domain, while an image domain represents an image generated by the MRI device using the k-space input data.
[0037] The KDL technique includes training of dictionaries in a signal acquisition (k-space) domain. The DSDL technique includes training of dictionaries in both the k-space domain and an image domain. A dictionary trained in the image domain can represent an information structure corresponding to a high frequency spectrum of k-space, which can compensate for a low signal-to-noise ratio (SNR) scenario in the high frequency spectrum of k-space. A dictionary trained in a k-space domain can preserve image contrast information accumulated in a low frequency spectrum, which can be regarded as a global energy distribution constraint in the image domain.
[0038] The STEP technique is a self-supporting tailored parallel imaging reconstruction method that enhances representation of local data features in k-space. STEP can be implemented in both auto-calibrating and calibration-less conditions. STEP takes advantage of properties of locality, sparsity, and rank deficiency in its iterative reconstruction. In particular, two tailored procedures, k-space partitioning and basis selection, are used to promote a spatially variant signal subspace, which is incorporated into a self-supporting structured low rank model that enforces the locality, sparsity, and rank deficiency properties. This model can be formulated as a constrained optimization problem and solved by an iterative algorithm. Also, an arbitrary sampling trajectory and flexible regularization constraints (compressed sensing) can be integrated into STEP. Experimental results using simulated and in vivo datasets demonstrate that STEP can reduce noise amplification and residual aliasing artifacts while preserving structural information and tissue contrast.
[0039] An advantage of the STEP technique with respect to image quality is demonstrated by retrospectively under-sampled multichannel Cartesian data with various patterns. Compared with the SPIRiT technique and the SAKE technique, the STEP technique can provide more accurate reconstruction images with fewer residual aliasing artifacts and reduced noise amplification in simulation and in vivo experiments. In addition, the STEP technique has the capability of combining compressed sensing with an arbitrary sampling trajectory. Using k-space partitioning and basis selection can further improve the performance of parallel imaging reconstruction with or without calibration signals.
Parallel Reconstruction using Patch Based K-Space Dictionary Learning (KDL)
[0040] A parallel reconstruction technique has been developed using Singular Value Decomposition (SVD) to impose a low rank property without using auto-calibrating signal (ACS) data for MR devices, and this technique can further improve results when a sufficiently sampled ACS area is provided. Although a predetermined SVD factor analysis technique can have good analytical and numerical properties, a learned dictionary can better adapt to acquired data and improve reconstruction results. Herein, a patch-based k-space dictionary learning (KDL) technique for estimating local signal features in k-space is described, and the performance of KDL is demonstrated.
[0041] Theory: FIG. 1 is a flowchart depicting the KDL technique, in accordance with an example embodiment. Low rank matrix completion can be solved by singular-value thresholding. Assuming that A = UΣV^H ≈ U_t Σ_t V_t^H = U_t W, then the missing entries of a low rank matrix A can be completed using a linear combination of the basis column vectors in U_t, where U_t corresponds to a left singular matrix U after hard thresholding.
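The hard-thresholded SVD step can be sketched as follows (illustrative NumPy code with hypothetical sizes; in the actual completion scheme this low-rank projection alternates with re-inserting the acquired k-space entries):

```python
import numpy as np

def hard_threshold_svd(A, rank):
    """Project A onto the set of rank-`rank` matrices: keep the top
    singular values/vectors so that A ~= U_t @ W with W = Sigma_t V_t^H."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    Ut = U[:, :rank]                      # left singular matrix after hard thresholding
    W = np.diag(s[:rank]) @ Vh[:rank, :]  # W = Sigma_t V_t^H
    return Ut @ W

rng = np.random.default_rng(2)
# An exactly rank-3 matrix is reproduced by its rank-3 projection.
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))
assert np.allclose(hard_threshold_svd(A, 3), A)
```

Every column of the projection is, by construction, a linear combination of the basis column vectors in U_t.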
[0042] Also, the weighting coefficient matrix W can be a sparse matrix due to the low rank property of Σ_t. A dictionary can be used to describe this low rank matrix with sparse coefficients. The KDL technique includes defining:

• a division of k-space that includes overlapping patches;
• y_i as a vector of the acquired k-space data with patch index i ranging from 1 to N_p;
• P_i as an under-sampling pattern for patch i;
• D as a dictionary matrix with the following constraint applied to the columns of D: ‖d_k‖₂² ≤ 1, k = 1, 2, …, K; and
• w_i as a sparse coefficient vector for patch i.
[0043] The KDL technique can be formulated as follows:

min_{D∈𝒟, W} Σ_{i=1}^{N_p} ‖y_i − P_i D w_i‖₂² + μ‖w_i‖₁, 𝒟 = {D | ‖d_k‖₂² ≤ 1, k = 1, 2, …, K};
The Projection onto Convex Sets (POCS) technique with block proximal-gradient dictionary learning can be used to solve the above minimization problem as illustrated in FIG. 1.
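One building block of such a POCS / proximal-gradient scheme is the sparse-coding update for the patch coefficients w_i. The sketch below (hypothetical sizes, simplified real-valued data, dictionary update omitted) performs iterative soft-thresholding on the KDL data-fidelity term:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def kdl_sparse_code_step(y_i, P_i, D, w_i, mu, step):
    """One proximal-gradient update on the patch coefficients w_i for
    min ||y_i - P_i D w_i||_2^2 + mu * ||w_i||_1 (illustrative sketch)."""
    A = P_i @ D                           # effective sensing matrix for the patch
    grad = 2.0 * A.T @ (A @ w_i - y_i)    # gradient of the quadratic term
    return soft_threshold(w_i - step * grad, step * mu)

rng = np.random.default_rng(3)
D = rng.standard_normal((49, 60))                  # 7x7 patches, dictionary size 60
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
P = np.eye(49)[rng.choice(49, 20, replace=False)]  # under-sampling pattern for the patch
w_true = soft_threshold(rng.standard_normal(60), 1.0)   # sparse ground-truth weights
y = P @ (D @ w_true)

w = np.zeros(60)
for _ in range(200):                               # ISTA-style loop; objective decreases
    w = kdl_sparse_code_step(y, P, D, w, mu=0.01, step=0.01)
```

In the full KDL technique this coefficient update would alternate with dictionary updates and a projection enforcing consistency with the acquired k-space samples.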
[0044] Experimental Methods: Brain data of a healthy volunteer was scanned on a 3 Tesla whole body MR system (Philips Achieva, R3.2.1, Netherlands) using a standard eight-channel head coil. A three-dimensional (3D) MPRAGE sequence was performed with the following parameters: FOV = 200x200x200mm3, voxel size = 1x1x1mm3, TR/TE = 10/4.1ms, IRTR/TI = 998/498ms, FA = 11, TFE factor = 98, linear encoding, and no fat suppression. The fully sampled k-space data was retrospectively x3/x4 under-sampled with a Poisson Disk Sampling (PDS) pattern with calibration areas of 4 x 4 and 26 x 26. The reconstruction result of the KDL technique was compared with results from both a SPIRiT technique and a SAKE technique based on normalized root mean square error (nRMSE) values. All three techniques used the same 7 x 7 kernel/patch size. In the SAKE technique, the singular-value threshold was fixed at 60. In the KDL technique, dictionary size varied from 60 to 256 to test the impact of dictionary size, where the initial dictionary was trained using SVD. Other relevant parameters were optimized to achieve the best performance for each reconstruction technique.
[0045] Results and Discussion: FIG. 2 illustrates a performance comparison at x3 acceleration with insufficient ACS data between the SAKE and KDL techniques, in accordance with an example embodiment. An image of a PDS sampling mask used in the performance comparison is shown at left of FIG. 2. At center of FIG. 2, an image generated using the SAKE technique is shown, and at right of FIG. 2, an image generated using the KDL technique is shown. For this x3 accelerated performance comparison without calibration, the KDL technique outperforms the reconstruction result of SAKE, as KDL has an nRMSE of 0.0255, which is smaller than SAKE's corresponding nRMSE of 0.0258.

[0046] FIG. 3 illustrates a performance comparison at x4 acceleration with ACS data between the SPIRiT, SAKE, and KDL techniques, in accordance with an example embodiment. For the comparison for x4 acceleration with ACS data, the top row of FIG. 3 includes, from left to right, a full image, an image generated using the SPIRiT technique, an image generated using the SAKE technique, an image generated using the KDL technique with a dictionary size of 60, an image generated using the KDL technique with a dictionary size of 128, and an image generated using the KDL technique with a dictionary size of 256. The bottom row of FIG. 3 illustrates, from left to right, a PDS sampling mask for this comparison, an image illustrating reconstruction error (RE) by the SPIRiT technique and with an nRMSE = 0.0274, an image illustrating reconstruction error by the SAKE technique and with an nRMSE = 0.0272, an image illustrating reconstruction error by the KDL technique with a dictionary size of 60 and with an nRMSE = 0.0270, an image illustrating reconstruction error by the KDL technique with a dictionary size of 128 and with an nRMSE = 0.0269, and an image illustrating reconstruction error by the KDL technique with a dictionary size of 256 and with an nRMSE = 0.0266.
[0047] In the case of sufficient ACS data, the KDL technique also provides less error than either the SPIRiT or SAKE technique at a x4 acceleration factor (FIG. 3), even for the smallest dictionary size (60) used in the comparison results. The comparison results also indicate that KDL estimation error can be further reduced by increasing the KDL dictionary size. Thus, an adaptive sparse transform / learned dictionary in k-space can improve the image quality for parallel reconstruction.
Dual Space Dictionary Learning (DSDL) for Tailored Image Reconstruction
[0048] KDL can be combined with image domain information for tailored image reconstruction of small structures. A particular example is vessel wall MRI used for atherosclerotic plaque screening, where small structures within the plaque have to be resolved. 3D large coverage multi-contrast Black Blood (BB) sequences used for joint intracranial and carotid plaque screening can have relatively long scan durations. Acceleration of 3D MRI sequences can be achieved by partial parallel imaging techniques such as SPIRiT or SAKE, but this process can degrade vessel wall sharpness in accelerated images. A tailored image reconstruction technique that can preserve image sharpness for complicated plaque morphology and even small lesions can be used for this type of vessel wall screening. In particular, dictionary learning techniques can adaptively represent the signal and enhance image quality.

[0049] Theory: Herein is described the DSDL technique for estimating local information both in a k-space domain and in an image domain. The performance of DSDL has been demonstrated using in-vivo 3D Plaque MRI sequence (MERGE and SNAP) screening data from patients. A dictionary trained in the image domain can represent an information structure corresponding to a high frequency spectrum of k-space, which can compensate for a low SNR scenario in the high frequency spectrum of k-space. A dictionary trained in a k-space domain can preserve image contrast information accumulated in a low frequency spectrum, which can be regarded as a global energy distribution constraint in the image domain. The DSDL technique can potentially capture the most critical information in the signal acquisition domain (the k-space domain) and the signal observation domain (the image domain).
[0050] FIG. 4 is a flowchart depicting the DSDL technique, in accordance with an example embodiment. In the DSDL technique, let:
• y be acquired Cartesian k-space data;
• R and R^T be respective forward and backward Hankel structure matrix reshape operations;

• F and F^H be respective Fourier and inverse Fourier transform operations;

• u be an under-sampling pattern;

• D_a and D_o be dictionary matrices for the respective acquisition (k-space) and observation (image) domains, where for each of the D_a and D_o matrices, the following constraint can be placed on the columns of the matrix: ‖d_k‖₂² ≤ 1, k = 1, 2, …, K; and

• W_a and W_o be the respective corresponding sparse coefficient matrices for the respective acquisition and observation domains.
[0051] Then, as indicated by FIG. 4, for the i-th iteration, i ≥ 1, of the DSDL technique, the following operations are performed:

(D_a^i, W_a^i) = min_{D_a∈𝒟, W_a} ‖D_a W_a − R(x^{i−1})‖_F² + μ‖W_a‖₁;

(D_o^i, W_o^i) = min_{D_o∈𝒟, W_o} ‖D_o W_o − F^H R^T(D_a^i W_a^i)‖_F² + μ‖W_o‖₁;

𝒟 = {D | ‖d_k‖₂² ≤ 1, k = 1, 2, …, K};

where x^{i−1} denotes the k-space estimate after the (i−1)-th iteration, made consistent with the acquired data y according to the under-sampling pattern u;
and where each sub-problem can be solved using a block proximal-gradient dictionary learning technique.

[0052] Experimental Methods: Two patients (luminal stenosis > 50%) were recruited with informed consent and were scanned on a 3 Tesla whole body MR system (Philips Achieva, R3.2.1, Netherlands) using a dedicated eight-channel carotid coil. A 3D MERGE sequence was performed with the following parameters: FOV = 250x160x42mm3, voxel size = 0.8x0.8x0.8mm3, TR/TE = 9.2/4.3ms, FA = 6, TFE factor = 90, low-high encoding, NSA = 1, fat suppressed. For a 3D SNAP sequence, the primary parameters included: FOV = 160x160x32mm3, voxel size = 0.8x0.8x0.8mm3, TR/TE = 10/5.7ms, IRTR/TI = 2011/501ms, FAs = 11/5, TFE factor = 98, linear encoding, NSA = 2, fat suppressed.
[0053] To test performance of the DSDL technique, MERGE data was fully sampled and retrospectively under-sampled by a factor of 4 using a PDS pattern (calibration area 26 x 26). The result reconstructed by the DSDL technique was compared with a result using an L1-SPIRiT technique based on nRMSE values. SNAP data was prospectively x2.67 accelerated with the PDS pattern (calibration area 26 x 26) and performance compared with L1-SPIRiT. Both DSDL and L1-SPIRiT used the same 7 x 7 kernel/patch size. In DSDL, the dictionary size was 60 for the k-space domain and 256 for the image domain - the larger image domain dictionary was used since the image domain can include more complex structures than a k-space domain. The initial dictionaries for both domains were generated using singular value decomposition techniques.
[0054] Results and Discussion: FIG. 5 shows a performance comparison between L1-SPIRiT and DSDL at x4 retrospective acceleration for imagery of a common carotid artery. FIG. 5 includes a full image, an L1-SPIRiT image generated using L1-SPIRiT having an nRMSE value of 0.0326, and a DSDL image generated using DSDL having an nRMSE value of 0.0316. The white square in each of the full, L1-SPIRiT, and DSDL images of FIG. 5 indicates a lesion region. At x4 retrospective acceleration for a MERGE sequence, the DSDL technique with an nRMSE of 0.0316 outperforms the reconstruction result with an nRMSE of 0.0326 for L1-SPIRiT. Beyond the overall noise level reduction, DSDL better preserves small structure information around the lesion region than L1-SPIRiT.
[0055] FIG. 6 shows results of a performance comparison at x2.67 prospective acceleration. A top row of three images of FIG. 6 shows results using the L1-SPIRiT technique and a bottom row of three images of FIG. 6 shows results from using the DSDL technique. In FIG. 6, the left images of both rows show magnitude of phase sensitive images, the center images of both rows show magnitude of reference (MoR) images, and the right images of both rows show phase corrected real images. FIG. 6 indicates that SNAP images can also be better restored using the DSDL technique.
[0056] These results indicate that the DSDL technique can be used for 3D multi-contrast plaque screening acceleration. DSDL has been validated for feasibility on patients. Also, jointly using adaptive k-space and image domain sparse transforms / learned dictionaries can better preserve image structure and reduce the overall noise level compared to a universal sparse transform. Thus, DSDL extends KDL by including image domain information for image-information-preserving reconstruction.
Self-supporting Tailored k-Space Estimation for Parallel Imaging Reconstruction (STEP)
[0057] The Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP) technique is described herein. Parallel imaging has been the most commonly used technique to reduce image acquisition time in clinical MR scans. In this technique, k-space data are simultaneously acquired from multiple receiver coils with different coil sensitivity information that can establish data correlation among k-space neighbors and across multichannel coil elements. This data redundancy can be used to restore under-sampled k-space data through a reconstruction algorithm but at the cost of measurement accuracy and noise amplification.
[0058] Some existing subspace techniques assume that a subspace basis remains invariant across the entire k-space. However, observations of signal characteristics in k-space reveal high magnitude with large variations in a central low-spatial-frequency area, but low magnitude with small variations in a peripheral high-spatial-frequency area. The difference of signal characteristics in distinct k-space areas can be further enlarged in particular imaging sequences, such as an inversion recovery prepared imaging sequence, a fast spin echo sequence configured with a long echo train, or an MR scan with contrast agent injection. With different signal characteristics, spatially invariant subspace bases/interpolation weights may lead to an increase in noise and artifact level.
[0059] To address this problem, some techniques introduce spatially varying interpolation weights to improve reconstruction performance. However, these techniques still require central calibration data. Further, there are limitations to using a uniform under-sampling pattern in adapting to flexible regularization terms, for example, when compressed sensing with a non-uniform sampling trajectory is utilized.

[0060] The STEP technique can address issues related to subspace parallel imaging reconstruction in the case of insufficient data for calibration, arbitrary Cartesian under-sampling, and spatially varying signal characteristics. To address the calibration-less and non-uniform under-sampling issues, STEP establishes a self-supporting low rank model in which several supporting subspace bases are extracted from structured k-space data. In addition, STEP uses tailored procedures including k-space partitioning and basis selection to adapt for spatially variant signal characteristics.
[0061] The STEP procedure uses a spatially variant subspace basis to capture local k-space signal characteristics, to account for small variations in signal characteristics in a peripheral high-spatial-frequency area and for large variations in signal characteristics in a central low-spatial-frequency area. In STEP, tailored k-space partitioning and basis selection procedures are used for spatially variant signal subspace promotion and incorporated into the factorization of the data matrix while maintaining a low rank matrix structure. The k-space partitioning procedure guarantees locality for the extraction and representation of subspace bases, which is equivalent to using a spatially variant convolution kernel for different k-space data blocks.
[0062] The basis selection procedure promotes sparsity for coefficients in a weighting matrix and further enhances the adaptability to local signal variations within one k-space data block. In comparison to other developed spatially varying reconstruction methods, a self-supporting low rank model provided by the basis selection procedure can provide flexibility for an arbitrary under-sampling pattern with or without calibration signals. STEP therefore combines the properties of locality, sparsity, and rank deficiency during image reconstruction. For imaging sequences that cause large differences of signal characteristics in distinct k-space areas, STEP can provide improved parallel imaging reconstruction with reduction of noise amplification and/or residual aliasing artifacts in both calibration-less and auto-calibrating conditions. In a clinical application of vessel wall imaging, STEP and its extension with compressed sensing, L1-STEP, show more accurate structures (vessel wall boundaries) and tissue contrast preservation (measured using wall-lumen signal ratios) over other methodologies.
[0063] Subspace Parallel Imaging: Parallel reconstruction methods are discussed from a signal/noise subspace perspective. For simplicity, only two-dimensional (2D) Cartesian k-space is considered here, but the same theory can be extended to 3D reconstruction in a similar manner.

[0064] A block of k-space dataset x ∈ C^{N₁×N₂×N_c} with N₁ frequency encodes, N₂ phase encodes, and N_c coil elements can generate a structured data matrix A ∈ C^{w²N_c×(N₁−w+1)(N₂−w+1)} whose columns are vectorized patches extracted by sliding a w × w × N_c sized multichannel window across the block of k-space dataset x. As coil sensitivities are low-spatial-frequency dominated in k-space, A becomes a rank-deficient matrix with an appropriately chosen window size w × w and hence has a nontrivial left null space. Furthermore, a signal subspace U_∥ and a noise subspace U_⊥ can be distinguished from each other according to the order of their corresponding singular values using an SVD-based approach. A can therefore be approximated by its first K dominant singular values and singular vectors as indicated by Eq. [1] below:

A = UΣV^H = U_∥Σ_∥V_∥^H + U_⊥Σ_⊥V_⊥^H ≈ U_∥Σ_∥V_∥^H [1]
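The construction of the structured data matrix A and its rank deficiency can be illustrated with a small NumPy sketch (toy data in which every coil is a scalar multiple of one base signal, an extreme case of coil correlation; all sizes are hypothetical):

```python
import numpy as np

def structured_matrix(x, w):
    """Build the structured data matrix A whose columns are vectorized
    w x w x Nc patches slid across the k-space block x (N1 x N2 x Nc)."""
    N1, N2, Nc = x.shape
    cols = []
    for i in range(N1 - w + 1):
        for j in range(N2 - w + 1):
            cols.append(x[i:i + w, j:j + w, :].ravel())
    return np.array(cols).T   # shape (w*w*Nc, (N1-w+1)*(N2-w+1))

rng = np.random.default_rng(4)
N1, N2, Nc, w = 16, 16, 4, 5
base = np.fft.fft2(rng.standard_normal((N1, N2)))
# Perfectly correlated "coils": each channel scales the same base data.
x = np.stack([base * (c + 1) for c in range(Nc)], axis=-1)

A = structured_matrix(x, w)
assert A.shape == (w * w * Nc, (N1 - w + 1) * (N2 - w + 1))
assert np.linalg.matrix_rank(A) < min(A.shape)   # nontrivial left null space
```

With realistic smooth coil sensitivities the rank is not this small, but A remains rank deficient for an appropriately chosen window size.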
[0065] If A only contains acquired auto-calibration signals, two distinct approaches can be used for a calibration process. Auto-calibrating methods can derive a set of representative nulling vectors in the noise subspace from auto-calibration signals. Some self-calibrating methods directly extract coil sensitivities using support vectors in a signal subspace from auto-calibration signals. Then, in the reconstruction process, the calibrated vectors can be applied throughout k-space as interpolation weights. If auto-calibration signal data has not been sufficiently acquired, a joint calibration and reconstruction method can iteratively solve the resulting calibration-less problem using the low rank property of the structured data matrix A formed by the entire k-space data, regardless of whether auto-calibration signal data was acquired.
[0066] Auto-calibrating/self-calibrating methods can have a mismatch when a subspace basis calibrated from the central k-space is not optimal for the peripheral k-space. One attempt to solve this problem is based on a global subspace established for the entire k-space data. However, the signal characteristics vary in k-space, particularly for imaging sequences with gradually evolved signals due to T1 and/or T2 relaxation. When magnetization prepared imaging and fast spin echo scans are used, data acquisition in low spatial frequencies can be designed to maintain certain image contrast while the data in high spatial frequencies is sampled during a signal evolution period to reduce scan time. The resulting magnetization modulation can aggravate differences of signal characteristics between distinct areas in k-space. Therefore, the global subspace may not represent the local signal variation pattern accurately while maintaining a low-rank property. In addition, the estimated correlation relation determined from a limited kernel size does not fully represent the true coil sensitivity profiles, leading to aliasing artifacts and noise amplification. To alleviate this issue, the STEP technique constructs and optimizes multiple subspace bases for local k-space signal representation.
[0067] Data Matrix Factorization with Local Signal Characteristics Promotion: To capture the spatially varying signal characteristics, two tailored procedures - k-space partitioning and basis selection - are used by the STEP procedure and integrated into a simplified low rank matrix factorization of Eq. [1]. The low rank approximation of A can be equivalently factorized as a product of two matrices as indicated by Eq. [2]:

A ≈ DW [2]

where the column vectors in matrix D = U_∥ ∈ C^{w²N_c×K} span the signal subspace and each column vector of matrix W = Σ_∥V_∥^H ∈ C^{K×(N₁−w+1)(N₂−w+1)} contains the weighting coefficients to represent the corresponding column in A using the subspace basis in D. The rank deficiency of A remains as long as the basis of D is under-complete; that is, as long as K < min(w²N_c, (N₁−w+1)(N₂−w+1)).
STEP can apply the k-space partitioning and basis selection tailored procedures simultaneously to promote the representation of local data features in k-space from different perspectives.
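The factorization of Eq. [2] can be sketched directly from a truncated SVD (illustrative code with hypothetical sizes; note that D has orthonormal columns, matching the constraint D^H D = I that appears later in Eq. [4]):

```python
import numpy as np

def factorize(A, K):
    """Factor A ~= D @ W per Eq. [2]: D = U_K has orthonormal columns
    spanning the signal subspace; W = Sigma_K V_K^H holds the weights."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U[:, :K], s[:K, None] * Vh[:K, :]

rng = np.random.default_rng(5)
# Exactly rank-4 matrix, standing in for a structured k-space data matrix.
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 120))
D, W = factorize(A, 4)

assert np.allclose(D @ W, A)              # A = D W exactly when K = rank(A)
assert np.allclose(D.T @ D, np.eye(4))    # under-complete, orthonormal basis
```

Each column of W gives the weighting coefficients representing the corresponding column of A in the basis D.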
[0068] To promote subspace locality, the k-space partitioning procedure can divide k-space into L overlapping data blocks and assign each data block its own subspace basis D_l ∈ C^{w²N_c×K}, l = 1, …, L and weighting coefficient matrix W_l ∈ C^{K×(N₁−w+1)(N₂−w+1)}, l = 1, …, L. The k-space partition procedure can be flexible; for example, a non-uniform partitioning procedure can be used to generate a non-overlapping k-space partition. Because a non-uniform partition can be related to a specific k-space signal distribution, a uniform k-space partition can be used by the STEP procedure for simplicity; in other embodiments, the STEP procedure can use a non-uniform k-space partition. Given a sufficiently large block size, a local k-space data block can still generate a rank-deficient data matrix, and D_l can be expected to provide more local data features in k-space. Then, distinct subspace bases can be extracted from different local k-space regions. The k-space data in an overlapping area between data blocks can be an average of the different overlapped k-space blocks after low rank approximation.

[0069] The basis selection procedure can apply a statistical a priori analysis in favor of basis selection on each element in W. Because a subspace basis is assumed to span the column space of A, each column vector in A can contain one specified signal structure and so be represented by a few basis vectors. The result is that only a few weighting coefficients in W contain large absolute values while the rest of the weighting coefficients contain relatively small absolute values. Then, the distribution of coefficient values in W can form a peak around zero with a long tail representing the remaining weights. The retained weighting coefficients function as a basis selection to adapt the subspace basis for signal representation in different k-space locations.
The STEP procedure can use a Laplace distribution to approximate the prior distribution of signal coefficients, which can be formulated into an L1-norm regularization based on Bayesian theory. In addition to being a characteristic for basis selection, the weighting coefficients can also improve noise suppression with sparsity-promoting prior information.
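The L1-norm regularization implied by the Laplace prior leads to the familiar soft-thresholding (shrinkage) rule, which can be checked numerically against a brute-force search of the scalar objective (illustrative sketch; the values are arbitrary):

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form MAP estimate under a Laplace prior:
    argmin over x of 0.5 * (x - y)**2 + lam * abs(x)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Numerical check against a dense grid search of the same objective.
y, lam = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 600001)
objective = 0.5 * (grid - y) ** 2 + lam * np.abs(grid)
brute = grid[np.argmin(objective)]
assert abs(soft_threshold(y, lam) - brute) < 1e-4
```

Small coefficients are set exactly to zero (basis selection), while large coefficients survive with a shrunken magnitude (noise suppression).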
[0070] STEP Procedure for Imaging Reconstruction: FIG. 7 is a flowchart of the STEP procedure for iterative imaging reconstruction, in accordance with an example embodiment. As indicated by FIG. 7, the STEP procedure can receive as input a "k-space input" from k coils of a magnetic resonance device and generate a reconstructed matrix A2 as an output. Data blocks can be extracted from the k-space input, where the data blocks can be associated with coils; e.g., one or more data blocks can have data from, and so be associated with, coil 1, one or more data blocks can have data from, and so be associated with, coil 2, ... and one or more data blocks can have data from, and so be associated with, coil k. The data blocks can be vectorized to form a data matrix A. Data matrix A can be factored, and perhaps partitioned, into matrices D and W, where D can include one or more sub-space bases, and W can include one or more weights for the sub-space bases. The factorization can enforce various properties, such as, but not limited to, a low-rank property, a sparsity property, and/or a joint sparsity property for a wavelet domain. Weighted data blocks can be constructed from the D and W matrices, and the weighted data blocks can be combined to form a reconstruction matrix A2. In some cases, reconstruction matrix A2 can take less storage than matrix A.
[0071] The STEP parallel imaging reconstruction procedure can improve the local signal representation in k-space with both k-space partition and basis selection constraints. An objective function for STEP is indicated in Eq. [3] below:

J_λ(x, D_1, W_1, ..., D_L, W_L) = Σ_{l=1}^{L} (1/2) ||R_l(P_y^T y + P_x^T x) − D_l W_l||_F^2 + λ ||W_l||_1   [3]
[0072] where:
• y and x are the acquired and un-acquired k-space data, respectively;
• P_y^T and P_x^T are operators to project the acquired and un-acquired data, respectively, onto Cartesian grids in k-space;
• R_l is an operator to extract the l-th data block in Cartesian k-space and reshape the data block into a structured low rank data matrix;
• D_l and W_l correspond to the local sub-space basis and the weighting coefficient matrix, respectively, as an approximated matrix factorization of the l-th data matrix using Eq. [2]; and
• the nonnegative parameter λ controls the dependence on the a priori distribution.
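The operator R_l above extracts a data block and reshapes it into a structured low rank data matrix. A plausible sketch of such a reshaping, assuming a SAKE-style sliding-window construction in which each column is a vectorized W x W x Nc patch of the block (the exact reshaping used by STEP may differ):

```python
import numpy as np

def block_to_data_matrix(kblock, W):
    """Reshape a multichannel k-space block into a structured data matrix.

    kblock: complex array of shape (M, N, Nc), one k-space data block
    W:      sliding-window (kernel) size
    Returns a (W*W*Nc) x ((M-W+1)*(N-W+1)) matrix whose columns are
    vectorized W x W x Nc patches, as in SAKE-style structured low-rank
    methods.
    """
    M, N, Nc = kblock.shape
    cols = []
    for i in range(M - W + 1):
        for j in range(N - W + 1):
            cols.append(kblock[i:i + W, j:j + W, :].reshape(-1))
    return np.stack(cols, axis=1)

# Toy 12 x 12 block from an assumed eight-channel coil, 7 x 7 kernel
A = block_to_data_matrix(np.random.randn(12, 12, 8) + 0j, W=7)
print(A.shape)  # (7*7*8, 6*6) = (392, 36)
```

The resulting matrix dimensions match the W_l ∈ C^(K×(M−W+1)(N−W+1)) convention of paragraph [0068].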
[0073] When a rank value K has been estimated in advance, the STEP procedure can attempt to restore the un-acquired k-space data using Eq. [4] below:

min_{x, D_1, W_1, ..., D_L, W_L} J_λ(x, D_1, W_1, ..., D_L, W_L)   [4]

s.t. D_l^H D_l = I, rank(D_l) = K, l = 1, ..., L
[0074] The block coordinate descent method can be adopted to solve the problem defined by Eq. [4], which turns out to be a POCS-type approach as indicated by FIG. 7. During an iteration of the loop indicated by FIG. 7, the STEP procedure can update three sets of unknown variables alternately as indicated by Equations [5A], [5B], and [5C]:
W_l^i = S_λ( (D_l^{i−1})^H R_l(P_y^T y + P_x^T x^{i−1}) ),   [5A]

D_l^i = U V^H, where R_l(P_y^T y + P_x^T x^{i−1}) (W_l^i)^H ⇒ U S V^H,   [5B]

x^i = P_x (Σ_{l=1}^{L} R_l^T R_l)^{−1} Σ_{l=1}^{L} R_l^T (D_l^i W_l^i),   [5C]
where:
• S_λ(·) denotes a soft-thresholding operator that can be defined as:
(S_λ(X))_{mn} = (X_{mn} / |X_{mn}|) max(|X_{mn}| − λ, 0),
• ⇒ U S V^H denotes the SVD operation, and
• (Σ_{l=1}^{L} R_l^T R_l)^{−1} denotes one or more normalized weights to compensate for multiple summations of overlapping areas.
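The complex-valued soft-thresholding operator S_λ(·) described above can be sketched as follows; the numerical guard against division by zero for exactly-zero entries is an implementation detail added here:

```python
import numpy as np

def soft_threshold(X, lam):
    """Complex soft-thresholding: shrink each entry's magnitude by lam
    toward zero while preserving its phase; entries with magnitude at
    most lam become exactly zero."""
    mag = np.abs(X)
    # Guard against division by zero for exactly-zero entries
    phase = np.where(mag > 0, X / np.maximum(mag, 1e-300), 0)
    return phase * np.maximum(mag - lam, 0)

W = np.array([3 + 4j, 0.5, -0.2j])
out = soft_threshold(W, 1.0)  # entries become 2.4+3.2j, 0, 0
```

The first entry has magnitude 5, shrunk to 4 with the phase (0.6 + 0.8j) preserved; the two small entries fall below the threshold and are zeroed, which is the basis-selection effect described above.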
[0075] In an iteration of the STEP procedure, Eq. [5A] and [5B] are first looped on subscript l from 1 to L and then Eq. [5C] is applied to update x^i. Also, for the first loop, D_l^0 and W_l^0 can be initialized by SVD as D_l^0 = U_K and W_l^0 = S_K V_K^H, where R_l(P_y^T y) ⇒ U S V^H and the subscript K denotes truncation to the K largest singular values. The STEP procedure can sequentially enforce the properties of locality, sparsity, and rank deficiency. Loop iterations are repeated until some stopping criteria are met; examples of the stopping criteria include, but are not limited to, a maximum number of iterations or a difference between adjacent updates falling within a pre-determined limit value.
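A minimal sketch of the alternating updates of Eqs. [5A] and [5B], assuming a single fully sampled data block so that the data-consistency step of Eq. [5C] can be omitted; the toy dimensions, random data, and variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy complex data matrix standing in for one structured k-space block
A = rng.standard_normal((50, 40)) + 1j * rng.standard_normal((50, 40))
K, lam = 5, 0.1  # assumed rank value and regularization weight

# Initialization analogous to D^0 = U_K, W^0 = S_K V_K^H from a rank-K SVD
U, s, Vh = np.linalg.svd(A, full_matrices=False)
D = U[:, :K]
Wmat = np.diag(s[:K]) @ Vh[:K, :]

for _ in range(10):
    # Eq. [5A]-style step: project onto the current basis, then soft-threshold
    P = D.conj().T @ A
    Wmat = (P / np.maximum(np.abs(P), 1e-12)) * np.maximum(np.abs(P) - lam, 0)
    # Eq. [5B]-style step: orthonormal basis update via SVD of A W^H
    U2, _, V2h = np.linalg.svd(A @ Wmat.conj().T, full_matrices=False)
    D = U2 @ V2h

A_rec = D @ Wmat  # low-rank approximation with sparsified coefficients
```

The SVD-based basis update keeps D^H D = I at every iteration, matching the orthonormality constraint of Eq. [4], while the soft-thresholded coefficients realize the Laplace a priori penalty.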
[0076] The STEP procedure can be extended to integrate with other regularization terms. Considering that some MR images have a sparse representation in a transformed domain, an L1-STEP procedure can be formulated by adding a joint sparsity property in a wavelet domain for multichannel images, such as indicated by Eq. [6] below:
min_{x, D_1, W_1, ..., D_L, W_L} J_λ(x, D_1, W_1, ..., D_L, W_L) + μ ||Ψ F^H (P_y^T y + P_x^T x)||_{2,1}   [6]

s.t. D_l^H D_l = I, rank(D_l) = K, l = 1, ..., L

where:
• F^H and Ψ denote the respective inverse Fourier transform and wavelet transform operators,
• the L2,1-norm || ||_{2,1} is a norm where multichannel data are first combined in root of the sum of absolute squares (rSoS) form and the combined results are then summed, and
• μ is a parameter for balancing between a conventional STEP constraint and a compressed sensing penalty.
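The L2,1-norm described above, an rSoS combination across channels followed by a plain sum, can be sketched as:

```python
import numpy as np

def l21_norm(X):
    """L2,1 norm of multichannel coefficients X with shape (..., Nc):
    root-sum-of-squares across the channel (last) axis first, then a
    plain sum over all remaining positions."""
    return np.sqrt((np.abs(X) ** 2).sum(axis=-1)).sum()

X = np.array([[3.0, 4.0], [0.0, 1.0]])  # two positions, two channels
print(l21_norm(X))  # sqrt(9+16) + sqrt(0+1) = 5 + 1 = 6.0
```

Penalizing this norm promotes joint sparsity: a coefficient position is either kept or suppressed across all channels together.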
[0077] Experimental Methods - MR Scans: All MR scans were performed on a 3 Tesla (T) whole body MR scanner (Philips Achieva, R3.2.1, Best, Netherlands) and the k-space data were fully acquired as reference. A single 2D axial slice of one orange was scanned 100 times using a turbo spin echo (TSE) sequence. The scan parameters were: repetition time/echo time = 800/7.6 ms, TSE factor = 10, slice thickness = 3.0 mm, in-plane resolution = 0.5 x 0.5 mm2, field of view = 100 x 100 mm2, and no fat suppression. An eight-channel knee coil was used as a signal receiver for phantom imaging.

[0078] Another two 3D datasets were scanned from one healthy volunteer. One brain dataset was acquired by an eight-channel head coil using an inversion recovery prepared 3D spoiled gradient echo [T1 contrast enhanced turbo field echo (TFE)] sequence. The scan parameters were: repetition time/echo time = 10/4.1 ms, inversion recovery repetition time/inversion time = 993/494 ms, TFE factor = 98, flip angle = 11 degrees, resolution = 1.0 x 1.0 x 1.0 mm3, field of view = 200 x 200 x 200 mm3, no fat suppression, and total scan time = 6 min 47 s. The other knee dataset was acquired using the same knee coil as above with a 3D Volumetric ISotropic Turbo spin echo Acquisition (VISTA) sequence. The scan parameters were: repetition time/echo time = 1300/18 ms, TSE factor = 69 including 4 start-up echoes, constant 35 degree reduced flip angle for the refocusing radiofrequency pulse, resolution = 0.6 x 0.6 x 0.6 mm3, field of view = 116 x 140 x 50 mm3, SPAIR fat suppression, and total scan time = 9 min 10 s.
[0079] Image Quality Evaluation: Two criteria for image quality evaluation - the nRMSE and a mean structural similarity (mSSIM) index - were measured in a region-of-interest (ROI) of a multichannel combined image. These criteria were computed using a point-wise rSoS.
[0080] The nRMSE was quantified as indicated by Eq. [7] below:
nRMSE = ( ||M_ROI ∘ (I_SoS^recon − I_SoS^ref)||_F / ||M_ROI ∘ I_SoS^ref||_F ) x 100%,   [7]
where:
• I_SoS^recon denotes a channel-combined image from reconstructed data,
• I_SoS^ref denotes a channel-combined image from fully-acquired data, and
• M_ROI denotes an ROI mask that restricts the nRMSE value measured to a tissue area.
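The ROI-restricted nRMSE of Eq. [7] can be sketched as follows, assuming the channel-combined reconstructed image, reference image, and boolean ROI mask are supplied as arrays:

```python
import numpy as np

def nrmse(recon, ref, roi):
    """ROI-restricted normalized RMSE (as in Eq. [7]) between a
    channel-combined reconstructed image and a fully acquired reference;
    roi is a boolean mask selecting the tissue area."""
    err = np.linalg.norm((recon - ref)[roi])
    return err / np.linalg.norm(ref[roi]) * 100.0

ref = np.full((4, 4), 2.0)
recon = ref.copy()
recon[0, 0] = 2.2            # one pixel off by 0.2
roi = np.ones((4, 4), dtype=bool)
print(nrmse(recon, ref, roi))  # 0.2 / 8 * 100 = 2.5 (%)
```

Restricting both norms to the mask keeps background noise outside the tissue area from dominating the measurement.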
[0081] The mSSIM index attempts to quantify the visible differences between the reconstructed and reference image using a variety of known properties of the human visual system, with the mean operation applied within the given ROI. A more accurate reconstruction result is indicated by a smaller nRMSE value and/or a higher mSSIM value.
[0082] Model Performance Assessment: To demonstrate the proposed model, a ground-truth dataset was constructed using a 200 x 200 Shepp-Logan phantom and a set of eight-channel sensitivity maps simulated based on the Biot-Savart law. Additional zero-mean Gaussian white noise was added to adjust the SNR to 20 dB for the lowest intensity of the Shepp-Logan phantom in the image domain. Without loss of generality, a kernel/patch size of 7 x 7 for all algorithms and an overlapping area of 20% for k-space partitioning were used in this assessment. Nine equal, overlapping blocks covering k-space (3 blocks x 3 blocks on each side) were used to examine rank value distribution and subspace basis differences in various parts of k-space.
[0083] SVD was performed on a data matrix generated by global, central and peripheral data in k-space to obtain singular values and subspace bases. Given two subspace bases U_1 ∈ C^(W²Nc×K) and U_2 ∈ C^(W²Nc×K), a subspace difference can be defined as the residual of projecting U_2 onto the column space of U_1. Then, a smaller subspace difference indicates a more accurate representation of basis U_2 using basis U_1. In addition, signal and noise weighting coefficient matrices were constructed by projecting the respective signal and noise data matrices onto a global subspace basis. Then histograms of signal and noise coefficients in their own weighting matrix were compared with the Laplace and Gaussian distributions.
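One plausible reading of the subspace difference just described, sketched as the Frobenius norm of the residual of projecting U_2 onto the column space of U_1; the exact normalization used in the original is an assumption here:

```python
import numpy as np

def subspace_difference(U1, U2):
    """Frobenius norm of the residual of projecting basis U2 onto the
    column space of U1 (one plausible form of the subspace difference);
    zero when span(U2) is contained in span(U1)."""
    resid = U2 - U1 @ (U1.conj().T @ U2)
    return np.linalg.norm(resid)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((20, 8)))  # orthonormal columns
U1, U2 = Q[:, :5], Q[:, :5]            # identical subspaces
print(subspace_difference(U1, U2))     # 0 (up to round-off)
```

For two mutually orthogonal subspaces the residual equals U_2 itself, giving the maximum possible difference, consistent with the interpretation that global and peripheral bases differ more than global and central bases.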
[0084] To investigate the performance improvement of the STEP procedure, the simulated k-space was under-sampled by a factor of three using variable-density Poisson disk sampling (vd-PDS) patterns with and without an elliptical shutter. Additional central ACSs were not kept for either vd-PDS pattern. The reconstruction results were compared among different combinations of the k-space partition and basis selection procedures in terms of the nRMSE and mSSIM index criteria measured within an ROI. The 3 x 3 overlapping k-space partition and the regularization parameter λ = 3 x 10^-4 were the model parameters of each proposed procedure set in this experiment. Zero filled k-space data was used as an initial k-space estimation and the rank value was empirically set to 60. Iterations of the STEP procedure looped until the difference in ROI based nRMSE between adjacent updates was less than 1.0 x 10°. This stopping criterion was used for all experiments to compare performance among different algorithms.
[0085] Artifacts and Noise Measurement: The above-mentioned 2D orange dataset was used in evaluating reconstruction error and noise amplification distribution of the STEP technique in comparison to the SPIRiT and SAKE techniques. Each 2D k-space was uniformly under-sampled along each dimension by a factor of 2 and a central 30 x 30 area was fully sampled for SPIRiT calibration, which resulted in a net reduction factor of 3.7. The 100 retrospectively under-sampled datasets were sequentially reconstructed by each of the SPIRiT, SAKE and STEP techniques. A conjugate gradient (CG) SPIRiT implementation with Tikhonov regularization of 1 x 10° and 1 x 10^-5 for the respective calibration and reconstruction processes was used in all STEP-related measurements. The rank value of 60 was estimated for SAKE and STEP, and λ = 0.1 was used in STEP. Furthermore, the impact of 3x3 or 5x5 k-space overlapping partitions and k-space initialization by zero filling or 5 iterations of CG SPIRiT were also evaluated for STEP. The fully sampled and reconstructed multichannel images were first combined with rSoS, and a mean difference error between them was calculated. In addition, an empirical g-factor map value g_emp for each technique was estimated as:
g_emp = σ_SoS^recon / (σ_SoS^ref √R),   [8]
where:
• σ_SoS^recon denotes a standard deviation computed for each pixel across reconstructed images,
• σ_SoS^ref denotes a standard deviation computed for each pixel across fully-sampled reference images, and
• R is a net reduction factor.
For evaluation purposes, σ_SoS^recon and σ_SoS^ref were calculated over 100 reconstructed and fully-sampled reference images.
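The empirical g-factor estimation of Eq. [8] can be sketched as follows, with small synthetic image stacks standing in for the 100 reconstructed and fully-sampled reference images:

```python
import numpy as np

def empirical_g_factor(recons, refs, R):
    """Empirical g-factor map (as in Eq. [8]): per-pixel standard
    deviation across repeated reconstructions divided by the per-pixel
    standard deviation across fully sampled references, scaled by
    sqrt(R) for the net reduction factor R."""
    sigma_recon = recons.std(axis=0)
    sigma_ref = refs.std(axis=0)
    return sigma_recon / (sigma_ref * np.sqrt(R))

rng = np.random.default_rng(2)
refs = 10 + rng.standard_normal((100, 8, 8))        # 100 reference images
recons = 10 + 2 * rng.standard_normal((100, 8, 8))  # noisier reconstructions
g = empirical_g_factor(recons, refs, R=4.0)
print(g.shape)  # one g-factor value per pixel: (8, 8)
```

With the reconstruction noise twice the reference noise and R = 4, the map values concentrate near 2 / sqrt(4) = 1, illustrating how the sqrt(R) scaling normalizes the expected noise amplification.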
[0086] Calibration-Less Parallel Reconstruction: To compare the performance of SAKE and STEP for calibration-less parallel reconstruction, the 3D brain dataset was retrospectively under-sampled with four-fold acceleration in a vd-PDS pattern with elliptical shutter. In this study, the 3D Fourier encoded dataset was first transformed using an inverse Fourier transform along the readout direction. Under-sampled 2D multichannel k-space data were reconstructed sequentially. To further boost algorithm convergence, un-acquired k-space data in a current slice were initialized by the estimated k-space data in the previous slice, except for the first 2D slice, which was initialized by zero filled k-space data. For STEP, the k-space was divided into 5 x 5 data blocks. SAKE and STEP shared the same rank value of 60, and different values of the STEP regularization parameter λ ranging from 0.2 to 5 were tested to determine the impact of the Laplace a priori penalty.
[0087] Auto-calibrating Parallel Reconstruction: STEP can also be used for auto-calibrating parallel reconstruction. To this end, the 3D brain dataset was used to compare performance between SPIRiT, SAKE and STEP. The 3D brain dataset was uniformly under-sampled by a factor of 4 (2 x 2 in each direction) and a central 26 x 26 area was fully sampled for a SPIRiT calibration process. In this experiment, a rank value of 60 was used for SAKE and each local subspace in STEP. In addition, the impact of the k-space partition and basis selection procedures was evaluated by separately and jointly adding a 5 x 5 overlapping k-space partition and setting a SAKE regularization parameter λ value to 1. To boost the convergence of SAKE, SAKE+Laplace a priori, SAKE+Partition, and STEP performance, 7 iterations of a CG SPIRiT reconstruction result were used to initialize each slice.
[0088] Combination with Compressed Sensing: To demonstrate the regularization capability of STEP, the 3D knee dataset was under-sampled by a net factor of 5.6 in a vd-PDS pattern with elliptical shutter and a fully sampled 26 x 26 center area. SPIRiT, L1-SPIRiT, STEP, and L1-STEP were compared for reconstruction performance. In addition to the accuracy measurements within the ROI (nRMSE and mSSIM), a wall-lumen signal ratio measurement R calculated as R = S_w / S_l was introduced to evaluate tissue contrast change in comparison to the fully sampled reference, where S_w is an average signal measured on a vessel wall and S_l is an average signal measured within the lumen area. Improved performance in tissue contrast preservation is indicated when R values from a reconstruction algorithm get closer to R values calculated for reference images. Lumen contour and vessel wall pixels were manually selected on 120 axial slices from fully sampled reference images covering the popliteal artery using ImageJ software (version 1.48, National Institutes of Health, Bethesda, MD).
[0089] Statistical Analysis: In addition to direct visual comparison between different reconstruction algorithms, a two-sided Wilcoxon signed rank test was used to validate the improvement of ROI based accuracy measurements (nRMSE and mSSIM) for the 3D brain and knee datasets and wall-lumen signal ratios in comparison to the fully sampled reference for the knee dataset. In all the tests, statistical significance was defined at the P < 0.05 level. Results are presented in Tables 1-3 below with a format of median (interquartile range) for all algorithms in each experiment.
[0090] Hardware and Software: MATLAB® (R2013b, The MathWorks, Inc., Natick, MA) was used to implement the herein-described STEP procedure and to perform statistical analysis. All software was run on a personal laptop equipped with an Intel i7-4700HQ 2.40 GHz CPU and 8 GB of memory. The MATLAB® online release versions of the SPIRiT and SAKE procedures obtained from www.eecs.berkeley.edu/~mlustig/Software.html were used in this performance comparison.

[0091] Model Performance Assessment: FIG. 8 demonstrates a local subspace basis and a Laplace a priori model, in accordance with an example embodiment. Image (a) at upper left of FIG. 8 shows a 3 x 3 k-space partition with 20% overlapping area. Image (b) at upper right of FIG. 8 includes a comparison of the global, central and peripheral subspace bases computed by SVD. The singular values calculated from each data block are normalized by their largest singular value for rank estimation, and only partial subspace basis vectors representing the 3rd coil are shown for comparison. Images (c) and (d), respectively at lower left and lower right of FIG. 8, show respective Laplace and Gaussian probability density functions indicating respective complex signal and noise coefficients in a weighting matrix.
[0092] Image (a) of FIG. 8 shows one example of a k-space partition with 3 x 3 overlapping data blocks. Using the numerical phantom dataset, three types of data matrices generated from different data blocks shown in distinct colors have been factorized by SVD. Image (b) of FIG. 8 shows that different data blocks in k-space share a similar rank value but might have different subspace bases. The global subspace basis fails to represent the central and peripheral subspace bases well, particularly for those vectors corresponding to smaller singular values. The subspace difference between the global and peripheral bases is larger than the difference between the global and central bases. Images (c) and (d) of FIG. 8 respectively illustrate the 2D histogram of complex signal/noise coefficients in the weighting matrix and the fitted Laplace/Gaussian probability density function. The histograms of signal and noise coefficients can differ significantly, while the histogram of signal coefficients can be approximated by a Laplace distribution.
[0093] FIG. 9 shows performance improvement by using a local signal characteristic promotion model at three-fold acceleration, in accordance with an example embodiment. By adding different local signal characteristic promotion procedures one after another to conventional SAKE, the improved reconstruction accuracy can be seen in both under-sampling patterns from the 5 times brightened error maps and the ROI based quantitative measurements (nRMSE and mSSIM).
[0094] FIG. 9 demonstrates the performance improvement of the two tailored procedures in STEP: k-space partitioning and basis selection. Image (a) at upper-left of FIG. 9 shows a vd-PDS pattern P1 used for under-sampling a dataset. Reconstruction results of the dataset under-sampled using vd-PDS pattern P1 are shown in image (b) at upper-right of FIG. 9. Image (b) of FIG. 9 has two rows: a top row with four reconstructed images, and a bottom row of corresponding error maps that have been 5x magnified. The four reconstructed images on the top row of image (b) of FIG. 9 were respectively generated (from left to right in FIG. 9) using the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure. The four error maps on the bottom row of image (b) of FIG. 9 show error between a reference image and respective images (shown immediately above) generated by (from left to right) the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure. Respective nRMSE and mSSIM values corresponding to the error maps in image (b) of FIG. 9 are: 7.20% (nRMSE) / 0.780 (mSSIM) for the SAKE procedure, 6.86% (nRMSE) / 0.803 (mSSIM) for the SAKE with Laplace procedure, 6.55% (nRMSE) / 0.810 (mSSIM) for the SAKE with partitioning procedure, and 6.27% (nRMSE) / 0.837 (mSSIM) for the STEP procedure.
[0095] Image (c) at lower-left of FIG. 9 shows an elliptical vd-PDS pattern E1 used for under-sampling a dataset. Reconstruction results of the dataset under-sampled using elliptical vd-PDS pattern E1 are shown in image (d) at lower-right of FIG. 9. Image (d) of FIG. 9 has two rows: a top row with four reconstructed images, and a bottom row of corresponding error maps that have been 5x magnified. The four reconstructed images on the top row of image (d) of FIG. 9 were respectively generated (from left to right in FIG. 9) using the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure. The four error maps on the bottom row of image (d) of FIG. 9 show error between a reference image and respective images (shown immediately above) generated by (from left to right) the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and the STEP procedure. Respective nRMSE and mSSIM values corresponding to the error maps in image (d) of FIG. 9 are: 6.28% (nRMSE) / 0.828 (mSSIM) for the SAKE procedure, 6.24% (nRMSE) / 0.871 (mSSIM) for the SAKE with Laplace procedure, 5.68% (nRMSE) / 0.889 (mSSIM) for the SAKE with partitioning procedure, and 5.59% (nRMSE) / 0.896 (mSSIM) for the STEP procedure.
[0096] Artifacts and Noise Measurement: FIG. 10 shows a comparison of different reconstruction algorithms in reconstruction result with a 5 times magnified mean error and an empirical g-factor map in a uniformly under-sampled pattern with a 30 x 30 calibration area, in accordance with an example embodiment. Two k-space partitions and two initialization strategies are also examined for the impact of parameters in the STEP procedure. White arrows are used in FIG. 10 to illustrate aliasing artifacts in reconstruction results of SPIRiT and SAKE.
[0097] FIG. 10 is made up of seven columns and three rows of images. The leftmost column of the top row of FIG. 10 shows a reference image made up of an average of 100 fully sampled rSoS images. The remaining six columns of images of FIG. 10 indicate results from six different reconstruction procedures - from left to right in FIG. 10, these procedures are SPIRiT, SAKE, STEP using a zero-filled initialization and a 3x3 k-space partitioning, STEP using a zero-filled initialization and a 5x5 k-space partitioning, STEP using a heuristic initialization and a 3x3 k-space partitioning, and STEP using a heuristic initialization and a 5x5 k-space partitioning. These procedures were used on each under-sampled dataset to estimate un-acquired k-space data.
[0098] The top row of images of FIG. 10 shows, from left to right, the reference image and the averages of reconstructed images generated by the six different reconstruction procedures in the left-to-right order discussed in the paragraph above. The middle row of images of FIG. 10 shows 5 times magnified residual error maps for the six different reconstruction procedures in the left-to-right order discussed in the paragraph above. The bottom row of images of FIG. 10 provides empirical g-factors estimated for the six different reconstruction procedures in the left-to-right order discussed in the paragraph above.
[0099] In general, STEP can provide fewer aliasing artifacts in its reconstruction results and smaller noise amplification in its empirical g-factor maps. Residual errors for some boundary information may arise without use of a relatively good initialization, such as the heuristic initialization used to generate the right-most columns of images in FIG. 10. Heuristic initialization can be particularly useful in the case where more data blocks have been divided. Once a relatively good initialization is adopted, STEP can generate fewer residual errors with more data blocks divided and maintain its benefit on the property of the empirical g-factor map in comparison to SPIRiT and SAKE.
[0100] Calibration-less Parallel Reconstruction: FIG. 11 illustrates a performance comparison between SAKE and STEP from four-fold vd-PDS calibration-less sampling with elliptical shutter, in accordance with an example embodiment. FIG. 11 has a top row and a middle row, both having five images, and a bottom row with two graphs. The top row of FIG. 11 shows, from left to right, a reference image of an axial slice and four reconstruction images of the axial slice reference image generated using, respectively, a SAKE procedure, a STEP procedure with λ = 0.2, a STEP procedure with λ = 1.0, and a STEP procedure with λ = 5.0. The axial slice reference image is a 2D axial slice generated from the 3D brain dataset; the SAKE and STEP procedures were attempting image reconstruction after the axial slice reference image was under-sampled by four-fold.
[0101] The middle row of FIG. 11 shows, from left to right, a reference image of a zoomed-in portion of the axial slice and four reconstruction images of the zoomed-in reference image generated using, respectively, the SAKE procedure, the STEP procedure with λ = 0.2, the STEP procedure with λ = 1.0, and the STEP procedure with λ = 5.0. The bottom row of FIG. 11 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SAKE and various STEP procedures across all slices as a quantitative evaluation of nRMSE and mSSIM from other 2D axial slices.
[0102] The reconstruction results of STEP indicate that a range of values of the regularization parameter λ can improve the noise suppression performance and generate higher SNR images compared with SAKE. However, an overly weighted λ parameter value can give rise to over-smoothed images for some slices. Using the STEP procedure with a properly weighted parameter λ can lead to consistently effective image quality improvement across all slices. Table 1 below shows a statistical assessment of this image quality improvement.
Table 1
[0103] Auto-calibrating Parallel Reconstruction: FIG. 12 illustrates a performance comparison between the SPIRiT, SAKE, and STEP reconstruction procedures for a 3D brain dataset, in accordance with an example embodiment. FIG. 12 has a top row and a middle row, both having six images, and a bottom row with two graphs. The top row of FIG. 12 shows, from left to right, a reference image of the axial slice discussed above in more detail with respect to FIG. 11 and five reconstruction images of the axial slice reference image generated using, respectively, a SPIRiT procedure, the SAKE procedure, a SAKE with Laplace procedure, a SAKE with partitioning procedure, and a STEP procedure. The axial slice reference image is a 2D axial slice generated from the 3D brain dataset; the SPIRiT, SAKE, and STEP procedures were attempting image reconstruction after the axial slice reference image was uniformly sampled at a net reduction factor of 3.8. The white arrows in the SPIRiT image indicate reconstruction aliasing artifacts.
[0104] The middle row of FIG. 12 shows, from left to right, a reference image of a zoomed-in portion of the axial slice and five reconstruction images of the zoomed-in reference image generated using, respectively, the SPIRiT procedure, the SAKE procedure, the SAKE with Laplace procedure, the SAKE with partitioning procedure, and a STEP procedure. Besides improvement of noise suppression performance, the reconstruction results of STEP can provide fewer residual aliasing artifacts compared with SPIRiT and SAKE. STEP without either of the k-space partitioning and basis selection tailored procedures reduces to SAKE. FIG. 12 also illustrates that the k-space partitioning and basis selection tailored procedures separately combined with SAKE can reduce aliasing artifacts, but not as much as reduced by STEP.
[0105] The bottom row of FIG. 12 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SPIRiT, STEP, and various SAKE procedures across all slices as a quantitative evaluation of nRMSE and mSSIM from other 2D axial slices. Similarly, STEP can achieve consistent improvement of image quality for all slices compared with SPIRiT as shown in Table 2 below.
Table 2
[0106] Combination with Compressed Sensing: FIG. 13 illustrates a performance comparison between the SPIRiT, L1-SPIRiT, STEP, and L1-STEP reconstruction procedures for a 3D knee dataset and an artery wall in the knee from 5.6-fold vd-PDS auto-calibrating sampling with elliptical shutter, in accordance with an example embodiment. The L1-SPIRiT and L1-STEP procedures include an L1-norm penalty in a wavelet domain, while the SPIRiT and STEP procedures do not include the L1-norm penalty.
[0107] FIG. 13 has a top row and a middle row, both having five images, and a bottom row with two graphs. The top row of FIG. 13 shows, from left to right, a reference image of an axial slice of the reconstructed 3D knee dataset and four reconstruction images of the axial slice reference image generated using, respectively, the SPIRiT procedure, an L1-SPIRiT procedure, the STEP procedure, and an L1-STEP procedure.
[0108] The middle row of FIG. 13 shows, from left to right, a reference image of a zoomed-in portion of a popliteal artery area shown in the axial slice and four reconstruction images of the zoomed-in reference image generated using, respectively, the SPIRiT procedure, the L1-SPIRiT procedure, the STEP procedure, and the L1-STEP procedure. The lower-left corner of the zoomed-in reference image includes an example of manual selection for lumen area and wall pixels. The zoomed-in images demonstrate that STEP more clearly preserves vessel wall boundary information than do SPIRiT based approaches.
[0109] The bottom row of FIG. 13 provides graphs, from left to right, of ROI based nRMSE and mSSIM measurements of the SPIRiT, L1-SPIRiT, STEP and L1-STEP procedures across all slices as a quantitative image quality comparison. The nRMSE and mSSIM graphs suggest the benefit of STEP based approaches for various slices of MR images. A comparison of wall-lumen signal ratios computed on all slices from the SPIRiT and STEP-based procedures, shown in Table 3 below, further indicates that the STEP-based procedures can provide more accurate tissue contrast between vessel wall and lumen.
Table 3
[0110] As discussed above, STEP performance is related to a number of parameters including: the number of data blocks in a k-space partition, the rank value estimated for each subspace, and the regularization weight of the statistical a priori analysis. Increasing the number of data blocks can improve monitoring of local data features, but more data blocks leads to establishment of more local subspaces, each with less k-space data, and so might cause overfitting, particularly in low SNR data blocks. Empirically, both 3 x 3 and 5 x 5 uniform overlapping k-space partitions were found to be a good compromise between computational complexity and performance. The k-space partitioning can be further optimized in a non-uniform manner with consideration of k-space magnitude variation information and/or application of a high-pass filter to transform the signal characteristics before the k-space partition. The rank value and the regularization weight can either be identical or distinct in different partitioned data blocks.
[0111] To evaluate the impact of spatially variant subspace bases, the same rank value was used for both SAKE and different data blocks in STEP. However, rank values were found to vary slightly within blocks due to varying signal-to-noise characteristics (see FIG. 8). To avoid fitting noise, the rank value was estimated from a central k-space area with sufficient SNR, or from other scans using the same coil, and applied to peripheral k-space areas. In addition, the same regularization weight was also used in different data blocks with the assumption of a spatially invariant threshold for basis selection and uniformly distributed Gaussian noise in k-space. The regularization weight configured within a proper range can improve the subspace basis selection for one data block and can give rise to visual enhancement of noise suppression across all imaging slices. However, overly weighted regularization can cause over-smoothing of sharp edges.
[0112] Each STEP iteration can include an SVD for each data block. Compared with SAKE, k-space partitioning can reduce the dimension of each data block and thus the computational complexity of the SVD. In the calibration-less experiment discussed above in the context of FIG. 11, the average reconstruction time of SAKE was 221 s, while STEP took 140 s. The sequential computation of the SVD, however, still dominates most of the reconstruction time. To reduce SVD computation times, SVD implementations can be accelerated and/or can utilize multicore parallel computing for different data blocks.
[0113] In addition, heuristic initialization can lead to fast convergence, as the optimization problem in Eq. [4] is not jointly convex. For example, five to eight iterations of the SPIRiT method can initialize k-space for STEP operating on auto-calibrating image data, while similar information embedded between adjacent slices can be used with calibration-less image data. A fully sampled central k-space area in auto-calibrating experiments can be used to implement SPIRiT that can provide a fast and good initialization for STEP; however, this strategy cannot be applied for calibration-less image data having an under-sampled central k-space. Instead of applying SAKE for central k-space interpolation and then implementing SPIRiT for initialization, a different operation using adjacent slices can be used to initialize STEP in calibration-less operation. The combination of STEP and compressed sensing regularization can be further optimized by using structure promoted methods, which have demonstrated benefits over L1 minimization with respect to wavelet domain sparsity and total variation.
An Example Computing Network
[0114] FIG. 14A depicts a network 1400 in accordance with an example embodiment. In FIG. 14A, servers 1408 and 1410 are configured to communicate, via a network 1406, with client devices 1404a, 1404b, and 1404c. As shown in FIG. 14A, client devices can include a personal computer 1404a, a laptop computer 1404b, and a smart-phone 1404c. More generally, client devices 1404a-1404c (or any additional client devices) can be any sort of computing device, such as a workstation, network terminal, desktop computer, laptop computer, wireless communication device (e.g., a cell phone or smart phone), and so on.
[0115] The network 1406 can correspond to a local area network, a wide area network, a corporate intranet, the public Internet, combinations thereof, or any other type of network(s) configured to provide communication between networked computing devices. Servers 1408 and 1410 can share content and/or provide content to client devices 1404a-1404c. As shown in FIG. 14A, servers 1408 and 1410 are not physically at the same location. Alternatively, servers 1408 and 1410 can be co-located, and/or can be accessible via a network separate from network 1406. Although FIG. 14A shows three client devices and two servers, network 1406 can service more or fewer than three client devices and/or more or fewer than two servers.
An Example Computing Device
[0116] FIG. 14B is a block diagram of an example computing device 1420 including user interface module 1421, network-communication interface module 1422, one or more processors 1423, and data storage 1424, in accordance with embodiments of the invention.
[0117] In particular, computing device 1420 shown in FIG. 14B can be configured to perform one or more functions of client devices 1404a-1404c, servers 1408, 1410, computing device 1432, method 1500, and method 1600, and/or to implement at least part of the herein-described techniques, including but not limited to, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique. Computing device 1420 can include a user interface module 1421, a network-communication interface module 1422, one or more processors 1423, and data storage 1424, all of which may be linked together via a system bus, network, or other connection mechanism 1425.
[0118] Computing device 1420 can be a desktop computer, laptop or notebook computer, personal data assistant (PDA), mobile phone, embedded processor, or any similar device that is equipped with at least one processing unit capable of executing machine-language instructions that implement at least part of the herein-described techniques and methods, including but not limited to functionality related to method 1500, method 1600, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique.
[0119] User interface module 1421 can receive input and/or provide output, perhaps to a user. User interface module 1421 can be configured to receive user input from input device(s), such as a keyboard, a keypad, a touch screen, a computer mouse, a track ball, a joystick, and/or other similar devices configured to receive user input from a user of the computing device 1420. User interface module 1421 can be configured to provide output to output display devices, such as one or more cathode ray tubes (CRTs), liquid crystal displays (LCDs), light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices capable of displaying graphical, textual, and/or numerical information to a user of computing device 1420. User interface module 1421 can also be configured to generate audible output(s) using devices such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices configured to convey sound and/or audible information to a user of computing device 1420. In some embodiments, user interface module 1421 can be configured to provide haptic outputs such as tactile feedback, vibrations, forces, motions, and/or other touch-related outputs.
[0120] Network-communication interface module 1422 can be configured to send and receive data over wireless interfaces 1427 and/or wired interfaces 1428 via a network, such as network 1406. Wired interface(s) 1428, if present, can include a wire, cable, fiber-optic link and/or similar physical connection to a data network, such as a wide area network (WAN), a local area network (LAN), one or more public data networks, such as the Internet, one or more private data networks, or any combination of such networks. Wireless interface(s) 1427, if present, can utilize an air interface, such as a ZigBee, Wi-Fi, and/or WiMAX interface to a data network, such as a WAN, a LAN, one or more public data networks (e.g., the Internet), one or more private data networks, or any combination of public and private data networks.
[0121] In some embodiments, network-communication interface module 1422 can be configured to provide reliable, secured, and/or authenticated communications. For each communication described herein, information for ensuring reliable communications (i.e., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation header(s) and/or footer(s), size/time information, and transmission verification information such as CRC and/or parity check values). Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, DES, AES, RSA, Diffie-Hellman, and/or DSA. Other cryptographic protocols and/or algorithms can be used as well as or in addition to those listed herein to secure (and then decrypt/decode) communications.
[0122] Processor(s) 1423 can include one or more central processing units, computer processors, mobile processors, digital signal processors (DSPs), GPUs, microprocessors, computer chips, and/or other processing units configured to execute machine-language instructions and process data. Processor(s) 1423 can be configured to execute computer-readable program instructions 1426 that are contained in data storage 1424 and/or other instructions as described herein.
[0123] Data storage 1424 can include one or more physical and/or non-transitory storage devices, such as read-only memory (ROM), random access memory (RAM), removable-disk-drive memory, hard-disk memory, magnetic-tape memory, flash memory, and/or other storage devices. Data storage 1424 can include one or more physical and/or non-transitory storage devices with at least enough combined storage capacity to contain computer-readable program instructions 1426 and any associated/related data structures.
[0124] Computer-readable program instructions 1426 and any data structures contained in data storage 1424 include computer-readable program instructions executable by processor(s) 1423 and any storage required, respectively, at least to perform functionality related to the herein-described techniques and/or methods, including but not limited to, functionality related to method 1500, method 1600, the herein-described KDL technique, the herein-described DSDL technique, and the herein-described STEP technique.

[0125] FIG. 14C is a block diagram of an example magnetic resonance device 1430 including computing device 1432 and k coils 1434, where k > 0, in accordance with embodiments of the invention. Computing device 1432 can be a computing device with one or more processors and data storage, such as computing device 1420 discussed above in the context of FIG. 14B. Coils 1434 can include one or more (k) radio frequency, gradient, and/or other coils. Coils 1434 can be configured to transmit and/or receive electromagnetic radiation and/or magnetic forces between magnetic resonance device 1430 and a body under observation, e.g., part or all of a body of a patient. Coils 1434 can include one or more surface coils, body coils, Helmholtz pair coils, paired saddle coils, bird cage coils, and/or other types of coils.
Methods of Operation
[0126] FIGS. 15A and 15B describe a method 1500 for determining a reconstruction matrix A2 of a data matrix A. The data matrix A can be sampled in a first domain, such as a signal acquisition domain. For example, A can be a matrix of data values captured using one or more coils of a magnetic resonance system, where data from each coil are captured separately and combined to form the data matrix A. The first domain can be related to a second domain, such as a signal observation domain. The signal observation domain can be an image, such as an MR image, related to the first domain by a transformation, such as a Fourier transformation or other transformation. In some embodiments, the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
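As an illustrative aside (not part of the claimed method), the relationship between the two domains can be sketched with a discrete Fourier transform; the array sizes and values below are arbitrary examples:

```python
import numpy as np

# Hypothetical data in the signal observation (image) domain.
image = np.zeros((8, 8), dtype=complex)
image[3:5, 3:5] = 1.0

# A Fourier transformation maps the image domain to the signal
# acquisition (k-space) domain; its inverse maps k-space back.
kspace = np.fft.fft2(image)
recovered = np.fft.ifft2(kspace)
```

Because the two transforms are exact inverses, `recovered` matches `image` to numerical precision, mirroring the statement that the second domain can be related back to the first by an inverse transformation.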
[0127] FIG. 15A indicates that method 1500 can begin at block 1510, where a computing device can receive a data matrix A, where the computing device can be such as computing device 1420 discussed above in the context of FIG. 14B. In some examples, the data matrix A can include data captured from a magnetic resonance system. In some embodiments, such as discussed above in the context of FIGS. 4-6, the data matrix A can be sampled from a first domain and can be transformed by a transform Fl to a second domain, where the second domain differs from the first domain. In particular, the first domain can be a signal acquisition domain, such as a k-space domain, and the second domain can be a signal observation domain, such as an image domain.
[0128] At block 1512, the computing device can perform one or more iterations to determine reconstruction matrix A2 of the data matrix A. An iteration of the one or more iterations can include the procedures of blocks 1520 through 1584 described immediately below and shown on FIG. 15B.
[0129] At block 1520, as shown in FIG. 15B, the computing device can reshape the data matrix A into a matrix X using a plurality PI of non-overlapping patches that cover the data matrix A. Each patch can include at most NP entries of the data matrix A, where NP > 0. Each column j of the matrix X can be based on a patch Pl(j) of the plurality PI of non-overlapping patches.
[0130] In some embodiments, each of the patches in PI can include exactly NP entries. For example, suppose that A is a 10 x 10 matrix and each patch in PI is a 5 x 5 patch. Then NP would be 5 x 5 = 25, and A can be covered by 4 non-overlapping patches:
a patch whose upper-left-hand corner is entry (1, 1) of A and whose lower-right-hand corner is entry (5, 5) of A,
a patch whose upper-left-hand corner is entry (6, 1) of A and whose lower-right-hand corner is entry (10, 5) of A,
a patch whose upper-left-hand corner is entry (1, 6) of A and whose lower-right-hand corner is entry (5, 10) of A, and
a patch whose upper-left-hand corner is entry (6, 6) of A and whose lower-right-hand corner is entry (10, 10) of A.
[0131] In other embodiments, each of the patches in PI may have at most NP entries. For example, if A were a 7 x 7 matrix and NP is 25, based on a 5 x 5 patch, then one example set of 4 non-overlapping patches of A is:
a patch with 25 entries whose upper-left-hand corner is entry (1, 1) of A and whose lower-right-hand corner is entry (5, 5) of A,
a patch with 10 entries whose upper-left-hand corner is entry (6, 1) of A and whose lower-right-hand corner is entry (7, 5) of A,
a patch with 10 entries whose upper-left-hand corner is entry (1, 6) of A and whose lower-right-hand corner is entry (5, 7) of A, and
a patch with 4 entries covering entries (6, 6), (6, 7), (7, 6), and (7, 7) of A. Many other examples are possible as well.
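The two patch covers worked through above can be reproduced with a short sketch; the helper name `cover_with_patches` is ours, for illustration only:

```python
import numpy as np

def cover_with_patches(A, ph, pw):
    """Cover matrix A with non-overlapping patches of at most ph x pw
    entries; patches at the right and bottom edges may be smaller."""
    patches = []
    for r in range(0, A.shape[0], ph):
        for c in range(0, A.shape[1], pw):
            patches.append(A[r:r + ph, c:c + pw])
    return patches

# 10 x 10 matrix: four full 5 x 5 patches (25 entries each).
full_cover = cover_with_patches(np.arange(100).reshape(10, 10), 5, 5)

# 7 x 7 matrix: patches of 25, 10, 10, and 4 entries, as in paragraph [0131].
edge_cover = cover_with_patches(np.arange(49).reshape(7, 7), 5, 5)
```

Each patch can then be vectorized into a column of the matrix X, as block 1520 describes.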
[0132] At block 1530, the computing device can determine a dictionary Dl and weights Wl . The dictionary Dl can include K atoms, with K > 0, and where each atom can have NP entries. The weights Wl can include, for each patch Pl(i) of the plurality PI of non-overlapping patches, a sparse vector of weights Wl(i) for combining the K atoms. For example, each atom in dictionary Dl can be a patch from A, a representation of a patch, or some other data.
[0133] In some embodiments, Dl is arranged as a matrix with the K atoms stored as columns in the matrix. Then, for patch Pl(i), the product Dl*Wl(i) can represent a linear combination of the atoms in Dl. In some other embodiments, the product Dl*Wl(i) can equal Pl(i), while in still other embodiments the product Dl*Wl(i) can approximate Pl(i).
[0134] In yet other embodiments, determining a dictionary Dl and weights Wl can include determining the dictionary Dl and the weights Wl iteratively. For example, an iteration for determining Dl and Wl iteratively can involve first fixing the dictionary Dl and determining weights Wl and then fixing the weights Wl and determining dictionary Dl (or vice versa).
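A minimal numerical sketch of this alternation follows; it uses plain least squares with hard thresholding as a crude stand-in for a proper sparse-coding step (such as orthogonal matching pursuit), so it illustrates the fix-one-update-the-other structure rather than the disclosed algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 40))  # columns: vectorized patches (NP = 25)
K = 10                             # number of atoms in the dictionary

D = rng.standard_normal((25, K))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms

for _ in range(5):
    # Fix D, update W: least-squares fit, then keep only large coefficients.
    W = np.linalg.lstsq(D, X, rcond=None)[0]
    W[np.abs(W) < np.percentile(np.abs(W), 70)] = 0.0
    # Fix W, update D: least-squares fit of the atoms, then renormalize.
    D = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T
    norms = np.linalg.norm(D, axis=0)
    D /= np.where(norms > 0, norms, 1.0)

residual = np.linalg.norm(X - D @ W)
```

The residual `||X - D*W||` plays the role of the sum of residual values minimized in paragraph [0135].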
[0135] In still other embodiments, the dictionary Dl and the weights Wl can be determined by minimizing a sum of residual values, where each residual value of the residual values includes a residual value for a patch. For an example of minimizing the sum, see the equation for formulating the KDL technique in the discussion of FIG. 1 above.
[0136] In particular embodiments, minimizing the sum of the residual values can include determining a residual value for a patch Pl(j) of the plurality PI of non-overlapping patches by determining a difference between data values y(j), which are the jth column of data values in the data matrix A corresponding to patch Pl(j), and a product of the dictionary Dl and sparse vector Wl(j) of the weights Wl. In other particular embodiments, the product of the dictionary Dl and sparse vector Wl(j) of the weights Wl can include a product of a patch Pl(j), the dictionary Dl, and the sparse vector Wl(j). In still other particular embodiments, method 1500 can include reducing the weights Wl(i) so that an absolute magnitude of a sum of the weights Wl(i) approaches a predetermined value. For example, the predetermined value can be 0.
[0137] In still other embodiments, such as described in the context of FIGS. 4-6, a dictionary D can include the dictionary Dl for the first domain and the dictionary D2 for the second domain. Also, weights W can include weights Wl for the first domain and weights W2 for the second domain.
[0138] In particular of these embodiments, the dictionary Dl and the weights Wl can be determined by at least: minimizing a sum of a first plurality of residual values, where a residual value of the first plurality of residual values includes a difference between a product of Dl and Wl and the second transform F2 of a product of D2 and W2; and after determining the dictionary Dl and the weights Wl, minimizing a sum of a second plurality of residual values, wherein a residual value of the second plurality of residual values includes a difference between a product of D2 and W2 and the first transform Fl of a product of Dl and Wl.
[0139] At block 1540, the computing device can generate an estimated matrix XI of the matrix X based on a combination of the dictionary Dl and the weights Wl .
[0140] At block 1542, the computing device can reshape the estimated matrix XI to form a data matrix Al.
[0141] At block 1550, the computing device can apply a first transform Fl to the data matrix Al to obtain a data matrix I. The data matrix Al can include values in a first domain and the data matrix I can include values in a second domain that is different from the first domain.
[0142] At block 1560, the computing device can reshape the data matrix I into a matrix 12 using a plurality P2 of non-overlapping patches that cover the data matrix I, where each patch of P2 can include at most NP2 entries of the data matrix I, and where NP2 > 0.
[0143] At block 1570, the computing device can determine a dictionary D2 and weights W2 in the second domain. The dictionary D2 can include K2 atoms, where K2 > 0. Each atom can have NP2 entries. The weights W2 can include, for each patch P2(i) of the plurality P2 of non- overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms.
[0144] At block 1580, the computing device can generate an estimated matrix II of the matrix 12 based on a combination of the dictionary D2 and the weights W2.
[0145] At block 1582, the computing device can apply a second transform F2 to the estimated matrix II to obtain reconstruction matrix A2. The reconstruction matrix A2 can include values in the first domain.
[0146] At block 1584, the computing device can, for each value A(t) in the data matrix A on an acquisition trajectory, replace a corresponding value A2(t) with the value A(t).
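The replacement step of block 1584 amounts to a data-consistency projection; a small sketch follows, in which the sampling mask and array sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))   # acquired data matrix A
A2 = rng.standard_normal((6, 6))  # current reconstruction estimate A2
mask = rng.random((6, 6)) < 0.4   # True where A(t) lies on the acquisition trajectory

# Block 1584: on the acquisition trajectory, keep the measured values A(t)
# by overwriting the corresponding reconstructed values A2(t).
A2[mask] = A[mask]
```

Off the trajectory, A2 retains the values estimated by the dictionary steps; on the trajectory, the measured data always wins.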
[0147] Returning to FIG. 15 A, at block 1590, the computing device can determine if all iterations are complete. If all of the iterations are complete, method 1500 can proceed to block 1592. However, if not all of the iterations are complete, method 1500 can return to block 1520 and perform another iteration.
[0148] At block 1592, the computing device can provide the reconstruction matrix A2.

[0149] In some embodiments, the computing device can be embedded in an MR device that also has one or more coils, such as MR device 1430. In particular embodiments, the data matrix A can be determined using the one or more coils of the MR device. In other embodiments, a device can be provided with means to perform any or all of the above-disclosed procedures, techniques, and/or blocks associated with method 1500.
[0150] FIG. 16 describes a method 1600 for determining a reconstruction matrix A2 of a data matrix A. The data matrix A can be sampled in a first domain, such as a signal acquisition domain. For example, A can be a matrix of data values captured using one or more coils of an MR system, where data from each coil are captured separately and combined to form the data matrix A. The first domain can be related to a second domain, such as a signal observation domain. The signal observation domain can be an image, such as an MR image, related to the first domain by a transformation, such as a Fourier transformation or other transformation. In some embodiments, the second domain can be related back to the first domain using an inverse transformation, such as an inverse Fourier transformation.
[0151] FIG. 16 indicates that method 1600 can begin at block 1610, where a computing device can receive a data matrix A, where the computing device can be such as computing device 1420 discussed above in the context of FIG. 14B. In some embodiments, the data matrix A can include data captured from a magnetic resonance system. In other embodiments, such as discussed above in the context of FIGS. 7-14, the data matrix A can be sampled from a first domain and can be transformed by a transform Fl to a second domain, where the second domain differs from the first domain. In particular, the first domain can be a signal acquisition domain, such as a k-space domain, and the second domain can be a signal observation domain, such as an image domain.
[0152] At block 1612, the computing device can perform one or more iterations to determine reconstruction matrix A2 of the data matrix A. An iteration of the one or more iterations can include the procedures of blocks 1620 through 1660 described immediately below.
[0153] At block 1620, the computing device can partition the data matrix A into a plurality of data blocks, such as discussed above in the context of at least FIGS. 7-14.
[0154] At block 1630, the computing device can, for a particular data block L of the plurality of data blocks, perform at least the procedures of blocks 1640, 1642, and 1644.

[0155] At block 1640, the computing device can extract the data block L from the data matrix A.
[0156] At block 1642, the computing device can vectorize the data block L into a data sub-matrix AL.
[0157] At block 1644, the computing device can factor AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property. In some embodiments, factoring AL into at least the matrices DL and WL can include determining DL using SVD.
[0158] In other embodiments, factoring AL into at least matrices DL and WL while enforcing at least the low-rank property and the sparsity property can include factoring AL into at least matrices DL and WL while enforcing at least the low-rank property, the sparsity property, and a joint sparsity property in a wavelet domain.
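One common way to realize the low-rank constraint at block 1644 is a truncated SVD followed by a sparsity-promoting shrinkage of the weights; the following sketch is illustrative only and is not the specific STEP factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical vectorized data sub-matrix AL, built to have rank <= 5.
AL = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20))
K = 5  # rank value K

# Low-rank property: keep only the top-K singular components, so that
# DL (the basis) has K columns and rank(DL @ WL) <= K.
U, s, Vt = np.linalg.svd(AL, full_matrices=False)
DL = U[:, :K]
WL = np.diag(s[:K]) @ Vt[:K, :]

# Sparsity property: soft-threshold the weights WL (an illustrative choice
# of shrinkage; the threshold value tau is arbitrary here).
tau = 0.1
WL = np.sign(WL) * np.maximum(np.abs(WL) - tau, 0.0)
```

Truncating to the top-K singular vectors directly enforces the rank bound of claim 22, while the shrinkage step promotes sparsity in WL.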
[0159] At block 1650, the computing device can form a weighted data block by averaging values of at least the factored matrices DL and WL.
[0160] At block 1660, the computing device can generate the reconstruction matrix A2 based on the weighted data block.
[0161] In some embodiments, method 1600 additionally includes determining an L1-norm of the matrix WL, and determining a basis selection based on the L1-norm of the matrix WL.
[0162] In some embodiments, method 1600 additionally includes determining a rank value K. In these embodiments, enforcing at least the low-rank property and the sparsity property can include enforcing that a rank of DL is less than or equal to K.
[0163] At block 1670, the computing device can determine if all iterations are complete. If all of the iterations are complete, method 1600 can proceed to block 1680. However, if not all of the iterations are complete, method 1600 can return to block 1620 and perform another iteration.
[0164] At block 1680, the computing device can provide the reconstruction matrix A2.
[0165] In some embodiments, the computing device can be embedded in an MR device that also has one or more coils, such as MR device 1430. In particular embodiments, block 1610 for receiving data matrix A can include determining the data matrix A using the one or more coils. In other embodiments, a device can be provided with means to perform any or all of the above- disclosed procedures, techniques, and/or blocks associated with method 1600.

Claims

1. A method, comprising:
receiving data matrix A at a computing device;
performing one or more iterations to determine a reconstruction matrix A2 of the data matrix A using the computing device, wherein an iteration of the one or more iterations comprises:
reshaping the data matrix A into a matrix X using a plurality PI of non-overlapping patches that cover the data matrix A, each patch comprising at most NP entries of the data matrix A, NP > 0, wherein each column j of the matrix X is based on a patch Pl(j) of the plurality PI of non-overlapping patches;
determining a dictionary Dl and weights Wl, wherein the dictionary Dl comprises K atoms, K > 0, each atom having NP entries, and wherein the weights Wl comprise, for each patch Pl(i) of the plurality PI of non-overlapping patches, a sparse vector of weights Wl(i) for combining the K atoms,
generating an estimated matrix XI of the matrix X based on a combination of the dictionary Dl and the weights Wl,
reshaping the estimated matrix XI to form a data matrix Al,
applying a first transform Fl to the data matrix Al to obtain a data matrix I, wherein the data matrix Al comprises values in a first domain, and wherein the data matrix I comprises values in a second domain different from the first domain,
reshaping the data matrix I into a matrix 12 using a plurality P2 of non-overlapping patches that cover the data matrix I, wherein each patch of P2 comprises at most NP2 entries of the data matrix I, NP2 > 0,
determining a dictionary D2 and weights W2 in the second domain, wherein the dictionary D2 comprises K2 atoms, K2 > 0, each atom having NP2 entries, and wherein the weights W2 comprise, for each patch P2(i) of the plurality P2 of non-overlapping patches, a sparse vector of weights W2(i) for combining the K2 atoms,
generating an estimated matrix II of the matrix 12 based on a combination of the dictionary D2 and the weights W2,
applying a second transform F2 to the estimated matrix II to obtain reconstruction matrix A2, wherein the reconstruction matrix A2 comprises values in the first domain, and
for each value A(t) in the data matrix A that is associated with an acquisition trajectory, replacing a corresponding value A2(t) with the value A(t);
determining whether the one or more iterations are complete using the computing device; and
after determining that the one or more iterations are complete, providing the reconstruction matrix A2 using the computing device.
2. The method of claim 1, wherein determining the dictionary Dl and weights Wl comprises determining the dictionary Dl and the weights Wl iteratively.
3. The method of claim 1 or claim 2, wherein determining a dictionary Dl and weights Wl comprises minimizing a sum of residual values, wherein each residual value of the residual values comprises a residual value for a patch.
4. The method of claim 3, wherein minimizing the sum of the residual values comprises determining a residual value for a patch Pl(j) of the plurality PI of patches by determining a difference between y(j) and a product of the dictionary Dl and sparse vector Wl(j) of the weights Wl, wherein y(j) is a jth column of data values in the data matrix A corresponding to patch Pl(j).
5. The method of claim 4, wherein the product of the dictionary Dl and sparse vector Wl(j) of the weights Wl comprises a product of a patch Pl(j), the dictionary Dl, and the sparse vector Wl(j).
6. The method of any of claims 1-5, further comprising:
reducing the weights Wl(i) so that an absolute magnitude of a sum of the weights Wl(i) approaches a predetermined value.
7. The method of claim 6, wherein the predetermined value is 0.
8. The method of any of claims 1-7, wherein the data matrix A is sampled from the first domain, wherein the data matrix A is configured to be transformed by the transform Fl to the second domain, and wherein the second domain differs from the first domain.
9. The method of claim 8, wherein the first domain is a signal acquisition domain and wherein the second domain is a signal observation domain.
10. The method of claim 8 or claim 9, wherein a dictionary D comprises the dictionary Dl for the first domain and the dictionary D2 for the second domain, and wherein weights W comprise the weights Wl for the first domain and the weights W2 for the second domain.
11. The method of claim 10, wherein determining the dictionary Dl and the weights Wl comprises:
determining the dictionary Dl and the weights Wl by at least minimizing a sum of a first plurality of residual values, wherein a residual value of the first plurality of residual values comprises a difference between a product of Dl and Wl and the second transform F2 of a product of D2 and W2; and
after determining the dictionary Dl and the weights Wl, minimizing a sum of a second plurality of residual values, wherein a residual value of the second plurality of residual values comprises a difference between a product of D2 and W2 and the first transform Fl of a product of Dl and Wl.
12. The method of any of claims 1-11, wherein the data matrix A comprises data captured from a magnetic resonance (MR) system.
13. A computing device, comprising:
a processor; and
a tangible computer-readable medium configured to comprise instructions that, when executed by the processor, are configured to cause the computing device to perform the method of any of claims 1-12.
14. The computing device of claim 13, wherein the tangible computer-readable medium comprises a non-transitory computer-readable medium.
15. A magnetic resonance system, comprising:
one or more coils; and
the computing device of either claim 13 or 14.
16. The magnetic resonance system of claim 15, wherein receiving data matrix A comprises determining the data matrix A using the one or more coils.
17. A tangible computer-readable medium configured to comprise instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform the method of any of claims 1-12.
18. The tangible computer-readable medium of claim 17, wherein the tangible computer-readable medium comprises a non-transitory computer-readable medium.
19. A device, comprising:
means for performing the method of any of claims 1-12.
20. A method, comprising:
receiving data matrix A at a computing device;
performing one or more iterations to determine reconstruction matrix A2 of the data matrix A using the computing device, wherein an iteration of the one or more iterations comprises:
partitioning the data matrix A into a plurality of data blocks;
for a particular data block L of the plurality of data blocks:
extracting the data block L from the data matrix A;
vectorizing the data block L into a data sub-matrix AL, and
factoring AL into at least matrices DL and WL while enforcing at least a low-rank property and a sparsity property; and
forming a weighted data block by averaging values of at least the factored matrices DL and WL; and
generating the reconstruction matrix A2 based on the weighted data block; and
providing the reconstruction matrix A2 using the computing device.
21. The method of claim 20, further comprising:
determining an L1-norm of the matrix WL; and
determining a basis selection based on the L1-norm of the matrix WL.
22. The method of either claim 20 or claim 21, further comprising:
determining a rank value K;
wherein enforcing at least the low-rank property and the sparsity property comprises enforcing that a rank of DL is less than or equal to K.
23. The method of any one of claims 20-22, wherein factoring AL into at least the matrices DL and WL comprises determining DL using singular value decomposition (SVD).
24. The method of any one of claims 20-23, wherein factoring AL into at least matrices DL and WL while enforcing at least the low-rank property and the sparsity property comprises factoring AL into at least matrices DL and WL while enforcing at least the low-rank property, the sparsity property, and a joint sparsity property in a wavelet domain.
25. The method of any one of claims 20-24, wherein the data matrix A comprises data captured from a magnetic resonance (MR) system.
26. A computing device, comprising:
a processor; and
a tangible computer-readable medium configured to comprise instructions that, when executed by the processor, are configured to cause the computing device to perform the method of any of claims 20-25.
27. The computing device of claim 26, wherein the tangible computer-readable medium comprises a non-transitory computer-readable medium.
28. A magnetic resonance system, comprising:
one or more coils; and
the computing device of either claim 26 or 27.
29. The magnetic resonance system of claim 28, wherein receiving data matrix A comprises determining the data matrix A using the one or more coils.
30. A tangible computer-readable medium configured to comprise instructions that, when executed by a processor of a computing device, are configured to cause the computing device to perform the method of any of claims 20-25.
31. The tangible computer-readable medium of claim 30, wherein the tangible computer-readable medium comprises a non-transitory computer-readable medium.
32. A device, comprising:
means for performing the method of any of claims 20-25.
PCT/US2015/027648 2014-04-24 2015-04-24 Dual space dictionary learning for magnetic resonance (mr) image reconstruction WO2015164825A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461983978P 2014-04-24 2014-04-24
US61/983,978 2014-04-24

Publications (1)

Publication Number Publication Date
WO2015164825A1 true WO2015164825A1 (en) 2015-10-29

Family

ID=54333322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/027648 WO2015164825A1 (en) 2014-04-24 2015-04-24 Dual space dictionary learning for magnetic resonance (mr) image reconstruction

Country Status (1)

Country Link
WO (1) WO2015164825A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6560353B1 (en) * 2000-03-21 2003-05-06 Magnetic Resonance Innovations, Inc. Method of MRI image reconstruction from partially acquired data in two or more dimensions using a multidimensional inverse transform technique
US7143030B2 (en) * 2001-12-14 2006-11-28 Microsoft Corporation Parametric compression/decompression modes for quantization matrices for digital audio
US20120099774A1 (en) * 2010-10-21 2012-04-26 Mehmet Akcakaya Method For Image Reconstruction Using Low-Dimensional-Structure Self-Learning and Thresholding
CN103218795A (en) * 2013-05-05 2013-07-24 西安电子科技大学 Partial K space sequence image reconstruction method based on self-adapted double-dictionary learning
CN103646256A (en) * 2013-12-17 2014-03-19 上海电机学院 Image characteristic sparse reconstruction based image classification method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020215597A1 (en) * 2019-04-24 2020-10-29 深圳先进技术研究院 Magnetic resonance imaging method, apparatus and system, and storage medium
US11397231B2 (en) 2019-04-24 2022-07-26 Shenzhen Institutes Of Advanced Technology Magnetic-resonance imaging method, apparatus and system, and storage medium
CN111193925A (en) * 2019-12-25 2020-05-22 杭州中威电子股份有限公司 Image compressed sensing coding and normalization method based on block vector inner product
WO2023201625A1 (en) * 2022-04-21 2023-10-26 中国科学院深圳先进技术研究院 Artifact correction method and apparatus based on normalizing flow
CN115359144A (en) * 2022-10-19 2022-11-18 之江实验室 Magnetic resonance plane echo imaging artifact simulation method and system
CN115359144B (en) * 2022-10-19 2023-03-03 之江实验室 Magnetic resonance plane echo imaging artifact simulation method and system

Similar Documents

Publication Publication Date Title
Cordero-Grande et al. Complex diffusion-weighted image estimation via matrix recovery under general noise models
US10203393B2 (en) System and method for removing gibbs artifact in medical imaging system
Alansary et al. Fast fully automatic segmentation of the human placenta from motion corrupted MRI
US10638951B2 (en) Systems and methods for magnetic resonance imaging
US11633146B2 (en) Automated co-registration of prostate MRI data
US11313927B2 (en) Systems and methods for magnetic resonance imaging
CN107072592B (en) Magnetic resonance imaging apparatus and quantitative magnetic susceptibility matching method
US11587269B2 (en) Method and system for determining magnetic susceptibility distribution
Zakaria et al. Estimation and use of prior information in FEM-CSI for biomedical microwave tomography
US20190073748A1 (en) Method and Apparatus to Perform Local De-noising of a Scanning Imager Image
Pieciak et al. Non-stationary Rician noise estimation in parallel MRI using a single image: a variance-stabilizing approach
US10935619B2 (en) Method, device and MRI system for correcting phase shifts
US10338174B2 (en) Robust dual echo Dixon imaging with flexible echo times
WO2015164825A1 (en) Dual space dictionary learning for magnetic resonance (mr) image reconstruction
Gavazzi et al. Deep learning‐based reconstruction of in vivo pelvis conductivity with a 3D patch‐based convolutional neural network trained on simulated MR data
US9069998B2 (en) Determining electrical properties of tissue using magnetic resonance imaging and least squared estimate
US11051711B2 (en) Noninvasive determination of electrical properties of tissues and materials using magnetic resonance measurements
KR102428725B1 (en) Method and program for imaging quality improving
US20230116931A1 (en) Medical information processing apparatus, medical information processing method, and storage medium
Patel et al. Mesh-based spherical deconvolution: a flexible approach to reconstruction of non-negative fiber orientation distributions
Zhou et al. STEP: Self‐supporting tailored k‐space estimation for parallel imaging reconstruction
US10775469B2 (en) Magnetic resonance imaging apparatus and method
Barmpoutis et al. Diffusion kurtosis imaging: robust estimation from DW-MRI using homogeneous polynomials
US11941732B2 (en) Multi-slice MRI data processing using deep learning techniques
US20230410315A1 (en) Deep magnetic resonance fingerprinting auto-segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 15782300; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 15782300; Country of ref document: EP; Kind code of ref document: A1