CN111127575A - Image reconstruction method, computer-readable medium, and computer device - Google Patents

Image reconstruction method, computer-readable medium, and computer device Download PDF

Info

Publication number
CN111127575A
Authority
CN
China
Prior art keywords
image reconstruction
parameters
relationship
equation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911275740.5A
Other languages
Chinese (zh)
Inventor
梁栋
程静
王海峰
朱燕杰
郑海荣
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911275740.5A priority Critical patent/CN111127575A/en
Priority to PCT/CN2019/125099 priority patent/WO2021114216A1/en
Publication of CN111127575A publication Critical patent/CN111127575A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image reconstruction method comprising the following steps: solving a calculation model from which an iterative reconstruction relationship can be solved, to obtain the iterative reconstruction relationship; expanding the iterative reconstruction relationship onto a deep neural network to obtain an initial image reconstruction model; disassembling a parameter in the iterative reconstruction relationship that is combined from at least two sub-parameters in a fixed combination relationship, to obtain at least two independent sub-parameters; and optimizing the initial image reconstruction model according to all the independent sub-parameters to obtain an optimized image reconstruction model. The invention introduces a deep learning approach: a traditional optimization algorithm is expanded onto a neural network, the neural network is used to learn the unknown quantities in the algorithm such as its parameters and functional relationships, the structural relationships among the parameters in the algorithm are further broken up so that the neural network can freely learn the combination relationships among them, and a deep neural network with an optimized algorithm structure is finally obtained for rapidly reconstructing the image.

Description

Image reconstruction method, computer-readable medium, and computer device
Technical Field
The present invention relates to image processing technologies, and in particular, to an image reconstruction method, a computer-readable medium, and a computer device for reconstructing an image using a machine learning method.
Background
Magnetic resonance imaging uses static and radio-frequency magnetic fields to image human tissue; it not only provides rich tissue contrast but is also harmless to the human body, and has therefore become a powerful tool for clinical diagnosis. However, its low imaging speed remains a major bottleneck restricting its development, so increasing the scanning speed and reducing the scanning time, on the premise that the imaging quality remains clinically acceptable, is particularly important.
For fast imaging, the techniques commonly used at present are parallel imaging and compressed sensing. Parallel imaging exploits the correlation among multi-channel coils to accelerate acquisition, while compressed sensing exploits the sparsity prior of the imaged object to reduce the number of K-space sampling points. However, limited by hardware and other conditions, the acceleration factor of parallel imaging is restricted, and as the acceleration factor increases the image suffers from noise amplification; compressed sensing, because it relies on iterative reconstruction, has a very long reconstruction time, and its sparse transform and reconstruction parameters are difficult to select.
Disclosure of Invention
In order to solve the technical problems of the prior art, the present invention provides an image reconstruction method, a computer readable medium, and a computer device for reconstructing an image by using a machine learning method.
According to an aspect of the present invention, there is provided an image reconstruction method. The image reconstruction method comprises the following steps: solving a calculation model capable of solving an iterative reconstruction relation to obtain the iterative reconstruction relation; expanding the iterative reconstruction relation to a deep neural network to obtain an initial image reconstruction model; disassembling parameters combined by at least two sub-parameters in a fixed combination relation in the iterative reconstruction relation to obtain at least two independent sub-parameters; and optimizing the initial image reconstruction model according to all independent sub-parameters to obtain an optimized image reconstruction model.
In the image reconstruction method according to an aspect of the present invention, optionally, the iterative reconstruction relationship includes an image data relationship, which is represented by equation 1:

[Formula 1]  $x^{n+1} = \mathrm{prox}_{\tau[R]}\left(x^{n} - \tau A^{*} d^{n+1}\right)$

where n is an integer greater than or equal to 1, x represents the image data, prox represents the proximal operator, τ represents a first algorithm parameter, $A^{*}$ represents the adjoint of the undersampled Fourier transform operator, d represents a dual parameter, R represents a regularization term, and $x^{n} - \tau A^{*} d^{n+1}$ is the parameter with the fixed combination relationship.
In the image reconstruction method according to an aspect of the present invention, optionally, the method of disassembling parameters having a fixed combination relationship includes:
replacing $\mathrm{prox}_{\tau[R]}$ in equation 1 with an operator Λ, so that equation 1 is transformed into equation 2:

[Formula 2]  $x^{n+1} = \Lambda\left(x^{n} - \tau A^{*} d^{n+1}\right)$

splitting said parameter into at least two independent sub-parameters $x^{n}$ and $A^{*} d^{n+1}$, so that equation 2 is transformed into equation 3:

[Formula 3]  $x^{n+1} = \Lambda\left(x^{n}, A^{*} d^{n+1}\right)$

where the correlation between $x^{n}$ and $A^{*} d^{n+1}$ can be obtained by a deep learning method, and the operator Λ can also be obtained by a deep learning method.
In the image reconstruction method according to an aspect of the present invention, optionally, the iterative reconstruction relationship further includes a dual parameter relationship, which is expressed by equation 4:

[Formula 4]  $d^{n+1} = \dfrac{d^{n} + \sigma\left(A\bar{x}^{n} - f\right)}{1 + \sigma}$

where n is an integer greater than or equal to 1, σ represents a second algorithm parameter, $\bar{x}$ represents an update variable parameter, f represents the undersampled K-space data parameter, and d represents the dual parameter.
In the image reconstruction method according to an aspect of the present invention, optionally, the iterative reconstruction relationship further includes an update variable relationship, which is expressed by the following equation 5:

[Formula 5]  $\bar{x}^{n+1} = x^{n+1} + \theta\left(x^{n+1} - x^{n}\right)$

where n is an integer greater than or equal to 1, $\bar{x}$ represents the update variable parameter, x represents the image data, θ represents the algorithm update step size, and θ can be obtained by a deep learning method.
In the image reconstruction method according to an aspect of the present invention, optionally, the equation 1, the equation 4, and the equation 5 are expanded onto a deep neural network to obtain the initial image reconstruction model.
In the image reconstruction method according to an aspect of the present invention, optionally, the network structure corresponding to equation 1 expanded onto the deep neural network is optimized according to equation 3 to obtain the optimized image reconstruction model.
In the image reconstruction method according to an aspect of the present invention, optionally, the image reconstruction method includes: scanning an object in real time to obtain initial data, wherein the initial data comprises initial image data, initial updating variable parameters, undersampled K-space data parameters and initial dual parameters; and inputting the initial data into the optimized image reconstruction model to obtain a reconstructed image.
According to another aspect of the present invention, there is also provided a computer-readable medium. The computer readable medium has stored thereon an image reconstruction program which, when executed by a processor, implements the image reconstruction method as described above.
According to yet another aspect of the invention, a computer device is also provided. The computer device comprises a memory, a processor and an image reconstruction program stored on the memory and executable on the processor, the image reconstruction program when executed by the processor implementing the image reconstruction method as described above.
The beneficial effects of the invention are as follows: the invention adopts a deep learning method to reconstruct images, so that the parameters, functional relationships and other quantities required for reconstruction are learned from a large amount of training data, achieving better imaging quality and higher acceleration factors than the traditional parallel imaging or compressed sensing methods. Furthermore, by introducing the idea of deep learning, the traditional optimization algorithm is expanded onto a neural network, the unknown quantities in the algorithm such as its parameters and functional relationships are learned with the neural network, the structural relationships among the parameters in the algorithm are further broken up so that the neural network can freely learn the combination relationships among them, and a deep neural network with an optimized algorithm structure is finally obtained for rapidly reconstructing the image.
Drawings
The above and other aspects, features and advantages of embodiments of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an image reconstruction method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the acquisition operator Λ using a convolutional neural network, according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of image reconstruction according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described in detail with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. Rather, these embodiments are provided to explain the principles of the invention and its practical application to thereby enable others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated.
Fig. 1 is a flowchart of an image reconstruction method according to an embodiment of the present invention.
Referring to fig. 1, the image reconstruction method according to an embodiment of the present invention includes step S110, step S120, step S130, and step S140. It should be understood that although fig. 1 shows an arrangement of steps in the order from step S110 to step S140, the present invention is not limited thereto, and the order of the steps may be changed according to actual requirements.
In step S110, the calculation model capable of solving the iterative reconstruction relationship is solved to obtain the iterative reconstruction relationship.
In constrained magnetic resonance imaging, as an embodiment, in order to avoid the disturbance of noise, the prior knowledge of the image itself can be used to make the solution unique and stable, and a regularization term can be added to constrain the solution space. Under the regularization constraint, the calculation model from which the iterative reconstruction relationship is solved can be represented by equation 1 below.

[Formula 1]  $\min_{X} \dfrac{1}{2}\left\| AX - f \right\|_{2}^{2} + \lambda R(X)$

The first term measures the degree of agreement between the ideal image and the observed data under the degradation model and is called the fidelity term; A represents the undersampled Fourier transform operator, X represents the reconstructed image, f represents the undersampled K-space data parameter, and λ balances the two terms and is called the regularization parameter. R represents the regularization term, typically $R = \left\| \Psi X \right\|_{1}$, where Ψ denotes a sparse transform; a fixed transform, such as a total variation or wavelet transform, is usually chosen.
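To make the operators in formula 1 concrete, a minimal numerical sketch is given below. The Cartesian K-space mask and the image-domain l1 regularizer used here are illustrative assumptions and are not specified by the patent itself.

```python
import numpy as np

def A(x, mask):
    """Undersampled Fourier transform operator: 2-D FFT followed by K-space masking."""
    return mask * np.fft.fft2(x, norm="ortho")

def A_adjoint(k, mask):
    """Adjoint operator A*: inverse 2-D FFT of the masked (zero-filled) K-space data."""
    return np.fft.ifft2(mask * k, norm="ortho")

def objective(x, f, mask, lam):
    """Value of formula 1: data fidelity plus an (assumed) image-domain l1 regularization term."""
    fidelity = 0.5 * np.linalg.norm(A(x, mask) - f) ** 2
    regularization = lam * np.abs(x).sum()
    return fidelity + regularization
```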
To solve equation 1 above, a specific optimization algorithm, such as the alternating direction method of multipliers (ADMM) or the primal-dual algorithm (PDHG), may be employed. In one embodiment, equation 1 is solved using the primal-dual algorithm, and the image data relationship contained in the iterative reconstruction relationship is represented by the following equation 2.

[Formula 2]  $x^{n+1} = \mathrm{prox}_{\tau[R]}\left(x^{n} - \tau A^{*} d^{n+1}\right)$

where n is an integer greater than or equal to 1, x represents the image data, prox represents the proximal operator, τ represents a first algorithm parameter (which may be set as the case requires), $A^{*}$ represents the adjoint of the undersampled Fourier transform operator A, d represents the dual parameter, R represents the regularization term, and $x^{n} - \tau A^{*} d^{n+1}$ is the parameter with the fixed combination relationship. Here, x may be, for example, image data obtained by magnetic resonance scanning of a subject.
As another embodiment of the present invention, in the process of solving the calculation model (i.e. equation 1 above) with the primal-dual algorithm, the dual parameter relationship and the update variable relationship included in the iterative reconstruction relationship can also be obtained.

Specifically, the dual parameter relationship may be expressed as the following equation 3.

[Formula 3]  $d^{n+1} = \dfrac{d^{n} + \sigma\left(A\bar{x}^{n} - f\right)}{1 + \sigma}$

where n is an integer greater than or equal to 1, σ represents a second algorithm parameter, $\bar{x}$ represents the update variable parameter, f represents the undersampled K-space data parameter, and d represents the dual parameter. σ may be obtained by deep learning, and the present invention is not particularly limited in this respect. For example, a large amount of known data (i.e. known $d^{n}$, $d^{n+1}$, f and $\bar{x}^{n}$) can be input into a convolutional neural network, and σ can be obtained by deep learning.
The update variable relationship is expressed by the following equation 4.

[Formula 4]  $\bar{x}^{n+1} = x^{n+1} + \theta\left(x^{n+1} - x^{n}\right)$

where n is an integer greater than or equal to 1, $\bar{x}$ denotes the update variable parameter, x denotes the image data, and θ denotes the algorithm update step size. θ can be obtained by a deep learning method.
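For reference, a minimal sketch of one classical primal-dual iteration assembled from equations 3, 2 and 4 (before the relationships are expanded onto a network) is given below. It reuses the A and A_adjoint helpers from the sketch after formula 1; the soft-thresholding proximal operator and the step sizes σ, τ, θ are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def soft_threshold(z, t):
    """Assumed proximal operator prox_{tau[R]} for an l1 regularizer (safe for complex data)."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def pdhg_step(x, x_bar, d, f, mask, sigma=0.5, tau=0.5, theta=1.0):
    """One iteration of the primal-dual algorithm using formulas 3, 2 and 4."""
    d_next = (d + sigma * (A(x_bar, mask) - f)) / (1.0 + sigma)        # formula 3: dual update
    x_next = soft_threshold(x - tau * A_adjoint(d_next, mask), tau)    # formula 2: image update
    x_bar_next = x_next + theta * (x_next - x)                         # formula 4: update variable
    return x_next, x_bar_next, d_next
```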
With continued reference to fig. 1, in step S120, the iterative reconstruction relationship is expanded onto the deep neural network to obtain an initial image reconstruction model. It should be noted that, here, the initial image reconstruction model is formed by expanding the above equation 2, equation 3, and equation 4 onto the deep neural network, but the present invention is not limited thereto.
With continued reference to fig. 1, in step S130, the parameters in the iterative reconstruction relationship, which are combined by at least two sub-parameters in a fixed combination relationship, are disassembled to obtain at least two independent sub-parameters.
Here, the above equation 2 is taken as an example. To implement step S130, first, $\mathrm{prox}_{\tau[R]}$ in equation 2 is replaced with an operator Λ, which can be obtained by a deep learning method, so that equation 2 is transformed into the following equation 5.

[Formula 5]  $x^{n+1} = \Lambda\left(x^{n} - \tau A^{*} d^{n+1}\right)$

Secondly, in order to better exploit the learning ability of deep learning and improve the quality of the reconstructed image, the fixed combination relationship of the parameter $x^{n} - \tau A^{*} d^{n+1}$, which combines the two sub-parameters $x^{n}$ and $A^{*} d^{n+1}$, is broken, so that the parameter is split into the two sub-parameters $x^{n}$ and $A^{*} d^{n+1}$ and equation 5 is transformed into the following equation 6.

[Formula 6]  $x^{n+1} = \Lambda\left(x^{n}, A^{*} d^{n+1}\right)$

Here the correlation between $x^{n}$ and $A^{*} d^{n+1}$ (originally the relationship $x^{n} - \tau A^{*} d^{n+1}$) can be obtained by a deep learning method, and the operator Λ can also be obtained by a deep learning method. In other words, the input of the network during deep learning changes from the original single input ($x^{n} - \tau A^{*} d^{n+1}$) to two inputs ($x^{n}$ and $A^{*} d^{n+1}$). The original input was composed of the two sub-parameters through a fixed combination relationship; once this fixed relationship is broken, the relationship between the two disassembled sub-parameters is no longer the original fixed one and can instead be obtained through deep learning. This is done because, after the algorithm is expanded onto a network, some mathematical properties of the original optimization algorithm, such as convergence, are no longer maintained; the network can therefore be designed without the variable constraints of the original algorithm, and releasing the fixed combination relationship between the parameters yields a better imaging effect.
A convolutional neural network (CNN) is taken as an example to illustrate how the operator Λ is obtained by the deep learning method. FIG. 2 is a schematic diagram of obtaining the operator Λ with a convolutional neural network, according to an embodiment of the present invention.
Referring to fig. 2, as an embodiment of the present invention, the convolution kernel size is set to 3 × 3, and the number above each convolution layer (indicated by a rectangular frame in fig. 2) indicates the number of channels of that layer (i.e. how many convolution kernels it has). For example, a large amount of known data (i.e. known $x^{n}$, $A^{*} d^{n+1}$ and $x^{n+1}$) can be input into the convolutional neural network shown in FIG. 2, and the operator Λ can be obtained by deep learning.
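As an illustration only, the operator Λ of formula 6 could be sketched as the following small convolutional network with 3 × 3 kernels and two inputs. The channel count and depth are assumptions; the actual numbers are given by the rectangles in Fig. 2, which are not reproduced here.

```python
import torch
import torch.nn as nn

class LambdaOperator(nn.Module):
    """Sketch of the operator Λ of formula 6: a small CNN with 3x3 kernels mapping the two
    independent sub-parameters (x^n, A*d^{n+1}) to x^{n+1}. Channel count (32) is assumed."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, channels, kernel_size=3, padding=1),   # 2 complex inputs -> 4 real channels
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, kernel_size=3, padding=1),    # 1 complex output -> 2 real channels
        )

    def forward(self, x_n, Ad_next):
        # Stack real and imaginary parts of both inputs along the channel dimension.
        inp = torch.cat([x_n.real, x_n.imag, Ad_next.real, Ad_next.imag], dim=1)
        out = self.net(inp)
        return torch.complex(out[:, 0:1], out[:, 1:2])
```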
In addition, referring to equation 3 and equation 4 above, σ in equation 3 and θ in equation 4 can also be obtained by a deep learning method. For example, as can be seen from equation 4, given a large amount of known data, i.e. known $x^{n}$, $\bar{x}^{n+1}$ and $x^{n+1}$, θ can be obtained by deep learning. Of course, the learning model for θ and the learning model for the operator Λ can be combined, since both use the common data $x^{n}$ and $x^{n+1}$. In other words, the learning model of θ and the learning model of the operator Λ may form a combined model, and this combined model may be obtained by a deep learning method, which can speed up the learning process. Similarly, the learning model of σ can be combined with the learning model of the operator Λ (and/or the learning model of θ) to form a combined model, which can likewise be obtained by a deep learning method, so that the learning process is accelerated.
With continued reference to fig. 1, in step S140, the initial image reconstruction model is optimized according to all the independent sub-parameters to obtain an optimized image reconstruction model.
Here, the network structure corresponding to equation 2 after expansion onto the deep neural network is optimized according to equation 6 above to obtain the optimized image reconstruction model. In other words, the optimized image reconstruction model is formed by expanding equations 3, 4 and 6 onto the deep neural network.
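A minimal sketch of such an optimized image reconstruction model, i.e. equations 3, 4 and 6 expanded onto a deep neural network with one learned σ, θ and Λ per stage, might look as follows. The per-stage parameterization, the module names, and the reuse of the LambdaOperator sketch above are assumptions for illustration rather than the patent's exact architecture.

```python
class UnrolledPrimalDualNet(nn.Module):
    """Sketch of the optimized image reconstruction model: equations 3, 4 and 6
    expanded over n_iter stages with learnable sigma, theta and Lambda per stage."""

    def __init__(self, n_iter=10):
        super().__init__()
        self.n_iter = n_iter
        self.lambdas = nn.ModuleList([LambdaOperator() for _ in range(n_iter)])
        self.sigma = nn.Parameter(torch.full((n_iter,), 0.5))
        self.theta = nn.Parameter(torch.full((n_iter,), 1.0))

    def forward(self, x1, x_bar1, d1, f, mask):
        x, x_bar, d = x1, x_bar1, d1
        for k in range(self.n_iter):
            # Formula 3: dual parameter update.
            d = (d + self.sigma[k] * (mask * torch.fft.fft2(x_bar, norm="ortho") - f)) / (1 + self.sigma[k])
            # Formula 6: image update with the learned operator Λ on the two split sub-parameters.
            Ad = torch.fft.ifft2(mask * d, norm="ortho")
            x_new = self.lambdas[k](x, Ad)
            # Formula 4: update variable.
            x_bar = x_new + self.theta[k] * (x_new - x)
            x = x_new
        return x
```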
In addition, as another embodiment of the present invention, the image reconstruction method differs from the one shown in fig. 1 in that it further includes reconstructing an image according to the optimized image reconstruction model.
Specifically, first, the magnetic resonance scanning apparatus scans the subject to acquire the initial data, including the initial image data $x^{1}$, the initial update variable parameter $\bar{x}^{1}$, the undersampled K-space data parameter f and the initial dual parameter $d^{1}$. It should be noted that these initial data are the initial parameters automatically generated by the magnetic resonance scanning apparatus when it scans the object.

Next, an image is reconstructed from the initial data with the optimized image reconstruction model constructed from equation 3, equation 4 and equation 6. Here, as described above, σ in equation 3, θ in equation 4 and Λ in equation 6 can be obtained by a deep learning method.

Thus, once the initial image data $x^{1}$, the initial update variable parameter $\bar{x}^{1}$, the undersampled K-space data parameter f and the initial dual parameter $d^{1}$ are obtained, the reconstruction of the image can be completed through several iterations of equation 3, equation 4 and equation 6. FIG. 3 is a schematic illustration of image reconstruction according to an embodiment of the present invention. In fig. 3, N iterations are set, where n is less than or equal to N. Further, fig. 3 shows the calculation process of the first iteration; it should be understood that the other iterations follow the same calculation process as the first. It should be noted that, in fig. 3, X represents the image reconstructed after N iterations.
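A hypothetical usage sketch of the model above, with the initial data named as in the description, could be as follows; the undersampling pattern, tensor shapes and random placeholder K-space data are assumptions for illustration only.

```python
# Hypothetical initial data as produced by the scanner (names and shapes assumed).
batch, H, W = 1, 256, 256
mask = torch.zeros(1, 1, H, W)
mask[..., ::4] = 1.0                                                  # assumed 4x Cartesian undersampling
f = mask * torch.randn(batch, 1, H, W, dtype=torch.complex64)        # undersampled K-space data (placeholder)
x1 = torch.fft.ifft2(f, norm="ortho")                                # zero-filled initial image x^1
x_bar1, d1 = x1.clone(), torch.zeros_like(f)                         # initial update variable and dual parameter

model = UnrolledPrimalDualNet(n_iter=10)
reconstruction = model(x1, x_bar1, d1, f, mask)                      # reconstructed image X after N stages
```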
In summary, the invention adopts a deep learning method to reconstruct images, so that the parameters, functional relationships and other quantities required for reconstruction are learned from a large amount of training data, achieving better imaging quality and higher acceleration factors than the traditional parallel imaging or compressed sensing methods. Furthermore, by introducing the idea of deep learning, the traditional optimization algorithm is expanded onto a neural network, the unknown quantities in the algorithm such as its parameters and functional relationships are learned with the neural network, the structural relationships among the parameters in the algorithm are further broken up so that the neural network can freely learn the combination relationships among them, and a deep neural network with an optimized algorithm structure is finally obtained for rapidly reconstructing the image.
Fig. 4 is a schematic structural diagram of a computer apparatus according to an embodiment of the present invention. Referring to fig. 4, at the hardware level, the computer device includes a processor 410, an internal bus 420, a network interface 430, and a memory 440, but may also include hardware required for other services. The processor 410 reads a corresponding computer program from the memory 440 and runs it, forming a request handling means on a logical level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Further, the memory 440 has stored thereon an image reconstruction program which, when executed by the processor, implements the image reconstruction method as shown in fig. 1.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (Flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer-readable media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (10)

1. An image reconstruction method, comprising:
solving a calculation model capable of solving an iterative reconstruction relation to obtain the iterative reconstruction relation;
expanding the iterative reconstruction relation to a deep neural network to obtain an initial image reconstruction model;
disassembling parameters combined by at least two sub-parameters in a fixed combination relation in the iterative reconstruction relation to obtain at least two independent sub-parameters;
and optimizing the initial image reconstruction model according to all independent sub-parameters to obtain an optimized image reconstruction model.
2. The image reconstruction method of claim 1, wherein the iterative reconstruction relationship comprises an image data relationship, the image data relationship being represented by equation 1:

[Formula 1]  $x^{n+1} = \mathrm{prox}_{\tau[R]}\left(x^{n} - \tau A^{*} d^{n+1}\right)$

where n is an integer greater than or equal to 1, x represents the image data, prox represents the proximal operator, τ represents a first algorithm parameter, $A^{*}$ represents the adjoint of the undersampled Fourier transform operator, d represents a dual parameter, R represents a regularization term, and $x^{n} - \tau A^{*} d^{n+1}$ is the parameter with the fixed combination relationship.
3. The image reconstruction method according to claim 2, wherein the method of disassembling the parameters having the fixed combination relationship comprises:
replacing $\mathrm{prox}_{\tau[R]}$ in equation 1 with an operator Λ, so that equation 1 is transformed into equation 2:

[Formula 2]  $x^{n+1} = \Lambda\left(x^{n} - \tau A^{*} d^{n+1}\right)$

splitting said parameter into at least two independent sub-parameters $x^{n}$ and $A^{*} d^{n+1}$, so that equation 2 is transformed into equation 3:

[Formula 3]  $x^{n+1} = \Lambda\left(x^{n}, A^{*} d^{n+1}\right)$

wherein the correlation between $x^{n}$ and $A^{*} d^{n+1}$ can be obtained by a deep learning method, and the operator Λ can also be obtained by a deep learning method.
4. The image reconstruction method of claim 3, wherein the iterative reconstruction relationship further comprises a dual parameter relationship, the dual parameter relationship being represented by equation 4:

[Formula 4]  $d^{n+1} = \dfrac{d^{n} + \sigma\left(A\bar{x}^{n} - f\right)}{1 + \sigma}$

where n is an integer greater than or equal to 1, σ represents a second algorithm parameter, $\bar{x}$ represents an update variable parameter, f represents an undersampled K-space data parameter, and d represents a dual parameter.
5. The image reconstruction method according to claim 4, wherein the iterative reconstruction relationship further comprises an update variable relationship, the update variable relationship being expressed by the following equation 5:

[Formula 5]  $\bar{x}^{n+1} = x^{n+1} + \theta\left(x^{n+1} - x^{n}\right)$

where n is an integer greater than or equal to 1, $\bar{x}$ represents the update variable parameter, x represents the image data, θ represents the algorithm update step size, and θ can be obtained by a deep learning method.
6. The image reconstruction method according to claim 5, wherein the equation 1, the equation 4 and the equation 5 are expanded onto a deep neural network to obtain the initial image reconstruction model.
7. The image reconstruction method according to claim 5 or 6, wherein the network structure corresponding to equation 1 expanded onto the deep neural network is optimized according to equation 3 to obtain the optimized image reconstruction model.
8. The image reconstruction method according to claim 1, further comprising:
scanning an object in real time to obtain initial data, wherein the initial data comprises initial image data, initial updating variable parameters, undersampled K-space data parameters and initial dual parameters;
and inputting the initial data into the optimized image reconstruction model to obtain a reconstructed image.
9. A computer-readable medium, in which an image reconstruction program is stored, which when executed by a processor implements the image reconstruction method according to any one of claims 1 to 8.
10. A computer device comprising a memory, a processor, and an image reconstruction program stored on the memory and executable on the processor, the image reconstruction program when executed by the processor implementing the image reconstruction method of any one of claims 1 to 8.
CN201911275740.5A 2019-12-12 2019-12-12 Image reconstruction method, computer-readable medium, and computer device Pending CN111127575A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911275740.5A CN111127575A (en) 2019-12-12 2019-12-12 Image reconstruction method, computer-readable medium, and computer device
PCT/CN2019/125099 WO2021114216A1 (en) 2019-12-12 2019-12-13 Image reconstruction method, computer readable storage medium, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911275740.5A CN111127575A (en) 2019-12-12 2019-12-12 Image reconstruction method, computer-readable medium, and computer device

Publications (1)

Publication Number Publication Date
CN111127575A true CN111127575A (en) 2020-05-08

Family

ID=70499950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911275740.5A Pending CN111127575A (en) 2019-12-12 2019-12-12 Image reconstruction method, computer-readable medium, and computer device

Country Status (2)

Country Link
CN (1) CN111127575A (en)
WO (1) WO2021114216A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538693A (en) * 2021-07-06 2021-10-22 太原理工大学 Microwave mammary gland image reconstruction method based on deep learning
US11189016B1 (en) 2020-06-08 2021-11-30 Guangzhou Computational Super-Resolution Biotech Co., Ltd. Systems and methods for image processing
WO2022126614A1 (en) * 2020-12-18 2022-06-23 中国科学院深圳先进技术研究院 Manifold optimization-based deep learning method for dynamic magnetic resonance imaging

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671129A (en) * 2018-12-14 2019-04-23 深圳先进技术研究院 A kind of the dynamic magnetic resonance image method for reconstructing and device of auto-adaptive parameter study

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019033390A1 (en) * 2017-08-18 2019-02-21 Shenzhen United Imaging Healthcare Co., Ltd. System and method for reconstructing images
KR102215702B1 (en) * 2018-06-04 2021-02-16 Korea Advanced Institute of Science and Technology (KAIST) Method for processing magnetic resonance imaging using artificial neural network and apparatus therefor
CN110728732A (en) * 2019-10-12 2020-01-24 深圳先进技术研究院 Image reconstruction method, device, equipment and medium
CN110717958A (en) * 2019-10-12 2020-01-21 深圳先进技术研究院 Image reconstruction method, device, equipment and medium
CN110766769B (en) * 2019-10-23 2023-08-11 深圳先进技术研究院 Magnetic resonance image reconstruction method, device, equipment and medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671129A (en) * 2018-12-14 2019-04-23 深圳先进技术研究院 A kind of the dynamic magnetic resonance image method for reconstructing and device of auto-adaptive parameter study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JING CHENG et al.: "Model Learning Primal Dual Networks for Fast MR Imaging" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11189016B1 (en) 2020-06-08 2021-11-30 Guangzhou Computational Super-Resolution Biotech Co., Ltd. Systems and methods for image processing
WO2021248262A1 (en) * 2020-06-08 2021-12-16 Guangzhou Computational Super-Resolution Biotech Co., Ltd. Systems and methods for image processing
US11790502B2 (en) 2020-06-08 2023-10-17 Guangzhou Computational Super-Resolutions Biotech Co., Ltd. Systems and methods for image processing
WO2022126614A1 (en) * 2020-12-18 2022-06-23 中国科学院深圳先进技术研究院 Manifold optimization-based deep learning method for dynamic magnetic resonance imaging
CN113538693A (en) * 2021-07-06 2021-10-22 太原理工大学 Microwave mammary gland image reconstruction method based on deep learning

Also Published As

Publication number Publication date
WO2021114216A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
Schlemper et al. A deep cascade of convolutional neural networks for MR image reconstruction
US20190369191A1 (en) MRI reconstruction using deep learning, generative adversarial network and acquisition signal model
CN111127575A (en) Image reconstruction method, computer-readable medium, and computer device
Ye et al. Computational acceleration for MR image reconstruction in partially parallel imaging
KR20190138107A (en) Method for processing interior computed tomography image using artificial neural network and apparatus therefor
US10627470B2 (en) System and method for learning based magnetic resonance fingerprinting
CN110766768A (en) Magnetic resonance image reconstruction method, device, equipment and medium
US20160178720A1 (en) Memory Efficiency Of Parallel Magnetic Resonance Imaging Reconstruction
CN111856362A (en) Magnetic resonance imaging method, device, system and storage medium
CN112164008A (en) Training method of image data enhancement network, and training device, medium, and apparatus thereof
CN109171727A (en) A kind of MR imaging method and device
Singh et al. Joint frequency and image space learning for MRI reconstruction and analysis
Zhang et al. High‐Order Total Bounded Variation Model and Its Fast Algorithm for Poissonian Image Restoration
CN112329920B (en) Unsupervised training method and unsupervised training device for magnetic resonance parameter imaging model
CN115115723A (en) Image reconstruction model generation method, image reconstruction device, image reconstruction equipment and medium
CN111856364B (en) Magnetic resonance imaging method, device and system and storage medium
CN108846430B (en) Image signal sparse representation method based on multi-atom dictionary
Koolstra et al. Learning a preconditioner to accelerate compressed sensing reconstructions in MRI
Gunel et al. Scale-equivariant unrolled neural networks for data-efficient accelerated MRI reconstruction
Qiao et al. MEDL‐Net: A model‐based neural network for MRI reconstruction with enhanced deep learned regularizers
CN107895387B (en) MRI image reconstruction method and device
WO2023050249A1 (en) Magnetic resonance imaging method and system based on deep learning, and terminal and storage medium
Knopp et al. A deep learning approach for automatic image reconstruction in MPI
Huang et al. Self-Supervised Deep Unrolled Reconstruction Using Regularization by Denoising
CN112336337B (en) Training method and device for magnetic resonance parameter imaging model, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination