CN115222055A - Training method and system of image processing model

Training method and system of image processing model

Info

Publication number
CN115222055A
Authority
CN
China
Prior art keywords
image
energy
sample
low
material density
Prior art date
Legal status
Pending
Application number
CN202110412250.6A
Other languages
Chinese (zh)
Inventor
杨美丽
杜岩峰
傅建伟
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202110412250.6A
Priority to PCT/CN2022/087499 (WO2022218441A1)
Publication of CN115222055A
Priority to US18/488,002 (US20240046534A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of this specification disclose a training method and system for an image processing model. The method comprises: obtaining a plurality of training samples, wherein each training sample comprises a sample low-energy image; inputting the training sample into an image processing model and determining a basis material density image corresponding to the sample low-energy image; and adjusting parameters of the image processing model, with optimizing a target loss function as the training objective, based on the basis material density image and one or more of a label basis material density image, the sample low-energy image, a sample high-energy image, and sample topology data, to obtain a trained image processing model.

Description

Training method and system of image processing model
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and system for training an image processing model.
Background
In computed tomography (CT), a radiation source emits X-rays toward the examined region of a target object, and a detector receives the attenuated signal of the X-rays after they pass through that region, allowing a computer to reconstruct a tomographic image of the examined region of the target object.
Dual-energy (DE) CT can be implemented in several ways, such as dual-source, fast voltage (kVp) switching, and dual-layer detector configurations. As a specific configuration of spectral CT, it offers higher detection accuracy than conventional CT and can accurately obtain material information about the scanned object: using two attenuation values acquired at two different energy spectra, it solves for the photoelectric and Compton contributions, which involve the mass attenuation coefficients of the material, and thereby identifies an unknown material from the values of its photoelectric and Compton contributions. Iodine, for example, can be distinguished from calcium and water by its photoelectric/Compton properties. Because any two linearly independent basis functions span the entire attenuation-coefficient space, any material can be represented as a linear combination of two other materials (the so-called basis materials), such as water and iodine. This enables new applications such as monochromatic images, material-removal images, effective-atomic-number images, and electron density images. Material decomposition of a high-energy image, a low-energy image, or a conventionally scanned image yields bases, or combinations of bases, that play an important role in many applications, such as automatic separation of bone and contrast agent in enhanced scans, quantitative iodine analysis and qualitative analysis of lesions such as kidney stones, and generation of pseudo-monoenergetic and virtual non-enhanced images. It is therefore necessary to obtain bases with a high signal-to-noise ratio. However, existing dual-energy CT imaging methods face practical inconveniences; for example, decomposing a dual-energy CT image directly by matrix inversion degrades the signal-to-noise ratio of the basis material density image.
Therefore, it is necessary to provide a training method for an image processing model so that the trained model can produce basis material density images with a high signal-to-noise ratio.
Disclosure of Invention
One aspect of the embodiments of this specification provides a method of training an image processing model. The method comprises: obtaining a plurality of training samples, wherein each training sample comprises a sample low-energy image; inputting the training sample into an image processing model and determining a basis material density image corresponding to the sample low-energy image; and adjusting parameters of the image processing model, with optimizing a target loss function as the training objective, based on the basis material density image and one or more of a label basis material density image, the sample low-energy image, a sample high-energy image, and sample topology data, to obtain a trained image processing model, wherein the sample high-energy image corresponds to the sample low-energy image.
Another aspect of the embodiments of this specification provides a training system for an image processing model. The system comprises: a first acquisition module, configured to acquire a plurality of training samples, wherein each training sample comprises a sample low-energy image; a first determining module, configured to input the training sample into the image processing model and determine a basis material density image corresponding to the sample low-energy image; and a parameter adjusting module, configured to adjust parameters of the image processing model, with minimizing a target loss function as the training objective, based on the basis material density image and one or more of a label basis material density image, the sample low-energy image, a sample high-energy image, and sample topology data, to obtain a trained image processing model, wherein the sample high-energy image corresponds to the sample low-energy image.
Another aspect of the embodiments of this specification provides a method of generating a basis material density image, comprising: acquiring a to-be-processed image of a target object; and inputting the to-be-processed image into the image processing model obtained by the above training method and determining the basis material density image of the to-be-processed image.
Another aspect of the embodiments of this specification provides a system for generating a basis material density image, comprising: a second acquisition module, configured to acquire a to-be-processed image of a target object; and a second determining module, configured to input the to-be-processed image into the image processing model obtained by the above training method and determine the basis material density image of the to-be-processed image.
Another aspect of the embodiments of this specification provides a method of generating a high-energy image, comprising: acquiring a low-energy image and topology data of a target object, wherein the topology data comprises low-energy topology data and high-energy topology data, and the low-energy topology data corresponds to the high-energy topology data; inputting the low-energy image into the image processing model obtained by the above training method and determining a basis material density image of the low-energy image; determining a topology data difference based on the low-energy topology data and the high-energy topology data; and determining a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topology data difference.
Another aspect of the embodiments of this specification provides a system for generating a high-energy image, comprising: a third acquisition module, configured to acquire a low-energy image and topology data of a target object, wherein the topology data comprises low-energy topology data and high-energy topology data, and the low-energy topology data corresponds to the high-energy topology data; a third determining module, configured to input the low-energy image into the image processing model obtained by the above training method and determine a basis material density image of the low-energy image; a fourth determining module, configured to determine a topology data difference based on the low-energy topology data and the high-energy topology data; and a fifth determining module, configured to determine a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topology data difference.
Another aspect of the embodiments of this specification provides a training apparatus for an image processing model, comprising at least one storage medium storing computer instructions and at least one processor, the at least one processor being configured to execute the computer instructions to implement the training method of the image processing model.
Another aspect of the embodiments of this specification provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer executes the training method of the image processing model.
In the embodiments of this specification, the image processing model is trained using sample low-energy images, and the trained model can perform material decomposition from a low-energy image alone. The hardware required is simple and easy to implement: a basis material density image can be obtained from a single-energy CT image acquired by scanning the target object with lower-dose radiation, which reduces the radiation dose to the target object while yielding a basis material density image with a high signal-to-noise ratio.
Drawings
The present description is further explained by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals refer to like structures, wherein:
FIG. 1 is a schematic diagram of an exemplary application scenario of an image processing system in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a method of training an image processing model according to some embodiments shown herein;
FIG. 3 is an exemplary flow diagram illustrating the determination of a basis material density image loss function according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram for determining a first high energy image loss function, according to some embodiments of the present description;
FIG. 5 is an exemplary flow diagram illustrating the determination of a second high-energy image loss function according to some embodiments of the present description;
FIG. 6 is an exemplary flow diagram for determining a topologically high-energy image, according to some embodiments of the present description;
FIG. 7 is an exemplary block diagram of a training system for an image processing model in accordance with some embodiments of the present description;
FIG. 8 is an exemplary block diagram of a system for generating a basis material density image in accordance with some embodiments of the present description;
FIG. 9 is an exemplary block diagram of a system for generating high energy images in accordance with some embodiments of the present description.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of this specification, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are merely examples or embodiments of this specification, and a person skilled in the art can apply this specification to other similar scenarios based on these drawings without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit", and/or "module" as used herein is a way of distinguishing components, elements, parts, portions, or assemblies at different levels. Other words may be substituted if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flowcharts are used in this specification to illustrate operations performed by systems according to embodiments of this specification. It should be understood that the operations are not necessarily performed in the exact order shown; rather, steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
Computed tomography (CT) is currently widely used in clinical diagnosis. Dual-energy CT technology has developed rapidly and is gradually becoming one of the commonly used examination methods in clinical diagnosis.
At present, clinical dual-energy CT imaging is implemented through hardware design, for example, fast kVp switching, dual-layer detector, dual-source CT imaging, and multi-scan techniques, followed by direct matrix inversion or iterative material decomposition in the image domain to obtain the basis material density image. The fast kVp switching technique uses only a single radiation source: a single tube rapidly switches between high and low voltages to acquire dual-energy data. It generally requires a long scanning time, has poor ability to distinguish between energy levels, produces high-energy and low-energy images with little difference, and requires one additional scan. The dual-source CT imaging technique uses two radiation sources to emit X-rays at two energy levels. The two sources are positioned at an angle with respect to the rotation center and can emit X-rays simultaneously, but the scanning ranges of the low-energy and high-energy rays at the same moment differ considerably. Dual-source CT has better energy-level discrimination, but its hardware is complex, and because of the angular offset between the two sources, the dual-energy images generated in regions of severe motion tend to deviate or distort, which may produce artifacts in images obtained by material decomposition. In addition, the dual-energy CT techniques described above share a common problem: the radiation dose to the patient is large, and directly performing iterative material decomposition on dual-energy CT images is slow, making it difficult to meet clinical requirements.
The embodiments of this specification disclose a training method for an image processing model that trains the model using machine learning, so that the trained model can perform material decomposition from a single-energy CT image. The technical solutions disclosed in this specification are explained in detail below with reference to the drawings.
FIG. 1 is a schematic diagram of an exemplary application scenario of an image processing system in accordance with some embodiments of the present description.
In some embodiments, the image processing system 100 may be used to acquire a basis material density image corresponding to a single-energy CT image. For example, the image processing system 100 may process a single-energy CT image (e.g., a low-energy CT image) using an image processing model to obtain its corresponding basis material density image.
In some embodiments, the image processing model may be trained by another system (e.g., a dedicated training system, not shown) or by the image processing system 100 itself. For example, the image processing system 100 may acquire a plurality of training samples, each of which comprises a sample low-energy image; input a training sample into the image processing model and determine a basis material density image corresponding to the sample low-energy image; and adjust parameters of the image processing model, with optimizing a target loss function as the training objective, based on the basis material density image and one or more of a label basis material density image, the sample low-energy image, a sample high-energy image, and sample topology data, to obtain a trained image processing model, wherein the sample high-energy image corresponds to the sample low-energy image.
As shown in fig. 1, the image processing system 100 may include an imaging device 110, a network 120, a terminal 130, a processing device 140, and a storage device 150.
The imaging device 110 may be used to image a target object to produce an image. The imaging device 110 may be a medical imaging device (e.g., a computed tomography (CT) device). In some embodiments, the imaging device 110 may include a gantry 111, a detector 112, a scan region 113, and a scanning bed 114. A target object may be placed on the scanning bed 114 to be scanned. The gantry 111 may support the detector 112. In some embodiments, the detector 112 may include one or more detector units, which may be single-row and/or multi-row detectors and may include scintillation detectors (e.g., cesium iodide detectors) and other detector types. In some embodiments, the gantry 111 may rotate; for example, in a CT imaging device, the gantry 111 may rotate clockwise or counterclockwise about its axis of rotation.
Processing device 140 may process data and/or information obtained from imaging device 110, terminal 130, and/or storage device 150. For example, the processing device 140 may process image information detected by the detector 112 and generated thereby to obtain a CT image. For another example, the processing device 140 may process the CT image to obtain a basis material density image corresponding to the CT image. In some embodiments, the processing device 140 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, processing device 140 may access information and/or data from imaging device 110, terminal 130, and/or storage device 150 via network 120. As another example, processing device 140 may be directly connected to imaging device 110, terminal 130, and/or storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, the cloud platform may include one or a combination of private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, and the like.
The terminal 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, or the like, or any combination thereof. In some embodiments, the terminal 130 may interact with other components in the image processing system 100 over a network. For example, the terminal 130 may send one or more control instructions to the imaging device 110 to control it to scan the target object according to the instructions. As another example, the terminal 130 may receive the basis material density image determined by the processing device 140 and display it for analysis and confirmation by an operator. In some embodiments, the mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a smart appliance control device, a smart monitoring device, a smart television, a smart ray machine, an interphone, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footwear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the mobile device may include a mobile phone, a personal digital assistant (PDA), a gaming device, a navigation device, a point-of-sale (POS) device, a laptop, a tablet, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, or the like. In some embodiments, the terminal 130 may be part of the processing device 140. In some embodiments, the terminal 130 may be integrated with the processing device 140 as an operating console for the imaging device 110. For example, a user/operator (e.g., a physician) of the image processing system 100 may control the operation of the imaging device 110 through the console, such as scanning the target object, controlling the movement of the scanning bed 114, training the image processing model, and acquiring basis material density images using the image processing model.
The storage device 150 may store data (e.g., scan data of a target object), instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the imaging device 110, the terminal 130, and/or the processing device 140; for example, it may store treatment plans and scan data of a target object obtained from the imaging device 110. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform the example methods described in this specification. In some embodiments, the storage device 150 may include one or a combination of mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like. Mass storage may include magnetic disks, optical disks, solid-state drives, and the like. Removable storage may include flash drives, floppy disks, optical disks, memory cards, ZIP disks, magnetic tape, and the like. Volatile read-write memory may include random access memory (RAM), such as dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR-SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), and zero-capacitance random access memory (Z-RAM). ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), optical discs such as digital versatile discs, and the like. In some embodiments, the storage device 150 may be implemented on a cloud platform as described in this specification, which may include one or a combination of a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, a cross-cloud, a multi-cloud, and the like.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with one or more components of the image processing system 100 (e.g., the processing device 140, the terminal 130). One or more components of the image processing system 100 may read data or instructions in the storage device 150 over the network 120. In some embodiments, the storage device 150 may be part of the processing device 140, or may be separate and directly or indirectly connected to the processing device 140.
The network 120 may include any suitable network capable of facilitating the exchange of information and/or data for the image processing system 100. In some embodiments, one or more components of the image processing system 100 (e.g., the imaging device 110, the terminal 130, the processing device 140, the storage device 150) may exchange information and/or data with one or more other components via the network 120. For example, the processing device 140 may acquire scan data from the imaging device 110 over the network 120. The network 120 may include one or a combination of a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), a wired network (e.g., Ethernet), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, server computers, and the like. For example, the network 120 may include a wired network, a fiber-optic network, a telecommunications network, a local area network, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, and the like. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the image processing system 100 may connect to the network 120 to exchange data and/or information.
FIG. 2 is an exemplary flow diagram of a method of training an image processing model, according to some embodiments of the present description. In some embodiments, flow 200 may be performed by a processing device (e.g., processing device 140). For example, the process 200 may be stored in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 200. The flow 200 may include the following operations.
Step 202, a plurality of training samples are obtained. In some embodiments, step 202 may be performed by the first obtaining module 710.
The training samples include low-energy image data used to train the image processing model.
In some embodiments, each training sample of the plurality of training samples may include a sample low energy image. The low-energy image refers to an image obtained by scanning and imaging a target object with low-dose rays. The target object may include a patient, or other medical subject (e.g., other animal such as a laboratory white mouse), etc. The target object may also be part of a patient or other medical subject, including organs and/or tissues, such as the heart, lungs, ribs, abdominal cavity, etc.
In some embodiments, the processing device may obtain the plurality of training samples by reading from a database, a storage device, or an imaging device.
Step 204, inputting the training sample into an image processing model, and determining a basis material density image corresponding to the sample low-energy image. In some embodiments, step 204 may be performed by the first determination module 720.
In some embodiments, the processing device may input the training sample (the sample low-energy image) into an image processing model, which outputs a basis material density image corresponding to the sample low-energy image.
The essence of CT imaging is attenuation-coefficient imaging. Its principle is that X-rays are penetrating and can pass through an object (e.g., a living body or other object), and different tissues absorb and transmit X-rays differently; as X-rays pass through the object, different parts attenuate them differently. By measuring the attenuated rays, data for different parts of the object can be obtained, and after the data are processed by a computer, a cross-sectional or three-dimensional image of the examined region can be reconstructed. The linear attenuation coefficient of any object can be expressed as a linear combination of those of selected basis materials, and because a CT image is linearly related to the linear attenuation coefficient, a CT image of any object can likewise be expressed as such a linear combination. For example, a CT image can be represented by equation (1).
$I = M \cdot A$    (1)

where I denotes the CT image, with dimension N × 1, N being the total number of pixels in the CT image; M is the basis material density image, with dimension N × m, m being the number of basis materials in the decomposition; and A is the material decomposition matrix, with dimension m × 1.
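As an illustration only, the following Python sketch composes a CT image from a basis material density image according to equation (1). The image size, the choice of two basis materials, and the numerical weights are assumptions for demonstration, not values from this specification.

```python
import numpy as np

# Sketch of equation (1): I = M . A, under assumed dimensions.
N, m = 64 * 64, 2                  # N pixels, m basis materials (e.g., water, iodine)

M = np.abs(np.random.randn(N, m))  # basis material density image, N x m
A = np.array([[1.0], [0.3]])       # material decomposition matrix, m x 1 (assumed weights)

I = M @ A                          # CT image as a linear combination, N x 1
print(I.shape)                     # (4096, 1)
```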
The basis material density image output by the image processing model can be used in turn to constrain training of the model, and can also be used to obtain the high-energy image corresponding to the low-energy image. For the acquisition of high-energy images, see other parts of this specification, for example step 206 and the description of FIG. 6, which are not repeated here.
In some embodiments, the image processing model may include a deep learning model such as a U-net or V-net model.
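The specification does not fix an architecture beyond naming U-net and V-net, so the following PyTorch sketch is only a minimal U-net-style stand-in that maps a single-channel low-energy image to m basis material density maps; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Minimal U-net-style sketch: low-energy CT image in, m density maps out."""

    def __init__(self, num_materials: int = 2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(48, 16, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(16, num_materials, 1)

    def forward(self, x):
        e1 = self.enc1(x)                           # full-resolution features
        e2 = self.enc2(F.max_pool2d(e1, 2))         # downsampled features
        up = F.interpolate(e2, scale_factor=2.0)    # upsample back
        d1 = self.dec1(torch.cat([up, e1], dim=1))  # skip connection
        return self.out(d1)                         # (B, m, H, W) density maps

model = TinyUNet(num_materials=2)
sample_low = torch.randn(1, 1, 64, 64)              # stand-in low-energy image
density = model(sample_low)
print(density.shape)                                # torch.Size([1, 2, 64, 64])
```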
Step 206, adjusting parameters of the image processing model based on the basis material density image and one or more of the label basis material density image, the sample low-energy image, the sample high-energy image, and the sample topology data, with optimizing the target loss function as the training objective, to obtain a trained image processing model. In some embodiments, step 206 may be performed by the parameter adjustment module 730.
The label basis material density image is a known basis material density image corresponding to the sample low-energy image. Each training sample may have a corresponding label basis material density image.
In some embodiments, the label basis material density image may be obtained by performing matrix-inversion decomposition (e.g., two-base, three-base, or multi-base material decomposition) on the image matrix corresponding to a dual-energy CT image, or by performing iterative material decomposition on a conventional-dose dual-energy CT image.
In some embodiments, the sample high-energy image corresponds to the sample low-energy image. Correspondence means that the two images are of the same target object; the scanning angles at which they are obtained may differ. A high-energy image is obtained by scanning and imaging the target object with higher-energy radiation, while a low-energy image is obtained with radiation of a lower energy level. For example, the sample low-energy image may be obtained by scanning the target object at a first radiation dose and the sample high-energy image at a second radiation dose, where the first radiation dose is lower than the second.
Topology data refers to scan data of the target object acquired at a certain angle (which may be any angle, e.g., transverse, coronal, or sagittal). The sample topology data may include sample low-energy topology data and sample high-energy topology data, obtained by scanning with lower-energy and higher-energy radiation, respectively. Within a single training sample, the sample low-energy topology data corresponds to the sample high-energy topology data, e.g., both are obtained by scanning the same target object. In some embodiments, the scan angles of the sample low-energy and high-energy topology data are the same, and the two may be acquired simultaneously. The scan angle of the sample topology data may differ from those of the sample low-energy and high-energy images, so that the training sample data contains more information.
In some embodiments, the processing device may input training samples into the image processing model, obtain the prediction result (i.e., the basis material density image) output by the model, construct a loss function for constraint based on one or more of the label basis material density image, the sample low-energy image, the sample high-energy image, and the sample topology data, and minimize the loss function value for each training sample by continually adjusting the model parameters, so that the final predictions of the image processing model become more accurate. When the loss function value meets a requirement (e.g., it is smaller than a preset value and converges) or a preset number of iterations is reached, the trained image processing model is obtained.
In some embodiments, the target loss function may be any one or a combination of a basis material density image loss function, a low-energy image loss function, a first high-energy image loss function, and a second high-energy image loss function. For example, the target loss function may combine the basis material density image loss function with the first high-energy image loss function, with the second high-energy image loss function, or with both.
In some embodiments, the basis material density image loss function may be determined based at least on the label basis material density image. For example, it may be constructed from the basis material density image predicted by the image processing model and the label basis material density image. Illustratively, the basis material density image loss function may be as shown in equation (2).
$L_{Material} = \frac{1}{N}\sum_{i=1}^{N}\left\| F\left(I_{Low}^{(i)}\right) - Material^{(i)} \right\|^{2}$    (2)

where $L_{Material}$ denotes the basis material density image loss function; $I_{Low}$ is a sample low-energy image and $F(I_{Low})$ is the basis material density image predicted by the image processing model; Material is the label basis material density image; and N is the total number of training samples used for model training.
In this embodiment, the image processing model is constrained by a loss function constructed directly from the label basis material density image, so that the model's predictions approach the real label basis material density images obtained from dual-energy CT images. A low-energy image input to the trained image processing model yields a basis material density image, realizing material decomposition with a single-energy CT image. Because the basis material density image can be obtained from a single-energy CT image alone, the scanning time and radiation dose to the target object are lower than when the basis material density image is obtained from dual-energy CT imaging, which requires scanning the target object multiple times.
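A minimal sketch of the loss in equation (2), assuming a mean-squared-error reading of the norm (the specification does not spell out the exact norm); here predicted_density stands for F(I_Low) and label_density for the label basis material density image.

```python
import torch
import torch.nn.functional as F

def material_loss(predicted_density: torch.Tensor, label_density: torch.Tensor) -> torch.Tensor:
    # Equation (2) read as a mean squared error over pixels and samples.
    return F.mse_loss(predicted_density, label_density)

# Usage with the TinyUNet sketch above (shapes are assumptions):
# loss = material_loss(model(sample_low), label_density)
```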
In some embodiments, a low-energy image loss function may be determined based at least on the sample low-energy image, for example from the sample low-energy image and a predicted low-energy image. In some embodiments, the predicted low-energy image may be calculated from the basis material density image and the sample low-energy image: the basis material density image is multiplied by the material decomposition matrix corresponding to the sample low-energy image to obtain the predicted low-energy image. The material decomposition matrix of the sample low-energy image may be obtained by least-squares fitting from the sample low-energy image and the basis material density image; further details are given in the description of FIG. 4 and are not repeated here.
Illustratively, the low-energy image loss function may be represented by equation (3):

$L_{Low} = \frac{1}{N}\sum_{i=1}^{N}\left\| \hat{I}_{Low}^{(i)} - I_{Low}^{(i)} \right\|^{2}$    (3)

where $\hat{I}_{Low}$ is the predicted low-energy image, $I_{Low}$ is the sample low-energy image, and N is the total number of training samples. The value of the low-energy image loss function reflects the difference between the predicted and sample low-energy images; since the predicted low-energy image is solved from the basis material density image, it also indirectly reflects the accuracy of the model's prediction.
In some embodiments, the first high-energy image loss function may be determined based at least on the sample high-energy image, for example constructed from the sample high-energy image and a predicted high-energy image. In some embodiments, the predicted high-energy image may be calculated from the basis material density image and the sample high-energy image: the basis material density image is multiplied by the material decomposition matrix corresponding to the sample high-energy image to obtain the predicted high-energy image. The material decomposition matrix of the sample high-energy image may be obtained by least-squares fitting from the sample high-energy image and the basis material density image; further details are given in the description of FIG. 4 and are not repeated here.
Illustratively, the first high-energy image loss function $L_{High\text{-}1}$ is shown in equation (4):

$L_{High\text{-}1} = \frac{1}{N}\sum_{i=1}^{N}\left\| \hat{I}_{High}^{(i)} - I_{High}^{(i)} \right\|^{2}$    (4)

where $\hat{I}_{High}$ is the predicted high-energy image, $I_{High}$ is the sample high-energy image, and N is the total number of training samples.
In this embodiment, the predicted high-energy image is obtained from the predicted basis material density image and the sample high-energy image, and the image processing model is constrained by a loss function constructed from the predicted and sample high-energy images. Because the predicted high-energy image is solved from the predicted basis material density image, a loss function built from the predicted and sample high-energy images reflects, to some extent, the accuracy of the predicted basis material density image. The trained image processing model can thus obtain a basis material density image from a single low-energy image, realizing material decomposition with a single-energy CT image.
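The low-energy loss of equation (3) and the first high-energy loss of equation (4) share one mechanism: recompose an image from the predicted density maps with the corresponding decomposition matrix (equations (8) and (9)) and compare it against the sample image. A hedged PyTorch sketch, with MSE as the assumed norm:

```python
import torch
import torch.nn.functional as F

def predict_image(density: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    # Recompose a CT image from density maps: density (B, m, H, W), A (m,).
    # This is the per-pixel form of I = M . A used in equations (8)/(9).
    return torch.einsum("bmhw,m->bhw", density, A)

def low_energy_loss(density, A_low, sample_low):
    # Equation (3): predicted vs. sample low-energy image (assumed MSE norm).
    return F.mse_loss(predict_image(density, A_low), sample_low)

def first_high_energy_loss(density, A_high, sample_high):
    # Equation (4): predicted vs. sample high-energy image.
    return F.mse_loss(predict_image(density, A_high), sample_high)
```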
In some embodiments, the second high-energy image loss function may be determined based at least on the sample high-energy image and the sample topology data, for example constructed from the sample high-energy image and a topological high-energy image obtained from the sample topology data.
For details of determining topologically high-energy images based on sample topology data, reference may be made to fig. 6 and its associated description, which are not repeated herein.
In some embodiments, the second high-energy image loss function $L_{High\text{-}2}$ is shown in equation (5):

$L_{High\text{-}2} = \frac{1}{N}\sum_{i=1}^{N}\left\| \tilde{I}_{High}^{(i)} - I_{High}^{(i)} \right\|^{2}$    (5)

where $\tilde{I}_{High}$ is the topological high-energy image, $I_{High}$ is the sample high-energy image, and N is the total number of training samples.
In this embodiment, a topological high-energy image is obtained from the predicted basis material density image and the sample topology data, and the model is then constrained by a loss function constructed from the topological high-energy image and the sample high-energy image.
The following embodiments illustrate target loss functions formed from partial combinations of the loss terms. For example, the combination of the basis material density image loss function, the low-energy image loss function, and the first high-energy image loss function may be as shown in equation (6):

$L_{1} = \lambda \cdot L_{Material} + L_{Low} + L_{High\text{-}1}$    (6)

where $L_{1}$ is the combined loss function of the basis material density image loss function, the low-energy image loss function, and the first high-energy image loss function; λ is a constant weight balance factor; $L_{Material}$ is the basis material density image loss function; $L_{Low}$ is the low-energy image loss function; and $L_{High\text{-}1}$ is the first high-energy image loss function.
In some embodiments, the target loss function combining the basis material density image loss function, the low-energy image loss function, the first high-energy image loss function, and the second high-energy image loss function may be as shown in equation (7):

$L_{2} = \lambda \cdot L_{Material} + L_{Low} + L_{High\text{-}1} + L_{High\text{-}2}$    (7)

where $L_{2}$ is the combined loss function; λ is a constant weight balance factor; and $L_{Material}$, $L_{Low}$, $L_{High\text{-}1}$, and $L_{High\text{-}2}$ are the basis material density image, low-energy image, first high-energy image, and second high-energy image loss functions, respectively. For a more detailed description of each loss term, refer to the descriptions above.
Preferably, in some embodiments, the image processing model may be constrained by the combined loss function of equation (7); because this target loss function imposes more constraints, the trained model is more robust and its predictions are better.
By constraining the model with the loss functions illustrated above, a trained image processing model is obtained once the loss function value meets a preset condition (e.g., converges or falls below a preset threshold) or a preset number of iterations is reached. The trained image processing model may then be used to obtain basis material density images.
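A hedged end-to-end training sketch for the combined objective of equation (6), reusing TinyUNet and the loss helpers sketched above. Here `loader` is an assumed DataLoader yielding (sample_low, sample_high, label_density) batches, A_low/A_high are decomposition matrices fitted as in FIG. 4, and the weight lam, learning rate, iteration cap, and stopping threshold are all illustrative assumptions; the second high-energy term of equation (7) would be added analogously.

```python
import torch

model = TinyUNet(num_materials=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lam, threshold = 10.0, 1e-3            # assumed weight balance factor / stop criterion

for epoch in range(100):               # assumed iteration cap
    epoch_loss = 0.0
    for sample_low, sample_high, label_density in loader:
        density = model(sample_low)    # predicted basis material density image
        # Equation (6): L1 = lam * L_Material + L_Low + L_High-1
        loss = (lam * material_loss(density, label_density)
                + low_energy_loss(density, A_low, sample_low.squeeze(1))
                + first_high_energy_loss(density, A_high, sample_high))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    if epoch_loss / len(loader) < threshold:
        break                          # loss converged below the preset value
```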
In some embodiments, the processing device may obtain the basis material density image using the trained image processing model by the method described in the embodiments below.
In some embodiments, the processing device may acquire a to-be-processed image of the target object. The image to be processed of the target object may be a low energy CT image obtained with a low dose radiation scan.
In some embodiments, the processing device may obtain the to-be-processed image of the target object by reading from the imaging device, a database, or a storage device, or by calling a data interface.
In some embodiments, the processing device may input the image to be processed into an image processing model trained according to the method described in the embodiments of this specification, and determine the basis material density image of the image to be processed. Specifically, the processing device may input the to-be-processed image of the target object into the trained image processing model, which processes it and outputs the basis material density image of the to-be-processed image.
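A hedged usage sketch of this inference step, reusing the TinyUNet stand-in defined above; the checkpoint file name and image size are assumptions.

```python
import torch

model = TinyUNet(num_materials=2)
model.load_state_dict(torch.load("image_processing_model.pt"))  # assumed checkpoint
model.eval()

with torch.no_grad():
    to_be_processed = torch.randn(1, 1, 512, 512)  # stands in for a low-energy CT image
    basis_density = model(to_be_processed)         # basis material density image, (1, m, 512, 512)
```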
In some embodiments, the processing device may also generate the high-energy image based on the image of the basis material density obtained by the image processing model by the method described in the embodiments below.
In some embodiments, the processing device may acquire low-energy images and topology data of the target object; the topology data comprises low-energy topology data and high-energy topology data, and the low-energy topology data corresponds to the high-energy topology data.
For descriptions of the low-energy image and topology data, refer to steps 202 through 206 above. In some embodiments, the processing device may obtain scan data by scanning the target object with the scanning device, reconstruct a low-energy image from the scan data, and obtain topology data from the scan data. For example, the processing device may perform one low-energy scan to obtain low-energy topology data and one high-energy scan to obtain high-energy topology data, or it may extract the low-energy topology data from the scan data corresponding to the low-energy image and perform one high-energy scan to obtain the high-energy topology data. In some embodiments, the processing device may also read the low-energy image and topology data from the storage device.
The processing device may input the low-energy image into an image processing model obtained by the training method described in the embodiments of this specification, and determine the basis material density image of the low-energy image.
The processing device may determine a topology data difference based on the low energy topology data and the high energy topology data.
For determining the topology data difference, refer to fig. 6 and the related description thereof in this specification, and details are not repeated here.
The processing device may then determine a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topology data difference.
In some embodiments, the processing device may determine material density data based on the basis material density image; determine a topology data difference based on the low-energy topology data and the high-energy topology data; determine a material decomposition matrix difference based on the material density data and the topology data difference; determine an image difference based on the material decomposition matrix difference and the basis material density image; and determine the high-energy image based on the low-energy image and the image difference.
The process of determining the high-energy image corresponding to the low-energy image is similar to the process of determining the topological high-energy image described in this specification, differing only in the data used; for the specific process, refer to the description of FIG. 6, which is not repeated here.
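A hedged sketch of the step sequence above. The least-squares fit mirrors the fitting method this specification names for decomposition matrices; how each intermediate quantity is composed is otherwise an assumption, and the flattened shapes are for illustration only.

```python
import numpy as np

def generate_high_energy(low_image, density, topo_low, topo_high):
    """low_image: (N,) low-energy image; density: (N, m) basis material
    density image from the trained model; topo_low/topo_high: corresponding
    low/high-energy topology data, here assumed flattened to (N,)."""
    topo_diff = topo_high - topo_low                       # topology data difference
    # Material decomposition matrix difference: least-squares fit (assumed)
    # of delta_A such that density @ delta_A approximates topo_diff.
    delta_A, *_ = np.linalg.lstsq(density, topo_diff, rcond=None)
    image_diff = density @ delta_A                         # image difference
    return low_image + image_diff                          # high-energy image
```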
In the embodiments of this specification, the image processing model is trained with training sample data, the model can be constrained by a loss function constructed from one or a combination of the label basis material density image, the sample low-energy image, the sample high-energy image, and the sample topology data, and the trained model can perform material decomposition from a single-energy CT image. When the model is constrained by a combined loss function, the additional constraints improve its robustness, making the trained model's predictions more accurate. Moreover, the basis material density image can be obtained from a single-energy CT image alone; compared with dual-energy CT imaging, which requires scanning the target object multiple times with longer scanning time and higher radiation dose, single-energy CT scanning of the target object is faster and delivers a lower dose. In addition, processing images with the trained image processing model, rather than obtaining basis material density images directly by matrix inversion or iterative material decomposition, avoids severe degradation of the signal-to-noise ratio of the basis material density image and reduces the time required to obtain it.
FIG. 3 is an exemplary flow diagram illustrating a determination of a base material density image loss function according to some embodiments of the present description. In some embodiments, flow 300 may be performed by a processing device. For example, the process 300 may be stored in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 300. As shown in fig. 3, the process 300 may include the following operations.
Step 302, processing the sample low-energy image through the image processing model to obtain the basis material density image.
In some embodiments, the processing device may input the sample low-energy image into the image processing model, which processes it and outputs the basis material density image.
Step 304, determining the basis material density image loss function based on the basis material density image and the label basis material density image.
In some embodiments, the determined basis material density image loss function is as shown in equation (2).
In some embodiments, the processing device may determine the difference between the basis material density image and the label basis material density image based on the basis material density image loss function (i.e., equation (2)) illustrated in step 206. For example, the basis material density image and the label basis material density image are substituted into equation (2) to obtain the value of the loss function. This value reflects the difference between the two images, and minimizing the loss function reduces that difference, making the model's predictions more accurate.
FIG. 4 is an exemplary flow diagram illustrating the determination of a low-energy image loss function and a first high-energy image loss function according to some embodiments of the present description. In some embodiments, flow 400 may be performed by a processing device. For example, the process 400 may be stored in a storage device (e.g., an onboard memory unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 400. As shown in fig. 4, the flow 400 may include the following operations.
Step 402, processing the sample low-energy image through the image processing model to obtain the basis material density image.
The process of obtaining the basis material density image is the same as that described in FIG. 3; for further details, refer to the description of FIG. 3, which is not repeated here.
Step 404, determining the predicted low energy image based on the basis material density image.
A predicted low-energy image is a low-energy image calculated from the prediction result of the image processing model (i.e., the basis material density image).
In some embodiments, the predicted low energy image may be determined based on equation (8) below.
Î_Low = M · A_Low    (8)

wherein Î_Low represents the predicted low-energy image, M is the predicted basis material density image, and A_Low is the material decomposition matrix of the sample low-energy image. Multiplying the basis material density image M by the material decomposition matrix A_Low yields the predicted low-energy image Î_Low. A_Low can be solved from the sample low-energy image and the basis material density image by least-squares fitting.
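As a non-limiting sketch, the least-squares fit mentioned above can be written as follows, assuming the basis material density image is stored as one 2-D array per basis material and that the material decomposition matrix reduces to one coefficient per basis material; the names and array shapes are illustrative, not prescribed by this description. The same sketch applies to A_High in step 408 by passing the sample high-energy image instead.

    import numpy as np

    def solve_decomposition_vector(density_images, sample_image):
        # density_images: (K, H, W), one predicted density plane per basis
        # material (e.g., water and bone); sample_image: (H, W).
        # Least-squares solve of sample_image ~= sum_k a[k] * density_images[k].
        K = density_images.shape[0]
        M = density_images.reshape(K, -1).T       # (H*W, K)
        y = sample_image.reshape(-1)              # (H*W,)
        a, *_ = np.linalg.lstsq(M, y, rcond=None)
        return a                                  # (K,) decomposition coefficients

    def predict_image(density_images, a):
        # Equation (8): combine the density planes with the decomposition
        # coefficients to obtain the predicted (low- or high-energy) image.
        return np.tensordot(a, density_images, axes=1)   # (H, W)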
Step 406, determining a low energy image loss function based on the predicted low energy image and the sample low energy image.
In some embodiments, the value of the low-energy image loss function may be calculated by substituting the predicted low-energy image and the sample low-energy image into equation (3), i.e., the low-energy image loss function illustrated in step 204 above.
Step 408, determining a predicted high-energy image based on the basis material density image.
The predicted high-energy image refers to a high-energy image calculated based on the prediction result of the image processing model (i.e., the basis material density image).
In some embodiments, the predicted high-energy image may be determined based on equation (9) below.
Î_High = M · A_High    (9)

wherein Î_High represents the predicted high-energy image, M is the predicted basis material density image, and A_High is the material decomposition matrix of the sample high-energy image. Multiplying the basis material density image M by the material decomposition matrix A_High yields the predicted high-energy image Î_High. In some embodiments, A_High can be solved from the sample high-energy image and the basis material density image by least-squares fitting.
Step 410, determining a first high energy image loss function based on the predicted high energy image and the sample high energy image.
In some embodiments, the value of the first high-energy image loss function may be calculated by substituting the predicted high-energy image and the sample high-energy image into equation (4) illustrated in step 204 above.
FIG. 5 is an exemplary flow diagram illustrating the determination of a second high-energy image loss function according to some embodiments of the present description. In some embodiments, flow 500 may be performed by a processing device. For example, the process 500 may be stored in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 500. As shown in fig. 5, the flow 500 may include the following operations.
Step 502, determining a topologically high energy image based on the basis material density image, the sample low energy image and the sample topological data.
The topological high-energy image is a high-energy image calculated based on the prediction result of the image processing model (i.e., the basis material density image), the sample low-energy image, and the sample topological data.
The sample topology data includes sample low energy topology data and sample high energy topology data.
For details of determining topologically high-energy images, reference may be made to fig. 6 and its associated description, which are not repeated herein.
Step 504, determining the second high-energy image loss function based on the sample high-energy image and the topological high-energy image.
In some embodiments, the determined second high-energy image loss function may be as shown in equation (5) of step 204.
In some embodiments, the processing device may substitute the topological high-energy image and the sample high-energy image into a second high-energy image loss function, and calculate a value of the second high-energy image loss function.
FIG. 6 is an exemplary flow diagram for determining a topologically high energy image, according to some embodiments of the present description. In some embodiments, flow 600 may be performed by a processing device. For example, the process 600 may be stored in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 600. As shown in fig. 6, the flow 600 may include the following operations.
Step 602, determining sample material density data based on the base material density image.
The material density data is obtained by forward projecting the base material density image. The material density data may be used to determine the decomposition matrix difference of the high energy image and the low energy image.
In some embodiments, the processing device may forward project the base material density image to obtain the sample material density data.
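For illustration, the forward projection can be sketched with the Radon transform from scikit-image, assuming a simple parallel-beam geometry; a practical projector would match the geometry of the imaging device.

    import numpy as np
    from skimage.transform import radon

    def forward_project(image, num_angles=180):
        # Forward projection operator R: image domain -> projection domain.
        theta = np.linspace(0.0, 180.0, num_angles, endpoint=False)
        return radon(image, theta=theta)          # sinogram

Applying forward_project to each plane of the basis material density image yields the sample material density data R·M used below.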
Step 604, determining a sample topology data difference based on the sample low energy topology data and the sample high energy topology data.
In some embodiments, the processing device may subtract the sample low energy topological data from the sample high energy topological data to obtain the sample topological data difference.
Step 606, determining material decomposition matrix differences based on the sample material density data and the sample topology data differences.
In some embodiments, for dual-energy CT, the low-energy image and the high-energy image can be represented by equation (10) and equation (11) below, respectively.
I_Low = M · A_Low    (10)

I_High = M · A_High    (11)

wherein I_Low represents the low-energy image, I_High represents the high-energy image, A_Low is the material decomposition matrix of the low-energy image, A_High is the material decomposition matrix of the high-energy image, and M is the basis material density image.
Subtracting the low energy image from the high energy image yields equation (12).
I_diff = M · A_diff    (12)

wherein I_diff represents the difference between the high-energy image and the low-energy image, and A_diff represents the difference between the decomposition matrices of the high-energy image and the low-energy image.
Then, forward projection, whose operator is denoted as R, is applied to both sides of equation (12), yielding the forward-projected equation (13).
R · I_diff = R · M · A_diff    (13)

wherein R·I_diff represents the energy data difference between the high-energy image and the low-energy image, R·M represents the material density data, and A_diff represents the decomposition matrix difference. In some embodiments, A_diff can be solved from equation (13) by least-squares fitting.
Based on the principles described above, in some embodiments, the processing device may solve for the material decomposition matrix difference based on equation (13) above. Specifically, the sample topology data difference corresponds to R·I_diff and represents the energy difference between the sample high-energy topological data and the sample low-energy topological data; the material density data R·M can be obtained by forward projecting the basis material density image; finally, the material decomposition matrix difference A_diff can be obtained by least-squares fitting.
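As an illustrative sketch, equation (13) can be solved by least squares as follows, reusing the forward_project sketch above; the function name and array shapes are illustrative.

    import numpy as np

    def solve_decomposition_matrix_difference(density_images, topo_diff, forward_project):
        # density_images: (K, H, W) basis material density image M.
        # topo_diff: projection-domain sample topology data difference R·I_diff.
        # Forward project each material plane to obtain R·M, then solve
        # R·I_diff = (R·M) · A_diff in the least-squares sense.
        P = np.stack([forward_project(d).reshape(-1) for d in density_images], axis=1)
        y = topo_diff.reshape(-1)
        a_diff, *_ = np.linalg.lstsq(P, y, rcond=None)
        return a_diff                             # (K,) decomposition matrix difference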
Step 608, determining image differences based on the material decomposition matrix differences and the base material density image.
The image difference refers to the difference between the sample high-energy image and the sample low-energy image. After the material decomposition matrix difference has been solved from the sample topology data difference and the material density data, the image difference can be obtained from equation (12) described above: substituting the basis material density image M and the solved material decomposition matrix difference A_diff into equation (12) yields the image difference I_diff.
Step 610, determining the topologically high-energy image based on the sample low-energy image and the image difference.
In some embodiments, the processing device may determine the topological high-energy image based on the sample low-energy image and the image difference, according to equations (10) through (12). Since I_diff = I_High - I_Low, the image difference M·A_diff has already been obtained, and I_Low is the sample low-energy image, adding the image difference to the sample low-energy image yields the topological high-energy image.
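By way of illustration only, steps 608 and 610 then reduce to one multiplication and one addition:

    import numpy as np

    def topological_high_energy_image(low_energy_image, density_images, a_diff):
        # Equation (12): image difference I_diff = M · A_diff, followed by
        # I_topo = I_Low + I_diff, since I_diff = I_High - I_Low.
        image_diff = np.tensordot(a_diff, density_images, axes=1)
        return low_energy_image + image_diff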
It should be noted that the above description of the respective flows is only for illustration and description, and does not limit the applicable scope of the present specification. Various modifications and alterations to the flow may occur to those skilled in the art, given the benefit of this description. However, such modifications and variations are intended to be within the scope of the present description. For example, changes to the flow steps described herein, such as the addition of pre-processing steps and storage steps, may be made.
FIG. 7 is an exemplary block diagram of a training system for an image processing model in accordance with some embodiments of the present description. As shown in fig. 7, the system 700 may include a first acquisition module 710, a first determination module 720, and a parameter adjustment module 730.
The first acquisition module 710 may be used to acquire a plurality of training samples.
In some embodiments, the first obtaining module 710 may obtain the plurality of training samples by reading from a database, a storage device, or an imaging device.
The first determining module 720 may be configured to input the training samples into an image processing model, and determine a basis material density image corresponding to the sample low energy image.
The parameter adjusting module 730 may be configured to adjust parameters of the image processing model based on the base material density image and one or more of the label basis material density image, the sample low-energy image, the sample high-energy image, and the sample topology data, with minimizing the target loss function as the training target, to obtain a trained image processing model.
Wherein the sample high-energy image corresponds to the sample low-energy image.
In some embodiments, the loss function corresponding to each training sample comprises: any one or combination of a base material density image loss function, a low energy image loss function, a first high energy image loss function, and a second high energy image loss function. Wherein the basis material density image loss function is determined based at least on the label basis material density image, the low energy image loss function is determined based at least on the sample low energy image, the first high energy image loss function is determined based at least on the sample high energy image, and the second high energy image loss function is determined based at least on the sample high energy image and sample topology data; the sample topological data comprise sample low-energy topological data and sample high-energy topological data, and the sample low-energy topological data correspond to the sample high-energy topological data.
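By way of example and not limitation, one training step combining the four loss terms might be sketched as below. The use of PyTorch, the tensor shapes, the equally weighted sum, and the treatment of the decomposition vectors and the topological high-energy image as precomputed per batch (outside the computation graph) are all assumptions of the sketch, not requirements of this description.

    import torch

    def mix(density, coeffs):
        # Equations (8)/(9): density (B, K, H, W), coeffs (K,) -> image (B, H, W).
        return torch.einsum("k,bkhw->bhw", coeffs, density)

    def train_step(model, optimizer, batch, w=(1.0, 1.0, 1.0, 1.0)):
        low = batch["low"]                   # (B, 1, H, W) sample low-energy images
        high = batch["high"]                 # (B, H, W)    sample high-energy images
        label = batch["label_density"]       # (B, K, H, W) label basis material density images
        a_low, a_high = batch["a_low"], batch["a_high"]   # (K,) fitted decomposition vectors
        topo_high = batch["topo_high"]       # (B, H, W)    topological high-energy images

        pred = model(low)                    # (B, K, H, W) predicted density images

        loss_density = torch.mean((pred - label) ** 2)                    # cf. equation (2)
        loss_low = torch.mean((mix(pred, a_low) - low.squeeze(1)) ** 2)   # cf. equation (3)
        loss_high_1 = torch.mean((mix(pred, a_high) - high) ** 2)         # cf. equation (4)
        loss_high_2 = torch.mean((topo_high - high) ** 2)                 # cf. equation (5)

        loss = w[0] * loss_density + w[1] * loss_low + w[2] * loss_high_1 + w[3] * loss_high_2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return float(loss)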
Fig. 8 is an exemplary block diagram of a system for generating a density image of a base material, according to some embodiments of the present disclosure. As shown in fig. 8, the system 800 may include a second obtaining module 810 and a second determining module 820.
The second obtaining module 810 may be configured to obtain a to-be-processed image of the target object.
The second determining module 820 may be configured to input the image to be processed into the image processing model obtained by training the image processing model training method shown in the embodiment of the present specification, and determine the basis material density image of the image to be processed.
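For illustration, the inference path of system 800 can be sketched as follows, assuming the trained model maps a single-channel image to K basis material density planes; the interface is illustrative.

    import numpy as np
    import torch

    def generate_density_image(model, image_to_process):
        # Image to be processed -> basis material density image, (K, H, W).
        model.eval()
        with torch.no_grad():
            x = torch.as_tensor(np.asarray(image_to_process), dtype=torch.float32)
            return model(x[None, None])[0].cpu().numpy()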
FIG. 9 is an exemplary block diagram of a system for generating high-energy images in accordance with some embodiments of the present description. As shown in fig. 9, the system 900 may include a third obtaining module 910, a third determining module 920, a fourth determining module 930, and a fifth determining module 940.
The third acquisition module 910 may be used to acquire low energy images and topology data of the target object.
The topological data comprise low-energy topological data and high-energy topological data, and the low-energy topological data correspond to the high-energy topological data.
The third determining module 920 may be configured to input the low energy image into an image processing model trained by the image processing model training method described in the embodiments of the present specification, and determine a basis material density image of the low energy image.
The fourth determination module 930 may be configured to determine a topology data difference based on the low energy topology data and the high energy topology data.
A fifth determination module 940 may be used to determine a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topology data difference.
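As an illustrative sketch, the pipeline of system 900 can be expressed by chaining the helpers sketched earlier in this description (generate_density_image, solve_decomposition_matrix_difference, forward_project); all names are illustrative.

    import numpy as np

    def generate_high_energy_image(model, low_energy_image, low_topo, high_topo):
        density = generate_density_image(model, low_energy_image)        # (K, H, W)
        topo_diff = high_topo - low_topo                                 # topology data difference
        a_diff = solve_decomposition_matrix_difference(density, topo_diff, forward_project)
        image_diff = np.tensordot(a_diff, density, axes=1)               # equation (12)
        return low_energy_image + image_diff                             # high-energy image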
With regard to the detailed description of the various modules of the above system, reference may be made to the flow chart section of this specification, e.g., the associated description of fig. 2-6.
It should be understood that the systems shown in figs. 7-9 and their modules may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The systems and modules in this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the training system for the image processing model and its modules is only for convenience of description and does not limit the scope of the embodiments. It will be appreciated by those skilled in the art that, given the teachings of the system, any combination of the modules, or connection of a constituent subsystem to other modules, may be configured without departing from those teachings. For example, in some embodiments, the first obtaining module 710 and the first determining module 720 may be different modules in one system, or one module may implement the functions of two or more of the modules described above. As another example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
The beneficial effects that may be brought by the embodiments of the present specification include, but are not limited to: (1) The image processing model is trained by utilizing a low-energy image in combination with a deep learning technology, and the trained image processing model can realize the material decomposition function by using a single-energy CT image; (2) In the model training process, a loss function is constructed in multiple modes for constraint, so that the robustness of an image processing model obtained by training can be improved; (3) The hardware of the embodiment of the specification is simple and convenient to implement, the base material density image can be obtained by using the single-energy CT image obtained by scanning the target object by using the low-dose rays, the radiation dose to the target object is reduced, and meanwhile, the obtained base material density image can have a higher signal-to-noise ratio; (4) Before the low-energy image is input into the image processing model, the low-energy image does not need to be subjected to processing such as denoising and the like, so that the process of obtaining the base material density image is simplified; (5) Compared with the method that the estimated high-energy image is obtained through the low-energy image by using the deep learning model, and the base material density image is obtained based on the low-energy image and the estimated high-energy image, the method can avoid or reduce the problem of serious degradation of the image signal to noise ratio caused by matrix inversion of the low-energy image and the high-energy image, and save the time required by iterative material decomposition.
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested in this specification, and are intended to be within the spirit and scope of the exemplary embodiments of this specification.
Also, the description uses specific words to describe embodiments of the specification. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable categories or situations, including any new and useful combination of processes, machines, manufactures, or materials, or any new and useful modification thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences are described in this specification, the use of numerical letters, or other designations are not intended to limit the order of the processes and methods described in this specification, unless explicitly stated in the claims. While certain presently contemplated useful embodiments of the invention have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein described. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the foregoing description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification, the entire contents are hereby incorporated by reference into this specification, excepting any application history document that is inconsistent with or conflicts with the contents of this specification, and any document that limits the broadest scope of the claims of this specification (whether currently or later appended to this specification). It is to be understood that if the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or contrary to those of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those explicitly described and depicted herein.

Claims (11)

1. A method of training an image processing model, the method comprising:
obtaining a plurality of training samples, wherein each training sample in the plurality of training samples comprises a sample low energy image;
inputting the training sample into an image processing model, and determining a base material density image corresponding to the sample low-energy image;
adjusting parameters of the image processing model based on the base material density image and one or more of a label basis material density image, the sample low-energy image, a sample high-energy image and sample topological data, by taking an optimized target loss function as a training target, to obtain a trained image processing model; wherein the sample high-energy image corresponds to the sample low-energy image.
2. The method of claim 1, wherein the objective loss function comprises:
any one or combination of a base material density image loss function, a low-energy image loss function, a first high-energy image loss function and a second high-energy image loss function;
wherein the basis material density image loss function is determined based at least on the label basis material density image, the low energy image loss function is determined based at least on the sample low energy image, the first high energy image loss function is determined based at least on the sample high energy image, and the second high energy image loss function is determined based at least on the sample high energy image and sample topology data;
the sample topological data comprise sample low-energy topological data and sample high-energy topological data, and the sample low-energy topological data correspond to the sample high-energy topological data.
3. The method of claim 2, wherein determining a second high energy image loss function based on the sample high energy image and sample topology data comprises:
determining a topologically high energy image based on the basis material density image, the sample low energy image and the sample topological data;
determining the second high-energy image loss function based on the sample high-energy image and the topological high-energy image.
4. The method of claim 3, wherein determining a topologically high-energy image based on the basis material density image, the sample low-energy image, and the sample topology data comprises:
determining sample material density data based on the base material density image;
determining sample topological data differences based on the sample low-energy topological data and the sample high-energy topological data;
determining a material decomposition matrix difference based on the sample material density data and the sample topology data difference;
determining image differences based on the material decomposition matrix differences and the basis material density images;
determining the topologically high-energy image based on the sample low-energy image and the image difference.
5. The method of claim 1, wherein the sample low energy image is obtained by scanning the target object at a first radiation dose and the sample high energy image is obtained by scanning the target object at a second radiation dose;
wherein the first radiation dose is lower than the second radiation dose.
6. A system for training an image processing model, the system comprising:
a first obtaining module, configured to obtain a plurality of training samples, where each training sample in the plurality of training samples includes a sample low energy image;
the first determining module is used for inputting the training sample to an image processing model and determining a base material density image corresponding to the sample low-energy image;
the parameter adjusting module is used for adjusting parameters of the image processing model based on one or more of the label-based material density image, the sample low-energy image, the sample high-energy image and the sample topological data and the base material density image by taking a minimized target loss function as a training target to obtain a trained image processing model; wherein the sample high energy image corresponds to the sample low energy image.
7. A method of generating a density image of a base material, the method comprising:
acquiring an image to be processed of a target object;
inputting the image to be processed into an image processing model obtained by training according to the method of any one of claims 1-5, and determining a base material density image of the image to be processed.
8. A system for generating an image of density of a base material, the system comprising:
the second acquisition module is used for acquiring an image to be processed of the target object;
a second determining module, configured to input the image to be processed into an image processing model trained according to the method of any one of claims 1-5, and determine a basis material density image of the image to be processed.
9. A method of generating an energetic image, the method comprising:
acquiring a low-energy image and topological data of a target object; the topological data comprise low-energy topological data and high-energy topological data, and the low-energy topological data correspond to the high-energy topological data;
inputting the low energy image into an image processing model trained by the method of any one of claims 1-5, determining a basis material density image of the low energy image;
determining a topology data difference based on the low energy topology data and the high energy topology data;
determining a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topological data difference.
10. A system for generating an energetic image, the system comprising:
the third acquisition module is used for acquiring a low-energy image and topological data of the target object; the topological data comprise low-energy topological data and high-energy topological data, and the low-energy topological data correspond to the high-energy topological data;
a third determining module, configured to input the low energy image into an image processing model trained by the method according to any one of claims 1-5, and determine a basis material density image of the low energy image;
a fourth determining module for determining a topology data difference based on the low energy topology data and the high energy topology data;
a fifth determining module to determine a high-energy image corresponding to the low-energy image based on the low-energy image, the basis material density image, and the topological data difference.
11. An apparatus for training an image processing model, comprising at least one storage medium and at least one processor, the at least one storage medium storing computer instructions; the at least one processor is configured to execute the computer instructions to implement the method of any of claims 1-5.
CN202110412250.6A 2021-04-16 2021-04-16 Training method and system of image processing model Pending CN115222055A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110412250.6A CN115222055A (en) 2021-04-16 2021-04-16 Training method and system of image processing model
PCT/CN2022/087499 WO2022218441A1 (en) 2021-04-16 2022-04-18 Systems and methods for imaging
US18/488,002 US20240046534A1 (en) 2021-04-16 2023-10-16 Systems and methods for imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110412250.6A CN115222055A (en) 2021-04-16 2021-04-16 Training method and system of image processing model

Publications (1)

Publication Number Publication Date
CN115222055A true CN115222055A (en) 2022-10-21

Family

ID=83604287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412250.6A Pending CN115222055A (en) 2021-04-16 2021-04-16 Training method and system of image processing model

Country Status (1)

Country Link
CN (1) CN115222055A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination