CN114693642A - Nodule matching method and device, electronic equipment and storage medium

Nodule matching method and device, electronic equipment and storage medium

Info

Publication number
CN114693642A (application number CN202210332611.0A)
Authority
CN
China
Prior art keywords
nodule
image
candidate
pair
nodules
Prior art date
Legal status
Granted
Application number
CN202210332611.0A
Other languages
Chinese (zh)
Other versions
CN114693642B (en)
Inventor
代玉婷
丁佳
吕晨翀
Current Assignee
Zhejiang Yizhun Intelligent Technology Co ltd
Original Assignee
Beijing Yizhun Medical AI Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yizhun Medical AI Co., Ltd.
Priority to CN202210332611.0A
Publication of CN114693642A
Application granted
Publication of CN114693642B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a nodule matching method, apparatus, electronic device and storage medium, the method comprising: acquiring a nodule image to be processed and a reference nodule image; inputting the nodule image to be processed and the reference nodule image into an image registration network model to obtain a candidate nodule image output by the image registration network model; determining a nodule in the candidate nodule image and a nodule in the reference nodule image, respectively; and determining a nodule pair formed by the candidate nodule image and the reference nodule image based on a relationship between a nodule in the candidate nodule image and a nodule in the reference nodule image.

Description

Nodule matching method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing technologies, and in particular, to a nodule matching method and apparatus, an electronic device, and a storage medium.
Background
Nodules are closely related to human health, and imaging diagnosis is currently an important technical means by which medical personnel analyze and study how nodules change.
However, determining the matching relationships of nodules from medical images costs medical personnel considerable time and energy. Automatically determining these matching relationships from medical images is therefore important for improving their work efficiency; at the same time, improving the quality of automatic nodule matching so that it meets the requirements of imaging diagnosis remains a difficult problem to overcome.
Disclosure of Invention
The present disclosure provides a nodule matching method, apparatus, electronic device, and storage medium to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a nodule matching method, the method comprising: acquiring a nodule image to be processed and a reference nodule image; inputting the nodule image to be processed and the reference nodule image into an image registration network model to obtain a candidate nodule image output by the image registration network model; determining a nodule in the candidate nodule image and a nodule in the reference nodule image, respectively; and determining a nodule pair formed by the candidate nodule image and the reference nodule image based on a relationship between a nodule in the candidate nodule image and a nodule in the reference nodule image.
In an embodiment, the method further comprises: inputting a sample nodule image to be processed and a reference sample nodule image into the image registration network model; determining a dense displacement field based on an output of the image registration network model; processing the sample nodule image to be processed by using the dense displacement field to obtain a candidate sample nodule image; determining a difference between the candidate sample nodule image and the reference sample nodule image; optimizing parameters of the image registration network model based on the differences.
In a possible implementation, the position of the first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of the first pixel in the dense displacement field.
In one possible implementation, the determining the difference between the candidate sample nodule image and the reference sample nodule image includes: determining an average of squared differences of pixel values at all corresponding same pixel locations between the reference sample nodule image and the candidate sample nodule image.
In an embodiment, the determining a nodule pair formed by the candidate nodule image and the reference nodule image based on the relationship between the nodule in the candidate nodule image and the nodule in the reference nodule image includes: constructing a bipartite graph based on nodules in the candidate nodule image and nodules in the reference nodule image; determining nodes having a connection relationship in the bipartite graph as candidate nodule pairs; and determining the nodule pair based on the candidate nodule pairs.
In one embodiment, the determining the nodule pair based on the nodule candidate pair includes: determining a difference in image features between two nodules included in the pair of candidate nodules; determining a difference in distance between two nodules included in the pair of candidate nodules; determining an average of a difference in image features between two nodules included in the pair of nodule candidates and a difference in distance between two nodules included in the pair of nodule candidates as a difference between two nodules included in the pair of nodule candidates; determining a weight of a connecting edge between nodes corresponding to two nodules included in the candidate nodule pair in the bipartite graph based on a difference between the two nodules included in the candidate nodule pair; and constructing a weighted bipartite graph based on the weight, and determining the nodule pairs in the weighted bipartite graph, wherein the differences of the nodule pairs meet preset conditions.
In one embodiment, the determining the difference in image features between two nodules included in the candidate nodule pair includes: determining a weighted average of a volume difference between two nodules included in the pair of nodule candidates, a nodule type difference between two nodules included in the pair of nodule candidates, and a feature vector difference between two nodules included in the pair of nodule candidates as an image feature difference between two nodules included in the pair of nodule candidates; determining a relative difference in volume between two nodules comprised by the pair of nodule candidates as a difference in volume between the two nodules comprised by the pair of nodule candidates; determining a nodule type difference between two nodules comprised by the pair of nodule candidates based on a hamming distance encoded by the types of the two nodules comprised by the pair of nodule candidates; determining a feature vector difference between two nodules included in the pair of nodule candidates based on a cosine similarity of feature vectors of the two nodules included in the pair of nodule candidates.
In one embodiment, the determining a distance difference between two nodules included in the pair of candidate nodules comprises: determining a ratio of a distance between two nodules included in the pair of nodule candidates to a distance threshold as a difference in distance between the two nodules included in the pair of nodule candidates.
According to a second aspect of the present disclosure, there is provided a nodule matching apparatus, the apparatus comprising: the acquiring module is used for acquiring a nodule image to be processed and a reference nodule image; the input module is used for inputting the nodule image to be processed and the reference nodule image into an image registration network model to obtain a candidate nodule image output by the image registration network model; a determination module for determining a nodule in the nodule candidate image and a nodule in the reference nodule image, respectively; determining a nodule pair formed by the nodule candidate image and the reference nodule image based on a relationship between a nodule in the nodule candidate image and a nodule in the reference nodule image.
In an embodiment, the input module is further configured to input a sample nodule image to be processed and a reference sample nodule image into the image registration network model; the determination module is further configured to determine a dense displacement field based on an output of the image registration network model; the device further comprises: the processing module is used for processing the sample nodule image to be processed by utilizing the dense displacement field to obtain a candidate sample nodule image; the determination module is further configured to determine a difference between the candidate sample nodule image and the reference sample nodule image; the device further comprises: an optimization module to optimize parameters of the image registration network model based on the differences.
In a possible implementation, the position of the first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of the first pixel in the dense displacement field.
In an embodiment, the determining module is specifically configured to determine an average of squared differences of pixel values at all corresponding same pixel positions between the reference sample nodule image and the candidate sample nodule image.
In an embodiment, the determining module is specifically configured to construct a bipartite graph based on the nodules in the candidate nodule image and the nodules in the reference nodule image; determine nodes having a connection relationship in the bipartite graph as candidate nodule pairs; and determine the nodule pair based on the candidate nodule pairs.
In an implementation, the determining module is specifically configured to determine a difference in image features between two nodules included in the pair of candidate nodules; determining a difference in distance between two nodules comprised by the pair of candidate nodules; determining an average of the difference in image features between two nodules included in the pair of nodule candidates and the difference in distance between two nodules included in the pair of nodule candidates as the difference between two nodules included in the pair of nodule candidates; determining a weight of a connecting edge between nodes corresponding to two nodules included in the candidate nodule pair in the bipartite graph based on a difference between the two nodules included in the candidate nodule pair; and constructing a weighted bipartite graph based on the weight, and determining the nodule pairs in the weighted bipartite graph, wherein the differences of the nodule pairs meet preset conditions.
In an implementation, the determining module is specifically configured to determine a weighted average of a volume difference between two nodules included in the nodule candidate pair, a nodule type difference between two nodules included in the nodule candidate pair, and a feature vector difference between two nodules included in the nodule candidate pair as an image feature difference between two nodules included in the nodule candidate pair; determining a relative difference in volume between two nodules comprised by the pair of nodule candidates as a difference in volume between the two nodules comprised by the pair of nodule candidates; determining a nodule type difference between two nodules comprised by the nodule candidate pair based on a hamming distance encoded by types of the two nodules comprised by the nodule candidate pair; determining a feature vector difference between two nodules included in the nodule candidate pair based on a cosine similarity of feature vectors of the two nodules included in the nodule candidate pair.
In an embodiment, the determining module is specifically configured to determine a ratio of a distance between two nodules included in the nodule candidate pair and a distance threshold as a distance difference between the two nodules included in the nodule candidate pair.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
According to the nodule matching method and apparatus, electronic device, and storage medium of the present disclosure, images with complex transformations are processed by the image registration network model, improving the image registration effect; the nodule matching relationship is then solved on the registered images, improving the nodule matching effect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a flow chart diagram illustrating a nodule matching method according to an embodiment of the present disclosure;
FIG. 2 is another flow chart diagram illustrating a nodule matching method in accordance with an embodiment of the present disclosure;
FIG. 3 is a detailed alternative flow diagram of a nodule matching method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an algorithmic flow of a nodule matching method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a model structure of a nodule matching method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another model structure of a nodule matching method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a bipartite graph matching flow of a nodule matching method according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram illustrating the components of a nodule matching apparatus according to an embodiment;
fig. 9 is a schematic diagram illustrating a composition structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The current method for determining the matching relationship of the nodule according to the medical image mainly comprises two steps of image registration and nodule matching.
The purpose of image registration is to align images. Currently, alignment mainly uses translation, rotation and affine transformation: a preliminary transformation matrix is determined from positioning anchor points, the transformation matrix is then refined against organ segmentations, and the images are aligned using the resulting matrix, completing the registration. However, these methods assume too few degrees of freedom for the image transformation, so they can only describe simple transformations such as translation, scaling and rotation; they perform well when the deformation between images is simple, but poorly when the deformation is complex.
The purpose of nodule matching is to calculate the nodule matching relationships across different images after the images have been registered. Current approaches mainly adopt a greedy matching strategy based on absolute or Euclidean distance, selecting matches in order of increasing distance, or calculate normalized cross-correlation coefficients between nodules of different images and judge from these coefficients whether nodules match. However, these nodule matching methods do not combine the nodule distance and the nodule image features at the same time, and the matching effect is poor when several adjacent nodules are present locally.
Addressing the defect that existing image registration methods can only handle simple image transformations, the present disclosure proposes predicting a dense displacement field with a fully convolutional network; because a dense displacement field can represent rich image transformations, the image registration effect is effectively improved. Addressing the defect that existing nodule matching methods cannot combine nodule distance information and nodule image features simultaneously and easily mismatch when several adjacent nodules are present locally, the present disclosure proposes extracting nodule image features with a deep neural network and combining the nodule distance with the nodule image features, effectively improving the nodule matching effect.
Fig. 1 shows a flowchart of a nodule matching method according to an embodiment of the present disclosure.
Referring to fig. 1, a processing flow of a nodule matching method according to an embodiment of the present disclosure includes at least the following steps:
step S101, a nodule image to be processed and a reference nodule image are acquired.
In some embodiments, the nodule matching method provided in the embodiments of the present disclosure may further include at least: training the image registration network model. The nodule image to be processed and the reference nodule image may be used as input parameters of the image registration network model, and the process of training the image registration network model may include:
step a, inputting a sample nodule image to be processed and a reference sample nodule image into an image registration network model;
in some embodiments, one of the sets of sample nodule images is designated as a reference sample nodule image IfAlso referred to as fixed images; the other group of sample nodule images is marked as sample nodule images I to be processedmWhich may also be referred to as moving images, each set of images being three-dimensional images.
In some embodiments, the reference sample nodule image and the sample nodule image to be processed are input to an image registration network, which may be any semantic segmentation network; in particular, the image registration network may be a full convolution network.
Step b, determining a dense displacement field based on the output of the image registration network model;
in some embodiments, the reference sample nodule image and the sample nodule image to be processed are stitched to form a two-channel image, and the formed two-channel image is input to an image registration network, and a dense displacement field output by the image registration network is obtained, where the dense displacement field represents a pixel-by-pixel displacement transformation of the sample nodule image to be processed with respect to the reference sample nodule image in a three-dimensional space direction.
Step c, processing the sample nodule image to be processed by using the dense displacement field to obtain a candidate sample nodule image;
In some embodiments, the sample nodule image to be processed is transformed by the dense displacement field to obtain the candidate sample nodule image, i.e. the transformed image. A correspondence exists between the pixel positions of the sample nodule image to be processed and the pixel positions of the candidate sample nodule image: given the displacement of the dense displacement field at a pixel position of the candidate sample nodule image, the corresponding pixel position in the sample nodule image to be processed can be calculated. Accordingly, the pixel value at a pixel position of the candidate sample nodule image should be the pixel value at the corresponding pixel position of the sample nodule image to be processed; that is, the position of a first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of said first pixel in the dense displacement field.
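As an illustrative aid, the correspondence described above can be sketched as a backward warp: for each pixel p of the candidate image, read the moving image at p' = p + displacement. This is a minimal sketch, not the patent's implementation; all names are hypothetical, and nearest-neighbour rounding stands in for the interpolation discussed with equation (6) later.

```python
import numpy as np

def warp_nearest(moving, disp):
    """Backward-warp a 3D image by a dense displacement field.

    moving: (D, H, W) array, the sample nodule image to be processed.
    disp:   (3, D, H, W) array; disp[:, p] is the displacement at pixel p,
            so the output at p reads the moving image at p' = p + disp[:, p].
    """
    D, H, W = moving.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
    sz = np.clip(np.rint(zz + disp[0]).astype(int), 0, D - 1)
    sy = np.clip(np.rint(yy + disp[1]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xx + disp[2]).astype(int), 0, W - 1)
    return moving[sz, sy, sx]
```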
Step d, determining the difference between the candidate sample nodule image and the reference sample nodule image;
in some embodiments, a specific implementation of determining the difference between the candidate sample nodule image and the reference sample nodule image may include:
an average of the squared differences of pixel values at all corresponding same pixel locations between the reference sample nodule image and the candidate sample nodule image is determined.
In general, a loss function serves as the learning criterion of an optimization problem; that is, the model is solved and evaluated by minimizing the loss function. In the embodiments of the present disclosure, the loss function reflects the difference between the candidate sample nodule image and the reference sample nodule image, and the image registration network model is solved and optimized through it.
During the training process of the image registration network model, the true labeling of the dense displacement field may be absent, so the similarity loss between the reference sample nodule image and the candidate sample nodule image may be selected as the loss function. The mean value of the squared differences of the pixel values at all corresponding identical pixel positions between the reference sample nodule image and the candidate sample nodule image may characterize the similarity loss between the candidate sample nodule image and the reference sample nodule image, reflecting the difference between the candidate sample nodule image and the reference sample nodule image.
As an example, one implementation of calculating the similarity loss is to calculate the mean square error loss, i.e. the mean of the squared differences of the pixel values at all identical pixel positions of the reference sample nodule image and the candidate sample nodule image; the smaller the mean square error loss, the more similar the two images, as shown in the following equation (1):

$$L_{sim} = \frac{1}{|N|} \sum_{p \in N} \left[ I_f(p) - (I_m \circ \phi)(p) \right]^2 \qquad (1)$$

where $L_{sim}$ denotes the similarity loss, $I_f$ the reference sample nodule image, $\phi$ the dense displacement field, $I_m \circ \phi$ the composite mapping (the candidate sample nodule image), $I_m$ the sample nodule image to be processed, $N$ the set of pixel positions, and $p$ a pixel position of the candidate sample nodule image.
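As a minimal sketch of equation (1) (illustrative only; the array names are assumptions):

```python
import numpy as np

def similarity_loss(fixed, warped):
    """Equation (1): mean of the squared pixel-value differences between the
    reference sample nodule image and the warped (candidate) image, taken
    over all identical pixel positions."""
    return float(np.mean((fixed - warped) ** 2))
```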
In the embodiment of the disclosure, the similarity loss can optimize the dense displacement field, so that the sample nodule image to be processed is similar to the reference sample nodule image after being transformed by the dense displacement field, and the purpose of aligning the images is achieved.
Step e, optimizing parameters of the image registration network model based on the difference.
By optimizing the parameters of the image registration network model, the image registration network model can be trained.
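By way of illustration only, one training iteration might look like the following PyTorch-style sketch; the helper, the assumed (z, y, x) channel ordering of the displacement field, and all names are assumptions rather than the patent's code:

```python
import torch
import torch.nn.functional as F

def warp_with_field(moving, disp):
    """Differentiable backward warp of a (B, 1, D, H, W) volume by a
    (B, 3, D, H, W) displacement field (channels assumed ordered z, y, x)."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H),
                                torch.arange(W), indexing="ij")
    base = torch.stack([xx, yy, zz]).float().to(moving.device)  # (3, D, H, W)
    coords = base.unsqueeze(0) + disp[:, [2, 1, 0]]             # p' = p + displacement
    nx = 2 * coords[:, 0] / (W - 1) - 1                         # normalise to [-1, 1]
    ny = 2 * coords[:, 1] / (H - 1) - 1
    nz = 2 * coords[:, 2] / (D - 1) - 1
    grid = torch.stack([nx, ny, nz], dim=-1)                    # (B, D, H, W, 3)
    return F.grid_sample(moving, grid, align_corners=True)

def train_step(reg_net, moving, fixed, optimizer):
    """One iteration: stitch the two images into a two-channel volume,
    predict the dense displacement field, warp, minimise equation (1)."""
    disp = reg_net(torch.cat([moving, fixed], dim=1))           # (B, 3, D, H, W)
    warped = warp_with_field(moving, disp)
    loss = F.mse_loss(warped, fixed)                            # similarity loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A smoothness term such as equation (2) below would simply be added to the loss before the backward pass.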
Step S102, inputting the nodule image to be processed and the reference nodule image into the image registration network model to obtain a candidate nodule image output by the image registration network model.
In step S103, a nodule in the candidate nodule image and a nodule in the reference nodule image are determined, respectively.
In some embodiments, after the image registration step, a candidate nodule image, that is, an aligned nodule image, is obtained. The position of a nodule in the candidate nodule image may be calculated from the position of the nodule in the image to be processed, giving the aligned nodule position. The positions of nodules in the image to be processed and in the reference nodule image may be acquired by a nodule detection system.
Step S104, determining a nodule pair formed by the candidate nodule image and the reference nodule image based on the relationship between the nodule in the candidate nodule image and the nodule in the reference nodule image.
In some embodiments, determining a nodule pair formed by the candidate nodule image and the reference nodule image based on the relationship between a nodule in the candidate nodule image and a nodule in the reference nodule image may include at least:
Step f, constructing a bipartite graph based on the nodules in the candidate nodule image and the nodules in the reference nodule image;
the nodule matching problem can be formalized as a bipartite graph matching problem, and nodules detected according to the same nodule image are different nodules which independently exist, so that no connection relation exists between the nodules in the same nodule image, and the nodules detected according to different nodule images may cause the situation that the same nodule is present in a plurality of different nodule images in an independently existing form, so that connection relations exist between the nodules in different nodule images. Generally, nodules with excessively large position differences in different nodule images cannot be the same nodule, so that the connection relation between the nodules with excessively large position differences can be deleted in a distance threshold value setting mode, and a bipartite graph is preliminarily constructed.
Step g, determining nodes with connection relations in the bipartite graph as candidate nodule pairs;
and the nodes with the connection relation in the bipartite graph are nodules with the connection relation in the lung image.
Step h, determining the nodule pairs based on the candidate nodule pairs.
In some embodiments, the specific implementation process for determining a nodule pair based on the candidate nodule pairs may include at least:
Step i, determining the image feature difference between two nodules included in a candidate nodule pair;
Step j, determining the distance difference between the two nodules included in the candidate nodule pair;
Step k, determining the average of the image feature difference between the two nodules included in the candidate nodule pair and the distance difference between the two nodules as the difference between the two nodules included in the candidate nodule pair;
Step l, determining the weight of the connecting edge between the nodes corresponding to the two nodules included in the candidate nodule pair in the bipartite graph based on the difference between the two nodules;
Step m, constructing a weighted bipartite graph based on the weights, and determining the nodule pairs in the weighted bipartite graph whose differences meet preset conditions.
In the embodiments of the present disclosure, a method is provided that performs nodule matching by combining the nodule distance with the image features between nodules. Matching by nodule distance alone can produce matching errors when several adjacent nodules are present locally in a nodule image; combining both further improves the accuracy of nodule matching.
Fig. 2 is another flow chart of a nodule matching method according to an embodiment of the present disclosure.
Referring to fig. 2, another processing flow of a nodule matching method according to an embodiment of the present disclosure includes at least the following steps:
step S201, inputting the sample nodule image to be processed and the reference sample nodule image into an image registration network model.
In some embodiments, one group of sample nodule images is denoted the reference sample nodule image I_f, also called the fixed image; the other group is denoted the sample nodule image to be processed I_m, also called the moving image. Each group of images is a three-dimensional image.
In some embodiments, the reference sample nodule image and the sample nodule image to be processed are input to an image registration network, which may be any semantic segmentation network; in particular, the image registration network may be a full convolution network.
Step S202, determining a dense displacement field based on the output of the image registration network model.
In some embodiments, the reference sample nodule image and the sample nodule image to be processed are stitched to form a two-channel image, and the formed two-channel image is input to an image registration network, and a dense displacement field output by the image registration network is obtained, where the dense displacement field represents a pixel-by-pixel displacement transformation of the sample nodule image to be processed with respect to the reference sample nodule image in a three-dimensional space direction.
Step S203, processing the sample nodule image to be processed by using the dense displacement field to obtain a candidate sample nodule image.
In some embodiments, the sample nodule image to be processed is transformed by the dense displacement field to obtain the candidate sample nodule image, i.e. the transformed image. A correspondence exists between the pixel positions of the sample nodule image to be processed and the pixel positions of the candidate sample nodule image: given the displacement of the dense displacement field at a pixel position of the candidate sample nodule image, the corresponding pixel position in the sample nodule image to be processed can be calculated. Accordingly, the pixel value at a pixel position of the candidate sample nodule image should be the pixel value at the corresponding pixel position of the sample nodule image to be processed; that is, the position of a first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of said first pixel in the dense displacement field.
In some embodiments, the pixel position computed in the sample nodule image to be processed may not be an integer position, whereas image pixels lie only at integer positions, so a pixel in the candidate sample nodule image may fail to correspond directly to a pixel in the sample nodule image to be processed. The embodiments of the present disclosure obtain the corresponding pixel position in the sample nodule image to be processed from the pixel position of the candidate sample nodule image and the dense displacement field, and resolve the correspondence between the pixels in the candidate sample nodule image and the pixels in the sample nodule image to be processed, thereby avoiding this problem.
In step S204, a difference between the candidate sample nodule image and the reference sample nodule image is determined.
In some embodiments, a specific implementation of determining the difference between the candidate sample nodule image and the reference sample nodule image may include:
an average of the squared differences of pixel values at all corresponding same pixel locations between the reference sample nodule image and the candidate sample nodule image is determined.
In general, a loss function serves as the learning criterion of an optimization problem; that is, the model is solved and evaluated by minimizing the loss function. In the embodiments of the present disclosure, the loss function reflects the difference between the candidate sample nodule image and the reference sample nodule image, and the image registration network model is solved and optimized through it.
During the training process of the image registration network model, the true labeling of the dense displacement field may be absent, so the similarity loss between the reference sample nodule image and the candidate sample nodule image may be selected as the loss function. The mean value of the squared differences of the pixel values at all corresponding identical pixel positions between the reference sample nodule image and the candidate sample nodule image can characterize the similarity loss between the candidate sample nodule image and the reference sample nodule image, and reflect the difference between the candidate sample nodule image and the reference sample nodule image.
As an example, one implementation of calculating the similarity loss is to calculate the mean square error loss, i.e. the mean of the squared differences of the pixel values at all identical pixel positions of the reference sample nodule image and the candidate sample nodule image; the smaller the mean square error loss, the more similar the two images, as shown in equation (1) above.
In the embodiment of the disclosure, the similarity loss can optimize the dense displacement field, so that the sample nodule image to be processed is similar to the reference sample nodule image after being transformed by the dense displacement field, and the purpose of aligning the images is achieved.
In some embodiments, the loss function further includes a smoothness loss, whose effect is to make the transformations of neighboring regions in the sample nodule image to be processed similar; this effectively prevents the predicted dense displacement field from being unreasonable and improves the efficiency and accuracy of image registration.
As an example, one implementation of the smoothness loss is the L1 norm of the dense displacement field gradient, as shown in the following equation (2):

$$L_{smooth} = \frac{1}{|N|} \sum_{p \in N} \left\| \nabla \phi(p) \right\|_1 \qquad (2)$$

where $L_{smooth}$ denotes the smoothness loss, $\phi$ the dense displacement field, $N$ the set of pixel positions, $p$ a pixel position of the candidate sample nodule image, and $\phi(p)$ the displacement of the dense displacement field at the same pixel position as the candidate sample nodule image; $\phi(p)$ comprises displacements along the x, y and z dimensions, corresponding respectively to the three dimensions of the image. The gradient is $\nabla \phi(p) = \left( \tfrac{\partial \phi(p)}{\partial x}, \tfrac{\partial \phi(p)}{\partial y}, \tfrac{\partial \phi(p)}{\partial z} \right)$, where $\tfrac{\partial \phi(p)}{\partial x}$ can be calculated from the difference between the displacements at horizontally adjacent pixel positions, and $\tfrac{\partial \phi(p)}{\partial y}$ and $\tfrac{\partial \phi(p)}{\partial z}$ can be calculated in the same way.
In the embodiments of the present disclosure, the dense displacement field gradient is the difference between the displacements at adjacent pixel positions of the dense displacement field, and adding the L1 norm of the gradient to the loss reflects how close the displacements at adjacent pixel positions of the dense displacement field are.
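A minimal sketch of equation (2) using forward differences (illustrative; assumes a (B, 3, D, H, W) displacement field):

```python
import torch

def smoothness_loss(disp):
    """Equation (2): mean L1 norm of the dense displacement field gradient,
    approximated by forward differences along the z, y and x dimensions."""
    dz = (disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]).abs().mean()
    dy = (disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]).abs().mean()
    dx = (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx
```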
In some embodiments, although the smoothness loss prevents the predicted dense displacement field from being unreasonable, it limits the expressive ability of the displacement transformation, making complex image transformations difficult to handle; therefore, when the smoothness loss is included in the loss function, the image can be transformed multiple times.
Step S205, optimizing parameters of the image registration network model based on the difference.
By optimizing the parameters of the image registration network model, the image registration network model can be trained.
Step S206, acquiring a nodule image to be processed and a reference nodule image.
Step S207, inputting the nodule image to be processed and the reference nodule image into the image registration network model to obtain a candidate nodule image output by the image registration network model.
In the embodiments of the present disclosure, the dense displacement field is predicted by the image registration network. Because a dense displacement field can represent rich image transformations, more and more complex image registration cases can be solved than with affine transformation and similar approaches. In addition, applying the transformation multiple times at test time realizes a complex transformation step by step, so that the transformation result can be corrected gradually, achieving more accurate image registration.
In step S208, nodules in the candidate nodule image and nodules in the reference nodule image are determined, respectively.
In some embodiments, after the image registration step, a candidate nodule image, that is, an aligned nodule image, is obtained. The position of a nodule in the candidate nodule image may be calculated from the position of the nodule in the nodule image to be processed, giving the aligned nodule position. The positions of nodules in the nodule image to be processed and in the reference nodule image may be acquired by a nodule detection system.
In step S209, a bipartite graph is constructed based on the nodules in the candidate nodule image and the nodules in the reference nodule image.
The nodule matching problem can be formalized as a bipartite graph matching problem. Nodules detected in the same nodule image are distinct, independently existing nodules, so no connection relationship exists between nodules within the same image; the same nodule may, however, appear independently in several different nodule images, so connection relationships exist between nodules in different images. In general, nodules whose positions differ too greatly across images cannot be the same nodule, so the connection relationships between such nodules can be deleted by setting a distance threshold, preliminarily constructing the bipartite graph.
In some embodiments, connection relationships are established between the nodules in the candidate nodule image and the nodules in the reference nodule image, a distance threshold is set to delete the connection relationships between nodules whose positions differ too greatly, and the bipartite graph is preliminarily constructed. The connecting edges of the bipartite graph are determined based on the connection relationships between the nodules in the candidate nodule image and the nodules in the reference nodule image.
Step S210, determining nodes having a connection relationship in the bipartite graph as candidate nodule pairs.
Nodes having a connection relationship in the bipartite graph correspond to nodules having a connection relationship in the nodule images; two nodes having a connection relationship in the bipartite graph are determined as a group of candidate nodule pairs, which can also be understood as marking the two nodes connected by a connecting edge of the bipartite graph as a group of adjacent nodules.
In step S211, nodule pairs are determined based on the candidate nodule pairs.
After the bipartite graph connecting edges are determined, weights are assigned to the connecting edges for nodule matching. Usually, after locating a nodule, an imaging physician further compares nodule features, such as nodule morphology, to determine the nodule matching relationship.
In the embodiment of the disclosure, a method for performing nodule matching by combining nodule distance and image features between nodules is provided.
In some embodiments, the specific implementation process for determining a nodule pair based on the candidate nodule pairs may include at least:
and step A, determining the image characteristic difference between two nodules included in the candidate nodule pair.
The difference in image features between two nodules included in a candidate nodule pair is mainly composed of a volume difference between the two nodules included in the candidate nodule pair, a nodule type difference between the two nodules included in the candidate nodule pair, and a feature vector difference between the two nodules included in the candidate nodule pair.
If the two nodules included in the candidate nodule pair are respectively designated as an A nodule and a B nodule, the differences in image features between the A nodule and the B nodule are shown in the following equations (3) to (5):

$$L = \frac{\mathrm{abs}(V_A - V_B)}{V_{max}} \qquad (3)$$

where $L$ denotes the volume difference between the A and B nodules, $\mathrm{abs}$ denotes the absolute value function, $V_A$ the volume of the A nodule, $V_B$ the volume of the B nodule, and $V_{max}$ the larger of the two volumes.

$$M = D(x_A, x_B) \qquad (4)$$

where $M$ denotes the nodule type difference between the A and B nodules, $D$ the Hamming distance between $x_A$ and $x_B$, $x_A$ the type code of the A nodule, and $x_B$ the type code of the B nodule. The Hamming distance is the number of bits equal to 1 after a bitwise exclusive-or of the two (binary) nodule type codes.

$$N = 1 - \cos(y_A, y_B) \qquad (5)$$

where $N$ denotes the feature vector difference between the A and B nodules, $y_A$ the feature vector of the A nodule, and $y_B$ the feature vector of the B nodule; the cosine similarity between the feature vectors of the A and B nodules is calculated using the Euclidean dot product formula.
The difference in image features between the two nodules included in a candidate nodule pair is a weighted average of the volume difference, the nodule type difference, and the feature vector difference between the two nodules. With equal weights, the image feature difference equals the sum of the volume difference, the nodule type difference, and the feature vector difference divided by three, i.e. (L + M + N) / 3.
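A minimal sketch of equations (3) to (5) and their average; representing the nodule type code as an integer bit mask is an assumption of this sketch:

```python
import numpy as np

def image_feature_difference(vol_a, vol_b, code_a, code_b, feat_a, feat_b):
    """Average of the volume, nodule type and feature vector differences."""
    L = abs(vol_a - vol_b) / max(vol_a, vol_b)           # equation (3)
    M = bin(code_a ^ code_b).count("1")                  # equation (4): Hamming distance
    cos = np.dot(feat_a, feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    N = 1.0 - cos                                        # equation (5)
    return (L + M + N) / 3.0
```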
Step B, determining the distance difference between the two nodules included in the candidate nodule pair.
The ratio of the distance between two nodules included in the candidate nodule pair and the distance threshold is determined as the difference in distance between two nodules included in the candidate nodule pair, i.e., the numerical normalization process is performed on the distance between two nodules included in the candidate nodule pair.
Wherein the difference in distance between two nodules included in a pair of nodule candidates is equal to a ratio of the distance between two nodules included in a pair of nodule candidates to a distance threshold.
Step C, determining the average of the image feature difference between the two nodules included in the candidate nodule pair and the distance difference between the two nodules included in the candidate nodule pair as the difference between the two nodules included in the candidate nodule pair.
Wherein the difference between two nodules included in the candidate nodule pair is equal to the average of the sum of the difference in image features between the two nodules included in the candidate nodule pair and the difference in distance between the two nodules included in the candidate nodule pair.
Step D, determining the weight of the connecting edge between the nodes corresponding to the two nodules included in the candidate nodule pair in the bipartite graph based on the difference between the two nodules included in the candidate nodule pair.
Step E, constructing a weighted bipartite graph based on the weights of the connecting edges, and determining the nodule pairs in the weighted bipartite graph whose differences meet preset conditions.
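Putting steps B to D together, a hedged one-line sketch of the connecting-edge weight (names illustrative):

```python
def edge_weight(feature_diff, distance, dist_thresh):
    """Average of the image feature difference and the threshold-normalised
    distance between the two nodules of a candidate pair (steps B to D)."""
    return (feature_diff + distance / dist_thresh) / 2.0
```

The resulting weights populate the matrix consumed by the matching sketch shown after step m above.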
In the embodiments of the present disclosure, a method is provided that performs nodule matching by combining the nodule distance with the image features between nodules. Matching by nodule distance alone can produce matching errors when several adjacent nodules are present locally in a nodule image; combining both further improves the accuracy of nodule matching.
Fig. 3 is a detailed alternative flowchart of a nodule matching method according to an embodiment of the present disclosure.
Referring to fig. 3, the detailed alternative flow of the nodule matching method according to the embodiment of the present disclosure is illustrated taking two groups of lung images as an example, and includes at least the following steps:
cancer is one of the high-grade diseases nowadays, and lung cancer is the cancer with the largest number of deaths worldwide. Medically, "prognosis" refers to empirically predicted disease progression, and because early lung cancer is well-predicted, early screening and diagnosis are the primary means of reducing lung cancer mortality. In the early screening and diagnosis process of lung cancer, imaging doctors usually use follow-up analysis to match lung nodules according to lung images of patients in different periods to observe changes of the lung nodules, evaluate changes of the lung nodules in shape, size and the like, and comprehensively formulate a proper treatment scheme. Follow-up usually refers to a method of observation by which a hospital communicates or otherwise makes regular observations about changes in the patient's condition and guides the patient to recovery for a patient who has been visiting the hospital.
The embodiments of the present disclosure effectively improve the work efficiency of imaging physicians by automatically matching lung nodules across lung images of a patient from different periods. Automatic lung nodule matching mainly comprises two steps: lung image registration and lung nodule matching.
Because of different image acquisition positions and differences caused by human respiration, even two groups of lung images of the same patient differ to varying degrees. Lung image registration aims to align the two differing groups of lung images so that the position difference of the same lung nodule across different lung images is very small while the position difference of different lung nodules is large; the position information of a lung nodule can therefore be used to judge whether two detections are the same lung nodule. The lung can be understood as a real organ composed of many elements; each element composing the lung organ has a correspondence across the two groups of lung images, and the lung image registration step seeks to establish this correspondence in the same coordinate system, thereby enabling more accurate lung nodule matching. It should be appreciated that the purpose of image registration is to align images; it is limited neither to lung images nor to two groups of images.
Step S301, two groups of sample lung images are obtained.
Specifically, one group of sample lung images is denoted the reference sample lung image I_f, also called the fixed image; the other group is denoted the sample lung image to be processed I_m, also called the moving image. Each group of images is a three-dimensional image.
Optionally, the two groups of sample lung images are two groups of lung images of the same patient at different times.
Step S302, two groups of sample lung images are input into an image registration network model.
Specifically, as shown in fig. 5, the reference sample lung image and the sample lung image to be processed are input to an image registration network, which may use any semantic segmentation network, and may be a full convolution network.
Alternatively, a Vnet semantic segmentation network is used and is noted as the image registration network F.
Step S303, a dense displacement field is determined based on the output of the image registration network model.
Specifically, the reference sample lung image and the sample lung image to be processed are stitched to form a two-channel image, the two-channel image is input to the semantic segmentation network, and the dense displacement field $\phi$ output by the semantic segmentation network is acquired, where the dense displacement field characterizes a pixel-by-pixel displacement transformation of the sample lung image to be processed with respect to the reference sample lung image in the three-dimensional spatial directions.
Step S304, processing the sample lung image to be processed by using the dense displacement field to obtain a candidate sample lung image.
Specifically, the sample lung image to be processed is transformed by the dense displacement field to obtain the candidate sample lung image, i.e. the transformed image $I_m \circ \phi$.

Specifically, a correspondence exists between the pixel position p' of the sample lung image to be processed and the pixel position p of the candidate sample lung image: given the displacement $\phi(p)$ of the dense displacement field at the same pixel position as the candidate sample lung image, the pixel position p' corresponding to the pixel position p of the candidate sample lung image can be calculated as

$$p' = p + \phi(p)$$

Accordingly, the pixel value at pixel position p of the candidate sample lung image should be the pixel value at pixel position p' of the sample lung image to be processed. Here $\phi(p)$ comprises displacements along the x, y and z dimensions, corresponding respectively to the three dimensions of the image.
Optionally, if the pixel position p' of the sample lung image to be processed is non-integer, the pixel value at that position cannot be read directly; in this case the pixel value at p' may be calculated by linear interpolation over the 8 adjacent integer pixel positions around p', as shown in the following equation (6):

(I_m∘φ)(p) = Σ_{q∈N(p')} I_m(q) · Π_{d∈{x,y,z}} (1 - |p'_d - q_d|)   (6)

wherein N(p') denotes the 8 integer pixel positions adjacent to p', and d ranges over the three dimensions of the image.
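For illustration, a minimal numpy/scipy sketch of this warping step is given below, assuming the dense displacement field is stored as an array phi of shape (3, D, H, W); scipy's map_coordinates with order=1 performs exactly this neighbor-weighted linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Resample `moving` (shape (D, H, W)) at positions p' = p + phi(p).

    phi has shape (3, D, H, W): per-voxel displacement along each axis.
    order=1 gives linear interpolation over the neighboring voxels, as
    in equation (6); positions outside the volume are clamped to the edge.
    """
    grid = np.indices(moving.shape).astype(np.float64)  # identity positions p
    coords = grid + phi                                  # sampling positions p'
    return map_coordinates(moving, coords, order=1, mode="nearest")
```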
in step S305, the difference between the candidate sample lung image and the reference sample lung image is determined.
In particular, during the training of the image registration network model, true labels for the dense displacement field may be lacking, so the similarity loss between the candidate sample lung image and the reference sample lung image is used as the loss function.
Alternatively, one implementation of the similarity loss is the mean square error loss, i.e. the mean of the squared differences of the pixel values at all corresponding pixel positions of the reference sample lung image and the candidate sample lung image; the smaller the mean square error loss, the more similar the reference sample lung image and the candidate sample lung image, as shown in the following equation (1):

L_sim = (1/|Ω|) · Σ_{p∈Ω} [ I_f(p) - (I_m∘φ)(p) ]²   (1)

wherein Ω denotes the set of all pixel positions of the image.
in the embodiment of the disclosure, the similarity loss can optimize the dense displacement field, so that the sample lung image to be processed is similar to the reference sample lung image after being transformed by the dense displacement field, thereby achieving the purpose of aligning the images.
In particular, the loss function also includes a smoothness loss, whose effect is to make the transformations of neighboring regions in the sample lung image to be processed similar, effectively preventing the predicted dense displacement field from being unreasonable. Take the change of the lung during exhalation as an example: as a person exhales, the diaphragm gradually relaxes and the lung gradually contracts. During this contraction, an element A constituting the lung and an element B in the neighboring region of element A should both be transformed in the direction of lung contraction. If element A were transformed in the direction of lung expansion while element B were transformed in the direction of lung contraction, the difference between their transformation directions would be too large, which is unreasonable. The smoothness loss effectively prevents such unreasonable situations, thereby improving the efficiency and accuracy of image registration.
Alternatively, one implementation of the smoothness loss is the L1 norm of the dense displacement field gradient, as shown in equation (2) below:

L_smooth = Σ_{p∈Ω} || ∇φ(p) ||_1   (2)

wherein ∇φ(p) = ( ∂φ(p)/∂x, ∂φ(p)/∂y, ∂φ(p)/∂z ). The partial derivative along the x direction can be approximated by the difference between adjacent pixel positions, ∂φ(p)/∂x ≈ φ(p_x + 1, p_y, p_z) - φ(p_x, p_y, p_z), and ∂φ(p)/∂y and ∂φ(p)/∂z can be calculated in the same way.
in the embodiment of the present disclosure, the dense displacement field gradient is a difference between pixel values at adjacent pixel positions of the dense displacement field, and the L1 norm of the dense displacement field gradient is increased by a loss, which may reflect the proximity of the displacement of the adjacent pixel positions of the dense displacement field.
Optionally, although the smoothness loss avoids unreasonable predicted dense displacement fields, it limits the expressive ability of the displacement transformation and makes complex image transformations difficult to handle; therefore, when the loss function includes the smoothness loss, the image may be transformed multiple times.
In particular, when the two sets of lung images differ greatly, the lung images can be transformed multiple times, aligning the two sets recursively; the complex transformation is realized gradually, step by step, aligning the two sets of lung images from coarse to fine and further improving the registration effect.
A specific implementation of transforming the two sets of lung images multiple times is shown in fig. 3, where K denotes the number of transformations. The two sets of lung images are used as the fixed image and the moving image respectively; the fixed image and the moving image are input into the image registration network to obtain a dense displacement field, and the transformed image is calculated from this dense displacement field. The transformed image is then taken as the new moving image and input into the image registration network together with the fixed image, a new dense displacement field is obtained, and a new transformed image is calculated from it. These steps are repeated until the number of transformations reaches K.
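For illustration, the recursive scheme of fig. 3 reduces to a short loop; register_net and warp below are hypothetical stand-ins for the trained registration network and the displacement-field warping sketched above:

```python
def register_recursively(fixed, moving, register_net, warp, K: int = 3):
    """Align `moving` to `fixed` in K coarse-to-fine passes.

    Each pass predicts a dense displacement field for the current moving
    image, then replaces the moving image with its warped result.
    """
    warped = moving
    fields = []
    for _ in range(K):
        phi = register_net(fixed, warped)  # new dense displacement field
        warped = warp(warped, phi)         # transformed image becomes the new moving image
        fields.append(phi)
    return warped, fields
```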
Step S306, parameters of the image registration network model are optimized based on the difference.
Specifically, the image registration network model is trained by optimizing its parameters.
Step S307, two sets of lung images are acquired.
Step S308, inputting the reference lung image and the to-be-processed lung image into the image registration network model to obtain a candidate lung image output by the image registration network model.
In the embodiment of the disclosure, the dense displacement field is predicted by a fully convolutional network. Because the dense displacement field can represent rich image transformations, more and more complex image registration cases can be handled than with affine transformation and similar methods. Moreover, multiple transformations are performed at test time, realizing the complex transformation gradually, step by step, so that the transformation result can be progressively corrected and more accurate image registration achieved.
After the two sets of lung images are registered, the corresponding relationship of nodules across the different lung images must be calculated to complete nodule matching. Taking two sets of lung images X and Y as an example, suppose 10 nodules exist in set X and 11 nodules exist in set Y; nodule matching then consists of finding the correspondence between the nodules of the two sets, i.e. determining which 10 of the 11 nodules in set Y correspond to the 10 nodules in set X.
Different from the prior art, which judges whether nodules are in a matching relationship only by calculating the nodule distance, the embodiment of the present disclosure provides a method that calculates the normalized cross-correlation coefficient between nodules in different lung images and judges the matching relationship according to that coefficient.
Step S309, a lung nodule in the reference lung image and a lung nodule in the to-be-processed lung image are respectively determined.
Specifically, the image registration step yields the candidate lung image, i.e. the aligned nodule image. The position of a lung nodule in the candidate lung image, i.e. the aligned nodule position, can be calculated from the position of that lung nodule in the lung image to be processed. The positions of lung nodules in the lung image to be processed and in the reference lung image may be obtained by a nodule detection system.
In step S310, a bipartite graph is constructed based on the nodules in the candidate lung image and the nodules in the reference lung image.
The nodule matching problem can be formulated as a bipartite graph matching problem. Nodules detected in the same lung image are distinct, independently existing nodules, so no connection relationship exists between nodules within the same lung image. Across different lung images, however, the same nodule may appear, in independently detected form, in several nodule images, so connection relationships exist between nodules in different lung images. In general, nodules whose positions differ too much between the two sets of lung images are not the same nodule, so the connection relationships between such nodules can be deleted by setting a distance threshold, and a bipartite graph is thus preliminarily constructed.
Specifically, connection relationships are established between nodules in the candidate lung image and nodules in the reference lung image, a distance threshold is set to delete the connection relationships between nodules whose positions differ too much, and a bipartite graph is preliminarily constructed. The connecting edges of the bipartite graph are determined based on the connection relationships between the nodules in the candidate lung image and the nodules in the reference lung image.
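For illustration, a sketch of this preliminary construction is given below, assuming nodule positions are given as 3D coordinates; the distance threshold value used here is a hypothetical example:

```python
import numpy as np

def candidate_pairs(aligned_pos, ref_pos, dist_thresh: float = 20.0):
    """Edges of the preliminary bipartite graph.

    aligned_pos: (n, 3) nodule positions in the candidate (aligned) image.
    ref_pos:     (m, 3) nodule positions in the reference image.
    Returns (i, j, distance) for every pair closer than dist_thresh;
    pairs at or beyond the threshold get no connecting edge.
    """
    pairs = []
    for i, a in enumerate(aligned_pos):
        for j, b in enumerate(ref_pos):
            d = float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
            if d < dist_thresh:
                pairs.append((i, j, d))
    return pairs
```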
In step S311, the nodes having the connection relationship in the bipartite graph are determined as candidate nodule pairs.
Specifically, the nodes having a connection relationship in the bipartite graph are nodules having a connection relationship in the lung image. Determining two nodes having a connection relationship in the bipartite graph as a set of candidate nodule pairs may also be understood as recording two nodules connected by a connecting edge of the bipartite graph as a set of adjacent nodules.
In step S312, a nodule pair is determined based on the nodule pair candidate.
After the connecting edges of the bipartite graph are determined, weights need to be assigned to them in order to perform nodule matching. Usually, after locating a nodule, the imaging physician further compares nodule features, such as nodule morphology, to determine the matching relationship of the nodules.
Specifically, the specific implementation process for determining the nodule pair based on the nodule pair candidate at least includes:
Step F, determining the image feature difference between the two nodules included in the candidate nodule pair.
Specifically, the difference in image features between two nodules included in a nodule candidate pair is mainly composed of a volume difference between two nodules included in the nodule candidate pair, a nodule type difference between two nodules included in the nodule candidate pair, and a feature vector difference between two nodules included in the nodule candidate pair.
As shown in fig. 6, the lung nodule volumes are obtained by a lung nodule segmentation system, and the relative difference of the volumes is calculated to obtain the volume difference between the two nodules included in the candidate nodule pair. The nodules are classified by a lung nodule type classification system into four categories (ground glass, mixed density, solid and calcified), encoded respectively as 000, 001, 011 and 111, and the Hamming distance between the nodule type codes is calculated to obtain the nodule type difference between the two nodules included in the candidate nodule pair. The feature vectors of the lung nodules are obtained through the lung nodule type classification system, the cosine similarity of the feature vectors is calculated, and the feature vector difference between the two nodules included in the candidate nodule pair is obtained from the cosine similarity.
Specifically, the two nodules included in the candidate nodule pair are respectively denoted as the A nodule and the B nodule, and the image feature differences between the A nodule and the B nodule are shown in the following formulas (3) to (5):
L = abs(V_A - V_B) / V_max   (3)

wherein L represents the volume difference between the A nodule and the B nodule, abs represents the absolute value function, V_A denotes the volume of the A nodule, V_B denotes the volume of the B nodule, and V_max denotes the larger of the A nodule volume and the B nodule volume.
M = D(x_A, x_B)   (4)

wherein M represents the nodule type difference between the A nodule and the B nodule, D represents the Hamming distance between x_A and x_B, x_A represents the type code of the A nodule, and x_B represents the type code of the B nodule. The Hamming distance is obtained by performing a bitwise exclusive-or operation on the two (binary) nodule type codes and counting the number of 1 bits in the result.
N = 1 - cos(y_A, y_B)   (5)

wherein N represents the feature vector difference between the A nodule and the B nodule, y_A represents the feature vector of the A nodule, y_B represents the feature vector of the B nodule, and the cosine similarity cos(y_A, y_B) between the two feature vectors is calculated using the Euclidean dot product formula.
In particular, the difference in image features between two nodules included in a nodule candidate pair is a weighted average of the difference in volume between the two nodules included in the nodule candidate pair, the difference in nodule type between the two nodules included in the nodule candidate pair, and the difference in feature vectors between the two nodules included in the nodule candidate pair.
Wherein the image feature difference between the two nodules included in the candidate nodule pair is equal to the sum of the volume difference between the two nodules, one third of the nodule type difference between the two nodules, and the feature vector difference between the two nodules, divided by three, i.e. (L + M/3 + N)/3.
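For illustration, a sketch combining equations (3) to (5) is given below; the 3-bit type codes follow the ground glass/mixed/solid/calcified encoding above, and the equal weighting (L + M/3 + N)/3 is one reading of the combination just described:

```python
import numpy as np

# 3-bit type codes from the classification step above
TYPE_CODE = {"ground_glass": 0b000, "mixed": 0b001, "solid": 0b011, "calcified": 0b111}

def volume_diff(v_a: float, v_b: float) -> float:
    """Equation (3): relative volume difference."""
    return abs(v_a - v_b) / max(v_a, v_b)

def type_diff(code_a: int, code_b: int) -> int:
    """Equation (4): Hamming distance, i.e. popcount of the bitwise XOR."""
    return bin(code_a ^ code_b).count("1")

def feature_diff(y_a: np.ndarray, y_b: np.ndarray) -> float:
    """Equation (5): one minus the cosine similarity of the feature vectors."""
    cos = float(np.dot(y_a, y_b) / (np.linalg.norm(y_a) * np.linalg.norm(y_b)))
    return 1.0 - cos

def image_feature_diff(v_a, v_b, code_a, code_b, y_a, y_b) -> float:
    """Combined difference (L + M/3 + N) / 3; M/3 normalizes the 3-bit code."""
    L = volume_diff(v_a, v_b)
    M = type_diff(code_a, code_b)
    N = feature_diff(y_a, y_b)
    return (L + M / 3.0 + N) / 3.0
```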
Step G, determining the distance difference between the two nodules included in the candidate nodule pair.
Specifically, the ratio of the distance between the two nodules included in the candidate nodule pair to the distance threshold is determined as the distance difference between the two nodules; that is, the distance between the two nodules is numerically normalized.
Wherein the difference in distance between two nodules included in a pair of nodule candidates is equal to a ratio of the distance between two nodules included in a pair of nodule candidates to a distance threshold.
Step H, determining the average of the image feature difference between the two nodules included in the candidate nodule pair and the distance difference between the two nodules included in the candidate nodule pair as the difference between the two nodules included in the candidate nodule pair.
In particular, the difference between two nodules included in a nodule pair candidate is equal to the average of the sum of the difference in image features between the two nodules included in the nodule pair candidate and the difference in distance between the two nodules included in the nodule pair candidate.
Step I, determining the weight of the connecting edge in the bipartite graph between the nodes corresponding to the two nodules included in the candidate nodule pair, based on the difference between the two nodules included in the candidate nodule pair.
Step J, constructing a weighted bipartite graph based on the weights of the connecting edges, and determining the nodule pairs in the weighted bipartite graph whose differences satisfy a preset condition.
Specifically, the minimum-weight bipartite graph matching is solved to obtain the final nodule matching result.
As shown in fig. 7, the lung nodule matching problem in the lung images is converted into a weighted bipartite graph matching problem. Nodes with a connection relationship in the bipartite graph are lung nodules with a connection relationship in the lung images; the connection relationships between nodes are recorded as the connecting edges of the bipartite graph. A distance threshold is then set, and the connecting edges corresponding to nodes whose positions differ too much are deleted. Finally, based on the differences between the nodules corresponding to the nodes of the bipartite graph, weights are assigned to the connecting edges, and the minimum-weight bipartite graph matching is solved to obtain the final nodule matching result.
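For illustration, a sketch of the final matching step is given below; scipy's linear_sum_assignment solves the minimum-weight bipartite matching on a cost matrix in which edges pruned by the distance threshold receive a prohibitive cost, and edge_weights is assumed to hold the per-pair differences from steps F to H:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodules(edge_weights: dict, n: int, m: int, forbidden: float = 1e9):
    """Minimum-weight bipartite matching over the candidate nodule pairs.

    edge_weights maps (i, j) -> difference for pairs that survived the
    distance threshold; all other pairs are given a prohibitive cost so
    they are never selected. Returns the matched (i, j) index pairs.
    """
    cost = np.full((n, m), forbidden)
    for (i, j), w in edge_weights.items():
        cost[i, j] = w
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < forbidden]
```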
In the embodiment of the disclosure, the nodule matching problem is converted into a weighted bipartite graph matching problem: nodule image features are extracted with a deep neural network and combined with the nodule distance to obtain the similarity between nodules that have a connection relationship in the bipartite graph. Using the nodule distance and the nodule image features together alleviates the matching errors that can occur when matching is performed by nodule distance alone in regions of the lung image containing several adjacent nodules, thereby further improving the accuracy of nodule matching.
Fig. 8 is a schematic diagram illustrating a structure of a nodule matching apparatus according to an embodiment.
Referring to fig. 8, an embodiment of a nodule matching apparatus, nodule matching apparatus 40 includes: an obtaining module 401, configured to obtain a nodule image to be processed and a reference nodule image; an input module 402, configured to input a nodule image to be processed and a reference nodule image into an image registration network model, so as to obtain a candidate nodule image output by the image registration network model; a determining module 403 for determining a nodule in the candidate nodule image and a nodule in the reference nodule image respectively; a nodule pair formed by the nodule candidate image and the reference nodule image is determined based on a relationship between a nodule in the nodule candidate image and a nodule in the reference nodule image.
In some embodiments, the input module 402 is further configured to input the sample nodule image to be processed and the reference sample nodule image into an image registration network model; a determining module 403, further configured to determine a dense displacement field based on an output of the image registration network model; the nodule matching apparatus 40 further comprises: a processing module 404, configured to process the sample nodule image to be processed by using the dense displacement field to obtain a candidate sample nodule image; a determining module 403, further configured to determine a difference between the candidate sample nodule image and the reference sample nodule image; the nodule matching apparatus 40 further comprises: an optimization module 405 for optimizing parameters of the image registration network model based on the differences.
In some embodiments, the position of the first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of the first pixel in the dense displacement field.
In some embodiments, the determining module 403 is specifically configured to determine an average of squared differences of pixel values at all corresponding same pixel positions between the reference sample nodule image and the candidate sample nodule image.
In some embodiments, the determining module 403 is specifically configured to construct a bipartite graph based on nodules in the nodule candidate image and nodules in the reference nodule image; determining nodes with connection relation in the bipartite graph as candidate nodule pairs; a nodule pair is determined based on the nodule pair candidates.
In some embodiments, the determining module 403 is specifically configured to determine a difference in image features between two nodules included in the candidate nodule pair; determining a distance difference between two nodules comprised by the candidate nodule pair; determining an average value of the difference in image features between two nodules included in the candidate nodule pair and the difference in distance between two nodules included in the candidate nodule pair as a difference between two nodules included in the candidate nodule pair; determining a weight of a connecting edge between nodes corresponding to two nodules included in a candidate nodule pair in the bipartite graph based on a difference between the two nodules included in the candidate nodule pair; and constructing a weighted bipartite graph based on the weight, and determining the nodule pairs in the weighted bipartite graph, wherein the differences of the nodule pairs meet preset conditions.
In some embodiments, the determining module 403 is specifically configured to determine a weighted average of a volume difference between two nodules included in the nodule pair candidate, a nodule type difference between two nodules included in the nodule pair candidate, and a feature vector difference between two nodules included in the nodule pair candidate as an image feature difference between two nodules included in the nodule pair candidate; determining a relative difference in volume between two nodules included in a candidate nodule pair as a volume difference between the two nodules included in the candidate nodule pair; determining a nodule type difference between two nodules comprised by the candidate nodule pair based on a hamming distance encoded by the types of the two nodules comprised by the candidate nodule pair; feature vector differences between two nodules included in a candidate nodule pair are determined based on cosine similarities of feature vectors of the two nodules included in the candidate nodule pair.
In some embodiments, the determining module 403 is specifically configured to determine a ratio of a distance between two nodules included in a nodule candidate pair to a distance threshold as a difference in distance between the two nodules included in the nodule candidate pair.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 9 shows a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable electronic devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the electronic device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other electronic devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the respective methods and processes described above, such as the nodule matching method. For example, in some embodiments, the nodule matching method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the nodule matching method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the nodule matching method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (18)

1. A nodule matching method, comprising:
acquiring a nodule image to be processed and a reference nodule image;
inputting the nodule image to be processed and the reference nodule image into an image registration network model to obtain candidate nodule images output by the image registration network model;
determining a nodule in the nodule candidate image and a nodule in the reference nodule image, respectively;
determining a nodule pair formed by the nodule candidate image and the reference nodule image based on a relationship between a nodule in the nodule candidate image and a nodule in the reference nodule image.
2. The method of claim 1, further comprising:
inputting a sample nodule image to be processed and a reference sample nodule image into the image registration network model;
determining a dense displacement field based on an output of the image registration network model;
processing the sample nodule image to be processed by using the dense displacement field to obtain a candidate sample nodule image;
determining a difference between the candidate sample nodule image and the reference sample nodule image;
optimizing parameters of the image registration network model based on the differences.
3. The method of claim 2, wherein the position of a first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of the first pixel in the dense displacement field.
4. The method of claim 2, wherein the determining the difference between the candidate sample nodule image and the reference sample nodule image comprises:
determining an average of squared differences of pixel values at all corresponding same pixel locations between the reference sample nodule image and the candidate sample nodule image.
5. The method according to claim 1, wherein determining a nodule pair formed by the nodule candidate image and the reference nodule image based on a relationship between a nodule in the nodule candidate image and a nodule in the reference nodule image comprises:
constructing a bipartite graph based on nodules in the candidate nodule image and nodules in the reference nodule image;
determining nodes with connection relations in the bipartite graph as candidate nodule pairs;
determining the nodule pair based on the nodule pair candidate.
6. The method of claim 5, wherein said determining the nodule pair based on the nodule pair candidate comprises:
determining a difference in image features between two nodules included in the candidate nodule pair;
determining a difference in distance between two nodules comprised by the pair of candidate nodules;
determining an average of the difference in image features between two nodules included in the pair of nodule candidates and the difference in distance between two nodules included in the pair of nodule candidates as the difference between two nodules included in the pair of nodule candidates;
determining a weight of a connecting edge between nodes corresponding to two nodules included in the candidate nodule pair in the bipartite graph based on a difference between the two nodules included in the candidate nodule pair;
and constructing a weighted bipartite graph based on the weight, and determining the nodule pairs in the weighted bipartite graph, wherein the differences of the nodule pairs meet preset conditions.
7. The method of claim 6, wherein said determining a difference in image features between two nodules comprised in said candidate nodule pair comprises:
determining a weighted average of a volume difference between two nodules included in the pair of nodule candidates, a nodule type difference between two nodules included in the pair of nodule candidates, and a feature vector difference between two nodules included in the pair of nodule candidates as an image feature difference between two nodules included in the pair of nodule candidates;
determining a relative difference in volume of two nodules included in the pair of nodule candidates as a difference in volume between the two nodules included in the pair of nodule candidates;
determining a nodule type difference between two nodules comprised by the pair of nodule candidates based on a hamming distance encoded by the types of the two nodules comprised by the pair of nodule candidates;
determining a feature vector difference between two nodules included in the nodule candidate pair based on a cosine similarity of feature vectors of the two nodules included in the nodule candidate pair.
8. The method of claim 6, wherein said determining a difference in distance between two nodules comprised in said candidate nodule pair comprises:
determining a ratio of a distance between two nodules included in the pair of nodule candidates to a distance threshold as a difference in distance between the two nodules included in the pair of nodule candidates.
9. A nodule matching apparatus, the apparatus comprising:
the acquisition module is used for acquiring a nodule image to be processed and a reference nodule image;
the input module is used for inputting the nodule image to be processed and the reference nodule image into an image registration network model to obtain a candidate nodule image output by the image registration network model;
a determination module for determining a nodule in the nodule candidate image and a nodule in the reference nodule image, respectively; determining a nodule pair formed by the nodule candidate image and the reference nodule image based on a relationship between a nodule in the nodule candidate image and a nodule in the reference nodule image.
10. The apparatus according to claim 9, wherein the input module is further configured to input a sample nodule image to be processed and a reference sample nodule image into the image registration network model;
the determination module is further configured to determine a dense displacement field based on an output of the image registration network model;
the device further comprises: the processing module is used for processing the sample nodule image to be processed by utilizing the dense displacement field to obtain a candidate sample nodule image;
the determination module is further configured to determine a difference between the candidate sample nodule image and the reference sample nodule image;
the device further comprises: an optimization module to optimize parameters of the image registration network model based on the differences.
11. The apparatus according to claim 10, wherein the position of a first pixel in the sample nodule image to be processed is equal to the sum of the position of the first pixel in the candidate sample nodule image and the displacement of the first pixel in the dense displacement field.
12. The apparatus according to claim 10, wherein the determining module is specifically configured to determine an average of squared differences of pixel values at all corresponding same pixel positions between the reference sample nodule image and the candidate sample nodule image.
13. The apparatus according to claim 9, wherein the determining module is specifically configured to construct a bipartite graph based on nodules in the nodule candidate image and nodules in the reference nodule image; determining nodes with connection relations in the bipartite graph as candidate nodule pairs; determining the nodule pair based on the nodule pair candidate.
14. The apparatus according to claim 13, wherein the determining module is specifically configured to determine a difference in image features between two nodules included in the candidate nodule pair; determining a difference in distance between two nodules comprised by the pair of candidate nodules; determining an average of the difference in image features between two nodules included in the pair of nodule candidates and the difference in distance between two nodules included in the pair of nodule candidates as the difference between two nodules included in the pair of nodule candidates; determining a weight of a connecting edge between nodes corresponding to two nodules included in the candidate nodule pair in the bipartite graph based on a difference between the two nodules included in the candidate nodule pair; and constructing a weighted bipartite graph based on the weight, and determining the nodule pairs in the weighted bipartite graph, wherein the differences of the nodule pairs meet preset conditions.
15. The apparatus according to claim 14, wherein the determining module is specifically configured to determine a weighted average of a volume difference between two nodules comprised by the nodule pair, a nodule type difference between two nodules comprised by the nodule pair, and a feature vector difference between two nodules comprised by the nodule pair as an image feature difference between two nodules comprised by the nodule pair; determining a relative difference in volume between two nodules comprised by the pair of nodule candidates as a difference in volume between the two nodules comprised by the pair of nodule candidates; determining a nodule type difference between two nodules comprised by the pair of nodule candidates based on a hamming distance encoded by the types of the two nodules comprised by the pair of nodule candidates; determining a feature vector difference between two nodules included in the nodule candidate pair based on a cosine similarity of feature vectors of the two nodules included in the nodule candidate pair.
16. The apparatus according to claim 14, wherein the determining module is specifically configured to determine a ratio of a distance between two nodules included in the pair of nodule candidates and a distance threshold as a difference in distance between two nodules included in the pair of nodule candidates.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the nodule matching method of any of claims 1-8.
18. A non-transitory computer readable storage medium having computer instructions stored thereon for causing a computer to perform the nodule matching method according to any one of claims 1-8.
CN202210332611.0A 2022-03-30 2022-03-30 Nodule matching method and device, electronic equipment and storage medium Active CN114693642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210332611.0A CN114693642B (en) 2022-03-30 2022-03-30 Nodule matching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210332611.0A CN114693642B (en) 2022-03-30 2022-03-30 Nodule matching method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114693642A true CN114693642A (en) 2022-07-01
CN114693642B CN114693642B (en) 2023-03-24

Family

ID=82141335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210332611.0A Active CN114693642B (en) 2022-03-30 2022-03-30 Nodule matching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114693642B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088473A (en) * 1998-02-23 2000-07-11 Arch Development Corporation Method and computer readable medium for automated analysis of chest radiograph images using histograms of edge gradients for false positive reduction in lung nodule detection
CN109741379A (en) * 2018-12-19 2019-05-10 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN113112609A (en) * 2021-03-15 2021-07-13 同济大学 Navigation method and system for lung biopsy bronchoscope

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309449A (en) * 2023-03-14 2023-06-23 北京医准智能科技有限公司 Image processing method, device, equipment and storage medium
CN116309449B (en) * 2023-03-14 2024-04-09 浙江医准智能科技有限公司 Image processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114693642B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
JP6584478B2 (en) Method and apparatus for improved segmentation and recognition of images
Tang et al. A multi-stage framework with context information fusion structure for skin lesion segmentation
CN110838125B (en) Target detection method, device, equipment and storage medium for medical image
CN112767329A (en) Image processing method and device and electronic equipment
US20230052133A1 (en) Medical image processing method and apparatus, device, storage medium, and product
CN112396605B (en) Network training method and device, image recognition method and electronic equipment
US11684333B2 (en) Medical image analyzing system and method thereof
CN111798424B (en) Medical image-based nodule detection method and device and electronic equipment
US11830187B2 (en) Automatic condition diagnosis using a segmentation-guided framework
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
TW202347180A (en) Training method of image processing model
CN113362314A (en) Medical image recognition method, recognition model training method and device
CN111369623A (en) Lung CT image identification method based on deep learning 3D target detection
CN114693642B (en) Nodule matching method and device, electronic equipment and storage medium
CN113782181A (en) CT image-based lung nodule benign and malignant diagnosis method and device
CN111192320A (en) Position information determining method, device, equipment and storage medium
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
US11521323B2 (en) Systems and methods for generating bullseye plots
CN111209946B (en) Three-dimensional image processing method, image processing model training method and medium
CN115170510B (en) Focus detection method and device, electronic equipment and readable storage medium
CN116245832B (en) Image processing method, device, equipment and storage medium
US20240005507A1 (en) Image processing method, apparatus and non-transitory computer readable medium for performing image processing
CN114187252B (en) Image processing method and device, and method and device for adjusting detection frame
CN110399907A (en) Thoracic cavity illness detection method and device, storage medium based on induction attention
US11875898B2 (en) Automatic condition diagnosis using an attention-guided framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Patentee after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Address before: No. 1202-1203, 12 / F, block a, Zhizhen building, No. 7, Zhichun Road, Haidian District, Beijing 100083

Patentee before: Beijing Yizhun Intelligent Technology Co.,Ltd.