CN114387219A - Method, device, medium and equipment for detecting fundus arteriovenous cross-compression features - Google Patents
- Publication number
- CN114387219A (application CN202111552276.7A)
- Authority
- CN
- China
- Prior art keywords
- target detection
- arteriovenous
- compression
- fundus
- cross
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
An embodiment of the invention provides a method, a device, a medium, and equipment for detecting fundus arteriovenous cross-compression features, comprising the following steps: acquiring a fundus image and resizing it; inputting the resized fundus image into a pre-trained deep-learning target detection model, generating a plurality of target detection boxes on the fundus image, and obtaining, for each target detection box, the probability value that it contains an arteriovenous cross-compression feature; and determining the arteriovenous cross-compression features according to those probability values. The invention can detect arteriovenous cross-compression features of the retinal fundus quickly, accurately, and intuitively.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method, a device, a medium, and equipment for detecting fundus arteriovenous cross-compression features.
Background
Arteriovenous (AV) nicking is a phenomenon in which a vein is compressed by a hardened artery at an arteriovenous crossing, typically as a result of elevated blood pressure (hypertension). In retinal color photographs, arteriovenous cross-compression appears as a narrowing of the venous caliber on both sides of an arteriovenous (AV) crossing. The fundus arteriovenous cross-compression sign reflects not only a person's current blood pressure but also past blood pressure, serving as a marker of the lasting, long-term influence of hypertension; identifying and detecting it is therefore very important for the early detection of cardiovascular disease.
When the prior art performs arteriovenous cross-compression feature detection on a fundus image, the process requires complete cross-sectional and vascular-structure information of the retinal vessels, and the classification features are distinct only when the resolution is high, the imaging quality is good, and the vessel segments in the region of interest (ROI) are clear. In practice, however, the imaging quality and information content of fundus images vary, lesions (fundus hemorrhage, exudation, etc.) may be present in the ROI, and complex vascular structures (strong vessel curvature, severe vessel crossing, parallel overlap, etc.) directly degrade the accuracy of vessel-information extraction and of the classification feature parameters.
Meanwhile, fundus image quality varies greatly with the capture device and the capture operation, and because each person's fundus structure differs, arteriovenous vessel segmentation with existing algorithms (such as unsupervised k-means) has poor robustness; inaccurate arteriovenous vessel information directly degrades the accuracy of the arteriovenous cross-compression detection result. In addition, the gray values of arteries and veins in fundus images are close, so the two vessel types are hard to distinguish visually, which reduces the accuracy of the arteriovenous vessel segmentation result and, in turn, the accuracy of the cross-compression feature detection result.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method, a device, a medium, and equipment for detecting fundus arteriovenous cross-compression features, so as to determine retinal fundus arteriovenous cross-compression features quickly and accurately.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a method for detecting a fundus arteriovenous cross-compression feature, which specifically includes:
acquiring a fundus image, and carrying out size conversion on the fundus image;
inputting the resized fundus image into a pre-trained deep-learning target detection model, and generating, on the fundus image, a plurality of target detection boxes together with a probability value of the arteriovenous cross-compression feature for each target detection box;
and determining, according to the probability value of the arteriovenous cross-compression feature, whether each target detection box contains the arteriovenous cross-compression feature.
In some possible embodiments, before the acquiring the fundus image, the method further includes:
acquiring a plurality of fundus training images;
labeling the plurality of fundus training images to obtain training samples;
carrying out size transformation on the training samples to obtain a plurality of training samples with the same size;
and inputting the training samples with the same size into a deep learning target detection network for training to obtain the target detection model.
In some possible embodiments, the labeling the plurality of fundus training images to obtain a training sample specifically includes:
marking a rectangular frame at the position of the fundus training image with arteriovenous cross compression characteristics to obtain a plurality of sample detection frames;
determining a first vertex and a second vertex on any diagonal line of the sample detection frame, and acquiring a plurality of sample detection points;
obtaining the training sample according to the plurality of sample detection boxes and the plurality of sample detection points.
In some possible embodiments, the inputting the training sample into a deep learning target detection network for training to obtain the target detection model specifically includes:
step S1: inputting the training sample into a deep learning target detection network for convolution to obtain characteristic graphs of different scales;
step S2: on the feature maps with different scales, a plurality of target preselection frames with different length-width ratios are generated by taking the central points of different unit grids as centers;
step S3: determining a plurality of target detection frames by the plurality of target preselection frames according to a preset intersection ratio, determining a first vertex and a second vertex on a diagonal line of the plurality of target detection frames to obtain a plurality of target detection points, and calculating coordinate offset values of coordinates of the plurality of target detection points relative to the plurality of sample detection points;
the above steps S1 to S3 are iterated continuously until the coordinate offset value is reduced to or below a preset offset value, so as to obtain the target detection model.
In some possible embodiments, the inputting the fundus image after size conversion into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross-compression features corresponding to the target detection frames on the fundus image specifically include:
inputting the resized fundus image into the target detection model for convolution to obtain a plurality of feature maps at different scales;
generating, on the feature maps of different scales, a plurality of target preselection boxes of different shapes centered on the center points of different cells;
and merging the target preselection boxes across the feature maps of different scales, and selecting the plurality of target detection boxes by a non-maximum suppression method.
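The selection in the last step can be sketched as greedy non-maximum suppression over score-sorted boxes. This is an illustrative sketch rather than the patent's code; boxes are assumed to be (x_min, y_min, x_max, y_max) tuples and the 0.5 overlap threshold is an assumption.

```python
def iou(a, b):
    """Intersection over Union of two boxes (x_min, y_min, x_max, y_max)."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it too much, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```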
In some possible embodiments, the method further comprises: obtaining, according to the deep-learning target detection model, the change in vein shape, the trend of the vein caliber value, and/or the trend of the vein curvature value, and determining from these the probability value that the target detection box contains the arteriovenous cross-compression feature.
In some possible embodiments, the determining whether the target detection frame includes the arteriovenous cross-compression feature according to the arteriovenous cross-compression probability value specifically includes:
and when the probability value of arteriovenous cross-compression for a target detection box is greater than or equal to a preset probability threshold, judging that the target detection box contains the arteriovenous cross-compression feature, the crossing position contained in that box being an arteriovenous cross-compression.
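The threshold rule above amounts to a simple filter. A minimal sketch follows; the detection tuple layout and the 0.5 default threshold are assumptions, not values from the patent.

```python
def select_compression_boxes(detections, prob_threshold=0.5):
    """Keep only the detection boxes whose arteriovenous cross-compression
    probability is greater than or equal to the preset threshold.
    Each detection is a (box, probability) pair."""
    return [(box, p) for box, p in detections if p >= prob_threshold]
```

Each crossing position inside a surviving box would then be reported as an arteriovenous cross-compression.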
In some possible embodiments, after the detecting the arteriovenous cross-compression feature in the target detection frame according to the probability value of the arteriovenous cross-compression, the method further includes:
and deleting incorrectly extracted arteriovenous cross-compression features and their corresponding target detection boxes, in combination with the nerve fiber layer on the fundus image and the age information of the patient to whom the fundus image belongs.
In a second aspect, the present invention provides a device for detecting fundus arteriovenous cross-compression features, comprising:
the acquisition module is used for acquiring a fundus image and carrying out size conversion on the fundus image;
the target generation module is used for inputting the fundus images with the converted sizes into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression corresponding to the target detection frames on the fundus images;
and the characteristic detection module is used for determining whether the target detection frame contains arteriovenous cross compression characteristics or not according to the arteriovenous cross compression probability value.
In a third aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements any one of the above-mentioned methods for detecting a cross-arteriovenous compression characteristic.
In a fourth aspect, the present invention provides an apparatus for detecting an arteriovenous cross-compression characteristic, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the above-described methods for detecting a fundus arteriovenous cross-compression feature.
The technical scheme has the following beneficial effects:
according to the embodiment of the invention, the fundus images with the converted sizes are input into a pre-trained target detection model based on deep learning, and a plurality of target detection frames and the probability values of arteriovenous cross compression corresponding to the target detection frames are generated on the fundus images; and determining whether the target detection frame contains arteriovenous cross compression characteristics or not according to the probability value of the arteriovenous cross compression, so as to rapidly, accurately and intuitively determine the fundus retina blood vessel cross compression characteristics.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of a method for detecting fundus arteriovenous cross-compression features according to an embodiment of the present invention;
FIG. 2A is a schematic diagram of an embodiment of obtaining training samples;
FIG. 2B is a schematic illustration of the intersection-over-union calculation according to an embodiment of the invention;
FIG. 2C is a schematic diagram of an embodiment of obtaining a target detection model;
FIG. 2D is a schematic diagram of a target preselection box of different aspect ratios generated on different scale feature maps in accordance with an embodiment of the invention;
FIG. 3A is a schematic diagram illustrating a probability value of arteriovenous cross-compression according to an embodiment of the present invention;
FIG. 3B is a schematic diagram illustrating the morphological changes of the venous vessel in the arteriovenous cross-compression phenomenon according to an embodiment of the present invention;
fig. 4 is a functional block diagram of a device for detecting fundus arteriovenous cross-compression according to an embodiment of the present invention;
FIG. 5 is a functional block diagram of a computer-readable storage medium of an embodiment of the present invention;
fig. 6 is a functional block diagram of equipment for detecting fundus arteriovenous cross-compression according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the present invention.
Example one
Fig. 1 is a flowchart of a method for detecting fundus arteriovenous cross-compression features according to an embodiment of the present invention. As shown in fig. 1, the method comprises the following steps:
s110: acquiring a fundus image, and performing size conversion on the fundus image.
Specifically, in this step, after performing size conversion on the fundus image, preprocessing may be performed on the fundus image, and specifically, the method may include the following steps:
s101: extracting a region of interest (ROI) from the fundus image, separating the channels of the fundus image, and selecting any one channel image, which may be the R, G, or B channel, or any one of the H, I, and S channels;
s102: performing threshold segmentation on the selected channel according to one or more features such as gray-level mean, deviation, and gradient, and filling holes in the region of interest obtained after thresholding so that the resulting ROI image is a continuous region;
s103: removing isolated dots, burrs, bridges, and similar features through morphological operations (such as opening, i.e., erosion followed by dilation) and morphological feature extraction;
s104: and enhancing the region-of-interest image to increase the contrast between the features of interest and the background, making the ROI features more prominent.
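Steps S101 to S104 can be sketched with plain NumPy as follows. The channel choice, the mean-plus-deviation threshold rule, and the 3x3 structuring element are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def extract_green_channel(img_rgb):
    """Step S101: select one channel (the G channel is a common choice for fundus vessels)."""
    return img_rgb[..., 1].astype(np.float64)

def threshold_roi(channel, k=0.5):
    """Step S102: threshold using the gray-level mean and deviation of the channel."""
    t = channel.mean() + k * channel.std()
    return (channel > t).astype(np.uint8)

def binary_open(mask):
    """Step S103: 3x3 opening (erosion then dilation) removes isolated dots and burrs."""
    def shift_stack(m):
        padded = np.pad(m, 1, mode="edge")
        return np.stack([padded[i:i + m.shape[0], j:j + m.shape[1]]
                         for i in range(3) for j in range(3)])
    eroded = shift_stack(mask).min(axis=0)
    return shift_stack(eroded).max(axis=0)

def stretch_contrast(channel):
    """Step S104: linear contrast stretch to make ROI features more prominent."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-9)
```

Opening a mask containing a solid 3x3 block plus one stray pixel keeps only the block, which is exactly the isolated-dot removal step S103 describes.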
S120: and inputting the fundus image after size conversion into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression characteristics corresponding to each target detection frame on the fundus image.
In some possible embodiments, the training process of the target frame detection model in step S120 specifically includes:
s121: acquiring a plurality of fundus training images;
s122: labeling the plurality of fundus training images to obtain training samples;
Fig. 2A is a schematic diagram of obtaining training samples according to an embodiment of the present invention. As shown in fig. 2A, rectangular boxes are marked at the arteriovenous cross-compression feature positions on a fundus training image, yielding sample detection boxes K1 and K2. The sizes of K1 and K2 are chosen so that each box simultaneously contains the arterial vessel, the venous vessel, the crossing coordinates, and related information, while excluding most peripheral interference. Preferably, the area of the rectangular box is determined by the vessel caliber at the corresponding crossing position; for example, the area may change with the caliber value of the arterial and/or venous vessel at the arteriovenous crossing, increasing when the vessel caliber at the crossing is larger and decreasing when it is smaller. A first vertex A and a second vertex B are determined on either diagonal of sample detection box K1, and a first vertex C and a second vertex D on either diagonal of sample detection box K2, giving sample detection points A, B, C, and D; the training sample is then obtained from sample detection boxes K1 and K2 and the sample detection points A, B, C, and D.
In some possible embodiments, the number of sample detection boxes may be 1, 2, 3, or more; the specific number may be determined according to the fundus abnormalities, with a corresponding number of sample detection points.
S123: carrying out size transformation on the plurality of training samples to obtain a plurality of training samples with the same size;
In this embodiment, resizing the training sample images brings fundus images of different sizes into a uniform range without losing image detail; this weakens image-specific variation during training and improves the accuracy of the deep-learning network training model.
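The size unification can be sketched as a nearest-neighbor resize; the tiny output size in the test below is only for illustration, and in practice the fixed input size expected by the detection network would be used.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize so that all training images share one size."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows][:, cols]
```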
Step S124: inputting the training samples into a deep-learning target detection network for training to obtain the target detection model.
In some possible embodiments, the step S124 of inputting the training sample into the deep learning target detection network for training to obtain the target detection model may specifically include the following sub-steps:
step S1: inputting the training samples into the deep-learning target detection network for convolution to obtain feature maps at different scales;
step S2: on the feature maps with different scales, a plurality of target preselection frames with different length-width ratios are generated by taking the central points of different unit lattices as centers;
step S3: determining a plurality of target detection frames by the plurality of target preselection frames according to a preset intersection ratio, determining a first vertex and a second vertex on a diagonal of the target detection frames to obtain a plurality of target detection points, and calculating coordinate deviation values of coordinates of the plurality of target detection points relative to the plurality of sample detection points;
the above steps S1 to S3 are iterated continuously until the coordinate offset value is reduced to the preset offset value, and the target frame detection model is obtained.
The intersection ratio, termed Intersection over Union (IoU), is the ratio of the intersection to the union of the predicted box and the ground-truth box. Fig. 2B is a schematic diagram of the IoU calculation according to an embodiment of the present invention.
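The IoU of Fig. 2B reduces to a few lines of arithmetic; a sketch assuming boxes are given as (x_min, y_min, x_max, y_max):

```python
def intersection_over_union(box_a, box_b):
    """Ratio of intersection area to union area of two axis-aligned boxes."""
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Identical boxes give an IoU of 1, disjoint boxes 0, and partial overlap something in between, which is what makes the measure usable as a preselection threshold.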
Fig. 2C is a schematic diagram of obtaining the target detection model according to an embodiment of the present invention. As shown in fig. 2C:
A plurality of training samples of the same size (sample 1, sample 2, sample 3, …) with labeled coordinate points ((x1, y1), (x2, y2), (x3, y3), …) are input into the deep-learning target detection network for convolution, yielding feature maps 1, 2, and 3 at different scales. Fig. 2D is a schematic diagram of target preselection boxes of different aspect ratios generated on feature maps of different scales. As shown in fig. 2D, the feature map at each scale is divided into a plurality of cells; for example, an 8x8 feature map is divided into 64 cells and a 4x4 feature map into 16 cells, each cell corresponding to a group of features in the original image. On the feature maps of different scales, a plurality of target preselection boxes of different aspect ratios are generated centered on the center points of different cells, and a plurality of target detection boxes are then determined according to a preset intersection ratio; the intersection ratio is the ratio of the intersection to the union of the predicted box and the ground-truth box, i.e., of the target preselection box and the target detection box;
A first vertex and a second vertex are determined on the diagonal of each target detection box, giving a plurality of target detection points (x1′, y1′), (x2′, y2′), (x3′, y3′), …. The differences between these coordinates and the labeled coordinate points (x1, y1), (x2, y2), (x3, y3), … are then computed, namely |x1′ − x1| = Δx1, |y1′ − y1| = Δy1, …, and the above steps are iterated continuously until the differences (Δx1, Δy1, …) fall to or below the preset coordinate offset values.
In some embodiments, the number of feature-map scales is determined by the number of convolutional layers in the deep-learning-based target detection network; different convolutional layers output feature maps of different scales.
In some embodiments, the deep-learning target detection network is a Single Shot MultiBox Detector (SSD). The SSD network uses multiple data-augmentation methods, including horizontal flipping, cropping, enlarging, and reducing, which significantly improve algorithm performance and give the method better robustness to input targets of different sizes and shapes.
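Of the augmentations named above, horizontal flipping is the one that must also remap the labeled boxes. A minimal sketch follows; the pixel-coordinate box layout (x_min, y_min, x_max, y_max) is an assumption.

```python
import numpy as np

def horizontal_flip(img, boxes):
    """Mirror the image left-right and remap each (x_min, y_min, x_max, y_max) box
    so that the labeled cross-compression regions still cover the same anatomy."""
    width = img.shape[1]
    flipped = img[:, ::-1].copy()
    new_boxes = [(width - x_max, y_min, width - x_min, y_max)
                 for (x_min, y_min, x_max, y_max) in boxes]
    return flipped, new_boxes
```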
Fig. 3A is a schematic diagram of generating a probability value of arteriovenous cross-compression according to an embodiment of the present invention. As shown in fig. 3A, the probability value of arteriovenous cross-compression for the target detection frame region is determined by detecting the morphological changes of the vein vessel in the arteriovenous cross-compression data; for example, within the target detection frame, the caliber value of the vein vessel, the curvature value of the vein vessel and/or the morphological change of the vein vessel are calculated to determine the probability value θ of arteriovenous cross-compression at the crossing point of the region where the target detection frame is located.
Specifically, fig. 3B is a schematic diagram of the morphological changes of a venous vessel in the arteriovenous crossing compression phenomenon according to an embodiment of the present invention. As shown in fig. 3B: a is a normal arteriovenous crossing; in b, the vein at the crossing point is hidden, i.e. it tapers to a pen-nib shape on both sides of the artery, and the probability value of arteriovenous cross-compression is judged from the caliber value of the vessel at the nib-shaped position; in c, the vein is in a pinched-off state, and the probability value is judged from the degree of pinch-off; in d, the vein is narrowed into a spindle shape, and the probability value is judged from the caliber value of the spindle-shaped narrowed part; in e, the probability value is determined by measuring the aneurysm-like dilation at the distal end of the vein; in f, the vein is deflected, and the probability value is judged from the curvature value of the vein vessel; in g, the vein lies below the artery and is compressed into an S shape, and the probability value is judged from the curvature value of the vein; in h, the vein lies above the artery and its raised portion crosses the artery in an arch-bridge shape, and the probability value is judged from the size of the arch or the curvature value of the vein. Here 1 denotes a vein vessel and 2 an artery vessel.
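A fusion of the morphological cues of fig. 3B into a single probability value θ might look like the sketch below. The cue names, normalization and weights are hypothetical illustrations; the source does not specify how the cues are combined.

```python
def crossing_compression_probability(caliber_ratio, curvature, tapering_score,
                                     w_caliber=0.4, w_curv=0.3, w_taper=0.3):
    """Hypothetical combination of morphological cues into theta:
    - caliber_ratio: vein caliber at the crossing / caliber away from it
      (low ratio = narrowing, as in cases b, c, d of Fig. 3B)
    - curvature: normalized vein curvature (S shape / arch, cases f, g, h)
    - tapering_score: normalized nib-like tapering score (case b)
    Weights are illustrative only."""
    narrowing = 1.0 - min(max(caliber_ratio, 0.0), 1.0)  # 1 = fully pinched off
    curv = min(max(curvature, 0.0), 1.0)
    taper = min(max(tapering_score, 0.0), 1.0)
    theta = w_caliber * narrowing + w_curv * curv + w_taper * taper
    return min(max(theta, 0.0), 1.0)
```

A normal crossing (full caliber, no curvature, no tapering) yields θ = 0, while a pinched-off, strongly curved, tapering vein pushes θ toward 1.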
S130: and determining whether the target detection frame contains arteriovenous cross compression characteristics or not according to the probability value of the arteriovenous cross compression.
In this embodiment, if the probability value of the arteriovenous cross compression generated on the target detection frame is greater than or equal to the preset probability threshold, it is determined that the arteriovenous cross compression feature is contained in the target detection frame. As an example, when the preset probability threshold is 0.6, it is determined that the target detection frame corresponding to the probability value greater than or equal to 0.6 contains the arteriovenous crossing compression feature.
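The threshold decision is a simple filter over the generated frames, as in the sketch below; the function name and the list representation of the frames are assumptions, while the 0.6 threshold follows the example in the text.

```python
def select_compression_boxes(boxes, probs, threshold=0.6):
    """Keep the target detection frames whose arteriovenous cross-compression
    probability value is greater than or equal to the preset threshold."""
    return [box for box, p in zip(boxes, probs) if p >= threshold]
```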
In some embodiments, after detecting the arteriovenous cross-compression feature in the target detection frame according to the probability value of the arteriovenous cross-compression, the method further comprises:
and deleting the extracted wrong arteriovenous cross compression characteristics and the corresponding target detection frame by combining the nerve fiber layer on the fundus image and the age information of the patient corresponding to the fundus image.
In some embodiments, features such as the nerve fiber layer on the fundus image and the age of the corresponding patient are combined to check the generated target detection frames: some positions where a frame was generated may not be true arteriovenous cross-compression, and any frame shown not to belong to the arteriovenous cross-compression feature is deleted. For example, since hypertension occurs mainly in people over 30 years old, when a suspected arteriovenous cross-compression is detected in a patient under 30, it is generally considered an over-detection caused by image quality or the nerve fiber layer; the generated target detection frame is therefore deleted, eliminating the erroneous network detection result and improving detection accuracy.
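This post-filtering step can be sketched as below. The detection dictionaries, the `on_nerve_fiber_layer` flag and the age cutoff of 30 are assumptions for illustration (the cutoff follows the example in the text); a real system would derive the flag from a nerve-fiber-layer segmentation.

```python
def filter_false_compressions(detections, patient_age, min_age=30):
    """Delete detections judged to be over-detections: all frames for
    patients under min_age, plus frames lying on the nerve fiber layer.
    Each detection is a dict; 'on_nerve_fiber_layer' is a hypothetical flag."""
    if patient_age < min_age:
        return []
    return [d for d in detections if not d.get("on_nerve_fiber_layer", False)]
```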
According to the embodiments of the present invention, a target detection model is obtained based on deep learning, so that the arteriovenous cross-compression features of a fundus image can be identified rapidly, accurately and intuitively without depending on image quality. By identifying the arteriovenous cross-compression features, fundus complications of many chronic diseases and cardiovascular and cerebrovascular diseases can be found, the structural damage these diseases cause to the body can be reflected to a certain extent, and a basic reference is provided for the assessment of such diseases and their progression.
Example two
Fig. 4 is a functional block diagram of a device for detecting a fundus arteriovenous cross-compression feature according to an embodiment of the present invention. As shown in fig. 4, based on a similar inventive concept, the fundus arteriovenous cross-compression feature detecting apparatus 400 includes:
an acquiring module 410, configured to acquire a fundus image and perform size conversion on the fundus image;
a target generation module 420, configured to input the fundus image after size conversion into a pre-trained target detection model based on deep learning, and generate a plurality of target detection frames and probability values of arteriovenous cross compression features corresponding to the target detection frames on the fundus image;
the feature detection module 430 is configured to determine whether the target detection frame includes the arteriovenous cross-compression feature according to the probability value of the arteriovenous cross-compression feature.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
EXAMPLE III
An embodiment of the present invention further provides a computer-readable storage medium 500, where a computer program 510 is stored in the computer-readable storage medium 500, and when the computer program 510 is executed by a processor, the computer program implements:
acquiring a fundus image, and carrying out size conversion on the fundus image;
inputting the fundus images with the converted sizes into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression characteristics corresponding to the target detection frames on the fundus images;
and determining whether the target detection frame contains the arteriovenous cross compression characteristics or not according to the probability value of the arteriovenous cross compression characteristics.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Example four
An embodiment of the present invention further provides an electronic device 600, as shown in fig. 6, which includes one or more processors 601, a communication interface 602, a memory 603, and a communication bus 604, where the processors 601, the communication interface 602, and the memory 603 complete communication therebetween through the communication bus 604.
A memory 603 for storing a computer program;
the processor 601 is configured to implement, when executing the program stored in the memory 603:
acquiring a fundus image, and carrying out size conversion on the fundus image;
inputting the fundus images with the converted sizes into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression characteristics corresponding to the target detection frames on the fundus images;
and determining whether the target detection frame contains the arteriovenous cross compression feature according to the probability value of the arteriovenous cross compression feature.
In one possible design, the processing executed by the processor 601 further includes, before acquiring the fundus image:
acquiring a plurality of fundus training images;
labeling the plurality of fundus training images to obtain training samples;
carrying out size transformation on the plurality of training samples to obtain a plurality of training samples with the same size;
and inputting the training samples into a deep learning target detection network for training to obtain a target detection model.
In one possible design, the processing executed by the processor 601 further includes labeling the plurality of fundus training images to obtain training samples, which specifically includes:
marking a rectangular frame at a position of the fundus training image in the arteriovenous cross compression characteristic to obtain a plurality of sample detection frames;
determining a first vertex and a second vertex on a diagonal line of a sample detection frame, and acquiring a plurality of sample detection points;
and acquiring training samples according to the plurality of sample detection frames and the plurality of sample detection points.
In one possible design, the processing executed by the processor 601 further includes inputting the training samples into a deep learning target detection network for training to obtain the target detection model, which specifically includes:
step S1: inputting the training samples into a deep learning target detection network for convolution to obtain characteristic graphs of different scales;
step S2: on the feature maps with different scales, a plurality of target preselection frames with different length-width ratios are generated by taking the central points of different unit grids as centers;
step S3: determining a plurality of target detection frames by the plurality of target preselection frames according to a preset intersection ratio, determining a first vertex and a second vertex on a diagonal of the target detection frames to obtain a plurality of target detection points, and calculating coordinate deviation values of coordinates of the plurality of target detection points relative to the plurality of sample detection points;
the above steps S1 to S3 are iterated continuously until the coordinate offset value is reduced to or below the preset offset value, so as to obtain the target detection model.
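The convergence criterion of steps S1 to S3 can be sketched as follows: compute per-point coordinate offsets against the labeled sample detection points and iterate until every offset falls to or below the preset value. The function names and the preset value of 1.0 are illustrative assumptions.

```python
def coordinate_offsets(predicted_points, labeled_points):
    """Per-point offsets (|x' - x|, |y' - y|) between the target detection
    points predicted by the network and the labeled sample detection points."""
    return [(abs(px - lx), abs(py - ly))
            for (px, py), (lx, ly) in zip(predicted_points, labeled_points)]

def converged(offsets, preset=1.0):
    """Training stops once every coordinate offset is at or below the
    preset offset value."""
    return all(dx <= preset and dy <= preset for dx, dy in offsets)
```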
In one possible design, the processing executed by the processor 601 further includes inputting the fundus image after size conversion into the pre-trained deep-learning-based target detection model and generating a plurality of target detection frames on the fundus image, which specifically includes:
inputting the fundus image into a target detection model for convolution to obtain a plurality of different scale characteristic maps;
on the feature maps with different scales, a plurality of target preselection frames with different shapes are generated by taking the center points of different units as centers;
and combining the target preselection frames on the feature maps with different scales, and respectively selecting a plurality of target detection frames by using a non-maximum suppression method.
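A greedy non-maximum suppression over the combined preselection frames can be sketched as below; the box format (x1, y1, x2, y2) and the 0.5 IoU threshold are assumptions, and frameworks such as torchvision provide an equivalent `nms` operator.

```python
def _iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and discard
    any remaining box that overlaps it beyond iou_threshold.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if _iou(boxes[i], boxes[j]) < iou_threshold]
    return keep
```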
In one possible design, the processing executed by the processor 601 further includes:
and judging the probability value of the cross compression corresponding to the target detection frame based on the morphological change of the vein, the change trend of the caliber value of the vein and/or the change trend of the curvature value of the vein obtained by the target detection model of deep learning.
In one possible design, the detecting a arteriovenous cross compression feature in the target detection frame according to the arteriovenous cross compression probability value in the processing performed by the processor 601 specifically includes:
and when the probability value of arteriovenous cross-compression generated for the target detection frame is greater than or equal to a preset probability threshold, judging that the crossing point in the target detection frame is an arteriovenous cross-compression, thereby detecting the arteriovenous cross-compression feature in the target detection frame.
In one possible design, the processing executed by the processor 601 further includes, after extracting the arteriovenous cross-compression feature according to the probability value of the arteriovenous cross-compression:
and deleting the extracted wrong arteriovenous cross compression characteristics and the corresponding target detection frame by combining the nerve fiber layer on the fundus image and the age information of the patient corresponding to the fundus image.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic equipment and other equipment.
The bus 604 includes hardware, software, or both for coupling the above-described components to each other. For example, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a Hyper Transport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an infiniband interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a video electronics standards association local (VLB) bus, or other suitable bus or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
Memory 603 may include mass storage for data or instructions. By way of example, and not limitation, memory 603 may include a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, tape, or Universal Serial Bus (USB) Drive or a combination of two or more of these. The memory 603 may include removable or non-removable (or fixed) media, where appropriate. In a particular embodiment, the memory 603 is a non-volatile solid-state memory. In a particular embodiment, the memory 603 includes Read Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory or a combination of two or more of these.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although the present application provides method steps as described in an embodiment or flowchart, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device, the electronic device and the readable storage medium embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A detection method for cross compression characteristics of a fundus artery and vein is characterized by comprising the following steps:
acquiring a fundus image, and carrying out size conversion on the fundus image;
inputting the fundus images with the converted sizes into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression characteristics corresponding to the target detection frames on the fundus images;
and determining whether the target detection frame contains the arteriovenous cross compression feature according to the probability value of the arteriovenous cross compression feature.
2. The method according to claim 1, further comprising, prior to said acquiring a fundus image:
acquiring a plurality of fundus training images;
labeling the plurality of fundus training images to obtain training samples;
carrying out size transformation on the training samples to obtain a plurality of training samples with the same size;
and inputting the training samples with the same size into a deep learning target detection network for training to obtain the target detection model.
3. The method according to claim 2, wherein the labeling of the plurality of fundus training images to obtain a training sample comprises:
marking a rectangular frame at the position of the arteriovenous cross compression characteristic on the fundus training image to obtain a plurality of sample detection frames;
determining a first vertex and a second vertex on any diagonal line of the sample detection frame, and acquiring a plurality of sample detection points;
obtaining the training sample according to the plurality of sample detection boxes and the plurality of sample detection points.
4. The method according to claim 3, wherein the inputting the training samples into a deep learning target detection network for training to obtain the target detection model specifically comprises:
step S1: inputting the training sample into a deep learning target detection network for convolution to obtain characteristic graphs of different scales;
step S2: on the feature maps with different scales, a plurality of target preselection frames with different length-width ratios are generated by taking the central points of different unit grids as centers;
step S3: determining a plurality of target detection frames by the plurality of target preselection frames according to a preset intersection ratio, determining a first vertex and a second vertex on a diagonal line of the plurality of target detection frames to obtain a plurality of target detection points, and calculating coordinate offset values of coordinates of the plurality of target detection points relative to the plurality of sample detection points;
the above steps S1 to S3 are iterated continuously until the coordinate offset value is reduced to or below a preset offset value, so as to obtain the target detection model.
5. The method according to claim 1, wherein the step of inputting the fundus image after size conversion into a pre-trained target detection model based on deep learning to generate a plurality of target detection frames and probability values of arteriovenous cross-compression features corresponding to the target detection frames on the fundus image comprises the steps of:
inputting the fundus image into the target detection model for convolution to obtain a plurality of different scale characteristic maps;
on the different-scale characteristic diagrams, a plurality of target preselection frames in different shapes are generated by taking the center points of different units as centers;
combining the target preselection frames on the feature maps with different scales, and respectively selecting the multiple target detection frames by using a non-maximum suppression method;
and according to a target detection model based on deep learning, obtaining the shape change of the vein, the change trend of the caliber value of the vein and/or the change trend of the curvature value of the vein, and judging that the target detection frame comprises the probability value of the characteristic of arteriovenous cross compression.
6. The method according to claim 1, wherein the determining whether the object detection frame includes the arteriovenous cross-compression feature according to the probability value of the arteriovenous cross-compression feature specifically comprises:
and when the probability value of the arteriovenous cross compression on the target detection frame is larger than or equal to a preset probability threshold value, judging that the target detection frame contains the characteristic of arteriovenous cross compression, and the cross position contained in the target detection frame is the arteriovenous cross compression.
7. The method according to claim 6, further comprising, after detecting the arteriovenous cross-compression feature in the target detection frame according to the probability value of the arteriovenous cross-compression:
and deleting the extracted wrong arteriovenous cross compression characteristics and the corresponding target detection frame by combining the nerve fiber layer on the fundus image and the age information of the patient corresponding to the fundus image.
8. A device for detecting a fundus arteriovenous cross-compression feature, characterized by comprising:
the acquisition module is used for acquiring a fundus image and carrying out size conversion on the fundus image;
the target generation module is used for inputting the fundus images with the converted sizes into a pre-trained target detection model based on deep learning, and generating a plurality of target detection frames and probability values of arteriovenous cross compression characteristics corresponding to the target detection frames on the fundus images;
and the characteristic detection module is used for determining whether the target detection frame contains arteriovenous cross compression characteristics or not according to the arteriovenous cross compression probability value.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for detecting a fundus arteriovenous cross-compression feature as set forth in any one of claims 1 to 7.
10. A detection device for a fundus arteriovenous cross-compression feature, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of detecting a fundus arteriovenous cross-compression feature of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552276.7A CN114387219A (en) | 2021-12-17 | 2021-12-17 | Method, device, medium and equipment for detecting arteriovenous cross compression characteristics of eyeground |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552276.7A CN114387219A (en) | 2021-12-17 | 2021-12-17 | Method, device, medium and equipment for detecting arteriovenous cross compression characteristics of eyeground |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114387219A true CN114387219A (en) | 2022-04-22 |
Family
ID=81197793
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111552276.7A Pending CN114387219A (en) | 2021-12-17 | 2021-12-17 | Method, device, medium and equipment for detecting arteriovenous cross compression characteristics of eyeground |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114387219A (en) |
2021-12-17: application filed in China, CN202111552276.7A, published as CN114387219A (en); status: active, Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117794A (en) * | 2018-08-16 | 2019-01-01 | 广东工业大学 | Moving target behavior tracking method, apparatus, device, and readable storage medium |
WO2020164282A1 (en) * | 2019-02-14 | 2020-08-20 | 平安科技(深圳)有限公司 | Yolo-based image target recognition method and apparatus, electronic device, and storage medium |
CN110766643A (en) * | 2019-10-28 | 2020-02-07 | 电子科技大学 | Microaneurysm detection method facing fundus images |
CN113065379A (en) * | 2019-12-27 | 2021-07-02 | 深圳云天励飞技术有限公司 | Image detection method and device fusing image quality and electronic equipment |
CN111861999A (en) * | 2020-06-24 | 2020-10-30 | 北京百度网讯科技有限公司 | Detection method and device for artery and vein cross compression sign, electronic equipment and readable storage medium |
CN111738262A (en) * | 2020-08-21 | 2020-10-02 | 北京易真学思教育科技有限公司 | Target detection model training method, target detection model training device, target detection model detection device, target detection equipment and storage medium |
CN112163541A (en) * | 2020-10-09 | 2021-01-01 | 上海云绅智能科技有限公司 | 3D target detection method and device, electronic equipment and storage medium |
CN112101361A (en) * | 2020-11-20 | 2020-12-18 | 深圳佑驾创新科技有限公司 | Target detection method, device and equipment for fisheye image and storage medium |
CN112906502A (en) * | 2021-01-29 | 2021-06-04 | 北京百度网讯科技有限公司 | Training method, device and equipment of target detection model and storage medium |
CN113470102A (en) * | 2021-06-23 | 2021-10-01 | 依未科技(北京)有限公司 | Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision |
CN113688675A (en) * | 2021-07-19 | 2021-11-23 | 北京鹰瞳科技发展股份有限公司 | Target detection method and device, electronic equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
张卯年 et al.: "老年眼病的防治" [Prevention and Treatment of Eye Diseases in the Elderly], 金盾出版社, 31 December 1997, page 228 *
李晖晖 et al.: [describes labeling rectangular boxes for the positions of arteriovenous cross-compression features on fundus training images to obtain multiple sample detection frames], 西北工业大学出版社, 31 October 2021, pages 171-172 *
李英: "眼底图像中微小目标检测算法的研究与实现" [Research and Implementation of Small-Target Detection Algorithms in Fundus Images], 中国优秀硕士学位论文全文数据库-医药卫生科技辑 [China Master's Theses Full-text Database, Medicine & Health Sciences], 15 January 2020 (2020-01-15), pages 065-156 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115457038A (en) * | 2022-11-11 | 2022-12-09 | 北京鹰瞳科技发展股份有限公司 | Training method of hierarchical prediction model, hierarchical prediction method and related products |
CN115457038B (en) * | 2022-11-11 | 2023-08-22 | 北京鹰瞳科技发展股份有限公司 | Training method of hierarchical prediction model, hierarchical prediction method and related products |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200005460A1 (en) | Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium | |
CN112734785B (en) | Method, device, medium and equipment for determining sub-pixel level fundus blood vessel boundary | |
CN108805180B (en) | Target object detection method and device | |
CN113470102B (en) | Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision | |
CN111145200B (en) | Blood vessel center line tracking method combining convolutional neural network and cyclic neural network | |
CN112734828B (en) | Method, device, medium and equipment for determining center line of fundus blood vessel | |
US20230214989A1 (en) | Defect detection method, electronic device and readable storage medium | |
CN112215217B (en) | Digital image recognition method and device for simulating doctor to read film | |
CN110930414A (en) | Lung region shadow marking method and device of medical image, server and storage medium | |
CN110910441A (en) | Method and device for extracting center line | |
CN112037287A (en) | Camera calibration method, electronic device and storage medium | |
CN112837325A (en) | Medical image processing method, device, electronic equipment and medium | |
CN114764789B (en) | Method, system, device and storage medium for quantifying pathological cells | |
CN114387219A (en) | Method, device, medium and equipment for detecting arteriovenous cross compression characteristics of eyeground | |
CN112529918B (en) | Method, device and equipment for segmenting brain room area in brain CT image | |
CN113706475A (en) | Confidence coefficient analysis method and device based on image segmentation | |
CN113269752A (en) | Image detection method, device terminal equipment and storage medium | |
CN116228776B (en) | Electromechanical equipment welding defect identification method and system | |
CN114387218A (en) | Vision-calculation-based identification method, device, medium, and apparatus for characteristics of fundus oculi | |
CN114387209A (en) | Method, apparatus, medium, and device for fundus structural feature determination | |
CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
CN114387210A (en) | Method, apparatus, medium, and device for fundus feature acquisition | |
CN113344893A (en) | High-precision fundus arteriovenous identification method, device, medium and equipment | |
CN109949243B (en) | Calcification artifact eliminating method and device and computer storage medium | |
CN110599456A (en) | Method for extracting specific region of medical image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||