CN115841506A - Fluorescent molecule image processing method and system - Google Patents

Fluorescent molecule image processing method and system

Info

Publication number
CN115841506A
Authority
CN
China
Prior art keywords
image, extreme point, pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310137548.XA
Other languages
Chinese (zh)
Other versions
CN115841506B (en)
Inventor
朱腾
陈媛琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong General Hospital
Original Assignee
Guangdong General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong General Hospital
Priority to CN202310137548.XA
Publication of CN115841506A
Application granted
Publication of CN115841506B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fluorescent molecule image processing method and system. An RGB image and an NIR image captured by an endoscope are obtained; pixel points in the RGB grayscale image whose values exceed a threshold are extracted into a first set, and pixel points in the NIR grayscale image whose values exceed a threshold are extracted into a second set. The pixel points of the first set in the RGB grayscale image and of the second set in the NIR grayscale image are filled to obtain a first image and a second image. Extreme points are extracted with the DOG extreme point selection method of the SIFT detection method, yielding a first extreme point set for the first image and a second extreme point set for the second image. The extreme points in the first and second extreme point sets are then screened; feature points and their feature vectors are computed from the screened extreme points, registration of the RGB image and the NIR image is completed, and fusion of the two images is realized, improving both computation speed and registration accuracy.

Description

Fluorescent molecule image processing method and system
Technical Field
The invention relates to the field of medical image processing, and in particular to a fluorescent molecule image processing method and system.
Background
Medical imaging technology lets doctors observe a patient's condition non-invasively: images captured by ultrasound, X-ray, MRI and the like directly show the lesion area, which greatly aids the diagnosis of many diseases. Contrast agents are widely used in medical imaging, the barium meal being the best-known example. Fluorescent agents play a role similar to contrast agents: cells are labeled with a fluorescent agent, and fluorescence molecular imaging then images the labeled tumor cells during surgery. Indocyanine green (ICG) and methylene blue (MB) are organic molecular probes in common clinical use; with ICG, the labeled position can be displayed through a near-infrared (NIR) camera.
However, an image captured by a near-infrared camera differs greatly from what the human eye sees. To let the surgeon observe the labeled position during the operation, one approach is dual-window display: one window shows the image from the ordinary camera while the other shows the content captured by the near-infrared camera. Another approach fuses the two by superposition, overlaying the image from the ordinary camera and the image from the near-infrared camera into a final visual image. With dual-window display the surgeon must watch two windows at once and must locate the position in the near-infrared image during resection, so operating efficiency is low and errors occur easily. Superposition is easy to observe, but it requires the image from the ordinary camera and the image from the near-infrared camera to be aligned precisely.
To fuse the fluorescent molecular marks in the NIR image onto the corresponding parts of the RGB image, the two images must first be registered. Because the RGB image is captured by an ordinary camera and the NIR image by a near-infrared camera, the imaging arrays and shooting angles may differ, and real-time display is required. How to fuse the RGB image and the NIR image quickly and accurately is therefore an urgent problem to be solved.
Disclosure of Invention
In order to rapidly and accurately fuse the RGB image and the NIR image, the invention provides a fluorescence molecule image processing method, which comprises the following steps:
step 1, acquiring an RGB image and an NIR image shot at the same time, and performing graying on the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points of which the pixel values are larger than a threshold value in the NIR gray level image and putting the pixel points into a second set;
step 2, filling the pixel points of the first set in the RGB gray level image and of the second set in the NIR gray level image by an erosion method to obtain a first image and a second image; extracting extreme points of the first image and the second image by using the DOG extreme point selection method in the SIFT detection method to obtain a first extreme point set of the first image and a second extreme point set of the second image;
step 3, screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set; screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
and 4, selecting key points from the first extreme point set and the second extreme point set according to a key point selection mode in the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
Preferably, the step 3 specifically comprises:
step 31, for each extreme point in the first extreme point set, calculating an average value of distances between the extreme point and the N pixel points closest to the extreme point in the first set according to the coordinate of the extreme point, if the average value is smaller than a first preset value, removing the extreme point from the first extreme point set, otherwise, keeping the extreme point in the first extreme point set;
and step 32, for each extreme point in the second extreme point set, calculating an average value of the distances between the extreme point and the N pixel points closest to the extreme point in the second set according to the coordinates of the extreme point, if the average value is smaller than a second preset value, removing the extreme point from the second extreme point set, otherwise, keeping the extreme point in the second extreme point set.
Preferably, the method for calculating the first preset value comprises the following steps:
calculating the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image, and obtaining the first preset value according to the ratio and a first set preset value.
Preferably, the calculation method of the second preset value is as follows:
calculating the ratio of the number of pixel points in the NIR image that are not in the second set to the total number of pixel points in the NIR image, and obtaining the second preset value according to the ratio and a second set preset value.
Preferably, after the step 4, a step 5 is further included:
and extracting a fluorescence region from the NIR image, and fusing the fluorescence region into the RGB image according to the registration mode.
In addition, the invention also provides a fluorescent molecule image processing system, which comprises the following modules:
the preprocessing module is used for acquiring an RGB image and an NIR image which are shot at the same time, and graying the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points with pixel values larger than a threshold value in the NIR gray-scale image and putting the pixel points into a second set;
the extreme point acquisition module is used for filling the pixel points of the first set in the RGB gray level image and of the second set in the NIR gray level image respectively by an erosion method to obtain a first image and a second image, and extracting extreme points of the first image and the second image by using the DOG extreme point selection method in the SIFT detection method to obtain a first extreme point set of the first image and a second extreme point set of the second image;
the extreme point screening module is used for screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set, and screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
and the registration module is used for selecting key points from the first extreme point set and the second extreme point set according to a key point selection mode in the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
Preferably, the extreme point screening module specifically includes a first extreme point screening module and a second extreme point screening module:
the first extreme point screening module is used for calculating the average value of the distances between the extreme point and N pixel points which are nearest to the extreme point in the first set according to the coordinates of the extreme point for each extreme point in the first extreme point set, if the average value is smaller than a first preset value, the extreme point is removed from the first extreme point set, and otherwise, the extreme point is kept in the first extreme point set;
and the second extreme point screening module is used for calculating the average value of the distance between the extreme point and the N pixel points which are closest to the extreme point in the second set according to the coordinates of the extreme point for each extreme point in the second extreme point set, if the average value is smaller than a second preset value, the extreme point is removed from the second extreme point set, and otherwise, the extreme point is kept in the second extreme point set.
Preferably, the method for calculating the first preset value comprises the following steps:
calculating the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image, and obtaining the first preset value according to the ratio and a first set preset value;
preferably, the calculation method of the second preset value is as follows:
calculating the ratio of the number of pixel points in the NIR image that are not in the second set to the total number of pixel points in the NIR image, and obtaining the second preset value according to the ratio and a second set preset value.
Preferably, the system further comprises an image fusion module;
the image fusion module is used for extracting a fluorescence region from an NIR image and fusing the fluorescence region into the RGB image according to the registration mode.
Finally, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the method described above.
When fluorescent molecules mark a diseased part, the labeled position in the NIR image captured by the near-infrared camera must be fused into the RGB image captured by the ordinary camera. Because the two cameras have different imaging arrays, the captured images also differ, and heterogeneous image registration suffers from low precision and slow computation. To address this, the RGB image and the NIR image are first grayed; the pixel points of the reflective parts of the grayscale images are placed into a first set and a second set; the pixel points of the first set in the RGB grayscale image and of the second set in the NIR grayscale image are filled by an erosion method to obtain a first image and a second image; DOG extreme points are extracted from the first and second images; and the extreme points are filtered, compressing their number and improving computation speed. Moreover, filtering the extreme points removes points that would harm registration precision, improving registration accuracy and, in turn, fusion accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of Embodiment 1;
FIG. 2 illustrates the calculation of DOG spatial extreme points;
FIG. 3 is a flowchart of Embodiment 5;
FIG. 4 is a structural view of Embodiment 10.
Detailed Description
In this document, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. The terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a/an …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises it.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fluorescence molecular imaging is an intraoperative imaging technique in which tumor cells are stained with specific molecular probes such as indocyanine green (ICG) and methylene blue (MB), and their positions are then displayed under irradiation with specific light. A common combination is ICG with NIR imaging, i.e., the labeled position is obtained by pairing indocyanine green with near-infrared light, and is usually displayed in green.
The NIR image does not match human visual characteristics and is hard for medical personnel to read, while the RGB image captured by an ordinary camera matches human vision but cannot directly display the labeled position. The RGB image and the NIR image must therefore be acquired simultaneously and fused.
Embodiment 1, as shown in fig. 1, the present invention provides a fluorescent molecule image processing method, including the steps of:
step 1, acquiring an RGB image and an NIR image shot at the same time, and performing graying on the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points of which the pixel values are larger than a threshold value in the NIR gray level image and putting the pixel points into a second set;
in the operation, the endoscope can go deep into the focus position and shoot the image of the focus position, and in the invention, the endoscope simultaneously comprises a CCD or CMOS lens and an NIR lens which respectively shoot an RGB image and an NIR image. Human tissues are smooth and rich in water, when light of an endoscope irradiates, light reflection can occur, the light reflection condition of images shot by different cameras (CCD or CMOS) is different from that of images shot by a near infrared camera, when an RGB image and an NIR image are aligned, the light reflection part needs to be removed, and the influence of the light reflection on the alignment is reduced.
Traverse the grayed RGB image and NIR image and put the pixel points whose values exceed the threshold into the first set and the second set respectively. In a grayscale image a larger pixel value means a brighter pixel: 0 is black and 255 is white, and reflective parts are close to white, so the reflective parts can be identified simply by testing the pixel values of the grayscale image. The threshold is a specific value; in one embodiment it is 210. Alternatively, the threshold may be determined from the overall gray level of the image: the larger the average gray value of the RGB grayscale image, the larger its threshold, and likewise the larger the average gray value of the NIR grayscale image, the larger its threshold.
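As a minimal sketch of this step (assuming RGB channel order R, G, B and the example threshold of 210; the function name and the luminance graying weights are illustrative, not taken from the patent):

```python
import numpy as np

def reflective_sets(rgb, nir, threshold=210):
    """Step 1 sketch: gray both images and collect the coordinates of pixels
    whose gray value exceeds the threshold (candidate specular highlights)."""
    # luminance graying of the RGB image; the NIR image is already single-channel
    gray_rgb = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    gray_nir = nir.astype(np.float32)
    first_set = list(zip(*np.nonzero(gray_rgb > threshold)))   # (row, col) pairs
    second_set = list(zip(*np.nonzero(gray_nir > threshold)))
    return first_set, second_set
```

The returned coordinate lists play the role of the first and second sets of reflective pixel points used in the later steps.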
Step 2, filling the pixel points of the first set in the RGB gray level image and of the second set in the NIR gray level image by an erosion method to obtain a first image and a second image; extracting extreme points of the first image and the second image by using the DOG extreme point selection method in the SIFT detection method to obtain a first extreme point set of the first image and a second extreme point set of the second image;
Registration operates on the whole image, so to eliminate the influence of the reflective parts on registration, those parts are filled first, specifically by an erosion method. Erosion and dilation are morphological image operations, and erosion can fill in the pixel points of the reflective parts.
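One way the filling could be realized is grayscale erosion restricted to the reflective mask, which pulls highlight values down toward the surrounding tissue intensity. The following is a sketch under that assumption (the kernel size and iteration count are illustrative; the patent does not specify them):

```python
import numpy as np

def fill_highlights(gray, mask, ksize=3, iterations=3):
    """Suppress specular highlights by grayscale erosion applied to masked pixels.
    Each masked pixel is repeatedly replaced by the minimum of its ksize x ksize
    neighbourhood, so highlight values shrink toward the surrounding intensity."""
    out = gray.astype(np.float32).copy()
    pad = ksize // 2
    for _ in range(iterations):
        padded = np.pad(out, pad, mode="edge")
        # local minimum over the window, vectorised via shifted views
        stack = np.stack([padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                          for dy in range(ksize) for dx in range(ksize)])
        eroded = stack.min(axis=0)
        out[mask] = eroded[mask]          # only the reflective pixels are filled
    return out.astype(gray.dtype)
```

After a few iterations even the centre of a small highlight is replaced by the surrounding tissue value, which is the effect the filling step relies on.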
SIFT (Scale-Invariant Feature Transform) is a classic algorithm in image registration. After a DOG (Difference of Gaussian) pyramid is constructed, local extreme points of the DOG are selected, as shown in FIG. 2, for use in selecting the subsequent feature points.
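The 26-neighbour extremum test of FIG. 2 can be sketched as follows, given three adjacent DOG layers (the contrast threshold `thresh` is an illustrative assumption; a full SIFT detector additionally interpolates sub-pixel positions and rejects edge responses):

```python
import numpy as np

def dog_extrema(d_prev, d_cur, d_next, thresh=0.01):
    """Find local extrema of the middle DOG layer against its 26 neighbours
    (8 in-layer plus 9 in each adjacent scale). Returns (row, col) coordinates."""
    h, w = d_cur.shape
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = d_cur[y, x]
            if abs(v) < thresh:          # reject low-contrast responses
                continue
            cube = np.stack([d_prev[y-1:y+2, x-1:x+2],
                             d_cur[y-1:y+2, x-1:x+2],
                             d_next[y-1:y+2, x-1:x+2]])
            # strict extremum: the centre value alone attains the max or min
            if (v == cube.max() or v == cube.min()) and (cube == v).sum() == 1:
                pts.append((y, x))
    return pts
```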
Step 3, screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set; screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
due to the influence of the light reflection region, inaccuracy occurs in SIFT image registration, and extreme point screening is needed. And screening is carried out after the extreme point of the DOG is calculated, and the screened extreme point does not need to participate in the calculation of subsequent feature points, such as the calculation of features such as key point directions, and the calculation speed is further improved.
The first set is a set of reflective region pixel points, if a point in the first extreme point set is too close to a reflective region in the first set, the extreme point may be affected by the reflective region, and the extreme point is removed from the first extreme point set, which is beneficial to improving the registration accuracy. The method adopts an average distance mode to screen and filter the first extreme point set, and screens and filters the second extreme point set based on the same consideration. The specific manner will be explained in the following embodiments, which are not described herein again.
And 4, selecting key points from the first extreme point set and the second extreme point set according to a key point selection mode in the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
Once the DOG extreme points of the grayed RGB image and the grayed NIR image have been obtained and screened, the feature vectors of the key points are computed with the SIFT method, and registration of the RGB image and the NIR image is then completed.
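A minimal sketch of this registration step might pair descriptors with Lowe's ratio test and fit a least-squares affine transform (no RANSAC; the function name and the 0.75 ratio are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def match_and_register(desc1, pts1, desc2, pts2, ratio=0.75):
    """Match descriptor rows with the ratio test, then estimate a least-squares
    2x3 affine transform mapping image-2 points onto image-1 points."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:     # best match clearly better than 2nd best
            matches.append((i, j))
    src = np.array([pts2[j] for _, j in matches], float)
    dst = np.array([pts1[i] for i, _ in matches], float)
    # solve dst ≈ A @ [x, y, 1] for the 2x3 affine matrix A
    A = np.hstack([src, np.ones((len(src), 1))])
    affine, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return affine.T, matches
```

In practice an outlier-robust estimator (e.g. RANSAC) would be used instead of plain least squares, but the flow — match feature vectors, then solve for the transform — is the same.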
Embodiment 2: the step 3 specifically includes:
step 31, for each extreme point in the first extreme point set, calculating an average value of distances between the extreme point and the N pixel points closest to the extreme point in the first set according to the coordinate of the extreme point, if the average value is smaller than a first preset value, removing the extreme point from the first extreme point set, otherwise, keeping the extreme point in the first extreme point set;
and step 32, for each extreme point in the second extreme point set, calculating an average value of the distances between the extreme point and the N pixel points closest to the extreme point in the second set according to the coordinates of the extreme point, if the average value is smaller than a second preset value, removing the extreme point from the second extreme point set, otherwise, keeping the extreme point in the second extreme point set.
The first set contains the pixel points of the reflective regions of the RGB image, and the first extreme point set contains the local DOG extreme points extracted by SIFT. If an extreme point is too close to a reflective region it is easily affected by that region and is clearly not a good extreme point; in particular, an extreme point lying inside the first set, i.e., inside a reflective region, must be deleted from the first extreme point set to avoid harming subsequent registration. Concretely, for each extreme point in the first extreme point set the distances to the pixel points of the first set are computed from their coordinates; if the average distance to the N nearest pixel points is smaller than the first preset value, the extreme point lies in or near a reflective region and is removed from the first extreme point set. The same is done for the second extreme point set.
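The screening rule of steps 31 and 32 can be sketched directly (the values of N and the preset threshold are illustrative here; Embodiments 3 and 4 derive the preset values from the reflective-area ratio):

```python
import numpy as np

def screen_extrema(extrema, reflective_pts, n=5, preset=8.0):
    """Keep only extreme points whose mean Euclidean distance to their n nearest
    reflective pixels is at least `preset`; points near a highlight are dropped."""
    ref = np.asarray(reflective_pts, float)
    kept = []
    for p in extrema:
        d = np.linalg.norm(ref - np.asarray(p, float), axis=1)
        nearest = np.sort(d)[:n]
        if nearest.mean() >= preset:      # far from every highlight: keep it
            kept.append(p)
    return kept
```

A brute-force distance computation is shown for clarity; with many reflective pixels a k-d tree lookup would serve the same purpose faster.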
Embodiment 3, the method for calculating the first preset value includes:
and calculating the ratio of the number of the pixel points in the non-first set in the RGB image to the number of the pixel points in the RGB image, and obtaining a first preset value according to the ratio and a first set preset value.
The larger the first preset value, the more extreme points are removed from the first extreme point set; if the reflective regions are large or numerous, i.e., the first set is large, too many extreme points could be eliminated. To avoid eliminating excessive extreme points, the first preset value is derived from the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image. In a specific embodiment, the first preset value is the product of this ratio and the first set preset value. The first set preset value is a system default parameter related to the luminous intensity of the endoscope's cold light source and to the part the endoscope observes.
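Under that reading, the first preset value could be computed as follows (`base` stands in for the first set preset value, a device-dependent default the patent leaves open):

```python
def preset_value(n_reflective, n_total, base=10.0):
    """Embodiment 3 sketch: the screening threshold is the non-reflective ratio
    times a base value, so large highlight areas shrink the threshold and
    fewer extreme points are removed."""
    ratio = (n_total - n_reflective) / n_total
    return ratio * base
```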
Embodiment 4, the method for calculating the second preset value includes:
and calculating the ratio of the number of pixel points in the NIR image in the non-second set to the number of pixel points in the NIR image, and obtaining a second preset value according to the ratio and a second preset value.
The second preset value is calculated by the same method as in Embodiment 3. The second set preset value is a system default parameter related to the near-infrared luminous intensity of the endoscope and to the part the endoscope observes.
Embodiment 5, as shown in fig. 3, after the step 4, further comprises a step 5:
and extracting a fluorescence region from the NIR image, and fusing the fluorescence region into the RGB image according to the registration mode.
After the RGB image and the NIR image are registered, the fluorescence region is extracted from the NIR image: specifically, the pixel value and coordinates of each pixel point in the fluorescence region of the NIR image are extracted, the coordinates are transformed according to the registration transform, and the pixel values are then fused into the RGB image.
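A hedged sketch of this fusion step, assuming a 2x3 affine registration transform and a green overlay whose weight scales with NIR intensity (the blending weight `alpha` is an illustrative assumption; the patent only states that the mark is usually displayed in green):

```python
import numpy as np

def fuse_fluorescence(rgb, nir, nir_mask, affine, color=(0, 255, 0), alpha=0.6):
    """Map each fluorescent NIR pixel through the registration affine and blend
    a green overlay into the RGB image at the mapped position."""
    out = rgb.astype(np.float32).copy()
    ys, xs = np.nonzero(nir_mask)
    ones = np.ones_like(xs, float)
    mapped = affine @ np.stack([xs.astype(float), ys.astype(float), ones])  # (2, n)
    mx = np.round(mapped[0]).astype(int)
    my = np.round(mapped[1]).astype(int)
    h, w = rgb.shape[:2]
    ok = (mx >= 0) & (mx < w) & (my >= 0) & (my < h)   # drop points mapped off-image
    # blend strength scales with the NIR intensity of the fluorescent pixel
    weight = alpha * (nir[ys[ok], xs[ok]].astype(np.float32) / 255.0)[:, None]
    out[my[ok], mx[ok]] = (1 - weight) * out[my[ok], mx[ok]] \
                          + weight * np.array(color, np.float32)
    return out.astype(np.uint8)
```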
Embodiment 6, the present invention also provides a fluorescent molecular image processing system, comprising the following modules:
the preprocessing module is used for acquiring an RGB image and an NIR image which are shot at the same time, and graying the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points of which the pixel values are larger than a threshold value in the NIR gray level image and putting the pixel points into a second set;
the extreme point acquisition module is used for filling the pixel points of the first set in the RGB gray level image and of the second set in the NIR gray level image respectively by an erosion method to obtain a first image and a second image, and extracting extreme points of the first image and the second image by using the DOG extreme point selection method in the SIFT detection method to obtain a first extreme point set of the first image and a second extreme point set of the second image;
the extreme point screening module is used for screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set, and screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
and the registration module is used for selecting key points from the first extreme point set and the second extreme point set according to a key point selection mode in the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
In embodiment 7, the extreme point screening module specifically includes a first extreme point screening module and a second extreme point screening module:
the first extreme point screening module is used for calculating the average value of the distance between the extreme point and N pixel points which are closest to the extreme point in the first set according to the coordinate of the extreme point for each extreme point in the first extreme point set, if the average value is smaller than a first preset value, the extreme point is removed from the first extreme point set, and otherwise, the extreme point is reserved in the first extreme point set;
and the second extreme point screening module is used for calculating the average value of the distance between the extreme point and the N pixel points closest to the extreme point in the second set according to the coordinates of the extreme point for each extreme point in the second extreme point set, and if the average value is smaller than a second preset value, the extreme point is removed from the second extreme point set, otherwise, the extreme point is reserved in the second extreme point set.
Embodiment 8, the method for calculating the first preset value includes:
calculating the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image, and obtaining the first preset value according to the ratio and a first set preset value;
embodiment 9, the method for calculating the second preset value includes:
calculating the ratio of the number of pixel points in the NIR image that are not in the second set to the total number of pixel points in the NIR image, and obtaining the second preset value according to the ratio and a second set preset value.
Embodiment 10, as shown in fig. 4, the system further comprises an image fusion module;
the image fusion module is used for extracting a fluorescence region from an NIR image and fusing the fluorescence region into the RGB image according to the registration mode.
Embodiment 11, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of embodiments 1-5.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented on a necessary general hardware platform, or by a combination of hardware and software. With this understanding, the essence of the above technical solutions, or the part that contributes beyond the prior art, may be embodied as a computer program product stored on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some of their technical features equivalently replaced, without departing from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A fluorescent molecule image processing method, characterized in that the method comprises the steps of:
step 1, acquiring an RGB image and an NIR image shot at the same time, and performing graying on the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points with pixel values larger than a threshold value in the NIR gray-scale image and putting the pixel points into a second set;
step 2, filling the pixel points in the first set of the RGB gray-scale image and in the second set of the NIR gray-scale image by an erosion method, to obtain a first image and a second image; extracting extreme points of the first image and the second image using the DoG (difference of Gaussians) extreme point selection method of the SIFT detector, to obtain a first extreme point set of the first image and a second extreme point set of the second image;
step 3, screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set; screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
step 4, selecting key points from the first extreme point set and the second extreme point set according to the key point selection mode of the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
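The set construction in step 1 can be sketched as follows; the `(row, col)` coordinate convention is an illustrative choice, and the same routine serves both the first set (RGB gray-scale image) and the second set (NIR gray-scale image).

```python
import numpy as np

def bright_pixel_set(gray, threshold):
    """Collect the coordinates of all pixels whose grey value exceeds
    the threshold, as described in step 1 of the method."""
    rows, cols = np.nonzero(gray > threshold)
    return set(zip(rows.tolist(), cols.tolist()))
```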
2. The method according to claim 1, wherein step 3 is specifically:
step 31, for each extreme point in the first extreme point set, calculating an average value of distances between the extreme point and the N pixel points closest to the extreme point in the first set according to the coordinates of the extreme point, if the average value is smaller than a first preset value, removing the extreme point from the first extreme point set, otherwise, keeping the extreme point in the first extreme point set;
and step 32, for each extreme point in the second extreme point set, calculating an average value of the distances between the extreme point and the N pixel points closest to the extreme point in the second set according to the coordinates of the extreme point, if the average value is smaller than a second preset value, removing the extreme point from the second extreme point set, otherwise, keeping the extreme point in the second extreme point set.
3. The method of claim 2, wherein the first predetermined value is calculated by:
calculating the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image, and obtaining the first preset value from this ratio and a first set preset value.
4. The method of claim 2, wherein the second predetermined value is calculated by:
calculating the ratio of the number of pixel points in the NIR image that are not in the second set to the total number of pixel points in the NIR image, and obtaining the second preset value from this ratio and a second set preset value.
5. The method of any one of claims 1-4, further comprising, after step 4, a step 5:
extracting a fluorescence region from the NIR image, and fusing the fluorescence region into the RGB image according to the registration obtained in step 4.
6. A fluorescent molecular image processing system, characterized in that the system comprises the following modules:
the preprocessing module is used for acquiring an RGB image and an NIR image which are shot at the same time, and graying the RGB image and the NIR image respectively to obtain an RGB gray image and an NIR gray image; extracting pixel points of which the pixel values are larger than a threshold value in the RGB gray-scale image and putting the pixel points into a first set; extracting pixel points with pixel values larger than a threshold value in the NIR gray-scale image and putting the pixel points into a second set;
the extreme point acquisition module is used for filling the pixel points in the first set of the RGB gray-scale image and in the second set of the NIR gray-scale image by an erosion method, to obtain a first image and a second image; and for extracting extreme points of the first image and the second image using the DoG (difference of Gaussians) extreme point selection method of the SIFT detector, to obtain a first extreme point set of the first image and a second extreme point set of the second image;
the extreme point screening module is used for screening the extreme points in the first extreme point set according to the first extreme point set and the coordinates of the pixel points in the first set; and for screening the extreme points in the second extreme point set according to the second extreme point set and the coordinates of the pixel points in the second set;
and the registration module is used for selecting key points from the first extreme point set and the second extreme point set according to a key point selection mode in the SIFT detection method, acquiring feature vectors of the key points, and completing registration of the RGB image and the NIR image according to the feature vectors of the key points of the first image and the feature vectors of the key points of the second image.
7. The system of claim 6, wherein the extreme point filtering module comprises a first extreme point filtering module and a second extreme point filtering module:
the first extreme point screening module is used to calculate, for each extreme point in the first extreme point set, the average distance between the extreme point and the N pixel points in the first set that are closest to it, based on the extreme point's coordinates; if the average is smaller than the first preset value, the extreme point is removed from the first extreme point set, and otherwise it is retained;
and the second extreme point screening module is used to calculate, for each extreme point in the second extreme point set, the average distance between the extreme point and the N pixel points in the second set that are closest to it, based on the extreme point's coordinates; if the average is smaller than the second preset value, the extreme point is removed from the second extreme point set, and otherwise it is retained.
8. The system of claim 7, wherein the first predetermined value is calculated by:
calculating the ratio of the number of pixel points in the RGB image that are not in the first set to the total number of pixel points in the RGB image, and obtaining the first preset value from this ratio and a first set preset value;
the calculation method of the second preset value comprises the following steps:
calculating the ratio of the number of pixel points in the NIR image that are not in the second set to the total number of pixel points in the NIR image, and obtaining the second preset value from this ratio and a second set preset value.
9. The system of any one of claims 6-8, further comprising an image fusion module;
the image fusion module is used for extracting a fluorescence region from the NIR image and fusing the fluorescence region into the RGB image according to the registration obtained by the registration module.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is adapted to carry out the method according to any one of claims 1-5.
CN202310137548.XA 2023-02-20 2023-02-20 Fluorescent molecular image processing method and system Active CN115841506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310137548.XA CN115841506B (en) 2023-02-20 2023-02-20 Fluorescent molecular image processing method and system


Publications (2)

Publication Number Publication Date
CN115841506A true CN115841506A (en) 2023-03-24
CN115841506B CN115841506B (en) 2023-05-02

Family

ID=85579884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310137548.XA Active CN115841506B (en) 2023-02-20 2023-02-20 Fluorescent molecular image processing method and system

Country Status (1)

Country Link
CN (1) CN115841506B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714254A (en) * 2009-11-16 2010-05-26 哈尔滨工业大学 Registering control point extracting method combining multi-scale SIFT and area invariant moment features
CN102622759A (en) * 2012-03-19 2012-08-01 苏州迪凯尔医疗科技有限公司 Gray scale and geometric information combined medical image registration method
CN106558073A (en) * 2016-11-23 2017-04-05 山东大学 Based on characteristics of image and TV L1Non-rigid image registration method
CN109410255A (en) * 2018-10-17 2019-03-01 中国矿业大学 A kind of method for registering images and device based on improved SIFT and hash algorithm
CN112215878A (en) * 2020-11-04 2021-01-12 中日友好医院(中日友好临床医学研究所) X-ray image registration method based on SURF feature points
US20220028091A1 (en) * 2020-07-24 2022-01-27 Apple Inc. Systems and Methods for Machine Learning Enhanced Image Registration
CN114445316A (en) * 2022-04-11 2022-05-06 青岛大学附属医院 Method for fusing fluorescence and visible light images of endoscope

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Li et al.: "Medical Image Registration Based on Log-Euclidean Covariance Matrix Descriptors" *
Huang Haibo et al.: "Research on Remote Sensing Image Registration Based on SIFT Algorithm", Laser Journal *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant