CN111598947A - Method and system for automatically identifying patient orientation by identifying features - Google Patents


Info

Publication number
CN111598947A
CN111598947A (application CN202010260772.4A)
Authority
CN
China
Prior art keywords: pixel, identification, value, image, filtered image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010260772.4A
Other languages
Chinese (zh)
Other versions
CN111598947B (en)
Inventor
肖建如
马科威
周振华
吕天予
吴志鹏
邵帅
吴哲宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaao Information Technology Development Co ltd
Original Assignee
Shanghai Jiaao Information Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaao Information Technology Development Co ltd
Priority to CN202010260772.4A
Publication of CN111598947A
Application granted
Publication of CN111598947B
Legal status: Active (current)
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for automatically identifying the orientation of a patient through identification features. A marker is placed according to a set rule, and a digital image of the patient's X-ray containing the marker is obtained with a CB machine. The digital image is filtered to reduce noise interference, giving a filtered image; the mean of the pixel gray values of the filtered image is computed, the pixels are screened against this mean, and the marker region of the filtered image is enhanced. The filtered image is binarized, connected domains are searched among the pixels with value 0, and the pixel set with the largest connected domain is recorded as the marker pixels. The marker orientation is obtained by comparing pixel features within the marker pixels, and the orientation of the patient in the digital image is derived from the marker orientation. By placing the marker according to a fixed rule and analyzing the imaged structure of this special marker in the patient's X-ray image, the invention solves the problem of automatically and quickly determining the orientation of the patient in the X-ray image.

Description

Method and system for automatically identifying patient orientation by identifying features
Technical Field
The invention relates to the technical field of image recognition, in particular to a method and a system for automatically recognizing the orientation of a patient through identification features.
Background
At present, surgical navigation requires registering the intraoperative two-dimensional X-ray image with the preoperative CT/MR three-dimensional image. Since the position and posture of the patient in the preoperative three-dimensional image are known, registration can be greatly accelerated if the orientation of the patient in the intraoperative two-dimensional image can be determined. Among existing ways of determining the patient orientation in an X-ray image, manual observation and manual input require the observer to have some experience in reading medical images and are inefficient; deep-learning methods involve a large amount of computation for prior training; and combining placed markers with manual observation and manual input is likewise relatively inefficient.
The prior art related to the present application is patent document CN109925057A, which discloses an augmented-reality-based navigation method and system for minimally invasive spine surgery. The method comprises the following steps: reconstructing a virtual three-dimensional image of the patient's spine; registering the virtual three-dimensional image with the patient space to obtain the position, in patient space, of the virtual focus point in the virtual three-dimensional image; projecting the surgical path planned in the virtual three-dimensional image into the patient space; generating a DRR image from the preoperative CT image, registering the DRR image with the intraoperative X-ray image in real time, and determining the actual focus point; controlling a robot to clamp the surgical instrument and operate on the actual focus point; and acquiring the real surgical scene in real time during the operation and outputting the acquired video signal on a 3D display, thereby realizing preoperative surgical path planning and accurate localization of the focus point.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to provide a method and system for automatically identifying patient orientation by identifying features.
According to the invention, the method for automatically identifying the orientation of the patient through the identification features comprises the following steps:
an image acquisition step: placing a marker according to a set rule, and obtaining a digital image of the patient's X-ray containing the marker by means of a CB machine;
an image enhancement step: filtering the digital image to reduce noise interference and obtain a filtered image, computing the mean of the pixel gray values of the filtered image, screening the pixels against this mean, and enhancing the marker region of the filtered image;
a binarization step: binarizing the filtered image, searching connected domains starting from the pixels with value 0 in the filtered image, finding the pixel set with the largest connected domain, and recording it as the marker pixels;
an orientation identification step: obtaining the marker orientation by comparing pixel features within the marker pixels, and deriving the orientation of the patient in the digital image from the marker orientation.
Preferably, the image enhancement step comprises:
a median filtering step: filtering the digital image with a median filter to obtain a filtered image;
a normalization step: computing the mean of the pixel gray values of the filtered image, setting gray values larger than the mean to 1, normalizing gray values smaller than the mean to the range from 0 to the mean, and thereby enhancing the marker region of the filtered image.
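Expressed as a formula (one plausible reading of this rule, since the exact rescaling of sub-mean gray values is not spelled out in the text; here g is a filtered gray value, \mu the image mean, and g_{\min} the smallest gray value):

$$
g'(x,y)=\begin{cases}
1, & g(x,y)\ge\mu\\
\dfrac{g(x,y)-g_{\min}}{\mu-g_{\min}}\,\mu, & g(x,y)<\mu
\end{cases}
$$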
Preferably, the binarization step comprises:
a thresholding step: setting a threshold and binarizing the filtered image with it: a pixel value in the filtered image below the threshold is set to 0, otherwise it is set to 1;
a pixel connectivity step: searching connected domains among the pixels with value 0 in the filtered image until all pixels with value 0 have been visited, and taking the pixel set with the largest connected domain as the marker pixels.
Preferably, the orientation identification step comprises:
an extremum determination step: determining the maximum and minimum of the marker pixels on the X axis, recorded as the X-axis extrema, and the maximum and minimum on the Y axis, recorded as the Y-axis extrema;
a region segmentation step: constructing a bounding box from the X-axis and Y-axis extrema and dividing it into 4 regions along its center lines;
a region comparison step: comparing the features of the 4 regions to obtain the marker orientation.
Preferably, the marker comprises a ring portion and an opening portion; the ring portion points toward the patient's head, and the opening portion points toward the patient's right hand.
According to the invention, a system for automatically identifying the orientation of a patient by identifying features is provided, comprising:
an image acquisition module: placing a marker according to a set rule, and obtaining a digital image of the patient's X-ray containing the marker by means of a CB machine;
an image enhancement module: filtering the digital image to reduce noise interference and obtain a filtered image, computing the mean of the pixel gray values of the filtered image, screening the pixels against this mean, and enhancing the marker region of the filtered image;
a binarization module: binarizing the filtered image, searching connected domains starting from the pixels with value 0 in the filtered image, finding the pixel set with the largest connected domain, and recording it as the marker pixels;
an orientation identification module: obtaining the marker orientation by comparing pixel features within the marker pixels, and deriving the orientation of the patient in the digital image from the marker orientation.
Preferably, the image enhancement module comprises:
a median filtering module: filtering the digital image with a median filter to obtain a filtered image;
a normalization module: computing the mean of the pixel gray values of the filtered image, setting gray values larger than the mean to 1, normalizing gray values smaller than the mean to the range from 0 to the mean, and thereby enhancing the marker region of the filtered image.
Preferably, the binarization module comprises:
a thresholding module: setting a threshold and binarizing the filtered image with it: a pixel value in the filtered image below the threshold is set to 0, otherwise it is set to 1;
a pixel connectivity module: searching connected domains among the pixels with value 0 in the filtered image until all pixels with value 0 have been visited, and taking the pixel set with the largest connected domain as the marker pixels.
Preferably, the orientation identification module comprises:
an extremum determination module: determining the maximum and minimum of the marker pixels on the X axis, recorded as the X-axis extrema, and the maximum and minimum on the Y axis, recorded as the Y-axis extrema;
a region segmentation module: constructing a bounding box from the X-axis and Y-axis extrema and dividing it into 4 regions along its center lines;
a region comparison module: comparing the features of the 4 regions to obtain the marker orientation.
Preferably, the marker comprises a ring portion and an opening portion; the ring portion points toward the patient's head, and the opening portion points toward the patient's right hand.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention simplifies the acquisition of patient X-ray images and improves acquisition efficiency, without requiring the observer to have extensive experience in reading medical images;
2. by placing a marker according to a fixed rule, the invention automatically identifies the patient's pose in the image without deep-learning models or prior training, which lowers the implementation difficulty of pose identification and makes the method easy to popularize and apply.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of the placement of the marker of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
According to the invention, when a patient is imaged with X-rays in the CB machine, the patient's pose in the image is automatically identified from a marker placed according to a fixed rule. By placing the marker according to this rule and analyzing the imaged structure of the special marker in the patient's X-ray image, the invention solves the problem of automatically and quickly determining the orientation of the patient in the X-ray image.
As shown in fig. 1, the present invention is implemented by the following steps:
an image acquisition step: placing a marker according to a set rule, and obtaining a digital image of the patient's X-ray containing the marker by means of a CB machine;
an image enhancement step: filtering the digital image to reduce noise interference and obtain a filtered image, computing the mean of the pixel gray values of the filtered image, screening the pixels against this mean, and enhancing the marker region of the filtered image;
a binarization step: binarizing the filtered image, searching connected domains starting from the pixels with value 0 in the filtered image, finding the pixel set with the largest connected domain, and recording it as the marker pixels;
an orientation identification step: obtaining the marker orientation by comparing pixel features within the marker pixels, and deriving the orientation of the patient in the digital image from the marker orientation.
Specifically, a rule for placing the marker is first set: as shown in fig. 2, the marker has the shape of the letter K, with the ring of the K aligned with the patient's head and the opening aligned with the patient's right hand. A digital image of the patient's X-ray containing the K-shaped marker is then obtained with the CB machine;
After the digital image is obtained, it is filtered with a median filter to reduce noise interference. Specifically, a 3 x 3 window is slid over the whole image; at each position the 9 pixel values inside the window are sorted and the middle value is taken as the output pixel value. The pixel gray values of the filtered image are then averaged: gray values larger than the mean are set to 1, and gray values smaller than the mean are normalized to the range from 0 to the mean, which enhances the marker region of the picture;
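By way of illustration only, a minimal sketch of this enhancement stage in Python with NumPy/SciPy; it assumes the gray values are first scaled to [0, 1] (consistent with the 0.8 threshold used below), and the exact rescaling of sub-mean values is an assumption, since the text does not spell it out:

```python
import numpy as np
from scipy.ndimage import median_filter

def enhance(image: np.ndarray) -> np.ndarray:
    """3 x 3 median filtering followed by the mean-based enhancement rule."""
    # Scale gray values to [0, 1]; the 0.8 binarization threshold used later assumes this range.
    img = image.astype(np.float64)
    img = (img - img.min()) / max(img.max() - img.min(), 1e-12)

    # Median filter: slide a 3 x 3 window, sort the 9 values, keep the middle one.
    filtered = median_filter(img, size=3)

    mean_val = filtered.mean()
    enhanced = filtered.copy()
    # Pixels brighter than the mean are set to 1 (background and soft tissue suppressed).
    enhanced[filtered >= mean_val] = 1.0
    # Pixels darker than the mean are rescaled into [0, mean], keeping the dense marker dark.
    below = filtered < mean_val
    if below.any():
        lo = filtered[below].min()
        enhanced[below] = (filtered[below] - lo) / max(mean_val - lo, 1e-12) * mean_val
    return enhanced
```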
Next, because the K-shaped marker is made of a dense material and therefore appears dark in the image, the filtered image is binarized with a threshold of 0.8: values below the threshold are set to 0 and the rest to 1. Connected domains are then searched among the pixels with value 0 in the picture until all pixels with value 0 have been visited, and the pixel set with the largest connected domain is taken as the marker pixels;
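A corresponding sketch of the binarization and connected-domain search; the use of scipy.ndimage.label and the choice of 8-connectivity are implementation assumptions rather than requirements stated in the text:

```python
import numpy as np
from scipy import ndimage

def find_marker_pixels(enhanced: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Binarize at the threshold and return the largest connected set of 0-valued pixels."""
    # Pixels below the threshold (the dense, dark marker) become 0, everything else 1.
    binary = np.where(enhanced < threshold, 0, 1)

    # Label connected regions of 0-pixels (8-connectivity is assumed here).
    structure = np.ones((3, 3), dtype=int)
    labels, num = ndimage.label(binary == 0, structure=structure)
    if num == 0:
        return np.empty((0, 2), dtype=int)

    # Keep the label with the most pixels: this set is taken as the marker pixels.
    counts = np.bincount(labels.ravel())
    counts[0] = 0                               # label 0 is the non-marker background
    largest = counts.argmax()
    return np.argwhere(labels == largest)       # (row, col) coordinates of the marker pixels
```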
Finally, the marker orientation is obtained by comparing pixel features within the largest connected domain, and the orientation of the patient in the image is derived from it. Specifically, the maximum and minimum X and Y coordinates of the largest connected domain are found; a bounding box is constructed from these extrema and divided into 4 regions along its center lines; the number of marker pixels in each region is counted, and the region with the smallest count corresponds to the open upper-right corner of the K shape, from which the specific orientation of the patient is determined.
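A sketch of this orientation stage; how the sparsest region is mapped to a concrete patient orientation depends on how the marker was placed, so the function below only returns the region index and the per-region counts:

```python
import numpy as np

def identify_orientation(marker_pixels: np.ndarray) -> tuple[int, np.ndarray]:
    """Split the marker bounding box into 4 regions and find the sparsest one.

    marker_pixels: (N, 2) array of (row, col) marker coordinates.
    Returns the index of the sparsest region (0 top-left, 1 top-right,
    2 bottom-left, 3 bottom-right in image coordinates) and the per-region counts.
    """
    rows, cols = marker_pixels[:, 0], marker_pixels[:, 1]
    # Bounding box from the X/Y extrema of the marker pixels.
    rmin, rmax = rows.min(), rows.max()
    cmin, cmax = cols.min(), cols.max()
    # Center lines divide the bounding box into 4 regions.
    rmid, cmid = (rmin + rmax) / 2.0, (cmin + cmax) / 2.0

    top, left = rows <= rmid, cols <= cmid
    counts = np.array([
        np.count_nonzero(top & left),     # 0: top-left
        np.count_nonzero(top & ~left),    # 1: top-right
        np.count_nonzero(~top & left),    # 2: bottom-left
        np.count_nonzero(~top & ~left),   # 3: bottom-right
    ])
    # The region with the fewest marker pixels corresponds to the open
    # upper-right corner of the K-shaped marker.
    return int(counts.argmin()), counts
```

Chaining the three sketches as identify_orientation(find_marker_pixels(enhance(xray))) returns the index of the open corner of the K-shaped marker together with the per-region counts; translating that index into the patient's actual orientation depends on the placement rule and is not prescribed here.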
Those skilled in the art will appreciate that, in addition to implementing the system, the apparatus and their modules provided by the present invention purely as computer-readable program code, the same steps can be realized entirely through logic programming, so that the system, the apparatus and their modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the apparatus and their modules provided by the present invention can be regarded as a hardware component; the modules included in them for implementing various programs can also be regarded as structures within the hardware component, and modules for implementing various functions can be regarded both as software programs implementing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A method for automatically identifying patient orientation by identifying features, comprising:
an image acquisition step: placing a marker according to a set rule, and obtaining a digital image of the patient's X-ray containing the marker by means of a CB machine;
an image enhancement step: filtering the digital image to reduce noise interference and obtain a filtered image, computing the mean of the pixel gray values of the filtered image, screening the pixels against this mean, and enhancing the marker region of the filtered image;
a binarization step: binarizing the filtered image, searching connected domains starting from the pixels with value 0 in the filtered image, finding the pixel set with the largest connected domain, and recording it as the marker pixels;
an orientation identification step: obtaining the marker orientation by comparing pixel features within the marker pixels, and deriving the orientation of the patient in the digital image from the marker orientation.
2. The method of claim 1, wherein the image enhancement step comprises:
a median filtering step: filtering the digital image with a median filter to obtain a filtered image;
a normalization step: computing the mean of the pixel gray values of the filtered image, setting gray values larger than the mean to 1, normalizing gray values smaller than the mean to the range from 0 to the mean, and thereby enhancing the marker region of the filtered image.
3. The method for automatically identifying patient orientation by identification features of claim 1, wherein the binarization step comprises:
a thresholding step: setting a threshold and binarizing the filtered image with it: a pixel value in the filtered image below the threshold is set to 0, otherwise it is set to 1;
a pixel connectivity step: searching connected domains among the pixels with value 0 in the filtered image until all pixels with value 0 have been visited, and taking the pixel set with the largest connected domain as the marker pixels.
4. The method for automatically identifying patient orientation by identification features of claim 1, wherein the orientation identification step comprises:
an extremum determination step: determining the maximum and minimum of the marker pixels on the X axis, recorded as the X-axis extrema, and the maximum and minimum on the Y axis, recorded as the Y-axis extrema;
a region segmentation step: constructing a bounding box from the X-axis and Y-axis extrema and dividing it into 4 regions along its center lines;
a region comparison step: comparing the features of the 4 regions to obtain the marker orientation.
5. The method of claim 1, wherein the marker comprises a ring portion and an opening portion, the ring portion pointing toward the patient's head and the opening portion pointing toward the patient's right hand.
6. A system for automatically identifying the orientation of a patient by identifying features, comprising:
an image acquisition module: placing a marker according to a set rule, and obtaining a digital image of the patient's X-ray containing the marker by means of a CB machine;
an image enhancement module: filtering the digital image to reduce noise interference and obtain a filtered image, computing the mean of the pixel gray values of the filtered image, screening the pixels against this mean, and enhancing the marker region of the filtered image;
a binarization module: binarizing the filtered image, searching connected domains starting from the pixels with value 0 in the filtered image, finding the pixel set with the largest connected domain, and recording it as the marker pixels;
an orientation identification module: obtaining the marker orientation by comparing pixel features within the marker pixels, and deriving the orientation of the patient in the digital image from the marker orientation.
7. The system of claim 6, wherein the image enhancement module comprises:
a median filtering module: filtering the digital image with a median filter to obtain a filtered image;
a normalization module: computing the mean of the pixel gray values of the filtered image, setting gray values larger than the mean to 1, normalizing gray values smaller than the mean to the range from 0 to the mean, and thereby enhancing the marker region of the filtered image.
8. The system of claim 6, wherein the binarization module comprises:
a thresholding module: setting a threshold and binarizing the filtered image with it: a pixel value in the filtered image below the threshold is set to 0, otherwise it is set to 1;
a pixel connectivity module: searching connected domains among the pixels with value 0 in the filtered image until all pixels with value 0 have been visited, and taking the pixel set with the largest connected domain as the marker pixels.
9. The system of claim 6, wherein the orientation identification module comprises:
an extremum determination module: determining the maximum and minimum of the marker pixels on the X axis, recorded as the X-axis extrema, and the maximum and minimum on the Y axis, recorded as the Y-axis extrema;
a region segmentation module: constructing a bounding box from the X-axis and Y-axis extrema and dividing it into 4 regions along its center lines;
a region comparison module: comparing the features of the 4 regions to obtain the marker orientation.
10. The system of claim 6, wherein the marker comprises a ring portion and an opening portion, the ring portion pointing toward the patient's head and the opening portion pointing toward the patient's right hand.
CN202010260772.4A 2020-04-03 2020-04-03 Method and system for automatically identifying patient position by identification features Active CN111598947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010260772.4A CN111598947B (en) 2020-04-03 2020-04-03 Method and system for automatically identifying patient position by identification features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010260772.4A CN111598947B (en) 2020-04-03 2020-04-03 Method and system for automatically identifying patient position by identification features

Publications (2)

Publication Number Publication Date
CN111598947A true CN111598947A (en) 2020-08-28
CN111598947B CN111598947B (en) 2024-02-20

Family

ID=72192019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010260772.4A Active CN111598947B (en) 2020-04-03 2020-04-03 Method and system for automatically identifying patient position by identification features

Country Status (1)

Country Link
CN (1) CN111598947B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019000653A1 (en) * 2017-06-30 2019-01-03 清华大学深圳研究生院 Image target identification method and apparatus
WO2019062631A1 (en) * 2017-09-30 2019-04-04 阿里巴巴集团控股有限公司 Local dynamic image generation method and device
CN110688871A (en) * 2019-09-19 2020-01-14 浙江工业大学 Edge detection method based on bar code identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
袁杰; 朱斐: "Edge detection based on median filtering and gradient sharpening" (基于中值滤波和梯度锐化的边缘检测) *

Also Published As

Publication number Publication date
CN111598947B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
EP3509013A1 (en) Identification of a predefined object in a set of images from a medical image scanner during a surgical procedure
US8929602B2 (en) Component based correspondence matching for reconstructing cables
US7970212B2 (en) Method for automatic detection and classification of objects and patterns in low resolution environments
CN106228548B (en) A kind of detection method and device of screen slight crack
CN106898044B (en) Organ splitting and operating method and system based on medical images and by utilizing VR technology
CN110223279B (en) Image processing method and device and electronic equipment
US20110150279A1 (en) Image processing apparatus, processing method therefor, and non-transitory computer-readable storage medium
CN110796659B (en) Target detection result identification method, device, equipment and storage medium
CN114022554B (en) Massage robot acupoint detection and positioning method based on YOLO
TWI684994B (en) Spline image registration method
CN113223004A (en) Liver image segmentation method based on deep learning
CN114972646B (en) Method and system for extracting and modifying independent ground objects of live-action three-dimensional model
CN111161295A (en) Background stripping method for dish image
CN114119695A (en) Image annotation method and device and electronic equipment
CN109919128A (en) Acquisition methods, device and the electronic equipment of control instruction
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN110647889B (en) Medical image recognition method, medical image recognition apparatus, terminal device, and medium
CN112861588B (en) Living body detection method and device
CN111598947B (en) Method and system for automatically identifying patient position by identification features
CN112634266B (en) Semi-automatic labeling method, medium, equipment and device for laryngoscope image
CN106097362B (en) The automatic of artificial circular mark detects and localization method in a kind of x-ray image
CN114972881A (en) Image segmentation data labeling method and device
CN114926635A (en) Method for segmenting target in multi-focus image combined with deep learning method
CN113610071A (en) Face living body detection method and device, electronic equipment and storage medium
CN115223034A (en) Automatic hole selection method and device for cryoelectron microscope

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant