CN112288796A - Method for extracting center of perspective image mark point

Info

Publication number
CN112288796A (application CN202011508920.6A; granted as CN112288796B)
Authority
CN
China
Prior art keywords
area, pixel, sub-area, local sub-area, mark
Prior art date
2020-12-18
Legal status
Granted
Application number
CN202011508920.6A
Other languages
Chinese (zh)
Other versions
CN112288796B (en)
Inventor
程敏
于福翔
Current Assignee
Tuodao Medical Technology Co., Ltd.
Original Assignee
Nanjing Tuodao Medical Technology Co., Ltd.
Priority date / Filing date
2020-12-18
Application filed by Nanjing Tuodao Medical Technology Co., Ltd.
Priority to CN202011508920.6A
Publication of CN112288796A: 2021-01-29
Application granted; publication of CN112288796B: 2021-03-23
Legal status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00 Image analysis
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/11: Region-based segmentation
    • G06T7/13: Edge detection
    • G06T7/136: Segmentation or edge detection involving thresholding
    • G06T7/187: Segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T2207/10116: X-ray image (indexing scheme, image acquisition modality)


Abstract

The invention discloses a method for extracting the centers of marker points in a perspective image, comprising the steps of: (1) partitioning the perspective image into a central area and its outer area; (2) processing the central area to obtain the positions of all marker points there, and computing a local sub-area for each central-area marker point; (3) computing the outer-area marker positions and their local sub-areas from the design parameters, thereby obtaining the local sub-areas of all marker points on the perspective image; (4) performing edge extraction on all local sub-areas to obtain the pixel contour of each marker point, and computing from that contour the sub-pixel boundary of the marker in each local sub-area; (5) performing least-squares circle fitting on the sub-pixel boundaries in all the local sub-areas obtained in step (4); the fitted circle center is the center coordinate of the marker point. Because the marker extraction is confined to small local sub-areas, image quality, noise or interfering structures elsewhere in the image cannot disturb the extraction, so the method has high stability.

Description

Method for extracting center of perspective image mark point
Technical Field
The invention relates to the field of image processing, and in particular to a method for extracting the centers of marker points in perspective images.
Background
Extracting the image coordinates of circle centers in space is one of the key technologies of visual navigation and measurement systems, and fast, high-precision extraction of planar circle centers is a key problem in systems that perform visual navigation with a C-arm X-ray machine. For the distortion-correction problem of C-arm perspective images, a common solution is to mount a flat plate structural member covered with steel balls on the end of the C-arm image intensifier (a front view of the plate is shown in Figure 1). The image coordinates of the marker (steel-ball) centers are extracted from a perspective image containing the coplanar markers, and distortion correction is completed by fitting a distortion model. Stable and efficient extraction of the marker centers on the perspective image therefore directly determines the stability and accuracy of the image-correction function.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a robust, fast and efficient marker-center extraction method for the coplanar, topologically arranged marker points on X-ray machine perspective images; the method offers high extraction precision, strong adaptability and high efficiency.
The technical scheme is as follows:
a perspective image mark point center extraction method includes the steps:
(1) partitioning the perspective image into a central area of set diameter and the outer area around it, a flat plate structural member carrying several groups of collinear marker points being mounted in the perspective path;
(2) processing the central area with a segmentation algorithm and a connected-domain algorithm to obtain the positions of all marker points there, taking the centroid of each marker as its center, and computing a local sub-area for each central-area marker, a local sub-area being defined as a rectangular region whose side length is twice the marker diameter and whose center is the marker center;
(3) computing the outer-area marker positions and their local sub-areas from the design parameters, thereby obtaining the local sub-areas of all marker points on the perspective image;
(4) performing edge extraction on all the local sub-areas obtained in step (3) to obtain the pixel contour of each marker, computing the centroid of each contour, and computing from it the sub-pixel boundary of the marker in each local sub-area;
(5) performing least-squares circle fitting on the sub-pixel boundaries in all the local sub-areas obtained in step (4), the fitted circle center being the center coordinate of the marker point.
In step (2), the central area is processed with the segmentation and connected-domain algorithms to obtain the positions of part of the marker points, and the positions of the remaining markers are computed from the design parameters, so that the positions of all central-area markers are obtained.
In step (3), the positions of the outer-area marker points and their local sub-areas are computed from the design parameters by a dynamic-deformation method, specifically:
(21) let local sub-area r_i be a sub-area whose position has been determined, with marker point m_i and sub-area center c_i; from the design parameters, compute the position r_i+1' of the local sub-area of the adjacent marker further out, whose center is c_i+1';
(22) the known local sub-area r_i+1' contains marker point m_i+1; the sub-area deformation offset is then the offset vector from the sub-area center c_i+1' to the marker point m_i+1; applying this offset to r_i+1' updates it to local sub-area r_i+1;
(23) repeat steps (21) to (22) until the positions and local sub-areas of all outer-area markers are obtained.
The edge extraction in step (4) uses the Canny operator.
Computing the sub-pixel boundary of the marker in each local sub-area in step (4) specifically comprises:
(41) letting p_c be the centroid of the pixel contour; for each pixel point p_i on the contour, defining a square region of predetermined pixel width centered on p_i and taking the segment of the line p_c p_i whose pixel points lie inside that square;
(42) determining the precise sub-pixel boundary position corresponding to the pixel point by the moment method;
(43) repeating steps (41) to (42) to obtain the sub-pixel boundary of the marker in the local sub-area.
In step (42), the precise sub-pixel boundary position for a pixel point is determined by the moment method as follows:
establish a coordinate system whose x axis is the pixel distance from a pixel point p_i on the line p_c p_i to the contour centroid p_c, and whose y axis is the gray value; take at least 4 pixels in sequence from the square-region boundary along the line toward the pixel point, add the pixel p_i, and plot the resulting curve;
the gray value at the precise sub-pixel boundary position p_k is (h1 + h2)/2, where h1 and h2 are the means of the pixels taken in sequence from each boundary of the square region along the line toward the pixel point; the precise boundary position p_k is computed from this gray value, and k is its distance to the contour centroid p_c.
Advantages: because the marker-extraction operations of the invention are confined to small local sub-areas, image quality, noise or interfering structures elsewhere in the image cannot disturb the extraction, so the stability is high; and since the final sub-pixel boundary is likewise determined on the local image within each sub-area, the precision is high.
Drawings
Fig. 1 is a schematic view of a flat panel structure.
FIG. 2 is a flow chart of the present invention.
Fig. 3 is a perspective image schematic diagram.
Fig. 4 is a schematic view of the connected domains in the central region.
Fig. 5 is a schematic diagram of the local sub-areas defined for the central-region marker points.
Fig. 6 is a schematic diagram of the dynamic deformation process.
Fig. 7 is a schematic diagram of all local sub-areas defined on the perspective image.
Fig. 8 is a diagram illustrating the extracted pixel contours of all marker points.
Fig. 9 shows the pixel contour in a single local sub-area.
Fig. 10 is a schematic diagram of the sub-pixel boundary calculation.
Detailed Description
The invention is further elucidated below with reference to the drawings and specific embodiments.
The perspective-image marker-center extraction method of the invention addresses the need to extract the centers of coplanar steel balls in X-ray machine perspective images. A perspective image containing coplanar steel balls is shown in Figure 3 (a goat spine image); the main flow is shown in Figure 2 and comprises the following steps:
(1) Partition the perspective image into a central region C1 and an outer region C2. Because the outer area suffers larger distortion and therefore needs more steel-ball markers as constraints, several groups of collinear steel balls are laid out on the flat plate: in the central region C1 the balls are sparse, so they occlude little of the image content there, as shown in Figure 1, while in the outer region C2 they are dense. In practice no strong constraint on the ball layout is needed; it suffices to fix a reference area according to the marker topology chosen at design time. The reference area should lie as near the middle of the image as possible and be small, so that image distortion or deformation does not shift the relative positions of the marker images within it far from the design. The marker positions in the other areas can then be computed from the marker images of the reference area; in this invention the reference area is the central region.
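As a minimal illustration of this partition, the regions can be built as boolean masks, assuming a disk-shaped central region whose diameter is a design parameter (the function name and shapes here are hypothetical, not from the patent):

```python
import numpy as np

def partition_image(shape, center_diameter):
    """Split an image grid into a central disk C1 and the outer region C2.

    Returns two boolean masks of the given shape. The central-region
    diameter is a design parameter of the calibration plate, not
    derived from the image itself.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    central = dist2 <= (center_diameter / 2.0) ** 2  # inside disk C1
    outer = ~central                                  # everything else is C2
    return central, outer
```

The two masks partition the image exactly, so any later processing can be restricted to one region by simple indexing.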
(2) For the central region C1, first segment with a generic adaptive-threshold binarization to obtain the connected domains inside C1; separate each dark connected domain with a connected-domain algorithm and filter out non-marker domains by shape, perimeter, area and similar parameters, giving a rough position for each marker, as shown in Figure 4. Take the centroid of each marker region as the marker center.
In this step it is not necessary to detect all marker points in the central region C1; detecting most of them is enough, since the positions of the rest can be inferred from the topology fixed at design time.
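The threshold-and-connected-domain step can be sketched as follows. This is not the patent's exact implementation: it uses a single illustrative global threshold instead of adaptive binarization, a 4-connectivity flood fill in place of a library connected-domain routine, and filters blobs by area only; all names and parameter values are assumptions.

```python
import numpy as np
from collections import deque

def find_marker_centroids(img, thresh, min_area=3, max_area=500):
    """Binarize a grayscale patch and return centroids of dark blobs.

    Markers (steel balls) appear dark on X-ray, so pixels below
    `thresh` are foreground. Blobs with area outside
    [min_area, max_area] are discarded as non-markers.
    """
    fg = img < thresh
    labels = np.zeros(img.shape, dtype=int)
    centroids = []
    next_label = 0
    h, w = img.shape
    for sy in range(h):
        for sx in range(w):
            if fg[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill one connected domain (4-connectivity).
                next_label += 1
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                pts = []
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and fg[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if min_area <= len(pts) <= max_area:
                    ys, xs = zip(*pts)
                    # Centroid as (x, y) image coordinates.
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

A real implementation would add the shape and perimeter filters the text mentions; the area filter alone already rejects single-pixel noise specks.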
(3) Define the local sub-areas of the C1 markers: a local sub-area is a rectangular region with side length roughly twice the marker diameter, centered on the marker center. Number all local sub-areas against a chosen reference according to the pre-designed topological rule, as shown in Figure 5.
(4) Using the sub-area numbers of the C1 markers obtained in step (3) together with the design-time topological rule, compute preliminary positions for the outer-region C2 markers and their local sub-areas by the dynamic-deformation method.
The core idea of dynamic deformation is that the relative positions of markers far apart differ considerably from the design, whereas adjacent markers are close together and their relative positions shift little. The offset between the local sub-area positions of adjacent markers can therefore be computed and carried over into the computation of the next local sub-area, so that the positions of all outer sub-areas are obtained dynamically. Adaptive-threshold segmentation is then applied inside each local sub-area to determine the centroid of the marker it contains, and the sub-area center is updated to that centroid.
the process is schematically shown in FIG. 6, with partial sub-regionsr i Is a local sub-area with determined position, and the mark point ism i c i The current local sub-area center is calculated by topology during design to obtain the local sub-area position of the adjacent mark point outsider i+1', the local sub-area center of which isc i+1', due tor i Andr i+1' are two adjacent subregions, so that a local subregionr i+1' inner mark point containing the local sub-aream i+1Then the local subregion deformation offset can be expressed as a mark pointm i+1To the center of its local sub-areac i+1' an offset vector, specifically denoted as
Figure DEST_PATH_IMAGE002
Applying the deformation offset tor i+1', updated to local sub-arear i+1(ii) a The same can be formed by local sub-regionsr i+1Calculating the local sub-area of the adjacent mark point outside the local sub-arear i+2
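Under the assumptions that the designed marker centers are known in order from the reference outward and that a `locate_marker` callback measures the actual marker centroid near a predicted center (both the callback and the function name are hypothetical), the dynamic-deformation walk can be sketched as:

```python
def propagate_subregions(design_centers, locate_marker, start_index=0):
    """Walk outward along a line of markers, absorbing distortion offsets.

    design_centers: ideal marker centers from the plate's design
    parameters, ordered from the reference marker outward.
    locate_marker(center): returns the measured marker centroid inside
    the local sub-area predicted at `center` (e.g. by thresholding the
    patch); here it is an assumed callback.

    Each predicted sub-area center is the previous *measured* center
    plus the designed step; the measured centroid then replaces the
    prediction, so the deformation offset m_i+1 - c_i+1' is absorbed
    before the next step instead of accumulating.
    """
    measured = [locate_marker(design_centers[start_index])]
    for i in range(start_index + 1, len(design_centers)):
        # Designed step between adjacent markers.
        step = (design_centers[i][0] - design_centers[i - 1][0],
                design_centers[i][1] - design_centers[i - 1][1])
        # Predicted center c_i+1' = last measured center + designed step.
        predicted = (measured[-1][0] + step[0], measured[-1][1] + step[1])
        # Measured marker m_i+1 replaces the prediction.
        measured.append(locate_marker(predicted))
    return measured
```

Because the offset is re-measured at every step, the prediction error stays on the order of one inter-marker distortion increment even when the total distortion across the plate is large.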
(5) Steps (3) and (4) yield the positions of all local sub-areas on the whole perspective image, as shown in Figure 7. Extract edges in every local sub-area with the Canny operator to obtain the marker pixel contours, and compute the centroid of each contour, as shown in Figure 8.
(6) From the pixel contour of each marker obtained in step (5), compute the sub-pixel boundary of the marker in each local sub-area.
the pixel outline of the mark point in a single local sub-area is shown in figure 9, and the black pixel point is the mark point pixel outline extracted from the local sub-area, so as top c Is the centroid of the pixel profile; aiming at pixel points on each pixel outlinep i Defining pixel pointsp i With 12 pixels as a wide square area as the center, a straight line is cut outp c p i The upper pixel point is positioned in the square area,determining the precise position of the sub-pixel boundary corresponding to the pixel point by adopting a simplified moment method, and performing the same treatment on the pixel points on other pixel outlines;
the schematic diagram of calculating the sub-pixel boundary is shown in FIG. 10, which is a straight line in the square regionp c p i From the upper pixel point to the pixel outline centroidp c Has a pixel distance ofxAxis in gray scale value ofyEstablishing coordinate system by axis, sequentially taking at least 4 pixels from square region boundary along straight line to pixel direction, and adding pixelp i Drawing a curve;
the sub-pixel boundary exact positionp k Has a gray value of (h 1+h 2)/2,h 1h 2Respectively obtaining the average value of 4 pixels which are sequentially taken from the boundary of the square area along the direction of the straight line to the pixel point, and obtaining the accurate position of the boundary of the sub-pixel by calculation according to the average valuep k And the centroid of the pixel profilep c Is a distance ofk
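A sketch of this 1-D edge localization along one profile line p_c to p_i follows. The sampling and interpolation details are assumptions: the patent's simplified moment method is only paraphrased here as "average 4 samples from each end, take the (h1 + h2)/2 level, and interpolate the crossing".

```python
def subpixel_edge(profile):
    """Locate an edge along a 1-D gray profile to sub-pixel precision.

    `profile` holds gray values sampled at unit spacing along the line
    p_c -> p_i (from the contour centroid outward through an edge
    pixel). h1 and h2 are the means of the first and last 4 samples
    (inside the dark marker / background outside), the edge gray level
    is (h1 + h2) / 2, and the crossing point is linearly interpolated.
    Returns the distance k from the first sample, or None if the
    profile never crosses the edge level.
    """
    assert len(profile) >= 8, "need at least 4 samples on each side"
    h1 = sum(profile[:4]) / 4.0    # inside the (dark) marker
    h2 = sum(profile[-4:]) / 4.0   # background outside
    level = (h1 + h2) / 2.0
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - level) * (b - level) <= 0 and a != b:
            # Linear interpolation of the crossing between samples i, i+1.
            return i + (level - a) / (b - a)
    return None
```

Running this for every contour pixel yields one sub-pixel boundary point per contour pixel, which together form the sub-pixel boundary of the marker.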
(7) Perform least-squares circle fitting on the sub-pixel boundary in every local sub-area; the fitted circle center is the center coordinate of each marker point.
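The final fit can be illustrated with the standard algebraic (Kasa) least-squares circle fit; this is one common realization of least-squares circle fitting, not necessarily the patent's exact formulation:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit to 2-D boundary points.

    Solves the linear system for D, E, F in
        x^2 + y^2 + D*x + E*y + F = 0,
    then recovers the center (-D/2, -E/2) and the radius.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r
```

The algebraic fit is linear, hence fast and free of initialization issues; for noisy boundaries a geometric (orthogonal-distance) fit could refine it, but the sub-pixel boundary points here are already close to the true circle.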
In the invention, because the final marker-extraction operations are confined to small local sub-areas, image quality, noise or interfering structures elsewhere in the image cannot disturb the extraction, so the method has high stability; and since the sub-pixel boundary is likewise determined on the local image within each sub-area, the precision is high.
Although the preferred embodiments of the invention have been described in detail, the invention is not limited to the details of the foregoing embodiments; various equivalent changes (for example in number, shape or position) may be made to the technical solution within the technical spirit of the invention, and all such equivalent changes fall within its protection scope.

Claims (6)

1. A method for extracting the centers of perspective-image marker points, characterized by comprising the steps of:
(1) partitioning the perspective image into a central area of set diameter and the outer area around it, a flat plate structural member carrying several groups of collinear marker points being mounted in the perspective path;
(2) processing the central area with a segmentation algorithm and a connected-domain algorithm to obtain the positions of all marker points there, taking the centroid of each marker as its center, and computing a local sub-area for each central-area marker, a local sub-area being defined as a rectangular region whose side length is twice the marker diameter and whose center is the marker center;
(3) computing the outer-area marker positions and their local sub-areas from the design parameters, thereby obtaining the local sub-areas of all marker points on the perspective image;
(4) performing edge extraction on all the local sub-areas obtained in step (3) to obtain the pixel contour of each marker, computing the centroid of each contour, and computing from it the sub-pixel boundary of the marker in each local sub-area;
(5) performing least-squares circle fitting on the sub-pixel boundaries in all the local sub-areas obtained in step (4), the fitted circle center being the center coordinate of the marker point.
2. The method of claim 1, characterized in that: in step (2) the central area is processed with the segmentation and connected-domain algorithms to obtain the positions of part of the marker points, the positions of the remaining markers being computed from the design parameters so that the positions of all central-area markers are obtained.
3. The method of claim 1, characterized in that: in step (3), the positions of the outer-area marker points and their local sub-areas are computed from the design parameters by a dynamic-deformation method, specifically:
(21) letting local sub-area r_i be a sub-area whose position has been determined, with marker point m_i and sub-area center c_i; from the design parameters, computing the position r_i+1' of the local sub-area of the adjacent marker further out, whose center is c_i+1';
(22) the known local sub-area r_i+1' containing marker point m_i+1, the sub-area deformation offset then being the offset vector from the sub-area center c_i+1' to the marker point m_i+1, and applying this offset to r_i+1' updates it to local sub-area r_i+1;
(23) repeating steps (21) to (22) until the positions and local sub-areas of all outer-area markers are obtained.
4. The method of claim 1, characterized in that: the edge extraction in step (4) uses the Canny operator.
5. The method of claim 1, characterized in that: computing the sub-pixel boundary of the marker in each local sub-area in step (4) specifically comprises:
(41) letting p_c be the centroid of the pixel contour; for each pixel point p_i on the contour, defining a square region of predetermined pixel width centered on p_i and taking the segment of the line p_c p_i whose pixel points lie inside that square;
(42) determining the precise sub-pixel boundary position corresponding to the pixel point by the moment method;
(43) repeating steps (41) to (42) to obtain the sub-pixel boundary of the marker in the local sub-area.
6. The method of claim 5, characterized in that: in step (42) the precise sub-pixel boundary position for a pixel point is determined by the moment method as follows:
establish a coordinate system whose x axis is the pixel distance from a pixel point p_i on the line p_c p_i to the contour centroid p_c, and whose y axis is the gray value; take at least 4 pixels in sequence from the square-region boundary along the line toward the pixel point, add the pixel p_i, and plot the resulting curve;
the gray value at the precise sub-pixel boundary position p_k is (h1 + h2)/2, where h1 and h2 are the means of the pixels taken in sequence from each boundary of the square region along the line toward the pixel point; the precise boundary position p_k is computed from this gray value, and k is its distance to the contour centroid p_c.
CN202011508920.6A, filed 2020-12-18: Method for extracting center of perspective image mark point; granted as CN112288796B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011508920.6A CN112288796B (en) 2020-12-18 2020-12-18 Method for extracting center of perspective image mark point


Publications (2)

Publication Number Publication Date
CN112288796A 2021-01-29
CN112288796B 2021-03-23

Family

ID=74425940




Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993177B1 (en) * 2001-11-02 2006-01-31 Cognex Technology And Investment Corporation Gauging based on global alignment and sub-models
CN111274959A (en) * 2019-12-04 2020-06-12 北京航空航天大学 Oil filling taper sleeve pose accurate measurement method based on variable field angle
CN111583188A (en) * 2020-04-15 2020-08-25 武汉联影智融医疗科技有限公司 Operation navigation mark point positioning method, storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Ziteng et al., "Precise localization of circle-center projection points in visual calibration", Laser & Optoelectronics Progress *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012126A (en) * 2021-03-17 2021-06-22 武汉联影智融医疗科技有限公司 Mark point reconstruction method and device, computer equipment and storage medium
CN113012126B (en) * 2021-03-17 2024-03-22 武汉联影智融医疗科技有限公司 Method, device, computer equipment and storage medium for reconstructing marking point
CN113284160A (en) * 2021-04-23 2021-08-20 北京天智航医疗科技股份有限公司 Method, device and equipment for identifying operation navigation mark bead body
CN113284160B (en) * 2021-04-23 2024-03-12 北京天智航医疗科技股份有限公司 Method, device and equipment for identifying surgical navigation mark beads
CN113470056A (en) * 2021-09-06 2021-10-01 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN113470056B (en) * 2021-09-06 2021-11-16 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN114750147A (en) * 2022-03-10 2022-07-15 深圳甲壳虫智能有限公司 Robot space pose determining method and device and robot
CN114750147B (en) * 2022-03-10 2023-11-24 深圳甲壳虫智能有限公司 Space pose determining method and device of robot and robot

Also Published As

Publication number Publication date
CN112288796B (en) 2021-03-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Building 3, No. 34 Dazhou Road, Yuhuatai District, Nanjing, Jiangsu Province, 210000

Patentee after: Tuodao Medical Technology Co., Ltd.

Address before: Room 102-86, Building 6, 57 Andemen Street, Yuhuatai District, Nanjing, Jiangsu, 210000

Patentee before: Nanjing Tuodao Medical Technology Co., Ltd.