CN112837391B - Coordinate conversion relation obtaining method and device, electronic equipment and storage medium

Publication number: CN112837391B
Authority: CN (China)
Legal status: Active (granted)
Application number: CN202110254004.2A
Other versions: CN112837391A (Chinese-language publication)
Inventors: Gong Mingbo (宫明波), Yao Wenjie (要文杰), Xie Yongzhao (谢永召)
Assignee: Beijing Baihui Weikang Technology Co Ltd
Priority: CN202110254004.2A, filed by Beijing Baihui Weikang Technology Co Ltd; published as CN112837391A, granted as CN112837391B.

Classification: G06T 11/00 - 2D [Two Dimensional] image generation; G06T 11/003 - Reconstruction from projections, e.g. tomography; G06T 11/006 - Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods

Abstract

The embodiments of the present application provide a coordinate transformation relation obtaining method and apparatus, an electronic device and a computer storage medium. The method comprises the following steps: acquiring the position of a preset mark point in an image coordinate system as a first position; acquiring the position of the mark point in a robot coordinate system as a second position; constructing an error function based on the first position and the second position, with the translation transformation matrix and the rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; minimizing the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix; converting the error function, according to the optimal transformation relation, into an objective function taking the rotation transformation matrix as the independent variable; minimizing the objective function to obtain an optimal rotation transformation matrix; and obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix. With the embodiments of the present application, a coordinate conversion relation of higher accuracy can be obtained.

Description

Coordinate conversion relation obtaining method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of robot application, in particular to a coordinate transformation relation obtaining method and device, electronic equipment and a storage medium.
Background
During the operation of the robot, a number of different coordinate systems are usually involved.
Taking the example of a surgical robot performing a medical procedure, there may be involved an image coordinate system (a CT image coordinate system or the like) and a robot coordinate system with reference to the robot itself.
Since the surgical area is delineated by the doctor on the CT image, the position information of the surgical area (the target working area) is known in the CT image coordinate system. To ensure that the robot arm can reach the target working area, the position information of the target working area in the robot coordinate system must be obtained, and obtaining it in turn requires the coordinate transformation relationship between the robot coordinate system and the image coordinate system.
Disclosure of Invention
The application aims to provide a coordinate transformation relation obtaining method and device, electronic equipment and a storage medium, which are used for obtaining a coordinate transformation relation between a robot coordinate system and an image coordinate system.
According to a first aspect of the embodiments of the present application, there is provided a mark point extraction method, including:
acquiring a target image containing a plurality of mark points;
determining candidate mark point areas from the target image according to the image value range of the image points corresponding to the mark points in the imaging mode of the target image; the image value of each image point in the candidate mark point area is located in the image value range;
classifying each image point in the candidate mark point area to obtain a plurality of reference mark point sets; each reference mark point set represents a connected region, and corresponds to one mark point;
extracting feature points based on image points in each reference mark point set respectively to obtain initial mark points corresponding to each reference mark point set;
and searching each reference mark point set by adopting a search algorithm based on the initial mark points to obtain the positions of the mark points corresponding to the reference mark point set in the image coordinate system corresponding to the target image.
According to a second aspect of embodiments of the present application, there is provided a mark point extraction apparatus, including:
the image acquisition module is used for acquiring a target image containing a plurality of mark points;
a candidate mark point area determining module, configured to determine a candidate mark point area from the target image according to an image value range of an image point corresponding to the mark point in an imaging mode of the target image; the image value of each image point in the candidate mark point area is located in the image value range;
a reference mark point set obtaining module, configured to perform category division on each image point in the candidate mark point region to obtain multiple reference mark point sets; each reference mark point set represents a connected region, and corresponds to one mark point;
the initial mark point obtaining module is used for extracting feature points based on image points in each reference mark point set respectively to obtain initial mark points corresponding to each reference mark point set;
and the marking point position obtaining module is used for searching each reference marking point set by adopting a search algorithm based on the initial marking points to obtain the positions of the marking points corresponding to the reference marking point set in the image coordinate system corresponding to the target image.
According to a third aspect of the embodiments of the present application, there is provided a coordinate transformation relation obtaining method, including:
acquiring the position of a preset mark point under an image coordinate system as a first position; acquiring the position of the mark point under a robot coordinate system as a second position;
constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; wherein the error function is used to characterize: converting the first position by using the translation transformation matrix and the rotation transformation matrix to obtain an error between the converted position and the second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform the second position to obtain an error between the transformed position and the first position;
minimizing the error function to obtain an optimal transformation relation between the translation transformation matrix and the rotation transformation matrix;
converting the error function, according to the optimal transformation relation, into an objective function taking the rotation transformation matrix as an independent variable; minimizing the objective function to obtain an optimal rotation transformation matrix;
and obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix.
According to a fourth aspect of embodiments of the present application, there is provided a coordinate conversion relationship acquisition apparatus including:
the position acquisition module is used for acquiring the position of a preset mark point in an image coordinate system as a first position; acquiring the position of the mark point under a robot coordinate system as a second position;
an error function constructing module, configured to construct an error function based on the first position and the second position, with a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as arguments; wherein the error function is used to characterize: converting the first position by using the translation transformation matrix and the rotation transformation matrix to obtain an error between the converted position and the second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform the second position to obtain an error between the transformed position and the first position;
the optimal transformation relation obtaining module is used for carrying out minimization processing on the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix;
an optimal rotation transformation matrix obtaining module, configured to convert the error function, according to the optimal transformation relation, into an objective function taking the rotation transformation matrix as an argument, and to minimize the objective function to obtain an optimal rotation transformation matrix;
and the optimal translation transformation matrix obtaining module is used for obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix.
According to a fifth aspect of embodiments of the present application, there is provided an electronic apparatus, the apparatus including: one or more processors; a computer readable medium configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the marker point extracting method according to the first aspect or the coordinate conversion relationship obtaining method according to the third aspect.
According to a sixth aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, which when executed by a processor, implements the marker point extracting method according to the first aspect or implements the coordinate conversion relationship obtaining method according to the third aspect.
According to the coordinate transformation relation obtaining method and apparatus, the electronic device and the computer storage medium provided by the embodiments of the present application, the position of a preset mark point in an image coordinate system is acquired as a first position; the position of the mark point in a robot coordinate system is acquired as a second position; an error function is constructed based on the first position and the second position, with the translation transformation matrix and the rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables, wherein the error function is used to characterize: the error between the second position and the position obtained by transforming the first position with the translation transformation matrix and the rotation transformation matrix; or the error between the first position and the position obtained by transforming the second position with the translation transformation matrix and the rotation transformation matrix; the error function is minimized to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix; according to the optimal transformation relation, the error function is converted into an objective function taking the rotation transformation matrix as the independent variable; the objective function is minimized to obtain an optimal rotation transformation matrix; and an optimal translation transformation matrix is obtained based on the first position, the second position and the optimal rotation transformation matrix.
The embodiments of the present application provide a method for automatically acquiring the coordinate transformation relation between a robot coordinate system and an image coordinate system. By minimizing an error function that takes the translation transformation matrix and the rotation transformation matrix as independent variables, and then minimizing the objective function converted from that error function, an optimal rotation transformation matrix of high accuracy is obtained, and an optimal translation transformation matrix of correspondingly high accuracy is then obtained based on the first position, the second position and that optimal rotation transformation matrix. Therefore, with the scheme of the embodiments of the present application, a coordinate conversion relation of higher accuracy can be obtained.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart illustrating steps of a marker extraction method according to a first embodiment of the present application;
FIG. 2 is a flowchart illustrating steps of a marker extraction method according to a second embodiment of the present application;
fig. 3 is a flowchart illustrating steps of a coordinate transformation relation obtaining method according to a third embodiment of the present application;
fig. 4 is a flowchart illustrating steps of a coordinate transformation relation obtaining method according to a fourth embodiment of the present application;
FIG. 5 is a schematic structural diagram of a fifth exemplary embodiment of a device for extracting a mark point;
fig. 6 is a schematic structural diagram of a coordinate transformation relation obtaining apparatus in the sixth embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device in a seventh embodiment of the present application;
fig. 8 is a schematic diagram of the hardware structure of an electronic device according to an eighth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Example one
Referring to fig. 1, a flowchart illustrating steps of a marker point extraction method according to a first embodiment of the present application is shown.
The marker point extraction method of the embodiment includes the following steps:
step 101, acquiring a target image containing a plurality of marking points.
In the embodiment of the application, the number of the marking points is not limited, and can be set according to actual conditions.
The target image in this step may be a three-dimensional space image, and the specific obtaining manner of the target image is not limited. For example: the images may be CT images obtained by a CT apparatus, images obtained by a nuclear magnetic resonance apparatus, or images obtained by other imaging apparatuses.
Taking the scene of the medical operation performed by the surgical robot as an example, the marking point may be a ceramic pellet attached near the focus of the patient, and it should be noted that, in the embodiment of the present application, the material and the shape of the marking point are not limited, and only the ceramic pellet is taken as an example for explanation. The positioning plate containing the ceramic balls is attached to the position near the focus of the patient, and a CT three-dimensional image of the focus of the patient containing the ceramic balls is obtained by a CT device and is used as a target image when the positioning plate and the focus of the patient are kept relatively fixed.
Step 102, determining a candidate mark point area from the target image according to the image value range of the image points corresponding to the mark points in the imaging mode of the target image.
And the image value of each image point in the candidate mark point area is positioned in the image value range.
Under the same imaging mode, the image value ranges of the image points corresponding to the objects with different materials are different, so that the marker point region can be determined from the target image obtained in step 101 according to the image value ranges of the image points corresponding to the marker points.
Taking the above surgical robot scenario again as an example: when X-rays pass through a homogeneous, uniform object, their intensity decays exponentially. For tissues of the same thickness, the more strongly a tissue absorbs X-rays, the fewer X-rays pass through it; the absorption capacity of a tissue depends on its density, denser tissue absorbing X-rays more strongly and less dense tissue absorbing less and transmitting more. Table 1 below lists the CT values corresponding to different human tissues in a CT image:
TABLE 1
[Table 1, listing the CT values (in Hu) of different human tissues, appears as images (BDA0002962077140000061 and BDA0002962077140000071) in the original publication and is not reproduced here.]
In a CT image, the CT value of the ceramic pellets (greater than 2000 Hu) is much higher than that of normal human tissue, so the candidate mark point region can be determined from the target image according to the image value range of the ceramic pellets in the CT image (greater than 2000 Hu); specifically, the region made up of the image points in the CT image whose image value is greater than 2000 Hu is taken as the candidate mark point region.
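For concreteness, here is a minimal Python/NumPy sketch of this thresholding step (a sketch only: the function name, the array layout and the choice of returning voxel indices are assumptions for the example; the 2000 Hu threshold is the value given above):

```python
import numpy as np

def candidate_marker_region(ct_volume: np.ndarray, threshold_hu: float = 2000.0) -> np.ndarray:
    """Return the (z, y, x) indices of voxels whose CT value exceeds the threshold.

    ct_volume is assumed to be a 3-D array of CT values in Hounsfield units,
    e.g. reconstructed from a DICOM series; 2000 Hu is the ceramic-pellet
    threshold quoted in the text.
    """
    mask = ct_volume > threshold_hu   # boolean volume marking candidate image points
    return np.argwhere(mask)          # N x 3 array of candidate voxel coordinates
```

The returned coordinate list is the candidate mark point region that the classification of step 103 then partitions into reference mark point sets.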
Step 103, classifying each image point in the candidate mark point area to obtain a plurality of reference mark point sets; each reference mark point set represents a connected region, and each reference mark point set corresponds to one mark point.
Because each mark point is a physical object whose constituent image points are spatially contiguous and similar in their properties, the candidate mark point area can be divided into different parts by means of classification according to these properties, so as to obtain a plurality of reference mark point sets.
In the embodiment of the application, each image point in the candidate mark point region can be classified in any classification mode. For example: the classification method can adopt a traditional classification algorithm or a clustering algorithm, a machine learning mode, a model evolution (such as a Markov random field model method) mode and the like, and the specific mode adopted for classification is not limited.
With respect to the clustering algorithm, for example: the method can adopt a partitioning clustering algorithm such as K-means and the like, can also adopt a hierarchical clustering algorithm, can also adopt a fuzzy clustering algorithm and the like, and does not limit the specific form of the clustering algorithm.
Step 104, extracting feature points based on the image points in each reference mark point set respectively, to obtain the initial mark point corresponding to each reference mark point set.
Since different imaging modes are affected by different factors (e.g., noise), the image points in a reference mark point set cannot all be used directly as the mark point, and each mark point needs to be represented by only one image point. Therefore, according to factors such as the imaging mode, the processing mode and the application of the feature point extraction, a feature point can be extracted from the image points of each reference mark point set to serve as the corresponding initial mark point.
The feature points extracted in this step may be image points characterizing the position features of the reference marker point set, for example: the reference mark point set may be an image point corresponding to the coordinate mean of the image points in the reference mark point set, an image point with a gray scale gradient of 0 in the reference mark point set, an image point corresponding to the center of the maximum connected region in the reference mark point set, and the like.
Step 105, based on the initial mark points, searching each reference mark point set by adopting a search algorithm, to obtain the position of the mark point corresponding to each reference mark point set in the image coordinate system corresponding to the target image.
For each reference mark point set, after the corresponding initial mark point is obtained, the initial mark point is used as a search starting point, and the reference mark point set is searched by adopting a search algorithm, so that the position of the mark point corresponding to the reference mark point set in the image coordinate system corresponding to the target image can be obtained.
Any search algorithm can be adopted to search the reference mark point set, and in the embodiment of the present application, the specific form of the search algorithm is not limited, for example: a global search algorithm may be employed, or a local search algorithm may be employed, etc.
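Since the text deliberately leaves the search algorithm open, the following is only one plausible instantiation, shown for concreteness: a local search that starts from the initial mark point and repeatedly moves to the image-value-weighted centroid of the nearby points of the reference mark point set (the neighbourhood radius, the weighting scheme and all names are assumptions, not part of the patent):

```python
import numpy as np

def refine_marker_position(initial_pt, set_pts, set_vals, radius=3.0, n_iter=10):
    """Local search: iteratively move the estimate to the image-value-weighted
    centroid of the reference-set points within `radius` of it."""
    p = initial_pt.astype(float)
    for _ in range(n_iter):
        near = np.linalg.norm(set_pts - p, axis=1) <= radius
        if not near.any():
            break                                 # no nearby points; keep current estimate
        w = set_vals[near].astype(float)          # image values used as weights
        p_new = (set_pts[near] * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(p_new - p) < 1e-6:      # converged to a stable position
            return p_new
        p = p_new
    return p
```

Note that the result is not restricted to integer voxel coordinates, which relates to the sub-layer advantage over manual selection discussed next.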
In the embodiment of the application, after a target image containing a plurality of mark points is obtained, candidate mark point areas are determined, and a reference mark point set is then obtained by classifying the candidate mark point areas; feature points are extracted based on the image points in each reference mark point set to obtain initial mark points; and the position of each mark point in the image coordinate system is then obtained with a search algorithm. This process requires no manual participation and performs the mark point extraction automatically, so compared with manual extraction, which depends on the operator's experience, the extraction efficiency is higher. Meanwhile, during manual extraction there is usually an error between the initial mark point position judged by eye and the one finally selected by hand, and manual selection can only be performed on the integer slice layers that make up the target image, whereas the present scheme is subject to neither limitation, so the extraction accuracy is also higher.
The marker point extraction method of the present embodiment may be performed by any suitable electronic device with data processing capability, including but not limited to: servers, PCs, even high performance mobile terminals, etc.
Example two
Referring to fig. 2, a flowchart illustrating steps of a marker point extracting method according to a second embodiment of the present application is shown.
The marker point extraction method of the embodiment includes the following steps:
step 201, acquiring a target image containing a plurality of mark points.
In the embodiment of the application, the number of the marking points is not limited, and can be set according to actual conditions.
The target image in this step may be a three-dimensional space image, and the specific obtaining manner of the target image is not limited. For example: the images may be CT images obtained by a CT apparatus, images obtained by a nuclear magnetic resonance apparatus, or images obtained by other imaging apparatuses.
Taking the scene of the medical operation performed by the surgical robot as an example, the marking point may be a ceramic pellet attached near the focus of the patient, and it should be noted that, in the embodiment of the present application, the material and the shape of the marking point are not limited, and only the ceramic pellet is taken as an example for explanation. The positioning plate containing the ceramic balls is attached to the position near the focus of the patient, and a CT three-dimensional image of the focus of the patient containing the ceramic balls is obtained by a CT device and is used as a target image when the positioning plate and the focus of the patient are kept relatively fixed.
Step 202, determining candidate mark point areas from the target image according to the image value range of the corresponding image points of the mark points in the imaging mode of the target image.
And the image value of each image point in the candidate mark point area is positioned in the image value range.
Under the same imaging mode, the image value ranges of the image points corresponding to the objects of different materials are different, so that the marker point region can be determined from the target image obtained in step 201 according to the image value ranges of the image points corresponding to the marker points.
Step 203, classifying each image point in the candidate mark point area to obtain a plurality of reference mark point sets; each reference mark point set represents a connected region, and each reference mark point set corresponds to one mark point.
Because each mark point is a physical object whose constituent image points are spatially contiguous and similar in their properties, the candidate mark point area can be divided into different parts by means of classification according to these properties, so as to obtain a plurality of reference mark point sets.
In the embodiment of the application, each image point in the candidate mark point region can be classified in any classification mode. For example: the classification method can adopt a traditional classification algorithm or a clustering algorithm, a machine learning mode, a model evolution (such as a Markov random field model method) mode and the like, and the specific mode adopted for classification is not limited.
With respect to the clustering algorithm, for example: the method can adopt a partitioning clustering algorithm such as K-means and the like, can also adopt a hierarchical clustering algorithm, can also adopt a fuzzy clustering algorithm and the like, and does not limit the specific form of the clustering algorithm.
A clustering algorithm divides a data set into different clusters according to feature similarity, such that the difference within a cluster is minimized and the difference between different clusters is maximized. Optionally, therefore, in some embodiments a clustering algorithm may be used to cluster the image points in the candidate mark point region to obtain the plurality of reference mark point sets. Moreover, since the mean averages out systematic error so that the random error is small, and the method is convenient to implement, a mean-value (k-means) clustering algorithm may be used to cluster the image points in the candidate mark point region into the plurality of reference mark point sets.
Specifically: a plurality of initial clustering centers may be selected from the candidate mark point area, the number of initial clustering centers being the same as the number of mark points. The distance between each image point in the candidate mark point region and each initial clustering center is then calculated; this distance may be the Euclidean distance between the image point and the initial clustering center, the Manhattan distance, the Chebyshev distance, and so on. Each image point is assigned to the initial clustering center closest to it, so that different clusters are formed, and the sum of the distances between the image points in each cluster and their clustering center is calculated; in the embodiment of the present application this distance sum may be expressed as a sum of squared errors, or in other forms such as entropy, and the specific representation is not limited. Feature points are then extracted again based on the image points in each cluster: the mean of the image points in each cluster is calculated, the image point corresponding to the mean is taken as the new clustering center, and clustering is performed again. The iteration continues until a preset termination condition is met, at which point clustering ends and a plurality of reference mark point sets are obtained. The preset termination condition may be: the sum of squared errors is smaller than a set threshold; or the entropy is smaller than a set entropy threshold; or the number of iterations exceeds a specified number; or the value of the evaluation function no longer changes, or changes only by an extremely small amount, and so on.
When the mean clustering algorithm is adopted for clustering, the selection principle of the initial clustering center can be as follows: and taking the image point with the largest sum of the distances in the candidate mark point region as an initial clustering center. For example: if the number of the marker points is 3, the 3 image points with the largest sum of the distances in the candidate marker point region can be used as the initial clustering centers.
Alternatively, the selection principle of the initial cluster center may also be: and selecting an image point with an image value as a preset image value in the candidate mark point area as an initial clustering center. For example: as mentioned above, in the CT image, the CT value (greater than 2000Hu) of the ceramic pellet is much higher than that of the normal human tissue, and it is assumed that the number of the marker points is 3, so that 3 image points among the image points having the image value greater than 2000Hu can be used as the initial clustering center. According to the preset image values of the image points corresponding to the mark points, the initial clustering center is selected in the target image, and the image points with the image values being the preset image values are used as the initial clustering center, so that the initial clustering center is closer to the image points corresponding to the mark points, the clustering iterative process is more quickly converged, the clustering efficiency is improved, and the overall efficiency of extracting the mark points is further improved.
Still alternatively, the selection principle of the initial cluster center may be: and selecting image points of which the formed connecting lines are in a preset shape as initial clustering centers in the candidate mark point areas. The predetermined shape may be a real shape composed of all the mark points, and since a variety of clustering results may occur when clustering is performed if a regular shape (for example, an equilateral triangle, an isosceles triangle, or the like) is used as the predetermined shape, a customized irregular shape is generally used as the predetermined shape. For example: if the real shape formed by the 3 marking points is a customized irregular triangle, then 3 image points which form a connecting line of the irregular triangle can be selected as an initial clustering center in the candidate marking point area. The image points of which the formed connecting lines are in the preset shapes are selected as the initial clustering centers, so that the relative position relationship between the initial clustering centers is closer to the real relative position relationship between the mark points in the image coordinate system, the clustering iterative process is more quickly converged, the clustering efficiency is improved, and the overall efficiency of extracting the mark points can be improved.
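As a concrete illustration of step 203 and the mean-value clustering just described, here is a minimal hand-rolled k-means sketch (the greedy farthest-point initialization loosely corresponds to the first selection principle above; the tolerance, iteration cap and names are assumptions, and each cluster is assumed to stay non-empty, which holds for well-separated pellets):

```python
import numpy as np

def cluster_marker_sets(points: np.ndarray, k: int, max_iter: int = 100, tol: float = 1e-6):
    """Partition candidate image points (N x 3 voxel coordinates) into k
    reference mark point sets by mean-value (k-means) clustering."""
    # Initialization: start from the point farthest from the overall centroid,
    # then greedily add the point farthest from all centers chosen so far.
    centers = [points[np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    centers = np.array(centers, dtype=float)

    for _ in range(max_iter):
        # Assignment: each image point goes to its nearest cluster center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Update: each center becomes the coordinate mean of its cluster.
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.linalg.norm(new_centers - centers) < tol:   # termination condition
            centers = new_centers
            break
        centers = new_centers
    return [points[labels == j] for j in range(k)], centers
```

The returned centers are the coordinate means of the clusters, i.e. exactly the initial mark points that step 204 extracts when the mean is chosen as the feature point.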
Step 204, extracting feature points based on the image points in each reference mark point set respectively, to obtain the initial mark point corresponding to each reference mark point set.
Since different imaging modes are affected by different factors (e.g., noise), the image points in a reference mark point set cannot all be used directly as the mark point, and each mark point needs to be represented by only one image point. Therefore, according to factors such as the imaging mode, the processing mode and the application of the feature point extraction, a feature point can be extracted from the image points of each reference mark point set to serve as the corresponding initial mark point.
The feature points extracted in this step may be image points capable of characterizing the position features of the reference marker point set, for example: the reference mark point set may be an image point corresponding to the coordinate mean of the image points in the reference mark point set, an image point with a gray scale gradient of 0 in the reference mark point set, an image point corresponding to the center of the maximum connected region in the reference mark point set, and the like.
The mean value is used as a statistic to better avoid accidental errors and reduce systematic errors, so that optionally, in some embodiments, the mean value of each reference marker point set is selected as the initial marker point. Specifically, the method comprises the following steps: respectively extracting feature points based on image points in each reference mark point set to obtain initial mark points corresponding to each reference mark point set, wherein the method comprises the following steps:
and calculating the coordinate mean value of the image points in each reference mark point set, and taking the image point corresponding to the coordinate mean value as the initial mark point corresponding to the reference mark point set.
Step 205, based on the initial mark points, searching each reference mark point set by adopting a search algorithm, to obtain the position of the mark point corresponding to each reference mark point set in the image coordinate system corresponding to the target image.
For each reference mark point set, after the corresponding initial mark point is obtained, the initial mark point is used as a search starting point, and the reference mark point set is searched by adopting a search algorithm, so that the position of the mark point corresponding to the reference mark point set in the image coordinate system corresponding to the target image can be obtained.
Any search algorithm can be adopted to search the reference mark point set; in the embodiment of the present application, the specific form of the search algorithm is not limited, for example: a global search algorithm may be employed, or a local search algorithm may be employed, etc.
However, since there are a plurality of mark points, in order to obtain the conversion relationship between the image coordinate system and the robot coordinate system, it is necessary to determine the correspondence between the mark points in the image coordinate system and the mark points in the robot coordinate system.
Step 206, acquiring the position of each marking point in the robot coordinate system as a first position; and extracting the feature points based on the first position to obtain the position of the first feature point.
Step 207, taking the position of each mark point in the image coordinate system as a second position; and extracting the feature points based on the second position to obtain the position of the second feature point.
In this step, the position of the first feature point may be an average value of the first positions, or an image point with a gray scale gradient of 0 in the set of first positions, or an image point corresponding to the center of the largest connected region in the set of first positions, and so on.
Optionally, in some embodiments, extracting the feature point based on the first position to obtain a first feature point position includes:
calculating the position mean value of each first position to obtain a first feature point position;
extracting the feature points based on the second position to obtain a second feature point position, comprising:
and calculating the position mean value of each second position to obtain the position of the second feature point.
Step 208, establishing an initial coordinate system of the robot based on the position of the first characteristic point, the first position farthest from the position of the first characteristic point and the first position second farthest from the position of the first characteristic point; and establishing an image initial coordinate system based on the position of the second characteristic point, the second position farthest from the position of the second characteristic point and the second position second farthest from the position of the second characteristic point.
It should be noted that, in order to ensure that a unique robot initial coordinate system can be established, in this step there must be exactly one first position farthest from the first feature point position and exactly one first position second-farthest from it; correspondingly, in order to ensure that a unique image initial coordinate system can be established, there must be exactly one second position farthest from the second feature point position and exactly one second position second-farthest from it.
Optionally, in some embodiments, establishing the robot initial coordinate system based on the first feature point position, the first position farthest from the first feature point position, and the first position second-farthest from the first feature point position includes:
determining an axis where a connecting line of the position of the first characteristic point and a first position farthest from the position of the first characteristic point is located as a first coordinate axis in an initial coordinate system of the robot;
determining, as a second coordinate axis in the robot initial coordinate system, a straight line that is perpendicular to the first coordinate axis in the robot initial coordinate system and perpendicular to the line connecting the first feature point position and the first position second-farthest from the first feature point position;
and determining a third coordinate axis in the initial robot coordinate system based on the first coordinate axis in the initial robot coordinate system and the second coordinate axis in the initial robot coordinate system.
Specifically, a vector parallel to a first coordinate axis in the robot initial coordinate system and a vector parallel to a second coordinate axis in the robot initial coordinate system may be cross-multiplied to obtain a result vector, and an axis where the result vector is located is used as a third coordinate axis in the robot initial coordinate system.
Establishing an image initial coordinate system based on the second feature point position, the second position farthest from the second feature point position, and the second position second farthest from the second feature point position, including:
determining an axis where a connecting line of the position of the second characteristic point and a second position farthest from the position of the second characteristic point is located as a first coordinate axis in an image initial coordinate system;
determining a straight line which is perpendicular to a first coordinate axis in the image initial coordinate system and is perpendicular to a connecting line of a second characteristic point position and a second position which is next far away from the second characteristic point position as a second coordinate axis in the image initial coordinate system;
and determining a third coordinate axis in the image initial coordinate system based on the first coordinate axis in the image initial coordinate system and the second coordinate axis in the image initial coordinate system.
In the same way as for the robot initial coordinate system, a vector parallel to the first coordinate axis in the image initial coordinate system and a vector parallel to the second coordinate axis in the image initial coordinate system may be cross-multiplied to obtain a result vector, and the axis where the result vector lies is taken as the third coordinate axis in the image initial coordinate system.
In the above process of establishing the image initial coordinate system, the first coordinate axis is determined by the second feature point position and the second position farthest from it, for the following reason: the precision of mark point extraction is affected by the quality of the target image, so there is usually a certain error between a mark point position extracted from the target image and the true position of the mark point. Establishing the axis from the second feature point position and the mark point position farthest from it minimizes the resulting calculation error, that is, it minimizes the error between the coordinate axes of the established image initial coordinate system and the coordinate axes of the true image coordinate system. Therefore, determining the axis where the line connecting the second feature point position and the second position farthest from it lies as the first coordinate axis in the image initial coordinate system improves the accuracy with which the image initial coordinate system is established.
In addition, compared with the first coordinate axis, the line connecting the second feature point position and the second position second-farthest from it has the largest error relative to the corresponding axis of the true image coordinate system. In the embodiment of the present application, therefore, the second coordinate axis in the image initial coordinate system is first determined from the first coordinate axis together with this connecting line, and the third coordinate axis in the image initial coordinate system is then obtained from the first and second coordinate axes by the cross-product operation, further improving the accuracy with which the image initial coordinate system is established.
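To make the construction concrete, here is a minimal sketch of one way to build such an initial coordinate system (a literal reading of the axis conditions above; ties in the farthest/second-farthest distances are assumed not to occur, and the positions are assumed non-collinear; all names are assumptions):

```python
import numpy as np

def initial_coordinate_system(feature_pt: np.ndarray, positions: np.ndarray):
    """Build an initial coordinate system from a feature point position and the
    mark point positions farthest and second-farthest from it."""
    d = np.linalg.norm(positions - feature_pt, axis=1)
    order = np.argsort(d)                        # assumes unique farthest / second-farthest
    farthest, second = positions[order[-1]], positions[order[-2]]

    x = farthest - feature_pt
    x /= np.linalg.norm(x)                       # first axis: line to the farthest position
    v = second - feature_pt                      # line to the second-farthest position
    y = np.cross(x, v)
    y /= np.linalg.norm(y)                       # second axis: perpendicular to both lines
    z = np.cross(x, y)                           # third axis: cross product of the first two
    return np.stack([x, y, z]), feature_pt       # axis directions (rows) and origin
```

Running this once with the first positions and once with the second positions yields the two initial coordinate systems of step 208, from which the initial conversion relation of step 209 follows directly.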
Step 209, calculating the conversion relation between the robot initial coordinate system and the image initial coordinate system as the initial conversion relation.
Step 210, obtaining a corresponding relationship between the first position and the second position according to the first position, the second position and the conversion relationship of each mark point.
Optionally, in some embodiments, obtaining a corresponding relationship between the first position and the second position according to the first position, the second position, and the initial coordinate transformation relationship of each marker point includes:
respectively carrying out coordinate conversion on each first position based on the initial coordinate conversion relation to obtain a converted position of each first position in an image coordinate system;
for each converted position, determining a second position closest to the converted position as a second position having a corresponding relationship with the first position corresponding to the converted position;
or,
respectively carrying out coordinate conversion on each second position based on the initial coordinate conversion relation to obtain the converted position of each second position in the robot coordinate system;
and for each converted position, determining a first position closest to the converted position as a first position having a corresponding relationship with a second position corresponding to the converted position.
In the embodiment of the application, after a target image containing a plurality of mark points is obtained, candidate mark point areas are determined, and a reference mark point set is then obtained by classifying the candidate mark point areas; feature points are extracted based on the image points in each reference mark point set to obtain initial mark points; and the position of each mark point in the image coordinate system is then obtained with a search algorithm. This process requires no manual participation and performs the mark point extraction automatically, so compared with manual extraction, which depends on the operator's experience, the extraction efficiency is higher. Meanwhile, during manual extraction there is usually an error between the initial mark point position judged by eye and the one finally selected by hand, and manual selection can only be performed on the integer slice layers that make up the target image, whereas the present scheme is subject to neither limitation, so the extraction accuracy is also higher.
The marker point extraction method of the present embodiment may be performed by any suitable electronic device with data processing capability, including but not limited to: servers, PCs, even high performance mobile terminals, etc.
EXAMPLE III
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of a coordinate transformation relation obtaining method according to a third embodiment of the present application. After the marking points are extracted through the first embodiment or the second embodiment of the present application, the coordinate transformation relationship between the image coordinate system and the robot coordinate system may be obtained based on the positions of the extracted marking points.
The coordinate conversion relation obtaining method of the embodiment includes the following steps:
step 301, acquiring the position of a preset mark point in an image coordinate system as a first position; and acquiring the position of the marking point under the robot coordinate system as a second position.
For the coordinate conversion relation between the robot coordinate system and the image coordinate system, the coordinate conversion relation between the two coordinate systems can be calculated by introducing preset mark points and then based on the positions of the mark points in the image coordinate system and the positions of the mark points in the robot coordinate system.
The positions of the marker points in the robot coordinate system may be known, while the positions of the marker points in the image coordinate system may be obtained by: after an image containing the preset mark points is obtained, extracting the preset mark points in an image coordinate system corresponding to the image to obtain the positions of the mark points in the image coordinate system.
Taking the scene of the medical operation performed by the surgical robot as an example, the marking point may be a ceramic pellet attached near the focus of the patient, and it should be noted that, in the embodiment of the present application, the material and the shape of the marking point are not limited, and only the ceramic pellet is taken as an example for explanation. And the position of the mark point in the image coordinate system can be obtained by the following method: the positioning plate containing the ceramic balls is attached to the position near the focus of the patient, the CT three-dimensional image of the focus of the patient containing the ceramic balls is obtained through CT equipment when the positioning plate and the focus of the patient are kept relatively fixed, and then the position of the marking point under the image coordinate system is obtained based on the image coordinate system corresponding to the CT image.
In order to make the finally obtained coordinate transformation relationship more accurate, the number of mark points in the embodiment of the present application is at least 3, and among all the mark points there are 3 that are not collinear.
Step 302, constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables.
Wherein the error function is used to characterize: adopting a translation transformation matrix and a rotation transformation matrix to transform the first position to obtain an error between the transformed position and the second position; or, the error between the converted position and the first position is obtained by converting the second position by adopting the translation transformation matrix and the rotation transformation matrix.
Step 303, minimizing the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix.
Since the error function contains two independent variables, for convenience of calculation the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix may be obtained first, and the error function converted into a function containing only one independent variable, for example a function of the rotation transformation matrix only; the optimal rotation transformation matrix minimizing that function's value is then calculated, after which the optimal translation transformation matrix is obtained.
Specifically, in this step, the transformation relationship between the translational transformation matrix and the rotational transformation matrix when the error function is minimized may be calculated as the optimal transformation relationship.
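To make this step concrete: if the error function has the common least-squares form below (an assumption for illustration, with p_i the first positions, q_i the second positions, R the rotation transformation matrix and t the translation transformation matrix), setting its derivative with respect to t to zero yields the optimal relation in closed form:

```latex
E(R, t) = \sum_{i=1}^{n} \lVert R\,p_i + t - q_i \rVert^2 ,
\qquad
\frac{\partial E}{\partial t} = 2 \sum_{i=1}^{n} \left( R\,p_i + t - q_i \right) = 0
\;\Longrightarrow\;
t^{*} = \bar{q} - R\,\bar{p},
\qquad
\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i ,
\quad
\bar{q} = \frac{1}{n}\sum_{i=1}^{n} q_i .
```

Substituting t* back into E leaves a function of R alone, which is precisely the objective function into which step 304 converts the error function.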
Step 304, converting the error function, according to the optimal transformation relation, into an objective function taking the rotation transformation matrix as an independent variable; and minimizing the objective function to obtain an optimal rotation transformation matrix.
In this step, corresponding to step 303, a rotation transformation matrix for minimizing the objective function may be calculated as an optimal rotation transformation matrix.
Step 305, obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix.
For example: after the optimal rotation transformation matrix is obtained, the optimal rotation transformation matrix is adopted to transform the first position (namely, the coordinate vector of the marking point in the image coordinate system) to obtain a transformed coordinate vector, and then the difference between the transformed coordinate vector and the second position (namely, the coordinate vector of the marking point in the robot coordinate system) is determined as the optimal translation transformation matrix.
Alternatively, as another example:
after the optimal rotation transformation matrix is obtained, the optimal rotation transformation matrix is adopted to transform the second position (namely, the coordinate vector of the marking point in the robot coordinate system) to obtain a transformed coordinate vector, and then the difference between the transformed coordinate vector and the first position (namely, the coordinate vector of the marking point in the image coordinate system) is determined as the optimal translation transformation matrix.
The embodiments of the present application provide a method for automatically acquiring the coordinate transformation relation between a robot coordinate system and an image coordinate system. By minimizing an error function that takes the translation transformation matrix and the rotation transformation matrix as independent variables, and then minimizing the objective function converted from that error function, an optimal rotation transformation matrix of high accuracy is obtained, and an optimal translation transformation matrix of correspondingly high accuracy is then obtained based on the first position, the second position and that optimal rotation transformation matrix. Therefore, with the scheme of the embodiments of the present application, a coordinate conversion relation of higher accuracy can be obtained.
The coordinate transformation relation acquisition method of the present embodiment may be executed by any suitable electronic device having data processing capability, including but not limited to: servers, PCs, even high performance mobile terminals, etc.
In the application, the first embodiment and the third embodiment may be combined, or the second embodiment and the third embodiment may be combined, so that the whole process of the marker point extraction and the coordinate transformation relation acquisition is realized. In the combination of the embodiments, in step 301 of the third embodiment of the present application, the position of each marker point in the image coordinate system may be obtained by using the method in the first embodiment or the second embodiment, where the "position of the marker point in the image coordinate system" in the third embodiment of the present application is the "position of the marker point corresponding to the reference marker point set in the image coordinate system corresponding to the target image" in the first embodiment; in the third embodiment of the present application, "the position of the marker in the image coordinate system" is the "position of each marker in the image coordinate system" in the second embodiment.
Example four
Referring to fig. 4, a flowchart of steps of a coordinate transformation relation obtaining method according to a fourth embodiment of the present application is shown, where in order to make an obtained coordinate transformation relation more accurate, the number of mark points in the embodiment of the present application is multiple.
The coordinate conversion relation obtaining method of the embodiment includes the following steps:
Step 401, acquiring the position of each preset mark point in an image coordinate system as a first position; and acquiring the position of each mark point under the robot coordinate system as a second position.
In the embodiment of the present application, the position (i.e., the second position) of each marking point in the robot coordinate system may be generally known. The position (i.e., the first position) of each marking point in the image coordinate system can be obtained by extracting the marking point from the image containing the marking point. In the embodiment of the present application, the specific way of extracting the marking points is not limited. For example, an operator may observe the image and extract the preset marking points in the image coordinate system according to the attribute information of the marking points in the image, such as color and shape; or an existing marking point extraction method may be adopted to automatically acquire the marking points; and the like.
Step 402, acquiring a corresponding relation between each first position and each second position.
The first position and the second position which have corresponding relation belong to the same mark point.
For example: if there are three marking points (denoted 1, 2 and 3), there are three first positions A1, B1 and C1 and, correspondingly, three second positions A2, B2 and C2. If A1 and A2 are the position information of marking point 1, B1 and B2 the position information of marking point 2, and C1 and C2 the position information of marking point 3, then A1 corresponds to A2, B1 corresponds to B2, and C1 corresponds to C2.
When the number of the marking points is multiple, the corresponding relationship between each first position and each second position needs to be determined while the first position and the second position of each marking point are obtained.
The corresponding relation can be determined by the operator according to the relative position relation between the marking points, or can be automatically obtained through position coordinate transformation based on each first position and each second position.
Optionally, in one embodiment, the correspondence relationship may be obtained as follows:
extracting feature points based on the first positions to obtain first feature point positions; extracting feature points based on the second positions to obtain second feature point positions;
establishing a robot initial coordinate system based on the first characteristic point position, a first position farthest from the first characteristic point position and a first position second farthest from the first characteristic point position; establishing an image initial coordinate system based on the second characteristic point position, a second position farthest from the second characteristic point position and a second position second farthest from the second characteristic point position;
calculating a conversion relation between the initial coordinate system of the robot and the initial coordinate system of the image as an initial conversion relation;
and obtaining the corresponding relation between each first position and each second position according to each first position, each second position and the initial conversion relation.
Specifically, the above process is the same as the specific implementation manner of step 206 to step 210 in the second embodiment; for details, refer to the explanation in the second embodiment, which is not repeated here.
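As a rough sketch of the matching logic just described (the array layout, the homogeneous form of the initial conversion relation, and all names here are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

# firsts, seconds: (n, 3) arrays of marking-point positions in the two
# coordinate systems; T0: a 4x4 homogeneous matrix holding the initial
# conversion relation computed from the two initial coordinate systems.
def match_positions(firsts: np.ndarray, seconds: np.ndarray, T0: np.ndarray):
    homog = np.hstack([firsts, np.ones((len(firsts), 1))])  # homogeneous coords
    moved = (T0 @ homog.T).T[:, :3]                          # converted first positions
    # For each converted position, the nearest second position is its match.
    return [(i, int(np.argmin(np.linalg.norm(seconds - m, axis=1))))
            for i, m in enumerate(moved)]
```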
Step 403, constructing an error function based on the first positions, the second positions and the corresponding relation, by using a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables.
Wherein the error function is used to characterize: adopting a translation transformation matrix and a rotation transformation matrix to transform each first position to obtain an error between each transformed position and each second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform each second position to obtain the error between each transformed position and each first position.
Optionally, in some embodiments, a first preset formula may be adopted, where a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system are used as arguments, and an error function is constructed based on each of the first positions, each of the second positions, and the corresponding relationship, where the first preset formula is:
F(R, T) = Σ_{i=1}^{n} ω_i·‖(R·P_i + T) − q_i‖²

wherein R represents the rotation transformation matrix between the image coordinate system and the robot coordinate system; T represents the translation transformation matrix between the image coordinate system and the robot coordinate system; F(R, T) represents the error function with the translation transformation matrix and the rotation transformation matrix as arguments; n represents the number of the marking points; ω_i represents the preset weight value corresponding to the i-th marking point; ‖·‖² represents the square of the vector norm; P_i represents the coordinates of the i-th marking point in the image coordinate system and q_i represents the coordinates of the i-th marking point in the robot coordinate system, or P_i represents the coordinates of the i-th marking point in the robot coordinate system and q_i represents the coordinates of the i-th marking point in the image coordinate system.
Specifically, the coordinates of the mark points in the robot coordinate system in this step may be represented as a column vector composed of three-dimensional coordinate values of the mark points in the robot coordinate system, and likewise, the coordinates of the mark points in the image coordinate system may be represented as a column vector composed of three-dimensional coordinate values of the mark points in the image coordinate system.
The preset weight value corresponding to each marking point is used to represent the degree of influence of the position of that marking point on the accuracy of the obtained coordinate conversion relation, and may be set according to the actual situation, which is not limited here. For example, when the positions of all the marking points influence the accuracy of the obtained coordinate conversion relation equally, the preset weight value corresponding to each marking point may be set to 1.
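A minimal sketch of this error function (Python/NumPy; array shapes and names are illustrative assumptions) may read:

```python
import numpy as np

# First preset formula: F(R, T) = sum_i w_i * ||(R P_i + T) - q_i||^2.
# P, Q: (n, 3) marking-point coordinates in the two coordinate systems;
# w: (n,) preset weight values (e.g. np.ones(n) for equal influence).
def error_function(R: np.ndarray, T: np.ndarray,
                   P: np.ndarray, Q: np.ndarray, w: np.ndarray) -> float:
    residuals = (R @ P.T).T + T - Q        # transformed positions minus targets
    return float(np.sum(w * np.sum(residuals ** 2, axis=1)))
```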
Step 404, minimizing the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix.
Optionally, in some embodiments, when the error function is a function obtained by using the first preset formula, the error function may be minimized in the following manner, so as to obtain an optimal transformation relationship between the translation transformation matrix and the rotation transformation matrix:
The optimal transformation relation between the translation transformation matrix and the rotation transformation matrix is obtained by differentiating the error function and setting the derivative equal to 0:

T = q̄ − R·p̄

wherein

p̄ = (Σ_{i=1}^{n} ω_i·P_i) / (Σ_{i=1}^{n} ω_i),  q̄ = (Σ_{i=1}^{n} ω_i·q_i) / (Σ_{i=1}^{n} ω_i)
In order to find the minimum value of the error function, the partial derivative of F(R, T) in the first preset formula with respect to the translation transformation matrix T may be taken and set equal to 0, so as to obtain the transformation relation between the translation transformation matrix and the rotation transformation matrix as the optimal transformation relation. The specific derivation is as follows:

∂F/∂T = 2·Σ_{i=1}^{n} ω_i·(R·P_i + T − q_i) = 0

Recording:

p̄ = (Σ_{i=1}^{n} ω_i·P_i) / (Σ_{i=1}^{n} ω_i),  q̄ = (Σ_{i=1}^{n} ω_i·q_i) / (Σ_{i=1}^{n} ω_i)

the optimal transformation relation is then obtained as:

T = q̄ − R·p̄
Step 405, converting the error function into a target function taking the rotation transformation matrix as an independent variable according to the optimal transformation relation; and minimizing the target function to obtain an optimal rotation transformation matrix.
Optionally, in some embodiments, after the optimal transformation relation

T = q̄ − R·p̄

is obtained in step 404, it may be substituted into the first preset formula, so as to obtain an objective function with the rotation transformation matrix as the argument:

M(R) = Σ_{i=1}^{n} ω_i·‖R·(P_i − p̄) − (q_i − q̄)‖²
wherein M (R) represents an objective function with the rotation transformation matrix as an argument.
Minimizing the objective function M(R) gives the optimal rotation transformation matrix: R′ = V·Uᵀ
wherein,

W = Σ_{i=1}^{n} ω_i·(P_i − p̄)·(q_i − q̄)ᵀ

R′ is the optimal rotation transformation matrix; V and U are the two unitary matrices obtained by performing SVD decomposition on W; and Λ is the diagonal matrix obtained after the SVD decomposition of W.
Specifically, the process of obtaining the optimal rotation transformation matrix is as follows:
Decomposing the above M(R) gives:

M(R) = Σ_{i=1}^{n} ω_i·(‖P_i − p̄‖² + ‖q_i − q̄‖²) − 2·Σ_{i=1}^{n} ω_i·(q_i − q̄)ᵀ·R·(P_i − p̄)

According to the above, when

Σ_{i=1}^{n} ω_i·(q_i − q̄)ᵀ·R·(P_i − p̄)

takes its maximum value, M(R) takes its minimum value. The result of this expression is a scalar, and since the trace of a scalar equals the scalar itself, one can get:

Σ_{i=1}^{n} ω_i·(q_i − q̄)ᵀ·R·(P_i − p̄) = tr(R·W),  with W = Σ_{i=1}^{n} ω_i·(P_i − p̄)·(q_i − q̄)ᵀ as above

where tr() represents the trace of the matrix.
Since W is a square matrix, SVD decomposition may be performed on W, obtaining:

W = U·Λ·Vᵀ

wherein V and U are the two unitary matrices obtained by performing SVD on W, and Λ is the diagonal matrix obtained after the SVD decomposition.
Thus:

tr(R·W) = tr(R·U·Λ·Vᵀ) = tr(Λ·Vᵀ·R·U)
Since Λ is a diagonal matrix and Vᵀ·R·U is an orthogonal matrix, tr(Λ·Vᵀ·R·U) takes its maximum value, equal to the sum of the diagonal elements of Λ, when Vᵀ·R·U = I (that is, when Vᵀ·R·U is the identity matrix). That is, when Vᵀ·R·U = I,

Σ_{i=1}^{n} ω_i·(q_i − q̄)ᵀ·R·(P_i − p̄)

takes its maximum value and M(R) takes its minimum value; the corresponding R is then the optimal rotation transformation matrix, denoted R′. Since V and U are both unitary matrices (the product of each with its own transpose is the identity matrix), it follows that:

R′ = V·Uᵀ
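The derivation above can be exercised with the following sketch (Python/NumPy under the stated assumptions; note that a production Kabsch/Umeyama-style implementation would additionally correct the sign of det(R′) to exclude reflections, a step the text does not discuss):

```python
import numpy as np

# Steps 404-405 as derived above: weighted centroids give T = q_bar - R p_bar;
# SVD of W = sum_i w_i (P_i - p_bar)(q_i - q_bar)^T as W = U Lam V^T gives R' = V U^T.
def optimal_rotation(P: np.ndarray, Q: np.ndarray, w: np.ndarray) -> np.ndarray:
    p_bar = (w[:, None] * P).sum(axis=0) / w.sum()  # weighted centroid (image side)
    q_bar = (w[:, None] * Q).sum(axis=0) / w.sum()  # weighted centroid (robot side)
    X, Y = P - p_bar, Q - q_bar
    W = np.einsum('i,ij,ik->jk', w, X, Y)           # sum_i w_i x_i y_i^T
    U, _, Vt = np.linalg.svd(W)                     # W = U @ diag(Lam) @ Vt
    return Vt.T @ U.T                               # R' = V U^T
```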
Step 406, obtaining an optimal translation transformation matrix based on the first positions, the second positions and the optimal rotation transformation matrix.
Optionally, in some embodiments, the optimal translation transformation matrix may be obtained based on each of the first positions, each of the second positions, and the optimal rotation transformation matrix through a second preset formula, where the second preset formula is:
T′ = G − R′·G*

wherein R′ is the optimal rotation transformation matrix and T′ is the optimal translation transformation matrix. When P_i represents the coordinates of the i-th marking point in the image coordinate system, G* represents a vector obtained based on the coordinates of the marking points in the image coordinate system and G represents a vector obtained based on the coordinates of the marking points in the robot coordinate system; when P_i represents the coordinates of the i-th marking point in the robot coordinate system, G* represents a vector obtained based on the coordinates of the marking points in the robot coordinate system and G represents a vector obtained based on the coordinates of the marking points in the image coordinate system.
For example: when P_i represents the coordinates of the i-th marking point in the image coordinate system, G* may be the vector corresponding to the coordinates of a single marking point in the image coordinate system and, correspondingly, G the vector corresponding to the coordinates of that marking point in the robot coordinate system; or G* may be the vector obtained by averaging the coordinates of all the marking points in the image coordinate system and, correspondingly, G the vector obtained by averaging the coordinates of all the marking points in the robot coordinate system.
Taking the case of three marking points as an example, with P_i representing the coordinates of the i-th marking point in the image coordinate system (P_i is a column vector representing those coordinates, and i is a natural number greater than 0 and less than or equal to 3) and q_i representing the coordinates of the i-th marking point in the robot coordinate system: G* can be expressed as G* = P_i and, correspondingly, G as G = q_i; or G* can also be expressed as

G* = (1/3)·Σ_{i=1}^{3} P_i

and G, correspondingly, as

G = (1/3)·Σ_{i=1}^{3} q_i

and so on.
When P_i represents the coordinates of the i-th marking point in the robot coordinate system, G* may be the vector corresponding to the coordinates of a single marking point in the robot coordinate system and, correspondingly, G the vector corresponding to the coordinates of that marking point in the image coordinate system; or G* may be the vector obtained by averaging the coordinates of all the marking points in the robot coordinate system and, correspondingly, G the vector obtained by averaging the coordinates of all the marking points in the image coordinate system.
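A sketch of the second preset formula with the averaging choice for G and G* (Python/NumPy; names are illustrative assumptions):

```python
import numpy as np

# T' = G - R' G*, taking G* as the mean of the image-side coordinates and G
# as the mean of the robot-side coordinates. P, Q: (n, 3) arrays.
def optimal_translation_from_sets(R_opt: np.ndarray,
                                  P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    G_star = P.mean(axis=0)    # vector from coordinates in the image coordinate system
    G = Q.mean(axis=0)         # vector from coordinates in the robot coordinate system
    return G - R_opt @ G_star  # optimal translation transformation matrix T'
```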
Step 407, acquiring the position of the target operation area in the image coordinate system.
Taking the scene that the surgical robot performs the medical operation as an example, the target operation area may be an operation area to be operated, and the position of the target operation area may be determined from an image such as CT, that is, the position of the target operation area in the image coordinate system is obtained.
Step 408, performing coordinate conversion on the position of the target operation area in the image coordinate system according to the optimal rotation transformation matrix and the optimal translation transformation matrix to obtain the position of the target operation area in the robot coordinate system.
Specifically, the optimal rotation transformation matrix and the optimal translation transformation matrix may be fused to obtain a fused transformation matrix; then, based on the fused transformation matrix, coordinate conversion is performed on the position of the target operation area in the image coordinate system to obtain the position of the target operation area in the robot coordinate system.
Taking three-dimensional space as an example, the optimal rotation transformation matrix R′ (3×3) and the optimal translation transformation matrix T′ (3×1) can be fused by the following formula to obtain the fused transformation matrix H:

H = [ R′(3×3)  T′(3×1) ]
    [ 0(1×3)   1       ]
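Steps 407 and 408 can then be sketched as follows (assumed names; the 4×4 form mirrors the fusion formula above):

```python
import numpy as np

# Fuse R' (3x3) and T' (3,) into the 4x4 matrix H, and use H to convert the
# position of the target operation area from the image coordinate system to
# the robot coordinate system.
def fuse(R_opt: np.ndarray, T_opt: np.ndarray) -> np.ndarray:
    H = np.eye(4)
    H[:3, :3] = R_opt
    H[:3, 3] = T_opt
    return H

def image_to_robot(H: np.ndarray, pos_image: np.ndarray) -> np.ndarray:
    return (H @ np.append(pos_image, 1.0))[:3]  # position under the robot coordinate system
```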
the embodiment shown in fig. 4 provides a method for automatically acquiring the coordinate transformation relationship between the robot coordinate system and the image coordinate system. The method comprises the steps of minimizing an error function which takes a translation transformation matrix and a rotation transformation matrix as independent variables, and minimizing a target function which is converted from the error function to obtain an optimal rotation transformation matrix with high accuracy, and further correspondingly obtaining the optimal translation transformation matrix with high accuracy based on a first position, a second position and the optimal rotation transformation matrix with high accuracy. Therefore, by adopting the scheme of the embodiment of the application, the coordinate conversion relation with higher accuracy can be obtained.
The coordinate transformation relation acquisition method of the present embodiment may be executed by any suitable electronic device having data processing capability, including but not limited to: servers, PCs, even high performance mobile terminals, etc.
In the application, the first embodiment and the fourth embodiment may be combined, or the second embodiment and the fourth embodiment may be combined, so that the whole process of the marker point extraction and the coordinate transformation relation acquisition is realized. In the combination of the embodiments, in step 401 of the fourth embodiment of the present application, the position of each marker point in the image coordinate system may be obtained by using the method in the first embodiment or the second embodiment, where the "position of the marker point in the image coordinate system" in the fourth embodiment of the present application is the "position of the marker point corresponding to the reference marker point set in the image coordinate system corresponding to the target image" in the first embodiment, or the "position of each marker point in the image coordinate system" in the second embodiment.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of a marker point extracting apparatus in the fifth embodiment of the present application.
The mark point extraction device provided by the embodiment of the application comprises:
an image obtaining module 501, configured to obtain a target image including a plurality of mark points;
a candidate mark point region determining module 502, configured to determine a candidate mark point region from the target image according to an image value range of an image point corresponding to the mark point in an imaging manner of the target image; the image value of each image point in the candidate mark point area is located in the image value range;
a reference mark point set obtaining module 503, configured to perform category division on each image point in the candidate mark point region to obtain multiple reference mark point sets; each reference mark point set represents a connected region, and corresponds to one mark point;
an initial mark point obtaining module 504, configured to perform feature point extraction based on image points in each reference mark point set, respectively, to obtain initial mark points corresponding to the reference mark point set;
a mark point position obtaining module 505, configured to search, based on the initial mark point, each reference mark point set by using a search algorithm, and obtain positions of the mark points corresponding to the reference mark point set in an image coordinate system corresponding to the target image.
Optionally, in an embodiment of the present application, the reference mark point set obtaining module 503 is specifically configured to:
clustering each image point in the candidate mark point area by adopting a clustering algorithm to obtain a plurality of reference mark point sets;
the selection principle of the initial clustering center is as follows: taking the image point with the maximum distance sum in the candidate mark point area as an initial clustering center; or selecting an image point with an image value as a preset image value in the candidate mark point region as an initial clustering center, or selecting an image point with a connecting line in a preset shape in the candidate mark point region as the initial clustering center.
Optionally, in an embodiment of the present application, the initial mark point obtaining module 504 is specifically configured to:
and calculating the coordinate mean value of the image points in each reference mark point set, and taking the image point corresponding to the coordinate mean value as the initial mark point corresponding to the reference mark point set.
Optionally, in an embodiment of the present application, the marked point extracting apparatus further includes:
the first characteristic point position obtaining module is used for obtaining the position of each marking point in the robot coordinate system as a first position; extracting the feature points based on the first position to obtain a first feature point position;
a second feature point position obtaining module, configured to use the position of each mark point in the image coordinate system as a second position; extracting the feature points based on the second position to obtain a second feature point position;
the initial coordinate system establishing module is used for establishing an initial coordinate system of the robot based on the first characteristic point position, the first position farthest from the first characteristic point position and the first position second farthest from the first characteristic point position; establishing an image initial coordinate system based on the position of the second characteristic point, a second position farthest from the position of the second characteristic point and a second position second farthest from the position of the second characteristic point;
the conversion relation calculation module is used for calculating the conversion relation between the initial coordinate system of the robot and the initial coordinate system of the image;
and the corresponding relation obtaining module is used for obtaining the corresponding relation between the first position and the second position according to the first position, the second position and the conversion relation of each mark point.
Optionally, in an embodiment of the present application, the first feature point position obtaining module is specifically configured to:
calculating the position mean value of each first position to obtain a first feature point position;
the second feature point position obtaining module is specifically configured to:
and calculating the position mean value of each second position to obtain the position of the second feature point.
Optionally, in an embodiment of the present application, the initial coordinate system establishing module is specifically configured to:
determining an axis where a connecting line of the position of the first characteristic point and a first position farthest from the position of the first characteristic point is located as a first coordinate axis in an initial coordinate system of the robot;
determining a straight line which is perpendicular to a first coordinate axis in the initial coordinate system of the robot and is perpendicular to a connecting line of the position of the first characteristic point and a first position which is next far away from the position of the first characteristic point as a second coordinate axis in the initial coordinate system of the robot;
determining a third coordinate axis in the initial robot coordinate system based on the first coordinate axis in the initial robot coordinate system and the second coordinate axis in the initial robot coordinate system;
determining an axis where a connecting line of the position of the second characteristic point and a second position farthest from the position of the second characteristic point is located as a first coordinate axis in an image initial coordinate system;
determining a straight line which is perpendicular to a first coordinate axis in the image initial coordinate system and is perpendicular to a connecting line of a second characteristic point position and a second position which is next far away from the second characteristic point position as a second coordinate axis in the image initial coordinate system;
and determining a third coordinate axis in the image initial coordinate system based on the first coordinate axis in the image initial coordinate system and the second coordinate axis in the image initial coordinate system.
Optionally, in an embodiment of the present application, the correspondence obtaining module is specifically configured to:
respectively carrying out coordinate conversion on each first position based on the initial coordinate conversion relation to obtain a converted position of each first position in an image coordinate system;
for each converted position, determining a second position closest to the converted position as a second position having a corresponding relationship with the first position corresponding to the converted position;
or,
respectively carrying out coordinate conversion on each second position based on the initial coordinate conversion relation to obtain the converted position of each second position in the image coordinate system;
and for each converted position, determining a first position closest to the converted position as a first position having a corresponding relationship with a second position corresponding to the converted position.
The marker extraction device of the embodiment of the present application is used to implement the corresponding marker extraction method in the first embodiment or the second embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again. In addition, the functional implementation of each module in the marker point extracting device in the embodiment of the present application can refer to the description of the corresponding part in the foregoing method embodiment, and is not repeated here.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of a coordinate transformation relation obtaining apparatus in the sixth embodiment of the present application.
The coordinate conversion relation obtaining device provided by the embodiment of the application comprises:
a position obtaining module 601, configured to obtain a position of a preset mark point in an image coordinate system as a first position; acquiring the position of the mark point under the robot coordinate system as a second position;
an error function constructing module 602, configured to construct an error function based on the first position and the second position by using a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as arguments; wherein the error function is used to characterize: adopting a translation transformation matrix and a rotation transformation matrix to transform the first position to obtain an error between the transformed position and the second position; or, the error between the converted position and the first position is obtained after the second position is converted by adopting the translation conversion matrix and the rotation conversion matrix;
an optimal transformation relation obtaining module 603, configured to perform minimization processing on the error function to obtain an optimal transformation relation between the translation transformation matrix and the rotation transformation matrix;
an optimal rotation transformation matrix obtaining module 604, configured to convert the error function into a target function with the rotation transformation matrix as an argument according to the optimal transformation relation; minimizing the target function to obtain an optimal rotation transformation matrix;
an optimal translation transformation matrix obtaining module 605, configured to obtain an optimal translation transformation matrix based on the first position, the second position, and the optimal rotation transformation matrix.
Optionally, in an embodiment of the present application, the number of the mark points is multiple; the position obtaining module 601 is specifically configured to:
acquiring the position of each preset mark point in an image coordinate system as a first position; acquiring the position of each mark point under the robot coordinate system as a second position; acquiring the corresponding relation between each first position and each second position; wherein, the first position and the second position with corresponding relation belong to the same mark point;
the error function building block 602 is specifically configured to: constructing an error function based on each first position, each second position and the corresponding relation by taking a translation transformation matrix and a rotation transformation matrix between an image coordinate system and a robot coordinate system as independent variables;
the optimal translation transformation matrix obtaining module 605 is specifically configured to:
and obtaining an optimal translation transformation matrix based on the first positions, the second positions, the corresponding relation and the optimal rotation transformation matrix.
Optionally, in an embodiment of the present application, the error function constructing module 602 is specifically configured to:
adopting a first preset formula, taking a translation transformation matrix and a rotation transformation matrix between an image coordinate system and a robot coordinate system as independent variables, and constructing an error function based on each first position, each second position and a corresponding relation, wherein the first preset formula is as follows:
F(R, T) = Σ_{i=1}^{n} ω_i·‖(R·P_i + T) − q_i‖²

wherein R represents the rotation transformation matrix between the image coordinate system and the robot coordinate system; T represents the translation transformation matrix between the image coordinate system and the robot coordinate system; F(R, T) represents the error function with the translation transformation matrix and the rotation transformation matrix as arguments; n represents the number of the marking points; ω_i represents the preset weight value corresponding to the i-th marking point; ‖·‖² represents the square of the vector norm; P_i represents the coordinates of the i-th marking point in the image coordinate system and q_i represents the coordinates of the i-th marking point in the robot coordinate system, or P_i represents the coordinates of the i-th marking point in the robot coordinate system and q_i represents the coordinates of the i-th marking point in the image coordinate system.
Optionally, in an embodiment of the present application, the optimal transformation relation obtaining module 603 is specifically configured to:
differentiate the error function and set the derivative equal to 0 to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix:

T = q̄ − R·p̄

wherein

p̄ = (Σ_{i=1}^{n} ω_i·P_i) / (Σ_{i=1}^{n} ω_i),  q̄ = (Σ_{i=1}^{n} ω_i·q_i) / (Σ_{i=1}^{n} ω_i)
The optimal rotation transformation matrix obtaining module 604 is specifically configured to:

convert, according to the optimal transformation relation, the error function into an objective function taking the rotation transformation matrix as the argument:

M(R) = Σ_{i=1}^{n} ω_i·‖R·(P_i − p̄) − (q_i − q̄)‖²
wherein, M (R) represents an objective function with a rotation transformation matrix as an argument;
minimizing the objective function to obtain an optimal rotation transformation matrix:
R′ = V·Uᵀ

wherein,

W = Σ_{i=1}^{n} ω_i·(P_i − p̄)·(q_i − q̄)ᵀ

R′ is the optimal rotation transformation matrix; V and U are the two unitary matrices obtained by performing SVD decomposition on W; and Λ is the diagonal matrix obtained after the SVD decomposition of W.
Optionally, in an embodiment of the present application, the optimal translation transformation matrix obtaining module 605 is specifically configured to:
obtaining an optimal translation transformation matrix based on each first position, each second position and the optimal rotation transformation matrix through a second preset formula, wherein the second preset formula is as follows:
T′ = G − R′·G*

wherein R′ is the optimal rotation transformation matrix and T′ is the optimal translation transformation matrix. When P_i represents the coordinates of the i-th marking point in the image coordinate system, G* represents a vector obtained based on the coordinates of the marking points in the image coordinate system and G represents a vector obtained based on the coordinates of the marking points in the robot coordinate system; when P_i represents the coordinates of the i-th marking point in the robot coordinate system, G* represents a vector obtained based on the coordinates of the marking points in the robot coordinate system and G represents a vector obtained based on the coordinates of the marking points in the image coordinate system.
Optionally, in an embodiment of the present application, the apparatus further includes:
the target operation area position acquisition module is used for acquiring the position of the target operation area in an image coordinate system;
and the coordinate conversion module is used for performing coordinate conversion on the position of the target operation area under the image coordinate system according to the optimal rotation conversion matrix and the optimal translation conversion matrix to obtain the position of the target operation area under the robot coordinate system.
Optionally, in an embodiment of the present application, when executing the step of obtaining the corresponding relation between each first position and each second position, the position obtaining module 601 is specifically configured to:
extracting feature points based on the first positions to obtain first feature point positions; extracting feature points based on the second positions to obtain second feature point positions;
Establishing an initial coordinate system of the robot based on the position of the first characteristic point, the first position farthest from the position of the first characteristic point and the first position second farthest from the position of the first characteristic point; establishing an image initial coordinate system based on the position of the second characteristic point, a second position farthest from the position of the second characteristic point and a second position second farthest from the position of the second characteristic point;
calculating a conversion relation between the initial coordinate system of the robot and the initial coordinate system of the image as an initial conversion relation;
and obtaining the corresponding relation between each first position and each second position according to each first position, each second position and the initial conversion relation.
The coordinate transformation relation obtaining device in the embodiment of the present application is used to implement the corresponding coordinate transformation relation obtaining method in the third embodiment or the fourth embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again. In addition, the functional implementation of each module in the coordinate transformation relation obtaining apparatus in the embodiment of the present application can refer to the description of the corresponding part in the foregoing method embodiment, and is not repeated here.
Example seven
Fig. 7 is a schematic structural diagram of an electronic device in a seventh embodiment of the present application; the electronic device may include:
one or more processors 701;
a computer-readable medium 702, which may be configured to store one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement any of the marker point extracting methods according to the first to second embodiments, or when the one or more programs are executed by the one or more processors, the one or more processors implement any of the coordinate transformation relation obtaining methods according to the third to fourth embodiments.
Example eight
Fig. 8 is a hardware structure of an electronic device according to an eighth embodiment of the present application; as shown in fig. 8, the hardware structure of the electronic device may include: a processor 801, a communication interface 802, a computer-readable medium 803, and a communication bus 804;
wherein the processor 801, the communication interface 802, and the computer readable medium 803 communicate with each other via a communication bus 804;
alternatively, the communication interface 802 may be an interface of a communication module, such as an interface of a GSM module;
the processor 801 may be specifically configured to: acquiring a target image containing a plurality of mark points; determining a candidate mark point area from the target image according to the image value range of the corresponding image point of the mark point in the imaging mode of the target image; the image value of each image point in the candidate mark point area is located in the image value range; classifying each image point in the candidate mark point area to obtain a plurality of reference mark point sets; each reference mark point set represents a connected region, and corresponds to one mark point; extracting feature points based on image points in each reference mark point set respectively to obtain initial mark points corresponding to each reference mark point set; searching each reference mark point set by adopting a search algorithm based on the initial mark points to obtain the positions of the mark points corresponding to the reference mark point set in an image coordinate system corresponding to the target image; alternatively, the processor 801 may be specifically configured to: acquiring the position of a preset mark point under an image coordinate system as a first position; acquiring the position of the mark point under a robot coordinate system as a second position; constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; wherein the error function is used to characterize: converting the first position by using the translation transformation matrix and the rotation transformation matrix to obtain an error between the converted position and the second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform the second position to obtain an error between the transformed position and the first position; minimizing the error function to obtain an optimal transformation relation between the translation transformation matrix and the rotation transformation matrix; converting the error function into a target function taking the rotation transformation matrix as an independent variable according to the optimal transformation relation; minimizing the target function to obtain an optimal rotation transformation matrix; and obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix.
The Processor 801 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 803 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
In particular, according to an embodiment of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code configured to perform the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a Central Processing Unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access storage media (RAM), a read-only storage media (ROM), an erasable programmable read-only storage media (EPROM or flash memory), an optical fiber, a portable compact disc read-only storage media (CD-ROM), an optical storage media piece, a magnetic storage media piece, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations for the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary, and in particular implementations, the steps may be fewer, more, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises an image acquisition module, a candidate marking point area determination module, a reference marking point set obtaining module, an initial marking point obtaining module and a marking point position obtaining module; alternatively, it can be described as: a processor comprises a position obtaining module, an error function constructing module, an optimal transformation relation obtaining module, an optimal rotation transformation matrix obtaining module and an optimal translation transformation matrix obtaining module. The names of these modules do not constitute a limitation to the module itself in some cases, and for example, the image capturing module may also be described as a "module that captures a target image including a plurality of marker points".
As another aspect, the present application also provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the marker point extraction method as described in the above first or second embodiment; alternatively, the program may be executed by a processor to implement the coordinate conversion relationship acquisition method as described in the above third embodiment or fourth embodiment.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a target image containing a plurality of mark points; determining a candidate mark point area from the target image according to the image value range of the corresponding image point of the mark point in the imaging mode of the target image; the image value of each image point in the candidate mark point area is located in the image value range; classifying each image point in the candidate mark point area to obtain a plurality of reference mark point sets; each reference mark point set represents a connected region, and corresponds to one mark point; extracting feature points based on image points in each reference mark point set respectively to obtain initial mark points corresponding to each reference mark point set; searching each reference mark point set by adopting a search algorithm based on the initial mark points to obtain the positions of the mark points corresponding to the reference mark point set in an image coordinate system corresponding to the target image; or, causing the apparatus to: acquiring the position of a preset mark point under an image coordinate system as a first position; acquiring the position of the mark point under a robot coordinate system as a second position; constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; wherein the error function is used to characterize: converting the first position by using the translation transformation matrix and the rotation transformation matrix to obtain an error between the converted position and the second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform the second position to obtain an error between the transformed position and the first position; minimizing the error function to obtain an optimal transformation relation between the translation transformation matrix and the rotation transformation matrix; converting the error function into a target function taking the rotation transformation matrix as an independent variable according to the optimal transformation relation; minimizing the target function to obtain an optimal rotation transformation matrix; and obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix.
The expressions "first", "second", "said first" or "said second" as used in various embodiments of the present application may modify various components irrespective of order and/or importance, but these expressions do not limit the respective components. The above expressions are used only for the purpose of distinguishing an element from other elements. For example, a first user equipment and a second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present application.
When an element (e.g., a first element) is referred to as being "operably or communicatively coupled" or "connected" (operably or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the element is directly connected to the other element or the element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it is understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (e.g., a third element) is interposed therebetween.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A coordinate conversion relationship acquisition method, characterized by comprising:
acquiring the position of a preset mark point under an image coordinate system as a first position; acquiring the position of the mark point under a robot coordinate system as a second position;
constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; wherein the error function is used to characterize: converting the first position by using the translation transformation matrix and the rotation transformation matrix to obtain an error between the converted position and the second position; or, the translation transformation matrix and the rotation transformation matrix are adopted to transform the second position to obtain an error between the transformed position and the first position;
minimizing the error function to obtain an optimal transformation relation between the translation transformation matrix and the rotation transformation matrix;
converting the error function into a target function taking the rotation transformation matrix as an independent variable according to the optimal transformation relation; minimizing the target function to obtain an optimal rotation transformation matrix;
obtaining an optimal translation transformation matrix based on the first position, the second position and the optimal rotation transformation matrix;
wherein, the minimizing the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix includes: calculating a transformation relation between the translation transformation matrix and the rotation transformation matrix when the error function is minimum, and taking the transformation relation as an optimal transformation relation;
the minimizing the objective function to obtain an optimal rotation transformation matrix includes: calculating a rotation transformation matrix when the objective function obtains the minimum value, and taking the rotation transformation matrix as an optimal rotation transformation matrix;
obtaining an optimal translation transformation matrix based on the first position, the second position, and the optimal rotation transformation matrix, including: transforming the first position by adopting the optimal rotation transformation matrix to obtain a transformed coordinate vector; determining the difference between the transformed coordinate vector and the second position as an optimal translation transformation matrix; or, transforming the second position by adopting the optimal rotation transformation matrix to obtain a transformed coordinate vector; and determining the difference between the transformed coordinate vector and the first position as an optimal translation transformation matrix.
2. The method according to claim 1, wherein the number of the mark points is plural; and the acquiring the position of a preset mark point in an image coordinate system as a first position, and acquiring the position of the mark point in a robot coordinate system as a second position, comprises:
acquiring the position of each preset mark point in an image coordinate system as a first position; acquiring the position of each mark point under a robot coordinate system as a second position; acquiring the corresponding relation between each first position and each second position; wherein, the first position and the second position with corresponding relation belong to the same mark point;
constructing an error function based on the first position and the second position by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables, including:
constructing an error function based on each first position, each second position and the corresponding relation by taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables;
obtaining an optimal translation transformation matrix based on the first position, the second position, and the optimal rotation transformation matrix, including:
and obtaining an optimal translation transformation matrix based on each first position, each second position, the corresponding relation and the optimal rotation transformation matrix.
3. The method according to claim 2, wherein constructing an error function based on each of the first positions, each of the second positions, and the corresponding relationship with a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as arguments comprises:
adopting a first preset formula, taking a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables, and constructing an error function based on each first position, each second position and the corresponding relation, wherein the first preset formula is as follows:
F(R, T) = Σ_{i=1}^{n} ω_i ‖(R·P_i + T) − q_i‖²
wherein R represents the rotation transformation matrix between the image coordinate system and the robot coordinate system; T represents the translation transformation matrix between the image coordinate system and the robot coordinate system; F(R, T) represents the error function with the translation transformation matrix and the rotation transformation matrix as independent variables; n represents the number of marker points; ω_i represents the preset weight corresponding to the i-th marker point; ‖·‖² represents the squared vector norm; P_i represents the coordinates of the i-th marker point in the image coordinate system and q_i the coordinates of the i-th marker point in the robot coordinate system, or P_i represents the coordinates of the i-th marker point in the robot coordinate system and q_i the coordinates of the i-th marker point in the image coordinate system.
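Read as a least-squares criterion, the first preset formula is a weighted sum of squared registration residuals. A minimal sketch, assuming the marker coordinates are stacked as (n, 3) NumPy arrays (the names and shapes are assumptions, not from the patent):

```python
import numpy as np

def error_function(R, T, P, Q, w):
    """First preset formula: F(R, T) = sum_i w_i * ||(R @ P_i + T) - Q_i||^2.

    P, Q: (n, 3) arrays of corresponding marker coordinates in the two
    coordinate systems; w: (n,) array of preset positive weights.
    """
    residuals = P @ R.T + T - Q   # row-wise (R P_i + T) - q_i
    return float(np.sum(w * np.sum(residuals ** 2, axis=1)))
```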
4. The method of claim 3, wherein minimizing the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix comprises:
obtaining the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix by differentiating the error function with respect to T and setting the derivative equal to 0:
T = G − R·G*
wherein G and G* are the weighted centroids of the marker points in the two coordinate systems:
G = (Σ_{i=1}^{n} ω_i q_i) / (Σ_{i=1}^{n} ω_i),  G* = (Σ_{i=1}^{n} ω_i P_i) / (Σ_{i=1}^{n} ω_i);
converting the error function into an objective function taking the rotation transformation matrix as the independent variable according to the optimal transformation relation, and minimizing the objective function to obtain the optimal rotation transformation matrix, comprises:
converting the error function, according to the optimal transformation relation, into an objective function taking the rotation transformation matrix as the independent variable:
M(R) = Σ_{i=1}^{n} ω_i ‖R·(P_i − G*) − (q_i − G)‖²
wherein M(R) represents the objective function with the rotation transformation matrix as the independent variable;
minimizing the objective function to obtain the optimal rotation transformation matrix:
R′ = V·Uᵀ
wherein R′ is the optimal rotation transformation matrix; V and U are the two unitary matrices obtained by performing singular value decomposition (SVD) on the matrix
H = Σ_{i=1}^{n} ω_i (P_i − G*)(q_i − G)ᵀ,
that is, H = U·Λ·Vᵀ, where Λ is the diagonal matrix obtained from the SVD.
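Taken together, claim 4 follows the weighted Kabsch/Umeyama procedure: eliminate T via the weighted centroids, then recover R from an SVD of the weighted cross-covariance. A consolidated sketch under the same (n, 3) array conventions as above; the determinant check that guards against a reflection is a standard safeguard added here for robustness and is not recited in the claim:

```python
import numpy as np

def weighted_centroids(P, Q, w):
    """Weighted centroids G* = sum_i w_i P_i / sum_i w_i (source side) and
    G = sum_i w_i Q_i / sum_i w_i (target side), as used in T = G - R @ G*."""
    w_sum = w.sum()
    G_star = (w[:, None] * P).sum(axis=0) / w_sum
    G = (w[:, None] * Q).sum(axis=0) / w_sum
    return G_star, G

def objective_M(R, P, Q, w):
    """Objective after eliminating T:
    M(R) = sum_i w_i * ||R @ (P_i - G*) - (Q_i - G)||^2."""
    G_star, G = weighted_centroids(P, Q, w)
    residuals = (P - G_star) @ R.T - (Q - G)
    return float(np.sum(w * np.sum(residuals ** 2, axis=1)))

def optimal_rotation(P, Q, w):
    """Closed-form minimizer of M(R): SVD of the weighted cross-covariance
    H = sum_i w_i (P_i - G*)(Q_i - G)^T = U @ diag(Lambda) @ V^T,
    then R' = V @ U^T."""
    G_star, G = weighted_centroids(P, Q, w)
    Pc, Qc = P - G_star, Q - G
    H = (w[:, None] * Pc).T @ Qc          # 3x3 weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    R = V @ U.T
    if np.linalg.det(R) < 0:              # reflection: flip the last axis of V
        V[:, -1] *= -1
        R = V @ U.T
    return R
```

For exact correspondences this reproduces the claimed closed form; with noisy input it returns the least-squares best rotation.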
5. The method according to any one of claims 2 to 4, wherein obtaining the optimal translation transformation matrix based on each first position, each second position, and the optimal rotation transformation matrix comprises:
obtaining the optimal translation transformation matrix based on each first position, each second position, and the optimal rotation transformation matrix through a second preset formula, the second preset formula being:
T′=G-R′G*
wherein R′ is the optimal rotation transformation matrix and T′ is the optimal translation transformation matrix; when P_i represents the coordinates of the i-th marker point in the image coordinate system, G* represents a vector obtained based on the coordinates of the marker points in the image coordinate system and G represents a vector obtained based on the coordinates of the marker points in the robot coordinate system; when P_i represents the coordinates of the i-th marker point in the robot coordinate system, G* represents a vector obtained based on the coordinates of the marker points in the robot coordinate system and G represents a vector obtained based on the coordinates of the marker points in the image coordinate system.
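With R′ in hand, the second preset formula is one line of arithmetic; swapping the roles of the image-side and robot-side coordinates simply swaps G and G*, mirroring the two cases in the claim. A minimal sketch, reusing the centroid helper above:

```python
def optimal_translation_from_centroids(R_opt, G_star, G):
    """Second preset formula: T' = G - R' @ G*, with G* the weighted centroid
    of the source-side marker coordinates and G that of the target side."""
    return G - R_opt @ G_star
```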
6. The method of claim 2, further comprising:
acquiring the position of a target operation area in the image coordinate system; and
performing coordinate conversion on the position of the target operation area in the image coordinate system according to the optimal rotation transformation matrix and the optimal translation transformation matrix, to obtain the position of the target operation area in the robot coordinate system.
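Claim 6 then applies the calibrated pair as a rigid transform. A minimal sketch, assuming a 3-D point for the target operation area; the image-to-robot direction follows the claim, and the function name is illustrative:

```python
def image_to_robot(p_img, R_opt, T_opt):
    """Claim 6: map a target-operation-area point from the image coordinate
    system to the robot coordinate system via q = R' @ p + T'."""
    return R_opt @ p_img + T_opt
```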
7. The method according to claim 2, wherein said obtaining a correspondence between each of said first positions and each of said second positions comprises:
extracting a feature point based on the first positions to obtain a first feature point position; extracting a feature point based on the second positions to obtain a second feature point position;
establishing an image initial coordinate system based on the first feature point position, the first position farthest from the first feature point position, and the first position second-farthest from the first feature point position; establishing a robot initial coordinate system based on the second feature point position, the second position farthest from the second feature point position, and the second position second-farthest from the second feature point position;
calculating a conversion relation between the initial coordinate system of the robot and the initial coordinate system of the image as an initial conversion relation;
obtaining the correspondence between each first position and each second position according to each first position, each second position, and the initial conversion relation.
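One way to realize claim 7 is sketched below: take the centroid as the extracted feature point (the claim does not fix the feature-point extraction, so this is an assumption), span an initial orthonormal frame from the feature point and the farthest and second-farthest points, chain the two frames into an initial conversion relation, and then pair points by nearest neighbour under that initial transform. Function names are hypothetical and degenerate (collinear) marker constellations are not handled:

```python
import numpy as np

def initial_frame(points):
    """Build an orthonormal frame for a point set: origin at the extracted
    feature point (assumed here to be the centroid), axes spanned by the
    farthest and second-farthest points from it."""
    c = points.mean(axis=0)                 # feature point (assumption)
    order = np.argsort(np.linalg.norm(points - c, axis=1))
    far1, far2 = points[order[-1]], points[order[-2]]
    x = (far1 - c) / np.linalg.norm(far1 - c)
    z = np.cross(x, far2 - c)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                      # completes a right-handed frame
    return c, np.column_stack([x, y, z])

def initial_correspondence(P_img, Q_rob):
    """Chain the image and robot initial frames into an initial conversion
    relation, then match each image point to the nearest transformed robot
    point. Returns pairs[i] = index in Q_rob corresponding to P_img[i]."""
    c_i, R_i = initial_frame(P_img)
    c_r, R_r = initial_frame(Q_rob)
    R0 = R_i @ R_r.T                        # initial rotation, robot -> image
    t0 = c_i - R0 @ c_r
    Q_in_img = Q_rob @ R0.T + t0            # robot points mapped into image frame
    return [int(np.argmin(np.linalg.norm(Q_in_img - p, axis=1))) for p in P_img]
```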
8. A coordinate conversion relation acquisition apparatus, characterized in that the apparatus comprises:
a position acquisition module, configured to acquire the position of a preset marker point in the image coordinate system as a first position, and to acquire the position of the marker point in the robot coordinate system as a second position;
an error function constructing module, configured to construct an error function based on the first position and the second position, with a translation transformation matrix and a rotation transformation matrix between the image coordinate system and the robot coordinate system as independent variables; wherein the error function characterizes the error between the second position and the position obtained by transforming the first position with the translation transformation matrix and the rotation transformation matrix, or the error between the first position and the position obtained by transforming the second position with the translation transformation matrix and the rotation transformation matrix;
an optimal transformation relation obtaining module, configured to minimize the error function to obtain the optimal transformation relation between the translation transformation matrix and the rotation transformation matrix;
an optimal rotation transformation matrix obtaining module, configured to convert the error function into an objective function taking the rotation transformation matrix as the independent variable according to the optimal transformation relation, and to minimize the objective function to obtain an optimal rotation transformation matrix;
an optimal translation transformation matrix obtaining module, configured to obtain an optimal translation transformation matrix based on the first position, the second position, and the optimal rotation transformation matrix;
wherein the optimal transformation relation obtaining module is specifically configured to calculate the transformation relation between the translation transformation matrix and the rotation transformation matrix at which the error function attains its minimum value, and take that transformation relation as the optimal transformation relation;
the optimal rotation transformation matrix obtaining module is specifically configured, when minimizing the objective function, to calculate the rotation transformation matrix at which the objective function attains its minimum value, and take that rotation transformation matrix as the optimal rotation transformation matrix;
the optimal translation transformation matrix obtaining module is specifically configured to transform the first position with the optimal rotation transformation matrix to obtain a transformed coordinate vector, and determine the difference between the transformed coordinate vector and the second position as the optimal translation transformation matrix; or to transform the second position with the optimal rotation transformation matrix to obtain a transformed coordinate vector, and determine the difference between the transformed coordinate vector and the first position as the optimal translation transformation matrix.
9. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the coordinate conversion relation acquisition method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that a computer program is stored thereon, which when executed by a processor, implements the coordinate conversion relation acquisition method according to any one of claims 1 to 7.
CN202110254004.2A 2021-03-04 2021-03-04 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium Active CN112837391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110254004.2A CN112837391B (en) 2021-03-04 2021-03-04 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110254004.2A CN112837391B (en) 2021-03-04 2021-03-04 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112837391A CN112837391A (en) 2021-05-25
CN112837391B true CN112837391B (en) 2022-02-18

Family

ID=75929897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110254004.2A Active CN112837391B (en) 2021-03-04 2021-03-04 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112837391B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114452545B * 2021-08-25 2024-07-09 Xi'an Dayi Group Co., Ltd. Method, device and system for confirming coordinate system conversion relation
CN113442144B * 2021-09-01 2021-11-19 Beijing Baihui Weikang Technology Co., Ltd. Optimal pose determining method and device under constraint, storage medium and mechanical arm
CN114037814B * 2021-11-11 2022-12-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Data processing method, device, electronic equipment and medium
CN113947637B * 2021-12-15 2022-04-22 Beijing Baihui Weikang Technology Co., Ltd. Mark point extraction method and device, electronic equipment and computer storage medium
CN116772804A * 2022-03-10 2023-09-19 Huawei Technologies Co., Ltd. Positioning method and related equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106730106B * 2016-11-25 2019-10-08 Harbin Institute of Technology Coordinate calibration method of robot-assisted micro-injection system
CN109242892B * 2018-09-12 2019-11-12 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for determining the geometric transformation relation between images
CN110123451B * 2019-04-17 2020-07-28 South China University of Technology Patient surface registration method applied to optical surgical navigation system without marker points
CN110991085B * 2019-12-20 2023-08-29 Shanghai Yogo Robot Co., Ltd. Method, medium, terminal and device for constructing robot image simulation data

Also Published As

Publication number Publication date
CN112837391A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN112862813B (en) Mark point extraction method and device, electronic equipment and computer storage medium
CN112837391B (en) Coordinate conversion relation obtaining method and device, electronic equipment and storage medium
CN104778688B (en) The method for registering and device of cloud data
CN107833181B (en) Three-dimensional panoramic image generation method based on zoom stereo vision
Havlena et al. Randomized structure from motion based on atomic 3D models from camera triplets
CN112927363B (en) Voxel map construction method and device, computer readable medium and electronic equipment
CN111145240A (en) Living body Simmental cattle body ruler online measurement method based on 3D camera
CN112907642B (en) Registration and superposition method, system, storage medium and equipment
CN112037146B (en) Automatic correction method and device for medical image artifacts and computer equipment
CN111612731B (en) Measuring method, device, system and medium based on binocular microscopic vision
CN112598790A (en) Brain structure three-dimensional reconstruction method and device and terminal equipment
CN116309880A (en) Object pose determining method, device, equipment and medium based on three-dimensional reconstruction
CN113487656B (en) Image registration method and device, training method and device, control method and device
CN111260702B (en) Laser three-dimensional point cloud and CT three-dimensional point cloud registration method
CN115578320A (en) Full-automatic space registration method and system for orthopedic surgery robot
CN116563096B (en) Method and device for determining deformation field for image registration and electronic equipment
CN108304578A (en) Processing method, medium, device and the computing device of map datum
CN110009726B (en) Method for extracting plane from point cloud according to structural relationship between plane elements
CN112862975B (en) Bone data processing method, system, readable storage medium and device
CN113643328A (en) Calibration object reconstruction method and device, electronic equipment and computer readable medium
Wu et al. Point cloud registration algorithm based on the volume constraint
Su et al. No-reference Point Cloud Geometry Quality Assessment Based on Pairwise Rank Learning
CN115619835B (en) Heterogeneous three-dimensional observation registration method, medium and equipment based on depth phase correlation
CN118095654B (en) BIM-based building engineering construction management method and system
Song et al. Improved FCM algorithm for fisheye image cluster analysis for tree height calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing

Patentee after: Beijing Baihui Weikang Technology Co.,Ltd.

Address before: 100191 Room 608, 6 / F, building 9, 35 Huayuan North Road, Haidian District, Beijing

Patentee before: Beijing Baihui Wei Kang Technology Co.,Ltd.