WO2018161555A1 - Method and apparatus for detecting the pose of an object - Google Patents
- Publication number
- WO2018161555A1 (PCT/CN2017/104668)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pose
- image
- mark code
- calibration
- code
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Definitions
- the present invention relates to the field of computer technology, and in particular, to a method and apparatus for detecting an object pose.
- the pose estimation problem refers to the process of estimating the relative rotation and translation between two spatial coordinate systems from feature correspondence information; it is a core technical problem in enabling an intelligent robot to carry objects.
- the pose estimation problem is an important basic problem in computer vision, computer graphics, and photogrammetry.
- the existing pose estimation methods mainly fall into the following categories: feature-based pose estimation methods, model-based pose estimation methods, and learning-based pose estimation methods.
- Model-based pose estimation methods usually use the geometric relationship of objects to estimate.
- the basic idea is to represent the structure and shape of the object with a geometric model, extract object features, establish a correspondence between the model and the image, and then estimate the spatial pose by geometric or other methods.
- the model used here may be a simple geometric shape, such as a plane, a cylinder, or a certain geometry, or a three-dimensional model obtained by laser scanning or other methods.
- the model-based pose estimation method updates the object pose by comparing the real image with a synthesized image and computing their similarity.
- to avoid an optimization search over the global state space, current model-based methods generally reduce the optimization problem to the matching of multiple local features, which depends heavily on accurate detection of those features; when the noise is too large for accurate local features to be extracted, the robustness of the method suffers greatly.
- the learning-based method uses machine learning to learn, from training samples acquired in advance in different postures, the correspondence between the two-dimensional observation image and the three-dimensional pose, and then applies the learned decision rule or regression function to a new sample, taking the result as the pose estimate for that sample.
- the learning-based method generally adopts global observation features, and does not need to detect or identify the local features of the object, and has good robustness.
- the disadvantage is that the dense sampling required for continuous estimation in a high-dimensional space cannot be obtained, so the accuracy and continuity of pose estimation cannot be guaranteed.
- for example, a learning-based pose estimator for chairs requires collecting and labeling a large number of samples covering various chairs, which consumes considerable manpower and material resources.
- the invention provides a method and a device for detecting the pose of an object, which can simplify the process of detecting the pose of the object, improve the efficiency of the detection process, and can improve the accuracy of the detection result.
- the method for detecting the posture of an object provided by the present invention specifically includes:
- the mark code image is an image of a mark code attached to the object to be detected
- the method further includes:
- the generating and displaying at least one target positioning posture information, and obtaining M calibration images according to the target positioning posture information specifically including the steps of:
- the calibration mark code image is set as a calibration image, the value of m is modified to m+1, and the process returns to step S1;
- determining, according to the calibration mark code image and the current target positioning posture information, whether the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information, and, if yes, setting the calibration mark code image as the calibration image and modifying the value of m to m+1, or, if not, returning to step S1, specifically includes:
- if yes, it is confirmed that the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information, the calibration mark code image is set as a calibration image, the value of m is modified to m+1, and the process returns to step S1;
- the tag code includes at least one sub tag code
- calculating the pose of the mark code specifically includes:
- the rotation amount of the mark code relative to the reference pose includes a rotation angle θ and a unit direction vector (r_rx, r_ry, r_rz); the displacement amount of the mark code relative to the reference pose includes a displacement vector r_t;
- the present invention also provides a device for detecting the posture of an object, which specifically includes:
- a mark code image obtaining module configured to receive an original image captured by the camera, and extract the mark code image from the original image; wherein the mark code image is an image of a mark code attached to the object to be detected;
- a mark code pose obtaining module configured to calculate a pose of the mark code according to the mark code image and a pre-generated pose calculation model; wherein the pose of the mark code includes the rotation amount and displacement amount of the mark code relative to the reference pose; and,
- the object pose obtaining module is configured to calculate a current pose of the object to be detected according to the pose of the mark code.
- the detecting device for the pose of the object further includes:
- a calibration image obtaining module configured to generate and display at least one target positioning posture information, and obtain M calibration images according to the target positioning posture information; wherein, M>0;
- a pose calculation model generating module configured to generate the pose calculation model according to each of the calibration image and corresponding target positioning posture information
- the calibration image obtaining module specifically includes:
- a current target positioning posture information display unit configured to generate and display current target positioning posture information when the number m of currently obtained calibration images is less than M;
- a calibration mark code image obtaining unit configured to receive an original calibration image that is acquired by the camera and corresponding to the current target positioning posture information, and extract and obtain a calibration mark code image from the original calibration image;
- a looping unit configured to determine, according to the calibration mark code image and the current target positioning posture information, whether the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information; if yes, set the calibration mark code image as a calibration image, modify the value of m to m+1, and return to the current target positioning posture information display unit; if not, return directly to the current target positioning posture information display unit.
- the looping unit specifically includes:
- a key point distance calculation subunit configured to identify a key point in the current calibration mark code image, and calculate a distance between the key points
- a key point distance determining subunit configured to determine, according to the current target positioning posture information, whether a distance between the key points is within a preset distance range
- a first loop subunit configured to: when the distance between the key points is within the preset distance range, confirm that the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information, set the calibration mark code image as a calibration image, modify the value of m to m+1, and return to the current target positioning posture information display unit;
- a second loop subunit configured to: when the distance between the key points is not within the preset distance range, confirm that the calibration mark code corresponding to the calibration mark code image is not in the target positioning posture corresponding to the current target positioning posture information, and return to the current target positioning posture information display unit.
- the tag code includes at least one sub tag code
- the mark code pose obtaining module specifically includes:
- a sub-marker image obtaining unit configured to perform image segmentation on the mark code image to obtain at least one sub-marker image conforming to a shape requirement
- a legal sub-tag code image obtaining unit configured to compare each of the sub-tag code images with each of the pre-stored standard sub-tag code images to obtain a legal sub-tag code image in the sub-tag code image;
- the mark code pose calculation obtaining unit is configured to calculate a pose of the mark code according to each legal sub tag code image and the pose calculation model.
- the rotation amount of the mark code relative to the reference pose includes a rotation angle θ and a unit direction vector (r_rx, r_ry, r_rz); the displacement amount of the mark code relative to the reference pose includes a displacement vector r_t;
- the object pose obtaining module specifically comprising:
- an object plane rotation angle obtaining unit configured to calculate, according to the rotation amount v and a plane angle calculation formula, the plane rotation angle θ of the object to be detected relative to the reference pose;
- an object plane displacement amount obtaining unit configured to calculate, according to the displacement vector r_t, the plane displacement amount s of the object to be detected relative to the reference pose;
- an object current pose obtaining unit configured to obtain the current pose of the object to be detected according to the plane rotation angle θ and the plane displacement amount s.
- the method and device for detecting the pose of an object mark the pose of the object to be detected with a mark code, so that the system can obtain the pose of the mark code by computing on the mark code image captured by the camera, and then calculate the pose of the object to be detected. Since the pose of the object to be detected is marked by the mark code, the system only needs to analyze and process the image of the mark code, which greatly improves the efficiency of detecting the pose of the object to be detected; and because the feature points in the mark code image are distinctive and easy to identify, they can be recognized and computed with little difficulty and high accuracy, improving the accuracy of the detection result.
- FIG. 1 is a schematic flow chart of a preferred embodiment of a method for detecting an attitude of an object provided by the present invention
- FIG. 2 is a schematic view showing a label code attached to a chair back in a preferred embodiment of the method for detecting the pose of an object provided by the present invention
- FIG. 3 is a schematic view of a calibration plate in still another preferred embodiment of the method for detecting the pose of an object provided by the present invention
- Fig. 4 is a schematic view showing the structure of a preferred embodiment of the apparatus for detecting the posture of an object provided by the present invention.
- the invention analyzes and computes on the image, collected by the camera, of the mark code attached to the object to be detected, obtains the pose of the mark code, and further calculates the pose of the object to be detected.
- the invention marks the pose of the object to be detected by means of the mark code, so that the system only needs to analyze and process the image of the mark code, which greatly improves the efficiency of detecting the pose of the object to be detected; and because the feature points in the mark code image are distinctive and easy to identify, they can be recognized and computed with little difficulty and high accuracy, improving the accuracy of the detection result.
- a schematic flowchart of a preferred embodiment of a method for detecting an attitude of an object provided by the present invention includes steps S11 to S13, as follows:
- S11 receiving an original image collected by a camera, and extracting a mark code image from the original image; wherein the mark code image is an image of a mark code attached to an object to be detected;
- S12 calculating a pose of the mark code according to the mark code image and a pre-generated pose calculation model; wherein the pose of the mark code includes a rotation amount and a displacement amount of the mark code relative to a reference pose;
- S13 calculating a current pose of the object to be detected according to the pose of the mark code.
- before the system detects the pose of the object to be detected, a mark code needs to be attached to the surface of the object to be detected.
- the mark code needs to be attached to a vertical surface of the object to be detected.
- the system can then detect the pose of the object to be detected. Specifically, the camera collects original images in real time and sends them to the system, and the system analyzes and computes on each received frame separately.
- after receiving the original image of the current frame, the system determines whether the original image contains a mark code image (i.e., whether the camera has captured the mark code). If so, the system extracts the mark code image from the original image, substitutes it into the pre-generated pose calculation model, calculates the pose of the corresponding mark code, and then calculates the pose of the corresponding object to be detected according to the pose of the mark code; if not, the original image is not processed.
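The per-frame branching described above can be sketched in Python. The frame structure, `extract_mark_code`, and the pose model below are hypothetical placeholders standing in for the patent's actual detector and calibrated model:

```python
def extract_mark_code(frame):
    """Placeholder detector: return the mark-code sub-image, or None when
    the frame contains no mark code (camera did not capture it)."""
    return frame.get("mark_code")  # hypothetical frame representation

def object_pose_from_code(code_pose):
    """Derive the object pose from the mark-code pose (identity here;
    the patent projects the pose onto the horizontal plane)."""
    return code_pose

def process_frame(frame, pose_model):
    """Per-frame pipeline: skip frames without a mark code, otherwise run
    the pre-generated pose calculation model and derive the object pose."""
    code_img = extract_mark_code(frame)
    if code_img is None:
        return None                   # frame is not processed further
    code_pose = pose_model(code_img)  # pre-generated pose calculation model
    return object_pose_from_code(code_pose)
```

Frames without a detected mark code simply yield `None`, matching the "if not, the original image is not processed" branch.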
- the posture of the object to be detected obtained by the calculation may be a plane pose or a three-dimensional pose.
- the system can obtain the pose of the mark code by computing on the mark code image captured by the camera, and then calculate the pose of the object to be detected. Since the pose of the object to be detected is marked by the mark code, the system only needs to analyze and process the image of the mark code, which greatly improves the efficiency of detecting the pose of the object to be detected; and because the feature points in the mark code image are distinctive and easy to identify, they can be recognized and computed with little difficulty and high accuracy, improving the accuracy of the detection result.
- since the present invention calculates the pose of the object to be detected by means of the mark code, it is only necessary to paste the mark code on the surface of the object to be detected, without modeling the shape of the object, thereby reducing costs in manpower and material resources and greatly increasing the universality of the pose detection method.
- the tag code includes at least one sub tag code
- calculating the pose of the mark code specifically includes:
- the rotation amount of the mark code relative to the reference pose includes a rotation angle θ and a unit direction vector (r_rx, r_ry, r_rz); the displacement amount of the mark code relative to the reference pose includes a displacement vector r_t;
- calculating a current pose of the object to be detected specifically comprising:
- the tag code includes at least one sub tag code, and thus the tag code image captured by the camera includes at least one sub tag code image.
- the subtag code is a tag similar to the two-dimensional code.
- after receiving the original image sent by the camera, the system first determines whether the original image contains a mark code image (whether the camera has captured the mark code), that is, whether the original image contains a sub mark code image. Specifically, the system segments the original image with an adaptive threshold method, extracts edge contours from the segmented image, and deletes contours (together with their internal content) that are concave, not approximately quadrilateral, too large or too small in area, or whose centers are too close together, thereby retaining edge contours that are quadrilateral or approximately quadrilateral along with their internal content, i.e., obtaining the sub mark code images. If no edge contour remains after extraction and deletion, the original image contains no sub mark code image and the system does not process it.
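As an illustration of the contour-filtering step above (keep only convex, approximately quadrilateral contours of reasonable area), the following pure-Python sketch applies the same tests to a candidate polygon. The area bounds are assumed values for illustration, not the patent's thresholds:

```python
def shoelace_area(pts):
    """Signed polygon area via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

def is_convex(pts):
    """A polygon is convex if all consecutive edge cross products share a sign."""
    n = len(pts)
    signs = []
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.append(cross > 0)
    return all(signs) or not any(signs)

def keep_candidate(pts, min_area=100.0, max_area=1e5):
    """Keep only convex quadrilaterals whose area lies in a plausible range
    (mirroring the 'delete concave / non-quadrilateral / too large / too
    small contours' rule in the text)."""
    if len(pts) != 4 or not is_convex(pts):
        return False
    return min_area <= abs(shoelace_area(pts)) <= max_area
```

A 20x20 square passes, while a triangle, a tiny square, or a concave quadrilateral is rejected.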
- the system determines whether the obtained sub-marker image is a legal sub-marker image.
- the system first performs a perspective transformation on the obtained sub mark code image to bring it into a fronto-parallel view; it then meshes the entire mark code image into a two-dimensional grid according to the size of the sub mark code image and the size of the entire mark code image; finally, the Otsu threshold method (the Otsu algorithm, also known as the maximum between-class variance algorithm) is used to segment the entire mark code image, and each grid cell is judged black or white according to the result of the image segmentation.
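The Otsu threshold method named above can be computed directly from a gray-level histogram, and each grid cell can then be judged black or white against the resulting threshold. The sketch below is an illustrative pure-Python version (majority vote per cell is an assumed decision rule, not necessarily the patent's):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue                       # no background class yet
        w_fg = total - w_bg
        if w_fg == 0:
            break                          # no foreground class left
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg               # background mean
        m_fg = (sum_all - sum_bg) / w_fg   # foreground mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def cell_is_white(cell_pixels, threshold):
    """Judge a grid cell white if a majority of its pixels exceed the threshold."""
    white = sum(1 for p in cell_pixels if p > threshold)
    return white * 2 > len(cell_pixels)
```

On a strongly bimodal image (dark marker bits vs. white background) the threshold lands between the two modes, so each grid cell decodes cleanly.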
- the system substitutes the above legal sub mark code images into the pre-generated pose calculation model, calculates the pose of each legal sub mark code image, and from these calculates the pose of the whole mark code, i.e., obtains the rotation amount and displacement amount of the entire mark code relative to the reference pose.
- the reference pose is a pose perpendicular to the horizontal plane at the location of the camera.
- the system then calculates the pose of the object to be detected according to the pose of the mark code; that is, the system projects the pose of the mark code onto the horizontal plane (it being understood that the object to be detected generally stands on a horizontal plane) to obtain the pose of the object to be detected.
- the system first converts the rotation amount of the mark code relative to the reference pose according to the rotation transformation formula above, expressing the rotation amount as a rotation matrix R; it then projects the rotation of the mark code relative to the reference pose onto the horizontal plane according to the rotation matrix R and the conversion formula above, obtaining the rotation of the object to be detected relative to the reference pose, and from it the plane rotation angle θ of the object to be detected relative to the reference pose. Similarly, the system projects the translation amount of the mark code relative to the reference pose onto the horizontal plane, obtaining the plane displacement amount s of the object to be detected relative to the reference pose.
- the system then determines the pose of the object to be detected from the calculated plane rotation angle θ and plane displacement amount s; it can be understood that the pose obtained here is a plane pose.
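The projection described above (axis-angle rotation → rotation matrix R → plane rotation angle θ, plus the in-plane part of the displacement) can be sketched as follows. Treating the horizontal plane as the x-y plane and reading θ from the rotated x-axis is one plausible convention, not necessarily the patent's exact conversion formula:

```python
import math

def rotation_matrix(angle, axis):
    """Rodrigues' rotation formula: rotation by `angle` about unit vector `axis`."""
    x, y, z = axis
    c, s = math.cos(angle), math.sin(angle)
    C = 1.0 - c
    return [
        [c + x * x * C,     x * y * C - z * s, x * z * C + y * s],
        [y * x * C + z * s, c + y * y * C,     y * z * C - x * s],
        [z * x * C - y * s, z * y * C + x * s, c + z * z * C],
    ]

def plane_pose(angle, axis, t):
    """Project the mark-code pose onto the horizontal (x-y) plane:
    theta from the yaw of the rotated x-axis, s from the in-plane translation."""
    R = rotation_matrix(angle, axis)
    theta = math.atan2(R[1][0], R[0][0])  # yaw component of the rotation
    s = (t[0], t[1])                      # drop the vertical component
    return theta, s
```

For a rotation purely about the vertical axis, θ recovers the original rotation angle exactly.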
- the method further includes:
- the generating and displaying at least one target positioning posture information, and obtaining M calibration images according to the target positioning posture information specifically including the steps of:
- the calibration mark code image is set as a calibration image, the value of m is modified to m+1, and the process returns to step S1;
- determining, according to the calibration mark code image and the current target positioning posture information, whether the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information, and, if yes, setting the calibration mark code image as the calibration image and modifying the value of m to m+1, or, if not, returning to step S1, specifically includes:
- if yes, it is confirmed that the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information, the calibration mark code image is set as a calibration image, the value of m is modified to m+1, and the process returns to step S1;
- camera calibration is also required, that is, the camera parameter M is calculated from the generated target positioning posture information and the acquired calibration images, thereby obtaining the pose calculation model Y = M·X corresponding to the camera, where Y is the pose of the object and X is the acquired image of the object.
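Since the pose calculation model is written as the linear relation Y = M·X, the parameter M can be estimated from the collected (calibration image, target pose) pairs by least squares. The scalar sketch below only illustrates that idea; in practice M would be a matrix of camera/model parameters:

```python
def fit_scalar_model(xs, ys):
    """Least-squares estimate of M in the linear model y = M * x,
    minimizing sum((y_i - M * x_i)^2) over the calibration pairs."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def predict(M, x):
    """Apply the pose calculation model Y = M * X."""
    return M * x
```

With noiseless calibration pairs that truly satisfy y = 2x, the fit recovers M = 2 exactly.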
- the calibration mark code is pasted on a board, thereby obtaining the calibration plate shown in FIG. 3.
- the system determines whether the number of currently acquired calibration images is greater than or equal to M (M>0); if so, it calculates the pose calculation model from the collected calibration images and the corresponding target positioning posture information; if not, it generates and displays a piece of target positioning posture information.
- the user places the calibration plate according to the posture specified by the target positioning posture information.
- the camera captures an image of the calibration plate placed by the user, obtains the original calibration image, and sends the original calibration image to the system.
- the system extracts the calibration mark code image from the image and calculates the distances between the key points in the calibration mark code image. If the distances between the key points are within the preset distance range, the posture of the calibration plate is considered consistent with the posture specified by the target positioning posture information, the calibration mark code image is set as a calibration image, the number of currently acquired calibration images is re-determined, and the subsequent steps are performed, looping in this way. If the distances between the key points are not within the preset distance range, the posture of the calibration plate is considered inconsistent with the posture specified by the target positioning posture information, so the calibration mark code image is not processed; the system re-determines whether the number of currently acquired calibration images is greater than or equal to M, and either performs the subsequent steps accordingly or displays the same target positioning posture information again to prompt the user to place the calibration plate according to the specified posture.
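The acceptance test in the calibration loop above (all key-point distances inside a preset range for the current target posture) can be sketched as follows; the distance range is assumed to be stored with each target-posture entry:

```python
import math

def pairwise_distances(points):
    """Euclidean distances between all pairs of detected key points."""
    out = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            out.append(math.hypot(dx, dy))
    return out

def matches_target_posture(points, d_min, d_max):
    """Accept the frame as a calibration image only if every key-point
    distance lies within the preset range for the current target posture."""
    return all(d_min <= d <= d_max for d in pairwise_distances(points))
```

If the plate is too far, too near, or tilted, some distance falls outside [d_min, d_max] and the frame is rejected, so the loop re-displays the same target posture.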
- in this way the pose calculation model can be calculated and the camera calibration completed; because the camera calibration process is simple and easy to operate, the efficiency of detecting the pose of the object to be detected can be further improved, and the user experience improved.
- the method for detecting the pose of an object marks the pose of the object to be detected with a mark code, enabling the system to obtain the pose of the mark code by computing on the mark code image captured by the camera, and then calculate the pose of the object to be detected. Since the pose of the object to be detected is marked by the mark code, the system only needs to analyze and process the image of the mark code, which greatly improves the efficiency of detecting the pose of the object to be detected; and because the feature points in the mark code image are distinctive and easy to identify, they can be recognized and computed with little difficulty and high accuracy, improving the accuracy of the detection result.
- since the present invention calculates the pose of the object to be detected by means of the mark code, it is only necessary to paste the mark code on the surface of the object to be detected, without modeling the shape of the object, thereby reducing costs in manpower and material resources and greatly increasing the universality of the pose detection method.
- the pose calculation model can be calculated and the camera calibration completed; because the camera calibration process is simple and easy to operate, the efficiency of detecting the pose of the object to be detected can be further improved, and the user experience improved.
- the present invention also provides an apparatus for detecting an attitude of an object, which is capable of realizing all the processes of the method for detecting the pose of the object.
- FIG. 4 it is a schematic structural diagram of a preferred embodiment of the apparatus for detecting the pose of an object provided by the present invention, which is specifically as follows:
- a mark code image obtaining module 41 configured to receive an original image acquired by the camera, and extract the mark code image from the original image; wherein the mark code image is an image of a mark code attached to the object to be detected;
- a mark code pose obtaining module 42 configured to calculate a pose of the mark code according to the mark code image and a pre-generated pose calculation model; wherein the pose of the mark code includes the rotation amount and displacement amount of the mark code relative to the reference pose; and,
- the object pose obtaining module 43 is configured to calculate a current pose of the object to be detected according to the pose of the mark code.
- the mark code includes at least one sub tag code
- the tag code pose obtaining module 42 specifically includes:
- a sub-marker image obtaining unit configured to perform image segmentation on the mark code image to obtain at least one sub-marker image conforming to a shape requirement
- a legal sub-tag code image obtaining unit configured to compare each of the sub-tag code images with each of the pre-stored standard sub-tag code images to obtain a legal sub-tag code image in the sub-tag code image;
- the mark code pose calculation obtaining unit is configured to calculate a pose of the mark code according to each legal sub tag code image and the pose calculation model.
- the rotation amount of the mark code relative to the reference pose includes a rotation angle θ and a unit direction vector (r_rx, r_ry, r_rz); the displacement amount of the mark code relative to the reference pose includes a displacement vector r_t;
- the object pose obtaining module 43 specifically includes:
- an object plane rotation angle obtaining unit configured to calculate, according to the rotation amount v and a plane angle calculation formula, the plane rotation angle θ of the object to be detected relative to the reference pose;
- an object plane displacement amount obtaining unit configured to calculate, according to the displacement vector r_t, the plane displacement amount s of the object to be detected relative to the reference pose;
- an object current pose obtaining unit configured to obtain the current pose of the object to be detected according to the plane rotation angle θ and the plane displacement amount s.
- the apparatus for detecting the pose of the object further comprises:
- a calibration image obtaining module configured to generate and display at least one target positioning posture information, and obtain M calibration images according to the target positioning posture information; wherein, M>0;
- a pose calculation model generating module configured to generate the pose calculation model according to each of the calibration image and corresponding target positioning posture information
- the calibration image obtaining module specifically includes:
- a current target positioning posture information display unit configured to generate and display current target positioning posture information when the number m of currently obtained calibration images is less than M;
- a calibration mark code image obtaining unit configured to receive an original calibration image that is acquired by the camera and corresponding to the current target positioning posture information, and extract and obtain a calibration mark code image from the original calibration image;
- a looping unit configured to determine, according to the calibration mark code image and the current target positioning posture information, whether the calibration mark code corresponding to the calibration mark code image is in the target positioning posture corresponding to the current target positioning posture information; if yes, set the calibration mark code image as a calibration image, modify the value of m to m+1, and return to the current target positioning posture information display unit; if not, return directly to the current target positioning posture information display unit.
- the looping unit specifically includes:
- a key point distance calculation subunit configured to identify a key point in the current calibration mark code image, and calculate a distance between the key points
- a key point distance determining subunit configured to determine, according to the current target positioning posture information, whether a distance between the key points is within a preset distance range
- a first loop subunit configured to: when the distance between the key points is within a preset distance range, confirm that the calibration mark code corresponding to the calibration mark code image is in the current target positioning posture information a target positioning posture, and setting the calibration mark code image as a calibration image, and modifying the value of m to m+1, and returning the current target positioning position information display unit;
- a second loop subunit configured to: when the distance between the key points is not within a preset distance range, confirm that the calibration mark code corresponding to the calibration mark code image is not in the current target position information Corresponding target positioning posture, and returning to the current target positioning posture information display unit.
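The looping unit's pose check can be sketched as a small illustration. This is a hypothetical Python sketch, not the patent's implementation: the function names, the 5% tolerance, and the square-marker key points are all assumptions introduced here.

```python
import math

def pairwise_distances(points):
    """All pairwise Euclidean distances between 2-D key points."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def in_calibration_pose(points, expected_dists, tol=0.05):
    """Return True if every key-point distance is within +/- tol (relative)
    of the distance expected for the current calibration pose."""
    dists = sorted(pairwise_distances(points))
    expected = sorted(expected_dists)
    if len(dists) != len(expected):
        return False
    return all(abs(d - e) <= tol * e for d, e in zip(dists, expected))
```

For example, a square marker seen head-on at the expected distance yields four side lengths and two diagonals close to their expected values, so the check passes; a marker that is too far away (all distances scaled down) fails, and the calibration image is rejected.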
- The apparatus for detecting the pose of an object marks the pose of the object to be detected with a mark code, so that the system can obtain the pose of the mark code by processing the mark code image captured by the camera, and then compute the pose of the object to be detected from it.
- Since the pose of the object to be detected is marked by the mark code, the system only needs to analyze and process the mark code image, which greatly improves the efficiency of detecting the pose of the object to be detected. Moreover, the feature points in the mark code image are distinctive and easy to recognize, so they can be identified and located accurately with little difficulty, improving the accuracy of the detection result.
- Since the present invention computes the pose of the object to be detected by means of the mark code, it is only necessary to attach the mark code to the surface of the object to be detected, without modeling the object according to its shape, which reduces the cost in manpower, material and other resources and greatly increases the universality of the pose detection method.
- The pose calculation model can be generated and the camera calibration completed through a calibration process that is simple and easy to operate, which further improves the efficiency of detecting the pose of the object to be detected and improves the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (10)
- A method for detecting the pose of an object, comprising: receiving an original image acquired by a camera, and extracting a mark code image from the original image, wherein the mark code image is an image of a mark code attached to an object to be detected; calculating the pose of the mark code according to the mark code image and a pre-generated pose calculation model, wherein the pose of the mark code includes the amount of rotation and the amount of displacement of the mark code relative to a reference pose; and calculating the current pose of the object to be detected according to the pose of the mark code.
- The method for detecting the pose of an object according to claim 1, further comprising, before receiving the original image acquired by the camera and extracting the mark code image from the original image: generating and displaying at least one piece of calibration pose information, and obtaining M calibration images according to the calibration pose information, wherein M > 0; and generating the pose calculation model according to each of the calibration images and the corresponding calibration pose information.
- The method for detecting the pose of an object according to claim 2, wherein generating and displaying at least one piece of calibration pose information and obtaining M calibration images according to the calibration pose information specifically comprises the steps of: S1: when the number m of calibration images obtained so far is less than M, generating and displaying current calibration pose information; S2: receiving an original calibration image acquired by the camera corresponding to the current calibration pose information, and extracting a calibration mark code image from the original calibration image; S3: determining, according to the calibration mark code image and the current calibration pose information, whether the calibration mark code corresponding to the calibration mark code image is in the calibration pose corresponding to the current calibration pose information; if so, setting the calibration mark code image as a calibration image, incrementing m to m+1, and returning to step S1; if not, returning to step S1.
- The method for detecting the pose of an object according to claim 3, wherein determining, according to the calibration mark code image and the current calibration pose information, whether the calibration mark code corresponding to the calibration mark code image is in the calibration pose corresponding to the current calibration pose information, and if so, setting the calibration mark code image as a calibration image and incrementing m to m+1, and if not, returning to step S1, specifically comprises: identifying key points in the current calibration mark code image, and calculating the distances between the key points; determining, according to the current calibration pose information, whether the distances between the key points are within a preset distance range; if so, confirming that the calibration mark code corresponding to the calibration mark code image is in the calibration pose corresponding to the current calibration pose information, setting the calibration mark code image as a calibration image, incrementing m to m+1, and returning to step S1; if not, confirming that the calibration mark code corresponding to the calibration mark code image is not in the calibration pose corresponding to the current calibration pose information, and returning to step S1.
- The method for detecting the pose of an object according to claim 1, wherein the mark code contains at least one sub-mark code, and calculating the pose of the mark code according to the mark code image and the pre-generated pose calculation model specifically comprises: performing image segmentation on the mark code image to obtain at least one sub-mark code image meeting a shape requirement; comparing each sub-mark code image with pre-stored standard sub-mark code images to obtain the valid sub-mark code images among the sub-mark code images; and calculating the pose of the mark code according to the valid sub-mark code images and the pose calculation model.
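The segmentation-and-matching step above can be illustrated with a minimal sketch. It assumes, purely for illustration, that each sub-mark code is modeled as a small binary grid and that "valid" means matching some stored standard grid within a small Hamming distance; neither of these specifics comes from the claims.

```python
def hamming(grid_a, grid_b):
    """Number of differing cells between two equal-sized binary grids."""
    return sum(a != b
               for row_a, row_b in zip(grid_a, grid_b)
               for a, b in zip(row_a, row_b))

def valid_sub_codes(candidates, standards, max_errors=0):
    """Keep only the candidate sub-mark code grids that match some
    pre-stored standard grid within max_errors differing cells."""
    valid = []
    for cand in candidates:
        if any(hamming(cand, std) <= max_errors for std in standards):
            valid.append(cand)
    return valid
```

A candidate grid identical to a stored standard is kept; a grid that differs in more than `max_errors` cells is discarded before the pose computation.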
- The method for detecting the pose of an object according to claim 1, wherein the amount of rotation of the mark code relative to the reference pose includes a rotation angle γ and a unit direction vector (rrx, rry, rrz), and the amount of displacement of the mark code relative to the reference pose includes a displacement vector rt; and calculating the current pose of the object to be detected according to the pose of the mark code specifically comprises: calculating a plane displacement amount s of the object to be detected relative to the reference pose according to the displacement vector rt; and obtaining the current pose of the object to be detected according to the plane rotation angle θ and the plane displacement vector s.
- An apparatus for detecting the pose of an object, comprising: a mark code image obtaining module configured to receive an original image acquired by a camera and extract a mark code image from the original image, wherein the mark code image is an image of a mark code attached to an object to be detected; a mark code pose obtaining module configured to calculate the pose of the mark code according to the mark code image and a pre-generated pose calculation model, wherein the pose of the mark code includes the amount of rotation and the amount of displacement of the mark code relative to a reference pose; and an object pose obtaining module configured to calculate the current pose of the object to be detected according to the pose of the mark code.
- The apparatus for detecting the pose of an object according to claim 7, further comprising: a calibration image obtaining module configured to generate and display at least one piece of calibration pose information and obtain M calibration images according to the calibration pose information, wherein M > 0; and a pose calculation model generation module configured to generate the pose calculation model according to each of the calibration images and the corresponding calibration pose information.
- The apparatus for detecting the pose of an object according to claim 7, wherein the mark code contains at least one sub-mark code, and the mark code pose obtaining module specifically includes: a sub-mark code image obtaining unit configured to perform image segmentation on the mark code image to obtain at least one sub-mark code image meeting a shape requirement; a valid sub-mark code image obtaining unit configured to compare each sub-mark code image with pre-stored standard sub-mark code images to obtain the valid sub-mark code images among the sub-mark code images; and a mark code pose calculation unit configured to calculate the pose of the mark code according to the valid sub-mark code images and the pose calculation model.
- The apparatus for detecting the pose of an object according to claim 7, wherein the amount of rotation of the mark code relative to the reference pose includes a rotation angle γ and a unit direction vector (rrx, rry, rrz), and the amount of displacement of the mark code relative to the reference pose includes a displacement vector rt; and the object pose obtaining module specifically includes: a mark code rotation matrix obtaining unit configured to calculate the rotation matrix R of the mark code relative to the reference pose according to the rotation angle γ, the unit direction vector (rrx, rry, rrz) and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ); wherein, an object plane displacement obtaining unit configured to calculate the plane displacement amount s of the object to be detected relative to the reference pose according to the displacement vector rt; and an object current pose obtaining unit configured to obtain the current pose of the object to be detected according to the plane rotation angle θ and the plane displacement vector s.
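The rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ) is Rodrigues' rotation formula, reading ω as the skew-symmetric (cross-product) matrix of the unit direction vector; the claim leaves ω's definition elided, so that reading is an assumption here. A plain-Python sketch:

```python
import math

def skew(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    x, y, z = v
    return [[0.0, -z,   y],
            [z,   0.0, -x],
            [-y,  x,   0.0]]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(3)] for i in range(3)]

def mat_scale(a, s):
    return [[a[i][j] * s for j in range(3)] for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(axis, gamma):
    """R = I + w*sin(gamma) + w^2*(1 - cos(gamma)),
    where w = skew(axis) and axis is a unit direction vector."""
    identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    w = skew(axis)
    w2 = mat_mul(w, w)
    return mat_add(identity,
                   mat_add(mat_scale(w, math.sin(gamma)),
                           mat_scale(w2, 1.0 - math.cos(gamma))))
```

For instance, rotating by γ = π/2 about the z-axis (unit vector (0, 0, 1)) yields a matrix that maps the x-axis onto the y-axis, as expected for a 90° planar rotation.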
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710127752.8 | 2017-03-06 | ||
CN201710127752.8A CN106971406B (zh) | 2017-03-06 | 2017-03-06 | 物***姿的检测方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018161555A1 true WO2018161555A1 (zh) | 2018-09-13 |
Family
ID=59328826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/104668 WO2018161555A1 (zh) | 2017-03-06 | 2017-09-29 | 物***姿的检测方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106971406B (zh) |
WO (1) | WO2018161555A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110763204A (zh) * | 2019-06-25 | 2020-02-07 | 西安理工大学 | 一种平面编码靶标及其位姿测量方法 |
CN111540016A (zh) * | 2020-04-27 | 2020-08-14 | 深圳南方德尔汽车电子有限公司 | 基于图像特征匹配的位姿计算方法、装置、计算机设备及存储介质 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106971406B (zh) * | 2017-03-06 | 2019-10-29 | 广州视源电子科技股份有限公司 | 物***姿的检测方法和装置 |
CN107595562A (zh) * | 2017-09-22 | 2018-01-19 | 华南理工大学 | 一种基于自识别标记的室内导盲杖及其导盲方法 |
CN107845326A (zh) * | 2017-12-19 | 2018-03-27 | 中铁第四勘察设计院集团有限公司 | 高速铁路钢轨伸缩调节器位移识别标识牌及测量方法 |
CN109307585A (zh) * | 2018-04-26 | 2019-02-05 | 东南大学 | 一种近目式显示器性能的智能测试*** |
CN109677217A (zh) * | 2018-12-27 | 2019-04-26 | 魔视智能科技(上海)有限公司 | 牵引车与挂车偏航角的检测方法 |
CN110009683B (zh) * | 2019-03-29 | 2021-03-30 | 北京交通大学 | 基于MaskRCNN的实时平面上物体检测方法 |
CN114820814A (zh) * | 2019-10-30 | 2022-07-29 | 深圳市瑞立视多媒体科技有限公司 | 摄影机位姿计算方法、装置、设备及存储介质 |
CN113643380A (zh) * | 2021-08-16 | 2021-11-12 | 安徽元古纪智能科技有限公司 | 一种基于单目相机视觉标靶定位的机械臂引导方法 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05301183A (ja) * | 1992-04-28 | 1993-11-16 | Fujitsu Ltd | ロボット制御装置及びロボット制御方法 |
US20050102060A1 (en) * | 2003-11-06 | 2005-05-12 | Fanuc Ltd | Device for correcting positional data of robot |
CN101419055A (zh) * | 2008-10-30 | 2009-04-29 | 北京航空航天大学 | 基于视觉的空间目标位姿测量装置和方法 |
CN101839692A (zh) * | 2010-05-27 | 2010-09-22 | 西安交通大学 | 单相机测量物体三维位置与姿态的方法 |
CN102207368A (zh) * | 2010-03-29 | 2011-10-05 | 富士施乐株式会社 | 装配接收构件识别结构及使用该结构的装配信息识别装置和装配处理装置 |
CN102922521A (zh) * | 2012-08-07 | 2013-02-13 | 中国科学技术大学 | 一种基于立体视觉伺服的机械臂***及其实时校准方法 |
CN103020952A (zh) * | 2011-07-08 | 2013-04-03 | 佳能株式会社 | 信息处理设备和信息处理方法 |
CN103743393A (zh) * | 2013-12-20 | 2014-04-23 | 西安交通大学 | 一种圆柱状目标的位姿测量方法 |
CN103759716A (zh) * | 2014-01-14 | 2014-04-30 | 清华大学 | 基于机械臂末端单目视觉的动态目标位置和姿态测量方法 |
CN106971406A (zh) * | 2017-03-06 | 2017-07-21 | 广州视源电子科技股份有限公司 | 物***姿的检测方法和装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208122A (zh) * | 2013-04-18 | 2013-07-17 | 湖南大学 | 基于一维标定杆设计的多相机标定方法 |
CN104463833B (zh) * | 2013-09-22 | 2017-11-03 | 大族激光科技产业集团股份有限公司 | 一种标定一维面阵相机组相机参数的方法和*** |
CN103942796B (zh) * | 2014-04-23 | 2017-04-12 | 清华大学 | 一种高精度的投影仪‑摄像机标定***及标定方法 |
CN104880176B (zh) * | 2015-04-15 | 2017-04-12 | 大连理工大学 | 基于先验知识模型优化的运动物位姿测量方法 |
CN104933717B (zh) * | 2015-06-17 | 2017-08-11 | 合肥工业大学 | 基于方向性标定靶标的摄像机内外参数自动标定方法 |
CN106408556B (zh) * | 2016-05-23 | 2019-12-03 | 东南大学 | 一种基于一般成像模型的微小物体测量***标定方法 |
- 2017
- 2017-03-06 CN CN201710127752.8A patent/CN106971406B/zh active Active
- 2017-09-29 WO PCT/CN2017/104668 patent/WO2018161555A1/zh active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110763204A (zh) * | 2019-06-25 | 2020-02-07 | 西安理工大学 | 一种平面编码靶标及其位姿测量方法 |
CN111540016A (zh) * | 2020-04-27 | 2020-08-14 | 深圳南方德尔汽车电子有限公司 | 基于图像特征匹配的位姿计算方法、装置、计算机设备及存储介质 |
CN111540016B (zh) * | 2020-04-27 | 2023-11-10 | 深圳南方德尔汽车电子有限公司 | 基于图像特征匹配的位姿计算方法、装置、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN106971406B (zh) | 2019-10-29 |
CN106971406A (zh) | 2017-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018161555A1 (zh) | 物***姿的检测方法和装置 | |
CN108764048B (zh) | 人脸关键点检测方法及装置 | |
US11727593B1 (en) | Automated data capture | |
US7965904B2 (en) | Position and orientation measuring apparatus and position and orientation measuring method, mixed-reality system, and computer program | |
CN109934847B (zh) | 弱纹理三维物体姿态估计的方法和装置 | |
Azad et al. | Stereo-based 6d object localization for grasping with humanoid robot systems | |
CN106981091B (zh) | 人体三维建模数据处理方法及装置 | |
CN111862296A (zh) | 三维重建方法及装置、***、模型训练方法、存储介质 | |
CN111862201A (zh) | 一种基于深度学习的空间非合作目标相对位姿估计方法 | |
CN106384355B (zh) | 一种投影交互***中的自动标定方法 | |
JP2019192022A (ja) | 画像処理装置、画像処理方法及びプログラム | |
CN111784775B (zh) | 一种标识辅助的视觉惯性增强现实注册方法 | |
CN107480603B (zh) | 基于slam和深度摄像头的同步建图与物体分割方法 | |
CN105934757B (zh) | 一种用于检测第一图像的关键点和第二图像的关键点之间的不正确关联关系的方法和装置 | |
WO2022021782A1 (zh) | 六维姿态数据集自动生成方法、***、终端以及存储介质 | |
JP2010267232A (ja) | 位置姿勢推定方法および装置 | |
CN111695431A (zh) | 一种人脸识别方法、装置、终端设备及存储介质 | |
CN112613123A (zh) | 一种飞机管路ar三维注册方法及装置 | |
CN112348869A (zh) | 通过检测和标定恢复单目slam尺度的方法 | |
JP2015219868A (ja) | 情報処理装置、情報処理方法、プログラム | |
WO2020015501A1 (zh) | 地图构建方法、装置、存储介质及电子设备 | |
JP7171294B2 (ja) | 情報処理装置、情報処理方法及びプログラム | |
CN114187253A (zh) | 一种电路板零件安装检测方法 | |
US11989928B2 (en) | Image processing system | |
US9098746B2 (en) | Building texture extracting apparatus and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17899511 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17899511 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.03.2020) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17899511 Country of ref document: EP Kind code of ref document: A1 |