WO2018161555A1 - Method and device for detecting object pose - Google Patents

Method and device for detecting object pose

Info

Publication number
WO2018161555A1
WO2018161555A1 · PCT/CN2017/104668 · CN2017104668W
Authority
WO
WIPO (PCT)
Prior art keywords
pose
image
mark code
calibration
code
Prior art date
Application number
PCT/CN2017/104668
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司
Publication of WO2018161555A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • The present invention relates to the field of computer technology, and in particular to a method and device for detecting the pose of an object.
  • The pose estimation problem refers to the process of estimating the relative rotation and translation between two spatial coordinate systems from feature correspondences; it is a core technical problem in enabling an intelligent robot to carry objects.
  • The pose estimation problem is an important fundamental problem in computer vision, computer graphics, and photogrammetry.
  • Existing pose estimation methods fall into three main categories: feature-point-based pose estimation, model-based pose estimation, and learning-based pose estimation.
  • Model-based pose estimation methods usually estimate from the geometric relationships of the object.
  • The basic idea is to represent the structure and shape of the object with some geometric model or structure, extract certain object features to establish a correspondence between the model and the image, and then estimate the spatial pose geometrically or by other means.
  • The model may be a simple geometric shape, such as a plane or a cylinder, some other geometric structure, or a three-dimensional model obtained by laser scanning or other methods.
  • Model-based pose estimation methods update the object pose by comparing the real image with a synthesized image and computing their similarity.
  • To avoid an optimization search over the global state space, current model-based methods generally decompose the optimization problem into matching problems over multiple local features, and therefore depend heavily on accurate detection of those local features; when noise is too strong to extract accurate local features, the robustness of such methods suffers greatly.
  • Learning-based methods use machine learning to learn the correspondence between two-dimensional observation images and three-dimensional poses from training samples acquired in advance under different postures, apply the learned decision rule or regression function to a sample, and take the result as the pose estimate for that sample.
  • Learning-based methods generally use global observation features, need not detect or identify local features of the object, and are quite robust.
  • Their disadvantage is that the dense sampling required for continuous estimation in a high-dimensional space cannot be obtained, so the accuracy and continuity of the pose estimate cannot be guaranteed.
  • For chair pose estimation in particular, a learning-based method requires collecting and labeling a large number of samples for all kinds of chairs, which also costs considerable manpower and material resources.
  • The present invention provides a method and device for detecting the pose of an object, which simplify the process of detecting an object's pose, improve the efficiency of the detection process, and improve the accuracy of the detection result.
  • The method for detecting the pose of an object provided by the present invention specifically includes the steps described below.
  • The marker-code image is an image of a marker code attached to the object to be detected.
  • Before the original image is received, the method further includes: generating and displaying at least one piece of target positioning pose information, and obtaining M calibration images according to that information.
  • Generating and displaying at least one piece of target positioning pose information and obtaining M calibration images according to it specifically includes steps S1 to S3.
  • If the calibration marker code is in the target positioning pose, the calibration marker-code image is set as a calibration image, the value of m is changed to m+1, and the process returns to step S1.
  • Step S3 judges, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, the calibration marker-code image is set as a calibration image and the value of m is changed to m+1; if not, the process returns to step S1.
  • If so, it is confirmed that the calibration marker code corresponding to the calibration marker-code image is in the target positioning pose corresponding to the current target positioning pose information; the image is set as a calibration image, the value of m is changed to m+1, and the process returns to step S1.
  • The marker code contains at least one sub-marker code.
  • Computing the pose of the marker code specifically includes the sub-steps described below.
  • The rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t.
  • Computing the current pose of the object to be detected then specifically includes the sub-steps described below.
  • The present invention also provides a device for detecting the pose of an object, which specifically includes:
  • a marker-code image obtaining module, configured to receive an original image captured by the camera and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;
  • a marker-code pose obtaining module, configured to compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and
  • an object pose obtaining module, configured to compute the current pose of the object to be detected from the pose of the marker code.
  • The device for detecting the pose of an object further includes:
  • a calibration image obtaining module, configured to generate and display at least one piece of target positioning pose information and obtain M calibration images according to it, where M > 0; and
  • a pose calculation model generating module, configured to generate the pose calculation model from each calibration image and its corresponding target positioning pose information.
  • The calibration image obtaining module specifically includes:
  • a current target positioning pose information display unit, configured to generate and display one piece of current target positioning pose information when the number m of calibration images obtained so far is less than M;
  • a calibration marker-code image obtaining unit, configured to receive the original calibration image captured by the camera for the current target positioning pose information and extract a calibration marker-code image from it; and
  • a loop unit, configured to judge, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, to set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; if not, to return to the current target positioning pose information display unit.
  • The loop unit specifically includes:
  • a key-point distance computing subunit, configured to identify key points in the current calibration marker-code image and compute the distances between them;
  • a key-point distance judging subunit, configured to judge, from the current target positioning pose information, whether the distances between the key points fall within a preset range;
  • a first loop subunit, configured to confirm, when the distances fall within the preset range, that the calibration marker code corresponding to the image is in the target positioning pose corresponding to the current target positioning pose information, set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; and
  • a second loop subunit, configured to confirm, when the distances do not fall within the preset range, that the calibration marker code corresponding to the image is not in that target positioning pose, and return to the current target positioning pose information display unit.
  • The marker code contains at least one sub-marker code.
  • The marker-code pose obtaining module specifically includes:
  • a sub-marker-code image obtaining unit, configured to segment the marker-code image and obtain at least one sub-marker-code image that meets the shape requirement;
  • a legal sub-marker-code image obtaining unit, configured to compare each sub-marker-code image with pre-stored standard sub-marker-code images and obtain the legal sub-marker-code images among them; and
  • a marker-code pose computing unit, configured to compute the pose of the marker code from the legal sub-marker-code images and the pose calculation model.
  • The rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t.
  • The object pose obtaining module specifically includes:
  • an object plane rotation angle obtaining unit, configured to compute the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula;
  • an object plane displacement obtaining unit, configured to compute the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and
  • an object current-pose obtaining unit, configured to obtain the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.
  • In the method and device for detecting the pose of an object, the pose of the object to be detected is marked with a marker code, so the system can obtain the pose of the marker code by computing on the marker-code image captured by the camera, and from it compute the pose of the object to be detected. Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process.
  • Because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the pose detection result.
  • FIG. 1 is a flow diagram of a preferred embodiment of the method for detecting object pose provided by the present invention.
  • FIG. 2 is a diagram of a marker code attached to a chair back in a preferred embodiment of the method for detecting object pose provided by the present invention.
  • FIG. 3 is a diagram of a calibration plate in a further preferred embodiment of the method for detecting object pose provided by the present invention.
  • FIG. 4 is a structural diagram of a preferred embodiment of the device for detecting object pose provided by the present invention.
  • The invention analyzes and computes on the image, captured by the camera, of the marker code attached to the object to be detected, obtaining the pose of the marker code and, from it, the pose of the object to be detected.
  • Because the invention marks the pose of the object with the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process.
  • Because the feature points in the marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.
  • As shown in FIG. 1, a preferred embodiment of the method for detecting object pose provided by the present invention includes steps S11 to S13, as follows:
  • S11: receive an original image captured by a camera, and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;
  • S12: compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose;
  • S13: compute the current pose of the object to be detected from the pose of the marker code.
  • Before the system detects the pose of the object to be detected, a marker code must be attached to the surface of the object.
  • In general, the marker code should be attached to a vertical face of the object to be detected.
  • The system can then detect the pose of the object. Specifically, the camera captures original images in real time and sends them to the system, which analyzes and computes on each received frame separately.
  • After receiving the current frame, the system judges whether it contains a marker-code image (i.e., whether the camera has captured the marker code); if so, it extracts the marker-code image from the original image, substitutes it into the pre-generated pose calculation model to compute the pose of the marker code, and computes the pose of the object to be detected from the pose of the marker code; if not, the original image is not processed.
  • The computed pose of the object to be detected may be a planar pose or a three-dimensional pose.
  • The system can thus obtain the pose of the marker code by computing on the marker-code image captured by the camera, and from it compute the pose of the object to be detected. Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in the marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.
  • Because the present invention computes the pose of the object via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources, and greatly increases the universality of the pose detection method.
  • The marker code contains at least one sub-marker code.
  • Computing the pose of the marker code specifically includes the sub-steps described below.
  • The rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t.
  • Computing the current pose of the object to be detected specifically includes the sub-steps described below.
  • The marker code contains at least one sub-marker code, so the marker-code image captured by the camera contains at least one sub-marker-code image.
  • A sub-marker code is a tag similar to a QR code.
  • After receiving the original image sent by the camera, the system first judges whether it contains a marker-code image (whether the camera has captured the marker code), i.e., whether it contains a sub-marker-code image. Specifically, the system segments the original image with an adaptive threshold method and extracts edge contours from the segmented image; contours that are concave, not approximately quadrilateral, too large or too small in area, or whose centers lie too close together are deleted together with their contents, leaving the contours that are quadrilateral or approximately quadrilateral, with their contents, i.e., the sub-marker-code images. If no contour remains after extraction and deletion, the original image contains no sub-marker-code image and the system does not process it.
  • The system then determines whether each obtained sub-marker-code image is a legal sub-marker-code image.
  • Specifically, the system first applies a perspective transform to the sub-marker-code image, converting it to a fronto-parallel view; it then divides the whole marker-code image into a two-dimensional grid according to the size of the sub-marker-code image and the size of the whole marker-code image; it then segments the whole marker-code image with the Otsu threshold method (the Otsu algorithm, also known as the maximum between-class variance algorithm) and judges the color of each grid cell (black or white) from the segmentation result, obtaining the information of each sub-marker-code image; finally, it checks whether that information exists in a preset dictionary of standard sub-marker-code images: if so, the sub-marker-code image is legal; if not, it is illegal.
  • The system substitutes the legal sub-marker-code images into the pre-generated pose calculation model, computes the pose of each legal sub-marker-code image, and from them computes the pose of the whole marker code, i.e., the rotation and displacement of the whole marker code relative to the reference pose.
  • In general, the reference pose is the pose at the camera's location that is perpendicular to the horizontal plane.
  • The system then computes the pose of the object to be detected from the pose of the marker code: it projects the pose of the marker code onto the horizontal plane (it being understood that the object to be detected generally stands on a horizontal plane) to obtain the pose of the object.
  • Specifically, the system first converts the rotation of the marker code relative to the reference pose using the rotation transformation formula, expressing the rotation as a rotation matrix R; it then uses the rotation matrix R and the conversion formula to project the rotation of the marker code relative to the reference pose onto the horizontal plane, obtaining the rotation of the object to be detected relative to the reference pose; finally, from that rotation it computes the plane rotation angle θ of the object relative to the reference pose.
  • At the same time, the system projects the translation of the marker code relative to the reference pose onto the horizontal plane, obtaining the plane displacement s of the object to be detected relative to the reference pose.
  • The system determines the pose of the object to be detected from the computed plane rotation angle θ and plane displacement s. It will be understood that the pose obtained here is a planar pose.
  • Before the original image is received, the method further includes generating and displaying at least one piece of target positioning pose information and obtaining M calibration images according to it.
  • Generating and displaying at least one piece of target positioning pose information and obtaining M calibration images specifically includes steps S1 to S3.
  • If the calibration marker code is in the target positioning pose, the calibration marker-code image is set as a calibration image, the value of m is changed to m+1, and the process returns to step S1.
  • Step S3 judges, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, the image is set as a calibration image and m is changed to m+1; if not, the process returns to step S1.
  • When the calibration marker code is confirmed to be in the target positioning pose corresponding to the current target positioning pose information, the calibration marker-code image is set as a calibration image, the value of m is changed to m+1, and the process returns to step S1.
  • Camera calibration is also required: the camera parameter M is computed from the generated target positioning pose information x and the captured calibration images y, yielding the pose calculation model Y = M·X for the camera, where Y is the object pose and X is the captured object image.
  • The calibration marker code is first pasted on a board, yielding a calibration plate as shown in FIG. 3.
  • The system judges whether the number of calibration images captured so far is greater than or equal to M (M > 0); if so, it computes the pose calculation model from the captured calibration images and the corresponding target positioning pose information; if not, it generates and displays one piece of target positioning pose information.
  • The user places the calibration plate in the pose specified by the target positioning pose information.
  • The camera captures an image of the calibration plate placed by the user, obtains the original calibration image, and sends it to the system.
  • The system extracts the calibration marker-code image from it and computes the distances between the key points in the image. If the distances fall within the preset range, the pose of the calibration plate is considered consistent with the pose specified by the target positioning pose information; the calibration marker-code image is set as a calibration image, the system again judges whether the number of calibration images captured so far is greater than or equal to M, and the loop continues.
  • If the distances do not fall within the preset range, the pose of the calibration plate is considered inconsistent with the specified pose; the calibration marker-code image is not processed, and the system again judges whether the number of calibration images is greater than or equal to M and executes the subsequent steps accordingly, or displays the same target positioning pose information again to prompt the user to place the calibration plate in the specified pose.
  • By generating and displaying target positioning pose information and capturing the corresponding calibration marker-code images with the camera, the pose calculation model can be computed and camera calibration completed; because the calibration process is simple and easy to operate, it further improves the efficiency of the pose detection process and the user experience.
  • The method for detecting the pose of an object marks the pose of the object to be detected with a marker code, so the system can compute the pose of the marker code from the marker-code image captured by the camera, and from it compute the pose of the object to be detected.
  • Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in the marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.
  • Because the present invention computes the pose of the object via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources, and greatly increases the universality of the pose detection method.
  • Because the pose calculation model can be computed and camera calibration completed through a process that is simple and easy to operate, the efficiency of the pose detection process and the user experience are further improved.
  • Accordingly, the present invention also provides a device for detecting the pose of an object, capable of realizing the entire flow of the method described above.
  • As shown in FIG. 4, a structural diagram of a preferred embodiment of the device for detecting the pose of an object provided by the present invention, the device is as follows:
  • a marker-code image obtaining module 41, configured to receive an original image captured by the camera and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;
  • a marker-code pose obtaining module 42, configured to compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and
  • an object pose obtaining module 43, configured to compute the current pose of the object to be detected from the pose of the marker code.
  • The marker code contains at least one sub-marker code.
  • The marker-code pose obtaining module 42 specifically includes:
  • a sub-marker-code image obtaining unit, configured to segment the marker-code image and obtain at least one sub-marker-code image that meets the shape requirement;
  • a legal sub-marker-code image obtaining unit, configured to compare each sub-marker-code image with pre-stored standard sub-marker-code images and obtain the legal sub-marker-code images among them; and
  • a marker-code pose computing unit, configured to compute the pose of the marker code from the legal sub-marker-code images and the pose calculation model.
  • The rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t.
  • The object pose obtaining module 43 specifically includes:
  • an object plane rotation angle obtaining unit, configured to compute the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula;
  • an object plane displacement obtaining unit, configured to compute the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and
  • an object current-pose obtaining unit, configured to obtain the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.
  • The device for detecting the pose of the object further includes:
  • a calibration image obtaining module, configured to generate and display at least one piece of target positioning pose information and obtain M calibration images according to it, where M > 0; and
  • a pose calculation model generating module, configured to generate the pose calculation model from each calibration image and its corresponding target positioning pose information.
  • The calibration image obtaining module specifically includes:
  • a current target positioning pose information display unit, configured to generate and display one piece of current target positioning pose information when the number m of calibration images obtained so far is less than M;
  • a calibration marker-code image obtaining unit, configured to receive the original calibration image captured by the camera for the current target positioning pose information and extract a calibration marker-code image from it; and
  • a loop unit, configured to judge, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, to set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; if not, to return to the current target positioning pose information display unit.
  • The loop unit specifically includes:
  • a key-point distance computing subunit, configured to identify key points in the current calibration marker-code image and compute the distances between them;
  • a key-point distance judging subunit, configured to judge, from the current target positioning pose information, whether the distances between the key points fall within a preset range;
  • a first loop subunit, configured to confirm, when the distances fall within the preset range, that the calibration marker code corresponding to the image is in the target positioning pose corresponding to the current target positioning pose information, set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; and
  • a second loop subunit, configured to confirm, when the distances do not fall within the preset range, that the calibration marker code corresponding to the image is not in that target positioning pose, and return to the current target positioning pose information display unit.
  • The device for detecting the pose of an object marks the pose of the object to be detected with a marker code, so the system can compute the pose of the marker code from the marker-code image captured by the camera, and from it compute the pose of the object to be detected.
  • Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in the marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.
  • Because the present invention computes the pose of the object via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources, and greatly increases the universality of the pose detection method.
  • Because the pose calculation model can be computed and camera calibration completed through a process that is simple and easy to operate, the efficiency of the pose detection process and the user experience are further improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a method and device for detecting the pose of an object. The detection method includes: receiving an original image captured by a camera, and extracting a marker-code image from the original image, where the marker-code image is an image of a marker code attached to the object to be detected; computing the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and computing the current pose of the object to be detected from the pose of the marker code. The invention simplifies the process of detecting an object's pose, improves the efficiency of the detection process, and improves the accuracy of the detection result.

Description

Method and device for detecting object pose

Technical Field

The present invention relates to the field of computer technology, and in particular to a method and device for detecting the pose of an object.
Background

In the field of intelligent robotics, a robot carrying an object must detect the object's current position and angle relative to itself. The pose estimation problem refers to the process of estimating the relative rotation and translation between two spatial coordinate systems from feature correspondences; it is a core technical problem in enabling an intelligent robot to carry objects, and an important fundamental problem in computer vision, computer graphics, and photogrammetry.

Existing pose estimation methods fall into three main categories: feature-point-based pose estimation, model-based pose estimation, and learning-based pose estimation.

(1) Feature-point-based pose estimation

A feature-point-based method first extracts a number of feature points from the image, then matches the image against a standard image to obtain at least one pair of matching feature points, and finally solves for the object pose from the matched points.

Although feature-point-based methods dominate visual odometry, they still have several drawbacks. First, keypoint extraction and descriptor computation are very time-consuming: in practice, local features such as SIFT cannot currently be computed in real time on a CPU, and even ORB takes nearly 20 milliseconds. Second, using feature points discards all information other than the feature points themselves: an image has hundreds of thousands of pixels but only a few hundred feature points, so most potentially useful image information is thrown away. Finally, not all objects offer many effective features; facing a white wall or an empty corridor, the number of feature points drops sharply, and there may not be enough matches to compute position and angle.

For chair pose estimation in particular, chairs tend to have little texture and may offer few effective feature points; in that case a feature-point-based method may not find enough matching points, and its estimates of position and angle can be very imprecise.

(2) Model-based pose estimation

Model-based pose estimation methods usually estimate from the geometric relationships of the object. The basic idea is to represent the structure and shape of the object with some geometric model or structure, extract certain object features to establish a correspondence between the model and the image, and then estimate the object's spatial pose geometrically or by other means. The model may be a simple geometric shape, such as a plane or a cylinder, some other geometric structure, or a three-dimensional model obtained by laser scanning or other methods. Model-based methods update the object pose by comparing the real image with a synthesized image and computing their similarity.

To avoid an optimization search over the global state space, current model-based methods generally decompose the optimization problem into matching problems over multiple local features, and therefore depend heavily on accurate detection of those features. When noise is too strong to extract accurate local features, the robustness of such methods suffers greatly.

For chair pose estimation in particular, chairs vary enormously in shape, and no single geometric model can approximately describe them all. A model-based method would therefore require modeling the geometry of every chair model, at great cost in manpower and material resources.

(3) Learning-based pose estimation

Learning-based methods use machine learning to learn the correspondence between two-dimensional observation images and three-dimensional poses from training samples acquired in advance under different postures, apply the learned decision rule or regression function to a sample, and take the result as the pose estimate for that sample.

Learning-based methods generally use global observation features, need not detect or identify local features of the object, and are quite robust. Their disadvantage is that the dense sampling required for continuous estimation in a high-dimensional space cannot be obtained, so the accuracy and continuity of the pose estimate cannot be guaranteed.

For chair pose estimation in particular, beyond the drawbacks above, a learning-based method requires collecting and labeling a large number of samples for all kinds of chairs, which again costs considerable manpower and material resources.
Summary of the Invention

The present invention provides a method and device for detecting the pose of an object, which simplify the process of detecting an object's pose, improve the efficiency of the detection process, and improve the accuracy of the detection result.

The method for detecting the pose of an object provided by the present invention specifically includes:

receiving an original image captured by a camera, and extracting a marker-code image from the original image, where the marker-code image is an image of a marker code attached to the object to be detected;

computing the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and

computing the current pose of the object to be detected from the pose of the marker code.

Further, before receiving the original image captured by the camera and extracting the marker-code image from it, the method further includes:

generating and displaying at least one piece of target positioning pose information, and obtaining M calibration images according to the target positioning pose information, where M > 0; and

generating the pose calculation model from each calibration image and its corresponding target positioning pose information.

Further, generating and displaying at least one piece of target positioning pose information and obtaining M calibration images according to it specifically includes the steps:

S1: when the number m of calibration images obtained so far is less than M, generating and displaying one piece of current target positioning pose information;

S2: receiving the original calibration image captured by the camera for the current target positioning pose information, and extracting a calibration marker-code image from it;

S3: judging, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information;

if so, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1;

if not, returning to step S1.

Further, the judging of whether the calibration marker code corresponding to the calibration marker-code image is in the target positioning pose corresponding to the current target positioning pose information, with the image set as a calibration image and m changed to m+1 if so, and the return to step S1 if not, specifically includes:

identifying key points in the current calibration marker-code image, and computing the distances between the key points;

judging, from the current target positioning pose information, whether the distances between the key points fall within a preset distance range;

if so, confirming that the calibration marker code corresponding to the image is in the target positioning pose corresponding to the current target positioning pose information, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1;

if not, confirming that the calibration marker code corresponding to the image is not in that target positioning pose, and returning to step S1.

Further, the marker code contains at least one sub-marker code; computing the pose of the marker code from the marker-code image and the pre-generated pose calculation model then specifically includes:

segmenting the marker-code image to obtain at least one sub-marker-code image that meets the shape requirement;

comparing each sub-marker-code image with pre-stored standard sub-marker-code images to obtain the legal sub-marker-code images among them; and

computing the pose of the marker code from the legal sub-marker-code images and the pose calculation model.

Further, the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t. Computing the current pose of the object to be detected from the pose of the marker code then specifically includes:

computing the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), where ω is the skew-symmetric matrix of the unit direction vector, ω = [[0, −r_rz, r_ry], [r_rz, 0, −r_rx], [−r_ry, r_rx, 0]];

computing the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, where v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin [its components are given as an image in the original];

computing the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];

computing the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and

obtaining the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.
Correspondingly, the present invention also provides a device for detecting the pose of an object, which specifically includes:

a marker-code image obtaining module, configured to receive an original image captured by a camera and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;

a marker-code pose obtaining module, configured to compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and

an object pose obtaining module, configured to compute the current pose of the object to be detected from the pose of the marker code.

Further, the device for detecting the pose of an object further includes:

a calibration image obtaining module, configured to generate and display at least one piece of target positioning pose information and obtain M calibration images according to it, where M > 0; and

a pose calculation model generating module, configured to generate the pose calculation model from each calibration image and its corresponding target positioning pose information.

Further, the calibration image obtaining module specifically includes:

a current target positioning pose information display unit, configured to generate and display one piece of current target positioning pose information when the number m of calibration images obtained so far is less than M;

a calibration marker-code image obtaining unit, configured to receive the original calibration image captured by the camera for the current target positioning pose information and extract a calibration marker-code image from it; and

a loop unit, configured to judge, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, to set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; if not, to return to the current target positioning pose information display unit.

Further, the loop unit specifically includes:

a key-point distance computing subunit, configured to identify key points in the current calibration marker-code image and compute the distances between them;

a key-point distance judging subunit, configured to judge, from the current target positioning pose information, whether the distances between the key points fall within a preset range;

a first loop subunit, configured to confirm, when the distances fall within the preset range, that the calibration marker code corresponding to the image is in the target positioning pose corresponding to the current target positioning pose information, set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; and

a second loop subunit, configured to confirm, when the distances do not fall within the preset range, that the calibration marker code corresponding to the image is not in that target positioning pose, and return to the current target positioning pose information display unit.

Further, the marker code contains at least one sub-marker code; the marker-code pose obtaining module then specifically includes:

a sub-marker-code image obtaining unit, configured to segment the marker-code image and obtain at least one sub-marker-code image that meets the shape requirement;

a legal sub-marker-code image obtaining unit, configured to compare each sub-marker-code image with pre-stored standard sub-marker-code images and obtain the legal sub-marker-code images among them; and

a marker-code pose computing unit, configured to compute the pose of the marker code from the legal sub-marker-code images and the pose calculation model.

Further, the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t. The object pose obtaining module then specifically includes:

a marker-code rotation matrix obtaining unit, configured to compute the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), where ω is the skew-symmetric matrix of the unit direction vector;

an object rotation obtaining unit, configured to compute the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, where v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin;

an object plane rotation angle obtaining unit, configured to compute the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];

an object plane displacement obtaining unit, configured to compute the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and

an object current-pose obtaining unit, configured to obtain the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.

Implementing the present invention has the following beneficial effects:

The method and device for detecting the pose of an object provided by the present invention mark the pose of the object to be detected with a marker code, so that the system can compute the pose of the marker code from the marker-code image captured by the camera and, from it, the pose of the object to be detected. Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.
Brief Description of the Drawings

FIG. 1 is a flow diagram of a preferred embodiment of the method for detecting object pose provided by the present invention;

FIG. 2 is a diagram of a marker code attached to a chair back in a preferred embodiment of the method for detecting object pose provided by the present invention;

FIG. 3 is a diagram of a calibration plate in a further preferred embodiment of the method for detecting object pose provided by the present invention;

FIG. 4 is a structural diagram of a preferred embodiment of the device for detecting object pose provided by the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention; all other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments without creative effort fall within the scope of protection of the invention.

The present invention analyzes and computes on the image, captured by a camera, of a marker code attached to the object to be detected, thereby obtaining the pose of the marker code and, from it, the pose of the object to be detected. Because the invention marks the pose of the object with the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the process of detecting the object's pose; and because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result.

As shown in FIG. 1, a preferred embodiment of the method for detecting object pose provided by the present invention includes steps S11 to S13, as follows:

S11: receive an original image captured by a camera, and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;

S12: compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose;

S13: compute the current pose of the object to be detected from the pose of the marker code.

It should be noted that before the system detects the pose of the object to be detected, a marker code must be attached to the surface of the object, generally to a vertical face of it. For example, when the object is a chair, FIG. 2 shows a marker code attached to the chair back. Once the marker code is in place, the system can detect the object's pose. Specifically, the camera captures original images in real time and sends them to the system, which analyzes and computes on each received frame separately. After receiving the current frame, the system judges whether it contains a marker-code image (i.e., whether the camera has captured the marker code); if so, it extracts the marker-code image from the original image, substitutes it into the pre-generated pose calculation model to compute the pose of the marker code, and then computes the pose of the object from the pose of the marker code; if not, the original image is not processed. The computed pose of the object may be a planar pose or a three-dimensional pose.

By marking the pose of the object to be detected with a marker code, the system can compute the pose of the marker code from the marker-code image captured by the camera and, from it, the pose of the object. Because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result. In addition, because the invention computes the object's pose via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources and greatly increases the universality of the pose detection method.
In another preferred embodiment, on the basis of the above, the marker code contains at least one sub-marker code; computing the pose of the marker code from the marker-code image and the pre-generated pose calculation model then specifically includes:

segmenting the marker-code image to obtain at least one sub-marker-code image that meets the shape requirement;

comparing each sub-marker-code image with pre-stored standard sub-marker-code images to obtain the legal sub-marker-code images among them; and

computing the pose of the marker code from the legal sub-marker-code images and the pose calculation model.

Further, the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t. Computing the current pose of the object to be detected from the pose of the marker code then specifically includes:

computing the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), where ω is the skew-symmetric matrix of the unit direction vector;

computing the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, where v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin;

computing the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];

computing the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and

obtaining the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.

It should be noted that the marker code contains at least one sub-marker code, so the marker-code image captured by the camera contains at least one sub-marker-code image. A sub-marker code is a tag similar to a QR code.

After receiving the original image sent by the camera, the system first judges whether it contains a marker-code image (whether the camera has captured the marker code), i.e., whether it contains a sub-marker-code image. Specifically, the system segments the original image with an adaptive threshold method and extracts edge contours from the segmented image; contours that are concave, not approximately quadrilateral, too large or too small in area, or whose centers lie too close together are deleted together with their contents, leaving the contours that are quadrilateral or approximately quadrilateral, with their contents, i.e., the sub-marker-code images. If no contour remains after extraction and deletion, the original image contains no sub-marker-code image and the system does not process it.
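To make this step concrete, here is a minimal Python/OpenCV sketch of the candidate extraction just described: adaptive-threshold segmentation followed by quadrilateral-contour filtering. The function name, the threshold block size, and the area and center-distance bounds are illustrative assumptions, not values specified by the patent.

```python
import cv2
import numpy as np

def find_sub_marker_candidates(gray, min_area=400.0, max_area=100000.0,
                               min_center_dist=10.0):
    """Return convex, roughly quadrilateral contours as (4, 2) corner arrays."""
    # Adaptive-threshold segmentation (block size and offset are illustrative).
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 7)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    quads = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if not (min_area <= area <= max_area):   # area too small or too large
            continue
        approx = cv2.approxPolyDP(contour, 0.03 * cv2.arcLength(contour, True), True)
        if len(approx) != 4 or not cv2.isContourConvex(approx):
            continue                             # concave or not a quadrilateral
        quads.append(approx.reshape(4, 2).astype(np.float32))

    # Discard candidates whose centers lie too close to an already kept one.
    kept = []
    for quad in quads:
        center = quad.mean(axis=0)
        if all(np.linalg.norm(center - k.mean(axis=0)) >= min_center_dist
               for k in kept):
            kept.append(quad)
    return kept
```

If the returned list is empty, the frame contains no sub-marker-code image and, as described above, is left unprocessed.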
Next, the system judges whether each obtained sub-marker-code image is a legal sub-marker-code image. Specifically, the system first applies a perspective transform to the sub-marker-code image, converting it to a fronto-parallel view; it then divides the whole marker-code image into a two-dimensional grid according to the size of the sub-marker-code image and the size of the whole marker-code image; it then segments the whole marker-code image with the Otsu threshold method (the Otsu algorithm, also known as the maximum between-class variance algorithm) and judges the color of each grid cell (black or white) from the segmentation result, thereby obtaining the information of each sub-marker-code image; finally, it judges from that information whether the sub-marker-code image exists in a preset sub-marker-code image dictionary (which stores a number of standard sub-marker-code images): if it does, the sub-marker-code image is legal; if not, it is illegal.
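The legality check can be sketched as follows: warp the candidate to a fronto-parallel view, binarize with the Otsu method, read the grid cell by cell, and look the resulting bit pattern up in a dictionary of standard sub-marker codes. The 6×6 grid, the warp resolution, and the dictionary keyed by flattened bit tuples are illustrative assumptions.

```python
import cv2
import numpy as np

GRID = 6    # assumed number of cells per side
CELL = 16   # warped pixels per cell (illustrative)

def decode_sub_marker(gray, quad, dictionary):
    """Return the ID of a legal sub-marker code, or None if the pattern is unknown.

    Assumes quad corners are ordered top-left, top-right, bottom-right, bottom-left.
    """
    side = GRID * CELL
    dst = np.array([[0, 0], [side - 1, 0], [side - 1, side - 1], [0, side - 1]],
                   dtype=np.float32)
    # Perspective transform: convert the candidate to a flat, front-on view.
    H = cv2.getPerspectiveTransform(quad.astype(np.float32), dst)
    flat = cv2.warpPerspective(gray, H, (side, side))
    # Otsu (maximum between-class variance) thresholding.
    _, binary = cv2.threshold(flat, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Judge each grid cell black (0) or white (1) from its mean intensity.
    bits = np.zeros((GRID, GRID), dtype=np.uint8)
    for row in range(GRID):
        for col in range(GRID):
            cell = binary[row * CELL:(row + 1) * CELL, col * CELL:(col + 1) * CELL]
            bits[row, col] = 1 if cell.mean() > 127 else 0

    # A pattern is "legal" only if it appears in the pre-stored dictionary.
    return dictionary.get(tuple(bits.flatten()))
```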
The system then substitutes the legal sub-marker-code images into the pre-generated pose calculation model, computes the pose of each legal sub-marker-code image, and from them computes the pose of the whole marker code, i.e., the rotation and displacement of the whole marker code relative to the reference pose. In general, the reference pose is the pose at the camera's location that is perpendicular to the horizontal plane.

After computing the pose of the whole marker code, the system computes the pose of the object to be detected from it: the system projects the pose of the marker code onto the horizontal plane (it being understood that the object to be detected generally stands on a horizontal plane), obtaining the pose of the object. Specifically, the system first converts the rotation of the marker code relative to the reference pose using the rotation transformation formula above, expressing the rotation as a rotation matrix R; it then uses the rotation matrix R and the conversion formula above to project the rotation of the marker code relative to the reference pose onto the horizontal plane, obtaining the rotation of the object relative to the reference pose; finally, from that rotation it computes the plane rotation angle θ of the object relative to the reference pose. At the same time, the system projects the translation of the marker code relative to the reference pose onto the horizontal plane, obtaining the plane displacement s of the object relative to the reference pose. From the computed plane rotation angle θ and plane displacement s the system can determine the pose of the object to be detected; it will be understood that the pose obtained here is a planar pose.
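A minimal sketch of this projection step follows, assuming the rotation transformation formula quoted above. Because the v_ref and plane-angle formulas appear only as images in the original, the choice v_ref = (1, 0, 0), the use of atan2 over the first two components, and the treatment of the third axis as vertical are assumptions of this sketch.

```python
import numpy as np

def plane_pose(gamma, axis, r_t, v_ref=np.array([1.0, 0.0, 0.0])):
    """Project a marker pose (angle gamma, unit axis, translation r_t) onto the plane.

    axis is the unit direction vector (r_rx, r_ry, r_rz); the last component of
    r_t is treated as the vertical axis and dropped (an assumed convention).
    """
    rx, ry, rz = axis
    # Skew-symmetric (cross-product) matrix of the unit rotation axis.
    omega = np.array([[0.0, -rz,  ry],
                      [ rz, 0.0, -rx],
                      [-ry,  rx, 0.0]])
    # Rotation transformation formula: R = I + omega*sin(gamma) + omega^2*(1 - cos(gamma)).
    R = np.eye(3) + omega * np.sin(gamma) + (omega @ omega) * (1.0 - np.cos(gamma))

    v = R @ v_ref                      # rotation formula v = R * v_ref
    theta = np.arctan2(v[1], v[0])     # plane rotation angle (assumed atan2 convention)
    s = np.asarray(r_t)[:2]            # plane displacement: horizontal components only
    return theta, s
```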
In a further preferred embodiment, on the basis of the above, before receiving the original image captured by the camera and extracting the marker-code image from it, the method further includes:

generating and displaying at least one piece of target positioning pose information, and obtaining M calibration images according to it, where M > 0; and

generating the pose calculation model from each calibration image and its corresponding target positioning pose information.

Further, generating and displaying at least one piece of target positioning pose information and obtaining M calibration images specifically includes the steps:

S1: when the number m of calibration images obtained so far is less than M, generating and displaying one piece of current target positioning pose information;

S2: receiving the original calibration image captured by the camera for the current target positioning pose information, and extracting a calibration marker-code image from it;

S3: judging, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information;

if so, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1;

if not, returning to step S1.

Further, this judgment and its branches specifically include: identifying key points in the current calibration marker-code image and computing the distances between them; judging, from the current target positioning pose information, whether the distances fall within a preset range; if so, confirming that the calibration marker code is in the target positioning pose corresponding to the current target positioning pose information, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1; if not, confirming that the calibration marker code is not in that target positioning pose, and returning to step S1.

It should be noted that before detecting the pose of the object, the system must also perform camera calibration: the camera parameter M is computed from the generated target positioning pose information x and the captured calibration images y, yielding the pose calculation model Y = M·X for the camera, where Y is the object pose and X is the captured object image. Specifically, the calibration marker code is first pasted on a board, yielding a calibration plate as shown in FIG. 3. The system then judges whether the number of calibration images captured so far is greater than or equal to M (M > 0): if so, it computes the pose calculation model from the captured calibration images and the corresponding target positioning pose information; if not, it generates and displays one piece of target positioning pose information. The user places the calibration plate in the pose specified by that information. The camera captures an image of the calibration plate placed by the user, obtains the original calibration image, and sends it to the system. After receiving it, the system extracts the calibration marker-code image and computes the distances between its key points. If the distances fall within the preset range, the pose of the calibration plate is considered consistent with the specified pose; the calibration marker-code image is set as a calibration image, the system again judges whether the number of calibration images is greater than or equal to M, and the loop continues. If the distances do not fall within the preset range, the pose of the calibration plate is considered inconsistent with the specified pose; the calibration marker-code image is not processed, and the system again judges whether the number of calibration images is greater than or equal to M and executes the subsequent steps according to the result, or displays the same target positioning pose information again to prompt the user to place the calibration plate in the specified pose.
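The calibration loop might look like the following sketch. The camera interface, the keypoint extractor, the relative distance tolerance, and the least-squares fit standing in for the computation of the camera parameter M in Y = M·X are all illustrative assumptions.

```python
import numpy as np

def collect_calibration_images(camera, M_required, poses, extract_keypoints,
                               tolerance=0.05):
    """Loop S1-S3: keep prompting until M_required calibration images are accepted."""
    samples = []                         # (keypoint vector, target pose) pairs
    m = 0
    while m < M_required:
        target = poses[m]                # S1: display the current target pose info
        print("Place the calibration plate at pose:", target["pose"])
        frame = camera.read()            # S2: capture the corresponding image
        keypoints = extract_keypoints(frame)   # (N, 2) array, or None if no marker
        if keypoints is None:
            continue
        # S3: accept only if key-point distances fall inside the preset range.
        dists = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
        expected = np.asarray(target["expected_distances"])
        if np.all(np.abs(dists - expected) <= tolerance * expected):
            samples.append((keypoints.flatten(), np.asarray(target["pose"])))
            m += 1                       # accepted; otherwise the same pose is re-prompted
    return samples

def fit_pose_model(samples):
    """Least-squares fit of the linear model Y = M . X from accepted samples."""
    X = np.stack([x for x, _ in samples])       # observations, one row per image
    Y = np.stack([y for _, y in samples])       # prompted target poses
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)   # row convention: Y ~ X @ M
    return M
```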
By generating and displaying target positioning pose information and capturing the corresponding calibration marker-code images with the camera, the pose calculation model can be computed and camera calibration completed; because the calibration process is simple and easy to operate, it further improves the efficiency of the pose detection process and the user experience.

The method for detecting the pose of an object provided by this embodiment of the present invention marks the pose of the object to be detected with a marker code, so that the system can compute the pose of the marker code from the marker-code image captured by the camera and, from it, the pose of the object to be detected. First, because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result. Second, because the invention computes the object's pose via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources and greatly increases the universality of the pose detection method. In addition, by generating and displaying target positioning pose information and capturing the corresponding calibration marker-code images, the pose calculation model can be computed and camera calibration completed through a process that is simple and easy to operate, further improving the efficiency of the detection process and the user experience.
Correspondingly, the present invention also provides a device for detecting the pose of an object, capable of realizing the entire flow of the method described above.

Referring to FIG. 4, a structural diagram of a preferred embodiment of the device for detecting the pose of an object provided by the present invention, the device is as follows:

a marker-code image obtaining module 41, configured to receive an original image captured by a camera and extract a marker-code image from it, where the marker-code image is an image of a marker code attached to the object to be detected;

a marker-code pose obtaining module 42, configured to compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, where the pose of the marker code includes the rotation and displacement of the marker code relative to a reference pose; and

an object pose obtaining module 43, configured to compute the current pose of the object to be detected from the pose of the marker code.

In another preferred embodiment, on the basis of the above, the marker code contains at least one sub-marker code; the marker-code pose obtaining module 42 then specifically includes:

a sub-marker-code image obtaining unit, configured to segment the marker-code image and obtain at least one sub-marker-code image that meets the shape requirement;

a legal sub-marker-code image obtaining unit, configured to compare each sub-marker-code image with pre-stored standard sub-marker-code images and obtain the legal sub-marker-code images among them; and

a marker-code pose computing unit, configured to compute the pose of the marker code from the legal sub-marker-code images and the pose calculation model.

Further, the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz); the displacement of the marker code relative to the reference pose includes a displacement vector r_t. The object pose obtaining module 43 then specifically includes:

a marker-code rotation matrix obtaining unit, configured to compute the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), where ω is the skew-symmetric matrix of the unit direction vector;

an object rotation obtaining unit, configured to compute the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, where v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin;

an object plane rotation angle obtaining unit, configured to compute the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];

an object plane displacement obtaining unit, configured to compute the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and

an object current-pose obtaining unit, configured to obtain the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.

In a further preferred embodiment, on the basis of the above, the device for detecting the pose of an object further includes:

a calibration image obtaining module, configured to generate and display at least one piece of target positioning pose information and obtain M calibration images according to it, where M > 0; and

a pose calculation model generating module, configured to generate the pose calculation model from each calibration image and its corresponding target positioning pose information.

Further, the calibration image obtaining module specifically includes:

a current target positioning pose information display unit, configured to generate and display one piece of current target positioning pose information when the number m of calibration images obtained so far is less than M;

a calibration marker-code image obtaining unit, configured to receive the original calibration image captured by the camera for the current target positioning pose information and extract a calibration marker-code image from it; and

a loop unit, configured to judge, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the image is in the target positioning pose corresponding to that information; if so, to set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; if not, to return to the current target positioning pose information display unit.

Further, the loop unit specifically includes:

a key-point distance computing subunit, configured to identify key points in the current calibration marker-code image and compute the distances between them;

a key-point distance judging subunit, configured to judge, from the current target positioning pose information, whether the distances between the key points fall within a preset range;

a first loop subunit, configured to confirm, when the distances fall within the preset range, that the calibration marker code corresponding to the image is in the target positioning pose corresponding to the current target positioning pose information, set the calibration marker-code image as a calibration image, change the value of m to m+1, and return to the current target positioning pose information display unit; and

a second loop subunit, configured to confirm, when the distances do not fall within the preset range, that the calibration marker code corresponding to the image is not in that target positioning pose, and return to the current target positioning pose information display unit.

The device for detecting the pose of an object provided by this embodiment of the present invention marks the pose of the object to be detected with a marker code, so that the system can compute the pose of the marker code from the marker-code image captured by the camera and, from it, the pose of the object to be detected. First, because the pose of the object is marked by the marker code, the system only needs to analyze and process the marker-code image, which greatly improves the efficiency of the detection process; and because the feature points in a marker-code image are distinct and easy to identify, recognizing and computing them is easy and accurate, which improves the accuracy of the detection result. Second, because the invention computes the object's pose via the marker code, it suffices to paste the marker code on the object's surface, with no need to model the object's shape; this reduces the expenditure of manpower, material, and other resources and greatly increases the universality of the pose detection method. In addition, by generating and displaying target positioning pose information and capturing the corresponding calibration marker-code images, the pose calculation model can be computed and camera calibration completed through a process that is simple and easy to operate, further improving the efficiency of the detection process and the user experience.

The above are preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements are also regarded as falling within the scope of protection of the invention.

Claims (10)

  1. A method for detecting the pose of an object, characterized by comprising:
    receiving an original image captured by a camera, and extracting a marker-code image from the original image, wherein the marker-code image is an image of a marker code attached to an object to be detected;
    computing the pose of the marker code from the marker-code image and a pre-generated pose calculation model, wherein the pose of the marker code comprises the rotation and the displacement of the marker code relative to a reference pose; and
    computing the current pose of the object to be detected from the pose of the marker code.
  2. The method for detecting the pose of an object according to claim 1, characterized in that, before the receiving of the original image captured by the camera and the extracting of the marker-code image from it, the method further comprises:
    generating and displaying at least one piece of target positioning pose information, and obtaining M calibration images according to the target positioning pose information, wherein M > 0; and
    generating the pose calculation model from each calibration image and its corresponding target positioning pose information.
  3. The method for detecting the pose of an object according to claim 2, characterized in that the generating and displaying of at least one piece of target positioning pose information and the obtaining of M calibration images according to it specifically comprise the steps:
    S1: when the number m of calibration images obtained so far is less than M, generating and displaying one piece of current target positioning pose information;
    S2: receiving the original calibration image captured by the camera for the current target positioning pose information, and extracting a calibration marker-code image from it;
    S3: judging, from the calibration marker-code image and the current target positioning pose information, whether the calibration marker code corresponding to the calibration marker-code image is in the target positioning pose corresponding to the current target positioning pose information;
    if so, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1;
    if not, returning to step S1.
  4. The method for detecting the pose of an object according to claim 3, characterized in that the judging of whether the calibration marker code corresponding to the calibration marker-code image is in the target positioning pose corresponding to the current target positioning pose information, with the image set as a calibration image and m changed to m+1 if so, and the return to step S1 if not, specifically comprises:
    identifying key points in the current calibration marker-code image, and computing the distances between the key points;
    judging, from the current target positioning pose information, whether the distances between the key points fall within a preset distance range;
    if so, confirming that the calibration marker code corresponding to the calibration marker-code image is in the target positioning pose corresponding to the current target positioning pose information, setting the calibration marker-code image as a calibration image, changing the value of m to m+1, and returning to step S1;
    if not, confirming that the calibration marker code corresponding to the calibration marker-code image is not in the target positioning pose corresponding to the current target positioning pose information, and returning to step S1.
  5. The method for detecting the pose of an object according to claim 1, characterized in that the marker code contains at least one sub-marker code; the computing of the pose of the marker code from the marker-code image and the pre-generated pose calculation model then specifically comprises:
    segmenting the marker-code image to obtain at least one sub-marker-code image that meets the shape requirement;
    comparing each sub-marker-code image with pre-stored standard sub-marker-code images to obtain the legal sub-marker-code images among them; and
    computing the pose of the marker code from the legal sub-marker-code images and the pose calculation model.
  6. The method for detecting the pose of an object according to claim 1, characterized in that the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz), and the displacement of the marker code relative to the reference pose includes a displacement vector r_t; the computing of the current pose of the object to be detected from the pose of the marker code then specifically comprises:
    computing the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), wherein ω is the skew-symmetric matrix of the unit direction vector, ω = [[0, −r_rz, r_ry], [r_rz, 0, −r_rx], [−r_ry, r_rx, 0]];
    computing the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, wherein v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin [its components are given as an image in the original];
    computing the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];
    computing the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and
    obtaining the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.
  7. A device for detecting the pose of an object, characterized by comprising:
    a marker-code image obtaining module, configured to receive an original image captured by a camera and extract a marker-code image from it, wherein the marker-code image is an image of a marker code attached to an object to be detected;
    a marker-code pose obtaining module, configured to compute the pose of the marker code from the marker-code image and a pre-generated pose calculation model, wherein the pose of the marker code comprises the rotation and the displacement of the marker code relative to a reference pose; and
    an object pose obtaining module, configured to compute the current pose of the object to be detected from the pose of the marker code.
  8. The device for detecting the pose of an object according to claim 7, characterized in that the device further comprises:
    a calibration image obtaining module, configured to generate and display at least one piece of target positioning pose information and obtain M calibration images according to it, wherein M > 0; and
    a pose calculation model generating module, configured to generate the pose calculation model from each calibration image and its corresponding target positioning pose information.
  9. The device for detecting the pose of an object according to claim 7, characterized in that the marker code contains at least one sub-marker code; the marker-code pose obtaining module then specifically comprises:
    a sub-marker-code image obtaining unit, configured to segment the marker-code image and obtain at least one sub-marker-code image that meets the shape requirement;
    a legal sub-marker-code image obtaining unit, configured to compare each sub-marker-code image with pre-stored standard sub-marker-code images and obtain the legal sub-marker-code images among them; and
    a marker-code pose computing unit, configured to compute the pose of the marker code from the legal sub-marker-code images and the pose calculation model.
  10. The device for detecting the pose of an object according to claim 7, characterized in that the rotation of the marker code relative to the reference pose includes a rotation angle γ and a unit direction vector (r_rx, r_ry, r_rz), and the displacement of the marker code relative to the reference pose includes a displacement vector r_t; the object pose obtaining module then specifically comprises:
    a marker-code rotation matrix obtaining unit, configured to compute the rotation matrix R of the marker code relative to the reference pose from the rotation angle γ, the unit direction vector (r_rx, r_ry, r_rz), and the rotation transformation formula R = I + ω·sinγ + ω²·(1 − cosγ), wherein ω is the skew-symmetric matrix of the unit direction vector;
    an object rotation obtaining unit, configured to compute the rotation v = (v_x, v_y, v_z) of the object to be detected relative to the reference pose from the rotation matrix R and the rotation formula v = R·v_ref, wherein v_ref is a unit direction vector on the horizontal plane taking the reference pose as origin;
    an object plane rotation angle obtaining unit, configured to compute the plane rotation angle θ of the object to be detected relative to the reference pose from the rotation v and the plane-angle calculation formula [given as an image in the original];
    an object plane displacement obtaining unit, configured to compute the plane displacement s of the object to be detected relative to the reference pose from the displacement vector r_t; and
    an object current-pose obtaining unit, configured to obtain the current pose of the object to be detected from the plane rotation angle θ and the plane displacement s.
PCT/CN2017/104668 2017-03-06 2017-09-29 Method and device for detecting object pose WO2018161555A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710127752.8 2017-03-06
CN201710127752.8A CN106971406B (zh) 2017-03-06 2017-03-06 Method and device for detecting object pose

Publications (1)

Publication Number Publication Date
WO2018161555A1 true WO2018161555A1 (zh) 2018-09-13

Family

ID=59328826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104668 WO2018161555A1 (zh) 2017-03-06 2017-09-29 Method and device for detecting object pose

Country Status (2)

Country Link
CN (1) CN106971406B (zh)
WO (1) WO2018161555A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110763204A (zh) * 2019-06-25 2020-02-07 西安理工大学 Planar coded target and pose measurement method therefor
CN111540016A (zh) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose computation method and apparatus based on image feature matching, computer device, and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971406B (zh) * 2017-03-06 2019-10-29 广州视源电子科技股份有限公司 Method and device for detecting object pose
CN107595562A (zh) * 2017-09-22 2018-01-19 华南理工大学 Indoor guide cane for the blind based on self-identifying markers and guiding method thereof
CN107845326A (zh) * 2017-12-19 2018-03-27 中铁第四勘察设计院集团有限公司 Displacement identification sign and measurement method for high-speed railway rail expansion adjusters
CN109307585A (zh) * 2018-04-26 2019-02-05 东南大学 Intelligent test system for near-eye display performance
CN109677217A (zh) * 2018-12-27 2019-04-26 魔视智能科技(上海)有限公司 Method for detecting the yaw angle between a tractor and its trailer
CN110009683B (zh) * 2019-03-29 2021-03-30 北京交通大学 Mask R-CNN-based real-time detection method for objects on a plane
CN114820814A (zh) * 2019-10-30 2022-07-29 深圳市瑞立视多媒体科技有限公司 Camera pose computation method, apparatus, device, and storage medium
CN113643380A (zh) * 2021-08-16 2021-11-12 安徽元古纪智能科技有限公司 Robotic arm guidance method based on monocular-camera visual target localization

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05301183 (ja) * 1992-04-28 1993-11-16 Fujitsu Ltd Robot control device and robot control method
US20050102060A1 (en) * 2003-11-06 2005-05-12 Fanuc Ltd Device for correcting positional data of robot
CN101419055A (zh) * 2008-10-30 2009-04-29 北京航空航天大学 Vision-based device and method for measuring the pose of a space target
CN101839692A (zh) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring the three-dimensional position and attitude of an object with a single camera
CN102207368A (zh) * 2010-03-29 2011-10-05 富士施乐株式会社 Assembly-receiving-member identification structure, and assembly-information identification device and assembly processing device using the same
CN102922521A (zh) * 2012-08-07 2013-02-13 中国科学技术大学 Robotic arm system based on stereo visual servoing and real-time calibration method therefor
CN103020952A (зh) * 2011-07-08 2013-04-03 佳能株式会社 Information processing device and information processing method
CN103743393A (zh) * 2013-12-20 2014-04-23 西安交通大学 Pose measurement method for cylindrical targets
CN103759716A (zh) * 2014-01-14 2014-04-30 清华大学 Method for measuring the position and attitude of a dynamic target based on monocular vision at the end of a robotic arm
CN106971406A (zh) * 2017-03-06 2017-07-21 广州视源电子科技股份有限公司 Method and device for detecting object pose

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208122A (zh) * 2013-04-18 2013-07-17 湖南大学 Multi-camera calibration method based on a one-dimensional calibration rod
CN104463833B (zh) * 2013-09-22 2017-11-03 大族激光科技产业集团股份有限公司 Method and system for calibrating the camera parameters of a one-dimensional area-array camera group
CN103942796B (zh) * 2014-04-23 2017-04-12 清华大学 High-precision projector-camera calibration system and calibration method
CN104880176B (zh) * 2015-04-15 2017-04-12 大连理工大学 Pose measurement method for moving objects based on prior-knowledge model optimization
CN104933717B (zh) * 2015-06-17 2017-08-11 合肥工业大学 Automatic calibration method for camera intrinsic and extrinsic parameters based on directional calibration targets
CN106408556B (zh) * 2016-05-23 2019-12-03 东南大学 Calibration method for a micro-object measurement system based on a general imaging model

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110763204A (zh) * 2019-06-25 2020-02-07 西安理工大学 Planar coded target and pose measurement method therefor
CN111540016A (zh) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose computation method and apparatus based on image feature matching, computer device, and storage medium
CN111540016B (zh) * 2020-04-27 2023-11-10 深圳南方德尔汽车电子有限公司 Pose computation method and apparatus based on image feature matching, computer device, and storage medium

Also Published As

Publication number Publication date
CN106971406B (zh) 2019-10-29
CN106971406A (zh) 2017-07-21

Similar Documents

Publication Publication Date Title
WO2018161555A1 (zh) Method and device for detecting object pose
CN108764048B (zh) Face keypoint detection method and device
US11727593B1 (en) Automated data capture
US7965904B2 (en) Position and orientation measuring apparatus and position and orientation measuring method, mixed-reality system, and computer program
CN109934847B (zh) Method and device for pose estimation of weakly textured three-dimensional objects
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
CN106981091B (zh) Method and device for processing human-body three-dimensional modeling data
CN111862296A (zh) Three-dimensional reconstruction method and device, system, model training method, and storage medium
CN111862201A (zh) Deep-learning-based relative pose estimation method for non-cooperative space targets
CN106384355B (zh) Automatic calibration method in a projection interaction system
JP2019192022A (ja) Image processing device, image processing method, and program
CN111784775B (zh) Marker-assisted visual-inertial augmented reality registration method
CN107480603B (zh) Simultaneous mapping and object segmentation method based on SLAM and a depth camera
CN105934757B (zh) Method and device for detecting an incorrect association between keypoints of a first image and keypoints of a second image
WO2022021782A1 (zh) Automatic generation method for six-dimensional pose datasets, system, terminal, and storage medium
JP2010267232A (ja) Position and orientation estimation method and device
CN111695431A (zh) Face recognition method and device, terminal equipment, and storage medium
CN112613123A (zh) AR three-dimensional registration method and device for aircraft piping
CN112348869A (zh) Method for recovering the scale of monocular SLAM through detection and calibration
JP2015219868A (ja) Information processing device, information processing method, and program
WO2020015501A1 (zh) Map construction method and device, storage medium, and electronic equipment
JP7171294B2 (ja) Information processing device, information processing method, and program
CN114187253A (zh) Method for detecting component mounting on a circuit board
US11989928B2 (en) Image processing system
US9098746B2 (en) Building texture extracting apparatus and method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17899511

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17899511

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.03.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17899511

Country of ref document: EP

Kind code of ref document: A1