CN116697888A - Method and system for measuring three-dimensional coordinates and displacement of target point in motion - Google Patents

Method and system for measuring three-dimensional coordinates and displacement of target point in motion

Info

Publication number
CN116697888A
CN116697888A (application CN202310477600.6A)
Authority
CN
China
Prior art keywords
target point
identification
camera
displacement
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310477600.6A
Other languages
Chinese (zh)
Inventor
于涛
陈小凡
牛增辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tianxiang Ruiyi Technology Co ltd
Original Assignee
Beijing Tianxiang Ruiyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tianxiang Ruiyi Technology Co ltd filed Critical Beijing Tianxiang Ruiyi Technology Co ltd
Priority to CN202310477600.6A priority Critical patent/CN116697888A/en
Publication of CN116697888A publication Critical patent/CN116697888A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a method and a system for measuring the three-dimensional coordinates and displacement of a target point in motion. The method comprises the following steps: acquiring and storing image data of the target point with a camera array formed by a plurality of cameras, each of which has completed joint calibration in advance; identifying an identification chart fixed on the target point with a target identification algorithm and calculating the pixel coordinates of the chart's four corner points; calculating the world-coordinate-system coordinates of those four corner points; and calculating the world-coordinate-system coordinates of the chart's center point, from which physical quantities such as the relative displacement between multiple target points are computed. The disclosed method and system overcome the low measurement accuracy and high system complexity of traditional coordinate and displacement measurement techniques.

Description

Method and system for measuring three-dimensional coordinates and displacement of target point in motion
Technical Field
The application belongs to the technical field of computer vision and data processing, and particularly relates to a method and a system for measuring three-dimensional coordinates and displacement of a target point in motion.
Background
Existing techniques for measuring the coordinates and displacement of an object in a motion space mainly include: draw-wire displacement sensors, inertial sensors, binocular ranging, TOF (time-of-flight) ranging, and infrared positioning with attached reflective balls. Draw-wire displacement sensors and inertial sensors are traditional, non-optical measurement methods. A draw-wire sensor measures displacement in a limited number of dimensions, and the tension of the wire alters the stress state and motion of the target point to some extent; the sensor itself may even be damaged during motion. The accuracy of inertial sensors is limited, and their measurement error grows substantially over time. Binocular ranging requires pairs of cameras covering the same area, doubling both the camera count and the data-processing load, and still requires attaching markers to target points or selecting and identifying feature points through a graphical interface, so its complexity is high. TOF ranging likewise has problems with accuracy, measurement range, and its graphical operating interface, and a TOF camera must be added alongside a conventional camera. Infrared positioning with reflective balls requires infrared transmitters and receivers as well as the design and manufacture of reflective-ball mounting brackets, so its cost is high.
Three-dimensional positioning of target objects in a motion space using artificial intelligence and machine-vision techniques is an emerging technology with great application potential. By recognizing a uniquely designed identification chart on the target through a camera array, the target point can be identified and positioned; compared with other techniques, this offers higher reliability and accuracy, faster response, a wider range of applications, and lower implementation cost. The design and recognition of the identification chart apply mature industry techniques such as gray thresholding, symmetry, Haar features, morphology, and topological features, and, combined with camera arrays and simultaneous localization and mapping (SLAM), measure the three-dimensional coordinates and displacement of the target point in the motion space with high precision.
Therefore, how to design a camera array using machine-vision technology, and to provide a method of high measurement accuracy and low system complexity that achieves high-precision measurement of the three-dimensional coordinates and displacement of a target point in a motion space, is the technical problem to be solved.
Disclosure of Invention
The application aims to provide a method and a system for measuring the three-dimensional coordinates and displacement of a target point in a motion space, which overcome the low measurement accuracy and high system complexity of traditional coordinate and displacement measurement techniques.
In order to solve the above technical problem, a first aspect of the present application provides a method for measuring three-dimensional coordinates and displacement of a target point in a motion space, including:
acquiring and storing image data of the target with a camera array formed by a plurality of cameras, each camera having completed single-camera calibration and multi-camera joint calibration in advance to obtain the camera intrinsic and extrinsic parameters;
identifying an identification chart fixed on the target point by adopting a target identification algorithm;
calculating pixel coordinates of four corner points in the identification graph;
and converting according to pixel coordinates of four corner points in the identification graph to obtain coordinates and displacement of the target point in a world coordinate system.
Further, identifying the identification chart fixed on the target point with a target identification algorithm comprises:
the key points in the identification chart are detected with a feature-detection algorithm based on features such as a gray threshold, Haar feature descriptors, morphological symmetry, and topology; classifiers are constructed for these features on the captured image to identify the chart, and the pixel coordinates of its four corner points are calculated.
Further, converting according to the pixel coordinates of the four corner points in the identification chart to obtain coordinates and displacement of the target point in the world coordinate system includes:
identifying the ID of the identification chart and confirming the semantic information of the target point it marks, i.e., distinguishing between target points;
solving the three-dimensional world-coordinate-system coordinates of the point using a PnP algorithm;
combining the semantic information to obtain the coordinates of the target point in the world coordinate system;
and calculating the displacement physical quantities between target points from their spatial coordinates.
Further, the camera array is formed by at least four cameras arranged in different directions in a surrounding configuration, with each camera's lens pointing toward a direction into which the target point may move; the region where the cameras' lens fields of view overlap is the movement range of the target point.
Further, each camera is mounted on a connecting rod attached to a central shaft at the center, and adjacent connecting rods are connected and fixed by rigid members; after the cameras are assembled on the camera array, the array itself may move within the motion space while its structure strictly keeps the relative pose between the cameras unchanged.
Further, the lens fields of view of the adjacent cameras need to have at least 20% overlapping area as a common field of view.
In another aspect of the present application, there is also provided a system for measuring three-dimensional coordinates and displacement of a target point in motion, the system comprising:
the target image acquisition module is used for acquiring and storing image data of a target point by adopting a camera array formed by a plurality of cameras, wherein each camera finishes single-camera calibration and multi-camera combined calibration in advance;
the target recognition and corner point calculation module recognizes an identification chart fixed on the target point by adopting a target recognition algorithm, and calculates pixel coordinates of four corner points in the identification chart;
the three-dimensional coordinate calculation module, which converts the pixel coordinates of the four corner points in the identification chart into three-dimensional coordinates of the target point in the world coordinate system; and
the target point coordinate and displacement calculation module, which calculates each corner point's three-dimensional world coordinates from its pixel coordinates and thereby obtains the target point's physical quantities in the world coordinate system, including three-dimensional coordinates and relative displacement.
By adopting the method and the system for measuring the three-dimensional coordinates and displacement of the target point in the motion space, the following technical effects are achieved:
1. High coordinate-measurement accuracy. The feature-detection algorithm based on the gray threshold, Haar feature descriptors, morphological symmetry, and topology, together with the identification-chart design of fig. 3, keeps the measurement of coordinate points stable even in extremely high-speed motion scenes; the coordinate-measurement error across multiple target points is controlled at the millimeter level.
2. Compared with optical measurement approaches such as binocular ranging, TOF ranging, and infrared reflective balls, the application greatly reduces equipment, data-acquisition volume, system complexity, and implementation cost when measuring target-point coordinates over large-range motion.
3. The camera array design is very flexible: the number of cameras can be expanded, or camera and lens models changed, according to the movement range of the target point to be measured. After multi-camera joint calibration the array can cover a flexible, variable measurement range, and high-frame-rate cameras can be substituted for high-precision coordinate calculation of target points moving at high speed.
4. The camera array is a rigid assembly that can be placed in the motion space; motion of the array itself has no influence on the measurement result.
5. Target points can move without restriction within the measured range, as long as the line of sight between the fixed identification charts and the camera array is not occluded; the number of measurable target points can be further increased by enlarging the variety of identification-chart designs.
Drawings
Fig. 1 is a flowchart of a method for measuring three-dimensional coordinates and displacement of a moving target point in an embodiment of the application.
Fig. 2 is a structural frame diagram of a camera array in an embodiment of the application.
Fig. 3 is a diagram illustrating a design example of an identification chart in an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a system for measuring three-dimensional coordinates and displacement of a moving target point in an embodiment of the present application.
Fig. 5 is a schematic diagram of the transformation of the relationship between the pixel coordinate system and the world coordinate system in the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that the illustrations provided in the present embodiment are merely schematic illustrations of the basic idea of the present application.
The structures, proportions, and sizes shown in the accompanying drawings are for illustration only and do not limit the conditions under which the application can be practiced; modifications, changes of proportion, or adjustments made without affecting the effects the application can produce or the objects it can achieve still fall within the scope of the application as defined by the appended claims.
References in this specification to orientations or positional relationships as indicated by "front", "rear", "left", "right", "middle", "longitudinal", "transverse", "horizontal", "inner", "outer", etc., are based on the orientation or positional relationships shown in the drawings, are also for convenience of description only, and do not indicate or imply that the device or element in question must have a particular orientation, be constructed and operated in a particular orientation, and therefore should not be construed as limiting the application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Example 1
Referring to fig. 1, a first embodiment of the present application provides a method for measuring three-dimensional coordinates and displacement of a target point in motion, which mainly includes the following steps:
s1, acquiring and storing image data of a target point by adopting a camera array formed by a plurality of cameras, wherein each camera finishes single-camera calibration and multi-camera combined calibration in advance;
specifically, after the camera starts shooting, the collected image data of the target point is stored on a computer connected with the camera. Referring to fig. 2, the camera array is formed by surrounding at least four cameras respectively arranged in different directions (southeast, southwest and northwest), seven cameras are adopted in the embodiment, a circular ring is formed by surrounding, the lens is slightly inclined downwards, and the lens is lifted at the top of the movement space through a central shaft to perform nodding. Of course, those skilled in the art will appreciate that the shapes illustrated are not limited to those illustrated. The lens of each camera points to the direction in which the target point moves and the direction to which the target point possibly moves, and the area of the overlapped lens fields of each camera is the movement range of the target point. The target point is located below the structural frame formed by the camera arrays, and the range of the target point area is the area where the lens view angles (FOV) of the cameras are overlapped, for example, the target point area can be a substantially hemispherical area with the center of the camera arrays as the center of a sphere. The camera array can measure physical quantities such as relative motion coordinates, displacement, rotation angle and the like among a plurality of target points in irregular motion. Meanwhile, the design of the camera array fully considers the movement range of the target point, so that the moving target point is in the field of view of any (at least one) camera in the camera array.
The fields of view of every two adjacent cameras overlap; preferably, adjacent cameras' lens fields of view share at least a 20% overlapping area as a common field of view for multi-camera joint calibration.
Referring to fig. 2, each camera 1 is detachably mounted on a connecting rod 2 attached to a central shaft at the center, and adjacent connecting rods 2 are further connected and fixed by rigid members 3; in the present application, multiple rigid members are provided between every two adjacent connecting rods 2 to ensure stability and rigidity between the rods. Because the cameras in this embodiment form a ring, they lie in a plane, and a planar structure loaded in the vertical direction is unstable; the central shaft 4 and the diagonal braces 5 connecting the central shaft 4 to the connecting rods 2 provide stability and rigidity in the vertical direction. After the cameras 1 are assembled on the camera array, the array itself may move within the motion space while its structure strictly keeps the relative pose between the cameras unchanged.
Of course, those skilled in the art should understand that parameters such as the number of cameras in the array, the focal length of each lens, and the position and angle of each camera can be adjusted for specific scenes, or different models selected, as determined by factors such as the size of the coverage area and the distance between the cameras and the measured target point; reference can be made to the formula FOV (width or height) = working distance × sensor size (width or height) / focal length. Once these parameters are confirmed, they must be kept constant during measurement. The camera array is designed to maintain sufficient rigidity so that parameters such as the relative position and relative angle of each camera remain constant.
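The FOV formula above can be expressed directly in code; the sensor size and working distance in the example are illustrative values, not figures from the application:

```python
def fov(working_distance_mm: float, sensor_size_mm: float, focal_length_mm: float) -> float:
    """FOV (width or height) = working distance x sensor size (width or height) / focal length."""
    return working_distance_mm * sensor_size_mm / focal_length_mm

# Example (assumed values): a ~7.2 mm wide sensor with an 8 mm lens
# at a 3 m working distance covers a 2.7 m wide scene.
width_mm = fov(3000.0, 7.2, 8.0)
```

This makes it easy to check, for a candidate camera model, whether the overlapping fields of view will cover the intended movement range before the array is built.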
After assembly, the camera array must undergo multi-camera joint calibration before it can be used for measurement. Specifically, each camera is first calibrated independently, obtaining its intrinsic parameters; then adjacent cameras (vertically adjacent and horizontally adjacent) are calibrated in pairs, i.e., multi-camera joint calibration, to obtain the cameras' pose (extrinsic) parameters.
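One way to read the pairwise joint calibration is that each adjacent pair yields a relative pose (R, t), and chaining these poses expresses every camera in a common reference frame. A minimal NumPy sketch of that composition step (the pairwise calibration itself, e.g. via OpenCV's stereoCalibrate, is omitted and the chain layout is an assumption for illustration):

```python
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Compose relative poses: x_a = R_ab @ x_b + t_ab and x_b = R_bc @ x_c + t_bc
    give x_a = (R_ab @ R_bc) @ x_c + (R_ab @ t_bc + t_ab)."""
    return R_ab @ R_bc, R_ab @ t_bc + t_ab

def chain_to_reference(pairwise):
    """pairwise[i] = (R, t) of camera i+1 expressed in camera i's frame.
    Returns each camera's pose expressed in camera 0's frame."""
    poses = [(np.eye(3), np.zeros(3))]  # camera 0 is the reference
    for R, t in pairwise:
        poses.append(compose(*poses[-1], R, t))
    return poses
```

With all cameras referred to one frame, a target seen by any camera can be reported in the same world coordinate system.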
Compared with traditional optical measurement approaches such as binocular ranging, TOF ranging, and infrared reflective balls, the application uses a rigidly connected camera array to collect images of the target (e.g., a human body), greatly reducing equipment, data-acquisition volume, system complexity, and implementation cost. Because the camera array is a rigid assembly, camera vibration has no influence on the measurement result. In addition, the camera array design is very flexible: the number of cameras can be expanded, or camera and lens models changed, according to the movement range of the target point to be measured; after multi-camera joint calibration the array covers a flexible, variable measurement range, and high-frame-rate cameras can be substituted for high-precision coordinate calculation of target points moving at high speed.
And S2, identifying the identification chart fixed on the target point in the acquired picture with a target identification algorithm, and calculating the pixel coordinates of the chart's corner points in the picture.
Specifically, the identifying the identification map fixed on the target point by using the target identification algorithm includes:
referring to fig. 3, a feature detection algorithm based on features such as a gray threshold, a Haar feature description operator, morphological symmetry, topology and the like is adopted to detect key points in an identification graph, classifiers are respectively constructed for the features on a shot image, and hierarchical screening is performed, so that identification of the identification graph is completed, and pixel coordinates of angular points of four different angles of the identification graph, such as angular point 1, angular point 2, angular point 3 and angular point 4 in the graph, are calculated.
All brightness values in the image are divided into two classes, above or below a specified brightness value (the threshold), separating bright and dark features from the background to obtain a binary image. If the image collected by the camera is in color, it must first be converted to grayscale. Haar feature descriptors are then extracted from the binary image using a Haar-classifier training tool.
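The grayscale conversion and thresholding steps can be sketched in NumPy (a stand-in for OpenCV's cv2.cvtColor and cv2.threshold; the threshold value 128 is an illustrative assumption):

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    """Convert a color (BGR) image to grayscale using ITU-R BT.601 weights."""
    return (0.114 * bgr[..., 0] + 0.587 * bgr[..., 1] + 0.299 * bgr[..., 2]).astype(np.uint8)

def to_binary(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Split brightness values into two classes around the threshold, separating
    bright and dark identification-chart features from the background."""
    return (gray >= threshold).astype(np.uint8) * 255
```

The resulting binary image is the input to the Haar-feature extraction described above; the Haar-classifier training itself is outside this sketch.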
Using morphological symmetry and topological features, the four corner points of the identification chart are detected with the cornerHarris corner-detection algorithm or another corner-detection algorithm.
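The cornerHarris detection named above follows the standard Harris response R = det(M) - k * trace(M)^2 over a local window; a simplified NumPy sketch (OpenCV's cv2.cornerHarris additionally uses Sobel gradients and configurable windowing, so this is a stand-in, not its implementation):

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04) -> np.ndarray:
    """Harris corner response: positive at corners, negative on edges, ~0 on flat areas."""
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def window_sum(a):
        # 3x3 box filter built from shifted copies of the array
        s = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                s += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return s

    sxx, syy, sxy = window_sum(ixx), window_sum(iyy), window_sum(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A bright square: its four corners give the strongest positive responses.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
r = harris_response(img)
```

The sign pattern of the response (corner positive, edge negative, flat near zero) is what lets the algorithm single out the chart's four corner points among the binary-image features.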
The feature descriptors are matched against a labeled identification-chart training set and hierarchically screened with a cascade classifier or similar, thereby identifying the chart. If the chart is identified, the pixel coordinates of its four corner points are calculated.
Target identification is achieved by recognizing the uniquely designed identification chart on the target; compared with other techniques, this offers higher reliability and accuracy, faster response, a wider range of applications, and lower implementation cost. Based on the feature-detection algorithm (gray threshold, Haar feature descriptors, morphological symmetry, topology, and the like) and the identification-chart design, the measurement accuracy of coordinates and displacement remains stable even in extremely high-speed vibration scenes, and the coordinate-measurement error across multiple target points is controlled at the millimeter level.
And S3, performing PnP calculation from the pixel coordinates of the four corner points in the identification chart, the camera intrinsic parameters, and the camera extrinsic parameters to obtain the three-dimensional world-coordinate-system coordinates of the point.
Specifically, the resolving process includes:
solving the camera's pose in the world coordinate system with a PnP (Perspective-n-Point) algorithm completes the positioning of the multiple cameras; therefore, when multiple target points move anywhere within the overall field of view of the camera array, the three-dimensional coordinates of each target point can still be calculated through the linkage of the multiple cameras.
And calculating the displacement physical quantity between the target points through the space coordinates of the target points.
The principle of the conversion between the pixel coordinate system and the world coordinate system is shown in fig. 5:
wherein:
    • Ow-XwYwZw is the world coordinate system, with its origin usually set at the robot base or actuator end, in mm;
    • Oc-XcYcZc is the camera coordinate system, with its origin at the camera's optical center, in mm;
    • o-xy is the image coordinate system, in mm;
    • uv is the pixel coordinate system, in pixels;
    • P(Xw, Yw, Zw) is a point in the world coordinate system;
    • p(x, y) is the corresponding point in the image coordinate system, with pixel coordinates (u, v);
    • f is the camera focal length, equal to the distance from Oc to o.
The mapping relation matrix from the pixel coordinate system to the world coordinate system is as follows:
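In its standard textbook form, using the symbols defined above (with s a scale factor, dx and dy the pixel sizes, and (u0, v0) the principal point), the pinhole relation between pixel and world coordinates is:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \, [\, R \mid t \,]
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
```

Here K holds the camera intrinsic parameters and [R | t] the extrinsic (pose) parameters; this is a reconstruction of the well-known pinhole model rather than a formula specific to the application.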
the internal parameters and the external parameters of the camera can be calibrated and obtained through Zhang Zhengyou, and a coordinate point in three dimensions can be found out from the image through the final conversion relation. And obtaining three-dimensional coordinates of the identification map under the four corner world coordinate systems.
And S4, solving physical quantities such as the three-dimensional coordinates of the identification-chart center points and the relative displacement among multiple target points.
The four corner points of the identification chart are centrally symmetric, so the three-dimensional coordinates of the chart's center point O in the world coordinate system can be calculated.
Since multiple identification charts can be fixed to different target points at the same time, the three-dimensional coordinates of different target points can be measured and the relative displacement between them calculated. In this way, even if the camera module itself moves, physical quantities such as displacement relative to other target points can be measured for a stationary object, such as a floor or wall, to which an identification chart is fixed.
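By the central symmetry noted above, the center-point and relative-displacement computations reduce to a mean and a Euclidean norm; a NumPy sketch with illustrative corner coordinates:

```python
import numpy as np

def chart_center(corners_w: np.ndarray) -> np.ndarray:
    """Center O of the identification chart: by the central symmetry of the
    four corners, the mean of their world coordinates (shape (4, 3))."""
    return corners_w.mean(axis=0)

def relative_displacement(center_a: np.ndarray, center_b: np.ndarray) -> float:
    """Euclidean distance between two target-point centers in world coordinates."""
    return float(np.linalg.norm(center_b - center_a))
```

Running these per frame over each chart's corner coordinates gives the trajectories and relative-displacement physical quantities described above.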
Example two
In accordance with the method of the first embodiment, referring to fig. 4, another embodiment of the present application provides a system for measuring three-dimensional coordinates and displacement of a target point in motion, the system including:
the target image acquisition module acquires and stores image data of a target point by adopting a camera array formed by a plurality of cameras, wherein each camera finishes single-camera calibration and multi-camera combined calibration in advance;
the target recognition and corner point calculation module recognizes an identification chart fixed on the target point by adopting a target recognition algorithm, and calculates pixel coordinates of four corner points in the identification chart;
and the target point coordinate and displacement calculation module, which calculates each corner point's three-dimensional world coordinates from its pixel coordinates and thereby obtains the target point's physical quantities in the world coordinate system, including three-dimensional coordinates and relative displacement. Since multiple identification charts can be fixed to different target points at the same time, the three-dimensional coordinates of different target points can be measured and the relative displacement between them calculated.
The system in this embodiment is used to execute the method of the above embodiment and has the same technical effects, which are not elaborated here; please refer to the above embodiment.
In a third embodiment of the present application, a computer-readable storage medium is provided in which a computer program is stored; when executed by a processor, the program causes the processor to perform the steps of the method for measuring the three-dimensional coordinates and displacement of a target point in motion. These may be the steps of the method in the above embodiments: acquiring image data of a target point with a camera array formed by a plurality of cameras and storing it on a computer, each camera having completed single-camera calibration and multi-camera joint calibration in advance; identifying the identification chart fixed on the target point in the acquired image with a target identification algorithm and calculating the pixel coordinates of the chart's corner points; performing PnP (Perspective-n-Point) calculation from the pixel coordinates of the four corner points, the camera intrinsic parameters, and the camera extrinsic parameters to obtain the three-dimensional world coordinates of the point, and thus of the target point; and solving physical quantities such as the three-dimensional coordinates of the identification-chart center points and the relative displacement among multiple target points.
It is to be understood that the same or similar parts of the above embodiments may refer to one another; content not described in detail in one embodiment may be found in the same or similar description of another embodiment.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means at least two.
Any process or method description in the flow charts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application further includes implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the present application pertains.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. Those skilled in the art will understand that all or part of the steps of the method of the above embodiments may be carried out by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one of the following techniques, or a combination thereof, each well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (8)

1. A method of three-dimensional coordinate and displacement measurement of a target point in motion, the method comprising:
collecting and storing image data of target points with a camera array formed by a plurality of cameras, wherein there may be one or more target points, and the cameras are jointly calibrated in advance;
identifying an identification chart fixed on the target point by adopting a target identification algorithm, identifying the target point, and calculating pixel coordinates of four corner points of the identification chart;
and calculating the three-dimensional coordinates of each corner point under the world coordinate system according to the pixel coordinates of the corner points, so as to obtain the related physical quantity of the target point, including the three-dimensional coordinates and the relative displacement, in the world coordinate system.
2. The method of measuring the three-dimensional coordinates and displacement of a moving target point according to claim 1, wherein identifying the target point using the target identification algorithm and calculating the pixel coordinates of the corner points comprises:
and detecting key points in the identification graph by adopting a feature detection algorithm, respectively constructing classifiers for the features on the shot images, carrying out hierarchical screening, further completing identification of the identification graph, and finally calculating pixel coordinates of corner points of four different angles in the identification graph.
3. The method of measuring three-dimensional coordinates and displacement of a moving target point according to claim 1, wherein calculating the three-dimensional coordinates of each target point in the world coordinate system from the pixel coordinates of the corner points comprises:
and carrying out PnP pose calculation according to pixel coordinates of four corner points in the identification graph, camera internal parameters and camera external parameters to obtain three-dimensional coordinates of a world coordinate system of each corner point.
combining the identified identification chart with its central symmetry, confirming the semantic information of the target point marked in the identification chart, and obtaining the three-dimensional coordinates of the center point of the identification chart, namely the three-dimensional coordinates of the target point; and
And calculating the relative displacement of each target point through the three-dimensional coordinates of the position of each target point.
4. The method of measuring the three-dimensional coordinates and displacement of a moving target point according to claim 2, wherein the camera array is formed by at least four cameras arranged in different directions enclosing the scene, wherein the lens of each camera points toward the region in which the target point moves, and the overlapping area of the camera lens fields of view covers the movement range of the target point.
5. The method of measuring the three-dimensional coordinates and displacement of a target point in motion according to claim 4, wherein each camera is connected by a connecting rod to a central shaft at the center, and adjacent connecting rods are further connected and fixed to each other by rigid members, so that the relative pose between the cameras remains unchanged.
6. The method of measuring the three-dimensional coordinates and displacement of a moving target point according to claim 4, wherein the lens fields of view of adjacent cameras are required to overlap by at least 20%, the overlapping area serving as the common field of view for multi-camera joint calibration.
7. A method of measuring the three-dimensional coordinates and displacement of a moving target point according to claim 3, characterized in that the mapping from the pixel coordinate system to the world coordinate system by means of the camera intrinsic and extrinsic parameter matrices is based on the following formula (the standard pinhole projection model):
s · [u, v, 1]^T = K · [R | t] · [X_w, Y_w, Z_w, 1]^T
where (u, v) are the pixel coordinates, s is a scale factor, K is the camera intrinsic parameter matrix, [R | t] is the camera extrinsic parameter matrix, and (X_w, Y_w, Z_w) are the world coordinates;
and the spatial coordinates of the corresponding points are calculated from the pixel coordinates of the identification chart, thereby obtaining the displacement physical quantities of the identification chart along the x, y, and z axes.
8. A system for three-dimensional coordinate and displacement measurement of a target point in motion, the system comprising:
the target image acquisition module acquires and stores image data of a target point by adopting a camera array formed by a plurality of cameras, wherein each camera finishes single camera calibration and multi-camera combined calibration in advance to acquire camera internal parameters and camera external parameters;
the target recognition and corner point calculation module recognizes an identification chart fixed on the target point by adopting a target recognition algorithm, and calculates pixel coordinates of four corner points in the identification chart;
and the target point coordinate and displacement calculation module calculates the three-dimensional coordinates of each corner point in the world coordinate system from the pixel coordinates of the corner points, so as to obtain the relevant physical quantities of the target point in the world coordinate system, including its three-dimensional coordinates and relative displacement.
CN202310477600.6A 2023-04-28 2023-04-28 Method and system for measuring three-dimensional coordinates and displacement of target point in motion Pending CN116697888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310477600.6A CN116697888A (en) 2023-04-28 2023-04-28 Method and system for measuring three-dimensional coordinates and displacement of target point in motion


Publications (1)

Publication Number Publication Date
CN116697888A true CN116697888A (en) 2023-09-05

Family

ID=87828312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310477600.6A Pending CN116697888A (en) 2023-04-28 2023-04-28 Method and system for measuring three-dimensional coordinates and displacement of target point in motion

Country Status (1)

Country Link
CN (1) CN116697888A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117190875A (en) * 2023-09-08 2023-12-08 重庆交通大学 Bridge tower displacement measuring device and method based on computer intelligent vision
CN117606747A (en) * 2023-11-22 2024-02-27 北京天翔睿翼科技有限公司 High-precision calibration method of laser galvanometer system
CN117434570A (en) * 2023-12-20 2024-01-23 绘见科技(深圳)有限公司 Visual measurement method, measurement device and storage medium for coordinates
CN117434570B (en) * 2023-12-20 2024-02-27 绘见科技(深圳)有限公司 Visual measurement method, measurement device and storage medium for coordinates

Similar Documents

Publication Publication Date Title
CN106643699B (en) Space positioning device and positioning method in virtual reality system
CN110136208B (en) Joint automatic calibration method and device for robot vision servo system
CN116697888A (en) Method and system for measuring three-dimensional coordinates and displacement of target point in motion
CN109598765B (en) Monocular camera and millimeter wave radar external parameter combined calibration method based on spherical calibration object
US9965870B2 (en) Camera calibration method using a calibration target
CN113379822B (en) Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
US11830216B2 (en) Information processing apparatus, information processing method, and storage medium
US6917702B2 (en) Calibration of multiple cameras for a turntable-based 3D scanner
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
US9672630B2 (en) Contour line measurement apparatus and robot system
CN108406731A (en) A kind of positioning device, method and robot based on deep vision
García-Moreno et al. LIDAR and panoramic camera extrinsic calibration approach using a pattern plane
JP2013101045A (en) Recognition device and recognition method of three-dimensional position posture of article
CN110288656A (en) A kind of object localization method based on monocular cam
CN110763204B (en) Planar coding target and pose measurement method thereof
Kümmerle et al. Unified intrinsic and extrinsic camera and LiDAR calibration under uncertainties
CN114283203B (en) Calibration method and system of multi-camera system
JP7479324B2 (en) Information processing device, information processing method, and program
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
JP4419570B2 (en) 3D image photographing apparatus and method
WO2021185215A1 (en) Multi-camera co-calibration method in 3d modeling
US11259000B2 (en) Spatiotemporal calibration of RGB-D and displacement sensors
JP2018522240A (en) Method for measuring artifacts
Sun et al. High-accuracy three-dimensional measurement based on multi-directional cooperative target with weighted SfM algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination