CN113405532B - Forward intersection measuring method and system based on structural parameters of vision system - Google Patents
- Publication number: CN113405532B
- Application number: CN202110604995.2A
- Authority: CN (China)
- Prior art keywords: point, target, coordinate system, measured, carrier
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01C11/04 — Photogrammetry or videogrammetry; interpretation of pictures (G—Physics; G01—Measuring, testing; G01C—Measuring distances, levels or bearings; surveying; navigation)
- G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods (G06T—Image data processing or generation, in general)
- G06T7/97 — Image analysis; determining parameters from multiple pictures
- G06T2207/10016 — Indexing scheme for image analysis; image acquisition modality: video; image sequence
Abstract
The invention provides a forward intersection measurement method and system based on the structural parameters of a vision system. The method comprises the following steps: expressing, in a world coordinate system, the main target point vector for the carrier at its actual position; solving the intersection distance and the main-optical-axis rotation radius and determining a fitting function between the focal length and the intersection distance; expressing, in the world coordinate system, each image point to be measured after stereo-field error correction; and performing forward intersection measurement on the point to be measured to obtain its coordinate values in the world coordinate system. According to the method and system, a world coordinate system is established from the structural parameters of the vision system without requiring forward control points, and forward intersection measurement is performed on the point to be measured; on the basis of linear-space analysis, a method of forward-intersection spatial point measurement based on the structural parameters of the vision system is introduced, achieving more accurate photogrammetry of the point to be measured.
Description
Technical Field
The invention relates to the technical field of digital photogrammetry, in particular to a forward intersection measuring method and system based on structural parameters of a vision system.
Background
With the expanding application of machine vision and AI technology in precision agriculture, the demand for high-precision, control-point-free measurement in automated intelligent operations in agriculture and other industries keeps growing.
Forward intersection photogrammetry based on monocular or binocular cameras typically requires control points or assistance from other sensors. However, ideal target placement is usually impossible in agricultural and other real-world scenarios; moreover, auxiliary sensors not only raise equipment cost but are also constrained by environmental conditions, which limits the application scenarios and measurement accuracy of the vision system. In summary, existing forward intersection methods struggle to obtain the position of the point to be measured accurately without control points.
Disclosure of Invention
The invention provides a forward intersection measurement method and system based on the structural parameters of a vision system, addressing the difficulty in the prior art of accurately obtaining the position of a point to be measured without control points, and achieving more accurate, control-point-free acquisition of that position based on the structural parameters of the vision system.
The invention provides a forward intersection measuring method based on structural parameters of a vision system, which comprises the following steps:
respectively expressing, in a world coordinate system, the main target point vectors for the carrier at its actual position, according to the pose of each target relative to the image plane and the preset position of the carrier in the vision system; wherein the main target point vector for the carrier at an actual position is the vector from the origin of the world coordinate system to the main target point with the carrier at that position; the main target point is the projection of the principal point of a vision sensor in the vision system onto the target along the main optical axis;
solving the intersection distance and the main-optical-axis rotation radius based on the focal length, the intersection points and the exterior orientation elements among the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, for the carrier at its actual position; and determining a fitting function between the focal length and the intersection distance;
after acquiring a plurality of target images containing the point to be measured with the vision sensor, performing stereo-field error correction on the image point corresponding to the point to be measured in each target image, and expressing each corrected image point vector to be measured in the world coordinate system; in any two target images, the pose of the point to be measured relative to the vision system and the intersection point vector are both different; the vision sensor is mounted on the carrier; the intersection point vector is the vector from the rotation center of the carrier to the intersection point;
performing forward intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main-optical-axis rotation radius among the structural parameters of the vision system, the gyro value of the gyro device, and each corrected image point to be measured expressed in the world coordinate system, so as to obtain the coordinate values of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is located at the rotation center of the carrier, and the directions of the coordinate axes of the world coordinate system are determined according to the directions of the coordinate axes when the gyro device is located at the initial position.
The invention also provides a forward rendezvous measurement system based on the structural parameters of the vision system, which comprises:
the coordinate expression module is used for respectively expressing, in a world coordinate system, the main target point vectors for the carrier at its actual position, according to the pose of each target in the vision system relative to the image plane and the preset position of the carrier; wherein the main target point vector for the carrier at an actual position is the vector from the origin of the world coordinate system to the main target point with the carrier at that position; the main target point is the projection of the principal point of a vision sensor in the vision system onto the target along the main optical axis;
the function determining module is used for solving the intersection distance and the main-optical-axis rotation radius based on the focal length, the intersection points and the exterior orientation elements among the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, for the carrier at its actual position, and for determining a fitting function between the focal length and the intersection distance;
the error correction module is used for, after acquiring a plurality of target images containing the point to be measured with the vision sensor, performing stereo-field error correction on the image point corresponding to the point to be measured in each target image, and expressing each corrected image point vector to be measured in the world coordinate system; in any two target images, the pose of the point to be measured relative to the vision system and the intersection point vector are both different; the vision sensor is mounted on the carrier; the intersection point vector is the vector from the rotation center of the carrier to the intersection point;
the intersection measurement module is used for carrying out forward intersection measurement on the point to be measured based on the fitting function, the visual system structure parameters and each image point to be measured expressed in the world coordinate system and subjected to stereo field error correction, so as to obtain a coordinate value of the point to be measured in the world coordinate system; wherein the vision system comprises one or more vision sensors;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is located at the rotation center of the carrier, and the directions of the coordinate axes of the world coordinate system are determined according to the directions of the coordinate axes when the gyro device is located at the initial position.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the vision system structure parameter-based forward intersection measurement method.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for forward cross-measurement based on vision system configuration parameters as described in any of the above.
According to the method and system for forward intersection measurement based on the structural parameters of a vision system, a world coordinate system is established from the structural parameters of the vision system without requiring forward control points, and forward intersection measurement is performed on the point to be measured. On the basis of linear-space analysis, a method of forward-intersection spatial point measurement based on the structural parameters of the vision system (or an equivalent hand-eye system) is introduced, achieving more accurate photogrammetry of the point to be measured; experiments show that the error between the coordinates of the point to be measured obtained by this method and the true coordinates is about 2.5%.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for measuring frontal encounter based on structural parameters of a vision system according to the present invention;
FIG. 2 is a schematic diagram of the intersection distance of multi-target intersection in the method for measuring frontal intersection based on the structural parameters of the vision system according to the present invention;
FIG. 3 is a schematic diagram of a point to be measured by the forward rendezvous measurement method based on structural parameters of a vision system provided by the invention;
FIG. 4 is a second flowchart illustrating a method for forward rendezvous measurement based on structural parameters of a vision system according to the present invention;
FIG. 5 is a schematic structural diagram of a front-meeting measurement system based on structural parameters of a vision system according to the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; and as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific situation.
Fig. 1 is a schematic flow chart of the forward intersection measurement method based on the structural parameters of a vision system according to the present invention. The method is described below with reference to fig. 1. As shown in fig. 1, the method includes: 101, respectively expressing, in a world coordinate system, the main target point vectors for the carrier at its actual position, according to the pose of each target relative to the image plane and the preset position of the carrier in the vision system; wherein the main target point vector for the carrier at an actual position is the vector from the origin of the world coordinate system to the main target point with the carrier at that position; the main target point is the projection of the principal point of a vision sensor in the vision system onto the target along the main optical axis.
It should be noted that, before carrying out the forward intersection measurement method based on the structural parameters of the vision system, a plurality of targets in arbitrary poses can be laid out at preset positions.
For any target, after defining the coordinate system of the target as the coordinate system of the point to be measured, the intersection point between an image point of the vision sensor in the vision system and the corresponding actual point on the target can be obtained.
The actual points on the target are all points of the target other than the main target point.
And 102, solving the intersection distance and the main-optical-axis rotation radius based on the focal length, the intersection points and the exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, for the carrier at its actual position; and determining a fitting function between the focal length and the intersection distance.
103, acquiring a plurality of target images containing the point to be measured with a vision sensor in the vision system, performing stereo-field error correction on the corresponding image points to be measured in each target image, and expressing each corrected image point to be measured in the world coordinate system; in any two target images, the pose of the point to be measured relative to the vision system and the intersection point vector are both different; the vision sensor is mounted on the carrier; the intersection point vector is the vector from the rotation center of the carrier to the intersection point.
It should be noted that the visual sensor in the embodiment of the present invention may be a video camera.
And step 104, performing forward intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main-optical-axis rotation radius among the structural parameters of the vision system, the gyro value of the gyro device, and each corrected image point to be measured expressed in the world coordinate system, obtaining the coordinate values of the point to be measured in the world coordinate system.
The main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the main optical axis and the focal length are determined based on a vision sensor; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier of the visual system, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position.
It should be noted that the carrier may be a pan-tilt or a robotic arm. The visual sensor may be a camera or the like that can capture images.
In measuring with a pan-tilt head (or a robotic arm) as the camera carrier, analysis of the role of each vector in the process shows that, when the focal length is fixed, the intersection point vector is uniquely determined by the normal vector at the tangent point of the main optical axis (i.e., the rotation radius) and the vector from the tangent point to the intersection point (the intersection-tangent distance). The rotation center of the pan-tilt head (or robotic arm) is therefore taken as the origin of the world coordinate system. When forward intersection is performed on the basis of this world coordinate system, no target is needed, but the intersection distance and the main-optical-axis rotation radius must be measured accurately.
To this end, the coordinates of the sphere center can be obtained by a geometric-algebraic transformation, i.e., by algebraically transforming the bundle intersection into a tangent co-spherical intersection. Study of intersection data shows that, with suitable experimental conditions, the precise structural parameters of the vision system can be obtained directly, including the intersection distance and the rotation radius of the main optical axis.
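The tangent co-spherical step above reduces to recovering a sphere centre algebraically from sample points. A minimal sketch of such an algebraic least-squares sphere fit, using synthetic illustrative data rather than real intersection data:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: returns (centre, radius).

    Uses the linearisation x^2 + y^2 + z^2 = 2 c.x + (r^2 - |c|^2),
    which turns the sphere fit into a linear system.
    """
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic check: points on a sphere of radius 2 centred at (1, -1, 0.5)
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 0.5]) + 2.0 * d
c, r = fit_sphere(pts)
```

With exact (noise-free) points the fit recovers the centre and radius to machine precision; with real intersection data it returns the least-squares estimate.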
After the accurate intersection distance and the main optical axis rotation radius are obtained, a world coordinate system can be established by using the two structural parameters, and the intersection point vector and the image point vector are expressed in the world coordinate system for front intersection.
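Once the intersection point vectors and image point vectors are all expressed in one world frame, the forward intersection itself is a standard least-squares ray-intersection problem. A sketch (the ray data here are illustrative, not from the patent):

```python
import numpy as np

def forward_intersection(origins, directions):
    """Least-squares point closest to a bundle of rays.

    For each ray (origin o, unit direction u), the projector
    P = I - u u^T maps points into the plane orthogonal to the ray;
    summing P p = P o over all rays and solving gives the point
    minimising the total squared distance to the rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, u in zip(origins, directions):
        u = np.asarray(u, dtype=float)
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two rays that intersect exactly at (1, 1, 0)
p = forward_intersection(
    origins=[[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]],
    directions=[[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]],
)
```

With more than two rays, or noisy rays that do not quite meet, the same solve returns the least-squares intersection point.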
According to the embodiment of the invention, a world coordinate system is established from the structural parameters of the vision system and forward intersection measurement is performed on the point to be measured. On the basis of linear-space analysis, a method of forward-intersection spatial point measurement based on the structural parameters of the vision system (or an equivalent hand-eye system) is introduced, achieving more accurate photogrammetry of the point to be measured; experiments show that the error between the coordinates of the point to be measured obtained by this method and the true coordinates is about 2.5%.
Based on the content of the above embodiments, expressing the main target point vectors for the carrier at its actual position in the world coordinate system, according to the pose of each target relative to the image plane and the preset position of the carrier in the vision system, specifically includes: calibrating the focal length and the principal point coordinates of the vision sensor.
Obtaining the exterior orientation elements between each target in any pose and the image plane, wherein m is the identifier of the target, m = 1, 2, 3, …; n is the identifier of the target pose, n = 1, 2, 3, …; the position of each target is predetermined, and the pose of each target is acquired by rear intersection (resection) or from the gyro with the carrier at the actual position.
It should be noted that there may be six exterior orientation elements: three describe the coordinates of the intersection point of the vision system, and the other three are angular elements describing the spatial attitude of the photographic beam.
Obtaining the rotation transformation matrix between the coordinate system of the target identified as m in pose n and the world coordinate system, which — writing R_x←y for the rotation transformation matrix from frame y to frame x — composes as
R_w←tm(n) = R_w←i · R_i←b(n) · R_b←c · R_c←tm(n)
wherein R_c←tm(n) is the rotation transformation matrix from the coordinate system of the target in pose n to the coordinate system of the vision sensor; R_b←c is the rotation transformation matrix from the coordinate system of the vision sensor to the coordinate system of the carrier; R_i←b(n) is the rotation transformation matrix from the coordinate system of the carrier corresponding to the target in pose n to the coordinate system of the gyro device at its initial position; and R_w←i is the rotation transformation matrix from the coordinate system of the gyro device at its initial position to the world coordinate system. The carrier is a pan-tilt head or a robotic arm, and the gyro device and the vision sensor are mounted on the carrier; the vision sensor comprises a lens; the coordinate system of the preset position of the carrier coincides with the world coordinate system; the actual position of the carrier is determined from the gyro value of the gyro device; and the coordinate system of the gyro device at its initial position is determined by the navigation module built into the gyro device.
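The chain of rotation transformations described above composes by plain matrix multiplication. A sketch with hypothetical elementary rotations (the angles and the use of z-axis rotations are illustrative assumptions only):

```python
import numpy as np

def rot_z(deg):
    """Elementary rotation about the z-axis, for illustration."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Chain: target -> sensor -> carrier -> gyro-initial -> world
R_sensor_from_target = rot_z(10.0)
R_carrier_from_sensor = rot_z(20.0)
R_init_from_carrier = rot_z(5.0)
R_world_from_init = rot_z(-35.0)

# Composition follows the frame chain right-to-left
R_world_from_target = (R_world_from_init @ R_init_from_carrier
                       @ R_carrier_from_sensor @ R_sensor_from_target)
```

Here the illustrative angles sum to zero, so the composed matrix is the identity; in practice each factor comes from calibration, the gyro, or resection as described above.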
According to the preset position of the carrier, obtaining the rotation transformation matrix from the coordinate system of the carrier at its actual position to the coordinate system of the carrier at its preset position; writing R_x←y for the rotation from frame y to frame x, this matrix can be composed as R_p←bn = R_p←i · (R_bn←i)⁻¹, wherein R_p←bn is the rotation transformation matrix from the coordinate system of the carrier at the actual position, corresponding to the target identified as m in pose n, to the preset position of the carrier; R_p←i is the rotation transformation matrix from the initial position of the gyro device to the preset position; and R_bn←i is the rotation transformation matrix from the initial position of the gyro device, corresponding to the target in pose n, to the carrier at the actual position. The main target point is the projection of the principal point of the vision sensor onto the target along the main optical axis.
It should be noted that the pose of each target can be obtained on the basis of the output values of the gyro module with the carrier at the actual position.
FIG. 2 is a schematic diagram of intersection distance determination for multi-target intersection in the method for forward intersection measurement based on structural parameters of a vision system according to the present invention. As shown in FIG. 2, A_n and A_{n-1} denote the main target points on the target identified as m in poses n and n-1; A_{n-i} and A_{n-j} denote the main target points on the target identified as m-1 in poses n-i and n-j; O_tm is the origin of the coordinate system of the target identified as m, and X_tm, Y_tm and Z_tm are the directions of its three coordinate axes; O_i is the origin of the coordinate system of the gyro device at its initial position, and X_i, Y_i and Z_i are the three coordinate axis directions of the navigation module's coordinate system; O_w is the rotation center of the carrier and also the origin of the world coordinate system, with coordinate axis directions X_w, Y_w and Z_w; O_bn is the origin of the coordinate system of the gyro device with the carrier at the actual position, and X_bn, Y_bn and Z_bn are the three coordinate axis directions of that coordinate system.
O_cn is the origin of the image plane coordinate system with the carrier at the actual position, and X_cn, Y_cn and Z_cn are its three coordinate axis directions; P_{n-1}, P_n, P_{n-i} and P_{n-j} are the tangent points of the main optical axis with the main-optical-axis rotation sphere; F_n and F_{n-1} are the rear intersection (resection) points of an arbitrary point group identified as m on the target and its image point group, in poses n and n-1; F_{n-i} and F_{n-j} are the rear intersection points of an arbitrary point group on the target identified as m-1 and its image point group, in poses n-i and n-j.
When the carrier is at the preset position, in the world coordinate system, the vector from the origin O_W of the world coordinate system to the tangent point P_p of the main optical axis with the main-optical-axis rotation sphere can be written as ρ_0·n_p, and the vector from O_W to the intersection point F_p corresponding to the tangent point as ρ_0·n_p + d_z·u_p, where n_p is the unit vector from O_W to P_p, u_p is the unit direction of the main optical axis (perpendicular to n_p at the tangent point), ρ_0 is the main-optical-axis rotation radius, and d_z is the intersection-tangent distance; the intersection-tangent distance is the distance between the tangent point and its corresponding intersection point; the main-optical-axis rotation sphere is determined based on the common-sphere tangency.
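Under the geometry just described — the main optical axis tangent to a sphere of radius ρ_0 about the rotation centre, with the intersection point a further d_z along the axis — the two vectors can be sketched as follows (the directions and parameter values are illustrative assumptions):

```python
import numpy as np

rho0, dz = 0.12, 1.5               # illustrative structural parameters (metres)
n_p = np.array([1.0, 0.0, 0.0])    # assumed unit radius direction O_W -> P_p
u_p = np.array([0.0, 0.0, 1.0])    # assumed optical-axis direction, perpendicular to n_p

P_p = rho0 * n_p                   # tangent point of the main optical axis on the sphere
F_p = P_p + dz * u_p               # intersection point, d_z along the axis from P_p
```

Because the axis is tangent to the sphere, the segment from the tangent point to the intersection point is perpendicular to the radius vector — the property the solving step below relies on.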
It should be noted that the forward intersection measurement method based on the structural parameters of the vision system involves two intersections. The first is the conventional rear intersection (resection) of target points and image points, from which the exterior orientation elements and attitude angles between each target in any pose and the image plane can be determined; here A_nF_n denotes the distance between the intersection point and the corresponding main target point. The second is the common intersection of the main optical axes. Using the above exterior orientation elements, structural parameters and gyro values, the vectors of the intersection point, the tangent point and the main target point for the carrier at any actual position, and their related quantities, can be expressed — for example the two rotation transformation matrices (the rotation transformation matrix from the coordinate system of the target to the world coordinate system, and the rotation transformation matrix from the coordinate system of the carrier at its actual position to that of its preset position).
Based on the two vectors above, the vector from the origin O_W of the world coordinate system to the main target point A_p for the carrier at the preset position can be expressed in the world coordinate system as ρ_0·n_p + (d_z + A_pF_p)·u_p, where n_p and u_p are the unit radius and main-optical-axis directions at the preset position and A_pF_p is the distance between the intersection point F_p and the main target point A_p.
For the target identified as m, the rotation matrix from the preset-position vector in the world coordinate system to the vector from the origin O_W of the world coordinate system to the main target point A_m with the carrier at the actual position is given by the rotation transformation matrices obtained above.
The translation vector between the target coordinate system and the world coordinate system can then be expressed from these quantities;
wherein A_m is taken at the origin O_tm of the coordinate system of the target identified as m; the main target point A_m for the target identified as m in a given pose and its corresponding intersection point F_m lie on the same main optical axis; A_mF_m denotes the distance between the intersection point F_m and the corresponding main target point A_m; and the translation amount is that between the world coordinate system and the target coordinate system.
According to the embodiment of the invention, the vector of the main target point when the carrier is positioned at the actual position is expressed in the world coordinate system based on the rotation transformation matrix among the coordinate systems, each vector can be expressed in the same world coordinate system, a data basis can be provided for acquiring the coordinate of the point to be measured in the world coordinate system, and the consistency of photogrammetry can be improved.
Based on the content of the above embodiments, solving the intersection distance and the main-optical-axis rotation radius — based on the focal length, the intersection points and the exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device, and the main target point vector, expressed in the world coordinate system, for the carrier at its actual position — and determining the fitting function between the focal length and the intersection distance, specifically includes: for each target, the vision sensor acquires, at each of several target focal lengths, a plurality of sample images of the target in arbitrary poses; in any two sample images of a target, the distance and the pose of the target relative to the vision sensor differ.
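Once d_z has been solved at several target focal lengths, the fitting function between focal length and intersection distance can be obtained by ordinary least squares. A sketch with hypothetical sample values — a linear model is assumed here for illustration; the patent does not fix the functional form:

```python
import numpy as np

# Hypothetical calibration samples: focal length f (mm) vs solved d_z (m).
f = np.array([25.0, 35.0, 50.0, 85.0, 100.0])
dz_samples = np.array([0.8, 1.1, 1.6, 2.7, 3.2])

coeffs = np.polyfit(f, dz_samples, deg=1)    # fit d_z(f) = a*f + b
dz_at_60 = np.polyval(coeffs, 60.0)          # predicted d_z at f = 60 mm
```

A higher polynomial degree (or any other parametric model) can be substituted if the measured relation between focal length and intersection distance is not linear.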
Respectively acquiring each intersection distance d corresponding to each target focal length based on the sample image of each target acquired at each target focal lengthzThe solution specifically includes: for a target marked as m, acquiring a transformation matrix of a posture coordinate system relative to a world coordinate system based on a sample image of n postures acquired at any target focal lengthObtaining a main target point A on the target of the n postures when the carrier is positioned at the actual positionnCoordinates in a target coordinate system
Based on the coordinates of the main target point A_n in the coordinate system of the target identified as m, the coordinates of the main target point A_n are expressed in the world coordinate system.
For the target identified as m, based on each sample image containing the target in the n poses acquired at the target focal length, the coordinates in the target coordinate system of the intersection point F_n — corresponding to the tangent point of the main optical axis and the main-optical-axis rotation sphere with the carrier at its actual position — can be obtained. The intersection point F_n corresponds to the main target point A_n and lies on the same main optical axis.
Based on the coordinates of the intersection point F_n in the coordinate system of the target identified as m, the coordinates of the intersection point F_n are expressed in the world coordinate system.
Based on the sample images of the target identified as m in the n poses acquired at the target focal length, the tangent point P_n of the main optical axis corresponding to the main target point A_n and intersection point F_n with the main-optical-axis rotation sphere is expressed in the world coordinate system.
The main target point A_n, the intersection point F_n and the tangent point P_n all lie on the same main optical axis.
For the target identified as m in the n poses, based on the coordinates of the main target point A_n in the world coordinate system, the coordinates of the intersection point F_n in the world coordinate system, and the coordinates of the tangent point P_n, a tangent three-point collinearity equation is obtained:
where λ represents a constant.
Substituting the coordinates of the main target point A_n, the coordinates of the intersection point F_n and the coordinates of the tangent point P_n — expressed in terms of the main-optical-axis rotation radius ρ_0 and the intersection distance d_z — into the tangent three-point collinearity equation yields:
where:
The error equation is expressed as:
The nth iteration correction is solved as X_n = (Λ_n^T Λ_n)^{-1} Λ_n^T L_n.
From X_n = [d(d_z), dρ_0]^T, the solution of the intersection distance d_z corresponding to the target focal length is obtained by iterative computation.
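The iterative correction X_n = (Λ_n^T Λ_n)^{-1} Λ_n^T L_n is an ordinary Gauss-Newton least-squares update. A minimal sketch, where the residual and Jacobian callbacks, the toy linear model and all numeric values are illustrative stand-ins rather than the patent's actual design matrix:

```python
import numpy as np

def iterate_correction(residual_fn, jacobian_fn, x0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton iteration solving X_n = (Λ_n^T Λ_n)^{-1} Λ_n^T L_n.

    x0 is the initial guess for the parameter vector, e.g. [d_z, rho_0]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        L = residual_fn(x)           # constant terms L_n (current residuals)
        Lam = jacobian_fn(x)         # design matrix Λ_n
        # least-squares correction, numerically equivalent to (Λ^T Λ)^{-1} Λ^T L
        dx, *_ = np.linalg.lstsq(Lam, L, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy check: recover the parameters of a linear model y = a*t + b
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * t + 0.5
res = lambda x: y - (x[0] * t + x[1])                    # residuals
jac = lambda x: np.column_stack([t, np.ones_like(t)])    # model Jacobian
a, b = iterate_correction(res, jac, [0.0, 0.0])
```

For a linear model the iteration converges in a single step; for the nonlinear collinearity residuals of the method it would be repeated until the correction norm falls below the tolerance.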
In addition, the method of obtaining the intersection distance d_z and the main-optical-axis rotation radius ρ_0 is not limited to the one above; other solving methods are equally applicable to the present invention.
After the solution of the intersection distance d_z corresponding to each target focal length is obtained, a fitting function between the focal length f and the intersection distance d_z is obtained based on each target focal length and its corresponding intersection distance d_z.
It should be noted that the relationship between the focal length f and the intersection distance d_z may be established either as an interpolation mapping matrix or as a fitting function d_z(f). The specific fitting method for f and d_z is not particularly limited in the embodiments of the present invention.
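Both options named above — an interpolation mapping and a fitting function d_z(f) — can be sketched as follows; the calibration samples are invented purely for illustration:

```python
import numpy as np

# Hypothetical calibration samples: focal lengths (mm) and the intersection
# distances d_z solved at each focal length (all values illustrative only).
f_samples  = np.array([8.0, 12.0, 16.0, 25.0, 35.0])
dz_samples = np.array([41.2, 44.9, 48.1, 55.0, 62.3])

# Option 1: a polynomial fitting function d_z(f)
coeffs = np.polyfit(f_samples, dz_samples, deg=2)
dz_of_f = np.poly1d(coeffs)
dz_at_20 = float(dz_of_f(20.0))       # d_z at a calibrated-in-between focal length

# Option 2: an interpolation mapping over the sampled (f, d_z) pairs
dz_interp = float(np.interp(20.0, f_samples, dz_samples))
```

At measurement time, the calibrated focal length of the sensor is fed into whichever mapping was built, giving the intersection distance without re-solving the common-sphere intersection.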
It should be noted that the present invention directly uses two structural parameters of the vision system as variables to express the main target point vector, introduces the gyro value to jointly express that vector with the carrier at any set actual position, and uses it as the expression parameter of the translation vector converted into the world coordinate system. The tangent lines are then intersected with a common sphere to obtain the structural parameters: the intersection distance d_z and the main-optical-axis rotation radius ρ_0. In the prior art, the coordinates of the rotation center are obtained by directly intersecting the front main target point, the first rear intersection point and the tangent point, after which the rotation radius is obtained; no gyro value is introduced into the measurement.
In the embodiment of the invention, once the fitting function between the focal length and the intersection distance has been obtained, the intersection distance for a forward intersection measurement of any point to be measured in the predetermined measurement space can be determined quickly from the focal length of the vision sensor and that fitting function, and the coordinates of the point to be measured in the world coordinate system can then be obtained more accurately and quickly from the image points, the intersection distance and the main-optical-axis rotation radius.
Based on the content of the above embodiment, after acquiring a plurality of target images including points to be measured by using a vision sensor in a vision system, performing stereo field error correction on corresponding points to be measured in each target image, and expressing each point to be measured after stereo field error correction in a world coordinate system, specifically including: and acquiring a plurality of target images comprising points to be measured by using a vision sensor in a vision system.
Based on each target image, determining the image coordinates of the corresponding image point to be measured in each target image in the visual sensor coordinate system
Using the previously acquired image-space vertical-axis correction data W = [W_x, W_y]^T, the image coordinates of each image point to be measured are corrected to obtain the first coordinates of each image point, after image-space vertical-axis correction, in the vision sensor coordinate system; the coordinates of each image point to be measured are then corrected using the axial correction data a to obtain the second coordinates of each image point in the vision sensor coordinate system.
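The correction formulas themselves appear only in the figures, so the sketch below assumes a simple additive vertical-axis correction and a scalar axial correction purely for illustration; the names W and a follow the text, while `correct_image_point` and every numeric value are hypothetical:

```python
import numpy as np

def correct_image_point(xy, W, a):
    """Two-stage stereo-field correction of one image point (assumed forms).

    xy : raw image coordinates of the point to be measured (sensor frame)
    W  : vertical-axis correction data [W_x, W_y]^T  (assumed additive)
    a  : axial correction data                        (assumed scalar scale)
    Returns the 'first' (vertical-axis corrected) and 'second' (axially
    corrected) coordinates described in the method."""
    xy = np.asarray(xy, dtype=float)
    first = xy + np.asarray(W, dtype=float)  # image-space vertical-axis correction
    second = a * first                       # axial correction
    return first, second

first, second = correct_image_point([1.20, -0.35], W=[0.01, -0.02], a=1.001)
```

The actual correction models are calibration-specific; only the two-stage structure (first coordinates, then second coordinates) is taken from the text.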
The vector from the origin of the world coordinate system to the origin of the image-plane coordinate system, with the carrier at the preset position, is expressed in the world coordinate system as [0, ρ_0, −f − d_z]^T. The first and second coordinates in the vision sensor coordinate system are then converted into the world coordinate system, giving the third and fourth coordinates of each image point to be measured in the world coordinate system.
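A sketch of that conversion into the world frame, assuming the stated translation [0, ρ_0, −f − d_z]^T and a known sensor-to-world rotation; the function name and all numeric values are illustrative, not the patented formulation:

```python
import numpy as np

def image_point_to_world(xy_corrected, f, dz, rho0, R_world_from_sensor):
    """Express one corrected image point in the world coordinate system.

    The origin of the image-plane frame sits at [0, rho0, -f - dz]^T in world
    coordinates (carrier at the preset position); R_world_from_sensor is the
    rotation taking sensor-frame vectors into the world frame (assumed known,
    e.g. from the gyro values)."""
    t = np.array([0.0, rho0, -f - dz])                       # image-plane origin in world frame
    p_sensor = np.array([xy_corrected[0], xy_corrected[1], 0.0])  # point lies on the image plane
    return R_world_from_sensor @ p_sensor + t

p = image_point_to_world([1.21, -0.37], f=16.0, dz=48.1, rho0=5.2,
                         R_world_from_sensor=np.eye(3))
```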
In the embodiment of the invention, performing stereo-field error correction on the image points to be measured and converting them into the world coordinate system further improves the accuracy of the acquired coordinates of the point to be measured in the world coordinate system.
Based on the content of the above embodiment, performing forward intersection measurement on the point to be measured based on the fitting function, the structural parameters of the visual system, and each image point to be measured expressed in the world coordinate system after the stereo field error correction to obtain coordinate values of the point to be measured in the world coordinate system, specifically including: and calibrating the focal length of the visual sensor to determine the focal length f of the visual sensor.
Based on the predetermined fitting function between the focal length f and the intersection distance d_z, the intersection distance d_z corresponding to the focal length f is determined.
Based on the vector from the origin O_W of the world coordinate system to the intersection point F_p with the carrier at the preset position, the vector from O_W to the intersection point corresponding to the point to be measured, with the carrier at its actual position, is expressed using the rotation matrix that converts the carrier's preset position into its actual position.
FIG. 3 is a schematic diagram of measuring a point to be measured with the forward intersection measurement method based on the structural parameters of the vision system. As shown in FIG. 3, a three-point collinearity equation is obtained based on the coordinates of the point to be measured B_n in the world coordinate system, the third coordinates in the world coordinate system of each image point corresponding to B_n, and the intersection point F_n corresponding to B_n:
Taylor expansion is performed to obtain:
where:
The error equation is expressed as:
where the constant term is:
The nth iteration correction is solved as: χ_n = (Λ_n^T Λ_n)^{-1} Λ_n^T L_n.
Based on the nth iteration correction χ_n, the first solution of the point to be measured B_n corresponding to the third coordinates is obtained by iterative computation.
Based on the point to be measured B_n, the fourth coordinates in the world coordinate system of each image point corresponding to B_n, and the intersection point F_n corresponding to B_n, the second solution of B_n corresponding to the fourth coordinates is obtained by the same method.
From the first solution and the second solution, the coordinate value of the point to be measured B_n in the world coordinate system is determined.
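The multi-ray forward intersection can be illustrated with a closed-form least-squares stand-in for the iterative collinearity adjustment described above. Each ray runs from an intersection point F_n through its corresponding image point; the geometry and values below are invented for the demonstration:

```python
import numpy as np

def forward_intersection(origins, directions):
    """Least-squares forward intersection of several rays in world coordinates.

    Returns the point minimising the summed squared distances to all rays —
    a linear stand-in for the patent's iterative collinearity adjustment."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two rays that meet exactly at (1, 2, 3)
B = forward_intersection(
    origins=[[0.0, 0.0, 0.0], [4.0, 0.0, 3.0]],
    directions=[[1.0, 2.0, 3.0], [-3.0, 2.0, 0.0]],
)
```

In the method above such an intersection would be run twice — once with the third coordinates and once with the fourth — giving the first and second solutions from which the final coordinate value is determined.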
In the embodiment of the invention, any point to be measured is measured by forward intersection through multi-image bundle adjustment, based on the acquired fitting function between the focal length and the intersection distance and on the expression of each vector in the world coordinate system, yielding the coordinates of the point to be measured in the world coordinate system. The point to be measured can therefore be photogrammetrically measured more accurately and efficiently, based on the structural parameters of the vision system, without any front control point.
Based on the disclosure of the above embodiments, the vision system includes two or more vision sensors.
Correspondingly, acquiring a plurality of target images including the point to be measured with a vision sensor of the vision system specifically includes: synchronously acquiring, with each vision sensor, a plurality of target images including the point to be measured, and processing the image data of the target images acquired by each vision sensor synchronously and in parallel.
According to the embodiment of the invention, more data can be acquired through the plurality of vision sensors, so that the coordinates of the point to be measured in the world coordinate system can be acquired more accurately based on more data.
Based on the content of the above embodiments, the vision system includes two or more vision sensors, and the point to be measured may be a dynamic point.
Correspondingly, after performing forward intersection measurement on the point to be measured and obtaining its coordinate value in the world coordinate system, the method further includes: based on the coordinates of the point to be measured in the world coordinate system determined independently by each vision sensor, combining the world coordinates obtained by the other vision sensors to obtain the motion vector of the point to be measured.
Specifically, for each vision sensor in the vision system, the vision sensor can independently acquire a plurality of target images including points to be measured.
Based on each target image acquired by each vision sensor, the coordinate of a point to be measured in the world coordinate system can be independently determined.
The motion vector of the point to be measured is obtained from the differences between the world-coordinate measurements of the point determined independently by each vision sensor.
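A trivial sketch of deriving the motion vector from two independently determined world coordinates of the dynamic point; the values are illustrative, and the text leaves the exact rule for combining the per-sensor measurements open:

```python
import numpy as np

# World coordinates of the same dynamic point determined independently at two
# instants by the vision sensors (illustrative values only).
B_t0 = np.array([10.0, 2.0, 1.5])
B_t1 = np.array([10.6, 2.4, 1.5])

motion_vector = B_t1 - B_t0            # displacement of the point to be measured
speed = np.linalg.norm(motion_vector)  # magnitude; divide by the sample interval
                                       # (if known) to obtain a velocity
```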
According to the embodiment of the invention, the dynamic point to be measured is independently measured by the plurality of vision sensors, so that the motion vector of the dynamic point to be measured can be obtained.
Fig. 4 is a second flowchart of the forward intersection measurement method based on the structural parameters of the vision system according to the invention. As shown in fig. 4, after the method starts, the main target point vector with the carrier at its actual position is first expressed in the world coordinate system according to the pose of the target relative to the image plane and the preset position of the carrier in the vision system.
The following steps need to be acquired: intrinsic parameters of the vision sensor; the external orientation elements of the right angles between each target and the image plane in any posture; a rotation transformation matrix from the coordinate system of the target to a world coordinate system; and a rotation transformation matrix of the coordinate system when the carrier is positioned at the actual position to the coordinate system when the carrier is positioned at the preset position.
And expressing the main target point vector when the carrier is positioned at the actual position in a world coordinate system by using the rotation transformation matrix and the structural parameters.
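The chain of rotation transformations listed above (target pose → vision sensor → carrier actual position → gyro initial position → world) composes by matrix multiplication. A sketch using placeholder z-axis rotations — every angle below is invented; in the method the individual matrices come from calibration, the exterior orientation elements and the gyro readings:

```python
import numpy as np

def rot_z(theta):
    """Elementary rotation about the z-axis (used only to build a demo chain)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Placeholder links of the chain; names indicate "frame_from_frame".
R_sensor_from_target  = rot_z(0.10)   # target (pose n) -> vision sensor
R_carrier_from_sensor = rot_z(0.25)   # vision sensor -> carrier (actual position)
R_gyro0_from_carrier  = rot_z(-0.05)  # carrier actual position -> gyro initial position
R_world_from_gyro0    = rot_z(0.02)   # gyro initial position -> world

# Composed rotation taking target-frame vectors directly into the world frame.
R_world_from_target = (R_world_from_gyro0 @ R_gyro0_from_carrier
                       @ R_carrier_from_sensor @ R_sensor_from_target)
```

Because all demo links rotate about the same axis, the composed matrix equals a single rotation by the summed angle — a convenient sanity check that the multiplication order is consistent.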
Second, multi-target collinearity equations are intersected to accurately solve the intersection distance and the main-optical-axis rotation radius. This specifically includes: converting the front main target point vector corresponding to the image principal point of each target's sample image, in any pose, from the target's coordinate system into the world coordinate system; establishing collinearity-equation common-sphere intersections for the poses of the multiple targets and accurately computing the intersection distance and main-optical-axis rotation radius; and computing the intersection distances at different focal lengths and fitting them to establish an interpolation mapping matrix and a fitting function between focal length and intersection distance.
Third, the stereo-field error of the image points is corrected and the points are converted into world coordinates. This specifically includes: correcting, with the image-space vertical-axis correction data and the axial correction data respectively, the two image-point vectors generated by the point to be measured in the target image; and converting the two image-point vectors to be measured into the carrier (pan-tilt) coordinate system.
Finally, forward intersection measurement is performed for any object to be measured in the measurement space. This specifically includes: calibrating the focal length of the vision sensor used for the measurement and obtaining, from the fitting function between focal length and intersection distance, the intersection distance and main-optical-axis rotation radius at that focal length; expressing the intersection point with the carrier at the preset position in the world coordinate system using the known structural parameters of the vision system; expressing the world coordinates of each actual-position intersection point and each image point based on the actual and preset positions; establishing a collinearity condition equation from the point to be measured, its corresponding intersection point and image point; and obtaining the coordinates of the point to be measured in the world coordinate system.
Fig. 5 is a schematic structural diagram of a forward-rendezvous measurement system based on structural parameters of a vision system according to the present invention. The vision system structural parameter-based forward intersection measurement system provided by the present invention is described below with reference to fig. 5, and the vision system structural parameter-based forward intersection measurement system described below and the vision system structural parameter-based forward intersection measurement method described above may be referred to correspondingly. As shown in fig. 5, the apparatus includes: a coordinate representation module 501, a function determination module 502, an error correction module 503, and an intersection measurement module 504.
The coordinate expression module 501 is configured to express, in the world coordinate system, the main target point vector with the carrier at its actual position, according to the pose of the target in the vision system relative to the image plane and the preset position of the carrier. The main target point vector with the carrier at its actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is at the actual position; the main target point is the projection, along the main optical axis, of the principal point of a vision sensor in the vision system onto the target plane.
The function determination module 502 is configured to solve the intersection distance and the main-optical-axis rotation radius based on the focal length, intersection point and exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector with the carrier at its actual position expressed in the world coordinate system, and to determine a fitting function between the focal length and the intersection distance.
The error correction module 503 is configured to, after a plurality of target images including the point to be measured are acquired with the vision sensor, perform stereo-field error correction on the corresponding image points of the point to be measured in each target image, and express each corrected image point in the world coordinate system. In any two target images, the pose of the point to be measured relative to the vision system and the intersection point vector are different; the vision sensor is disposed on the carrier; the intersection point vector is the vector from the rotation center of the carrier to the intersection point.
And the intersection measurement module 504 is configured to perform forward intersection measurement on the to-be-measured point based on the intersection distance obtained through the fitting function, the main optical axis rotation radius in the structural parameter of the visual system, the gyro value of the gyro device, and each to-be-measured image point expressed in the world coordinate system and subjected to the stereo field error correction, so as to obtain a coordinate value of the to-be-measured point in the world coordinate system.
The main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier of the visual system, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position.
In the embodiment of the invention, a world coordinate system is established from the structural parameters of the vision system and forward intersection measurement is performed on the point to be measured. On the basis of linear-space analysis, a method of forward intersection spatial-point measurement based on the structural parameters of a vision system (or an equivalent hand-eye system) is introduced, realizing datum-free and more accurate photogrammetry of the point to be measured. Experiments show that the error between the coordinates obtained by this forward intersection measurement method and the true coordinates is about 2.5%.
Fig. 6 illustrates a physical structure diagram of an electronic device, which, as shown in fig. 6, may include: a processor (processor) 610, a communication interface (Communications Interface) 620, a memory (memory) 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the forward intersection measurement method based on the structural parameters of the vision system, the method comprising: expressing, in the world coordinate system, the main target point vectors with the carrier at its actual position, according to the pose of a target in the vision system relative to the image plane and the preset position of the carrier; solving the intersection distance and the main-optical-axis rotation radius based on the focal length, intersection point and exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector with the carrier at its actual position expressed in the world coordinate system, and determining a fitting function between the focal length and the intersection distance; after acquiring a plurality of target images including the point to be measured with a vision sensor, performing stereo-field error correction on the corresponding image points of the point to be measured in each target image and expressing each corrected image point in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main-optical-axis rotation radius among the structural parameters of the vision system, the gyro value of the gyro device and each corrected image point expressed in the world coordinate system, to obtain the coordinate value of the point to be measured in the world coordinate system.
In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the forward intersection measurement method based on the structural parameters of the vision system provided by the above methods, the method comprising: expressing, in the world coordinate system, the main target point vectors with the carrier at its actual position, according to the pose of a target in the vision system relative to the image plane and the preset position of the carrier; solving the intersection distance and the main-optical-axis rotation radius based on the focal length, intersection point and exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector with the carrier at its actual position expressed in the world coordinate system, and determining a fitting function between the focal length and the intersection distance; after acquiring a plurality of target images including the point to be measured with a vision sensor, performing stereo-field error correction on the corresponding image points of the point to be measured in each target image and expressing each corrected image point in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main-optical-axis rotation radius among the structural parameters of the vision system, the gyro value of the gyro device and each corrected image point expressed in the world coordinate system, to obtain the coordinate value of the point to be measured in the world coordinate system.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the forward intersection measurement method based on the structural parameters of the vision system provided above, the method comprising: expressing, in the world coordinate system, the main target point vectors with the carrier at its actual position, according to the pose of a target in the vision system relative to the image plane and the preset position of the carrier; solving the intersection distance and the main-optical-axis rotation radius based on the focal length, intersection point and exterior orientation elements among the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector with the carrier at its actual position expressed in the world coordinate system, and determining a fitting function between the focal length and the intersection distance; after acquiring a plurality of target images including the point to be measured with a vision sensor, performing stereo-field error correction on the corresponding image points of the point to be measured in each target image and expressing each corrected image point in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main-optical-axis rotation radius among the structural parameters of the vision system, the gyro value of the gyro device and each corrected image point expressed in the world coordinate system, to obtain the coordinate value of the point to be measured in the world coordinate system.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (6)
1. A forward intersection measurement method based on structural parameters of a vision system is characterized by comprising the following steps:
respectively expressing main target point vectors when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target relative to an image plane and a preset position of the carrier in a visual system; wherein, the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a visual sensor in the visual system on a target through a main optical axis;
solving an intersection distance and a main optical axis rotation radius based on a focal length, an intersection point and an external orientation element in the structural parameters of the visual system, a gyro value of a gyro device in the visual system and a main target point vector when the carrier expressed in the world coordinate system is located at an actual position, and determining a fitting function between the focal length and the intersection distance;
after a plurality of target images including the point to be measured are acquired with the vision sensor, performing stereo-field error correction on the corresponding image points of the point to be measured in each target image, and expressing each corrected image-point vector in the world coordinate system; wherein, in any two target images, the pose of the point to be measured relative to the vision system and the intersection point vector are different; the vision sensor is disposed on the carrier; the intersection point vector is the vector from the rotation center of the carrier to the intersection point;
performing front intersection measurement on the point to be measured based on the intersection distance obtained through the fitting function, the main optical axis rotation radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to stereo field error correction, so as to obtain a coordinate value of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position;
the method includes the following steps of respectively expressing a main target point vector when a carrier is located at an actual position in a world coordinate system according to the posture of a target relative to an image plane and a preset position of the carrier in the visual system, and specifically includes:
calibrating the focal length and the principal point coordinates of the vision system;
obtaining the exterior orientation elements (formula omitted) between each target in any posture and the image plane, wherein m is the identifier of the target, m = 1, 2, 3, …; n is the identifier of the posture of the target, n = 1, 2, 3, …; the position of each target is predetermined, and the posture of each target when the carrier is located at the actual position is acquired by space resection (back intersection) or by a gyro module;
calculating the rotation transformation matrix (formula omitted) from the coordinate system of the target identified as m in posture n to the world coordinate system:
wherein (formula omitted) denotes the rotation transformation matrix from the coordinate system of the target in posture n to the coordinate system of the vision sensor; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the vision sensor to the coordinate system of the carrier at the actual position; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the carrier at the actual position corresponding to the target in posture n to the coordinate system of the gyro device at its initial position; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the gyro device at its initial position to the world coordinate system; the carrier is a pan-tilt head or a mechanical arm, and the gyro device and the vision sensor are carried by the carrier; the vision sensor comprises a lens; the coordinate system of the carrier at the preset position coincides with the world coordinate system; the actual position of the carrier is determined according to the gyro value of the gyro device; the coordinate system of the gyro device at the initial position is determined according to the navigation module built into the gyro device;
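The four-matrix chain described in this step is a plain composition of rotation matrices. A minimal sketch follows; the function and matrix names are illustrative stand-ins for the (omitted) symbols in the claim, not the patent's notation:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def target_to_world(R_t2s, R_s2c, R_c2g0, R_g02w):
    # Compose the chain listed in the claim: target -> vision sensor ->
    # carrier (actual position) -> gyro initial position -> world.
    return R_g02w @ R_c2g0 @ R_s2c @ R_t2s

# With every link equal to the identity, the target frame already
# coincides with the world frame.
R = target_to_world(np.eye(3), np.eye(3), np.eye(3), np.eye(3))
```

Because each factor is a rotation, the composed matrix is itself a rotation (orthogonal with unit determinant), which is what makes the later collinearity equations well-posed.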
acquiring, according to the preset position of the carrier, the rotation transformation matrix (formula omitted) from the preset position of the carrier corresponding to the target in posture n to the actual position of the carrier:
wherein (formula omitted) denotes the rotation transformation matrix from the preset position of the carrier corresponding to the target identified as m in posture n to the carrier at the actual position; (formula omitted) denotes the rotation transformation matrix from the initial position of the gyro device to the preset position of the carrier; (formula omitted) denotes the rotation transformation matrix from the initial position of the gyro device corresponding to the target in posture n to the carrier at the actual position;
when the carrier is located at the preset position, the vector from the origin OW of the world coordinate system to the tangent point Pp between the main optical axis and the main-optical-axis rotation sphere is expressed in the world coordinate system as (formula omitted), and the vector from the origin OW of the world coordinate system to the intersection point Fp corresponding to the tangent point Pp is expressed as (formula omitted), where ρ0 is the main optical axis rotation radius and dz is the intersection-tangent distance; the intersection-tangent distance is the distance between a tangent point and its corresponding intersection point; the main-optical-axis rotation sphere is determined based on the tangent common-sphere intersection;
based on the two vectors above, the vector from the origin OW of the world coordinate system to the main target point Ap when the carrier is located at the preset position is expressed in the world coordinate system as (formula omitted);
for the target identified as m, the rotation matrix from the preset-position vector (formula omitted) in the world coordinate system to the vector (formula omitted) from the origin OW of the world coordinate system to the main target point Am when the carrier is located at the actual position is:
the translation vector between the target coordinate system and the world coordinate system is expressed as:
wherein (formula omitted) denotes the components of the vector above; Am is the origin Otm of the coordinate system of target m; the intersection point Fm corresponding to the main target point Am when the target identified as m is in a certain posture lies on the same main optical axis as the main target point Am; AmFm denotes the distance between the intersection point Fm and its corresponding main target point Am; (formula omitted) denotes the translation amount between the world coordinate system and the target coordinate system;
the solving of the intersection-tangent distance and the main optical axis rotation radius based on the focal length, the intersection point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position, and the determining of the fitting function between the focal length and the intersection-tangent distance specifically include:
for each target, the vision sensor acquires a plurality of sample images of the target in any posture at any target focal length; in any two sample images of the target, the distance and the posture of the target relative to the vision sensor are different;
solving, based on the sample images of each posture of each target acquired at each target focal length, each intersection-tangent distance dz corresponding to each target focal length, which specifically includes:
for the target identified as m, based on the sample images of posture n acquired at any target focal length, acquiring the coordinates (formula omitted), in the coordinate system of the target identified as m, of the main target point An on the target of posture n when the carrier is located at the actual position;
based on the coordinates of the main target point An in the coordinate system of the target identified as m, expressing the coordinates (formula omitted) of the main target point An in the world coordinate system;
acquiring, based on each sample image of the target identified as m in posture n acquired at the target focal length, the coordinates (formula omitted), in the coordinate system of the target, of the intersection point Fn corresponding to the tangent point between the main optical axis and the main-optical-axis rotation sphere when the carrier is located at the actual position;
based on the coordinates of the intersection point Fn in the coordinate system of the target, expressing the coordinates (formula omitted) of the intersection point Fn in the world coordinate system;
expressing, in the world coordinate system and based on the sample images of the target in posture n acquired at the target focal length, the tangent point Pn (formula omitted) between the main optical axis and the main-optical-axis rotation sphere corresponding to the main target point An and the intersection point Fn;
wherein the main target point An, the intersection point Fn and the tangent point Pn are all located on the same main optical axis;
for the target identified as m in posture n, obtaining the tangent three-point collinearity equation based on the coordinates (formula omitted) of the main target point An in the world coordinate system, the coordinates (formula omitted) of the intersection point Fn in the world coordinate system, and the coordinates (formula omitted) of the tangent point Pn:
wherein λ denotes a constant;
substituting the coordinates of the main target point An, the intersection point Fn and the tangent point Pn, expressed in terms of the main optical axis rotation radius ρ0 and the intersection-tangent distance dz, into the tangent three-point collinearity equation to obtain:
wherein the linearized coefficients are given by the corresponding expressions (formulas omitted);
the error equation is expressed as:
solving the nth iteration correction as Xn = (ΛnᵀΛn)⁻¹ΛnᵀLn;
resolving Xn = [d(dz), dρ0]ᵀ according to the correction term of the error equation, and obtaining through iterative operation the solutions of the intersection-tangent distance dz and the main optical axis rotation radius ρ0 corresponding to the target focal length;
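The iteration Xn = (ΛnᵀΛn)⁻¹ΛnᵀLn above is an ordinary Gauss-Newton loop over the linearized error equations. A generic sketch follows; since the patent's coefficient formulas are omitted from this text, the design-matrix builder `build_design` is an assumed callback, not the patent's exact expressions:

```python
import numpy as np

def gauss_newton(build_design, x0, tol=1e-10, max_iter=100):
    """Iterate X_n = (Lam^T Lam)^-1 Lam^T L until the correction is tiny.

    build_design(x) must return (Lam, L): the Jacobian and the residual
    vector of the error equation linearised at the current estimate x
    (here x would be [d_z, rho_0])."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Lam, L = build_design(x)
        # Solve the normal equations instead of forming an explicit inverse.
        dx = np.linalg.solve(Lam.T @ Lam, Lam.T @ L)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

For a linear problem the loop converges in one step; for the nonlinear collinearity residuals it repeats until d(dz) and dρ0 vanish.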
after each intersection-tangent distance dz corresponding to each target focal length is obtained, obtaining the fitting function between the focal length f and the intersection-tangent distance dz based on each target focal length and its corresponding intersection-tangent distance dz;
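The text does not fix a specific form for the fitting function between f and dz; a low-order polynomial fit over the calibrated (f, dz) pairs is one plausible sketch. All sample values below are purely illustrative, not measured data from the patent:

```python
import numpy as np

# Illustrative calibration results: target focal lengths f_i (mm) and the
# intersection-tangent distance d_z solved at each of them (mm).
f_samples = np.array([8.0, 12.0, 16.0, 25.0, 35.0])
dz_samples = np.array([1.9, 2.8, 3.7, 5.6, 7.9])

# Fit d_z = g(f) with a quadratic; any smooth form that tracks the
# samples would serve the same role.
dz_of_f = np.poly1d(np.polyfit(f_samples, dz_samples, deg=2))

# During measurement, a freshly calibrated focal length maps straight
# to its intersection-tangent distance.
dz_20 = float(dz_of_f(20.0))
```

This is the step that lets the measurement phase skip re-solving dz: calibrate once per focal-length sweep, then evaluate the fitted function.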
the performing, after acquiring a plurality of target images including the point to be measured by using the vision sensor in the vision system, of stereo field error correction on the image point corresponding to the point to be measured in each target image, and the expressing of each corrected image point vector in the world coordinate system specifically include:
acquiring a plurality of target images comprising points to be measured by using a vision sensor in a vision system;
determining, based on each target image, the image coordinates (formula omitted) of the image point corresponding to the point to be measured in each target image in the vision sensor coordinate system;
correcting the image coordinates of each image point to be measured by using the previously acquired image-side vertical-axis correction data W = [Wx, Wy]ᵀ to obtain the vertical-axis-corrected first coordinates (formula omitted) of each image point to be measured in the vision sensor coordinate system; correcting the coordinates of each image point to be measured by using the axial correction data a to obtain the corrected second coordinates (formula omitted) of each image point to be measured in the vision sensor coordinate system;
after expressing, in the world coordinate system, the preset-position vector from the origin of the world coordinate system to the origin of the image plane coordinate system as [0, ρ0, −f−dz]ᵀ, converting the first coordinates and the second coordinates in the vision sensor coordinate system into the world coordinate system to obtain the third coordinates (formula omitted) and the fourth coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured;
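A heavily hedged sketch of this correction-and-lift step. The patent's exact correction formulas are omitted from this text, so the sketch assumes the vertical-axis correction W is applied additively, the axial correction a as a scale factor, and the image plane spanned by the x and y axes with its origin at the preset vector [0, ρ0, −f−dz] named in the claim:

```python
import numpy as np

def correct_and_lift(uv, W, a, f, dz, rho0, R_preset_to_actual):
    """Assumed forms (not the patent's exact equations):
      - first coordinates  = image point + W  (vertical-axis correction)
      - second coordinates = first * a        (axial correction)
      - each is placed on the image plane at the preset vector and
        rotated into the actual carrier pose."""
    uv = np.asarray(uv, dtype=float)
    first = uv + np.asarray(W, dtype=float)   # vertical-axis-corrected point
    second = first * float(a)                 # axially corrected point
    preset = np.array([0.0, rho0, -f - dz])   # world origin -> image-plane origin
    lift = lambda p: R_preset_to_actual @ (np.array([p[0], p[1], 0.0]) + preset)
    return lift(first), lift(second)          # third and fourth coordinates
```

With the carrier at its preset position (identity rotation) the lift reduces to a pure offset by the preset vector, which matches the claim's description of that special case.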
The performing forward intersection measurement on the point to be measured based on the fitting function, the structural parameters of the vision system, and each image point to be measured expressed in the world coordinate system after the stereo field error correction to obtain coordinate values of the point to be measured in the world coordinate system specifically includes:
calibrating the focal length of the visual sensor, and determining the focal length f of the visual sensor;
determining, based on the predetermined fitting function between the focal length f and the intersection-tangent distance dz, the intersection-tangent distance dz corresponding to the focal length f;
based on the vector (formula omitted) from the origin OW of the world coordinate system to the intersection point Fp when the carrier is at the preset position, expressing the vector from the origin OW of the world coordinate system to the intersection point corresponding to the point to be measured when the carrier is located at the actual position as (formula omitted), wherein (formula omitted) is the rotation transformation matrix from the preset position of the carrier to the carrier at the actual position;
obtaining the three-point collinearity equation based on the coordinates (formula omitted) of the point to be measured Bn in the world coordinate system, the third coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured Bn, and the intersection point Fn corresponding to the point to be measured Bn:
performing Taylor expansion to obtain:
wherein the coefficients are given by the corresponding expressions (formulas omitted);
the error equation is expressed as:
wherein the constant term is (formula omitted);
solving the nth iteration correction as: χn = (ΛnᵀΛn)⁻¹ΛnᵀLn;
obtaining, through iterative operation according to the nth iteration correction χn, the first solution (formula omitted) of the point to be measured Bn corresponding to the third coordinates;
obtaining, in the same manner and based on the point to be measured Bn, the fourth coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured Bn, and the intersection point Fn corresponding to the point to be measured Bn, the second solution (formula omitted) of the point to be measured Bn corresponding to the fourth coordinates.
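Geometrically, the iterative solution above intersects the rays that pass through the intersection points Fn and the corrected image points. The least-squares point closest to a bundle of rays can be written in closed form; the sketch below is an illustrative formulation of that geometric core, not the patent's exact error equation:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares point minimising the summed squared distance to a
    set of rays. origins[i] is a ray origin (e.g. an intersection point
    F_n in world coordinates); directions[i] points along the ray
    (towards the corrected image point)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        # Projector onto the plane normal to the ray direction: the
        # residual it measures is the perpendicular offset from the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

Two or more non-parallel rays make A invertible; with exactly intersecting rays the result is their common point, otherwise the midpoint-like least-squares compromise.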
2. The forward intersection measurement method based on structural parameters of a vision system according to claim 1, wherein the vision system comprises two or more of the vision sensors;
correspondingly, the acquiring a plurality of target images including points to be measured by using the vision sensor in the vision system specifically includes:
synchronously acquiring a plurality of target images including the point to be measured by using the vision sensors, and synchronously processing, in parallel, the image data of the target images acquired by the vision sensors.
3. The forward intersection measurement method based on structural parameters of a vision system according to claim 1, wherein the vision system comprises two or more of the vision sensors; and the point to be measured is a dynamic point;
correspondingly, after the forward intersection measurement is performed on the point to be measured to obtain the coordinate value of the point to be measured in the world coordinate system, the method further includes:
obtaining the motion vector of the point to be measured based on the coordinates of the point to be measured in the world coordinate system independently determined by each vision sensor.
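A minimal sketch of this final step, assuming the per-sensor coordinates are fused by simple averaging (the text does not fix the fusion rule, so the averaging is an assumption):

```python
import numpy as np

def motion_vector(per_sensor_at_t0, per_sensor_at_t1):
    """Each row is one vision sensor's independently measured world
    coordinate of the dynamic point; the motion vector is the
    displacement of the fused (here: averaged) estimate between the
    two instants."""
    p0 = np.mean(np.asarray(per_sensor_at_t0, dtype=float), axis=0)
    p1 = np.mean(np.asarray(per_sensor_at_t1, dtype=float), axis=0)
    return p1 - p0
```

Dividing the displacement by the frame interval would give a velocity estimate, but the claim only asks for the motion vector itself.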
4. A forward intersection measurement system based on structural parameters of a vision system, comprising:
the coordinate expression module is used for respectively expressing a main target point vector when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target in the visual system relative to the image plane and the preset position of the carrier; wherein, the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a visual sensor in the visual system on a target through a main optical axis;
the function determining module is used for solving the intersection distance and the main optical axis rotation radius based on the focal length, the intersection point and the external orientation element in the structural parameters of the visual system, the gyro value of a gyro device in the visual system and the main target point vector when the carrier is located at the actual position in the world coordinate system, and determining a fitting function between the focal length and the intersection distance;
the error correction module is used for, after a plurality of target images including the point to be measured are acquired by using the vision sensor, performing stereo field error correction on the image point corresponding to the point to be measured in each target image, and expressing each corrected image point vector in the world coordinate system; in any two target images, the posture of the point to be measured relative to the vision system and the intersection point vector are different; the vision sensor is arranged on the carrier; the intersection point vector refers to the vector from the rotation center of the carrier to the intersection point;
the intersection measurement module is used for performing forward intersection measurement on the point to be measured on the basis of the intersection distance obtained through the fitting function, the main optical axis rotating radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to stereo field error correction, so as to obtain the coordinate value of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position; the vision system comprises one or more vision sensors;
the respectively expressing, in the world coordinate system and according to the posture of the target in the vision system relative to the image plane and the preset position of the carrier, of the main target point vector when the carrier is located at the actual position specifically includes:
calibrating the focal length and the principal point coordinates of the vision system;
obtaining the exterior orientation elements (formula omitted) between each target in any posture and the image plane, wherein m is the identifier of the target, m = 1, 2, 3, …; n is the identifier of the posture of the target, n = 1, 2, 3, …; the position of each target is predetermined, and the posture of each target when the carrier is located at the actual position is acquired by space resection (back intersection) or by a gyro module;
calculating the rotation transformation matrix (formula omitted) from the coordinate system of the target identified as m in posture n to the world coordinate system:
wherein (formula omitted) denotes the rotation transformation matrix from the coordinate system of the target in posture n to the coordinate system of the vision sensor; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the vision sensor to the coordinate system of the carrier at the actual position; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the carrier at the actual position corresponding to the target in posture n to the coordinate system of the gyro device at its initial position; (formula omitted) denotes the rotation transformation matrix from the coordinate system of the gyro device at its initial position to the world coordinate system; the carrier is a pan-tilt head or a mechanical arm, and the gyro device and the vision sensor are carried by the carrier; the vision sensor comprises a lens; the coordinate system of the carrier at the preset position coincides with the world coordinate system; the actual position of the carrier is determined according to the gyro value of the gyro device; the coordinate system of the gyro device at the initial position is determined according to the navigation module built into the gyro device;
acquiring, according to the preset position of the carrier, the rotation transformation matrix (formula omitted) from the preset position of the carrier corresponding to the target in posture n to the actual position of the carrier:
wherein (formula omitted) denotes the rotation transformation matrix from the preset position of the carrier corresponding to the target identified as m in posture n to the carrier at the actual position; (formula omitted) denotes the rotation transformation matrix from the initial position of the gyro device to the preset position of the carrier; (formula omitted) denotes the rotation transformation matrix from the initial position of the gyro device corresponding to the target in posture n to the carrier at the actual position;
when the carrier is located at the preset position, the vector from the origin OW of the world coordinate system to the tangent point Pp between the main optical axis and the main-optical-axis rotation sphere is expressed in the world coordinate system as (formula omitted), and the vector from the origin OW of the world coordinate system to the intersection point Fp corresponding to the tangent point Pp is expressed as (formula omitted), where ρ0 is the main optical axis rotation radius and dz is the intersection-tangent distance; the intersection-tangent distance is the distance between a tangent point and its corresponding intersection point; the main-optical-axis rotation sphere is determined based on the tangent common-sphere intersection;
based on the two vectors above, the vector from the origin OW of the world coordinate system to the main target point Ap when the carrier is located at the preset position is expressed in the world coordinate system as (formula omitted);
for the target identified as m, the rotation matrix from the preset-position vector (formula omitted) in the world coordinate system to the vector (formula omitted) from the origin OW of the world coordinate system to the main target point Am when the carrier is located at the actual position is:
the translation vector between the target coordinate system and the world coordinate system is expressed as:
wherein (formula omitted) denotes the components of the vector above; Am is the origin Otm of the coordinate system of target m; the intersection point Fm corresponding to the main target point Am when the target identified as m is in a certain posture lies on the same main optical axis as the main target point Am; AmFm denotes the distance between the intersection point Fm and its corresponding main target point Am; (formula omitted) denotes the translation amount between the world coordinate system and the target coordinate system;
the solving of the intersection-tangent distance and the main optical axis rotation radius based on the focal length, the intersection point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position, and the determining of the fitting function between the focal length and the intersection-tangent distance specifically include:
for each target, the vision sensor acquires a plurality of sample images of the target in any posture at any target focal length; in any two sample images of the target, the distance and the posture of the target relative to the vision sensor are different;
solving, based on the sample images of each posture of each target acquired at each target focal length, each intersection-tangent distance dz corresponding to each target focal length, which specifically includes:
for the target identified as m, based on the sample images of posture n acquired at any target focal length, acquiring the coordinates (formula omitted), in the coordinate system of the target identified as m, of the main target point An on the target of posture n when the carrier is located at the actual position;
based on the coordinates of the main target point An in the coordinate system of the target identified as m, expressing the coordinates (formula omitted) of the main target point An in the world coordinate system;
acquiring, based on each sample image of the target identified as m in posture n acquired at the target focal length, the coordinates (formula omitted), in the coordinate system of the target, of the intersection point Fn corresponding to the tangent point between the main optical axis and the main-optical-axis rotation sphere when the carrier is located at the actual position;
based on the coordinates of the intersection point Fn in the coordinate system of the target, expressing the coordinates (formula omitted) of the intersection point Fn in the world coordinate system;
expressing, in the world coordinate system and based on the sample images of the target in posture n acquired at the target focal length, the tangent point Pn (formula omitted) between the main optical axis and the main-optical-axis rotation sphere corresponding to the main target point An and the intersection point Fn;
wherein the main target point An, the intersection point Fn and the tangent point Pn are all located on the same main optical axis;
for the target identified as m in posture n, obtaining the tangent three-point collinearity equation based on the coordinates (formula omitted) of the main target point An in the world coordinate system, the coordinates (formula omitted) of the intersection point Fn in the world coordinate system, and the coordinates (formula omitted) of the tangent point Pn:
wherein λ denotes a constant;
substituting the coordinates of the main target point An, the intersection point Fn and the tangent point Pn, expressed in terms of the main optical axis rotation radius ρ0 and the intersection-tangent distance dz, into the tangent three-point collinearity equation to obtain:
wherein the linearized coefficients are given by the corresponding expressions (formulas omitted);
the error equation is expressed as:
solving the nth iteration correction as Xn = (ΛnᵀΛn)⁻¹ΛnᵀLn;
resolving Xn = [d(dz), dρ0]ᵀ according to the correction term of the error equation, and obtaining through iterative operation the solutions of the intersection-tangent distance dz and the main optical axis rotation radius ρ0 corresponding to the target focal length;
after each intersection-tangent distance dz corresponding to each target focal length is obtained, obtaining the fitting function between the focal length f and the intersection-tangent distance dz based on each target focal length and its corresponding intersection-tangent distance dz;
the performing, after acquiring a plurality of target images including the point to be measured by using the vision sensor in the vision system, of stereo field error correction on the image point corresponding to the point to be measured in each target image, and the expressing of each corrected image point vector in the world coordinate system specifically include:
acquiring a plurality of target images comprising points to be measured by using a vision sensor in a vision system;
determining, based on each target image, the image coordinates (formula omitted) of the image point corresponding to the point to be measured in each target image in the vision sensor coordinate system;
correcting the image coordinates of each image point to be measured by using the previously acquired image-side vertical-axis correction data W = [Wx, Wy]ᵀ to obtain the vertical-axis-corrected first coordinates (formula omitted) of each image point to be measured in the vision sensor coordinate system; correcting the coordinates of each image point to be measured by using the axial correction data a to obtain the corrected second coordinates (formula omitted) of each image point to be measured in the vision sensor coordinate system;
after expressing, in the world coordinate system, the preset-position vector from the origin of the world coordinate system to the origin of the image plane coordinate system as [0, ρ0, −f−dz]ᵀ, converting the first coordinates and the second coordinates in the vision sensor coordinate system into the world coordinate system to obtain the third coordinates (formula omitted) and the fourth coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured;
The performing forward intersection measurement on the point to be measured based on the fitting function, the structural parameters of the vision system, and each image point to be measured expressed in the world coordinate system after the stereo field error correction to obtain coordinate values of the point to be measured in the world coordinate system specifically includes:
calibrating the focal length of the visual sensor, and determining the focal length f of the visual sensor;
determining, based on the predetermined fitting function between the focal length f and the intersection-tangent distance dz, the intersection-tangent distance dz corresponding to the focal length f;
based on the vector (formula omitted) from the origin OW of the world coordinate system to the intersection point Fp when the carrier is at the preset position, expressing the vector from the origin OW of the world coordinate system to the intersection point corresponding to the point to be measured when the carrier is located at the actual position as (formula omitted), wherein (formula omitted) is the rotation transformation matrix from the preset position of the carrier to the carrier at the actual position;
obtaining the three-point collinearity equation based on the coordinates (formula omitted) of the point to be measured Bn in the world coordinate system, the third coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured Bn, and the intersection point Fn corresponding to the point to be measured Bn:
performing Taylor expansion to obtain:
wherein the coefficients are given by the corresponding expressions (formulas omitted);
the error equation is expressed as:
wherein the constant term is (formula omitted);
solving the nth iteration correction as: χn = (ΛnᵀΛn)⁻¹ΛnᵀLn;
obtaining, through iterative operation according to the nth iteration correction χn, the first solution (formula omitted) of the point to be measured Bn corresponding to the third coordinates;
obtaining, in the same manner and based on the point to be measured Bn, the fourth coordinates (formula omitted), in the world coordinate system, of each image point corresponding to the point to be measured Bn, and the intersection point Fn corresponding to the point to be measured Bn, the second solution (formula omitted) of the point to be measured Bn corresponding to the fourth coordinates.
5. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the forward intersection measurement method based on structural parameters of a vision system according to any one of claims 1 to 3.
6. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the forward intersection measurement method based on structural parameters of a vision system according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110604995.2A CN113405532B (en) | 2021-05-31 | 2021-05-31 | Forward intersection measuring method and system based on structural parameters of vision system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113405532A CN113405532A (en) | 2021-09-17 |
CN113405532B true CN113405532B (en) | 2022-05-06 |
Family
ID=77675517
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110604995.2A Active CN113405532B (en) | 2021-05-31 | 2021-05-31 | Forward intersection measuring method and system based on structural parameters of vision system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113405532B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114516051B (en) * | 2022-03-18 | 2023-05-30 | 中国农业大学 | Front intersection method and system for three or more degrees of freedom robot vision measurement |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107449419A (en) * | 2017-07-21 | 2017-12-08 | 中国人民解放军国防科学技术大学 | The Full Parameterized vision measuring method of the continuous kinematic parameter of body target |
CN107564061A (en) * | 2017-08-11 | 2018-01-09 | 浙江大学 | A kind of binocular vision speedometer based on image gradient combined optimization calculates method |
CN108489395A (en) * | 2018-04-27 | 2018-09-04 | 中国农业大学 | Vision measurement system structural parameters calibration and affine coordinate system construction method and system |
CN110345921A (en) * | 2019-06-12 | 2019-10-18 | 中国农业大学 | Stereoscopic fields of view vision measurement and vertical axial aberration and axial aberration bearing calibration and system |
CN111829452A (en) * | 2020-06-04 | 2020-10-27 | 中国人民解放军63921部队 | Towed stereo measurement unit, system and space target measurement method |
Also Published As
Publication number | Publication date |
---|---|
CN113405532A (en) | 2021-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110296691B (en) | IMU calibration-fused binocular stereo vision measurement method and system | |
CN111156998B (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
WO2019205299A1 (en) | Vision measurement system structure parameter calibration and affine coordinate system construction method and system | |
CN109544630B (en) | Pose information determination method and device and visual point cloud construction method and device | |
JP2018179981A (en) | Camera calibration method, camera calibration program and camera calibration device | |
CN109752003B (en) | Robot vision inertia point-line characteristic positioning method and device | |
CN110728715A (en) | Camera angle self-adaptive adjusting method of intelligent inspection robot | |
CN103020952A (en) | Information processing apparatus and information processing method | |
CN113850126A (en) | Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle | |
JP2008070267A (en) | Method for measuring position and attitude, and device | |
CN111123242B (en) | Combined calibration method based on laser radar and camera and computer readable storage medium | |
CN111415387A (en) | Camera pose determining method and device, electronic equipment and storage medium | |
CN110842901A (en) | Robot hand-eye calibration method and device based on novel three-dimensional calibration block | |
CN111791235B (en) | Robot multi-camera visual inertia point-line characteristic positioning method and device | |
CN109272555B (en) | External parameter obtaining and calibrating method for RGB-D camera | |
CN116433737A (en) | Method and device for registering laser radar point cloud and image and intelligent terminal | |
CN113744340A (en) | Calibrating cameras with non-central camera models of axial viewpoint offset and computing point projections | |
JPH0680404B2 (en) | Camera position and orientation calibration method | |
Ding et al. | A robust detection method of control points for calibration and measurement with defocused images | |
JP2023505891A (en) | Methods for measuring environmental topography | |
CN111915685A (en) | Zoom camera calibration method | |
CN113405532B (en) | Forward intersection measuring method and system based on structural parameters of vision system | |
CN111915681B (en) | External parameter calibration method, device, storage medium and equipment for multi-group 3D camera group | |
CN113436267A (en) | Visual inertial navigation calibration method and device, computer equipment and storage medium | |
CN114516051B (en) | Front intersection method and system for three or more degrees of freedom robot vision measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||