CN115790366A - Visual positioning system and method for large array surface splicing mechanism - Google Patents

Visual positioning system and method for large array surface splicing mechanism

Info

Publication number: CN115790366A
Authority: CN (China)
Prior art keywords: array surface, real-time, pose, splicing, splicing mechanism
Legal status: Pending
Application number: CN202210779455.2A
Other languages: Chinese (zh)
Inventors: 胡长明, 李靖轩, 李喆, 刘敏, 冯展鹰, 娄华威
Current Assignee: CETC 14 Research Institute
Original Assignee: CETC 14 Research Institute
Application filed by CETC 14 Research Institute
Priority to CN202210779455.2A


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a visual positioning system and method for a large array surface splicing mechanism, and belongs to the technical field of intelligent manufacturing. The system is used for splicing the array surface with a docking frame according to a set pose, and comprises an array surface splicing mechanism, a visual target, an image acquisition module and a control module. The array surface is firmly connected to the array surface splicing mechanism, and the visual target is mounted on the end face of the docking frame facing the array surface; the image acquisition module acquires image information of the visual target. The control module obtains the real-time relative pose of the array surface and the docking frame from the image information of the visual target, calculates the real-time pose deviation between the array surface and the docking frame from the real-time relative pose and the set primary teaching reference pose, sends instructions to the array surface splicing mechanism, and adjusts the relative pose of the array surface and the docking frame so that the array surface is spliced with the docking frame according to the set pose. The invention can complete the splicing action automatically and improve the splicing efficiency and repeated positioning accuracy of the array surface.

Description

Visual positioning system and method for large array surface splicing mechanism
Technical Field
The invention belongs to the technical field of intelligent manufacturing, and particularly relates to a visual positioning system and method for a large array surface splicing mechanism.
Background
In some large-scale electronic equipment, a large array surface must be spliced before it can be put into normal use. The splicing efficiency of the large array surface strongly affects how quickly the equipment can be erected, and the splicing accuracy of the array surface has an important influence on the performance of the equipment, so positioning of the large array surface is a key technical step when using a splicing mechanism.
At present, the market mainly relies on manual measurement to position a large array surface: a measurer climbs onto the docking frame and the array surface to be spliced, measures the attitude deviations in the pitch, roll and yaw directions between the array surface and the docking frame with instruments such as a level and a laser angle gauge, and measures the position deviations in the horizontal, vertical and depth directions with instruments such as a laser rangefinder, gauge blocks and feeler gauges. The attitude and position adjustments required to splice the array surface to the docking frame are calculated from the measurement data, an installer adjusts positioning studs on the tooling to realize the attitude adjustment, and measurement and adjustment must be repeated until the measurement results meet the accuracy requirement for array surface splicing. This approach is inefficient, requires operators on site to cooperate, yields poor measurement consistency, and increases labor cost.
The existing automatic array surface splicing and positioning technology mainly uses visual measurement. The visual measurement system must be erected manually as a separate step before use, which lowers the positioning and splicing efficiency and reduces the degree of splicing automation; its angle must be adjusted so that the field of view covers the whole array surface, which places high demands on the skill of the staff erecting the system and adds uncertainty to system operation; and its target is mounted on the array surface, which, owing to the size and material of a large array surface, is prone to large elastic deformation, degrading the positioning accuracy and giving poor repeatability.
Disclosure of Invention
The invention aims to provide a visual positioning system and method for a large array surface splicing mechanism that can complete the splicing action automatically and improve the splicing efficiency and repeated positioning accuracy of the array surface.
Specifically, in one aspect, the invention provides a visual positioning system for a large array surface splicing mechanism, used for splicing an array surface with a docking frame according to a set pose, comprising an array surface splicing mechanism, a visual target, an image acquisition module and a control module;
the array surface splicing mechanism comprises a static platform which is static relative to the working space and a movable platform which moves relative to the working space, and the array surface is stably connected to the movable platform; the array surface splicing mechanism receives the instruction of the control module and adjusts the relative pose of the array surface and the butt joint frame;
the visual target is arranged on the end face, facing the array face, of the butt joint frame;
the image acquisition module is arranged on the array surface splicing mechanism and is used for acquiring the image information of the visual target;
the control module receives the image information of the visual target acquired by the image acquisition module, processes the image, obtains the real-time relative pose of the array surface and the docking frame according to the image information of the visual target, calculates the real-time pose deviation between the array surface and the docking frame according to the real-time relative pose, sends an instruction to the array surface splicing mechanism according to the real-time pose deviation, and adjusts the relative pose of the array surface and the docking frame by the array surface splicing mechanism so that the real-time pose deviation is within a set allowable range, thereby splicing the array surface and the docking frame according to the set pose.
Further, the system may include a plurality of visual targets. The number of image acquisition modules is the same as the number of visual targets, and each image acquisition module acquires image information of its corresponding visual target. Each image acquisition module is provided with a control module; the image information of each visual target is processed in the corresponding control module, the control modules exchange visual target data, and one control module dynamically sends instructions to the splicing mechanism according to the real-time pose deviation.
In another aspect, the invention also provides a visual positioning method for the large array surface splicing mechanism, implemented with the above visual positioning system, comprising the following steps:
calculating a real-time relative pose: the image acquisition module acquires the image information of the visual target, the control module receives the image information of the visual target to perform image processing, and the real-time relative pose of the array surface and the butt joint frame is calculated according to the image information of the visual target
Figure BDA0003728492010000021
Calculating the real-time pose deviation: according to the real-time relative pose
Figure BDA0003728492010000022
Calculating real-time pose deviation between the array surface and the butt joint frame
Figure BDA0003728492010000023
Judging whether the real-time pose deviation is within a set allowable range: judging whether the real-time pose deviation is within an allowable range, and if the real-time pose deviation is within the set allowable range, ending the process; if the real-time pose deviation is not within the allowable range, sending an instruction to the array surface splicing mechanism according to the real-time pose deviation, adjusting the relative pose of the array surface and the butt joint frame by the array surface splicing mechanism according to the translation motion amount and the rotation motion amount in the instruction, and repeating the steps of calculating the real-time relative pose, calculating the real-time pose deviation and judging whether the real-time pose deviation is within the set allowable range until the end.
Further, calculating the real-time relative pose $T_{rt}$ of the array surface and the docking frame from the image information of the visual target specifically comprises:

obtaining the center point coordinates $q_1$, $q_2$, $q_3$, $q_4$ of the first, second, third and fourth feature patterns on the visual target, each comprising the x, y, z spatial information of the pattern center point in the static platform reference frame; the control module then calculates the real-time relative pose $T_{rt}$, composed of an attitude part $R_{rt}$ and a position part $p_{rt}$, according to the following formulas:

$$\overrightarrow{q_4q_2} = q_2 - q_4, \qquad \overrightarrow{q_4q_3} = q_3 - q_4 \tag{4}$$

$$R_{rt} = \left[\; \frac{\overrightarrow{q_4q_3}}{\left|\overrightarrow{q_4q_3}\right|} \;,\; \frac{\overrightarrow{q_4q_2}}{\left|\overrightarrow{q_4q_2}\right|} \times \frac{\overrightarrow{q_4q_3}}{\left|\overrightarrow{q_4q_3}\right|} \;,\; \frac{\overrightarrow{q_4q_2}}{\left|\overrightarrow{q_4q_2}\right|} \;\right] \tag{5}$$

$$p_{rt} = q_4 = (x_4,\; y_4,\; z_4)^{T} \tag{6}$$

where $R_{rt}$ is the attitude change of the real-time pose relative to the static platform coordinate system and is a 3-row, 3-column matrix; $\overrightarrow{q_4q_2}$ denotes the spatial vector pointing from $q_4$ to $q_2$; $\left|\overrightarrow{q_4q_2}\right|$ denotes taking the modulus of the vector; and $\times$ denotes the vector cross product. $p_{rt}$ is the position change of the real-time pose relative to the static platform coordinate system, where $q_4$, i.e. $x_4$, $y_4$, $z_4$, are the x, y, z translation amounts respectively.
Further, calculating the real-time pose deviation $\Delta T$ between the array surface and the docking frame from the real-time relative pose specifically comprises:

$$\Delta T = T_{ref}^{-1}\, T_{rt}$$

where the real-time relative pose $T_{rt}$ is the homogeneous transformation matrix assembled from $R_{rt}$ and $p_{rt}$, and the real-time pose deviation $\Delta T$, also expressed as a homogeneous transformation matrix, is obtained by multiplying the inverse matrix of the reference pose $T_{ref}$ with the real-time relative pose $T_{rt}$.
Further, the translational motion amount and rotational motion amount are calculated as follows:

$$\Delta T = \begin{bmatrix} \Delta R & \Delta p \\ 0_{1\times 3} & 1 \end{bmatrix}$$

$$(Rx,\; Ry,\; Rz) = \mathrm{euler}(\Delta R)$$

$$(x,\; y,\; z)^{T} = \Delta p$$

where $\Delta R$ gives the rotational motion amount of the real-time pose deviation $\Delta T$ relative to the reference coordinate system, expressed as (Rx, Ry, Rz) with $\mathrm{euler}(\cdot)$ denoting extraction of the rotation angles from the rotation matrix: Rx is the amount of rotation about the x-axis, Ry is the amount of rotation about the y-axis, and Rz is the amount of rotation about the z-axis. $\Delta p$ gives the translational motion amount of $\Delta T$ relative to the reference coordinate system, expressed as (x, y, z): x is the displacement along the x-axis, y is the displacement along the y-axis, and z is the displacement along the z-axis.
Further, before the step of calculating the real-time relative pose, the method further comprises a primary teaching step, specifically:

Step 2.1, detecting the pose deviation of the array surface relative to the docking frame with instruments; the pose deviation comprises the translational deviations x, y, z and the rotational deviations Rx, Ry, Rz required for the array surface to reach the splicing-completed state. Taking the static platform coordinate system as the reference direction, the translational deviations x, y, z are obtained by measuring, in the x, y and z directions respectively, the projected length in that direction of the line connecting the same point of the array surface in its current pose and in its pose on the mounting frame when splicing is completed. Taking the static platform coordinate system as the reference direction, the rotational deviations are obtained by measuring, in the Rx, Ry and Rz directions, the current rotation values of the array surface and the corresponding values of the mounting frame when splicing is completed, and subtracting them pairwise;

Step 2.2, judging the pose deviation: if the pose deviation is outside the set allowable range, splicing of the array surface and the docking frame is deemed incomplete, the control module outputs the translational motion values x, y, z and the rotational motion values Rx, Ry, Rz given by the pose deviation to the splicing mechanism movable platform, the movable platform moves according to those values to adjust the attitude of the array surface, and steps 2.1 and 2.2 are repeated; if the pose deviation is within the set allowable range, splicing of the array surface and the docking frame is deemed complete, and step 2.3 is executed;

Step 2.3, recording the pose deviation between the current array surface and the docking frame as the primary teaching reference pose, completing primary teaching. In the step of calculating the real-time relative pose, the primary teaching reference pose is adopted as the initial value of the real-time relative pose.
Further, when a single feature point among $q_1$, $q_2$, $q_3$, $q_4$ is missing, $\overrightarrow{q_3q_1}$ is substituted for $\overrightarrow{q_4q_2}$ in formula (5), $\overrightarrow{q_2q_1}$ is substituted for $\overrightarrow{q_4q_3}$ in formula (5), and $q_2 + \overrightarrow{q_1q_3}$ or $q_3 + \overrightarrow{q_1q_2}$ is substituted for $q_4$ in formula (6), as applicable.
Further, when the center points of all the feature patterns are identified, the mean of $\overrightarrow{q_4q_2}$ and $\overrightarrow{q_3q_1}$ is substituted for $\overrightarrow{q_4q_2}$ in formula (5), and the mean of $\overrightarrow{q_4q_3}$ and $\overrightarrow{q_2q_1}$ is substituted for $\overrightarrow{q_4q_3}$ in formula (5).
Further, the visual positioning method of the large array surface splicing mechanism also comprises a combined preprocessing method adopted while the image acquisition module acquires the image information of the visual target, with the following combined preprocessing steps:

S102, Gaussian blur is applied to the original picture to filter out its high-frequency information;

S103, edge detection: using the gradient information of the grayscale image, portions with large gray-level gradients are separated and local feature edges are detected;

S104, morphological transformation is performed;

S105, contour detection is performed to obtain the contours of the feature points.
The visual positioning system and method of the large array surface splicing mechanism have the following beneficial effects:
the visual target is arranged on the butt joint frame with high rigidity, and the elastic deformation of the butt joint frame is far smaller than that of the array surface, so that the reduction of the identification precision possibly caused by using the visual target on the array surface is avoided, and the material and structure cost of the array surface is reduced; by using the small visual target and the image acquisition module (such as a wide-angle camera set) which are arranged on the end surface of the butt joint frame, the target can be ensured to be always present in the coverage range of the view field of the image acquisition module, and the requirement on the relative position of the target and the image acquisition module is reduced; by fixedly connecting the image acquisition module with the splicing mechanism, the independent erection step of a visual positioning system is omitted, and the requirement on manual operation is reduced.
By combining the image acquisition modules with the visual target group, visual positioning can be achieved even when only part of the targets are recognized by the image acquisition modules, improving the fault tolerance and anti-interference capability of the visual positioning system; the recognition results of the targets can also be cross-checked against each other, improving recognition accuracy.
The visual positioning method acquires the pose of the array surface in real time and can obtain the position and attitude of the array surface faster than manual measurement, thereby improving the splicing efficiency of the array surface.
To improve the reliability of visual target image recognition and reduce the influence of the complex illumination conditions and imaging environment of engineering scenes, the invention improves robustness in two dimensions, hardware and software. On the hardware side, two or more supplementary light sources are arranged on the upper and lower sides of the visual target respectively, ensuring uniform illumination intensity at the edge and center of the visual target. On the software side, the image acquisition module is set to acquire the visual target image with a set fixed exposure time, chosen so that, under light from a single supplementary light source, the white part of the visual target reaches a first threshold (for example, 90%) of the maximum exposure value and the black part of the visual target reaches a second threshold (for example, 30%) of the maximum exposure value. The supplementary light hardware design and the fixed exposure design guarantee accurate exposure of the visual target image, and the high-power supplementary lighting and uniform illumination design greatly reduce the influence of ambient illumination on target pattern capture. In recognizing the visual target group images, the large array surface splicing visual positioning method adopts a combined preprocessing method that reduces the influence of ambient light on recognition, obtains the contours of the feature points with a high success rate even under ambient light interference, and finally obtains the center point coordinates of all or some of the patterns. Compared with schemes using self-luminous targets, the supplementary lighting scheme effectively improves the illumination uniformity of the target while avoiding the glare and ghost images that self-luminous targets produce on the lens, reducing the impact on target imaging quality.
According to the visual positioning system and method for the large array surface splicing mechanism, the camera group mounted on the array surface splicing mechanism photographs the visual target group on the docking frame, computer vision recognition obtains the pose deviation between the array surface and the docking frame, and the splicing mechanism is guided to adjust the relative pose of the array surface and the docking frame in real time, completing the splicing action automatically. No dedicated operator is needed to splice the array surface, which reduces labor cost and eliminates the danger of manual operation. Positioning is considered complete once the array surface splicing accuracy reaches the set allowable range, which improves the repeated positioning accuracy of array surface splicing. Compared with schemes designed for ideal illumination conditions, the visual positioning system and method address the complex illumination conditions and imaging environment of engineering scenes in both software and hardware, reducing environmental interference with imaging and with feature pattern contour recognition through the target light sources, the imaging parameter configuration of the image acquisition module and the combined preprocessing flow.
Drawings
Fig. 1 is a schematic system diagram of embodiment 1 of the present invention.
FIG. 2 is a diagram of the physical and data relationship of visual positioning in embodiment 1 of the present invention.
FIG. 3 is a flow chart of the visual positioning of the present invention.
FIG. 4 is a flow chart of the primary vision teaching of the present invention.
FIG. 5 is a schematic view of a target pattern of the present invention.
FIG. 6 is a schematic view of the target light supplement of the present invention.
Fig. 7 is a flow chart of visual target group image identification of the present invention.
Fig. 8 is a schematic diagram of an intermediate process of target image recognition preprocessing under the influence of ambient light, where (1) is an original picture before combination preprocessing, (2) is a result after gaussian blur processing is performed on (1), (3) is a result after edge detection is performed on (2), (4) is a result after morphological transformation is performed on (3), and (5) is a result after contour detection is performed on (4).
Fig. 9 is a system schematic diagram of embodiment 2 of the present invention.
FIG. 10 is a diagram of the physical and data relationship of visual positioning in accordance with embodiment 2 of the present invention.
Reference numerals are as follows: 1 - array surface splicing mechanism, 101 - splicing mechanism movable platform, 102 - splicing mechanism static platform, 2 - docking frame, 3 - visual target, 301 - first visual target, 302 - second visual target, 4 - array surface, 5 - image acquisition module, 501 - first image acquisition module, 502 - second image acquisition module, 6 - control module, 7 - first feature pattern, 8 - second feature pattern, 9 - third feature pattern, 10 - fourth feature pattern, 11 - upper feature point, 12 - lower feature point.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings in conjunction with embodiments.
Example 1:
one embodiment of the invention is a vision positioning system of a large array surface splicing mechanism, which is used for splicing an array surface with a butt joint frame according to a set pose. The wavefront is the object of wavefront splicing; and the butt joint frame is a target for splicing the front surface and is used for a mechanical structural part spliced with the front surface. The hardware components of one embodiment of the invention are shown in fig. 1. The method mainly comprises the following steps:
the array surface splicing mechanism 1 is used for adjusting the relative pose of the array surface 4 and the butt joint frame 2 and comprises a splicing mechanism movable platform 101 and a splicing mechanism static platform 102. The splicing mechanism static platform 102 and the working space are relatively static; the splicing mechanism moves the platform 101 and the working space relatively. The front surface 4 is firmly attached to the movable stage 101. The array surface splicing mechanism 1 receives the instruction of the control module 6 and adjusts the relative pose of the array surface 4 and the butt joint frame 2.
The visual target 3 is mounted on the end face of the docking frame 2 facing the array surface 4, and the image acquisition module 5 acquires image information of the visual target 3 so as to identify the spatial position of the docking frame 2. The docking frame 2 adopts a high-rigidity design relative to the array surface 4, which reduces elastic deformation and improves measurement accuracy while lowering the rigidity requirement on the array surface 4 and saving material cost. Moreover, only one set of visual targets needs to be installed, on the docking frame 2; compared with installing visual targets on every array surface, this guarantees consistent target positions when multiple array surfaces are spliced and removes the step of unifying the relative positions of targets and array surfaces across different array surfaces.
An image acquisition module 5, for example a wide-angle camera, is mounted on the array surface splicing mechanism 1. Preferably, the image acquisition module 5 is mounted on the splicing mechanism movable platform 101 at a position close to the docking frame 2. The image acquisition module 5 is used to acquire image information of the visual target 3. The length direction of the array surface 4 is parallel to the image acquisition module 5, and the width direction is parallel to the optical axis of the image acquisition module 5.
The control module 6 receives the image information of the visual target 3 acquired by the image acquisition module 5 and performs image processing. Using the visual positioning method of the large array surface splicing mechanism of this embodiment, the real-time relative pose of the array surface and the docking frame is obtained from the image information of the visual target, the real-time pose deviation between the array surface 4 and the docking frame 2 is calculated from the real-time relative pose and the set primary teaching reference pose, and instructions are sent to the array surface splicing mechanism 1 according to the real-time pose deviation; the array surface splicing mechanism 1 adjusts the relative pose of the array surface 4 and the docking frame 2 so that the real-time pose deviation falls within the set allowable range, whereby the array surface is spliced with the docking frame according to the set pose.
The control module 6 automatically obtains the pose deviation between the array surface 4 and the docking frame 2, performs large array surface splicing visual positioning, and guides the splicing mechanism (the splicing mechanism movable platform 101 and the splicing mechanism static platform 102) to realize the splicing action; no dedicated operator is needed to complete the array surface splicing, reducing labor cost. Compared with prior schemes that erect a visual measurement system separately on an independent mechanical structure, the visual positioning system of the large array surface splicing mechanism integrates the image acquisition module with the splicing mechanism so that they move synchronously, reducing the erection steps and improving erection efficiency.
The relationship between the physical layer and the data layer of the large array surface splicing visual positioning system of this embodiment is shown in fig. 2.
Physical layer:
the array surface 4 is mechanically and fixedly connected with a splicing mechanism movable platform 101; the splicing mechanism movable platform 101 and the splicing mechanism static platform 102 are controllably moved, and the position and the posture of the splicing mechanism movable platform 101 are adjusted; the splicing mechanism static platform 102 and the working space are relatively static; the image acquisition module 5 is mechanically and fixedly connected with the splicing mechanism movable platform 101; the visual target 3 is mechanically and fixedly connected with the butt joint frame 2.
Data layer:
the visual target 3 is imaged in the image acquisition module 5; the image acquisition module 5 transmits the wavefront pose information to the splicing mechanism moving platform 101, so as to guide the splicing mechanism moving platform 101 to move and realize the wavefront splicing visual positioning.
The flow of the large array surface splicing visual positioning method is shown in fig. 3.
(Optional) Judging whether the system has undergone primary teaching:

The control module 6 checks the working state of the system and judges whether primary teaching has been performed; if not, the primary teaching step is executed; if primary teaching has been performed, the method proceeds to the step of calculating the real-time relative pose.
Whether the system has undergone primary teaching is judged by checking its working state: if the system is in the positioning state and the primary teaching reference pose data in the system are not null, the system is judged to have undergone primary teaching. The primary teaching reference pose data are x, y, z, Rx, Ry and Rz, which respectively represent the x-direction translation (in millimeters), y-direction translation (in millimeters), z-direction translation (in millimeters), x-direction rotation (in radians), y-direction rotation (in radians) and z-direction rotation (in radians) of the array surface target pose relative to the current pose in the static platform reference frame. The static platform reference frame is defined as follows: the origin is the geometric center point of the upper surface of the splicing mechanism static platform 102; the x-axis is parallel to the initial width direction of the array surface 4, with the positive direction pointing from the origin to the mounting side of the image acquisition module 5; the y-axis is parallel to the initial length direction of the array surface 4, with the positive direction pointing from right to left as seen from the image acquisition module toward the target; the z-axis is parallel to the initial height direction of the array surface 4, perpendicular to the array surface and pointing away from the splicing mechanism static platform 102; and the x-, y- and z-axes follow the Cartesian right-hand rule. The primary teaching reference pose data can equivalently be expressed as a homogeneous transformation matrix $T_{ref}$, which describes the position and attitude of the reference in the static platform coordinate system. The attitude change of the reference in the static platform coordinate system is denoted $R_{ref}$, and the position change is denoted $p_{ref}$. The primary teaching reference pose data and $R_{ref}$, $p_{ref}$ satisfy the following conversion relationships:

$$T_{ref} = \begin{bmatrix} R_{ref} & p_{ref} \\ 0_{1\times 3} & 1 \end{bmatrix}$$

$$R_{ref} = R_z(Rz)\, R_y(Ry)\, R_x(Rx)$$

$$p_{ref} = (x,\; y,\; z)^{T}$$

where $R_x(\cdot)$, $R_y(\cdot)$ and $R_z(\cdot)$ are the elementary rotation matrices about the respective axes, Rx is the amount of rotation about the x-axis, Ry is the amount of rotation about the y-axis, Rz is the amount of rotation about the z-axis, x is the displacement along the x-axis, y is the displacement along the y-axis, and z is the displacement along the z-axis.
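By way of illustration only, the following minimal numpy sketch expresses these conversion relationships; the Z-Y-X composition order for the rotations is an assumption, since the rotation order is not stated here, and all function names are illustrative.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def reference_pose(x, y, z, Rx, Ry, Rz):
    """Build the homogeneous reference pose T_ref from the primary
    teaching data (translations in mm, rotations in rad)."""
    T = np.eye(4)
    T[:3, :3] = rot_z(Rz) @ rot_y(Ry) @ rot_x(Rx)  # attitude part R_ref
    T[:3, 3] = (x, y, z)                           # position part p_ref
    return T
```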
(Optional) Executing the primary teaching step:

The primary teaching reference pose $T_{ref}$ can be obtained either by manual entry into the visual positioning system or by executing the primary teaching step.
Preferably, in another embodiment, a primary teaching step is further included before the step of calculating the real-time relative pose; as shown in fig. 4, the primary teaching step comprises the following steps:
and 2.1, detecting the pose deviation of the array surface 4 relative to the butt joint frame 2 by using an instrument.
The pose deviation comprises the translational deviations x, y, z and the rotational deviations Rx, Ry, Rz required for the array surface 4 to reach the splicing-completed state. The translational deviations are measured with a tape measure, gauge blocks and vernier calipers: taking the static platform coordinate system as the reference direction, the projected length in the x, y and z directions of the line connecting the same point of the array surface 4 in its current pose and in its pose on the mounted frame when splicing is completed is measured in each direction. The rotational deviations are measured with a level and a gyroscope: taking the static platform coordinate system as the reference direction, the current Rx, Ry and Rz rotation values of the array surface 4 and the corresponding values of the mounting frame when splicing is completed are measured in the three directions, and the deviations are obtained by pairwise subtraction.
Step 2.2, judging the pose deviation: if it is outside the allowable range, splicing of the array surface 4 and the docking frame 2 is deemed incomplete, and steps 2.1 and 2.2 are repeated; if it is within the allowable range, splicing of the array surface 4 and the docking frame 2 is deemed complete, and step 2.3 is executed.
The allowable range is given by the physical structure of the array surface and the docking frame, i.e. the tolerance range of the mechanical butt-joint structure between the array surface 4 and the docking frame 2 must not be exceeded, so as to meet the splicing accuracy requirement. In the process of adjusting the attitude of the array surface 4, the control module 6 outputs the given translational motion values x, y, z and rotational motion values Rx, Ry, Rz to the splicing mechanism movable platform 101, and the movable platform 101 moves according to those values to adjust the attitude of the array surface 4.
Step 2.3, recording the pose deviation between the array surface 4 and the docking frame 2 at this moment as the primary teaching reference pose, completing primary teaching.

The primary teaching reference pose data comprise the relative translation and relative rotation information of the array surface 4 and the docking frame 2 in the splicing-completed state. The primary teaching reference pose data are stored in the control module 6, and the primary teaching reference pose is adopted as the initial value of the real-time relative pose for subsequently obtaining the real-time relative pose $T_{rt}$ of the array surface 4 and the docking frame 2 (see the step of calculating the real-time relative pose) and then calculating the real-time pose deviation (see the step of calculating the real-time pose deviation).
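By way of illustration, a minimal sketch of the primary teaching loop (steps 2.1 to 2.3) follows; measure_pose_deviation(), within_tolerance() and move_platform() are hypothetical stand-ins for the instrument measurements, the allowable-range check and the movable platform commands, not part of the patent.

```python
def primary_teaching(measure_pose_deviation, within_tolerance, move_platform):
    """Iterate measurement and adjustment until the deviation is in
    tolerance, then return it as the primary teaching reference pose."""
    while True:
        dev = measure_pose_deviation()   # step 2.1: (x, y, z, Rx, Ry, Rz)
        if within_tolerance(dev):        # step 2.2: compare with the allowable range
            return dev                   # step 2.3: record as reference pose
        move_platform(dev)               # outside range: adjust attitude, re-measure
```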
Calculating the real-time relative pose:

The image acquisition module 5 acquires image information of the visual target 3, the control module 6 receives the image information and performs image processing, and the real-time relative pose $T_{rt}$ of the array surface 4 and the docking frame 2 is calculated from the image information of the visual target. Specifically, the image acquisition module 5 captures an image of the visual target 3, and the real-time relative pose $T_{rt}$ of the array surface 4 and the docking frame 2 is obtained from the target image; the obtained real-time relative pose $T_{rt}$ should include the translation and rotation information of the array surface 4 relative to the docking frame 2. The method of calculating $T_{rt}$ from the image information of the visual target is as follows.
As shown in fig. 5, the center point coordinates of the first, second, third and fourth feature patterns on the visual target (feature patterns 7, 8, 9 and 10 on the visual target 3) are obtained by image recognition and are denoted $q_1$, $q_2$, $q_3$, $q_4$ respectively, each comprising the x, y, z spatial information of the pattern center point in the static platform reference frame. After the control module 6 has calculated the center point coordinates of three of the target's feature patterns, the real-time relative pose $T_{rt}$ of the array surface 4 and the docking frame 2 is calculated. Taking the feature point coordinate $q_4$ (i.e. $x_4$, $y_4$, $z_4$) acquired by the image acquisition module 5 as an example, the calculation of the real-time relative pose $T_{rt}$ of the array surface 4 and the docking frame 2 from the feature point coordinate data is given by formulas (4), (5) and (6):
$$\overrightarrow{q_4q_2} = q_2 - q_4, \qquad \overrightarrow{q_4q_3} = q_3 - q_4 \tag{4}$$

$$R_{rt} = \left[\; \frac{\overrightarrow{q_4q_3}}{\left|\overrightarrow{q_4q_3}\right|} \;,\; \frac{\overrightarrow{q_4q_2}}{\left|\overrightarrow{q_4q_2}\right|} \times \frac{\overrightarrow{q_4q_3}}{\left|\overrightarrow{q_4q_3}\right|} \;,\; \frac{\overrightarrow{q_4q_2}}{\left|\overrightarrow{q_4q_2}\right|} \;\right] \tag{5}$$

$$p_{rt} = q_4 = (x_4,\; y_4,\; z_4)^{T} \tag{6}$$

where $R_{rt}$ is the attitude change of the real-time pose relative to the static platform coordinate system and is a 3-row, 3-column matrix; $\overrightarrow{q_4q_2}$ denotes the spatial vector pointing from $q_4$ to $q_2$; $\left|\overrightarrow{q_4q_2}\right|$ denotes taking the modulus of the vector; and $\times$ denotes the vector cross product. $p_{rt}$ is the position change of the real-time pose relative to the static platform coordinate system, where $q_4$, i.e. $x_4$, $y_4$, $z_4$, respectively represent the x, y, z translation amounts.
It will be appreciated that the coordinate data of any one of the feature points may be employed as $p_{rt}$; when $q_1$, $q_2$ or $q_3$ is substituted into the above formula, the subscript i of $x_i$, $y_i$, $z_i$ on the right-hand side should correspond to the index of the $q_i$ whose displacement data are used.
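As an illustrative aid rather than a definitive implementation, the following minimal numpy sketch carries out formulas (4) to (6); it assumes the feature pattern centers form a rectangle with $q_4$ at the corner adjoining $q_2$ and $q_3$, and the function name is ours.

```python
import numpy as np

def relative_pose(q2: np.ndarray, q3: np.ndarray, q4: np.ndarray) -> np.ndarray:
    """Real-time relative pose T_rt (4x4) in the static platform frame
    from three feature point centers given as (x, y, z) arrays."""
    u = q3 - q4                       # vector q4 -> q3, formula (4)
    w = q2 - q4                       # vector q4 -> q2, formula (4)
    u_hat = u / np.linalg.norm(u)     # normalized edge directions
    w_hat = w / np.linalg.norm(w)
    # Formula (5): columns [u_hat, w_hat x u_hat, w_hat] form R_rt
    R = np.column_stack((u_hat, np.cross(w_hat, u_hat), w_hat))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = q4                     # formula (6): translation part is q4
    return T
```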
Preferably, in another embodiment, when a single feature point among $q_1$, $q_2$, $q_3$, $q_4$ is missing, $\overrightarrow{q_3q_1}$ is used in place of $\overrightarrow{q_4q_2}$ in formula (5), $\overrightarrow{q_2q_1}$ is used in place of $\overrightarrow{q_4q_3}$ in formula (5), and $q_2 + \overrightarrow{q_1q_3}$ or $q_3 + \overrightarrow{q_1q_2}$ is used in place of $q_4$ in formula (6), as applicable; this improves reliability when a single feature point is missing.
Preferably, in another embodiment, when the center points of all the feature patterns are identified, the large array surface splicing visual positioning method uses the redundancy in the number of feature points to improve calculation accuracy: the mean of $\overrightarrow{q_4q_2}$ and $\overrightarrow{q_3q_1}$ is used in place of $\overrightarrow{q_4q_2}$ in formula (5), and the mean of $\overrightarrow{q_4q_3}$ and $\overrightarrow{q_2q_1}$ is used in place of $\overrightarrow{q_4q_3}$ in formula (5), reducing random error.

Compared with previous schemes, the large array surface splicing visual positioning method differs as follows: it makes full use of the vectors selected for the real-time relative pose calculation and of the redundancy in the number of feature patterns on the visual target, substituting equivalent vectors when some feature patterns are occluded, which improves the robustness of pose recognition; and when all patterns are recognized, the redundancy is exploited by substituting the mean of equivalent vectors, which improves pose recognition accuracy and algorithm reliability.
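A hedged numpy-style sketch of this redundancy handling follows; the pairing of parallel edges ($\overrightarrow{q_4q_2}$ with $\overrightarrow{q_3q_1}$, and $\overrightarrow{q_4q_3}$ with $\overrightarrow{q_2q_1}$) is taken from the substitutions above, at most one missing point is assumed, and the function name is ours.

```python
def select_vectors(q1, q2, q3, q4):
    """Return the two edge vectors for formula (5) and the origin for
    formula (6). Each point is an (x, y, z) numpy array, or None when
    its feature pattern was not recognized (at most one point missing)."""
    if all(p is not None for p in (q1, q2, q3, q4)):
        v42 = ((q2 - q4) + (q1 - q3)) / 2.0   # mean of q4->q2 and q3->q1
        v43 = ((q3 - q4) + (q1 - q2)) / 2.0   # mean of q4->q3 and q2->q1
        return v42, v43, q4
    v42 = q2 - q4 if (q2 is not None and q4 is not None) else q1 - q3
    v43 = q3 - q4 if (q3 is not None and q4 is not None) else q1 - q2
    origin = q4 if q4 is not None else q2 + (q3 - q1)  # or q3 + (q2 - q1)
    return v42, v43, origin
```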
Large array surface splicing is used in real engineering scenes, generally outdoors, where the illumination conditions are relatively complex and can severely affect image recognition. To improve the reliability of image recognition and reduce the influence of illumination conditions, the invention preferably, in another embodiment, improves robustness in two dimensions, hardware and software. On the hardware side, as shown in fig. 6, two or more supplementary light sources are arranged on the upper and lower sides of the visual target respectively. The supplementary light sources are linear, flicker-free divergent sources with a total power of not less than 100 W; hardware such as a diffusing lampshade ensures uniform light emission, the emission direction faces the center of the visual target, and the distance from the visual target exceeds 50% of the target's longest edge, ensuring uniform illumination intensity at the edge and center of the visual target. On the software side, the image acquisition module is configured to acquire the visual target image with a fixed exposure time. Preferably, the fixed exposure time is chosen so that, under light from a single supplementary light source, the white part of the visual target reaches more than 90% of the maximum exposure value and the black part of the visual target stays below 30% of the maximum exposure value. The supplementary light hardware design and the fixed exposure design together guarantee accurate exposure of the visual target image, and the high-power supplementary lighting and uniform illumination design greatly reduce the influence of ambient illumination on visual target pattern capture.
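A minimal sketch of the software-side exposure check is given below, assuming 8-bit grayscale images and precomputed masks for the white and black regions of the target; the 90% and 30% thresholds mirror the values above, and all names are illustrative.

```python
import numpy as np

def exposure_ok(gray: np.ndarray, white_mask: np.ndarray,
                black_mask: np.ndarray, max_value: int = 255) -> bool:
    """Check the fixed-exposure criterion: white regions above 90% of
    the maximum exposure value, black regions below 30% of it."""
    white_mean = float(gray[white_mask].mean())
    black_mean = float(gray[black_mask].mean())
    return white_mean > 0.9 * max_value and black_mean < 0.3 * max_value
```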
Preferably, in another embodiment, in the step of capturing images of the corresponding visual targets with the image acquisition modules to obtain the real-time relative pose, the large array surface splicing visual positioning method further includes a combined preprocessing method applied during recognition of the visual target group images, to reduce the influence of ambient illumination on recognition, as shown in fig. 7.
Before the combined preprocessing, the visual target image is unevenly illuminated owing to ambient light, and part of the original picture is significantly brighter, as shown in fig. 8 (1). The intermediate results of the combined preprocessing under ambient illumination can be seen in fig. 8. The combined preprocessing steps in the visual target group image recognition process are as follows:

S102, the original picture, for example the visual target image of fig. 8 (1) affected by ambient light, is subjected to Gaussian blur processing, with the result shown in fig. 8 (2). Gaussian blur filters out high-frequency information in the original picture, effectively reducing interference in the visual target image caused by noise, small target defects, wind, sand, rain, snow and the like.

S103, edge detection is performed, with the result shown in fig. 8 (3): using the gradient information of the grayscale image, portions with large gray-level gradients are separated, so local feature edges can be detected even under uneven illumination.

S104, morphological transformation is performed, with the result shown in fig. 8 (4): broken feature point edges are reconnected, improving the recognition success rate when the visual target image is not ideal.

S105, contour detection is performed, with the result shown in fig. 8 (5): the contours of the feature points are obtained.
Through the above combined preprocessing steps, the contours of the feature points can still be obtained with a high success rate under the influence of ambient light; the remaining steps in fig. 7 are existing mature technology, and finally the center point coordinates of all or some of the patterns are obtained.
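A minimal OpenCV sketch of the combined preprocessing S102 to S105 follows; the kernel sizes and Canny thresholds are illustrative assumptions that would be tuned to the actual target and lighting.

```python
import cv2

def combined_preprocess(image_bgr):
    """Run the combined preprocessing and return feature point contours."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # S102: filter high-frequency noise
    edges = cv2.Canny(blurred, 50, 150)                # S103: gray-gradient edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # S104: reconnect broken edges
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)    # S105: feature point contours
    return contours
```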
Compared with schemes using self-luminous targets, the supplementary lighting scheme effectively improves the illumination uniformity of the target while avoiding the glare and ghost images that self-luminous targets produce on the lens, reducing the impact on target imaging quality.
Compared with schemes designed for ideal illumination conditions, the invention addresses the complex illumination conditions and imaging environment of engineering scenes in both software and hardware, reducing environmental interference with imaging and with feature pattern contour recognition through the target light sources, the imaging parameter configuration of the image acquisition module and the combined preprocessing flow.
Calculating the real-time pose deviation:

The real-time pose deviation $\Delta T$ between the array surface and the docking frame is calculated from the real-time relative pose. The real-time pose deviation comprises the translational and rotational motion information required for the array surface to reach the splicing-completed state. The real-time pose deviation $\Delta T$ provided by the invention is calculated as:

$$\Delta T = T_{ref}^{-1}\, T_{rt}$$

where the real-time pose deviation $\Delta T$ is expressed as a homogeneous transformation matrix and is obtained by multiplying the inverse matrix of the reference pose $T_{ref}$ with the real-time relative pose $T_{rt}$.
Judging whether the real-time pose deviation is within the set allowable range:

It is judged whether the real-time pose deviation $\Delta T$ is within the allowable range. If it is, the positioning work ends. If it is not, the real-time pose deviation is sent to the splicing mechanism as a guidance instruction, the splicing mechanism moves according to the translational and rotational motion amounts given by the pose deviation, and the steps of obtaining the real-time relative pose, calculating the real-time pose deviation and judging whether it is within the set allowable range are repeated until the positioning ends. The translational and rotational motion amounts are calculated as follows:

$$\Delta T = \begin{bmatrix} \Delta R & \Delta p \\ 0_{1\times 3} & 1 \end{bmatrix}$$

$$(Rx,\; Ry,\; Rz) = \mathrm{euler}(\Delta R)$$

$$(x,\; y,\; z)^{T} = \Delta p$$

where $\Delta R$ gives the rotational motion amount of the real-time pose deviation $\Delta T$ relative to the reference coordinate system, expressed as (Rx, Ry, Rz), Rx being the rotation about the x-axis, Ry the rotation about the y-axis and Rz the rotation about the z-axis, with $\mathrm{euler}(\cdot)$ denoting the inverse of the rotation composition used for the reference pose; and $\Delta p$ gives the translational motion amount of $\Delta T$ relative to the reference coordinate system, expressed as (x, y, z), x being the displacement along the x-axis, y the displacement along the y-axis and z the displacement along the z-axis. The allowable range is determined by the physical structure of the array surface and the docking frame, i.e. the tolerance range of the mechanical butt-joint structure between them must not be exceeded so as to meet the splicing accuracy requirement; it is the same allowable range as used in primary teaching.
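By way of illustration, the following hedged sketch ties the positioning loop together: it computes the deviation, extracts the motion amounts (assuming the same Z-Y-X rotation composition as the reference pose sketch above) and commands the platform until the deviation is within the allowable range; capture_pose(), send_motion() and the tolerance values are hypothetical stand-ins for the image pipeline and the platform interface.

```python
import numpy as np

def motion_amounts(dT: np.ndarray):
    """Split a deviation matrix into (Rx, Ry, Rz) and (x, y, z),
    inverting the R = Rz @ Ry @ Rx composition."""
    R, p = dT[:3, :3], dT[:3, 3]
    Ry = np.arcsin(-R[2, 0])
    Rx = np.arctan2(R[2, 1], R[2, 2])
    Rz = np.arctan2(R[1, 0], R[0, 0])
    return (Rx, Ry, Rz), (p[0], p[1], p[2])

def position_loop(T_ref, capture_pose, send_motion, tol_mm=0.1, tol_rad=1e-3):
    while True:
        T_rt = capture_pose()                 # real-time relative pose
        dT = np.linalg.inv(T_ref) @ T_rt      # real-time pose deviation
        (Rx, Ry, Rz), (x, y, z) = motion_amounts(dT)
        if max(abs(x), abs(y), abs(z)) <= tol_mm and \
           max(abs(Rx), abs(Ry), abs(Rz)) <= tol_rad:
            break                             # within the allowable range
        send_motion(x, y, z, Rx, Ry, Rz)      # command the movable platform
```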
Example 2:
another embodiment of the present invention is a vision positioning system of a large wavefront splicing mechanism, and the hardware components are shown in fig. 9. The main difference from embodiment 1 is that in this embodiment, two image capturing modules and two visual targets are included, which specifically includes:
the array surface splicing mechanism 1 is used for adjusting the relative pose of the array surface and the butt joint frame and comprises a splicing mechanism movable platform 101 and a splicing mechanism static platform 102. The splicing mechanism static platform 102 and the working space are relatively static; the splicing mechanism moves the platform 101 relative to the working space.
The docking frame 2, the target of array surface splicing, is the mechanical structural part to which the array surface is spliced.
The visual targets 3, comprising a first visual target 301 and a second visual target 302, are mounted on the end face of the docking frame 2 facing the array surface 4. The visual targets are used to identify the spatial position and geometric features of the docking frame 2. The two visual targets are far apart, so that they accurately reflect the spatial positions of different parts of the end face of the docking frame 2, reducing the influence of flatness differences across the end face of the docking frame 2.
The array surface 4 is firmly connected to the splicing mechanism movable platform 101. The array surface is the object of array surface splicing and is spliced with the docking frame 2 according to a given pose requirement.
The image acquisition modules 5 comprise a first image acquisition module 501 and a second image acquisition module 502, each mounted on the array surface splicing mechanism 1. Preferably, the first image acquisition module 501 and the second image acquisition module 502 are mounted on the splicing mechanism movable platform 101 at positions close to the docking frame 2, with the spacing of the two image acquisition modules matching the spacing of the two visual targets. The image acquisition modules 5 are used to capture image information of the visual targets. The length direction of the array surface 4 is parallel to the arrangement direction of the first image acquisition module 501 and the second image acquisition module 502, and the width direction is parallel to their optical axes.
When the large array surface splicing mechanism visual positioning system adopts a plurality of visual targets 3, the number of image acquisition modules 5 equals the number of visual targets, each image acquisition module acquires the image information of its corresponding visual target, and the spacing of the image acquisition modules matches the spacing of the visual targets.
The control module 6 receives the visual target images captured by the image acquisition modules 5 and processes and analyzes them; using the visual positioning method of the large array surface splicing mechanism of this embodiment, the pose deviation between the array surface 4 and the docking frame 2 is obtained automatically, large array surface splicing visual positioning is performed, and the splicing mechanism (the splicing mechanism movable platform 101 and the splicing mechanism static platform 102) is guided to realize the splicing action; no dedicated operator is needed to complete the array surface splicing, reducing labor cost. The first image acquisition module 501 and the second image acquisition module 502 are each provided with a control module 6; processing the visual target image information in the control module 6 inside each image acquisition module reduces the communication load between the modules and the computing load of centralized processing in a single module. The two image acquisition modules are interchangeable as part of the maintainability design and can be quickly replaced with spares in case of failure. Compared with prior schemes that erect a visual measurement system separately on an independent mechanical structure, the visual positioning system of the large array surface splicing mechanism integrates the image acquisition modules with the splicing mechanism and moves synchronously with it, reducing the erection steps and improving erection efficiency.
The relationship between the physical layer and the data layer of the large array surface splicing visual positioning system of this embodiment is shown in fig. 10.
Physical layer:
the array surface 4 is mechanically and fixedly connected with a splicing mechanism movable platform 101; the splicing mechanism movable platform 101 and the splicing mechanism static platform 102 are controllably moved, and the position and the posture of the splicing mechanism movable platform 101 are adjusted; the splicing mechanism static platform 102 and the working space are relatively static; the first image acquisition module 501 and the second image acquisition module 502 are mechanically and fixedly connected with the splicing mechanism movable platform 101 respectively; the first visual target 301 and the second visual target 302 are each mechanically secured to the docking frame 2.
Data layer:
the first visual target 301 is imaged in the first image acquisition module 501; the second visual target 302 is imaged in the second image acquisition module 502; the first image acquisition module 501 exchanges visual target identification data with the second image acquisition module 502 (for example, the second image acquisition module 502 transmits the coordinates of the center point of the target feature pattern obtained by the second image acquisition module 502 to the first image acquisition module 501); the first image acquisition module 501 transmits real-time pose deviation to the splicing mechanism moving platform 101, and is used for guiding the splicing mechanism moving platform 101 to move so as to realize the front splicing visual positioning.
The flow of the large array surface splicing visual positioning method is shown in fig. 3.
(Optional) Judging whether the system has undergone primary teaching:

The control module 6 checks the working state of the system and judges whether primary teaching has been performed; if not, the primary teaching step is executed; if primary teaching has been performed, the method proceeds to the step of calculating the real-time relative pose.
Whether the system has performed primary teaching is judged by checking the working state of the system: if the system is in the positioning state and the reference pose data stored in the system are not null, it is judged that primary teaching has been performed. The reference pose data are x, y, z, Rx, Ry and Rz, which respectively represent the translation (in millimeters) along the x, y and z directions and the rotation (in radians) about the x, y and z axes of the array surface target pose relative to the current pose, expressed in the static platform reference system. The static platform reference system is defined as follows: the origin is the geometric center point of the upper surface of the splicing mechanism static platform 102; the x axis is parallel to the initial width direction of the array surface 4, with the positive direction pointing from the origin to the installation side of the image acquisition module 5; the y axis is parallel to the initial length direction of the array surface 4, with the positive direction consistent with the direction from the second image acquisition module 502 to the first image acquisition module 501; the z axis is parallel to the initial height direction of the array surface 4, perpendicular to the array surface and pointing away from the splicing mechanism static platform 102; the x, y and z axes follow the Cartesian right-hand rule. The primary teaching reference pose data can be represented by an equivalent homogeneous transformation matrix ${}^{S}T_{ref}$, which describes the position and attitude of the reference in the static platform coordinate system. Denoting the attitude change of the reference in the static platform coordinate system by ${}^{S}R_{ref}$ and its position change by ${}^{S}p_{ref}$, the corresponding conversion relationships are:

$$ {}^{S}T_{ref} = \begin{bmatrix} {}^{S}R_{ref} & {}^{S}p_{ref} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{1} $$

$$ {}^{S}R_{ref} = R_z(Rz)\,R_y(Ry)\,R_x(Rx) \tag{2} $$

$$ {}^{S}p_{ref} = \begin{bmatrix} x & y & z \end{bmatrix}^{T} \tag{3} $$

where Rx is the amount of rotation about the x-axis, Ry is the amount of rotation about the y-axis, Rz is the amount of rotation about the z-axis, x is the amount of displacement along the x-axis, y is the amount of displacement along the y-axis, z is the amount of displacement along the z-axis, and $R_x(\cdot)$, $R_y(\cdot)$, $R_z(\cdot)$ are the elementary rotation matrices about the corresponding axes.
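As a concrete illustration, the following minimal Python sketch performs the conversion of formulas (1) to (3), assuming the $R_z R_y R_x$ rotation order reconstructed above; the function name and argument conventions are illustrative, not part of the patent disclosure.

```python
import numpy as np

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Convert (x, y, z, Rx, Ry, Rz) -- millimeters and radians, static
    platform frame -- into an equivalent 4x4 homogeneous transformation."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # formula (2): composed rotation
    T[:3, 3] = [x, y, z]       # formula (3): position vector
    return T                   # formula (1): homogeneous matrix

# Zero deviation maps to the 4x4 identity matrix:
# pose_to_matrix(0, 0, 0, 0, 0, 0) == np.eye(4)
```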
(Optional) Performing the primary teaching step:
The primary teaching reference pose ${}^{S}T_{ref}$ can be obtained either by manual entry into the visual positioning system or by performing the primary teaching step.
Preferably, in another embodiment, before the step of calculating the real-time relative pose, the method further includes a primary teaching step, as shown in fig. 4. The primary teaching step includes the following steps:
Step 2.1: the pose deviation of the array surface 4 relative to the docking frame 2 is measured with instruments.
The pose deviation comprises the translation deviations x, y, z and the rotation deviations Rx, Ry, Rz that the array surface 4 must undergo to reach the splicing-completed state. The translation deviations are measured with a tape measure, gauge blocks and a vernier caliper: taking the static platform coordinate system as the reference direction, the projection lengths in the x, y and z directions of the line connecting the same point of the array surface 4 in its current position and in its position when splicing with the docking frame is completed are measured. The rotation deviations are measured with a level and a gyroscope: taking the static platform coordinate system as the reference direction, the current Rx, Ry and Rz rotation values of the array surface 4 and the corresponding values when splicing with the docking frame is completed are measured, and the deviations are obtained by pairwise subtraction.
Step 2.2: the pose deviation is judged. If the pose deviation is outside the allowable range, splicing of the array surface 4 and the docking frame 2 is judged incomplete; the posture of the array surface 4 is adjusted by the splicing mechanism 1, and steps 2.1 and 2.2 are repeated. If the pose deviation is within the allowable range, splicing of the array surface 4 and the docking frame 2 is judged complete, and the method proceeds to step 2.3.
The allowable range is given by the physical structures of the array surface and the docking frame: the mechanical tolerance range of the butt joint between the array surface 4 and the docking frame 2 must not be exceeded, so as to meet the splicing precision requirement. When adjusting the posture of the array surface 4, the control module 6 outputs the given translational motion values x, y, z and rotational motion values Rx, Ry, Rz to the splicing mechanism movable platform 101, which moves according to these given values to adjust the posture of the array surface 4.
Step 2.3: the pose deviation of the current array surface 4 and docking frame 2 is recorded as the primary teaching reference pose, completing primary teaching.
The primary teaching reference pose data comprise the information of relative translation and relative rotation between the array surface 4 and the docking frame 2 in the array surface splicing-completed state. The data are stored in the control module 6, and the primary teaching reference pose is adopted as the initial value of the real-time relative pose, for subsequently obtaining the real-time relative pose ${}^{S}T_{real}$ of the array surface 4 and the docking frame 2 (see the step of calculating the real-time relative pose) and then calculating the real-time pose deviation (see the step of calculating the real-time pose deviation).
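The teaching loop of steps 2.1 to 2.3 can be sketched in Python as follows; measure_deviation, move_platform, within_tolerance and capture_relative_pose are hypothetical helpers standing in for the measuring instruments, the movable platform interface, the mechanical tolerance check and the vision measurement, none of which are named in the patent.

```python
def primary_teaching(measure_deviation, move_platform,
                     within_tolerance, capture_relative_pose):
    """Primary teaching loop (steps 2.1-2.3), a sketch under the stated
    assumptions about the helper interfaces."""
    while True:
        dev = measure_deviation()      # step 2.1: instrument measurement
        if within_tolerance(dev):      # step 2.2: splicing judged complete
            break
        move_platform(dev)             # adjust the array surface posture
    return capture_relative_pose()     # step 2.3: record the reference pose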
Calculating a real-time relative pose:
the image acquisition module 5 acquires image information of the visual target 3; the control module 6 receives the image information, performs image processing, and calculates the real-time relative pose ${}^{S}T_{real}$ of the array surface 4 and the docking frame 2 from the image information. Specifically, the first image acquisition module 501 captures the image of the first visual target 301 and the second image acquisition module 502 captures the image of the second visual target 302, and the real-time relative pose ${}^{S}T_{real}$ of the array surface 4 and the docking frame 2 is obtained from the visual target images. The obtained real-time relative pose ${}^{S}T_{real}$ contains the translation and rotation information of the array surface 4 relative to the docking frame 2. The method for obtaining ${}^{S}T_{real}$ from the image information of the visual targets is as follows.
As shown in fig. 5, the center-point coordinates of the feature patterns 9, 10, 11 and 12 on the visual targets 301 and 302, namely p1, p2, p3 and p4, are obtained through image recognition; each contains the x, y and z spatial information of the corresponding feature pattern center point in the static platform reference system. The midpoint of p1 and p3 gives the coordinate of the upper feature point 11, and the midpoint of p2 and p4 gives the coordinate of the lower feature point 12. The coordinates of the upper and lower feature points obtained by the processing module of the first image acquisition module 501 are denoted $q_1$ and $q_2$ respectively, and those obtained by the processing module of the second image acquisition module 502 are denoted $q_3$ and $q_4$. After the control module 6 of the second image acquisition module 502 calculates the coordinate data of the upper and lower feature points of its target, it transmits them to the processing module of the first image acquisition module 501, which calculates the real-time relative pose ${}^{S}T_{real}$ of the array surface 4 and the docking frame 2.
Taking the lower feature point coordinate $q_4$ (i.e. $x_4$, $y_4$, $z_4$) obtained by the second image acquisition module 502 as an example, the calculation of the real-time relative pose ${}^{S}T_{real}$ of the array surface 4 and the docking frame 2 from the feature point coordinate data is given by formula (4), formula (5) and formula (6):

$$ {}^{S}T_{real} = \begin{bmatrix} {}^{S}R_{real} & {}^{S}p_{real} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{4} $$

$$ {}^{S}R_{real} = \begin{bmatrix} \hat{y}\times\hat{z} & \hat{y} & \hat{z} \end{bmatrix}, \quad \hat{y} = \frac{\overrightarrow{q_4 q_2}}{\left|\overrightarrow{q_4 q_2}\right|}, \quad \hat{z} = \frac{\overrightarrow{q_4 q_3}}{\left|\overrightarrow{q_4 q_3}\right|} \tag{5} $$

$$ {}^{S}p_{real} = q_4 = \begin{bmatrix} x_4 & y_4 & z_4 \end{bmatrix}^{T} \tag{6} $$

where ${}^{S}R_{real}$ is the attitude change of the real-time pose relative to the static platform coordinate system, a 3-row, 3-column matrix; $\overrightarrow{q_4 q_2}$ denotes the spatial vector from point $q_4$ to point $q_2$; $\left|\overrightarrow{q_4 q_2}\right|$ denotes taking the modulus of the vector; $\times$ denotes the vector cross product; ${}^{S}p_{real}$ is the position change of the real-time pose relative to the static platform coordinate system, where $q_4$, i.e. $x_4$, $y_4$ and $z_4$, are the x, y and z translation amounts respectively.
It will be appreciated that the coordinate data of any one of the feature points may be employed as ${}^{S}p_{real}$: when $q_1$, $q_2$ or $q_3$ is substituted into the above formula, the subscript $i$ of $x_i$, $y_i$, $z_i$ on the right-hand side should correspond to the subscript of the $q$ supplying the displacement data of that feature point.
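A Python sketch of formulas (4) to (6) as reconstructed above is given below; the renormalization and re-orthogonalization lines are added numerical safeguards and are assumptions beyond the formulas themselves.

```python
import numpy as np

def relative_pose(q1, q2, q3, q4):
    """Real-time relative pose from the feature-point coordinates q1..q4
    (length-3 arrays in the static platform frame), per formulas (4)-(6).
    q1 is unused here; it participates in the redundant variants below."""
    y_hat = (q2 - q4) / np.linalg.norm(q2 - q4)  # unit vector q4->q2 (y axis)
    z_hat = (q3 - q4) / np.linalg.norm(q3 - q4)  # unit vector q4->q3 (z axis)
    x_hat = np.cross(y_hat, z_hat)               # formula (5): x = y cross z
    x_hat /= np.linalg.norm(x_hat)               # assumption: renormalize
    z_hat = np.cross(x_hat, y_hat)               # assumption: re-orthogonalize
    T = np.eye(4)
    T[:3, :3] = np.column_stack((x_hat, y_hat, z_hat))  # formula (5)
    T[:3, 3] = q4                                        # formula (6)
    return T
```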
Preferably, in another embodiment, when a single feature point among $q_1$, $q_2$, $q_3$ and $q_4$ is missing, $\overrightarrow{q_3 q_1}$ is used in place of $\overrightarrow{q_4 q_2}$ in formula (5) for calculation, $\overrightarrow{q_2 q_1}$ is used in place of $\overrightarrow{q_4 q_3}$ in formula (5) for calculation, and $q_2$ or $q_3$ is used in place of $q_4$ in formula (6) for calculation; this improves reliability when a single feature point is missing.
Preferably, in another embodiment, when the center points of all the feature patterns are identified, the large array surface splicing visual positioning method of the invention utilizes the numerical redundancy of the feature points to improve calculation accuracy: $\tfrac{1}{2}\left(\overrightarrow{q_4 q_2}+\overrightarrow{q_3 q_1}\right)$ is substituted for $\overrightarrow{q_4 q_2}$ in formula (5), and $\tfrac{1}{2}\left(\overrightarrow{q_4 q_3}+\overrightarrow{q_2 q_1}\right)$ is substituted for $\overrightarrow{q_4 q_3}$ in formula (5), so that random errors are reduced.
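Under the same assumptions, the redundancy-averaged variant described above changes only the two direction vectors before normalization, as in this sketch:

```python
import numpy as np

def relative_pose_redundant(q1, q2, q3, q4):
    """Variant of formula (5) averaging the two parallel target vectors,
    to reduce random error when all four points are identified."""
    y_vec = 0.5 * ((q2 - q4) + (q1 - q3))  # mean of q4->q2 and q3->q1
    z_vec = 0.5 * ((q3 - q4) + (q1 - q2))  # mean of q4->q3 and q2->q1
    y_hat = y_vec / np.linalg.norm(y_vec)
    z_hat = z_vec / np.linalg.norm(z_vec)
    x_hat = np.cross(y_hat, z_hat)
    T = np.eye(4)
    T[:3, :3] = np.column_stack((x_hat, y_hat, z_hat))
    T[:3, 3] = q4
    return T
```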
Preferably, in another embodiment, in the step in which the image acquisition modules respectively capture images of their corresponding visual targets to obtain the real-time relative pose, the large array surface splicing visual positioning method further includes a combined preprocessing method adopted in the visual target group image recognition process, to reduce the influence of ambient illumination on recognition. The combined preprocessing steps in the visual target group image recognition process are described in embodiment 1 and are not repeated here.
Calculating the real-time pose deviation:
The real-time pose deviation ${}^{S}T_{dev}$ between the array surface and the docking frame is calculated according to the real-time relative pose. The real-time pose deviation contains the information of the translational motion and rotational motion required for the array surface to reach the splicing-completed state. The pose deviation ${}^{S}T_{dev}$ is calculated as follows:

$$ {}^{S}T_{dev} = {}^{S}T_{real}\left({}^{S}T_{ref}\right)^{-1} \tag{7} $$

where the pose deviation ${}^{S}T_{dev}$ is expressed in the form of a homogeneous transformation matrix and is obtained by multiplying the real-time relative pose ${}^{S}T_{real}$ by the inverse matrix of the reference pose ${}^{S}T_{ref}$.
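Formula (7) is a single matrix product; a minimal sketch:

```python
import numpy as np

def pose_deviation(T_real, T_ref):
    """Formula (7): the real-time pose deviation is the real-time relative
    pose multiplied by the inverse of the taught reference pose."""
    return T_real @ np.linalg.inv(T_ref)
```

Since both matrices are rigid transformations, the inverse could equivalently be formed from the transposed rotation block; np.linalg.inv is used here only for brevity.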
Judging whether the real-time pose deviation is within a set allowable range:
It is judged whether the real-time pose deviation ${}^{S}T_{dev}$ is within the allowable range. If the real-time pose deviation ${}^{S}T_{dev}$ is within the allowable range, the positioning work ends. If the real-time pose deviation ${}^{S}T_{dev}$ is not within the allowable range, the real-time pose deviation is sent as a guidance instruction to the splicing mechanism, which moves according to the given translational and rotational motion amounts of the pose deviation; the steps of obtaining the real-time relative pose, calculating the real-time pose deviation and judging whether the real-time pose deviation is within the set allowable range are then repeated until the positioning ends. The translational and rotational motion amounts are calculated as follows:

$$ {}^{S}T_{dev} = \begin{bmatrix} {}^{S}R_{dev} & {}^{S}p_{dev} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{8} $$

$$ Rx = \operatorname{atan2}\!\left(r_{32},\, r_{33}\right), \quad Ry = \operatorname{atan2}\!\left(-r_{31},\, \sqrt{r_{32}^{2}+r_{33}^{2}}\right), \quad Rz = \operatorname{atan2}\!\left(r_{21},\, r_{11}\right) \tag{9} $$

$$ \begin{bmatrix} x & y & z \end{bmatrix}^{T} = {}^{S}p_{dev} \tag{10} $$

where ${}^{S}R_{dev}$, with elements $r_{ij}$, gives the amount of rotational motion of the real-time pose deviation ${}^{S}T_{dev}$ relative to the reference coordinate system, comprising (Rx, Ry, Rz): Rx is the amount of rotation about the x-axis, Ry is the amount of rotation about the y-axis, and Rz is the amount of rotation about the z-axis; ${}^{S}p_{dev}$ gives the amount of translational motion relative to the reference coordinate system, comprising (x, y, z): x is the amount of displacement along the x-axis, y is the amount of displacement along the y-axis, and z is the amount of displacement along the z-axis. The allowable range is determined according to the physical structures of the array surface and the docking frame, i.e., the mechanical tolerance range of the butt joint between the array surface and the docking frame must not be exceeded, so as to meet the splicing precision requirement; it is the same allowable range as used in primary teaching.
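A sketch of the motion-amount extraction and the resulting closed loop follows, reusing relative_pose and pose_deviation from the sketches above; the Euler extraction assumes the $R_z R_y R_x$ composition of formula (2), and capture_targets, command_platform and within_tolerance are hypothetical interfaces, not part of the patent.

```python
import numpy as np

def motion_amounts(T_dev):
    """Formulas (8)-(10): recover (x, y, z, Rx, Ry, Rz) from the deviation
    matrix, inverting the Rz*Ry*Rx composition assumed in formula (2)."""
    R, p = T_dev[:3, :3], T_dev[:3, 3]
    rx = np.arctan2(R[2, 1], R[2, 2])                       # about x
    ry = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))   # about y
    rz = np.arctan2(R[1, 0], R[0, 0])                       # about z
    return np.array([p[0], p[1], p[2], rx, ry, rz])

def closed_loop_positioning(capture_targets, command_platform,
                            within_tolerance, T_ref):
    """Repeat measurement and correction until the deviation is in tolerance."""
    while True:
        T_real = relative_pose(*capture_targets())   # real-time relative pose
        amounts = motion_amounts(pose_deviation(T_real, T_ref))
        if within_tolerance(amounts):                # set allowable range
            return
        command_platform(amounts)                    # guide the movable platform
```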
It can be understood that when the image acquisition module 5 comprises three or more cameras and the visual target 3 comprises three or more visual targets, the coordinates of the four feature points are defined as the average values of the corresponding feature point coordinates, and visual positioning can still be performed according to the steps described in embodiment 2.
Although the present invention has been described in terms of the preferred embodiment, it is not intended that the invention be limited to the embodiment. Any equivalent changes or modifications made without departing from the spirit and scope of the present invention are also within the protection scope of the present invention. The scope of the invention should therefore be determined with reference to the appended claims.

Claims (10)

1. A vision positioning system of a large array surface splicing mechanism is used for splicing an array surface with a butt joint frame according to a set pose and is characterized by comprising an array surface splicing mechanism, a vision target, an image acquisition module and a control module;
the array surface splicing mechanism comprises a static platform which is static relative to the working space and a movable platform which moves relative to the working space, and the array surface is stably connected to the movable platform; the array surface splicing mechanism receives the instruction of the control module and adjusts the relative pose of the array surface and the butt joint frame;
the visual target is arranged on the end face, facing the array face, of the butt joint frame;
the image acquisition module is arranged on the array surface splicing mechanism and is used for acquiring the image information of the visual target;
the control module receives the image information of the visual target acquired by the image acquisition module, processes the image, obtains the real-time relative pose of the array surface and the docking frame according to the image information of the visual target, calculates the real-time pose deviation between the array surface and the docking frame according to the real-time relative pose, sends an instruction to the array surface splicing mechanism according to the real-time pose deviation, and adjusts the relative pose of the array surface and the docking frame by the array surface splicing mechanism so that the real-time pose deviation is within a set allowable range, thereby splicing the array surface and the docking frame according to the set pose.
2. The visual positioning system of the large array surface splicing mechanism according to claim 1, wherein the system comprises a plurality of visual targets; the number of the image acquisition modules is the same as that of the visual targets, and each image acquisition module acquires image information of the corresponding visual target; each image acquisition module is provided with a control module, the image information of each visual target is processed in the corresponding control module, the control modules exchange visual target data, and one control module dynamically sends instructions to the splicing mechanism according to the real-time pose deviation.
3. A visual positioning method for a large array surface splicing mechanism, implemented by the visual positioning system of the large array surface splicing mechanism according to any one of claims 1 to 2, comprising the following steps:
calculating a real-time relative pose: the image acquisition module acquires the image information of the visual target; the control module receives the image information of the visual target, performs image processing, and calculates the real-time relative pose ${}^{S}T_{real}$ of the array surface and the docking frame from the image information of the visual target;
calculating the real-time pose deviation: calculating the real-time pose deviation ${}^{S}T_{dev}$ between the array surface and the docking frame according to the real-time relative pose ${}^{S}T_{real}$;
Judging whether the real-time pose deviation is within a set allowable range: judging whether the real-time pose deviation is within an allowable range, and if the real-time pose deviation is within the set allowable range, ending the process; if the real-time pose deviation is not within the allowable range, sending an instruction to the array surface splicing mechanism according to the real-time pose deviation, adjusting the relative pose of the array surface and the butt joint frame by the array surface splicing mechanism according to the amount of translational motion and the amount of rotational motion in the instruction, and repeating the steps of calculating the real-time relative pose, calculating the real-time pose deviation and judging whether the real-time pose deviation is within the set allowable range until the end.
4. The visual positioning method for the large array surface splicing mechanism according to claim 3, wherein calculating the real-time relative pose ${}^{S}T_{real}$ of the array surface and the docking frame from the image information of the visual target specifically comprises:
obtaining the center-point coordinates of the first, second, third and fourth feature patterns on the visual targets, respectively $q_1$, $q_2$, $q_3$ and $q_4$, each containing the x, y and z spatial information of the corresponding pattern center point in the static platform reference system; and the control module calculating the real-time relative pose ${}^{S}T_{real}$ of the array surface and the docking frame according to the following formulas:

$$ {}^{S}T_{real} = \begin{bmatrix} {}^{S}R_{real} & {}^{S}p_{real} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{4} $$

$$ {}^{S}R_{real} = \begin{bmatrix} \hat{y}\times\hat{z} & \hat{y} & \hat{z} \end{bmatrix}, \quad \hat{y} = \frac{\overrightarrow{q_4 q_2}}{\left|\overrightarrow{q_4 q_2}\right|}, \quad \hat{z} = \frac{\overrightarrow{q_4 q_3}}{\left|\overrightarrow{q_4 q_3}\right|} \tag{5} $$

$$ {}^{S}p_{real} = q_4 = \begin{bmatrix} x_4 & y_4 & z_4 \end{bmatrix}^{T} \tag{6} $$

wherein ${}^{S}R_{real}$ is the attitude change of the real-time pose relative to the static platform coordinate system, a 3-row, 3-column matrix; $\overrightarrow{q_4 q_2}$ represents the spatial vector from point $q_4$ to point $q_2$; $\left|\overrightarrow{q_4 q_2}\right|$ represents taking the modulus of the vector; $\times$ represents the vector cross product; ${}^{S}p_{real}$ is the position change of the real-time pose relative to the static platform coordinate system, where $q_4$, i.e. $x_4$, $y_4$ and $z_4$, are the x, y and z translation amounts respectively.
5. The visual positioning method for the large array surface splicing mechanism according to claim 3, wherein calculating the real-time pose deviation ${}^{S}T_{dev}$ between the array surface and the docking frame according to the real-time relative pose specifically comprises:

$$ {}^{S}T_{dev} = {}^{S}T_{real}\left({}^{S}T_{ref}\right)^{-1} \tag{7} $$

wherein the real-time pose deviation ${}^{S}T_{dev}$ is expressed in the form of a homogeneous transformation matrix and is obtained by multiplying the real-time relative pose ${}^{S}T_{real}$ by the inverse matrix of the reference pose ${}^{S}T_{ref}$.
6. The visual positioning method for the large array surface splicing mechanism according to claim 3, wherein the amount of translational motion and the amount of rotational motion are calculated as follows:

$$ {}^{S}T_{dev} = \begin{bmatrix} {}^{S}R_{dev} & {}^{S}p_{dev} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{8} $$

$$ Rx = \operatorname{atan2}\!\left(r_{32},\, r_{33}\right), \quad Ry = \operatorname{atan2}\!\left(-r_{31},\, \sqrt{r_{32}^{2}+r_{33}^{2}}\right), \quad Rz = \operatorname{atan2}\!\left(r_{21},\, r_{11}\right) \tag{9} $$

$$ \begin{bmatrix} x & y & z \end{bmatrix}^{T} = {}^{S}p_{dev} \tag{10} $$

wherein ${}^{S}R_{dev}$, with elements $r_{ij}$, gives the amount of rotational motion of the real-time pose deviation ${}^{S}T_{dev}$ relative to the reference coordinate system, comprising (Rx, Ry, Rz): Rx is the amount of rotation about the x-axis, Ry is the amount of rotation about the y-axis, Rz is the amount of rotation about the z-axis; ${}^{S}p_{dev}$ gives the amount of translational motion relative to the reference coordinate system, comprising (x, y, z): x is the amount of displacement along the x-axis, y is the amount of displacement along the y-axis, z is the amount of displacement along the z-axis.
7. The visual positioning method for the large array surface splicing mechanism according to claim 3, wherein before the step of calculating the real-time relative pose the method further comprises a primary teaching step, specifically:
step 2.1, measuring the pose deviation of the array surface relative to the docking frame with instruments; the pose deviation comprises the translation deviations x, y and z and the rotation deviations Rx, Ry and Rz that the array surface must undergo to reach the splicing-completed state; the translation deviations x, y and z are measured, taking the static platform coordinate system as the reference direction, as the projection lengths in the x, y and z directions of the line connecting the same point of the array surface in its current position and in its position when splicing with the docking frame is completed; the rotation deviations are obtained, taking the static platform coordinate system as the reference direction, by measuring the current Rx, Ry and Rz rotation values of the array surface and the corresponding values when splicing with the docking frame is completed, and subtracting them pairwise;
step 2.2, judging the pose deviation: if the pose deviation is outside the set allowable range, determining that splicing of the array surface and the docking frame is not completed, the control module outputting the translational motion values x, y, z and rotational motion values Rx, Ry, Rz given by the pose deviation to the splicing mechanism movable platform, the splicing mechanism movable platform moving according to these values to adjust the attitude of the array surface, and repeating steps 2.1 and 2.2; if the pose deviation is within the set allowable range, determining that splicing of the array surface and the docking frame is completed, and proceeding to step 2.3;
step 2.3, recording the pose deviation of the array surface and the docking frame at this moment as the primary teaching reference pose and completing primary teaching, wherein in the step of calculating the real-time relative pose, the primary teaching reference pose is adopted as the initial value of the real-time relative pose.
8. The visual positioning method for the large array surface splicing mechanism according to claim 4, wherein when a single feature point among $q_1$, $q_2$, $q_3$ and $q_4$ is missing, $\overrightarrow{q_3 q_1}$ is used in place of $\overrightarrow{q_4 q_2}$ in formula (5) for calculation, $\overrightarrow{q_2 q_1}$ is used in place of $\overrightarrow{q_4 q_3}$ in formula (5) for calculation, and $q_2$ or $q_3$ is used in place of $q_4$ in formula (6) for calculation.
9. The visual positioning method for the large array surface splicing mechanism according to claim 4, wherein when the center points of all the feature patterns are identified, $\tfrac{1}{2}\left(\overrightarrow{q_4 q_2}+\overrightarrow{q_3 q_1}\right)$ is substituted for $\overrightarrow{q_4 q_2}$ in formula (5) for calculation, and $\tfrac{1}{2}\left(\overrightarrow{q_4 q_3}+\overrightarrow{q_2 q_1}\right)$ is substituted for $\overrightarrow{q_4 q_3}$ in formula (5) for calculation.
10. The visual positioning method for the large array surface splicing mechanism according to claim 3, further comprising a combined preprocessing method applied while the image acquisition module acquires the image information of the visual target, the combined preprocessing comprising the following steps:
S102, performing Gaussian blur processing on the original picture to filter out its high-frequency information;
S103, separating the regions with a large gray-scale gradient by using the gradient information of the gray-scale map of the image, and detecting local feature edges;
S104, performing morphological transformation;
S105, performing contour detection to obtain the contours of the feature points.
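A minimal OpenCV sketch of the combined preprocessing chain S102 to S105 is given below; the kernel sizes and the gradient threshold are illustrative assumptions, as the claim fixes only the order of operations.

```python
import cv2
import numpy as np

def combined_preprocess(img_gray):
    """Combined preprocessing (S102-S105): blur, gradient, morphology, contours."""
    # S102: Gaussian blur to filter high-frequency information
    blurred = cv2.GaussianBlur(img_gray, (5, 5), 0)
    # S103: gray-scale gradient magnitude; keep regions with large gradient
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    _, edges = cv2.threshold(np.clip(mag, 0, 255).astype(np.uint8),
                             50, 255, cv2.THRESH_BINARY)
    # S104: morphological closing to consolidate the detected edges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    # S105: contour detection to obtain the feature-point contours
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```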
CN202210779455.2A 2022-07-04 2022-07-04 Visual positioning system and method for large array surface splicing mechanism Pending CN115790366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210779455.2A CN115790366A (en) 2022-07-04 2022-07-04 Visual positioning system and method for large array surface splicing mechanism

Publications (1)

Publication Number Publication Date
CN115790366A true CN115790366A (en) 2023-03-14

Family

ID=85431290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210779455.2A Pending CN115790366A (en) 2022-07-04 2022-07-04 Visual positioning system and method for large array surface splicing mechanism

Country Status (1)

Country Link
CN (1) CN115790366A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116518849A (en) * 2023-06-20 2023-08-01 常州市新创智能科技有限公司 Device and method for accurately positioning and detecting depth of aluminum alloy vehicle body interface
CN116518849B (en) * 2023-06-20 2023-09-08 常州市新创智能科技有限公司 Device and method for accurately positioning and detecting depth of aluminum alloy vehicle body interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination