CN117470132A - Numbering method under situation of deformation and deletion of multi-line laser bar


Info

Publication number
CN117470132A
Authority
CN
China
Prior art keywords
light
camera
laser
bar
line
Prior art date
Legal status
Pending
Application number
CN202311407338.4A
Other languages
Chinese (zh)
Inventor
周舵
尹仕斌
郭寅
Current Assignee
Yi Si Si Hangzhou Technology Co ltd
Original Assignee
Yi Si Si Hangzhou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Yi Si Si Hangzhou Technology Co ltd
Priority to CN202311407338.4A
Publication of CN117470132A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2504 - Calibration devices
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a numbering method for the situation in which multi-line laser bars are deformed or missing. The method comprises: obtaining the depth values of the points on the object to be measured in the camera coordinate system; setting a height tolerance range for each pixel point according to the depth values; acquiring a laser bar image with the camera; searching for the laser bar connected domains and extracting the light bar center lines; calculating the spatial point coordinates corresponding to the pixel points on a light bar center line; determining whether each coordinate value z_j lies within the height tolerance range corresponding to its pixel point; counting the number of z_j values within the height tolerance range and calculating the ratio of this number to the total number of pixel points; if the ratio is greater than a threshold value, storing the light plane equation of the current number in correspondence with the light bar center line; if not, continuing the judgment with the light plane equation of the next number and the current light bar center line, until all light bars have been traversed. The method remains robust when laser bars are deformed, missing or occluded, and is suitable for the multi-line structured-light plane calibration process on the surfaces of complex objects.

Description

Numbering method under situation of deformation and deletion of multi-line laser bar
Technical Field
The invention relates to the field of light plane calibration, and in particular to a numbering method for the situation in which multi-line laser bars are deformed or missing.
Background
Optical three-dimensional measurement technology is non-contact, fast in acquisition and flexible in setup, and is widely applied in fields such as reverse engineering of products, defect detection, robot navigation, augmented reality and pose estimation. Multi-line structured-light measurement uses a multi-line laser and a camera to project a plurality of laser bars onto the surface of the object to be measured; it can therefore cover a large field of view while achieving high-speed acquisition, is suitable for measuring strongly textured surfaces, and is widely used in structured-light vision measurement.
In an application of a multi-line laser sensor, the plane parameters of each laser light plane must first be calibrated in turn to obtain the light plane equations; during three-dimensional reconstruction, each laser bar is then extracted from the camera image and the light plane equation corresponding to each light bar is looked up. Each laser bar must therefore be associated with the light plane equation calibrated for it in advance; if the correspondence is confused, the incorrect light plane assignment causes a large three-dimensional reconstruction error and degrades the accuracy of the measurement result.
Typical methods for matching multi-line laser bars to the pre-calibrated light planes include the ordering method, the region-parameter method and the binocular-constraint method. The ordering method requires that all light bars can be clearly captured by the camera and that the laser bars can be numbered in sequence along a specific direction (for example, from left to right); when the surface of the measured object is complex and there are occlusions or field-of-view limitations, the acquired image may contain missing, deformed or occluded light bars, and the method fails.
The region-parameter method likewise requires that the camera clearly capture all light bars, and it fails when light bars are missing from the image; it also needs the measured field of view to be divided into a reference region and a measurement region, and estimates the number of each light bar from the offset of the mean pixel coordinates of that light bar in the measurement region, so its robustness is low.
The binocular-constraint method requires an additional camera: a binocular camera pair is used to acquire the laser bar images so as to eliminate ambiguity in the numbering of the light bars. When the sensor contains only one camera, this method still requires adding a camera, which increases the complexity and cost of the system and hinders implementation.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a numbering method for the situation in which multi-line laser bars are deformed or missing. Based on the shape of the object to be measured, the method can accurately match the correct light plane to each laser bar in the image; it remains robust when laser bars are deformed, missing or occluded, is suitable for the multi-line structured-light plane calibration process on complex object surfaces, and is low-cost, highly automated and highly accurate.
For this purpose, the technical scheme of the invention is as follows:
the numbering method under the situation that the multi-line laser bar is deformed and missing comprises a multi-line laser projector and a camera, wherein the multi-line laser projector is used for projecting the multi-line laser bar to the surface of an object to be measured, and the camera is used for collecting laser bar images;
the following information is obtained in advance:
camera internal parameters;
the light plane equations of all light planes in the camera coordinate system, numbered sequentially according to a preset ordering mode;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
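As a minimal illustration of how this pre-acquired information could be organised in code (all names, file paths, coefficient values and the constant tolerance below are assumptions for the sketch, not values from the patent):

import numpy as np

# Numbered light plane equations A_i*x + B_i*y + C_i*z + D_i = 0 in the camera
# coordinate system (placeholder coefficients, not calibrated values).
light_planes = {
    3: (0.98, 0.01, -0.20, -150.0),
    4: (0.97, 0.02, -0.24, -165.0),
    5: (0.96, 0.02, -0.28, -180.0),
}

# Per-pixel depth h(u_j, v_j) of the nominal object surface in the camera frame,
# and the preset tolerance Δh(u_j, v_j); here a pre-computed map and a constant 2 mm.
h = np.load("object_depth_map.npy")          # shape (H, W), hypothetical file
dh = np.full_like(h, 2.0)

# Height tolerance range for every pixel: [h - Δh, h + Δh]
tol_low, tol_high = h - dh, h + dh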
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value;
the multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines sequentially in the same ordering mode as the light plane equations, with the value of the initial number of the laser bar connected domains kept consistent with the value of the initial number of the light plane equations;
Execute step two using the light bar center line of the initial number and the light plane equation of the initial number;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
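A minimal sketch of one consistent reading of step two: assuming a pinhole camera with square pixels of size d, focal length f and principal point (u'_0, v'_0), the viewing ray of pixel (u_j, v_j) is intersected with the light plane A_i x + B_i y + C_i z + D_i = 0 to obtain (x_j, y_j, z_j). The helper below is an illustration under those assumptions rather than the patent's literal formula.

def intersect_ray_with_plane(u, v, plane, f, d, u0, v0):
    """Back-project pixel (u, v) through a pinhole camera (square pixels of
    size d, focal length f, principal point (u0, v0)) and intersect the viewing
    ray with the light plane A*x + B*y + C*z + D = 0 given in camera
    coordinates. Returns (x, y, z) or None if the ray is parallel to the plane.
    Illustrative reading of step two, not the patent's literal formula."""
    A, B, C, D = plane
    kx = (u - u0) * d / f          # ray: x = kx * z
    ky = (v - v0) * d / f          # ray: y = ky * z
    denom = A * kx + B * ky + C
    if abs(denom) < 1e-12:
        return None
    z = -D / denom                 # from A*kx*z + B*ky*z + C*z + D = 0
    return kx * z, ky * z, z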
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than the threshold value, store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value, judge whether the number of the current light plane equation is equal to the end number of the light plane equations:
if yes, mark the current light bar center line as an invalid light bar and leave it unnumbered; then execute step four;
if not, continue to execute step two with the light plane equation of the next number and the current light bar center line;
Step four: judge whether there is a light bar center line on which step two has not been performed; if so, find the light bar center line closest to the light bar center line of the initial number, and continue to execute step two with the found light bar center line and the light plane equation of the current number; if not, end the numbering.
Further, the preset ordering mode includes left-to-right, right-to-left, top-to-bottom and bottom-to-top.
The invention also discloses another numbering method for the situation in which multi-line laser bars are deformed or missing, in which the multi-line laser sensor comprises a multi-line laser projector and a camera; the multi-line laser projector is used to project multi-line laser bars onto the surface of the object to be measured, and the camera is used to acquire laser bar images;
the method is characterized in that the following information is acquired in advance:
camera internal parameters;
the light plane equations of all the light planes under the camera coordinate system are numbered respectively, and the numbers of the light plane equations are different;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value;
the multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines individually;
Take one light bar center line and one light plane equation at random, and execute step two;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than the threshold value, store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value, continue to execute step two with the light plane equations of the other numbers and the current light bar center line, until the ratio is greater than the threshold value or all the light plane equations have been traversed; then execute step four;
Step four: judge whether there is a light bar center line on which step two has not been executed; if so, select one such light bar center line and continue to execute step two with it and a light plane equation of any number; if not, end the numbering.
Preferably, the preset height tolerance Δh(u_j, v_j) can be set in either of the following two ways:
Mode one: set according to an empirical value;
Mode two: computed from the geometric parameters of the sensor, where L is the horizontal distance between the optical center of the laser and the optical center of the camera, γ is the included angle between the middle laser bar emitted by the laser and the optical axis of the camera, and f, u_0 and d are, respectively, the focal length, the transverse principal point coordinate and the transverse pixel size from the camera calibration result in step 1.
Preferably, the depth value, in the camera coordinate system, of each point on the object to be measured in the detection area is obtained in advance in either of the following two ways:
Mode one: convert the modeling coordinate system of the digital model (CAD model) of the object to be measured into the camera coordinate system;
Mode two: acquire three-dimensional point cloud information of one of the objects to be measured with a three-dimensional measuring instrument, calibrate the relation between the coordinate system of the three-dimensional measuring instrument and the camera coordinate system, and convert the three-dimensional point cloud information into the camera coordinate system; the three-dimensional measuring instrument includes a photogrammetry system or a three-dimensional scanner.
Preferably, in the third step, the threshold value is 0.7-0.95.
Preferably, the internal parameters of the camera are solved by Zhang's calibration method or by bundle adjustment.
The method of the invention has the following characteristics:
only one camera is needed, no other equipment is needed for assistance, the cost is low, and the complexity of the system is not increased;
based on the shape of the object to be measured, a height tolerance range is set from the depth information corresponding to each pixel point, and this range is used to screen the correct light plane for each laser bar in the image, giving high accuracy and strong robustness;
the situations to which the method can be applied include deformed, missing and occluded laser bars; compared with existing light plane matching methods it therefore covers a wider range of use cases, remains effective when the shape of the object to be measured is complex, and can still match the correct light plane to each laser bar.
Drawings
FIG. 1 is a schematic diagram of the determination of Δh(u_j, v_j);
FIG. 2 is a two-dimensional image of a laser stripe projected onto the surface of an object to be measured;
FIG. 3 is a laser bar image acquired by a camera;
FIG. 4 is a schematic diagram of the step one numbering of the laser bars;
FIG. 5 is a schematic diagram of the final laser bar numbering.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and the detailed description.
Example 1
In this embodiment, the light bar center lines and the light planes are numbered in the same order (for example, both from left to right). When the light bar center lines are matched with the light planes (steps two to four), a number-by-number matching mode is adopted: step two is executed starting from the light bar center line of the initial number. Because the light bar center lines and the light planes are numbered in the same order, a center line with a small number is more likely to match a light plane with a small number, and a center line with a large number is more likely to match a light plane with a large number. On this premise, the technical scheme of this embodiment improves numbering efficiency and is particularly suitable for the case of a large number of light bars.
The specific scheme is as follows:
A multi-line laser bar numbering method for the situation in which laser bars are deformed or missing, using a multi-line laser sensor that comprises a multi-line laser projector and a camera; the multi-line laser projector is used to project multi-line laser bars onto the surface of the object to be measured, and the camera is used to acquire laser bar images (as shown in FIG. 3);
the following information is obtained in advance:
camera internal parameters;
the light plane equations of all light planes in the camera coordinate system, numbered sequentially according to a preset ordering mode;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value; that is, the poses of the successive objects to be measured in the detection area are approximately the same, for example all placed horizontally or all placed at the same inclination;
In a specific implementation, a clamp can be arranged in the detection area and the objects to be measured are placed on it in turn, so that their poses remain uniform; alternatively, the method relies only on the positioning accuracy of the objects to be measured, provided the positioning deviation is small when the objects move into place; or the objects to be measured are carried into the detection area by a robot, whose positioning repeatability ensures that the poses are uniform and that the pose deviation of each object is smaller than the preset value.
The multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines sequentially in the same ordering mode as the light plane equations (as shown in FIG. 4), with the value of the initial number of the laser bar connected domains kept consistent with the value of the initial number of the light plane equations;
For example, if the light plane equations are ordered from left to right and numbered starting from 3, namely light plane No. 3, light plane No. 4 and light plane No. 5, then the light bar center lines are also numbered sequentially from left to right starting from 3, namely light bar center line No. 3, light bar center line No. 4 and light bar center line No. 5.
Execute step two using the light bar center line of the initial number and the light plane equation of the initial number;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than a threshold value (the threshold takes a value of 0.7-0.95), store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value, judge whether the number of the current light plane equation is equal to the end number of the light plane equations:
if yes, mark the current light bar center line as an invalid light bar and leave it unnumbered; then execute step four;
if not, continue to execute step two with the light plane equation of the next number and the current light bar center line. (For example, when light plane No. 1 does not match light bar center line No. 1, i.e. the ratio is not greater than the threshold value, step two is executed with light plane No. 2 and light bar center line No. 1; if they match, the number of light bar center line No. 1 is modified from 1 to 2; if they do not match, step two is executed with light plane No. 3 and light bar center line No. 1, and so on.)
Step four: judge whether there is a light bar center line on which step two has not been performed; if so, find the light bar center line closest to the light bar center line of the initial number, and continue to execute step two with the found light bar center line and the light plane equation of the current number. (As shown in FIG. 4, after step two ends, light plane No. 7 has been matched with light bar center line No. 6, whose number is modified to No. 7; at this point, in step four, the light bar center line closest to the initial-number light bar center line among those on which step two has not yet been performed is light bar center line No. 7, and step two is continued with light bar center line No. 7 and light plane No. 7.)
If not, end the numbering. (As in FIG. 5, the erroneous initial numbers 6-24 caused by the light bar changes in FIG. 4 have been corrected; the corrected numbers are 7-24.)
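A compact sketch of this number-by-number traversal (steps one to four) is given below. It reuses the intersect_ray_with_plane helper and the tol_low/tol_high arrays from the earlier sketches, assumes integer pixel coordinates on the center lines, and approximates the total number of pixel points in a connected domain by the number of center-line pixels; the 0.8 threshold and all names are illustrative assumptions, not values fixed by the patent.

def match_stripes_sequential(centerlines, light_planes, cam, tol_low, tol_high,
                             ratio_threshold=0.8):
    """centerlines: {initial number: [(u, v), ...]}, numbered in the same
    ordering as light_planes ({number: (A, B, C, D)}). cam = (f, d, u0, v0).
    Returns the centerline-to-plane-number matches and the invalid light bars."""
    f, d, u0, v0 = cam
    plane_nums = sorted(light_planes)
    i = 0                                    # index of the current plane number
    matches, invalid = {}, []
    for n in sorted(centerlines):            # initial number first, then the next closest
        pixels = centerlines[n]
        while i < len(plane_nums):
            plane = light_planes[plane_nums[i]]
            inliers = 0
            for (u, v) in pixels:            # step two: 3-D point for every pixel
                p = intersect_ray_with_plane(u, v, plane, f, d, u0, v0)
                if p is not None and tol_low[int(v), int(u)] <= p[2] <= tol_high[int(v), int(u)]:
                    inliers += 1
            if pixels and inliers / len(pixels) > ratio_threshold:   # step three
                matches[n] = plane_nums[i]   # the center line takes this plane's number
                break                        # step four: next center line, same current plane
            i += 1                           # otherwise try the plane with the next number
        else:
            invalid.append(n)                # end number reached: invalid light bar
    return matches, invalid

Because a failed comparison only ever advances the plane number, deformed or missing bars simply shift the subsequent center lines to later plane numbers, which is the renumbering behaviour illustrated by FIG. 4 and FIG. 5.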
Illustratively, the selectable preset ordering modes include left-to-right, right-to-left, top-to-bottom and bottom-to-top.
More specifically, the preset height tolerance Δh(u_j, v_j) can be set in either of the following two ways:
Mode one: set according to an empirical value; for example, the empirical value can be the minimum difference, in the depth direction of the camera coordinate system, between two adjacent laser bars in the laser bar image (a small sketch of computing such a value follows mode two below).
Mode two: computed from the geometric parameters of the sensor; as shown in FIG. 1, L is the horizontal distance between the optical center of the laser and the optical center of the camera, γ is the included angle between the middle laser bar emitted by the laser and the optical axis of the camera, and f, u_0 and d are, respectively, the focal length, the transverse principal point coordinate and the transverse pixel size in the camera calibration result of step 1.
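For mode one, one way of obtaining such an empirical value is sketched below, assuming the laser bars of a sample image have already been reconstructed so that the camera-frame depth values of each bar are available; the use of per-bar medians is an illustrative choice, not prescribed by the patent.

import numpy as np

def empirical_delta_h(stripe_depths):
    """stripe_depths: list of 1-D arrays holding the camera-frame depth (z)
    values of each reconstructed laser bar, ordered left to right. Returns the
    smallest depth gap between neighbouring bars, usable as a uniform Δh."""
    medians = [float(np.median(z)) for z in stripe_depths]
    return min(abs(a - b) for a, b in zip(medians, medians[1:]))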
The depth value, in the camera coordinate system, of each point on the object to be measured in the detection area is obtained in advance in either of the following two ways:
Mode one: convert the modeling coordinate system of the digital model (CAD model) of the object to be measured into the camera coordinate system; when the features of the object to be measured are simple (for example, the stepped object to be measured in FIG. 2 of this embodiment), the distance between the object coordinate system and the camera coordinate system can be measured directly, the coordinate values of the points on the measured surface are corrected with this distance, and the corrected coordinate values are used as the depth values in the camera coordinate system.
Mode two: acquire three-dimensional point cloud information of one of the objects to be measured with a three-dimensional measuring instrument, calibrate the relation between the coordinate system of the three-dimensional measuring instrument and the camera coordinate system, and convert the three-dimensional point cloud information into the camera coordinate system; the three-dimensional measuring instrument includes a photogrammetry system or a three-dimensional scanner.
In a specific implementation, the depth value of each point on the object to be measured in the camera coordinate system is an estimated or predicted value; through iteration (steps two to four) and probability statistics (step three, judging whether the ratio is greater than the threshold value), the light plane most likely to match each light bar is selected, and the correspondence between light bars and light planes is thus obtained.
Specifically, the internal parameters of the camera are calculated by Zhang's calibration method or by bundle adjustment.
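The patent only names the calibration method; purely as an illustration, a Zhang-style chessboard calibration using OpenCV (an assumed tooling choice, not specified by the patent) could look as follows, yielding the camera matrix from which the focal length in pixels (f/d) and the principal point (u'_0, v'_0) can be read.

import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(9, 6), square_mm=10.0):
    """Zhang-style chessboard calibration with OpenCV (illustrative sketch).
    images: list of BGR calibration images containing the chessboard."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    if not obj_pts:
        raise RuntimeError("no chessboard detected in the calibration images")
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist    # K holds fx = f/d and the principal point (cx, cy)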
In this scheme, the form of the plurality of laser bars projected by the multi-line laser is not limited. Preferably, the laser bars are all projected along the same direction, for example a set of parallel laser bars projected along the vertical or horizontal direction. When the projection directions of the laser bars are not uniform, for example laser bars arranged in a cross shape, the laser bars in the horizontal direction can be numbered first and then the laser bars in the vertical direction.
Example 2
The difference between this embodiment and Embodiment 1 is that the numbering order of the light bar center lines and of the light planes can be assigned at random; for example, the numbers of the three light planes are 3, 1, 5 from left to right, and the initial numbers of the light bar center lines are also random, for example 4, 2, 6 from left to right. When the light bar center lines are matched with the light planes (steps two to four), random combinations are tried; in the end it is only necessary to find the light plane that matches each light bar center line and assign the number of that light plane to the light bar center line.
Embodiment 1 is better suited to the case of a large number of light bars, whereas this embodiment is better suited to the case of a small number of light bars (for example, fewer than 8).
The specific technical scheme is as follows:
The multi-line laser sensor comprises a multi-line laser projector and a camera, wherein the multi-line laser projector is used to project multi-line laser bars onto the surface of the object to be measured, and the camera is used to acquire laser bar images;
the following information is obtained in advance:
camera internal parameters;
the light plane equations of all the light planes under the camera coordinate system are numbered respectively, and the numbers of the light plane equations are different;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value;
the multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines individually;
Take one light bar center line and one light plane equation at random, and execute step two;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than the threshold value, store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value (e.g. 0.8), continue to execute step two with the light plane equations of the other numbers and the current light bar center line, until the ratio is greater than the threshold value or all the light plane equations have been traversed; then execute step four;
Step four: judge whether there is a light bar center line on which step two has not been executed; if so, select one such light bar center line and continue to execute step two with it and a light plane equation of any number; if not, end the numbering.
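A sketch of this exhaustive variant follows, again reusing the intersect_ray_with_plane helper and the tolerance arrays from the earlier sketches; removing a matched plane from the candidate set and the 0.8 threshold are illustrative choices, not requirements stated by the patent.

def match_stripes_exhaustive(centerlines, light_planes, cam, tol_low, tol_high,
                             ratio_threshold=0.8):
    """Embodiment-2-style matching: center-line and plane numbers need not share
    an order, so the remaining planes are simply tried one by one (any order)."""
    f, d, u0, v0 = cam
    remaining = dict(light_planes)           # planes not yet assigned to a center line
    matches, invalid = {}, []
    for n, pixels in centerlines.items():    # step four: pick any unprocessed center line
        best = None
        for num, plane in remaining.items():
            inliers = 0
            for (u, v) in pixels:            # step two: 3-D point for every pixel
                p = intersect_ray_with_plane(u, v, plane, f, d, u0, v0)
                if p is not None and tol_low[int(v), int(u)] <= p[2] <= tol_high[int(v), int(u)]:
                    inliers += 1
            if pixels and inliers / len(pixels) > ratio_threshold:   # step three
                best = num
                break
        if best is None:
            invalid.append(n)                # no plane passed: left unnumbered
        else:
            matches[n] = best                # the center line takes this plane's number
            del remaining[best]
    return matches, invalid

Unlike the sequential variant, every remaining plane is a candidate for every center line, so the cost grows with the product of their counts; this is consistent with recommending Embodiment 2 for a small number of light bars.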
More specifically, the preset height tolerance Δh(u_j, v_j) can be set in either of the following two ways:
Mode one: set according to an empirical value; for example, the empirical value can be the minimum difference, in the depth direction of the camera coordinate system, between two adjacent laser bars in the laser bar image.
Mode two: computed from the geometric parameters of the sensor; as shown in FIG. 1, L is the horizontal distance between the optical center of the laser and the optical center of the camera, γ is the included angle between the middle laser bar emitted by the laser and the optical axis of the camera, and f, u_0 and d are, respectively, the focal length, the transverse principal point coordinate and the transverse pixel size in the camera calibration result of step 1.
The depth value, in the camera coordinate system, of each point on the object to be measured in the detection area is obtained in advance in either of the following two ways:
Mode one: convert the modeling coordinate system of the digital model (CAD model) of the object to be measured into the camera coordinate system;
Mode two: acquire three-dimensional point cloud information of one of the objects to be measured with a three-dimensional measuring instrument, calibrate the relation between the coordinate system of the three-dimensional measuring instrument and the camera coordinate system, and convert the three-dimensional point cloud information into the camera coordinate system; the three-dimensional measuring instrument includes a photogrammetry system or a three-dimensional scanner.
Specifically, the internal parameters of the camera are calculated by Zhang's calibration method or by bundle adjustment.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the invention and its practical application to thereby enable others skilled in the art to make and utilize the invention in various exemplary embodiments and with various alternatives and modifications. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (7)

1. A numbering method for the situation in which multi-line laser bars are deformed or missing, using a multi-line laser sensor that comprises a multi-line laser projector and a camera, wherein the multi-line laser projector is used for projecting multi-line laser bars onto the surface of an object to be measured, and the camera is used for acquiring laser bar images;
the method is characterized in that the following information is acquired in advance:
camera internal parameters;
the light plane equations of all light planes in the camera coordinate system, numbered sequentially according to a preset ordering mode;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value;
the multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines sequentially in the same ordering mode as the light plane equations, with the value of the initial number of the laser bar connected domains kept consistent with the value of the initial number of the light plane equations;
Execute step two using the light bar center line of the initial number and the light plane equation of the initial number;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than the threshold value, store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value, judge whether the number of the current light plane equation is equal to the end number of the light plane equations:
if yes, mark the current light bar center line as an invalid light bar and leave it unnumbered; then execute step four;
if not, continue to execute step two with the light plane equation of the next number and the current light bar center line;
Step four: judge whether there is a light bar center line on which step two has not been performed; if so, find the light bar center line closest to the light bar center line of the initial number, and continue to execute step two with the found light bar center line and the light plane equation of the current number; if not, end the numbering.
2. The numbering method according to claim 1, wherein the preset ordering mode includes left-to-right, right-to-left, top-to-bottom and bottom-to-top.
3. A numbering method for the situation in which multi-line laser bars are deformed or missing, using a multi-line laser sensor that comprises a multi-line laser projector and a camera, wherein the multi-line laser projector is used for projecting multi-line laser bars onto the surface of an object to be measured, and the camera is used for acquiring laser bar images;
the method is characterized in that the following information is acquired in advance:
camera internal parameters;
the light plane equations of all the light planes under the camera coordinate system are numbered respectively, and the numbers of the light plane equations are different;
the depth values, in the camera coordinate system, of the points on the object to be measured within the detection area; and, set according to the depth values, a height tolerance range for each pixel point (u_j, v_j):
[h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)]
where h(u_j, v_j) denotes the depth value, in the camera coordinate system, of the point on the object to be measured that corresponds to the j-th pixel point (u_j, v_j), and Δh(u_j, v_j) denotes the preset height tolerance at the j-th pixel point (u_j, v_j);
during detection, the relative position of the multi-line laser sensor and the object to be measured is fixed; the objects to be measured, all of the same type, enter the detection area one batch after another, and the pose deviation between the objects to be measured is smaller than a preset value;
the multi-line laser projector projects multi-line laser bars onto the object to be measured in the detection area, and the camera acquires laser bar images;
the following steps are used to match the correct number for each laser stripe in the laser stripe image:
Step one: search for the laser bar connected domains in the laser bar image, and extract the light bar center line in each connected domain; number all the light bar center lines individually;
Take one light bar center line and one light plane equation at random, and execute step two;
Step two: substitute the coordinates of each pixel point on the light bar center line into the following formula to calculate the spatial point coordinates (x_j, y_j, z_j) corresponding to each pixel point (u_j, v_j):
where f and d are, respectively, the focal length and the transverse pixel size obtained by calibrating the camera, (u'_0, v'_0) are the principal point coordinates, and A_i x_j + B_i y_j + C_i z_j + D_i = 0 is the light plane equation with number i input this time; the coordinate value z_j represents the coordinate in the depth direction of the camera coordinate system;
Judge, for each coordinate value z_j, whether it lies within the height tolerance range [h(u_j, v_j) - Δh(u_j, v_j), h(u_j, v_j) + Δh(u_j, v_j)] corresponding to the pixel point (u_j, v_j); count the number of coordinate values z_j that lie within the height tolerance range, and then calculate the ratio of this number to the total number of pixel points in the current connected domain;
Step three: if the ratio is greater than the threshold value, store the light plane equation of the current number in correspondence with the light bar center line, and set the number of the light bar center line equal to the number of the current light plane equation; then execute step four;
If the ratio is smaller than or equal to the threshold value, continue to execute step two with the light plane equations of the other numbers and the current light bar center line, until the ratio is greater than the threshold value or all the light plane equations have been traversed; then execute step four;
Step four: judge whether there is a light bar center line on which step two has not been executed; if so, select one such light bar center line and continue to execute step two with it and a light plane equation of any number; if not, end the numbering.
4. The numbering method according to claim 1 or 3, wherein the preset height tolerance Δh(u_j, v_j) can be set in either of the following two ways:
Mode one: set according to an empirical value;
Mode two: computed from the geometric parameters of the sensor, where L is the horizontal distance between the optical center of the laser and the optical center of the camera, γ is the included angle between the middle laser bar emitted by the laser and the optical axis of the camera, and f, u_0 and d are, respectively, the focal length, the transverse principal point coordinate and the transverse pixel size in the camera calibration result of step 1.
5. The numbering method according to claim 1 or 3, wherein the depth value, in the camera coordinate system, of each point on the object to be measured in the detection area is obtained in either of the following two ways:
Mode one: convert the modeling coordinate system of the digital model (CAD model) of the object to be measured into the camera coordinate system;
Mode two: acquire three-dimensional point cloud information of one of the objects to be measured with a three-dimensional measuring instrument, calibrate the relation between the coordinate system of the three-dimensional measuring instrument and the camera coordinate system, and convert the three-dimensional point cloud information into the camera coordinate system; the three-dimensional measuring instrument includes a photogrammetry system or a three-dimensional scanner.
6. The numbering method according to claim 1 or 3, wherein in step three the threshold value is 0.7-0.95.
7. The numbering method according to claim 1 or 3, wherein the internal parameters of the camera are solved by Zhang's calibration method or by bundle adjustment.
CN202311407338.4A 2023-10-26 2023-10-26 Numbering method under situation of deformation and deletion of multi-line laser bar Pending CN117470132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311407338.4A CN117470132A (en) 2023-10-26 2023-10-26 Numbering method under situation of deformation and deletion of multi-line laser bar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311407338.4A CN117470132A (en) 2023-10-26 2023-10-26 Numbering method under situation of deformation and deletion of multi-line laser bar

Publications (1)

Publication Number Publication Date
CN117470132A true CN117470132A (en) 2024-01-30

Family

ID=89632414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311407338.4A Pending CN117470132A (en) 2023-10-26 2023-10-26 Numbering method under situation of deformation and deletion of multi-line laser bar

Country Status (1)

Country Link
CN (1) CN117470132A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination