CN113048899A - Thickness measuring method and system based on line structured light


Info

Publication number
CN113048899A
Authority
CN
China
Prior art keywords: light bar, line segment, line, light
Prior art date
Legal status
Pending
Application number
CN202110611414.8A
Other languages
Chinese (zh)
Inventor
何文浩 (He Wenhao)
郭跃 (Guo Yue)
宋海涛 (Song Haitao)
周小伟 (Zhou Xiaowei)
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202110611414.8A
Publication of CN113048899A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, for measuring thickness, e.g. of sheet material
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 - Target detection


Abstract

The invention provides a thickness measuring method and system based on line structured light. A target image of a target object irradiated by the line structured light is acquired; the target image is input into a semantic segmentation model to obtain probability maps of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image output by the model, wherein the second light bar line segment is located between the first and third light bar line segments and lies on the surface of the target object in the target image; the thickness of the target object is then determined based on the probability maps and the target image. The method increases the saliency of the laser line in complex background images and, by extracting the key line segments or points of the laser line projected onto a complex target object, reduces the influence of measurement-irrelevant changes in laser line shape on the final result, so that the measured thickness of the target object is more accurate and the method can be applied to scenes with more complex textures.

Description

Thickness measuring method and system based on line structured light
Technical Field
The invention relates to the field of machine vision, in particular to a thickness measuring method and system based on line structured light.
Background
Line structured light has become a classic and popular method in many industrial measurement tasks, such as track profiling, weld positioning, topographical surface reconstruction, robotic navigation, and thickness measurement of target objects.
For a thickness measuring method based on line structured light, the measurement quality depends on the performance of the line structured light system, which in turn depends to a great extent on a mature camera calibration method and on the conversion of fixed 2D pixels into 3D target points.
In the prior art, thickness measurement methods based on line structured light do not take the visual characteristics of the target object into account. This makes the measured thickness of the target object inaccurate and hinders the application of three-dimensional visual measurement of target objects in scenes with more complex textures.
Disclosure of Invention
The invention provides a line structured light-based thickness measurement method and system to overcome the defect that prior-art line structured light thickness measurement ignores the visual characteristics of the target object, and to enable three-dimensional visual measurement of target objects in scenes with more complex textures.
The invention provides a thickness measuring method based on line structured light, which comprises the following steps:
acquiring a target image of a target object irradiated by the line structured light;
inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image;
determining a thickness of the target object based on the probability map of the first, second, and third light bar segments and the target image;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
According to the line structured light based thickness measuring method provided by the present invention, determining the thickness of the target object based on the probability maps of the first light bar line segment, the second light bar line segment and the third light bar line segment and on the target image specifically includes:
extracting laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment respectively based on the probability maps of the first light bar line segment, the second light bar line segment and the third light bar line segment and the target image;
determining a first distance of the second light bar segment from the first light bar segment and a second distance of the second light bar segment from the third light bar segment based on a three-dimensional spatial representation of laser line centers of the first, second, and third light bar segments under a camera coordinate system;
determining a thickness of the target object based on the first distance and the second distance.
According to the line structured light based thickness measuring method provided by the present invention, extracting the laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment respectively, based on their probability maps and the target image, specifically comprises:
for any one of the first light bar line segment, the second light bar line segment and the third light bar line segment, multiplying or adding each element in the probability map of that light bar line segment with the element at the corresponding position in the target image to obtain a new target image;
and extracting the laser line center of that light bar line segment based on the new target image.
According to the thickness measuring method based on the line structured light, provided by the invention, the target image is acquired based on a camera;
correspondingly, the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system is determined by:
acquiring the internal reference matrix of the camera, and determining the laser plane generated by the line structured light irradiating a calibration object during camera calibration;
and respectively determining three-dimensional space representations of laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in a camera coordinate system based on the internal reference matrix and the laser plane.
According to the thickness measuring method based on line structured light provided by the invention, determining the laser plane generated by the line structured light irradiating a calibration object during camera calibration specifically comprises the following steps:
determining the laser line projection generated by the line structured light irradiating the calibration object during camera calibration, and extracting the laser line center of the laser line projection;
and fitting to obtain the laser plane based on the three-dimensional space representation of the laser line center projected by the laser line under the camera coordinate system.
According to the thickness measuring method based on line structured light provided by the present invention, the determining three-dimensional spatial representations of the laser line centers of the first, second, and third light bar line segments in a camera coordinate system based on the internal reference matrix and the laser plane respectively specifically includes:
for any of the first, second, and third light bar segments, determining a three-dimensional spatial representation of a laser line center of the any light bar segment under a normalized camera coordinate system based on the internal reference matrix;
determining a scaling factor based on the three-dimensional space representation of the laser plane and the laser line center of any light bar line segment under a normalized camera coordinate system;
and determining the three-dimensional space representation of the laser line center of any light bar line segment in the camera coordinate system based on the scaling factor and the three-dimensional space representation of the laser line center of any light bar line segment in the normalized camera coordinate system.
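The three steps above can be written out explicitly. As an illustration only (the symbols s and p-tilde are introduced here and are not in the patent text): for a pixel (u, v) on a laser line center, with internal reference matrix K and the laser plane expressed as n·P + d = 0 in the camera coordinate system,

```latex
\tilde{p} = K^{-1}\begin{pmatrix}u\\ v\\ 1\end{pmatrix}
\quad\text{(normalized camera coordinates)},\qquad
s = \frac{-d}{n^{\top}\tilde{p}}
\quad\text{(scaling factor)},\qquad
P = s\,\tilde{p}
\quad\text{(camera coordinates)}.
```

By construction P lies on the laser plane, since n·P + d = s(n·p̃) + d = 0.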
According to the line structured light thickness measuring method provided by the present invention, the determining a first distance between the second light bar line segment and the first light bar line segment and a second distance between the second light bar line segment and the third light bar line segment based on the three-dimensional space representation of the laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in the camera coordinate system specifically includes:
determining a first endpoint of the second light bar segment that is proximate to the first light bar segment and a second endpoint of the second light bar segment that is proximate to the third light bar segment based on a three-dimensional spatial representation of a laser line center of the second light bar segment in a camera coordinate system;
determining a first straight line where the first light bar line segment is located based on three-dimensional space representation of the laser line center of the first light bar line segment in a camera coordinate system, and determining a second straight line where the third light bar line segment is located based on three-dimensional space representation of the laser line center of the third light bar line segment in the camera coordinate system;
and determining the distance between the first end point and the first straight line as the first distance, and determining the distance between the second end point and the second straight line as the second distance.
The invention also provides a thickness measuring system based on line structured light, comprising:
the target image acquisition module is used for acquiring a target image of a target object irradiated by the line structured light;
the semantic segmentation module is used for inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image;
a thickness determination module to determine a thickness of the target object based on the target image and a probability map of the first, second, and third light bar line segments;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
The invention further provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the above line structured light based thickness measurement methods.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the line structured light based thickness measurement method as described in any of the above.
According to the thickness measuring method and system based on line structured light provided by the invention, the target image of the target object is segmented with a semantic segmentation model to obtain the probability maps of the first, second and third light bar line segments, and the thickness of the target object is then determined from the probability maps of the three light bar line segments and the target image. By introducing semantic features, the method increases the saliency of the laser line in complex background images; by extracting the key line segments or points of the laser line projected onto a complex target object, it reduces the influence of measurement-irrelevant changes in laser line shape on the final result. The measured thickness of the target object is therefore more accurate, and the line structured light based thickness measuring method can be applied to scenes with more complex textures.
Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a line structured light-based thickness measurement method according to an embodiment of the present invention;
fig. 2 is a schematic view of a measurement scenario of a thickness measurement method based on line structured light according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the results of a fit to a laser plane in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a measurement error of a line structured light based thickness measurement method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of line segment fitting at point A in FIG. 4;
FIG. 6 is a schematic diagram of line segment fitting at point B in FIG. 4;
fig. 7 is a schematic flowchart of a thickness measuring method based on line structured light according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a line structured light based thickness measurement system according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The conventional method for measuring the thickness of a target object based on line structured light does not take the visual characteristics of the target object into account, so the measured thickness is inaccurate, which hinders the application of three-dimensional visual measurement of target objects in scenes with more complex textures.
Therefore, the invention provides a thickness measuring method based on line structured light. Fig. 1 is a schematic flowchart of a thickness measurement method based on line structured light according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
s1, acquiring a target image of the target object irradiated by the line-structured light;
s2, inputting the target image into a semantic segmentation model, and obtaining a probability map of a first light bar line segment, a second light bar line segment, and a third light bar line segment in the target image output by the semantic segmentation model, where the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on a surface of a target object in the target image;
s3, determining the thickness of the target object based on the probability map of the first, second and third light bar segments and the target image;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
Specifically, the execution subject of the thickness measuring method based on line structured light provided in the embodiment of the present invention is a thickness measuring apparatus, which includes an image acquisition device, a laser line emitter, a plane on which the target object can be placed, and an image processing device. The image processing device may be a server; the server may be a local server or a cloud server, and the local server may specifically be a computer, a tablet computer, a smartphone, and the like, which is not specifically limited in the embodiment of the present invention.
Step S1 is performed first. In the embodiment of the invention, a laser line emitter can be adopted to project line structured light onto the surface of the target object; the line structured light is projected uniformly over the surface when it irradiates the target object. A laser stripe image of the surface of the target object can then be captured by the image acquisition device in the thickness measuring apparatus. The laser stripe image may be a color image; for subsequent operations, the color image can be converted into a grayscale image, yielding the grayscale laser stripe image of the surface of the target object, i.e. the target image. The target object may be an object of any thickness and, in particular, an object with more complex textures.
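As an illustrative sketch (not part of the patent text), the color-to-grayscale conversion mentioned above can be written with the standard BT.601 luma weights; the BGR channel ordering assumed here follows OpenCV's convention and is an assumption, since the patent does not specify the capture format:

```python
import numpy as np

def to_gray(bgr_image: np.ndarray) -> np.ndarray:
    """Convert a color laser-stripe image to grayscale using the
    ITU-R BT.601 luma weights (the same weighting OpenCV's
    cvtColor applies for BGR -> GRAY)."""
    b = bgr_image[..., 0].astype(np.float64)
    g = bgr_image[..., 1].astype(np.float64)
    r = bgr_image[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```

The resulting single-channel image is the "target image" fed to the semantic segmentation model in step S2.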
Then, step S2 is executed. After the target image is acquired, the target image can be input into the semantic segmentation model, and the probability map of the first light strip line segment, the second light strip line segment and the third light strip line segment in the target image output by the semantic segmentation model can be obtained through segmentation of the semantic segmentation model. The second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of the target object in the target image. The semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
The first, second and third light bar line segments arise as follows: when the line structured light irradiates the surface of a target object of suitable height and brightness, the light beam forms light bar line segments on and around the object, i.e. the beam is divided into three segments by the target object: the light bar line segments on the two sides of the target object and the light bar line segment on its surface. In the embodiment of the present invention, the light bar line segment on one side of the target object may be taken as the first light bar line segment, the light bar line segment on the surface of the target object as the second light bar line segment, and the light bar line segment on the other side of the target object as the third light bar line segment. According to the positions of the three light bar line segments in the target image, the first light bar line segment can be the upper line segment in the target image, the third light bar line segment the lower line segment, and the second light bar line segment the middle line segment between the first and the third.
The input of the semantic segmentation model is a target image I of width w and height h; the output is a probability map with four channels, whose categories are, in order, the background, the first light bar line segment, the second light bar line segment and the third light bar line segment. The probability of a pixel in the probability map of any channel is the likelihood, predicted by the model, that the current pixel position belongs to the category corresponding to that channel. The type of probability map in the embodiment of the present invention may be selected according to actual needs, which is not specifically limited in the embodiment of the present invention.
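The four-channel probability map described above can be sketched as follows. The softmax normalization and the function names are illustrative assumptions, since the patent does not fix how the network turns its raw outputs into probabilities:

```python
import numpy as np

def logits_to_probability_maps(logits: np.ndarray) -> np.ndarray:
    """logits: (4, h, w) raw network outputs; channels ordered as
    background, first, second and third light bar line segment.
    Returns a (4, h, w) probability map via a channel-wise softmax,
    so the four probabilities at each pixel sum to 1."""
    shifted = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=0, keepdims=True)

def per_pixel_class(prob: np.ndarray) -> np.ndarray:
    """Classify each pixel as the channel with the highest probability."""
    return prob.argmax(axis=0)
```

`per_pixel_class` gives the per-pixel classification into background or one of the three light bar line segments that the description mentions next.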
Semantics in the semantic segmentation model refer to semantic features of line segments or key points oriented to a specific target; for example, the semantics may refer to semantic features oriented to the second light bar line segment. The semantic segmentation model in the embodiment of the invention can be an existing semantic segmentation model. For example, it may use a fully convolutional network (FCN), a U-shaped network (U-Net), a deformable network, or DeepLab v3 with a 50-layer residual network (ResNet-50) as the feature extraction network, which is not particularly limited in this embodiment of the present invention.
The semantic segmentation model can predict the category of each pixel in the target image by segmenting the target image, namely, classifying each pixel in the target image into one of a background, a first light bar line segment, a second light bar line segment and a third light bar line segment. After the semantic segmentation is completed, the position of any pixel point in the first light strip line segment, the second light strip line segment and the third light strip line segment in the target image can be obtained.
Before the target image is segmented by using the semantic segmentation model, the semantic segmentation model needs to be trained. The semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
When training the semantic segmentation model, each acquired image sample can be manually labeled: the ground-truth bounding boxes b_t, b_m and b_b of the first, second and third light bar line segments are marked to obtain image samples carrying the first, second and third light bar line segment labels, and the semantic segmentation model is trained with these image samples.
In training the semantic segmentation model, the gray centroid method can also be used to extract the laser line center within each bounding box. For each light bar line segment category, the gray centroid method computes the laser line center of each column according to the following formula:

x_c(y) = Σ_x [x · I(x, y)] / Σ_x I(x, y)    (1)

where the line segment categories are the first, second and third light bar line segments; x_c(y) denotes the laser line center of the y-th column of a light bar line segment; x denotes the x-th row; I(x, y) is the pixel value of the element at the x-th row and y-th column; and the products x · I(x, y) are taken element by element, i.e. the operation is applied to every element point in the target image.
Before the gray centroid method is used to extract the laser line center within each bounding box, the probability maps of the different classes can be thresholded to obtain binary maps. That is, the semantic segmentation model outputs probability maps for three categories (the first, second and third light bar line segments); for each probability map, when the probability of a pixel exceeds a specified threshold, the pixel at the same position in the binary map is set to 1, and otherwise to 0. The threshold lies in the range 0 to 1 and may be set according to actual needs, for example to 0.5, which is not specifically limited in this embodiment of the present invention.
After thresholding, three binary maps of different categories are obtained, namely the binary maps of the first, second and third light bar line segments. The gray centroid method is then applied to the elements whose value is 1 in these binary maps.
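A minimal sketch of the thresholding step followed by the column-wise gray centroid of formula (1), restricted to the in-mask pixels. The function name and the default threshold of 0.5 are illustrative; the column-wise layout follows formula (1):

```python
import numpy as np

def extract_laser_centers(prob_map: np.ndarray, gray: np.ndarray,
                          threshold: float = 0.5) -> dict:
    """Threshold one class's (h, w) probability map into a binary map,
    then apply the gray centroid of formula (1) to each column using
    only the pixels whose binary value is 1.
    Returns {column index y: sub-pixel center row x_c(y)}."""
    binary = (prob_map > threshold).astype(np.float64)
    weights = binary * gray.astype(np.float64)   # keep only in-mask intensities
    h, w = gray.shape
    rows = np.arange(h, dtype=np.float64)        # the x index of each row
    col_mass = weights.sum(axis=0)               # denominator of formula (1)
    centers = {}
    for y in range(w):
        if col_mass[y] > 0:                      # column intersects the mask
            centers[y] = float((rows * weights[:, y]).sum() / col_mass[y])
    return centers
```

Applied once per category, this yields the sub-pixel laser line centers of the first, second and third light bar line segments.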
In the above process, to obtain the ground-truth semantic mask, the mask of channel one is initialized with all ones, and the pixels containing light bar line segments are set to 0 in it; the masks of channels two to four are initialized with all zeros, and the pixels at the laser line center of each type of light bar line segment are set to 1 in the mask of the corresponding category. Channel one is the background channel, and channels two to four correspond to the first, second and third light bar line segments, respectively.
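The mask initialization described above can be sketched as follows; the function name and input layout are illustrative assumptions, not from the patent:

```python
import numpy as np

def build_semantic_masks(h, w, segment_pixels, center_pixels):
    """Build the 4-channel ground-truth mask described above.
    segment_pixels: list of (x, y) pixels covered by any light bar segment.
    center_pixels: {channel: [(x, y), ...]} laser-line-center pixels for
    channels 1..3 (first/second/third light bar line segment).
    Channel 0 (background) starts as all ones; channels 1-3 as all zeros."""
    masks = np.zeros((4, h, w), dtype=np.uint8)
    masks[0] = 1                       # background channel initialized to 1
    for x, y in segment_pixels:
        masks[0, x, y] = 0             # light bar pixels are not background
    for ch, pts in center_pixels.items():
        for x, y in pts:
            masks[ch, x, y] = 1        # laser line center of that category
    return masks
```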
To achieve a better model training effect, the Focal loss function can be used to supervise the training of the semantic segmentation model. Focal loss was designed mainly to address the severe imbalance between positive and negative samples in target detection; using it in semantic segmentation training down-weights the large number of easy negative samples, yielding a better training effect.
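A minimal sketch of a binary focal loss, assuming the standard form FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). The patent names the Focal loss but gives no parameters, so gamma = 2 and alpha = 0.25 below are conventional defaults, not values from the source:

```python
import numpy as np

def focal_loss(prob: np.ndarray, target: np.ndarray,
               gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Binary focal loss averaged over pixels. prob and target are
    (h, w) arrays, target in {0, 1}. The (1 - p_t)^gamma factor
    down-weights easy, well-classified (mostly background) pixels."""
    eps = 1e-7
    p = np.clip(prob, eps, 1 - eps)
    p_t = np.where(target == 1, p, 1 - p)            # prob of the true class
    alpha_t = np.where(target == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

In practice this would be applied per channel against the ground-truth masks constructed above.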
Finally, step S3 is performed. After the probability maps of the first, second and third light bar line segments are obtained, the thickness of the target object can be determined from the three light bar line segments and the target image. Semantic segmentation yields the positions of the first, second and third light bar line segments in the target image; according to the mapping between the pixel coordinate system of the target image and the real-world coordinate system, these positions can be mapped into the real-world coordinate system, and the thickness of the target object can then be determined from the positional relations of the three light bar line segments in that coordinate system.
According to the line structured light based thickness measuring method, a semantic segmentation model segments the target image of the target object to obtain the probability maps of the first, second and third light bar line segments, and the thickness of the target object is then determined from the probability maps of the three light bar line segments and the target image. By introducing semantic features, the method increases the saliency of the laser line in complex background images; by extracting the key line segments or points of the laser line projected onto a complex target object, it reduces the influence of measurement-irrelevant changes in laser line shape on the final result. The measured thickness is therefore more accurate, and the line structured light based thickness measuring method can be applied to scenes with more complex textures.
On the basis of the foregoing embodiment, the method for measuring thickness based on line structured light according to an embodiment of the present invention determines the thickness of the target object based on the probability map of the first light bar line segment, the second light bar line segment, and the third light bar line segment, and the target image, and specifically includes:
extracting laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment respectively based on the probability maps of the first light bar line segment, the second light bar line segment and the third light bar line segment and the target image;
determining a first distance of the second light bar segment from the first light bar segment and a second distance of the second light bar segment from the third light bar segment based on a three-dimensional spatial representation of laser line centers of the first, second, and third light bar segments under a camera coordinate system;
determining a thickness of the target object based on the first distance and the second distance.
Specifically, in the embodiment of the present invention, after the probability maps of the first light bar line segment, the second light bar line segment, and the third light bar line segment are obtained, the laser line centers of the first light bar line segment, the second light bar line segment, and the third light bar line segment may be respectively extracted by combining with the target image, that is, when the laser line center is extracted, the target image and the probability map are considered at the same time. The method for extracting the center of the laser line may be a weighted average according to columns in the image, or may operate according to rows in the image, which is not specifically limited in this embodiment of the present invention.
Since the laser line center of any light bar line segment is extracted by combining the target image and the probability map, the obtained laser line center is a two-dimensional laser line center in the pixel coordinate system. The extracted laser line center of each light bar line segment therefore needs to be mapped into three-dimensional space to obtain the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system. Based on this three-dimensional spatial representation, the first distance between the second light bar line segment and the first light bar line segment and the second distance between the second light bar line segment and the third light bar line segment are determined; the two distances can be computed from the positional relation of the laser line centers of the three light bar line segments in three-dimensional space under the camera coordinate system.
After the first distance and the second distance are obtained, the thickness of the target object can be determined according to the first distance and the second distance. The thickness of the target object may be an average of the first distance and the second distance.
According to the line structured light based thickness measuring method, the laser line center of each light bar line segment is extracted; the first distance between the second and first light bar line segments and the second distance between the second and third light bar line segments are determined from the three-dimensional spatial representation of these laser line centers in the camera coordinate system; and the average of the first distance and the second distance is taken as the thickness of the target object. Since semantics are introduced when the laser line centers are extracted, the measured thickness of the target object is more accurate.
On the basis of the foregoing embodiment, the method for measuring thickness based on line structured light according to an embodiment of the present invention specifically includes, based on the probability maps of the first, second, and third light bar line segments and the target image, extracting laser line centers of the first, second, and third light bar line segments, respectively:
for any light strip line segment of the first light strip line segment, the second light strip line segment and the third light strip line segment, multiplying or adding any element in the probability map of any light strip line segment with an element at a corresponding position in the target image to obtain a new target image;
and extracting the laser line center of any light bar line segment based on the new target image.
Specifically, in the embodiment of the present invention, for any light strip line segment of the first light strip line segment, the second light strip line segment, and the third light strip line segment, any element in the probability map of any light strip line segment may be multiplied or added with an element at a corresponding position in the target image to obtain a new target image, and then the laser line center of any light strip line segment is extracted according to the new target image.
When any element in the probability map of any light bar line segment is multiplied by the element at the corresponding position in the target image, a new target image is obtained using the and operation, and the semantic part of the laser line center is enhanced.
For example, when the probability map of the first light bar line segment and the target image are multiplied element-wise, the values of pixels in the target image that have a high probability of belonging to the first light bar line segment class become relatively large, while the values of pixels that have a high probability of belonging to a non-first-light-bar class (i.e., to the second light bar line segment, the third light bar line segment, or the background class) become relatively small.
After the and operation is finished, the laser line center of any light bar line segment can be extracted from the new target image.
The center of any light bar line segment in each column of the new target image can be obtained column by column, and after the and operation, the laser line center of any light bar line segment can be extracted according to the following formula:

    x_c(y) = Σ_x [ x · I(x, y) · P(x, y) ] / Σ_x [ I(x, y) · P(x, y) ]    (2)

wherein P(x, y) refers to the probability value at row x and column y in the probability map of the given light bar line segment, and the meaning of the remaining symbols in the formula is the same as in formula (1).
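A minimal sketch of this column-wise extraction, assuming the and operation (element-wise product of the gray image and the probability map) supplies the weights; the function name and the synthetic data are illustrative:

```python
import numpy as np

def extract_centers(image, prob_map):
    """Column-wise laser line center via gray-centroid weighting.

    The 'and' operation multiplies the gray image by the segment's
    probability map element-wise; the center row of each column is then
    the weighted average of row indices under those fused weights.
    """
    w = image.astype(float) * prob_map           # fused weights, shape (H, W)
    rows = np.arange(image.shape[0])[:, None]    # row indices x
    denom = w.sum(axis=0)
    centers = (rows * w).sum(axis=0) / np.where(denom > 0, denom, 1)
    return np.where(denom > 0, centers, np.nan)  # columns with no response -> NaN

# A synthetic 5x3 image whose bright stripe sits on row 2:
img = np.zeros((5, 3)); img[2, :] = 200.0
prob = np.zeros((5, 3)); prob[2, :] = 0.9
centers = extract_centers(img, prob)             # → [2., 2., 2.]
```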
When any element in the probability map of any light bar line segment is added to the element at the corresponding position in the target image, a new target image is obtained using the or operation, which also enhances the semantic part of the laser line center. The or operation trades off detail information in the target image against structural information in the semantic mask. After the or operation is finished, the laser line center of any light bar line segment can be extracted from the new target image.
The center of any light bar line segment in each column of the new target image can likewise be obtained column by column, and after the or operation, the laser line center of any light bar line segment can be extracted according to the following formula:

    x_c(y) = Σ_x [ x · (I(x, y) + α · P(x, y)) ] / Σ_x [ I(x, y) + α · P(x, y) ]    (3)

wherein α is a weight parameter that can be set according to actual needs, which is not specifically limited in the embodiment of the present invention, and the meaning of the other symbols is the same as in formula (2).
In the embodiment of the invention, the and operation or the or operation can be selected according to actual requirements, which is not specifically limited by the present invention, as long as the effect of enhancing the semantics is achieved. However, these element-wise operations only enhance the semantics when the resolution of the target image is the same as that of the image output by the semantic segmentation model; when the two resolutions differ, the probability map should first be scaled to the resolution of the target image before the operation is applied.
According to the thickness measuring method based on line structured light, the semantic part of the laser line center is enhanced through the and operation or the or operation, and the laser line center of any light bar line segment is obtained by the weighted average method, so that the extracted laser line center is more accurate and the thickness measurement is more accurate.
On the basis of the above embodiment, in the thickness measuring method based on line structured light provided by the embodiment of the present invention, the target image is acquired based on a camera;
correspondingly, the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system is determined by:
acquiring an internal reference matrix of the camera, and determining a laser plane generated by the linear structure light irradiating a calibration object in the calibration process of the camera;
and respectively determining three-dimensional space representations of laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in a camera coordinate system based on the internal reference matrix and the laser plane.
Specifically, in the embodiment of the present invention, the target image is acquired by a camera. As shown in fig. 2, when the laser line emitter projects a stripe-shaped laser beam onto the surface of the target object to form a laser stripe image, the laser stripe image may be photographed using a camera. The laser stripe image captured by the camera may be a color image; the color image is converted to gray scale, and the converted gray-scale image is the target image. The camera may be selected according to actual needs, and may be, for example, a Charge-coupled Device (CCD) camera, which is not specifically limited in this embodiment of the present invention.
Before determining the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system, an internal reference matrix of the camera used needs to be acquired. The internal reference matrix of the camera can be obtained by calibrating the camera, and meanwhile, in the calibration process, the laser plane generated by the linear structure light irradiating calibration object needs to be determined. After the internal reference matrix and the laser plane of the camera are determined, three-dimensional space representation of the laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in a camera coordinate system can be determined.
The method for calibrating the camera to obtain its internal reference matrix may be the single-plane checkerboard calibration method. The calibration procedure is as follows: print a checkerboard and paste it on a plane as the calibration object; capture photos of the calibration object from different directions by adjusting the orientation of the camera or of the calibration object; extract the checkerboard corner points from the photos; and estimate the values of the five internal parameters and six external parameters under the ideal, distortion-free condition to obtain the internal and external parameter matrices of the camera.
The internal reference matrix is a 3 × 3 matrix, which can be expressed as:

        [ f_x   0    c_x ]
    K = [  0   f_y   c_y ]    (4)
        [  0    0     1  ]

wherein f_x and f_y represent the focal lengths along the x-axis and y-axis, and c_x and c_y represent the offsets of the optical axis along the x-axis and y-axis, respectively.
In the embodiment of the invention, since the image captured by the camera may be distorted (distortion refers to the deviation from an ideal rectilinear projection and is an inherent characteristic of the camera), the distortion can be corrected during camera calibration. Furthermore, since camera distortion only affects the target image acquired by the camera, the distortion coefficients can be ignored if the target image is not distorted.
In the embodiment of the invention, if the target image acquired by the camera is distorted, the radial distortion of the camera can be corrected. Radial distortion includes barrel distortion and pincushion distortion; it originates from the lens shape, is 0 at the image center, and becomes more severe toward the edge. The distortion coefficients of the radial distortion can be calculated during camera calibration, so that the position information can be corrected. The actual radial distortion coefficients can be estimated by first solving the internal and external parameter matrices of the camera and then applying the least square method.
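A small NumPy sketch of a two-coefficient radial model of the kind described above, x_d = x · (1 + k1·r² + k2·r⁴) in normalized coordinates, with the inverse computed by fixed-point iteration; the function names and coefficient values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Apply the radial model x_d = x * (1 + k1*r^2 + k2*r^4) in normalized coords."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

def radial_undistort(xy_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration: x = x_d / (1 + k1*r^2 + k2*r^4),
    re-estimating r from the current undistorted guess each pass."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
        xy = xy_d / (1 + k1 * r2 + k2 * r2 ** 2)
    return xy

pts = np.array([[0.3, -0.2], [0.0, 0.0], [0.5, 0.5]])
k1, k2 = -0.12, 0.03                     # illustrative coefficients
restored = radial_undistort(radial_distort(pts, k1, k2), k1, k2)
```

For mild distortion the iteration converges quickly; `restored` recovers the original points.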
After the internal reference matrix of the camera is acquired, the laser plane generated by the line structured light irradiating the calibration object during camera calibration needs to be determined. As shown in fig. 2, during camera calibration, the placing cube, i.e. the plane on which the target object is placed, is replaced with the calibration object, which may be a small-sized checkerboard. The laser plane generated by the line structured light irradiating the calibration object during camera calibration is the laser plane to be determined.
After the internal reference matrix and the laser plane of the camera are determined, the laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment can be converted into a three-dimensional space coordinate system from a two-dimensional pixel coordinate system according to the internal reference matrix of the camera, and then three-dimensional space representation of the laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in the camera coordinate system is determined according to the laser plane.
According to the thickness measuring method based on the line structured light, disclosed by the embodiment of the invention, the camera is calibrated to obtain the internal reference matrix of the camera and the laser plane generated by the line structured light irradiating the calibration object in the calibration process, so that the three-dimensional space representation of the centers of the laser lines of the first light strip line segment, the second light strip line segment and the third light strip line segment under the camera coordinate system is more accurate, and the error of the measured thickness of the target object is reduced.
On the basis of the foregoing embodiment, the method for measuring thickness based on line structured light according to an embodiment of the present invention for determining a laser plane generated by a calibration object irradiated by the line structured light during a calibration process of the camera specifically includes:
determining laser line projection generated by the linear structure light irradiating calibration object in the calibration process of the camera, and extracting the laser line center of the laser line projection;
and fitting to obtain the laser plane based on the three-dimensional space representation of the laser line center projected by the laser line under the camera coordinate system.
Specifically, in the embodiment of the present invention, a laser line projection generated by irradiating the calibration object with line structure light in the calibration process of the camera may be determined, a laser line center of the laser line projection is extracted, the extracted laser line center is mapped to the camera coordinate system, a three-dimensional spatial representation of the laser line center in the camera coordinate system is obtained, that is, a coordinate of the laser line center in the camera coordinate system is obtained, and the laser plane is obtained by fitting according to the three-dimensional spatial representation of the laser line center in the camera coordinate system.
The calibration object can be a small-sized checkerboard; the line structured light irradiates the calibration object to generate a laser line projection, and an image of the laser line projection is obtained. Laser line center extraction is performed on this image to obtain the laser line center of the laser line projection. The method of extracting the laser line center may be the gray centroid method described above.
After the laser line center of the laser line projection is obtained, the position of the laser line center of the laser line projection in the pixel coordinate system is obtained, and the three-dimensional space representation of the laser line center of the laser line projection in the camera coordinate system can be determined according to the internal reference matrix of the camera.
Fig. 3 is a diagram showing the fitting result of the laser plane in the embodiment of the present invention. Because the laser plane comprises the laser line centers of different laser line projections, the laser plane can be fitted through the three-dimensional space representation of the laser line centers of the laser line projections in the camera coordinate system, and the laser plane is determined.
The laser plane may be defined by a linear equation in three variables, and may be expressed as:

    z_c = a · x_c + b · y_c + c    (5)

wherein a, b and c are the parameters of the linear equation in three variables, and x_c, y_c and z_c are the three-dimensional spatial representation of the laser line center in the camera coordinate system, i.e. its coordinates in the camera coordinate system.
To obtain a more accurate laser plane, the plane can be fitted using RANSAC estimation. The RANSAC fitting process can be as follows: determine a laser plane equation from the three-dimensional coordinates of the laser line centers of some of the laser line projections; test the three-dimensional coordinates of the remaining laser line centers against the obtained plane equation, regarding a point as an inlier if it fits the equation and as an outlier otherwise; if there are enough inliers, consider the plane equation reasonable and re-estimate its parameters using all the inliers; and finally determine the laser plane iteratively.
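The RANSAC procedure above can be sketched in NumPy, assuming the plane is written as z = a·x + b·y + c (one possible form of the linear plane equation); the iteration count, threshold and synthetic data are illustrative:

```python
import numpy as np

def ransac_plane(pts, iters=200, thresh=0.01, seed=0):
    """Fit z = a*x + b*y + c by RANSAC: hypothesize from 3 random points,
    count inliers by |z - (a*x + b*y + c)|, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    A_full = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            abc = np.linalg.solve(A, sample[:, 2])   # plane through the 3 points
        except np.linalg.LinAlgError:
            continue                                 # degenerate (collinear) sample
        inliers = np.abs(A_full @ abc - pts[:, 2]) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the inliers of the best hypothesis
    abc, *_ = np.linalg.lstsq(A_full[best_inliers], pts[best_inliers, 2], rcond=None)
    return abc

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (100, 2))
pts = np.c_[xy, 0.2 * xy[:, 0] - 0.5 * xy[:, 1] + 0.8]   # plane z = 0.2x - 0.5y + 0.8
pts[:10, 2] += rng.uniform(0.5, 1.0, 10)                 # 10 gross outliers
a, b, c = ransac_plane(pts)
```

Because the outliers fall outside the inlier threshold, the refit recovers the clean plane parameters despite 10% contamination.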
According to the thickness measuring method based on the line structured light, disclosed by the embodiment of the invention, the laser plane is obtained by extracting the laser line center projected by the laser line and fitting the laser plane based on the three-dimensional space representation of the laser line center projected by the laser line in the camera coordinate system, so that the obtained laser plane is more accurate, and the thickness measurement of a subsequent target object is facilitated.
On the basis of the foregoing embodiment, the method for measuring a thickness based on line structured light according to an embodiment of the present invention specifically includes, based on the internal reference matrix and the laser plane, determining three-dimensional spatial representations of laser line centers of the first light bar line segment, the second light bar line segment, and the third light bar line segment in a camera coordinate system, where the three-dimensional spatial representations include:
for any of the first, second, and third light bar segments, determining a three-dimensional spatial representation of a laser line center of the any light bar segment under a normalized camera coordinate system based on the internal reference matrix;
determining a scaling factor based on the three-dimensional space representation of the laser plane and the laser line center of any light bar line segment under a normalized camera coordinate system;
and determining the three-dimensional space representation of the laser line center of any light bar line segment in the camera coordinate system based on the scaling factor and the three-dimensional space representation of the laser line center of any light bar line segment in the normalized camera coordinate system.
Specifically, in the embodiment of the present invention, based on the internal reference matrix of the camera, the laser line center of any one of the first, second and third light bar line segments may be mapped from the pixel coordinate system into the normalized camera coordinate system, i.e. a three-dimensional spatial representation of the laser line center of that light bar line segment in the normalized camera coordinate system is obtained. After this representation is obtained, the scaling factor can be determined by combining it with the laser plane obtained by the fitting. Based on the scaling factor and the three-dimensional spatial representation of the laser line center in the normalized camera coordinate system, the three-dimensional spatial representation of the laser line center in the camera coordinate system can be determined.
Wherein the three-dimensional spatial representation of the laser line center of any light bar line segment in the normalized camera coordinate system can be determined from the internal reference matrix by the following formula:

    [x_n, y_n, 1]^T = K^(-1) · [u, v, 1]^T    (6)

wherein (u, v) is the coordinate point of the laser line center of the light bar line segment in the undistorted target image, namely the coordinates of the laser line center in the pixel coordinate system, and (x_n, y_n, 1) is the result of mapping those coordinates into the normalized camera coordinate system, namely the three-dimensional spatial representation of the laser line center of the light bar line segment in the normalized camera coordinate system.
After the three-dimensional spatial representation of the laser line center of any light bar line segment in the normalized camera coordinate system is determined, the scaling factor can be determined by combining the laser plane equation. Substituting the scaled point s · (x_n, y_n, 1) into the laser plane equation gives:

    a · s · x_n + b · s · y_n + c = s    (7)

The scaling factor is then:

    s = c / (1 - a · x_n - b · y_n)    (8)
After determining the scaling factor, the three-dimensional spatial representation of the laser line center of any light bar line segment in the camera coordinate system can be determined by the following formula:

    (X_c, Y_c, Z_c) = s · (x_n, y_n, 1)    (9)

wherein (X_c, Y_c, Z_c) is the three-dimensional spatial representation of the laser line center of the light bar line segment in the camera coordinate system, i.e. its coordinates in the camera coordinate system.
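A compact sketch of this back-projection chain, assuming the laser plane is written as z = a·x + b·y + c; the intrinsic matrix and plane values are illustrative assumptions:

```python
import numpy as np

def backproject(uv, K, plane_abc):
    """Lift a pixel (u, v) onto the laser plane z = a*x + b*y + c.

    Normalized ray [x_n, y_n, 1] = K^-1 [u, v, 1]; the 3D point is
    s * [x_n, y_n, 1], and substituting it into the plane equation
    yields the scaling factor s = c / (1 - a*x_n - b*y_n).
    """
    a, b, c = plane_abc
    x_n, y_n, _ = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    s = c / (1.0 - a * x_n - b * y_n)
    return s * np.array([x_n, y_n, 1.0])

K = np.array([[800.0, 0.0, 320.0],     # illustrative intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane = (0.1, -0.2, 500.0)             # illustrative laser plane z = 0.1x - 0.2y + 500
P = backproject((400.0, 300.0), K, plane)
```

The recovered point lies on the laser plane and re-projects to the original pixel, which is a quick self-check of the two steps.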
According to the thickness measuring method based on the line structured light, the scaling factor is determined, the scaling factor and the three-dimensional space representation of the laser line center of any light strip line segment in the normalized camera coordinate system are used for determining the three-dimensional space representation of the laser line center of any light strip line segment in the camera coordinate system, the scaling problem in the normalized camera coordinate system is eliminated, and the distance between different points is kept consistent with the distance between the different points in the real world coordinate system.
On the basis of the foregoing embodiment, in the thickness measuring method based on line structured light according to an embodiment of the present invention, determining the first distance between the second light bar line segment and the first light bar line segment and the second distance between the second light bar line segment and the third light bar line segment based on the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system specifically includes:
determining a first endpoint of the second light bar segment that is proximate to the first light bar segment and a second endpoint of the second light bar segment that is proximate to the third light bar segment based on a three-dimensional spatial representation of a laser line center of the second light bar segment in a camera coordinate system;
determining a first straight line where the first light bar line segment is located based on three-dimensional space representation of the laser line center of the first light bar line segment in a camera coordinate system, and determining a second straight line where the third light bar line segment is located based on three-dimensional space representation of the laser line center of the third light bar line segment in the camera coordinate system;
and determining the distance between the first end point and the first straight line as the first distance, and determining the distance between the second end point and the second straight line as the second distance.
Specifically, in the embodiment of the present invention, the two end points of the second light bar line segment may be determined according to the three-dimensional spatial representation of the laser line center of the second light bar line segment in the camera coordinate system: the end point close to the first light bar line segment is taken as the first endpoint, and the end point close to the third light bar line segment is taken as the second endpoint.
Similarly, a first straight line in which the first light bar line segment is located can be determined according to the three-dimensional space representation of the laser line center of the first light bar line segment in the camera coordinate system; and determining a second straight line where the third light bar line segment is located according to the three-dimensional space representation of the laser line center of the third light bar line segment in the camera coordinate system.
And finally, calculating the distance between the first end point and the first straight line to be the first distance, and calculating the distance between the second end point and the second straight line to be the second distance.
The first straight line where the first light bar line segment is located and the second straight line where the third light bar line segment is located can be determined by fitting with the Huber loss, which yields the parameters of the first, second and third light bar line segments in the camera coordinate system, namely their slopes and offsets. After these parameters are determined by the Huber fitting, the first and second end points of the middle line segment, as well as the first and second straight lines, can be determined, and the first distance and the second distance are then calculated according to the point-to-line distance formula.
For example, the determined first endpoint is A, and the first straight line is the straight line fitted through the start point P_1 and the end point P_2 of the first light bar line segment; likewise, the second endpoint is B, and the second straight line is the straight line fitted through the start point Q_1 and the end point Q_2 of the third light bar line segment.

The first distance may be determined by the following point-to-line distance formula:

    d_1 = |(A - P_1) × (A - P_2)| / |P_2 - P_1|    (10)

Similarly, the second distance may be determined by the following formula:

    d_2 = |(B - Q_1) × (B - Q_2)| / |Q_2 - Q_1|    (11)

wherein × denotes the vector cross product and |·| denotes the vector norm.
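The point-to-line distances d_1 and d_2 can be computed with the standard cross-product formula (parallelogram area divided by base length); the coordinates below are hypothetical:

```python
import numpy as np

def point_line_distance(p, a1, a2):
    """3D distance from point p to the line through a1 and a2:
    |(p - a1) x (p - a2)| / |a2 - a1|."""
    p, a1, a2 = map(np.asarray, (p, a1, a2))
    return np.linalg.norm(np.cross(p - a1, p - a2)) / np.linalg.norm(a2 - a1)

# Endpoint of the middle segment vs. the fitted line of an outer segment
# (hypothetical coordinates): line along the x-axis, point 3 units above it.
d1 = point_line_distance([1.0, 3.0, 0.0], [0.0, 0.0, 0.0], [5.0, 0.0, 0.0])  # → 3.0
# thickness as the mean of the two point-to-line distances, per the method
d2 = 3.0                                   # assumed value for the second side
thickness = 0.5 * (d1 + d2)
```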
Fig. 4 is a schematic diagram of the measurement error of the thickness measuring method based on line structured light according to the embodiment of the present invention. Fig. 5 is a schematic diagram of the line segment fitting at point A in fig. 4, and fig. 6 is a schematic diagram of the line segment fitting at point B in fig. 4.
According to the thickness measuring method based on the line structured light, the two end points of the second light bar line segment, the first straight line where the first light bar line segment is located and the second straight line where the third light bar line segment is located are determined through fitting, and the first distance and the second distance are determined according to the distance between the points and the straight lines, so that the accuracy of the thickness measurement of the target object is improved.
Fig. 7 is a detailed flowchart of a thickness measurement method based on line structured light according to an embodiment of the present invention. As shown in fig. 7, the method includes:
inputting the target image into a semantic segmentation model to obtain probability maps of the first light bar line segment, the second light bar line segment and the third light bar line segment output by the semantic segmentation model;
multiplying or adding the target image element-wise with the probability map of each of the first, second and third light bar line segments to obtain three new target images;
extracting the laser line center of each light bar line segment from the corresponding new target image, the new target image being binarized during extraction;
performing distortion correction on the extracted laser line center of each light bar line segment, and determining the three-dimensional space representation of the laser line center of each light bar line segment;
performing fitting based on the three-dimensional space representations of the laser line centers, and determining the two end points of the second light bar line segment, the straight line on which the first light bar line segment lies and the straight line on which the third light bar line segment lies;
and determining the thickness of the target object from point-to-line distances, based on the two end points of the second light bar line segment, the straight line on which the first light bar line segment lies and the straight line on which the third light bar line segment lies.
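The probability-weighting and center-extraction steps above can be sketched as follows. This is a minimal numpy sketch, assuming a grayscale target image and a per-pixel probability map from the segmentation model, and using a column-wise gray-level centroid as a stand-in for the patent's (unspecified) center-extraction algorithm; all function and variable names are illustrative:

```python
import numpy as np

def extract_laser_center(image, prob_map, threshold=0.5):
    """Weight the image by a light-bar probability map (element-wise product),
    binarize via the probability threshold, and take the column-wise gray-level
    centroid as the sub-pixel laser line center."""
    weighted = image.astype(np.float64) * prob_map  # "new target image"
    mask = prob_map > threshold                     # binarization step
    weighted[~mask] = 0.0
    centers = []                                    # (row, col) sub-pixel centers
    for col in range(weighted.shape[1]):
        w = weighted[:, col]
        if w.sum() > 0:
            row = np.average(np.arange(len(w)), weights=w)  # intensity centroid
            centers.append((row, col))
    return np.array(centers)

# Toy example: a horizontal light bar centered on row 5
img = np.zeros((10, 4))
img[4:7, :] = np.array([[100.0], [200.0], [100.0]])
prob = (img > 0).astype(float)
centers = extract_laser_center(img, prob)  # every column's center lands on row 5
```

The element-wise product suppresses background reflections that a plain intensity threshold would keep, which is the point of feeding the probability map back into the extraction.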
As shown in fig. 8, on the basis of the above embodiment, an embodiment of the present invention provides a thickness measurement system based on line structured light, including:
a target image obtaining module 801, configured to obtain a target image of a target object irradiated by the line structure light;
a semantic segmentation module 802, configured to input the target image into a semantic segmentation model, and obtain a probability map of a first light bar segment, a second light bar segment, and a third light bar segment in the target image output by the semantic segmentation model, where the second light bar segment is located between the first light bar segment and the third light bar segment, and the second light bar segment is located on a surface of a target object in the target image;
a thickness determining module 803, configured to determine the thickness of the target object based on the probability map of the first, second, and third light bar line segments and the target image;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
On the basis of the foregoing embodiment, in the thickness measurement system based on line structured light provided in an embodiment of the present invention, the thickness determination module specifically includes:
a laser line center extraction sub-module, configured to extract laser line centers of the first light bar line segment, the second light bar line segment, and the third light bar line segment, respectively, based on the probability maps of the first light bar line segment, the second light bar line segment, and the third light bar line segment, and the target image;
a distance determination sub-module for determining a first distance of the second light bar line segment from the first light bar line segment and a second distance of the second light bar line segment from the third light bar line segment based on a three-dimensional spatial representation of laser line centers of the first, second and third light bar line segments in a camera coordinate system;
a thickness determination submodule to determine a thickness of the target object based on the first distance and the second distance.
On the basis of the foregoing embodiment, in the thickness measurement system based on line structured light provided in an embodiment of the present invention, the laser line center extraction sub-module specifically includes:
a new target image obtaining subunit, configured to, for each of the first light bar line segment, the second light bar line segment and the third light bar line segment, multiply or add each element in the probability map of that light bar line segment with the element at the corresponding position in the target image, so as to obtain a new target image;
and the laser line center extraction subunit is used for extracting the laser line center of any light bar line segment based on the new target image.
On the basis of the above embodiment, in the thickness measurement system based on line structured light provided by the embodiment of the present invention, the target image is acquired based on a camera;
correspondingly, the distance determination submodule specifically includes:
the laser plane determining subunit is used for acquiring the internal reference matrix of the camera and determining a laser plane generated by the linear structure light irradiating a calibration object in the calibration process of the camera;
and the three-dimensional space representation determining subunit is configured to determine, based on the internal reference matrix and the laser plane, three-dimensional space representations of laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in a camera coordinate system, respectively.
On the basis of the foregoing embodiment, in the thickness measurement system based on line structured light provided in an embodiment of the present invention, the laser plane determination subunit is specifically configured to:
determining the laser line projection generated by the line structured light irradiating the calibration object in the calibration process of the camera, and extracting the laser line center of the laser line projection;
and fitting to obtain the laser plane based on the three-dimensional space representation of the laser line center projected by the laser line under the camera coordinate system.
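The plane fitting described above can be sketched with a standard least-squares fit over the 3-D laser-line centers; a minimal numpy sketch using SVD (the patent does not specify its fitting algorithm, so this is one common choice, with illustrative data):

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n.x + d = 0 to an Nx3 array of points: the normal is the
    right singular vector of the centered points with the smallest singular
    value (the direction of least variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]            # unit normal of the best-fit plane
    d = -normal @ centroid     # plane offset so that n.x + d = 0 on the plane
    return normal, d

# Illustrative laser-line centers lying on the plane z = 2
pts = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2.0]])
n, d = fit_plane(pts)          # normal is (0, 0, +/-1), d is the matching offset
```

In practice the points come from the laser line projected onto the calibration object at several poses, so they span the plane rather than a single line.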
On the basis of the foregoing embodiment, in the thickness measurement system based on line structured light provided in an embodiment of the present invention, the three-dimensional spatial representation determining subunit is specifically configured to:
for any of the first, second, and third light bar segments, determining a three-dimensional spatial representation of a laser line center of the any light bar segment under a normalized camera coordinate system based on the internal reference matrix;
determining a scaling factor based on the three-dimensional space representation of the laser plane and the laser line center of any light bar line segment under a normalized camera coordinate system;
and determining the three-dimensional space representation of the laser line center of any light bar line segment in the camera coordinate system based on the scaling factor and the three-dimensional space representation of the laser line center of any light bar line segment in the normalized camera coordinate system.
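The scale-factor computation above can be sketched as follows: a pixel (u, v) is back-projected through the intrinsic matrix K to a normalized camera ray (z = 1), and the scale factor is the ray's intersection depth with the laser plane n.X + d = 0. The intrinsics and plane below are illustrative values, not calibration results from the patent:

```python
import numpy as np

def pixel_to_camera_3d(u, v, K, normal, d):
    """Back-project pixel (u, v) onto the laser plane n.X + d = 0,
    returning the 3-D point in the camera coordinate system."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera coords
    s = -d / (normal @ ray)                          # scale: n.(s*ray) + d = 0
    return s * ray

K = np.array([[800.0, 0, 320],    # illustrative pinhole intrinsics
              [0, 800, 240],
              [0, 0, 1]])
normal, d = np.array([0.0, 0.0, 1.0]), -500.0  # illustrative plane z = 500
X = pixel_to_camera_3d(320, 240, K, normal, d)  # principal point -> (0, 0, 500)
```

This is why the laser-plane calibration matters: the plane equation is what turns a single camera's 2-D laser-center pixels into metric 3-D points.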
On the basis of the foregoing embodiment, in the thickness measurement system based on line structured light provided in an embodiment of the present invention, the distance determination submodule specifically further includes:
an endpoint determination subunit to determine a first endpoint of the second light bar segment that is proximate to the first light bar segment and a second endpoint of the second light bar segment that is proximate to the third light bar segment based on a three-dimensional spatial representation of a laser line center of the second light bar segment in a camera coordinate system;
the straight line determining subunit is used for determining a first straight line where the first light bar line segment is located based on three-dimensional space representation of the laser line center of the first light bar line segment in a camera coordinate system, and determining a second straight line where the third light bar line segment is located based on three-dimensional space representation of the laser line center of the third light bar line segment in the camera coordinate system;
and the distance determining subunit is configured to determine that the distance between the first end point and the first straight line is the first distance, and determine that the distance between the second end point and the second straight line is the second distance.
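The first and second distances above are ordinary 3-D point-to-line distances, which can be sketched with the cross-product formula (how the two distances are then combined into a single thickness is the embodiment's choice and is not fixed here; the points below are illustrative):

```python
import numpy as np

def point_to_line_distance(p, a, direction):
    """Distance from point p to the line through a with the given direction:
    |(p - a) x u| for unit direction u."""
    u = direction / np.linalg.norm(direction)
    return np.linalg.norm(np.cross(p - a, u))

# Illustrative: first endpoint of the second light bar segment vs. the
# first fitted straight line
p = np.array([0.0, 3.0, 0.0])   # first endpoint (hypothetical coordinates)
a = np.array([0.0, 0.0, 0.0])   # a point on the first straight line
u = np.array([1.0, 0.0, 0.0])   # direction of the first straight line
first_distance = point_to_line_distance(p, a, u)
```

The same call with the second endpoint and the second straight line yields the second distance.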
Specifically, the functions of the modules in the thickness measurement system based on line structured light provided in the embodiment of the present invention correspond to the operation flows of the steps in the embodiments of the method, and the implementation effects are also consistent.
Fig. 9 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 9, the electronic device may include: a Processor 910, a Communication Interface 920, a Memory 930 and a Communication Bus 940, wherein the Processor 910, the Communication Interface 920 and the Memory 930 communicate with each other via the Communication Bus 940. The Processor 910 may invoke logic instructions in the Memory 930 to perform the thickness measurement method based on line structured light provided by the embodiments described above, the method comprising: acquiring a target image of a target object irradiated by the line structured light; inputting the target image into a semantic segmentation model to obtain probability maps of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image output by the semantic segmentation model, wherein the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of the target object in the target image; determining the thickness of the target object based on the probability maps of the first, second and third light bar line segments and the target image; the semantic segmentation model being obtained by training based on image samples carrying a first light bar line segment label, a second light bar line segment label and a third light bar line segment label.
Furthermore, the logic instructions in the Memory 930 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the line structured light-based thickness measurement method provided by the above embodiments, the method including: acquiring a target image of a target object irradiated by the line structure light; inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image; determining a thickness of the target object based on the probability map of the first, second, and third light bar segments and the target image; the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor, is implemented to perform the line structured light based thickness measuring method provided in the above embodiments, the method including: acquiring a target image of a target object irradiated by the line structure light; inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image; determining a thickness of the target object based on the probability map of the first, second, and third light bar segments and the target image; the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A thickness measuring method based on line structured light is characterized by comprising the following steps:
acquiring a target image of a target object irradiated by the line structure light;
inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image;
determining a thickness of the target object based on the probability map of the first, second, and third light bar segments and the target image;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
2. The line structured light based thickness measurement method according to claim 1, wherein the determining the thickness of the target object based on the probability map of the first, second, and third light bar line segments and the target image specifically comprises:
extracting laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment respectively based on the probability maps of the first light bar line segment, the second light bar line segment and the third light bar line segment and the target image;
determining a first distance of the second light bar segment from the first light bar segment and a second distance of the second light bar segment from the third light bar segment based on a three-dimensional spatial representation of laser line centers of the first, second, and third light bar segments under a camera coordinate system;
determining a thickness of the target object based on the first distance and the second distance.
3. The line structured light thickness measuring method according to claim 2, wherein the extracting the laser line centers of the first, second, and third light bar line segments based on the probability maps of the first, second, and third light bar line segments and the target image respectively comprises:
for any light strip line segment of the first light strip line segment, the second light strip line segment and the third light strip line segment, multiplying or adding any element in the probability map of any light strip line segment with an element at a corresponding position in the target image to obtain a new target image;
and extracting the laser line center of any light bar line segment based on the new target image.
4. The line structured light based thickness measurement method of claim 2, wherein the target image is acquired based on a camera;
correspondingly, the three-dimensional spatial representation of the laser line centers of the first, second and third light bar line segments in the camera coordinate system is determined by:
acquiring an internal reference matrix of the camera, and determining a laser plane generated by the linear structure light irradiating a calibration object in the calibration process of the camera;
and respectively determining three-dimensional space representations of laser line centers of the first light bar line segment, the second light bar line segment and the third light bar line segment in a camera coordinate system based on the internal reference matrix and the laser plane.
5. The method for measuring thickness based on line structured light according to claim 4, wherein the determining the laser plane generated by the line structured light irradiating the calibration object during the calibration process of the camera specifically comprises:
determining laser line projection generated by the linear structure light irradiating calibration object in the calibration process of the camera, and extracting the laser line center of the laser line projection;
and fitting to obtain the laser plane based on the three-dimensional space representation of the laser line center projected by the laser line under the camera coordinate system.
6. The line structured light based thickness measurement method according to claim 4, wherein the determining a three-dimensional spatial representation of laser line centers of the first, second and third light bar segments, respectively, in a camera coordinate system based on the internal reference matrix and the laser plane, specifically comprises:
for any of the first, second, and third light bar segments, determining a three-dimensional spatial representation of a laser line center of the any light bar segment under a normalized camera coordinate system based on the internal reference matrix;
determining a scaling factor based on the three-dimensional space representation of the laser plane and the laser line center of any light bar line segment under a normalized camera coordinate system;
and determining the three-dimensional space representation of the laser line center of any light bar line segment in the camera coordinate system based on the scaling factor and the three-dimensional space representation of the laser line center of any light bar line segment in the normalized camera coordinate system.
7. The line structured light thickness measuring method according to any one of claims 2 to 6, wherein the determining a first distance of the second light bar segment from the first light bar segment and a second distance of the second light bar segment from the third light bar segment based on a three-dimensional spatial representation of laser line centers of the first light bar segment, the second light bar segment and the third light bar segment in a camera coordinate system comprises:
determining a first endpoint of the second light bar segment that is proximate to the first light bar segment and a second endpoint of the second light bar segment that is proximate to the third light bar segment based on a three-dimensional spatial representation of a laser line center of the second light bar segment in a camera coordinate system;
determining a first straight line where the first light bar line segment is located based on three-dimensional space representation of the laser line center of the first light bar line segment in a camera coordinate system, and determining a second straight line where the third light bar line segment is located based on three-dimensional space representation of the laser line center of the third light bar line segment in the camera coordinate system;
and determining the distance between the first end point and the first straight line as the first distance, and determining the distance between the second end point and the second straight line as the second distance.
8. A thickness measuring system based on line structured light is characterized by comprising:
The target image acquisition module is used for acquiring a target image of a target object irradiated by the line structure light;
the semantic segmentation module is used for inputting the target image into a semantic segmentation model to obtain a probability map of a first light bar line segment, a second light bar line segment and a third light bar line segment in the target image, wherein the first light bar line segment, the second light bar line segment and the third light bar line segment are output by the semantic segmentation model, the second light bar line segment is located between the first light bar line segment and the third light bar line segment, and the second light bar line segment is located on the surface of a target object in the target image;
a thickness determination module to determine a thickness of the target object based on the target image and a probability map of the first, second, and third light bar line segments;
the semantic segmentation model is obtained by training based on an image sample carrying a first light bar segment label, a second light bar segment label and a third light bar segment label.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of the line structured light based thickness measurement method according to any of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the line structured light based thickness measuring method according to any one of claims 1 to 7.
CN202110611414.8A 2021-06-02 2021-06-02 Thickness measuring method and system based on line structured light Pending CN113048899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110611414.8A CN113048899A (en) 2021-06-02 2021-06-02 Thickness measuring method and system based on line structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110611414.8A CN113048899A (en) 2021-06-02 2021-06-02 Thickness measuring method and system based on line structured light

Publications (1)

Publication Number Publication Date
CN113048899A true CN113048899A (en) 2021-06-29

Family

ID=76518599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110611414.8A Pending CN113048899A (en) 2021-06-02 2021-06-02 Thickness measuring method and system based on line structured light

Country Status (1)

Country Link
CN (1) CN113048899A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113828439A (en) * 2021-09-09 2021-12-24 中国科学院自动化研究所 Pattern spraying detection system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103673901A (en) * 2013-11-22 2014-03-26 大连日佳电子有限公司 Solder paste thickness testing method and solder paste thickness tester
CN105783746A (en) * 2016-05-23 2016-07-20 南京林业大学 Wooden product thickness detection system and detection method thereof
CN107578464A (en) * 2017-06-30 2018-01-12 长沙湘计海盾科技有限公司 A kind of conveyor belt workpieces measuring three-dimensional profile method based on line laser structured light
CN109033107A (en) * 2017-06-09 2018-12-18 腾讯科技(深圳)有限公司 Image search method and device, computer equipment and storage medium
CN110175595A (en) * 2019-05-31 2019-08-27 北京金山云网络技术有限公司 Human body attribute recognition approach, identification model training method and device
CN112381948A (en) * 2020-11-03 2021-02-19 上海交通大学烟台信息技术研究院 Semantic-based laser stripe center line extraction and fitting method
CN112419285A (en) * 2020-11-27 2021-02-26 上海商汤智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN112509052A (en) * 2020-12-22 2021-03-16 苏州超云生命智能产业研究院有限公司 Method and device for detecting fovea maculata, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Yanrong et al.: "Center extraction method for line structured light in complex environments", Computer Engineering and Design *


Similar Documents

Publication Publication Date Title
CN110264416B (en) Sparse point cloud segmentation method and device
CN109034017B (en) Head pose estimation method and machine readable storage medium
US20190141247A1 (en) Threshold determination in a ransac algorithm
US20200057831A1 (en) Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation
CN110119679B (en) Object three-dimensional information estimation method and device, computer equipment and storage medium
CN109711246B (en) Dynamic object recognition method, computer device and readable storage medium
CN101697233A (en) Structured light-based three-dimensional object surface reconstruction method
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN109711472B (en) Training data generation method and device
CN109934873B (en) Method, device and equipment for acquiring marked image
CN111768450A (en) Automatic detection method and device for line deviation of structured light camera based on speckle pattern
CN113689578A (en) Human body data set generation method and device
CN112991193A (en) Depth image restoration method, device and computer-readable storage medium
CN112184793B (en) Depth data processing method and device and readable storage medium
CN112102380A (en) Registration method and related device for infrared image and visible light image
CN113379815A (en) Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN115953483A (en) Parameter calibration method and device, computer equipment and storage medium
CN115830135A (en) Image processing method and device and electronic equipment
CN111798422A (en) Checkerboard angular point identification method, device, equipment and storage medium
CN111462246A (en) Equipment calibration method of structured light measurement system
CN113048899A (en) Thickness measuring method and system based on line structured light
CN114463437A (en) Camera calibration method, device, equipment and computer readable medium
CN115619783B (en) Method and device for detecting product processing defects, storage medium and terminal
CN115457206A (en) Three-dimensional model generation method, device, equipment and storage medium
CN112270693B (en) Method and device for detecting motion artifact of time-of-flight depth camera

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210629

RJ01 Rejection of invention patent application after publication