CN113096094B - Three-dimensional object surface defect detection method

Info

Publication number
CN113096094B
Authority
CN
China
Prior art keywords
detected object
image
point cloud
mask
point
Prior art date
Legal status
Active
Application number
CN202110390016.8A
Other languages
Chinese (zh)
Other versions
CN113096094A (en
Inventor
吴俊 (Wu Jun)
Current Assignee
Wu Jun
Original Assignee
Individual
Application filed by Individual
Priority to CN202110390016.8A
Publication of CN113096094A
Application granted
Publication of CN113096094B
Status: Active


Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T7/11 Region-based segmentation
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses a three-dimensional object surface defect detection method, which comprises the following steps: training with an image data set of the detected object to obtain a target segmentation model; segmenting the detected object area in real time by utilizing the target segmentation model, and reconstructing the detected object in real time to obtain a point cloud model of the detected object; carrying out attitude estimation to obtain the pose of the detected object in a camera coordinate system; calculating the normal vector of each point on the three-dimensional model, and splitting the three-dimensional model into surfaces based on the similarity of the normal vectors to obtain a surface set, a flatness set corresponding to the surface set and a pose set of the surface set in a camera coordinate system; rasterizing the surface set to obtain a surface defect detection path of the detected object; and performing defect detection on the surface of the detected object according to the defect detection path. The invention effectively solves the problem that all surface information of an object cannot be obtained through a single scan.

Description

Three-dimensional object surface defect detection method
Technical Field
The invention belongs to the technical field of image defect detection in computer vision, and particularly relates to a three-dimensional object surface defect detection method.
Background
Unmanned, intelligent factories are the future trend of development. Processes such as automatic destacking, automatic palletizing, automatic loading and unloading and automatic machining have already reached a fairly high degree of intelligence, but quality inspection still requires a large amount of manpower, especially the appearance inspection of objects with complex shapes and of objects of large size. For example, in the field of automobile production, the car body is usually inspected manually after the surface paint spraying is completed. On automatic production lines, carrying a camera with a mechanical arm along a fixed trajectory has very low flexibility, which seriously restricts the degree of automation for small-batch products. The degree of automation in the production of industrial products such as trucks, buses, machine tools, ships, aircraft and trains is low, and their surface defect detection relies on manual work. In order to improve the standardization and quality of products, how to automatically complete the defect detection of the surface of a complex three-dimensional object by relying on instruments and equipment is an urgent problem.
At present, the common methods for detecting defects on complex three-dimensional surfaces include:
Method 1: quality inspectors perform the inspection visually, by touch and with some hand-held instruments.
Drawbacks: the detection result depends heavily on the technical level, working attitude and physical state of the worker, and local areas are often missed, so re-inspection and spot checks are frequently required.
Method 2: a complex image acquisition system is built with a plurality of cameras and the measured object is photographed from a plurality of angles.
Drawbacks: the cost of constructing the detection environment is high, and the environment is usually built for objects of one model or one shape, so the flexibility of this method is generally poor; in addition, the pose of the object must be relatively fixed and a lot of auxiliary work has to be handled manually.
Method 3: a plurality of mechanical arms move to fixed points each time to collect image data for fixed-point defect detection.
Drawbacks: taking the welding quality inspection of vehicle frames as an example, this method places high requirements on the positioning accuracy of each frame, the detection points must be re-set for every vehicle shape, and the mechanical arms must be taught to move to the target detection points along specific paths; moreover, since the position of each mechanical arm is fixed, at least four mechanical arms are needed to inspect one vehicle, so the utilization efficiency of the mechanical arms is low and the flexibility is poor.
Disclosure of Invention
In order to solve the problem of poor flexibility in the detection of surface defects of complex three-dimensional objects in the prior art, the invention provides a three-dimensional object surface defect detection method.
The invention is realized by the following technical scheme:
A three-dimensional object surface defect detection method comprises the following steps:
Step S1, training by utilizing an RGB image data set of a detected object to obtain a target segmentation model;
Step S2, collecting image information of a plurality of angles of the detected object, performing real-time segmentation on the detected object area by using the target segmentation model, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
Step S3, carrying out attitude estimation based on a point cloud model and a three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
Step S4, calculating normal vectors of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on similarity of the normal vectors to obtain a surface set, a flatness set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
Step S5, rasterizing the surface set to obtain a surface defect detection path of the detected object;
Step S6, performing defect detection on the surface of the detected object according to the defect detection path.
Because of the particularity of the surface of the detected object, it is difficult to obtain complete and reliable surface information of the detected object through a single scan during image acquisition and processing. The present application therefore maps and associates the detected object reconstructed in real time with the theoretical three-dimensional model of the detected object, so that the surface defect detection path of the detected object is obtained by image processing of the three-dimensional model (complete and reliable surface data), and defect detection is performed on the detected object according to this detection path. This improves the flexibility of detection and makes the method applicable to surface defect detection of three-dimensional objects of various types, sizes and postures. In addition, the method can realize surface defect detection of large complex objects without adding extra detection equipment, so it is suitable for detecting surface defects of large complex objects and can greatly reduce the detection cost.
Preferably, step S2 of the present invention specifically includes:
Step S21, collecting RGB images and depth images of the detected object at the current moment, processing the collected images in real time by utilizing the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
Step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate around the detected object from the current position by a preset angle and to acquire RGB images and depth images of the detected object, processing the acquired images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
Step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
Step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets the preset condition;
Step S25, taking the new first point cloud as a point cloud model of the detected object.
Preferably, in the step S21 and the step S22 of the present invention, the processing of the collected image in real time and the adjusting of the pose of the camera and the distance between the camera and the detected object in real time specifically include:
step S211, inputting the acquired RGB image into the target segmentation model to carry out target segmentation, and obtaining a mask image of the detected object;
Step S212, obtaining a depth image and an RGB image only comprising the detected object based on the mask image of the detected object, and generating a point cloud based on the depth image and the RGB image only comprising the detected object;
Step S213, calculating the mass center of a mask region in a mask image of the detected object, calculating the distance from the mass center to the center of the mask image under a pixel coordinate system, and adjusting the posture of the camera based on the distance;
Step S214, calculating the coincidence ratio of the boundary of the mask region in the mask map of the detected object and the boundary of the mask map, and adjusting the distance between the camera and the detected object according to the coincidence ratio.
Preferably, step S213 of the present invention specifically includes:
the distance from the center of mass to the center of the mask map in the pixel coordinate system is calculated by the following steps:
d ccu = u mc - u 0
d ccv = v mc - v 0
wherein u mc is the coordinate value of the centroid of the mask area A mask in the first mask map on the u-axis of the pixel coordinate system, u 0 is the coordinate value of the center of the first mask map on the u-axis of the pixel coordinate system, v mc is the coordinate value of the centroid of the mask area A mask in the first mask map on the v-axis of the pixel coordinate system, and v 0 is the coordinate value of the center of the first mask map on the v-axis of the pixel coordinate system;
The camera pose is adjusted in real time according to the following conditions:
if d ccu <0, controlling the camera to rotate around the y-axis of the camera coordinate system in a counterclockwise direction;
If d ccu >0, controlling the camera to rotate around the y-axis of the camera coordinate system in a clockwise direction;
If d ccv <0, controlling the camera to rotate around the x-axis of the camera coordinate system in a counterclockwise direction;
If d ccv >0, the camera is controlled to rotate in a clockwise direction about the x-axis of the camera coordinate system.
The step S214 of the present invention specifically includes:
Calculating the coincidence ratio L overlap of the boundary of the mask region in the mask map of the detected object and the boundary of the mask map by the following formula:
L overlap = Count(P edge) / (2 × (W + H))
Wherein P edge is a point set formed by points of the mask region at the edge of the image, Count(P edge) is the number of points in the point set P edge, W is the number of pixels of the image in the u-axis direction, and H is the number of pixels of the image in the v-axis direction;
If L overlap > ρ, wherein ρ is any fraction less than 0.5, controlling the mobile robot to be far away from the detected object, collecting the image of the detected object in real time in the moving process, and processing according to the steps S211, S213 and S214 until L overlap is less than or equal to ρ;
if L overlap = 0, then the duty ratio of the mask area in the mask map is calculated:
A overlap = S mask / (W × H)
Wherein A overlap is the duty ratio of the mask area A mask in the first mask image, S mask is the number of pixels included in the mask area A mask, W is the number of pixels of the first mask image in the u-axis direction, and H is the number of pixels of the first mask image in the v-axis direction;
if A overlap < alpha, wherein alpha is any decimal less than 1, controlling the mobile robot to move towards the detected object, acquiring the image of the detected object in real time in the moving process, and processing according to the steps S211, S213 and S214 until A overlap is more than or equal to alpha.
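The adjustment logic of steps S211 to S214 can be summarized in a small self-contained sketch, given here for illustration only; the default thresholds ρ and α, the textual return values and the normalization of L overlap by the image perimeter 2(W + H) are assumptions of the sketch, not values fixed by the method.

import numpy as np

def camera_adjustment(mask, rho=0.3, alpha=0.6):
    """Sketch of steps S213-S214: derive camera adjustments from a binary mask.

    mask: H x W array, nonzero where the detected object was segmented.
    Returns (list of rotation hints, distance hint) as plain strings.
    """
    H, W = mask.shape
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return [], "search"                         # object not visible at all

    # Step S213: centroid offset from the image centre drives the camera rotation.
    u_mc, v_mc = xs.mean(), ys.mean()
    d_ccu, d_ccv = u_mc - W / 2.0, v_mc - H / 2.0
    rotations = []
    if d_ccu < 0:
        rotations.append("rotate counterclockwise about the camera y-axis")
    elif d_ccu > 0:
        rotations.append("rotate clockwise about the camera y-axis")
    if d_ccv < 0:
        rotations.append("rotate counterclockwise about the camera x-axis")
    elif d_ccv > 0:
        rotations.append("rotate clockwise about the camera x-axis")

    # Step S214: mask pixels on the image border decide whether to back away;
    # otherwise the area ratio decides whether to move closer.
    border = np.zeros(mask.shape, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    count_p_edge = np.count_nonzero(mask.astype(bool) & border)
    l_overlap = count_p_edge / (2.0 * (W + H))      # assumed normalisation
    if l_overlap > rho:
        return rotations, "move away from the detected object"
    a_overlap = len(xs) / float(W * H)              # S mask / (W x H)
    if l_overlap == 0 and a_overlap < alpha:
        return rotations, "move toward the detected object"
    return rotations, "hold distance"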
Preferably, the preset conditions in step S24 of the present invention are:
If L s is more than or equal to epsilon, stopping the mobile robot from collecting the image information of the detected object;
Wherein L s is the similarity of the first RGB image and the second RGB image; epsilon is any decimal between 0 and 1.
Preferably, step S3 of the present invention specifically includes:
Step S31, calculating the mass center of the source point cloud Q by taking the point cloud model as a source point cloud Q, randomly sampling part of the point clouds by taking the mass center as an origin, calculating the distance between each point cloud obtained by sampling and the adjacent point clouds, and obtaining the sampling interval theta of the three-dimensional model based on the calculated distance;
step S32, sampling the three-dimensional model according to the sampling interval theta, wherein the obtained point cloud is used as a target point cloud P;
In step S33, a rotation matrix R obj and a translation vector t obj from the target point cloud P to the source point cloud Q are calculated, where the rotation matrix R obj and the translation vector t obj are the pose of the detected object under the camera coordinate system, and a new point cloud coordinate of the target point cloud P under the current camera coordinate system is obtained.
Preferably, step S4 of the present invention specifically includes:
Step S41, randomly initializing a starting point P 0 on the three-dimensional model as the current point P cur, initializing an array Array cur for storing the point cloud of the current surface, and initializing a queue Queue next for storing adjacent points P next;
Step S42, searching for a neighboring point P next of the current point P cur by adopting a kd-Tree algorithm, and judging whether the neighboring point P next is in an Array cur;
Step S43, if the adjacent point P next is not in the Array cur, calculating the normal vectors of the adjacent point P next and the current point P cur;
step S44, calculating a cosine similarity C p of normal vectors of the neighboring point P next and the current point P cur;
Step S45, if |C p - C p0| ≤ Δε or C p = 1, then save the adjacent point P next to Array cur and insert the adjacent point at the tail of the queue Queue next; wherein C p0 is the cosine similarity of the normal vectors of P 0 and its first adjacent point P next, and Δε is the allowable surface flatness difference range;
step S46, reading a Queue next Queue head element, updating the current point P cur, taking the updated current point P cur as a new current point P cur, and deleting the Queue head element from the Queue next;
Step S47, repeating the steps S42 to S46 until the Queue next is cleared;
Step S48, storing an Array cur into the Array, deleting the same points in the target point cloud P as those in the Array cur, and resetting the Array cur;
Step S49, repeating the steps S41 to S46 until all the target point clouds P are traversed, and obtaining a point cloud array set Array = { Array i | i = 1, 2, …, N } of N surfaces, which is the surface set A of the detected object, wherein the flatness values of all the surfaces in the surface set A form the corresponding flatness set; and calculating a rotation matrix set and a translation vector set corresponding to the surface set A under the camera coordinate system to obtain the pose set of the surface set A under the camera coordinate system.
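A compact sketch of this normal-vector region growing (steps S41 to S49) is given below for illustration; the normal vectors are assumed to be pre-computed and normalized, and the neighborhood size k and the tolerance Δε are illustrative parameters rather than values prescribed by the method.

import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def split_surfaces(points, normals, k=8, delta_eps=0.05):
    """Region-grow a point cloud into surfaces of similar normal direction.

    points:  (N, 3) array of model points (the target point cloud P).
    normals: (N, 3) array of unit normal vectors, one per point.
    Returns a list of index arrays, one per extracted surface.
    """
    k = min(k, len(points))
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    surfaces = []

    while unvisited:
        p0 = next(iter(unvisited))             # step S41: pick a start point
        array_cur = {p0}                       # Array cur
        queue_next = deque([p0])               # Queue next
        c_p0 = None                            # reference similarity of this surface

        while queue_next:                      # steps S42-S47
            p_cur = queue_next.popleft()
            _, neighbours = tree.query(points[p_cur], k=k)
            for p_next in np.atleast_1d(neighbours):
                if p_next in array_cur or p_next not in unvisited:
                    continue
                # step S44: cosine similarity of the two unit normal vectors
                c_p = float(np.dot(normals[p_cur], normals[p_next]))
                if c_p0 is None:
                    c_p0 = c_p                 # similarity of P 0 and its first neighbour
                # step S45: accept the point if it is within the flatness tolerance
                if abs(c_p - c_p0) <= delta_eps or c_p == 1.0:
                    array_cur.add(p_next)
                    queue_next.append(p_next)

        surfaces.append(np.array(sorted(array_cur)))    # step S48
        unvisited -= array_cur                          # remove the grown points
    return surfaces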
Preferably, step S6 of the present invention specifically includes:
Step S61, obtaining all rectangular units as normal samples through step S5;
Step S62, classifying the normal samples by using a K means clustering algorithm to obtain an image dataset Dataset = { Dataset m |m=1, 2, …, M } of M groups of different texture features;
step S63, respectively establishing a normal sample discrimination model for each image data set in the image data set Dataset;
step S64, collecting rectangular units of the detected object according to the step S5, and judging an image data set to which the rectangular units belong based on the position information of the camera;
Step S65, reconstructing the rectangular unit by using a dictionary matrix library corresponding to the image data set to obtain a set of reconstruction errors { e i |i=1, 2, …, G };
Step S66, if there exists any reconstruction error e i < th i, the rectangular unit is a normal sample; otherwise, the rectangular unit is a defective sample;
Wherein G represents the number of dictionary matrices in the dictionary matrix library corresponding to the image data set to which the rectangular unit belongs, and th i represents the corresponding threshold value in the threshold library of the image data set to which the rectangular unit belongs.
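Steps S64 to S66 reduce to a reconstruction-error test against a library of dictionary matrices. The sketch below uses orthogonal matching pursuit from scikit-learn as one possible sparse-coding solver; the solver choice and the sparsity level are assumptions of the sketch and are not prescribed by the method.

import numpy as np
from sklearn.linear_model import orthogonal_mp

def is_normal_sample(patch, dictionaries, thresholds, n_nonzero=10):
    """Classify one rectangular unit (steps S64-S66).

    patch:        flattened image of the rectangular unit, shape (d,).
    dictionaries: list of G dictionary matrices D i with shape (d, n_atoms).
    thresholds:   list of G thresholds th i learned from normal samples.
    Returns True if any dictionary reconstructs the patch below its threshold.
    """
    for d_i, th_i in zip(dictionaries, thresholds):
        code = orthogonal_mp(d_i, patch, n_nonzero_coefs=n_nonzero)  # sparse code
        e_i = np.linalg.norm(patch - d_i @ code)                     # reconstruction error
        if e_i < th_i:
            return True        # normal sample
    return False               # defective sample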
Preferably, in step S63 of the present invention, establishing a normal sample discrimination model for an image data set specifically includes:
step S631, randomly selecting an image data set as the current image data set;
step S632, randomly selecting one image in the current image data set as the current image;
step S633, calculating the similarity between the current image and other images, and summing the obtained similarity to obtain a difference value between the current image and the current image data set;
step S634, re-selecting an image as the current image, repeatedly executing step S633 until the images in the image data set are traversed, obtaining the difference values of all the images in the current image data set and the current image data set, and sorting the difference values according to the order from big to small;
step S635, selecting images corresponding to the first G difference values, and performing sparse decomposition on the G images by adopting a KSVD algorithm to obtain a dictionary matrix library;
Step S636, reconstructing all images in the current image data set by using each dictionary matrix in the dictionary matrix library D, each dictionary matrix yielding K reconstruction errors E = { e ij | j = 1, 2, …, K }, where K represents the number of images in the current image data set;
Step S637, taking the maximum value of the K reconstruction errors to obtain the threshold library of the dictionary matrix library for judging normal samples: th = { th i | th i = max({ e ij | j = 1, 2, …, K }) }; the dictionary matrix library D and the corresponding threshold library th form the normal sample discrimination model of the current group;
Step S638, a new image data set is selected as the current image data set, and steps S632 to S637 are repeated until the image data sets in the image data set are traversed.
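The per-group model building of steps S631 to S638 can be sketched as follows. Here scikit-learn's DictionaryLearning is used as a stand-in for the KSVD algorithm named above, the images are decomposed into local patches before dictionary learning, and the negative mean squared difference serves as the similarity measure; all of these choices and the parameter values are assumptions of the sketch.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def build_group_model(group_images, g=5, n_atoms=32, patch_size=(8, 8)):
    """Sketch of steps S631-S638 for one image group.

    group_images: list of K equally sized grayscale images (2-D arrays) of one texture group.
    Returns (list of dictionary matrices, threshold library th) for the group.
    """
    def patches(img):
        # a random subset of local patches keeps the sketch tractable
        p = extract_patches_2d(img, patch_size, max_patches=500, random_state=0)
        return p.reshape(len(p), -1)

    def recon_error(img, learner):
        p = patches(img)
        codes = learner.transform(p)
        return float(np.linalg.norm(p - codes @ learner.components_))

    # Steps S632-S634: "difference value" of each image = summed similarity to the
    # other images of the group (stand-in similarity: negative mean squared difference).
    flat = np.stack([img.ravel() for img in group_images]).astype(float)
    sim = -((flat[:, None, :] - flat[None, :, :]) ** 2).mean(-1)
    difference = sim.sum(axis=1) - sim.diagonal()

    # Step S635: the g images with the largest difference values each yield one dictionary.
    dictionaries, thresholds = [], []
    for idx in np.argsort(difference)[::-1][:g]:
        learner = DictionaryLearning(n_components=n_atoms,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=5)
        learner.fit(patches(group_images[idx]))
        # Steps S636-S637: threshold = worst reconstruction error over the whole group.
        errors = [recon_error(img, learner) for img in group_images]
        dictionaries.append(learner.components_.T)   # atoms as columns, shape (d, n_atoms)
        thresholds.append(max(errors))
    return dictionaries, thresholds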
The invention has the following advantages and beneficial effects:
1. The method provided by the invention realizes the movement of a camera in three-dimensional space based on a mobile platform (a mobile robot carrying a six-axis mechanical arm), dynamically scans and models the object based on an image segmentation neural network model and point cloud registration, and splits and uniformly plans the surfaces based on the three-dimensional model before mapping them onto the real object, thereby effectively solving the problem that all surface information of an object cannot be obtained through a single scan.
2. Compared with surface defect detection methods for three-dimensional objects based on radar positioning and methods based on fixed mechanical arm trajectories, the method has high flexibility, can be used for surface defect detection of three-dimensional objects of various types, sizes and postures, and can greatly reduce the detection cost, particularly when used for surface defect detection of large complex objects.
3. The adaptive adjustment of the distance from the camera to the detected area and of the focal length of the camera can effectively remove the interference of the background and the environment and improve the quality and efficiency of defect detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings:
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a target segmentation model construction process according to the present invention.
Fig. 3 is a schematic diagram of a point cloud model construction flow according to the present invention.
Fig. 4 is a schematic diagram of a gesture estimation process according to the present invention.
Fig. 5 is a schematic diagram of the posture estimation result of the present invention.
Fig. 6 is a schematic diagram of a surface splitting process according to the present invention.
Fig. 7 is a schematic diagram of a rasterization process flow of the present invention.
FIG. 8 is a schematic diagram of the result of the rasterization process of the present invention.
FIG. 9 is a schematic diagram of a defect detection process according to the present invention.
Fig. 10 is a schematic diagram of a computer device according to the present invention.
Detailed Description
For the purpose of making apparent the objects, technical solutions and advantages of the present invention, the present invention will be further described in detail with reference to the following examples and the accompanying drawings, wherein the exemplary embodiments of the present invention and the descriptions thereof are for illustrating the present invention only and are not to be construed as limiting the present invention.
Examples
In order to solve the problem that all surface information of an object cannot be obtained through a single scan in the prior art, this embodiment provides a three-dimensional object surface defect detection method. In this embodiment, images of the detected object are acquired in real time, the detected object is reconstructed based on an image segmentation neural network model and point cloud registration, the surfaces are split and uniformly planned based on the three-dimensional model, and the result is then mapped onto the real object to realize defect detection.
As shown in fig. 1, the method of the present embodiment includes the following steps:
Step S1, training by utilizing an RGB image data set of a detected object to obtain a target segmentation model;
Step S2, collecting image information of a plurality of angles of the detected object, performing real-time segmentation on the detected object area by using the target segmentation model, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
Step S3, carrying out attitude estimation based on a point cloud model and a three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
Step S4, calculating normal vectors of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on similarity of the normal vectors to obtain a surface set, a flatness set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
Step S5, rasterizing the surface set to obtain a surface defect detection path of the detected object;
Step S6, performing defect detection on the surface of the detected object according to the defect detection path.
In one possible implementation manner, as shown in fig. 2, step S1 of the present embodiment specifically includes:
Step S11, constructing a training data set consisting of a plurality of RGB images of the detected object at different angles and the corresponding mask maps;
Step S12, inputting the training data set into a MaskRCNN neural network model and training it until convergence to obtain the trained target segmentation model; the MaskRCNN neural network model and the training method adopted in this embodiment are not described in detail herein.
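For reference, a minimal fine-tuning sketch based on torchvision's Mask R-CNN implementation is given below; the data loader format, the two-class assumption (the detected object plus background), the optimizer settings and the epoch count are assumptions of this sketch rather than requirements of the embodiment.

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def train_segmentation_model(data_loader, num_classes=2, epochs=10, device="cuda"):
    """Fine-tune torchvision's Mask R-CNN on the detected-object image set.

    data_loader yields (images, targets) in torchvision detection format;
    each target contains 'boxes', 'labels' and 'masks' for one RGB image.
    """
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    # replace the box and mask heads so that they predict num_classes classes
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    for _ in range(epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)          # dict of detection and mask losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model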
In one possible implementation manner, as shown in fig. 3, step S2 of this embodiment specifically includes:
Step S21, collecting RGB images and depth images of the detected object at the current moment, processing the collected images in real time by utilizing the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time; before image acquisition, the movement of the mobile platform (mobile robot) carrying the camera is controlled so that the object to be detected is within the shooting range of the camera.
Step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate around the detected object from the current position by a preset angle and to acquire RGB images and depth images of the detected object, processing the acquired images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
Step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
Step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets the preset condition;
Step S25, taking the new first point cloud as a point cloud model of the detected object.
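A compact sketch of this incremental reconstruction loop (steps S21 to S25) is given below; Open3D is used for registration and fusion, while the capture, platform-control and image-similarity helpers are hypothetical callables standing in for hardware- and model-specific code, and the rotation step, ICP threshold, voxel size and stop threshold ε are illustrative assumptions only.

import numpy as np
import open3d as o3d

def reconstruct_object(capture_view, rotate_platform, image_similarity,
                       step_deg=15.0, icp_threshold=0.02, epsilon=0.9):
    """Sketch of steps S21-S25: incrementally build the object point cloud.

    capture_view():          returns (segmented o3d.geometry.PointCloud, segmented RGB image)
                             for the current camera pose (hypothetical helper).
    rotate_platform(deg):    rotates the mobile platform around the object (hypothetical helper).
    image_similarity(a, b):  similarity in [0, 1] between two RGB views (hypothetical helper).
    """
    first_cloud, first_rgb = capture_view()                        # step S21
    while True:
        rotate_platform(step_deg)                                  # step S22
        second_cloud, second_rgb = capture_view()

        # step S23: register the new view to the accumulated cloud and fuse them
        reg = o3d.pipelines.registration.registration_icp(
            second_cloud, first_cloud, icp_threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        second_cloud.transform(reg.transformation)
        first_cloud += second_cloud
        first_cloud = first_cloud.voxel_down_sample(voxel_size=0.005)

        # step S24: stop once the new view resembles the very first view again
        if image_similarity(first_rgb, second_rgb) >= epsilon:
            return first_cloud                                     # step S25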
In step S21 and step S22 of this embodiment, the collected images are processed in real time to obtain the point cloud and RGB image of the detected object, and the pose of the camera and the distance between the camera and the detected object are adjusted in real time, as follows:
Inputting the acquired RGB image into a trained target segmentation model to carry out target segmentation to obtain a mask image of the detected object;
obtaining a depth image and an RGB image only comprising the detected object based on the mask image of the detected object, the acquired RGB image and the depth image, and generating a point cloud based on the depth image and the RGB image only comprising the detected object;
Calculating the mass center of a mask region in a mask image of the detected object, calculating the distance from the mass center to the center of the mask image under a pixel coordinate system, and adjusting the posture of the camera based on the distance; the method comprises the following steps:
Calculating the mass center of a mask area A mask in a first mask diagram of the detected object, and respectively calculating the distance from the mass center to the center of the image under a pixel coordinate system:
d ccu = u mc - u 0
d ccv = v mc - v 0
wherein u mc is the coordinate value of the centroid of the mask area A mask in the first mask map on the u-axis of the pixel coordinate system, u 0 is the coordinate value of the center of the first mask map on the u-axis of the pixel coordinate system, v mc is the coordinate value of the centroid of the mask area A mask in the first mask map on the v-axis of the pixel coordinate system, and v 0 is the coordinate value of the center of the first mask map on the v-axis of the pixel coordinate system;
if d ccu <0, controlling the camera to rotate around the y-axis of the camera coordinate system in a counterclockwise direction;
If d ccu >0, controlling the camera to rotate around the y-axis of the camera coordinate system in a clockwise direction;
If d ccv <0, controlling the camera to rotate around the x-axis of the camera coordinate system in a counterclockwise direction;
If d ccv >0, the camera is controlled to rotate in a clockwise direction about the x-axis of the camera coordinate system.
Calculating the coincidence ratio of the boundary of the mask region in the mask map of the detected object and the boundary of the mask map, and adjusting the distance between the camera and the detected object according to the coincidence ratio; the method comprises the following steps:
Determining whether the detected object is entirely within the camera view by calculating the coincidence ratio L overlap of the boundary of the mask area A mask in the first mask map of the detected object and the boundary of the first mask map:
L overlap = Count(P edge) / (2 × (W + H))
Wherein P edge is a point set formed by points of the mask region at the edge of the image, Count(P edge) is the number of points in the point set P edge, W is the number of pixels of the image in the u-axis direction, and H is the number of pixels of the image in the v-axis direction;
If L overlap > ρ, where ρ is any decimal less than 0.5, controlling the mobile robot to move away from the detected object, collecting the image of the detected object in real time while the mobile robot moves away, and processing the collected image in real time according to the above process (including target segmentation, calculation of d ccu and d ccv, judgment of the distance between the centroid of the mask region A mask and the center of the image, and adjustment of the camera posture according to the above rules) until L overlap is smaller than or equal to ρ.
If L overlap = 0, the duty ratio of the mask region A mask in the mask map is calculated:
A overlap = S mask / (W × H)
Wherein A overlap is the duty ratio of the mask area A mask in the first mask map, S mask is the number of pixels included in the mask area A mask, W is the number of pixels of the first mask map in the u-axis direction, and H is the number of pixels of the first mask map in the v-axis direction;
If A overlap < alpha, wherein alpha is any decimal less than 1, controlling the mobile robot to move towards the detected object, collecting images of the detected object in real time in the moving process, and processing the collected images in real time according to the process (including target segmentation, calculation of d ccu and d ccv, judgment of the distance between the center of mass of the mask area A mask of the mask image and the center of the image and adjustment of the pose of the camera according to the method) until A overlap is more than or equal to alpha.
The preset conditions in step S24 of this embodiment are:
If L s is larger than or equal to epsilon, the mobile robot stops collecting the image information of the detected object, and the new first point cloud is used as a point cloud model of the detected object. Where ε is any decimal between 0 and 1, L s is the similarity of the first RGB image and the second RGB image.
In one possible implementation manner, as shown in fig. 4, step S3 of this embodiment specifically includes:
Step S31, taking the point cloud model as the source point cloud Q, calculating the centroid of the source point cloud Q, randomly sampling part of the points with the centroid as the origin, and calculating the median of the distances between each sampled point and its adjacent points as the sampling interval θ of the three-dimensional model;
And step S32, sampling the three-dimensional model according to a sampling interval theta, wherein the obtained point cloud is a target point cloud P.
Step S33, based on ICP algorithm, calculating a rotation matrix R obj and a translation vector t obj from the target point cloud P to the source point cloud Q to obtain new point cloud coordinates of the target point cloud P under the current camera coordinate system,
P new = R obj (P + t obj);
The rotation matrix R obj and the translation vector t obj are the pose of the detected object under the mobile robot camera coordinate system. The result of the posture estimation of the target object in this embodiment is shown in fig. 5.
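A sketch of this pose estimation (steps S31 to S33) using Open3D's ICP implementation is given below; deriving the number of model samples from θ and the mesh area, the ICP threshold and the use of the standard homogeneous form R·p + t (the text above writes the composed transform as P new = R obj (P + t obj)) are assumptions of the sketch.

import numpy as np
import open3d as o3d

def estimate_object_pose(source_cloud, cad_mesh, icp_threshold=0.02):
    """Sketch of steps S31-S33: align the theoretical 3-D model to the scanned cloud.

    source_cloud: o3d.geometry.PointCloud reconstructed from the scans (source point cloud Q).
    cad_mesh:     o3d.geometry.TriangleMesh of the theoretical three-dimensional model.
    Returns (R_obj, t_obj, transformed model points).
    """
    # Steps S31-S32: choose a sampling interval from the scan density and
    # sample the model to a point cloud of comparable density (target point cloud P).
    dists = np.asarray(source_cloud.compute_nearest_neighbor_distance())
    theta = float(np.median(dists))
    n_samples = max(1000, int(cad_mesh.get_surface_area() / theta ** 2))
    target_cloud = cad_mesh.sample_points_uniformly(number_of_points=n_samples)

    # Step S33: ICP from the model point cloud P to the scanned cloud Q.
    reg = o3d.pipelines.registration.registration_icp(
        target_cloud, source_cloud, max(icp_threshold, 3 * theta), np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T = np.asarray(reg.transformation)
    r_obj, t_obj = T[:3, :3], T[:3, 3]
    p_new = np.asarray(target_cloud.points) @ r_obj.T + t_obj   # R·p + t applied per point
    return r_obj, t_obj, p_new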
In one possible implementation manner, calculating the normal vector of each point on the three-dimensional model is equivalent to calculating the normal vector of each point in the target point cloud P under the current camera coordinate system, so as shown in fig. 6, step S4 of this embodiment specifically includes:
Step S41, randomly initializing a starting point P 0 on the three-dimensional model as the current point P cur, initializing an array Array cur for storing the point cloud of the current surface, and initializing a queue Queue next for storing adjacent points P next;
Step S42, searching for a neighboring point P next of the current point P cur by adopting a kd-Tree algorithm, and judging whether the neighboring point P next is in an Array cur;
Step S43, if the adjacent point P next is not in the Array cur, calculating the normal vectors of the adjacent point P next and the current point P cur;
step S44, calculating a cosine similarity C p of normal vectors of the neighboring point P next and the current point P cur;
Step S45, if |C p - C p0| ≤ Δε or C p = 1, then save the adjacent point P next to Array cur and insert the adjacent point at the tail of the queue Queue next; wherein C p0 is the cosine similarity of the normal vectors of P 0 and its first adjacent point P next, and Δε is the allowable surface flatness difference range;
step S46, reading a Queue next Queue head element, updating the current point P cur, taking the updated current point P cur as a new current point P cur, and deleting the Queue head element from the Queue next;
Step S47, repeating the steps S42 to S46 until the Queue next is cleared;
Step S48, storing an Array cur into the Array, deleting the same points in the target point cloud P as those in the Array cur, and resetting the Array cur;
Step S49, repeating the steps S41 to S46 until all the target point clouds P are traversed, and obtaining a point cloud array set Array = { Array i | i = 1, 2, …, N } of N surfaces, wherein each point cloud array Array i is one split surface and the point cloud array set is the surface set A; the flatness values of all the surfaces in the surface set A form the corresponding flatness set; the rotation matrix set and the translation vector set corresponding to the surface set A under the camera coordinate system are then calculated to obtain the pose set of the surface set A under the camera coordinate system, which specifically comprises:
Calculating the normal vector of each surface in the surface set A to obtain the rotation matrix set corresponding to the surface set A under the object coordinate system: R s = { R i | i = 1, 2, …, N }, and calculating the rotation matrix set corresponding to the surface set A under the camera coordinate system: R = { R i R obj | i = 1, 2, …, N };
Calculating the centroids of all surfaces in the surface set A to obtain the translation vector set of the surface set A under the object coordinate system: t s = { t i | i = 1, 2, …, N }, and calculating the translation vector set of the surface set A under the camera coordinate system: t = { t i + t obj | i = 1, 2, …, N };
Obtaining the pose set of the surface set A under the camera coordinate system: A pose = (R, t).
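A short numpy sketch of this pose composition for the surface set A is given below; the construction of each rotation matrix R i from the surface normal is not shown, since the embodiment leaves that step to the implementer.

import numpy as np

def compose_surface_poses(surface_points, surface_rotations, r_obj, t_obj):
    """Compose the pose set A pose = (R, t) of step S49 in the camera coordinate system.

    surface_points:    list of (n_i, 3) arrays, one per split surface (object frame).
    surface_rotations: list of 3x3 matrices R i derived from each surface normal.
    """
    R = [r_i @ r_obj for r_i in surface_rotations]             # R i * R obj
    t = [pts.mean(axis=0) + t_obj for pts in surface_points]   # centroid t i + t obj
    return R, t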
In one possible implementation manner, as shown in fig. 7, step S5 of this embodiment specifically includes:
Step S51, taking the surface whose centroid has the shortest distance to the origin of the camera coordinate system among all the surfaces in the surface set A as the first surface A 0, and initializing the current surface A cur with the first surface A 0; if a plurality of surfaces have the same distance, these surfaces are sorted with the z-axis as the first priority, the y-axis as the second priority and the x-axis as the third priority, and the surface whose centroid z coordinate is minimum (or, if z is equal, whose y is minimum, and if y is also equal, whose x is minimum) is taken as the first surface A 0.
Step S52, fitting the current surface A cur with an inscribed rectangular grid and calculating the width of the rectangular units in the grid according to the flatness of the current surface A cur; in this embodiment the width of a rectangular unit is calculated from the included angle γ ∈ [0°, 180°] between the normal vectors at the opposite side points of the rectangular unit, the point cloud interval on the current surface A cur, and the duty ratio σ of the defect size defect_size in the image, where W is the number of pixels of the image in the u-axis direction of the pixel coordinate system and H is the number of pixels of the image in the v-axis direction of the pixel coordinate system.
Step S53, controlling the camera to traverse the surface set A from top to bottom along the clockwise direction of the detected object; a gridded cell as shown in fig. 8 is obtained.
Step S54, the camera is controlled to sequentially scan the rectangular units after the current plane is gridded from top to bottom and from left to right, and simultaneously, the posture of the camera is adjusted so that the z axis of the camera coincides with the normal line of the rectangular units, and the distance from the camera to the rectangular units and the focal length of the camera are adjusted so that the camera can completely shoot the rectangular units;
And step S55, completing the path generated by scanning the surface set A, and obtaining the surface defect detection path of the detected object.
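The traversal of steps S53 to S55 amounts to ordering the rectangular units and orienting the camera so that its z-axis coincides with the normal of each unit; the look-at construction below is one possible way to compute such a camera pose and is an implementation choice, not part of the method.

import numpy as np

def camera_pose_for_cell(cell_center, cell_normal, standoff=0.3):
    """Place the camera in front of one rectangular unit (step S54).

    The camera is put standoff metres out along the unit's outward normal and
    rotated so that its z-axis points back along the normal at the unit.
    Returns a 4x4 camera-to-world pose matrix.
    """
    n = np.asarray(cell_normal, dtype=float)
    n = n / np.linalg.norm(n)
    position = np.asarray(cell_center, dtype=float) + standoff * n
    z_axis = -n                                    # camera z looks at the cell
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, z_axis)) > 0.99:             # avoid a degenerate cross product
        up = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(up, z_axis)
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x_axis, y_axis, z_axis
    pose[:3, 3] = position
    return pose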
In one possible implementation manner, as shown in fig. 9, step S6 of this embodiment specifically includes:
step S61, a training stage, wherein normal samples are collected, and all rectangular units are obtained as normal samples by scanning the surface set of the non-defective object in the process of step S5;
Step S62, classifying the normal samples by a k_means clustering algorithm to obtain an image dataset set Dataset = { Dataset m |m=1, 2, …, M } of M groups of different texture features;
Step S63, respectively establishing a normal sample discrimination model for the image dataset Dataset, including:
Taking the 1 st group image data group Dataset 1 as a current group;
Traversing all K images in the current group in sequence, calculating the similarity between the image and other K-1 images in the current group, and summing the obtained K-1 similarity to obtain a difference value e between the image and the current group;
Taking G images with the maximum difference value e from the K images, and performing sparse decomposition on the G images based on a KSVD algorithm to obtain a dictionary matrix library D= { D i |i=1, 2, …, G };
Reconstructing the K images by using all dictionary matrices in the dictionary matrix library D, obtaining K reconstruction errors E = { e ij | j = 1, 2, …, K } for each dictionary matrix, and taking the maximum value of the K reconstruction errors to obtain the threshold library of the dictionary matrix library for judging normal samples,
th = { th i | th i = max({ e ij | j = 1, 2, …, K }) };
The dictionary matrix library D and the corresponding threshold value library th form a normal sample discrimination model of the current group;
traversing M groups of image data in sequence, setting the group of image data group Dataset m as a current group, and repeatedly executing the steps to obtain M groups of normal sample discrimination models;
Step S64, a detection stage: collecting rectangular units of the detected object according to the process in step S5, and judging the image data set Dataset m to which the rectangular units belong based on the position information of the camera;
Step S65, reconstructing the rectangular unit by using the dictionary matrix library D corresponding to the belonging image data set Dataset m, respectively, to obtain a set of reconstruction errors { e i |i=1, 2, …, G } of the rectangular unit;
Step S66, if any reconstruction error e i<thi exists, the rectangular unit is a normal sample; otherwise, the rectangular unit is a defect sample.
The embodiment also provides a computer device for executing the method of the embodiment.
As particularly shown in fig. 10, the computer device includes a processor, a memory, and a system bus; various device components, including memory and processors, are connected to the system bus. A processor is a piece of hardware used to execute computer program instructions by basic arithmetic and logical operations in a computer system. Memory is a physical device used to temporarily or permanently store computing programs or data (e.g., program state information). The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus. The processor and the memory may communicate data over a system bus. Where the memory includes Read Only Memory (ROM) or flash memory (not shown), and Random Access Memory (RAM), which generally refers to the main memory loaded with an operating system and computer programs.
Computer devices typically include a storage device. The storage device may be selected from a variety of computer-readable media, which refers to any available media that can be accessed by a computer device and includes both removable and fixed media. For example, computer-readable media includes, but is not limited to, flash memory (micro-SD card), CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer device.
The computer device may be logically connected to one or more network terminals in a network environment. The network terminal may be a personal computer, server, router, smart phone, tablet computer, or other public network node. The computer device is connected to a network terminal through a network interface (local area network LAN interface). Local Area Networks (LANs) refer to computer networks of interconnected networks within a limited area, such as a home, school, computer laboratory, or office building using network media. WiFi and twisted pair wired ethernet are the two most common technologies used to construct local area networks.
It should be noted that other computer systems including more or fewer subsystems than computer devices may also be suitable for use with the invention.
As described in detail above, the computer apparatus suitable for this embodiment can perform the specified operations of the three-dimensional object surface defect detection method. The computer device performs these operations in the form of software instructions that are executed by a processor in a computer-readable medium. The software instructions may be read into memory from a storage device or from another device via the LAN interface. The software instructions stored in the memory cause the processor to perform the three-dimensional object surface defect detection method described above. Furthermore, the invention may be implemented by means of hardware circuitry or by means of a combination of hardware circuitry and software instructions. Thus, implementation of the present embodiments is not limited to any specific combination of hardware circuitry and software.
The foregoing embodiments further describe the objects, technical solutions and advantages of the invention in detail. It should be understood that the foregoing is merely a description of specific embodiments of the invention and is not intended to limit the scope of the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention shall be included within the scope of the invention.

Claims (6)

1. A method for detecting surface defects of a three-dimensional object, comprising the steps of:
Step S1, training by utilizing an RGB image data set of a detected object to obtain a target segmentation model;
S2, collecting image information of a plurality of angles of the detected object, performing real-time segmentation on the detected object area by using the target segmentation model, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
Step S3, carrying out attitude estimation based on a point cloud model and a three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
s4, calculating normal vectors of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on similarity of the normal vectors to obtain a surface set, a flatness set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
Step S5, rasterizing the surface set to obtain a surface defect detection path of the detected object;
S6, performing defect detection on the surface of the detected object according to the defect detection path; the step S2 specifically includes:
Step S21, collecting RGB images and depth images of the detected object at the current moment, processing the collected images in real time by utilizing the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
Step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate around the detected object from the current position by a preset angle and to acquire RGB images and depth images of the detected object, processing the acquired images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
Step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
Step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets the preset condition;
Step S25, taking the new first point cloud as a point cloud model of the detected object;
the step S3 specifically includes:
Step S31, calculating the mass center of the source point cloud Q by taking the point cloud model as a source point cloud Q, randomly sampling part of the point clouds by taking the mass center as an origin, calculating the distance between each point cloud obtained by sampling and the adjacent point clouds, and obtaining the sampling interval theta of the three-dimensional model based on the calculated distance;
step S32, sampling the three-dimensional model according to the sampling interval theta, wherein the obtained point cloud is used as a target point cloud P;
Step S33, calculating a rotation matrix R obj and a translation vector t obj from the target point cloud P to the source point cloud Q, wherein the rotation matrix R obj and the translation vector t obj are the pose of the detected object under the camera coordinate system, and obtaining a new point cloud coordinate of the target point cloud P under the current camera coordinate system;
The step S4 specifically includes:
Step S41, randomly initializing a starting point P 0 on the three-dimensional model as the current point P cur, initializing an array Array cur for storing the point cloud of the current surface, and initializing a queue Queue next for storing adjacent points P next;
Step S42, searching for a neighboring point P next of the current point P cur by adopting a kd-Tree algorithm, and judging whether the neighboring point P next is in an Array cur;
Step S43, if the adjacent point P next is not in the Array cur, calculating the normal vectors of the adjacent point P next and the current point P cur;
step S44, calculating a cosine similarity C p of normal vectors of the neighboring point P next and the current point P cur;
Step S45, if |C p - C p0| ≤ Δε or C p = 1, then save the adjacent point P next to Array cur and insert the adjacent point at the tail of the queue Queue next; wherein C p0 is the cosine similarity of the normal vectors of P 0 and its first adjacent point P next, and Δε is the allowable surface flatness difference range;
step S46, reading a Queue next Queue head element, updating the current point P cur, taking the updated current point P cur as a new current point P cur, and deleting the Queue head element from the Queue next;
Step S47, repeating the steps S42 to S46 until the Queue next is cleared;
Step S48, storing an Array cur into the Array, deleting the same points in the target point cloud P as those in the Array cur, and resetting the Array cur;
Step S49, repeating the steps S41 to S46 until all the target point clouds P are traversed, and obtaining a point cloud array set Array = { Array i | i = 1, 2, …, N } of N surfaces, which is the surface set A of the detected object, wherein the flatness values of all the surfaces in the surface set A form the corresponding flatness set; and calculating a rotation matrix set and a translation vector set corresponding to the surface set A under the camera coordinate system to obtain the pose set of the surface set A under the camera coordinate system;
The step S6 specifically includes:
Step S61, obtaining all rectangular units as normal samples through step S5;
Step S62, classifying the normal samples by using a K means clustering algorithm to obtain an image dataset Dataset = { Dataset m |m=1, 2, …, M } of M groups of different texture features;
step S63, respectively establishing a normal sample discrimination model for each image data set in the image data set Dataset;
step S64, collecting rectangular units of the detected object according to the step S5, and judging an image data set to which the rectangular units belong based on the position information of the camera;
Step S65, reconstructing the rectangular unit by using a dictionary matrix library corresponding to the image data set to obtain a set of reconstruction errors { e i |i=1, 2, …, G };
Step S66, if any reconstruction error e i<thi exists, the rectangular unit is a normal sample, otherwise, the rectangular unit is a defective sample;
Wherein G represents the number of dictionary matrices in the dictionary matrix library corresponding to the image data set to which the rectangular unit belongs, and th i represents the corresponding threshold value in the threshold library of the image data set to which the rectangular unit belongs.
2. The method for detecting surface defects of a three-dimensional object according to claim 1, wherein the processing the collected images in real time and adjusting the pose of the camera and the distance between the camera and the object to be detected in real time in the step S21 and the step S22 specifically comprises:
step S211, inputting the acquired RGB image into the target segmentation model to carry out target segmentation, and obtaining a mask image of the detected object;
Step S212, obtaining a depth image and an RGB image only comprising the detected object based on the mask image of the detected object, and generating a point cloud based on the depth image and the RGB image only comprising the detected object (a back-projection sketch is given after this claim);
Step S213, calculating the centroid of the mask region in the mask image of the detected object, calculating the distance from the centroid to the center of the mask image in the pixel coordinate system, and adjusting the pose of the camera based on the distance;
Step S214, calculating the coincidence ratio of the boundary of the mask region in the mask map of the detected object and the boundary of the mask map, and adjusting the distance between the camera and the detected object according to the coincidence ratio.
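A minimal sketch of step S212, assuming a pinhole camera with intrinsics (fx, fy, cx, cy), a depth image in metres and a boolean mask from the segmentation model; only masked pixels are back-projected into a coloured point cloud. The function name and argument layout are illustrative.

import numpy as np


def masked_depth_to_point_cloud(depth, rgb, mask, fx, fy, cx, cy):
    # Back-project only the masked pixels of the depth image into a coloured point cloud.
    v, u = np.nonzero(mask & (depth > 0))       # pixel coordinates belonging to the detected object
    z = depth[v, u]
    x = (u - cx) * z / fx                       # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)        # (N, 3) point cloud of the object only
    colors = rgb[v, u]                          # matching RGB values
    return points, colors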
3. The method for detecting surface defects of a three-dimensional object according to claim 2, wherein the step S213 specifically comprises:
the distance from the centroid to the center of the mask map in the pixel coordinate system is calculated as follows:
d ccu = u mc - u 0
d ccv = v mc - v 0
Wherein u mc is the coordinate value of the centroid of the mask area a mask in the first mask image on the u-axis of the pixel coordinate system, u 0 is the coordinate value of the center of the first mask image on the u-axis of the pixel coordinate system, v mc is the coordinate value of the centroid of the mask area a mask in the first mask image on the v-axis of the pixel coordinate system, and v 0 is the coordinate value of the center of the first mask image on the v-axis of the pixel coordinate system;
The camera pose is adjusted in real time according to the following conditions:
if d ccu <0, controlling the camera to rotate around the y-axis of the camera coordinate system in a counterclockwise direction;
If d ccu >0, controlling the camera to rotate around the y-axis of the camera coordinate system in a clockwise direction;
If d ccv <0, controlling the camera to rotate around the x-axis of the camera coordinate system in a counterclockwise direction;
If d ccv >0, the camera is controlled to rotate in a clockwise direction about the x-axis of the camera coordinate system.
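A short sketch of step S213 as detailed in this claim: the centroid of the mask region is compared with the image centre, and the signs of the offsets select the rotation directions. The returned rotation strings are placeholders for whatever camera or robot control interface is actually used.

import numpy as np


def adjust_pose_from_mask(mask):
    # mask: 2-D boolean array (the mask map); assumed to contain at least one True pixel.
    H, W = mask.shape
    v_idx, u_idx = np.nonzero(mask)
    u_mc, v_mc = u_idx.mean(), v_idx.mean()     # centroid of the mask region
    u_0, v_0 = W / 2.0, H / 2.0                 # centre of the mask map
    d_ccu, d_ccv = u_mc - u_0, v_mc - v_0

    yaw = "counterclockwise about y" if d_ccu < 0 else "clockwise about y"
    pitch = "counterclockwise about x" if d_ccv < 0 else "clockwise about x"
    return (d_ccu, d_ccv), (yaw, pitch)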
4. The method for detecting surface defects of a three-dimensional object according to claim 2, wherein the step S214 specifically comprises:
Calculating the coincidence ratio L overlap of the boundary of the mask region in the mask map of the detected object and the boundary of the mask map as follows:
L overlap = count(P edge) / (2 × (W + H))
where P edge is the set of points of the mask region lying on the image boundary, count(P edge) is the number of points in the set P edge, W is the number of pixels of the image in the u-axis direction, and H is the number of pixels of the image in the v-axis direction;
If L overlap > ρ, where ρ is any decimal less than 0.5, controlling the mobile robot to move away from the detected object, collecting images of the detected object in real time during the movement, and processing according to steps S211, S213 and S214 until L overlap ≤ ρ;
If L overlap = 0, calculating the duty ratio of the mask region in the mask map:
A overlap = S mask / (W × H)
where A overlap is the duty ratio of the mask region A mask in the first mask map, S mask is the number of pixels contained in the mask region A mask, W is the number of pixels of the first mask map in the u-axis direction, and H is the number of pixels of the first mask map in the v-axis direction;
If A overlap < α, where α is any decimal less than 1, controlling the mobile robot to move towards the detected object, collecting images of the detected object in real time during the movement, and processing according to steps S211, S213 and S214 until A overlap ≥ α.
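A hedged sketch of step S214 as detailed in this claim. The exact normalisation of L overlap is not preserved in the text above, so it is taken here as the fraction of image-border pixels covered by the mask, which is consistent with the definitions of P edge, W and H; the movement decision itself follows the claim.

import numpy as np


def overlap_and_duty_ratio(mask):
    # mask: 2-D boolean array of shape (H, W).
    H, W = mask.shape
    border = np.zeros_like(mask, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    count_p_edge = np.count_nonzero(mask & border)       # mask points on the image border
    l_overlap = count_p_edge / float(np.count_nonzero(border))
    a_overlap = np.count_nonzero(mask) / float(W * H)    # duty ratio A overlap of the mask region
    return l_overlap, a_overlap


# Movement decision as described in the claim (rho < 0.5 and alpha < 1 are tunable):
#   if l_overlap > rho:                        move the robot away from the object;
#   elif l_overlap == 0 and a_overlap < alpha: move the robot towards the object.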
5. The method according to claim 1, wherein the preset conditions in step S24 are:
If L s is more than or equal to epsilon, stopping the mobile robot from collecting the image information of the detected object;
Wherein L s is the similarity of the first RGB image and the second RGB image; epsilon is any decimal between 0 and 1.
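A minimal sketch of the stopping rule in this claim, assuming SSIM is used as the similarity L s between the first and second RGB images; the claim only requires some similarity measure, so SSIM is an assumption.

from skimage.metrics import structural_similarity as ssim


def should_stop(first_rgb, second_rgb, eps=0.9):
    # L_s >= eps means the two views are similar enough, so image acquisition can stop.
    l_s = ssim(first_rgb, second_rgb, channel_axis=-1)
    return l_s >= eps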
6. The method according to claim 1, wherein the establishing a normal sample discrimination model for the image data set in step S63 specifically includes:
step S631, randomly selecting an image data set as the current image data set;
step S632, randomly selecting one image in the current image data set as the current image;
Step S633, calculating the similarity between the current image and each of the other images, and summing the obtained similarities to obtain the difference value between the current image and the current image data set;
Step S634, selecting another image as the current image and repeating step S633 until all images in the current image data set have been traversed, obtaining the difference values of all images with respect to the current image data set, and sorting the difference values in descending order;
step S635, selecting images corresponding to the first G difference values, and performing sparse decomposition on the G images by adopting a KSVD algorithm to obtain a dictionary matrix library;
Step S636, reconstructing all images in the current image data set with each dictionary matrix in the dictionary matrix library D, each dictionary matrix D i yielding K reconstruction errors E i = { e ij | j = 1, 2, …, K }, where K represents the number of images in the current image data set;
Step S637, obtaining the threshold library for judging normal samples by taking the maximum of the K reconstruction errors of each dictionary matrix: th = { th i | th i = max({ e ij | j = 1, 2, …, K }), i = 1, 2, …, G }; the dictionary matrix library D and the corresponding threshold library th form the normal sample discrimination model of the current image data set;
Step S638, a new image data set is selected as the current image data set, and steps S632 to S637 are repeated until the image data sets in the image data set are traversed.
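An illustrative sketch of steps S635 to S637. scikit-learn's DictionaryLearning is used here as a stand-in for the KSVD algorithm named in the claim (scikit-learn does not ship KSVD); the patch size, the number of atoms and the sparsity level are assumptions, as is the use of grayscale images.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d


def build_discrimination_model(selected_images, all_images,
                               patch_size=(8, 8), n_atoms=64, n_nonzero=5):
    # selected_images: the G most "different" grayscale images chosen in step S635;
    # all_images: the K images of the current image data set (step S636).
    # Returns a list of (dictionary matrix, threshold) pairs, i.e. D and th of step S637.
    model = []
    for img in selected_images:
        # Learn one dictionary per selected image from its patches (stand-in for KSVD).
        patches = extract_patches_2d(img, patch_size, max_patches=500, random_state=0)
        X = patches.reshape(len(patches), -1).astype(np.float64)
        learner = DictionaryLearning(n_components=n_atoms, max_iter=100,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=n_nonzero,
                                     random_state=0).fit(X)

        # S636: reconstruct every image of the data set with this dictionary.
        errors = []
        for other in all_images:
            P = extract_patches_2d(other, patch_size, max_patches=500,
                                   random_state=0).reshape(-1, X.shape[1]).astype(np.float64)
            code = learner.transform(P)
            errors.append(float(np.linalg.norm(P - code @ learner.components_)))

        model.append((learner.components_.T, max(errors)))   # S637: threshold = max error
    return model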
CN202110390016.8A 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method Active CN113096094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110390016.8A CN113096094B (en) 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110390016.8A CN113096094B (en) 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method

Publications (2)

Publication Number Publication Date
CN113096094A CN113096094A (en) 2021-07-09
CN113096094B true CN113096094B (en) 2024-05-17

Family

ID=76676227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110390016.8A Active CN113096094B (en) 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method

Country Status (1)

Country Link
CN (1) CN113096094B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113731836B (en) * 2021-08-04 2023-05-26 华侨大学 Urban solid waste on-line sorting system based on deep learning
JP7500515B2 (en) * 2021-09-03 2024-06-17 株式会社東芝 Processing device related to inspection of subject, inspection system for subject, processing method related to inspection of subject, and processing program related to inspection of subject
CN113706619B (en) * 2021-10-21 2022-04-08 南京航空航天大学 Non-cooperative target attitude estimation method based on space mapping learning
CN114037703B (en) * 2022-01-10 2022-04-15 西南交通大学 Subway valve state detection method based on two-dimensional positioning and three-dimensional attitude calculation
CN114708230B (en) * 2022-04-07 2022-12-16 深圳市精明检测设备有限公司 Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN115953400B (en) * 2023-03-13 2023-06-02 安格利(成都)仪器设备有限公司 Corrosion pit automatic detection method based on three-dimensional point cloud object surface
CN115965628B (en) * 2023-03-16 2023-06-02 湖南大学 Workpiece coating quality online dynamic detection method and detection system
CN116977331B (en) * 2023-09-22 2023-12-12 武汉展巨华科技发展有限公司 3D model surface detection method based on machine vision

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107052086A (en) * 2017-06-01 2017-08-18 扬州苏星机器人科技有限公司 Stamping parts surface defect detection apparatus and detection method based on 3D vision
CN109242828A (en) * 2018-08-13 2019-01-18 浙江大学 3D printing product 3 D defects detection method based on optical grating projection multistep phase shift method
CN109345523A (en) * 2018-09-21 2019-02-15 中国科学院苏州生物医学工程技术研究所 Surface defects detection and three-dimensional modeling method
CN110033447A (en) * 2019-04-12 2019-07-19 东北大学 A kind of high-speed rail heavy rail detection method of surface flaw based on cloud method
CN110400315A (en) * 2019-08-01 2019-11-01 北京迈格威科技有限公司 A kind of defect inspection method, apparatus and system
CN110634140A (en) * 2019-09-30 2019-12-31 南京工业大学 Large-diameter tubular object positioning and inner wall defect detection method based on machine vision
CN111127422A (en) * 2019-12-19 2020-05-08 北京旷视科技有限公司 Image annotation method, device, system and host
CN112161619A (en) * 2020-09-16 2021-01-01 杭州思锐迪科技有限公司 Pose detection method, three-dimensional scanning path planning method and detection system
CN112419482A (en) * 2020-11-23 2021-02-26 太原理工大学 Three-dimensional reconstruction method for mine hydraulic support group pose fused with depth point cloud

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107615008B (en) * 2015-06-01 2020-05-08 日本制铁株式会社 Crankshaft inspection method and apparatus
WO2018136262A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection

Also Published As

Publication number Publication date
CN113096094A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN113096094B (en) Three-dimensional object surface defect detection method
US11878433B2 (en) Method for detecting grasping position of robot in grasping object
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
CN107063228B (en) Target attitude calculation method based on binocular vision
CN108010078B (en) Object grabbing detection method based on three-level convolutional neural network
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
CN111267095B (en) Mechanical arm grabbing control method based on binocular vision
CN112233181A (en) 6D pose recognition method and device and computer storage medium
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN112435262A (en) Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN111079565A (en) Construction method and identification method of view two-dimensional posture template and positioning and grabbing system
CN115032648B (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN112894815A (en) Method for detecting optimal position and posture for article grabbing by visual servo mechanical arm
CN111524168A (en) Point cloud data registration method, system and device and computer storage medium
CN111123242A (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN113538503A (en) Solar panel defect detection method based on infrared image
CN114372992A (en) Edge corner point detection four-eye vision algorithm based on moving platform
CN116309879A (en) Robot-assisted multi-view three-dimensional scanning measurement method
CN114549629A (en) Method for estimating three-dimensional pose of target by underwater monocular vision
CN110570473A (en) weight self-adaptive posture estimation method based on point-line fusion
CN116021519A (en) TOF camera-based picking robot hand-eye calibration method and device
CN115953465A (en) Three-dimensional visual random grabbing processing method based on modular robot training platform
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
Zhang et al. Object detection and grabbing based on machine vision for service robot
CN115131433A (en) Non-cooperative target pose processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220402

Address after: 637000 group 3, Yanjia village, Huilong Town, Yingshan County, Nanchong City, Sichuan Province

Applicant after: Wu Jun

Address before: No. ol-01-202012007, 3rd floor, building 1, No. 366, north section of Hupan Road, Tianfu New District, Chengdu, Sichuan 610000

Applicant before: Chengdu lantu Technology Co.,Ltd.

GR01 Patent grant