CN113096094A - Three-dimensional object surface defect detection method - Google Patents

Three-dimensional object surface defect detection method

Info

Publication number
CN113096094A
Authority
CN
China
Prior art keywords
detected object
image
point cloud
mask
camera
Prior art date
Legal status
Granted
Application number
CN202110390016.8A
Other languages
Chinese (zh)
Other versions
CN113096094B (en)
Inventor
吴俊 (Wu Jun)
Current Assignee
Wu Jun
Original Assignee
Chengdu Lantu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Lantu Technology Co ltd filed Critical Chengdu Lantu Technology Co ltd
Priority to CN202110390016.8A priority Critical patent/CN113096094B/en
Publication of CN113096094A publication Critical patent/CN113096094A/en
Application granted granted Critical
Publication of CN113096094B publication Critical patent/CN113096094B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0002 Image analysis; Inspection of images, e.g. flaw detection
    • G06N 3/045 Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/08 Neural networks; Learning methods
    • G06T 17/20 Three dimensional [3D] modelling; Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/11 Segmentation; Edge detection; Region-based segmentation
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028 Image acquisition modality; Range image; Depth image; 3D point clouds
    • G06T 2207/20081 Special algorithmic details; Training; Learning
    • G06T 2207/20084 Special algorithmic details; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for detecting surface defects of a three-dimensional object, which comprises the following steps: training a target segmentation model on an image data set of the detected object; segmenting the detected-object region in real time with the target segmentation model and reconstructing the detected object in real time to obtain a point cloud model of the detected object; performing pose estimation to obtain the pose of the detected object in the camera coordinate system; calculating the normal vector of each point on the three-dimensional model and splitting the model into surfaces based on normal-vector similarity, obtaining a surface set, the corresponding flatness set, and the pose set of the surfaces in the camera coordinate system; rasterizing the surface set to obtain a surface-defect detection path for the detected object; and detecting defects on the surface of the detected object along that path. The invention effectively solves the problem that all surface information of an object cannot be acquired in a single scan.

Description

Three-dimensional object surface defect detection method
Technical Field
The invention belongs to the technical field of image defect detection in computer vision, and particularly relates to a method for detecting surface defects of a three-dimensional object.
Background
Unmanned, intelligent factories are the trend of future development. Processes such as automatic unstacking, automatic stacking, automatic loading and unloading, and automatic machining can already reach a fairly high degree of intelligence, but quality inspection still requires a large amount of manual labour. In particular, the appearance inspection of objects with complex shapes or large sizes demands heavy manual input. In automobile production, for example, inspection after the car body is painted is usually done manually; a small number of automated lines use a robot arm carrying a camera along a fixed path, which is very inflexible and severely restricts automated production of small-batch products. In the production of industrial products such as trucks, buses, machine tools, ships, airplanes and trains, the degree of automation is even lower and surface defect inspection is completed manually. To improve product standardization and quality, how to automatically detect defects on the surface of a complex three-dimensional object by means of instruments and equipment has therefore become an urgent problem.
At present, the common methods for detecting defects on complex three-dimensional surfaces include:
Method 1: quality inspectors complete the detection by sight, by touch and with some hand-held instruments.
Disadvantages of this method: the detection result is strongly affected by the worker's skill level, working attitude and physical state, local areas are often missed, and re-inspection and spot checks are frequently required;
Method 2: a complex image acquisition system is built with multiple cameras, and the measured object is photographed from multiple angles.
Disadvantages of this method: the cost of building the detection environment is high, and the environment is usually built for one type or one shape of object, so the flexibility is generally poor; moreover, the pose of the object must be relatively fixed, and much auxiliary work has to be handled manually;
Method 3: several robot arms each move to fixed points to collect image data, and defect detection is performed at those fixed points.
Disadvantages of this method: taking the welding-quality inspection of vehicle frames as an example, this method places high requirements on the positioning accuracy of each frame; whenever the vehicle model changes, the detection points must be reset and the robot arms taught to move along specific paths to the target points; and since each robot arm is fixed in place, one vehicle usually needs at least four robot arms to complete the detection, so the utilization efficiency of the arms is low and the flexibility is poor.
Disclosure of Invention
In order to solve the problem of poor flexibility in the detection of defects on complex three-dimensional surfaces in the prior art, the invention provides a method for detecting surface defects of a three-dimensional object.
The invention is realized by the following technical scheme:
a method for detecting surface defects of a three-dimensional object comprises the following steps:
step S1, training to obtain a target segmentation model by using the RGB image data set of the detected object;
step S2, collecting image information of a detected object from multiple angles, utilizing the target segmentation model to segment the detected object region in real time, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
s3, carrying out attitude estimation based on the point cloud model and the three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
step S4, calculating a normal vector of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on the similarity of the normal vectors to obtain a surface set, a flatness degree set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
step S5, performing rasterization processing on the surface set to obtain a surface defect detection path of the detected object;
and step S6, detecting the defect on the surface of the detected object according to the defect detection path.
Because of the particularity of the surface of the detected object, complete and reliable surface information often cannot be obtained in a single scan during image acquisition and processing. The method therefore maps and associates the detected object reconstructed in real time with its theoretical three-dimensional model, so that image processing is performed on the three-dimensional model (complete and reliable surface data), a surface-defect detection path of the detected object is planned, and defect detection is carried out along that path. This improves the flexibility of detection and makes the method suitable for surface defect detection of three-dimensional objects of various types, sizes and postures. In addition, the method can detect surface defects of large complex objects without additional detection equipment, so the detection cost can be greatly reduced in such applications.
Preferably, step S2 of the present invention specifically includes:
step S21, collecting the RGB image and the depth image of the detected object at the current moment, processing the collected image in real time by using the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate by a preset angle around the detected object from its current position while collecting RGB images and depth images of the detected object, processing the collected images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and the registered second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets a preset condition, and stopping;
and step S25, taking the new first point cloud as a point cloud model of the detected object.
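The scan-and-fuse loop of steps S21 to S25 can be sketched as follows. This is a minimal illustration, not the patented implementation: `capture_rgbd`, `segment_and_project` and `rotate_platform` are hypothetical stand-ins for the camera, segmentation-plus-back-projection and motion-platform interfaces, Open3D's point-to-point ICP is used for the registration of step S23, and a histogram intersection stands in for the image similarity of step S24.

```python
import numpy as np
import open3d as o3d

def histogram_similarity(img_a, img_b, bins=32):
    # Stand-in for the image similarity of step S24 (histogram intersection in [0, 1]).
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())

def scan_object(capture_rgbd, segment_and_project, rotate_platform,
                step_deg=15.0, eps=0.95, voxel=0.005):
    """Steps S21-S25: rotate the camera platform around the object, register and
    fuse every new view into the growing point cloud model."""
    rgb, depth = capture_rgbd()                                  # S21: first view
    model, first_rgb = segment_and_project(rgb, depth)           # object-only point cloud and RGB image
    while True:
        rotate_platform(step_deg)                                # S22: rotate by a preset angle
        rgb, depth = capture_rgbd()
        new_cloud, new_rgb = segment_and_project(rgb, depth)
        # S23: register the new view to the current model and fuse them
        reg = o3d.pipelines.registration.registration_icp(new_cloud, model, 0.02)
        new_cloud.transform(reg.transformation)
        model = (model + new_cloud).voxel_down_sample(voxel)     # merge and thin out duplicate points
        if histogram_similarity(first_rgb, new_rgb) >= eps:      # S24: back near the starting view
            break
    return model                                                 # S25: point cloud model of the object
```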
Preferably, the processing the acquired image in real time and adjusting the posture of the camera and the distance between the camera and the detected object in real time in step S21 and step S22 of the present invention specifically include:
step S211, inputting the collected RGB image into the target segmentation model for target segmentation to obtain a mask image of the detected object;
step S212, obtaining a depth image and an RGB image only including the detected object based on the mask image of the detected object, and generating a point cloud based on the depth image and the RGB image only including the detected object;
step S213, calculating the centroid of the mask area in the mask image of the detected object, calculating the distance from the centroid to the center of the mask image under the pixel coordinate system, and adjusting the camera posture based on the distance;
step S214, calculating a coincidence degree between the boundary of the mask region in the mask image of the detected object and the boundary of the mask image, and adjusting the distance between the camera and the detected object according to the coincidence degree.
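For step S212 above, a minimal back-projection sketch is given below, assuming a standard pinhole camera model with intrinsics fx, fy, cx, cy and a depth image in millimetres; the function name and defaults are illustrative, not taken from the patent.

```python
import numpy as np

def masked_point_cloud(rgb, depth, mask, fx, fy, cx, cy, depth_scale=1000.0):
    """Step S212: keep only the pixels of the detected object and back-project
    them with the pinhole model (depth assumed to be in millimetres)."""
    vs, us = np.nonzero(mask)                     # pixel coordinates inside the mask
    z = depth[vs, us].astype(np.float64) / depth_scale
    valid = z > 0                                 # discard pixels without a depth reading
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=1)          # object point cloud in the camera frame
    colors = rgb[vs, us] / 255.0                  # per-point RGB values
    return points, colors
```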
Preferably, step S213 of the present invention specifically includes:
The distances from the centroid to the center of the mask image in the pixel coordinate system are calculated as:
d_ccu = u_mc - u_0
d_ccv = v_mc - v_0
where u_mc is the coordinate of the centroid of the mask region A_mask in the first mask image on the u-axis of the pixel coordinate system, u_0 is the coordinate of the center of the first mask image on the u-axis, v_mc is the coordinate of the centroid of A_mask on the v-axis, and v_0 is the coordinate of the center of the first mask image on the v-axis;
the camera posture is adjusted in real time according to the following conditions:
if d_ccu < 0, the camera is controlled to rotate counterclockwise around the y-axis of the camera coordinate system;
if d_ccu > 0, the camera is controlled to rotate clockwise around the y-axis of the camera coordinate system;
if d_ccv < 0, the camera is controlled to rotate counterclockwise around the x-axis of the camera coordinate system;
if d_ccv > 0, the camera is controlled to rotate clockwise around the x-axis of the camera coordinate system.
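A minimal numpy sketch of this centroid-offset rule of step S213, assuming `mask` is a boolean array produced by the segmentation model; the returned command strings are placeholders for whatever interface actually drives the camera.

```python
import numpy as np

def attitude_commands(mask):
    """Step S213: offset of the mask centroid from the image centre (d_ccu, d_ccv)
    and the rotation directions it implies for the camera."""
    vs, us = np.nonzero(mask)                      # pixel coordinates of the mask region
    u_mc, v_mc = us.mean(), vs.mean()              # centroid of the mask region
    h, w = mask.shape
    d_ccu, d_ccv = u_mc - w / 2.0, v_mc - h / 2.0  # distances to the image centre
    yaw = "ccw_about_y" if d_ccu < 0 else "cw_about_y" if d_ccu > 0 else "hold"
    pitch = "ccw_about_x" if d_ccv < 0 else "cw_about_x" if d_ccv > 0 else "hold"
    return d_ccu, d_ccv, yaw, pitch
```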
Step S214 of the present invention specifically includes:
The coincidence ratio L_overlap between the boundary of the mask region in the mask image of the detected object and the boundary of the mask image is calculated from count(P_edge), W and H, where P_edge is the set of points of the mask region lying on the edges of the image, count(P_edge) is the number of points in P_edge, W is the number of image pixels in the u-axis direction, and H is the number of image pixels in the v-axis direction;
if L_overlap > ρ, where ρ is any decimal less than 0.5, the mobile robot is controlled to move away from the detected object; images of the detected object are acquired in real time while it moves away and processed according to steps S211, S213 and S214 until the calculated L_overlap ≤ ρ;
if L_overlap = 0, the occupation ratio of the mask region in the mask image is calculated as
A_overlap = S_mask / (W × H)
where A_overlap is the proportion of the mask region A_mask in the first mask image, S_mask is the number of pixels contained in the mask region A_mask, W is the number of pixels of the first mask image in the u-axis direction, and H is the number of pixels of the first mask image in the v-axis direction;
if A_overlap < α, where α is any decimal less than 1, the mobile robot is controlled to move towards the detected object; images of the detected object are acquired in real time during the movement and processed according to steps S211, S213 and S214 until A_overlap ≥ α.
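A sketch of the two checks of step S214. The exact normalisation of L_overlap is not reproduced here (the patent gives the formula only as an image), so the ratio of mask pixels on the image border to the border length is used as an assumed stand-in; the thresholds rho and alpha are illustrative, and `mask` is assumed boolean.

```python
import numpy as np

def border_and_occupancy(mask):
    """Step S214 quantities: share of the image border touched by the mask
    (assumed stand-in for L_overlap) and the mask's share of the image (A_overlap)."""
    h, w = mask.shape
    border = np.zeros_like(mask, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    p_edge = int(np.count_nonzero(mask & border))        # count(P_edge)
    l_overlap = p_edge / float(2 * (w + h))              # assumed normalisation by the border length
    a_overlap = float(np.count_nonzero(mask)) / (w * h)  # S_mask / (W * H)
    return l_overlap, a_overlap

def distance_command(mask, rho=0.05, alpha=0.6):
    l_overlap, a_overlap = border_and_occupancy(mask)
    if l_overlap > rho:
        return "move_away"        # object spills over the image border
    if a_overlap < alpha:
        return "move_closer"      # object occupies too little of the frame
    return "hold"
```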
Preferably, in step S24 of the present invention, the preset condition is:
if L_s ≥ ε, the mobile robot stops acquiring image information of the detected object;
where L_s is the similarity between the first RGB image and the second RGB image, and ε is any decimal between 0 and 1.
Preferably, step S3 of the present invention specifically includes:
step S31, taking the point cloud model as a source point cloud Q, calculating the centroid of the source point cloud Q, taking the centroid as an origin, randomly sampling partial point clouds of the source point cloud Q, calculating the distance between each point cloud obtained by sampling and the adjacent point clouds, and obtaining the sampling interval theta of the three-dimensional model based on the calculated distance;
step S32, sampling the three-dimensional model according to the sampling interval theta, and taking the obtained point cloud as a target point cloud P;
step S33, calculating the rotation matrix R_obj and the translation vector t_obj from the target point cloud P to the source point cloud Q; the rotation matrix R_obj and the translation vector t_obj are the pose of the detected object in the camera coordinate system, and the new point cloud coordinates of the target point cloud P in the current camera coordinate system are obtained.
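A sketch of steps S31 to S33 with Open3D, assuming the reconstructed scan is an `open3d.geometry.PointCloud` and the theoretical model an `open3d.geometry.TriangleMesh`; the sampling-density heuristic and the ICP correspondence distance are assumptions, and the standard R·p + t convention of ICP is used when expressing the model points in the camera frame.

```python
import numpy as np
import open3d as o3d

def estimate_pose(scan_cloud, cad_mesh, sample_fraction=0.1):
    """Steps S31-S33: derive a sampling interval from the scanned cloud Q, sample the
    CAD model at a comparable density to get the target cloud P, then register P to Q
    with ICP; the resulting transform is the object pose in the camera frame."""
    q = scan_cloud
    nn_dists = np.asarray(q.compute_nearest_neighbor_distance())
    idx = np.random.choice(len(nn_dists), max(1, int(sample_fraction * len(nn_dists))), replace=False)
    theta = float(np.median(nn_dists[idx]))                     # sampling interval from sampled distances
    n_points = max(1000, int(cad_mesh.get_surface_area() / (theta * theta)))
    p = cad_mesh.sample_points_uniformly(number_of_points=n_points)
    # In practice a coarse initial alignment would be passed via the init argument.
    reg = o3d.pipelines.registration.registration_icp(p, q, 5 * theta)
    transform = reg.transformation                              # 4x4 pose of the object in the camera frame
    r_obj, t_obj = transform[:3, :3], transform[:3, 3]
    p_in_camera = p.transform(transform)                        # model points expressed in the camera frame
    return r_obj, t_obj, p_in_camera
```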
Preferably, step S4 of the present invention specifically includes:
step S41, randomly initializing a starting point P_0 on the three-dimensional model as the current point P_cur, initializing an array Array_cur for storing the point cloud of the current surface, and initializing a queue Queue_next for storing adjacent points P_next;
step S42, searching for an adjacent point P_next of the current point P_cur with the kd-tree algorithm, and judging whether the adjacent point P_next is already in Array_cur;
step S43, if the adjacent point P_next is not in Array_cur, calculating the normal vectors of the adjacent point P_next and the current point P_cur;
step S44, calculating the cosine similarity C_p between the normal vectors of the adjacent point P_next and the current point P_cur;
step S45, if C_p is within the allowed surface flatness difference range Δε of C_p0, or C_p = 1, saving the adjacent point P_next to Array_cur and inserting it at the tail of Queue_next; where C_p0 is the cosine similarity between P_0 and its first adjacent point P_next, and Δε is the allowed surface flatness difference range;
step S46, reading the head element of Queue_next, updating the current point P_cur with it, taking the updated point as the new current point P_cur, and deleting the head element from Queue_next;
step S47, repeating steps S42 to S46 until Queue_next is empty;
step S48, storing Array_cur into an Array set, deleting from the target point cloud P the points that also appear in Array_cur, and clearing Array_cur;
step S49, repeating steps S41 to S46 until the whole target point cloud P has been traversed, obtaining the point cloud arrays of N surfaces, Array = {Array_i | i = 1, 2, ..., N}, which is the surface set A of the detected object; the flatness values of all surfaces in the surface set A form the corresponding flatness set; and the rotation matrix set and translation vector set corresponding to the surface set A in the camera coordinate system are calculated, giving the pose set of the surface set A in the camera coordinate system.
Preferably, step S6 of the present invention specifically includes:
step S61, obtaining all rectangular cells as normal samples through step S5;
step S62, classifying the normal samples with the K-means clustering algorithm to obtain M groups of image data with different texture characteristics, Dataset = {Dataset_m | m = 1, 2, ..., M};
step S63, establishing a normal-sample discrimination model for each image data group in the image data group set Dataset;
step S64, acquiring rectangular cells of the detected object according to step S5, and judging, based on the position information of the camera, which image data group each rectangular cell belongs to;
step S65, reconstructing the rectangular cell with the dictionary matrix library corresponding to that image data group to obtain a set of reconstruction errors of the rectangular cell, {e_i | i = 1, 2, ..., G};
step S66, if any reconstruction error satisfies e_i < th_i, the rectangular cell is a normal sample; otherwise, the rectangular cell is a defect sample;
where G denotes the number of dictionary matrices in the dictionary matrix library corresponding to the image data group, and th_i denotes the threshold corresponding to the i-th dictionary matrix in that library.
Preferably, the step S63 of establishing the normal sample discrimination model for the image data set in the present invention specifically includes:
step S631, randomly selecting an image data group as a current image data group;
step S632, randomly selecting an image in the current image data set as a current image;
step S633, calculating the similarity between the current image and other images, and summing the obtained similarities to obtain a difference value between the current image and the current image data set;
step S634, reselecting an image as the current image, and repeatedly executing step S633 until the images in the image data set are traversed, obtaining the difference values between all the images in the current image data set and the current image data set, and sorting the difference values from large to small;
step S635, selecting the images corresponding to the first G difference values and performing sparse decomposition on these G images with the KSVD algorithm to obtain a dictionary matrix library D;
step S636, reconstructing all the images in the current image data group with every dictionary matrix in the dictionary matrix library D, each dictionary matrix giving K reconstruction errors E = {e_ij | j = 1, 2, ..., K}, where K denotes the number of images in the current image data group;
step S637, taking the maximum of the K reconstruction errors to obtain the threshold library used by the dictionary matrix library for discriminating normal samples: th = {th_i | th_i = max({e_ij | j = 1, 2, ..., K})}; the dictionary matrix library D and the corresponding threshold library th form the normal-sample discrimination model of the current group;
step S638, re-selecting an image data set as the current image data set, and repeating steps S632 to S637 until the image data sets in the image data set are traversed completely.
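A sketch of the normal-sample discrimination model of steps S631 to S637 and the judgement of step S66. scikit-learn's DictionaryLearning with OMP coding stands in for the KSVD algorithm, the "difference value" is approximated by summed pairwise distances, and the patch size, atom count and sparsity are illustrative; cells are assumed to share the same size.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

PATCH = (8, 8)

def learn_dictionary(image, n_atoms=64, max_patches=1000):
    """Stand-in for KSVD: learn a patch dictionary from one image."""
    patches = extract_patches_2d(image, PATCH, max_patches=max_patches, random_state=0)
    x = patches.reshape(len(patches), -1).astype(np.float64)
    x -= x.mean(axis=1, keepdims=True)                       # remove per-patch DC component
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5, random_state=0)
    dl.fit(x)
    return dl

def reconstruction_error(dl, image, max_patches=1000):
    patches = extract_patches_2d(image, PATCH, max_patches=max_patches, random_state=0)
    x = patches.reshape(len(patches), -1).astype(np.float64)
    x -= x.mean(axis=1, keepdims=True)
    code = dl.transform(x)
    return float(np.linalg.norm(x - code @ dl.components_) / np.sqrt(x.size))

def build_group_model(images, g=3):
    """Steps S631-S637 for one image group: pick the G most atypical images,
    learn one dictionary per image, and record as threshold the worst
    reconstruction error observed on the (defect-free) group."""
    flat = np.stack([im.ravel().astype(np.float64) for im in images])
    diff = cdist(flat, flat).sum(axis=1)                     # larger = less similar to the rest of the group
    chosen = np.argsort(diff)[::-1][:g]
    dicts = [learn_dictionary(images[i]) for i in chosen]
    thresholds = [max(reconstruction_error(d, im) for im in images) for d in dicts]
    return dicts, thresholds

def is_normal(dicts, thresholds, cell):
    """Step S66: a cell is normal if any dictionary reconstructs it below its threshold."""
    return any(reconstruction_error(d, cell) < th for d, th in zip(dicts, thresholds))
```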
The invention has the following advantages and beneficial effects:
1. The method of the invention moves the camera in three-dimensional space on a mobile platform (a mobile robot carrying a six-axis robot arm), dynamically scans and models the object based on an image-segmentation neural network model and point cloud registration, splits and uniformly plans the surfaces on the three-dimensional model, and then maps them back to the real object, effectively overcoming the problem that all surface information of an object cannot be acquired in a single scan.
2. Compared with methods that detect surface defects of three-dimensional objects based on radar positioning, or along fixed robot-arm trajectories, the method has high flexibility, can be used for surface defect detection of three-dimensional objects of various types, sizes and postures, and can greatly reduce the detection cost, especially for large complex objects.
3. The method adaptively adjusts the distance from the camera to the detected area and the camera focal length, which effectively removes interference from the background and the environment and improves the quality and efficiency of defect detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a target segmentation model construction process according to the present invention.
FIG. 3 is a schematic diagram of a point cloud model construction process according to the present invention.
FIG. 4 is a schematic diagram of an attitude estimation process according to the present invention.
FIG. 5 is a diagram illustrating a result of attitude estimation according to the present invention.
FIG. 6 is a schematic view of the surface resolution process of the present invention.
FIG. 7 is a schematic diagram of a rasterization process flow of the present invention.
FIG. 8 is a diagram illustrating the result of the rasterization process of the present invention.
FIG. 9 is a schematic diagram of a defect detection process according to the present invention.
FIG. 10 is a schematic diagram of a computer device according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Examples
In order to overcome the problem in the prior art that all surface information of an object cannot be obtained in a single scan, this embodiment provides a method for detecting surface defects of a three-dimensional object. Images of the object to be detected are collected in real time, the object is reconstructed based on an image-segmentation neural network model and point cloud registration, the surfaces are split and uniformly planned on the three-dimensional model, and then mapped back to the real object to realize defect detection.
Specifically, as shown in fig. 1, the method of this embodiment includes the following steps:
step S1, training to obtain a target segmentation model by using the RGB image data set of the detected object;
step S2, collecting image information of a detected object from multiple angles, utilizing the target segmentation model to segment the detected object region in real time, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
s3, carrying out attitude estimation based on the point cloud model and the three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
step S4, calculating a normal vector of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on the similarity of the normal vectors to obtain a surface set, a flatness degree set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
step S5, performing rasterization processing on the surface set to obtain a surface defect detection path of the detected object;
and step S6, detecting the defect on the surface of the detected object according to the defect detection path.
In a possible implementation manner, as shown in fig. 2, step S1 of this embodiment specifically includes:
step S11, constructing a training data set composed of a plurality of RGB images of the detected object taken from different angles and the corresponding mask images;
step S12, inputting the training data set into the MaskRCNN neural network model for training until convergence, so as to obtain a trained target segmentation model.
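A sketch of step S1 with torchvision (version 0.13 or later assumed for the `weights` argument): Mask R-CNN is fine-tuned for background plus one detected-object class; the data-loader format follows torchvision's detection convention and is not specified by the patent.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_segmentation_model(num_classes=2):
    """Step S1: Mask R-CNN with its box and mask heads replaced to match the class count."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

def train(model, data_loader, epochs=10, lr=5e-3):
    """Minimal training loop; data_loader yields (images, targets) with boxes, labels, masks."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())   # training mode returns a dict of losses
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```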
In a possible implementation manner, as shown in fig. 3, step S2 of this embodiment specifically includes:
step S21, collecting the RGB image and the depth image of the detected object at the current moment, processing the collected image in real time by using the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time; the detected object is made to be within the shooting range of the camera by controlling the moving platform (the mobile robot) carrying the camera to move before image acquisition.
Step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate by a preset angle around the detected object from its current position while collecting RGB images and depth images of the detected object, processing the collected images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and the registered second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets a preset condition, and stopping;
and step S25, taking the new first point cloud as a point cloud model of the detected object.
In step S21 and step S22 of this embodiment, the acquired image is processed in real time to obtain a point cloud and an RGB image of the detected object, and the pose of the camera and the distance between the camera and the detected object are adjusted in real time:
inputting the collected RGB image into a trained target segmentation model for target segmentation to obtain a mask image of the detected object;
obtaining a depth image and an RGB image only including the detected object based on the mask image, the collected RGB image and the depth image of the detected object, and generating a point cloud based on the depth image and the RGB image only including the detected object;
calculating the centroid of the mask area in the mask image of the detected object, calculating the distance from the centroid to the center of the mask image under a pixel coordinate system, and adjusting the posture of the camera based on the distance; the method specifically comprises the following steps:
calculate the centroid of the mask region A_mask in the first mask image of the detected object, and calculate the distances from the centroid to the image center in the pixel coordinate system:
d_ccu = u_mc - u_0
d_ccv = v_mc - v_0
where u_mc is the coordinate of the centroid of the mask region A_mask in the first mask image on the u-axis of the pixel coordinate system, u_0 is the coordinate of the center of the first mask image on the u-axis, v_mc is the coordinate of the centroid of A_mask on the v-axis, and v_0 is the coordinate of the center of the first mask image on the v-axis;
if d_ccu < 0, the camera is controlled to rotate counterclockwise around the y-axis of the camera coordinate system;
if d_ccu > 0, the camera is controlled to rotate clockwise around the y-axis of the camera coordinate system;
if d_ccv < 0, the camera is controlled to rotate counterclockwise around the x-axis of the camera coordinate system;
if d_ccv > 0, the camera is controlled to rotate clockwise around the x-axis of the camera coordinate system.
Calculate the coincidence ratio between the boundary of the mask region in the mask image of the detected object and the boundary of the mask image, and adjust the distance between the camera and the detected object according to this coincidence ratio; specifically:
whether the detected object lies entirely within the camera view is judged by the coincidence ratio L_overlap between the boundary of the mask region A_mask of the first mask image and the boundary of the first mask image, calculated from count(P_edge), W and H, where P_edge is the set of points of the mask region lying on the edges of the image, count(P_edge) is the number of points in P_edge, W is the number of image pixels in the u-axis direction, and H is the number of image pixels in the v-axis direction;
if L_overlap > ρ, where ρ is any decimal less than 0.5, the mobile robot is controlled to move away from the detected object; while it moves away, images of the detected object are acquired and processed in real time according to the above steps (including target segmentation, calculating the distances d_ccu and d_ccv from the centroid of the mask region A_mask to the image center, and adjusting the camera pose accordingly) until the calculated L_overlap ≤ ρ;
if L_overlap = 0, the occupation ratio of the mask region A_mask in the mask image is calculated as
A_overlap = S_mask / (W × H)
where A_overlap is the proportion of the mask region A_mask in the first mask image, S_mask is the number of pixels contained in the mask region A_mask, W is the number of pixels of the first mask image in the u-axis direction, and H is the number of pixels of the first mask image in the v-axis direction;
if A_overlap < α, where α is any decimal less than 1, the mobile robot is controlled to move towards the detected object; during the movement, images of the detected object are acquired and processed in real time according to the same steps until the calculated A_overlap ≥ α.
In step S24 of this embodiment, the preset conditions are:
if L_s ≥ ε, the mobile robot stops acquiring image information of the detected object, and the new first point cloud is taken as the point cloud model of the detected object, where ε is any decimal between 0 and 1 and L_s is the similarity between the first RGB image and the second RGB image.
In a possible implementation manner, as shown in fig. 4, step S3 of this embodiment specifically includes:
step S31, taking the point cloud model as a source point cloud Q, calculating the centroid of the source point cloud Q, taking the centroid as an origin, randomly sampling partial point clouds of the source point cloud Q, and calculating the median of the distance between each point cloud obtained by sampling and the adjacent point cloud as the sampling interval theta of the three-dimensional model;
and step S32, sampling the three-dimensional model according to the sampling interval theta, and obtaining a point cloud which is a target point cloud P.
Step S33, calculating the rotation matrix R_obj and the translation vector t_obj from the target point cloud P to the source point cloud Q based on the ICP algorithm, and obtaining the new point cloud coordinates of the target point cloud P in the current camera coordinate system:
P_new = R_obj (P + t_obj);
the rotation matrix R_obj and the translation vector t_obj are the pose of the detected object in the camera coordinate system of the mobile robot. The result of the pose estimation of the target object in this embodiment is shown in fig. 5.
In a possible implementation manner, calculating a normal vector of each point on the three-dimensional model is equivalent to calculating a normal vector of each point in the target point cloud P in the current camera coordinate system, so as shown in fig. 6, step S4 of this embodiment specifically includes:
step S41, randomly initializing a starting point P_0 on the three-dimensional model as the current point P_cur, initializing an array Array_cur for storing the point cloud of the current surface, and initializing a queue Queue_next for storing adjacent points P_next;
step S42, searching for an adjacent point P_next of the current point P_cur with the kd-tree algorithm, and judging whether the adjacent point P_next is already in Array_cur;
step S43, if the adjacent point P_next is not in Array_cur, calculating the normal vectors of the adjacent point P_next and the current point P_cur;
step S44, calculating the cosine similarity C_p between the normal vectors of the adjacent point P_next and the current point P_cur;
step S45, if C_p is within the allowed surface flatness difference range Δε of C_p0, or C_p = 1, saving the adjacent point P_next to Array_cur and inserting it at the tail of Queue_next; where C_p0 is the cosine similarity between P_0 and its first adjacent point P_next, and Δε is the allowed surface flatness difference range;
step S46, reading the head element of Queue_next, updating the current point P_cur with it, taking the updated point as the new current point P_cur, and deleting the head element from Queue_next;
step S47, repeating steps S42 to S46 until Queue_next is empty;
step S48, storing Array_cur into an Array set, deleting from the target point cloud P the points that also appear in Array_cur, and clearing Array_cur;
step S49, repeating steps S41 to S46 until the whole target point cloud P has been traversed, obtaining the point cloud arrays of N surfaces, Array = {Array_i | i = 1, 2, ..., N}; each point cloud array forms one surface after splitting, the point cloud array set is the surface set A, and the flatness values of all surfaces in the surface set A form the corresponding flatness set;
the rotation matrix set and translation vector set corresponding to the surface set A in the camera coordinate system are then calculated, giving the pose set of the surface set A in the camera coordinate system, specifically:
calculating the normal vectors of the surfaces in the surface set A to obtain the rotation matrix set of the surface set A in the object coordinate system, R_s = {R_i | i = 1, 2, ..., N}, and then the rotation matrix set of the surface set A in the camera coordinate system, R = {R_i R_obj | i = 1, 2, ..., N};
calculating the centroids of all surfaces in the surface set A to obtain the translation vector set of the surface set A in the object coordinate system, t_s = {t_i | i = 1, 2, ..., N}, and then the translation vector set of the surface set A in the camera coordinate system, t = {t_i + t_obj | i = 1, 2, ..., N};
obtaining the pose set of the surface set A in the camera coordinate system: A_pose = (R, t).
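A sketch of how this per-surface pose set A_pose = (R, t) could be assembled; the construction of R_i from the surface normal (aligning the local z-axis with the normal) is an assumption, since the patent only states that R_i is obtained from the normal vector.

```python
import numpy as np

def rotation_from_normal(n):
    """Rotation whose z-axis is aligned with the (unit) surface normal n."""
    z = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)            # columns are the surface-frame axes

def surface_pose_set(surfaces, points, normals, r_obj, t_obj):
    """Pose set A_pose = (R, t): per-surface rotation R_i R_obj and translation t_i + t_obj."""
    rotations, translations = [], []
    for idx in surfaces:                          # idx: indices of the points on one surface
        n = normals[idx].mean(axis=0)             # average normal of the surface
        r_i = rotation_from_normal(n)
        t_i = points[idx].mean(axis=0)            # centroid of the surface
        rotations.append(r_i @ r_obj)             # composed as written in the description
        translations.append(t_i + t_obj)
    return rotations, translations
```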
In a possible implementation manner, as shown in fig. 7, step S5 of this embodiment specifically includes:
step S51, taking the surface whose centroid is closest to the origin of the camera coordinate system among all surfaces in the surface set A as the first surface A_0, and initializing the current surface A_cur with A_0; if several surfaces are at the same distance, they are sorted with the z-axis as the first priority, the y-axis as the second priority and the x-axis as the third priority, and the surface with the smallest centroid coordinate z (then the smallest y, then the smallest x) is taken as the first surface A_0;
step S52, fitting the current surface A_cur with an internal rectangular grid, and calculating the width of the rectangular cells in the grid from the flatness of the current surface A_cur; in this embodiment the cell width is computed from γ ∈ [0°, 180°], the angle between the normal vectors of points on opposite sides of the rectangular cell, the point cloud interval on the current surface A_cur, σ, the proportion that the defect size (defect_size) occupies in the image, and h = min(W, H), where W is the number of image pixels in the u-axis direction of the pixel coordinate system and H is the number of image pixels in the v-axis direction;
step S53, controlling the camera to traverse the surface set A from top to bottom along the clockwise direction of the detected object, obtaining gridded cells as shown in fig. 8;
step S54, controlling the camera to scan the rectangular cells of the gridded current surface in order from top to bottom and from left to right, while adjusting the camera posture so that the camera z-axis coincides with the normal of the rectangular cell, and adjusting the camera-to-cell distance and the camera focal length so that the rectangular cell is captured completely;
step S55, the path generated when the scan of the surface set A is finished is the surface-defect detection path of the detected object.
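A sketch of the rasterized scan order of steps S52 to S54 for one surface, assuming the surface is roughly planar in its own frame (R_i, t_i); the cell width is taken as an input rather than computed from the flatness formula, whose exact form is given only as an image in the original.

```python
import numpy as np

def grid_scan_path(surface_points, r_i, t_i, cell_width):
    """Step S5 for one surface: project the surface points into the surface frame
    (z along the normal), cover the bounding rectangle with square cells of the
    given width, and order the cell centres top-to-bottom, left-to-right."""
    local = (surface_points - t_i) @ r_i                  # coordinates in the surface frame
    u, v = local[:, 0], local[:, 1]
    u_edges = np.arange(u.min(), u.max() + cell_width, cell_width)
    v_edges = np.arange(v.min(), v.max() + cell_width, cell_width)
    cells = []
    for v0 in v_edges[::-1]:                              # top to bottom
        for u0 in u_edges:                                # left to right
            inside = (u >= u0) & (u < u0 + cell_width) & (v >= v0) & (v < v0 + cell_width)
            if inside.any():                              # keep only cells that actually cover the surface
                centre_local = np.array([u0 + cell_width / 2, v0 + cell_width / 2, 0.0])
                centre_cam = r_i @ centre_local + t_i     # back to the camera frame
                cells.append(centre_cam)
    return cells                                          # viewpoints visited in scan order
```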
In a possible implementation manner, as shown in fig. 9, step S6 of this embodiment specifically includes:
step S61, in the training stage, collecting normal samples, and scanning the surface set of the defect-free object through the process of the step S5 to obtain all rectangular units as normal samples;
step S62, classifying the normal samples with the K-means clustering algorithm to obtain M groups of image data with different texture characteristics, Dataset = {Dataset_m | m = 1, 2, ..., M};
step S63, establishing a normal-sample discrimination model for each group in the image data group set Dataset, which includes:
taking the first image data group Dataset_1 as the current group;
traversing all K images in the current group in turn, calculating the similarity between each image and the other K-1 images in the current group, and summing the K-1 similarities to obtain the difference value e between that image and the current group;
taking the G images with the largest difference values e among the K images, and performing sparse decomposition on these G images based on the KSVD algorithm to obtain a dictionary matrix library D = {D_i | i = 1, 2, ..., G};
reconstructing the K images with every dictionary matrix in the dictionary matrix library D, each dictionary matrix giving K reconstruction errors E = {e_ij | j = 1, 2, ..., K}, and taking the maximum of the K reconstruction errors to obtain the threshold library used by the dictionary matrix library for judging normal samples:
th = {th_i | th_i = max({e_ij | j = 1, 2, ..., K})};
the dictionary matrix library D and the corresponding threshold library th form the normal-sample discrimination model of the current group;
traversing the M image data groups in turn, taking each group Dataset_m as the current group and repeating the above steps, to obtain M normal-sample discrimination models;
step S64, in the detection stage, collecting the rectangular cells of the detected object according to the process of step S5, and judging, based on the position information of the camera, the image data group Dataset_m to which each rectangular cell belongs;
step S65, reconstructing the rectangular cell with the dictionary matrix library D corresponding to the image data group Dataset_m, obtaining a set of reconstruction errors of the rectangular cell, {e_i | i = 1, 2, ..., G};
step S66, if any reconstruction error satisfies e_i < th_i, the rectangular cell is a normal sample; otherwise, the rectangular cell is a defect sample.
The embodiment also provides a computer device for executing the method of the embodiment.
As shown in fig. 10, the computer device includes a processor, a memory and a system bus; the various device components, including the memory and the processor, are connected to the system bus. The processor is the hardware that executes computer program instructions through basic arithmetic and logical operations in the computer system. The memory is a physical device for temporarily or permanently storing computer programs or data (e.g., program state information). The system bus may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus and a local bus. The processor and the memory can communicate data over the system bus. The memory includes read-only memory (ROM) or flash memory (not shown), and random access memory (RAM), which typically refers to the main memory loaded with the operating system and computer programs.
Computer devices typically include a storage device. The storage device may be selected from a variety of computer readable media, which refers to any available media that can be accessed by a computer device, including both removable and non-removable media. For example, computer-readable media includes, but is not limited to, flash memory (micro SD cards), CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer device.
A computer device may be logically connected in a network environment to one or more network terminals. The network terminal may be a personal computer, a server, a router, a smart phone, a tablet, or other common network node. The computer apparatus is connected to the network terminal through a network interface (local area network LAN interface). A Local Area Network (LAN) refers to a computer network formed by interconnecting within a limited area, such as a home, a school, a computer lab, or an office building using a network medium. WiFi and twisted pair wiring ethernet are the two most commonly used technologies to build local area networks.
It should be noted that other computer systems with more or fewer subsystems than the computer device described above are also suitable for use with the invention.
As described above in detail, the computer device adapted to this embodiment can perform the specified operations of the three-dimensional object surface defect detection method. The computer device performs these operations in the form of software instructions executed by the processor from a computer-readable medium. The software instructions may be read into the memory from the storage device or from another device via the local area network interface. The software instructions stored in the memory cause the processor to perform the three-dimensional object surface defect detection method described above. Furthermore, the invention can equally be implemented by hardware circuits or by a combination of hardware circuits and software instructions, so implementation of this embodiment is not limited to any specific combination of hardware circuitry and software.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for detecting surface defects of a three-dimensional object is characterized by comprising the following steps:
step S1, training to obtain a target segmentation model by using the RGB image data set of the detected object;
step S2, collecting image information of a detected object from multiple angles, utilizing the target segmentation model to segment the detected object region in real time, and reconstructing the detected object in real time to obtain a point cloud model of the detected object;
s3, carrying out attitude estimation based on the point cloud model and the three-dimensional model of the detected object to obtain the pose of the detected object in a camera coordinate system;
step S4, calculating a normal vector of each point on the three-dimensional model, and carrying out surface splitting on the three-dimensional model based on the similarity of the normal vectors to obtain a surface set, a flatness degree set corresponding to the surface set and a pose set of the surface set in a camera coordinate system;
step S5, performing rasterization processing on the surface set to obtain a surface defect detection path of the detected object;
and step S6, detecting the defect on the surface of the detected object according to the defect detection path.
2. The method for detecting surface defects of three-dimensional objects according to claim 1, wherein the step S2 specifically comprises:
step S21, collecting the RGB image and the depth image of the detected object at the current moment, processing the collected image in real time by using the target segmentation model to obtain a first point cloud and a first RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
step S22, taking the center of the first point cloud as the center of rotation, controlling the mobile platform carrying the camera to rotate by a preset angle around the detected object from its current position while collecting RGB images and depth images of the detected object, processing the collected images in real time with the target segmentation model to obtain a second point cloud and a second RGB image of the detected object, and adjusting the posture of the camera and the distance between the camera and the detected object in real time;
step S23, registering the first point cloud and the second point cloud, fusing the registered first point cloud and the registered second point cloud to obtain the pose of the point cloud under the current camera coordinate system, updating the first point cloud by the fused point cloud, and taking the updated first point cloud as a new first point cloud;
step S24, repeating the steps S22-S23 until the similarity of the first RGB image and the second RGB image meets a preset condition, and stopping;
and step S25, taking the new first point cloud as a point cloud model of the detected object.
3. The method as claimed in claim 2, wherein the step S21 and the step S22 of processing the captured image in real time and adjusting the pose of the camera and the distance between the camera and the detected object in real time specifically include:
step S211, inputting the collected RGB image into the target segmentation model for target segmentation to obtain a mask image of the detected object;
step S212, obtaining a depth image and an RGB image only including the detected object based on the mask image of the detected object, and generating a point cloud based on the depth image and the RGB image only including the detected object;
step S213, calculating the centroid of the mask area in the mask image of the detected object, calculating the distance from the centroid to the center of the mask image under the pixel coordinate system, and adjusting the camera posture based on the distance;
step S214, calculating a coincidence degree between the boundary of the mask region in the mask image of the detected object and the boundary of the mask image, and adjusting the distance between the camera and the detected object according to the coincidence degree.
4. The method for detecting surface defects of a three-dimensional object according to claim 3, wherein the step S213 specifically comprises:
the distances from the centroid to the center of the mask image in the pixel coordinate system are calculated as:
d_ccu = u_mc - u_0
d_ccv = v_mc - v_0
where u_mc is the coordinate of the centroid of the mask region A_mask in the first mask image on the u-axis of the pixel coordinate system, u_0 is the coordinate of the center of the first mask image on the u-axis, v_mc is the coordinate of the centroid of A_mask on the v-axis, and v_0 is the coordinate of the center of the first mask image on the v-axis;
the camera posture is adjusted in real time according to the following conditions:
if d_ccu < 0, the camera is controlled to rotate counterclockwise around the y-axis of the camera coordinate system;
if d_ccu > 0, the camera is controlled to rotate clockwise around the y-axis of the camera coordinate system;
if d_ccv < 0, the camera is controlled to rotate counterclockwise around the x-axis of the camera coordinate system;
if d_ccv > 0, the camera is controlled to rotate clockwise around the x-axis of the camera coordinate system.
5. The method according to claim 3, wherein the step S214 specifically comprises:
calculating the coincidence ratio L_overlap between the boundary of the mask region and the boundary of the mask image in the mask image of the detected object by the following formula:
L_overlap = count(P_edge) / (2(W + H))
where P_edge is the set of points of the mask region lying on the edges of the image, count(P_edge) is the number of points in the set P_edge, W is the number of image pixels in the u-axis direction, and H is the number of image pixels in the v-axis direction;
if L_overlap > ρ, where ρ is any decimal less than 0.5, the mobile robot is controlled to move away from the detected object, images of the detected object are acquired in real time while moving away, and processing is performed according to steps S211, S213 and S214 until the calculated L_overlap ≤ ρ;
if L_overlap = 0, the occupation ratio of the mask region in the mask image is calculated:
A_overlap = S_mask / (W × H)
where A_overlap is the proportion of the mask region A_mask in the first mask image, S_mask is the number of pixels contained in the mask region A_mask, W is the number of pixels of the first mask image in the u-axis direction, and H is the number of pixels of the first mask image in the v-axis direction;
if A_overlap < α, where α is any decimal less than 1, the mobile robot is controlled to move towards the detected object, images of the detected object are acquired in real time while approaching, and processing is performed according to steps S211, S213 and S214 until A_overlap ≥ α.
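Both quantities used in step S214 can be computed directly from a binary mask; in the sketch below the boundary-coincidence ratio is normalised by the image perimeter 2(W + H), which is an assumption made to match the symbols defined above.

    # Sketch of L_overlap and A_overlap for a binary mask (step S214 / claim 5).
    import numpy as np

    def overlap_measures(mask):
        h, w = mask.shape
        border = np.zeros_like(mask, dtype=bool)
        border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
        p_edge = np.logical_and(mask > 0, border)        # mask pixels lying on the image border
        l_overlap = p_edge.sum() / (2.0 * (w + h))       # coincidence with the image boundary
        a_overlap = (mask > 0).sum() / float(w * h)      # fraction of the image covered by the mask
        return l_overlap, a_overlap

    # Per the claim: move away while l_overlap > rho; once l_overlap is 0,
    # move closer while a_overlap < alpha.
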
6. The method for detecting surface defects of three-dimensional objects according to claim 2, wherein the preset conditions in step S24 are:
if L_s ≥ ε, the mobile robot stops collecting image information of the detected object;
where L_s is the similarity between the first RGB image and the second RGB image, and ε is any decimal between 0 and 1.
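The claim does not name the similarity measure L_s; structural similarity (SSIM) between the two RGB images is one plausible choice, sketched here with scikit-image under the assumption that both images have the same size.

    # One possible realisation of the similarity L_s (an assumption, not specified in the claim).
    import cv2
    from skimage.metrics import structural_similarity

    def rgb_similarity(img_a, img_b):
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        return structural_similarity(gray_a, gray_b)     # compare the result against epsilon
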
7. The method according to claim 1, wherein the step S3 specifically comprises:
step S31, taking the point cloud model as a source point cloud Q, calculating the centroid of the source point cloud Q and taking the centroid as the origin, randomly sampling part of the points from the source point cloud Q, calculating the distance between each sampled point and its neighboring points, and obtaining a sampling interval θ of the three-dimensional model based on the calculated distances;
step S32, sampling the three-dimensional model according to the sampling interval theta, and taking the obtained point cloud as a target point cloud P;
step S33, calculating a rotation matrix R_obj and a translation vector t_obj from the target point cloud P to the source point cloud Q; the rotation matrix R_obj and the translation vector t_obj are the pose of the detected object in the camera coordinate system, and new point cloud coordinates of the target point cloud P in the current camera coordinate system are obtained.
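A sketch of steps S31-S33 follows, assuming Open3D: the sampling interval θ is estimated from nearest-neighbour spacing in a random subset of the measured cloud, the reference mesh is sampled at roughly that density, and point-to-point ICP is used to recover R_obj and t_obj. The use of ICP, the sample fraction and the correspondence distance are illustrative assumptions.

    # Sketch of claim 7 (steps S31-S33) with Open3D.
    import numpy as np
    import open3d as o3d

    def estimate_pose(source_cloud, mesh, sample_fraction=0.1):
        pts = np.asarray(source_cloud.points)
        idx = np.random.choice(len(pts), max(1, int(sample_fraction * len(pts))), replace=False)
        tree = o3d.geometry.KDTreeFlann(source_cloud)
        dists = []
        for i in idx:
            _, _, d2 = tree.search_knn_vector_3d(source_cloud.points[i], 2)  # nearest neighbour
            dists.append(np.sqrt(d2[1]))
        theta = float(np.mean(dists))                            # sampling interval (step S31)
        # Step S32: sample the mesh at a spacing of roughly theta (density heuristic).
        n_samples = max(1, int(mesh.get_surface_area() / (theta * theta)))
        target_cloud = mesh.sample_points_uniformly(number_of_points=n_samples)
        reg = o3d.pipelines.registration.registration_icp(
            target_cloud, source_cloud, max_correspondence_distance=5 * theta,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        T = reg.transformation                                   # step S33: pose of the object
        return T[:3, :3], T[:3, 3], target_cloud.transform(T)    # R_obj, t_obj, transformed P
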
8. The method for detecting the surface defects of the three-dimensional object according to claim 1, wherein the step S4 specifically comprises:
step S41, randomly initializing a starting point P_0 on the three-dimensional model as the current point P_cur, initializing an array Array_cur for storing the point cloud of the current surface, and initializing a queue Queue_next for storing adjacent points P_next;
step S42, searching for adjacent points P_next of the current point P_cur using a kd-tree algorithm, and judging whether the adjacent point P_next is in Array_cur;
step S43, if the adjacent point P_next is not in Array_cur, calculating the normal vectors of the adjacent point P_next and the current point P_cur;
step S44, calculating the cosine similarity C_p of the normal vectors of the adjacent point P_next and the current point P_cur;
step S45, if |C_p − C_p0| ≤ Δε or C_p = 1, saving the adjacent point P_next to Array_cur and simultaneously inserting the adjacent point at the tail of Queue_next; where C_p0 is the cosine similarity of the normal vectors of P_0 and its first adjacent point P_next, and Δε is the allowable surface flatness difference range;
step S46, reading the head element of Queue_next, updating the current point P_cur with it, taking the updated point as the new current point P_cur, and deleting the head element from Queue_next;
step S47, repeating steps S42 to S46 until Queue_next is empty;
step S48, storing Array_cur into the array set Array, simultaneously deleting from the target point cloud P the points that are identical to those in Array_cur, and clearing Array_cur;
step S49, repeating steps S41 to S48 until all points of the target point cloud P have been traversed, obtaining a point cloud array set {Array_i | i = 1, 2, ..., N} of N surfaces, namely the surface set A = {A_i | i = 1, 2, ..., N} of the detected object, together with the corresponding flatness set of these surfaces;
calculating the rotation matrix set and the translation vector set corresponding to the surface set A in the camera coordinate system, thereby obtaining the pose set of the surface set A in the camera coordinate system.
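The surface extraction of claim 8 is a region-growing procedure; the sketch below uses SciPy's cKDTree for the neighbour search and assumes per-point unit normals are already available. The neighbourhood radius, the flatness tolerance Δε and the |C_p − C_p0| ≤ Δε form of the acceptance test are assumptions.

    # Sketch of the region-growing surface segmentation of claim 8.
    from collections import deque
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_surfaces(points, normals, radius=0.01, delta_eps=0.05):
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        surfaces = []
        while unvisited:                                   # step S49: until all points are traversed
            seed = unvisited.pop()                         # step S41: pick a starting point
            current_surface = {seed}
            queue = deque([seed])
            c_p0 = None                                    # similarity at the first adjacent point
            while queue:                                   # steps S42-S47
                cur = queue.popleft()
                for nb in tree.query_ball_point(points[cur], radius):
                    if nb in current_surface or nb not in unvisited:
                        continue
                    c_p = float(np.dot(normals[cur], normals[nb]))   # step S44 (unit normals)
                    if c_p0 is None:
                        c_p0 = c_p
                    if abs(c_p - c_p0) <= delta_eps or c_p == 1.0:   # step S45 (assumed test form)
                        current_surface.add(nb)
                        queue.append(nb)
            unvisited -= current_surface                   # step S48: remove the surface from P
            surfaces.append(np.fromiter(current_surface, dtype=int))
        return surfaces                                    # surface set A as point-index arrays
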
9. The method for detecting surface defects of three-dimensional objects according to claim 1, wherein the step S6 specifically comprises:
step S61, obtaining all rectangular cells as normal samples through step S5;
step S62, classifying the normal samples using the K-means clustering algorithm to obtain a set of M image data groups with different texture characteristics, Dataset = {Dataset_m | m = 1, 2, ..., M};
Step S63, respectively establishing a normal sample discrimination model for each image data group in the image data group set Dataset;
step S64, acquiring rectangular cells of the detected object according to step S5, and judging the image data group to which each rectangular cell belongs based on the position information of the camera;
step S65, reconstructing the rectangular unit by using the dictionary matrix library corresponding to that image data group, obtaining a group of reconstruction errors {e_i | i = 1, 2, ..., G} of the rectangular unit;
step S66, if there exists any reconstruction error e_i < th_i, the rectangular unit is a normal sample; otherwise, the rectangular unit is a defect sample;
where G represents the number of dictionary matrices in the dictionary matrix library corresponding to the image data group to which the rectangular unit belongs, and th_i represents the threshold in the threshold library corresponding to that dictionary matrix library.
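Steps S64-S66 reduce to reconstructing the cell with each dictionary of its group's library and comparing the reconstruction errors to the thresholds. A minimal sketch follows, using scikit-learn's sparse_encode (orthogonal matching pursuit) as the reconstruction step; flattening the cell into a single vector and the sparsity level are assumptions.

    # Sketch of the normal/defect decision for one rectangular unit (steps S64-S66).
    import numpy as np
    from sklearn.decomposition import sparse_encode

    def is_normal_cell(cell, dictionaries, thresholds, n_nonzero_coefs=10):
        x = cell.reshape(1, -1).astype(np.float64)              # flatten the cell into a signal
        for D, th in zip(dictionaries, thresholds):             # D has shape (n_atoms, n_features)
            code = sparse_encode(x, D, algorithm="omp", n_nonzero_coefs=n_nonzero_coefs)
            e = float(np.linalg.norm(x - code @ D))             # reconstruction error e_i
            if e < th:
                return True                                     # any e_i < th_i: normal sample
        return False                                            # otherwise: defect sample
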
10. The method of claim 9, wherein the step S63 of establishing a normal sample discrimination model for the image data set specifically includes:
step S631, randomly selecting an image data group as a current image data group;
step S632, randomly selecting an image in the current image data set as a current image;
step S633, calculating the similarity between the current image and other images, and summing the obtained similarities to obtain a difference value between the current image and the current image data set;
step S634, reselecting an image as the current image, and repeatedly executing step S633 until the images in the image data set are traversed, obtaining the difference values between all the images in the current image data set and the current image data set, and sorting the difference values from large to small;
step S635, selecting the images corresponding to the first G difference values, and performing sparse decomposition on the G images respectively by adopting the KSVD algorithm to obtain a dictionary matrix library;
step S636, reconstructing all the images in the current image data group by using all the dictionary matrices in the dictionary matrix library D, where each dictionary matrix correspondingly yields K reconstruction errors E = {e_ij | j = 1, 2, ..., K}, and K represents the number of images in the current image data group;
step S637, taking the maximum value of the K reconstruction errors to obtain the threshold library of the dictionary matrix library for discriminating normal samples: th = {th_i | th_i = max({e_ij | j = 1, 2, ..., K})}; the dictionary matrix library D and the corresponding threshold library th form the normal sample discrimination model of the current group;
step S638, re-selecting an image data group as the current image data group, and repeating steps S632 to S637 until all image data groups in the image data group set Dataset have been traversed.
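A sketch of how the per-group discrimination model of claim 10 could be built is given below. Scikit-learn has no K-SVD implementation, so MiniBatchDictionaryLearning over image patches is used as a stand-in; cosine similarity between flattened images, the patch size and the number of patches are likewise assumptions rather than details from the claim.

    # Sketch of building a dictionary matrix library and threshold library for one image data group.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.metrics.pairwise import cosine_similarity

    def _patches(img, patch_size):
        p = extract_patches_2d(img, patch_size, max_patches=500, random_state=0)
        return p.reshape(len(p), -1).astype(np.float64)

    def build_group_model(images, g=3, n_atoms=64, patch_size=(8, 8)):
        X = np.stack([im.reshape(-1) for im in images]).astype(np.float64)
        sim = cosine_similarity(X)
        diff = sim.sum(axis=1) - 1.0                     # steps S632-S634: per-image difference value
        picks = np.argsort(diff)[::-1][:g]               # images with the G largest difference values
        library, thresholds = [], []
        for i in picks:                                  # step S635: one dictionary per selected image
            learner = MiniBatchDictionaryLearning(
                n_components=n_atoms, transform_algorithm="omp").fit(_patches(images[i], patch_size))
            errors = []
            for im in images:                            # step S636: reconstruct every image in the group
                P = _patches(im, patch_size)
                codes = learner.transform(P)
                errors.append(float(np.linalg.norm(P - codes @ learner.components_)))
            library.append(learner)
            thresholds.append(max(errors))               # step S637: threshold = max reconstruction error
        return library, thresholds
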
CN202110390016.8A 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method Active CN113096094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110390016.8A CN113096094B (en) 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method

Publications (2)

Publication Number Publication Date
CN113096094A true CN113096094A (en) 2021-07-09
CN113096094B CN113096094B (en) 2024-05-17

Family

ID=76676227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110390016.8A Active CN113096094B (en) 2021-04-12 2021-04-12 Three-dimensional object surface defect detection method

Country Status (1)

Country Link
CN (1) CN113096094B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180172436A1 (en) * 2015-06-01 2018-06-21 Nippon Steel & Sumitomo Metal Corporation Method and device for inspecting crankshaft
US20180211373A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
CN107052086A (en) * 2017-06-01 2017-08-18 扬州苏星机器人科技有限公司 Stamping parts surface defect detection apparatus and detection method based on 3D vision
CN109242828A (en) * 2018-08-13 2019-01-18 浙江大学 3D printing product 3 D defects detection method based on optical grating projection multistep phase shift method
CN109345523A (en) * 2018-09-21 2019-02-15 中国科学院苏州生物医学工程技术研究所 Surface defects detection and three-dimensional modeling method
CN110033447A (en) * 2019-04-12 2019-07-19 东北大学 A kind of high-speed rail heavy rail detection method of surface flaw based on cloud method
CN110400315A (en) * 2019-08-01 2019-11-01 北京迈格威科技有限公司 A kind of defect inspection method, apparatus and system
CN110634140A (en) * 2019-09-30 2019-12-31 南京工业大学 Large-diameter tubular object positioning and inner wall defect detection method based on machine vision
CN111127422A (en) * 2019-12-19 2020-05-08 北京旷视科技有限公司 Image annotation method, device, system and host
CN112161619A (en) * 2020-09-16 2021-01-01 杭州思锐迪科技有限公司 Pose detection method, three-dimensional scanning path planning method and detection system
CN112419482A (en) * 2020-11-23 2021-02-26 太原理工大学 Three-dimensional reconstruction method for mine hydraulic support group pose fused with depth point cloud

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113731836A (en) * 2021-08-04 2021-12-03 华侨大学 Urban solid waste online sorting system based on deep learning
EP4145115A1 (en) * 2021-09-03 2023-03-08 Kabushiki Kaisha Toshiba Processing device relating to inspection of inspection object, inspection system of inspection object, processing method relating to inspection of inspection object, and processing program relating to inspection of inspection object
US20230071341A1 (en) * 2021-09-03 2023-03-09 Kabushiki Kaisha Toshiba Processing device relating to inspection of inspection object, inspection system of inspection object, processing method relating to inspection of inspection object, and non-transitory storage medium
CN113706619A (en) * 2021-10-21 2021-11-26 南京航空航天大学 Non-cooperative target attitude estimation method based on space mapping learning
CN114037703A (en) * 2022-01-10 2022-02-11 西南交通大学 Subway valve state detection method based on two-dimensional positioning and three-dimensional attitude calculation
CN114708230A (en) * 2022-04-07 2022-07-05 深圳市精明检测设备有限公司 Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN115953400A (en) * 2023-03-13 2023-04-11 安格利(成都)仪器设备有限公司 Automatic corrosion pit detection method based on three-dimensional point cloud object surface
CN115965628A (en) * 2023-03-16 2023-04-14 湖南大学 Online dynamic detection method and detection system for workpiece coating quality
CN116977331A (en) * 2023-09-22 2023-10-31 武汉展巨华科技发展有限公司 3D model surface detection method based on machine vision
CN116977331B (en) * 2023-09-22 2023-12-12 武汉展巨华科技发展有限公司 3D model surface detection method based on machine vision

Also Published As

Publication number Publication date
CN113096094B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
CN113096094B (en) Three-dimensional object surface defect detection method
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
CN107063228B (en) Target attitude calculation method based on binocular vision
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
CN112233181A (en) 6D pose recognition method and device and computer storage medium
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN115359021A (en) Target positioning detection method based on laser radar and camera information fusion
CN111079565A (en) Construction method and identification method of view two-dimensional posture template and positioning and grabbing system
CN115032648B (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
JP2018169660A (en) Object attitude detection apparatus, control apparatus, robot and robot system
CN113538503A (en) Solar panel defect detection method based on infrared image
CN114372992A (en) Edge corner point detection four-eye vision algorithm based on moving platform
CN112070870A (en) Point cloud map evaluation method and device, computer equipment and storage medium
CN114549629A (en) Method for estimating three-dimensional pose of target by underwater monocular vision
CN116921932A (en) Welding track recognition method, device, equipment and storage medium
CN113592976B (en) Map data processing method and device, household appliance and readable storage medium
CN116385356A (en) Method and system for extracting regular hexagonal hole features based on laser vision
CN115953465A (en) Three-dimensional visual random grabbing processing method based on modular robot training platform
Rashd et al. Open-box target for extrinsic calibration of LiDAR, camera and industrial robot
CN114862969A (en) Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot
CN116523984B (en) 3D point cloud positioning and registering method, device and medium
Vitiuk et al. Software Package for Evaluation the Stereo Camera Calibration for 3D Reconstruction in Robotics Grasping System.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220402

Address after: 637000 group 3, Yanjia village, Huilong Town, Yingshan County, Nanchong City, Sichuan Province

Applicant after: Wu Jun

Address before: No. ol-01-202012007, 3rd floor, building 1, No. 366, north section of Hupan Road, Tianfu New District, Chengdu, Sichuan 610000

Applicant before: Chengdu lantu Technology Co.,Ltd.

GR01 Patent grant