CN110400315B - Defect detection method, device and system

Defect detection method, device and system

Info

Publication number
CN110400315B
CN110400315B
Authority
CN
China
Prior art keywords
image
original image
camera
pose
transformation
Prior art date
Legal status
Active
Application number
CN201910711135.1A
Other languages
Chinese (zh)
Other versions
CN110400315A (en)
Inventor
付兴银
李广
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910711135.1A
Publication of CN110400315A
Application granted
Publication of CN110400315B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of defect detection and provides a defect detection method, device, and system. The defect detection method comprises the following steps: acquiring a first original image containing a first part; determining a first foreground region corresponding to the first part in the first original image; removing the image information in the first background region of the first original image to obtain an image to be detected; and detecting defects on the surface of the first part in the image to be detected using a pre-trained neural network model. Because the image information in the first background region is removed, the content of that region essentially does not influence the detection process of the neural network model, and defects are essentially never falsely detected in the background region, so the defect detection precision can be improved and the false detection rate reduced.

Description

Defect detection method, device and system
Technical Field
The invention relates to the technical field of defect detection, in particular to a defect detection method, device and system.
Background
In the industrial field, it is often necessary to detect defects on the surfaces of components. With the development of artificial intelligence and computer vision technology, industrial defect detection is becoming increasingly intelligent. In some existing inspection schemes, an image of a part is captured by a camera and then input into a pre-built inspection model, which outputs the inspection result. However, since the image usually contains not only the part but also background, the model often falsely detects background content as defects, which reduces the precision of defect detection.
Disclosure of Invention
An embodiment of the present invention provides a method, an apparatus, and a system for defect detection to solve the above technical problem.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a defect detection method, including: acquiring a first original image containing a first part; determining a first foreground region corresponding to the first part in the first original image; removing image information in a first background region of the first original image to obtain an image to be detected, wherein the first background region is the region of the first original image other than the first foreground region; and detecting defects on the surface of the first part in the image to be detected using a pre-trained neural network model.
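For illustration only (the application prescribes no programming language or library), the overall flow of the first aspect could be sketched in Python as follows; every helper name below is a hypothetical placeholder, not part of the claimed subject matter:

```python
import numpy as np

def determine_foreground(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder; see the projection-based ways described below."""
    raise NotImplementedError

def detect_defects(first_original: np.ndarray, model) -> object:
    """Illustrative sketch of the claimed method (first aspect)."""
    # Determine the first foreground region corresponding to the first part.
    foreground_mask = determine_foreground(first_original)   # H x W bool array

    # Remove image information in the first background region, here by
    # setting every background pixel to a single color (black).
    image_to_detect = first_original.copy()
    image_to_detect[~foreground_mask] = (0, 0, 0)

    # Detect surface defects with the pre-trained neural network model.
    return model.predict(image_to_detect)                    # `predict` is assumed
```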
In this method, the acquired first original image is not detected directly. Instead, the image information in the first background region of the first original image is removed first, and the neural network model then detects defects on the part surface in the resulting image to be detected. Because the image information in the first background region has been removed, its content essentially does not influence the detection process of the neural network model, and defects are essentially never falsely detected in the first background region, so the defect detection precision can be improved and the false detection rate reduced.
In addition, the neural network model has good learning and generalization capabilities, so performing defect detection with a pre-trained neural network model further improves detection precision.
In an implementation of the first aspect, the removing of image information in the first background region of the first original image includes: setting the portion of the first original image within the first background region to a single color.
For a defect to be detectable in an image, there must be a color difference between the part surface at the defect and the surrounding surface. After the portion of the first original image within the first background region is set to a single color, all pixels in the first background region have the same color with no differences, so defects are essentially never falsely detected in the first background region.
In one implementation of the first aspect, the similarity between the single color and the color of the first part is less than a preset threshold.
When setting the portion of the first original image within the first background region to a single color, a color that differs markedly from the color of the first part should be chosen where possible, to avoid the first part being hard to distinguish from the background in the image to be detected, which would harm the detection result.
In one implementation of the first aspect, the determining of the first foreground region corresponding to the first part in the first original image includes: determining a first transformation between the pose of the camera when acquiring the first original image and the pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose; and either projecting the three-dimensional model of the first part into the first original image using the first transformation and determining the region formed by the projection as the first foreground region, or projecting the three-dimensional model of the first part using the first transformation to form a first mask image and determining the region that remains unmasked after ANDing the first original image with the first mask image as the first foreground region.
The three-dimensional model of the first part may be pre-constructed, for example by drawing it with drawing software or by three-dimensionally reconstructing a part of the same type as the first part. Determining the first foreground region by projecting the three-dimensional model is accurate, so the first foreground region and the first background region can be reliably distinguished in the first original image, which improves detection precision.
In one implementation of the first aspect, the determining a first transformation between the pose of the camera at the time of acquiring the first raw image and the pose of the three-dimensional model of the first part includes: acquiring depth data of a scene in the first original image acquired by the camera while acquiring the first original image; and performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, and determining the first transformation between the two point clouds.
In one implementation of the first aspect, the determining a first transformation between the pose of the camera at the time of acquiring the first raw image and the pose of the three-dimensional model of the first part includes: acquiring a second transformation between the pose of the camera during calibration and the pose of the three-dimensional model of the first part; acquiring a third transformation between the pose of the camera when acquiring the first original image and the pose of the camera when calibrating; determining a product of the third transform and the second transform as the first transform.
To project the three-dimensional model of the first part, a first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part needs to be obtained.
The first implementation requires a camera capable of acquiring depth data, for example an RGB-D camera.
In the second implementation, when the first part is to be detected, the pose of the camera and the pose of the three-dimensional model are first calibrated, yielding the second transformation between the two poses at calibration. The third transformation, between the pose of the camera when acquiring the first original image and its pose at calibration, can be obtained in several cases: for example, the camera is translated and/or rotated in a preset manner and acquires a first original image whenever it reaches a preset pose, so the third transformation is known because the translation and/or rotation behavior is preset; or, the camera is mounted at the end of a robot arm, and the pose change caused by translating and/or rotating the arm can be recorded by the robot, so the third transformation can also be obtained. Once the third and second transformations are available, they can be multiplied to obtain the first transformation (where multiplication refers to multiplying the matrices corresponding to the transformations). The second implementation computes the first transformation more efficiently.
If multiple shots must be taken at several positions and angles when detecting the first part, the first transformation may be calculated by the first implementation for every shot, or by the first implementation only for the first shot (i.e., at calibration) and by the second implementation for each subsequent shot.
In an implementation of the first aspect, the detecting, with a pre-trained neural network model, of defects existing on the surface of the first part in the image to be detected includes: detecting, in the image to be detected with a pre-trained neural network model, whether the surface of the first part has a defect, or detecting the positions of surface defects of the first part.
In some applications, it is only necessary to determine whether the surface of the first part has a defect, i.e., to output one of the two results yes and no; in other applications, the locations of surface defects are output, for example as one or more rectangular boxes containing defects in the image to be detected.
In one implementation of the first aspect, the acquiring of a first original image containing a first part includes: acquiring first original images obtained by a camera shooting the first part at a plurality of preset positions and a plurality of preset angles.
To detect the defects on the surface of the first part comprehensively, images covering that surface must first be acquired comprehensively, so the camera can shoot at a plurality of preset positions and, at each preset position, at a plurality of preset angles. There may be a single camera that moves to each preset position in turn and rotates to each preset angle in turn; or there may be several cameras, for example one at each preset position, each controlled to rotate through the preset angles in turn, or several at each preset position, each shooting toward one preset angle.
In an implementation of the first aspect, the acquiring of first original images obtained by the camera shooting the first part at a plurality of preset positions and angles includes: controlling a robot arm, with the camera mounted at its end, to move to the preset positions in sequence and, at each position, rotate to the preset angles in sequence to shoot the first part, and acquiring the first original images thus obtained.
A robot arm can translate and/or rotate freely within a certain range, so it can control the camera accurately to complete the shots; moreover, robot arms are widely used in industrial production and easy to obtain in a factory, so the scheme of moving the camera with a robot arm is not difficult to implement.
In one implementation of the first aspect, before the acquiring of the first original image containing the first part, the method further comprises: acquiring a second original image containing a second part; determining a second foreground region corresponding to the second part in the second original image; and removing image information in a second background region of the second original image to obtain a training image, wherein the second background region is the region of the second original image other than the second foreground region, and the training image is used for training the neural network model.
In an implementation of the first aspect, the removing of image information in the second background region of the second original image includes: setting the portion of the second original image within the second background region to a single color.
In one implementation of the first aspect, the determining of the second foreground region corresponding to the second part in the second original image includes: determining a fourth transformation between the pose of the camera when acquiring the second original image and the pose of a three-dimensional model of the second part, wherein the three-dimensional model of the second part has a preset pose; and either projecting the three-dimensional model of the second part into the second original image using the fourth transformation and determining the region formed by the projection as the second foreground region, or projecting the three-dimensional model of the second part using the fourth transformation to form a second mask image and determining the region that remains unmasked after ANDing the second original image with the second mask image as the second foreground region.
The above three implementations describe the acquisition of the training set for the neural network; the process parallels the part-defect detection steps and is not repeated here.
In one implementation of the first aspect, after obtaining the training image, the method further comprises: training the neural network model with the training image and the labeling information obtained by labeling the training image, or sending the training image to a server so that the server can train the neural network model with the training image and the labeling information obtained by labeling it.
The neural network model may be trained on the same device that detects the surface defects of the part, for example a computer located at the inspection site. However, because the training process consumes huge computing resources, the training images can also be sent to a server and the training performed there; after the neural network model has been trained, the model is deployed on a computer at the inspection site for actual detection.
In a second aspect, an embodiment of the present application provides a defect detection apparatus, including: a first image acquisition module for acquiring a first original image containing a first part; a first foreground determining module, configured to determine a first foreground region corresponding to the first part in the first original image; a first background removing module, configured to remove image information in a first background region of the first original image to obtain an image to be detected, where the first background region is the region of the first original image other than the first foreground region; and a detection module for detecting defects on the surface of the first part in the image to be detected with a pre-trained neural network model.
In a third aspect, an embodiment of the present application provides a defect detection system, including: a robot with a camera arranged at the end of its arm; and a control device for sending control instructions to the robot, controlling the camera to acquire a first original image containing a first part, determining a first foreground region corresponding to the first part in the first original image, removing image information in a first background region of the first original image to obtain an image to be detected, and detecting defects on the surface of the first part in the image to be detected with a pre-trained neural network model, wherein the first background region is the region of the first original image other than the first foreground region.
In one implementation of the third aspect, the camera comprises an RGB-D camera.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer program instructions are stored, and when the computer program instructions are read and executed by a processor, the computer program instructions perform the steps of the method provided in the first aspect or any one of the possible implementation manners of the first aspect.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a memory in which computer program instructions are stored, and a processor, where the computer program instructions, when read and executed by the processor, perform the steps of the method provided by the first aspect or any one of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a schematic diagram of a defect detection system provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for defect detection according to an embodiment of the present application;
FIGS. 3(A) to 3(C) are schematic diagrams illustrating the detection effect of a defect detection method provided by an embodiment of the present application;
FIG. 4 is a functional block diagram of a defect detection apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that like reference numerals and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Fig. 1 shows a schematic diagram of a defect detection system 100 provided in an embodiment of the present application. Referring to fig. 1, the defect detection system includes a robot 110 and a control device 120, and data interaction between the robot 110 and the control device 120 may be performed in a wired or wireless manner. The robot 110 may further include a robot body 116, a robot arm 114 coupled to the robot body 116, and a camera 112 disposed at an end of the robot arm 114. The control device 120 implements the defect detection method provided by the embodiment of the present application (the method steps are set forth in fig. 2). In short, when a defect on the surface of the part needs to be detected, the part is placed first, then the control device 120 sends a control command to the robot body 116, and the robot body 116 controls the mechanical arm 114 according to the received control command, for example, controls the mechanical arm 114 to translate and/or rotate, so as to drive the camera 112 to translate and/or rotate until the camera 112 reaches a proper position and/or rotates to a proper angle, and shoots the part. The image captured by the camera 112 is transmitted back to the control device 120 through the robot body 116, and the defect detection is performed on the control device 120 and the detection result is output.
The control device 120 may be, but is not limited to, a dedicated device, a physical device such as a desktop computer, a notebook computer, a tablet computer, a smart phone, or a virtual device such as a virtual machine. The control device 120 may be one device or a combination of a plurality of devices. The control device 120 may be installed at an inspection site, perform defect inspection in real time, and output the inspection result to a worker at the site for viewing. Of course, the control device 120 may be remotely located, or the control device 120 may be integrated with the robot 110.
In various implementations, the camera 112 may be a general camera or an RGB-D camera, and if the camera 112 is an RGB-D camera, it may also capture depth data of a scene in an image when capturing an image of a part (the purpose of the depth data is described later). Alternatively, a common camera and a depth camera may be arranged together to implement the RGB-D camera-like function.
Fig. 2 shows a flowchart of a defect detection method provided in an embodiment of the present application. The method does not detect the original image containing the part directly; it first clears the image information in the background region of the original image and then uses the resulting image for defect detection. Because the content of the background region can no longer interfere with the detection process, the precision of defect detection can be improved. Note that the method of fig. 2 may be applied in a defect detection system, but may also be applied in other systems or devices. Referring to fig. 2, the method includes:
step S200: a first raw image is acquired.
The first original image is an image containing the part to be detected; hereinafter, to distinguish the part used in the defect detection stage from the part used in the model training stage, the part to be detected is called the first part. For example, referring to fig. 3(A), the outer rectangle represents the first original image and the central cylinder represents the first part. Defects exist at positions A and B on the surface of the first part (hereinafter defect A and defect B), and these are the targets of defect detection. A defect also exists at position C on the table surface on which the first part is placed (hereinafter defect C); defect C is not a defect on the surface of the first part and is not a target of defect detection.
The first raw image may be obtained by photographing a surface of the first part with a camera. In some implementations, in order to comprehensively detect the defects on the surface of the first part, it is necessary to comprehensively acquire an image of the surface of the first part, so that the camera can be used to shoot at a plurality of preset positions, and the camera can shoot at each preset position according to a plurality of preset angles. For example, for a cylindrical first part, 10 shooting positions may be determined along the circumference thereof, and 10 shooting angles are determined at each shooting position, so that the shot image can cover the surface of the first part as much as possible.
Furthermore, a single camera may be used for shooting, moving to each preset position in turn and rotating to each preset angle in turn. For example, in the defect detection system, the robot arm may be moved under the control of the control device so as to carry the camera disposed at the end of the arm to a preset position and rotate it to a preset angle, completing the task of acquiring the first original image. Of course, several cameras may also be used, for example one camera at each preset position, each controlled to rotate through the preset angles in turn, or several cameras at each preset position, each fixed to face one preset angle. In the following, for simplicity, the single-camera case is mainly described as an example, but this should not be construed as limiting the scope of the present application.
The processing is similar for each of the acquired first raw images, so the following steps are mainly explained for the case of one of the first raw images.
Step S201: a first foreground region in a first original image is determined.
The first foreground region refers to a region corresponding to the first part in the first original image, for example, in fig. 3(a), the region occupied by the cylinder is the first foreground region.
There are various ways to determine the first foreground region, and in some implementations, some image processing methods may be used, such as image segmentation of the first original image to segment the foreground object from the background.
In other implementations, if the three-dimensional model of the first part can be obtained in advance, a first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part may be determined first. Then, determining a first foreground area by projecting a three-dimensional model of the first part, specifically at least including the following two ways:
First, the three-dimensional model of the first part is projected into the first original image using the first transformation, and the region formed by the projection is the first foreground region.
Second, the three-dimensional model of the first part is projected using the first transformation to form a first mask image, and the region of the first original image that remains unmasked after ANDing it with the first mask image is determined as the first foreground region. For example, the first mask image may have the same size as the first original image, with pixel values taking only the two values 0 and 1: the pixel value is 1 inside the region formed by projecting the three-dimensional model and 0 outside it. After the AND operation is performed on the pixels at corresponding positions of the first original image and the first mask image, the pixel values of some pixels of the first original image are set to zero (masked), and the region formed by the pixels whose values are not set to zero is the first foreground region (unmasked).
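As a minimal sketch of this AND operation (assuming NumPy, which the application does not prescribe):

```python
import numpy as np

def mask_with_projection(first_original: np.ndarray, first_mask: np.ndarray) -> np.ndarray:
    """ANDs the first original image with the first mask image.

    first_original: H x W x 3 uint8 image.
    first_mask:     H x W array whose value is 1 inside the region formed by
                    projecting the three-dimensional model and 0 outside it.
    """
    # Multiplying by the 0/1 mask zeroes (masks) every pixel outside the
    # projected region; the pixels whose values are not set to zero form
    # the first foreground region.
    return first_original * first_mask[:, :, None].astype(first_original.dtype)
```

Note that zeroing the background pixels is equivalent to setting them to black, so this single operation also accomplishes the clearing of step S202, as remarked further below.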
The three-dimensional model of the first part may be obtained in different ways. For example, it may be drawn with drawing software (e.g., CAD) according to the design specification of the first part; since a factory is likely to have drawn a three-dimensional model of the first part when manufacturing it, that model can be used directly in the solution of the present application. As another example, a three-dimensional reconstruction of the first part may be obtained by scanning (e.g., with a depth camera or a three-dimensional scanner). Determining the first foreground region by three-dimensional model projection requires no complex processing of the first original image, so it is simple, fast, and relatively accurate, which benefits the subsequent detection precision.
It should be noted that the three-dimensional model of the first part refers to a model shared by all parts of the same type as the part currently being detected, not a model built only for that individual part. Also, the three-dimensional model itself may be defect-free: a drawn model has no defects by construction, and a reconstruction can be performed on a defect-free part.
A pose can be preset for the three-dimensional model of the first part for the calculation of the first transformation; since the three-dimensional model is only a virtual part, this pose can be set arbitrarily. The first transformation can be calculated in at least two ways:
first mode
The depth data of the scene in the first original image, acquired by the camera at the same time as that image, are obtained first; point cloud registration is then performed between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, and the first transformation between the two point clouds is determined.
As mentioned in the introduction to fig. 1, the depth data can be acquired with an RGB-D camera, in which the ordinary camera acquiring the first original image and the depth measurement module acquiring the depth data are close together and can therefore be considered to observe the same scene. A point cloud can be obtained from the depth data; because the depth data generated when the camera shoots the first part differ between positions and angles, the point cloud obtained when the camera captures the first original image actually encodes the pose of the camera at that moment. Another point cloud can be obtained from the three-dimensional model of the first part, representing the pose of the three-dimensional model (i.e., the pose preset for it as mentioned above). Registering the two point clouds therefore yields the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part.
One way to perform point cloud registration is coarse registration followed by fine registration. Coarse registration finds an approximate rotation matrix and translation matrix between the two point clouds when their relative position is completely unknown; it can be performed, for example, based on PPF (Point Pair Feature). Fine registration then computes a more accurate rotation matrix and translation matrix given the initial rotation and translation obtained from coarse registration; the ICP (Iterative Closest Point) algorithm can be used, for example.
For the point cloud registration, suitable point cloud features and registration algorithms can be chosen according to the morphological characteristics of the part. In addition, since the depth data cover the whole scene rather than only the first part, the point cloud data belonging to the background can be filtered out before registration to improve the registration effect.
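A sketch of this first way, assuming the Open3D library (the application names no library) and taking the coarse registration result as given:

```python
import open3d as o3d  # assumption: Open3D; the application prescribes no library

def estimate_first_transformation(depth_image, intrinsic, model_cloud, coarse_init):
    """Sketch of the first way of determining the first transformation.

    depth_image: o3d.geometry.Image captured together with the first original image.
    intrinsic:   o3d.camera.PinholeCameraIntrinsic of the camera.
    model_cloud: point cloud sampled from the three-dimensional model of the part.
    coarse_init: 4x4 initial guess, e.g. from a PPF-based coarse registration
                 (the coarse step itself is omitted here).
    """
    # Back-project the depth data into a point cloud in the camera frame.
    scene_cloud = o3d.geometry.PointCloud.create_from_depth_image(
        depth_image, intrinsic)

    # Downsample for speed; in practice, points belonging to the background
    # could also be filtered out here to improve the registration effect.
    scene_cloud = scene_cloud.voxel_down_sample(voxel_size=0.005)  # 5 mm

    # Fine registration with ICP, starting from the coarse estimate.
    result = o3d.pipelines.registration.registration_icp(
        scene_cloud, model_cloud,
        max_correspondence_distance=0.02,  # 2 cm; scene-dependent
        init=coarse_init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix: the first transformation
```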
Second way
When the first part is to be detected, the initial pose of the camera and the pose of the three-dimensional model of the first part are calibrated, yielding the second transformation between the two poses. For example, if the camera must shoot the first part at 10 preset positions with 10 preset angles at each position, the initial pose of the camera may be its pose at the first shot (the camera at the first preset position, rotated to the first preset angle). The second transformation can be computed by point cloud registration, which was described above for the first way of obtaining the first transformation and is not repeated here.
After the second transformation is obtained, the third transformation, between the pose of the camera when acquiring the first original image and its pose at calibration, must be obtained. Since the camera's shooting is generally a controlled behavior, the third transformation is available: for example, the camera is translated and/or rotated in a preset manner and acquires a first original image whenever it reaches a preset pose, so the third transformation is known because the translation and/or rotation behavior is preset; or, even if the camera is not moved in a preset manner, since it is mounted at the end of the robot arm, the pose change caused by translating and/or rotating the arm can be recorded by the robot, so the third transformation is also available.
After the third and second transformations are obtained, they may be multiplied to obtain the first transformation. Mathematically, a pose transformation can be represented as a transformation matrix, so multiplying the third and second transformations refers to multiplying the corresponding transformation matrices.
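In matrix form this composition is a single multiplication; a minimal sketch with NumPy (the factor order shown follows the text and assumes matrices acting on column vectors):

```python
import numpy as np

def compose_first_transformation(third: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Composes the first transformation as the product of the third and second.

    Both inputs are 4x4 homogeneous transformation matrices; the exact order of
    the factors depends on the pose convention used (column vs. row vectors).
    """
    return third @ second  # matrix multiplication of the transformation matrices
```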
Both ways of obtaining the first transformation are accurate, but point cloud registration is computationally expensive, whereas the second way obtains the first transformation by direct mathematical calculation once the second transformation has been obtained through calibration, so it is more efficient.
Taking the earlier example, the camera shoots the first part at 10 preset positions with 10 preset angles at each position. The first transformation can be calculated in the first way only for the first shot (at calibration) and in the second way for the remaining 99 shots (in which case the second transformation used in the second way is the first transformation obtained at calibration). Of course, the first transformation could also be calculated in the first way for all 100 shots, at a considerably higher computational cost.
It should further be noted that if the first transformation is calculated in the second way, calibration must be performed first, and the calibration process must be repeated for each part to be inspected.
After the first transformation is obtained, the three-dimensional model of the first part can be projected with it; for example, in OpenGL, the three-dimensional model (as a mesh structure) can be projected into the first original image according to the model, the first transformation, and the internal parameters of the camera. Still referring to fig. 3(A), if the projection result is ideal, the area where the central cylinder is located is the first foreground region.
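As a simplified sketch of such a projection (a real implementation, such as the OpenGL rendering mentioned above, rasterizes the mesh faces; projecting only the vertices with a pinhole model is an approximation):

```python
import numpy as np

def project_model_mask(vertices, first_transformation, K, height, width):
    """Rasterizes a crude first mask image by projecting model vertices.

    vertices:             N x 3 vertices of the part's three-dimensional model.
    first_transformation: 4 x 4 pose of the model relative to the camera.
    K:                    3 x 3 camera intrinsic matrix.
    """
    # Transform the model vertices into the camera frame.
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # N x 4
    cam = (first_transformation @ homo.T).T[:, :3]              # N x 3

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uvw = (K @ cam.T).T
    z = np.clip(uvw[:, 2:3], 1e-6, None)                        # avoid division by zero
    uv = np.round(uvw[:, :2] / z).astype(int)

    mask = np.zeros((height, width), dtype=np.uint8)
    valid = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < width) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    mask[uv[valid, 1], uv[valid, 0]] = 1   # value 1 inside the projected region
    return mask
```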
Step S202: the image information in the first background region of the first original image is removed to obtain the image to be detected.
The first background area is an area of the first original image except the first foreground area, and after the first foreground area is determined in step S201, the background area of the first original image is also determined. Still referring to fig. 3(a), the area of the first original image other than the cylinder is the first background area, and ideally, the first background area does not include the first part.
The image information in the first background region of the first original image represents the objects present in that region, and these objects may interfere with the defect detection process, so this image information is removed in step S202 to avoid affecting the detection. For example, in fig. 3(A), the tabletop on which the first part is placed lies in the first background region, and defect C on the tabletop could be falsely detected as a defect on the surface of the first part; clearing the image information representing the tabletop avoids this false detection. It should be noted that not only defects in the first background region cause false detection: reflections, textures, objects, and the like in that region can all do so; defect C in fig. 3(A) is merely an example. The image obtained after the image information in the first background region has been removed is called the image to be detected.
Further, for a defect to be detectable in an image, there must be a color difference between the part surface at the defect and the surrounding surface. Therefore, in some implementations, the image information can be cleared by setting the portion of the first original image within the first background region to a single color; after this processing, all pixels in the first background region have the same color with no differences, which mitigates the problem of falsely detecting defects there. Referring to fig. 3(B), the first background region is shown shaded to indicate that it has been set to a single color; the tabletop and defect C on it no longer exist in the first background region, and fig. 3(B) is the image to be detected.
When setting the portion of the first original image within the first background region to a single color, the similarity between the selected single color and the color of the first part may be kept below a preset threshold. The aim of this scheme is to select, where possible, a color that differs markedly from the first part, to avoid the first part being hard to distinguish from the background in the image to be detected, which would harm the detection result. For example, if the first part is known in advance to be light-colored, black may be chosen as the single color; if it is known to be dark, white may be chosen; and if its color is not known in advance, the color of the first part can be estimated from the pixels in the first foreground region of the first original image, and a color whose similarity to it is below the preset threshold can then be selected automatically as the single color.
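A heuristic sketch of this color choice, assuming NumPy; the light/dark rule here is just one possible similarity measure, not prescribed by the application:

```python
import numpy as np

def pick_background_color(first_original: np.ndarray, foreground_mask: np.ndarray) -> np.ndarray:
    """Fills the first background region with a color dissimilar to the part.

    Estimates the part's mean color from the first foreground region, then
    fills the background with black for a light part and white for a dark
    part; any similarity measure below a preset threshold would also satisfy
    the scheme described above.
    """
    mean_color = first_original[foreground_mask].mean(axis=0)
    fill = (0, 0, 0) if mean_color.mean() > 127 else (255, 255, 255)

    image_to_detect = first_original.copy()
    image_to_detect[~foreground_mask] = fill
    return image_to_detect
```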
Clearing the image information in the first background area in the first original image is not limited to the above-mentioned recoloring manner, for example, in some implementations, a portion in the first background area in the first original image may also be set to be transparent.
In addition, it should be further noted that step S202 and step S201 may be executed sequentially or simultaneously, for example, if the first foreground region is determined by performing an and operation on the first original image and the first mask image, while the performing an and operation determines the first foreground region, pixels in the first background region may be set to zero (which is equivalent to being set to black), that is, the content of step S202 is also completed at the same time.
Step S203: defects on the surface of the first part are detected in the image to be detected with the pre-trained neural network model.
The image to be detected is input into the trained neural network model, and the model outputs the detection result. Depending on the detection requirements, different types of models can be trained and different detection results output. For example, in some applications it is only necessary to detect whether a defect exists on the surface of the first part; the neural network model then only needs to output one of the two results "yes" and "no", so it can be a binary classification model, trained on architectures such as VGG, ResNet, or GoogLeNet. In other applications, the positions of defects on the surface of the first part must be detected (of course, if a defect position can be detected, a defect necessarily exists); the neural network model then outputs one or more rectangular boxes containing defects, whose vertex coordinates and extent represent the positions of the defects. For example, referring to fig. 3(C), the model outputs two rectangular boxes (shown dashed) containing defect A and defect B in the image to be detected. The neural network model in these applications can adopt architectures such as YOLO, Faster R-CNN, SSD, or RetinaNet.
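For illustration, detection with one of the named architecture families could look as follows in PyTorch/torchvision (an assumption; the application prescribes no framework, and the weight file named here is hypothetical):

```python
import torch
import torchvision

# A Faster R-CNN detector with two classes: background and "defect".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("defect_detector.pth"))  # hypothetical weights
model.eval()

image_to_detect = torch.rand(3, 480, 640)  # placeholder for a masked image

with torch.no_grad():
    prediction = model([image_to_detect])[0]

# Rectangular boxes containing defects, as in fig. 3(C).
boxes = prediction["boxes"]    # N x 4 tensor of (x1, y1, x2, y2) coordinates
scores = prediction["scores"]  # confidence score per box
```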
If defects on the surfaces of several kinds of parts are to be detected, one neural network model can be trained for each kind; for example, two models can be trained to detect automobile hubs and automobile doors. It is of course not excluded that parts with similar structures share one neural network model: for example, to detect automobile hubs X and Y whose only difference is a slight variation in the detailed hub pattern, a single neural network model can be trained, saving time and computing resources.
In the above defect detection method, the acquired first original image is not detected directly; the image information in the first background region of the first original image is removed first, and the neural network model then detects defects on the part surface in the resulting image to be detected. Because the image information in the first background region has been removed, its content essentially does not influence the detection process of the neural network model, and defects are essentially never falsely detected in the first background region, so the defect detection precision can be improved and the false detection rate reduced. In addition, the neural network model has good learning and generalization capabilities, so performing defect detection with a pre-trained neural network model further improves detection precision.
Moreover, the method effectively shields the detection result from the influence of the background, which lowers the requirements on the detection environment (for example, an environment with a simple background need not be specially chosen), giving the method a wide application range and high practicability.
The training process of the neural network model used in the defect detection method will be briefly described below, and the training process may occur before step S200. The method comprises the following steps:
(a) a second raw image containing a second part is acquired.
The second original image is an image containing a part used for training, hereinafter called the second part. To ensure the detection performance of the model, the second part may be a part of the same type as the first part or a part structurally similar to it. Both defect-free parts and parts containing various types of defects may be selected as second parts, to enhance the robustness of the trained model.
In addition, when the camera collects the second original image, the scene of actual detection can be imitated as closely as possible so that the trained model performs well. For example, if during detection the part is placed on a table for image acquisition, it is also placed on a table during training; if during detection the part faces up, it also faces up during training; if during detection the part is shot at 10 preset positions, it can also be shot at 10 preset positions during training; and so on.
The rest of step (a) may refer to step S200, and will not be repeated.
(b) A second foreground region in the second original image corresponding to the second part is determined.
Determining the second foreground region can be done in a manner similar to determining the first foreground region. The way that uses the three-dimensional model of the second part can be implemented as follows: first, a fourth transformation between the pose of the camera when acquiring the second original image and the pose of the three-dimensional model of the second part is determined, the three-dimensional model of the second part having a preset pose; then the three-dimensional model of the second part is projected into the second original image using the fourth transformation and the region formed by the projection is determined as the second foreground region, or the three-dimensional model of the second part is projected using the fourth transformation to form a second mask image, and the region that remains unmasked after ANDing the second original image with the second mask image is determined as the second foreground region.
Determining the fourth transformation can be done in at least the following two ways:
In the first way, the depth data of the scene in the second original image, acquired when the camera captured that image, are obtained first; point cloud registration is then performed between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the second part, and the fourth transformation between the two point clouds is determined.
In the second way, a fifth transformation between the pose of the camera at calibration and the pose of the three-dimensional model of the second part is obtained first; then a sixth transformation between the pose of the camera when acquiring the second original image and its pose at calibration is obtained; finally, the product of the sixth transformation and the fifth transformation is determined as the fourth transformation. The calibration can be performed at the camera's first shot of the second part, and can use point cloud registration.
The rest of step (b) may refer to step S201, and will not be repeated.
(c) The image information in the second background region of the second original image is removed to obtain the training image.
The second background area is an area of the second original image except the second foreground area, the training image is an image used for training the neural network model, and a large number of training images can form a training set. The manner of clearing the image information in the second background area in the second original image may be, but is not limited to, setting the portion in the second background area in the second original image to be a single color. Further, the single color may be selected to have a similarity to the color of the second part less than a predetermined threshold.
The rest of step (c) may refer to step S202, and will not be repeated.
(d) The training image is labeled.
In supervised learning, training samples must be labeled, and the labeling information is stored as the sample's label. The content of the labeling information depends on the requirements: for example, whether the surface of the second part in the training image contains a defect can be labeled, the labeling information then containing the two labels yes and no; or the positions of surface defects of the second part in the training image can be labeled, the labeling information then containing the rectangular boxes where the defects lie. The annotation can be, but is not limited to, manual annotation.
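A hypothetical example of what one labeled training record might look like (the application requires only the label content, not any particular format; the path and field names are illustrative):

```python
# Hypothetical annotation record for one training image.
annotation = {
    "image": "train/part_0001_view_03.png",   # hypothetical file path
    "has_defect": True,                        # yes/no label for classification
    "boxes": [                                 # rectangles for detection models
        {"x_min": 120, "y_min": 84, "x_max": 176, "y_max": 140, "label": "defect"},
    ],
}
```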
(e) The neural network model is trained with the training images and the labeling information.
A possible training process for the neural network model is as follows: in each round of training, one or more training images are input into the model to obtain the detection result it outputs; the prediction loss of the model is calculated from the detection result and the labeling information; the model parameters are adjusted according to the prediction loss; and training proceeds for multiple rounds until a training end condition is met.
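A minimal sketch of such a training round in PyTorch (an assumption; `model` and `train_dataset`, the latter yielding image/label pairs built in steps (a) through (d), are taken as given), shown for the binary "defect / no defect" case:

```python
import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()  # prediction loss for the two classes

for epoch in range(20):                      # multiple rounds of training
    for images, labels in loader:            # masked training images + labels
        outputs = model(images)              # detection result output by the model
        loss = criterion(outputs, labels)    # loss from result and labeling info
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                     # adjust the parameters of the model
    # stop when a training end condition (epoch count, loss plateau, ...) is met
```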
If the labeling information contains two labels of 'yes' and 'no', the trained neural network model can output whether the surface of the first part has defects or not when used for detection; if the marking information contains the rectangular frame where the defect is located, the trained neural network model can output the position where the surface defect of the first part is located when the trained neural network model is used for detection.
Steps (a), (b), (c) and steps (d), (e) may or may not be performed on the same device. For example, steps (a), (b), (c) may be performed on the control device in the defect detection system and steps (d), (e) on a server. The control device may be a computer located at the inspection site, and since the training process consumes huge computing resources it may be unable to carry out the training itself; after step (c), the training images can therefore be sent (or copied and uploaded) to the server, where labeling and training are performed, and once the neural network model has been trained it is deployed on the control device for actual detection. Labeling on the server means that an annotator accesses the server from a terminal device to perform the labeling. The server referred to above may be an ordinary server or a cloud server.
Fig. 4 is a functional block diagram of a defect detection apparatus 300 according to an embodiment of the present application. Referring to fig. 4, the defect detecting apparatus 300 includes: a first image acquisition module 310 for acquiring a first raw image containing a first part; a first foreground determining module 320, configured to determine a first foreground region corresponding to the first part in the first original image; a first background removing module 330, configured to remove image information in a first background region in the first original image to obtain an image to be detected, where the first background region is a region of the first original image except the first foreground region; the detecting module 340 is configured to detect a defect on the surface of the first part in the image to be detected by using a pre-trained neural network model.
In some implementations, the first background removal module 330 removes the image information in the first background region of the first original image by setting the portion of the first original image within the first background region to a single color.
In some implementations, the similarity between the single color and the color of the first part is less than a preset threshold.
In some implementations, the first foreground determining module 320 determines the first foreground region corresponding to the first part in the first original image by: determining a first transformation between the pose of the camera when acquiring the first original image and the pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose; and either projecting the three-dimensional model of the first part into the first original image using the first transformation and determining the region formed by the projection as the first foreground region, or projecting the three-dimensional model of the first part using the first transformation to form a first mask image and determining the region that remains unmasked after ANDing the first original image with the first mask image as the first foreground region.
In some implementations, the first foreground determination module 320 determines the first transformation between the pose of the camera at the time of acquiring the first original image and the pose of the three-dimensional model of the first part, including: acquiring depth data of the scene in the first original image, collected by the camera at the same time as the first original image; and performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, thereby determining the first transformation between the two point clouds.
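For instance, with a generic point cloud library such as Open3D (shown here purely as an assumed illustration; the application does not name a specific registration algorithm), the registration could look like:

```python
import numpy as np
import open3d as o3d

def estimate_first_transformation(depth_cloud, model_cloud, max_dist=0.01):
    """Register the point cloud from the depth data against the point
    cloud sampled from the part's 3D model with point-to-point ICP and
    return the resulting 4x4 rigid transformation."""
    result = o3d.pipelines.registration.registration_icp(
        model_cloud, depth_cloud, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 transform between the two clouds
```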
In some implementations, the first foreground determination module 320 determines the first transformation between the pose of the camera at the time of acquiring the first original image and the pose of the three-dimensional model of the first part, including: acquiring a second transformation between the pose of the camera at the time of calibration and the pose of the three-dimensional model of the first part; acquiring a third transformation between the pose of the camera at the time of acquiring the first original image and the pose of the camera at the time of calibration; and determining the product of the third transformation and the second transformation as the first transformation.
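In homogeneous coordinates this composition is a single matrix product (a sketch; the variable names are illustrative):

```python
import numpy as np

def compose_first_transformation(second_T, third_T):
    """First transformation = third transformation (camera pose at capture
    relative to camera pose at calibration) composed with the second
    transformation (camera pose at calibration relative to the model's
    preset pose), both as 4x4 homogeneous matrices."""
    return third_T @ second_T  # product of the third and second transforms
```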
In some implementations, the detecting module 340 detects defects existing on the surface of the first part in the image to be detected by using a pre-trained neural network model, including: detecting, by using the pre-trained neural network model, whether the surface of the first part in the image to be detected has defects, or detecting the positions of the defects on the surface of the first part.
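A minimal inference sketch (assuming the classification variant trained with yes/no labels; the 0.5 threshold is an arbitrary illustration):

```python
import torch

def detect_defect(model, image_tensor, threshold=0.5):
    """Run the pre-trained model on the image to be detected and report
    whether the first part's surface has a defect (classification case)."""
    model.eval()
    with torch.no_grad():
        logit = model(image_tensor.unsqueeze(0))  # add batch dimension
    probability = torch.sigmoid(logit).item()
    return probability > threshold  # True: a defect is detected
```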
In some implementations, the first image acquisition module 310 acquires a first original image containing a first part, including: acquiring a first original image obtained by a camera photographing the first part at a plurality of preset positions and a plurality of preset angles.
In some implementations, the first image acquisition module 310 acquires the first original image obtained by the camera photographing the first part at a plurality of preset positions and a plurality of preset angles, including: controlling a robotic arm, at the end of which the camera is mounted, to move sequentially to the plurality of preset positions and, at each position, to rotate sequentially to the plurality of preset angles to photograph the first part, and acquiring the first original image thus captured.
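A schematic of the capture sequence (the `arm` and `camera` driver objects and their methods are hypothetical placeholders for whatever robot SDK is in use):

```python
def capture_original_images(arm, camera, positions, angles):
    """Move the arm-mounted camera through each preset position, rotate
    through each preset angle at that position, and photograph the part."""
    images = []
    for position in positions:
        arm.move_to(position)          # hypothetical driver call
        for angle in angles:
            arm.rotate_to(angle)       # hypothetical driver call
            images.append(camera.capture())
    return images
```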
In some implementations, the defect detection apparatus 300 further includes: a second image acquisition module, configured to acquire a second original image containing a second part before the first image acquisition module 310 acquires the first original image containing the first part; a second foreground determining module, configured to determine a second foreground region corresponding to the second part in the second original image; and a second background removing module, configured to remove image information in a second background region in the second original image to obtain a training image, where the second background region is the region of the second original image other than the second foreground region, and the training image is used for training the neural network model.
In some implementations, the second background removal module removes image information in the second background region in the second original image, including: setting the portion of the second original image within the second background region to a single color.
In some implementations, the second foreground determination module determines a second foreground region in the second original image corresponding to the second part, including: determining a fourth transformation between the pose of the camera at the time of acquiring the second original image and the pose of a three-dimensional model of the second part, wherein the three-dimensional model of the second part has a preset pose; and projecting the three-dimensional model of the second part into the second original image by using the fourth transformation and determining the region formed by the projection as the second foreground region, or projecting the three-dimensional model of the second part by using the fourth transformation to form a second mask image and determining the region that remains uncovered by the second mask image after the second original image is ANDed with the second mask image as the second foreground region.
In some implementations, the defect detection apparatus 300 further includes: and the training module is used for training the neural network model by using the training image and the labeling information obtained after labeling the training image after the second background removing module obtains the training image, or sending the training image to a server so that the server can train the neural network model by using the training image and the labeling information obtained after labeling the training image.
The implementation principle and resulting technical effects of the defect detection apparatus 300 provided in the embodiment of the present application have been introduced in the foregoing method embodiments; for brevity, for parts not mentioned in this apparatus embodiment, reference may be made to the corresponding content in the method embodiments.
Fig. 5 shows a possible structure of an electronic device 400 provided in an embodiment of the present application. Referring to fig. 5, the electronic device 400 includes: a processor 410, a memory 420, and a communication interface 430, which are interconnected and in communication with each other via a communication bus 440 and/or other form of connection mechanism (not shown).
The memory 420 includes one or more memories (only one is shown in the figure), which may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like. The processor 410, and possibly other components, may access, read, and/or write data to the memory 420.
The processor 410 includes one or more processors (only one is shown), which may be integrated circuit chips having signal processing capability. The processor 410 may be a general-purpose processor, including a Central Processing Unit (CPU), a Micro Control Unit (MCU), a Network Processor (NP), or another conventional processor; or a special-purpose processor, including a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The communication interface 430 includes one or more interfaces (only one is shown) that can be used to communicate, directly or indirectly, with other devices for data exchange. The communication interface 430 may be an Ethernet interface; it may be a mobile communication network interface, such as an interface of a 3G, 4G, or 5G network; or it may be another type of interface having data transceiving functionality.
One or more computer program instructions may be stored in memory 420 and read and executed by processor 410 to implement the steps of the defect detection method provided by the embodiments of the present application, as well as other desired functions.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative and that electronic device 400 may include more or fewer components than shown in fig. 5 or may have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof. For example, the control device in the defect detection system provided in the embodiment of the present application may be implemented by using the structure of the electronic device 400.
The embodiment of the present application further provides a computer-readable storage medium, where computer program instructions are stored on the computer-readable storage medium, and when the computer program instructions are read and executed by a processor of a computer, the steps of the defect detection method provided in the embodiment of the present application are executed. The computer-readable storage medium may be implemented as, for example, memory 420 in electronic device 400 in fig. 5.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A method of defect detection, comprising:
acquiring a first original image containing a first part;
determining a first foreground region corresponding to the first part in the first original image;
removing image information in a first background area in the first original image to obtain an image to be detected, wherein the first background area is an area except the first foreground area in the first original image;
detecting defects on the surface of the first part in the image to be detected by using a pre-trained neural network model;
wherein the determining a first foreground region in the first original image corresponding to the first part comprises:
determining a first transformation between a pose of a camera at the time of acquiring the first original image and a pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose;
projecting a three-dimensional model of the first part into the first original image using the first transformation, and determining a region formed by the projection as the first foreground region,
or, projecting the three-dimensional model of the first part by using the first transformation to form a first mask image, and determining the region that remains uncovered by the first mask image after the first original image is ANDed with the first mask image as the first foreground region.
2. The defect detection method of claim 1, wherein the clearing of the image information in the first background region in the first original image comprises:
and setting the part in the first background area in the first original image as a single color.
3. The defect detection method of claim 2, wherein the similarity between the single color and the color of the first part is less than a preset threshold.
4. The defect detection method of claim 1, wherein the determining a first transformation between the pose of the camera at the time of acquiring the first original image and the pose of the three-dimensional model of the first part comprises:
acquiring depth data of a scene in the first original image acquired by the camera while acquiring the first original image;
and performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, and determining the first transformation between the two point clouds.
5. The defect detection method of claim 1, wherein the determining a first transformation between the pose of the camera at the time of acquiring the first original image and the pose of the three-dimensional model of the first part comprises:
acquiring a second transformation between the pose of the camera during calibration and the pose of the three-dimensional model of the first part;
acquiring a third transformation between the pose of the camera when acquiring the first original image and the pose of the camera when calibrating;
determining a product of the third transform and the second transform as the first transform.
6. The defect detection method of claim 1, wherein the detecting, by using the pre-trained neural network model, defects existing on the surface of the first part in the image to be detected comprises:
and detecting whether the surface of the first part has defects or not in the image to be detected by using a pre-trained neural network model, or detecting the position of the surface defect of the first part.
7. The defect detection method of any of claims 1-6, wherein the acquiring a first original image containing a first part comprises:
acquiring a first original image obtained by a camera photographing the first part at a plurality of preset positions and a plurality of preset angles.
8. The defect detection method of claim 7, wherein the acquiring a first original image obtained by the camera photographing the first part at a plurality of preset positions and a plurality of preset angles comprises:
the mechanical arm with the camera arranged at the control tail end sequentially moves to a plurality of preset positions and sequentially rotates to a plurality of preset angles at each position to shoot the first part, and a first original image obtained through shooting is obtained.
9. The defect detection method of claim 1, wherein prior to the acquiring a first original image containing a first part, the method further comprises:
acquiring a second original image containing a second part;
determining a second foreground region corresponding to the second part in the second original image;
and removing image information in a second background area in the second original image to obtain a training image, wherein the second background area is an area except for the second foreground area in the second original image, and the training image is used for training the neural network model.
10. The defect detection method of claim 9, wherein the clearing of the image information in the second background region in the second original image comprises:
and setting the part in the second background area in the second original image as a single color.
11. The defect detection method of claim 10, wherein said determining a second foreground region in said second original image corresponding to said second part comprises:
determining a fourth transformation between the pose of the camera at the time of acquiring the second original image and the pose of the three-dimensional model of the second part, wherein the three-dimensional model of the second part has a preset pose;
projecting the three-dimensional model of the second part into the second original image using the fourth transformation, and determining a region formed by the projection as the second foreground region,
or, projecting the three-dimensional model of the second part by using the fourth transformation to form a second mask image, and determining the region that remains uncovered by the second mask image after the second original image is ANDed with the second mask image as the second foreground region.
12. The defect detection method of any of claims 9-11, wherein after the obtaining the training image, the method further comprises:
and training the neural network model by using the training image and the labeling information obtained after labeling the training image, or sending the training image to a server, so that the server can train the neural network model by using the training image and the labeling information obtained after labeling the training image.
13. A defect detection apparatus, comprising:
the first image acquisition module is used for acquiring a first original image containing a first part;
a first foreground determining module, configured to determine a first foreground region corresponding to the first part in the first original image;
a first background removing module, configured to remove image information in a first background region in the first original image to obtain an image to be detected, where the first background region is a region of the first original image except the first foreground region;
the detection module is used for detecting the defects on the surface of the first part in the image to be detected by utilizing a pre-trained neural network model;
wherein the first foreground determining module determines a first foreground region in the first original image corresponding to the first part, including:
determining a first transformation between a pose of a camera at the time of acquiring the first original image and a pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose;
projecting a three-dimensional model of the first part into the first original image using the first transformation, and determining a region formed by the projection as the first foreground region,
or, projecting the three-dimensional model of the first part by using the first transformation to form a first mask image, and determining the region that remains uncovered by the first mask image after the first original image is ANDed with the first mask image as the first foreground region.
14. A defect detection system, comprising:
the robot comprises a robot, wherein a camera is arranged at the tail end of a mechanical arm of the robot;
the control device is used for sending a control instruction to the robot, controlling the camera to acquire a first original image containing a first part, determining a first foreground region corresponding to the first part in the first original image, eliminating image information in a first background region in the first original image, obtaining an image to be detected, and detecting defects on the surface of the first part in the image to be detected by using a pre-trained neural network model, wherein the first background region is a region except the first foreground region in the first original image;
wherein the control device determining a first foreground region in the first original image corresponding to the first part comprises: determining a first transformation between the pose of the camera at the time of acquiring the first original image and the pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose; and projecting the three-dimensional model of the first part into the first original image by using the first transformation and determining the region formed by the projection as the first foreground region, or projecting the three-dimensional model of the first part by using the first transformation to form a first mask image and determining the region that remains uncovered by the first mask image after the first original image is ANDed with the first mask image as the first foreground region.
15. The defect detection system of claim 14, wherein the camera comprises an RGB-D camera.
16. A computer-readable storage medium, having stored thereon computer program instructions, which, when read and executed by a processor, perform the steps of the method according to any one of claims 1-12.
17. An electronic device, comprising: a processor and a memory, the memory having stored therein computer program instructions which, when read and executed by the processor, perform the steps of the method of any of claims 1-12.
CN201910711135.1A 2019-08-01 2019-08-01 Defect detection method, device and system Active CN110400315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711135.1A CN110400315B (en) 2019-08-01 2019-08-01 Defect detection method, device and system


Publications (2)

Publication Number Publication Date
CN110400315A CN110400315A (en) 2019-11-01
CN110400315B (en) 2020-05-05

Family

ID=68327366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910711135.1A Active CN110400315B (en) 2019-08-01 2019-08-01 Defect detection method, device and system

Country Status (1)

Country Link
CN (1) CN110400315B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062915B (en) * 2019-12-03 2023-10-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111652883B (en) * 2020-07-14 2024-02-13 征图新视(江苏)科技股份有限公司 Glass surface defect detection method based on deep learning
CN112903703A (en) * 2021-01-27 2021-06-04 广东职业技术学院 Ceramic surface defect detection method and system based on image processing
CN113096094B (en) * 2021-04-12 2024-05-17 吴俊� Three-dimensional object surface defect detection method
CN113469997B (en) * 2021-07-19 2024-02-09 京东科技控股股份有限公司 Method, device, equipment and medium for detecting plane glass
CN113538436A (en) * 2021-09-17 2021-10-22 深圳市信润富联数字科技有限公司 Method and device for detecting part defects, terminal equipment and storage medium
CN113870267B (en) * 2021-12-03 2022-03-22 深圳市奥盛通科技有限公司 Defect detection method, defect detection device, computer equipment and readable storage medium
CN114354621B (en) * 2021-12-29 2024-04-19 广州德志金属制品有限公司 Method and system for automatically detecting product appearance
CN114998357B (en) * 2022-08-08 2022-11-15 长春摩诺维智能光电科技有限公司 Industrial detection method, system, terminal and medium based on multi-information analysis
WO2024044913A1 (en) * 2022-08-29 2024-03-07 Siemens Aktiengesellschaft Method, apparatus, electronic device, storage medium and computer program product for detecting circuit board assembly defect
CN116363085B (en) * 2023-03-21 2024-01-12 江苏共知自动化科技有限公司 Industrial part target detection method based on small sample learning and virtual synthesized data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510957A (en) * 2008-02-15 2009-08-19 索尼株式会社 Image processing device, camera device, communication system, image processing method, and program
CN106409711A (en) * 2016-09-12 2017-02-15 佛山市南海区广工大数控装备协同创新研究院 Solar silicon wafer defect detecting system and method
CN108520274A (en) * 2018-03-27 2018-09-11 天津大学 High reflecting surface defect inspection method based on image procossing and neural network classification
CN109087286A (en) * 2018-07-17 2018-12-25 江西财经大学 A kind of detection method and application based on Computer Image Processing and pattern-recognition
CN109215085A (en) * 2018-08-23 2019-01-15 上海小萌科技有限公司 A kind of article statistic algorithm using computer vision and image recognition
CN109636772A (en) * 2018-10-25 2019-04-16 同济大学 The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN109840508A (en) * 2019-02-17 2019-06-04 李梓佳 One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6650779B2 (en) * 1999-03-26 2003-11-18 Georgia Tech Research Corp. Method and apparatus for analyzing an image to detect and identify patterns
CN102768767B (en) * 2012-08-06 2014-10-22 中国科学院自动化研究所 Online three-dimensional reconstructing and locating method for rigid body
US10055882B2 (en) * 2016-08-15 2018-08-21 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
US20190139214A1 (en) * 2017-06-12 2019-05-09 Sightline Innovation Inc. Interferometric domain neural network system for optical coherence tomography


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a Machine-Vision-Based Bearing Surface Defect Detection and Classification System; Yuwen Xuan; China Master's Theses Full-text Database, Engineering Science and Technology II; 2018-08-15 (No. 8); pp. 7, 30-36, 49 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant