CN110400315A - Defect detection method, apparatus and system - Google Patents
- Publication number
- CN110400315A (application number CN201910711135.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- original image
- camera
- transformation
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
Abstract
This application relates to the field of defect detection technology and provides a defect detection method, apparatus and system. The defect detection method includes: obtaining a first original image containing a first part; determining a first foreground region in the first original image corresponding to the first part; removing the image information in a first background region of the first original image to obtain an image to be detected; and detecting defects on the surface of the first part in the image to be detected using a pre-trained neural network model. Because the image information in the first background region is removed, the content of the first background region has essentially no influence on the detection process of the neural network model, and false detections of defects in the background region essentially do not occur, which improves the precision of defect detection and reduces the false detection rate.
Description
Technical field
The present invention relates to the field of defect detection technology, and in particular to a defect detection method, apparatus and system.
Background technique
In the industrial field, it is often necessary to detect defects on the surface of a part. With the development of artificial intelligence and computer vision technology, industrial defect detection has become increasingly intelligent. In some detection schemes currently proposed, an image of the part is first captured by a camera, the image is then fed into a previously established detection model, and the model outputs the detection result. However, because the image of the part usually contains not only the part but also the background, the model in the above scheme often misidentifies content in the background as a defect, which lowers the accuracy of defect detection.
Summary of the invention
The purpose of the embodiments of the present application is to provide a defect detection method, apparatus and system to address the above technical problem.

To achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides a defect detection method, comprising: obtaining a first original image containing a first part; determining a first foreground region in the first original image corresponding to the first part; removing the image information in a first background region of the first original image to obtain an image to be detected, wherein the first background region is the region of the first original image other than the first foreground region; and detecting defects on the surface of the first part in the image to be detected using a pre-trained neural network model.
The above method does not directly detect the first original image that was obtained. Instead, it first removes the image information in the first background region of the first original image, and then uses the neural network model to detect part-surface defects in the resulting image to be detected. Because the image information in the first background region has been cleared, the content of the first background region has essentially no influence on the detection process of the neural network model, and false detections of defects in the first background region essentially do not occur, which improves the precision of defect detection and reduces the false detection rate.
Furthermore, because neural network models have good learning and generalization ability, using a pre-trained neural network model for defect detection also helps improve the precision of defect detection.
In one implementation of the first aspect, removing the image information in the first background region of the first original image comprises: setting the part of the first original image in the first background region to a solid color.
For a defect to be detectable in the image, the part surface at the defect location must differ in color from the surrounding part surface. After the part of the first original image in the first background region is set to a solid color, all pixels in the first background region have the same color, with no differences between them, so false detections of defects in the first background region essentially do not occur during detection.
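As a minimal sketch of the solid-color fill described above (the fill color, array shapes, and function name are illustrative assumptions, not specified by the patent), background pixels selected by a foreground mask can be overwritten in a few lines:

```python
import numpy as np

def blank_background(image, foreground_mask, fill_color=(0, 255, 0)):
    """Replace every background pixel with one solid color.

    image: H x W x 3 uint8 array (the first original image).
    foreground_mask: H x W bool array, True where the part (foreground) is.
    fill_color: hypothetical solid color; ideally it differs strongly
        from the color of the part (see the similarity-threshold note).
    """
    out = image.copy()
    out[~foreground_mask] = fill_color  # all background pixels become identical
    return out

# Toy 4x4 image: the "part" occupies the 2x2 center.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
cleaned = blank_background(img, mask)
```

After this step the background contains no color differences, so a detector cannot report a defect there.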
In one implementation of the first aspect, the degree of similarity between the solid color and the color of the first part is less than a preset threshold.
When setting the part of the first original image in the first background region to a solid color, a color that differs as much as possible from the color of the first part can be chosen, to avoid degrading the detection effect because the first part is difficult to distinguish from the background in the image to be detected.
In one implementation of the first aspect, determining the first foreground region in the first original image corresponding to the first part comprises: determining a first transformation between the pose of the camera when acquiring the first original image and the pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose; and either projecting the three-dimensional model of the first part into the first original image using the first transformation and determining the region formed by the projection as the first foreground region, or projecting the three-dimensional model of the first part using the first transformation to form a first mask image, ANDing the first original image with the first mask image, and determining the region not blanked by the first mask image as the first foreground region.
The three-dimensional model of the first part can be built in advance; for example, it can be drawn with modeling software, or obtained by three-dimensional reconstruction of a part of the same model as the first part. Determining the first foreground region by projecting the three-dimensional model is relatively accurate, which helps distinguish the first foreground region from the first background region in the first original image and thereby improves detection accuracy.
In one implementation of the first aspect, determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises: obtaining the depth data of the scene in the first original image, acquired by the camera when acquiring the first original image; and performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, to determine the first transformation between the two point clouds.
In one implementation of the first aspect, determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises: obtaining a second transformation between the pose of the camera at calibration time and the pose of the three-dimensional model of the first part; obtaining a third transformation between the pose of the camera when acquiring the first original image and the pose of the camera at calibration time; and determining the product of the third transformation and the second transformation as the first transformation.
To project the three-dimensional model of the first part, the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part must be obtained. The two implementations above provide two ways of obtaining the first transformation.

The first implementation collects depth data with the camera and performs point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model. This implementation requires the camera to be capable of collecting depth data; for example, an RGB-D camera can be used.
The second implementation first calibrates the pose of the camera against the pose of the three-dimensional model when detection of the first part starts, i.e., it obtains the second transformation between the two poses at calibration time. The third transformation, between the pose of the camera when acquiring the first original image and the pose of the camera at calibration time, can be obtained in some cases. For example, if the camera translates and/or rotates in a preset manner and acquires the first original image upon reaching a preset pose, the third transformation is available because the camera's translation and/or rotation is preset. As another example, if the camera is mounted at the end of a robot arm, the pose changes caused by the arm's translations and/or rotations can be recorded by the robot, so the third transformation is also available. Once the third transformation and the second transformation are obtained, the first transformation is obtained by multiplying them (multiplication here means multiplying the matrices corresponding to the transformations). Computing the first transformation with the second implementation is more efficient.
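The matrix multiplication just described can be sketched with homogeneous 4x4 pose matrices. The concrete poses below are hypothetical and serve only to show that composing the third and second transformations yields the first transformation:

```python
import numpy as np

def rotz(theta):
    """4x4 homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical second transformation (calibration-time camera pose vs.
# model pose) and third transformation (acquisition-time camera pose vs.
# calibration-time camera pose).
T2 = rotz(np.pi / 2)
T3 = translate(1.0, 0.0, 0.0)

# First transformation = product of the third and second transformations.
T1 = T3 @ T2

p = np.array([1.0, 0.0, 0.0, 1.0])  # a point in homogeneous coordinates
q = T1 @ p                          # apply T2 first, then T3
```

Here `q` equals applying the 90-degree rotation first and the translation second, which is exactly the composition order stated above (third transformation times second transformation).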
If the first part must be photographed repeatedly at multiple positions and angles during detection, the first transformation can be computed for every shot using the first implementation above. Alternatively, the first transformation can be computed using the first implementation only for the first shot (which may serve as the calibration), and using the second implementation for every subsequent shot.
In one implementation of the first aspect, detecting defects on the surface of the first part in the image to be detected using the pre-trained neural network model comprises: using the pre-trained neural network model to detect, in the image to be detected, whether a defect exists on the surface of the first part, or to detect the location of a defect on the surface of the first part.
In some applications, it is only necessary to detect whether a defect exists on the surface of the first part, i.e., to output one of the two results "yes" and "no". In other applications, the location of the surface defect must be output, for example by outputting one or more rectangular boxes enclosing defects in the image to be detected.
In one implementation of the first aspect, obtaining the first original image containing the first part comprises: obtaining first original images captured of the first part by a camera at multiple preset positions and multiple preset angles.
To detect the defects of the surface of the first part comprehensively, images of the surface of the first part must first be acquired comprehensively, so the camera can shoot at multiple preset positions and, at each preset position, shoot at multiple preset angles. There may be a single camera, which moves to each preset position in turn and rotates to each preset angle in turn to shoot. There may also be multiple cameras: for example, one camera can be placed at each preset position and controlled to rotate through the preset angles in turn, or multiple cameras can be placed at each preset position, each shooting toward one preset angle.
In one implementation of the first aspect, obtaining the first original images captured of the first part by the camera at multiple preset positions and multiple preset angles comprises: controlling a robot arm with a camera mounted at its end to move to multiple preset positions in turn and, at each position, rotate to multiple preset angles in turn to shoot the first part; and obtaining the first original images produced by the shooting.
A robot arm can translate and/or rotate freely within a certain range, so it can control the camera to complete the shooting relatively accurately. Moreover, since robot arms are commonly used in industrial production and are relatively easy to obtain in a factory, the scheme of controlling camera movement with a robot arm is easy to implement.
In one implementation of the first aspect, before obtaining the original image containing the part, the method further comprises: obtaining a second original image containing a second part; determining a second foreground region in the second original image corresponding to the second part; and removing the image information in a second background region of the second original image to obtain a training image, wherein the second background region is the region of the second original image other than the second foreground region, and the training image is used to train the neural network model.
In one implementation of the first aspect, removing the image information in the second background region of the second original image comprises: setting the part of the second original image in the second background region to a solid color.
In one implementation of the first aspect, determining the second foreground region in the second original image corresponding to the second part comprises: determining a fourth transformation between the pose of the camera when acquiring the second original image and the pose of a three-dimensional model of the second part, wherein the three-dimensional model of the second part has a preset pose; and either projecting the three-dimensional model of the second part into the second original image using the fourth transformation and determining the region formed by the projection as the second foreground region, or projecting the three-dimensional model of the second part using the fourth transformation to form a second mask image, ANDing the second original image with the second mask image, and determining the region not blanked by the second mask image as the second foreground region.
The three implementations above describe the collection process for the training set used to train the neural network. The process is similar to the steps for detecting part defects and is not repeated here.
In one implementation of the first aspect, after the training image is obtained, the method further comprises: training the neural network model using the training image and the annotation information obtained by annotating the training image, or sending the training image to a server so that the server can train the neural network model using the training image and the annotation information obtained by annotating the training image.
The training of the neural network model and the detection of part surface defects can be executed on the same device, for example a computer at the detection site. However, because the training process consumes substantial computing resources, the training images can also be sent to a server and the training performed there; after the neural network model is trained, it is deployed to the computer at the detection site for actual detection.
In a second aspect, an embodiment of the present application provides a defect detection apparatus, comprising: a first image acquisition module, configured to obtain a first original image containing a first part; a first foreground determination module, configured to determine a first foreground region in the first original image corresponding to the first part; a first background removal module, configured to remove the image information in a first background region of the first original image to obtain an image to be detected, wherein the first background region is the region of the first original image other than the first foreground region; and a detection module, configured to detect defects on the surface of the first part in the image to be detected using a pre-trained neural network model.
In a third aspect, an embodiment of the present application provides a defect detection system, comprising: a robot with a camera mounted at the end of its robot arm; and a control device, configured to send control instructions to the robot to control the camera to acquire a first original image containing a first part, and further configured to determine a first foreground region in the first original image corresponding to the first part, remove the image information in a first background region of the first original image to obtain an image to be detected, and detect defects on the surface of the first part in the image to be detected using a pre-trained neural network model, wherein the first background region is the region of the first original image other than the first foreground region.
In one implementation of the third aspect, the camera comprises an RGB-D camera.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer program instructions which, when read and executed by a processor, perform the steps of the method provided by the first aspect or any possible implementation of the first aspect.
In a fifth aspect, an embodiment of the present application provides an electronic device, comprising a memory and a processor, the memory storing computer program instructions which, when read and executed by the processor, perform the steps of the method provided by the first aspect or any possible implementation of the first aspect.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the application and therefore should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 shows a schematic diagram of a defect detection system provided by an embodiment of the present application;
Fig. 2 shows a flow chart of a defect detection method provided by an embodiment of the present application;
Fig. 3(A) to Fig. 3(C) show schematic diagrams of the detection effect of the defect detection method provided by the embodiments of the present application;
Fig. 4 shows a functional block diagram of a defect detection apparatus provided by an embodiment of the present application;
Fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. It should be noted that similar reference numbers and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. The terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
Fig. 1 shows a schematic diagram of a defect detection system 100 provided by an embodiment of the present application. Referring to Fig. 1, the defect detection system includes a robot 110 and a control device 120, which can exchange data through a wired or wireless connection. The robot 110 may further include a robot body 116, a robot arm 114 connected to the robot body 116, and a camera 112 mounted at the end of the robot arm 114. The defect detection method provided by the embodiments of the present application (whose steps are illustrated in Fig. 2) is implemented in the control device 120. In simple terms, when a defect on a part surface needs to be detected, the part is first placed, and the control device 120 then sends control instructions to the robot body 116. The robot body 116 controls the robot arm 114 according to the received control instructions, for example by translating and/or rotating it, thereby translating and/or rotating the camera 112 until the camera 112 reaches a suitable position and/or turns to a suitable angle, at which point the part is photographed. The images captured by the camera 112 are passed back through the robot body 116 to the control device 120, where the defect detection is performed and the detection result is output.
The control device 120 may be, but is not limited to, a physical device such as dedicated equipment, a desktop computer, a laptop, a tablet or a smartphone, or a virtual device such as a virtual machine. The control device 120 may be a single device or a combination of multiple devices. It can be placed at the detection site to perform defect detection in real time and output the detection result for on-site staff to review. Of course, the control device 120 can also be placed remotely, or integrated with the robot 110.
In different implementations, the camera 112 can be a general-purpose camera or an RGB-D camera. If the camera 112 is an RGB-D camera, it can also acquire the depth data of the scene in the image when shooting the image of the part (the use of the depth data is introduced later). Optionally, a general-purpose camera and a depth camera can also be installed together to realize a function similar to that of an RGB-D camera.
Fig. 2 shows a flow chart of a defect detection method provided by an embodiment of the present application. The method does not directly detect the original image containing the part; instead, it first removes the image information in the background region of the original image and then uses the resulting image for defect detection. Since the content in the background region no longer interferes with the detection process, the precision of defect detection can be improved. Note that the method in Fig. 2 can be applied in the defect detection system, but it can also be applied in other systems or devices. Referring to Fig. 2, the method comprises:
Step S200: obtain the first original image.
The first original image refers to the image containing the part to be detected. Hereinafter, to distinguish the part used in the defect detection stage from the part used in the model training stage, the part to be detected is called the first part. For example, referring to Fig. 3(A), the outer rectangular frame represents the first original image and the cylinder in the middle represents the first part. Defects exist at locations A and B on the surface of the first part (hereinafter defect A and defect B); defect A and defect B are the targets of defect detection. In addition, a defect also exists at location C on the tabletop on which the first part is placed (hereinafter defect C); defect C is not a defect of the first part's surface and is not a target of defect detection.
The first original image is obtained by shooting the surface of the first part with a camera. In some implementations, to detect the defects of the surface of the first part comprehensively, images of the surface of the first part must be acquired comprehensively, so the camera can shoot at multiple preset positions and, at each preset position, shoot at multiple preset angles. For example, for a cylindrical first part, 10 shooting positions can be determined around its circumference, and 10 shooting angles can be determined at each shooting position, so that the captured images cover the surface of the first part as completely as possible.

Further, a single camera can be used for shooting: the camera moves to each preset position in turn and rotates to each preset angle in turn to shoot. For example, in the defect detection system, the robot arm can move under the control of the control device, delivering the camera mounted at its end to a preset position and rotating it to a preset angle, thereby completing the acquisition of the first original images. Of course, multiple cameras can also be used for shooting: for example, one camera can be placed at each preset position and controlled to rotate through the preset angles in turn, or multiple cameras can be placed at each preset position, each fixed to shoot toward one preset angle. Below, for simplicity, the description mainly takes the single-camera case as an example, but this should not be construed as limiting the scope of protection of the present application.
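As an illustrative sketch of the shooting plan described above (the position and angle labels are hypothetical), the complete set of shots is simply the Cartesian product of the preset positions and preset angles:

```python
from itertools import product

# Hypothetical shooting plan: 10 positions around the cylindrical part,
# 10 angles at each position -> 100 shots covering the surface.
positions = [f"pos_{i}" for i in range(10)]
angles = [f"angle_{j}" for j in range(10)]

# Each entry is one (position, angle) pair the camera visits in turn.
shooting_plan = list(product(positions, angles))
```

A single camera would traverse this list in order; with multiple cameras, the pairs would instead be partitioned among them.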
Every collected first original image is processed in a similar way, so the subsequent steps are mainly described for the case of a single first original image.
Step S201: determine the first foreground region in the first original image.
The first foreground region refers to the region of the first original image corresponding to the first part. For example, in Fig. 3(A), the region occupied by the cylinder is the first foreground region.
There are many ways to determine the first foreground region. In some implementations, certain image processing methods can be used, such as performing image segmentation on the first original image to separate the foreground object from the background.
In other implementations, if the three-dimensional model of the first part can be obtained in advance, the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part can first be determined. The first foreground region is then determined by projecting the three-dimensional model of the first part, in at least the following two ways:
First, the three-dimensional model of the first part is projected into the first original image using the first transformation, and the region formed by the projection is the first foreground region.
Second, the three-dimensional model of the first part is first projected using the first transformation to form a first mask image; the first original image is then ANDed with the first mask image, and the region not blanked by the first mask image is determined as the first foreground region. For example, the first mask image can have the same size as the first original image, with pixel values taking one of two values, 0 or 1: the pixel value is 1 in the region formed by projecting the three-dimensional model into the first mask image, and 0 outside that region. After the pixels at corresponding positions of the first original image and the first mask image undergo the AND operation, the values of some pixels of the first original image are zeroed (blanked), and the region formed by the pixels whose values are not zeroed (not blanked) is the first foreground region.
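A minimal sketch of the AND/blanking step just described, using a toy 3x3 grayscale image and a hypothetical 0/1 mask (the real mask would come from projecting the three-dimensional model):

```python
import numpy as np

# 0/1 mask image: 1 inside the region formed by projecting the 3D model.
mask = np.array([[0, 0, 0],
                 [0, 1, 1],
                 [0, 1, 1]], dtype=np.uint8)

# Toy grayscale "first original image" with nonzero pixel values.
image = np.arange(9, dtype=np.uint8).reshape(3, 3) + 10

# Per-pixel AND: background pixels are zeroed (blanked),
# foreground pixels keep their original values.
masked = image * mask

# The pixels that were not blanked form the first foreground region.
foreground = mask.astype(bool)
```

For a 3-channel image the same mask would be broadcast across the color channels before the multiplication.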
The three-dimensional model of the first part can be obtained in different ways. For example, it can be drawn with modeling software (such as CAD) according to the design specification of the first part; moreover, since the factory is likely to have drawn the three-dimensional model when manufacturing the first part, it can be used directly in the scheme of the present application. As another example, the three-dimensional model of the first part can be obtained by three-dimensional reconstruction through scanning (e.g., with a depth camera or a 3D scanner). Determining the first foreground region by projecting the three-dimensional model does not require complicated processing of the first original image itself, so this approach is simple, fast and fairly accurate, which in turn helps improve the subsequent detection accuracy.
Note that the so-called three-dimensional model of the first part refers to a model shared by all parts of the same model as the part currently being detected, not merely a model of this particular part. Moreover, the three-dimensional model itself can be defect-free: if it is drawn, the model naturally has no defects; if it is obtained by three-dimensional reconstruction, it suffices to reconstruct from a part that has no defects.
A pose can be preset for the three-dimensional model of the first part in order to compute the first transformation; since the three-dimensional model is a virtual part, this pose can be set arbitrarily. The computation of the first transformation includes at least the following two ways.
The first way
First, obtain the depth data of the scene in the first original image, acquired by the camera when acquiring the first original image; then perform point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, and determine the first transformation between the two point clouds.
As noted when introducing Fig. 1, an RGB-D camera can be used to acquire the depth data. In an RGB-D camera, the ordinary camera that acquires the first original image and the depth measuring module that acquires the depth data are close together, so they can be considered to be aimed at the same scene. A point cloud can be obtained from the depth data; since the depth data generated differ when the camera shoots the first part from different positions and angles, the point cloud obtained when the camera acquires the first original image actually also contains the camera's pose information at acquisition time. Another point cloud can be obtained from the three-dimensional model of the first part; this point cloud characterizes the pose of the three-dimensional model of the first part (i.e., the pose preset for the three-dimensional model, mentioned above). Thus, by performing point cloud registration between these two point clouds, the first transformation between the camera's pose when acquiring the first original image and the pose of the three-dimensional model of the first part can be obtained.
One way of performing point cloud registration proceeds in two stages: coarse registration and fine registration. Coarse registration finds an approximate rotation matrix and translation matrix between the two point clouds without any prior knowledge of their relative positional relationship; for example, coarse registration can be based on PPF (Point Pair Features). Fine registration starts from known initial values of the rotation and translation (obtained by coarse registration) and further computes a more accurate rotation matrix and translation matrix; for example, the ICP (Iterative Closest Point) algorithm can be used for fine registration.
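The core alignment step inside ICP can be sketched as follows. This is a simplified illustration under a strong assumption: the point-to-point correspondences are already known (in real ICP they are re-estimated each iteration by nearest-neighbor search, and a library such as Open3D would typically be used). With known correspondences, one ICP iteration reduces to the closed-form SVD (Kabsch) alignment below.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over corresponding N x 3 points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Model point cloud vs. a rotated and translated "scene" cloud (noise-free).
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
scene = model @ R_true.T + t_true
R_est, t_est = rigid_align(model, scene)
```

In the noise-free case the true rotation and translation are recovered exactly; with real depth data, coarse registration (e.g., PPF) supplies the initial value and ICP iterates this step.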
Regarding point cloud registration, suitable point cloud features and registration algorithms can be selected according to the morphological features of the part. In addition, for the point cloud obtained from the depth data, since the depth data cover the entire scene rather than only the first part, the point cloud data belonging to the background can be filtered out before registration to improve the registration result.
The second way
When detection of the first part begins, the initial pose of the camera is first calibrated against the pose of the three-dimensional model of the first part, obtaining the second transformation between the two poses at that time. For example, if the camera is to shoot the first part at 10 preset positions, and at each position from 10 preset angles, the initial pose of the camera can be its pose when shooting the first part for the first time (the camera at the first preset position, turned to the first preset angle). The second transformation can be computed by point cloud registration; this was already introduced above when describing the first way of obtaining the first transformation, and is not repeated here.
After obtaining the second transformation, it is also necessary to obtain the third transformation, between the camera's pose when acquiring the first original image and the camera's pose at calibration time. Since camera shooting is usually a controlled action, the third transformation is obtainable. For example, if the camera translates and/or rotates in a preset manner and acquires the first original image upon reaching a preset pose, then because the camera's translation and/or rotation behavior is preset, the third transformation is available. As another example, even if the camera does not move in a predetermined manner, since the camera is mounted at the end of a robot's mechanical arm, the pose transformations caused by the arm's translation and/or rotation can be recorded by the robot, so the third transformation is also available.
After the third transformation and the second transformation are obtained, they can be multiplied to obtain the first transformation. Mathematically, a pose transformation can be represented as a transformation matrix, so multiplying the third transformation by the second transformation can refer to multiplying the corresponding transformation matrices.
Of the two ways of obtaining the first transformation above, the first way has high computational accuracy, but point cloud registration involves a large amount of computation; the second way, once calibration has yielded the second transformation, obtains the first transformation directly by simple matrix arithmetic, and is therefore more efficient.
Continuing the earlier example, in which the camera shoots the first part at 10 preset positions and at each position from 10 preset angles: the first transformation can be computed with the first way only for the first shot (at calibration time), and with the second way for the 99 subsequent shots (the second transformation used in the second way is then exactly the first transformation obtained at calibration). Of course, using the first way to compute the first transformation for all 100 shots is also possible; it merely involves a relatively large amount of computation.
Furthermore, it should be noted that if the first transformation is computed by the second way, calibration must be performed first, and this calibration process must be redone for each part to be detected.
After the first transformation is obtained, the three-dimensional model of the first part can be projected using it. For example, in OpenGL, projecting the three-dimensional model of the first part onto the first original image can be accomplished from the three-dimensional model (using a mesh structure), the first transformation, and the camera intrinsics. Referring again to Fig. 3(A), if the projection result is ideal, the region where the central cylinder lies is the first foreground region.
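The geometry of such a projection can be sketched with a pinhole camera model. This is a simplified stand-in for the OpenGL rendering described above: only mesh vertices are projected, and the intrinsic matrix K and transformation T below are assumed example values, not parameters from the patent.

```python
import numpy as np

def project_vertices(vertices: np.ndarray, T: np.ndarray, K: np.ndarray) -> np.ndarray:
    """vertices: N x 3 model-space points; T: 4x4 model-to-camera transform
    (the 'first transformation'); K: 3x3 camera intrinsics. Returns N x 2 pixels."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam = (T @ homo.T).T[:, :3]      # model coordinates -> camera coordinates
    uv = (K @ cam.T).T               # perspective projection
    return uv[:, :2] / uv[:, 2:3]    # divide by depth

K = np.array([[500.0, 0.0, 320.0],  # assumed intrinsics: fx, fy, cx, cy
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
T[2, 3] = 2.0                        # model placed 2 m in front of the camera
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
pix = project_vertices(verts, T, K)
```

Rasterizing the projected mesh faces (which OpenGL does natively) would then yield the mask image or foreground region.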
Step S202: remove the image information in the first background region of the first original image, obtaining the image to be detected.
The first background region is the region of the first original image other than the first foreground region; once the first foreground region is determined in step S201, the background region of the first original image is also determined. Referring again to Fig. 3(A), the region of the first original image other than the cylinder is the first background region; ideally, the first part is not included in the first background region.
The image information in the first background region of the first original image characterizes the objects present in that region. These objects may interfere with the defect detection process; therefore, this image information can be removed in step S202 to avoid affecting defect detection. For example, in Fig. 3(A), the first background region contains the desktop on which the first part is placed, and the defect C on the desktop might be falsely detected as a defect on the surface of the first part; but if the image information characterizing the desktop in the first background region is removed, this false detection can be avoided. It may be noted that reflections, textures, objects, etc. in the first background region can all cause false detections — it is not only defects in the first background region that do so; defect C in Fig. 3(A) is merely an example. After the image information in the first background region of the first original image is removed, the resulting image is called the image to be detected.
Furthermore, since a defect can be detected in the image only if the part surface at the defect location differs in color from the surrounding part surface, in some implementations the image information can be removed by setting the portion of the first original image within the first background region to a solid color. After such processing, all pixels in the first background region have the same color and exhibit no differences, so the problem of false defect detection in the first background region can be alleviated. Referring to Fig. 3(B), the first background region is shown shaded, indicating that it has been set to a solid color; the desktop and the defect C on it in the first background region of the first original image then no longer exist, and what is obtained in Fig. 3(B) is the image to be detected.
When the portion of the first original image within the first background region is set to a solid color, the similarity between the chosen solid color and the color of the first part can be required to be less than a preset threshold. The purpose of this optional scheme is to choose, as the solid color, a color differing as much as possible from the color of the first part, avoiding a degraded detection result caused by the first part being hard to distinguish from the background in the image to be detected. For example, if the first part is known in advance to be light-colored, the solid color can be black; if the first part is known in advance to be dark, the solid color can be white; and if the color of the first part is not known in advance, it can be estimated from the pixels within the first foreground region of the first original image, after which a color whose similarity to it is below the preset threshold can be selected automatically as the solid color.
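The scheme above can be sketched as follows. This is a minimal illustration, with a deliberately crude contrast rule standing in for the "similarity below a preset threshold" criterion: the part's mean brightness is estimated from the foreground pixels, and the background is repainted black for light parts or white for dark ones.

```python
import numpy as np

def remove_background(image: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """image: H x W x 3 uint8; fg_mask: H x W bool, True on the foreground.
    Repaints every background pixel with a contrasting solid color."""
    mean_brightness = image[fg_mask].mean()       # estimate the part's color
    fill = (np.array([0, 0, 0], np.uint8) if mean_brightness > 127
            else np.array([255, 255, 255], np.uint8))
    out = image.copy()
    out[~fg_mask] = fill                          # solid-color the background
    return out

img = np.full((2, 2, 3), 210, dtype=np.uint8)     # a light-colored "part"
img[1, 1] = [90, 60, 30]                          # background clutter (e.g. desktop)
mask = np.array([[True, True], [True, False]])    # from the projected model
clean = remove_background(img, mask)
```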
Removing the image information in the first background region of the first original image is not limited to the recoloring approach above; for example, in some implementations, the portion of the first original image within the first background region can also be set to transparent.
In addition, it should also be noted that steps S201 and S202 can be executed in sequence or simultaneously. For example, if the first foreground region is determined by ANDing the first original image with the first mask image, then at the same time as the AND operation determines the first foreground region, the pixels in the first background region are zeroed out (equivalent to being set to black), i.e., the content of step S202 is completed simultaneously.
Step S203: detect, using a pre-trained neural network model, the defects present on the surface of the first part in the image to be detected.
The image to be detected is input into the trained neural network model, which outputs a detection result. Different types of models can be trained for different detection requirements, producing different detection results. For example, in some applications only the presence or absence of defects on the surface of the first part needs to be detected; the neural network model then only needs to output one of two results, "yes" or "no", and can be a binary classification model trained on a framework such as VGG, ResNet, or GoogLeNet. In other applications, the positions of defects on the surface of the first part must be detected (if defect positions can be detected, defects necessarily exist); the neural network model then needs to output one or more rectangular boxes enclosing defects, whose vertex coordinates and extents characterize where the defects lie. For example, referring to Fig. 3(C), the model outputs two rectangular boxes (shown dashed) enclosing defect A and defect B in the image to be detected. In these applications the neural network model can use frameworks such as YOLO, Faster-RCNN, SSD, or RetinaNet.
To detect defects on the surfaces of various kinds of parts, one neural network model can be trained per kind of part. For example, to detect two kinds of parts — automotive hubs and automotive doors — two neural network models can be trained. Of course, it is not ruled out that some structurally similar parts may share one neural network model: for example, to detect two kinds of parts, automotive hub X and automotive hub Y, which differ only slightly in the fine tread pattern of the hub, a single neural network model can be trained, saving time and computing resources.
The above defect detection method does not detect on the first original image as acquired; instead, it first removes the image information in the first background region of the first original image, and then uses the neural network model to perform defect detection on the part surface in the resulting image to be detected. Since the image information in the first background region has been cleared, the content of the first background region essentially does not influence the detection process of the neural network model, and defects are essentially no longer falsely detected in the first background region; the precision of defect detection is thus improved and the false detection rate reduced. Moreover, since neural network models have good learning and generalization abilities, performing defect detection with a pre-trained neural network model also helps improve detection precision.
Also, since this method effectively shields the detection result from the influence of the background, the requirements that defect detection places on the detection environment are relaxed (for example, an environment with a simple background need not necessarily be chosen), making the method more widely applicable and more practical.
The following briefly introduces the training process of the neural network model used in the defect detection method; the training process can take place before step S200. Its steps are as follows:
(a) Obtain a second original image containing a second part.
The second original image refers to an image containing the part used for training; hereinafter the part used for training is called the second part. To guarantee the detection performance of the model, the second part and the first part can be parts of the same model number, or parts of similar structure. The second parts chosen can include both defect-free parts and parts containing various types of defects, to enhance the robustness of the trained model.
In addition, when the camera acquires the second original image, the scene at actual detection time can be imitated as closely as possible; a model trained this way detects better. For example, if at detection time the part is placed on a table for image acquisition, then at training time the part is likewise placed on a table for image acquisition; if at detection time the part faces upward for image acquisition, then at training time the part likewise faces upward; if at detection time the part is shot at 10 preset positions, then at training time the part can likewise be shot at 10 preset positions; and so on.
The remaining content of step (a) can refer to step S200 and is not repeated here.
(b) Determine the second foreground region corresponding to the second part in the second original image.
Determining the second foreground region can follow a method similar to that for determining the first foreground region. For the approach that determines the second foreground region using the three-dimensional model of the second part, this can be implemented as follows: first, determine the fourth transformation between the camera's pose when acquiring the second original image and the pose of the three-dimensional model of the second part, the three-dimensional model of the second part having a preset pose; then either project the three-dimensional model of the second part onto the second original image using the fourth transformation and determine the region formed by the projection as the second foreground region, or project the three-dimensional model of the second part using the fourth transformation to form a second mask image, AND the second original image with the second mask image, and determine the region not blanked by the second mask image as the second foreground region.
Determining the fourth transformation includes at least the following two ways:
The first way: first obtain the depth data of the scene in the second original image, acquired by the camera when acquiring the second original image; then perform point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the second part, and determine the fourth transformation between the two point clouds.
The second way: first obtain the fifth transformation between the camera's pose at calibration time and the pose of the three-dimensional model of the second part; then obtain the sixth transformation between the camera's pose when acquiring the second original image and the camera's pose at calibration time; finally, determine the product of the sixth transformation and the fifth transformation as the fourth transformation. The calibration here can be performed when the camera shoots the second part for the first time, and calibration can use point cloud registration.
The remaining content of step (b) can refer to step S201 and is not repeated here.
(c) Remove the image information in the second background region of the second original image, obtaining a training image.
The second background region refers to the region of the second original image other than the second foreground region; the training image is the image used for training the neural network model, and a large number of training images constitute the training set. The way of removing the image information in the second background region of the second original image can be, but is not limited to, setting the portion of the second original image within the second background region to a solid color. Further, this solid color can be chosen such that its similarity to the color of the second part is less than a preset threshold.
The remaining content of step (c) can refer to step S202 and is not repeated here.
(d) Annotate the training images.
In supervised learning, the training samples need to be annotated, and the annotation information is saved as the labels of the samples. The content of the annotation information varies with the requirement. For example, whether the surface of the second part in the training image contains defects can be annotated, in which case the annotation information contains two kinds of labels, "yes" and "no"; as another example, the positions of defects on the surface of the second part in the training image can be annotated, in which case the annotation information contains rectangular boxes at the defect locations. Annotation can use, but is not limited to, manual annotation.
(e) Train the neural network model using the training images and the annotation information.
A possible training process of the neural network model is: in one round of training, input one or more training images into the model to obtain the detection results output by the model; compute the model's prediction loss from the detection results and the annotation information; adjust the model's parameters according to the prediction loss; and perform multiple rounds of training until the training termination condition is met.
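The loop described in step (e) can be sketched in miniature. This is a toy illustration, not the patent's model: a single-layer numpy classifier with random features stands in for a VGG/ResNet-scale network on real training images, but the cycle — forward pass, loss against the "yes"/"no" labels, gradient-based parameter adjustment, repeated rounds — is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))            # stand-in for image features
y = (X[:, 0] > 0).astype(float)         # stand-in for "yes"/"no" defect labels

w = np.zeros(8)                         # parameters of a one-layer "model"
b = 0.0
for epoch in range(200):                # multiple rounds of training
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # forward pass: defect probability
    loss = -np.mean(y * np.log(p + 1e-9)
                    + (1 - y) * np.log(1 - p + 1e-9))  # prediction loss vs. labels
    grad_w = X.T @ (p - y) / len(y)                 # gradient of the loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                               # adjust parameters
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == (y == 1))
```

A real implementation would additionally batch the training images, use a deep architecture, and stop on a validation-based termination condition rather than a fixed epoch count.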
If the annotation information contains the two labels "yes" and "no", the trained neural network model, when used for detection, can output whether defects exist on the surface of the first part; if the annotation information contains rectangular boxes at defect locations, the trained neural network model, when used for detection, can output the positions of defects on the surface of the first part.
The above steps (a), (b), (c) and steps (d), (e) may be executed on the same device or on different devices. For example, steps (a), (b), (c) can be executed on the control device in the defect detection system, while steps (d), (e) can be executed on a server. The control device may be a computer located at the detection scene, and since the training process consumes enormous computing resources, the control device may be unable to bear it; therefore, after step (c) has been executed, the training images can be sent to the server (or copied and then uploaded to the server), annotation and training are performed on the server, and after the neural network model has been trained, the model is deployed to the control device for actual detection. Here, performing annotation on the server means that annotators access the server via terminal devices to perform the annotation. The server referred to above can be an ordinary server or a cloud server.
Fig. 4 shows a functional block diagram of the defect detection apparatus 300 provided by the embodiments of the present application. Referring to Fig. 4, the defect detection apparatus 300 includes: a first image acquisition module 310 for obtaining a first original image containing a first part; a first foreground determination module 320 for determining a first foreground region corresponding to the first part in the first original image; a first background removal module 330 for removing the image information in a first background region of the first original image to obtain an image to be detected, where the first background region is the region of the first original image other than the first foreground region; and a detection module 340 for detecting, using a pre-trained neural network model, the defects present on the surface of the first part in the image to be detected.
In some implementations, the first background removal module 330 removing the image information in the first background region of the first original image comprises: setting the portion of the first original image within the first background region to a solid color.
In some implementations, the similarity between the solid color and the color of the first part is less than a preset threshold.
In some implementations, the first foreground determination module 320 determining the first foreground region corresponding to the first part in the first original image comprises: determining a first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part, the three-dimensional model of the first part having a preset pose; and projecting the three-dimensional model of the first part onto the first original image using the first transformation and determining the region formed by the projection as the first foreground region, or, projecting the three-dimensional model of the first part using the first transformation to form a first mask image, ANDing the first original image with the first mask image, and determining the region not blanked by the first mask image as the first foreground region.
In some implementations, the first foreground determination module 320 determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises: obtaining the depth data of the scene in the first original image, acquired by the camera when acquiring the first original image; and performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, determining the first transformation between the two point clouds.
In some implementations, the first foreground determination module 320 determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises: obtaining a second transformation between the pose of the camera at calibration time and the pose of the three-dimensional model of the first part; obtaining a third transformation between the pose of the camera when acquiring the first original image and the pose of the camera at calibration time; and determining the product of the third transformation and the second transformation as the first transformation.
In some implementations, the detection module 340 detecting, using the pre-trained neural network model, the defects present on the surface of the first part in the image to be detected comprises: detecting, using the pre-trained neural network model, whether defects exist on the surface of the first part in the image to be detected, or detecting the positions of defects on the surface of the first part.
In some implementations, the first image acquisition module 310 obtaining the first original image containing the first part comprises: obtaining first original images taken of the first part by the camera at multiple preset positions and multiple preset angles.
In some implementations, the first image acquisition module 310 obtaining first original images taken of the first part by the camera at multiple preset positions and multiple preset angles comprises: controlling a mechanical arm, at whose end the camera is mounted, to move in turn to the multiple preset positions and, at each position, to turn in sequence to the multiple preset angles to shoot the first part, and obtaining the first original images thus taken.
In some implementations, the defect detection apparatus 300 further includes: a second image acquisition module for obtaining a second original image containing a second part before the first image acquisition module 310 obtains the original image containing the part; a second foreground determination module for determining a second foreground region corresponding to the second part in the second original image; and a second background removal module for removing the image information in a second background region of the second original image to obtain training images, where the second background region is the region of the second original image other than the second foreground region, and the training images are used for training the neural network model.
In some implementations, the second background removal module removing the image information in the second background region of the second original image comprises: setting the portion of the second original image within the second background region to a solid color.
In some implementations, the second foreground determination module determining the second foreground region corresponding to the second part in the second original image comprises: determining a fourth transformation between the pose of the camera when acquiring the second original image and the pose of the three-dimensional model of the second part, the three-dimensional model of the second part having a preset pose; and projecting the three-dimensional model of the second part onto the second original image using the fourth transformation and determining the region formed by the projection as the second foreground region, or, projecting the three-dimensional model of the second part using the fourth transformation to form a second mask image, ANDing the second original image with the second mask image, and determining the region not blanked by the second mask image as the second foreground region.
In some implementations, the defect detection apparatus 300 further includes: a training module for, after the second background removal module obtains the training images, training the neural network model using the training images and the annotation information obtained by annotating the training images, or, sending the training images to a server so that the server can train the neural network model using the training images and the annotation information obtained by annotating them.
The realization principles of the defect detection apparatus 300 provided by the embodiments of the present application, and the technical effects it produces, have already been introduced in the foregoing method embodiments; for brevity, where the apparatus embodiment does not mention something, reference can be made to the corresponding content in the method embodiments.
Fig. 5 shows a possible structure of the electronic device 400 provided by the embodiments of the present application. Referring to Fig. 5, the electronic device 400 includes: a processor 410, a memory 420, and a communication interface 430, which are interconnected and communicate with each other through a communication bus 440 and/or connection mechanisms of other forms (not shown).
The memory 420 includes one or more units (only one is shown in the figure), which may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc. The processor 410, and possibly other components, can access the memory 420 to read and/or write data therein.
The processor 410 includes one or more units (only one is shown in the figure) and can be an integrated circuit chip with signal processing capability. The processor 410 can be a general-purpose processor, including a central processing unit (CPU), a micro controller unit (MCU), a network processor (NP), or other conventional processors; it can also be a special-purpose processor, including a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
The communication interface 430 includes one or more units (only one is shown in the figure) and can be used to communicate directly or indirectly with other devices to exchange data. The communication interface 430 can be an Ethernet interface; it can be a mobile communication network interface, such as an interface for a 3G, 4G, or 5G network; or it can be another kind of interface with data transceiving functionality.
One or more computer program instructions can be stored in the memory 420; the processor 410 can read and run these computer program instructions to implement the steps of the defect detection method provided by the embodiments of the present application, as well as other desired functions.
It can be appreciated that the structure shown in Fig. 5 is only illustrative; the electronic device 400 may also include more or fewer components than shown in Fig. 5, or have a configuration different from that shown in Fig. 5. Each component shown in Fig. 5 can be realized in hardware, software, or a combination thereof. For example, the control device in the defect detection system provided by the embodiments of the present application can be realized with the structure of the electronic device 400.
An embodiment of the present application also provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are read and run by a processor of a computer, the steps of the defect detection method provided by the embodiments of the present application are executed. For example, the computer-readable storage medium may be implemented as memory 420 of electronic device 400 in Fig. 5.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely exemplary; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation. As another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
In addition, units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of this embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
The above is only an example of the present application and is not intended to limit its scope of protection; for those skilled in the art, various changes and variations of the application are possible. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of this application shall be included within its scope of protection.
Claims (18)
1. A defect detection method, comprising:
obtaining a first original image containing a first part;
determining a first foreground region in the first original image corresponding to the first part;
removing image information in a first background region of the first original image to obtain an image to be detected, wherein the first background region is the region of the first original image other than the first foreground region;
detecting a defect on a surface of the first part in the image to be detected using a pre-trained neural network model.
2. The defect detection method according to claim 1, wherein removing the image information in the first background region of the first original image comprises:
setting the portion of the first original image within the first background region to a solid color.
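As a concrete illustration of claims 2 and 3, the background removal can be sketched as a simple mask fill; the function name, the boolean-mask representation of the foreground region, and the default fill color are illustrative assumptions, not part of the claims.

```python
import numpy as np

def remove_background(image, foreground_mask, fill_color=(0, 255, 0)):
    """Set every pixel outside the foreground region to one solid color.

    image:           H x W x 3 uint8 array (the first original image)
    foreground_mask: H x W boolean array, True where the part is visible
    fill_color:      the solid color; per claim 3 it should be chosen so its
                     similarity to the part's own color is below a threshold
    """
    out = image.copy()
    out[~foreground_mask] = fill_color  # background pixels -> solid color
    return out
```

A dissimilar fill color (e.g. bright green for a metallic part) keeps the network from confusing background texture with surface defects.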
3. The defect detection method according to claim 2, wherein the similarity between the solid color and the color of the first part is less than a preset threshold.
4. The defect detection method according to claim 1, wherein determining the first foreground region in the first original image corresponding to the first part comprises:
determining a first transformation between the pose of the camera when acquiring the first original image and the pose of a three-dimensional model of the first part, wherein the three-dimensional model of the first part has a preset pose;
projecting the three-dimensional model of the first part into the first original image using the first transformation, and determining the region formed by the projection as the first foreground region;
or, projecting the three-dimensional model of the first part using the first transformation to form a first mask image, AND-ing the first original image with the first mask image, and determining the region not covered by the first mask image as the first foreground region.
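The projection in claim 4 can be sketched with a standard pinhole model: apply the first transformation to the model vertices, project with the camera intrinsics, and rasterize the hit pixels into a mask. The function name and the vertex-splatting rasterization (rather than rendering filled faces) are simplifying assumptions for illustration.

```python
import numpy as np

def project_model_mask(vertices, transform, K, image_shape):
    """Project 3-D model vertices into the image plane to form a foreground mask.

    vertices:    N x 3 model points in the model's preset pose
    transform:   4 x 4 'first transformation' (model pose -> camera frame)
    K:           3 x 3 camera intrinsic matrix
    image_shape: (H, W) of the original image
    """
    # Homogeneous coordinates, then the rigid transform into the camera frame
    pts = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam = (transform @ pts.T).T[:, :3]
    # Pinhole projection: divide by depth
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Rasterize the projected points that land inside the image
    mask = np.zeros(image_shape, dtype=bool)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < image_shape[1]) & (v >= 0) & (v < image_shape[0])
    mask[v[ok], u[ok]] = True
    return mask
```

A production system would render the mesh's filled faces (e.g. with a z-buffer) rather than splat vertices, but the geometry is the same.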
5. The defect detection method according to claim 4, wherein determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises:
obtaining depth data of the scene in the first original image, acquired by the camera when acquiring the first original image;
performing point cloud registration between the point cloud corresponding to the depth data and the point cloud corresponding to the three-dimensional model of the first part, and determining the first transformation between the two point clouds.
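The registration step of claim 5 is usually solved iteratively (e.g. ICP); its core is the closed-form Kabsch/Umeyama solve for the rigid transform between two corresponded point sets, sketched below. A full registration pipeline would also estimate the correspondences; here the arrays are assumed already paired, which is a simplification.

```python
import numpy as np

def rigid_align(source, target):
    """Estimate the 4x4 rigid transform mapping source points onto target points.

    source, target: N x 3 arrays of corresponded points (e.g. a depth-data
    point cloud and the model point cloud). Returns a homogeneous transform,
    the 'first transformation' between the two clouds.
    """
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centered clouds
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Libraries such as Open3D wrap this step together with correspondence search as ICP registration.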
6. The defect detection method according to claim 4, wherein determining the first transformation between the pose of the camera when acquiring the first original image and the pose of the three-dimensional model of the first part comprises:
obtaining a second transformation between the pose of the camera at calibration time and the pose of the three-dimensional model of the first part;
obtaining a third transformation between the pose of the camera when acquiring the first original image and the pose of the camera at calibration time;
determining the product of the third transformation and the second transformation as the first transformation.
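Claim 6 composes two known transforms instead of re-registering at every capture: with homogeneous 4×4 matrices, the product in the claim is an ordinary matrix multiplication. The helper names below are illustrative.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def first_transformation(third, second):
    """Per claim 6: first = third @ second, where `third` relates the camera
    pose at capture time to its pose at calibration time, and `second` relates
    the calibrated camera pose to the model's preset pose."""
    return third @ second
```

Because the second transformation is fixed at calibration, only the third (the camera's motion since calibration, available from the robot arm's kinematics) must be tracked online.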
7. The defect detection method according to claim 1, wherein detecting a defect on the surface of the first part in the image to be detected using the pre-trained neural network model comprises:
detecting, using the pre-trained neural network model, whether a defect exists on the surface of the first part in the image to be detected; or detecting the position of a defect on the surface of the first part.
8. The defect detection method according to any one of claims 1-7, wherein obtaining the first original image containing the first part comprises:
obtaining first original images of the first part captured by a camera at multiple preset positions and multiple preset angles.
9. The defect detection method according to claim 8, wherein obtaining the first original images of the first part captured by the camera at multiple preset positions and multiple preset angles comprises:
controlling a robotic arm, at whose end the camera is provided, to move successively to the multiple preset positions and, at each position, to rotate successively to the multiple preset angles and capture the first part, and obtaining the first original images thus captured.
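The capture procedure of claim 9 is a nested sweep over positions and angles. The sketch below uses hypothetical `move_to`/`rotate_to`/`capture` method names standing in for whatever robot and camera SDK is actually used.

```python
def capture_part_images(arm, camera, positions, angles):
    """Collect one image per (preset position, preset angle) pair.

    arm:    controller for the robotic arm whose end carries the camera;
            `move_to` and `rotate_to` are placeholder method names
    camera: image source; `capture` is a placeholder method name
    """
    images = []
    for pos in positions:
        arm.move_to(pos)            # move the arm end to the next preset position
        for ang in angles:
            arm.rotate_to(ang)      # rotate to the next preset angle
            images.append(camera.capture())
    return images
```

For P positions and A angles this yields P × A first original images, covering the part's surface from multiple viewpoints.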
10. The defect detection method according to claim 1, wherein, before obtaining the original image containing the part, the method further comprises:
obtaining a second original image containing a second part;
determining a second foreground region in the second original image corresponding to the second part;
removing image information in a second background region of the second original image to obtain a training image, wherein the second background region is the region of the second original image other than the second foreground region, and the training image is used to train the neural network model.
11. The defect detection method according to claim 10, wherein removing the image information in the second background region of the second original image comprises:
setting the portion of the second original image within the second background region to a solid color.
12. The defect detection method according to claim 11, wherein determining the second foreground region in the second original image corresponding to the second part comprises:
determining a fourth transformation between the pose of the camera when acquiring the second original image and the pose of a three-dimensional model of the second part, wherein the three-dimensional model of the second part has a preset pose;
projecting the three-dimensional model of the second part into the second original image using the fourth transformation, and determining the region formed by the projection as the second foreground region;
or, projecting the three-dimensional model of the second part using the fourth transformation to form a second mask image, AND-ing the second original image with the second mask image, and determining the region not covered by the second mask image as the second foreground region.
13. The defect detection method according to any one of claims 10-12, wherein, after obtaining the training image, the method further comprises:
training the neural network model using the training image and annotation information obtained by annotating the training image; or sending the training image to a server, so that the server can train the neural network model using the training image and the annotation information obtained by annotating the training image.
14. A defect detection apparatus, comprising:
a first image acquisition module, configured to obtain a first original image containing a first part;
a first foreground determination module, configured to determine a first foreground region in the first original image corresponding to the first part;
a first background removal module, configured to remove image information in a first background region of the first original image to obtain an image to be detected, wherein the first background region is the region of the first original image other than the first foreground region;
a detection module, configured to detect a defect on a surface of the first part in the image to be detected using a pre-trained neural network model.
15. A defect detection system, comprising:
a robot, a camera being provided at the end of a robotic arm of the robot;
a control device, configured to send control instructions to the robot to control the camera to acquire a first original image containing a first part; and further configured to determine a first foreground region in the first original image corresponding to the first part, remove image information in a first background region of the first original image to obtain an image to be detected, and detect a defect on a surface of the first part in the image to be detected using a pre-trained neural network model, wherein the first background region is the region of the first original image other than the first foreground region.
16. The defect detection system according to claim 15, wherein the camera comprises an RGB-D camera.
17. A computer-readable storage medium on which computer program instructions are stored, wherein, when the computer program instructions are read and run by a processor, the steps of the method according to any one of claims 1-13 are executed.
18. An electronic device, comprising a memory and a processor, wherein computer program instructions are stored in the memory, and when the computer program instructions are read and run by the processor, the steps of the method according to any one of claims 1-13 are executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910711135.1A CN110400315B (en) | 2019-08-01 | 2019-08-01 | Defect detection method, device and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910711135.1A CN110400315B (en) | 2019-08-01 | 2019-08-01 | Defect detection method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110400315A true CN110400315A (en) | 2019-11-01 |
CN110400315B CN110400315B (en) | 2020-05-05 |
Family
ID=68327366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910711135.1A Active CN110400315B (en) | 2019-08-01 | 2019-08-01 | Defect detection method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110400315B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062915A (en) * | 2019-12-03 | 2020-04-24 | 浙江工业大学 | Real-time steel pipe defect detection method based on improved YOLOv3 model |
CN111652883A (en) * | 2020-07-14 | 2020-09-11 | 征图新视(江苏)科技股份有限公司 | Glass surface defect detection method based on deep learning |
CN112903703A (en) * | 2021-01-27 | 2021-06-04 | 广东职业技术学院 | Ceramic surface defect detection method and system based on image processing |
CN113096094A (en) * | 2021-04-12 | 2021-07-09 | 成都市览图科技有限公司 | Three-dimensional object surface defect detection method |
CN113469997A (en) * | 2021-07-19 | 2021-10-01 | 京东科技控股股份有限公司 | Method, device, equipment and medium for detecting plane glass |
CN113538436A (en) * | 2021-09-17 | 2021-10-22 | 深圳市信润富联数字科技有限公司 | Method and device for detecting part defects, terminal equipment and storage medium |
CN113870267A (en) * | 2021-12-03 | 2021-12-31 | 深圳市奥盛通科技有限公司 | Defect detection method, defect detection device, computer equipment and readable storage medium |
CN114354621A (en) * | 2021-12-29 | 2022-04-15 | 广州德志金属制品有限公司 | Method and system for automatically detecting product appearance |
CN114998357A (en) * | 2022-08-08 | 2022-09-02 | 长春摩诺维智能光电科技有限公司 | Industrial detection method, system, terminal and medium based on multi-information analysis |
CN116363085A (en) * | 2023-03-21 | 2023-06-30 | 江苏共知自动化科技有限公司 | Industrial part target detection method based on small sample learning and virtual synthesized data |
WO2024044913A1 (en) * | 2022-08-29 | 2024-03-07 | Siemens Aktiengesellschaft | Method, apparatus, electronic device, storage medium and computer program product for detecting circuit board assembly defect |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020054694A1 (en) * | 1999-03-26 | 2002-05-09 | George J. Vachtsevanos | Method and apparatus for analyzing an image to direct and identify patterns |
CN101510957A (en) * | 2008-02-15 | 2009-08-19 | 索尼株式会社 | Image processing device, camera device, communication system, image processing method, and program |
CN102768767A (en) * | 2012-08-06 | 2012-11-07 | 中国科学院自动化研究所 | Online three-dimensional reconstructing and locating method for rigid body |
CN106409711A (en) * | 2016-09-12 | 2017-02-15 | 佛山市南海区广工大数控装备协同创新研究院 | Solar silicon wafer defect detecting system and method |
US20180047208A1 (en) * | 2016-08-15 | 2018-02-15 | Aquifi, Inc. | System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function |
CN108520274A (en) * | 2018-03-27 | 2018-09-11 | 天津大学 | High reflecting surface defect inspection method based on image procossing and neural network classification |
CN109087286A (en) * | 2018-07-17 | 2018-12-25 | 江西财经大学 | A kind of detection method and application based on Computer Image Processing and pattern-recognition |
CN109215085A (en) * | 2018-08-23 | 2019-01-15 | 上海小萌科技有限公司 | A kind of article statistic algorithm using computer vision and image recognition |
CN109636772A (en) * | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning |
US20190139214A1 (en) * | 2017-06-12 | 2019-05-09 | Sightline Innovation Inc. | Interferometric domain neural network system for optical coherence tomography |
CN109840508A (en) * | 2019-02-17 | 2019-06-04 | 李梓佳 | One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium |
- 2019-08-01 CN CN201910711135.1A patent/CN110400315B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020054694A1 (en) * | 1999-03-26 | 2002-05-09 | George J. Vachtsevanos | Method and apparatus for analyzing an image to direct and identify patterns |
CN101510957A (en) * | 2008-02-15 | 2009-08-19 | 索尼株式会社 | Image processing device, camera device, communication system, image processing method, and program |
CN102768767A (en) * | 2012-08-06 | 2012-11-07 | 中国科学院自动化研究所 | Online three-dimensional reconstructing and locating method for rigid body |
US20180047208A1 (en) * | 2016-08-15 | 2018-02-15 | Aquifi, Inc. | System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function |
CN106409711A (en) * | 2016-09-12 | 2017-02-15 | 佛山市南海区广工大数控装备协同创新研究院 | Solar silicon wafer defect detecting system and method |
US20190139214A1 (en) * | 2017-06-12 | 2019-05-09 | Sightline Innovation Inc. | Interferometric domain neural network system for optical coherence tomography |
CN108520274A (en) * | 2018-03-27 | 2018-09-11 | 天津大学 | High reflecting surface defect inspection method based on image procossing and neural network classification |
CN109087286A (en) * | 2018-07-17 | 2018-12-25 | 江西财经大学 | A kind of detection method and application based on Computer Image Processing and pattern-recognition |
CN109215085A (en) * | 2018-08-23 | 2019-01-15 | 上海小萌科技有限公司 | A kind of article statistic algorithm using computer vision and image recognition |
CN109636772A (en) * | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning |
CN109840508A (en) * | 2019-02-17 | 2019-06-04 | 李梓佳 | One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
Wu Ting et al.: "Pipeline internal defect detection method based on an active panoramic vision sensor", Chinese Journal of Scientific Instrument *
Yuwen Xuan: "Research on a machine-vision-based bearing surface defect detection and classification system", China Master's Theses Full-text Database, Engineering Science and Technology II *
Xu Xin et al.: "Design of an online defect detection system for deep-drawn parts", Forging & Stamping Technology *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062915A (en) * | 2019-12-03 | 2020-04-24 | 浙江工业大学 | Real-time steel pipe defect detection method based on improved YOLOv3 model |
CN111062915B (en) * | 2019-12-03 | 2023-10-24 | 浙江工业大学 | Real-time steel pipe defect detection method based on improved YOLOv3 model |
CN111652883A (en) * | 2020-07-14 | 2020-09-11 | 征图新视(江苏)科技股份有限公司 | Glass surface defect detection method based on deep learning |
CN111652883B (en) * | 2020-07-14 | 2024-02-13 | 征图新视(江苏)科技股份有限公司 | Glass surface defect detection method based on deep learning |
CN112903703A (en) * | 2021-01-27 | 2021-06-04 | 广东职业技术学院 | Ceramic surface defect detection method and system based on image processing |
CN113096094B (en) * | 2021-04-12 | 2024-05-17 | 吴俊� | Three-dimensional object surface defect detection method |
CN113096094A (en) * | 2021-04-12 | 2021-07-09 | 成都市览图科技有限公司 | Three-dimensional object surface defect detection method |
CN113469997B (en) * | 2021-07-19 | 2024-02-09 | 京东科技控股股份有限公司 | Method, device, equipment and medium for detecting plane glass |
CN113469997A (en) * | 2021-07-19 | 2021-10-01 | 京东科技控股股份有限公司 | Method, device, equipment and medium for detecting plane glass |
CN113538436A (en) * | 2021-09-17 | 2021-10-22 | 深圳市信润富联数字科技有限公司 | Method and device for detecting part defects, terminal equipment and storage medium |
CN113870267A (en) * | 2021-12-03 | 2021-12-31 | 深圳市奥盛通科技有限公司 | Defect detection method, defect detection device, computer equipment and readable storage medium |
CN114354621A (en) * | 2021-12-29 | 2022-04-15 | 广州德志金属制品有限公司 | Method and system for automatically detecting product appearance |
CN114354621B (en) * | 2021-12-29 | 2024-04-19 | 广州德志金属制品有限公司 | Method and system for automatically detecting product appearance |
CN114998357A (en) * | 2022-08-08 | 2022-09-02 | 长春摩诺维智能光电科技有限公司 | Industrial detection method, system, terminal and medium based on multi-information analysis |
CN114998357B (en) * | 2022-08-08 | 2022-11-15 | 长春摩诺维智能光电科技有限公司 | Industrial detection method, system, terminal and medium based on multi-information analysis |
WO2024044913A1 (en) * | 2022-08-29 | 2024-03-07 | Siemens Aktiengesellschaft | Method, apparatus, electronic device, storage medium and computer program product for detecting circuit board assembly defect |
CN116363085B (en) * | 2023-03-21 | 2024-01-12 | 江苏共知自动化科技有限公司 | Industrial part target detection method based on small sample learning and virtual synthesized data |
CN116363085A (en) * | 2023-03-21 | 2023-06-30 | 江苏共知自动化科技有限公司 | Industrial part target detection method based on small sample learning and virtual synthesized data |
Also Published As
Publication number | Publication date |
---|---|
CN110400315B (en) | 2020-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110400315A (en) | A kind of defect inspection method, apparatus and system | |
Liu et al. | A detection and recognition system of pointer meters in substations based on computer vision | |
TWI566204B (en) | Three dimensional object recognition | |
JP6305171B2 (en) | How to detect objects in a scene | |
WO2022170844A1 (en) | Video annotation method, apparatus and device, and computer readable storage medium | |
CN111368852A (en) | Article identification and pre-sorting system and method based on deep learning and robot | |
CN106971408B (en) | A kind of camera marking method based on space-time conversion thought | |
CN111192293A (en) | Moving target pose tracking method and device | |
WO2021136386A1 (en) | Data processing method, terminal, and server | |
CN108362205B (en) | Space distance measuring method based on fringe projection | |
CA3185292A1 (en) | Neural network analysis of lfa test strips | |
CN104680570A (en) | Action capturing system and method based on video | |
CN111399634B (en) | Method and device for recognizing gesture-guided object | |
US20140218477A1 (en) | Method and system for creating a three dimensional representation of an object | |
US20220088455A1 (en) | Golf ball set-top detection method, system and storage medium | |
CN110111364A (en) | Method for testing motion, device, electronic equipment and storage medium | |
CN110942092A (en) | Graphic image recognition method and recognition system | |
CN107767366B (en) | A kind of transmission line of electricity approximating method and device | |
CN109345567A (en) | Movement locus of object recognition methods, device, equipment and storage medium | |
CN116168040B (en) | Component direction detection method and device, electronic equipment and readable storage medium | |
An et al. | Image-based positioning system using LED Beacon based on IoT central management | |
CN101980299B (en) | Chessboard calibration-based camera mapping method | |
CN112146589A (en) | Three-dimensional morphology measurement system and method based on ZYNQ platform | |
Zhang et al. | High-speed vision extraction based on the CamShift algorithm | |
CN214410073U (en) | Three-dimensional detection positioning system combining industrial camera and depth camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||