CN116168040B - Component direction detection method and device, electronic equipment and readable storage medium


Info

Publication number
CN116168040B
Authority
CN
China
Prior art keywords
target
image
component
expression vector
target component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310460513.XA
Other languages
Chinese (zh)
Other versions
CN116168040A (en)
Inventor
张红杰
马浩铭
欧杰
李政禹
陈振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yuanzhigu Technology Co., Ltd.
Original Assignee
Sichuan Yuanzhigu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yuanzhigu Technology Co., Ltd.
Priority to CN202310460513.XA
Publication of CN116168040A
Application granted
Publication of CN116168040B
Legal status: Active

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 3/60: Geometric image transformations in the plane of the image; rotation of whole images or parts thereof
    • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/20081: Indexing scheme for image analysis; training; learning
    • G06T 2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
    • G06T 2207/30141: Indexing scheme for image analysis; printed circuit board [PCB]
    • Y02P 90/30: Climate change mitigation technologies in the production or processing of goods; computing systems specially adapted for manufacturing


Abstract

The application discloses a component direction detection method and device, an electronic device, and a readable storage medium, belonging to the technical field of image recognition. The component direction detection method provided by the application comprises the following steps: acquiring a standard image of a target component and a test image of the target component; inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component has rotated.

Description

Component direction detection method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a direction detection method and device for components, electronic equipment and a readable storage medium.
Background
In the manufacturing of electronic products, PCBs (Printed Circuit Boards) are increasingly widely used. Components are usually mounted on a PCB using Surface Mount Technology (SMT), and after mounting, the direction of each polar component is detected; determining whether a polar component has been mounted in the wrong direction is therefore important.
In the related art, to detect the direction of a polar component, a PCB image containing the polar component is typically input into a target detection model. The model processes the input PCB image, locates the polar component in it, performs feature extraction on the component's polarity identifier, and carries out classification prediction based on the extracted identifier features to determine whether the mounting direction of the polar component is wrong.
However, direction detection methods in the related art suffer from low accuracy. For example, because component types differ, their polarity identifiers also differ, and feature extraction on the polarity identifiers involves setting an image segmentation threshold; in scenes with large variations in illumination and PCB color, the segmentation threshold fails under the influence of production-process variations, surface contamination, and similar problems, so the detection accuracy of the target detection model is low.
Disclosure of Invention
The embodiment of the application provides a direction detection method and device for components, electronic equipment and a readable storage medium, so as to solve the problem of low accuracy of direction detection for the components.
In a first aspect, an embodiment of the present application provides a method for detecting a direction of a component, where the method includes:
acquiring a standard image of a target component and a test image of the target component;
inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
In a second aspect, an embodiment of the present application provides a direction detection device for a component, including: an acquisition module and a direction detection module;
the acquisition module is used for acquiring a standard image of a target component and a test image of the target component;
the direction detection module is used for inputting the standard image of the target component and the test image of the target component into the trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In the embodiment of the application, a standard image of a target component and a test image of the target component are acquired; the standard image and the test image of the target component are input into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component has rotated. In this way, the rotation recognition model attends during training to direction characteristics that are independent of component type, so inputting the standard image and the test image of the target component into the trained rotation recognition model for direction detection processing yields a target result that better matches the actual direction of the target component, improving the accuracy of component direction detection and solving the problem of low accuracy of component direction detection.
Drawings
Fig. 1 is a schematic flow chart of a direction detection method provided in an embodiment of the present application;
Fig. 2-1 is a schematic block diagram of a rotation recognition model according to an embodiment of the present application;
Fig. 2-2 is a schematic flow chart of another direction detection method according to an embodiment of the present application;
Fig. 2-3 is a schematic flow chart of another direction detection method provided in an embodiment of the present application;
Fig. 2-4 is a schematic flow chart of another direction detection method provided in an embodiment of the present application;
Fig. 3 is a schematic flow chart of a training process of a rotation recognition model according to an embodiment of the present application;
Fig. 4-1 is a schematic flow chart of a construction process of a component data set according to an embodiment of the present application;
Fig. 4-2 is a schematic flow chart of another rotation recognition model training process provided in an embodiment of the present application;
Fig. 5 is a schematic flow chart of another direction detection method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a component direction detection device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present application; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, not necessarily to describe a particular sequence or chronological order. It should be understood that data so labeled may be interchanged where appropriate, so that embodiments of the present application can be implemented in sequences other than those illustrated or described herein; objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
An application scenario of the embodiments of the present application may be detecting the mounting direction of a polar component after it is mounted on a PCB, and determining whether the mounting direction is wrong. Specifically, components are usually mounted on PCBs by surface mount technology, and after a polar component is mounted, determining its direction is indispensable. Traditional direction detection is mostly performed manually, which is inefficient and error-prone. With the development of deep learning and machine vision, the related art can automatically detect the mounting direction of polar components using machine vision models for target detection and classification, for example using target detection models such as YOLOv3 and YOLOv4 to detect and recognize the direction of specified types of polar components. However, in the related art the target detection model is mainly used to locate and classify target components in an input PCB image; when it is used for direction recognition, polarity identifiers differ across component types, feature extraction on the identifiers involves setting an image segmentation threshold, and in scenes with large variations in illumination and PCB color the threshold fails under the influence of production-process variations, surface contamination, and similar problems, so the detection accuracy for polar component direction is low.
In view of this, the embodiments of the present application provide a component direction detection method and device, an electronic device, and a readable storage medium, which can solve the low accuracy of component direction detection in the related art. For example, the general concept of the method may include: obtaining a standard image of a target component and a test image of the target component; inputting both into the trained rotation recognition model for direction detection processing to obtain a target result, where the rotation recognition model learns the direction characteristics of components during training and the target result includes direction indication information indicating whether the direction of the target component has rotated. Because the rotation recognition model attends during training to direction characteristics that are independent of component type, inputting the standard image and the test image of the target component into the trained model yields a target result that better matches the actual direction of the target component, improving the accuracy of component direction detection.
In practical application, the direction detection method of the components provided by the embodiment of the application can be applied to detection of the mounting direction of the polar components after the polar components are mounted on the PCB. Of course, the direction detection method of the component provided in the embodiment of the present application may also be applied to direction detection of components in other fields, and the application is not specifically limited herein.
The method, the device, the electronic equipment and the readable storage medium for detecting the direction of the component provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a direction detection method of a component according to an embodiment of the present application.
As shown in fig. 1, the method for detecting the direction of the component provided in the embodiment of the present application may include:
step 110: acquiring a standard image of a target component and a test image of the target component;
step 120: inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
In step 110, the components are classified according to whether the components have polarities, and the target component may be a polar component or a non-polar component. Alternatively, the target components may be resistors, inductors, capacitors, power supplies, switches, diodes, etc. classified according to the functions of the components, and the specific type of the target components is not limited in this application.
In step 110, the standard image of the target component and the test image of the target component may be pre-acquired images or real-time acquired images, and the image acquisition mode is not particularly limited in this application.
In the embodiment of the present application, taking a polar component as the target component for example, the direction of the target component in its standard image may be a preset specified direction; in the test image, the direction of the target component is the direction to be detected relative to that specified direction. When the angle between the direction to be detected and the specified direction is smaller than or equal to a preset threshold, the direction of the target component in the test image has not rotated relative to the specified direction; conversely, when the angle is larger than the preset threshold, the direction of the target component in the test image has rotated relative to the specified direction. The preset threshold may be a preset value approaching 0°, such as 0°, 5°, or 10°, which is not specifically limited in this application. On this basis, the method can detect the direction of the target component in its test image.
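To make the rule concrete, the following is a minimal Python sketch of this threshold decision; the 10° default and the angle normalization step are illustrative assumptions, not values fixed by the application:

```python
# Hypothetical helper for the threshold rule above; the 10-degree default
# is an assumed example value, not one fixed by the application.
def direction_rotated(angle_deg: float, threshold_deg: float = 10.0) -> bool:
    # Normalize to [-180, 180) so that, e.g., 355 degrees counts as -5 degrees.
    a = (angle_deg + 180.0) % 360.0 - 180.0
    return abs(a) > threshold_deg
```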
In step 120, the rotation recognition model is a pre-trained direction recognition model for detecting component direction. Because the direction characteristics of a component are independent of its type, the rotation recognition model, having learned direction characteristics during training, can attend specifically to type-independent direction characteristics; the standard image and the test image of the target component can therefore be input into the trained model for direction detection processing, so that the target result better matches the actual direction of the target component and the accuracy of component direction detection is improved.
According to the direction detection method provided by the embodiment of the application, the standard image of the target component and the test image of the target component are obtained and input into the trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of components in the training process; the target result comprises direction indication information indicating whether the direction of the target component has rotated. In this way, the rotation recognition model attends during training to direction characteristics that are independent of component type, so the target result better matches the actual direction of the target component, improving the accuracy of component direction detection and solving the problem of low accuracy.
In a specific embodiment, as shown in fig. 2-1, the rotation recognition model mentioned in step 120 may include: the device comprises a feature extraction module for converting an image into a vector and a full connection layer for predicting the direction of a component, wherein the feature extraction module is connected with the full connection layer.
In terms of direction detection processing, as shown in fig. 2-2, a feature extraction module may be configured to perform direction feature extraction and encoding processing on a standard image of the target component to obtain a first direction feature expression vector, and perform direction feature extraction and encoding processing on a test image of the target component to obtain a second direction feature expression vector; and the full-connection layer is used for carrying out classification prediction processing on the target direction characteristic expression vector obtained based on the first direction characteristic expression vector and the second direction characteristic expression vector to obtain a target result.
Wherein, in terms of the direction detection process, the rotation recognition model may be a model of a twin network structure. Specifically, the rotation recognition model may perform the same processing on two images input together (i.e., the standard image of the target component and the test image of the target component). For example, the feature extraction module may include a feature encoder layer that performs directional feature extraction and encoding processes on the standard image of the target component and the test image of the target component, respectively; or, for another example, the feature extraction module may include two feature encoder layers with the same network parameters, and the two feature encoder layers may perform directional feature extraction and encoding processing on the standard image of the target component and the test image of the target component respectively, which is not specifically limited in this application.
Wherein the feature extraction module may be directly connected to the full connection layer (not shown); alternatively, as shown in fig. 2-1, the rotation recognition model further includes a spatial attention module for learning directional features of the component, and the feature extraction module is connected to the full connection layer via the spatial attention module. The following description is made separately.
For example, in the case where the feature extraction module is directly connected to the full connection layer, the target directional feature expression vector is a sum of the first directional feature expression vector and the second directional feature expression vector;
in the step 120, inputting the standard image of the target component and the test image of the target component into the trained rotation recognition model for direction detection, to obtain a target result may include:
carrying out direction characteristic extraction and coding treatment on the standard image of the target component to obtain a first direction characteristic expression vector;
carrying out direction characteristic extraction and coding treatment on the test image of the target component to obtain a second direction characteristic expression vector;
determining a target directional feature expression vector based on a sum of the first directional feature expression vector and the second directional feature expression vector;
And carrying out classification prediction processing on the target direction characteristic expression vector to obtain a target result.
For example, the rotation recognition model may specifically be a twin network structure sharing network parameters: it extracts features from the standard image of the target component and from the test image of the target component, adds the features, and obtains the predicted target result through a multi-layer nonlinear transformation in the fully connected layer. The output formula of the rotation recognition model is: Output = MLP(A(x1) + A(x2)), where x1 and x2 denote the standard image and the test image of the target component respectively, A denotes the direction feature extraction and encoding performed by the feature extraction module, and MLP denotes the classification prediction of the multi-layer perceptron in the fully connected layer. Because the twin network structure shares network parameters and feature addition is commutative, the two images input into the rotation recognition model are order-independent, which suits the component rotation judgment task.
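For illustration only, a minimal PyTorch-style sketch of this twin-network forward pass Output = MLP(A(x1) + A(x2)); the backbone layers, channel sizes, and output convention are assumptions, not the application's exact implementation:

```python
import torch
import torch.nn as nn

class RotationRecognitionModel(nn.Module):
    """Sketch of the twin network: Output = MLP(A(x1) + A(x2)).
    Layer and channel sizes are illustrative assumptions."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared feature encoder A (the text names AlexNet/LeNet-style
        # encoder layers; a small conv stack stands in for them here).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Two-layer fully connected head, as described in the text.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit; assumed convention: 1 = not rotated
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Shared parameters plus commutative feature addition make the
        # two inputs order-independent.
        f1, f2 = self.encoder(x1), self.encoder(x2)
        return self.mlp(f1 + f2)
```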
In this way, the rotation recognition model pays attention to the direction characteristics irrelevant to the type of the component when carrying out direction recognition processing on the input standard image of the target component and the test image of the target component, and the accuracy rate of direction detection of the component is improved.
As another example, as shown in fig. 2-1, in the case that the rotation recognition model further includes a spatial attention module, and the feature extraction module is connected to the full connection layer via the spatial attention module, in terms of the direction detection process, the spatial attention module may be configured to perform a two-layer convolution process and a nonlinear transformation process on the first direction feature expression vector to obtain a first attention tensor, and multiply the first attention tensor with the first direction feature expression vector to obtain a first target direction feature expression vector;
the spatial attention module is further used for carrying out two-layer convolution processing and nonlinear transformation processing on the second direction characteristic expression vector to obtain a second attention tensor, and multiplying the second attention tensor with the second direction characteristic expression vector to obtain a second target direction characteristic expression vector;
the target direction feature expression vector is the sum of the first target direction feature expression vector and the second target direction feature expression vector.
It can be appreciated that spatial attention may be applied simultaneously to the two inputs of the twin network structure. Taking one of the two input images (the standard image of the target component) as an example, as shown in Fig. 2-3, the spatial attention module applies two convolution layers in sequence to the first direction feature expression vector, applies a Sigmoid (nonlinear saturation function) transformation to obtain a first attention tensor, and multiplies the first attention tensor with the first direction feature expression vector to obtain the first target direction feature expression vector. The output formula of the rotation recognition model then becomes: Output = MLP(CAM(A(x1)) + CAM(A(x2))), where x1 and x2 denote the standard image and the test image of the target component respectively, A denotes the direction feature extraction and encoding performed by the feature extraction module, CAM denotes the spatial attention module's further learning of the changing part of the direction features, and MLP denotes the classification prediction of the multi-layer perceptron in the fully connected layer. The spatial attention mechanism thus makes the rotation recognition model focus further on the changing part of the direction features, further improving the accuracy of the twin network structure.
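A minimal sketch of such a spatial attention module, assuming the direction feature expression vector is a spatial feature map of shape (B, C, H, W) taken before any global pooling; channel counts and kernel sizes are assumptions:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of the spatial attention step: two convolution layers followed
    by a sigmoid produce an attention tensor, which is multiplied element-wise
    with the direction feature map. Channel sizes are assumptions."""
    def __init__(self, channels: int = 64, hidden: int = 16):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # the nonlinear saturation function in the text
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) direction feature map from the encoder.
        a = self.attn(feat)   # (B, 1, H, W) attention tensor
        return feat * a       # CAM(A(x)): broadcast multiply with features
```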
Based on this, in the step 120, inputting the standard image of the target component and the test image of the target component into the trained rotation recognition model for direction detection, to obtain a target result may include:
carrying out direction characteristic extraction and coding treatment on the standard image of the target component to obtain a first direction characteristic expression vector;
carrying out direction characteristic extraction and coding treatment on the test image of the target component to obtain a second direction characteristic expression vector;
performing two-layer convolution processing and nonlinear transformation processing on the first direction feature expression vector to obtain a first attention tensor, and multiplying the first attention tensor by the first direction feature expression vector to obtain a first target direction feature expression vector;
performing two-layer convolution processing and nonlinear transformation processing on the second direction feature expression vector to obtain a second attention tensor, and multiplying the second attention tensor by the second direction feature expression vector to obtain a second target direction feature expression vector;
determining a target directional feature expression vector based on a sum of the first target directional feature expression vector and the second target directional feature expression vector;
And carrying out classification prediction processing on the target direction characteristic expression vector to obtain a target result.
Therefore, on the basis of the twin network structure, the rotation recognition model further focuses on the change part of the direction characteristics through a spatial attention mechanism, and the accuracy of the twin network structure is further improved, so that the accuracy of direction detection of components is further improved.
In practical applications, the feature extraction module may include the feature encoder layers of a preset deep convolutional neural network model, for example a prepared AlexNet or LeNet model; the feature extraction module may use the feature encoder layers of such a deep convolutional neural network as its backbone.
In practical applications, the fully connected layer may consist of a two-layer fully connected structure. It can be understood that too few fully connected layers make the prediction inaccurate, while too many layers cause overfitting, which also degrades the prediction; testing showed that two fully connected layers give the highest prediction accuracy.
In addition, the application scenario of the embodiment of the present application may be to detect the mounting direction of the polar component after the polar component is mounted on the PCB. As shown in fig. 2 to 4, the method for detecting the direction of the component provided in the embodiment of the present application may include:
step 210: acquiring a first image comprising a standard PCB and a second image comprising a test PCB;
step 220: inputting the first image into a trained target detection model to perform target detection processing to obtain the position information of a target component;
step 230: extracting a standard image of the target component from the first image according to the position information of the target component;
step 240: extracting a test image of the target component from the second image according to the position information of the target component;
step 250: inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
Wherein steps 210 through 240 may be sub-steps of step 110; step 250 may refer to the details of step 120.
In practical application, the embodiment of the application may obtain the first image including the standard PCB and the second image including the test PCB by photographing the PCB, and may also obtain the first image including the standard PCB and the second image including the test PCB by photographing and performing image registration processing on the photograph, which is not particularly limited in this application.
In a specific example, the step 210 of acquiring the first image including the standard PCB and the second image including the test PCB includes:
acquiring a first initial image comprising a standard PCB and a second initial image comprising a test PCB;
and carrying out image registration processing on the first initial image and the second initial image based on a feature matching algorithm to obtain a first image comprising a standard PCB and a second image comprising a test PCB.
The image registration process corrects the angle of the second initial image (the test PCB) so that its pixels correspond one-to-one with those of the first initial image (the standard PCB), which facilitates the subsequent comparison of corresponding components at specified positions and eliminates the influence of PCB placement. For example, the feature matching algorithm may be the Oriented FAST and Rotated BRIEF (ORB) fast feature point extraction algorithm or the Scale-Invariant Feature Transform (SIFT) matching algorithm, without specific limitation in this application. The embodiment of the application computes key points of the PCB images based on the feature matching algorithm and applies a linear transformation so that pixel positions in the two PCB images correspond one-to-one.
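For illustration, a minimal OpenCV-based sketch of this ORB registration step; the feature count, match filtering, and RANSAC threshold are assumed values, and a homography stands in for the linear transformation described above:

```python
import cv2
import numpy as np

def register_to_standard(test_img: np.ndarray, std_img: np.ndarray) -> np.ndarray:
    """Align the test-PCB image to the standard-PCB image so that pixel
    positions correspond one-to-one. Parameter values are assumptions."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp_std, des_std = orb.detectAndCompute(std_img, None)
    kp_test, des_test = orb.detectAndCompute(test_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_test, des_std), key=lambda m: m.distance)
    good = matches[: max(4, len(matches) // 2)]  # keep the closer half
    src = np.float32([kp_test[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_std[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = std_img.shape[:2]
    return cv2.warpPerspective(test_img, H, (w, h))
```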
In step 220, the target detection model is used to predict the position information of the target component from the first image including the standard PCB. The input of the target detection model is the first image, and its output is the position information of the target component in that image; for example, the position information may be positioning frame information comprising the upper-left corner coordinates (x, y) and the width and height (w, h).
The target detection model may be a Fast R-CNN model, an SSD model, a YOLO model, or the like, which is not specifically limited in this application.
The target detection model pays attention to the position information of the components in the training process, the training sample data set of the target detection model can be an image comprising the PCB, and the position information of the components can be marked on the image of the PCB manually for training. Since the training process of the target detection model belongs to a conventional basic means in the field, the description is omitted here.
In steps 230 to 250, the application acquires images of the polar component (i.e., the target component) at the same position from the first image including the standard PCB and the second image including the test PCB, and inputs the standard image and the test image of the target component into the trained rotation recognition model for direction detection processing: an output of 0 from the rotation recognition model indicates that the direction of the target component in the test image has rotated relative to its specified direction in the standard image, while an output of 1 indicates that it has not.
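A hedged sketch of steps 230 to 250, assuming registered images, a list of (x, y, w, h) positioning frames from the target detection model, and the twin-network model above; the preprocessing helper and the 0.5 decision threshold are assumptions:

```python
import numpy as np
import torch

def to_tensor(img: np.ndarray) -> torch.Tensor:
    # HWC uint8 image -> 1xCxHxW float tensor in [0, 1] (assumed preprocessing).
    return torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0) / 255.0

@torch.no_grad()
def detect_rotations(model, std_pcb, test_pcb, boxes):
    """For each (x, y, w, h) positioning frame, crop the same region from the
    registered standard and test PCB images and classify the pair.
    Returns 1 (not rotated) or 0 (rotated) per component."""
    results = []
    for (x, y, w, h) in boxes:
        std_crop = std_pcb[y:y + h, x:x + w]
        test_crop = test_pcb[y:y + h, x:x + w]
        logit = model(to_tensor(std_crop), to_tensor(test_crop))
        results.append(int(torch.sigmoid(logit).item() > 0.5))
    return results
```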
In this way, in industrial mass-production scenarios for PCBs, the direction detection method provided by the embodiments of the present application can accurately detect the mounting direction of a polar component after it is mounted on the PCB and accurately determine whether the mounting direction is wrong.
The above describes the process of predicting the direction of the component by the rotation recognition model on the input image, and before this step, the embodiment of the present application may further include a training process of the rotation recognition model.
As shown in fig. 3, in a specific embodiment, in the method for detecting a direction of a component provided in the embodiment of the present application, before the direction of the target component is detected by using the rotation recognition model in step 120, the embodiment of the present application further includes a training process of the rotation recognition model, where the training process of the rotation recognition model may include:
step 310: acquiring a first rotation identification model and M pieces of sample data, wherein the sample data are images comprising polar components;
step 320: preprocessing the M pieces of sample data to obtain N pieces of preprocessed target sample data; each piece of target sample data in the N pieces of target sample data comprises a standard image of a polar component and a comparison image corresponding to the standard image, and the comparison image is obtained by rotating the standard image of the polar component; m is an integer, N is a multiple of M;
Step 330: inputting first target sample data into the first rotation identification model to obtain a first direction predicted value result corresponding to the polar component of the first target sample data;
step 340: based on the first direction predicted value result and the first direction true value result, adjusting parameters of the first rotation recognition model to obtain a second rotation recognition model; the first direction truth result is obtained based on the rotation angle of the contrast image relative to the standard image in the first set of target sample data;
step 350: obtaining an (n+1) th rotation recognition model according to the N target sample data;
step 360: and obtaining a trained rotation recognition model based on the (N+1) th rotation recognition model.
In step 310, the M pieces of sample data may be images of polar components taken from a pre-built component dataset. Before step 310, embodiments of the present application may also include constructing the polar component dataset. Specifically, polar components come in many types, and new varieties keep appearing as technology develops, so the embodiment of the application collects a large number of images of different types of polar components. The images are classified and stored by polarity identifier: components of the same type differ in size and shape but carry similar polarity identifiers. Each original image is rotated by 90°, 180°, and 270° and flipped axisymmetrically to obtain enhanced data, increasing the number of polar component images. The construction flow of the data set is shown in Fig. 4-1:
(1) Photograph the PCB with an industrial camera to obtain a high-resolution PCB image;
(2) Manually mark the positions of the components in the PCB image, for example as positioning frame information (x, y, w, h) comprising the upper-left corner coordinates (x, y) and the width and height (w, h);
(3) Crop the image of each polar component from the PCB image according to its position;
(4) Classify and store the component images according to their polarity identifiers, labeling the identifier types;
(5) Rotate each image by 90°, 180°, and 270°, flip it axisymmetrically, and save the results into the component data set (a minimal augmentation sketch follows this list).
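A minimal NumPy sketch of step (5)'s augmentation; the storage step is omitted, and returning the 0° original alongside the rotations is an assumption:

```python
import numpy as np

def augment_component(img: np.ndarray) -> list:
    """Step (5): 0/90/180/270-degree rotations of a component image plus an
    axisymmetric flip of each, giving eight variants per original image."""
    rotations = [np.rot90(img, k) for k in range(4)]  # 0, 90, 180, 270 degrees
    flips = [np.fliplr(r) for r in rotations]         # axisymmetric flips
    return rotations + flips
```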
Therefore, the number of component images in the component data set and the randomness of the component directions in each component image can be improved, and the constructed component data set is used as sample data for training the rotation identification model, so that the accuracy of the rotation identification model is higher.
In order to ensure randomness of the target sample data, in step 320, the preprocessing is performed on the M pieces of sample data to obtain N preprocessed pieces of target sample data, including:
for each of the M samples: taking the sample data as a standard image of the polar element, and carrying out K times of rotation processing according to different rotation angles to obtain K comparison images; determining K target sample data based on the K comparison images; the standard images in each of the K pieces of target sample data are the same, and the rotation angles of the contrast images in each of the K pieces of target sample data are different.
Of course, the preprocessing of the M pieces of sample data may be performed not only before model training but also during model training, which is not specifically limited in this application.
In this embodiment of the present application, in step 340, adjusting the parameter of the first rotation identification model based on the first direction predicted value result and the first direction true value result includes:
determining a contrast loss value corresponding to the first target sample data based on the first direction predicted value result and the first direction true value result;
and adjusting parameters in the first rotation identification model based on the comparison loss value.
When the rotation angle of the comparison image relative to the standard image in the target sample data is smaller than a preset threshold value, the first direction truth value result is 1, which indicates that the direction of the target component in the comparison image does not rotate relative to the designated direction of the target component in the standard image. When the rotation angle of the comparison image relative to the standard image in the target sample data is larger than or equal to a preset threshold value, the first direction true value result is 0, which indicates that the direction of the target component in the comparison image rotates relative to the designated direction of the target component in the standard image.
In this way, in the process of training the rotation recognition model, the feature extraction module is used for extracting the direction features of the standard image and the comparison image of the polar component, the spatial attention module is used for further learning the change part of the direction features, the full-connection layer is used for classifying whether the component direction rotates, the direction prediction value result is obtained through prediction, and further the loss calculation is carried out on the direction prediction value result obtained based on the rotation angle of the comparison image relative to the standard image in the target sample data, so that the rotation recognition model focuses on the direction features irrelevant to the component type.
For example, as shown in Fig. 4-2, in practical application the number of rotations K may be 4, and the training process of the rotation recognition model may include:
(1) Acquire an image of a polar component from the component data set;
(2) Rotate the image by 0°, 90°, 180°, and 270°, and, to ensure randomness, apply a further random rotation of -5° to +5° after each;
(3) Obtain 4 image pairs: original with 0° rotation, original with 90° rotation, original with 180° rotation, and original with 270° rotation;
(4) Label the direction truth result of the 4 pairs: the original/0° pair is labeled 1, and the remaining pairs are labeled 0;
(5) Train the rotation recognition model with the batched data until the network parameters converge (a training sketch follows this list).
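A minimal training sketch of steps (2) to (5), assuming the twin-network model above and torchvision's rotate transform; the optimizer, batching, and label shapes are assumptions:

```python
import random
import torch
import torch.nn as nn
from torchvision.transforms import functional as TF

def make_training_pairs(img: torch.Tensor):
    """Steps (2)-(4): pair the original image with 0/90/180/270-degree
    rotations, each jittered by a random -5 to +5 degree rotation; only
    the 0-degree pair is labeled 1."""
    pairs = []
    for base in (0, 90, 180, 270):
        angle = base + random.uniform(-5.0, 5.0)
        rotated = TF.rotate(img, angle)  # img: (C, H, W) tensor
        pairs.append((img, rotated, 1.0 if base == 0 else 0.0))
    return pairs

def train_step(model, optimizer, x1, x2, labels):
    """One step of step (5): binary cross-entropy on the pair labels."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(x1, x2), labels)  # labels: (B, 1) float tensor
    loss.backward()
    optimizer.step()
    return loss.item()
```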
In this way, in the process of training the rotation recognition model, the first direction predicted value result predicted by the first rotation recognition model and the first direction true value result obtained based on the rotation angle of the comparison image relative to the standard image in the first target sample data can be subjected to loss calculation, and the rotation recognition model is trained iteratively until the network parameters of the rotation recognition model are converged, so that the trained rotation recognition model is obtained, and the rotation recognition model focuses on the direction characteristics irrelevant to the types of the components.
In practical application, as shown in Fig. 5, the component direction detection method provided in the embodiment of the present application combines image registration, target detection, a twin network, and a spatial attention algorithm into a polar component rotation recognition method for PCBs that can be applied directly to real production tasks. A standard PCB is taken as the template, a target detection model is used to locate the polar components of the PCB, and a rotation recognition model based on the twin network structure and the spatial attention algorithm then performs rotation recognition on the polar components.
The twin network structure enables the rotation recognition model to predict whether a component's direction has rotated without supervision by component type, giving the method good expandability while eliminating the cost of designing countless identifier-specific recognition operators. The rotation recognition model provided by the embodiment of the application needs no component type labels: it is trained on batches of component images so that it learns direction characteristics independent of component type. Because a component's polarity identifier is generally associated with its direction characteristics, in the ideal case the rotation recognition model focuses its attention on the polarity identifier and judges whether the component has rotated from the relative change in the identifier's position.
In this way, the rotation recognition model attends during training to direction characteristics independent of component type, so inputting the standard image and the test image of the target component into the trained model for direction detection processing yields a target result that better matches the actual direction of the target component, improving the accuracy of component direction detection and solving the problem of low accuracy. Moreover, on the basis of the twin network structure, the spatial attention mechanism makes the rotation recognition model focus further on the changing part of the direction features, further improving the accuracy of the twin network structure and thus the accuracy of component direction detection.
In addition, because the rotation recognition model concentrates its attention during training on direction characteristics rather than on features tied to particular polarity identifiers, it has strong expandability, a wide application range, and high accuracy when tested on components with unseen polarity identifiers. For example, the polar component data set constructed in embodiments of the present application may include a training sample set and a verification sample set whose components carry different polarity identifiers. If the data set includes 8 types of sample data, each with different polarity identifiers, the first 7 types can serve as the training sample set for learning and training, and the trained rotation recognition model can be tested on the 8th type as the verification sample set; the test accuracy is high, so the application range of the rotation recognition model is not limited to the component types in the training sample set, and it can also be applied to direction testing of other component types.
In the component direction detection method provided by the embodiments of the present application, the execution subject may be a component direction detection device. In the embodiments of the present application, a component direction detection device performing the component direction detection method is taken as an example to describe the device provided by the embodiments.
Fig. 6 is a schematic structural diagram of a direction detection device for a component according to an embodiment of the present application.
As shown in fig. 6, an embodiment of the present application provides a direction detection device 600 for a component, which may include: an acquisition module 601 and a direction detection module 602;
the acquiring module 601 is configured to acquire a standard image of a target component and a test image of the target component;
the direction detection module 602 is configured to input the standard image of the target component and the test image of the target component to a trained rotation recognition model for direction detection, so as to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
The component direction detection device provided by the embodiment of the application comprises an acquisition module and a direction detection module. The acquisition module acquires a standard image of a target component and a test image of the target component; the direction detection module inputs both into the trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of components in the training process; the target result comprises direction indication information indicating whether the direction of the target component has rotated. In this way, the rotation recognition model attends during training to direction characteristics independent of component type, so the target result better matches the actual direction of the target component, improving the accuracy of component direction detection and solving the problem of low accuracy.
Optionally, in the direction detection device for a component provided in the embodiments of the present application, the rotation recognition model includes: a feature extraction module for converting an image into a vector and a full connection layer for predicting the direction of a component, wherein the feature extraction module is connected with the full connection layer;
in the aspect of direction detection processing, the feature extraction module is used for carrying out direction feature extraction and coding processing on the standard image of the target component to obtain a first direction feature expression vector, and carrying out direction feature extraction and coding processing on the test image of the target component to obtain a second direction feature expression vector; and the full connection layer is used for carrying out classification prediction processing on the target direction feature expression vector obtained based on the first direction feature expression vector and the second direction feature expression vector to obtain a target result.
Optionally, in the direction detection device for a component provided in the embodiment of the present application, the target direction feature expression vector is a sum of the first direction feature expression vector and the second direction feature expression vector;
the direction detection module is specifically configured to:
carrying out direction feature extraction and coding processing on the standard image of the target component to obtain a first direction feature expression vector;
carrying out direction feature extraction and coding processing on the test image of the target component to obtain a second direction feature expression vector;
determining a target direction feature expression vector based on the sum of the first direction feature expression vector and the second direction feature expression vector;
and carrying out classification prediction processing on the target direction feature expression vector to obtain a target result.
In this way, when performing direction recognition on the input standard image and test image of the target component, the rotation recognition model attends to direction features that are independent of component type, which improves the accuracy of component direction detection.
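For readers implementing a comparable pipeline, the following is a minimal PyTorch sketch of such a twin-branch model; the ResNet-18 encoder, layer sizes, and class name are assumptions for illustration, since the embodiments only specify a feature encoder of a preset deep convolutional neural network and a two-layer full connection layer:

```python
import torch.nn as nn
import torchvision.models as models

class RotationRecognitionModel(nn.Module):
    """Illustrative twin-branch model: shared encoder, summed direction
    feature expression vectors, two-layer full connection head."""
    def __init__(self, feature_dim=512, hidden_dim=128):
        super().__init__()
        # Feature extraction module: encoder layers of a preset deep CNN
        # (ResNet-18 here is an assumption; the embodiments do not name one).
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        # Two-layer full connection layer predicting rotated / not rotated.
        self.fc = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, standard_img, test_img):
        # First and second direction feature expression vectors (shared weights).
        v1 = self.encoder(standard_img).flatten(1)
        v2 = self.encoder(test_img).flatten(1)
        # Target direction feature expression vector: their sum.
        return self.fc(v1 + v2)
```

A call such as model(standard_batch, test_batch) would return logits over the two classes (not rotated / rotated); both branches share weights, as in a standard Siamese arrangement.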
Optionally, in the direction detection device for a component provided in the embodiments of the present application, the rotation recognition model further includes a spatial attention module for learning direction features of components, and the feature extraction module is connected to the full connection layer through the spatial attention module;
in the aspect of direction detection processing, the spatial attention module is configured to perform two-layer convolution processing and nonlinear transformation processing on the first direction feature expression vector to obtain a first attention tensor, and multiply the first attention tensor with the first direction feature expression vector to obtain a first target direction feature expression vector;
the spatial attention module is further configured to perform two-layer convolution processing and nonlinear transformation processing on the second direction feature expression vector to obtain a second attention tensor, and multiply the second attention tensor with the second direction feature expression vector to obtain a second target direction feature expression vector;
the target direction feature expression vector is a sum of the first target direction feature expression vector and the second target direction feature expression vector.
Therefore, through the spatial attention mechanism, the rotation recognition model focuses further on the regions where the direction features change, which further improves the accuracy of the Siamese (twin) network structure.
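A minimal sketch of such a spatial attention module follows, assuming the attention operates on the encoder's spatial feature maps before pooling; the kernel sizes, channel reduction, and sigmoid squashing are assumptions:

```python
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Two-layer convolution plus a nonlinear transformation producing an
    attention tensor that reweights the direction feature expression."""
    def __init__(self, channels=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),  # first convolution
            nn.ReLU(),                                          # nonlinear transformation
            nn.Conv2d(channels // 8, 1, kernel_size=1),         # second convolution
            nn.Sigmoid(),                                       # squash to (0, 1)
        )

    def forward(self, feature_map):
        attention = self.conv(feature_map)   # attention tensor, shape (B, 1, H, W)
        return attention * feature_map       # target direction feature expression
```

In a full model, this module would sit between the encoder and the full connection stage, applied to each branch's feature map before the two results are summed.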
Optionally, in the direction detection device for a component provided in the embodiment of the present application, the feature extraction module includes a feature encoder layer in a preset deep convolutional neural network model; the full-connection layer is composed of a two-layer full-connection layer structure.
It can be understood that too few full connection layers make the prediction of the full connection layer inaccurate, while too many layers cause overfitting, which also makes the prediction inaccurate; testing showed that prediction accuracy is highest when the number of full connection layers is two.
Optionally, the device for detecting a direction of a component provided in the embodiment of the present application further includes a training module, where the training module is configured to:
in the training process of the rotation recognition model, acquiring a first rotation recognition model and M pieces of sample data, wherein the sample data are images comprising polar components;
preprocessing the M pieces of sample data to obtain N pieces of preprocessed target sample data; each piece of target sample data in the N pieces of target sample data comprises a standard image of a polar component and a comparison image corresponding to the standard image, and the comparison image is obtained by rotating the standard image of the polar component; m is an integer, N is a multiple of M;
inputting first target sample data into the first rotation recognition model to obtain a first direction predicted value result corresponding to the polar component of the first target sample data;
based on the first direction predicted value result and a first direction truth value result, adjusting parameters of the first rotation recognition model to obtain a second rotation recognition model; the first direction truth value result is obtained based on the rotation angle of the contrast image relative to the standard image in the first target sample data;
obtaining an (N+1)th rotation recognition model according to the N pieces of target sample data;
and obtaining a trained rotation recognition model based on the (N+1)th rotation recognition model.
Optionally, in the direction detection device for a component provided in the embodiments of the present application, in the process of preprocessing the M pieces of sample data to obtain N pieces of preprocessed target sample data, the training module is specifically configured to:
for each piece of sample data in the M pieces of sample data: taking the sample data as a standard image of a polar component, and performing K rotation operations at different rotation angles to obtain K contrast images; determining K pieces of target sample data based on the K contrast images; the standard image in each of the K pieces of target sample data is the same, and the rotation angle of the contrast image in each of the K pieces of target sample data is different.
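A possible preprocessing sketch, assuming torchvision image tensors and an illustrative angle set (the embodiments only require the K rotation angles to differ):

```python
import torchvision.transforms.functional as TF

def make_target_samples(standard_img, angles=(90, 180, 270)):
    """Turn one piece of sample data into K pieces of target sample data by
    rotating the standard image K times at different angles."""
    samples = []
    for angle in angles:
        contrast_img = TF.rotate(standard_img, angle)  # contrast image
        samples.append((standard_img, contrast_img, angle))
    return samples  # K target samples sharing the same standard image
```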
Optionally, in the direction detection device for a component provided in the embodiments of the present application, in the process of adjusting the parameters of the first rotation recognition model based on the first direction predicted value result and the first direction truth value result, the training module is specifically configured to:
determining a contrast loss value corresponding to the first target sample data based on the first direction predicted value result and the first direction truth value result;
and adjusting the parameters in the first rotation recognition model based on the contrast loss value.
In this way, in the process of training the rotation recognition model, the feature extraction module extracts the direction features of the standard image and the contrast image of the polar component, the spatial attention module further learns where those direction features change, and the full connection layer classifies whether the component direction has rotated, yielding a direction predicted value result. Loss is then computed between that predicted result and the direction truth value result obtained from the rotation angle of the contrast image relative to the standard image in the target sample data, so that the rotation recognition model comes to focus on direction features that are independent of component type.
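Putting the pieces together, a hedged training-loop sketch follows; the Adam optimizer, the sample-tuple layout, and the cross-entropy criterion standing in for the contrast loss are all assumptions:

```python
import torch
import torch.nn as nn

def train_rotation_model(model, target_samples, lr=1e-4):
    """One pass over the N pieces of target sample data: predict, compare
    with the truth derived from the rotation angle, adjust parameters."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # assumed stand-in for the contrast loss
    for standard_img, contrast_img, angle in target_samples:
        # Direction truth value result: rotated iff the contrast image's
        # rotation angle relative to the standard image is nonzero.
        truth = torch.tensor([0 if angle % 360 == 0 else 1])
        # Images are assumed batched with shape (1, C, H, W).
        prediction = model(standard_img, contrast_img)  # direction predicted value result
        loss = criterion(prediction, truth)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model  # after all N samples: the (N+1)th, i.e. trained, model
```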
Optionally, in the direction detection device for a component provided in the embodiments of the present application, the acquisition module is specifically configured to:
acquiring a first image comprising a standard PCB and a second image comprising a test PCB;
inputting the first image into a trained target detection model to perform target detection processing to obtain the position information of a target component;
extracting a standard image of the target component from the first image according to the position information of the target component;
and extracting the test image of the target component from the second image according to the position information of the target component.
In this way, in an industrial mass-production scenario for PCBs, the direction detection method provided by the embodiments of the present application can accurately detect the installation direction of a polar component after it is mounted on the PCB, and accurately determine whether that installation direction is wrong.
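As a sketch of this cropping step, assuming NumPy-style images and a hypothetical detector callable that returns one bounding box (the embodiments use a trained target detection model whose API is not specified):

```python
def crop_component(first_image, second_image, detector):
    """Cut the target component out of both PCB images using the position
    found in the standard image (detector is a hypothetical callable)."""
    # Target detection on the standard PCB yields the component's position.
    x, y, w, h = detector(first_image)  # assumed to return one (x, y, w, h) box
    standard_image = first_image[y:y + h, x:x + w]  # standard image of the component
    test_image = second_image[y:y + h, x:x + w]     # test image at the same position
    return standard_image, test_image
```

Because the two PCB images are registered beforehand, the same box coordinates index the same physical component in both images.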
Optionally, in the direction detection device for a component provided in the embodiments of the present application, in the process of acquiring a first image including a standard PCB and a second image including a test PCB, the acquisition module is specifically configured to:
acquiring a first initial image comprising the standard PCB and a second initial image comprising the test PCB;
and carrying out image registration processing on the first initial image and the second initial image based on a feature matching algorithm to obtain a first image comprising a standard PCB and a second image comprising a test PCB.
Therefore, the embodiments of the present application can compute key points of the PCB images based on a feature matching algorithm and apply a linear transformation so that pixel positions in the two PCB images correspond one-to-one, which makes it convenient to subsequently compare the corresponding components at a given position and eliminates the influence of the PCB's placement.
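A sketch of such a registration step follows, using OpenCV ORB features and a RANSAC homography as stand-ins; the embodiments name only a feature matching algorithm and a linear transformation, so these specific choices are assumptions:

```python
import cv2
import numpy as np

def register_pcb_images(first_initial, second_initial):
    """Align the test-PCB image to the standard-PCB image via keypoint
    matching and a homography (ORB and RANSAC are illustrative choices)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(first_initial, None)
    kp2, des2 = orb.detectAndCompute(second_initial, None)
    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Linear transformation making pixel positions correspond one-to-one.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    h, w = first_initial.shape[:2]
    aligned = cv2.warpPerspective(second_initial, H, (w, h))
    return first_initial, aligned
```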
The direction detection device of the component in the embodiments of the present application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine; the embodiments of the present application are not specifically limited in this respect.
The direction detection device of the component in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The direction detection device for the component provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 5, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 7, the embodiments of the present application further provide an electronic device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or instructions executable on the processor 701. When executed by the processor 701, the program or instructions implement the steps of the above method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 810 through a power management system so as to manage charging, discharging, power consumption, and other functions through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The input unit 804 is configured to obtain a standard image of a target component and a test image of the target component;
the processor 810 is configured to input the standard image of the target component and the test image of the target component into a trained rotation recognition model for performing direction detection processing, so as to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not.
According to the electronic device provided by the embodiments of the present application, the input unit is used for acquiring a standard image of a target component and a test image of the target component; the processor is used for inputting the standard image of the target component and the test image of the target component into the trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, and the direction indication information is used for indicating whether the direction of the target component rotates or not. In this way, the rotation recognition model attends, during training, to direction features that are independent of component type, so that when the standard image and the test image of the target component are input into the trained rotation recognition model for direction detection processing, the target result better matches the actual direction of the target component, improving the accuracy of component direction detection and addressing the problem of low accuracy in component direction detection.
The electronic device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and in order to avoid repetition, details are not repeated here.
It should be appreciated that in embodiments of the present application, the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042, with the graphics processor 8041 processing image data of still images or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. Touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two parts, a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 809 can be used to store software programs as well as various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 809 may include volatile memory or nonvolatile memory, or the memory 809 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct rambus RAM (DRRAM). The memory 809 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implement each process of the embodiment of the method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein, the processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, implementing each process of the above method embodiment, and achieving the same technical effect, so as to avoid repetition, and not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
The embodiments of the present application provide a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the above method embodiments, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (12)

1. A method for detecting the direction of a component, characterized by comprising the following steps:
acquiring a standard image of a target component and a test image of the target component;
inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, wherein the direction indication information is used for indicating whether the direction of the target component rotates or not;
the rotation recognition model includes: a feature extraction module for converting an image into a vector and a full connection layer for predicting the direction of a component, wherein the feature extraction module is connected with the full connection layer;
in the aspect of direction detection processing, the feature extraction module is used for carrying out direction feature extraction and coding processing on the standard image of the target component to obtain a first direction feature expression vector, and carrying out direction feature extraction and coding processing on the test image of the target component to obtain a second direction feature expression vector; and the full connection layer is used for carrying out classification prediction processing on the target direction feature expression vector obtained based on the first direction feature expression vector and the second direction feature expression vector to obtain a target result.
2. The method of claim 1, wherein the target direction feature expression vector is a sum of the first direction feature expression vector and the second direction feature expression vector;
inputting the standard image of the target component and the test image of the target component into a trained rotation recognition model for direction detection processing to obtain a target result, wherein the method comprises the following steps:
carrying out direction feature extraction and coding processing on the standard image of the target component to obtain a first direction feature expression vector;
carrying out direction feature extraction and coding processing on the test image of the target component to obtain a second direction feature expression vector;
determining a target direction feature expression vector based on the sum of the first direction feature expression vector and the second direction feature expression vector;
and carrying out classification prediction processing on the target direction feature expression vector to obtain a target result.
3. The method of claim 1, wherein the rotation recognition model further comprises a spatial attention module for learning direction features of components, the feature extraction module being connected with the full connection layer via the spatial attention module;
in the aspect of direction detection processing, the spatial attention module is configured to perform two-layer convolution processing and nonlinear transformation processing on the first direction feature expression vector to obtain a first attention tensor, and multiply the first attention tensor with the first direction feature expression vector to obtain a first target direction feature expression vector;
the spatial attention module is further configured to perform two-layer convolution processing and nonlinear transformation processing on the second direction feature expression vector to obtain a second attention tensor, and multiply the second attention tensor with the second direction feature expression vector to obtain a second target direction feature expression vector;
the target direction feature expression vector is a sum of the first target direction feature expression vector and the second target direction feature expression vector.
4. The method of claim 1, wherein the feature extraction module comprises a feature encoder layer in a pre-set deep convolutional neural network model; the full-connection layer is composed of a two-layer full-connection layer structure.
5. The method of claim 1, wherein the training process of the rotation recognition model comprises:
acquiring a first rotation recognition model and M pieces of sample data, wherein the sample data are images comprising polar components;
preprocessing the M pieces of sample data to obtain N pieces of preprocessed target sample data; each piece of target sample data in the N pieces of target sample data comprises a standard image of a polar component and a comparison image corresponding to the standard image, and the comparison image is obtained by rotating the standard image of the polar component; m is an integer, N is a multiple of M;
inputting first target sample data into the first rotation recognition model to obtain a first direction predicted value result corresponding to the polar component of the first target sample data;
based on the first direction predicted value result and a first direction truth value result, adjusting parameters of the first rotation recognition model to obtain a second rotation recognition model; the first direction truth value result is obtained based on the rotation angle of the contrast image relative to the standard image in the first target sample data;
obtaining an (N+1)th rotation recognition model according to the N pieces of target sample data;
and obtaining a trained rotation recognition model based on the (N+1)th rotation recognition model.
6. The method of claim 5, wherein the preprocessing the M pieces of sample data to obtain N pieces of preprocessed target sample data comprises:
for each piece of sample data in the M pieces of sample data: taking the sample data as a standard image of a polar component, and performing K rotation operations at different rotation angles to obtain K contrast images; determining K pieces of target sample data based on the K contrast images; the standard image in each of the K pieces of target sample data is the same, and the rotation angle of the contrast image in each of the K pieces of target sample data is different.
7. The method of claim 5, wherein the adjusting the parameters of the first rotation recognition model based on the first direction predicted value result and the first direction truth value result comprises:
determining a contrast loss value corresponding to the first target sample data based on the first direction predicted value result and the first direction truth value result;
and adjusting the parameters in the first rotation recognition model based on the contrast loss value.
8. The method of claim 1, wherein the acquiring the standard image of the target component and the test image of the target component comprises:
acquiring a first image comprising a standard PCB and a second image comprising a test PCB;
inputting the first image into a trained target detection model to perform target detection processing to obtain the position information of a target component;
extracting a standard image of the target component from the first image according to the position information of the target component;
and extracting the test image of the target component from the second image according to the position information of the target component.
9. The method of claim 8, wherein the acquiring a first image comprising a standard PCB and a second image comprising a test PCB comprises:
acquiring a first initial image comprising the standard PCB and a second initial image comprising the test PCB;
and carrying out image registration processing on the first initial image and the second initial image based on a feature matching algorithm to obtain a first image comprising a standard PCB and a second image comprising a test PCB.
10. A direction detecting device for a component, comprising: an acquisition module and a direction detection module;
the acquisition module is used for acquiring a standard image of a target component and a test image of the target component;
the direction detection module is used for inputting the standard image of the target component and the test image of the target component into the trained rotation recognition model for direction detection processing to obtain a target result; the rotation recognition model learns the direction characteristics of the components in the training process; the target result comprises direction indication information, wherein the direction indication information is used for indicating whether the direction of the target component rotates or not;
the rotation recognition model includes: a feature extraction module for converting an image into a vector and a full connection layer for predicting the direction of a component, wherein the feature extraction module is connected with the full connection layer; in the aspect of direction detection processing, the feature extraction module is used for carrying out direction feature extraction and coding processing on the standard image of the target component to obtain a first direction feature expression vector, and carrying out direction feature extraction and coding processing on the test image of the target component to obtain a second direction feature expression vector; and the full connection layer is used for carrying out classification prediction processing on the target direction feature expression vector obtained based on the first direction feature expression vector and the second direction feature expression vector to obtain the target result.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of any of claims 1-9.
12. A readable storage medium, characterized in that it stores thereon a program or instructions, which when executed by a processor, implement the steps of the method according to any of claims 1-9.
CN202310460513.XA 2023-04-26 2023-04-26 Component direction detection method and device, electronic equipment and readable storage medium Active CN116168040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310460513.XA CN116168040B (en) 2023-04-26 2023-04-26 Component direction detection method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310460513.XA CN116168040B (en) 2023-04-26 2023-04-26 Component direction detection method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN116168040A CN116168040A (en) 2023-05-26
CN116168040B true CN116168040B (en) 2023-07-07

Family

ID=86413638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310460513.XA Active CN116168040B (en) 2023-04-26 2023-04-26 Component direction detection method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116168040B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409261B (en) * 2023-12-14 2024-02-20 成都数之联科技股份有限公司 Element angle classification method and system based on classification model

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0810132B2 (en) * 1986-06-04 1996-01-31 富士電機株式会社 Target pattern rotation angle detection method
CN101118263A (en) * 2006-08-04 2008-02-06 华硕电脑股份有限公司 Polar direction automatic detection method of polar element
CN104359402A (en) * 2014-11-17 2015-02-18 南京工业大学 Detection method for rectangular pin component visual positioning
CN105469400B (en) * 2015-11-23 2019-02-26 广州视源电子科技股份有限公司 The quick method and system for identifying, marking of electronic component polar orientation
CN106897994A (en) * 2017-01-20 2017-06-27 北京京仪仪器仪表研究总院有限公司 A kind of pcb board defect detecting system and method based on layered image
CN108961236B (en) * 2018-06-29 2021-02-26 国信优易数据股份有限公司 Circuit board defect detection method and device
CN109389091B (en) * 2018-10-22 2022-05-03 重庆邮电大学 Character recognition system and method based on combination of neural network and attention mechanism
CN109886072B (en) * 2018-12-25 2021-02-26 中国科学院自动化研究所 Face attribute classification system based on bidirectional Ladder structure
CN111640088B (en) * 2020-04-22 2023-12-01 深圳拓邦股份有限公司 Electronic element polarity detection method and system based on deep learning and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on defect detection of plastic products based on machine vision technology; Shen Honglei; Plastics Science and Technology (Issue 08); full text *

Also Published As

Publication number Publication date
CN116168040A (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN109558832B (en) Human body posture detection method, device, equipment and storage medium
CN108229277B (en) Gesture recognition method, gesture control method, multilayer neural network training method, device and electronic equipment
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
JP7051267B2 (en) Image detection methods, equipment, electronic equipment, storage media, and programs
Sykora et al. Comparison of SIFT and SURF methods for use on hand gesture recognition based on depth map
CN106874826A (en) Face key point-tracking method and device
CN116168040B (en) Component direction detection method and device, electronic equipment and readable storage medium
CN110956131B (en) Single-target tracking method, device and system
CN115661336A (en) Three-dimensional reconstruction method and related device
CN111199169A (en) Image processing method and device
CN111832561A (en) Character sequence recognition method, device, equipment and medium based on computer vision
Dai et al. Robust image registration of printed circuit boards using improved SIFT‐PSO algorithm
CN114612531A (en) Image processing method and device, electronic equipment and storage medium
Amiri et al. RASIM: a novel rotation and scale invariant matching of local image interest points
TWI769603B (en) Image processing method and computer readable medium thereof
CN113963311A (en) Safe production risk video monitoring method and system
CN115972198A (en) Mechanical arm visual grabbing method and device under incomplete information condition
CN115660969A (en) Image processing method, model training method, device, equipment and storage medium
CN114565777A (en) Data processing method and device
Lu et al. Lightweight green citrus fruit detection method for practical environmental applications
Dai et al. LAR: a low-power, high-precision mobile phone-based AR system
US11682227B2 (en) Body and hand association method and apparatus, device, and storage medium
Huang et al. Face detection based on image stitching for class attendance checking
CN113706506B (en) Method and device for detecting assembly state, electronic equipment and storage medium
WO2023241372A1 (en) Camera intrinsic parameter calibration method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant