CN109785292B - Core wire sequence judging method based on machine vision - Google Patents

Core wire sequence judging method based on machine vision

Info

Publication number
CN109785292B
CN109785292B, CN201811567286.6A, CN201811567286A
Authority
CN
China
Prior art keywords
core
image
core wire
machine vision
taking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811567286.6A
Other languages
Chinese (zh)
Other versions
CN109785292A (en)
Inventor
汪钰人
刘国海
沈继锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201811567286.6A priority Critical patent/CN109785292B/en
Publication of CN109785292A publication Critical patent/CN109785292A/en
Application granted granted Critical
Publication of CN109785292B publication Critical patent/CN109785292B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 - Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 - Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a core wire sequence judging method based on machine vision, which comprises the following steps. Step 1: first, core wire pictures are collected and divided into two sets, one part used as a training set and the other as a test set; the pictures are then converted from RGB color space to HSV color space, and the values of the S channel and the V channel of each image are extracted. Step 2: pixel values of the core wire image and the background are extracted along the y direction of the coordinate system, the core wire image is classified, features are extracted, and the two kinds of features are combined into a total feature. Step 3: whether the core wire sequence is correct is judged from the R, G, B values, and the recognition rate of the classification is counted. The method is simple, fast, efficient and low in cost, saves a great amount of labor, and improves production efficiency and product quality.

Description

Core wire sequence judging method based on machine vision
Technical Field
The invention relates to the field of image processing, in particular to a core wire sequence judging method based on machine vision.
Background
At present, flat cable wire-sequence detection relies mainly on manual visual inspection or on fixed conductive plugs for electrical testing. Both approaches require a large investment of manpower. Manual inspection is inefficient and costly, which wastes resources and leads to missed judgments; inspectors are also prone to visual fatigue, which causes false detections, prevents an efficient assembly-line workflow, and leaves product quality unguaranteed. Detecting whether the terminal flat cable is qualified by energizing it is time-consuming, material-consuming and inflexible, which greatly reduces its practicality. Both methods therefore have a significant impact on the production cost and efficiency of enterprises.
Disclosure of Invention
The invention aims to solve the above technical problems in the prior art and provides a core wire sequence judging method based on machine vision that is simple, fast, efficient and low in cost and can save a great amount of labor.
The technical scheme of the invention is as follows: a core wire sequence judging method based on machine vision comprises the following steps:
Step 1: first, core wire pictures are collected and divided into two sets, one part used as a training set and the other as a test set; the pictures are then converted from RGB color space to HSV color space, and the values of the S channel and the V channel of each image are extracted. Step 2: pixel values of the core wire image and the background are extracted along the y direction of the coordinate system, the core wire image is classified, features are extracted, and the two kinds of features are combined into a total feature. Step 3: whether the core wire sequence is correct is judged from the R, G, B values, and the recognition rate of the classification is counted.
In step 1, the S-channel and V-channel images of the HSV color space are binarized by the maximum between-class variance (Otsu) method; the two binarized images are multiplied to obtain a coarse segmentation result; a morphological opening operation is then applied to the coarse segmentation result to remove noise.
Further, in step 2, the core wire image and the background are classified: the three core wires and the background are divided into four classes by the K-means algorithm, namely the three core wire colors and the background color.
Further, the specific process of step 2 is as follows:
A coordinate system is established with the upper-left corner of the image as the origin, the x axis positive to the right and the y axis positive downward, and the bounding (circumscribed) rectangle of the core wire region is computed; the R-channel values of 201 color points sampled along the y-direction center line of the bounding rectangle are taken as the first part of the features of each image. The core wire pixels are divided into 3 classes by the K-means algorithm, the points of the 3 clusters are converted to gray, and the mean value of each class is taken as the second part of the features. The two parts are combined into the total feature.
Further, in step 3, the R, G, B values of each color are used as the basis for judging the core wire sequence.
Further, the specific process of step 3 is as follows: the extracted total features are fed into a support vector machine for training; core wires in the correct order are labeled 1 and core wires in the wrong order are labeled 0; after training, a model that can judge whether the wire positions are correct is obtained, and this model is then used to judge pictures. When a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct. Finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model.
Further, the 2000 core wire pictures are divided into two sets: 1800 are used as the training set and 200 as the test set.
1. First, color space conversion is performed: the image is converted from RGB space to HSV space, and the values of the S channel and the V channel of the image are extracted;
2. The S-channel and V-channel images of the HSV color space are binarized by the maximum between-class variance (Otsu) method;
3. The two binarized images are multiplied to obtain a coarse segmentation result;
4. A morphological opening operation is applied to the coarse segmentation result to remove noise;
5. A coordinate system is established with the upper-left corner of the image as the origin, the x axis positive to the right and the y axis positive downward, and the bounding rectangle of the core wire region is computed; the R-channel values of 201 color points sampled along the y-direction center line of the bounding rectangle are taken as the first part of the features of each image; the core wire picture is divided into 3 classes by the K-means algorithm, the points of the 3 clusters are converted to gray, and the mean value of each class is taken as the second part of the features; the two parts are combined into the total feature;
6. The extracted total features are fed into a support vector machine trained with an RBF kernel; core wires in the correct order are labeled 1 and core wires in the wrong order are labeled 0; after training, a model that can judge whether the wire positions are correct is obtained, and this model is then used to judge pictures. When a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct. Finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model.
The invention has the following technical effects: the extracted total features are fed into a support vector machine for training, core wires in the correct order are labeled 1 and core wires in the wrong order are labeled 0, a model that can judge whether the wire positions are correct is obtained after training, and this model is then used to judge pictures. When a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct. Finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model. The method is simple, fast, efficient and low in cost, saves a great amount of labor, and improves production efficiency and product quality.
Drawings
Fig. 1 shows a core wire image in the correct order.
Fig. 2 shows a core wire image in the wrong order.
Fig. 3 is the core wire image after S-channel binarization.
Fig. 4 is the core wire image after V-channel binarization.
Fig. 5 is the coarsely segmented image.
Fig. 6 is the image after the morphological opening operation.
Detailed Description
In this embodiment, a method for identifying the color sequence of core wires is provided. The technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the accompanying drawings. The method comprises the following steps. Step 1: first, core wire pictures are collected and divided into two sets, one part used as a training set and the other as a test set; the pictures are then converted from RGB color space to HSV color space, and the values of the S channel and the V channel of each image are extracted. Step 2: pixel values of the core wire image and the background are extracted along the y direction of the coordinate system, the core wire image is classified, features are extracted, and the two kinds of features are combined into a total feature. Step 3: whether the core wire sequence is correct is judged from the R, G, B values, and the recognition rate of the classification is counted.
The 2000 core wire pictures are divided into two sets: 1800 are used as the training set and 200 as the test set, as sketched below.
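A minimal sketch of this split, assuming Python with scikit-learn; the list `all_files` of the 2000 collected picture paths is an illustrative name, not taken from the patent:

```python
# Sketch of the 1800/200 train-test split (assumed implementation; `all_files` is an
# illustrative list of the 2000 collected core wire picture paths).
from sklearn.model_selection import train_test_split

train_files, test_files = train_test_split(
    all_files, train_size=1800, test_size=200, random_state=0)
```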
1. First, the image is converted from RGB space to HSV space, and the values of the S channel and the V channel of the image are extracted, as in the sketch below;
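A minimal sketch of this conversion, assuming Python with OpenCV; the file name is illustrative and not part of the patent:

```python
# Sketch of step 1 (assumed OpenCV implementation; the file name is illustrative).
import cv2

bgr = cv2.imread("core_wire.png")            # OpenCV loads images in BGR channel order
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # convert to HSV color space
h, s, v = cv2.split(hsv)                     # keep the S and V channels for binarization
```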
2. As shown in Fig. 1 and Fig. 2, Fig. 1 shows core wire images in the correct order and Fig. 2 shows core wire images in the wrong order. As shown in Fig. 3 and Fig. 4, the S-channel and V-channel images of the HSV color space are binarized by the maximum between-class variance (Otsu) method. As shown in Fig. 5, the two binarized images are multiplied to obtain the coarse segmentation result;
3. As shown in Fig. 6, a morphological opening operation is applied to the coarse segmentation result to remove part of the noise; a sketch of steps 2 and 3 follows;
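A hedged sketch of these two steps, continuing the OpenCV example above; the 5x5 structuring element is an assumed choice, since the patent does not specify a kernel size:

```python
# Sketch of steps 2-3 (assumed OpenCV implementation; kernel size is an illustrative choice).
import cv2
import numpy as np

# Binarize the S and V channels with the maximum between-class variance (Otsu) method.
_, s_bin = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, v_bin = cv2.threshold(v, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Multiplying the two 0/255 masks is equivalent to taking their logical AND.
coarse = cv2.bitwise_and(s_bin, v_bin)

# Morphological opening removes small noise from the coarse segmentation.
kernel = np.ones((5, 5), np.uint8)           # assumed structuring element
opened = cv2.morphologyEx(coarse, cv2.MORPH_OPEN, kernel)
```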
4. A coordinate system is established with the upper-left corner of the image as the origin, the x axis positive to the right and the y axis positive downward. The coordinates of all foreground points of the image after the morphological opening operation are stored in matrices x and y respectively. The maximum value xmax and minimum value xmin of matrix x are taken, and xmax - xmin is used as one side of the bounding rectangle; the maximum value ymax and minimum value ymin of matrix y are taken, and ymax - ymin is used as the other side. The R-channel values of color points along the y-direction center line of the bounding rectangle are taken as features of each image, 201 feature points in total. The core wire pixels are divided into 3 classes by the K-means algorithm, the points of the 3 clusters are converted to gray, and the mean value of each class is taken as the second part of the features. The two parts are combined to obtain 204 total features, as sketched below;
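A sketch of this feature-extraction step under stated assumptions: Python with NumPy, OpenCV and scikit-learn's KMeans, continuing `bgr` and `opened` from the previous sketches; reading the patent's "y-direction center line" as a vertical line at the rectangle's mid-x is an interpretation, not something the text specifies:

```python
# Sketch of step 4 (assumed implementation; the sampling-line orientation and the use of
# scikit-learn's KMeans are assumptions, not specified in the patent text).
import numpy as np
import cv2
from sklearn.cluster import KMeans

ys, xs = np.nonzero(opened)                      # foreground coordinates after opening
xmin, xmax = xs.min(), xs.max()                  # xmax - xmin: one side of the rectangle
ymin, ymax = ys.min(), ys.max()                  # ymax - ymin: the other side

# First part: R-channel values of 201 points along the rectangle's center line.
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
cx = (xmin + xmax) // 2                          # assumed: vertical center line at mid-x
rows = np.linspace(ymin, ymax, 201).astype(int)
feat_r = rgb[rows, cx, 0].astype(float)          # 201 R-channel values

# Second part: K-means into 3 clusters on the core wire pixels, mean gray value per cluster.
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
wire_pixels = rgb[ys, xs].astype(float)          # (N, 3) color samples of the wires
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(wire_pixels)
wire_gray = gray[ys, xs].astype(float)
feat_means = np.array([wire_gray[labels == k].mean() for k in range(3)])

# Total feature vector: 201 + 3 = 204 values.
feature = np.concatenate([feat_r, feat_means])
```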
5. The extracted features are fed into a support vector machine for training. The support vector machine uses a Gaussian radial basis function kernel, the penalty factor is set to 100 and the kernel parameter is set to 2.3. Core wires in the correct order are labeled 1 and core wires in the wrong order are labeled 0; after training, a model that can judge whether the core wire positions are correct is obtained, and this model is then used to judge pictures. When a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct. Finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model. To demonstrate the recognition capability of the proposed method, its accuracy is compared with that of support vector machine classifiers trained on other commonly used features. Table 1 lists the accuracy of each method; as Table 1 shows, the proposed method achieves the best result. A sketch of the training and evaluation step is given after Table 1.
Table 1: core line classification results (%)
Method Accuracy (%)
Gray scale co-occurrence matrix 97.08
Color moment 96.67
Invariant moment 96.32
The proposed method 99.34
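A minimal sketch of the SVM training and accuracy evaluation, assuming Python with scikit-learn; `X_train`, `y_train`, `X_test`, `y_test` are placeholders for the 204-dimensional feature vectors and 1/0 labels built as above, and reading the patent's "Gaussian radial basis function parameter 2.3" as the RBF gamma is an assumption:

```python
# Sketch of the training/evaluation step (assumed scikit-learn implementation; interpreting
# the kernel parameter 2.3 as gamma is an assumption, and the data arrays are placeholders).
import numpy as np
from sklearn.svm import SVC

clf = SVC(kernel="rbf", C=100, gamma=2.3)        # penalty factor 100, kernel parameter 2.3
clf.fit(X_train, y_train)                        # correct order -> 1, wrong order -> 0

pred = clf.predict(X_test)                       # the model outputs 1 or 0 per picture
accuracy = np.mean(pred == y_test)               # correctly judged / total
print(f"accuracy = {accuracy:.4f}")
```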
In summary, the core wire sequence judging method based on machine vision of the invention comprises the following steps: 1. First, color space conversion is performed: the image is converted from RGB space to HSV space, and the values of the S channel and the V channel of the image are extracted; 2. The S-channel and V-channel images of the HSV color space are binarized by the maximum between-class variance (Otsu) method; 3. The two binarized images are multiplied to obtain a coarse segmentation result; 4. A morphological opening operation is applied to the coarse segmentation result to remove noise; 5. A coordinate system is established with the upper-left corner of the image as the origin, the x axis positive to the right and the y axis positive downward, and the bounding rectangle of the core wire region is computed; the R-channel values of 201 color points sampled along the y-direction center line of the bounding rectangle are taken as the first part of the features of each image; the core wire pixels are divided into 3 classes by the K-means algorithm, the points of the 3 clusters are converted to gray, and the mean value of each class is taken as the second part of the features; the two parts are combined into the total feature; 6. The extracted total features are fed into a support vector machine for training; core wires in the correct order are labeled 1 and core wires in the wrong order are labeled 0; after training, a model that can judge whether the wire positions are correct is obtained, and this model is then used to judge pictures. When a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct. Finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model. The method is simple, fast, efficient and low in cost, saves a great amount of labor, and improves production efficiency and product quality.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A core wire sequence judging method based on machine vision, characterized by comprising the following steps:
step 1, first collecting core wire pictures and dividing them into two sets, one part used as a training set and the other as a test set; then performing color space conversion on the core wire pictures, converting each image from RGB space to HSV space, and extracting the values of the S channel and the V channel of the image; step 2, extracting pixel values of the core wire image and the background along the y direction of the coordinate system, classifying the core wire image, extracting features, and combining the two kinds of features into a total feature; step 3, judging whether the core wire sequence is correct from the R, G, B values, and counting the recognition rate of the classification;
the specific process of step 2 is as follows:
establishing a coordinate system with the upper-left corner of the image as the origin, the x axis positive to the right and the y axis positive downward, computing the bounding rectangle of the core wire region, and taking the R-channel values of 201 color points sampled along the y-direction center line of the bounding rectangle as the first part of the features of each image; dividing the core wire picture into 3 classes by the K-means algorithm, converting the points of the 3 clusters to gray, and taking the mean value of each class as the second part of the features; combining the two parts into the total feature;
the specific process of step 3 is as follows: feeding the extracted total features into a support vector machine for training, labeling core wires in the correct order as 1 and core wires in the wrong order as 0, obtaining after training a model that can judge whether the wire positions are correct, and then using the model to judge pictures; when a picture is input into the model, the model outputs 1 or 0, from which it can be judged whether the core wire order is correct; finally, accuracy = number of correctly judged core wires / total number of core wires, which gives the accuracy of the model.
2. The core wire sequence judging method based on machine vision according to claim 1, characterized in that in step 1 the S-channel and V-channel images of the HSV color space are binarized by the maximum between-class variance method; the two binarized images are multiplied to obtain a coarse segmentation result; and a morphological opening operation is applied to the coarse segmentation result to remove noise.
3. The core wire sequence judging method based on machine vision according to claim 1, characterized in that in step 2 the core wire image and the background are classified: the three core wires and the background are divided into four classes by the K-means algorithm, namely the three core wire colors and the background color.
4. The core wire sequence judging method based on machine vision according to claim 1, characterized in that in step 3 the R, G, B values of each color are used as the basis for judging the core wire sequence.
5. The core wire sequence judging method based on machine vision according to claim 1, characterized in that the 2000 core wire pictures are divided into two sets: 1800 are used as the training set and 200 as the test set.
6. The core wire sequence judging method based on machine vision according to claim 1, characterized in that the support vector machine adopts a Gaussian radial basis function kernel, the penalty factor is set to 100, and the Gaussian radial basis function parameter is set to 2.3.
CN201811567286.6A 2018-12-20 2018-12-20 Core wire sequence judging method based on machine vision Active CN109785292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811567286.6A CN109785292B (en) 2018-12-20 2018-12-20 Core wire sequence judging method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811567286.6A CN109785292B (en) 2018-12-20 2018-12-20 Core wire sequence judging method based on machine vision

Publications (2)

Publication Number Publication Date
CN109785292A CN109785292A (en) 2019-05-21
CN109785292B (en) 2023-07-21

Family

ID=66498034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811567286.6A Active CN109785292B (en) 2018-12-20 2018-12-20 Core wire sequence judging method based on machine vision

Country Status (1)

Country Link
CN (1) CN109785292B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951795B (en) * 2015-05-26 2019-07-05 重庆金山科技(集团)有限公司 Image classification identifies judgment method
JP6276734B2 (en) * 2015-07-22 2018-02-07 矢崎総業株式会社 Inspection apparatus and inspection method

Also Published As

Publication number Publication date
CN109785292A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN114937055B (en) Image self-adaptive segmentation method and system based on artificial intelligence
CN102305798B (en) Method for detecting and classifying glass defects based on machine vision
CN115082683A (en) Injection molding defect detection method based on image processing
CN105046700A (en) Brightness correction and color classification-based fruit surface defect detection method and system
CN108181316B (en) Bamboo strip defect detection method based on machine vision
CN106251333B (en) Element reverse detection method and system
CN110781913B (en) Zipper cloth belt defect detection method
CN106228541A (en) Screen positioning method and device in visual inspection
CN104535589A (en) Online detection method and device for low-voltage current mutual inductor
CN113256624A (en) Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
Wah et al. Analysis on feature extraction and classification of rice kernels for Myanmar rice using image processing techniques
CN113793337A (en) Locomotive accessory surface abnormal degree evaluation method based on artificial intelligence
CN112419298A (en) Bolt node plate corrosion detection method, device, equipment and storage medium
CN108596196B (en) Pollution state evaluation method based on insulator image feature dictionary
CN114594114A (en) Full-automatic online nondestructive detection method for lithium battery cell
CN114820626B (en) Intelligent detection method for automobile front face part configuration
CN115082776A (en) Electric energy meter automatic detection system and method based on image recognition
CN112561875A (en) Photovoltaic cell panel coarse grid detection method based on artificial intelligence
CN102901735A (en) System for carrying out automatic detections upon workpiece defect, cracking, and deformation by using computer
CN116109577A (en) Printing label defect detection system and method
CN110060239B (en) Defect detection method for bottle opening of bottle
CN113269234A (en) Connecting piece assembly detection method and system based on target detection
CN109785292B (en) Core wire sequence judging method based on machine vision
CN103605973A (en) Image character detection and identification method
CN114820597B (en) Smelting product defect detection method, device and system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant