CN113111712A - AI identification positioning method, system and device for bagged product - Google Patents

AI identification positioning method, system and device for bagged product

Info

Publication number
CN113111712A
CN113111712A (application CN202110266376.7A)
Authority
CN
China
Prior art keywords
data
group
bagged
identification
deep learning
Prior art date
Legal status
Pending
Application number
CN202110266376.7A
Other languages
Chinese (zh)
Inventor
李建全
王铮
李金松
杨一粟
王志伟
王小龙
Current Assignee
Winner Medical Co ltd
Original Assignee
Winner Medical Co ltd
Priority date
Filing date
Publication date
Application filed by Winner Medical Co ltd filed Critical Winner Medical Co ltd
Priority to CN202110266376.7A
Publication of CN113111712A
Legal status: Pending

Classifications

    • G06V 20/20 - Scenes; scene-specific elements in augmented reality scenes
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06N 20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and system for identifying and positioning bagged products fuses AI with machine vision. A deep learning model processes the captured original image of the bagged products to obtain a first set of data, and a conventional template-matching algorithm yields a corresponding second set of data. The two sets are checked against each other and calibrated to obtain an optimal set of data, which is taken as the positioning result for the bagged product, so that a manipulator can grab the product according to that result and complete the subsequent process. Because the positioning result is judged with data obtained from the deep learning model, chaotic incoming material can be identified and judged without occupying a large floor area, accurate data are transmitted to the downstream filling or execution manipulator, and packaging of bagged products can be automated quickly and accurately, better realizing factory modernization.

Description

AI identification positioning method, system and device for bagged product
Technical Field
The invention relates to the field of packaging automation, and in particular to an AI (Artificial Intelligence) identification and positioning method, system and device for bagged products.
Background
In the field of packaging automation, bagged products generally need to be placed into a given packaging container (for example, arranged neatly in a carton), and a filling or execution manipulator is typically used to grab the incoming products into the container, thereby automating packaging. However, if the incoming material is irregular, disordered or stacked, the manipulator cannot grab suitable products and the downstream automation becomes difficult to realize. Manual intervention is then usually required to prevent irregular, chaotic or stacked feeding.
At present, vibratory or differential-speed material-arranging devices are used to separate the material, avoid stacking and make the flow uniform; however, these devices are expensive, occupy a large floor area and work inefficiently, which is unfavorable for factory modernization.
Therefore, a method for identifying and positioning bagged products is needed that, within a modest floor space, can identify and judge chaotic material, transmit accurate data to the downstream filling or execution manipulator, and operate more efficiently.
Disclosure of Invention
The invention mainly solves the technical problem of providing a method, a system and a device for identifying and positioning bagged products that can identify and judge disordered material and transmit accurate data to a downstream filling or execution manipulator without significantly increasing the floor space.
According to a first aspect, an embodiment provides a method for recognizing and positioning bagged products by fusing AI and machine vision, comprising:
collecting original images of bagged products on a production line;
transmitting the original image into a deep learning model for processing to obtain a first set of data for the bagged product, the first set comprising center-point coordinates, a front/back category and angle data;
processing the original image with a template matching algorithm to obtain a second set of data corresponding to the first, the second set likewise comprising center-point coordinates, a front/back category and angle data;
checking and calibrating the first set of data against the second to obtain an optimal set of data, which is taken as the bagged-product identification and positioning result;
and transmitting the identification and positioning result to a manipulator so that it can grab the bagged product, while displaying the result in real time on the host-computer (upper-computer) software interface.
In some embodiments, transmitting the original image into the deep learning model for processing to obtain the first set of data comprises:
feeding the original image into a pre-trained deep learning model (YOLO) for recognition;
and extracting the coordinate and category information of each identified non-stacked bagged product and converting it into center-point coordinates, a front/back category and an angle, thereby achieving preliminary positioning.
In some embodiments, the training process of the deep learning model YOLO includes:
capturing images of bagged products randomly placed on the production line with an industrial camera, and labeling the overall target and the local targets of each bagged product with a labeling tool to construct a data set, where the overall target covers the entire area of the bagged product and is divided into front and back classes, and the local targets are local feature regions on the product, with at least one such region on each of the front and back sides;
dividing the data set into a training set, a validation set and a test set according to a preset proportion;
training the deep learning model YOLO with the training set, judging model performance during training with the validation set, and then testing with the test set; when the test results accurately identify and locate the overall and local targets, a YOLO pre-trained model meeting the requirements is obtained.
In some embodiments, converting the coordinate and category information of each bagged product identified by the deep learning model YOLO into the first set of data through data extraction comprises: computing the center point of the overall target frame identified by YOLO as the center-point coordinates of the bagged product; taking YOLO's front-or-back result for the overall target as the product's front/back category; and connecting the center points of the overall target and a local target identified by YOLO, then adding a fixed compensation angle, to obtain the product's angle.
In some embodiments, target matching of the bagged product with a conventional template matching algorithm builds on the set of data extracted by the deep learning model YOLO. Specifically: template matching is performed within the overall target range that YOLO identified in the original image, a template is selected according to the front/back category reported by YOLO, and when the matching score meets a certain threshold, a second set of data generated by template matching is obtained.
In some embodiments, checking and calibrating the first and second sets of data to obtain the optimal set comprises: given the first set of data from the deep learning model YOLO and the second set from conventional template matching, if the distance between their center-point coordinates is below a certain threshold, their front/back categories agree, and their angles point the same way with an error below a certain threshold, the recognition result is judged correct, and the center-point coordinates, front/back category and angle from template matching are taken as the final bagged-product identification and positioning result. If the center-point distance is below the threshold and the front/back categories agree but the angles point in opposite directions with an error below a certain threshold, the template-matching direction is judged wrong while the YOLO recognition is correct; the center-point coordinates and front/back category recognized by YOLO, together with the template-matching angle rotated by 180 degrees, are then taken as the final result.
In some embodiments, the method further comprises: when the overall target identified by the deep learning model YOLO contains several local targets, several angles are obtained; if one of them points the same way as the template-matching angle with an error below a certain threshold, the center-point distance is below a certain threshold, and the front/back categories agree, the center-point coordinates, front/back category and angle from conventional template matching are taken as the final identification and positioning result. When template matching returns no match and the overall target identified by YOLO contains a single local target, the center-point coordinates, angle and front/back category from YOLO are taken as the final result.
In some embodiments, the template matching algorithm comprises gray-value-based template matching or shape-based template matching.
According to a second aspect, there is provided in one embodiment an identification and location system for a bagged product, comprising:
the acquisition module is used for acquiring original images of bagged products on the production line;
the AI identification and positioning module, used to transmit the original image into a deep learning model for processing to obtain a first set of data for the bagged product, the first set comprising center-point coordinates, a front/back category and angle data;
the conventional identification and positioning module, used to process the original image with a template matching algorithm to obtain a second set of data corresponding to the first, likewise comprising center-point coordinates, a front/back category and angle data;
the calibration optimization module, used to check and calibrate the first set of data against the second to obtain an optimal set of data, taken as the bagged-product identification and positioning result;
and the transmission and display module, used to transmit the identification and positioning result to the manipulator so that it can grab the bagged product, while displaying the result in real time on the host-computer software interface.
According to a third aspect, there is provided in one embodiment an identification and location device for a bagged product, comprising:
a dark box, used to provide a controlled environment for image data acquisition;
light sources, implemented as strip lights arranged on both sides of the bottom inside the dark box;
and an industrial camera, mounted at the top above the products to be photographed, used to capture the view of the products on the production line.
According to the bagged-product identification and positioning method of the embodiments, AI and machine vision are fused: a deep learning model recognizes and processes the captured original image of the bagged products to obtain a first set of data, a conventional template matching algorithm yields a corresponding second set, and the two sets are checked and calibrated against each other to obtain an optimal set of data, taken as the positioning result. The manipulator can then grab the bagged product according to that result and complete the subsequent process. Because the positioning result is judged with data obtained from the deep learning model, chaotic material can be identified and judged without occupying a large area, accurate data are transmitted to the downstream filling or execution manipulator, packaging of bagged products is automated quickly and accurately, and factory modernization is better realized.
Drawings
Fig. 1 is a schematic flow chart of an identification and positioning method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an identification and positioning system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an identification and positioning device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or transposed in ways apparent to one of ordinary skill in the art. The sequences in the specification and drawings therefore describe certain embodiments only and do not imply a required order unless such an order is expressly stated.
Ordinals such as "first" and "second" are used herein only to distinguish the described objects and carry no sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" in this application include both direct and indirect connections (couplings).
As noted in the background, on current production lines that place bagged products into given packaging containers, the incoming material must be regular and must not be overly disordered or stacked, so that the manipulator can smoothly pick up the bagged products and place them into the subsequent packaging containers in an orderly sequence.
Research shows that, in the automation field, applying an AI-plus-machine-vision identification and positioning method to the production line makes it possible to identify and judge chaotic material without increasing the floor area, transmit accurate data to the downstream manipulator, and improve working efficiency.
In the embodiments of the invention, AI and machine vision are fused: the captured original image of the bagged products is recognized and processed by a deep learning model to obtain a first set of data, a conventional template matching algorithm yields a corresponding second set, and the two sets are checked and calibrated to obtain an optimal set of data, which is taken as the positioning result. The manipulator can then grab the bagged product according to that result and complete the subsequent process, so that packaging of bagged products is automated quickly and accurately and factory modernization is better realized.
Referring to fig. 1, a flow chart of a method for identifying and positioning a bagged product is provided for the present embodiment, where the method integrates AI and machine vision, and includes:
Step 1: collect original images of the bagged products on the production line.
The bagged products arrive in a disordered state. They may be, for example, bagged cotton swabs or individually bagged masks, arriving stacked or non-stacked, front up or back up, and placed at arbitrary rotation angles.
In this embodiment, acquiring the original images of the bagged products on the production line comprises: building a dark box on the product-conveying line to provide a suitable environment for shooting video or pictures, with an industrial camera and light sources arranged inside in advance; when a number of disordered bagged products pass through the dark box on the line, the industrial camera shoots them, the video stream from the camera is acquired, and the original images of the bagged products are obtained after transcoding.
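By way of illustration, the acquisition step can be sketched as follows with OpenCV. The stream address is a hypothetical placeholder, since the patent does not name a camera interface or SDK; a real industrial camera would usually be driven through its vendor SDK.

    import cv2

    # Hypothetical stream address for the industrial camera inside the dark box;
    # this is an assumption, not specified by the patent.
    STREAM_URL = "rtsp://192.168.1.10/stream"

    def grab_frames(stream_url: str):
        """Yield decoded BGR frames (the "transcoded original images") from the camera stream."""
        cap = cv2.VideoCapture(stream_url)
        if not cap.isOpened():
            raise RuntimeError("cannot open camera stream")
        try:
            while True:
                ok, frame = cap.read()  # read() grabs and decodes one frame
                if not ok:
                    break
                yield frame
        finally:
            cap.release()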
Step 2: transmit the original image into a deep learning model for processing to obtain a first set of data for the bagged products, the first set comprising center-point coordinates, a front/back category and angle data.
In this embodiment, the transcoded original image is fed into a pre-trained deep learning model for recognition, covering both stacked and non-stacked bagged products. The coordinate and category information of each identified non-stacked product is extracted and converted into center-point coordinates, a front/back category and an angle to achieve preliminary positioning; the extracted information for all identified and located non-stacked products in each frame forms one set of data, the first set.
In this embodiment, the deep learning model is a pre-trained deep learning model YOLO, and the training process is as follows:
first, an industrial camera captures images of bagged products randomly placed (including stacked) on the production line, and the labeling tool labelImg is used to label the overall and local targets of the products in the non-stacked state, thereby constructing a data set;
then, the constructed data set is divided into a training set, a validation set and a test set in a certain proportion (a sketch of this division follows below);
finally, the deep learning model YOLO is trained with the training set, model performance during training is judged with the validation set, and the test set is used for testing; the test passes when the test results accurately identify and locate the overall and local targets, yielding a pre-trained YOLO model that meets the requirements.
In this embodiment, the overall target is a range covering the entire area of a non-stacked bagged product and is divided into front and back classes; a local target is a local feature region with prominent features on the non-stacked product, with, for example, at least one such region on each of the front and back sides.
In this embodiment, the coordinate and category information of each identified non-stacked bagged product is converted, through data extraction, into center-point coordinates, a front/back category and an angle to achieve preliminary positioning, as follows:
the data extraction specifically comprises: computing the center point of the overall target frame identified by YOLO as the center point of the non-stacked bagged product; taking YOLO's front-or-back result for the overall target as the product's front/back category; and connecting the center points of the overall target and a local target identified by YOLO, then adding a fixed compensation angle, to obtain the product's angle.
Step 3: process the original image with a template matching algorithm to obtain a second set of data corresponding to the first set, the second set comprising center-point coordinates, a front/back category and angle data.
In this embodiment, processing with the template matching algorithm can be understood as processing with a conventional vision algorithm, namely target matching of the bagged products in the non-stacked state. Specifically: template matching is performed within the overall target range that the deep learning model YOLO identified in the original image, and a template is selected according to the front/back category YOLO reported; a matching score that meets a certain threshold counts as a successful match, yielding a new set of center-point coordinates, front/back category and angle generated by template matching, i.e. the second set of data.
In this embodiment, template matching with the conventional vision algorithm serves to pin down the accurate target.
In this embodiment, the template matching algorithm includes gray-value-based template matching and shape-based template matching.
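A gray-value template-matching sketch restricted to the YOLO overall-target region might look as follows. The 0.8 score threshold and the template dictionary keyed by front/back class are assumptions; angle recovery via rotated templates, and the shape-based variant, are omitted for brevity.

    import cv2

    def match_in_roi(gray_image, roi, templates, front_back, score_thresh=0.8):
        """Gray-value template matching inside the YOLO overall-target ROI.

        `templates` maps 'front'/'back' to a grayscale template image (assumed
        naming); `score_thresh` stands in for the patent's "certain threshold".
        Returns (center, score) in full-image coordinates, or None on no match.
        Assumes the ROI is at least as large as the template.
        """
        x1, y1, x2, y2 = roi
        patch = gray_image[y1:y2, x1:x2]
        tmpl = templates[front_back]  # pick the template by the YOLO front/back class
        res = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val < score_thresh:
            return None  # "template matching result is 0": no usable match
        th, tw = tmpl.shape[:2]
        center = (x1 + max_loc[0] + tw / 2.0, y1 + max_loc[1] + th / 2.0)
        return center, max_val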
Step 4: check and calibrate the first set of data against the second set to obtain an optimal set of data, taken as the bagged-product identification and positioning result.
In this embodiment, the first and second sets of data are checked and calibrated against each other to obtain one optimal set of data that can be used as the final identification and positioning result, making the result accurate and reliable and improving efficiency. The optimal data can be understood as the set for a non-stacked product that is closest in distance, correct on front/back and most suitable in angle.
The process of checking calibration may specifically include:
Given the sets of data obtained from the deep learning model YOLO and from conventional template matching respectively, if the distance between their center-point coordinates is below a certain threshold, their angles point the same way with an error below a certain threshold, and their front/back categories agree, the recognition result is considered correct, and the center-point coordinates, front/back category and angle from template matching are taken as the final result.
If the center-point distance is below the threshold, the angles are opposite within a certain range, and the front/back categories agree, the template-matching direction is considered wrong while the YOLO recognition result is correct; the center-point coordinates and front/back category recognized by YOLO, together with the template-matching angle rotated by 180 degrees, are then taken as the final result.
The calibration of the two sets of data also covers other cases, for which miss-reduction optimization is needed. Specifically:
when the overall target identified by the deep learning model YOLO contains several local targets, several angles are obtained; if one of them points the same way as the template-matching angle with an error below a certain threshold, the center-point distance is below a certain threshold, and the front/back categories agree, the center-point coordinates, front/back category and angle from conventional template matching are taken as the final identification and positioning result.
When template matching returns no match and the overall target identified by YOLO contains a single local target, the center-point coordinates, angle and front/back category from YOLO are taken as the final result.
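The checking and calibration rules above can be condensed into the following sketch. The pixel and degree thresholds are assumed values standing in for the patent's "certain threshold", and the data layout (center, front/back class, angle) is a hypothetical representation.

    import math

    DIST_THRESH = 20.0   # px, max center-point distance (assumed value)
    ANGLE_THRESH = 10.0  # deg, max angle error (assumed value)

    def angle_diff(a, b):
        """Smallest absolute difference between two angles in degrees."""
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def calibrate(yolo_results, tm_result):
        """Arbitrate between YOLO poses and the template-matching pose.

        yolo_results: list of (center, front_back, angle), one entry per local
                      target (a single-element list in the simple case).
        tm_result:    (center, front_back, angle) from template matching, or None.
        Returns the optimal (center, front_back, angle), or None if no case applies.
        """
        if tm_result is None:
            # Fallback: template match failed and YOLO saw a single local target.
            return yolo_results[0] if len(yolo_results) == 1 else None
        tc, tfb, ta = tm_result
        for (yc, yfb, ya) in yolo_results:
            if math.dist(yc, tc) >= DIST_THRESH or yfb != tfb:
                continue
            if angle_diff(ya, ta) < ANGLE_THRESH:
                return tc, tfb, ta                    # both agree: trust template pose
            if angle_diff(ya, (ta + 180.0) % 360.0) < ANGLE_THRESH:
                return yc, yfb, (ta + 180.0) % 360.0  # template flipped: rotate 180 deg
        return None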
Step 5: transmit the identification and positioning result to a manipulator so that it can grab the bagged products, while displaying the result in real time on the host-computer software interface.
Finally, the identification and positioning result is transmitted to the manipulator, which grabs the bagged product at the corresponding position according to the result and places it into the packaging container.
In this embodiment, the bagged products are identified and judged by the combined AI and machine vision method, and accurate data are transmitted to the downstream filling or execution manipulator, improving the efficiency of the whole industrial process.
Referring to fig. 2, the present embodiment further provides an identification and location system for a bagged product, including:
the acquisition module 101 is used for acquiring original images of the bagged products on the production line.
The acquisition module 101 can capture the various products, generally bagged products, fed in from the feed opening of the production line. The products arrive in a disordered state and must be placed in order into the subsequent packaging boxes; they may be, for example, bagged cotton swabs or individually bagged masks, arriving stacked or non-stacked and placed at different rotation angles.
In this embodiment, acquiring the original images of the bagged products on the production line comprises: building a dark box on the product-conveying line to provide a suitable environment for shooting video or pictures, with an industrial camera and light sources arranged inside in advance; when a number of disordered bagged products pass through the dark box on the line, the industrial camera shoots them, the video stream from the camera is acquired, and the original images of the bagged products are obtained after transcoding.
The AI identification and positioning module 102 is used to transmit the original image into a deep learning model for processing to obtain a first set of data for the bagged product, the first set comprising center-point coordinates, a front/back category and angle data.
The AI identification and positioning module 102 feeds the transcoded original image into a pre-trained deep learning model for recognition, identifying both stacked and non-stacked bagged products; it extracts the coordinate and category information of each identified non-stacked product and converts it into center-point coordinates, a front/back category and an angle, achieving preliminary positioning. The extracted information for all identified and located non-stacked products in each frame forms one set of data, the first set.
In this embodiment, the deep learning model is a pre-trained deep learning model YOLO, and the training process is as follows:
first, an industrial camera captures images of bagged products randomly placed (including stacked) on the production line, and the labeling tool labelImg is used to label the overall and local targets of the products in the non-stacked state, thereby constructing a data set;
then, the constructed data set is divided into a training set, a validation set and a test set in a certain proportion;
finally, the deep learning model YOLO is trained with the training set, model performance during training is judged with the validation set, and the test set is used for testing; the test passes when the test results accurately identify and locate the overall and local targets, yielding a pre-trained YOLO model that meets the requirements.
In this embodiment, the overall target is a range covering the entire area of a non-stacked bagged product and is divided into front and back classes; a local target is a local feature region with prominent features on the non-stacked product, with, for example, at least one such region on each of the front and back sides.
In this embodiment, the coordinate and category information of each identified non-stacked bagged product is converted, through data extraction, into center-point coordinates, a front/back category and an angle to achieve preliminary positioning, as follows:
the data extraction specifically comprises: computing the center point of the overall target frame identified by YOLO as the center point of the non-stacked bagged product; taking YOLO's front-or-back result for the overall target as the product's front/back category; and connecting the center points of the overall target and a local target identified by YOLO, then adding a fixed compensation angle, to obtain the product's angle.
The conventional identification and positioning module 103 is used to process the original image with a template matching algorithm to obtain a second set of data corresponding to the first, the second set comprising center-point coordinates, a front/back category and angle data.
In this embodiment, the conventional identification and positioning module 103 processes the original image with the template matching algorithm as follows: template matching is carried out with a conventional vision algorithm, performing target matching on the bagged products in the non-stacked state. Specifically: template matching is performed within the overall target range that the deep learning model YOLO identified in the original image, a template is selected according to the front/back category YOLO reported, and a matching score that meets a certain threshold counts as a successful match, yielding a new set of center-point coordinates, front/back category and angle generated by template matching, i.e. the second set of data.
In this embodiment, the template matching algorithm includes gray-value-based template matching and shape-based template matching.
The calibration optimization module 104 is used to check and calibrate the first set of data against the second set to obtain an optimal set of data, taken as the bagged-product identification and positioning result.
The calibration optimization module 104 puts the first and second sets of data together and checks and calibrates them correspondingly to obtain one optimal set of data that can serve as the final identification and positioning result, making the result accurate and reliable and improving efficiency; the optimal data can be understood as the set for a non-stacked product that is closest in distance, correct on front/back and most suitable in angle.
The checking and calibration process specifically comprises: given the sets of data obtained from the deep learning model YOLO and from conventional template matching respectively, if the distance between their center-point coordinates is below a certain threshold, their angles point the same way with an error below a certain threshold, and their front/back categories agree, the recognition result is considered correct, and the center-point coordinates, front/back category and angle from template matching are taken as the final result.
If the center-point distance is below the threshold, the angles are opposite within a certain range, and the front/back categories agree, the template-matching direction is considered wrong while the YOLO recognition result is correct; the center-point coordinates and front/back category recognized by YOLO, together with the template-matching angle rotated by 180 degrees, are then taken as the final result.
When the overall target identified by the deep learning model YOLO contains several local targets, several angles are obtained; if one of them points the same way as the template-matching angle with an error below a certain threshold, the center-point distance is below a certain threshold, and the front/back categories agree, the center-point coordinates, front/back category and angle from conventional template matching are taken as the final identification and positioning result.
When template matching returns no match and the overall target identified by YOLO contains a single local target, the center-point coordinates, angle and front/back category from YOLO are taken as the final result.
The transmission and display module 105 is used to transmit the identification and positioning result to the manipulator so that it can grab the bagged products, while displaying the result in real time on the host-computer software interface.
The transmission and display module 105 transmits the final identification and positioning result to the manipulator, which grabs the bagged product at the corresponding position according to the result and places it into the packaging container; for example, after the steps above, the manipulator can grab the single non-stacked bagged product that is closest and most reasonably oriented and place it into the carton. Based on the optimal set of data from the calibration optimization module, the center-point coordinates, front/back category and angle are drawn onto the original image and finally displayed on the host-computer software interface.
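The effect drawing for the host-computer display can be sketched as follows; the marker sizes and colors are arbitrary choices, not specified by the patent.

    import cv2
    import math

    def draw_result(image, center, front_back, angle_deg, length=60):
        """Draw the final pose (center, front/back class, angle) on the original image."""
        cx, cy = int(center[0]), int(center[1])
        # Arrow from the center along the recognized orientation (image y grows down).
        ex = int(cx + length * math.cos(math.radians(angle_deg)))
        ey = int(cy - length * math.sin(math.radians(angle_deg)))
        cv2.circle(image, (cx, cy), 5, (0, 0, 255), -1)
        cv2.arrowedLine(image, (cx, cy), (ex, ey), (0, 255, 0), 2)
        cv2.putText(image, f"{front_back} {angle_deg:.1f} deg", (cx + 10, cy - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 0), 2)
        return image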
In the identification and positioning system for bagged products provided by this embodiment, under a chaotic (semi-structured) incoming-material scene, the image is first segmented with the YOLO deep learning algorithm to find the target area, and template matching with a conventional vision algorithm then pins down the accurate target; this software design improves the robustness and accuracy of the system.
Referring to fig. 3, the identification and positioning device for bagged products according to this embodiment provides a suitable data-acquisition environment for the identification and positioning, and comprises:
a dark box 200 for providing an environment for image data acquisition; in this embodiment it is a rectangular dark box made of aluminum profile.
Light sources 201, strip lights in this embodiment, arranged on both sides of the bottom of the dark box so that the incoming material is lit better and more evenly.
An industrial camera 202, mounted at the top above the products to be photographed, for capturing the view of the products on the production line; in this embodiment the camera carries a polarizer, so that videos or pictures of the incoming material are captured better.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (10)

1. An AI identification and positioning method for a bagged product, characterized by comprising:
collecting original images of bagged products on a production line;
transmitting the original image into a deep learning model for processing to obtain a first set of data for the bagged product, the first set comprising center-point coordinates, a front/back category and angle data;
processing the original image with a template matching algorithm to obtain a second set of data corresponding to the first, the second set likewise comprising center-point coordinates, a front/back category and angle data;
checking and calibrating the first set of data against the second to obtain an optimal set of data, taken as the bagged-product identification and positioning result;
and transmitting the identification and positioning result to a manipulator so that it can grab the bagged product, while displaying the result in real time on the host-computer software interface.
2. The AI identification and positioning method of claim 1, wherein transmitting the original image into the deep learning model for processing to obtain the first set of data comprises:
feeding the original image into a pre-trained deep learning model (YOLO) for recognition;
and extracting the coordinate and category information of each identified non-stacked bagged product and converting it into center-point coordinates, a front/back category and an angle, thereby achieving preliminary positioning.
3. The AI identification and positioning method of claim 2, wherein the training process of the deep learning model YOLO comprises:
capturing images of bagged products randomly placed on the production line with an industrial camera, and labeling the overall target and the local targets of each bagged product with a labeling tool to construct a data set, where the overall target covers the entire area of the bagged product and is divided into front and back classes, and the local targets are local feature regions on the product, with at least one such region on each of the front and back sides;
dividing the data set into a training set, a validation set and a test set according to a preset proportion;
training the deep learning model YOLO with the training set, judging model performance during training with the validation set, and testing with the test set; when the test results accurately identify and locate the overall and local targets, a YOLO pre-trained model meeting the requirements is obtained.
4. The AI identification and positioning method of claim 3, wherein converting the coordinate and category information of each bagged product identified by the deep learning model YOLO into the first set of data through data extraction comprises:
computing the center point of the overall target frame identified by YOLO as the center-point coordinates of the bagged product;
taking YOLO's front-or-back result for the overall target as the product's front/back category;
and connecting the center points of the overall target and a local target identified by YOLO, then adding a fixed compensation angle, to obtain the product's angle.
5. The AI identification and positioning method of claim 4, wherein target matching of the bagged product with a conventional template matching algorithm builds on the set of data extracted by the deep learning model YOLO, specifically: template matching is performed within the overall target range that YOLO identified in the original image, a template is selected according to the front/back category reported by YOLO, and when the matching score meets a certain threshold, a second set of data generated by template matching is obtained.
6. The AI identification and positioning method of claim 5, wherein checking and calibrating the first set of data against the second to obtain the optimal set comprises:
given the first set of data from the deep learning model YOLO and the second set from conventional template matching, if the distance between their center-point coordinates is below a certain threshold, their front/back categories agree, and their angles point the same way with an error below a certain threshold, the recognition result is judged correct, and the center-point coordinates, front/back category and angle from template matching are taken as the final bagged-product identification and positioning result;
and if the center-point distance is below the threshold, the front/back categories agree, and the angles are opposite with an error below a certain threshold, the template-matching direction is judged wrong while the YOLO recognition is correct, and the center-point coordinates and front/back category recognized by YOLO, together with the template-matching angle rotated by 180 degrees, are taken as the final result.
7. The AI identification and positioning method of claim 6, further comprising:
when the overall target identified by the deep learning model YOLO contains several local targets, several angles are obtained; if one of them points the same way as the template-matching angle with an error below a certain threshold, the center-point distance is below a certain threshold, and the front/back categories agree, the center-point coordinates, front/back category and angle from conventional template matching are taken as the final identification and positioning result;
and when template matching returns no match and the overall target identified by YOLO contains a single local target, the center-point coordinates, angle and front/back category from YOLO are taken as the final result.
8. The AI identification and positioning method of claim 1, wherein the template matching algorithm includes gray-value-based template matching or shape-based template matching.
9. An AI identification and location system for a bagged product, comprising:
the acquisition module is used for acquiring original images of bagged products on the production line;
the AI identification and positioning module, used to transmit the original image into a deep learning model for processing to obtain a first set of data for the bagged product, the first set comprising center-point coordinates, a front/back category and angle data;
the conventional identification and positioning module, used to process the original image with a template matching algorithm to obtain a second set of data corresponding to the first, likewise comprising center-point coordinates, a front/back category and angle data;
the calibration optimization module, used to check and calibrate the first set of data against the second to obtain an optimal set of data, taken as the bagged-product identification and positioning result;
and the transmission and display module, used to transmit the identification and positioning result to the manipulator so that it can grab the bagged product, while displaying the result in real time on the host-computer software interface.
10. An AI identification and positioning device for a bagged product, comprising:
a dark box for providing an environment for image data acquisition;
strip light sources arranged on both sides of the bottom inside the dark box;
and an industrial camera mounted at the top above the products to be photographed, for capturing the view of the products on the production line.
Application CN202110266376.7A, filed 2021-03-11 (priority 2021-03-11): AI identification positioning method, system and device for bagged product. Status: Pending. Publication: CN113111712A.

Priority Applications (1)

CN202110266376.7A - CN113111712A (en): AI identification positioning method, system and device for bagged product

Publications (1)

CN113111712A, published 2021-07-13

Family

ID=76711260

Family Applications (1)

CN202110266376.7A (Pending): AI identification positioning method, system and device for bagged product

Country Status (1)

CN: CN113111712A

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108982508A (en) * 2018-05-23 2018-12-11 江苏农林职业技术学院 A kind of plastic-sealed body IC chip defect inspection method based on feature templates matching and deep learning
CN109101966A (en) * 2018-06-08 2018-12-28 中国科学院宁波材料技术与工程研究所 Workpiece identification positioning and posture estimation system and method based on deep learning
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN111178250A (en) * 2019-12-27 2020-05-19 深圳市越疆科技有限公司 Object identification positioning method and device and terminal equipment
CN111167731A (en) * 2019-10-23 2020-05-19 武汉库柏特科技有限公司 Product sorting method, product sorting system and intelligent sorting robot
WO2020173036A1 (en) * 2019-02-26 2020-09-03 博众精工科技股份有限公司 Localization method and system based on deep learning
CN111767780A (en) * 2020-04-10 2020-10-13 福建电子口岸股份有限公司 AI and vision combined intelligent hub positioning method and system
CN112170233A (en) * 2020-09-01 2021-01-05 燕山大学 Small part sorting method and system based on deep learning


Non-Patent Citations (1)

Title
Xu Ge et al.: "Introduction to Big Data and Artificial Intelligence Applications" (《大数据与人工智能应用导论》), University of Electronic Science and Technology of China Press, pp. 141-142 *

Similar Documents

Publication Publication Date Title
US11276194B2 (en) Learning dataset creation method and device
CN113524194B (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
US7283661B2 (en) Image processing apparatus
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN108335331A (en) A kind of coil of strip binocular visual positioning method and apparatus
CN113610921A (en) Hybrid workpiece grabbing method, device and computer-readable storage medium
CN109034694B (en) Production raw material intelligent storage method and system based on intelligent manufacturing
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
CN113129383B (en) Hand-eye calibration method and device, communication equipment and storage medium
CN110712202A (en) Special-shaped component grabbing method, device and system, control device and storage medium
CN110756462B (en) Power adapter test method, device, system, control device and storage medium
CN117124302B (en) Part sorting method and device, electronic equipment and storage medium
CN111311691A (en) Unstacking method and system of unstacking robot
CN116061187B (en) Method for identifying, positioning and grabbing goods on goods shelves by composite robot
CN113313725B (en) Bung hole identification method and system for energetic material medicine barrel
Liu et al. Deep-learning based robust edge detection for point pair feature-based pose estimation with multiple edge appearance models
WO2024067006A1 (en) Disordered wire sorting method, apparatus, and system
CN112975957A (en) Target extraction method, system, robot and storage medium
CN116228854B (en) Automatic parcel sorting method based on deep learning
CN113111712A (en) AI identification positioning method, system and device for bagged product
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN115880220A (en) Multi-view-angle apple maturity detection method
WO2023082417A1 (en) Grabbing point information obtaining method and apparatus, electronic device, and storage medium
CN112288819B (en) Multi-source data fusion vision-guided robot grabbing and classifying system and method
WO2022150280A1 (en) Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination