CN115049860B - System based on feature point identification and capturing method


Info

Publication number: CN115049860B (granted publication of application CN202210669659.0A; earlier published as CN115049860A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 何志雄, 柳建雄, 王鹏
Applicant and current assignee: Guangdong Tiantai Robot Co Ltd
Legal status: Active (granted)

Classifications

    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/7515: Shifting the patterns to accommodate for positional errors
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06F 16/9027: Indexing; data structures therefor; trees
    • G06T 7/13: Edge detection


Abstract

The invention discloses a system and a grabbing method based on feature point identification. The grabbing system comprises: a collection end; an identification end, which stores the angle templates, receives the collected images, matches the collected images against the angle templates one by one, and judges whether a collected image shows the target object; if so, it selects the angle template matched with the collected image as the identification template, acquires the actual feature points of the target object from the image and the identification feature points from the identification template, and compares the actual feature points of the target object with the identification feature points of the identification template; and a plurality of mechanical arms. The aim of the invention is to bring the identification feature points of the identification template close to the actual feature points, so that the moving track of the mechanical arm and the control instruction for the grabbing point generated from the identification features of the identification template also conform to the positions of the actual feature points, improving the grabbing accuracy of the mechanical arm.

Description

System based on feature point identification and capturing method
Technical Field
The invention relates to the technical field of mechanical automation, and in particular to a system and a grabbing method based on feature point identification.
Background
A mechanical arm is generally controlled by a pre-programmed program: when it receives a grabbing instruction, it moves to a preset position according to parameters set by the program and then grabs the article. During grabbing, however, there may be a certain deviation between the grabbing point of the mechanical arm's clamping jaw and the grabbing point of the actual article, so the grabbing tool cannot make good contact with the article, stable grabbing cannot be guaranteed, and the article easily shakes or falls off while being grabbed and moved, which affects production efficiency. Meanwhile, during conveying, the angle of the article also influences the grabbing result: when the angle of the article deviates from the angle of the template in the database, the mechanical arm may miss the grab even if it moves to the preset position according to the original preset parameters, again affecting production efficiency.
Disclosure of Invention
The invention aims to provide a system and a grabbing method based on feature point identification which bring the identification feature points of the identification template close to the actual feature points, so that the moving track of the mechanical arm and the control instruction for the grabbing point, generated from the identification features of the identification template, conform to the positions of the actual feature points; this improves the grabbing accuracy of the mechanical arm and avoids missed grabs, deviation of the grabbing point, and similar situations.
In order to achieve this purpose, the invention adopts the following technical scheme: a grabbing system based on feature point identification, comprising:
the acquisition end, which is used for acquiring an image of an object and sending the acquired image to the identification end;
the identification end, which is used for storing a plurality of angle templates and receiving the acquired images; it matches the acquired images against the angle templates one by one and judges whether an acquired image shows the target object; if so, it selects the angle template matched with the acquired image as the identification template, acquires the actual feature points of the target object from the image and the identification feature points from the identification template, compares the actual feature points of the target object with the identification feature points of the identification template, and corrects and optimizes the identification template according to the comparison result to obtain the optimal identification template; it then generates the moving track of the mechanical arm and the control instruction for the grabbing point according to the optimal identification template and sends the control instruction to the mechanical arm;
and a plurality of mechanical arms, wherein each mechanical arm comprises a control end and a gripping tool, each gripping tool being different; the control ends are used for storing the tool templates of the gripping tools' grabbing points, receiving and analyzing the control instruction from the identification end, and judging the degree of coincidence between the current tool template and the optimal identification template according to the analysis result; if the degree of coincidence is larger than the preset range, the mechanical arm of the current tool template is selected as the output end and driven to execute the track movement and the grab; if the degree of coincidence is smaller than the preset range, the next control end is tried, until the degree of coincidence between the tool template in a control end and the optimal identification template is larger than the preset range.
Preferably, the identification terminal includes:
the template library module, which is used for storing angle template information, the angle template information comprising template maps of a plurality of angles; the angle template information comprises 360/n template maps, n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 by which 360 is exactly divisible; the template maps correspond to different placing angles of the articles;
the identification processing module, which is used for extracting an identification frame from the image information by using a One-Stage algorithm and displaying the acquired image in frame form;
simultaneously, matching and identifying the target object in the frame body, judging whether the target object exists in the current image, if so, matching the frame body where the target object is located with the template image in the template library module, and if the matching result is consistent, storing the characteristic information of the target object in the current frame body into the storage module;
and the storage module is used for storing the characteristic information of the target object in the acquired image.
Preferably, the identification processing module further comprises:
the gradient quantization submodule is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the acquired image to obtain identification characteristics corresponding to the acquired image;
and the extraction submodule is used for acquiring the identification features by taking the current angle as a list and storing the identification features in the storage module.
Preferably, the identification terminal further comprises:
the correction optimization module, which is used for extracting a target frame of the target object from the acquired image in sub-pixel form; it combines the identification features on the target frame and the remaining identification features into actual feature points according to a preset proportion and obtains the identification feature points corresponding to the actual feature points; it calculates the distance between each corresponding pair of actual and identification feature points and compares it with a preset distance value; if the distance is greater than the preset distance value, it acquires the number of actual feature points meeting the distance requirement and compares it with a preset number value; and if the number is greater than the preset number value, it substitutes the actual feature points and the identification feature points into the transformation matrix and performs pose correction optimization on the identification template to obtain the optimal identification template.
Preferably, the modification optimization module includes:
the target frame acquisition submodule, which is used for collecting the edge point set of the target object in the acquired image through an edge detection operator, processing the edge points of the edge point set to obtain the sub-pixel points corresponding to the edge points and their direction vectors, acquiring the corresponding sub-pixel point set and direction vector point set, namely the target frame, and sending the target frame to the association submodule;
the association submodule, which is used for combining the identification features on the target frame and the other identification features into actual feature points according to the proportion, and for finding the identification feature points corresponding to the actual feature points on the identification template; the association submodule acquires the distances between all the actual feature points and the corresponding identification feature points and judges whether each distance is larger than the preset distance value; if so, it acquires the number of actual feature points whose distance is larger than the preset distance value, judges whether this number meets the preset number value, and if so sends a correction instruction to the correction submodule;
and the correction submodule, which is used for receiving and analyzing the correction instruction, substituting the actual feature points and the identification feature points into the transformation matrix, and performing pose correction optimization on the identification template to obtain the optimal identification template.
A grabbing method based on feature point identification comprises the following steps:
collecting an image: collecting an image of a conveyed object, and sending the collected image to an identification end;
identifying the article and optimizing the template: identifying and matching the acquired image against the angle templates one by one and judging whether the acquired image shows the target object; if so, selecting the angle template matched with the acquired image as the identification template, acquiring the actual feature points of the target object from the image and the identification feature points from the identification template, comparing the actual feature points of the target object with the identification feature points of the identification template, and correcting and optimizing the identification template according to the comparison result to obtain the optimal identification template;
generating a control instruction: generating a moving track of the mechanical arm and a control instruction of the grabbing point according to the optimal recognition template, and sending the control instruction to the mechanical arm;
receiving a control instruction and grabbing the article: comparing the optimal identification template corresponding to the control instruction with the tool template corresponding to the grabbing tool, and judging the degree of coincidence between the current tool template and the optimal identification template; if the degree of coincidence is larger than the preset range, selecting the mechanical arm of the current tool template as the output end and driving it to execute the track movement and the grab; if the degree of coincidence is smaller than the preset range, trying the next control end, until the degree of coincidence between the tool template in a control end and the optimal identification template is larger than the preset range.
Preferably, the step of identifying the article and optimizing the template includes storing angle template information, the angle template information including template maps of a plurality of angles; the angle template information comprises 360/n template maps, n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 by which 360 is exactly divisible; the template maps correspond to different placing angles of the articles;
a One-Stage algorithm is used to extract the identification frame from the image information, and the acquired image is displayed in frame form; meanwhile, the target object in the frame body is matched and identified, and whether the target object exists in the current image is judged; if so, the frame body where the target object is located is matched against the template maps, and if the matching result is consistent, the feature information of the target object in the current frame body is stored.
Preferably, in the step of identifying the article and optimizing the template, first-layer and second-layer pyramid directional gradient quantization are performed on the acquired image. The gradient quantization is specifically: blur the image with a Gaussian kernel of size 7×7; calculate the gradients with the Sobel operator and, for the three-channel image, extract a single-channel maximum-gradient-magnitude image matrix by non-maximum suppression over the sum of squares of the X-direction and Y-direction gradients; obtain an angle image matrix from the X-direction and Y-direction gradient image matrices; quantize the 0°-360° range of the angle image matrix to integers 0 to 15 and fold these into 8 directions by taking the remainder modulo 8; take the pixels of the magnitude image matrix that exceed a threshold, form the quantized values in each such pixel's 3 × 3 neighbourhood into a histogram, and assign a direction to the pixel when more than 5 neighbours share it; finally, encode the direction index by bit shifting, from 00000001 to 10000000. The identification features corresponding to the acquired image are thus obtained, and are acquired and stored in a list keyed by the current angle.
Preferably, the step of identifying the article and optimizing the template includes extracting a target frame of the target object from the acquired image in sub-pixel form: the edge point set of the target object in the acquired image is collected through an edge detection operator, and the edge points are processed to obtain the corresponding sub-pixel points and their direction vectors, yielding the corresponding sub-pixel point set and direction vector point set, namely the target frame;
the identification features on the target frame and the remaining identification features are combined into actual feature points according to a preset proportion, and the identification feature points corresponding to the actual feature points are found on the identification template; the distance between each corresponding actual feature point and identification feature point is calculated and compared with the preset distance value; if the distance is greater than the preset distance value, the number of actual feature points meeting the distance requirement is acquired and judged against the preset number value; and if the number meets the preset number value, the actual feature points and the identification feature points are substituted into the transformation matrix and pose correction optimization is performed on the identification template to obtain the optimal identification template.
Preferably, in the step of identifying the article and optimizing the template, the transformation matrix comprises a translation matrix and a rotation matrix;

the coordinates of the actual feature points and the coordinates of the identification feature points are substituted into the following formula (1):

$$E(R, T) = \sum_{i} \big( (R\, p_i + T - q_i) \cdot n_i \big)^2 \tag{1}$$

wherein $R$ is the rotation matrix, $T = (x, y)^{\mathsf T}$ is the translation matrix, $q_i$ and $p_i$ are respectively the coordinates of the associated actual feature point and identification feature point, $n_i$ is the direction vector of the feature, and $i$ is a natural integer greater than 1;

the minimum deflection angle $r$ between the actual feature points and the identification feature points is then acquired and substituted into the following formula (2), giving the minimum (linearized) form of the rotation matrix $R$; formula (2) is:

$$R = \begin{pmatrix} \cos r & -\sin r \\ \sin r & \cos r \end{pmatrix} \approx \begin{pmatrix} 1 & -r \\ r & 1 \end{pmatrix} \tag{2}$$

the minimum value of the rotation matrix $R$ is substituted back into formula (1), resulting in formula (3):

$$E(r, x, y) = \sum_{i} \big( c_i\, r + n_{ix}\, x + n_{iy}\, y + (p_i - q_i) \cdot n_i \big)^2 \tag{3}$$

wherein $c_i = p_i \times n_i$;

the partial derivatives of formula (3) are taken and converted into linear equations to solve for the angle of minimum deflection $r$, the minimum horizontal offset $x$ and the minimum vertical offset $y$, with the following process (writing $d_i = (p_i - q_i) \cdot n_i$):

the partial derivative formulas are as follows:

$$\frac{\partial E}{\partial r} = 2 \sum_{i} c_i \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

$$\frac{\partial E}{\partial x} = 2 \sum_{i} n_{ix} \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

$$\frac{\partial E}{\partial y} = 2 \sum_{i} n_{iy} \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

the conversion into linear equations for the angle of minimum deflection $r$, the minimum horizontal offset $x$ and the minimum vertical offset $y$ is as follows:

$$\begin{pmatrix} \sum c_i^2 & \sum c_i n_{ix} & \sum c_i n_{iy} \\ \sum c_i n_{ix} & \sum n_{ix}^2 & \sum n_{ix} n_{iy} \\ \sum c_i n_{iy} & \sum n_{ix} n_{iy} & \sum n_{iy}^2 \end{pmatrix} \begin{pmatrix} r \\ x \\ y \end{pmatrix} = - \begin{pmatrix} \sum c_i d_i \\ \sum n_{ix} d_i \\ \sum n_{iy} d_i \end{pmatrix}$$
the technical scheme of the invention has the beneficial effects that: the method and the device have the advantages that the collected images are identified and matched through the angle templates, and if the collected images are matched with the image information in the angle templates, namely the angle positions of the articles in the angle templates are the same as the angle positions of the articles in the collected images, the collected images can be judged to be target objects.
The identification feature points of the identification template are corrected and optimized according to the actual feature points of the article in the actually acquired image, so that the identification feature points of the identification template approach the actual feature points. The corrected and optimized template serves as the optimal identification template, and the moving track of the mechanical arm and the control instruction for the grabbing point are generated from the corrected identification feature points. Because the identification feature points in the corrected template are based on the actual feature points, the generated moving track and grabbing-point control instruction also conform to the positions of the actual feature points; this improves the grabbing accuracy of the mechanical arm and avoids missed grabs, deviation of the grabbing point, and similar situations.
Meanwhile, each mechanical arm is provided with a control end, for receiving and analyzing the control instruction, and a different gripping tool. After the control instruction is received, the tool template of each gripping tool's grabbing points is compared with the optimal identification template, and the tool template with the highest matching degree is selected; that is, the grabbing points in that tool template fit the actual feature points of the article best. The gripping tool corresponding to this tool template is selected as the optimal gripping tool, and its mechanical arm is selected as the output end; the arm receives and analyzes the control instruction and executes the corresponding track movement and grabbing action. Screening out the optimal gripping tool by comparison with the optimal identification template gives a higher degree of fit between the gripping tool and the actual article, avoids shaking or falling off during grabbing, and ensures the grabbing effect of the mechanical arm.
Detailed Description
The technical solution of the present invention is further explained by the following embodiments.
The following examples are given by way of illustration only and are not to be construed as limiting the present invention.
In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as a communication between the interiors of two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art on a case-by-case basis.
A grabbing system based on feature point identification comprises:
the acquisition end, which is used for acquiring an image of an object and sending the acquired image to the identification end;
the identification end, which is used for storing a plurality of angle templates and receiving the acquired images; it matches the acquired images against the angle templates one by one and judges whether an acquired image shows the target object; if so, it selects the angle template matched with the acquired image as the identification template, acquires the actual feature points of the target object from the image and the identification feature points from the identification template, compares the actual feature points of the target object with the identification feature points of the identification template, and corrects and optimizes the identification template according to the comparison result to obtain the optimal identification template; it then generates the moving track of the mechanical arm and the control instruction for the grabbing point according to the optimal identification template and sends the control instruction to the mechanical arm;
and a plurality of mechanical arms, wherein each mechanical arm comprises a control end and a gripping tool, each gripping tool being different; the control ends are used for storing the tool templates of the gripping tools' grabbing points, receiving and analyzing the control instruction from the identification end, and judging the degree of coincidence between the current tool template and the optimal identification template according to the analysis result; if the degree of coincidence is larger than the preset range, the mechanical arm of the current tool template is selected as the output end and driven to execute the track movement and the grab; if the degree of coincidence is smaller than the preset range, the next control end is tried, until the degree of coincidence between the tool template in a control end and the optimal identification template is larger than the preset range.
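The control-end selection logic can be illustrated with a short sketch. The following minimal Python example assumes set-valued tool templates and an overlap ratio as the degree of coincidence; the names (Arm, match_degree, MATCH_THRESHOLD) and the 0.9 threshold are hypothetical, not taken from the patent.

```python
# Illustrative sketch only: the patent publishes no code, so all names and
# thresholds here are assumptions.
from dataclasses import dataclass

@dataclass
class Arm:
    name: str
    tool_template: set        # grabbing-point features of this arm's gripping tool

def match_degree(tool_template: set, optimal_template: set) -> float:
    """Fraction of optimal-template feature points covered by the tool template."""
    if not optimal_template:
        return 0.0
    return len(tool_template & optimal_template) / len(optimal_template)

MATCH_THRESHOLD = 0.9  # assumed stand-in for the "preset range"

def select_output_arm(arms, optimal_template):
    # Walk the control ends one by one until a tool template matches well enough.
    for arm in arms:
        if match_degree(arm.tool_template, optimal_template) > MATCH_THRESHOLD:
            return arm         # this arm becomes the output end
    return None                # no suitable gripping tool available
```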
When the mechanical arm receives a grabbing instruction, there may be a certain deviation between the grabbing point of the mechanical arm's grabbing tool and the grabbing point of the actual article, so that the grabbing tool cannot make good contact with the article; stable grabbing cannot be guaranteed, and the article easily shakes or falls off while being grabbed and moved, which affects production efficiency. Meanwhile, during conveying, the angle of the article also influences the grabbing result: when the angle of the article deviates from the angle of the template in the database, the mechanical arm may miss the grab even if it moves to the preset position according to the original preset parameters, again affecting production efficiency.
The collected image is identified and matched through the angle templates; if the collected image matches the image information in an angle template, that is, the angle position of the article in the angle template is the same as the angle position of the article in the collected image, the collected image can be judged to show the target object. According to historical angle position information of articles, all possible angle positions are recorded as angle templates; because the database of angle templates is large, it basically covers every angle position an article may take, so the identification end can accurately identify the article during conveying.
The identification feature points of the identification template are corrected and optimized according to the actual feature points of the article in the actually acquired image, so that the identification feature points of the identification template approach the actual feature points. The corrected and optimized template serves as the optimal identification template, and the moving track of the mechanical arm and the control instruction for the grabbing point are generated from the corrected identification feature points. Because the identification feature points in the corrected template are based on the actual feature points, the generated moving track and grabbing-point control instruction also conform to the positions of the actual feature points; this improves the grabbing accuracy of the mechanical arm and avoids missed grabs, deviation of the grabbing point, and similar situations.
Meanwhile, each mechanical arm is provided with a control end, for receiving and analyzing the control instruction, and a different gripping tool. After the control instruction is received, the tool template of each gripping tool's grabbing points is compared with the optimal identification template, and the tool template with the highest matching degree is selected; that is, the grabbing points in that tool template fit the actual feature points of the article best. The gripping tool corresponding to this tool template is selected as the optimal gripping tool, and its mechanical arm is selected as the output end; the arm receives and analyzes the control instruction and executes the corresponding track movement and grabbing action. Screening out the optimal gripping tool by comparison with the optimal identification template gives a higher degree of fit between the gripping tool and the actual article, avoids shaking or falling off during grabbing, and ensures the grabbing effect of the mechanical arm.
Preferably, the identification terminal includes:
the template library module, which is used for storing angle template information, the angle template information comprising template maps of a plurality of angles; the angle template information comprises 360/n template maps, n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 by which 360 is exactly divisible; the template maps correspond to different placing angles of the articles;
the identification processing module, which is used for extracting an identification frame from the image information by using a One-Stage algorithm and displaying the acquired image in frame form;
simultaneously, matching and identifying the target object in the frame body, judging whether the target object exists in the current image, if so, matching the frame body where the target object is located with the template image in the template library module, and if the matching result is consistent, storing the characteristic information of the target object in the current frame body into the storage module;
and the storage module is used for storing the characteristic information of the target object in the acquired image.
During transport the article may be deflected by a certain angle; therefore, in view of the various possible rotation angles of the article, 360°/n template maps are established as the template library, where n is a positive integer greater than 0 whose value can be adjusted according to the size of the actually transported article. In this embodiment, for a general article, n takes the value 1 so that the database of template maps is sufficiently large: 360 template maps are stored as comparison objects for the article. Meanwhile, angle information of articles in the historical database can be added as template maps to enrich the database.
As for making the template maps, each template map is down-sampled once according to the requirements of the system's matching algorithm: the first feature extraction is performed on the original template image, and the second feature extraction is performed on the down-sampled image, i.e. the template image halved in each dimension. First the target training image, namely the template, is acquired; rotation in units of 1 degree is realized by programming, giving templates at 360 consecutive angles (with a rotation unit of 2 degrees there would be 180 templates). The first and second template feature extractions are realized by a programmed algorithm; extracting the features of all 360 templates takes the host computer about 2-3 s. Template feature extraction sets threshold parameters used in the template image quantization algorithm and the template feature extraction algorithm. The programming basically adopts the OpenCV open-source interface, and no complex calculation is needed.
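As an illustration of this template-making step, here is a minimal Python sketch using OpenCV: templates are generated at a 1-degree rotation unit and each one is down-sampled once for the second pyramid level. The function and variable names are assumptions; the patent does not publish code.

```python
# A minimal sketch, assuming OpenCV. Rotation step and pyramid depth follow the
# description (1-degree unit, one downsampling); everything else is illustrative.
import cv2

def build_angle_templates(template_img, step_deg=1):
    h, w = template_img.shape[:2]
    center = (w / 2, h / 2)
    templates = {}
    for angle in range(0, 360, step_deg):     # 360/n template maps
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        # Rotation crops to the original canvas; padding is omitted for brevity.
        rotated = cv2.warpAffine(template_img, m, (w, h))
        level1 = rotated                       # first pyramid level: original size
        level2 = cv2.pyrDown(rotated)          # second level: halved per dimension
        templates[angle] = (level1, level2)
    return templates
```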
In the present application, the identification processing module comprises a tracking submodule and a matching submodule. The tracking submodule uses a One-Stage algorithm to extract the frame body from the image information and displays the object in frame form. The matching submodule matches and identifies the object in the frame body and judges whether the target object exists in the current image information; if so, it matches the frame body where the target object is located against the template maps in the template library module, and if the matching result is consistent, it stores the feature information of the object in the current frame body into the storage module.
In this application, the identification processing module further includes:
the gradient quantization submodule is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the acquired image to obtain identification characteristics corresponding to the acquired image;
and the extraction submodule is used for acquiring the identification features by taking the current angle as a list and storing the identification features in the storage module.
The gradient quantization submodule realizes gradient quantization in four steps:
1. First, blur the image with a Gaussian kernel of size 7×7;
2. Calculate the gradients with the Sobel operator; for the three-channel image, extract a single-channel maximum-gradient-magnitude image matrix by non-maximum suppression over the sum of squares of the X-direction and Y-direction gradients;
3. Obtain an angle image matrix from the X-direction and Y-direction gradient image matrices;
4. Quantize the 0°-360° range of the angle image matrix to integers 0 to 15, then fold these into 8 directions by taking the remainder modulo 8; take the pixels of the magnitude image matrix that exceed a threshold, form the quantized values in each such pixel's 3 × 3 neighbourhood into a histogram, and if more than 5 neighbours share one direction, assign that direction to the pixel; finally, encode the direction index by bit shifting, from 00000001 to 10000000.
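These four steps can be sketched in Python with OpenCV and NumPy as follows. The magnitude threshold, the 5-vote rule as implemented, and the helper names are assumptions; this illustrates the described procedure rather than reproducing the patent's implementation.

```python
# Hedged sketch of the four quantization steps (7x7 Gaussian blur, per-channel
# Sobel with channel-wise maximum selection, angle quantization to 16 then 8
# bins, 3x3 neighbourhood voting, one-hot bit encoding). Values are assumed.
import cv2
import numpy as np

def quantize_gradients(bgr, mag_thresh=30.0):
    blurred = cv2.GaussianBlur(bgr, (7, 7), 0)              # step 1
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)               # step 2: per channel
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
    mag = gx * gx + gy * gy                                  # squared magnitude
    best = np.argmax(mag, axis=2)                            # strongest channel
    rows, cols = np.indices(best.shape)
    gx1, gy1 = gx[rows, cols, best], gy[rows, cols, best]    # single-channel max
    mag1 = mag[rows, cols, best]
    angle = np.degrees(np.arctan2(gy1, gx1)) % 360.0         # step 3
    q16 = (angle * 16.0 / 360.0).astype(np.int32) % 16       # step 4: bins 0..15
    q8 = q16 & 7                                             # fold to 8 directions
    coded = np.zeros(q8.shape, np.uint8)
    strong = mag1 > mag_thresh ** 2                          # compare squared mags
    for r in range(1, q8.shape[0] - 1):
        for c in range(1, q8.shape[1] - 1):
            if not strong[r, c]:
                continue
            patch = q8[r - 1:r + 2, c - 1:c + 2].ravel()     # 3x3 neighbourhood
            votes = np.bincount(patch, minlength=8)
            if votes.max() >= 5:                             # majority direction
                coded[r, c] = 1 << int(votes.argmax())       # 00000001..10000000
    return coded
```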
Wherein the maximum-gradient-magnitude image matrix is calculated as follows:

$$\nabla I(x) = \frac{\partial \hat{C}}{\partial x}(x), \qquad \hat{C}(x) = \operatorname*{argmax}_{C \in \{R, G, B\}} \left\lVert \frac{\partial C}{\partial x}(x) \right\rVert$$

$$\operatorname{ori}(I, x) = \operatorname{ori}\big(\hat{C}(x)\big)$$

wherein $x$ represents a pixel position, $\partial C / \partial x$ is the gradient value at position $x$ on colour channel $C$, $\{R, G, B\}$ are the three colour channels, and $\operatorname{ori}()$ represents the gradient direction.
After gradient quantization, the identification features in the template map differ obviously from other pixel points in their pixel values. The process for extracting identification features in the present application is therefore as follows: traverse the maximum-gradient-magnitude image matrix and find the pixel point with the maximum gradient magnitude in each neighbourhood; once found, set the gradient magnitudes of all other pixel points in that neighbourhood to zero;
judge whether the gradient magnitude of the maximum pixel point in each neighbourhood is greater than the gradient magnitude threshold, and if so, mark that pixel point as an identification feature;
acquire the number of all identification features and judge whether it is greater than the number threshold; if so, add all the identification features to the feature set and store it in the configuration file; if not, judge whether an identification feature has at least one other identification feature within the distance threshold; if so, reject that identification feature together with the identification features within the distance threshold, and if not, store the identification feature in the storage module.
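A minimal sketch of this feature-selection procedure follows; the threshold values (amp_thresh, count_thresh, min_dist) are placeholders, not the patent's parameters.

```python
# Sketch: per-neighbourhood non-maximum suppression, an amplitude threshold,
# then the count check with distance-based thinning. Thresholds are assumed.
import numpy as np

def extract_features(mag, amp_thresh=60.0, count_thresh=64, min_dist=4):
    feats = []
    h, w = mag.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = mag[r - 1:r + 2, c - 1:c + 2]
            if mag[r, c] == patch.max() and mag[r, c] > amp_thresh:
                feats.append((r, c, mag[r, c]))  # local maximum above threshold
    if len(feats) > count_thresh:
        return feats                              # enough features: keep them all
    # otherwise thin out features that crowd within min_dist of one another
    kept = []
    for f in feats:
        if all((f[0] - g[0]) ** 2 + (f[1] - g[1]) ** 2 >= min_dist ** 2
               for g in kept):
            kept.append(f)
    return kept
```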
The identification features in the storage module are stored in groups, one group per angle. During matching, the matching submodule calls the identification features in the storage module and matches the identification features of each group against the frame body on the image. In the present application, whether the target object exists in the image is determined by way of a similarity calculation.
the similarity calculation formula in the application is as follows:
Figure GDA0004045628030000141
wherein, L is a frame body in the image, T represents a template drawing, c is the position of the template drawing in the input identification feature, P represents a neighborhood taking c as the center, r is an offset position, and Sori () represents a gradient amplitude;
Similarity calculation is carried out against the identification features of each of the 360 template maps, yielding 360 similarity scores; the maximum of these 360 scores is found and judged against the threshold. If it is greater than the threshold, the content in the input frame body is the target object; otherwise, it is not the target object.
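The matching loop over the 360 template maps might look as follows. The toy similarity() here reduces the measure above to a mean |cos| over paired orientations, and the 0.85 threshold and data layout are assumptions.

```python
# Minimal sketch of the matching loop described above; all values are assumed.
import numpy as np

def similarity(frame_ori, template_ori):
    """Mean |cos(delta)| between paired gradient orientations (radians)."""
    return float(np.mean(np.abs(np.cos(frame_ori - template_ori))))

def best_angle_match(frame_ori, templates, score_thresh=0.85):
    # templates: {angle_deg: orientation array}; score all 360 and keep the best.
    scores = {a: similarity(frame_ori, t) for a, t in templates.items()}
    best = max(scores, key=scores.get)
    if scores[best] > score_thresh:
        return best, scores[best]     # match: the frame body contains the target
    return None, scores[best]         # below threshold: not the target object
```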
Specifically, the identification terminal further includes:
the correction optimization module, which is used for extracting a target frame of the target object from the acquired image in sub-pixel form; it combines the identification features on the target frame and the remaining identification features into actual feature points according to a preset proportion and obtains the identification feature points corresponding to the actual feature points; it calculates the distance between each corresponding pair of actual and identification feature points and compares it with a preset distance value; if the distance is greater than the preset distance value, it acquires the number of actual feature points meeting the distance requirement and compares it with a preset number value; and if the number is greater than the preset number value, it substitutes the actual feature points and the identification feature points into the transformation matrix and performs pose correction optimization on the identification template to obtain the optimal identification template.
Meanwhile, the revision optimization module includes:
the target frame acquisition submodule, which is used for collecting the edge point set of the target object in the acquired image through an edge detection operator, processing the edge points of the edge point set to obtain the sub-pixel points corresponding to the edge points and their direction vectors, acquiring the corresponding sub-pixel point set and direction vector point set, namely the target frame, and sending the target frame to the association submodule;
the association submodule, which is used for combining the identification features on the target frame and the other identification features into actual feature points according to the proportion, and for finding the identification feature points corresponding to the actual feature points on the identification template; the association submodule acquires the distances between all the actual feature points and the corresponding identification feature points and judges whether each distance is larger than the preset distance value; if so, it acquires the number of actual feature points whose distance is larger than the preset distance value, judges whether this number meets the preset number value, and if so sends a correction instruction to the correction submodule;
and the correction submodule, which is used for receiving and analyzing the correction instruction, substituting the actual feature points and the identification feature points into the transformation matrix, and performing pose correction optimization on the identification template to obtain the optimal identification template.
In one embodiment, the implementation process of obtaining the target frame is as follows:
the method comprises the steps of collecting an edge point set of a target object in an image through a Canny operator, carrying out binary quadratic polynomial fitting on the edge point set, solving a binary quadratic polynomial through a facet model to obtain a Hessian matrix, solving the Hessian matrix to obtain a characteristic value and a characteristic vector of the edge point set, deriving the characteristic value through a Taylor expansion formula to obtain sub-pixels of the edge point set, and realizing target frame extraction of the target object. Collecting an edge point set of a target object through a Canny operator, then fitting a binary quadratic polynomial, solving a coefficient by using a facet model to obtain a Hessian matrix, solving a characteristic value and a characteristic vector, wherein the characteristic vector is a direction vector for identifying a characteristic point, deriving by using a Taylor expansion formula, combining the direction vector of the point to obtain a corresponding sub-pixel point, and circularly obtaining a corresponding sub-pixel point set and a direction vector point set to be stored at a corresponding position of a kdtree data structure body. By constructing a KDTree algorithm, the storage sequence of the sub-pixel point sets and the direction vector point sets in the kdTree data structure is associated with leaf nodes of the KDTree tree, namely the storage sequence of the original sub-pixels and the original direction vectors in the data structure is changed. In addition, the sub-pixel points of the edge are extracted in the application, the target object is extracted, the edge points of the sub-pixels can improve the definition of the edge, the extracted target object is more accurate, and the edge points or the feature points on the target frame can be more accurate.
In the embodiment of the invention, the identification features on the target frame and the other identification features are combined into the actual feature points in a preset proportion, the other identification features being identification features not on the target edge. The preset proportion reduces the time spent picking out the target-frame identification features and the other identification features, while a sufficient number of other identification features still ensures the accuracy of the template pose correction.
The distance between an actual feature point and its identification feature point is acquired as follows: take the tangent at the actual feature point, drop a perpendicular from the identification feature point onto this tangent, and calculate the length of the perpendicular; this length is the distance between the actual feature point and the identification feature point.
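Since the tangent at the actual feature point is perpendicular to its edge normal, the length of that perpendicular is simply the projection of the point difference onto the normal. A minimal sketch (the function name is hypothetical):

```python
# Sketch of the point-to-tangent distance described above.
import numpy as np

def feature_distance(actual_pt, normal, ident_pt):
    """actual_pt, ident_pt: (x, y); normal: unit edge normal at actual_pt.
    The perpendicular dropped from ident_pt onto the tangent line through
    actual_pt has length |(ident_pt - actual_pt) . normal|."""
    d = np.asarray(ident_pt, float) - np.asarray(actual_pt, float)
    return abs(float(np.dot(d, normal)))
```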
Then the distances between the one-to-one associated actual feature points and identification feature points are acquired, and each distance is judged against the distance threshold. Only when the distance is greater than the distance threshold can it be concluded that the pose of the target object differs greatly from the pose of the template map and that the template map's pose needs correcting. After all the actual feature points and identification feature points meeting the distance threshold are obtained, their number is counted, and the template is corrected only when the number meets the number threshold. Although the actual feature points and the identification feature points are associated in pose, an actual feature point may be a rotated edge point on the frame of the target, while the associated identification feature point is merely close in pose; such edge points cannot be made to coincide completely. Therefore, even when the corrected pose of the template map is close to the target object, feature point pairs of this type still meet the distance threshold. If the distance threshold alone were used to judge whether the template pose needs correcting, the pose of the template map would be corrected constantly, wasting the running resources of the system.
Wherein the transformation matrix comprises a translation matrix and a rotation matrix.

First, the coordinates of the actual feature points and the coordinates of the identification feature points are substituted into the following formula (1):

$$E(R, T) = \sum_{i} \big( (R\, p_i + T - q_i) \cdot n_i \big)^2 \tag{1}$$

wherein $R$ is the rotation matrix, $T = (x, y)^{\mathsf T}$ is the translation matrix, $q_i$ and $p_i$ are respectively the coordinates of the associated actual feature point and identification feature point, $n_i$ is the direction (feature) vector, and $i$ is a natural integer greater than 1.

The minimum deflection angle $r$ between the actual feature points and the identification feature points is then acquired and substituted into the following formula (2), giving the minimum (linearized) form of the rotation matrix $R$; formula (2) is:

$$R = \begin{pmatrix} \cos r & -\sin r \\ \sin r & \cos r \end{pmatrix} \approx \begin{pmatrix} 1 & -r \\ r & 1 \end{pmatrix} \tag{2}$$

The minimum value of the rotation matrix $R$ is substituted back into formula (1), resulting in formula (3):

$$E(r, x, y) = \sum_{i} \big( c_i\, r + n_{ix}\, x + n_{iy}\, y + (p_i - q_i) \cdot n_i \big)^2 \tag{3}$$

wherein $c_i = p_i \times n_i$.

The partial derivatives of formula (3) are taken and converted into linear equations to solve for the angle of minimum deflection $r$, the minimum horizontal offset $x$ and the minimum vertical offset $y$, with the following process (writing $d_i = (p_i - q_i) \cdot n_i$):

The partial derivative formulas are as follows:

$$\frac{\partial E}{\partial r} = 2 \sum_{i} c_i \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

$$\frac{\partial E}{\partial x} = 2 \sum_{i} n_{ix} \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

$$\frac{\partial E}{\partial y} = 2 \sum_{i} n_{iy} \big( c_i r + n_{ix} x + n_{iy} y + d_i \big) = 0$$

The conversion into linear equations for the angle of minimum deflection $r$, the minimum horizontal offset $x$ and the minimum vertical offset $y$ is as follows:

$$\begin{pmatrix} \sum c_i^2 & \sum c_i n_{ix} & \sum c_i n_{iy} \\ \sum c_i n_{ix} & \sum n_{ix}^2 & \sum n_{ix} n_{iy} \\ \sum c_i n_{iy} & \sum n_{ix} n_{iy} & \sum n_{iy}^2 \end{pmatrix} \begin{pmatrix} r \\ x \\ y \end{pmatrix} = - \begin{pmatrix} \sum c_i d_i \\ \sum n_{ix} d_i \\ \sum n_{iy} d_i \end{pmatrix}$$
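Under the formulas (1) to (3) as given above, the pose-correction solve reduces to a 3 × 3 linear system. A minimal NumPy sketch, with the array shapes as assumptions:

```python
# Hedged numeric sketch of the pose-correction step: build the 3x3 normal
# equations in (r, x, y) and solve them. Shapes and names are assumptions.
import numpy as np

def solve_pose_correction(p, q, n):
    """p: identification feature points, q: associated actual feature points,
    n: unit direction vectors; all arrays of shape (N, 2).
    Returns the minimum deflection angle r and the offsets (x, y)."""
    c = p[:, 0] * n[:, 1] - p[:, 1] * n[:, 0]    # c_i = p_i x n_i (2-D cross)
    d = np.einsum('ij,ij->i', p - q, n)          # d_i = (p_i - q_i) . n_i
    j = np.column_stack([c, n[:, 0], n[:, 1]])   # rows: (c_i, n_ix, n_iy)
    a = j.T @ j                                  # 3x3 system matrix
    b = -j.T @ d                                 # right-hand side
    r, x, y = np.linalg.solve(a, b)
    return r, x, y
```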
a grabbing method based on feature point identification comprises the following steps:
collecting an image: collecting an image of a conveyed object, and sending the collected image to an identification end;
identifying the article and optimizing the template: identifying and matching the acquired image against the angle templates one by one and judging whether the acquired image shows the target object; if so, selecting the angle template matched with the acquired image as the identification template, acquiring the actual feature points of the target object from the image and the identification feature points from the identification template, comparing the actual feature points of the target object with the identification feature points of the identification template, and correcting and optimizing the identification template according to the comparison result to obtain the optimal identification template;
generating a control instruction: generating a moving track of the mechanical arm and a control instruction of the grabbing point according to the optimal identification template, and sending the control instruction to the mechanical arm;
receiving a control instruction and grabbing the article: comparing the optimal identification template corresponding to the control instruction with the tool template corresponding to the grabbing tool, and judging the degree of coincidence between the current tool template and the optimal identification template; if the degree of coincidence is larger than the preset range, selecting the mechanical arm of the current tool template as the output end and driving it to execute the track movement and the grab; if the degree of coincidence is smaller than the preset range, trying the next control end, until the degree of coincidence between the tool template in a control end and the optimal identification template is larger than the preset range.
Preferably, the step of identifying the article and optimizing the template includes storing angle template information, the angle template information including template maps of a plurality of angles; the angle template information comprises 360/n template maps, n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 by which 360 is exactly divisible; the template maps correspond to different placing angles of the articles;
a One-Stage algorithm is used to extract the identification frame from the image information, and the acquired image is displayed in frame form; meanwhile, the target object in the frame body is matched and identified, and whether the target object exists in the current image is judged; if so, the frame body where the target object is located is matched against the template maps, and if the matching result is consistent, the feature information of the target object in the current frame body is stored.
Specifically, in the step of identifying the article and optimizing the template, first-layer and second-layer pyramid directional gradient quantization are performed on the acquired image. The gradient quantization is specifically: blur the image with a Gaussian kernel of size 7×7; calculate the gradients with the Sobel operator and, for the three-channel image, extract a single-channel maximum-gradient-magnitude image matrix by non-maximum suppression over the sum of squares of the X-direction and Y-direction gradients; obtain an angle image matrix from the X-direction and Y-direction gradient image matrices; quantize the 0°-360° range of the angle image matrix to integers 0 to 15 and fold these into 8 directions by taking the remainder modulo 8; take the pixels of the magnitude image matrix that exceed a threshold, form the quantized values in each such pixel's 3 × 3 neighbourhood into a histogram, and assign a direction to the pixel when more than 5 neighbours share it; finally, encode the direction index by bit shifting, from 00000001 to 10000000. The identification features corresponding to the acquired image are thus obtained, and are acquired and stored in a list keyed by the current angle.
Wherein the maximum-gradient-amplitude image matrix is computed as follows:

$$\hat{C}(x)=\operatorname*{arg\,max}_{C\in\{R,G,B\}}\left\lVert\frac{\partial C}{\partial x}\right\rVert$$

$$\operatorname{ori}(x)=\operatorname{ori}\bigl(\hat{C}(x)\bigr)$$

wherein x denotes a pixel position, ∂C/∂x is the gradient value at position x in channel C, {R, G, B} denote the three color channels, and ori() denotes the gradient direction.
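The quantization just described can be sketched as follows, assuming OpenCV and NumPy. The magnitude threshold is an illustrative value, and the 3×3 neighborhood voting step is omitted for brevity; the folding of the 16 angle bins into 8 directions by taking remainders follows the description above.

```python
import cv2
import numpy as np

def quantize_orientations(bgr, mag_thresh=30.0):
    # 7x7 Gaussian blur, then Sobel gradients per color channel.
    img = cv2.GaussianBlur(bgr.astype(np.float32), (7, 7), 0)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag2 = gx * gx + gy * gy                      # squared amplitude, 3 channels
    best = np.argmax(mag2, axis=2)                # channel with max amplitude
    rows, cols = np.indices(best.shape)
    gx1, gy1 = gx[rows, cols, best], gy[rows, cols, best]
    angle = (np.degrees(np.arctan2(gy1, gx1)) + 360.0) % 360.0
    q16 = (angle * 16.0 / 360.0).astype(np.int32) % 16   # integer bins 0..15
    q8 = q16 % 8                                  # fold into 8 directions
    strong = mag2[rows, cols, best] > mag_thresh ** 2
    return np.where(strong, (1 << q8).astype(np.uint8), 0)  # bit-shift encoding
```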
After the gradient quantization is performed, the identification features in the template drawing differ markedly in pixel value from the other pixel points; the feature identification process in the present application is therefore as follows: traversing the maximum-gradient-amplitude image matrix, finding the pixel point with the maximum gradient amplitude in each neighborhood, and setting the gradient amplitudes of all other pixel points in that neighborhood to zero;

judging whether the gradient amplitude of the maximum pixel point in each neighborhood is greater than a gradient amplitude threshold, and if so, marking that pixel point as an identification feature;

acquiring the number of all identification features and judging whether it is greater than a quantity threshold; if so, adding all identification features into a feature set and storing the feature set in the configuration file; if not, judging for each identification feature whether at least one other identification feature lies within a distance threshold, rejecting the crowded identification features if so, and storing the remaining identification features in the storage module.
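A sketch of this feature selection, with illustrative parameter values; the patent does not fix the neighborhood size, amplitude threshold, quantity threshold, or distance threshold.

```python
import numpy as np

def select_identification_features(mag, win=3, mag_thresh=50.0,
                                   count_thresh=63, dist_thresh=4.0):
    h, w = mag.shape
    feats = []
    # Keep only the maximum-amplitude pixel of each win x win neighborhood.
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = mag[y:y + win, x:x + win]
            dy, dx = np.unravel_index(int(np.argmax(patch)), patch.shape)
            if patch[dy, dx] > mag_thresh:
                feats.append((x + dx, y + dy))
    if len(feats) > count_thresh:
        return feats                       # enough features: store them all
    # Otherwise reject features that crowd within dist_thresh of another.
    pts = np.asarray(feats, dtype=np.float32)
    keep = []
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        if np.sum(d < dist_thresh) <= 1:   # only the point itself in range
            keep.append(feats[i])
    return keep
```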
The storage module stores the identification features in groups, one group per angle. During matching, the matching sub-module calls the identification features from the storage module and matches each group against the frame body on the image. In the present application, whether a transported part is present in the image is determined by way of similarity calculation;
the similarity calculation formula in the application is as follows:
$$\varepsilon(L,T,c)=\sum_{r\in P}\;\max_{t\in R(c+r)}\bigl\lvert\cos\bigl(\operatorname{ori}(T,r)-\operatorname{ori}(L,t)\bigr)\bigr\rvert$$

wherein L is the frame body in the image, T denotes the template drawing, c is the position of the template drawing within the input identification features, P denotes the neighborhood of feature offsets centered on c, r is an offset position, R(c+r) is a small neighborhood around c+r, and ori() denotes the quantized gradient direction;
and after similarity calculation is performed against the identification features of each of the 360 template drawings, 360 similarity scores are obtained; the maximum of these scores is found and judged against a threshold: if it is greater than the threshold, the content in the input frame body is the transported part; otherwise, the content in the frame body is not the target part.
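A sketch of this similarity test across the angle templates, using the quantized 8-direction bins from the earlier step; the one-pixel search neighborhood, the bin width of 22.5°, and the 0.85 threshold are assumptions for the example.

```python
import numpy as np

def similarity(frame_ori, template_feats):
    """frame_ori: 2-D array of direction bins 0..7 for the frame body;
    template_feats: list of (x, y, bin) identification features."""
    h, w = frame_ori.shape
    score = 0.0
    for x, y, tbin in template_feats:
        best = 0.0
        # Search a small neighborhood around the feature position.
        for ny in range(max(0, y - 1), min(h, y + 2)):
            for nx in range(max(0, x - 1), min(w, x + 2)):
                diff = (int(frame_ori[ny, nx]) - tbin) * (np.pi / 8.0)
                best = max(best, abs(np.cos(diff)))
        score += best
    return score / max(len(template_feats), 1)

def best_angle(frame_ori, angle_templates, thresh=0.85):
    # Score the frame against every angle template; the maximum decides
    # whether the frame body contains the transported part.
    scores = [similarity(frame_ori, t) for t in angle_templates]
    k = int(np.argmax(scores))
    return (k, scores[k]) if scores[k] > thresh else (None, scores[k])
```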
In the step of identifying the article and optimizing the template, the method comprises extracting a target frame of the target object from the acquired image at sub-pixel precision: collecting an edge point set of the target object in the acquired image with an edge detection operator, and processing the edge points to obtain the sub-pixel point corresponding to each edge point together with its direction vector, yielding a sub-pixel point set and a direction vector point set, namely the target frame;

the identification features on the target frame and the remaining identification features are combined into actual feature points according to a preset proportion, and the identification feature points corresponding to the actual feature points are found on the identification template; the distances between all actual feature points and their corresponding identification feature points are acquired, and it is judged whether these distances are greater than a preset distance value; if so, the number of actual feature points exceeding the preset distance value is acquired and judged against a preset number value; if that number is met, the actual feature points and the identification feature points are substituted into the change matrix, and pose correction optimization is performed on the identification template to obtain the optimal identification template.
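The distance gating that decides whether pose correction is triggered can be sketched as follows; the preset distance and number values are illustrative.

```python
import numpy as np

def needs_pose_correction(actual_pts, recog_pts, preset_dist=1.5, preset_count=20):
    # Distance between each associated actual/identification feature point pair.
    d = np.linalg.norm(np.asarray(actual_pts, dtype=np.float64)
                       - np.asarray(recog_pts, dtype=np.float64), axis=1)
    # Correct the template pose only when enough pairs exceed the preset distance.
    return int(np.sum(d > preset_dist)) >= preset_count
```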
Meanwhile, the step of identifying the article and optimizing the template further provides that the change matrix comprises a translation matrix and a rotation matrix;
the coordinates of the actual feature points and the coordinates of the identification feature points are substituted into the following formula (1):

$$E(R,T)=\sum_{i=1}^{N}\bigl((R\,p_i+T-q_i)\cdot n_i\bigr)^2 \tag{1}$$

wherein R is the rotation matrix, $T=(x,y)^{\mathsf T}$ is the translation matrix, $q_i$ and $p_i$ are the coordinates of the associated actual and identification feature points respectively, $n_i$ is a feature vector, N is the number of associated point pairs, and i is a natural integer greater than 1;
and then the minimum deflection angle r between the actual feature points and the identification feature points is acquired and substituted into the following formula (2) to obtain the minimized rotation matrix R, wherein formula (2) is:

$$R=\begin{pmatrix}\cos r&-\sin r\\ \sin r&\cos r\end{pmatrix}\approx\begin{pmatrix}1&-r\\ r&1\end{pmatrix} \tag{2}$$
the minimized rotation matrix R is substituted back into formula (1), resulting in formula (3):

$$E(r,x,y)=\sum_{i=1}^{N}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)^2 \tag{3}$$

wherein $c_i=p_i\times n_i$.
Partial derivatives of formula (3) are then taken and converted into a linear equation to solve for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y, as follows:

The partial derivatives (formula four) are:

$$\frac{\partial E}{\partial r}=2\sum_{i=1}^{N}c_i\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial x}=2\sum_{i=1}^{N}n_{i,x}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial y}=2\sum_{i=1}^{N}n_{i,y}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

Converted into a linear equation, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y satisfy:

$$\begin{pmatrix}\sum c_i^2&\sum c_i n_{i,x}&\sum c_i n_{i,y}\\ \sum c_i n_{i,x}&\sum n_{i,x}^2&\sum n_{i,x}n_{i,y}\\ \sum c_i n_{i,y}&\sum n_{i,x}n_{i,y}&\sum n_{i,y}^2\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=-\begin{pmatrix}\sum b_i c_i\\ \sum b_i n_{i,x}\\ \sum b_i n_{i,y}\end{pmatrix},\qquad b_i=(p_i-q_i)\cdot n_i$$
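Under this reconstruction the correction reduces to a 3×3 linear solve. A minimal NumPy sketch follows; in practice the small-angle linearization suggests iterating the solve a few times, which the patent does not spell out.

```python
import numpy as np

def refine_pose(p, q, n):
    """p: identification (template) feature points, q: actual feature points,
    n: unit feature vectors, all (N, 2) arrays. Returns the rotation matrix R
    and the translation (x, y) minimizing formula (3)."""
    p, q, n = (np.asarray(a, dtype=np.float64) for a in (p, q, n))
    c = p[:, 0] * n[:, 1] - p[:, 1] * n[:, 0]   # c_i = p_i x n_i (2-D cross)
    b = np.sum((p - q) * n, axis=1)             # (p_i - q_i) . n_i
    J = np.column_stack([c, n[:, 0], n[:, 1]])  # derivatives w.r.t. (r, x, y)
    r, x, y = np.linalg.solve(J.T @ J, -J.T @ b)
    R = np.array([[np.cos(r), -np.sin(r)],
                  [np.sin(r),  np.cos(r)]])
    return R, np.array([x, y])
```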
in the description herein, references to the description of the terms "embodiment," "example," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The technical principle of the present invention is described above in connection with specific embodiments. The description is made for the purpose of illustrating the principles of the invention and should not be construed in any way as limiting the scope of the invention. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive step, and these embodiments will fall within the scope of the present invention.

Claims (6)

1. A grasping system based on feature point identification is characterized by comprising:
the acquisition end is used for acquiring an image of an object and sending the acquired image to the recognition end;
the identification end is used for storing a plurality of angle templates, receiving the acquired images at the same time, identifying and matching the acquired images with the angle templates one by one, judging whether the acquired images are target objects or not, if so, selecting the angle templates matched with the acquired images as identification templates, acquiring actual characteristic points of the target objects from the images and identification characteristic points from the identification templates, and correcting and optimizing the identification templates according to comparison results by comparing the actual characteristic points of the target objects with the identification characteristic points of the identification templates to obtain the optimal identification templates; generating a moving track of the mechanical arm and a control instruction of the grabbing point according to the optimal identification template, and sending the control instruction to the mechanical arm;
the identification end further comprises a correction optimization module, and the correction optimization module comprises:
the target frame obtaining submodule collects an edge point set of a target object in the collected image through an edge detection operator, performs operation processing on edge points of the edge point set to obtain a corresponding sub-pixel point set and a direction vector point set, namely a target frame, and sends the target frame to the association submodule;
the association submodule is used for combining the identification features on the target frame and the remaining identification features into actual feature points according to a proportion, and finding out the identification feature points corresponding to the actual feature points on the identification template according to the actual feature points; the association submodule acquires the distances between all the actual feature points and the corresponding identification feature points, judges whether the distances between all the actual feature points and the corresponding identification feature points are greater than a preset distance value, if so, acquires the number of all the actual feature points greater than the preset distance value, judges whether the number of all the actual feature points meets a preset number value, and if so, sends a correction instruction to the correction submodule;
the correction submodule is used for receiving and analyzing a correction instruction, substituting the actual characteristic points and the identification characteristic points into the change matrix, and performing pose correction optimization on the identification template to obtain an optimal identification template;
the change matrix comprises a translation matrix and a rotation matrix;
the coordinates of the actual feature points and the coordinates of the recognition feature points are substituted into the following formula (1):

$$E(R,T)=\sum_{i=1}^{N}\bigl((R\,p_i+T-q_i)\cdot n_i\bigr)^2 \tag{1}$$

wherein R is the rotation matrix, $T=(x,y)^{\mathsf T}$ is the translation matrix, $q_i$ and $p_i$ are the coordinates of the associated actual and recognition feature points respectively, $n_i$ is a feature vector, N is the number of associated point pairs, and i is a natural integer greater than 1;
and then the minimum deflection angle r between the actual feature points and the recognition feature points is acquired and substituted into the following formula (2) to obtain the minimized rotation matrix R, wherein formula (2) is:

$$R=\begin{pmatrix}\cos r&-\sin r\\ \sin r&\cos r\end{pmatrix}\approx\begin{pmatrix}1&-r\\ r&1\end{pmatrix} \tag{2}$$
substituting the minimized rotation matrix R back into formula (1) yields formula (3):

$$E(r,x,y)=\sum_{i=1}^{N}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)^2 \tag{3}$$

wherein $c_i=p_i\times n_i$.
Partial derivatives of formula (3) are then taken and converted into a linear equation to solve for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y, as follows:

The partial derivatives (formula four) are:

$$\frac{\partial E}{\partial r}=2\sum_{i=1}^{N}c_i\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial x}=2\sum_{i=1}^{N}n_{i,x}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial y}=2\sum_{i=1}^{N}n_{i,y}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

Converted into a linear equation, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y satisfy:

$$\begin{pmatrix}\sum c_i^2&\sum c_i n_{i,x}&\sum c_i n_{i,y}\\ \sum c_i n_{i,x}&\sum n_{i,x}^2&\sum n_{i,x}n_{i,y}\\ \sum c_i n_{i,y}&\sum n_{i,x}n_{i,y}&\sum n_{i,y}^2\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=-\begin{pmatrix}\sum b_i c_i\\ \sum b_i n_{i,x}\\ \sum b_i n_{i,y}\end{pmatrix},\qquad b_i=(p_i-q_i)\cdot n_i$$
the system comprises a plurality of mechanical arms, a plurality of positioning devices and a plurality of positioning devices, wherein each mechanical arm comprises a control end and a gripping tool, each gripping tool is different, the control ends are used for storing tool templates of gripping points of the gripping tools, receiving and analyzing control instructions from an identification end at the same time, and judging the coincidence degree of the current tool template and the optimal identification template according to an analysis result; if the coincidence degree is larger than the preset range, selecting the mechanical arm of the current tool template as an output end, and driving the current mechanical arm to move and grab the track; and if the coincidence degree is smaller than the preset range, replacing the next control end until the coincidence degree of the tool template in the next control end and the optimal recognition template is larger than the preset range.
2. The feature point identification-based grasping system according to claim 1, wherein the identifying end includes:
the template library module is used for storing angle template information, and the angle template information comprises template drawings of a plurality of angles; the angle template information comprises 360/n template drawings, with n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 that divides 360 exactly; the template drawings correspond to different placing angles of the article;
the identification processing module is used for extracting identification frames from the image information by using a One-Stage algorithm and presenting the acquired image in frame form;
simultaneously, matching and identifying the target object in the frame body, judging whether the target object exists in the current image, matching the frame body where the target object is located with the template graph in the template library module if the target object exists, and storing the characteristic information of the target object in the current frame body into the storage module if the matching result is consistent;
and the storage module is used for storing the characteristic information of the target object in the acquired image.
3. The feature point identification-based grab system of claim 2, wherein the identification processing module further comprises:
the gradient quantization submodule is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the acquired image to obtain identification characteristics corresponding to the acquired image;
and the extraction submodule is used for acquiring the identification features by taking the current angle as a list and storing the identification features in the storage module.
4. A feature point recognition-based capture method applied to the feature point recognition-based capture system of any one of claims 1 to 3, comprising the following steps:
collecting an image: collecting images of the conveyed object, and sending the collected images to an identification end;
identifying the article and optimizing the template: the method comprises the steps of identifying and matching an acquired image and a plurality of angle templates one by one, judging whether the acquired image is a target object or not, if so, selecting the angle template matched with the acquired image as an identification template, acquiring actual characteristic points of the target object from the image and identification characteristic points from the identification template, and correcting and optimizing the identification template according to a comparison result by comparing the actual characteristic points of the target object with the identification characteristic points of the identification template to obtain an optimal identification template;
generating a control instruction: generating a moving track of the mechanical arm and a control instruction of the grabbing point according to the optimal recognition template, and sending the control instruction to the mechanical arm;
the step of identifying the article and optimizing the template comprises the steps of extracting a target frame of a target object from the collected image in a sub-pixel point mode; collecting an edge point set of a target object in the collected image through an edge detection operator, and performing operation processing on edge points of the edge point set to obtain a corresponding sub-pixel point set and a direction vector point set, namely a target frame;
combining the recognition features on the target frame and the remaining recognition features into actual feature points according to a proportion, and finding out the recognition feature points corresponding to the actual feature points on the recognition template according to the actual feature points; acquiring the distances between all actual feature points and the corresponding recognition feature points, judging whether the distances between all the actual feature points and the corresponding recognition feature points are greater than a preset distance value, if so, acquiring the number of all the actual feature points greater than the preset distance value, judging whether the number of all the actual feature points satisfies a preset number value, and if so, substituting the actual feature points and the recognition feature points into a change matrix to perform pose correction optimization on the recognition template to obtain an optimal recognition template;
the change matrix comprises a translation matrix and a rotation matrix;
the coordinates of the actual feature points and the coordinates of the recognition feature points are substituted into the following formula (1):

$$E(R,T)=\sum_{i=1}^{N}\bigl((R\,p_i+T-q_i)\cdot n_i\bigr)^2 \tag{1}$$

wherein R is the rotation matrix, $T=(x,y)^{\mathsf T}$ is the translation matrix, $q_i$ and $p_i$ are the coordinates of the associated actual and recognition feature points respectively, $n_i$ is a feature vector, N is the number of associated point pairs, and i is a natural integer greater than 1;
and then the minimum deflection angle r between the actual feature points and the recognition feature points is acquired and substituted into the following formula (2) to obtain the minimized rotation matrix R, wherein formula (2) is:

$$R=\begin{pmatrix}\cos r&-\sin r\\ \sin r&\cos r\end{pmatrix}\approx\begin{pmatrix}1&-r\\ r&1\end{pmatrix} \tag{2}$$
substituting the minimized rotation matrix R back into formula (1) yields formula (3):

$$E(r,x,y)=\sum_{i=1}^{N}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)^2 \tag{3}$$

wherein $c_i=p_i\times n_i$.
Partial derivatives of formula (3) are then taken and converted into a linear equation to solve for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y, as follows:

The partial derivatives (formula four) are:

$$\frac{\partial E}{\partial r}=2\sum_{i=1}^{N}c_i\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial x}=2\sum_{i=1}^{N}n_{i,x}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

$$\frac{\partial E}{\partial y}=2\sum_{i=1}^{N}n_{i,y}\bigl((p_i-q_i)\cdot n_i+r\,c_i+x\,n_{i,x}+y\,n_{i,y}\bigr)=0$$

Converted into a linear equation, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y satisfy:

$$\begin{pmatrix}\sum c_i^2&\sum c_i n_{i,x}&\sum c_i n_{i,y}\\ \sum c_i n_{i,x}&\sum n_{i,x}^2&\sum n_{i,x}n_{i,y}\\ \sum c_i n_{i,y}&\sum n_{i,x}n_{i,y}&\sum n_{i,y}^2\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=-\begin{pmatrix}\sum b_i c_i\\ \sum b_i n_{i,x}\\ \sum b_i n_{i,y}\end{pmatrix},\qquad b_i=(p_i-q_i)\cdot n_i$$
receiving the control instruction and grabbing the article: comparing the optimal recognition template corresponding to the control instruction with the tool template corresponding to the grabbing tool, and judging the coincidence degree of the current tool template with the optimal recognition template; if the coincidence degree is greater than the preset range, selecting the mechanical arm of the current tool template as the output end and driving that mechanical arm to execute the moving track and grab; and if the coincidence degree is smaller than the preset range, switching to the next control end until the coincidence degree of the tool template in a control end with the optimal recognition template is greater than the preset range.
5. The feature point identification-based grabbing method according to claim 4, wherein the step of identifying the article and optimizing the template comprises storing angle template information, the angle template information comprising template drawings of a plurality of angles; the angle template information comprises 360/n template drawings, with n ∈ {1, 2, 3, …, A}; A is a positive integer less than 360 that divides 360 exactly; the template drawings correspond to different placing angles of the article;
a One-Stage algorithm is used to extract identification frames from the image information, and the acquired image is presented in frame form; meanwhile, the target object in the frame body is matched and identified to judge whether the target object exists in the current image; if so, the frame body where the target object is located is matched with the template drawings, and if the matching result is consistent, the feature information of the target object in the current frame body is stored.
6. The feature point identification-based grabbing method according to claim 5, wherein the step of identifying the article and optimizing the template further comprises performing a first-layer pyramid directional gradient quantization and a second-layer pyramid directional gradient quantization on the acquired image; the gradient quantization is specifically: applying a 7×7 Gaussian blur kernel; computing gradients with the Sobel operator; for the three-channel image, extracting a single-channel maximum-gradient-amplitude image matrix by non-maximum suppression over the sum of squares of the X-direction and Y-direction gradients; obtaining an angle image matrix from the X-direction and Y-direction gradient image matrices; quantizing the 0°–360° range of the angle image matrix into integers 0 to 15, then taking remainders to fold these 16 bins into 8 directions; extracting the pixels whose amplitude exceeds a threshold in the amplitude image matrix, forming a histogram from the quantized image matrix over each such pixel's 3×3 neighborhood, and, when more than 5 neighbors share the same direction, assigning that direction to the pixel; finally bit-shift encoding the direction index as one of 00000001 through 10000000; acquiring the identification features corresponding to the acquired image; and acquiring and storing the identification features in a list keyed by the current angle.
CN202210669659.0A 2022-06-14 2022-06-14 System based on feature point identification and capturing method Active CN115049860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669659.0A CN115049860B (en) 2022-06-14 2022-06-14 System based on feature point identification and capturing method

Publications (2)

Publication Number Publication Date
CN115049860A CN115049860A (en) 2022-09-13
CN115049860B true CN115049860B (en) 2023-02-28

Family

ID=83161979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669659.0A Active CN115049860B (en) 2022-06-14 2022-06-14 System based on feature point identification and capturing method

Country Status (1)

Country Link
CN (1) CN115049860B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115213721B (en) * 2022-09-21 2022-12-30 江苏友邦精工实业有限公司 A upset location manipulator for automobile frame processing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002098B1 (en) * 2012-01-25 2015-04-07 Hrl Laboratories, Llc Robotic visual perception system
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN111203877A (en) * 2020-01-13 2020-05-29 广州大学 Climbing building waste sorting robot system, control method, device and medium
CN112070818A (en) * 2020-11-10 2020-12-11 纳博特南京科技有限公司 Robot disordered grabbing method and system based on machine vision and storage medium
CN114549420A (en) * 2022-01-26 2022-05-27 常州工程职业技术学院 Workpiece identification and positioning method based on template matching

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019084601A (en) * 2017-11-02 2019-06-06 キヤノン株式会社 Information processor, gripping system and information processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fast object localization and pose estimation in heavy clutter for robotic bin picking; MingYu Liu et al.; The International Journal of Robotics Research; 2012-05-08; Vol. 31, No. 8; pp. 1-4 *
Research on a vision-guided fast sorting system for industrial robots; Dang Hongshe et al.; 《电子器件》 (Electron Devices); 2017-04-21; Vol. 40, No. 02; pp. 481-485 *

Also Published As

Publication number Publication date
CN115049860A (en) 2022-09-13

Similar Documents

Publication Publication Date Title
CN107992881B (en) Robot dynamic grabbing method and system
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN106000904B (en) A kind of house refuse Automated Sorting System
CN110315525A (en) A kind of robot workpiece grabbing method of view-based access control model guidance
CN111604909A (en) Visual system of four-axis industrial stacking robot
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN114751153B (en) Full-angle multi-template stacking system
CN115049860B (en) System based on feature point identification and capturing method
CN111243017A (en) Intelligent robot grabbing method based on 3D vision
CN112561886A (en) Automatic workpiece sorting method and system based on machine vision
CN112560704B (en) Visual identification method and system for multi-feature fusion
CN115008093B (en) Multi-welding-point welding robot control system and method based on template identification
CN111428815B (en) Mechanical arm grabbing detection method based on Anchor angle mechanism
CN111968185A (en) Calibration board, nine-point calibration object grabbing method and system based on code definition
CN111523511A (en) Video image Chinese wolfberry branch detection method for Chinese wolfberry harvesting and clamping device
CN110110823A (en) Object based on RFID and image recognition assists in identifying system and method
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN110533717B (en) Target grabbing method and device based on binocular vision
CN117337691A (en) Pitaya picking method and picking robot based on deep neural network
CN115026823B (en) Industrial robot control method and system based on coordinate welding
CN114750155B (en) Object classification control system and method based on industrial robot
CN115533895A (en) Two-finger manipulator workpiece grabbing method and system based on vision
Tan et al. Unmanned Sorting Site Combined with Path Planning and Barcode Identification
CN115100416A (en) Irregular steel plate pose identification method and related equipment
CN113808205A (en) Rapid dynamic target grabbing method based on detection constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 528000, No. 43 Longliang Road, Longyan Industrial Zone, Longyan Village, Leliu Street, Shunde District, Foshan City, Guangdong Province

Patentee after: GUANGDONG TIANTAI ROBOT Co.,Ltd.

Address before: 528322 third floor, No. 6 complex building, Dadun section of Daliang 105 National Highway (plot 5-1), Shunde District, Foshan City, Guangdong Province (No. 9, No. 23, Honggang section of Guangzhou Zhuhai Highway)

Patentee before: GUANGDONG TIANTAI ROBOT Co.,Ltd.