CN115922738A - Electronic component grabbing method, device, equipment and medium in stacking scene - Google Patents


Info

Publication number
CN115922738A
Authority
CN
China
Prior art keywords
electronic component
information
grabbing
rgb
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310223272.7A
Other languages
Chinese (zh)
Other versions
CN115922738B (en)
Inventor
彭悦言
杨旭韵
温志庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ji Hua Laboratory
Original Assignee
Ji Hua Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ji Hua Laboratory filed Critical Ji Hua Laboratory
Priority to CN202310223272.7A priority Critical patent/CN115922738B/en
Publication of CN115922738A publication Critical patent/CN115922738A/en
Application granted granted Critical
Publication of CN115922738B publication Critical patent/CN115922738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of visual control, and in particular discloses a method, a device, equipment and a medium for grabbing electronic components in a stacking scene. The grabbing method comprises the following steps: acquiring a point cloud image and an RGB image; analyzing the RGB image with an instance detection model to acquire device envelope frame information and device type information; calling a device template based on the device type information to acquire the angle information required for grabbing the target electronic component; acquiring three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud image and the device envelope frame information; and controlling a mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information. The grabbing method splits the detection, identification and positioning of the electronic components into preliminary positioning, accurate positioning and three-dimensional pose calibration. It achieves high grabbing precision, and because point clouds carrying a huge amount of information need not be fed into a neural network, it effectively reduces the consumption of system computing resources and improves the grabbing speed.

Description

Electronic component grabbing method, device, equipment and medium in stacking scene
Technical Field
The application relates to the technical field of visual control, in particular to a method, a device, equipment and a medium for grabbing electronic components in a stacking scene.
Background
The robot, as a key technology of future-oriented intelligent manufacturing, has advantages such as strong controllability, high flexibility and flexible configuration, and is widely applied in fields such as part processing, cooperative transportation, object grabbing and part assembly.
With the continuous development of machine vision and sensor technology, robots based on computer vision can complete the full sequence of detection, identification, positioning, grabbing and stacking, and can flexibly sort materials in a disordered or semi-disordered state; they are therefore widely applied in fields such as part processing, cooperative transportation, object sorting and component assembly.
A circuit board is usually populated by inserting many electronic components. The components are produced and stacked in batches before insertion, and are generally placed randomly, in a disordered state, on a tray or a workbench, where stacking frequently occurs. Existing positioning and analysis pipelines for grabbing mechanical arms, which combine several traditional algorithms with a neural network, can only roughly determine the position of an electronic component; in particular, when components are occluded and stacked, the accuracy of their pose estimation is low and the demand on computing resources is high.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The application aims to provide an electronic component grabbing method, device, equipment and medium for a stacking scene that replace the existing algorithms, improving the recognition accuracy of the electronic component pose, improving the grabbing precision and reducing the demand on computing resources.
In a first aspect, the present application provides an electronic component grabbing method in a stacking scenario, which is used for positioning and grabbing an electronic component, and the method includes the following steps:
acquiring a point cloud image and an RGB image containing at least one electronic component;
analyzing the RGB image with an instance detection model to acquire device envelope frame information and device type information;
calling a device template based on the device type information, and acquiring the angle information required for grabbing the target electronic component by using the device template and the device envelope frame information;
acquiring three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud image and the device envelope frame information;
and controlling a mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
The electronic component grabbing method in the stacking scene first analyzes the RGB image with an instance detection model to quickly locate the approximate position of each electronic component in the RGB image and determine its device type information. A device template is then called to perform feature matching on the target electronic component, so that, starting from this preliminary location, the details of the target electronic component are accurately located. Finally, the actual pose of the target electronic component in the real scene is determined in combination with the point cloud image, so that the mechanical arm can be controlled to grab the electronic component accurately.
In the electronic component grabbing method in the stacking scene, between the step of analyzing the RGB image with the instance detection model to obtain the device envelope frame information and device type information and the step of calling the device template based on the device type information, the method further comprises:
analyzing the RGB image with the instance detection model to obtain a binary mask map of the electronic components;
and determining at least one target electronic component according to the mask areas in the binary mask map.
In the electronic component grabbing method in the stacking scene, the step of determining at least one target electronic component according to the mask areas in the binary mask map comprises:
acquiring the area of each mask from the binary mask map;
and comparing the area of each mask with a target area to determine at least one target electronic component, wherein the target area is a preset area value corresponding to the device type information.
By comparing the area of a mask with the target area, the grabbing method can quickly determine whether the electronic component corresponding to that mask is occluded and whether it is lying flat, and thus select the target electronic component corresponding to the best mask as the grabbing object for the current round.
In the electronic component grabbing method in the stacking scene, the RGB image is preprocessed with a filtering and noise-reduction algorithm.
In the electronic component grabbing method in the stacking scene, the point cloud image and the RGB image are generated by a structured-light 3D camera using structured-light sampling at a frequency matched to the image background color.
In this example, generating the point cloud image and the RGB image with a structured-light 3D camera whose sampling frequency is matched to the image background color ensures accurate point cloud acquisition for the point cloud image and ensures that the electronic components in the RGB image are clearly distinguishable from the color of the workbench.
In the electronic component grabbing method in the stacking scene, the step of acquiring the angle information required for grabbing the target electronic component by using the device template and the device envelope frame information comprises:
extracting a local RGB image of the target electronic component based on the device envelope frame information;
acquiring a homography matrix between the device template and the local RGB image based on feature matching;
and acquiring the angle information from the homography matrix.
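When the target component lies roughly flat, the homography between template and crop approximates a planar rigid transform, so the grab angle can be read off the matrix. A minimal sketch of that last step (the function name and the pure-rotation example matrix are illustrative, not from the patent):

```python
import math

def angle_from_homography(H):
    """Recover the in-plane rotation angle (degrees) from a homography,
    assuming the motion is close to a planar rigid transform: the
    upper-left 2x2 block then approximates a scaled rotation matrix."""
    return math.degrees(math.atan2(H[1][0], H[0][0]))

# Homography for a pure 30-degree rotation about the origin.
t = math.radians(30.0)
H = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]
print(angle_from_homography(H))  # 30.0 up to floating-point error
```

In practice the homography would come from feature matching (e.g. keypoint correspondences between the device template and the local RGB crop), and strong perspective distortion would make the rigid-transform assumption only approximate.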
In the electronic component grabbing method in the stacking scene, the step in which the instance detection model analyzes the RGB image to acquire the device envelope frame information and device type information comprises:
acquiring a feature map from the RGB image;
acquiring regions of interest from the feature map;
and performing region classification on the RGB image according to the regions of interest, generating the device envelope frame information and the device type information from the classification result.
In a second aspect, the present application further provides an electronic component grabbing device under a stacking scene, configured to locate and grab an electronic component, the device includes:
the acquisition module is used for acquiring a point cloud image and an RGB image containing at least one electronic component;
the detection module is used for analyzing the RGB image with an instance detection model to acquire device envelope frame information and device type information;
the first positioning module is used for calling a device template based on the device type information and acquiring the angle information required for grabbing a target electronic component by using the device template and the device envelope frame information;
the second positioning module is used for acquiring the three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud image and the device envelope frame information;
and the grabbing module is used for controlling a mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
The electronic component grabbing device in the stacking scene analyzes the RGB image with an instance detection model to quickly locate the approximate position of each electronic component in the RGB image and determine its device type information, then calls a device template to perform feature matching on the target electronic component so as to accurately locate its details starting from the preliminary location, and determines the actual pose of the target electronic component in the real scene in combination with the point cloud image, so as to control a mechanical arm to grab the electronic component accurately.
In a third aspect, the present application further provides an electronic device, comprising a processor and a memory, where the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, perform the steps of the method as provided in the first aspect.
In a fourth aspect, the present application also provides a storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the method as provided in the first aspect above.
In summary, the application provides an electronic component grabbing method, device, equipment and medium for a stacking scene. The grabbing method analyzes the RGB image with an instance detection model to quickly locate the approximate position of each electronic component in the RGB image and determine its device type information, then calls a device template to perform feature matching on the target electronic component so as to accurately locate its details starting from the preliminary location, determines the actual pose of the target electronic component in the real scene in combination with the point cloud image, and controls a mechanical arm to grab the electronic component accurately.
Drawings
Fig. 1 is a flowchart of an electronic component grabbing method in a stacking scenario according to an embodiment of the present application.
FIG. 2 is an originally captured RGB image in some embodiments of the present application.
FIG. 3 is an RGB image processed by the instance detection model in some embodiments of the present application.
FIG. 4 is a schematic diagram of a feature matching process in some embodiments of the present application.
Fig. 5 is a schematic structural diagram of an electronic component grabbing device in a stacking scenario provided in the embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 201. an acquisition module; 202. a detection module; 203. a first positioning module; 204. a second positioning module; 205. a grabbing module; 301. a processor; 302. a memory; 303. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In a first aspect, referring to fig. 1, some embodiments of the present application provide an electronic component grabbing method in a stacking scenario, where the method is used to position and grab an electronic component, and includes the following steps:
s1, acquiring a point cloud picture and an RGB picture at least comprising one electronic component;
s2, analyzing the RGB image by using an example detection model to acquire device envelope frame information and device type information;
s3, calling a device template based on the device type information, and acquiring angle information required by grabbing the target electronic component by using the device template and the device envelope frame information;
s4, acquiring three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud picture and the device envelope frame information;
and S5, controlling the mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
Specifically, in the embodiments of the application, the electronic components are workpieces that need to be grabbed by a mechanical arm and transferred or stacked. When the point cloud image and the RGB image contain a plurality of electronic components, these may be components of the same type or components of different types and sizes. The electronic component grabbing method in the stacking scene aims to quickly determine the pose and the corresponding angle information of a target electronic component, so as to determine the specific positional relationship between the target component and the mechanical arm in the real environment and generate the corresponding control command for the mechanical arm to grab it. Steps S1 to S4 therefore amount to one round of pose analysis and identification for the target electronic component, where the target electronic component is the one or more components to be grabbed in that round; accordingly, the target electronic component in steps S3 to S5 can be a single component or several.
It should be noted that the point cloud image and the RGB image are registered, that is, the calibration relationship between them is known. They may be acquired by a single device capable of both point cloud and image acquisition, or by a separate point cloud acquisition device and image acquisition device whose calibration relationship has been established. Correspondingly, the acquisition equipment may be mounted on the mechanical arm or fixed at a specific position in the real scene.
It should be noted that the method of the embodiments of the application mainly performs pose analysis and identification of the target electronic component through image analysis. The electronic components in the RGB image are therefore placed in an environment with a significant color difference, such as a workbench whose color differs strongly from that of the components, on which the components are scattered or stacked; step S1 then amounts to shooting a point cloud image and an RGB image of the workbench from a specific angle (e.g., a viewing angle perpendicular to the workbench).
More specifically, the instance detection model is a pre-trained image classification model configured to classify the image according to the difference between the background color (the workbench color) and the foreground colors (the electronic component colors), so as to determine the number and distribution of electronic components in the RGB image and thereby achieve their preliminary positioning. The preliminary positioning process comprises: segmenting the electronic components based on the difference between foreground and background colors and on the preset outlines of the components (such as a quadrangle or a set of quadrangles) to generate device envelope frame information enclosing the different components, and determining the device type information of the component within each piece of envelope frame information by traversing and analyzing the foreground color shape and outline inside it.
More specifically, in the embodiments of the application, steps S1 to S2 achieve the preliminary positioning of the electronic components. This is equivalent to matching the foreground colors in the RGB image against electronic component instances of different outlines to roughly locate each component's contour, including matching in various combined forms (such as the various stacking configurations of the components), so that after each contour object is delimited, the corresponding device envelope frame information is generated and the corresponding device type information determined. After the preliminary positioning is completed, the method proceeds to steps S3 to S4 to accurately position the target electronic component.
More specifically, electronic components with different device type information have different image features, such as outline, size, color, texture and character markings. The device template in step S3 is a standard template image corresponding to the device type information and carries one or more of these image features, and the image of the electronic component in the RGB image exhibits features that match the corresponding device template. Step S3 can therefore determine the posture of the target electronic component in the RGB image from the matching relationship between the device template and the image features of the target component within the device envelope frame information, and hence its horizontal orientation angle on the workbench, from which the angle information required for grabbing is calculated. The angle information guides the mechanical arm to rotate by the corresponding angle so as to accurately complete the grabbing of the component in the target stacking scene.
More specifically, the device templates are stored in a device database. After the device type information of the electronic component is determined in step S2, the corresponding device template can be fetched directly from the device database in step S3 using the feature code of the device type information. The device template may be a feature image, a two-dimensional model or a three-dimensional model of the electronic component corresponding to the device type information.
More specifically, in the embodiments of the application, the target electronic component is preferably one whose surface is not occluded, such as a component on the uppermost layer of a stack, so that step S3 can smoothly perform feature matching with the device template and obtain accurate angle information.
More specifically, step S3 accurately determines the posture of the target electronic component in the RGB image by means of the device template, that is, its planar two-dimensional state (the planar two-dimensional coordinates of each of its points). Since different components have different thicknesses and different stacking situations also affect their height, the method additionally performs step S4 to analyze the depth of the target component and determine the three-dimensional coordinate information of each of its contour points, ensuring that the mechanical arm can subsequently grab it accurately. Because the RGB image is registered with the point cloud image, once the position of the target component in the RGB image is determined, the method fills in the z value of each pixel using the depth value of the matching point cloud (the depth perpendicular to the RGB image plane) to form the three-dimensional coordinate information.
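The z-filling described above can be sketched as follows, assuming the depth map is already pixel-aligned with the RGB image (the function name and data are illustrative):

```python
def box_to_xyz(depth_map, box):
    """Lift the pixels inside a device envelope box to 3D coordinates,
    taking z from the aligned depth map (depth perpendicular to the image)."""
    x0, y0, x1, y1 = box  # half-open pixel ranges [x0, x1) and [y0, y1)
    return [(x, y, depth_map[y][x])
            for y in range(y0, y1)
            for x in range(x0, x1)]

# Toy 3x3 depth map: the component occupies the lower-right 2x2 block.
depth = [[0.0, 0.0, 0.0],
         [0.0, 0.5, 0.5],
         [0.0, 0.5, 0.5]]
pts = box_to_xyz(depth, (1, 1, 3, 3))
print(pts[0])  # (1, 1, 0.5)
```

A real pipeline would also convert pixel coordinates to metric x/y using the camera intrinsics, which are omitted here.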
More specifically, after the three-dimensional coordinate information is determined, the method can accurately determine the pose relationship between the target electronic component and the end of the mechanical arm in the real scene from the three-dimensional coordinate information and the angle information by means of hand-eye calibration, and thus generate the corresponding mechanical arm control command to complete the grabbing operation in the target stacking scene.
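Applying the hand-eye calibration result then amounts to transforming the grasp point from the camera frame into the robot-base frame with a homogeneous 4x4 matrix; a minimal sketch (the translation-only matrix is a made-up example, not a real calibration result):

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform (e.g. camera-to-base from
    hand-eye calibration) to a 3D point, returning the transformed point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Hypothetical calibration: translate the camera frame by (0.5, 0.0, 0.2)
# meters with no rotation.
T_cam_to_base = [
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.2],
    [0.0, 0.0, 0.0, 1.0],
]
p_base = transform_point(T_cam_to_base, (0.1, 0.2, 0.05))
print(p_base)  # approximately (0.6, 0.2, 0.25)
```

The grab angle from step S3 would similarly be rotated into the base frame before being sent to the arm controller.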
The electronic component grabbing method in the stacking scene of the embodiments of the application analyzes the RGB image with an instance detection model to quickly locate the approximate position of each electronic component in the RGB image and determine its device type information, then calls a device template to perform feature matching on the target electronic component so as to accurately locate its details starting from the preliminary location, determines the actual pose of the target electronic component in the real scene in combination with the point cloud image, and controls the mechanical arm to grab the component accurately.
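Steps S1 to S5 can be sketched as a single pipeline. Every class and function name below is a hypothetical stand-in for the patent's components (instance detection model, device template library, registered point cloud, robot arm), with dummy return values, so the control flow rather than any real algorithm is what is shown:

```python
class Template:
    """Stand-in for a device template; match_angle would run feature matching."""
    def match_angle(self, rgb_image, box):
        return 30.0  # dummy grab angle in degrees

class PointCloud:
    """Stand-in for the point cloud registered with the RGB image."""
    def lookup(self, box):
        return (0.10, 0.20, 0.05)  # dummy 3D coordinates in meters

class Arm:
    """Stand-in for the mechanical arm controller."""
    def grab(self, xyz, angle):
        self.last_command = (xyz, angle)

def grab_target(point_cloud, rgb_image, detector, templates, arm):
    # S2: preliminary positioning - envelope boxes and device types.
    detections = detector(rgb_image)            # [(box, device_type), ...]
    box, device_type = detections[0]            # pick one target
    # S3: accurate positioning - template matching yields the grab angle.
    angle = templates[device_type].match_angle(rgb_image, box)
    # S4: 3D pose - fill in depth from the registered point cloud.
    xyz = point_cloud.lookup(box)
    # S5: command the arm.
    arm.grab(xyz, angle)
    return xyz, angle

arm = Arm()
detector = lambda img: [((0, 0, 8, 8), "capacitor")]
result = grab_target(PointCloud(), "rgb", detector, {"capacitor": Template()}, arm)
print(result)  # ((0.1, 0.2, 0.05), 30.0)
```

The design point the patent stresses is visible in the signature: only the small boxed region of the point cloud is consulted (S4), so the full cloud never enters a neural network.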
In some preferred embodiments, between the step of analyzing the RGB image with the instance detection model to obtain the device envelope frame information and device type information and the step of calling the device template based on the device type information, the method further comprises the following steps:
S2A, analyzing the RGB image with the instance detection model to obtain a binary mask map of the electronic components;
and S2B, determining at least one target electronic component according to the mask areas in the binary mask map.
Specifically, each mask in the binary mask map corresponds to the pixels occupied by an electronic component in the RGB image, and analyzing the mask areas in the binary mask map reveals how completely the different components are exposed in the RGB image.
More specifically, as noted above, the instance detection model performs detection by classifying foreground and background colors. After the device envelope frame information is generated, the model can semantically segment the pixels within each envelope frame according to the device type information to obtain the mask associated with that frame; collecting the masks of all envelope frames yields the binary mask map.
More specifically, in some embodiments, while generating the binary mask map, the method may additionally group the masks by device type information to produce binary mask maps for different device types, or group them by mask form to produce binary mask maps for different degrees of occlusion, and use these groupings as the basis for subsequently determining the target electronic component, thereby reducing the amount of computation in that determination.
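The per-mask bookkeeping described here is simple to express. In this sketch a mask is a 2D grid of 0/1 values, and the grouping by device type mirrors the optional classification above (all names and data are illustrative):

```python
def mask_areas(masks):
    """Count the foreground pixels of each instance mask (a 2D 0/1 grid);
    a small area relative to the device's template area suggests occlusion."""
    return [sum(sum(row) for row in m) for m in masks]

def group_by_type(masks, types):
    """Split instance masks into per-device-type binary mask maps."""
    groups = {}
    for m, t in zip(masks, types):
        groups.setdefault(t, []).append(m)
    return groups

m1 = [[1, 1], [1, 0]]   # 3 visible pixels
m2 = [[1, 0], [0, 0]]   # 1 visible pixel - likely occluded
print(mask_areas([m1, m2]))  # [3, 1]
groups = group_by_type([m1, m2], ["capacitor", "resistor"])
print(sorted(groups))        # ['capacitor', 'resistor']
```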
In some preferred embodiments, the step of determining at least one target electronic component according to the mask areas in the binary mask map comprises:
S2B1, acquiring the area of each mask from the binary mask map;
and S2B2, comparing the area of each mask with a target area to determine at least one target electronic component, wherein the target area is a preset area value corresponding to the device type information.
Specifically, as noted above, the area of a mask reflects how completely the corresponding electronic component is exposed in the RGB image. The grabbing method can therefore compare the mask area with the target area to quickly determine whether the corresponding component is occluded and whether it is lying flat, and select the target electronic component corresponding to the best mask as the grabbing object for the current round.
It should be noted that the closer the ratio of a mask's area to the target area is to 1, the better the placement state of the corresponding electronic component.
In some preferred modes, a single target electronic component is selected.
In some preferred modes, step S2B2 includes:
S2B21, calculating the ratio of each mask's area to the target area, and selecting the electronic components corresponding to the preset number of masks (preferably three) whose area ratios are closest to 1 as the best candidates;
S2B22, extracting, based on each mask, the point cloud set corresponding to the candidate in the point cloud image, and calculating the OBB (oriented bounding box) and the geometric center of that point cloud set;
S2B23, for each candidate, calculating from the OBB and the geometric center the largest plane of the point cloud, the sealing cone of the suction cup, and the tangential force, radial force and closure of the OBB, so as to generate a seal score;
and S2B24, scoring the area ratio, the geometric center depth and the seal quality, weighting them by preset weights to calculate a grabbing score for each candidate, and taking the candidate with the highest grabbing score as the target electronic component.
Specifically, in this embodiment, a multi-condition state analysis is performed on the several best candidates, so that the electronic component in the best placement state on the workbench is chosen as the grabbing object, ensuring that the mechanical arm grabs it accurately. The state analysis integrates mask data representing the two-dimensional state with point cloud data representing the three-dimensional state, which effectively improves the grabbing precision in the target stacking scene.
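The weighted selection of S2B24 can be sketched as below. The weights, the normalization of the depth term and the dictionary field names are illustrative assumptions, not values from the patent:

```python
def grab_score(candidate, weights=(0.4, 0.3, 0.3)):
    """Weighted score over area ratio, center depth and seal quality.
    All three terms are mapped into [0, 1] so the weights are comparable."""
    w_area, w_depth, w_seal = weights
    # Area ratio closest to 1 is best (fully exposed, lying flat).
    area_term = 1.0 - abs(1.0 - candidate["area_ratio"])
    # Shallower normalized center depth means higher in the stack - better.
    depth_term = 1.0 - candidate["center_depth_norm"]
    return w_area * area_term + w_depth * depth_term + w_seal * candidate["seal"]

candidates = [
    {"area_ratio": 0.95, "center_depth_norm": 0.1, "seal": 0.9},  # top of stack
    {"area_ratio": 0.60, "center_depth_norm": 0.4, "seal": 0.7},  # occluded
]
best = max(candidates, key=grab_score)
print(best["area_ratio"])  # 0.95
```

The seal score itself would come from the OBB-based analysis of S2B23, which is not reproduced here.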
In some preferred embodiments, the RGB image is preprocessed with a filtering and noise-reduction algorithm.
Specifically, preprocessing the RGB image with a filtering and noise-reduction algorithm reduces the influence of image noise on the analysis, so that the instance detection model generates more precise device envelope frame information and the device type information is correct.
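The patent does not name a specific filter; as one plausible choice, a 3x3 median filter removes isolated salt noise while preserving component edges. A single-channel sketch in pure Python:

```python
from statistics import median

def median_filter3(img):
    """3x3 median filter over a 2D list of pixel values (one channel);
    border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # single salt-noise pixel
         [10, 10, 10]]
print(median_filter3(noisy)[1][1])  # 10
```

In a real pipeline one would apply this (or a bilateral/Gaussian filter) per channel before handing the RGB image to the instance detection model.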
In some preferred embodiments, the point cloud image and the RGB image are generated by a structured-light 3D camera whose structured-light sampling frequency is selected to match the image background color.
Specifically, as described above, the instance detection model performs detection by separating foreground color from background color, so the contrast between the workbench background color and the color of the electronic components affects the model's analysis precision. The embodiment of the present application therefore generates the point cloud image and the RGB image with a structured-light 3D camera whose structured-light frequency matches the image background color, which ensures accurate point cloud acquisition and a clear color difference between the electronic components and the workbench in the RGB image.
In some preferred embodiments, as shown in fig. 2, before the RGB image is fed into the instance detection model, approximate regions of the electronic components are extracted by a basic target detection algorithm and supplied as auxiliary identification data, so that the instance detection model restricts its analysis to those approximate regions when generating the device envelope frame information and the device type information. Confining detection to the corresponding approximate region reduces interference in the model's analysis and improves identification accuracy.
It should be noted that fig. 3 shows an RGB image processed by the instance detection model; it retains the approximate regions extracted by the basic target detection algorithm (the rectangular boxes in the figure) and also shows the generated masks, i.e., the light-colored area in the middle of each electronic component.
In some preferred embodiments, the step of obtaining angle information required for grabbing the target electronic component by using the device template and the device envelope frame information includes:
S31, extracting a local RGB image of the target electronic component based on the device envelope frame information;
S32, acquiring a homography matrix between the device template and the local RGB image based on feature matching;
and S33, acquiring the angle information from the homography matrix.
Specifically, the local RGB image is the partial image of the target electronic component cropped from the RGB image, i.e., the image obtained from the preliminary positioning; steps S32 to S33 determine the angle information by analyzing the angular difference between the local RGB image and the device template.
More specifically, the angle information may be obtained from various image contents of the device template. The grabbing method of this embodiment preferably matches against the image features of the parameter characters printed on the device, and the matching result of those characters can also be used to verify the accuracy of the device type information.
More specifically, the homography matrix here acts as a rotation matrix: as the result of the feature matching in step S32, rotating by this matrix brings the local RGB image and the device template into complete or maximal alignment, as shown in fig. 4.
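Under the stated simplification that the homography reduces to a rotation matrix, the angle of step S33 can be read directly from its upper-left 2x2 block. The sketch below builds such a rotation homography and recovers the angle; in practice the matrix would be estimated from feature matches (e.g. with RANSAC), and a real estimate generally also contains translation and scale, so treating it as a pure rotation is an assumption.

```python
import numpy as np

def rotation_homography(theta_deg):
    """Homography for an in-plane rotation by theta_deg degrees
    (the pure-rotation case assumed in the text)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def angle_from_homography(H):
    """Step S33 sketch: recover the in-plane rotation angle (degrees)
    from the upper-left 2x2 block of the homography."""
    return np.degrees(np.arctan2(H[1, 0], H[0, 0]))

H = rotation_homography(37.5)
print(round(angle_from_homography(H), 3))   # -> 37.5
```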
Since the device type information is obtained from the instance detection model, its analysis may occasionally be wrong. In some preferred embodiments, the grabbing method of the embodiment of the present application therefore further includes a verification process for the device type information, i.e., step S3 further includes the following steps:
S34, obtaining an affine transformation matrix from the homography matrix and applying the affine transformation to the local RGB image to obtain a verification image;
S35, feeding the verification image into the instance detection model to obtain verification category information and verification envelope frame information;
and S36, verifying the angle information against the verification envelope frame information, the verification category information, the device envelope frame information, and the device type information.
Specifically, as shown in fig. 4, the image on the right is the verification image generated by applying the affine transformation to the local RGB image. The verification image is fed into the instance detection model, which analyzes it again to produce the verification category information and verification envelope frame information. The angle information is judged reliable if the verification envelope frame information is consistent with (or similar to) the device envelope frame information and the verification category information matches the device type information; otherwise the method returns to step S2 or step S1 for a new round of grab identification.
More specifically, by verifying the angle information through the homography matrix, the grabbing method of the embodiment of the present application ensures both the accuracy of the device type identification and the reliability of the angle information, effectively improving the grabbing precision of the mechanical arm.
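The consistency check of step S36 can be sketched as a class comparison plus a box-overlap test. The IoU threshold and the exact comparison rule are assumptions; the text only requires that the envelope frames be "consistent or similar" and the categories identical.

```python
# Hypothetical sketch of the step S36 consistency check. The 0.8 IoU
# threshold is an assumed value, not one specified in the patent.
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def angle_is_reliable(det_box, det_cls, ver_box, ver_cls, iou_thresh=0.8):
    """Accept the angle only if the re-detected class matches and the
    envelope frames overlap strongly; otherwise return to step S1/S2."""
    return det_cls == ver_cls and box_iou(det_box, ver_box) >= iou_thresh

print(angle_is_reliable((0, 0, 100, 60), "capacitor",
                        (2, 1, 101, 61), "capacitor"))   # -> True
```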
In some preferred embodiments, the process by which the instance detection model analyzes the RGB image to obtain the device envelope frame information and the device type information includes:
A1, obtaining a feature map from the RGB image;
A2, acquiring a region of interest from the feature map;
and A3, performing region classification of the RGB image according to the region of interest, and generating the device envelope frame information and the device type information from the classification result.
Specifically, the feature map is a per-pixel feature representation of the image; the region of interest may be set manually, derived from a pixel classification condition, or based on the approximate region described above.
More specifically, in step A3 the device envelope frame information is generated by selecting different regions of interest for repeated classification and regression, which accomplishes the region division; the device type information can then be determined by shape analysis of the device envelope frame information.
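Steps A2 and A3 can be illustrated with a toy ROI-pooling classifier: features inside a region of interest are pooled into a vector and scored by a classification head. The feature map, the two-class weight matrix, and the class names below are made-up stand-ins for a trained model, not the embodiment's actual network.

```python
import numpy as np

# Toy sketch of steps A2-A3: mean-pool the features inside a region of
# interest and classify the pooled vector. The weights are a made-up
# stand-in for a trained classification head.
def roi_pool(feature_map, roi):
    """feature_map: (C, H, W); roi: (x1, y1, x2, y2). Mean-pool the ROI."""
    x1, y1, x2, y2 = roi
    return feature_map[:, y1:y2, x1:x2].mean(axis=(1, 2))

def classify_roi(feature_map, roi, weights, labels):
    """Score the pooled ROI features with a linear head and pick a class."""
    scores = weights @ roi_pool(feature_map, roi)
    return labels[int(np.argmax(scores))]

fmap = np.zeros((2, 8, 8))
fmap[0, 2:5, 2:5] = 1.0           # channel 0 responds inside this region
weights = np.array([[1.0, 0.0],   # "capacitor" head reads channel 0
                    [0.0, 1.0]])  # "resistor" head reads channel 1
print(classify_roi(fmap, (2, 2, 5, 5), weights, ["capacitor", "resistor"]))
```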
In a second aspect, referring to fig. 5, some embodiments of the present application further provide an electronic component grabbing device in a stacking scene, used to position and grab electronic components, the device including:
an obtaining module 201, configured to obtain a point cloud image and an RGB image containing at least one electronic component;
a detection module 202, configured to analyze the RGB image with an instance detection model to obtain device envelope frame information and device type information;
a first positioning module 203, configured to call a device template based on the device type information and to obtain, from the device template and the device envelope frame information, the angle information required for grabbing the target electronic component;
a second positioning module 204, configured to obtain three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud image and the device envelope frame information;
and a grabbing module 205, configured to control the mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
The electronic component grabbing device in a stacking scene analyzes the RGB image with the instance detection model to quickly locate the approximate position of each electronic component in the image and to determine its device type information. It then calls the device template to perform feature matching on the target electronic component, refining the preliminary position into a precise localization of the component's details, determines the actual pose of the target electronic component in the real scene with the help of the point cloud image, and controls the mechanical arm to grab it accurately.
In some preferred embodiments, the electronic component grabbing device in the stacking scenario of the embodiment of the present application is configured to execute the electronic component grabbing method in the stacking scenario provided by the first aspect.
In a third aspect, referring to fig. 6, some embodiments of the present application further provide a schematic structural diagram of an electronic device, where the present application provides an electronic device, including: the processor 301 and the memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via a communication bus 303 and/or other form of connection mechanism (not shown), the memory 302 storing computer readable instructions executable by the processor 301, the processor 301 executing the computer readable instructions when the electronic device is operated to perform the method of any of the alternative implementations of the above-described embodiments.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the method in any optional implementation manner of the foregoing embodiments. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In summary, the embodiments of the present application provide an electronic component grabbing method, apparatus, device, and medium for a stacking scene. The grabbing method analyzes the RGB image with an instance detection model to quickly locate the approximate position of each electronic component and determine its device type information, then calls the device template to perform feature matching on the target electronic component so as to precisely locate its details, and finally determines the actual pose of the target electronic component in the real scene with the point cloud image, controlling the mechanical arm to grab it accurately.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An electronic component grabbing method in a stacking scene, used for positioning and grabbing electronic components, characterized by comprising the following steps:
acquiring a point cloud picture and an RGB picture at least comprising one electronic component;
analyzing the RGB image by using an instance detection model to acquire device envelope frame information and device type information;
calling a device template based on the device type information, and acquiring angle information required for grabbing the target electronic component by using the device template and the device envelope frame information;
acquiring three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud picture and the device envelope frame information;
and controlling a mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
2. The electronic component grabbing method in a stacking scene according to claim 1, wherein between the step of analyzing the RGB image by using the instance detection model to acquire the device envelope frame information and the device type information and the step of calling the device template based on the device type information, the method further comprises the following steps:
analyzing the RGB image by using the instance detection model to obtain a binary mask map of the electronic components;
and determining at least one target electronic component according to mask areas in the binary mask map.
3. The electronic component grabbing method in a stacking scene according to claim 2, wherein the step of determining at least one target electronic component according to mask areas in the binary mask map comprises:
acquiring the area of each mask from the binary mask map;
and comparing the area of each mask with a target area to determine at least one target electronic component, wherein the target area is a preset area value corresponding to the device type information.
4. The electronic component grabbing method in a stacking scene according to claim 1, wherein the RGB image is preprocessed with a filtering and noise-reduction algorithm.
5. The electronic component grabbing method in a stacking scene according to claim 1, wherein the point cloud image and the RGB image are generated by a structured-light 3D camera whose structured-light sampling frequency is selected to match the image background color.
6. The electronic component grabbing method in a stacking scene according to claim 1, wherein the step of obtaining the angle information required for grabbing the target electronic component by using the device template and the device envelope frame information comprises:
extracting a local RGB image of the target electronic component based on the device envelope frame information;
acquiring homography matrixes of the device template and the local RGB image based on feature matching;
and acquiring the angle information according to the homography matrix.
7. The electronic component grabbing method in a stacking scene according to claim 1, wherein the process by which the instance detection model analyzes the RGB image to acquire the device envelope frame information and the device type information comprises:
acquiring a characteristic diagram according to the RGB diagram;
acquiring an interested area according to the characteristic diagram;
and performing region classification on the RGB image according to the region of interest, and generating the device envelope frame information and the device type information according to a classification result.
8. An electronic component grabbing device in a stacking scene, used for positioning and grabbing electronic components, characterized in that the device comprises:
the acquisition module is used for acquiring a point cloud picture and an RGB picture at least comprising one electronic component;
the detection module is used for analyzing the RGB image by using an instance detection model to acquire device envelope frame information and device type information;
the first positioning module is used for calling a device template based on the device type information and acquiring angle information required by grabbing a target electronic component by using the device template and the device envelope frame information;
the second positioning module is used for acquiring the three-dimensional coordinate information of the target electronic component based on the point cloud in the point cloud picture and the device envelope frame information;
and the grabbing module is used for controlling a mechanical arm to grab the target electronic component based on the three-dimensional coordinate information and the angle information.
9. An electronic device comprising a processor and a memory, said memory storing computer readable instructions which, when executed by said processor, perform the steps of the method according to any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
CN202310223272.7A 2023-03-09 2023-03-09 Electronic component grabbing method, device, equipment and medium in stacking scene Active CN115922738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310223272.7A CN115922738B (en) 2023-03-09 2023-03-09 Electronic component grabbing method, device, equipment and medium in stacking scene

Publications (2)

Publication Number Publication Date
CN115922738A true CN115922738A (en) 2023-04-07
CN115922738B CN115922738B (en) 2023-06-02

Family

ID=85832335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310223272.7A Active CN115922738B (en) 2023-03-09 2023-03-09 Electronic component grabbing method, device, equipment and medium in stacking scene

Country Status (1)

Country Link
CN (1) CN115922738B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015089591A (en) * 2013-11-05 2015-05-11 ファナック株式会社 Apparatus and method for taking out bulked articles by robot
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
CN109176521A (en) * 2018-09-19 2019-01-11 北京因时机器人科技有限公司 A kind of mechanical arm and its crawl control method and system
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN114494712A (en) * 2020-11-09 2022-05-13 北京四维图新科技股份有限公司 Object extraction method and device
CN114882109A (en) * 2022-04-27 2022-08-09 天津新松机器人自动化有限公司 Robot grabbing detection method and system for sheltering and disordered scenes
CN114952842A (en) * 2022-05-27 2022-08-30 赛那德数字技术(上海)有限公司 Unordered grabbing method and device based on grabbing manipulator and storage medium
CN115284279A (en) * 2022-06-21 2022-11-04 福建(泉州)哈工大工程技术研究院 Mechanical arm grabbing method and device based on aliasing workpiece and readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qiu Shi, "RGB-D Vision-Guided Robot Grasping and Digital Twin ***", China Master's Theses Full-Text Database (Electronic Journal)

Also Published As

Publication number Publication date
CN115922738B (en) 2023-06-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant