CN115346211A - Visual recognition method, visual recognition system and storage medium - Google Patents

Visual recognition method, visual recognition system and storage medium

Info

Publication number
CN115346211A
Authority
CN
China
Prior art keywords
camera
information
dimensional
visual recognition
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211056751.6A
Other languages
Chinese (zh)
Inventor
王兆广
何春来
王卫军
孙嘉彬
尚正新
陈琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Potevio Logistics Technology Co ltd
China Electronics Technology Robot Co ltd
Original Assignee
Potevio Logistics Technology Co ltd
China Electronics Technology Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Potevio Logistics Technology Co ltd, China Electronics Technology Robot Co ltd
Priority to CN202211056751.6A
Publication of CN115346211A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G65/00Loading or unloading
    • B65G65/005Control arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of automatic loading and unloading, and in particular relates to a visual recognition method, a visual recognition system and a storage medium. The visual recognition method comprises: acquiring two-dimensional plane position information and three-dimensional point cloud information of an object; screening out the point cloud corresponding to the two-dimensional plane position information from the three-dimensional point cloud information; and identifying six-dimensional attitude information of the object from the point cloud. The storage medium stores a computer program executable by a computer which, when executed, implements the visual recognition method. The visual recognition system comprises a support, a rotating mechanism, a camera, a processor and the storage medium. The visual recognition method, visual recognition system and storage medium provided by the invention can be applied to the field of automatic loading and unloading in industrial applications, automating the unloading work so that it can be completed without manual participation, saving labor cost and improving loading and unloading efficiency.

Description

Visual recognition method, visual recognition system and storage medium
Technical Field
The invention belongs to the technical field of automatic loading and unloading, and particularly relates to a visual recognition method, a visual recognition system and a storage medium.
Background
In the logistics and transportation industry, loading and unloading goods is a very important link. At present, most goods are still handled manually during loading and unloading: the goods are placed on a conveyor belt by hand, and the conveyor belt carries them into the transport means (such as a train). Manual unloading is inefficient, increases the time cost of logistics, and incurs high labor cost. How to realize automatic unloading has therefore become a key research problem in the logistics and transportation industry; when realizing mechanical automatic unloading, identifying the goods is particularly critical, and how to automatically identify the position of the goods has become a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In order to solve the above problems, the present invention provides a visual recognition method, a visual recognition system and a storage medium. The technical solutions are as follows:
a visual recognition method, comprising: acquiring two-dimensional plane position information and three-dimensional point cloud information of an object; screening out point clouds corresponding to the two-dimensional plane position information from the three-dimensional point cloud information; and identifying six-dimensional attitude information of the object from the point cloud.
In the visual recognition method as described above, preferably: when the two-dimensional plane position information and three-dimensional point cloud information of a plurality of objects are acquired, a plurality of shooting fields of view are obtained through multiple shots; an overlapping area is reserved between two adjacent shooting fields of view, and the size of the overlapping area is larger than that of one object; and after the overlapping area is removed by setting a threshold, the two-dimensional plane position information and three-dimensional point cloud information of the plurality of objects are obtained.
In the visual recognition method as described above, preferably: the method further comprises model training, in which the objects are labeled with an image labeling tool and planar images of the objects are trained with a convolutional neural network to obtain a training model; during visual recognition, after a planar image of the object is acquired, the two-dimensional plane position information of the object is obtained by comparing the planar image with the training model.
In the visual recognition method as described above, preferably: after the planar image of the object is acquired, the planar image is segmented by an edge detection algorithm to obtain a plurality of sub-images; and the two-dimensional plane position information of a plurality of objects is obtained by comparing the sub-images with the training model.
In the visual recognition method as described above, preferably: when the six-dimensional attitude information of the object is identified from the point cloud, the depth information and rotation information of the object are acquired from the point cloud; the depth information and the two-dimensional plane position information form the translation information of the object; and the translation information and the rotation information form the six-dimensional attitude information of the object.
A storage medium storing a computer program executable by a computer, the computer program when executed implementing the visual recognition method.
A visual recognition system, comprising: a support, a rotating mechanism, a camera, a processor and the storage medium; the rotating mechanism is mounted on the support, the camera is mounted on the rotating mechanism, and the rotating mechanism is used for driving the camera to rotate; the processor is connected with the rotating mechanism, the camera and the storage medium respectively.
In the visual recognition system as described above, preferably: the rotating mechanism comprises a base, a driving block and a connecting frame; the base is mounted on the support, the driving block is rotatably connected with the base, the connecting frame is rotatably connected with the driving block, and the camera is mounted on the connecting frame; a first motor and a second motor are mounted on the driving block, the first motor being used for driving the driving block to rotate and the second motor being used for driving the connecting frame to rotate.
In the visual recognition system as described above, preferably: the rotation axis of the first motor is perpendicular to the rotation axis of the second motor, and the rotation axis of the first motor is parallel to the height direction of the support.
In the visual recognition system as described above, preferably: a connecting piece is arranged between the camera and the rotating mechanism, the connecting piece is mounted on the connecting frame, and the camera is mounted on the connecting piece; a mounting groove is provided on the connecting piece, and the camera is mounted in the mounting groove.
In the visual recognition system as described above, further preferably: the mounting groove is a strip-shaped groove whose length direction is parallel to the rotation axis of the second motor, so that the mounting position of the camera can be adjusted.
In the visual recognition system as described above, further preferably: the camera comprises a 2D camera and a 3D camera, which are respectively connected with the processor; the 2D camera is used for acquiring the two-dimensional plane position information of the object, and the 3D camera is used for acquiring the depth information and rotation information of the object.
Analysis shows that, compared with the prior art, the invention has the following advantages and beneficial effects:
The visual recognition method, visual recognition system and storage medium provided by the invention can be applied to the field of automatic loading and unloading in industrial applications, automating the unloading work so that it can be completed without manual participation, saving labor cost and improving loading and unloading efficiency.
Drawings
FIG. 1 is a flow chart of the visual recognition method of the present invention;
FIG. 2 is a schematic diagram of the connection of the visual recognition system of the present invention;
FIG. 3 is a schematic structural diagram of the rotating mechanism and the connecting piece of the present invention.
In the figures: 1-support; 2-rotating mechanism; 3-camera; 4-connecting piece; 5-base; 6-driving block; 7-connecting frame; 8-mounting groove.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom" and the like indicate orientations or positional relationships based on those shown in the drawings; they are used merely for convenience of description and do not require that the invention be constructed or operated in a specific orientation, and therefore should not be construed as limiting the invention. The terms "mounted" and "connected" as used herein are to be construed broadly and may include, for example, fixed connections and removable connections; the connections may be direct or indirect through intermediate members. The specific meanings of the above terms will be understood by those skilled in the art as appropriate.
Please refer to fig. 1 to fig. 3, wherein fig. 1 is a flow chart of the visual recognition method according to the present invention; FIG. 2 is a schematic diagram of the connection of the visual recognition system of the present invention; fig. 3 is a schematic structural diagram of the rotating mechanism and the connecting piece of the present invention.
In one embodiment of the present invention, as shown in FIG. 1, a visual recognition method is provided. The method requires model training before objects can be recognized: during model training, the objects are labeled with an image labeling tool, and planar images of the objects are then trained with a convolutional neural network to obtain a training model. After model training is completed, the six-dimensional attitude information of an object is recognized mainly through the following steps.
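Before turning to those steps, the labeling-and-training stage just described can be illustrated with a short sketch. This is not the patented implementation, only a minimal example under stated assumptions: the annotations exported by the image labeling tool are assumed to have been converted into a hypothetical PyTorch dataset (CartonCropDataset, yielding image tensors and class labels), and a torchvision ResNet-18 stands in for the convolutional neural network.

```python
# Hypothetical sketch of the model-training step: fine-tune a small CNN classifier
# on labeled crops (carton vs. background) exported from the image labeling tool.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models import resnet18

def train_carton_classifier(dataset, epochs=10, device="cuda"):
    model = resnet18(weights="DEFAULT")              # pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)    # carton / not-carton head
    model.to(device).train()

    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            loss = criterion(model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

A softmax over the two output logits of such a classifier can then serve as the confidence used when sub-images are compared with the training model in the steps below.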
Step one: acquire the two-dimensional plane position information and three-dimensional point cloud information of the object.
In step one, when the two-dimensional plane position information and three-dimensional point cloud information of a plurality of objects are acquired, a plurality of shooting fields of view are obtained through multiple shots. During shooting, an overlapping area is left between two adjacent fields of view, and the overlapping area is set to be larger than one object (such as a carton), so that an object lying at the boundary can still be captured completely within the two adjacent fields of view. After the several fields of view with overlapping areas have been acquired, the overlapping areas need to be removed; this can be done by screening with a set threshold, for example a threshold set according to the maximum gray value of the adjacent objects (such as cartons) contained in the overlapping area. In this way the planar image and three-dimensional point cloud information of the objects are acquired. After the planar image of the objects is obtained, it is segmented by an edge detection algorithm into a plurality of sub-images; by comparing the sub-images with the training model, the two-dimensional plane position information of the plurality of objects is obtained, and thus the two-dimensional plane position information and three-dimensional point cloud information of the plurality of objects are acquired.
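A rough sketch of the edge-detection segmentation and of the comparison of sub-images with the training model is given below, using OpenCV. The Canny thresholds, the minimum-area filter and the trained_model callable (assumed to return a carton confidence for a cropped sub-image, for example a thin wrapper around the classifier sketched earlier) are illustrative assumptions; the patent does not fix these details.

```python
# Sketch: segment the plane image into sub-images by edge detection, then keep
# the sub-images that the training model judges to contain an object.
import cv2

def detect_cartons_2d(plane_image_bgr, trained_model, min_area=1000):
    """Return (x, y, w, h) boxes for sub-images judged to contain a carton."""
    gray = cv2.cvtColor(plane_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:                             # discard tiny fragments
            continue
        sub_image = plane_image_bgr[y:y + h, x:x + w]
        # "Comparison with the training model": assumed confidence threshold 0.5.
        if trained_model(sub_image) > 0.5:
            boxes.append((x, y, w, h))
    return boxes
```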
Step two: screen out the point cloud corresponding to the two-dimensional plane position information from the three-dimensional point cloud information.
After the two-dimensional plane position information of the object is obtained, the points generated from the two-dimensional plane position information are compared with the points in the three-dimensional point cloud information, and the point cloud corresponding to the two-dimensional plane position information is thereby found within the three-dimensional point cloud information.
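When the 2D camera and the 3D camera are calibrated so that the point cloud is organized pixel-for-pixel with the planar image, this screening step reduces to indexing the cloud with the detected 2D region. A minimal sketch under that alignment assumption:

```python
import numpy as np

def screen_point_cloud(organized_cloud, box):
    """organized_cloud: H x W x 3 array of (X, Y, Z) camera-frame coordinates,
    aligned pixel-for-pixel with the planar image; box: (x, y, w, h) in pixels.
    Returns an N x 3 array of the valid points inside the detected 2D region."""
    x, y, w, h = box
    region = organized_cloud[y:y + h, x:x + w].reshape(-1, 3)
    valid = np.isfinite(region).all(axis=1) & (region[:, 2] > 0)   # drop holes
    return region[valid]
```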
Step three: identify the six-dimensional attitude information of the object from the point cloud.
After the two-dimensional plane information of the object and its corresponding point cloud have been found in step two, the six-dimensional attitude information of the object can be identified from these two elements. Specifically, the two-dimensional plane information provides two dimensions; once the point cloud corresponding to the two-dimensional plane position information has been screened out, the depth information of the object (the third dimension) can be obtained from the point cloud, which yields the translation information of the object (the distances along the X, Y and Z axes from the origin of the camera coordinate system to the origin of the recognized object's coordinate system during visual recognition). In addition, the rotation information of the object (the angles of rotation about the X, Y and Z axes from the camera coordinate system to the recognized object's coordinate system) can be acquired directly from the point cloud corresponding to the two-dimensional plane position information. The six-dimensional attitude information of the object is then identified from the translation information and the rotation information.
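One common way to realize this decomposition from the screened point cloud alone is to take the centroid of the points as the translation and their principal axes as the object frame for the rotation. The sketch below follows that approach; it is an illustrative assumption, not the specific algorithm claimed by the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def estimate_six_dof_pose(object_points):
    """object_points: N x 3 array of one object's points in the camera frame.
    Returns the translation (X, Y, Z) and rotation (about X, Y, Z) in degrees."""
    translation = object_points.mean(axis=0)        # plane position plus depth

    # Use the principal axes of the screened point cloud as the object frame.
    centered = object_points - translation
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T                                     # columns are principal axes
    if np.linalg.det(axes) < 0:                     # keep a right-handed frame
        axes[:, 2] *= -1.0
    rotation = Rotation.from_matrix(axes).as_euler("xyz", degrees=True)
    return translation, rotation
```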
In industrial applications, the visual recognition method can be applied to the field of automatic loading and unloading, for example on an AGV (Automated Guided Vehicle), so that the unloading work is automated and can be completed without manual participation, saving labor cost and improving loading and unloading efficiency. During unloading, once the six-dimensional attitude information of an object has been recognized, it can be sent to the robot so that the robot grasps the object. When grasping, it is preferable to grasp from top to bottom and from left to right.
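That preferred top-to-bottom, left-to-right order amounts to sorting the recognized poses before they are sent to the robot. A small sketch, assuming the camera frame's Y axis points downward and its X axis points to the right:

```python
def order_for_grasping(poses, row_tolerance=0.05):
    """poses: list of (translation, rotation) pairs in the camera frame.
    Sort top-to-bottom first (smaller Y, i.e. higher up), then left-to-right
    (smaller X), treating Y values closer than row_tolerance as one row."""
    def key(pose):
        (x, y, _z), _rotation = pose
        return (round(y / row_tolerance), x)
    return sorted(poses, key=key)
```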
In an embodiment of the present invention, there is also provided a storage medium storing a computer program executable by a computer; when executed, the computer program implements the visual recognition method. The storage medium may be any one of, or a combination of, a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device and a magnetic storage device.
In one embodiment of the present invention, as shown in fig. 2, a visual recognition system is also provided. Specifically, the visual recognition system comprises a support 1, a rotating mechanism 2, a camera 3, a processor and a storage medium. The rotating mechanism 2 is mounted on the support 1, the camera 3 is mounted on the rotating mechanism 2, and the processor is connected with the rotating mechanism 2, the camera 3 and the storage medium respectively.
In this embodiment, the support 1 can be mounted on an AGV to hold the rotating mechanism 2 and the camera 3 at a predetermined height, so that the field of view of the camera 3 is not blocked by the robot on the AGV. The rotating mechanism 2 is arranged at the top end of the support 1 and can drive the camera 3 to rotate, thereby enlarging the field of view of the camera 3. The camera 3 captures the planar image and three-dimensional point cloud information of the object to be recognized; the storage medium stores the computer program of the visual recognition method; and the processor executes the computer program in the storage medium, recognizes the six-dimensional attitude information of the object from its planar image and three-dimensional point cloud information, and sends the six-dimensional attitude information to the robot, which grasps the object. The visual recognition system of this embodiment can be applied to the field of automatic loading and unloading, automating the unloading work so that it can be completed without manual participation, saving labor cost and improving loading and unloading efficiency.
In one embodiment of the invention, as shown in fig. 3, the rotating mechanism 2 comprises a base 5, a driving block 6 and a connecting frame 7. Specifically, the base 5 is mounted on the support 1, the driving block 6 is rotatably connected with the base 5, the connecting frame 7 is rotatably connected with the driving block 6, and the camera 3 is mounted on the connecting frame 7. A first motor and a second motor are mounted on the driving block 6: the first motor drives the driving block 6 to rotate and the second motor drives the connecting frame 7 to rotate, so that the camera 3 mounted on the connecting frame 7 has two rotational degrees of freedom and its field of view can be enlarged.
Further, in this embodiment, the rotation axis of the first motor is perpendicular to the rotation axis of the second motor, and the rotation axis of the first motor is parallel to the height direction of the support 1, which is the vertical direction. The camera 3 can therefore rotate about a horizontal axis and/or about a vertical axis, and its field of view can be extended in the horizontal and vertical directions as required, meeting the shooting requirements of large scenes.
As shown in fig. 3, in an embodiment of the present invention, a connecting piece 4 is arranged between the camera 3 and the rotating mechanism 2: the connecting piece 4 is mounted on the connecting frame 7 and the camera 3 is mounted on the connecting piece 4, so that the camera 3 and the rotating mechanism 2 are fixed together through the connecting piece 4, which improves the connection strength.
Further, in this embodiment, the connecting piece 4 is provided with a mounting groove 8 that serves as the position for attaching the camera 3, so the camera 3 is mounted in the mounting groove 8. The mounting groove 8 is a strip-shaped groove whose length direction is parallel to the rotation axis of the second motor, allowing the mounting position of the camera 3 to be adjusted and its position to be conveniently calibrated.
As shown in fig. 2, in one embodiment of the invention, the camera 3 comprises a 2D camera and a 3D camera, which are respectively connected with the processor. Specifically, the 2D camera captures the planar image of an object, from which the two-dimensional plane position information of the object is acquired, and the 3D camera scans the object to acquire its three-dimensional point cloud information, from which the depth information and rotation information of the object are acquired.
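Tying the earlier sketches together, the processor's task is essentially to orchestrate the 2D camera, the 3D camera and the recognition steps. The driver below is hypothetical: the camera objects, their capture() methods and robot.grasp() are placeholder interfaces introduced for illustration, not an API defined by the patent.

```python
def recognise_and_grasp(camera_2d, camera_3d, trained_model, robot):
    """Hypothetical driver combining the sketches above."""
    plane_image = camera_2d.capture()        # assumed: returns a BGR image
    cloud = camera_3d.capture()              # assumed: organized H x W x 3 cloud

    poses = []
    for box in detect_cartons_2d(plane_image, trained_model):
        points = screen_point_cloud(cloud, box)
        if len(points):
            poses.append(estimate_six_dof_pose(points))

    for translation, rotation in order_for_grasping(poses):
        robot.grasp(translation, rotation)   # assumed robot grasp interface
```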
It will be appreciated by those skilled in the art that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments disclosed above are therefore to be considered in all respects as illustrative and not restrictive. All changes which come within the scope of or equivalence to the invention are intended to be embraced therein.

Claims (10)

1. A visual recognition method, comprising:
acquiring two-dimensional plane position information and three-dimensional point cloud information of an object;
screening out point clouds corresponding to the two-dimensional plane position information from the three-dimensional point cloud information;
and identifying six-dimensional attitude information of the object from the point cloud.
2. The visual recognition method of claim 1, wherein:
when two-dimensional plane position information and three-dimensional point cloud information of a plurality of objects are obtained, a plurality of shooting views are obtained through multiple times of shooting;
an overlapping area is reserved between two adjacent shooting visual fields, and the size of the overlapping area is larger than that of an object;
and after the overlapping area is removed by setting a threshold value, obtaining the two-dimensional plane position information and the three-dimensional point cloud information of a plurality of objects.
3. The visual recognition method of claim 1, wherein:
the method comprises the following steps that a model training is further included, in the model training, the object is marked through an image marking tool, and a plane image of the object is trained through a convolutional neural network to obtain a training model;
and in the visual identification, after a plane image of the object is obtained, the two-dimensional plane position information of the object is obtained by comparing the plane image with the training model.
4. The visual recognition method of claim 3, wherein:
after the plane image of the object is obtained, the plane image is segmented through an edge detection algorithm to obtain a plurality of sub-images;
and obtaining the two-dimensional plane position information of a plurality of objects by comparing the sub-images with the training model.
5. The visual recognition method of claim 1, wherein:
when the six-dimensional attitude information of the object is identified by the point cloud, acquiring depth information and rotation information of the object through the point cloud;
the depth information and the two-dimensional plane position information form translation information of the object;
the translation information and the rotation information constitute the six-dimensional pose information of the object.
6. A storage medium storing a computer program executable by a computer, characterized in that:
the computer program when executed implements the visual recognition method of any one of claims 1 to 5.
7. A visual recognition system, comprising:
a support, a rotating mechanism, a camera, a processor, and the storage medium of claim 6;
the rotating mechanism is arranged on the support, the camera is arranged on the rotating mechanism, and the rotating mechanism is used for driving the camera to rotate;
the processor is respectively connected with the rotating mechanism, the camera and the storage medium.
8. The visual recognition system of claim 7, wherein:
the rotating mechanism comprises a base, a driving block and a connecting frame;
the base is arranged on the support, the driving block is rotatably connected with the base, the connecting frame is rotatably connected with the driving block, and the camera is arranged on the connecting frame;
the driving block is provided with a first motor and a second motor, the first motor is used for driving the driving block to rotate, and the second motor is used for driving the connecting frame to rotate;
the rotating axis of the first motor is perpendicular to the rotating axis of the second motor, and the rotating axis of the first motor is parallel to the height direction of the support.
9. The visual recognition system of claim 8, wherein:
a connecting piece is arranged between the camera and the rotating mechanism, the connecting piece is arranged on the connecting frame, and the camera is arranged on the connecting piece;
the connecting piece is provided with a mounting groove, and the camera is mounted on the mounting groove;
the mounting groove is a strip-shaped groove, and the length direction of the mounting groove is parallel to the rotation axis of the second motor, so that the mounting position of the camera can be adjusted.
10. The visual recognition system of claim 7, wherein:
the cameras comprise a 2D camera and a 3D camera, and the 2D camera and the 3D camera are respectively connected with the processor;
the 2D camera is used for acquiring two-dimensional plane position information of an object, and the 3D camera is used for acquiring depth information and rotation information of the object.
CN202211056751.6A 2022-08-31 2022-08-31 Visual recognition method, visual recognition system and storage medium Pending CN115346211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211056751.6A CN115346211A (en) 2022-08-31 2022-08-31 Visual recognition method, visual recognition system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211056751.6A CN115346211A (en) 2022-08-31 2022-08-31 Visual recognition method, visual recognition system and storage medium

Publications (1)

Publication Number Publication Date
CN115346211A true CN115346211A (en) 2022-11-15

Family

ID=83955235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211056751.6A Pending CN115346211A (en) 2022-08-31 2022-08-31 Visual recognition method, visual recognition system and storage medium

Country Status (1)

Country Link
CN (1) CN115346211A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115848878A (en) * 2023-02-28 2023-03-28 云南烟叶复烤有限责任公司 AGV-based cigarette frame identification and stacking method and system
CN115848878B (en) * 2023-02-28 2023-05-26 云南烟叶复烤有限责任公司 AGV-based tobacco frame identification and stacking method and system

Similar Documents

Publication Publication Date Title
US9707682B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
US9205558B1 (en) Multiple suction cup control
US9259844B2 (en) Vision-guided electromagnetic robotic system
US9457970B1 (en) Modular cross-docking system
US9659217B2 (en) Systems and methods for scale invariant 3D object detection leveraging processor architecture
CN109772718B (en) Parcel address recognition system, parcel address recognition method, parcel sorting system and parcel sorting method
US9205562B1 (en) Integration of depth points into a height map
EP3854535A1 (en) Real-time determination of object metrics for trajectory planning
CN112847375B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN115346211A (en) Visual recognition method, visual recognition system and storage medium
CN113245235B (en) Commodity classification method and device based on 3D vision
CN115582827A (en) Unloading robot grabbing method based on 2D and 3D visual positioning
CN112828892A (en) Workpiece grabbing method and device, computer equipment and storage medium
CN112936257A (en) Workpiece grabbing method and device, computer equipment and storage medium
CN113483664B (en) Screen plate automatic feeding system and method based on line structured light vision
CN218100264U (en) Visual identification system
JP7418335B2 (en) Parcel identification device and parcel sorting device
CN114453258A (en) Parcel sorting system, parcel sorting method, industrial control equipment and storage medium
EP4249178A1 (en) Detecting empty workspaces for robotic material handling
CN113084815B (en) Physical size calculation method and device of belt-loaded robot and robot
CN209312143U (en) A kind of robot based on machine vision stores up vending system automatically
WO2023073780A1 (en) Device for generating learning data, method for generating learning data, and machine learning device and machine learning method using learning data
CN116079791A (en) Robot vision recognition system and robot with same
CN113409394A (en) Intelligent forking method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination