CN114055473A - Visual detection identification system based on transfer robot - Google Patents

Visual detection identification system based on transfer robot

Info

Publication number
CN114055473A
CN114055473A
Authority
CN
China
Prior art keywords
article
area
carried
transfer robot
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111480952.4A
Other languages
Chinese (zh)
Other versions
CN114055473B (en)
Inventor
付大方
丁仁宏
李卫
李�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Xinsilu Intelligent Technology Co ltd
Original Assignee
Hefei Xinsilu Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Xinsilu Intelligent Technology Co ltd filed Critical Hefei Xinsilu Intelligent Technology Co ltd
Priority to CN202111480952.4A priority Critical patent/CN114055473B/en
Publication of CN114055473A publication Critical patent/CN114055473A/en
Application granted granted Critical
Publication of CN114055473B publication Critical patent/CN114055473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a visual detection and identification system based on a transfer robot, relating to the technical field of transfer robots. The system comprises a control center in communication connection with a database, a vision acquisition module, a data processing module, a data analysis module and an adjustment module. A feature point is set on the article to be carried, and the change in position of the article within the carrying area is judged from the change in position of this feature point; the transfer robot then adjusts its grabbing position according to the change in position of the article. The transfer robot can therefore automatically adjust its grabbing position in response to changes in the state of the article to be carried, avoiding the situation in which dislocation and skew of the article's current state cause the grabbing end of the transfer robot to collide with the article, damaging the article and increasing the risk of the transfer robot breaking down.

Description

Visual detection identification system based on transfer robot
Technical Field
The invention relates to the technical field of transfer robots, in particular to a visual detection and identification system based on a transfer robot.
Background
A manipulator is an automatic operating device that imitates certain motions of the human arm to grab and carry objects or operate tools according to a fixed program. As the earliest industrial robot and the earliest modern robot, it can replace heavy human labour, mechanise and automate production, and operate in harmful environments, so it is widely applied in machinery manufacturing, metallurgy, electronics, light industry, atomic energy and other sectors. A transfer robot generates power electrically or pneumatically to replace frequent manual carrying actions, reducing the working intensity of personnel to a certain degree.
the existing transfer robot mostly finishes the transfer of articles by setting fixed running tracks and clamping force in advance, and in such a mode, the articles need to be kept in the same state before being transferred, and if the states are inconsistent, corresponding adjustment cannot be made, so that the risk of collision between the transfer robot and the articles is caused.
Disclosure of Invention
The invention aims to provide a visual detection and identification system based on a transfer robot.
The purpose of the invention can be realized by the following technical scheme: the visual detection and identification system based on the transfer robot comprises a control center, wherein the control center is in communication connection with a data processing module, a data analysis module and an adjustment module;
the data processing module is used for setting standard states of the to-be-carried objects in the carrying area and the placing area, then obtaining the current states of the to-be-carried objects, respectively obtaining a benchmark reference vector line and a benchmark vector line, establishing a three-dimensional coordinate system, analyzing and obtaining the offset of the to-be-carried objects in the current states and the standard states according to the position relation of the benchmark reference vector line and the benchmark vector line in the three-dimensional coordinate system through the data analysis module, and finally adjusting the grabbing positions of the grabbing ends of the carrying robot according to the analysis results of the data analysis module through the adjustment module.
Further, the control center is also in communication connection with a database, and the database is used to set up three-dimensional models of the carrying area and the placing area, including:
acquiring photos of a carrying area and a placing area, and rasterizing the acquired photos;
and de-marginalizing the rasterized photos so that only the content within the carrying area and the placing area is kept, selecting a reference point at the centre of the carrying area, forming a carrying reference position from the reference point, and mapping the carrying reference position into the placing area to generate a drop-point reference position.
Furthermore, the control center is also in communication connection with a vision acquisition module, which acquires video data in the carrying area in real time; when an article to be carried appears in the carrying area, a photo of its state is acquired.
Further, the data processing module is used for processing the state photo of the article to be carried, and the specific process comprises the following steps:
setting the standard state of the article to be carried in the carrying area, selecting a feature point on the article, marking it, and connecting it with the reference point in the carrying reference position to obtain the benchmark vector line of the article to be carried;
the method comprises the steps of obtaining the state of an article to be conveyed in a conveying area at the current moment, obtaining feature points of the article to be conveyed, connecting the feature points with reference points in conveying reference positions in the conveying area, obtaining reference vector lines of the article to be conveyed in the current state, and establishing a three-dimensional coordinate system by taking the reference points as original points.
Further, the data analysis module obtains the offset of the article to be carried through the data processed by the data processing module, and the analysis process comprises:
obtaining coordinate values of the characteristic points of the articles to be carried in the standard state in the three-dimensional coordinate system, and recording the coordinate values as reference coordinate values;
obtaining a coordinate value of a characteristic point of an article to be carried in a current state in a three-dimensional coordinate system, and recording the coordinate value as an actual coordinate value;
and acquiring the offset of the article to be carried in the directions of the x axis and the y axis respectively according to the positions of the reference coordinate value and the actual coordinate value.
Further, the adjusting module is used for adjusting the position of the grabbing end of the transfer robot before carrying according to the analysis result of the data analysis module, and the adjusting process comprises the following steps:
in the initial state of the transfer robot, selecting two characteristic points at the grabbing end of the transfer robot, and respectively connecting the two characteristic points with the bottom corner end points of an isosceles triangle arranged in the transfer area to generate two connecting lines;
and mapping the isosceles triangle in the carrying area to the corresponding position on the article to be carried in the standard state, then obtaining the position of the isosceles triangle on the article in the current state from its position in the standard state, and comparing the current-state triangle with the standard-state triangle to obtain the deflection angle of the article to be carried.
Further, when the transfer robot is in the initial state, the two connecting lines from the two feature points on the grabbing end to the two base-angle end points of the isosceles triangle are equal in length and lie in the same vertical plane.
Further, the adjusting module is also used for adjusting the placing position of the article to be carried in the placing area, and the specific process comprises the following steps:
marking the article that the transfer robot is to place into the placing area as the article to be placed, and setting the standard state of the article to be placed in the placing area according to actual requirements;
comparing the drop-point reference position of the article to be placed in its standard state in the placing area with the carrying reference position of the article to be carried in its standard state, and then obtaining the difference between the drop-point reference position and the carrying reference position;
and then, while the transfer robot carries the grabbed article to the placing area, adjusting the article so that, when it is put into the placing area, it is consistent with the standard state of the article to be placed in the placing area.
Compared with the prior art, the invention has the beneficial effects that: the automatic grabbing device comprises a carrying robot, a carrying area and a carrying robot, wherein the carrying robot is provided with characteristic points on an article to be carried, the position of the characteristic points of the article to be carried is changed, the position of the article to be carried is judged to be changed in the carrying area, the grabbing position of the carrying robot is adjusted according to the position of the article to be carried, the carrying robot can automatically adjust the corresponding grabbing position according to the state change of the article to be carried, the situation that the current state of the article to be carried is staggered and inclined, the grabbing end of the carrying robot collides with the article to be carried, the article is damaged, and meanwhile the risk that the carrying robot breaks down is reduced.
Drawings
Fig. 1 is a schematic diagram of the present invention.
Detailed Description
As shown in fig. 1, the visual inspection and identification system based on the transfer robot comprises a control center, wherein the control center is in communication connection with a database, a visual acquisition module, a data processing module, a data analysis module and an adjustment module;
the database is used for setting three-dimensional models of the carrying area and the placing area, and it needs to be further explained that in the specific implementation process, the carrying robot carries the object to be carried from the carrying area to the placing area;
carrying out three-dimensional modeling on the carrying area and the placing area, wherein the specific process comprises the following steps:
acquiring a photo of the conveying area, and rasterizing the photo of the conveying area;
performing de-marginalization processing on the pictures in the conveying area after rasterization processing, only keeping the pictures in the range of the conveying area, and selecting a reference point at the center of the conveying area;
setting an isosceles triangle by taking the reference point as a center, combining the reference point with the isosceles triangle to form a carrying reference position, and then combining the carrying reference position with a carrying area photo to generate a carrying area model;
acquiring a photo of the placing area, and rasterizing the photo of the placing area;
performing de-marginalization processing on the rasterized pictures in the placement area, and only keeping the pictures in the range of the placement area;
and mapping the carrying reference position in the carrying-area model into the placing area to generate a drop-point reference position, then combining the drop-point reference position with the placing-area photo to generate a placing-area model.
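The modelling steps above can be sketched in a few lines of Python. This is a minimal illustration only; the function names (`rasterize`, `reference_position`, `map_to_placement`), the grid cell size, and the triangle dimensions are assumptions for the sketch, not part of the patent.

```python
def rasterize(photo, cell=10):
    """Divide a photo of size (width, height) into a grid of cell origins."""
    w, h = photo
    return [(x, y) for x in range(0, w, cell) for y in range(0, h, cell)]

def reference_position(photo, half_base=20.0, height=30.0):
    """Pick the reference point at the centre of the region and attach an
    isosceles triangle to it, forming the carrying reference position.
    The triangle dimensions are illustrative assumptions."""
    w, h = photo
    ref = (w / 2.0, h / 2.0)
    apex = (ref[0], ref[1] + height / 2.0)
    base_left = (ref[0] - half_base, ref[1] - height / 2.0)
    base_right = (ref[0] + half_base, ref[1] - height / 2.0)
    return {"reference_point": ref,
            "triangle": (apex, base_left, base_right)}

def map_to_placement(carry_ref, offset):
    """Translate the carrying reference position into the placing area by an
    assumed area-to-area offset, producing the drop-point reference position."""
    dx, dy = offset
    shift = lambda p: (p[0] + dx, p[1] + dy)
    return {"reference_point": shift(carry_ref["reference_point"]),
            "triangle": tuple(shift(p) for p in carry_ref["triangle"])}
```

For example, a 640×480 carrying-area photo yields a reference point at (320, 240), which a (100, 0) area offset maps to a drop-point reference at (420, 240).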
It should be further noted that, in the specific implementation process, the video data in the carrying area is acquired in real time through the vision acquisition module, when an article to be carried appears in the carrying area, a state photo of the article to be carried is acquired, and then the state photo of the article to be carried is uploaded to the data processing module.
The data processing module is used for processing the state photos of the articles to be carried, and the specific process comprises the following steps:
setting the standard state of the article to be carried in the carrying area, selecting a feature point on the article, marking it, connecting it with the reference point in the carrying reference position to obtain the benchmark vector line of the article, and then obtaining the length and direction of the benchmark vector line;
acquiring the state of the article to be carried in the carrying area at the current moment, acquiring the feature point of the article, and then connecting it with the reference point in the carrying reference position to obtain the reference vector line of the article in the current state;
acquiring the length and direction of the reference vector line, and then establishing a three-dimensional coordinate system with the reference point as the origin;
and respectively mapping the benchmark vector line and the reference vector line into the three-dimensional coordinate system.
It should be further noted that, in the specific implementation process, the error between the current state and the standard state of the article to be carried can be read very intuitively from the positions of the benchmark vector line and the reference vector line in the three-dimensional coordinate system, so that the clamping position of the transfer robot is adjusted automatically when it carries the article; this avoids damage to the article and to the transfer robot from a collision between them caused by inconsistent states of the article.
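A minimal sketch of the vector-line construction described above, with the reference point as the origin of the coordinate system; the function name `vector_line` and the example coordinates are illustrative assumptions, not from the patent:

```python
import math

def vector_line(reference_point, feature_point):
    """Vector from the reference point (the coordinate origin) to the feature
    point, together with its length and direction (unit vector)."""
    v = tuple(f - r for f, r in zip(feature_point, reference_point))
    length = math.sqrt(sum(c * c for c in v))
    direction = tuple(c / length for c in v) if length else (0.0, 0.0, 0.0)
    return {"vector": v, "length": length, "direction": direction}

ref_point = (0.0, 0.0, 0.0)
# Benchmark vector line: feature point of the article in the standard state.
benchmark = vector_line(ref_point, (3.0, 4.0, 0.0))
# Reference vector line: feature point of the article in the current state.
current = vector_line(ref_point, (4.0, 3.0, 0.0))
```

Comparing the two lines (here equal in length but different in direction) exposes the error between the current and standard states at a glance.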
After the current state picture of the article to be carried is processed, the difference between the current state of the article to be carried and the standard state is analyzed through the data analysis module, and the specific process comprises the following steps:
obtaining the coordinate value of the feature point of the article to be carried in the standard state in the three-dimensional coordinate system, and recording it as the reference coordinate value (x0, y0, z0);
obtaining the coordinate value of the feature point of the article to be carried in the current state in the three-dimensional coordinate system, and recording it as the actual coordinate value (x1, y1, z0);
acquiring the offset of the article to be carried in the directions of the x axis and the y axis respectively according to the reference coordinate value and the position of the actual coordinate value, and marking the offset of the characteristic point of the article to be carried in the directions of the x axis and the y axis as XP and YP respectively;
the offset of the feature point of the article to be carried in the x-axis direction is XP = |x1 − x0|, and the offset in the y-axis direction is YP = |y1 − y0|;
When x1 is less than x0, the offset of the characteristic point of the article to be carried in the x-axis direction is XP along the x-axis negative direction; when x1 is larger than x0, the offset of the characteristic point of the article to be carried in the x-axis direction is XP along the positive x-axis direction; when x1 is x0, the offset of the characteristic point of the article to be carried in the x-axis direction is 0;
similarly, when y1 < y0, the characteristic point of the article to be carried is offset in the y-axis direction by YP along the negative y-axis direction; when y1 is larger than y0, the offset amount of the characteristic point of the article to be carried in the y-axis direction is offset YP along the positive y-axis direction; when y1 is y0, the offset amount of the feature point of the article to be carried in the y-axis direction is 0.
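The offset analysis above (magnitudes XP and YP together with a signed direction) can be sketched as follows; `xy_offsets` is a hypothetical helper name chosen for this illustration:

```python
def xy_offsets(ref, actual):
    """Offsets XP, YP of the feature point along the x and y axes, each paired
    with the direction of the shift ("negative", "positive", or "none").
    ref and actual are (x, y, z) coordinate triples."""
    (x0, y0, _), (x1, y1, _) = ref, actual
    xp, yp = abs(x1 - x0), abs(y1 - y0)
    x_dir = "negative" if x1 < x0 else "positive" if x1 > x0 else "none"
    y_dir = "negative" if y1 < y0 else "positive" if y1 > y0 else "none"
    return (xp, x_dir), (yp, y_dir)
```

For example, a feature point moving from (2, 5, 0) to (1, 8, 0) gives XP = 1 along the negative x-axis and YP = 3 along the positive y-axis.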
The adjusting module is used for adjusting the position of the carrying robot before carrying according to the analysis result of the data analysis module, and the specific process comprises the following steps:
acquiring the initial state of the transfer robot and marking the grabbing end in this state; selecting two feature points on the grabbing end and connecting them respectively with the base-angle end points of the isosceles triangle arranged in the carrying area to generate two connecting lines. It should be further noted that, in the specific implementation process, the two connecting lines are equal in length in the initial state and lie in the same vertical plane;
mapping the isosceles triangle in the carrying area to the corresponding position on the article to be carried in the standard state, then obtaining the position of the isosceles triangle on the article in the current state from its position in the standard state, and comparing the current-state triangle with the standard-state triangle to obtain the deflection angle of the article;
then adjusting the grabbing end of the transfer robot by the offsets of the feature point in the x-axis and y-axis directions, adjusting the grabbing angle of the grabbing end according to the deflection angle of the article, and then grabbing the article to be carried.
It should be further noted that, in the specific implementation process, after the grabbing end of the transfer robot grabs the article, it returns to the position it occupied in the initial state; that is, the offset and deflection angle applied before grabbing are reversed, so that the state of the article held in the grabbing end is consistent with the state in which it would be grabbed in the standard state.
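The comparison of the two isosceles triangles can be sketched as below, taking the rotation of the triangle's base edge between the standard and current states as the deflection angle. This interpretation of "comparing the triangles" and the helper names are assumptions made for the sketch:

```python
import math

def deflection_angle(triangle_std, triangle_cur):
    """Deflection angle of the article in degrees, taken as the rotation of the
    triangle's base edge between the standard and current states. Each triangle
    is (apex, base_left, base_right) with 2-D points."""
    def base_angle(tri):
        _, (blx, bly), (brx, bry) = tri
        # Angle of the base edge, from base-left to base-right.
        return math.atan2(bry - bly, brx - blx)
    delta = base_angle(triangle_cur) - base_angle(triangle_std)
    # Normalise to (-180, 180] so left/right deflection keeps its sign.
    return (math.degrees(delta) + 180.0) % 360.0 - 180.0
```

The signed result can then be fed directly to the grabbing-angle adjustment of the grabbing end, alongside the XP/YP offsets.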
It should be further noted that, in the specific implementation process, the change in position of the article to be carried within the carrying area is judged from the change in position of its feature point, and the grabbing position of the transfer robot is adjusted accordingly. The transfer robot can therefore automatically adjust its grabbing position according to the state change of the article, avoiding damage to the article from a collision between the grabbing end and the article caused by dislocation and skew of the article's current state, and reducing the risk of the transfer robot failing.
The adjusting module is also used for adjusting the placing position of the article to be carried in the placing area, and the specific process comprises the following steps:
marking the article that the transfer robot is to place into the placing area as the article to be placed, and setting the standard state of the article to be placed in the placing area according to actual requirements;
comparing the drop-point reference position of the article to be placed in its standard state in the placing area with the carrying reference position of the article to be carried in its standard state, and then obtaining the difference between the drop-point reference position and the carrying reference position;
and then, while the transfer robot carries the grabbed article to the placing area, adjusting the article so that, when it is put into the placing area, it is consistent with the standard state of the article to be placed in the placing area.
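The placement adjustment can be sketched as a pose correction applied while the article is in transit. Representing poses as (x, y, angle in degrees) and the function name `placement_correction` are assumptions for illustration:

```python
def placement_correction(carry_ref_pose, drop_ref_pose, grab_pose):
    """Difference between the drop-point and carrying reference positions,
    applied to the grabbed article's pose in transit so that it matches the
    standard placement state when set down. Poses are (x, y, angle_deg)."""
    dx = drop_ref_pose[0] - carry_ref_pose[0]
    dy = drop_ref_pose[1] - carry_ref_pose[1]
    dtheta = drop_ref_pose[2] - carry_ref_pose[2]
    return (grab_pose[0] + dx, grab_pose[1] + dy, grab_pose[2] + dtheta)
```

For example, a drop-point reference shifted by (10, 5) and rotated 90 degrees relative to the carrying reference moves a grabbed article at (1, 1, 0) to the target pose (11, 6, 90).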
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present invention.

Claims (8)

1. The visual detection and identification system based on the transfer robot comprises a control center and is characterized in that the control center is in communication connection with a data processing module, a data analysis module and an adjustment module;
the data processing module is used for setting standard states of the to-be-carried objects in the carrying area and the placing area, then obtaining the current states of the to-be-carried objects, respectively obtaining a benchmark reference vector line and a benchmark vector line, establishing a three-dimensional coordinate system, analyzing and obtaining the offset of the to-be-carried objects in the current states and the standard states according to the position relation of the benchmark reference vector line and the benchmark vector line in the three-dimensional coordinate system through the data analysis module, and finally adjusting the grabbing positions of the grabbing ends of the carrying robot according to the analysis results of the data analysis module through the adjustment module.
2. The transfer robot-based vision inspection identification system of claim 1, wherein the control center is further communicatively connected with a database for setting three-dimensional models of a transfer area and a placement area, comprising:
acquiring photos of a carrying area and a placing area, and rasterizing the acquired photos;
and de-marginalizing the rasterized photos so that only the content within the carrying area and the placing area is kept, selecting a reference point at the centre of the carrying area, forming a carrying reference position from the reference point, and mapping the carrying reference position into the placing area to generate a drop-point reference position.
3. The vision inspection and recognition system based on the transfer robot as claimed in claim 2, wherein the control center is further communicatively connected with a vision acquisition module, video data in the transfer area is acquired in real time through the vision acquisition module, and when an object to be transferred appears in the transfer area, a state picture of the object to be transferred is acquired.
4. The vision inspection and recognition system based on the transfer robot as claimed in claim 3, wherein the data processing module is configured to process the status picture of the article to be transferred, and the specific process includes:
setting the standard state of the article to be carried in the carrying area, selecting a feature point on the article, marking it, and connecting it with the reference point in the carrying reference position to obtain the benchmark vector line of the article to be carried;
the method comprises the steps of obtaining the state of an article to be conveyed in a conveying area at the current moment, obtaining feature points of the article to be conveyed, connecting the feature points with reference points in conveying reference positions in the conveying area, obtaining reference vector lines of the article to be conveyed in the current state, and establishing a three-dimensional coordinate system by taking the reference points as original points.
5. The vision inspection and recognition system based on the transfer robot of claim 4, wherein the data analysis module obtains the offset of the article to be transferred by analyzing the data processed by the data processing module, and the analysis process comprises:
obtaining coordinate values of the characteristic points of the articles to be carried in the standard state in the three-dimensional coordinate system, and recording the coordinate values as reference coordinate values;
obtaining a coordinate value of a characteristic point of an article to be carried in a current state in a three-dimensional coordinate system, and recording the coordinate value as an actual coordinate value;
and acquiring the offset of the article to be conveyed in the directions of the x axis and the y axis respectively according to the positions of the reference coordinate value and the actual coordinate value.
6. The vision inspection and recognition system based on the transfer robot as claimed in claim 5, wherein the adjusting module is configured to adjust the position of the gripping end of the transfer robot before transferring according to the analysis result of the data analysis module, and the adjusting process includes:
in the initial state of the transfer robot, selecting two characteristic points at the grabbing end of the transfer robot, and respectively connecting the two characteristic points with the bottom corner end points of an isosceles triangle arranged in the transfer area to generate two connecting lines;
and mapping the isosceles triangle in the carrying area to the corresponding position on the article to be carried in the standard state, then obtaining the position of the isosceles triangle on the article in the current state from its position in the standard state, and comparing the current-state triangle with the standard-state triangle to obtain the deflection angle of the article to be carried.
7. The transfer robot-based vision inspection and recognition system according to claim 6, wherein, when the transfer robot is in the initial state, the two connecting lines from the two feature points on the grabbing end to the two base-angle end points of the isosceles triangle are equal in length and lie in the same vertical plane.
8. The vision inspection and recognition system based on the transfer robot of claim 4, wherein the adjusting module is further configured to adjust a placement position of an object to be transferred in the placement area, and the specific process includes:
marking the article that the transfer robot is to place into the placing area as the article to be placed, and setting the standard state of the article to be placed in the placing area according to actual requirements;
comparing the drop-point reference position of the article to be placed in its standard state in the placing area with the carrying reference position of the article to be carried in its standard state, and then obtaining the difference between the drop-point reference position and the carrying reference position;
and then, while the transfer robot carries the grabbed article to the placing area, adjusting the article so that, when it is put into the placing area, it is consistent with the standard state of the article to be placed in the placing area.
CN202111480952.4A 2021-12-06 2021-12-06 Visual detection identification system based on transfer robot Active CN114055473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111480952.4A CN114055473B (en) 2021-12-06 2021-12-06 Visual detection identification system based on transfer robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111480952.4A CN114055473B (en) 2021-12-06 2021-12-06 Visual detection identification system based on transfer robot

Publications (2)

Publication Number Publication Date
CN114055473A true CN114055473A (en) 2022-02-18
CN114055473B CN114055473B (en) 2022-06-17

Family

ID=80228735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111480952.4A Active CN114055473B (en) 2021-12-06 2021-12-06 Visual detection identification system based on transfer robot

Country Status (1)

Country Link
CN (1) CN114055473B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0863214A (en) * 1994-08-25 1996-03-08 Fanuc Ltd Visual tracking method
JP2006082171A (en) * 2004-09-15 2006-03-30 Fuji Photo Film Co Ltd Tool location correcting method for articulated robot
CN110303498A (en) * 2019-07-03 2019-10-08 广东博智林机器人有限公司 Handling system and its control method, floor tile paving system
CN110374312A (en) * 2019-07-03 2019-10-25 广东博智林机器人有限公司 Handling system and its control method, floor tile paving system
CN111421528A (en) * 2020-03-24 2020-07-17 广州市轻工职业学校 Industrial robot's automated control system
CN112815832A (en) * 2019-11-15 2021-05-18 中国科学院长春光学精密机械与物理研究所 Measuring camera coordinate system calculation method based on 3D target
US20210178584A1 (en) * 2019-12-12 2021-06-17 Keyence Corporation Measuring device
CN113021341A (en) * 2021-03-18 2021-06-25 深圳市科服信息技术有限公司 Robot based on 5G article identification and automatic transfer transportation


Also Published As

Publication number Publication date
CN114055473B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN106737665B (en) Based on binocular vision and the matched mechanical arm control system of SIFT feature and implementation method
CN107324041B (en) Manipulator and automatic film magazine handling device for film magazine clamping
CN111136656B (en) Method for automatically identifying and grabbing three-dimensional irregular object of robot
CN111791239A (en) Method for realizing accurate grabbing by combining three-dimensional visual recognition
CN109013405A (en) It is a kind of independently detected with cast(ing) surface and substandard products sorting function robot system
CN110980276B (en) Method for implementing automatic casting blanking by three-dimensional vision in cooperation with robot
CN112010024B (en) Automatic container grabbing method and system based on laser and vision fusion detection
CN110666805A (en) Industrial robot sorting method based on active vision
CN113103215B (en) Motion control method for robot vision flyswatter
CN112561886A (en) Automatic workpiece sorting method and system based on machine vision
CN110539299A (en) Robot working method, controller and robot system
CN115703232A (en) Robot system with image-based sizing mechanism and method of operating the same
CN113269723A (en) Unordered grasping system for three-dimensional visual positioning and mechanical arm cooperative work parts
CN113602799B (en) Airport luggage case carrying system and control method thereof
DE102020104332A1 (en) SYSTEMS FOR CHANGING TOOLS ON A GRIPPER DEVICE
CN114055473B (en) Visual detection identification system based on transfer robot
CN113763462A (en) Method and system for automatically controlling feeding
CN206645534U (en) A kind of unordered grabbing device of robot based on double camera
CN111251296B (en) Visual detection system suitable for pile up neatly electric motor rotor
Reddy et al. Integration of robotic arm with vision system
CN114851206B (en) Method for grabbing stove based on vision guiding mechanical arm
CN115556102B (en) Robot sorting and planning method and planning equipment based on visual recognition
WO2004052596A1 (en) Method and arrangement to avoid collision between a robot and its surroundings while picking details including a sensorsystem
CN112171664A (en) Production line robot track compensation method, device and system based on visual identification
CN118220723B (en) Accurate stacking method and system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant