CN110295728B - Carrying system, control method thereof and floor tile paving system - Google Patents


Info

Publication number
CN110295728B
Authority
CN
China
Prior art keywords
image
placement
feature
grabbing
tile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910596240.5A
Other languages
Chinese (zh)
Other versions
CN110295728A (en)
Inventor
汪亚伦
黎威
李江
刘震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN201910596240.5A
Publication of CN110295728A
Application granted
Publication of CN110295728B

Classifications

    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04F FINISHING WORK ON BUILDINGS, e.g. STAIRS, FLOORS
    • E04F 21/00 Implements for finishing work on buildings
    • E04F 21/18 Implements for finishing work on buildings for setting wall or ceiling slabs or plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a carrying system, a control method thereof, and a floor tile paving system. The system includes: an image acquisition device for acquiring a first image of a current placement area and a second image of a current grabbing area, determining a first offset based on a first feature position in the first image and a preset placement feature position, and determining a second offset based on a second feature position in the second image and a preset grabbing feature position; a controller for determining a placement position from the first offset and a placement reference position, and a grabbing position from the second offset and the grabbing feature position; and a robot provided with a mechanical arm that, on a control command from the controller, moves to the grabbing position to grab the object to be carried and then moves to the placement position. The invention solves the technical problem that prior-art tiling robots cannot meet the accuracy requirements of tile paving.

Description

Carrying system, control method thereof and floor tile paving system
Technical Field
The invention relates to the field of robots, in particular to a carrying system, a control method of the carrying system and a floor tile paving system.
Background
Tiles are widely used in the construction field. Tiling is currently performed mostly by hand, but manual measurement is often inaccurate and construction progresses slowly, so tiling robots are gradually being introduced.
In the current tile-paving process, a tiling robot grabs floor tiles from a tile-transport AGV (Automated Guided Vehicle), carries them to the paving position, aligns them against a reference tile, and then lays them. However, AGV positioning accuracy is limited (errors of about 20 mm), while tile paving demands high accuracy, so this approach struggles to meet the accuracy requirements of tile paving.
No effective solution has yet been proposed for the problem that prior-art tiling robots cannot meet the accuracy requirements of tiling.
Disclosure of Invention
The embodiments of the invention provide a carrying system, a control method thereof, and a floor tile paving system, which at least solve the technical problem that prior-art tiling robots cannot meet the accuracy requirements of tile paving.
According to an aspect of an embodiment of the present invention, there is provided a carrying system including: an image acquisition device for acquiring a first image of a current placement area and a second image of a current grabbing area, determining a first offset based on a first feature position in the first image and a preset placement feature position, and determining a second offset based on a second feature position in the second image and a preset grabbing feature position; a controller for determining a placement position from the first offset and a placement reference position, and a grabbing position from the second offset and the grabbing feature position, where the placement reference position is derived from the placement feature position; and a robot with a mechanical arm that, based on a control command from the controller, moves to the grabbing position to grab the object to be carried and then moves to the placement position.
Further, the image acquisition device is arranged at the distal end of the mechanical arm.
Further, the object to be carried is a tile to be laid, the first feature position is the position in the first image of a corner of an already-laid tile, and the second feature position is the position in the second image of a corner of the tile to be laid.
Further, the placement feature position and the grabbing feature position are coordinate parameters expressed in the world coordinates of the mechanical arm, and the controller is further configured to convert both the first feature position and the second feature position into coordinate parameters in the mechanical arm's world coordinates.
Further, the controller is further configured, before the image acquisition device acquires the first image of the current placement area and the second image of the current grabbing area, to control the mechanical arm to move to an initial placement area, acquire an initial first image through the image acquisition device, and extract the first feature position in the initial first image to obtain the placement feature position; and to control the mechanical arm to move to an initial grabbing area, acquire an initial second image through the image acquisition device, and extract the second feature position in the initial second image to obtain the grabbing feature position.
According to an aspect of an embodiment of the present invention, there is provided a control method of a carrying system, the carrying system including a robot with a mechanical arm, an image acquisition device, and a controller, the control method comprising the following steps: acquiring a first image of a current placement area and a second image of a current grabbing area; acquiring a first offset determined based on a first feature position in the first image and a preset placement feature position; acquiring a second offset determined based on a second feature position in the second image and a preset grabbing feature position; determining a placement position from the first offset and a placement reference position, where the placement reference position is derived from the placement feature position, and determining a grabbing position from the second offset and the grabbing feature position; and controlling the mechanical arm to move to the grabbing position to grab the object to be carried and then to move to the placement position.
Further, before the first image of the current placement area and the second image of the current grabbing area are acquired, the placement feature position and the grabbing feature position are obtained, which includes: controlling the mechanical arm to move to an initial placement area and acquiring an initial first image through the image acquisition device; extracting the first feature position in the initial first image to obtain the placement feature position; controlling the mechanical arm to move to an initial grabbing area and acquiring an initial second image through the image acquisition device; and extracting the second feature position in the initial second image to obtain the grabbing feature position.
Further, the object to be carried is a floor tile, the first feature position is the position in the first image of a corner of an already-laid floor tile, and the second feature position is the position in the second image of a corner of the floor tile to be laid.
Further, the first feature position is converted into a coordinate parameter in the world coordinates of the mechanical arm, and the coordinate parameter corresponding to the first feature position is compared with the coordinate parameter corresponding to the placement feature position to obtain the first offset.
Further, when it is determined that the mechanical arm has moved to the current placement area, an image acquisition instruction is sent to make the image acquisition device acquire the first image; and when it is determined that the mechanical arm has moved to the current grabbing area, an image acquisition instruction is sent to make the image acquisition device acquire the second image.
According to an aspect of an embodiment of the present invention, there is provided a floor tile paving system comprising any one of the carrying systems described above.
In the embodiments of the invention, an image acquisition device acquires a first image of the current placement area and a second image of the current grabbing area, and determines a first offset between a first feature position in the first image and a preset placement feature position, and a second offset between a second feature position in the second image and a preset grabbing feature position. After receiving the two offsets, a controller determines the grabbing position and the placement position from them, so that the object to be carried is grabbed at the determined grabbing position and placed at the determined placement position. This scheme determines the grabbing and placement positions from images of the current grabbing and placement areas acquired by the image acquisition device, thereby providing image-based visual guidance of the carrying process and improving the accuracy of the carrying system. When the object to be carried is a tile, high-accuracy tile laying can be completed without human intervention after a single teaching pass, which solves the technical problem that prior-art tiling robots cannot meet the accuracy requirements of tiling.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a carrying system according to an embodiment of the present application;
FIG. 2 is a schematic illustration of the operation of a carrying system according to an embodiment of the present application;
FIG. 3 is a system architecture diagram of a carrying system according to an embodiment of the present application;
FIG. 4a is a schematic illustration of a robot trajectory according to an embodiment of the present application;
FIG. 4b is a schematic illustration of another robot trajectory according to an embodiment of the present application; and
FIG. 5 is a flowchart of a control method of a carrying system according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, an embodiment of a carrying system is provided. Fig. 1 is a schematic diagram of a carrying system according to an embodiment of the present application; as shown in Fig. 1, the system includes:
An image acquisition device 10, used for acquiring a first image of a current placement area and a second image of a current grabbing area, determining a first offset based on a first feature position in the first image and a preset placement feature position, and determining a second offset based on a second feature position in the second image and a preset grabbing feature position.
Specifically, the image acquisition device may be a smart camera, more specifically a smart camera disposed at the distal end of the mechanical arm. Fig. 2 is a schematic diagram of the operation of a carrying system according to an embodiment of the present application. As shown in Fig. 2, the object to be carried may be a floor tile; in the drawing, the tile-taking photographing area is the current grabbing area, and the tile-placing photographing area is the current placement area. The current grabbing area may be set near the raw-material store (where objects to be carried are stacked), and the current placement area may be set near the reference tile (an already-laid tile). The image acquisition device is carried by the robot's mechanical arm to the current grabbing area and the current placement area in turn. After it reaches either area, a designated signal is sent to the controller (for example, by modifying the value at a designated position in a register of the controller); the controller then sends an image acquisition instruction to the image acquisition device, and on receiving this instruction the device acquires the first or second image.
The placement feature position and the grabbing feature position may be positions determined during teaching or positions designated manually. Taking teaching as an example: during teaching, the motion of the robot and of the mechanical arm is controlled manually to complete one carrying of an object, and the placement feature position and the grabbing feature position are determined in the process.
The first feature position may be the position in the first image of a feature point of the object to be carried; the feature point may be the object's center point or one of its corners (for example, a tile corner in the tile-taking photographing area in Fig. 2). The second feature position is defined analogously and is not described again here.
The first offset may be the difference between the first feature position and the preset placement feature position, and the second offset may be the difference between the second feature position and the grabbing feature position.
A controller 20, configured to determine the placement position from the first offset and a placement reference position, and the grabbing position from the second offset and the grabbing feature position, where the placement reference position is derived from the placement feature position.
Specifically, the placement reference position may be a position aligned with the placement feature position and offset from it by a prescribed distance (e.g., 2 mm).
In an alternative embodiment, the first offset and the placement feature position may be brought into the same coordinate system, and the placement position obtained by adding the first offset to the placement feature position. Similarly, the second offset and the grabbing feature position may be brought into the same coordinate system, and the grabbing position obtained by adding the second offset to the grabbing feature position.
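The offset arithmetic described above can be sketched as follows. This is a hypothetical illustration only: positions are assumed to be 2-D (x, y) coordinates in millimeters in the robot's world frame, and all function names and numbers are made up for the example.

```python
# Illustrative sketch: an offset is the difference between the feature
# position seen now and the one recorded at teaching; the target position
# is the taught reference position shifted by that offset.

def offset(current_feature, taught_feature):
    """Offset between a feature seen now and the feature recorded at teaching."""
    return (current_feature[0] - taught_feature[0],
            current_feature[1] - taught_feature[1])

def apply_offset(reference, delta):
    """Shift a taught reference position by a measured offset."""
    return (reference[0] + delta[0], reference[1] + delta[1])

# Example (placement side): the taught tile corner sat at (100.0, 50.0) mm
# in robot world coordinates; the placement reference position is 2 mm away.
taught_corner = (100.0, 50.0)
placement_reference = (100.0, 52.0)

# The camera now locates the corner at (101.5, 49.0): this is the first offset.
first_offset = offset((101.5, 49.0), taught_corner)          # (1.5, -1.0)
placement_position = apply_offset(placement_reference, first_offset)
print(placement_position)  # (101.5, 51.0)
```

The grabbing position would be computed the same way from the second offset and the grabbing feature position.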
A robot 30 having a mechanical arm, which, based on a control command from the controller, moves to the grabbing position to grab the object to be carried and then moves to the placement position.
Specifically, the end of the robot arm may include a suction cup for sucking the object to be carried.
After the grabbing and placement positions are determined, the controller controls the mechanical arm to grab the object to be carried at the grabbing position and to move it to the placement position, thereby completing the carrying of the object.
The controller may be the robot's own controller or a controller of an upper computer.
Referring to Fig. 2, an application of the above embodiments to tile paving is described; it may include the following steps:
and S21, moving the mechanical arm to the tile photographing position, triggering the camera to acquire images, positioning the characteristics (the corners of the tiles), calculating the offset of the coordinates of the robot, and transmitting the offset to the robot through a TCP/IP protocol. The robot adds the offset to the tile coordinates at the time of teaching to obtain the tile coordinates at this time.
Specifically, the offset in S21 is the first offset, and the tile coordinates recorded at teaching are the placement reference position. After receiving the offset calculated by the smart camera, the robot computes the placement position, i.e., the current tile-placing coordinates, from the received offset and the taught tile coordinates.
S22: move the mechanical arm to the tile-taking photographing position, trigger the camera to acquire an image, locate the feature (a corner of the floor tile), calculate the offset in robot coordinates, and transmit the offset to the robot via the TCP/IP protocol. The robot adds the offset to the tile-taking coordinates recorded at teaching to obtain the current tile-taking coordinates.
Specifically, the tile-taking photographing position in S22 is the current grabbing area, and the tile-taking coordinates recorded at teaching are the grabbing feature position. After receiving the offset calculated by the smart camera, the robot obtains the current grabbing position, i.e., the current tile-taking coordinates, from the received offset and the taught tile-taking coordinates.
S23: operate the mechanical arm to grab a floor tile according to the obtained tile-taking coordinates, carry it, and lay it at the tile-placing coordinates.
After the tile-taking and tile-placing coordinates have been determined, the mechanical arm, operating under the robot's control, grabs the floor tile to be laid at the tile-taking position and places it at the tile-placing position.
S24: move the mechanical arm to the tile-placing photographing position, trigger the camera to acquire an image, select the adjacent edges of the floor tiles in the image as features, and measure the distance between the adjacent edges to obtain the joint width between the adjacent tiles.
This step triggers the camera to image the placement area again after the mechanical arm has placed the floor tile, and obtains from the acquired image the distance between the adjacent edges of the reference tile and the newly laid tile, i.e., the joint width between the two tiles.
It should be noted that an accuracy range (for example, 2 ± 0.5 mm) can be set according to the paving accuracy requirement; if the joint width between the two tiles falls outside this range, an alarm can be raised to the upper computer to indicate that the laid tile is mispositioned, further improving the accuracy of floor tile paving.
It should also be noted that, when measuring the distance between adjacent edges, multiple points may be selected on one edge and perpendiculars dropped to the other edge; if the perpendicular lengths at all points are equal and within the accuracy range, the tiling is judged correct; otherwise an alarm signal is sent.
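The joint-width check just described can be sketched roughly as follows. This is an assumption-laden illustration, not the patent's implementation: edges are modelled as 2-D lines a·x + b·y + c = 0, sample points and the 2 ± 0.5 mm tolerance are made-up example values, and the function names are hypothetical.

```python
# Sketch: sample several points along one tile edge, measure each point's
# perpendicular distance to the adjacent edge, and flag the placement if
# any distance falls outside the tolerance band.
import math

def point_line_distance(p, line):
    """Perpendicular distance from point p=(x, y) to the line (a, b, c)."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)

def gap_ok(sample_points, adjacent_edge, nominal=2.0, tol=0.5):
    """True if every sampled distance lies within [nominal - tol, nominal + tol]."""
    return all(nominal - tol <= point_line_distance(p, adjacent_edge) <= nominal + tol
               for p in sample_points)

# Adjacent edge: the vertical line x = 0, i.e. coefficients (1, 0, 0).
edge = (1.0, 0.0, 0.0)
print(gap_ok([(2.1, 0.0), (1.9, 400.0), (2.0, 800.0)], edge))  # True
print(gap_ok([(2.1, 0.0), (2.8, 400.0)], edge))                # False: 2.8 mm gap
```

A real system would also check that the sampled distances are mutually consistent, as the paragraph above requires, before declaring the joint uniform.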
From the above embodiments it can be seen that the software of the present application is deployed in a smart camera (i.e., the image acquisition device). The smart camera communicates with the upper computer or the robot over its network port via the TCP/IP protocol, receives photographing instructions sent by the upper computer or the robot, computes the offsets (including the first and second offsets) from the captured images, and sends the computed offsets to the upper computer or the robot, so that the upper computer can control the motion of the mechanical arm to grab and lay the floor tiles.
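The offset message exchanged between camera and robot could look something like the following. The patent does not specify a wire format, so the comma-separated text encoding here is purely an assumption for illustration; only the idea (camera serializes an offset, robot parses it) comes from the text above.

```python
# Hypothetical wire format for the offset the smart camera sends to the
# robot over TCP/IP: two signed decimals, comma-separated, ASCII-encoded.
# A real product would define its own protocol.

def encode_offset(dx, dy):
    """Serialize an (dx, dy) offset in mm for transmission."""
    return f"{dx:.3f},{dy:.3f}".encode("ascii")

def decode_offset(payload):
    """Parse the offset back out of a received payload."""
    dx, dy = payload.decode("ascii").split(",")
    return float(dx), float(dy)

msg = encode_offset(1.5, -1.0)
print(msg)                  # b'1.500,-1.000'
print(decode_offset(msg))   # (1.5, -1.0)
```

In practice the payload would be written to and read from a TCP socket; the round-trip above just shows the serialization step.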
Fig. 3 is a system architecture diagram of a carrying system according to an embodiment of the present application. Referring to Fig. 3, the software of the smart camera comprises a camera calibration module, an image acquisition module, a feature positioning module, an offset calculation module, and a data communication module. The smart camera is calibrated before operation. After a photographing instruction from the robot control program is received through the data communication module, the image acquisition module triggers the camera to photograph the floor tile. The feature positioning module then locates a corner of the floor tile as the feature by template matching and computes its pixel coordinates. Using the calibration data recorded in the camera calibration module, the pixel coordinates of the tile corner are converted into robot world coordinates. The offset calculation module compares the robot world coordinates of the corner feature at the current position with those of the taught corner feature and computes the offset in robot coordinates, which the data communication module sends to the robot control program. Finally, the control program moves the mechanical arm to complete the grabbing and laying of the floor tile.
It should be noted that the robot control program is stored in a controller, which may be a controller in the robot or a controller of the upper computer; the present application is not specifically limited in this respect.
As can be seen from the above, in the above embodiments of the present application, the image acquisition device acquires a first image of the current placement area and a second image of the current grabbing area, and determines a first offset between the first feature position in the first image and the preset placement feature position, and a second offset between the second feature position in the second image and the preset grabbing feature position. After receiving the two offsets, the controller determines the grabbing position and the placement position from them, so that the object to be carried is grabbed at the determined grabbing position and placed at the determined placement position. This scheme determines the grabbing and placement positions from images of the current grabbing and placement areas acquired by the image acquisition device, thereby providing image-based visual guidance of the carrying process and improving the accuracy of the carrying system. When the object to be carried is a tile, high-accuracy tile laying can be completed without human intervention after a single teaching pass, which solves the technical problem that prior-art tiling robots cannot meet the accuracy requirements of tiling.
As an alternative embodiment, the image acquisition device is arranged at the distal end of the mechanical arm.
In the above scheme, the image acquisition device may be a smart camera disposed at the distal end of the mechanical arm, so that it can be carried by the arm to the current grabbing area or the current placement area.
As an alternative embodiment, the object to be carried is a tile to be laid, the first feature position is the position in the first image of a corner of an already-laid tile, and the second feature position is the position in the second image of a corner of the tile to be laid.
Specifically, in the above scheme, the floor tile to be laid may be square, rectangular, regular hexagonal, and so on. The corner used as the feature, whether of the already-laid tile in the first image or of the tile to be laid in the second image, may be any designated one of the tile's corners.
As an alternative embodiment, the placement feature position and the grabbing feature position are coordinate parameters expressed in the world coordinates of the mechanical arm, and the controller is further configured to convert both the first feature position and the second feature position into coordinate parameters in the mechanical arm's world coordinates.
Specifically, because the robot's position changes as it steps through the work, the grabbing feature position and the placement feature position are recorded in the world coordinate system of the mechanical arm. The first and second feature positions, however, are extracted from images acquired by the image acquisition device, and are therefore coordinate parameters in the image acquisition device's coordinate system. Parameters in two different coordinate systems cannot be computed with directly, so the controller must also convert the first and second feature positions into coordinate parameters in the mechanical arm's world coordinates before computing with the placement and grabbing feature positions.
It should be noted that the robot operates according to a predetermined step and path. The step is the distance the robot must move after laying one floor tile and before laying the next; the path is the trajectory the robot follows while tiling the room. For example, when laying square tiles of size 800 mm × 800 mm, the robot's step may be 800 mm, with the direction of motion toward the next tile as determined by the path. The path may proceed row by row, with odd rows laid left to right and even rows right to left, as shown by the arrows in Fig. 4a; or from the peripheral tiles inward, as shown by the arrows in Fig. 4b.
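The row-by-row alternating path of Fig. 4a can be sketched as a small generator. This is an illustration only: the patent describes the path geometrically, and the tile-center coordinates, grid dimensions, and function name below are assumptions made for the example.

```python
# Sketch of the serpentine ("odd rows left to right, even rows right to
# left") paving path with an 800 mm step between 800 mm x 800 mm tiles.

def serpentine_path(rows, cols, step=800):
    """Return tile-center (x, y) positions row by row, alternating direction."""
    path = []
    for r in range(rows):
        # Even-indexed rows (1st, 3rd, ...) go left to right, odd ones reverse.
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            path.append((c * step, r * step))
    return path

print(serpentine_path(2, 3))
# [(0, 0), (800, 0), (1600, 0), (1600, 800), (800, 800), (0, 800)]
```

The periphery-inward path of Fig. 4b would need a different (spiral-like) ordering and is not shown here.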
In an alternative embodiment, the image acquisition device may be calibrated before the system operates, yielding a transformation matrix between the image acquisition device's coordinate system and the mechanical arm's world coordinates; the first and second feature positions can then be transformed using this matrix.
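Applying such a calibration matrix might look as follows. The 2-D affine form and all numeric values are assumptions for illustration; a real calibration would estimate the matrix from known correspondences between pixel and world points, and might use a full perspective model instead.

```python
# Sketch: convert a pixel coordinate (u, v) into a robot world coordinate
# using a 2x3 affine calibration matrix [[a, b, tx], [c, d, ty]].

def pixel_to_world(pixel, T):
    """Apply the affine transform T to the pixel coordinate."""
    u, v = pixel
    (a, b, tx), (c, d, ty) = T
    return (a * u + b * v + tx, c * u + d * v + ty)

# Example calibration: 0.5 mm per pixel, no rotation, translated origin.
T = ((0.5, 0.0, 100.0),
     (0.0, 0.5, 200.0))
print(pixel_to_world((40, 20), T))  # (120.0, 210.0)
```

With the corner's pixel coordinates converted this way, the offset against the taught feature position can be computed directly in the mechanical arm's world coordinates.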
As an optional embodiment, the controller is further configured, before the image acquisition device acquires the first image of the current placement area and the second image of the current grabbing area, to: control the mechanical arm to move to an initial placement area, acquire an initial first image through the image acquisition device, and extract the first feature position in the initial first image to obtain the placement feature position; and control the mechanical arm to move to an initial grabbing area, acquire an initial second image through the image acquisition device, and extract the second feature position in the initial second image to obtain the grabbing feature position.
When the image acquisition device acquires the first image of the current placement area and the second image of the current grabbing area, an object is being carried. Before this, the controller must obtain the grabbing feature position and the placement feature position, so that the current grabbing and placement positions can be determined from them during carrying.
In an alternative embodiment, the steps of moving the mechanical arm and acquiring the initial first and second images with the image acquisition device may be completed by manual teaching, so that the grabbing feature position and the placement reference position are obtained from the manual teaching process.
Still taking tile paving as an example, the grabbing feature position and the placement feature position may be obtained at the initial stage of paving tiles in a room; that is, when tile paving is started in a room, the grabbing feature position and the placement feature position are first obtained by teaching, and the tiles are then paved.
Specifically, the initial placement area and the initial grabbing area may be pre-designated areas, for example, an area where a first tile to be tiled needs to be placed (for example, a corner in a room) when the tiles are tiled is the initial placement area, and an area where the first tile to be tiled is grabbed is the initial grabbing area.
In the following, the teaching process is described in detail, wherein the image capturing device is an intelligent camera disposed at the end of the robot:
S41: operate the mechanical arm to move to the tile photographing position and trigger the camera to acquire an image. A corner of the tile is selected in the image as the positioning feature, and the pose of the reference tile at that moment is recorded as the placement reference position.
Specifically, the tile photographing position is the initial placement area, the image acquired in the above step is the initial first image, and the corner of the tile is the first feature position. The corner may be any preset or designated corner of the tile.
When the mechanical arm is controlled to reach the tile photographing position, a photographing instruction is sent to the smart camera to trigger image acquisition. After acquiring the image, the smart camera processes it and extracts the position of one corner of the floor tile from the image as the placement feature position.
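The corner-extraction step can be illustrated with a deliberately simplified stand-in: given a binary segmentation of the tile (the smart camera's actual image processing is not disclosed in the patent), take the foreground pixel nearest the image's top-left as the corner feature. All names here are hypothetical:

```python
def tile_corner(mask):
    """Return the (row, col) of the foreground pixel closest to the
    image's top-left corner, standing in for the tile-corner feature
    the smart camera extracts. `mask` is a binary tile segmentation
    (hypothetical preprocessing output, 1 = tile, 0 = background)."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    return min(pts, key=lambda p: p[0] + p[1])
```

A production system would instead use a calibrated corner detector, but the output is the same kind of quantity: a single feature position in image coordinates.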
S42: operate the mechanical arm to move to the tile-taking photographing position and trigger the camera to acquire an image. A corner of the tile is selected in the image as the positioning feature, and its position at that moment is recorded as the grabbing feature position.
Specifically, the tile-taking photographing position is the initial grabbing area, the image acquired in this step is the initial second image, and the corner of the tile is the second feature position. The corner may be any preset or designated corner of the tile.
When the mechanical arm is controlled to reach the tile-taking photographing position, a photographing instruction is sent to the smart camera to trigger image acquisition. After acquiring the image, the smart camera processes it and extracts the position of one corner of the floor tile from the image as the grabbing feature position.
S43: operate the mechanical arm to grab, carry, and lay the floor tile, controlling the gap between adjacent floor tiles to 2 mm.
In the above steps, the mechanical arm is operated to grab and lay the floor tiles, which completes the teaching process. The grabbing feature position and the placement feature position determined through teaching serve as the references for determining the grabbing and laying coordinates of each tile during subsequent operation.
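The outcome of teaching steps S41 to S43 can be summarised as a small record that later carrying cycles consult (an illustrative sketch; the record layout and names are assumptions, not the patented data structure):

```python
from dataclasses import dataclass


@dataclass
class TaughtReferences:
    place_feature: tuple    # tile-corner position from the initial first image (S41)
    grab_feature: tuple     # tile-corner position from the initial second image (S42)
    place_reference: tuple  # recorded pose of the reference tile (S41)


def teach(place_corner, grab_corner, reference_pose):
    # One-time manual teaching: record the two feature positions and the
    # placement reference; every later grab/placement is computed as an
    # offset from these values.
    return TaughtReferences(place_corner, grab_corner, reference_pose)
```

After this record exists, the system never needs another manual intervention: each cycle compares fresh images against the stored feature positions.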
Example 2
In accordance with an embodiment of the present invention, there is provided an embodiment of a method of controlling a handling system, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 5 is a flowchart of a control method of a handling system according to an embodiment of the present application. The handling system includes a robot with a mechanical arm, an image acquisition device, and a controller, and the control method of the handling system comprises the following steps:
step S502, a first image of the current placement area and a second image of the current grabbing area are obtained.
In step S504, a first offset determined based on the first feature position in the first image and a preset placement feature position is obtained.
In step S506, a second offset determined based on the second feature position in the second image and the preset capture feature position is obtained.
Specifically, the image acquisition device may be a smart camera, and more specifically a smart camera disposed at the end of the mechanical arm. Fig. 2 is a schematic diagram of the operation of a handling system according to an embodiment of the present application. As shown in fig. 2, the object to be carried may be a floor tile; in the drawing, the tile-taking photographing area is the current grabbing area, and the tile photographing area is the current placement area. The current grabbing area may be set near the raw material store (the position where objects to be carried are placed), and the current placement area may be set near the reference tile (an already-laid object). The image acquisition device can be driven by the mechanical arm of the robot to move to the current grabbing area and the current placement area respectively. After reaching either area, it sends a designated signal to the controller (for example, by modifying a value at a designated position in a register of the controller); upon receiving the designated signal, the controller sends an image acquisition instruction to the image acquisition device, which then acquires the first image or the second image.
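The register handshake described above might look as follows (the register layout, flag value, and camera-trigger callable are hypothetical stand-ins for the real controller interface, which the patent does not specify):

```python
import time


def wait_and_capture(registers, addr, trigger_camera, timeout=5.0):
    """Poll a controller register for the arm's 'in position' flag, then
    issue the image-acquisition instruction. `registers` is a mapping
    standing in for controller memory; `trigger_camera` is a callable
    standing in for the smart camera's capture command."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if registers.get(addr) == 1:    # arm wrote the designated signal
            return trigger_camera()     # controller triggers the smart camera
        time.sleep(0.01)
    raise TimeoutError("arm did not reach the photographing position in time")
```

Polling with a timeout rather than blocking forever keeps the controller responsive if the arm stalls before reaching the photographing position.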
The placement feature position and the grabbing feature position may be positions determined during teaching or manually designated positions. Taking teaching as an example, during the teaching process the movement of the robot and of the mechanical arm is controlled manually to complete one carrying cycle, and the placement feature position and the grabbing feature position are determined in that process.
The first feature position may be the position of a feature point of the object to be carried in the first image; the feature point may be the center point of the object, or a corner of the object in the first image (for example, a corner of a tile in the tile photographing area in fig. 2). The second feature position is defined in the same way for the second image and is not described again here.
The first offset may be the difference between the first feature position and the preset placement feature position, and the second offset may be the difference between the second feature position and the preset grabbing feature position.
And step S508, determining a placement position according to the first offset and a placement reference position, and determining a grabbing position based on the second offset and a grabbing feature position, wherein the placement reference position is obtained according to the placement feature position.
In an alternative embodiment, the first offset and the placement reference position may be adjusted to the same coordinate system, and then the first offset is added on the basis of the placement reference position, so as to obtain the placement position. Similarly, the second offset and the grabbing feature position can be adjusted to the same coordinate system, and then the second offset is added on the basis of grabbing the feature position, so that the grabbing position can be obtained.
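Under the assumption that all quantities have already been brought into the arm's world coordinates, the position computation of step S508 reduces to adding the current offset to the taught reference (function and parameter names are assumptions for illustration):

```python
def target_position(reference, current_feature, taught_feature):
    """Grab/place position = reference + (current feature - taught feature),
    with all three points expressed in the mechanical arm's world
    coordinates. The same formula serves both the placement position
    (placement reference + first offset) and the grabbing position
    (grabbing feature position + second offset)."""
    dx = current_feature[0] - taught_feature[0]
    dy = current_feature[1] - taught_feature[1]
    return (reference[0] + dx, reference[1] + dy)
```

Because only differences of feature positions enter the result, a constant bias common to both images (for example a fixed camera-mounting offset) cancels out.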
In step S510, after the mechanical arm is controlled to move to the grabbing position to grab the object to be carried, the mechanical arm is controlled to move to the placing position.
The end of the mechanical arm may include a suction cup for sucking an object to be carried. After the grabbing position and the placing position are determined, the controller can control the mechanical arm to grab the object to be conveyed according to the grabbing position and move the object to be conveyed to the placing position according to the placing position, and therefore the object is conveyed.
The controller may be the robot's own controller, or a controller of an upper computer.
As can be seen from the above, in the above embodiment of the present application, a first image of a current placement area and a second image of a current grabbing area are obtained; a first offset determined based on a first feature position in the first image and a preset placement feature position is acquired; a second offset determined based on a second feature position in the second image and a preset grabbing feature position is acquired; a placement position is determined according to the first offset and the placement reference position, and a grabbing position is determined based on the second offset and the grabbing feature position; and after the mechanical arm is controlled to move to the grabbing position to grab the object to be carried, the mechanical arm is controlled to move to the placing position. In this scheme, the grabbing position and the placement position are determined from the first image of the current placement area and the second image of the current grabbing area acquired by the image acquisition device, so that the carrying of the object is visually guided by images and the accuracy of the handling system is improved. When the object to be carried is a tile, high-precision tile laying can be completed without manual intervention once teaching is done, which solves the technical problem in the prior art that robots used for tiling cannot meet the accuracy requirements of tile paving.
As an alternative embodiment, before acquiring the first image of the current placement region and the second image of the current capture region, the method further comprises: obtaining a placement feature position and a capture feature position, wherein the steps of obtaining the placement feature position and the capture feature position comprise: controlling the mechanical arm to move to an initial placement area, and acquiring an initial first image through an image acquisition device; extracting a first characteristic position in the initial first image to obtain a placement characteristic position; controlling the mechanical arm to move to an initial grabbing area, and acquiring an initial second image through an image acquisition device; and extracting a second characteristic position in the initial second image to obtain a capture characteristic position.
The first image of the current placement area and the second image of the current grabbing area are acquired while the object to be carried is being carried. Before this, the controller needs to obtain the grabbing feature position and the placement feature position, so that during carrying the current grabbing position and placement position can be determined from them.
In an optional embodiment, the steps of controlling the mechanical arm to move and controlling the image acquisition device to acquire the initial first image and the initial second image can be completed by manual teaching, so that the grabbing feature position and the placement feature position are obtained from the manual teaching process.
Still taking tile paving as an example, the grabbing feature position and the placement feature position can be obtained at the initial stage of paving tiles indoors; that is, when paving begins, the two feature positions are first obtained by teaching, and the tiles are then paved.
Specifically, the initial placement area and the initial grabbing area may be pre-designated areas, for example, an area where a first tile to be tiled needs to be placed (for example, a corner in a room) when the tiles are tiled is the initial placement area, and an area where the first tile to be tiled is grabbed is the initial grabbing area.
As an alternative embodiment, the object to be carried is a tile to be paved, the first feature position is the position of a corner of an already-paved tile in the first image, and the second feature position is the position of a corner of the tile to be paved in the second image.
Specifically, in the above scheme, the tile to be paved may be square, rectangular, regular hexagonal, or another shape. The corner of the already-paved tile in the first image and the corner of the tile to be paved in the second image may each be a designated one of the tile's corners.
As an alternative embodiment, the placement feature position is a coordinate parameter expressed in the world coordinates of the mechanical arm, and obtaining the first offset determined based on the first feature position in the first image and the preset placement feature position includes: converting the first feature position into a coordinate parameter in the world coordinates of the mechanical arm; and comparing the coordinate parameter corresponding to the first feature position with the coordinate parameter corresponding to the placement feature position to obtain the first offset.
Specifically, the position of the robot changes with each step, so the grabbing feature position and the placement feature position are recorded in the world coordinates of the mechanical arm. The first feature position and the second feature position are both obtained from images acquired by the image acquisition device, and therefore are coordinate parameters in the coordinate system of the image acquisition device. Parameters in two different coordinate systems cannot be compared directly, so the controller must convert both the first feature position and the second feature position into coordinate parameters in the world coordinates of the mechanical arm before comparing them with the placement feature position and the grabbing feature position.
As an alternative embodiment, acquiring a first image of a current placement area and a second image of a current capture area includes: determining that the mechanical arm moves to a current placement area, and sending an image acquisition instruction to control an image acquisition device to acquire a first image; and determining that the mechanical arm moves to the current grabbing area, and sending an image acquisition instruction to control the image acquisition device to acquire a second image.
Referring to fig. 2, the object to be carried may be a floor tile; in the drawing, the tile-taking photographing area is the current grabbing area, and the tile photographing area is the current placement area. The current grabbing area may be set near the raw material store (the position where objects to be carried are placed), and the current placement area may be set near the reference tile (an already-laid object). The image acquisition device can be driven by the mechanical arm of the robot to move to the current grabbing area and the current placement area respectively. After reaching either area, it sends a designated signal to the controller (for example, by modifying a value at a designated position in a register of the controller); upon receiving the designated signal, the controller sends an image acquisition instruction to the image acquisition device, which then acquires the first image or the second image.
Example 3
According to an embodiment of the present invention, there is provided a tile paving system, characterized by comprising the handling system described in embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (11)

1. A handling system, comprising:
the image acquisition device is used for acquiring a first image of a current placement area and a second image of a current grabbing area, determining a first offset based on a first feature position in the first image and a preset placement feature position, and determining a second offset based on a second feature position in the second image and a preset grabbing feature position;
the controller is used for determining a placement position according to the first offset and a placement reference position, and determining a grabbing position based on the second offset and the grabbing feature position, wherein the placement reference position is obtained according to the placement feature position;
and the robot is provided with a mechanical arm, and based on a control command of the controller, the mechanical arm moves to the grabbing position to grab the object to be carried and then moves to the placing position.
2. The system of claim 1, wherein the image acquisition device is disposed at the end of the mechanical arm.
3. The system according to claim 1, wherein the object to be carried is a tile to be paved, the first feature position is a position of a corner of an already-paved tile in the first image, and the second feature position is a position of a corner of the tile to be paved in the second image.
4. The system of claim 1, wherein the placement feature location and the grasping feature location are coordinate parameters represented in world coordinates of the robotic arm, and wherein the controller is further configured to convert both the first feature location and the second feature location to coordinate parameters in the world coordinates of the robotic arm.
5. The system of claim 1, wherein the controller is further configured to control the robotic arm to move to an initial placement area before the image capturing device captures a first image of a current placement area and a second image of a current capture area, capture the initial first image by the image capturing device, extract a first feature position in the initial first image to obtain the placement feature position, and control the robotic arm to move to the initial capture area, capture the initial second image by the image capturing device, and extract a second feature position in the initial second image to obtain the capture feature position.
6. A method of controlling a handling system, characterized in that the handling system comprises: the robot with the mechanical arm, an image acquisition device and a controller, wherein the control method of the handling system comprises the following steps:
acquiring a first image of a current placement area and a second image of a current grabbing area;
acquiring a first offset determined based on a first feature position in the first image and a preset placement feature position;
acquiring a second offset determined based on a second feature position in the second image and a preset capture feature position;
determining a placement position according to the first offset and a placement reference position, and determining a grabbing position based on the second offset and the grabbing feature position, wherein the placement reference position is obtained according to the placement feature position;
and after the mechanical arm is controlled to move to the grabbing position to grab the object to be carried, the mechanical arm is controlled to move to the placing position.
7. The method of claim 6, wherein prior to acquiring the first image of the currently placed region and the second image of the currently grabbed region, the method further comprises: obtaining the placement feature position and the grasping feature position, wherein the step of obtaining the placement feature position and the grasping feature position comprises:
controlling the mechanical arm to move to an initial placement area, and acquiring an initial first image through the image acquisition device;
extracting a first feature position in the initial first image to obtain the placement feature position;
controlling the mechanical arm to move to an initial grabbing area, and acquiring an initial second image through the image acquisition device;
and extracting a second feature position in the initial second image to obtain the grabbing feature position.
8. The method according to claim 6 or 7, wherein the object to be carried is a tile to be paved, the first feature position is a position of a corner of an already-paved tile in the first image, and the second feature position is a position of a corner of the tile to be paved in the second image.
9. The method according to claim 6, wherein the placement feature position is a coordinate parameter expressed in world coordinates of the robot arm, and the obtaining of the first offset amount determined based on the first feature position in the first image and a preset placement feature position includes:
converting the first characteristic position into a coordinate parameter in world coordinates of the mechanical arm;
and comparing the coordinate parameter corresponding to the first characteristic position with the coordinate parameter corresponding to the placement characteristic position to obtain the first offset.
10. The method of claim 6, wherein acquiring a first image of a currently placed region and a second image of a currently grabbed region comprises:
determining that the mechanical arm moves to the current placement area, and sending an image acquisition instruction to control the image acquisition device to acquire the first image;
and determining that the mechanical arm moves to the current grabbing area, and sending an image acquisition instruction to control the image acquisition device to acquire the second image.
11. A tile paving system comprising the handling system of any of claims 1 to 5.
CN201910596240.5A 2019-07-03 2019-07-03 Carrying system, control method thereof and floor tile paving system Active CN110295728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910596240.5A CN110295728B (en) 2019-07-03 2019-07-03 Carrying system, control method thereof and floor tile paving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910596240.5A CN110295728B (en) 2019-07-03 2019-07-03 Carrying system, control method thereof and floor tile paving system

Publications (2)

Publication Number Publication Date
CN110295728A CN110295728A (en) 2019-10-01
CN110295728B true CN110295728B (en) 2021-02-09

Family

ID=68030201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910596240.5A Active CN110295728B (en) 2019-07-03 2019-07-03 Carrying system, control method thereof and floor tile paving system

Country Status (1)

Country Link
CN (1) CN110295728B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111515965B (en) * 2020-04-16 2023-02-17 广东博智林机器人有限公司 Paving method and device for decorative plane materiel, robot and storage medium
CN111447366B (en) * 2020-04-27 2022-04-01 Oppo(重庆)智能科技有限公司 Transportation method, transportation device, electronic device, and computer-readable storage medium
CN113482301B (en) * 2021-07-02 2022-07-01 北京建筑大学 Tile paving method and tile automatic paving control system
CN114351991B (en) * 2022-01-25 2023-09-15 广东博智林机器人有限公司 Paving mechanism, paving robot and paving method compatible with wall bricks of different sizes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108214487B (en) * 2017-12-16 2021-07-20 广西电网有限责任公司电力科学研究院 Robot target positioning and grabbing method based on binocular vision and laser radar

Also Published As

Publication number Publication date
CN110295728A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110295728B (en) Carrying system, control method thereof and floor tile paving system
CN110303498B (en) Carrying system, control method thereof and floor tile paving system
CN110374312B (en) Carrying system, control method thereof and floor tile paving system
DE112019000177T5 (en) A ROBOTIC SYSTEM WITH AN AUTOMATED PACKAGE REGISTRATION MECHANISM AND METHOD TO OPERATE THIS SYSTEM
US10596707B2 (en) Article transfer device
CN104626169B (en) Robot part grabbing method based on vision and mechanical comprehensive positioning
CN108789414A (en) Intelligent machine arm system based on three-dimensional machine vision and its control method
CN110450129B (en) Carrying advancing method applied to carrying robot and carrying robot thereof
US20190375602A1 (en) Robot system and control method for robot system
US20220058826A1 (en) Article position managing apparatus, article position management system, article position managing method, and program
CN106845354B (en) Part view library construction method, part positioning and grabbing method and device
CN106044570A (en) Steel coil lifting device automatic identification device and method adopting machine vision
CN110941462B (en) System and method for automatically learning product manipulation
US10434649B2 (en) Workpiece pick up system
CN109775376A (en) The robot de-stacking method of irregular random material
CN111390910A (en) Manipulator target grabbing and positioning method, computer readable storage medium and manipulator
JP2004338889A (en) Image recognition device
CN110298877A (en) A kind of the determination method, apparatus and electronic equipment of object dimensional pose
CN108470165A (en) A kind of picking robot fruit vision collaboratively searching method
CN106733686A (en) A kind of streamline object positioning method of view-based access control model and code-disc data fusion
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN111397509B (en) Candle wick correction method and system
CN111476840B (en) Target positioning method, device, equipment and computer readable storage medium
CN110397257A (en) Handling system and its control method, floor tile paving system
CN113597362B (en) Method and control device for determining the relationship between a robot coordinate system and a mobile device coordinate system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant