WO2021254376A1 - Transport robot control method and device, transport robot, and storage medium - Google Patents

Transport robot control method and device, transport robot, and storage medium

Info

Publication number
WO2021254376A1
Authority
WO
WIPO (PCT)
Prior art keywords
transport robot
robot
identification code
information
delivery box
Prior art date
Application number
PCT/CN2021/100304
Other languages
French (fr)
Chinese (zh)
Inventor
许哲涛
Original Assignee
京东科技信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东科技信息技术有限公司 filed Critical 京东科技信息技术有限公司
Publication of WO2021254376A1 publication Critical patent/WO2021254376A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means

Definitions

  • The present disclosure generally relates to the field of transport robots, and more specifically to a control method and device for a transport robot, a transport robot, and a storage medium.
  • At present, delivery robots usually adopt a split design in which the delivery box and the transport robot are separate, to improve delivery efficiency and flexibility. With this design, the transport robot must accurately locate the delivery box in order to combine with it.
  • In related technologies, the transport robot of the delivery robot is equipped with a lidar.
  • The transport robot detects the surrounding environment through the lidar to obtain the positions and plane contour shapes of surrounding objects, determines the delivery box based on the plane contour shape, and then moves to the location of the delivery box to complete the combination with the delivery box.
  • The present disclosure provides a control method of a transport robot, which includes: performing image acquisition through a camera component to obtain a target image; identifying a graphic identification code in the target image and determining the position information of the graphic identification code in the target image, the graphic identification code being set on the outside of a delivery box; determining the relative position of the transport robot and the delivery box according to the position information; and
  • performing a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
  • the identifying the graphic identification code in the target image and determining the location information of the graphic identification code in the target image includes:
  • in the extracted contour information, determining target contour information that satisfies a preset contour feature, and using the image corresponding to the target contour information as the corner image of the graphic identification code;
  • the position information of the graphic identification code in the target image is calculated.
  • the determining the relative position of the transportation robot and the distribution box according to the position information includes:
  • the offset amount of the position information relative to the reference position information is calculated, and the offset amount is used as the relative position of the transport robot and the delivery box.
  • the moving operation based on the relative position to combine the transport robot with the delivery box includes:
  • When it is detected that the transport robot is facing the delivery box, the transport robot is controlled to move toward the delivery box, so that the transport robot is combined with the delivery box.
  • the method further includes:
  • the transport robot is controlled to stop moving, so as to complete the combination of the transport robot and the delivery box.
  • In some embodiments, before the image acquisition is performed by the camera component to obtain the target image, the method further includes:
  • the determining the current position of the transport robot based on the matching result includes:
  • the current position of the transport robot is determined.
  • The present disclosure provides a control device for a transport robot, which includes:
  • the acquisition module is configured to perform image acquisition through the camera component to obtain the target image
  • An identification module configured to identify a graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of the delivery box;
  • a determining module configured to determine the relative position of the transport robot and the delivery box according to the position information
  • the moving module is configured to perform a moving operation based on the relative position, so that the transport robot is combined with the delivery box.
  • the present disclosure provides a transport robot, the transport robot includes a camera component, a control device, and a chassis drive device, the control device is respectively connected to the camera component and the chassis drive device, wherein:
  • the camera component is configured to perform image collection to obtain a target image
  • the control device is configured to identify the graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of the delivery box; and
  • to determine, according to the position information, the relative position of the transport robot and the delivery box;
  • the control device is further configured to control the chassis driving device to perform a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
  • the chassis driving device includes a connecting portion and a carrying part, the bottom of the control device is fixedly connected to the connecting portion, and the carrying part is used to carry the delivery box after the transport robot is combined with the delivery box; and
  • the camera component is arranged at the rear end of the carrying part.
  • the transport robot further includes a distance detection component, and the distance detection component is connected to the controller;
  • the distance detecting component is configured to detect the distance between the transport robot and the delivery box
  • the control device is further configured to control the chassis driving device to stop moving when the distance detected by the distance detection component is within a preset distance range, so as to complete the combination of the transport robot and the delivery box.
  • the number of the distance detection components is multiple, and the distance detection components are symmetrically arranged on the side of the control device facing the distribution box.
  • the transportation robot further includes a lidar, and the lidar is connected to the control device;
  • the lidar is configured to scan to obtain point cloud data of surrounding objects.
  • the control device is further configured to obtain the target position where the delivery box is located, match the point cloud data with pre-stored map information, and determine the current position of the transport robot based on the matching result; and to determine the movement path between the current position and the target position, and control the chassis driving device to move based on the movement path to reach the target position.
  • the transport robot further includes an inertial measurement unit IMU, and the inertial measurement unit IMU is connected to the control device;
  • the IMU is configured to detect posture information and mileage information of the transport robot.
  • the control device is further configured to take the position in the matching result as the first candidate position; determine the second candidate position of the transport robot according to the posture information and traveled mileage information fed back by the IMU, and the starting position of the transport robot; and determine the current position of the transport robot according to the first candidate position and the second candidate position.
  • In some embodiments, the control device further includes a human-computer interaction component.
  • The present disclosure provides a split delivery robot, which includes the transport robot of the present disclosure and at least one delivery box, and an image identification code is provided on the outside of the delivery box.
  • the present disclosure provides a computer-readable storage medium with a computer program stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method of the present disclosure is implemented.
  • the present disclosure provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method described in the present disclosure.
  • the present disclosure provides a control method, device, transportation robot, and storage medium for a transportation robot to solve the technical problem that the distribution box cannot be combined with the transportation robot because the lidar cannot recognize the distribution box.
  • the accuracy of the combination of the distribution box and the transport robot is improved.
  • In some embodiments, the present disclosure provides a method for controlling a transport robot, which can collect an image through a camera component to obtain a target image, then identify a graphic identification code in the target image and determine the position information of the graphic identification code in the target image, where the image identification code is set on the outside of the delivery box. Then, the relative position of the transport robot and the delivery box is determined according to the position information, and a movement operation is performed based on the relative position, so that the transport robot and the delivery box are combined.
  • the distribution box is accurately located through the graphic identification code, and the relative position of the transportation robot and the distribution box is determined based on the graphic identification code, thereby realizing the combination of the transportation robot and the distribution box. In this way, there is no need to detect the distribution box by lidar, which avoids the technical problem that the distribution box and the transportation robot cannot be combined because the lidar cannot identify the distribution box, thereby improving the accuracy of the combination of the distribution box and the transportation robot.
  • FIG. 1 is a schematic diagram of a split delivery robot provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a transport robot provided by an embodiment of the disclosure
  • FIG. 3 is a flowchart of a method for controlling a transport robot according to an embodiment of the present disclosure
  • Fig. 4a is a schematic diagram of a two-dimensional code provided by an embodiment of the present disclosure.
  • Fig. 4b is a schematic diagram of contour information provided by an embodiment of the present disclosure.
  • Fig. 4c is a schematic diagram of a corner point image provided by an embodiment of the present disclosure.
  • FIG. 4d is a schematic diagram of a target image provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of another split-type delivery robot provided by an embodiment of the present disclosure.
  • FIG. 6 is a flowchart of an example of a method for controlling a transport robot provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of a control device for a transport robot provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • The embodiments of the present disclosure provide a method for controlling a transport robot, which can be applied to a split delivery robot, and in some embodiments, it can be executed by the transport robot in the split delivery robot.
  • the transport robot may at least include a camera component, a control device, and a chassis driving device.
  • the chassis driving device includes a connecting part and a carrying part, the bottom of the control device is fixedly connected with the connecting part, and the carrying part is used to carry the delivery box after the transport robot is combined with the delivery box.
  • The camera component can be a camera, a video camera, etc. In some embodiments, the camera component can be arranged at the rear end of the carrying part, as shown in Fig. 1, which is a schematic diagram of the split delivery robot provided by the present disclosure.
  • The camera component can also be arranged in other positions, such as on the side of the control device directly facing the delivery box.
  • The transport robot can also include components such as distance sensors, a single-chip microcomputer, a CAN (Controller Area Network) transceiver, a CAN bus, a lidar, an IMU (Inertial Measurement Unit), motors, encoders, etc.
  • FIG. 2 is a schematic structural diagram of a transport robot provided by an embodiment of the present disclosure.
  • the transport robot includes a camera, a distance sensor A, a distance sensor B, a lidar, an IMU, a control device, and a chassis driving device.
  • The control device includes a single-chip microcomputer, a main controller, a CAN transceiver, and a CAN bus.
  • the chassis driving device may include a motor driver, a motor, and an encoder.
  • the main controller is respectively connected with the camera, lidar, IMU and motor driver.
  • the motor driver is connected with the motor, and an encoder is connected between the motor driver and the motor.
  • the camera can be used to capture images and transmit the captured images to the main controller, so that the main controller can recognize the image identification code to locate the delivery box.
  • The IMU can be used to detect the posture information of the transport robot, such as acceleration, attitude, and angular velocity.
  • The lidar is used to scan the point cloud data of the surrounding environment; the encoder is used to record the mileage information that has been traveled; and the main controller can navigate and locate based on the information returned by the IMU and the encoder.
  • the main controller is connected to the CAN transceiver through the CAN bus, the CAN transceiver is connected to the single-chip microcomputer, and the single-chip microcomputer is connected to the distance sensor A and the distance sensor B respectively.
  • the distance sensor can be a narrow-beam ultrasonic distance measuring sensor, an optical TOF distance measuring sensor, etc., which are used to measure the relative distance between the distribution box and the transport robot, and report it to the single-chip microcomputer.
  • the single-chip microcomputer reports the distance result to the CAN bus through the CAN transceiver, and the main controller obtains the data reported by the single-chip microcomputer through the CAN bus.
  • the main controller can send motion instructions to the motor driver, and drive the motor to rotate to realize the robot's forward, backward, and turn operations, so as to realize the combination of the transport robot and the delivery box.
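  • For illustration only, the following Python sketch shows how the single-chip microcomputer might publish the two distance readings on the CAN bus and how the main controller might read them back, using the python-can library; the CAN IDs, payload layout, and helper names are assumptions, since the disclosure does not specify the message format.

```python
import struct
import can

# Hypothetical CAN IDs for distance sensor A and distance sensor B.
DIST_A_ID, DIST_B_ID = 0x181, 0x182

def report_distance(bus, can_id, distance_mm):
    """MCU side: publish one distance reading (in millimetres) on the CAN bus."""
    msg = can.Message(arbitration_id=can_id,
                      data=struct.pack("<H", distance_mm),
                      is_extended_id=False)
    bus.send(msg)

def read_distances(bus, timeout=0.05):
    """Main-controller side: drain pending frames and return the latest (d1, d2)."""
    readings = {}
    msg = bus.recv(timeout=timeout)
    while msg is not None:
        if msg.arbitration_id in (DIST_A_ID, DIST_B_ID):
            readings[msg.arbitration_id] = struct.unpack("<H", bytes(msg.data[:2]))[0]
        msg = bus.recv(timeout=timeout)
    return readings.get(DIST_A_ID), readings.get(DIST_B_ID)

# Usage (assuming a SocketCAN interface named can0):
# bus = can.Bus(interface="socketcan", channel="can0")
# report_distance(bus, DIST_A_ID, 123)
# print(read_distances(bus))
```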
  • The transport robot may also include other components not shown in FIG. 2, such as human-computer interaction components (e.g., display screens, voice interaction components, etc.), which are not limited in the embodiments of the present disclosure.
  • Step 301 image acquisition is performed by the camera component to obtain a target image
  • Step 302 Identify the graphic identification code in the target image, and determine the location information of the graphic identification code in the target image;
  • Step 303 Determine the relative position of the transport robot and the delivery box according to the position information
  • step 304 a moving operation is performed based on the relative position to combine the transport robot with the delivery box.
  • the delivery robot is usually placed in the same area as the delivery box; or, after delivery, the delivery robot can also move to the area where the delivery box is located through the navigation system. At this time, the delivery box will enter the range of the camera component of the transport robot, and the transport robot can collect images through the camera component to obtain the target image. In this way, the target image captured by the imaging component usually contains an image of the delivery box.
  • an image identification code is provided on the outer side of the delivery box. In some embodiments, it can be provided on the side that should face the transport robot.
  • the image identification code may be a graphic code such as a two-dimensional code or a barcode, which is not limited in the embodiment of the present disclosure.
  • In some embodiments, the process of recognizing the graphic identification code in the target image and determining the position information of the graphic identification code in the target image includes: extracting contour information from the target image; in the extracted contour information, determining target contour information that satisfies the preset contour feature, and using the image corresponding to the target contour information as the corner image of the graphic identification code; and calculating the position information of the graphic identification code in the target image based on the position coordinates of the corner image in the target image.
  • the control device can extract the contour information contained in the target image through a preset image processing algorithm.
  • smooth filtering and binarization of the target image can be performed to obtain the contour information contained in the target image.
  • In the extracted contour information, the target contour information that satisfies the preset contour characteristics can be searched for, and the image corresponding to the target contour information can be used as the corner image of the graphic identification code.
  • the position coordinates of the corner image in the target image can be determined, and the position information of the graphic identification code in the target image can be calculated according to the coordinates. There are many ways to calculate position information.
  • For example, the position coordinates of the center point can be calculated from the position coordinates of two diagonal corner images, and the position coordinates of the center point can be used as the position information of the graphic identification code in the target image; or, the position coordinates of a certain corner image can be used directly as the position information of the graphic identification code in the target image.
  • the calculation method of the position information of the graphic identification code needs to be consistent with the preset reference position information calibration method.
  • the position coordinates of the upper left corner of the two-dimensional code are calibrated as the reference position information, and correspondingly, the position coordinates of the upper left corner image in the target image are used as the position information of the graphic identification code during calculation.
  • the image of the two-dimensional code may be as shown in Figure 4a, including 3 corner points (ie, two top corners and the lower left corner), and the image after extracting the contour information is shown in Figure 4b.
  • the corner image is shown in Figure 4c.
  • the center point of the corner image in Figure 4c can form a right-angled triangle, and then calculate the position information of the graphic identification code in the target image based on this right-angled triangle, which can be recorded as (xa, ya).
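  • As a rough illustration of the contour-based localization described above, the following Python sketch uses OpenCV to smooth and binarize a frame, find square finder-pattern contours, and return a code position (xa, ya); the thresholds, the squareness test, and the use of the corner centroid are illustrative assumptions rather than the exact disclosed algorithm.

```python
import cv2

def locate_code_center(target_image):
    """Estimate the pixel position (xa, ya) of a graphic identification code
    from the corner (finder-pattern) contours found in the target image."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                     # smooth filtering
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    corner_centers = []
    for c in contours:
        # Simplified "preset contour feature": a reasonably large quadrilateral.
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.04 * peri, True)
        if len(approx) == 4 and cv2.contourArea(c) > 100:
            m = cv2.moments(c)
            if m["m00"] > 0:
                corner_centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    if len(corner_centers) < 3:
        return None                                                 # code not found
    # Use the centroid of the detected corner images as the code position.
    xa = sum(p[0] for p in corner_centers) / len(corner_centers)
    ya = sum(p[1] for p in corner_centers) / len(corner_centers)
    return xa, ya
```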
  • the control device can determine the relative position of the transport robot and the delivery box based on the position information.
  • In some embodiments, the preset reference position information of the graphic identification code in the target image can be obtained, then the offset of the position information relative to the reference position information can be calculated, and the offset can be used as the relative position of the transport robot and the delivery box.
  • the reference position information is the position information of the graphic identification code in the image taken by the transport robot when the transport robot is facing the delivery box. As shown in Figure 4d, the cross in the picture represents the reference position information.
  • the reference position information can be preset by a technician.
  • For example, the reference position can be the center point of the image, which can be recorded as (xb, yb); the offset between the position (xa, ya) of the QR code and the center point (xb, yb) can be calculated, and the offset can be used as the relative position of the transport robot and the delivery box.
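  • A minimal sketch of the offset computation, assuming the reference position (xb, yb) has been calibrated as described above:

```python
def relative_offset(code_pos, reference_pos):
    """Offset of the detected code position (xa, ya) from the calibrated
    reference position (xb, yb), used as the robot-to-box relative position."""
    xa, ya = code_pos
    xb, yb = reference_pos
    return xa - xb, ya - yb   # (dx, dy) in pixels
```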
  • The control device may send a movement instruction to the motor driver based on the relative position, so that the position information of the graphic identification code in the target image becomes the same as the reference position information.
  • At this time, the transport robot is facing the delivery box, and the transport robot can then move toward the delivery box, thereby completing the combination of the transport robot and the delivery box.
  • the moving process may be: determining the adjustment angle of the transport robot based on the relative position, and moving according to the adjustment angle; when it is detected that the transport robot is facing the distribution box, control the transport robot to move to the distribution box , In order to combine the transport robot with the distribution box.
  • The control device can determine the adjustment angle of the transport robot based on the calculated relative position (i.e., the offset), such as moving toward the angle opposite to the offset, and then the control device can send movement instructions to the motor driver so that the motor driver drives the motor to rotate to achieve the angle adjustment.
  • the transport robot can continuously perform image collection (for example, it can periodically perform image collection through the camera component), so as to continuously perform angle adjustment to improve the accuracy of the combination.
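  • The periodic capture-and-adjust cycle might look like the following sketch, which reuses the hypothetical helpers from the earlier sketches; `camera.capture()`, `controller.rotate()` and `controller.forward()` stand in for the camera component and motor driver interfaces, and the gains and tolerances are assumptions.

```python
import time

def align_with_box(camera, controller, reference_pos, tol=5.0, period=0.2):
    """Closed-loop alignment: periodically grab a frame, locate the code,
    and steer until the code sits at the reference position."""
    while True:
        frame = camera.capture()
        pos = locate_code_center(frame)
        if pos is None:
            controller.rotate(5.0)        # code not visible: turn in place and retry
            time.sleep(period)
            continue
        dx, _ = relative_offset(pos, reference_pos)
        if abs(dx) <= tol:                # code at the reference column:
            break                         # the robot is facing the delivery box
        controller.rotate(-0.1 * dx)      # steer to reduce the horizontal offset
        time.sleep(period)
    controller.forward()                  # facing the box; start moving toward it
```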
  • accurate loading guidance can also be achieved by the distance detection component.
  • In some embodiments, the processing process includes: detecting the distance between the transport robot and the delivery box through the distance detection component; and when the detected distance is within the preset distance range, controlling the transport robot to stop moving to complete the combination of the transport robot and the delivery box.
  • In some embodiments, when it is detected that the transport robot is facing the delivery box, the transport robot can detect the distance between itself and the delivery box through the distance detection component while moving toward the delivery box. When the detected distance is within the preset distance range, it indicates that the transport robot has entered the designated position of the delivery box, and the transport robot can be controlled to stop moving to complete the combination of the transport robot and the delivery box.
  • There can be multiple distance sensors, and they can be symmetrically arranged on the side of the control device facing the delivery box.
  • For example, two distance sensors, namely distance sensor A and distance sensor B, are configured on the body of the transport robot, as shown in FIG. 5.
  • The distance sensors can measure the distances d1 and d2 between the left and right sides of the delivery box and the transport robot, and report them to the single-chip microcomputer.
  • The single-chip microcomputer reports the received results to the main controller through the CAN bus, and the main controller sends motion instructions to the motor driver according to the values of d1 and d2, until d1 and d2 reach the preset distance range and the transport robot completes loading of the delivery box.
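  • A hedged sketch of the final docking stage guided by the two symmetric distance sensors; `read_d1`/`read_d2` stand in for the distance values reported over the CAN bus, and the target distance, tolerance band, and controller interface are assumptions.

```python
def dock_with_box(controller, read_d1, read_d2, target=0.05, band=0.005):
    """Creep toward the delivery box until both measured distances fall
    within the preset range, keeping the robot square to the box."""
    while True:
        d1, d2 = read_d1(), read_d2()
        if abs(d1 - target) <= band and abs(d2 - target) <= band:
            controller.stop()                # within the preset range: combination done
            return
        controller.rotate(0.5 * (d1 - d2))   # correct left/right skew
        controller.forward_step(0.01)        # advance a small step
```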
  • lidar navigation can be used to make the transportation robot reach the loading area of the distribution box.
  • In some embodiments, the processing process includes: obtaining the target position of the delivery box; scanning through the lidar to obtain point cloud data of surrounding objects, matching the point cloud data with the pre-stored map information, and determining the current position of the transport robot based on the matching result; and determining the motion path between the current position and the target position, and moving based on the motion path to reach the target position.
  • the delivery robot needs to move to the area where the delivery box is located through the navigation system.
  • the transport robot can obtain the target position where the delivery box is located, and the target position can be preset by the technician.
  • the transport robot can scan through lidar to obtain point cloud data of surrounding objects, and then match the point cloud data with pre-stored map information.
  • The map information is constructed in advance through lidar-based SLAM (simultaneous localization and mapping).
  • In some embodiments, the control device can match the detected point cloud data with the map information, so as to use the matched position as the current position. Then, the motion path between the current position and the target position can be determined, and the robot can move based on the motion path to reach the target position.
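  • Real systems typically use a scan-matching algorithm such as ICP for this step; the following coarse Python sketch only conveys the idea of picking, from a set of candidate poses, the one whose transformed lidar scan best overlaps the stored map. The function names and the brute-force scoring are assumptions.

```python
import numpy as np

def match_scan_to_map(scan_xy, map_points, candidate_poses):
    """Return the candidate pose (x, y, theta) whose transformed scan points
    lie closest, on average, to the pre-stored map points."""
    best_pose, best_score = None, float("inf")
    for x, y, theta in candidate_poses:
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        world = scan_xy @ rot.T + np.array([x, y])       # scan points in map frame
        # mean distance from each scan point to its nearest map point
        d = np.linalg.norm(world[:, None, :] - map_points[None, :, :], axis=2)
        score = d.min(axis=1).mean()
        if score < best_score:
            best_pose, best_score = (x, y, theta), score
    return best_pose
```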
  • the current position can be determined by combining the feedback of the IMU and the mileage information.
  • In some embodiments, the processing process includes: taking the position in the matching result as the first candidate position; determining the second candidate position of the transport robot according to the posture information of the transport robot, the traveled mileage information, and the starting position of the transport robot; and determining the current position of the transport robot according to the first candidate position and the second candidate position.
  • the position in the matching result may be used as the first candidate position.
  • In some embodiments, the position of the transport robot (which can be called the second candidate position) can be calculated based on the posture information fed back by the IMU, the mileage information recorded by the encoder, and the starting position of the transport robot, and then the middle position between the first candidate position and the second candidate position is used as the current position of the transport robot. In this way, the current position can be determined by combining the feedback of the IMU and the mileage information, thereby improving the accuracy of the position determination.
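  • A simplified sketch of this fusion, assuming 2D positions and taking the midpoint exactly as described above:

```python
import math

def dead_reckon(start_pose, heading, traveled):
    """Second candidate: advance the starting position along the IMU heading
    (radians) by the encoder mileage (metres)."""
    x, y = start_pose
    return x + traveled * math.cos(heading), y + traveled * math.sin(heading)

def fuse_position(lidar_pose, dead_reckoning_pose):
    """Current position: midpoint of the scan-matching result (first candidate)
    and the dead-reckoning estimate (second candidate)."""
    (x1, y1), (x2, y2) = lidar_pose, dead_reckoning_pose
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0
```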
  • In some embodiments, a process of constructing the map through lidar-based SLAM is provided, including: the transport robot starts from a preset map origin, and records posture information and mileage information through the IMU and the encoder. According to the posture information fed back by the IMU, the orientation of the transport robot can be known, and the distance traveled in each orientation can be known according to the mileage information. The lidar continuously scans to obtain point cloud data of surrounding objects, which can be used to analyze the contours and distances of the surrounding objects. In some embodiments, when the lidar performs detection at the map origin, the obstacles around the map origin and the distances between the obstacles and the transport robot can be known. Next, as the transport robot moves forward, the lidar continuously scans and returns the point cloud data of the surrounding obstacles based on the mileage information recorded during the progress, thereby establishing the local map around each point at a known bearing and distance from the coordinate origin. Finally, the transport robot keeps moving until it traverses the entire environment, and the entire space map can be obtained by superimposing all the local maps.
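  • The map superposition can be pictured as repeatedly marking lidar hits in a 2D occupancy grid while the robot traverses the environment; the sketch below is only illustrative, with grid size, resolution, and origin as assumptions.

```python
import numpy as np

def update_grid(grid, hit_points, resolution=0.05, origin=(0.0, 0.0)):
    """Mark lidar obstacle hits (world coordinates) as occupied cells."""
    ox, oy = origin
    for hx, hy in hit_points:
        i = int((hx - ox) / resolution)
        j = int((hy - oy) / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1                 # occupied cell
    return grid

# Usage: grid = np.zeros((400, 400), dtype=np.uint8), updated after every scan.
```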
  • In some embodiments, an example of a control method of a transport robot is also provided, as shown in FIG. 6, which includes the following steps (an illustrative end-to-end sketch follows this list of steps):
  • Step 601 Obtain the target location where the distribution box is located
  • Step 602 Determine the current position of the transport robot through the lidar positioning, the attitude information fed back by the IMU, and the mileage information that has been traveled;
  • Step 603 Determine the motion path between the current position and the target position, and move based on the motion path to reach the target position where the delivery box is located;
  • Step 604 image acquisition is performed by the camera component to obtain a target image
  • Step 605 Identify the graphic identification code in the target image, and the image identification code is set on the outside of the delivery box;
  • Step 606 extract the contour information contained in the graphic identification code
  • Step 607 Among the extracted contour information, determine target contour information that meets preset contour characteristics, and use an image corresponding to the target contour information as a corner point image;
  • Step 608 Calculate the position information of the graphic identification code in the target image based on the position coordinates of the corner point image in the target image;
  • Step 609 Calculate the offset of the position information relative to the preset reference position information, and use the offset as the relative position of the transport robot and the delivery box;
  • the reference position information is the position information of the graphic identification code in the image taken by the transportation robot when the transportation robot is facing the distribution box;
  • Step 610 Determine an adjustment angle of the transport robot based on the relative position, and perform a movement operation according to the adjustment angle;
  • Step 611 When it is detected that the transportation robot is facing the distribution box, control the transportation robot to move to the distribution box;
  • Step 612 Detect the distance between the transport robot and the distribution box through the distance detection component
  • Step 613 When the detected distance is within the preset distance range, control the transport robot to stop moving, so as to complete the combination of the transport robot and the delivery box.
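  • Putting the steps above together, an illustrative end-to-end sketch (reusing the hypothetical helpers from the earlier sketches, with `robot` bundling the camera, lidar, controller, and sensor interfaces) might be:

```python
def deliver_and_load(robot):
    """Rough flow of steps 601-613; every method on `robot` is an assumed interface."""
    target = robot.get_box_position()                       # step 601
    current = robot.localize()                              # step 602: lidar + IMU + odometry
    robot.follow_path(robot.plan_path(current, target))     # step 603
    align_with_box(robot.camera, robot.controller,          # steps 604-611
                   robot.reference_pos)
    dock_with_box(robot.controller,                         # steps 612-613
                  robot.read_distance_a, robot.read_distance_b)
```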
  • the present disclosure also provides a transport robot, which includes a camera component, a control device, and a chassis drive device, the control device is respectively connected to the camera component and the chassis drive device, wherein:
  • the camera component is configured to perform image collection to obtain a target image
  • the control device is configured to identify the graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of the delivery box; and
  • to determine, according to the position information, the relative position of the transport robot and the delivery box;
  • the control device is further configured to control the chassis driving device to perform a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
  • the chassis driving device includes a connecting portion and a carrying part, the bottom of the control device is fixedly connected to the connecting portion, and the carrying part is used to carry the delivery box after the transport robot is combined with the delivery box; and
  • the camera component is arranged at the rear end of the carrying part.
  • the transport robot further includes a distance detection component, and the distance detection component is connected to the controller;
  • the distance detecting component is configured to detect the distance between the transport robot and the delivery box
  • the control device is further configured to control the chassis driving device to stop moving when the distance detected by the distance detection component is within a preset distance range, so as to complete the combination of the transport robot and the delivery box.
  • the number of the distance detection components is multiple, and the distance detection components are symmetrically arranged on the side of the control device facing the distribution box.
  • the transportation robot further includes a lidar, and the lidar is connected to the control device;
  • the lidar is configured to scan to obtain point cloud data of surrounding objects.
  • the control device is further configured to obtain the target position where the delivery box is located, match the point cloud data with pre-stored map information, and determine the current position of the transport robot based on the matching result; and to determine the movement path between the current position and the target position, and control the chassis driving device to move based on the movement path to reach the target position.
  • the transport robot further includes an inertial measurement unit IMU, and the inertial measurement unit IMU is connected to the control device;
  • the IMU is configured to detect posture information and mileage information of the transport robot.
  • the control device is further configured to take the position in the matching result as the first candidate position; determine the second candidate position of the transport robot according to the posture information and traveled mileage information fed back by the IMU, and the starting position of the transport robot; and determine the current position of the transport robot according to the first candidate position and the second candidate position.
  • In some embodiments, the control device further includes a human-computer interaction component.
  • the present disclosure also provides a control device for a transport robot. As shown in FIG. 7, the device includes:
  • the acquisition module 710 is configured to perform image acquisition through a camera component to obtain a target image
  • the recognition module 720 is configured to recognize the graphic identification code in the target image, and determine the position information of the graphic identification code in the target image, and the image identification code is set on the outside of the delivery box;
  • the determining module 730 is configured to determine the relative position of the transport robot and the delivery box according to the position information
  • the moving module 740 is configured to perform a moving operation based on the relative position, so as to combine the transport robot with the delivery box.
  • the present disclosure also provides an electronic device, as shown in FIG. 8, which includes a processor 801, a communication interface 802, a memory 803, and a communication bus 804.
  • The processor 801, the communication interface 802, and the memory 803 communicate with each other through the communication bus 804;
  • the memory 803 is configured to store computer programs
  • the processor 801 is configured to implement the method described in the present disclosure when executing the program stored in the memory 803.
  • the communication bus mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the communication bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is used to indicate in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above-mentioned electronic device and other devices.
  • the memory may include random access memory (Random Access Memory, RAM), and may also include non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk storage.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • The above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the present disclosure also provides a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the method described in the present disclosure is implemented.
  • the present disclosure also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method described in the present disclosure.
  • the present disclosure may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • software it can be implemented in the form of a computer program product in whole or in part.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A transport robot control method and device, a transport robot, and a storage medium. The control method comprises: performing image acquisition by means of a camera component to obtain a target image (S301); identifying a graphic identification code in the target image, and determining position information of the graphic identification code in the target image (S302), the image identification code being provided on the outside of a delivery box; determining the relative position of the transport robot and the delivery box according to the position information (S303); and performing a movement operation on the basis of the relative position to enable the combination between the transport robot and the delivery box (S304).

Description

Transport robot control method and device, transport robot, and storage medium
References to related applications
This disclosure claims all rights and interests of the invention patent application filed with the State Intellectual Property Office of the People's Republic of China on June 19, 2020, with application number 202010565219.1 and the invention title "A control method and device for a transport robot, a transport robot, and a storage medium", which is incorporated into this disclosure in its entirety by reference.
Field
The present disclosure generally relates to the field of transport robots, and more specifically to a control method and device for a transport robot, a transport robot, and a storage medium.
Background
At present, delivery robots usually adopt a split design in which the delivery box and the transport robot are separate, to improve delivery efficiency and flexibility. With this design, the transport robot must accurately locate the delivery box in order to combine with it.
In related technologies, the transport robot of the delivery robot is equipped with a lidar. The transport robot detects the surrounding environment through the lidar to obtain the positions and plane contour shapes of surrounding objects, determines the delivery box based on the plane contour shape, and then moves to the location of the delivery box to complete the combination with the delivery box.
Based on the above technical solution, when the delivery box is not placed squarely facing the transport robot, the plane contour shape of the delivery box detected by the lidar will change, and the delivery box may not be recognized, so that the delivery box and the transport robot cannot be combined.
Overview
In a first aspect, the present disclosure provides a control method of a transport robot, which includes:
performing image acquisition through a camera component to obtain a target image;
identifying a graphic identification code in the target image, and determining the position information of the graphic identification code in the target image, the image identification code being set on the outside of a delivery box;
determining the relative position of the transport robot and the delivery box according to the position information; and
performing a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
In some embodiments, the identifying the graphic identification code in the target image and determining the position information of the graphic identification code in the target image includes:
extracting contour information from the target image;
in the extracted contour information, determining target contour information that satisfies a preset contour feature, and using the image corresponding to the target contour information as the corner image of the graphic identification code; and
calculating the position information of the graphic identification code in the target image based on the position coordinates of the corner image in the target image.
In some embodiments, the determining the relative position of the transport robot and the delivery box according to the position information includes:
obtaining preset reference position information of the graphic identification code in the target image, where the reference position information is the position information of the graphic identification code in the image captured by the transport robot when the transport robot is directly facing the delivery box; and
calculating the offset of the position information relative to the reference position information, and using the offset as the relative position of the transport robot and the delivery box.
In some embodiments, performing a movement operation based on the relative position so that the transport robot is combined with the delivery box includes:
determining an adjustment angle of the transport robot based on the relative position, and performing a movement operation according to the adjustment angle; and
when it is detected that the transport robot is directly facing the delivery box, controlling the transport robot to move toward the delivery box, so that the transport robot is combined with the delivery box.
In some embodiments, after controlling the transport robot to move toward the delivery box, the method further includes:
detecting the distance between the transport robot and the delivery box through a distance detection component; and
when the detected distance is within a preset distance range, controlling the transport robot to stop moving, so as to complete the combination of the transport robot and the delivery box.
In some embodiments, before the image acquisition is performed through the camera component to obtain the target image, the method further includes:
obtaining the target position where the delivery box is located;
scanning with a lidar to obtain point cloud data of surrounding objects, matching the point cloud data with pre-stored map information, and determining the current position of the transport robot based on the matching result; and
determining a motion path between the current position and the target position, and moving based on the motion path to reach the target position.
In some embodiments, the determining the current position of the transport robot based on the matching result includes:
taking the position in the matching result as a first candidate position;
determining a second candidate position of the transport robot according to the posture information and traveled mileage information of the transport robot and the starting position of the transport robot; and
determining the current position of the transport robot according to the first candidate position and the second candidate position.
In a second aspect, the present disclosure provides a control device for a transport robot, which includes:
an acquisition module, configured to perform image acquisition through a camera component to obtain a target image;
an identification module, configured to identify a graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of a delivery box;
a determining module, configured to determine the relative position of the transport robot and the delivery box according to the position information; and
a moving module, configured to perform a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
In a third aspect, the present disclosure provides a transport robot, the transport robot including a camera component, a control device, and a chassis driving device, the control device being respectively connected to the camera component and the chassis driving device, wherein:
the camera component is configured to perform image collection to obtain a target image;
the control device is configured to identify a graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of a delivery box, and to determine the relative position of the transport robot and the delivery box according to the position information; and
the control device is further configured to control the chassis driving device to perform a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
In some embodiments, the chassis driving device includes a connecting portion and a carrying part, the bottom of the control device is fixedly connected to the connecting portion, and the carrying part is used to carry the delivery box after the transport robot is combined with the delivery box; and
the camera component is arranged at the rear end of the carrying part.
In some embodiments, the transport robot further includes a distance detection component, and the distance detection component is connected to the controller;
the distance detection component is configured to detect the distance between the transport robot and the delivery box; and
the control device is further configured to control the chassis driving device to stop moving when the distance detected by the distance detection component is within a preset distance range, so as to complete the combination of the transport robot and the delivery box.
In some embodiments, the number of the distance detection components is multiple, and the distance detection components are symmetrically arranged on the side of the control device facing the delivery box.
In some embodiments, the transport robot further includes a lidar, and the lidar is connected to the control device;
the lidar is configured to scan to obtain point cloud data of surrounding objects; and
the control device is further configured to obtain the target position where the delivery box is located, match the point cloud data with pre-stored map information, and determine the current position of the transport robot based on the matching result; and to determine a motion path between the current position and the target position, and control the chassis driving device to move based on the motion path to reach the target position.
In some embodiments, the transport robot further includes an inertial measurement unit (IMU), and the IMU is connected to the control device;
the IMU is configured to detect posture information and traveled mileage information of the transport robot; and
the control device is further configured to take the position in the matching result as a first candidate position, determine a second candidate position of the transport robot according to the posture information and traveled mileage information fed back by the IMU and the starting position of the transport robot, and determine the current position of the transport robot according to the first candidate position and the second candidate position.
In some embodiments, the control device further includes a human-computer interaction component.
第四方面,本公开提供了分体式配送机器人,其包括本公开的运送机器人、以及至少一个配送箱体,所述配送箱体的外侧设置有图像标识码。In a fourth aspect, the present disclosure provides a split delivery robot, which includes the delivery robot of the present disclosure and at least one delivery box, and an image identification code is provided on the outside of the delivery box.
第五方面,本公开提供了计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现本公开的方法。In a fifth aspect, the present disclosure provides a computer-readable storage medium with a computer program stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method of the present disclosure is implemented.
第六方面,本公开提供了包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行本公开所述的方法。In the sixth aspect, the present disclosure provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method described in the present disclosure.
在某些实施方案中,本公开提供了运送机器人的控制方法、装置、运送机器人和存储介质,以解决因激光雷达无法识别出配送箱体而导致配送箱体与运送机器人无法结合的技术问题,提高了配送箱体与运送机器人结合的精确度。In some embodiments, the present disclosure provides a control method, device, transportation robot, and storage medium for a transportation robot to solve the technical problem that the distribution box cannot be combined with the transportation robot because the lidar cannot recognize the distribution box. The accuracy of the combination of the distribution box and the transport robot is improved.
在某些实施方案中,本公开提供运送机器人的控制方法,可以通过摄像部件进行图像采集,得到目标图像,然后在目标图像中识别图形标识码,并确定图形标识码在目标图像中的位置信息,其中,图像标识码设置于配送箱体的外侧。然后,根据位置信息确定运送机器人与配送箱体的相对位置,基于相对位置进行移动操作,以使运送机器人与配送箱体结合。上述方案,通过图形标识码准确定位出配送箱体,并基于图形标识码确定运送机器人与配送箱体的相对位置,进而实现运送机器人与配送箱体结合。这样,无需通过激光雷达检测配送箱体,避免了因激光雷达无法识别出配送箱体而导致配送箱体与运送机器人无法结合的技术问题,从而提高了配送箱体与运送机器人结合的精确度。In some embodiments, the present disclosure provides a method for controlling a transport robot, which can collect images through a camera component to obtain a target image, then identify the graphic identification code in the target image, and determine the position information of the graphic identification code in the target image , Wherein the image identification code is set on the outside of the delivery box. Then, the relative position of the transport robot and the delivery box is determined according to the position information, and the moving operation is performed based on the relative position, so that the transport robot and the delivery box are combined. In the above solution, the distribution box is accurately located through the graphic identification code, and the relative position of the transportation robot and the distribution box is determined based on the graphic identification code, thereby realizing the combination of the transportation robot and the distribution box. In this way, there is no need to detect the distribution box by lidar, which avoids the technical problem that the distribution box and the transportation robot cannot be combined because the lidar cannot identify the distribution box, thereby improving the accuracy of the combination of the distribution box and the transportation robot.
当然,实施本公开的任一产品或方法并不一定需要同时达到以上所述的所有优点。Of course, implementing any product or method of the present disclosure does not necessarily need to achieve all the advantages described above at the same time.
Brief Description of the Drawings
In order to explain the technical solutions of the present disclosure more clearly, the accompanying drawings used in the present disclosure are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a split delivery robot provided by an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a transport robot provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of a control method for a transport robot provided by an embodiment of the present disclosure;
FIG. 4a is a schematic diagram of a two-dimensional code provided by an embodiment of the present disclosure;
FIG. 4b is a schematic diagram of contour information provided by an embodiment of the present disclosure;
FIG. 4c is a schematic diagram of a corner point image provided by an embodiment of the present disclosure;
FIG. 4d is a schematic diagram of a target image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another split delivery robot provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart of an example of a control method for a transport robot provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a control device for a transport robot provided by an embodiment of the present disclosure; and
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
The embodiments of the present disclosure provide a control method for a transport robot. The method can be applied to a split delivery robot and, in some embodiments, can be executed by the transport robot of the split delivery robot. The transport robot may include at least a camera component, a control device, and a chassis drive device. The chassis drive device includes a connecting portion and a carrying portion; the bottom of the control device is fixedly connected to the connecting portion, and the carrying portion is used to carry the delivery box after the transport robot is combined with the delivery box. The camera component may be a camera or similar imaging device and, in some embodiments, may be arranged at the tail end of the carrying portion. FIG. 1 is a schematic diagram of the split delivery robot provided by the present disclosure. It can be understood that the camera component may also be arranged at other positions, such as on the side of the control device facing the delivery box. In addition, the transport robot may further include components such as distance sensors, a microcontroller, a CAN (Controller Area Network) transceiver, a CAN bus, a lidar, an IMU (Inertial Measurement Unit), motors, and encoders.
FIG. 2 is a schematic structural diagram of the transport robot provided by an embodiment of the present disclosure. The transport robot includes a camera, a distance sensor A, a distance sensor B, a lidar, an IMU, a control device, and a chassis drive device. The control device includes a microcontroller, a main controller, a CAN transceiver, and a CAN bus; the chassis drive device may include a motor driver, motors, and encoders. The main controller is connected to the camera, the lidar, the IMU, and the motor driver respectively. The motor driver is connected to the motors, and encoders are connected between the motor driver and the motors. The camera can be used to capture images and transmit them to the main controller, so that the main controller recognizes the graphic identification code and thereby locates the delivery box. The IMU can be used to detect attitude information of the transport robot, such as acceleration, attitude, and angular velocity. The lidar is used to scan point cloud data of the surrounding environment, and the encoders are used to record the mileage already traveled; the main controller can navigate and locate the robot based on the information returned by the IMU and the encoders. In addition, the main controller is connected to the CAN transceiver through the CAN bus, the CAN transceiver is connected to the microcontroller, and the microcontroller is connected to the distance sensor A and the distance sensor B respectively. The distance sensors may be narrow-beam ultrasonic ranging sensors, optical TOF ranging sensors, or the like; they measure the relative distance between the delivery box and the transport robot and report it to the microcontroller. The microcontroller reports the distance results to the CAN bus through the CAN transceiver, and the main controller obtains the data reported by the microcontroller through the CAN bus. The main controller can send motion commands to the motor driver, driving the motors to rotate so that the robot moves forward, backward, or turns, in order to combine the transport robot with the delivery box. Of course, the transport robot may also include other components not shown in FIG. 2, such as human-computer interaction components (a display screen, voice interaction components, and the like), which are not limited in the embodiments of the present disclosure.
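Purely as an illustrative sketch of the distance-reporting path over CAN (the actual microcontroller firmware would not be Python, and none of the names below come from the disclosure), the following assumes the python-can library on a SocketCAN interface named can0, a made-up arbitration ID, and a placeholder read_distance_mm() helper:

```python
import struct
import can  # python-can

DIST_MSG_ID = 0x101  # hypothetical CAN ID for distance reports

def read_distance_mm(sensor_id: int) -> int:
    """Placeholder for reading one ultrasonic/TOF ranging sensor, in millimetres."""
    raise NotImplementedError

def report_distances(bus: can.BusABC) -> None:
    d1 = read_distance_mm(0)  # distance sensor A
    d2 = read_distance_mm(1)  # distance sensor B
    # Pack both readings as unsigned 16-bit integers into a single CAN frame.
    payload = struct.pack("<HH", d1, d2)
    msg = can.Message(arbitration_id=DIST_MSG_ID, data=payload, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        report_distances(bus)
```

On the main-controller side, the same frame would simply be decoded with struct.unpack("&lt;HH", msg.data) to recover d1 and d2.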
The control method for the transport robot provided by the present disclosure is described in detail below with reference to specific implementations. As shown in FIG. 3, the steps are as follows:
Step 301: perform image acquisition through the camera component to obtain a target image;
Step 302: identify a graphic identification code in the target image, and determine position information of the graphic identification code in the target image;
Step 303: determine the relative position between the transport robot and the delivery box according to the position information; and
Step 304: perform a moving operation based on the relative position, so that the transport robot is combined with the delivery box.
In some embodiments, the transport robot is usually placed in the same area as the delivery box; alternatively, after completing a delivery, the transport robot can move to the area where the delivery box is located through its navigation system. At this point, the delivery box enters the field of view of the camera component, and the transport robot can perform image acquisition through the camera component to obtain a target image. In this way, the target image captured by the camera component usually contains an image of the delivery box.
In some embodiments, a graphic identification code is provided on the outside of the delivery box; in some embodiments, it can be provided on the side that should face the transport robot. The graphic identification code may be a two-dimensional code, a barcode, or another graphic code, which is not limited in the embodiments of the present disclosure. After receiving the target image, the control device can identify whether the target image contains a graphic identification code. If the graphic identification code is recognized, the position information of the graphic identification code in the target image is determined; the position information can be expressed as pixel coordinates, or as coordinates in a pre-established coordinate system. If no graphic identification code is recognized, step 301 is repeated until a graphic identification code is recognized.
In some embodiments, the process of identifying the graphic identification code in the target image and determining its position information in the target image includes: extracting contour information from the target image; determining, among the extracted contour information, target contour information that satisfies preset contour features, and taking the image corresponding to the target contour information as a corner point image of the graphic identification code; and calculating the position information of the graphic identification code in the target image based on the position coordinates of the corner point images in the target image.
In some embodiments, the control device can extract the contour information contained in the target image through a preset image processing algorithm. In some embodiments, the target image can be smoothed and binarized to obtain the contour information it contains; among this contour information, the target contour information that satisfies the preset contour features is then searched for, and the image corresponding to the target contour information is taken as a corner point image of the graphic identification code. The position coordinates of the corner point images in the target image can then be determined, and the position information of the graphic identification code in the target image can be calculated from these coordinates. The position information can be calculated in various ways. For example, the position coordinates of the centre point can be calculated from the position coordinates of two diagonal corner point images, and the centre point can be taken as the position information of the graphic identification code in the target image; alternatively, the position coordinates of a single corner point image can be used directly as the position information of the graphic identification code in the target image. The way in which the position information of the graphic identification code is calculated needs to be consistent with the way in which the preset reference position information is calibrated. For example, if the position coordinates of the upper-left corner of the two-dimensional code are calibrated as the reference position information, then correspondingly, during calculation, the position coordinates of the upper-left corner point image in the target image are taken as the position information of the graphic identification code.
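As a minimal sketch of this preprocessing stage, assuming OpenCV and a simple squareness-and-fill test standing in for the unspecified "preset contour features" (the thresholds below are illustrative, not taken from the disclosure):

```python
import cv2
import numpy as np

def find_corner_candidates(gray: np.ndarray) -> list[tuple[float, float]]:
    """Smooth and binarize the image, then return the centre points of
    contours that look like the square finder patterns of a QR-style code."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        if area < 100:                       # ignore small noise contours
            continue
        aspect = w / float(h)
        fill = area / float(w * h)
        # Roughly square and well filled: a stand-in "preset contour feature".
        if 0.8 < aspect < 1.25 and fill > 0.5:
            centres.append((x + w / 2.0, y + h / 2.0))
    return centres
```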
In some embodiments, the image of the two-dimensional code may be as shown in FIG. 4a and contains three corner points (that is, the two top corners and the lower-left corner). The image after contour information extraction is shown in FIG. 4b, and the recognized corner point images are shown in FIG. 4c. The centre points of the corner point images in FIG. 4c form a right-angled triangle, and the position information of the graphic identification code in the target image is then calculated from this right-angled triangle; it can be denoted (xa, ya).
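Continuing the sketch above, one way to obtain (xa, ya) from the three corner-point centres is to take the vertex at the right angle of the triangle they form, which for a QR-style code coincides with the corner shared by both code edges. The disclosure does not pin down which point of the triangle is reported, so this choice is an assumption, made here for consistency with the upper-left-corner calibration example:

```python
import numpy as np

def code_position(corners: list[tuple[float, float]]) -> tuple[float, float]:
    """Given the centres of the three finder-pattern corner images, return
    (xa, ya) as the vertex at the right angle of the triangle they form."""
    assert len(corners) == 3
    pts = [np.asarray(p, dtype=float) for p in corners]
    best_i, best_cos = 0, float("inf")
    for i in range(3):
        a = pts[(i + 1) % 3] - pts[i]
        b = pts[(i + 2) % 3] - pts[i]
        # The right-angle vertex has the most nearly perpendicular edge pair.
        cos = abs(float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))
        if cos < best_cos:
            best_i, best_cos = i, cos
    xa, ya = pts[best_i]
    return float(xa), float(ya)
```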
In some embodiments, the control device can determine the relative position between the transport robot and the delivery box according to the position information. In some embodiments, preset reference position information of the graphic identification code in the target image can be obtained, the offset of the position information relative to the reference position information is calculated, and the offset is taken as the relative position between the transport robot and the delivery box. The reference position information is the position information of the graphic identification code in the image captured by the transport robot when the transport robot is directly facing the delivery box. As shown in FIG. 4d, the cross in the picture indicates the reference position information.
The reference position information can be preset by a technician. For example, it can be the centre point of the image, denoted (xb, yb); the offset between the code position (xa, ya) and the centre point (xb, yb) is then calculated, and this offset can be taken as the relative position between the transport robot and the delivery box.
In some embodiments, the control device can send motion commands to the motor driver based on the relative position, so that the position information of the graphic identification code in the target image becomes the same as the reference position information. When the position information is the same as the reference position information, the transport robot is directly facing the delivery box; the transport robot can then move towards the delivery box, thereby completing the combination of the transport robot and the delivery box.
In some embodiments, the moving process can be: determining an adjustment angle of the transport robot based on the relative position, and performing the moving operation according to the adjustment angle; and, when it is detected that the transport robot is directly facing the delivery box, controlling the transport robot to move towards the delivery box so that the transport robot is combined with the delivery box.
In some embodiments, the control device can determine the adjustment angle of the transport robot based on the calculated relative position (that is, the offset), for example turning in the direction opposite to the offset; the control device then sends motion commands to the motor driver so that the motor driver drives the motors to rotate and the angle is adjusted. During the movement, the transport robot can continuously acquire images (for example, periodically through the camera component) so that the angle is adjusted continuously, improving the accuracy of the combination.
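A rough sketch of this alignment loop is given below, correcting only the horizontal pixel offset for simplicity. capture_image, send_turn_command, and send_forward_command are hypothetical interfaces, the gain and tolerance values are illustrative, and find_corner_candidates/code_position refer to the sketches above:

```python
import time

PIXEL_TOLERANCE = 5    # how close xa must be to xb, in pixels (illustrative)
TURN_GAIN = 0.002      # turn command per pixel of horizontal offset (illustrative)

def align_with_box(capture_image, send_turn_command, send_forward_command,
                   xb: float, yb: float) -> None:
    """Keep turning until the code position sits on the reference point,
    then drive forward towards the delivery box."""
    while True:
        gray = capture_image()
        corners = find_corner_candidates(gray)
        if len(corners) != 3:
            time.sleep(0.1)            # code not visible yet, keep looking
            continue
        xa, ya = code_position(corners)
        dx = xa - xb                   # horizontal offset in pixels
        if abs(dx) <= PIXEL_TOLERANCE:
            send_forward_command()     # facing the box: start the approach
            return
        # Turn against the offset; the sign convention is an assumption.
        send_turn_command(-TURN_GAIN * dx)
```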
In some embodiments, precise loading guidance can also be achieved through the distance detection components. The process includes: detecting the distance between the transport robot and the delivery box through the distance detection components; and, when the detected distance is within a preset distance range, controlling the transport robot to stop moving, so as to complete the combination of the transport robot and the delivery box.
In some embodiments, when it is detected that the transport robot is directly facing the delivery box, the transport robot can detect the distance between itself and the delivery box through the distance detection components while moving towards the delivery box. When the detected distance is within the preset distance range, this indicates that the transport robot has moved into the designated position of the delivery box, and the transport robot can be controlled to stop moving, completing the combination of the transport robot and the delivery box.
There can be a plurality of distance sensors, and they can be symmetrically arranged on the side of the control device facing the delivery box. In one example, two distance sensors, distance sensor A and distance sensor B, are arranged on the body of the delivery robot, as shown in FIG. 5. The distance sensors measure the distances d1 and d2 from the left and right sides of the delivery box to the transport robot and report them to the microcontroller; the microcontroller reports the results to the main controller through the CAN bus, and the main controller sends motion commands to the motor driver according to the values of d1 and d2 until d1 and d2 reach the preset distance range, at which point the transport robot has completed loading the delivery box.
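A minimal sketch of one control step of this final approach is shown below; the millimetre thresholds and the string motion commands are assumptions, not values from the disclosure:

```python
TARGET_MM = 60        # desired stand-off distance (illustrative)
TOLERANCE_MM = 5      # acceptable band around the target (illustrative)
MAX_SKEW_MM = 10      # allowed left/right difference before correcting (illustrative)

def docking_step(d1_mm: int, d2_mm: int, send_motion_command) -> bool:
    """One control step of the final approach; returns True once both
    readings are inside the preset range, i.e. loading is complete."""
    in_range = (abs(d1_mm - TARGET_MM) <= TOLERANCE_MM and
                abs(d2_mm - TARGET_MM) <= TOLERANCE_MM)
    if in_range:
        send_motion_command("stop")
        return True
    if d1_mm - d2_mm > MAX_SKEW_MM:
        send_motion_command("turn_left_slow")    # left side is farther away
    elif d2_mm - d1_mm > MAX_SKEW_MM:
        send_motion_command("turn_right_slow")   # right side is farther away
    else:
        send_motion_command("forward_slow")
    return False
```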
In some embodiments, lidar navigation can be used to bring the transport robot to the loading area of the delivery box. The process includes: obtaining the target position where the delivery box is located; scanning with the lidar to obtain point cloud data of surrounding objects, matching the point cloud data with pre-stored map information, and determining the current position of the transport robot based on the matching result; and determining a motion path between the current position and the target position and moving along the motion path to reach the target position.
In some embodiments, after completing a delivery, the transport robot needs to move to the area where the delivery box is located through its navigation system. The transport robot can obtain the target position where the delivery box is located, which can be preset by a technician. The transport robot scans with the lidar to obtain point cloud data of surrounding objects and then matches the point cloud data against pre-stored map information. The map information is constructed in advance through lidar-based SLAM (simultaneous localization and mapping). The control device can match the detected point cloud data against the map information and take the matched position as the current position. A motion path between the current position and the target position can then be determined, and the robot moves along this path to reach the target position.
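A deliberately brute-force sketch of matching one lidar scan against a pre-stored occupancy grid follows; the search windows, resolution, and grid conventions are assumptions, and a practical implementation would use a proper scan matcher (ICP, correlative matching, or a particle filter) instead of this exhaustive search:

```python
import numpy as np

def match_scan_to_map(scan_xy: np.ndarray, occupancy: np.ndarray,
                      resolution_m: float,
                      prior_pose: tuple[float, float, float]
                      ) -> tuple[float, float, float]:
    """Try small pose perturbations around a prior pose and keep the one under
    which the most scan points fall on occupied map cells.
    scan_xy: (N, 2) points in the robot frame; occupancy: 2D grid of 0/1
    with the map origin at cell (0, 0)."""
    x0, y0, th0 = prior_pose
    best_pose, best_score = prior_pose, -1
    for dth in np.linspace(-0.1, 0.1, 9):        # +/-0.1 rad search window
        c, s = np.cos(th0 + dth), np.sin(th0 + dth)
        rot = scan_xy @ np.array([[c, s], [-s, c]])   # rotate into world frame
        for dx in np.linspace(-0.2, 0.2, 9):     # +/-0.2 m search window
            for dy in np.linspace(-0.2, 0.2, 9):
                pts = rot + np.array([x0 + dx, y0 + dy])
                cells = np.floor(pts / resolution_m).astype(int)
                ok = ((cells[:, 0] >= 0) & (cells[:, 0] < occupancy.shape[1]) &
                      (cells[:, 1] >= 0) & (cells[:, 1] < occupancy.shape[0]))
                hits = occupancy[cells[ok, 1], cells[ok, 0]].sum()
                if hits > best_score:
                    best_score = hits
                    best_pose = (x0 + dx, y0 + dy, th0 + dth)
    return best_pose
```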
In some embodiments, the current position can be determined by combining the IMU feedback with the mileage information. The process includes: taking the position in the matching result as a first candidate position; determining a second candidate position of the transport robot according to the attitude information of the transport robot, the mileage already traveled, and the starting position of the transport robot; and determining the current position of the transport robot according to the first candidate position and the second candidate position.
In some embodiments, the position in the matching result is taken as the first candidate position. In addition, the position of the transport robot (which can be called the second candidate position) can be calculated from the attitude information fed back by the IMU, the mileage recorded by the encoders, and the starting position of the transport robot; the midpoint between the first candidate position and the second candidate position can then be taken as the current position of the transport robot. In this way, the current position is determined by combining the IMU feedback with the mileage information, which improves the accuracy of the position estimate.
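A minimal sketch of this two-candidate fusion is given below; the dead-reckoning helper assumes the encoder mileage is available as per-segment lengths with matching IMU headings, which is one plausible reading of the description:

```python
import math

def dead_reckon(start_xy: tuple[float, float], headings_rad: list[float],
                segment_lengths_m: list[float]) -> tuple[float, float]:
    """Second candidate position: integrate the encoder mileage along the
    headings reported by the IMU, starting from the known start position."""
    x, y = start_xy
    for heading, dist in zip(headings_rad, segment_lengths_m):
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

def fuse_positions(first: tuple[float, float],
                   second: tuple[float, float]) -> tuple[float, float]:
    """Current position as the midpoint of the two candidates, as described above."""
    return ((first[0] + second[0]) / 2.0, (first[1] + second[1]) / 2.0)
```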
In some embodiments, a process of constructing the map through lidar-based SLAM is also provided: the transport robot starts from a preset map origin and records attitude information and mileage information through the IMU and the encoders. The orientation of the transport robot is known from the attitude information fed back by the IMU, and the distance traveled in each orientation is known from the mileage information. The lidar scans continuously to obtain point cloud data of surrounding objects, from which information such as the contours of and distances to surrounding objects can be derived. In some embodiments, by scanning with the lidar at the map origin, the obstacles around the map origin and their distances from the transport robot are obtained; the transport robot then moves forward, determining its travel distance and orientation from the mileage information and attitude information as it goes, while the lidar keeps scanning and returning point cloud data of the surrounding obstacles, so that a local map is built around the point at distance d and azimuth γ from the coordinate origin. Finally, the transport robot keeps moving until it has traversed the entire environment, and the map of the whole space is obtained by superimposing all of the local maps.
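A rough sketch of superimposing the scans into a single grid map using the odometry poses is shown below; the grid size, resolution, and origin placement are illustrative, and a real SLAM pipeline would also correct the poses rather than trust raw odometry:

```python
import numpy as np

def build_occupancy_map(poses, scans_xy, size_cells: int = 1000,
                        resolution_m: float = 0.05) -> np.ndarray:
    """Superimpose all scans into one occupancy grid.

    poses    : list of (x, y, theta) robot poses from IMU/encoder odometry
    scans_xy : list of (N, 2) point clouds in the robot frame, one per pose
    Returns a 2D grid where 1 marks cells seen as occupied at least once."""
    grid = np.zeros((size_cells, size_cells), dtype=np.uint8)
    origin = size_cells // 2          # put the map origin at the grid centre
    for (x, y, th), scan in zip(poses, scans_xy):
        c, s = np.cos(th), np.sin(th)
        world = scan @ np.array([[c, s], [-s, c]]) + np.array([x, y])
        cells = np.floor(world / resolution_m).astype(int) + origin
        ok = ((cells[:, 0] >= 0) & (cells[:, 0] < size_cells) &
              (cells[:, 1] >= 0) & (cells[:, 1] < size_cells))
        grid[cells[ok, 1], cells[ok, 0]] = 1
    return grid
```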
In some embodiments, an example of the control method for the transport robot is also provided, as shown in FIG. 6, and includes the following steps (an illustrative end-to-end sketch follows the list):
Step 601: obtain the target position where the delivery box is located;
Step 602: determine the current position of the transport robot through lidar positioning, the attitude information fed back by the IMU, and the mileage already traveled;
Step 603: determine a motion path between the current position and the target position, and move along the motion path to reach the target position where the delivery box is located;
Step 604: perform image acquisition through the camera component to obtain a target image;
Step 605: identify a graphic identification code in the target image, the graphic identification code being provided on the outside of the delivery box;
Step 606: extract contour information from the target image;
Step 607: among the extracted contour information, determine target contour information that satisfies the preset contour features, and take the image corresponding to the target contour information as a corner point image;
Step 608: calculate the position information of the graphic identification code in the target image based on the position coordinates of the corner point images in the target image;
Step 609: calculate the offset of the position information relative to the preset reference position information, and take the offset as the relative position between the transport robot and the delivery box;
wherein the reference position information is the position information of the graphic identification code in the image captured by the transport robot when the transport robot is directly facing the delivery box;
Step 610: determine the adjustment angle of the transport robot based on the relative position, and perform the moving operation according to the adjustment angle;
Step 611: when it is detected that the transport robot is directly facing the delivery box, control the transport robot to move towards the delivery box;
Step 612: detect the distance between the transport robot and the delivery box through the distance detection components; and
Step 613: when the detected distance is within the preset distance range, control the transport robot to stop moving, so as to complete the combination of the transport robot and the delivery box.
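Tying the example together, a hypothetical top-level routine might read as follows; every robot.* method name is an assumption standing in for the interfaces described above, and align_with_box and docking_step refer to the earlier sketches:

```python
def load_delivery_box(robot) -> None:
    """End-to-end sketch of steps 601-613 for a hypothetical `robot` wrapper."""
    target = robot.get_box_position()                        # step 601
    current = robot.localize()                               # step 602
    robot.follow_path(robot.plan_path(current, target))      # step 603
    # Steps 604-611: visual alignment against the graphic identification code.
    align_with_box(robot.capture_image, robot.turn, robot.forward,
                   xb=robot.reference_x, yb=robot.reference_y)
    # Steps 612-613: final approach guided by the two distance sensors.
    done = False
    while not done:
        d1, d2 = robot.read_distances()
        done = docking_step(d1, d2, robot.send_motion_command)
```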
Based on the same technical concept, the present disclosure also provides a transport robot. The transport robot includes a camera component, a control device, and a chassis drive device, the control device being connected to the camera component and to the chassis drive device respectively, wherein:
the camera component is configured to perform image acquisition to obtain a target image;
the control device is configured to identify a graphic identification code in the target image and to determine position information of the graphic identification code in the target image, the graphic identification code being provided on the outside of a delivery box, and to determine a relative position between the transport robot and the delivery box according to the position information; and
the control device is further configured to control the chassis drive device to perform a moving operation based on the relative position, so that the transport robot is combined with the delivery box.
In some embodiments, the chassis drive device includes a connecting portion and a carrying portion, the bottom of the control device is fixedly connected to the connecting portion, and the carrying portion is used to carry the delivery box after the transport robot is combined with the delivery box; and
the camera component is arranged at the tail end of the carrying portion.
In some embodiments, the transport robot further includes a distance detection component connected to the control device;
the distance detection component is configured to detect the distance between the transport robot and the delivery box; and
the control device is further configured to control the chassis drive device to stop moving when the distance detected by the distance detection component is within a preset distance range, so as to complete the combination of the transport robot and the delivery box.
In some embodiments, there are a plurality of distance detection components, and the distance detection components are symmetrically arranged on the side of the control device facing the delivery box.
In some embodiments, the transport robot further includes a lidar connected to the control device;
the lidar is configured to scan to obtain point cloud data of surrounding objects; and
the control device is further configured to obtain the target position where the delivery box is located, match the point cloud data with pre-stored map information, determine the current position of the transport robot based on the matching result, determine a motion path between the current position and the target position, and control the chassis drive device to move based on the motion path so as to reach the target position.
In some embodiments, the transport robot further includes an inertial measurement unit (IMU) connected to the control device;
the IMU is configured to detect attitude information of the transport robot and the mileage already traveled; and
the control device is further configured to take the position in the matching result as a first candidate position; determine a second candidate position of the transport robot according to the attitude information and mileage information fed back by the IMU and the starting position of the transport robot; and determine the current position of the transport robot according to the first candidate position and the second candidate position.
In some embodiments, the control device further includes a human-computer interaction component.
Based on the same technical concept, the present disclosure also provides a control device for a transport robot. As shown in FIG. 7, the device includes:
an acquisition module 710 configured to perform image acquisition through a camera component to obtain a target image;
an identification module 720 configured to identify a graphic identification code in the target image and to determine position information of the graphic identification code in the target image, the graphic identification code being provided on the outside of a delivery box;
a determining module 730 configured to determine the relative position between the transport robot and the delivery box according to the position information; and
a moving module 740 configured to perform a moving operation based on the relative position, so that the transport robot is combined with the delivery box.
Based on the same technical concept, the present disclosure also provides an electronic device. As shown in FIG. 8, it includes a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with one another through the communication bus 804;
the memory 803 is configured to store a computer program; and
the processor 801 is configured to implement the method described in the present disclosure when executing the program stored in the memory 803.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), such as at least one disk memory. In some embodiments, the memory may also be at least one storage device located away from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present disclosure also provides a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the method described in the present disclosure is implemented.
The present disclosure also provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the method described in the present disclosure.
The embodiments of the present disclosure may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fibre, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that a computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific implementations of the present disclosure, provided to enable those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

  1. 运送机器人的控制方法,其包括:The control method of the transport robot includes:
    通过摄像部件进行图像采集,得到目标图像;Image acquisition through the camera component to obtain the target image;
    在所述目标图像中识别图形标识码,并确定所述图形标识码在所述目标图像中的位置信息,所述图像标识码设置于配送箱体的外侧;Identify the graphic identification code in the target image, and determine the position information of the graphic identification code in the target image, the image identification code is set on the outside of the delivery box;
    根据所述位置信息确定运送机器人与所述配送箱体的相对位置;以及Determine the relative position of the transport robot and the delivery box according to the position information; and
    基于所述相对位置进行移动操作,以使所述运送机器人与所述配送箱体结合。A movement operation is performed based on the relative position, so that the transport robot is combined with the delivery box.
  2. 如权利要求1所述的方法,其中,所述在所述目标图像中识别图形标识码,并确定所述图形标识码在所述目标图像中的位置信息,包括:The method of claim 1, wherein the identifying the graphic identification code in the target image and determining the location information of the graphic identification code in the target image comprises:
    对所述目标图像进行轮廓信息提取;Extract contour information on the target image;
    在提取出的轮廓信息中,确定满足预设轮廓特征的目标轮廓信息,并将所述目标轮廓信息对应的图像作为所述图形标识码的角点图像;以及In the extracted contour information, determine the target contour information that satisfies the preset contour feature, and use the image corresponding to the target contour information as the corner image of the graphic identification code; and
    基于所述角点图像在所述目标图像中的位置坐标,计算所述图形标识码在所述目标图像中的位置信息。Based on the position coordinates of the corner point image in the target image, the position information of the graphic identification code in the target image is calculated.
  3. 如权利要求1或2所述的方法,其中,所述根据所述位置信息确定运送机器人与所述配送箱体的相对位置,包括:The method according to claim 1 or 2, wherein the determining the relative position of the transportation robot and the delivery box according to the position information comprises:
    获取预设的所述图形标识码在所述目标图像中的基准位置信息,其中,所述基准位置信息是所述运送机器人正对所述配送箱体时,所述图形识别码在所述运送机器人所拍摄到的图像中的位置信息;以及Obtain the preset reference position information of the graphic identification code in the target image, where the reference position information is that when the transport robot is facing the delivery box, the graphic identification code is in the transport The position information in the image captured by the robot; and
    计算所述位置信息相对于所述基准位置信息的偏移量,将所述偏移量作为所述运送机器人与所述配送箱体的相对位置。The offset amount of the position information relative to the reference position information is calculated, and the offset amount is used as the relative position of the transport robot and the delivery box.
  4. 如权利要求1至3中任一权利要求所述的方法,其中,所述基于所述相对位置进行移动操作,以使所述运送机器人与所述配送箱体 结合,包括:The method according to any one of claims 1 to 3, wherein the moving operation based on the relative position to combine the transport robot with the delivery box includes:
    基于所述相对位置确定所述运送机器人的调整角度,并按照所述调整角度进行移动操作;以及Determine the adjustment angle of the transport robot based on the relative position, and perform a movement operation according to the adjustment angle; and
    当检测到所述运送机器人正对所述配送箱体时,控制所述运送机器人向所述配送箱体移动,以使所述运送机器人与所述配送箱体结合。When it is detected that the transportation robot is facing the distribution box, the transportation robot is controlled to move to the distribution box, so that the transportation robot is combined with the distribution box.
  5. 如权利要求4所述的方法,其中,所述控制所述运送机器人向所述配送箱体移动之后,所述方法还包括:The method according to claim 4, wherein, after the controlling the transportation robot to move to the distribution box, the method further comprises:
    通过距离检测部件检测所述运送机器人与所述配送箱体之间的距离;以及Detecting the distance between the transport robot and the delivery box by a distance detecting component; and
    当检测到的距离在预设距离范围内时,控制所述运送机器人停止移动,以完成所述运送机器人与所述配送箱体结合。When the detected distance is within the preset distance range, the transport robot is controlled to stop moving, so as to complete the combination of the transport robot and the delivery box.
  6. 如权利要求1至5中任一权利要求所述的方法,其中,所述通过摄像部件进行图像采集,得到目标图像之前,还包括:The method according to any one of claims 1 to 5, wherein before the image acquisition by the camera component to obtain the target image, the method further comprises:
    获取配送箱体所在的目标位置;Obtain the target location of the distribution box;
    通过激光雷达进行扫描,得到周围物体的点云数据,并将所述点云数据与预先存储的地图信息进行匹配,基于匹配结果确定所述运送机器人的当前位置;以及Scanning by lidar to obtain point cloud data of surrounding objects, matching the point cloud data with pre-stored map information, and determining the current position of the transport robot based on the matching result; and
    确定所述当前位置与所述目标位置之间的运动路径,并基于所述运动路径进行移动,以到达所述目标位置。Determine a motion path between the current position and the target position, and move based on the motion path to reach the target position.
  7. 如权利要求6所述的方法,其中,所述基于匹配结果确定所述运送机器人的当前位置,包括:8. The method of claim 6, wherein the determining the current position of the transport robot based on the matching result comprises:
    将匹配结果中的位置作为第一候选位置;Take the position in the matching result as the first candidate position;
    根据所述运送机器人的姿态信息和已行驶的里程信息、以及所述运送机器人的起始位置,确定所述运送机器人的第二候选位置;以及Determining the second candidate position of the transport robot according to the posture information and the mileage information of the transport robot, and the starting position of the transport robot; and
    根据所述第一候选位置和所述第二候选位置,确定所述运送机器人的当前位置。According to the first candidate position and the second candidate position, the current position of the transport robot is determined.
  8. 运送机器人的控制装置,其包括:The control device of the transport robot includes:
    采集模块,配置为通过摄像部件进行图像采集,得到目标图像;The acquisition module is configured to perform image acquisition through the camera component to obtain the target image;
    识别模块,配置为在所述目标图像中识别图形标识码,并确定所述图形标识码在所述目标图像中的位置信息,所述图像标识码设置于配送箱体的外侧;An identification module configured to identify a graphic identification code in the target image and determine the position information of the graphic identification code in the target image, the image identification code being set on the outside of the delivery box;
    确定模块,配置为根据所述位置信息确定运送机器人与所述配送箱体的相对位置;以及A determining module, configured to determine the relative position of the transport robot and the delivery box according to the position information; and
    移动模块,配置为基于所述相对位置进行移动操作,以使所述运送机器人与所述配送箱体结合。The moving module is configured to perform a moving operation based on the relative position, so that the transport robot is combined with the delivery box.
  9. 运送机器人,其包括摄像部件、控制装置和底盘驱动装置,所述控制装置分别与所述摄像部件和所述底盘驱动装置连接,其中:A transport robot, which includes a camera component, a control device, and a chassis drive device, the control device is respectively connected to the camera component and the chassis drive device, wherein:
    所述摄像部件,配置为进行图像采集,得到目标图像;The camera component is configured to perform image collection to obtain a target image;
    所述控制装置,配置为在所述目标图像中识别图形标识码,并确定所述图形标识码在所述目标图像中的位置信息,所述图像标识码设置于配送箱体的外侧;根据所述位置信息确定运送机器人与所述配送箱体的相对位置;The control device is configured to identify the graphic identification code in the target image, and determine the position information of the graphic identification code in the target image, the image identification code is set on the outside of the delivery box; The position information determines the relative position of the transport robot and the delivery box;
    所述控制装置,还配置为基于所述相对位置控制所述底盘驱动装置进行移动操作,以使所述运送机器人与所述配送箱体结合。The control device is further configured to control the chassis driving device to perform a movement operation based on the relative position, so that the transport robot is combined with the delivery box.
  10. The transport robot according to claim 9, wherein the chassis driving device comprises a connecting portion and a carrying portion, the bottom of the control device is fixedly connected to the connecting portion, and the carrying portion is configured to carry the delivery box after the transport robot is combined with the delivery box; and
    the camera component is arranged at the tail end of the carrying portion.
  11. The transport robot according to claim 9 or 10, wherein the transport robot further comprises a distance detection component, and the distance detection component is connected to the controller;
    the distance detection component is configured to detect the distance between the transport robot and the delivery box; and
    the control device is further configured to control the chassis driving device to stop moving when the distance detected by the distance detection component is within a preset distance range, so as to complete the combination of the transport robot and the delivery box.
  12. The transport robot according to claim 11, wherein the number of the distance detection components is multiple, and the distance detection components are symmetrically arranged on the side of the control device facing the delivery box.
  13. The transport robot according to any one of claims 9 to 12, wherein the transport robot further comprises a lidar, and the lidar is connected to the control device;
    the lidar is configured to scan to obtain point cloud data of surrounding objects; and
    the control device is further configured to acquire a target position where the delivery box is located, match the point cloud data with pre-stored map information, determine the current position of the transport robot based on the matching result, determine a movement path between the current position and the target position, and control the chassis driving device to move based on the movement path so as to reach the target position.
  14. The transport robot according to claim 13, wherein the transport robot further comprises an inertial measurement unit (IMU), and the IMU is connected to the control device;
    the IMU is configured to detect attitude information and traveled mileage information of the transport robot; and
    the control device is further configured to take the position in the matching result as a first candidate position, determine a second candidate position of the transport robot according to the attitude information and traveled mileage information fed back by the IMU and the starting position of the transport robot, and determine the current position of the transport robot according to the first candidate position and the second candidate position.
  15. The transport robot according to any one of claims 9 to 14, wherein the control device further comprises a human-computer interaction component.
  16. A split-type delivery robot, comprising the transport robot according to any one of claims 9 to 15 and at least one delivery box, wherein an image identification code is provided on the outer side of the delivery box.
  17. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
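
For illustration only, and not part of the claimed subject matter: the minimal Python sketch below shows one way the docking behaviour described in claims 10 to 12 and 16 could be approximated. The detector, the distance-sensor read-out, and the chassis methods (detect_identification_code, read_distance_sensors, stop, rotate_in_place, strafe, move_backward) are hypothetical placeholders, and the numeric thresholds are assumed values rather than figures taken from the application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MarkerObservation:
    visible: bool
    lateral_offset_m: float  # signed offset of the identification code from the camera axis

PRESET_DISTANCE_RANGE_M = (0.03, 0.08)  # assumed values for the "preset distance range"
LATERAL_TOLERANCE_M = 0.01

def detect_identification_code(camera_frame) -> MarkerObservation:
    """Hypothetical detector for the image identification code on the delivery box."""
    raise NotImplementedError

def read_distance_sensors(sensors) -> List[float]:
    """Hypothetical read-out of the symmetrically arranged distance detection components."""
    raise NotImplementedError

def docking_step(camera_frame, sensors, chassis) -> bool:
    """Run one control-loop iteration; return True once docking is complete."""
    lo, hi = PRESET_DISTANCE_RANGE_M
    distances = read_distance_sensors(sensors)
    if distances and all(lo <= d <= hi for d in distances):
        chassis.stop()                          # within the preset range: stop the chassis
        return True

    obs = detect_identification_code(camera_frame)
    if not obs.visible:
        chassis.rotate_in_place(0.2)            # search until the code is in view
    elif abs(obs.lateral_offset_m) > LATERAL_TOLERANCE_M:
        chassis.strafe(-obs.lateral_offset_m)   # re-centre on the code
    else:
        chassis.move_backward(0.05)             # camera at the tail end, so reverse toward the box
    return False
```

Requiring every symmetrically arranged sensor to read inside the preset range is one simple way to keep the robot square to the delivery box before the combination is treated as complete; the actual control device may apply a different criterion.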
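
Similarly, for claims 13 and 14, the sketch below assumes a scan-matching step has already produced a first candidate position from the lidar point cloud and the pre-stored map; it then derives a second candidate by dead reckoning from the IMU attitude and travelled mileage, fuses the two with a fixed weighted average, and plans straight-line waypoints toward the target position. The weighting and the path planner are deliberately simplistic stand-ins for whatever matching, fusion, and planning the control device actually uses.

```python
import numpy as np

def dead_reckon(start_xy, heading_rad, mileage_m):
    """Second candidate: propagate the start position with the IMU heading and travelled mileage."""
    x0, y0 = start_xy
    return np.array([x0 + mileage_m * np.cos(heading_rad),
                     y0 + mileage_m * np.sin(heading_rad)])

def fuse_candidates(first_xy, second_xy, w_scan=0.7):
    """Current position from the scan-match candidate and the dead-reckoned candidate."""
    return w_scan * np.asarray(first_xy) + (1.0 - w_scan) * np.asarray(second_xy)

def plan_path(current_xy, target_xy, step_m=0.1):
    """Toy movement path: evenly spaced waypoints from the current position to the target."""
    current_xy, target_xy = np.asarray(current_xy, float), np.asarray(target_xy, float)
    n = max(int(np.linalg.norm(target_xy - current_xy) / step_m), 1)
    return [current_xy + (target_xy - current_xy) * i / n for i in range(1, n + 1)]

if __name__ == "__main__":
    first = (2.05, 1.02)                                     # from matching the point cloud to the map
    second = dead_reckon((0.0, 0.0), np.deg2rad(27.0), 2.3)  # from IMU attitude and mileage
    current = fuse_candidates(first, second)
    waypoints = plan_path(current, target_xy=(5.0, 3.0))     # target: where the delivery box sits
    print(current, len(waypoints))
```

In practice the two candidates would more likely be combined by a filter (for example an extended Kalman filter) that weights each one by its estimated uncertainty; the fixed weight here only keeps the example short.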
PCT/CN2021/100304 2020-06-19 2021-06-16 Transport robot control method and device, transport robot, and storage medium WO2021254376A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010565219.1 2020-06-19
CN202010565219.1A CN111694358B (en) 2020-06-19 2020-06-19 Method and device for controlling transfer robot, and storage medium

Publications (1)

Publication Number Publication Date
WO2021254376A1 true WO2021254376A1 (en) 2021-12-23

Family

ID=72482150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100304 WO2021254376A1 (en) 2020-06-19 2021-06-16 Transport robot control method and device, transport robot, and storage medium

Country Status (2)

Country Link
CN (1) CN111694358B (en)
WO (1) WO2021254376A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694358B (en) * 2020-06-19 2022-11-08 京东科技信息技术有限公司 Method and device for controlling transfer robot, and storage medium
CN114227659A (en) * 2021-12-15 2022-03-25 北京云迹科技股份有限公司 Split type robot
CN114211489B (en) * 2021-12-15 2024-06-07 北京云迹科技股份有限公司 Split security monitoring robot
CN114789440B (en) * 2022-04-22 2024-02-20 深圳市正浩创新科技股份有限公司 Target docking method, device, equipment and medium based on image recognition


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG11201906595XA (en) * 2017-01-16 2019-08-27 Zhejiang Guozi Robotics Co Ltd Method for carrying goods by robot
CN108983603B (en) * 2018-06-27 2021-07-16 广州视源电子科技股份有限公司 Butt joint method of robot and object and robot thereof
CN109066861A (en) * 2018-08-20 2018-12-21 四川超影科技有限公司 Intelligent inspection robot charging controller method based on machine vision
CN109460044A (en) * 2019-01-10 2019-03-12 轻客小觅智能科技(北京)有限公司 A kind of robot method for homing, device and robot based on two dimensional code
CN111017069A (en) * 2019-12-18 2020-04-17 北京海益同展信息科技有限公司 Distribution robot, control method, device and system thereof, and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025971A (en) * 2006-02-24 2007-08-29 富士通株式会社 Bar-code reading apparatus, bar-code reading method, and library apparatus
CN104777835A (en) * 2015-03-11 2015-07-15 武汉汉迪机器人科技有限公司 Omni-directional automatic forklift and 3D stereoscopic vision navigating and positioning method
CN106873590A (en) * 2017-02-21 2017-06-20 广州大学 A kind of carrier robot positioning and task management method and device
US20180307230A1 (en) * 2017-04-24 2018-10-25 Mitsubishi Electric Corporation Flight control device and profile measurement device
CN206833249U (en) * 2017-05-31 2018-01-02 北京物资学院 A kind of merchandising machine people
CN108792384A (en) * 2018-04-18 2018-11-13 北京极智嘉科技有限公司 Method for carrying, handling device and handling system
CN111061228A (en) * 2018-10-17 2020-04-24 长沙行深智能科技有限公司 Automatic container transfer control method based on target tracking
CN111056196A (en) * 2018-10-17 2020-04-24 长沙行深智能科技有限公司 Automatic container transfer control method based on image signs
CN111694358A (en) * 2020-06-19 2020-09-22 北京海益同展信息科技有限公司 Method and device for controlling transfer robot, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114211509A (en) * 2021-12-31 2022-03-22 上海钛米机器人股份有限公司 Food delivery robot's box and food delivery robot
CN114211509B (en) * 2021-12-31 2024-05-03 上海钛米机器人股份有限公司 Box of meal delivery robot and meal delivery robot

Also Published As

Publication number Publication date
CN111694358A (en) 2020-09-22
CN111694358B (en) 2022-11-08

Similar Documents

Publication Publication Date Title
WO2021254376A1 (en) Transport robot control method and device, transport robot, and storage medium
JP2501010B2 (en) Mobile robot guidance device
US11002840B2 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
JP4533659B2 (en) Apparatus and method for generating map image by laser measurement
CN109917788B (en) Control method and device for robot to walk along wall
JP2017215939A (en) Information processor, vehicle, information processing method and program
CN111121754A (en) Mobile robot positioning navigation method and device, mobile robot and storage medium
US11954918B2 (en) Object detection device, object detection method, and storage medium
CN110850859B (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
JP2017120551A (en) Autonomous traveling device
CN103472434B (en) Robot sound positioning method
WO2023024347A1 (en) Autonomous exploration method for robot, and terminal device and storage medium
US11734850B2 (en) On-floor obstacle detection method and mobile machine using the same
CN113768419B (en) Method and device for determining sweeping direction of sweeper and sweeper
Peter et al. Line segmentation of 2d laser scanner point clouds for indoor slam based on a range of residuals
WO2022237375A1 (en) Positioning apparatus calibration method, odometer calibration method, program product, and calibration apparatus
KR20180066668A (en) Apparatus and method constructing driving environment of unmanned vehicle
CN115880673A (en) Obstacle avoidance method and system based on computer vision
CN114115263B (en) Autonomous mapping method and device for AGV, mobile robot and medium
JP7451165B2 (en) Traveling position verification system, traveling position measurement system, and traveling position correction system
CN114777761A (en) Cleaning machine and map construction method
EP3736733B1 (en) Tracking device, tracking method, and tracking system
WO2023015407A1 (en) Method for identifying artifact point, terminal device, and computer-readable storage medium
CN113768420A (en) Sweeper and control method and device thereof
CN112506189A (en) Method for controlling robot to move

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21826619
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21826619
    Country of ref document: EP
    Kind code of ref document: A1