CN116342695B - Unmanned forklift placement detection method and device, unmanned forklift, and storage medium

Info

Publication number
CN116342695B
Authority
CN
China
Prior art keywords
point cloud
goods
target
obstacle
cloud set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310610634.8A
Other languages
Chinese (zh)
Other versions
CN116342695A (en)
Inventor
杨秉川
方牧
鲁豫杰
李陆洋
王琛
方晓曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionnav Robotics Shenzhen Co Ltd
Original Assignee
Visionnav Robotics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionnav Robotics Shenzhen Co Ltd
Priority to CN202310610634.8A
Publication of CN116342695A
Application granted
Publication of CN116342695B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075 Constructional features or details
    • B66F9/07504 Accessories, e.g. for towing, charging, locking
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075 Constructional features or details
    • B66F9/0755 Position control; Position detectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Structural Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Civil Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Forklifts And Lifting Vehicles (AREA)

Abstract

A placement detection method and device for an unmanned forklift, an unmanned forklift, and a storage medium. The method comprises the following steps: acquiring multiple frames of initial point cloud data corresponding to a target placement position; merging the multiple frames of initial point cloud data to obtain a merged point cloud set; obtaining a projection image of the merged point cloud set on a reference plane, and separating a beam point cloud set from the merged point cloud set according to the projection image; determining, based on the beam point cloud set, obstacle position information corresponding to an obstacle at the target placement position; and determining, according to the beam point cloud set and the obstacle position information, placement pose information corresponding to the target placement position, the placement pose information being used to instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to the placement pose information. Implementing the embodiments of the application improves the accuracy with which an unmanned forklift detects a shelf and allows the placement position to be determined accurately, thereby improving the safety and reliability of goods transport and placement by the unmanned forklift.

Description

Unmanned forklift placement detection method and device, unmanned forklift, and storage medium
Technical Field
The application relates to the technical field of unmanned forklifts, and in particular to a placement detection method and device for an unmanned forklift, an unmanned forklift, and a storage medium.
Background
Currently, in warehouse logistics scenarios, the demand for transporting goods with robots such as unmanned forklifts keeps growing. Conventional unmanned forklift transport relies largely on 2D lidar, ToF (Time of Flight) cameras, and similar detection devices, with pick-up and put-down positions determined from the planar data they provide. In practice, however, when goods must be placed on a shelf, especially a high-level shelf (for example, one 11 meters or taller), a conventional unmanned forklift cannot determine the placement height from planar data alone, and it is often difficult to accurately detect and identify shelf beams, obstacles, and the like. A suitable placement position is therefore hard to determine, the goods risk colliding or falling during placement, and the safety and reliability of goods transport by the unmanned forklift are reduced.
Disclosure of Invention
The embodiments of the application disclose a placement detection method and device for an unmanned forklift, an unmanned forklift, and a storage medium, which improve the accuracy with which an unmanned forklift detects shelf beams, obstacles, and the like, so that the placement position, including the placement height, can be determined accurately, thereby helping to improve the safety and reliability of goods transport and placement by the unmanned forklift.
A first aspect of the embodiments of the application discloses an unmanned forklift placement detection method, applied to an unmanned forklift and comprising the following steps:
acquiring multiple frames of initial point cloud data corresponding to a target placement position;
merging the multiple frames of initial point cloud data to obtain a merged point cloud set;
obtaining a projection image of the merged point cloud set on a reference plane, and separating a beam point cloud set from the merged point cloud set according to the projection image, wherein the normal vector of the reference plane is parallel to the ground and the beam point cloud set is used to determine the shelf beam corresponding to the target placement position;
determining, based on the beam point cloud set, obstacle position information corresponding to an obstacle at the target placement position; and
determining, according to the beam point cloud set and the obstacle position information, placement pose information corresponding to the target placement position, the placement pose information being used to instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to the placement pose information.
A second aspect of the embodiments of the application discloses an unmanned forklift placement detection device, applied to an unmanned forklift and comprising:
a point cloud data acquisition unit, configured to acquire multiple frames of initial point cloud data corresponding to a target placement position;
a multi-frame merging unit, configured to merge the multiple frames of initial point cloud data to obtain a merged point cloud set;
a projection image acquisition unit, configured to obtain a projection image of the merged point cloud set on a reference plane and separate a beam point cloud set from the merged point cloud set according to the projection image, wherein the normal vector of the reference plane is parallel to the ground and the beam point cloud set is used to determine the shelf beam corresponding to the target placement position;
an obstacle information determining unit, configured to determine, based on the beam point cloud set, obstacle position information corresponding to an obstacle at the target placement position; and
a pose information determining unit, configured to determine, according to the beam point cloud set and the obstacle position information, placement pose information corresponding to the target placement position, the placement pose information being used to instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to the placement pose information.
A third aspect of the embodiments of the application discloses an unmanned forklift, comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to execute all or part of the steps of any placement detection method disclosed in the first aspect of the embodiments of the application.
A fourth aspect of the embodiments of the application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute all or part of the steps of any placement detection method disclosed in the first aspect of the embodiments of the application.
Compared with the related art, the embodiments of the application have the following beneficial effects:
According to the embodiments of the application, an unmanned forklift applying the placement detection method can acquire multiple frames of initial point cloud data corresponding to the target placement position and merge them to obtain a merged point cloud set. The unmanned forklift can obtain the projection image of the merged point cloud set on a reference plane and separate the beam point cloud set from the merged point cloud set according to the projection image; the normal vector of the reference plane can be parallel to the ground, and the beam point cloud set can be used to determine the shelf beam corresponding to the target placement position. On this basis, the unmanned forklift can determine, from the beam point cloud set, the obstacle position information corresponding to the obstacle at the target placement position, and then determine, from the beam point cloud set and the obstacle position information, the placement pose information corresponding to the target placement position, which can instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to it. Thus, by implementing the embodiments of the application, the multiple frames of point cloud data collected for the target placement position can be merged in a warehouse logistics scenario, improving the accuracy with which the unmanned forklift detects the shelf beam, obstacles, and the like nearby. Meanwhile, based on the mutual conversion between point cloud data and the projection image, the position and attitude information of the shelf beam, obstacles, and so on can be computed quickly, and the position where the carried goods can be placed determined reasonably. Whereas in the related art an unmanned forklift has difficulty accurately detecting and identifying shelves, especially high-level shelves, the placement detection method of the embodiments of the application effectively improves the accuracy of placement detection on shelves compared with conventional schemes, so that the placement position, including the placement height, can be determined accurately, the risk of the goods colliding or even falling during placement is avoided, and the safety and reliability of goods transport and placement by the unmanned forklift are improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the application more clearly, the drawings required by the embodiments are briefly described below. The drawings show only some embodiments of the application; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an application scenario of an unmanned forklift disclosed in an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for detecting the delivery of an unmanned forklift according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a projection of a point cloud onto a reference plane to obtain a projected image in accordance with an embodiment of the present application;
FIG. 4 is a schematic flow chart of another method for detecting the delivery of an unmanned forklift according to an embodiment of the present application;
FIG. 5A is a schematic diagram showing the effect of morphological image processing according to an embodiment of the present application;
FIG. 5B is a schematic diagram showing the effect of another morphological image processing according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of another method for detecting the delivery of an unmanned forklift according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a beam point cloud and an obstacle point cloud as disclosed in an embodiment of the application;
FIG. 8 is a schematic diagram of a modular automated fork truck delivery detection apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a modular unmanned forklift according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
It should be noted that the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, product, or device comprising a list of steps or units is not necessarily limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to it.
The embodiments of the application disclose a placement detection method and device for an unmanned forklift, an unmanned forklift, and a storage medium, which improve the accuracy with which an unmanned forklift detects shelf beams, obstacles, and the like, so that the placement position, including the placement height, can be determined accurately, thereby helping to improve the safety and reliability of goods transport and placement by the unmanned forklift.
The embodiments are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of an unmanned forklift placement detection method according to an embodiment of the application, which may include an unmanned forklift 10 (also called an automated transfer robot or an Automated Guided Vehicle, AGV) and a shelf 20. In a warehouse logistics scenario, the unmanned forklift 10 can collect point cloud data to detect and identify the shelf 20 near the target placement position (that is, the position for placing the goods carried by the unmanned forklift 10, which can be determined according to the specific task of the unmanned forklift 10 in the scenario), so as to determine the position and attitude information corresponding to the beams and other obstacles on the shelf 20, and further determine the placement pose information required for placing the goods at the target placement position, thereby accurately realizing automatic transport and placement of the goods.
In some embodiments, as shown in fig. 1, the unmanned forklift 10 may include a perception module comprising a sensing element 11 and a processing element (not specifically shown). The sensing element 11 may include various types of sensors, such as a 3D lidar, for collecting point cloud data near the unmanned forklift 10, especially in the space in front of it, so as to acquire at least one frame of initial point cloud data for the target placement position. It can be understood that the point cloud data collected by a 3D lidar sensor is three-dimensional. The sensing element 11 may be arranged at the midpoint between the roots of the fork arms of the unmanned forklift 10 (for example, on the vehicle body or on a fork arm; the former is shown in fig. 1), or at other positions according to the actual situation (for example, adjusted to the shape of the forklift and the storage environment); the embodiment of the application does not specifically limit this. The processing element may process the initial point cloud data corresponding to the target placement position to determine the corresponding placement pose information.
The unmanned forklift 10 shown in fig. 1 is a vehicle, but this is only an example. In other embodiments, the unmanned forklift 10 may take other forms, such as a tracked robot or a non-vehicular trackless robot; the embodiment of the application does not specifically limit this. The perception module mounted on the unmanned forklift 10 may include various devices or systems containing the sensing element 11 and the processing element, such as an in-vehicle unit, a computer, or a SoC (System-on-a-Chip)-based point cloud scanning and processing system connected to sensors such as a 3D lidar; the embodiment of the application does not specifically limit this either.
It should be noted that the shelf 20 may have a single layer or multiple layers (as shown in fig. 1), each layer having a beam for carrying and stacking goods. Before the unmanned forklift 10 places its carried goods at the target placement position, other obstacles (such as other goods or shelf structures) may already be present on the shelf 20; these need to be detected and identified by the unmanned forklift 10 so that the suitable placement position and the corresponding placement pose information can be determined.
In the related art, conventional unmanned forklift transport relies largely on 2D lidar, ToF cameras, and similar detection devices, which often have difficulty accurately detecting and identifying the beams, obstacles, and so on of a shelf 20, especially a high-level shelf (for example, one 11 meters or taller); the insufficient detection accuracy and the inability to determine height information easily put the goods at risk of colliding or falling during placement. In the embodiment of the application, in order to place the goods carried by the unmanned forklift 10 at the target placement position and overcome this difficulty, the beams, obstacles, and so on of the shelf 20 can be located by merging multiple frames of point cloud data collected for the target placement position.
Illustratively, the unmanned forklift 10 may acquire multiple frames of initial point cloud data corresponding to the target placement position and merge them to obtain a merged point cloud set. Further, the unmanned forklift 10 may obtain the projection image of the merged point cloud set on a reference plane and separate the beam point cloud set from the merged point cloud set according to the projection image. The normal vector of the reference plane may be parallel to the ground, and the beam point cloud set may be used to determine the beam on the shelf 20 corresponding to the target placement position. On this basis, the unmanned forklift 10 may determine, from the beam point cloud set, the obstacle position information corresponding to the obstacle at the target placement position, and then determine, from the beam point cloud set and the obstacle position information, the placement pose information corresponding to the target placement position, which may be used to instruct the unmanned forklift 10 to place the carried goods at the target placement position such that the goods conform to it.
It can be seen that implementing the placement detection method of this embodiment improves, by merging multiple frames of point cloud data in a warehouse logistics scenario, the accuracy with which the unmanned forklift 10 detects the shelf beam, obstacles, and the like near the target placement position. Meanwhile, based on the mutual conversion between point cloud data and the projection image, their position and attitude information can be computed quickly, and the placeable position of the goods carried by the unmanned forklift 10 determined reasonably. Compared with conventional placement detection schemes, the method effectively improves the accuracy of placement detection by the unmanned forklift 10 on shelves 20 (including, in particular, high-level shelves), so that the placement position, including the placement height, can be determined accurately, the risk of the goods colliding or even falling during placement is avoided, and the safety and reliability of goods transport and placement by the unmanned forklift 10 are improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of an unmanned forklift placement detection method according to an embodiment of the application, which can be applied to an unmanned forklift. As shown in fig. 2, the method may comprise the following steps:
202. Acquire multiple frames of initial point cloud data corresponding to the target placement position.
In the embodiment of the application, the unmanned forklift can collect point cloud data for the target placement position through an onboard perception module. The target placement position, that is, the position for placing the goods carried by the unmanned forklift (for example, a designated warehouse location, a designated shelf, or an idle shelf), can be determined according to the specific task of the unmanned forklift in the working scenario. Illustratively, the perception module may include a sensing element, such as a 3D lidar or an ultrasonic radar, for collecting multiple frames of initial point cloud data corresponding to the target placement position.
In some embodiments, the unmanned forklift can collect the point cloud data through the perception module after moving to a target detection position. The target detection position may be near the target placement position (for example, 0.5 m or 1 m in front of it); the embodiment of the application does not specifically limit this.
For example, the unmanned forklift may determine the corresponding target detection position in advance according to the target placement position of the carried goods. When the unmanned forklift moves to the target detection position, it can receive target pose information issued by its perception module and, according to that information, start the sensing element to collect the corresponding point cloud data. The target pose information may include the ideal coordinate position of the goods at the target placement position (for example, the three-axis coordinates x, y, z in a rectangular spatial coordinate system established from a specified reference origin) and the ideal attitude (for example, the rotation angles determined in that coordinate system, including pitch, yaw, and roll). Under the indication of the target pose information, the sensing element collects point cloud data toward the target detection position, yielding multiple frames of initial point cloud data corresponding to the target placement position.
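As a concrete illustration, the target pose information might be modeled as below; this is a minimal sketch, and the field names, axis conventions, and units are assumptions for illustration rather than anything specified by the application.

```python
from dataclasses import dataclass

@dataclass
class TargetPose:
    """Ideal pose of the goods at the target placement position.

    Field names, axis conventions, and units are illustrative assumptions.
    """
    x: float      # horizontal-axis coordinate, meters
    y: float      # longitudinal-axis coordinate, meters
    z: float      # vertical-axis coordinate, meters
    pitch: float  # rotation angle, radians
    yaw: float    # rotation angle, radians
    roll: float   # rotation angle, radians
```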
In other embodiments, the unmanned forklift can keep its sensing element on while carrying the goods toward the target placement position, continuously collecting point cloud data of the space in front of it. When it reaches the target detection position, the frames collected from that moment on can be used as the initial point cloud data for the subsequent step of determining the placement pose information corresponding to the target placement position.
Optionally, while acquiring the multiple frames of initial point cloud data, the unmanned forklift can preliminarily filter each frame collected by the perception module based on the target placement position to obtain the corresponding frame of initial point cloud data.
Illustratively, taking a designated shelf as the target placement position, the unmanned forklift may determine a region of interest, that is, an ROI (Region Of Interest), containing the shelf in each frame of point cloud data based on the target pose information. Specifically, it can extract the ROI of each frame according to the target pose information and a preset ROI range threshold, and use the point cloud data within the ROI as that frame's initial point cloud data.
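A minimal numpy sketch of this per-frame ROI extraction, assuming the ROI is an axis-aligned box centered on the target pose; the half-extent thresholds are illustrative values, not taken from the application.

```python
import numpy as np

def crop_roi(points: np.ndarray,
             target_xyz: np.ndarray,
             half_extent: np.ndarray) -> np.ndarray:
    """Keep the points of one frame that fall inside an axis-aligned ROI box.

    points: (N, 3) array of x, y, z coordinates for one frame.
    target_xyz: (3,) coordinates of the target placement position.
    half_extent: (3,) preset ROI range thresholds along x, y, z (assumed).
    """
    mask = np.all(np.abs(points - target_xyz) <= half_extent, axis=1)
    return points[mask]
```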
On this basis, once the multiple frames of initial point cloud data are acquired, the unmanned forklift can merge them in the subsequent steps, improving the accuracy of detecting and identifying the shelf beam, obstacles, and the like corresponding to the target placement position.
204. Merge the multiple frames of initial point cloud data to obtain a merged point cloud set.
In the embodiment of the application, after acquiring the multiple frames of initial point cloud data collected by the perception module, the unmanned forklift can obtain a higher-precision merged point cloud set through multi-frame merging. By determining the sets of matching point cloud data that correspond to one another across the frames, the frames can be superimposed, and the merged point cloud data retained in the merged point cloud set can be determined from the mutually corresponding points contained in each matching set.
In some embodiments, for each set of matching point cloud data, the unmanned forklift may compute the average coordinate position of the points it contains and use it as the coordinate position of the merged point corresponding to that set. The merged points of all matching sets then form the merged point cloud set.
In other embodiments, the unmanned forklift may instead cluster the points in each matching set and determine the merged point from the cluster center. Optionally, if outliers appear during clustering, they can be removed from the matching set, and the merged point determined from the remaining points. The merged points of all matching sets then form the corresponding merged point cloud set.
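A sketch of the averaging variant of the merge, assuming the cross-frame correspondences have already been established (the matching step itself is not shown):

```python
import numpy as np

def merge_matched_groups(groups: list) -> np.ndarray:
    """Merge matched point cloud data by averaging.

    groups: one (K, 3) array per set of matching point cloud data, holding the
    mutually corresponding points from K frames. The merged point of each set
    is the mean of its members; stacking them yields the merged point cloud set.
    """
    return np.stack([g.mean(axis=0) for g in groups])
```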
206. Obtain a projection image of the merged point cloud set on a reference plane, and separate a beam point cloud set from the merged point cloud set according to the projection image; the normal vector of the reference plane is parallel to the ground, and the beam point cloud set is used to determine the shelf beam corresponding to the target placement position.
In the embodiment of the application, to further identify the beam point cloud within the merged point cloud set, the merged point cloud can be projected onto a reference plane to obtain a corresponding projection image, and the beam point cloud determined quickly through cross-reference and conversion between the projection image and the merged point cloud set.
The reference plane may be any plane whose normal vector is parallel to the ground. Referring to fig. 3, fig. 3 is a schematic diagram of projecting a point cloud onto a reference plane to obtain a projection image according to an embodiment of the application. As shown in fig. 3, taking a rectangular spatial coordinate system established from a specified reference origin as an example, the X-Y plane (defined by the X and Y axes) is parallel to the ground, while the normal vectors of the X-Z plane (defined by the X and Z axes), the Y-Z plane (defined by the Y and Z axes), and similar planes are all parallel to the ground.
By way of example, taking the X-Z plane as the reference plane, projecting the merged point cloud set (shown as the dotted cube in fig. 3) onto it yields the corresponding projection image (shown as the quadrilateral shadow on the X-Z plane in fig. 3). In some embodiments, every point in the merged point cloud set has a corresponding projection pixel on the projection plane; in other embodiments, only screened points of the merged point cloud set are projected (for example, the gray value of the corresponding projection pixel is set to P, P ≠ 0), while unscreened points are not (for example, the gray value of the corresponding projection pixel is set to 0).
It should be noted that the merged point cloud set shown in fig. 3 is a cube only as a simplified example. In an actual working scenario, the merged point clouds of the shelf beam, obstacles, and so on near the target placement position can take many different forms, and different combinations may appear as the relative position of the unmanned forklift changes (for example, shelf gaps and crevices may be progressively detected as the forklift moves, further resolving the combined form of the shelf beam and obstacles); the embodiment of the application does not specifically limit this.
On this basis, the unmanned forklift can separate the beam point cloud set from the merged point cloud set according to the projection image. In some embodiments, by recognizing the projection image, the subset of the merged point cloud corresponding to the shelf beam can be determined and then separated out as the beam point cloud set. In other embodiments, the projection image can be traversed to find beam image regions that are continuous in the length direction and consistent with the cargo specification information (for example, the maximum length of the goods) and/or the shelf specification information (for example, the length of the shelf beam), and continuous in the height direction and consistent with the shelf specification information (for example, the thickness of the shelf beam); the point cloud data corresponding to these regions is then separated from the merged point cloud set to obtain the corresponding beam point cloud set.
208. Determine, based on the beam point cloud set, obstacle position information corresponding to an obstacle at the target placement position.
In the embodiment of the application, the unmanned forklift can determine, from the beam point cloud set, the position information of other obstacles placed on the shelf beam (for example, other goods or shelf supports), so that the placement pose information required for placing the goods can be determined in the subsequent steps.
In some embodiments, to determine the uppermost edge of the shelf beam corresponding to the target placement position (that is, the highest plane of the beam, which carries the goods and other obstacles), plane fitting can be performed on the beam point cloud to obtain the corresponding fitted beam planes. There may be several fitted beam planes; the uppermost edge can be determined from them so that obstacles on the shelf beam can be further detected.
Optionally, the plane fitting may be implemented with a RANSAC (RANdom SAmple Consensus) plane search algorithm: the points in the beam point cloud set are fitted based on the RANSAC algorithm, and the fitted beam planes are obtained by search.
On this basis, according to the attitude of a fitted beam plane in the rectangular spatial coordinate system (as shown in fig. 3), the yaw angle data (yaw) corresponding to that plane can be obtained. For example, by obtaining the normal vector of the fitted beam plane and computing the arctangent (arctan) of its angles with the x and y axes, the yaw angle data corresponding to the plane can be determined.
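The following sketch shows one way to realize this step with Open3D's RANSAC plane segmentation and to derive a yaw angle from the fitted normal; the distance threshold, iteration count, and the exact yaw convention are assumptions, not values from the application.

```python
import numpy as np
import open3d as o3d

def fit_beam_plane_yaw(beam_points: np.ndarray):
    """RANSAC-fit one beam plane and derive its yaw angle.

    beam_points: (N, 3) array from the beam point cloud set.
    Returns the coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0 and the
    yaw of the normal's ground-plane projection (convention assumed).
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(beam_points)
    plane, inliers = pcd.segment_plane(distance_threshold=0.01,  # assumed
                                       ransac_n=3,
                                       num_iterations=1000)      # assumed
    a, b, c, d = plane
    yaw = np.arctan2(b, a)  # angle of the normal's x-y projection to the x-axis
    return plane, yaw
```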
Based on the yaw angle data, a rotation-translation matrix corresponding to the target placement position can be established, and the unmanned forklift can rotate and translate the multiple frames of initial point cloud data according to it to obtain a corresponding transformed point cloud set.
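A sketch of building such a rotation-translation matrix from the yaw angle; the sign convention (rotating by -yaw to align the cloud with the beam direction) is an assumption. Applying the matrix to the homogeneous coordinates of each initial point yields the transformed point cloud set.

```python
import numpy as np

def yaw_to_transform(yaw: float, translation: np.ndarray) -> np.ndarray:
    """4x4 homogeneous transform: rotate by -yaw about the vertical axis,
    then translate (sign convention assumed)."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[:3, 3] = translation
    return T
```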
In some embodiments, the unmanned forklift may further take the highest edge points from the point cloud data on the fitted beam planes and perform straight-line fitting on them to obtain a corresponding fitted line equation. On this basis, an obstacle point cloud set can be extracted from the transformed point cloud set according to the beam point cloud set, the fitted line equation, and the cargo specification information.
For example, the unmanned forklift may delimit a target region of interest within the transformed point cloud set according to the beam point cloud set and the specification information of the carried goods. Specifically, starting from the highest fitted beam plane in the beam point cloud set (that is, the uppermost edge of the shelf beam corresponding to the target placement position), a target region of interest at least large enough to contain the goods can be delimited according to the goods' maximum length and width. The point cloud data of the transformed point cloud set falling within this region can then be taken as a regional point cloud set, which can be used to find obstacles near where the goods would be placed at the target placement position.
To reduce the amount of computation when determining obstacles, a three-dimensional voxel grid can be constructed over the regional point cloud set, and the set voxel-filtered through it: all the points (one or more) contained in each voxel are represented by their center of gravity, yielding a region-filtered point cloud set formed by the points at these centers of gravity.
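A numpy sketch of this centroid-per-voxel filtering (Open3D's voxel_down_sample performs the same reduction); the voxel size is an assumed tuning value.

```python
import numpy as np

def voxel_centroid_filter(points: np.ndarray, voxel: float = 0.05) -> np.ndarray:
    """Represent all points falling in one voxel by their center of gravity."""
    keys = np.floor(points / voxel).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)              # accumulate per-voxel sums
    counts = np.bincount(inverse, minlength=len(uniq))
    return sums / counts[:, None]                 # per-voxel centroids
```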
On this basis, according to the fitted line equation, the obstacle points above the beam point cloud are extracted from the region-filtered point cloud to form the corresponding obstacle point cloud. Optionally, noise filtering can further be applied to the obstacle points (for example, filtering based on a Euclidean clustering algorithm or the RANSAC algorithm), and an obstacle point cloud set whose length or width meets a specified threshold (for example, 5 cm or 10 cm) obtained from the denoised points.
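A sketch of extracting the points above the fitted top-edge line, assuming the line takes the form z = m*x + b in the transformed frame; both that form and the margin are assumptions.

```python
import numpy as np

def points_above_edge(points: np.ndarray, m: float, b: float,
                      margin: float = 0.02) -> np.ndarray:
    """Keep points lying above the fitted top-edge line z = m*x + b.

    The small margin rejects the beam points themselves, leaving candidate
    obstacle points for the later noise filtering.
    """
    return points[points[:, 2] > m * points[:, 0] + b + margin]
```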
In the embodiment of the application, after extracting the obstacle point cloud set from the transformed point cloud set, the unmanned forklift can determine from it the obstacle position information corresponding to the obstacle at the target placement position, so that the subsequent steps can avoid the obstacle and determine the placement pose information required to place the goods reasonably.
210. Determine, according to the beam point cloud set and the obstacle position information, placement pose information corresponding to the target placement position, the placement pose information being used to instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to the placement pose information.
Illustratively, the placement pose information may at least include the target coordinate position at which the goods can actually be placed at the target placement position (for example, a designated warehouse location, a designated shelf, or an idle shelf), such as the three-axis coordinates x, y, z in a rectangular spatial coordinate system established from a specified reference origin, and the target yaw angle data (for example, the yaw angle determined in that coordinate system).
In the embodiment of the application, after obtaining the beam point cloud set, the unmanned forklift can determine from the fitted beam planes the horizontal-axis coordinate x and vertical-axis coordinate z at which goods can be placed, and compute the target yaw angle data from the normal vectors of the fitted beam planes. Meanwhile, based on the obstacle position information, the longitudinal-axis coordinate y at which goods can be placed between the obstacles can also be determined.
On this basis, the unmanned forklift can determine the corresponding navigation control instructions and fork control instructions from the placement pose information, so as to place the carried goods, in response to those instructions, at a position as close as possible to the target placement position, with the final position and attitude of the goods conforming to the placement pose information. It can be understood that the unmanned forklift may keep executing steps 202 to 210 during placement until the goods are placed at the target placement position.
It can be seen that, by implementing the placement detection method described in the above embodiment, the multiple frames of point cloud data collected for the target placement position can be merged in a warehouse logistics scenario, improving the accuracy with which the unmanned forklift detects the shelf beam, obstacles, and the like nearby. Meanwhile, based on the mutual conversion between point cloud data and the projection image, their position and attitude information can be computed quickly, and the position where the carried goods can be placed determined reasonably. This effectively improves the accuracy of placement detection on shelves, especially high-level shelves, so that the placement position, including the placement height, can be determined accurately, the risk of the goods colliding or even falling during placement is avoided, and the safety and reliability of goods transport and placement by the unmanned forklift are improved.
Referring to fig. 4, fig. 4 is a schematic flow chart of another unmanned forklift placement detection method according to an embodiment of the application, which can be applied to an unmanned forklift. As shown in fig. 4, the method may comprise the following steps:
402. Acquire multiple frames of initial point cloud data corresponding to the target placement position.
404. Merge the multiple frames of initial point cloud data to obtain a merged point cloud set.
Steps 402 and 404 are similar to steps 202 and 204 above and are not repeated here.
406. Based on a plurality of grids divided on the reference plane, extract from the merged point cloud set the grid representative points corresponding to each grid.
In the embodiment of the application, before projecting the merged point cloud set onto the reference plane, a series of reduction and filtering operations can first be applied to it, so as to cut the number of points it contains and minimize the computation required for the subsequent conversion between point cloud data and the projection image.
In some embodiments, the unmanned forklift may delimit an area within a certain distance threshold of the target placement position (for example, 0.5 m or 0.6 m) and remove the points outside that area from the merged point cloud set, obtaining a pruned merged point cloud set.
On this basis, the reference plane can be divided into a plurality of grids, and the grid representative point of each grid extracted from the merged point cloud set. Illustratively, as shown in fig. 3, if the X-Z plane is the reference plane, it can be gridded. When the merged point cloud set is projected onto the X-Z plane: if a grid cell corresponds to a single point, that point serves as the cell's representative point; if a cell corresponds to several points, one of them can be selected as the representative (for example, the point with the largest horizontal-axis coordinate x, the point with the largest vertical-axis coordinate z, or the point closest to the cell center); if a cell corresponds to no point, it has no representative point.
By extracting the grid representative points of each cell from the merged point cloud set, the number of points can be controlled precisely through the grid scale, further reducing the data volume of the merged point cloud set.
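A numpy sketch of the representative-point extraction, using the "closest to the cell center" rule mentioned above; the cell size is an assumed value.

```python
import numpy as np

def grid_representatives(points: np.ndarray, cell: float = 0.02) -> np.ndarray:
    """One representative point per occupied X-Z grid cell."""
    ij = np.floor(points[:, [0, 2]] / cell).astype(np.int64)
    centers = (ij + 0.5) * cell
    dist = np.linalg.norm(points[:, [0, 2]] - centers, axis=1)
    order = np.lexsort((dist, ij[:, 1], ij[:, 0]))  # sort by cell, then distance
    ij_sorted = ij[order]
    keep = np.ones(len(points), dtype=bool)
    keep[1:] = np.any(ij_sorted[1:] != ij_sorted[:-1], axis=1)  # first per cell
    return points[order][keep]
```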
408. Binarize the projection pixels of the grid representative points on the reference plane, and form the projection image of the merged point cloud set on the reference plane from the binarized projection pixels.
In the embodiment of the application, after extracting the grid representative points, the unmanned forklift can project them onto the reference plane. For example, as shown in fig. 3, taking the X-Z plane as the reference plane, if the grid representative point A0 of the merged point cloud set projects to the pixel A1 on the reference plane, A1 can be binarized by setting its gray value to 255 (taking an 8-bit depth as an example).
It can be understood that if points B0 and B0' in the merged point cloud set correspond to the same grid cell (for example, they project to the same position on the X-Z plane, or to different positions within the same cell), the cell's representative point may be either B0 or B0'. Taking B0 as the representative, if its projection pixel on the reference plane is B1, then B1 is binarized; once B1 is part of the projection image, it corresponds to both B0 and B0' in the merged point cloud set.
On this basis, the binarized projection pixels form the projection image of the merged point cloud set on the reference plane, which is then used to determine the beam point cloud set within the merged point cloud set.
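Continuing the sketch, the binarized projection image can be rasterized from the representative points as follows (8-bit depth, gray value 255 for occupied cells, as in the text); mapping rows to x and columns to z is an arbitrary choice.

```python
import numpy as np

def rasterize(reps: np.ndarray, cell: float = 0.02) -> np.ndarray:
    """Binary projection image of the grid representative points on the X-Z plane."""
    ij = np.floor(reps[:, [0, 2]] / cell).astype(np.int64)
    ij -= ij.min(axis=0)                 # shift indices to start at zero
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 255        # occupied cell -> white pixel
    return img
```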
410. Perform morphological image processing on the projection image to obtain an image to be traversed.
Illustratively, the morphological image processing may include image dilation, image erosion, and the like. Referring to fig. 5A, fig. 5A is a schematic diagram of the effect of morphological image processing according to an embodiment of the application: the left side shows the effect of dilating the projection image (hollow squares are original pixels, solid squares are pixels added by dilation), and the right side shows the effect of eroding it (crossed squares are pixels removed by erosion).
In the embodiment of the application, the unmanned forklift can apply a dilation-erosion operation to the projection image, that is, first dilate the image and then erode the dilated result, as shown in fig. 5B (diagonal squares are original pixels, hollow squares are gaps with no pixel, solid squares are pixels added by dilation, and crossed squares are pixels removed by erosion). In this way, fine gaps in the projection image (such as small hollow holes) can be closed, and the merged point cloud set cleaned accordingly: potentially interfering points are removed and potentially missing points are completed (for example, small black dots or holes can appear when a sensor such as a 3D lidar scans non-uniformly). This overcomes potential defects of such sensors during scanning and helps the subsequent shelf beam detection to be more accurate.
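With OpenCV, the dilation-then-erosion described above is a morphological close; a minimal sketch, with an assumed 3x3 kernel:

```python
import cv2
import numpy as np

def close_projection(img: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Dilate then erode the binary projection image to seal small holes."""
    kernel = np.ones((ksize, ksize), np.uint8)
    # MORPH_CLOSE = dilation followed by erosion, i.e.
    # cv2.erode(cv2.dilate(img, kernel), kernel)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
```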
412. Traverse the image to be traversed along the target direction, determine the beam image region consistent with the beam specification information and the cargo specification information, and separate the point cloud data corresponding to the projection pixels contained in the beam image region from the merged point cloud set as the beam point cloud set.
In the embodiment of the application, the image to be traversed, obtained by morphologically processing the projection image, can be further traversed to determine the beam image region corresponding to the shelf beam. Illustratively, the unmanned forklift traverses the image along a target direction, which may be top-down or bottom-up; the embodiment of the application does not specifically limit this.
For example, the unmanned forklift may traverse the image top-down to find regions that are continuous in the length direction and consistent with the cargo specification information (for example, the maximum length of the goods) and/or the shelf specification information (for example, the length of the shelf beam), and continuous in the height direction and consistent with the shelf specification information (for example, the thickness of the shelf beam). The grid representative points corresponding to the projection pixels contained in such a beam image region can then be determined and separated from the merged point cloud set, yielding the corresponding beam point cloud set.
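A sketch of the top-down traversal, assuming image rows run along the beam's length direction and that the specification limits have already been converted to pixel counts:

```python
import numpy as np

def find_beam_rows(img: np.ndarray, min_run: int, min_rows: int) -> list:
    """Rows belonging to candidate beam image regions.

    min_run: required continuous horizontal extent in pixels (from the cargo
    or shelf length specification); min_rows: required vertical extent in
    pixels (from the beam thickness specification). Both conversions are assumed.
    """
    def longest_run(row):
        best = cur = 0
        for v in row:
            cur = cur + 1 if v else 0
            best = max(best, cur)
        return best

    hits = [r for r in range(img.shape[0]) if longest_run(img[r] > 0) >= min_run]
    rows, start = [], None
    for i, r in enumerate(hits):
        if start is None:
            start = r
        if i + 1 == len(hits) or hits[i + 1] != r + 1:   # group of rows closes
            if r - start + 1 >= min_rows:
                rows.extend(range(start, r + 1))
            start = None
    return rows
```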
414. Determine, based on the beam point cloud set, obstacle position information corresponding to an obstacle at the target placement position.
416. Determine, according to the beam point cloud set and the obstacle position information, placement pose information corresponding to the target placement position, the placement pose information being used to instruct the unmanned forklift to place the carried goods at the target placement position such that the goods conform to the placement pose information.
Steps 414 and 416 are similar to steps 208 and 210 above and are not repeated here.
It can be seen that, by implementing the placement detection method described in this embodiment, merging the collected multi-frame point cloud data in a warehouse logistics scenario improves the accuracy with which the unmanned forklift detects the shelf beam, obstacles, and the like near the target placement position; meanwhile, based on the mutual conversion between point cloud data and the projection image, their position and attitude information can be computed quickly and the placeable position of the carried goods determined reasonably, improving the accuracy of placement detection on shelves, especially high-level shelves, and with it the safety and reliability of goods transport and placement by the unmanned forklift. In addition, projecting the merged point cloud set onto the reference plane effectively reduces the computation needed to determine the beam image region, so the corresponding beam point cloud set can be found quickly, helping the unmanned forklift compute the position and attitude of the shelf beam, obstacles, and the like, and determine the required placement pose information more efficiently.
Referring to fig. 6, fig. 6 is a schematic flow chart of another method for detecting a put of an unmanned forklift according to an embodiment of the present application, and the method can be applied to the unmanned forklift. As shown in fig. 6, the method for detecting the goods put by the unmanned forklift can comprise the following steps:
602. and acquiring multi-frame initial point cloud data corresponding to the target goods placing position.
604. And merging the multi-frame initial point cloud data to obtain a merging point cloud set.
Step 602 and step 604 are similar to step 202 and step 204 described above, and are not repeated here.
606. Based on a plurality of grids divided on a reference plane, grid representative points corresponding to the respective grids are respectively extracted from the merging point cloud set.
608. And respectively carrying out binarization processing on the projection pixel points of each grid representative point on the reference plane, and forming a projection image corresponding to the merging point cloud set on the reference plane according to the projection pixel points subjected to the binarization processing.
Step 606 and step 608 are similar to step 406 and step 408, and are not repeated here.
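Although the details are not repeated here, a minimal sketch of the grid projection and binarization described by steps 606 and 608 is given below. It assumes the merged cloud is an (N, 3) numpy array in a frame where the reference plane spans the lateral (y) and height (z) axes, and cell_size is a hypothetical grid resolution; the real pipeline's frame conventions may differ.

import numpy as np

def project_to_binary_image(points: np.ndarray, cell_size: float = 0.02):
    """Divide the reference plane into grids, keep one representative
    point per occupied grid, and binarize the occupancy into a projection
    image. `points` is (N, 3): x depth (collapsed by the projection),
    y lateral position, z height."""
    yz = points[:, 1:3]
    cells = np.floor((yz - yz.min(axis=0)) / cell_size).astype(int)
    # the first point that falls into each grid acts as its representative
    _, rep_idx = np.unique(cells, axis=0, return_index=True)
    height = cells[:, 1].max() + 1
    width = cells[:, 0].max() + 1
    img = np.zeros((height, width), dtype=np.uint8)
    img[cells[rep_idx, 1], cells[rep_idx, 0]] = 255   # binarized occupancy
    # flip vertically so that row 0 corresponds to the largest height
    return img[::-1], points[rep_idx]

On the resulting image, the morphological processing of step 610 could, for example, be a closing operation that fills small gaps in the beam area before the traversal of step 612.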
610. And carrying out morphological image processing on the projection image to obtain an image to be traversed.
612. Traversing the image to be traversed from the target direction, determining a beam image area which accords with the beam specification information and the cargo specification information, and separating point cloud data corresponding to projection pixel points contained in the beam image area from the merging point cloud set to serve as a beam point cloud set.
Step 610 and step 612 are similar to step 410 and step 412 described above, and are not repeated here.
614. And determining obstacle position information corresponding to the obstacle on the target goods placing position based on the cross beam point cloud set.
Step 614 is similar to step 414, and will not be described herein.
616. And carrying out plane fitting on the cross beam point cloud set to obtain a fitted cross beam plane, and determining the horizontal axis coordinate position, the vertical axis coordinate position and the target yaw angle data according to the fitted cross beam plane.
In the embodiment of the present application, the above-mentioned goods placing posture information may at least include a specific placeable target coordinate position (for example, three-axis coordinates x, y, z in a spatial rectangular coordinate system established based on a specified reference origin) and target yaw angle data (for example, a yaw angle yaw determined based on the above spatial rectangular coordinate system) for placing the goods at the target goods placing position (for example, a specified warehouse location, or a specified or idle shelf location).
For example, after the beam point cloud set is determined, the unmanned forklift can determine, from each fitted beam plane obtained from the beam point cloud set, the horizontal axis coordinate position x where goods can be placed on the highest plane of the shelf beam and the vertical axis coordinate position z corresponding to that highest plane. Meanwhile, based on the plane normal vector corresponding to the fitted beam plane, the target yaw angle data yaw for placing the goods can be further calculated.
In some embodiments, the unmanned forklift may calibrate the yaw angle data yaw according to a fitted straight line equation obtained by straight-line fitting of the point cloud data on the highest plane. For example, according to the fitted straight line equation, the rotation angle that best matches the goods carried by the unmanned forklift at the target goods placing position may be determined, and necessary fine adjustment (for example, adaptive adjustment according to the cargo specification information) may then be performed on the basis of this rotation angle to obtain the calibrated yaw angle data yaw, which is used as part of the goods placing posture information corresponding to the target goods placing position.
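A minimal sketch of the plane fitting, yaw computation and line-based calibration described above, assuming the beam point cloud set is an (N, 3) numpy array with x as depth, y lateral and z vertical; the sign conventions and the 90-degree offset between the beam direction and the goods' yaw are assumptions, not taken from the source.

import numpy as np

def fit_beam_plane(beam_points: np.ndarray):
    """Least-squares plane fit: returns the centroid and the unit normal
    (the singular vector with the smallest singular value)."""
    centroid = beam_points.mean(axis=0)
    _, _, vt = np.linalg.svd(beam_points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def yaw_from_normal(normal: np.ndarray) -> float:
    """Heading of the plane normal projected onto the ground plane."""
    return float(np.arctan2(normal[1], normal[0]))

def placeable_xz(beam_points: np.ndarray):
    """x where goods can rest on the beam, z of the beam's highest plane."""
    return float(beam_points[:, 0].mean()), float(beam_points[:, 2].max())

def calibrate_yaw(edge_points: np.ndarray) -> float:
    """Refine yaw with a straight line y = k*x + b fitted to the highest
    edge points; the goods' heading is taken normal to the beam line."""
    k, _b = np.polyfit(edge_points[:, 0], edge_points[:, 1], deg=1)
    return float(np.arctan(k) + np.pi / 2.0)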
618. And when the obstacle position information accords with the goods placing space condition corresponding to the cargo specification information, determining the longitudinal axis coordinate position according to the relative position relation between the obstacle position information and the target goods placing position.
By way of example, taking the target goods placing position as the center, the edges of the obstacles on both sides of the target goods placing position can be determined, and then whether the corresponding obstacle position information conforms to the goods placing space condition corresponding to the cargo specification information can be judged. As shown in fig. 7, the obstacle 21a and the obstacle 21b may be stacked on the pallet 20, and whether the goods can be placed at the target goods placing position between the obstacle 21a and the obstacle 21b can be confirmed by judging whether the obstacle position information corresponding to the obstacle 21a and the obstacle 21b conforms to the specified goods placing space condition.
In some embodiments, if there are obstacles on both sides of the target goods placing position (as shown in fig. 7) and the obstacle position information indicates that there is enough space for placing the goods, the midpoint between the longitudinal axis coordinate positions of the edges of the obstacles on both sides can be used as the longitudinal axis coordinate position y where the goods can be placed; if the obstacle position information indicates that the space is insufficient, the unmanned forklift does not place the goods.
In other embodiments, if an obstacle exists on only one side of the target goods placing position and the obstacle position information indicates that there is enough space for placing the goods, a specified safety offset (for example, 3 cm, 5 cm, etc.) may be added to the longitudinal axis coordinate position of the edge of the obstacle to obtain the longitudinal axis coordinate position y where the goods can be placed; if the obstacle position information indicates that the space is insufficient, the unmanned forklift does not place the goods.
In still other embodiments, if no obstacle exists on either side of the target goods placing position, the longitudinal axis coordinate position y where the goods can be placed may be set to 0, indicating that there is no constraint on the longitudinal axis coordinate position y.
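The three cases above can be condensed into one hypothetical helper; left_edge, right_edge, enough_space and safety_offset are illustrative names, and the midpoint, offset and zero conventions follow the text.

def placeable_y(left_edge=None, right_edge=None,
                enough_space=True, safety_offset=0.05):
    """Longitudinal axis coordinate y for the goods, following the three
    cases above; returns None when the forklift should not place goods."""
    if not enough_space:
        return None                      # insufficient goods placing space
    if left_edge is not None and right_edge is not None:
        return 0.5 * (left_edge + right_edge)   # midpoint between edges
    if left_edge is not None:
        return left_edge + safety_offset        # edge plus safety offset
    if right_edge is not None:
        return right_edge - safety_offset
    return 0.0   # no obstacle on either side: y is unconstrained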
Step 618 may be performed immediately after step 614, and steps 614 and 618 may be performed in parallel with step 616, so as to determine, respectively, the horizontal axis coordinate position x, the longitudinal axis coordinate position y and the vertical axis coordinate position z where the goods can be placed, together with the corresponding target yaw angle data yaw, which helps improve the efficiency with which the unmanned forklift determines the goods placing posture information.
Therefore, by implementing the unmanned forklift goods placing detection method described in this embodiment, in a warehouse logistics working scenario, the collected multi-frame point cloud data are merged, which improves the accuracy with which the unmanned forklift detects the shelf cross beams, obstacles and the like near the target goods placing position. Meanwhile, based on the mutual conversion between the point cloud data and the projection image, the position and posture information corresponding to the shelf cross beam, the obstacle and the like can be calculated quickly, and the placeable position of the goods carried by the unmanned forklift can be determined reasonably, so that the accuracy of goods placing detection on shelves, particularly high-level shelves, is improved, as are the safety and reliability of goods transportation and placement by the unmanned forklift. In addition, by separately determining the specific placeable coordinate position and yaw angle data used when placing the goods at the target goods placing position, the unmanned forklift is helped to generate suitable navigation control instructions and fork control instructions, which can further improve the accuracy with which the unmanned forklift automatically transports and places goods.
Referring to fig. 8, fig. 8 is a modular schematic diagram of an unmanned forklift goods placing detection device according to an embodiment of the present application; the unmanned forklift goods placing detection device may be applied to the above unmanned forklift, and may particularly include the above sensing module. As shown in fig. 8, the unmanned forklift goods placing detection device may include a point cloud data obtaining unit 801, a multi-frame merging unit 802, a projection image obtaining unit 803, an obstacle information determining unit 804, and a posture information determining unit 805, wherein:
A point cloud data obtaining unit 801, configured to obtain multi-frame initial point cloud data corresponding to a target delivery position;
a multi-frame merging unit 802, configured to merge multi-frame initial point cloud data to obtain a merging point cloud set;
a projection image obtaining unit 803, configured to obtain a projection image corresponding to the merging point cloud set on the reference plane, and separate the beam point cloud set from the merging point cloud set according to the projection image; the normal vector of the reference plane is parallel to the ground, and the beam point cloud set is used for determining a shelf beam corresponding to the target goods placing position;
an obstacle information determining unit 804, configured to determine obstacle position information corresponding to an obstacle at a target cargo position based on the beam point cloud set;
and a posture information determining unit 805 configured to determine, according to the cross beam point cloud set and the obstacle position information, a put posture information corresponding to the target put position, where the put posture information is used to instruct the unmanned forklift to place the carried cargo at the target put position so that the cargo conforms to the put posture information.
Therefore, the unmanned forklift put detection device described in the embodiment can combine the multi-frame point cloud data collected for the target put position in the working scene of the warehouse logistics, so that the accuracy of detecting the shelf cross beam, the obstacle and the like near the target put position by the unmanned forklift is improved. Meanwhile, based on the mutual conversion of the point cloud data and the projection image, the position and the posture information corresponding to the shelf cross beam, the obstacle and the like can be rapidly calculated, and then the position where the goods carried by the unmanned forklift can be placed can be reasonably determined. Compared with the traditional goods placing detection scheme, the goods placing detection method can effectively improve the accuracy of goods placing detection of the unmanned forklift on the goods shelf, particularly the high-position goods shelf, so that the goods placing position including the goods placing height can be accurately determined, the risks of goods collision and even falling in the goods placing process are avoided, and the safety and the reliability of goods transportation and placement of the unmanned forklift are improved.
In one embodiment, the above-mentioned projection image obtaining unit 803 may specifically be configured to:
based on a plurality of grids divided on a reference plane, respectively extracting grid representative points corresponding to the grids from the merging point cloud set;
and respectively carrying out binarization processing on the projection pixel points of each grid representative point on the reference plane, and forming a projection image corresponding to the merging point cloud set on the reference plane according to the projection pixel points subjected to the binarization processing.
On this basis, when separating the cross beam point cloud set from the merging point cloud set according to the projection image, the projection image obtaining unit 803 may specifically be configured to:
carrying out morphological image processing on the projection image to obtain an image to be traversed;
traversing the image to be traversed from the target direction, determining a beam image area which accords with the beam specification information and the cargo specification information, and separating point cloud data corresponding to projection pixel points contained in the beam image area from the merging point cloud set to serve as a beam point cloud set.
In one embodiment, the obstacle information determining unit 804 may specifically be configured to:
Performing plane fitting on the beam point cloud set to obtain a fitted beam plane, and acquiring yaw angle data corresponding to the fitted beam plane;
based on yaw angle data, establishing a rotation translation matrix corresponding to a target goods placing position, and carrying out rotation translation on multi-frame initial point cloud data according to the rotation translation matrix to obtain a transfer point cloud set;
acquiring highest edge point cloud data from point cloud data corresponding to a fitting beam plane, and performing straight line fitting on the edge point cloud data to obtain a fitting straight line equation;
extracting obstacle point clouds from the transfer point clouds according to the cross beam point clouds, the fitting linear equation and the cargo specification information;
and determining obstacle position information corresponding to the obstacle on the target goods placing position according to the obstacle point cloud set.
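A minimal sketch of the rotation-translation step in this pipeline, assuming the rotation is about the vertical axis by the yaw angle obtained from the fitted beam plane; the sign of the rotation and the homogeneous-coordinate layout are assumptions for illustration.

import numpy as np

def rototranslate(points: np.ndarray, yaw: float,
                  translation=(0.0, 0.0, 0.0)) -> np.ndarray:
    """Build a 4x4 rotation-translation matrix from the yaw angle
    (rotation about the vertical axis) and apply it to an (N, 3) cloud."""
    c, s = np.cos(-yaw), np.sin(-yaw)   # -yaw aligns the beam with the axes
    T = np.array([[c,  -s,  0.0, translation[0]],
                  [s,   c,  0.0, translation[1]],
                  [0.0, 0.0, 1.0, translation[2]],
                  [0.0, 0.0, 0.0, 1.0]])
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]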
In one embodiment, when extracting the obstacle point cloud set from the transfer point cloud set according to the cross beam point cloud set, the fitting straight line equation and the cargo specification information, the obstacle information determining unit 804 may specifically be configured to:
according to the cross beam point cloud set and the cargo specification information, a target region of interest is defined for the transfer point cloud set, and point cloud data in the target region of interest in the transfer point cloud set is determined to be the region point cloud set;
Constructing a three-dimensional voxel grid based on the regional point cloud set, and carrying out voxel filtering on the regional point cloud set through the three-dimensional voxel grid to obtain a regional filtering point cloud set;
extracting obstacle point cloud data above the cross beam point cloud from the regional filtering point cloud according to the fitting linear equation;
and carrying out noise filtering on the obstacle point cloud data, and obtaining an obstacle point cloud set according to the obstacle point cloud data after noise filtering.
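The ROI cropping, voxel filtering and above-beam extraction listed here can be sketched as follows; the axis-aligned ROI, the voxel size and the line model z = k*y + b are assumptions for illustration. The final noise filtering could, for example, be a statistical outlier removal step, which is omitted here.

import numpy as np

def crop_roi(points: np.ndarray, lo, hi) -> np.ndarray:
    """Keep only points inside the axis-aligned target region of interest
    delimited by the beam position and the cargo dimensions."""
    mask = np.all((points >= np.asarray(lo)) & (points <= np.asarray(hi)),
                  axis=1)
    return points[mask]

def voxel_filter(points: np.ndarray, voxel: float = 0.03) -> np.ndarray:
    """Voxel filtering: keep one point per occupied cell of a 3-D grid."""
    cells = np.floor(points / voxel).astype(int)
    _, idx = np.unique(cells, axis=0, return_index=True)
    return points[idx]

def above_beam(points: np.ndarray, k: float, b: float,
               margin: float = 0.0) -> np.ndarray:
    """Keep points above the fitted line z = k*y + b modelling the top
    edge of the shelf beam; these are the obstacle candidates."""
    return points[points[:, 2] > k * points[:, 1] + b + margin]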
In one embodiment, the posture information determining unit 805 may be further configured to:
and calibrating the yaw angle data according to the fitting straight line equation to obtain calibrated yaw angle data, wherein the calibrated yaw angle data can be used for determining the goods placing posture information corresponding to the target goods placing position.
Illustratively, the goods placing posture information may include at least a target coordinate position and target yaw angle data, where the target coordinate position may include a horizontal axis coordinate position, a longitudinal axis coordinate position and a vertical axis coordinate position, and the posture information determining unit 805 may specifically be configured to:
performing plane fitting on the cross beam point cloud set to obtain a fitted cross beam plane, and determining the horizontal axis coordinate position, the vertical axis coordinate position and the target yaw angle data according to the fitted cross beam plane;
and when the obstacle position information accords with the goods placing space condition corresponding to the cargo specification information, determining the longitudinal axis coordinate position according to the relative position relation between the obstacle position information and the target goods placing position.
Therefore, by implementing the unmanned forklift goods placing detection device described in this embodiment, in a warehouse logistics working scenario, the collected multi-frame point cloud data are merged, which improves the accuracy with which the unmanned forklift detects the shelf cross beams, obstacles and the like near the target goods placing position. Meanwhile, based on the mutual conversion between the point cloud data and the projection image, the position and posture information corresponding to the shelf cross beam, the obstacle and the like can be calculated quickly, and the placeable position of the goods carried by the unmanned forklift can be determined reasonably, so that the accuracy of goods placing detection on shelves, particularly high-level shelves, is improved, as are the safety and reliability of goods transportation and placement by the unmanned forklift. In addition, by projecting the merging point cloud set onto the reference plane, the amount of calculation needed to determine the beam image area can be effectively reduced, and the corresponding beam point cloud set can then be determined quickly, which helps the unmanned forklift calculate the position and posture information corresponding to the shelf cross beam, the obstacle and the like, and improves the efficiency of determining the goods placing posture information required for placing the goods. Moreover, by separately determining the specific placeable coordinate position and yaw angle data used when placing the goods at the target goods placing position, the unmanned forklift is helped to generate suitable navigation control instructions and fork control instructions, which can further improve the accuracy with which the unmanned forklift automatically transports and places goods.
Referring to fig. 9, fig. 9 is a modular schematic diagram of an unmanned forklift according to an embodiment of the present application, where the unmanned forklift may include the above-mentioned sensing module (for example, a vehicle-mounted computer, an SoC-based unmanned forklift goods placing detection system, etc.). As shown in fig. 9, the unmanned forklift (specifically, the sensing module mounted on the unmanned forklift) may include:
a memory 901 storing executable program code; and a processor 902 coupled to the memory 901. The processor 902 invokes the executable program code stored in the memory 901 and may execute all or part of the steps of any unmanned forklift goods placing detection method described in the foregoing embodiments.
Furthermore, an embodiment of the present application further discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute all or part of the steps of any unmanned forklift goods placing detection method described in the foregoing embodiments.
Furthermore, an embodiment of the present application further discloses a computer program product which, when run on a computer, causes the computer to execute all or part of the steps of any unmanned forklift goods placing detection method described in the foregoing embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The foregoing describes in detail the unmanned forklift goods placing detection method and device, the unmanned forklift and the storage medium disclosed in the embodiments of the present application. Specific examples are used herein to illustrate the principle and implementation of the present application, and the description of the foregoing embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, since those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application, the content of this description should not be construed as limiting the present application.

Claims (9)

1. An unmanned forklift goods placing detection method, characterized in that the method is applied to an unmanned forklift and comprises:
acquiring multi-frame initial point cloud data corresponding to a target goods placing position; the initial point cloud data are three-dimensional point cloud data, and the initial point cloud data are acquired by a 3D laser radar;
combining the multi-frame initial point cloud data to obtain a combined point cloud set;
acquiring a projection image corresponding to the merging point cloud set on a reference plane, and separating a cross beam point cloud set from the merging point cloud set according to the projection image; the normal vector of the reference plane is parallel to the ground, and the beam point cloud set is used for determining a shelf beam corresponding to the target goods placing position;
determining obstacle position information corresponding to an obstacle on the target goods placing position based on the cross beam point cloud set;
according to the cross beam point cloud set and the obstacle position information, goods placing posture information corresponding to the target goods placing position is determined, wherein the goods placing posture information is used for indicating the unmanned forklift to place the carried goods at the target goods placing position so that the goods conform to the goods placing posture information, the obstacle comprises other goods except the target goods, and the target goods are the goods carried by the unmanned forklift;
Based on the cross beam point cloud set, determining obstacle position information corresponding to an obstacle on the target put position comprises the following steps:
performing plane fitting on the cross beam point cloud set to obtain a fitted cross beam plane, and acquiring yaw angle data corresponding to the fitted cross beam plane;
based on the yaw angle data, establishing a rotation translation matrix corresponding to the target delivery position, and carrying out rotation translation on the multi-frame initial point cloud data according to the rotation translation matrix to obtain a transfer point cloud set;
acquiring highest edge point cloud data from the point cloud data corresponding to the fitting beam plane, and performing straight line fitting on the edge point cloud data to obtain a fitting straight line equation;
extracting an obstacle point cloud set from the transfer point cloud set according to the cross beam point cloud set, the fitting straight line equation and cargo specification information;
and determining obstacle position information corresponding to the obstacle at the target goods placing position according to the obstacle point cloud set.
2. The method of claim 1, wherein the acquiring the corresponding projection image of the merging point cloud set on the reference plane comprises:
based on a plurality of grids divided on a reference plane, respectively extracting grid representative points corresponding to the grids from the merging point cloud set;
And respectively carrying out binarization processing on the projection pixel points of each grid representative point on the reference plane, and forming a projection image corresponding to the merging point cloud set on the reference plane according to the projection pixel points subjected to the binarization processing.
3. The method of claim 2, wherein said separating the beam point cloud from the merge point cloud from the projected image comprises:
carrying out morphological image processing on the projection image to obtain an image to be traversed;
traversing the image to be traversed from a target direction, determining a beam image area which accords with beam specification information and cargo specification information, and separating point cloud data corresponding to the projection pixel points contained in the beam image area from the merging point cloud set to serve as a beam point cloud set.
4. The method of claim 1, wherein extracting an obstacle point cloud from the transfer point cloud based on the beam point cloud, the fit straight line equation, and cargo specification information, comprises:
according to the cross beam point cloud set and the cargo specification information, a target region of interest is defined for the transfer point cloud set, and point cloud data in the target region of interest in the transfer point cloud set is determined to be a region point cloud set;
Constructing a three-dimensional voxel grid based on the regional point cloud set, and carrying out voxel filtering on the regional point cloud set through the three-dimensional voxel grid to obtain a regional filtering point cloud set;
extracting obstacle point cloud data above the cross beam point cloud from the regional filtering point cloud according to the fitting linear equation;
and carrying out noise filtering on the obstacle point cloud data, and obtaining an obstacle point cloud set according to the obstacle point cloud data after noise filtering.
5. The method of claim 1, wherein after said fitting the edge point cloud data to a straight line, the method further comprises:
and calibrating the yaw angle data according to the fitting straight line equation to obtain calibrated yaw angle data, wherein the calibrated yaw angle data is used for determining the goods placing posture information corresponding to the target goods placing position.
6. A method according to any one of claims 1 to 3, wherein the goods placing posture information comprises at least a target coordinate position and target yaw angle data, the target coordinate position comprising a horizontal axis coordinate position, a longitudinal axis coordinate position and a vertical axis coordinate position, and the determining the goods placing posture information corresponding to the target goods placing position according to the cross beam point cloud set and the obstacle position information comprises:
performing plane fitting on the cross beam point cloud set to obtain a fitted cross beam plane, and determining the horizontal axis coordinate position, the vertical axis coordinate position and the target yaw angle data according to the fitted cross beam plane;
and under the condition that the obstacle position information accords with the goods placing space condition corresponding to the cargo specification information, determining the longitudinal axis coordinate position according to the relative position relation between the obstacle position information and the target goods placing position.
7. An unmanned forklift goods placing detection device, characterized in that the device is applied to an unmanned forklift and comprises:
the point cloud data acquisition unit is used for acquiring multi-frame initial point cloud data corresponding to the target goods placing position; the initial point cloud data are three-dimensional point cloud data, and the initial point cloud data are acquired by a 3D laser radar;
the multi-frame merging unit is used for merging the multi-frame initial point cloud data to obtain a merging point cloud set;
the projection image acquisition unit is used for acquiring a projection image corresponding to the merging point cloud set on a reference plane and separating a cross beam point cloud set from the merging point cloud set according to the projection image; the normal vector of the reference plane is parallel to the ground, and the beam point cloud set is used for determining a shelf beam corresponding to the target goods placing position;
The obstacle information determining unit is used for determining obstacle position information corresponding to an obstacle at the target goods placing position based on the cross beam point cloud set;
the system comprises a beam point cloud set, a barrier position information determining unit, a target position information determining unit and an unmanned forklift, wherein the beam point cloud set is used for carrying the cargo, the barrier position information determining unit is used for determining the cargo placing posture information corresponding to the target cargo placing position according to the beam point cloud set and the barrier position information, the cargo placing posture information is used for indicating the unmanned forklift to place the carried cargo at the target cargo placing position so that the cargo accords with the cargo placing posture information, the barrier comprises other cargoes except the target cargo, and the target cargo is the cargo carried by the unmanned forklift.
8. An unmanned forklift comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any one of claims 1 to 6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any one of claims 1 to 6.
CN202310610634.8A 2023-05-29 2023-05-29 Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium Active CN116342695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310610634.8A CN116342695B (en) 2023-05-29 2023-05-29 Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium

Publications (2)

Publication Number Publication Date
CN116342695A CN116342695A (en) 2023-06-27
CN116342695B (en) 2023-08-25

Family

ID=86880716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310610634.8A Active CN116342695B (en) 2023-05-29 2023-05-29 Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium

Country Status (1)

Country Link
CN (1) CN116342695B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507167A (en) * 2017-07-25 2017-12-22 上海交通大学 A kind of cargo pallet detection method and system matched based on a cloud face profile
CN109941274A (en) * 2019-03-01 2019-06-28 武汉光庭科技有限公司 Parking method and system, server and medium based on radar range finding identification gantry crane
CN112379387A (en) * 2020-11-13 2021-02-19 劢微机器人科技(深圳)有限公司 Automatic goods location calibration method, device, equipment and storage medium
CN114004899A (en) * 2021-11-12 2022-02-01 广东嘉腾机器人自动化有限公司 Pallet pose identification method, storage medium and equipment
CN114418952A (en) * 2021-12-21 2022-04-29 未来机器人(深圳)有限公司 Goods counting method and device, computer equipment and storage medium
CN115546300A (en) * 2022-10-11 2022-12-30 未来机器人(深圳)有限公司 Method and device for identifying pose of tray placed tightly, computer equipment and medium
CN116148884A (en) * 2022-11-21 2023-05-23 未来机器人(深圳)有限公司 Obstacle detection method and intelligent forklift

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10776651B2 (en) * 2019-01-18 2020-09-15 Intelligrated Headquarters, Llc Material handling method, apparatus, and system for identification of a region-of-interest

Also Published As

Publication number Publication date
CN116342695A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US10290115B2 (en) Device and method for determining the volume of an object moved by an industrial truck
CN110837814B (en) Vehicle navigation method, device and computer readable storage medium
US9630320B1 (en) Detection and reconstruction of an environment to facilitate robotic interaction with the environment
CN105431370A (en) Method and system for automatically landing containers on a landing target using a container crane
CN112464812B (en) Vehicle-based concave obstacle detection method
EP2439487A1 (en) Volume measuring device for mobile objects
CN111652936B (en) Three-dimensional sensing and stacking planning method and system for open container loading
CN112070838A (en) Object identification and positioning method and device based on two-dimensional-three-dimensional fusion characteristics
CN111192328A (en) Two-dimensional laser radar-based point cloud processing method for three-dimensional scanning system of compartment container
CN113345008A (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
US11873195B2 (en) Methods and systems for generating landing solutions for containers on landing surfaces
CN115546202B (en) Tray detection and positioning method for unmanned forklift
CN116128841A (en) Tray pose detection method and device, unmanned forklift and storage medium
US11977392B2 (en) Identifying elements in an environment
CN113557523A (en) Method and device for operating a robot with improved object detection
CN110816522A (en) Vehicle attitude control method, apparatus, and computer-readable storage medium
CN116342695B (en) Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium
CN116425088B (en) Cargo carrying method, device and robot
CN113841101A (en) Method for creating an environment map for use in autonomous navigation of a mobile robot
JP7272568B2 (en) Method and computational system for performing robot motion planning and repository detection
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN115289966A (en) Goods shelf detecting and positioning system and method based on TOF camera
JP7227849B2 (en) Trajectory generator
CN113084815A (en) Physical size calculation method and device of belt-loaded robot and robot
CN116342858B (en) Object detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant