CN111738253B - Forklift pallet positioning method, device, equipment and readable storage medium - Google Patents


Info

Publication number: CN111738253B
Application number: CN201910363982.3A
Authority: CN (China)
Prior art keywords: forklift, image, pallet, tray, coordinate system
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111738253A
Inventors: 沈蕾, 万保成
Current and original assignee: Beijing Jingdong Qianshi Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Beijing Jingdong Qianshi Technology Co Ltd; priority to CN201910363982.3A
Publication of application: CN111738253A; application granted; publication of grant: CN111738253B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Forklifts And Lifting Vehicles (AREA)

Abstract

The invention provides a forklift pallet positioning method, device, equipment and readable storage medium. A 3D initial image containing a forklift pallet is acquired together with prior position information corresponding to the pallet; the 3D initial image is cropped to the position area indicated by the prior position information to obtain a 3D image block containing the front section of the forklift pallet, where the front section is the cross-section of the pallet parallel to its end face; a 2D feature image of the 3D image block projected onto the front section is acquired; and the jack 3D position of the pallet is obtained according to the jack 2D position determined in the 2D feature image. This improves the efficiency and accuracy of determining the pallet cross-section, and in turn the accuracy and reliability of locating the jacks of the forklift pallet.

Description

Forklift pallet positioning method, device, equipment and readable storage medium
Technical Field
The invention relates to the technical field of warehouse logistics, in particular to a forklift pallet positioning method, a forklift pallet positioning device, forklift pallet positioning equipment and a readable storage medium.
Background
Forklifts are wheeled haulage vehicles that load, unload, stack, and transport goods over short distances by means of forklift pallets, and are widely used for carrying materials in ports, airports, and warehouses. In actual use, forklift pallets are stacked on shelves; when a forklift needs to pick up a pallet, it automatically drives to a preset position in front of the shelf and inserts its forks into the jacks of the pallet to fork it. Because the placement of a pallet on the shelf may differ from the preset position, the forklift must locate the jacks of the target pallet in the area ahead before picking it up.
In existing forklift pallet positioning methods, an RFID tag or an identification image is usually placed at a preset position on the pallet, so that the forklift can position the pallet by locating the tag or image. For example, marks are stuck on the two side edges or the center of the pallet end face, and these manual marks are identified and located in pallet pictures captured by a camera.
However, the surface of a forklift pallet may wear during use, so the RFID tag or identification image on its end face may be damaged, causing the pallet to be unrecognizable or misrecognized. The reliability of existing forklift pallet positioning methods is therefore low.
Disclosure of Invention
The embodiments of the invention provide a forklift pallet positioning method, device, equipment and readable storage medium to improve the accuracy and reliability of locating the jacks of a forklift pallet.
In a first aspect of an embodiment of the present invention, a method for positioning a pallet of a forklift is provided, including:
acquiring a 3D initial image containing a forklift pallet and prior position information corresponding to the forklift pallet;
intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face;
acquiring a 2D characteristic image projected on the front section of the forklift pallet by the 3D image block; and
and acquiring the jack 3D position of the forklift tray according to the jack 2D position determined in the 2D characteristic image.
In a second aspect of the embodiment of the present invention, there is provided a forklift pallet positioning device, including:
the prior module is used for acquiring a 3D initial image containing the forklift tray and prior position information corresponding to the forklift tray;
the intercepting module is used for intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face;
the transformation module is used for acquiring a 2D characteristic image projected on the front section of the forklift pallet by the 3D image block;
and the positioning module is used for acquiring the jack 3D position of the forklift tray according to the jack 2D position determined in the 2D characteristic image.
In a third aspect of the embodiments of the present invention, there is provided an apparatus comprising: the system comprises a memory, a processor and a computer program, wherein the computer program is stored in the memory, and the processor runs the computer program to execute the forklift pallet positioning method according to the first aspect and various possible designs of the first aspect.
In a fourth aspect of the embodiments of the present invention, there is provided a readable storage medium, in which a computer program is stored, where the computer program is used to implement the forklift pallet positioning method according to the first aspect and the various possible designs of the first aspect when the computer program is executed by a processor.
With the forklift pallet positioning method, device, equipment and readable storage medium described above, a 3D initial image containing a forklift pallet is acquired together with the prior position information corresponding to the pallet; the 3D initial image is cropped to the position area indicated by the prior position information to obtain a 3D image block containing the front section of the pallet, the front section being the cross-section of the pallet parallel to its end face; a 2D feature image of the 3D image block projected onto the front section is acquired; and the jack 3D position of the pallet is obtained according to the jack 2D position determined in the 2D feature image. This improves the efficiency and accuracy of determining the pallet cross-section, and in turn the accuracy and reliability of locating the jacks of the forklift pallet.
Drawings
Fig. 1 is a schematic flow chart of a forklift pallet positioning method according to an embodiment of the present invention;
Fig. 2 is a 3D schematic diagram of a forklift pallet according to an embodiment of the present invention;
fig. 3 is an example of a 3D initial image including a forklift pallet according to an embodiment of the present invention;
fig. 4 is an example of a 3D image block including a front section of a forklift pallet according to an embodiment of the present invention;
FIG. 5 is an example of an alternative implementation of step S103 of FIG. 1 provided by an embodiment of the present invention
Fig. 6 is a flowchart of another forklift pallet positioning method according to an embodiment of the present invention;
FIG. 7 is an example of a 2D feature image after dilation process provided by an embodiment of the present invention;
FIG. 8 is a diagram of a world coordinate system and a new coordinate system according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a forklift pallet positioning device according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a hardware structure of an apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be understood that in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements that are expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "plurality" means two or more. "and/or" is merely an association relationship describing an association object, and means that three relationships may exist, for example, and/or B may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. "comprising A, B and C", "comprising A, B, C" means that all three of A, B, C comprise, "comprising A, B or C" means that one of the three comprises A, B, C, and "comprising A, B and/or C" means that any 1 or any 2 or 3 of the three comprises A, B, C.
It should be understood that in the present invention, "B corresponding to a", "a corresponding to B", or "B corresponding to a" means that B is associated with a, from which B can be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information. The matching of A and B is that the similarity of A and B is larger than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to detection" depending on the context.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
When a forklift carries goods on a pallet, it first drives to the front of the shelf holding the forklift pallet along a preset path, and then captures an image of the pallet on the shelf with a camera mounted at the front of the forklift. By recognizing this image, the position of the pallet, i.e., of the jacks on the pallet, is determined, and the forklift moves its forks into the jacks accordingly to pick up the pallet. In this process, improving the recognition of the pallet image and the determination of the jack positions is one of the keys to whether the forklift can insert its forks and extract the pallet accurately. In the prior art, assisting positioning by sticking identification tags onto pallets is unreliable, and positioning by identification tags limits the mobility of the pallets. If the jacks are instead located by directly recognizing an image of the pallet end face, recognition errors may occur because differences in shooting angle distort the end-face image, so reliability is still not high.
To solve the low positioning reliability of the prior art, the embodiments of the invention provide a forklift pallet positioning method: the 3D initial image containing the forklift pallet is cropped using prior position information, a 2D image of the resulting image block projected onto the front section of the pallet is obtained, and the jack positions are identified in it, improving the accuracy and reliability of positioning the forklift pallet and its jacks.
Referring to fig. 1, which is a schematic flow chart of a forklift pallet positioning method provided by an embodiment of the present invention, an execution subject of the method shown in fig. 1 may be a software and/or hardware device, for example, may be a positioning terminal set on a forklift, or may be a server that performs data interaction with the forklift. The method shown in fig. 1 includes steps S101 to S104, specifically as follows:
s101, acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray.
In particular, the 3D initial image containing the forklift pallet may be acquired from a camera, which should be a 3D camera, such as a TOF (time-of-flight) 3D camera. The acquired 3D initial image comprises a point cloud of the 3D image of the forklift pallet, with each pixel corresponding to a 3D coordinate. Referring to fig. 2, a 3D schematic diagram of a forklift pallet according to an embodiment of the present invention is provided. The 3D initial image may, for example, contain the image shown in fig. 2 together with other noise information. The X-axis, Y-axis, and Z-axis directions shown in fig. 2 are the directions of the coordinate axes of the coordinate system of the 3D initial image.
The prior position information is, for example, a position pre-specified for the forklift pallet, such as the approximate range of the pallet in the Y-axis and Z-axis directions in fig. 2, or the approximate ranges in all of the X-axis, Y-axis, and Z-axis directions. The prior position information can be understood as determining the general range of positions of the pallet; for example, it may indicate that the pallet lies between 800 mm and 1800 mm in the Z-axis direction and between 30 mm and 200 mm in the Y-axis direction.
Optionally, since the 3D camera coordinate system may be offset from the world coordinate system, the camera coordinate system may be calibrated before acquiring the 3D initial image containing the forklift pallet and the prior position information corresponding to the pallet. For example, the rotation matrix between the world coordinate system and the 3D camera coordinate system is acquired first, along with a 3D initial image captured by the 3D camera, in which the coordinates of each pixel belong to the camera coordinate system. The coordinates of each pixel of the 3D initial image are then transformed from the camera coordinate system into the world coordinate system according to the rotation matrix, giving the 3D initial image in the world coordinate system. For example, denote the rotation matrix from the 3D camera coordinate system to the world coordinate system by ${}^{W}_{C}R$, the coordinates of a point P in the camera coordinate system by ${}^{C}P$, and its coordinates in the world coordinate system by ${}^{W}P$. Transforming from the camera coordinate system to the world coordinate system, the world coordinates of each point P in the 3D initial image are obtained as ${}^{W}P = {}^{W}_{C}R \, {}^{C}P$.
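The transformation above can be sketched with NumPy; the rotation matrix here is a made-up calibration result (a 90-degree rotation about the Z axis) used only to illustrate applying ${}^{W}P = {}^{W}_{C}R \, {}^{C}P$ to a whole point cloud at once:

```python
import numpy as np

# Hypothetical calibration result: rotation from camera to world coordinates.
# A real system would obtain this from extrinsic calibration; a rotation of
# 90 degrees about the Z axis is used here purely as an illustration.
theta = np.pi / 2
R_wc = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Point cloud in camera coordinates: one 3D coordinate per pixel, shape (N, 3).
points_cam = np.array([
    [1000.0,  100.0, 50.0],
    [1200.0, -200.0, 80.0],
])

# w_P = R_wc @ c_P, applied to every point of the cloud in one product.
points_world = points_cam @ R_wc.T
```

Applying the matrix to the stacked cloud via a single transposed product is equivalent to rotating each point individually, and is the idiomatic NumPy form.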
s102, intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face.
The approximate coordinates of the pallet point cloud in the world coordinate system can be determined from the prior position information, and the 3D initial image of the forklift pallet can be cropped accordingly. Cropping is mainly applied to pixels in the Y and Z directions, while the pixels in the X direction of the 3D initial image are retained. In some embodiments, pixels may be cropped in all three directions X, Y, and Z, which is not limited here. Optionally, the cropping may reserve a certain margin beyond the prior position information.
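As a minimal sketch of this cropping step, the point cloud can be filtered with a boolean mask over the Y and Z ranges given by the prior position information; the specific points and the 50 mm margin below are illustrative assumptions, only the 30-200 mm and 800-1800 mm ranges come from the example in the text:

```python
import numpy as np

# Point cloud in world coordinates, shape (N, 3) with columns (x, y, z) in mm.
points = np.array([
    [500.0,  100.0, 1200.0],   # inside the prior region -> kept
    [500.0,  400.0, 1200.0],   # y out of range -> dropped
    [500.0,  100.0, 2500.0],   # z out of range -> dropped
])

# Prior position ranges from the example above (Y: 30-200 mm, Z: 800-1800 mm),
# widened by a hypothetical 50 mm margin on each side.
margin = 50.0
y_min, y_max = 30.0 - margin, 200.0 + margin
z_min, z_max = 800.0 - margin, 1800.0 + margin

# Keep every X value; crop only in Y and Z, as the text describes.
mask = (
    (points[:, 1] >= y_min) & (points[:, 1] <= y_max)
    & (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
)
image_block = points[mask]
```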
Referring to fig. 3, an example of a 3D initial image including a forklift pallet is provided in an embodiment of the present invention. Referring to fig. 4, an example of a 3D image block including the front section of a forklift pallet is provided in an embodiment of the present invention. In addition to pixels of the pallet itself, the 3D initial image in fig. 3 may include pixels of the shelf structure beside the pallet; in the 3D image block containing the pallet front section obtained after cropping, most of these interfering pixels have been removed, and mainly the pixels of the cropped pallet cross-section remain.
And S103, acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
Alternatively, in order to improve the accuracy of the 2D feature image on the front section of the forklift pallet, the 2D feature image may be acquired by processing the 3D image block by using a principal component analysis method. In particular, reference may be made to fig. 5, which is an example of an alternative implementation of step S103 in fig. 1 provided by an embodiment of the present invention. The method shown in fig. 5 includes the following steps S201 to S204.
S201, acquiring a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block.
Let $p_i = (x_i, y_i, z_i)$ denote any pixel of the 3D image block in the 3D camera coordinate system, and let $\bar{p} = \frac{1}{k} \sum_{i=1}^{k} p_i$ denote the mean of the pallet point cloud. The covariance matrix $C$ is defined as

$C = \frac{1}{k} \sum_{i=1}^{k} (p_i - \bar{p})(p_i - \bar{p})^{T}$,

where k is the number of pixels in the 3D image block.
S202, obtaining eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues.
With each pixel $p_i = (x_i, y_i, z_i)$ as a column vector, form the pixel matrix of the 3D image block and compute its covariance matrix $C$. The eigenvalues and eigenvectors of $C$ satisfy $C \vec{v}_j = \lambda_j \vec{v}_j$, where $\lambda_j$ are the eigenvalues of the covariance matrix, $\vec{v}_j$ the corresponding eigenvectors, and $j \in \{0, 1, 2\}$.
And S203, transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, and acquiring the new coordinates of each pixel point in the 3D image block.
The X, Y, and Z axes of the new coordinate system are, in order, the first, second, and third eigenvectors, sorted by eigenvalue from largest to smallest; the plane spanned by the X and Y axes of the new coordinate system is the plane in which the pallet front section lies. In other words, in the new coordinate system spanned by the three eigenvectors, the eigenvector corresponding to the smallest eigenvalue points along the normal of the pallet front section, and the plane spanned by the other two eigenvectors is the plane of the front section.
And S204, projecting new coordinates of each pixel point in the 3D image block along a Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
For example, it may be understood that the Z value of the new coordinate of each pixel point in the 3D image block is taken to be 0, so as to obtain the coordinate of the pixel point of the positive section of the forklift pallet.
In the embodiment shown in fig. 5, the eigenvalues and eigenvectors are obtained from the covariance matrix over the pixels of the 3D image block; the magnitude of each eigenvalue indicates how pronounced the variation is in the corresponding direction, and the two directions with the most pronounced variation are taken as the new X-axis and Y-axis directions. The plane of the pallet front section is thus determined with higher accuracy.
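Steps S201 to S204 can be sketched as a standard PCA projection. The synthetic point cloud below stands in for a cropped pallet image block; its dimensions and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the cropped pallet point cloud: points spread widely
# in two directions (the pallet face) and only slightly in the third (depth).
n = 500
cloud = np.column_stack([
    rng.uniform(-600, 600, n),    # pallet width
    rng.uniform(-70, 70, n),      # pallet height
    rng.normal(0.0, 2.0, n),      # small depth noise around the front section
])

# S201: covariance matrix of the centered point cloud.
mean = cloud.mean(axis=0)
centered = cloud - mean
cov = centered.T @ centered / len(cloud)

# S202: eigenvalues and eigenvectors (eigh returns ascending eigenvalues).
eigvals, eigvecs = np.linalg.eigh(cov)

# S203: sort eigenvectors by descending eigenvalue; the largest two span the
# front section, the smallest points along its normal. Change of basis.
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order]                 # columns: new X, Y, Z axes
new_coords = centered @ basis             # coordinates in the new system

# S204: project along the new Z axis (drop the Z component).
feature_2d = new_coords[:, :2]
```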
And S104, acquiring the jack 3D position of the forklift pallet according to the jack 2D position determined in the 2D characteristic image.
Specifically, the jack 2D position may be transformed from the new coordinate system to a world coordinate system corresponding to the 3D initial image, so as to obtain the jack 3D position of the forklift pallet.
According to the forklift pallet positioning method of this embodiment, a 3D initial image containing a forklift pallet is acquired together with the prior position information corresponding to the pallet; the 3D initial image is cropped to the position area indicated by the prior position information to obtain a 3D image block containing the front section of the pallet, the front section being the cross-section of the pallet parallel to its end face; a 2D feature image of the 3D image block projected onto the front section is acquired; and the jack 3D position of the pallet is obtained according to the jack 2D position determined in the 2D feature image. This improves the efficiency and accuracy of determining the pallet cross-section, and in turn the accuracy and reliability of locating the jacks of the forklift pallet.
Building on the above embodiment, to further improve the accuracy of locating the jack 2D position, step S103 (acquiring the 2D feature image of the 3D image block projected on the front section of the forklift pallet) may be followed by a process that locates and compares the 2D position of the pallet over several iterations, with the pallet 3D position obtained in each iteration serving as the prior position information for the next. Specifically, fig. 6 shows a flowchart of another forklift pallet positioning method according to an embodiment of the present invention. The method shown in fig. 6 includes the following steps S301 to S309.
S301, acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray.
S302, intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face.
And S303, acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
The specific implementation manner of the above steps S301 to S303 may refer to the steps S101 to S103 shown in fig. 1, and the implementation principle and technical effect are similar, and are not repeated herein.
S304, determining the 2D position of the forklift pallet according to the 2D characteristic image and a preset target pallet template.
Specifically, the 2D feature image may be matched in sliding fashion against at least one preset pallet template to obtain a matching result for each template. The template with the best matching result is then taken as the preset target pallet template, and the best-matching position between that template and the 2D feature image is taken as the 2D position of the forklift pallet. The sliding match against a pallet template can be computed, for example, as a sum of squared differences:

$R(x, y) = \sum_{x', y'} \left( T(x', y') - I(x + x', y + y') \right)^2$

where R(x, y) is the matching value between the pallet template T and the image region of the 2D feature image whose reference point is (x, y), and I denotes the portion of the 2D feature image being matched against T. Suppose the template image has size M×N and the 2D feature image to be matched has size Ms×Ns; then x slides over [0, Ms − M] and y over [0, Ns − N]. With the pallet template T as the search window, one matching value is obtained per sliding step, and the minimum of the resulting R(x, y) values is taken as the matching result of that template against the 2D feature image. The matching result is a numerical value, and the template with the smallest matching result among the at least one pallet template is taken as the final target pallet template. For example, if the smallest matching result is Rmin(xT, yT), the pixel range from (xT, yT) to (xT + M, yT + N) in the 2D feature image is the 2D position of the forklift pallet obtained by this positioning.
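The sliding match can be sketched as a brute-force sum-of-squared-differences search; the toy image, template, and embedding position below are illustrative assumptions, not pallet data:

```python
import numpy as np

def sliding_ssd(image, template):
    """Slide `template` over `image`, returning the SSD map R(x, y)."""
    Ms, Ns = image.shape
    M, N = template.shape
    R = np.empty((Ms - M + 1, Ns - N + 1))
    for x in range(Ms - M + 1):
        for y in range(Ns - N + 1):
            diff = image[x:x + M, y:y + N] - template
            R[x, y] = np.sum(diff * diff)
    return R

# Toy 2D feature image with a hypothetical 2x2 pattern embedded at (1, 2).
image = np.zeros((5, 6))
template = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
image[1:3, 2:4] = template

R = sliding_ssd(image, template)
best = np.unravel_index(np.argmin(R), R.shape)   # reference point of best match
```

The minimum of R marks the best match, consistent with taking the smallest matching value as the matching result; an exact embedded copy of the template yields R = 0 at its reference point.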
S305, determining the 3D position of the forklift pallet according to the 2D position of the forklift pallet.
After the 2D position of the forklift pallet is obtained, a single positioning may not be accurate enough. To improve positioning accuracy, the 3D position of the pallet is determined from this positioning, and the above steps are then repeated with that 3D position as the prior position information to achieve finer positioning. The 3D position of the pallet may be determined by transforming the coordinates of the pixels in the range from (xT, yT) to (xT + M, yT + N) of the above embodiment from the new coordinate system constructed from the eigenvectors back into the 3D camera coordinate system.
S306, determining whether the 2D positions of the forklift pallet obtained in two consecutive iterations are the same.
If not, go to S307; if yes, the process proceeds to S308.
And S307, taking the 3D position of the forklift pallet as prior position information, and returning to the step S302.
And S308, determining the jack 2D position in the last determined 2D characteristic image of the forklift pallet.
S309, acquiring the jack 3D position of the forklift pallet according to the jack 2D position determined in the 2D characteristic image.
The specific implementation manner of the step S309 may refer to the step S104 shown in fig. 1, and the implementation principle and technical effects are similar, and are not repeated herein.
According to the embodiment, the accuracy of forklift pallet positioning is further improved through repeated cyclic positioning.
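The control flow of steps S301 to S309 can be sketched as follows; `locate_2d` and `to_3d` are hypothetical stand-ins for the crop/project/match and coordinate-transform steps, and the toy results are chosen so that positioning converges on a repeated value:

```python
# Hedged sketch of the refinement loop of steps S301-S309, with the heavy
# image-processing steps stubbed out so only the control flow is shown.
def locate_pallet(initial_prior, locate_2d, to_3d, max_iters=10):
    """Repeat positioning until two consecutive 2D positions agree (S306)."""
    prior = initial_prior
    prev_2d = None
    pos_3d = None
    for _ in range(max_iters):
        pos_2d = locate_2d(prior)      # S302-S304: crop, project, match
        pos_3d = to_3d(pos_2d)         # S305: back to 3D coordinates
        if pos_2d == prev_2d:          # S306/S308: converged, use this result
            return pos_2d, pos_3d
        prev_2d = pos_2d
        prior = pos_3d                 # S307: 3D position becomes new prior
    return prev_2d, pos_3d

# Toy stand-ins: positioning settles on (4, 7) after one refinement.
results = iter([(3, 6), (4, 7), (4, 7)])
pos2d, pos3d = locate_pallet(
    initial_prior=None,
    locate_2d=lambda prior: next(results),
    to_3d=lambda p: (p[0], p[1], 0),
)
```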
Based on the embodiment shown in fig. 6, before step S304 (determining the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template), expansion processing (morphological dilation) may be applied to the pixels of the 2D feature image to obtain an expanded 2D feature image. This increases the density of pixels in the 2D feature image and reduces the influence of the low resolution of the 3D camera, thereby improving the effect of template matching. Referring to fig. 7, an example of a 2D feature image before and after the expansion processing is provided in an embodiment of the present invention.
The expansion processing can be written as

$A \oplus B = \{\, x \mid B_x \cap A \neq \varnothing \,\}$

where A is the 2D feature image; $\oplus$ is the dilation operator; B is the structuring element used for the expansion, for example 9 pixel units arranged in a 3×3 square or 5 pixel units arranged in a cross; and $B_x = \{x + b \mid b \in B\}$ is the point set obtained by translating the structuring element by x, with b ranging over the coordinates of the pixel units of B.
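A minimal dilation over a binary 2D feature image, written directly in NumPy rather than with an image-processing library; the cross-shaped structuring element is one of the shapes mentioned above, and the single-point input image is an illustrative assumption:

```python
import numpy as np

def dilate(image, offsets):
    """Morphological dilation of a binary image by a structuring element
    given as a list of (row, col) offsets."""
    out = np.zeros_like(image, dtype=bool)
    rows, cols = image.shape
    for r, c in zip(*np.nonzero(image)):
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                out[rr, cc] = True
    return out

# Cross-shaped structuring element of 5 pixel units, as mentioned in the text.
cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

# Sparse binary 2D feature image: a single isolated projected point.
feature = np.zeros((5, 5), dtype=bool)
feature[2, 2] = True

dilated = dilate(feature, cross)
```

Each set pixel is replaced by a copy of the structuring element centered on it, which is how isolated projected points grow into denser blobs before template matching.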
After the expansion processing has been performed on the pixel points of the 2D feature image, the step of sliding-matching the 2D feature image with at least one preset pallet template to obtain a matching result for each pallet template is, correspondingly, performed on the expanded 2D feature image.
In some embodiments, the end faces of the forklift pallets may not be placed squarely on the shelves or the ground, which may cause the forks to strike the side of a forklift pallet and fracture it. To improve the accuracy with which the forklift inserts its forks into the pallet, a process of acquiring the end-face deflection angle of the forklift pallet may further be included after step S203. Specifically, referring to fig. 8, a schematic diagram of the world coordinate system and the new coordinate system according to an embodiment of the present invention is shown; in fig. 8, the coordinate axes of the new coordinate system are drawn with broken lines and the coordinate axes of the world coordinate system with solid lines. When the forklift pallet is not placed squarely, the plane of its front section (the X_new-O_new-Y_new plane in fig. 8) may form an angle with the XOY plane of the world coordinate system (the X_world-O_world-Y_world plane). The angle between the X axis of the new coordinate system (the X_new axis in fig. 8) and the X axis of the world coordinate system corresponding to the 3D initial image (the X_world axis in fig. 8) is therefore determined as the rotation angle of the pallet end face relative to the XOY plane of the world coordinate system. The forklift then adjusts the attitude of the fork insertion according to this rotation angle.
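The deflection angle itself reduces to the angle between two unit vectors. A minimal sketch, with the function name assumed rather than taken from the patent:

```python
import numpy as np

def end_face_rotation_angle(x_new_axis):
    """Angle (radians) between the X axis of the new (eigenvector)
    coordinate system and the world X axis, used as the rotation of
    the pallet end face relative to the world XOY plane."""
    x_new_axis = np.asarray(x_new_axis, dtype=float)
    x_world = np.array([1.0, 0.0, 0.0])
    cos_a = np.dot(x_new_axis, x_world) / np.linalg.norm(x_new_axis)
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos_a, -1.0, 1.0))
```

The forklift's fork-insertion posture would then be adjusted by this angle about the world Z axis.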
On the basis of the above embodiments, the process of obtaining the prior position information corresponding to the forklift pallet in step S101 shown in fig. 1 may be implemented in various ways; for example, the prior position information may be obtained by plane clustering using conditional Euclidean clustering with normal information.
Specifically, a k-d tree structure of the pixel points in the 3D initial image may be built according to the Euclidean distances between the pixel points. It can be assumed that the front section of the forklift pallet in the pixel point cloud lies almost in one plane, so the angle between the normals of two adjacent pixel points on that plane is theoretically small. First, a k-d tree is constructed for the input pixel point cloud P, so that the adjacent pixel points of each pixel point can subsequently be found quickly (in Euclidean space, two pixel points are adjacent if their Euclidean distance is smaller than a preset Euclidean threshold). Then the adjacent pixel points are determined for each pixel point in the k-d tree structure. Next, normal information is acquired for each pixel point in the k-d tree structure, where the normal information indicates the normal of the local surface formed by the pixel point and its adjacent pixel points. The normal angle between each pixel point and its adjacent pixel points is then determined from this normal information; it can be understood that, if the normal of every point in the point cloud P is computed and stored, two adjacent pixel points lying on the same plane should in theory have a small normal angle. Finally, the pixel points in the k-d tree structure are divided into several pixel classes according to a preset angle threshold and the normal angles between each pixel point and its adjacent pixel points; each pixel class contains pixel points whose mutual normal angles are smaller than the angle threshold, so the pixel points of one class lie in the same plane.
The front section of the forklift pallet is then determined among the planes formed by the pixel points of these classes, according to preset characteristic information of the forklift pallet. Concretely, an empty classification list C (for storing the pixel classes C1, C2, ..., Ci, ...) and an empty queue Q (for recording which points have been processed) are initialized, and the following five-step process is performed for each point Pi in P:
Step one, pi is stored in Q, indicating that it has been processed.
If Pi does not belong to any class, a class corresponding to Pi, for example C1, is newly built in C.
Step three, searching for the neighboring pixel point Pj of Pi in the point cloud (the searching method of the neighboring pixel point is to set a sphere area with the pixel point Pi as the center and r as the radius, and the pixel point in the sphere area is used as the neighboring pixel point Pj of Pi)
And step four, judging whether each adjacent pixel point Pj is processed (i.e. within Q) or not, if so, not operating the adjacent pixel point, and continuing to judge whether other adjacent pixel points are processed or not.
Fifthly, if the adjacent pixel point Pj is not processed (i.e. is not in Q), judging whether the normal included angle between Pi and Pj is smaller than a preset included angle threshold value; if the normal angle is smaller than the preset angle threshold, adding Pj into the category to which Pi belongs, then storing Pj into Q (namely marking the Pj as processed), if the normal angle is larger than or equal to the preset angle threshold, newly building a category corresponding to Pj in C, such as C2, and then storing Pj into Q (namely marking the Pj as processed).
After all the pixel points have been processed by these five steps, a classification list C containing several classes Ci is obtained, where each Ci represents a plane. Coarse screening is then performed in C using characteristic information of the forklift pallet, such as area, centroid and perimeter, to determine the pallet-like front section. A forklift-pallet-like front section can be understood as a section approximately parallel to the XOY plane of the world coordinate system.
Finally, the prior position information corresponding to the forklift pallet is determined from the coordinate range of the pallet's front section in the world coordinate system. For example, if the coordinates of the pixel points of the front section span the interval [y1, y2] in the Y direction, this interval is taken as the prior position information of the forklift pallet in the Y direction.
Referring to fig. 9, a schematic structural diagram of a pallet positioning device for a forklift is provided in an embodiment of the present invention. The forklift pallet positioning device 80 shown in fig. 9 includes:
and the priori module 81 is used for acquiring the 3D initial image containing the forklift tray and the priori position information corresponding to the forklift tray.
And the intercepting module 82 is configured to intercept the 3D initial image in a location area indicated by the prior location information, and obtain a 3D image block including a normal section of the forklift tray, where the normal section of the forklift tray is a section of the forklift tray parallel to the end surface.
And the transformation module 83 is used for acquiring a 2D characteristic image projected on the front section of the forklift pallet by the 3D image block.
And the positioning module 84 is configured to obtain a jack 3D position of the forklift pallet according to the jack 2D position determined in the 2D feature image.
The forklift pallet positioning device 80 in the embodiment shown in fig. 9 may be correspondingly used to perform the steps in the embodiment of the method shown in fig. 1, and its implementation principle and technical effects are similar, and will not be described herein again.
On the basis of the above embodiment, the transforming module 83 is configured to obtain a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block; acquiring eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues; transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, and obtaining new coordinates of each pixel point in the 3D image block, wherein the X axis, the Y axis and the Z axis of the new coordinate system are sequentially a first feature vector, a second feature vector and a third feature vector which are determined in sequence from large to small in the feature vectors, and a plane formed by the X axis and the Y axis of the new coordinate system is a plane in which the front section of the forklift pallet is positioned; and projecting new coordinates of each pixel point in the 3D image block along a Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
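The covariance/eigenvector projection performed by the transformation module can be sketched as follows. This is a minimal illustration with assumed names; the patent's 2D feature image would additionally rasterize the projected coordinates into pixels:

```python
import numpy as np

def project_to_front_section(block):
    """PCA-project the pixel points of a 3D image block onto the plane
    of the pallet's front section.

    Returns (coords_2d, eigvecs, centroid).  The columns of `eigvecs`
    are sorted by descending eigenvalue, so columns 0 and 1 span the
    front-section plane (the new X and Y axes) and column 2 is its
    normal (the new Z axis, along which the projection is taken)."""
    block = np.asarray(block, dtype=float)
    centroid = block.mean(axis=0)
    centred = block - centroid
    cov = np.cov(centred.T)                      # 3x3 covariance matrix
    w, v = np.linalg.eigh(cov)                   # ascending eigenvalues
    order = np.argsort(w)[::-1]                  # reorder to descending
    eigvecs = v[:, order]
    new_coords = centred @ eigvecs               # coords in the new system
    return new_coords[:, :2], eigvecs, centroid  # drop Z: project on plane
```

Because the change of basis is a rigid rotation about the centroid, in-plane distances between pixel points are preserved in the 2D feature coordinates.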
On the basis of the above embodiment, the positioning module 84 is configured to transform the jack 2D position from the new coordinate system to a world coordinate system corresponding to the 3D initial image, so as to obtain the jack 3D position of the forklift pallet.
On the basis of the above embodiment, the positioning module 84 is configured to determine, after the transforming module 83 obtains a 2D feature image of the 3D image block projected on the front section of the forklift pallet, a 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template; determining a 3D position of the forklift pallet according to the 2D position of the forklift pallet; and taking the 3D position of the forklift pallet as priori position information, and returning to execute the interception of the 3D initial image in the position area indicated by the priori position information to obtain a 3D image block containing the positive section of the forklift pallet until the 2D positions of the forklift pallet which are continuously determined twice are the same, and determining the jack 2D position in the 2D characteristic image of the forklift pallet which is determined the last time.
On the basis of the above embodiment, the positioning module 84 is configured to slide-match the 2D feature image with at least one preset tray template, so as to obtain a matching result corresponding to each tray template; determining the tray template with the optimal matching result as a preset target tray template; and determining the position which is the best match between the preset target tray template and the 2D characteristic image as the 2D position of the forklift tray.
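The sliding match can be sketched, for binary images and a single template, as below. The overlap count used as the matching score is an illustrative stand-in; real implementations might use normalized cross-correlation (e.g. OpenCV's `cv2.matchTemplate`), and the names here are hypothetical:

```python
import numpy as np

def slide_match(feature, template):
    """Slide a binary pallet template over a binary 2D feature image
    and return (best_score, (row, col)) for the best placement.
    The score counts pixels where template and image are both 'on'."""
    feature = np.asarray(feature, dtype=bool)
    template = np.asarray(template, dtype=bool)
    H, W = feature.shape
    h, w = template.shape
    best_score, best_pos = -1, (0, 0)
    for r in range(H - h + 1):                   # every valid placement
        for c in range(W - w + 1):
            window = feature[r:r + h, c:c + w]
            score = int(np.logical_and(window, template).sum())
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos
```

Running this once per pallet template and keeping the template with the highest best score mirrors the module's "optimal matching result" selection; the winning (row, col) is the 2D position of the forklift pallet in the feature image.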
On the basis of the above embodiment, the positioning module 84 is configured to perform expansion processing on each pixel point of the 2D feature image before determining the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template, so as to obtain the expanded 2D feature image.
Correspondingly, the positioning module 84 is configured to slide-match the expanded 2D feature image with at least one preset tray template, so as to obtain a matching result corresponding to each tray template.
On the basis of the above embodiment, the transforming module 83 is configured to transform the 3D coordinates of each pixel point in the 3D image block into a new coordinate system configured by the feature vector, and determine, as a rotation angle of the end surface of the forklift pallet with respect to the XOY surface of the world coordinate system, an angle between an X axis of the new coordinate system and an X axis of the world coordinate system corresponding to the 3D initial image after acquiring the new coordinates of each pixel point in the 3D image block.
On the basis of the above embodiment, the prior module 81 is configured to obtain a k-D tree structure of pixel points in the 3D initial image according to euclidean distances between pixel points in the 3D initial image; determining adjacent pixel points for each pixel point in the k-d tree structure; acquiring normal line information of each pixel point in the k-d tree structure, wherein the normal line information indicates the normal line of a local plane formed by the pixel point and adjacent pixel points corresponding to the pixel point; determining the normal included angle between each pixel point and the adjacent pixel point according to the normal information of each pixel point in the k-d tree structure; dividing the pixel points in the k-d tree structure into a plurality of pixel categories according to a preset included angle threshold and normal included angles between each pixel point and adjacent pixel points, wherein each pixel category comprises pixel points with the normal included angles smaller than the included angle threshold and adjacent pixel points, and the pixel points corresponding to each pixel category are pixel points in the same plane; according to preset characteristic information of the forklift pallet, determining a normal section of the forklift pallet in a plane formed by pixel points corresponding to the pixel categories; and determining prior position information corresponding to the forklift pallet according to a coordinate range corresponding to the forklift pallet normal section in a world coordinate system.
On the basis of the above embodiment, the prior module 81 is configured to obtain a rotation matrix between a world coordinate system and a 3D camera coordinate system before the 3D initial image including the forklift pallet and prior position information corresponding to the forklift pallet are obtained; acquiring a 3D initial image shot by a 3D camera, wherein coordinates of each pixel point of the 3D initial image belong to a 3D camera coordinate system; and transforming the coordinates of each pixel point of the 3D initial image from the 3D camera coordinate system to the world coordinate system according to the rotation matrix to obtain the 3D initial image in the world coordinate system.
Referring to fig. 10, a schematic hardware structure of an apparatus according to an embodiment of the present invention, where the apparatus 90 includes: a processor 91, a memory 92 and a computer program; wherein the method comprises the steps of
The memory 92 is configured to store the computer program (for example, application programs or functional modules implementing the methods described above) and may be a flash memory.
The processor 91 is configured to execute the computer program stored in the memory, so as to implement each step of the forklift pallet positioning method; reference may be made to the description of the method embodiments above.
Alternatively, the memory 92 may be separate or integrated with the processor 91.
When the memory 92 is a device separate from the processor 91, the apparatus may further include:
a bus 93 for connecting the memory 92 and the processor 91.
The invention also provides a readable storage medium, wherein the readable storage medium stores a computer program, and the computer program is used for realizing the forklift tray positioning method provided by the various embodiments when being executed by a processor.
The readable storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media can be any available media that can be accessed by a general purpose or special purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuits, ASIC for short). In addition, the ASIC may reside in a user device. The processor and the readable storage medium may reside as discrete components in a communication device. The readable storage medium may be read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tape, floppy disk, optical data storage device, etc.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the apparatus may read the execution instructions from the readable storage medium, the execution instructions being executed by the at least one processor to cause the apparatus to implement the forklift pallet positioning method provided by the various embodiments described above.
In the above apparatus embodiment, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in a processor.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (11)

1. The forklift pallet positioning method is characterized by comprising the following steps of:
acquiring a 3D initial image containing a forklift pallet and prior position information corresponding to the forklift pallet;
intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face;
acquiring a 2D characteristic image projected on the front section of the forklift pallet by the 3D image block;
acquiring the jack 3D position of the forklift pallet according to the jack 2D position determined in the 2D characteristic image;
the obtaining the 2D characteristic image of the 3D image block projected on the front section of the forklift pallet comprises the following steps:
acquiring a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block;
acquiring eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues;
transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, and obtaining new coordinates of each pixel point in the 3D image block, wherein the X axis, the Y axis and the Z axis of the new coordinate system are sequentially a first feature vector, a second feature vector and a third feature vector which are determined in sequence from large to small in the feature vectors, and a plane formed by the X axis and the Y axis of the new coordinate system is a plane in which the front section of the forklift pallet is positioned;
And projecting new coordinates of each pixel point in the 3D image block along a Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
2. The method of claim 1, wherein the obtaining the jack 3D position of the forklift pallet from the jack 2D position determined in the 2D feature image comprises:
and transforming the jack 2D position from the new coordinate system to a world coordinate system corresponding to the 3D initial image to obtain the jack 3D position of the forklift pallet.
3. The method according to claim 1 or 2, further comprising, after said acquiring a 2D feature image of the 3D image block projected on the forklift pallet normal plane:
determining the 2D position of a forklift pallet according to the 2D characteristic image and a preset target pallet template;
determining a 3D position of the forklift pallet according to the 2D position of the forklift pallet;
and taking the 3D position of the forklift pallet as priori position information, and returning to execute the interception of the 3D initial image in the position area indicated by the priori position information to obtain a 3D image block containing the positive section of the forklift pallet until the 2D positions of the forklift pallet which are continuously determined twice are the same, and determining the jack 2D position in the 2D characteristic image of the forklift pallet which is determined the last time.
4. The method of claim 3, wherein determining the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template comprises:
sliding and matching the 2D characteristic images by using at least one preset tray template to obtain a matching result corresponding to each tray template;
determining the tray template with the optimal matching result as a preset target tray template;
and determining the position which is the best match between the preset target tray template and the 2D characteristic image as the 2D position of the forklift tray.
5. The method of claim 4, further comprising, prior to determining the 2D position of the forklift pallet from the 2D feature image and a preset target pallet template:
performing expansion processing on each pixel point of the 2D characteristic image to obtain an expanded 2D characteristic image;
correspondingly, the sliding matching is performed on the 2D feature image by using at least one preset tray template, and a matching result corresponding to each tray template is obtained, including:
and carrying out sliding matching on the expanded 2D characteristic images by using at least one preset tray template to obtain matching results corresponding to the tray templates.
6. The method according to claim 1, further comprising, after said transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, obtaining new coordinates of each pixel point in the 3D image block:
and determining an included angle between the X axis of the new coordinate system and the X axis of the world coordinate system corresponding to the 3D initial image as a rotation angle of the end face of the forklift pallet relative to the XOY face of the world coordinate system.
7. The method of claim 1, wherein the obtaining the prior location information corresponding to the forklift pallet comprises:
acquiring a k-D tree structure of pixel points in the 3D initial image according to Euclidean distance among the pixel points in the 3D initial image;
determining adjacent pixel points for each pixel point in the k-d tree structure;
acquiring normal line information of each pixel point in the k-d tree structure, wherein the normal line information indicates the normal line of a local plane formed by the pixel point and adjacent pixel points corresponding to the pixel point;
determining the normal included angle between each pixel point and the adjacent pixel point according to the normal information of each pixel point in the k-d tree structure;
Dividing the pixel points in the k-d tree structure into a plurality of pixel categories according to a preset included angle threshold and normal included angles between each pixel point and adjacent pixel points, wherein each pixel category comprises pixel points with the normal included angles smaller than the included angle threshold and adjacent pixel points, and the pixel points corresponding to each pixel category are pixel points in the same plane;
according to preset characteristic information of the forklift pallet, determining a normal section of the forklift pallet in a plane formed by pixel points corresponding to the pixel categories;
and determining prior position information corresponding to the forklift pallet according to a coordinate range corresponding to the forklift pallet normal section in a world coordinate system.
8. The method of claim 1, further comprising, prior to the acquiring the 3D initial image containing the forklift pallet and the corresponding prior positional information of the forklift pallet:
acquiring a rotation matrix between a world coordinate system and a 3D camera coordinate system;
acquiring a 3D initial image shot by a 3D camera, wherein coordinates of each pixel point of the 3D initial image belong to a 3D camera coordinate system;
and transforming the coordinates of each pixel point of the 3D initial image from the 3D camera coordinate system to the world coordinate system according to the rotation matrix to obtain the 3D initial image in the world coordinate system.
9. A forklift pallet positioning device, comprising:
the prior module is used for acquiring a 3D initial image containing the forklift tray and prior position information corresponding to the forklift tray;
the intercepting module is used for intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift tray positive section, wherein the forklift tray positive section is a forklift tray section parallel to the end face;
the transformation module is used for acquiring a 2D characteristic image projected on the front section of the forklift pallet by the 3D image block;
the positioning module is used for acquiring the jack 3D position of the forklift tray according to the jack 2D position determined in the 2D characteristic image;
the transformation module is used for obtaining a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block; acquiring eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues; transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, and obtaining new coordinates of each pixel point in the 3D image block, wherein the X axis, the Y axis and the Z axis of the new coordinate system are sequentially a first feature vector, a second feature vector and a third feature vector which are determined in sequence from large to small in the feature vectors, and a plane formed by the X axis and the Y axis of the new coordinate system is a plane in which the front section of the forklift pallet is positioned; and projecting new coordinates of each pixel point in the 3D image block along a Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
10. An apparatus, comprising: a memory, a processor and a computer program stored in the memory, the processor running the computer program to perform the forklift pallet positioning method of any one of claims 1 to 8.
11. A readable storage medium, wherein a computer program is stored in the readable storage medium, which when executed by a processor is adapted to implement the forklift pallet positioning method of any one of claims 1 to 8.
CN201910363982.3A 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium Active CN111738253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910363982.3A CN111738253B (en) 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910363982.3A CN111738253B (en) 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111738253A CN111738253A (en) 2020-10-02
CN111738253B true CN111738253B (en) 2023-08-08

Family

ID=72645887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910363982.3A Active CN111738253B (en) 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111738253B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554701A (en) * 2021-07-16 2021-10-26 杭州派珞特智能技术有限公司 PDS tray intelligent identification and positioning system and working method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5208753A (en) * 1991-03-28 1993-05-04 Acuff Dallas W Forklift alignment system
US5812395A (en) * 1994-11-16 1998-09-22 Masciangelo; Stefano Vision based forklift control system for autonomous pallet loading
CN105976375A (en) * 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 RGB-D-type sensor based tray identifying and positioning method
CN106672859A (en) * 2017-01-05 2017-05-17 深圳市有光图像科技有限公司 Method for visually identifying tray based on forklift and forklift
CN107218927A (en) * 2017-05-16 2017-09-29 上海交通大学 A kind of cargo pallet detecting system and method based on TOF camera
CN107507167A (en) * 2017-07-25 2017-12-22 上海交通大学 A kind of cargo pallet detection method and system matched based on a cloud face profile
CN108502810A (en) * 2018-04-13 2018-09-07 深圳市有光图像科技有限公司 A kind of method and fork truck of fork truck identification pallet
CN109520418A (en) * 2018-11-27 2019-03-26 华南农业大学 A kind of pallet method for recognizing position and attitude based on two dimensional laser scanning instrument
JP2019048696A (en) * 2017-09-11 2019-03-28 Kyb株式会社 Information processing device and information processing method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Junhao Xiao et al., "Pallet recognition and localization using an RGB-D camera", International Journal of Advanced Robotic Systems (full text) *

Also Published As

Publication number Publication date
CN111738253A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN110807350B (en) System and method for scan-matching oriented visual SLAM
US9378431B2 (en) Method of matching image features with reference features and integrated circuit therefor
US8774510B2 (en) Template matching with histogram of gradient orientations
Aldoma et al. CAD-model recognition and 6DOF pose estimation using 3D cues
US8780110B2 (en) Computer vision CAD model
CN111145214A (en) Target tracking method, device, terminal equipment and medium
US8798377B2 (en) Efficient scale-space extraction and description of interest points
CN105139416A (en) Object identification method based on image information and depth information
CN109740633B (en) Image similarity calculation method and device and storage medium
US11080878B2 (en) Method and apparatus for detecting 3D object from 2D image
JP6278276B2 (en) Object identification device, object identification method, and program
US11232589B2 (en) Object recognition device and object recognition method
EP3766644A1 (en) Workpiece picking device and workpiece picking method
CN111199562A (en) System and method for rapid object detection while robotic picking
CN111738253B (en) Fork truck tray positioning method, device, equipment and readable storage medium
Holz et al. Fast edge-based detection and localization of transport boxes and pallets in RGB-D images for mobile robot bin picking
CN114358133B (en) Method for detecting looped frames based on semantic-assisted binocular vision SLAM
EP2993623B1 (en) Apparatus and method for multi-object detection in a digital image
CN113379826A (en) Method and device for measuring volume of logistics piece
US12002260B2 (en) Automatic topology mapping processing method and system based on omnidirectional image information
JP3436235B2 (en) Moving object image search method and apparatus
US6625332B1 (en) Computer-implemented image registration
Wu et al. Real-time robust algorithm for circle object detection
CN115965927B (en) Pavement information extraction method and device, electronic equipment and readable storage medium
TWI804845B (en) Object positioning method and object positioning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant