CN110344621B - Wheel point cloud detection method for intelligent garage

Wheel point cloud detection method for intelligent garage

Info

Publication number
CN110344621B
CN110344621B (application CN201910511231.1A)
Authority
CN
China
Prior art keywords
point cloud
wheel
target
dimensional
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910511231.1A
Other languages
Chinese (zh)
Other versions
CN110344621A (en)
Inventor
梅天灿
郑文远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910511231.1A priority Critical patent/CN110344621B/en
Publication of CN110344621A publication Critical patent/CN110344621A/en
Application granted granted Critical
Publication of CN110344621B publication Critical patent/CN110344621B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04H BUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H6/00 Buildings for parking cars, rolling-stock, aircraft, vessels or like vehicles, e.g. garages
    • E04H6/08 Garages for many vehicles
    • E04H6/12 Garages for many vehicles with mechanical means for shifting or lifting vehicles
    • E04H6/18 Garages for many vehicles with mechanical means for shifting or lifting vehicles with means for transport in vertical direction only or independently in vertical and horizontal directions
    • E04H6/182 Garages for many vehicles with mechanical means for shifting or lifting vehicles with means for transport in vertical direction only or independently in vertical and horizontal directions using car-gripping transfer means
    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04H BUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H6/00 Buildings for parking cars, rolling-stock, aircraft, vessels or like vehicles, e.g. garages
    • E04H6/42 Devices or arrangements peculiar to garages, not covered elsewhere, e.g. securing devices, safety devices, monitoring and operating schemes; centering devices
    • E04H6/422 Automatically operated car-parks
    • E04H6/424 Positioning devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a wheel point cloud detection method for an intelligent garage, which comprises the following steps: collecting a three-dimensional point cloud of the parking area scene with a two-dimensional single-line laser radar; labeling the data set; locating the wheel positions with a three-dimensional target detection network model based on unstructured point clouds; determining the target point cloud range from the set spatial candidate boxes and extracting the wheel target point cloud in the region of interest; and representing the wheel target by a predicted six-degree-of-freedom spatial minimum bounding box that contains both the position and the attitude information of the wheel. The invention provides a deep learning algorithm for wheel target detection in intelligent garage scenes; compared with traditional target detection methods it is efficient, has strong localization capability, accurate attitude recognition and good portability, and has been initially applied in urban intelligent garages.

Description

Wheel point cloud detection method for intelligent garage
Technical Field
The invention relates to the field of computer vision and artificial intelligence, and in particular to a wheel point cloud detection method for an intelligent garage based on two-dimensional single-line laser acquisition, which can accurately locate a wheel and identify its attitude.
Background
Most automated garages move vehicles to designated parking spaces with intelligent robots. Many parking robots currently on the market can, to a certain extent, make use of a limited land area to obtain roughly double the number of parking spaces. However, the vehicle access method such garages adopt is essentially passive. A tray-type robot, for example, requires the driver to park the vehicle on a vehicle-carrying plate before the carrier robot moves it to the corresponding parking space; the space on the carrying plate is narrow, so the driver must have good parking skill and must park the vehicle at the correct angle in a dedicated carrying tray or area.
The parking method of this intelligent garage abandons contact sensors such as piezoelectric sensors and does not use a vehicle-carrying plate. Using the data collected by lidars distributed above the parking area and on the manipulator, vehicles parked in an arbitrary pose are recognized in space, grabbed to a suspended position and then placed in any parking space, which greatly reduces the requirements on the site and on the driver's parking skill. During parking, accurately locating the vehicle to be parked is the key to intelligent parking, and obtaining the wheel position information in particular determines whether the vehicle can be grabbed successfully. The center positions of the four wheels therefore need to be accurately detected from the vehicle point cloud obtained by lidar scanning, and the problem of safely grabbing the vehicle to the designated parking space is converted into a wheel target detection problem.
At present there is no method that detects wheel targets from laser point clouds in an intelligent garage scene. The closest existing schemes are the following. In the patent literature, a laser radar setting method and a parking lot (publication No. CN109581414A) scans the inner sides of the wheels with lidars distributed on both sides of a target lane as the vehicle passes, obtaining wheel information with thickness characteristics for vehicle positioning. A 3D target detection method based on point cloud data (publication No. CN109581414A) designs a 3D target detection network model body structure based on point cloud data to detect targets of interest in road scenes, but it needs to combine the point cloud with images collected by a camera for scene perception. A laser scanner is applied in the document (Wang Hai, Research on automatic parking environment perception technology based on laser radar, 2013) to obtain obstacles such as vehicles, walls and pedestrians in the range ahead and to analyze feasible parking spaces. The vehicle locating method, automatic driving control method and related system disclosed in the patent literature (publication No. CN109596140A) obtains the distance and direction of a wheel of a travelling vehicle relative to a lidar by transmitting a probe signal to the wheel and receiving the signal reflected by it, but this also requires the vehicle to move along a fixed lane and cannot recognize the wheel attitude.
Disclosure of Invention
Aiming at the problems in the prior art, the invention aims to provide a wheel point cloud detection method in an intelligent garage scene. In order to achieve the above object, the present invention provides a wheel target detection technique based on deep learning, including the following steps:
step 1, collecting a three-dimensional point cloud of a parking area scene by using a two-dimensional single-line laser radar, and making a corresponding data set;
step 2, determining target pose information based on a three-dimensional target detection model of the unstructured point cloud;
step 3, determining the predicted target category based on a three-dimensional target detection model of the unstructured point cloud;
and step 4, analyzing the wheel pose information predicted by the three-dimensional target detection model, judging whether the wheels are straightened and where they lie in the manipulator coordinate system, and feeding back to the manipulator arranged above the intelligent garage a signal indicating whether to grab and the positions of the grabbing points.
Further, the specific implementation manner of step 1 comprises the following sub-steps,
step 1.1, a vehicle taking manipulator is fixedly connected to a vertical mechanical frame, a vehicle taking cover used for clamping a vehicle is fixed on the manipulator and comprises a vehicle taking cover shell, clamping mechanisms used for clamping the tire are respectively arranged at positions, corresponding to four tires of the vehicle, of the bottom of the shell, four laser radars are respectively arranged on the clamping mechanisms, and when the vehicle taking cover shell falls to the positions near the wheels along with the manipulator, the laser radars start to work and collect original three-dimensional point cloud data of the four wheels;
step 1.2, establishing a laser three-dimensional coordinate system, taking an initial position before laser descending as an origin of the coordinate system, taking a moving direction of the laser when the laser descends vertically as a positive direction of a z axis, taking a direction of 0 degrees during two-dimensional laser scanning as a positive direction of an x axis, and taking a direction of 90 degrees as a positive direction of a y axis, and converting distance value information obtained by scanning to the laser three-dimensional coordinate system in each falling process of the laser to obtain a point cloud sample containing a wheel target;
step 1.3, obtaining garage scene point cloud samples represented in three-dimensional coordinates from steps 1.1 and 1.2, labeling the points belonging to the wheel parts in each sample point by point, calculating the coordinates of the eight vertices of the minimum circumscribed cube of each wheel target point cloud and of the cube centroid by principal component analysis, and taking the 12 parameters given by the three-dimensional coordinates of three fixed coplanar vertices and of the centroid as the position label of each target; setting a (1, 0) vector as the class label of each wheel target, and dividing the point cloud data set into a training set and a test set in a proper proportion.
Further, the specific implementation manner of calculating, in step 1.3, the coordinates of the eight vertices of the minimum circumscribed cube of each wheel target point cloud and of the cube centroid by the principal component analysis method, and taking the three-dimensional coordinates of the three fixed coplanar vertices and of the centroid as the position label of each target, is as follows,
(1) calculating the centroid p_centroid of the input point cloud;
(2) obtaining the covariance matrix M_cloud of the point cloud from the input point cloud and p_centroid, and calculating the eigenvalues and eigenvectors of the covariance matrix, the directions of the eigenvectors being the principal directions of the point cloud;
(3) after the principal directions are corrected to be mutually perpendicular, finding the minimum circumscribed cube of the point cloud along the three principal directions to obtain the minimum bounding box used for labeling; for the minimum bounding box of each labeled wheel, taking the three fixed coplanar vertices p_0, p_1, p_2 and the centroid p_ctr, 12 parameters in total, namely p_ctr(x_ctr, y_ctr, z_ctr), p_0(x_0, y_0, z_0), p_1(x_1, y_1, z_1), p_2(x_2, y_2, z_2), as the position label of each target.
Further, the specific implementation of step 2 includes the following sub-steps,
step 2.1, sampling point cloud samples of different sizes so that every sample entering the three-dimensional target detection model has a uniform size, wherein the three-dimensional target detection network is based on a PointNet network with a target localization module added;
step 2.2, for one frame of point cloud Cloud, the maximum number of targets is n_obj; the wheel target point cloud in Cloud is copied equivalently until Cloud contains n_obj targets, ensuring that the size of the point cloud input to the three-dimensional target detection model is fixed;
step 2.3, determining the range of all point cloud samples in the xyz direction of the three-dimensional space, and equally dividing the three-dimensional space into n candidate regions along the xyz direction; inputting Cloud into a positioning module of a three-dimensional target detection model for feature extraction, and outputting predicted wheel position information;
step 2.4, among the n candidate region centers, find the candidate region center closest to the center p_ctr of the labeled wheel target, thereby determining the candidate box C_chs in which the wheel lies; according to C_chs, screen out the n_obj groups of predicted landmark points corresponding to the n_obj candidate regions, input them together with the landmark points of the labeled wheel targets into the loss layer to calculate the regression loss L_reg, and optimize the loss function by iterative training until L_reg converges.
Further, the specific implementation manner of sampling point cloud samples with different sizes in step 2.1 to make each sample entering the three-dimensional target detection model uniform in size is as follows,
setting N as the size of each sample, the original point cloud to be sampled being C_ori with size N_input, the point cloud labeled as target being C_tgt with N_tgt points, and accordingly the background point cloud C_bgd having N_bgd = N_input - N_tgt points; if the size of the original point cloud is smaller than N, points randomly sampled from the original point cloud C_ori are added until the point cloud size is expanded to N; if the size of the original point cloud is larger than N, N_tgt is further compared with N: when N > N_tgt, points randomly sampled from C_bgd are added until the point cloud size is N; when N < N_tgt, background points are no longer considered and only N points are taken from C_tgt as the sampled point cloud C_smp.
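A minimal sketch of this fixed-size sampling rule, assuming the input is an N_input x 3 array together with a boolean mask marking the wheel (target) points; the function name and defaults are illustrative:

```python
import numpy as np

def sample_fixed_size(cloud, target_mask, n=16384, rng=None):
    """Return exactly n points, keeping labeled wheel (target) points whenever possible."""
    if rng is None:
        rng = np.random.default_rng()
    target, background = cloud[target_mask], cloud[~target_mask]   # C_tgt, C_bgd
    n_input = cloud.shape[0]
    if n_input < n:
        # Too few points: duplicate randomly chosen original points until the size reaches n.
        extra = cloud[rng.integers(0, n_input, n - n_input)]
        return np.vstack([cloud, extra])
    if n > target.shape[0]:
        # Keep every target point, fill up with randomly sampled background points.
        idx = rng.choice(background.shape[0], n - target.shape[0], replace=False)
        return np.vstack([target, background[idx]])
    # More target points than n: ignore the background and sample n target points only.
    return target[rng.choice(target.shape[0], n, replace=False)]
```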
Further, the specific implementation manner of step 2.3 is as follows: the coordinate range (x_min, x_max, y_min, y_max, z_min, z_max) of the point cloud is determined according to all the input point cloud data and a large point cloud space cube C is established; C is equally divided along the x, y and z directions according to a set proportion to generate n cube candidate boxes C_0, C_1, ..., C_n; Cloud, of size i x 3, is input into the localization module of the three-dimensional target detection model for feature extraction and the predicted wheel position information is output; specifically, Cloud of size i x 3 is input into the localization module of the three-dimensional target detection model, corrected through a spatial transformation network, and passed through two layers of 3 x 3 convolution to generate n x 64 feature vectors; the feature vectors enter a second spatial transformation network for feature alignment and pass through three layers of 3 x 3 convolution to generate n x 1024 feature vectors; after implicit features are extracted through a fully connected layer, an n x 4 x 3 vector is output, representing that the n candidate regions each regress one group of landmark point offsets Δ_ctr, Δ_0, Δ_1, Δ_2; the regression point positions output by each candidate region are the sums of the three coplanar vertices and center point coordinate C_ctr, C_0, C_1, C_2 of that candidate region and Δ_ctr, Δ_0, Δ_1, Δ_2; that is, the regression point position p is the sum of the four landmark points of the candidate box and the offsets:
p_ctr = C_ctr + Δ_ctr, p_i = C_i + Δ_i (i = 0, 1, 2).
Further, the loss function in step 2.4 is as follows,
Δp = || p_pred - p_gt ||_2^2,
wherein p_pred and p_gt are respectively the predicted and labeled landmark coordinates, p_pred is derived from the network regression and p_gt from the minimum bounding box obtained in step 1.3; with the center point offset Δp_ctr and the vertex offsets Δp_i, the total localization loss L_reg of the wheel target localization model is:
L_reg = Δp_ctr + Δp_0 + Δp_1 + Δp_2.
Through multiple iterations, L_reg converges, indicating that the wheel target localization model is approximately optimal.
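The localization loss as described, a sum of squared Euclidean distances over the four landmark points of every target, can be written compactly as follows (the array shapes are assumptions):

```python
import numpy as np

def localization_loss(pred_points, gt_points):
    """L_reg: sum of squared Euclidean distances over the four landmark points
    (center + three coplanar vertices) of every wheel target.
    pred_points, gt_points: arrays of shape (n_obj, 4, 3)."""
    diff = pred_points - gt_points            # per-landmark offsets
    return np.sum(diff ** 2)                  # sum of ||Δp||_2^2 over all landmarks and targets
```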
Further, the specific implementation of step 3 includes the following sub-steps,
step 3.1, according to the candidate box C_chs in which the wheel target is predicted to lie in step 2, cropping out the point clouds contained in the screened candidate boxes to obtain a new target point cloud Cloud_ROI, all candidate boxes of non-target parts being classified as (0, 1);
step 3.2, inputting Cloud_ROI into the classification module of the three-dimensional target detection model, calculating a classification score for each candidate region containing a target, finally combining the classification scores of all candidate regions containing and not containing wheel targets into a one-dimensional vector of length n, calculating the Softmax cross-entropy loss L_cls together with the label classes, and using an Adam optimizer to minimize L_cls;
and step 3.3, inputting the point cloud samples in the test set into the trained three-dimensional target detection model to obtain the position of the minimum bounding box of the predicted wheel target in each test point cloud sample and the prediction scores; if the foreground score is higher than the background score, the bounding box is judged to be the minimum circumscribed bounding box of a wheel target, and if the foreground score is lower than the background score, the point cloud contained in the bounding box is judged to belong to the background.
Further, the specific implementation of step 4 includes the following sub-steps,
step 4.1, calculating the angle between the predicted minimum bounding box of each wheel target and the yz plane of the laser three-dimensional coordinate system, the specific implementation being as follows: from the predicted landmark point coordinates, the two edge vectors p_0p_1 and p_0p_2 of the bounding box are obtained; the normal vector of the yz plane is n_x = (1, 0, 0); the angle between each vector v and n_x is θ = arccos(v · n_x / (|v| |n_x|)), so the angles between the two vectors and the yz plane are θ_1' = 90° - θ_1 and θ_2' = 90° - θ_2 respectively; comparing θ_1' and θ_2', the smaller angle is taken as the predicted angle θ_pred between the wheel target and the yz plane; with the angle θ_car between the overall orientation of the vehicle body and the yz plane of the laser three-dimensional coordinate system known, the angle between the vehicle body and the wheel point cloud on the yz plane of the laser three-dimensional coordinate system is obtained as Δθ = |θ_pred - θ_car|;
and step 4.2, comparing the wheel-to-body angle Δθ with a preset threshold; if Δθ is smaller than the threshold, the system feeds back an 'allow grabbing' signal to the manipulator, which calculates the grabbing points from the predicted wheel minimum bounding box information and completes the grab-and-lift process; if Δθ is larger than the threshold, the system feeds back a 'wheels not straightened, grabbing not allowed' signal to the manipulator, the manipulator rises directly to the origin of the laser three-dimensional coordinate system, and a message is sent to remind the driver to perform further operations and adjustments.
The technical scheme provided by the invention has the following beneficial effects: (1) a wheel point cloud detection method based on deep learning and oriented to intelligent garages is provided, realizing highly automated, real-time, online detection of static wheel targets; (2) the targets are localized and classified through two rounds of feature extraction, and a 3D RPN interface is introduced to regress the target positions, forming a complete multi-target recognition process; (3) for representing the recognition result, the existing point-cloud voxelization approach is abandoned and the 3D grid is replaced by the minimum circumscribed cube, so that the proportion of the wheel target volume inside the predicted bounding box is maximized as far as possible, achieving the best representation effect.
Drawings
FIG. 1 is a wheel point cloud detection flow chart of the present invention.
Fig. 2 is a schematic diagram of laser radar scanning distance value-two-dimensional coordinate conversion.
Fig. 3 is a diagram of the effect of the minimum bounding box of a wheel target marked by a principal component analysis method, wherein (a), (b) and (c) respectively represent minimum circumscribed cubes for finding point clouds along three main directions.
FIG. 4 is a schematic diagram of the wheel target detection network characterizing a target by predicted landmark points.
FIG. 5 is a flow chart of an input point cloud sampling.
Fig. 6 is a block diagram of a wheel target detection network positioning module.
Fig. 7 is a schematic diagram illustrating a process of screening candidate boxes by the wheel target detection network positioning module.
Fig. 8 is an overall configuration diagram of the wheel target detection network.
FIG. 9 is a schematic diagram of the three-dimensional positioning criterion IoU (a) and of the discrete inside/outside point decision method (b).
FIG. 10 is a schematic diagram of a model attitude accuracy evaluation method.
Detailed Description
For a better understanding of the technical solutions of the present invention, the following detailed description of the present invention is made with reference to the accompanying drawings. The main steps of the flow in the embodiment are shown in fig. 1, and the specific implementation process is as follows:
step 1, collecting three-dimensional point cloud of a parking area scene by using a two-dimensional single-line laser radar, and manufacturing a point cloud data set under an intelligent garage scene. Before the data set is produced, the original distance information obtained by scanning needs to be converted into a point cloud data form expressed by three-dimensional coordinates. The method comprises the following steps:
Step 1.1, the vehicle-taking manipulator is fixedly connected to the vertical mechanical frame, and a vehicle-taking cover used for clamping the vehicle is fixed on the manipulator. The vehicle-taking cover comprises a housing; at the bottom of the housing, a clamping mechanism for gripping a tire is provided at each of the positions corresponding to the four tires of the vehicle, and the four lidars are mounted on these clamping mechanisms respectively. When the housing descends with the manipulator to near the wheels, the lidars start working and collect the raw three-dimensional point cloud data of the four wheels.
Step 1.2, the laser three-dimensional coordinate system is established. The initial position of the laser before it descends is taken as the origin of the coordinate system, and the direction in which the laser moves as it descends vertically is the positive z axis; the z coordinate is provided by the PLC that controls the manipulator, the point 1.8 m above the ground is taken as the zero point of the z axis, and the positive z axis points vertically toward the ground. The distance information acquired by the laser is resolved into xy-plane coordinates, as shown in Fig. 2: O is the laser origin, the 0° direction of the laser scan is the positive x axis and the 90° direction is the positive y axis. d is the straight-line distance from the i-th scan point p_i to O, the scanning resolution of the lidar is 0.25°, and θ is the angle between the line p_iO and the y axis; by trigonometry,
p_x = d · sin θ, p_y = d · cos θ,
where p_x and p_y are the coordinates of p_i on the x and y axes. The distance values obtained during each descent of the laser are converted to the laser three-dimensional coordinate system, yielding a point cloud sample containing the wheel target.
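As an illustration of this conversion, a sketch of turning one scan line plus the PLC-supplied descent depth into 3D points might look like the following; the function name and arguments are assumptions, while the 0.25° resolution and the axis conventions come from the description above:

```python
import numpy as np

def scan_to_points(distances, z, angle_resolution_deg=0.25):
    """Convert one 2D lidar scan plus the current descent depth z (from the PLC)
    into 3D points in the laser coordinate system (0 deg = +x axis, 90 deg = +y axis)."""
    d = np.asarray(distances, dtype=float)
    angles = np.deg2rad(np.arange(d.size) * angle_resolution_deg)
    x = d * np.cos(angles)
    y = d * np.sin(angles)
    return np.stack([x, y, np.full_like(x, z)], axis=1)   # one (x, y, z) row per beam
```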
Step 1.3, garage scene point cloud samples represented in three-dimensional coordinates are obtained according to steps 1.1 and 1.2, and the points belonging to the wheel parts in each sample are labeled point by point. The coordinates of the eight vertices of the minimum circumscribed cube of each wheel target point cloud and of the cube centroid are then calculated by principal component analysis, which is implemented as follows:
(1) calculate the centroid p_centroid of the input point cloud;
(2) obtain the covariance matrix M_cloud of the point cloud from the input point cloud and p_centroid, and compute the eigenvalues and eigenvectors of the covariance matrix; the directions of the eigenvectors are the principal directions of the point cloud;
(3) after the principal directions are corrected to be mutually perpendicular, find the minimum circumscribed cube of the point cloud along the three principal directions.
The point cloud minimal bounding box marked by the method is shown in figure 3.
For the minimum bounding box of each labeled wheel, the three fixed coplanar vertices p_0, p_1, p_2 and the centroid p_ctr are taken; their three-dimensional coordinates give 12 parameters in total, i.e. p_ctr(x_ctr, y_ctr, z_ctr), p_0(x_0, y_0, z_0), p_1(x_1, y_1, z_1), p_2(x_2, y_2, z_2), and a (1, 0) vector is set as the class label of each wheel target. The point cloud data set obtained in this way contains 1260 single-target samples. Multi-target samples are obtained by splicing two different single wheel target samples together; to further expand the data set, a random offset is introduced, each point cloud being randomly translated by ΔL (1 m < ΔL < 2 m) along the x, y and z directions. There are 660 such derived samples; together with the original samples this gives 1920 samples, and the number of points in each sample ranges from 8192 to 32768. From the 1920 samples, 900 are randomly selected as the training set and the remaining 1020 as the test set.
Step 2, the training set is input into the three-dimensional target detection network based on unstructured point clouds. The network is based on PointNet with a target localization module added, and the position and attitude information of the wheel targets is determined after training. The PointNet network is prior art; its structure can be found in Qi C. R., Su H., Mo K., Guibas L. J., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, CVPR 2017.
The specific implementation method of the step 2 is as follows:
Step 2.1, point cloud samples of different sizes are sampled so that each sample entering the network has the unified size N = 16384; the process is shown in Fig. 5. The original point cloud to be sampled is C_ori with size N_input; the point cloud labeled as target is C_tgt with N_tgt points; accordingly, the background point cloud C_bgd has N_bgd = N_input - N_tgt points. If the size of the original point cloud is smaller than N, points randomly sampled from the original point cloud C_ori are added until the size is expanded to N. If the size of the original point cloud is larger than N, N_tgt must be further compared with N: when N > N_tgt, points randomly sampled from C_bgd are added until the point cloud size is N; when N < N_tgt, background points are no longer considered and only N points are taken from C_tgt as the sampled point cloud C_smp.
Step 2.2, for one frame of point cloud Cloud, the maximum number of targets is n_obj = 2. Because the target detection network requires the size of the input point cloud to be fixed in every training iteration, the labeled wheel targets in Cloud are copied equivalently (one target may be copied, or targets may be copied at random) until Cloud contains n_obj targets, ensuring that the size of the point cloud input to the classification network in step 3 is fixed.
Step 2.3, the coordinate range (x_min, x_max, y_min, y_max, z_min, z_max) of the point cloud is determined according to all the input point cloud data and a large point cloud space cube C is established; C is equally divided along the x, y and z directions according to a certain set proportion, generating n cube candidate boxes C_0, C_1, ..., C_n. The localization module of the model is shown in Fig. 6. Specifically, Cloud of size i x 3 is input into the localization module of the three-dimensional target detection model and corrected through a spatial transformation network; two layers of 3 x 3 convolution generate n x 64 feature vectors, which enter a second spatial transformation network for feature alignment; three layers of 3 x 3 convolution then generate n x 1024 feature vectors, and after implicit features are extracted through a fully connected layer, an n x 4 x 3 vector is output, representing the group of regressed landmark point offsets Δ_ctr, Δ_0, Δ_1, Δ_2 of each of the n candidate regions. The regression point positions output by each candidate region are the sums of the three coplanar vertices and center point coordinate C_ctr, C_0, C_1, C_2 of that candidate region and Δ_ctr, Δ_0, Δ_1, Δ_2; that is, the regression point position p is the sum of the four landmark points of the candidate box and the offsets:
p_ctr = C_ctr + Δ_ctr, p_i = C_i + Δ_i (i = 0, 1, 2).
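A rough sketch of the candidate box construction and the regression p = C + Δ is given below; the split proportion (4, 4, 2), the choice of which face supplies the three coplanar corners, and the function names are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def make_candidate_boxes(points, splits=(4, 4, 2)):
    """Divide the bounding cube of all input points into n = prod(splits) candidate boxes,
    returning for each box its center and three coplanar corner landmarks (C_ctr, C_0, C_1, C_2)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    edges = [np.linspace(lo[d], hi[d], splits[d] + 1) for d in range(3)]
    boxes = []
    for i in range(splits[0]):
        for j in range(splits[1]):
            for k in range(splits[2]):
                c_lo = np.array([edges[0][i], edges[1][j], edges[2][k]])
                c_hi = np.array([edges[0][i + 1], edges[1][j + 1], edges[2][k + 1]])
                center = (c_lo + c_hi) / 2
                p0 = c_lo                                            # three corners on one face
                p1 = np.array([c_lo[0], c_lo[1], c_hi[2]])
                p2 = np.array([c_lo[0], c_hi[1], c_lo[2]])
                boxes.append(np.stack([center, p0, p1, p2]))
    return np.stack(boxes)                                           # shape (n, 4, 3)

def regression_points(candidate_landmarks, offsets):
    """p = C + Δ for every candidate region; both arrays have shape (n, 4, 3)."""
    return candidate_landmarks + offsets
```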
Step 2.4, determine in which candidate box each labeled wheel target falls: the candidate box center closest to the labeled wheel target center p_ctr is found, which determines the candidate box C_chs in which the wheel lies; n_obj candidate boxes are screened out in total, the screening process being shown in Fig. 7. The sum of squared Euclidean distances between the n_obj x 4 predicted landmark points and the landmark points of the labeled targets is computed; to keep the optimization algorithm from falling into local minima, the squared L2 norm is used as the regression loss of the four points:
Δp = || p_pred - p_gt ||_2^2,
where p_pred and p_gt are respectively the predicted and labeled landmark coordinates; p_pred is derived from the network regression and p_gt from the minimum bounding box obtained in step 1.3. With the center point offset Δp_ctr and the vertex offsets Δp_i, the total localization loss L_reg of the wheel target localization model is:
L_reg = Δp_ctr + Δp_0 + Δp_1 + Δp_2.
After 1000 iterations, L_reg converges, indicating that the wheel target localization model is approximately optimal.
Step 3, determining the predicted target category based on the three-dimensional target detection model of the unstructured point cloud, wherein the implementation mode comprises the following steps:
Step 3.1, according to the candidate box C_chs in which the wheel target is predicted to lie in step 2, the point clouds contained in the screened candidate boxes are cropped out to obtain a new target point cloud
Cloud_ROI = Σ C_chs.
Specifically, the coordinates of the points in C_chs remain unchanged, the point coordinates of the remaining candidate boxes C_elm are set to (0, 0, 0), and all C_elm are classified as (0, 1) vectors, denoting background.
Step 3.2, Cloud_ROI is sent to the target classification network; the localization and classification networks together form the complete wheel target detection network, as shown in Fig. 8. The classification module has exactly the same network structure as the localization module, but its input is the cropped point cloud Cloud_ROI; the fully connected layers output n_obj x 1024 feature vectors, which pass through a softmax classifier to give an n_obj x 2 vector representing the classification score of each C_chs with respect to the target. The scores of all C_chs and C_elm are combined into a one-dimensional vector h_pred of length n, the Softmax cross-entropy loss L_cls is computed jointly with the label h, and an Adam optimizer is used to minimize L_cls. After 1000 iterations, L_cls converges, indicating that the wheel target classification model is approximately optimal.
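The classification scoring described above reduces to a softmax over two classes followed by a cross-entropy against one-hot labels; a small NumPy sketch under assumed array shapes is:

```python
import numpy as np

def softmax_cross_entropy(scores, labels):
    """L_cls for candidate boxes.
    scores: (n, 2) raw foreground/background scores; labels: (n, 2) one-hot,
    (1, 0) = wheel target, (0, 1) = background."""
    shifted = scores - scores.max(axis=1, keepdims=True)             # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -np.mean(np.sum(labels * np.log(probs + 1e-12), axis=1))
```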
Step 3.3, after 1000 training iterations the model is saved to a model.ckpt file. In the test stage, the program reads the trained model from model.ckpt; the point cloud samples in the test set are input into the wheel target detection model and pass through the localization and classification modules respectively to obtain, in each test point cloud sample, the position of the minimum bounding box of the predicted wheel target and the classification scores (s_fg, s_bg). If the foreground score s_fg is higher than the background score s_bg, the bounding box is judged to be the minimum circumscribed bounding box of a wheel target; if s_fg is lower than s_bg, the point cloud contained in the bounding box is judged to belong to the background.
Step 4, analyzing the wheel pose information predicted by the three-dimensional target detection model, judging whether the wheels are straightened and where they lie in the manipulator coordinate system, and feeding back to the manipulator arranged above the intelligent garage a signal indicating whether to grab and the positions of the grabbing points; the implementation comprises the following steps:
step 4.1, calculating and predicting the minimum external enclosure frame and laser three-dimensional of each wheel targetAngle theta of yz plane of coordinate systempredThe specific implementation mode is as follows: obtaining a vector according to the predicted coordinates of the mark points
Figure BDA0002093561720000092
The normal vector of the yz plane is
Figure BDA0002093561720000093
Find the angle between the vector and the yz plane
Figure BDA0002093561720000101
Vector quantity
Figure BDA0002093561720000102
The included angles with yz plane are respectively theta1’=90°-θ1,θ2’=90°-θ2Comparison of θ1' and theta2' the smaller angle is used as the predicted included angle theta between the wheel target and the yz planepred. The included angle theta between the overall orientation of the known vehicle body and the yz plane of the laser three-dimensional coordinate systemcarIn the case of (1), an included angle △ theta of a point cloud of a vehicle body and a wheel point on a yz plane of a laser three-dimensional coordinate system is obtainedpredcar|。
Step 4.2, Δθ is compared with the preset threshold θ_threshold = 5°. If Δθ < θ_threshold, the system feeds back an 'allow grabbing' signal to the manipulator, which calculates the grabbing points from the predicted wheel minimum bounding box information and completes the grab-and-lift process; if Δθ > θ_threshold, the system feeds back a 'wheels not straightened, grabbing not allowed' signal to the manipulator, the manipulator rises directly to the origin of the laser three-dimensional coordinate system, and a message is sent to remind the driver to perform further operations and adjust the wheel attitude. The program then ends, completing one full vehicle-grabbing cycle.
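A sketch of the attitude check and grab decision of steps 4.1 and 4.2, under the assumption that the two box edges p0->p1 and p0->p2 are the vectors compared with the yz plane (the function names are illustrative):

```python
import numpy as np

def wheel_yz_angle(p0, p1, p2):
    """Angle (deg) between the predicted wheel bounding box and the yz plane,
    taken as the smaller of the angles of the two box edges p0->p1 and p0->p2."""
    n_x = np.array([1.0, 0.0, 0.0])                 # normal vector of the yz plane
    angles = []
    for v in (p1 - p0, p2 - p0):
        cos_t = np.dot(v, n_x) / np.linalg.norm(v)
        theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
        angles.append(abs(90.0 - theta))            # angle between the edge and the plane itself
    return min(angles)

def grab_decision(theta_pred, theta_car, threshold_deg=5.0):
    """True ('allow grabbing') when the wheel is straightened enough relative to the body."""
    return abs(theta_pred - theta_car) < threshold_deg
```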
The effectiveness of the present invention is verified by experiments below.
Experiment: the samples have only two classes, foreground and background, so evaluation of the wheel target detection model focuses on the localization and attitude results of the predicted wheels. For the predicted localization part, the intersection-over-union IoU of the predicted box and the labeled box in three-dimensional space is used as the evaluation index in the model test. With the predicted box volume V_pred and the labeled box volume V_gt,
IoU = (V_pred ∩ V_gt) / (V_pred ∪ V_gt).
Fig. 9(a) is a schematic diagram of the cube IoU. Within a spatial range, two cubes may intersect at any point on any face and the union region is completely irregular, so a definite value of the union volume cannot be found by conventional mathematical methods. Since an exact solution cannot be obtained, the experiment uses a discrete calculation: each bounding box is divided into m parts along its length, width and height, so that every cube is represented as a point set containing m^3 points, and IoU is approximated by determining the proportion of discrete points inside the cubes. As shown in Fig. 9(b), a tested point is judged to be an outer point (such as point B) if it lies on the same side of some pair of parallel faces of the bounding box, and an inner point (such as point A) if it lies between every pair of parallel faces. The specific test observes the angles between the line from the tested point to a vertex of an edge and the direction vector of that edge: for the inner point A, one of θ_a, θ_b is acute and the other obtuse; for the outer point B, there is an edge (or face normal) direction for which θ_c and θ_d are both acute or both obtuse; that is,
cos θ_a · cos θ_b < 0 for an inner point, and cos θ_c · cos θ_d > 0 for an outer point.
If n_IoU points are found to be inner points belonging to both the labeled box and the predicted box, IoU can be approximated as
IoU ≈ n_IoU / (2 m^3 - n_IoU).
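The discrete IoU approximation can be sketched as follows; for brevity the sketch assumes axis-aligned boxes given by their low and high corners, whereas the angle-based inside/outside test above generalizes this to arbitrarily oriented boxes:

```python
import numpy as np

def approx_iou(box_pred, box_gt, m=20):
    """Approximate 3D IoU of two axis-aligned boxes, each given as a (lo, hi) corner pair,
    by discretising each box into m^3 points and counting points lying inside both boxes."""
    def grid(box):
        lo, hi = box
        axes = [np.linspace(lo[d], hi[d], m) for d in range(3)]
        return np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

    def inside(points, box):
        lo, hi = box
        return np.all((points >= lo) & (points <= hi), axis=1)

    pts = np.vstack([grid(box_pred), grid(box_gt)])                  # 2 * m^3 test points
    n_iou = np.sum(inside(pts, box_pred) & inside(pts, box_gt))      # points inside both boxes
    return n_iou / (2 * m ** 3 - n_iou)
```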
For the predicted attitude part, the angle between the direction vectors of the lines connecting corresponding labeled and predicted vertices is taken as the criterion. Referring to Fig. 10, the direction vector of the line between the labeled regression points p_0 and p_1 is
v_gt = p_1 - p_0,
and the direction vector of the line between the predicted points p_0' and p_1' is
v_pred = p_1' - p_0'.
The angle between the predicted and labeled direction vectors is then
θ_0 = arccos( v_gt · v_pred / (|v_gt| |v_pred|) ).
The attitude angles θ_1 and θ_2 corresponding to the other two points are obtained in the same way. θ_0, θ_1 and θ_2 represent the accuracy of the predicted minimum bounding box in the length, width and height directions; the smaller the attitude angles, the closer the attitude of the predicted box is to that of the labeled box.
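A sketch of this attitude evaluation, assuming the landmarks are ordered (p_ctr, p0, p1, p2) and that the three compared directions run from p0 to p1, to p2 and to the box center (the exact pairing in the patent's Fig. 10 may differ):

```python
import numpy as np

def attitude_angles(pred, gt):
    """Angles (deg) between corresponding edge directions of predicted and labeled boxes.
    pred, gt: arrays of shape (4, 3) holding the landmarks (p_ctr, p0, p1, p2)."""
    def angle(u, v):
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    pairs = [(1, 2), (1, 3), (1, 0)]   # p0->p1, p0->p2, p0->p_ctr (illustrative pairing)
    return [angle(pred[j] - pred[i], gt[j] - gt[i]) for i, j in pairs]
```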
The experiment runs on a GTX 1080Ti graphics card; the system environment is Ubuntu 16.04 and the software environment is Python 2.7 + TensorFlow 1.2 + CUDA 8.0 + CUDNN 6.0. All point cloud samples are stored in txt format, the learning rate for network training is 0.001, and 8 point cloud samples are input to the network in each training iteration. The results are shown in Table 1:
TABLE 1 Experimental test indexes and results
Average attitude deviation between predicted and labeled bounding boxes: 4.26°
Three-dimensional intersection-over-union IoU: 75.67%
Detection speed for a single point cloud test sample: 21 fps
As the table shows, the average attitude deviation between the predicted and labeled bounding boxes is 4.26°, the three-dimensional intersection-over-union IoU is 75.67%, and the detection speed for a single point cloud test sample reaches 21 fps.
In summary, to meet the intelligent garage's automatic parking requirement of acquiring the position and attitude of the wheels of the vehicle to be parked in real time, a three-dimensional point cloud wheel target detection method satisfying this requirement is proposed and implemented on the basis of a deep convolutional neural network, combined with the characteristics of wheel targets in point cloud data. It offers solid wheel target localization capability and accurate wheel attitude recognition; the whole detection process is fast and efficient and can be popularized and applied in intelligent stereo garage scenarios.

Claims (8)

1. A wheel point cloud detection method for an intelligent garage is characterized by comprising the following steps:
step 1, collecting a three-dimensional point cloud of a parking area scene by using a two-dimensional single-line laser radar, and making a corresponding data set;
step 2, determining target pose information based on a three-dimensional target detection model of the unstructured point cloud;
the specific implementation of step 2 comprises the following sub-steps,
step 2.1, sampling point cloud samples with different sizes to enable the size of each sample entering a three-dimensional target detection model to be uniform, wherein the three-dimensional target detection model is based on a PointNet network and is added with a target positioning module;
step 2.2, for one frame of point cloud Cloud, the maximum number of targets is n_obj; the wheel target point cloud in Cloud is copied equivalently until Cloud contains n_obj targets, ensuring that the size of the point cloud input to the three-dimensional target detection model is fixed;
step 2.3, determining the range of all point cloud samples in the xyz direction of the three-dimensional space, and equally dividing the three-dimensional space into n candidate regions along the xyz direction; inputting Cloud into a positioning module of a three-dimensional target detection model for feature extraction, and outputting predicted wheel position information;
step 2.4, among the n candidate region centers, finding the candidate region center closest to the center p_ctr of the labeled wheel target, thereby determining the candidate box C_chs in which the wheel lies; according to C_chs, screening out the n_obj groups of predicted landmark points corresponding to the n_obj candidate regions, inputting them together with the landmark points of the labeled wheel targets into the loss layer to calculate the regression loss L_reg, and optimizing the loss function by iterative training until L_reg converges;
step 3, determining the predicted target category based on a three-dimensional target detection model of the unstructured point cloud;
and step 4, analyzing the wheel pose information predicted by the three-dimensional target detection model, judging whether the wheels are straightened and where they lie in the manipulator coordinate system, and feeding back to the manipulator arranged above the intelligent garage a signal indicating whether to grab and the positions of the grabbing points.
2. The intelligent garage-oriented wheel point cloud detection method according to claim 1, wherein the method comprises the following steps: a specific implementation of step 1 comprises the following sub-steps,
step 1.1, a vehicle taking manipulator is fixedly connected to a vertical mechanical frame, a vehicle taking cover used for clamping a vehicle is fixed on the manipulator and comprises a vehicle taking cover shell, clamping mechanisms used for clamping the tire are respectively arranged at positions, corresponding to four tires of the vehicle, of the bottom of the shell, four laser radars are respectively arranged on the clamping mechanisms, and when the vehicle taking cover shell falls to the positions near the wheels along with the manipulator, the laser radars start to work and collect original three-dimensional point cloud data of the four wheels;
step 1.2, establishing a laser three-dimensional coordinate system, taking an initial position before laser descending as an origin of the coordinate system, taking a moving direction of the laser when the laser descends vertically as a positive direction of a z axis, taking a direction of 0 degrees during two-dimensional laser scanning as a positive direction of an x axis, and taking a direction of 90 degrees as a positive direction of a y axis, and converting distance value information obtained by scanning to the laser three-dimensional coordinate system in each falling process of the laser to obtain a point cloud sample containing a wheel target;
step 1.3, obtaining garage scene point cloud samples represented in three-dimensional coordinates from steps 1.1 and 1.2, labeling the points belonging to the wheel parts in each sample point by point, calculating the coordinates of the eight vertices of the minimum circumscribed cube of each wheel target point cloud and of the cube centroid by principal component analysis, and taking the 12 parameters given by the three-dimensional coordinates of three fixed coplanar vertices and of the centroid as the position label of each target; setting a (1, 0) vector as the class label of each wheel target, and dividing the point cloud data set into a training set and a test set in a proper proportion.
3. The intelligent garage-oriented wheel point cloud detection method according to claim 2, wherein the method comprises the following steps: the specific implementation manner of calculating, in the step 1.3, the coordinates of the eight vertices of the minimum circumscribed cube of each wheel target point cloud and of the cube centroid by the principal component analysis method, and taking the three-dimensional coordinates of the three fixed coplanar vertices and of the centroid as the position label of each target, is as follows,
(1) calculating the centroid p_centroid of the input point cloud;
(2) obtaining the covariance matrix M_cloud of the point cloud from the input point cloud and p_centroid, and calculating the eigenvalues and eigenvectors of the covariance matrix, the directions of the eigenvectors being the principal directions of the point cloud;
(3) after the principal directions are corrected to be mutually perpendicular, finding the minimum circumscribed cube of the point cloud along the three principal directions to obtain the minimum bounding box used for labeling; for the minimum bounding box of each labeled wheel, taking the three fixed coplanar vertices p_0, p_1, p_2 and the centroid p_ctr, 12 parameters in total, namely p_ctr(x_ctr, y_ctr, z_ctr), p_0(x_0, y_0, z_0), p_1(x_1, y_1, z_1), p_2(x_2, y_2, z_2), as the position label of each target.
4. The intelligent garage-oriented wheel point cloud detection method according to claim 3, wherein the method comprises the following steps: the specific implementation manner of sampling point cloud samples with different sizes in the step 2.1 to make the sizes of all samples entering the three-dimensional target detection model uniform is as follows,
let N be the size of each sample, the original point cloud to be sampled being C_ori with size N_input, the point cloud labeled as target being C_tgt with N_tgt points, and accordingly the background point cloud C_bgd having N_bgd = N_input - N_tgt points; if the size of the original point cloud is smaller than N, points randomly sampled from the original point cloud C_ori are added until the point cloud size is expanded to N; if the size of the original point cloud is larger than N, N_tgt is further compared with N: when N > N_tgt, points randomly sampled from C_bgd are added until the point cloud size is N; when N < N_tgt, background points are no longer considered and only N points are taken from C_tgt as the sampled point cloud C_smp.
5. The intelligent garage-oriented wheel point cloud detection method according to claim 4, wherein the method comprises the following steps: the specific implementation manner of the step 2.3 is as follows: the coordinate range (x_min, x_max, y_min, y_max, z_min, z_max) of the point cloud is determined according to all the input point cloud data and a large point cloud space cube C is established; C is equally divided along the x, y and z directions according to a certain set proportion to generate n cube candidate boxes C_0, C_1, ..., C_n; Cloud, of size i x 3, is input into the localization module of the three-dimensional target detection model for feature extraction and the predicted wheel position information is output; specifically, Cloud of size i x 3 is input into the localization module of the three-dimensional target detection model, corrected through a spatial transformation network, and passed through two layers of 3 x 3 convolution to generate n x 64 feature vectors; the feature vectors enter a second spatial transformation network for feature alignment and pass through three layers of 3 x 3 convolution to generate n x 1024 feature vectors; after implicit features are extracted through a fully connected layer, an n x 4 x 3 vector is output, representing that the n candidate regions each regress one group of landmark point offsets Δ_ctr, Δ_0, Δ_1, Δ_2; the regression point positions output by each candidate region are the sums of the three coplanar vertices and center point coordinate C_ctr, C_0, C_1, C_2 of that candidate region and Δ_ctr, Δ_0, Δ_1, Δ_2; that is, the regression point position p is the sum of the four landmark points of the candidate box and the offsets:
p_ctr = C_ctr + Δ_ctr, p_i = C_i + Δ_i (i = 0, 1, 2).
6. The intelligent garage-oriented wheel point cloud detection method according to claim 4, wherein the method comprises the following steps: the loss function in step 2.4 is as follows,
Δp = || p_pred - p_gt ||_2^2,
wherein p_pred and p_gt are respectively the predicted and labeled landmark coordinates, p_pred is derived from the network regression and p_gt from the minimum bounding box obtained in step 1.3; with the center point offset Δp_ctr and the vertex offsets Δp_i, the total localization loss L_reg of the wheel target localization model is:
L_reg = Δp_ctr + Δp_0 + Δp_1 + Δp_2;
through multiple iterations, L_reg converges, indicating that the wheel target localization model is approximately optimal.
7. The intelligent garage-oriented wheel point cloud detection method according to claim 4, wherein the method comprises the following steps: the specific implementation of step 3 comprises the following sub-steps,
step 3.1, according to the candidate box C_chs in which the wheel target is predicted to lie in step 2, cropping out the point clouds contained in the screened candidate boxes to obtain a new target point cloud Cloud_ROI, all candidate boxes of non-target parts being classified as (0, 1);
step 3.2, inputting Cloud_ROI into the classification module of the three-dimensional target detection model, calculating classification scores for each candidate region containing a target, finally combining the classification scores of all candidate regions containing and not containing wheel targets into a one-dimensional vector of length n, calculating the Softmax cross-entropy loss L_cls together with the label classes, and using an Adam optimizer to minimize L_cls;
and step 3.3, inputting the point cloud samples in the test set into the trained three-dimensional target detection model to obtain the position of the minimum bounding box of the predicted wheel target in each test point cloud sample and the prediction scores; if the foreground score is higher than the background score, the bounding box is judged to be the minimum circumscribed bounding box of a wheel target, and if the foreground score is lower than the background score, the point cloud contained in the bounding box is judged to belong to the background.
8. The intelligent garage-oriented wheel point cloud detection method according to claim 7, wherein the method comprises the following steps: the specific implementation of step 4 comprises the following sub-steps,
step 4.1, calculating the angle between the predicted minimum bounding box of each wheel target and the yz plane of the laser three-dimensional coordinate system, the specific implementation being as follows: from the predicted landmark point coordinates, the two edge vectors p_0p_1 and p_0p_2 of the bounding box are obtained; the normal vector of the yz plane is n_x = (1, 0, 0); the angle between each vector v and n_x is θ = arccos(v · n_x / (|v| |n_x|)), so the angles between the two vectors and the yz plane are θ_1' = 90° - θ_1 and θ_2' = 90° - θ_2 respectively; comparing θ_1' and θ_2', the smaller angle is taken as the predicted angle θ_pred between the wheel target and the yz plane; with the angle θ_car between the overall orientation of the vehicle body and the yz plane of the laser three-dimensional coordinate system known, the angle between the vehicle body and the wheel point cloud on the yz plane of the laser three-dimensional coordinate system is obtained as Δθ = |θ_pred - θ_car|;
and step 4.2, comparing the wheel-to-body angle Δθ with a preset threshold; if Δθ is smaller than the threshold, the system feeds back an 'allow grabbing' signal to the manipulator, which calculates the grabbing points from the predicted wheel minimum bounding box information and completes the grab-and-lift process; if Δθ is larger than the threshold, the system feeds back a 'wheels not straightened, grabbing not allowed' signal to the manipulator, the manipulator rises directly to the origin of the laser three-dimensional coordinate system, and a message is sent to remind the driver to perform further operations and adjustments.
CN201910511231.1A 2019-06-13 2019-06-13 Wheel point cloud detection method for intelligent garage Active CN110344621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910511231.1A CN110344621B (en) 2019-06-13 2019-06-13 Wheel point cloud detection method for intelligent garage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910511231.1A CN110344621B (en) 2019-06-13 2019-06-13 Wheel point cloud detection method for intelligent garage

Publications (2)

Publication Number Publication Date
CN110344621A CN110344621A (en) 2019-10-18
CN110344621B true CN110344621B (en) 2020-05-26

Family

ID=68181950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910511231.1A Active CN110344621B (en) 2019-06-13 2019-06-13 Wheel point cloud detection method for intelligent garage

Country Status (1)

Country Link
CN (1) CN110344621B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853103B (en) * 2019-11-12 2022-06-10 佛山智能装备技术研究院 Data set manufacturing method for deep learning attitude estimation
CN110909644A (en) * 2019-11-14 2020-03-24 南京理工大学 Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning
CN111161202A (en) * 2019-12-30 2020-05-15 上海眼控科技股份有限公司 Vehicle behavior information acquisition method and device, computer equipment and storage medium
CN111368653B (en) * 2020-02-19 2023-09-08 杭州电子科技大学 Low-altitude small target detection method based on R-D graph and deep neural network
CN111428619B (en) * 2020-03-20 2022-08-05 电子科技大学 Three-dimensional point cloud head attitude estimation system and method based on ordered regression and soft labels
CN111472592B (en) * 2020-04-09 2021-08-24 陕西天诚软件有限公司 Intelligent system based on big data
CN111652060B (en) * 2020-04-27 2024-04-19 宁波吉利汽车研究开发有限公司 Laser radar-based height limiting early warning method and device, electronic equipment and storage medium
CN111562591B (en) * 2020-05-11 2021-06-18 清华大学 Automobile position measuring method and device based on laser radar
CN112070835B (en) * 2020-08-21 2024-06-25 达闼机器人股份有限公司 Mechanical arm pose prediction method and device, storage medium and electronic equipment
CN112071119A (en) * 2020-08-31 2020-12-11 安徽中科美络信息技术有限公司 Intelligent auxiliary warehouse entry and exit method and system based on Internet of vehicles
CN112614186A (en) * 2020-12-28 2021-04-06 上海汽车工业(集团)总公司 Target pose calculation method and calculation module
CN112560800B (en) * 2021-01-12 2024-05-28 知行汽车科技(苏州)股份有限公司 Road edge detection method, device and storage medium
CN113223091B (en) * 2021-04-29 2023-01-24 达闼机器人股份有限公司 Three-dimensional target detection method, three-dimensional target capture device and electronic equipment
CN113177477A (en) * 2021-04-29 2021-07-27 湖南大学 Target detection and identification method based on three-dimensional point cloud analysis
CN113505653B (en) * 2021-06-15 2023-06-30 杭州飞步科技有限公司 Object detection method, device, apparatus, medium and program product
CN113628177A (en) * 2021-07-29 2021-11-09 北京好运达智创科技有限公司 Double-layer beam storage detection system for beam body
CN113888463A (en) * 2021-08-31 2022-01-04 广州文远知行科技有限公司 Wheel rotation angle detection method and device, electronic device and storage medium
CN115457496B (en) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 Automatic driving retaining wall detection method and device and vehicle
CN116091437B (en) * 2022-12-30 2024-02-02 苏州思卡信息***有限公司 Axle number detection method based on 3D point cloud

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2079982A1 (en) * 2006-10-16 2009-07-22 Robert Bosch GmbH Method for determining the axis of rotation of a vehicle wheel
CN107514167A (en) * 2017-07-18 2017-12-26 武汉智象机器人有限公司 Vehicle identification system and storage method based on fixed radar and movable radar
CN108868268A (en) * 2018-06-05 2018-11-23 西安交通大学 Based on point to identity distance from the unmanned vehicle position and orientation estimation method being registrated with cross-correlation entropy
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Nobody towards automated parking ground parks the alignment method of transfer robot
CN109596140A (en) * 2019-01-30 2019-04-09 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle positioning method, automatic Pilot control method and related system


Also Published As

Publication number Publication date
CN110344621A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110344621B (en) Wheel point cloud detection method for intelligent garage
CN108171748B (en) Visual identification and positioning method for intelligent robot grabbing application
EP3683721B1 (en) A material handling method, apparatus, and system for identification of a region-of-interest
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN109444911B (en) Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
CN108445480B (en) Mobile platform self-adaptive extended target tracking system and method based on laser radar
Zhou et al. Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain
Petrovskaya et al. Model based vehicle detection and tracking for autonomous urban driving
WO2022188663A1 (en) Target detection method and apparatus
US20200302237A1 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
CN114359876B (en) Vehicle target identification method and storage medium
CN112487919A (en) 3D target detection and tracking method based on camera and laser radar
CN112232139B (en) Obstacle avoidance method based on combination of Yolo v4 and Tof algorithm
Wellhausen et al. Reliable real-time change detection and mapping for 3D LiDARs
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN109633686B (en) Method and system for detecting ground obstacle based on laser radar
Piewak et al. Fully convolutional neural networks for dynamic object detection in grid maps
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
Wen et al. Research on 3D point cloud de-distortion algorithm and its application on Euclidean clustering
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN116573017A (en) Urban rail train running clearance foreign matter sensing method, system, device and medium
Morris et al. A view-dependent adaptive matched filter for ladar-based vehicle tracking
US20240051146A1 (en) Autonomous solar installation using artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant