CN113920163A - Moving target detection method based on the combination of traditional methods and deep learning - Google Patents

Moving target detection method based on the combination of traditional methods and deep learning

Info

Publication number
CN113920163A
CN113920163A (application CN202111176760.4A)
Authority
CN
China
Prior art keywords
image
moving target
moving
potential
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111176760.4A
Other languages
Chinese (zh)
Other versions
CN113920163B (en)
Inventor
蒋涛
崔亚男
谢昱锐
付克昌
袁建英
吴思东
黄小燕
刘明文
段翠萍
罗鸿明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN202111176760.4A priority Critical patent/CN113920163B/en
Publication of CN113920163A publication Critical patent/CN113920163A/en
Application granted granted Critical
Publication of CN113920163B publication Critical patent/CN113920163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a moving target detection method based on the combination of traditional methods and deep learning, comprising the following steps: step one, detecting adjacent frames of the road images acquired by a binocular camera with an instance segmentation algorithm, so as to divide each image into potential moving target areas and a static area; step two, extracting and matching feature points separately in the potential moving target areas and the static area of each image; and step three, performing motion compensation based on the camera ego-motion parameters, judging the motion state by calculating the reprojection error, and marking the moving targets in the image according to the judgment result. By raising the accuracy of the ego-motion parameter estimation, the method effectively improves both the real-time performance of the algorithm and the detection accuracy.

Description

Moving target detection method based on the combination of traditional methods and deep learning
Technical Field
The invention relates to the technical field of image processing. More specifically, the invention relates to a moving target detection method based on the combination of traditional methods and deep learning, used for detecting potential moving targets on the driving road of an intelligent vehicle.
Background
Intelligent vehicles have become a research hot spot in the field of vehicle engineering worldwide and a new growth engine for the automobile industry, and many countries have incorporated them into the intelligent transportation systems they are developing as a priority. The driving environment of an intelligent vehicle is complex, highly dynamic, and highly random. Accurate detection and trajectory prediction of the dynamic targets in the environment are the basis of behavior decision-making and navigation planning for unmanned vehicles and the key to safe intelligent driving; especially in situations such as lane changing on multi-lane roads and merging onto a highway, the motion information of the targets in the scene is particularly important for unmanned-vehicle decision-making.
Currently, the perception of moving objects by intelligent vehicles relies mainly on lidar-based and vision-based methods. Lidar can obtain the accurate distance from the vehicle to a scene target, but its limited angular resolution weakens its ability to detect small, distant targets; in addition, its high price is one of the factors limiting the popularization of unmanned vehicles. In contrast, the vision sensor, considered the closest to human perception, has attracted attention for its low cost, small size, light weight, rich information, and good algorithm reusability, and some large autonomous-driving companies have even taken pure visual perception as the main direction of intelligent-vehicle environment perception. At present, the application of vision to intelligent vehicles focuses mainly on the recognition of lane lines, road signs, pedestrians, and vehicles. For moving target detection with a moving camera, for example a camera fixed on a moving platform, methods designed for a stationary camera are not applicable because of the camera's own motion. Research on moving target detection with a moving camera has therefore become a hot topic in recent years.
At present, moving target detection with a static camera is mainly realized by background subtraction, the frame-difference method, the optical-flow method, and the like, and is widely applied, for example, to crowd monitoring in public places. However, when the camera is fixed on a moving platform such as an intelligent vehicle, these methods are no longer suitable: the motion of the targets and the motion of the background are mixed together by the camera's own motion, which brings great difficulty to moving target detection.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided a moving target detection method based on the combination of traditional methods and deep learning, comprising:
step one, detecting adjacent frames of the road images acquired by a binocular camera with an instance segmentation algorithm, so as to divide each image into potential moving target areas and a static area;
step two, extracting and matching feature points separately in the potential moving target areas and the static area of each image;
and step three, performing motion compensation based on the camera ego-motion parameters, judging the motion state by calculating the reprojection error, and marking the moving targets in the image according to the judgment result.
Preferably, in step one, the binocular camera is configured as a binocular camera mounted on a vehicle;
also in step one, the instance segmentation algorithm SOLOv2 is adopted: the background pixel value in each road-environment image is marked as 0 and the pixels of each remaining potential moving object are labeled sequentially as 1, 2, …, so that the different potential moving objects in each image acquired by the binocular camera are represented as mask images carrying different label information, and each image is divided into potential moving target areas and a static area.
Preferably, in step two, the manner of feature point extraction for the potential moving target areas and the static area is configured to include:
the feature points of the static area are obtained with the ORB feature point extraction method, and the camera ego-motion parameters are obtained with the help of a homogenized feature point extraction strategy;
the feature points for the potential moving target region are configured to be obtained by employing the Shi Tomasi feature point extraction method.
Preferably, in step three, the motion compensation is configured to include:
based on the feature points extracted and matched in the static area, calculating the camera ego-motion parameters between every two frames from the front and back frame images with a PnP method;
and performing motion compensation on the previous frame image of every two frames with the camera ego-motion parameters, so that the image pair becomes equivalent to the case of a stationary camera.
Preferably, in step three, the reprojection error is read from the reprojection residual image of adjacent frames, obtained by projecting the feature points of the current frame onto the previous frame.
Preferably, in step three, the motion state judgment is configured to:
compare the length F_L corresponding to a feature point, represented by its reprojection residual in a potential moving target area, with the feature-point motion-state judgment threshold T_h; when F_L > T_h, the feature point is marked with color to indicate that it is a moving point;
traverse each potential moving target by its label, count the number of moving points falling in each potential moving target area, and set a threshold Φ on the number of moving points; if the number of moving points of a potential moving target area is greater than the threshold, mark the potential moving target area as red to represent a moving target;
wherein T_h is configured as:
T_h = ρ · F_avg
where F_avg represents the mean length of the reprojection residuals of the static area, and ρ represents a value greater than 1.
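For clarity, the two-level decision rule above can be restated compactly in standard notation (this is only a restatement of the quantities F_L, F_avg, ρ, T_h and Φ defined in this section):

\[
T_h = \rho F_{avg},\ \rho > 1; \qquad
\text{point } p \text{ is moving} \iff F_L(p) > T_h; \qquad
\text{area } R \text{ is a moving target} \iff \#\{p \in R \mid F_L(p) > T_h\} > \Phi .
\]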
The invention has at least the following beneficial effects: first, the mask generated by instance segmentation is used as the potential moving target area, so that the whole image is accurately divided into two parts: a static area and potential moving target areas. Feature points are then extracted from the two parts with different strategies, which improves the real-time performance of the algorithm and the accuracy of the ego-motion parameter estimation.
Second, when judging a moving target, the threshold method takes the ego-motion estimation error into account, so the motion state of the target is judged with the error of the ego-motion parameter estimation considered, which improves the accuracy and makes the method practical and effective.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a schematic flow chart of a moving object detection method based on the combination of the traditional method and the deep learning method;
FIG. 2 is a schematic flow chart of the moving object determination in step four;
FIG. 3 is a diagram of an unprocessed road environment obtained in the application of an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the marking of points of motion in a potential motion target area in step four according to the present invention;
FIG. 5 is a schematic diagram of a moving object marked in step four according to the present invention;
FIG. 6 is another schematic diagram of the fourth step of the present invention after the moving object is marked;
FIG. 7 is a schematic diagram of a moving object marked by a prior-art method;
FIG. 8 is another schematic diagram of a moving object marked by a prior-art method.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
In the moving target detection method based on the combination of traditional methods and deep learning, the problems that traditional target detection fails under occlusion and that its detection effect is not ideal are addressed by combining a deep-learning-based pedestrian and vehicle detection algorithm with traditional methods. In addition, a threshold method that takes the ego-motion estimation error into account is introduced to judge the motion state of the target, to overcome the low accuracy of existing methods that judge the motion state only from the reprojection error while neglecting the ego-motion parameter estimation error.
The method applies an instance segmentation algorithm to the left image of each road image pair to detect vehicle and pedestrian instances, and takes the detected vehicles and pedestrians as potential moving targets.
Because the potential moving targets in a traffic environment are mainly pedestrians and vehicles, because traditional target detection fails under occlusion and its detection effect is not ideal, and because deep-learning-based pedestrian and vehicle detection is by now quite mature, a deep-learning-based pedestrian and vehicle detection method is introduced: the candidate areas of moving targets in the image are located first, which narrows the search range from the whole image and speeds up moving target detection. Considering the accuracy of target boundary detection, the invention adopts an instance segmentation algorithm to obtain the boundary regions of vehicles and pedestrians.
Then the four images of two adjacent frames are processed (the four images are the two-view image pairs of the same scene collected by the binocular camera at the two time instants; since the two views differ only slightly, the left image, or equally the right image, can be selected directly for analysis). The camera ego-motion parameters are solved first, then the previous frame of each two frames is motion-compensated with the obtained ego-motion parameters, and the reprojection residual image is computed.
The specific process is as follows:
the self-motion parameter estimation of the camera mainly obtains the pose of the camera through a visual odometer, but the visual odometer is based on the assumed condition of a static scene at present, and when a dynamic target is taken as a subject, an algorithm fails. The feature points are unevenly distributed in a static area, which causes errors for calculating the self-movement parameters of the camera, so the method combines the previous potential moving object extraction and the visual odometer, and comprises the following steps of: firstly, regarding a potential motion target area as motion, extracting characteristic points in the rest static area, and using the characteristic points as self-motion parameter estimation; secondly, a homogenization feature extraction strategy is used in the static region to improve the accuracy of self-motion parameter estimation.
Finally, the motion state of each target is judged from the reprojection residual image. Because of the uncertainty of the target motion estimation, a threshold method that accounts for the ego-motion parameter estimation error is adopted. The threshold here refers to the mean length of the feature point pairs: the mean length of the feature point pairs of the static area and the lengths of the feature point pairs of the potential moving target areas are computed separately from the reprojection residual image, and the value from the static area reflects the ego-motion parameter estimation error. If the length of a feature point pair in a potential moving target area is greater than a certain multiple of the static-area mean, the feature point is judged to be moving; otherwise it is judged to be static. A constant threshold is then set; the number of moving points of each potential moving target area is counted, and if the number is greater than the threshold, the area is marked as a moving target, otherwise as a static target, each with a different color.
Example:
the invention is realized on a Clion experimental platform, mainly comprises seven steps, mainly relates to potential moving target extraction by example segmentation, characteristic point extraction and matching of a static area and a potential moving target area, self-moving parameter estimation, motion compensation, calculation of a re-projection residual error, and judgment and marking of a moving state by using an adaptive threshold, and specifically comprises the following steps:
the method comprises the following steps of firstly, extracting potential moving objects from an input image, wherein the potential moving objects comprise vehicles and pedestrians. Specifically, a mature example segmentation algorithm SOLOV2 is adopted, the background pixel value is marked as 0, the pixel values of the rest potential moving targets are marked sequentially from 1 to 2, and mask images which are corresponding to each image of a left image of the binocular camera and have different label information for different potential moving targets are obtained.
Step two: extract and match feature points in the potential moving target areas and the static area obtained in step one, using two different methods chosen to match the goal of each area. The static area uses the ORB feature point extraction method together with a feature-point homogenization strategy, in order to obtain relatively accurate camera ego-motion parameters. The potential moving target areas need a large number of feature points, to avoid losing targets when the motion state is judged from too few points, so the Shi-Tomasi feature point extraction method is used to obtain richer feature points. The static and potential moving target areas of the right image need not be output by the deep network again: the corresponding feature points of the right image are obtained directly by matching against the feature points of the corresponding areas of the left image, which reduces the time cost.
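The following Python/OpenCV sketch illustrates this two-strategy extraction. The grid-based spreading of ORB points is only one plausible reading of the "feature point homogenization strategy" (the patent does not fix its details), and label_image is the hypothetical label mask from the previous sketch:

```python
import cv2
import numpy as np

def extract_features(gray, label_image, grid=(8, 8), per_cell=20):
    """ORB on the static area (spread uniformly over a grid),
    Shi-Tomasi on the potential moving target areas."""
    static_mask = (label_image == 0).astype(np.uint8) * 255
    moving_mask = (label_image > 0).astype(np.uint8) * 255

    # static area: ORB keypoints, homogenized cell by cell
    orb = cv2.ORB_create(nfeatures=2000)
    h, w = gray.shape
    static_kps = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = np.zeros_like(static_mask)
            y0, y1 = gy * h // grid[0], (gy + 1) * h // grid[0]
            x0, x1 = gx * w // grid[1], (gx + 1) * w // grid[1]
            cell[y0:y1, x0:x1] = static_mask[y0:y1, x0:x1]
            kps = orb.detect(gray, mask=cell)
            # keep only the strongest responses in each cell
            static_kps += sorted(kps, key=lambda k: -k.response)[:per_cell]
    static_kps, static_desc = orb.compute(gray, static_kps)

    # potential moving target areas: dense Shi-Tomasi corners
    moving_pts = cv2.goodFeaturesToTrack(
        gray, maxCorners=3000, qualityLevel=0.01,
        minDistance=3, mask=moving_mask)
    return static_kps, static_desc, moving_pts
```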
Step three: from the feature points extracted and matched in the static area, calculate the camera ego-motion parameters between every two frames from the front and back frame images with a PnP method.
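For illustration, assuming the 3D positions of the static-area feature points have been triangulated from the rectified stereo pair of the previous frame, the ego-motion between two frames can be solved with OpenCV's RANSAC PnP solver. The function below is a sketch under those assumptions, not the patent's prescribed implementation:

```python
import cv2
import numpy as np

def estimate_ego_motion(pts3d_prev, pts2d_curr, K, dist=None):
    """Solve the PnP problem: 3D points triangulated in the previous
    frame's camera coordinates vs. their matched pixels in the current
    frame. Returns rotation matrix R and translation t (prev -> curr)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_prev, np.float32),
        np.asarray(pts2d_curr, np.float32),
        K, dist, reprojectionError=2.0, iterationsCount=100)
    if not ok:
        raise RuntimeError("PnP failed; not enough static-area matches")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return R, tvec, inliers
```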
Step four: motion-compensate the previous frame image of every two frames with the camera ego-motion parameters from step three, making the image pair equivalent to the case of a stationary camera.
Step five: reproject with the feature points from step two, i.e. project the feature points of the current frame onto the previous frame, and obtain the reprojection residual image (see the sketch below).
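Steps four and five amount to transforming the tracked points by the estimated ego-motion and measuring how far each reprojected point lands from its matched location in the previous frame. The sketch below assumes the (R, t) convention of the PnP sketch above (mapping previous-frame camera coordinates to current-frame camera coordinates); only under that assumption is the inverse transform below correct:

```python
import numpy as np

def reprojection_residuals(pts3d_curr, pts2d_prev, R, t, K):
    """Motion compensation + reprojection (steps four and five).

    Each 3D point observed in the current frame is mapped back into
    the previous frame's camera coordinates with the inverse ego-motion,
    projected with the intrinsics K, and compared with the matched
    feature position in the previous frame. For a truly static point
    the residual length would be zero up to estimation noise."""
    X = np.asarray(pts3d_curr, np.float64)          # Nx3, current camera
    X_prev = (R.T @ (X.T - t.reshape(3, 1))).T      # undo prev -> curr motion
    proj = (K @ X_prev.T).T
    proj = proj[:, :2] / proj[:, 2:3]               # perspective division
    residuals = proj - np.asarray(pts2d_prev, np.float64)
    return np.linalg.norm(residuals, axis=1)        # residual lengths F_L
```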
Step six: the error of the camera ego-motion parameter estimation cannot be eliminated completely (were it error-free, the reprojection residual of the static area would be zero). Therefore the mean of the reprojection residuals of the static area is computed, and a certain multiple of this mean is used as the threshold against which each reprojection residual in the potential moving target areas is compared; if a reprojection residual in a potential moving target area is greater than the threshold, the feature point is marked as green, representing a moving point. Each potential moving target is then traversed by its label, the number of moving points falling in each potential moving target area is counted, and a threshold Φ on the number of moving points is set; if the number of moving points of a potential moving target area is greater than the threshold, the area is marked red, representing a moving target. Otherwise the area is marked green, representing a static target.
T_h = ρ · F_avg
When F_L > T_h, the feature point is marked as a moving point, where F_L represents the length of the feature point pair given by the reprojection residual in a potential moving target area, F_avg represents the mean length of the reprojection residuals of the static area, ρ represents a value greater than 1, and T_h is the threshold for judging the motion state of a feature point.
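Tying step six together, the adaptive-threshold decision can be sketched as follows. Here res_static and res_moving would be the residual lengths from the previous sketch split by the label mask, and the concrete values of ρ and Φ are illustrative only, since the patent leaves them open:

```python
import numpy as np

def judge_motion(res_static, res_moving, labels_of_moving_pts,
                 rho=2.0, phi=5):
    """Adaptive-threshold motion judgment (step six).

    res_static  : residual lengths of static-area feature points
    res_moving  : residual lengths F_L of points in potential targets
    labels_of_moving_pts : instance label of each such point
    rho, phi    : the patent's rho (>1) and point-count threshold Phi;
                  the values 2.0 and 5 here are illustrative only.
    """
    T_h = rho * np.mean(res_static)          # T_h = rho * F_avg
    is_moving_pt = res_moving > T_h          # per-point decision
    moving_labels = set()
    for label in np.unique(labels_of_moving_pts):
        n_moving = np.count_nonzero(
            is_moving_pt[labels_of_moving_pts == label])
        if n_moving > phi:                   # per-target decision
            moving_labels.add(int(label))    # mark red: moving target
    return T_h, is_moving_pt, moving_labels  # others: green, static
```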
To illustrate the accuracy of the invention in detecting moving targets, for the same road-environment data, the detection flow and effect of the invention are shown in FIGS. 4-6. It can be seen that the invention marks the moving targets in a complex road environment to the largest extent, also marks the static potential moving targets in the images, and that the marking accuracy meets practical requirements.
however, the general conventional method for detecting a moving object in the prior art does not perform example segmentation in the image information processing process, and does not adopt a specific threshold to determine the motion state of the feature point, so the detection result is as shown in fig. 7 and 8, wherein the area in the frame represents the detected moving object, and the defects in the prior art can be known from the figure: 1, due to excessive interference factors in a road environment image, roadside telegraph poles, walls, leaves and the like are mistakenly detected as moving targets in the prior art, and a large amount of false detection occurs; 2, the prior art cannot mark some static potential moving objects. Meanwhile, in the prior art, the image is not partitioned in the actual processing, so the detection processing efficiency is low.
The above scheme is merely illustrative of a preferred example, and is not limiting. When the invention is implemented, appropriate replacement and/or modification can be carried out according to the requirements of users.
The number of apparatuses and the scale of the process described herein are intended to simplify the description of the present invention. Applications, modifications and variations of the present invention will be apparent to those skilled in the art.
While embodiments of the invention have been disclosed above, it is not intended to be limited to the uses set forth in the specification and examples. It can be applied to all kinds of fields suitable for the present invention. Additional modifications will readily occur to those skilled in the art. It is therefore intended that the invention not be limited to the exact details and illustrations described and illustrated herein, but fall within the scope of the appended claims and equivalents thereof.

Claims (6)

1. A moving target detection method based on the combination of traditional methods and deep learning, characterized by comprising the following steps: step one, detecting adjacent frames of the road images acquired by a binocular camera with an instance segmentation algorithm, so as to divide each image into potential moving target areas and a static area;
step two, extracting and matching feature points separately in the potential moving target areas and the static area of each image;
and step three, performing motion compensation based on the camera ego-motion parameters, judging the motion state by calculating the reprojection error, and marking the moving targets in the image according to the judgment result.
2. The moving target detection method based on the combination of traditional methods and deep learning as claimed in claim 1, wherein in step one, the binocular camera is configured as a binocular camera mounted on a vehicle;
and in step one, the instance segmentation algorithm SOLOv2 is adopted: the background pixel value in each road-environment image is marked as 0 and the pixels of each remaining potential moving object are labeled sequentially as 1, 2, …, so that the different potential moving objects in each image acquired by the binocular camera are represented as mask images carrying different label information, and each image is divided into potential moving target areas and a static area.
3. The moving target detection method based on the combination of traditional methods and deep learning as claimed in claim 1, wherein in step two, the manner of feature point extraction for the potential moving target areas and the static area is configured to include:
the feature points of the static area are obtained with the ORB feature point extraction method, and the camera ego-motion parameters are obtained with the help of a homogenized feature point extraction strategy;
the feature points of the potential moving target areas are obtained with the Shi-Tomasi feature point extraction method.
4. The moving target detection method based on the combination of traditional methods and deep learning as claimed in claim 1, wherein in step three, the motion compensation is configured to include:
based on the feature points extracted and matched in the static area, calculating the camera ego-motion parameters between every two frames from the front and back frame images with a PnP method;
and performing motion compensation on the previous frame image of every two frames with the camera ego-motion parameters, so that the image pair becomes equivalent to the case of a stationary camera.
5. The moving target detection method based on the combination of traditional methods and deep learning as claimed in claim 1, wherein in step three, the reprojection error is read from the reprojection residual image of adjacent frames, obtained by projecting the feature points of the current frame onto the previous frame.
6. The moving target detection method based on the combination of traditional methods and deep learning as claimed in claim 1, wherein in step three, the motion state judgment is configured to:
compare the length F_L corresponding to a feature point, represented by its reprojection residual in a potential moving target area, with the feature-point motion-state judgment threshold T_h; when F_L > T_h, the feature point is marked with color to indicate that it is a moving point;
traverse each potential moving target by its label, count the number of moving points falling in each potential moving target area, and set a threshold Φ on the number of moving points; if the number of moving points of a potential moving target area is greater than the threshold, mark the potential moving target area as red to represent a moving target; otherwise, mark the potential moving target area as green to represent a static target;
wherein T_h is configured as:
T_h = ρ · F_avg
where F_avg represents the mean length of the reprojection residuals of the static area, and ρ represents a value greater than 1.
CN202111176760.4A 2021-10-09 2021-10-09 Moving target detection method based on combination of traditional and deep learning Active CN113920163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111176760.4A CN113920163B (en) 2021-10-09 2021-10-09 Moving target detection method based on combination of traditional and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111176760.4A CN113920163B (en) 2021-10-09 2021-10-09 Moving target detection method based on combination of traditional and deep learning

Publications (2)

Publication Number Publication Date
CN113920163A true CN113920163A (en) 2022-01-11
CN113920163B CN113920163B (en) 2024-06-11

Family

ID=79238705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111176760.4A Active CN113920163B (en) 2021-10-09 2021-10-09 Moving target detection method based on combination of traditional and deep learning

Country Status (1)

Country Link
CN (1) CN113920163B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200250461A1 (en) * 2018-01-30 2020-08-06 Huawei Technologies Co., Ltd. Target detection method, apparatus, and system
US20190272645A1 (en) * 2018-03-01 2019-09-05 Honda Motor Co., Ltd. Systems and methods for performing instance segmentation
CN111797688A (en) * 2020-06-02 2020-10-20 武汉大学 Visual SLAM method based on optical flow and semantic segmentation
CN112115889A (en) * 2020-09-23 2020-12-22 成都信息工程大学 Intelligent vehicle moving target detection method based on vision
CN113012197A (en) * 2021-03-19 2021-06-22 华南理工大学 Binocular vision odometer positioning method suitable for dynamic traffic scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANYING YUAN et al., "Independent Moving Object Detection Based on a Vehicle Mounted Binocular Camera", IEEE Sensors Journal, 21 September 2020 *

Also Published As

Publication number Publication date
CN113920163B (en) 2024-06-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant