CN109934096A - Autonomous-driving visual perception optimization method based on feature temporal correlation - Google Patents


Info

Publication number
CN109934096A
Authority
CN
China
Prior art keywords
feature
target
timing
detection
automatic pilot
Prior art date
Legal status
Granted
Application number
CN201910060991.5A
Other languages
Chinese (zh)
Other versions
CN109934096B (en)
Inventor
缪其恒
吴长伟
苏志杰
孙焱标
王江明
许炜
Current Assignee
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd filed Critical Zhejiang Zero Run Technology Co Ltd
Priority to CN201910060991.5A priority Critical patent/CN109934096B/en
Publication of CN109934096A publication Critical patent/CN109934096A/en
Application granted granted Critical
Publication of CN109934096B publication Critical patent/CN109934096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to an autonomous-driving visual perception optimization method based on feature temporal correlation. It comprises an improvement to the deep convolutional neural network detection framework: a target autocorrelation layer is added that outputs a target temporal-consistency metric as an optional network branch; an improvement to the target detection training database; and an improvement to the offline training process, including an improved training loss function: an autocorrelation loss function is added to the network's total loss with a suitable weight coefficient and participates in training the backbone features. The invention optimizes the stability of the detection outputs in both the training and inference stages, effectively improving the stability of classification and position regression in visual target detection. This in turn improves the accuracy and stability of target distance and relative-motion estimation, provides more accurate and useful target perception results for autonomous-driving applications, improves overall visual perception performance, and meets the needs of autonomous driving.

Description

Autonomous-driving visual perception optimization method based on feature temporal correlation
Technical field
The present invention relates to the technical field of autonomous-driving visual perception, and in particular to an autonomous-driving visual perception optimization method based on feature temporal correlation.
Background technique
Intelligence is one of the important trends in today's automotive industry, and vision systems are increasingly widely applied in automotive active safety. Monocular and binocular front-view, rear-view, and 360° surround-view camera systems have become the mainstream perception devices of current advanced driver-assistance systems. Existing visual perception systems of this kind provide structured road information (lane lines of all types, etc.) and information on targets of specific classes (traffic signs, vehicles, pedestrians, etc.). Warning systems and active safety systems are derived from these perception outputs. The perception functions of commercially available vehicle-mounted vision systems mainly comprise detection and recognition of targets such as pedestrians, vehicles, and traffic signs. Traditional detection methods are mostly based on hand-engineered image feature descriptors and are implemented with classifiers such as AdaBoost or SVM in a sliding-window search. The effectiveness of such methods depends on the design of the feature descriptor, and their robustness and portability in application are poor. Their limitations and difficulties include: different target classes such as pedestrians, vehicles, and traffic signs require different image feature descriptors, and the detection framework and method must be adjusted separately for daytime and nighttime conditions, etc.
Deep convolutional neural network technology is also evolving rapidly. Network tasks have expanded from the initial simple classification and recognition applications to detection, segmentation, optical flow, stereo vision, and other fields; network models have evolved from complex, redundant large networks to streamlined, efficient small networks; and network applications have developed from high-power server-side deployment to low-power front-end embedded deployment. Target detection frameworks based on deep convolutional neural networks have begun to be applied on some front-end platforms, such as security monitoring, intelligent transportation, and smartphones. Target detection in the intelligent driving field places high demands on real-time performance and robustness. For deep learning detection frameworks, existing methods focus on improving target recall (including extending the scale range and class range covered by the framework) while ignoring the temporal consistency of the detections. Existing vision-based deep learning detection algorithms still suffer from low temporal stability and consistency of their results: detections of the same target in consecutive frames can be inconsistent (even when the inter-frame illumination change is imperceptible to the naked eye), and the position regression of the same target is temporally unstable (affected by viewing angle, illumination, position, and many other factors). These problems cause large fluctuations in vision-based target distance and relative-motion measurements, which degrade downstream application algorithms and fail to meet the demands of autonomous driving (especially at high speed). Moreover, current mainstream machine-learning-based detection training uses temporally discrete samples and does not consider the influence of temporal correlation on the consistency of classification and regression outputs.
Summary of the invention
To solve the above technical problems, the present invention provides an autonomous-driving visual perception optimization method based on feature temporal correlation. It optimizes the stability of the detection outputs in both the training and inference stages, effectively improving the stability of classification and position regression of visual target detection, thereby improving the accuracy and stability of target distance and relative-motion estimation, providing more accurate and useful target perception results for autonomous-driving applications, improving the overall visual perception performance, and meeting the needs of autonomous driving.
The above technical problems are mainly solved by the following technical scheme. The method of the invention, based on feature temporal correlation, comprises an improved deep convolutional neural network detection framework, an improved target detection training database, and an improved offline training process; based on the temporal correlation of image features, it improves both the training and the inference of the deep detection network. Detection framework improvement: a target autocorrelation output layer is added that outputs a target temporal-consistency metric as an optional network branch. Offline training improvement, including an improved training loss function: an autocorrelation loss function is added to the network's total loss with a suitable weight coefficient and participates in training the backbone features. The invention introduces temporal-correlation features into both the training and the inference stages of the deep detection model; with only a tiny amount of extra computation (when the inference stage includes the autocorrelation branch) or no extra computation at all (when only the training stage includes it), it effectively improves the stability of classification and position regression of the detection results, thereby improving the accuracy and stability of target distance and relative-motion estimation and providing more accurate and useful target perception results for autonomous-driving applications.
Preferably, the detection framework is improved as follows: under the backbone of an existing deep neural network detection framework, a feature temporal-correlation branch is added. Based on the cascaded convolutional features, the feature channels of different scales are fused by channel concatenation; the detection branch outputs the target results, the feature descriptors corresponding to each target are extracted, the autocorrelation of each target's features across adjacent frames is computed, and the final output is the target offset compensation that maximizes the temporal autocorrelation of the detected target. To retain flexibility for forward-inference deployment, the detection network takes a 3-channel RGB image as input and outputs a target list (by default comprising vehicles of all types, pedestrians, non-motorized vehicles, traffic signs, traffic lights, etc.).
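The patent does not spell out how the adjacent-frame feature autocorrelation is computed. A minimal sketch, assuming cosine similarity between per-location feature vectors and a small search window (both the similarity measure and the window radius are illustrative assumptions), might look like this:

```python
import numpy as np

def feature_autocorrelation(f_prev, f_curr):
    """Cosine similarity between two target feature vectors."""
    a = f_prev / (np.linalg.norm(f_prev) + 1e-8)
    b = f_curr / (np.linalg.norm(f_curr) + 1e-8)
    return float(np.dot(a, b))

def best_offset(feat_prev, feat_curr, center, radius=2):
    """Search a small window around `center` for the (dx, dy) that
    maximises the feature autocorrelation between adjacent frames;
    this plays the role of the offset compensation the branch outputs."""
    cy, cx = center
    best_score, best_dxy = -1.0, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y < feat_curr.shape[0] and 0 <= x < feat_curr.shape[1]:
                s = feature_autocorrelation(feat_prev[cy, cx], feat_curr[y, x])
                if s > best_score:
                    best_score, best_dxy = s, (dx, dy)
    return best_dxy, best_score
```

In a real deployment the feature maps would come from the shared encoder of the detection network, and the search window would be sized to the expected inter-frame motion.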
Preferably, the training database improvement comprises the following steps:
(1) Temporal sample augmentation: temporally correlated samples are added by searching the original video content by the filename and frame number of the original training data, extending each sample to several adjacent frames of data.
(2) Automatic pre-labeling of the added temporal samples: a tracking algorithm pre-labels the added training samples from the original sample labels, i.e., the original label is used as the tracker's input, and the target position updated by the tracker's output becomes the label of the added sample.
(3) Manual verification of the added temporal samples: the pre-labels generated in step (2) are manually verified, producing the final augmented training database.
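The pre-labeling step (2) above can be sketched as a label-propagation loop. The `tracker_update` callable is a stand-in for a real LK or KCF tracker step, which the patent names but does not detail:

```python
def propagate_labels(frames, seed_boxes, tracker_update):
    """Propagate frame-0 ground-truth boxes through the adjacent frames
    of a clip, producing pre-labels for the added temporal samples.
    `tracker_update(frame, box)` returns the box updated to `frame`."""
    labels = {0: list(seed_boxes)}
    boxes = list(seed_boxes)
    for t in range(1, len(frames)):
        boxes = [tracker_update(frames[t], b) for b in boxes]
        labels[t] = list(boxes)
    return labels
```

The resulting per-frame labels would then go to the manual verification of step (3) before entering the augmented database.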
Preferably, the offline training improvement comprises: an improved detection-branch loss function, a newly added target temporal-offset loss function, and a modified online data augmentation.
Preferably, the detection-branch loss function is improved as follows.
The added temporal-correlation loss function L_corr is
L_corr = α·L_ctr + β·L_ftr,
where L_ctr is the classification temporal-consistency loss, defined as the temporal consistency of the target feature predictions, and L_ftr is the position-regression temporal-consistency loss, defined as the temporal autocorrelation of the target feature maps.
The detection-branch loss function L_i is
L_i = k1·L_corr + k2·L_cls + k3·L_reg,
where L_cls is the target classification loss, for which softmax loss or focal loss may be used, and L_reg is the target position-regression loss, for which smooth-L1 loss or L2 loss may be used.
α and β are the weight coefficients of the classification and regression components of the temporal-consistency loss (default value 0.5 each); k1, k2, k3 are the weight coefficients of the corresponding components of the detection-branch loss (default value 0.33 each).
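The weighted combination above can be sketched directly. This is an illustrative sketch only: the component losses L_ctr, L_ftr, L_cls, L_reg are passed in as precomputed scalars, since their defining formulas are given in the patent as images that are not reproduced here.

```python
def correlation_loss(l_ctr, l_ftr, alpha=0.5, beta=0.5):
    """L_corr = alpha * L_ctr + beta * L_ftr (defaults 0.5 per the text)."""
    return alpha * l_ctr + beta * l_ftr

def detection_branch_loss(l_cls, l_reg, l_ctr, l_ftr, k=(0.33, 0.33, 0.33)):
    """L_i = k1 * L_corr + k2 * L_cls + k3 * L_reg (defaults 0.33 per the text)."""
    k1, k2, k3 = k
    return k1 * correlation_loss(l_ctr, l_ftr) + k2 * l_cls + k3 * l_reg
```

In a training framework these scalars would be tensors carrying gradients, but the weighting scheme is the same.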
Preferably, the target temporal-offset loss function is added as follows. A target offset loss function L_sft is defined that computes and sums losses over the box center and the 2D box dimensions, comparing the network-predicted offset against the offset provided by the label; x, y, w, h denote the horizontal and vertical coordinates of the target's top-left corner and its width and height in the image coordinate system.
The total training loss function is L = Σ_i L_i + L_sft, i = 1, 2.
Preferably, the online data augmentation is modified as follows: the minimum training unit of the database is a temporally consecutive 5-frame image sequence; each time, 2 frames are randomly drawn from the sequence for training. The geometric augmentations of the two samples need not be identical, but the target position labels must be updated to match the corresponding geometric transformations.
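A sketch of this sampling scheme, under the assumption that the geometric augmentation is a horizontal mirror (the patent also mentions random cropping), with the required box-label update:

```python
import random
import numpy as np

def flip_box(box, img_w):
    """Update an (x, y, w, h) box label for a horizontal mirror."""
    x, y, w, h = box
    return (img_w - x - w, y, w, h)

def sample_training_pair(clip, labels, rng):
    """Draw 2 of the 5 frames in a temporal unit; the mirror decision is
    made independently per frame, and box labels are updated to match."""
    i, j = sorted(rng.sample(range(len(clip)), 2))
    pair = []
    for t in (i, j):
        img, boxes = clip[t], list(labels[t])
        if rng.random() < 0.5:  # independent geometric augmentation
            img = np.fliplr(img)
            boxes = [flip_box(b, img.shape[1]) for b in boxes]
        pair.append((img, boxes))
    return pair
```

Color augmentation, by contrast, would be drawn once and applied to both frames, per the consistency requirement in the embodiment.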
Preferably, the autonomous-driving visual perception optimization method based on feature temporal correlation further comprises an improved online inference process: if the front-end platform has insufficient performance headroom, the original detection framework is used without the target temporal-offset output branch, with the network weights obtained from the improved offline training process; if the platform still has spare compute, the temporal offsets of the N highest-priority targets (N chosen according to the spare compute, by a preset rule) are computed and fused with the current-frame detection results.
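The priority-budgeted offset computation can be sketched as a simple selection rule. The priority score and the unit cost here are illustrative assumptions, since the patent leaves the "preset rule" open:

```python
def select_offset_targets(detections, spare_budget, cost_per_target=1):
    """Pick the N highest-priority targets whose temporal-offset
    computation fits within the spare compute budget."""
    n = min(len(detections), spare_budget // cost_per_target)
    ranked = sorted(detections, key=lambda d: d["priority"], reverse=True)
    return ranked[:n]
```

A plausible priority score would favour near, large, or in-path targets, whose distance and relative-motion estimates matter most.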
The beneficial effects of the invention are: the stability of the detection outputs is optimized in both the training and inference stages, effectively improving the stability of classification and position regression of visual target detection, thereby improving the accuracy and stability of target distance and relative-motion estimation, providing more accurate and useful target perception results for autonomous-driving applications, improving the overall visual perception performance, and meeting the needs of autonomous driving.
Description of the drawings
Fig. 1 is a schematic diagram of the improved deep convolutional neural network detection framework of the invention.
Fig. 2 is a flowchart of the improved target detection training database process of the invention.
Specific embodiment
The technical solution of the invention is further described below through embodiments and with reference to the accompanying drawings.
Embodiment: the autonomous-driving visual perception optimization method of this embodiment, based on feature temporal correlation, comprises an improved deep convolutional neural network detection framework, an improved target detection training database, an improved offline training process, and an improved online inference process.
1. Improved detection framework: under the backbone of an existing deep neural network detection framework, a feature temporal-correlation branch is added, as shown in Fig. 1. To retain flexibility for forward-inference deployment, the detection network takes a 3-channel RGB image as input and outputs a target list (by default comprising vehicles of all types, pedestrians, non-motorized vehicles, traffic signs, traffic lights, etc.). Based on the cascaded convolutional features, i.e., the shared feature encoder in Fig. 1, consisting mainly of conv + ReLU + BN operations, the feature channels of different scales are fused by channel concatenation. By computing the temporal correlation of the feature regions of interest, the detection branch outputs the target results, the feature descriptors corresponding to each target are extracted, the autocorrelation of each target's features across adjacent frames is computed, and the final output is the target offset compensation (Δx, Δy) that maximizes the temporal autocorrelation of the detected target.
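The channel-cascade fusion of multi-scale features can be sketched with plain NumPy, under the assumptions that nearest-neighbour upsampling brings every map to the finest resolution and that each coarser map's size divides the finest one (a real framework would typically use learned upsampling or deconvolution):

```python
import numpy as np

def upsample_nearest(f, factor):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return f.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse_multiscale(features):
    """Channel-cascade fusion: bring every scale to the finest spatial
    resolution, then concatenate along the channel axis."""
    h = max(f.shape[0] for f in features)
    aligned = [upsample_nearest(f, h // f.shape[0]) for f in features]
    return np.concatenate(aligned, axis=-1)
```

The fused map then feeds both the detection branch and the temporal-correlation branch.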
2. Improved target detection training database; the process is shown in Fig. 2 and comprises the following steps:
(1) Temporal sample augmentation: temporally correlated samples are added by searching the original video content by the filename and frame number of the original training data, extending each sample to several consecutive frames (by default, 1 → 5 frames) of data.
(2) Automatic pre-labeling of the added temporal samples: a tracking algorithm (LK, KCF, etc.) pre-labels the added training samples from the original sample labels, i.e., the original label is used as the tracker's input, and the target position updated by the tracker's output becomes the label of the added sample.
(3) Manual verification of the added temporal samples: the pre-labels generated in step (2) are manually verified, producing the final augmented training database.
3. Improved offline training process: because the temporal-correlation branch is introduced, the training process of the detection model must be improved accordingly; the main improvements concern the loss functions and the online data augmentation method. Training still uses mini-batch stochastic gradient descent.
3.1. Improved detection-branch loss function: let H denote the detection inference function, whose input is a 3-channel image and whose output comprises a target classification branch (Hc) and a target position-regression branch (Hl).
The added temporal-correlation loss function L_corr, comprising a temporal classification-consistency term and a temporal regression-consistency term, is
L_corr = α·L_ctr + β·L_ftr,
where L_ctr is the classification temporal-consistency loss, defined as the temporal consistency of the target feature predictions, and L_ftr is the position-regression temporal-consistency loss, defined as the temporal autocorrelation of the target feature maps.
The detection-branch loss function L_i is
L_i = k1·L_corr + k2·L_cls + k3·L_reg,
where L_cls is the target classification loss, for which softmax loss or focal loss may be used, and L_reg is the target position-regression loss, for which smooth-L1 loss or L2 loss may be used.
3.2. Added target temporal-offset loss function: a target offset loss function L_sft is defined that computes and sums losses over the box center and the 2D box dimensions, comparing the network-predicted offset against the offset provided by the label.
The total training loss function is L = Σ_i L_i + L_sft, i = 1, 2.
3.3. Modified online data augmentation: in contrast to training on a database of discrete images, the minimum training unit of the database used here is a temporally consecutive 5-frame sequence; each time, 2 frames are randomly drawn from the sequence for training. The geometric augmentations (random cropping, mirroring, etc.) of the two samples need not be identical, but the target position labels must be updated to match the corresponding geometric transformations; the color augmentations should be kept as consistent as possible, or only a small color-transformation tolerance should be allowed.
4. Improved online inference process: if the front-end platform has insufficient performance headroom, the original detection framework is used without the target temporal-offset output branch, with the network weights trained by the method of section 3; compared with the unimproved version, target consistency is markedly improved (with a small improvement in target recall). If the platform still has spare compute, the temporal offsets of the N highest-priority targets (N chosen according to the spare compute, by a preset rule) are computed and fused with the current-frame detection results, treating the target offset output as a correction of the detection output; target consistency and recall are then greatly improved.
Compared with existing detection algorithms, the key advantage of the invention is that temporal-consistency information (target classification consistency and target feature-descriptor regression autocorrelation coefficients) is incorporated into both the training and the forward deployment of the deep neural network. With no extra computation, or only limited extra computation, it greatly improves the temporal stability of the detection algorithm and improves the feasibility of the corresponding autonomous-driving applications (improving target distance and relative-motion estimation). The invention is flexible to apply: it suits various detection frameworks, adds no computation to the original backbone, and the extra forward-inference computation can be configured flexibly according to the platform's compute headroom.

Claims (8)

1. An autonomous-driving visual perception optimization method based on feature temporal correlation, characterized by comprising an improved deep convolutional neural network detection framework, an improved target detection training database, and an improved offline training process.
2. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 1, characterized in that the detection framework is improved as follows: under the backbone of an existing deep neural network detection framework, a feature temporal-correlation branch is added; based on the cascaded convolutional features, the feature channels of different scales are fused by channel concatenation; the detection branch outputs the target results, the feature descriptors corresponding to each target are extracted, the autocorrelation of each target's features across adjacent frames is computed, and the final output is the target offset compensation that maximizes the temporal autocorrelation of the detected target.
3. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 1, characterized in that the target detection training database improvement comprises the following steps:
(1) temporal sample augmentation: temporally correlated samples are added by searching the original video content by the filename and frame number of the original training data, extending each sample to several adjacent frames of data;
(2) automatic pre-labeling of the added temporal samples: a tracking algorithm pre-labels the added training samples from the original sample labels, i.e., the original label is used as the tracker's input, and the target position updated by the tracker's output becomes the label of the added sample;
(3) manual verification of the added temporal samples: the pre-labels generated in step (2) are manually verified, producing the final augmented training database.
4. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 1, characterized in that the offline training improvement comprises: an improved detection-branch loss function, a newly added target temporal-offset loss function, and a modified online data augmentation.
5. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 4, characterized in that the detection-branch loss function is improved as follows:
the added temporal-correlation loss function L_corr is
L_corr = α·L_ctr + β·L_ftr,
where L_ctr is the classification temporal-consistency loss, defined as the temporal consistency of the target feature predictions, and L_ftr is the position-regression temporal-consistency loss, defined as the temporal autocorrelation of the target feature maps;
the detection-branch loss function L_i is
L_i = k1·L_corr + k2·L_cls + k3·L_reg,
where L_cls is the target classification loss, for which softmax loss or focal loss may be used, and L_reg is the target position-regression loss, for which smooth-L1 loss or L2 loss may be used.
6. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 4, characterized in that the target temporal-offset loss function is added as follows: a target offset loss function L_sft is defined that computes and sums losses over the box center and the 2D box dimensions, comparing the network-predicted offset against the offset provided by the label;
the total training loss function is L = Σ_i L_i + L_sft, i = 1, 2.
7. The autonomous-driving visual perception optimization method based on feature temporal correlation according to any one of claims 4 to 6, characterized in that the online data augmentation is modified as follows: the minimum training unit of the database is a temporally consecutive 5-frame image sequence; each time, 2 frames are randomly drawn from the sequence for training; the geometric augmentations of the training samples need not be identical, but the target position labels must be updated to match the corresponding geometric transformations.
8. The autonomous-driving visual perception optimization method based on feature temporal correlation according to claim 1, characterized by comprising an improved online inference process: if the front-end platform has insufficient performance headroom, the original detection framework is used without the target temporal-offset output branch, with the network weights obtained from the improved offline training process; if the platform still has spare compute, the temporal offsets of the N highest-priority targets (N chosen according to the spare compute, by a preset rule) are computed and fused with the current-frame detection results.
CN201910060991.5A 2019-01-22 2019-01-22 Automatic driving visual perception optimization method based on characteristic time sequence correlation Active CN109934096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910060991.5A CN109934096B (en) 2019-01-22 2019-01-22 Automatic driving visual perception optimization method based on characteristic time sequence correlation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910060991.5A CN109934096B (en) 2019-01-22 2019-01-22 Automatic driving visual perception optimization method based on characteristic time sequence correlation

Publications (2)

Publication Number Publication Date
CN109934096A true CN109934096A (en) 2019-06-25
CN109934096B CN109934096B (en) 2020-12-11

Family

ID=66985064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910060991.5A Active CN109934096B (en) 2019-01-22 2019-01-22 Automatic driving visual perception optimization method based on characteristic time sequence correlation

Country Status (1)

Country Link
CN (1) CN109934096B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022239A (en) * 2016-05-13 2016-10-12 电子科技大学 Multi-target tracking method based on recurrent neural network
CN107330920A (en) * 2017-06-28 2017-11-07 华中科技大学 A kind of monitor video multi-target tracking method based on deep learning
WO2018086513A1 (en) * 2016-11-08 2018-05-17 杭州海康威视数字技术股份有限公司 Target detection method and device
CN108921013A (en) * 2018-05-16 2018-11-30 浙江零跑科技有限公司 A kind of visual scene identifying system and method based on deep neural network
CN108986143A (en) * 2018-08-17 2018-12-11 浙江捷尚视觉科技股份有限公司 Target detection tracking method in a kind of video
CN109145798A (en) * 2018-08-13 2019-01-04 浙江零跑科技有限公司 A kind of Driving Scene target identification and travelable region segmentation integrated approach
CN109242003A (en) * 2018-08-13 2019-01-18 浙江零跑科技有限公司 Method is determined based on the vehicle-mounted vision system displacement of depth convolutional neural networks


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619279A (en) * 2019-08-22 2019-12-27 天津大学 Road traffic sign instance segmentation method based on tracking
CN111274926A (en) * 2020-01-17 2020-06-12 深圳佑驾创新科技有限公司 Image data screening method and device, computer equipment and storage medium
CN111274926B (en) * 2020-01-17 2023-09-22 武汉佑驾创新科技有限公司 Image data screening method, device, computer equipment and storage medium
CN113780152A (en) * 2021-09-07 2021-12-10 北京航空航天大学 Remote sensing image ship small target detection method based on target perception
CN113780152B (en) * 2021-09-07 2024-04-05 北京航空航天大学 Remote sensing image ship small target detection method based on target perception

Also Published As

Publication number Publication date
CN109934096B (en) 2020-12-11

Similar Documents

Publication Publication Date Title
US20200293797A1 (en) Lane line-based intelligent driving control method and apparatus, and electronic device
CN109934096A (en) Automatic Pilot visual perception optimization method based on feature timing dependence
CN112750150B (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
US20200226502A1 (en) Travel plan recommendation method, apparatus, device and computer readable storage medium
CN113221677B (en) Track abnormality detection method and device, road side equipment and cloud control platform
CN111027430B (en) Traffic scene complexity calculation method for intelligent evaluation of unmanned vehicles
JP7292355B2 (en) Methods and apparatus for identifying vehicle alignment information, electronics, roadside equipment, cloud control platforms, storage media and computer program products
CN108446622A (en) Detecting and tracking method and device, the terminal of target object
US20190339088A1 (en) Navigation with sun glare information
CN107657625A (en) Merge the unsupervised methods of video segmentation that space-time multiple features represent
CN112734808A (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
CN112885130B (en) Method and device for presenting road information
CN110516633A (en) A kind of method for detecting lane lines and system based on deep learning
CN110706271A (en) Vehicle-mounted vision real-time multi-vehicle-mounted target transverse and longitudinal distance estimation method
CN113076891B (en) Human body posture prediction method and system based on improved high-resolution network
CN106529432A (en) Hand area segmentation method deeply integrating significance detection and prior knowledge
CN109740609A (en) A kind of gauge detection method and device
CN111209880A (en) Vehicle behavior identification method and device
WO2023137921A1 (en) Artificial intelligence-based instance segmentation model training method and apparatus, and storage medium
CN101320477B (en) Human body tracing method and equipment thereof
CN109753841A (en) Lane detection method and apparatus
CN110472508A (en) Lane line distance measuring method based on deep learning and binocular vision
CN116958740A (en) Zero sample target detection method based on semantic perception and self-adaptive contrast learning
CN106650814A (en) Vehicle-mounted monocular vision-based outdoor road adaptive classifier generation method
CN116977943A (en) Road element identification method, device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Zero run Technology Co.,Ltd.

Address before: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG LEAPMOTOR TECHNOLOGY Co.,Ltd.