CN106296708B - Car tracing method and apparatus - Google Patents

Car tracing method and apparatus

Info

Publication number
CN106296708B
Authority
CN
China
Prior art keywords
radar
detector
coordinate system
relationship
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610683492.8A
Other languages
Chinese (zh)
Other versions
CN106296708A (en)
Inventor
朱少岚 (Zhu Shaolan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Aoshi Zhihui Photoelectric Technology Co Ltd
Original Assignee
Ningbo Aoshi Zhihui Photoelectric Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Aoshi Zhihui Photoelectric Technology Co Ltd filed Critical Ningbo Aoshi Zhihui Photoelectric Technology Co Ltd
Priority to CN201610683492.8A priority Critical patent/CN106296708B/en
Publication of CN106296708A publication Critical patent/CN106296708A/en
Application granted granted Critical
Publication of CN106296708B publication Critical patent/CN106296708B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior

Landscapes

  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure relate to a vehicle tracking method. The method includes obtaining a transfer matrix between radar data and camera images using a calibration board; using the spatial correspondence of radar data and camera images based on the transfer matrix, together with their temporal correspondence, as initial conditions; obtaining detection parameters through supervised machine learning during tracking; and correcting the detector parameters based on the confidence of the tracking results. Embodiments of the disclosure also relate to a vehicle tracking device.

Description

Car tracing method and apparatus
Technical field
The disclosure belongs to the technical field of computer vision and graphics processing, and more particularly relates to a vehicle tracking method and apparatus based on computer vision and graphics processing.
Background art
Vehicle tracking and detection is an important application direction in the field of computer vision and graphics processing. Research in this field has tracked and detected vehicles using lasers, radar, cameras (such as monocular or stereo cameras), and multi-sensor fusion.
Multi-sensor vehicle tracking can detect information such as the position, speed, and trajectory of a particular vehicle in space, which has important practical significance for detecting abnormal vehicles and avoiding potential risks. When multiple sensors including visual sensors are used, a vehicle's appearance differs markedly across conditions, so the robustness of detection cannot be guaranteed. Tracking with only the visual information of a monocular visible-light camera yields only the vehicle's position and size in the image and lacks depth information, so the vehicle's position, speed, and trajectory in three-dimensional space cannot be computed; this strongly limits its application in traffic environments. Lidar, on the other hand, can provide accurate depth. Combining a visible-light camera with lidar not only provides sufficient spatial and visual information but also greatly improves the confidence of the tracking algorithm. Most existing methods divide the tracking process into two parts: multi-sensor registration and target tracking. Spatial registration of radar and a visible-light camera is generally done in one of two ways: the first obtains corresponding radar data points and image pixels in the radar and image coordinate systems respectively and computes the transfer matrix between the two coordinate systems; the second extracts different features in the different coordinate systems (for example, line features in the image coordinate system and point features in the radar coordinate system) and converts their spatial relationship into the problem of minimizing an objective function. Existing target tracking approaches fall broadly into two kinds: the first is generative models, which find the image region with minimal reconstruction error against the object model; the second is discriminative models, which treat target tracking as a classification problem in which the target is the foreground and image pixels other than the target are the background, and track the target by updating the foreground and background.
These prior-art methods have their limitations. First, performing only spatial calibration provides the target's spatial and visual information only in a static environment, so the applications are limited. Second, performing only target tracking yields only the target's position and size in the image, without spatial information, which is not very practical.
Summary of the invention
Embodiments of the disclosure disclose a vehicle tracking method. The method comprises: obtaining, using a calibration board, the relationship between the radar coordinate system and the world coordinate system and the relationship between the camera image coordinate system and the world coordinate system; obtaining a transfer matrix between the radar coordinate system and the camera image coordinate system according to the obtained coordinate-system relationships; using the temporal correspondence of radar data and camera images at selected moments and the spatial correspondence based on the transfer matrix as initial conditions for evaluating the confidence of results during target tracking; performing supervised learning during tracking to obtain the parameters of a detector; and correcting the detector's parameters during tracking based on the confidence of the tracking results.
In some embodiments, detecting the target with the detector when the confidence of the tracking result is below a threshold includes obtaining the detection parameters iteratively using semi-supervised learning.
In some embodiments, obtaining the detection parameters iteratively using semi-supervised learning includes correcting the false and missed detections that arise during classification, based on the image-pixel classification produced by the previous iteration's detector and on the temporal and spatial correspondences of the tracking process, and updating the detection parameters of the current iteration using semi-supervised learning.
In some embodiments, when the camera image frame rate and the radar frequency do not match, the timestamp information in the radar data reports is used to average the radar data so that it corresponds to the camera image frames.
In some embodiments, the confidence of the tracking result determines when the detector reinitializes the bounding box.
In some embodiments, the detector uses a cascade of multiple classifiers.
In some embodiments, the multi-classifier cascade includes a classifier that uses spatial position as its discrimination criterion.
In some embodiments, the radar is a single-line lidar.
In some embodiments, the camera is a visible-light camera.
Embodiments of the disclosure also disclose a tracking device. The device includes a calibration board, a radar, a camera, and a processor. The calibration board is used to obtain the relationship between the radar coordinate system and the world coordinate system and the relationship between the camera image coordinate system and the world coordinate system. The processor is used to obtain the transfer matrix between the radar coordinate system and the camera image coordinate system according to the obtained coordinate-system relationships; to use the temporal correspondence of radar data and camera images at selected moments and the spatial correspondence based on the transfer matrix as initial conditions for evaluating the confidence of results during target tracking; to perform supervised learning during tracking to obtain the parameters of a detector; and to correct the detector's parameters during tracking based on the confidence of the tracking results.
Embodiments of the disclosure can address the above difficulties in the prior art and alleviate the situation in which existing vehicle tracking algorithms cannot provide accurate spatial and visual information simultaneously. Joint spatial calibration of the camera and the radar achieves the fusion of visual and depth information, resolving the data incompleteness caused by using vision or depth information alone. This information is then used in the vehicle tracking algorithm, so that the visual and depth information of a particular vehicle can be obtained simultaneously in a traffic scene, which is of great significance for vehicle anomaly detection and risk avoidance. A detector that uses spatial information as a judgment criterion is added to the tracking algorithm, achieving an organic integration within the tracking algorithm.
Brief description of the drawings
The drawings are provided for further understanding of the disclosure and constitute part of this application; however, they merely illustrate non-limiting examples of some inventions embodying the inventive concepts and do not limit them in any way.
Fig. 1 shows a flowchart of a lidar-camera spatial calibration method according to some embodiments of the disclosure.
Fig. 2 shows a schematic diagram of spatial calibration of lidar and camera using a calibration board according to some embodiments of the disclosure.
Fig. 3 shows a flowchart of a target tracking algorithm according to some embodiments of the disclosure.
Fig. 4 shows a schematic diagram of executing the target tracking algorithm by machine learning according to some embodiments of the disclosure.
Fig. 5 shows a block diagram of a vehicle tracking device according to some embodiments of the disclosure.
Detailed description
The inventive concepts of the disclosure will be described hereinafter using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, these inventive concepts may be embodied in many different forms and should not be considered limited to the embodiments described herein. These embodiments are provided so that this disclosure is more thorough and complete and fully conveys its scope to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive; a component, step, or element from one embodiment may be assumed to exist in, or be used in, another embodiment. The specific embodiments shown and described may be substituted by a wide variety of alternative and/or equivalent implementations without departing from the scope of the embodiments of the disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein.
It will be apparent to those skilled in the art that alternative embodiments may be practiced using only some of the described aspects. Specific numbers, materials, and configurations are described in the embodiments herein for purposes of illustration; however, those skilled in the art may also practice alternative embodiments without these specific details. In other cases, well-known features may be omitted or simplified so as not to obscure the illustrative embodiments.
Furthermore, to aid understanding of the illustrative embodiments, various operations are described in turn as multiple discrete operations; however, the order of description should not be construed to mean that these operations must be performed in that order. These operations need not be executed in the order presented.
" in some embodiments " hereinafter, the phrases such as " in one embodiment " may or may not refer to identical reality Apply example.Term " includes ", " having " and "comprising" are synonymous, unless providing in other ways in context.Phrase " A and/or B " means (A), (B) or (A and B).Phrase " A/B " means (A), (B) or (A and B), is similar to phrase " A and/or B ".It is short Language " at least one of A, B and C " means (A), (B), (C), (A and B), (A and C), (B and C) or (A, B and C).Phrase " (A) B " means (B) or (A and B), i.e. A is optional.
Fig. 1 shows a flowchart of a lidar-camera spatial calibration method according to some embodiments of the disclosure. Fig. 2 shows a schematic diagram of spatial calibration of lidar and camera using a calibration board according to some embodiments of the disclosure. The lidar and camera can be used to obtain the data needed for tracking a target. The target may be, for example, a vehicle that needs to be tracked.
The principle of spatial calibration of radar and camera is as follows. From the spatial relationships between the camera and the world coordinate system and between the camera and the camera image, the conversion relationship between the camera image and the world coordinate system can be obtained:

$$ S\begin{bmatrix}X_p\\ Y_p\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&C_x\\ 0&f_y&C_y\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&t\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\qquad(1) $$

where $R$ is the rotation matrix, $t$ is the translation vector, $S$ is a proportionality coefficient, $f_x, f_y$ are the focal lengths, $C_x, C_y$ are the offset parameters, $P(X_w, Y_w, Z_w)$ is the coordinate of point $P$ in the world coordinate system, and $P(X_p, Y_p, 1)$ is the position of point $P$ in the camera image.
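For concreteness, a minimal numpy sketch of the projection in formula (1) follows; it is an illustration only, not part of the claimed method, and the function name and argument layout are our own:

```python
import numpy as np

def project_world_to_image(P_w, R, t, fx, fy, cx, cy):
    """Pinhole projection of formula (1): S [Xp, Yp, 1]^T = K [R|t] [Xw, Yw, Zw, 1]^T."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    p_cam = R @ np.asarray(P_w, dtype=float) + t  # world frame -> camera frame
    p_hom = K @ p_cam                             # homogeneous pixel coordinates
    return p_hom[:2] / p_hom[2]                   # divide by S to get (Xp, Yp)
```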
Similarly, from the position and attitude of the radar, the conversion relationship from the radar coordinate system to the world coordinate system can be obtained:

$$ x=\rho\cos\beta,\qquad y=\rho\sin\beta\cos\alpha,\qquad z=h-\rho\sin\beta\sin\alpha \qquad(2) $$

where $(\rho, \beta)$ is the polar value of point $P$ in the lidar coordinate system, $\rho$ being the distance from the lidar source to the point and $\beta$ the sweep angle of the lidar; $(x, y, z)$ is the corresponding coordinate in the world coordinate system; $\alpha$ is the installation pitch angle of the lidar; and $h$ is the mounting height of the lidar.
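The following short Python sketch is one reading of formula (2); the exact axis conventions for the sweep angle β are an assumption of ours, since the original figures are not reproduced here:

```python
import math

def lidar_polar_to_world(rho, beta, alpha, h):
    # Scan-plane point at range rho, sweep angle beta; the scan plane is pitched
    # down by alpha and the sensor is mounted at height h above the ground.
    x = rho * math.cos(beta)
    y = rho * math.sin(beta) * math.cos(alpha)
    z = h - rho * math.sin(beta) * math.sin(alpha)
    return x, y, z
```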
Combining the above two conversion relationships gives the conversion from the radar coordinate system to the image as follows, which turns the radar-image calibration problem into the problem of finding several groups of corresponding radar data points and image pixels and computing the transfer matrix:

$$ S\begin{bmatrix}X_p\\ Y_p\\ 1\end{bmatrix}=M\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix},\qquad M=\begin{bmatrix}f_x&0&C_x\\ 0&f_y&C_y\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&t\end{bmatrix}\qquad(3) $$

where $(x, y, z)$ is given by formula (2) and $M$ is the $3\times 4$ transfer matrix to be calibrated.
As shown in Fig. 2, in some embodiments the spatial calibration of radar and camera can be carried out using a calibration board. The calibration board is not limited to the schematic square illustrated in Fig. 2; any machine-vision calibration board in the prior art can be chosen, such as a checkerboard calibration board, a dot calibration board, a stereo calibration board, a concentric-circle calibration board, an aperture calibration board, and/or a depth-of-field calibration board, as long as the calibration board has a predetermined size and scale and can be detected by both the radar and the camera. As shown in steps S101 and S102 of Fig. 1, in some embodiments, with the radar and camera positions held constant, the calibration board is moved so as to acquire a certain number of camera images and radar data as input. After image acquisition is completed, the process enters the calculation of the conversion relationship.
In step S106, the radar data collected with the calibration board can be compared with the radar data collected without the calibration board to filter out the radar data points [(ρ₁,θ₁), (ρ₂,θ₂), (ρ₃,θ₃)] falling on the calibration board, where ρₙ and θₙ are, respectively, the distance and deflection angle in the polar coordinate system.
In steps S103, S104, and S105, the image data are filtered using Harris corner detection and MSER region detection to extract the calibration board vertex coordinates, and the positions [(x₁,y₁), (x₂,y₂), (x₃,y₃)] of [(ρ₁,θ₁), (ρ₂,θ₂), (ρ₃,θ₃)] in the image are computed. Note that the corner detection algorithm used in the embodiments of the disclosure is not limited to the Harris corner detector; other corner detection algorithms such as the Kitchen-Rosenfeld corner detector, the KLT corner detector, and/or the SUSAN corner detector can also be used. Region detection is likewise not limited to MSER; other well-known region feature extraction algorithms can also be used. The corner detection and region feature extraction algorithms can be implemented, for example, with the OpenCV computer vision library, as in the sketch below.
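As an illustration of steps S103~S105, the following sketch combines Harris corner detection and MSER region detection using OpenCV; the file name and thresholds are placeholder assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("calib_frame.png")              # hypothetical calibration image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Harris corner response: keep locations with a strong response
resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(resp > 0.01 * resp.max())  # candidate (row, col) corners

# MSER regions approximate the calibration-board surface in the image
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
mask = np.zeros_like(gray)
for pts in regions:
    cv2.fillPoly(mask, [pts.reshape(-1, 1, 2)], 255)

# A board vertex is a Harris corner lying on a detected region
board_vertices = [(int(c), int(r)) for r, c in corners if mask[r, c] > 0]
```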
After the relationship between the radar coordinate system and the world coordinate system and the relationship between the camera image coordinate system and the world coordinate system are obtained, the calibrated transfer matrix between the radar coordinate system and the camera image coordinate system can be obtained in step S107 according to formulas (1)~(3) above. In some embodiments, the transfer matrix between the two coordinate systems can be computed by a least squares method refined through RANSAC optimization. In step S108, the accuracy of the transfer matrix can be verified, for example, using reference standard data prepared in advance. The calibrated transfer matrix will be used to provide the spatial correspondence, so that the tracking algorithm can obtain sufficient spatial and visual information and its confidence can be greatly improved.
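A minimal sketch of such an estimate follows: a direct linear transform for the 3x4 matrix of formula (3) inside a RANSAC loop, then a least-squares refit on the inliers. The sample size, iteration count, and pixel threshold are assumptions, not values from the patent:

```python
import numpy as np

def dlt(world_pts, img_pts):
    # Direct linear transform for S*[u, v, 1]^T = M [X, Y, Z, 1]^T (formula (3));
    # world_pts is an Nx3 array, img_pts an Nx2 array, N >= 6.
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)        # right singular vector of smallest value

def reproj_error(M, world_pts, img_pts):
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    p = (M @ Xh.T).T
    return np.linalg.norm(p[:, :2] / p[:, 2:3] - img_pts, axis=1)

def ransac_transfer_matrix(world_pts, img_pts, iters=500, thresh=2.0):
    # Sample minimal sets, keep the hypothesis with most inliers, refit on inliers.
    rng = np.random.default_rng(0)
    best = np.zeros(len(world_pts), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(world_pts), size=6, replace=False)
        inliers = reproj_error(dlt(world_pts[idx], img_pts[idx]),
                               world_pts, img_pts) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return dlt(world_pts[best], img_pts[best])   # least-squares refit on inliers
```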
Fig. 3 shows a flowchart of a target tracking algorithm according to some embodiments of the disclosure. Fig. 4 further illustrates how the learning and detection processes iterate based on the tracking results. Because vehicle tracking in a traffic scene requires strong robustness, a detection component is added to the tracking algorithm. Moreover, since the appearance model of the tracked target may change somewhat during tracking, an online learning algorithm can be used to update the detector parameters to improve robustness. As described above, the calibrated transfer matrix between radar data and camera images is defined as the spatial correspondence, and the radar data and camera image at any selected moment are defined as the temporal correspondence.

In some embodiments, the radar frequency and the video frame rate may be inconsistent or unsynchronized. In that case, the timestamp information in the radar data reports is used to average the radar data so that it corresponds to the camera image frames, realizing the temporal correspondence between radar data and camera images, as in the sketch below. In some embodiments, the video frame rate may be greater than the radar frequency, in which case the video images can be time-averaged to correspond to the radar data. In some embodiments, the above temporal and spatial correspondences serve as initial conditions for evaluating the confidence of results during target tracking; for example, the radar data and camera image at a selected corresponding moment, related by the calibrated spatial correspondence, can be used as the initial conditions. Whether the tracking confidence reaches a predetermined threshold can be judged autonomously, so that a suitable moment for invoking the detector is chosen. Supervised learning can be performed during tracking to obtain the detector parameters, and the detector parameters are corrected during tracking based on the confidence of the tracking results. In some embodiments, correcting the detector parameters includes detecting the target with the detector when the confidence of the tracking result is below the threshold and applying online learning for correction. In some embodiments, the detector can use a cascade of multiple classifiers, including a classifier that uses spatial position as its discrimination criterion. The learning algorithm can use online machine learning to update the detector parameters.
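A short Python sketch of this time alignment follows; it is an illustration only, and the report format (a timestamp plus a fixed-layout array of ranges) is an assumption:

```python
import numpy as np

def radar_for_frame(radar_reports, t_frame, half_window):
    """Average the radar reports whose timestamps fall inside one camera frame
    interval; radar_reports is a list of (timestamp, ranges) pairs, and every
    `ranges` array is assumed to share the same beam layout."""
    hits = [r for t, r in radar_reports if abs(t - t_frame) <= half_window]
    return np.mean(np.stack(hits), axis=0) if hits else None
```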
In step S301, a pre-determined initial bounding box is prepared for the tracking process. The initial bounding box can be reinitialized by the detector when the confidence of the tracking result is judged not to meet the predetermined threshold. In step S302, the above radar data and camera images, with their temporal and spatial correspondences, are fused as initial conditions and used in the machine learning process of detection, learning, and tracking. Steps S303, S304, and S305 show the detection, learning, and tracking processes: the target is tracked by the tracking algorithm and the confidence of the tracking result is evaluated; for example, only when the confidence is below a certain predetermined threshold is the detector used to detect the target at the current moment. The detection can be carried out iteratively using semi-supervised learning. Semi-supervised learning includes, but is not limited to, semi-supervised classification, semi-supervised regression, semi-supervised clustering, and/or combinations thereof. In some embodiments, the detector parameters can first be obtained by performing supervised learning on the initial bounding box. In some embodiments, training the detector parameters requires adding the positive and negative samples generated by the learner for semi-supervised learning. In the iterative process of obtaining the current iteration's detection parameters using semi-supervised learning, the image-pixel classification produced by the previous iteration's detector is used and, as shown in Fig. 4, the false and missed detections made by the detector during sample classification are corrected according to the temporal and spatial structural relationships obtained during tracking. This iterative process is repeated until tracking ends.
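One possible shape of this detect/learn/track loop is sketched below; every component interface (tracker, detector, learner) is a placeholder assumption of ours, since the patent specifies behavior rather than an API:

```python
def detect_learn_track(frames, radar_seq, tracker, detector, learner, conf_thresh=0.5):
    # S301: a pre-determined initial bounding box starts the process.
    box = tracker.init_box
    for frame, radar in zip(frames, radar_seq):
        # S305: track and evaluate the confidence of the tracking result.
        box, conf = tracker.track(frame, box)
        if conf < conf_thresh:
            # S303: confidence too low, so re-detect; the calibrated transfer
            # matrix maps the radar return into the image, and the spatial-
            # position classifier keeps only candidates consistent with it.
            candidates = detector.detect(frame)
            box = learner.pick_spatially_consistent(candidates, radar)
        # S304: semi-supervised update; detections violating the temporal/
        # spatial structure are relabeled as negatives (false detections),
        # tracked-but-undetected regions as positives (missed detections).
        pos, neg = learner.label_from_constraints(frame, box, radar)
        detector.update(pos, neg)
    return box
```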
Fig. 5 shows a block diagram of a vehicle tracking device 500 according to some embodiments of the disclosure. The vehicle tracking device 500 includes a calibration board 501, a radar 503, a camera 505, and a processor 507. The calibration board 501 can be any of the various machine-vision calibration boards in the prior art described above. In some embodiments, the radar 503 includes a single-line lidar, a multi-line lidar, and/or a vehicle-mounted three-dimensional lidar. In some embodiments, the camera 505 includes a visible-light camera, an infrared camera, a CCD camera, a CMOS camera, a network camera, and/or any device that can provide a sequence of video images. The processor 507 can include a complex instruction set computer (CISC) processor, a reduced instruction set computer (RISC) processor, an x86-instruction-set-compatible processor, a multi-core processor, a multi-core mobile processor, a microprocessor, a microcontroller, and/or a central processing unit (CPU), etc. In some embodiments, the processor 507 may include any logic unit capable of executing logic instructions, such as an application-specific integrated circuit, a combinational logic circuit, or a combined electronic circuit. The calibration board 501 is used to obtain the relationship between the radar coordinate system and the world coordinate system and the relationship between the camera image coordinate system and the world coordinate system. The processor 507 obtains the transfer matrix between the radar coordinate system and the camera image coordinate system according to the obtained coordinate-system relationships. The processor 507 also uses the temporal correspondence of radar data and camera images at selected moments and the spatial correspondence based on the transfer matrix as initial conditions for evaluating the confidence of results during tracking. The processor 507 further performs supervised learning during tracking to obtain the detector parameters and corrects the detector parameters based on the confidence of the tracking results; for example, the processor 507 can use the detector to detect the target when the confidence is below a threshold.
The effect of the embodiments of the disclosure was also verified using temporally and spatially corresponding radar data and camera images. First, the radar and camera were fixed horizontally, and images and radar data were acquired by moving the calibration board, 25 groups in total, of which 20 groups were used for computation and 5 groups for testing. Then, video and radar point cloud data were captured simultaneously in a traffic environment. The resulting video and radar point cloud data were simulated with MATLAB software on a system with an Intel Core i5-3470 3.2 GHz CPU, 4 GB of memory, and Windows 10 as the operating system. The evaluation criteria were the AUC based on the overlap ratio of the ground-truth and experimental bounding boxes and the precision index based on the center Euclidean distance; these criteria are known from the document "Y. Wu, J. Lim, and H. Ling, Online Object Tracking: A Benchmark, Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1808-1814, 2013". Upon testing, the algorithm of the embodiments of the disclosure achieves a precision of 0.5487 and an AUC of 0.4575, giving it certain advantages over prior-art algorithms; in particular, it is substantially better than existing algorithms in long-duration tracking, which benefits the practical needs of long-duration vehicle tracking.
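For reference, the two evaluation criteria can be computed as in the sketch below, with boxes given as (x, y, w, h); the 20-pixel precision threshold follows the convention of the cited benchmark and is an assumption here:

```python
import numpy as np

def iou(a, b):
    # Overlap ratio of two boxes (x, y, w, h); this underlies the AUC score.
    xa, ya = max(a[0], b[0]), max(a[1], b[1])
    xb, yb = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def center_precision(gt_boxes, pred_boxes, pix_thresh=20.0):
    # Fraction of frames whose predicted box center lies within pix_thresh
    # pixels of the ground-truth center (center Euclidean distance precision).
    def center(b):
        return np.array([b[0] + b[2] / 2.0, b[1] + b[3] / 2.0])
    d = [np.linalg.norm(center(g) - center(p)) for g, p in zip(gt_boxes, pred_boxes)]
    return float(np.mean([x <= pix_thresh for x in d]))
```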
Some of the method steps and processes herein may need to be executed by a computer and are thus implemented in hardware, software, firmware, or any combination thereof, and may include computer-executable instructions. The computer-executable instructions may be stored on a machine-readable medium or provided in the form of a computer program product downloaded from a remote server, and read and executed by one or more processors of a general-purpose computer, a special-purpose computer, and/or other programmable data processing apparatus to implement the functions and actions indicated in the method steps and processes. Machine-readable media include, but are not limited to, floppy disks, optical disks, compact disks, magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), memory cards, flash memory, and/or electrical, optical, acoustic, and other forms of propagated signals (such as carrier waves, infrared signals, digital signals, etc.).
It is further noted that the term "and/or" herein can mean "and", "or", "exclusive or", "one", "some but not all", "neither", and/or "both", although there is no limitation in this regard. Although specific embodiments of the disclosure have been shown and described herein, it is apparent to those skilled in the art that numerous changes, variations, and modifications can be made without departing from the scope of the disclosure. In addition, in the above detailed description, it can be seen that various features are combined together in individual embodiments to streamline the disclosure. This manner of disclosure should not be construed as reflecting that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the claims reflect, the subject matter of the disclosure lies in less than all features of a single disclosed embodiment. Thus, each claim of the appended claims stands on its own as a separate complete embodiment. In summary, those skilled in the art will appreciate that changes and modifications can be made in various broader aspects without departing from the scope and spirit of the disclosure. The appended claims cover within their scope all such changes, variations, and modifications that fall within the true scope and spirit of the disclosure.

Claims (8)

1. A vehicle tracking method, the method comprising:
obtaining, using a calibration board, a relationship between a radar coordinate system and a world coordinate system and a relationship between a camera image coordinate system and the world coordinate system;
obtaining a transfer matrix between the radar coordinate system and the camera image coordinate system according to the obtained coordinate-system relationships;
taking the temporal correspondence of radar data and camera images at selected moments and the spatial correspondence based on the above transfer matrix as initial conditions for evaluating the confidence of results during target tracking;
performing supervised learning during tracking to obtain parameters of a detector, the detector using a cascade of multiple classifiers that includes a classifier taking spatial position as its discrimination criterion; and correcting the parameters of the detector during tracking based on the confidence of the tracking results.
2. The method of claim 1, wherein correcting the parameters of the detector during tracking based on the confidence of the tracking results comprises obtaining detection parameters iteratively using semi-supervised learning.
3. The method of claim 2, wherein obtaining detection parameters iteratively using semi-supervised learning comprises correcting the false detections and missed detections that the detector makes during classification, based on the image-pixel classification produced by the detector of the previous iteration and on the temporal and spatial correspondences of the tracking process, and updating the detection parameters of the current iteration using semi-supervised learning.
4. The method of claim 1, further comprising, when the camera image frame rate and the radar frequency do not match, averaging the radar data using the timestamp information in the radar data reports so that the radar data correspond to the camera image frames.
5. The method of claim 1, further comprising determining, based on the confidence of the tracking results, when the detector reinitializes the bounding box.
6. The method of claim 1, wherein the radar is a single-line lidar.
7. The method of claim 1, wherein the camera is a visible-light camera.
8. A vehicle tracking device, the device comprising a calibration board, a radar, a camera, and a processor, wherein the calibration board is used to obtain a relationship between a radar coordinate system and a world coordinate system and a relationship between a camera image coordinate system and the world coordinate system; and
the processor is used to: obtain a transfer matrix between the radar coordinate system and the camera image coordinate system according to the obtained coordinate-system relationships;
take the temporal correspondence of radar data and camera images at selected moments and the spatial correspondence based on the above transfer matrix as initial conditions for evaluating the confidence of results during target tracking;
perform supervised learning during tracking to obtain parameters of a detector, the detector using a cascade of multiple classifiers that includes a classifier taking spatial position as its discrimination criterion; and
correct the parameters of the detector during tracking based on the confidence of the tracking results.
CN201610683492.8A 2016-08-18 2016-08-18 Car tracing method and apparatus Active CN106296708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610683492.8A CN106296708B (en) 2016-08-18 2016-08-18 Car tracing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610683492.8A CN106296708B (en) 2016-08-18 2016-08-18 Car tracing method and apparatus

Publications (2)

Publication Number Publication Date
CN106296708A CN106296708A (en) 2017-01-04
CN106296708B (en) 2019-02-15

Family

ID=57678390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610683492.8A Active CN106296708B (en) 2016-08-18 2016-08-18 Car tracing method and apparatus

Country Status (1)

Country Link
CN (1) CN106296708B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028287B (en) * 2018-10-09 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for determining a transformation matrix of radar coordinates and camera coordinates
CN109490890B (en) * 2018-11-29 2023-06-02 重庆邮电大学 Intelligent vehicle-oriented millimeter wave radar and monocular camera information fusion method
CN109858440A (en) * 2019-01-30 2019-06-07 苏州昆承智能车检测科技有限公司 Front vehicle detection system based on fusion of ranging radar and machine vision data
CN112241987B (en) * 2019-07-19 2023-11-14 杭州海康威视数字技术股份有限公司 System, method, device and storage medium for determining defense area
CN110517284B (en) * 2019-08-13 2023-07-14 中山大学 Target tracking method based on laser radar and PTZ camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103439981A (en) * 2013-08-29 2013-12-11 浙江理工大学 Laser mark automatic tracking extensometer control method based on uncalibrated visual servo
CN103559791A (en) * 2013-10-31 2014-02-05 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN105844664A (en) * 2016-03-21 2016-08-10 辽宁师范大学 Monitoring video vehicle detection tracking method based on improved TLD

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103439981A (en) * 2013-08-29 2013-12-11 浙江理工大学 Laser mark automatic tracking extensometer control method based on uncalibrated visual servo
CN103559791A (en) * 2013-10-31 2014-02-05 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN105844664A (en) * 2016-03-21 2016-08-10 辽宁师范大学 Monitoring video vehicle detection tracking method based on improved TLD

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Front vehicle detection system based on ranging radar and machine vision data fusion; Pang Cheng (庞成); China Master's Theses Full-text Database, Information Science and Technology; 2016-08-15 (No. 8); pp. I138-879
Research on moving target tracking methods for driverless vehicles; Dong Yongkun (董永坤); China Master's Theses Full-text Database, Information Science and Technology; 2015-06-15 (No. 05); pp. I138-563

Also Published As

Publication number Publication date
CN106296708A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
US11688196B2 (en) Fish biomass, shape, and size determination
CN106296708B (en) Car tracing method and apparatus
US11756324B2 (en) Fish biomass, shape, size, or health determination
CN109283538A (en) A naval target size detection method based on vision and laser sensor data fusion
Chen et al. A deep learning approach to drone monitoring
CN108731587A (en) A vision-based dynamic target tracking and localization method for unmanned aerial vehicles
CN101167086A (en) Human detection and tracking for security applications
CN106228570B (en) A ground-truth data determination method and apparatus
CN112017243B (en) Medium visibility recognition method
CN113743385A (en) Unmanned ship water surface target detection method and device and unmanned ship
Zheng et al. Detection, localization, and tracking of multiple MAVs with panoramic stereo camera networks
CN105447431B (en) A machine-vision-based docking aircraft tracking and positioning method and system
CN108596117A (en) A scene monitoring method based on a scanning laser rangefinder array
CN112016558B (en) Medium visibility recognition method based on image quality
CN110287957B (en) Low-slow small target positioning method and positioning device
Liu et al. Outdoor camera calibration method for a GPS & camera based surveillance system
Lynch Monocular pose estimation for automated aerial refueling via perspective-n-point
Han et al. Automated three-dimensional measurement method of in situ fish with a stereo camera
Niblock et al. Fast model-based feature matching technique applied to airport lighting
Garibotto et al. 3D scene analysis by real-time stereovision
Khatry et al. Good practice in training set preparation for marine navigation systems
CN117593650B (en) Moving point filtering vision SLAM method based on 4D millimeter wave radar and SAM image segmentation
CN112014393B (en) Medium visibility recognition method based on target visual effect
Zheng et al. Struck based infrared flying bird tracking experiments
Svecovs et al. Real time object localization based on computer vision: Cone detection for perception module of a racing car for Formula student driverless

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant