CN112215070A - Unmanned aerial vehicle aerial photography video traffic flow statistical method, host and system - Google Patents
- Publication number
- CN112215070A (application CN202010946810.1A)
- Authority
- CN
- China
- Prior art keywords
- video image
- unmanned aerial
- traffic flow
- aerial vehicle
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention discloses a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, which comprises the following steps: obtaining a sample video image, and labeling the vehicles in the sample video image; setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted; setting a reference object, and correcting the marker line according to the reference object; constructing a deep neural network model, and training the deep neural network model with the corrected sample video images; acquiring, in real time, a video image to be detected of the intersection position shot by the unmanned aerial vehicle; recognizing the video image to be detected with the trained deep neural network model, and outputting a recognition result; and calculating the number of vehicles in the road area corresponding to the video image to be detected according to the recognition result. The invention also discloses a host and a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical system. With the invention, traffic flow can be counted and analyzed, the workload of staff is reduced, and working efficiency is improved.
Description
Technical Field
The invention relates to the technical field of traffic monitoring, in particular to a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, a host, and a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical system.
Background
With social and economic development, the transportation industry is growing rapidly; the number of vehicles is already huge and still increases year by year, so traffic accidents, congestion and vehicle disorder occur more and more frequently. These traffic problems seriously affect residents' daily travel and add to the burden of ground traffic management. Although cameras are now installed at the key nodes of a city, they cannot visually present the traffic condition of an entire road. Owing to the portability and flexibility of unmanned aerial vehicles, using them for accurate vehicle positioning and recognition offers a great advantage in detecting road traffic conditions.
In an intelligent traffic system, traffic flow statistics acquired in real time provide basic decision data: they help traffic management departments optimize traffic dispatching, help drivers choose better travel routes, and let urban planners decide, according to traffic flow parameters, whether roads should be widened. Research on traffic flow statistics therefore has important theoretical significance and potential application value.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, host and system, which can count and analyze traffic flow, reduce the workload of staff and improve working efficiency.
In order to solve the technical problem, the invention provides a deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, which comprises the following steps: obtaining a sample video image, and labeling the vehicles in the sample video image; setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted; setting a reference object, and correcting the marker line according to the reference object; constructing a deep neural network model, and training the deep neural network model with the corrected sample video images; acquiring, in real time, a video image to be detected of the intersection position shot by the unmanned aerial vehicle; recognizing the video image to be detected with the trained deep neural network model, and outputting a recognition result; and calculating the number of vehicles in the road area corresponding to the video image to be detected according to the recognition result.
As an improvement of the above solution, the recognition result includes: object recognition probability information, object position information, and object region segmentation information of the road and the vehicle.
As an improvement of the above, the step of correcting the marker line according to the reference object includes: tracking the reference object by a kernel correlation filtering (KCF) tracking method; calculating the relative offset of the reference object; and correcting the target position of the marker line according to the relative offset.
As an improvement of the above scheme, the step of setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted includes: extracting the crossroad whose traffic flow needs to be counted; and setting four marker lines at the crossroad.
As a modification of the above, the reference object is a stationary object that differs in color or geometry from the surroundings.
As an improvement of the scheme, the deep neural network model is a Faster R-CNN deep neural network model.
Correspondingly, the invention also provides a host, comprising: a labeling module for obtaining a sample video image and labeling the vehicles in the sample video image; a marking module for setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted; a correction module for setting a reference object and correcting the marker line according to the reference object; a training module for constructing a deep neural network model and training the deep neural network model with the corrected sample video images; an acquisition module for acquiring, in real time, a video image to be detected of the intersection position shot by the unmanned aerial vehicle; a recognition module for recognizing the video image to be detected with the trained deep neural network model and outputting a recognition result; and a calculating module for calculating the number of vehicles in the road area corresponding to the video image to be detected according to the recognition result.
As an improvement of the above solution, the correction module includes: a setting unit for setting a reference object; a tracking unit for tracking the reference object by a kernel correlation filtering tracking method; a calculating unit for calculating the relative offset of the reference object; and a correcting unit for correcting the target position of the marker line according to the relative offset.
Correspondingly, the invention further provides an unmanned aerial vehicle aerial photography video traffic flow statistical system based on deep learning, which comprises an unmanned aerial vehicle multi-source image acquisition platform and the host.
As an improvement of the above scheme, the unmanned aerial vehicle multi-source image acquisition platform is an unmanned aerial vehicle platform that carries a visible light camera and a thermal infrared camera and performs multi-source image acquisition.
The implementation of the invention has the following beneficial effects:
according to the invention, through sharing the depth image feature extraction network, the calculation resources are effectively utilized, the operation time of model training and road and vehicle detection is saved, and the accuracy and the detection rate are greatly improved.
The invention can count and analyze the flow of different vehicle types in the video with high precision, greatly lightens the workload of the working personnel and greatly improves the working efficiency.
Drawings
FIG. 1 is a flowchart of an embodiment of a method for counting traffic flow of an aerial video shot by an unmanned aerial vehicle based on deep learning according to the present invention;
fig. 2 is a schematic structural diagram of the unmanned aerial vehicle aerial photography video traffic flow statistical system based on deep learning.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
The characteristics of an image range from pixels at the shallowest level, through linear structures (such as straight lines and curves of various shapes) at a deeper level, to planar structures (such as rectangles, triangles and circles) deeper still. At the next level, multiple planar structures aggregate into various texture features; that is, low-level features are combined into more abstract high-level features, and finally the high-level features form the target categories that human eyes can recognize and that have practical meaning. Therefore, whether road or vehicle, each is composed of these hierarchical structures from shallow to deep, and the two should share a large number of common features that can be reused on top of the lower-level features.
Aiming at the problem of vehicle counting of aerial photography roads, the invention sufficiently utilizes the commonality of image basic characteristics in deep learning, divides a deep neural network model into a basic backbone network and a head task network, and provides an unmanned aerial vehicle aerial photography video vehicle flow statistical method based on deep learning.
Referring to fig. 1, fig. 1 shows a flowchart of an embodiment of the deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, which includes:
s101, obtaining a sample video image, and labeling vehicles in the sample video image.
Aerial images can be acquired by unmanned aerial vehicle aerial photography; frames are extracted and selected as sample video images, and the vehicles in the sample video images are labeled to form a vehicle database. Specifically, the method comprises the following steps:
the imaging area of the sample video image includes: two-way four lanes and two-way six lanes without central green belts in cities, suburbs and rural areas, two-way six lanes and two-way eight lanes with central green belts, crossroads and T-shaped intersections, but the invention is not limited by the above;
the imaging conditions of the sample video image include: in sunny, cloudy and cloudy days causing brightness change, in mild haze, moderate haze and severe haze causing saturation change, in dawn and dusk causing chromaticity change, and in light rain and medium rain causing local shielding, but the invention is not limited thereto.
And S102, setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted.
Specifically, the step of setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted includes:
(1) extracting the crossroad whose traffic flow needs to be counted;
(2) setting four marker lines at the crossroad.
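As an illustration of the steps above, the four marker lines can be represented as simple line segments in image coordinates. This is a hedged sketch only: the patent states merely that four marker lines are set at the crossroad, so the axis-aligned placement around an assumed intersection centre `(cx, cy)` and the `MarkerLine` structure itself are illustrative assumptions, not the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class MarkerLine:
    """One counting line across an approach of the intersection (illustrative)."""
    name: str
    p1: tuple  # (x, y) pixel coordinates of one endpoint
    p2: tuple  # (x, y) pixel coordinates of the other endpoint

def crossroad_marker_lines(cx, cy, half_width):
    """Place four marker lines around an intersection centre (cx, cy).

    Assumption: one horizontal line on the north and south approaches and
    one vertical line on the west and east approaches, half_width pixels
    from the centre.
    """
    w = half_width
    return [
        MarkerLine("north", (cx - w, cy - w), (cx + w, cy - w)),
        MarkerLine("south", (cx - w, cy + w), (cx + w, cy + w)),
        MarkerLine("west",  (cx - w, cy - w), (cx - w, cy + w)),
        MarkerLine("east",  (cx + w, cy - w), (cx + w, cy + w)),
    ]
```

A vehicle crossing event can then be registered whenever a tracked vehicle's trajectory intersects one of these segments.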
S103, setting a reference object and correcting the marker line according to the reference object.
In the present invention, the reference object is preferably a stationary object that differs in color or geometry from the surrounding environment.
Specifically, the step of correcting the marker line according to the reference object includes:
(1) tracking the reference object by a kernel correlation filtering (KCF) tracking method;
(2) calculating the relative offset of the reference object;
(3) correcting the target position of the marker line according to the relative offset.
That is, the marked reference object is tracked by the kernel correlation filtering tracking method, the relative offset of the reference object is calculated for each frame, and the target position of the marker line, which drifts because of video jitter, is corrected by that relative offset.
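The offset correction just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the tracker (the patent uses kernel correlation filtering) reports the reference object's centre in each frame, and it simply shifts every marker-line endpoint by the reference object's displacement, since camera jitter moves the whole image rigidly.

```python
def correct_marker_line(line_pts, ref_initial, ref_tracked):
    """Shift marker-line endpoints by the reference object's relative offset.

    line_pts:    [(x, y), (x, y)] endpoints as set in the first frame
    ref_initial: (x, y) reference-object centre in the first frame
    ref_tracked: (x, y) centre reported by the tracker in the current frame
    """
    dx = ref_tracked[0] - ref_initial[0]
    dy = ref_tracked[1] - ref_initial[1]
    # Jitter translates the whole frame, so the same offset applies to
    # every endpoint of the marker line.
    return [(x + dx, y + dy) for (x, y) in line_pts]
```

In practice the per-frame centre could come from, e.g., an off-the-shelf KCF tracker; any tracker that yields a centre per frame fits this sketch.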
It should be noted that the movement track of a specific target can be predicted as follows: based on the assumption that a target's speed does not change abruptly between adjacent frames, the moving track and speed of the target are estimated from the detected change of its position between adjacent frames, so that its approximate position in the next frame is predicted.
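The constant-velocity prediction described above can be sketched in a few lines. It is a minimal illustration under the stated assumption only (speed does not change abruptly between adjacent frames); the displacement observed over the last frame pair is reused as the estimate for the next frame.

```python
def predict_next_position(prev_pos, curr_pos):
    """Constant-velocity prediction of a target's position in the next frame.

    prev_pos, curr_pos: (x, y) detected centres in two adjacent frames.
    The per-frame velocity is the difference of the two positions; adding
    it to the current position gives the predicted next position.
    """
    vx = curr_pos[0] - prev_pos[0]
    vy = curr_pos[1] - prev_pos[1]
    return (curr_pos[0] + vx, curr_pos[1] + vy)
```

Such a prediction narrows the search region when associating detections across frames.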
And S104, constructing a deep neural network model, and training the deep neural network model by using the corrected sample video image.
Preferably, the deep neural network model is a Faster R-CNN deep neural network model.
And S105, acquiring a video image to be detected of the intersection position shot by the unmanned aerial vehicle in real time.
The unmanned aerial vehicle carries out overlooking shooting on the position of the road junction and transmits shot video images to be detected to the host computer in real time.
And S106, identifying the video image to be detected by using the trained deep neural network model, and outputting an identification result.
The recognition result comprises: object recognition probability information, object position information, and object region segmentation information of the road and the vehicle.
And identifying the video image to be detected by using the trained deep neural network model, outputting the identification probability of the detected road and vehicle objects, the object positions and the object region segmentation, and positioning the position and the size of each vehicle.
And S107, calculating the number of vehicles in the road area corresponding to the video image to be detected according to the identification result.
The vehicles located in the road area are counted based on the recognition result (the object recognition probability information, the object position information, and the object region segmentation information of the road and the vehicles), and the counting result is output.
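A minimal sketch of this counting step follows. It is an assumption-laden simplification: each detection is reduced to a recognition probability and a centre point, the road region is approximated by a polygon, and the probability threshold of 0.5 is illustrative (the patent's full recognition result also includes region segmentation information, which is not modeled here).

```python
def count_vehicles(detections, road_polygon, prob_threshold=0.5):
    """Count detected vehicles whose centres fall inside the road region.

    detections:   list of dicts with 'prob' (recognition probability) and
                  'center' (x, y) -- a simplified stand-in for the model output.
    road_polygon: list of (x, y) vertices of the road region.
    """
    def inside(pt, poly):
        # Standard ray-casting point-in-polygon test.
        x, y = pt
        hit = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    hit = not hit
        return hit

    return sum(1 for d in detections
               if d["prob"] >= prob_threshold and inside(d["center"], road_polygon))
```

Low-probability detections and detections outside the road polygon are excluded, which mirrors the filtering role of the recognition probability and road-region information in the patent's description.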
Therefore, the invention effectively utilizes the computing resources by sharing the depth image feature extraction network, saves the running time of model training and road and vehicle detection, and greatly improves the accuracy and the detection rate. Meanwhile, the method can be used for counting and analyzing the flow of different vehicle types in the video with high precision, greatly reducing the workload of workers and greatly improving the working efficiency.
Referring to fig. 2, fig. 2 shows a specific structure of an unmanned aerial vehicle aerial photography video traffic flow statistical system based on deep learning, which comprises an unmanned aerial vehicle multi-source image acquisition platform 1 and a host 2.
Unmanned aerial vehicle multisource image acquisition platform 1 is for carrying on visible light camera and thermal infrared camera to carry out the unmanned aerial vehicle platform that multisource image was gathered.
Specifically, the unmanned aerial vehicle multi-source image acquisition platform 1 includes an unmanned aerial vehicle platform, a power supply, a computer mainboard, a ground monitoring client, a visible light camera, a thermal infrared camera, a camera mount, an image acquisition card, a 4G module and a base station. The unmanned aerial vehicle platform is provided with a flight controller, a power system, a GPS, a battery and the like, and supports module expansion; the computer mainboard, the visible light camera and the thermal infrared camera are all fixed on the unmanned aerial vehicle platform; the image acquisition card ensures that the computer mainboard can acquire the image data of the thermal infrared camera; the computer mainboard is provided with the image acquisition card driver and, using the SDK matched with the image acquisition card, is programmed to synchronously acquire the data of the visible light camera and the thermal infrared camera; the 4G module is carried on the computer mainboard and connects to the base station through automatic dialing; the ground monitoring client is also connected to the base station, so that the computer mainboard carried on the unmanned aerial vehicle communicates with the ground monitoring client.
As shown in fig. 2, the host 2 includes a labeling module 21, a marking module 22, a modifying module 23, a training module 24, an obtaining module 25, an identifying module 26, and a calculating module 27, specifically:
and the labeling module 21 is configured to obtain a sample video image and label a vehicle in the sample video image. It should be noted that the sample video image is acquired by the unmanned aerial vehicle in an aerial photography mode, and is selected in a frame extraction mode. After the vehicles in the sample video images are labeled, a vehicle database can be formed. Specifically, the imaging area of the sample video image includes: two-way four lanes and two-way six lanes without central green belts in cities, suburbs and rural areas, two-way six lanes and two-way eight lanes with central green belts, crossroads and T-shaped intersections, but the invention is not limited by the above; the imaging conditions of the sample video image include: in sunny, cloudy and cloudy days causing brightness change, in mild haze, moderate haze and severe haze causing saturation change, in dawn and dusk causing chromaticity change, and in light rain and medium rain causing local shielding, but the invention is not limited thereto.
And the marking module 22 is configured to set a marker line in the sample video image according to the intersection whose traffic flow needs to be counted. Specifically, the marking module 22 may extract the crossroad whose traffic flow needs to be counted and set four marker lines at the crossroad.
And the correcting module 23 is used for setting a reference object and correcting the marker line according to the reference object.
And the training module 24 is used for constructing a deep neural network model and training the deep neural network model by using the corrected sample video image. Preferably, the deep neural network model is a Faster R-CNN deep neural network model.
And the obtaining module 25 is configured to obtain a to-be-detected video image of the intersection position shot by the unmanned aerial vehicle in real time. It should be noted that the unmanned aerial vehicle performs overlook shooting on the intersection position, and transmits the shot video image to be detected to the acquisition module of the host computer in real time.
And the recognition module 26 is configured to recognize the video image to be detected by using the trained deep neural network model, and output a recognition result.
And the calculating module 27 is configured to calculate the number of vehicles in the road area corresponding to the video image to be detected according to the identification result. The recognition result comprises: object recognition probability information, object position information, and object region segmentation information of the road and the vehicle.
Further, the correction module 23 includes:
a setting unit for setting a reference object; the reference object is preferably a stationary object that differs in color or geometry from the surroundings.
And the tracking unit is used for tracking the reference object by a kernel correlation filtering tracking method.
And the calculating unit is used for calculating the relative offset of the reference object. Specifically, the calculation unit calculates a relative offset amount for each frame with respect to the reference object, respectively.
And the correcting unit is used for correcting the target position of the mark line according to the relative offset. Thereby avoiding skew due to video jitter.
Therefore, the invention effectively utilizes the computing resources by sharing the depth image feature extraction network, saves the running time of model training and road and vehicle detection, and greatly improves the accuracy and the detection rate. Meanwhile, the method can be used for counting and analyzing the flow of different vehicle types in the video with high precision, greatly reducing the workload of workers and greatly improving the working efficiency.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (10)
1. A deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method, characterized by comprising:
obtaining a sample video image, and labeling the vehicles in the sample video image;
setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted;
setting a reference object, and correcting the marker line according to the reference object;
constructing a deep neural network model, and training the deep neural network model by using the corrected sample video image;
acquiring, in real time, a video image to be detected of the intersection position shot by the unmanned aerial vehicle;
identifying the video image to be detected by using the trained deep neural network model, and outputting an identification result;
and calculating the number of vehicles in the road area corresponding to the video image to be detected according to the identification result.
2. The deep learning-based unmanned aerial vehicle aerial video traffic flow statistical method according to claim 1, wherein the identification result comprises: object recognition probability information, object position information, and object region segmentation information of the road and the vehicle.
3. The deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method according to claim 1, wherein the step of correcting the marker line according to the reference object comprises:
tracking the reference object by a kernel correlation filtering tracking method;
calculating the relative offset of the reference object;
and correcting the target position of the marker line according to the relative offset.
4. The deep-learning-based unmanned aerial vehicle aerial video traffic flow statistical method according to claim 1, wherein the step of setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted comprises:
extracting the crossroad whose traffic flow needs to be counted;
and setting four marker lines at the crossroad.
5. The method for video traffic flow statistics based on deep learning for unmanned aerial vehicle aerial photography according to claim 1, wherein the reference object is a static object having a difference in color or geometry from the surrounding environment.
6. The deep learning-based unmanned aerial vehicle aerial video traffic flow statistical method according to claim 1, wherein the deep neural network model is a Faster R-CNN deep neural network model.
7. A host, comprising:
the labeling module is used for acquiring a sample video image and labeling the vehicles in the sample video image;
the marking module is used for setting a marker line in the sample video image according to the intersection whose traffic flow needs to be counted;
the correction module is used for setting a reference object and correcting the marker line according to the reference object;
the training module is used for constructing a deep neural network model and training the deep neural network model by using the corrected sample video image;
the acquisition module is used for acquiring a video image to be detected of the intersection position shot by the unmanned aerial vehicle in real time;
the recognition module is used for recognizing the video image to be detected by using the trained deep neural network model and outputting a recognition result;
and the calculating module is used for calculating the number of vehicles in the road area corresponding to the video image to be detected according to the identification result.
8. The host of claim 7, wherein the revision module comprises:
a setting unit for setting a reference object;
a tracking unit for tracking the reference object by a kernel correlation filtering tracking method;
a calculating unit for calculating a relative offset of the reference object;
and the correcting unit is used for correcting the target position of the marker line according to the relative offset.
9. An unmanned aerial vehicle aerial video traffic flow statistical system based on deep learning, comprising an unmanned aerial vehicle multi-source image acquisition platform and the host machine of claim 7 or 8.
10. The unmanned aerial vehicle aerial video traffic flow statistical system of claim 9, wherein the unmanned aerial vehicle multi-source image acquisition platform is an unmanned aerial vehicle platform carrying a visible light camera and a thermal infrared camera and performing multi-source image acquisition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010946810.1A (published as CN112215070A) | 2020-09-10 | 2020-09-10 | Unmanned aerial vehicle aerial photography video traffic flow statistical method, host and system
Publications (1)
Publication Number | Publication Date |
---|---|
CN112215070A true CN112215070A (en) | 2021-01-12 |
Family
ID=74049334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010946810.1A Pending CN112215070A (en) | 2020-09-10 | 2020-09-10 | Unmanned aerial vehicle aerial photography video traffic flow statistical method, host and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112215070A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113724250A (en) * | 2021-09-26 | 2021-11-30 | 新希望六和股份有限公司 | Animal target counting method based on double-optical camera |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886523A (en) * | 2017-11-01 | 2018-04-06 | 武汉大学 | Vehicle target movement velocity detection method based on unmanned plane multi-source image |
CN108364466A (en) * | 2018-02-11 | 2018-08-03 | 金陵科技学院 | Traffic flow statistical method based on unmanned aerial vehicle traffic video
CN108710875A (en) * | 2018-09-11 | 2018-10-26 | 湖南鲲鹏智汇无人机技术有限公司 | Deep learning-based method and device for counting road vehicles in aerial photography
CN110717387A (en) * | 2019-09-02 | 2020-01-21 | 东南大学 | Real-time vehicle detection method based on unmanned aerial vehicle platform |
- 2020-09-10: Application CN202010946810.1A filed in China (CN); legal status: Pending
Non-Patent Citations (1)
Title |
---|
ZHANG DONGMEI; LU XIAOPING; ZHANG HANG; YU ZHENBAO; MIAO PEIJI: "A Traffic Flow Statistics Algorithm Based on UAV Video Images" (一种基于无人机视频影像的车流量统计算法), Remote Sensing Information (遥感信息), no. 01, 20 February 2020 (2020-02-20) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108955702B (en) | Lane-level map creation system based on three-dimensional laser and GPS inertial navigation system | |
Changzhen et al. | A traffic sign detection algorithm based on deep convolutional neural network | |
CN102867417B (en) | Taxi anti-forgery system and taxi anti-forgery method | |
CN110619279B (en) | Road traffic sign instance segmentation method based on tracking | |
CN111833598B (en) | Automatic traffic incident monitoring method and system for unmanned aerial vehicle on highway | |
CN112800913B (en) | Pavement damage data space-time analysis method based on multi-source feature fusion | |
CN105160309A (en) | Three-lane detection method based on image morphological segmentation and region growing | |
CN109782364B (en) | Traffic sign board missing detection method based on machine vision | |
CN109615862A (en) | Dynamic acquisition method and device for road vehicle traffic state parameters | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
CN103268470A (en) | Method for counting video objects in real time based on any scene | |
CN109993163A (en) | Plate-free vehicle identification system and recognition method based on artificial intelligence | |
CN113903008A (en) | Ramp exit vehicle violation identification method based on deep learning and trajectory tracking | |
CN116824859A (en) | Intelligent traffic big data analysis system based on Internet of things | |
CN111951576A (en) | Traffic light control system based on vehicle identification and method thereof | |
Bu et al. | A UAV photography–based detection method for defective road marking | |
Wei et al. | Damage inspection for road markings based on images with hierarchical semantic segmentation strategy and dynamic homography estimation | |
CN106529391B (en) | Robust speed limit traffic sign detection and recognition method | |
CN112215070A (en) | Unmanned aerial vehicle aerial photography video traffic flow statistical method, host and system | |
CN113392817A (en) | Vehicle density estimation method and device based on multi-row convolutional neural network | |
CN116721552B (en) | Non-motor vehicle overspeed identification recording method, device, equipment and storage medium | |
CN111145551A (en) | Intersection traffic planning system based on CNN detection of traffic rule compliance rate | |
CN110415299B (en) | Vehicle position estimation method based on set guideboard under motion constraint | |
CN116229396B (en) | High-speed pavement disease identification and warning method | |
US11682298B2 (en) | Practical method to collect and measure real-time traffic data with high accuracy through the 5G network and accessing these data by cloud computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||