CN113505663A - Electric bicycle red light running video analysis and identification method based on artificial intelligence - Google Patents

Electric bicycle red light running video analysis and identification method based on artificial intelligence

Info

Publication number
CN113505663A
Authority
CN
China
Prior art keywords
electric bicycle
artificial intelligence
video
early warning
red light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110704401.5A
Other languages
Chinese (zh)
Inventor
郭荣
张信豪
翁月娜
魏剑新
徐经纬
刘远超
唐浩
金佳
严俊刚
夏路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Haoteng Electron Technology Co ltd
Original Assignee
Zhejiang Haoteng Electron Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Haoteng Electron Technology Co ltd filed Critical Zhejiang Haoteng Electron Technology Co ltd
Priority to CN202110704401.5A priority Critical patent/CN113505663A/en
Publication of CN113505663A publication Critical patent/CN113505663A/en
Pending legal-status Critical Current

Classifications

    • G06F18/2148: Generating training patterns; bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
    • G06F18/2411: Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/24155: Bayesian classification
    • G06F18/24323: Tree-organised classifiers
    • G06N3/08: Neural networks; learning methods
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/10016: Video; image sequence
    • G06T2207/20076: Probabilistic image processing
    • G06T2207/30232: Surveillance
    • G06T2207/30236: Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an artificial-intelligence-based method for analyzing video and identifying electric bicycles running red lights, which comprises the following: a video access module accesses the surveillance video of a traffic intersection, converts the video stream into single-frame video images, records the acquisition time of each frame, and sends the frames to an artificial intelligence analysis module; the artificial intelligence analysis module uses a convolutional neural network to analyze the single-frame video images and determine whether early warning information is present; and an early warning output module synthesizes evidence data from the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of the electric bicycle before and after it enters the zebra crossing for evidence preservation, uploads them to an upper-layer server in a timely manner, and notifies the relevant functional departments. The method improves the accuracy of traffic light detection in real time, detects at high speed, tracks electric bicycles, and detects their red light running behavior in real time.

Description

Electric bicycle red light running video analysis and identification method based on artificial intelligence
Technical Field
The invention relates to the field of video surveillance, and in particular to an artificial-intelligence-based method for analyzing video and identifying electric bicycles running red lights.
Background
The electric bicycle is light, inexpensive, quiet and pollution-free, occupies little parking space, and offers advantages that other vehicles cannot match. In the last two years the number of electric bicycles has grown day by day, and this huge fleet has brought management problems: because the quality of electric bicycle riders varies widely, road violations are very common, and behaviors such as running red lights, riding against the direction of traffic, changing lanes at random, and cutting across traffic occur frequently.
In order to protect people's lives, standardize the traffic order of electric bicycles, and remedy the deficiencies of the prior art, the invention provides an artificial-intelligence-based method for analyzing video and identifying electric bicycles running red lights.
Disclosure of Invention
The object of the invention is to provide an artificial-intelligence-based video monitoring and identification method for electric bicycles running red lights.
The artificial-intelligence-based electric bicycle red light running video analysis and identification method is characterized in that the analysis and identification method employs a video access module, an artificial intelligence analysis module and an early warning output module;
the video access module is responsible for accessing the surveillance video of a traffic intersection, converting the video stream into single-frame video images, recording the acquisition time of each frame, and sending the frames to the artificial intelligence analysis module;
the artificial intelligence analysis module is responsible for analyzing the single-frame video images with a convolutional neural network and determining whether early warning information is present, and its processing comprises the following steps:
step 1: marking the zebra crossing area in the video image as the early warning area, recording the area coordinates of the early warning area, and using the early warning area as the detection sample for judging whether early warning information exists;
step 2: identifying the traffic light state in the single-frame video image with a neural network classification model and judging whether the light is currently red;
step 3: when the light is red, performing target detection on the electric bicycles in the video image as the objects to be detected, performing target detection regression with two network framework models, darknet and SSD, and averaging the regression results of the two models for each electric bicycle to obtain its height t, width w and center point coordinates (x, y);
step 4: further extracting the color, shape, size and speed of each electric bicycle, and tracking each target with a Kalman filter, specifically tracking the speed of each electric bicycle together with the regressed height t and center point coordinates (x, y) obtained in step 3; while the red light is on, if an electric bicycle passes through the early warning area, early warning information is judged to have been detected;
the early warning output module synthesizes evidence data from the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of the electric bicycle before and after it enters the zebra crossing for evidence preservation, uploads them to the upper-layer server in a timely manner, and notifies the relevant functional departments.
The artificial-intelligence-based electric bicycle red light running video analysis and identification method is characterized in that, in step 2 of the artificial intelligence analysis module, the specific process of identifying the traffic light state in a single-frame video image with the neural network classification model is as follows:
s1: preliminarily identifying the traffic light state in the single-frame video image with the neural network classification model;
s2: feeding time, location and position-coordinate features into the neural network classification model to improve it, introducing an SVM (support vector machine) and a tree model, automatically tuning the relevant hyper-parameters of the improved neural network classification model, the SVM and the tree model with Bayesian parameter tuning, and searching for the optimal parameters on validation-set data with accuracy and recall as evaluation metrics, thereby effectively improving the traffic light state recognition rate;
s3: combining the three tuned models, namely the neural network classification model, the SVM and the tree model, into a stacked ensemble serving as the traffic light state classification model, detecting the traffic light state in the single-frame video image, and judging whether the light is currently red.
Compared with the prior art, the invention has the following beneficial effects:
1) Artificial intelligence technology: artificial intelligence is used, giving high efficiency; the models can be further trained on user data to improve performance.
2) Composite application: the system is compatible with all video analysis and detection accesses and can use existing public surveillance video without erecting new cameras; artificial intelligence is used to analyze the calibrated monitoring area in the video, early warning information is detected by comparing the area before and after, recorded as required, uploaded to the upper-layer server, and pushed to the relevant functional departments in real time.
3) Powerful system functions: artificial intelligence is used to compare and verify the analysis result, detect changes in the monitoring area, record information such as site and time as required, and output it to a specified information platform.
4) Excellent product compatibility: the method can adopt national-standard communication protocols and video decoding algorithms, improving product compatibility and supporting all mainstream domestic video snapshot systems and videos.
Drawings
FIG. 1 is a frame structure diagram of a complete system formed by a video access module, a convolutional neural network core algorithm, an artificial intelligence analysis module and an early warning output module according to the method of the present invention;
FIG. 2 is a diagram of the implementation of the artificial-intelligence-based electric bicycle red light running video analysis and identification method.
Detailed Description
The present invention is further illustrated by the following examples, which should not be construed as limiting the scope of the invention.
Embodiment:
As shown in FIG. 1, a complete system is formed by the video access module, the convolutional neural network core algorithm, the artificial intelligence analysis module and the early warning output module of the present invention.
FIG. 2 is a diagram of the implementation of the artificial-intelligence-based electric bicycle red light running video analysis and identification method in the embodiment of the present invention.
The video access module: it encapsulates standard protocol modules such as ONVIF and GB28181 together with the SDK access components publicly provided by the various camera manufacturers, samples and transcodes the incoming streams, feeds them to a GPU image analysis module for analysis, and detects and handles stream faults through the sampling software. The module is responsible for accessing the surveillance video of the traffic intersection, converting the video stream into single-frame video images, recording the acquisition time of each frame, and sending the frames to the artificial intelligence analysis module.
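By way of illustration only, the single-frame extraction performed by the video access module can be sketched with OpenCV as follows; the stream URL is a placeholder, and the ONVIF/GB28181 protocol encapsulation and fault handling described above are not shown.

```python
import time
import cv2  # OpenCV, assumed available for stream decoding


def read_frames(stream_url="rtsp://camera/stream"):  # URL is a placeholder
    """Yield (acquisition_time, frame) pairs from a surveillance stream."""
    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        raise RuntimeError("failed to open video stream")
    while True:
        ok, frame = cap.read()          # decode one single-frame video image
        if not ok:                      # stream fault: stop (a real system would retry)
            break
        acquisition_time = time.time()  # acquisition time attached to each frame
        yield acquisition_time, frame
    cap.release()


# Usage: frames would be forwarded to the artificial intelligence analysis module
# for frame_time, frame in read_frames():
#     analyze(frame_time, frame)
```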
The artificial intelligence analysis module: it is responsible for analyzing the video images with a convolutional neural network, determining whether early warning information is present, and processing it. It comprises the following steps:
Step 1: delimiting the area requiring early warning according to the video image and the early warning area calibrated by the user (such as the zebra crossing area), recording the area coordinates, and starting detection with the delimited early warning area as the detection sample;
Step 2: preliminarily identifying the traffic light state in the single-frame video image with a neural network classification model;
time, location and position-coordinate features are then fed into the neural network classification model to improve it, an SVM (support vector machine) and a tree model are introduced, the relevant hyper-parameters of the improved neural network classification model, the SVM and the tree model are automatically tuned with Bayesian parameter tuning, and the optimal parameters are searched for on validation-set data with accuracy and recall as evaluation metrics, thereby effectively improving the traffic light state recognition rate;
in this embodiment, the parameters of the neural network of the above method are:
First step: the input is a fixed-size crop of the fixed traffic-light position, ranging from 30 x 30 to 90 x 90;
Second step: a 2D convolutional layer is added, with 96 kernels of size 11, a stride of 4, and the ReLU activation function;
Third step: a batch normalization layer is added;
Fourth step: a pooling layer is added, with size 3 and stride 2;
Fifth step: a 2D convolutional layer is added, with 256 kernels of size 5, a stride of 1, and the ReLU activation function;
Sixth step: a batch normalization layer is added;
Seventh step: a pooling layer is added, with size 3 and stride 2;
Eighth step: a 2D convolutional layer is added, with 380 kernels of size 55, a stride of 1, and the ReLU activation function;
Ninth step: a 2D convolutional layer is added, with 380 kernels of size 5, a stride of 1, and the ReLU activation function;
Tenth step: a 2D convolutional layer is added, with 256 kernels of size 3, a stride of 1, and the ReLU activation function;
Eleventh step: a pooling layer is added, with size 3 and stride 2;
Twelfth step: a fully connected layer of size 4096 is added, with the ReLU activation function and dropout regularization of 0.5;
Thirteenth step: a fully connected layer of size 1000 is added, with the ReLU activation function and dropout regularization of 0.5;
Fourteenth step: a softmax layer is added.
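By way of illustration only, the layer stack described above can be sketched with the Keras API as follows; this is not the claimed implementation. The 90 x 90 x 3 input size, the padding choices and the number of traffic-light classes are assumptions, and the kernel size of 55 stated in the eighth step is treated here as a typographical slip and replaced with a small kernel.

```python
from tensorflow import keras
from tensorflow.keras import layers


def build_traffic_light_cnn(input_shape=(90, 90, 3), num_classes=3):  # sizes/classes assumed
    """Sketch of the AlexNet-like traffic light classifier described in the embodiment."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),                                         # step 1: fixed crop of the light
        layers.Conv2D(96, 11, strides=4, activation="relu"),                    # step 2: 96 kernels, size 11, stride 4
        layers.BatchNormalization(),                                            # step 3
        layers.MaxPooling2D(pool_size=3, strides=2),                            # step 4: size 3, stride 2
        layers.Conv2D(256, 5, strides=1, padding="same", activation="relu"),    # step 5
        layers.BatchNormalization(),                                            # step 6
        layers.MaxPooling2D(pool_size=3, strides=2),                            # step 7
        layers.Conv2D(380, 3, strides=1, padding="same", activation="relu"),    # step 8 (small kernel assumed)
        layers.Conv2D(380, 5, strides=1, padding="same", activation="relu"),    # step 9
        layers.Conv2D(256, 3, strides=1, padding="same", activation="relu"),    # step 10
        layers.MaxPooling2D(pool_size=3, strides=2),                            # step 11
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),                                  # step 12
        layers.Dropout(0.5),
        layers.Dense(1000, activation="relu"),                                  # step 13
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),                        # step 14
    ])
    return model
```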
The SVM parameters of the method are:
First step: the kernel function is set to a Gaussian kernel;
Second step: the lambda parameter is set to 0.5;
Third step: the penalty coefficient is set to 0.1.
The tree model (i.e. random forest) parameters of the method are:
First step: the number of trees is set to 300;
Second step: the maximum depth is set to 6.
The stacking step of the method uses voting to obtain a probability value, and the classification threshold is set to 0.5.
The three tuned models, namely the neural network classification model, the SVM and the tree model, are combined by stacking into an integrated traffic light state classification model, which detects the traffic light state in the single-frame video image and judges whether the light is currently red.
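By way of illustration only, the tuned SVM, the random forest and the voting ensemble described above can be sketched with scikit-learn as follows; the feature preparation, the stand-in used for the neural network, and the reading of the stated lambda parameter as the RBF gamma are assumptions made for this sketch.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

# SVM: Gaussian (RBF) kernel, penalty coefficient 0.1; gamma=0.5 stands in for the
# "lambda parameter = 0.5" in the text (an assumption about which knob is meant).
svm_clf = SVC(kernel="rbf", C=0.1, gamma=0.5, probability=True)

# Tree model (random forest): 300 trees, maximum depth 6, as stated in the embodiment.
forest_clf = RandomForestClassifier(n_estimators=300, max_depth=6)

# The CNN of the previous sketch would be wrapped so it exposes predict_proba();
# a simple probabilistic classifier stands in for it here.
cnn_like_clf = LogisticRegression(max_iter=1000)  # placeholder for the CNN's probability output

# Soft voting averages the predicted probabilities of the three models; thresholding the
# red-light probability at 0.5 reproduces the stated classification threshold.
ensemble = VotingClassifier(
    estimators=[("cnn", cnn_like_clf), ("svm", svm_clf), ("rf", forest_clf)],
    voting="soft",
)

# Usage on hypothetical features (e.g. image descriptors plus time/location features):
# ensemble.fit(X_train, y_train)
# red_prob = ensemble.predict_proba(X_frame)[:, red_class_index]
# is_red = red_prob >= 0.5
```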
and step 3: when the current traffic light is in a red light state, the target detection is carried out on the electric bicycle serving as an object to be detected in the video image, two network frame models of darknet and ssd are adopted to carry out target detection regression on the electric bicycle respectively, the target detection regression mean value of the electric bicycle and the electric bicycle is obtained, the height t, the width w and the center point coordinates (x, y) of the electric bicycle can be obtained, the center point coordinates (x, y) are set as the center point of Gaussian distribution, and regression values below 2 standard deviations are removed. This step effectively rejects invalid data noise.
Step 4: the color, shape, size and speed of each electric bicycle are further extracted, and each target is tracked with a Kalman filter, specifically tracking the speed of each electric bicycle together with the regressed height t and center point coordinates (x, y) obtained in step 3, i.e. the center point, height and speed of the regression box; the Kalman filter uses a linear constant-velocity model, and the observed quantities are the center point, the height and the speed. While the red light is on, if an electric bicycle passes through the early warning area, early warning information is judged to have been detected.
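By way of illustration only, a linear constant-velocity Kalman filter over the tracked quantities can be sketched as follows; the state layout, the noise magnitudes and the frame interval are assumptions, and the association of detections with existing tracks is not shown.

```python
import numpy as np


class BicycleKalmanTracker:
    """Linear constant-velocity Kalman filter tracking an electric bicycle's
    center point (x, y) and box height h; the speed is carried in the state."""

    def __init__(self, x, y, h, dt=0.04):               # dt assumed (25 fps video)
        self.state = np.array([x, y, h, 0.0, 0.0])      # state: [x, y, h, vx, vy]
        self.P = np.eye(5) * 10.0                        # state covariance (assumed scale)
        self.F = np.eye(5)                               # constant-velocity transition
        self.F[0, 3] = self.F[1, 4] = dt
        self.H = np.zeros((3, 5))                        # we observe x, y and h
        self.H[0, 0] = self.H[1, 1] = self.H[2, 2] = 1.0
        self.Q = np.eye(5) * 0.01                        # process noise (assumed)
        self.R = np.eye(3) * 1.0                         # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state

    def update(self, x, y, h):
        z = np.array([x, y, h], dtype=float)
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(5) - K @ self.H) @ self.P
        return self.state

    def speed(self):
        vx, vy = self.state[3], self.state[4]
        return float(np.hypot(vx, vy))                   # pixel speed per frame
```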
the early warning output module synthesizes evidence data according to the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures before the electric bicycle enters the zebra crossing and after the electric bicycle enters the zebra crossing for evidence keeping, and uploads the pictures to the upper-layer server in time to inform relevant functional departments. Meanwhile, the functions of mobile phone APP client alarm, WeChat push alarm, mobile phone short message alarm, alarm picture composition and the like are expanded.
The statements in this specification merely illustrate implementations of the inventive concept, and the scope of the present invention should not be construed as limited to the particular forms set forth in the embodiments.

Claims (2)

1. An artificial-intelligence-based electric bicycle red light running video analysis and identification method, characterized in that the analysis and identification method employs a video access module, an artificial intelligence analysis module and an early warning output module;
the video access module is responsible for accessing the surveillance video of a traffic intersection, converting the video stream into single-frame video images, recording the acquisition time of each frame, and sending the frames to the artificial intelligence analysis module;
the artificial intelligence analysis module is responsible for analyzing the single-frame video images with a convolutional neural network and determining whether early warning information is present, and its processing comprises the following steps:
step 1: marking the zebra crossing area in the video image as the early warning area, recording the area coordinates of the early warning area, and using the early warning area as the detection sample for judging whether early warning information exists;
step 2: identifying the traffic light state in the single-frame video image with a neural network classification model and judging whether the light is currently red;
step 3: when the light is red, performing target detection on the electric bicycles in the video image as the objects to be detected, performing target detection regression with two network framework models, darknet and SSD, and averaging the regression results of the two models for each electric bicycle to obtain its height t, width w and center point coordinates (x, y);
step 4: further extracting the color, shape, size and speed of each electric bicycle, and tracking each target with a Kalman filter, specifically tracking the speed of each electric bicycle together with the regressed height t and center point coordinates (x, y) obtained in step 3; while the red light is on, if an electric bicycle passes through the early warning area, early warning information is judged to have been detected;
the early warning output module synthesizes evidence data from the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of the electric bicycle before and after it enters the zebra crossing for evidence preservation, uploads them to the upper-layer server in a timely manner, and notifies the relevant functional departments.
2. The artificial-intelligence-based electric bicycle red light running video analysis and identification method as claimed in claim 1, characterized in that, in step 2 of the artificial intelligence analysis module, the specific process of identifying the traffic light state in a single-frame video image with the neural network classification model is as follows:
s1: preliminarily identifying the traffic light state in the single-frame video image with the neural network classification model;
s2: feeding time, location and position-coordinate features into the neural network classification model to improve it, introducing an SVM (support vector machine) and a tree model, automatically tuning the relevant hyper-parameters of the improved neural network classification model, the SVM and the tree model with Bayesian parameter tuning, and searching for the optimal parameters through cross-validation with accuracy and recall as evaluation metrics, thereby effectively improving the traffic light state recognition rate;
s3: combining the three tuned models, namely the neural network classification model, the SVM and the tree model, into a stacked ensemble serving as the traffic light state classification model, detecting the traffic light state in the single-frame video image, and judging whether the light is currently red.
CN202110704401.5A 2021-08-16 2021-08-16 Electric bicycle red light running video analysis and identification method based on artificial intelligence Pending CN113505663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110704401.5A CN113505663A (en) 2021-08-16 2021-08-16 Electric bicycle red light running video analysis and identification method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110704401.5A CN113505663A (en) 2021-08-16 2021-08-16 Electric bicycle red light running video analysis and identification method based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN113505663A true CN113505663A (en) 2021-10-15

Family

ID=78010501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110704401.5A Pending CN113505663A (en) 2021-08-16 2021-08-16 Electric bicycle red light running video analysis and identification method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113505663A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580808A (en) * 2018-06-08 2019-12-17 杭州海康威视数字技术股份有限公司 Information processing method and device, electronic equipment and intelligent traffic system
CN110232350A (en) * 2019-06-10 2019-09-13 哈尔滨工程大学 A kind of real-time water surface multiple mobile object detecting and tracking method based on on-line study
CN113033275A (en) * 2020-11-17 2021-06-25 浙江浩腾电子科技股份有限公司 Vehicle lane-changing non-turn signal lamp analysis system based on deep learning
CN112349101A (en) * 2021-01-08 2021-02-09 深圳裹动智驾科技有限公司 High-precision map generation method, and method and system for identifying traffic lights
CN112712057A (en) * 2021-01-13 2021-04-27 腾讯科技(深圳)有限公司 Traffic signal identification method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王辉 et al.: "基于视频和位置信息的交通灯识别" [Traffic light recognition based on video and position information], 《大众科技》 [Popular Science & Technology], vol. 17, no. 10

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988110A (en) * 2021-12-02 2022-01-28 深圳比特微电子科技有限公司 Red light running behavior detection method and device and readable storage medium
CN113988110B (en) * 2021-12-02 2022-04-05 深圳比特微电子科技有限公司 Red light running behavior detection method and device and readable storage medium

Similar Documents

Publication Publication Date Title
US11503057B2 (en) Intrusion detection method and system for internet of vehicles based on spark and combined deep learning
CN110390262B (en) Video analysis method, device, server and storage medium
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
CN102164270A (en) Intelligent video monitoring method and system capable of exploring abnormal events
CN111241343A (en) Road information monitoring and analyzing detection method and intelligent traffic control system
CN106331636A (en) Intelligent video monitoring system and method of oil pipelines based on behavioral event triggering
CN109993138A (en) A kind of car plate detection and recognition methods and device
GB2622512A (en) Internet-of-vehicles intrusion detection method and device based on improved convolutional neural network
CN110619277A (en) Multi-community intelligent deployment and control method and system
CN111738218B (en) Human body abnormal behavior recognition system and method
KR102122850B1 (en) Solution for analysis road and recognition vehicle license plate employing deep-learning
CN112149511A (en) Method, terminal and device for detecting violation of driver based on neural network
CN112153334B (en) Intelligent video box equipment for safety management and corresponding intelligent video analysis method
WO2021218385A1 (en) Image identification method, invasion target detection method, and apparatus
CN117437599B (en) Pedestrian abnormal event detection method and system for monitoring scene
CN111274886A (en) Deep learning-based pedestrian red light violation analysis method and system
CN113505663A (en) Electric bicycle red light running video analysis and identification method based on artificial intelligence
CN115546742A (en) Rail foreign matter identification method and system based on monocular thermal infrared camera
CN116092119A (en) Human behavior recognition system based on multidimensional feature fusion and working method thereof
CN112686130B (en) Wisdom fishing boat supervision decision-making system
CN118015562A (en) Method and system for extracting key frames of traffic accident monitoring video in severe weather
CN111178181B (en) Traffic scene segmentation method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination