CN111008612A - Production frequency statistical method, system and storage medium - Google Patents

Production frequency statistical method, system and storage medium

Info

Publication number: CN111008612A
Application number: CN201911343125.3A
Authority: CN (China)
Prior art keywords: smoke, video, production, image, picture
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 方强 (Fang Qiang)
Current / Original Assignee: Biaoqi Wuhan Information Technology Co Ltd (the listed assignee may be inaccurate)
Application filed by: Biaoqi Wuhan Information Technology Co Ltd
Priority date / Filing date: 2019-12-24
Publication date: 2020-04-14

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36 Indoor scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462 Approximate or statistical queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193 Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Fuzzy Systems (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

A production frequency statistical method, system and storage medium are disclosed. The method comprises the following steps: acquiring a video of the activity of a production facility that discharges flue gas; converting the video into individual frames; extracting the salient region of each frame; labeling the preprocessed pictures as containing smoke or not containing smoke to build a training data set; constructing a deep learning model; feeding the training data set into the deep learning model for training to obtain a classification model; acquiring a real-time video of the activity of the production facility, converting it into frames and feeding each frame into the trained model for classification; when a frame is classified as containing smoke, recording a video; when a frame is classified as not containing smoke, stopping the recording; and obtaining the production count from the number of recorded videos. By counting production runs through image classification, the invention improves recognition efficiency.

Description

Production frequency statistical method, system and storage medium
Technical Field
The invention belongs to the technical fields of pattern recognition and deep learning, and in particular relates to a production frequency statistical method, system and storage medium.
Background
With environmental problems intensifying, the pressure on industrial enterprises to transform has become increasingly prominent. A reasonable, well-regulated industrial production process not only improves production efficiency but also effectively reduces environmental pollution, enabling the shift from extensive production to green, environmentally friendly production. In actual production, however, under the influence of factors such as labor costs, economic benefits and tasks assigned from above, industrial enterprises engage in many behaviors that violate production management regulations. Among these, unauthorized off-plan production is one of the more serious violations.
Such unauthorized production refers to pollutant-emitting activity generated by production carried out outside the normal production plan or normal production hours. By comparing statistics of smoke-discharge events against the production plan, it is possible to check whether an enterprise has carried out unplanned production. Traditionally, smoke-discharge counts are obtained either by manually reviewing recorded video or by stationing a dedicated person near the exhaust system to tally the discharge events they observe. These conventional statistical methods have a number of obvious drawbacks: because production takes place every day, the manually recorded discharge times may be inaccurate, and manual counting suffers from missed counts, double counts and operator fatigue. Moreover, if an enterprise has several discharge points, the corresponding labor cost rises sharply.
Disclosure of Invention
The invention provides a production frequency statistical method, system and storage medium. The activity of a production facility that discharges smoke is observed by a high-mounted camera installed in the plant area. When the camera observes a large amount of smoke being discharged from the production facility, video recording starts; when no smoke is visible in the video, recording stops. By counting the number of videos recorded over a period of time, the number of production runs in that period can be determined conveniently.
Because the data come from a high-mounted camera in the plant area, the distance and angle between the camera and the observed production facility are generally far from ideal, so the region of interest occupies only a small proportion of the whole picture. To address this, the invention preprocesses the acquired picture data by extracting the region of interest.
Meanwhile, smoke is physically diffusive: its appearance changes markedly as it spreads, which makes it difficult to recognize with object detection methods. The invention therefore converts the object detection problem into a classification problem: whether to record video is decided by judging whether each frame contains smoke, and the production count is obtained by counting the number of videos recorded over a period of time. Compared with object detection, image classification is easier to implement and, given sufficient data, is less affected by the changing physical state of the smoke. Counting the discharge events (which represent production runs) through image classification therefore greatly improves recognition efficiency and reduces the error rate of the count.
According to a first aspect of embodiments of the present invention, there is provided a production frequency statistical method, comprising:
acquiring a video of the activity of a production facility that discharges flue gas;
converting the video into individual frames;
preprocessing each frame to extract its salient region;
labeling the preprocessed pictures as containing smoke or not containing smoke to build a training data set;
constructing a deep learning model;
feeding the training data set into the deep learning model for training to obtain a classification model;
acquiring a real-time video of the activity of the production facility, converting it into frames, and feeding each frame into the trained model for classification;
when a frame is classified as containing smoke, recording a video; when a frame is classified as not containing smoke, stopping the recording;
and calculating the production count from the number of recorded videos.
In the above production frequency statistical method, preprocessing the picture to extract the salient region comprises:
performing pyramid decomposition on the picture;
extracting the brightness, direction and color features of each decomposed layer;
obtaining brightness, direction and color feature maps for each layer from these features using a center-surround difference algorithm;
performing inter-layer addition of the color, brightness and direction feature maps respectively to obtain the corresponding color, brightness and direction saliency maps;
combining the three resulting saliency maps into a final saliency map by averaging;
and locating the salient region of the picture from the final saliency map.
According to a second aspect of embodiments of the present invention, there is provided a production frequency statistical system, comprising:
a camera for acquiring a video of the activity of a production facility that discharges flue gas;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
According to a third aspect of embodiments of the present invention, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any one of the methods of the first aspect described above.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 shows a flow diagram of a production count statistics method according to one embodiment of the invention.
FIG. 2 shows a flowchart of the ITTI algorithm according to one embodiment of the invention.
FIG. 3 shows a convolutional neural network architecture diagram, according to one embodiment of the present invention.
Detailed Description
FIG. 1 shows a flow diagram of a production count statistics method according to one embodiment of the invention. As shown in fig. 1, the method includes the following steps.
In step 10, a camera is used to acquire a video of the activity of the production facility that discharges flue gas. Which facility to monitor depends on the type of plant; for a coke plant, for example, the camera can be aimed at the bottom of the coke-quenching tower, because a large amount of flue gas is discharged there during production.
In step 11, the RTSP stream address of the camera is obtained, and the video of the production facility captured by the camera is pulled from that address.
In step 12, the video is converted into individual frames using the OpenCV library (opencv-python) in the Python language.
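A minimal sketch of steps 11 and 12, assuming a hypothetical RTSP address; it pulls the camera stream with opencv-python (cv2) and yields each decoded frame:

import cv2  # opencv-python, as referenced in step 12

# Hypothetical RTSP address of the plant camera (step 11); the real URL
# depends on the camera vendor and the plant's network configuration.
RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"

def frames_from_stream(url=RTSP_URL):
    """Yield every decoded frame of the camera stream as a BGR image."""
    cap = cv2.VideoCapture(url)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # stream ended or the connection dropped
                break
            yield frame
    finally:
        cap.release()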
In step 13, because the distance and angle between the camera and the monitored production facility are not ideal, many areas of the captured picture are irrelevant. The picture is therefore preprocessed with the visual saliency model (ITTI) to extract and keep a salient region (region of interest). When counting the production runs of a coke plant, the salient region is the smoke-discharge area at the bottom of the coke-quenching tower.
In step 14, the preprocessed pictures are labeled to build the training data set. The pictures can be classified into those containing smoke and those not containing smoke: a picture containing smoke indicates that the plant is producing, while a picture without smoke indicates that it is not. In addition, to reduce recognition errors caused by the different lighting conditions in daytime and at night, the preprocessed pictures can be further split into four classes: daytime with smoke, daytime without smoke, nighttime with smoke and nighttime without smoke. A sketch of how such a labeled data set could be loaded follows.
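One possible way to organize and load the training set is sketched below; the dataset root and the four class folder names are assumptions made for illustration, and the 64 × 64 size matches the input format of the network described with FIG. 3:

import os
import cv2
import numpy as np

# Assumed folder layout for the labeled, preprocessed pictures; the four
# classes mirror the day/night variant of the training set described above.
CLASSES = ["day_smoke", "day_no_smoke", "night_smoke", "night_no_smoke"]

def load_dataset(root="dataset", size=(64, 64)):
    """Return (X, Y): resized images and one-hot labels for training."""
    images, labels = [], []
    for idx, name in enumerate(CLASSES):
        class_dir = os.path.join(root, name)
        for fname in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, fname))
            if img is None:
                continue  # skip unreadable files
            images.append(cv2.resize(img, size))
            one_hot = np.zeros(len(CLASSES), dtype=np.float32)
            one_hot[idx] = 1.0
            labels.append(one_hot)
    X = np.asarray(images, dtype=np.float32) / 255.0  # normalize to [0, 1]
    Y = np.asarray(labels)
    return X, Y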
In step 15, a deep learning model is constructed with the tflearn tool.
In step 16, the training data set is fed into the deep learning model for training, yielding a classification model.
In step 17, a real-time video of the activity of the production facility is acquired from the camera, converted into individual frames, and each frame is fed into the trained model for classification.
In step 18, when a frame is classified as containing smoke, video recording starts; when a frame is classified as not containing smoke, recording stops.
In step 19, the production count is calculated from the number of recorded videos. A sketch of this detection, recording and counting loop follows.
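Steps 17 to 19 could be implemented roughly as in the following sketch; frames is any iterable of frames (for instance the stream reader sketched after step 12) and classify_frame stands for the trained classifier wrapped so that it returns True when a frame falls into a smoke class. Both names are assumed helpers introduced for illustration, not part of the patent.

import cv2

def count_production_runs(frames, classify_frame,
                          out_pattern="run_{:03d}.mp4", fps=25.0):
    """Record one video per continuous smoke episode and return the count."""
    writer = None
    run_count = 0
    for frame in frames:
        if classify_frame(frame):            # frame classified as smoke
            if writer is None:               # a new production run starts
                run_count += 1
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter(out_pattern.format(run_count),
                                         cv2.VideoWriter_fourcc(*"mp4v"),
                                         fps, (w, h))
            writer.write(frame)
        elif writer is not None:             # smoke has stopped, close the run
            writer.release()
            writer = None
    if writer is not None:                   # stream ended during a run
        writer.release()
    return run_count                         # number of recorded videos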
FIG. 2 shows a flowchart of the ITTI algorithm according to one embodiment of the invention. As shown in FIG. 2, extracting the region of interest from the pictures obtained from the video with the visual saliency model (ITTI) comprises the following steps.
In step 20, Gaussian pyramid decomposition is carried out on the acquired picture in preparation for the subsequent feature extraction. The Gaussian pyramid applies Gaussian low-pass filtering to the picture and then downsamples it; each layer is 1/4 the size of the layer below it, and the bottom layer of the pyramid is the picture at its original size from the video. The decomposed pictures still represent the features of the original picture well while greatly improving speed and performance, which reduces the load on the system and hardware implementing the method.
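A minimal sketch of the decomposition using OpenCV's pyrDown; the number of levels (nine, as in the classic ITTI model) is an assumption, since the patent does not state it:

import cv2

def gaussian_pyramid(image, levels=9):
    """Gaussian pyramid: each level is low-pass filtered and downsampled,
    so every layer is 1/4 the size of the layer below it."""
    pyramid = [image]                # bottom layer: the original picture
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)   # Gaussian blur followed by 2x downsampling
        pyramid.append(image)
    return pyramid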
In step 21, the brightness, direction and color features of each decomposed layer are computed.
The feature extraction formula of the image brightness is as follows:
I=(r+g+b)/3
in the formula, I is a brightness characteristic, r, g and b are channel components of different colors, r represents red, g represents green and b represents blue.
The feature extraction formula of the image color is as follows:
R=(r-(g+b))/2
G=(g-(r+b))/2
B=(b-(g+r))/2
Y=(r+g)-2*(|r-g|+b)
R, G, B and Y in the formulas represent the color feature values of red, green, blue and yellow, respectively.
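The brightness and color features can be computed per pyramid level as sketched below; the code follows the formulas as written above, which differ slightly from the classic ITTI definitions, so it is a transcription of this document's variant rather than of the standard model:

import numpy as np

def brightness_and_color_features(image_bgr):
    """Compute I, R, G, B and Y from the r, g, b channels of one pyramid level."""
    b, g, r = [c.astype(np.float32) for c in np.moveaxis(image_bgr, -1, 0)]
    I = (r + g + b) / 3.0                      # brightness feature
    R = (r - (g + b)) / 2.0                    # red feature
    G = (g - (r + b)) / 2.0                    # green feature
    B = (b - (g + r)) / 2.0                    # blue feature
    Y = (r + g) - 2.0 * (np.abs(r - g) + b)    # yellow feature
    return I, R, G, B, Y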
In step 22, after the brightness, color and direction features have been extracted, a center-surround difference algorithm (C-S operation) is used to obtain the brightness, direction and color feature maps of each layer; the algorithm produces a new image by taking the difference between a center layer and a surround layer. A feature map uses its gray values to measure how sharply the brightness, color or direction changes locally: the sharper the change, the higher the gray value.
The brightness feature map of the picture is computed as:
I(c,s)=|I(c)ΘI(s)|
The color feature maps of the picture are computed as:
RG(c,s)=|(R(c)-G(c))Θ(G(s)-R(s))|
BY(c,s)=|(B(c)-Y(c))Θ(Y(s)-B(s))|
The direction feature map of the picture is computed as:
O(c,s,θ)=|O(c,θ)ΘO(s,θ)|
where c denotes the center layer, s the surround layer, Θ the center-surround difference operator, RG and BY the color feature maps of the red-green and blue-yellow channels, and θ the direction output by the filter, taking the values 0°, 45°, 90° and 135°.
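The center-surround operator Θ can be realized as an across-scale absolute difference: the surround level is resized to the resolution of the center level and subtracted from it. The center scales and center-surround offsets used below follow the classic ITTI parameterization and are an assumption, since the patent does not specify them; feature_pyramid is one feature (for example the brightness I) computed at every pyramid level.

import cv2
import numpy as np

def center_surround(feature_pyramid, centers=(2, 3, 4), deltas=(3, 4)):
    """Feature maps from the Θ operation between center and surround scales."""
    maps = []
    for c in centers:
        for delta in deltas:
            s = c + delta
            h, w = feature_pyramid[c].shape[:2]
            # bring the coarser surround level up to the center resolution
            surround = cv2.resize(feature_pyramid[s], (w, h),
                                  interpolation=cv2.INTER_LINEAR)
            maps.append(np.abs(feature_pyramid[c] - surround))
    return maps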
In step 23, after the feature maps have been obtained, inter-layer (across-scale) addition is performed on the color, brightness and direction feature maps respectively, yielding the corresponding color, brightness and direction saliency maps.
In step 24, the three saliency maps obtained above are combined into a final saliency map by averaging, and the salient region of the picture, such as the smoke-discharge area at the bottom of the coke-quenching tower, is located from it. A sketch of these two steps follows.
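Steps 23 and 24 might look like the sketch below; the normalization and the threshold used to crop the salient region are assumptions made for illustration.

import cv2
import numpy as np

def combine_saliency(brightness_maps, color_maps, direction_maps, out_hw):
    """Across-scale addition per feature, then average the three maps
    into the final saliency map and crop the salient region."""
    def across_scale_sum(maps):
        acc = np.zeros(out_hw, dtype=np.float32)
        for m in maps:
            acc += cv2.resize(m.astype(np.float32), (out_hw[1], out_hw[0]))
        return acc / max(float(acc.max()), 1e-6)   # normalize to [0, 1]

    s_bri = across_scale_sum(brightness_maps)
    s_col = across_scale_sum(color_maps)
    s_dir = across_scale_sum(direction_maps)
    saliency = (s_bri + s_col + s_dir) / 3.0       # average of the three maps

    # Salient region: bounding box of pixels above half the maximum value
    # (the 0.5 threshold is an assumption, not taken from the patent).
    ys, xs = np.where(saliency >= 0.5 * saliency.max())
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return saliency, box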
FIG. 3 shows a convolutional neural network architecture diagram according to one embodiment of the present invention. As shown in FIG. 3, the deep learning model can be constructed with the tflearn tool as follows.
The picture data is compressed to a fixed size format of 64 × 64 and taken as input layer data.
The convolutional and pooling layers are built with the following parameters: the first layer is a convolutional layer with 16 filters, a 3 × 3 kernel and stride 1; the second layer is a pooling layer with a 2 × 2 kernel and stride 2; the third layer is a convolutional layer with 16 filters, a 3 × 3 kernel and stride 1; the fourth layer is a pooling layer with a 2 × 2 kernel and stride 2; the fifth layer is a convolutional layer with 16 filters, a 3 × 3 kernel and stride 1; the sixth layer is a pooling layer with a 2 × 2 kernel and stride 2; and the seventh, eighth and ninth layers are fully connected layers. In addition, the network contains two dropout functions, each of which retains a neuron with probability keep_prob (and suppresses it otherwise); their purpose is to prevent the model from overfitting and to improve its generalization.
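A tflearn network matching the layer parameters listed above could be built as below; the fully connected layer widths, the keep_prob value, the optimizer settings and the number of output classes are assumptions, since the patent leaves them unspecified. Training (step 16) would then amount to a call such as model.fit(X, Y, n_epoch=20, validation_set=0.1, show_metric=True) with the data set built earlier.

import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

def build_model(n_classes=2, keep_prob=0.8):
    """Convolutional classifier following the layer list above."""
    net = input_data(shape=[None, 64, 64, 3])                    # 64 x 64 input
    net = conv_2d(net, 16, 3, strides=1, activation='relu')      # layer 1
    net = max_pool_2d(net, 2, strides=2)                         # layer 2
    net = conv_2d(net, 16, 3, strides=1, activation='relu')      # layer 3
    net = max_pool_2d(net, 2, strides=2)                         # layer 4
    net = conv_2d(net, 16, 3, strides=1, activation='relu')      # layer 5
    net = max_pool_2d(net, 2, strides=2)                         # layer 6
    net = fully_connected(net, 256, activation='relu')           # layer 7
    net = dropout(net, keep_prob)                                # dropout 1
    net = fully_connected(net, 128, activation='relu')           # layer 8
    net = dropout(net, keep_prob)                                # dropout 2
    net = fully_connected(net, n_classes, activation='softmax')  # layer 9
    net = regression(net, optimizer='adam', learning_rate=0.001,
                     loss='categorical_crossentropy')
    return tflearn.DNN(net)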
In an exemplary embodiment, there is also provided a production frequency statistical system, comprising: a camera for acquiring videos of the activity of production facilities that discharge flue gas; a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions in the memory to perform all or part of the steps of the method described above.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor to perform all or part of the steps of the above method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

Claims (8)

1. A production frequency statistical method, characterized by comprising the following steps:
acquiring a video of the activity of a production facility that discharges flue gas;
converting the video into individual frames;
preprocessing each frame to extract its salient region;
labeling the preprocessed pictures as containing smoke or not containing smoke to build a training data set;
constructing a deep learning model;
feeding the training data set into the deep learning model for training to obtain a classification model;
acquiring a real-time video of the activity of the production facility, converting it into frames, and feeding each frame into the trained model for classification;
when a frame is classified as containing smoke, recording a video; when a frame is classified as not containing smoke, stopping the recording;
and calculating the production count from the number of recorded videos.
2. The production frequency statistical method according to claim 1, wherein preprocessing the picture to extract the salient region comprises:
performing pyramid decomposition on the picture;
extracting the brightness, direction and color features of each decomposed layer;
obtaining brightness, direction and color feature maps for each layer from these features using a center-surround difference algorithm;
performing inter-layer addition of the color, brightness and direction feature maps respectively to obtain the corresponding color, brightness and direction saliency maps;
combining the three resulting saliency maps into a final saliency map by averaging;
and locating the salient region of the picture from the final saliency map.
3. The production frequency statistical method according to claim 2, wherein each layer of the image in the pyramid decomposition is 1/4 the size of the layer below it.
4. The production frequency statistical method according to claim 3, wherein the brightness feature extraction formula is:
I=(r+g+b)/3
where I is the brightness feature and r, g and b are the color channel components, r representing red, g representing green and b representing blue;
the color feature extraction formulas are:
R=(r-(g+b))/2
G=(g-(r+b))/2
B=(b-(g+r))/2
Y=(r+g)-2*(|r-g|+b)
where R, G, B and Y represent the color feature values of red, green, blue and yellow, respectively.
5. The production frequency statistical method according to claim 4, wherein the formula of the brightness feature map is:
I(c,s)=|I(c)ΘI(s)|
the formulas of the color feature maps are:
RG(c,s)=|(R(c)-G(c))Θ(G(s)-R(s))|
BY(c,s)=|(B(c)-Y(c))Θ(Y(s)-B(s))|
and the formula of the direction feature map is:
O(c,s,θ)=|O(c,θ)ΘO(s,θ)|
where c denotes the center layer, s the surround layer, Θ the center-surround difference operator, RG and BY the color feature maps of the red-green and blue-yellow channels, and θ the direction output by the filter, taking the values 0°, 45°, 90° and 135°.
6. The production frequency statistical method according to claim 1, wherein the preprocessed pictures are classified into daytime with smoke, daytime without smoke, nighttime with smoke and nighttime without smoke to build the training data set.
7. A production frequency statistical system, characterized by comprising:
a camera for acquiring a video of the activity of a production facility that discharges flue gas;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1-6.
8. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Application CN201911343125.3A, priority date 2019-12-24, filing date 2019-12-24: Production frequency statistical method, system and storage medium (Pending; published as CN111008612A)

Priority Applications (1)

Application Number: CN201911343125.3A
Priority Date: 2019-12-24
Filing Date: 2019-12-24
Title: Production frequency statistical method, system and storage medium

Publications (1)

Publication Number: CN111008612A
Publication Date: 2020-04-14

Family

ID=70117725

Family Applications (1)

Application Number: CN201911343125.3A (Pending; published as CN111008612A)
Priority Date: 2019-12-24
Filing Date: 2019-12-24
Title: Production frequency statistical method, system and storage medium

Country Status (1)

Country Link
CN (1) CN111008612A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103728879A (en) * 2014-01-20 2014-04-16 华北电力大学 Power station boiler emission soft measuring method based on least squares support vector machine and on-line updating
CN108956876A (en) * 2018-07-12 2018-12-07 浙江大学 A kind of measurement time delay correcting method of flue gas on-line continuous monitoring system
CN109598891A (en) * 2018-12-24 2019-04-09 中南民族大学 A kind of method and system for realizing Smoke Detection using deep learning disaggregated model
CN110059723A (en) * 2019-03-19 2019-07-26 北京工业大学 A kind of robust smog detection method based on integrated depth convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
秦文政 (Qin Wenzheng): "Research on smoke detection technology based on a visual attention model in an open environment" (开放环境下基于视觉注意模型的烟雾检测技术研究) *

Similar Documents

Publication Publication Date Title
CN111191576B (en) Personnel behavior target detection model construction method, intelligent analysis method and system
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN113139521B (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN110427922A (en) One kind is based on machine vision and convolutional neural networks pest and disease damage identifying system and method
CN106384117B (en) A kind of vehicle color identification method and device
CN113409362B (en) High altitude parabolic detection method and device, equipment and computer storage medium
CN106339657B (en) Crop straw burning monitoring method based on monitor video, device
CN113887412B (en) Detection method, detection terminal, monitoring system and storage medium for pollution emission
CN112906550B (en) Static gesture recognition method based on watershed transformation
CN107818303A (en) Unmanned plane oil-gas pipeline image automatic comparative analysis method, system and software memory
CN101316371B (en) Flame detecting method and device
CN113989858B (en) Work clothes identification method and system
CN113409360A (en) High altitude parabolic detection method and device, equipment and computer storage medium
CN110414342A (en) A kind of movement human detection recognition method based on video image processing technology
CN105405138A (en) Water surface target tracking method based on saliency detection
CN114885119A (en) Intelligent monitoring alarm system and method based on computer vision
CN114926778A (en) Safety helmet and personnel identity recognition system under production environment
CN115082834A (en) Engineering vehicle black smoke emission monitoring method and system based on deep learning
CN105718896A (en) Intelligent robot with target recognition function
CN115311623A (en) Equipment oil leakage detection method and system based on infrared thermal imaging
CN110647813A (en) Human face real-time detection and identification method based on unmanned aerial vehicle aerial photography
CN105930814A (en) Method for detecting personnel abnormal gathering behavior on the basis of video monitoring platform
CN111008612A (en) Production frequency statistical method, system and storage medium
CN110135274B (en) Face recognition-based people flow statistics method
CN110956156A (en) Deep learning-based red light running detection system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination