CN115065798A - Big data-based video analysis monitoring system - Google Patents

Big data-based video analysis monitoring system Download PDF

Info

Publication number
CN115065798A
CN115065798A CN202210991968.XA CN202210991968A CN115065798A CN 115065798 A CN115065798 A CN 115065798A CN 202210991968 A CN202210991968 A CN 202210991968A CN 115065798 A CN115065798 A CN 115065798A
Authority
CN
China
Prior art keywords
video
frame
big data
extracted
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210991968.XA
Other languages
Chinese (zh)
Other versions
CN115065798B (en
Inventor
陈志明
陈博允
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Zhilian Information Technology Co ltd
Guangzhou Intelligent Computing Information Technology Co ltd
Original Assignee
Guangzhou Zhilian Information Technology Co ltd
Guangzhou Intelligent Computing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Zhilian Information Technology Co ltd, Guangzhou Intelligent Computing Information Technology Co ltd filed Critical Guangzhou Zhilian Information Technology Co ltd
Priority to CN202210991968.XA priority Critical patent/CN115065798B/en
Publication of CN115065798A publication Critical patent/CN115065798A/en
Application granted granted Critical
Publication of CN115065798B publication Critical patent/CN115065798B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/96Management of image or video recognition tasks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video analysis monitoring system based on big data, which comprises a shooting module, a frame extracting module, a big data identification module and an alarm module, wherein the shooting module is used for shooting the big data; the shooting module is used for shooting the monitoring area to obtain a monitoring video; the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames; the big data identification module is used for identifying the video frame by adopting a big data technology to obtain an identification result; and the alarm module is used for prompting the operator on duty according to the identification result. When the video is used for monitoring the monitoring area, the video frames are extracted through the self-adaptive frame number interval to carry out big data identification, and the occurrence of the event of wasting computing resources caused by extracting the video frames by using the fixed frame number interval all the time is effectively avoided.

Description

Big data-based video analysis monitoring system
Technical Field
The invention relates to the field of monitoring, in particular to a video analysis monitoring system based on big data.
Background
The monitoring is a physical basis for real-time monitoring of key departments or important places of various industries. The management department can acquire effective data, image or sound information through the monitoring system, and timely monitor and memorize the process of the emergency abnormal event so as to provide efficient and timely command and height, police force deployment, case handling and the like. With the rapid development and popularization of the current computer application, a strong digital wave is raised globally, and the digitalization of various devices is the primary objective of safety protection. The performance characteristics of the digital monitoring alarm are as follows: the method comprises the steps of monitoring picture real-time display, video image quality single-channel adjusting function, independent setting of video recording speed of each channel, quick retrieval, setting of multiple video recording modes, automatic backup, tripod head/lens control function, network transmission and the like.
With the development of big data technology, the existing monitoring system also develops the function of analyzing the monitoring video content in real time. In the prior art, generally, a fixed frame number interval is adopted to perform frame extraction processing on the content of a video, so that a big data technology is used to identify the content in a monitoring video and judge whether a set type event occurs. However, in the normal monitoring process, the probability of occurrence of the set type of event is very small, and if the frame is extracted at a fixed frame number interval, it obviously wastes the computing resource.
Disclosure of Invention
The invention aims to disclose a video analysis monitoring system based on big data, which solves the problem that the existing video monitoring system adopts fixed frame number intervals to extract frames, then identifies frame pictures, judges whether events with set types occur or not and wastes computing resources.
In order to achieve the purpose, the invention adopts the following technical scheme:
a video analysis monitoring system based on big data comprises a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frame by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frame
Figure 224325DEST_PATH_IMAGE001
To, for
Figure 741894DEST_PATH_IMAGE001
Performing identification processing, if the obtained identification result is
Figure 704033DEST_PATH_IMAGE001
If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
Figure 801302DEST_PATH_IMAGE002
if the obtained identification result is
Figure 430867DEST_PATH_IMAGE001
If the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
Figure 322600DEST_PATH_IMAGE003
wherein,
Figure 571703DEST_PATH_IMAGE004
indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,
Figure 738242DEST_PATH_IMAGE005
indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,
Figure 956734DEST_PATH_IMAGE006
indicates a preset number of frames,
Figure 19368DEST_PATH_IMAGE007
a preset lower limit value of the number of frames is shown,
Figure 690520DEST_PATH_IMAGE008
indicating a preset upper limit value of the number of frames.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
Preferably, the corresponding frame number of the extracted k-th video frame in the monitored video is recorded as
Figure 457488DEST_PATH_IMAGE009
Then, the calculation formula of the frame number corresponding to the extracted (k + 1) th video frame in the surveillance video is:
Figure 530486DEST_PATH_IMAGE010
wherein,
Figure 378005DEST_PATH_IMAGE011
and the number of the frame number corresponding to the (k + 1) th video frame to be extracted in the monitoring video is shown.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by training by adopting a big data technology for recognition processing to obtain a recognition result.
Preferably, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
Preferably, the identification result includes an event that the video frame includes a set type or an event that the video frame does not include a set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging whether the identification result is
Figure 536454DEST_PATH_IMAGE001
When the event contains the set type of event, sending a prompt instruction to a prompt unit;
and the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying the video frame to obtain the grayscale image includes:
graying the video frame by using the following formula:
Figure 44795DEST_PATH_IMAGE012
wherein,
Figure 237879DEST_PATH_IMAGE013
representing a grayscale image
Figure 642316DEST_PATH_IMAGE014
The middle coordinate is
Figure 288061DEST_PATH_IMAGE015
Pixel of the pixel pointThe value of the one or more of the one,
Figure 396831DEST_PATH_IMAGE016
which represents a pre-set scaling factor, is,
Figure 647684DEST_PATH_IMAGE017
respectively expressed in the red component image, the green component image and the blue component image, and having coordinates of
Figure 19759DEST_PATH_IMAGE015
The red component image, the green component image and the blue component image are respectively images of a red component, a green component and a blue component of the video frame in an RGB color space.
When the video is used for monitoring the monitored area, the video frames are extracted for big data identification through the self-adaptive frame number interval, so that the occurrence of the event of wasting computing resources caused by extracting the video frames by using the fixed frame number interval all the time is effectively avoided.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an embodiment of a big data based video analysis monitoring system according to the present invention.
FIG. 2 is a diagram of an embodiment of obtaining a pre-processed image according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
As shown in fig. 1, the present invention provides a video analysis monitoring system based on big data, which includes a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frame
Figure 155730DEST_PATH_IMAGE001
To, for
Figure 537033DEST_PATH_IMAGE001
Performing identification processing, if the obtained identification result is
Figure 439130DEST_PATH_IMAGE001
If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
Figure 185369DEST_PATH_IMAGE018
if the obtained identification result is
Figure 540127DEST_PATH_IMAGE001
If the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
Figure 256279DEST_PATH_IMAGE019
wherein,
Figure 747303DEST_PATH_IMAGE004
indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,
Figure 726761DEST_PATH_IMAGE020
indicating the frame number interval between the extracted kth video frame and the extracted (k-1) th video frame,
Figure 34726DEST_PATH_IMAGE006
indicates a preset number of frames,
Figure 492253DEST_PATH_IMAGE007
represents a preset lower limit value of the number of frames,
Figure 368942DEST_PATH_IMAGE008
indicating a preset upper limit value of the number of frames.
When the video is used for monitoring the monitoring area, the video frames are extracted through the self-adaptive frame number interval to carry out big data identification, and the occurrence of the event of wasting computing resources caused by extracting the video frames by using the fixed frame number interval all the time is effectively avoided.
When an event is detected, the invention starts to shorten the time interval to improve the security level of the system, and when no event is detected, the invention increases the frame number interval to avoid wasting computing resources.
Preferably, the set type event can be set according to different monitoring areas, and meanwhile, the corresponding type of data set class is also required to be used for training to obtain the corresponding recognition model. For example, when monitoring an escalator, the event of the set type may be that a person riding the escalator falls, the number of passengers exceeds a set value, or the like.
Preferably, the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
Preferably, the corresponding frame number of the extracted k-th video frame in the monitored video is recorded as
Figure 519300DEST_PATH_IMAGE009
Then, the calculation formula of the frame number corresponding to the extracted (k + 1) th video frame in the surveillance video is:
Figure 51913DEST_PATH_IMAGE021
wherein,
Figure 47551DEST_PATH_IMAGE011
and the corresponding frame number of the (k + 1) th video frame to be extracted in the monitoring video is shown.
Preferably, the identifying the video frame by using the big data technology to obtain the identification result includes:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
Preferably, the recognition model of the invention is trained by adopting a distributed computing mode, the training tasks are distributed to a plurality of nodes for computation, and finally, the computation results are collected to obtain the training results.
Preferably, as shown in fig. 2, the preprocessing the video frame to obtain a preprocessed image includes:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
The noise reduction processing is carried out before the foreground extraction, so that the influence of noise on the foreground extraction can be effectively reduced, and the accuracy of the extracted preprocessed image only comprising the pixels of the foreground part is improved.
Preferably, the identification result includes an event that the video frame includes a set type or an event that the video frame does not include a set type.
Preferably, the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging whether the identification result is
Figure 44326DEST_PATH_IMAGE001
When the event contains the set type of event, a prompt instruction is sent to a prompt unit;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
Preferably, the graying the video frame to obtain the grayscale image includes:
graying the video frame by using the following formula:
Figure 568848DEST_PATH_IMAGE022
wherein,
Figure 119915DEST_PATH_IMAGE013
representing a grayscale image
Figure 453332DEST_PATH_IMAGE023
The center coordinate is
Figure 773455DEST_PATH_IMAGE015
The pixel value of the pixel point of (a),
Figure 37DEST_PATH_IMAGE016
which represents a pre-set scaling factor, is,
Figure 772821DEST_PATH_IMAGE017
respectively expressed in the red component image, the green component image and the blue component image, and having coordinates of
Figure 375840DEST_PATH_IMAGE015
The red component image, the green component image and the blue component image are respectively images of a red component, a green component and a blue component of the video frame in an RGB color space.
Preferably, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out edge detection on the gray level image by using a Canny algorithm to obtain a set oneSet of edge pixel points;
noise detection is carried out on pixel points in the gray-scale image based on oneSet, and a set twoSet of the noise pixel points is obtained;
and carrying out noise reduction processing on the pixel points in the twoSet to obtain a noise reduction image.
Different from a general noise reduction processing mode, the noise reduction processing method is not used for directly carrying out noise reduction processing on all pixel points, and due to the fact that the complexity of a noise reduction algorithm is high, the speed of the noise reduction processing is influenced, and therefore the real-time performance of correctly identifying the preset type event by the monitoring system is influenced. Therefore, the invention adopts the steps of firstly carrying out edge detection, then carrying out noise detection on the basis of the result of the edge detection, and finally carrying out noise reduction processing on the set of pixel points obtained on the basis of the noise detection.
Preferably, the performing noise detection on the pixel points in the gray-scale image based on oneSet to obtain a set twoSet of noise pixel points includes:
respectively calculating the noise parameter of each pixel point in the gray level image;
and storing the pixel points with the noise parameters larger than the set parameter threshold value into the set twoSet.
Preferably, the noise parameter is calculated as follows:
for the pixel point wtj, the noise parameter wtj is calculated using the following formula:
Figure 816049DEST_PATH_IMAGE024
wherein,
Figure 275849DEST_PATH_IMAGE025
the noise parameter of the representation wtj is,
Figure 4771DEST_PATH_IMAGE026
Figure 411481DEST_PATH_IMAGE027
representing preset weight coefficients, niset representing wtj centered
Figure 968846DEST_PATH_IMAGE028
A collection of pixel points within a region of size,
Figure 537230DEST_PATH_IMAGE029
and
Figure 815765DEST_PATH_IMAGE030
representing the pixel values of pixel point wtj and pixel point i respectively,
Figure 26166DEST_PATH_IMAGE031
the standard deviation of the pixel values representing the pixels in niset,
Figure 644230DEST_PATH_IMAGE032
indicating the length of the connection between pixel wtj and pixel i,
Figure 180253DEST_PATH_IMAGE033
standard deviation representing the length of the connection between the pixel point in niset and pixel point wtj;
Figure 680505DEST_PATH_IMAGE034
a similar parameter is indicated and is,
Figure 632280DEST_PATH_IMAGE035
numbs represents the number of pixels in niset that have the same gradient direction as wtj,
Figure 904517DEST_PATH_IMAGE036
representing the edge judgment parameter, if wtj belongs to the set oneSet
Figure 80284DEST_PATH_IMAGE036
Is 1.5, if wtj does not belong to the set oneSet, then
Figure 67831DEST_PATH_IMAGE036
The content of the organic acid is 0.5,
Figure 354456DEST_PATH_IMAGE037
representing a preset constant coefficient.
The noise parameter of the invention is relevant to the result of the edge detection, when the pixel belongs to the set oneSet, the probability that the pixel belongs to the edge pixel is very high, therefore, the value of the right part of the above formula is correspondingly reduced, but because the invention carries out the edge detection before the noise reduction, part of the noise pixel is probably wrongly identified as the edge pixel, therefore, when one noise pixel is wrongly identified as the edge pixel, the value of the left part is very large, and at this time, the noise pixel in the state can be correctly identified by the parameter threshold value set by the invention. The left part of the invention considers the difference of the connecting line length and the pixel value between the pixel point which is currently calculated and the pixel point in the set range, and when the difference of the weighted result of the pixel point and the area is larger, the probability of belonging to the noise pixel point is also larger, therefore, the accuracy of the detection result of the noise pixel point can be improved by the arrangement.
Preferably, the performing noise reduction processing on the pixel point in the twoSet to obtain a noise-reduced image includes:
and carrying out noise reduction processing on the pixel points in the twoSet by using a non-local mean filtering algorithm to obtain a noise reduction image.
Preferably, the performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image includes:
performing foreground extraction processing on the noise-reduced image by using an Otsu method to obtain a set rdSet of foreground pixels;
performing foreground extraction processing on the noise-reduced image by using a watershed algorithm to obtain a set sdSet of foreground pixel points;
acquiring an intersection tdSet of the rdSet and the sdSet;
and taking the edge pixel points in the tdSet as seed pixel points, and performing image growth processing to obtain a preprocessed image.
The existing foreground extraction algorithm generally adopts a single algorithm to extract, so that the continuity of the obtained preprocessed image is poorer, therefore, the two algorithms are adopted to process, then the intersection of the processing results is obtained, and the hole is repaired based on the intersection, so that the continuity of the image is improved.
The above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. A video analysis monitoring system based on big data is characterized by comprising a shooting module, a frame extracting module, a big data identification module and an alarm module;
the shooting module is used for shooting the monitoring area to obtain a monitoring video;
the frame extracting module is used for adopting a self-adaptive frame number interval to carry out frame extracting processing on the monitoring video to obtain video frames;
the big data identification module is used for identifying the video frames by adopting a big data technology to obtain an identification result;
the alarm module is used for prompting the operator on duty according to the identification result;
the frame number interval is calculated as follows:
for the extracted k video frame
Figure DEST_PATH_IMAGE002_15A
To, for
Figure DEST_PATH_IMAGE002_16A
Performing identification processing, if the obtained identification result is
Figure DEST_PATH_IMAGE002_17A
If the event contains the set type of event, the frame number interval between the (k + 1) th video frame to be extracted and the k-th video frame already extracted is calculated in the following way:
Figure DEST_PATH_IMAGE004AAA
if the obtained identification result is
Figure DEST_PATH_IMAGE002_18A
If the event does not contain the set type, the frame number interval between the (k + 1) th video frame to be extracted and the extracted k-th video frame is calculated by adopting the following method:
Figure DEST_PATH_IMAGE006AAA
wherein,
Figure DEST_PATH_IMAGE008AAA
indicating the frame number interval between the (k + 1) th video frame to be extracted and the (k) th video frame already extracted,
Figure DEST_PATH_IMAGE010AAA
indicating the frame number between the extracted k-th video frame and the extracted k-1 th video frameThe separation of the air inlet and the air outlet,
Figure DEST_PATH_IMAGE012AAA
indicates a preset number of frames,
Figure DEST_PATH_IMAGE014AAA
represents a preset lower limit value of the number of frames,
Figure DEST_PATH_IMAGE016AAA
represents a preset upper limit value of the number of frames.
2. The big data based video analysis monitoring system according to claim 1, wherein the shooting module comprises a shooting unit and a light supplementing unit;
the shooting unit is used for shooting the monitoring area to obtain a monitoring video;
the light supplementing unit is used for supplementing light to the monitored area when the light brightness is lower than a set brightness threshold value.
3. The big-data-based video analysis monitoring system according to claim 1, wherein the corresponding frame number of the extracted kth video frame in the monitored video is recorded as
Figure DEST_PATH_IMAGE018AAA
Then, the calculation formula of the frame number corresponding to the extracted (k + 1) th video frame in the surveillance video is:
Figure DEST_PATH_IMAGE020AAA
wherein,
Figure DEST_PATH_IMAGE022AAA
and the corresponding frame number of the (k + 1) th video frame to be extracted in the monitoring video is shown.
4. The video analysis monitoring system based on big data according to claim 1, wherein the identifying the video frames by using big data technology to obtain the identification result comprises:
preprocessing a video frame to obtain a preprocessed image;
and inputting the preprocessed image into a recognition model obtained by big data technology training for recognition processing to obtain a recognition result.
5. The big data based video analysis monitoring system according to claim 4, wherein the preprocessing the video frames to obtain the preprocessed image comprises:
carrying out graying processing on the video frame to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
and performing foreground extraction processing on the noise-reduced image to obtain a preprocessed image.
6. The big data based video analysis monitoring system according to claim 4, wherein the recognition result comprises an event with a set type contained in the video frame or an event without a set type contained in the video frame.
7. The big data based video analysis monitoring system according to claim 1, wherein the alarm module comprises a control unit and a prompt unit;
the control unit is used for judging whether the identification result is
Figure DEST_PATH_IMAGE002_19A
When the event contains the set type of event, a prompt instruction is sent to a prompt unit;
the prompting unit is used for prompting the operator on duty after receiving the prompting instruction.
8. The big data based video analysis monitoring system according to claim 5, wherein the graying the video frames to obtain the grayscale images comprises:
graying the video frame by using the following formula:
Figure DEST_PATH_IMAGE024AAA
wherein,
Figure DEST_PATH_IMAGE026AAA
representing a grayscale image
Figure DEST_PATH_IMAGE028AAA
The center coordinate is
Figure DEST_PATH_IMAGE030_6A
The pixel value of the pixel point of (a),
Figure DEST_PATH_IMAGE032AAA
which represents a pre-set scaling factor, is,
Figure DEST_PATH_IMAGE034AAA
respectively expressed in red component image, green component image, and blue component image, and having coordinates of
Figure DEST_PATH_IMAGE030_7A
The red component image, the green component image and the blue component image are respectively images of a red component, a green component and a blue component of the video frame in an RGB color space.
CN202210991968.XA 2022-08-18 2022-08-18 Big data-based video analysis monitoring system Active CN115065798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210991968.XA CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210991968.XA CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Publications (2)

Publication Number Publication Date
CN115065798A true CN115065798A (en) 2022-09-16
CN115065798B CN115065798B (en) 2022-11-22

Family

ID=83208138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210991968.XA Active CN115065798B (en) 2022-08-18 2022-08-18 Big data-based video analysis monitoring system

Country Status (1)

Country Link
CN (1) CN115065798B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115408557A (en) * 2022-11-01 2022-11-29 吉林信息安全测评中心 Safety monitoring system based on big data
CN115761571A (en) * 2022-10-26 2023-03-07 北京百度网讯科技有限公司 Video-based target retrieval method, device, equipment and storage medium
CN116805433A (en) * 2023-06-27 2023-09-26 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN117404636A (en) * 2023-09-15 2024-01-16 山东省金海龙建工科技有限公司 Intelligent street lamp for parking lot based on image processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572357A (en) * 2011-12-31 2012-07-11 中兴通讯股份有限公司 Video monitoring system front end memory method and video monitoring system
CN104618679A (en) * 2015-03-13 2015-05-13 南京知乎信息科技有限公司 Method for extracting key information frame from monitoring video
CN111064924A (en) * 2019-11-26 2020-04-24 天津易华录信息技术有限公司 Video monitoring method and system based on artificial intelligence
CN111523347A (en) * 2019-02-01 2020-08-11 北京奇虎科技有限公司 Image detection method and device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572357A (en) * 2011-12-31 2012-07-11 中兴通讯股份有限公司 Video monitoring system front end memory method and video monitoring system
CN104618679A (en) * 2015-03-13 2015-05-13 南京知乎信息科技有限公司 Method for extracting key information frame from monitoring video
CN111523347A (en) * 2019-02-01 2020-08-11 北京奇虎科技有限公司 Image detection method and device, computer equipment and storage medium
CN111064924A (en) * 2019-11-26 2020-04-24 天津易华录信息技术有限公司 Video monitoring method and system based on artificial intelligence

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761571A (en) * 2022-10-26 2023-03-07 北京百度网讯科技有限公司 Video-based target retrieval method, device, equipment and storage medium
CN115408557A (en) * 2022-11-01 2022-11-29 吉林信息安全测评中心 Safety monitoring system based on big data
CN116805433A (en) * 2023-06-27 2023-09-26 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN116805433B (en) * 2023-06-27 2024-02-13 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN117404636A (en) * 2023-09-15 2024-01-16 山东省金海龙建工科技有限公司 Intelligent street lamp for parking lot based on image processing

Also Published As

Publication number Publication date
CN115065798B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115065798B (en) Big data-based video analysis monitoring system
KR101942808B1 (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN
EP1805715B1 (en) A method and system for processing video data
KR102194499B1 (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN and Driving Method Thereof
CN106845890B (en) Storage monitoring method and device based on video monitoring
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
CN110826538A (en) Abnormal off-duty identification system for electric power business hall
US20080152236A1 (en) Image processing method and apparatus
CN112364740B (en) Unmanned aerial vehicle room monitoring method and system based on computer vision
WO2019114145A1 (en) Head count detection method and device in surveillance video
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN112287823A (en) Facial mask identification method based on video monitoring
CN113065568A (en) Target detection, attribute identification and tracking method and system
CN111581679A (en) Method for preventing screen from shooting based on deep network
CN112866654B (en) Intelligent video monitoring system
CN117557937A (en) Monitoring camera image anomaly detection method and system
CN103034997A (en) Foreground detection method for separation of foreground and background of surveillance video
CN116749817A (en) Remote control method and system for charging pile
CN116110095A (en) Training method of face filtering model, face recognition method and device
WO2022198507A1 (en) Obstacle detection method, apparatus, and device, and computer storage medium
CN111145219B (en) Efficient video moving target detection method based on Codebook principle
CN113591591A (en) Artificial intelligence field behavior recognition system
CN112488031A (en) Safety helmet detection method based on color segmentation
CN117011288B (en) Video quality diagnosis method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant