CN113962924A - Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection - Google Patents

Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection

Info

Publication number
CN113962924A
Authority
CN
China
Prior art keywords
image
video monitoring
substep
video
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110884012.5A
Other languages
Chinese (zh)
Inventor
Zhan Guili
Li Ximing
Lu Wenhuan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Pride Internet Of Things Information Technology Co ltd
Original Assignee
Guangzhou Pride Internet Of Things Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Pride Internet Of Things Information Technology Co ltd filed Critical Guangzhou Pride Internet Of Things Information Technology Co ltd
Priority to CN202110884012.5A priority Critical patent/CN113962924A/en
Publication of CN113962924A publication Critical patent/CN113962924A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a video monitoring quality evaluation method that includes pan-tilt movement and mosaic detection, comprising the following steps: step S1: acquiring a video monitoring image; step S2: transmitting the video monitoring image to a server; step S3: the server performs image quality evaluation on the video monitoring image to obtain an evaluation result, where the image quality evaluation comprises occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection; step S4: issuing an early warning when the evaluation result indicates a potential safety hazard. In this way, the method can detect whether the video monitoring image exhibits occlusion, abnormal pan-tilt movement, abnormal color cast or mosaic artifacts; that is, it covers faults that traditional image quality evaluation algorithms cannot detect, and therefore achieves higher accuracy in monitoring-quality scenarios.

Description

Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection
Technical Field
The invention relates to the technical field of image quality evaluation, in particular to a video monitoring quality evaluation method containing pan-tilt movement and mosaic detection.
Background
At present, an equipment operation and maintenance management platform can detect camera hardware faults, but it is difficult to detect soft faults such as occlusion interference, abnormal color cast and mosaic interference. These are not equipment faults, so monitoring the camera hardware cannot reveal them, and operation and maintenance personnel have to troubleshoot them based on subjective judgment. Relying only on human observation of the surveillance video consumes a large amount of manpower and material resources; because human attention is limited, the accuracy and timeliness of fault detection cannot be guaranteed, and long working hours easily lead to reduced efficiency, false detections and missed detections.
At present, image quality evaluation methods fall into two main categories: one category is based on mathematical and statistical models, such as NIQE, PIQE, MSE and PSNR; the other is based on human visual characteristics and image structure, such as BRISQUE and SSIM. The flow of the existing BRISQUE algorithm is introduced here; it comprises the following steps (1), (2) and (3):
(1) Extracting Natural Scene Statistics (NSS):
the distribution of pixel intensities of the natural image is different from the distribution of pixel intensities of the distorted image. This difference in distribution is more pronounced when normalizing the pixel intensities and calculating the distribution over these normalized intensities. In particular, after normalization, the pixel intensities of natural images follow a gaussian distribution (bell curve), while the pixel intensities of unnatural or distorted images do not follow a gaussian distribution (bell curve).
The extracted features are respectively:
Table 1 (feature extraction) is provided as an image in the original document and is not reproduced here.
(2) Calculating the feature vector:
the first two elements of the 36 × 1 eigenvector are calculated by fitting MSCN parameters to a Generalized Gaussian Distribution (GGD). Next, an Asymmetric Generalized Gaussian Distribution (AGGD) is adapted to each of the 4 adjacent element product parameters. Finally, 16 values are obtained. The image is reduced to half the original size and the same process is repeated to obtain 18 new numbers.
Table 2 (feature description) is provided as an image in the original document and is not reproduced here.
(3) Predicting the image score:
After the images are converted into feature vectors, the feature vectors and quality labels of all images in the training data set are fed into a Support Vector Machine (SVM) to predict the image score, which yields the final evaluation score. The image quality is then classified according to this score.
The image quality evaluation algorithms described above, including BRISQUE, achieve high accuracy on existing public data sets. Actual usage scenarios vary, however, and each requires special handling. In the field of surveillance video, for example: (1) not only the sharpness of the image itself must be considered but also whether usable surveillance information can be obtained, since a frame under occlusion may still score as high quality; (2) the camera may be moved artificially; and so on. General-purpose detection algorithms therefore still need to be adapted for surveillance video detection.
That is, existing image quality evaluation methods combine poorly with actual surveillance video quality evaluation: traditional methods cannot detect occlusion of the surveillance video, abnormal rotation of the monitoring probe, abnormal color cast or mosaic interference, so the accuracy of existing video monitoring quality evaluation is low.
Disclosure of Invention
Technical problem to be solved
In view of the defects of the prior art, the invention provides a video monitoring quality evaluation method including pan-tilt movement and mosaic detection that can solve the above technical problems.
(II) technical scheme
In order to solve the technical problems, the invention provides the following technical scheme: a video monitoring quality evaluation method containing pan-tilt movement and mosaic detection comprises the following steps:
step S1: acquiring a video monitoring image;
step S2: transmitting the video monitoring image to a server;
step S3: the server performs image quality evaluation on the video monitoring image to obtain an evaluation result, where the image quality evaluation comprises occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection;
step S4: issuing an early warning when the evaluation result indicates a potential safety hazard.
Preferably, the pan-tilt movement detection comprises the following substeps:
substep S321: selecting a first video monitoring image and a second video monitoring image, wherein the first video monitoring image is a video monitoring image before the pan-tilt moves, and the second video monitoring image is a video monitoring image after the pan-tilt moves;
substep S322: detecting each characteristic point of the first video monitoring image and the second video monitoring image by using a FAST algorithm;
substep S323: describing each feature point of the first video monitoring image and the second video monitoring image to obtain each feature point descriptor;
substep S324: calculating the distance between two feature points corresponding to the same feature point descriptors of the first video monitoring image and the second video monitoring image;
substep S325: judging whether the distance between the two feature points is equal to a preset pan-tilt movement distance; if not, the evaluation result is that a potential safety hazard exists.
Preferably, the mosaic detection comprises the following sub-steps:
substep S341: carrying out image preprocessing on the video monitoring image to obtain a binary edge image;
substep S342: scanning the binary edge image from left to right and from top to bottom to count the number of all rectangles on the binary edge image;
substep S343: judging whether the number of rectangles is greater than a preset rectangle-number threshold; if so, the evaluation result is that a potential monitoring safety hazard exists.
Preferably, the image preprocessing specifically includes: firstly, denoising a video monitoring image through image Gaussian filtering, secondly, obtaining image edge information through Canny edge detection, and finally, connecting unconnected parts on the image edge information through image expansion operation to obtain a binary edge image.
Preferably, the occlusion detection comprises the sub-steps of:
substep S311: carrying out image segmentation on the video monitoring image to obtain a plurality of first image blocks;
substep S312: for each first image block, calculating the sharpness value D(f) using the Brenner function, given in the following equation (1):
D(f) = Σy Σx [f(x+2, y) - f(x, y)]²    (1);
wherein f (x, y) represents the gray value of the pixel point (x, y), and f (x +2, y) represents the gray value of the pixel point (x +2, y);
substep S313: comparing each sharpness value D(f) with a preset sharpness threshold to obtain the occlusion rate of the video monitoring image;
substep S314: judging whether the occlusion rate of the video monitoring image is greater than a preset occlusion-rate threshold; if so, the evaluation result is that a potential monitoring safety hazard exists.
Preferably, the color cast detection comprises the following sub-steps:
substep S331: carrying out image segmentation on the video monitoring image to obtain a plurality of second image blocks with equal areas;
substep S332: converting each second image patch from the RGB color space to the XYZ color space using the following equation (2):
[Equation (2): linear RGB-to-XYZ conversion matrix, provided as an image in the original filing]
substep S333: converting each second image patch from the XYZ color space to the CIE Lab color space using equation (3) below:
L = 116 f(Y/Yn) - 16
a = 500 [f(X/Xn) - f(Y/Yn)]
b = 200 [f(Y/Yn) - f(Z/Zn)]    (3);
in formula (3), L, a, b represent three components in CIE Lab color space, f (t) is a function, and Xn, Yn and Zn are constant values;
substep S334: calculating the variance D of the component a and the component b;
substep S335: judging whether the variance D is greater than a preset variance threshold; if so, determining that the second image block has a color cast, and thereby obtaining the color cast rate of the video monitoring image;
substep S336: judging whether the color cast rate of the video monitoring image is greater than a preset color-cast-rate threshold; if so, the evaluation result is that a potential monitoring safety hazard exists.
Preferably, step S1 acquires video surveillance images of a plurality of cameras, where each camera corresponds to an internal network IP address.
Preferably, step S2 is to distribute the video surveillance images of the cameras to the servers through the internal gateway.
Preferably, step S3 is to perform image quality evaluation on the video surveillance images distributed by the respective servers.
Preferably, the video monitoring quality evaluation method including pan-tilt movement and mosaic detection is based on an OpenCV software library.
(III) advantageous effects
Compared with the prior art, the invention provides a video monitoring quality evaluation method including pan-tilt movement and mosaic detection with the following beneficial effects: occlusion detection on the video monitoring image detects whether the video is blocked, pan-tilt movement detection detects whether the rotation angle of the monitoring probe is abnormal, color cast detection detects whether the video picture has abnormal color cast, and mosaic detection detects whether mosaic artifacts are present. The method thus covers faults that traditional image quality evaluation algorithms cannot detect and achieves higher accuracy in monitoring-quality scenarios.
Drawings
FIG. 1 is a flow chart of the steps of a video surveillance quality assessment method involving pan-tilt movement and mosaic detection according to the present invention;
FIG. 2 is a flowchart illustrating the steps of occlusion detection according to the present invention;
FIG. 3 is a flowchart illustrating the steps of the pan/tilt/zoom detection according to the present invention;
FIG. 4 is a flowchart illustrating the steps of the color cast detection according to the present invention;
FIG. 5 is a flowchart illustrating the steps of mosaic detection according to the present invention;
FIG. 6 is an exemplary diagram of a feature point pair selected by pan/tilt/zoom detection according to the present invention;
FIG. 7 is a diagram of an exemplary mosaic detection scan binary edge image according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a video monitoring quality evaluation method containing pan-tilt movement and mosaic detection, which comprises the following steps:
step S1: and acquiring a video monitoring image. It should be understood that the video surveillance image corresponds to a surveillance video that is recorded from a camera.
Step S2: and transmitting the video monitoring image to a server.
Step S3: the server performs image quality evaluation on the video monitoring image to obtain an evaluation result, where the image quality evaluation comprises occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection.
Specifically, the occlusion detection includes the following substeps:
substep S311: and carrying out image segmentation on the video monitoring image to obtain a plurality of first image blocks. Preferably, the areas of the first image blocks are equal; the larger the number of the first image blocks, the better the corresponding detection effect.
Substep S312: for each first image block, the sharpness value D(f) is calculated using the Brenner function, given in the following equation (1):
D(f) = Σy Σx [f(x+2, y) - f(x, y)]²    (1);
wherein f (x, y) represents the gray value of the pixel (x, y), and f (x +2, y) represents the gray value of the pixel (x +2, y).
The Brenner function, also known as a gradient-filter method, has a small computational cost: it only requires the difference between two pixels that are two positions apart in the x direction.
Substep S313: comparing each sharpness value D(f) with a preset sharpness threshold to obtain the occlusion rate of the video monitoring image.
Generally speaking, the sharper the image, the larger the differences between its pixels. For a first image block, if its sharpness value D(f) is greater than the preset sharpness threshold, the block is considered sharp.
Substep S314: judging whether the occlusion rate of the video monitoring image is greater than a preset occlusion-rate threshold; if so, the evaluation result is that a potential monitoring safety hazard exists. It should be understood that, in occlusion detection, a potential monitoring safety hazard corresponds to the presence of occlusion in the video monitoring image, i.e., occlusion in the surveillance video.
For example, the video surveillance image is segmented into 16 first image blocks, and the sharpness values D(f) of 4 of these blocks fall below the preset sharpness threshold; the occlusion rate of the video surveillance image is then 4/16 = 25%. If the preset occlusion-rate threshold is, for example, 10%, the evaluation result for this video surveillance image is that a potential monitoring safety hazard exists, i.e., the corresponding surveillance video contains occlusion.
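A minimal Python/OpenCV sketch of substeps S311 to S314 is given below; the file name, the 4×4 grid and both thresholds are placeholder assumptions rather than values prescribed by this method:

```python
import cv2
import numpy as np

def brenner(block):
    """Brenner sharpness of a block: sum of squared differences between pixels two columns apart."""
    f = block.astype(np.float64)
    return float(np.sum((f[:, 2:] - f[:, :-2]) ** 2))

def occlusion_rate(gray, grid=(4, 4), sharpness_threshold=1e6):
    """Proportion of grid blocks whose Brenner value falls below the sharpness threshold."""
    h, w = gray.shape
    bh, bw = h // grid[0], w // grid[1]
    blurred = 0
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            if brenner(block) < sharpness_threshold:
                blurred += 1
    return blurred / (grid[0] * grid[1])

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
if occlusion_rate(frame) > 0.10:  # assumed occlusion-rate threshold of 10%
    print("possible occlusion: potential monitoring safety hazard")
```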
The pan-tilt head is the supporting platform of the camera, and the monitoring position of the camera is adjusted by controlling its movement. When the pan-tilt head is damaged, the actual moving distance may not match the commanded moving distance, so the invention performs the corresponding check through pan-tilt movement detection. Specifically, the pan-tilt movement detection includes the following substeps:
substep S321: and selecting a first video monitoring image and a second video monitoring image, wherein the first video monitoring image is a video monitoring image before the pan-tilt moves, and the second video monitoring image is a video monitoring image after the pan-tilt moves.
Substep S322: and detecting each characteristic point of the first video monitoring image and the second video monitoring image by using a FAST algorithm.
The FAST (Features from Accelerated Segment Test) algorithm regards a pixel as a feature point if its gray value differs strongly from those of the surrounding points; for example, a candidate pixel is compared with all pixels on a circle of radius 3 around it.
Substep S323: and describing each feature point of the first video monitoring image and the second video monitoring image to obtain each feature point descriptor.
The feature point description process is as follows: an arbitrary feature point is selected as the circle center and a circle of radius d is drawn around it; pairs of points inside the circle are connected to form feature point pairs. For example, as shown in Fig. 6, 4 feature point pairs P1(A, B), P2(A, B), P3(A, B) and P4(A, B) are selected, and the operation T is defined by the following formula (4):
T(P(A, B)) = 1 if I_A > I_B, and 0 otherwise    (4);
In formula (4), I_A and I_B are the gray values of points A and B.
Respectively carrying out T operation on the selected characteristic point pairs to obtain the following binary strings:
T(P1(A,B))=1
T(P2(A,B))=0
T(P3(A,B))=1
T(P4(A,B))=1
namely, the feature point descriptor corresponds to: 1011.
substep S324: and calculating the distance between two feature points corresponding to the same feature point descriptors of the first video monitoring image and the second video monitoring image.
The distance between two feature points may be the lateral distance along the x-axis or the longitudinal distance along the y-axis. For example, if the feature point descriptor of feature point A(2, 3) on the first video surveillance image is 10110100, and the feature point descriptor of feature point B(12, 3) on the second video surveillance image is also 10110100, then the lateral distance between feature points A and B is 10.
Substep S325: judging whether the distance between the two feature points is equal to the preset pan-tilt movement distance; if not, the evaluation result is that a potential safety hazard exists.
It should be understood that the distance between the two feature points may be regarded as the actual moving distance of the pan/tilt head, and if the distance between the two feature points is not equal to the preset moving distance of the pan/tilt head, the evaluation result indicates that there is a monitoring potential safety hazard, i.e., the rotation angle of the monitoring probe is abnormal.
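A rough sketch of this check is shown below. It substitutes OpenCV's ORB (FAST keypoints combined with a rotated-BRIEF binary descriptor) and Hamming-distance matching for the descriptor and exact-match pairing described above, so it approximates rather than reproduces the method; the file names, expected movement and tolerance are assumptions:

```python
import cv2
import numpy as np

def pan_tilt_shift(img_before, img_after, max_matches=50):
    """Median (dx, dy) displacement between matched keypoints of the two frames."""
    orb = cv2.ORB_create()  # FAST keypoint detection plus a BRIEF-style binary descriptor
    kp1, des1 = orb.detectAndCompute(img_before, None)
    kp2, des2 = orb.detectAndCompute(img_after, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.median(shifts, axis=0)

before = cv2.imread("before_move.jpg", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after_move.jpg", cv2.IMREAD_GRAYSCALE)
dx, dy = pan_tilt_shift(before, after)
expected_dx, tolerance = 10.0, 2.0  # assumed commanded movement (pixels) and tolerance
if abs(dx - expected_dx) > tolerance:
    print("pan-tilt movement abnormal: potential monitoring safety hazard")
```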
In addition, in actual monitoring the camera may face a strong light source during long periods of video recording; the strong light source can age the photosensitive CMOS component of the camera and cause color cast in the video image. Hardware detection, however, may still report the camera as working normally, so color cast detection is introduced for this case. Specifically, the color cast detection includes the following substeps:
substep S331: performing image segmentation on the video monitoring image to obtain a plurality of second image blocks with equal areas. For example, the video surveillance image is divided into 4 equal-area second image blocks. Substep S331 prevents a picture that naturally contains a large pure-color region from biasing the final detection result.
Substep S332: converting each second image patch from the RGB color space to the XYZ color space using the following equation (2):
[Equation (2): linear RGB-to-XYZ conversion matrix, provided as an image in the original filing]
substep S333: converting each second image patch from the XYZ color space to the CIE Lab color space using equation (3) below:
L = 116 f(Y/Yn) - 16
a = 500 [f(X/Xn) - f(Y/Yn)]
b = 200 [f(Y/Yn) - f(Z/Zn)]    (3);
In formula (3), L, a and b are the three components of the CIE Lab color space, f(t) is a function, and Xn, Yn and Zn are constants, which may take the values 95.047, 100.0 and 108.883 respectively. Formulas (2) and (3) convert the image from the RGB color space to the CIE Lab color space, where the color cast condition can be detected well.
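Equations (2) and (3) themselves appear only as images in the filing. Assuming they take the standard forms that the constants 95.047, 100.0 and 108.883 correspond to, namely the sRGB-to-XYZ (D65) linear transform and the standard CIE piecewise definition of f(t), they would read:

```latex
% Assumed standard form of equation (2): linear sRGB -> XYZ (D65) transform
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
  0.4124 & 0.3576 & 0.1805 \\
  0.2126 & 0.7152 & 0.0722 \\
  0.0193 & 0.1192 & 0.9505
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}

% Assumed standard CIE definition of f(t) used in equation (3)
f(t) =
\begin{cases}
  t^{1/3}, & t > (6/29)^{3} \\[4pt]
  \dfrac{1}{3}\left(\dfrac{29}{6}\right)^{2} t + \dfrac{4}{29}, & \text{otherwise}
\end{cases}
```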
Substep S334: calculating the variance D of the a and b components. It should be understood that D is the variance of the sequence formed by the a and b components together: the population mean of that sequence is computed first, and the variance formula is then applied to obtain D.
Substep S335: and judging whether the variance D is larger than a preset variance threshold value, if so, determining that the second image blocks have a color cast phenomenon, and further obtaining the color cast rate of the video monitoring image.
For example, the video surveillance image is divided into 4 second image blocks with equal areas; if the variance D of 2 of these blocks is greater than the preset variance threshold, the color cast rate of the video surveillance image is 50%. In addition, a second image block tends toward red when the mean da of the a component is greater than 0 and toward green when it is less than 0, and tends toward yellow when the mean db of the b component is greater than 0 and toward blue when it is less than 0.
Substep S336: judging whether the color cast rate of the video monitoring image is greater than a preset color-cast-rate threshold; if so, the evaluation result is that a potential monitoring safety hazard exists.
It should be understood that, in color cast detection, an evaluation result indicating a potential monitoring safety hazard means that the video monitoring image has a color cast abnormality.
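The following Python/OpenCV sketch illustrates substeps S331 to S336, assuming that OpenCV's built-in BGR-to-Lab conversion is an acceptable stand-in for formulas (2) and (3) (its 8-bit output offsets a and b by 128, which does not change their variance); the grid size and both thresholds are placeholder assumptions:

```python
import cv2
import numpy as np

def color_cast_rate(bgr, grid=(2, 2), variance_threshold=500.0):
    """Proportion of equal-area blocks whose combined a/b variance in CIE Lab
    exceeds the variance threshold (such blocks are treated as color-cast)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    h, w = lab.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    cast_blocks = 0
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = lab[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            # Sequence formed by the a and b components of this block
            ab = np.concatenate([block[:, :, 1].ravel(), block[:, :, 2].ravel()])
            if np.var(ab) > variance_threshold:
                cast_blocks += 1
    return cast_blocks / (grid[0] * grid[1])

frame = cv2.imread("frame.jpg")
if color_cast_rate(frame) > 0.5:  # assumed color-cast-rate threshold
    print("color cast abnormal: potential monitoring safety hazard")
```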
In addition, when the photosensitive module of the monitoring probe or the transmission line is damaged, continuous irregular square blocks appear on the picture and seriously degrade the monitored image quality; this is the mosaic abnormality. The invention therefore performs the corresponding check through mosaic detection, which includes the following substeps:
substep S341: and carrying out image preprocessing on the video monitoring image to obtain a binary edge image.
In substep S341, the image preprocessing specifically includes: first, denoising the video monitoring image with Gaussian filtering; second, obtaining the image edge information with Canny edge detection; and finally, connecting disconnected parts of the edge information with an image dilation operation to obtain a binary edge image, in which each pixel takes only the value 0 or 1.
Substep S342: the binary edge image is scanned from left to right and from top to bottom to count the number of all rectangles on the binary edge image.
Mosaic abnormality manifests as a large number of discontinuous square blocks, so the binary edge image is scanned from left to right and from top to bottom: if two vertically adjacent pixels are both 1, scanning continues until a 0 pixel is reached, and the lengths and widths are recorded; if they exceed a threshold, the region is considered to contain a rectangle. For example, as shown in Fig. 7, after the first 1-valued pixel (2, 2) is found, scanning proceeds to the right and downward, and continues only when the neighboring pixels (2, 3) and (3, 2) are also 1. After all pixels have been scanned, the number of rectangles on the binary edge image is counted.
Substep S343: judging whether the number of rectangles is greater than a preset rectangle-number threshold; if so, the evaluation result is that a potential monitoring safety hazard exists. It should be understood that, in mosaic detection, a potential monitoring safety hazard means that the video monitoring image has a mosaic abnormality.
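An illustrative sketch of substeps S341 to S343 follows. It keeps the Gaussian filtering, Canny edge detection and dilation described above but replaces the row-by-row rectangle scan with contour-based rectangle counting, so it approximates rather than reproduces the counting rule; the Canny thresholds, minimum side length and rectangle-count threshold are assumptions:

```python
import cv2
import numpy as np

def mosaic_rectangle_count(gray, canny_low=50, canny_high=150, min_side=4):
    """Count roughly rectangular contours in the binary edge image obtained from
    Gaussian denoising, Canny edge detection and dilation."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, canny_low, canny_high)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)  # close small gaps
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rectangles = 0
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        x, y, w, h = cv2.boundingRect(approx)
        if len(approx) == 4 and w >= min_side and h >= min_side:
            rectangles += 1
    return rectangles

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
if mosaic_rectangle_count(frame) > 200:  # assumed rectangle-count threshold
    print("mosaic abnormal: potential monitoring safety hazard")
```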
Step S4: issuing an early warning when the evaluation result indicates a potential safety hazard.
It can be understood that, after the video monitoring image has undergone occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection, an early warning is issued when the evaluation result of at least one of these detections indicates a potential monitoring safety hazard, so as to remind the staff to perform maintenance and repair. Furthermore, video monitoring images and surveillance videos with potential monitoring safety hazards can be stored in a database. When damaged monitoring probes are replaced, the positions whose probes are damaged most often can be identified from the damage frequency, and maintenance and supervision of those positions can be strengthened in a targeted manner.
It should be understood that, in some embodiments, each camera has a built-in positioning module. When the camera has no potential safety hazard, it does not supply power to the positioning module. When the video monitoring image has completed occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection and the evaluation result of at least one of these detections indicates a potential monitoring safety hazard, the camera supplies power to the positioning module, so that the module locates the camera and uploads its position information (i.e., sends it to the monitoring platform); this helps the staff learn the position of the faulty camera in time and maintain it promptly. In addition, after the position information has been uploaded, the camera can automatically cut off its power supply, or stop supplying power to these devices, so that the camera is shut down and electric energy is saved once the hazard has been reported.
By performing occlusion detection on the video monitoring image, the invention detects whether the video is blocked; pan-tilt movement detection detects whether the rotation angle of the monitoring probe is abnormal; color cast detection detects whether the video picture has abnormal color cast; and mosaic detection detects whether mosaic artifacts are present. The invention thus covers faults that traditional image quality evaluation algorithms cannot detect and achieves higher accuracy in monitoring-quality scenarios. In addition, compared with existing image quality evaluation algorithms such as BRISQUE, the method does not require a large number of matrix operations, so the image quality evaluation takes little time.
Preferably, step S1 acquires the video surveillance images of multiple cameras; in the surveillance intranet, each camera corresponds to an intranet IP address, and the current data of a camera can be obtained with the corresponding API function. Step S2 distributes the video surveillance images of the cameras to the servers through an internal gateway using an IP-based load-balancing algorithm, for example an IP hash algorithm: the source IP address of the client is passed through a hash function to obtain a value, that value is taken modulo the size of the server list, and the result is the index of the server the client should access. Step S3 then performs image quality evaluation on the video surveillance images distributed to each server.
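A minimal sketch of this source-IP hashing, with a hypothetical server list, might look like the following; the addresses and the choice of MD5 as the hash function are assumptions:

```python
import hashlib

SERVERS = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]  # hypothetical analysis servers

def pick_server(camera_ip: str) -> str:
    """Hash the camera's intranet IP and take it modulo the server-list size,
    so frames from the same camera are always routed to the same server."""
    digest = hashlib.md5(camera_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(pick_server("10.0.3.42"))
```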
Preferably, the video monitoring quality evaluation method is based on an OpenCV software library.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A video monitoring quality evaluation method containing pan-tilt movement and mosaic detection is characterized by comprising the following steps:
step S1: acquiring a video monitoring image;
step S2: transmitting the video monitoring image to a server;
step S3: the server carries out image quality evaluation on the video monitoring image to obtain an evaluation result, wherein the image quality evaluation comprises occlusion detection, pan-tilt movement detection, color cast detection and mosaic detection;
step S4: and when the evaluation result is that the monitoring potential safety hazard exists, early warning is carried out.
2. The video surveillance quality assessment method according to claim 1, characterized in that: the cradle head movement detection comprises the following substeps:
substep S321: selecting a first video monitoring image and a second video monitoring image, wherein the first video monitoring image is the video monitoring image before the pan-tilt moves, and the second video monitoring image is the video monitoring image after the pan-tilt moves;
substep S322: detecting each characteristic point of the first video monitoring image and the second video monitoring image by using a FAST algorithm;
substep S323: describing each feature point of the first video monitoring image and the second video monitoring image to obtain each feature point descriptor;
substep S324: calculating the distance between two feature points corresponding to the same feature point descriptors of the first video monitoring image and the second video monitoring image;
substep S325: and judging whether the distance between the two feature points is equal to a preset pan-tilt movement distance or not, and if not, judging that the evaluation result is that the potential safety hazard exists.
3. The video surveillance quality assessment method according to claim 2, characterized in that: the mosaic detection comprises the following sub-steps:
substep S341: carrying out image preprocessing on the video monitoring image to obtain a binary edge image;
substep S342: scanning the binary edge image from left to right and from top to bottom to count the number of all rectangles on the binary edge image;
substep S343: and judging whether the number of the rectangles is larger than a preset rectangle number threshold value, if so, judging that the evaluation result is that the monitoring potential safety hazard exists.
4. The video surveillance quality assessment method according to claim 3, characterized in that: the image preprocessing specifically comprises: firstly, denoising the video monitoring image through image Gaussian filtering, secondly, obtaining image edge information through Canny edge detection, and finally, connecting unconnected parts on the image edge information through image expansion operation to obtain the binary edge image.
5. The video surveillance quality assessment method according to claim 4, characterized in that: the occlusion detection comprises the sub-steps of:
substep S311: performing image segmentation on the video monitoring image to obtain a plurality of first image blocks;
substep S312: calculating a sharpness value D(f) for each of the first image blocks using a Brenner function, which is specified in equation (1) below:
D(f) = Σy Σx [f(x+2, y) - f(x, y)]²    (1);
wherein f (x, y) represents the gray value of the pixel point (x, y), and f (x +2, y) represents the gray value of the pixel point (x +2, y);
substep S313: comparing each sharpness value D(f) with a preset sharpness threshold to obtain an occlusion rate of the video monitoring image;
substep S314: and judging whether the shielding rate of the video monitoring image is greater than a preset shielding rate threshold value, if so, judging that the monitoring potential safety hazard exists in the evaluation result.
6. The video surveillance quality assessment method according to claim 5, characterized in that: the color cast detection comprises the following sub-steps:
substep S331: carrying out image segmentation on the video monitoring image to obtain a plurality of second image blocks with equal areas;
substep S332: converting each of the second image patches from the RGB color space to the XYZ color space using the following equation (2):
[Equation (2): linear RGB-to-XYZ conversion matrix, provided as an image in the original claims]
substep S333: converting each of the second image patches from the XYZ color space to a CIE Lab color space using equation (3) below:
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)] (3);
[Definition of the function f(t), provided as an image in the original claims]
in the formula (3), L, a, b represent three components in the CIE Lab color space, f (t) is a function, and Xn, Yn and Zn are constant values;
substep S334: calculating the variance D of the a component and the b component;
substep S335: judging whether the variance D is larger than a preset variance threshold value, if so, determining that the second image blocks have a color cast phenomenon, and further obtaining the color cast rate of the video monitoring image;
substep S336: and judging whether the color cast rate of the video monitoring image is greater than a preset color cast rate threshold value, if so, judging that the monitoring potential safety hazard exists in the evaluation result.
7. The video surveillance quality assessment method according to claim 6, characterized in that: the step S1 is specifically to acquire video monitoring images of a plurality of cameras, where each camera corresponds to each internal network ip address.
8. The video surveillance quality assessment method according to claim 7, characterized in that: the step S2 is specifically to distribute the video surveillance images of the cameras to the servers through an interior gateway.
9. The video surveillance quality assessment method according to claim 8, characterized in that: the step S3 is specifically to perform image quality evaluation on the video surveillance images distributed by the servers.
10. The video surveillance quality assessment method according to claim 9, characterized in that: the video monitoring quality evaluation method is based on an OpenCV software library.
CN202110884012.5A 2021-08-03 2021-08-03 Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection Pending CN113962924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110884012.5A CN113962924A (en) 2021-08-03 2021-08-03 Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110884012.5A CN113962924A (en) 2021-08-03 2021-08-03 Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection

Publications (1)

Publication Number Publication Date
CN113962924A true CN113962924A (en) 2022-01-21

Family

ID=79460488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110884012.5A Pending CN113962924A (en) 2021-08-03 2021-08-03 Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection

Country Status (1)

Country Link
CN (1) CN113962924A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278217A (en) * 2022-07-21 2022-11-01 深圳市震有软件科技有限公司 Image picture detection method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN107272637B (en) A kind of video monitoring system fault self-checking self- recoverage control system and method
CN109089160B (en) Video analysis system and method for food processing violation behaviors of restaurants in colleges and universities
EP3104327B1 (en) Anomalous pixel detection
CN105957088B (en) Transformer composite insulator casing monitoring method and system based on computer vision
CN109559304A (en) Image quality online evaluation method, apparatus and application for industrial vision detection
CN110443800B (en) Video image quality evaluation method
JP3486229B2 (en) Image change detection device
CN112364740A (en) Unmanned machine room monitoring method and system based on computer vision
CN113962924A (en) Video monitoring quality evaluation method containing pan-tilt movement and mosaic detection
CN112911221B (en) Remote live-action storage supervision system based on 5G and VR videos
CN115965889A (en) Video quality assessment data processing method, device and equipment
CN113947746A (en) Distribution network safety quality control method based on feedback mechanism supervision
CN113793294A (en) Video monitoring quality evaluation method with jitter and electromagnetic interference detection
CN110659627A (en) Intelligent video monitoring method based on video segmentation
KR101917622B1 (en) Leakage detection method using background modeling method
CN112560574A (en) River black water discharge detection method and recognition system applying same
CN112906488A (en) Security protection video quality evaluation system based on artificial intelligence
CN115953726B (en) Machine vision container face damage detection method and system
CN116704440A (en) Intelligent comprehensive acquisition and analysis system based on big data
CN115880365A (en) Double-station automatic screw screwing detection method, system and device
CN113191336B (en) Electric power hidden danger identification method and system based on image identification
CN115684853A (en) Unmanned aerial vehicle power transmission line fault detection method and system with ultraviolet imager
CN114677667A (en) Transformer substation electrical equipment infrared fault identification method based on deep learning
CN113052878A (en) Multi-path high-altitude parabolic detection method and system for edge equipment in security system
KR102336433B1 (en) Yield management system and method using camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhan Guili

Inventor after: Li Ximing

Inventor after: Lu Wenhuan

Inventor after: Lin Qunxiong

Inventor after: Sun Quanzhong

Inventor before: Zhan Guili

Inventor before: Li Ximing

Inventor before: Lu Wenhuan
