CN115985007A - 5G recorder video inspection method and system based on low power consumption - Google Patents
- Publication number
- CN115985007A CN115985007A CN202211555189.1A CN202211555189A CN115985007A CN 115985007 A CN115985007 A CN 115985007A CN 202211555189 A CN202211555189 A CN 202211555189A CN 115985007 A CN115985007 A CN 115985007A
- Authority
- CN
- China
- Prior art keywords
- module
- video
- value
- definition
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a low-power-consumption 5G recorder video inspection method and system. The system comprises an acceleration sensor module, a video acquisition module, a motion amplitude detection module, a video definition detection module, a video acquisition quality evaluation module, a face detection module, a 5G image evaluation and sending module, a server face blacklist comparison module, and a terminal alarm module, all of which cooperate with one another. Finally, the terminal alarm module associates YUV video frames with blacklist personnel names by timestamp and displays the blacklist personnel and the video frame images in a popup window. The method and system sample video frame regions for blur detection according to the amount of motion measured by the acceleration sensor, reduce the power consumption of face detection computation by discarding blurred images, and adjust the sending frequency of face comparison images according to the 5G signal strength, thereby reducing the extra power consumption caused by poor signal.
Description
Technical Field
The invention relates to the field of recorder video inspection, in particular to a 5G recorder video inspection method and system based on low power consumption.
Background
Currently, recorders used for patrol inspection analyze video in real time. Face detection and face comparison involve a large amount of computation, which drains the recorder's battery and shortens its standby time. A new approach is needed to implement a low-power real-time analysis scheme.
Publication CN114500780A discloses an information acquisition method based on a recorder, and a recorder comprising a recorder body: a lighting lamp, a high-definition camera and a light supplement lamp are installed on one side of the outer wall of the recorder body; a display screen, operation buttons and a fingerprint recognizer are arranged on the other side of the outer wall; and a recording module, an intercom module, a face recognition system, a main control chip, a GPS positioning module, a wireless module and a storage module are arranged inside the recorder body. That method makes it convenient to judge in advance whether staff on duty are in violation, raises a positioning alarm when persons of interest appear at a detection position, improves the personal safety of on-duty staff, and lets on-duty staff quickly and accurately locate persons of interest via the supervision platform during daily patrol, reducing their burden. However, its face comparison consumes considerable power, leaving substantial room for improvement.
Disclosure of Invention
The invention aims to provide a low-power consumption 5G recorder video inspection method and system, which are used for solving the problems.
In order to achieve the purpose, the invention provides the following technical scheme:
a 5G recorder video inspection system based on low power consumption, comprising:
an acceleration sensor module: the acceleration sensor module sends the obtained acceleration fragment data and the timestamp to the motion amplitude detection module and the video acquisition quality evaluation module.
The video acquisition module: a video acquisition module of the recorder acquires a video image of the real world, a YUV video frame and a timestamp are obtained, the YUV video frame and the timestamp are sent to a face detection module, a Y component of the YUV video frame is sent to a video definition detection module, and the timestamp of the YUV video frame generated each time is notified to an acceleration sensor module.
The motion amplitude detection module: the motion amplitude detection module receives the acceleration fragment data and timestamp from the acceleration sensor module, and sends the calculated motion amplitude value and timestamp of the video frame fragment to the video definition detection module.
Video definition detection module: the video definition detection module takes the calculated definition value of the block as the definition value of the whole picture, and sends the definition value and the timestamp to the video acquisition quality evaluation module.
The video acquisition quality evaluation module: the video acquisition quality evaluation module receives the motion amplitude value and timestamp from the motion amplitude detection module, and the definition value and timestamp from the video definition detection module. It aligns the motion amplitude value and definition value of each video frame by timestamp, and sends the calculated estimate of the definition value, with its timestamp, to the face detection module.
The face detection module: the face detection module receives the YUV video frame and timestamp from the video acquisition module, and the estimated definition value and timestamp from the video acquisition quality evaluation module. The face detection module judges whether to carry out face detection according to the timestamp-associated YUV video frame and the estimated definition value, and sends the face image, confidence and timestamp to the 5G image evaluation and sending module.
5G image evaluation and sending module: the 5G image evaluation and sending module receives the face image, confidence and timestamp from the face detection module, and according to the 5G signal strength it acquires, sends the face image with the highest confidence and its timestamp to the server face blacklist comparison module.
The server face blacklist comparison module: the server face blacklist comparison module receives the face image and timestamp from the 5G image evaluation and sending module, and compares the face image with the faces in the blacklist library; if the similarity exceeds 80%, it notifies the terminal alarm module of the blacklist person's name and the timestamp.
A terminal alarm module: the terminal alarm module receives the blacklist person's name and timestamp from the server face blacklist comparison module, and the YUV video frame and timestamp from the video acquisition module. The terminal alarm module associates the YUV video frame with the blacklist person's name by timestamp and displays the blacklist person and the video frame image in a popup window.
The invention also comprises a 5G recorder video inspection method based on low power consumption, the 5G recorder video inspection system based on low power consumption is adopted, and the specific steps comprise:
s1: the acceleration sensor module sends the obtained acceleration fragment data and the timestamp to the motion amplitude detection module and the video acquisition quality evaluation module;
s2: and a video acquisition module of the recorder acquires a real-world video image to obtain a YUV video frame and a time stamp. And the video acquisition module sends the YUV video frames and the time stamps to the face detection module. The Y component of a YUV video frame of the video acquisition module is sent to the video definition detection module, and the acceleration sensor module is informed of the timestamp of the YUV video frame generated by the video acquisition module each time;
s3: and the motion amplitude detection module receives the acceleration fragment data and the time stamp of the acceleration sensor module. Sending the motion amplitude value and the timestamp of the video frame fragment obtained through calculation to a video definition detection module;
s4: the video definition detection module takes the calculated definition value of the block as the definition value of the whole picture and sends the definition value and the timestamp to the video acquisition quality evaluation module;
s5: the video acquisition quality evaluation module receives the motion amplitude value and the timestamp of the motion amplitude detection module, and the video acquisition quality evaluation module receives the definition value and the timestamp of the video definition detection module. The video acquisition quality evaluation module sends the calculated estimation value of the definition value and the timestamp to the face detection module according to the motion amplitude value and the definition value of the timestamp alignment video frame;
s6: and the face detection module receives the YUV video frame and the timestamp of the video acquisition module. And the face detection module receives the estimated value and the timestamp of the definition value of the video acquisition quality evaluation module. The face detection module judges whether face detection is carried out according to the estimated value of the time stamp-associated YUV video frame and the definition value, and sends a face image, the visibility and the time stamp to the 5G image estimation sending module;
s7: the 5G image evaluation and sending module receives the face image and the visibility and the timestamp of the face detection module, and the 5G image evaluation and sending module sends the face image with the highest visibility and the timestamp to the server face blacklist comparison module according to the 5G signal intensity obtained by the 5G image evaluation and sending module;
s8: and the server face blacklist comparison module receives the face image and the timestamp of the 5G image evaluation and transmission module. The server face blacklist comparison module compares the face image with the face of the blacklist library, and if the similarity exceeds 80%, the blacklist person name and the time error are notified to the terminal alarm module;
s9: and a terminal alarm module of the recorder receives the name of the blacklist personnel of the server face blacklist comparison module, and the name is wrong with time. And the terminal alarm module receives the YUV video frame and the timestamp of the video acquisition module. And the terminal alarm module associates the YUV video frame with the name of the blacklist personnel according to the timestamp, and displays the blacklist personnel and the video frame image through a pop-up window.
The step S1 includes:
an acceleration sensor module of the recorder periodically acquires acceleration values of dimensions in the x direction, the y direction and the z direction, the acceleration sensor module receives a video frame generation notice and a video time stamp of a video acquisition module, and the acceleration sensor module divides the acceleration values of the dimensions in the x direction, the y direction and the z direction according to the video time stamp. The acceleration sensor module sends the acceleration slicing data and the timestamp to the motion amplitude detection module and the video acquisition quality evaluation module.
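The slicing of acceleration samples by video timestamp can be sketched as follows. This is a minimal illustration, assuming acceleration samples arrive as (time, ax, ay, az) tuples and that each video timestamp opens a new fragment; the data layout and function name are assumptions, not specified by the patent text.

```python
from bisect import bisect_left

def slice_acceleration(samples, frame_timestamps):
    """Split (t, ax, ay, az) samples into per-video-frame fragments.

    Each fragment is labelled with the timestamp of the video frame
    that opens it; the last fragment runs to the end of the samples.
    """
    samples = sorted(samples)                  # order by sample time
    times = [s[0] for s in samples]
    bounds = list(frame_timestamps) + [float("inf")]
    fragments = []
    for start, end in zip(bounds, bounds[1:]):
        lo = bisect_left(times, start)
        hi = bisect_left(times, end)
        fragments.append((start, samples[lo:hi]))  # (frame ts, fragment)
    return fragments
```

Each fragment, together with its frame timestamp, would then be handed to the motion amplitude detection module and the video acquisition quality evaluation module.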
The step S3 includes:
3.1 the motion amplitude detection module receives the acceleration slicing data and the time stamp of the acceleration sensor module,
3.2 The motion amplitude detection module performs a Fast Fourier Transform (FFT) on each of the three dimensions of the acceleration fragment, over the n sampling points of the x, y and z dimensions of each fragment, obtaining complex values at n frequency points per dimension.
3.3 For each of the n frequency points in each of the x, y and z dimensions, the motion amplitude detection module computes the amplitude of the frequency point as the square root of the sum of the squared real part and the squared imaginary part of its complex value.
3.4 The motion amplitude detection module sums the amplitudes of the n frequency points in each of the x, y and z dimensions to obtain an accumulated value, and divides the accumulated value by 3n to obtain the motion amplitude value of the video frame fragment.
And 3.5, the motion amplitude detection module sends the motion amplitude value and the timestamp of the video frame fragment to the video definition detection module.
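Steps 3.2 through 3.4 can be sketched numerically as below; the (n, 3) array layout and the function name are assumptions made for illustration.

```python
import numpy as np

def motion_amplitude(fragment):
    """Motion amplitude of one video-frame fragment.

    fragment: n samples of (ax, ay, az), i.e. an (n, 3) array.
    Per axis: an n-point FFT yields n complex frequency bins; the
    amplitude of each bin is sqrt(re^2 + im^2).  All 3n amplitudes
    are summed and divided by 3n.
    """
    frag = np.asarray(fragment, dtype=float)   # shape (n, 3)
    n = frag.shape[0]
    spectra = np.fft.fft(frag, axis=0)         # n complex bins per axis
    amplitudes = np.abs(spectra)               # sqrt(re^2 + im^2)
    return float(amplitudes.sum()) / (3 * n)
```

For a constant-acceleration fragment all spectral energy sits in the DC bin, so the result equals the constant itself, which gives a quick sanity check.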
The step S4 includes:
4.1: and the video definition detection module receives the motion amplitude value and the timestamp of the motion amplitude detection module.
4.2: the video definition detection module receives a video frame Y component used by the video acquisition module.
4.3: and the video definition detection module associates the Y component and the motion amplitude value of the video frame according to the time stamp.
4.4: the video definition detection module performs sampling block check to check whether the image is fuzzy. And when the number of the detection blocks is less than one, the number of the detection blocks is adjusted to be one, and when the number of the detection blocks is greater than the maximum value of the number of the detection areas, the number of the detection blocks is adjusted to be the maximum value of the number of the detection areas.
4.5: the video definition detection module sets the fixed size of the block, generates the central position of the block by using a random algorithm, and regenerates the central position when the random algorithm blocks are overlapped.
4.6: and the video definition detection module performs convolution on each pixel of each block by using a Laplacian operator, and squares the convolution value to obtain a non-negative value. And the video definition detection module accumulates the non-negative values to obtain the definition of the block.
4.7: the video definition detection module takes the definition value of the block as the definition value of the whole video frame.
4.8: and the video definition detection module sends the definition value and the timestamp to the video acquisition quality evaluation module.
The step S5 includes:
5.1: the video acquisition quality evaluation module receives the motion amplitude value and the time stamp of the motion amplitude detection module,
5.2: and the video acquisition quality evaluation module receives the definition value and the timestamp of the video definition detection module.
5.3: and the video acquisition quality evaluation module aligns the motion amplitude value and the definition value of the video frame according to the time stamp. The larger the motion amplitude value is, the lower the influence on the sharpness value is, the linear relation between the sharpness value and the motion amplitude value is established by using a linear regression method, and the sharpness value = b/motion amplitude value + a.
5.4: the video acquisition quality evaluation module accumulates video frames to exceed 30 frames, uses least square estimator to calculate parameter b and a of linear relation between definition value and motion amplitude value, and the video acquisition quality evaluation module calculates calculation value of definition value according to motion amplitude value, parameter b and a.
5.5: the video collection quality evaluation module takes the definition value of the received video definition detection module as an observed value of a Kalman filter,
5.6: the video acquisition quality evaluation module calculates the prediction value of the Kalman filter and passes the estimation value of the definition value. And the video acquisition quality evaluation module removes random sampling introduction deviation of the definition detection module through a Kalman filter.
5.7: and the video acquisition quality evaluation module sends the evaluation value of the definition value and the timestamp to the face detection module.
The step S6 includes:
6.1: and the face detection module receives the YUV video frame and the timestamp of the video acquisition module.
6.2: and the face detection module receives the estimated value and the timestamp of the definition value of the video acquisition quality evaluation module.
6.3: and the face detection module associates the YUV video frame with the estimated value of the definition value according to the time stamp.
6.4: the face detection module judges that the estimation value of the definition value of the YUV video frame exceeds a threshold value T, face detection is carried out, if a face is detected, the face position and the face visibility are obtained, a face image is obtained by intercepting according to the face position, and the face detection module sends the face image, the visibility and a timestamp to the 5G image estimation sending module.
6.5: and if the face detection module judges that the estimated value of the definition value of the YUV video frame is less than or equal to the threshold value T, the face detection is not carried out, so that the power consumption is reduced.
The step S7 includes:
7.1: and the 5G image evaluation and sending module receives the face image, the feasibility and the time stamp of the face detection module.
7.2: the 5G image evaluation sending module obtains the 5G signal strength, and the 5G image evaluation sending module multiplies the 5G signal strength by an adjusting coefficient to obtain the sending frequency per second.
7.3: and the 5G image evaluation sending module performs reciprocal operation according to the sending frequency per second to obtain a sending period.
7.4: the 5G image evaluation and transmission module judges that the face images are sorted from high to low according to the degree of availability in each transmission period.
7.5: and the 5G image evaluation and sending module only sends the face image with the highest degree of the feasibility and the timestamp to the server face blacklist comparison module.
The invention provides a low-power consumption-based 5G recorder video inspection method and system, which have the beneficial effects that:
The 5G recorder video inspection method and system based on low power consumption sample video frame regions for blur detection according to the amount of motion measured by the acceleration sensor, reduce the power consumption of face detection computation by discarding blurred images, and adjust the sending frequency of face comparison images according to the 5G signal strength, reducing the extra power consumption caused by poor signal.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of a 5G recorder video inspection system based on low power consumption.
Detailed Description
The technical solutions of the present invention are further described below with reference to the drawings and examples, and the following examples are only used to more clearly illustrate the technical solutions of the present invention, and therefore are only used as examples, and the protection scope of the present invention is not limited thereby.
As shown in fig. 1, a 5G recorder video inspection system based on low power consumption includes:
the system comprises an acceleration sensor module 1, a video acquisition module 2, a motion amplitude detection module 3, a video definition detection module 4, a video acquisition quality evaluation module 6, a face detection module 7,5G image evaluation sending module 8, a server face blacklist comparison module 9 and a terminal alarm module 10.
Acceleration sensor module 1 of recorder: the acceleration sensor module 1 periodically (for example, 1000 samples per second) acquires acceleration values of dimensions in the x direction, the y direction and the z direction, the acceleration sensor module 1 receives a video frame generation notification and a video time stamp of the video acquisition module 2, and the acceleration sensor module 1 divides the acceleration values of the dimensions in the x direction, the y direction and the z direction according to the video time stamp. The acceleration sensor module 1 sends the acceleration slicing data and the timestamp to the motion amplitude detection module 3 and the video acquisition quality evaluation module 6.
Video acquisition module 2 of record appearance: and a video acquisition module 2 of the recorder acquires a real-world video image to obtain a YUV video frame and a time stamp. The video acquisition module 2 sends the YUV video frames and the time stamp to the face detection module 7. The Y component of the YUV video frame of the video acquisition module 2 is sent to the video definition detection module 4, and the acceleration sensor module 1 is informed of the timestamp of the YUV video frame generated by the video acquisition module 2 each time.
Motion amplitude detection module 3 of the recorder: the motion amplitude detection module 3 receives the acceleration fragment data and timestamp from the acceleration sensor module 1. The motion amplitude detection module 3 performs a Fast Fourier Transform (FFT) on each of the three dimensions of the acceleration fragment, over the n sampling points of the x, y and z dimensions of each fragment, obtaining complex values at n frequency points per dimension. For each of the n frequency points in each of the x, y and z dimensions, it computes the amplitude of the frequency point as the square root of the sum of the squared real part and the squared imaginary part of its complex value; it then sums the amplitudes of the n frequency points in each of the x, y and z dimensions to obtain an accumulated value, and divides the accumulated value by 3n to obtain the motion amplitude value of the video frame fragment. The motion amplitude detection module 3 sends the motion amplitude value and timestamp of the video frame fragment to the video definition detection module 4.
Video definition detection module 4 of the recorder: the video definition detection module 4 receives the motion amplitude value and timestamp from the motion amplitude detection module 3, and receives the Y component of the video frame from the video acquisition module 2. The video definition detection module 4 associates the Y component of the video frame with the motion amplitude value by timestamp, and performs a sampling block check to determine whether the image is blurred. The larger the motion amplitude value, the more likely the image is blurred, so the video definition detection module 4 reduces the number of detection blocks: the number of detection blocks is a preset detection coefficient divided by the motion amplitude value; when the number of detection blocks is less than one, it is set to one, and when it exceeds the maximum number of detection regions, it is set to that maximum. The video definition detection module 4 sets a fixed block size and generates each block's center position with a random algorithm, regenerating the center position when randomly generated blocks overlap. The video definition detection module 4 convolves each pixel of each block with a Laplacian operator and squares the convolution value to obtain a non-negative value, then accumulates the non-negative values to obtain the definition of the block. The video definition detection module 4 takes the definition value of the blocks as the definition value of the whole picture, and sends the definition value and timestamp to the video acquisition quality evaluation module 6.
Video acquisition quality evaluation module 6 of the recorder: the video acquisition quality evaluation module 6 receives the motion amplitude value and timestamp from the motion amplitude detection module 3, and the definition value and timestamp from the video definition detection module 4. The video acquisition quality evaluation module 6 aligns the motion amplitude value and definition value of each video frame by timestamp. The larger the motion amplitude value, the lower the definition value; a linear relation between the definition value and the reciprocal of the motion amplitude value is established by linear regression: definition value = b / motion amplitude value + a. After accumulating more than 30 video frames, the video acquisition quality evaluation module 6 uses a least-squares estimator to calculate the parameters b and a of this linear relation, and then calculates a predicted definition value from the motion amplitude value and the parameters b and a. The video acquisition quality evaluation module 6 takes the definition value received from the video definition detection module 4 as the observed value of a Kalman filter and computes the Kalman filter's prediction, obtaining the estimated definition value; through the Kalman filter, it removes the deviation introduced by the random sampling of the definition detection module 4. The video acquisition quality evaluation module 6 sends the estimated definition value and timestamp to the face detection module 7.
Face detection module 7 of the recorder: the face detection module 7 receives the YUV video frame and timestamp from the video acquisition module 2, and the estimated definition value and timestamp from the video acquisition quality evaluation module 6. The face detection module 7 associates the YUV video frame with the estimated definition value by timestamp. If the face detection module 7 judges that the estimated definition value of the YUV video frame exceeds the threshold T, it performs face detection; if a face is detected, it obtains the face position and face confidence, crops the face image according to the face position, and sends the face image, confidence and timestamp to the 5G image evaluation and sending module 8. If the face detection module 7 judges that the estimated definition value of the YUV video frame is less than or equal to the threshold T, it skips face detection, reducing power consumption.
5G image evaluation and sending module 8 of the recorder: the 5G image evaluation and sending module 8 receives the face image, confidence and timestamp from the face detection module 7. The 5G image evaluation and sending module 8 obtains the 5G signal strength and multiplies it by an adjusting coefficient to obtain the per-second sending frequency, then takes the reciprocal of the per-second sending frequency to obtain the sending period. Within each sending period, the 5G image evaluation and sending module 8 sorts the face images by confidence from high to low and sends only the face image with the highest confidence, and its timestamp, to the server face blacklist comparison module 9.
The 5G image evaluation and sending module 8 adjusts the sending frequency according to the 5G signal strength, which effectively mitigates the increased transmit power, and hence increased overall power consumption, that the 5G module would otherwise incur when the 5G signal is poor.
The server face blacklist comparison module 9: the server face blacklist comparison module 9 receives the face image and timestamp from the 5G image evaluation and sending module 8, and compares the face image with the faces in the blacklist library; if the similarity exceeds 80%, it notifies the terminal alarm module 10 of the blacklist person's name and the timestamp.
The invention further provides a low-power-consumption 5G recorder video inspection method using the above low-power-consumption 5G recorder video inspection system, comprising the following steps:
S1: The acceleration sensor module 1 of the recorder periodically (for example, 1000 samples per second) acquires acceleration values in the x, y and z dimensions. The acceleration sensor module 1 receives the video-frame generation notification and the video timestamp from the video acquisition module 2 and slices the x, y and z acceleration values according to the video timestamps. The acceleration sensor module 1 sends the acceleration slice data and the timestamp to the motion amplitude detection module 3 and the video acquisition quality evaluation module 6.
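A minimal sketch of the slicing in S1, assuming timestamped `(t, ax, ay, az)` samples at the 1000 Hz rate given above, and assuming (this convention is not stated in the patent) that each video timestamp closes the slice preceding it; function and field names are illustrative:

```python
def slice_by_frames(samples, frame_times):
    """Group time-ordered (t, ax, ay, az) acceleration samples into
    per-video-frame slices.

    frame_times: video timestamps, each of which closes the current
    slice (an assumed convention, not from the patent).
    """
    slices, current, idx = [], [], 0
    for t, ax, ay, az in samples:
        # close any finished slices whose boundary timestamp has passed
        while idx < len(frame_times) and t >= frame_times[idx]:
            slices.append(current)
            current = []
            idx += 1
        current.append((t, ax, ay, az))
    slices.append(current)  # samples after the last boundary
    return slices
```

For example, six samples with frame timestamps at t = 2 and t = 4 yield three slices of two samples each.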
S2: The video acquisition module 2 of the recorder captures real-world video to obtain a YUV video frame and a timestamp. The video acquisition module 2 sends the YUV video frame and the timestamp to the face detection module 7, sends the Y component of the YUV video frame to the video definition detection module 4, and notifies the acceleration sensor module 1 of the timestamp of each generated YUV video frame.
S3: The motion amplitude detection module 3 of the recorder calculates the motion amplitude value of each video-frame slice: it receives the acceleration slice data and the timestamp from the acceleration sensor module 1 and sends the calculated motion amplitude value and timestamp of the video-frame slice to the video definition detection module 4.
3.1 The motion amplitude detection module 3 receives the acceleration slice data and the timestamp from the acceleration sensor module 1.
3.2 The motion amplitude detection module 3 applies a Fast Fourier Transform (FFT) to the n sampling points of each of the x, y and z dimensions of the acceleration slice, obtaining n complex-valued frequency points per dimension.
3.3 The motion amplitude detection module 3 computes the amplitude of each of the n frequency points in the x, y and z dimensions as the square root of the sum of the squared real part and the squared imaginary part of its complex value.
3.4 The motion amplitude detection module 3 sums the amplitudes of the n frequency points across the x, y and z dimensions and divides the accumulated value by 3n to obtain the motion amplitude value of the video-frame slice.
3.5 the motion amplitude detection module 3 sends the motion amplitude value and the timestamp of the video frame fragment to the video definition detection module 4.
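Steps 3.2–3.4 can be sketched as follows; a plain DFT stands in for the FFT here to keep the code dependency-free, and the function name is illustrative:

```python
import cmath

def motion_amplitude(ax, ay, az):
    """Motion amplitude of one video-frame slice (steps 3.2-3.4).

    ax, ay, az: n acceleration samples per dimension. Returns the mean
    spectral magnitude over all 3n frequency points.
    """
    n = len(ax)
    total = 0.0
    for axis in (ax, ay, az):
        for k in range(n):
            # step 3.2: one complex frequency point of the transform
            c = sum(axis[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
            # step 3.3: amplitude = sqrt(re^2 + im^2)
            total += abs(c)
    return total / (3 * n)  # step 3.4
```

A stationary recorder (all-zero samples) yields amplitude 0, while oscillating samples yield a positive amplitude, matching the intended use as a shake indicator.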
S4: The video definition detection module 4 of the recorder detects the definition value of each video frame: it takes the calculated definition value of the sampled blocks as the definition value of the whole frame and sends the definition value and the timestamp to the video acquisition quality evaluation module 6.
4.1: the video definition detection module 4 receives the motion amplitude value and the timestamp of the motion amplitude detection module 3.
4.2: The video definition detection module 4 receives the Y component of the video frame from the video acquisition module 2.
4.3: The video definition detection module 4 associates the Y component of the video frame with the motion amplitude value according to the timestamp.
4.4: The video definition detection module 4 checks sampled blocks to determine whether the image is blurred. When the motion amplitude value is large, the image is considered more likely to be blurred and the module reduces the number of detection blocks: the number of detection blocks equals a preset detection coefficient divided by the motion amplitude value, clamped to at least one and at most the maximum number of detection areas.
4.5: The video definition detection module 4 uses a fixed block size and generates each block's center position with a random algorithm, regenerating a center position whenever blocks overlap.
4.6: The video definition detection module 4 convolves each pixel of each block with a Laplacian operator and squares the convolved value to obtain a non-negative value; it accumulates these non-negative values to obtain the definition of the block.
4.7: The video definition detection module 4 takes the definition value of the blocks as the definition value of the whole video frame.
4.8: the video definition detection module 4 sends the definition value and the timestamp to the video acquisition quality evaluation module 6.
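Steps 4.4 and 4.6–4.7 can be sketched as follows. The 4-neighbour Laplacian kernel is one common choice (the patent does not fix the kernel), border handling is omitted for brevity, and all names are illustrative:

```python
LAPLACIAN = ((0, 1, 0),
             (1, -4, 1),
             (0, 1, 0))  # 4-neighbour Laplacian kernel (assumed variant)

def num_detection_blocks(detect_coeff, motion_amplitude, max_blocks):
    """Step 4.4: fewer blocks when motion (hence likely blur) is high,
    clamped to [1, max_blocks]."""
    n = int(detect_coeff / motion_amplitude)
    return max(1, min(n, max_blocks))

def block_definition(y, cx, cy, size):
    """Steps 4.6-4.7: sum of squared Laplacian responses in one block.

    y: 2-D list of luma (Y) values; (cx, cy): block centre; size: edge
    length of the block.
    """
    half = size // 2
    total = 0
    for r in range(cy - half, cy + half):
        for c in range(cx - half, cx + half):
            v = sum(LAPLACIAN[i][j] * y[r - 1 + i][c - 1 + j]
                    for i in range(3) for j in range(3))
            total += v * v  # squared -> non-negative
    return total
```

A flat (blurred) block scores 0, while a block spanning a sharp edge scores high, which is why the accumulated value serves as a definition measure.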
S5: The video acquisition quality evaluation module 6 of the recorder evaluates video-frame quality and generates an estimated definition value: it receives the motion amplitude value and timestamp from the motion amplitude detection module 3 and the definition value and timestamp from the video definition detection module 4, aligns them by timestamp, and sends the calculated estimated definition value and the timestamp to the face detection module 7.
5.1: The video acquisition quality evaluation module 6 receives the motion amplitude value and the timestamp from the motion amplitude detection module 3.
5.2: the video collection quality evaluation module 6 receives the definition value and the timestamp of the video definition detection module 4.
5.3: The video acquisition quality evaluation module 6 aligns the motion amplitude value and the definition value of each video frame according to the timestamp. The larger the motion amplitude value, the lower the definition value; a linear relation between the two is established by linear regression: definition value = b / motion amplitude value + a.
5.4: After accumulating more than 30 video frames, the video acquisition quality evaluation module 6 uses a least-squares estimator to compute the parameters b and a of the linear relation between the definition value and the motion amplitude value, and then calculates a predicted definition value from the motion amplitude value and the parameters b and a.
5.5: The video acquisition quality evaluation module 6 uses the definition value received from the video definition detection module 4 as the observation of a Kalman filter.
5.6: The video acquisition quality evaluation module 6 uses the predicted definition value from step 5.4 as the Kalman filter's prediction and obtains the estimated definition value from the Kalman update; the Kalman filter removes the deviation introduced by the random block sampling of the video definition detection module 4.
5.7: The video acquisition quality evaluation module 6 sends the estimated definition value and the timestamp to the face detection module 7.
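Steps 5.3–5.6 combine a least-squares fit with a scalar Kalman filter. A minimal sketch follows; the noise variances `q` and `r` are assumed values (the patent does not specify them), and all names are illustrative:

```python
def fit_inverse_model(amplitudes, definitions):
    """Steps 5.3-5.4: least-squares fit of definition = b/amplitude + a.

    Substituting x = 1/amplitude makes the model linear, so the
    closed-form slope/intercept formulas apply directly.
    """
    xs = [1.0 / m for m in amplitudes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(definitions) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, definitions))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

class ScalarKalman:
    """Steps 5.5-5.6: the model value b/amplitude + a is the prediction,
    the measured definition value the observation."""
    def __init__(self, p=1.0, q=0.01, r=0.5):
        self.p, self.q, self.r = p, q, r  # covariance and noise variances

    def estimate(self, predicted, observed):
        p = self.p + self.q               # predict step
        k = p / (p + self.r)              # Kalman gain
        self.p = (1 - k) * p              # update covariance
        return predicted + k * (observed - predicted)
```

On exact data generated by definition = 2/amplitude + 3 the fit recovers a = 3, b = 2, and each Kalman estimate lies between the model prediction and the noisy observation.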
S6: The face detection module 7 of the recorder performs face detection on video frames: it receives the YUV video frame and timestamp from the video acquisition module 2 and the estimated definition value and timestamp from the video acquisition quality evaluation module 6, associates them by timestamp to decide whether to perform face detection, and sends the face image, the visibility and the timestamp to the 5G image evaluation and sending module 8.
6.1: the face detection module 7 receives the YUV video frame and the timestamp of the video acquisition module 2.
6.2: the face detection module 7 receives the estimated value of the sharpness value and the timestamp of the video acquisition quality evaluation module 6.
6.3: and the face detection module 7 associates the YUV video frame with the estimated value of the definition value according to the time stamp.
6.4: If the face detection module 7 determines that the estimated definition value of the YUV video frame exceeds the threshold T, it performs face detection; if a face is detected, it obtains the face position and face visibility, crops the face image according to the face position, and sends the face image, the visibility and the timestamp to the 5G image evaluation and sending module 8.
6.5: If the face detection module 7 determines that the estimated definition value of the YUV video frame is less than or equal to the threshold T, it does not perform face detection, which reduces power consumption.
S7: The 5G image evaluation and sending module 8 of the recorder sends face images prioritized by image quality: it receives the face image, the visibility and the timestamp from the face detection module 7, adjusts the sending frequency according to the acquired 5G signal strength, and sends the face image with the highest visibility and its timestamp to the server face blacklist comparison module 9.
7.1: The 5G image evaluation and sending module 8 receives the face image, the visibility and the timestamp from the face detection module 7.
7.2: The 5G image evaluation and sending module 8 acquires the 5G signal strength and multiplies it by an adjustment coefficient to obtain the per-second sending frequency.
7.3: The 5G image evaluation and sending module 8 takes the reciprocal of the per-second sending frequency to obtain the sending period.
7.4: Within each sending period, the 5G image evaluation and sending module 8 sorts the face images by visibility from high to low.
7.5: The 5G image evaluation and sending module 8 sends only the face image with the highest visibility and its timestamp to the server face blacklist comparison module 9.
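Steps 7.2–7.5 can be sketched as follows; the tuple layout of a face record is an assumption for illustration, not from the patent:

```python
def send_period(signal_strength, adjust_coeff):
    """Steps 7.2-7.3: the per-second sending frequency is the signal
    strength times an adjustment coefficient; the period is its
    reciprocal."""
    return 1.0 / (signal_strength * adjust_coeff)

def best_face(faces):
    """Steps 7.4-7.5: of the faces collected in one sending period, keep
    only the one with the highest visibility.

    faces: list of (visibility, image, timestamp) tuples (assumed shape).
    """
    return max(faces, key=lambda f: f[0])
```

So a stronger signal yields a shorter period (more frequent sends), and per period only one face image, the most visible, goes to the server.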
S8: The server face blacklist comparison module 9 receives the face image and the timestamp from the 5G image evaluation and sending module 8, compares the face image with the faces in the blacklist library, and, if the similarity exceeds 80%, notifies the terminal alarm module 10 of the blacklisted person's name and the time;
S9: The terminal alarm module 10 of the recorder receives the blacklisted person's name and the time from the server face blacklist comparison module 9. The terminal alarm module 10 receives the YUV video frame and the timestamp from the video acquisition module 2. The terminal alarm module 10 associates the YUV video frame with the blacklisted person's name according to the timestamp and displays the blacklisted person and the video-frame image in a pop-up window.
The above embodiments further describe the objectives, technical solutions and advantages of the present invention in detail. It should be understood that the above embodiments are merely examples of the present invention and are not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention shall fall within its protection scope.
Claims (8)
1. A low-power-consumption 5G recorder video inspection system, characterized by comprising:
acceleration sensor module (1): the acceleration sensor module (1) sends the obtained acceleration slicing data and the time stamp to the motion amplitude detection module (3) and the video acquisition quality evaluation module (6);
video capture module (2): a video acquisition module (2) of the recorder acquires a video image of the real world to obtain a YUV video frame and a time stamp, the YUV video frame and the time stamp are sent to a face detection module (7), a Y component of the YUV video frame is sent to a video definition detection module (4), and the time stamp of the YUV video frame generated each time is notified to an acceleration sensor module (1);
motion amplitude detection module (3): the motion amplitude detection module (3) receives acceleration slicing data and a timestamp of the acceleration sensor module (1); sending the motion amplitude value and the timestamp of the video frame fragment obtained by calculation to a video definition detection module (4);
video sharpness detection module (4): the video definition detection module (4) takes the calculated definition value of the block as the definition value of the whole picture, and sends the definition value and the timestamp to the video acquisition quality evaluation module (6);
a video acquisition quality evaluation module (6): the video acquisition quality evaluation module (6) receives the motion amplitude value and the time stamp of the motion amplitude detection module (3), and the video acquisition quality evaluation module (6) receives the definition value and the time stamp of the video definition detection module (4); the video acquisition quality evaluation module (6) aligns the motion amplitude value and the definition value of the video frame according to the time stamp, and sends the evaluation value of the calculated definition value and the time stamp to the face detection module (7);
face detection module (7): the face detection module (7) receives the YUV video frame and the timestamp from the video acquisition module (2) and the estimated definition value and the timestamp from the video acquisition quality evaluation module (6); the face detection module (7) associates the YUV video frame with the estimated definition value according to the timestamp to decide whether to perform face detection, and sends the face image, the visibility and the timestamp to the 5G image evaluation and sending module (8);
5G image evaluation and sending module (8): the 5G image evaluation and sending module (8) receives the face image, the visibility and the timestamp from the face detection module (7), adjusts the sending frequency according to the acquired 5G signal strength, and sends the face image with the highest visibility and its timestamp to the server face blacklist comparison module (9);
server face blacklist comparison module (9): the server face blacklist comparison module (9) receives the face image and the timestamp from the 5G image evaluation and sending module (8); the server face blacklist comparison module (9) compares the face image with the faces in the blacklist library, and if the similarity exceeds 80%, notifies the terminal alarm module (10) of the blacklisted person's name and the time;
terminal alarm module (10): the terminal alarm module (10) receives the blacklisted person's name and the time from the server face blacklist comparison module (9), and receives the YUV video frame and the timestamp from the video acquisition module (2); the terminal alarm module (10) associates the YUV video frame with the blacklisted person's name according to the timestamp, and displays the blacklisted person and the video-frame image in a pop-up window.
2. A low-power-consumption 5G recorder video inspection method using the low-power-consumption 5G recorder video inspection system according to claim 1, characterized by comprising the following steps:
s1: the acceleration sensor module (1) sends the obtained acceleration slicing data and the time stamp to the motion amplitude detection module (3) and the video acquisition quality evaluation module (6);
s2: a video acquisition module (2) of the recorder acquires a real-world video image to obtain a YUV video frame and a timestamp; the video acquisition module (2) sends the YUV video frames and the time stamps to the face detection module (7); the Y component of the YUV video frame of the video acquisition module (2) is sent to the video definition detection module (4), and the acceleration sensor module (1) is informed of the time stamp of the YUV video frame generated by the video acquisition module (2) each time;
s3: the motion amplitude detection module (3) receives acceleration slicing data and a timestamp of the acceleration sensor module (1); sending the motion amplitude value and the timestamp of the video frame fragment obtained by calculation to a video definition detection module (4);
s4: the video definition detection module (4) takes the calculated definition value of the block as the definition value of the whole picture, and sends the definition value and the timestamp to the video acquisition quality evaluation module (6);
s5: the video acquisition quality evaluation module (6) receives the motion amplitude value and the time stamp of the motion amplitude detection module (3), and the video acquisition quality evaluation module (6) receives the definition value and the time stamp of the video definition detection module (4); the video acquisition quality evaluation module (6) aligns the motion amplitude value and the definition value of the video frame according to the time stamp, and sends the calculated estimation value of the definition value and the time stamp to the face detection module (7);
s6: the face detection module (7) receives the YUV video frame and the timestamp from the video acquisition module (2) and the estimated definition value and the timestamp from the video acquisition quality evaluation module (6); the face detection module (7) associates the YUV video frame with the estimated definition value according to the timestamp to decide whether to perform face detection, and sends the face image, the visibility and the timestamp to the 5G image evaluation and sending module (8);
s7: the 5G image evaluation and sending module (8) receives the face image, the visibility and the timestamp from the face detection module (7), adjusts the sending frequency according to the acquired 5G signal strength, and sends the face image with the highest visibility and its timestamp to the server face blacklist comparison module (9);
s8: the server face blacklist comparison module (9) receives the face image and the timestamp from the 5G image evaluation and sending module (8); the server face blacklist comparison module (9) compares the face image with the faces in the blacklist library, and if the similarity exceeds 80%, notifies the terminal alarm module (10) of the blacklisted person's name and the time;
s9: the terminal alarm module (10) of the recorder receives the blacklisted person's name and the time from the server face blacklist comparison module (9); the terminal alarm module (10) receives the YUV video frame and the timestamp from the video acquisition module (2); the terminal alarm module (10) associates the YUV video frame with the blacklisted person's name according to the timestamp, and displays the blacklisted person and the video-frame image in a pop-up window.
3. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S1 comprises:
an acceleration sensor module (1) of the recorder periodically acquires acceleration values of dimensions in the x direction, the y direction and the z direction, the acceleration sensor module (1) receives a video frame generation notice and a video time stamp of a video acquisition module (2), and the acceleration sensor module (1) divides the acceleration values of the dimensions in the x direction, the y direction and the z direction into pieces according to the video time stamp; the acceleration sensor module (1) sends the acceleration slicing data and the time stamp to the motion amplitude detection module (3) and the video acquisition quality evaluation module (6).
4. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S3 comprises:
3.1 the motion amplitude detection module (3) receives the acceleration slice data and the timestamp from the acceleration sensor module (1);
3.2 the motion amplitude detection module (3) applies a Fast Fourier Transform (FFT) to the n sampling points of each of the x, y and z dimensions of the acceleration slice, obtaining n complex-valued frequency points per dimension;
3.3 the motion amplitude detection module (3) computes the amplitude of each of the n frequency points in the x, y and z dimensions as the square root of the sum of the squared real part and the squared imaginary part of its complex value;
3.4 the motion amplitude detection module (3) sums the amplitudes of the n frequency points across the x, y and z dimensions and divides the accumulated value by 3n to obtain the motion amplitude value of the video-frame slice;
3.5 the motion amplitude detection module (3) sends the motion amplitude value and the time stamp of the video frame fragment to the video definition detection module (4).
5. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S4 comprises:
4.1: the video definition detection module (4) receives the motion amplitude value and the time stamp of the motion amplitude detection module (3);
4.2: the video definition detection module (4) receives the Y component of the video frame from the video acquisition module (2);
4.3: the video definition detection module (4) associates the Y component and the motion amplitude value of the video frame according to the time stamp;
4.4: the video definition detection module (4) checks sampled blocks to determine whether the image is blurred; when the motion amplitude value is large, the image is considered more likely to be blurred and the module reduces the number of detection blocks: the number of detection blocks equals a preset detection coefficient divided by the motion amplitude value, clamped to at least one and at most the maximum number of detection areas;
4.5: the video definition detection module (4) sets the fixed size of the block, generates the central position of the block by using a random algorithm, and regenerates the central position when the blocks of the random algorithm are overlapped;
4.6: the video definition detection module (4) convolutes each pixel of each block by using a Laplacian operator, and squares a convolution value to obtain a non-negative value; the video definition detection module (4) accumulates the non-negative values to obtain the definition of the block;
4.7: the video definition detection module (4) takes the definition value of the block as the definition value of the whole video frame;
4.8: and the video definition detection module (4) sends the definition value and the timestamp to the video acquisition quality evaluation module (6).
6. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S5 comprises:
5.1: the video acquisition quality evaluation module (6) receives the motion amplitude value and the time stamp of the motion amplitude detection module (3);
5.2: the video acquisition quality evaluation module (6) receives the definition value and the timestamp of the video definition detection module (4);
5.3: the video acquisition quality evaluation module (6) aligns the motion amplitude value and the definition value of the video frame according to the timestamp; the larger the motion amplitude value, the lower the definition value; a linear relation between the definition value and the motion amplitude value is established by linear regression: definition value = b / motion amplitude value + a;
5.4: after accumulating more than 30 video frames, the video acquisition quality evaluation module (6) uses a least-squares estimator to compute the parameters b and a of the linear relation between the definition value and the motion amplitude value, and calculates a predicted definition value from the motion amplitude value and the parameters b and a;
5.5: the video acquisition quality evaluation module (6) uses the definition value received from the video definition detection module (4) as the observation of a Kalman filter;
5.6: the video acquisition quality evaluation module (6) uses the predicted definition value as the Kalman filter's prediction and obtains the estimated definition value from the Kalman update; the Kalman filter removes the deviation introduced by the random sampling of the video definition detection module (4);
5.7: and the video acquisition quality evaluation module (6) sends the evaluation value of the definition value and the time stamp to the face detection module (7).
7. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S6 comprises:
6.1: the face detection module (7) receives the YUV video frame and the timestamp of the video acquisition module (2);
6.2: the face detection module (7) receives the estimated value and the timestamp of the definition value of the video acquisition quality evaluation module (6);
6.3: the face detection module (7) associates a YUV video frame with an estimated value of a definition value according to the time stamp;
6.4: if the face detection module (7) determines that the estimated definition value of the YUV video frame exceeds the threshold T, it performs face detection; if a face is detected, it obtains the face position and face visibility, crops the face image according to the face position, and sends the face image, the visibility and the timestamp to the 5G image evaluation and sending module (8);
6.5: if the face detection module (7) determines that the estimated definition value of the YUV video frame is less than or equal to the threshold T, it does not perform face detection, which reduces power consumption.
8. The low-power-consumption 5G recorder video inspection method according to claim 2, characterized in that step S7 comprises:
7.1: the 5G image evaluation and sending module (8) receives the face image, the visibility and the timestamp from the face detection module (7);
7.2: the 5G image evaluation and sending module (8) acquires the 5G signal strength and multiplies it by an adjustment coefficient to obtain the per-second sending frequency;
7.3: the 5G image evaluation and sending module (8) takes the reciprocal of the per-second sending frequency to obtain the sending period;
7.4: within each sending period, the 5G image evaluation and sending module (8) sorts the face images by visibility from high to low;
7.5: the 5G image evaluation and sending module (8) sends only the face image with the highest visibility and its timestamp to the server face blacklist comparison module (9).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211555189.1A CN115985007B (en) | 2022-12-06 | 2022-12-06 | 5G recorder video inspection method and system based on low power consumption |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115985007A true CN115985007A (en) | 2023-04-18 |
CN115985007B CN115985007B (en) | 2024-06-21 |
Family
ID=85957115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211555189.1A Active CN115985007B (en) | 2022-12-06 | 2022-12-06 | 5G recorder video inspection method and system based on low power consumption |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115985007B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110191058A1 (en) * | 2009-08-11 | 2011-08-04 | Certusview Technologies, Llc | Locating equipment communicatively coupled to or equipped with a mobile/portable device |
WO2019100608A1 (en) * | 2017-11-21 | 2019-05-31 | 平安科技(深圳)有限公司 | Video capturing device, face recognition method, system, and computer-readable storage medium |
CN112584073A (en) * | 2020-12-24 | 2021-03-30 | 杭州叙简科技股份有限公司 | 5G-based law enforcement recorder distributed assistance calculation method |
CN112995432A (en) * | 2021-02-05 | 2021-06-18 | 杭州叙简科技股份有限公司 | Depth image identification method based on 5G double recorders |
WO2021184894A1 (en) * | 2020-03-20 | 2021-09-23 | 深圳市优必选科技股份有限公司 | Deblurred face recognition method and system and inspection robot |
CN113505674A (en) * | 2021-06-30 | 2021-10-15 | 上海商汤临港智能科技有限公司 | Face image processing method and device, electronic equipment and storage medium |
CN114419769A (en) * | 2022-01-29 | 2022-04-29 | 青岛海信移动通信技术股份有限公司 | Door lock management method and device, intelligent operation recorder, medium and system |
Non-Patent Citations (1)
Title |
---|
雷蕴奇; 柳秀霞; 宋晓冰; 袁美玲; 欧阳江帆: "Detection and feature localization method for moving faces in video", Journal of South China University of Technology (Natural Science Edition), no. 05, 15 May 2009 (2009-05-15) * |
Also Published As
Publication number | Publication date |
---|---|
CN115985007B (en) | 2024-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11410002B2 (en) | Ship identity recognition method based on fusion of AIS data and video data | |
CN107004271B (en) | Display method, display apparatus, electronic device, computer program product, and storage medium | |
CN110191320B (en) | Video jitter and freeze detection method and device based on pixel time sequence motion analysis | |
US8712149B2 (en) | Apparatus and method for foreground detection | |
CN103366506A (en) | Device and method for automatically monitoring telephone call behavior of driver when driving | |
JP2001357484A (en) | Road abnormality detector | |
CN102164270A (en) | Intelligent video monitoring method and system capable of exploring abnormal events | |
CN106851049A (en) | A kind of scene alteration detection method and device based on video analysis | |
CN103095966B (en) | A kind of video jitter quantization method and device | |
CN112257632A (en) | Transformer substation monitoring system based on edge calculation | |
CN101163234A (en) | Method of implementing pattern recognition and image monitoring using data processing device | |
CN104539936A (en) | System and method for monitoring snow noise of monitor video | |
CN108174198B (en) | Video image quality diagnosis analysis detection device and application system | |
US20100253779A1 (en) | Video image monitoring system | |
CN113096158A (en) | Moving object identification method and device, electronic equipment and readable storage medium | |
CN111601011A (en) | Automatic alarm method and system based on video stream image | |
CN109905670A (en) | A kind of multi-stage platform monitoring system | |
CN115953719A (en) | Multi-target recognition computer image processing system | |
CN111950491A (en) | Personnel density monitoring method and device and computer readable storage medium | |
Luo et al. | Vehicle flow detection in real-time airborne traffic surveillance system | |
CN111460949B (en) | Real-time monitoring method and system for preventing external damage of power transmission line | |
CN115985007A (en) | 5G recorder video inspection method and system based on low power consumption | |
CN112633249A (en) | Embedded pedestrian flow detection method based on light deep learning framework | |
CN114724378B (en) | Vehicle tracking statistical system and method based on deep learning | |
CN107507214A (en) | The method and apparatus for obtaining goods image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||