CN115456868B - Data management method for fire drill system - Google Patents

Data management method for fire drill system

Info

Publication number
CN115456868B
CN115456868B (application CN202211416744.2A)
Authority
CN
China
Prior art keywords
gray
image
frame
gray level
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211416744.2A
Other languages
Chinese (zh)
Other versions
CN115456868A (en)
Inventor
李欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Jinyi Zhonghe Information Technology Co ltd
Original Assignee
Nanjing Jinyi Zhonghe Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Jinyi Zhonghe Information Technology Co ltd
Priority to CN202211416744.2A
Publication of CN115456868A
Application granted
Publication of CN115456868B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of electric digital data processing, in particular to a data management method for a fire drill system, which comprises the following steps: acquiring each frame of gray image corresponding to a fire drill site; calculating the gray threshold of each frame of gray image and then obtaining core pixel points; acquiring the regions of each frame of gray image according to the core pixel points; calculating the position label of each region and then matching the regions in two adjacent frames of gray images to obtain matching pairs; for any matching pair, calculating the losable information amount of the corresponding region of the next frame of gray image in the matching pair; for two adjacent frames of gray images, calculating the overall losable information amount of the next frame of gray image according to the losable information amounts corresponding to all the regions in the next frame, and judging whether the next frame of gray image is retained; and compressing and transmitting the video images corresponding to all the retained gray images. The invention reduces the data volume of video image compression and transmission.

Description

Data management method for fire drill system
Technical Field
The invention relates to the technical field of electric digital data processing, in particular to a data management method of a fire drill system.
Background
The fire drill system is a simulation system that has been applied specially to fire drills in recent years: video data of the fire drill site is collected and transmitted to a terminal through the transmission mode of the fire drill system, and monitoring personnel at the terminal then assist the on-site personnel in carrying out the drill. In a fire drill system, the real-time performance of live video data transmission often has a great influence on the drill result.
In conventional video instant transmission technology, the original video data is compressed by predictive coding to reduce the transmitted size of the video data. For video transmission in a fire drill, if the amount of video data to be transmitted from the emergency scene is too large, transmission efficiency becomes insufficient, transmission takes too long, and the real-time property of the data is lost. Observing the video data of the fire drill site collected by the fire drill system, it is easy to find that the video data transmitted by conventional video instant transmission technology carries considerable redundancy: the information carried by the video data of consecutive multi-frame video images is almost the same, so compressing and transmitting all the video data increases the transmitted data volume and reduces the transmission efficiency of the fire drill site's video data.
Therefore, an efficient method is needed to achieve accurate transmission of video data from the fire drill site.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a fire drill system data management method, which adopts the following technical scheme:
acquiring video image data corresponding to a fire drill site, wherein the video image data comprises at least two frames of video images; preprocessing each frame of video image to obtain each frame of gray level image;
performing curve fitting on the gray histogram corresponding to each frame of gray image to obtain a peak-valley curve corresponding to each frame of gray image, and calculating the gray threshold of each frame of gray image according to the gray value corresponding to each peak in the peak-valley curve; selecting core pixel points corresponding to the gray level images of each frame according to the gray level threshold;
acquiring at least two areas corresponding to each frame of gray level image according to the core pixel points; calculating the position label of each region according to the horizontal and vertical coordinate values corresponding to each pixel point in the region; matching each area in two adjacent frames of gray level images according to the position labels to obtain at least two matching pairs; there are two regions in the matching pair;
for any matching pair, calculating the average gray value and the variance corresponding to each region according to the gray values corresponding to all pixel points in each region of the matching pair, and further obtaining the losable information amount of the region corresponding to the next frame of gray image in the matching pair;
for two adjacent frames of gray images, calculating the overall losable information amount corresponding to the next frame of gray image according to the losable information amounts corresponding to all the regions in the next frame of gray image, the image entropy and average gray value corresponding to the next frame of gray image, and the image entropy and average gray value corresponding to the previous frame of gray image; judging whether to retain the next frame of gray image according to the overall losable information amount;
and compressing and transmitting the video images corresponding to all the retained gray images.
Preferably, the method for calculating the gray threshold of each frame of gray image according to the gray value corresponding to each peak in the peak-valley curve comprises: calculating the average gray value over all peaks according to the gray value corresponding to each peak in the peak-valley curve, and recording this average as the gray threshold of the corresponding frame of gray image.
Preferably, the method for selecting the core pixel points corresponding to each frame of gray image according to the gray threshold comprises the following steps: for any frame of gray image, comparing its gray threshold with a judgment threshold; when the gray threshold is greater than the judgment threshold, recording the pixel points whose gray values are smaller than the gray threshold as the core pixel points of that frame; and when the gray threshold is less than or equal to the judgment threshold, recording the pixel points whose gray values are greater than or equal to the gray threshold as the core pixel points of that frame.
Preferably, the method for obtaining at least two regions corresponding to each frame of gray-scale image according to the core pixel point comprises: and acquiring at least two regions corresponding to each frame of gray level image by using a region growing algorithm according to the core pixel points.
Preferably, the method for calculating the position label of each region according to the horizontal and vertical coordinate values corresponding to each pixel point in the region comprises: and calculating average abscissa values and average ordinate values corresponding to all pixel points in the region, wherein the average abscissa values and the average ordinate values form a position label.
Preferably, the method for matching each region in two adjacent frames of gray images according to the position tag to obtain at least two matching pairs comprises: and matching each region in the two adjacent frames of gray images by using a KM algorithm according to the position tags to obtain at least two matching pairs.
Preferably, the losable information amount is:

$$F_k^n = e^{-\left|\mu_k^n-\mu_{k'}^{n-1}\right|\cdot\left|v_k^n-v_{k'}^{n-1}\right|}$$

wherein $F_k^n$ is the losable information amount of the k-th region in the n-th frame gray image, when the next frame of gray image in any matching pair is the n-th frame gray image; $e$ is a natural constant; $\mu_k^n$ is the average gray value corresponding to the k-th region in the n-th frame gray image; $\mu_{k'}^{n-1}$ is the average gray value corresponding to the matched k'-th region in the (n-1)-th frame gray image; $v_k^n$ and $v_{k'}^{n-1}$ are the variances corresponding to the two regions; $\left|\cdot\right|$ is the absolute value sign.
Preferably, the overall losable information amount is:

$$Z^n = \left(\frac{1}{K^n}\sum_{k=1}^{K^n}F_k^n\right)\cdot e^{-\left|1-\frac{H^n}{H^{n-1}}\cdot\frac{\bar g^n}{\bar g^{n-1}}\right|}$$

wherein $Z^n$ is the overall losable information amount corresponding to the n-th frame gray image, when the next frame of gray image in the two adjacent frames is the n-th frame gray image; $F_k^n$ is the losable information amount of the k-th region in the n-th frame gray image; $K^n$ is the total number of regions in the n-th frame gray image; $H^n$ and $H^{n-1}$ are the image entropies corresponding to the n-th and (n-1)-th frame gray images; $\bar g^n$ and $\bar g^{n-1}$ are the corresponding average gray values; $e$ is a natural constant.
Preferably, the method for determining whether to retain the next frame of gray image according to the overall losable information amount comprises: normalizing the overall losable information amount and comparing the normalized value with a threshold; when the normalized overall losable information amount is greater than the threshold, deleting the next frame of gray image; and when it is less than or equal to the threshold, retaining the next frame of gray image.
The embodiment of the invention at least has the following beneficial effects:
the method comprises the steps of calculating the integral loss information quantity of each frame of gray level image corresponding to video image data, judging whether the frame of gray level image is reserved or not according to the integral loss information quantity of each frame of gray level image, realizing accurate selection of the video image corresponding to a fire drilling site, judging that the two adjacent frames of gray level images are highly similar when the integral loss information quantity corresponding to the next frame of gray level image in the two adjacent frames of gray level images is larger, and deleting the next frame of gray level image; the method and the device provided by the invention have the advantages that the information contained in the video image corresponding to the fire drill site is complete, the data volume of the video image data for compression transmission is reduced, the transmission efficiency of the video image data of the fire drill site is improved, and the instantaneity of transmission is ensured.
Meanwhile, for any one matching pair, according to the gray values corresponding to all pixel points in each region in the matching pair, calculating the average gray value and the variance corresponding to each region, and further obtaining the loss information quantity of the region corresponding to the next frame of gray image in the matching pair; in the calculation of the amount of information which can be lost, not only the average gray value corresponding to the region but also the variance corresponding to the region are considered, and the calculation is performed only through the average gray value corresponding to the region, so that the method has great contingency; thus, the variance is used to reduce this contingency and improve the accuracy with which the amount of information can be lost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating steps of a data management method for a fire drill system according to an embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the present invention to achieve the predetermined objects and their effects, the following gives a detailed description of the proposed solution, its specific implementation, structure, features and effects in conjunction with the accompanying drawings and the preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The main purposes of the invention are: preprocessing each frame of video image to be transmitted in the fire drill system to obtain a gray image; dividing each frame of gray image into at least two regions; matching the regions of two adjacent frames of gray images to obtain at least two matching pairs; calculating the losable information amount of the corresponding region of the next frame of gray image in each matching pair; obtaining from these the overall losable information amount corresponding to the next frame of gray image; judging whether to retain the next frame of gray image according to the overall losable information amount; and compressing and transmitting the video images corresponding to all the retained gray images. By deleting one frame of a highly similar pair according to the overall losable information amount, the invention reduces the number of video images transmitted and improves the transmission efficiency.
Referring to fig. 1, a flowchart illustrating steps of a fire drill system data management method according to an embodiment of the present invention is shown, where the method includes the following steps:
step 1, acquiring video image data corresponding to a fire drill site, wherein the video image data comprises at least two frames of video images; and preprocessing each frame of video image to obtain each frame of gray level image.
Specifically, a field worker shoots a fire drill field in real time through a camera to obtain video image data corresponding to the fire drill field; each shot frame of video image is an RGB image; the RGB image is a three-channel image, and the three channels are an R channel, a G channel and a B channel respectively; if each frame of the shot video image is directly used, the calculation amount of the subsequent process is increased, so that the embodiment performs graying processing on each frame of the video image to obtain each frame of the gray image; the gray image is a single-channel image, and the calculation amount can be reduced in the subsequent calculation process.
The graying method is a known technique and is not described here; the implementer can select a suitable graying method to process each frame of video image.
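As a minimal illustrative sketch (not part of the patent text), step 1 could be implemented as follows; the use of OpenCV and the function name load_gray_frames are assumptions of this sketch:

```python
import cv2

def load_gray_frames(video_path):
    """Decode a drill-site video and convert each frame to a single-channel
    gray image, mirroring the preprocessing of step 1 (sketch only)."""
    cap = cv2.VideoCapture(video_path)
    gray_frames = []
    while True:
        ok, frame = cap.read()          # OpenCV decodes frames as BGR
        if not ok:
            break
        gray_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return gray_frames
```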
Step 2, performing curve fitting on a gray histogram corresponding to each frame of gray image to obtain a peak-valley curve corresponding to each frame of gray image, and calculating a gray threshold of each frame of gray image according to a gray value corresponding to each peak in the peak-valley curve; and selecting core pixel points corresponding to the gray level images of each frame according to the gray level threshold value.
The gray histogram corresponding to each frame of gray image is obtained from the gray values of the pixel points in that frame: it records the number of pixel points at each gray value and thus reflects the frequency with which each gray value occurs in the gray image. The acquisition process is a known technique and is not repeated.
Then, curve fitting is performed on the gray histogram using the envelope fitting technique of EMD to obtain the peak-valley curve corresponding to each frame of gray image. Because the gray image is obtained from the video image of the fire drill site, and the environment of the site is extremely complex, the distribution of the gray values of the pixel points is also extremely complex; the peak-valley curve of each frame therefore fluctuates, with multiple peaks and multiple valleys. The envelope fitting technique of EMD is a known technique and is not described in detail.
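A hedged sketch of the peak extraction follows. It replaces the EMD envelope fit with a moving-average smoothing of the histogram plus scipy.signal.find_peaks, a simplification rather than the embodiment's exact technique; the smoothing window of 5 bins is an assumed parameter:

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_peaks(gray):
    """Gray values at the peaks of the smoothed gray histogram; the smoothing
    stands in for the EMD envelope fitting of the embodiment."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    peaks, _ = find_peaks(smooth)       # bin indices equal gray values here
    return peaks
```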
Calculate the gray threshold of each frame of gray image according to the gray value corresponding to each peak in its peak-valley curve: the average gray value over all peaks is taken as the gray threshold of the corresponding frame, expressed as

$$T^n = \frac{1}{M^n}\sum_{m=1}^{M^n}g_m^n$$

wherein $T^n$ is the gray threshold of the n-th frame gray image; $g_m^n$ is the gray value corresponding to the m-th peak in the peak-valley curve of the n-th frame gray image; and $M^n$ is the total number of peaks in that peak-valley curve.
The gray value corresponding to each peak in the peak-valley curve of the n-th frame gray image indicates a gray value taken by a large number of pixel points in that frame; the average gray value over all peaks is therefore recorded as the gray threshold of the corresponding frame. The larger the gray threshold, the more peaks lie near large gray values, i.e. most pixel points are distributed near large gray values; since edge pixel points differ greatly from the other pixel points of a gray image, the pixel points with small gray values are then more likely to be edge pixel points. Conversely, the smaller the gray threshold, the more peaks lie near small gray values, i.e. most pixel points are distributed near small gray values, and the pixel points with large gray values are more likely to be edge pixel points.
Finally, the core pixel points corresponding to each frame of gray image are selected according to the gray threshold: for any frame of gray image, the gray threshold is compared with a judgment threshold; when the gray threshold is greater than the judgment threshold, the pixel points whose gray values are smaller than the gray threshold are recorded as the core pixel points of that frame; when the gray threshold is less than or equal to the judgment threshold, the pixel points whose gray values are greater than or equal to the gray threshold are recorded as the core pixel points of that frame. In this embodiment, the judgment threshold is 125; the implementer can adjust this value according to actual conditions.
It should be noted that edge pixel points reflect the information contained in the gray image. When the gray threshold is greater than the judgment threshold, the pixel points whose gray values are smaller than the gray threshold are more likely to be edge pixel points, and are therefore marked as the core pixel points of the corresponding frame; when the gray threshold is less than or equal to the judgment threshold, the pixel points whose gray values are greater than or equal to the gray threshold are more likely to be edge pixel points, and are marked as the core pixel points instead.
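Continuing the sketch, the gray threshold (mean gray value over all histogram peaks) and the core-pixel selection against the judgment threshold of 125 could read as below; histogram_peaks is the helper sketched above, and the fallback for a peak-free histogram is an assumption:

```python
def core_pixel_mask(gray, judge_thresh=125):
    """Boolean mask of core pixels per step 2: compute the frame's gray
    threshold T, compare it with the judgment threshold, and keep the side
    of T that is more likely to contain edge pixels."""
    peaks = histogram_peaks(gray)
    t = peaks.mean() if peaks.size else gray.mean()  # T = mean peak gray value
    if t > judge_thresh:
        return gray < t                 # dark pixels marked as core pixels
    return gray >= t                    # bright pixels marked as core pixels
```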
Step 3, acquiring at least two areas corresponding to each frame of gray level image according to the core pixel points; calculating the position label of each region according to the horizontal and vertical coordinate values corresponding to each pixel point in the region; matching each area in two adjacent frames of gray level images according to the position labels to obtain at least two matching pairs; there are two regions in the matched pair.
Specifically, according to core pixel points, at least two regions corresponding to each frame of gray level image are obtained by using a region growing algorithm; the region growing algorithm is a known technique, is not in the protection scope of the present invention, and is not described in detail.
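The embodiment delegates region formation to a region growing algorithm. As a simplified stand-in, the sketch below labels the connected components of the core-pixel mask and treats each component as one region; the min_size filter is an assumed heuristic, not part of the patent:

```python
from scipy import ndimage

def regions_from_core(core_mask, min_size=20):
    """Split the core-pixel mask into regions via connected-component
    labelling, a simplified substitute for seeded region growing."""
    labels, num = ndimage.label(core_mask)
    regions = []
    for lab in range(1, num + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size >= min_size:         # drop tiny fragments
            regions.append((ys, xs))
    return regions
```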
Further, calculating the position label of each region according to the horizontal and vertical coordinate values corresponding to each pixel point in the region; namely, calculating the average abscissa value and the average ordinate value corresponding to all the pixel points in the region, wherein the average abscissa value and the average ordinate value form a position tag. The horizontal and vertical coordinate values corresponding to each pixel point in the area are obtained through an image coordinate system corresponding to the frame gray level image; the establishment of the image coordinate system is a known technique and is not described in detail.
Taking the k-th region in the n-th frame gray image as an example, its position label is $(\bar x_k^n,\bar y_k^n)$, where

$$\bar x_k^n=\frac{1}{N_k^n}\sum_{i=1}^{N_k^n}x_i,\qquad \bar y_k^n=\frac{1}{N_k^n}\sum_{i=1}^{N_k^n}y_i$$

wherein $x_i$ and $y_i$ are the abscissa and ordinate values of the i-th pixel point in the k-th region of the n-th frame gray image, and $N_k^n$ is the total number of pixel points in that region.
When the regions of two adjacent frames of gray images are subsequently matched according to the position labels, the positions of the same information may differ between frames; this embodiment therefore uses the average abscissa and ordinate values of all pixel points in a region as that region's position label. The more pixel points a region contains, the greater the fault tolerance of the computed position label, and the higher the matching accuracy between regions in the subsequent matching.
Position labels are obtained in this way for all regions of each frame of gray image, and the regions of two adjacent frames are matched according to them. Taking the n-th and (n-1)-th frame gray images as an example, the KM algorithm is used to match each region of the n-th frame with a region of the (n-1)-th frame according to the position labels, yielding the matching pairs. The KM algorithm is a known technique, is not within the protection scope of the present invention, and the specific matching process is not described.
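A sketch of the position labels and the inter-frame matching: scipy's linear_sum_assignment solves the same bipartite assignment problem as the KM (Kuhn-Munkres) algorithm named above, here minimising the Euclidean distance between position labels:

```python
from scipy.optimize import linear_sum_assignment

def position_label(ys, xs):
    """Average abscissa and ordinate of a region's pixel points."""
    return float(xs.mean()), float(ys.mean())

def match_regions(regions_prev, regions_next):
    """Pair the regions of two adjacent frames by closest position labels."""
    prev = np.array([position_label(ys, xs) for ys, xs in regions_prev])
    nxt = np.array([position_label(ys, xs) for ys, xs in regions_next])
    cost = np.linalg.norm(prev[:, None, :] - nxt[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian / KM assignment
    return list(zip(rows, cols))               # (prev_index, next_index) pairs
```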
Step 4, for any matching pair, calculating the average gray value and the variance corresponding to each region according to the gray values corresponding to all pixel points in each region of the matching pair, and further obtaining the losable information amount of the region corresponding to the next frame of gray image in the matching pair.
The losable information amount is:

$$F_k^n = e^{-\left|\mu_k^n-\mu_{k'}^{n-1}\right|\cdot\left|v_k^n-v_{k'}^{n-1}\right|}$$

wherein $F_k^n$ is the losable information amount of the k-th region in the n-th frame gray image, when the next frame of gray image in any matching pair is the n-th frame gray image; $e$ is a natural constant; $\mu_k^n$ is the average gray value corresponding to the k-th region in the n-th frame gray image; $\mu_{k'}^{n-1}$ is the average gray value corresponding to the matched k'-th region in the (n-1)-th frame gray image; $v_k^n$ and $v_{k'}^{n-1}$ are the variances corresponding to the two regions; $\left|\cdot\right|$ is the absolute value sign.
It should be noted that the k-th region in the n-th frame gray image and the matched k'-th region in the (n-1)-th frame gray image form a matching pair. In the gray images corresponding to the video image data of the fire drill site, if the amounts of information carried by the two regions of a matching pair differ little, then the average gray values of all pixel points in the two regions differ little, and so do the variances. $\left|\mu_k^n-\mu_{k'}^{n-1}\right|$ represents the difference of the average gray values of the two regions, and $\left|v_k^n-v_{k'}^{n-1}\right|$ represents the difference of their variances; when the product of the two is small, the information in the k-th region of the n-th frame changes little relative to the matched k'-th region of the (n-1)-th frame. In that case, even losing the k-th region of the n-th frame is acceptable with respect to the video image data, because the k'-th region of the (n-1)-th frame can replace it; otherwise, the k'-th region of the (n-1)-th frame cannot replace the k-th region of the n-th frame.
The reason the product of the variance difference and the average-gray-value difference is used in the calculation of the losable information amount is that, although the average gray value represents the approximate range of the gray values of all pixel points in the two regions, it is too easily coincidental: for example, if each region of a matching pair contains only two pixel points, one region's two gray values may lie far apart while the other's are identical, so that the average gray values are equal even though the information contained in the two regions differs greatly. The variance difference is therefore used as a constraint to reduce this contingency, and the negative exponential function of the natural constant e then inverts the product, giving a losable information amount that represents the degree to which the information in the corresponding region can be lost. The smaller the change in the information carried between the k-th region of the n-th frame and the k'-th region of the (n-1)-th frame, the larger the value of $F_k^n$, i.e. the higher the degree to which the information in the k-th region of the n-th frame gray image can be lost.
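The per-region losable information amount then follows directly from the reconstructed formula; a sketch:

```python
def region_losable_info(gray_next, region_next, gray_prev, region_prev):
    """F = exp(-|mean difference| * |variance difference|) for one matched
    region pair; each region is a (ys, xs) tuple of pixel coordinates."""
    vals_n = gray_next[region_next].astype(float)
    vals_p = gray_prev[region_prev].astype(float)
    d_mean = abs(vals_n.mean() - vals_p.mean())
    d_var = abs(vals_n.var() - vals_p.var())
    return float(np.exp(-d_mean * d_var))
```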
Step 5, for two adjacent frames of gray images, calculating the overall losable information amount corresponding to the next frame of gray image according to the losable information amounts corresponding to all the regions in the next frame of gray image, the image entropy and average gray value corresponding to the next frame of gray image, and the image entropy and average gray value corresponding to the previous frame of gray image; and judging whether to retain the next frame of gray image according to the overall losable information amount.
The overall losable information amount is:

$$Z^n = \left(\frac{1}{K^n}\sum_{k=1}^{K^n}F_k^n\right)\cdot e^{-\left|1-\frac{H^n}{H^{n-1}}\cdot\frac{\bar g^n}{\bar g^{n-1}}\right|}$$

wherein $Z^n$ is the overall losable information amount corresponding to the n-th frame gray image, when the next frame in the two adjacent frames of gray images is the n-th frame gray image; $F_k^n$ is the losable information amount of the k-th region in the n-th frame gray image; $K^n$ is the total number of regions in the n-th frame gray image; $H^n$ and $H^{n-1}$ are the image entropies corresponding to the n-th and (n-1)-th frame gray images; $\bar g^n$ and $\bar g^{n-1}$ are the corresponding average gray values; $e$ is a natural constant. The calculation method of the image entropy is a known technique and is not described in detail.
The overall losable information amount corresponding to the n-th frame gray image consists of two parts. The first part is the region-average losable information amount $\frac{1}{K^n}\sum_{k=1}^{K^n}F_k^n$ over all regions of the n-th frame gray image; the second part is the inter-frame losable information amount $e^{-\left|1-\frac{H^n}{H^{n-1}}\cdot\frac{\bar g^n}{\bar g^{n-1}}\right|}$ between the n-th and (n-1)-th frame gray images. The region-average losable information amount describes, for the regions of the n-th frame relative to the matched regions of the (n-1)-th frame, the degree to which they can be lost: the larger it is, the smaller the difference between the information carried by the regions of the two frames, i.e. the regions of the n-th frame gray image can be replaced by the regions of the (n-1)-th frame gray image.
The inter-frame losable information amount is formed from the ratio of the image entropy of the n-th frame gray image to that of the (n-1)-th frame and the ratio of their average gray values, taking the product of the two ratios. The closer the information carried by the two frames, the closer the inter-frame losable information amount is to 1: the smaller the difference between the two frames, the closer their average gray values, and the closer $\frac{\bar g^n}{\bar g^{n-1}}$ is to 1; the more similar the distributions of the gray values of all pixel points in the two frames, the closer the corresponding image entropies, and the closer $\frac{H^n}{H^{n-1}}$ is to 1; then $\left|1-\frac{H^n}{H^{n-1}}\cdot\frac{\bar g^n}{\bar g^{n-1}}\right|$ is closer to 0, and the inter-frame losable information amount is correspondingly closer to 1.
The larger the value of $Z^n$, the more the n-th frame gray image can be lost without damage relative to the video image data corresponding to the fire drill site; that is, the n-th frame gray image can be completely replaced by the (n-1)-th frame gray image.
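A sketch of the overall losable information amount; combining the two parts as a product is an assumption of this sketch, consistent with the description above (both parts approach 1 for highly similar frames):

```python
def image_entropy(gray):
    """Shannon entropy of the gray-value distribution (standard image entropy)."""
    p = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def overall_losable_info(gray_prev, gray_next, f_values):
    """Z = (region-average losable information) * (inter-frame term)."""
    region_avg = float(np.mean(f_values))          # mean of F over all regions
    ratio = (image_entropy(gray_next) / image_entropy(gray_prev)) \
            * (gray_next.mean() / gray_prev.mean())
    return region_avg * float(np.exp(-abs(1.0 - ratio)))
```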
Further, whether the next frame in two adjacent frames of gray images is retained is judged according to the overall losable information amount: the overall losable information amount is normalized and compared with a threshold; when the normalized overall losable information amount is greater than the threshold, the next frame of gray image is deleted; when it is less than or equal to the threshold, the next frame of gray image is retained. In this embodiment, the threshold is 0.7; the implementer can adjust this value according to actual conditions.
It should be noted that, taking the n-th and (n-1)-th frame gray images as the two adjacent frames, when the normalized overall losable information amount corresponding to the n-th frame gray image is greater than the threshold, the n-th and (n-1)-th frame gray images are considered highly similar: the n-th frame gray image can be completely replaced by the (n-1)-th frame gray image, so only one of the two frames is retained.
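Finally, the retention decision of step 5: the embodiment normalizes the overall losable information amounts before thresholding but does not specify the scheme, so min-max normalization over the frame sequence is assumed here, with the embodiment's threshold of 0.7 (the first frame has no predecessor and is retained upstream; z_values covers the frames that do):

```python
def frames_to_keep(z_values, threshold=0.7):
    """Indices of frames to retain: a frame is deleted when its normalized
    overall losable information amount exceeds the threshold."""
    z = np.asarray(z_values, dtype=float)
    z_norm = (z - z.min()) / (z.max() - z.min() + 1e-12)  # assumed min-max
    return [i for i, v in enumerate(z_norm) if v <= threshold]
```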
Step 6, compressing and transmitting the video images corresponding to all the retained gray images.
Specifically, the video images corresponding to all the retained gray images are compressed using predictive coding; backward prediction is selected for the predictive coding because of its small calculation amount, and the compressed video images are then transmitted through the transmission mode of the fire drill system.
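As a closing sketch, the retained video images can be handed to a standard codec, which performs the predictive-coding compression before transmission; delegating to OpenCV's mp4v writer is a stand-in for the embodiment's backward-prediction coder:

```python
def write_retained(frames_bgr, kept_indices, out_path, fps=25):
    """Write only the retained original video images; the container codec
    compresses them by predictive coding (sketch, not the exact pipeline)."""
    h, w = frames_bgr[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
    for i in kept_indices:
        writer.write(frames_bgr[i])
    writer.release()
```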
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.

Claims (5)

1. A fire drill system data management method is characterized by comprising the following steps:
acquiring video image data corresponding to a fire drill site, wherein the video image data comprises at least two frames of video images; preprocessing each frame of video image to obtain each frame of gray level image;
performing curve fitting on the gray histogram corresponding to each frame of gray image to obtain a peak-valley curve corresponding to each frame of gray image, and calculating the gray threshold of each frame of gray image according to the gray value corresponding to each peak in the peak-valley curve; selecting core pixel points corresponding to the gray level images of each frame according to the gray level threshold;
acquiring at least two areas corresponding to each frame of gray level image according to the core pixel points; calculating the position label of each region according to the horizontal and vertical coordinate values corresponding to each pixel point in the region; matching each area in two adjacent frames of gray level images according to the position labels to obtain at least two matching pairs; there are two regions in the matching pair;
for any matching pair, calculating the average gray value and the variance corresponding to each region according to the gray values corresponding to all pixel points in each region of the matching pair, and further obtaining the losable information amount of the region corresponding to the next frame of gray image in the matching pair;
for two adjacent frames of gray images, calculating the overall losable information amount corresponding to the next frame of gray image according to the losable information amounts corresponding to all the regions in the next frame of gray image, the image entropy and average gray value corresponding to the next frame of gray image, and the image entropy and average gray value corresponding to the previous frame of gray image; judging whether to retain the next frame of gray image according to the overall losable information amount;
compressing and transmitting the video images corresponding to all the retained gray images;
the method for calculating the gray threshold of each frame of gray image according to the gray value corresponding to each peak in the peak-valley curve comprises the following steps: calculating the average gray value over all peaks according to the gray value corresponding to each peak in the peak-valley curve, and recording this average as the gray threshold of the corresponding frame of gray image;
the method for selecting the core pixel points corresponding to each frame of gray image according to the gray threshold comprises the following steps: for any frame of gray image, comparing its gray threshold with a judgment threshold; when the gray threshold is greater than the judgment threshold, recording the pixel points whose gray values are smaller than the gray threshold as the core pixel points of that frame; when the gray threshold is less than or equal to the judgment threshold, recording the pixel points whose gray values are greater than or equal to the gray threshold as the core pixel points of that frame;
the method for acquiring at least two regions corresponding to each frame of gray level image according to the core pixel points comprises the following steps: acquiring at least two regions corresponding to each frame of gray level image by using a region growing algorithm according to the core pixel points;
the total amount of information that can be lost is:
Figure 990838DEST_PATH_IMAGE001
wherein,
Figure 906710DEST_PATH_IMAGE002
when the next frame of gray image in the two adjacent frames of gray images is the nth frame of gray image, the whole amount of information which can be lost is corresponding to the nth frame of gray image;
Figure 454366DEST_PATH_IMAGE003
the amount of information which can be lost in the kth area in the nth frame gray level image;
Figure 342556DEST_PATH_IMAGE004
the total number of the areas in the nth frame gray level image;
Figure 317466DEST_PATH_IMAGE005
the image entropy corresponding to the nth frame gray level image;
Figure 533683DEST_PATH_IMAGE006
the image entropy corresponding to the (n-1) th frame gray level image;
Figure 868719DEST_PATH_IMAGE007
the average gray value corresponding to the nth frame of gray image is obtained;
Figure 627727DEST_PATH_IMAGE008
the average gray value corresponding to the gray image of the (n-1) th frame;
Figure 835855DEST_PATH_IMAGE009
are natural constants.
2. The fire drill system data management method according to claim 1, wherein the method for calculating the position label of each area according to the abscissa and ordinate values corresponding to each pixel point in each area comprises: and calculating average abscissa values and average ordinate values corresponding to all pixel points in the region, wherein the average abscissa values and the average ordinate values form a position label.
3. The fire drill system data management method according to claim 1, wherein the method for matching each region in two adjacent frames of gray images according to the position tags to obtain at least two matching pairs comprises: and matching each region in the two adjacent frames of gray images by using a KM algorithm according to the position tags to obtain at least two matching pairs.
4. The fire drill system data management method according to claim 1, wherein the losable information amount is:

$$F_k^n = e^{-\left|\mu_k^n-\mu_{k'}^{n-1}\right|\cdot\left|v_k^n-v_{k'}^{n-1}\right|}$$

wherein $F_k^n$ is the losable information amount of the k-th region in the n-th frame gray image, when the next frame of gray image in any matching pair is the n-th frame gray image; $e$ is a natural constant; $\mu_k^n$ is the average gray value corresponding to the k-th region in the n-th frame gray image; $\mu_{k'}^{n-1}$ is the average gray value corresponding to the matched k'-th region in the (n-1)-th frame gray image; $v_k^n$ and $v_{k'}^{n-1}$ are the variances corresponding to the two regions; $\left|\cdot\right|$ is the absolute value sign.
5. The fire drill system data management method according to claim 1, wherein the method for determining whether to retain the next frame of gray image according to the overall losable information amount comprises: normalizing the overall losable information amount and comparing the normalized value with a threshold; when the normalized overall losable information amount is greater than the threshold, deleting the next frame of gray image; and when it is less than or equal to the threshold, retaining the next frame of gray image.
CN202211416744.2A 2022-11-14 2022-11-14 Data management method for fire drill system Active CN115456868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211416744.2A CN115456868B (en) 2022-11-14 2022-11-14 Data management method for fire drill system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211416744.2A CN115456868B (en) 2022-11-14 2022-11-14 Data management method for fire drill system

Publications (2)

Publication Number Publication Date
CN115456868A (en) 2022-12-09
CN115456868B (en) 2023-01-31

Family

ID=84295584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211416744.2A Active CN115456868B (en) 2022-11-14 2022-11-14 Data management method for fire drill system

Country Status (1)

Country Link
CN (1) CN115456868B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333824B (en) * 2023-12-01 2024-02-13 中铁十九局集团第三工程有限公司 BIM-based bridge construction safety monitoring method and system
CN117714691B (en) * 2024-02-05 2024-04-12 佳木斯大学 AR augmented reality piano teaching is with self-adaptation transmission system
CN117831744B (en) * 2024-03-06 2024-05-10 大连云间来客科技有限公司 Remote monitoring method and system for critically ill patients
CN117880529B (en) * 2024-03-12 2024-05-14 深圳市诚立业科技发展有限公司 Low-delay wireless network short message video transmission method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107066933B (en) * 2017-01-25 2020-06-05 武汉极目智能技术有限公司 Road sign identification method and system
CN114723705B (en) * 2022-03-31 2023-08-22 深圳市启灵图像科技有限公司 Cloth flaw detection method based on image processing
CN115119016B (en) * 2022-06-29 2024-06-18 北京精确指向信息技术有限公司 Information data encryption algorithm

Also Published As

Publication number Publication date
CN115456868A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN115456868B (en) Data management method for fire drill system
CN110324626B (en) Dual-code-stream face resolution fidelity video coding and decoding method for monitoring of Internet of things
WO2023134791A2 (en) Environmental security engineering monitoring data management method and system
CN115914649B (en) Data transmission method and system for medical video
CN115297289A (en) Efficient storage method for monitoring video
US8249358B2 (en) Image quality evaluation method, image quality evaluation system and image quality evaluation program
CN117615088B (en) Efficient video data storage method for safety monitoring
CN112291562B (en) Fast CU partition and intra mode decision method for H.266/VVC
CN116405574B (en) Remote medical image optimization communication method and system
CN103501438B (en) A kind of content-adaptive method for compressing image based on principal component analysis
CN112738533B (en) Machine inspection image regional compression method
CN109982071B (en) HEVC (high efficiency video coding) dual-compression video detection method based on space-time complexity measurement and local prediction residual distribution
EP0985318B1 (en) System for extracting coding parameters from video data
CN115063326B (en) Infrared night vision image efficient communication method based on image compression
CN116095347B (en) Construction engineering safety construction method and system based on video analysis
CN111723735B (en) Pseudo high bit rate HEVC video detection method based on convolutional neural network
CN115665359B (en) Intelligent compression method for environment monitoring data
CN116543338A (en) Student classroom behavior detection method based on gaze target estimation
CN116311052A (en) Crowd counting method and device, electronic equipment and storage medium
CN115866251A (en) Semantic segmentation based image information rapid transmission method
CN106713924B (en) For text layered compression method and device
Le Callet et al. Continuous quality assessment of MPEG2 video with reduced reference
US6463174B1 (en) Macroblock-based segmentation and background mosaicking method
CN113747178A (en) Image edge end compression and back end recovery method and system in power channel visualization scene
CN112188212A (en) Method and device for intelligent transcoding of high-definition monitoring video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant