CN112616054B - Self-adaptive compression transmission and recovery method and device for wild animal monitoring image - Google Patents

Self-adaptive compression transmission and recovery method and device for wild animal monitoring image

Info

Publication number
CN112616054B
CN112616054B (granted publication of application CN202011441971.1A)
Authority
CN
China
Prior art keywords
image
region
transmission
compressed
bit plane
Prior art date
Legal status
Active
Application number
CN202011441971.1A
Other languages
Chinese (zh)
Other versions
CN112616054A (en
Inventor
张军国 (Zhang Junguo)
谢将剑 (Xie Jiangjian)
柴垒 (Chai Lei)
Current Assignee
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date
Filing date
Publication date
Application filed by Beijing Forestry University filed Critical Beijing Forestry University
Priority to CN202011441971.1A priority Critical patent/CN112616054B/en
Publication of CN112616054A publication Critical patent/CN112616054A/en
Application granted granted Critical
Publication of CN112616054B publication Critical patent/CN112616054B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/635Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by filter definition or implementation details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/18Self-organising networks, e.g. ad-hoc networks or sensor networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an adaptive compression, transmission and recovery method and device for wild animal monitoring images, applied to a wireless sensor network. The method comprises the following steps: acquiring a wild animal image to be compressed and transmitted, and performing target-region extraction to generate a mask image corresponding to the target region; compression-coding the generated mask image by significant bit-plane shifting and multi-level tree set splitting; distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism; and restoring the compressed and coded image at the decoding end with an image restoration model, the restoration model being implemented with a generative adversarial network. Beneficial effects: the original image is compressed and the data volume reduced, while the image receiving end can still obtain a clear wild animal image of practical application value.

Description

Self-adaptive compression transmission and recovery method and device for wild animal monitoring image
Technical Field
The invention relates to the technical field of image data processing, and in particular to an adaptive compression, transmission and recovery method and device for wild animal monitoring images.
Background
A wild animal monitoring system helps to obtain comprehensive, real-time knowledge of the habitat conditions and population information of wild animals, and provides reliable data support for their protection. At present, wireless sensor networks are the main monitoring carrier. As the data volume of wild animal monitoring images grows, it puts pressure on data transmission within the wireless sensor network, so the monitoring images must be compressed before transmission. A compression-transmission method for wild animal monitoring images reduces the amount of transmitted data, extends the life cycle of the sensor network, lowers the proportion of coding spent on irrelevant content, and improves overall compression efficiency, while still allowing habitat conditions and population information to be known comprehensively and in good time. The compressed wild animal images, recovered at the local end, can serve as a dataset for wild animal identification and classification, providing data support for the automation and intelligence of subsequent wild animal protection.
Current image compression methods fall largely into those based on the multi-level tree set splitting algorithm (SPIHT, set partitioning in hierarchical trees) and those based on JPEG2000. Both compress the image globally, without considering distributed transmission of the target region and the background region, and therefore produce data redundancy. Prioritizing reconstruction by image region, i.e. coding the target region first, improves the recovery quality of the target region at low bit rates, and target-region extraction is the basis of such prioritized transmission. Wild animal monitoring images are easily affected by the environment; their complex backgrounds and uneven illumination make target extraction and mask generation harder. A compression-transmission method for wild animal monitoring images must therefore analyze the characteristics of the image information comprehensively, guarantee priority coding of the target region while reducing the transmitted data volume, and ensure that the quality of the recovered image meets data-analysis requirements.
In summary, existing image compression algorithms mainly compress the whole image, which generates a large amount of data redundancy during large-scale remote data transmission; the quality of image reconstruction at the decoding end is greatly reduced, and the reconstruction result in turn harms later data analysis.
Disclosure of Invention
The invention aims to provide an adaptive compression, transmission and recovery method for wild animal monitoring images, so that the monitoring image can be compressed and the data volume reduced while the image receiving end still obtains a clear wild animal monitoring image of practical application value.
In a first aspect: an adaptive compression, transmission and recovery method for wild animal monitoring images, applied to a wireless sensor network that adopts a clustered-convergence network topology, the method comprising the following steps:
acquiring a wild animal image to be compressed and transmitted, and extracting based on a target region to generate a mask image corresponding to the target region; wherein the wildlife images are derived from wildlife auto-triggering devices deployed inside the monitoring region; the target region is a foreground region with a wild animal part, which is different from a background region in the wild animal image;
compression-coding the generated mask image by significant bit-plane shifting and multi-level tree set splitting;
distributing and remotely transmitting the compressed and encoded image data through a distributed transmission mechanism;
restoring the compressed and coded image at the decoding end based on an image restoration model; wherein the restoration model is implemented with a generative adversarial network built on an improved squeeze-and-excitation (compressed excitation) module.
As an optional implementation manner of this application, the acquiring a wild animal image to be compressed and transmitted, and then extracting based on a target region to generate a corresponding mask image specifically includes:
reconstructing an image color space model;
extracting image texture parameters;
establishing a parameter matrix according to the texture parameters and the reconstructed color space model, and clustering pixel data through a self-adaptive algorithm to complete the segmentation of the target area;
and finally, carrying out region combination on the segmented images, determining a final target region by combining edge detection, and taking the final target region as the mask image.
As an optional implementation manner of this application, the compression coding of the generated mask image by significant bit-plane shifting and multi-level tree set splitting specifically includes:
partitioning the mask image into area blocks and edge blocks;
determining the highest bit plane according to the maximum wavelet coefficient of the mask image, and then coding bit plane by bit plane downward from it; wherein the target-region coefficients have bottom bit planes that carry the lowest bits of the wavelet coefficients and are insignificant bit planes, i.e. NSB;
and finally, carrying out image coding based on a multi-level tree set splitting algorithm.
As an alternative embodiment of the present application, the insignificant bit planes are discriminated as follows:
first, a peak signal-to-noise ratio threshold T for insignificant bit planes is set as the constraint; then, bit plane by bit plane, the reconstruction quality PSNR of the target region is calculated. If PSNR ≥ T, the reconstruction quality of the already-coded planes meets the subjective visual requirement of the human eye, and the planes below the current bit plane are marked as insignificant; otherwise, coding continues with the next bit plane.
As an optional implementation manner of the present application, the network topology structure of the cluster aggregation includes a source node, a target node, a plurality of cluster head nodes, and intra-cluster nodes corresponding to each cluster head node, and the allocating and remote transmitting the compressed and encoded image data by using a distributed transmission mechanism specifically includes:
the source node sends a task transmission instruction to the cluster head node C1;
after the source node receives a feedback instruction from cluster head node C1, the target region is coded directly and transmitted to the next-level cluster head node C2; the background-region information is divided into blocks according to the number of intra-cluster nodes, the blocks are transmitted to the other intra-cluster nodes for compression-coding, and the results are then transmitted to the next-level cluster head node C2;
after the next-level cluster head node C2 integrates the data transmitted from the previous level, the target-region information is coded further and distributed to the next-level cluster head node C3 for data integration; the background-region coded information is distributed directly, without further processing, to the next-level intra-cluster nodes to continue compression until the result reaches the next-level cluster head C3; this is repeated level by level until the transmission-level and image-compression-ratio requirements set for the network are met;
and finally, all the coding information is transmitted to the target node together, and after the data are integrated, the data are transmitted to a background server for reconstructing the image data.
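The level-by-level allocation above can be sketched for a single hop; the node names and payloads below are illustrative stand-ins, not part of the patent:

```python
def distribute(target_bits, background_blocks, cluster_nodes):
    """One-hop allocation sketch: the cluster head forwards the
    target-region code itself (the priority path), while background
    blocks are split round-robin across the intra-cluster nodes."""
    plan = {"cluster_head": target_bits}
    for k, block in enumerate(background_blocks):
        node = cluster_nodes[k % len(cluster_nodes)]
        plan.setdefault(node, []).append(block)
    return plan

# Hypothetical payloads: one target-region code, three background blocks.
plan = distribute(b"TARGET", [b"bg0", b"bg1", b"bg2"], ["n1", "n2"])
```

At the next level, C2 would integrate `plan` and repeat the split until the configured transmission level and compression ratio are reached.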
As an optional implementation manner of the present application, the image recovery model includes a discriminator and a generator, both implemented as neural networks and trained according to the following steps:
training the discriminator;
training the generator;
training the discriminator and the generator alternately; wherein a squeeze-and-excitation (compressed excitation) module is embedded in the convolutional layers of the discriminator and of the generator, respectively.
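The recalibration performed by a squeeze-and-excitation module can be sketched in NumPy as below; the reduction ratio and the random weights are stand-ins for the learned parameters of the patent's improved module:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """SE recalibration of a feature map x with shape (C, H, W).
    w1: (C//r, C) and w2: (C, C//r) are the excitation FC weights."""
    z = x.mean(axis=(1, 2))               # squeeze: global average pooling
    s = np.maximum(w1 @ z, 0.0)           # first FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # second FC + sigmoid gate
    return x * s[:, None, None]           # channel-wise rescaling

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))        # toy feature map, C=8
w1 = rng.standard_normal((2, 8))          # reduction ratio r=4
w2 = rng.standard_normal((8, 2))
y = squeeze_excite(x, w1, w2)
```

Because the gate is a sigmoid, each channel is only ever attenuated, never amplified, which is the channel-attention effect the module contributes inside each convolutional layer.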
In a second aspect: an adaptive compression, transmission and recovery device for wild animal monitoring images, applied to a wireless sensor network that adopts a clustered-convergence network topology, comprising:
the acquisition device is used for acquiring wild animal images in the monitoring area;
a processing module to:
acquiring a wild animal image to be compressed and transmitted, and extracting based on a target area to generate a corresponding mask image; wherein the target region is a foreground region having a wild animal portion distinct from a background region in the wild animal image;
compression-coding the generated mask image by significant bit-plane shifting and multi-level tree set splitting;
the transmission module is used for distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism;
the decoding module is used for restoring the compressed and coded image at the decoding end based on the image recovery model; wherein the recovery model is implemented with a generative adversarial network built on an improved squeeze-and-excitation (compressed excitation) module.
As an optional implementation manner of the present application, the distributed transmission mechanism establishes a wireless-sensor-network distributed image transmission model based on independent coding and joint decoding, and distributes and transmits the data of the target region and the background region separately;
during transmission, the pixels of the image to be transmitted are first classified as marked or unmarked; all marked pixels are then coded and transmitted by the cluster head node, while unmarked pixels are partitioned according to the energy of the intra-cluster nodes and transmitted by those nodes.
As an optional embodiment of the present application, the image recovery model includes a discriminator and a generator, both implemented as neural networks and trained according to the following steps:
training the discriminator;
training the generator;
training the discriminator and the generator alternately; wherein a squeeze-and-excitation (compressed excitation) module is embedded in the convolutional layers of the discriminator and of the generator, respectively.
In a third aspect: an adaptive wildlife monitoring image compression, transmission and recovery apparatus comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to the first aspect.
The wild animal monitoring image self-adaptive compression transmission and recovery method and device provided by the embodiment of the invention have the following beneficial effects:
1. before image compression transmission, the method extracts and obtains the most valuable foreground area based on the target area, and then realizes the priority coding transmission of the target area based on a distributed transmission mechanism in the compression transmission process, thereby greatly reducing the data redundancy in the data transmission process and improving the transmission efficiency of the system.
2. Combining the characteristics of wild animal images, the method reduces data redundancy in compression transmission while an optimized image recovery model guarantees that super-resolution image reconstruction can be completed at the decoding end; the recovered image is clear and rich in content, which gives the method practical application value and provides a reliable data basis for subsequent data analysis.
Drawings
Fig. 1 is a flowchart of a wildlife monitoring image adaptive compression transmission and recovery method according to an embodiment of the present invention;
FIG. 2 is a block diagram of steps of a method according to an embodiment of the present invention;
FIG. 3 is a detailed flow chart of the construction of the color space model of FIG. 2;
fig. 4 is a detailed flowchart of constructing a texture parameter filter according to an embodiment of the present invention;
FIG. 5 is a detailed flowchart of bit-plane transmission based on the non-significant bit-plane NSB for image compression transmission according to an embodiment of the present invention;
FIG. 6 is a node distribution diagram of a multilevel tree set splitting coding method in image compression transmission according to an embodiment of the present invention;
FIG. 7 is a diagram of a data allocation method of a distributed transmission mechanism according to an embodiment of the present invention;
FIG. 8 is a diagram of the generator structure of the generative adversarial network in accordance with an embodiment of the present invention;
fig. 9 is a structural diagram of an adaptive compressing, transmitting and recovering device for monitoring images of wild animals according to an embodiment of the present invention;
fig. 10 is a structural diagram of another adaptive wild animal monitoring image compression, transmission and recovery device according to an embodiment of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in detail below, and it should be noted that the embodiments described herein are only for illustration and are not intended to limit the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those of ordinary skill in the art that: it is not necessary to employ these specific details to practice the present invention.
Throughout the specification, reference to "one embodiment," "an embodiment," "one example" or "an example" means: the particular features, structures, or characteristics described in connection with the embodiment or example are included in at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example" or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Further, those of ordinary skill in the art will appreciate that the illustrations provided herein are for illustrative purposes and are not necessarily drawn to scale.
The present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1 to 8, a method for adaptively compressing, transmitting and recovering a wild animal monitoring image is applied to a wireless sensor network, where the wireless sensor network adopts a network topology structure of clustering convergence, and includes a source node, a target node, a plurality of cluster head nodes, and respective corresponding intra-cluster nodes, which are not described herein again, and the method includes:
s101, acquiring a wild animal image to be compressed and transmitted, and extracting based on a target region to generate a corresponding mask image; wherein the wild animal image (i.e. wild animal monitoring image) is derived from a wild animal automatic triggering device deployed inside the monitoring area; the target region is a foreground region having a wild animal portion distinct from a background region in the wild animal image.
Specifically, the wild animal image comprises a target region and a background region;
the extraction mainly concerns the color parameters and texture parameters of the image. The target region is a local part of the image: a set of foreground pixels with contiguous coordinates in the image coordinate space; in the present invention it mainly refers to the foreground region containing the wild animal, as distinguished from the background. Target-region extraction uses an adaptive mean-shift algorithm, which computes a bandwidth from the pixel features and clusters the pixels. The mask image is a binary image composed of the target region in coordinate space, with no background region inside it, generated based on gray-histogram estimation and edge detection. The method comprises the following steps:
and A1-1, reconstructing an image color space model. Specifically, an input wild animal monitoring image is converted into an LUV color space from an RGB color space, a chrominance channel UV of an LUV space model is kept unchanged, and a luminance channel L and a bilateral filter are convolved to obtain a luminance component
Figure BDA0002830532090000081
Obtaining a reconstructed color space model
Figure BDA0002830532090000082
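A plain-NumPy sketch of the bilateral smoothing applied to the luminance channel in step A1-1; the window radius and the two sigmas are illustrative values, not the patent's:

```python
import numpy as np

def bilateral_filter(L, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing of a 2-D luminance channel L: each
    output pixel is a spatial-and-range weighted mean of its
    neighbourhood, so edges are kept while flat areas are smoothed."""
    L = np.asarray(L, dtype=np.float64)
    H, W = L.shape
    pad = np.pad(L, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - L[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a constant region the filter is an identity, which is the edge-preserving property that motivates its use on the luminance channel.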
And A1-2, extracting image texture parameters. And extracting texture parameters from the input wild animal monitoring image through a Hermite filter.
The Hermite filter is obtained by multiplying a Gaussian window function by a Hermite polynomial. Preferably, to extract valuable texture parameters while reducing computation, the polynomials are selected according to the following principles: the sum of the polynomial orders should be less than 5; since the polynomials H_mn and H_nm yield mutually transposed matrices, either one of the pair may be chosen as the convolution kernel; and since H_11 extracts no texture parameters, it is not chosen as a convolution kernel.
It should be noted that, the steps A1-1 and A1-2 are parallel and have no sequence.
A1-3, segmentation of the target area. And (3) forming a parameter matrix according to the texture parameters and the color parameters obtained in the steps A1-1 and A1-2 (namely, establishing a parameter matrix according to the texture parameters and the reconstructed color space model), setting the parameter matrix as an input matrix, and clustering pixel data of the input matrix by adopting a self-adaptive mean-shift algorithm to complete the segmentation of the target region.
The adaptive mean-shift algorithm selects an appropriate bandwidth according to the characteristics of the image pixels to cluster the pixel data; that is, each sample x_i (i = 1, 2, …, n) uses its own bandwidth h = h(x_i), and the variable-bandwidth kernel density estimate is

f̂(x) = (1/n) Σ_{i=1}^{n} (1/h(x_i)) K((x − x_i)/h(x_i))

where n is the number of sample points and K is a kernel function, symmetric about the origin and integrating to 1 over its domain.
Here h(x_i) is obtained via the following formula:

h(x_i) = r · h_0 / f(x_i)
where h_0 is the average offset of all pixel values from the median M of the image, that is, the average over all pixels of the difference between the pixel value and the median M; r is a proportionality coefficient; and f(x_i) is the probability that a pixel has gray level x_i. Then

h_0 = (1/n) Σ_{(x,y)} |I(x, y) − M|
where n is the number of pixels, (x, y) the pixel coordinates, I(x, y) the value of the pixel at (x, y), and M the pixel median of the whole image, i.e. the value in the middle of the queue when all pixel values of the image are sorted from small to large.
f(x_i) = n_{x_i} / n,  x_i = 0, 1, …, m − 1

where m is the number of gray levels of the image and n_{x_i} the number of pixels with gray level x_i.
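The quantities h_0, f(x_i) and h(x_i) can be computed as below; note that the combining formula h(x_i) = r·h_0/f(x_i) is an assumed reading of the patent's unrendered equation:

```python
import numpy as np

def adaptive_bandwidths(img, r=0.5):
    """Per-pixel mean-shift bandwidths from an integer-valued gray image.
    h0: mean absolute offset from the image median M;
    f:  gray-level probabilities;
    h:  assumed combination h(x_i) = r * h0 / f(x_i)."""
    img = np.asarray(img, dtype=np.float64)
    M = np.median(img)                        # pixel median of the image
    h0 = np.mean(np.abs(img - M))             # average offset from M
    levels, counts = np.unique(img.astype(np.int64), return_counts=True)
    f = dict(zip(levels.tolist(), (counts / img.size).tolist()))
    h = np.vectorize(lambda v: r * h0 / f[int(v)])(img)
    return h0, f, h

h0, f, h = adaptive_bandwidths(np.array([[0, 0], [255, 255]]))
```

Rare gray levels get large bandwidths and frequent ones small bandwidths, which is the adaptivity the algorithm relies on.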
And A1-4, determining the target region with edge detection. Histogram estimation is performed on the grayscale version of the input wild animal monitoring image, and a suitable number of segmentation classes M is determined from the count relations among the different pixel values; the regions produced by the adaptive mean-shift segmentation of step A1-3 are merged until M regions remain. Edge detection is applied to the grayscale image, the edge-detection result is overlaid on the merging result, and the target region is determined (target-region extraction) and taken as the mask image.
S102, compression-coding the generated mask image by significant bit-plane shifting and multi-level tree set splitting.
Specifically, the method comprises the following steps:
a2-1, mask coding. Based on the image target region extracted in step S101, a binary image, which is a corresponding mask image, is generated for the target region. The mask image is partitioned into two major categories: region blocks and edge blocks. The region blocks refer to all 0 and all 1 blocks, and the corresponding symbols are "0" and "511"; the edge block refers to a block including both 0 and 1, and has symbols of "1" to "510". The proportion of the edge block symbols representing the edge information in the symbol sequence is small, and the vast majority of the symbol sequence is composed of the region symbol blocks, so that the continuity is strong. And aiming at the characteristics of high occurrence probability and continuity of the region block symbols, carrying out run-length coding on the region block symbols.
A2-2, bit-plane transmission. Bit-plane coding determines the highest bit plane from the maximum wavelet coefficient of the mask image and then codes bit by bit downward from that plane. The target region contains bottom bit planes that carry the lowest bits of the wavelet coefficients; these are the insignificant bit planes, or NSB. An insignificant bit plane merely refines the quality of the recovered target region and is insensitive to subjective quality evaluation.
Preferably, the insignificant bit planes are discriminated as follows:
first, a peak signal-to-noise ratio threshold T for insignificant bit planes is set as the constraint; then, bit plane by bit plane, the reconstruction quality PSNR of the target region is calculated. If PSNR ≥ T, the reconstruction quality of the already-coded planes meets the subjective visual requirement of the human eye, and the planes below the current bit plane are marked as insignificant; otherwise, coding continues with the next bit plane.
That is, suppose coding ends at the X-th bit plane with the recovered image quality PSNR greater than T. For each wavelet coefficient of amplitude A, losing the bit information below bit plane N gives a distortion amplitude

A_1 = A mod 2^N

where N is the number of the bit plane below which information is lost;
The mean square error MSE of the wavelet coefficient matrix is then

MSE = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} A_1(i, j)²

where m is the length of the image and n its height;
and the reconstructed image quality PSNR can be estimated from the MSE:

PSNR = 10 · log_10(255² / MSE)
If PSNR ≥ T at this point, the bit planes below bit plane N are classified as insignificant bit planes (NSB).
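Putting the three formulas together, the NSB boundary can be searched as below; this sketch assumes the distortion reading A_1 = A mod 2^N and is not the patent's exact procedure:

```python
import numpy as np

def find_nsb(coeffs, T=35.0):
    """Return the largest N such that dropping all bit planes below
    plane N keeps the estimated PSNR of the coefficient matrix >= T."""
    A = np.abs(np.asarray(coeffs, dtype=np.int64))
    best = 0
    for N in range(1, int(A.max()).bit_length() + 1):
        A1 = A % (1 << N)                      # amplitude lost below plane N
        mse = np.mean(A1.astype(np.float64) ** 2)
        psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
        if psnr >= T:
            best = N                           # planes below N are NSB
    return best
```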
And A2-3, coding (i.e. compressing the image) based on the multi-level tree set splitting algorithm. For a node (i, j), let D(i, j), O(i, j) and L(i, j) denote the set of all descendant coordinates, the set of child coordinates, and the set of descendants excluding the children, respectively, so that D(i, j) = O(i, j) ∪ L(i, j). Except for the lowest-frequency nodes, each node has four corresponding coefficients in the adjacent high-frequency subbands.
Multi-level tree set splitting coding defines three lists of coefficients or sets, namely LIP, LSP and LIS: the LIP holds insignificant coefficient nodes, the LSP holds significant coefficient nodes, and the LIS holds insignificant subsets.
Preferably, the encoding process of the multi-level tree set splitting is as follows:
A2-3-1. Initialize the threshold and the ordered lists.
A2-3-2. Scan the LIP. Each node in the LIP is tested for significance as follows:
S_n(i, j) = 1 if |X_{i,j}| ≥ 2^n, and 0 otherwise
If the coefficient X_{i,j} of a node exceeds the current threshold T, the node is classified as significant, added to the LSP, and deleted from the LIP; otherwise it is classified as insignificant and stays in the LIP.
A2-3-3. Scan the LIS. For each set in the LIS, its type is determined. If it is of type D(i, j), its child nodes are tested for significance: significant children are added to the LSP, and children that are all insignificant are added to the LIP. If the non-empty descendant set L(i, j) within D(i, j) also contains significant nodes, D(i, j) is split into O(i, j) and L(i, j), and L(i, j) is divided into four sets whose significance is judged separately, each being added to the LSP or the LIP accordingly. If the set is of type L(i, j) and its descendant nodes contain significant nodes, it is split into sets of type D(i, j), which are added to the LIS, and the original entry is deleted from the LIS.
A2-3-4. Scan the LSP. For every node (i, j) in the LSP that was not newly added during steps A2-3-2 and A2-3-3, output its bit information under the current threshold.
A2-3-5. Adjust the threshold and repeat steps A2-3-2 to A2-3-4.
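The LIP/LIS scans above rest on the set definitions and the significance test. A minimal sketch of that machinery (Python; assumes the usual spatial-orientation tree in which node (i, j) has children at (2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1), and omits the special rule for the lowest-frequency band):

```python
import numpy as np

def offspring(i, j, h, w):
    """O(i, j): the four direct children of node (i, j); empty past the finest level."""
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1), (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    return [(a, b) for a, b in kids if a < h and b < w]

def descendants(i, j, h, w):
    """D(i, j) = O(i, j) + L(i, j): all descendants of (i, j)."""
    out, stack = [], offspring(i, j, h, w)
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(offspring(*node, h, w))
    return out

def significant(coeffs, nodes, n):
    """A node or set is significant at threshold T = 2**n if any |X| >= T."""
    return any(abs(int(coeffs[i, j])) >= (1 << n) for i, j in nodes)
```

For an 8×8 coefficient array, D(1, 1) contains the 4 children plus their 16 children; deeper levels fall outside the array.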
And S103, distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism.
Specifically, as shown in fig. 7, the method includes the following steps:
A3-1. The source node S sends a transmission-task instruction to the cluster head node C1 and asks C1 to allocate the other nodes in its cluster appropriately. After the source node S receives the feedback instruction, it finishes encoding the target region directly and transmits it to the next-level cluster head node C2. The background-region information is divided into blocks according to the number of intra-cluster nodes; the blocks are transmitted to the other intra-cluster nodes for compression-coding operations, and the results are then transmitted to the next-level cluster head node C2.
A3-2. The next-level cluster head node C2 integrates the data transmitted from the previous level, continues encoding the target-region information, and then designates the next-level cluster head node C3 to integrate the data. The background-region coded information is distributed, without any processing at C2, to the intra-cluster nodes of the next level for continued compression, and the results are transmitted to the next-level cluster head C3. Each subsequent level transmits data in the same way until the transmission-level and image-compression-ratio requirements set for the network are met;
and finally, transmitting all the coding information to a target node D together, and after the data are integrated, transmitting the data to a background server for data reconstruction.
That is to say, the distributed transmission mechanism mainly refers to establishing a wireless sensor network distributed image transmission model based on independent coding and joint decoding, in which the target region and the background region are distributed and transmitted separately;
first, the pixels of the image to be transmitted are classified as marked or unmarked: pixels in the target region are marked and pixels in the background region are unmarked. All marked pixels are then coded and transmitted through the cluster head nodes, while the unmarked pixels are divided according to the energy of the intra-cluster nodes and transmitted through them; the cluster head nodes do not participate in coding the background-region data during this process, so that the target region is transmitted to the target node to the greatest possible extent.
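The marked/unmarked split can be sketched as follows (Python; the proportional-to-energy allocation is an assumed rule, since the text only states that unmarked pixels are divided according to intra-cluster node energy):

```python
import numpy as np

def split_for_transmission(mask: np.ndarray, node_energy):
    """Assign pixel coordinates for distributed transmission.

    Marked pixels (mask == 1, the target region) all go to the cluster head;
    unmarked pixels (background) are divided among the intra-cluster nodes,
    here in proportion to each node's remaining energy.
    """
    marked = [tuple(p) for p in np.argwhere(mask == 1)]
    unmarked = [tuple(p) for p in np.argwhere(mask == 0)]
    total = float(sum(node_energy))
    shares, start = [], 0
    for k, e in enumerate(node_energy):
        if k == len(node_energy) - 1:
            count = len(unmarked) - start  # last node takes the remainder
        else:
            count = int(round(len(unmarked) * e / total))
        shares.append(unmarked[start:start + count])
        start += count
    return marked, shares
```

For a 4×4 mask with a 2×2 target and node energies [2, 1, 1], the 4 marked pixels go to the cluster head and the 12 background pixels split 6/3/3 among the intra-cluster nodes.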
S104, restoring the compressed and coded image at the decoding end based on an image restoration model; wherein the restoration model is implemented based on a generative adversarial network built around an improved squeeze-and-excitation module.
Specifically, the image restoration model comprises a discriminator and a generator, and both adopt a neural network, and the detailed process is as follows:
A4-1. Train the discriminator. Noise data are fed into the initialized generator to produce fake samples; the fake samples and real samples are then input together to the discriminator, which is a binary classification model.
Preferably, the image restoration model takes a generative adversarial network as its main body and embeds the improved squeeze-and-excitation module (i.e., the SE module) into it, in order to adjust the weights of the feature parameters and optimize network performance.
The SE module adaptively recalibrates the channel-wise feature responses by explicitly modeling the interdependencies among channels, thereby improving the quality of the network parameters. To capture channel dependencies, each feature map is squeezed into a channel descriptor by global pooling, as shown in the following equation:
z_c = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j)

where z_c denotes the squeezed feature corresponding to the c-th feature map u_c, u_c(i, j) is the pixel value at height i and width j, and H and W are the height and width of u_c. Then, to make full use of the feature information after the squeeze operation, the squeezed feature is fed into a fully connected three-layer neural network whose input and output sizes are equal. To improve the network performance of the SE excitation stage, its activation function is improved as follows:
s = {k1 × σ(g(z, W)) + k2 × σ(z)} × 2
  = {k1 × σ(W2 δ(W1 z)) + k2 × σ(z)} × 2

where s = [s1, s2, s3, …, sc] is the scale vector of the original feature map (here, the compressed mask image); σ and δ denote the sigmoid and ReLU functions, respectively; W1 and W2 are the weights of the input and output layers, respectively; and k1, k2 are proportionality coefficients satisfying k1, k2 ≥ 0 and k1 + k2 = 1.
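A numerical sketch of the squeeze step and this improved excitation (NumPy; the weight matrices w1 and w2 are assumed inputs rather than learned parameters here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_recalibrate(u, w1, w2, k1=0.5, k2=0.5):
    """Squeeze-and-excite a (C, H, W) feature map with the improved function.

    Squeeze: z_c = (1 / (H * W)) * sum_{i,j} u_c(i, j)
    Excite:  s = (k1 * sigmoid(w2 @ relu(w1 @ z)) + k2 * sigmoid(z)) * 2
    Scale:   each channel u_c is multiplied by its scale s_c.
    """
    assert k1 >= 0 and k2 >= 0 and abs(k1 + k2 - 1.0) < 1e-9
    z = u.mean(axis=(1, 2))           # squeeze: one scalar per channel
    hidden = np.maximum(w1 @ z, 0.0)  # delta: ReLU
    s = (k1 * sigmoid(w2 @ hidden) + k2 * sigmoid(z)) * 2.0
    return u * s[:, None, None], s
```

With k1 = 0 and k2 = 1 the excitation degenerates to 2·σ(z), i.e., a plain gated global pool; the k1 branch adds the learned two-layer bottleneck.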
A4-2. Train the generator. The generator's parameters are optimized according to the discriminator's feedback while the discriminator's parameters are held fixed, so that the generated samples approximate real samples; Fig. 8 shows the generator structure of the super-resolution adversarial network.
The decoding end restores the image based on a generative adversarial network comprising a generator and a discriminator; the data-generating model is learned in an adversarial manner, achieving self-optimization.
The generator takes the generative adversarial network (GAN) as its basic framework, with the improved squeeze-and-excitation module embedded into it.
A4-3. Alternate training. The samples produced after each round of generator training approximate the real samples more closely than those of the previous round; steps A4-1 to A4-2 are repeated alternately.
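The alternation in steps A4-1 to A4-3 can be sketched as a generic loop (the training callbacks are hypothetical placeholders for the actual discriminator and generator updates):

```python
def alternate_training(train_disc, train_gen, rounds):
    """Alternate GAN training: each round updates the discriminator with the
    generator fixed (A4-1), then the generator with the discriminator fixed
    (A4-2), repeating for the requested number of rounds (A4-3)."""
    history = []
    for _ in range(rounds):
        d_loss = train_disc()  # discriminator step on real + generated samples
        g_loss = train_gen()   # generator step driven by discriminator feedback
        history.append((d_loss, g_loss))
    return history
```

The key property is the strict interleaving: neither network is ever updated twice in a row within a round.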
Preferably, since the batch normalization (BN) layers in the generator standardize the features, ignoring the absolute differences between them while reducing the range flexibility of the original network, the invention adopts the single-scale super-resolution network model EDSR as the generator and removes the BN layers, thereby improving network performance. To further improve the quality of the network parameters, the improved SE module is embedded into the generator's convolutional layers; likewise, to further improve the discriminator's accuracy, the improved SE module is embedded into the discriminator's convolutional layers. The features of the last three convolutional layers are fused together so that the low-frequency features of the image are better exploited, improving the discriminator's performance. Finally, the fused features are globally pooled and, after a fully connected layer, a sigmoid activation function is used to distinguish real from fake samples.
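A sketch of the discriminator tail just described (NumPy; channel concatenation is assumed as the fusion rule, and the fully connected weights w are an assumed input):

```python
import numpy as np

def discriminator_head(f1, f2, f3, w):
    """Fuse the last three conv feature maps, global-pool, fully connect,
    and apply a sigmoid to score the sample as real (near 1) or fake (near 0)."""
    fused = np.concatenate([f1, f2, f3], axis=0)  # feature fusion over channels
    pooled = fused.mean(axis=(1, 2))              # global pooling: (C1+C2+C3,)
    logit = float(w @ pooled)                     # full connection to one logit
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid activation
```

With zero weights the head is maximally uncertain (output 0.5), which is a convenient sanity check for the wiring.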
The generator restores the image; when the discriminator judges the image to be a high-resolution one, the restored reconstructed image is output, so that the image receiving end obtains a clear wild animal image of practical application value.
In summary, a mask image of the wild animal image is first extracted based on the target region; the mask image is then compression-coded by significant bit plane displacement and multi-level tree set splitting; the image data is next distributed and remotely transmitted through a distributed transmission mechanism; and finally the compressed image is restored at the decoding end based on the image restoration model;
compared with the prior art, the method has the following effects:
1. Before image compression and transmission, the method extracts the most valuable foreground region based on the target region and then prioritizes the reconstruction of the image regions. During compression and transmission, the target region is coded and transmitted preferentially via a distributed transmission mechanism, which greatly reduces data redundancy during transmission and improves the transmission efficiency of the system.
2. The method exploits the characteristics of wild animal images: while reducing data redundancy during compression and transmission, the optimized image reconstruction model enables super-resolution image reconstruction at the decoding end, so that the restored image is clear, rich in content, and of practical application value, providing a reliable data basis for subsequent analysis.
Based on the same inventive concept, referring to fig. 9, an embodiment of the present invention further provides a device for adaptively compressing, transmitting and recovering images of monitored wild animals, which is applied to a wireless sensor network, wherein the wireless sensor network adopts a network topology structure of clustering convergence, and the device comprises:
the acquisition device is used for acquiring wild animal images in the monitoring area; wherein the wildlife image is derived from a wildlife auto-triggering device deployed inside the monitoring region.
The processing module is used for acquiring a wild animal image to be compressed and transmitted and then extracting the wild animal image based on the target area to generate a corresponding mask image; wherein the target region is a foreground region having a wild animal portion distinct from a background region in the wild animal image;
Here, the extraction is based mainly on the color and texture parameters of the image. The target region is a local part of the image: a set of foreground pixels occupying contiguous coordinates in the image coordinate space, mainly the foreground region containing the wild animal as distinct from the background. Target-region extraction uses an adaptive mean-shift algorithm, which mainly computes the bandwidth from the pixel features and clusters the pixels. The mask image is a binary image consisting of the target region with no background region inside it in coordinate space, generated based on gray-histogram estimation and edge detection.
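A minimal sketch of the mean-shift ascent used for pixel clustering (flat kernel with a fixed bandwidth; the adaptive per-pixel bandwidth computation described above is omitted):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth, iters=100):
    """Shift `start` to the mean of the samples inside `bandwidth` until it
    stops moving; pixels converging to the same mode form one cluster."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        window = points[np.linalg.norm(points - x, axis=1) < bandwidth]
        if len(window) == 0:
            break
        new_x = window.mean(axis=0)
        if np.allclose(new_x, x):
            break
        x = new_x
    return x
```

Starting points near different pixel clusters converge to different modes, which is what separates target-region pixels from background pixels in feature space.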
Performing compression coding on the generated mask image by using a method of important bit plane displacement and multi-level tree set splitting;
A bit plane can be described as follows: a decimal value is converted to binary, and the bit plane containing the binary digit at position i (counting from left to right) is denoted the i-th bit plane. Bit-plane displacement is implemented based on the significance partition of the target-region bit planes. Multi-level tree set splitting is used mainly for compression coding of the image.
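For an 8-bit image this bit-plane numbering can be illustrated as follows (a small sketch; position i = 1 is the most significant bit, matching the left-to-right numbering above):

```python
import numpy as np

def bit_plane(img: np.ndarray, i: int) -> np.ndarray:
    """i-th bit plane of an 8-bit image, numbering the binary digits from
    left (i = 1, most significant) to right (i = 8, least significant)."""
    assert 1 <= i <= 8
    return (img.astype(np.uint8) >> (8 - i)) & 1
```

For example, the value 200 is 0b11001000, so its first bit plane holds a 1 and its eighth (least significant) holds a 0.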
Preferably, the image compression coding process performs shape coding on the mask image generated from the extracted target region using bit-plane displacement: the bit planes are expanded, increasing the number of bit planes of the wavelet coefficients, and the distribution of the target-region and background coefficients over the bit planes is then adjusted in different ways, so that the target region is coded preferentially.
The transmission module is used for distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism;
the distributed transmission mechanism mainly refers to the establishment of a wireless sensor network distributed image transmission model based on independent coding and joint decoding, and data distribution transmission is respectively carried out on a target region and a background region.
Specifically, the pixels of the monitoring image are first classified as marked or unmarked; all marked pixels are then coded and transmitted through the cluster head nodes, while the unmarked pixels are divided according to the energy of the intra-cluster nodes and transmitted through them. The cluster head nodes do not participate in coding the background-region data during this process, so that the target region is transmitted to the target node to the greatest possible extent.
The decoding module is used for restoring the compressed and coded image at the decoding end based on the image restoration model; wherein the restoration model is implemented based on a generative adversarial network built around an improved squeeze-and-excitation module.
The image recovery model comprises a discriminator and a generator, adopts a neural network, and is generated by training according to the following steps:
training the discriminator;
training the generator;
training the discriminator and the generator alternately, wherein the squeeze-and-excitation modules are embedded in the convolutional layers of the discriminator and the generator, respectively. By embedding the improved squeeze-and-excitation module (SE module) into the network and optimizing the loss function, the weights of the feature parameters are adjusted and network performance is optimized, achieving restoration of the wild animal monitoring image.
It should be noted that, in the device embodiment, specific function implementation steps and workflows of each module are the same as those described in the foregoing method embodiment, and reference may be made to the foregoing description, which is not repeated herein.
Optionally, the embodiment of the present invention also provides another adaptive compression transmission and recovery device for wild animal monitoring images. As shown in fig. 10, it may include: one or more processors 101, one or more input devices 102, one or more output devices 103, and a memory 104, interconnected via a bus 105. The memory 104 is used for storing a computer program comprising program instructions, and the processor 101 is configured to call the program instructions to execute the adaptive compression transmission and recovery method described in the method embodiment.
It should be understood that, in the embodiment of the present invention, the processor 101 may be a Central Processing Unit (CPU); the processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (LCD or the like), a speaker, or the like.
The memory 104 may include read-only memory and random access memory, and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store device type information.
In specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiment of the present invention may execute the implementation manner described in the embodiment of the method for adaptive compression transmission and recovery of a monitored image of a wild animal provided in the embodiment of the present invention, and are not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (8)

1. A wild animal monitoring image self-adaptive compression transmission and recovery method is applied to a wireless sensor network, wherein the wireless sensor network adopts a network topology structure of clustering convergence, and is characterized by comprising the following steps:
acquiring a wild animal image to be compressed and transmitted, and extracting based on a target region to generate a mask image corresponding to the target region; wherein the wildlife image is derived from a wildlife auto-triggering device deployed inside the monitoring region; the target region is a foreground region with a wild animal part, which is different from a background region in the wild animal image;
performing compression coding on the generated mask image by using a method of important bit plane displacement and multi-level tree set splitting;
distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism;
restoring the compressed and coded image on the decoding end based on an image restoration model; wherein the recovery model is implemented based on a generative adversarial network built around an improved squeeze-and-excitation module;
the compressing and encoding the generated mask image by a method of significant bit plane displacement and multi-level tree set splitting specifically includes:
partitioning the mask image into region blocks and edge blocks; wherein the region blocks are the all-0 and all-1 blocks, and the edge blocks are blocks containing both 0 and 1; the edge-block symbols representing edge information account for only a small proportion of the symbol sequence, the vast majority of which consists of region symbol blocks with strong continuity, so the region-block symbols are run-length coded according to this high-probability, continuous characteristic;
determining the highest bit plane according to the maximum value of the wavelet coefficients of the mask image, and then coding bit plane by bit plane starting from the highest one; wherein the bottom layer of the target-region coefficients, which carries the lowest bit information of the wavelet coefficients, is the insignificant layer, namely the NSB;
finally, image coding is carried out based on a multi-level tree set splitting algorithm;
the procedure of the insignificant bit plane discrimination method is as follows:
firstly, setting a peak signal-to-noise ratio threshold T for insignificant bit planes as a constraint condition; then calculating the reconstruction quality PSNR of the target region bit plane by bit plane; if PSNR ≥ T, the reconstruction quality of the coded bit planes meets the subjective visual requirement of human eyes, and the bit planes below the current one are marked as insignificant bit planes; otherwise, continuing to code the next bit plane.
2. The wildlife monitoring image adaptive compression transmission and recovery method according to claim 1, wherein the obtaining of the wildlife image to be compressed and transmitted and then extracting based on the target area to generate a corresponding mask image specifically comprises:
reconstructing an image color space model;
extracting image texture parameters;
establishing a parameter matrix according to the texture parameters and the reconstructed color space model, and clustering pixel data through a self-adaptive algorithm to complete the segmentation of the target area;
and finally, carrying out region combination on the segmented images, determining a final target region by combining edge detection, and taking the final target region as the mask image.
3. The wildlife monitoring image adaptive compression transmission and recovery method according to claim 1, wherein the clustered and converged network topology structure includes a source node, a target node, a plurality of cluster head nodes and intra-cluster nodes corresponding to each cluster head node, and the compressed and encoded image data is distributed and remotely transmitted through a distributed transmission mechanism, specifically comprising:
the source node sends a task transmission instruction to the cluster head node C1;
after the source node receives a feedback instruction of the cluster head node C1, the target area is directly coded and transmitted to a cluster head node C2 of the next stage; the background region information divides the image information into blocks according to the number of nodes in the cluster, transmits the block information to other nodes in the cluster respectively for compression coding processing operation, and then transmits the result to a cluster head node C2 at the next stage;
after the cluster head node C2 at the next level integrates the transmission data at the previous level, the target area information is continuously coded and then distributed to the cluster head node C3 at the next level for data integration; the background area coding information is directly distributed to the next level of cluster nodes without any processing to continue compression processing until the processing result is transmitted to the next level of cluster head C3; and so on, the data transmission is carried out step by step until the requirements of the transmission stage and the image compression ratio in the set network are met;
and finally, all the coding information is transmitted to the target node together, and after the data are integrated, the data are transmitted to a background server for reconstructing the image data.
4. The wildlife monitoring image adaptive compression transmission and recovery method as claimed in claim 3, wherein the image recovery model comprises a discriminator and a generator, both of which adopt a neural network, and is generated by training according to the following steps:
training the discriminator;
training the generator;
alternately training the discriminator and the generator; wherein the squeeze-and-excitation modules are embedded in the convolutional layers of the discriminator and the generator, respectively.
5. A wildlife monitoring image self-adaptive compression transmission and recovery device adopting the method of claim 1, which is applied to a wireless sensor network, wherein the wireless sensor network adopts a network topology structure of clustering convergence, and the device comprises:
the acquisition device is used for acquiring wild animal images in the monitoring area;
a processing module to:
acquiring a wild animal image to be compressed and transmitted, and extracting based on a target area to generate a corresponding mask image; wherein the target region is a foreground region having a wild animal portion distinct from a background region in the wild animal image;
performing compression coding on the generated mask image by using a method of important bit plane displacement and multi-level tree set splitting;
the transmission module is used for distributing and remotely transmitting the compressed and coded image data through a distributed transmission mechanism;
the decoding module is used for restoring the compressed and coded image at the decoding end based on the image restoration model; wherein the recovery model is implemented based on a generative adversarial network built around an improved squeeze-and-excitation module;
the compressing and encoding the generated mask image by a method of significant bit plane displacement and multi-level tree set splitting specifically includes:
partitioning the mask image into region blocks and edge blocks; wherein the region blocks are the all-0 and all-1 blocks, and the edge blocks are blocks containing both 0 and 1; the edge-block symbols representing edge information account for only a small proportion of the symbol sequence, the vast majority of which consists of region symbol blocks with strong continuity, so the region-block symbols are run-length coded according to this high-probability, continuous characteristic;
determining the highest bit plane according to the maximum value of the wavelet coefficients of the mask image, and then coding bit plane by bit plane starting from the highest one; wherein the bottom layer of the target-region coefficients, which carries the lowest bit information of the wavelet coefficients, is the insignificant layer, namely the NSB;
and finally, carrying out image coding based on a multi-level tree set splitting algorithm.
6. The wildlife monitoring image adaptive compression transmission and recovery device as claimed in claim 5, wherein the distributed transmission mechanism is a wireless sensor network distributed image transmission model established based on independent coding and joint decoding, and performs data distribution transmission on the target region and the background region respectively;
during transmission, firstly, the pixel points in the image to be transmitted are classified according to marks and unmarked, then all the marked pixel points are coded and transmitted through the cluster head nodes, and the unmarked pixel points are divided according to the energy of the cluster nodes and then transmitted through the cluster nodes.
7. The wildlife monitoring image adaptive compression, transmission and recovery device as claimed in claim 6, wherein the image recovery model comprises a discriminator and a generator, both of which adopt a neural network, and is generated by training according to the following steps:
training the discriminator;
training the generator;
alternately training the discriminator and the generator; wherein the squeeze-and-excitation modules are embedded in the convolutional layers of the discriminator and the generator, respectively.
8. An adaptive compression transmission and recovery device for wildlife monitoring images, comprising a processor, an input device, an output device and a memory, wherein the processor, the input device, the output device and the memory are connected with each other, wherein the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute the method according to any one of claims 1-4.
CN202011441971.1A 2020-12-11 2020-12-11 Self-adaptive compression transmission and recovery method and device for wild animal monitoring image Active CN112616054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011441971.1A CN112616054B (en) 2020-12-11 2020-12-11 Self-adaptive compression transmission and recovery method and device for wild animal monitoring image


Publications (2)

Publication Number Publication Date
CN112616054A CN112616054A (en) 2021-04-06
CN112616054B true CN112616054B (en) 2023-03-03

Family

ID=75232680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011441971.1A Active CN112616054B (en) 2020-12-11 2020-12-11 Self-adaptive compression transmission and recovery method and device for wild animal monitoring image

Country Status (1)

Country Link
CN (1) CN112616054B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837945B (en) * 2021-09-30 2023-08-04 福州大学 Display image quality optimization method and system based on super-resolution reconstruction

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332162A (en) * 2011-09-19 2012-01-25 西安百利信息科技有限公司 Method for automatic recognition and stage compression of medical image regions of interest based on artificial neural network
CN102833536A (en) * 2012-07-24 2012-12-19 南京邮电大学 Distributed video encoding and decoding method facing to wireless sensor network
CN103561242A (en) * 2013-11-14 2014-02-05 北京林业大学 Wild animal monitoring system based on wireless image sensor network
CN104581167A (en) * 2014-03-07 2015-04-29 华南理工大学 Distributed image compression transmission method for wireless sensor network
CN105846960A (en) * 2016-04-22 2016-08-10 中国矿业大学 Data compression coding and reliable transmission method of distributed real-time monitoring information source
CN108990130A (en) * 2018-09-29 2018-12-11 南京工业大学 Distributed compressed sensing QoS routing method based on cluster
CN109982085A (en) * 2017-12-28 2019-07-05 新岸线(北京)科技集团有限公司 A kind of method of high precision image mixing compression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10652583B2 (en) * 2016-08-19 2020-05-12 Apple Inc. Compression of image assets


Also Published As

Publication number Publication date
CN112616054A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN112801895B (en) Two-stage attention-mechanism-based GAN image restoration algorithm
CN113178255B (en) GAN-based medical diagnosis model attack resistance method
US20180096232A1 (en) Using image analysis algorithms for providing training data to neural networks
CN110225341A (en) Task-driven code-stream-structured image encoding method
CN110225350B (en) Natural image compression method based on generative adversarial networks
CN111192211B (en) Multi-noise type blind denoising method based on single deep neural network
CN110363068B (en) High-resolution pedestrian image generation method based on a multi-scale recurrent generative adversarial network
US20220277492A1 (en) Method and data processing system for lossy image or video encoding, transmission and decoding
CN111491167B (en) Image encoding method, transcoding method, device, equipment and storage medium
Chamain et al. End-to-end image classification and compression with variational autoencoders
US11798254B2 (en) Bandwidth limited context based adaptive acquisition of video frames and events for user defined tasks
CN112616054B (en) Self-adaptive compression transmission and recovery method and device for wild animal monitoring image
CN112333451A (en) Intra-frame prediction method based on generative adversarial networks
CN117151990B (en) Image defogging method based on self-attention coding and decoding
CN116797437A (en) End-to-end image steganography method based on generative adversarial networks
Wang et al. Adaptive image compression using GAN based semantic-perceptual residual compensation
CN107256554B (en) Single-layer pulse neural network structure for image segmentation
Dash et al. CompressNet: Generative compression at extremely low bitrates
CN116896638A (en) Data compression coding technology for transmission operation detection scene
CN106961607A (en) JND-based time-domain lapped transform multiple description encoding and decoding method and system
CN115866265A (en) Multi-code-rate depth image compression system and method applied to mixed context
CN111163320A (en) Video compression method and system
CN113902000A (en) Model training, synthetic frame generation, video recognition method and device and medium
CN106897674B (en) On-orbit remote sensing image urban-area detection method based on JPEG2000 code streams
Akutsu et al. End-to-End Deep ROI Image Compression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant