CN115731447A - Decompressed image target detection method and system based on attention mechanism distillation - Google Patents

Decompressed image target detection method and system based on attention mechanism distillation Download PDF

Info

Publication number
CN115731447A
CN115731447A CN202211420783.XA CN202211420783A CN115731447A CN 115731447 A CN115731447 A CN 115731447A CN 202211420783 A CN202211420783 A CN 202211420783A CN 115731447 A CN115731447 A CN 115731447A
Authority
CN
China
Prior art keywords
target detection
network
quality image
detection
distillation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211420783.XA
Other languages
Chinese (zh)
Inventor
廖飞龙
刘冰倩
林爽
翁宇游
莫文昊
安康
辛宇晨
郑州
黄建业
杨彦
李扬笛
武欣欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Original Assignee
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electric Power Research Institute Co Ltd CEPRI, Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd filed Critical China Electric Power Research Institute Co Ltd CEPRI
Priority to CN202211420783.XA priority Critical patent/CN115731447A/en
Publication of CN115731447A publication Critical patent/CN115731447A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a decompressed image target detection method based on attention mechanism distillation, which comprises the following steps: s1, acquiring a high-quality image data set, and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression; s3, training a target detection teacher network based on the high-quality image data set; s4, detecting a teacher network based on the trained target, and adding a distillation loss training target detection student network based on attention; and S5, carrying out target detection on the decompressed image based on the trained student network. The invention realizes the extraction of higher-quality image features from the decompressed image and effectively improves the target detection performance of the low-quality image.

Description

Decompressed image target detection method and system based on attention mechanism distillation
Technical Field
The invention relates to the field of image target detection, in particular to a method and a system for detecting a decompressed image target based on attention mechanism distillation.
Background
As a key technology in the fields of automatic driving, intelligent monitoring and the like, a target detection algorithm is one of the most popular research directions in the field of computer vision today. In recent years, with the rapid development of deep learning, the target detection method based on deep learning has achieved remarkable performance. However, in some practical application scenarios, high-quality clean images are difficult to obtain (bandwidth limitation, high-quality images are difficult to transmit), most images are decompressed images, and compression of the images inevitably brings a certain decrease in image quality. Even in some specific occasions, such as field monitoring and detection, a large amount of data needs to be detected, and due to the limitation of equipment and bandwidth, a large compression ratio needs to be adopted to compress the data, and when the compression ratio is large, the quality of the decompressed image is rapidly reduced, so that when a target detection network detects the decompressed low-quality image, serious detection omission and false detection situations often occur, and the target detection network almost fails in an actual application scene.
Disclosure of Invention
In view of this, the present invention provides a method and a system for detecting a target of a decompressed image based on attention-driven distillation, so as to extract a higher quality image feature from the decompressed image and effectively improve the target detection performance of a low quality image.
In order to achieve the purpose, the invention adopts the following technical scheme:
a decompressed image target detection method based on attention-based distillation comprises the following steps:
s1, acquiring a high-quality image data set, and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
s2, constructing a target detection teacher network and a target detection student network;
s3, training a target detection teacher network based on the high-quality image data set;
s4, detecting a teacher network based on the trained target, and adding a distillation loss training target detection student network based on attention;
and S5, carrying out target detection on the low-quality image based on the trained student network.
Further, the target detection teacher network is constructed based on YOLOv3 or YOLOv5s, and the backbone network is fixed and the detection head is removed in the training process.
Further, the target detection student network is based on YOLOv3 or YOLOv5s, and an attention mechanics learning module is added in front of each branch detection head of different scales of YOLOv3 or YOLOv5 s.
Further, the attention learning module includes a transpose convolution layer, N residual blocks, and an average pool layer.
Further, step S4 specifically includes:
taking the high-quality image data set as the input of a trained target detection teacher network, taking the corresponding decompressed low-quality image data set as the input of a target detection student network, fixing the parameters of the target detection teacher network, and then extracting the high-quality features z from the target detection teacher network t And low quality features z extracted by target detection student network s And calculating distillation loss, and training the student network by adding the detection loss of the target detection network.
Further, knowledge-based distillation techniques drive the decompressed image degradation characteristics to approximate the high quality image characteristics as expressed by the following equation:
Figure BDA0003940333890000031
wherein t and s represent a teacher network and a student network, respectively, f represents a backbone network having a parameter θ, and z t =f t (x;θ t ) Representing high quality features extracted from the high quality image x,
Figure BDA0003940333890000032
Figure BDA0003940333890000033
representing images from decompression
Figure BDA0003940333890000034
Extracting degradation features; d represents a certain distance or divergence measure in the feature space.
Further, the detection loss is expressed as:
Figure BDA0003940333890000035
where ω denotes an attention map of size 1 × C × H × w, the latter term being a regularization term due to sparsity; and setting R (omega) = | | | omega | | non-woven phosphor 1
Detecting loss L det The device is divided into three parts:
Figure BDA0003940333890000036
Figure BDA0003940333890000037
Figure BDA0003940333890000038
wherein λ in each equation represents the different weight magnitudes of the three components; s. the 2 A signature graph size representing the output of the detection network; b represents the number of detection frames allocated to each grid;
Figure BDA0003940333890000039
represents that when the subscript is i, j, the detection frame has objects, the subscript is 1, and the rest is 0; p is a radical of formula i (c) The probability of the object being a class c object is obtained;
the loss of detection can be briefly expressed as:
L det =L box +L cls +L obj
the loss of the final trained student network is then:
L=L det +λ*L dis
an apparatus for decompressed image object detection based on attention-based distillation, comprising:
the data acquisition module is used for acquiring a high-quality image data set and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
the model building module is used for building a target detection teacher network and a target detection student network;
the model training module is used for training a target detection teacher network based on a high-quality image data set, and adding a distillation loss training target detection student network based on attention to the trained target detection teacher network;
and the detection module is used for carrying out target detection on the low-quality image based on the trained student network.
A decompressed image object detection system based on attention-driven distillation, comprising a processor, a memory and a computer program stored on the memory, wherein the processor executes the computer program, and specifically executes the steps of the decompressed image object detection method as described above.
A computer readable storage medium comprising a program executable by a processor to implement the method as described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, through the knowledge distillation technology based on the self-attention mechanism, important areas in image features can be focused, a network is prompted to extract image features with higher quality from a decompressed image, the detection precision of the existing target detection method based on deep learning on the low-quality image (decompressed image) is improved, and the generalization and popularization of the method are enhanced.
Drawings
FIG. 1 is the overall architecture of the present invention;
FIG. 2 is a high quality picture and decompressed picture inspection results and extracted features and difference maps in an embodiment of the present invention, wherein (a) the inspection results of the high quality image, (b) the inspection results of the low quality decompressed image, (c) the features of the high quality image extraction, (d) the features of the low quality decompressed image extraction, (e) the differences between the high quality features and the low quality features;
FIG. 3 is a block diagram of yolov3-tiny according to an embodiment of the present invention;
FIG. 4 is a view of the structure of yolov5s in one embodiment of the present invention;
fig. 5 is a graph of features extracted by different algorithms in an embodiment of the present invention, where (a) is features extracted from high quality images, (b) is features extracted from low quality decompressed images, (c) is features extracted from the detector after Aug algorithm enhancement training, and (d) is features recovered from the model after l2 norm distillation. (e) For the function of the invention to restore, the first line is the visualization of the feature, the second line is the difference plot between the high quality feature and the corresponding restoration feature;
fig. 6 is a detection visualization of different algorithms in an embodiment of the invention, wherein (a) the detection results of a high quality image, (b) the detection results of a low quality decompressed image, (c) the detection results of the l2 norm distillation method on the low quality decompressed image, and (d) the detection results of the invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1 to 5, the present embodiment provides a method for detecting an object in a decompressed image based on attention-driven distillation, comprising the following steps:
s1, acquiring a high-quality image data set, and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
s2, constructing a target detection teacher network and a target detection student network;
s3, training a target detection teacher network based on the high-quality image data set;
s4, detecting a teacher network based on the trained target, and adding a distillation loss training target detection student network based on attention;
and S5, performing target detection on the low-quality image based on the trained student network.
In this example, the dataset selected COCO2017 as a high quality image dataset consisting of 118287 training pictures and 5000 test pictures.
In this embodiment, referring to fig. 1, a YOLO series single-stage detector is used, the input of the teacher network is a high-quality image, and the structure of the student network is the same as that of the teacher, but the decompressed image is used as the input. As shown in fig. 1, YOLOv3 consists of two parts, a backbone for feature extraction and a detection head for classification and bounding box regression. In this embodiment, the teacher's backbone is used to extract high quality features, thus fixing the backbone network and removing the head during training. For the student model, the stem and head are retained and initialized with pre-trained parameters for better convergence. Paired images, i.e., high quality images and corresponding decompressed images, are input into the teacher and student networks, respectively.
In this embodiment, an attention-aware feature extraction method is proposed, in which a learned attention map is used as a weight of an l2 norm. Due to the different weights of distillation for different regions, the method of the present invention allows the degraded features extracted from the decompressed image to better align with the corresponding high quality features.
Knowledge distillation techniques that drive the decompressed image degradation characteristics closer to the high quality image characteristics can be expressed as the following equation:
Figure BDA0003940333890000071
wherein t and s represent a teacher network and a student network, respectively, f represents a backbone network having a parameter θ, and z t =f t (x;θ t ) Representing high quality features extracted from the high quality image x,
Figure BDA0003940333890000072
Figure BDA0003940333890000073
representing images from decompression
Figure BDA0003940333890000074
The extracted degradation features. d represents some measure of distance (or divergence) in the feature space.
The importance of different regions of the feature map is not the same for the object detection task. Similarly, by visualizing the features, the results show a difference mapping between high quality features and degraded features, as shown in fig. 2 (c), (d), and (e). Therefore, it is not appropriate to consider the importance of each region as the same constant.
The invention represents the importance of different regions in the feature map by learning an attention map and weights (denoted as ω) and applies it to distillation loss. Suppose the importance of an image feature is defined as z s And z t The larger the difference between them, i.e. the difference of a certain region in the feature (e.g. the edge and texture regions shown in fig. 2 (e)), the more important it is in the feature extraction process, and the larger the value of ω should be.
The invention learns the attention view omega (as shown in fig. 1) by adding an attention learning module branch to the student model, and the proposed attention-aware feature extraction loss can be expressed as:
Figure BDA0003940333890000081
where ω denotes an attention map of size 1 × C × H × w, the latter term being a regularization term due to sparsity
Setting R (omega) = | | | | omega | | non-magnetic wind 1
The final loss in training the student network is:
L=L det +λ*L dis
the learned values of the attention map ω measure the difficulty/importance of feature reconstruction. If z is t And z s With a larger gap between, the student network tends to learn a larger weight ω to reduce losses. Conversely, once ω increases, the second term in the loss function will also increase, which will cause the model optimization to decrease z t And z s This makes the student model more concerned about difficult/important areas in the feature map. Therefore, the student network can better enhance the characteristics under the guidance of the teacher network and the attention, and the detection accuracy is improved.
Wherein the attention learning module includes a transposed convolutional layer for magnifying the feature map, N residual blocks, and an average pool layer for restoring the original resolution.
Preferably, N is 3, the network structure of the learning module is as follows:
input layer → first deconvolution layer (upsampling) → first activation function layer → second convolution layer → second activation function layer → third convolution layer → third activation function layer → fourth convolution layer → fourth activation function layer → fifth convolution layer → fifth activation function layer → sixth convolution layer → sixth activation function layer → seventh convolution layer → seventh activation function layer → eighth convolution layer → eighth activation function layer → ninth convolution layer → ninth activation function layer → tenth convolution layer → tenth activation function layer → first average stratification layer (downsampling) → eleventh convolution layer → eleventh activation function layer → twelfth convolution layer → twelfth activation function layer → thirteenth convolution layer → output layer
Example 1:
in this example, the experimental quantification results using yolov3-tiny as the detector are shown in table 1, and the experimental quantification results using yolov5s as the detector are shown in table 2. The various comparative experimental methods were performed in the order described above from top to bottom.
TABLE 1 yolov3-tiny comparative experiment results
Figure BDA0003940333890000091
TABLE 2 yolov5s comparative experimental results
Figure BDA0003940333890000101
From the above experimental results, it can be found that the present invention proposes a new distillation loss function based on attention mechanism to train a target detection network, which mainly performs target detection on low-quality decompressed images. By introducing an attention mechanism into distillation loss of the target detection network, the importance of different areas of the high-quality characteristic image characteristic diagram can be learned at the same time, and the network is prompted to pay more attention to learning more important areas such as object edges. In addition, the method of the invention has better effect than the existing optimal method Aug, and is easier to popularize in other target detection tasks. In addition, referring to fig. 5, the present invention demonstrates for the first time that such attention-based distillation losses can be better characterized than MSE distillation losses, while having better detection. The experimental results on two common target detection networks show that the distillation loss based on the attention mechanism provided by the invention can obtain a better detection result on the task of decompressing image target detection. A partial target detection visualization is shown in fig. 6.
Example two
Based on the same inventive concept, the application also provides a decompressed image target detection device based on attention-driven distillation, which comprises:
the data acquisition module is used for acquiring a high-quality image data set and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
the model building module is used for building a target detection teacher network and a target detection student network;
the model training module is used for training a target detection teacher network based on a high-quality image data set, and adding a distillation loss training target detection student network based on attention to the trained target detection teacher network;
and the detection module is used for carrying out target detection on the low-quality image based on the trained student network.
EXAMPLE III
Based on the same inventive concept, the present application further provides a decompressed image object detection system based on attention-driven distillation, which includes a processor, a memory, and a computer program stored in the memory, wherein when the processor executes the computer program, the steps in the decompressed image object detection method described above are specifically performed.
Example four
Based on the same inventive concept, the present application also provides a computer-readable storage medium comprising a program, which is executable by a processor to implement the method as described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.

Claims (10)

1. A decompression image target detection method based on attention mechanism distillation is characterized by comprising the following steps:
s1, acquiring a high-quality image data set, and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
s2, constructing a target detection teacher network and a target detection student network;
s3, training a target detection teacher network based on the high-quality image data set;
s4, detecting a teacher network based on the trained target, and adding a distillation loss training target detection student network based on attention;
and S5, carrying out target detection on the low-quality image based on the trained student network.
2. The method for detecting the target of the decompressed image based on the attention mechanism distillation as claimed in claim 1, wherein the target detection teacher network is constructed based on YOLOv3 or YOLOv5s, and the main network is fixed and the detection head is removed during the training process.
3. The method as claimed in claim 1, wherein the object detection student network is based on YOLOv3 or YOLOv5s, and an attention learning module is added before each branch detection head of different scales of YOLOv3 or YOLOv5 s.
4. The method of claim 3, wherein the attention learning module comprises a transposed convolution layer, N residual blocks, and an average cell layer.
5. The method for detecting the target of the decompressed image based on attention-machine-based distillation according to claim 1, wherein the step S4 is specifically as follows:
taking a high-quality image data set as the input of a trained target detection teacher network, taking a corresponding decompressed low-quality image data set as the input of a target detection student network, fixing the parameters of the target detection teacher network, and extracting high-quality features z from the target detection teacher network t And low-quality features z extracted by target detection student network s Calculation of the distillation loss L dis It is represented as follows:
Figure FDA0003940333880000021
wherein z is t Representing high quality features extracted from a high quality image x, z s Representing images from decompression
Figure FDA0003940333880000025
Extracting degradation features; ω denotes an attention map of size 1 × C × H × w, the latter term being a regularization term due to sparsity; and setting R (omega) = | | | omega | | non-woven phosphor 1 (ii) a And training the student network by adding the detection loss of the target detection network on the basis of the distillation loss.
6. The method of claim 5, wherein the method for detecting the target of the decompressed image based on attention-driven distillation is characterized in that the method for promoting the degradation characteristic of the decompressed image to be close to the characteristic of the high-quality image based on the knowledge distillation technology is expressed by the following formula:
Figure FDA0003940333880000022
wherein, theta s Parameters representing student networks, t and s representing teacher network and student network, respectively, f representing a backbone network having a parameter θ, z t =f t (x;θ t ) Representing high quality features extracted from the high quality image x,
Figure FDA0003940333880000023
representing images from decompression
Figure FDA0003940333880000026
Extracting degradation features; d represents a certain distance or divergence measure in the feature space.
7. The method of claim 6, wherein the distillation loss is expressed as:
Figure FDA0003940333880000024
where ω denotes an attention map of size 1 × C × H × w, the latter term being a regularization term due to sparsity; and setting R (omega) = | | | | omega | | non-magnetic wind 1
Detecting loss L det The method comprises the following three parts:
Figure FDA0003940333880000031
Figure FDA0003940333880000032
Figure FDA0003940333880000033
wherein λ in each equation represents the different weight magnitudes of the three components; s 2 A signature graph size representing the output of the detection network; b represents the number of detection frames allocated to each grid;
Figure FDA0003940333880000034
represents that when the subscript is i, j, the detection frame has objects, the subscript is 1, and the rest is 0; p is a radical of formula i (c) The probability of the object being a class c object is obtained;
the loss of detection can be briefly expressed as:
L det =L box +L cls +L obj
the loss of the final trained student network is then:
L=L det +λ*L dis
8. an apparatus for detecting an object in a decompressed image based on attention-based distillation, comprising:
the data acquisition module is used for acquiring a high-quality image data set and acquiring a corresponding decompressed low-quality image data set from the high-quality image data set through compression and decompression;
the model building module is used for building a target detection teacher network and a target detection student network;
the model training module is used for training a target detection teacher network based on a high-quality image data set, and adding a distillation loss training target detection student network based on attention to the trained target detection teacher network;
and the detection module is used for carrying out target detection on the low-quality image based on the trained student network.
9. A decompressed image object detection system based on attention-based distillation, comprising a processor, a memory and a computer program stored on the memory, wherein the processor executes the computer program and specifically executes the steps of the decompressed image object detection method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-7.
CN202211420783.XA 2022-11-13 2022-11-13 Decompressed image target detection method and system based on attention mechanism distillation Pending CN115731447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211420783.XA CN115731447A (en) 2022-11-13 2022-11-13 Decompressed image target detection method and system based on attention mechanism distillation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211420783.XA CN115731447A (en) 2022-11-13 2022-11-13 Decompressed image target detection method and system based on attention mechanism distillation

Publications (1)

Publication Number Publication Date
CN115731447A true CN115731447A (en) 2023-03-03

Family

ID=85295567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211420783.XA Pending CN115731447A (en) 2022-11-13 2022-11-13 Decompressed image target detection method and system based on attention mechanism distillation

Country Status (1)

Country Link
CN (1) CN115731447A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778300A (en) * 2023-06-25 2023-09-19 北京数美时代科技有限公司 Knowledge distillation-based small target detection method, system and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778300A (en) * 2023-06-25 2023-09-19 北京数美时代科技有限公司 Knowledge distillation-based small target detection method, system and storage medium
CN116778300B (en) * 2023-06-25 2023-12-05 北京数美时代科技有限公司 Knowledge distillation-based small target detection method, system and storage medium

Similar Documents

Publication Publication Date Title
CN113240580B (en) Lightweight image super-resolution reconstruction method based on multi-dimensional knowledge distillation
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
JP2022548712A (en) Image Haze Removal Method by Adversarial Generation Network Fusing Feature Pyramids
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN115358932B (en) Multi-scale feature fusion face super-resolution reconstruction method and system
CN115345866B (en) Building extraction method in remote sensing image, electronic equipment and storage medium
US20230252605A1 (en) Method and system for a high-frequency attention network for efficient single image super-resolution
Song et al. TUSR-Net: triple unfolding single image dehazing with self-regularization and dual feature to pixel attention
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
CN115063318A (en) Adaptive frequency-resolved low-illumination image enhancement method and related equipment
Chen et al. Image denoising via deep network based on edge enhancement
CN115731447A (en) Decompressed image target detection method and system based on attention mechanism distillation
CN115965559A (en) Integrated aerial image enhancement method for forest scene
CN112446292B (en) 2D image salient object detection method and system
CN111539434B (en) Infrared weak and small target detection method based on similarity
CN116563110A (en) Blind image super-resolution reconstruction method based on Bicubic downsampling image space alignment
CN110796716A (en) Image coloring method based on multiple residual error networks and regularized transfer learning
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN104091315B (en) Method and system for deblurring license plate image
CN113658046B (en) Super-resolution image generation method, device, equipment and medium based on feature separation
CN115760589A (en) Image optimization method and device for motion blurred image
Nie et al. Image restoration from patch-based compressed sensing measurement
CN112529815A (en) Method and system for removing raindrops in real image after rain
CN115690418B (en) Unsupervised automatic detection method for image waypoints

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination