CN109325915B - Super-resolution reconstruction method for low-resolution monitoring video - Google Patents
Super-resolution reconstruction method for low-resolution monitoring video
Info
- Publication number
- CN109325915B (application CN201811056960.4A)
- Authority
- CN
- China
- Prior art keywords
- resolution
- layer
- image
- convolution
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
(All under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING.)
- G06T3/4053—Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Super-resolution using the original low-resolution images to iteratively correct the high-resolution images
- G06T3/4046—Scaling of whole images or parts thereof using neural networks
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V40/161—Human faces: detection; localisation; normalisation
- G06V40/168—Human faces: feature extraction; face representation
- G06T2207/10016—Image acquisition modality: video; image sequence
- G06T2207/20081—Special algorithmic details: training; learning
- G06T2207/20084—Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30196—Subject of image: human being; person
- G06T2207/30201—Subject of image: face
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a super-resolution reconstruction method for low-resolution surveillance video. Two convolution kernels of different sizes are used to represent the features of a low-resolution surveillance video frame, and the features extracted by the two kernels are merged as the input of the next layer. Residual learning makes the network easier to train, and a deconvolution layer performs super-resolution reconstruction on the learned features. The convolutional neural network is optimized with a stochastic gradient descent algorithm to obtain a trained network model; a low-resolution surveillance picture to be reconstructed is then input into the trained model to produce the super-resolution reconstruction. Without increasing hardware cost, the invention raises the image resolution of surveillance video so that more of the feature information needed for face identification can be obtained; this information assists criminal investigation in determining the identity of a suspect and improves the accuracy and efficiency of identification.
Description
Technical Field
The invention relates to the field of computer vision methods, in particular to a super-resolution reconstruction method for a low-resolution monitoring video.
Background
As the government of China actively deploys advanced security technology to maintain social stability and safeguard people's lives and property, relatively complete video surveillance systems have been established in cities across the country. These systems play an important role in criminal investigations by public security authorities. In practice, however, because a suspect may be far from the camera or the camera's imaging quality may be poor, surveillance often yields low-resolution images that cannot provide the feature information needed for face identification. The starting point of the present invention is therefore to enhance the resolution of low-resolution surveillance images so as to improve the recognizability of the target.
Image super-resolution reconstruction is a technique that improves image quality through software algorithms; it overcomes the high cost of obtaining high-resolution images through hardware and is of great significance for improving the visual quality of images. Applying super-resolution reconstruction to low-resolution surveillance images raises the image resolution of surveillance video without increasing hardware cost, so that more of the feature information needed for face identification can be acquired and used to assist criminal investigation in determining the identity of a suspect.
Disclosure of Invention
The invention aims to provide a super-resolution reconstruction method for low-resolution surveillance video, addressing the problems that many surveillance pictures have low resolution and cannot provide the feature information needed to identify a target's face.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a super-resolution reconstruction method for low-resolution surveillance video, characterized in that: features of training images are extracted with a convolutional neural network containing convolutional layers and residual connections; the image is reconstructed through a deconvolution layer to raise its resolution; the convolutional neural network is optimized with a stochastic gradient descent algorithm to obtain a trained network model; and an image frame to be reconstructed is input into the trained model to obtain the reconstruction result. The method comprises the following steps:
(1) Selecting a plurality of pictures as a training database, wherein the training database comprises a low-resolution image for inputting a network and a corresponding high-resolution image serving as a supervised learning label;
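Step (1) pairs each low-resolution input with a high-resolution label. The patent does not say how the low-resolution inputs are produced; a common assumption, sketched below in pure Python, is to downsample each high-resolution training image (here by 2x2 average pooling) and keep the original as the supervised-learning label. The toy images and the pooling choice are illustrative assumptions, not the patent's actual training data.

```python
# Hedged sketch of step (1): build (low-resolution input, high-resolution
# label) training pairs by downsampling. 2x2 average pooling is an assumed
# degradation model; the patent only requires that LR/HR pairs exist.

def downsample2x(hr):
    """Average each 2x2 block of a high-resolution image -> low-resolution."""
    h, w = len(hr), len(hr[0])
    return [[(hr[2 * y][2 * x] + hr[2 * y][2 * x + 1] +
              hr[2 * y + 1][2 * x] + hr[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

# toy "database" of three high-resolution 4x4 images (assumption)
hr_images = [[[float((x + y + n) % 7) for x in range(4)] for y in range(4)]
             for n in range(3)]

# each entry: (low-resolution network input, high-resolution label)
pairs = [(downsample2x(hr), hr) for hr in hr_images]
print(len(pairs), len(pairs[0][0]), len(pairs[0][0][0]))  # 3 pairs, 2x2 LR inputs
```

In practice the database described in the embodiment contains 700 such picture pairs; any degradation model that matches the target surveillance cameras could replace the pooling used here.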
(2) Inputting the training samples into a convolutional neural network for training, wherein the network comprises a plurality of convolutional layers and residual connections; the processing within the convolutional layers is as follows:
the first layer is 1 convolution layer containing convolution kernels with the size of 3 × 3 and used for extracting global features of an image, the subsequent layers are a plurality of parallel convolution layers with 2 different convolution kernels and used for extracting features with different sizes, the first convolution layer contains a plurality of convolution kernels with the size of 3 × 3, the second convolution layer contains a plurality of convolution kernels with the size of 5 × 5, discrete convolution is carried out on the convolution kernels and an original image respectively, and after an offset term is added, the extracted image features are obtained through a ReLU activation function and are expressed as follows:
where L =1, 2., L represents the number of network layers, i represents the position of the pixel,representing the ith pixel of the image in layer l-1,representing the jth image feature of the h convolutional layer in the l layer, M j Represents the set of all images of the input, k represents the convolution kernel,represents the ith value in the jth convolution kernel in the ith layer,representing the jth bias term in the ith layer. Since each layer of the present invention contains 2 parallel convolutional layers with different convolutional kernels, h =1,2, f (x) represents the ReLU activation function, which is expressed as follows:
f(x)=max(0,x) (2),
After the convolutions are complete, the outputs of the 2 parallel convolutional layers are merged in the merge layer into a single block of image features, expressed as follows:

X^l = [ X^{l,1}, X^{l,2} ]   (3)

where X^l is the output of layer l and [·] denotes the operation of merging the outputs of the parallel convolutional layers into one block of image features.
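The parallel-kernel convolution of equations (1)-(3) can be sketched in pure Python as follows. The averaging kernel values, bias, and toy 4x4 input are illustrative assumptions standing in for the network's learned weights; the structure (3x3 branch, 5x5 branch, ReLU, merge) follows the text above.

```python
# Minimal sketch of equations (1)-(3): two parallel convolutional branches
# (3x3 and 5x5 kernels) over the same input, each with bias + ReLU (2),
# merged into one block of features (3). Weights are assumptions, not the
# patent's trained parameters.

def conv2d_same(img, kernel, bias):
    """Discrete 2D convolution with zero ('same') padding, plus bias and ReLU."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = bias
            for dy in range(kh):
                for dx in range(kw):
                    iy, ix = y + dy - ph, x + dx - pw
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += img[iy][ix] * kernel[dy][dx]
            out[y][x] = max(0.0, acc)  # ReLU, equation (2)
    return out

img = [[float(x + 4 * y) for x in range(4)] for y in range(4)]  # toy 4x4 image

k3 = [[1.0 / 9] * 3 for _ in range(3)]   # 3x3 averaging kernel (assumption)
k5 = [[1.0 / 25] * 5 for _ in range(5)]  # 5x5 averaging kernel (assumption)

f3 = conv2d_same(img, k3, bias=0.1)  # first parallel branch, h = 1
f5 = conv2d_same(img, k5, bias=0.1)  # second parallel branch, h = 2

merged = [f3, f5]  # merge layer: concatenate feature maps, equation (3)
print(len(merged), len(merged[0]), len(merged[0][0]))  # 2 feature maps, each 4x4
```

A real implementation would use many kernels per branch and a GPU convolution primitive, but the indexing above is exactly the discrete convolution of equation (1).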
(3) The output of layer l is taken as the input of layer l+1, and the parallel-convolution computation of step (2) is repeated until the last layer of the network. After the L-th convolutional layer, a residual operation is performed between the output features of layer L and the input features of the first layer, expressed as follows:

X = X^L + X^1   (4)

where X denotes the features after the residual operation. The features X are then fed into a deconvolution layer that enlarges their spatial size, and the enlarged features pass through a convolutional layer with 3×3 kernels to produce the final output image with improved resolution.
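The residual operation (4) and the deconvolution that follows it can be sketched as below. The stride-2 zero-insertion view of deconvolution and the fixed smoothing kernel are illustrative assumptions; the patent's deconvolution layer would use learned weights.

```python
# Sketch of equation (4) and the deconvolution (transposed-convolution)
# upsampling. Toy feature maps and the 2x2 kernel are assumptions.

def residual(x_L, x_1):
    """Equation (4): element-wise sum of the last layer's output features
    and the first layer's input features."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(x_L, x_1)]

def deconv2x(feat):
    """Toy stride-2 deconvolution: insert zeros between pixels, then apply
    a fixed 2x2 kernel. Doubles the spatial size of the feature map."""
    h, w = len(feat), len(feat[0])
    up = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            up[2 * y][2 * x] = feat[y][x]          # zero-insertion upsampling
    k = [[0.25, 0.25], [0.25, 0.25]]               # fixed kernel (assumption)
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            acc = 0.0
            for dy in range(2):
                for dx in range(2):
                    iy, ix = y - dy, x - dx
                    if 0 <= iy < 2 * h and 0 <= ix < 2 * w:
                        acc += up[iy][ix] * k[dy][dx]
            out[y][x] = acc
    return out

x_1 = [[1.0, 2.0], [3.0, 4.0]]   # input features of the first layer (toy)
x_L = [[0.5, 0.5], [0.5, 0.5]]   # output features of the L-th layer (toy)

x = residual(x_L, x_1)           # equation (4): X = X^L + X^1
big = deconv2x(x)                # deconvolution layer doubles the size
print(len(big), len(big[0]))     # 4 4
```

The residual shortcut lets the network learn only the difference between input and output features, which is what makes training easier as stated above.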
(4) Comparing the corresponding high-resolution image serving as the supervised-learning label with the output image, optimizing the convolutional neural network with the stochastic gradient descent method, and obtaining a trained network after at least one hundred thousand iterations;
(5) Given a low-resolution surveillance image to be super-resolved, the image is input into the trained network obtained in step (4), and the convolutional neural network outputs the super-resolution-reconstructed high-resolution surveillance image.
The super-resolution reconstruction method for low-resolution surveillance video is characterized in that: feature extraction from the low-resolution surveillance image is performed with a convolutional neural network, in which convolution kernels of different sizes extract features at different scales; the extracted features are then merged, training of the network is eased by residual connections, and a deconvolution layer performs super-resolution reconstruction on the learned features, yielding a reconstructed image with improved resolution. Finally, the network is optimized with the stochastic gradient descent method to obtain a trained network, which then performs super-resolution reconstruction of low-resolution surveillance images.
According to the invention, super-resolution reconstruction of the low-resolution surveillance image yields a reconstructed high-resolution image. The image resolution of the surveillance video is improved without increasing hardware cost, so that more of the feature information needed for face identification can be obtained and used to assist criminal investigation in determining a suspect's identity.
In the invention, the stochastic gradient descent algorithm is an optimization algorithm well suited to problems with many control variables and a complex controlled system. During training, the goal is to minimize the error between the network's output and the correct result; the minimum of the objective function is approached over many iterations.
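The stochastic gradient descent idea described above can be shown on a toy one-parameter model: each iteration draws one sample, computes the gradient of that sample's squared error, and steps against it. The model y = w·x, learning rate, and iteration count are assumptions standing in for the full network's weights and training schedule.

```python
# Hedged sketch of stochastic gradient descent minimizing a squared error,
# as in step (4). One random sample per iteration is what makes it
# "stochastic"; the toy linear model is an assumption for illustration.
import random

random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 6)]  # samples generated by true w = 2

w = 0.0      # initial weight (assumption)
lr = 0.01    # learning rate (assumption)
for _ in range(10000):                 # many iterations, as in step (4)
    x, y = random.choice(data)         # draw one random training sample
    grad = 2.0 * (w * x - y) * x       # d/dw of the squared error (w*x - y)^2
    w -= lr * grad                     # gradient-descent update
print(round(w, 3))                     # converges toward 2.0
```

For the actual network the same update is applied to every kernel weight and bias of equations (1)-(4), with the gradient obtained by backpropagation.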
The invention uses a convolutional neural network for feature extraction and super-resolution reconstruction. The network progressively extracts features, from low-level details to high-level abstractions, and uses kernels of different sizes to capture features at different scales, so that effective feature information is extracted and the reconstruction quality improves. The convolutional neural network is also highly flexible: its parameters can be adjusted to different practical conditions, so the method can be applied in different settings.
The beneficial effects of the invention are:
the invention uses the super-resolution reconstruction of the convolutional neural network on the picture to improve the resolution of the low-resolution monitoring picture, thereby obtaining more characteristic information required by identifying the face, realizing the application of the super-resolution reconstruction of the picture to criminal investigation, and improving the accuracy and efficiency of determining the identity of the criminal suspect in the criminal investigation.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a convolutional network structure used by the present invention.
Fig. 3 is a comparison graph of the effects in the surveillance video of the present invention.
Detailed Description
As shown in fig. 1, a super-resolution reconstruction method for low-resolution surveillance video comprises the following steps:
(1) Selecting 700 pictures as a training database, wherein the training database comprises a low-resolution image for inputting a network and a corresponding high-resolution image serving as a supervised learning label;
(2) Inputting training samples into a convolutional neural network for training, wherein the network comprises a plurality of convolutional layers and residual connections, and each convolution stage contains convolutional layers with 2 different kernels; the processing within the convolutional layers is as follows:
the first layer is 1 convolution layer containing convolution kernels with the size of 3 × 3 and used for extracting global features of an image, the subsequent layers are a plurality of parallel convolution layers with 2 different convolution kernels and used for extracting features with different sizes, the first convolution layer contains a plurality of convolution kernels with the size of 3 × 3, the second convolution layer contains a plurality of convolution kernels with the size of 5 × 5, discrete convolution is carried out on the convolution kernels and an original image respectively, and after an offset term is added, the extracted image features are obtained through a ReLU activation function and are expressed as follows:
where L =1, 2.., L represents the number of layers of the network, i represents the location of the pixel,representing the ith pixel of the image in layer l-1,representing the jth image feature of the h convolutional layer in the l layer, M j Represents the set of all images of the input, k represents the convolution kernel,represents the ith value in the jth convolution kernel in the ith layer,representing the jth bias term in the ith layer. Since each layer of the present invention contains 2 parallel convolutional layers with different convolutional kernels, h =1,2, f (x) represents the ReLU activation function, which is expressed as follows:
f(x)=max(0,x) (2),
After the convolutions are complete, the outputs of the 2 parallel convolutional layers are merged in the merge layer into a single block of image features, expressed as follows:

X^l = [ X^{l,1}, X^{l,2} ]   (3)

where X^l is the output of layer l and [·] denotes the operation of merging the outputs of the parallel convolutional layers into one block of image features.
(3) The output of layer l is taken as the input of layer l+1, and the parallel-convolution computation of step (2) is repeated until the last layer of the network. After the L-th convolutional layer, a residual operation is performed between the output features of layer L and the input features of the first layer, expressed as follows:

X = X^L + X^1   (4)

where X denotes the features after the residual operation. The features X are then fed into a deconvolution layer that enlarges their spatial size, and the enlarged features pass through a convolutional layer with 3×3 kernels to produce the final output image with improved resolution.
(4) Comparing the corresponding high-resolution image serving as the supervised-learning label with the output image, optimizing the convolutional neural network with the stochastic gradient descent method, and obtaining a trained network after at least one hundred thousand iterations;
(5) Given a known low-resolution surveillance image to be super-resolved, the image is input into the trained network, and the convolutional neural network outputs the super-resolution-reconstructed high-resolution surveillance image.
Fig. 2 shows the convolutional network structure used by the invention; the expression to the left of each convolutional layer gives its kernel size. The network performs feature extraction with convolutional layers of different kernel sizes and residual connections: the merge layer combines the output features of the differently sized convolutional layers into one block of image features, the addition symbol denotes the residual connection, and the deconvolution layer then enlarges the feature size to produce the final output image with improved resolution. In Fig. 3, (a) is a low-resolution surveillance picture and (b) is the reconstructed high-resolution picture; the low-resolution picture is shown at the same size as the reconstruction for a more intuitive comparison, and the face region in the red box is enlarged uniformly in both images for comparing the effect.
Claims (2)
1. A super-resolution reconstruction method for low-resolution surveillance video, characterized in that: features of training images are extracted with a convolutional neural network containing convolutional layers and residual connections; the image is reconstructed through a deconvolution layer to raise its resolution; the convolutional neural network is then optimized with a stochastic gradient descent algorithm to obtain a trained network model; and an image frame to be reconstructed is input into the trained model to obtain the reconstruction result; the method comprises the following steps:
(1) Selecting a plurality of pictures as a training database, wherein the training database comprises a low-resolution image for inputting a network and a corresponding high-resolution image serving as a supervised learning label;
(2) Inputting the training samples into a convolutional neural network for training, wherein the network comprises a plurality of convolutional layers and residual connections; the processing within the convolutional layers is as follows:
the first layer is 1 convolution layer containing convolution kernels with the size of 3 × 3 and used for extracting global features of an image, the subsequent layers are a plurality of parallel convolution layers with 2 different convolution kernels and used for extracting features with different sizes, the first convolution layer contains a plurality of convolution kernels with the size of 3 × 3, the second convolution layer contains a plurality of convolution kernels with the size of 5 × 5, discrete convolution is carried out on the convolution kernels and an original image respectively, and after an offset term is added, the extracted image features are obtained through a ReLU activation function and are expressed as follows:
where L =1, 2.., L represents the number of layers of the network, i represents the location of the pixel,representing the ith pixel of the image in layer l-1,representing the jth image feature of the h convolutional layer in the l layer, M j Represents the set of all images of the input, k represents the convolution kernel,represents the ith value in the jth convolution kernel in the ith layer,represents the jth bias term in the ith layer; since each layer of the present invention contains 2 parallel convolutional layers with different convolutional kernels, h =1,2, f (x) represents the ReLU activation function, which is expressed as follows:
f(x)=max(0,x) (2),
After the convolutions are complete, the outputs of the 2 parallel convolutional layers are merged in the merge layer into a single block of image features, expressed as follows:

X^l = [ X^{l,1}, X^{l,2} ]   (3)

where X^l is the output of layer l and [·] denotes the operation of merging the outputs of the parallel convolutional layers into one block of image features;
(3) Taking the output of layer l as the input of layer l+1, and repeating the parallel-convolution computation of step (2) until the last layer of the network; after the L-th convolutional layer, performing a residual operation between the output features of layer L and the input features of the first layer, expressed as follows:

X = X^L + X^1   (4)

where X denotes the features after the residual operation; the features X are then fed into a deconvolution layer that enlarges their spatial size, and the enlarged features pass through a convolutional layer with 3×3 kernels to produce the final output image with improved resolution;
(4) Comparing the corresponding high-resolution image serving as the supervised-learning label with the output image, optimizing the convolutional neural network with the stochastic gradient descent method, and obtaining a trained network after at least one hundred thousand iterations;
(5) And (4) inputting the image to be reconstructed into the trained network obtained in the step (4) when a low-resolution monitoring image to be super-resolution reconstructed is known, and outputting the super-resolution reconstructed high-resolution monitoring image by the convolutional neural network.
2. The super-resolution reconstruction method for low-resolution surveillance video according to claim 1, characterized in that: feature extraction from the low-resolution surveillance image is performed with a convolutional neural network, in which convolution kernels of different sizes extract features at different scales; the extracted features are then merged, training of the network is eased by residual connections, and a deconvolution layer performs super-resolution reconstruction on the learned features, yielding a reconstructed image with improved resolution; finally, the network is optimized with the stochastic gradient descent method to obtain a trained network, which then performs super-resolution reconstruction of low-resolution surveillance images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811056960.4A CN109325915B (en) | 2018-09-11 | 2018-09-11 | Super-resolution reconstruction method for low-resolution monitoring video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811056960.4A CN109325915B (en) | 2018-09-11 | 2018-09-11 | Super-resolution reconstruction method for low-resolution monitoring video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109325915A CN109325915A (en) | 2019-02-12 |
CN109325915B true CN109325915B (en) | 2022-11-08 |
Family
ID=65264816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811056960.4A Active CN109325915B (en) | 2018-09-11 | 2018-09-11 | Super-resolution reconstruction method for low-resolution monitoring video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109325915B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085652A (en) * | 2019-06-14 | 2020-12-15 | 深圳市中兴微电子技术有限公司 | Image processing method and device, computer storage medium and terminal |
CN110647936B (en) * | 2019-09-20 | 2023-07-04 | 北京百度网讯科技有限公司 | Training method and device for video super-resolution reconstruction model and electronic equipment |
CN111062867A (en) * | 2019-11-21 | 2020-04-24 | 浙江大华技术股份有限公司 | Video super-resolution reconstruction method |
CN110991355A (en) * | 2019-12-06 | 2020-04-10 | 北京理工大学 | Super-resolution method for aligning face images based on residual back-projection neural network |
CN111915492B (en) * | 2020-08-19 | 2021-03-30 | 四川省人工智能研究院(宜宾) | Multi-branch video super-resolution method and system based on dynamic reconstruction |
CN112580502A (en) * | 2020-12-17 | 2021-03-30 | 南京航空航天大学 | SICNN-based low-quality video face recognition method |
CN113408347B (en) * | 2021-05-14 | 2022-03-15 | 桂林电子科技大学 | Method for detecting change of remote building by monitoring camera |
CN113869282B (en) * | 2021-10-22 | 2022-11-11 | 马上消费金融股份有限公司 | Face recognition method, hyper-resolution model training method and related equipment |
CN114500948A (en) * | 2022-01-25 | 2022-05-13 | 重庆卡佐科技有限公司 | Vehicle monitoring method, monitoring system and vehicle-mounted terminal |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016019484A1 (en) * | 2014-08-08 | 2016-02-11 | Xiaoou Tang | An apparatus and a method for providing super-resolution of a low-resolution image |
CN106683067A (en) * | 2017-01-20 | 2017-05-17 | 福建帝视信息科技有限公司 | Deep learning super-resolution reconstruction method based on residual sub-images |
CN107578377A (en) * | 2017-08-31 | 2018-01-12 | 北京飞搜科技有限公司 | A kind of super-resolution image reconstruction method and system based on deep learning |
EP3319039A1 (en) * | 2016-11-07 | 2018-05-09 | UMBO CV Inc. | A method and system for providing high resolution image through super-resolution reconstruction |
- 2018-09-11: application CN201811056960.4A filed in China; granted as patent CN109325915B (active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016019484A1 (en) * | 2014-08-08 | 2016-02-11 | Xiaoou Tang | An apparatus and a method for providing super-resolution of a low-resolution image |
EP3319039A1 (en) * | 2016-11-07 | 2018-05-09 | UMBO CV Inc. | A method and system for providing high resolution image through super-resolution reconstruction |
CN106683067A (en) * | 2017-01-20 | 2017-05-17 | 福建帝视信息科技有限公司 | Deep learning super-resolution reconstruction method based on residual sub-images |
CN107578377A (en) * | 2017-08-31 | 2018-01-12 | 北京飞搜科技有限公司 | A kind of super-resolution image reconstruction method and system based on deep learning |
Non-Patent Citations (1)
Title |
---|
Super-Resolution Reconstruction of Remote Sensing Images Based on a Deep Convolutional Neural Network; Wang Aili et al.; Journal of Natural Science of Heilongjiang University; 2018-02-25 (No. 01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN109325915A (en) | 2019-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109325915B (en) | Super-resolution reconstruction method for low-resolution monitoring video | |
CN113052210B (en) | Rapid low-light target detection method based on convolutional neural network | |
CN109523470B (en) | Depth image super-resolution reconstruction method and system | |
CN111127308A (en) | Mirror image feature rearrangement repairing method for single sample face recognition under local shielding | |
Su et al. | Global learnable attention for single image super-resolution | |
CN111968064B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111814661A (en) | Human behavior identification method based on residual error-recurrent neural network | |
CN110930378B (en) | Emphysema image processing method and system based on low data demand | |
CN110705353A (en) | Method and device for identifying face to be shielded based on attention mechanism | |
Wang et al. | FaceFormer: Aggregating global and local representation for face hallucination | |
CN111696038A (en) | Image super-resolution method, device, equipment and computer-readable storage medium | |
Chen et al. | RBPNET: An asymptotic Residual Back-Projection Network for super-resolution of very low-resolution face image | |
JP2023526899A (en) | Methods, devices, media and program products for generating image inpainting models | |
TWI803243B (en) | Method for expanding images, computer device and storage medium | |
Yang et al. | Image super-resolution reconstruction based on improved Dirac residual network | |
CN115731597A (en) | Automatic segmentation and restoration management platform and method for mask image of face mask | |
Qian et al. | Effective super-resolution methods for paired electron microscopic images | |
Cui et al. | Exploring resolution and degradation clues as self-supervised signal for low quality object detection | |
Chen et al. | Learning traces by yourself: Blind image forgery localization via anomaly detection with ViT-VAE | |
CN111310751A (en) | License plate recognition method and device, electronic equipment and storage medium | |
Lai et al. | Generative focused feedback residual networks for image steganalysis and hidden information reconstruction | |
Yeswanth et al. | Sovereign critique network (SCN) based super-resolution for chest X-rays images | |
CN116758092A (en) | Image segmentation method, device, electronic equipment and storage medium | |
CN112712468B (en) | Iris image super-resolution reconstruction method and computing device | |
US20230154140A1 (en) | Neural network-based high-resolution image restoration method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||