CN109191392A - Image super-resolution reconstruction method driven by semantic segmentation - Google Patents

Image super-resolution reconstruction method driven by semantic segmentation

Info

Publication number
CN109191392A
Authority
CN
China
Prior art keywords
network
resolution
super
semantic segmentation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810901713.3A
Other languages
Chinese (zh)
Other versions
CN109191392B (en)
Inventor
颜波
牛雪静
谭伟敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201810901713.3A
Publication of CN109191392A
Application granted
Publication of CN109191392B
Active legal status
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of digital image processing technology and specifically discloses an image super-resolution reconstruction method driven by semantic segmentation. The method comprises: independently training an image super-resolution network and a semantic segmentation network model; cascading the independently trained super-resolution network and semantic segmentation network; training the super-resolution network under the driving of the semantic segmentation task; and processing a low-resolution image with the task-driven network to obtain an accurate semantic segmentation result. Experimental results show that the invention enables the super-resolution network to better adapt to the segmentation task, provides clear, high-resolution input images for the semantic segmentation network, and effectively improves the segmentation accuracy on low-resolution images.

Description

Image super-resolution reconstruction method driven by semantic segmentation
Technical field
The invention belongs to the field of digital image processing technology and relates to an image super-resolution reconstruction method; more specifically, it relates to an image super-resolution reconstruction method driven by semantic segmentation.
Background technique
Semantic segmentation is one of the fundamental tasks of computer vision: it assigns each pixel to a class according to its semantics, and has wide applications in autonomous driving, image content analysis, and related areas. In recent years, deep convolutional neural networks (DCNNs) have not only made remarkable progress in image classification, but have also achieved breakthroughs in structured-output tasks such as semantic segmentation.
In 2015, Long et al.[1] proposed FCN (fully convolutional network), applying DCNNs to the pixel-level classification task of semantic segmentation for the first time. To preserve the receptive field, FCN uses many pooling layers, which shrinks the feature maps and makes the segmentation results coarse. Chen et al. proposed the DeepLab series of methods[2-4], which raise the feature-map resolution without reducing the receptive field by introducing atrous (dilated) convolution and optimizing the network output, reaching an accuracy of 86.9% on the PASCAL VOC 2012[5] test set. However, segmenting small objects remains a major challenge in semantic segmentation.
Image super-resolution reconstruction is an effective technique for increasing image resolution and enriching image content, and can effectively enhance the visual appearance of small objects or of objects in low-resolution images. Early interpolation-based reconstruction methods have difficulty modeling complex real scenes. With the development of DCNNs, many neural-network-based super-resolution reconstruction methods have emerged.
In 2015, Dong et al.[6] proposed SRCNN (Super-Resolution Convolutional Neural Network), which takes the low-resolution image as input and the high-resolution image as the label, and lets a DCNN learn the mapping between low-resolution and high-resolution images by optimizing an objective function. In 2016, Kim et al.[7] deepened the architecture: their network takes the interpolated image as input, stacks many convolutional layers, and uses a residual structure to accelerate convergence, achieving better reconstruction quality.
All of the above super-resolution reconstruction methods aim at improving perceptual quality for the human eye, yet the image seen by the naked eye and the image "seen" by a machine are not the same[8]. Increasing image resolution for a specific task, rather than merely for visual appearance, can improve the performance of that task. A semantic-segmentation-driven super-resolution reconstruction method therefore has strong practical value for improving the semantic segmentation accuracy of small objects and of objects in low-resolution images.
Summary of the invention
To overcome the deficiencies of the prior art, the purpose of the present invention is to provide an image super-resolution reconstruction method driven by semantic segmentation, which allows the super-resolution network to update its parameters under the driving of semantic segmentation and improves the semantic segmentation accuracy of low-resolution images.
The image super-resolution reconstruction method driven by semantic segmentation provided by the invention comprises the following specific steps:
(1) Independently pre-train the image super-resolution network and the semantic segmentation network model
Train the super-resolution network with a data set {(I_i^LR, I_i^HR)}, where I_i^LR is the low-resolution image used as the input of the super-resolution network and I_i^HR is the high-resolution image used as the label during training. The super-resolution network is an end-to-end network and can be VDSR[7], EDSR[9], SRCNN[6], etc.
Train the semantic segmentation network with a data set {(I_i, M_i)}, where I_i is the input of the semantic segmentation network and M_i is the pixel-level label giving the true class of each pixel in image I_i. The semantic segmentation network can be DeepLab[2-4], FCN[1], PSPNet[10], etc.
(2) Cascade the independently trained super-resolution network and semantic segmentation network
The super-resolution network maps a low-resolution image I^LR to a high-resolution image I^SR = F(I^LR; θ_SR), where θ_SR denotes the parameters of the super-resolution network. The output I^SR of the super-resolution network is used as the input of the semantic segmentation network, which produces the classification result M^SR = G(I^SR; θ_seg) for every pixel of the super-resolved image, where θ_seg denotes the parameters of the semantic segmentation network; together the two networks constitute a cascaded structure.
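For illustration, the cascade of step (2) is simply the composition of the two pre-trained networks. The following PyTorch-style sketch is illustrative only; the class and variable names are assumptions and are not part of the patent disclosure:

```python
import torch
import torch.nn as nn

class CascadedSRSeg(nn.Module):
    """Super-resolution network F(.; theta_SR) followed by a segmentation network G(.; theta_seg)."""
    def __init__(self, sr_net: nn.Module, seg_net: nn.Module):
        super().__init__()
        self.sr_net = sr_net    # e.g. a pre-trained VDSR / EDSR / SRCNN
        self.seg_net = seg_net  # e.g. a pre-trained DeepLab / FCN / PSPNet

    def forward(self, lr_img: torch.Tensor):
        sr_img = self.sr_net(lr_img)       # I^SR = F(I^LR; theta_SR)
        seg_logits = self.seg_net(sr_img)  # per-pixel class scores, M^SR = G(I^SR; theta_seg)
        return sr_img, seg_logits
```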
(3) Train the super-resolution network under the driving of the semantic segmentation task
Fine-tune the network parameters on the basis of the pre-trained models: the loss function of the super-resolution network and the loss function of the semantic segmentation network jointly guide the update of the super-resolution network's parameters, so that the super-resolution network is adapted to the specific semantic segmentation task.
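A minimal sketch of one such fine-tuning step, assuming the usual mean-squared error for the super-resolution loss and pixel-wise cross-entropy for the segmentation loss (alpha and beta correspond to the weights α, β introduced in the loss-function section below); the optimizer is assumed to hold only the super-resolution parameters:

```python
import torch.nn.functional as F

def finetune_step(model, optimizer, lr_img, hr_img, seg_label, alpha=1.0, beta=1.0):
    """One segmentation-driven fine-tuning step; `model` is the cascade from step (2)."""
    sr_img, seg_logits = model(lr_img)
    loss_sr = F.mse_loss(sr_img, hr_img)               # reconstruction loss of the SR branch
    loss_seg = F.cross_entropy(seg_logits, seg_label)  # pixel-wise segmentation loss
    loss = alpha * loss_sr + beta * loss_seg           # combined objective guiding theta_SR
    optimizer.zero_grad()
    loss.backward()   # the segmentation loss back-propagates through G into F
    optimizer.step()  # the optimizer holds only the SR parameters; theta_seg stays untouched
    return loss.item()
```

In this sketch the optimizer would be built over the super-resolution parameters only, e.g. torch.optim.Adam(model.sr_net.parameters(), lr=1e-4), so that the frozen segmentation network merely supplies the gradient signal.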
(4) Process the low-resolution image with the task-driven network to obtain an accurate semantic segmentation result
For the semantic segmentation task on a low-resolution image, the low-resolution image is first fed into the trained semantic-segmentation-driven super-resolution network model to reconstruct a high-resolution image; the reconstructed high-resolution image is then fed into the semantic segmentation network to obtain an accurate segmentation result.
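At inference time the same cascade is simply run forward; a minimal sketch (the function name and tensor shapes are assumptions):

```python
import torch

@torch.no_grad()
def segment_low_resolution(model, lr_img):
    """lr_img: (1, C, h, w) low-resolution tensor; returns the SR image and a (1, H, W) label map."""
    model.eval()
    sr_img, seg_logits = model(lr_img)    # reconstruct first, then segment
    label_map = seg_logits.argmax(dim=1)  # class with the highest score at every pixel
    return sr_img, label_map
```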
Further, in step (1), the data set {(I_i^LR, I_i^HR)} for training the super-resolution network is obtained as follows:
The low-resolution image I_i^LR is obtained by down-sampling the high-resolution image I_i^HR at a given ratio. For any sufficiently high-resolution image data set, even one not originally intended for super-resolution, a super-resolution data set can be constructed in this way. Unlike the data sets of other tasks, such as object detection or image classification data sets, a super-resolution data set requires no manual annotation; super-resolution reconstruction can therefore be combined with other tasks without preparing separate data sets for each of them, which makes task-driven super-resolution reconstruction feasible.
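A minimal sketch of this pair construction, assuming bicubic down-sampling at a factor of 4 (the factor and interpolation mode are illustrative choices, not mandated by the text):

```python
import torch.nn.functional as F

def make_lr_hr_pair(hr_img, scale=4):
    """hr_img: (1, C, H, W) tensor with H and W divisible by `scale`; returns (I^LR, I^HR)."""
    lr_img = F.interpolate(hr_img, scale_factor=1.0 / scale,
                           mode='bicubic', align_corners=False)
    return lr_img.clamp(0, 1), hr_img
```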
Further, in step (1), the two networks are trained independently as follows:
The super-resolution network is trained with two kinds of data sets: it is first trained on a common super-resolution data set until convergence, and then fine-tuned on pairs derived from the semantic segmentation data set;
The semantic segmentation network is trained with a standard semantic segmentation data set containing pixel-level annotations.
The common super-resolution data set can be DIV2K[11], the 91-image set[12], etc.; the semantic segmentation data set can be PASCAL VOC 2012[5], PASCAL Context[13], Cityscapes[14], etc.
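A sketch of this two-stage pre-training schedule for the super-resolution network; the loader names, epoch counts, and learning rate are assumptions:

```python
import torch
import torch.nn.functional as F

def pretrain_sr(sr_net, generic_loader, seg_derived_loader, epochs=(80, 20), lr=1e-4):
    """Stage 1: generic SR pairs (e.g. from DIV2K); stage 2: pairs down-sampled from the
    segmentation data set (e.g. PASCAL VOC 2012). Epoch counts are placeholders."""
    opt = torch.optim.Adam(sr_net.parameters(), lr=lr)
    for loader, n_epochs in zip((generic_loader, seg_derived_loader), epochs):
        for _ in range(n_epochs):
            for lr_img, hr_img in loader:
                loss = F.mse_loss(sr_net(lr_img), hr_img)  # plain SR objective during pre-training
                opt.zero_grad()
                loss.backward()
                opt.step()
    return sr_net
```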
Further, in step (2), the parameters of the cascaded network are initialized from the two independently trained models, and the parameters of the semantic segmentation part are kept fixed; they are used only to compute the segmentation loss produced by the reconstructed high-resolution image. The pre-trained semantic segmentation network provides the correct guidance for updating the parameters of the super-resolution network, so an accurate semantic segmentation model is essential in the cascaded network.
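A sketch of this initialization and freezing step, reusing the cascade module from the earlier sketch; the checkpoint paths and optimizer choice are assumptions:

```python
import torch

def build_cascade(sr_net, seg_net, sr_ckpt='sr_pretrained.pth', seg_ckpt='seg_pretrained.pth'):
    """Initialize both parts from the independently trained checkpoints and fix theta_seg."""
    sr_net.load_state_dict(torch.load(sr_ckpt))
    seg_net.load_state_dict(torch.load(seg_ckpt))
    for p in seg_net.parameters():
        p.requires_grad = False             # the segmentation parameters stay fixed
    seg_net.eval()
    model = CascadedSRSeg(sr_net, seg_net)  # cascade module from the earlier sketch
    optimizer = torch.optim.Adam(sr_net.parameters(), lr=1e-4)  # only theta_SR is updated
    return model, optimizer
```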
Further, in step (3), the loss functions are as follows:
The loss function of the super-resolution network is:
where N is the number of images.
The loss function of the semantic segmentation network is:
where L is the set of pixel classes, P_i^l denotes the set of pixels belonging to class l in the i-th image, |P_i^l| is the number of class-l pixels, and u is the position of a pixel.
To make the super-resolution network adapt to the semantic segmentation task, rather than merely provide good visual quality, the loss function of the super-resolution network and the loss function of the semantic segmentation network are combined into the final loss function, so the objective function for the parameter update is L = α·L_SR + β·L_seg,
where α and β balance the contributions of the two loss functions and can be adjusted as needed. In general, the smaller α is relative to β, the poorer the visual quality of the reconstructed high-resolution image but the higher the semantic segmentation accuracy; the larger α is relative to β, the opposite holds. It is recommended to set the ratio α:β between (0.5-1):1, preferably 1:1.
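Under the standard choices for these networks, a mean-squared reconstruction error for the super-resolution branch and a per-class-normalized pixel-wise cross-entropy for segmentation, the two losses take roughly the following form. This is an illustrative reconstruction consistent with the variable definitions above, not a verbatim copy of the patent's formulas:

```latex
L_{SR} \;=\; \frac{1}{N}\sum_{i=1}^{N} \bigl\| F(I_i^{LR};\,\theta_{SR}) - I_i^{HR} \bigr\|^{2},
\qquad
L_{seg} \;=\; -\frac{1}{N}\sum_{i=1}^{N} \sum_{l \in L} \frac{1}{|P_i^{l}|}
\sum_{u \in P_i^{l}} \log p\!\left(M_i(u) = l \;\middle|\; I_i^{SR};\,\theta_{seg}\right)
```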
Although the present invention uses a cascade of two networks, the gradient generated when minimizing the loss function still propagates back into the super-resolution network; the gradient of the loss function with respect to θ_SR is:
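Expanded through the cascade by the chain rule, and consistent with the combined objective above, this gradient can be written as:

```latex
\frac{\partial L}{\partial \theta_{SR}}
\;=\;
\alpha \,\frac{\partial L_{SR}}{\partial \theta_{SR}}
\;+\;
\beta \,\frac{\partial L_{seg}}{\partial I^{SR}} \cdot \frac{\partial I^{SR}}{\partial \theta_{SR}}
```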
The beneficial effects of the present invention are as follows: the invention trains the super-resolution network for a specific semantic segmentation task, so that the goal of super-resolution reconstruction is no longer merely to produce high-resolution images with better visual quality and richer detail, but to output high-resolution images whose content is richer for the semantic segmentation network and more favorable for feature extraction. The semantic-segmentation-driven super-resolution reconstruction framework is simple and easy to implement, and can be widely used as a pre-processing step to improve the semantic segmentation accuracy of low-resolution images.
Description of the drawings
Fig. 1 is the network framework diagram of the invention.
Fig. 2 compares the segmentation results of high-resolution images reconstructed by the method of the invention and by other methods (4x reconstruction).
Specific embodiment
An embodiment of the present invention is described in detail below, but the protection scope of the present invention is not limited to this embodiment.
VDSR is used as the super-resolution network and DeepLab-V2 as the semantic segmentation network, performing 4x and 8x reconstruction respectively; the low-resolution images are obtained by down-sampling high-resolution images. The specific steps are as follows:
(1) Independently train the super-resolution network VDSR and the semantic segmentation network DeepLab-V2: train the super-resolution network with DIV2K and PASCAL VOC 2012, and train the semantic segmentation network with PASCAL VOC 2012;
(2) Cascade the independently trained super-resolution network and semantic segmentation network, initializing the corresponding parts of the cascaded network with the parameters from step (1);
(3) Train the super-resolution network under the driving of the semantic segmentation task, with the loss-function weights α:β set to 1:1 or 0.5:1;
(4) Process the low-resolution image with the task-driven network to obtain an accurate semantic segmentation result.
Table 1 compares the segmentation accuracy, after processing by the semantic segmentation network, of images reconstructed by the present invention and by other methods. It can be seen that, under different reconstruction factors, the segmentation accuracy of the high-resolution images reconstructed by the method of the invention is significantly higher than that of the other methods.
In addition, Fig. 2 gives an intuitive comparison of the segmentation results of images reconstructed by the method of the invention and by other methods for 4x reconstruction with α:β = 0.5:1, where Fig. 2(a) shows the high-resolution image and its semantic segmentation label; Fig. 2(b) shows the image reconstructed by bicubic interpolation and its semantic segmentation result; Fig. 2(c) shows the image reconstructed by the independently trained super-resolution network and its semantic segmentation result; and Fig. 2(d) shows the image reconstructed by the semantic-segmentation-driven super-resolution network and its segmentation result. It can be seen that the segmentation result produced by the method of the invention is the most accurate.
Table 1. Comparison of the segmentation accuracy of images reconstructed by different methods
Bibliography
[1] J. Long, E. Shelhamer and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, 2015. (FCN)
[2] L. Chen, G. Papandreou, et al., "Semantic image segmentation with deep convolutional nets and fully connected CRFs," International Conference on Learning Representations (ICLR), 2015. (DeepLab-V1)
[3] L. Chen, G. Papandreou, et al., "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 40, pp. 834-848, 2018. (DeepLab-V2)
[4] L. Chen, G. Papandreou, et al., "Rethinking atrous convolution for semantic image segmentation," arXiv:1706.05587, 2017. (DeepLab-V3)
[5] M. Everingham, S. Eslami, et al., "The pascal visual object classes challenge: a retrospective," International Journal of Computer Vision (IJCV), vol. 111, no. 1, pp. 98-136, 2014. (PASCAL VOC 2012)
[6] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 38, no. 2, pp. 295-307, 2015. (SRCNN)
[7] J. Kim, J. Lee, et al., "Accurate image super-resolution using very deep convolutional networks," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1646-1654, 2016. (VDSR)
[8] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie and A. Yuille, "Adversarial Examples for Semantic Segmentation and Object Detection," IEEE International Conference on Computer Vision (ICCV), pp. 1378-1387, 2017.
[9] B. Lim, S. Son, H. Kim, S. Nah and K. M. Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1132-1140, 2017. (EDSR)
[10] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid Scene Parsing Network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2881-2890, 2017. (PSPnet)
[11] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, L. Zhang, et al., "NTIRE 2017 challenge on single image super-resolution: Methods and results," IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017. (DIV2K)
[12] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, pp. 2861-2873, 2010. (91 images)
[13] R. Mottaghi, X. Chen, X. Liu, et al., "The role of context for object detection and semantic segmentation in the wild," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. (PASCAL context)
[14] M. Cordts, M. Omran, S. Ramos, et al., "The cityscapes dataset for semantic urban scene understanding," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. (Cityscapes)

Claims (7)

1. An image super-resolution reconstruction method driven by semantic segmentation, characterized in that the specific steps are as follows:
(1) independently pre-training the image super-resolution network and the semantic segmentation network model:
training the super-resolution network with a data set {(I_i^LR, I_i^HR)}, wherein I_i^LR is the low-resolution image used as the input of the super-resolution network and I_i^HR is the high-resolution image used as the label during training;
training the semantic segmentation network with a data set {(I_i, M_i)}, wherein I_i is the input of the semantic segmentation network and M_i is the pixel-level label indicating the true class of each pixel in image I_i;
(2) cascading the independently trained super-resolution network and semantic segmentation network:
the super-resolution network maps the low-resolution image I^LR to the high-resolution image I^SR = F(I^LR; θ_SR), wherein θ_SR denotes the parameters of the super-resolution network; the output I^SR of the super-resolution network is used as the input of the semantic segmentation network to obtain the classification result M^SR = G(I^SR; θ_seg) for every pixel of the super-resolved image, wherein θ_seg denotes the parameters of the semantic segmentation network, thereby constituting a cascaded network structure;
(3) training the super-resolution network under the driving of the semantic segmentation task:
fine-tuning the network parameters on the basis of the pre-trained models, wherein the loss function of the super-resolution network and the loss function of the semantic segmentation network jointly guide the update of the parameters of the super-resolution network, so that the super-resolution network is adapted to the specific semantic segmentation task;
(4) processing the low-resolution image with the task-driven network to obtain an accurate semantic segmentation result:
for the semantic segmentation task on a low-resolution image, first inputting the low-resolution image into the trained semantic-segmentation-driven super-resolution network model to reconstruct a high-resolution image, and then inputting the reconstructed high-resolution image into the semantic segmentation network to obtain an accurate segmentation result.
2. The method according to claim 1, characterized in that, in step (1), the data set {(I_i^LR, I_i^HR)} for training the super-resolution network is obtained as follows:
the low-resolution image I_i^LR is obtained by down-sampling the high-resolution image I_i^HR at a given ratio; for a high-resolution image data set not originally intended for the super-resolution task, the super-resolution data set is constructed in this way.
3. The method according to claim 2, characterized in that, in step (1), the two networks are trained independently as follows:
the super-resolution network is trained with two kinds of data sets: the network is first trained with a common super-resolution data set until convergence, and then fine-tuned with the semantic segmentation data set;
the semantic segmentation network is trained with a standard semantic segmentation data set containing pixel-level annotations.
4. The method according to claim 3, characterized in that the common super-resolution data set is DIV2K or the 91-image set, and the semantic segmentation data set is PASCAL VOC 2012, PASCAL Context, or Cityscapes.
5. The method according to claim 1, characterized in that, in step (2), the parameters of the cascaded network are initialized with the parameters of the two independently trained models; the parameters of the semantic segmentation part are fixed and are used to compute the semantic segmentation loss produced by the reconstructed high-resolution image, so that the pre-trained semantic segmentation network provides correct guidance for the parameter update of the super-resolution network.
6. The method according to claim 1, characterized in that, in step (3), the loss functions are as follows:
the loss function of the super-resolution network is:
wherein N is the number of images;
the loss function of the semantic segmentation network is:
wherein L is the set of pixel classes, P_i^l denotes the set of pixels belonging to class l in the i-th image, |P_i^l| is the number of class-l pixels, and u is the position of a pixel;
the loss function of the super-resolution network and the loss function of the semantic segmentation network are combined as the final loss function, so the objective function for the parameter update is L = α·L_SR + β·L_seg,
wherein α and β are used to balance the contributions of the two loss functions, and the ratio α:β is taken as (0.5-1):1.
7. The method according to claim 1, characterized in that the super-resolution network is an end-to-end network, namely VDSR, EDSR, or SRCNN; and the semantic segmentation network is DeepLab, FCN, or PSPNet.
CN201810901713.3A 2018-08-09 2018-08-09 Image super-resolution reconstruction method driven by semantic segmentation Active CN109191392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810901713.3A CN109191392B (en) 2018-08-09 2018-08-09 Image super-resolution reconstruction method driven by semantic segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810901713.3A CN109191392B (en) 2018-08-09 2018-08-09 Image super-resolution reconstruction method driven by semantic segmentation

Publications (2)

Publication Number Publication Date
CN109191392A 2019-01-11
CN109191392B CN109191392B (en) 2021-06-04

Family

ID=64921175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810901713.3A Active CN109191392B (en) 2018-08-09 2018-08-09 Image super-resolution reconstruction method driven by semantic segmentation

Country Status (1)

Country Link
CN (1) CN109191392B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108140130A (en) * 2015-11-05 2018-06-08 谷歌有限责任公司 Edge-aware bilateral image processing
WO2018067258A1 (en) * 2016-10-06 2018-04-12 Qualcomm Incorporated Neural network for image processing
CN107481188A (en) * 2017-06-23 2017-12-15 珠海经济特区远宏科技有限公司 Image super-resolution reconstruction method
CN108259994A (en) * 2018-01-15 2018-07-06 复旦大学 Method for improving video spatial resolution

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009565A (en) * 2019-04-04 2019-07-12 武汉大学 Super-resolution image reconstruction method based on a lightweight network
CN110136062A (en) * 2019-05-10 2019-08-16 武汉大学 Super-resolution reconstruction method combining semantic segmentation
CN110136062B (en) * 2019-05-10 2020-11-03 武汉大学 Super-resolution reconstruction method combining semantic segmentation
CN110837811A (en) * 2019-11-12 2020-02-25 腾讯科技(深圳)有限公司 Method, device and equipment for generating semantic segmentation network structure and storage medium
CN110837811B (en) * 2019-11-12 2021-01-05 腾讯科技(深圳)有限公司 Method, device and equipment for generating semantic segmentation network structure and storage medium
CN113538227A (en) * 2020-04-20 2021-10-22 华为技术有限公司 Image processing method based on semantic segmentation and related equipment
CN113538227B (en) * 2020-04-20 2024-04-12 华为技术有限公司 Image processing method based on semantic segmentation and related equipment
US20210209732A1 (en) * 2020-06-17 2021-07-08 Beijing Baidu Netcom Science And Technology Co., Ltd. Face super-resolution realization method and apparatus, electronic device and storage medium
US11710215B2 (en) * 2020-06-17 2023-07-25 Beijing Baidu Netcom Science And Technology Co., Ltd. Face super-resolution realization method and apparatus, electronic device and storage medium
CN112419158A (en) * 2020-12-07 2021-02-26 上海互联网软件集团有限公司 Image video super-resolution and super-definition reconstruction system and method
CN113657388A (en) * 2021-07-09 2021-11-16 北京科技大学 Image semantic segmentation method fusing image super-resolution reconstruction
CN113657388B (en) * 2021-07-09 2023-10-31 北京科技大学 Image semantic segmentation method for super-resolution reconstruction of fused image

Also Published As

Publication number Publication date
CN109191392B (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN109191392A (en) A kind of image super-resolution reconstructing method of semantic segmentation driving
CN109389556A (en) The multiple dimensioned empty convolutional neural networks ultra-resolution ratio reconstructing method of one kind and device
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN108492248A (en) Depth map super-resolution method based on deep learning
CN107844795B (en) Convolutional neural network feature extraction method based on principal component analysis
CN110634108B (en) Composite degraded network live broadcast video enhancement method based on element-cycle consistency confrontation network
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN109035149A (en) A kind of license plate image based on deep learning goes motion blur method
CN108259994B (en) Method for improving video spatial resolution
CN107563965A (en) Jpeg compressed image super resolution ratio reconstruction method based on convolutional neural networks
CN107993238A (en) A kind of head-and-shoulder area image partition method and device based on attention model
CN110163801A (en) A kind of Image Super-resolution and color method, system and electronic equipment
Chen et al. Single image super-resolution using deep CNN with dense skip connections and inception-resnet
CN109920012A (en) Image colorant system and method based on convolutional neural networks
CN107784628A (en) A kind of super-resolution implementation method based on reconstruction optimization and deep neural network
CN116091886A (en) Semi-supervised target detection method and system based on teacher student model and strong and weak branches
CN112288630A (en) Super-resolution image reconstruction method and system based on improved wide-depth neural network
CN111127331A (en) Image denoising method based on pixel-level global noise estimation coding and decoding network
CN110751271B (en) Image traceability feature characterization method based on deep neural network
CN112580473A (en) Motion feature fused video super-resolution reconstruction method
CN112364838A (en) Method for improving handwriting OCR performance by utilizing synthesized online text image
CN115861614A (en) Method and device for automatically generating semantic segmentation graph based on down jacket image
Gao et al. Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization.
CN111105354A (en) Depth image super-resolution method and device based on multi-source depth residual error network
CN109993701A (en) A method of the depth map super-resolution rebuilding based on pyramid structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant