CN108600782A - Video super-resolution method, device and computer readable storage medium - Google Patents
- Publication number
- CN108600782A CN108600782A CN201810309141.XA CN201810309141A CN108600782A CN 108600782 A CN108600782 A CN 108600782A CN 201810309141 A CN201810309141 A CN 201810309141A CN 108600782 A CN108600782 A CN 108600782A
- Authority
- CN
- China
- Prior art keywords
- image
- video
- pending image
- resolution
- pending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234363—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/5682—Policies or rules for updating, deleting or replacing the stored data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23106—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23406—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving management of server-side video buffer
Abstract
The invention discloses a video super-resolution method: a pending image is amplified and its scale-zoom features are extracted to obtain a first pending image; the first pending image is sent to a residual network, which outputs a corrected second pending image; the attribute of the second pending image is set to append-only write and the image is saved on a content-providing node; then, based on the scale in the scale amplification module, the second pending image undergoes scale-reduction processing to generate and output a restored image. The invention also discloses a video super-resolution device and a computer-readable storage medium. The method effectively solves the technical problems that video images at different magnification ratios cannot share convolutional-neural-network training results, and that web caches handle append-only written content inefficiently.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a video super-resolution method, a device and a computer-readable storage medium.
Background technology
Super-resolution processing is a technology for improving the resolution of a video or an image; informally, it raises the resolution of an original image by hardware or software means. During acquisition, transmission and storage, an image or video may suffer quality degradation due to various constraints. With the development of computer multimedia technology, people demand ever higher definition from digital pictures, so it is often necessary to improve the resolution of a video or image through super-resolution processing, while expecting the processed video to remain sharp.
At present, super-resolution technology has very wide uses in daily life, including high-definition television, medical imaging, satellite imagery, security detection, microscopic imaging, virtual reality and other fields. In digital television in particular, converting a standard digital television signal into a high-definition signal by super-resolution reconstruction is an important application that can effectively improve video clarity. Super-resolution follows the principle that a deeper network gives a better effect, but super-resolution based on the SRResNet structure suffers, as the network deepens, from excessive parameters, slow gradient convergence, difficult training and reduced real-time performance, and the classic ResNet model does not address how super-resolution at different magnification ratios is to be realized. The classic ResNet model uses batch normalization (Batch Normalization) to stabilize gradients and accelerate training, but as network depth increases, batch normalization makes the computational overhead excessive, and its standardization of features is ill-suited to super-resolution. A processing method different from batch normalization is therefore needed to reduce computational overhead and accelerate convergence.
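The weight-normalization alternative used later in the description (one weight-normalization module per bottleneck residual unit) can be sketched in a few lines. This is a minimal, framework-free illustration of the reparameterization w = g · v/‖v‖, not the patent's actual implementation; the vector values are made up:

```python
import math

def weight_norm(v, g):
    """Reparameterize a weight vector as w = g * v / ||v||.

    Unlike batch normalization, this depends only on the weights,
    not on batch statistics, so per-sample compute stays cheap.
    """
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

w = weight_norm([3.0, 4.0], g=2.0)  # ||v|| = 5, so w = [1.2, 1.6]
print(w)
```

The direction of the weight vector and its magnitude g are learned separately, which stabilizes gradients without the feature standardization that batch normalization imposes.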
Meanwhile when by network buffer system transmitting video files,.Web cachings are located at client and content source server
Between.When client request content, the web cachings export the copy of content for preserving source server, so as to next
When the request of a same content arrives, directly by the copy locally preserved, service is provided for client, is rung to reach to shorten
It should postpone, reduce the purpose of network bandwidth consumption, while realize the work(that media content is automatically performed distribution according to user's request
Energy.When the original contents on source server are updated, the copy that web is cached can be caused to fail.For this purpose, defined in HTTP standards
A set of rule and mechanism for web cache managements.If web cachings detect that the copy locally preserved alreadys exceed effectively
Phase then needs to confirm whether the content is also effective to source server.If original contents have been updated, web is cached just
It needs to re-download original contents and cache.But for being segmented newer original contents, this transmission mode, which will exist, asks
Topic.At this time due to detecting that original contents are updated, web caching will more latest copy, again from source server by entire content
Transmission one time.But in fact, when updating every time, the content cached originally is not changed, all or effective, it is only necessary under
Carry the newest data being added.Obviously, for the one section of content newly increased, and entire file is retransmitted, this can be seriously affected
The performance of communication system, meaningless consumption network bandwidth increase operating lag, against the original intention of web caching deployment.
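The inefficiency described above, and the append-only remedy the invention proposes, can be shown with a toy cache. This sketch is only an illustration under invented names (`refresh_copy`, the byte strings), not the patented protocol: when content is known to be append-only, the cache keeps its valid prefix and transfers only the bytes beyond what it already holds.

```python
def refresh_copy(local: bytes, origin: bytes, append_only: bool) -> tuple[bytes, int]:
    """Update a cached copy; return (new copy, bytes transferred)."""
    if append_only and origin.startswith(local):
        delta = origin[len(local):]          # fetch only the appended tail
        return local + delta, len(delta)
    return origin, len(origin)               # plain behavior: re-download everything

cached = b"segment1"
updated = b"segment1segment2"

copy, cost = refresh_copy(cached, updated, append_only=True)
print(cost)  # 8 bytes transferred instead of 16
```

The saving grows with every update cycle: a plain cache re-transfers the whole file each time, while the append-only cache transfers only each new segment once.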
The above is provided only to facilitate understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main purpose of the present invention is to provide a video super-resolution method, a device and a computer-readable storage medium, intended to solve the technical problems that video images at different magnification ratios cannot share convolutional-neural-network training results and that web caches handle append-only written content inefficiently.
To achieve the above object, the present invention provides a video super-resolution method. The method is applied to a video super-resolution system that includes a content-generating end, and comprises the following steps:

the content-generating end obtains a pending video and decomposes it into several frames of pending images;

each pending image is amplified and its scale-zoom features are extracted, yielding a first pending image after scale-zoom feature extraction;

the first pending image is sent to a residual network, which outputs a corrected second pending image;

the attribute of the second pending image is set to append-only write (append only) and the image is saved on a content-providing node, where the content-providing node is the source web server or an upper-level web cache in a multi-level web cache system;

based on the scale in the scale amplification module, scale-reduction processing is applied to the second pending image, generating and outputting a restored image;

according to the timing position of each restored image, the restored images are integrated into a video segment, giving the super-resolution-processed video corresponding to the currently pending video.
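The steps above can be sketched as a pipeline skeleton. Every function name below (`super_resolve_video`, `amplify`, `residual_net`, `reduce_scale`) is invented for illustration; the sketch fixes only the data flow the claims describe, not any real implementation:

```python
from typing import Callable, List

def super_resolve_video(frames: List[str],
                        amplify: Callable[[str], str],
                        residual_net: Callable[[str], str],
                        reduce_scale: Callable[[str], str]) -> List[str]:
    """Run each pending frame through the claimed pipeline, preserving order."""
    restored = []
    for frame in frames:                       # video decomposed into frames
        first = amplify(frame)                 # amplification + scale-zoom features
        second = residual_net(first)           # residual-network correction
        # (the second pending image would be saved append-only here)
        restored.append(reduce_scale(second))  # scale reduction -> restored image
    return restored                            # integration by timing position

out = super_resolve_video(["f0", "f1"],
                          amplify=lambda f: f + ":amp",
                          residual_net=lambda f: f + ":res",
                          reduce_scale=lambda f: f + ":red")
print(out)  # ['f0:amp:res:red', 'f1:amp:res:red']
```

Keeping the loop in frame order is what lets the final integration step reassemble the restored images by their timing positions.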
Preferably, the step of amplifying the pending image and extracting its scale-zoom features to obtain the first pending image after scale-zoom feature extraction includes:

obtaining the low-resolution pending image and preprocessing it in a preprocessing convolutional layer;

sending the preprocessed pending image to the scale amplification module, amplifying it according to a preset amplification scale and extracting its scale-zoom features, to obtain the first pending image with extracted scale-zoom features.
Preferably, the preset amplification scale includes 2 times, 3 times or 4 times.
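A preset integer amplification scale simply multiplies the pixel grid in each direction. The nearest-neighbour upscaler below is only a stand-in to show what a 2x, 3x or 4x scale does to the pixel count; the patented module extracts learned features during amplification, which this sketch does not attempt:

```python
def upscale(image, scale):
    """Nearest-neighbour upscale: each pixel becomes a scale x scale block."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(scale)]   # widen the row
        out.extend([list(wide) for _ in range(scale)])  # repeat it vertically
    return out

small = [[1, 2],
         [3, 4]]
big = upscale(small, 2)
# 1 pixel -> a 2x2 block of 4 pixels, as described in the embodiments below
print(len(big), len(big[0]))  # 4 4
```

At scale 2 a W x H image becomes 2W x 2H, so the pixel count grows by a factor of 4; scales 3 and 4 grow it by 9 and 16 respectively.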
Preferably, the step of sending the first pending image to the residual network so that the residual network outputs the corrected second pending image includes:

sending the first pending image to the residual network, where it is processed by several bottleneck residual units to generate the corrected second pending image;

sending the second pending image to the attribute-setting module.
Preferably, the residual network includes several bottleneck residual units and one convolutional layer, and each bottleneck residual unit is connected to one weight-normalization module;

each bottleneck residual unit includes three convolutional layers, with one activation-function layer between every two convolutional layers, the activation function being the PReLU function;

the activation function includes a variable whose value is learned by the preceding network layer.
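PReLU differs from plain ReLU only in that the negative-side slope is the learned variable mentioned above rather than a fixed zero. A minimal sketch (the slope 0.25 is a common initialization, not a value taken from the patent):

```python
def prelu(x: float, a: float) -> float:
    """PReLU: identity for x >= 0, learned slope a for x < 0."""
    return x if x >= 0 else a * x

# a = 0 would reduce PReLU to ReLU; here a is a trainable parameter
print(prelu(2.0, 0.25))   # 2.0
print(prelu(-2.0, 0.25))  # -0.5
```

Because a is trained along with the network weights, negative activations are attenuated rather than discarded, which helps against the gradient-disappearance problem noted in the background.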
Preferably, the step of setting the attribute of the second pending image to append-only write (append only) and saving it on the content-providing node includes:

in the attribute-setting module, setting the attribute of the second pending image through the file system, adding an append-only label through a database, or realizing the append-only label by attaching a metadata file;

saving the second pending image whose attribute is append-only on the source web server, or having an upper-level web cache obtain the append-only second pending file from the source web server and save it locally as a copy.
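Of the three labelling options, the metadata-file variant is the easiest to illustrate. This sketch writes a sidecar JSON file next to the image; the sidecar naming convention and field are invented for the example, since the patent does not fix a format:

```python
import json
import os
import tempfile

def mark_append_only(path: str) -> str:
    """Attach a sidecar metadata file declaring the image append-only."""
    meta_path = path + ".meta.json"  # hypothetical sidecar convention
    with open(meta_path, "w") as f:
        json.dump({"append_only": True}, f)
    return meta_path

def is_append_only(path: str) -> bool:
    """Check the sidecar; absence of a label means not append-only."""
    meta_path = path + ".meta.json"
    if not os.path.exists(meta_path):
        return False
    with open(meta_path) as f:
        return json.load(f).get("append_only", False)

with tempfile.TemporaryDirectory() as d:
    img = os.path.join(d, "frame0001.png")
    open(img, "wb").close()
    mark_append_only(img)
    print(is_append_only(img))  # True
```

A file-system flag or a database column would serve the same purpose; what matters is that the web cache can read the label and switch to incremental fetching.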
In addition, to achieve the above object, the present invention further provides a video super-resolution method, characterized in that the method is applied to a video super-resolution system that further includes a web cache end, the method comprising the following steps:

when the web cache end receives an image-processing request sent by a user device, it determines, based on the locally saved copy, whether the copy needs to be updated;

after updating the locally saved copy, it sends the image file requested by the user device to the scale recovery module, and returns the file to the user device after scale-reduction processing is completed.
Preferably, the step in which the web cache end, after receiving the image-processing request sent by the user device, determines based on the locally saved copy whether the copy needs to be updated includes:

after the web cache end receives the image-processing request sent by the user device, checking the locally saved copy;

if the copy does not contain the image file requested by the user terminal, or the requested image file is incomplete in the copy, or the image file in the copy has expired, and the attribute of the image file is append-only, then initiating a request to the content-providing node to obtain the content missing from the copy.
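For an append-only file, "the content missing from the copy" is exactly the byte range past what the cache already holds, which maps naturally onto an HTTP Range request. The sketch below only builds the request header; it illustrates how such a fetch could be phrased, not a statement that the patent mandates Range requests:

```python
def missing_range_header(local_length: int) -> dict:
    """Request everything from the first byte the cache does not have."""
    return {"Range": f"bytes={local_length}-"}

# the cache already holds 1024 bytes of an append-only image file
print(missing_range_header(1024))  # {'Range': 'bytes=1024-'}
```

The content-providing node can then answer with 206 Partial Content carrying only the appended segment, which is the bandwidth saving the invention targets.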
In addition, to achieve the above object, the present invention further provides a video super-resolution device, characterized in that the device includes a memory, a processor, and a video super-resolution program stored on the memory and runnable on the processor, where the program, when executed by the processor, realizes the steps of the video super-resolution method according to any of the above.

In addition, to achieve the above object, the present invention further provides a computer-readable storage medium, characterized in that a video super-resolution program is stored on the computer-readable storage medium, and the program, when executed by a processor, realizes the steps of the video super-resolution method according to any of the above.
In the solution of the present invention, the content-generating end obtains a pending video and decomposes it into several frames of pending images; each pending image is then amplified and its scale-zoom features extracted, yielding the first pending image after scale-zoom feature extraction; the first pending image is sent to the residual network, which outputs the corrected second pending image; the attribute of the second pending image is then set to append-only write (append only) and the image is saved on the content-providing node, where the content-providing node is the source web server or an upper-level web cache in a multi-level web cache system; afterwards, based on the scale in the scale amplification module, scale-reduction processing is applied to the second pending image to generate and output a restored image; finally, according to the timing position of each restored image, the restored images are integrated into a video segment, giving the super-resolution-processed video corresponding to the currently pending video. The method effectively solves the technical problems that video images at different magnification ratios cannot share convolutional-neural-network training results and that web caches handle append-only written content inefficiently.
Description of the drawings
Fig. 1 is a schematic structural diagram of the terminal to which the video super-resolution device belongs, in the hardware running environment involved in the embodiments of the present invention;

Fig. 2 is a schematic flow diagram of the first embodiment of the video super-resolution method of the present invention;

Fig. 3 is a schematic refinement flow diagram, in the second embodiment of the video super-resolution method of the present invention, of the step of amplifying the pending image, extracting its scale-zoom features and obtaining the first pending image after scale-zoom feature extraction;

Fig. 4 is a schematic refinement flow diagram, in the third embodiment of the video super-resolution method of the present invention, of the step of sending the first pending image to the residual network so that the residual network outputs the corrected second pending image;

Fig. 5 is a schematic flow diagram of the fourth embodiment of the video super-resolution method of the present invention.
The realization of the objects, functions and advantages of the present invention will be further described in conjunction with the embodiments and with reference to the accompanying drawings.
Specific implementation mode
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the terminal to which the device belongs, in the hardware running environment involved in the embodiments of the present invention.
The terminal of the embodiments of the present invention may be a PC, or a portable terminal device with a display function such as a smartphone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, or pocket computer.
As shown in Fig. 1, the terminal may include a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 realizes connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the terminal may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module and the like, where the sensors include an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor may include an ambient-light sensor and a proximity sensor: the ambient-light sensor can adjust the brightness of the display according to ambient light, and the proximity sensor can switch off the display and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile terminal (such as portrait/landscape switching, related games and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). The mobile terminal may of course also be provided with other sensors such as a gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described in detail here.
Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
As shown in Fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user-interface module and a video super-resolution program.
In the terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with it; the user interface 1003 is mainly used to connect to a client (user terminal) and perform data communication with it; and the processor 1001 may be used to call the video super-resolution program stored in the memory 1005.
In the present embodiment, the video super-resolution device includes a memory 1005, a processor 1001, and a video super-resolution program stored on the memory 1005 and runnable on the processor 1001, where the processor 1001, when calling the video super-resolution program stored in the memory 1005, executes the following operations:

the content-generating end obtains a pending video and decomposes it into several frames of pending images;

each pending image is amplified and its scale-zoom features are extracted, yielding a first pending image after scale-zoom feature extraction;

the first pending image is sent to the residual network, which outputs a corrected second pending image;

the attribute of the second pending image is set to append-only write (append only) and the image is saved on a content-providing node, where the content-providing node is the source web server or an upper-level web cache in a multi-level web cache system;

based on the scale in the scale amplification module, scale-reduction processing is applied to the second pending image, generating and outputting a restored image;

according to the timing position of each restored image, the restored images are integrated into a video segment, giving the super-resolution-processed video corresponding to the currently pending video.
Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

obtaining the low-resolution pending image and preprocessing it in the preprocessing convolutional layer;

sending the preprocessed pending image to the scale amplification module, amplifying it according to the preset amplification scale and extracting its scale-zoom features, to obtain the first pending image with extracted scale-zoom features.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operation:

the preset amplification scale includes 2 times, 3 times or 4 times.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

sending the first pending image to the residual network, where it is processed by several bottleneck residual units to generate the corrected second pending image;

sending the second pending image to the attribute-setting module.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

the residual network includes several bottleneck residual units and one convolutional layer, and each bottleneck residual unit is connected to one weight-normalization module;

each bottleneck residual unit includes three convolutional layers, with one activation-function layer between every two convolutional layers, the activation function being the PReLU function;

the activation function includes a variable whose value is learned by the preceding network layer.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

in the attribute-setting module, setting the attribute of the second pending image through the file system, adding an append-only label through a database, or realizing the append-only label by attaching a metadata file;

saving the second pending image whose attribute is append-only on the source web server, or having an upper-level web cache obtain the append-only second pending file from the source web server and save it locally as a copy.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

when the web cache end receives an image-processing request sent by a user device, determining, based on the locally saved copy, whether the copy needs to be updated;

after updating the locally saved copy, sending the image file requested by the user device to the scale recovery module and returning it to the user device after scale-reduction processing is completed.

Further, the processor 1001 may call the video super-resolution program stored in the memory 1005 and also execute the following operations:

after the web cache end receives the image-processing request sent by the user device, checking the locally saved copy;

if the copy does not contain the image file requested by the user terminal, or the requested image file is incomplete in the copy, or the image file in the copy has expired, and the attribute of the image file is append-only, then initiating a request to the content-providing node to obtain the content missing from the copy.
A first embodiment of the present invention provides a video super-resolution method. Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the video super-resolution method of the present invention. The video super-resolution method includes:
Step S10: the content generating end obtains a pending video and decomposes the pending video into several frames of pending images;
A video is composed entirely of static pictures, which are referred to as frames. In general, when the frame rate is lower than about 15 frames per second, continuous motion video appears to pause. China adopts the PAL television standard, which specifies 25 frames per second for video (interlaced mode, 625 scan lines per frame). The frame rate of a video is generally determined at production time according to the application: the more frames, the larger the data volume, so the frame rate is sometimes lowered to reduce data volume, for example to only 16 frames per second. Besides these, there are also standards such as 24, 25, 29.97 and 30 frames per second. Many software tools currently on the market can edit a pending video to obtain several complete pending frame images.
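As a minimal illustration of the frame-rate arithmetic described above (a hypothetical helper for intuition only, not part of the patented method):

```python
# Sketch (not from the patent): relating frame rate, duration and the number
# of still frames a pending video decomposes into in step S10.
def frame_count(fps: float, duration_s: float) -> int:
    """Number of frame images a video of the given length decomposes into."""
    return round(fps * duration_s)

# PAL video (25 frames/second, as cited for China) lasting 10 seconds:
pal_frames = frame_count(25, 10)   # 250 pending frame images
# A reduced-rate clip of 16 frames/second over the same 10 seconds:
low_frames = frame_count(16, 10)   # 160 frames, i.e. a smaller data volume
```

This makes concrete why lowering the frame rate is used to reduce data volume.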
Step S20: amplification processing is performed on the pending image, and the scaling feature is extracted, obtaining a first pending image after scaling-feature extraction;
When the pending low-resolution image is obtained, the pending image is input into the pre-processing convolutional layer for feature extraction. After the feature extraction of the original pending image is completed, the feature-extracted image is sent to the scale amplification module. The scale amplification module is preset with three different amplification scales, namely 2 times, 3 times and 4 times, and a different amplification scale can be selected and applied according to the actual situation. Under normal conditions, different video qualities are suitable for different amplification scales. When the amplification scale is 2 times, the size of each pixel in a single direction becomes twice the original; that is, 1 pixel becomes 4 pixels arranged 2 × 2, so the size of the amplified image pixel in any direction becomes twice the original. The cases of 3-times and 4-times amplification are analogous: each pixel becomes several times the original. For example, at an amplification scale of 3 times one pixel becomes 9 pixels, and at 4 times one pixel becomes 16 pixels.
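The pixel-count bookkeeping above can be sketched with a nearest-neighbour repeat (illustrative only; the patent's amplification module is a learned layer, not a simple repeat):

```python
import numpy as np

# Sketch: at amplification scale s, one pixel becomes an s x s block,
# i.e. s*s pixels, as described in the text.
def amplify(img: np.ndarray, s: int) -> np.ndarray:
    """Repeat every pixel s times along each spatial axis."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

img = np.array([[7]])          # a single pixel
print(amplify(img, 2).shape)   # (2, 2): 1 pixel -> 4 pixels in a 2x2 block
print(amplify(img, 3).size)    # 9 pixels at scale 3
print(amplify(img, 4).size)    # 16 pixels at scale 4
```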
After the pending image completes the amplification processing at a particular scale in the amplification module, the scaling feature is extracted. The scaling feature is the characteristic information about the amplification factor, indicating how much an image has been amplified. The first pending image after scaling-feature extraction can then be obtained; this image carries no scaling feature, which means that for different amplification scales, the first pending image obtained after scaling-feature extraction is the same.
Step S30: the first pending image is sent to the residual network, so that the residual network outputs a revised second pending image;
The residual network is in fact a convolutional neural network composed of several bottleneck residual units and convolutional layers, with a weight normalization module added after each bottleneck residual unit. The structure of each bottleneck residual unit includes three convolutional layers and two activation function layers, the activation function layers being located between every two convolutional layers. The activation function uses the PReLU function, which contains one variable learned from the previous network layer. In addition, this embodiment uses the ResNet-34 network model.
Before residual networks appeared, the depth network models people used had fewer layers; a series of means such as setting reasonable weight initialization, adding batch normalization and improving the activation function effectively alleviated gradient vanishing and made deep network training feasible. As the number of network layers increases, the error should in theory become smaller while the expressive ability of the model grows; but after simply stacking network layers, the training error becomes much larger, mainly under the influence of factors such as gradient vanishing. Residual networks then appeared. A residual network is built by stacking residual modules, which are divided into conventional residual modules and bottleneck residual modules. The 1 × 1 convolutions in a bottleneck residual module raise and lower the dimensionality, so that the 3 × 3 convolution can be carried out on a lower-dimensional input; this design greatly reduces the amount of computation and is especially effective in very deep networks. The activation function in the residual network is changed from ReLU to PReLU, which introduces one learnable parameter that helps the network adaptively learn a partial negative-slope coefficient. In addition, the above residual network uses an image up-sampling method based on a sub-pixel convolutional layer.
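The sub-pixel convolutional layer mentioned above rearranges channels into spatial resolution (the "pixel shuffle" operation). A minimal sketch of that rearrangement, under the usual (C·r², H, W) → (C, H·r, W·r) layout assumption, since the patent does not spell out the implementation:

```python
import numpy as np

# Sketch of sub-pixel (pixel-shuffle) up-sampling: a feature tensor of shape
# (C*r*r, H, W) is rearranged into (C, H*r, W*r) with upscaling factor r.
def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into an r x r sub-grid
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)  # C*r*r = 4, r = 2
up = pixel_shuffle(feat, 2)
print(up.shape)  # (1, 6, 6): each spatial location expands to a 2x2 block
```

Each output 2 × 2 block draws its four values from the four input channels at the same location, which is what lets a convolution at low resolution produce a high-resolution image.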
Step S40: the attribute of the second pending image is set to append-only write (append only) and the image is saved on a content providing node, where the content providing node refers to the source web server or an upper-level web cache in a multistage web caching system;
The second pending image is a segment file that can only be updated by appending. For example, HTTP streaming service scripts are always updated by appending to the end of the file. The attribute of the image file is set to append-only by setting the attribute of the content file through the file system, by adding an append-only label in the database, or by realizing an append-only label through a metadata file. The content providing node refers to the source web server or an upper-level web cache in a multistage web caching system. Saving the image file on the content providing node includes: the content generating end saving the content file whose attribute is append-only to the source web server; or the upper-level web cache obtaining the content file whose attribute is append-only from the source web server and saving it locally as a copy.
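One way to picture the append-only constraint described above (all names here are hypothetical; the patent prescribes the label, not an API):

```python
# Hypothetical sketch: a cached file carrying the append-only attribute label.
class CachedFile:
    def __init__(self, data: bytes, append_only: bool):
        self.data = bytearray(data)
        self.append_only = append_only   # the "append only" attribute

    def update(self, new_bytes: bytes) -> None:
        if self.append_only:
            self.data += new_bytes       # updates may only extend the file
        else:
            self.data = bytearray(new_bytes)  # ordinary files are replaced

segment = CachedFile(b"frame-1;", append_only=True)
segment.update(b"frame-2;")
print(bytes(segment.data))  # b'frame-1;frame-2;'
```

Because an append-only copy can never be invalidated in the middle, a cache only ever needs to fetch the missing suffix, which is the efficiency gain the later embodiments rely on.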
Step S50: based on the scale in the scale amplification module, reduction processing is performed on the second pending image to generate and output a restored image;
A scale recovery module is arranged after the residual network. The main function of this module is to shrink the pending image that was amplified by the amplification module back to its original scale, and finally to generate and output a high-resolution restored image, thereby obtaining a high-quality video.
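A toy sketch of the scale recovery step: shrinking an image amplified by factor s back to its original size, here by averaging each s × s block (illustrative only; the patent's recovery module is part of the network, not a fixed average):

```python
import numpy as np

# Sketch: undo an s-times amplification by averaging each s x s block.
def reduce_scale(img: np.ndarray, s: int) -> np.ndarray:
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

# A 1x2 image amplified 2 times (nearest-neighbour), then reduced back:
up = np.repeat(np.repeat(np.array([[1.0, 2.0]]), 2, axis=0), 2, axis=1)
print(reduce_scale(up, 2))   # [[1. 2.]] -- the original size is restored
```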
Step S60: according to the timing position of each restored image, the restored images are integrated into a segment of video, obtaining the super-resolution-processed video corresponding to the current pending video.
Similar to step S10, integrating several super-resolution-processed restored images into a video is the inverse process of decomposing a video into images, and existing software tools on the market can be used to edit the restored images so as to integrate them into a complete super-resolution-processed video.
The video super-resolution method proposed in this embodiment obtains a pending video through the content generating end and decomposes the pending video into several frames of pending images; then amplifies the pending image and extracts the scaling feature, obtaining a first pending image after scaling-feature extraction; then sends the first pending image to the residual network so that the residual network outputs a revised second pending image; then sets the attribute of the second pending image to append-only write (append only) and saves it on the content providing node, where the content providing node refers to the source web server or an upper-level web cache in a multistage web caching system; afterwards, based on the scale in the scale amplification module, performs reduction processing on the second pending image to generate and output a restored image; and finally, according to the timing position of each restored image, integrates the restored images into a segment of video to obtain the super-resolution-processed video corresponding to the current pending video. This method can effectively solve the technical problems that video images under different magnifications cannot share convolutional neural network training results and that web caches process append-only written content inefficiently.
Based on the first embodiment, a second embodiment of the video super-resolution method of the present invention is proposed. Referring to Fig. 3, step S20 includes:
Step S21: the pending image of low resolution is obtained, and the pending image is pre-processed in the pre-processing convolutional layer;
When the pending low-resolution image is obtained, the pending image is input into the pre-processing convolutional layer for feature extraction; after the feature extraction of the original pending image is completed, the feature-extracted image is sent to the scale amplification module.
Step S22: the pre-processed pending image is sent to the scale amplification module, the pending image is amplified based on the preset amplification scale and the scaling feature is extracted, obtaining a first pending image from which the scaling feature has been extracted.
The scale amplification module is preset with three different amplification scales, namely 2 times, 3 times and 4 times, and a different amplification scale can be selected and applied according to the actual situation. Since digital television uses a fixed set of video resolutions, such as 720P, 1080P, 2K and 4K, under normal conditions different video qualities are suitable for different amplification scales. After the pending image completes the amplification processing at a particular scale in the amplification module, the scaling feature is extracted. The scaling feature is the characteristic information about the amplification factor, indicating how much an image has been amplified. The first pending image after scaling-feature extraction can then be obtained; this image carries no scaling feature, which means that for different amplification scales, the first pending image obtained after scaling-feature extraction is the same.
Further, in one embodiment, the preset amplification scale includes 2 times, 3 times or 4 times.
When the amplification scale is 2 times, the size of each pixel in a single direction becomes twice the original; that is, 1 pixel becomes 4 pixels arranged 2 × 2, so the size of the amplified image pixel in any direction becomes twice the original. The cases of 3-times and 4-times amplification are analogous: each pixel becomes several times the original. For example, at an amplification scale of 3 times one pixel becomes 9 pixels, and at 4 times one pixel becomes 16 pixels.
The video super-resolution method proposed in this embodiment obtains a pending image of low resolution and pre-processes the pending image in the pre-processing convolutional layer; then sends the pre-processed pending image to the scale amplification module, amplifies the pending image based on the preset amplification scale and extracts the scaling feature, obtaining a first pending image from which the scaling feature has been extracted. In the super-resolution problem of video images, considering that several adjacent frames are strongly correlated, the method must not only ensure the quality of the super-resolution result but also meet the efficiency requirements of real-time processing.
Based on the first embodiment, a third embodiment of the video super-resolution method of the present invention is proposed. Referring to Fig. 4, step S30 includes:
Step S31: the first pending image is sent to the residual network and processed by several bottleneck residual units in the residual network to generate a revised second pending image;
Before residual networks appeared, the depth network models people used had fewer layers; a series of means such as setting reasonable weight initialization, adding batch normalization and improving the activation function effectively alleviated gradient vanishing and made deep network training feasible. As the number of network layers increases, the error should in theory become smaller while the expressive ability of the model grows; but after simply stacking network layers, the training error becomes much larger, mainly under the influence of factors such as gradient vanishing. Residual networks then appeared. A residual network is built by stacking residual modules, which are divided into conventional residual modules and bottleneck residual modules. The 1 × 1 convolutions in a bottleneck residual module raise and lower the dimensionality, so that the 3 × 3 convolution can be carried out on a lower-dimensional input; this design greatly reduces the amount of computation and is especially effective in very deep networks. The activation function in the residual network is changed from ReLU to PReLU, which introduces one learnable parameter that helps the network adaptively learn a partial negative-slope coefficient. In addition, the above residual network uses an image up-sampling method based on a sub-pixel convolutional layer.
The residual network is in fact a convolutional neural network composed of several bottleneck residual units and convolutional layers, with a weight normalization module added after each bottleneck residual unit. The structure of each bottleneck residual unit includes three convolutional layers and two activation function layers, the activation function layers being located between every two convolutional layers. The activation function uses the PReLU function, which contains one variable learned from the previous network layer.
Step S32: the second pending image is sent to the attribute setting module.
The first pending image without the scaling feature is input into the residual network and undergoes the super-resolution reconstruction processing of the residual network; the revised second pending image is output from the residual network to the scale recovery module, so that the scale recovery module performs scale reduction processing on this image without the scaling feature and generates a restored image with the scale feature.
Further, in one embodiment, the residual network includes several bottleneck residual units and one convolutional layer, and each bottleneck residual unit is connected with one weight normalization module;
The residual network is arranged after the scale amplification module and includes several bottleneck residual units, with one convolutional layer arranged after the above bottleneck residual units; one weight normalization module follows each bottleneck residual unit. Weight normalization is a method of reparameterizing neural network model parameters. Since in a deep neural network the parameter set includes a large number of weights and biases, a major issue in real-time deep learning is how to optimize the processing of the above parameters. In the weight normalization module, in order to accelerate the convergence of the optimization steps based on stochastic gradient descent, the k-dimensional weight vector w is expressed by a k-dimensional vector v and a scale factor g. Through some mathematical transformation, we can obtain the following formulas:
w = (g / ‖v‖) v,  ∇g L = (∇w L · v) / ‖v‖,  ∇v L = (g / ‖v‖) ∇w L − (g ∇g L / ‖v‖²) v
where g is the scale factor, w is the k-dimensional weight vector, v is a k-dimensional vector, ‖v‖ is the Euclidean norm of v, and L is the loss function.
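The reparameterization w = (g / ‖v‖) v can be checked numerically (an illustrative sketch, not the patent's implementation):

```python
import numpy as np

# Sketch of the weight normalization reparameterization used by the weight
# normalization module: w = (g / ||v||) * v, so that ||w|| is fixed to g.
def weight_norm(v: np.ndarray, g: float) -> np.ndarray:
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])        # a k-dimensional vector (k = 2, ||v|| = 5)
w = weight_norm(v, g=2.0)
print(w)                         # [1.2 1.6]
print(np.linalg.norm(w))         # 2.0 -- the norm of w equals g
```

Decoupling the direction (v) from the magnitude (g) is what speeds up the convergence of the stochastic-gradient-descent steps mentioned above.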
Further, in one embodiment, the bottleneck residual unit includes three convolutional layers, with one activation function layer between every two convolutional layers; the activation function is the PReLU function;
Each bottleneck residual unit includes 3 convolutional layers, with one activation function layer between every two convolutional layers; the activation function is the Parametric ReLU, i.e. the PReLU function. The formula of the function is as follows, where α is a variable learned from the previous network layer; the introduced variable α helps the network adaptively learn a partial negative-slope coefficient:
f(x) = x for x > 0, and f(x) = αx for x ≤ 0
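The PReLU formula above can be sketched directly (here α is passed as a fixed number for illustration; in the network it is the learned variable):

```python
import numpy as np

# Sketch of the PReLU activation: f(x) = x for x > 0, alpha * x otherwise.
def prelu(x: np.ndarray, alpha: float) -> np.ndarray:
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, alpha=0.25))   # [-0.5   -0.125  0.     1.5  ]
```

Unlike plain ReLU (α = 0), negative inputs keep a scaled gradient, which is the "partial negative coefficient" the text refers to.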
The video super-resolution method proposed in this embodiment sends the first pending image to the residual network, where it is processed by several bottleneck residual units to generate a revised second pending image; the second pending image is then sent to the attribute setting module. Improving the activation function layer improves the learning ability and adaptability of the residual network.
A fourth embodiment of the present invention provides a video super-resolution method. Referring to Fig. 5, Fig. 5 is a flow diagram of the fourth embodiment of the video super-resolution method of the present invention. The video super-resolution method includes:
Step S70: when the web cache end receives an image processing request sent by a user device, it determines, based on the locally saved copy, whether the copy needs to be updated;
If the content in the copy is incomplete and the attribute of the content file is append-only, a request is initiated to the content providing node to obtain the content missing from the copy. Specifically: if the copy does not contain the content file requested by the user terminal, or the requested content file is incomplete in the copy, or the content file in the copy has expired, the web cache initiates a request to the content providing node; the content providing node returns, in a response message, the content missing from the copy together with information indicating that the attribute of the content file is append-only. After checking the locally saved copy, if the content in the copy is complete, that is, a valid copy exists, the web cache directly reads the copy content and returns it to the user terminal.
Taking video super-resolution as an example, the content generating end generates several pending images for the source web server to read, and sets the append-only attribute. The client sends an image processing request, which is first handled by the web caching system. If the content requested by the client has a copy in the web caching system, the web caching system can directly read the copy content and send it to the client. If there is no local copy, or the copy's validity period has expired, the web caching system sends a request to the source web server or to the upper-level web cache of this web cache, and relays the target file to the client after receiving it.
If the file corresponding to the image processing request does not exist in the web cache, the web cache initiates an HTTP request to the content providing node, that is, the upper-level web cache or the source web server, requesting the entire file. After receiving the data returned by the content providing node, it extracts the part needed by the client, forwards it to the client, caches the received data locally, and sets the file attribute to append-only according to the relevant information in the HTTP response of the content providing node.
If the file corresponding to the image processing request exists in the web cache, the web cache judges whether the copy covers the range requested by the client. If the cached file covers the requested range (for example, the cached file is 10 MB and the client requests the data range 7 MB to 9 MB) and the file is append-only (for example, a segment file), the web cache directly reads the corresponding data from the locally cached file and returns it to the client, carrying in the HTTP response message an extended field of the file attribute indicating that the content is append-only. If the range is not covered, the web cache initiates an HTTP request to the content providing node to check whether the content has been updated, and at the same time requests the missing part of the data (that is, the part starting from 10 MB). After receiving the data returned by the content providing node, it extracts the part needed by the client, forwards it to the client, and in the HTTP response message carries the extended field of the file attribute indicating that the content is append-only. At the same time, it appends the received data to the end of the locally cached file and marks the file as append-only.
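The range-coverage decision just described can be sketched as follows (hypothetical helper names; the patent describes the policy, not this code):

```python
# Sketch: a cached append-only file of `cached_len` bytes either covers the
# client's byte range, or the cache must fetch the suffix from `cached_len`.
def serve_range(cached_len: int, start: int, end: int):
    if end <= cached_len:
        return ("serve_from_cache", None)
    return ("fetch_suffix", cached_len)   # request bytes from cached_len on

MB = 1024 * 1024
# The example from the text: a 10 MB cached file, client requests 7-9 MB.
print(serve_range(10 * MB, 7 * MB, 9 * MB))   # ('serve_from_cache', None)
# A request running past the cached prefix triggers a suffix fetch.
print(serve_range(10 * MB, 9 * MB, 12 * MB))  # ('fetch_suffix', 10485760)
```

Because append-only files only ever grow at the end, the already-cached prefix never needs revalidation; only the suffix is fetched.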
Further, if the content requested by the client is cached locally but the file is not append-only (for example, a playlist description file), the web cache processes it according to the conventional cache policy: it checks whether the cache has expired; if not, it directly reads the cached content and returns it to the client. If it has expired, it requests an update from the content providing node; if the content has not been updated, it directly reads the cached content and returns it to the client; if there is an update, it receives the updated content returned by the content providing node, extracts the relevant data and returns it to the client.
Step S80: when the update to the locally saved copy is completed, the image file requested by the user device is sent to the scale recovery module, and after the scale reduction processing is completed, it is returned to the user device.
The main function of the scale recovery module is to shrink the pending image that was amplified by the amplification module back to its original scale, and finally to generate and output a high-resolution restored image, thereby obtaining a high-quality video.
In the video super-resolution method proposed in this embodiment, after the web cache end receives an image processing request sent by the user device, it determines, based on the locally saved copy, whether the copy needs to be updated; then, when the update to the locally saved copy is completed, it sends the image file requested by the user device to the scale recovery module and returns it to the user device after the scale reduction is completed. The residual network augmented with weight normalization modules greatly reduces the computational cost of weight standardization, avoids the extra randomness introduced during noise estimation, and can adapt to a greater variety of network models.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium on which a video super-resolution program is stored. When the video super-resolution program is executed by a processor, the following operations are realized:
the content generating end obtains a pending video and decomposes the pending video into several frames of pending images;
amplification processing is performed on the pending image and the scaling feature is extracted, obtaining a first pending image after scaling-feature extraction;
the first pending image is sent to the residual network, so that the residual network outputs a revised second pending image;
the attribute of the second pending image is set to append-only write (append only) and the image is saved on the content providing node, where the content providing node refers to the source web server or an upper-level web cache in a multistage web caching system;
based on the scale in the scale amplification module, reduction processing is performed on the second pending image to generate and output a restored image;
according to the timing position of each restored image, the restored images are integrated into a segment of video, obtaining the super-resolution-processed video corresponding to the current pending video.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
the pending image of low resolution is obtained, and the pending image is pre-processed in the pre-processing convolutional layer;
the pre-processed pending image is sent to the scale amplification module, the pending image is amplified based on the preset amplification scale and the scaling feature is extracted, obtaining a first pending image from which the scaling feature has been extracted.
Further, when the video super-resolution program is executed by the processor, the following operation is also realized:
the preset amplification scale includes 2 times, 3 times or 4 times.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
the first pending image is sent to the residual network and processed by several bottleneck residual units in the residual network to generate a revised second pending image;
the second pending image is sent to the attribute setting module.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
the residual network includes several bottleneck residual units and one convolutional layer, and each bottleneck residual unit is connected with one weight normalization module;
the bottleneck residual unit includes three convolutional layers, with one activation function layer between every two convolutional layers; the activation function is the PReLU function;
the activation function includes a variable, and the value of the variable is learned by the previous network layer.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
in the attribute setting module, the attribute of the second pending image is set through the file system, or an append-only label is added in the database, or an append-only label is realized by adding a metadata file;
the second pending image whose attribute is append-only is saved to the source web server, or the upper-level web cache obtains the second pending file whose attribute is append-only from the source web server and saves it locally as a copy.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
when the web cache end receives an image processing request sent by a user device, it determines, based on the locally saved copy, whether the copy needs to be updated;
when the update to the locally saved copy is completed, the image file requested by the user device is sent to the scale recovery module, and after the scale reduction processing is completed, it is returned to the user device.
Further, when the video super-resolution program is executed by the processor, the following operations are also realized:
after the web cache end receives the image processing request sent by the user device, it checks the locally saved copy;
if the copy does not contain the image file requested by the user terminal, or the requested image file is incomplete in the copy, or the image file in the copy has expired, and the attribute of the image file is append-only, a request is initiated to the content providing node to obtain the content missing from the copy.
It should be noted that herein, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system including that element.
The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, magnetic disk or optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner or a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. A video super-resolution method, characterized in that the video super-resolution method is applied to a video super-resolution system, the video super-resolution system includes a content generating end, and the video super-resolution method includes the following steps:
the content generating end obtains a pending video and decomposes the pending video into several frames of pending images;
amplification processing is performed on the pending image and the scaling feature is extracted, obtaining a first pending image after scaling-feature extraction;
the first pending image is sent to a residual network, so that the residual network outputs a revised second pending image;
the attribute of the second pending image is set to append-only write and the image is saved on a content providing node, where the content providing node refers to a source web server or an upper-level web cache in a multistage web caching system;
based on the scale in a scale amplification module, reduction processing is performed on the second pending image to generate and output a restored image;
according to the timing position of each restored image, the restored images are integrated into a segment of video, obtaining the super-resolution-processed video corresponding to the current pending video.
2. The video super-resolution method according to claim 1, characterized in that the step of performing amplification processing on the pending image, extracting the scaling feature and obtaining the first pending image after scaling-feature extraction includes:
obtaining the pending image of low resolution, and pre-processing the pending image in a pre-processing convolutional layer;
sending the pre-processed pending image to the scale amplification module, amplifying the pending image based on a preset amplification scale and extracting the scaling feature, obtaining the first pending image from which the scaling feature has been extracted.
3. The video super-resolution method according to claim 2, characterized in that the preset amplification scale includes 2 times, 3 times or 4 times.
4. The video super-resolution method according to claim 1, characterized in that the step of sending the first pending image to the residual network so that the residual network outputs the revised second pending image includes:
sending the first pending image to the residual network, where it is processed by several bottleneck residual units in the residual network to generate the revised second pending image;
sending the second pending image to an attribute setting module.
5. The video super-resolution method according to claim 4, characterized in that the residual network comprises several bottleneck residual units and one convolutional layer, and each bottleneck residual unit is connected to one weight normalization module;
each bottleneck residual unit comprises three convolutional layers, with one activation-function layer between every two convolutional layers, the activation function being the PReLU function;
the activation function contains a variable whose value is learned by the preceding network layer.
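A bottleneck residual unit of the kind claimed (three convolutions with a PReLU between each pair, the PReLU slope being a learnable variable, plus a skip connection) can be sketched as follows. For brevity all three convolutions are 1x1 (in a real bottleneck the middle one is typically 3x3) and the weight normalization module is omitted; this is an illustrative sketch, not the patent's network.

```python
import numpy as np

def prelu(x, a):
    """PReLU activation: identity for positive inputs, learned slope `a` for negatives."""
    return np.where(x > 0, x, a * x)

def conv1x1(x, w):
    """1x1 convolution over (H, W, C_in) features with weights (C_in, C_out)."""
    return x @ w

def bottleneck_residual_unit(x, w1, w2, w3, a1=0.25, a2=0.25):
    """Three convolutions with a PReLU between each pair, plus a skip connection."""
    y = conv1x1(x, w1)      # reduce channels
    y = prelu(y, a1)
    y = conv1x1(y, w2)      # transform (a 3x3 conv in a real bottleneck)
    y = prelu(y, a2)
    y = conv1x1(y, w3)      # restore channels
    return x + y            # residual (skip) connection
```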
6. The video super-resolution method according to claim 1, characterized in that the step of setting the attribute of the second pending image to append-write and saving it on the content providing node comprises:
in the attribute setting module, setting the attribute of the second pending image through the file system, adding an attribute label through a database, or realizing the attribute label by adding a metadata file;
saving the second pending image whose attribute is append-write to the source web server, or obtaining the second pending file whose attribute is append-write from the source web server through a higher-level web cache and saving it locally as a copy.
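The third option in claim 6, realising the attribute label by adding a metadata file, might look like a JSON sidecar stored next to the image. The sidecar suffix, key name and attribute value below are illustrative assumptions, not the patent's format:

```python
import json
from pathlib import Path

def set_append_write_attribute(image_path):
    """Mark an image as append-write via a sidecar metadata file (illustrative scheme)."""
    meta_path = Path(str(image_path) + ".meta.json")
    meta_path.write_text(json.dumps({"attribute": "append-write"}))
    return meta_path

def is_append_write(image_path):
    """Check the sidecar metadata file for the append-write attribute label."""
    meta_path = Path(str(image_path) + ".meta.json")
    if not meta_path.exists():
        return False
    return json.loads(meta_path.read_text()).get("attribute") == "append-write"
```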
7. A video super-resolution method, characterized in that the video super-resolution method is applied to a video super-resolution system, the video super-resolution system further comprises a web caching end, and the video super-resolution method comprises the following steps:
when receiving an image processing request sent by a user equipment, the web caching end determines, based on a locally saved copy, whether the copy needs to be updated;
after the update of the locally saved copy is completed, sending the image file requested by the user equipment to a scale recovery module, and returning it to the user equipment after scale reduction processing is completed.
8. The video super-resolution method according to claim 7, characterized in that the step of determining, based on the locally saved copy, whether the copy needs to be updated after the web caching end receives the image processing request sent by the user equipment comprises:
after the web caching end receives the image processing request sent by the user equipment, checking the locally saved copy;
if the copy does not contain the image file requested by the user terminal, or the requested image file in the copy is incomplete, or the image file in the copy has expired, and the attribute of the image file is append-write, initiating a request to the content providing node to obtain the content missing from the copy.
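The copy check of claim 8 can be expressed as a small decision function. The claim's wording leaves it ambiguous whether the append-write condition qualifies all three cases; the reading below applies it only when the file is present, since a missing file has no attribute to test. The `copy` schema (dicts with `complete`, `expired` and `attribute` fields) is an illustrative assumption, not the patent's storage format:

```python
def copy_needs_update(copy, requested_file):
    """Decide whether the locally saved copy must be refreshed before serving."""
    entry = copy.get(requested_file)
    if entry is None:
        return True                     # requested file missing from the copy
    incomplete = not entry.get("complete", False)
    expired = entry.get("expired", False)
    if (incomplete or expired) and entry.get("attribute") == "append-write":
        return True                     # refetch from the content providing node
    return False
```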
9. A video super-resolution device, characterized in that the video super-resolution device comprises: a memory, a processor, and a video super-resolution program stored on the memory and runnable on the processor, the video super-resolution program, when executed by the processor, implementing the steps of the video super-resolution method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a video super-resolution program is stored on the computer-readable storage medium, and the video super-resolution program, when executed by a processor, implements the steps of the video super-resolution method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810309141.XA CN108600782A (en) | 2018-04-08 | 2018-04-08 | Video super-resolution method, device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810309141.XA CN108600782A (en) | 2018-04-08 | 2018-04-08 | Video super-resolution method, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108600782A true CN108600782A (en) | 2018-09-28 |
Family
ID=63621347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810309141.XA Withdrawn CN108600782A (en) | 2018-04-08 | 2018-04-08 | Video super-resolution method, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108600782A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109285119A (en) * | 2018-10-23 | 2019-01-29 | 百度在线网络技术(北京)有限公司 | Super resolution image generation method and device |
CN109413434A (en) * | 2018-11-08 | 2019-03-01 | 腾讯科技(深圳)有限公司 | Image processing method, device, system, storage medium and computer equipment |
CN109977816A (en) * | 2019-03-13 | 2019-07-05 | 联想(北京)有限公司 | Information processing method, device, terminal and storage medium |
CN109977816B (en) * | 2019-03-13 | 2021-05-18 | 联想(北京)有限公司 | Information processing method, device, terminal and storage medium |
CN110446071A (en) * | 2019-08-13 | 2019-11-12 | 腾讯科技(深圳)有限公司 | Neural-network-based multimedia processing method, device, equipment and medium |
CN110958460A (en) * | 2019-11-22 | 2020-04-03 | 北京软通智城科技有限公司 | Video storage method and device, electronic equipment and storage medium |
CN111311522A (en) * | 2020-03-26 | 2020-06-19 | 重庆大学 | Two-photon fluorescence microscopic image restoration method based on neural network and storage medium |
CN111311522B (en) * | 2020-03-26 | 2023-08-08 | 重庆大学 | Neural network-based two-photon fluorescence microscopic image restoration method and storage medium |
CN112131857A (en) * | 2020-09-11 | 2020-12-25 | 安徽中科新辰技术有限公司 | Ultrahigh-resolution visual self-adaptive typesetting method |
CN112131857B (en) * | 2020-09-11 | 2024-05-31 | 安徽中科新辰技术有限公司 | Self-adaptive typesetting method for ultrahigh resolution visualization |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108288251A (en) | Image super-resolution method, device and computer readable storage medium | |
CN108600782A (en) | Video super-resolution method, device and computer readable storage medium | |
US20220261960A1 (en) | Super-resolution reconstruction method and related apparatus | |
CN110189246B (en) | Image stylization generation method and device and electronic equipment | |
CN108635849B (en) | Animation data compression and decompression method and device | |
US20230215076A1 (en) | Image frame display method, apparatus, device, storage medium, and program product | |
CN103179393B (en) | Reduce the DRAM compression scheme of motion compensation and the power consumption in display refreshing | |
CN110996170A (en) | Video file playing method and related equipment | |
CN110213485A (en) | A kind of image processing method and terminal | |
CN111696034B (en) | Image processing method and device and electronic equipment | |
US20190340726A1 (en) | Projection image construction method and device | |
US20230275948A1 (en) | Dynamic user-device upscaling of media streams | |
CN110766610A (en) | Super-resolution image reconstruction method and electronic equipment | |
CN111857515B (en) | Image processing method, device, storage medium and electronic equipment | |
CN110197459B (en) | Image stylization generation method and device and electronic equipment | |
WO2023284503A1 (en) | Tone mapping method and apparatus for panoramic image | |
CN108027715B (en) | The modification of graph command token | |
CN106416231A (en) | Display interface bandwidth modulation | |
CN113409208A (en) | Image processing method, device, equipment and storage medium | |
CN113222178A (en) | Model training method, user interface generation method, device and storage medium | |
CN113705309A (en) | Scene type judgment method and device, electronic equipment and storage medium | |
Kim et al. | A CNN-based multi-scale super-resolution architecture on FPGA for 4K/8K UHD applications | |
CN110392296B (en) | Online playback technology for aircraft custom format trial flight video image | |
US11810267B2 (en) | Efficient server-client machine learning solution for rich content transformation | |
CN112561843B (en) | Method, apparatus, device and storage medium for processing image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2018-09-28