CN110428366A - Image processing method and device, electronic equipment, computer readable storage medium - Google Patents
- Publication number: CN110428366A (application CN201910683492.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- resolution
- processed
- background
- foreground picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
This application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining an image to be processed at a first resolution; identifying a target subject in the image to be processed to obtain a target subject foreground image and a background image; performing super-resolution reconstruction on the target subject foreground image and the background image separately; and fusing the reconstructed target subject foreground image and background image to obtain a target image whose resolution is greater than the first resolution. This improves the detail quality of image reconstruction.
Description
Technical field
This application relates to the field of image processing, and more particularly to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background art
The goal of super-resolution reconstruction is to recover a high-resolution image from a low-resolution image, so that the reconstructed image is clearer. Low-resolution images can be reconstructed by super-resolution to achieve the effect the user wants. Conventional super-resolution techniques generally apply a single, uniform reconstruction to the whole image, treating every region identically, and therefore cannot take the details of the image into account.
Summary of the invention
The embodiments of the present application provide an image processing method, an apparatus, an electronic device, and a computer-readable storage medium that can improve the detail quality of image reconstruction.
An image processing method includes:

obtaining an image to be processed at a first resolution;

identifying a target subject in the image to be processed to obtain a target subject foreground image and a background image;

performing super-resolution reconstruction on the target subject foreground image and the background image separately; and

fusing the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
An image processing apparatus includes:

an obtaining module, configured to obtain an image to be processed at a first resolution;

an identification module, configured to identify a target subject in the image to be processed to obtain a target subject foreground image and a background image;

a reconstruction module, configured to perform super-resolution reconstruction on the target subject foreground image and the background image separately; and

a fusion module, configured to fuse the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
With the above image processing method and apparatus, electronic device, and computer-readable storage medium, an image to be processed at a first resolution is obtained, a target subject in the image is identified to obtain a target subject foreground image and a background image, super-resolution reconstruction is performed on the foreground image and the background image separately, and the reconstructed images are fused into a target image whose resolution is greater than the first resolution. Because the foreground and the background are reconstructed separately, the details of the image can be taken into account, improving the detail quality of the image reconstruction.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required by the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a block diagram of the internal structure of an electronic device in one embodiment;

Fig. 2 is a flowchart of the image processing method in one embodiment;

Fig. 3 is an architecture diagram of the image reconstruction model in one embodiment;

Fig. 4 is a structural diagram of a cascade block in one embodiment;

Fig. 5 is a structural diagram of a cascade block in another embodiment;

Fig. 6 is a flowchart of performing super-resolution reconstruction on the background image in one embodiment;

Fig. 7 is a flowchart of applying the image processing method to a video processing scenario in one embodiment;

Fig. 8 is a flowchart of identifying the target subject in the image to be processed in one embodiment;

Fig. 9 is a flowchart of determining the target subject in the image to be processed according to the subject region confidence map in one embodiment;

Fig. 10 is a schematic diagram of the effect of subject recognition on an image to be processed in one embodiment;

Fig. 11 is an architecture diagram of the image processing method in one embodiment;

Fig. 12 is a structural block diagram of the image processing apparatus in one embodiment;

Fig. 13 is a schematic diagram of the internal structure of an electronic device in another embodiment.
Specific embodiment
To make the objects, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application and are not intended to limit it.
The image processing method in the embodiments of the present application can be applied to an electronic device. The electronic device may be a computer device with a camera, a personal digital assistant, a tablet computer, a smartphone, a wearable device, or the like. When shooting an image, the camera in the electronic device performs auto-focusing to ensure that the captured image is clear.
In one embodiment, the electronic device may include an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Fig. 1 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 1, for ease of description, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in Fig. 1, the image processing circuit includes a first ISP processor 130, a second ISP processor 140, and a control logic 150. The first camera 110 includes one or more first lenses 112 and a first image sensor 114. The first image sensor 114 may include a color filter array (such as a Bayer filter); it can capture the light intensity and wavelength information of each of its imaging pixels and provide a set of image data that can be processed by the first ISP processor 130. The second camera 120 includes one or more second lenses 122 and a second image sensor 124. The second image sensor 124 may likewise include a color filter array (such as a Bayer filter); it can capture the light intensity and wavelength information of each of its imaging pixels and provide a set of image data that can be processed by the second ISP processor 140.
The first image captured by the first camera 110 is transmitted to the first ISP processor 130 for processing. After processing the first image, the first ISP processor 130 may send statistical data of the first image (such as image brightness, image contrast, image color, and so on) to the control logic 150. The control logic 150 can determine control parameters of the first camera 110 according to the statistical data, so that the first camera 110 can perform operations such as auto-focusing and auto-exposure according to the control parameters. After being processed by the first ISP processor 130, the first image may be stored in the image memory 160, and the first ISP processor 130 may also read the image stored in the image memory 160 for further processing. In addition, after being processed by the first ISP processor 130, the first image may be sent directly to the display 170 for display, and the display 170 may also read the image in the image memory 160 for display.
The first ISP processor 130 processes the image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The first ISP processor 130 may perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The image memory 160 may be a part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving data from the interface of the first image sensor 114, the first ISP processor 130 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 160 for additional processing before being displayed. The first ISP processor 130 receives the processed data from the image memory 160 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 130 may be output to the display 170 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 130 may also be sent to the image memory 160, and the display 170 may read the image data from the image memory 160. In one embodiment, the image memory 160 may be configured to implement one or more frame buffers.
The statistical data determined by the first ISP processor 130 may be sent to the control logic 150. For example, the statistical data may include statistical information of the first image sensor 114 such as auto-exposure, auto white balance, auto-focus, flicker detection, black level compensation, and shading correction of the first lens 112. The control logic 150 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines may determine, based on the received statistical data, the control parameters of the first camera 110 and of the first ISP processor 130. For example, the control parameters of the first camera 110 may include gain, integration time for exposure control, image stabilization parameters, flash control parameters, control parameters of the first lens 112 (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (for example, during RGB processing), as well as shading correction parameters for the first lens 112.
Similarly, the second image captured by the second camera 120 is transmitted to the second ISP processor 140 for processing. After processing the second image, the second ISP processor 140 may send statistical data of the second image (such as image brightness, image contrast, image color, and so on) to the control logic 150. The control logic 150 can determine control parameters of the second camera 120 according to the statistical data, so that the second camera 120 can perform operations such as auto-focusing and auto-exposure according to the control parameters. After being processed by the second ISP processor 140, the second image may be stored in the image memory 160, and the second ISP processor 140 may also read the image stored in the image memory 160 for further processing. In addition, after being processed by the second ISP processor 140, the second image may be sent directly to the display 170 for display, and the display 170 may also read the image in the image memory 160 for display. The second camera 120 and the second ISP processor 140 may also implement the processing described for the first camera 110 and the first ISP processor 130.
In one embodiment, the first camera 110 may be a color camera, and the second camera 120 may be a TOF (Time of Flight) camera or a structured light camera. A TOF camera can obtain a TOF depth map, and a structured light camera can obtain a structured light depth map. Alternatively, both the first camera 110 and the second camera 120 may be color cameras, and a binocular depth map is obtained through the two color cameras. The first ISP processor 130 and the second ISP processor 140 may be the same ISP processor.
The first camera 110 and the second camera 120 capture the same scene to obtain, respectively, an image to be processed at a first resolution and a depth map, and send them to the ISP processor. The ISP processor may register the image to be processed and the depth map according to the camera calibration parameters so that their fields of view are exactly aligned. It may then generate a center weight map corresponding to the image to be processed, in which the weight values decrease gradually from the center to the edges. The image to be processed and the center weight map are input into a trained subject detection model to obtain a subject region confidence map, and the target subject in the image to be processed is determined according to the subject region confidence map. Alternatively, the image to be processed, the depth map, and the center weight map may all be input into the trained subject detection model to obtain the subject region confidence map, from which the target subject in the image to be processed is determined, yielding a target subject foreground image and a background image. The electronic device then performs super-resolution reconstruction on the target subject foreground image and the background image separately, and fuses the reconstructed foreground and background images to obtain a target image whose resolution is greater than the first resolution. This improves the detail quality of the target subject as well as the detail quality of the image reconstruction as a whole.
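The center weight map described above, whose values decay from the image center to the edges, can be sketched as follows. This is a hypothetical illustration: the patent does not specify the exact falloff function, so a Gaussian is assumed here.

```python
import numpy as np

def center_weight_map(height, width, sigma=0.5):
    """Build a weight map whose values decrease gradually from the
    center of the image to its edges (Gaussian falloff, an assumption)."""
    ys = np.linspace(-1.0, 1.0, height)[:, None]   # vertical coordinates
    xs = np.linspace(-1.0, 1.0, width)[None, :]    # horizontal coordinates
    dist2 = ys ** 2 + xs ** 2                      # squared distance to center
    return np.exp(-dist2 / (2.0 * sigma ** 2))     # 1.0 at the exact center

w = center_weight_map(5, 5)
# The center pixel carries the largest weight, the corners the smallest.
```

Such a map biases the subject detection model toward objects near the image center, which is where a photographed subject usually sits.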
Fig. 2 is a flowchart of the image processing method in one embodiment. The image processing method in this embodiment is described as running on the electronic device of Fig. 1. As shown in Fig. 2, the image processing method includes:
Step 202: obtain an image to be processed at a first resolution.
Here, the first resolution refers to the image resolution, that is, the amount of information stored in the image, expressed as the number of pixels per inch. The image to be processed may be an image obtained by shooting any scene with a camera, and may be a color image or a black-and-white image. The image to be processed may be stored locally on the electronic device, stored on another device or on a network, or captured by the electronic device in real time, without limitation.
Specifically, the ISP processor or central processor of the electronic device may obtain the image to be processed at the first resolution from the local device, another device, or a network, or may shoot a scene at the first resolution with the camera to obtain the image to be processed.
Step 204: identify the target subject in the image to be processed to obtain a target subject foreground image and a background image.
Here, a subject refers to any of various objects, such as a person, a flower, a cat, a dog, an ox, a blue sky, white clouds, or a background. The target subject is the subject that is needed, which can be selected as required. Subject detection (salient object detection) refers to automatically processing the region of interest in a scene while selectively ignoring the regions of no interest; the region of interest is called the subject region. The target subject foreground image is the image of the target subject region in the image to be processed, and the background image is the image of the remaining region excluding the target subject region.
Specifically, the electronic device may input the image to be processed into a subject detection model, identify the target subject in the image through the subject detection model, and segment the image to be processed into a target subject foreground image and a background image. Further, the subject detection model may output a binarized mask map of the segmentation.
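Given such a binarized mask, the split into foreground and background images can be sketched as below (the mask and image here are small hypothetical stand-ins for the subject detection model's output):

```python
import numpy as np

# A tiny 4x4 RGB "image to be processed" and a binarized subject mask
# (1 = target subject region, 0 = background region).
image = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # hypothetical subject region in the middle

# The foreground image keeps only subject pixels; the background image
# keeps the remaining region excluding the target subject region.
foreground = image * mask[:, :, None]
background = image * (1 - mask)[:, :, None]

# Every pixel belongs to exactly one of the two images.
recombined = foreground + background
```

The two images partition the input, so they can later be reconstructed independently and fused back without gaps or overlaps.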
Step 206: perform super-resolution reconstruction on the target subject foreground image and the background image separately.
Here, super-resolution reconstruction refers to reconstructing a high-resolution image from a low-resolution image or image sequence.
Specifically, after obtaining the target subject foreground image and the background image at the first resolution through the subject recognition model, the electronic device may input the target subject foreground image into an image reconstruction model, which performs super-resolution reconstruction on it to obtain a reconstructed high-resolution target subject foreground image whose resolution is greater than the first resolution. The electronic device may then perform super-resolution reconstruction on the background image at the first resolution by a fast super-resolution algorithm, an interpolation algorithm, or the like, to obtain a reconstructed high-resolution background image whose resolution is likewise greater than the first resolution.
In this embodiment, the resolution of the reconstructed target subject foreground image and the resolution of the reconstructed background image may be the same or different.
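The interpolation path for the background image can be illustrated with the simplest case, nearest-neighbor upscaling. This is only a sketch: a real implementation would more likely use bilinear or bicubic interpolation, which the patent leaves unspecified.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbor upscaling of an H x W x C image:
    each source pixel is repeated factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

background = np.array([[[10], [20]],
                       [[30], [40]]], dtype=np.uint8)  # 2x2, 1 channel
upscaled = upscale_nearest(background, 4)              # -> 8x8, 1 channel
```

Because the background carries less fine detail than the subject, such a cheap interpolation branch keeps the overall cost low while the learned model is reserved for the foreground.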
Step 208: fuse the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
Specifically, the electronic device performs fusion and stitching on the reconstructed target subject foreground image and background image, and the fused and stitched image is the target image. Likewise, the resolution of the target image obtained after reconstruction is greater than the first resolution of the image to be processed.
With the image processing method of this embodiment, an image to be processed at a first resolution is obtained, and the target subject in it is identified to obtain a target subject foreground image and a background image. Super-resolution reconstruction is performed on the foreground image and the background image separately, so that different super-resolution processing can be applied to each. The reconstructed target subject foreground image and background image are fused to obtain a target image whose resolution is greater than the first resolution. The details of the image can thus be taken into account, improving the detail quality of the image reconstruction.
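Steps 202 through 208 can be sketched end to end as follows. Everything here is a stand-in under stated assumptions: nearest-neighbor upscaling plays the role of both reconstruction branches, and a hand-made binary mask plays the role of the subject detection model.

```python
import numpy as np

def upscale(img, factor):
    # Placeholder for super-resolution reconstruction (nearest-neighbor).
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Step 202: image to be processed at the first resolution (4x4, 3 channels).
image = np.random.randint(0, 256, (4, 4, 3)).astype(np.uint8)

# Step 204: subject mask (assumed output of the subject detection model).
mask = np.zeros((4, 4, 1), dtype=np.uint8)
mask[1:3, 1:3] = 1
foreground = image * mask
background = image * (1 - mask)

# Step 206: reconstruct the foreground and the background separately.
factor = 2
fg_hr = upscale(foreground, factor)
bg_hr = upscale(background, factor)
mask_hr = upscale(mask, factor)

# Step 208: fuse, taking subject pixels from the foreground branch and the
# rest from the background branch; the result is larger than the input.
target = fg_hr * mask_hr + bg_hr * (1 - mask_hr)
```

In practice the two branches would differ (a learned model for the foreground, interpolation for the background), which is exactly what lets the method spend detail-processing effort where it matters.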
In one embodiment, performing super-resolution reconstruction on the target subject foreground image includes: extracting features of the target subject foreground image through an image reconstruction model to obtain a feature map, where the image reconstruction model is a model trained in advance on pairs of subject foreground image samples, each pair including a subject foreground image at the first resolution and a subject foreground image at a second resolution; and performing super-resolution processing on the feature map through the image reconstruction model to obtain a target subject foreground image at the second resolution, the second resolution being greater than the first resolution.
Here, a feature map refers to the image obtained by performing feature extraction on the image to be processed.
Specifically, the electronic device may collect a large number of subject foreground image sample pairs in advance, each pair including a subject foreground image at the first resolution and a subject foreground image at the second resolution. The subject foreground image at the first resolution is input into an untrained image reconstruction model for super-resolution reconstruction; the subject foreground image output by the model is compared with the subject foreground image at the second resolution, and the model is adjusted according to the difference. Training and adjustment are repeated until the difference between the subject foreground image reconstructed by the model and the subject foreground image at the second resolution is less than a threshold, at which point training stops.
The electronic device inputs the target subject foreground image into the trained image reconstruction model. The model may perform feature extraction on the target subject foreground image through convolutional layers to obtain the corresponding feature map, and then convert the channel information of the feature map into spatial information to obtain a target subject foreground image at the second resolution, the second resolution being greater than the first resolution.
With the image processing method in this embodiment, the features of the target subject foreground image are extracted through the trained image reconstruction model to obtain a feature map, and super-resolution processing is performed on the feature map through the model to obtain a target subject foreground image at a second resolution greater than the first resolution. Local super-resolution reconstruction can thus be applied specifically to the target subject foreground image, so that its details are better processed and the clarity of the target subject is guaranteed.
Fig. 3 shows the architecture of the image reconstruction model in one embodiment. The image reconstruction model includes a convolutional layer, a nonlinear mapping layer, and an upsampling layer. In the nonlinear mapping layer, residual units (Residual) are cascaded in turn with first convolutional layers to form cascade blocks (Cascading Blocks). The nonlinear mapping layer contains multiple cascade blocks, which are cascaded in turn with second convolutional layers; these connections, drawn as arrows in Fig. 3, are called global cascade connections. The nonlinear mapping layer is connected to the upsampling layer, which converts the channel information of the image into spatial information and outputs a high-resolution image.
The electronic device inputs the target subject foreground image at the first resolution into the convolutional layer of the image reconstruction model for feature extraction, obtaining a feature map. The feature map is input into the nonlinear mapping layer of the model and processed by the first cascade block; the feature map output by the convolutional layer and the output of the first cascade block are concatenated, and the concatenation is input into the first of the first convolutional layers for dimension reduction. The dimension-reduced feature map is then input into the second cascade block; the feature map output by the convolutional layer, the output of the first cascade block, and the output of the second cascade block are concatenated and input into the second of the first convolutional layers for dimension reduction. Similarly, after the output of the n-th cascade block is obtained, the outputs of all cascade blocks before it and the feature map output by the convolutional layer are concatenated and input into the n-th first convolutional layer for dimension reduction, until the output of the last first convolutional layer in the nonlinear mapping layer is obtained. In this embodiment, the first convolutional layer may be a 1 × 1 convolution.
The residual feature map output by the nonlinear mapping layer is input into the up-sampling layer, which converts the channel information of the residual feature map into spatial information. For example, when the super-resolution factor is ×4, the feature map input to the up-sampling layer must have 16 × 3 channels; after the up-sampling layer converts channel information into spatial information, the final output of the up-sampling layer is a three-channel color image 4 times the size.
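The channel-to-space conversion performed by the up-sampling layer corresponds to the well-known pixel-shuffle operation. A minimal numpy sketch (the function name and array sizes are illustrative, not taken from this disclosure):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r):
    channel information becomes spatial information."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    # split channels into (C, r, r), then interleave into the spatial dims
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# x4 super-resolution: 16 * 3 = 48 input channels yield a
# three-channel image 4 times the size in each dimension
feat = np.random.rand(48, 8, 8)
out = pixel_shuffle(feat, 4)
```

For the ×4 example above, the 48-channel feature map becomes a 3-channel, 32 × 32 output; the operation is a pure rearrangement, so no values are created or lost.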
In one embodiment, the structure of each cascade block is shown in Fig. 4. A cascade block contains three residual units and three first convolutional layers, with the residual units and first convolutional layers cascaded in alternation. The residual units are connected by local cascading, whose function is the same as that of the global cascading. The feature map output by the convolutional layer serves as the input of the cascade block and is processed by the first residual unit to produce an output; the feature map output by the convolutional layer and the output of the first residual unit are concatenated, and the concatenated result is input into the first first-convolutional layer for dimension reduction. Similarly, after the output of the n-th residual unit is obtained, the outputs of the n-th residual unit and of each residual unit before it, together with the feature map output by the convolutional layer, are concatenated, and the concatenated result is input into the n-th first-convolutional layer for dimension reduction, until the output of the last first-convolutional layer in the cascade block is obtained. It should be noted that the first convolutional layers in this embodiment are those inside a cascade block; each first convolutional layer may be a 1 × 1 convolution.
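The concatenation ("splicing") followed by a 1 × 1 convolution for dimension reduction can be sketched as follows. A 1 × 1 convolution is simply a per-pixel linear map over channels, so random weights stand in here for trained ones:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear projection over channels.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    c_in, h, w_ = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, w_)

rng = np.random.default_rng(0)
C, H, W = 16, 8, 8
feat = rng.normal(size=(C, H, W))        # feature map from the conv layer
unit_out = rng.normal(size=(C, H, W))    # output of a residual unit
concat = np.concatenate([feat, unit_out])  # splice along channels -> 2C
w1 = rng.normal(size=(C, 2 * C))           # 1x1 conv weights (stand-ins)
reduced = conv1x1(concat, w1)              # dimension reduction back to C
```

The design point is that every concatenation doubles (or more) the channel count, and the 1 × 1 convolution restores it cheaply without touching spatial structure.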
In one embodiment, as shown in Fig. 5, each residual unit in Fig. 4 may be replaced with the corresponding combination of a grouped convolution and a 1 × 1 pointwise convolution, so as to reduce the number of parameters in processing. It will be understood that the number of cascade blocks and first convolutional layers in the image reconstruction model is not limited, nor is the number of residual units and first convolutional layers in each cascade block; these may be adjusted according to different requirements.
In one embodiment, as shown in Fig. 6, performing super-resolution reconstruction on the background map includes:
Step 602: performing super-resolution reconstruction on the background map by the interpolation algorithm to obtain a background map of a third resolution, the third resolution being greater than the first resolution.
The interpolation algorithm includes, but is not limited to, nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation.
Specifically, the electronic device may perform super-resolution reconstruction on the background map of the first resolution by at least one of the nearest-neighbor, bilinear and bicubic interpolation algorithms, obtaining a reconstructed background map of the third resolution, the third resolution being greater than the first resolution.
In this embodiment, the electronic device may also perform super-resolution reconstruction on the background map of the first resolution by a fast super-resolution algorithm, obtaining a reconstructed background map of the third resolution.
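As a minimal illustration of interpolation-based reconstruction, the sketch below implements nearest-neighbor upscaling in plain numpy; in practice bilinear or bicubic interpolation via an image library would typically be preferred for quality:

```python
import numpy as np

def upscale_nearest(img, scale):
    """Nearest-neighbor super-resolution by pixel replication.
    img: (H, W) or (H, W, C) array; scale: integer factor."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

bg = np.arange(12, dtype=np.uint8).reshape(3, 4)  # tiny "background map"
bg_sr = upscale_nearest(bg, 4)                    # 4x in each dimension
```

Nearest-neighbor is the fastest of the three interpolation choices, which matches the motivation here: the background does not need the detail fidelity of the subject foreground.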
Fusing the reconstructed target subject foreground map and background map to obtain the target image includes:
Step 604: adjusting the target subject foreground map of the second resolution and the background map of the third resolution to matching sizes.
Specifically, the electronic device may determine the size of the target subject foreground map of the second resolution and adjust the size of the background map of the third resolution according to it, so that the reconstructed target subject foreground map and background map have the same size.
In this embodiment, the electronic device may instead adjust the size of the reconstructed target subject foreground map according to the size of the reconstructed background map, so that the reconstructed target subject foreground map and background map have the same size.
In this embodiment, the electronic device may also adjust both the size of the reconstructed target subject foreground map and the size of the background map, so that both reach the same target size.
Step 606: fusing the size-adjusted target subject foreground map of the second resolution with the background map of the third resolution to obtain the target image.
Image fusion refers to combining image data about the same image collected from multi-source channels, extracting the advantageous information of each channel to the greatest extent through image processing and computer technology, and synthesizing a high-quality image.
Specifically, the electronic device may fuse the size-adjusted target subject foreground map of the second resolution with the background map of the third resolution. The electronic device may process the reconstructed target subject foreground map and background map by a graph-cut algorithm or the like to obtain the target image.
With the above image processing method, super-resolution reconstruction is performed on the background map by the interpolation algorithm to obtain a background map of the third resolution; the target subject foreground map of the second resolution and the background map of the third resolution are then adjusted to matching sizes, so that images of different resolutions and sizes can be brought to the same size. The size-adjusted target subject foreground map of the second resolution and the background map of the third resolution are fused to obtain a complete reconstructed image, i.e., the target image.
In one embodiment, the electronic device may train the image reconstruction model in advance using background-map sample pairs. Each background-map sample pair consists of two copies of the same background map: a labeled high-resolution background map, and an unlabeled low-resolution background map that is input into the untrained image reconstruction model for reconstruction. The reconstructed background map is compared with the labeled high-resolution background map so as to continually adjust the parameters of the image reconstruction model, and training stops when a threshold is met. The electronic device may then input the background map of the image to be processed into the trained image reconstruction model, perform super-resolution reconstruction on the background map through the trained model, and obtain the reconstructed background map, whose resolution is greater than the first resolution.
In one embodiment, as shown in Fig. 7, the image processing method is applied to video processing; the image to be processed of the first resolution is each frame to be processed in a video of the first resolution.
Specifically, by applying the image processing method to video processing, the electronic device can reconstruct low-resolution video images into high-resolution images. When the method is applied to video processing, the electronic device may take the resolution of the video to be processed as the first resolution; the image to be processed of the first resolution is then each frame to be processed in that video.
Acquiring the image to be processed of the first resolution includes:
Step 702: acquiring each frame to be processed in the video of the first resolution.
Specifically, the electronic device may acquire the video of the first resolution locally, from another device or from a network, or may record the video itself. The electronic device can then acquire each frame to be processed in the video of the first resolution.
Identifying the target subject in the image to be processed to obtain the target subject foreground map and background map includes:
Step 704: identifying the target subject in each frame to be processed in the video, obtaining the target subject foreground map and background map of each frame to be processed.
The electronic device may then input each frame to be processed into the subject detection model, identify the target subject in each frame through the subject detection model, and segment each frame into a target subject foreground map and a background map. Further, the binarized mask map of the segmentation corresponding to each frame may be output by the subject detection model.
Performing super-resolution reconstruction on the target subject foreground map and the background map respectively includes:
Step 706: performing super-resolution reconstruction on the target subject foreground map and background map in each frame to be processed, respectively.
Specifically, after obtaining the target subject foreground map and background map of each frame through the subject detection model, the electronic device may input the target subject foreground map of each frame into the image reconstruction model. The image reconstruction model performs super-resolution reconstruction on the target subject foreground map of each frame, obtaining a reconstructed high-resolution target subject foreground map for each frame; the resolution of each reconstructed target subject foreground map is greater than the first resolution. The electronic device may then perform super-resolution reconstruction on the background map of each frame by a fast super-resolution algorithm, an interpolation algorithm or the like, obtaining a reconstructed high-resolution background map for each frame; the resolution of each reconstructed background map is likewise greater than the first resolution.
In this embodiment, the resolution of the reconstructed target subject foreground map and the resolution of the reconstructed background map may be the same or different.
In this embodiment, the reconstructed target subject foreground maps of all frames have the same resolution, and the reconstructed background maps of all frames have the same resolution.
In this embodiment, the reconstructed target subject foreground maps and background maps of all frames have the same resolution.
Fusing the reconstructed subject foreground map and background map to obtain the target image, the resolution of the target image being greater than the first resolution, includes:
Step 708: fusing the reconstructed target subject foreground map and background map corresponding to each frame to be processed, obtaining a target image for each frame.
Specifically, the electronic device may establish a mapping among each frame to be processed, its reconstructed target subject foreground map and its reconstructed background map. The electronic device then performs fusion and stitching on the reconstructed target subject foreground map and background map linked by this mapping, obtaining the target image of each frame. As before, the resolution of each target image obtained after reconstruction is greater than the first resolution of the corresponding frame to be processed.
Step 710: generating a target video from the target images of all frames, the resolution of the target video being greater than the first resolution.
Specifically, the electronic device may combine the target images in the order of the frames to be processed, obtaining a high-resolution video, i.e., the target video. The resolution of the target video is greater than the first resolution, and the resolution of every frame target image in the target video is greater than the first resolution.
The above image processing method is applied to a video processing scenario. By acquiring each frame to be processed in a video of the first resolution, identifying the target subject in each frame to obtain its target subject foreground map and background map, performing super-resolution reconstruction on the target subject foreground map and background map of each frame respectively, fusing the reconstructed target subject foreground map and background map of each frame to obtain each frame's target image, and generating from the target images a target video whose resolution is greater than the first resolution, a low-resolution video can be reconstructed into a high-resolution video. By applying different super-resolution reconstruction processing to the target subject foreground map and the background map respectively, the treatment of image detail can be improved.
In one embodiment, as shown in Fig. 8, identifying the target subject in the image to be processed includes:
Step 802: generating a center weight map corresponding to the image to be processed, wherein the weight values represented by the center weight map decrease gradually from the center to the edges.
The center weight map is a map recording the weight value of each pixel in the image to be processed. The weight values recorded in the center weight map decrease gradually from the center toward the four edges: the center weight is the largest, and the weights become progressively smaller toward the four edges. The center weight map thus characterizes weight values that decrease gradually from the center pixels of the image to be processed toward its edge pixels.
The ISP processor or central processing unit can generate the corresponding center weight map according to the size of the image to be processed. The weight values represented by this center weight map decrease gradually from the center to the four edges. The center weight map may be generated using a Gaussian function, a first-order equation or a second-order equation; the Gaussian function may be a two-dimensional Gaussian function.
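A center weight map generated with a two-dimensional Gaussian function might look like the following sketch (the relative-sigma parameterization is an assumption for illustration, not specified in this disclosure):

```python
import numpy as np

def center_weight_map(h, w, sigma=0.5):
    """2-D Gaussian weight map: maximal at the image center,
    decaying gradually toward the four edges.
    sigma is expressed relative to the half-size of the image."""
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

wmap = center_weight_map(9, 9)
```

The map's only job is to bias detection toward the picture center, so any monotonically decreasing radial profile (first- or second-order, as the text notes) would serve equally well.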
Step 804: inputting the image to be processed and the center weight map into the subject detection model, obtaining a subject region confidence map, wherein the subject detection model is a model trained in advance on images to be processed of the same scene, their center weight maps and the corresponding labeled subject mask maps.
The subject detection model is obtained by collecting a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each group of training data comprises the image to be processed, the center weight map and the labeled subject mask map corresponding to the same scene. The image to be processed and the center weight map serve as the input of the subject detection model in training, and the labeled subject mask map serves as the expected output, i.e., the ground truth, of the model in training. The subject mask map is an image filter template for identifying the subject in an image: it can shield the other parts of the image and screen out the subject. The subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs and backgrounds.
Specifically, the ISP processor or central processing unit may input the image to be processed and the center weight map into the subject detection model and perform detection to obtain the subject region confidence map. The subject region confidence map records the probability that the subject belongs to each recognizable category; for example, a pixel may belong to a person with probability 0.8, to a flower with probability 0.1, and to the background with probability 0.1.
Step 806: determining the target subject in the image to be processed according to the subject region confidence map.
Specifically, the ISP processor or central processing unit may, according to the subject region confidence map, select the subject with the highest or second-highest confidence as the subject in the image to be processed. If there is one subject, that subject is taken as the target subject; if there are multiple subjects, one or more of them may be selected as the target subject as needed.
With the image processing method in this embodiment, after the image to be processed is acquired and the corresponding center weight map is generated, the image to be processed and the center weight map are input into the corresponding subject detection model for detection, and a subject region confidence map can be obtained; the target subject in the image to be processed can then be determined from the subject region confidence map. Using the center weight map allows objects at the center of the picture to be detected more easily, and using the subject detection model trained on images to be processed, center weight maps, subject mask maps and the like allows the target subject in the image to be processed to be identified more accurately.
In one embodiment, as shown in Fig. 9, determining the target subject in the image to be processed according to the subject region confidence map includes:
Step 902: processing the subject region confidence map to obtain a subject mask map.
Specifically, the subject region confidence map contains some scattered points of low confidence; the ISP processor or central processing unit may filter the subject region confidence map to obtain the subject mask map. The filtering may use a configured confidence threshold to filter out the pixels in the subject region confidence map whose confidence values are below the threshold. The confidence threshold may be an adaptive confidence threshold, a fixed threshold, or a threshold configured per region.
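Confidence-threshold filtering of this kind can be sketched as follows (a fixed threshold is used here for simplicity; an adaptive or per-region threshold would replace the constant):

```python
import numpy as np

def threshold_confidence(conf, thresh=0.5):
    """Keep pixels whose confidence reaches the threshold (1) and
    filter out the rest (0), yielding a binary mask."""
    return (conf >= thresh).astype(np.uint8)

conf = np.array([[0.9, 0.2],
                 [0.6, 0.4]])   # toy subject region confidence map
mask = threshold_confidence(conf, 0.5)
```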
Step 904: detecting the image to be processed and determining the highlight region in the image to be processed.
A highlight region is a region whose brightness values exceed a brightness threshold.
Specifically, the ISP processor or central processing unit performs highlight detection on the image to be processed, screens out the target pixels whose brightness values exceed the brightness threshold, and obtains the highlight region by applying connected-component processing to the target pixels.
Step 906: determining, according to the highlight region and the subject mask map in the image to be processed, the target subject with highlights eliminated in the image to be processed.
Specifically, the ISP processor or central processing unit may perform a difference calculation or a logical AND on the highlight region and the subject mask map in the image to be processed, obtaining the target subject with highlights eliminated in the image to be processed.
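The difference calculation (equivalently, a logical AND with the inverted highlight mask) can be sketched on small binary masks:

```python
import numpy as np

# subject mask and highlight mask as binary arrays (toy data)
subject = np.array([[1, 1, 0],
                    [1, 1, 0],
                    [0, 0, 0]], dtype=np.uint8)
highlight = np.array([[0, 1, 0],
                      [0, 0, 0],
                      [0, 0, 0]], dtype=np.uint8)

# subtract the highlight region from the subject region:
# AND with the inverted highlight mask
target = subject & (1 - highlight)
```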
In this embodiment, filtering the subject region confidence map to obtain the subject mask map improves the reliability of the subject region confidence map; detecting the image to be processed to obtain the highlight region and then processing it together with the subject mask map yields a target subject with highlights eliminated. Highlights and high-brightness regions that affect subject recognition accuracy are processed separately with filters, which improves the precision and accuracy of subject recognition.
In one embodiment, processing the subject region confidence map to obtain the subject mask map includes: performing adaptive-confidence-threshold filtering on the subject region confidence map to obtain a binarized mask map, the binarized mask map comprising a subject region and a background region; and performing morphological processing and guided filtering on the binarized mask map to obtain the subject mask map.
Specifically, after the ISP processor or central processing unit filters the subject region confidence map by the adaptive confidence threshold, the confidence values of the retained pixels are represented by 1 and the confidence values of the removed pixels by 0, yielding the binarized mask map.
Morphological processing may include erosion and dilation. An erosion operation may first be performed on the binarized mask map, followed by a dilation operation, to remove noise; guided filtering is then applied to the morphologically processed binarized mask map to perform edge filtering, obtaining a subject mask map with extracted edges.
Morphological processing and guided filtering guarantee that the resulting subject mask map has little or no noise and has softer edges.
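The erosion-then-dilation step (a morphological opening) can be sketched in plain numpy with a 3 × 3 structuring element; the helper names are illustrative, and a real pipeline would typically use a library routine:

```python
import numpy as np

def _shifts(m, pad_val):
    """The nine 3x3-neighborhood views of m, with out-of-image
    pixels filled by pad_val."""
    p = np.pad(m, 1, constant_values=pad_val)
    h, w = m.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    neighborhood is foreground."""
    return np.minimum.reduce(_shifts(mask, 1))

def dilate(mask):
    """3x3 binary dilation: a pixel becomes foreground if any
    neighbor is foreground."""
    return np.maximum.reduce(_shifts(mask, 0))

noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[2:5, 2:5] = 1    # a 3x3 blob (the subject region)
noisy[0, 6] = 1        # an isolated noise pixel
opened = dilate(erode(noisy))   # erosion then dilation removes the noise
```

The opening removes the isolated pixel while restoring the blob to its original extent, which is exactly the denoising role described above; guided filtering would then refine the mask edges.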
In one embodiment, the binarized mask map includes a subject region and a background region, and fusing the reconstructed target subject foreground map and background map to obtain the target image includes: fusing the reconstructed target subject foreground map with the subject region in the binarized mask map, and fusing the reconstructed background map with the background region in the binarized mask map, obtaining the target image.
Specifically, the binarized mask map includes a subject region and a background region; the subject region may be white and the background region black. The electronic device fuses the reconstructed target subject foreground map with the subject region in the binarized mask map, i.e., with the white part, and fuses the reconstructed background map with the background region in the binarized mask map, i.e., with the black part, thereby obtaining the target image.
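Fusing the two reconstructed maps through the binarized mask map amounts to a per-pixel composite, sketched here on toy arrays (mask value 1 = white subject region, 0 = black background region):

```python
import numpy as np

# reconstructed foreground and background (same size after resizing)
fg = np.full((4, 4, 3), 200, dtype=np.uint8)   # super-resolved subject
bg = np.full((4, 4, 3), 50, dtype=np.uint8)    # interpolated background
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                              # subject (white) region

m = mask[..., None]                 # broadcast the mask over channels
target = fg * m + bg * (1 - m)      # composite: subject where mask is 1,
                                    # background where mask is 0
```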
In one embodiment, the method further includes: acquiring a depth map corresponding to the image to be processed, the depth map comprising at least one of a TOF depth map, a binocular depth map and a structured-light depth map; and registering the image to be processed with the depth map, obtaining the registered image to be processed and depth map of the same scene.
A depth map is a map containing depth information. The corresponding depth map is obtained by shooting the same scene with a depth camera or a binocular camera. The depth camera may be a structured-light camera or a TOF camera; the depth map may be at least one of a structured-light depth map, a TOF depth map and a binocular depth map.
Specifically, the electronic device may, through the ISP processor or central processing unit, shoot the same scene with a camera to obtain the image to be processed and the corresponding depth map, then register the image to be processed and the depth map using camera calibration parameters, obtaining the registered image to be processed and depth map.
In other embodiments, when no depth map can be captured, a simulated depth map may be generated automatically. The depth value of each pixel in the simulated depth map may be a preset value; moreover, the depth values of different pixels in the simulated depth map may correspond to different preset values.
In one embodiment, inputting the image to be processed and the center weight map into the subject detection model to obtain the subject region confidence map includes: inputting the registered image to be processed, the depth map and the center weight map into the subject detection model, obtaining the subject region confidence map; wherein the subject detection model is a model trained in advance on the images to be processed, depth maps, center weight maps and corresponding labeled subject mask maps of the same scene.
The subject detection model is obtained by collecting a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each group of training data comprises the image to be processed, the depth map, the center weight map and the labeled subject mask map corresponding to the same scene. The image to be processed and the center weight map serve as the input of the subject detection model in training, and the labeled subject mask map serves as the expected output, i.e., the ground truth. The subject mask map is an image filter template for identifying the subject in an image: it can shield the other parts of the image and screen out the subject. The subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs and backgrounds.
In this embodiment, the depth map and the center weight map are used as inputs of the subject detection model. The depth information of the depth map allows objects closer to the camera to be detected more easily, and the center attention mechanism of the center weight map, with large weights at the center and small weights at the four edges, allows objects at the center of the picture to be detected more easily. Introducing the depth map enhances the depth features of the subject, and introducing the center weight map enhances the center attention features of the subject; this not only allows the target subject in a simple scene to be identified accurately, but also greatly improves subject recognition accuracy in complex scenes. Introducing the depth map can also solve the poor robustness of traditional target detection methods to the ever-changing targets of natural images. A simple scene is one in which the subject is single and the contrast of the background region is not high.
Figure 10 is a schematic diagram of the effect of subject recognition on an image to be processed in one embodiment. As shown in Fig. 10, the image to be processed is an RGB map 1002 containing a butterfly. After the RGB map is input into the subject detection model, a subject region confidence map 1004 is obtained; the subject region confidence map 1004 is then filtered and binarized to obtain a binarized mask map 1006, and morphological processing and guided filtering are applied to the binarized mask map 1006 for edge enhancement, obtaining a subject mask map 1008.
In one embodiment, an image processing method is provided, comprising:
Step (a1): acquiring the image to be processed of the first resolution.
Step (a2): generating a center weight map corresponding to the image to be processed, wherein the weight values represented by the center weight map decrease gradually from the center to the edges.
Step (a3): inputting the image to be processed and the center weight map into the subject detection model, obtaining a subject region confidence map, wherein the subject detection model is a model trained in advance on the images to be processed, center weight maps and corresponding labeled subject mask maps of the same scene.
Step (a4): performing adaptive-confidence-threshold filtering on the subject region confidence map, obtaining a binarized mask map comprising a subject region and a background region.
Step (a5): performing morphological processing and guided filtering on the binarized mask map, obtaining a subject mask map.
Step (a6): detecting the image to be processed and determining the highlight region in the image to be processed.
Step (a7): determining, according to the highlight region and the subject mask map in the image to be processed, the target subject with highlights eliminated in the image to be processed, obtaining the target subject foreground map and background map.
Step (a8): extracting the features of the target subject foreground map through the image reconstruction model, obtaining a feature map, wherein the image reconstruction model is a model trained in advance on subject-foreground-map sample pairs, each sample pair comprising a subject foreground map of the first resolution and a subject foreground map of the second resolution.
Step (a9): performing super-resolution processing on the feature map through the image reconstruction model, obtaining a target subject foreground map of the second resolution, the second resolution being greater than the first resolution.
Step (a10): performing super-resolution reconstruction on the background map by the interpolation algorithm, obtaining a background map of the third resolution, the third resolution being greater than the first resolution.
Step (a11): adjusting the target subject foreground map of the second resolution and the background map of the third resolution to matching sizes.
Step (a12): fusing the size-adjusted target subject foreground map of the second resolution with the subject region in the binarized mask map, and fusing the size-adjusted background map of the third resolution with the background region in the binarized mask map, obtaining the target image.
With the above image processing method, subject recognition is performed on the image to be processed of the first resolution through the subject detection model, so that the target subject foreground map and background map can be obtained quickly and accurately. Super-resolution reconstruction of the target subject foreground map through the image reconstruction model can better handle the details of the target subject foreground map, making the details of the reconstructed target subject foreground map clearer. Super-resolution reconstruction of the background map by the interpolation algorithm takes account of the speed of super-resolution reconstruction while guaranteeing the clarity of the target subject foreground map. The reconstructed target subject foreground map and background map of different resolutions are adjusted to the same size and fused with the corresponding regions in the binarized mask map, obtaining the target image. This scheme addresses the situation in traditional super-resolution reconstruction where every region of the picture is processed without distinction, so that the detail and the efficiency of image reconstruction cannot both be achieved.
As shown in Fig. 11, which is an architecture diagram of the image processing method in one embodiment, the electronic device inputs the image to be processed of the first resolution into the subject detection model, obtaining the target subject foreground map and background map. Super-resolution reconstruction is performed on the target subject foreground map through an image reconstruction model composed of cascaded residual networks, and on the background map through an interpolation algorithm. The reconstructed target subject foreground map and background map are fused to obtain the target image, whose resolution is greater than the first resolution.
It should be understood that although the steps in the flowcharts of Figs. 2-9 are shown successively in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict limit on the execution order of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-9 may comprise multiple sub-steps or stages that are not necessarily executed and completed at the same moment but may be executed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or in alternation with at least part of the sub-steps or stages of other steps.
Fig. 12 is a structural block diagram of an image processing apparatus of one embodiment. As shown in Fig. 12, the apparatus includes an obtaining module 1202, an identification module 1204, a reconstruction module 1206 and a fusion module 1208.
The obtaining module 1202 is configured to obtain a to-be-processed image of a first resolution.
The identification module 1204 is configured to identify a target subject in the to-be-processed image to obtain a target subject foreground image and a background image.
The reconstruction module 1206 is configured to perform super-resolution reconstruction on the target subject foreground image and the background image respectively.
The fusion module 1208 is configured to fuse the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
The above image processing apparatus obtains a to-be-processed image of a first resolution, identifies the target subject in the to-be-processed image, and obtains a target subject foreground image and a background image. Super-resolution reconstruction is performed on the target subject foreground image and the background image respectively, so that different super-resolution processing can be applied to each. The reconstructed target subject foreground image and background image are fused to obtain a target image whose resolution is greater than the first resolution, so that the details of the image are preserved and the detail-processing effect of image reconstruction is improved.
In one embodiment, the reconstruction module 1206 is further configured to: extract features of the target subject foreground image through an image reconstruction model to obtain a feature map, the image reconstruction model being a model trained in advance on subject foreground image sample pairs, each sample pair including a subject foreground image of the first resolution and a subject foreground image of a second resolution; and perform super-resolution processing on the feature map through the image reconstruction model to obtain a target subject foreground image of the second resolution, the second resolution being greater than the first resolution.
The above image processing apparatus extracts features of the target subject foreground image by using the trained image reconstruction model to obtain a feature map, and performs super-resolution processing on the feature map through the image reconstruction model to obtain a target subject foreground image of the second resolution, the second resolution being greater than the first resolution. Local super-resolution reconstruction processing can thus be applied to the target subject foreground image, so that its details are better processed and the clarity of the target subject is guaranteed.
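The foreground path, feature extraction followed by cascaded residual processing, can be sketched in miniature as follows. The layer sizes, random weights and the flattened "feature map" are purely illustrative assumptions, not the trained image reconstruction model of the embodiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    # one residual unit: output = input + transform(input)
    h = np.maximum(x @ w1, 0.0)   # ReLU non-linearity
    return x + h @ w2

def cascaded_residual(x, blocks):
    # cascade of residual units, mirroring the cascaded residual
    # structure of the image reconstruction model
    for w1, w2 in blocks:
        x = residual_block(x, w1, w2)
    return x

# toy feature vector and three cascaded blocks (hypothetical sizes)
features = rng.standard_normal((1, 8))
blocks = [(0.1 * rng.standard_normal((8, 16)),
           0.1 * rng.standard_normal((16, 8))) for _ in range(3)]
out = cascaded_residual(features, blocks)
```

The identity shortcut in each block means the cascade only has to learn residual corrections, which is what makes deep super-resolution networks of this kind trainable.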
In one embodiment, the reconstruction module 1206 is further configured to: perform super-resolution reconstruction on the background image by an interpolation algorithm to obtain a background image of a third resolution, the third resolution being greater than the first resolution; adjust the target subject foreground image of the second resolution and the background image of the third resolution to corresponding sizes; and fuse the size-adjusted target subject foreground image of the second resolution and background image of the third resolution to obtain the target image.
The image processing apparatus in this embodiment performs super-resolution reconstruction on the background image by an interpolation algorithm to obtain a background image of a third resolution, and adjusts the target subject foreground image of the second resolution and the background image of the third resolution to corresponding sizes, so that images of different resolutions and different sizes can be adjusted to the same size. The size-adjusted target subject foreground image of the second resolution and background image of the third resolution are fused to obtain a complete reconstructed image, that is, the target image.
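A minimal sketch of the interpolation-based background path, using bilinear interpolation as a stand-in (the embodiments do not fix a specific interpolation kernel, and bicubic would be equally plausible):

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a single-channel image by an integer factor using
    bilinear interpolation."""
    h, w = img.shape
    nh, nw = h * scale, w * scale
    # source-image sample position for each target pixel (align centres)
    ys = (np.arange(nh) + 0.5) / scale - 0.5
    xs = (np.arange(nw) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Interpolation is much cheaper than the residual network applied to the foreground, which is why the scheme can trade some background detail for overall reconstruction speed.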
In one embodiment, the image processing method is applied to video processing, and the to-be-processed image of the first resolution is each to-be-processed frame in a video of the first resolution.
The obtaining module 1202 is further configured to obtain each to-be-processed frame in the video of the first resolution.
The identification module 1204 is further configured to identify the target subject in each to-be-processed frame of the video to obtain the target subject foreground image and background image of each frame.
The reconstruction module 1206 is further configured to perform super-resolution reconstruction on the target subject foreground image and background image of each to-be-processed frame respectively.
The fusion module 1208 is further configured to fuse the reconstructed target subject foreground image and background image corresponding to each to-be-processed frame to obtain a target image for each frame, and to generate a target video from the per-frame target images, the resolution of the target video being greater than the first resolution.
The above image processing apparatus is applied to a video processing scene. By obtaining each to-be-processed frame in a video of the first resolution, identifying the target subject in each frame to obtain its target subject foreground image and background image, performing super-resolution reconstruction on the foreground image and background image of each frame respectively, fusing the reconstructed foreground image and background image of each frame to obtain a per-frame target image, and generating from these a target video whose resolution is greater than the first resolution, a low-resolution video can be reconstructed into a high-resolution video. By applying different super-resolution reconstruction processing to the target subject foreground image and the background image respectively, the processing of image details can be improved.
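The per-frame video pipeline described above can be sketched as follows; `segment`, the two reconstruction callables and `fuse` are hypothetical stand-ins for the subject detection model, the two reconstruction paths and the fusion step:

```python
import numpy as np

def process_video(frames, segment, sr_foreground, sr_background, fuse):
    """Per-frame pipeline of the video embodiment: segment each frame,
    reconstruct foreground and background separately, then fuse."""
    targets = []
    for frame in frames:
        foreground, background, mask = segment(frame)
        targets.append(fuse(sr_foreground(foreground),
                            sr_background(background), mask))
    return targets

# trivial stand-ins for demonstration only
frames = [np.full((2, 2), float(i)) for i in range(3)]
segment = lambda f: (f, f, np.ones_like(f, dtype=bool))
upscale2x = lambda x: np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
fuse = lambda fg, bg, m: fg   # mask covers the whole frame here
video = process_video(frames, segment, upscale2x, upscale2x, fuse)
```

In a real implementation the per-frame results would be re-encoded into the target video; temporal consistency between frames is not addressed by this sketch.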
In one embodiment, the identification module 1204 is further configured to: generate a center weight map corresponding to the to-be-processed image, the weight values represented by the center weight map decreasing gradually from the center to the edges; input the to-be-processed image and the center weight map into a subject detection model to obtain a subject region confidence map, the subject detection model being a model trained in advance on the to-be-processed image, center weight map and corresponding labeled subject mask image of the same scene; and determine the target subject in the to-be-processed image according to the subject region confidence map.
The image processing apparatus in this embodiment obtains the to-be-processed image, generates the corresponding center weight map, and inputs both into the corresponding subject detection model for detection, thereby obtaining a subject region confidence map from which the target subject in the to-be-processed image can be determined. Using the center weight map allows objects at the center of the image to be detected more easily, and using a subject detection model trained on to-be-processed images, center weight maps, subject mask images and the like allows the target subject in the to-be-processed image to be identified more accurately.
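As a minimal sketch, a center weight map whose values decrease gradually from the center to the edges can be built with a Gaussian fall-off (the embodiments do not prescribe the exact fall-off; `sigma` is a hypothetical parameter):

```python
import numpy as np

def center_weight_map(h, w, sigma=0.5):
    """Weight map that is largest at the image center and decreases
    gradually toward the edges."""
    # normalised coordinates in [-0.5, 0.5] along each axis
    ys = (np.arange(h) - (h - 1) / 2) / max(h - 1, 1)
    xs = (np.arange(w) - (w - 1) / 2) / max(w - 1, 1)
    d2 = ys[:, None] ** 2 + xs[None, :] ** 2   # squared distance to center
    return np.exp(-d2 / (2 * sigma ** 2))
```

Any monotone fall-off (linear, cosine) would satisfy the "center large, edges small" property; the Gaussian is just a common choice.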
In one embodiment, the identification module 1204 is further configured to: process the subject region confidence map to obtain a subject mask image; detect the to-be-processed image to determine the highlight regions in the to-be-processed image; and determine, according to the highlight regions in the to-be-processed image and the subject mask image, a target subject in the to-be-processed image with highlights eliminated.
In this embodiment, filtering the subject region confidence map to obtain the subject mask image improves the reliability of the confidence map. The to-be-processed image is detected to obtain the highlight regions, which are then processed together with the subject mask image, so that a target subject with highlights eliminated can be obtained. The highlights and highlight regions that impair the precision of subject identification are processed separately with a filter, improving the precision and accuracy of subject identification.
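A minimal sketch of eliminating highlight regions from a subject mask, assuming a simple brightness threshold as the highlight detector (the threshold value and detection rule are illustrative assumptions; the embodiments do not fix them):

```python
import numpy as np

def remove_highlights(subject_mask, gray, highlight_thresh=0.95):
    """Drop highlight pixels from a boolean subject mask.

    gray: single-channel image normalised to [0, 1]; pixels brighter
    than highlight_thresh are treated as highlight regions."""
    highlight = gray > highlight_thresh
    return np.logical_and(subject_mask, np.logical_not(highlight))
```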
In one embodiment, the identification module 1204 is further configured to: perform adaptive confidence threshold filtering on the subject region confidence map to obtain a binarized mask image, the binarized mask image including a subject region and a background region; and perform morphological processing and guided filtering on the binarized mask image to obtain the subject mask image.
The fusion module 1208 is further configured to: fuse the reconstructed target subject foreground image with the subject region in the binarized mask image, and fuse the reconstructed background image with the background region in the binarized mask image, to obtain the target image.
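The thresholding and morphological steps can be sketched as follows. The mean-confidence threshold and the 3x3 morphological opening are illustrative assumptions, and the guided filtering step (which would smooth the mask edges against the image) is omitted here:

```python
import numpy as np

def dilate3x3(m):
    # 3x3 binary dilation implemented with shifted windows
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + m.shape[0],
                     1 + dx:1 + dx + m.shape[1]]
    return out

def erode3x3(m):
    # erosion as the dual of dilation
    return ~dilate3x3(~m)

def confidence_to_mask(conf):
    """Binarize a confidence map and clean it up morphologically."""
    # adaptive threshold: here simply the mean confidence of the map
    # (a hypothetical rule; the embodiments do not fix it)
    binary = conf > conf.mean()
    # opening (erode then dilate) removes isolated noise points
    return dilate3x3(erode3x3(binary))
```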
In one embodiment, the obtaining module 1202 is further configured to: obtain a depth map corresponding to the to-be-processed image, the depth map including at least one of a TOF depth map, a binocular depth map and a structured-light depth map; and perform registration processing on the to-be-processed image and the depth map to obtain a registered to-be-processed image and depth map of the same scene.
The identification module 1204 is further configured to: input the registered to-be-processed image, the depth map and the center weight map into the subject detection model to obtain the subject region confidence map; the subject detection model is a model trained in advance on the to-be-processed image, depth map, center weight map and corresponding labeled subject mask image of the same scene.
In this embodiment, the depth map and the center weight map serve as inputs of the subject detection model. The depth information of the depth map allows objects closer to the camera to be detected more easily, and the center attention mechanism of the center weight map, with large weights at the center and small weights at the four edges, allows objects at the center of the image to be detected more easily. Introducing the depth map enables depth feature enhancement of the subject, and introducing the center weight map enables center attention feature enhancement of the subject, so that not only can the target subject in a simple scene be identified accurately, but the subject recognition accuracy in complex scenes is also greatly improved. Introducing the depth map also addresses the poor robustness of traditional target detection methods to the highly variable targets of natural images. A simple scene is one in which the subject is single and the contrast of the background region is not high.
The division into modules in the above image processing apparatus is for illustration only; in other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of the functions of the apparatus.
Fig. 13 is a schematic diagram of the internal structure of the electronic device in one embodiment. As shown in Fig. 13, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored on the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are implemented.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the image processing method.
A computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the image processing method.
Any reference to a memory, storage, database or other medium used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent of the present application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (11)
1. An image processing method, characterized by comprising:
obtaining a to-be-processed image of a first resolution;
identifying a target subject in the to-be-processed image to obtain a target subject foreground image and a background image;
performing super-resolution reconstruction on the target subject foreground image and the background image respectively; and
fusing the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
2. The method according to claim 1, characterized in that performing super-resolution reconstruction on the target subject foreground image comprises:
extracting features of the target subject foreground image through an image reconstruction model to obtain a feature map, the image reconstruction model being a model trained in advance on subject foreground image sample pairs, each sample pair comprising a subject foreground image of the first resolution and a subject foreground image of a second resolution; and
performing super-resolution processing on the feature map through the image reconstruction model to obtain a target subject foreground image of the second resolution, the second resolution being greater than the first resolution.
3. The method according to claim 2, characterized in that performing super-resolution reconstruction on the background image comprises:
performing super-resolution reconstruction on the background image by an interpolation algorithm to obtain a background image of a third resolution, the third resolution being greater than the first resolution;
and fusing the reconstructed target subject foreground image and background image to obtain the target image comprises:
adjusting the target subject foreground image of the second resolution and the background image of the third resolution to corresponding sizes; and
fusing the size-adjusted target subject foreground image of the second resolution and background image of the third resolution to obtain the target image.
4. The method according to claim 1, characterized in that the image processing method is applied to video processing, and the to-be-processed image of the first resolution is each to-be-processed frame in a video of the first resolution;
obtaining the to-be-processed image of the first resolution comprises:
obtaining each to-be-processed frame in the video of the first resolution;
identifying the target subject in the to-be-processed image to obtain the target subject foreground image and background image comprises:
identifying the target subject in each to-be-processed frame of the video to obtain the target subject foreground image and background image of each frame;
performing super-resolution reconstruction on the target subject foreground image and the background image respectively comprises:
performing super-resolution reconstruction on the target subject foreground image and background image of each to-be-processed frame respectively; and
fusing the reconstructed subject foreground image and background image to obtain the target image, the resolution of the target image being greater than the first resolution, comprises:
fusing the reconstructed target subject foreground image and background image corresponding to each to-be-processed frame to obtain a target image for each frame; and
generating a target video from the per-frame target images, the resolution of the target video being greater than the first resolution.
5. The method according to claim 1, characterized in that identifying the target subject in the to-be-processed image comprises:
generating a center weight map corresponding to the to-be-processed image, wherein the weight values represented by the center weight map decrease gradually from the center to the edges;
inputting the to-be-processed image and the center weight map into a subject detection model to obtain a subject region confidence map, wherein the subject detection model is a model trained in advance on the to-be-processed image, center weight map and corresponding labeled subject mask image of the same scene; and
determining the target subject in the to-be-processed image according to the subject region confidence map.
6. The method according to claim 5, characterized in that determining the target subject in the to-be-processed image according to the subject region confidence map comprises:
processing the subject region confidence map to obtain a subject mask image;
detecting the to-be-processed image to determine the highlight regions in the to-be-processed image; and
determining, according to the highlight regions in the to-be-processed image and the subject mask image, a target subject in the to-be-processed image with highlights eliminated.
7. The method according to claim 6, characterized in that processing the subject region confidence map to obtain the subject mask image comprises:
performing adaptive confidence threshold filtering on the subject region confidence map to obtain a binarized mask image, the binarized mask image comprising a subject region and a background region; and
performing morphological processing and guided filtering on the binarized mask image to obtain the subject mask image;
and fusing the reconstructed target subject foreground image and background image to obtain the target image comprises:
fusing the reconstructed target subject foreground image with the subject region in the binarized mask image, and fusing the reconstructed background image with the background region in the binarized mask image, to obtain the target image.
8. The method according to claim 5, characterized in that the method further comprises:
obtaining a depth map corresponding to the to-be-processed image, the depth map comprising at least one of a TOF depth map, a binocular depth map and a structured-light depth map; and
performing registration processing on the to-be-processed image and the depth map to obtain a registered to-be-processed image and depth map of the same scene;
and inputting the to-be-processed image and the center weight map into the subject detection model to obtain the subject region confidence map comprises:
inputting the registered to-be-processed image, the depth map and the center weight map into the subject detection model to obtain the subject region confidence map, wherein the subject detection model is a model trained in advance on the to-be-processed image, depth map, center weight map and corresponding labeled subject mask image of the same scene.
9. An image processing apparatus, characterized by comprising:
an obtaining module configured to obtain a to-be-processed image of a first resolution;
an identification module configured to identify a target subject in the to-be-processed image to obtain a target subject foreground image and a background image;
a reconstruction module configured to perform super-resolution reconstruction on the target subject foreground image and the background image respectively; and
a fusion module configured to fuse the reconstructed target subject foreground image and background image to obtain a target image, the resolution of the target image being greater than the first resolution.
10. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the steps of the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910683492.1A CN110428366B (en) | 2019-07-26 | 2019-07-26 | Image processing method and device, electronic equipment and computer readable storage medium |
PCT/CN2020/101817 WO2021017811A1 (en) | 2019-07-26 | 2020-07-14 | Image processing method and apparatus, electronic device, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910683492.1A CN110428366B (en) | 2019-07-26 | 2019-07-26 | Image processing method and device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110428366A true CN110428366A (en) | 2019-11-08 |
CN110428366B CN110428366B (en) | 2023-10-13 |
Family
ID=68412750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910683492.1A Active CN110428366B (en) | 2019-07-26 | 2019-07-26 | Image processing method and device, electronic equipment and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110428366B (en) |
WO (1) | WO2021017811A1 (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111047526A (en) * | 2019-11-22 | 2020-04-21 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111091506A (en) * | 2019-12-02 | 2020-05-01 | RealMe重庆移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN111145202A (en) * | 2019-12-31 | 2020-05-12 | 北京奇艺世纪科技有限公司 | Model generation method, image processing method, device, equipment and storage medium |
CN111161369A (en) * | 2019-12-20 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Image reconstruction storage method and device, computer equipment and storage medium |
CN111163265A (en) * | 2019-12-31 | 2020-05-15 | 成都旷视金智科技有限公司 | Image processing method, image processing device, mobile terminal and computer storage medium |
CN111553846A (en) * | 2020-05-12 | 2020-08-18 | Oppo广东移动通信有限公司 | Super-resolution processing method and device |
CN111598776A (en) * | 2020-04-29 | 2020-08-28 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
CN111932594A (en) * | 2020-09-18 | 2020-11-13 | 西安拙河安见信息科技有限公司 | Billion pixel video alignment method and device based on optical flow and medium |
CN112001940A (en) * | 2020-08-21 | 2020-11-27 | Oppo(重庆)智能科技有限公司 | Image processing method and device, terminal and readable storage medium |
CN112085686A (en) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112184554A (en) * | 2020-10-13 | 2021-01-05 | 重庆邮电大学 | Remote sensing image fusion method based on residual mixed expansion convolution |
WO2021017811A1 (en) * | 2019-07-26 | 2021-02-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer readable storage medium |
CN112418167A (en) * | 2020-12-10 | 2021-02-26 | 深圳前海微众银行股份有限公司 | Image clustering method, device, equipment and storage medium |
WO2021102857A1 (en) * | 2019-11-28 | 2021-06-03 | 深圳市大疆创新科技有限公司 | Image processing method, apparatus, and device, and storage medium |
CN113240687A (en) * | 2021-05-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and readable storage medium |
US20210327028A1 (en) * | 2020-04-17 | 2021-10-21 | Fujifilm Business Innovation Corp. | Information processing apparatus |
WO2022011657A1 (en) * | 2020-07-16 | 2022-01-20 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN114049254A (en) * | 2021-10-29 | 2022-02-15 | 华南农业大学 | Low-pixel ox-head image reconstruction and identification method, system, equipment and storage medium |
CN114067122A (en) * | 2022-01-18 | 2022-02-18 | 深圳市绿洲光生物技术有限公司 | Two-stage binarization image processing method |
WO2022105779A1 (en) * | 2020-11-18 | 2022-05-27 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, and apparatus, medium, and device |
CN114630129A (en) * | 2022-02-07 | 2022-06-14 | 浙江智慧视频安防创新中心有限公司 | Video coding and decoding method and device based on intelligent digital retina |
CN114972020A (en) * | 2022-04-13 | 2022-08-30 | 北京字节跳动网络技术有限公司 | Image processing method and device, storage medium and electronic equipment |
CN117440104A (en) * | 2023-12-21 | 2024-01-23 | 北京遥感设备研究所 | Data compression reconstruction method based on target significance characteristics |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113362224A (en) * | 2021-05-31 | 2021-09-07 | 维沃移动通信有限公司 | Image processing method and device, electronic equipment and readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040001622A1 (en) * | 2002-06-27 | 2004-01-01 | Roylance Eugene A. | Method and system for image processing including mixed resolution, multi-channel color compression, transmission and decompression |
CN102800085A (en) * | 2012-06-21 | 2012-11-28 | 西南交通大学 | Method for detecting and extracting main target image in complicated image |
CN102842119A (en) * | 2012-08-18 | 2012-12-26 | 湖南大学 | Quick document image super-resolution method based on image matting and edge enhancement |
CN105741252A (en) * | 2015-11-17 | 2016-07-06 | 西安电子科技大学 | Sparse representation and dictionary learning-based video image layered reconstruction method |
US20160328828A1 (en) * | 2014-02-25 | 2016-11-10 | Graduate School At Shenzhen, Tsinghua University | Depth map super-resolution processing method |
CN108764370A (en) * | 2018-06-08 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and computer equipment |
US20190114781A1 (en) * | 2017-10-18 | 2019-04-18 | International Business Machines Corporation | Object classification based on decoupling a background from a foreground of an image |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101874482B1 (en) * | 2012-10-16 | 2018-07-05 | 삼성전자주식회사 | Apparatus and method of reconstructing 3-dimension super-resolution image from depth image |
CN110428366B (en) * | 2019-07-26 | 2023-10-13 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
- 2019-07-26 CN CN201910683492.1A patent/CN110428366B/en active Active
- 2020-07-14 WO PCT/CN2020/101817 patent/WO2021017811A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040001622A1 (en) * | 2002-06-27 | 2004-01-01 | Roylance Eugene A. | Method and system for image processing including mixed resolution, multi-channel color compression, transmission and decompression |
CN102800085A (en) * | 2012-06-21 | 2012-11-28 | 西南交通大学 | Method for detecting and extracting main target image in complicated image |
CN102842119A (en) * | 2012-08-18 | 2012-12-26 | 湖南大学 | Quick document image super-resolution method based on image matting and edge enhancement |
US20160328828A1 (en) * | 2014-02-25 | 2016-11-10 | Graduate School At Shenzhen, Tsinghua University | Depth map super-resolution processing method |
CN105741252A (en) * | 2015-11-17 | 2016-07-06 | 西安电子科技大学 | Sparse representation and dictionary learning-based video image layered reconstruction method |
US20190114781A1 (en) * | 2017-10-18 | 2019-04-18 | International Business Machines Corporation | Object classification based on decoupling a background from a foreground of an image |
CN108764370A (en) * | 2018-06-08 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and computer equipment |
Non-Patent Citations (2)
Title |
---|
PRABHU S M: "Unified multiframe super-resolution of matte, foreground, and background", 《JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A OPTICS IMAGE SCIENCE & VISION》 * |
ZHANG Wanxu et al.: "Image super-resolution reconstruction based on sparse representation and guided filtering", 《Computer Engineering》 * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021017811A1 (en) * | 2019-07-26 | 2021-02-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer readable storage medium |
CN111047526B (en) * | 2019-11-22 | 2023-09-26 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111047526A (en) * | 2019-11-22 | 2020-04-21 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2021102857A1 (en) * | 2019-11-28 | 2021-06-03 | 深圳市大疆创新科技有限公司 | Image processing method, apparatus, and device, and storage medium |
CN111091506A (en) * | 2019-12-02 | 2020-05-01 | RealMe重庆移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN111161369A (en) * | 2019-12-20 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Image reconstruction storage method and device, computer equipment and storage medium |
CN111161369B (en) * | 2019-12-20 | 2024-04-23 | 上海联影智能医疗科技有限公司 | Image reconstruction storage method, device, computer equipment and storage medium |
CN111145202B (en) * | 2019-12-31 | 2024-03-08 | 北京奇艺世纪科技有限公司 | Model generation method, image processing method, device, equipment and storage medium |
CN111163265A (en) * | 2019-12-31 | 2020-05-15 | 成都旷视金智科技有限公司 | Image processing method, image processing device, mobile terminal and computer storage medium |
CN111145202A (en) * | 2019-12-31 | 2020-05-12 | 北京奇艺世纪科技有限公司 | Model generation method, image processing method, device, equipment and storage medium |
US20210327028A1 (en) * | 2020-04-17 | 2021-10-21 | Fujifilm Business Innovation Corp. | Information processing apparatus |
CN111598776A (en) * | 2020-04-29 | 2020-08-28 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
CN111598776B (en) * | 2020-04-29 | 2023-06-30 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic apparatus |
CN111553846A (en) * | 2020-05-12 | 2020-08-18 | Oppo广东移动通信有限公司 | Super-resolution processing method and device |
CN111553846B (en) * | 2020-05-12 | 2023-05-26 | Oppo广东移动通信有限公司 | Super-resolution processing method and device |
WO2022011657A1 (en) * | 2020-07-16 | 2022-01-20 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN112001940A (en) * | 2020-08-21 | 2020-11-27 | Oppo(重庆)智能科技有限公司 | Image processing method and device, terminal and readable storage medium |
CN112085686A (en) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111932594A (en) * | 2020-09-18 | 2020-11-13 | 西安拙河安见信息科技有限公司 | Billion pixel video alignment method and device based on optical flow and medium |
CN111932594B (en) * | 2020-09-18 | 2023-12-19 | 西安拙河安见信息科技有限公司 | Billion pixel video alignment method and device based on optical flow and medium |
CN112184554A (en) * | 2020-10-13 | 2021-01-05 | 重庆邮电大学 | Remote sensing image fusion method based on residual mixed expansion convolution |
WO2022105779A1 (en) * | 2020-11-18 | 2022-05-27 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, and apparatus, medium, and device |
CN112418167A (en) * | 2020-12-10 | 2021-02-26 | 深圳前海微众银行股份有限公司 | Image clustering method, device, equipment and storage medium |
CN113240687A (en) * | 2021-05-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and readable storage medium |
CN114049254B (en) * | 2021-10-29 | 2022-11-29 | 华南农业大学 | Low-pixel ox-head image reconstruction and identification method, system, equipment and storage medium |
CN114049254A (en) * | 2021-10-29 | 2022-02-15 | 华南农业大学 | Low-pixel ox-head image reconstruction and identification method, system, equipment and storage medium |
CN114067122B (en) * | 2022-01-18 | 2022-04-08 | 深圳市绿洲光生物技术有限公司 | Two-stage binarization image processing method |
CN114067122A (en) * | 2022-01-18 | 2022-02-18 | 深圳市绿洲光生物技术有限公司 | Two-stage binarization image processing method |
CN114630129A (en) * | 2022-02-07 | 2022-06-14 | 浙江智慧视频安防创新中心有限公司 | Video coding and decoding method and device based on intelligent digital retina |
CN114972020A (en) * | 2022-04-13 | 2022-08-30 | 北京字节跳动网络技术有限公司 | Image processing method and device, storage medium and electronic equipment |
CN117440104A (en) * | 2023-12-21 | 2024-01-23 | 北京遥感设备研究所 | Data compression reconstruction method based on target significance characteristics |
CN117440104B (en) * | 2023-12-21 | 2024-03-29 | 北京遥感设备研究所 | Data compression reconstruction method based on target significance characteristics |
Also Published As
Publication number | Publication date |
---|---|
CN110428366B (en) | 2023-10-13 |
WO2021017811A1 (en) | 2021-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110428366A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
US11132771B2 (en) | Bright spot removal using a neural network | |
EP3757890A1 (en) | Method and device for image processing, method and device for training object detection model | |
CN110473185A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
EP3937481A1 (en) | Image display method and device | |
EP3350767B1 (en) | Exposure-related intensity transformation | |
CN110149482A (en) | Focusing method, device, electronic equipment and computer readable storage medium | |
CN111402146B (en) | Image processing method and image processing apparatus | |
WO2020152521A1 (en) | Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures | |
CN108764370A (en) | Image processing method, device, computer readable storage medium and computer equipment | |
CN110334635A (en) | Main body method for tracing, device, electronic equipment and computer readable storage medium | |
CN110276831B (en) | Method and device for constructing three-dimensional model, equipment and computer-readable storage medium | |
CN108810413A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108961302A (en) | Image processing method, device, mobile terminal and computer readable storage medium | |
CN110191287A (en) | Focusing method and device, electronic equipment, computer readable storage medium | |
CN110490196A (en) | Subject detection method and apparatus, electronic equipment, computer readable storage medium | |
CN110365897A (en) | Image correcting method and device, electronic equipment, computer readable storage medium | |
CN110392211A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
Song et al. | Real-scene reflection removal with raw-rgb image pairs | |
CN114862698B (en) | Channel-guided real overexposure image correction method and device | |
CN110399823A (en) | Main body tracking and device, electronic equipment, computer readable storage medium | |
CN109360176A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
US11889175B2 (en) | Neural network supported camera image or video processing pipelines | |
CN108881740A (en) | Image method and device, electronic equipment, computer readable storage medium | |
CN107292853A (en) | Image processing method, device, computer-readable recording medium and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||