CN108921782A - A kind of image processing method, device and storage medium - Google Patents
A kind of image processing method, device and storage medium Download PDFInfo
- Publication number
- CN108921782A (application number CN201810475606.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- image
- feature
- location information
- training sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 38
- 238000012549 training Methods 0.000 claims abstract description 465
- 238000012545 processing Methods 0.000 claims abstract description 146
- 230000011218 segmentation Effects 0.000 claims abstract description 104
- 238000000605 extraction Methods 0.000 claims abstract description 37
- 230000007423 decrease Effects 0.000 claims description 15
- 238000000034 method Methods 0.000 claims description 15
- 230000004069 differentiation Effects 0.000 claims description 10
- 239000000284 extract Substances 0.000 claims description 4
- 230000006870 function Effects 0.000 description 54
- 210000001331 nose Anatomy 0.000 description 38
- 210000001508 eye Anatomy 0.000 description 36
- 210000004709 eyebrow Anatomy 0.000 description 36
- 210000000214 mouth Anatomy 0.000 description 34
- 238000010586 diagram Methods 0.000 description 17
- 210000004209 hair Anatomy 0.000 description 13
- 238000011084 recovery Methods 0.000 description 12
- 210000005252 bulbus oculi Anatomy 0.000 description 10
- 230000000875 corresponding effect Effects 0.000 description 10
- 210000000887 face Anatomy 0.000 description 8
- 238000005516 engineering process Methods 0.000 description 7
- 238000004364 calculation method Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 5
- 239000000203 mixture Substances 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000035772 mutation Effects 0.000 description 3
- 230000000452 restraining effect Effects 0.000 description 3
- 238000005070 sampling Methods 0.000 description 3
- 238000010276 construction Methods 0.000 description 2
- 230000002596 correlated effect Effects 0.000 description 2
- 230000001815 facial effect Effects 0.000 description 2
- 230000037308 hair color Effects 0.000 description 2
- 238000012806 monitoring device Methods 0.000 description 2
- 208000027534 Emotional disease Diseases 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000001404 mediated effect Effects 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention discloses an image processing method, a device and a storage medium. An image to be processed is obtained; feature extraction is performed on a target object in the image to be processed to obtain target feature location information; feature region segmentation is performed on the target object to obtain target segmentation region information; and a preset image processing model is used to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being formed by training on the feature location information and segmentation region information of preset objects in multiple training sample images. Because the scheme raises the original resolution of the image to be processed precisely, guided by the target feature location information and target segmentation region information of the target object, the processed image has higher clarity and improved image quality.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, device and storage medium.
Background technique
With the development of science and technology, digital images have become more and more widely used and have gradually evolved into one of the most important information carriers. The higher the resolution of an image, the higher its pixel density, and the more detailed information can be analyzed from the high-resolution image. In practical applications, however, several factors can cause the resolution of a captured image to fall short of demand; therefore, the resolution of low-resolution images needs to be raised.
In the prior art, a low-resolution image is generally converted to a high-resolution image by bicubic interpolation, which recovers the high-resolution image from the pixel values of the low-resolution image. For example, simple interpolation processing is performed on the pixel values of the low-resolution image to obtain a high-resolution image. The resolution difference that such interpolation can bridge is small, and details such as the parts or contours of objects in the obtained high-resolution image remain relatively blurry; the image quality is low, so the display effect is poor.
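For concreteness, interpolation-based upscaling of this kind can be sketched as follows (bilinear rather than bicubic, for brevity; both merely spread existing pixel values over a denser grid, which is why they add no new detail). The function name and toy data below are illustrative, not taken from the patent.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by simple bilinear interpolation.

    This only interpolates between existing pixel values, which is why
    interpolation-based super-resolution leaves contours blurry.
    """
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Coordinates of each output pixel expressed in input space.
    ys = (np.arange(out_h) + 0.5) / factor - 0.5
    xs = (np.arange(out_w) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend weights
    tl = img[y0][:, x0]          # top-left neighbours
    tr = img[y0][:, x0 + 1]      # top-right neighbours
    bl = img[y0 + 1][:, x0]      # bottom-left neighbours
    br = img[y0 + 1][:, x0 + 1]  # bottom-right neighbours
    top = tl * (1 - wx) + tr * wx
    bottom = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bottom * wy

low = np.arange(16.0).reshape(4, 4)   # toy 4x4 "low-resolution" image
high = bilinear_upscale(low, 4)       # 16x16 output
```

Note that every output value is a convex combination of input pixels, so the output can never contain intensities (or structures) absent from the input.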
Summary of the invention
The embodiment of the present invention provides an image processing method, a device and a storage medium, intended to improve the quality of processed images.
In order to solve the above technical problem, the embodiment of the present invention provides the following technical solutions:
An image processing method, including:
Obtaining an image to be processed;
Performing feature extraction on a target object in the image to be processed to obtain target feature location information;
Performing feature region segmentation on the target object to obtain target segmentation region information;
Using a preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being formed by training on the feature location information and segmentation region information of preset objects in multiple training sample images.
An image processing apparatus, including:
A first acquisition unit, configured to obtain an image to be processed;
An extraction unit, configured to perform feature extraction on a target object in the image to be processed to obtain target feature location information;
A segmentation unit, configured to perform feature region segmentation on the target object to obtain target segmentation region information;
A raising unit, configured to use a preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being formed by training on the feature location information and segmentation region information of preset objects in multiple training sample images.
A storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the steps of any image processing method provided by the embodiment of the present invention.
When the resolution of an image needs to be raised, the embodiment of the present invention can obtain an image to be processed, perform feature extraction on the target object in the image to obtain target feature location information, and perform feature region segmentation on the target object to obtain target segmentation region information. Then a preset image processing model raises the original resolution of the image to be processed based on the target feature location information and target segmentation region information, that is, converts the low-resolution image to be processed into a high-resolution image. Because this scheme raises the original resolution precisely based on the target feature location information and target segmentation region information of the target object in the image to be processed, the clarity of the processed image can be improved, improving the image quality.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is the schematic diagram of a scenario of image processing method provided in an embodiment of the present invention;
Fig. 2 is the flow diagram of image processing method provided in an embodiment of the present invention;
Fig. 3 is the schematic diagram of feature locations information in training sample image provided in an embodiment of the present invention;
Fig. 4 is the schematic diagram of cut zone information in training sample image provided in an embodiment of the present invention;
Fig. 5 is the schematic diagram that low-resolution image provided in an embodiment of the present invention is converted to high-definition picture;
Fig. 6 is a schematic diagram of training the to-be-trained model provided in an embodiment of the present invention;
Fig. 7 is another flow diagram of image processing method provided in an embodiment of the present invention;
Fig. 8 is the structural schematic diagram of image processing apparatus provided in an embodiment of the present invention;
Fig. 9 is another structural schematic diagram of image processing apparatus provided in an embodiment of the present invention;
Figure 10 is another structural schematic diagram of image processing apparatus provided in an embodiment of the present invention;
Figure 11 is the structural schematic diagram of the network equipment provided in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
The embodiment of the present invention provides a kind of image processing method, device and storage medium.
Referring to Fig. 1, Fig. 1 is a scenario schematic diagram of the image processing method provided by the embodiment of the present invention, in which the image processing apparatus may be integrated in a network device such as a terminal or a server. For example, the network device may obtain multiple training sample images, which may be images of higher clarity retrieved from a memory storing images, and determine the first feature information of a preset object (for example, a face or a vehicle) in every training sample image, including first feature location information and first segmentation region information. The original resolution of every training sample image is then turned down to a preset value (which can be set flexibly according to actual needs), yielding multiple resolution-reduced training sample images. Next, the second feature information of the preset object in every resolution-reduced training sample image, including second feature location information and second segmentation region information, is calculated by a preset to-be-trained model; the to-be-trained model is then trained according to the first feature information and the second feature information to obtain the image processing model.
Thereafter, when the resolution of an image needs to be raised, an image processing request input by a user can be received, and an image to be processed is obtained based on the request (for example, the image may be shot by a mobile phone, a camera or a webcam). Feature extraction is performed on the target object in the image to be processed to obtain target feature location information, and feature region segmentation is performed on the target object to obtain target segmentation region information. Then, using the image processing model, the original resolution of the image to be processed is raised based on the target feature location information and target segmentation region information, converting the low-resolution image to be processed into a high-resolution image; the converted image can also be stored in a memory, and so on.
It should be noted that the scenario schematic diagram of the image processing method shown in Fig. 1 is only an example. The scenario described in the embodiment of the present invention is intended to explain the technical solution of the embodiment more clearly and does not constitute a limitation on the technical solution provided by the embodiment of the present invention. Those of ordinary skill in the art will appreciate that, with the evolution of image processing methods and the emergence of new business scenarios, the technical solution provided by the embodiment of the present invention is equally applicable to similar technical problems.
Each is described in detail below.
In the present embodiment, the description will be made from the perspective of the image processing apparatus, which may be integrated in a network device such as a terminal or a server.
An image processing method, including: obtaining an image to be processed; performing feature extraction on a target object in the image to be processed to obtain target feature location information; performing feature region segmentation on the target object to obtain target segmentation region information; and using a preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being formed by training on the feature location information and segmentation region information of preset objects in multiple training sample images.
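As a rough sketch, the steps just listed map onto a pipeline like the following. Every function body here is a placeholder standing in for the actual feature detector, segmentation step and trained model; all names, shapes and values are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

# Hypothetical skeleton of the claimed inference pipeline; placeholder
# bodies only, standing in for real networks.

@dataclass
class Priors:
    landmarks: list    # target feature location information
    regions: list      # target segmentation region information

def extract_landmarks(image):
    # Stand-in for feature extraction on the target object
    # (e.g. eye/nose key points when the object is a face).
    return [(1, 2), (3, 4)]

def segment_regions(image):
    # Stand-in for feature region segmentation (a label per pixel).
    return [[0, 1], [1, 0]]

def upscale_with_priors(image, priors, factor=4):
    # Stand-in for the preset image processing model, which raises the
    # original resolution guided by the two priors.
    h, w = len(image), len(image[0])
    return [[0] * (w * factor) for _ in range(h * factor)]

def process(image):
    priors = Priors(extract_landmarks(image), segment_regions(image))
    return upscale_with_priors(image, priors)

low = [[10, 20], [30, 40]]   # toy 2x2 "image to be processed"
high = process(low)          # 8x8 result at 4x the input resolution
```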
Referring to Fig. 2, Fig. 2 is a flow diagram of the image processing method provided by one embodiment of the invention. The image processing method may include:
S101: Obtain an image to be processed.
The image to be processed may be an image of lower clarity, for example, a low-resolution image with a resolution of 16 × 16, or an image of another resolution. The image to be processed includes a target object, which may include a face, a vehicle, or the like.
The ways in which the image processing apparatus obtains the image to be processed may include: in a first way, a large number of images containing the target object may be shot by a mobile phone, a camera, a webcam, or the like; in a second way, the image to be processed may be obtained by searching on the internet or retrieved from a database. Of course, the image to be processed may also be obtained in other ways; the specific way is not limited here.
In some embodiments, before the step of obtaining the image to be processed, or before the step of using the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing method may also include:
(1) Obtaining multiple training sample images, and determining the first feature location information and first segmentation region information of the preset object in every training sample image;
(2) Turning down the original resolution of every training sample image to a preset value, obtaining the training sample images after the resolution is turned down;
(3) Obtaining the second feature location information and second segmentation region information of the preset object in every resolution-reduced training sample image;
(4) Training the preset to-be-trained model according to the first feature location information, the first segmentation region information, the second feature location information and the second segmentation region information, obtaining the image processing model.
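Steps (1)-(3) amount to assembling paired training data: each high-resolution sample is annotated with its first feature locations and first segmentation regions, then turned down to the preset value to form its low-resolution counterpart. The sketch below illustrates that bookkeeping only; all names and values are made up, and the annotation and downsampling bodies are placeholders.

```python
PRESET_VALUE = 16   # target low resolution, e.g. 16 x 16

def first_feature_info(sample):
    # Stand-in for annotation: first feature location information and
    # first segmentation region information of the preset object.
    locations = [(8, 8), (24, 8)]        # e.g. two eye centres
    regions = {"hair": 1, "nose": 6}     # region name -> constant label
    return locations, regions

def turn_down(sample, preset=PRESET_VALUE):
    # Stand-in for downsampling the original resolution to the preset value.
    return {"resolution": (preset, preset)}

def build_training_pairs(samples):
    pairs = []
    for sample in samples:
        locations, regions = first_feature_info(sample)
        pairs.append({
            "high": sample,                 # original training sample image
            "low": turn_down(sample),       # resolution-reduced counterpart
            "first_locations": locations,
            "first_regions": regions,
        })
    return pairs

samples = [{"resolution": (128, 128)} for _ in range(3)]
pairs = build_training_pairs(samples)
```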
The training sample images may be images of higher clarity, for example, high-resolution images with a resolution of 128 × 128, or high-resolution images with a resolution of 1024 × 1024. The multiple training sample images may include images of different preset objects, or images of the same preset object; the preset object may include a face, a vehicle, or the like. For example, the preset object in some training sample images may include a face, while the preset object in other training sample images may include a vehicle; the preset object included in each training sample image may be the same or different.
For example, taking the preset object being a face as an example, multiple images of the same face may be shot at different locations, at different times, or from different angles; alternatively, images of multiple different faces may be shot for different crowds. The same training sample image may include one or more faces, and may include the overall image of a face or only the image of a local face region; the shooting angle of a training sample image including a face may be the front, the side, or another angle.
For another example, taking the preset object being a vehicle as an example, multiple images of the same vehicle may be shot at different locations, at different times, or from different angles; alternatively, images of multiple different vehicles may be shot. The same training sample image may include one or more vehicles, and may include the overall image of a vehicle or only the image of a local vehicle region; the shooting angle of a training sample image including a vehicle may be the front, the side, or another angle.
It should be noted that the quantity of the training sample images, the type and quantity of the preset objects they include, the shooting angles, the resolution sizes and so on can all be set flexibly according to actual needs; the specific content is not limited here.
The ways in which the image processing apparatus obtains the training sample images may include: in a first way, multiple training sample images may be collected through means such as shooting a large number of images containing the preset object, or shooting multiple images of the same preset object, with a mobile phone, a camera, a webcam, or the like; in a second way, multiple training sample images may be obtained by searching on the internet or retrieved from a picture database. Of course, the multiple training sample images may also be obtained in other ways; the specific way is not limited here.
After obtaining the multiple training sample images, the image processing apparatus can determine the first feature information of the preset object in every training sample image, which may include first feature location information, first segmentation region information, and so on; that is, the image processing apparatus can determine the first feature location information and first segmentation region information of the preset object in every training sample image. For example, as shown in Fig. 3, when the preset object is a face, the first feature location information may include the feature location information of facial parts such as the eyes, eyebrows, nose, mouth and face contour. The location information of each feature may include the location information of multiple feature points, which may be two-dimensional coordinate positions, pixel coordinate positions, or the like.
The first feature location information may be generated by face recognition technology: each facial part on a face in an image, such as the eyes, nose, eyebrows and mouth, is located, and the location information of the feature points of each facial part is generated. A feature point may be the location coordinate information of the key point corresponding to a facial part; feature points may lie on the outer contour of the face and at the edge or center of each facial part, and their number can be set flexibly according to actual needs. The first feature location information may also be obtained by manually annotating the location information of the feature points of each facial part, such as the eyes, nose, eyebrows and mouth.
The first feature information may also include face attributes, texture information, and so on. The face attributes may include eye size, hair color, nose size, mouth size and the like, and the texture information may include face pixels and the like; the specific content can be set flexibly according to actual needs and is not limited here.
For example, as shown in Fig. 4, when the preset object is a face, the first segmentation region information may include segmentation regions such as the hair (segmentation region 1), left eye (segmentation region 5), right eye (segmentation region 3), left eyebrow (segmentation region 4), right eyebrow (segmentation region 2), nose (segmentation region 6), lips (segmentation region 7), teeth (segmentation region 8) and face. Different marks can be set for each segmentation region to obtain the segmentation region information; for example, the pixel values in a segmentation region can be set to a constant while the pixel values in non-segmented regions are 0, and the pixel values in different segmentation regions can be represented by different constants.
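A minimal sketch of this marking scheme: pixels inside each segmentation region take a distinct constant, and pixels outside any region stay 0. The region coordinates below are toy values chosen for illustration, not real annotations.

```python
import numpy as np

# Toy 8x8 label mask: 0 = background, constants mark segmentation regions.
mask = np.zeros((8, 8), dtype=np.uint8)
regions = {
    5: (slice(1, 3), slice(1, 3)),   # left eye  -> constant 5
    3: (slice(1, 3), slice(5, 7)),   # right eye -> constant 3
    6: (slice(4, 6), slice(3, 5)),   # nose      -> constant 6
}
for constant, (rows, cols) in regions.items():
    mask[rows, cols] = constant
```

Each region is thus recoverable from the mask by comparing against its constant, e.g. `mask == 6` selects the nose region.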
It should be noted that when the preset object is a vehicle, the first feature location information may include the location information of vehicle features such as the wheels, license plate, windows, logo, lights and mirrors, and the first segmentation region information may include the segmentation region information of vehicle features such as the wheels, license plate, windows, logo, lights and mirrors.
In some embodiments, the step of the image processing apparatus determining the first feature information of the preset object in every training sample image may include: receiving an annotation instruction, and determining the first feature location information of the preset object in every training sample image based on the annotation instruction; receiving a setting instruction, and determining the first segmentation region information of the preset object in every training sample image based on the setting instruction; and setting the first feature location information and the first segmentation region information as the first feature information.
Specifically, the image processing apparatus can receive an annotation instruction input by a user. The annotation instruction can be used to set annotation information at the positions of the features of the preset object in the training sample images, and the annotation information may be a point, a circle, a polygon, or the like. One or more pieces of annotation information can be set on a training sample image based on the annotation instruction, for example, at positions such as the eyes or nose of a face in the training sample image. Then, the position of each feature of the preset object in the training sample image can be determined according to each piece of annotation information, and the first feature location information of each feature of the preset object in the training sample image can be calculated from those positions. By analogy, one or more pieces of annotation information can be set on another training sample image based on the annotation instruction, and the first feature location information of each feature of the preset object in that image can then be calculated from each piece of annotation information, until all of the multiple training sample images have been processed, obtaining the first feature location information of the preset object in the training sample images.
The image processing apparatus can receive a setting instruction input by a user. The setting instruction can be used to set marks on the pixel values of the regions where the features of the preset object are located in the training sample images, and a mark may be a number, a name, or the like. Marks corresponding to one or more regions can be set in a training sample image based on the setting instruction, for example, in regions such as the eyes or nose of a face in the training sample image. Then, the segmentation region of each feature of the preset object in the training sample image can be determined according to the mark of each region, and the first segmentation region information of each feature of the preset object in the training sample image can be determined from those segmentation regions. By analogy, one or more region marks can be set on another training sample image based on the setting instruction, and the first segmentation region information of each feature of the preset object in that image can then be determined according to each region mark, until all of the multiple training sample images have been processed, obtaining the segmentation region information of the preset object in the training sample images. The first feature location information and first segmentation region information finally obtained are the first feature information.
After obtaining every training sample image, the image processing apparatus can turn down the original resolution of every training sample image to the preset value by downsampling or other means, so as to obtain multiple resolution-reduced training sample images. The preset value can be set flexibly according to actual needs, and the resolution-reduced training sample images may be images of lower clarity, for example, low-resolution images with a resolution of 16 × 16. For example, the original resolution of training sample image a can be turned down to the preset value to obtain resolution-reduced training sample image A; the original resolution of training sample image b can be turned down to the preset value to obtain resolution-reduced training sample image B; the original resolution of training sample image c can be turned down to the preset value to obtain resolution-reduced training sample image C; and so on.
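A short sketch of one way to turn the original resolution down to the preset value. The text mentions downsampling without fixing a method, so the block-averaging choice here is an assumption made purely for illustration.

```python
import numpy as np

def turn_down(img: np.ndarray, preset: int) -> np.ndarray:
    """Downsample a square image to preset x preset by block averaging."""
    h, w = img.shape
    fy, fx = h // preset, w // preset   # block size per output pixel
    # Average each fy x fx block into one low-resolution pixel.
    return img.reshape(preset, fy, preset, fx).mean(axis=(1, 3))

high = np.arange(128 * 128, dtype=float).reshape(128, 128)
low = turn_down(high, 16)   # 128 x 128 training sample -> 16 x 16
```

Block averaging preserves the mean intensity of the image, which makes it a convenient stand-in when the exact downsampling operator is unspecified.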
After obtaining the multiple resolution-reduced training sample images, the second feature location information and second segmentation region information of the preset object in every resolution-reduced training sample image can be obtained, for example, by calculating them with the preset to-be-trained model.
The preset to-be-trained model may be a model composed of a residual network (Residual Network) and a generative adversarial network (Generative Adversarial Network, GAN), or a model composed of a convolutional network and a generative adversarial network, or the like. The network framework of the generative adversarial network may include multiple network variants, for example, a prior estimation network, a discrimination network, a feature network, and so on. The to-be-trained model may also be another model and can be set flexibly according to actual needs; the specific content is not limited here.
In some embodiments, the step of obtaining the second feature location information and second segmentation region information of the preset object in every resolution-reduced training sample image may include: using the prior estimation network in the to-be-trained model to calculate the second feature location information and second segmentation region information of the preset object in every resolution-reduced training sample image.
The image processing apparatus can invoke the prior estimation network in the model to be trained and use it to compute the second feature information of the preset object in each reduced-resolution training sample image. This preset object is the same as the preset object mentioned above; for example, it may include a face or a vehicle. The second feature information may include the second feature location information and the second segmentation region information, where the second feature location information is similar to the first feature location information above, and the second segmentation region information is similar to the first segmentation region information above. For example, when the preset object is a face, the second feature location information may include the location information of features such as the eyes, eyebrows, nose, mouth, and facial contour, where the location information of each feature may include the location information of multiple feature points; the second segmentation region information may include the information of segmentation regions such as the hair, eyes, eyebrows, nose, mouth (including the lips and teeth), and face. As another example, when the preset object is a vehicle, the second feature location information may include the location information of vehicle features such as the wheels, license plate, windows, logo, lights, and mirrors, and the second segmentation region information may include the segmentation region information of those same vehicle features.
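The feature location information above can be represented concretely as per-landmark response maps (with segmentation regions as per-class masks of the same spatial size). The sketch below, with assumed names, builds a Gaussian heatmap peaking at one feature point; prior estimation networks typically regress such maps rather than raw coordinates.

```python
import numpy as np

def landmark_heatmap(h, w, cx, cy, sigma=1.5):
    """Gaussian response map over an h x w grid, peaking at feature point (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

One such map per feature point (eye corner, nose tip, and so on), stacked with the segmentation masks, gives a tensor the prior estimation network can output directly.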
In some embodiments, the step of computing, by the preset model to be trained, the second feature location information and the second segmentation region information of the preset object in each reduced-resolution training sample image may include:

selecting one training sample image from the multiple reduced-resolution training sample images as the current training sample image;

searching for the preset object in the current training sample image;

if the preset object is found in the current training sample image, computing the second feature location information and the second segmentation region information of the preset object using the prior estimation network in the model to be trained;

returning to the operation of selecting one training sample image from the multiple reduced-resolution training sample images as the current training sample image, until all of the reduced-resolution training sample images have been computed.
Specifically, the current training sample image is one reduced-resolution training sample image. The image processing apparatus can search for the preset object in the current training sample image; for example, it can search for a face in the current training sample image by face recognition technology, and search for features on the face such as the eyes, eyebrows, nose, mouth, and facial contour. If the preset object is not found in the current training sample image, there is no need to compute the second feature location information and the second segmentation region information of the preset object and its related features. If the preset object and its related features are found in the current training sample image, the second feature location information and the second segmentation region information of the preset object are computed by the model to be trained. Then, the operation of selecting one training sample image from the multiple reduced-resolution training sample images as the current training sample image is performed again, until all of the reduced-resolution training sample images have been computed.
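The select–search–compute loop above reduces to plain control flow. In this sketch the detector and prior network are stand-in stubs (illustrative names only); the point is that samples in which the preset object is not found are skipped without computing priors.

```python
def compute_priors(samples, find_object, prior_net):
    """Walk every reduced-resolution sample; compute priors only where the
    preset object is found (select -> search -> compute -> return)."""
    priors = {}
    for idx, image in enumerate(samples):   # select the current training sample
        if not find_object(image):          # search for the preset object
            continue                        # not found: nothing to compute
        priors[idx] = prior_net(image)      # second feature + segmentation info
    return priors
```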
After the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information are obtained, the model to be trained can be trained according to them.
In some embodiments, the step of training the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information to obtain the image processing model may include:

(a) using the residual network in the model to be trained, based on the second feature location information and the second segmentation region information, converging the resolution of each reduced-resolution training sample image toward the original resolution of the training sample image, to obtain a resolution-converged training sample image;

(b) using the feature network in the model to be trained, computing the third feature location information of the preset object in the resolution-converged training sample image;

(c) updating the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information, and the third feature location information, to obtain the image processing model.
Specifically, in order to accurately restore each reduced-resolution training sample image to the original training sample image, the image processing apparatus can invoke the residual network in the model to be trained and use it, based on the second feature information (including the second feature location information and the second segmentation region information), to converge each reduced-resolution training sample image toward the original resolution of the training sample image, obtaining a resolution-converged training sample image. The resolution of the resolution-converged training sample image is greater than that of the reduced-resolution training sample image, and the difference between the converged resolution and the original resolution can be less than a preset threshold, which can be set flexibly according to actual needs.
After the resolution-converged training sample image is obtained, the feature network in the model to be trained can be invoked and used to compute the third feature location information of the preset object in the resolution-converged training sample image. This preset object is the same as the preset object mentioned above; for example, it may include a face or a vehicle. The third feature location information is similar to the first feature location information above; for example, when the preset object is a face, the third feature location information may include the location information of features such as the eyes, eyebrows, nose, mouth, and facial contour, where the location information of each feature may include the location information of multiple feature points. Alternatively, the third feature location information can be generated by locating, via face recognition technology, each facial feature such as the eyes, nose, eyebrows, and mouth in the resolution-converged training sample image, producing the location information of the feature points of each facial feature. At this point, the image processing apparatus can update the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information, and the third feature location information, to obtain the image processing model.
In some embodiments, the step of training the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information, and the third feature location information to obtain the image processing model may include:

using the prior estimation network in the model to be trained, computing the error between the first feature location information and the second feature location information to obtain a feature location error, computing the error between the first segmentation region information and the second segmentation region information to obtain a segmentation region error, and setting the feature location error and the segmentation region error as the first feature error;

using the feature network in the model to be trained, computing the error between the first feature location information and the third feature location information to obtain a second feature error;

using the residual network in the model to be trained, determining the image error between the resolution-converged training sample image and the original training sample image;

obtaining gradient information according to the first feature error, the second feature error, and the image error;

updating the parameters of the model to be trained according to the gradient information, to obtain the image processing model.
For example, the image processing apparatus can invoke the prior estimation network in the model to be trained, compare the first feature location information with the second feature location information through the prior estimation network to obtain the feature location error, compare the first segmentation region information with the second segmentation region information to obtain the segmentation region error, and set the feature location error and the segmentation region error as the first feature error. Specifically, the image processing apparatus can separately compute the feature location error between the first feature location information and the second feature location information, and the segmentation region error between the first segmentation region information and the second segmentation region information; together, these constitute the first feature error.
Specifically, the image processing apparatus can compute the first feature error between the first feature information and the second feature information according to formula (1), which can be as follows:

Δ1 = (1/N) Σ_{n=1…N} ||z_n − P(x_n)||^2  (1)

where Δ1 denotes the first feature error; N denotes the number of training sample images (the value of N can be set flexibly according to actual needs); n denotes the n-th training sample image; z denotes the first feature information (including the first feature location information and the first segmentation region information), with z_n denoting the first feature information corresponding to the n-th training sample image; x denotes the reduced-resolution training sample image, with x_n denoting the n-th reduced-resolution training sample image; P denotes the prior estimation network in the model to be trained; and P(x_n) denotes the second feature information (including the second feature location information and the second segmentation region information) corresponding to the n-th reduced-resolution training sample image.
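Read this way, formula (1) is an average of squared differences between the ground-truth priors z_n and the estimated priors P(x_n). A minimal numpy version (the function name is assumed, and the inputs are taken to be stacked prior tensors, one per sample):

```python
import numpy as np

def prior_error(z, p_x):
    """Delta_1: squared difference between true priors z_n and priors P(x_n)
    estimated from the reduced-resolution inputs, averaged over N samples."""
    z = np.asarray(z, dtype=float)
    p_x = np.asarray(p_x, dtype=float)
    diff = (z - p_x).reshape(len(z), -1)   # flatten each sample's prior maps
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```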
Moreover, the image processing apparatus can invoke the feature network in the model to be trained and use it to compute the feature location error between the first feature location information and the third feature location information, obtaining the second feature error. For example, the second feature error between the first feature location information and the third feature location information can be computed according to formula (2):

Δ2 = (1/N) Σ_{n=1…N} ||Φ(y_n) − Φ(G(x_n))||^2  (2)

where Δ2 denotes the second feature error; N denotes the number of training sample images; n denotes the n-th training sample image; Φ denotes the feature network in the model to be trained; y denotes the training sample image, with Φ(y_n) denoting the first feature location information corresponding to the n-th training sample image; G denotes the residual network; x denotes the reduced-resolution training sample image, with x_n denoting the n-th reduced-resolution training sample image; and Φ(G(x_n)) denotes the third feature location information corresponding to the n-th reduced-resolution training sample image.
Moreover, the image processing apparatus can invoke the residual network in the model to be trained and use it to determine the image error between the resolution-converged training sample image and the original training sample image. The image error may include a pixel error, a discrimination error, and an adversarial error. The pixel error can be the error between each pixel value of the resolution-converged training sample image and the corresponding pixel value of the original training sample image; the adversarial error can be the error generated by the discrimination network to adversarially train the residual network and the prior estimation network. The discrimination error can be the error of judging whether the resolution-converged training sample image and the original training sample image are real or generated. For example, for a training sample image to be discriminated, when it is determined to be a resolution-converged training sample image, its label is set to 0; when it is determined to be an original training sample image, its label is set to 1; the obtained label is then compared with the true value to obtain the discrimination error.
In some embodiments, the step of determining the image error between the resolution-converged training sample image and the training sample image may include: using the residual network in the model to be trained to obtain the pixel error between the resolution-converged training sample image and the original training sample image; using the discrimination network in the model to be trained to discriminate between the resolution-converged training sample image and the original training sample image, obtaining the discrimination error and the adversarial error; and setting the pixel error, the discrimination error, and the adversarial error as the image error.
For example, the pixel error, the discrimination error, and the adversarial error can be computed separately, where the pixel error can be computed according to the following formula (3):

Δ3 = (1/N) Σ_{n=1…N} ||y_n − G(x_n)||^2  (3)

where Δ3 denotes the pixel error; N denotes the number of training sample images; n denotes the n-th training sample image; y denotes the training sample image, with y_n denoting the n-th training sample image; x denotes the reduced-resolution training sample image, with x_n denoting the n-th reduced-resolution training sample image; G denotes the residual network in the model to be trained; and G(x_n) denotes the image restored by the residual network from the n-th reduced-resolution training sample image.
The discrimination error can be computed according to the following formula (4), and the adversarial error can be computed according to the following formula (5):

Δ4 = −(1/N) Σ_{n=1…N} [log D(y_n) + log(1 − D(G(x_n)))]  (4)

Δ5 = −(1/N) Σ_{n=1…N} log D(G(x_n))  (5)

where Δ4 denotes the discrimination error; N denotes the number of training sample images; n denotes the n-th training sample image; log denotes the logarithmic function; D denotes the discrimination network in the model to be trained; y denotes the training sample image, with y_n denoting the n-th training sample image; x denotes the reduced-resolution training sample image, with x_n denoting the n-th reduced-resolution training sample image; G denotes the residual network in the model to be trained; G(x_n) denotes the image restored by the residual network from the n-th reduced-resolution training sample image; and Δ5 denotes the adversarial error.
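Formulas (3) to (5) are the standard reconstruction and GAN-style adversarial terms. A numpy sketch under that reading (function names are assumed; D is taken to output a probability that its input is real):

```python
import numpy as np

def pixel_error(y, g_x):
    """Delta_3: squared pixel difference between originals y_n and restored
    images G(x_n), averaged over N samples."""
    y, g_x = np.asarray(y, float), np.asarray(g_x, float)
    return float(np.mean(np.sum((y - g_x).reshape(len(y), -1) ** 2, axis=1)))

def discrimination_error(d_real, d_fake):
    """Delta_4: cross-entropy of the discrimination network, which should score
    originals D(y_n) near 1 and restored images D(G(x_n)) near 0."""
    d_real, d_fake = np.asarray(d_real, float), np.asarray(d_fake, float)
    return float(-np.mean(np.log(d_real) + np.log(1.0 - d_fake)))

def adversarial_error(d_fake):
    """Delta_5: pushes the residual network to make D(G(x_n)) approach 1."""
    return float(-np.mean(np.log(np.asarray(d_fake, float))))
```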
After each error is obtained, the image processing apparatus can obtain the gradient information according to the first feature error, the second feature error, and the image error. In some embodiments, the step of obtaining the gradient information according to the first feature error, the second feature error, and the image error may include:

constructing a first loss function based on the first feature error, the second feature error, the pixel error, and the adversarial error;

performing gradient descent on the first loss function to obtain first gradient information;

constructing a second loss function based on the discrimination error, and performing gradient descent on the second loss function to obtain second gradient information;

setting the first gradient information and the second gradient information as the gradient information.
Specifically, the image processing apparatus can construct the first loss function according to formula (6), based on the first feature error, the second feature error, and the pixel error and adversarial error in the image error. Formula (6) can be as follows:

L1 = Δ1 + Δ2 + Δ3 + Δ5  (6)

where L1 denotes the first loss function, and the meanings of the other parameters are similar to those in formulas (1) to (5) and are not repeated here. The first loss function is the total error of the generation network (including the residual network and the prior estimation network).
Then, the image processing apparatus can perform gradient descent on the first loss function so as to minimize it, obtaining the first gradient information. The manner of gradient descent can be set flexibly according to actual needs; the specific content is not limited here.
Moreover, the image processing apparatus can construct the second loss function according to formula (7), based on the discrimination error in the image error, and perform gradient descent on the second loss function so as to minimize it, obtaining the second gradient information. Formula (7) can be as follows:

L2 = Δ4  (7)

where L2 denotes the second loss function, and the meanings of the other parameters are similar to those in formula (4) and are not repeated here. The second loss function is the error of the discrimination network.
After the first gradient information and the second gradient information are obtained, the image processing apparatus can update the parameters of the model to be trained according to them, adjusting the parameters or weights of the model to appropriate values to obtain the image processing model. During training, the generation network (including the residual network and the prior estimation network) and the discrimination network can be updated alternately.
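The alternating update can be illustrated with a deliberately tiny stand-in: a one-parameter "generator" stepped on the pixel term of L1, then a one-parameter logistic "discriminator" stepped on L2 = Δ4. All names and the scalar setup are illustrative only; the networks in the patent are of course deep models with many parameters.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def alternating_training(x_low, y_true, steps=200, lr=0.05):
    """Alternate a generator step (pixel part of L1) with a discriminator
    step (L2 = Delta_4) on scalar stand-ins for the two networks."""
    g_w = 0.0   # generator: g(x) = g_w * x, should learn y_true / x_low
    d_w = 0.0   # discriminator: D(v) = sigmoid(d_w * v)
    for _ in range(steps):
        # --- generator update: descend the pixel-error term (g_out - y)^2 ---
        g_out = g_w * x_low
        g_w -= lr * 2.0 * (g_out - y_true) * x_low
        # --- discriminator update: descend Delta_4 on real vs. generated ---
        g_out = g_w * x_low
        d_real, d_fake = sigmoid(d_w * y_true), sigmoid(d_w * g_out)
        # gradient of -[log D(y) + log(1 - D(g_out))] with respect to d_w
        grad_d = -(1.0 - d_real) * y_true + d_fake * g_out
        d_w -= lr * grad_d
    return g_w, d_w

g_w, d_w = alternating_training(x_low=2.0, y_true=4.0)
```

In the toy run the generator weight converges to the exact restoration factor (here 2.0), while the discriminator is updated in the alternating slots, mirroring the schedule described above.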
S102, perform feature extraction on the target object in the image to be processed to obtain the target feature location information.
In some embodiments, the step of performing feature extraction on the target object in the image to be processed to obtain the target feature location information may include:

performing feature extraction on the target object in the image to be processed using the image processing model to obtain the target feature location information; or, determining the object identifier and the feature identifiers of the target object, searching for the target object in the image to be processed according to the object identifier, and, when the target object is found, extracting the features of the target object and their positions according to the feature identifiers to obtain the target feature location information.
Specifically, after the image processing model is obtained, the image processing apparatus can use it to increase the resolution of the image. First, the image processing apparatus can invoke the prior estimation network in the image processing model and use it to perform feature extraction on the target object in the image to be processed. For example, when the target object is a face, the prior estimation network can be used to perform feature extraction on the facial features of the face in the image to be processed, obtaining the feature location information of the eyes, eyebrows, nose, mouth, and so on. When the target object is a vehicle, the prior estimation network can be used to perform feature extraction on the vehicle in the image to be processed, obtaining the feature location information of the wheels, license plate, windows, logo, lights, mirrors, and so on.
Alternatively, the image processing apparatus can obtain the target feature location information in other ways. For example, it can first determine the object identifier and the feature identifiers of the target object, where there may be one or more object identifiers, each uniquely identifying a target object, and one or more feature identifiers, each uniquely identifying one or more features contained in the target object. The object identifier and the feature identifiers can be names or numbers composed of digits, text, and/or letters, or contour marks, and so on. The target object can then be searched for in the image to be processed according to the object identifier. When the target object is not found, there is no need to extract its features and their positions; when the target object is found, its features and their positions can be extracted according to the feature identifiers to obtain the target feature location information.
S103, perform feature-region segmentation on the target object to obtain the target segmentation region information.
In some embodiments, the step of performing feature-region segmentation on the target object in the image to be processed to obtain the target segmentation region information may include:

performing feature-region segmentation on the target object in the image to be processed using the image processing model to obtain the target segmentation region information; or, determining the object identifier and the feature identifiers of the target object, searching for the target object in the image to be processed according to the object identifier, and, when the target object is found, segmenting the feature regions of the target object according to the feature identifiers to obtain the target segmentation region information.
Specifically, the image processing apparatus can invoke the prior estimation network in the image processing model and use it to perform feature-region segmentation on the target object in the image to be processed. For example, when the target object is a face, the prior estimation network can be used to perform feature-region segmentation on the facial features of the face in the image to be processed, obtaining the segmentation region information of the eyes, eyebrows, nose, mouth, and so on. When the target object is a vehicle, the prior estimation network can be used to perform feature-region segmentation on the vehicle in the image to be processed, obtaining the segmentation region information of the wheels, license plate, windows, logo, lights, mirrors, and so on. Alternatively, the image processing apparatus can obtain the target segmentation region information in other ways; for example, it can search for the target object in the image to be processed according to the object identifier and, when the target object is found, segment the feature regions of the target object according to the feature identifiers to obtain the target segmentation region information.
S104, using the preset image processing model, increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information.
The image processing model is trained from the feature location information and segmentation region information of the preset object in multiple training sample images. Since the image processing model is trained based on the feature location information and segmentation region information of the preset object in training sample images of different resolutions, the image processing apparatus can, through the image processing model, obtain the target feature location information and the target segmentation region information of the target object in the image to be processed, and increase the original resolution of the image to be processed, that is, convert a low-resolution image into a high-resolution image. The target object is similar to the preset object mentioned above; for example, it may include a face or a vehicle. The target feature location information is similar to the first feature location information above, and the target segmentation region information is similar to the first segmentation region information above.
In some embodiments, when the target object is a face, the target feature location information is facial feature location information and the target segmentation region information is facial feature segmentation region information, and the step of increasing the original resolution of the image to be processed based on the target feature location information and the target segmentation region information using the preset image processing model may include: increasing the original resolution of the image to be processed based on the facial feature location information and the facial feature segmentation region information using the preset image processing model.
The facial feature location information may include the location information of features such as the eyes, eyebrows, nose, mouth, and facial contour, where the location information of each feature may include the location information of multiple feature points; the facial feature segmentation region information may include the segmentation region information of the hair, eyes, eyebrows, nose, mouth, face, and so on.
Through the image processing model, the image processing apparatus can increase the original resolution of the image to be processed based on the facial feature location information and the facial feature segmentation region information, obtaining a processed image; that is, the original resolution of the image to be processed can be increased to a preset resolution value, which can be set flexibly according to actual needs. For example, as shown in Figure 5, higher-magnification pixels can be recovered from each pixel in the low-resolution image, so that a high-resolution image is obtained and a high-magnification super-resolution effect is achieved. This can effectively improve the recovery of details such as facial features and contours, substantially improving the image quality and the display effect of the image.
In some embodiments, when the target object is a vehicle, the target feature location information is vehicle feature location information and the target segmentation region information is vehicle segmentation region information, and the step of increasing the original resolution of the image to be processed based on the target feature location information and the target segmentation region information using the preset image processing model may include: increasing the original resolution of the image to be processed based on the vehicle feature location information and the vehicle segmentation region information using the preset image processing model.
The vehicle feature location information may include the location information of vehicle features such as the wheels, license plate, windows, logo, lights, and mirrors, and the vehicle segmentation region information may include the segmentation region information of those same vehicle features. Through the image processing model, the image processing apparatus can increase the original resolution of the image to be processed based on the vehicle feature location information and the vehicle segmentation region information, obtaining a processed image. The original resolution of the image to be processed can be increased to a preset resolution value, which can be set flexibly according to actual needs.
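At inference time the flow is: estimate priors from the low-resolution input, then let the residual branch refine a plain enlargement toward the preset target resolution. A toy numpy sketch of that shape (nearest-neighbor upsampling plus a prior-conditioned residual correction; all function names are illustrative, not the patent's):

```python
import numpy as np

def upsample_nearest(img, factor):
    """Plain enlargement of the image to the preset higher resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def super_resolve(img_low, prior_maps, residual_fn, factor=4):
    """Coarse upsample, then add a residual correction that may depend on
    the estimated feature-location and segmentation priors."""
    base = upsample_nearest(img_low, factor)
    return base + residual_fn(base, prior_maps)
```

In the trained model the role of `residual_fn` is played by the residual network, with `prior_maps` being the feature location and segmentation region information estimated by the prior estimation network.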
From the above, when the resolution of an image needs to be increased, the embodiment of the present invention can obtain the image to be processed, perform feature extraction on the target object in the image to be processed to obtain the target feature location information, and perform feature-region segmentation on the target object in the image to be processed to obtain the target segmentation region information; then, using the preset image processing model, it can increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, that is, convert the low-resolution image to be processed into a higher-resolution image. Since this scheme can accurately increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information of the target object in the image, it can improve the clarity and the quality of the processed image.
The method described in the above embodiments is further illustrated below by way of example.
This embodiment is described with the image processing apparatus being a network device and the preset object being a face as an example. When the network device collects a user's facial image through a monitoring device, the resolution of the collected image may be low due to factors such as the current environment and the equipment; therefore, the lower-resolution image can be converted into a higher-resolution image through the scheme of the embodiment of the present invention.
Take a model to be trained that includes a residual network and a generative adversarial network as an example, where the variant networks of the generative adversarial network may include a prior estimation network, a feature network, and a discrimination network, for example, as shown in Figure 6. The core structure of the residual network is the residual module, which learns the residual from input to output rather than a direct mapping between the two, thereby effectively overcoming the performance degradation caused by deep network structures. The generative adversarial network includes a generator and a discriminator: the goal of the generator is to produce samples realistic enough to fool the discriminator, while the discriminator can be a binary classifier that judges whether the input data is real data or a generated sample.
The following takes a model to be trained whose overall network structure includes the prior estimation network, the residual network, the discrimination network, and the feature network as an example, as shown in Figure 6. The prior estimation network can be used to estimate the facial prior information (i.e., feature information) from the low-resolution image (i.e., the reduced-resolution training sample image). The prior information may include the feature location information of the face (i.e., the second feature location information) and the segmentation region information of each facial organ (i.e., the second segmentation region information), and the obtained feature information (including the second feature location information and the second segmentation region information) is transferred to the residual network. The prior estimation network can also be used to compare the obtained feature information with the first feature information corresponding to the true high-resolution image (i.e., the training sample image), obtaining the prior error (i.e., the first feature error); and so on.
The residual network can be used to restore the low-resolution image to a high-resolution image according to the feature information transferred from the prior estimation network, obtaining a restored high-resolution image (i.e., the resolution-converged training sample image), transferring the restored high-resolution image to the discrimination network and the feature network, and comparing the restored high-resolution image with the true high-resolution image to obtain the pixel error; and so on.
The discrimination network can be used to judge whether the input data (including the restored high-resolution image and the true high-resolution image) is real data (i.e., the true high-resolution image) or a generated sample (i.e., the restored high-resolution image), pushing the residual network to restore more realistic high-resolution images. For example, the input of the discrimination network is either the restored high-resolution image or the true high-resolution image, and its output can include the adversarial error and the discrimination error.
The feature network can be used to extract the characteristic information (including feature-location information) of the restored high-resolution image and compare it with the characteristic information of the true high-resolution image, which pushes the image restored by the network to preserve identity information and in turn facilitates face-verification tasks. The input of the feature network is the restored high-resolution image and the true high-resolution image; its output is the characteristic error between the restored high-resolution image and the true high-resolution image.
After forward computation is carried out on multiple training sample images and the corresponding errors are obtained (pixel error, prior error, characteristic error, discrimination error, adversarial error, and so on), the model to be trained can be further trained for face super-resolution. For example, loss functions can be constructed from these errors, and the parameters of the model to be trained can be updated by performing gradient descent on the loss functions, iterating continuously until the model converges. The model can be trained in an end-to-end fashion, with the generation networks (including the residual network and the prior-estimation network) and the discrimination network updated alternately, so as to obtain the image processing model.
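As a rough illustration of the forward computation and the two losses described above, the following sketch uses trivial numpy stand-ins for the four networks; real implementations would be learned convolutional networks, and the weighting coefficients here are invented purely for illustration:

```python
import numpy as np

# Toy stand-ins for the four networks described in the text.
def prior_net(lr):        return lr.mean()                       # "estimated prior"
def residual_net(lr, p):  return lr + p                          # "restored HR image"
def feature_net(img):     return img * 0.5                       # "identity features"
def discriminator(img):   return 1.0 / (1.0 + np.exp(-img.mean()))  # P(image is real)

def training_step(lr_img, hr_img):
    """One forward pass producing the generator and discriminator losses.
    lr_img is assumed already resized to hr_img's shape for simplicity."""
    prior = prior_net(lr_img)
    restored = residual_net(lr_img, prior)
    # The four error terms named in the text (forms are illustrative).
    pixel_err   = float(np.mean((restored - hr_img) ** 2))
    prior_err   = float(abs(prior - hr_img.mean()))
    feature_err = float(np.mean((feature_net(restored) - feature_net(hr_img)) ** 2))
    adv_err     = float(-np.log(discriminator(restored) + 1e-8))
    # First loss (trains residual + prior-estimation networks): weighted sum.
    gen_loss = pixel_err + 0.1 * prior_err + 0.1 * feature_err + 0.01 * adv_err
    # Second loss (trains the discrimination network): real vs. restored.
    disc_loss = float(-np.log(discriminator(hr_img) + 1e-8)
                      - np.log(1.0 - discriminator(restored) + 1e-8))
    return gen_loss, disc_loss
```

In the alternating scheme, each iteration performs gradient descent on `gen_loss` for the generation networks, then on `disc_loss` for the discrimination network.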
After the image processing model is obtained, the prior-estimation network in the image processing model can compute the feature-location information, segmentation-region information and so on of the target object in the low-resolution image to be converted, and transfer this feature-location information and segmentation-region information to the residual network; at this point, the residual network can convert the low-resolution image to be converted into a high-resolution image (i.e., the processed image) according to this feature-location information and segmentation-region information.
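Schematically, inference reduces to a two-step pipeline; the callables below are hypothetical stand-ins for the two trained networks:

```python
def super_resolve(lr_img, prior_net, residual_net):
    """Convert a low-resolution image into a high-resolution one.

    prior_net and residual_net stand in for the trained networks; their
    real counterparts are learned models, not plain functions.
    """
    # Step 1: estimate feature locations and segmentation regions from the input.
    feature_locations, seg_regions = prior_net(lr_img)
    # Step 2: restore the high-resolution image conditioned on those priors.
    return residual_net(lr_img, feature_locations, seg_regions)
```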
Referring to Fig. 7, Fig. 7 is a flow diagram of the image processing method provided in an embodiment of the present invention. The flow may include the following steps:
201. The network device obtains multiple training sample images and determines the first characteristic information of the face in each training sample image.
First, the network device needs to carry out model training, that is, to train the model to be trained. For example, multiple training sample images can be obtained by shooting a large number of images containing faces, and multiple images of the same face, with a mobile phone, camera or video camera, or through channels such as searching on the Internet or retrieving them from a picture database.
The training sample images can be images of relatively high definition, for example, high-resolution images with a resolution of 128 × 128, or 1024 × 1024, and so on. The multiple training sample images may include images of different faces as well as images of the same face, and the faces contained in different training sample images can differ. For example, multiple images of the same face can be taken at different locations, at different times or from different angles; alternatively, images of multiple different faces can be taken for different crowds. The same training sample image may contain one or more faces, and the shooting angle of a face in a training sample image can be frontal, lateral, or another angle.
After obtaining the multiple training sample images, the network device can determine the first characteristic information of the face in each training sample image. The first characteristic information may include first feature-location information and first segmentation-region information, among others. The first feature-location information may include the location information of features such as the eyes, eyebrows, nose, mouth and facial contour; for example, as shown in Fig. 3, the location information of each feature may include the location information of multiple feature points.
The first feature-location information can be generated by using face recognition technology to locate each facial organ on the face in the image, such as the eyes, nose, eyebrows and mouth, and producing the location information of the feature points of each facial organ. The first feature-location information can also be obtained by manually annotating the location information of the feature points of each facial organ, such as the eyes, nose, eyebrows and mouth.
For example, as shown in Fig. 4, the first segmentation-region information may include segmentation regions such as the hair, eyes, eyebrows, nose, mouth and face. Different labels can be assigned to each segmentation region to obtain the segmentation-region information; for example, the pixel values inside a segmentation region can be set to a constant while the pixel values outside all segmentation regions are 0, with the pixel values of different segmentation regions represented by different constants, e.g., a pixel value of 1 inside the left-eye region, 2 inside the right-eye region, 3 inside the nose region, and so on.
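The constant-per-region labeling scheme can be sketched as follows; the label values and bounding-box coordinates here are illustrative only:

```python
import numpy as np

# Illustrative label constants following the scheme in the text:
# 0 = outside any segmentation region, 1 = left eye, 2 = right eye, 3 = nose.
LABELS = {"left_eye": 1, "right_eye": 2, "nose": 3}

def build_segmentation_map(shape, regions):
    """regions maps a region name to a (row_slice, col_slice) bounding box."""
    seg = np.zeros(shape, dtype=np.uint8)   # background pixels stay 0
    for name, (rows, cols) in regions.items():
        seg[rows, cols] = LABELS[name]      # one constant label per region
    return seg
```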
202. The network device lowers the original resolution of each training sample image to a preset value, obtaining multiple resolution-lowered training sample images.
After obtaining each training sample image, the network device can lower the original resolution of each training sample image to the preset value by down-sampling or other means, thereby obtaining multiple resolution-lowered training sample images. The preset value can be set flexibly according to actual needs; the resolution-lowered training sample images can be images of relatively low definition, for example, low-resolution images with a resolution of 16 × 16, or 8 × 8, and so on. For example, the original resolution of training sample image a is lowered to the preset value to obtain resolution-lowered training sample image A; the original resolution of training sample image b is lowered to the preset value to obtain resolution-lowered training sample image B; the original resolution of training sample image c is lowered to the preset value to obtain resolution-lowered training sample image C; and so on.
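One simple form of down-sampling is block averaging: lowering a 128 × 128 sample to 16 × 16 corresponds to a factor of 8. The actual pipeline may use a different filter (e.g. bicubic); this is only a sketch:

```python
import numpy as np

def lower_resolution(img, factor):
    """Down-sample a square grayscale image by averaging factor x factor blocks."""
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0, "factor must divide the image size"
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```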
203. The network device computes, through the prior-estimation network in the model to be trained, the second characteristic information of the face in each resolution-lowered training sample image, and determines the first characteristic error between the first characteristic information and the second characteristic information.
The network device computes the second characteristic information of the face in each resolution-lowered training sample image through the prior-estimation network in the model to be trained. The second characteristic information may include second feature-location information and second segmentation-region information, among others. The second feature-location information may include the location information of features such as the eyes, eyebrows, nose, mouth and facial contour, where the location information of each feature may include the location information of multiple feature points; the second segmentation-region information may include the information of segmentation regions such as the hair, eyes, eyebrows, nose, mouth (including the lips and teeth) and face.
At this point, the network device can compare the second characteristic information with the first characteristic information through the prior-estimation network in the model to be trained. For example, the first feature-location information can be compared with the second feature-location information to obtain the feature-location error, and the first segmentation-region information can be compared with the second segmentation-region information to obtain the segmentation-region error; the feature-location error and segmentation-region error thus obtained constitute the first characteristic error, whose calculation formula can be the aforementioned formula (1).
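Formula (1) itself is not reproduced in this passage; a plausible reading of the comparison is a squared landmark-distance term plus a segmentation-disagreement term, for example:

```python
import numpy as np

def first_feature_error(lm_first, lm_second, seg_first, seg_second):
    """Hypothetical form of the first characteristic error (formula (1) is not
    shown here): feature-location error plus segmentation-region error."""
    # Mean squared distance over all feature-point coordinates.
    location_error = float(np.mean((lm_first - lm_second) ** 2))
    # Fraction of pixels assigned a different segmentation label.
    segmentation_error = float(np.mean(seg_first != seg_second))
    return location_error + segmentation_error
```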
204. The network device, through the residual network in the model to be trained and based on the second characteristic information, converges the resolution of each resolution-lowered training sample image back to the original resolution of the training sample image, obtaining resolution-restored training sample images, and compares each resolution-restored training sample image with the corresponding training sample image to obtain the pixel error.
In order to precisely revert each resolution-lowered training sample image to the original training sample image, the network device can converge, through the residual network in the model to be trained and based on the second characteristic information, the resolution of each resolution-lowered training sample image to the original resolution of the original training sample image, thereby obtaining the resolution-restored training sample images. After obtaining the resolution-restored training sample images, the network device can compare, through the residual network in the model to be trained, each resolution-restored training sample image with the original training sample image pixel by pixel to obtain the pixel error, whose calculation formula can be the aforementioned formula (3).
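A common choice for the pixel-by-pixel comparison, consistent with but not necessarily identical to formula (3), is the mean squared error over all pixels:

```python
import numpy as np

def pixel_error(restored, original):
    """Per-pixel mean squared error between the resolution-restored image
    and the original training sample image (an assumed L2 form)."""
    diff = restored.astype(float) - original.astype(float)
    return float(np.mean(diff ** 2))
```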
205. The network device computes, through the feature network in the model to be trained, the third characteristic information of the face in each resolution-restored training sample image, and computes the second characteristic error between the first characteristic information and the third characteristic information.
After obtaining the resolution-restored training sample images, the network device can compute the third characteristic information of the face in each resolution-restored training sample image through the feature network in the model to be trained. The third characteristic information may include third feature-location information, among others; the third feature-location information may include the location information of features such as the eyes, eyebrows, nose, mouth and facial contour, where the location information of each feature may include the location information of multiple feature points.
At this point, the network device can compute, through the feature network in the model to be trained, the feature-location error between the first feature-location information and the third feature-location information to obtain the second characteristic error, whose calculation formula can be the aforementioned formula (2).
206. The network device discriminates, through the discrimination network in the model to be trained, between the resolution-restored training sample images and the original training sample images, obtaining the discrimination error and the adversarial error.
After obtaining the resolution-restored training sample images, the network device can discriminate, through the discrimination network in the model to be trained, between the resolution-restored training sample images and the training sample images, obtaining the discrimination error and the adversarial error. The calculation formula of the discrimination error can be the aforementioned formula (4), and the calculation formula of the adversarial error can be the aforementioned formula (5).
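Formulas (4) and (5) are not reproduced here; a standard GAN reading uses binary cross-entropy, with the discriminator penalized for mislabeling real versus restored images and the generator penalized when its restorations are detected:

```python
import numpy as np

def _bce(p, label):
    """Binary cross-entropy for one predicted probability p and target label."""
    p = float(np.clip(p, 1e-8, 1 - 1e-8))   # guard against log(0)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def discrimination_error(p_real, p_restored):
    # Discriminator target: score true HR images as 1, restored images as 0.
    return _bce(p_real, 1.0) + _bce(p_restored, 0.0)

def adversarial_error(p_restored):
    # Generator target: the discriminator should score restored images as 1.
    return _bce(p_restored, 1.0)
```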
207. The network device constructs a first loss function based on the first characteristic error, the second characteristic error, the pixel error and the adversarial error, trains the residual network and the prior-estimation network through the first loss function, constructs a second loss function based on the discrimination error, and trains the discrimination network through the second loss function, obtaining the image processing model.
After obtaining the errors, the network device can construct the first loss function based on the first characteristic error, the second characteristic error, the pixel error and the adversarial error; the expression of the first loss function can be the aforementioned formula (6). The residual network and the prior-estimation network are trained through the first loss function: for example, gradient descent can be performed on the first loss function to obtain first gradient information, and the parameters of the residual network and the prior-estimation network are updated according to the first gradient information, so as to adjust the parameters or weights of the residual network and the prior-estimation network to appropriate values. The second loss function is constructed based on the discrimination error; its expression can be the aforementioned formula (7). The discrimination network is trained through the second loss function: for example, gradient descent can be performed on the second loss function to obtain second gradient information, and the parameters of the discrimination network are updated according to the second gradient information, so as to adjust the parameters or weights of the discrimination network to appropriate values. After the discrimination network, the residual network, the prior-estimation network and the other networks in the model to be trained have been trained in this way, the image processing model is obtained.
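The parameter update described above is ordinary gradient descent. Sketched with plain lists (a real framework computes the gradients automatically), the generation-network and discrimination-network parameters are each moved against the gradient of their own loss, in alternation:

```python
def gradient_descent_step(params, grads, learning_rate=0.01):
    """One update of a parameter list: move each parameter against its
    gradient. Gradients must be supplied by the caller; in practice they
    come from differentiating the first or second loss function.

    Alternating scheme, one iteration (pseudocode in comments):
      gen_params  = gradient_descent_step(gen_params,  grad_first_loss)
      disc_params = gradient_descent_step(disc_params, grad_second_loss)
    """
    return [p - learning_rate * g for p, g in zip(params, grads)]
```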
208. The network device computes, through the prior-estimation network in the image processing model, the target characteristic information of the target face in the image to be processed.
After the image processing model is obtained, the network device can, when collecting a user's facial image through a monitoring device, convert a collected low-resolution image into a high-resolution image through the image processing model. Since the image processing model is trained on the characteristic information of faces in training sample images of different resolutions, when the resolution of an image needs to be raised after the image processing model is obtained, the network device can compute the target characteristic information of the target face in the image to be processed through the prior-estimation network in the image processing model. The target characteristic information may include target feature-location information and target segmentation-region information, among others; the target feature-location information may include the location information of features such as the eyes, eyebrows, nose, mouth and facial contour, where the location information of each feature may include the location information of multiple feature points, and the target segmentation-region information may include the information of segmentation regions such as the hair, eyes, eyebrows, nose, mouth and face.
209. The network device raises the original resolution of the image to be processed through the residual network based on the target characteristic information, obtaining the processed image.
After obtaining the target characteristic information, the network device can raise the original resolution of the image to be processed through the residual network based on the target characteristic information, obtaining the processed image, i.e., converting the low-resolution image into a high-resolution image, for example, as shown in Fig. 5. For instance, 64, 128 or more pixels can be recovered from the information of a single pixel in the low-resolution image; after each pixel in the low-resolution image has been expanded into its corresponding 64, 128 or more pixels, a high-resolution image is obtained, achieving a super-resolution effect of 8 times or even higher. This can effectively improve the recovery of details such as the face and its contour, greatly increasing image quality, so that the image displays well.
The image processing flow of the network device can be applied in many scenarios. For example, in identity-verification business, since the images of many identity documents have low resolution, super-resolution needs to be applied to low-quality identity-document images, i.e., the image to be converted is converted into a high-resolution image, so that the user's face can be distinguished based on the high-resolution image, effectively improving verification performance. In addition, in monitoring environments, due to the limitations of the scene and the surveillance camera, the images collected by surveillance cameras are generally of poor quality and low resolution, so it is highly desirable to improve the quality of the acquired images and thereby improve the performance of subsequent related tasks.
In the embodiment of the present invention, the model to be trained can be trained based on images of different resolutions and the characteristic information of the preset object in those images to obtain the image processing model; when the resolution of an image needs to be raised, the image processing model can raise the resolution of the image precisely based on the characteristic information, thereby improving the definition of the processed image and increasing its quality.
To better implement the image processing method provided in the embodiment of the present invention, the embodiment of the present invention also provides an apparatus based on the above image processing method. The meanings of the terms are the same as in the above image processing method, and specific implementation details can refer to the explanation in the method embodiment.
Referring to Fig. 8, Fig. 8 is a structural diagram of the image processing apparatus provided in an embodiment of the present invention. The image processing apparatus may include a first acquisition unit 301, an extraction unit 302, a segmentation unit 303, a resolution-raising unit 304, and so on.
The first acquisition unit 301 is configured to obtain an image to be processed. The image to be processed can be an image of relatively low definition, for example, a low-resolution image with a resolution of 16 × 16, or another resolution. The image to be processed contains a target object, which may include a face, a vehicle, or the like.
The ways in which the first acquisition unit 301 obtains the image to be processed may include: mode one, shooting a large number of images containing the target object with a mobile phone, camera or video camera; mode two, obtaining the image to be processed by searching on the Internet or from a database. Of course, the image to be processed can also be obtained in other ways; the specific manner is not limited here.
In some embodiments, as shown in Fig. 9, the image processing apparatus may also include a determination unit 305, a resolution-lowering unit 306, a second acquisition unit 307, a training unit 308, and so on, specifically as follows:
The determination unit 305 is configured to obtain multiple training sample images and determine the first feature-location information and first segmentation-region information of the preset object in each training sample image;
the resolution-lowering unit 306 is configured to lower the original resolution of each training sample image to a preset value, obtaining multiple resolution-lowered training sample images;
the second acquisition unit 307 is configured to obtain the second feature-location information and second segmentation-region information of the preset object in each resolution-lowered training sample image;
the training unit 308 is configured to train the model to be trained according to the first feature-location information, the first segmentation-region information, the second feature-location information and the second segmentation-region information, obtaining the image processing model.
The training sample images can be images of relatively high definition, for example, high-resolution images with a resolution of 128 × 128, or 1024 × 1024, and so on. The multiple training sample images may include images of different preset objects as well as images of the same preset object. The preset object may include a face, a vehicle, or the like; for example, the preset object in some training sample images may include a face while the preset object in other training sample images may include a vehicle, and the preset objects contained in different training sample images can be the same or different.
For example, taking a face as the preset object, multiple images of the same face can be taken at different locations, at different times or from different angles; alternatively, images of multiple different faces can be taken for different crowds. The same training sample image may contain one or more faces, and a training sample image may contain a whole face or only a partial region of a face; the shooting angle of a face in a training sample image can be frontal, lateral, or another angle.
As another example, taking a vehicle as the preset object, multiple images of the same vehicle can be taken at different locations, at different times or from different angles; alternatively, images of multiple different vehicles can be taken for different vehicles. The same training sample image may contain one or more vehicles, and a training sample image may contain a whole vehicle or only a partial region of a vehicle; the shooting angle of a vehicle in a training sample image can be frontal, lateral, or another angle.
It should be noted that the number of training sample images, the type and number of preset objects they contain, the shooting angles, the resolution sizes and so on can all be set flexibly according to actual needs; the specific content is not limited here.
The ways in which the determination unit 305 obtains the training sample images may include: mode one, collecting multiple training sample images through channels such as shooting a large number of images containing the preset object, and multiple images of the same preset object, with a mobile phone, camera or video camera; mode two, obtaining multiple training sample images by searching on the Internet or from a picture database. Of course, the multiple training sample images can also be obtained in other ways; the specific manner is not limited here.
After obtaining the multiple training sample images, the determination unit 305 can determine the first characteristic information of the preset object in each training sample image. The first characteristic information may include first feature-location information and first segmentation-region information, among others; that is, the determination unit 305 can determine the first feature-location information and the first segmentation-region information of the preset object in each training sample image. For example, as shown in Fig. 3, when the preset object is a face, the first feature-location information may include the feature-location information of facial organs such as the eyes, eyebrows, nose, mouth and facial contour; the location information of each feature may include the location information of multiple feature points, and the location information can be a two-dimensional coordinate position, a pixel coordinate position, or the like.
The first feature-location information can be generated by using face recognition technology to locate each facial organ on the face in the image, such as the eyes, nose, eyebrows and mouth, and producing the location information of the feature points of each facial organ. A feature point can be the location coordinate information of the key point corresponding to a facial organ; feature points can lie on the outer contour of the face and at the edge or center of each facial organ, and their number can be set flexibly according to actual needs. The first feature-location information can also be obtained by manually annotating the location information of the feature points of each facial organ, such as the eyes, nose, eyebrows and mouth.
The first characteristic information can also include face attributes, texture information and the like. The face attributes may include eye size, hair color, nose size, mouth size and so on; the texture information may include face pixels and so on. The specific content can be set flexibly according to actual needs and is not limited here.
For example, as shown in Fig. 3, when the preset object is a face, the first segmentation-region information may include segmentation regions such as the hair (segmentation region 1), the left eye (segmentation region 5), the right eye (segmentation region 3), the left eyebrow (segmentation region 4), the right eyebrow (segmentation region 2), the nose (segmentation region 6), the lips (segmentation region 7), the teeth (segmentation region 8) and the face. Different labels can be assigned to each segmentation region to obtain the segmentation-region information; for example, the pixel values inside a segmentation region can be set to a constant while the pixel values outside all segmentation regions are 0, with the pixel values of different segmentation regions represented by different constants.
It should be noted that when the preset object is a vehicle, the first feature-location information may include the location information of the wheels, license plate, windows, vehicle logo, lights, rear-view mirrors and so on, and the first segmentation-region information may include the region information of the wheels, license plate, windows, vehicle logo, lights, rear-view mirrors and so on.
After obtaining each training sample image, the resolution-lowering unit 306 can lower the original resolution of each training sample image to the preset value by down-sampling or other means, thereby obtaining multiple resolution-lowered training sample images. The preset value can be set flexibly according to actual needs; the resolution-lowered training sample images can be images of relatively low definition, for example, low-resolution images with a resolution of 16 × 16, and so on. For example, the original resolution of training sample image a can be lowered to the preset value to obtain resolution-lowered training sample image A; the original resolution of training sample image b can be lowered to the preset value to obtain resolution-lowered training sample image B; the original resolution of training sample image c can be lowered to the preset value to obtain resolution-lowered training sample image C; and so on.
After obtaining the multiple resolution-lowered training sample images, the second acquisition unit 307 can compute, through the preset model to be trained, the second feature-location information and the second segmentation-region information of the preset object in each resolution-lowered training sample image.
The preset model to be trained may be a model composed of a residual network and a generative adversarial network, or a model composed of a convolutional network and a generative adversarial network, among others. The network framework of the generative adversarial network may include multiple network variants, for example, generation networks such as a prior-estimation network, as well as a discrimination network, a feature network and so on. The model to be trained can also be another model and can be set flexibly according to actual needs; the specific content is not limited here.
In some embodiments, the second acquisition unit 307 is specifically configured to compute, using the prior-estimation network in the model to be trained, the second feature-location information and the second segmentation-region information of the preset object in each resolution-lowered training sample image.
The second acquisition unit 307 can invoke the prior-estimation network in the model to be trained and use it to compute the second characteristic information of the preset object in each resolution-lowered training sample image. The preset object here is the same as the preset object mentioned above; for example, it may include a face, a vehicle, or the like. The second characteristic information may include second feature-location information and second segmentation-region information, among others, where the second feature-location information is similar to the first feature-location information above and the second segmentation-region information is similar to the first segmentation-region information above. For example, when the preset object is a face, the second feature-location information may include the location information of features such as the eyes, eyebrows, nose, mouth and facial contour, where the location information of each feature may include the location information of multiple feature points, and the second segmentation-region information may include the information of segmentation regions such as the hair, eyes, eyebrows, nose, mouth (including the lips and teeth) and face. As another example, when the preset object is a vehicle, the second feature-location information may include the location information of vehicle features such as the wheels, license plate, windows, vehicle logo, lights and rear-view mirrors, and the second segmentation-region information may include the segmentation-region information of those vehicle features.
In some embodiments, the second acquisition unit 307 can specifically be configured to:
select one training sample image from the multiple resolution-lowered training sample images as the current training sample image;
search the current training sample image for the preset object;
if the preset object is found in the current training sample image, compute the second feature-location information and the second segmentation-region information of the preset object through the model to be trained;
return to the operation of selecting one training sample image from the multiple resolution-lowered training sample images as the current training sample image, until all the resolution-lowered training sample images have been processed.
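The select-search-compute-return loop above amounts to iterating over the lowered samples and computing priors only where the preset object is detected; as a sketch with hypothetical detector and estimator callables:

```python
def compute_second_features(lowered_samples, contains_object, estimate_features):
    """Iterate over the resolution-lowered training samples; for each one in
    which the preset object is found, compute its second feature-location and
    segmentation-region information. Both callables are stand-ins here for a
    detector and the prior-estimation network."""
    results = {}
    for index, sample in enumerate(lowered_samples):
        if contains_object(sample):               # preset object found?
            results[index] = estimate_features(sample)
    return results
```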
After obtaining the first feature-location information, the first segmentation-region information, the second feature-location information and the second segmentation-region information, the training unit 308 can train the model to be trained according to the first feature-location information, the first segmentation-region information, the second feature-location information and the second segmentation-region information.
In some embodiments, as shown in Fig. 10, the training unit 308 may include a convergence subunit 3081, a computation subunit 3082, an update subunit 3083, and so on, specifically as follows:
The convergence subunit 3081 is configured to converge, using the residual network in the model to be trained and based on the second feature-location information and the second segmentation-region information, the resolution of each resolution-lowered training sample image to the original resolution of the training sample image, obtaining resolution-restored training sample images;
the computation subunit 3082 is configured to compute, using the feature network in the model to be trained, the third feature-location information of the preset object in the resolution-restored training sample images;
the update subunit 3083 is configured to update the parameters of the model to be trained according to the first feature-location information, the first segmentation-region information, the second feature-location information, the second segmentation-region information and the third feature-location information, obtaining the image processing model.
Specifically, in order to accurately restore each reduced-resolution training sample image to the original training sample image, the convergence subunit 3081 can use the model to be trained, based on the second feature information (including the second feature location information and the second segmentation region information), to converge each reduced-resolution training sample image to the original resolution of the training sample image, obtaining resolution-converged training sample images. For example, the residual network in the model to be trained may convert the reduced-resolution training sample images into training sample images at five resolutions.
After the resolution-converged training sample images are obtained, the computation subunit 3082 can invoke the feature network in the model to be trained and use this feature network to compute the third feature location information of the preset object in the resolution-converged training sample images. This preset object is the same as the preset object mentioned above; for example, it may include a face, a vehicle, or the like. The third feature location information is similar to the first feature location information described above; for example, when the preset object is a face, the third feature location information may include the location information of features such as the eyes, eyebrows, nose, mouth, and facial contour, where the location information of each feature may include the location information of multiple feature points. Alternatively, the third feature location information can be obtained by using face recognition technology to locate each facial feature (such as the eyes, nose, eyebrows, and mouth) in the resolution-converged training sample images and generating the location information of the feature points of each facial feature. At this point, the update subunit 3083 can update the parameters of the model to be trained according to the first feature location information, first segmentation region information, second feature location information, second segmentation region information, and third feature location information, obtaining the image processing model.
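By way of illustration, the forward pass of this training step (the residual network restores the reduced-resolution sample, then the feature network predicts third feature locations on the result) can be sketched as follows. Both functions are simplified stand-ins, not the embodiment's actual networks: a trained residual network would refine the upsampled image using the second feature location and segmentation information rather than ignore it.

```python
import numpy as np

def residual_upscale(lr_image, feat_locs, seg_map, scale=4):
    # Stand-in for the residual network: restore the reduced-resolution
    # image toward its original resolution. A trained network would be
    # conditioned on feat_locs and seg_map; this sketch only upsamples.
    return np.kron(lr_image, np.ones((scale, scale)))

def feature_network(hr_image, num_points=5):
    # Stand-in for the feature network: produce third feature location
    # information as (row, col) coordinates of landmark points.
    h, w = hr_image.shape
    ys = np.linspace(0, h - 1, num_points)
    xs = np.linspace(0, w - 1, num_points)
    return np.stack([ys, xs], axis=1)

lr = np.random.rand(16, 16)  # a reduced-resolution training sample
hr = residual_upscale(lr, feat_locs=None, seg_map=None, scale=4)
third_feature_locations = feature_network(hr)
```

The restored image and the predicted third feature locations are exactly the two quantities the update subunit 3083 compares against the first and second feature information when updating the model parameters.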
In some embodiments, the update subunit 3083 may include a first computation module, a second computation module, a determination module, an acquisition module, an update module, and so on, specifically as follows:
The first computation module is configured to use the prior estimation network in the model to be trained to calculate the error between the first feature location information and the second feature location information, obtaining a feature location error, and to calculate the error between the first segmentation region information and the second segmentation region information, obtaining a segmentation region error; the feature location error and the segmentation region error are set as a first feature error;
The second computation module is configured to use the feature network in the model to be trained to calculate the error between the first feature location information and the third feature location information, obtaining a second feature error;
The determination module is configured to use the residual network in the model to be trained to determine the image error between the resolution-converged training sample images and the original training sample images;
The acquisition module is configured to obtain gradient information according to the first feature error, the second feature error, and the image error;
The update module is configured to update the parameters of the model to be trained according to the gradient information, obtaining the image processing model.
For example, the first computation module can invoke the prior estimation network in the model to be trained, compare the first feature location information with the second feature location information through the prior estimation network to obtain the feature location error, and compare the first segmentation region information with the second segmentation region information to obtain the segmentation region error; the feature location error and the segmentation region error are set as the first feature error. That is, the first computation module can calculate the first feature error between the first feature information and the second feature information according to formula (1) above. Specifically, the first computation module can separately calculate the feature location error between the first feature location information and the second feature location information, and the segmentation region error between the first segmentation region information and the second segmentation region information; this feature location error and segmentation region error together constitute the first feature error.
The second computation module can invoke the feature network in the model to be trained and use this feature network to calculate, according to formula (2) above, the feature location error between the first feature location information and the third feature location information, obtaining the second feature error.
The determination module can invoke the residual network in the model to be trained and use the residual network to determine the image error between the resolution-converged training sample images and the original training sample images. This image error may include a pixel error, a discrimination error, an adversarial error, and so on. The pixel error can be the error between each pixel value of the resolution-converged training sample image and the original training sample image, and the adversarial error can be the error that the discrimination network generates against the residual network and the prior estimation network. The discrimination error can be the error of judging whether a resolution-converged training sample image or an original training sample image is real or fake. For example, for a training sample image to be discriminated, when the image is judged to be a resolution-converged training sample image, the label of that image is set to 0; when the image is judged to be an original training sample image, the label of that image is set to 1; the resulting label is then compared with the ground-truth value to obtain the discrimination error.
In some embodiments, the determination module is specifically configured to: use the residual network in the model to be trained to obtain the pixel error between the resolution-converged training sample images and the original training sample images; use the discrimination network in the model to be trained to discriminate between the resolution-converged training sample images and the original training sample images, obtaining the discrimination error and the adversarial error; and set the pixel error, the discrimination error, and the adversarial error as the image error.
For example, the determination module can calculate the pixel error according to formula (3) above, the discrimination error according to formula (4) above, and the adversarial error according to formula (5) above.
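The three components of the image error can be illustrated with simple stand-in definitions. Formulas (3) to (5) are not reproduced in this excerpt, so the forms below (mean squared error for pixels, binary cross-entropy for the discriminator) are assumptions chosen only to match the 0/1 labelling described above:

```python
import numpy as np

def pixel_error(restored, original):
    # Assumed stand-in for formula (3): mean per-pixel squared error
    # between the resolution-converged image and the original image.
    return float(np.mean((restored - original) ** 2))

def discrimination_error(d_restored, d_original, eps=1e-12):
    # Assumed stand-in for formula (4): the discriminator labels restored
    # images 0 and original images 1; binary cross-entropy vs. those targets.
    return float(-np.mean(np.log(1.0 - d_restored + eps))
                 - np.mean(np.log(d_original + eps)))

def adversarial_error(d_restored, eps=1e-12):
    # Assumed stand-in for formula (5): the generator side is penalised
    # when the discriminator scores its restored images as fake.
    return float(-np.mean(np.log(d_restored + eps)))
```

A perfect restoration drives the pixel error to zero, while the adversarial error falls as the discriminator is fooled into scoring restored images closer to 1.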
After each error is obtained, gradient information can be obtained according to the first feature error, the second feature error, and the image error. In some embodiments, the acquisition module is specifically configured to: construct a first loss function based on the first feature error, the second feature error, the pixel error, and the adversarial error; perform gradient descent on the first loss function to obtain first gradient information; construct a second loss function based on the discrimination error and perform gradient descent on the second loss function to obtain second gradient information; and set the first gradient information and the second gradient information as the gradient information.
Specifically, the acquisition module can construct the first loss function according to formula (6), based on the first feature error, the second feature error, and the pixel error and adversarial error in the image error. Gradient descent can then be performed on the first loss function so as to minimize it, obtaining the first gradient information. The manner of gradient descent can be flexibly set according to actual needs, and the specific details are not limited here.
The acquisition module can also construct the second loss function according to formula (7), based on the discrimination error in the image error, and perform gradient descent on the second loss function so as to minimize it, obtaining the second gradient information.
After the first gradient information and the second gradient information are obtained, the update module can update the parameters of the model to be trained according to the first gradient information and the second gradient information, adjusting the parameters or weights of the model to appropriate values to obtain the image processing model. The generation network (including the residual network and the prior estimation network) and the discrimination network in the model to be trained can be updated alternately.
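The loss construction and alternating update described above can be sketched as follows. The weighted-sum form of formula (6), the weights, and the even/odd scheduling are illustrative assumptions rather than the embodiment's exact definitions:

```python
import numpy as np

def first_loss(first_feat_err, second_feat_err, pix_err, adv_err,
               weights=(1.0, 1.0, 1.0, 1e-3)):
    # Assumed form of formula (6): a weighted sum of the first feature
    # error, second feature error, pixel error, and adversarial error.
    # The weights are illustrative, not taken from the patent.
    w = weights
    return (w[0] * first_feat_err + w[1] * second_feat_err
            + w[2] * pix_err + w[3] * adv_err)

def second_loss(disc_err):
    # Assumed form of formula (7): the discrimination error alone.
    return disc_err

def alternating_step(gen_params, disc_params, gen_grad, disc_grad,
                     lr=0.01, step=0):
    # The generation network (residual + prior estimation networks) and
    # the discrimination network are updated alternately: even-numbered
    # steps descend the first loss, odd-numbered steps the second loss.
    if step % 2 == 0:
        gen_params = gen_params - lr * gen_grad
    else:
        disc_params = disc_params - lr * disc_grad
    return gen_params, disc_params
```

This is the standard alternating scheme for adversarially trained models: only one side's parameters move on any given step.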
The extraction unit 302 is configured to perform feature extraction on the target object in the image to be processed, obtaining target feature location information.
In some embodiments, the extraction unit 302 is specifically configured to: perform feature extraction on the target object in the image to be processed using the image processing model, obtaining the target feature location information. Alternatively, the extraction unit 302 is specifically configured to: determine the object identifier and feature identifier of the target object, search for the target object in the image to be processed according to the object identifier, and, when the target object is found, extract the features of the target object and their positions according to the feature identifier, obtaining the target feature location information.
Specifically, after the image processing model is obtained, the resolution of an image can be converted by the image processing model. First, the extraction unit 302 invokes the prior estimation network in the image processing model and uses the prior estimation network to perform feature extraction on the target object in the image to be processed. For example, when the target object is a face, the prior estimation network can be used to extract the facial features of the face in the image to be processed, obtaining the feature location information of the eyes, eyebrows, nose, mouth, and so on. When the target object is a vehicle, the prior estimation network can be used to perform feature extraction on the vehicle in the image to be processed, obtaining the feature location information of the wheels, license plate, windows, logo, lights, mirrors, and so on.
Alternatively, the extraction unit 302 can obtain the target feature location information in other ways. For example, the extraction unit 302 can first determine the object identifier and feature identifier of the target object. There may be one or more object identifiers, each uniquely identifying a target object; there may be one or more feature identifiers, each uniquely identifying one or more features contained in the target object. The object identifier and feature identifier can be names or numbers composed of digits, text, and/or letters, or contour marks, or the like. The target object can then be searched for in the image to be processed according to the object identifier. When the target object is not found, there is no need to perform operations such as extracting the features of the target object and their positions; when the target object is found, the features of the target object and their positions can be extracted according to the feature identifier, obtaining the target feature location information.
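The identifier-based extraction path can be sketched as follows. The registry contents, the `detect_fn` callback, and the returned structure are hypothetical, illustrating only the search-then-extract control flow described above:

```python
# Hypothetical identifier registry: each object identifier maps to the
# feature identifiers of the features that object contains.
OBJECT_FEATURES = {
    "face": ["eyes", "eyebrows", "nose", "mouth", "contour"],
    "vehicle": ["wheels", "license_plate", "windows", "logo",
                "lights", "mirrors"],
}

def extract_target_features(image, object_id, detect_fn):
    # Search for the target object by its object identifier; if it is
    # not found, no extraction is performed at all.
    found, bbox = detect_fn(image, object_id)
    if not found:
        return None
    # Otherwise return one (possibly multi-point) location list per
    # feature identifier; a real extractor would localise each feature.
    feature_ids = OBJECT_FEATURES.get(object_id, [])
    return {fid: [(bbox[0], bbox[1])] for fid in feature_ids}

hit = extract_target_features("img", "face", lambda img, oid: (True, (10, 20)))
miss = extract_target_features("img", "face", lambda img, oid: (False, None))
```

When the detector reports no match, the function returns early with `None`, mirroring the "no need to extract" branch in the text.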
The segmentation unit 303 is configured to perform feature region segmentation on the target object, obtaining target segmentation region information.
In some embodiments, the segmentation unit 303 is specifically configured to: perform feature region segmentation on the target object using the image processing model, obtaining the target segmentation region information. Alternatively, the segmentation unit 303 is specifically configured to: determine the object identifier and feature identifier of the target object, search for the target object in the image to be processed according to the object identifier, and, when the target object is found, segment the features of the target object and their regions according to the feature identifier, obtaining the target segmentation region information.
Specifically, the segmentation unit 303 can invoke the prior estimation network in the image processing model and use the prior estimation network to perform feature region segmentation on the target object in the image to be processed. For example, when the target object is a face, the prior estimation network can be used to segment the facial feature regions of the face in the image to be processed, obtaining the segmentation region information of the eyes, eyebrows, nose, mouth, and so on. When the target object is a vehicle, the prior estimation network can be used to perform feature region segmentation on the vehicle in the image to be processed, obtaining the segmentation region information of the wheels, license plate, windows, logo, lights, mirrors, and so on. Alternatively, the segmentation unit 303 can obtain the target segmentation region information in other ways; for example, the image processing apparatus searches for the target object in the image to be processed according to the object identifier and, when the target object is found, segments the features of the target object and their regions according to the feature identifier, obtaining the target segmentation region information.
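A segmentation-region result of the kind described above can be illustrated as a dense label mask converted to per-feature region information. The integer label encoding is an assumption; the embodiment does not prescribe one:

```python
import numpy as np

# Illustrative integer codes for facial feature regions.
LABELS = {"hair": 1, "eyes": 2, "eyebrows": 3, "nose": 4,
          "mouth": 5, "face": 6}

def segmentation_region_info(mask):
    # Convert a dense label mask into per-feature segmentation region
    # information: a boolean region and its pixel count for each feature.
    return {name: {"region": mask == code,
                   "pixels": int(np.sum(mask == code))}
            for name, code in LABELS.items()}

mask = np.zeros((8, 8), dtype=int)
mask[2:4, 2:6] = LABELS["eyes"]      # an 8-pixel "eyes" region
info = segmentation_region_info(mask)
```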
The raising unit 304 is configured to use the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information.
The image processing model is trained from the feature location information and segmentation region information of preset objects in multiple training sample images. Since the image processing model is trained based on the feature location information and segmentation region information of preset objects in training sample images of different resolutions, the raising unit 304 can, through the image processing model, raise the original resolution of the image to be processed based on the target feature location information and target segmentation region information of the target object obtained from the image to be processed, where the fourth resolution is greater than the third resolution and the second resolution, that is, a low-resolution image is converted into a high-resolution image. The target object is similar to the preset object mentioned above; for example, it may include a face, a vehicle, or the like. The target feature location information is similar to the first feature location information described above, and the target segmentation region information is similar to the first segmentation region information described above.
In some embodiments, when the target object is a face, the target feature location information is facial feature location information, the target segmentation region information is facial feature segmentation region information, and the raising unit 304 is specifically configured to: use the preset image processing model to raise the original resolution of the image to be processed based on the facial feature location information and the facial feature segmentation region information.
The facial feature location information may include the location information of features such as the eyes, eyebrows, nose, mouth, and facial contour, where the location information of each feature may include the location information of multiple feature points; the facial feature segmentation region information may include the segmentation region information of the hair, eyes, eyebrows, nose, mouth, face, and so on.
The raising unit 304 can, through the image processing model, raise the original resolution of the image to be processed based on the facial feature location information and the facial feature segmentation region information, obtaining a processed image; that is, the original resolution of the image to be processed can be raised to a preset resolution value, which can be flexibly set according to actual needs. For example, as shown in Figure 5, pixels at a higher magnification can be recovered from each pixel in the low-resolution image, so that a high-resolution image is obtained and a super-resolution effect at a higher magnification is achieved. This can effectively improve the recovery of details such as facial features and contours, substantially improving the image quality, so that the display effect of the image is good.
In some embodiments, when the target object is a vehicle, the target feature location information is vehicle feature location information, the target segmentation region information is vehicle segmentation region information, and the raising unit 304 is specifically configured to: use the preset image processing model to raise the original resolution of the image to be processed based on the vehicle feature location information and the vehicle segmentation region information.
The vehicle feature location information may include the location information of vehicle features such as the wheels, license plate, windows, logo, lights, and mirrors; the vehicle segmentation region information may include the segmentation region information of vehicle features such as the wheels, license plate, windows, logo, lights, and mirrors. The raising unit 304 can, through the image processing model, raise the original resolution of the image to be processed based on the vehicle feature location information and the vehicle segmentation region information, obtaining a processed image. The original resolution of the image to be processed can be raised to a preset resolution value, which can be flexibly set according to actual needs.
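Putting the pieces together, the inference-time behaviour of the raising unit can be sketched as follows. `DummyModel` and its method names are stand-ins for the trained image processing model described above, not an actual API:

```python
import numpy as np

class DummyModel:
    # Stand-in for the trained image processing model, exposing the two
    # networks described above; names and signatures are assumptions.
    def prior_estimate_locations(self, img):
        return np.zeros((5, 2))                       # feature locations
    def prior_estimate_segmentation(self, img):
        return np.zeros(img.shape, dtype=int)         # segmentation regions
    def residual_raise(self, img, locs, seg, scale):
        return np.kron(img, np.ones((scale, scale)))  # raise resolution

def super_resolve(image, model, scale=4):
    # End-to-end flow of the raising unit: extract feature locations and
    # segmentation regions with the prior estimation network, then raise
    # the resolution with the residual network conditioned on both.
    feat_locs = model.prior_estimate_locations(image)
    seg_info = model.prior_estimate_segmentation(image)
    return model.residual_raise(image, feat_locs, seg_info, scale)

out = super_resolve(np.random.rand(16, 16), DummyModel(), scale=4)
```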
As can be seen from the above, in the embodiment of the present invention, when the resolution of an image needs to be raised, the first acquisition unit 301 can obtain the image to be processed; the extraction unit 302 performs feature extraction on the target object in the image to be processed, obtaining target feature location information; the segmentation unit 303 performs feature region segmentation on the target object in the image to be processed, obtaining target segmentation region information; and the raising unit 304 then uses the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, that is, converts the low-resolution image to be processed into a high-resolution image. Since this scheme can accurately raise the original resolution of the image to be processed based on the target feature location information and target segmentation region information of the target object in the image to be processed, the clarity of the processed image can be improved, improving the processed image quality.
The embodiment of the present invention also provides a network device, which may be a device such as a server or a terminal. As shown in Figure 11, which illustrates the structural schematic diagram of the network device involved in the embodiment of the present invention, specifically:
The network device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art can understand that the network device structure shown in Figure 11 does not constitute a limitation on the network device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Specifically:
The processor 401 is the control center of the network device, connecting the various parts of the whole network device through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, it executes the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
The memory 402 can be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required for at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created according to the use of the network device, and so on. In addition, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage component. Correspondingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The network device further includes a power supply 403 that supplies power to the various components. Preferably, the power supply 403 can be logically connected to the processor 401 through a power management system, so that functions such as managing charging, discharging, and power consumption are realized through the power management system. The power supply 403 may also include any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
The network device may also include an input unit 404, which can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the network device may also include a display unit and the like, which will not be described in detail here. Specifically, in this embodiment, the processor 401 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402 to realize various functions, as follows:
Obtain an image to be processed; perform feature extraction on the target object in the image to be processed to obtain target feature location information; perform feature region segmentation on the target object to obtain target segmentation region information; and use a preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being trained from the feature location information and segmentation region information of preset objects in multiple training sample images.
Optionally, the feature location information includes first feature location information and second feature location information, and the segmentation region information includes first segmentation region information and second segmentation region information. Before the step of using the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the method further includes:
Obtaining multiple training sample images, and determining the first feature location information and first segmentation region information of the preset object in each training sample image; reducing the original resolution of each training sample image to a preset value, obtaining reduced-resolution training sample images; obtaining the second feature location information and second segmentation region information of the preset object in each reduced-resolution training sample image; and training a preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information, obtaining the image processing model.
Optionally, the feature location information further includes third feature location information, and the step of training the preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information to obtain the image processing model includes:
Using the residual network in the model to be trained, based on the second feature location information and the second segmentation region information, to converge the resolution of each reduced-resolution training sample image to the original resolution of the training sample image, obtaining resolution-converged training sample images; using the feature network in the model to be trained to compute the third feature location information of the preset object in the resolution-converged training sample images; and updating the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information, and the third feature location information, obtaining the image processing model.
As can be seen from the above, when the resolution of an image needs to be raised, the embodiment of the present invention can obtain an image to be processed, perform feature extraction on the target object in the image to be processed to obtain target feature location information, and perform feature region segmentation on the target object in the image to be processed to obtain target segmentation region information; then use the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, that is, convert the low-resolution image to be processed into a high-resolution image. Since this scheme can accurately raise the original resolution of the image to be processed based on the target feature location information and target segmentation region information of the target object in the image to be processed, the clarity of the processed image can be improved, improving the processed image quality.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the detailed description of the image processing method above, which will not be repeated here.
Those skilled in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by instructions, or by instructions controlling relevant hardware, and the instructions can be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiment of the present invention provides a storage medium in which multiple instructions are stored, and the instructions can be loaded by a processor to execute the steps in any image processing method provided by the embodiment of the present invention. For example, the instructions can execute the following steps:
Obtain an image to be processed; perform feature extraction on the target object in the image to be processed to obtain target feature location information; perform feature region segmentation on the target object to obtain target segmentation region information; and use a preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the image processing model being trained from the feature location information and segmentation region information of preset objects in multiple training sample images.
Optionally, the feature location information includes first feature location information and second feature location information, and the segmentation region information includes first segmentation region information and second segmentation region information. Before the step of using the preset image processing model to raise the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the method further includes:
Obtaining multiple training sample images, and determining the first feature location information and first segmentation region information of the preset object in each training sample image; reducing the original resolution of each training sample image to a preset value, obtaining reduced-resolution training sample images; obtaining the second feature location information and second segmentation region information of the preset object in each reduced-resolution training sample image; and training a preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information, obtaining the image processing model.
Optionally, the feature location information further includes third feature location information, and the step of training the preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, and the second segmentation region information to obtain the image processing model includes:
Using the residual network in the model to be trained, based on the second feature location information and the second segmentation region information, to converge the resolution of each reduced-resolution training sample image to the original resolution of the training sample image, obtaining resolution-converged training sample images; using the feature network in the model to be trained to compute the third feature location information of the preset object in the resolution-converged training sample images; and updating the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information, and the third feature location information, obtaining the image processing model.
The specific implementation of each of the above operations can be found in the preceding embodiments and will not be repeated here.
The storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
Through the instructions stored in the storage medium, the steps in any image processing method provided by the embodiment of the present invention can be executed; therefore, the beneficial effects achievable by any image processing method provided by the embodiment of the present invention can be realized, as detailed in the preceding embodiments and not repeated here.
An image processing method, apparatus, and storage medium provided by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only used to help understand the method of the present invention and its core concept. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application according to the ideas of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.
Claims (15)
1. An image processing method, comprising:
obtaining an image to be processed;
performing feature extraction on a target object in the image to be processed, to obtain target feature location information;
performing feature region segmentation on the target object, to obtain target segmentation region information;
using a preset image processing model, increasing the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, wherein the image processing model is trained from feature location information and segmentation region information of a preset object in a plurality of training sample images.
2. The image processing method according to claim 1, wherein the feature location information comprises first feature location information and second feature location information, and the segmentation region information comprises first segmentation region information and second segmentation region information; before the step of using the preset image processing model to increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, the method further comprises:
obtaining a plurality of training sample images, and determining the first feature location information and the first segmentation region information of a preset object in each training sample image;
reducing the original resolution of each training sample image to a preset value, to obtain resolution-reduced training sample images;
obtaining the second feature location information and the second segmentation region information of the preset object in each resolution-reduced training sample image;
training a preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information and the second segmentation region information, to obtain the image processing model.
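The data preparation in claim 2 can be sketched in a few lines; the 2x factor, the average-pooling downscale, and the sample grid below are illustrative assumptions, since the patent fixes neither the preset resolution nor the downscaling method.

```python
# Illustrative preparation for claim 2: reduce each training image to a preset
# resolution and derive the low-res ("second") landmark coordinates from the
# original ("first") ones. Pure-Python 2x average pooling on a grayscale grid.

def downscale_2x(image):
    """Average-pool a 2D grid of pixel values by a factor of 2."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def scale_landmarks(landmarks, factor):
    """Map feature-point coordinates into the reduced-resolution frame."""
    return [(x * factor, y * factor) for (x, y) in landmarks]

image = [[0, 0, 8, 8],
         [0, 0, 8, 8],
         [4, 4, 4, 4],
         [4, 4, 4, 4]]
first_landmarks = [(2, 0)]                # e.g. an eye corner on the original
low = downscale_2x(image)                 # resolution-reduced training sample
second_landmarks = scale_landmarks(first_landmarks, 0.5)
print(low)                # [[0.0, 8.0], [4.0, 4.0]]
print(second_landmarks)   # [(1.0, 0.0)]
```

In practice the "second" information is predicted by the prior estimation network (claim 3) rather than derived geometrically; the geometric version here only shows the resolution bookkeeping.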
3. The image processing method according to claim 2, wherein the step of obtaining the second feature location information and the second segmentation region information of the preset object in each resolution-reduced training sample image comprises:
computing, using a prior estimation network in the model to be trained, the second feature location information and the second segmentation region information of the preset object in each resolution-reduced training sample image.
4. The image processing method according to claim 2, wherein the feature location information further comprises third feature location information, and the step of training the preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information and the second segmentation region information, to obtain the image processing model, comprises:
converging, using a residual network in the model to be trained and based on the second feature location information and the second segmentation region information, the resolution of each resolution-reduced training sample image toward the original resolution of the training sample image, to obtain resolution-converged training sample images;
computing, using a feature network in the model to be trained, the third feature location information of the preset object in the resolution-converged training sample images;
updating the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information and the third feature location information, to obtain the image processing model.
5. The image processing method according to claim 4, wherein the step of updating the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information and the third feature location information, to obtain the image processing model, comprises:
computing, using the prior estimation network in the model to be trained, an error between the first feature location information and the second feature location information to obtain a feature location error, computing an error between the first segmentation region information and the second segmentation region information to obtain a segmentation region error, and setting the feature location error and the segmentation region error as a first feature error;
computing, using the feature network in the model to be trained, an error between the first feature location information and the third feature location information, to obtain a second feature error;
determining, using the residual network in the model to be trained, an image error between the resolution-converged training sample image and the original training sample image;
obtaining gradient information according to the first feature error, the second feature error and the image error;
updating the parameters of the model to be trained according to the gradient information, to obtain the image processing model.
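The error terms of claim 5 can be sketched numerically. The concrete metrics (mean squared distance for landmarks, mean absolute difference for segmentation maps) and all values are assumptions; the claim only requires "an error between" the respective quantities, without fixing the metric.

```python
# Hedged sketch of claim 5's error terms on toy data.

def landmark_error(a, b):
    """Mean squared distance between two equally-sized landmark lists."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def segmentation_error(a, b):
    """Mean absolute pixel difference between two segmentation maps."""
    n = len(a) * len(a[0])
    return sum(abs(pa - pb) for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

first_landmarks  = [(1.0, 1.0), (4.0, 2.0)]   # ground truth on the original
second_landmarks = [(1.0, 2.0), (4.0, 2.0)]   # prior-estimation prediction
third_landmarks  = [(2.0, 1.0), (4.0, 4.0)]   # prediction on restored image

feature_location_error = landmark_error(first_landmarks, second_landmarks)
seg_error = segmentation_error([[1, 0], [0, 1]], [[1, 1], [0, 1]])
first_feature_error = feature_location_error + seg_error   # claim 5's grouping
second_feature_error = landmark_error(first_landmarks, third_landmarks)
print(first_feature_error, second_feature_error)  # 0.75 2.5
```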
6. The image processing method according to claim 5, wherein the step of determining, using the residual network in the model to be trained, the image error between the resolution-converged training sample image and the original training sample image comprises:
obtaining, using the residual network in the model to be trained, a pixel error between the resolution-converged training sample image and the original training sample image;
discriminating, using a discrimination network in the model to be trained, between the resolution-converged training sample image and the original training sample image, to obtain a discrimination error and an adversarial error;
setting the pixel error, the discrimination error and the adversarial error as the image error.
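The discrimination and adversarial errors of claim 6 follow the usual GAN pattern. The binary cross-entropy form and the discriminator scores below are assumptions for illustration; the claim itself does not fix the loss form.

```python
import math

# Sketch of claim 6's GAN-style errors: the discrimination error trains the
# discrimination network to tell restored images from originals, while the
# adversarial error pushes the residual network to fool it. Scores are made up.

def bce(score, target):
    """Binary cross-entropy for one probability score in (0, 1)."""
    return -(target * math.log(score) + (1 - target) * math.log(1 - score))

d_real = 0.9   # discriminator's score on the original training image
d_fake = 0.2   # discriminator's score on the resolution-converged image

# Discrimination error: the original should score 1, the restored image 0.
discrimination_error = bce(d_real, 1.0) + bce(d_fake, 0.0)
# Adversarial error: the restorer wants the restored image scored as real.
adversarial_error = bce(d_fake, 1.0)
print(round(discrimination_error, 4), round(adversarial_error, 4))
```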
7. The image processing method according to claim 6, wherein the step of obtaining gradient information according to the first feature error, the second feature error and the image error comprises:
constructing a first loss function based on the first feature error, the second feature error, the pixel error and the adversarial error;
performing gradient descent on the first loss function, to obtain first gradient information;
constructing a second loss function based on the discrimination error, and performing gradient descent on the second loss function, to obtain second gradient information;
setting the first gradient information and the second gradient information as the gradient information.
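Claim 7's two-loss gradient descent can be illustrated on toy scalar parameters. The quadratic stand-in losses, the learning rate, and the numerical differentiation are all invented for demonstration; the patent specifies only that each loss is minimized by gradient descent to yield its own gradient information.

```python
# Illustrative gradient-descent step for each of claim 7's two losses.

def gradient_step(loss_fn, param, lr=0.1, eps=1e-6):
    """One numerical gradient-descent step on a scalar parameter."""
    grad = (loss_fn(param + eps) - loss_fn(param - eps)) / (2 * eps)
    return param - lr * grad

# First loss: stands in for first feature error + second feature error +
# pixel error + adversarial error, as a toy function of the restorer's param.
first_loss = lambda w: (w - 3.0) ** 2          # minimized at w = 3
# Second loss: the discrimination error, as a toy function of the
# discrimination network's param.
second_loss = lambda v: (v + 1.0) ** 2         # minimized at v = -1

w, v = 0.0, 0.0
w = gradient_step(first_loss, w)    # first gradient information applied
v = gradient_step(second_loss, v)   # second gradient information applied
print(round(w, 6), round(v, 6))     # each moves toward its own minimum
```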
8. The image processing method according to any one of claims 1 to 7, wherein the step of performing feature extraction on the target object in the image to be processed to obtain the target feature location information comprises:
performing feature extraction on the target object in the image to be processed using the image processing model, to obtain the target feature location information; or,
determining an object identifier and a feature identifier of the target object, searching for the target object in the image to be processed according to the object identifier, and, when the target object is found, extracting the feature of the target object and its location according to the feature identifier, to obtain the target feature location information.
9. The image processing method according to any one of claims 1 to 7, wherein the step of performing feature region segmentation on the target object to obtain the target segmentation region information comprises:
performing feature region segmentation on the target object using the image processing model, to obtain the target segmentation region information; or,
determining an object identifier and a feature identifier of the target object, searching for the target object in the image to be processed according to the object identifier, and, when the target object is found, segmenting the region in which the feature of the target object is located according to the feature identifier, to obtain the target segmentation region information.
10. The image processing method according to any one of claims 1 to 7, wherein when the target object is a human face, the target feature location information is facial feature location information and the target segmentation region information is facial feature segmentation region information, and the step of using the preset image processing model to increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information comprises:
using the preset image processing model, increasing the original resolution of the image to be processed based on the facial feature location information and the facial feature segmentation region information.
11. The image processing method according to any one of claims 1 to 7, wherein when the target object is a vehicle, the target feature location information is vehicle feature location information and the target segmentation region information is vehicle segmentation region information, and the step of using the preset image processing model to increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information comprises:
using the preset image processing model, increasing the original resolution of the image to be processed based on the vehicle feature location information and the vehicle segmentation region information.
12. An image processing apparatus, comprising:
a first obtaining unit, configured to obtain an image to be processed;
an extraction unit, configured to perform feature extraction on a target object in the image to be processed, to obtain target feature location information;
a segmentation unit, configured to perform feature region segmentation on the target object, to obtain target segmentation region information;
an increasing unit, configured to use a preset image processing model to increase the original resolution of the image to be processed based on the target feature location information and the target segmentation region information, wherein the image processing model is trained from feature location information and segmentation region information of a preset object in a plurality of training sample images.
13. The image processing apparatus according to claim 12, wherein the feature location information comprises first feature location information and second feature location information, and the segmentation region information comprises first segmentation region information and second segmentation region information; the image processing apparatus further comprises:
a determination unit, configured to obtain a plurality of training sample images and determine the first feature location information and the first segmentation region information of a preset object in each training sample image;
a reducing unit, configured to reduce the original resolution of each training sample image to a preset value, to obtain resolution-reduced training sample images;
a second obtaining unit, configured to obtain the second feature location information and the second segmentation region information of the preset object in each resolution-reduced training sample image;
a training unit, configured to train a preset model to be trained according to the first feature location information, the first segmentation region information, the second feature location information and the second segmentation region information, to obtain the image processing model.
14. The image processing apparatus according to claim 13, wherein the feature location information further comprises third feature location information, and the training unit comprises:
a convergence subunit, configured to converge, using a residual network in the model to be trained and based on the second feature location information and the second segmentation region information, the resolution of each resolution-reduced training sample image toward the original resolution of the training sample image, to obtain resolution-converged training sample images;
a computation subunit, configured to compute, using a feature network in the model to be trained, the third feature location information of the preset object in the resolution-converged training sample images;
an updating subunit, configured to update the parameters of the model to be trained according to the first feature location information, the first segmentation region information, the second feature location information, the second segmentation region information and the third feature location information, to obtain the image processing model.
15. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor to perform the steps of the image processing method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810475606.9A CN108921782B (en) | 2018-05-17 | 2018-05-17 | Image processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108921782A true CN108921782A (en) | 2018-11-30 |
CN108921782B CN108921782B (en) | 2023-04-14 |
Family
ID=64403366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810475606.9A Active CN108921782B (en) | 2018-05-17 | 2018-05-17 | Image processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108921782B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170213339A1 (en) * | 2016-01-21 | 2017-07-27 | Impac Medical Systems, Inc. | Systems and methods for segmentation of intra-patient medical images |
CN107704857A (en) * | 2017-09-25 | 2018-02-16 | 北京邮电大学 | A kind of lightweight licence plate recognition method and device end to end |
WO2018054283A1 (en) * | 2016-09-23 | 2018-03-29 | 北京眼神科技有限公司 | Face model training method and device, and face authentication method and device |
CN107958246A (en) * | 2018-01-17 | 2018-04-24 | 深圳市唯特视科技有限公司 | A kind of image alignment method based on new end-to-end human face super-resolution network |
Non-Patent Citations (3)
Title |
---|
KARTHIKA GOPAN et al.: "Video Super Resolution with Generative Adversarial Network", 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI) * |
TAI YING et al.: "Image Super-Resolution via Deep Recursive Residual Network", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) * |
XU YIFENG: "A Review of Generative Adversarial Networks: Theoretical Models and Applications", Journal of Jinhua Polytechnic * |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008817A (en) * | 2019-01-29 | 2019-07-12 | 北京奇艺世纪科技有限公司 | Model training, image processing method, device, electronic equipment and computer readable storage medium |
CN110008817B (en) * | 2019-01-29 | 2021-12-28 | 北京奇艺世纪科技有限公司 | Model training method, image processing method, device, electronic equipment and computer readable storage medium |
CN110059652A (en) * | 2019-04-24 | 2019-07-26 | 腾讯科技(深圳)有限公司 | Face image processing process, device and storage medium |
CN110059652B (en) * | 2019-04-24 | 2023-07-25 | 腾讯科技(深圳)有限公司 | Face image processing method, device and storage medium |
CN110276352A (en) * | 2019-06-28 | 2019-09-24 | 拉扎斯网络科技(上海)有限公司 | Index identification method, device, electronic equipment and computer readable storage medium |
CN110335199A (en) * | 2019-07-17 | 2019-10-15 | 上海骏聿数码科技有限公司 | A kind of image processing method, device, electronic equipment and storage medium |
CN112241640A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Graphic code determination method and device and industrial camera |
CN110602484A (en) * | 2019-08-29 | 2019-12-20 | 海南电网有限责任公司海口供电局 | Online checking method for shooting quality of power transmission line equipment |
CN110602484B (en) * | 2019-08-29 | 2021-07-27 | 海南电网有限责任公司海口供电局 | Online checking method for shooting quality of power transmission line equipment |
CN110547210A (en) * | 2019-09-04 | 2019-12-10 | 北京海益同展信息科技有限公司 | feed supply method and system, computer system, and storage medium |
CN110675312A (en) * | 2019-09-24 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Image data processing method, image data processing device, computer equipment and storage medium |
CN110675312B (en) * | 2019-09-24 | 2023-08-29 | 腾讯科技(深圳)有限公司 | Image data processing method, device, computer equipment and storage medium |
CN111080515A (en) * | 2019-11-08 | 2020-04-28 | 北京迈格威科技有限公司 | Image processing method, neural network training method and device |
CN110889809A (en) * | 2019-11-28 | 2020-03-17 | RealMe重庆移动通信有限公司 | Image processing method and device, electronic device and storage medium |
CN113744130B (en) * | 2020-05-29 | 2023-12-26 | 武汉Tcl集团工业研究院有限公司 | Face image generation method, storage medium and terminal equipment |
CN113744130A (en) * | 2020-05-29 | 2021-12-03 | 武汉Tcl集团工业研究院有限公司 | Face image generation method, storage medium and terminal equipment |
CN111932555A (en) * | 2020-07-31 | 2020-11-13 | 商汤集团有限公司 | Image processing method and device and computer readable storage medium |
CN112215225B (en) * | 2020-10-22 | 2024-03-15 | 北京通付盾人工智能技术有限公司 | KYC certificate verification method based on computer vision technology |
CN112215225A (en) * | 2020-10-22 | 2021-01-12 | 北京通付盾人工智能技术有限公司 | KYC certificate verification method based on computer vision technology |
CN112381717A (en) * | 2020-11-18 | 2021-02-19 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, device, medium, and apparatus |
CN112418054A (en) * | 2020-11-18 | 2021-02-26 | 北京字跳网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
WO2022105779A1 (en) * | 2020-11-18 | 2022-05-27 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, and apparatus, medium, and device |
CN112945240B (en) * | 2021-03-16 | 2022-06-07 | 北京三快在线科技有限公司 | Method, device and equipment for determining positions of feature points and readable storage medium |
CN112945240A (en) * | 2021-03-16 | 2021-06-11 | 北京三快在线科技有限公司 | Method, device and equipment for determining positions of feature points and readable storage medium |
CN112926580B (en) * | 2021-03-29 | 2023-02-03 | 深圳市商汤科技有限公司 | Image positioning method and device, electronic equipment and storage medium |
CN112926580A (en) * | 2021-03-29 | 2021-06-08 | 深圳市商汤科技有限公司 | Image positioning method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108921782B (en) | 2023-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921782A (en) | A kind of image processing method, device and storage medium | |
US11790589B1 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
CN110363183B (en) | Service robot visual image privacy protection method based on generating type countermeasure network | |
CN108234870A (en) | Image processing method, device, terminal and storage medium | |
CN110111418A (en) | Create the method, apparatus and electronic equipment of facial model | |
WO2017092196A1 (en) | Method and apparatus for generating three-dimensional animation | |
CN109063584B (en) | Facial feature point positioning method, device, equipment and medium based on cascade regression | |
CN110464633A (en) | Acupuncture point recognition methods, device, equipment and storage medium | |
US11282257B2 (en) | Pose selection and animation of characters using video data and training techniques | |
CN108629339A (en) | Image processing method and related product | |
KR102043626B1 (en) | Deep learning-based virtual plastic surgery device for providing virtual plastic surgery image customers by analyzing big data on before and after image of plurality of person who has experience of a plastic surgery | |
CN108829233B (en) | Interaction method and device | |
CN108460398A (en) | Image processing method, device, cloud processing equipment and computer program product | |
CN105426882B (en) | The method of human eye is quickly positioned in a kind of facial image | |
CN106651978A (en) | Face image prediction method and system | |
CN102567716A (en) | Face synthetic system and implementation method | |
CN111723687A (en) | Human body action recognition method and device based on neural network | |
CN108564120A (en) | Feature Points Extraction based on deep neural network | |
CN109472795A (en) | A kind of image edit method and device | |
CN109446952A (en) | A kind of piano measure of supervision, device, computer equipment and storage medium | |
CN111222379A (en) | Hand detection method and device | |
WO2024103890A1 (en) | Model construction method and apparatus, reconstruction method and apparatus, and electronic device and non-volatile readable storage medium | |
CN104978583B (en) | The recognition methods of figure action and device | |
CN106778576A (en) | A kind of action identification method based on SEHM feature graphic sequences | |
US11361467B2 (en) | Pose selection and animation of characters using video data and training techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||