CN113177881B - Processing method and device for improving definition of picture - Google Patents

Processing method and device for improving definition of picture

Info

Publication number
CN113177881B
CN113177881B (application CN202110466148.4A)
Authority
CN
China
Prior art keywords
image
face
super
resolution
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110466148.4A
Other languages
Chinese (zh)
Other versions
CN113177881A (en)
Inventor
林青山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guangzhuiyuan Information Technology Co ltd
Original Assignee
Guangzhou Guangzhuiyuan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guangzhuiyuan Information Technology Co ltd filed Critical Guangzhou Guangzhuiyuan Information Technology Co ltd
Priority to CN202110466148.4A priority Critical patent/CN113177881B/en
Publication of CN113177881A publication Critical patent/CN113177881A/en
Application granted granted Critical
Publication of CN113177881B publication Critical patent/CN113177881B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a processing method and device for improving the definition of a picture, comprising: acquiring an original image and performing noise reduction on it to obtain a noise-reduced image; slicing the noise-reduced image into a plurality of image slices, processing all the image slices with a super-resolution model to obtain super-resolution enhanced image slices, and stitching the enhanced slices to obtain a noise-reduced super-resolution image; performing face recognition on the original image with a face recognition model, and outputting a super-resolution image according to the recognition result; and applying filter processing to the super-resolution image according to preset filter parameters, and outputting a final image. In the application, the image first undergoes a series of preprocessing steps such as noise reduction and face correction, the preprocessed image is then fed into deep learning models for processing, and the model output then undergoes a series of filter post-processing steps such as structure and atmosphere adjustments, thereby improving the definition of the picture.

Description

Processing method and device for improving definition of picture
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a processing method and device for improving definition of pictures.
Background
In current picture-editing applications on the market that improve picture definition, a user can input a picture to have its definition improved. However, among existing picture-editing applications, some upload the picture to a server to improve its definition; this approach requires the user to be online, wastes bandwidth, puts heavy pressure on the server, and makes the user wait a long time. Some use traditional processing methods for general-purpose definition improvement; the effect is mediocre, and the effect on faces in particular is not ideal. Some improve definition with a deep learning model, but because the computing power of mobile devices is limited, the deep learning model is generally small, and the improvement obtainable from the deep learning model alone is very limited. Therefore, the related art cannot meet users' editing requirement of improving the definition of photos.
Disclosure of Invention
In view of the above, the present application aims to overcome the defects of the prior art and to provide a processing method and apparatus for improving the definition of a picture, so as to solve the problem that the prior art cannot meet users' editing requirement of improving picture definition.
In order to achieve the above purpose, the application adopts the following technical scheme: a processing method for improving the definition of a picture comprises the following steps:
acquiring an original image and carrying out noise reduction treatment on the original image to obtain a noise-reduced image;
slicing the noise-reduced image to obtain a plurality of image slices, processing all the image slices by adopting a super-resolution model to obtain super-resolution enhanced image slices, and splicing all the image slices to obtain a noise-reduced super-resolution image;
adopting a face recognition model to carry out face recognition on the original image, and outputting a super-resolution image according to a face recognition result;
and performing filter processing on the super-resolution image according to preset filter parameters, and outputting a final image.
Further, the outputting the super-resolution image according to the face recognition result includes:
if the face is recognized, face data are obtained, the face data are preprocessed to obtain corrected face images, noise reduction processing is carried out on the corrected face images, and noise reduction corrected face images are obtained;
processing the noise reduction correction face image by adopting a face super-resolution model to obtain a corrected face super-resolution image with improved definition;
dividing the corrected face super-resolution image by adopting a portrait segmentation model to obtain a corrected portrait mask image, processing the corrected portrait mask image to obtain a feathered corrected portrait mask image, and respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image;
taking the pixel values in the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image, and outputting a super-resolution image;
and if the face cannot be identified, determining the noise reduction super-resolution image as a super-resolution image.
Further, the performing noise reduction processing on the original image to obtain a noise-reduced image includes:
creating an OpenGL running environment in an OpenGL device;
obtaining an original image texture by using the OpenGL running environment and the original image;
and carrying out noise reduction treatment on the original image texture to obtain a noise reduction image.
Further, the slicing processing is performed on the noise reduction image to obtain a plurality of image slices, including:
cutting the noise-reduced image according to a preset size to obtain a plurality of image slices of the preset size to be super-resolution enhanced;
and stretching and filling image slices smaller than the preset size up to the preset size.
Further, the preprocessing the face data to obtain a corrected face image includes:
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the face data, and recording it as the face midpoint;
calculating the anticlockwise angle of the line from the left-eye position to the right-eye position in the face data relative to the horizontal, and recording it as the face rotation angle;
rotating the original image about the face midpoint by the face rotation angle to obtain a rotated image, and obtaining the positions of the left eye, the right eye, the left mouth corner and the right mouth corner in the rotated image;
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the rotated image, and recording it as the eye midpoint of the rotated face;
calculating the midpoint of the line connecting the left mouth corner position and the right mouth corner position in the rotated image, and recording it as the mouth midpoint of the rotated face;
calculating the distance from the eye midpoint of the rotated face to the mouth midpoint of the rotated face, and recording it as the face cropping size;
subtracting a preset multiple of the face cropping size from the abscissa of the eye midpoint of the rotated face to obtain the left cropping margin;
adding a preset multiple of the face cropping size to the abscissa of the eye midpoint of the rotated face to obtain the right cropping margin;
subtracting a preset multiple of the face cropping size from the ordinate of the eye midpoint of the rotated face to obtain the top cropping margin;
adding a preset multiple of the face cropping size to the ordinate of the eye midpoint of the rotated face to obtain the bottom cropping margin;
and cropping the rotated image according to the left, right, top and bottom cropping margins to obtain a corrected face image.
Further, the processing the corrected portrait mask image to obtain a feathered corrected portrait mask image includes:
performing erosion, dilation and blurring operations on the corrected portrait mask image to obtain a feathered corrected portrait mask image.
Further, the respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image includes:
rotating the corrected face super-resolution image by the negative of the face rotation angle to obtain an original face image with improved definition;
and rotating the feathered corrected portrait mask image by the negative of the face rotation angle to obtain all feathered original portrait mask images.
Further, the fusing the original face image with the noise-reduced super-resolution image, with the pixel values in the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, and outputting a super-resolution image, includes:
acquiring the pixel values of the feathered original portrait mask image, the pixel values of the corrected face super-resolution image and the pixel values of the noise-reduced super-resolution image;
fusing the original face image with the noise-reduced super-resolution image by taking the pixel values of the feathered original portrait mask image as an adjustment parameter, the pixel values of the corrected face super-resolution image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color;
and outputting the super-resolution image.
Further, the filtering processing is performed on the super-resolution image according to preset filter parameters, and a final image is output, including:
acquiring preset exposure parameters, contrast parameters, atmosphere parameters, structure parameters and sharpening parameters;
acquiring a super-resolution image texture by using the OpenGL running environment;
and processing the super-resolution image texture by adopting preset exposure parameters, contrast parameters, atmosphere parameters, structural parameters and sharpening parameters to obtain a final image.
The embodiment of the application provides a device for improving the definition of a picture, which comprises:
the acquisition module is used for acquiring an original image and carrying out noise reduction treatment on the original image to obtain a noise reduction image;
the processing module is used for carrying out slice processing on the noise reduction image to obtain a plurality of image slices, adopting a super-resolution model to process all the image slices to obtain super-resolution enhanced image slices, and splicing all the image slices to obtain a noise reduction super-resolution image;
the recognition module is used for recognizing the face of the original image by adopting a face recognition model and outputting a super-resolution image according to the face recognition result;
and the output module is used for carrying out filter processing on the super-resolution image according to preset filter parameters and outputting a final image.
By adopting the technical scheme, the application has the following beneficial effects:
the application provides a processing method and a processing device for improving the definition of a picture, which are used for improving the definition of the picture by carrying out a series of preprocessing such as noise reduction and face correction on the picture, inputting the preprocessed image into a deep learning model for processing, and carrying out a series of filter post-processing such as structure, atmosphere and the like on the result output by the deep learning model. The application combines the conventional definition improvement and the human face definition improvement for the photo, and provides various filter effects for post-treatment so as to achieve the satisfactory definition improvement effect of the photo background and the human face.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram illustrating steps of a processing method for enhancing image clarity according to the present application;
FIG. 2 is a flow chart of a processing method for improving the definition of a picture according to the present application;
FIG. 3 is a schematic view of an image slice provided by the present application;
fig. 4 is a schematic structural diagram of a device for improving image clarity according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It will be apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the protection scope of the present application.
The following describes a specific processing method for improving the definition of a picture according to an embodiment of the present application with reference to the accompanying drawings.
As shown in fig. 1, the processing method for improving the definition of a picture provided in the embodiment of the present application includes:
s101, acquiring an original image and carrying out noise reduction treatment on the original image to obtain a noise reduction image;
s102, carrying out slice processing on the noise reduction image to obtain a plurality of image slices, adopting a super-resolution model to process all the image slices to obtain super-resolution enhanced image slices, and splicing all the image slices to obtain a noise reduction super-resolution image;
s103, adopting a face recognition model to carry out face recognition on the original image, and outputting a super-resolution image according to a face recognition result;
s104, performing filter processing on the super-resolution image according to preset filter parameters, and outputting a final image.
It can be understood that the application can be realized by loading a plurality of models onto the mobile client; that is, the super-resolution model, the face recognition model, the face super-resolution model and the portrait segmentation model are all loaded onto the mobile client, and the original image in the application is either captured by the mobile client or selected from the mobile client's photo album. It should be noted that the super-resolution model, the face recognition model, the face super-resolution model and the portrait segmentation model in the present application are all general-purpose models, which are not described in detail here.
In some embodiments, the outputting the super-resolution image according to the face recognition result includes:
if the face is recognized, face data are obtained, the face data are preprocessed to obtain corrected face images, noise reduction processing is carried out on the corrected face images, and noise reduction corrected face images are obtained;
processing the noise reduction correction face image by adopting a face super-resolution model to obtain a corrected face super-resolution image with improved definition;
dividing the corrected face super-resolution image by adopting a portrait segmentation model to obtain a corrected portrait mask image, processing the corrected portrait mask image to obtain a feathered corrected portrait mask image, and respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image;
taking the pixel values in the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image, and outputting a super-resolution image;
and if the face cannot be identified, determining the noise reduction super-resolution image as a super-resolution image.
The working principle of the processing method for improving picture definition is as follows: referring to fig. 2, the mobile client obtains an original image and performs noise reduction on it to obtain a noise-reduced image; the noise-reduced image is sliced into a plurality of image slices, all the image slices are processed with a super-resolution model to obtain super-resolution enhanced image slices, and the enhanced slices are stitched to obtain a noise-reduced super-resolution image; face recognition is performed on the original image with a face recognition model, and when a face is recognized, face data are obtained, the face data are preprocessed to obtain a corrected face image, and noise reduction is performed on the corrected face image to obtain a noise-reduced corrected face image; the noise-reduced corrected face image is processed with a face super-resolution model to obtain a corrected face super-resolution image with improved definition; the corrected face super-resolution image is segmented with a portrait segmentation model to obtain a corrected portrait mask image, the corrected portrait mask image is processed to obtain a feathered corrected portrait mask image, and rotation operations are respectively performed on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image; the pixel values of the output super-resolution image are computed with the pixel values in the feathered original portrait mask image as weights, the original face image is fused with the noise-reduced super-resolution image according to these pixel values, and a super-resolution image is output; and filter processing is applied to the super-resolution image according to preset filter parameters, and a final image is output.
Specifically, the original image is input into the face recognition model for recognition; if no face is recognized, the noise-reduced super-resolution image is taken as the final super-resolution image.
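To make the overall flow easier to follow, the sketch below strings the steps described above into a single pipeline. It is a minimal illustration in Python and not the patented implementation: the helper names (denoise, tile_super_resolve, detect_faces, enhance_face, blend_face, apply_filters) are hypothetical placeholders for the models and shader passes described in this application.

```python
def improve_definition(original_bgr, filter_params,
                       denoise, tile_super_resolve,
                       detect_faces, enhance_face,
                       blend_face, apply_filters):
    """Hypothetical orchestration of the steps described in this application."""
    # Step S101: noise reduction on the original image.
    denoised = denoise(original_bgr)

    # Step S102: slice, super-resolve each slice, and stitch the result.
    sr_image = tile_super_resolve(denoised)

    # Step S103: face recognition on the *original* image; fuse each
    # enhanced face back into the noise-reduced super-resolution image.
    faces = detect_faces(original_bgr)
    for face_data in faces:
        face_img, face_mask = enhance_face(original_bgr, face_data)
        sr_image = blend_face(sr_image, face_img, face_mask)

    # Step S104: filter post-processing (exposure, contrast, atmosphere,
    # structure, sharpening) with preset parameters.
    return apply_filters(sr_image, filter_params)
```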
In some embodiments, the performing noise reduction processing on the original image to obtain a noise-reduced image includes:
creating an OpenGL running environment in an OpenGL device;
obtaining an original image texture by using the OpenGL running environment and the original image;
and carrying out noise reduction treatment on the original image texture to obtain a noise reduction image.
Specifically, the original image is acquired by the mobile client. An OpenGL running environment is created on the mobile client, the original image is input into the OpenGL running environment to obtain an original image texture, a fast noise-reduction shader script is loaded, the original image texture is supplied as input, and the OpenGL program is executed to obtain a noise-reduced image. The fast noise-reduction shader performs a denoising operation on the input texture through a denoising function; the specific denoising function can be implemented with any suitable existing technique as required, and the application is not limited in this respect.
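Since the denoising function is left open, the following is only a minimal CPU stand-in for the GPU shader pass, assuming an OpenCV edge-preserving filter; the function name and parameter values are illustrative and not taken from the patent.

```python
import cv2

def fast_denoise(image_bgr, sigma_color=25, sigma_space=7):
    """CPU stand-in for the 'fast noise-reduction shader' pass.

    A bilateral filter smooths noise while keeping edges, which is the
    behaviour the shader-based step is described as providing. The exact
    filter and parameter values are assumptions for illustration only.
    """
    return cv2.bilateralFilter(image_bgr, d=0,
                               sigmaColor=sigma_color,
                               sigmaSpace=sigma_space)
```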
In some embodiments, the slicing the noise-reduced image to obtain a plurality of image slices includes:
cutting the noise-reduced image according to a preset size to obtain a plurality of image slices of the preset size to be super-resolution enhanced;
and stretching and filling image slices smaller than the preset size up to the preset size.
As shown in fig. 3, a general super-resolution model is loaded onto the mobile client; the super-resolution model receives an input image of 128x128 pixels and outputs an image of 256x256 pixels with improved definition. The super-resolution model is a mobile-oriented model that can be stored on the mobile client and can run within the performance constraints of common mobile clients. The structure and training method of the super-resolution model can adopt any suitable implementation as required.
In the application, the noise-reduced image is cut into 128x128-pixel tiles, and parts at the image edge shorter than 128 pixels are filled by stretching, giving a set of 128x128 noise-reduced image slices. All the noise-reduced image slices are input into the general super-resolution model to obtain a set of super-resolution enhanced image slices, and all the super-resolution enhanced image slices are stitched back together to obtain a noise-reduced super-resolution image. A sketch of this tiling and stitching step is given below.
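A minimal sketch of the 128x128 tiling, the stretch-filling of edge tiles, per-tile super-resolution and re-stitching, assuming a callable sr_model that maps a 128x128 BGR tile to a 256x256 tile (the model itself is not specified by the patent). Resizing stretched edge tiles back to their true footprint before stitching is an assumption for illustration.

```python
import cv2
import numpy as np

TILE, SCALE = 128, 2  # input tile size and model upscale factor

def tile_super_resolve(denoised_bgr, sr_model):
    """Slice into 128x128 tiles, super-resolve each tile, stitch at 2x size."""
    h, w = denoised_bgr.shape[:2]
    out = np.zeros((h * SCALE, w * SCALE, 3), dtype=np.uint8)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = denoised_bgr[y:y + TILE, x:x + TILE]
            th, tw = tile.shape[:2]
            if th < TILE or tw < TILE:
                # Edge tile: stretch-fill up to 128x128 as described.
                tile = cv2.resize(tile, (TILE, TILE), interpolation=cv2.INTER_LINEAR)
            sr_tile = sr_model(tile)  # assumed to return a 256x256 tile
            # Resize back so edge tiles land in their true footprint.
            sr_tile = cv2.resize(sr_tile, (tw * SCALE, th * SCALE),
                                 interpolation=cv2.INTER_LINEAR)
            out[y * SCALE:(y + th) * SCALE, x * SCALE:(x + tw) * SCALE] = sr_tile
    return out
```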
In some embodiments, the preprocessing the face data to obtain a corrected face image includes:
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the face data, and recording it as the face midpoint;
calculating the anticlockwise angle of the line from the left-eye position to the right-eye position in the face data relative to the horizontal, and recording it as the face rotation angle;
rotating the original image about the face midpoint by the face rotation angle to obtain a rotated image, and obtaining the positions of the left eye, the right eye, the left mouth corner and the right mouth corner in the rotated image;
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the rotated image, and recording it as the eye midpoint of the rotated face;
calculating the midpoint of the line connecting the left mouth corner position and the right mouth corner position in the rotated image, and recording it as the mouth midpoint of the rotated face;
calculating the distance from the eye midpoint of the rotated face to the mouth midpoint of the rotated face, and recording it as the face cropping size;
subtracting a preset multiple of the face cropping size from the abscissa of the eye midpoint of the rotated face to obtain the left cropping margin;
adding a preset multiple of the face cropping size to the abscissa of the eye midpoint of the rotated face to obtain the right cropping margin;
subtracting a preset multiple of the face cropping size from the ordinate of the eye midpoint of the rotated face to obtain the top cropping margin;
adding a preset multiple of the face cropping size to the ordinate of the eye midpoint of the rotated face to obtain the bottom cropping margin;
and cropping the rotated image according to the left, right, top and bottom cropping margins to obtain a corrected face image.
Specifically, a face recognition model is loaded onto the mobile client; the face recognition model receives an input image and outputs the face data in it, the face data comprising the positions of the face frame, left eye, right eye, nose tip, left mouth corner and right mouth corner. Any suitable face recognition model may be adopted as required. The original image is input into the face recognition model for recognition; if no face is recognized, the noise-reduced super-resolution image is taken as the final super-resolution image. If the face recognition model recognizes a face, all the face data returned by the face recognition model are recorded, and the face preprocessing operation is executed on all the face data. The face preprocessing operation comprises the following steps (a code sketch of these steps follows the list):
(1) Calculate the midpoint eyeCenter of the line connecting the left-eye position leftEye and the right-eye position rightEye in the face data, and record it as the face midpoint; the specific formulas are eyeCenter.x = (leftEye.x + rightEye.x) / 2 and eyeCenter.y = (leftEye.y + rightEye.y) / 2;
(2) Calculate the anticlockwise angle of the line from the left-eye position leftEye to the right-eye position rightEye relative to the horizontal, and record it as the face rotation angle; the specific formula is angle = fastAtan2(rightEye.y - leftEye.y, rightEye.x - leftEye.x), where fastAtan2 is a built-in function of the OpenCV library;
(3) Rotate the original image about the face midpoint by the face rotation angle to obtain a rotated image, and record the positions of the left eye, right eye, left mouth corner and right mouth corner in the rotated image;
(4) Calculate the midpoint of the line connecting the left-eye position rotLeftEye and the right-eye position rotRightEye in the rotated image, and record it as the eye midpoint rotEyeCenter of the rotated face; the specific formulas are rotEyeCenter.x = (rotLeftEye.x + rotRightEye.x) / 2 and rotEyeCenter.y = (rotLeftEye.y + rotRightEye.y) / 2;
(5) Calculate the midpoint of the line connecting the left mouth corner position rotLeftMouth and the right mouth corner position rotRightMouth in the rotated image, and record it as the mouth midpoint rotMouthCenter of the rotated face; the specific formulas are rotMouthCenter.x = (rotLeftMouth.x + rotRightMouth.x) / 2 and rotMouthCenter.y = (rotLeftMouth.y + rotRightMouth.y) / 2;
(6) Calculate the distance from the eye midpoint of the rotated face to the mouth midpoint of the rotated face, and record it as the face cropping size cropUnit; the specific formula is cropUnit = abs(rotMouthCenter.y - rotEyeCenter.y), where abs is a built-in function of the OpenCV library;
(7) Subtract a preset multiple of the face cropping size from the x coordinate of the eye midpoint of the rotated face to obtain the left cropping margin;
(8) Add a preset multiple of the face cropping size to the x coordinate of the eye midpoint of the rotated face to obtain the right cropping margin;
(9) Subtract a preset multiple of the face cropping size from the y coordinate of the eye midpoint of the rotated face to obtain the top cropping margin;
(10) Add a preset multiple of the face cropping size to the y coordinate of the eye midpoint of the rotated face to obtain the bottom cropping margin;
(11) Crop the rotated image according to the left, right, top and bottom cropping margins to obtain a corrected face image.
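The following Python sketch implements steps (1) through (11) with OpenCV, assuming the landmarks are given as (x, y) tuples and using a preset multiple of 1.5 as an illustrative value (the patent only speaks of a "preset multiple"); sign conventions may need adjusting to your coordinate system.

```python
import cv2
import numpy as np

def correct_face(original_bgr, left_eye, right_eye, left_mouth, right_mouth,
                 multiple=1.5):
    """Rotate the face upright and crop it, following steps (1)-(11)."""
    # (1)-(2) face midpoint and anticlockwise rotation angle in degrees.
    eye_center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    angle = cv2.fastAtan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])

    # (3) rotate the image about the face midpoint and map the landmarks.
    h, w = original_bgr.shape[:2]
    rot = cv2.getRotationMatrix2D(eye_center, angle, 1.0)
    rotated = cv2.warpAffine(original_bgr, rot, (w, h))
    def warp(p):
        return rot @ np.array([p[0], p[1], 1.0])
    r_le, r_re = warp(left_eye), warp(right_eye)
    r_lm, r_rm = warp(left_mouth), warp(right_mouth)

    # (4)-(6) eye midpoint, mouth midpoint and face cropping size.
    rot_eye_center = (r_le + r_re) / 2
    rot_mouth_center = (r_lm + r_rm) / 2
    crop_unit = abs(rot_mouth_center[1] - rot_eye_center[1])

    # (7)-(11) cropping margins around the eye midpoint.
    left = int(rot_eye_center[0] - multiple * crop_unit)
    right = int(rot_eye_center[0] + multiple * crop_unit)
    top = int(rot_eye_center[1] - multiple * crop_unit)
    bottom = int(rot_eye_center[1] + multiple * crop_unit)
    left, top = max(left, 0), max(top, 0)
    return rotated[top:bottom, left:right], angle
```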
After the corrected face image is obtained, a noise reduction algorithm based on discrete cosine transform is used for carrying out noise reduction operation on the corrected face image, so that the noise reduction corrected face image is obtained.
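The patent does not fix a particular DCT-based denoiser; the sketch below is one simple possibility, soft-thresholding the global DCT coefficients of each channel (the threshold value and the global, rather than block-wise, transform are illustrative assumptions).

```python
import cv2
import numpy as np

def dct_denoise(image_bgr, threshold=12.0):
    """Simple DCT-domain denoiser: damp small coefficients in each channel.

    cv2.dct requires even-sized inputs, so the image is padded if needed.
    """
    h, w = image_bgr.shape[:2]
    padded = cv2.copyMakeBorder(image_bgr, 0, h % 2, 0, w % 2, cv2.BORDER_REPLICATE)
    channels = []
    for c in cv2.split(padded):
        coeffs = cv2.dct(np.float32(c))
        # Soft-threshold: shrink every coefficient toward zero by `threshold`.
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
        channels.append(np.clip(cv2.idct(coeffs), 0, 255).astype(np.uint8))
    return cv2.merge(channels)[:h, :w]
```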
After the noise-reduced corrected face image is obtained, the face super-resolution model on the mobile client is used; the face super-resolution model receives a face image and outputs a face super-resolution image with improved definition. Any suitable implementation of the face super-resolution model may be adopted as required. All the noise-reduced corrected face images are input into the face super-resolution model to obtain all the corrected face super-resolution images with improved definition.
In some embodiments, the processing the corrected portrait mask image to obtain a feathered corrected portrait mask image includes:
performing erosion, dilation and blurring operations on the corrected portrait mask image to obtain a feathered corrected portrait mask image.
Preferably, the respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image includes:
rotating the corrected face super-resolution image by the negative of the face rotation angle to obtain an original face image with improved definition;
and rotating the feathered corrected portrait mask image by the negative of the face rotation angle to obtain all feathered original portrait mask images.
Specifically, a portrait segmentation model is loaded onto the mobile client; the portrait segmentation model receives an input image and outputs a portrait mask image whose pixel values range from 0 to 1, each pixel value representing the confidence that the pixel belongs to the portrait. Any suitable portrait segmentation model may be adopted as required. All the corrected face super-resolution images with improved definition are input into the portrait segmentation model to obtain all the corrected portrait mask images. Erosion, dilation and blurring operations are performed on all the corrected portrait mask images to obtain all the feathered corrected portrait mask images. All the feathered corrected portrait mask images are rotated by the negative of the face rotation angle to obtain all the feathered original portrait mask images, and all the corrected face super-resolution images with improved definition are rotated by the negative of the face rotation angle to obtain all the original face images with improved definition. A sketch of the feathering and de-rotation step follows.
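A minimal OpenCV sketch of the feathering (erosion, dilation, blur) and of the de-rotation by the negative face rotation angle; the kernel and blur sizes are illustrative assumptions, and mapping the cropped face back to its position in the original image is omitted for brevity.

```python
import cv2
import numpy as np

def feather_mask(mask01, kernel_size=9, blur_size=21):
    """Erode, dilate and blur a 0..1 portrait mask to feather its edges."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    m = cv2.erode(mask01, kernel)
    m = cv2.dilate(m, kernel)
    return cv2.GaussianBlur(m, (blur_size, blur_size), 0)

def derotate(image, angle_deg, center):
    """Rotate back by the negative of the face rotation angle."""
    rot = cv2.getRotationMatrix2D(center, -angle_deg, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rot, (w, h))
```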
In some embodiments, the computing the pixel values of the output super-resolution image with the pixel values of the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image according to these pixel values, and outputting the super-resolution image, includes:
acquiring the pixel values of the feathered original portrait mask image, the pixel values of the corrected face super-resolution image and the pixel values of the noise-reduced super-resolution image;
fusing the original face image with the noise-reduced super-resolution image by taking the pixel values of the feathered original portrait mask image as an adjustment parameter, the pixel values of the corrected face super-resolution image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color;
and outputting the super-resolution image.
Specifically, the pixel values in all the feathered original portrait mask images are taken as the adjustment parameter a, the pixel values of all the original face images with improved definition are taken as the blend color s, and the pixel values of the noise-reduced super-resolution image are taken as the base color b, and normal-mode blending is performed to obtain the final super-resolution image. Letting the pixel value of the final super-resolution image be r, r can be calculated by the normal-mode blending formula r = s * a + b * (1 - a).
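The normal-mode blend r = s * a + b * (1 - a) in array form, as a quick numpy sketch; the three inputs are assumed to be aligned and of the same spatial size.

```python
import numpy as np

def blend_normal(face_sr, base_sr, mask01):
    """r = s*a + b*(1-a): paste the enhanced face into the super-resolved image."""
    a = mask01.astype(np.float32)
    if a.ndim == 2:                      # broadcast a single-channel mask
        a = a[..., None]
    s = face_sr.astype(np.float32)
    b = base_sr.astype(np.float32)
    r = s * a + b * (1.0 - a)
    return np.clip(r, 0, 255).astype(np.uint8)
```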
In some embodiments, the filtering the super-resolution image according to preset filter parameters to output a final image includes:
acquiring preset exposure parameters, contrast parameters, atmosphere parameters, structure parameters and sharpening parameters;
acquiring a super-resolution image texture by using the OpenGL running environment;
and processing the super-resolution image texture by adopting preset exposure parameters, contrast parameters, atmosphere parameters, structural parameters and sharpening parameters to obtain a final image.
Specifically, the filter parameters, such as exposure, contrast, atmosphere, structure and sharpening, input by the user can be obtained in advance through the mobile client interface; the final super-resolution image is input into the OpenGL running environment to obtain a final super-resolution image texture; and the shader scripts of the exposure, contrast, atmosphere, structure and sharpening filters are loaded, the final super-resolution image texture and the acquired filter parameters are supplied as input, and the OpenGL program is executed to obtain the final image.
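The shader-based filters are not specified in detail, so the following is only a rough CPU stand-in showing how exposure, contrast and sharpening parameters might be applied in sequence; the parameter scaling and the omission of the atmosphere and structure filters are assumptions made for brevity.

```python
import cv2
import numpy as np

def apply_filters(image_bgr, exposure=0.0, contrast=1.0, sharpen=0.5):
    """Rough CPU stand-in for the exposure/contrast/sharpen filter chain."""
    img = image_bgr.astype(np.float32)
    img = img * (2.0 ** exposure)                    # exposure in stops
    img = (img - 127.5) * contrast + 127.5           # contrast about mid-gray
    img = np.clip(img, 0, 255).astype(np.uint8)
    if sharpen > 0:
        blurred = cv2.GaussianBlur(img, (0, 0), 3)
        img = cv2.addWeighted(img, 1 + sharpen, blurred, -sharpen, 0)  # unsharp mask
    return img
```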
As shown in fig. 4, an embodiment of the present application provides a device for improving the definition of a picture, including:
the acquiring module 401 is configured to acquire an original image and perform noise reduction processing on the original image to obtain a noise-reduced image;
the processing module 402 is configured to perform slice processing on the noise reduction image to obtain a plurality of image slices, process all the image slices with a super-resolution model to obtain super-resolution enhanced image slices, and splice all the image slices to obtain a noise reduction super-resolution image;
the recognition module 403 is configured to perform face recognition on the original image by using a face recognition model, and output a super-resolution image according to a face recognition result;
and the output module 404 is used for performing filter processing on the super-resolution image according to preset filter parameters and outputting a final image.
The working principle of the device for improving the definition of the picture provided by the embodiment of the application is that an acquisition module 401 acquires an original image and performs noise reduction treatment on the original image to obtain a noise-reduced image; the processing module 402 performs slice processing on the noise reduction image to obtain a plurality of image slices, adopts a super-resolution model to process all the image slices to obtain super-resolution enhanced image slices, and splices all the image slices to obtain a noise reduction super-resolution image; the recognition module 403 performs face recognition on the original image by adopting a face recognition model, and outputs a super-resolution image according to a face recognition result; the output module 404 performs filter processing on the super-resolution image according to preset filter parameters, and outputs a final image.
The embodiment of the application provides computer equipment, which comprises a processor and a memory connected with the processor;
the memory is used for storing a computer program, and the computer program is used for executing the processing method for improving the definition of the picture provided by any embodiment;
the processor is used to call and execute the computer program in the memory.
In summary, the present application provides a processing method and apparatus for improving the definition of a picture, which are used for improving the definition of a picture by performing a series of preprocessing such as noise reduction and face correction on the picture, inputting the preprocessed image into a deep learning model for processing, and performing a series of filter post-processing such as structure and atmosphere on the result output by the deep learning model.
It can be understood that the above-provided processing method embodiment corresponds to the above-mentioned device embodiment, and the corresponding specific details may be referred to each other and will not be described herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a processing method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. The processing method for improving the definition of the picture is characterized by comprising the following steps of:
acquiring an original image and carrying out noise reduction treatment on the original image to obtain a noise-reduced image;
slicing the noise-reduced image to obtain a plurality of image slices, processing all the image slices by adopting a super-resolution model to obtain super-resolution enhanced image slices, and splicing all the image slices to obtain a noise-reduced super-resolution image;
adopting a face recognition model to carry out face recognition on the original image, and outputting a super-resolution image according to a face recognition result;
performing filter processing on the super-resolution image according to preset filter parameters, and outputting a final image;
the outputting the super-resolution image according to the face recognition result comprises the following steps:
if the face is recognized, face data are obtained, the face data are preprocessed to obtain corrected face images, noise reduction processing is carried out on the corrected face images, and noise reduction corrected face images are obtained;
processing the noise reduction correction face image by adopting a face super-resolution model to obtain a corrected face super-resolution image with improved definition;
dividing the corrected face super-resolution image by adopting a portrait segmentation model to obtain a corrected portrait mask image, processing the corrected portrait mask image to obtain a feathered corrected portrait mask image, and respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image;
taking the pixel values in the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image, and outputting a super-resolution image;
and if the face cannot be identified, determining the noise reduction super-resolution image as a super-resolution image.
2. The processing method according to claim 1, wherein the performing noise reduction processing on the original image to obtain a noise-reduced image includes:
creating an OpenGL running environment in an OpenGL device;
obtaining an original image texture by using the OpenGL running environment and the original image;
and carrying out noise reduction treatment on the original image texture to obtain a noise reduction image.
3. The processing method according to claim 1, wherein the slicing the noise-reduced image to obtain a plurality of image slices, includes:
cutting the noise-reduced image according to a preset size to obtain a plurality of image slices of the preset size to be super-resolution enhanced;
and stretching and filling image slices smaller than the preset size up to the preset size.
4. The processing method according to claim 1, wherein preprocessing the face data to obtain a corrected face image includes:
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the face data, and recording it as the face midpoint;
calculating the anticlockwise angle of the line from the left-eye position to the right-eye position in the face data relative to the horizontal, and recording it as the face rotation angle;
rotating the original image about the face midpoint by the face rotation angle to obtain a rotated image, and obtaining the positions of the left eye, the right eye, the left mouth corner and the right mouth corner in the rotated image;
calculating the midpoint of the line connecting the left-eye position and the right-eye position in the rotated image, and recording it as the eye midpoint of the rotated face;
calculating the midpoint of the line connecting the left mouth corner position and the right mouth corner position in the rotated image, and recording it as the mouth midpoint of the rotated face;
calculating the distance from the eye midpoint of the rotated face to the mouth midpoint of the rotated face, and recording it as the face cropping size;
subtracting a preset multiple of the face cropping size from the abscissa of the eye midpoint of the rotated face to obtain the left cropping margin;
adding a preset multiple of the face cropping size to the abscissa of the eye midpoint of the rotated face to obtain the right cropping margin;
subtracting a preset multiple of the face cropping size from the ordinate of the eye midpoint of the rotated face to obtain the top cropping margin;
adding a preset multiple of the face cropping size to the ordinate of the eye midpoint of the rotated face to obtain the bottom cropping margin;
and cropping the rotated image according to the left, right, top and bottom cropping margins to obtain a corrected face image.
5. The processing method according to claim 1, wherein the processing the corrected portrait mask image to obtain a feathered corrected portrait mask image includes:
performing erosion, dilation and blurring operations on the corrected portrait mask image to obtain a feathered corrected portrait mask image.
6. The processing method according to claim 4, wherein the respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image includes:
rotating the corrected face super-resolution image by the negative of the face rotation angle to obtain an original face image with improved definition;
and rotating the feathered corrected portrait mask image by the negative of the face rotation angle to obtain all feathered original portrait mask images.
7. The processing method according to claim 1, wherein the computing the pixel values of the output super-resolution image with the pixel values of the feathered original portrait mask image as the adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image according to these pixel values, and outputting the super-resolution image comprises:
acquiring the pixel values of the feathered original portrait mask image, the pixel values of the corrected face super-resolution image and the pixel values of the noise-reduced super-resolution image;
fusing the original face image with the noise-reduced super-resolution image by taking the pixel values of the feathered original portrait mask image as an adjustment parameter, the pixel values of the corrected face super-resolution image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color;
and outputting the super-resolution image.
8. The processing method according to claim 2, wherein the filtering the super-resolution image according to preset filter parameters to output a final image includes:
acquiring preset exposure parameters, contrast parameters, atmosphere parameters, structure parameters and sharpening parameters;
acquiring a super-resolution image texture by using the OpenGL running environment;
and processing the super-resolution image texture by adopting preset exposure parameters, contrast parameters, atmosphere parameters, structural parameters and sharpening parameters to obtain a final image.
9. A device for improving the sharpness of a picture, comprising:
the acquisition module is used for acquiring an original image and carrying out noise reduction treatment on the original image to obtain a noise reduction image;
the processing module is used for carrying out slice processing on the noise reduction image to obtain a plurality of image slices, adopting a super-resolution model to process all the image slices to obtain super-resolution enhanced image slices, and splicing all the image slices to obtain a noise reduction super-resolution image;
the recognition module is used for recognizing the face of the original image by adopting a face recognition model and outputting a super-resolution image according to the face recognition result;
the output module is used for carrying out filter processing on the super-resolution image according to preset filter parameters and outputting a final image;
the outputting the super-resolution image according to the face recognition result comprises the following steps:
if the face is recognized, face data are obtained, the face data are preprocessed to obtain corrected face images, noise reduction processing is carried out on the corrected face images, and noise reduction corrected face images are obtained;
processing the noise reduction correction face image by adopting a face super-resolution model to obtain a corrected face super-resolution image with improved definition;
dividing the corrected face super-resolution image by adopting a portrait segmentation model to obtain a corrected portrait mask image, processing the corrected portrait mask image to obtain a feathered corrected portrait mask image, and respectively performing a rotation operation on the corrected face super-resolution image and the feathered corrected portrait mask image to obtain an original face image and a feathered original portrait mask image;
taking the pixel values in the feathered original portrait mask image as an adjustment parameter, the pixel values of the original face image as the blend color, and the pixel values of the noise-reduced super-resolution image as the base color, fusing the original face image with the noise-reduced super-resolution image, and outputting a super-resolution image;
and if the face cannot be identified, determining the noise reduction super-resolution image as a super-resolution image.
CN202110466148.4A 2021-04-28 2021-04-28 Processing method and device for improving definition of picture Active CN113177881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110466148.4A CN113177881B (en) 2021-04-28 2021-04-28 Processing method and device for improving definition of picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110466148.4A CN113177881B (en) 2021-04-28 2021-04-28 Processing method and device for improving definition of picture

Publications (2)

Publication Number Publication Date
CN113177881A CN113177881A (en) 2021-07-27
CN113177881B true CN113177881B (en) 2023-10-27

Family

ID=76926856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110466148.4A Active CN113177881B (en) 2021-04-28 2021-04-28 Processing method and device for improving definition of picture

Country Status (1)

Country Link
CN (1) CN113177881B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051386B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Image processing method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897708A (en) * 2017-03-06 2017-06-27 深圳英飞拓科技股份有限公司 Stereoscopic face detection method and device
CN108596140A (en) * 2018-05-08 2018-09-28 青岛海信移动通信技术股份有限公司 A kind of mobile terminal face identification method and system
WO2020087434A1 (en) * 2018-11-01 2020-05-07 深圳技术大学(筹) Method and device for evaluating resolution of face image
CN111709878A (en) * 2020-06-17 2020-09-25 北京百度网讯科技有限公司 Face super-resolution implementation method and device, electronic equipment and storage medium
CN112598580A (en) * 2020-12-29 2021-04-02 广州光锥元信息科技有限公司 Method and device for improving definition of portrait photo


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Facial Feature Image Segmentation and Definition Matching Methods; Shen Yi; China Master's Theses Full-text Database, Information Science and Technology, No. 7; Chapters 1 and 3 *
Analysis of Stereoscopic Visual Effects in Packaging Bag Graphic Design; Yuan Yueming; Art Education Research; pp. 78-79 *

Also Published As

Publication number Publication date
CN113177881A (en) 2021-07-27

Similar Documents

Publication Publication Date Title
Wang et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data
Liu et al. A generalized framework for edge-preserving and structure-preserving image smoothing
CN108205804B (en) Image processing method and device and electronic equipment
Cheng et al. Robust algorithm for exemplar-based image inpainting
US9621869B2 (en) System and method for rendering affected pixels
US20160314619A1 (en) 3-Dimensional Portrait Reconstruction From a Single Photo
CN112598580B (en) Method and device for improving definition of portrait photo
CN107194869B (en) Image processing method and terminal, computer storage medium and computer equipment
US11521299B2 (en) Retouching digital images utilizing separate deep-learning neural networks
WO2021253723A1 (en) Human body image processing method and apparatus, electronic device and storage medium
CN111612878B (en) Method and device for making static photo into three-dimensional effect video
CN112258440B (en) Image processing method, device, electronic equipment and storage medium
CN113177881B (en) Processing method and device for improving definition of picture
WO2018039936A1 (en) Fast uv atlas generation and texture mapping
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN113129207A (en) Method and device for blurring background of picture, computer equipment and storage medium
CN116612263B (en) Method and device for sensing consistency dynamic fitting of latent vision synthesis
WO2023221636A1 (en) Video processing method and apparatus, and device, storage medium and program product
US10354125B2 (en) Photograph processing method and system
CN112561784B (en) Image synthesis method, device, electronic equipment and storage medium
WO2021135676A1 (en) Photographing background blurring method, mobile terminal, and storage medium
CN114742725A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113763233A (en) Image processing method, server and photographing device
CN113065566A (en) Method, system and application for removing mismatching
US9563940B2 (en) Smart image enhancements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant