CN107358241B - Image processing method, image processing device, storage medium and electronic equipment


Info

Publication number
CN107358241B
Authority
CN
China
Prior art keywords
image
target
preset
feature
local area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710526347.3A
Other languages
Chinese (zh)
Other versions
CN107358241A (en)
Inventor
梁昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710526347.3A
Publication of CN107358241A
Application granted
Publication of CN107358241B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method includes: acquiring a target image and identifying a target object in the target image; determining a local area image on the target object and acquiring image features of the local area image; acquiring preset image features corresponding to the target object from a preset feature set; comparing the image features with the preset image features corresponding to the target object to obtain a comparison result; and finally processing the local area image according to the comparison result to obtain a processed target image. In this scheme, the image is processed in a targeted manner based on the comparison between its image features and the preset image features, which improves the accuracy and effectiveness of image processing.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the development of the Internet and mobile communication networks, and with the rapid growth of the processing and storage capabilities of electronic devices, a large number of applications have spread rapidly into everyday use. This is especially true of applications related to image processing, whose functions are becoming ever more powerful. At present, many camera applications provide convenient and fast photo-beautification functions, and a photo can be optimized instantly with only a few simple operations.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, a storage medium and electronic equipment, which can improve the accuracy and effectiveness of image processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a target image and identifying a target object in the target image;
determining a local area image on the target object, and acquiring image characteristics of the local area image;
acquiring preset image characteristics corresponding to the target object from a preset characteristic set;
comparing the image characteristics with preset image characteristics corresponding to the target object to obtain a comparison result;
and processing the local area image according to the comparison result to obtain a processed target image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the identification module is used for acquiring a target image and identifying a target object in the target image;
the determining module is used for determining a local area image on the target object and acquiring the image characteristics of the local area image;
the acquisition module is used for acquiring preset image characteristics corresponding to the target object from a preset characteristic set;
the comparison module is used for comparing the image characteristics with preset image characteristics corresponding to the target object to obtain a comparison result;
and the processing module is used for processing the local area image according to the comparison result to obtain a processed target image.
In a third aspect, an embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above image processing method.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including a processor and a memory, where the processor is electrically connected to the memory, and the memory is used to store instructions and data; the processor is used for executing the image processing method.
The embodiment of the invention discloses an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method includes: acquiring a target image and identifying a target object in the target image; determining a local area image on the target object and acquiring image features of the local area image; acquiring preset image features corresponding to the target object from a preset feature set; comparing the image features with the preset image features corresponding to the target object to obtain a comparison result; and finally processing the local area image according to the comparison result to obtain a processed target image. In this scheme, the image is processed in a targeted manner based on the comparison between its image features and the preset image features, which improves the accuracy and effectiveness of image processing.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic view of a scene architecture of an image processing system according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 3 is an application scene diagram of the image processing method according to the embodiment of the present invention.
Fig. 4 is a diagram of another application scenario of the image processing method according to the embodiment of the present invention.
Fig. 5 is another schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 7 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Fig. 9 is another schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image processing method, an image processing device, a storage medium and electronic equipment. The details will be described below separately.
Referring to fig. 1, fig. 1 is a schematic view of a scene architecture of an image processing system according to an embodiment of the present invention, including an electronic device and a server, where the electronic device and the server establish a communication connection through the internet.
When a user processes an image through an image processing function of the electronic device, the electronic device can record the input and output data of the processing and then send the recorded data to the server; the electronic device may send the data to the server via the web or through a client program installed on the electronic device. The server collects the data sent by electronic devices, processes the received data based on machine deep learning, and generates image processing reference data that reflects popular aesthetic preferences, so that users can obtain the image processing reference data through their electronic devices to process images.
Any of the following transmission protocols may be used between the electronic device and the server, without being limited to them: HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), P2P (peer-to-peer), P2SP (peer-to-server-and-peer), and the like.
The electronic device may be a mobile terminal such as a mobile phone or a tablet computer, or a conventional PC (personal computer), which is not limited in the embodiments of the present invention.
In an embodiment, an image processing method is provided, as shown in fig. 2, the flow may be as follows:
s101, acquiring a target image and identifying a target object in the target image.
The target image may be a person image or a scene image. The target object may be a person or an object. For example, face recognition may be performed on the target image, and when a face is recognized, the human body to which the recognized face belongs may be used as the target object in the target image. For another example, when a flower is recognized, the recognized flower may be used as the target object.
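For illustration only, the sketch below shows one way step S101 could be realized. It assumes OpenCV (cv2) is available and uses the stock frontal-face Haar cascade shipped with OpenCV; this detector, the function name, and the bounding-box convention are assumptions made for the example, not something the patent prescribes.

```python
import cv2

def identify_target_object(image_path):
    # Read the image and run a stock face detector; the first detected face
    # (if any) is treated as the target object for the later steps.
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return image, None          # no target object recognized
    x, y, w, h = faces[0]           # bounding box of the first detected face
    return image, (x, y, w, h)
```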
S102, determining a local area image on the target object, and acquiring image characteristics of the local area image.
Specifically, a local area image that differs significantly from the other areas may be obtained by combining deep learning or general image processing techniques with the characteristics of the target object itself.
In some embodiments, the target object is a person object, and the step of "determining a local area image on the target object" may include:
dividing the person object into a plurality of parts according to the human body structure;
acquiring the priority corresponding to each part;
and selecting a target part from the plurality of parts in order of priority from high to low, and using an image of the target part as the local area image.
In practical applications, there are various ways to divide the person object. For example, a whole-body image of a human body can be divided into: head, neck, limbs, torso, and so on. For another example, a face image can be divided into: forehead, eyes, nose, ears, lips, eyebrows, and so on. The specific division can be made according to user requirements or according to the parts that actually appear in the image.
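As an illustration of the part-and-priority selection just described, the following minimal sketch assumes the parts have already been cropped out of the target object; the part names and priority values are placeholder choices for the example, not values fixed by the patent.

```python
def select_local_regions(part_images, priorities, top_k=1):
    # part_images: {part name: cropped image}; priorities: {part name: integer rank}.
    # Parts are ordered from highest to lowest priority and the top_k part
    # images are returned as the local area image(s).
    ordered = sorted(part_images, key=lambda p: priorities.get(p, 0), reverse=True)
    return {part: part_images[part] for part in ordered[:top_k]}

# Example with assumed priorities: the lips outrank the forehead on a face image.
# regions = select_local_regions({"lip": lip_img, "forehead": forehead_img},
#                                {"lip": 3, "forehead": 1})
```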
In some embodiments, the image features include: color features, texture features, shape features, and spatial relationship features. A color feature is a global feature that describes the surface properties of the scene to which an image or image region corresponds. A texture feature is also a global feature that describes the surface properties of the scene to which the image or image region corresponds. A shape feature is a local feature with two kinds of representation: one is the contour feature, which mainly concerns the outer boundary of an object; the other is the region feature, which concerns the entire shape region. Spatial relationship features refer to the mutual spatial positions or relative directional relationships among the multiple targets segmented from the image; these relationships can be classified into connection/adjacency, overlap/occlusion, inclusion/containment, and so on.
Image feature extraction uses a computer to extract image information and determine whether each point of the image belongs to an image feature. The result of feature extraction is a division of the points on the image into different subsets, which often form isolated points, continuous curves, or continuous regions. Features are the starting point of many computer image analysis algorithms. One of the most important properties of feature extraction is "repeatability": the features extracted from different images of the same scene should be the same.
Feature extraction is a preliminary operation in image processing that examines each pixel to determine whether the pixel represents a feature. If it is part of a larger algorithm, the algorithm generally examines only the local area images of the image. As a prerequisite operation for feature extraction, the input image is typically smoothed in scale space by a gaussian blur kernel. Thereafter one or more features of the image are calculated by local derivative operations.
In a specific implementation, the image features of the local area image can be extracted using a Fourier transform method, a windowed Fourier transform method, a wavelet transform method, a least squares method, a boundary direction histogram method, Tamura texture feature extraction, and the like.
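As a deliberately simplified illustration of this step, the sketch below stands in mean HSV values for a color feature and the variance of a Laplacian response for a roughness-like texture feature; these stand-ins are assumptions made for the example and are not the transform-based methods listed above.

```python
import cv2

def extract_image_features(region_bgr):
    # Smooth first, echoing the Gaussian pre-processing mentioned earlier.
    smoothed = cv2.GaussianBlur(region_bgr, (5, 5), 0)
    hsv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2HSV)
    color_feature = hsv.reshape(-1, 3).mean(axis=0)           # mean H, S, V
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    texture_feature = cv2.Laplacian(gray, cv2.CV_64F).var()   # roughness proxy
    return {"color": color_feature, "texture": float(texture_feature)}
```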
S103, acquiring preset image characteristics corresponding to the target object from the preset characteristic set.
In some embodiments, in order to facilitate obtaining preset image features of different target objects, the step "obtaining preset image features corresponding to target objects from a preset feature set" may include:
determining the category of the person object;
and acquiring preset image features corresponding to the category from a preset feature set, and taking the preset image features as preset image features corresponding to the target object.
In the embodiment of the invention, a feature set comprising image features, image categories and a mapping relation between the image features and the image categories needs to be established in advance.
In some embodiments, the categories may include: male, female, and so on. Person objects of the same category share the same kinds of image features; for example, men have a laryngeal prominence while women do not, and men's eyebrows tend to be thick while women's tend to be fine. Different categories of target objects have different kinds of image features and different requirements for the image beautification effect to be achieved.
Of course, other classifications are possible, for example by age: the elderly, the middle-aged, young adults, juveniles, young children, and so on. The specific classification is determined according to user requirements and is not limited here.
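A minimal sketch of the category-keyed lookup is given below. The category names and numeric feature values are placeholders; in the scheme described here the actual preset features come from the server-side learning process discussed next.

```python
# Placeholder preset feature set; real values would be produced by the
# server's learning process described below.
PRESET_FEATURE_SET = {
    "female": {"whiteness": 5, "roughness": 5},
    "male":   {"whiteness": 4, "roughness": 4},
}

def get_preset_features(category):
    # Return the preset image features mapped to the given category.
    return PRESET_FEATURE_SET.get(category, {})
```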
In some embodiments, the preset image features may be obtained by the server based on machine deep learning: the server analyzes and learns from the processing parameters and the before-and-after images uploaded by electronic devices, identifies the processing effect that users tend to prefer for each kind of image, and extracts image features based on that processing effect.
In order to increase the image processing speed, the preset image feature values can be downloaded in advance and stored in a storage space of the electronic device.
In some embodiments, the server may collect the image processing operations that different users perform on images, and obtain from them the common image features of images of the same category. For example, when a user performs image processing with retouching software on the electronic device, the electronic device uploads the original image, the image adjustment parameters used during processing, and the processed image to the server, so that the server can analyze and learn from the uploaded image data and operation data with a learning algorithm. Through this analysis and learning, the server identifies the processing effect that the public tends to prefer for each category of image, extracts image features based on that processing effect, and establishes the mapping relationship between categories and image features to obtain the feature set.
In a specific implementation, when the electronic device uploads image data and operation data to the server, operations produced by automatic processing features such as "one-key beautification" and "automatic beautification" can be filtered out, so that the remaining data better reflect the user's real intent.
And S104, comparing the image characteristics with preset image characteristics corresponding to the target object to obtain a comparison result.
In some embodiments, the step of "comparing the image feature with the preset image feature corresponding to the target object to obtain the comparison result" may include the following steps:
analyzing the preset image characteristics to obtain a first characteristic value;
analyzing the image characteristics to obtain a second characteristic value;
and comparing the first characteristic value with the second characteristic value, calculating a characteristic difference value, and taking the characteristic difference value as a comparison result.
The feature value is a quantized data value for each image feature, and can be obtained by analyzing the image features based on a related image algorithm and calculating through binarization processing.
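The comparison itself can be sketched as follows; it assumes each feature has already been quantized to a single value, so the comparison reduces to a per-feature difference.

```python
def compare_features(preset_values, measured_values):
    # Per-feature difference between the preset value and the measured value.
    return {name: preset_values[name] - measured_values.get(name, 0)
            for name in preset_values}

# Matches the skin example used later in this description:
# compare_features({"whiteness": 5, "roughness": 5},
#                  {"whiteness": 3, "roughness": 5})
# -> {"whiteness": 2, "roughness": 0}
```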
And S105, processing the local area image according to the comparison result to obtain a processed target image.
In some embodiments, if the comparison result is the feature difference between the first feature value and the second feature value as described above, the step of "processing the local area image according to the comparison result" includes:
acquiring corresponding image adjustment parameters according to the characteristic difference;
and adjusting the image characteristics of the local area image according to the image adjustment parameters.
In some embodiments, the step of "adjusting the image characteristics of the local area image according to the image adjustment parameter" may include the following procedures:
extracting a color feature layer, a texture feature layer, a shape feature layer and a spatial relationship feature layer from the local area image to obtain a feature layer set;
selecting a target characteristic layer from the characteristic layer set according to the adjustment parameter for adjustment;
and synthesizing the adjusted target feature layer and the unselected feature layers in the feature set.
Each layer is composed of pixels, and the layers are superimposed to form the whole image. For example, taking the local area image as an "eye", a color feature layer (e.g., the difference between the eye colors of Eastern and Western people), a texture feature layer (e.g., the texture of spots, filaments, coronas, stripes, and crypts in the iris), a shape feature layer (e.g., the shape of a phoenix eye, a round eye, etc.), and a spatial relationship feature layer (e.g., the position of the pupil relative to the sclera) are extracted from the eye. Then, the feature layer to be adjusted is selected from these layers according to the specific adjustment parameter; for example, if bloodshot streaks appear in the eyes after a late night, the eye texture feature layer can be adjusted to remove them.
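As a rough illustration of this extract, adjust, and re-synthesize flow, the sketch below treats a blurred copy of the brightness channel as a stand-in "color layer" and the high-frequency residual as a stand-in "texture layer", attenuates the texture layer (as one might to reduce bloodshot streaks), and recomposes the region. This decomposition is an assumed simplification, not the patent's actual feature layers.

```python
import cv2
import numpy as np

def attenuate_texture_layer(region_bgr, sigma=5.0, texture_gain=0.5):
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    value = hsv[:, :, 2]
    base = cv2.GaussianBlur(value, (0, 0), sigma)   # low-frequency "color" layer
    texture = value - base                          # high-frequency "texture" layer
    hsv[:, :, 2] = np.clip(base + texture_gain * texture, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```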
For example, taking skin as an example, the image features include whiteness (a color feature) and roughness (a texture feature). If the quantized values of these two skin features in the target image are 3 and 5, and the quantized values of the preset skin features are 5 and 5 respectively, the feature differences obtained are 2 and 0 respectively. Corresponding image adjustment parameters are then obtained according to the feature differences, and the skin image is processed according to those parameters using a preset image processing algorithm.
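Continuing that skin example, the sketch below maps the whiteness difference of 2 onto a simple brightness adjustment applied only to the local region; the gain per unit of difference is an assumed illustrative constant, not a parameter defined in the patent.

```python
import numpy as np

def adjust_region_whiteness(image, box, whiteness_diff, gain_per_unit=10):
    # Brighten only the local region (box = x, y, w, h) in proportion to the
    # feature difference; the rest of the image is left untouched.
    x, y, w, h = box
    region = image[y:y+h, x:x+w].astype(np.int16)
    region = np.clip(region + whiteness_diff * gain_per_unit, 0, 255)
    image[y:y+h, x:x+w] = region.astype(np.uint8)
    return image
```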
When there are multiple local area images, referring to fig. 3 and taking a human face as an example, the image before processing (left image in fig. 3) includes: eyebrow A1, eye B1, lip C1, and face D1. Suppose there is no difference between the image features of eyebrow A1 and the preset eyebrow features, a difference a between the image features of eye B1 and the preset eye features, a difference b between lip C1 and the preset lip features, and no difference between face D1 and the preset face features. Corresponding image adjustment parameters are obtained based on the feature difference a, and the image is processed with the corresponding image processing algorithm according to those parameters; corresponding image adjustment parameters are likewise obtained based on the feature difference b, and the image features of that local area image are automatically adjusted with the corresponding image processing algorithm. Finally, the processed image (right image in fig. 3) is output: the eye is enlarged (see B2), and the lip shape is changed and the lip color is deepened (see C2), while eyebrow A2 and face D2 are unchanged.
That is, when a picture is opened, it is possible to automatically recognize which parts need to be processed, and perform image processing on the parts in a targeted manner while keeping other parameters of the image unchanged.
For example, referring to fig. 4 in combination with fig. 3, under the same beautification standard, suppose there is no feature difference between eyebrow A2 and the preset eyebrow features, none between eye B2 and the preset eye features, none between lip C2 and the preset lip features, but a feature difference d between face D2 and the preset face features. Corresponding image adjustment parameters are then obtained based on the feature difference d, and the image is processed with the image processing algorithm according to those parameters. Finally, the processed image (right image in fig. 4) is output: the face shape is changed (see D3), while eyebrow A3, eye B3, and lip C3 are unchanged.
In some embodiments, the obtained feature difference value may be directly used as an image adjustment parameter, and the image feature of the local area image is processed according to the feature difference value based on a corresponding image processing algorithm.
For a scene image, the image processing can likewise be performed according to the feature difference using a corresponding algorithm, for example by applying different filter effects.
As can be seen from the above, an embodiment of the present invention provides an image processing method, which includes obtaining a target image, identifying a target object in the target image, determining a local area image on the target object, obtaining image features of the local area image, obtaining preset image features corresponding to the target object from a preset feature set, comparing the image features with the preset image features corresponding to the target object to obtain a comparison result, and processing the local area image according to the comparison result to obtain a processed target image. According to the scheme, the image is processed in a targeted manner based on the comparison result of the image characteristics of the image and the preset image characteristics, so that the accuracy and the effectiveness of image processing are improved.
In an embodiment, another image processing method is further provided, as shown in fig. 5, the flow may be as follows:
s201, acquiring a target image, and identifying a target object in the target image.
The target image may be a person image or a scene image. The target object may be a person or an object. For example, face recognition may be performed on the target image, and when a face is recognized, the human body to which the recognized face belongs may be used as the target object in the target image.
S202, determining a local area image on the target object, and acquiring image characteristics of the local area image.
Specifically, a local area image that differs significantly from the other areas may be obtained by combining deep learning or general image processing techniques with the characteristics of the target object itself.
In some embodiments, taking the target object as a person object as an example, the step of "determining a local area image on the target object" may include:
dividing the person object into a plurality of parts according to the human body structure;
acquiring the priority corresponding to each part;
and selecting a target part from the plurality of parts in order of priority from high to low, and using an image of the target part as the local area image.
In practical applications, there are various ways to divide the person object. For example, a whole-body image of a human body can be divided into: head, neck, limbs, torso, and so on. For another example, a face image can be divided into: forehead, eyes, nose, ears, lips, eyebrows, and so on. The specific division can be made according to user requirements or according to the parts that actually appear in the image.
In a specific implementation, the image features of the local area image can be extracted using a Fourier transform method, a windowed Fourier transform method, a wavelet transform method, a least squares method, a boundary direction histogram method, and the like.
S203, determining the category of the target object, and acquiring the preset image feature corresponding to the target object from the preset feature set according to the category.
In some embodiments, the categories may include: male, female, and so on. Person objects of the same category share the same kinds of image features; for example, men have a laryngeal prominence while women do not, and men's eyebrows tend to be thick while women's tend to be fine. Different categories of target objects have different kinds of image features and different requirements for the image beautification effect to be achieved.
Of course, other classifications are possible, for example by age: the elderly, the middle-aged, young adults, juveniles, young children, and so on. The specific classification is determined according to user requirements and is not limited here.
In the embodiment of the invention, a feature set comprising image features, image categories and a mapping relation between the image features and the image categories needs to be established in advance.
In some embodiments, the server may collect the image processing operations that different users perform on images, and obtain from them the common image features of images of the same category. For example, when a user performs image processing with retouching software on the electronic device, the electronic device uploads the original image, the image adjustment parameters used during processing, and the processed image to the server, so that the server can analyze and learn from the uploaded image data and operation data with a learning algorithm. Through this analysis and learning, the server identifies the processing effect that the public tends to prefer for each category of image, extracts image features based on that processing effect, and establishes the mapping relationship between categories and image features to obtain the feature set.
In order to increase the image processing speed, the preset image feature values can be downloaded in advance and stored in a storage space of the electronic device.
S205, analyzing the preset image characteristics to obtain a first characteristic value, and analyzing the image characteristics to obtain a second characteristic value.
The feature value is a quantized data value of each feature. In particular, each image feature may include a plurality of sub-features, each with its own feature value. For example, a skin feature may include sub-features such as whiteness, roughness, and tone.
S206, comparing the first characteristic value with the second characteristic value, and calculating a characteristic difference value.
The feature value is a quantized data value for each image feature, and can be obtained by analyzing the image features based on a related image algorithm and calculating through binarization processing.
For example, taking skin as an example, the image features include whiteness (a color feature) and roughness (a texture feature). If the quantized values of these two skin features in the target image are 3 and 5, and the quantized values of the preset skin features are 5 and 5 respectively, the feature differences obtained are 2 and 0 respectively.
And S207, acquiring corresponding image adjustment parameters according to the characteristic difference.
Taking the feature differences of 2 and 0 between the skin's image features and the preset skin features as an example, the image adjustment parameter corresponding to the color feature difference is obtained. In practical applications, a conversion formula can be designed, and the obtained feature difference is substituted into the formula to obtain the corresponding image adjustment parameter.
Alternatively, difference intervals can be defined, with each interval corresponding to a different image adjustment parameter. The obtained feature difference is matched against the set intervals to determine which interval it falls into, and the corresponding image adjustment parameter is obtained.
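A sketch of this interval-based mapping is shown below; the interval boundaries and the adjustment parameters they map to are illustrative assumptions.

```python
def difference_to_adjustment(diff):
    # (low, high, parameter) triples: each difference interval maps to one
    # image adjustment parameter; the values here are placeholders.
    intervals = [(0, 1, 0.0), (1, 3, 0.3), (3, 5, 0.6)]
    for low, high, param in intervals:
        if low <= abs(diff) < high:
            return param
    return 1.0  # differences beyond the last interval get the strongest adjustment
```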
In some embodiments, the obtained feature difference value may be directly used as an image adjustment parameter.
And S208, adjusting the image characteristics of the local area image according to the acquired image adjustment parameters, and outputting the processed image.
In some embodiments, a color feature layer, a texture feature layer, a shape feature layer, and a spatial relationship feature layer may be extracted from the local area image to obtain a feature layer set, then, a target feature layer is selected from the feature layer set according to an adjustment parameter to perform adjustment, and then, the adjusted target feature layer and unselected feature layers in the feature set are synthesized to obtain a final image.
In specific implementation, a corresponding image processing algorithm is determined according to the image characteristics, the image characteristics of the local area image are adjusted according to the acquired image adjustment parameters, and the processed image is output.
When there are multiple image features, taking a human face as an example: if the skin color has no feature difference from the preset skin color and the eyes have no feature difference from the preset eyes, but the face shape has a feature difference from the preset face shape, then corresponding image adjustment parameters are obtained only from the feature difference between the face shape and the preset face shape, the image features of the local area image are adjusted with the corresponding image processing algorithm according to the obtained parameters, and the processed image is output.
As can be seen from the above, in the image processing method provided in the embodiment of the present invention, the target image is obtained, the target object of the target image is identified, the local area image in the target object is determined, the image feature of the local area image is obtained, the preset image feature corresponding to the target object is obtained, the image feature and the preset image feature are analyzed, respectively, to obtain the first feature value and the second feature value, the feature difference between the first feature value and the second feature value is calculated, the corresponding image adjustment parameter is obtained according to the feature difference, the local area image is processed according to the image adjustment parameter, and the processed image is output. According to the scheme, based on the difference between the image characteristics of the image and the preset image characteristics, the local area image with the characteristic difference is subjected to targeted processing, and the accuracy and the effectiveness of image processing are improved.
In another embodiment of the present invention, an image processing apparatus is further provided, where the image processing apparatus may be integrated in an electronic device in the form of software or hardware, and the electronic device may specifically include a mobile phone, a tablet computer, a notebook computer, and the like. As shown in fig. 6, the image processing apparatus 300 may include an identification module 301, a determination module 302, an acquisition module 303, a comparison module 304, and a processing module 305, wherein:
the identifying module 301 is configured to obtain a target image and identify a target object in the target image.
The target image may be a person image or a scene image. The target object may be a person or an object. For example, face recognition may be performed on the target image, and when a face is recognized, the human body to which the recognized face belongs may be used as the target object in the target image. For another example, when a flower is recognized, the recognized flower may be used as the target object.
The determining module 302 is configured to determine a local area image on the target object and acquire image features of the local area image.
Specifically, a local area image that differs significantly from the other areas may be obtained by combining deep learning or general image processing techniques with the characteristics of the target object itself.
An image may have one or more image features. For example, taking a human face as an example, the image features may include one or more features such as face shape, skin tone, nose height, and eye size. In a specific implementation, the image features of the local area image can be extracted using a Fourier transform method, a windowed Fourier transform method, a wavelet transform method, a least squares method, a boundary direction histogram method, and the like.
The obtaining module 303 is configured to obtain a preset image feature corresponding to the target object from a preset feature set.
The comparing module 304 is configured to compare the image feature with a preset image feature corresponding to the target object, so as to obtain a comparison result.
In some embodiments, the preset image features may be obtained by the server based on machine deep learning: the server analyzes and learns from the processing parameters and the before-and-after images uploaded by electronic devices, identifies the processing effect that users tend to prefer for each kind of image, and extracts image features based on that processing effect.
And the processing module 305 is configured to process the local area image according to the comparison result to obtain a processed target image.
Referring to fig. 7, in some embodiments, the comparison module 304 includes:
an analysis submodule 3041, configured to analyze a preset image feature to obtain a first feature value, and analyze an image feature to obtain a second feature value;
the comparison submodule 3042 is configured to compare the first feature value with the second feature value, calculate a feature difference value, and use the feature difference value as a comparison result.
The feature value is a quantized data value for each image feature, and can be obtained by analyzing the image features based on a related image algorithm and calculating through binarization processing.
With continued reference to fig. 7, in some embodiments, the processing module 305 includes:
the parameter obtaining sub-module 3051 is configured to obtain a corresponding image adjustment parameter according to the feature difference;
the processing sub-module 3052 is configured to adjust an image feature of the local area image according to the image adjustment parameter.
In practical applications, a conversion formula can be designed, and the obtained feature difference is substituted into the formula to obtain the corresponding image adjustment parameter.
In addition, difference intervals can be defined, with each interval corresponding to a different image adjustment parameter. The obtained feature difference is matched against the set intervals to determine which interval it falls into, and the corresponding image adjustment parameter is obtained.
In some embodiments, the processing sub-module 3052 is configured to:
extracting a color feature layer, a texture feature layer, a shape feature layer and a spatial relationship feature layer from the local area image to obtain a feature layer set;
selecting a target characteristic layer from the characteristic layer set according to the adjustment parameter for adjustment;
and synthesizing the adjusted target feature layer and the unselected feature layers in the feature set.
In some embodiments, the target object is a person object, and the obtaining module 303 includes:
a determination submodule 3031 configured to determine a category to which the person object belongs;
the feature obtaining sub-module 3032 is configured to obtain a preset image feature corresponding to the category from a preset feature set, and use the preset image feature as a preset image feature corresponding to the target object.
The categories may include: male, female, and so on. Person objects of the same category share the same kinds of image features; for example, men have a laryngeal prominence while women do not, and men's eyebrows tend to be thick while women's tend to be fine. Different categories of target objects have different kinds of image features and different requirements for the image beautification effect to be achieved.
In some embodiments, the target object is a human object; as shown in fig. 7, the determining module 302 includes:
a division submodule 3021, configured to divide the person object into a plurality of parts according to the human body structure;
a level obtaining submodule 3022, configured to obtain the priority corresponding to each part;
and a selecting submodule 3023, configured to select a target part from the plurality of parts in order of priority from high to low, and use an image of the target part as the local area image.
As can be seen from the above, the image processing apparatus provided in the embodiment of the present invention obtains the target image, identifies the target object of the target image, then determines the local area image on the target object, obtains the image feature of the local area image, then obtains the preset image feature corresponding to the target object from the preset feature set, compares the image feature with the preset image feature corresponding to the target object to obtain a comparison result, and finally processes the local area image according to the comparison result to obtain the processed target image. According to the scheme, the images are subjected to targeted processing based on the comparison result of the image features of the images and the preset image features, and the accuracy and the effectiveness of image processing are improved.
In another embodiment of the present invention, an electronic device is further provided, and the electronic device may be a smart phone, a tablet computer, or the like. As shown in fig. 8, the electronic device 400 includes a processor 401, a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or loading an application program stored in the memory 402 and calling data stored in the memory 402, thereby integrally monitoring the electronic device.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions:
acquiring a target image, and identifying a target object in the target image;
determining a local area image on a target object, and acquiring image characteristics of the local area image;
acquiring preset image characteristics corresponding to the target object from a preset characteristic set;
comparing the image characteristics with preset image characteristics corresponding to the target object to obtain a comparison result;
and processing the local area image according to the comparison result to obtain a processed target image.
In some embodiments, processor 401 performs the following steps:
analyzing the preset image characteristics to obtain a first characteristic value;
analyzing the image characteristics to obtain a second characteristic value;
and comparing the first characteristic value with the second characteristic value, calculating a characteristic difference value, and taking the characteristic difference value as a comparison result.
In some embodiments, processor 401 further performs the steps of: acquiring corresponding image adjustment parameters according to the characteristic difference; and adjusting the image characteristics of the local area image according to the image adjustment parameters.
In some embodiments, processor 401 further performs the steps of:
extracting a color feature layer, a texture feature layer, a shape feature layer and a spatial relationship feature layer from the local area image to obtain a feature layer set;
selecting a target characteristic layer from the characteristic layer set according to the adjustment parameter for adjustment;
and synthesizing the adjusted target feature layer and the unselected feature layers in the feature set.
In some embodiments, processor 401 further performs the steps of: determining a category to which the target object belongs; and acquiring preset image features corresponding to the category from a preset feature set, and taking the preset image features as preset image features corresponding to the target object.
In some embodiments, the target object is a person object; the processor 401 further performs the following steps: dividing the person object into a plurality of parts according to the human body structure; acquiring the priority corresponding to each part; and selecting a target part from the plurality of parts in order of priority from high to low, and using an image of the target part as the local area image.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing instructions executable in the processor. The application programs may constitute various functional modules. The processor 401 executes various functional applications and data processing by running an application program stored in the memory 402.
In some embodiments, as shown in fig. 9, electronic device 400 further comprises: display 403, control circuit 404, radio frequency circuit 405, input unit 406, audio circuit 407, sensor 408, and power supply 409. The processor 401 is electrically connected to the display 403, the control circuit 404, the rf circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power source 409.
The display screen 403 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof. The display screen 403 may be used as a screen in the embodiment of the present invention to display information.
The control circuit 404 is electrically connected to the display 403, and is configured to control the display 403 to display information.
The rf circuit 405 is used for transceiving rf signals to establish wireless communication with a network device or other electronic devices through wireless communication, and to transceive signals with the network device or other electronic devices.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker, microphone.
The sensor 408 is used to collect external environmental information. The sensors 408 may include ambient light sensors, acceleration sensors, light sensors, motion sensors, and other sensors.
The power supply 409 is used to power the various components of the electronic device 400. In some embodiments, the power source 409 may be logically connected to the processor 401 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
Although not shown in fig. 9, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, in the electronic device provided in the embodiment of the present invention, the target image is obtained, the target object of the target image is identified, then the local area image on the target object is determined, the image feature of the local area image is obtained, then the preset image feature corresponding to the target object is obtained from the preset feature set, then the image feature is compared with the preset image feature corresponding to the target object, so as to obtain a comparison result, and finally the local area image is processed according to the comparison result, so as to obtain the processed target image. According to the scheme, the images are subjected to targeted processing based on the comparison result of the image features of the images and the preset image features, and the accuracy and the effectiveness of image processing are improved.
In yet another embodiment of the present invention, a storage medium is further provided, wherein a plurality of instructions are stored in the storage medium, and the instructions are suitable for being loaded by a processor to execute the steps of any one of the image processing methods.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, or the like.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the concepts of the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural. Moreover, unless otherwise indicated herein, recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. In addition, the steps of all methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The present invention is not limited to the order of steps described. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate the inventive concept and does not pose a limitation on the scope of the inventive concept unless otherwise claimed. Various modifications and adaptations will be apparent to those skilled in the art without departing from the spirit and scope.
The image processing method, the image processing apparatus, the storage medium, and the electronic device provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (5)

1. An image processing method, comprising:
acquiring a target image and identifying a target object in the target image;
determining a local area image on the target object, and acquiring image characteristics of the local area image; wherein the image features comprise image features of a plurality of different locations; the method specifically comprises the following steps: dividing a human body object into a plurality of parts according to the human body structure, wherein the human body object comprises the whole human body or a human face; acquiring the corresponding priority of each part; selecting a target part from the multiple parts according to the priority from high to low, and taking an image of the target part as a local area image;
acquiring preset image characteristics corresponding to the target object from a preset characteristic set;
comparing the image characteristics with preset image characteristics corresponding to the target object to obtain a comparison result; the method comprises the following steps: analyzing the preset image characteristics to obtain a first characteristic value; analyzing the image characteristics to obtain a second characteristic value; comparing the first characteristic value with the second characteristic value, calculating a characteristic difference value, and taking the characteristic difference value as a comparison result;
processing the local area image according to the comparison result to obtain a processed target image, including: when the image characteristics of a certain part do not have a characteristic difference value, the image of the part does not need to be processed; when the image characteristics of the other part have a characteristic difference value, acquiring a corresponding image adjustment parameter according to the characteristic difference value; adjusting the image of the part according to the image adjustment parameter, comprising: extracting a color feature layer, a texture feature layer, a shape feature layer and a spatial relationship feature layer from the local area image to obtain a feature layer set; selecting a target feature layer from the feature layer set according to the adjustment parameter for adjustment; and synthesizing the adjusted target feature layer and the unselected feature layers in the feature set.
2. The image processing method according to claim 1, wherein the target object is a human object; the step of acquiring the preset image characteristics corresponding to the target object from the preset characteristic set comprises the following steps:
determining a category to which the person object belongs;
and acquiring preset image features corresponding to the categories from a preset feature set, and taking the preset image features as preset image features corresponding to the target object.
3. An image processing apparatus characterized by comprising:
the identification module is used for acquiring a target image and identifying a target object in the target image;
the determining module is used for determining a local area image on the target object and acquiring image features of the local area image, wherein the image features comprise image features of a plurality of different parts; the determining module is specifically used for: dividing a human body object into a plurality of parts according to the human body structure, wherein the human body object comprises a whole human body or a human face; acquiring a priority corresponding to each part; and selecting a target part from the plurality of parts in descending order of priority, and taking an image of the target part as the local area image;
the acquisition module is used for acquiring preset image features corresponding to the target object from a preset feature set;
the comparison module is used for comparing the image features with the preset image features corresponding to the target object to obtain a comparison result;
the processing module is used for processing the local area image according to the comparison result to obtain a processed target image; the processing module is specifically used for: when the image features of a part have no feature difference value, leaving the image of that part unprocessed; when the image features of another part have a feature difference value, acquiring a corresponding image adjustment parameter according to the feature difference value, and adjusting the image of that part according to the image adjustment parameter, the adjusting comprising: extracting a color feature layer, a texture feature layer, a shape feature layer and a spatial relationship feature layer from the local area image to obtain a feature layer set; selecting a target feature layer from the feature layer set according to the adjustment parameter and adjusting it; and synthesizing the adjusted target feature layer with the unselected feature layers in the feature layer set;
wherein the comparison module comprises:
the analysis submodule is used for analyzing the preset image features to obtain a first feature value, and analyzing the image features to obtain a second feature value;
and the comparison submodule is used for comparing the first feature value with the second feature value, calculating a feature difference value, and taking the feature difference value as the comparison result.
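The processing module's feature-layer step (extract color, texture, shape, and spatial-relationship layers, adjust only the target layer, then synthesize) can be sketched as follows. The decomposition, adjustment, and synthesis functions here are deliberately trivial placeholders chosen only to make the control flow runnable; they are not the actual image operations described by the patent.

```python
# Placeholder decomposition/adjustment/synthesis; real layers would be per-pixel
# color, texture, shape and spatial-relationship maps, not copies of a value list.

def extract_feature_layers(local_region: list) -> dict:
    return {name: list(local_region)
            for name in ("color", "texture", "shape", "spatial_relationship")}


def adjust_layer(layer: list, strength: float) -> list:
    # Trivial stand-in adjustment: scale the layer by the adjustment strength.
    return [value * (1.0 + strength) for value in layer]


def synthesize(layers: dict) -> list:
    # Trivial stand-in synthesis: average the layers element-wise.
    count = len(layers)
    return [sum(values) / count for values in zip(*layers.values())]


local_region = [0.2, 0.5, 0.8]                       # stand-in for the local area image
layers = extract_feature_layers(local_region)         # the feature layer set
layers["color"] = adjust_layer(layers["color"], 0.1)  # adjust only the target layer
print(synthesize(layers))                             # recombine adjusted + unselected layers
```

A real implementation would operate on per-pixel maps (for example, color channels or texture responses) and recombine them with the original image content rather than averaging short lists.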
4. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the image processing method according to any one of claims 1-2.
5. An electronic device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, and the processor being configured to perform the image processing method according to any one of claims 1-2.
CN201710526347.3A 2017-06-30 2017-06-30 Image processing method, image processing device, storage medium and electronic equipment Expired - Fee Related CN107358241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526347.3A CN107358241B (en) 2017-06-30 2017-06-30 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710526347.3A CN107358241B (en) 2017-06-30 2017-06-30 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN107358241A CN107358241A (en) 2017-11-17
CN107358241B (en) 2021-01-26

Family

ID=60273675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526347.3A Expired - Fee Related CN107358241B (en) 2017-06-30 2017-06-30 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN107358241B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076288A (en) * 2017-12-14 2018-05-25 光锐恒宇(北京)科技有限公司 Image processing method, device and computer readable storage medium
CN109961403B (en) * 2017-12-22 2021-11-16 Oppo广东移动通信有限公司 Photo adjusting method and device, storage medium and electronic equipment
CN108198144A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108093177B (en) * 2017-12-28 2021-01-26 Oppo广东移动通信有限公司 Image acquisition method and device, storage medium and electronic equipment
CN108174099A (en) * 2017-12-29 2018-06-15 光锐恒宇(北京)科技有限公司 Method for displaying image, device and computer readable storage medium
CN108446653B (en) * 2018-03-27 2022-08-16 百度在线网络技术(北京)有限公司 Method and apparatus for processing face image
CN108776819A (en) * 2018-06-05 2018-11-09 Oppo广东移动通信有限公司 A kind of target identification method, mobile terminal and computer readable storage medium
CN109035159A (en) * 2018-06-27 2018-12-18 努比亚技术有限公司 A kind of image optimization processing method, mobile terminal and computer readable storage medium
CN109120906A (en) * 2018-10-30 2019-01-01 信利光电股份有限公司 A kind of intelligent monitor system
CN109598722B (en) * 2018-12-10 2020-12-08 杭州帝视科技有限公司 Image analysis method based on recurrent neural network
CN109726255A (en) * 2018-12-18 2019-05-07 斑马网络技术有限公司 Automatic update method, device, system and the storage medium of POI
CN110738626B (en) * 2019-10-24 2022-06-28 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN111246113B (en) * 2020-03-05 2022-03-18 上海瑾盛通信科技有限公司 Image processing method, device, equipment and storage medium
CN113449755B (en) * 2020-03-26 2022-12-02 阿里巴巴集团控股有限公司 Data processing method, model training method, device, equipment and storage medium
CN112083863A (en) * 2020-09-17 2020-12-15 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium
CN113763285B (en) * 2021-09-27 2024-06-11 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN114693554B (en) * 2022-03-28 2023-05-26 唐山学院 Big data image processing method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8691247B2 (en) * 2006-12-26 2014-04-08 Ad Lunam Labs Inc. Skin rejuvenation cream
CN104346136B (en) * 2013-07-24 2019-09-13 腾讯科技(深圳)有限公司 A kind of method and device of picture processing
CN105279186A (en) * 2014-07-17 2016-01-27 腾讯科技(深圳)有限公司 Image processing method and system
CN104503749B (en) * 2014-12-12 2017-11-21 广东欧珀移动通信有限公司 Photo processing method and electronic equipment
CN105096241A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Face image beautifying device and method
CN105812853A (en) * 2016-03-16 2016-07-27 联想(北京)有限公司 Image processing method and electronic device
CN106886752A (en) * 2017-01-06 2017-06-23 深圳市金立通信设备有限公司 The method and terminal of a kind of image procossing

Also Published As

Publication number Publication date
CN107358241A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN107358241B (en) Image processing method, image processing device, storage medium and electronic equipment
JP6636154B2 (en) Face image processing method and apparatus, and storage medium
JP7413400B2 (en) Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium
CN108319953B (en) Occlusion detection method and device, electronic equipment and the storage medium of target object
WO2019128507A1 (en) Image processing method and apparatus, storage medium and electronic device
EP3338217B1 (en) Feature detection and masking in images based on color distributions
CN108197546B (en) Illumination processing method and device in face recognition, computer equipment and storage medium
CN107369196B (en) Expression package manufacturing method and device, storage medium and electronic equipment
KR102183370B1 (en) Method for obtaining care information, method for sharing care information, and electronic apparatus therefor
CN108198130B (en) Image processing method, image processing device, storage medium and electronic equipment
EP2923306B1 (en) Method and apparatus for facial image processing
US20190095701A1 (en) Living-body detection method, device and storage medium
US8983152B2 (en) Image masks for face-related selection and processing in images
CN109087376B (en) Image processing method, image processing device, storage medium and electronic equipment
CN106682632B (en) Method and device for processing face image
CN111008935B (en) Face image enhancement method, device, system and storage medium
KR101141643B1 (en) Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point
CN106682620A (en) Human face image acquisition method and device
CN108109161B (en) Video data real-time processing method and device based on self-adaptive threshold segmentation
CN114092678A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109618098A (en) A kind of portrait face method of adjustment, device, storage medium and terminal
CN112101195A (en) Crowd density estimation method and device, computer equipment and storage medium
US20190205689A1 (en) Method and device for processing image, electronic device and medium
US10909351B2 (en) Method of improving image analysis
CN113379623B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18, Wusha Haibin Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18, Wusha Haibin Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210126
