CN108513068B - Image selection method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN108513068B
Authority
CN
China
Prior art keywords
image
face
processed
face image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810276595.1A
Other languages
Chinese (zh)
Other versions
CN108513068A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810276595.1A priority Critical patent/CN108513068B/en
Publication of CN108513068A publication Critical patent/CN108513068A/en
Application granted granted Critical
Publication of CN108513068B publication Critical patent/CN108513068B/en

Classifications

    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
        • H04N 23/80: Camera processing pipelines; components thereof
        • H04N 23/60: Control of cameras or camera modules
            • H04N 23/61: Control based on recognised objects
                • H04N 23/611: Control based on recognised objects, where the recognised objects include parts of the human body
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
        • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
            • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/168: Feature extraction; face representation
                • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
            • G06V 40/18: Eye characteristics, e.g. of the iris
                • G06V 40/19: Sensors therefor
                • G06V 40/193: Preprocessing; feature extraction


Abstract

The application discloses an image selection method and apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring images to be processed that contain face images; acquiring the sharpness of each face image in each image to be processed; acquiring a preset part value of each face image in each image to be processed, wherein the preset part value is a numerical value used for representing the size of a preset part of the face image; and selecting an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed. The embodiments can improve the flexibility with which the terminal selects an image for processing from the images to be processed.

Description

Image selection method and device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to a method and an apparatus for selecting an image, a storage medium, and an electronic device.
Background
Photographing is a basic function of a terminal. With the continuous progress of hardware such as camera modules and of image processing algorithms, the shooting capability of terminals has become more and more powerful, and users photograph with their terminals more and more frequently, for example to take pictures of people. In the related art, a terminal may collect multiple frames of images and select an image for processing from them. However, the terminal's strategy for selecting the image for processing from the multiple frames is inflexible.
Disclosure of Invention
The embodiments of the application provide an image selection method and apparatus, a storage medium, and an electronic device, which can improve the flexibility with which a terminal selects an image for processing from images to be processed.
The embodiment of the application provides a method for selecting an image, which comprises the following steps:
acquiring an image to be processed containing a face image;
acquiring the sharpness of each face image in each image to be processed;
acquiring a preset part value of each face image in each image to be processed, wherein the preset part value is a numerical value used for representing the size of a preset part of the face image;
and selecting an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
The embodiment of the application provides a device for selecting images, which comprises:
the first acquisition module is used for acquiring an image to be processed containing a face image;
the second acquisition module is used for acquiring the sharpness of each face image in each image to be processed;
the third acquisition module is used for acquiring a preset part value of each face image in each image to be processed, wherein the preset part value is a numerical value used for representing the size of the preset part of the face image;
and the selecting module is used for selecting an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
Embodiments of the present application provide a storage medium having a computer program stored thereon, which, when executed on a computer, causes the computer to execute the method provided by the embodiments of the present application.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the method provided in the embodiment of the present application by calling the computer program stored in the memory.
In this embodiment, the terminal may select an image for processing from the multiple frames of images to be processed according to the sharpness of each face image in each frame and the preset part value of each face image in each frame. The embodiment thus improves the flexibility with which the terminal selects an image for processing from the images to be processed.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an image selection method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of a method for selecting an image according to an embodiment of the present disclosure.
Fig. 3 is a scene schematic diagram of a method for selecting an image according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image selecting apparatus according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an image selecting apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
It can be understood that the execution subject of the embodiment of the present application may be a terminal device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for selecting an image according to an embodiment of the present application, where the flow chart may include:
in step S101, an image to be processed including a face image is acquired.
In step S101 of the embodiment of the present application, the terminal may first acquire an image to be processed including a face image. In one embodiment, the images to be processed may be multiple frames of images continuously and rapidly acquired by the terminal in the same scene.
In step S102, the sharpness of each face image in each image to be processed is obtained.
For example, after acquiring multiple frames of images to be processed including face images, the terminal may acquire the sharpness of each face image in each frame of images to be processed.
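The patent does not name a concrete sharpness metric. As a hedged illustration of step S102, the sketch below scores a face crop with the variance of its Laplacian, a common focus proxy; the face rectangle is assumed to come from an upstream face detector, and the helper name is hypothetical.

```python
# Illustrative sharpness measure for one face region (step S102).
import cv2

def face_sharpness(image, face_box):
    """image: BGR frame (NumPy array); face_box: (x, y, w, h) rectangle
    from a face detector (assumed input)."""
    x, y, w, h = face_box
    gray = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Higher Laplacian variance => more high-frequency detail => sharper.
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```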
In step S103, a preset part value of each face image in each image to be processed is obtained, where the preset part value is a numerical value representing the size of a preset part of the face image.
For example, the terminal may obtain the preset part value of each face image in each frame of the images to be processed.
In some embodiments, the preset part value may be a numerical value representing the area of the preset part. Alternatively, it may be a numerical value indicating the height of the preset part in the vertical direction, or the like. It is to be understood that the present embodiment is not limited thereto.
In one embodiment, the preset part may be a facial part such as the eyes or the mouth.
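Taking the eyes as the preset part, one hedged reading of the eye value is the vertical eye opening computed from landmark points; the landmark layout and helper name below are assumptions of this sketch, since the patent names no landmark model.

```python
# Illustrative "eye value" as the mean vertical eye opening (step S103).

def eye_value(landmarks):
    """landmarks: dict with 'left_eye' and 'right_eye', each a list of
    (x, y) contour points (hypothetical format for this sketch)."""
    def opening(points):
        ys = [p[1] for p in points]
        return max(ys) - min(ys)    # eye height in pixels
    return (opening(landmarks['left_eye'])
            + opening(landmarks['right_eye'])) / 2.0
```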
In step S104, an image for processing is selected from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
For example, after obtaining the sharpness of each face image in each frame of the images to be processed and the preset part value of each face image in each frame, the terminal may select an image for processing from the multiple frames according to these parameter values.
It can be understood that, in this embodiment, the terminal may select an image for processing from the multiple frames of images to be processed according to the sharpness of each face image in each frame and the preset part value of each face image in each frame. The embodiment thus improves the flexibility with which the terminal selects an image for processing from the images to be processed.
Referring to fig. 2, fig. 2 is another schematic flow chart of a method for selecting an image according to an embodiment of the present application, where the flow chart may include:
in step S201, the terminal acquires an image to be processed including a face image.
For example, the terminal may first acquire a plurality of frames of images including a human face, which are continuously and rapidly acquired in the same scene, where the plurality of frames of images are to-be-processed images.
In step S202, the terminal obtains an image area of the image to be processed and an area of the face region, and obtains an area ratio of the area of the face region to the image area.
For example, after acquiring multiple frames of images to be processed, the terminal may acquire an image area of one frame of image to be processed and a face area of a user in the frame of image to be processed. Then, the terminal can obtain the proportion of the face area in the image area.
For example, suppose the images to be processed A, B, C, D, E, F are group photos of three people: persons a, b, and c. The terminal may take one frame, such as image A, obtain its image area and the area of person a's face region within it, and then obtain the ratio of person a's face area to the image area of image A.
After obtaining the proportion of person a's face area within the image area of image A, the terminal can detect whether the proportion reaches a preset ratio threshold.
If the ratio does not reach the preset ratio threshold, i.e., the faces occupy only a small part of the image to be processed, the terminal may perform other operations, such as selecting an image for processing from the multiple frames according to the size of the users' eyes in the images to be processed.
If the ratio reaches the preset ratio threshold, i.e., the faces occupy a relatively large part of the image, the process proceeds to step S203.
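Before moving on, the gating check of step S202 can be sketched as follows; the face area is assumed to come from the detector, and the 8% default mirrors the threshold used in the example below.

```python
# Illustrative area-ratio gate (step S202).

def face_area_ratio_ok(image, face_area, threshold=0.08):
    h, w = image.shape[:2]
    ratio = face_area / float(h * w)
    return ratio >= threshold       # True -> proceed to step S203
```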
In step S203, if the area ratio reaches the preset ratio threshold, the terminal obtains the sharpness of each face image in each image to be processed.
For example, suppose the image area of image A to be processed is 100 and the area of person a's face region is 10; the proportion of person a's face area (10) within the image area of image A (100) is then 10%, while the preset ratio threshold is 8%. The proportion therefore exceeds the preset ratio threshold, and the terminal may obtain the sharpness of each face image in each frame of the images to be processed.
For example, the images to be processed A, B, C, D, E, F are group photos of three people: persons a, b, and c. The terminal may first obtain the sharpness of the face images of persons a, b, and c in image A, and then sequentially obtain their sharpness in images B, C, D, E, F. Table 1 shows the sharpness of the face images of persons a, b, and c in the images to be processed A, B, C, D, E, F.
TABLE 1
Image   Person a   Person b   Person c
A       90         91         89
B       91         92         92
C       96         97         94
D       94         95         96
E       95         93         93
F       92         93         93
As can be seen from Table 1, in image A to be processed, the sharpness of person a's face image is 90, person b's is 91, and person c's is 89.
In image B to be processed, the sharpness of person a's face image is 91, person b's is 92, and person c's is 92.
In image C to be processed, the sharpness of person a's face image is 96, person b's is 97, and person c's is 94.
In image D to be processed, the sharpness of person a's face image is 94, person b's is 95, and person c's is 96.
In image E to be processed, the sharpness of person a's face image is 95, person b's is 93, and person c's is 93.
In image F to be processed, the sharpness of person a's face image is 92, person b's is 93, and person c's is 93.
In step S204, the terminal obtains a preset part value of each face image in each image to be processed, where the preset part value is a numerical value used for representing the size of a preset part of the face image.
For example, after the sharpness of each face image in each image to be processed is obtained, the terminal may obtain the preset part value of each face image in each image to be processed.
For example, if the preset part is the eyes, the terminal may obtain an eye value of each face image in each image to be processed, where the eye value is a numerical value representing the size of the eyes in the face image.
In one embodiment, the eye value may be a numerical value representing the size of the eye area, or a numerical value representing the height of the eye in the vertical direction, or the like. It is to be understood that the present embodiment is not limited thereto.
For example, Table 2 shows the eye values of persons a, b, and c in the images to be processed A, B, C, D, E, F.
TABLE 2
Image   Person a   Person b   Person c
A       79         81         83
B       82         82         84
C       83         88         84
D       85         87         85
E       85         87         85
F       84         86         86
As can be seen from Table 2, in image A to be processed, the eye value of person a's face image is 79, person b's is 81, and person c's is 83.
In image B to be processed, the eye value of person a's face image is 82, person b's is 82, and person c's is 84.
In image C to be processed, the eye value of person a's face image is 83, person b's is 88, and person c's is 84.
In image D to be processed, the eye value of person a's face image is 85, person b's is 87, and person c's is 85.
In image E to be processed, the eye value of person a's face image is 85, person b's is 87, and person c's is 85.
In image F to be processed, the eye value of person a's face image is 84, person b's is 86, and person c's is 86.
In step S205, the terminal acquires a first weight and a second weight.
In step S206, in each image to be processed, the terminal weights the sharpness of each face image according to the first weight to obtain the weighted face sharpness of each face image, and weights the preset part value of each face image according to the second weight to obtain the weighted preset part value of each face image.
For example, steps S205 and S206 may include:
after the sharpness and the eye value of each face image in each image to be processed are obtained, the terminal may obtain the first weight and the second weight.
Then, for each image to be processed, the terminal may weight the sharpness of each face image according to the first weight to obtain the weighted face sharpness of each face image.
For example, the first weight is 40% and the second weight is 60%. It is understood that in practical applications, the values of the first weight and the second weight can be adjusted as needed. The examples herein should not be construed as limiting the present embodiments.
For example, in image A to be processed, person a's weighted face sharpness is 36 (90 × 40%), person b's is 36.4, and person c's is 35.6.
In image B to be processed, person a's weighted face sharpness is 36.4, person b's is 36.8, and person c's is 36.8.
In image C to be processed, person a's weighted face sharpness is 38.4, person b's is 38.8, and person c's is 37.6.
In image D to be processed, person a's weighted face sharpness is 37.6, person b's is 38, and person c's is 38.4.
In image E to be processed, person a's weighted face sharpness is 38, person b's is 37.2, and person c's is 37.2.
In image F to be processed, person a's weighted face sharpness is 36.8, person b's is 37.2, and person c's is 37.2.
Then, the terminal may weight the eye value of each face image according to the second weight to obtain a weighted eye value of each face image.
In image A to be processed, person a's weighted eye value is 47.4 (79 × 60%), person b's is 48.6, and person c's is 49.8.
In image B to be processed, person a's weighted eye value is 49.2, person b's is 49.2, and person c's is 50.4.
In image C to be processed, person a's weighted eye value is 49.8, person b's is 52.8, and person c's is 50.4.
In image D to be processed, person a's weighted eye value is 51, person b's is 52.2, and person c's is 51.
In image E to be processed, person a's weighted eye value is 51, person b's is 52.2, and person c's is 51.
In image F to be processed, person a's weighted eye value is 50.4, person b's is 51.6, and person c's is 51.6.
In step S207, in each image to be processed, the terminal obtains a target value of each face image, where the target value is the sum of the weighted face sharpness and the weighted preset part value.
In step S208, the terminal determines the face image corresponding to the maximum value in the target values of each user as the target face image of the user.
For example, steps S207 and S208 may include:
after obtaining the weighted face sharpness and the weighted eye value of each face image in each image to be processed, the terminal may obtain a target value of each face image in each image to be processed. Wherein the target value is the sum of the weighted face sharpness and the weighted eye value.
For example, in image A to be processed, person a's target value is 36 + 47.4 = 83.4, person b's is 36.4 + 48.6 = 85, and person c's is 35.6 + 49.8 = 85.4.
In image B to be processed, person a's target value is 36.4 + 49.2 = 85.6, person b's is 36.8 + 49.2 = 86, and person c's is 36.8 + 50.4 = 87.2.
In image C to be processed, person a's target value is 38.4 + 49.8 = 88.2, person b's is 38.8 + 52.8 = 91.6, and person c's is 37.6 + 50.4 = 88.
In image D to be processed, person a's target value is 37.6 + 51 = 88.6, person b's is 38 + 52.2 = 90.2, and person c's is 38.4 + 51 = 89.4.
In image E to be processed, person a's target value is 38 + 51 = 89, person b's is 37.2 + 52.2 = 89.4, and person c's is 37.2 + 51 = 88.2.
In image F to be processed, person a's target value is 36.8 + 50.4 = 87.2, person b's is 37.2 + 51.6 = 88.8, and person c's is 37.2 + 51.6 = 88.8.
After the target value of each face image in each image to be processed is obtained, the terminal may determine the face image corresponding to the maximum value in the target values of each user as the target face image of the user.
For example, referring to table 3, for the user a, the maximum value of the target values of the face image of the user a is 89, and the value 89 is the target value of the face image of the user a in the image to be processed E, so that the terminal may determine the face image of the user a in the image to be processed E as the target face image of the user a.
Similarly, for the user b, the maximum value of the target values of the face image of the user b is 91.6, and the value 91.6 is the target value of the face image of the user b in the image C to be processed, so that the terminal can determine the face image of the user b in the image C to be processed as the target face image of the user b.
For the user c, the maximum value of the target values of the face images of the user c is 89.4, and the value 89.4 is the target value of the face image of the user c in the image D to be processed, so that the terminal can determine the face image of the user c in the image D to be processed as the target face image of the user c.
TABLE 3
Image   Person a   Person b   Person c
A       83.4       85         85.4
B       85.6       86         87.2
C       88.2       91.6       88
D       88.6       90.2       89.4
E       89         89.4       88.2
F       87.2       88.8       88.8
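The per-image arithmetic of steps S205 to S208 can be reproduced directly from Tables 1 and 2. The following Python sketch is illustrative only: the 40%/60% weights follow the example above, while the dictionary layout is an assumption of this sketch rather than anything specified by the patent.

```python
# Illustrative sketch of steps S205-S208 with the data of Tables 1 and 2.
W_SHARP, W_EYE = 0.4, 0.6  # first and second weights from the example

sharpness = {   # Table 1: sharpness of persons a, b, c per image
    'A': {'a': 90, 'b': 91, 'c': 89}, 'B': {'a': 91, 'b': 92, 'c': 92},
    'C': {'a': 96, 'b': 97, 'c': 94}, 'D': {'a': 94, 'b': 95, 'c': 96},
    'E': {'a': 95, 'b': 93, 'c': 93}, 'F': {'a': 92, 'b': 93, 'c': 93},
}
eye_values = {  # Table 2: eye values of persons a, b, c per image
    'A': {'a': 79, 'b': 81, 'c': 83}, 'B': {'a': 82, 'b': 82, 'c': 84},
    'C': {'a': 83, 'b': 88, 'c': 84}, 'D': {'a': 85, 'b': 87, 'c': 85},
    'E': {'a': 85, 'b': 87, 'c': 85}, 'F': {'a': 84, 'b': 86, 'c': 86},
}

# Step S207: target value = weighted sharpness + weighted eye value.
target = {img: {p: W_SHARP * faces[p] + W_EYE * eye_values[img][p]
                for p in faces}
          for img, faces in sharpness.items()}

# Step S208: for each person, the image where that person's target
# value peaks becomes the source of their target face image.
target_face = {p: max(target, key=lambda img: target[img][p])
               for p in ('a', 'b', 'c')}
print(target_face)  # {'a': 'E', 'b': 'C', 'c': 'D'}, matching Table 3
```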
In step S209, the terminal selects an image to be processed containing a target face image of the user as an alternative image.
In step S210, the terminal selects the image to be processed containing the largest number of target face images as a base image.
For example, steps S209 and S210 may include:
after the target face image of each user is determined, the terminal can select the image to be processed containing the target face image of the user as an alternative image.
For example, the to-be-processed image C includes a target face image of the user b, the to-be-processed image D includes a target face image of the user C, and the to-be-processed image E includes a target face image of the user a, so the terminal may select the to-be-processed image C, D, E as the alternative image.
After the candidate images are selected from the images to be processed, the terminal can select the image to be processed containing the largest number of target face images as a basic image.
For example, since the candidate images C, D, E each include only 1 target face image, the terminal may select any one of the candidate images C, D, E as a base image.
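A short sketch of steps S209 and S210 under the same assumed layout: the candidate images are those holding at least one target face image, and the base image is the candidate holding the most of them (here any of C, D, E, since each holds exactly one).

```python
# Illustrative sketch of steps S209-S210; `target_face` is the
# {person: image} mapping produced in the previous sketch.
from collections import Counter

def pick_base_image(target_face):
    counts = Counter(target_face.values())  # image -> number of target faces
    # Candidates are the images containing at least one target face;
    # ties (as in the example, 1-1-1) are broken arbitrarily.
    return max(counts, key=counts.get)
```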
In an implementation, this embodiment may further include the following steps:
determining a face image to be replaced from a basic image, wherein the face image to be replaced is a non-target face image;
acquiring a target face image for replacing each face image to be replaced from alternative images, wherein each face image to be replaced and the corresponding target face image are face images of the same user;
and in the basic image, replacing the corresponding face image to be replaced by using the target face image to obtain a basic image subjected to image replacement processing.
For example, the terminal selects the alternative image C as a base image, and then, the terminal may determine a face image to be replaced from the base image C, where the face image to be replaced is a non-target face image of the user.
For example, person a's face image in base image C is not person a's target face image, and person c's face image in base image C is not person c's target face image, so the terminal can determine the face images of persons a and c in base image C as the face images to be replaced.
Then, the terminal can obtain a target face image for replacing each face image to be replaced from the alternative images outside the basic image, wherein each face image to be replaced and the corresponding target face image are face images of the same user.
For example, the terminal may obtain person c's target face image from candidate image D and person a's target face image from candidate image E.
Then, the terminal can replace person c's face image to be replaced in base image C with person c's target face image from candidate image D, and replace person a's face image to be replaced in base image C with person a's target face image from candidate image E, thereby obtaining base image C after image replacement processing.
It can be understood that the face images of persons a, b, and c in the replacement-processed base image C are their respective target face images, i.e., the face images in which each person's eyes are larger and face is sharper.
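The replacement step can be sketched as below. The rectangle paste is a deliberate simplification: the patent does not say how the replacement is performed, so the box layouts, the resize to equalise sizes, and the helper name are all assumptions; a production pipeline would align the faces and blend the seams.

```python
# Hedged sketch of the face-replacement step on the base image.
import cv2

def replace_faces(base_name, target_face, images, boxes):
    """images: {name: BGR ndarray}; boxes: {name: {person: (x, y, w, h)}};
    target_face: {person: image name} from step S208 (assumed layouts)."""
    out = images[base_name].copy()
    for person, src_name in target_face.items():
        if src_name == base_name:      # this face is already the target
            continue
        x, y, w, h = boxes[base_name][person]
        sx, sy, sw, sh = boxes[src_name][person]
        patch = images[src_name][sy:sy + sh, sx:sx + sw]
        # Naive paste into the base image; real pipelines would blend.
        out[y:y + h, x:x + w] = cv2.resize(patch, (w, h))
    return out
```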
In an embodiment, after the step of obtaining the base image subjected to the image replacement processing, this embodiment may further include the following steps:
and according to the image to be processed, carrying out image noise reduction processing on the basic image subjected to the image replacement processing.
For example, after obtaining the base image C subjected to the image replacement processing, the terminal may perform image denoising processing on the base image C subjected to the image replacement processing according to other images to be processed.
For example, the terminal may acquire 4 frames of images continuously acquired including the image C, and perform multi-frame noise reduction processing on the base image C subjected to the image replacement processing. For example, the terminal may acquire the image D, E, F and perform multi-frame noise reduction processing on the base image C subjected to image replacement processing according to the image D, E, F.
In one embodiment, when performing multi-frame noise reduction, the terminal may align images C, D, E, F and obtain the pixel values of each group of aligned pixels. If the pixel values within a group of aligned pixels differ only slightly, the terminal can compute the mean of the group and replace the corresponding pixel value in image C with that mean. If the pixel values within a group differ greatly, the pixel value in image C may be left unadjusted.
For example, pixel P1 in image C, pixel P2 in image D, pixel P3 in image E, and pixel P4 in image F form a group of mutually aligned pixels. If the pixel value of P1 is 101, P2 is 102, P3 is 103, and P4 is 104, the mean of the group is 102.5, and the terminal may adjust P1 in image C from 101 to 102.5, thereby denoising P1. If instead the pixel value of P1 is 103, P2 is 83, P3 is 90, and P4 is 80, the values differ greatly, so P1 may be left unadjusted, i.e., its value remains 103.
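The averaging rule just described can be expressed compactly with NumPy. The frames are assumed to be already aligned, and the agreement tolerance of 10 grey levels is an assumed stand-in for the patent's unspecified "not different" test.

```python
# Sketch of the multi-frame noise-reduction rule: average aligned
# pixels that agree, keep the base pixel where the frames diverge.
import numpy as np

def multi_frame_denoise(base, others, tol=10):
    stack = np.stack([base] + list(others)).astype(np.float32)
    mean = stack.mean(axis=0)
    # A pixel "agrees" when every frame is close to the stack mean there.
    spread = np.abs(stack - mean).max(axis=0)
    agree = spread <= tol
    out = base.astype(np.float32)
    out[agree] = mean[agree]        # e.g. 101,102,103,104 -> 102.5
    return out.astype(base.dtype)   # divergent pixels keep the base value
```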
In another embodiment, the step in S202 of obtaining the image area, the face-region area, and their ratio may also be performed as follows. For example, image A to be processed is a group photo of persons a, b, and c. The terminal may obtain the area M1 of person a's face region, the area M2 of person b's face region, and the area M3 of person c's face region, and compute the average face-region area (M1 + M2 + M3)/3. Then, the terminal may calculate the ratio of this average area to the image area of image A and detect whether the ratio reaches the preset ratio threshold. If not, the terminal may perform other operations. If so, the terminal is triggered to obtain the sharpness of each face image in each image to be processed.
Please refer to fig. 3, which is a scene diagram illustrating a method for selecting an image according to an embodiment of the present disclosure.
In this embodiment, after entering a preview interface of a camera, if it is detected that the terminal is acquiring a face image, the terminal may acquire a current environmental parameter, and determine a target frame number according to at least two acquired face images. The environmental parameter may be ambient light level.
If the terminal determines that the face in the image is not displaced (or has small displacement) according to the collected face images in at least two frames and is currently in a bright environment, the terminal can determine the target frame number as 8 frames. If the terminal determines that the face in the image is not displaced (or has small displacement) according to the collected face images in at least two frames and is currently in a dark light environment, the terminal can determine the target frame number as 6 frames. If the terminal determines that the human face in the images is displaced according to the collected at least two frames of human face images, the terminal can determine the number of the target frames as 4 frames.
In one embodiment, whether the face in the image has shifted can be detected as follows: after acquiring the frames, the terminal can generate a coordinate system and place each frame into the coordinate system in the same manner. The terminal can then obtain the coordinates, in that coordinate system, of the facial feature points in each frame. After obtaining these coordinates, the terminal can compare whether the coordinates of the same facial feature points are the same across frames. If they are the same, the face in the images can be considered not to have shifted; if not, the face can be considered to have shifted. If a shift is detected, the terminal can obtain the specific displacement value: a value within a preset range indicates a small shift, and a value outside the range indicates a large shift.
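The displacement test and the resulting frame-count choice might be sketched as follows; the landmark source and the pixel threshold are assumptions, since the patent only speaks of a preset value range.

```python
# Sketch of the displacement test across preview frames.
import numpy as np

def classify_displacement(landmarks_per_frame, small=5.0):
    """landmarks_per_frame: list of (K, 2) arrays, the same K facial
    feature points in the same order in every frame (assumed input)."""
    ref = np.asarray(landmarks_per_frame[0], dtype=np.float32)
    max_shift = 0.0
    for pts in landmarks_per_frame[1:]:
        d = np.linalg.norm(np.asarray(pts, np.float32) - ref, axis=1)
        max_shift = max(max_shift, float(d.max()))
    if max_shift == 0.0:
        return 'none'
    return 'small' if max_shift <= small else 'large'

def target_frame_count(displacement, bright_scene):
    # Still (or nearly still) face: 8 frames when bright, 6 when dark;
    # a clearly moving face drops the target to 4 frames.
    if displacement == 'large':
        return 4
    return 8 if bright_scene else 6
```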
In this embodiment, the terminal may store the acquired image in the buffer queue. The buffer queue may be a fixed-length queue, for example, the buffer queue may store 10 frames of images newly acquired by the terminal.
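The fixed-length cache maps naturally onto a bounded deque; the 10-frame capacity follows the example in the text.

```python
# Sketch of the fixed-length buffer queue of newly captured frames.
from collections import deque

frame_buffer = deque(maxlen=10)    # keeps only the 10 newest frames

def on_frame_captured(frame):
    frame_buffer.append(frame)     # the oldest frame is evicted automatically
```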
For example, four people, persons a, b, c, and d, are out playing and prepare to take a photo beside some scenery. Person d takes a group photo of persons a, b, and c. After entering the camera's preview interface, the terminal detects from 4 collected frames that the positions of the face images in the picture have not shifted and that it is currently in a bright environment. Based on this, the terminal determines the target frame number as 8. Before the shutter button is pressed, the camera continuously and rapidly captures images and stores them in the cache queue.
Thereafter, when the shutter button is pressed, the terminal can fetch from the buffer queue the 8 most recently captured frames containing persons a, b, and c. For example, in chronological order the 8 frames are A, B, C, D, E, F, G, H. It is understood that these 8 frames are the images to be processed.
After obtaining the 8 frames, the terminal can acquire the whole-image sharpness of each frame and then remove the frames with obviously poor sharpness. For example, the sharpness of the images to be processed A, B, C, D, E, F, G, H is 90, 91, 93, 96, 95, 94, 80, and 79, respectively. Since images G and H are clearly less sharp, the terminal can delete them; that is, the terminal selects an image for processing only from images A, B, C, D, E, F.
In one embodiment, after all the images to be processed are acquired, the terminal may acquire the whole-image sharpness of each frame and then take the maximum value. For example, on a sharpness scale of 0 to 100, the maximum sharpness among the images to be processed is 96. The terminal may then take a preset percentage, for example 90%, and compute the product of the maximum sharpness and that percentage: 96 × 90% = 86.4. Any image to be processed whose sharpness is lower than 86.4 can be regarded as unclear, and the terminal can exclude it from the images to be processed.
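This culling rule is simple to state in code; the helper name is hypothetical and the 90% keep ratio follows the example.

```python
# Sketch of the sharpness-culling rule: drop frames below a preset
# percentage of the sharpest frame's score.

def cull_blurry(frames, sharpness, keep_ratio=0.90):
    """frames: list of names; sharpness: {name: whole-image score}."""
    cutoff = max(sharpness.values()) * keep_ratio   # 96 * 0.9 = 86.4 here
    return [f for f in frames if sharpness[f] >= cutoff]

scores = {'A': 90, 'B': 91, 'C': 93, 'D': 96, 'E': 95, 'F': 94,
          'G': 80, 'H': 79}
print(cull_blurry(list(scores), scores))   # G and H fall below 86.4
```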
Thereafter, the terminal may acquire the image area of any one frame of the image to be processed A, B, C, D, E, F and the average value of the areas of the face regions of all users in the image to be processed.
For example, if the image area of image A obtained by the terminal is 100 and the average face-region area of persons a, b, and c in image A is 12, then the ratio of the average face-region area (12) to the image area (100) is 12%, which exceeds the preset ratio threshold of 8%.
In this case, the terminal may further acquire the sharpness of the face region in each frame of the images to be processed. For example, the face region may be the region containing the face images of all users. Suppose the face-region sharpness of the images to be processed A, B, C, D, E, F is 92, 92, 94, 94, 94, and 93, respectively.
Then, the terminal may obtain a third weight and a fourth weight, weight the image sharpness of each image to be processed according to the third weight to obtain the weighted image sharpness of each image to be processed, and weight the face sharpness of each image to be processed according to the fourth weight to obtain the weighted face sharpness of each image to be processed.
For example, the third weight is 40% and the fourth weight is 60%. Then the weighted image sharpness of image A is 90 × 40% = 36 and its weighted face sharpness is 92 × 60% = 55.2. For image B, they are 91 × 40% = 36.4 and 92 × 60% = 55.2. For image C, 93 × 40% = 37.2 and 94 × 60% = 56.4. For image D, 96 × 40% = 38.4 and 94 × 60% = 56.4. For image E, 95 × 40% = 38 and 94 × 60% = 56.4. For image F, 94 × 40% = 37.6 and 93 × 60% = 55.8.
The terminal can then acquire the overall sharpness of each image to be processed, i.e., the sum of the weighted image sharpness and the weighted face sharpness. For example, the overall sharpness of image A is 36 + 55.2 = 91.2, of image B 36.4 + 55.2 = 91.6, of image C 37.2 + 56.4 = 93.6, of image D 38.4 + 56.4 = 94.8, of image E 38 + 56.4 = 94.4, and of image F 37.6 + 55.8 = 93.4.
Thereafter, the terminal may take the maximum overall sharpness, here 94.8. The terminal may then take a first proportion, for example 95%, and compute the product of the maximum overall sharpness and the first proportion: 94.8 × 95% = 90.06.
Then, the terminal can select from the images to be processed those whose overall sharpness exceeds 90.06. Since the overall sharpness of every frame exceeds 90.06 here, the terminal selects images A, B, C, D, E, F; these images can be considered sharp.
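The overall-sharpness pass can be sketched with the example's numbers; the 40%/60% third/fourth weights and the 95% first proportion follow the text, while the dictionary layout is an assumption of this sketch.

```python
# Sketch of the overall-sharpness filter over the six remaining frames.
W_IMAGE, W_FACE = 0.4, 0.6   # third and fourth weights from the example

image_sharp = {'A': 90, 'B': 91, 'C': 93, 'D': 96, 'E': 95, 'F': 94}
face_sharp  = {'A': 92, 'B': 92, 'C': 94, 'D': 94, 'E': 94, 'F': 93}

overall = {f: W_IMAGE * image_sharp[f] + W_FACE * face_sharp[f]
           for f in image_sharp}           # e.g. D -> 94.8
cutoff = max(overall.values()) * 0.95      # 94.8 * 0.95 = 90.06
clear = [f for f in overall if overall[f] > cutoff]
print(clear)   # all of A-F clear the cutoff in this example
```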
Then, the terminal can acquire the eye value of each face in each of these images to be processed. For example, the eye values of persons a, b, and c in the images to be processed A, B, C, D, E, F are as shown in Table 2.
The terminal can obtain each person's maximum eye value. For example, person a's maximum eye value is 85, person b's is 88, and person c's is 86.
Then, the terminal may take a second ratio, for example 95%, and compute the product of each person's maximum eye value and the second ratio. For person a, 85 × 95% = 80.75; for person b, 88 × 95% = 83.6; for person c, 86 × 95% = 81.7.
The terminal may then select a base image from the images to be processed. For example, the terminal may first detect whether there is an image satisfying the following conditions: the overall sharpness of the image is greater than 90.06, and the eye value of each face in the image is greater than the corresponding person's maximum eye value times the second ratio, i.e., person a's eye value in the image is greater than 80.75, person b's is greater than 83.6, and person c's is greater than 81.7.
Images C, D, E, F are detected to satisfy these conditions, so the terminal can select any one of them as the base image. For example, the terminal selects image C as the base image.
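A sketch of the base-image qualification test just described; `overall` and `eye_values` are the mappings built in the earlier sketches, `max_eye` restates the per-person maxima from Table 2, and the hard-coded cutoff mirrors the worked example.

```python
# Sketch of the combined base-image test from the scenario above.

def qualifies(frame, overall, eye_values, max_eye, second_ratio=0.95):
    if overall[frame] <= 90.06:          # overall-sharpness condition
        return False
    # Every person's eye value must clear that person's own cutoff.
    return all(eye_values[frame][p] > max_eye[p] * second_ratio
               for p in max_eye)

max_eye = {'a': 85, 'b': 88, 'c': 86}    # per-person maxima from Table 2
# With the example data, frames C, D, E and F qualify while A and B
# fail an eye cutoff, matching the selection described in the text.
```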
Then, the terminal can detect whether the base image contains the face image of a target user whose eye value is smaller than the product of that user's maximum eye value and the second ratio. If such a face image exists, it is determined as a face image to be replaced; a target face image of that user whose eye value is greater than the product is obtained from the other images to be processed, and it is used to replace the face image to be replaced in the base image.
For example, in this embodiment, because the eye values of persons a, b, and c in base image C are all greater than the products of their respective maximum eye values and the second ratio, no face image in base image C needs to be replaced. In this case, the terminal may store base image C into the album as the photograph.
It can be understood that the photograph obtained in this embodiment is a photograph of the three people in which everyone's eyes are wide open and every face is sharp.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image selecting device according to an embodiment of the present disclosure. The image selecting apparatus 300 may include: a first obtaining module 301, a second obtaining module 302, a third obtaining module 303, and a selecting module 304.
The first obtaining module 301 is configured to obtain an image to be processed including a face image.
A second obtaining module 302, configured to obtain a sharpness of each face image in each to-be-processed image.
A third obtaining module 303, configured to obtain a preset part value of each face image in each image to be processed, where the preset part value is a numerical value used to represent the size of a preset part of the face image.
A selecting module 304, configured to select an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
In one embodiment, the second obtaining module 302 is further configured to:
acquiring the image area of the image to be processed and the area of a face region;
acquiring the area proportion of the area of the face region in the image area;
and if the area proportion reaches a preset proportion threshold, acquiring the sharpness of each face image in each image to be processed.
In one embodiment, the selecting module 304 is configured to:
acquiring a first weight and a second weight;
in each image to be processed, weighting the sharpness of each face image according to the first weight to obtain the weighted face sharpness of each face image, and weighting the preset part value of each face image according to the second weight to obtain the weighted preset part value of each face image;
acquiring a target value of each face image in each image to be processed, wherein the target value is the sum of the weighted face sharpness and the weighted preset part value;
determining the face image corresponding to the maximum value among each user's target values as the target face image of that user;
and selecting the image to be processed containing the target face image of the user as an alternative image.
In one embodiment, the selecting module 304 is configured to:
and selecting the image to be processed containing the maximum number of the target face images as a basic image.
Referring to fig. 5, fig. 5 is another schematic structural diagram of an image selecting device according to an embodiment of the present disclosure. In an embodiment, the image selecting apparatus 300 may further include: a replacement module 305 and a processing module 306.
A replacing module 305, configured to determine a face image to be replaced from a base image, where the face image to be replaced is a non-target face image; acquiring a target face image for replacing each face image to be replaced from alternative images, wherein each face image to be replaced and the corresponding target face image are face images of the same user; and in the basic image, replacing the corresponding face image to be replaced by using the target face image to obtain a basic image subjected to image replacement processing.
And the processing module 306 is configured to perform image denoising processing on the base image subjected to the image replacement processing according to the image to be processed.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is caused to execute the steps in the image selecting method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the steps in the image selection method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 6, fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The mobile terminal 400 may include a camera module 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The camera module 401 may include a single camera module and a dual camera module.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the mobile terminal.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, thereby implementing the steps:
acquiring images to be processed containing face images; acquiring the sharpness of each face image in each image to be processed; acquiring a preset part value of each face image in each image to be processed, wherein the preset part value is a numerical value used for representing the size of a preset part of the face image; and selecting an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
The embodiment of the invention also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 7 is a diagram illustrating an exemplary image processing circuit. As shown in fig. 7, for ease of explanation, only aspects of the image processing techniques related to embodiments of the present invention are shown.
As shown in fig. 7, the image processing circuit includes an image signal processor 540 and control logic 550. Image data captured by the imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 510. The imaging device 510 may include a camera with one or more lenses 511 and an image sensor 512. The image sensor 512 may include an array of color filters (e.g., Bayer filters), may acquire the light intensity and wavelength information captured by each of its imaging pixels, and may provide a set of raw image data that can be processed by the image signal processor 540. The sensor 520 may provide the raw image data to the image signal processor 540 based on the sensor 520 interface type. The sensor 520 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The image signal processor 540 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and image signal processor 540 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image signal processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image Memory 530 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 530 for additional processing before being displayed. An image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by a user and/or further Processing by a Graphics Processing Unit (GPU). Further, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, image memory 530 may be configured to implement one or more frame buffers. Further, the output of the image signal processor 540 may be transmitted to an encoder/decoder 560 so as to encode/decode image data. The encoded image data may be saved and decompressed before being displayed on the display 570 device. The encoder/decoder 560 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic 550. For example, the statistical data may include image sensor 512 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, and lens 511 shading correction. The control logic 550 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine, based on the received statistical data, control parameters of the imaging device 510 and ISP control parameters. For example, the imaging device control parameters may include sensor 520 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 511 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 511 shading correction parameters.
The following steps are steps for implementing the image processing method provided by the embodiment by using the image processing technology in fig. 7:
acquiring images to be processed containing face images; acquiring the sharpness of each face image in each image to be processed; acquiring a preset part value of each face image in each image to be processed, wherein the preset part value is a numerical value used for representing the size of a preset part of the face image; and selecting an image for processing from the images to be processed according to the sharpness and the preset part value of each face image in each image to be processed.
In one embodiment, after the step of acquiring the image to be processed containing the face image, the electronic device may further perform: acquiring the image area of the image to be processed and the area of the face region; and acquiring the proportion of the face-region area to the image area.
Then, when performing the step of acquiring the definition of each face image in each image to be processed, the electronic device may perform: if the area proportion reaches a preset proportion threshold, acquiring the definition of each face image in each image to be processed.
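A minimal sketch of this gate, assuming axis-aligned face boxes and an illustrative threshold (neither is specified by the patent):

```python
def passes_area_gate(image_shape, face_boxes, ratio_threshold=0.05):
    # face_boxes: (x, y, w, h) rectangles from a face detector;
    # ratio_threshold (5% here) is illustrative only.
    img_h, img_w = image_shape[:2]
    face_area = sum(w * h for (_, _, w, h) in face_boxes)
    return face_area / (img_h * img_w) >= ratio_threshold
```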
In one embodiment, when performing the step of selecting an image for processing from the images to be processed according to the definition and the preset position value of each face image, the electronic device may perform: acquiring a first weight and a second weight; in each image to be processed, weighting the definition of each face image by the first weight to obtain the weighted face definition of each face image, and weighting the preset position value of each face image by the second weight to obtain the weighted preset position value of each face image; acquiring a target value for each face image in each image to be processed, the target value being the sum of the weighted face definition and the weighted preset position value; determining the face image corresponding to the maximum value among each user's target values as that user's target face image; and selecting the image to be processed containing the target face image of the user as an alternative image.
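This scoring step amounts to a per-user arg-max over a weighted sum. The sketch below assumes the definition and the preset position value have already been normalized to comparable scales, and the weights are illustrative (the patent does not state how they are obtained):

```python
def pick_target_faces(frames: list, w1: float = 0.6, w2: float = 0.4) -> dict:
    # frames: one dict per image, mapping user_id to a pre-normalized
    # (definition, preset_position_value) pair. Returns user_id -> index
    # of the frame holding that user's target face image.
    best = {}  # user_id -> (target_value, frame_index)
    for idx, faces in enumerate(frames):
        for user, (definition, part_value) in faces.items():
            target = w1 * definition + w2 * part_value  # weighted sum
            if user not in best or target > best[user][0]:
                best[user] = (target, idx)
    return {user: idx for user, (_, idx) in best.items()}
```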
In one embodiment, after the step of determining the face image corresponding to the maximum value among each user's target values as that user's target face image, the electronic device may further perform: selecting the image to be processed containing the largest number of target face images as the basic image.
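A short sketch of that choice; breaking ties by the lowest frame index is an assumption, as the patent does not address ties:

```python
from collections import Counter

def choose_basic_image(target_frame_per_user: dict) -> int:
    # target_frame_per_user: user_id -> frame index of that user's
    # target face image (e.g. the output of pick_target_faces above).
    counts = Counter(target_frame_per_user.values())
    return min(counts, key=lambda idx: (-counts[idx], idx))
```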
In one embodiment, the electronic device may further perform: determining the face images to be replaced in the basic image, a face image to be replaced being a non-target face image; acquiring, from the alternative images, the target face image used to replace each face image to be replaced, where each face image to be replaced and its corresponding target face image are face images of the same user; and, in the basic image, replacing each face image to be replaced with the corresponding target face image to obtain a basic image subjected to image replacement processing.
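Since claim 1 only buffers frames when the faces have not been displaced, the face regions can be assumed roughly aligned across frames, so a naive sketch may copy pixel rectangles directly; a production pipeline would blend the seams rather than hard-paste:

```python
import numpy as np

def replace_faces(basic: np.ndarray, frames: list, to_replace: dict) -> np.ndarray:
    # to_replace: user_id -> (frame_index, (x, y, w, h)). Copies each
    # target face patch from its alternative frame into the basic image;
    # assumes all burst frames share the basic image's dimensions.
    out = basic.copy()
    for user, (idx, (x, y, w, h)) in to_replace.items():
        out[y:y + h, x:x + w] = frames[idx][y:y + h, x:x + w]
    return out
```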
In one embodiment, after the step of obtaining the basic image subjected to image replacement processing, the electronic device may further perform: image noise reduction on the replaced basic image according to the images to be processed.
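One plausible reading of noise reduction "according to the images to be processed" is multi-frame temporal averaging; the 50/50 blend weight below is an assumption, not taken from the patent:

```python
import numpy as np

def temporal_denoise(frames: list, composited: np.ndarray) -> np.ndarray:
    # Average the aligned burst frames, then blend the mean with the
    # composited basic image to suppress sensor noise (sketch only).
    mean = np.mean(np.stack(frames).astype(np.float32), axis=0)
    blended = 0.5 * composited.astype(np.float32) + 0.5 * mean
    return np.clip(blended, 0, 255).astype(np.uint8)
```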
Each of the above embodiments has its own emphasis; for details not described in a given embodiment, refer to the detailed description of the image selection method above, which is not repeated here.
The image selection device provided in the embodiments of the present application and the image selection method of the above embodiments belong to the same concept; any method provided in the image selection method embodiments may run on the image selection device, and its specific implementation is described in detail in those embodiments and is not repeated here.
It should be noted that, for the image selection method described in the embodiments of the present application, those skilled in the art will understand that all or part of the process of implementing the method may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; its execution may include the processes of the image selection method embodiments described herein. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
For the image selection apparatus of the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. When implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The image selection method, apparatus, storage medium, and electronic device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (7)

1. An image selection method is characterized by comprising the following steps:
when detecting that an image containing a human face is collected, collecting current environmental parameters; when determining that the human face in the image is not displaced, determining a target frame number according to the environmental parameters and storing the collected images into a cache queue;
acquiring a plurality of frames of images containing face images from the cache queue as images to be processed, wherein the number of the frames of the images to be processed is the target number of frames;
acquiring the image area of the image to be processed and the area of a face region;
acquiring the area proportion of the area of the face region in the image area;
if the area proportion reaches a preset proportion threshold value, acquiring the definition of each face image in each image to be processed;
acquiring a preset position value of each face image in each image to be processed, wherein the preset position value is a numerical value used for representing the size of a preset part of the face image;
acquiring a first weight and a second weight;
in each image to be processed, weighting the definition of each face image according to the first weight to obtain the weighted face definition of each face image, and weighting the preset position value of each face image according to the second weight to obtain the weighted preset position value of each face image;
acquiring a target value of each face image in each image to be processed, wherein the target value is the sum of the weighted face definition and the weighted preset position value;
determining a face image corresponding to the maximum value in the target numerical values of each user as a target face image of the user;
and selecting the image to be processed containing the target face image of the user as an alternative image.
2. The method for selecting an image according to claim 1, wherein after the step of determining the face image corresponding to the maximum value in the target numerical values of each user as the target face image of the user, the method further comprises:
and selecting the image to be processed containing the maximum number of the target face images as a basic image.
3. The method for selecting an image according to claim 2, further comprising:
determining a face image to be replaced from a basic image, wherein the face image to be replaced is a non-target face image;
acquiring a target face image for replacing each face image to be replaced from alternative images, wherein each face image to be replaced and the corresponding target face image are face images of the same user;
and in the basic image, replacing the corresponding face image to be replaced by using the target face image to obtain a basic image subjected to image replacement processing.
4. The method for selecting an image according to claim 3, wherein after the step of obtaining the base image subjected to the image replacement processing, the method further comprises:
and according to the image to be processed, carrying out image noise reduction processing on the basic image subjected to the image replacement processing.
5. An image selecting apparatus, comprising:
the first acquisition module is used for acquiring current environmental parameters when detecting that images containing human faces are acquired, determining the number of target frames according to the environmental parameters when determining that the human faces in the images are not displaced, and storing the acquired images in a cache queue;
the first obtaining module is further configured to obtain multiple frames of images including face images from the buffer queue as images to be processed, where the number of frames of the images to be processed is the target number of frames;
the first acquisition module is further used for acquiring the image area of the image to be processed and the area of the face region; acquiring the area proportion of the area of the face region in the image area;
the second acquisition module is used for acquiring the definition of each face image in each image to be processed if the area ratio reaches a preset ratio threshold;
the third acquisition module is used for acquiring a preset position value of each face image in each image to be processed, wherein the preset position value is a numerical value used for representing the size of the preset position of the face image;
the selecting module is used for acquiring a first weight and a second weight; in each image to be processed, weighting the definition of each face image according to the first weight to obtain the weighted face definition of each face image, and weighting the preset position value of each face image according to the second weight to obtain the weighted preset position value of each face image; acquiring a target value of each face image in each image to be processed, wherein the target value is the sum of the weighted face definition and the weighted preset position value; determining a face image corresponding to the maximum value in the target numerical values of each user as a target face image of the user; and selecting the image to be processed containing the target face image of the user as an alternative image.
6. A storage medium having stored thereon a computer program, characterized in that the computer program, when executed on a computer, causes the computer to execute the method according to any of claims 1 to 4.
7. An electronic device comprising a memory and a processor, wherein the processor is configured to perform the method of any of claims 1 to 4 by invoking a computer program stored in the memory.
CN201810276595.1A 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment Active CN108513068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810276595.1A CN108513068B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN108513068A CN108513068A (en) 2018-09-07
CN108513068B true CN108513068B (en) 2021-03-02

Family

ID=63379345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810276595.1A Active CN108513068B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108513068B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175980A (en) * 2019-04-11 2019-08-27 平安科技(深圳)有限公司 Image definition recognition methods, image definition identification device and terminal device
CN111444770A (en) * 2020-02-26 2020-07-24 北京大米未来科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN111696051A (en) * 2020-05-14 2020-09-22 维沃移动通信有限公司 Portrait restoration method and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008172395A (en) * 2007-01-10 2008-07-24 Sony Corp Imaging apparatus and image processing apparatus, method, and program
CN101617339A (en) * 2007-02-15 2009-12-30 索尼株式会社 Image processing apparatus and image processing method
CN102209196A (en) * 2010-03-30 2011-10-05 株式会社尼康 Image processing device and image estimating method
CN102377905A (en) * 2010-08-18 2012-03-14 佳能株式会社 Image pickup apparatus and control method therefor
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103942525A (en) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face optimal selection method based on video sequence
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for shooting multiple people
CN106161962A (en) * 2016-08-29 2016-11-23 广东欧珀移动通信有限公司 A kind of image processing method and terminal
CN106331504A (en) * 2016-09-30 2017-01-11 北京小米移动软件有限公司 Shooting method and device


Also Published As

Publication number Publication date
CN108513068A (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN107948519B (en) Image processing method, device and equipment
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110766621B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN108401110B (en) Image acquisition method and device, storage medium and electronic equipment
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
WO2019085951A1 (en) Image processing method, and device
CN108093158B (en) Image blurring processing method and device, mobile device and computer readable medium
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN111327824B (en) Shooting parameter selection method and device, storage medium and electronic equipment
CN111726521B (en) Photographing method and photographing device of terminal and terminal
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN108574803B (en) Image selection method and device, storage medium and electronic equipment
CN110728705B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110717871A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110445986B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108513068B (en) Image selection method and device, storage medium and electronic equipment
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN113012081A (en) Image processing method, device and electronic system
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant