CN108022274B - Image processing method, image processing device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN108022274B
CN108022274B (application CN201711225296.7A)
Authority
CN
China
Prior art keywords
face
images
image
face images
generating
Prior art date
Legal status
Active
Application number
CN201711225296.7A
Other languages
Chinese (zh)
Other versions
CN108022274A (en)
Inventor
陈德银
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711225296.7A
Publication of CN108022274A
Priority to PCT/CN2018/115675 (WO2019105237A1)
Application granted
Publication of CN108022274B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/164 Detection; Localisation; Normalisation using holistic features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, an image processing device, computer equipment and a computer-readable storage medium. The method comprises the following steps: performing face recognition on an image to be processed to obtain a face image; acquiring face information of the face image, wherein the face information comprises a face area, a face position and a face angle; comparing the face information of a plurality of face images to determine whether the face images are continuously shot images; and if the plurality of face images are continuously shot images, generating a moving picture from the plurality of face images. In the embodiments of the application, the computer equipment can collect continuously shot face images and generate a moving picture from them without manual operation by the user, which simplifies the user's operation steps and improves the efficiency of generating moving pictures.

Description

Image processing method, image processing device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer device, and a computer-readable storage medium.
Background
With the rapid development of smart computer devices, more and more users take pictures with them. The image processing functions of smart computer devices are also becoming increasingly comprehensive and diverse; for example, a smart computer device can adjust the color, brightness, contrast and saturation of an image, and can remove a partial region from an image.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, computer equipment and a computer readable storage medium, which can generate a moving picture from a plurality of face images.
An image processing method comprising:
carrying out face recognition on an image to be processed to obtain a face image;
acquiring face information of the face image, wherein the face information comprises a face area, a face position and a face angle;
comparing the face information of a plurality of face images to determine whether the face images are continuous shooting images;
and if the plurality of face images are continuously shot images, generating a moving picture according to the plurality of face images.
An image processing apparatus comprising:
the recognition module is used for carrying out face recognition on the image to be processed to acquire a face image;
the acquisition module is used for acquiring face information of the face image, wherein the face information comprises a face area, a face position and a face angle;
the comparison module is used for comparing the face information of a plurality of face images and determining whether the face images are continuous shooting images;
and the processing module is used for generating a moving picture according to the plurality of face images if the plurality of face images are continuously shot images.
A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 7.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
In the embodiments of the application, the computer equipment can collect continuously shot face images and generate a moving picture from them without manual operation by the user, which simplifies the user's operation steps and improves the efficiency of generating moving pictures.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of an image processing method in one embodiment;
FIG. 2 is a flowchart of an image processing method in another embodiment;
FIG. 3 is a flowchart of an image processing method in another embodiment;
FIG. 4 is a flowchart of an image processing method in another embodiment;
FIG. 5 is a diagram illustrating the generation of a moving picture from a plurality of face images in one embodiment;
FIG. 6 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 7 is a block diagram showing the structure of an image processing apparatus in another embodiment;
FIG. 8 is a block diagram showing the structure of an image processing apparatus in another embodiment;
FIG. 9 is a block diagram of a partial structure of a mobile phone serving as the computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
FIG. 1 is a flowchart of an image processing method in one embodiment. As shown in FIG. 1, the image processing method includes:
and 102, carrying out face recognition on the image to be processed to obtain a face image.
The computer equipment can adopt a face recognition algorithm to perform face recognition on the image to be processed, detect whether a face exists in the image to be processed, and if the face exists in the image to be processed, the image to be processed is a face image. The image to be processed can be an image shot and acquired by the computer equipment, an image stored by the computer equipment and an image downloaded by the computer equipment through a data network or a wireless local area network.
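The following is a minimal Python sketch of this detection step. The patent does not name a specific face recognition algorithm, so an OpenCV Haar cascade is used here purely as an illustrative stand-in, and the helper name is an assumption.

```python
import cv2


def detect_faces(path):
    """Return face bounding boxes for the image to be processed; an OpenCV Haar
    cascade stands in here for the unspecified face recognition algorithm."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))


# an image is treated as a face image when at least one face is detected:
# is_face_image = len(detect_faces("photo.jpg")) > 0
```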
Step 104: acquire face information of the face image, wherein the face information includes a face area, a face position and a face angle.
After the computer device acquires the face image, it can acquire the face information of the face image. The face information includes the face area, the face position and the face angle. The face area, that is, the area of the face region in the image, may be represented by the number of pixels in the face region or by the proportion of the image occupied by the face region. The face position is the position of the face region in the image and may be represented by pixel coordinates of the face region, for example the pixel in the 3rd row and 3rd column. The face angle refers to the rotation angle of the face relative to a standard face in three-dimensional space, and may be represented by angles in three mutually perpendicular planes of a spatial rectangular coordinate system. In three-dimensional space, three mutually perpendicular straight lines intersecting at one point form a spatial rectangular coordinate system whose three coordinate planes are mutually perpendicular; the computer device can obtain the offset angle of the face relative to the standard face in each plane, and the offset angles in the three planes constitute the face angle. The standard face is a face pre-stored in the computer device and may be a face synthesized by the computer device or a face set by the user; for example, it may be a face in an image captured by the camera when the user takes a picture.
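As a concrete illustration of this face information, the sketch below bundles the three attributes into one structure. The field layout, and the assumption that the angle comes from a separate head-pose estimator and is expressed as (yaw, pitch, roll) offsets from the standard face, are choices made for this example rather than details given in the patent.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class FaceInfo:
    box: Tuple[int, int, int, int]      # (x, y, w, h) of the face region, in pixels
    image_size: Tuple[int, int]         # (width, height) of the whole image
    angle: Tuple[float, float, float]   # offsets from the standard face in the three planes, in degrees

    @property
    def area(self) -> float:
        """Face area expressed as the proportion of the image occupied by the face region."""
        x, y, w, h = self.box
        iw, ih = self.image_size
        return (w * h) / float(iw * ih)

    @property
    def position(self) -> Tuple[float, float]:
        """Face position as the pixel coordinates of the centre of the face region."""
        x, y, w, h = self.box
        return (x + w / 2.0, y + h / 2.0)
```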
When a plurality of faces exist in the face image, the computer device can screen the faces using their face information. The screening may use at least one of the following methods (a sketch of methods (1) and (2) is given after this list):
(1) The computer device obtains the face area of each face, determines the maximum face area, and selects the faces whose face areas fall within a preset proportion range of that maximum; for example, it selects the faces whose face area is greater than 80% of the maximum face area.
(2) The computer device detects whether the face position of each face is within a preset position, which may be the central region of the image. It can detect whether a face region lies in the central region and select the faces whose regions do. If only part of a face region is in the central region, the computer device can calculate the ratio of the area of the face region falling in the central region to the total area of the face region and select the face if the ratio is greater than a preset threshold, for example 60%.
(3) The computer device can receive a user's selection instruction for a face in the face image and select the face from the face image according to the selection instruction.
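A minimal sketch of screening methods (1) and (2) follows. The 80% area ratio and the 60% overlap threshold come from the examples above, while the choice of the middle half of the image as the "preset position" is an assumption made for this sketch.

```python
def screen_faces(face_boxes, image_size, area_ratio=0.8, center_overlap=0.6):
    """face_boxes: list of (x, y, w, h) face rectangles; image_size: (width, height).
    Keeps faces by method (1) (area >= 80% of the largest face) and
    method (2) (at least 60% of the face region inside the central area)."""
    if not face_boxes:
        return []
    img_w, img_h = image_size
    max_area = max(w * h for _, _, w, h in face_boxes)
    # the "preset position" is assumed here to be the middle half of the image
    cx0, cy0, cx1, cy1 = img_w * 0.25, img_h * 0.25, img_w * 0.75, img_h * 0.75
    kept = []
    for x, y, w, h in face_boxes:
        if w * h < area_ratio * max_area:               # method (1)
            continue
        ox = max(0.0, min(x + w, cx1) - max(x, cx0))    # overlap of the face box
        oy = max(0.0, min(y + h, cy1) - max(y, cy0))    # with the central region
        if ox * oy / (w * h) >= center_overlap:         # method (2)
            kept.append((x, y, w, h))
    return kept
```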
Step 106: compare the face information of a plurality of face images and determine whether the plurality of face images are continuously shot images.
After the computer device acquires a plurality of face images, it can compare their face information and detect whether they are continuously shot images. Continuously shot images are images obtained by continuously photographing the same scene from the same direction and at the same angle.
The computer device compares the face information of a plurality of face images as follows (a sketch of the pairwise comparison is given after this list):
(1) The computer device obtains the face identifier corresponding to each face in the face images. The face identifier is a character string that uniquely identifies a face and may consist of digits, letters, symbols and the like.
(2) The computer device can compare two of the face images by comparing their face information as follows.
It detects whether the difference between the face areas of the faces corresponding to the same face identifier in the two images is within a first threshold. For example, it detects whether the face areas differ by no more than 5%, that is, whether the face area in one image is within 95%-105% of the face area in the other image.
If the difference between the face areas of the faces corresponding to the same face identifier in the two images is within the first threshold, it detects whether the face positions of those faces are the same. After the computer device recognizes the face in one image, it can extract a rectangular frame containing the face; after the two images are superimposed and aligned, if the face in the other image also falls into the rectangular frame, the face positions in the two images are considered the same. The size of the rectangular frame can be adjusted according to the face area.
If the difference between the face areas of the faces corresponding to the same face identifier in the two images is within the first threshold and the face positions are the same, it then detects whether the difference between the face angles is within a second threshold. When the computer device shoots images continuously, the time difference between successive shots is usually very short, for example 0.2 seconds, so the change in the face angle between two adjacent continuously shot images taken while the face rotates is small. The computer device can obtain the face angles in the two images, namely the angles in the three planes of the spatial rectangular coordinate system, and compare whether the angle difference in each of the three planes is within the second threshold, for example 10 degrees. If so, the two face images are judged to be continuously shot images.
(3) The computer device can determine whether multiple images are continuously shot images from the pairwise results. For example, if image A and image B are continuously shot images and image B and image C are continuously shot images, then image A, image B and image C are continuously shot images.
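The sketch below implements the pairwise checks described above and then chains adjacent pairs into burst groups. It assumes the images are supplied in capture order and that the face information is provided as plain dictionaries; the field names, the 5% area tolerance and the 10-degree angle tolerance mirror the examples above, and the centre-in-rectangle test is one possible reading of "falls into the rectangular frame".

```python
def is_burst_pair(faces_a, faces_b, area_tol=0.05, angle_tol=10.0):
    """faces_a, faces_b: dicts mapping face identifier -> {"area", "box", "angle"}
    for two images; "box" is (x, y, w, h) and "angle" is (yaw, pitch, roll)."""
    for fid, fa in faces_a.items():
        fb = faces_b.get(fid)
        if fb is None:
            return False
        # (a) face areas of the same identifier differ by no more than the first threshold
        if not (1 - area_tol) * fb["area"] <= fa["area"] <= (1 + area_tol) * fb["area"]:
            return False
        # (b) same face position: the centre of one face falls inside the other's rectangle
        bx, by, bw, bh = fb["box"]
        cx = fa["box"][0] + fa["box"][2] / 2
        cy = fa["box"][1] + fa["box"][3] / 2
        if not (bx <= cx <= bx + bw and by <= cy <= by + bh):
            return False
        # (c) face angle difference in every plane within the second threshold
        if any(abs(a1 - a2) > angle_tol for a1, a2 in zip(fa["angle"], fb["angle"])):
            return False
    return True


def group_bursts(per_image_faces):
    """Chain pairwise results: if A~B and B~C are burst pairs, A, B and C form one burst."""
    if not per_image_faces:
        return []
    groups, current = [], [0]
    for i in range(1, len(per_image_faces)):
        if is_burst_pair(per_image_faces[i - 1], per_image_faces[i]):
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return [g for g in groups if len(g) > 1]   # only groups with at least two images are bursts
```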
Step 108: if the plurality of face images are continuously shot images, generate a moving picture from the plurality of face images.
After determining that the plurality of face images are continuously shot images, the computer device can generate a moving picture from them. The moving picture is generated by playing the plurality of face images continuously; for example, the computer device may generate a GIF (Graphics Interchange Format) image from the plurality of face images. The computer device may store the resulting moving picture in a folder under a preset path, which may be preset by the computer device or set by the user, for example the system album folder or a folder of a third-party application such as the QQ emoticon folder.
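A minimal way to realise the GIF example with the Pillow library is sketched below; the function name, the output path and the 0.5-second default interval are illustrative assumptions.

```python
from PIL import Image


def make_gif(image_paths, out_path="burst.gif", interval_ms=500):
    """Play the burst of face images continuously by writing them as an animated GIF."""
    frames = [Image.open(p).convert("RGB") for p in image_paths]
    # save_all=True makes Pillow write every appended frame; duration is per frame, in ms
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=interval_ms, loop=0)
    return out_path
```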
With the above method, the computer device can collect the continuously shot face images and generate the moving picture from them without manual operation by the user, which simplifies the user's operation steps and improves the efficiency of generating moving pictures.
In one embodiment, when determining whether the plurality of face images are continuously shot images, after finding that the difference between the face areas of the faces corresponding to the same face identifier in two images is within the first threshold, that the face positions are the same and that the face angle difference is within the second threshold, the computer device may further detect whether the difference between the brightness values of every two of the face images is within a third threshold. If the difference between the brightness values of two images is within the third threshold, the two images are judged to be continuously shot images. Because continuously shot images are captured within a short time, the brightness of the images changes little, so the brightness difference can be used to further judge whether the images are continuously shot.
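The brightness check could be added to the pairwise comparison as sketched below. The Rec. 601 luma weights and the threshold value of 10 are assumptions; the patent only requires that the brightness difference stay within a third threshold.

```python
import numpy as np
from PIL import Image


def mean_brightness(path):
    """Mean luma of the image, using the Rec. 601 weights as an assumed brightness measure."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    return float((rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)).mean())


def within_third_threshold(path_a, path_b, third_threshold=10.0):
    """True if the brightness values of the two images differ by no more than the third threshold."""
    return abs(mean_brightness(path_a) - mean_brightness(path_b)) <= third_threshold
```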
In one embodiment, generating the moving picture from the plurality of face images includes at least one of the following:
(1) generating the moving picture from the plurality of face images according to their storage order;
(2) generating the moving picture from the plurality of face images according to the order of their shooting times;
(3) determining an arrangement order of the plurality of face images according to the face angles and generating the moving picture from the plurality of face images according to that order;
(4) generating the moving picture from the plurality of face images according to the user's selection order.
When the computer device generates the moving picture from the plurality of face images, it can play them continuously in a specified order. The specified order may be the storage order of the face images, that is, the computer device plays the face images continuously from first to last in the order in which they are stored. The computer device can also acquire the shooting time of each face image and play the images continuously from the earliest to the latest. The computer device can also receive the user's selection order for the face images and play them continuously in that order. The computer device can further obtain the face angle in each face image, select the image with the earliest shooting time as the initial image, arrange the other images by their face-angle change relative to the initial image from small to large, and play the initial image and the other images in that order to generate the moving picture.
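Ordering option (3), arranging the frames by face-angle change relative to the earliest-shot image, might look like the following sketch; the dictionary keys and the sum-of-absolute-differences measure of angle change are assumptions.

```python
def order_for_playback(images):
    """images: list of dicts with 'time' (capture timestamp) and
    'angle' (yaw, pitch, roll relative to the standard face)."""
    if not images:
        return []
    start = min(images, key=lambda im: im["time"])       # earliest shot becomes the first frame
    rest = [im for im in images if im is not start]

    def angle_change(im):
        # total angle change relative to the initial image, summed over the three planes
        return sum(abs(a - b) for a, b in zip(im["angle"], start["angle"]))

    return [start] + sorted(rest, key=angle_change)      # smallest change first
```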
With the method in this embodiment, the computer device can play the face images in several different orders to generate the moving picture, which makes the generation of moving pictures more varied and helps improve user stickiness.
In one embodiment, generating the moving picture from the plurality of face images includes at least one of the following:
(1) playing the plurality of face images continuously at a preset first time interval to obtain the moving picture;
(2) determining a second time interval according to the number of face images and playing the plurality of face images continuously at the second time interval to obtain the moving picture;
(3) acquiring a third time interval set for each face image and playing the plurality of face images continuously at the third time intervals to obtain the moving picture.
The first time interval is preset and may be a value set by the computer device or a value preset by the user; the computer device plays the face images in sequence at the first time interval to generate the moving picture, for example at intervals of 0.5 seconds. The computer device can also pre-store a correspondence between the number of images and the playing interval and determine the second time interval according to the number of face images; the correspondence may be linear or non-linear. For example, when there are 10 face images the second time interval is 0.5 seconds, and when there are 5 face images it is 1 second. The computer device may further receive a playing interval set by the user for each face image, that is, the user may set a corresponding third time interval for each face image, and the third time intervals of different images may be the same or different. The computer device then plays the face images in sequence, each for its corresponding third time interval, to obtain the moving picture.
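The three interval options could be reduced to one helper that returns a per-frame duration list, as sketched below; the count-to-interval table simply encodes the 5-image and 10-image examples above, and the 300 ms fallback is an assumption. The resulting list can be passed as the duration argument of the GIF writer sketched earlier, since Pillow also accepts per-frame durations.

```python
def playback_durations_ms(num_frames, first_interval=500, user_set=None):
    """Return one playing duration per frame, in milliseconds.
    - third interval: `user_set` is a per-image list supplied by the user;
    - second interval: otherwise, look the interval up from an assumed
      count-to-interval table (e.g. 5 frames -> 1 s, 10 frames -> 0.5 s);
    - first interval: fall back to the preset value for other counts."""
    if user_set is not None:
        return list(user_set)                       # third time interval, per image
    for count, interval in [(5, 1000), (10, 500)]:  # assumed correspondence table
        if num_frames <= count:
            return [interval] * num_frames          # second time interval
    return [first_interval] * num_frames            # preset first time interval


# e.g. playback_durations_ms(5) -> [1000, 1000, 1000, 1000, 1000]
```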
With the method in this embodiment, the computer device can play the face images at various time intervals, that is, the moving picture can be generated in diverse ways, which helps improve user stickiness.
In one embodiment, generating the moving picture from the plurality of face images comprises: extracting a face region image from each of the plurality of face images and playing the face region images continuously to obtain the moving picture.
After the computer device acquires a face image, it can further identify the face region in it and extract the face region image according to the contour of the face region. The computer device can locate the face region through skin-color recognition and determine the face contour from the color difference between the face region and the background region. The computer device can also identify the face region with a machine learning model, then identify the contour of the face region and extract the face region image according to the contour. After the face region images are extracted, the computer device can, when generating the moving picture, directly play the face region image corresponding to each face image continuously to generate the moving picture.
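A rough cropping sketch is given below. It uses a Haar cascade rectangle plus a small margin as a stand-in for the skin-colour or machine-learning contour segmentation described above, so it returns rectangular crops rather than contour-accurate face regions.

```python
import cv2


def extract_face_regions(path, margin=0.1):
    """Crop the face region(s) out of one face image; the detector and margin are
    illustrative assumptions, not the contour extraction described in the text."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    crops = []
    for x, y, w, h in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        mx, my = int(w * margin), int(h * margin)
        crops.append(img[max(0, y - my): y + h + my, max(0, x - mx): x + w + mx])
    return crops
```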
With the method in this embodiment, the computer device can extract the face region images from the face images and play them in sequence to obtain the moving picture; that is, when generating the moving picture, the computer device can remove the background region and generate the moving picture from the face regions alone, which better matches user needs.
In one embodiment, the method further comprises, after step 108:
Step 110: recognize the face identifier corresponding to the face image, cluster the face images according to the face identifier, and generate a face atlas corresponding to the face identifier.
Step 106, in which the face information of a plurality of face images is compared to determine whether they are continuously shot images, then comprises: comparing the face information of a plurality of face images in the same atlas to determine whether the plurality of face images are continuously shot images.
The computer device can obtain the face identifier corresponding to the master face in a face image, where the master face is the main face in the image. Typically, the master face may be a face within a predetermined area of the image, such as the central region, or a face whose area is greater than a specified value, for example greater than 10% of the image area. The computer device can cluster the face images according to the face identifiers of their master faces to generate an atlas of faces for each face identifier. When determining whether a plurality of face images are continuously shot images, the computer device can select the images from one atlas, that is, it compares the face information of a plurality of face images in the same atlas and determines whether they are continuously shot images. The method for detecting whether the images are continuously shot is the same as in step 106 and is not repeated here.
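Clustering into per-identifier atlases can be as simple as grouping image paths by the master face's identifier, as sketched below; where the identifier comes from (any face recogniser that assigns a stable label per person) is left open here, just as in the text.

```python
from collections import defaultdict


def build_face_atlases(labelled_images):
    """labelled_images: iterable of (image_path, master_face_id) pairs.
    Returns one atlas (list of image paths) per face identifier."""
    atlases = defaultdict(list)
    for path, face_id in labelled_images:
        atlases[face_id].append(path)
    return dict(atlases)


# burst detection then runs inside a single atlas, e.g.:
# for face_id, paths in build_face_atlases(labelled).items():
#     compare the face information of the images in `paths` only
```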
With the method in this embodiment, the computer device selects a plurality of face images from the atlas corresponding to the same face identifier and judges whether they are continuously shot images; that is, it selects the candidate images from the image set of the same face, which makes the detected continuously shot images more accurate.
In one embodiment, after step 110, the method further comprises:
and step 112, if the contact information corresponding to the face identification is stored in the computer equipment, acquiring a face atlas corresponding to the face identification.
And step 114, sending the face image in the face image set to the computer equipment corresponding to the contact person.
The computer equipment can search the contact information corresponding to the face identification, and the method comprises the following steps:
(1) the computer equipment obtains marking information of a human face in a human face image, wherein the marking information can be a name corresponding to the human face, the computer equipment searches whether a name corresponding to the human face exists in stored contacts, and if the name corresponding to the human face exists in the stored contacts, the computer equipment obtains the contact corresponding to the human face.
(2) The computer equipment can also acquire the head portrait corresponding to the stored contact person, the computer equipment carries out similarity matching on the face corresponding to the face identification and the head portrait corresponding to the stored contact person, and if the matching is successful, the contact person is the contact person corresponding to the face area.
After the computer equipment acquires the contact person corresponding to the face identification, whether the contact person corresponding to the face identification has the stored contact person information or not can be searched. The contact information can be a mobile phone number, a landline number, a social contact account number and the like. And when the computer equipment stores the contact information of the contact, the computer equipment sends the face atlas corresponding to the face identification to the computer equipment corresponding to the contact.
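The sketch below covers only lookup method (1) and the sending loop; avatar-similarity matching is omitted, and the send callable is hypothetical, since the actual delivery channel (phone number, social account, and so on) is platform specific.

```python
def share_face_atlas(face_id, face_name, atlases, contacts, send):
    """contacts: dict mapping a contact's name to stored contact information
    (phone number, social account, ...); `send` is a hypothetical callable that
    delivers one image to the device identified by that contact information."""
    info = contacts.get(face_name)          # match the name tagged on the face to a stored contact
    if info is None:
        return False                        # no stored contact information for this face
    for image_path in atlases.get(face_id, []):
        send(info, image_path)              # send every image in the face atlas
    return True
```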
With the above method, the computer device can send the face atlas of a face to the corresponding contact without the user having to send the images manually one by one, which simplifies the user's operation steps and makes image sharing more intelligent and better suited to user needs.
In one embodiment, an image processing method includes:
and 402, carrying out face recognition on the image to be processed to obtain a face image.
And step 404, acquiring a plurality of face images selected by the user, and generating a moving picture from the plurality of face images according to the selection sequence of the user.
The computer equipment can also acquire a plurality of face images selected by the user, and sequentially play the plurality of face images according to the sequence of the plurality of face images selected by the user to generate the moving picture, namely the computer equipment can generate the moving picture from the plurality of face images according to the selection sequence of the user. The plurality of face images selected by the user can be continuously shot images or non-continuously shot images; the plurality of face images selected by the user can be face images corresponding to the same face identification, and also can be face images corresponding to different face identifications.
According to the method, the computer equipment can directly acquire the multiple images selected by the user, the dynamic image is generated according to the multiple images selected by the user, the dynamic image generation mode is simple and rapid, and the requirements of the user are met.
FIG. 5 is a diagram illustrating the generation of a moving picture from a plurality of face images in one embodiment. As shown in FIG. 5, image 502, image 504, image 506, image 508 and image 510 are all face images, and the five face images are continuously shot images. The computer device can obtain the shooting times of the five face images: image 502 was shot at 10:25:06 on 28 October 2017, image 504 at 10:25:07, image 506 at 10:25:08, image 508 at 10:25:09 and image 510 at 10:25:10. The computer device sorts the five face images by shooting time into image 502, image 504, image 506, image 508 and image 510 and can play them continuously in that order to generate the moving picture. When playing the five face images continuously, the computer device may use a preset first time interval, for example 0.5 seconds, so that the interval t1 between image 502 and image 504 is 0.5 seconds, the interval t2 between image 504 and image 506 is 0.5 seconds, the interval t3 between image 506 and image 508 is 0.5 seconds, and the interval t4 between image 508 and image 510 is 0.5 seconds.
FIG. 6 is a block diagram showing an example of the structure of an image processing apparatus. As shown in fig. 6, an image processing apparatus includes:
the recognition module 602 is configured to perform face recognition on an image to be processed to obtain a face image;
an obtaining module 604, configured to obtain face information of a face image, where the face information includes a face area, a face position, and a face angle;
the comparison module 606 is configured to compare face information of the plurality of face images, and determine whether the plurality of face images are continuously shot images;
the processing module 608 is configured to generate a motion picture according to the plurality of face images if the plurality of face images are continuously shot images.
In one embodiment, the processing module 608 generates a motion picture according to the plurality of face images, wherein the motion picture includes at least one of the following cases:
(1) generating a moving picture from the plurality of face images according to the storage sequence of the plurality of face images;
(2) generating a moving picture from the plurality of face images according to the shooting time sequence of the plurality of face images;
(3) determining the arrangement sequence of a plurality of face images according to the face angles, and generating a moving picture from the plurality of face images according to the arrangement sequence;
(4) and generating a motion picture from the plurality of face images according to the selection sequence of the user.
In one embodiment, the processing module 608 generates a motion picture according to the plurality of face images, wherein the motion picture includes at least one of the following cases:
(1) continuously playing a plurality of face images according to a preset first time interval to obtain a motion picture;
(2) determining a second time interval according to the number of the plurality of face images, and continuously playing the plurality of face images according to the second time interval to obtain a motion picture;
(3) and acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time interval to obtain a motion picture.
In one embodiment, the processing module 608 generates a motion picture according to a plurality of face images, including: and extracting face region images from the plurality of face images respectively, and continuously playing the face region images in the plurality of face images to obtain a motion picture.
In an embodiment, the processing module 608 is further configured to obtain a plurality of facial images selected by the user, and generate a motion picture from the plurality of facial images according to the selection sequence of the user.
Fig. 7 is a block diagram showing the configuration of an image processing apparatus according to another embodiment. As shown in fig. 7, an image processing apparatus includes: an identification module 702, an acquisition module 704, a comparison module 706, a processing module 708, and a clustering module 710. The identifying module 702, the obtaining module 704, the comparing module 706, and the processing module 708 have the same functions as the corresponding modules in fig. 6.
The recognition module 702 is further configured to recognize a face identifier corresponding to the face image;
the clustering module 710 is used for clustering the face images according to the face identifications to generate a face atlas corresponding to the face identifications;
the comparing module 706 compares the face information of the plurality of face images, and determining whether the plurality of face images are continuous shooting images includes: and comparing the face information of a plurality of face images in the same picture set to determine whether the plurality of face images are continuous shooting images.
FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to another embodiment. As shown in FIG. 8, the image processing apparatus includes: an identification module 802, an acquisition module 804, a comparison module 806, a processing module 808, and a sending module 810. The identification module 802, the acquisition module 804, the comparison module 806, and the processing module 808 have the same functions as the corresponding modules in FIG. 6.
The obtaining module 804 is further configured to obtain a face atlas corresponding to the face identifier if the contact information corresponding to the face identifier has been stored in the computer device;
and a sending module 810, configured to send the face image in the face image set to a computer device corresponding to the contact.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of:
(1) and carrying out face recognition on the image to be processed to obtain a face image.
(2) And acquiring face information of the face image, wherein the face information comprises face area, face position and face angle.
(3) And comparing the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images.
(4) And if the plurality of face images are continuous shooting images, generating a moving picture according to the plurality of face images.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and generating a moving picture from the plurality of face images according to the storage sequence of the plurality of face images.
(2) And generating a moving picture from the plurality of face images according to the shooting time sequence of the plurality of face images.
(3) And determining the arrangement sequence of the plurality of face images according to the face angles, and generating a moving picture from the plurality of face images according to the arrangement sequence.
(4) And generating a motion picture from the plurality of face images according to the selection sequence of the user.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and continuously playing the plurality of face images according to a preset first time interval to obtain a motion picture.
(2) And determining a second time interval according to the number of the plurality of face images, and continuously playing the plurality of face images according to the second time interval to obtain a motion picture.
(3) And acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time interval to obtain a motion picture.
In one embodiment, generating a motion picture from a plurality of face images comprises: and extracting face region images from the plurality of face images respectively, and continuously playing the face region images in the plurality of face images to obtain a motion picture.
In one embodiment, further performing: and identifying the face identification corresponding to the face image, clustering the face image according to the face identification, and generating a face atlas corresponding to the face identification. Comparing the face information of the plurality of face images, and determining whether the plurality of face images are continuous shooting images comprises the following steps: and comparing the face information of a plurality of face images in the same picture set to determine whether the plurality of face images are continuous shooting images.
In one embodiment, further performing: and if the contact information corresponding to the face identification is stored in the computer equipment, acquiring a face atlas corresponding to the face identification. And sending the face image in the face image set to computer equipment corresponding to the contact person.
In one embodiment, further performing: and acquiring a plurality of face images selected by a user, and generating a moving picture from the plurality of face images according to the selection sequence of the user.
A computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of:
(1) and carrying out face recognition on the image to be processed to obtain a face image.
(2) And acquiring face information of the face image, wherein the face information comprises face area, face position and face angle.
(3) And comparing the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images.
(4) And if the plurality of face images are continuous shooting images, generating a moving picture according to the plurality of face images.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and generating a moving picture from the plurality of face images according to the storage sequence of the plurality of face images.
(2) And generating a moving picture from the plurality of face images according to the shooting time sequence of the plurality of face images.
(3) And determining the arrangement sequence of the plurality of face images according to the face angles, and generating a moving picture from the plurality of face images according to the arrangement sequence.
(4) And generating a motion picture from the plurality of face images according to the selection sequence of the user.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and continuously playing the plurality of face images according to a preset first time interval to obtain a motion picture.
(2) And determining a second time interval according to the number of the plurality of face images, and continuously playing the plurality of face images according to the second time interval to obtain a motion picture.
(3) And acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time interval to obtain a motion picture.
In one embodiment, generating a motion picture from a plurality of face images comprises: and extracting face region images from the plurality of face images respectively, and continuously playing the face region images in the plurality of face images to obtain a motion picture.
In one embodiment, further performing: and identifying the face identification corresponding to the face image, clustering the face image according to the face identification, and generating a face atlas corresponding to the face identification. Comparing the face information of the plurality of face images, and determining whether the plurality of face images are continuous shooting images comprises the following steps: and comparing the face information of a plurality of face images in the same picture set to determine whether the plurality of face images are continuous shooting images.
In one embodiment, further performing: and if the contact information corresponding to the face identification is stored in the computer equipment, acquiring a face atlas corresponding to the face identification. And sending the face image in the face image set to computer equipment corresponding to the contact person.
In one embodiment, further performing: and acquiring a plurality of face images selected by a user, and generating a moving picture from the plurality of face images according to the selection sequence of the user.
The embodiment of the application also provides a computer device. As shown in FIG. 9, for convenience of explanation only the parts related to the embodiments of the present application are shown; for technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The computer device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device and the like; the following description takes a mobile phone as the computer device by way of example:
fig. 9 is a block diagram of a partial structure of a mobile phone related to a computer device provided in an embodiment of the present application. Referring to fig. 9, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, display unit 940, sensor 950, audio circuit 960, wireless fidelity (WiFi) module 970, processor 980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 910 may be used for receiving and transmitting signals during information transmission or a call; it may receive downlink information from a base station and forward it to the processor 980 for processing, and it may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function, an application program for an image playing function, and the like), and the like; the data storage area may store data (such as audio data, an address book, etc.) created according to the use of the mobile phone, and the like. Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 900. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, which may also be referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user operating the touch panel 931 or near the touch panel 931 by using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 931 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 940 may include a display panel 941. In one embodiment, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 931 may overlay the display panel 941, and when the touch panel 931 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 980 to determine the type of touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of touch event. Although in fig. 9, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone 900 may also include at least one sensor 950, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 941 according to the ambient light, and a proximity sensor, which turns off the display panel 941 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, when the mobile phone is stationary, the magnitude and direction of gravity; it can be used in applications that recognize the posture of the mobile phone (such as switching between landscape and portrait), in vibration-related functions (such as a pedometer or tap detection) and the like. The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor.
Audio circuitry 960, speaker 961 and microphone 962 may provide an audio interface between a user and a cell phone. The audio circuit 960 may transmit the electrical signal converted from the received audio data to the speaker 961, and convert the electrical signal into a sound signal for output by the speaker 961; on the other hand, the microphone 962 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 960, and then outputs the audio data to the processor 980 for processing, and then the audio data can be transmitted to another mobile phone through the RF circuit 910, or the audio data can be output to the memory 920 for subsequent processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 9 shows WiFi module 970, it is to be understood that it does not belong to the essential components of cell phone 900 and may be omitted as desired.
The processor 980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. In one embodiment, processor 980 may include one or more processing units. In one embodiment, the processor 980 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 980.
The handset 900 also includes a power supply 990 (e.g., a battery) for supplying power to various components, which may preferably be logically connected to the processor 980 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In one embodiment, the cell phone 900 may also include a camera, a bluetooth module, and the like.
In an embodiment of the present application, the processor 980 included in the mobile terminal implements the following steps when executing the computer program stored on the memory:
(1) and carrying out face recognition on the image to be processed to obtain a face image.
(2) And acquiring face information of the face image, wherein the face information comprises face area, face position and face angle.
(3) And comparing the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images.
(4) And if the plurality of face images are continuous shooting images, generating a moving picture according to the plurality of face images.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and generating a moving picture from the plurality of face images according to the storage sequence of the plurality of face images.
(2) And generating a moving picture from the plurality of face images according to the shooting time sequence of the plurality of face images.
(3) And determining the arrangement sequence of the plurality of face images according to the face angles, and generating a moving picture from the plurality of face images according to the arrangement sequence.
(4) And generating a motion picture from the plurality of face images according to the selection sequence of the user.
In one embodiment, generating a motion picture from a plurality of face images includes at least one of:
(1) and continuously playing the plurality of face images according to a preset first time interval to obtain a motion picture.
(2) And determining a second time interval according to the number of the plurality of face images, and continuously playing the plurality of face images according to the second time interval to obtain a motion picture.
(3) And acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time interval to obtain a motion picture.
In one embodiment, generating a motion picture from a plurality of face images comprises: and extracting face region images from the plurality of face images respectively, and continuously playing the face region images in the plurality of face images to obtain a motion picture.
In one embodiment, further performing: and identifying the face identification corresponding to the face image, clustering the face image according to the face identification, and generating a face atlas corresponding to the face identification. Comparing the face information of the plurality of face images, and determining whether the plurality of face images are continuous shooting images comprises the following steps: and comparing the face information of a plurality of face images in the same picture set to determine whether the plurality of face images are continuous shooting images.
In one embodiment, the following is further performed: if contact information corresponding to a face identifier is stored in the computer device, acquiring the face atlas corresponding to that face identifier, and sending the face images in the face atlas to the computer device corresponding to the contact.
In one embodiment, the following is further performed: acquiring a plurality of face images selected by a user, and generating a motion picture from the plurality of face images according to the selection order of the user.
Any reference to memory, storage, a database, or another medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
carrying out face recognition on an image to be processed to obtain a face image;
acquiring face information of the face image, wherein the face information comprises a face area, a face position and a face angle;
determining whether the face area difference of any two images in the plurality of face images is within a first threshold value;
if yes, determining whether the face positions of any two images in the plurality of face images are the same;
if the face positions of any two images in the plurality of face images are the same, detecting whether the face angles of any two images in the plurality of face images are within a second threshold value;
if yes, determining that the plurality of face images are continuous shooting images;
if the plurality of face images are continuous shooting images, generating a motion picture from the plurality of face images;
wherein generating the motion picture from the plurality of face images comprises: acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time intervals to obtain the motion picture.
2. The method according to claim 1, wherein generating the motion picture from the plurality of face images comprises at least one of:
generating the motion picture from the plurality of face images according to the storage order of the plurality of face images;
generating the motion picture from the plurality of face images according to the shooting time order of the plurality of face images;
determining an arrangement order of the plurality of face images according to the face angles, and generating the motion picture from the plurality of face images according to the arrangement order;
and generating the motion picture from the plurality of face images according to the selection order of the user.
3. The method according to claim 1, wherein generating the motion picture from the plurality of face images further comprises at least one of:
continuously playing the plurality of face images at a preset first time interval to obtain the motion picture;
and determining a second time interval according to the number of the plurality of face images, and continuously playing the plurality of face images at the second time interval to obtain the motion picture.
4. The method according to claim 1, wherein generating the motion picture from the plurality of face images comprises:
extracting a face region image from each of the plurality of face images, and continuously playing the face region images of the plurality of face images to obtain the motion picture.
5. The method of any of claims 1 to 4, further comprising:
identifying a face identifier corresponding to the face image, clustering the face images according to the face identifier, and generating a face atlas corresponding to the face identifier;
comparing the face information of a plurality of face images in the same face atlas, and determining whether the plurality of face images are continuous shooting images.
6. The method of claim 5, further comprising:
if contact information corresponding to the face identifier is stored in the computer device, acquiring the face atlas corresponding to the face identifier;
and sending the face images in the face atlas to the computer device corresponding to the contact.
7. The method of any of claims 1 to 4, further comprising:
the method comprises the steps of obtaining a plurality of face images selected by a user, and generating a moving picture from the plurality of face images according to the selection sequence of the user.
8. An image processing apparatus characterized by comprising:
the recognition module is used for carrying out face recognition on the image to be processed to acquire a face image;
the acquisition module is used for acquiring face information of the face image, wherein the face information comprises a face area, a face position and a face angle;
the comparison module is used for determining whether the face area difference of any two images in the plurality of face images is within a first threshold value; if yes, determining whether the face positions of any two images in the plurality of face images are the same; if the face positions are the same, detecting whether the face angles of any two images in the plurality of face images are within a second threshold value; and if yes, determining that the plurality of face images are continuous shooting images;
the processing module is used for generating a motion picture from the plurality of face images if the plurality of face images are continuous shooting images; wherein generating the motion picture from the plurality of face images comprises: acquiring a third time interval set for each face image, and continuously playing the plurality of face images according to the third time intervals to obtain the motion picture.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201711225296.7A 2017-11-29 2017-11-29 Image processing method, image processing device, computer equipment and computer readable storage medium Active CN108022274B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711225296.7A CN108022274B (en) 2017-11-29 2017-11-29 Image processing method, image processing device, computer equipment and computer readable storage medium
PCT/CN2018/115675 WO2019105237A1 (en) 2017-11-29 2018-11-15 Image processing method, computer device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711225296.7A CN108022274B (en) 2017-11-29 2017-11-29 Image processing method, image processing device, computer equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108022274A CN108022274A (en) 2018-05-11
CN108022274B true CN108022274B (en) 2021-10-01

Family

ID=62077521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711225296.7A Active CN108022274B (en) 2017-11-29 2017-11-29 Image processing method, image processing device, computer equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108022274B (en)
WO (1) WO2019105237A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022274B (en) * 2017-11-29 2021-10-01 Oppo广东移动通信有限公司 Image processing method, image processing device, computer equipment and computer readable storage medium
CN110008673B (en) * 2019-03-06 2022-02-18 创新先进技术有限公司 Identity authentication method and device based on face recognition
CN109948586B (en) * 2019-03-29 2021-06-25 北京三快在线科技有限公司 Face verification method, device, equipment and storage medium
CN111353368A (en) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 Pan-tilt camera, face feature processing method and device and electronic equipment
CN110490162A (en) * 2019-08-23 2019-11-22 北京搜狐新时代信息技术有限公司 The methods, devices and systems of face variation are shown based on recognition of face unlocking function
CN111339811B (en) * 2019-08-27 2024-02-20 杭州海康威视***技术有限公司 Image processing method, device, equipment and storage medium
CN110990088B (en) * 2019-12-09 2023-08-11 Oppo广东移动通信有限公司 Data processing method and related equipment
CN111754612A (en) * 2020-06-01 2020-10-09 Oppo(重庆)智能科技有限公司 Moving picture generation method and device
CN114153342A (en) * 2020-08-18 2022-03-08 深圳市万普拉斯科技有限公司 Visual information display method and device, computer equipment and storage medium
CN112565863B (en) * 2020-11-26 2024-07-05 深圳Tcl新技术有限公司 Video playing method, device, terminal equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820675A (en) * 2015-04-08 2015-08-05 小米科技有限责任公司 Photo album displaying method and device
CN104980641A (en) * 2014-04-04 2015-10-14 宏碁股份有限公司 Electronic device and image viewing method thereof
CN106375531A (en) * 2016-08-29 2017-02-01 捷开通讯(深圳)有限公司 Picture sharing method and terminal

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4092059B2 (en) * 2000-03-03 2008-05-28 日本放送協会 Image recognition device
US20070254640A1 (en) * 2006-04-27 2007-11-01 Bliss Stephen J Remote control and viewfinder for mobile camera phone
JP2009088687A (en) * 2007-09-27 2009-04-23 Fujifilm Corp Album creation device
JP5028225B2 (en) * 2007-11-06 2012-09-19 オリンパスイメージング株式会社 Image composition apparatus, image composition method, and program
JP5176572B2 (en) * 2008-02-05 2013-04-03 ソニー株式会社 Image processing apparatus and method, and program
JP4911165B2 (en) * 2008-12-12 2012-04-04 カシオ計算機株式会社 Imaging apparatus, face detection method, and program
JP5821625B2 (en) * 2011-08-29 2015-11-24 カシオ計算機株式会社 Image editing apparatus and program
CN104113682B (en) * 2013-04-22 2018-08-31 联想(北京)有限公司 A kind of image acquiring method and electronic equipment
CN104408402B (en) * 2014-10-29 2018-04-24 小米科技有限责任公司 Face identification method and device
JP6332864B2 (en) * 2014-12-25 2018-05-30 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN104767933B (en) * 2015-03-13 2018-12-21 北京畅游天下网络技术有限公司 A method of having the portable digital equipment and screening photo of camera function
CN105069426B (en) * 2015-07-31 2018-09-04 小米科技有限责任公司 Similar pictures judgment method and device
CN105630954B (en) * 2015-12-23 2019-05-21 北京奇虎科技有限公司 A kind of method and apparatus based on photo synthesis dynamic picture
CN106980840A (en) * 2017-03-31 2017-07-25 北京小米移动软件有限公司 Shape of face matching process, device and storage medium
CN107358219B (en) * 2017-07-24 2020-06-09 艾普柯微电子(上海)有限公司 Face recognition method and device
CN108022274B (en) * 2017-11-29 2021-10-01 Oppo广东移动通信有限公司 Image processing method, image processing device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2019105237A1 (en) 2019-06-06
CN108022274A (en) 2018-05-11

Similar Documents

Publication Publication Date Title
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107977674B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107729815B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108875451B (en) Method, device, storage medium and program product for positioning image
CN107846352B (en) Information display method and mobile terminal
CN107679559B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN108460817B (en) Jigsaw puzzle method and mobile terminal
WO2020048392A1 (en) Application virus detection method, apparatus, computer device, and storage medium
CN108683850B (en) Shooting prompting method and mobile terminal
CN109409235B (en) Image recognition method and device, electronic equipment and computer readable storage medium
JP7467667B2 (en) Detection result output method, electronic device and medium
CN105989572B (en) Picture processing method and device
CN109521684B (en) Household equipment control method and terminal equipment
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
CN109544445B (en) Image processing method and device and mobile terminal
CN110555171A (en) Information processing method, device, storage medium and system
CN108765522B (en) Dynamic image generation method and mobile terminal
CN110347858B (en) Picture generation method and related device
US10970522B2 (en) Data processing method, electronic device, and computer-readable storage medium
CN107729391B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN107832714B (en) Living body identification method and device and storage equipment
CN108062370B (en) Application program searching method and mobile terminal
CN108595104B (en) File processing method and terminal
CN108848270B (en) Method for processing screen shot image and mobile terminal
CN110717486B (en) Text detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: OPPO Guangdong Mobile Communications Co.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant