WO2021109863A1 - Photo processing method and photo processing device - Google Patents

Photo processing method and photo processing device

Info

Publication number
WO2021109863A1
WO2021109863A1 (PCT/CN2020/129181)
Authority
WO
WIPO (PCT)
Prior art keywords
sub
cohesion
image
photo
rectangle
Prior art date
Application number
PCT/CN2020/129181
Other languages
English (en)
French (fr)
Inventor
乔宇
李英
王锴
彭小江
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2021109863A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application belongs to the field of image processing technology, and in particular relates to a photo processing method and a photo processing device.
  • In the prior art, the processing of group photos is relatively simple; generally it only improves the sharpness of the photo, removes obstructions from it, and so on, without adjusting the composition of the group photo, so the integrity and coordination of the photo picture cannot be guaranteed.
  • The embodiments of the present application provide a photo processing method and a photo processing device, which can solve the problem that existing group photos have poor picture integrity and coordination.
  • In a first aspect, an embodiment of the present application provides a photo processing method, including: acquiring a photo to be processed, and cropping multiple sub-images from the photo to be processed, the sub-images containing at least one portrait; separately calculating a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image; and determining the sub-image corresponding to the highest group cohesion degree as the processed photo.
  • In a possible implementation of the first aspect, cropping multiple sub-images from the photo to be processed includes:
  • determining the smallest rectangle containing all portraits in the photo to be processed, and cropping the image corresponding to the smallest rectangle to obtain a first sub-image;
  • determining the largest rectangle in the photo to be processed, and cropping the image corresponding to the largest rectangle to obtain a second sub-image, where the largest rectangle and the smallest rectangle have the same center and aspect ratio;
  • cropping N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
  • where the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
  • In a possible implementation of the first aspect, determining the smallest rectangle containing all portraits in the photo to be processed includes:
  • performing portrait recognition on the photo to be processed, and obtaining the coordinates of each pixel corresponding to the recognized portraits;
  • according to the coordinates, determining the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side;
  • determining the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
  • In a possible implementation of the first aspect, cropping N sub-images based on the largest rectangle and the smallest rectangle includes:
  • obtaining N preset ratios, and scaling up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle;
  • cropping the image corresponding to each middle rectangle to obtain the N sub-images.
  • In a possible implementation of the first aspect, separately calculating the group cohesion degree corresponding to each sub-image includes:
  • for each sub-image, separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image,
  • the face cohesion is used to characterize the facial expressions of the individual portraits in the sub-image,
  • and the body cohesion is used to characterize the body postures of the individual portraits in the sub-image;
  • calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
  • In a possible implementation of the first aspect, separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image includes:
  • obtaining a trained neural network, the neural network including three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion;
  • inputting the sub-image into the neural network for processing to obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
  • In a possible implementation of the first aspect, calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion includes:
  • performing a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree; a worked example follows.
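  • As a worked example (the numbers are illustrative assumptions of this text, not values given by the application): with preset weights w1 = 0.4, w2 = 0.3, w3 = 0.3 and cohesion scores h1 = 0.8, h2 = 0.7, h3 = 0.6, the group cohesion is H = 0.4×0.8 + 0.3×0.7 + 0.3×0.6 = 0.71, and the sub-image with the largest such H is kept.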
  • In a second aspect, an embodiment of the present application provides a photo processing device, including:
  • an acquiring unit, configured to acquire a photo to be processed and crop multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
  • a calculation unit, configured to separately calculate a group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image;
  • a processing unit, configured to determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
  • In a third aspect, an embodiment of the present application provides a photo processing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the photo processing method according to any one of the above first aspects when executing the computer program.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program, where the computer program, when executed by a processor, implements the photo processing method according to any one of the above first aspects.
  • In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the photo processing method according to any one of the above first aspects.
  • In the embodiments of the present application, multiple sub-images are cropped from a photo to be processed and treated as candidate photos; the group cohesion degree corresponding to each sub-image is then calculated, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image and serves as an index measuring sub-image quality; finally, the sub-image corresponding to the highest group cohesion degree is determined as the processed photo.
  • Using group cohesion as the index for determining the processed photo effectively adjusts the composition of the group photo, thereby improving the integrity and coordination of the group-photo picture.
  • FIG. 1 is a schematic diagram of a photo processing system provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a photo processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a sub-image cropping method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the smallest rectangle provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the largest rectangle and the smallest rectangle provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the middle rectangles provided by an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a photo processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a photo processing device provided by an embodiment of the present application.
  • Referring to FIG. 1, the photo processing system may include a photographing device 101 and a terminal device 102.
  • The photographing device may be a camera, a video camera, a mobile phone with a photographing function, and so on.
  • The terminal device may be a mobile phone, a computer, etc.
  • The photographing device and the terminal device may be connected in a wired or wireless manner.
  • The photographing device sends the captured photos to the terminal device; the terminal device processes the received photos using the photo processing method provided in the embodiments of this application and displays the processed photos to the user, or returns the processed photos to the photographing device, which then displays them to the user.
  • In practical applications, the photographing device 101 may be integrated into the terminal device 102.
  • In that case, the terminal device not only has a photographing function but also has photo processing capability.
  • FIG. 2 shows a schematic flowchart of a photo processing method provided by an embodiment of the present application.
  • By way of example and not limitation, the method may include the following steps:
  • S201: Obtain a photo to be processed, and crop multiple sub-images from the photo to be processed, where each sub-image contains at least one portrait.
  • In practical applications, the photo to be processed is usually a group photo, that is, a photo that includes multiple portraits.
  • Sub-images are cropped from the photo to be processed; each sub-image contains one or more portraits, preferably all of them, and the multiple sub-images serve as candidate images.
  • S202: Separately calculate the group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image.
  • Group cohesion refers to the degree to which group members are attracted to one another and are willing to stay in the group; it is a joint force that maintains the effectiveness of group behavior.
  • In the embodiments of this application, group cohesion is used as a measurement index to characterize the consistency and cohesion of the people in the photo. The higher the group cohesion, the higher the quality of the group photo.
  • In one embodiment, calculating the group cohesion degree corresponding to each sub-image in step S202 may include the following steps:
  • S21: For each sub-image, separately calculate the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
  • Optionally, step S21 may include:
  • S211: Obtain a trained neural network, where the neural network includes three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion.
  • S212: Input the sub-image into the neural network for processing, and obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
  • In practical applications, the neural network needs to be trained in advance.
  • Optionally, the sub-networks used to calculate the overall scene cohesion and the body cohesion may adopt the SE-Net (Squeeze-and-Excitation network) structure, pre-trained on the ImageNet data set and then trained and tested on the GAF-Cohesion database. Expression labels can also be added during training to assist supervised learning.
  • The loss function of these sub-networks may adopt cross entropy, mean square error, an emotion rank loss (Rank Loss) function, or an hourglass loss (Hourglass Loss) function.
  • Optionally, the sub-network used to calculate the face cohesion may adopt the ResNet structure, pre-trained on the PERPlus data set and then trained and tested on the GAF-Cohesion database.
  • Its loss function may likewise adopt cross entropy, mean square error, Rank Loss, or Hourglass Loss.
  • Using the trained neural network to calculate the overall scene cohesion, face cohesion, and body cohesion improves calculation efficiency while ensuring calculation accuracy.
  • S22: Calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
  • Optionally, step S22 may include:
  • performing a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
  • For example, the weight of the overall scene cohesion h1 is set to w1, the weight of the face cohesion h2 is set to w2, and the weight of the body cohesion h3 is set to w3; the group cohesion is then H = h1×w1 + h2×w2 + h3×w3.
  • S203: Determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
  • From the candidate sub-images of S201, the sub-image with the highest group cohesion degree is selected as the processed photo.
  • In the embodiments of this application, a photo to be processed is obtained and multiple sub-images are cropped from it as candidate photos; the group cohesion degree corresponding to each sub-image is then calculated, with group cohesion characterizing the consistency and cohesion between the people in the sub-image and serving as an index of sub-image quality; finally, the sub-image corresponding to the highest group cohesion degree is determined as the processed photo (a short scoring sketch follows).
  • Determining the processed photo from the existing candidates with group cohesion as the index effectively improves the processing effect of group photos and, in turn, the quality of the resulting group photo.
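  • As an illustration of steps S202 and S203, the following Python sketch scores each candidate sub-image by weighted summation and keeps the best one. It is a minimal sketch: cohesion_scores() is a hypothetical function standing in for the trained three-sub-network model, and the default weights are illustrative assumptions rather than values given by this application.

    def group_cohesion(h, w=(0.4, 0.3, 0.3)):
        # Weighted sum of (scene, face, body) cohesion scores H = sum(hi * wi);
        # the weights here are assumed example values, not the patent's presets.
        return sum(hi * wi for hi, wi in zip(h, w))

    def select_best(sub_images, cohesion_scores):
        # cohesion_scores(img) -> (h1, h2, h3); hypothetical scorer, e.g. the
        # trained three-sub-network model of S211/S212.
        return max(sub_images, key=lambda img: group_cohesion(cohesion_scores(img)))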
  • Referring to FIG. 3, in step S201 above, cropping multiple sub-images from the photo to be processed may include the following steps:
  • S301: Determine the smallest rectangle containing all portraits in the photo to be processed, and crop the image corresponding to the smallest rectangle to obtain the first sub-image.
  • In the embodiments of this application, a portrait in the photo to be processed includes a facial image and a body image.
  • Optionally, determining the smallest rectangle containing all portraits in the photo to be processed may include the following steps:
  • S3011: Perform portrait recognition on the photo to be processed, and obtain the coordinates of each pixel corresponding to the recognized portraits.
  • Portrait recognition includes two parts: face detection and body detection.
  • For example, face detection may use a multi-task convolutional neural network (MTCNN), which combines face region detection and face key-point detection, so that its output contains both the face detection result and the key points of the detected faces.
  • Body detection may be implemented with the open-source human pose estimation project OpenPose, which can estimate poses such as body movements and joint movements and outputs the locations of the body's joint points.
  • In this embodiment, portrait recognition is first performed on the photo to be processed to obtain a recognition result, which may include the contours of the recognized portraits; the coordinates of the pixels covered by the portrait contours in the photo to be processed are then obtained.
  • S3012: According to the coordinates, determine the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side.
  • In practical applications, a photo is generally rectangular, i.e. it has 4 sides. For each side, the distance from each pixel of the recognized portraits to that side (a point-to-line distance) can be calculated from the coordinates, and the pixel with the minimum distance is taken as the edge position point corresponding to that side. In this way, the 4 sides correspond to 4 edge position points.
  • S3013: Determine the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed. For each edge position point, a straight line is drawn through it parallel to its corresponding side, and the rectangle enclosed by these lines is the smallest rectangle.
  • FIG. 4 is a schematic diagram of the smallest rectangle provided in an embodiment of this application. Among all pixels of the recognized portraits, pixel A is closest to side a of the photo to be processed, pixel B is closest to side b, pixel C is closest to side c, and pixel D is closest to side d; pixels A, B, C, and D are therefore the edge position points corresponding to sides a, b, c, and d, respectively.
  • S302: Determine the largest rectangle in the photo to be processed, and crop the image corresponding to the largest rectangle to obtain the second sub-image, where the largest rectangle and the smallest rectangle have the same center and aspect ratio.
  • In other words, the largest rectangle is the biggest rectangle in the photo to be processed with the same center and the same aspect ratio as the smallest rectangle; it is in fact the rectangle obtained by scaling up the smallest rectangle. Moreover, the largest rectangle contains the smallest rectangle, and the sides of the largest rectangle do not intersect the sides of the smallest rectangle.
  • FIG. 5 is a schematic diagram of the largest rectangle and the smallest rectangle provided in an embodiment of this application. As shown in FIG. 5, the smallest rectangle and the largest rectangle have the same center O, and the diagonal of the largest rectangle passes through the vertices of the smallest rectangle.
  • S303: Crop N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle. The multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
  • Optionally, step S303, cropping N sub-images based on the largest rectangle and the smallest rectangle, may include: S3031, obtaining N preset ratios and scaling up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center and the area of each middle rectangle is smaller than the area of the largest rectangle; and S3032, cropping the image corresponding to each middle rectangle to obtain the N sub-images.
  • In practical applications, once the preset ratios are set, the corresponding N is determined.
  • For example, with preset ratios of 1.1, 1.2, and 1.3 (N = 3), the smallest rectangle is enlarged 1.1, 1.2, and 1.3 times to obtain 3 middle rectangles, and the 3 corresponding sub-images are cropped.
  • Optionally, the value of N can also be set first, and the smallest rectangle is then scaled up according to the value of N and certain rules.
  • FIG. 6 is a schematic diagram of the middle rectangles provided in this embodiment of the present application. As shown in FIG. 6, first set N = 2; then connect a vertex P of the smallest rectangle to the corresponding vertex Q of the largest rectangle, and divide segment PQ into N + 1 = 3 equal parts to obtain the division points M1 and M2.
  • Taking M1 and M2 as vertices, the smallest rectangle is scaled up to obtain middle rectangle 1 and middle rectangle 2.
  • In the above example, the smallest rectangle is scaled up according to the rule of taking equal division points of segment PQ.
  • In practical applications, non-equal division points of segment PQ can also be taken, which is not limited here.
  • In this embodiment, the first sub-image is obtained by determining the smallest rectangle containing all portraits in the photo to be processed and cropping the corresponding image; the largest rectangle in the photo is then determined and the corresponding image is cropped to obtain the second sub-image; finally, N sub-images are cropped based on the largest rectangle and the smallest rectangle. In this way, multiple candidate sub-images are obtained, each guaranteed to contain all portraits, and selecting the processed photo among them effectively guarantees the quality of the processed photo.
  • Corresponding to the photo processing method described in the above embodiments, FIG. 7 shows a structural block diagram of the photo processing device provided in an embodiment of the present application; for ease of description, only the parts related to the embodiment of the present application are shown.
  • The device includes:
  • an acquiring unit 71, configured to acquire a photo to be processed and crop multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
  • a calculation unit 72, configured to separately calculate the group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image;
  • a processing unit 73, configured to determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
  • Optionally, the acquiring unit 71 includes:
  • a first determining module, configured to determine the smallest rectangle containing all portraits in the photo to be processed and crop the image corresponding to the smallest rectangle to obtain the first sub-image;
  • a second determining module, configured to determine the largest rectangle in the photo to be processed and crop the image corresponding to the largest rectangle to obtain the second sub-image, where the largest rectangle and the smallest rectangle have the same center and aspect ratio;
  • a cropping module, configured to crop N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
  • where the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
  • Optionally, the first determining module includes:
  • a recognition sub-module, configured to perform portrait recognition on the photo to be processed and obtain the coordinates of each pixel corresponding to the recognized portraits;
  • a point determining sub-module, configured to determine, according to the coordinates, the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side;
  • a rectangle determining sub-module, configured to determine the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
  • Optionally, the cropping module includes:
  • a scaling sub-module, configured to obtain N preset ratios and scale up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center and the area of each middle rectangle is smaller than the area of the largest rectangle;
  • a cropping sub-module, configured to crop the image corresponding to each middle rectangle to obtain the N sub-images.
  • Optionally, the calculation unit 72 includes:
  • a first calculation sub-module, configured to calculate, for each sub-image, the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize
  • the degree of cohesion between the portraits and the background in the sub-image, the face cohesion is used to characterize the facial expressions of the individual portraits in the sub-image, and the body cohesion is used to characterize the body postures of the individual portraits in the sub-image;
  • a second calculation sub-module, configured to calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
  • Optionally, the first calculation sub-module is further configured to obtain a trained neural network,
  • where the neural network includes three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion,
  • and to input the sub-image into the neural network for processing to obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
  • Optionally, the second calculation sub-module is further configured to perform a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
  • In addition, the photo processing device shown in FIG. 7 may be a software unit, a hardware unit, or a combined software/hardware unit built into an existing terminal device; it may also be integrated into the terminal device as an independent add-on, or exist as an independent terminal device.
  • FIG. 8 is a schematic structural diagram of a photo processing device provided by an embodiment of the application.
  • The photo processing device 8 of this embodiment includes: at least one processor 80 (only one is shown in FIG. 8), a memory 81, and a computer program 82 stored in the memory 81 and executable on the at least one processor 80.
  • The processor 80 implements the steps in any of the foregoing photo processing method embodiments when executing the computer program 82.
  • The photo processing device may be a mobile phone, a desktop computer, a notebook, a palmtop computer, or other equipment with a shooting function.
  • The photo processing device may include, but is not limited to, a processor and a memory.
  • FIG. 8 is only an example of the photo processing device 8 and does not constitute a limitation on the photo processing device 8; it may include more or fewer components than shown, combine certain components, or use different components,
  • and may, for example, also include input/output devices, network access devices, and so on.
  • The so-called processor 80 may be a central processing unit (Central Processing Unit, CPU); the processor 80 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The memory 81 may in some embodiments be an internal storage unit of the photo processing device 8, such as a hard disk or memory of the photo processing device 8. In other embodiments, the memory 81 may also be an external storage device of the photo processing device 8, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the photo processing device 8. Further, the memory 81 may include both an internal storage unit of the photo processing device 8 and an external storage device. The memory 81 is used to store the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it can also be used to temporarily store data that has been output or will be output.
  • The embodiments of the present application also provide a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • The embodiments of the present application also provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to realize the steps in the foregoing method embodiments when executed.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. On this understanding, the computer program implementing all or part of the processes of the above method embodiments can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the foregoing method embodiments.
  • The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photo processing device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a floppy disk, or an optical disc.
  • In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
  • In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways.
  • For example, the device/network-device embodiments described above are only illustrative.
  • The division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units
  • or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connections displayed or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

A photo processing method and a photo processing device, applicable to the field of image processing technology, the method including: acquiring a photo to be processed, and cropping multiple sub-images from the photo to be processed, the sub-images containing at least one portrait (S201); separately calculating a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image (S202); and determining the sub-image corresponding to the highest group cohesion degree as the processed photo (S203). The method effectively adjusts the composition of a group photo, thereby improving the integrity and coordination of the group-photo picture.

Description

Photo processing method and photo processing device
Technical Field
This application belongs to the field of image processing technology, and in particular relates to a photo processing method and a photo processing device.
Background
With the continuous improvement of image processing technology, the functions of photographing devices have become increasingly powerful, and users' requirements for photos are ever higher. Taking group photos is a common shooting behavior: in a group photo, attention must be paid not only to each person's expression, but also to the relative positions of the people and the relative proportion between the people and the background, that is, to the composition. Problems in the composition will affect the integrity and coordination of the photo picture.
In the prior art, the processing of group photos is relatively simple; generally it only improves the sharpness of the photo, removes obstructions from it, and so on, without adjusting the composition of the group photo, so the integrity and coordination of the picture cannot be guaranteed.
Technical Problem
The embodiments of the present application provide a photo processing method and a photo processing device, which can solve the problem that existing group photos have poor picture integrity and coordination.
Technical Solution
In a first aspect, an embodiment of the present application provides a photo processing method, including:
acquiring a photo to be processed, and cropping multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
separately calculating a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image;
determining the sub-image corresponding to the highest group cohesion degree as the processed photo.
In a possible implementation of the first aspect, cropping multiple sub-images from the photo to be processed includes:
determining the smallest rectangle containing all portraits in the photo to be processed, and cropping the image corresponding to the smallest rectangle to obtain a first sub-image;
determining the largest rectangle in the photo to be processed, and cropping the image corresponding to the largest rectangle to obtain a second sub-image, where the largest rectangle and the smallest rectangle have the same center and aspect ratio;
cropping N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
where the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
In a possible implementation of the first aspect, determining the smallest rectangle containing all portraits in the photo to be processed includes:
performing portrait recognition on the photo to be processed, and obtaining the coordinates of each pixel corresponding to the recognized portraits;
according to the coordinates, determining the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side;
determining the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
In a possible implementation of the first aspect, cropping N sub-images based on the largest rectangle and the smallest rectangle includes:
obtaining N preset ratios, and scaling up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle;
cropping the image corresponding to each middle rectangle to obtain the N sub-images.
In a possible implementation of the first aspect, separately calculating the group cohesion degree corresponding to each sub-image includes:
for each sub-image, separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image, the face cohesion is used to characterize the facial expressions of the individual portraits in the sub-image, and the body cohesion is used to characterize the body postures of the individual portraits in the sub-image;
calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
In a possible implementation of the first aspect, separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image includes:
obtaining a trained neural network, the neural network including three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion;
inputting the sub-image into the neural network for processing to obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
In a possible implementation of the first aspect, calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion includes:
performing a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
In a second aspect, an embodiment of the present application provides a photo processing device, including:
an acquiring unit, configured to acquire a photo to be processed and crop multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
a calculation unit, configured to separately calculate a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image;
a processing unit, configured to determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
In a third aspect, an embodiment of the present application provides a photo processing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the photo processing method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the photo processing method according to any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the photo processing method according to any one of the above first aspects.
It can be understood that, for the beneficial effects of the second to fifth aspects above, reference may be made to the relevant description in the first aspect, which will not be repeated here.
Beneficial Effects
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, a photo to be processed is acquired and multiple sub-images, each containing at least one portrait, are cropped from it as candidate photos; the group cohesion degree corresponding to each sub-image is then calculated, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image and serving as an index measuring sub-image quality; finally, the sub-image corresponding to the highest group cohesion degree is determined as the processed photo. In this way, the processed photo is determined from the existing photo to be processed with group cohesion as the index, the composition of the group photo can be effectively adjusted, and the integrity and coordination of the group-photo picture are thereby improved.
Description of Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a photo processing system provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a photo processing method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a sub-image cropping method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the smallest rectangle provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the largest rectangle and the smallest rectangle provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the middle rectangles provided by an embodiment of the present application;
FIG. 7 is a structural block diagram of a photo processing device provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a photo processing device provided by an embodiment of the present application.
Embodiments of the Invention
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
It should be understood that, when used in the specification and appended claims of this application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
In addition, in the description of the specification and appended claims of this application, the terms "first", "second", "third", etc. are used only to distinguish descriptions and cannot be understood as indicating or implying relative importance.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc. appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
First, an application scenario of the photo processing method provided by the embodiments of this application is introduced. Referring to FIG. 1, a schematic diagram of the photo processing system provided by an embodiment of this application, the photo processing system may include a photographing device 101 and a terminal device 102. The photographing device may be a camera, a video camera, a mobile phone with a photographing function, and so on; the terminal device may be a mobile phone, a computer, etc. The photographing device and the terminal device may be communicatively connected in a wired or wireless manner. The photographing device sends the captured photos to the terminal device; the terminal device processes the received photos using the photo processing method provided by the embodiments of this application and displays the processed photos to the user, or returns the processed photos to the photographing device, which displays them to the user.
Of course, in practical applications, the photographing device 101 may be integrated into the terminal device 102, so that the terminal device has both a photographing function and photo processing capability.
FIG. 2 shows a schematic flowchart of the photo processing method provided by an embodiment of the present application. By way of example and not limitation, the method may include the following steps:
S201: Acquire a photo to be processed, and crop multiple sub-images from the photo to be processed, where each sub-image contains at least one portrait.
In practical applications, the photo to be processed is usually a group photo, that is, the photo includes multiple portraits. Sub-images are cropped from the photo to be processed; each sub-image contains one or more portraits, preferably all of them. The multiple sub-images serve as candidate images.
For the process of cropping multiple sub-images from the photo to be processed, refer to the description of the embodiment in FIG. 3, which will not be repeated here.
S202: Separately calculate the group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image.
Group cohesion refers to the degree to which group members are attracted to one another and are willing to stay in the group; it is a joint force that maintains the effectiveness of group behavior. In the embodiments of this application, group cohesion is used as a measurement index to characterize the consistency and cohesion of the people in the photo. The higher the group cohesion, the higher the quality of the group photo.
In one embodiment, calculating the group cohesion degree corresponding to each sub-image in step S202 may include the following steps:
S21: For each sub-image, separately calculate the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
The overall scene cohesion is calculated on the sub-image as a whole and is used to characterize the degree of cohesion between the people and the background in the sub-image. The face cohesion is calculated on the facial images in the sub-image and is used to characterize the facial expressions of the people in the sub-image. The body cohesion is calculated on the body images of the people in the sub-image and is used to characterize their body postures. Optionally, step S21 may include:
S211: Obtain a trained neural network, where the neural network includes three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion.
S212: Input the sub-image into the neural network for processing, and obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
In practical applications, the neural network needs to be trained in advance.
Optionally, the sub-networks used to calculate the overall scene cohesion and the body cohesion may adopt the SE-Net (Squeeze-and-Excitation network) structure, pre-trained on the ImageNet data set and then trained and tested on the GAF-Cohesion database. Expression labels can also be added during training to assist supervised learning. The loss function of these sub-networks may adopt cross entropy, mean square error, an emotion rank loss (Rank Loss) function, or an hourglass loss (Hourglass Loss) function.
Optionally, the sub-network used to calculate the face cohesion may adopt the ResNet structure, pre-trained on the PERPlus data set and then trained and tested on the GAF-Cohesion database. Its loss function may adopt cross entropy, mean square error, Rank Loss, or Hourglass Loss.
Using the trained neural network to calculate the overall scene cohesion, face cohesion, and body cohesion improves calculation efficiency while ensuring calculation accuracy.
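To make the three-sub-network design concrete, the following PyTorch sketch builds a model with three image branches that each regress one cohesion score. It is a minimal sketch rather than the architecture disclosed above: torchvision's resnet18 stands in for the SE-Net and ResNet backbones, and the input sizes, output heads, and training setup are assumptions of this example.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    def scalar_branch():
        # Backbone whose final layer regresses a single cohesion score;
        # resnet18 is a stand-in for the SE-Net / ResNet branches above.
        net = resnet18(weights=None)
        net.fc = nn.Linear(net.fc.in_features, 1)
        return net

    class CohesionNet(nn.Module):
        # Three sub-networks per S211: overall scene, faces, bodies.
        def __init__(self):
            super().__init__()
            self.scene = scalar_branch()
            self.face = scalar_branch()
            self.body = scalar_branch()

        def forward(self, scene_img, face_img, body_img):
            # Inputs are (B, 3, H, W) tensors, e.g. the whole sub-image and
            # its face/body crops; outputs are three (B,) cohesion scores.
            return (self.scene(scene_img).squeeze(1),
                    self.face(face_img).squeeze(1),
                    self.body(body_img).squeeze(1))

    # Smoke test; mean squared error (one of the losses named above) could
    # be used to train each branch against cohesion labels.
    x = torch.randn(1, 3, 224, 224)
    h1, h2, h3 = CohesionNet()(x, x, x)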
S22: Calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
Optionally, step S22 may include:
performing a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
For example, the weight of the overall scene cohesion h1 is set to w1, the weight of the face cohesion h2 is set to w2, and the weight of the body cohesion h3 is set to w3; the group cohesion is then H = h1×w1 + h2×w2 + h3×w3.
Combining the above three cohesion degrees to calculate the final group cohesion takes into account not only the relationship between the people and the background in the sub-image, but also the facial expressions and body postures of each person, so the resulting group cohesion characterizes the coordination and integrity of the sub-image more comprehensively.
S203: Determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
From the candidate sub-images of S201, the sub-image with the highest group cohesion degree is selected as the processed photo.
In the embodiments of this application, a photo to be processed is acquired and multiple sub-images are cropped from it as candidate photos; the group cohesion degree corresponding to each sub-image is then calculated, with group cohesion characterizing the consistency and cohesion between the people in the sub-image and serving as an index of sub-image quality; finally, the sub-image corresponding to the highest group cohesion degree is determined as the processed photo. Determining the processed photo from the existing photo to be processed with group cohesion as the index effectively improves the processing effect of group photos and, in turn, the quality of the group photo.
Referring to FIG. 3, a schematic flowchart of the sub-image cropping method provided by an embodiment of this application: as shown in FIG. 3, in step S201 above, cropping multiple sub-images from the photo to be processed may include the following steps:
S301: Determine the smallest rectangle containing all portraits in the photo to be processed, and crop the image corresponding to the smallest rectangle to obtain the first sub-image.
In the embodiments of this application, a portrait in the photo to be processed includes a facial image and a body image.
Optionally, in step S301, determining the smallest rectangle containing all portraits in the photo to be processed may include the following steps:
S3011: Perform portrait recognition on the photo to be processed, and obtain the coordinates of each pixel corresponding to the recognized portraits.
Portrait recognition includes two parts: face detection and body detection.
For example, face detection may be implemented with a multi-task convolutional neural network (MTCNN). This network combines face region detection with face key-point detection, and its output contains both the face detection result and the key points of the detected faces.
Body detection may be implemented with the open-source human pose estimation project OpenPose, which can estimate poses such as body movements and joint movements, and whose output contains the locations of the body's joint points.
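As an illustration of the face detection part only, the following sketch uses the MTCNN implementation from the facenet-pytorch package; the choice of package is an assumption of this example rather than something specified by this application, and the OpenPose body detection step is omitted.

    from PIL import Image
    from facenet_pytorch import MTCNN  # assumed third-party MTCNN implementation

    mtcnn = MTCNN(keep_all=True)  # keep every detected face in the group photo
    img = Image.open("group_photo.jpg")
    boxes, probs = mtcnn.detect(img)  # boxes: (num_faces, 4) as [x1, y1, x2, y2]
    # The face boxes supply part of the portrait pixel coordinates used in
    # S3012; body joint locations from a pose estimator such as OpenPose
    # would supply the rest.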
In this embodiment, portrait recognition is first performed on the photo to be processed to obtain a recognition result, which may include the contours of the recognized portraits. The coordinates of the pixels covered by the portrait contours in the photo to be processed are then obtained.
S3012: According to the coordinates, determine the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side.
In practical applications, a photo is generally rectangular, i.e. it has 4 sides. For each side, the distance from each pixel of the recognized portraits to that side (i.e. a point-to-line distance) can be calculated from the coordinates, and the pixel corresponding to the minimum distance is taken as the edge position point for that side. In this way, the 4 sides correspond to 4 edge position points.
S3013: Determine the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
For each edge position point, a straight line is drawn through it parallel to the side that the point corresponds to; the figure enclosed by the lines through the edge position points is taken as the smallest rectangle.
For example, referring to FIG. 4, a schematic diagram of the smallest rectangle provided by an embodiment of this application: among all pixels of the recognized portraits, pixel A is closest to side a of the photo to be processed, pixel B is closest to side b, pixel C is closest to side c, and pixel D is closest to side d, so pixels A, B, C, and D are the edge position points corresponding to sides a, b, c, and d, respectively. A line parallel to side a is drawn through A, a line parallel to side b through B, a line parallel to side c through C, and a line parallel to side d through D; the rectangle enclosed by the four lines is the smallest rectangle.
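Because the photo's sides are axis-aligned, the point-to-side distances of S3012 reduce to coordinate comparisons, and steps S3011 to S3013 amount to taking a bounding box over the portrait pixels. A minimal numpy sketch, assuming the portrait pixel coordinates are already available from the recognition step:

    import numpy as np

    def smallest_rectangle(portrait_pixels):
        # portrait_pixels: (M, 2) array of (x, y) coordinates of all pixels
        # covered by the recognized portraits. The pixel nearest the left side
        # is the one with minimal x, and likewise for the other three sides,
        # so the lines through the four edge position points (S3013) enclose
        # exactly this bounding box.
        xy = np.asarray(portrait_pixels)
        x_min, y_min = xy.min(axis=0)
        x_max, y_max = xy.max(axis=0)
        return x_min, y_min, x_max, y_max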
S302: Determine the largest rectangle in the photo to be processed, and crop the image corresponding to the largest rectangle to obtain the second sub-image.
Here, the largest rectangle and the smallest rectangle have the same center and aspect ratio.
In other words, the largest rectangle is the biggest rectangle in the photo to be processed that has the same center and the same aspect ratio as the smallest rectangle; it is in fact the rectangle obtained by scaling up the smallest rectangle. Moreover, the largest rectangle contains the smallest rectangle, and the sides of the largest rectangle do not intersect the sides of the smallest rectangle. Referring to FIG. 5, a schematic diagram of the largest rectangle and the smallest rectangle provided by an embodiment of this application: as shown in FIG. 5, the smallest rectangle and the largest rectangle have the same center O, and the diagonal of the largest rectangle passes through the vertices of the smallest rectangle.
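Under this reading of S302, the largest rectangle is the smallest rectangle scaled up about its center O until one side would leave the photo, so the admissible scale factor is limited by whichever photo border is closest relative to the rectangle's half-size. A sketch of that computation:

    def largest_rectangle(min_rect, photo_w, photo_h):
        # min_rect: (x_min, y_min, x_max, y_max) of the smallest rectangle in
        # a photo spanning [0, photo_w] x [0, photo_h]. Returns the largest
        # rectangle with the same center and aspect ratio inside the photo.
        x0, y0, x1, y1 = min_rect
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        half_w, half_h = (x1 - x0) / 2, (y1 - y0) / 2
        # Largest enlargement before any side reaches a photo border.
        s = min(cx / half_w, (photo_w - cx) / half_w,
                cy / half_h, (photo_h - cy) / half_h)
        return (cx - s * half_w, cy - s * half_h,
                cx + s * half_w, cy + s * half_h)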
S303: Crop N sub-images based on the largest rectangle and the smallest rectangle.
Here, the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle.
In the embodiments of this application, the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images. Optionally, step S303, cropping N sub-images based on the largest rectangle and the smallest rectangle, may include the following steps:
S3031: Obtain N preset ratios, and scale up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle.
S3032: Crop the image corresponding to each middle rectangle to obtain the N sub-images.
In practical applications, once the preset ratios are set, the corresponding N is determined. For example, if the preset ratios are 1.1, 1.2, and 1.3, then N = 3: the smallest rectangle is enlarged 1.1, 1.2, and 1.3 times to obtain 3 middle rectangles, and the 3 corresponding sub-images are cropped.
Optionally, the value of N can also be set first, and the smallest rectangle is then scaled up according to the value of N and certain rules.
For example, referring to FIG. 6, a schematic diagram of the middle rectangles provided by an embodiment of this application: as shown in FIG. 6, first set N = 2; then connect a vertex P of the smallest rectangle to the corresponding vertex Q of the largest rectangle to form line segment PQ, and divide PQ into N + 1 = 3 equal parts to obtain the two division points M1 and M2. Taking M1 and M2 as vertices, the smallest rectangle is scaled up to obtain middle rectangle 1 and middle rectangle 2.
In the above example, the smallest rectangle is scaled up according to the rule of taking equal division points of segment PQ; in practical applications, non-equal division points of PQ may also be taken, which is not limited here.
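Taking the N + 1 equal division points of segment PQ is equivalent to interpolating linearly between the smallest and the largest rectangle, which the sketch below uses; the preset-ratio variant of S3031 simply replaces the interpolation with fixed scale factors. A sketch, assuming Pillow-style (left, upper, right, lower) boxes:

    def middle_rectangles(min_rect, max_rect, n):
        # N middle rectangles between min_rect and max_rect, all with the same
        # center and aspect ratio, one per equal division point Mi of segment
        # PQ (S303, FIG. 6).
        rects = []
        for i in range(1, n + 1):
            t = i / (n + 1)  # fraction of the way from P towards Q
            rects.append(tuple(a + t * (b - a)
                               for a, b in zip(min_rect, max_rect)))
        return rects

    # Usage with Pillow, cropping one sub-image per middle rectangle (S3032):
    # from PIL import Image
    # img = Image.open("group_photo.jpg")
    # sub_images = [img.crop(r) for r in middle_rectangles(mn, mx, 2)]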
In the embodiments of this application, the first sub-image is obtained by determining the smallest rectangle containing all portraits in the photo to be processed and cropping the corresponding image; the largest rectangle in the photo to be processed is then determined and the corresponding image is cropped to obtain the second sub-image; finally, N sub-images are cropped based on the largest rectangle and the smallest rectangle. In this way, multiple candidate sub-images are obtained, each guaranteed to contain all portraits; selecting the processed photo among these sub-images effectively guarantees the quality of the processed photo.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Corresponding to the photo processing method described in the above embodiments, FIG. 7 shows a structural block diagram of the photo processing device provided by an embodiment of the present application; for ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 7, the device includes:
an acquiring unit 71, configured to acquire a photo to be processed and crop multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
a calculation unit 72, configured to separately calculate the group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image;
a processing unit 73, configured to determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
Optionally, the acquiring unit 71 includes:
a first determining module, configured to determine the smallest rectangle containing all portraits in the photo to be processed and crop the image corresponding to the smallest rectangle to obtain the first sub-image;
a second determining module, configured to determine the largest rectangle in the photo to be processed and crop the image corresponding to the largest rectangle to obtain the second sub-image, where the largest rectangle and the smallest rectangle have the same center and aspect ratio;
a cropping module, configured to crop N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
where the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
Optionally, the first determining module includes:
a recognition sub-module, configured to perform portrait recognition on the photo to be processed and obtain the coordinates of each pixel corresponding to the recognized portraits;
a point determining sub-module, configured to determine, according to the coordinates, the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side;
a rectangle determining sub-module, configured to determine the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
Optionally, the cropping module includes:
a scaling sub-module, configured to obtain N preset ratios and scale up the smallest rectangle by each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center and the area of each middle rectangle is smaller than the area of the largest rectangle;
a cropping sub-module, configured to crop the image corresponding to each middle rectangle to obtain the N sub-images.
Optionally, the calculation unit 72 includes:
a first calculation sub-module, configured to calculate, for each sub-image, the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image, the face cohesion is used to characterize the facial expressions of the individual portraits in the sub-image, and the body cohesion is used to characterize the body postures of the individual portraits in the sub-image;
a second calculation sub-module, configured to calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
Optionally, the first calculation sub-module is further configured to obtain a trained neural network, where the neural network includes three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion, and to input the sub-image into the neural network for processing to obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
Optionally, the second calculation sub-module is further configured to perform a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
It should be noted that, since the information exchange between and the execution processes of the above devices/units are based on the same concept as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and will not be repeated here.
In addition, the photo processing device shown in FIG. 7 may be a software unit, a hardware unit, or a combined software/hardware unit built into an existing terminal device; it may also be integrated into the terminal device as an independent add-on, or exist as an independent terminal device.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
FIG. 8 is a schematic structural diagram of the photo processing device provided by an embodiment of this application. As shown in FIG. 8, the photo processing device 8 of this embodiment includes: at least one processor 80 (only one is shown in FIG. 8), a memory 81, and a computer program 82 stored in the memory 81 and executable on the at least one processor 80; the processor 80 implements the steps in any of the foregoing photo processing method embodiments when executing the computer program 82.
The photo processing device may be a mobile phone, desktop computer, notebook, palmtop computer, or other equipment with a shooting function. The photo processing device may include, but is not limited to, a processor and a memory. Those skilled in the art can understand that FIG. 8 is only an example of the photo processing device 8 and does not constitute a limitation on the photo processing device 8; it may include more or fewer components than shown, combine certain components, or use different components, and may for example also include input/output devices, network access devices, and so on.
The so-called processor 80 may be a central processing unit (Central Processing Unit, CPU); the processor 80 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.
In some embodiments, the memory 81 may be an internal storage unit of the photo processing device 8, such as a hard disk or memory of the photo processing device 8. In other embodiments, the memory 81 may also be an external storage device of the photo processing device 8, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the photo processing device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the photo processing device 8. The memory 81 is used to store the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it can also be used to temporarily store data that has been output or will be output.
An embodiment of this application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
An embodiment of this application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to realize the steps in each of the foregoing method embodiments when executed.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments of this application can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of the foregoing method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photo processing device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the device/network-device embodiments described above are only illustrative; for example, the division into modules or units is only a division by logical function, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connections displayed or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them; although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the embodiments of this application; they should all be included within the protection scope of this application.

Claims (10)

  1. A photo processing method, characterized by comprising:
    acquiring a photo to be processed, and cropping multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
    separately calculating a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image;
    determining the sub-image corresponding to the highest group cohesion degree as the processed photo.
  2. The photo processing method according to claim 1, characterized in that said cropping multiple sub-images from the photo to be processed comprises:
    determining the smallest rectangle containing all portraits in the photo to be processed, and cropping the image corresponding to the smallest rectangle to obtain a first sub-image;
    determining the largest rectangle in the photo to be processed, and cropping the image corresponding to the largest rectangle to obtain a second sub-image, wherein the largest rectangle and the smallest rectangle have the same center and aspect ratio;
    cropping N sub-images based on the largest rectangle and the smallest rectangle, wherein the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
    wherein the multiple sub-images comprise the first sub-image, the second sub-image, and the N sub-images.
  3. The photo processing method according to claim 2, characterized in that said determining the smallest rectangle containing all portraits in the photo to be processed comprises:
    performing portrait recognition on the photo to be processed, and obtaining the coordinates of each pixel corresponding to the recognized portraits;
    according to the coordinates, determining the edge position point corresponding to each side of the photo to be processed, wherein the edge position point corresponding to a side is the pixel, among all pixels corresponding to the portraits, with the shortest distance to that side;
    determining the smallest rectangle according to the edge position points corresponding to the sides of the photo to be processed.
  4. The photo processing method according to claim 2, characterized in that said cropping N sub-images based on the largest rectangle and the smallest rectangle comprises:
    obtaining N preset ratios, and scaling up the smallest rectangle by each preset ratio to obtain N middle rectangles, wherein each middle rectangle and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle;
    cropping the image corresponding to each middle rectangle to obtain the N sub-images.
  5. The photo processing method according to claim 1, characterized in that said separately calculating the group cohesion degree corresponding to each sub-image comprises:
    for each sub-image, separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, wherein the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image, the face cohesion is used to characterize the facial expressions of the individual portraits in the sub-image, and the body cohesion is used to characterize the body postures of the individual portraits in the sub-image;
    calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion.
  6. The photo processing method according to claim 5, characterized in that said separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image comprises:
    obtaining a trained neural network, the neural network comprising three sub-networks respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion;
    inputting the sub-image into the neural network for processing to obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
  7. The photo processing method according to claim 5, characterized in that said calculating the group cohesion degree corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion comprises:
    performing a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion degree.
  8. A photo processing device, characterized by comprising:
    an acquiring unit, configured to acquire a photo to be processed and crop multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
    a calculation unit, configured to separately calculate a group cohesion degree corresponding to each sub-image, the group cohesion degree being used to characterize the degree of cohesion between the individual portraits in the sub-image;
    a processing unit, configured to determine the sub-image corresponding to the highest group cohesion degree as the processed photo.
  9. A photo processing device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
PCT/CN2020/129181 2019-12-04 2020-11-16 Photo processing method and photo processing device WO2021109863A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911228635.6A CN111062279B (zh) 2019-12-04 2019-12-04 Photo processing method and photo processing device
CN201911228635.6 2019-12-04

Publications (1)

Publication Number Publication Date
WO2021109863A1 (zh) 2021-06-10

Family

ID=70299689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129181 WO2021109863A1 (zh) 2019-12-04 2020-11-16 Photo processing method and photo processing device

Country Status (2)

Country Link
CN (1) CN111062279B (zh)
WO (1) WO2021109863A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062279B (zh) * 2019-12-04 2023-06-06 深圳先进技术研究院 Photo processing method and photo processing device
CN112650873A (zh) * 2020-12-18 2021-04-13 新疆爱华盈通信息技术有限公司 Method and system for implementing a smart photo album, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914689A (zh) * 2014-04-09 2014-07-09 百度在线网络技术(北京)有限公司 Picture cropping method and device based on face recognition
CN104504649A (zh) * 2014-12-30 2015-04-08 百度在线网络技术(北京)有限公司 Picture cropping method and device
CN105718439A (zh) * 2016-03-04 2016-06-29 广州微印信息科技有限公司 Photo typesetting method based on face recognition
CN107545576A (zh) * 2017-07-31 2018-01-05 华南农业大学 Image editing method based on composition rules
CN108062739A (zh) * 2017-11-02 2018-05-22 广东数相智能科技有限公司 Intelligent picture cropping method and device based on subject position
JP2019052985A (ja) * 2017-09-19 2019-04-04 株式会社明電舎 Floc quantitative evaluation device and quantitative evaluation method
CN111062279A (zh) * 2019-12-04 2020-04-24 深圳先进技术研究院 Photo processing method and photo processing device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355549A (zh) * 2016-09-30 2017-01-25 北京小米移动软件有限公司 Photographing method and device
CN107743200A (zh) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 Photographing method and apparatus, computer-readable storage medium, and electronic device
CN108574803B (zh) * 2018-03-30 2020-01-14 Oppo广东移动通信有限公司 Image selection method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN111062279A (zh) 2020-04-24
CN111062279B (zh) 2023-06-06

Similar Documents

Publication Publication Date Title
WO2021057848A1 (zh) Network training method, image processing method, network, terminal device, and medium
WO2020207190A1 (zh) Three-dimensional information determination method, three-dimensional information determination device, and terminal device
WO2020024483A1 (zh) Method and device for processing images
WO2021164269A1 (zh) Disparity map acquisition method and device based on attention mechanism
CN113034358A (zh) Super-resolution image processing method and related device
WO2021189733A1 (zh) Image processing method and device, electronic equipment, and storage medium
CN109766925B (zh) Feature fusion method and device, electronic equipment, and storage medium
WO2021109863A1 (zh) Photo processing method and photo processing device
WO2023124040A1 (zh) Face recognition method and device
CN110147708A (zh) Image data processing method and related device
CN113724391A (zh) Three-dimensional model construction method and device, electronic equipment, and computer-readable medium
CN114493988A (zh) Image blurring method, image blurring device, and terminal device
CN111131688A (zh) Image processing method, device, and mobile terminal
CN110288560A (zh) Image blur detection method and device
TWI711004B (zh) Picture processing method and device
US20160350622A1 (en) Augmented reality and object recognition device
WO2021179923A1 (zh) User facial image display method, display device, and corresponding storage medium
CN113628259A (zh) Image registration processing method and device
WO2022027432A1 (zh) Shooting method, shooting device, and terminal device
CN111784726A (zh) Portrait matting method and device
CN111222446A (zh) Face recognition method, face recognition device, and mobile terminal
WO2021139178A1 (zh) Image synthesis method and related equipment
CN111754411B (zh) Image noise reduction method, image noise reduction device, and terminal device
CN112711984A (zh) Gaze point positioning method, device, and electronic equipment
JP6892557B2 (ja) Learning device, image generation device, learning method, image generation method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896427

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20896427

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.01.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20896427

Country of ref document: EP

Kind code of ref document: A1