WO2021109863A1 - Photo processing method and photo processing device - Google Patents
Photo processing method and photo processing device
- Publication number
- WO2021109863A1 WO2021109863A1 PCT/CN2020/129181 CN2020129181W WO2021109863A1 WO 2021109863 A1 WO2021109863 A1 WO 2021109863A1 CN 2020129181 W CN2020129181 W CN 2020129181W WO 2021109863 A1 WO2021109863 A1 WO 2021109863A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sub
- cohesion
- image
- photo
- rectangle
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- This application belongs to the field of image processing technology, and in particular relates to a photo processing method and a photo processing device.
- the processing method for group photos of people is relatively simple: generally, it only improves the sharpness of the photo and removes obstructions in it. The composition of the group photo is not adjusted, so the integrity and coordination of the picture cannot be guaranteed.
- the embodiments of the present application provide a photo processing method and a photo processing device, which can solve the problem of poor picture integrity and coordination of existing group photos of people.
- an embodiment of the present application provides a photo processing method, including:
- the intercepting multiple sub-images from the photo to be processed includes:
- N sub-images are intercepted based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
- the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
- the determining the smallest rectangle containing all human figures in the to-be-processed photo includes:
- the edge position point corresponding to one side of the photo to be processed is, among all the pixels corresponding to the portrait, the pixel with the shortest distance to that side;
- the minimum rectangle is determined according to the edge position points corresponding to each side of the photo to be processed.
- the capturing N sub-images based on the largest rectangle and the smallest rectangle includes:
- N preset ratios are obtained, and the smallest rectangle is scaled up according to each preset ratio to obtain N middle rectangles, where each middle rectangle and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle;
- the images corresponding to each middle rectangle are respectively intercepted to obtain N sub-images.
- In a possible implementation of the first aspect, the separately calculating the group cohesion degree corresponding to each sub-image includes:
- For each sub-image, calculate the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image;
- the face cohesion is used to characterize the facial expression of each portrait in the sub-image;
- body cohesion is used to characterize the body posture of each person in the sub-image;
- the group cohesion degree corresponding to the sub-image is calculated according to the overall scene cohesion degree, the face cohesion degree and the body cohesion degree.
- the separately calculating the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-images includes:
- the neural network including three sub-networks, and the three sub-networks are respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion;
- the sub-image is input into the neural network for processing, and the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image are obtained.
- the calculation of the group cohesion corresponding to the sub-image according to the overall scene cohesion, face cohesion, and body cohesion includes:
- the overall scene cohesion, the face cohesion, and the body cohesion are weighted and summed according to preset weights to obtain the group cohesion.
- an embodiment of the present application provides a photo processing device, including:
- An acquiring unit configured to acquire a photo to be processed, and to intercept multiple sub-images from the photo to be processed, the sub-images containing at least one portrait;
- a calculation unit configured to separately calculate a group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image;
- the processing unit is used to determine the sub-image corresponding to the highest group cohesion as the processed photo.
- an embodiment of the present application provides a photo processing device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the photo processing method according to any one of the above-mentioned first aspects.
- an embodiment of the present application provides a computer-readable storage medium
- the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the photo processing method according to any one of the above-mentioned first aspects is implemented.
- the embodiments of the present application provide a computer program product, which when the computer program product runs on a terminal device, causes the terminal device to execute the photo processing method described in any one of the above-mentioned first aspects.
- the group cohesion is used to characterize the degree of cohesion between the individual portraits in the sub-image, and this is used as an index to measure the quality of the sub-image; finally, the sub-image corresponding to the highest group cohesion is determined as the processed photo.
- using the group cohesion as the index for determining the processed photo can effectively adjust the composition of the group photo, thereby improving the integrity and coordination of the picture.
- FIG. 1 is a schematic diagram of a photo processing system provided by an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a photo processing method provided by an embodiment of the present application.
- FIG. 3 is a schematic flowchart of a method for capturing sub-images according to an embodiment of the present application
- FIG. 4 is a schematic diagram of the smallest rectangle provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of the largest rectangle and the smallest rectangle provided by an embodiment of the present application.
- Fig. 6 is a schematic diagram of a middle rectangle provided by an embodiment of the present application.
- FIG. 7 is a structural block diagram of a photo processing device provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a photo processing device provided by an embodiment of the present application.
- the photo processing system may include a photographing device 101 and a terminal device 102.
- the photographing device may be a camera, a video camera, a mobile phone with a photographing function, and so on.
- the terminal device can be a mobile phone, a computer, etc.
- the photographing device and the terminal device can be connected in a wired or wireless manner.
- the photographing device sends the photographed photos to the terminal device; the terminal device processes the received photos using the photo processing method provided in this embodiment of the application and displays the processed photos to the user, or returns the processed photos to the photographing device, which then displays them to the user.
- the terminal device 102 may be integrated with the photographing device 101.
- the terminal device not only has a photographing function, but also has a photo processing capability.
- FIG. 2 shows a schematic flowchart of a photo processing method provided by an embodiment of the present application.
- the method may include the following steps:
- S201 Obtain a photo to be processed, and intercept multiple sub-images from the photo to be processed, where the sub-images include at least one portrait.
- the photos to be processed are usually photos of people, that is, the photos include multiple portraits.
- the sub-images contain one or more portraits, preferably all portraits.
- S202 Calculate the group cohesion degree corresponding to each sub-image, where the group cohesion degree is used to represent the degree of cohesion between the individual portraits in the sub-image.
- Group cohesion refers to the degree to which group members are attracted to each other and are willing to stay in the group. It is a resultant force to maintain the effectiveness of group behavior.
- the degree of group cohesion is used as a measurement index to characterize the consistency and cohesion of the various characters in the photo: the higher the group cohesion, the higher the quality of the group photo.
- calculating the group cohesion corresponding to each sub-image in step S202 may include the following steps:
- S21 For each sub-image, respectively calculate the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
- step S21 may include:
- S211 Obtain a trained neural network, where the neural network includes three sub-networks, and the three sub-networks are respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion.
- S212 Input the sub-image into the neural network for processing, and obtain the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image.
- the neural network needs to be trained in advance.
- SE-Net (Squeeze-and-Excitation Network)
- the network structure is pre-trained with the ImageNet data set, and trained and tested with the GAF-Cohesion database. During training, facial expression labels can also be added to assist supervised learning.
- the loss function of this sub-network can adopt cross entropy, mean square error, emotional rank loss (Rank Loss) function and hourglass loss (Hourglass Loss) function.
- the ResNet network structure can be used, pre-trained with the FERPlus data set, and trained and tested with the GAF-Cohesion database.
- the loss function of this sub-network can adopt cross entropy, mean square error, Rank Loss and Hourglass Loss functions.
- Using the trained neural network to calculate the overall scene cohesion, face cohesion, and body cohesion can improve calculation efficiency while ensuring calculation accuracy.
- S22 Calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion degree, the face cohesion degree and the body cohesion degree.
- step S22 may include:
- the overall scene cohesion, the face cohesion, and the body cohesion are weighted and summed according to preset weights to obtain the group cohesion.
- the weight value of the overall scene cohesion h1 is set to w1
- the weight value of the face cohesion h2 is set to w2
- the weight value of the body cohesion h3 is set to w3.
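The weighted summation described above can be sketched in a few lines of Python. The concrete default weight values below are illustrative assumptions only; the embodiment merely states that preset weights w1, w2, and w3 are used:

```python
def group_cohesion(h1, h2, h3, w1=0.4, w2=0.3, w3=0.3):
    """Weighted sum of overall scene cohesion (h1), face cohesion (h2),
    and body cohesion (h3). The default weights are hypothetical."""
    return w1 * h1 + w2 * h2 + w3 * h3
```

For example, with h1 = 0.8, h2 = 0.6, and h3 = 0.7 under these assumed weights, the group cohesion is 0.4*0.8 + 0.3*0.6 + 0.3*0.7 = 0.71.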
- S203 Determine the sub-image corresponding to the highest group cohesion as a processed photo.
- the sub-image with the highest degree of group cohesion is selected as the processed photo.
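Step S203 is effectively an argmax over the candidate sub-images. A minimal sketch, assuming each candidate already has a computed cohesion score (function and variable names are hypothetical):

```python
def select_processed_photo(candidates, cohesion_of):
    """Return the candidate sub-image whose group cohesion is highest.

    candidates: iterable of sub-image handles.
    cohesion_of: callable mapping a sub-image handle to its group cohesion.
    """
    return max(candidates, key=cohesion_of)
```

For instance, given the scores {"sub1": 0.55, "sub2": 0.72, "sub3": 0.61}, the function selects "sub2" as the processed photo.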
- the embodiment of this application obtains a photo to be processed, intercepts multiple sub-images from it, and uses these sub-images as candidate photos; it then calculates the group cohesion degree corresponding to each sub-image, using the group cohesion degree to represent the consistency and cohesion between the various characters in the sub-image as an index of sub-image quality; finally, the sub-image corresponding to the highest group cohesion is determined as the processed photo.
- using the group cohesion as the index for determining the processed photo can effectively improve the processing effect of group photos of people, and further improve their quality.
- intercepting multiple sub-images from the to-be-processed photo may include the following steps:
- S301 Determine the smallest rectangle containing all human figures in the photo to be processed, and intercept the image corresponding to the smallest rectangle to obtain a first sub-image.
- the portrait in the photo to be processed includes a facial image and a body image.
- determining the smallest rectangle containing all human figures in the to-be-processed photo may include the following steps:
- S3011 Perform portrait recognition on the photo to be processed, and obtain the coordinates of each pixel point corresponding to the recognized portrait.
- portrait recognition includes two parts: face recognition and body detection.
- face recognition can use a multi-task convolutional neural network (MTCNN);
- the neural network combines face region detection and face key point detection, and its output contains both the face detection result and the key points of the detected face.
- body detection can be implemented using the open-source human pose estimation project OpenPose, which can estimate the posture of human body and joint movements;
- its output includes the locations of the joint points of the human body.
- portrait recognition is performed on the photo to be processed to obtain a recognition result, which may include the contour of each recognized portrait; the coordinates of the pixels covered by the portrait contours in the photo are then obtained.
- S3012 According to the coordinates, respectively determine the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to one side is, among all the pixels corresponding to the portrait, the pixel with the shortest distance to that side.
- the shape of a photo is generally rectangular, that is, it has 4 sides. For each side, the distance from each pixel corresponding to the recognized portrait to this side can be calculated from the coordinates (that is, a point-to-line distance), and the pixel with the minimum distance is taken as the edge position point corresponding to that side.
- 4 edges correspond to 4 edge position points.
- S3013 Determine the minimum rectangle according to edge position points corresponding to each side of the photo to be processed.
- FIG. 4 is a schematic diagram of the smallest rectangle provided in an embodiment of this application.
- pixel A is the pixel closest to the side a of the photo to be processed
- pixel B is the pixel closest to the side b of the photo to be processed
- pixel point C is the pixel point closest to the side c of the photo to be processed
- pixel point D is the pixel point closest to the side d of the photo to be processed. Therefore, pixel points A, B, C, and D are the edge position points corresponding to sides a, b, c, and d, respectively.
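Because each edge position point is simply the portrait pixel nearest one side of the photo, the smallest rectangle reduces to the axis-aligned bounding box of all portrait pixels. A minimal sketch (helper name hypothetical):

```python
def smallest_rectangle(portrait_pixels):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of all
    pixels covered by the recognized portraits; its four sides pass
    through the edge position points A, B, C, and D."""
    xs = [x for x, _ in portrait_pixels]
    ys = [y for _, y in portrait_pixels]
    return min(xs), min(ys), max(xs), max(ys)
```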
- S302 Determine the largest rectangle in the photo to be processed, and intercept the image corresponding to the largest rectangle to obtain a second sub-image.
- the largest rectangle and the smallest rectangle have the same center and aspect ratio.
- the largest rectangle is the largest rectangle with the same center and the same aspect ratio as the smallest rectangle in the photo to be processed, and the largest rectangle is actually a rectangle obtained by scaling up the smallest rectangle.
- the largest rectangle contains the smallest rectangle, and the sides of the largest rectangle do not intersect with the sides of the smallest rectangle.
- FIG. 5 is a schematic diagram of the largest rectangle and the smallest rectangle provided in an embodiment of this application. As shown in Fig. 5, the smallest rectangle and the largest rectangle have the same center O, and the diagonal of the largest rectangle passes through the apex of the smallest rectangle.
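The largest rectangle can be found by growing the smallest rectangle about their shared center O until one side first touches a border of the photo. A sketch assuming pixel coordinates with the origin at the top-left corner (helper name hypothetical):

```python
def largest_rectangle(min_rect, photo_w, photo_h):
    """Largest rectangle inside a photo of size photo_w x photo_h that
    shares its center and aspect ratio with min_rect = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = min_rect
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2   # shared center O
    hw, hh = (x1 - x0) / 2, (y1 - y0) / 2   # half-width / half-height
    # the scale factor is limited by whichever photo border the
    # growing rectangle would touch first
    s = min(cx / hw, (photo_w - cx) / hw, cy / hh, (photo_h - cy) / hh)
    return cx - s * hw, cy - s * hh, cx + s * hw, cy + s * hh
```

When the smallest rectangle is centered in the photo with the same aspect ratio, the result is the full photo frame, matching the diagonal construction of FIG. 5.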
- the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle.
- the multiple sub-images include a first sub-image, a second sub-image, and N sub-images.
- step S303, intercepting N sub-images based on the largest rectangle and the smallest rectangle, may include the following steps:
- the number N is determined by the number of preset ratios obtained.
- for example, the smallest rectangle is enlarged to 1.1 times, 1.2 times, and 1.3 times, respectively, to obtain 3 middle rectangles, from which 3 corresponding sub-images are intercepted.
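Scaling the smallest rectangle about its center by each preset ratio can be sketched as follows (helper name hypothetical; the default ratios 1.1, 1.2, and 1.3 mirror the example above):

```python
def middle_rectangles(min_rect, ratios=(1.1, 1.2, 1.3)):
    """Scale min_rect = (x0, y0, x1, y1) up about its center by each
    preset ratio, yielding one middle rectangle per ratio; the images
    inside these rectangles are the N intercepted sub-images."""
    x0, y0, x1, y1 = min_rect
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) / 2, (y1 - y0) / 2
    return [(cx - r * hw, cy - r * hh, cx + r * hw, cy + r * hh)
            for r in ratios]
```

In practice each middle rectangle should also be checked against the largest rectangle so that its area stays smaller, as required above.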
- the value of N can also be set first, and then the smallest rectangle is scaled up according to the value of N and according to certain rules.
- FIG. 6 is a schematic diagram of the middle rectangle provided in this embodiment of the present application.
- N is the number of division points taken on the line segment PQ. Taking the equal division points M1 and M2 as vertices, the smallest rectangle is scaled up to obtain middle rectangle 1 and middle rectangle 2.
- the smallest rectangle is scaled up according to the rule of taking the equal division points of the line segment PQ.
- non-equal division points of the line segment PQ can also be taken, which is not limited here.
- the first sub-image is obtained by determining the smallest rectangle containing all portraits in the photo to be processed and intercepting the image corresponding to the smallest rectangle; the largest rectangle in the photo to be processed is then determined, and the image corresponding to the largest rectangle is intercepted to obtain the second sub-image; finally, N sub-images are intercepted based on the largest rectangle and the smallest rectangle.
- FIG. 7 shows a structural block diagram of a photo processing device provided in an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
- the device includes:
- the acquiring unit 71 is configured to acquire a photo to be processed, and to intercept multiple sub-images from the photo to be processed, the sub-images containing at least one portrait.
- the calculation unit 72 is configured to calculate the group cohesion degree corresponding to each sub-image, and the group cohesion degree is used to characterize the degree of cohesion between the individual portraits in the sub-image.
- the processing unit 73 is used to determine the sub-image corresponding to the highest group cohesion as the processed photo.
- the obtaining unit 71 includes:
- the first determining module is configured to determine the smallest rectangle containing all human figures in the photo to be processed, and intercept the image corresponding to the smallest rectangle to obtain the first sub-image.
- the second determining module is used to determine the largest rectangle in the photo to be processed, and to intercept the image corresponding to the largest rectangle to obtain a second sub-image, wherein the largest rectangle and the smallest rectangle have the same center and Aspect ratio.
- the interception module is configured to intercept N sub-images based on the largest rectangle and the smallest rectangle, where the N sub-images and the smallest rectangle have the same center and aspect ratio, and the area of any one of the N sub-images is larger than the area of the smallest rectangle and smaller than the area of the largest rectangle;
- the multiple sub-images include the first sub-image, the second sub-image, and the N sub-images.
- the first determining module includes:
- the recognition sub-module is used to recognize the portrait of the photo to be processed and obtain the coordinates of each pixel corresponding to the recognized portrait.
- the point determination sub-module is used to determine, according to the coordinates, the edge position point corresponding to each side of the photo to be processed, where the edge position point corresponding to one side is, among all the pixels corresponding to the portrait, the pixel with the shortest distance to that side.
- the rectangle determining sub-module is configured to determine the minimum rectangle according to the edge position points corresponding to each side of the photo to be processed.
- the interception module includes:
- the scaling sub-module is used to obtain N preset ratios and scale up the smallest rectangle according to each preset ratio to obtain N middle rectangles, where the middle rectangles and the smallest rectangle have the same center, and the area of each middle rectangle is smaller than the area of the largest rectangle.
- the interception sub-module is used to separately intercept the image corresponding to each middle rectangle to obtain N sub-images.
- the calculation unit 72 includes:
- the first calculation sub-module is used to calculate, for each sub-image, the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image, where the overall scene cohesion is used to characterize the degree of cohesion between the portraits and the background in the sub-image;
- the face cohesion is used to characterize the facial expressions of each portrait in the sub-image; the body cohesion is used to characterize the body posture of each portrait in the sub-image.
- the second calculation sub-module is used to calculate the group cohesion degree corresponding to the sub-image according to the overall scene cohesion degree, face cohesion degree and body cohesion degree.
- the first calculation sub-module is also used to obtain a trained neural network
- the neural network includes three sub-networks, and the three sub-networks are respectively used to calculate the overall scene cohesion, the face cohesion, and the body cohesion;
- the sub-image is input into the neural network for processing, and the overall scene cohesion, face cohesion, and body cohesion corresponding to the sub-image are obtained.
- the second calculation submodule is further configured to perform a weighted summation of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion.
- the photo processing device shown in FIG. 7 may be a software unit, a hardware unit, or a combination of software and hardware built into an existing terminal device, may be integrated into the terminal device as an independent add-on, or may exist as an independent terminal device.
- FIG. 8 is a schematic structural diagram of a photo processing device provided by an embodiment of the application.
- the photo processing device 8 of this embodiment includes: at least one processor 80 (only one is shown in FIG. 8), a memory 81, and a computer program 82 stored in the memory 81 and runnable on the at least one processor 80;
- when the processor 80 executes the computer program 82, the steps in any of the foregoing photo processing method embodiments are implemented.
- the photo processing device may be a mobile phone, a desktop computer, a notebook, a palmtop computer and other equipment with a shooting function.
- the photo processing device may include, but is not limited to, a processor and a memory.
- FIG. 8 is only an example of the photo processing device 8 and does not constitute a limitation on it; the device may include more or fewer components than those shown, combine certain components, or arrange the components differently;
- for example, it may also include input and output devices, network access devices, and so on.
- the so-called processor 80 may be a central processing unit (Central Processing Unit, CPU); the processor 80 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 81 may be an internal storage unit of the photo processing device 8 in some embodiments, such as a hard disk or a memory of the photo processing device 8. In other embodiments, the memory 81 may also be an external storage device of the photo processing device 8, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the photo processing device 8. Further, the memory 81 may include both an internal storage unit of the photo processing device 8 and an external storage device. The memory 81 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 81 can also be used to temporarily store data that has been output or will be output.
- the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
- the embodiments of the present application provide a computer program product, which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the foregoing method embodiments.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
- the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photo processing device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a mobile hard disk, a floppy disk, or a CD-ROM.
- in some jurisdictions, according to legislation and patent practice, computer-readable media cannot be electrical carrier signals and telecommunication signals.
- the disclosed apparatus/network equipment and method may be implemented in other ways.
- the device/network device embodiments described above are only illustrative.
- the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
Abstract
Description
Claims (10)
- A photo processing method, comprising: acquiring a photo to be processed and cropping a plurality of sub-images from the photo to be processed, each sub-image containing at least one human figure; computing, for each sub-image, a corresponding group cohesion, the group cohesion characterizing the degree of cohesion among the human figures in the sub-image; and determining the sub-image with the highest group cohesion as the processed photo.
- The photo processing method of claim 1, wherein cropping a plurality of sub-images from the photo to be processed comprises: determining the smallest rectangle in the photo that contains all human figures, and cropping the image corresponding to the smallest rectangle to obtain a first sub-image; determining the largest rectangle in the photo, and cropping the image corresponding to the largest rectangle to obtain a second sub-image, the largest rectangle having the same center and aspect ratio as the smallest rectangle; and cropping N sub-images based on the largest rectangle and the smallest rectangle, the N sub-images having the same center and aspect ratio as the smallest rectangle, the area of each of the N sub-images being larger than that of the smallest rectangle and smaller than that of the largest rectangle; the plurality of sub-images comprising the first sub-image, the second sub-image, and the N sub-images.
- The photo processing method of claim 2, wherein determining the smallest rectangle containing all human figures comprises: performing human-figure recognition on the photo to be processed, and obtaining the coordinates of the pixels of each recognized figure; determining, from these coordinates, an edge position point for each edge of the photo, the edge position point of an edge being the figure pixel with the shortest distance to that edge; and determining the smallest rectangle from the edge position points of the edges of the photo.
- The photo processing method of claim 2, wherein cropping N sub-images based on the largest rectangle and the smallest rectangle comprises: obtaining N preset ratios, and enlarging the smallest rectangle by each preset ratio to obtain N intermediate rectangles, each intermediate rectangle having the same center as the smallest rectangle and an area smaller than that of the largest rectangle; and cropping the image corresponding to each intermediate rectangle to obtain the N sub-images.
- The photo processing method of claim 1, wherein computing the group cohesion of each sub-image comprises: for each sub-image, computing an overall scene cohesion, a face cohesion, and a body cohesion, the overall scene cohesion characterizing the cohesion between the human figures and the background in the sub-image, the face cohesion characterizing the facial expressions of the figures, and the body cohesion characterizing the body postures of the figures; and computing the group cohesion of the sub-image from the overall scene cohesion, the face cohesion, and the body cohesion.
- The photo processing method of claim 5, wherein computing the overall scene cohesion, face cohesion, and body cohesion of the sub-image comprises: obtaining a trained neural network comprising three sub-networks, the three sub-networks being used to compute the overall scene cohesion, the face cohesion, and the body cohesion respectively; and feeding the sub-image into the neural network for processing to obtain its overall scene cohesion, face cohesion, and body cohesion.
- The photo processing method of claim 5, wherein computing the group cohesion of the sub-image from the overall scene cohesion, face cohesion, and body cohesion comprises: forming a weighted sum of the overall scene cohesion, the face cohesion, and the body cohesion according to preset weights to obtain the group cohesion.
- A photo processing apparatus, comprising: an acquisition unit configured to acquire a photo to be processed and crop a plurality of sub-images from it, each sub-image containing at least one human figure; a computation unit configured to compute, for each sub-image, a corresponding group cohesion, the group cohesion characterizing the degree of cohesion among the human figures in the sub-image; and a processing unit configured to determine the sub-image with the highest group cohesion as the processed photo.
- A photo processing apparatus, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 1 to 7.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
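The cropping and scoring pipeline of the claims above can be sketched in Python. This is a minimal illustration only: the `Rect` type, the preset ratios, and the weights `(0.4, 0.3, 0.3)` are hypothetical stand-ins not taken from the disclosure, and the trained three-branch neural network of claim 6 is replaced by an arbitrary scoring callable.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass(frozen=True)
class Rect:
    cx: float  # center x
    cy: float  # center y
    w: float   # width
    h: float   # height

def min_bounding_rect(pixels: Sequence[Tuple[float, float]]) -> Rect:
    # Claim 3: the smallest rectangle enclosing every pixel of the
    # recognized figures, derived from the pixels nearest each photo edge.
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return Rect((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2,
                max(xs) - min(xs), max(ys) - min(ys))

def scaled(rect: Rect, factor: float) -> Rect:
    # Enlarge about the same center, preserving the aspect ratio.
    return Rect(rect.cx, rect.cy, rect.w * factor, rect.h * factor)

def candidate_rects(min_rect: Rect, max_factor: float,
                    ratios: Sequence[float]) -> List[Rect]:
    # Claims 2 and 4: the smallest rectangle, N intermediate enlargements,
    # and the largest rectangle, all sharing the same center and aspect ratio.
    mids = [scaled(min_rect, r) for r in ratios if 1.0 < r < max_factor]
    return [min_rect] + mids + [scaled(min_rect, max_factor)]

def group_cohesion(scene: float, face: float, body: float,
                   weights: Tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    # Claim 7: weighted sum of scene, face, and body cohesion.
    return weights[0] * scene + weights[1] * face + weights[2] * body

def best_crop(candidates: Sequence[Rect],
              score: Callable[[Rect], float]) -> Rect:
    # Claim 1: keep the candidate crop with the highest group cohesion.
    return max(candidates, key=score)
```

In the patent, `score` would be the trained network that outputs the three cohesion values per crop; here any callable over a `Rect` works, which is what makes the sketch self-contained.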
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911228635.6A CN111062279B (zh) | 2019-12-04 | 2019-12-04 | Photo processing method and photo processing apparatus |
CN201911228635.6 | 2019-12-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021109863A1 (zh) | 2021-06-10 |
Family
ID=70299689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/129181 WO2021109863A1 (zh) | Photo processing method and photo processing apparatus | 2019-12-04 | 2020-11-16 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111062279B (zh) |
WO (1) | WO2021109863A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062279B (zh) * | 2019-12-04 | 2023-06-06 | 深圳先进技术研究院 | Photo processing method and photo processing apparatus |
CN112650873A (zh) * | 2020-12-18 | 2021-04-13 | 新疆爱华盈通信息技术有限公司 | Smart photo album implementation method and system, electronic apparatus and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914689A (zh) * | 2014-04-09 | 2014-07-09 | 百度在线网络技术(北京)有限公司 | Face-recognition-based picture cropping method and apparatus |
CN104504649A (zh) * | 2014-12-30 | 2015-04-08 | 百度在线网络技术(北京)有限公司 | Picture cropping method and apparatus |
CN105718439A (zh) * | 2016-03-04 | 2016-06-29 | 广州微印信息科技有限公司 | Face-recognition-based photo layout method |
CN107545576A (zh) * | 2017-07-31 | 2018-01-05 | 华南农业大学 | Image editing method based on composition rules |
CN108062739A (zh) * | 2017-11-02 | 2018-05-22 | 广东数相智能科技有限公司 | Intelligent picture cropping method and apparatus based on subject position |
JP2019052985A (ja) * | 2017-09-19 | 2019-04-04 | 株式会社明電舎 | Floc quantitative evaluation device and quantitative evaluation method |
CN111062279A (zh) * | 2019-12-04 | 2020-04-24 | 深圳先进技术研究院 | Photo processing method and photo processing apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106355549A (zh) * | 2016-09-30 | 2017-01-25 | 北京小米移动软件有限公司 | Photographing method and device |
CN107743200A (zh) * | 2017-10-31 | 2018-02-27 | 广东欧珀移动通信有限公司 | Photographing method and apparatus, computer-readable storage medium, and electronic device |
CN108574803B (zh) * | 2018-03-30 | 2020-01-14 | Oppo广东移动通信有限公司 | Image selection method and apparatus, storage medium, and electronic device |
- 2019
  - 2019-12-04: CN application CN201911228635.6A filed; granted as patent CN111062279B (zh), status Active
- 2020
  - 2020-11-16: WO application PCT/CN2020/129181 filed as WO2021109863A1 (zh), status Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111062279A (zh) | 2020-04-24 |
CN111062279B (zh) | 2023-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021057848A1 (zh) | Network training method, image processing method, network, terminal device and medium | |
WO2020207190A1 (zh) | Three-dimensional information determination method, three-dimensional information determination apparatus and terminal device | |
WO2020024483A1 (zh) | Method and apparatus for processing images | |
WO2021164269A1 (zh) | Attention-mechanism-based disparity map acquisition method and apparatus | |
CN113034358B (zh) | Super-resolution image processing method and related apparatus | |
WO2021189733A1 (zh) | Image processing method and apparatus, electronic device, and storage medium | |
CN109766925B (zh) | Feature fusion method and apparatus, electronic device, and storage medium | |
WO2021109863A1 (zh) | Photo processing method and photo processing apparatus | |
WO2023124040A1 (zh) | Face recognition method and apparatus | |
CN110147708A (zh) | Image data processing method and related apparatus | |
CN113724391A (zh) | Three-dimensional model construction method and apparatus, electronic device, and computer-readable medium | |
CN114493988A (zh) | Image blurring method, image blurring apparatus, and terminal device | |
CN111131688A (zh) | Image processing method and apparatus, and mobile terminal | |
CN110288560A (zh) | Image blur detection method and apparatus | |
TWI711004B (zh) | Picture processing method and apparatus | |
US20160350622A1 (en) | Augmented reality and object recognition device | |
WO2021179923A1 (zh) | User facial image display method, display apparatus, and corresponding storage medium | |
CN113628259A (zh) | Image registration processing method and apparatus | |
WO2022027432A1 (zh) | Photographing method, photographing apparatus, and terminal device | |
CN111784726A (zh) | Portrait matting method and apparatus | |
CN111222446A (zh) | Face recognition method, face recognition apparatus, and mobile terminal | |
WO2021139178A1 (zh) | Image synthesis method and related device | |
CN111754411B (zh) | Image noise reduction method, image noise reduction apparatus, and terminal device | |
CN112711984A (zh) | Gaze point positioning method and apparatus, and electronic device | |
JP6892557B2 (ja) | Learning device, image generation device, learning method, image generation method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20896427; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20896427; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.01.2023) |