CN116703701B - Picture clipping method and electronic equipment - Google Patents


Info

Publication number
CN116703701B
CN116703701B (application CN202211620168.3A)
Authority
CN
China
Prior art keywords
boundary
region
original picture
face
clipping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211620168.3A
Other languages
Chinese (zh)
Other versions
CN116703701A (en)
Inventor
Li Ningsheng (李宁生)
Yuan Wenbo (袁文波)
Chen Zhenchong (陈振冲)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211620168.3A
Publication of CN116703701A
Application granted
Publication of CN116703701B
Legal status: Active

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T3/00 Geometric image transformations in the plane of the image
            • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
              • G06T3/4046 Scaling of whole images or parts thereof using neural networks
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/70 Using pattern recognition or machine learning
              • G06V10/82 Using neural networks
          • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
              • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V40/161 Detection; localisation; normalisation
                • G06V40/168 Feature extraction; face representation
                • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The present application provides a picture clipping method and an electronic device, relates to the technical field of image processing, and aims to make the clipped picture centered on the human face without deformation. The method includes: acquiring an original picture; performing face detection on the original picture to obtain a face region, where the face region is the smallest region covering all faces in the original picture and is smaller than or equal to the region where the original picture is located; determining, with the face region as the center, a first clipping region according to the face region and the proportion of the original picture, where the first clipping region is a region of the original picture covering the face region, the proportion of the first clipping region is consistent with that of the original picture, and the first clipping region is larger than or equal to the face region; and clipping the original picture according to the first clipping region to generate a first clipping picture.

Description

Picture clipping method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a picture clipping method and an electronic device.
Background
With the development of internet technology, electronic devices such as mobile phones and tablet computers generally have many application programs (apps) installed, such as camera applications and chat applications. People can take photos or record video with a camera application, or make video calls with a chat application. Most camera and chat applications provide an automatic face-focusing function: after the electronic device captures an original picture, it clips the picture and then enlarges the clipped picture to the size of the original picture for display, so that the person appears at the center of the frame and the user does not need to manually adjust the camera angle.
The clipping method provided in the prior art mainly clips with the face centered: it first identifies a face region in the original picture and then clips the original picture along that face region. However, because the size of the face region is uncontrollable, the proportion of the resulting clipped picture is also uncontrollable, which easily causes face deformation when the clipped picture is enlarged, so the clipping effect is poor.
Disclosure of Invention
The embodiments of the present application provide a picture clipping method and an electronic device, which are used to solve the problem of face deformation after picture clipping.
To achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a picture clipping method, the method comprising: acquiring an original picture; performing face detection on the original picture to obtain a face region, wherein the face region is the smallest region covering all faces in the original picture and is smaller than or equal to the region where the original picture is located; determining, with the face region as the center, a first clipping region according to the face region and the proportion of the original picture, wherein the first clipping region is a region of the original picture covering the face region, the proportion of the first clipping region is consistent with that of the original picture, and the first clipping region is larger than or equal to the face region; and clipping the original picture according to the first clipping region to generate a first clipping picture.
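The steps above can be sketched in Python. All names here are illustrative assumptions, and the direct ratio-matching computation is a simplification: the patent itself determines the region size via a candidate-region search described later, not this closed-form calculation.

```python
# Hypothetical sketch of the first-aspect pipeline: take detected face boxes,
# form the smallest region covering all of them, expand it to the original
# picture's proportion, and clamp it inside the picture.

def union_face_region(face_boxes):
    """face_boxes: list of (left, top, right, bottom) pixel boxes.
    Returns the smallest box covering every face."""
    lefts, tops, rights, bottoms = zip(*face_boxes)
    return (min(lefts), min(tops), max(rights), max(bottoms))

def first_clipping_region(face_box, img_w, img_h):
    """Smallest region covering face_box whose width:height matches the
    original picture and which lies entirely inside the picture."""
    l, t, r, b = face_box
    fw, fh = r - l, b - t
    if fw * img_h >= fh * img_w:                  # face box relatively wide
        w, h = fw, (fw * img_h + img_w - 1) // img_w
    else:                                         # face box relatively tall
        w, h = (fh * img_w + img_h - 1) // img_h, fh
    w, h = min(w, img_w), min(h, img_h)
    cx, cy = (l + r) // 2, (t + b) // 2           # center on the face region
    x0 = min(max(cx - w // 2, 0), img_w - w)      # clamp inside the picture
    y0 = min(max(cy - h // 2, 0), img_h - h)
    return (x0, y0, x0 + w, y0 + h)
```

For a 640x480 picture with faces spanning (100, 100, 380, 220), this yields a 280x210 region, keeping the picture's 4:3 proportion.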
It can be understood that, since the proportion of the first clipping region is consistent with the proportion of the original picture, the proportion of the finally obtained first clipping picture is also consistent with that of the original picture. This avoids the face deformation that occurs when an unsuitable clipping proportion causes the image to be stretched inconsistently in different directions, thereby ensuring the quality of the clipped picture.
In an implementation manner provided in the first aspect, performing face detection on the original picture to obtain the face region includes: performing face detection on the original picture to obtain position information of at least one face subregion, wherein each face subregion is the smallest region covering one face in the original picture; and determining the position information of the face region according to the position information of the at least one face subregion.
In one embodiment provided in the first aspect, the position information of a region includes a first distance, a second distance, a third distance, and a fourth distance of the region, where the first distance is the pixel distance from the left boundary of the region to the left boundary of the original picture, the second distance is the pixel distance from the right boundary of the region to the right boundary of the original picture, the third distance is the pixel distance from the upper boundary of the region to the upper boundary of the original picture, and the fourth distance is the pixel distance from the lower boundary of the region to the lower boundary of the original picture. Determining the position information of the face region according to the position information of the at least one face subregion includes: taking the minimum first distance, the minimum second distance, the minimum third distance, and the minimum fourth distance among the position information of the at least one face subregion as the first distance, the second distance, the third distance, and the fourth distance of the face region, respectively, to obtain the position information of the face region. In this way, the face region just covers all faces.
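As a rough illustration of this minimum-distance rule, assume each sub-region's position is the tuple of its four boundary-to-picture distances; the tuple layout and function name are assumptions for this sketch.

```python
# Each sub-region is (first, second, third, fourth): pixel distances from its
# left, right, upper, and lower boundaries to the corresponding boundaries of
# the original picture. The covering face region takes the minimum of each.

def union_by_distances(sub_regions):
    return tuple(min(region[i] for region in sub_regions) for i in range(4))
```

Taking the minimum of each distance pushes every boundary of the face region just far enough outward to reach the outermost face.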
In an embodiment provided in the first aspect, the position information of a region includes the coordinates of two vertices on a diagonal of the region. Determining the position information of the face region according to the position information of the at least one face subregion includes: taking the coordinates determined from the minimum abscissa value, the minimum ordinate value, the maximum abscissa value, and the maximum ordinate value among the position information of the at least one face subregion as the position information of the face region. In this way, the face region just covers all faces.
In an embodiment provided in the first aspect, the first clipping region is the smallest region in the original picture covering the face region, and its proportion is consistent with that of the original picture. It can be understood that, in the first clipping region provided in this embodiment, the face occupies the largest possible share of the picture, so the picture clipped according to the first clipping region is clearer.
In an implementation manner provided in the first aspect, determining, with the face region as the center, the first clipping region according to the face region and the proportion of the original picture includes: acquiring n regions, where the sizes of the n regions increase sequentially, the size of a region consists of its width and height, the proportion of each of the n regions is the same as that of the original picture, and the width of the i-th region is i times the width of the 1st region, where i ≤ n; if the size of the i-th region is insufficient to cover the face region and the size of the (i+1)-th region is sufficient to cover the face region, determining the size of the (i+1)-th region as the size of the first clipping region; and determining the position of the first clipping region according to the size of the first clipping region and the position information of the face region. In this way, the first clipping region is the smallest of all regions that cover the face region while keeping the proportion of the original picture.
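A minimal sketch of this candidate-region search follows. It assumes the n candidate sizes are n equal steps from (img_w/n, img_h/n) up to the full picture, which keeps the i-th width equal to i times the 1st width; the patent does not fix the 1st region's size, so that choice is illustrative.

```python
# Pick the smallest candidate region (same proportion as the picture) that
# covers the face region; fall back to the whole picture if none does.

def pick_clipping_size(face_w, face_h, img_w, img_h, n):
    base_w, base_h = img_w // n, img_h // n   # assumed size of the 1st region
    for i in range(1, n + 1):
        w, h = i * base_w, i * base_h         # i-th candidate, same proportion
        if w >= face_w and h >= face_h:       # first size covering the face
            return w, h
    return img_w, img_h                       # face region spans the picture
```
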
In an implementation manner provided in the first aspect, determining the position of the first clipping region according to the size of the first clipping region and the position information of the face region includes: determining a first interval, a second interval, a third interval, and a fourth interval according to the size of the first clipping region and the position information of the face region, where the sum of the first interval and the second interval is a first value, the sum of the third interval and the fourth interval is a second value, and the four intervals are positive integers; taking the boundary obtained by moving the upper boundary of the face region upwards by the third interval as the upper boundary of the first clipping region, and the boundary obtained by moving the lower boundary of the face region downwards by the fourth interval as the lower boundary of the first clipping region; determining the left and right boundaries of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval, and the second interval; and determining the position of the first clipping region according to its left, right, upper, and lower boundaries.
In one embodiment provided in the first aspect, the original picture is in YUYV format or UYVY format, and both the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture are even. In this way, two pixels sharing the same U and V components are never separated during clipping, so the Y, U, and V components of every pixel in the first clipping picture are complete, which reduces the probability of color deviation in the first clipping picture.
In an embodiment provided in the first aspect, determining the left and right boundaries of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval, and the second interval includes: obtaining a first boundary by moving the left boundary of the face region leftwards by the first interval; if the pixel distance from the first boundary to the left boundary of the original picture is odd, taking the boundary obtained by shifting the first boundary left or right by one pixel as the left boundary of the first clipping region, and taking the boundary obtained by shifting the left boundary of the first clipping region right by the width of the (i+1)-th region as the right boundary of the first clipping region; if the pixel distance from the first boundary to the left boundary of the original picture is even, taking the first boundary as the left boundary of the first clipping region, and taking the boundary obtained by moving the right boundary of the face region rightwards by the second interval as the right boundary of the first clipping region. This ensures that the pixel distances from the left and right boundaries of the first clipping region to the corresponding boundaries of the original picture are both even.
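The parity adjustment can be sketched as follows; shifting left by one pixel is one of the two options the text permits, and the function name is an assumption.

```python
# In YUYV/UYVY, horizontally adjacent pixel pairs share one U and one V
# sample, so the crop's left offset should be even. Given the tentative left
# boundary's pixel distance to the picture's left edge, return an even one.

def align_left_boundary(first_boundary):
    # shift left by one pixel when the tentative offset is odd
    return first_boundary if first_boundary % 2 == 0 else first_boundary - 1
```
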
In an embodiment provided in the first aspect, the original picture is in neither YUYV format nor UYVY format, and determining the left and right boundaries of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval, and the second interval includes: taking the boundary obtained by moving the left boundary of the face region leftwards by the first interval as the left boundary of the first clipping region, and the boundary obtained by moving the right boundary of the face region rightwards by the second interval as the right boundary of the first clipping region.
In an implementation manner provided in the first aspect, determining the first pitch, the second pitch, the third pitch, and the fourth pitch according to the size of the first clipping region and the position information of the face region includes: if the left boundary of the face area coincides with the left boundary of the original picture or the pixel distance from the left boundary of the face area to the left boundary of the original picture is smaller than a third value, determining that the first interval is 0, and determining that the second interval is the first value. It will be appreciated that in the case where the left boundary of the face region coincides with the left boundary of the original picture, continued movement of the left boundary of the face region to the left may exceed the left boundary of the original picture. When the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than the third value, the left boundary of the face region will also exceed the left boundary of the original picture after being shifted to the left by half of the first value. Therefore, the first interval is determined to be 0, and the second interval is determined to be the first value minus 0, i.e. the first value.
In an implementation manner provided in the first aspect, determining the first pitch, the second pitch, the third pitch, and the fourth pitch according to the size of the first clipping region and the position information of the face region includes: and if the right boundary of the face area coincides with the right boundary of the original picture or the pixel distance from the right boundary of the face area to the right boundary of the original picture is smaller than a third value, determining that the second interval is 0. It will be appreciated that in the case where the right boundary of the face region coincides with the right boundary of the original picture, continued movement of the right boundary of the face region to the right may exceed the right boundary of the original picture. When the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than the third value, the right boundary of the face region will also exceed the right boundary of the original picture after being shifted to the right by half of the first value. Therefore, the second interval is determined to be 0, and the first interval is determined to be the first value minus 0, i.e. the first value.
In an implementation manner provided in the first aspect, determining the first pitch, the second pitch, the third pitch, and the fourth pitch according to the size of the first clipping region and the position information of the face region includes: if the upper boundary of the face area coincides with the upper boundary of the original picture or the pixel distance from the upper boundary of the face area to the upper boundary of the original picture is smaller than a fourth value, determining that the third interval is 0, and determining that the fourth interval is the second value.
In an implementation manner provided in the first aspect, determining the first pitch, the second pitch, the third pitch, and the fourth pitch according to the size of the first clipping region and the position information of the face region includes: if the lower boundary of the face area coincides with the lower boundary of the original picture or the pixel distance from the lower boundary of the face area to the lower boundary of the original picture is smaller than a fourth value, determining that the fourth interval is 0, and determining that the third interval is a second value.
In an implementation manner provided in the first aspect, determining the first pitch, the second pitch, the third pitch, and the fourth pitch according to the size of the first clipping region and the position information of the face region includes: if the boundary of the face region is not coincident with the boundary of the original picture and the pixel distance from the left boundary of the face region to the left boundary of the original picture is greater than or equal to a third value, the pixel distance from the right boundary of the face region to the right boundary of the original picture is greater than or equal to a third value, the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is greater than or equal to a fourth value, the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is greater than or equal to the fourth value, the first interval and the second interval are determined to be half of the first value, and the third interval and the fourth interval are determined to be half of the second value.
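The interval determination across these cases can be sketched for one axis, assuming the "third value" is half of the first value, as the reasoning in the preceding cases suggests; the function name and signature are hypothetical.

```python
# Split the horizontal slack (the first value, i.e. region width minus face
# width) into the first and second intervals, covering the edge cases above.

def split_gap(total, dist_left, dist_right):
    """dist_left / dist_right: pixel distances from the face region's left /
    right boundary to the matching boundary of the original picture.
    Returns (first_interval, second_interval), summing to `total`."""
    half = total // 2
    if dist_left < half:       # left boundary at or near the picture edge
        return 0, total        # all slack goes to the right
    if dist_right < half:      # right boundary at or near the picture edge
        return total, 0        # all slack goes to the left
    return half, total - half  # otherwise split evenly, keeping the face centered
```

The same split applies vertically with the second value and the "fourth value" threshold.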
In one embodiment provided in the first aspect, the method further includes: enlarging the first clipping picture to the size of the original picture; and displaying the enlarged first clipping picture. In this way, the picture displayed by the electronic device is centered on the user's face, and the user does not need to adjust the angle and distance of the lens manually, which is more convenient and comfortable.
In an implementation manner provided in the first aspect, before clipping the original picture according to the first clipping region to generate the first clipping picture, the method further includes: determining step lengths according to the position information of the first clipping region and the size of the original picture; shrinking the boundaries of the original picture according to the step lengths to obtain a plurality of second clipping regions; clipping the original picture according to the plurality of second clipping regions to obtain a plurality of second clipping pictures; and enlarging the plurality of second clipping pictures to the size of the original picture and displaying them in sequence. This increases the number of frames displayed by the electronic device over a period of time and reduces the visual lag caused by the processing delay of the algorithm.
In an embodiment provided in the first aspect, the step lengths include a first step length, a second step length, a third step length, and a fourth step length, where the sum of the first and second step lengths is a preset horizontal step length, the sum of the third and fourth step lengths is a preset vertical step length, the ratio of the horizontal step length to the vertical step length is the same as the proportion of the original picture, and the four step lengths are positive integers. Shrinking the boundaries of the original picture according to the step lengths to obtain the plurality of second clipping regions includes: moving the left boundary of the original picture rightwards by the first step length, the right boundary leftwards by the second step length, the upper boundary downwards by the third step length, and the lower boundary upwards by the fourth step length, to obtain the plurality of second clipping regions.
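The boundary-shrinking loop described above might look like this sketch (names assumed); each iteration yields one second clipping region until the first clipping region is reached.

```python
# Starting from the full picture, move the four boundaries inward by their
# step lengths each frame, clamped at the first clipping region.

def second_regions(img_w, img_h, target, steps):
    """target: first clipping region (l, t, r, b); steps: per-frame boundary
    movements (left, right, top, bottom). Returns the intermediate regions."""
    tl, tt, tr, tb = target
    sl, sr, st, sb = steps
    l, t, r, b = 0, 0, img_w, img_h
    regions = []
    while l < tl or r > tr or t < tt or b > tb:
        l, r = min(l + sl, tl), max(r - sr, tr)
        t, b = min(t + st, tt), max(b - sb, tb)
        regions.append((l, t, r, b))
    return regions
```

With steps proportional to the four gaps, every boundary reaches the target in the same number of frames, so the intermediate regions keep (approximately) the original proportion.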
In an implementation manner provided in the first aspect, determining the step lengths according to the position information of the first clipping region and the size of the original picture includes: determining a first pixel distance, a second pixel distance, a third pixel distance, and a fourth pixel distance according to the position information of the first clipping region and the size of the original picture, where the first pixel distance is the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture, the second pixel distance is the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture, the third pixel distance is the pixel distance from the upper boundary of the first clipping region to the upper boundary of the original picture, and the fourth pixel distance is the pixel distance from the lower boundary of the first clipping region to the lower boundary of the original picture; determining the first step length and the second step length according to the first pixel distance, the second pixel distance, and a preset horizontal step length; and determining the third step length and the fourth step length according to the third pixel distance, the fourth pixel distance, and a preset vertical step length.
In one embodiment provided in the first aspect, the third pixel distance, the fourth pixel distance, the preset vertical step size, the third step size, and the fourth step size satisfy the following formula:
speed_top=[top_gap/(top_gap+bottom_gap)*speed_y]
speed_bottom=speed_y-speed_top
where speed_top is the third step length, speed_bottom is the fourth step length, speed_y is the preset vertical step length, top_gap is the third pixel distance, bottom_gap is the fourth pixel distance, and [top_gap/(top_gap+bottom_gap)*speed_y] denotes rounding top_gap/(top_gap+bottom_gap)*speed_y.
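A sketch of this proportional split follows; the bracket operator is taken to mean nearest-integer rounding, which is an assumption, since the text only says "rounding".

```python
# Split a preset step length into two parts proportional to the remaining
# gaps on each side; the second part takes the remainder so the two always
# sum to the preset (e.g. speed_top/speed_bottom from speed_y).

def split_step(gap_a, gap_b, preset):
    step_a = round(gap_a / (gap_a + gap_b) * preset) if gap_a + gap_b else 0
    return step_a, preset - step_a
```

Because both sides shrink in proportion to their gaps, they reach the first clipping region at (nearly) the same time.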
In an embodiment provided in the first aspect, the original picture is in YUYV format or UYVY format, and the first step size and the second step size are even numbers.
In one embodiment provided in the first aspect, the first pixel distance, the second pixel distance, the preset horizontal step size, the first step size, and the second step size satisfy the following formula:
speed_left=[left_gap/(left_gap+right_gap)*speed_x]
speed_right=speed_x-speed_left
where speed_left is the first step length, speed_right is the second step length, speed_x is the preset horizontal step length, left_gap is the first pixel distance, right_gap is the second pixel distance, and [left_gap/(left_gap+right_gap)*speed_x] denotes rounding left_gap/(left_gap+right_gap)*speed_x.
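For YUYV/UYVY sources, where the horizontal step lengths must be even, one plausible adjustment (an assumption; the patent only states the evenness requirement) is to round the proportional share down to an even number, which makes both steps even whenever speed_x itself is even.

```python
# Hypothetical even-step variant of the horizontal split for YUYV/UYVY:
# compute the proportional share, then force it down to an even number.

def even_split_step(left_gap, right_gap, speed_x):
    tmp = round(left_gap / (left_gap + right_gap) * speed_x)
    speed_left = tmp - (tmp % 2)      # force the left step to be even
    return speed_left, speed_x - speed_left
```
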
In a second aspect, the present application provides a picture clipping method applied to an electronic device. The method includes: receiving a first event for triggering the electronic device to focus on a face, where the first event is one of: an operation of the user answering a video call, or an operation of the user opening a camera application; and, in response to the first event, acquiring an original picture through the camera and displaying a first interface. The first interface includes an enlarged first clipping picture, where the first clipping picture is obtained by clipping the original picture, the size of the enlarged first clipping picture is the same as that of the original picture, the proportion of the first clipping picture is consistent with that of the original picture, and the first clipping picture is centered on the smallest region covering all faces in the original picture. Therefore, the picture displayed by the electronic device is centered on the user's face, and the user does not need to adjust the angle and distance of the lens manually, which simplifies operation and is more convenient and comfortable.
In one embodiment provided in the second aspect, before displaying the first interface, the method further includes: sequentially displaying a plurality of second interfaces, where each second interface includes an enlarged second clipping picture; the size of each enlarged second clipping picture is the same as that of the original picture, the size of each second clipping picture before enlargement is larger than that of the first clipping picture, and the smallest region covering all faces in the enlarged second clipping pictures increases sequentially according to the display order of the plurality of second interfaces. This increases the number of frames displayed by the electronic device and reduces the visual lag caused by the delay of the clipping process.
In one embodiment provided in the second aspect, the method further comprises: acquiring an original picture; performing face detection on the original picture to obtain a face region, wherein the face region is a minimum region covering all faces in the original picture, and the face region is smaller than or equal to the region where the original picture is located; the face region is taken as a center, a first clipping region is determined according to the proportion of the face region and the original picture, the first clipping region is a region covering the face region in the original picture, the proportion of the first clipping region is consistent with that of the original picture, and the first clipping region is larger than or equal to the face region; clipping the original picture to obtain a first clipping picture, including: and cutting the original picture according to the first cutting area to generate a first cutting picture.
In one embodiment provided in the second aspect, the method further comprises: determining a step length according to the position information of the first clipping region and the size of the original picture; reducing the original picture according to the step length to obtain a plurality of second clipping areas; and respectively cutting the original picture according to the plurality of second cutting areas to obtain a plurality of second cutting pictures.
In one embodiment provided in the second aspect, the camera is a universal serial bus camera, the format of the original picture is YUYV format, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even, and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even.
In an embodiment provided in the second aspect, performing face detection on the original picture to obtain the face region includes: performing face detection on the original picture to obtain the position information of at least one face subregion; and determining the position information of the face region according to the position information of the at least one face subregion.
In one embodiment provided in the second aspect, the position information of a region includes a first distance, a second distance, a third distance, and a fourth distance of the region, the first distance being the pixel distance from the left boundary of the region to the left boundary of the original picture, the second distance being the pixel distance from the right boundary of the region to the right boundary of the original picture, the third distance being the pixel distance from the upper boundary of the region to the upper boundary of the original picture, and the fourth distance being the pixel distance from the lower boundary of the region to the lower boundary of the original picture. Determining the position information of the face region according to the position information of the at least one face subregion includes: taking the minimum first distance, the minimum second distance, the minimum third distance, and the minimum fourth distance among the position information of the at least one face subregion as the first distance, the second distance, the third distance, and the fourth distance of the face region, respectively, to obtain the position information of the face region. In this way, the face region just covers all faces.
In one embodiment provided in the second aspect, the position information of a region includes the coordinates of two vertices on a diagonal of the region. Determining the position information of the face region according to the position information of the at least one face sub-region includes: taking the coordinates determined by the minimum abscissa, the minimum ordinate, the maximum abscissa and the maximum ordinate among the position information of the at least one face sub-region as the position information of the face region.
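The coordinate-based variant reduces to a bounding-box union. A hedged sketch (coordinate convention assumed to have the origin at the top-left corner):

```python
def merge_face_region_xy(boxes):
    """boxes: list of (x1, y1, x2, y2) diagonal-vertex pairs of face sub-regions.
    The merged face region spans the min/max coordinates over all sub-regions."""
    xs = [x for x1, _, x2, _ in boxes for x in (x1, x2)]
    ys = [y for _, y1, _, y2 in boxes for y in (y1, y2)]
    return (min(xs), min(ys), max(xs), max(ys))

print(merge_face_region_xy([(100, 50, 340, 320), (250, 80, 520, 370)]))
# (100, 50, 520, 370)
```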
In one embodiment provided in the second aspect, the first clipping region is a smallest region covering a face region in the original picture, and a proportion of the first clipping region is consistent with a proportion of the original picture.
In an embodiment provided in the second aspect, determining the first clipping region centered on the face region and having the proportion of the original picture includes: acquiring n regions whose sizes increase sequentially, where the size of a region comprises its width and height, the proportion of each of the n regions is the same as that of the original picture, and the width of the i-th region is i times the width of the 1st region, with i ≤ n; if the size of the i-th region is insufficient to cover the face region and the size of the (i+1)-th region is sufficient to cover it, determining the size of the (i+1)-th region as the size of the first clipping region; and determining the position of the first clipping region according to the size of the first clipping region and the position information of the face region.
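One way to read this search is as a scan over n same-proportion candidate sizes (a hedged sketch: the width of the 1st region is assumed to be 1/n of the original width, which the claim itself does not fix):

```python
def smallest_covering_size(orig_w, orig_h, face_w, face_h, n):
    """Candidate i (1..n) keeps the original aspect ratio, with the i-th width
    equal to i times the 1st width. Return the first candidate covering the
    face region, or None if even the n-th (full-size) candidate is too small."""
    for i in range(1, n + 1):
        w = orig_w * i // n   # i times the width of the 1st region
        h = orig_h * i // n   # same proportion as the original picture
        if w >= face_w and h >= face_h:
            return w, h
    return None

print(smallest_covering_size(1920, 1080, 500, 400, 8))  # (720, 405)
```

Returning the first sufficient candidate is equivalent to the claim's phrasing "the i-th region is insufficient and the (i+1)-th region is sufficient".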
In an embodiment provided in the second aspect, determining the position of the first clipping region according to the size of the first clipping region and the position information of the face region includes: determining a first interval, a second interval, a third interval and a fourth interval according to the size of the first clipping region and the position information of the face region, wherein the sum of the first interval and the second interval is a first value, the sum of the third interval and the fourth interval is a second value, and the first, second, third and fourth intervals are positive integers; taking the boundary obtained after moving the upper boundary of the face region upward by the third interval as the upper boundary of the first clipping region, and taking the boundary obtained after moving the lower boundary of the face region downward by the fourth interval as the lower boundary of the first clipping region; determining the left boundary and the right boundary of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval and the second interval; and determining the position of the first clipping region according to its left, right, upper and lower boundaries.
In one embodiment provided in the second aspect, determining the left boundary and the right boundary of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval, and the second interval includes: obtaining a first boundary by moving the left boundary of the face region leftward by the first interval; if the pixel distance from the first boundary to the left boundary of the original picture is odd, taking the boundary obtained by shifting the first boundary left or right by one pixel as the left boundary of the first clipping region, and taking the boundary obtained by shifting the left boundary of the first clipping region rightward by the width of the (i+1)-th region as the right boundary of the first clipping region; if the pixel distance from the first boundary to the left boundary of the original picture is even, taking the first boundary as the left boundary of the first clipping region, and taking the boundary obtained by moving the right boundary of the face region rightward by the second interval as the right boundary of the first clipping region.
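The odd/even branch exists so that a YUYV crop never separates two pixels that share U/V samples. A simplified sketch (the even branch of the claim derives the right boundary from the face region rather than from the region width; that refinement is omitted here):

```python
def left_right_boundaries_yuyv(face_left, first_interval, region_width):
    """face_left: x-offset of the face region's left boundary from the original
    picture's left edge. Returns (left, right) x-offsets of the clipping region."""
    left = face_left - first_interval
    if left % 2 == 1:   # an odd offset would split a U/V-sharing pixel pair
        left -= 1       # nudge one pixel left (shifting right would also work)
    return left, left + region_width

print(left_right_boundaries_yuyv(101, 20, 640))  # (80, 720)
```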
In an embodiment provided in the second aspect, the original picture is in neither YUYV format nor UYVY format, and determining the left boundary and the right boundary of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval, and the second interval includes: taking the boundary obtained by moving the left boundary of the face region leftward by the first interval as the left boundary of the first clipping region, and taking the boundary obtained by moving the right boundary of the face region rightward by the second interval as the right boundary of the first clipping region.
In one embodiment provided in the second aspect, determining the first interval, the second interval, the third interval, and the fourth interval according to the size of the first clipping region and the position information of the face region includes: if the left boundary of the face region coincides with the left boundary of the original picture, or the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than a third value, determining the first interval to be 0 and the second interval to be the first value.

In one embodiment provided in the second aspect, determining the first interval, the second interval, the third interval, and the fourth interval according to the size of the first clipping region and the position information of the face region includes: if the right boundary of the face region coincides with the right boundary of the original picture, or the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than the third value, determining the second interval to be 0.

In one embodiment provided in the second aspect, determining the first interval, the second interval, the third interval, and the fourth interval according to the size of the first clipping region and the position information of the face region includes: if the upper boundary of the face region coincides with the upper boundary of the original picture, or the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is smaller than a fourth value, determining the third interval to be 0 and the fourth interval to be the second value.

In one embodiment provided in the second aspect, determining the first interval, the second interval, the third interval, and the fourth interval according to the size of the first clipping region and the position information of the face region includes: if the lower boundary of the face region coincides with the lower boundary of the original picture, or the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than the fourth value, determining the fourth interval to be 0 and the third interval to be the second value.

In one embodiment provided in the second aspect, determining the first interval, the second interval, the third interval, and the fourth interval according to the size of the first clipping region and the position information of the face region includes: if no boundary of the face region coincides with a boundary of the original picture, the pixel distances from the left and right boundaries of the face region to the left and right boundaries of the original picture are each greater than or equal to the third value, and the pixel distances from the upper and lower boundaries of the face region to the upper and lower boundaries of the original picture are each greater than or equal to the fourth value, determining the first interval and the second interval to each be half of the first value, and the third interval and the fourth interval to each be half of the second value.
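Taken together, the five cases above amount to a margin-allocation rule: center the face when there is room on both sides, otherwise push all slack to the far side. A hypothetical sketch for the horizontal direction (here first_value stands for the total horizontal slack and third_value for the minimum edge margin; both names are illustrative):

```python
def horizontal_intervals(left_gap, right_gap, first_value, third_value):
    """left_gap/right_gap: pixel distances from the face region's left/right
    boundaries to the original picture's left/right boundaries."""
    if left_gap < third_value:    # face touches or hugs the left edge
        return 0, first_value
    if right_gap < third_value:   # face touches or hugs the right edge
        return first_value, 0
    half = first_value // 2       # room on both sides: center the face
    return half, first_value - half

print(horizontal_intervals(5, 400, 100, 10))    # (0, 100)
print(horizontal_intervals(300, 400, 100, 10))  # (50, 50)
```

The vertical direction is symmetric, with the second and fourth values taking the place of the first and third.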
In one embodiment provided in the second aspect, the step sizes include a first step size, a second step size, a third step size and a fourth step size, wherein the sum of the first step size and the second step size is a preset horizontal step size, the sum of the third step size and the fourth step size is a preset vertical step size, the ratio of the horizontal step size to the vertical step size is the same as the proportion of the original picture, and the first, second, third and fourth step sizes are positive integers. Reducing the original picture according to the step sizes to obtain a plurality of second clipping regions includes: moving the left boundary of the original picture rightward by the first step size, moving the right boundary of the original picture leftward by the second step size, moving the upper boundary of the original picture downward by the third step size, and moving the lower boundary of the original picture upward by the fourth step size, so as to obtain the plurality of second clipping regions.
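The progressive shrink can be pictured as the four picture boundaries marching inward until they coincide with the first clipping region. An illustrative sketch (coordinates and step split invented for the example):

```python
def second_clipping_regions(width, height, target, steps):
    """target: (left, top, right, bottom) of the first clipping region.
    steps: (speed_left, speed_right, speed_top, speed_bottom)."""
    left, top, right, bottom = 0, 0, width, height
    s_l, s_r, s_t, s_b = steps
    regions = []
    while (left + s_l <= target[0] and right - s_r >= target[2]
           and top + s_t <= target[1] and bottom - s_b >= target[3]):
        left, top, right, bottom = left + s_l, top + s_t, right - s_r, bottom - s_b
        regions.append((left, top, right, bottom))
    return regions

regs = second_clipping_regions(1920, 1080, (480, 270, 1440, 810), (80, 80, 45, 45))
print(len(regs), regs[-1])  # 6 (480, 270, 1440, 810)
```

Because the horizontal and vertical step sums share the original picture's proportion (160:90 = 16:9 in this example), every intermediate region keeps that proportion, so none of the zoom frames is stretched.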
In an embodiment provided in the second aspect, determining the step sizes according to the position information of the first clipping region and the size of the original picture includes: determining a first pixel distance, a second pixel distance, a third pixel distance and a fourth pixel distance according to the position information of the first clipping region and the size of the original picture, wherein the first pixel distance is the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture, the second pixel distance is the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture, the third pixel distance is the pixel distance from the upper boundary of the first clipping region to the upper boundary of the original picture, and the fourth pixel distance is the pixel distance from the lower boundary of the first clipping region to the lower boundary of the original picture; determining the first step size and the second step size according to the first pixel distance, the second pixel distance and the preset horizontal step size; and determining the third step size and the fourth step size according to the third pixel distance, the fourth pixel distance and the preset vertical step size.
In one embodiment provided in the second aspect, the third pixel distance, the fourth pixel distance, the preset vertical step size, the third step size, and the fourth step size satisfy the following formula:
speed_top = [top_gap / (top_gap + bottom_gap) * speed_y]
speed_bottom = speed_y - speed_top
Wherein, speed_top is the third step size, speed_bottom is the fourth step size, speed_y is the preset vertical step size, top_gap is the third pixel distance, bottom_gap is the fourth pixel distance, and [top_gap/(top_gap+bottom_gap)*speed_y] denotes rounding top_gap/(top_gap+bottom_gap)*speed_y.
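Interpreting [x] as ordinary rounding (the claim only says "rounding", so the exact rounding mode is an assumption), the split can be written as:

```python
def vertical_steps(top_gap, bottom_gap, speed_y):
    """Split the preset vertical step in proportion to the remaining gaps, so the
    top and bottom boundaries reach the first clipping region at the same time."""
    speed_top = round(top_gap / (top_gap + bottom_gap) * speed_y)
    speed_bottom = speed_y - speed_top
    return speed_top, speed_bottom

print(vertical_steps(90, 30, 40))  # (30, 10)
```

The proportional split means a boundary that is farther from its target moves faster, so both boundaries converge on the first clipping region simultaneously.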
In one embodiment provided in the second aspect, the original picture is in YUYV format or UYVY format, and the first step size and the second step size are even numbers.
In one embodiment provided in the second aspect, the first pixel distance, the second pixel distance, the preset horizontal step size, the first step size, and the second step size satisfy the following formula:
speed_left = [left_gap / (left_gap + right_gap) * speed_x]
speed_right = speed_x - speed_left
Wherein, speed_left is the first step size, speed_right is the second step size, speed_x is the preset horizontal step size, left_gap is the first pixel distance, right_gap is the second pixel distance, and [left_gap/(left_gap+right_gap)*speed_x] denotes rounding left_gap/(left_gap+right_gap)*speed_x.
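The horizontal split is analogous, except that for YUYV/UYVY pictures both steps must stay even. A hedged sketch (the direction of the even adjustment below is an assumption; the claim does not specify it):

```python
def horizontal_steps(left_gap, right_gap, speed_x, chroma_packed=True):
    """Split the preset horizontal step in proportion to the remaining gaps; for
    packed-chroma formats, bump the left step up to an even number."""
    tmp = round(left_gap / (left_gap + right_gap) * speed_x)
    if chroma_packed and tmp % 2 == 1:
        tmp += 1   # keep the left step even so U/V-sharing pixel pairs survive
    return tmp, speed_x - tmp

print(horizontal_steps(100, 60, 32))  # (20, 12)
print(horizontal_steps(50, 50, 30))   # (16, 14)
```

If speed_x itself is even, an even left step automatically makes the right step even as well.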
In a third aspect, the present application provides an electronic device, including: a memory and one or more processors; the memory is coupled with the processor; wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by a processor, cause an electronic device to perform the method of any one of the first and second aspects.
In a fourth aspect, the present application provides a computer-readable storage medium comprising computer instructions; when executed on an electronic device, the computer instructions cause the electronic device to perform the method of any of the first and second aspects.
In a fifth aspect, the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the method according to the first aspect and any one of its possible designs.
For the technical effects of any design of the second aspect to the fifth aspect, reference may be made to the technical effects of the corresponding designs of the first aspect; they are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a data format of a picture according to an embodiment of the present application;
Fig. 2A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2B is an interface diagram of an electronic device according to an embodiment of the present application;
Fig. 2C is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 2D is an interface diagram of an electronic device according to an embodiment of the present application;
Fig. 2E is an interface diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a first schematic flowchart of a picture cropping method according to an embodiment of the present application;
Fig. 4 is a second schematic flowchart of a picture cropping method according to an embodiment of the present application;
Fig. 5 is a schematic representation of location information provided in an embodiment of the present application;
Fig. 6 is a schematic representation of other location information provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of determining location information of an area according to an embodiment of the present application;
Fig. 8 is another schematic diagram of determining location information of an area according to an embodiment of the present application;
Fig. 9 is a third schematic flowchart of a picture cropping method according to an embodiment of the present application;
Fig. 10 is a fourth schematic flowchart of a picture cropping method according to an embodiment of the present application;
Fig. 11 is a fifth schematic flowchart of a picture cropping method according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" below are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
For clarity and conciseness in the description of the embodiments below, a brief introduction to related concepts or technologies is first given:
RGB refers to red (R), green (G) and blue (B). By superimposing these three colors in different proportions, a rich variety of colors can be obtained.
YUV refers to a pixel format in which the luminance parameter and the chrominance parameters are identified separately, where Y represents luminance, that is, the gray value; U and V represent chrominance, which describes the color and saturation and is used to specify the color of a pixel.
YUV formats fall into two classes: planar and packed. In a planar YUV format, the Y components of all pixels are stored consecutively, followed by the U components of all pixels, and then the V components. In a packed YUV format, the Y, U and V components of each pixel are stored consecutively and interleaved.
For example, the YUV format may specifically include YUV422 format, YUYV format, UYVY format, and the like.
The YUV422 format is a planar format: all Y components are stored first, then all U components, and then all V components.
The YUYV format is a packed format: every pixel samples the Y component, but only every other pixel samples the U and V components. The storage order is shown in fig. 1 (a), where Y0 and Y1 share the U0 and V0 components, and Y2 and Y3 share the U2 and V2 components.
The UYVY format is also a packed format, with the reverse order of YUYV: the U component is sampled first, then the Y component. The storage order is shown in fig. 1 (b), where Y0 and Y1 share the U0 and V0 components, and Y2 and Y3 share the U2 and V2 components.
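The two packed layouts can be made concrete with a toy row of four pixels (sample names follow the labeling used above; this is an illustration, not code from the patent):

```python
# One row of four pixels in each packed format; pixels 2k and 2k+1 share U/V.
yuyv = ["Y0", "U0", "Y1", "V0", "Y2", "U2", "Y3", "V2"]  # YUYV: Y sampled first
uyvy = ["U0", "Y0", "V0", "Y1", "U2", "Y2", "V2", "Y3"]  # UYVY: U sampled first

def pixel_components(fmt, i):
    """Return the (Y, U, V) sample names for pixel i."""
    pair = (i // 2) * 4          # byte offset of pixel i's two-pixel group
    if fmt == "yuyv":
        return yuyv[2 * i], yuyv[pair + 1], yuyv[pair + 3]
    return uyvy[2 * i + 1], uyvy[pair], uyvy[pair + 2]

print(pixel_components("yuyv", 1))  # ('Y1', 'U0', 'V0')
print(pixel_components("uyvy", 3))  # ('Y3', 'U2', 'V2')
```

A crop that starts or ends on an odd pixel would keep, say, Y1 while discarding U0/V0, which is exactly the chroma loss the even-boundary rules later in this document guard against.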
Scaling refers to scaling down or scaling up the length and width of a picture in the same proportion.
Stretching means that the picture is elongated in a certain direction (e.g., the long direction, or the wide direction).
Currently, some camera applications and chat applications have a face auto-focusing function. With this function, after the electronic device captures an original picture, it crops the original picture and then enlarges the cropped picture to the size of the original picture for display, so that the person is positioned at the center of the picture and the user does not need to manually adjust the camera angle.
The clipping method provided by the prior art mainly adopts face-centered clipping: a face region is first identified in the original picture, and the original picture is then clipped around that face region. Because the proportion of the clipped region may differ from that of the original picture, the image may appear stretched after the clipped picture is enlarged.
Further, for an electronic device using a universal serial bus (USB) camera, the data format of the captured picture is the YUYV format. As can be seen from fig. 1, two adjacent pixels in YUYV format share one set of U and V components. If such a pixel pair is split apart during clipping, the retained pixel lacks its U or V component, which causes color deviation in the clipped picture, for example, a greenish face.
The application provides a picture cropping method, including: performing face detection on the acquired original picture to obtain a face region; determining a first clipping region that is centered on the face region and whose proportion is consistent with that of the original picture; and clipping the original picture according to the first clipping region to obtain a first clipped picture. Because the proportion of the first clipping region is consistent with that of the original picture, the proportion of the resulting first clipped picture is also consistent with that of the original picture, avoiding the image-stretching problem caused by an unsuitable clipping proportion.
The picture cropping method provided by the embodiments of the application can run in a target application of an electronic device (for example, a mobile phone). The target application may be any application that invokes a camera to take a photograph, such as a camera application, gallery application, instant messaging application, blog application, or game application. It should be noted that the picture cropping method provided by the embodiments of the present application may be executed by an electronic device; that is, the relevant functional modules can be integrated on the electronic device, so as to crop an original picture taken by the electronic device through the camera, or an original picture selected by the user.
The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device; the embodiments of the present application do not particularly limit the specific type of the electronic device.
Fig. 2A is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2A, the electronic device may include: processor 210, external memory interface 220, internal memory 221, USB interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 270A, receiver 270B, microphone 270C, headset interface 270D, sensor module 280, keys 290, motor 291, indicator 292, camera 293, display 294, subscriber identity module (SIM) card interface 295, and the like.
Processor 210 may include one or more processing units. For example, processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The processor 210 may be the neural hub and command center of the electronic device. The processor 210 may generate operation control signals according to the instruction operation code and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 221. For example, in an embodiment of the present application, the processor 210 may include a memory program area and a memory data area by executing instructions stored in the internal memory 221.
The storage program area may store, among other things, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, a configuration file of the motor 291, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 240 may also provide power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, and the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, the wireless communication module 260, and the like. In some embodiments, the power management module 241 and the charge management module 240 may also be provided in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. In some embodiments, antenna 1 and mobile communication module 250 of the electronic device are coupled, and antenna 2 and wireless communication module 260 are coupled, such that the electronic device may communicate with a network and other devices through wireless communication techniques.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on an electronic device. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), or the like. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied on the electronic device, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The electronic device may implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an ear-headphone interface 270D, an application processor, and the like. Such as music playing, recording, etc.
The sensor module 280 may include sensors such as pressure sensors, gyroscope sensors, barometric pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, and bone conduction sensors. The electronic device can collect various data via the sensor module 280.
The electronic device implements display functions through the GPU, the display screen 294, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel.
The electronic device may implement shooting functions through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like. The ISP is used to process the data fed back by the camera 293. The camera 293 is used to capture still images or video. In some embodiments, the electronic device may include 1 or N cameras 293, N being a positive integer greater than 1.
Keys 290 include a power on key, a volume key, etc. The keys 290 may be mechanical keys. Or may be a touch key. The motor 291 may generate a vibration alert. The motor 291 may be used for incoming call vibration alerting or for touch vibration feedback. The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to enable contact and separation from the electronic device. The electronic device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, micro SIM cards, and the like.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also include more or fewer modules than provided in the foregoing embodiments, and different interfaces or a combination of multiple interfaces may be used between the modules in the foregoing embodiments.
In order to facilitate understanding, the application scenario of the picture cropping method provided by the embodiment of the application is specifically described below with reference to the accompanying drawings.
As shown in (a) of fig. 2B, a main interface (i.e., the desktop) 301 of an electronic device (e.g., a mobile phone) includes an icon 301a of a camera application. The mobile phone may receive a click operation by the user on the icon 301a of the camera application, and in response to the click operation, the mobile phone may display an interface 302 as shown in (B) of fig. 2B. The interface 302 includes a camera switching control 302a, an image captured by the camera, and the like. If the user wishes to take a self-portrait, he or she may click the control 302a. In response to receiving the user's click on the control 302a, the mobile phone may acquire an original picture through the camera and display an enlarged first cropped picture. The first cropped picture is obtained by cropping the original picture; the size of the enlarged first cropped picture is the same as that of the original picture, the proportion of the first cropped picture is consistent with that of the original picture, and the first cropped picture is centered on the minimum area covering all faces in the original picture.
Illustratively, in the shooting scenario shown in fig. 2C, in response to receiving the operation of clicking the control 302a by the user, the mobile phone may take an original picture through the camera and display an interface 303 (which may also be referred to as a first interface) as shown in fig. 2D. The interface 303 includes an image 303a. The image 303a is the first enlarged cropping picture.
Optionally, as shown in fig. 2E, before the mobile phone displays the interface 303, it may also display an interface 304 and an interface 305 (which may also be referred to as second interfaces). The interface 304 includes an image 304a, and the interface 305 includes an image 305a. The image 304a and the image 305a are enlarged second cropped pictures; the size of each enlarged second cropped picture is the same as the size of the original picture, and the size of each second cropped picture is larger than the size of the first cropped picture. As can be seen from fig. 2E, the images 304a, 305a and 303a are enlarged step by step, that is, the face in the preview area becomes progressively clearer. It should be noted that displaying the interface 304 and the interface 305 is only an example; in fact, before displaying the interface 303, the mobile phone may display several interfaces each containing an enlarged second cropped picture, such that the minimum area covering all faces in each enlarged second cropped picture is enlarged step by step according to the display order of those interfaces.
It can be understood that, before the electronic device displays the final cropped picture (i.e., the image 303a), the electronic device also displays multiple frames of other pictures, so as to achieve the effect of gradually enlarging the face. This makes the transition more natural, reduces the sense of lag on the picture brought by the cropping process, and gives the user a better visual experience.
Optionally, the picture cropping method provided in this embodiment may also be applied to a video call scene, and the process is the same as that of the scene shown in fig. 2B-2C, and will not be described here again.
Optionally, the electronic device may further receive an editing operation of the user on the original picture selected by the user in the gallery application, and in response to the operation, the electronic device may crop the original picture by using the picture cropping method provided by the embodiment to obtain a first cropped picture, and display the first cropped picture.
Optionally, the electronic device may further receive an editing operation of the user on the received original picture in the communication application, and in response to the editing operation, the electronic device may clip the original picture by using the picture clipping method provided by the embodiment to obtain a first clip picture, and display the first clip picture.
The picture cropping method used in each of the above-described scenes will be specifically described below with reference to the accompanying drawings.
Fig. 3 is a schematic flow chart of a picture cropping method according to the present application, which is applied to the electronic device shown in fig. 2A. The picture cropping method comprises the following steps:
S310, acquiring an original picture.
In this embodiment, the original picture refers to the picture to be cropped. The original picture may be a picture acquired by the electronic device in real time, a picture obtained by the electronic device in response to a user's selection operation in the gallery, or a picture received by the electronic device, which is not particularly limited herein. It should be noted that, when the electronic device obtains the original picture, it may obtain the proportion of the original picture at the same time. The proportion refers to the ratio of the width to the height of the original picture, e.g., 1:1, 2:3, 4:3, 16:9, etc. For example, if the size of the original picture is 1280×720, i.e., the width of the original picture is 1280 pixels and the height is 720 pixels, the proportion of the original picture is 16:9. Alternatively, the proportion of the original picture is not limited to the proportions listed above and may be others, which is not particularly limited herein.
The data format of the original picture may be RGB format, YUYV format, UYVY format, YUV422 format, etc., which is not limited herein.
S320, face detection is carried out on the original picture, and a face area is determined.
The electronic equipment can perform face detection on the original picture by using a face recognition algorithm, and recognizes all faces in the original picture so as to determine a face area. The face recognition algorithm may be a recognition algorithm based on face feature points, a recognition algorithm based on a whole face image, a recognition algorithm based on a template, an algorithm for recognition by using a neural network, and the like, and is not particularly limited herein.
The face area is an area covering all faces in the original picture, and is smaller than or equal to the area where the original picture is located. In an alternative embodiment, the face region is the smallest of the regions that can cover all faces in the original picture. For example, as shown in fig. 3, the electronic device performs face detection on the original picture a to determine that 2 faces are included in the original picture a, and further determine a face area b that can just cover the 2 faces. The specific flow of determining the face area by the electronic device is shown in fig. 4 and related content, which is not described herein.
S330, taking the face region as the center, determining a first clipping region according to the size of the face region and the proportion of the original picture.
The first clipping region is larger than or equal to the face region, and the proportion of the first clipping region is consistent with that of the original picture. It can be understood that, since the first clipping region is larger than or equal to the face region, the picture obtained by clipping based on the first clipping region includes all the content in the face region, which avoids faces being missing or incomplete during clipping. In addition, keeping the proportion of the first clipping region consistent with that of the original picture avoids the deformation caused by faces being stretched to different degrees in different directions after the picture is scaled following clipping.
In an alternative embodiment, the first clipping region is the smallest region in the original picture that covers the face region, and the proportion of the first clipping region is consistent with the proportion of the original picture. In this way, the face occupies a larger share of the first clipping region, and the background a smaller share. Illustratively, as shown in fig. 3, the electronic device determines, from the face region b, a first clipping region c that just covers the face region b.
In an alternative embodiment, if the data format of the original picture is the YUYV format or the UYVY format, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even, and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even. Establish a coordinate system taking the upper side of the original picture passing through the origin as the x-axis and the left side of the original picture passing through the origin as the y-axis, and take the size of the original picture as x1×y1 as an example. Then the left boundary of the original picture is the side where the y-axis is located, the right boundary of the original picture is the side where x=x1 is located, the left boundary of the first clipping region is the side on which all pixels with the smallest abscissa in the first clipping region are located, and the right boundary of the first clipping region is the side on which all pixels with the largest abscissa in the first clipping region are located. The pixel distance from the left boundary of the first clipping region to the left boundary of the original picture refers to the vertical distance between the two boundaries.
It will be appreciated that, since two adjacent pixels in the YUYV format share one set of U, V components, keeping the width of the clipped region a multiple of 2 (i.e., even) avoids separating two pixels that share the same U, V components during clipping. For example, assume the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is m, i.e., there are m pixels between a point A on the left boundary of the first clipping region and the point B on the left boundary of the original picture that has the same ordinate as A. According to the YUYV format, among these m pixels the 1st and 2nd pixels share a set of U, V components, the 3rd and 4th pixels share a set of U, V components, and so on. In the case where m is odd, the m-th pixel shares a set of U, V components with the (m+1)-th pixel, which lies inside the first clipping region; in the case where m is even, the (m-1)-th and m-th pixels share a set of U, V components. When the original picture is clipped according to the first clipping region, all m pixels between the left boundary of the first clipping region and the left boundary of the original picture are cut off. Thus, with m odd, the m-th and (m+1)-th pixels would be separated, leaving a remaining pixel with missing U, V components; with m even, the (m-1)-th and m-th pixels are cut off together, so no remaining pixel loses its U, V components.
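The even-distance rule above can be sketched in a few lines. The following Python helper is illustrative only (the function name and argument layout are assumptions, not part of this embodiment): it moves a crop boundary one pixel outward on any side whose offset is odd, so that no YUYV/UYVY pixel pair sharing U, V components is split.

```python
def even_align_crop(left, right, width):
    """Snap the horizontal crop offsets to even pixel distances.

    left  -- pixel distance from the crop's left boundary to the picture's left edge
    right -- pixel distance from the crop's right boundary to the picture's right edge
    width -- width of the original picture in pixels (even for YUYV/UYVY frames)

    Returns the adjusted (left, right) offsets and the resulting crop width.
    """
    left -= left % 2    # odd offset: widen the crop by one pixel on this side
    right -= right % 2
    return left, right, width - left - right
```

For example, with `left = 101` the boundary moves to 100, so the 101st and 102nd pixels, which share one set of U, V components, stay together inside the cropped picture.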
S340, clipping the original picture according to the first clipping region to generate a first clipping picture.
For example, as shown in fig. 3, the electronic device may crop the original picture a according to the first cropping area c, and generate a first cropped picture d.
It can be understood that the proportion of the first cut picture is the same as the proportion of the first cut region, so that the proportion of the first cut picture is consistent with the proportion of the original picture, and the problem that the face is deformed in the process of scaling the first cut picture by the electronic equipment can be avoided.
Further, in the case that the data format of the original picture is YUYV format or UYVY format, since the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even, it is possible to avoid separating two pixels sharing the same U, V component during clipping, so that the Y, U, V component of each pixel in the first clipping picture is complete, and further reduce the probability of color deviation of the first clipping picture.
In order to implement the picture cropping method provided by the application, on the basis of the flowchart shown in fig. 3, the application provides a method for determining a face area, so that the electronic equipment can determine a first cropping area according to the face area. Fig. 4 is a second flow chart of a picture cropping method according to the present application. As shown in fig. 4, the process of determining a face region by the electronic device may include the following steps S321 and S322.
S321, performing face detection on the original picture to obtain the position information of at least one face subarea.
According to the foregoing, the electronic device may perform face detection on the original picture by using a face recognition algorithm, so as to obtain location information of at least one face sub-region. One face in the original picture corresponds to one face subarea, and the corresponding face is included in the one face subarea. For example, as shown in fig. 4, the electronic device performs face detection on the original picture a, so that a face subarea 1 and a face subarea 2 can be obtained.
The position information of an area is used to reflect the size of the area and the position in the original picture. Wherein the dimensions of the region may include the width and height of the region. The expression of the location information may be varied.
In an alternative embodiment, the location information of an area includes a first distance, a second distance, a third distance, and a fourth distance of the area. The first distance is the pixel distance from the left boundary of the region to the left boundary of the original picture, the second distance is the pixel distance from the right boundary of the region to the right boundary of the original picture, the third distance is the pixel distance from the upper boundary of the region to the upper boundary of the original picture, and the fourth distance is the pixel distance from the lower boundary of the region to the lower boundary of the original picture. The pixel distance between two boundaries refers to the vertical distance between the two boundaries. In a coordinate system established by taking the upper side edge passing through the origin in the original picture as an x axis and taking the left side edge passing through the origin in the original picture as a y axis, the left boundary of the region refers to the edge where all pixels with the smallest abscissa are located in the region, the right boundary of the region refers to the edge where all pixels with the largest abscissa are located in the region, the upper boundary of the region refers to the edge where all pixels with the smallest ordinate are located in the region, and the lower boundary of the region refers to the edge where all pixels with the largest ordinate are located in the region. For example, the left boundary of the region is a straight line where x=x1 is located, and the pixel distance from the left boundary of the region to the left boundary of the original picture is x1.
In an alternative embodiment, the location information may be expressed in the form of (X1, X2, Y1, Y2), where X1 represents a first distance, X2 represents a second distance, Y1 represents a third distance, and Y2 represents a fourth distance.
As illustrated in fig. 5, the location information of the area A1 is (L1, L2, L3, L4), which characterizes the first distance L1, the second distance L2, the third distance L3, and the fourth distance L4 from the area A1 to the corresponding boundaries of the original picture A2. The position of the area A1 in the original picture can be determined from the first distance L1, the second distance L2, the third distance L3, and the fourth distance L4, and the size of the area A1 is determined to be (X - L1 - L2) × (Y - L3 - L4), where X is the width of the original picture and Y is the height of the original picture.
In another alternative embodiment, the location information of an area includes the coordinates of two vertices on a diagonal of the area. Illustratively, as shown in fig. 6, the position information of the area A1 includes the coordinates (X1, Y1) of the upper-left vertex P1 of the area A1 and the coordinates (X2, Y2) of the lower-right vertex P2 of the area A1. The size of the area A1 can be determined as (X2 - X1) × (Y2 - Y1) based on the position information of the area A1. Alternatively, the position information of an area includes the coordinates of the lower-left vertex of the area and the coordinates of the upper-right vertex of the area.
The representation form of the location information of the region may be other, so long as the size of the region and the location in the original picture can be indicated, and the present invention is not limited in particular.
S322, determining the position information of the face area according to the position information of at least one face sub-area.
In an optional embodiment, in the case that the position information of the region includes the first distance, the second distance, the third distance, and the fourth distance of the region, the electronic device may compare the first distance, the second distance, the third distance, and the fourth distance of the at least one face sub-region, respectively, and use the minimum first distance, the minimum second distance, the minimum third distance, and the minimum fourth distance in the position information of the at least one face sub-region as the first distance, the second distance, the third distance, and the fourth distance of the face region, respectively, to obtain the position information of the face region.
Illustratively, as shown in fig. 7, the original picture includes a face sub-region 701 and a face sub-region 702; the position information of the face sub-region 701 is (L1, L2, L3, L4), and the position information of the face sub-region 702 is (L1', L2', L3', L4'). The electronic device may compare L1 with L1', L2 with L2', L3 with L3', and L4 with L4'. Since L1 is smaller than L1', L2 is larger than L2', L3 is smaller than L3', and L4 is larger than L4', the electronic device may determine that the position information of the face region 703 includes: the first distance L1, the second distance L2', the third distance L3, and the fourth distance L4'.
In another alternative embodiment, in the case that the position information of a region includes the coordinates of two vertices on a diagonal of the region, the electronic device may compare the abscissa values and the ordinate values of the at least one face sub-region respectively, and take the coordinates determined from the minimum abscissa value, the minimum ordinate value, the maximum abscissa value, and the maximum ordinate value in the position information of the at least one face sub-region as the position information of the face region. It should be noted that the coordinate values are given in a coordinate system established by taking the top-left vertex of the original picture as the origin of coordinates, the upper side of the original picture passing through the origin as the x-axis, and the left side of the original picture passing through the origin as the y-axis.
Illustratively, as shown in fig. 8, a face sub-region 801 and a face sub-region 802 are included in the original picture, the position information of the face sub-region 801 includes coordinates (X1, Y1) of a P1 point and coordinates (X2, Y2) of a P2 point, and the position information of the face sub-region 802 includes coordinates (X1 ', Y1') of a P1 'point and coordinates (X2', Y2 ') of a P2' point. The electronic device may compare X1, X2, X1' and X2', determining a minimum abscissa value therein as X1 and a maximum abscissa value thereof as X2'; the electronic device may also compare Y1, Y2, Y1', and Y2', and determine the smallest ordinate value of which is Y1', and the largest ordinate value of which is Y2. In this way, it can be determined that the position information of the face region 803 includes the coordinates (X1, Y1 ') of the P3 point and the coordinates (X2', Y2) of the P4 point.
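In the coordinate form, the comparison in S322 reduces to taking per-axis minima and maxima over the sub-regions. A minimal Python sketch (the function name and tuple layout are illustrative assumptions):

```python
def face_region(sub_regions):
    """Smallest axis-aligned box covering every face sub-region.

    Each sub-region is (x1, y1, x2, y2): its top-left and bottom-right
    vertices in a coordinate system with the origin at the picture's
    top-left corner, x growing rightwards and y growing downwards.
    """
    return (min(r[0] for r in sub_regions),   # leftmost left boundary
            min(r[1] for r in sub_regions),   # topmost upper boundary
            max(r[2] for r in sub_regions),   # rightmost right boundary
            max(r[3] for r in sub_regions))   # bottommost lower boundary
```

For the two sub-regions of fig. 8, this returns the box spanning from (X1, Y1') to (X2', Y2), matching the face region 803.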
In yet another alternative embodiment, the electronic device may first take a first face sub-region of the at least one face sub-region as the face region, and then judge whether the face region covers a second face sub-region. If the face region does not cover the second face sub-region, the boundary of the face region is adjusted so that the face region just covers the second face sub-region. The electronic device then continues to judge whether the adjusted face region covers a third face sub-region; if the adjusted face region does not cover the third face sub-region, the boundary of the face region is adjusted again so that the face region just covers the third face sub-region, and so on, until the face region covers all the face sub-regions.
In order to implement the picture cropping method provided by the application, on the basis of the flowchart shown in fig. 3, the application further provides a method for determining the first cropping zone, so that the subsequent electronic equipment crops the original picture according to the first cropping zone. Fig. 9 is a flowchart illustrating a method for clipping pictures according to the present application. As shown in fig. 9, the process of determining the first trimming area by the electronic device may include the following steps S331 to S334.
S331, generating n areas according to the proportion of the original picture.
The proportion of the n regions is the same as that of the original picture, the sizes of the n regions are sequentially increased, and the size of the i-th region is i times that of the 1-th region, i is less than or equal to n. From the foregoing, it is understood that the size of one region includes the width and the height of the region, that is, the width and the height of n regions are sequentially increased, and the width of the i-th region is i times the width of the 1 st region and the height of the i-th region is i times the height of the 1 st region.
For example, as shown in fig. 9, where the original picture has a scale of 16:9, the electronic device may generate n regions having a scale of 16:9. The n regions may be 16×9, 32×18, 48×27, … …, 16i×9i, … …, 16n×9n in size, respectively. The size of the nth region is equal to or smaller than the size of the original picture. For example, the resolution of the original picture is 1280×720, and the size of the nth region may be 1280×720 (where n=80) at maximum.
S332, sequentially judging whether each of the n regions can cover the face region.
Specifically, the electronic device may determine whether the width of each region is equal to or greater than the width of the face region, and whether the height of each region is equal to or greater than the height of the face region. If the width of a certain area is greater than or equal to the width of the face area and the height of the area is greater than or equal to the height of the face area, the area can be determined to cover the face area. If the width of a certain area is smaller than the width of the face area or the height is smaller than the height of the face area, it can be determined that the area cannot cover the face area.
S333, if the size of the i-th region cannot cover the face region and the size of the i+1th region can cover the face region, determining the size of the i+1th region as the size of the first clipping region.
It will be appreciated that if the size of the i-th region is insufficient to cover the face region and the size of the i+1th region is sufficient to cover the face region, it means that the size of the i+1th region is just sufficient to cover the face region, i.e., the i+1th region is the smallest region covering the face region in the original picture.
As shown in fig. 9, the electronic device performs face detection on the original picture a to obtain a face region b with a size of 722×200. The electronic device may determine that the size of the 45th region (i=45) is 720×405 and the size of the 46th region is 736×414. Since the width (720) of the 45th region is smaller than the width (722) of the face region b, while the width (736) of the 46th region is larger than the width (722) of the face region b and the height (414) of the 46th region is larger than the height (200) of the face region b, the electronic device takes the size of the 46th region (736×414) as the size of the first clipping region.
Therefore, the first clipping region can completely cover the face region, and the face information can be completely reserved in the clipping process; and the occupation of the face information in the first clipping region is larger, so that the faces are clearer.
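Steps S331 to S333 amount to a linear search over the candidate sizes. A rough Python sketch under the assumptions of the examples above (function and variable names are made up for illustration):

```python
def smallest_covering_size(face_w, face_h, ratio_w, ratio_h, pic_w, pic_h):
    """Return the smallest candidate size i*(ratio_w, ratio_h), i = 1..n,
    whose width and height both cover the face region, where the n-th
    candidate does not exceed the original picture; None if none covers it.
    """
    n = min(pic_w // ratio_w, pic_h // ratio_h)  # largest candidate fitting the picture
    for i in range(1, n + 1):
        if i * ratio_w >= face_w and i * ratio_h >= face_h:
            return i * ratio_w, i * ratio_h      # first (hence smallest) covering region
    return None
```

For the face region b of 722×200 in a 1280×720 picture at 16:9, this yields (736, 414), the 46th candidate, matching the figure.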
It should be noted that S331 to S333 show only one embodiment in which the electronic device determines the size of the first clipping region according to the size of the face region and the proportion of the original picture. In other embodiments, the electronic device may first determine a magnification according to the size of the face region and the proportion of the original picture, and then determine the size of the first clipping region based on the magnification and the proportion of the original picture. Specifically, the electronic device may divide the width of the face region by the width component of the proportion of the original picture and round up to obtain a first multiple, divide the height of the face region by the height component of the proportion and round up to obtain a second multiple, and take the larger of the first multiple and the second multiple as the magnification. Finally, the proportion of the original picture scaled up by the magnification gives the size of the first clipping region.
For example, if the size of the face region is 360×244 and the proportion of the original picture is 4:3, the electronic device divides 360 by 4 to obtain the first multiple (i.e., 90), and divides 244 by 3 and rounds up to obtain the second multiple (i.e., 82). That is, when the width of the first clipping region is 4×90=360 it is no smaller than the width of the face region, and when the height of the first clipping region is 3×82=246 it is no smaller than the height of the face region. However, in order to keep the proportion of the first clipping region consistent with that of the original picture while the first clipping region still just covers the face region, the electronic device uses the larger multiple (i.e., 90) as the magnification, and thus determines the size of the first clipping region to be 360×270 (i.e., (4×90) × (3×90)).
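The magnification variant avoids the linear search entirely. A hedged sketch of this alternative (names are assumptions):

```python
import math

def crop_size_by_magnification(face_w, face_h, ratio_w, ratio_h):
    """Round both quotients up and keep the larger multiple, so the crop
    keeps the original aspect ratio while just covering the face region."""
    magnification = max(math.ceil(face_w / ratio_w),
                        math.ceil(face_h / ratio_h))
    return ratio_w * magnification, ratio_h * magnification
```

The 360×244 face region at 4:3 gives max(90, 82) = 90, hence 360×270; the 722×200 face region at 16:9 gives max(46, 23) = 46, hence 736×414, agreeing with the search in S331 to S333.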
S334, determining the position of the first clipping region according to the size of the first clipping region and the position information of the face region.
The position of the first clipping region may be determined by the positions of the left boundary, the right boundary, the upper boundary, and the lower boundary of the first clipping region.
In this embodiment, in order to center the first clipping region on the face region, the electronic device may move the left boundary, the right boundary, the upper boundary, and the lower boundary of the face region outward in the corresponding directions by certain spacings, so as to determine the positions of the left boundary, the right boundary, the upper boundary, and the lower boundary of the first clipping region.
Specifically, the electronic device may take the left boundary of the face region after moving leftwards by a first spacing as the left boundary of the first clipping region, the right boundary of the face region after moving rightwards by a second spacing as the right boundary of the first clipping region, the upper boundary of the face region after moving upwards by a third spacing as the upper boundary of the first clipping region, and the lower boundary of the face region after moving downwards by a fourth spacing as the lower boundary of the first clipping region.
The first spacing, the second spacing, the third spacing, and the fourth spacing are related to the size of the first clipping region and the position information of the face region. That is, the electronic device may determine the first spacing, the second spacing, the third spacing, and the fourth spacing according to the size of the first clipping region and the position information of the face region.
Specifically, the electronic device may calculate the difference between the width of the first clipping region and the width of the face region (hereinafter referred to as the first value), and calculate the difference between the height of the first clipping region and the height of the face region (hereinafter referred to as the second value). That is, to go from the width of the face region to the width of the first clipping region, the sum of the pixel distances moved by the left boundary and the right boundary of the face region should be the first value; to go from the height of the face region to the height of the first clipping region, the sum of the pixel distances moved by the upper boundary and the lower boundary of the face region should be the second value. In other words, the sum of the first spacing and the second spacing is the first value, and the sum of the third spacing and the fourth spacing is the second value.
Then, the electronic device may further determine the first spacing, the second spacing, the third spacing, and the fourth spacing according to the first value, the second value, and the position of the face region in the original picture. In this embodiment, the position of the face region in the original picture may fall into the following cases:
(1) The boundary of the face region does not coincide with the boundary of the original picture; the pixel distance from the left boundary of the face region to the left boundary of the original picture is greater than or equal to a third value, the pixel distance from the right boundary of the face region to the right boundary of the original picture is greater than or equal to the third value, the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is greater than or equal to a fourth value, and the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is greater than or equal to the fourth value. Here, the third value is less than or equal to half of the first value, and the fourth value is less than or equal to half of the second value. In this case, the electronic device may determine that the first spacing and the second spacing are each half of the first value, and that the third spacing and the fourth spacing are each half of the second value.
It will be appreciated that, in case (1), the boundary obtained after the left boundary of the face region moves leftwards by the first spacing, the boundary obtained after the right boundary moves rightwards by the second spacing, the boundary obtained after the upper boundary moves upwards by the third spacing, and the boundary obtained after the lower boundary moves downwards by the fourth spacing are all within the region where the original picture is located. Therefore, taking the first spacing and the second spacing as half of the first value, and the third spacing and the fourth spacing as half of the second value, makes the first clipping region centered on the face region.
The values of the first spacing, the second spacing, the third spacing, and the fourth spacing should be integers. For example, if half of the second value is not an integer, the rounded value may be used as the third spacing (or the fourth spacing), and the fourth spacing (or the third spacing) is obtained by subtracting the third spacing (or the fourth spacing) from the second value.
(2) The left boundary of the face region coincides with the left boundary of the original picture or the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than a third value. In this case, the electronic device may determine that the first pitch is 0 and determine that the second pitch is a first value.
It will be appreciated that in the case where the left boundary of the face region coincides with the left boundary of the original picture, continued movement of the left boundary of the face region to the left would exceed the left boundary of the original picture. When the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than the third value, the left boundary of the face region will also exceed the left boundary of the original picture after being shifted to the left by half of the first value. Therefore, the first pitch is determined to be 0, and the second pitch is determined to be the first value minus 0, i.e. the first value.
(3) The right boundary of the face region coincides with the right boundary of the original picture or the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than a third value. In this case, the electronic device may determine that the second pitch is 0 and determine that the first pitch is a first value.
It will be appreciated that in the case where the right boundary of the face region coincides with the right boundary of the original picture, continued movement of the right boundary of the face region to the right would exceed the right boundary of the original picture. When the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than the third value, the right boundary of the face region will also exceed the right boundary of the original picture after being shifted to the right by half of the first value. Therefore, the second pitch is determined to be 0, and the first pitch is determined to be the first value minus 0, i.e. the first value.
(4) The left boundary of the face region coincides with the left boundary of the original picture or the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than the third value, and the right boundary of the face region coincides with the right boundary of the original picture or the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than the third value. In this case, the electronic device may determine that both the first pitch and the second pitch are 0.
(5) The upper boundary of the face region coincides with the upper boundary of the original picture or the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is smaller than a fourth value. In this case, the electronic device may determine that the third pitch is 0 and determine that the fourth pitch is the second value.
It will be appreciated that in the case where the upper boundary of the face region coincides with the upper boundary of the original picture, continued upward movement of the upper boundary of the face region would exceed the upper boundary of the original picture. When the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is smaller than the fourth value, the upper boundary of the face region will also exceed the upper boundary of the original picture after being shifted up by half of the second value. Therefore, the third pitch is determined to be 0, and the fourth pitch is determined to be the second value minus 0, i.e. the second value.
(6) The lower boundary of the face region coincides with the lower boundary of the original picture or the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than a fourth value. In this case, the electronic device may determine that the fourth pitch is 0 and determine that the third pitch is the second value.
It will be appreciated that in the case where the lower boundary of the face region coincides with the lower boundary of the original picture, continued downward movement of the lower boundary of the face region would exceed the lower boundary of the original picture. When the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than the fourth value, the lower boundary of the face region will also exceed the lower boundary of the original picture after being shifted down by half of the second value. Therefore, the fourth pitch is determined to be 0, and the third pitch is determined to be the second value minus 0, i.e. the second value.
(7) The upper boundary of the face region coincides with the upper boundary of the original picture or the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is smaller than the fourth value, and the lower boundary of the face region coincides with the lower boundary of the original picture or the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than the fourth value. In this case, the electronic device may determine that both the third pitch and the fourth pitch are 0.
(8) The boundary of the face region coincides with the boundary of the original picture, or the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than a third value, the pixel distance from the right boundary of the face region to the right boundary of the original picture is smaller than a third value, the pixel distance from the upper boundary of the face region to the upper boundary of the original picture is smaller than a fourth value, and the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than a fourth value. In this case, the electronic device may determine that the first pitch, the second pitch, the third pitch, and the fourth pitch are all 0. That is, the electronic device takes four boundaries of the original picture as four boundaries of the first clipping region respectively.
In the case where the left boundary or the right boundary of the face region falls under any one of (2), (3) and (4), the third pitch and the fourth pitch can still be determined as described in (5), (6) and (7), and vice versa. For example, if the left boundary of the face region coincides with the left boundary of the original picture or the pixel distance from the left boundary of the face region to the left boundary of the original picture is smaller than the third value (i.e., case (2) above), and the lower boundary of the face region coincides with the lower boundary of the original picture or the pixel distance from the lower boundary of the face region to the lower boundary of the original picture is smaller than the fourth value (i.e., case (6) above), the electronic device may determine the first pitch to be 0, determine the second pitch to be the first value, determine the fourth pitch to be 0, and determine the third pitch to be the second value.
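Taken together, cases (1) to (8) reduce to an independent decision per axis. The following Python sketch is not part of the embodiment; it assumes the third value and the fourth value take their maximum permitted values (half the first value and half the second value respectively), and rounds the odd half up on the left/upper side, matching the worked example that follows:

```python
def split_slack(slack):
    # Case (1): center the crop on the face region; when the slack is
    # odd, the larger half goes to the first/third pitch (rounded up).
    lo = (slack + 1) // 2
    return lo, slack - lo

def axis_pitches(gap_lo, gap_hi, slack):
    """Pitches for one axis. gap_lo/gap_hi are the pixel distances from
    the face-region boundaries to the original-picture boundaries; slack
    is the first value (horizontal) or the second value (vertical)."""
    half = slack / 2  # assumed threshold: third/fourth value at its maximum
    if gap_lo >= half and gap_hi >= half:   # case (1): room on both sides
        return split_slack(slack)
    if gap_lo < half and gap_hi < half:     # cases (4)/(7): tight on both sides
        return 0, 0
    if gap_lo < half:                       # cases (2)/(5): tight on the low side
        return 0, slack
    return slack, 0                         # cases (3)/(6): tight on the high side

# Worked example: face region (73, 1695, 52, 955) in a 1920x1080 picture,
# face region 152x73, first clipping region 160x90.
first, second = axis_pitches(73, 1695, 160 - 152)   # -> (4, 4)
third, fourth = axis_pitches(52, 955, 90 - 73)      # -> (9, 8)
crop = (73 - first, 1695 - second, 52 - third, 955 - fourth)
print(crop)  # -> (69, 1691, 43, 947)
```

The per-axis form makes the combination rule at the end of the case list explicit: the horizontal cases never influence the vertical pitches and vice versa.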
The original picture is illustratively 1920×1080 in size, i.e., the ratio of the original picture is 16:9. The electronic device determines that the position information of the face region is (73,1695,52,955). Thus the face region is 1920-73-1695=152 wide and 1080-52-955=73 high, i.e., the face region is 152×73 in size. The electronic device may determine that the size of the first clipping region is 160×90 according to the size of the face region. Further, since none of the boundaries of the face region coincides with or lies too close to the corresponding boundary of the original picture (case (1) above), the electronic device may calculate the first pitch to be (160-152)/2=4, the second pitch to be 4, the third pitch to be 9 ((90-73)/2=8.5, rounded up), and the fourth pitch to be 8. In this way, it can be determined that the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is 73-4=69, the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is 1695-4=1691, the pixel distance from the upper boundary of the first clipping region to the upper boundary of the original picture is 52-9=43, and the pixel distance from the lower boundary of the first clipping region to the lower boundary of the original picture is 955-8=947; that is, the position information of the first clipping region is (69,1691,43,947).
In an alternative embodiment, if the original picture is in YUYV format or UYVY format, the electronic device needs to ensure that the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even, and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even, so that two pixels sharing a group of U, V components are not separated in the clipping process, and the probability of color anomaly of the clipped picture is reduced.
In this case, the electronic device may first move the left boundary of the face region to the left by the first pitch to obtain a first boundary, and determine whether the pixel distance from the first boundary to the left boundary of the original picture is odd or even. If the pixel distance from the first boundary to the left boundary of the original picture is odd, the boundary after the first boundary is shifted left or right by one pixel is taken as the left boundary of the first clipping region, and the boundary after the left boundary of the first clipping region is shifted right by the width of the (i+1)-th region is taken as the right boundary of the first clipping region. If the pixel distance from the first boundary to the left boundary of the original picture is even, the first boundary is taken as the left boundary of the first clipping region, and the boundary after the right boundary of the face region moves rightward by the second pitch is taken as the right boundary of the first clipping region.
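A minimal sketch of this parity adjustment, under the assumption that the shift-left direction is always chosen when the distance is odd (the embodiment permits either direction); the function name is illustrative:

```python
def align_left_even(left_gap, crop_width, pic_width):
    """Make the crop's left offset even without changing its width,
    so YUYV/UYVY pixel pairs sharing U and V components stay together.

    Shifting one pixel left is always safe when left_gap is odd,
    since an odd gap is at least 1. Returns (left_gap, right_gap).
    """
    if left_gap % 2:
        left_gap -= 1
    # recompute the right gap from the fixed crop width
    right_gap = pic_width - left_gap - crop_width
    return left_gap, right_gap

# Example from the text: region (69, 1691, ...) in a 1920-wide picture.
print(align_left_even(69, 160, 1920))  # -> (68, 1692)
```

Because the picture width and the crop width are both even here, an even left gap automatically yields an even right gap.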
Still taking an original picture of size 1920×1080 and face-region position information (73,1695,52,955) as an example, the position information of the first clipping region determined above is (69,1691,43,947). In the case where the original picture is in YUYV format or UYVY format, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture (69) and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture (1691) are both odd, which may cause color deviation in the clipped picture, and thus the position information of the first clipping region may be adjusted to (68,1692,43,947).
Optionally, when the image clipping method provided by the embodiment of the present application is applied to a scene requiring automatic focusing of a face, after the electronic device generates the first clipping image, the electronic device may further amplify the first clipping image to the size of the original image, and display the amplified first clipping image. Therefore, the picture displayed by the electronic equipment can be centered on the face of the user, and the user does not need to adjust the angle and distance of the lens by himself, so that the user is more convenient and comfortable.
In another alternative embodiment, the electronic device may also directly display the first cropped picture.
Optionally, when the picture cropping method provided by the embodiment of the application is applied to a video call scene or a shooting scene, a certain time delay exists in the period from the acquisition of the original picture to the display of the first cropped picture after the amplification. In order to reduce the picture hysteresis caused by the time delay, before the electronic equipment cuts the original picture according to the first cutting area, the electronic equipment can also cut the original picture for a plurality of times to obtain a plurality of second cutting pictures, and display the plurality of amplified second cutting pictures. Wherein the size of the clipping region used for each clipping is different.
In an alternative embodiment, the electronic device may crop the original picture according to a plurality of cropping areas with sequentially decreasing sizes, to obtain a plurality of second cropped pictures.
In another optional implementation manner, the electronic device may reduce the original picture according to a certain step size to obtain a plurality of second cropping areas, and crop the original picture according to the plurality of second cropping areas to obtain a plurality of second cropping pictures.
Fig. 10 is a flowchart of a picture cropping method according to the present application. As shown in fig. 10, the process by which the electronic device obtains the plurality of second cropping pictures includes S410 to S440.
S410, determining a step length according to the position information of the first clipping region and the size of the original picture.
In this embodiment, there are different steps in different directions, including a first step, a second step, a third step, and a fourth step. The sum of the first step length and the second step length is a preset horizontal step length, the sum of the third step length and the fourth step length is a preset vertical step length, the ratio of the horizontal step length to the vertical step length is the same as the proportion of the original picture, and the first step length, the second step length, the third step length and the fourth step length are all positive integers. The preset horizontal step length refers to the number of pixels each time the width of the clipping region is reduced; the preset vertical step length refers to the number of pixels each time the length of the clipping region is reduced. For example, the ratio of the original picture is 4:3, and the ratio of the horizontal step size to the vertical step size is also 4:3. The preset horizontal step length can be 4m, the preset vertical step length is 3m, and m is an integer greater than or equal to 1.
Specifically, the electronic device may determine a first pixel distance, a second pixel distance, a third pixel distance, and a fourth pixel distance according to the first clipping region and the original picture, where the first pixel distance is a pixel distance from a left boundary of the first clipping region to a left boundary of the original picture, the second pixel distance is a pixel distance from a right boundary of the first clipping region to a right boundary of the original picture, the third pixel distance is a pixel distance from an upper boundary of the first clipping region to an upper boundary of the original picture, the fourth pixel distance is a pixel distance from a lower boundary of the first clipping region to a lower boundary of the original picture, and then determine a first step size and a second step size according to the first pixel distance, the second pixel distance, and a preset horizontal step size, and determine a third step size and a fourth step size according to the third pixel distance, the fourth pixel distance, and a preset vertical step size.
In this embodiment, the first pixel distance, the second pixel distance, the preset horizontal step length, the first step length, and the second step length satisfy the following formula:
speed_left=[left_gap/(left_gap+right_gap)*speed_x]
speed_right=speed_x-speed_left
Wherein, speed_left is the first step size, speed_right is the second step size, speed_x is the preset horizontal step size, left_gap is the first pixel distance, right_gap is the second pixel distance, and [left_gap/(left_gap+right_gap)*speed_x] denotes rounding down left_gap/(left_gap+right_gap)*speed_x.
For example, with a left_gap of 20, a right_gap of 70 and a speed_x of 16, speed_left may be determined to be 3 (20/90*16≈3.56, rounded down).
In this embodiment, the third pixel distance, the fourth pixel distance, the preset vertical step length, the third step length, and the fourth step length satisfy the following formula:
speed_top=[top_gap/(top_gap+bottom_gap)*speed_y]
speed_bottom=speed_y-speed_top
Wherein, speed_top is the third step size, speed_bottom is the fourth step size, speed_y is the preset vertical step size, top_gap is the third pixel distance, bottom_gap is the fourth pixel distance, and [top_gap/(top_gap+bottom_gap)*speed_y] denotes rounding down top_gap/(top_gap+bottom_gap)*speed_y.
Optionally, if the original picture is in YUYV format or UYVY format, the first step size and the second step size are even numbers. In this case, the first pixel distance, the second pixel distance, the preset horizontal step, the first step, and the second step satisfy the formula:
tmp=[left_gap/(left_gap+right_gap)*speed_x]
speed_left=tmp-(tmp mod 2)
speed_right=speed_x-speed_left
In this way, the first step size speed_left and the second step size speed_right are always even, which ensures that the pixel distance from the left boundary of each second clipping region to the left boundary of the original picture is even, and that the pixel distance from the right boundary of each second clipping region to the right boundary of the original picture is even. Two pixels sharing a group of U and V components are therefore not separated during clipping, which reduces the probability of color abnormality in the clipped picture.
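Both variants of the step split, with and without the even-number constraint, can be sketched together. Reading the square bracket as rounding down matches the left_gap = 20 example above; rounding the intermediate value down to even is an assumption about the reconstructed formula:

```python
def split_step(lo_gap, hi_gap, total_step, even=False):
    """Split a preset horizontal/vertical step between the two opposite
    boundaries in proportion to their remaining pixel distances (S410)."""
    lo = int(lo_gap / (lo_gap + hi_gap) * total_step)  # [..] read as round-down
    if even:
        lo -= lo % 2  # assumed even-rounding for YUYV/UYVY pictures
    return lo, total_step - lo

print(split_step(20, 70, 16))             # -> (3, 13), the example above
print(split_step(68, 1692, 160))          # -> (6, 154)
print(split_step(43, 947, 90))            # -> (3, 87)
print(split_step(20, 70, 16, even=True))  # -> (2, 14), both even
```

The second and third calls use the first clipping region (68,1692,43,947) from the earlier example and reproduce the four step sizes used in the next paragraph.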
S420, the original picture is reduced according to the step length, and a plurality of second clipping areas are obtained.
Specifically, the electronic device may move the left boundary of the original picture rightward according to the first step, move the right boundary of the original picture leftward according to the second step, move the upper boundary of the original picture downward according to the third step, and move the lower boundary of the original picture upward according to the fourth step, so as to obtain a plurality of second clipping regions.
For example, with an original picture of size 1920×1080, first cropping area position information of (68,1692,43,947), a preset horizontal step size of 160 and a preset vertical step size of 90, the first step size is 6, the second step size is 154, the third step size is 3, and the fourth step size is 87. On this basis, the electronic device can obtain a plurality of second clipping regions whose position information is (6,154,3,87), (12,308,6,174), (18,462,9,261), and so on.
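The shrinking of S420, together with the stop condition described below, can be sketched by moving each boundary inward by its step size and clamping at the first clipping region. Clamping at the target is an assumption; the embodiment states only that shrinking stops once the region no longer exceeds the first clipping region:

```python
def second_regions(target, steps):
    """Return the (left, right, top, bottom) gap tuples of the second
    clipping regions, moving each original-picture boundary inward by
    its step until the region coincides with the first clipping region."""
    gaps = [0, 0, 0, 0]
    regions = []
    while gaps != list(target):
        # advance each boundary, never past the first clipping region
        gaps = [min(g + s, t) for g, s, t in zip(gaps, steps, target)]
        regions.append(tuple(gaps))
    return regions

regs = second_regions((68, 1692, 43, 947), (6, 154, 3, 87))
print(regs[:3])  # -> [(6, 154, 3, 87), (12, 308, 6, 174), (18, 462, 9, 261)]
print(regs[-1])  # -> (68, 1692, 43, 947)
```

The first three regions match the example above; the last region coincides with the first clipping region, at which point shrinking stops.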
In an alternative embodiment, the electronic device stops shrinking the original picture when the reduced second clipping region is less than or equal to the first clipping region.
And S430, respectively cutting the original picture according to the plurality of second cutting areas to obtain a plurality of second cutting pictures.
S440, amplifying the plurality of second clipping pictures to the size of the original picture, and displaying the amplified plurality of second clipping pictures in turn.
It should be noted that each time the electronic device generates a second cropping picture, it can perform the amplifying operation on it and display the amplified second cropping picture. That is, the electronic device may sequentially display the enlarged second cropping pictures before displaying the enlarged first cropping picture. Since the sizes of the second clipping regions decrease in turn, the proportion of the face image in the displayed picture gradually increases until it reaches the visual effect of the face in the amplified first clipping picture; the change appears natural and can improve the user's visual experience.
Fig. 11 is a flowchart of a picture cropping method according to an embodiment of the present application, showing the complete flow of one picture-cropping pass by the electronic device. First, the electronic device shoots a plurality of images through a camera and stores them in a camera buffer. The electronic device can acquire an original picture based on the multi-frame images in the camera buffer, and perform face recognition on the original picture by using a face recognition algorithm to obtain a face recognition result. Alternatively, the electronic device may use the latest frame in the camera buffer as the original picture, or use an image obtained by fusing multiple frames, which is not limited herein. The face recognition result includes the face sub-regions where all faces in the original picture are located. Then, the electronic device calculates the minimum region covering all faces (i.e., the face region) according to the face recognition result. Next, the electronic device may determine whether the face region is within the smallest clipping region (i.e., the smallest of the n regions in S322); if so, the smallest clipping region is directly taken as the first clipping region. If the face region is not within the smallest clipping region, a first clipping region with the same proportion as the original picture is calculated according to the size of the face region and the proportion of the original picture, wherein, when the data format of the original picture is YUYV or UYVY, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is an even number and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is an even number.
Next, the electronic device can take the region where the original picture is located as an initial clipping region and judge whether the initial clipping region coincides with the first clipping region. If they do not coincide, the ratio of the horizontal step size to the vertical step size is determined according to the proportion of the original picture, wherein the horizontal step size refers to the number of pixels by which the width of the initial clipping region decreases each time, and the vertical step size refers to the number of pixels by which its height decreases each time. The electronic device can then calculate the step sizes in the four directions according to the positions of the initial clipping region and the first clipping region, and reduce the initial clipping region by these step sizes to obtain a plurality of second clipping regions, until a second clipping region coincides with the first clipping region. If the two already coincide, the initial clipping region does not need to be reduced. Each time a clipping region is determined, the original picture is clipped according to that region, the clipped picture is enlarged in equal proportion to the size of the original picture, and the enlarged clipped picture is displayed in place of the original picture. Throughout the process, the electronic device displays images that are centered on the face and progressively zoomed in on it, so the transition is smooth and natural; the user does not need to adjust the lens manually, and the operation is simple and convenient.
Some embodiments of the application provide an electronic device that may include: a memory and one or more processors. The memory is coupled to the processor. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the various functions or steps performed by the electronic device in the method embodiments described above. The structure of the electronic device may refer to the structure of the electronic device shown in fig. 2A.
Embodiments of the present application also provide a system on chip (SoC) including at least one processor 1201 and at least one interface circuit 1202, as shown in fig. 12. The processor 1201 and the interface circuit 1202 may be interconnected by wires. For example, the interface circuit 1202 may be used to receive signals from other devices (e.g., a memory of an electronic device). For another example, the interface circuit 1202 may be used to send signals to other devices (e.g., the processor 1201 or a touch screen of an electronic device). The interface circuit 1202 may, for example, read instructions stored in memory and send the instructions to the processor 1201. The instructions, when executed by the processor 1201, may cause the electronic device to perform the various steps described in the embodiments above. Of course, the system on chip may also include other discrete devices, which is not particularly limited in the embodiments of the present application.
Embodiments of the present application also provide a computer readable storage medium, where the computer readable storage medium includes computer instructions, which when executed on an electronic device, cause the electronic device to perform the functions or steps performed by the electronic device in the method embodiments described above.
The embodiment of the application also provides a computer program product, which when run on an electronic device, causes the electronic device to execute the functions or steps executed by the electronic device in the above-mentioned method embodiment.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (26)

1. A method of cropping a picture, the method comprising:
Acquiring an original picture;
Performing face detection on the original picture to obtain a face region, wherein the face region is a minimum region covering all faces in the original picture, and the face region is smaller than or equal to the region where the original picture is located;
Acquiring n regions; the sizes of the n regions increase sequentially, the size of a region comprises the width and the height of the region, the proportion of the n regions is the same as that of the original picture, and the width of the i-th region is i times the width of the 1st region, where i is less than or equal to n;
If the size of the ith area is insufficient to cover the face area and the size of the (i+1) th area is sufficient to cover the face area, determining the size of the (i+1) th area as the size of a first clipping area;
The face area is taken as the center, and the position of the first cutting area is determined according to the size of the first cutting area and the position information of the face area;
and cutting the original picture according to the size and the position of the first cutting area to generate a first cutting picture.
2. The method of claim 1, wherein the performing face detection on the original picture to obtain a face region includes:
Performing face detection on the original picture to obtain position information of at least one face subarea, wherein each face subarea is a minimum area covering one face in the original picture;
and determining the position information of the face area according to the position information of the at least one face sub-area.
3. The method of claim 2, wherein the position information of a region includes a first distance, a second distance, a third distance, and a fourth distance of the region, the first distance being a pixel distance from a left boundary of the region to a left boundary of the original picture, the second distance being a pixel distance from a right boundary of the region to a right boundary of the original picture, the third distance being a pixel distance from an upper boundary of the region to an upper boundary of the original picture, the fourth distance being a pixel distance from a lower boundary of the region to a lower boundary of the original picture;
The determining the position information of the face region according to the position information of the at least one face sub-region includes:
And respectively taking the minimum first distance, the minimum second distance, the minimum third distance and the minimum fourth distance in the position information of the at least one face subarea as the first distance, the second distance, the third distance and the fourth distance of the face area to obtain the position information of the face area.
4. The method of claim 2, wherein the positional information of an area includes coordinates of two vertices on a diagonal of the area;
The determining the position information of the face region according to the position information of the at least one face sub-region includes:
and taking the coordinates determined according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value in the position information of the at least one face subarea as the position information of the face area.
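Claims 3 and 4 describe the same union operation in two coordinate conventions: the merged face region is the elementwise minimum/maximum over the sub-region boxes. A minimal Python sketch of the claim-4 form, assuming an illustrative (x1, y1, x2, y2) diagonal-vertex tuple (the claims do not prescribe a data structure):

```python
def union_face_region(sub_boxes):
    """Merge per-face bounding boxes into the smallest box covering all faces.

    Each box is (x1, y1, x2, y2): the two vertices on the box diagonal,
    with (x1, y1) top-left and (x2, y2) bottom-right. Per claim 4, the
    merged region uses the minimum abscissa/ordinate for one vertex and
    the maximum abscissa/ordinate for the other.
    """
    x1 = min(box[0] for box in sub_boxes)
    y1 = min(box[1] for box in sub_boxes)
    x2 = max(box[2] for box in sub_boxes)
    y2 = max(box[3] for box in sub_boxes)
    return (x1, y1, x2, y2)

# Two detected faces merge into one region covering both.
print(union_face_region([(10, 20, 50, 60), (40, 10, 90, 55)]))  # (10, 10, 90, 60)
```

The distance form of claim 3 is equivalent: minimizing each boundary-to-picture-edge distance grows the merged box in that direction.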
5. The method of claim 1, wherein determining the location of the first cropped area based on the size of the first cropped area and the location information of the face area comprises:
Determining a first interval, a second interval, a third interval and a fourth interval according to the size of the first clipping region and the position information of the face region; the sum of the first interval and the second interval is a first numerical value, the sum of the third interval and the fourth interval is a second numerical value, and the first interval, the second interval, the third interval and the fourth interval are positive integers;
Taking the boundary obtained by moving the upper boundary of the face area upwards by the third interval as the upper boundary of the first clipping region, and taking the boundary obtained by moving the lower boundary of the face area downwards by the fourth interval as the lower boundary of the first clipping region;
determining a left boundary and a right boundary of the first clipping region according to the left boundary of the face region, the right boundary of the face region, the first interval and the second interval;
And determining the position of the first clipping region according to the left boundary, the right boundary, the upper boundary and the lower boundary of the first clipping region.
6. The method of claim 5, wherein the original picture is in YUYV format or UYVY format, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even, and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even.
7. The method of claim 6, wherein the determining the left boundary and the right boundary of the first clipping region according to the left boundary of the face area, the right boundary of the face area, the first interval and the second interval comprises:
moving the left boundary of the face area leftwards by the first interval to obtain a first boundary;
if the pixel distance from the first boundary to the left boundary of the original picture is odd, taking the boundary obtained by shifting the first boundary left or right by one pixel as the left boundary of the first clipping region, and taking the boundary obtained by shifting the left boundary of the first clipping region rightwards by the width of the (i+1)th region as the right boundary of the first clipping region;
and if the pixel distance from the first boundary to the left boundary of the original picture is even, taking the first boundary as the left boundary of the first clipping region, and taking the boundary obtained by moving the right boundary of the face area rightwards by the second interval as the right boundary of the first clipping region.
8. The method of claim 5, wherein the original picture is in neither YUYV format nor UYVY format, and the determining the left boundary and the right boundary of the first clipping region according to the left boundary of the face area, the right boundary of the face area, the first interval and the second interval comprises:
taking the boundary obtained by moving the left boundary of the face area leftwards by the first interval as the left boundary of the first clipping region, and taking the boundary obtained by moving the right boundary of the face area rightwards by the second interval as the right boundary of the first clipping region.
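The even-pixel constraint in claims 6-7 follows from how YUYV/UYVY pack two horizontally adjacent pixels into four bytes: a crop that starts on an odd column would split a pixel pair. A sketch of the claim-7 adjustment, with illustrative names and a left-nudge choice (the claim permits shifting either left or right by one pixel):

```python
def align_left_boundary(face_left, first_interval, pic_width, crop_width):
    """Place the crop's left/right edges so the left offset is even.

    face_left: pixel distance from the face region's left boundary to the
    picture's left boundary. The left boundary is first moved left by
    first_interval; if the resulting offset is odd it is nudged one pixel
    left so YUYV pixel pairs are not split.
    """
    left = face_left - first_interval      # the "first boundary" of claim 7
    if left % 2 != 0:
        left -= 1                          # shift left by one pixel
    right = left + crop_width              # right edge follows from the crop width
    return left, min(right, pic_width)

print(align_left_boundary(101, 20, 640, 320))  # (80, 400)
```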
9. The method according to any one of claims 5-8, wherein the determining the first interval, the second interval, the third interval and the fourth interval according to the size of the first clipping region and the position information of the face region includes:
If the left boundary of the face area coincides with the left boundary of the original picture or the pixel distance from the left boundary of the face area to the left boundary of the original picture is smaller than a third value, determining that the first interval is 0, determining that the second interval is the first value, and determining that the third interval and the fourth interval are half of the second value.
10. The method according to any one of claims 5-8, wherein the determining the first interval, the second interval, the third interval and the fourth interval according to the size of the first clipping region and the position information of the face region includes:
If the right boundary of the face area coincides with the right boundary of the original picture or the pixel distance from the right boundary of the face area to the right boundary of the original picture is smaller than a third value, determining that the second interval is 0, determining that the first interval is the first value, and determining that the third interval and the fourth interval are half of the second value.
11. The method according to any one of claims 5-8, wherein the determining the first interval, the second interval, the third interval and the fourth interval according to the size of the first clipping region and the position information of the face region includes:
if the upper boundary of the face area coincides with the upper boundary of the original picture or the pixel distance from the upper boundary of the face area to the upper boundary of the original picture is smaller than a fourth value, determining that the third interval is 0, determining that the fourth interval is the second value, and determining that the first interval and the second interval are half of the first value.
12. The method according to any one of claims 5-8, wherein the determining the first interval, the second interval, the third interval and the fourth interval according to the size of the first clipping region and the position information of the face region includes:
If the lower boundary of the face area coincides with the lower boundary of the original picture or the pixel distance from the lower boundary of the face area to the lower boundary of the original picture is smaller than a fourth value, determining that the fourth interval is 0, determining that the third interval is the second value, and determining that the first interval and the second interval are half of the first value.
13. The method according to any one of claims 5-8, wherein the determining the first interval, the second interval, the third interval and the fourth interval according to the size of the first clipping region and the position information of the face region includes:
And if the boundary of the face area is not coincident with the boundary of the original picture and the pixel distance from the left boundary of the face area to the left boundary of the original picture is greater than or equal to a third numerical value, the pixel distance from the right boundary of the face area to the right boundary of the original picture is greater than or equal to the third numerical value, the pixel distance from the upper boundary of the face area to the upper boundary of the original picture is greater than or equal to a fourth numerical value, the pixel distance from the lower boundary of the face area to the lower boundary of the original picture is greater than or equal to the fourth numerical value, the first interval and the second interval are determined to be half of the first numerical value, and the third interval and the fourth interval are determined to be half of the second numerical value.
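Claims 9-13 split the crop's extra margin (the first or second value) between the two sides of the face region on each axis: all of it goes to one side when the other side coincides with, or lies within a threshold of, the picture edge, and it is halved otherwise. A one-axis sketch with illustrative names (the claims use the third value as the horizontal threshold and the fourth value as the vertical one):

```python
def split_margin(gap_a, gap_b, total_margin, threshold):
    """Split total_margin between side A and side B of the face region.

    gap_a / gap_b: pixel distances from the face region's two boundaries
    on this axis to the corresponding picture boundaries. A gap of 0
    (boundaries coincide) or below the threshold sends the whole margin
    to the opposite side, as in claims 9-12; otherwise claim 13 halves it.
    """
    if gap_a < threshold:
        return 0, total_margin           # face hugs side A: expand toward B only
    if gap_b < threshold:
        return total_margin, 0           # face hugs side B: expand toward A only
    return total_margin // 2, total_margin - total_margin // 2

print(split_margin(3, 200, 40, 8))    # (0, 40)
print(split_margin(100, 200, 40, 8))  # (20, 20)
```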
14. The method according to any one of claims 1-8, further comprising:
enlarging the first clipping picture to the size of the original picture;
and displaying the enlarged first clipping picture.
15. The method of any one of claims 1-8, wherein before the clipping the original picture according to the size and the position of the first clipping region to generate a first clipping picture, the method further comprises:
determining a step size according to the position information of the first clipping region and the size of the original picture;
reducing the original picture according to the step size to obtain a plurality of second clipping regions;
clipping the original picture according to the plurality of second clipping regions to obtain a plurality of second clipping pictures;
and enlarging the plurality of second clipping pictures to the size of the original picture, and displaying the enlarged plurality of second clipping pictures in sequence.
16. The method of claim 15, wherein the step sizes comprise a first step size, a second step size, a third step size, and a fourth step size, wherein a sum of the first step size and the second step size is a preset horizontal step size, a sum of the third step size and the fourth step size is a preset vertical step size, a ratio of the horizontal step size to the vertical step size is the same as a ratio of the original picture, and the first step size, the second step size, the third step size, and the fourth step size are positive integers;
and the reducing the original picture according to the step size to obtain a plurality of second clipping regions includes:
moving the left boundary of the original picture rightwards according to the first step size, moving the right boundary of the original picture leftwards according to the second step size, moving the upper boundary of the original picture downwards according to the third step size, and moving the lower boundary of the original picture upwards according to the fourth step size, to obtain the plurality of second clipping regions.
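Claim 16's zoom-in animation shrinks the full picture toward the first clipping region by moving each boundary inward by its step size per frame. A sketch, assuming an illustrative (left, top, right, bottom) rectangle layout and a fixed frame count:

```python
def second_crop_regions(pic_width, pic_height, steps, count):
    """Generate intermediate crop rectangles between the full picture
    and the first clipping region.

    steps = (speed_left, speed_top, speed_right, speed_bottom); each
    iteration moves the left/top boundaries inward by speed_left/speed_top
    and the right/bottom boundaries inward by speed_right/speed_bottom.
    """
    left, top, right, bottom = 0, 0, pic_width, pic_height
    s_left, s_top, s_right, s_bottom = steps
    regions = []
    for _ in range(count):
        left += s_left
        top += s_top
        right -= s_right
        bottom -= s_bottom
        regions.append((left, top, right, bottom))
    return regions

# 640x480 picture: horizontal step 8+8=16, vertical step 6+6=12, a 4:3
# ratio matching the picture, as claim 16 requires.
print(second_crop_regions(640, 480, (8, 6, 8, 6), 3))
```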
17. The method of claim 16, wherein the determining a step size according to the position information of the first clipping region and the size of the original picture comprises:
Determining a first pixel distance, a second pixel distance, a third pixel distance and a fourth pixel distance according to the position information of the first clipping region and the size of the original picture, wherein the first pixel distance is the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture, the second pixel distance is the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture, the third pixel distance is the pixel distance from the upper boundary of the first clipping region to the upper boundary of the original picture, and the fourth pixel distance is the pixel distance from the lower boundary of the first clipping region to the lower boundary of the original picture;
determining the first step size and the second step size according to the first pixel distance, the second pixel distance and a preset horizontal step size;
and determining the third step size and the fourth step size according to the third pixel distance, the fourth pixel distance and a preset vertical step size.
18. The method of claim 17, wherein the third pixel distance, the fourth pixel distance, the preset vertical step size, the third step size, and the fourth step size satisfy the formula:
speed_top=[top_gap/(top_gap+bottom_gap)*speed_y]
speed_bottom=speed_y-speed_top
wherein speed_top is the third step size, speed_bottom is the fourth step size, speed_y is the preset vertical step size, top_gap is the third pixel distance, bottom_gap is the fourth pixel distance, and [top_gap/(top_gap+bottom_gap)*speed_y] denotes rounding top_gap/(top_gap+bottom_gap)*speed_y.
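Claim 18's split can be written directly; `round` here is an assumption for the bracket notation (the claim only says the bracketed expression is rounded):

```python
def split_vertical_step(top_gap, bottom_gap, speed_y):
    """Divide the preset vertical step in proportion to the remaining
    top/bottom distances (claim 18):
    speed_top = [top_gap / (top_gap + bottom_gap) * speed_y]
    speed_bottom = speed_y - speed_top
    """
    speed_top = round(top_gap / (top_gap + bottom_gap) * speed_y)
    speed_bottom = speed_y - speed_top
    return speed_top, speed_bottom

print(split_vertical_step(30, 90, 12))  # (3, 9)
```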
19. The method of claim 17, wherein the original picture is in YUYV format or UYVY format, and the first step size and the second step size are even numbers.
20. The method of claim 19, wherein the first pixel distance, the second pixel distance, a preset horizontal step size, the first step size, and the second step size satisfy the formula:
speed_left=[left_gap/(left_gap+right_gap)*speed_x]
speed_right=speed_x-speed_left
wherein speed_left is the first step size, speed_right is the second step size, speed_x is the preset horizontal step size, left_gap is the first pixel distance, right_gap is the second pixel distance, and [left_gap/(left_gap+right_gap)*speed_x] denotes rounding left_gap/(left_gap+right_gap)*speed_x.
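For claims 19-20, the analogous horizontal split must additionally yield even step sizes so YUYV crop offsets stay even. Forcing the rounded intermediate value down to an even number is an assumption here, inferred from claim 19 rather than stated in claim 20:

```python
def split_horizontal_step(left_gap, right_gap, speed_x):
    """Divide the preset horizontal step in proportion to the remaining
    left/right distances, keeping the left share even (speed_x is assumed
    even, so the right share comes out even as well).
    """
    tmp = round(left_gap / (left_gap + right_gap) * speed_x)
    speed_left = tmp - (tmp % 2)       # round down to even -- assumed adjustment
    speed_right = speed_x - speed_left
    return speed_left, speed_right

print(split_horizontal_step(50, 150, 16))  # (4, 12)
```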
21. A picture cropping method, characterized by being applied to an electronic device, the method comprising:
receiving a first event for triggering the electronic device to focus on a face; wherein the first event includes one of the following: an operation of a user answering a video call, or an operation of a user opening a camera application;
Responding to the first event, and acquiring an original picture by the electronic equipment through a camera;
Performing face detection on the original picture to obtain a face region, wherein the face region is a minimum region covering all faces in the original picture, and the face region is smaller than or equal to the region where the original picture is located;
Acquiring n regions; the sizes of the n regions increase sequentially, the size of a region includes the width and the height of the region, the aspect ratio of each of the n regions is the same as that of the original picture, and the width of the i-th region is i times the width of the 1st region, where i is less than or equal to n;
If the size of the i-th region is insufficient to cover the face region and the size of the (i+1)th region is sufficient to cover the face region, determining the size of the (i+1)th region as the size of a first clipping region;
Taking the face area as the center, determining the position of the first clipping region according to the size of the first clipping region and the position information of the face area;
Clipping the original picture according to the size and the position of the first clipping region to generate a first clipping picture;
Displaying a first interface; the first interface comprises the enlarged first clipping picture, and the size of the enlarged first clipping picture is the same as that of the original picture.
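The size selection in claim 21 amounts to picking the smallest of the n candidate regions that still covers the face region; since region i is i times the base size at the picture's aspect ratio, a sketch with illustrative names is:

```python
def pick_crop_size(face_w, face_h, base_w, base_h, n):
    """Return the size of the first clipping region.

    Region i has width i * base_w and height i * base_h (same aspect
    ratio as the picture). If region i fails to cover the face region
    and region i+1 covers it, region i+1's size is chosen -- i.e. the
    smallest covering candidate.
    """
    for i in range(1, n + 1):
        if i * base_w >= face_w and i * base_h >= face_h:
            return i * base_w, i * base_h
    return n * base_w, n * base_h   # face exceeds all candidates: use the largest

# A 200x120 face with a 160x90 base region needs the 2nd candidate.
print(pick_crop_size(200, 120, 160, 90, 4))  # (320, 180)
```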
22. The method of claim 21, wherein prior to displaying the first interface, the method further comprises:
Sequentially displaying a plurality of second interfaces, wherein each second interface comprises an enlarged second clipping picture; the size of each enlarged second clipping picture is the same as that of the original picture, the size of each second clipping picture is larger than that of the first clipping picture, and the minimum region covering all faces in each enlarged second clipping picture increases sequentially according to the display order of the plurality of second interfaces.
23. The method of claim 22, wherein the method further comprises:
determining a step size according to the position information of the first clipping region and the size of the original picture;
reducing the original picture according to the step size to obtain a plurality of second clipping regions;
and clipping the original picture according to the plurality of second clipping regions respectively to obtain a plurality of second clipping pictures.
24. The method of any one of claims 21-23, wherein the camera is a universal serial bus camera, the original picture is in YUYV format, the pixel distance from the left boundary of the first clipping region to the left boundary of the original picture is even, and the pixel distance from the right boundary of the first clipping region to the right boundary of the original picture is even.
25. An electronic device, the electronic device comprising: a memory and one or more processors; the memory is coupled with the processor;
Wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method of any one of claims 1-24.
26. A computer-readable storage medium comprising computer instructions;
the computer instructions, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-24.
CN202211620168.3A 2022-12-15 2022-12-15 Picture clipping method and electronic equipment Active CN116703701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211620168.3A CN116703701B (en) 2022-12-15 2022-12-15 Picture clipping method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116703701A CN116703701A (en) 2023-09-05
CN116703701B true CN116703701B (en) 2024-05-17

Family

ID=87828096

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007020029A (en) * 2005-07-11 2007-01-25 Sony Corp Image processor and method, program and recording medium
CN101196995A (en) * 2007-12-27 2008-06-11 北京中星微电子有限公司 Method for detecting maximum face in image
CN104408687A (en) * 2014-10-31 2015-03-11 酷派软件技术(深圳)有限公司 Picture playing method and device
CN104952027A (en) * 2014-10-11 2015-09-30 腾讯科技(北京)有限公司 Face-information-contained picture cutting method and apparatus
CN111768416A (en) * 2020-06-19 2020-10-13 Oppo广东移动通信有限公司 Photo clipping method and device
CN113628229A (en) * 2021-08-04 2021-11-09 展讯通信(上海)有限公司 Image cropping method and related product
CN115205943A (en) * 2022-07-22 2022-10-18 中国平安人寿保险股份有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115334237A (en) * 2022-07-26 2022-11-11 广州紫为云科技有限公司 Portrait focusing method, device and medium based on USB camera




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant