CN106326823B - Method and system for obtaining head portrait in picture - Google Patents


Info

Publication number
CN106326823B
Authority
CN
China
Prior art keywords
face
detection data
head portrait
face detection
original picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510394449.5A
Other languages
Chinese (zh)
Other versions
CN106326823A (en)
Inventor
杨参参
王庆磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenzhou Taiyue Software Co Ltd
Original Assignee
Beijing Shenzhou Taiyue Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenzhou Taiyue Software Co Ltd filed Critical Beijing Shenzhou Taiyue Software Co Ltd
Priority to CN201510394449.5A priority Critical patent/CN106326823B/en
Publication of CN106326823A publication Critical patent/CN106326823A/en
Application granted granted Critical
Publication of CN106326823B publication Critical patent/CN106326823B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01: Social networking

Abstract

The invention discloses a method and a system for acquiring a head portrait in a picture. The method comprises: detecting faces in an acquired original picture to obtain face detection data for the original picture; acquiring and displaying, according to a preset interception proportion and the face detection data, an intercepted area of the head portrait corresponding to the face detection data; and cutting and storing the intercepted area to obtain the required head portrait, which is uploaded to a page of an application website for display. With this technical scheme, the required head portrait can be obtained from the picture automatically; unlike existing methods, the user need not perform manual interception repeatedly, so a head portrait at the preset interception proportion is obtained quickly.

Description

Method and system for obtaining head portrait in picture
Technical Field
The invention relates to the technical field of face detection, in particular to a method and a system for acquiring a head portrait in a picture.
Background
With the development of the internet, and especially of mobile internet technology, more and more users make friends, chat and trade through network community platforms. To distinguish users and let them display their personalities, these virtual network community platforms provide functions for displaying user head portraits.
At present, users increasingly need to upload head portraits on different websites, and because website page space is limited, uploaded images are generally scaled down for display. To make the uploaded photo satisfactory, it is therefore often necessary to find a head-only photo or to crop a photo manually. Manual cropping costs considerable time in selecting a suitable area and estimating the cropping proportion, and even after a photo with the optimal proportion has been obtained and uploaded to an application system, the display characteristics of that system may still make its display effect unsatisfactory.
As this description shows, in the prior art a user sets a head portrait mainly by manually cutting it out of a picture and uploading the result to the network community platform. This approach is time-consuming and inefficient, and degrades the user experience.
Disclosure of Invention
The invention provides a method and a system for acquiring a head portrait in a picture, which aim to solve the problems that manual image head portrait capturing is complicated and time-consuming and the head portrait display effect is poor in the prior art.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in one aspect, the present invention provides a method for obtaining an avatar in a picture, where the method includes:
detecting the face in the obtained original picture to obtain face detection data in the original picture;
acquiring and displaying an intercepted area of the head portrait corresponding to the face detection data according to a preset interception proportion and the face detection data;
and cutting and storing the intercepted area, acquiring a required head portrait, and uploading the head portrait to a page of an application website for displaying.
The face detection data at least comprises the number of faces and pixel coordinates of each face in the original image, and the obtaining and displaying of the intercepted area of the head portrait corresponding to the face detection data according to the preset intercepting proportion and the face detection data comprises:
generating editable selection frames with corresponding number on the original picture according to the number of the faces in the face detection data;
acquiring the size of a corresponding selection frame and the position of each face in the face detection data on the original image according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original image;
positioning an intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture.
Wherein the highlighting of the intercepted area of the corresponding head portrait of the qualified face on the original picture comprises:
when the number of the faces in the obtained face detection data is one, highlighting a selection frame corresponding to the face on the original picture;
and when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
Wherein the highlighting of the intercepted area of the corresponding avatar of the eligible face on the original picture specifically comprises:
and highlighting the intercepted area of the head portrait corresponding to the qualified face through the color and/or line type of the selection frame.
Preferably, the obtaining and displaying the cut-out area of the avatar corresponding to the face detection data according to the preset cut-out proportion and the face detection data further includes:
previewing and displaying the intercepted area corresponding to the highlighted selection frame;
manually adjusting the size and/or proportion of the selection frame according to the preview display effect;
and repositioning the intercepted area of the head portrait corresponding to the face detection data at the corresponding position of the original picture according to the adjusted size and/or proportion of the selection frame.
In another aspect, the present invention provides a system for obtaining a head portrait in a picture, the system comprising:
the face detection unit is used for detecting the face in the acquired original picture and acquiring face detection data in the original picture;
the intercepting unit is used for acquiring and displaying an intercepting area of the head portrait corresponding to the face detection data according to a preset intercepting proportion and the face detection data;
and the head portrait acquisition unit is used for cutting and storing the intercepted area, acquiring the required head portrait and uploading the head portrait to a page of an application website for displaying.
The face detection data obtained by the face detection unit at least comprises the number of faces and the pixel coordinates of each face in the original image, and the intercepting unit comprises:
the selection frame generation module is used for generating editable selection frames with corresponding number on the original picture according to the number of the faces in the face detection data;
the position acquisition module is used for acquiring the size of a corresponding selection frame and the position of each face in the face detection data on the original image according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original image;
the positioning module is used for positioning the intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and the display module is used for highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture.
Wherein the display module is further configured to,
when the number of the faces in the obtained face detection data is one, highlighting a selection frame corresponding to the face on the original picture;
and when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
Wherein the display module is further configured to,
and highlighting the intercepted area of the head portrait corresponding to the qualified face through the color and/or line type of the selection frame.
Preferably, the intercepting unit further comprises a preview module;
the preview module is used for previewing and displaying the intercepted area corresponding to the highlighted selection frame;
the positioning module is further configured to reposition the cut-out region of the avatar corresponding to the face detection data at the corresponding position of the original picture according to the size and/or the ratio of the manually adjusted selection frame.
The embodiments of the invention have the following beneficial effects. The invention discloses a method and a system for acquiring a head portrait in a picture: faces in an acquired original picture are detected automatically to obtain face detection data comprising the number of faces and the pixel coordinates of each face; the intercepted area of the head portrait corresponding to the face detection data is obtained according to a preset interception proportion and the face detection data; and the intercepted area is cut, stored and processed to obtain the required head portrait, which is uploaded to a page of an application website for display. With this technical scheme, the required head portrait can be obtained from the original picture automatically; unlike existing methods, the user need not perform manual interception repeatedly, so a head portrait at the preset interception proportion is obtained quickly.
Drawings
Fig. 1 is a flowchart of a method for obtaining an avatar in a picture according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a system for acquiring an avatar in a picture according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for obtaining an avatar in a picture according to an embodiment of the present invention, where the method is particularly suitable for an application website where a page space has size limitation on an uploaded avatar picture, and the method for obtaining the avatar in the picture in fig. 1 includes:
and S10, detecting the face in the obtained original picture, and obtaining the face detection data in the original picture. The obtained face detection data at least comprises the number of faces and the pixel coordinates of each face in the original image.
Face Detection refers to the process of determining the position, size and pose of all faces (if any) in an input image. Face detection technology grew out of, and was optimized within, face recognition technology, and it is a key link in an automatic face recognition system. Faces are natural structured objects with fairly complex detail variation, and detecting such objects is challenging: face patterns vary with appearance, expression, skin color and so on; in the general case there may be attachments on the face such as glasses or a beard; and the image of a face, as a three-dimensional object, is inevitably affected by shadows cast by illumination. A method that solves these problems therefore supports a successful face detection and tracking system, and also provides important hints for solving other similarly complex pattern detection problems.
A face image contains abundant feature patterns, such as color features (skin color, hair color, etc.), contour features (elliptic contour, etc.), histogram features (distribution, distance, etc.), heuristic features (hair, jaw, etc.), template features (mean, variance, distance, etc.), transform-domain features (eigenfaces, wavelet features, etc.) and structural features (symmetry, projection features, etc.); because face patterns also vary in complex and fine ways, a method combining several kinds of feature is usually required. By the color attribute of the features used, face detection methods divide into skin-color-based methods and gray-level-based methods. Skin-color-based methods suit the construction of fast face detection and face tracking algorithms, while gray-level-based methods exploit more essential properties that distinguish faces from other objects; the latter can be further divided, by the model adopted when features are integrated, into heuristic-model-based methods and statistical-model-based methods. Because of the complexity of the face detection problem, no single method suits all situations; each generally targets one or more specific problems within the face detection field.
The skin color is important information of the face, does not depend on the detail features of the face, is applicable to the change conditions of rotation, expression and the like, has relative stability, and is different from the color of most background objects, so the embodiment mainly adopts the skin color as the feature of the face mode.
Skin color features are mainly described by a skin color model; common skin color models include the Gaussian model, the mixture-of-Gaussians model and the histogram model. The use of any skin color model is closely tied to the choice of chromaticity space (chroma space). Chromaticity spaces commonly used for face detection include: RGB (the red, green and blue primaries), rgb (luminance-normalized primaries), HSI (hue, saturation and intensity), YIQ (the NTSC luminance-chrominance model), YUV (the PAL luminance-chrominance model) and YCbCr (the CCIR 601 coding chrominance model). A chromaticity space can be evaluated in two ways: whether the distribution of "skin color" regions can be described by a given model in that space, and how much the "skin color" and "non-skin color" regions overlap in that space.
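As a concrete illustration of the color-space choice, the widely used ITU-R BT.601 (CCIR 601) full-range conversion from RGB to YCbCr can be written directly; this sketch is for illustration and is not part of the patent text. Only the Cb and Cr outputs are kept by the skin color model:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr; inputs in 0..255."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

Discarding the luminance Y and keeping only (Cb, Cr) is what makes the model comparatively robust to lighting changes.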
In step S10, the original picture may be obtained through an input device, which may be a mobile smart device, such as a mobile phone, or a notebook or a camera.
And S20, acquiring and displaying the intercepted area of the head portrait corresponding to the face detection data according to the preset intercepted proportion and the face detection data, wherein the preset intercepted proportion can be acquired through data statistics or can be set as a golden section proportion.
In step S20, the acquiring and displaying the cut-out area of the avatar corresponding to the face detection data according to the preset cut-out proportion and the face detection data specifically includes:
generating editable selection frames with corresponding quantity on the original picture according to the number of the faces in the face detection data;
acquiring the size of a corresponding selection frame and the position of each face in the original image according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original image;
positioning an intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture.
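The sizing-and-positioning steps above can be sketched in code. The following Python sketch is illustrative only (the patent does not prescribe an implementation, and the golden-section default ratio and function names are assumptions): it produces one selection-frame rectangle per detected face, enlarged by the preset interception proportion and centred on the face.

```python
def locate_intercepted_areas(face_boxes, interception_ratio=1.618):
    """For each detected face box (x, y, w, h) in original-picture pixel
    coordinates, return a selection-frame rectangle enlarged by the preset
    interception ratio and centred on the face."""
    areas = []
    for (x, y, w, h) in face_boxes:
        fw, fh = w * interception_ratio, h * interception_ratio  # frame size
        cx, cy = x + w / 2.0, y + h / 2.0                        # face centre
        areas.append((cx - fw / 2.0, cy - fh / 2.0, fw, fh))
    return areas
```

In a real implementation the frames would additionally be clamped to the picture bounds before being drawn as editable selection boxes.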
When the number of the faces in the obtained face detection data is one, highlighting a selection frame corresponding to the face on the original picture; when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
Face Recognition is a technology that automatically identifies a person's identity from facial features, comprehensively applying digital image/video processing, pattern recognition and other techniques. In the broad sense, face recognition means recognizing the face and, in particular, its parts such as the eyes, nose, mouth and cheeks. Face recognition technology comprises four main links: face image detection, face image preprocessing, face image feature extraction, and matching and recognition. Specifically:
1) Face image detection is as described above, that is, accurately locating the position and size of the face in the image.
2) Because the original image is limited by various conditions and random interference, it often cannot be used directly; it must first undergo image preprocessing such as gray-level correction and noise filtering. For a face image, preprocessing mainly comprises light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening.
3) Methods for extracting face features divide roughly into knowledge-based characterization and characterization based on algebraic features or statistical learning. Knowledge-based characterization mainly derives feature data helpful for face classification from the shape description of facial organs and the distances between them; its feature components typically include the Euclidean distances, curvatures and angles between feature points. Since a face is composed of parts such as the eyes, nose, mouth and chin, geometric descriptions of these parts and of the structural relations among them can serve as important features for recognizing a face; such features are called geometric features. Knowledge-based face characterization mainly comprises geometric-feature-based methods and template matching methods.
4) Face image matching and recognition means searching the extracted feature data of a face image against the feature templates stored in a database: a threshold is set, and when the similarity exceeds the threshold the matching result is output. Face recognition thus compares the face features to be recognized with the stored face feature templates and judges the face's identity information by their degree of similarity. This process falls into two categories: confirmation, a one-to-one image comparison, and identification, a one-to-many image matching comparison.
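The matching-and-recognition link (one-to-many identification with a similarity threshold) can be sketched as follows. Cosine similarity and all names here are illustrative choices, not mandated by the patent:

```python
import numpy as np

def identify(query, templates, threshold=0.8):
    """Compare a query feature vector against stored templates; return
    (best_id, score) when the best similarity exceeds the threshold,
    otherwise (None, score). One-to-one confirmation is the special
    case of a single stored template."""
    q = np.asarray(query, dtype=float)

    def cosine(t):
        t = np.asarray(t, dtype=float)
        return float(q @ t / (np.linalg.norm(q) * np.linalg.norm(t)))

    scores = {face_id: cosine(t) for face_id, t in templates.items()}
    best_id = max(scores, key=scores.get)
    best = scores[best_id]
    return (best_id, best) if best >= threshold else (None, best)
```

The threshold trades false accepts against false rejects: raising it makes identification stricter, at the cost of more "no match" outcomes.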
In this embodiment, face recognition mainly adopts feature-based recognition algorithms to realize automatic recognition of the faces in the original picture.
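Histogram equalization, mentioned among the preprocessing steps in link 2) above, is easy to make concrete. A minimal NumPy sketch for an 8-bit grayscale face image follows; it is illustrative, not the patent's implementation:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization of an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)  # per-level pixel counts
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # first non-zero CDF value
    scale = gray.size - cdf_min
    if scale == 0:                                   # single-level image: no change
        return gray.copy()
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / scale), 0, 255).astype(np.uint8)
    return lut[gray]                                 # remap every pixel
```

The remapping stretches the occupied gray levels over the full 0..255 range, which helps compensate uneven lighting before feature extraction.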
It should be noted that, when the automatically acquired intercepted area of the avatar corresponding to the face is highlighted on the original image through the selection box, the embodiment further provides a man-machine interaction means, so as to meet the personalized requirements of the user. The user may reposition the region of the avatar that meets his needs by editing the highlighted selection box, for example, by adjusting the size or scale of the selection box corresponding to the automatically obtained face detection data to adjust the region of the avatar corresponding to the face detection data. Or when the user judges that the intercepted area of the head portrait corresponding to the face recognition data with the highest similarity of automatic recognition is not the intention of the user, the user can manually select a proper selection frame corresponding to the face to obtain the intercepted area of the head portrait corresponding to the face after reselection.
Further, in step S20, a preview display may be performed on the capture area corresponding to the highlighted selection box;
manually adjusting the size and/or proportion of the selection frame according to the preview display effect;
and repositioning the intercepting area corresponding to the face detection data at the corresponding position of the original picture according to the size and/or the proportion of the adjusted selection frame.
And S30, cutting and storing the intercepted area, acquiring the needed head portrait, and transmitting the head portrait to the page of the application website for display.
In this step, after the intercepted area is cut out, the cropped image may be processed further, for example with edge and lighting processing; the cropped head portrait is then stored in the database to enrich the database's resources.
In this embodiment, the face detection data in the picture is obtained by performing face detection on the picture, the cut-out area of the avatar corresponding to the face detection data is obtained according to the preset cut-out proportion and the face detection data, and the required avatar is obtained by cutting, storing and processing the cut-out area. The method of the embodiment can automatically obtain the optimal intercepting proportion, does not need a user to carry out manual intercepting for many times, and can effectively reduce the time for obtaining the head portrait with the preset intercepting proportion.
In order to describe the method for acquiring the avatar in the picture and the beneficial effects thereof in more detail, the following description will take the example of automatically acquiring the avatar of the face required in the original picture, assuming that the original picture has 3 faces.
First, a skin color model is obtained.
First a skin color sample set is established, comprising RGB images of faces with different skin colors and sizes: 100 face images of different skin colors are selected from the Internet, from a face library and from everyday photographs, and a small patch of facial skin is cut from each selected image to serve as a skin color sample.
Considering that the skin color model is a mathematical model describing a skin color distribution rule in a certain color space, in order to describe the mathematical model of the skin color sample, the embodiment converts the RGB color space into the YCrCb color space, and simultaneously divides the image pixels into skin color and non-skin color pixels, so as to improve the efficiency and accuracy of face detection.
In the present embodiment, a two-dimensional Gaussian distribution is used to describe the Cb-Cr chrominance distribution, because in the YCbCr color space Y represents luminance while Cb and Cr are the color-difference signals representing chrominance. This embodiment therefore selects only the Cb and Cr chrominance components of the YCbCr color space and uses these two components to build the chrominance distribution. After the skin color samples are trained, a distribution center is obtained; a skin-color similarity is derived from the distance between a sample and the distribution center, and the Gaussian model is obtained by computing the mean and covariance of the samples. The two-dimensional Gaussian distribution of this embodiment takes the following form:

P(Cb, Cr) = exp[-0.5 (x - m)^T C^(-1) (x - m)]

where x = (Cb, Cr)^T is the chrominance vector of a pixel, m = E(x) is the mean of the skin color samples, and C = E[(x - m)(x - m)^T] is their covariance matrix.
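The Gaussian skin color model above can be sketched numerically: fit the mean and covariance of Cb-Cr training samples, then score a pixel's chrominance by its Mahalanobis-distance likelihood. This is a sketch of the standard single-Gaussian formulation; the function names are illustrative:

```python
import numpy as np

def fit_skin_model(cbcr_samples):
    """Fit the mean m and covariance C of an N x 2 array of (Cb, Cr) samples."""
    s = np.asarray(cbcr_samples, dtype=float)
    return s.mean(axis=0), np.cov(s, rowvar=False)

def skin_likelihood(cbcr, m, C):
    """P(x) = exp(-0.5 (x-m)^T C^-1 (x-m)); equals 1.0 at the distribution
    center and decays with Mahalanobis distance from it."""
    d = np.asarray(cbcr, dtype=float) - m
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d))
```

Thresholding this likelihood classifies each pixel as skin or non-skin, which is how candidate face regions are found in the next step.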
After the skin color model of facial skin in the YCbCr color space has been estimated, a picture is acquired from an input device, which may be a mobile phone, a computer or a camera. For each detection point in the input original picture, the trained skin color model decides from the point's skin color whether it belongs to a face region, thereby determining the number of faces in the input original picture and the position of each face.
And then displaying the intercepted area of the head portrait corresponding to the required face detection data on the obtained picture.
And generating corresponding number of editable selection frames on the picture according to the number of the obtained face detection data. Assuming that all three faces are detected during the face detection process, three editable selection boxes are generated.
And obtaining the pixel position of the corresponding selection frame in the picture according to a preset interception proportion and corresponding each face detection data.
Based on the above assumptions, the three faces are labeled A, B, C accordingly for clarity in distinguishing the three faces. The method for obtaining the intercepted area of the face A comprises the following steps: according to the face detection data of the face A, pixel coordinates of four pixel points respectively located at four corner positions of the face A are obtained, pixel coordinates of four extension points corresponding to the four pixel points are respectively calculated according to a golden section ratio (certainly, other ratios are also available), and a rectangular area defined by the pixel coordinates of the four extension points is an intercepting area corresponding to the face A. The method for acquiring the intercepted areas of the face B and the face C is the same as the method for acquiring the intercepted area of the face A.
After the selection frame corresponding to each set of face detection data is obtained, and because several faces were detected automatically on the picture, this embodiment applies a recognition method based on facial feature points to each face on the original image and compares the resulting face recognition data of each face with the face data in the database, thereby obtaining the face recognition data with the highest similarity to the database. Still under the above assumption, the face recognition data of A, B and C are each compared with the database; the face recognition data of face A proves to have the highest similarity, and the selection frame corresponding to face A is highlighted on the picture. The frame may be highlighted by means of its color, line type and so on; this embodiment does not limit the method of highlighting a selection frame on the picture.
When the intercepted area is highlighted on the picture through the selection frame, the user can edit that selection frame, for example adjusting its size, proportion or position, and the intercepted area corresponding to the face data is then repositioned on the picture according to the frame as adjusted by the user. Still under the above assumption, when the intercepted area of face A is highlighted through its selection frame, the user may adjust the size, proportion or position of the selection frame currently corresponding to face A until the displayed intercepted area meets the user's requirements. Of course, the user may instead highlight the selection frame of face B or face C to obtain the intercepted area corresponding to that face's detection data.
In order to better display the capture area, this embodiment also provides a preview display function.
Specifically, the capture area corresponding to the highlighted selection frame is shown in a preview; the size and/or proportion of the selection frame can be manually adjusted according to the preview effect, and the capture area corresponding to the face data is repositioned at the corresponding position in the picture according to the adjusted size and/or proportion of the selection frame.
In practical applications, the input original picture may contain only one face. In that case no further face recognition is needed: the capture area corresponding to the face detection data can be obtained automatically and directly from the face detection data and the preset capture proportion. Specifically, an editable selection frame is generated on the picture, the pixel position of the selection frame in the picture is computed from the preset capture proportion and the face detection data, and the capture area corresponding to the face detection data is located on the picture at that pixel position. Of course, while the capture area is displayed through the selection frame, it can also be adjusted by changing the frame's size, proportion, or position.
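The claims state that the four corner points of each face are expanded outward according to the preset capture proportion, and that the rectangle defined by the expanded points is the capture area. A minimal sketch of that computation follows; the exact expansion formula (symmetric expansion by a fraction of the face box size, clamped to the image bounds) is an assumption, since the patent does not spell it out.

```python
def capture_area(face_box, expand_ratio, img_w, img_h):
    """Compute an avatar capture rectangle from a detected face box.

    face_box: (x, y, w, h) pixel coordinates of the detected face.
    expand_ratio: preset capture proportion; e.g. 0.25 moves each
    corner outward by 25% of the face box size (assumed formula).
    Returns (left, top, right, bottom), clamped to the picture.
    """
    x, y, w, h = face_box
    dx, dy = w * expand_ratio, h * expand_ratio
    # Expand the four corner points outward, clamping to the image.
    left = max(0, int(x - dx))
    top = max(0, int(y - dy))
    right = min(img_w, int(x + w + dx))
    bottom = min(img_h, int(y + h + dy))
    return left, top, right, bottom

# Example: a 100x100 face at (200, 150) in a 640x480 picture.
print(capture_area((200, 150, 100, 100), 0.25, 640, 480))  # (175, 125, 325, 275)
```

The returned rectangle is where the editable selection frame would initially be drawn; user edits to the frame simply replace these coordinates.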
Finally, the required head portrait is obtained and uploaded for display.
After the capture area is cut out, corresponding processing can be applied — for example, edge and lighting processing of the head portrait — and the cut head portrait is then stored in the database to enrich and improve the database's data resources.
The embodiment of the present invention also provides a system for acquiring an avatar in a picture, which includes a face detection unit 21, a capturing unit 22, and an avatar acquisition unit 23.
And the face detection unit 21 is configured to detect a face in the acquired original picture, and acquire face detection data in the original picture. The face detection data obtained by the face detection unit 21 at least includes the number of faces and the pixel coordinates of each face in the original image.
Face Detection refers to the process of determining the position, size, and pose of all faces (if any) in an input image. Because a face image contains rich feature patterns, and those patterns vary in complex and subtle ways, methods that combine features of multiple patterns are usually required. Due to the complexity of the face detection problem, no single method can handle all situations; each method generally targets one or more specific problems in the face detection field.
Skin color is important information about the human face: it does not depend on the detail features of the face, remains relatively stable under rotation, expression changes, and similar variations, and differs from the color of most background objects. Therefore, the face detection unit 21 in this embodiment mainly uses skin color as the feature of the face pattern.
Skin color features are mainly described by a skin color model; common skin color models include the Gaussian model, the Gaussian mixture model, and the histogram model. The use of any skin color model is closely tied to the choice of chromaticity space. Chromaticity spaces commonly used for face detection include: RGB (the red, green, and blue primaries), rgb (luminance-normalized primaries), HSI (hue, saturation, and intensity), YIQ (the NTSC luminance-chrominance model), YUV (the PAL luminance-chrominance model), and YCbCr (the CCIR 601 chrominance model). A chromaticity space can be evaluated in two ways: whether the distribution of "skin color" regions in that space can be described with a given model, and the degree of overlap between "skin color" and "non-skin color" regions in that space.
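As an illustration (not part of the patent), a minimal skin color classifier in the YCbCr space can be sketched as follows. The BT.601 (CCIR 601) RGB-to-YCbCr conversion is standard; the Cb/Cr threshold ranges are commonly cited values assumed here, not values taken from this document.

```python
def is_skin_ycbcr(r, g, b):
    """Classify an RGB pixel as skin using fixed Cb/Cr thresholds.

    Uses the ITU-R BT.601 (CCIR 601) RGB -> YCbCr conversion. The
    Cb in [77, 127] and Cr in [133, 173] ranges are commonly cited
    skin thresholds, assumed here for illustration.
    """
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

# A typical light skin tone vs. a saturated green background pixel.
print(is_skin_ycbcr(220, 170, 140))  # True
print(is_skin_ycbcr(30, 180, 40))    # False
```

A histogram or Gaussian model would replace the fixed thresholds with a learned distribution over (Cb, Cr), but the overall pipeline — convert to a chromaticity space, then score each pixel — is the same.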
The face detection unit 21 may obtain the original picture through an input device, which may be a mobile intelligent device such as a mobile phone, or a notebook computer or a camera.
And the intercepting unit 22 is used for acquiring and displaying an intercepting area of the head portrait corresponding to the face detection data according to a preset intercepting proportion and the face detection data.
In a preferred implementation of this embodiment, the intercepting unit 22 specifically includes:
the selection frame generation module is used for generating editable selection frames with corresponding number on the original picture according to the number of the faces in the face detection data;
the position acquisition module is used for acquiring the size of a corresponding selection frame and the position of each face in the original image according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original image;
the positioning module is used for positioning the intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and the display module is used for highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture.
Preferably, the display module is further configured to highlight the cut-out area of the corresponding avatar of the qualified human face by selecting a color and/or a line type of the frame.
The display module is further used for highlighting a selection frame corresponding to the face on the original picture when the number of the faces in the obtained face detection data is one;
when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
Face Recognition is a technology that automatically identifies a person's identity from facial features. In the broad sense, face recognition refers to recognition of the human face, in particular the eyes, nose, mouth, cheeks, and so on. Face recognition technology comprises four main steps: face image detection, face image preprocessing, face image feature extraction, and matching and recognition. In this embodiment, face recognition mainly adopts feature-based recognition algorithms (Feature-based Recognition Algorithms) to automatically recognize the faces in the original picture.
It should be noted that when the display module of the system highlights the automatically obtained capture area of the avatar through a selection box, the system of this embodiment also provides human-computer interaction means to meet the user's personalized requirements. The user may reposition the avatar region by editing the highlighted selection box — for example, adjusting the size or scale of the box corresponding to the automatically obtained face detection data adjusts the avatar region accordingly. Alternatively, when the user judges that the automatically recognized capture area with the highest similarity is not what was intended, the user may manually select the selection frame of the appropriate face to obtain the capture area of that face's avatar instead.
In the preferred embodiment, the intercepting unit 22 further comprises a preview module,
the preview module is used for previewing and displaying the intercepted area corresponding to the highlighted selection frame;
and the positioning module is further used for repositioning the intercepted area of the head portrait corresponding to the face detection data at the corresponding position of the original picture according to the size and/or the proportion of the manually adjusted selection frame.
And the head portrait acquiring unit 23 is configured to cut and store the intercepted area, acquire a required head portrait, and upload the acquired head portrait to a page of an application website for display.
The system for acquiring the head portrait in the picture is particularly suitable for application websites whose page layout imposes size limits on uploaded avatar pictures.
The system of this embodiment obtains face detection data by performing face detection on the picture, obtains the capture area of the head portrait corresponding to that data according to a preset capture proportion, and obtains the required head portrait by cutting, storing, and processing the capture area. The system can automatically obtain the capture area at the best proportion without requiring the user to crop manually multiple times, effectively reducing the time needed to obtain a head portrait at the preset capture proportion.
In summary, the present invention discloses a method and a system for obtaining a head portrait from a picture. Face detection data — including the number of faces and the pixel coordinates of each face in the original picture — is obtained by automatically detecting faces in the acquired original picture; the capture area of the head portrait corresponding to the face detection data is obtained according to a preset capture proportion and the face detection data; and the required head portrait is obtained by cutting and storing the capture area and is uploaded to a page of the application website for display. With this technical solution, the required head portrait can be obtained automatically from the original picture; compared with existing methods, the user does not need to perform multiple manual crops, so a head portrait at the preset capture proportion can be obtained quickly.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A method for obtaining a head portrait in a picture, the method comprising:
detecting whether a detection point in an obtained original picture belongs to a face region or not through a trained skin color model, and obtaining face detection data in the original picture;
acquiring and displaying an intercepted area of the head portrait corresponding to the face detection data according to a preset interception proportion and the face detection data; specifically, according to the face detection data of each face, pixel coordinates of four pixel points respectively located at four corner positions of each face are obtained, and pixel coordinates of four outward expansion points corresponding to the four pixel points are respectively calculated according to a preset intercepting proportion, so that a rectangular area defined by the pixel coordinates of the four outward expansion points is an intercepting area corresponding to the face;
cutting and storing the intercepted area, acquiring a needed head portrait and uploading the head portrait to a page of an application website for displaying;
the face detection data at least comprises the number of faces and pixel coordinates of each face in the original picture, and the obtaining and displaying of the intercepted area of the head portrait corresponding to the face detection data according to the preset intercepting proportion and the face detection data comprises:
generating editable selection frames with corresponding number on the original picture according to the number of the faces in the face detection data;
acquiring the size of a corresponding selection frame and the position of each face in the face detection data on the original image according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original image;
positioning an intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture through a selection frame so that a user can edit the selection frame of the intercepted area.
2. The method according to claim 1, wherein the highlighting of the cut-out area of the corresponding head portrait of the qualified face on the original picture comprises:
when the number of the faces in the obtained face detection data is one, highlighting a selection frame corresponding to the face on the original picture;
and when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
3. The method according to claim 2, wherein the highlighting of the clipped region of the corresponding avatar of the qualified face on the original picture specifically comprises:
and highlighting the intercepted area of the head portrait corresponding to the qualified face through the color and/or line type of the selection frame.
4. The method of claim 3, wherein the obtaining and displaying the cut-out region of the avatar corresponding to the face detection data according to the preset cut-out proportion and the face detection data further comprises:
previewing and displaying the intercepted area corresponding to the highlighted selection frame;
manually adjusting the size and/or proportion of the selection frame according to the preview display effect;
and repositioning the intercepted area of the head portrait corresponding to the face detection data at the corresponding position of the original picture according to the adjusted size and/or proportion of the selection frame.
5. A system for obtaining a head portrait in a picture, the system comprising:
the human face detection unit is used for detecting whether a detection point in the obtained original picture belongs to a human face area or not through a trained skin color model, and obtaining human face detection data in the original picture;
the intercepting unit is used for acquiring and displaying an intercepting area of the head portrait corresponding to the face detection data according to a preset intercepting proportion and the face detection data; specifically, according to the face detection data of each face, pixel coordinates of four pixel points respectively located at four corner positions of each face are obtained, and pixel coordinates of four outward expansion points corresponding to the four pixel points are respectively calculated according to a preset intercepting proportion, so that a rectangular area defined by the pixel coordinates of the four outward expansion points is an intercepting area corresponding to the face;
the head portrait acquisition unit is used for cutting, storing and processing the intercepted area, acquiring a required head portrait and uploading the head portrait to a page of an application website for displaying;
the face detection data obtained by the face detection unit at least includes the number of faces and the pixel coordinates of each face in the original picture, and the intercepting unit includes:
the selection frame generation module is used for generating editable selection frames with corresponding number on the original picture according to the number of the faces in the face detection data;
the position acquisition module is used for acquiring the size of a corresponding selection frame and the position of each face in the face detection data on the original picture according to a preset interception proportion and the pixel coordinates of each face in the face detection data in the original picture;
the positioning module is used for positioning the intercepted area of the head portrait corresponding to each face on the original picture according to the size of the selection frame and the position of the selection frame on the original picture;
and the display module is used for highlighting the intercepted area of the head portrait corresponding to the face meeting the conditions on the original picture through the selection frame so that the user can edit the selection frame of the intercepted area.
6. The system of claim 5, wherein the display module is further configured to,
when the number of the faces in the obtained face detection data is one, highlighting a selection frame corresponding to the face on the original picture;
and when the number of the faces in the obtained face detection data is two or more, carrying out face recognition on each detected face to obtain face recognition data of each face, comparing each obtained face recognition data with the face data in the database respectively, and highlighting a selection frame corresponding to the face recognition data with the highest similarity on the original picture.
7. The system of claim 6, wherein the display module is further configured to,
and highlighting the intercepted area of the head portrait corresponding to the qualified face through the color and/or line type of the selection frame.
8. The system of claim 7, wherein the intercepting unit further comprises a preview module;
the preview module is used for previewing and displaying the intercepted area corresponding to the highlighted selection frame;
the positioning module is further configured to reposition the cut-out region of the avatar corresponding to the face detection data at the corresponding position of the original picture according to the size and/or the ratio of the manually adjusted selection frame.
CN201510394449.5A 2015-07-07 2015-07-07 Method and system for obtaining head portrait in picture Active CN106326823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510394449.5A CN106326823B (en) 2015-07-07 2015-07-07 Method and system for obtaining head portrait in picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510394449.5A CN106326823B (en) 2015-07-07 2015-07-07 Method and system for obtaining head portrait in picture

Publications (2)

Publication Number Publication Date
CN106326823A CN106326823A (en) 2017-01-11
CN106326823B true CN106326823B (en) 2020-06-30

Family

ID=57725242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510394449.5A Active CN106326823B (en) 2015-07-07 2015-07-07 Method and system for obtaining head portrait in picture

Country Status (1)

Country Link
CN (1) CN106326823B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019071386A1 (en) * 2017-10-09 2019-04-18 深圳传音通讯有限公司 Image data processing method and device
CN108090430B (en) * 2017-12-08 2020-11-27 杭州魔点科技有限公司 Face detection method and device
CN108154515A (en) * 2017-12-27 2018-06-12 三星电子(中国)研发中心 Picture shows method and apparatus
CN108345842B (en) * 2018-01-24 2022-03-04 中电长城圣非凡信息***有限公司 Big data based processing method
CN109359508A (en) * 2018-08-27 2019-02-19 贵阳朗玛信息技术股份有限公司 A kind of head portrait processing method and processing device
CN109189289B (en) * 2018-09-03 2021-12-24 聚好看科技股份有限公司 Method and device for generating icon based on screen capture image
CN110334615A (en) * 2019-06-20 2019-10-15 湖北亮诚光电科技有限公司 A method of there is the recognition of face blocked
CN112487396A (en) * 2020-12-08 2021-03-12 平安国际智慧城市科技股份有限公司 Picture processing method and device, computer equipment and storage medium
CN112541919A (en) * 2020-12-29 2021-03-23 申建常 Picture segmentation processing method and processing system
CN113792671A (en) * 2021-09-16 2021-12-14 平安银行股份有限公司 Method and device for detecting face synthetic image, electronic equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100514365C (en) * 2007-01-15 2009-07-15 浙江大学 Method for automatic photomotage of multi-face
CN100463000C (en) * 2007-09-27 2009-02-18 上海交通大学 Human eye state detection method based on cascade classification and hough circle transform
CN102982568B (en) * 2012-11-12 2015-12-16 东莞宇龙通信科技有限公司 A kind of method of automatic cutting image and device
CN103914689B (en) * 2014-04-09 2017-03-15 百度在线网络技术(北京)有限公司 Picture method of cutting out and device based on recognition of face
CN104484855B (en) * 2014-12-24 2018-03-27 北京奇虎科技有限公司 The method and apparatus that picture is cut

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Landmark Detection and Face Recognition for Side-View Face Images; Pinar Santemiz et al.; IEEE; 2013-10-03; pp. 1-4 *
Research on Face Recognition Methods Based on Global Features and Local Features; Li Binbin; China Master's Theses Full-text Database, Information Science and Technology; 2012-10-15; pp. 1-58 *

Also Published As

Publication number Publication date
CN106326823A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN106326823B (en) Method and system for obtaining head portrait in picture
US10304166B2 (en) Eye beautification under inaccurate localization
EP3338217B1 (en) Feature detection and masking in images based on color distributions
US8385638B2 (en) Detecting skin tone in images
US8520089B2 (en) Eye beautification
US8902326B2 (en) Automatic face and skin beautification using face detection
US8983152B2 (en) Image masks for face-related selection and processing in images
CN106780311B (en) Rapid face image beautifying method combining skin roughness
WO2022161009A1 (en) Image processing method and apparatus, and storage medium and terminal
CN112614060A (en) Method and device for rendering human face image hair, electronic equipment and medium
KR20120070985A (en) Virtual experience system based on facial feature and method therefore
WO2021016896A1 (en) Image processing method, system and device, and movable platform and storage medium
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
CN114187166A (en) Image processing method, intelligent terminal and storage medium
US20190347469A1 (en) Method of improving image analysis
US10354125B2 (en) Photograph processing method and system
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113781330A (en) Image processing method, device and electronic system
CN113837017B (en) Cosmetic progress detection method, device, equipment and storage medium
Rahman et al. Real-time face-based auto-focus for digital still and cell-phone cameras
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
Shu et al. Real-Time Face Detection Method Using Wavelet-Based for a Beautification System
CN113837018A (en) Cosmetic progress detection method, device, equipment and storage medium
CN116017178A (en) Image processing method and device and electronic equipment
Quanyou et al. A new facial key features location algorithm in color images

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 818, 8 / F, 34 Haidian Street, Haidian District, Beijing 100080

Patentee after: BEIJING ULTRAPOWER SOFTWARE Co.,Ltd.

Address before: 100089 Beijing city Haidian District wanquanzhuang Road No. 28 Wanliu new building 6 storey block A Room 601

Patentee before: BEIJING ULTRAPOWER SOFTWARE Co.,Ltd.