CN108009470A - Method and apparatus for image extraction - Google Patents
Method and apparatus for image extraction
- Publication number
- CN108009470A (application number CN201710998445.7A)
- Authority
- CN
- China
- Prior art keywords
- coordinate value
- point
- hair
- image
- people
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a method and apparatus for image extraction. The method includes: a terminal first identifies the face contour of a target head image; the terminal then determines the hair contour of the target head image according to the face contour; finally, the terminal extracts image information from the target head image according to the face contour and the hair contour, and determines a second image that includes the head contour. The embodiments of the present application help improve the completeness and accuracy of image extraction.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for image extraction.
Background technology
Nowadays many people like to beautify pictures, and more and more beautification tools are available. For example, the Photoshop software tool can be used on a computer to process pictures, or applications such as Meitu XiuXiu can be used on a mobile phone to beautify pictures.
Summary of the invention
Embodiments of the present invention disclose a method and apparatus for image extraction, which help improve the completeness and accuracy of image extraction.
A first aspect of the embodiments of the present invention discloses an image extraction method, the method including:
identifying the face contour of a target head image;
determining the hair contour of the target head image according to the face contour;
extracting image information from the target head image according to the face contour and the hair contour, and determining a second image that includes the head contour.
In a possible design, the face contour includes the coordinate value of a first eye central point, the coordinate value of a second eye central point, the coordinate value of a mouth central point, the coordinate value of a first cheekbone feature point, and the coordinate value of a second cheekbone feature point. Determining the hair contour of the target head image according to the face contour includes: determining the coordinate value of a contour feature point of the hair contour according to the coordinate values of the first eye central point, the second eye central point, and the mouth central point; determining the coordinate values of a first hair feature point and a second hair feature point of the hair region of the target head image according to the coordinate values of the contour feature point, the first cheekbone feature point, and the second cheekbone feature point; and determining the hair contour according to the first hair feature point and the second hair feature point.
In a possible design, determining the coordinate value of the contour feature point of the hair contour according to the coordinate values of the first eye central point, the second eye central point, and the mouth central point includes: determining, according to the coordinate values of the first eye central point and the second eye central point, the coordinate value of a first reference central point between the first eye central point and the second eye central point; determining a first distance between the first reference central point and the mouth central point according to the coordinate values of the first reference central point and the mouth central point; and determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference central point, and a preset relation between the first distance and a second distance, where the second distance is the distance between the first reference central point and the contour feature point.
In a possible design, determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the coordinate values of the contour feature point, the first cheekbone feature point, and the second cheekbone feature point includes: determining a third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point; determining the coordinate value of a second reference central point between the first reference central point and the contour feature point; and determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the third distance and the coordinate value of the second reference central point.
In a possible design, extracting image information from the target head image according to the face contour and the hair contour and determining the second image that includes the head contour includes: determining an image information extraction region according to the face contour and the hair contour; and extracting the image information in the image information extraction region of the target head image to obtain the second image that includes the head contour.
In a second aspect, an embodiment of the present application provides an image extraction apparatus. The image extraction apparatus includes an identification module, a determining module, and an extraction module.
The identification module is configured to identify the face contour of a target head image.
The determining module is configured to determine the hair contour of the target head image according to the face contour.
The extraction module is configured to extract image information from the target head image according to the face contour and the hair contour, and determine a second image that includes the head contour.
In a possible design, the determining module is specifically configured to: determine the coordinate value of the contour feature point of the hair contour according to the coordinate values of the first eye central point, the second eye central point, and the mouth central point; determine the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the coordinate values of the contour feature point, the first cheekbone feature point, and the second cheekbone feature point; and determine the hair contour according to the first hair feature point and the second hair feature point.
In a possible design, the determining module is specifically configured to: determine, according to the coordinate values of the first eye central point and the second eye central point, the coordinate value of the first reference central point between the first eye central point and the second eye central point; determine the first distance between the first reference central point and the mouth central point according to the coordinate values of the first reference central point and the mouth central point; and determine the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference central point, and the preset relation between the first distance and the second distance, where the second distance is the distance between the first reference central point and the contour feature point.
In a possible design, the determining module is specifically configured to: determine the third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point; determine the coordinate value of the second reference central point between the first reference central point and the contour feature point; and determine the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the third distance and the coordinate value of the second reference central point.
In a possible design, the extraction module is specifically configured to: determine the image information extraction region according to the face contour and the hair contour; and extract the image information in the image information extraction region of the target head image to obtain the second image that includes the head contour.
In a third aspect, an embodiment of the present application provides a terminal including a processor and a memory. The memory stores a program, and the processor runs the program stored in the memory to perform the steps of any method of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program for electronic data interchange, and the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application. The computer includes a mobile terminal.
In a fifth aspect, an embodiment of the present application provides a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application. The computer program product may be a software installation package, and the computer includes a mobile terminal.
It can be seen that in the embodiments of the present application, a terminal first identifies the face contour of a target head image, determines the hair contour of the target head image according to the face contour, extracts image information from the head image according to the face contour and the hair contour, and determines a second image that includes the head contour. The embodiments of the present application help improve the completeness and accuracy of image extraction.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image extraction method disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image extraction method disclosed in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a head image disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal disclosed in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a smartphone disclosed in an embodiment of the present invention.
Detailed description of embodiments
To enable persons skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first" and "second" in the specification, claims, and accompanying drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or optionally further comprises other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in conjunction with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Persons skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
In the embodiments of the present application, the terminal may include, but is not limited to, a smartphone, a palmtop computer, a laptop computer, a desktop computer, and the like. The operating system of the terminal may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and the like, which are not limited in the embodiments of the present invention.
The embodiments of the present application are described below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an image extraction method disclosed in an embodiment of the present invention. As shown in Fig. 1, the image extraction method may include the following steps.
S101: The terminal identifies the face contour of a target head image.
The head image includes at least image information of parts such as the eyes, nose, mouth, cheeks, forehead, hair, and ears.
In a specific implementation, the terminal may identify the face contour of the head image according to a face recognition strategy that is prestored, preset, or received in real time. The face recognition strategy may be, for example, a face recognition strategy based on LGBP (Local Gabor Binary Patterns) or a face recognition strategy based on AdaBoost, which is not uniquely limited here.
S102: The terminal determines the hair contour of the target head image according to the face contour.
The face contour includes the coordinate value of a first eye central point, the coordinate value of a second eye central point, the coordinate value of a mouth central point, the coordinate value of a first cheekbone feature point, and the coordinate value of a second cheekbone feature point.
S103: The terminal extracts image information from the target head image according to the face contour and the hair contour, and determines a second image that includes the head contour.
The hair contour includes the set of feature points along the edge of the hair region.
It can be seen that in this embodiment of the present application, the terminal first identifies the face contour of the target head image; the terminal then determines the hair contour of the target head image according to the identified face contour; finally, the terminal extracts image information from the target head image according to the face contour and the hair contour, and determines a second image that includes the head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image can accurately include the determined hair contour, which avoids the situation where the completeness of the head contour is reduced because the hair contour cannot be accurately identified, and helps improve the accuracy and completeness of head contour identification.
In a possible example, the face contour includes the coordinate value of the first eye central point, the coordinate value of the second eye central point, the coordinate value of the mouth central point, the coordinate value of the first cheekbone feature point, and the coordinate value of the second cheekbone feature point. Determining the hair contour of the target head image according to the face contour includes: determining the coordinate value of the contour feature point of the hair contour according to the coordinate values of the first eye central point, the second eye central point, and the mouth central point; determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the coordinate values of the contour feature point, the first cheekbone feature point, and the second cheekbone feature point; and determining the hair contour according to the first hair feature point and the second hair feature point.
The first eye and the second eye are respectively the left eye and the right eye of the face. The contour feature point of the hair contour is a feature point on the lower boundary of the hair contour.
It can be seen that in this example, because the terminal can accurately determine the coordinate values of the first hair feature point and the second hair feature point in the hair region, the terminal can accurately identify the hair region according to the first hair feature point and the second hair feature point and finally determine the complete hair contour, which helps improve the accuracy and completeness with which the terminal determines the hair contour of the target head image.
In a possible example, determining the coordinate value of the contour feature point of the hair contour according to the coordinate values of the first eye central point, the second eye central point, and the mouth central point includes: determining, according to the coordinate values of the first eye central point and the second eye central point, the coordinate value of the first reference central point between the first eye central point and the second eye central point; determining the first distance between the first reference central point and the mouth central point according to the coordinate values of the first reference central point and the mouth central point; and determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference central point, and the preset relation between the first distance and the second distance, where the second distance is the distance between the first reference central point and the contour feature point.
The first reference central point is the midpoint between the first eye central point and the second eye central point. The preset relation between the first distance and the second distance is that the second distance is a preset proportion of the first distance; the preset proportion may be an empirical value such as 1/2 or 3/5, which is not uniquely limited here.
It can be seen that in this example, because the terminal can accurately calculate the coordinate value of the contour feature point of the hair contour according to a preset proportion that matches typical head contour features, that contour feature point can further be used to determine the first hair feature point and the second hair feature point in the hair region; the first and second hair feature points are used to determine the hair region, and the hair region is finally used to determine the hair contour, which helps improve the accuracy with which the terminal identifies the hair contour in the target head image.
In a possible example, determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the coordinate values of the contour feature point, the first cheekbone feature point, and the second cheekbone feature point includes: determining the third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point; determining the coordinate value of the second reference central point between the first reference central point and the contour feature point; and determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the third distance and the coordinate value of the second reference central point.
The first hair feature point and the second hair feature point are respectively a feature point in the left hair region and a feature point in the right hair region of the head contour. The first cheekbone and the second cheekbone are respectively the left cheekbone and the right cheekbone of the face contour. The second reference central point is the point at 1/2 of the distance between the first reference central point and the contour feature point.
It can be seen that in this example, according to the typical feature relationships among the feature points of a head contour, when the first cheekbone feature point and the second cheekbone feature point are moved to the horizontal line through the second reference central point, the point corresponding to the first cheekbone feature point is the first hair feature point in the hair region, and the point corresponding to the second cheekbone feature point is the second hair feature point in the hair region. The terminal can therefore accurately identify the first hair feature point and the second hair feature point according to the above feature relationships. Because the first hair feature point and the second hair feature point are used to determine the hair region, and the hair region is finally used to determine the hair contour, this helps improve the accuracy with which the terminal identifies the hair contour.
In a possible example, extracting image information from the target head image according to the face contour and the hair contour and determining the second image that includes the head contour includes: determining the image information extraction region according to the face contour and the hair contour; and extracting the image information in the image information extraction region of the target head image to obtain the second image that includes the head contour.
It can be seen that in this example, because the ranges of the face contour and the hair contour within the head contour are determined, the extraction region of the head image can in turn be determined, and complete image extraction can be carried out according to that region, which helps improve the accuracy and completeness of the image extracted by the terminal.
Consistent with the embodiment shown in Fig. 1 above, referring to Fig. 2, Fig. 2 is a schematic flowchart of an image extraction method provided by an embodiment of the present application, applied to a terminal. As shown in the figure, the image extraction method includes the following steps.
S201: The terminal identifies the face contour of a target head image.
S202: The terminal determines, according to the coordinate values of the first eye central point and the second eye central point, the coordinate value of the first reference central point between the first eye central point and the second eye central point.
S203: The terminal determines the first distance between the first reference central point and the mouth central point according to the coordinate values of the first reference central point and the mouth central point.
S204: The terminal determines the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference central point, and the preset relation between the first distance and the second distance, where the second distance is the distance between the first reference central point and the contour feature point.
S205: The terminal determines the third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point.
S206: The terminal determines the coordinate value of the second reference central point between the first reference central point and the contour feature point.
S207: The terminal determines the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target head image according to the third distance and the coordinate value of the second reference central point.
S208: The terminal determines the hair contour according to the first hair feature point and the second hair feature point.
S209: The terminal determines the image information extraction region according to the face contour and the hair contour.
S210: The terminal extracts the image information in the image information extraction region of the target head image to obtain a second image that includes the head contour.
It can be seen that in this embodiment of the present application, the terminal first identifies the face contour of the target head image; the terminal then determines the hair contour of the target head image according to the identified face contour; finally, the terminal extracts image information from the target head image according to the face contour and the hair contour, and determines a second image that includes the head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image can accurately include the determined hair contour, which avoids the situation where the completeness of the head contour is reduced because the hair contour cannot be accurately identified, and helps improve the accuracy and completeness of head contour identification.
The embodiments of the present application are further described below with reference to a concrete application scenario.
As shown in Fig. 3, Fig. 3 is a contour diagram of a head image. In the image, the mouth central point is C, the eye central points are A and B respectively, the feature points of the left and right cheekbones are D and E, and the midpoint between the two cheekbone feature points is point I. Point H is the midpoint between points A and B, and the distance from point C to point H is a. According to the preset relation, the distance from point H to point J is α times a, which yields point J, with J and H on the same vertical line. Points D and E are on the same horizontal line, and the distance from point D to point E is b. Point K is taken as the midpoint between points H and J, and the span b is moved onto the horizontal line through K, which yields points F and G. The hair contour is obtained with points F, G, and J as its range. Finally, through a third-party face recognition technique, the required second image is extracted from the image information.
The following are apparatus embodiments of the present invention, which are used to perform the methods of the method embodiments of the present invention. As shown in Fig. 4, the terminal may include an identification module 401, a determining module 402, and an extraction module 403.
The identification module is configured to identify the face contour of a target head image.
The determining module is configured to determine the hair contour of the target head image according to the face contour.
The extraction module is configured to extract image information from the target head image according to the face contour and the hair contour, and determine a second image that includes the head contour.
It can be seen that, in the embodiments of the present application, the terminal identifies the face contour of a target head image; determines the hair contour of the target head image according to the face contour; and extracts, according to the face contour and the hair contour, the image information in the target head image to determine a second image that includes the head contour. Since the hair contour can be determined accurately on the basis of the face contour, the head contour in the second image can accurately include the determined hair contour. This avoids the loss of head-contour completeness caused by failure to accurately identify the hair contour, and improves both the accuracy and the completeness of head-contour recognition.
In a possible example, the instructions of the determining module are specifically used to perform the following operations: determining the coordinate value of the contour feature point of the forehead contour according to the coordinate value of the first eye center point, the coordinate value of the second eye center point and the coordinate value of the face center point; determining, according to the coordinate value of the contour feature point, the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image; and determining the hair contour according to the first hair feature point and the second hair feature point.
In a possible example, the instructions of the determining module are specifically used to perform the following operations: determining, according to the coordinate value of the first eye center point and the coordinate value of the second eye center point, the coordinate value of a first reference center point between the first eye center point and the second eye center point; determining, according to the coordinate value of the first reference center point and the coordinate value of the face center point, a first distance between the first reference center point and the face center point; and determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point, and a preset relation between the first distance and a second distance, the second distance being the distance between the first reference center point and the contour feature point.
In a possible example, the instructions of the determining module are specifically used to perform the following operations: determining a third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point; determining the coordinate value of a second reference center point between the first reference center point and the contour feature point; and determining, according to the third distance and the coordinate value of the second reference center point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image.
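These operations can likewise be sketched in a few lines. Placing the two hair feature points symmetrically about the second reference center point, half the cheekbone span to either side, mirrors the Fig. 3 scenario but is an assumption; the text does not spell out the exact placement rule.

```python
def hair_feature_points(cheek1, cheek2, ref1, forehead):
    """Sketch of the claimed steps: the third distance is the span
    between the two cheekbone feature points; the second reference
    center point is taken as the midpoint between the first reference
    point and the forehead contour feature point; the two hair feature
    points then sit half the cheekbone span to either side of it."""
    d3 = ((cheek2[0] - cheek1[0]) ** 2 + (cheek2[1] - cheek1[1]) ** 2) ** 0.5
    ref2 = ((ref1[0] + forehead[0]) / 2.0, (ref1[1] + forehead[1]) / 2.0)
    first_hair = (ref2[0] - d3 / 2.0, ref2[1])
    second_hair = (ref2[0] + d3 / 2.0, ref2[1])
    return ref2, first_hair, second_hair
```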
In a possible example, the instructions of the extraction module are specifically used to perform the following operations: determining an image-information extraction region according to the face contour and the hair contour; and extracting the image information within the image-information extraction region of the target head image, to obtain a second image that includes the head contour.
Consistent with the embodiments shown in Figs. 2, 3 and 4 above, referring to Fig. 5, Fig. 5 is a structural diagram of a terminal provided by an embodiment of the present application. The terminal runs one or more application programs and an operating system. As shown in the figure, the terminal includes a processor, a memory, a communication interface and one or more programs, where the one or more programs are different from the one or more application programs, are stored in the memory, and are configured to be executed by the processor. The programs include instructions for performing the following steps:
identifying the face contour of a target head image;
determining the hair contour of the target head image according to the face contour;
extracting, according to the face contour and the hair contour, the image information in the target head image, and determining a second image that includes the head contour.
It can be seen that, in the embodiments of the present application, the terminal first identifies the face contour of a target head image; second, the terminal determines the hair contour of the target head image according to the identified face contour; finally, the terminal extracts, according to the face contour and the hair contour, the image information in the target head image and determines a second image that includes the head contour. Since the hair contour can be determined accurately on the basis of the face contour, the head contour in the second image can accurately include the determined hair contour. This avoids the loss of head-contour completeness caused by failure to accurately identify the hair contour, and improves both the accuracy and the completeness of head-contour recognition.
The above mainly describes the solutions of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, in order to realize the above functions, the terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in hardware, or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Referring to Fig. 6, Fig. 6 is a structural diagram of a smartphone 600 provided by an embodiment of the present application. The smartphone 600 includes: a housing 610, a touch display screen 620, a mainboard 630, a battery 640 and a sub-board 650. The mainboard 630 carries a front camera 631, a processor 632, a memory 633, a power management chip 634 and the like; the sub-board carries a vibrator 651, an integrated sound cavity 652, a VOOC flash-charging interface 653 and a fingerprint recognition module 654.
The terminal first identifies the face contour of a target head image; determines the hair contour of the target head image according to the face contour; and extracts, according to the face contour and the hair contour, the image information in the target head image to determine a second image that includes the head contour.
The processor 632 is the control center of the smartphone. It connects the various parts of the whole smartphone through various interfaces and lines, and performs the various functions of the smartphone and processes data by running or executing the software programs and/or modules stored in the memory 633 and calling the data stored in the memory 633, thereby monitoring the smartphone as a whole. Optionally, the processor 632 may include one or more processing units. Preferably, the processor 632 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 632. The processor 632 may be, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules and circuits described in connection with the present disclosure. The processor may also be a combination that realizes a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 633 can be used to store software programs and modules; the processor 632 performs the various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 633. The memory 633 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, application programs required by at least one function, and the like, and the data storage area can store data created according to the use of the smartphone, and the like. In addition, the memory 633 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. The memory 633 may be, for example, a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art.
An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the method embodiments. The computer includes a mobile terminal.
An embodiment of the present application also provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any method described in the method embodiments. The computer program product may be a software installation package; the computer includes a mobile terminal.
It should be noted that, for brevity, each of the foregoing method embodiments is expressed as a series of action combinations. However, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, each embodiment is described with its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and other division manners are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in each embodiment of the present application. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
Those of ordinary skill in the art can understand that all or some of the steps in the various methods of the embodiments may be completed by a program instructing related hardware, and the program may be stored in a computer-readable memory. The memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to set forth the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the methods and core ideas of the present application. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementations and application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
- 1. An image extraction method, characterized in that the method includes:
identifying the face contour of a target head image;
determining the hair contour of the target head image according to the face contour;
extracting, according to the face contour and the hair contour, the image information in the target head image, and determining a second image that includes the head contour.
- 2. The method according to claim 1, characterized in that the face contour includes the coordinate value of a first eye center point, the coordinate value of a second eye center point, the coordinate value of a face center point, the coordinate value of a first cheekbone feature point, and the coordinate value of a second cheekbone feature point; and determining the hair contour of the target head image according to the face contour includes:
determining the coordinate value of the contour feature point of the hair contour according to the coordinate value of the first eye center point, the coordinate value of the second eye center point and the coordinate value of the face center point;
determining, according to the coordinate value of the contour feature point, the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image;
determining the hair contour according to the first hair feature point and the second hair feature point.
- 3. The method according to claim 2, characterized in that determining the coordinate value of the contour feature point of the hair contour according to the coordinate value of the first eye center point, the coordinate value of the second eye center point and the coordinate value of the face center point includes:
determining, according to the coordinate value of the first eye center point and the coordinate value of the second eye center point, the coordinate value of a first reference center point between the first eye center point and the second eye center point;
determining, according to the coordinate value of the first reference center point and the coordinate value of the face center point, a first distance between the first reference center point and the face center point;
determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point, and a preset relation between the first distance and a second distance, the second distance being the distance between the first reference center point and the contour feature point.
- 4. The method according to claim 2 or 3, characterized in that determining, according to the coordinate value of the contour feature point, the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image includes:
determining a third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point;
determining the coordinate value of a second reference center point between the first reference center point and the contour feature point;
determining, according to the third distance and the coordinate value of the second reference center point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image.
- 5. The method according to any one of claims 1-4, characterized in that extracting, according to the face contour and the hair contour, the image information in the target head image and determining a second image that includes the head contour includes:
determining an image-information extraction region according to the face contour and the hair contour;
extracting the image information within the image-information extraction region of the target head image, to obtain a second image that includes the head contour.
- 6. An image extraction apparatus, characterized by including:
an identification module, configured to identify the face contour of a target head image;
a determining module, configured to determine the hair contour of the target head image according to the face contour;
an extraction module, configured to extract, according to the face contour and the hair contour, the image information in the target head image and determine a second image that includes the head contour.
- 7. The apparatus according to claim 6, characterized in that the determining module is specifically configured to:
determine the coordinate value of the contour feature point of the forehead contour according to the coordinate value of the first eye center point, the coordinate value of the second eye center point and the coordinate value of the face center point;
determine, according to the coordinate value of the contour feature point, the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image;
determine the hair contour according to the first hair feature point and the second hair feature point.
- 8. The apparatus according to claim 7, characterized in that the determining module is specifically configured to:
determine, according to the coordinate value of the first eye center point and the coordinate value of the second eye center point, the coordinate value of a first reference center point between the first eye center point and the second eye center point;
determine, according to the coordinate value of the first reference center point and the coordinate value of the face center point, a first distance between the first reference center point and the face center point;
determine the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point, and a preset relation between the first distance and a second distance, the second distance being the distance between the first reference center point and the contour feature point.
- 9. The apparatus according to claim 7 or 8, characterized in that the determining module is specifically configured to:
determine a third distance between the coordinate value of the first cheekbone feature point and the coordinate value of the second cheekbone feature point;
determine the coordinate value of a second reference center point between the first reference center point and the contour feature point;
determine, according to the third distance and the coordinate value of the second reference center point, the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair region of the target head image.
- 10. The apparatus according to any one of claims 6-9, characterized in that the extraction module is specifically configured to:
determine an image-information extraction region according to the face contour and the hair contour;
extract the image information within the image-information extraction region of the target head image, to obtain a second image that includes the head contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710998445.7A CN108009470B (en) | 2017-10-20 | 2017-10-20 | Image extraction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710998445.7A CN108009470B (en) | 2017-10-20 | 2017-10-20 | Image extraction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108009470A true CN108009470A (en) | 2018-05-08 |
CN108009470B CN108009470B (en) | 2020-06-16 |
Family
ID=62051789
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710998445.7A Active CN108009470B (en) | 2017-10-20 | 2017-10-20 | Image extraction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108009470B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363107A (en) * | 2019-06-26 | 2019-10-22 | 成都品果科技有限公司 | Face forehead point Quick Extended method, apparatus, storage medium and processor |
CN110458855A (en) * | 2019-07-08 | 2019-11-15 | 安徽淘云科技有限公司 | Image extraction method and Related product |
CN111080754A (en) * | 2019-12-12 | 2020-04-28 | 广东智媒云图科技股份有限公司 | Character animation production method and device for connecting characteristic points of head and limbs |
CN111563898A (en) * | 2020-04-29 | 2020-08-21 | 万翼科技有限公司 | Image segmentation method, electronic equipment and related product |
CN113255561A (en) * | 2021-06-10 | 2021-08-13 | 平安科技(深圳)有限公司 | Hair information identification method, device, equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1453002A2 (en) * | 2003-02-28 | 2004-09-01 | Eastman Kodak Company | Enhancing portrait images that are processed in a batch mode |
JP4076777B2 (en) * | 2002-03-06 | 2008-04-16 | 三菱電機株式会社 | Face area extraction device |
CN101404910A (en) * | 2006-03-23 | 2009-04-08 | 花王株式会社 | Hair style simulation image creating method |
CN102214361A (en) * | 2010-04-09 | 2011-10-12 | 索尼公司 | Information processing device, method, and program |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
CN103679767A (en) * | 2012-08-30 | 2014-03-26 | 卡西欧计算机株式会社 | Image generation apparatus and image generation method |
CN106446781A (en) * | 2016-08-29 | 2017-02-22 | 厦门美图之家科技有限公司 | Face image processing method and face image processing device |
CN107316333A (en) * | 2017-07-07 | 2017-11-03 | 华南理工大学 | It is a kind of to automatically generate the method for day overflowing portrait |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4076777B2 (en) * | 2002-03-06 | 2008-04-16 | 三菱電機株式会社 | Face area extraction device |
EP1453002A2 (en) * | 2003-02-28 | 2004-09-01 | Eastman Kodak Company | Enhancing portrait images that are processed in a batch mode |
CN101404910A (en) * | 2006-03-23 | 2009-04-08 | 花王株式会社 | Hair style simulation image creating method |
CN102214361A (en) * | 2010-04-09 | 2011-10-12 | 索尼公司 | Information processing device, method, and program |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
CN103679767A (en) * | 2012-08-30 | 2014-03-26 | 卡西欧计算机株式会社 | Image generation apparatus and image generation method |
CN106446781A (en) * | 2016-08-29 | 2017-02-22 | 厦门美图之家科技有限公司 | Face image processing method and face image processing device |
CN107316333A (en) * | 2017-07-07 | 2017-11-03 | 华南理工大学 | It is a kind of to automatically generate the method for day overflowing portrait |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363107A (en) * | 2019-06-26 | 2019-10-22 | 成都品果科技有限公司 | Face forehead point Quick Extended method, apparatus, storage medium and processor |
CN110458855A (en) * | 2019-07-08 | 2019-11-15 | 安徽淘云科技有限公司 | Image extraction method and Related product |
CN111080754A (en) * | 2019-12-12 | 2020-04-28 | 广东智媒云图科技股份有限公司 | Character animation production method and device for connecting characteristic points of head and limbs |
CN111080754B (en) * | 2019-12-12 | 2023-08-11 | 广东智媒云图科技股份有限公司 | Character animation production method and device for connecting characteristic points of head and limbs |
CN111563898A (en) * | 2020-04-29 | 2020-08-21 | 万翼科技有限公司 | Image segmentation method, electronic equipment and related product |
CN113255561A (en) * | 2021-06-10 | 2021-08-13 | 平安科技(深圳)有限公司 | Hair information identification method, device, equipment and storage medium |
WO2022257456A1 (en) * | 2021-06-10 | 2022-12-15 | 平安科技(深圳)有限公司 | Hair information recognition method, apparatus and device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108009470B (en) | 2020-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108009470A (en) | A kind of method and apparatus of image zooming-out | |
CN109657631B (en) | Human body posture recognition method and device | |
CN109829448B (en) | Face recognition method, face recognition device and storage medium | |
CN109829368B (en) | Palm feature recognition method and device, computer equipment and storage medium | |
CN107958230B (en) | Facial expression recognition method and device | |
CN111310705A (en) | Image recognition method and device, computer equipment and storage medium | |
CN107844742B (en) | Facial image glasses minimizing technology, device and storage medium | |
CN107590474B (en) | Unlocking control method and related product | |
EP4047509A1 (en) | Facial parsing method and related devices | |
CN107527046A (en) | Solve lock control method and Related product | |
CN105205380A (en) | Unlocking method and device of mobile terminal | |
CN107945102A (en) | A kind of picture synthetic method and device | |
CN112818909A (en) | Image updating method and device, electronic equipment and computer readable medium | |
CN112101073B (en) | Face image processing method, device, equipment and computer storage medium | |
CN107015745A (en) | Screen operation method and device, terminal equipment and computer readable storage medium | |
WO2020224136A1 (en) | Interface interaction method and device | |
CN110288715A (en) | Virtual necklace try-in method, device, electronic equipment and storage medium | |
Ren et al. | Hand gesture recognition with multiscale weighted histogram of contour direction normalization for wearable applications | |
CN104392356A (en) | Mobile payment system and method based on three-dimensional human face recognition | |
CN113536262A (en) | Unlocking method and device based on facial expression, computer equipment and storage medium | |
CN111460910A (en) | Face type classification method and device, terminal equipment and storage medium | |
CN109740511B (en) | Facial expression matching method, device, equipment and storage medium | |
CN106875332A (en) | A kind of image processing method and terminal | |
CN111444928A (en) | Key point detection method and device, electronic equipment and storage medium | |
Mehryar et al. | Automatic landmark detection for 3d face image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||