CN109241878B - Lip positioning-based facial feature positioning method and system - Google Patents


Info

Publication number
CN109241878B
CN109241878B (granted publication of application CN201810954321.3A)
Authority
CN
China
Prior art keywords
thres
face
nose
block
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810954321.3A
Other languages
Chinese (zh)
Other versions
CN109241878A (en)
Inventor
舒倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mengwang Video Co ltd
Original Assignee
Shenzhen Mengwang Video Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mengwang Video Co ltd filed Critical Shenzhen Mengwang Video Co ltd
Priority to CN201810954321.3A priority Critical patent/CN109241878B/en
Publication of CN109241878A publication Critical patent/CN109241878A/en
Application granted granted Critical
Publication of CN109241878B publication Critical patent/CN109241878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a facial feature positioning method and system based on lip positioning. First, the lip region of the current face is found and the face inclination and face side degree are determined; then, search regions for the eyes and nose are determined according to the inclination and side degree, and the blocks within them are judged precisely against eye and nose characteristics to complete the positioning of the current face's facial features. Reasonable inclination-angle analysis improves the accuracy of face recognition, and also provides an accurate positioning search direction for eye-nose positioning when the eyes and nose are incomplete.

Description

Lip positioning-based facial feature positioning method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a lip positioning-based facial feature positioning method and system.
Background
Face recognition and online video beautification are two emerging video applications. In practice, the face is not always frontal and upright: because of personal habits or the desire to look better on camera, it is often turned sideways or tilted. Treating such faces as if they were frontal reduces the accuracy of face detection and recognition, and inaccurate face detection in turn prevents a good portrait beautification result.
Disclosure of Invention
The embodiment of the invention aims to provide a facial feature positioning method based on lip positioning, so as to solve the problem of low face detection and recognition accuracy in the prior art.
The embodiment of the invention is realized by a lip positioning-based facial feature positioning method, which comprises the following steps:
step 1: performing lip positioning on the current face;
step 2: in the set of blocks judged to be the current face's lip blocks, finding the block column with the smallest column number and locating its middle block, denoted gmb(i1, j1); finding the block column with the largest column number and locating its middle block, denoted gmb(i2, j2);
step 3: finding the block row with the smallest row number in the same lip block set and locating its middle block, denoted gmb(i3, j3); finding the block row with the largest row number and locating its middle block, denoted gmb(i4, j4);
step 4: calculating the current face inclination angle theta;
step 5: calculating the side proportion gamma of the current face;
step 6: determining an eye-nose to-be-detected area of the current face according to the face inclination angle and the face side ratio;
step 7: performing precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face to complete the positioning of the current face's facial features.
It is another object of an embodiment of the present invention to provide a facial feature positioning system based on lip positioning, the system including:
the lip positioning device is used for carrying out lip positioning on the current face;
a first middle-block search and positioning module, which, in the set of blocks judged to be the current face's lip blocks, finds the block column with the smallest column number and locates its middle block, denoted gmb(i1, j1); it then finds the block column with the largest column number and locates its middle block, denoted gmb(i2, j2);
a second middle-block search and positioning module, which finds the block row with the smallest row number in the same lip block set and locates its middle block, denoted gmb(i3, j3); it then finds the block row with the largest row number and locates its middle block, denoted gmb(i4, j4);
the face inclination angle calculation module is used for calculating the current face inclination angle theta;
the face side proportion calculation module is used for calculating the current face side proportion gamma;
the eye-nose to-be-detected region determining module is used for determining the eye-nose to-be-detected region of the current face according to the face inclination angle and the face side proportion;
and the precise eye-nose judgment module, which is used for performing precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face, completing the positioning of the current face's facial features.
The invention has the advantages of
The invention provides a facial feature positioning method and system based on lip positioning. First, the lip region of the current face is found and the face inclination and face side degree are determined; then, search regions for the eyes and nose are determined according to the inclination and side degree, and the blocks within them are judged precisely against eye and nose characteristics to complete the positioning of the current face's facial features. Reasonable inclination-angle analysis improves the accuracy of face recognition, and also provides an accurate positioning search direction for eye-nose positioning when the eyes and nose are incomplete.
Drawings
FIG. 1 is a flow chart of a method for positioning facial features based on lip positioning according to a preferred embodiment of the present invention;
fig. 2 is a diagram of a facial feature positioning system based on lip positioning according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples, and for convenience of description, only parts related to the examples of the present invention are shown. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a facial feature positioning method and system based on lip positioning. First, the lip region of the current face is found and the face inclination and face side degree are determined; then, search regions for the eyes and nose are determined according to the inclination and side degree, and the blocks within them are judged precisely against eye and nose characteristics to complete the positioning of the current face's facial features. Reasonable inclination-angle analysis improves the accuracy of face recognition, and also provides an accurate positioning search direction for eye-nose positioning when the eyes and nose are incomplete.
Example one
FIG. 1 is a flow chart of a method for positioning facial features based on lip positioning according to a preferred embodiment of the present invention; the method comprises the following steps:
step 1: and carrying out lip positioning on the current face.
The lip positioning method employs methods disclosed in the art.
Step 2: in the set of blocks judged to be the current face's lip blocks, find the block column with the smallest column number and locate its middle block, denoted gmb(i1, j1); find the block column with the largest column number and locate its middle block, denoted gmb(i2, j2).
Step 3: find the block row with the smallest row number in the same lip block set and locate its middle block, denoted gmb(i3, j3); find the block row with the largest row number and locate its middle block, denoted gmb(i4, j4).
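Steps 2 and 3 can be sketched in code. The patent does not specify a data structure for the lip block set, so the sketch below assumes it is a set of (row, column) block coordinates; the function name is illustrative only.

```python
def extreme_middle_blocks(lip_blocks):
    """Locate gmb(i1,j1), gmb(i2,j2), gmb(i3,j3), gmb(i4,j4): the middle
    blocks of the leftmost column, rightmost column, topmost row and
    bottommost row of the lip block set."""
    cols = [c for _, c in lip_blocks]
    rows = [r for r, _ in lip_blocks]

    def middle_of_col(j):
        # middle block of the column with index j
        rs = sorted(r for r, c in lip_blocks if c == j)
        return (rs[len(rs) // 2], j)

    def middle_of_row(i):
        # middle block of the row with index i
        cs = sorted(c for r, c in lip_blocks if r == i)
        return (i, cs[len(cs) // 2])

    return (middle_of_col(min(cols)), middle_of_col(max(cols)),
            middle_of_row(min(rows)), middle_of_row(max(rows)))
```

For a rectangular 3x5 lip patch spanning rows 10-12 and columns 5-9, this returns (11, 5), (11, 9), (10, 7) and (12, 7).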
Step 4: calculate the inclination angle θ of the current face.
[The formula for θ is given only as an image in the original document and is not recoverable from the text.]
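Because the published formula survives only as an image, the exact definition of θ is unknown. One plausible reconstruction, consistent with gmb(i1, j1) and gmb(i2, j2) being the leftmost and rightmost lip middle blocks, measures the angle of the line through them against the horizontal image axis; treat this strictly as an assumption, not the patent's formula.

```python
import math

def tilt_angle(i1, j1, i2, j2):
    # Assumed reconstruction: angle of the lip axis (the line through the
    # leftmost and rightmost lip middle blocks) relative to the horizontal
    # axis. Rows grow downward, columns grow to the right.
    return math.atan2(i2 - i1, j2 - j1)
```

A level lip line (i1 == i2) gives θ = 0, matching Case 1 of step 6 below.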
Step 5: calculate the side proportion γ of the current face.
[The formula for γ is given only as an image in the original document and is not recoverable from the text.]
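The formula for γ is likewise unavailable. One hedged reading, consistent with γ being used in step 6 to distinguish left-side, right-side and frontal faces, compares the lip half-widths on either side of the vertical lip axis at column (j3 + j4)/2; again, this is an assumption rather than the patent's definition.

```python
def side_ratio(j1, j2, j3, j4):
    # Assumed reconstruction: ratio of the left lip half-width to the
    # right lip half-width around the vertical lip axis. Values near 1
    # suggest a frontal face; values far from 1, a turned face.
    mid = (j3 + j4) / 2.0
    left, right = mid - j1, j2 - mid
    return left / right if right else float('inf')
```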
step 6: and determining the eye-nose to-be-detected region of the current face according to the face inclination angle and the face side ratio.
Case 1: θ = 0
If γ indicates a sufficiently right-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_je ≤ j ≤ (j3 + j4)/2}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ (j3 + j4)/2};
otherwise, if γ indicates a sufficiently left-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and (j3 + j4)/2 ≤ j ≤ j2 + Thres_je}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and (j3 + j4)/2 ≤ j ≤ j2};
otherwise, the eye judgment region is defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_jL ≤ j ≤ j2 + Thres_jR}, and the nose judgment region as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ j2}.
Here Thres < Thres_ie < 3*Thres with Thres = max{i4 - i3, j2 - j1}, and 0.5*Thres < Thres_je < 2*Thres; [the constraints on Thres_jL and Thres_jR are given only as an image in the original and are not recoverable from the text]. Thres, Thres_ie, Thres_je, Thres_jL and Thres_jR are called the threshold, first threshold, second threshold, left column threshold and right column threshold, respectively; max{ } denotes taking the maximum; gmb(i, j) denotes the block at row i, column j of the current picture.
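The inequalities of Case 1 map directly onto block-set construction. The sketch below materializes the frontal-face branch (the final "otherwise" case); the threshold values are free parameters within the stated bounds, and the function name is illustrative.

```python
def frontal_regions(i3, i4, j1, j2, thres_ie, thres_jl, thres_jr):
    # Frontal branch of Case 1 (theta == 0):
    #   eye region:  2*i3-i4-Thres_ie <= i <= i3-Thres_ie,
    #                j1-Thres_jL <= j <= j2+Thres_jR
    #   nose region: 2*i3-i4 <= i <= i3,  j1 <= j <= j2
    eyes = {(i, j)
            for i in range(2 * i3 - i4 - thres_ie, i3 - thres_ie + 1)
            for j in range(j1 - thres_jl, j2 + thres_jr + 1)}
    nose = {(i, j)
            for i in range(2 * i3 - i4, i3 + 1)
            for j in range(j1, j2 + 1)}
    return eyes, nose
```

With i3 = 10, i4 = 14, j1 = 5, j2 = 9 and Thres_ie = 6, the eye band covers rows 0-4, sitting above the nose rows 6-10; the nose band mirrors the lip extent upward.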
Case 2: θ ≠ 0
First, define the rectangular eye and nose regions according to the corresponding γ as in Case 1; then find the center point of each of the eye rectangle and the nose rectangle, and rotate each rectangle by θ about its own center; finally, take the blocks that fall inside the rotated rectangles as the eye and nose regions.
Step 7: perform precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face to complete the positioning of the current face's facial features.
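The rotation step of Case 2 can be sketched as follows. Block membership in the rotated rectangle is tested here by rotating each candidate block back by -θ and checking the axis-aligned bounds; this is an illustrative choice, since the patent does not say how "falling into" the rotated rectangle is decided.

```python
import math

def rotated_region(rect_blocks, theta):
    # Rotate the Case-1 rectangle by theta about its center and keep the
    # integer blocks that fall inside the rotated rectangle.
    rows = [i for i, _ in rect_blocks]
    cols = [j for _, j in rect_blocks]
    ci = (min(rows) + max(rows)) / 2.0
    cj = (min(cols) + max(cols)) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Scan a window large enough to contain any rotation of the rectangle.
    pad = int(math.hypot(max(rows) - min(rows), max(cols) - min(cols))) + 1
    kept = set()
    for i in range(min(rows) - pad, max(rows) + pad + 1):
        for j in range(min(cols) - pad, max(cols) + pad + 1):
            di, dj = i - ci, j - cj
            # Rotate the candidate back by -theta ...
            bi = ci + di * cos_t + dj * sin_t
            bj = cj - di * sin_t + dj * cos_t
            # ... and test it against the unrotated bounds.
            if (min(rows) <= bi <= max(rows)
                    and min(cols) <= bj <= max(cols)):
                kept.add((i, j))
    return kept
```

With θ = 0 the function returns the original rectangle unchanged, so Case 2 degenerates to Case 1 as expected.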
The precise eye-nose judgment within the eye-nose to-be-detected region likewise uses methods disclosed in the art.
Example two
Fig. 2 is a diagram of a facial feature positioning system based on lip positioning according to a preferred embodiment of the present invention. The system comprises:
and the lip positioning device is used for carrying out lip positioning on the current face.
The lip positioning of the current face adopts an industry-disclosed method;
A first middle-block search and positioning module, which, in the set of blocks judged to be the current face's lip blocks, finds the block column with the smallest column number and locates its middle block, denoted gmb(i1, j1); it then finds the block column with the largest column number and locates its middle block, denoted gmb(i2, j2).
A second middle-block search and positioning module, which finds the block row with the smallest row number in the same lip block set and locates its middle block, denoted gmb(i3, j3); it then finds the block row with the largest row number and locates its middle block, denoted gmb(i4, j4).
A face inclination angle calculation module, for calculating the current face inclination angle θ.
[The formula for θ is given only as an image in the original document and is not recoverable from the text.]
A face side proportion calculation module, for calculating the current face side proportion γ.
[The formula for γ is given only as an image in the original document and is not recoverable from the text.]
and the eye-nose to-be-detected region determining module is used for determining the eye-nose to-be-detected region of the current face according to the face inclination angle and the face side proportion.
Case 1: θ = 0
If γ indicates a sufficiently right-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_je ≤ j ≤ (j3 + j4)/2}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ (j3 + j4)/2};
otherwise, if γ indicates a sufficiently left-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and (j3 + j4)/2 ≤ j ≤ j2 + Thres_je}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and (j3 + j4)/2 ≤ j ≤ j2};
otherwise, the eye judgment region is defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_jL ≤ j ≤ j2 + Thres_jR}, and the nose judgment region as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ j2}.
Here Thres < Thres_ie < 3*Thres with Thres = max{i4 - i3, j2 - j1}, and 0.5*Thres < Thres_je < 2*Thres; [the constraints on Thres_jL and Thres_jR are given only as an image in the original and are not recoverable from the text]. Thres, Thres_ie, Thres_je, Thres_jL and Thres_jR are called the threshold, first threshold, second threshold, left column threshold and right column threshold, respectively; max{ } denotes taking the maximum; gmb(i, j) denotes the block at row i, column j of the current picture.
Case 2: θ ≠ 0
First, define the rectangular eye and nose regions according to the corresponding γ as in Case 1; then find the center point of each of the eye rectangle and the nose rectangle, and rotate each rectangle by θ about its own center; finally, take the blocks that fall inside the rotated rectangles as the eye and nose regions.
And the precise eye-nose judgment module, which performs precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face, completing the positioning of the current face's facial features.
The precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face uses methods disclosed in the art.
It will be understood by those skilled in the art that all or part of the steps in the method according to the above embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, such as ROM, RAM, magnetic disk, optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (4)

1. A lip-positioning-based facial feature positioning method, the method comprising:
step 1: performing lip positioning on the current face;
step 2: in the set of blocks judged to be the current face's lip blocks, finding the block column with the smallest column number and locating its middle block, denoted gmb(i1, j1); finding the block column with the largest column number and locating its middle block, denoted gmb(i2, j2);
step 3: finding the block row with the smallest row number in the same lip block set and locating its middle block, denoted gmb(i3, j3); finding the block row with the largest row number and locating its middle block, denoted gmb(i4, j4);
step 4: calculating the current face inclination angle theta:
[The formula for θ is given only as an image in the original document and is not recoverable from the text.]
step 5: calculating the side proportion gamma of the current face:
[The formula for γ is given only as an image in the original document and is not recoverable from the text.]
step 6: according to the face inclination angle and the face side proportion, determining an eye-nose to-be-detected region of the current face, comprising:
Case 1: θ = 0
if γ indicates a sufficiently right-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_je ≤ j ≤ (j3 + j4)/2}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ (j3 + j4)/2};
otherwise, if γ indicates a sufficiently left-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and (j3 + j4)/2 ≤ j ≤ j2 + Thres_je}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and (j3 + j4)/2 ≤ j ≤ j2};
otherwise, the eye judgment region is defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_jL ≤ j ≤ j2 + Thres_jR}, and the nose judgment region as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ j2};
wherein Thres < Thres_ie < 3*Thres with Thres = max{i4 - i3, j2 - j1}, and 0.5*Thres < Thres_je < 2*Thres; [the constraints on Thres_jL and Thres_jR are given only as an image in the original and are not recoverable from the text]; Thres, Thres_ie, Thres_je, Thres_jL and Thres_jR are called the threshold, first threshold, second threshold, left column threshold and right column threshold, respectively; max{ } denotes taking the maximum; gmb(i, j) denotes the block at row i, column j of the current picture;
step 7: performing precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face to complete the positioning of the current face's facial features.
2. The lip positioning-based facial feature positioning method according to claim 1, wherein the determining the eye-nose to-be-detected region of the current face according to the face inclination angle and the face side ratio further comprises:
Case 2: θ ≠ 0
firstly, defining the rectangular eye and nose regions according to the corresponding γ as in Case 1; then finding the center point of each of the eye rectangle and the nose rectangle, and rotating each rectangle by θ about its own center; and finally, taking the blocks falling inside the rotated rectangles as the eye and nose regions.
3. A lip-positioning-based facial feature positioning system, comprising:
the lip positioning device is used for carrying out lip positioning on the current face;
a first middle-block search and positioning module, which, in the set of blocks judged to be the current face's lip blocks, finds the block column with the smallest column number and locates its middle block, denoted gmb(i1, j1), and finds the block column with the largest column number and locates its middle block, denoted gmb(i2, j2);
a second middle-block search and positioning module, which finds the block row with the smallest row number in the same lip block set and locates its middle block, denoted gmb(i3, j3), and finds the block row with the largest row number and locates its middle block, denoted gmb(i4, j4);
the face inclination angle calculation module calculates the current face inclination angle theta:
[The formula for θ is given only as an image in the original document and is not recoverable from the text.]
the face side proportion calculation module is used for calculating the current face side proportion gamma:
[The formula for γ is given only as an image in the original document and is not recoverable from the text.]
the eye-nose detection region determining module is used for determining the eye-nose detection region of the current face according to the face inclination angle and the face side proportion:
Case 1: θ = 0
if γ indicates a sufficiently right-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_je ≤ j ≤ (j3 + j4)/2}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ (j3 + j4)/2};
otherwise, if γ indicates a sufficiently left-turned face, only a single-side eye judgment region exists, defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and (j3 + j4)/2 ≤ j ≤ j2 + Thres_je}; the nose judgment region is defined as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and (j3 + j4)/2 ≤ j ≤ j2};
otherwise, the eye judgment region is defined as {gmb(i, j) | 2*i3 - i4 - Thres_ie ≤ i ≤ i3 - Thres_ie and j1 - Thres_jL ≤ j ≤ j2 + Thres_jR}, and the nose judgment region as {gmb(i, j) | 2*i3 - i4 ≤ i ≤ i3 and j1 ≤ j ≤ j2};
wherein Thres < Thres_ie < 3*Thres with Thres = max{i4 - i3, j2 - j1}, and 0.5*Thres < Thres_je < 2*Thres; [the constraints on Thres_jL and Thres_jR are given only as an image in the original and are not recoverable from the text]; Thres, Thres_ie, Thres_je, Thres_jL and Thres_jR are called the threshold, first threshold, second threshold, left column threshold and right column threshold, respectively; max{ } denotes taking the maximum; gmb(i, j) denotes the block at row i, column j of the current picture;
and the precise eye-nose judgment module, which is used for performing precise eye-nose judgment on the blocks in the eye-nose to-be-detected region of the current face, completing the positioning of the current face's facial features.
4. The lip-positioning-based facial feature positioning system of claim 3,
wherein the eye-nose to-be-detected region determining module is further configured to determine the eye-nose to-be-detected region of the current face according to the face inclination angle and the face side ratio as follows:
Case 2: θ ≠ 0
firstly, defining the rectangular eye and nose regions according to the corresponding γ as in Case 1; then finding the center point of each of the eye rectangle and the nose rectangle, and rotating each rectangle by θ about its own center; and finally, taking the blocks falling inside the rotated rectangles as the eye and nose regions.
CN201810954321.3A 2018-08-21 2018-08-21 Lip positioning-based facial feature positioning method and system Active CN109241878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810954321.3A CN109241878B (en) 2018-08-21 2018-08-21 Lip positioning-based facial feature positioning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810954321.3A CN109241878B (en) 2018-08-21 2018-08-21 Lip positioning-based facial feature positioning method and system

Publications (2)

Publication Number Publication Date
CN109241878A CN109241878A (en) 2019-01-18
CN109241878B true CN109241878B (en) 2021-10-22

Family

ID=65071009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810954321.3A Active CN109241878B (en) 2018-08-21 2018-08-21 Lip positioning-based facial feature positioning method and system

Country Status (1)

Country Link
CN (1) CN109241878B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807402B (en) * 2019-10-29 2023-08-08 深圳市梦网视讯有限公司 Facial feature positioning method, system and terminal equipment based on skin color detection
CN111461073B (en) * 2020-05-06 2023-12-08 深圳市梦网视讯有限公司 Reverse face detection method, system and equipment based on nose positioning
CN112132067B (en) * 2020-09-27 2024-04-09 深圳市梦网视讯有限公司 Face gradient analysis method, system and equipment based on compressed information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1959702A (en) * 2006-10-10 2007-05-09 南京搜拍信息技术有限公司 Method for positioning feature points of human face in human face recognition system
CN101950355A (en) * 2010-09-08 2011-01-19 中国人民解放军国防科学技术大学 Method for detecting fatigue state of driver based on digital video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1959702A (en) * 2006-10-10 2007-05-09 南京搜拍信息技术有限公司 Method for positioning feature points of human face in human face recognition system
CN101950355A (en) * 2010-09-08 2011-01-19 中国人民解放军国防科学技术大学 Method for detecting fatigue state of driver based on digital video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"人脸检测及特征点定位技术研究";张金焕;《万方知识服务平台》;20160330;第4.3节 *
"驾驶状态监测技术研究";肖怡晨;《万方知识服务平台》;20141231;第2.1-2.2节 *

Also Published As

Publication number Publication date
CN109241878A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
JP4830650B2 (en) Tracking device
CN109241878B (en) Lip positioning-based facial feature positioning method and system
WO2020155518A1 (en) Object detection method and device, computer device and storage medium
US8615113B2 (en) Multi-view face recognition method and system
WO2019033574A1 (en) Electronic device, dynamic video face recognition method and system, and storage medium
CN110807402B (en) Facial feature positioning method, system and terminal equipment based on skin color detection
EP3690700A1 (en) Image similarity calculation method and device, and storage medium
CN103577815A (en) Face alignment method and system
CN109190529B (en) Face detection method and system based on lip positioning
US20140247963A1 (en) Object detection via validation with visual search
US8948517B2 (en) Landmark localization via visual search
CN111191649A (en) Method and equipment for identifying bent multi-line text image
CN111612822B (en) Object tracking method, device, computer equipment and storage medium
CN109840524A (en) Kind identification method, device, equipment and the storage medium of text
CN111552837A (en) Animal video tag automatic generation method based on deep learning, terminal and medium
US9081800B2 (en) Object detection via visual search
CN109255307B (en) Face analysis method and system based on lip positioning
CN114283448A (en) Child sitting posture reminding method and system based on head posture estimation
EP2998928B1 (en) Apparatus and method for extracting high watermark image from continuously photographed images
CN111144466B (en) Image sample self-adaptive depth measurement learning method
JP6003367B2 (en) Image recognition apparatus, image recognition method, and image recognition program
CN107452003A (en) A kind of method and device of the image segmentation containing depth information
US20140247992A1 (en) Attribute recognition via visual search
CN111401112B (en) Face recognition method and device
KR20210087494A (en) Human body orientation detection method, apparatus, electronic device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30

Applicant after: Shenzhen mengwang video Co., Ltd

Address before: 518000 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30

Applicant before: SHENZHEN MONTNETS ENCYCLOPEDIA INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant