CN102663349A - Intelligent interaction method based on face identification and apparatus thereof - Google Patents

Intelligent interaction method based on face identification and apparatus thereof

Info

Publication number
CN102663349A
Authority
CN
China
Prior art keywords
visitor
interactive device
identity
image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100780023A
Other languages
Chinese (zh)
Inventor
黄建良
贺孝进
王思涵
宋熠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRYSTAL CG Co Ltd
Original Assignee
CRYSTAL CG Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRYSTAL CG Co Ltd filed Critical CRYSTAL CG Co Ltd
Priority to CN2012100780023A priority Critical patent/CN102663349A/en
Publication of CN102663349A publication Critical patent/CN102663349A/en
Pending legal-status Critical Current

Landscapes

  • Collating Specific Patterns (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an intelligent interaction method based on face recognition. The method comprises the following steps: using a camera to define a viewing area and, after a visitor enters the viewing area, verifying the visitor's identity through face recognition; after the visitor's identity is confirmed, mechanically driving the interactive device so that it faces the visitor; and the visitor issuing an instruction to the interactive device through human-machine interaction, whereupon the interactive device processes the instruction and returns feedback.

Description

An intelligent interaction method and device based on face recognition
Technical field
The present invention relates to the field of image processing, and in particular to face recognition.
Background technology
Existing interaction approaches are mostly limited to one-to-one exchanges, and the interactive devices are generally fixed in place. They are insufficiently humanized and intelligent, and lack targeting toward specific users. In particular, the interaction offered to VIP clients remains very limited.
Summary of the invention
In view of this, and to address the above problems, the present invention provides an intelligent interaction method and device based on face recognition that can realize portable, multi-mode interaction for VIP clients.
To achieve the above object, the present invention provides an intelligent interaction method based on face recognition, comprising the following steps: using a camera to define a viewing area and, after a visitor enters the viewing area, verifying the visitor's identity through face recognition; after the visitor's identity is confirmed, mechanically driving the interactive device so that it faces the visitor; and the visitor issuing an instruction to the interactive device through human-machine interaction, whereupon the interactive device processes the received instruction and returns feedback.
Further, the step of verifying the visitor's identity through face recognition after the visitor enters the viewing area further comprises: entering face images of authorized visitors into a database in advance, rasterizing each entered image, and obtaining the RGB values at the grid intersections of the entered face image together with their coordinates relative to the face contour; using the eyes of a person in the viewing area as the focusing reference, applying face detection to detect and isolate a face image from the picture; rasterizing the captured face image and obtaining the RGB values at the grid intersections of the on-site captured face image together with their coordinates relative to the face contour; and comparing the on-site captured face image with the entered face images in the database, confirming the visitor's identity once a match is found.
Further, the step in which the visitor issues an instruction to said interactive device through human-machine interaction and said interactive device processes the received instruction and returns feedback further comprises: the visitor issuing a voice instruction to said interactive device, and said interactive device analyzing the voice instruction after receiving it and responding accordingly.
Further, when the interactive device determines that the visitor's voice instruction is a path-finding instruction, the device shows the way to the visitor by means of virtual reality.
Further, when more than one identity-confirmed visitor is present in said viewing area and a first visitor issues a voice instruction, the interactive device is mechanically driven so that it faces that first visitor.
Further, after the first visitor issues a voice instruction and said interactive device receives it, the first visitor's physical position is determined through sound source localization, and the interactive device is mechanically driven so that it faces the first visitor.
Further, after the visitor's identity is confirmed and the interactive device has been mechanically driven to face the visitor, the method plays a corresponding video according to said visitor's identity.
In another aspect, the present invention also provides an intelligent interaction device based on face recognition, characterized in that it comprises the following parts: a camera, used to acquire images within the viewing area and, after a visitor enters the viewing area, to verify the visitor's identity through face recognition; a microphone array, used to receive the visitor's voice instructions and to determine the visitor's position; and a mechanical arm which, after the visitor's identity is confirmed or after the microphone array receives a voice instruction, drives the interactive device so that it faces the visitor.
Further, said camera also comprises an illumination intensity sensor, used to measure the illuminance in the current viewing area and to adjust the camera exposure according to the measured illuminance.
The embodiments provided by the invention confirm the identity of the interacting person through face recognition and present corresponding content according to that identity. Multi-person voice interaction is supported at the same time, and the device can turn to face the person it is interacting with, making the interaction more humanized and better targeted.
Description of drawings
Fig. 1 is a flow chart of the intelligent interaction method based on face recognition in a specific embodiment of the present invention.
Fig. 2 is a schematic diagram of the effect of rasterizing a face image in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram of matching key regions of a face image in a specific embodiment of the present invention.
Fig. 4 is a schematic diagram of the method of locating a sound source with a microphone array in a specific embodiment of the present invention.
Fig. 5 is a schematic diagram of the intelligent interaction device based on face recognition in a specific embodiment of the present invention.
Embodiment
The invention provides an intelligent interaction method based on face recognition which, as shown in Fig. 1, comprises the following steps.
Step 101: a camera is used to define the viewing area, and after a visitor enters the viewing area, the visitor's identity is verified through face recognition. In a specific embodiment, face images of authorized visitors are entered into a database in advance, each entered image is rasterized, and the RGB values at the grid intersections of the entered face image are obtained together with their coordinates relative to the face contour. A face image after rasterization is shown in Fig. 2; the data for each grid intersection comprise the RGB value of that point and its position relative to the face contour.
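For illustration only (not part of the patent's disclosure), a minimal Python sketch of this rasterization step is given below; the grid spacing, the bounding-box-based "relative coordinates", and the helper names are assumptions.

```python
import numpy as np

def rasterize_face(face_rgb, grid_step=16):
    """Sample RGB values at grid intersections of a cropped face image.

    Returns a list of (rgb, relative_xy) pairs; relative_xy is the intersection's
    position normalized to the face bounding box, a simple stand-in for the
    patent's "coordinates relative to the face contour".
    """
    h, w, _ = face_rgb.shape
    features = []
    for y in range(0, h, grid_step):
        for x in range(0, w, grid_step):
            rgb = face_rgb[y, x].astype(float)   # RGB value at this grid intersection
            rel_xy = (x / w, y / h)              # position relative to the face region
            features.append((rgb, rel_xy))
    return features
```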
With the eyes of a person in the viewing area as the focusing reference, face detection is applied to detect and isolate a face image from the picture.
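The patent does not name a particular face detector; the sketch below uses OpenCV Haar cascades as a stand-in, and approximates the "eyes as focusing reference" by keeping only face candidates in which two eyes are detected.

```python
import cv2

# Stand-in detectors; the patent does not specify an algorithm.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face(frame_bgr):
    """Detect and crop one face, keeping only candidates in which two eyes are visible."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) >= 2:   # eyes act as the reference
            return frame_bgr[y:y + h, x:x + w]
    return None
```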
The captured face image is rasterized, and the RGB values at the grid intersections of the on-site captured face image are obtained together with their coordinates relative to the face contour.
The on-site captured face image is compared with the entered face images in the database, and once a match is found the visitor's identity is confirmed.
In a specific embodiment, as shown in Fig. 3, matching mainly scans and compares the triangular regions and the contour of the face, such as the eyes, nose, mouth, chin, cheekbones, the area between the eyebrows, and the forehead. To make the matching more accurate, in a specific embodiment the invention applies edge sharpening combined with a Fourier algorithm and processes the image repeatedly, until the contour of the face and its key parameter points can be described clearly. At the same time, the illuminance in the current viewing area is detected through the camera, and the camera exposure is adjusted according to the illuminance. The camera also frames shots at random and, by comparing them with the captured face image, adjusts the quality of the on-site captured face image and corrects the RGB values of its corresponding points.
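A rough, assumption-laden sketch of the grid-based comparison follows; it presumes both images were rasterized on the same grid layout, and the acceptance threshold is invented for illustration rather than taken from the patent.

```python
import numpy as np

MATCH_THRESHOLD = 30.0   # assumed acceptance threshold, not given in the patent

def grid_distance(feat_a, feat_b):
    """Mean RGB distance over corresponding grid intersections (same grid layout assumed)."""
    diffs = [np.linalg.norm(rgb_a - rgb_b)
             for (rgb_a, _), (rgb_b, _) in zip(feat_a, feat_b)]
    return float(np.mean(diffs))

def identify(captured_feat, database):
    """Return the identity of the closest enrolled face, or None if nothing matches."""
    best_id, best_dist = None, float("inf")
    for identity, enrolled_feat in database.items():
        d = grid_distance(captured_feat, enrolled_feat)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist < MATCH_THRESHOLD else None
```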
Step 102: after the visitor's identity is confirmed, the interactive device is mechanically driven so that it faces the visitor. In a specific embodiment, once the mechanically driven interactive device is facing the visitor, it plays a corresponding video according to the visitor's identity, such as a welcome video.
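The identity-dependent playback could be as simple as the hypothetical lookup below; the video paths, the turn_towards and play calls, and the bearing argument are placeholders, not the patent's API.

```python
# Hypothetical mapping from a confirmed identity to a welcome video.
WELCOME_VIDEOS = {
    "vip_guest_a": "videos/welcome_a.mp4",
    "vip_guest_b": "videos/welcome_b.mp4",
}

def greet(identity, visitor_bearing_deg, device):
    """Turn the device towards the visitor, then play identity-specific content."""
    device.turn_towards(visitor_bearing_deg)    # mechanical drive (placeholder API)
    video = WELCOME_VIDEOS.get(identity, "videos/welcome_default.mp4")
    device.play(video)                          # video playback (placeholder API)
```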
Step 103: the visitor issues an instruction to the interactive device through human-machine interaction, and the interactive device processes the received instruction and returns feedback.
In a specific embodiment, the visitor issues a voice instruction to the interactive device, and the interactive device analyzes the voice instruction after receiving it and responds. For example, if the visitor asks about the weather, the system analyzes the request and announces the day's weather. In another specific embodiment, when the interactive device determines that the visitor's voice instruction is a path-finding instruction, it shows the way to the visitor by means of virtual reality, that is, it presents an overview and simulated navigation of the path on the interactive device.
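A minimal dispatch sketch for such voice instructions is shown below, assuming the speech has already been transcribed to text by some recognizer; the intent keywords, the helper functions and the device methods are illustrative assumptions.

```python
def fetch_todays_weather():
    """Hypothetical helper; a real system would query a weather service."""
    return "Today is sunny, 22 degrees."

def plan_route(text):
    """Hypothetical helper; a real system would look up the requested destination."""
    return ["lobby", "elevator", "meeting room"]

def handle_instruction(text, device):
    """Route a transcribed voice instruction to a response (illustrative intents only)."""
    lowered = text.lower()
    if "weather" in lowered:
        device.speak(fetch_todays_weather())         # announce the day's weather
    elif "way to" in lowered or "how do i get" in lowered:
        route = plan_route(text)                     # path-finding instruction
        device.show_virtual_reality(route)           # overview and simulated navigation
    else:
        device.speak("Sorry, I did not understand the request.")
```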
When more than one identity-confirmed visitor is present in said viewing area and a first visitor issues a voice instruction, the interactive device is mechanically driven so that it faces that first visitor.
In a specific embodiment, after the first visitor issues a voice instruction and the interactive device receives it, the first visitor's physical position is determined through sound source localization, and the interactive device is mechanically driven so that it faces the first visitor. The specific method is shown in Fig. 4: because the frequency of human speech is stable and does not vary strongly, a group of microphones arranged in an array can be used, and the sound source positions of the different voices can be calculated from the differences with which the different microphones receive the same voice. The vectors from targets A, B and C to microphone Mic C differ from one another, so the sound intensities of targets A, B and C obtained at Mic C also differ. Likewise, the sound intensities of targets A, B and C obtained at microphone Mic A and at microphone Mic B differ. From the sound intensities obtained at Mic A, Mic B and Mic C, combined with the different physical positions of the three microphones, the system calculates the speaking order and the bearing of targets A, B and C.
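The intensity-based localization can be sketched as follows under the simplifying assumption of free-field inverse-square attenuation; this trilaterates one speaker from the intensities measured at three microphones of known position, and is an illustration rather than the patent's exact computation.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_speaker(mic_positions, intensities):
    """Estimate a speaker's 2-D position from the intensities at three known microphones.

    Assumes free-field inverse-square attenuation, I = P / d**2, with unknown source power P.
    """
    mics = np.asarray(mic_positions, dtype=float)        # shape (3, 2)
    meas = np.asarray(intensities, dtype=float)

    def residuals(params):
        x, y, log_p = params
        d2 = np.sum((mics - [x, y]) ** 2, axis=1)        # squared distances to each mic
        return np.exp(log_p) / d2 - meas

    centroid = mics.mean(axis=0)
    guess = [centroid[0], centroid[1], np.log(meas.max())]
    sol = least_squares(residuals, guess)
    return sol.x[0], sol.x[1]                            # estimated speaker coordinates

# Example (assumed values): three mics and the intensities one speaker produces at them.
# x, y = locate_speaker([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)], [0.8, 0.5, 0.6])
```

The estimated coordinates, or the bearing derived from them, would then drive the mechanical arm toward the first speaker.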
The present invention also provides an intelligent interaction device based on face recognition which, as shown in Fig. 5, comprises the following parts.
Camera 1 is used to acquire images within the viewing area and, after a visitor enters the viewing area, to verify the visitor's identity through face recognition. In a specific embodiment, face images of authorized visitors are entered into a database in advance, each entered image is rasterized, and the RGB values at the grid intersections of the entered face image are obtained together with their coordinates relative to the face contour. A face image after rasterization is shown in Fig. 2; the data for each grid intersection comprise the RGB value of that point and its position relative to the face contour.
With the eyes of a person in the viewing area as the focusing reference, face detection is applied to detect and isolate a face image from the picture.
The captured face image is rasterized, and the RGB values at the grid intersections of the on-site captured face image are obtained together with their coordinates relative to the face contour.
The on-site captured face image is compared with the entered face images in the database, and once a match is found the visitor's identity is confirmed.
In a specific embodiment, as shown in Fig. 3, matching mainly scans and compares the triangular regions and the contour of the face, such as the eyes, nose, mouth, chin, cheekbones, the area between the eyebrows, and the forehead. To make the matching more accurate, in a specific embodiment the invention applies edge sharpening combined with a Fourier algorithm and processes the image repeatedly, until the contour of the face and its key parameter points can be described clearly. At the same time, camera 1 also comprises an illumination intensity sensor; the illuminance in the current viewing area is measured through this sensor, and the camera exposure is adjusted according to the measured illuminance. The camera also frames shots at random and, by comparing them with the captured face image, adjusts the quality of the on-site captured face image and corrects the RGB values of its corresponding points.
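Exposure adjustment from the illuminance sensor reading might follow a simple proportional rule like the sketch below; the target illuminance, exposure limits and units are assumed values, not taken from the patent.

```python
TARGET_LUX = 300.0                          # assumed comfortable illuminance for face capture
MIN_EXPOSURE, MAX_EXPOSURE = 1.0, 100.0     # assumed exposure range, in arbitrary units

def adjust_exposure(current_lux, current_exposure):
    """Lengthen exposure in dim scenes and shorten it in bright ones (proportional rule)."""
    if current_lux <= 0:
        return MAX_EXPOSURE
    proposed = current_exposure * (TARGET_LUX / current_lux)
    return max(MIN_EXPOSURE, min(MAX_EXPOSURE, proposed))
```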
Microphone array 2 is used to receive the visitor's voice instructions and to determine the visitor's position. In a specific embodiment, microphone array 2 calculates the sound source positions of different voices from the differences with which the different microphones receive the same voice. The vectors from targets A, B and C to microphone Mic C differ from one another, so the sound intensities of targets A, B and C obtained at Mic C also differ. Likewise, the sound intensities of targets A, B and C obtained at microphone Mic A and at microphone Mic B differ. From the sound intensities obtained at Mic A, Mic B and Mic C, combined with the different physical positions of the three microphones, the system calculates the speaking order and the bearing of targets A, B and C.
Mechanical arm 4: after the visitor's identity is confirmed, or after the microphone array receives a voice instruction, the mechanical arm drives interactive device 3 so that it faces the visitor. In a specific embodiment, once the mechanically driven interactive device is facing the visitor, it plays a corresponding video according to the visitor's identity, such as a welcome video.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement or the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. An intelligent interaction method based on face recognition, comprising the following steps:
using a camera to define a viewing area and, after a visitor enters the viewing area, verifying the visitor's identity through face recognition;
after the visitor's identity is confirmed, mechanically driving an interactive device so that it faces the visitor;
the visitor issuing an instruction to said interactive device through human-machine interaction, and said interactive device processing the received instruction and returning feedback.
2. The method according to claim 1, characterized in that the step of verifying the visitor's identity through face recognition after the visitor enters the viewing area further comprises:
entering face images of authorized visitors into a database in advance, rasterizing each entered image, and obtaining the RGB values at the grid intersections of the entered face image together with their coordinates relative to the face contour;
using the eyes of a person in the viewing area as the focusing reference, applying face detection to detect and isolate a face image from the picture;
rasterizing the captured face image and obtaining the RGB values at the grid intersections of the on-site captured face image together with their coordinates relative to the face contour;
comparing the on-site captured face image with the entered face images in the database, and confirming the visitor's identity once a match is found.
3. The method according to claim 1, characterized in that the step in which the visitor issues an instruction to said interactive device through human-machine interaction and said interactive device processes the received instruction and returns feedback further comprises: the visitor issuing a voice instruction to said interactive device, and said interactive device analyzing the voice instruction after receiving it and responding accordingly.
4. The method according to claim 3, characterized in that, when the interactive device determines that the visitor's voice instruction is a path-finding instruction, the device shows the way to the visitor by means of virtual reality.
5. The method according to claim 3, characterized in that, when more than one identity-confirmed visitor is present in said viewing area and a first visitor issues a voice instruction, the interactive device is mechanically driven so that it faces said first visitor.
6. The method according to claim 5, characterized in that, after the first visitor issues a voice instruction and said interactive device receives it, the first visitor's physical position is determined through sound source localization, and said interactive device is mechanically driven so that it faces the first visitor.
7. The method according to claim 1, characterized in that, after the visitor's identity is confirmed and the interactive device has been mechanically driven to face the visitor, the method plays a corresponding video according to said visitor's identity.
8. An intelligent interaction device based on face recognition, characterized in that it comprises the following parts: a camera, used to acquire images within the viewing area and, after a visitor enters the viewing area, to verify the visitor's identity through face recognition; a microphone array, used to receive the visitor's voice instructions and to determine said visitor's position; and a mechanical arm which, after the visitor's identity is confirmed or after the microphone array receives a voice instruction, drives the interactive device so that it faces the visitor.
9. The device according to claim 8, characterized in that said camera also comprises an illumination intensity sensor, used to measure the illuminance in the current viewing area and to adjust the camera exposure according to the measured illuminance.
CN2012100780023A 2012-03-22 2012-03-22 Intelligent interaction method based on face identification and apparatus thereof Pending CN102663349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100780023A CN102663349A (en) 2012-03-22 2012-03-22 Intelligent interaction method based on face identification and apparatus thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012100780023A CN102663349A (en) 2012-03-22 2012-03-22 Intelligent interaction method based on face identification and apparatus thereof

Publications (1)

Publication Number Publication Date
CN102663349A true CN102663349A (en) 2012-09-12

Family

ID=46772833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100780023A Pending CN102663349A (en) 2012-03-22 2012-03-22 Intelligent interaction method based on face identification and apparatus thereof

Country Status (1)

Country Link
CN (1) CN102663349A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019388A (en) * 2013-01-09 2013-04-03 苏州云都网络技术有限公司 Screen unlocking method, screen locking method and terminal
WO2015131712A1 (en) * 2014-09-19 2015-09-11 中兴通讯股份有限公司 Face recognition method, device and computer readable storage medium
US10311291B2 (en) 2014-09-19 2019-06-04 Zte Corporation Face recognition method, device and computer readable storage medium
CN105513304A (en) * 2016-01-13 2016-04-20 华侨大学 Audio and video identifying alarm using APP (application) of mobile phone to operate rolling shutter door
CN105513304B (en) * 2016-01-13 2020-07-17 华侨大学 Audio and video recognition alarm for a rolling shutter door controllable by a mobile phone APP
CN106446873A (en) * 2016-11-03 2017-02-22 北京旷视科技有限公司 Face detection method and device
CN110720762A (en) * 2019-09-26 2020-01-24 广州视觉风科技有限公司 Intelligent interaction method and device based on facial recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120912