CN108161933A - Interactive mode selection method, system and reception robot - Google Patents


Info

Publication number
CN108161933A
CN108161933A
Authority
CN
China
Prior art keywords
user
age
range
information
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711282942.3A
Other languages
Chinese (zh)
Inventor
刘雪楠
沈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kngli Youlan Robot Technology Co Ltd
Original Assignee
Beijing Kngli Youlan Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kngli Youlan Robot Technology Co Ltd filed Critical Beijing Kngli Youlan Robot Technology Co Ltd
Priority to CN201711282942.3A priority Critical patent/CN108161933A/en
Publication of CN108161933A publication Critical patent/CN108161933A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
        • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J9/00: Programme-controlled manipulators
                    • B25J9/16: Programme controls
                        • B25J9/1602: Programme controls characterised by the control system, structure, architecture
                            • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
                • B25J11/00: Manipulators not otherwise provided for
                    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
                    • B25J11/008: Manipulators for service tasks
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00: Computing arrangements based on biological models
                    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
                        • G06N3/008: Artificial life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Fuzzy Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a reception robot interaction mode selection method, comprising: sensing that a user approaches; judging whether the user is a registered user, and if so selecting the language and interaction mode according to the user's registration information, otherwise creating and recording registration information for the user; judging whether user speech is received, and if so selecting the language according to the speech, otherwise identifying the user's category; selecting the language according to the user category; judging the user's age; and selecting the interaction mode according to the user's age. The interaction mode selection method and system according to the invention, and a reception robot using the method and/or system, correct the user's age estimate with the voice spectrum and/or physiological characteristics on top of the user's facial features and select different interaction modes according to the user's age, thereby improving the success rate of age prediction and improving the user experience.

Description

Interactive mode selection method, system and reception robot
Technical field
The present invention relates to the field of intelligent robotics, and more particularly to a method that can automatically judge a user's age so as to select different interaction modes, an interaction mode selection system, and a reception robot using the method and/or system.
Background technology
With the development of robotics, robots are being applied in every field. Existing robots fall into two classes: industrial robots and specialized robots. Industrial robots are multi-joint manipulators or multi-degree-of-freedom machines aimed at industrial applications, while specialized robots are all the advanced machines outside industry that serve non-manufacturing sectors and humanity, including underwater robots, entertainment robots, military robots, agricultural robots, and the like. Service robots are widely used in greeting and reception trades such as banks, shopping malls, restaurants, real-estate sales offices, hotel reception, guided tours and advertising, and can be set to operating modes such as welcoming, inquiry, food delivery, checkout and entertainment. Such robots substitute intelligently for human labor and can interact with people. Compared with human staff, a service robot can better please and attract customers or clients and bring them a completely new service experience, while saving the merchant labor costs. It can also guarantee consistent, high-quality service over long working hours, avoiding the decline in customer satisfaction caused by staff fatigue, and thus greatly improves work efficiency.
At present, most places where service robots are deployed, such as hotels handling foreign guests, airport duty-free shops, foreign institutions and shopping malls, use service or reception robots with a pure-Chinese operating system. A minority of reception robots run two parallel systems, Chinese and English, and select between them. Owing to technical barriers, however, switching between the dual systems requires operations such as restarting the robot; the steps are complex and extremely time-consuming, which causes great inconvenience in actual use. And because support for other languages is lacking, international visitors who speak other languages have no option at all.
Further, existing robots interact with users through a built-in system voice (usually a young female or male voice), for example to greet users, answer their inquiries, remind them of calendar appointments, or serve as a venue guide. However, the system voice is single-language, for example Chinese only or English only, and single-mode, for example a young voice only, so it cannot interact in a targeted way with users of different nationalities, genders and ages.
One possible improvement is to pre-judge the user's age from the characteristic spectrum of the collected voice. However, this single criterion is easily disturbed: when the user does not speak, when other loud sound sources are nearby, when the user's voice has changed because of illness or emotional agitation, or when the user deliberately disguises the voice, judging the user's age from the voice spectrum becomes unreliable.
Summary of the invention
Therefore, an object of the present invention is to automatically judge the user's age and select different interaction modes according to that age, thereby effectively improving the user experience.
The present invention provides a reception robot interaction mode selection method, comprising: sensing that a user approaches; judging whether the user is a registered user, and if so selecting the language and interaction mode according to the user's registration information, otherwise creating and recording registration information for the user; judging whether user speech is received, and if so selecting the language according to the speech, otherwise identifying the user's category; selecting the language according to the user category; judging the user's age; and selecting the interaction mode according to the user's age.
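As a rough sketch, the decision flow of the claim above might look like the following Python. All names, the registry structure and the helper callbacks are hypothetical illustrations, not part of the patent:

```python
def select_interaction(user_id, registry, speech_lang, category_lang,
                       age_to_mode, estimate_age):
    """Sketch of the claimed selection flow (helper names are hypothetical)."""
    if user_id in registry:                    # registered user: reuse stored choices
        info = registry[user_id]
        return info["language"], info["mode"]
    # unregistered: prefer the language detected from speech, fall back to the
    # language predicted from the user's category
    language = speech_lang if speech_lang is not None else category_lang
    mode = age_to_mode(estimate_age())         # interaction mode from estimated age
    registry[user_id] = {"language": language, "mode": mode}  # create and record
    return language, mode
```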
The step of identifying the user's category further comprises: roughly classifying the user according to biological information, and then finely classifying the user according to secondary attributes. Optionally, the biological information includes skin-color information, facial contour information, height, gait, movement speed, and the static or walking posture of the limbs. Optionally, the secondary attributes include the clothing style, identity markings on luggage or clothing carried by the user, the language of the operating interface of the user's portable device, and the device name of the user's portable device. Optionally, the rough classes are users of East Asian, Caucasian and African appearance. Optionally, the registration information includes the user's face recognition information, identity information, language information and age information.
Before judging the user's age, the method further comprises receiving user feedback and modifying the user's registration information accordingly.
The step of judging the user's age further comprises: predicting a first range of the user's age from the user's facial features; generating a second range from the user's voice spectrum and/or a third range from the user's physiological characteristics; and, optionally, generating a fourth range from the user's secondary attributes.
If the second range overlaps or partly overlaps the first range, the overlapping part is taken as the final range of the user's age. If the second range does not overlap the first range but the third range overlaps or partly overlaps it, the part where the third range overlaps the first range is taken as the final range. If no two of the first, second and third ranges overlap, the part where the fourth range overlaps any one of the first, second and third ranges is taken as the final range of the user's age.
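The merge rule above can be sketched directly in Python. The claim does not state what happens when no ranges overlap at all; falling back to the facial-feature range is an assumption made here for illustration:

```python
def overlap(a, b):
    """Intersection of two closed age ranges (lo, hi), or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def final_age_range(r1, r2, r3=None, r4=None):
    """Merge rule from the claim: prefer r2 against r1, then r3 against r1,
    then r4 against any of r1..r3."""
    if r2 is not None:
        o = overlap(r1, r2)
        if o:
            return o
    if r3 is not None:
        o = overlap(r1, r3)
        if o:
            return o
    if r4 is not None:
        for r in (r1, r2, r3):
            if r is not None:
                o = overlap(r4, r)
                if o:
                    return o
    return r1  # fallback not specified in the claim; assume the facial range
```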
The interaction modes include an honorific mode, a child mode, an elderly mode and a default mode.
The honorific mode uses the same volume and speech rate as the default mode. Optionally, both the volume and the speech rate of the child mode are lower than in the default mode. Optionally, the volume of the elderly mode is higher than in the default mode and its speech rate is lower.
In the default mode, the volume and speech rate are set based on the volume of ambient noise in the robot's environment, the distance between the user and the reception robot, and the user's age.
The present invention also provides a reception robot interaction mode selection system for performing any of the methods above, comprising: a user approach sensing module for sensing that a user approaches; a user information registration module for recording user registration information; a user speech recognition module for receiving user speech; a user age recognition module for judging the user's age; and a processor configured to: judge whether the user is a registered user, and if so select the language and interaction mode according to the registration information, otherwise create and record registration information in the user information registration module; judge whether user speech is received, and if so select the language according to the speech, otherwise identify the user's category and select the language according to it; and select the interaction mode according to the user's age.
The present invention further provides a reception robot that selects the language and interaction mode used with a received user according to any of the methods above.
The interaction mode selection method and system according to the present invention, and a reception robot using the method and/or system, correct the user's age estimate with the voice spectrum and/or physiological characteristics on top of the user's facial features and select different interaction modes according to the user's age, thereby improving the success rate of age prediction and improving the user experience.
The above objects of the present invention, and other objects not listed here, are satisfied within the scope of the independent claims of this application. Embodiments of the present invention are defined in the independent claims, with specific features defined in the claims dependent on them.
Description of the drawings
The technical solutions of the present invention are described in detail below with reference to the drawings, in which:
Fig. 1 is a schematic diagram of a reception robot system according to an embodiment of the present invention;
Fig. 2 is a flow chart of a reception robot interaction mode selection method according to an embodiment of the present invention;
Fig. 3 is a flow chart of the specific steps for judging the user's age within the flow of Fig. 2, according to an embodiment of the present invention; and
Fig. 4 is a block diagram of a reception robot interaction mode selection system according to an embodiment of the present invention.
Specific embodiment
The features and technical effects of the technical solutions of the present invention are described in detail below with reference to the drawings and schematic embodiments, disclosing an interaction mode selection method and system that can effectively improve the user experience, and a reception robot using the method and/or system. It should be pointed out that similar reference numerals indicate similar structures, and that the terms "first", "second", "upper", "lower" and the like used herein may modify various system components or method steps. Unless specifically stated, these modifiers do not imply a spatial, sequential or hierarchical relationship of the modified components or steps.
As shown in Fig. 1, the reception robot according to an embodiment of the present invention comprises:
  • a high-sensitivity microphone 1 on the crown of the head, for collecting or receiving ambient sound or personnel voice information;
  • a high-definition camera 2 on the forehead, for collecting the topology information of the received person's face (for example the bone contours);
  • fine sensors 3A and 3B at the eyes, for capturing facial details of the person (for example the iris, the retina, dynamic changes at the eyebrows or corners of the eyes, the degree of a smile or the reflection of the teeth at the lips, slight twitches of the ears or nose) so as to reflect the person's biological or emotional information;
  • touch sensors on various parts of the robot, including a chin touch sensor 4, an abdomen touch sensor 7, a crown touch sensor 10, left/right ear touch sensors 12A/12B, a back-of-head touch sensor 13, left/right shoulder touch sensors 15A/15B and a hip touch sensor 17; these identify tactile interaction with the user, improving the accuracy of identity and emotion recognition, and feed back the force information of the user's limbs to modify the movement and rotation parameters of the robot body;
  • a 3D depth camera 5 at the neck, for collecting depth-of-field information of the surrounding scene;
  • a touch display screen 6 on the chest, for displaying a heartbeat or, spanning the entire chest (not shown), planned skin-color changes so as to improve the robot's fidelity, and for showing the user reception/query information or other video information;
  • a 2D laser radar 8 at the lower abdomen, for measuring the distance of the user or other moving objects in the scene from the robot, and assisting in judging an object's height, movement speed and static/walking posture;
  • omnidirectional wheels 9 at the feet, for driving the whole robot along a pre-stored or real-time selected path;
  • loudspeakers 11A/11B at the ears, for delivering voice and audio information to the user;
  • an emergency stop switch 14 at the back, for stopping the robot's movements or actions in an emergency so as to improve safety;
  • a power-on button 16 at the rear waist, for manually starting the robot's operating system to provide reception and consulting services;
  • hand biosensors 18 on the hands, for collecting the user's fingerprints, measuring the user's skin moisture (resistivity) or roughness, measuring the force and pulse of a handshake or the oxygen content of the capillaries, and so on; and
  • a charging interface 19 on the side of a leg, with a power switch on the back of the leg.
Fig. 2 is a flow chart of a reception robot interaction mode selection method according to an embodiment of the present invention, and Fig. 4 is a block diagram of the reception robot interaction mode selection system used by this embodiment.
First, the user approach sensing module senses that a user approaches. For example, sensor information is received through the high-sensitivity microphone 1, the high-definition camera 2, the 3D depth camera 5 or the 2D laser radar 8, or through other proximity sensors (not shown; for example a bio-electric field sensor, a magnetic field sensor, a chemical gas sensor or a mechanical vibration sensor). If the sensor information exceeds a preset threshold (obtained from batch test data and pre-stored in a storage device of the selection system, not shown), it is judged that a person or user moving toward the reception robot is present within a certain distance or effective range of the robot (for example 5 meters). If there is an approaching user, the reception robot system, in particular the language selection system, is woken up. If it is judged that there is none, the reception robot remains in a standby or dormant state, saving electric energy and improving the robot's endurance.
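A minimal sketch of the wake-up decision, assuming one pre-stored threshold per sensor (the sensor names and values below are illustrative; per the description, real thresholds would be calibrated from batch test data):

```python
def should_wake(readings, thresholds):
    """Wake the reception system if any sensor reading exceeds its threshold.

    `readings` and `thresholds` map hypothetical sensor names to values.
    """
    return any(readings.get(name, 0.0) > limit
               for name, limit in thresholds.items())
```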
Next, the user information registration module judges whether the user is a registered user. If so, the reception language and interaction mode are selected according to the user's registration information; otherwise registration information is created, recorded and stored in the memory of the reception robot (not shown).
The registration information includes the user's face recognition information, mainly the facial skeleton contour (topology) structure, iris/retina feature information, and facial surface feature information (for example eyebrow distribution, eyelash length/curvature, and the shape and position of spots or moles), which the reception robot creates and records the first time it recognizes the user. The registration information also includes the user's identity information, mainly name, gender, height, place of origin or nationality, and occupation; this is provided actively by the user during subsequent feedback or recorded automatically from consumption records at the venue, shopping mall, hotel and the like. The registration information further includes the user's language information, covering the user's mother tongue, first foreign language, second foreign language, dialect and so on; it is entered actively by the user or identified automatically by the system the first time the user is recognized, and can be modified through subsequent feedback or corrected automatically from consumption records. The registration information finally includes the user's age information, for example an exact age entered manually by the user or captured from consumption records, or a range of the user's age predicted by the reception robot from the user's facial features, voice spectrum, physiological characteristics and other secondary attributes.
The user interaction modes include:
1) an honorific mode (for languages with an honorific register, such as Japanese, Korean and Tibetan). The reception voice used by the robot is usually a young female or male voice, with the simulated age bracket of the voice set to 20-25 years. Therefore, if the user's language is judged to have an honorific register and the user's exact age or predicted age range exceeds 20 years, the honorific register is used, with the same volume and speech rate as the default mode; otherwise the default interaction mode is used;
2) a child mode: if the user's age is judged to be below the legal definition of a child in the country or region where the robot is deployed, for example 8, 10, 12 or 14 years, the interactive language uses children's phrases, for example reduplicated words or cartoon-character onomatopoeia, preferably spoken with a young female voice, while the volume is reduced and the speech rate slowed to protect the child's hearing;
3) an elderly mode: if the user's age is judged to be above the definition of old age in the country or region where the robot is deployed, for example 60 or 65 years, the volume is raised and the speech rate slowed so that an elderly person with hearing loss can understand and receive the information in time;
4) a default mode: the volume and speech rate are set based on the ambient noise of the robot's environment (for example 10-20 dB above the ambient noise), the distance between the user and the robot (the greater the distance, the higher the volume), and the user's age (20-60 years by default; the older the user, the slower the speech, for example 70-120% of a news broadcast speech rate).
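One way the default-mode parameters could be computed is sketched below. Only the qualitative rules and the 10-20 dB / 70-120% envelopes come from the description; the exact formulas and constants are invented for illustration:

```python
def default_mode_params(noise_db, distance_m, age):
    """Illustrative volume (dB) and speech rate for the default mode."""
    # 10-20 dB above ambient noise, louder the farther the user stands
    volume_db = noise_db + 10 + min(10, 2 * distance_m)
    # age clamped to the default 20-60 bracket; older users get slower speech,
    # mapped linearly from 120% down to 70% of a news-broadcast rate
    a = min(max(age, 20), 60)
    speech_rate = 1.20 - 0.5 * (a - 20) / 40
    return volume_db, round(speech_rate, 2)
```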
Specifically, in a preferred embodiment of the invention, the user's facial information is collected by the high-definition camera 2 and the fine sensors 3A/3B and supplied to the processor of the system (not shown) for face recognition, where it is compared with the face recognition information in the registration information pre-stored in the user information registration module in the system memory.
In other preferred embodiments of the invention, the user's identity is recognized through the bio-electric field sensor, the chemical gas sensor or the hand biosensors 18 described above (a characteristic bio-electric field, a unique scent, or the other hand-related biological features mentioned), or the identity of a user carrying an identity tag is read by an RFID tag reader (not shown).
If the comparison determines that the user is a registered user, the registration information is extracted from the system memory, and according to the language information recorded in it the reception robot determines the languages and interaction mode to use for greeting, consulting and receiving this user.
If the comparison determines that the user is not a registered user, a registration record is created, including face recognition information, identity information, language information and age information.
Next, it is judged whether user speech information is received. If so (for example the user is talking with people nearby while walking toward the robot, or is on a mobile phone call), the reception language is selected according to the user's voice; if not (for example the user is browsing a web page on a phone or tablet, watching a video, or listening to music through earphones without actively making a sound), the user's category is identified instead. Specifically, the high-sensitivity microphone 1 on the robot's crown collects or receives the voice information actively uttered by a user within a certain or preset range (for example 2 meters, which is smaller than the range used to judge user approach); the user speech recognition module analyzes this information to recognize the language the user commands, such as Chinese, Japanese, Korean, English, French, Spanish or Arabic, and the robot's language selection system then uses the recognized language to interact with the user, actively greeting the user, answering consulting items, and so on.
Then, in the case where the user does not actively make a sound, the language selection system uses the non-audio information about the user acquired by the robot, combined with big-data statistics, to predict the user's category, and selects according to that category the language the user is likely to command.
For example, for group customers such as a tour party, the robot can combine the group's scheduled check-in or visiting time with the group size recognized by the 3D depth camera 5 and the 2D laser radar 8 (and an optional thermal imager, not shown) to judge which entry in the scheduled group list the party corresponds to, retrieve its home country or region from the booking information, and select the language of that country for greeting.
For example, for a tourist near an airport shopping center, the luggage tag on the tourist's suitcase can be captured by the high-definition camera 2 and the 3D depth camera 5; recognizing the point of origin on the tag indicates the tourist's home country or region and thus pre-judges the language the tourist is likely to command.
As another example, for a user watching a video, the high-definition camera 2 can identify the language of the video subtitles or comment barrage (Chinese, English, etc.) and select the language corresponding to the subtitles or barrage.
In a preferred embodiment of the invention, the step of classifying the user further comprises first roughly classifying the user according to biological information, and then finely classifying the user according to secondary attributes.
In an embodiment of the invention, the biological information includes the skin-color information obtained by face recognition with the high-definition camera 2, facial contour information (wide/narrow face, high/low nose bridge, square/pointed jaw, depth of the eye sockets, prominence of the cheekbones, etc.), and the height, gait, movement speed and static/walking posture measured by the 2D laser radar 8. Based on this biological information, users approaching the reception robot are roughly divided into three classes: East Asian appearance (pre-judged as likely to use Chinese, Japanese or Korean), Caucasian appearance (pre-judged as likely to use English, French, German, Spanish, Portuguese or Arabic), and African appearance (pre-judged as likely to use English, French, German, Spanish or Portuguese).
In a preferred embodiment of the invention, face and gait recognition are performed based on geometric features.
First, the image data collected by the high-definition camera 2 are pre-processed. A variable-neighborhood averaging method smooths the original image to remove most of the noise. Specifically, for an image g(x, y) of w by h pixel units (0 <= x < w, 0 <= y < h), an n*n window centred on the point (i, j) is taken as its neighborhood, and the average is output as the smoothed gray value g*(x, y) of the centre pixel. In the prior art, n is simply chosen as a fixed odd number, for example 3 or 5, which is enough to remove noise for most applications. For scenes with limited illumination or low atmospheric visibility, however, the image blur worsens. Therefore, in a preferred embodiment of the invention, a smaller n1 (for example 3) is first used to calculate the initial gray-value set {g*(x, y)}; then the gray value g*(x, y) at point (i, j) is compared with the initial gray values g*(x, y)' of the neighboring points at distance m (for example 1 to 4) from (i, j). If the difference is within a threshold T, g*(x, y) is taken as the final gray value of (i, j); if the difference is greater than or equal to T, a larger n2 (for example 5 or 7, an odd number not exceeding n1 + m) is used to recalculate the few points with large gray-level fluctuation. A better smoothing effect is thus obtained with a smaller amount of calculation, overcoming problems of illumination and air cleanliness in hazy or dark environments. After this preliminary smoothing, the few isolated noise points still present in the image are further removed by two-dimensional median filtering. As before, the dimension n of the two-dimensional window is variable: a smaller n1 is used for the first calculation, the gray values of points within distance m are compared, the first calculated value is kept if the difference is within a threshold T2, and the window is reset to a larger n2 to recalculate the gray value if the difference is greater than or equal to T2. Scale transformation is then performed based on linear interpolation, and the gray-level distribution is normalized taking the mean and variance into account. Further, edge detection and binarization, preferably based on the Canny method, finally yield the set of images to be detected.
Then, face recognition or gait recognition is performed based on detected geometric feature points (changes of the skeletal support frame). In a preferred embodiment of the invention, the facial features used for geometric face recognition are the centres of the two eyes, the outer edge positions of the nose, the positions of the mouth corners, and constructed supplementary feature points; in particular, the invention further uses the eye width, the spacing between the eye line and the nose baseline, and the length and width of the auricle to refine the subtypes within the same ethnic group (for example the narrower eye width typical in Korea). When detecting the positions of the facial organs, the coordinates of the two eye centres are determined first; then, with the two eye positions as reference and according to the proportions of the face, windows such as the eyebrow window, eye window, nose window, mouth window and auricle window are each computed independently to detect the feature points, that is, to calculate the position and geometric parameters of each organ. Similarly, gait recognition uses the positions and widths of the head, shoulders, hips, ankles and elbows, and their changes, to characterize geometric feature points. To overcome the influence of slight changes of the face or walking posture on the recognition result, a feature vector with invariance to scale, rotation and translation is constructed from the feature points as the basis of face recognition. Specifically, with (fix, fiy) the coordinates of feature point fi and dij = √((fix - fjx)² + (fiy - fjy)²) the distance between two points, the feature vector set is {d12/d7a, d34/d7a, d23/d7a, d14/d7a, d56/d7a, d1a/d7a, d2a/d7a, d3a/d7a, d4a/d7a, d5a/d7a, d6a/d7a, dba/d7a}. A neural network classifier, for example a Bagging neural network, is then trained on the feature vectors of all training samples.
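The distance-ratio feature vector described above can be sketched as follows. Because every pairwise distance d_ij is divided by a fixed reference distance (d_7a in the text), the resulting vector is unchanged by translation, rotation and uniform scaling of the points, which is the invariance the description claims. Point labels and pairs below are illustrative:

```python
from math import hypot

def feature_vector(points, pairs, norm_pair):
    """Distance-ratio features: each d_ij divided by the reference distance."""
    def d(a, b):
        return hypot(points[a][0] - points[b][0], points[a][1] - points[b][1])
    ref = d(*norm_pair)  # e.g. d_7a in the patent's notation
    return [d(i, j) / ref for i, j in pairs]
```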
Further, an adaptively weighted multi-classifier is used: subspace features are extracted from the geometric feature points identified above, two single-classifier classification structures are obtained, and the final classification result is output using weighting-coefficient estimation. Furthermore, based on the ethnicity classification result, combined with the reception robot's historical data (such as registered-user data and the data of users who have been received and have provided feedback), the language the user is likely to select is predicted.
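The weighting-coefficient fusion of the two single classifiers can be sketched as follows (illustrative only; the patent does not specify how the coefficients are estimated, so they are passed in as parameters here):

```python
def weighted_fusion(p1, p2, w1, w2):
    # p1, p2: per-class probability outputs of the two single classifiers;
    # w1, w2: their estimated weighting coefficients (normalized below)
    s = w1 + w2
    fused = [(w1 * a + w2 * b) / s for a, b in zip(p1, p2)]
    # final classification result: the class with the highest fused score
    return max(range(len(fused)), key=fused.__getitem__)
```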
The secondary attributes include the user's clothing style (such as a Chinese Tang suit or cheongsam, a Japanese kimono, an Arabic robe, a Scottish kilt, or ethnic-minority costume), identity marks on luggage or clothing carried by the user (such as a tour-group flag, a group uniform, or a group name on luggage), the language of the operating interface of the user's portable device (mobile phone, laptop, tablet, etc.), or the language corresponding to the Bluetooth ID or device name of the user's portable device. The secondary attributes are likewise obtained by combining the high-definition camera 2, the fine sensors 3A/3B, and the 3D depth camera 5.
By identifying the user's secondary attributes and comparing them against big data obtained from network statistics, the reception voice selection system predicts the probability that a user with the identified secondary attributes, within each of the above three coarse user categories, is fluent in each language, and selects the language the user is most likely to use (the highest probability) as the language in which the reception robot will interact with the user. Specifically: if a user is judged to be white and wears an Arabic robe, Arabic is selected; if a user is judged to be Asian and wears a kimono, Japanese is selected; if a user is white and wears a Scottish kilt, English is selected; if a user is black and the device name of his mobile phone contains French characters, French is selected; if a user is white and wears a Boca Juniors youth jersey, Spanish is selected; if a user is Asian and the tour-group name contains "Sichuan", the Sichuan dialect is selected; and so on.
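The decision logic in this paragraph is a lookup over (coarse class, secondary attribute) pairs with a probability fallback. A hypothetical sketch follows; the rule table below merely transcribes the examples in the text, and all key names are invented:

```python
RULES = {
    ("white", "arabic_robe"): "Arabic",
    ("asian", "kimono"): "Japanese",
    ("white", "scottish_kilt"): "English",
    ("black", "french_device_name"): "French",
    ("white", "boca_jersey"): "Spanish",
    ("asian", "sichuan_tour_group"): "Sichuan dialect",
}

def choose_language(coarse_class, cues, language_probs):
    # cues: secondary attributes detected on the user; language_probs:
    # per-language fluency probabilities from network big-data statistics
    for cue in cues:
        lang = RULES.get((coarse_class, cue))
        if lang is not None:
            return lang
    # no rule fires: fall back to the most probable language
    return max(language_probs, key=language_probs.get)
```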
According to a preferred embodiment of the present invention, the method further comprises, after selecting the language, receiving user feedback and modifying the user registration information. Specifically, the user feedback module receives user feedback, for example: emotion information reflected by facial details of the user obtained through the fine sensors 3A/3B (such as surprise or a smile); direct audio feedback of the user received through the high-sensitivity microphone 1 (such as praise or an exclamation); touch feedback of the user obtained through the touch screen 6 (such as selecting or swiping); and force feedback of the user obtained through touch sensors such as the crown touch sensor 10 or the hand biosensor 18 (such as a handshake or a pat). These items of feedback information are sent to the user information registration module. When the feedback information is judged to be positive or affirmative, the language currently employed by the reception voice selection system is recorded in the user language information contained in the user registration information.
As described above, the user's language is determined from the user's speech or non-speech information, and the corresponding language is selected accordingly as the language in which the reception robot interacts with the user. Then, as shown in Figure 3, the application corrects the user's age on the basis of the user's facial features using the voice spectrum and/or physiological features, and selects different interaction modes according to the user's age.
First, a first range of the user's age is predicted from the user's facial features. For example, the high-definition camera 2 located on the forehead of the reception robot is used to extract contour information of the user's face by methods such as Flexible Models, the Snake active contour model (ACM), the Active Shape Model (ASM), and the Active Appearance Model (AAM). Here, two-dimensional Gabor wavelets can be used for multi-scale image analysis, accurately extracting the orientation features of multiple image blocks in local regions of the image and performing global feature extraction, with flexible models then used for precise extraction of local features. Next, skin-color feature extraction is performed on the images or video captured by the high-definition camera 2. A frame is taken from the video for detection; images can be captured at regular intervals, or a single image can be captured once a person has been detected in the scene. Image preprocessing: since skin-color detection differs from face extraction, dedicated preprocessing is needed so that regions belonging to skin are not removed. Skin-color feature acquisition: through a skin-color segmentation model, the features closest to human skin color are identified; available methods include simple color-and-position clustering algorithms and Graph-Cut algorithms with color-and-position clustering applied first, of which the latter gives better results. Then, using the fine sensors 3A and 3B located at the eyes, fine details of the user's face are acquired, such as the distribution and number of eye-corner and brow wrinkles, the smoothness/reflectivity of the facial skin, the corners of the mouth, the degree of eyelid drooping, and the degree of skin laxity near the cheekbones.
On the basis of the facial contour, skin color and facial details obtained above, a comparison is made with big data acquired from the network or with a pre-stored database, and the first range of the user's age is predicted. For example: if the user is Asian and the skin-laxity degree/wrinkle distribution density exceeds the average by 50%, the age bracket is judged to be 60 or above; if the user is white and the skin-laxity degree/wrinkle distribution density exceeds the average by 30%, the age bracket is judged to be 60 or above; if the user is white and the skin-laxity degree/wrinkle distribution density is 40% below the average, the age bracket is judged to be under 18; if the user is black and the skin-laxity degree/wrinkle distribution density is 30% below the average, the age bracket is judged to be under 18; and so on.
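These ethnicity-conditioned wrinkle/laxity rules can be written down directly. The sketch below encodes only the four examples given in the text, with the measurement expressed as a ratio relative to the population average (1.0 = average) and an open upper bound standing for "60 or above":

```python
def first_age_range(coarse_class, laxity_ratio):
    # laxity_ratio: skin-laxity / wrinkle-distribution density divided by
    # the population average for that coarse class (1.0 == average)
    if coarse_class == "asian" and laxity_ratio >= 1.5:
        return (60, None)          # 60 or above
    if coarse_class == "white" and laxity_ratio >= 1.3:
        return (60, None)
    if coarse_class == "white" and laxity_ratio <= 0.6:
        return (0, 17)             # under 18
    if coarse_class == "black" and laxity_ratio <= 0.7:
        return (0, 17)
    return None                    # no rule fires; defer to other sensors
```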
Then, the first range is corrected according to the user's voice spectrum to generate a second range. If the step of judging whether the reception robot receives speech, shown in Fig. 2, yields "yes", then while or after selecting the language according to the user's speech, the processor of the reception robot analyzes the voice information collected or received via the high-sensitivity microphone 1 located on the crown to obtain its spectrum. Specifically, a short-time Fourier transform is applied to the speech signal to obtain its power spectrum; the frequencies corresponding to several representative peaks are found from the power spectrum, and these characteristic frequency values form a feature vector. Taking the feature vector of a particular reference voice (for example, the pre-stored voice of a young woman or man interacting with the reception robot, or the voice of a news announcer) as a standard vector F, a distance function D between the feature vector G of the voice under test and F is defined, and multiple (at least two) thresholds P1, P2, P3, ..., PN are set according to the correlation between the D values of different users pre-stored in the database and their ages. When D is less than or equal to P1, the user is judged to be a child, for example under 12; when D is greater than P1 and less than or equal to P2, the user is judged to be a teenager, for example 13 to 16; when D is greater than P2 and less than or equal to P3, the user is judged to be young, for example 17 to 25; ...; when D is greater than PN-1 and less than or equal to PN, the user is judged to be elderly, for example 60 to 80; when D is greater than PN, the user is judged to be of advanced age, for example over 80; and so on.
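The spectral-distance bucketing can be sketched as below. This is illustrative only: the peak-picking from the STFT power spectrum is assumed to have been done elsewhere, and the thresholds P1...PN in the test are made up:

```python
import bisect
import math

def spectral_distance(g, f):
    # D: Euclidean distance between the peak-frequency feature vector G
    # of the voice under test and the reference vector F
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(g, f)))

def bracket_from_distance(d, thresholds, brackets):
    # thresholds: sorted P1 < P2 < ... < PN; brackets has N + 1 entries.
    # bisect_left implements the "D <= P_k" boundary used in the text.
    return brackets[bisect.bisect_left(thresholds, d)]
```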
In this process, if the second range judged from the audio falls within, or partly overlaps, the first range judged from the facial features, the overlapping range is taken as the corrected age range. If the second range and the first range do not overlap at all, both ranges are pre-stored in a register (not shown) for subsequent processing.
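The correction rule here is interval intersection with a fallback register; a minimal sketch, with ranges represented as (low, high) tuples closed on both ends:

```python
def overlap(r1, r2):
    # intersection of two closed age ranges, or None when disjoint
    lo, hi = max(r1[0], r2[0]), min(r1[1], r2[1])
    return (lo, hi) if lo <= hi else None

def correct_range(first, second, register):
    inter = overlap(first, second)
    if inter is not None:
        return inter                      # overlapping part wins
    register.extend([first, second])      # keep both for later processing
    return None
```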
If the step of judging whether the reception robot receives speech, shown in Fig. 2, yields "no", that is, the reception robot does not detect a user speaking nearby (including cases where the user makes no sound, or is too far away or too quiet to be recognized accurately), then the processor corrects the first range of the user's age according to the user's physiological features.
Specifically, the reception robot actively attracts the user to touch the hand biosensor 18 located at its hand, which is used to collect the user's fingerprint, measure the moisture content (resistivity) or roughness of the user's skin, measure the force of the user's handshake, measure the user's pulse or capillary oxygen content, and so on. The means of attraction may be a voice prompt, subtitles on the display screen, an interactive dance, and the like. The biosensor 18 is preferably a skin-texture detector for detecting the skin texture of the user's hand to judge the corresponding age range. The skin texture may be detected by a silicon-wafer (solid-state) identifier, which uses tiny capacitors to sense the texture folds of the skin; or by an optical scanner, for example an array of charge-coupled devices (CCD) that captures a digital image of the skin, or light-emitting diodes (LED) whose light, reflected from the skin, produces an image for texture analysis. Current research finds that skin texture, such as the degree of roughness, the wrinkle depth, and the distance between skin ridges (the raised parts of the skin surface) and skin grooves (the depressed parts of the skin surface), can roughly reflect different age levels. Therefore, the user's age range can be judged from skin roughness and groove depth, that is, from physiological features.
For example, if the user's skin roughness is below a first threshold, the user is judged to be a child, for example under 12; if the skin roughness is above the first threshold but below a second threshold, the user is judged to be a teenager, for example 13 to 25; if the skin roughness is above the second threshold and below a third threshold, the user is judged to be middle-aged, for example 26 to 40; if the skin roughness is above the third threshold and below a fourth threshold, the user is judged to be middle-aged to elderly, for example 41 to 60; if the skin roughness is above the fourth threshold, the user is judged to be elderly, for example 61 to 80; and so on.
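With the four (unspecified) roughness thresholds written as an ordered list, the bucketing reduces to a single bisect call; the threshold values below are placeholders, not values from the disclosure:

```python
import bisect

ROUGHNESS_THRESHOLDS = [0.2, 0.4, 0.6, 0.8]           # hypothetical T1..T4
ROUGHNESS_BRACKETS = [(0, 12), (13, 25), (26, 40), (41, 60), (61, 80)]

def age_from_roughness(r):
    # roughness below T1 -> child, between T1 and T2 -> teenager, etc.
    return ROUGHNESS_BRACKETS[bisect.bisect(ROUGHNESS_THRESHOLDS, r)]
```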
In another preferred embodiment of the present invention, the hand biosensor 18 further includes an integrated protein microsensor for detecting proteins on the skin that are specific to tissues, cells or subcellular structures, so as to reflect the user's age indirectly. For example, the microsensor includes a detecting electrode formed by two opposed graphene layers on an oxide semiconductor substrate; when a specific protein passes across the detecting electrode, the sub-threshold current between the electrodes changes correspondingly owing to the distinct electric polarity and rotational symmetry of the protein macromolecule, so that different protein types can be distinguished. The microsensor is dedicated to detecting aquaporin, which belongs to the water-glycerol channel protein subfamily and, besides being highly permeable to water molecules, is also highly permeable to neutral small molecules such as glycerol and urea. Aquaporin participates in many physiological and pathological processes of the human body and is closely related to the concentration of urine and the fluid balance of the airway surface; a deficiency of this aquaporin can lead to dry skin, decreased hydration of the stratum corneum, decreased skin elasticity, and other processes that reflect skin aging. In particular, the protein microsensor of the application detects the expression level of aquaporin and the mRNA expression level in the user's skin tissue, and different thresholds are selected according to a high-volume test set to reflect different age groups. For example, for the mRNA expression level the thresholds are chosen as Tm1 = 0.31, Tm2 = 0.55, Tm3 = 0.67 and Tm4 = 0.74, and for the aquaporin expression level the thresholds are chosen as Ta1 = 2.03, Ta2 = 1.95, Ta3 = 1.88 and Ta4 = 1.21; the children's group (under 12) corresponds to Tm2~Tm3 and below Ta1, the teenage group (13 to 25) to Tm3~Tm4 and Ta1~Ta2, the middle-aged group (26 to 40) to above Tm4 and Ta2~Ta3, the middle-aged-to-elderly group (41 to 60) to Tm1~Tm2 and Ta3~Ta4, and the elderly group (60 or above) to below Tm1 and above Ta4.
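The group table in this paragraph pairs an mRNA-expression band with an aquaporin-expression band for each age group. The sketch below is a direct transcription of the table as stated; note that the aquaporin thresholds decrease with age, and band boundaries are taken as half-open [low, high) purely for definiteness, since the text does not say which side is inclusive:

```python
# thresholds quoted in the text
TM1, TM2, TM3, TM4 = 0.31, 0.55, 0.67, 0.74
TA1, TA2, TA3, TA4 = 2.03, 1.95, 1.88, 1.21

GROUPS = [
    # (label, mRNA band (low, high), aquaporin band (low, high));
    # None marks an open end
    ("child (<12)",         (TM2, TM3),  (None, TA1)),
    ("teenager (13-25)",    (TM3, TM4),  (TA2, TA1)),
    ("middle-aged (26-40)", (TM4, None), (TA3, TA2)),
    ("older (41-60)",       (TM1, TM2),  (TA4, TA3)),
    ("elderly (60+)",       (None, TM1), (TA4, None)),
]

def in_band(x, band):
    lo, hi = band
    return (lo is None or x >= lo) and (hi is None or x < hi)

def protein_age_group(mrna, aqp):
    for label, mband, aband in GROUPS:
        if in_band(mrna, mband) and in_band(aqp, aband):
            return label
    return None   # measurements fall outside every listed band
```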
In another preferred embodiment of the present invention, the user's age is detected using the telomeric DNA of the skin. Similarly to the identification of the above-mentioned specific protein, opposed graphene electrodes in the integrated microsensor are used to analyze the current change caused by DNA macromolecules passing between the electrodes, so as to reflect different attributes, such as the DNA fragment length (by detecting the current pulse width or its full width at half maximum). Generally, DNA fragment length gradually shortens with increasing age, shortening faster in adolescence and more slowly in adulthood. In the preferred embodiment of the application, the relationship between the user's age Y (years) and the skin telomeric DNA fragment length X (kb), obtained from a pre-stored data set, is Y = 7.89/X^2 + 55.419/X + 5.047; therefore, the user's actual age can be derived in real time from the X value measured by the microsensor on the hand of the reception robot.
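The quoted age-telomere relationship can be evaluated directly; a one-line implementation of Y = 7.89/X^2 + 55.419/X + 5.047 (X in kb, Y in years):

```python
def age_from_telomere(x_kb):
    # Y = 7.89 / X^2 + 55.419 / X + 5.047, with X the measured skin
    # telomeric DNA fragment length in kilobases
    return 7.89 / x_kb ** 2 + 55.419 / x_kb + 5.047
```

Because both X-dependent terms fall as X grows, the predicted age decreases monotonically with fragment length, matching the stated observation that telomeres shorten with age.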
In other preferred embodiments of the invention, other skin parameters, such as sebum, moisture and pH, are used to characterize the user's age. The sebum measurement is based on the photometric principle, moisture is measured by the capacitance method, and pH is measured from the voltage difference produced by hydrogen ions passing through a semi-permeable membrane. With increasing age (within the same BMI range), the buffering capacity of the skin gradually decreases, sebum and moisture gradually decrease, and pH gradually rises towards neutrality; therefore, different thresholds can similarly be set from the correlations between each parameter and age measured on control groups, so as to judge the age bracket of the current user.
When the age range judged from the user's physiological features (the third range) overlaps or partly overlaps the aforementioned first range, the overlapping range is taken as the corrected age range. If they do not overlap, the first range and the age range judged from the user's physiological features are recorded in a register for subsequent processing.
In a preferred embodiment of the invention, optionally, after the user's speech is received and the second range is generated from the voice spectrum, the third range is further generated using the above-mentioned steps, that is, according to the user's physiological features, and is used to correct the second range.
Finally, a fourth range is generated from the user's secondary attributes to correct the first/second/third range. As mentioned above, the secondary attributes are likewise obtained jointly by the high-definition camera 2, the fine sensors 3A/3B, the 3D depth camera 5, and an RFID card reader (not shown). The secondary attributes include the user's clothing style (such as a Chinese Tang suit or cheongsam, a Japanese kimono, an Arabic robe, a Scottish kilt, or ethnic-minority costume), identity marks on luggage or clothing carried by the user (such as a tour-group flag, a group uniform, or a group name on luggage), the language of the operating interface of the user's portable device (mobile phone, laptop, tablet, etc.), or the language corresponding to the Bluetooth ID or device name of the user's portable device. In addition, the user secondary attributes used to correct the third range further include accessories and make-up that can reflect the user's age (for example, earring/ear-stud/tie-clip styles, popular lipstick colors, or carried cartoon characters, compared against the user-age distributions recorded in the database), and article labels that can reflect the user's income/consumption level (for example, the brands of suits, coats, shoes and hats, compared against the user age-income correlation recorded in the database), and so on. If the user age judged from the secondary attributes (the fourth range) falls within the third range, that user age is taken as the final age range. If no two of the first/second/third ranges overlap, the part of the user age judged from the secondary attributes that overlaps any one of the first/second/third ranges (preferably the part with the most overlaps, for example overlapping both the first and second ranges, or the first and third) is taken as the user's final age range.
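The final arbitration described here, preferring the slice of the fourth range that overlaps the largest number of the earlier ranges, can be sketched as:

```python
def overlap(a, b):
    # intersection of two closed ranges, or None when disjoint
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def final_age_range(fourth, earlier):
    # earlier: list of the first/second/third ranges; prefer the part of
    # the fourth range that overlaps the most of them
    best, best_count = None, 0
    for rng in earlier:
        inter = overlap(fourth, rng)
        if inter is None:
            continue
        count = sum(1 for other in earlier if overlap(inter, other))
        if count > best_count:
            best, best_count = inter, count
    return best
```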
In this way, the above scheme comprehensively uses the video, audio, touch and RFID information collected by the sensors of the reception robot to judge the user's age range, improving the success rate of age identification, and selects different interaction modes according to the user's age, for example a dedicated honorific-language mode for Japanese and Korean customers and adjusted volume and speech rate for children and the elderly, which improves the user experience and helps improve the consultation performance of the reception robot.
The interaction mode selection method and system of the present invention, and the reception robot using this method and/or system, correct the user's age on the basis of the user's facial features using the voice spectrum and/or physiological features, and select different interaction modes according to the user's age, thereby improving the success rate of user-age prediction and improving the user experience.
Although the present invention has been described with reference to one or more exemplary embodiments, those skilled in the art will appreciate that various suitable changes and equivalents can be made to the system and method without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the disclosed teachings without departing from the scope of the invention. Therefore, the invention is not intended to be limited to the specific embodiments disclosed as the best mode contemplated for carrying it out; rather, the disclosed system and method include all embodiments falling within the scope of the invention.

Claims (10)

1. A reception robot interaction mode selection method, comprising:
sensing the approach of a user;
judging whether the user is a registered user; if so, selecting the language and interaction mode according to the user registration information, and if not, creating and recording user registration information;
judging whether user speech is received; if so, selecting the language according to the user speech, and if not, identifying the user category;
selecting the language according to the user category;
judging the age of the user; and
selecting the interaction mode according to the age of the user.
2. The method according to claim 1, wherein the step of identifying the user category further comprises:
coarsely classifying the user according to biological information; and
finely classifying the user according to secondary attributes,
wherein, optionally, the biological information includes skin-color information, facial contour information, height, gait, movement speed, and static/walking posture of the limbs; optionally, the secondary attributes include clothing style, identity marks on luggage or clothing carried by the user, the language of the operating interface of the user's portable device, and the language of the device name of the user's portable device; optionally, the coarse categories include Asian, white, and black;
optionally, the user registration information includes the user's face recognition information, the user's identity information, the user's language information, and the user's age information.
3. The method according to claim 1, further comprising, before judging the age of the user, receiving user feedback and modifying the user registration information.
4. The method according to claim 1, wherein the step of judging the age of the user further comprises:
predicting a first range of the user's age according to the user's facial features;
generating a second range of the user's age according to the user's voice spectrum and/or generating a third range of the user's age according to the user's physiological features; and
optionally, generating a fourth range according to the user's secondary attributes.
5. The method according to claim 4, wherein, if the second range overlaps or partly overlaps the first range, the overlapping range is taken as the final range of the user's age; if the second range does not overlap the first range, and the third range overlaps or partly overlaps the first range, the part of the third range that overlaps the first range is taken as the final range of the user's age; and if no two of the first, second and third ranges overlap, the part of the fourth range that overlaps any one of the first, second and third ranges is taken as the final range of the user's age.
6. The method according to claim 1, wherein the interaction modes include an honorific-language mode, a child mode, an elderly mode, and a default mode.
7. The method according to claim 6, wherein the volume and speech rate of the honorific-language mode are the same as those of the default mode; optionally, the volume and speech rate of the child mode are lower than those of the default mode; optionally, the volume of the elderly mode is higher than that of the default mode and its speech rate is lower than that of the default mode.
8. The method according to claim 6, wherein, in the default mode, the volume and speech rate are set according to the ambient-noise volume of the environment in which the reception robot is located, the distance between the user and the reception robot, and the age of the user.
9. A reception robot interaction mode switching system for performing the method according to any one of claims 1 to 8, comprising:
a user approach sensing module for sensing the approach of a user;
a user information registration module for recording user registration information;
a user speech recognition module for receiving user speech;
a user age identification module for judging the age of the user; and
a processor configured to:
judge whether the user is a registered user; if so, select the language and interaction mode according to the user registration information, and if not, create and record user registration information in the user information registration module;
judge whether user speech is received; if so, select the language according to the user speech, and if not, identify the user category and select the language according to the user category; and
select the interaction mode according to the age of the user.
10. A reception robot that uses the method according to any one of claims 1 to 8 to select the language and interaction mode used for receiving a user.
CN201711282942.3A 2017-12-07 2017-12-07 Interactive mode selection method, system and reception robot Pending CN108161933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711282942.3A CN108161933A (en) 2017-12-07 2017-12-07 Interactive mode selection method, system and reception robot


Publications (1)

Publication Number Publication Date
CN108161933A true CN108161933A (en) 2018-06-15

Family

ID=62524455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711282942.3A Pending CN108161933A (en) 2017-12-07 2017-12-07 Interactive mode selection method, system and reception robot

Country Status (1)

Country Link
CN (1) CN108161933A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004198831A (en) * 2002-12-19 2004-07-15 Sony Corp Method, program, and recording medium for speech recognition
CN101618542A (en) * 2009-07-24 2010-01-06 塔米智能科技(北京)有限公司 System and method for welcoming guest by intelligent robot
CN103533438A (en) * 2013-03-19 2014-01-22 Tcl集团股份有限公司 Clothing push method and system based on intelligent television
CN204723761U (en) * 2015-01-04 2015-10-28 玉林师范学院 Based on libraries of the universities' guest-meeting robot of RFID
CN106504743A (en) * 2016-11-14 2017-03-15 北京光年无限科技有限公司 A kind of interactive voice output intent and robot for intelligent robot
CN106649290A (en) * 2016-12-21 2017-05-10 上海木爷机器人技术有限公司 Speech translation method and system
CN106952648A (en) * 2017-02-17 2017-07-14 北京光年无限科技有限公司 A kind of output intent and robot for robot
CN107316254A (en) * 2017-08-01 2017-11-03 深圳市益廷科技有限公司 A kind of hotel service method
CN107358949A (en) * 2017-05-27 2017-11-17 芜湖星途机器人科技有限公司 Robot sounding automatic adjustment system


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109093631A (en) * 2018-09-10 2018-12-28 中国科学技术大学 A kind of service robot awakening method and device
CN111089581A (en) * 2018-10-24 2020-05-01 上海博泰悦臻网络技术服务有限公司 Traffic guidance method, terminal and robot
CN109741744A (en) * 2019-01-14 2019-05-10 博拉网络股份有限公司 AI robot dialog control method and system based on big data search
CN109741744B (en) * 2019-01-14 2021-03-09 博拉网络股份有限公司 AI robot conversation control method and system based on big data search
CN109949795A (en) * 2019-03-18 2019-06-28 北京猎户星空科技有限公司 A kind of method and device of control smart machine interaction
CN110148399A (en) * 2019-05-06 2019-08-20 北京猎户星空科技有限公司 A kind of control method of smart machine, device, equipment and medium
CN110209957A (en) * 2019-06-06 2019-09-06 北京猎户星空科技有限公司 Explanation method, apparatus, equipment and storage medium based on intelligent robot
CN110610703A (en) * 2019-07-26 2019-12-24 深圳壹账通智能科技有限公司 Speech output method, device, robot and medium based on robot recognition
CN110427462A (en) * 2019-08-06 2019-11-08 北京云迹科技有限公司 With method, apparatus, storage medium and the service robot of user interaction
CN110913074A (en) * 2019-11-28 2020-03-24 北京小米移动软件有限公司 Sight distance adjusting method and device, mobile equipment and storage medium
CN111506377A (en) * 2020-04-16 2020-08-07 上海茂声智能科技有限公司 Language switching method and device and voice service terminal
CN112929502A (en) * 2021-02-05 2021-06-08 国家电网有限公司客户服务中心 Voice recognition method and system based on electric power customer service
CN114633267A (en) * 2022-03-17 2022-06-17 上海擎朗智能科技有限公司 Interactive content determination method, mobile equipment, device and storage medium
CN115171284A (en) * 2022-07-01 2022-10-11 国网汇通金财(北京)信息科技有限公司 Old people care method and device
CN115171284B (en) * 2022-07-01 2023-12-26 国网汇通金财(北京)信息科技有限公司 Senior caring method and device
CN116787469A (en) * 2023-08-29 2023-09-22 四川数智宗医机器人有限公司 Digital traditional Chinese medicine robot pre-consultation system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN108161933A (en) Interactive mode selection method, system and reception robot
CN108153169A (en) Guide to visitors mode switching method, system and guide to visitors robot
CN108182098A (en) Receive speech selection method, system and reception robot
US11989340B2 (en) Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system
US11748980B2 (en) Makeup evaluation system and operating method thereof
JP6850723B2 (en) Facial expression identification system, facial expression identification method and facial expression identification program
CN112784763B (en) Expression recognition method and system based on local and overall feature adaptive fusion
US10062163B2 (en) Health information service system
KR20160012902A (en) Method and device for playing advertisements based on associated information between audiences
Liu et al. Region based parallel hierarchy convolutional neural network for automatic facial nerve paralysis evaluation
CN101305913B (en) Face beauty assessment method based on video
US20170352351A1 (en) Communication robot
EP3579176A1 (en) Makeup evaluation system and operation method thereof
US10423978B2 (en) Method and device for playing advertisements based on relationship information between viewers
Abobakr et al. Rgb-d fall detection via deep residual convolutional lstm networks
KR102316723B1 (en) Body-tailored coordinator system using artificial intelligence
CN109670406B (en) Non-contact emotion recognition method for game user by combining heart rate and facial expression
CN108198159A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN107316333A (en) It is a kind of to automatically generate the method for day overflowing portrait
CN113920568B (en) Face and human body posture emotion recognition method based on video image
US20090033622A1 (en) Smartscope/smartshelf
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN113343860A (en) Bimodal fusion emotion recognition method based on video image and voice
CN112101235B (en) Old people behavior identification and detection method based on old people behavior characteristics
CN111967324A (en) Dressing system with intelligent identification function and identification method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180615