CN114115526A - Head-mounted wireless headset, control method thereof and wireless communication system


Info

Publication number
CN114115526A
Authority
CN
China
Prior art keywords
camera
user
data
earphone
wireless headset
Prior art date
Legal status
Pending
Application number
CN202111279245.9A
Other languages
Chinese (zh)
Inventor
孟田翠
Current Assignee
Goertek Technology Co., Ltd.
Original Assignee
Goertek Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Goertek Technology Co., Ltd.
Priority to CN202111279245.9A
Publication of CN114115526A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1008 Earpieces of the supra-aural or circum-aural type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a head-mounted wireless headset, a control method thereof, and a wireless communication system. The headset includes a camera, and the control method comprises the following steps: receiving a facial image of the user acquired by the camera and performing image processing to obtain facial motion data of the user; and determining the data type from the facial motion data and sending the facial motion data to a mobile terminal, the data types including at least expression information data, control instruction data, and emotion information data. The invention addresses the problem that head-mounted wireless headsets offer only a single function.

Description

Head-mounted wireless headset, control method thereof and wireless communication system
Technical Field
The invention relates to the technical field of earphones, and in particular to a head-mounted wireless headset, a control method thereof, and a wireless communication system.
Background
With social progress and continuous technological innovation, digital and networked products such as earphones and cameras have become increasingly widespread. At present, however, earphones are generally used only to play or pick up audio signals; this single function cannot meet users' application needs.
Disclosure of Invention
The main object of the invention is to provide a head-mounted wireless headset, a control method thereof, and a wireless communication system, aiming to solve the problem that head-mounted wireless headsets offer only a single function.
To achieve the above object, the invention provides a method for controlling a head-mounted wireless headset that includes a camera, the method comprising:
receiving a facial image of the user acquired by the camera and performing image processing to obtain facial motion data of the user; and
determining the data type from the facial motion data and sending the facial motion data to a mobile terminal, the data types including at least expression information data, control instruction data, and emotion information data.
Optionally, when the facial motion data is determined to be control instruction data, the control method further comprises the following steps:
generating a control instruction corresponding to the control instruction data; and
outputting the control instruction to the mobile terminal so as to control the mobile terminal to execute the application control instruction corresponding to the facial motion data.
Optionally, when the facial motion data is determined to be expression information data, the control method further comprises:
outputting the expression information data to the mobile terminal so that, when a preset application of the mobile terminal is running, the preset application is controlled to send the image expression information corresponding to the expression information data.
Optionally, the control method further comprises:
when the facial motion data is determined to be emotion information data, outputting the emotion information data to the mobile terminal so that, when a preset application of the mobile terminal is running, the preset application is controlled to record and store the emotion information data.
Optionally, before the step of receiving the facial image of the user acquired by the camera and performing image processing to obtain the facial motion data, the control method further comprises:
driving the camera from the storage position to the open position and controlling the camera to start working so as to acquire the facial image of the user.
Optionally, the head-mounted wireless headset includes a headband, a left earphone, and a right earphone, the left and right earphones being disposed at the two ends of the headband; there are at least two cameras, disposed on the left earphone and the right earphone respectively.
Before the step of receiving the facial image of the user acquired by the camera and performing image processing to obtain the facial motion data, the control method further comprises:
when a user-triggered camera control signal is received, driving the camera disposed on the left earphone and/or the right earphone from the storage position to the open position; and
controlling the corresponding camera to work according to the camera control signal so as to acquire the facial image of the user.
The invention further provides a head-mounted wireless headset, comprising:
a headband;
an earphone body detachably connected to the headband;
a driving mechanism movably connected to the earphone body;
a camera disposed on the driving mechanism, the camera having a storage position and an open position, the driving mechanism being configured to drive the camera between the storage position and the open position, and the camera being able to acquire a facial image of the user when in the open position; and
a processor, a memory, and a control program for the headset stored on the memory and executable on the processor, the control program implementing the steps of the control method described above when executed by the processor.
Optionally, the earphone body comprises:
a left earphone and a right earphone respectively disposed at the two ends of the headband, the driving mechanism and the camera being disposed on the left earphone or the right earphone.
Optionally, in the head-mounted wireless headset:
the processor is specifically configured to control the driving mechanism when a user-triggered camera control signal is received, so as to drive the camera between the storage position and the open position.
Optionally, the processor is further connected to the camera;
the processor is configured to receive the facial image of the user acquired by the camera, perform image processing to obtain the facial motion data of the user, determine the data type from the facial motion data, and communicate with the mobile terminal to send the facial motion data to it.
Optionally, the processor comprises:
an expression recognition processor electrically connected to the camera and configured to acquire the facial image of the user and perform image processing to obtain the facial motion data of the user; and
a data processor connected to the expression recognition processor and configured to determine the data type from the facial motion data.
The invention also provides a wireless communication system comprising a mobile terminal and the head-mounted wireless headset described above;
the mobile terminal is in communication connection with the head-mounted wireless headset.
Optionally, the mobile terminal comprises a processor and a memory storing at least one instruction, program, code set, or instruction set, which is loaded and executed by the processor to implement the communication connection with the headset.
The invention receives the facial image of the user acquired by the camera and performs image processing to obtain the facial motion data of the user; it then determines the data type from the facial motion data and sends the facial motion data to a mobile terminal, the data types including at least expression information data, control instruction data, and emotion information data. By arranging a camera on the earphone body to capture the user's facial movements, the headset meets the needs of different application scenarios: the user can capture the desired images and video while enjoying music, which enriches the functions of the head-mounted wireless headset. Moreover, driven by the driving mechanism, the camera can be stowed and deployed automatically according to the user's needs and can acquire the user's facial image automatically without relying on the mobile terminal, which improves operating convenience.
Drawings
To illustrate the embodiments of the invention or the prior-art solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from these without creative effort.
Fig. 1 is a schematic structural diagram of an embodiment of the head-mounted wireless headset with the camera in the storage position;
Fig. 2 is a schematic structural diagram of an embodiment of the head-mounted wireless headset with the camera in the open position;
Fig. 3 is a functional block diagram of an embodiment of the head-mounted wireless headset;
Fig. 4 is a functional block diagram of another embodiment of the head-mounted wireless headset;
Fig. 5 is a flowchart of an embodiment of the method for controlling the head-mounted wireless headset;
Fig. 6 is a flowchart of another embodiment of the control method;
Fig. 7 is a flowchart of yet another embodiment of the control method;
Fig. 8 is a flowchart of still another embodiment of the control method;
Fig. 9 is a flowchart of a further embodiment of the control method;
Fig. 10 is a flowchart of a further embodiment of the control method.
The reference numbers illustrate:
10 headband; 20 earphone body; 21 left earphone; 22 right earphone; 30 driving mechanism; 40 camera; 50 processor; 51 expression recognition processor; 52 data processor.
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that any directional indications (such as up, down, left, right, front and back) in the embodiments of the invention are used only to explain the relative positional relationship and movement of components in a specific posture (as shown in the drawings); if that posture changes, the directional indication changes accordingly.
In addition, descriptions such as "first" and "second" in embodiments of the invention are for description only and shall not be understood as indicating or implying relative importance or the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Technical solutions of different embodiments may be combined with one another provided that a person of ordinary skill can realize the combination; when the combination is contradictory or cannot be realized, it should be considered not to exist and falls outside the protection scope of the invention.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The invention also provides a control method for a head-mounted wireless headset; the headset includes a camera.
Referring to fig. 5, the method for controlling a wireless headset includes the steps of:
step S100, receiving a face image of the user acquired by the camera, and performing image processing to obtain face motion data of the user;
in this embodiment, the camera may work based on a control instruction triggered by the user, and acquire a facial image of the user while working, and based on this, perform image processing on the facial image output by the headset-side camera by using an expression recognition algorithm and the like, and finally output facial motion data. In addition, in the working process of the head-wearing wireless headset, the camera function can be in a long-time working state during communication with the mobile terminal all the time, or the camera function can be in a working state during the process of running a target application program such as an emotion management application or a social application, or the camera function can be turned on under the control of user trigger operation.
Step S200, determining the data type from the facial motion data of the user and sending the facial motion data to the mobile terminal, the data types including at least expression information data, control instruction data, and emotion information data.
It can be understood that facial expressions are part of human language: they are physiological and psychological reactions and are commonly used to convey emotion. This embodiment therefore captures the user's expressive movements and sends the resulting facial motion data to the mobile terminal so that the terminal can record, store, or otherwise use it. Expression recognition can also be used to control the mobile terminal: the expression, emotion, and facial action features produced by the user's facial movement serve as control instructions. Various expressions and emotions can be recognized from the movement of the lower half of the face, and an operation instruction issued without speaking, i.e., a silent control instruction. For expression recognition, a preset model such as a deep-learning model (e.g., a neural network) may be used to recognize the facial image acquired by the camera; before it is used to recognize silent operation instructions, the model may be trained on preset samples to improve its discrimination.
Before the facial motion data is sent to the mobile terminal, it may be preprocessed and converted into information the terminal can recognize; the preset facial data may likewise be stored in a terminal-recognizable form. After the user's facial image is acquired by the camera, the data type it represents can be analyzed: emotion information data expressed by different emotions (natural emotion exposure), control instruction data expressed by silent operation instructions, and expression information data, i.e., expressions conveying an emotion to the peer that the mobile terminal is communicating with. A silent operation instruction may be lip language or a specific facial action, such as a number of lip openings and closings, an open mouth, or a toothy smile. The lower-half facial motion features corresponding to the different expressions and emotions are shown in Table 1:
TABLE 1 (lower-half facial motion features for each expression and emotion; reproduced only as an image in the original publication)
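As a rough sketch of how step S200 might sort facial motion data into the three data types, consider the rules below; the labels and criteria are assumptions, since the patent leaves the concrete decision to the expression database and the trained recognition model.

```python
# Illustrative three-way classification of facial motion data (all rules assumed).
from enum import Enum

class DataType(Enum):
    EXPRESSION_INFO = "expression"   # an expression to forward to a chat peer
    CONTROL_INSTRUCTION = "control"  # a silent operation instruction (e.g. lip language)
    EMOTION_INFO = "emotion"         # natural emotion exposure, to be recorded

# Hypothetical: the recognition model labels known silent commands discretely.
SILENT_COMMANDS = {"open_audio_play", "make_a_call", "pause", "next_track"}

def classify(motion_features):
    label = motion_features.get("label")          # output of the recognition model
    if label in SILENT_COMMANDS:
        return DataType.CONTROL_INSTRUCTION
    if motion_features.get("deliberate", False):  # user posed an expression to send
        return DataType.EXPRESSION_INFO
    return DataType.EMOTION_INFO                  # default: natural emotion exposure
```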
The invention receives the facial image of the user acquired by the camera and performs image processing to obtain the facial motion data of the user; it then determines the data type from the facial motion data and sends the facial motion data to a mobile terminal, the data types including at least expression information data, control instruction data, and emotion information data. By arranging a camera on the earphone body to capture the user's facial movements, the headset meets the needs of different application scenarios: the user can capture the desired images and video while enjoying music, which enriches the functions of the head-mounted wireless headset. Moreover, driven by the driving mechanism, the camera can be stowed and deployed automatically according to the user's needs and can acquire the user's facial image automatically without relying on the mobile terminal, which improves operating convenience.
Referring to fig. 6, in an embodiment, when the facial motion data is determined to be control instruction data, the control method further includes the following steps:
Step S300, generating a control instruction corresponding to the control instruction data;
Step S400, outputting the control instruction to the mobile terminal so as to control the mobile terminal to execute the application control instruction corresponding to the facial motion data.
In this embodiment, the correspondence between silent operation instructions and the applications of the mobile terminal may be established by default in the terminal or customized by the user. When the correspondence is user-defined, the terminal receives a setting instruction and establishes the correspondence accordingly. Then, when an input silent operation instruction matches a preset silent operation instruction, the application corresponding to that preset instruction can be looked up through the correspondence and run. The preset silent operation instruction is the one among the at least one stored silent operation instruction that matches the input.
Specifically, the mobile terminal has an emotion management application, a telephone application, a messaging application, a social application, a video playing application, an audio playing application, and so on, and builds the correspondence between user-triggered silent operation instructions and applications according to the user's custom settings. With lip language, for example, the audio playing application may correspond to the silent instruction "open audio play", i.e., to a particular sequence of lip actions, and the telephone application may correspond to the instruction "make a call" or to a lip action with raised mouth corners. When the camera acquires a facial image that is analyzed as control instruction data, the image can be sent to the data processor of the headset as a message queue; the data processor performs data processing such as format conversion and encoding and then sends the result to the mobile terminal. On this basis, the headset can automatically control functions such as pause, play, previous, and next of the audio playing application on the mobile terminal, or pause, fast-forward, and skip-to-next for the audio/video playing application.
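The correspondence table and dispatch described above might look as follows; the application names and actions are illustrative assumptions, not terminal APIs defined by the patent.

```python
# Illustrative default mapping from silent operation instructions to applications;
# a user-defined setting instruction simply overwrites an entry.
DEFAULT_COMMAND_MAP = {
    "open_audio_play": ("audio_player", "launch"),
    "make_a_call":     ("phone",        "dial"),
    "pause":           ("audio_player", "pause"),
    "next_track":      ("audio_player", "next"),
}

def set_user_mapping(command_map, silent_cmd, app, action):
    """Establish a user-defined correspondence from a setting instruction."""
    command_map[silent_cmd] = (app, action)

def dispatch(command_map, silent_cmd, send_to_terminal):
    """Look up the matched preset instruction and forward a control message."""
    entry = command_map.get(silent_cmd)
    if entry is None:
        return False  # no preset instruction matched; ignore to avoid false triggering
    app, action = entry
    send_to_terminal({"type": "control", "app": app, "action": action})
    return True
```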
After a target application is opened according to a received silent operation instruction, the mobile terminal can continue to accept silent operation instructions within the interface of that application. The target application may be any application in the mobile terminal, for example the "emotion management application" or a "social application". Of course, in other embodiments the headset may likewise control electrical equipment such as smart lamps, air conditioners, and speakers according to a user-triggered control signal; this parallels the control of the camera and is not repeated here.
It can be understood that when the same preset silent operation instruction is bound to different applications in different interfaces, the input interface of an incoming silent operation instruction can be identified and its interface identifier obtained, so that the instruction is matched against the preset instructions of that interface; when a match is found, the corresponding target application is opened according to the instruction-to-application correspondence of that interface.
It should also be understood that facial motions corresponding to user-defined silent operation instructions may resemble one another; for example, when lip language is used, "open audio play" and "open video play" differ in mouth shape by only one word. To prevent false triggering, or failure to execute the instruction the user actually issued, this embodiment does not specifically limit the recognition method: the control information may be recognized with a preset model such as a deep-learning model (e.g., a neural network), which may be trained on preset samples beforehand to improve its discrimination.
In an embodiment, when the data type is determined to be expression information data or emotion information data (the user's natural emotion exposure), the data is transmitted wirelessly to the mobile terminal, e.g., over WiFi or Bluetooth; when Bluetooth is used, the data may be transmitted in a BLE data format.
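The patent only states that the Bluetooth transmission may use a BLE data format; as one possible sketch, a compact frame for expression/emotion data could be packed as below (the field layout is entirely an assumption).

```python
# Illustrative BLE payload framing for expression/emotion information data.
import struct
import time

TYPE_EXPRESSION = 0x01  # expression information data
TYPE_EMOTION    = 0x02  # emotion information data

def pack_frame(data_type, label_id, confidence):
    # [type:1][label id:2][confidence*100:1][unix time:4], little-endian
    return struct.pack("<BHBI", data_type, label_id,
                       int(confidence * 100), int(time.time()))
```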
It should be noted that the same expression may correspond to different behaviors in different applications. For example, when the user makes a "happy" facial motion, a social application may be configured to send "happy" emoticon information to the chat peer, while an emotion management application may be configured to record and store the user's current emotion, together with the time and place at which the facial movement was made.
On this basis, since the same facial image can carry different meanings in different applications and several applications on the mobile terminal may match it, this embodiment can decide which application acts on the expression according to whether each application is running, e.g., whether its display interface is open on the mobile terminal. Specifically, when the same facial movement corresponds to different applications, the mobile terminal identifies the input interface of incoming expression or emotion information data and obtains its interface identifier, matches the data against the preset expression or emotion information data of that interface, and, when a match is found, executes the corresponding action in the opened target application according to the data-to-application correspondence of that interface.
Referring to fig. 7, specifically, when the facial motion data is determined to be expression information data, the control method further includes:
Step S500, outputting the expression information data to the mobile terminal so that, when a preset application of the mobile terminal is running, the preset application is controlled to send the image expression information corresponding to the expression information data.
In this embodiment, the preset application is a social application. Social applications involve input and output of text, voice, and expression packages: an expression package is a software package of small pictures used to convey moods during chat, containing at least one expression image and/or caption, where the expression image is a small picture representing a mood and the caption expresses the specific meaning matched with it. When the user sends information to a peer through the mobile terminal using a social application that supports expression information, the user can open the application and input text, symbols, images, expression packages, video, and so on in the interface of the target application. When users want to express their current mood, or a mood or phrase is hard to put into words, they can express it with an expression package.
Specifically, the mobile terminal stores multiple expression packages, e.g., packages expressing surprise, fear, disgust, anger, happiness, and sadness, and the user can customize the association between facial motion data and expression packages: the user selects an expression package and associates it with a facial image, so that the associated package is matched to that facial motion data. In a target application scenario, the corresponding facial motion data is then sent to the mobile terminal, and the running target application matches the received facial motion data to the expression package.
On a mobile terminal that supports producing expression packages, the facial motion data can also be sent to the terminal, which, when playing or recording a video of a person, can process the facial images in the video using face recognition, extract at least one facial image, configure text and format according to the application's requirements, and produce an expression package.
In use, the mobile terminal can open the social application based on a silent operation instruction sent by the headset and display the application's interface. While the social application is running, the user makes the corresponding facial movement, the headset camera acquires the facial motion data, and when face recognition confirms that the current facial motion data is an expression/emotion, the expression/emotion is sent to the mobile terminal under an agreed protocol; that is, the corresponding expression picture is sent into the chat dialog box. The facial motion data has a one-to-one expression similarity to each image in the expression package library; in the application interface, the mobile terminal selects an image whose expression similarity is greater than or equal to a preset similarity and sends that expression package to the chat peer. Expression packages can thus be sent intelligently without the user selecting them manually.
Further, when the headset first sends facial motion data to control the mobile terminal to send an expression package to the peer, the mobile terminal can compute the expression similarity between the received facial image and the expression package images using a deep neural network that extracts facial-expression features; the network can be pre-trained on databases such as ImageNet, face recognition data, or facial expression data, and the extracted features are compared with the pre-processed expression packages in the library to select those that meet the user's needs. When several expression packages match with high similarity, a prompt can be sent to the headset and the matching result fed back to the user, so that the user manually selects the suitable expression package image, improving the matching accuracy between the facial image and the expression package image. After repeated learning, the mobile terminal can, on receiving the same facial motion data, select the expression package the user chooses most often based on the user's habits.
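A minimal sketch of the similarity matching and manual fallback described above might read as follows; the feature extractor is passed in as a stand-in for the pre-trained deep neural network, and the threshold value is an assumption.

```python
# Illustrative expression-package matching by cosine similarity of deep features.
import numpy as np

def match_expression_package(face_img, package_imgs, extract_features, threshold=0.8):
    f = extract_features(face_img)
    candidates = []
    for idx, pkg in enumerate(package_imgs):
        g = extract_features(pkg)
        cos = float(np.dot(f, g) / (np.linalg.norm(f) * np.linalg.norm(g)))
        if cos >= threshold:                  # similarity >= preset similarity
            candidates.append((cos, idx))
    candidates.sort(reverse=True)
    if len(candidates) > 1:
        # several packages match: prompt the headset so the user picks manually
        return ("ask_user", [idx for _, idx in candidates])
    if candidates:
        return ("send", candidates[0][1])     # send the matched package to the peer
    return ("none", None)
```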
Referring to fig. 8, optionally, when the facial motion data is determined to be emotion information data, the control method further includes:
Step S600, outputting the emotion information data to the mobile terminal so that, when a preset application of the mobile terminal is running, the preset application is controlled to record and store the emotion information data.
In this embodiment, the preset application is an emotion management application, which can record and store the user's expressions in real time so as to track the user's emotional changes over a preset period, e.g., a day, a week, or a month. When using the application, the user makes the corresponding facial movement, the headset camera acquires the facial motion data, and the expression/emotion is sent to the mobile terminal once face recognition confirms it. Furthermore, the facial motion data can correspond one-to-one with the application's generic text: on receiving a "happy" facial image, the mobile terminal knows the user's current emotion is "happy", records it as such in the application interface, and attaches the text "happy". Recording the user's daily emotional changes through the emotion management application can produce tables or charts, letting the user or medical staff see the emotional changes over a period at a glance, helping users understand their mental health and manage their emotions, and assisting in the prevention and treatment of illness.
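Recording each recognized emotion with its time (and optionally place), as described above, could be sketched as follows; the storage format is an assumption.

```python
# Illustrative record-and-store step of the emotion management application.
import json
import time

def record_emotion(log_path, emotion, place=None):
    entry = {"emotion": emotion, "time": time.strftime("%Y-%m-%d %H:%M:%S")}
    if place is not None:
        entry["place"] = place
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    # table or chart data for a day/week/month can later be aggregated from this log
```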
Referring to fig. 9, in an embodiment, before the step of receiving the facial image of the user acquired by the camera and performing image processing to obtain the facial motion data, the control method further includes:
Step S700, driving the camera from the storage position to the open position and controlling the camera to start working so as to acquire the facial image of the user.
In this embodiment, the camera has an open position and a storage position; a driving mechanism installed in the earphone body can drive the camera from the storage position to the open position and back. The camera function can be turned on and off based on a user-triggered camera control signal, or through communication with and control by the mobile terminal. In embodiments triggered by the user, after a trigger operation aimed at the camera function is received, the camera is driven from the storage position to the open position and controlled to start working. The user-triggered operation may include pressing a key on the earphone body, a voice trigger through the microphone, or a gesture trigger through a sensor on the earphone body such as a proximity or infrared sensor. In embodiments controlled by the mobile terminal, the user may press within the terminal's operation interface to open an application such as the "emotion management application" or a "social application", or configure a tap operation in the terminal to open/close the camera. The camera function can thus be triggered through a sensor, the touch screen, or the microphone of the terminal, so that the camera collects the user's facial expression while working.
Referring to fig. 10, in an embodiment, the head-mounted wireless headset includes a headband, a left earphone, and a right earphone respectively disposed at the two ends of the headband; there are at least two cameras, disposed on the left earphone and the right earphone respectively.
Before the step of receiving the facial image of the user acquired by the camera and performing image processing to obtain the facial motion data, the control method further includes:
Step S800, when a user-triggered camera control signal is received, driving the camera disposed on the left earphone and/or the right earphone from the storage position to the open position; and
Step S900, controlling the corresponding camera to work according to the camera control signal so as to acquire the facial image of the user.
In this embodiment, the earphone body may include one or two earphones. With two earphones, one is the master and the other the slave; the master may be the left earphone and the slave the right, or vice versa. The left and right earphones each integrate a wireless communication chip, such as a Bluetooth chip, and when working they connect to a mobile terminal such as a mobile phone, smart watch, or tablet computer to exchange signals with it. Cameras may be provided on both earphones, or on either one of the left and right earphones; with a single camera, it works or stops based on control by the earphone body. When both earphones carry cameras, the two cameras can work simultaneously or only one may be controlled to work; when working simultaneously, they can shoot the same facial part from different angles or shoot different parts, and the user's facial motion data is analyzed from the facial images collected by both.
The head-mounted wireless headset is provided with a headband 10 and an earphone body 20 detachably connected to the headband 10. The earphone body 20 also carries a driving mechanism 30 movably connected to it; the camera 40 is disposed on the driving mechanism 30 and has a storage position and an open position. Driven by the driving mechanism 30, the camera 40 moves between the storage position and the open position and can acquire the user's facial image when in the open position. Arranging the camera 40 on the earphone body to capture the user's facial movements meets the needs of different application scenarios: the user can capture the desired images and video while enjoying music, enriching the functions of the headset. Moreover, driven by the driving mechanism 30, the camera 40 can be stowed and deployed automatically according to the user's needs and can acquire the user's facial image automatically without relying on the mobile terminal, improving operating convenience.
The invention provides a head-mounted wireless headset.
Referring to figs. 1 to 4, in an embodiment of the invention, the head-mounted wireless headset includes:
a headband 10;
an earphone body 20 detachably connected to the headband 10;
a driving mechanism 30 movably connected to the earphone body;
a camera 40 disposed on the driving mechanism 30, the camera 40 having a storage position and an open position, the driving mechanism 30 being configured to drive the camera 40 between the storage position and the open position, and the camera 40 being able to acquire the user's facial image when in the open position; and
a processor 50, a memory, and a control program for the headset stored on the memory and executable on the processor, the control program implementing the steps of the control method described above when executed by the processor 50.
In this embodiment, the headband 10 and the earphone body 20 are detachably connected. The earphone body 20 generally has a left and a right earphone unit, and a single unit can also be used alone, i.e., either earphone can serve as master or slave. When both earphones work simultaneously, one is the master and the other the slave. The headset can transmit the mobile phone's audio signal over a Bluetooth link to one of the left and right earphones and forward the audio data to the other via Bluetooth, so that both ears play simultaneously.
The headband 10 and the earphones can be connected by plugging, snap-fit, an electrical connector, and so on. Specifically, the connection may run over a serial port such as I2C or a UART; once established, the headband 10 and the two earphones can exchange data signals such as power information, and the headband 10 can also read data stored in the earphones, e.g., their wireless communication addresses and the stored addresses of paired devices.
When the camera 40 is in the storage position, it is stowed together with the driving mechanism 30 in the storage cavity of the earphone body 20; when in the open position, it can face the user's lips, the lower half of the face, and so on, which helps the camera 40 capture the user's facial movements. The driving mechanism 30 can move from the storage position to the open position and back. In practice, when the user does not need the camera 40, the camera 40 and the driving mechanism 30 are stowed together in the storage cavity of the earphone body 20; when the user needs the camera 40, the driving mechanism 30 drives it to the open position. The driving mechanism 30 includes:
a motor accommodated in the storage cavity; and
a telescopic arm in transmission connection with the motor, the camera 40 being disposed on the telescopic arm. The telescopic arm can be realized with a lead screw, a worm gear, gears, and the like.
The headset can also be provided with an electronic control assembly, which includes an electronic control board disposed in the storage cavity of the earphone body 20 and carrying the processor 50, a power supply, a motor driver, and so on. The motor driver can be connected to the processor 50 and generates the corresponding motor drive signal from the processor's control signal to drive the motor; driven by the motor, the telescopic arm moves, which in turn moves the camera 40 mounted on it. One end of the telescopic arm is in transmission connection with the motor, the other end moves between the storage position and the open position, and the camera 40 is disposed at that other end, changing the camera's position relative to the earphone body 20: the camera 40 is either stowed in the earphone body 20 or moved away from it to the open position. The motor may be a stepper motor whose rotation angle is controlled by the processor 50; the angle maps to the camera's position, so that forward and reverse rotation move the camera 40 from the storage position to the open position or back. Alternatively, a DC motor may be used, with a Hall sensor collecting the rotation angle so that the motor is driven to the corresponding angle and the camera 40 moves from the storage position to the open position or back. The driving mechanism 30 may also be realized with an air cylinder or the like that moves the camera 40 between the storage position and the open position. The driving mechanism 30 can further adjust the camera's angle according to the user's face shape or the user's instructions, ensuring the camera 40 is well placed to acquire the user's facial motion data.
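The stepper-motor variant, in which the processor 50 maps a rotation angle to the camera position, might be sketched as below; `MotorDriver` and the angle constants are hypothetical hardware bindings, not part of the disclosure.

```python
# Illustrative mapping of rotation angle to camera position for a stepper motor.
STOWED_ANGLE = 0.0   # camera in the storage position
OPEN_ANGLE   = 90.0  # assumption: 90 degrees swings the telescopic arm fully out
STEP_ANGLE   = 1.8   # a typical full-step angle of a stepper motor

class MotorDriver:
    def step(self, direction, count):
        """Hypothetical: pulse the driver `count` steps; +1 forward, -1 reverse."""
        raise NotImplementedError

def move_camera(driver, current_angle, open_camera):
    """Rotate forward/reverse until the camera reaches the commanded position."""
    target = OPEN_ANGLE if open_camera else STOWED_ANGLE
    delta = target - current_angle
    driver.step(direction=1 if delta > 0 else -1,
                count=round(abs(delta) / STEP_ANGLE))
    return target  # the new angle, remembered as the camera position
```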
The head-mounted wireless headset is thus provided with a headband 10 and an earphone body 20 detachably connected to the headband 10. The earphone body 20 also carries a driving mechanism 30 movably connected to it; the camera 40 is disposed on the driving mechanism 30 and has a storage position and an open position. Driven by the driving mechanism 30, the camera 40 moves between the storage position and the open position and can acquire the user's facial image when in the open position. Arranging the camera 40 on the earphone body to capture the user's facial movements meets the needs of different application scenarios: the user can capture the desired images and video while enjoying music, enriching the functions of the headset. Moreover, driven by the driving mechanism 30, the camera 40 can be stowed and deployed automatically according to the user's needs and can acquire the user's facial image automatically without relying on the mobile terminal, improving operating convenience.
Referring to fig. 1 or 2, in an embodiment, the earphone body 20 includes:
a left earphone 21 and a right earphone 22 respectively disposed at the two ends of the headband 10; the driving mechanism 30 and the camera 40 are disposed on the left earphone 21 and/or the right earphone 22.
In this embodiment, the earphone body 20 may include one or two earphones. With two earphones, one is the master and the other the slave; the master may be the left earphone 21 and the slave the right earphone 22, or vice versa. The left earphone 21 and right earphone 22 each integrate a wireless communication chip such as a Bluetooth chip and, when working, connect through it to a mobile terminal such as a mobile phone, smart watch, or tablet computer to exchange signals. Cameras 40 may be provided on both earphones, or a camera 40 on either the left earphone 21 or the right earphone 22; with a single camera 40, it works or stops based on control by the earphone body 20. When both earphones carry cameras 40, the two can work simultaneously or only one may be controlled to work; when working simultaneously, the two cameras 40 can shoot the same facial part from different angles or shoot different parts.
Referring to fig. 3, in an embodiment, the head-mounted wireless headset further includes:
the processor 50, disposed in the storage cavity of the earphone body and electrically connected to the driving mechanism 30; the processor 50 is configured to control the driving mechanism 30 when a user-triggered camera control signal is received, so as to drive the camera 40 between the storage position and the open position.
In this embodiment, the head-mounted wireless headset may include, but is not limited to, an RF (Radio Frequency) unit, a microphone, a WiFi module, a Bluetooth module, an infrared transceiver module, an audio output unit, an interface unit, a memory, the processor, a power supply, and other components. Those skilled in the art will appreciate that the headset may include more or fewer components, combine some components, or arrange them differently. The processor is the control center of the headset; it connects all parts of the headset through various interfaces and lines, and performs the headset's functions and data processing by running or invoking the software programs, modules, and data stored in the memory, thereby monitoring the headset as a whole. The processor may include microprocessors such as an SoC or MCU, and a data processor 52 such as a single-chip microcomputer, DSP, FPGA, or CODEC.
The user-triggered camera control signal can be generated by one or more of an acceleration sensor, a gyroscope, a key, and the microphone. The acceleration sensor, gyroscope, and the like can be disposed in the storage cavity of the earphone body 20, while the key and microphone can be disposed on the earphone body 20; when the camera 40 is needed, the user triggers the headset through these components. With a key, different key signals can be distinguished by press duration, press count, and so on. With the microphone, control instructions spoken by the user, such as "turn on the camera" and "turn off the camera", are picked up and output to the processor 50. With an acceleration sensor, gyroscope, or the like, after the user puts on the headset, the built-in acceleration sensor detects the motion of the user tapping the earphone body 20 and outputs a corresponding detection signal to the processor 50, realizing the user trigger.
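The three trigger paths above (key, microphone, motion sensor) could be distinguished roughly as follows; the thresholds and signal names are assumptions.

```python
# Illustrative interpretation of user-triggered camera control signals.
LONG_PRESS_S    = 1.0  # press duration separating key signals (assumed)
TAP_THRESHOLD_G = 2.5  # acceleration spike treated as a tap on the ear cup (assumed)

def key_signal(press_duration_s):
    return "camera_toggle" if press_duration_s >= LONG_PRESS_S else "play_pause"

def voice_signal(transcript):
    if "turn on the camera" in transcript:
        return "camera_open"
    if "turn off the camera" in transcript:
        return "camera_close"
    return None

def tap_signal(accel_magnitude_g):
    return "camera_toggle" if accel_magnitude_g >= TAP_THRESHOLD_G else None
```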
Referring to fig. 3, in one embodiment, the processor 50 is further connected to the camera 40;
the processor 50 is configured to receive the user's facial image acquired by the camera 40, perform image processing to obtain the user's facial motion data, determine the data type from the facial motion data, and communicate with the mobile terminal to send the facial motion data to it.
When working, the head-mounted wireless headset can be paired with the mobile terminal via Bluetooth, or communicate over WiFi, infrared, radio frequency, and the like. The headset can receive and play audio data sent by the mobile terminal, and, when fitted with a microphone, convert picked-up voice into audio data and output it to the mobile terminal.
It can be understood that when the user shows different emotions (natural emotion exposure), needs to issue a silent operation instruction, or wants to convey an expression to the peer the mobile terminal is communicating with, the lower half of the user's face produces action behaviors of varying degree. This embodiment captures these expressive motion features through the camera 40 to obtain the motion data of the lower half of the face, and analyzes the motion data against a built expression database to determine the data type of the facial motion data. The headset can thus perform data analysis on the user's facial image acquired by the camera 40, confirm the data type of the facial movement in the current image, process the data accordingly, and send the processed data to the mobile terminal. The expressive motion features can include the specific expression features and the duration and angle of sustained expression features, among others.
Referring to fig. 4, further, the processor 50 includes:
the expression recognition processor 51 is electrically connected with the camera 40 and is used for acquiring a face image of the user and carrying out image processing to obtain face movement data of the user;
and the data processor 52, which is connected with the expression recognition processor 51 and is used for, when the data type determined from the facial movement data is control instruction type data, processing the control instruction type data and outputting it to the mobile terminal.
In this embodiment, the expression recognition processor 51 may process the face images transmitted from the camera 40 using an expression recognition algorithm or the like, and output facial movement data. The facial movement data are passed to the data processor 52 for processing: if the acquired facial movement data correspond to a silent audio control command, they are handed over in the form of a message queue; if they correspond to the user's natural emotional expression or to expression information, they may be transmitted to the mobile terminal in the form of BLE data or the like. When data is transmitted, the expression information can be sent to the mobile terminal through Bluetooth communication, which can control a social application on the mobile terminal to send a sticker, support management of the user's emotions, and operate audio/video applications, call applications, and the like. On this basis, the social application back end can develop new social content and enrich social activities; combining expression recognition with the conventional operation of the head-mounted wireless headset gives the user a richer audio-visual experience, since in social interaction not only can sound convey emotion, but the user's expression can also be transmitted to the other party. When applied to emotion management applications, it helps people understand their daily emotional changes, which not only facilitates emotion management for ordinary users but also provides a therapeutic aid for people who are withdrawn or depressed. In situations where it is inappropriate to speak, or where the user cannot speak, the head-mounted wireless headset can recognize the user's operation by capturing facial movements such as mouth shape, automatically control the mobile terminal to open the corresponding application, and control functions of audio playback applications such as pause, play, previous, and next.
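The routing between the message queue and BLE transmission described above might be sketched as follows; the queue usage and packet layout are illustrative assumptions rather than the patent's design.

    # Minimal routing sketch for the data processor 52: silent control
    # commands go into a message queue for local handling, while expression
    # or emotion information is serialized for BLE-style transmission.
    # The packet format is a hypothetical placeholder.
    import json
    import queue
    from typing import Optional

    command_queue: "queue.Queue[str]" = queue.Queue()

    def route(data_type: str, payload: str) -> Optional[bytes]:
        if data_type == "control_instruction":
            # Silent audio control commands are handed over via a message queue.
            command_queue.put(payload)
            return None
        if data_type in ("expression_info", "emotion_info"):
            # Expression/emotion data is packaged and sent as BLE data.
            return json.dumps({"type": data_type, "payload": payload}).encode("utf-8")
        return None

    print(route("expression_info", "smiling_sticker"))
    route("control_instruction", "pause_playback")
    print(command_queue.get())  # 'pause_playback'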
The present invention further provides a wireless communication system, which comprises a mobile terminal and the head-mounted wireless headset described above;
the mobile terminal is in communication connection with the head-mounted wireless headset.
For the detailed structure of the head-mounted wireless headset, reference can be made to the above embodiments, which are not repeated here. It can be understood that, since the head-mounted wireless headset is used in the wireless communication system of the present invention, the embodiments of the wireless communication system include all the technical solutions of all the embodiments of the headset and achieve exactly the same technical effects, which are likewise not repeated here.
The mobile terminal may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), or a notebook computer, or it may be a desktop computer.
In one embodiment, the mobile terminal includes a processor and a memory having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by the processor to enable a communication connection with the wireless headset.
In this embodiment, the execution policy of the mobile terminal differs according to the type of the received facial movement data. For example, when the facial movement data is a silent operation instruction, i.e., control instruction type data, the execution policy may be to open a corresponding preset application. When the facial movement data is recognized, a suitable recognition algorithm can be selected according to actual needs; the embodiment of the present invention does not specifically limit the recognition algorithm. Furthermore, the control information may be recognized using a preset model, for example a deep learning model such as a neural network. Before the model is used to recognize control information, it can be trained with preset samples so that it has better discrimination performance.
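To make the per-type execution policy concrete, a dispatch on the mobile terminal might look like the sketch below; the handler actions are hypothetical placeholders for the behaviors named above, not logic specified in the patent.

    # Minimal sketch of the mobile terminal's execution policy: dispatch on
    # the received data type. The concrete actions are placeholders.
    def on_facial_movement_data(data_type: str, payload: str) -> str:
        if data_type == "control_instruction":
            # A silent operation instruction opens a corresponding preset app.
            return f"open preset application and execute: {payload}"
        if data_type == "expression_info":
            return f"social application sends sticker: {payload}"
        if data_type == "emotion_info":
            return f"emotion management application records: {payload}"
        return "ignore unrecognized data type"

    print(on_facial_movement_data("control_instruction", "pause_playback"))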
The processor may include one or more processing cores, for example a 4-core or 8-core processor. The processor may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory is used to store at least one instruction, which is executed by the processor to implement the control method of a wireless headset provided by the method embodiments of the present invention. The mobile terminal may also be provided with a display screen for displaying a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen is a touch display screen, it can also capture touch signals on or above its surface; such a touch signal can be input to the processor as a control signal for processing.
The above description is only an alternative embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A control method of a wireless headset, the wireless headset comprising a camera, the control method comprising the following steps:
receiving a face image of a user acquired by the camera, and performing image processing to obtain facial movement data of the user;
determining the data type according to the facial movement data of the user, and sending the facial movement data to a mobile terminal; the data types at least comprise expression information type data, control instruction type data, and emotion information type data.
2. The control method of a wireless headset according to claim 1, wherein when the facial movement data is determined to be control instruction type data, the control method further comprises the following steps:
generating a control instruction corresponding to the control instruction type data;
and outputting the control instruction to the mobile terminal, so as to control the mobile terminal to execute an application control instruction corresponding to the facial movement data.
3. The control method of a wireless headset according to claim 1, wherein when the facial movement data is determined to be expression information type data, the control method further comprises the following step:
outputting the expression information type data to the mobile terminal, so as to control a preset application of the mobile terminal to send image expression information corresponding to the expression information type data when the preset application is in a running state.
4. The control method of a wireless headset according to claim 1, further comprising the following step:
when the facial movement data is determined to be emotion information type data, outputting the emotion information type data to the mobile terminal, so as to control a preset application of the mobile terminal to record and store the emotion information type data when the preset application is in a running state.
5. The control method of a wireless headset according to claim 1, wherein before the step of receiving the face image of the user acquired by the camera and performing image processing to obtain the facial movement data of the user, the control method further comprises the following step:
driving the camera to move from a storage position to an opening position, and controlling the camera to start working so as to acquire the face image of the user.
6. The control method of a wireless headset according to claim 1, wherein the wireless headset comprises a headband, a left earphone, and a right earphone, the left earphone and the right earphone being disposed at two ends of the headband; there are at least two cameras, respectively arranged on the left earphone and the right earphone;
before the step of receiving the face image of the user acquired by the camera and performing image processing to obtain the facial movement data of the user, the control method further comprises the following steps:
when a camera control signal triggered by the user is received, driving the camera arranged on the left earphone and/or the right earphone to move from the storage position to the opening position; and
controlling the corresponding camera to work according to the camera control signal, so as to obtain the face image of the user.
7. A wireless headset, comprising:
a headband;
an earphone main body detachably connected with the headband;
a driving mechanism movably connected to the earphone main body;
a camera arranged on the driving mechanism, the camera having a storage position and an opening position, wherein the driving mechanism is used for driving the camera to move between the storage position and the opening position, and when the camera is in the opening position, the camera can acquire a face image of a user; and
a processor, a memory, and a control program of the wireless headset stored on the memory and executable on the processor, wherein the control program, when executed by the processor, implements the steps of the control method of a wireless headset according to any one of claims 1 to 6.
8. The wireless headset of claim 7, wherein the earphone main body comprises:
a left earphone and a right earphone, respectively arranged at two ends of the headband, wherein the driving mechanism and the camera are arranged on the left earphone or the right earphone.
9. A wireless communication system, characterized in that the wireless communication system comprises a mobile terminal and the wireless headset according to claim 7 or 8;
the mobile terminal is in communication connection with the wireless headset.
10. The wireless communication system of claim 9, wherein the mobile terminal comprises a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to enable a communication connection with the wireless headset.
CN202111279245.9A 2021-10-29 2021-10-29 Head-wearing wireless earphone, control method thereof and wireless communication system Pending CN114115526A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111279245.9A CN114115526A (en) 2021-10-29 2021-10-29 Head-wearing wireless earphone, control method thereof and wireless communication system

Publications (1)

Publication Number Publication Date
CN114115526A (en) 2022-03-01

Family

ID=80379968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111279245.9A Pending CN114115526A (en) 2021-10-29 2021-10-29 Head-wearing wireless earphone, control method thereof and wireless communication system

Country Status (1)

Country Link
CN (1) CN114115526A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090262205A1 (en) * 2008-04-21 2009-10-22 Dana Stephen Smith Voice activated headset imaging system
CN104484037A (en) * 2014-12-12 2015-04-01 三星电子(中国)研发中心 Method for intelligent control by virtue of wearable device and wearable device
CN105404845A (en) * 2014-09-15 2016-03-16 腾讯科技(深圳)有限公司 Picture processing method and apparatus
KR20160137096A (en) * 2015-05-22 2016-11-30 그린맥스 주식회사 Integrated Camera Headset
CN106357927A (en) * 2016-10-31 2017-01-25 维沃移动通信有限公司 Playing control method and mobile terminal
CN106657650A (en) * 2016-12-26 2017-05-10 努比亚技术有限公司 System expression recommendation method and device, and terminal
CN112671632A (en) * 2019-10-14 2021-04-16 金德奎 Intelligent earphone system based on face recognition and information interaction and/or social contact method thereof

Similar Documents

Publication Publication Date Title
JP2024028390A (en) Electronic device for generating image including 3d avatar with facial movements reflected thereon, using 3d avatar for face
US11941323B2 (en) Meme creation method and apparatus
WO2019005332A1 (en) Providing living avatars within virtual meetings
KR20180057366A (en) Mobile terminal and method for controlling the same
WO2014099872A1 (en) Multi-purpose stylus for a computing device
JP7483940B2 (en) Facial image generating method, device, electronic device and readable storage medium
CN108475507A (en) Information processing equipment, information processing method and program
CN111954063B (en) Content display control method and device for video live broadcast room
KR102667547B1 (en) Electronic device and method for providing graphic object corresponding to emotion information thereof
WO2021170062A1 (en) Method for adjusting air gesture and terminal
EP3664425A1 (en) Method and device for generating and displaying an electronic avatar
WO2016206645A1 (en) Method and apparatus for loading control data into machine device
CN108986803A (en) Scenery control method and device, electronic equipment, readable storage medium storing program for executing
JP6675136B1 (en) Virtual space providing system, virtual space providing method and program
CN112449098B (en) Shooting method, device, terminal and storage medium
CN114115526A (en) Head-wearing wireless earphone, control method thereof and wireless communication system
KR20150029197A (en) Mobile terminal and operation method thereof
CN111292743B (en) Voice interaction method and device and electronic equipment
US20210200500A1 (en) Telepresence device action selection
Yim et al. Development of communication model for social robots based on mobile service
KR102661381B1 (en) Apparatus and method for controlling operation of robot capable of mounting accessory
CN109815360B (en) Audio data processing method, device and equipment
CN111491212A (en) Video processing method and electronic equipment
JP2004098252A (en) Communication terminal, control method of lip robot, and control device of lip robot
JP7371820B1 (en) Animation operation method, animation operation program and animation operation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination