CN109151642A - Intelligent earphone, intelligent earphone processing method, electronic device and storage medium - Google Patents

Intelligent earphone, intelligent earphone processing method, electronic device and storage medium

Info

Publication number
CN109151642A
CN109151642A (application CN201811033439.9A)
Authority
CN
China
Prior art keywords
scene
intelligent earphone
recording
processing
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811033439.9A
Other languages
Chinese (zh)
Other versions
CN109151642B (en)
Inventor
邓迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qinhai Technology Co.,Ltd.
Original Assignee
Beijing Jinchain Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jinchain Technology Co Ltd filed Critical Beijing Jinchain Technology Co Ltd
Priority to CN201811033439.9A priority Critical patent/CN109151642B/en
Publication of CN109151642A publication Critical patent/CN109151642A/en
Application granted granted Critical
Publication of CN109151642B publication Critical patent/CN109151642B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/0007: Image acquisition

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention provides an intelligent earphone, an intelligent earphone processing method, an electronic device and a storage medium. The intelligent earphone comprises: a scene recognition module for identifying the type of scene the intelligent earphone is currently in; and a processing module for performing processing adapted to a first-type scene when the identified scene is a first-type scene, and performing processing adapted to a second-type scene when the identified scene is a second-type scene. The first-type scene is a conversation scene without visual imaging content; the second-type scene is a scene with visual imaging content. The intelligent earphone provided by the invention can automatically recognize the external scene and adapt its processing as that scene changes, improving the user's experience of using the intelligent earphone and strengthening the user's reliance on it.

Description

Intelligent earphone, intelligent earphone processing method, electronic device and storage medium
Technical field
The present invention relates to the technical field of intelligent devices, and in particular to an intelligent earphone, an intelligent earphone processing method, an electronic device and a storage medium.
Background technique
An intelligent earphone is a new type of wearable smart device. Like other smart devices, an intelligent earphone can have an independent operating system, allow the user to install software, games and other programs, and access wireless networks through a mobile communication network.
However, a main problem with current intelligent earphones is that they remain confined to the single core function of playing sound and offer few other automated, intelligent functions. For example, current intelligent earphones rarely pay attention to changes in external factors and cannot perform adaptive, automatic interaction and processing in response to such changes. This causes inconvenience, prevents users from making full use of the intelligent earphone, and reduces the degree to which users rely on it.
Summary of the invention
In view of the problems in the prior art, the present invention provides an intelligent earphone, an intelligent earphone processing method, an electronic device and a storage medium.
Specifically, the present invention provides the following technical solutions.
In a first aspect, the present invention provides an intelligent earphone, comprising:
a scene recognition module for identifying the type of scene the intelligent earphone is currently in; and
a processing module for performing processing adapted to a first-type scene when the identified scene is a first-type scene, and performing processing adapted to a second-type scene when the identified scene is a second-type scene;
wherein the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content.
Further, the scene recognition module is specifically configured to:
identify the type of scene the intelligent earphone is currently in according to environment image information collected by an image acquisition device on the intelligent earphone and sound information collected by a sound acquisition device on the intelligent earphone;
identify the current scene as a first-type scene when the environment image information indicates that no visual imaging content is present and the sound information indicates that a conversation between two or more people is taking place; and
identify the current scene as a second-type scene when the environment image information indicates that visual imaging content is present.
Further, the processing module is specifically configured to:
when the identified scene is a first-type scene, automatically record the sound in the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically record video of the visual imaging content in the current scene or automatically capture individual images continuously, and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
Further, the intelligent earphone also includes a first trigger module, a second trigger module and a third trigger module.
The first trigger module is configured to, after receiving a first trigger signal from the user, automatically record the sound of the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
The second trigger module is configured to, after receiving a second trigger signal from the user, automatically record video of the visual imaging content in the current scene and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
The third trigger module is configured to, each time a third trigger signal is received from the user, automatically capture a single image of the visual imaging in the current scene.
Further, the scene recognition module is also configured to identify whether the type of scene the intelligent earphone is currently in is a third-type scene, the third-type scene being a sleep or rest scene.
Correspondingly, the processing module is also configured to, when the identified scene is a third-type scene, detect whether the user is snoring and, if so, issue a snoring reminder through a vibration module or a music reminder module mounted on the intelligent earphone so that the user adjusts their sleeping position.
Further, the processing module is also configured to, when the identified scene is a third-type scene, detect whether an earthquake or fire hazard exists in the current scene and, if so, issue a danger reminder through a vibration module or an alarm reminder module mounted on the intelligent earphone so that the user leaves the scene as soon as possible.
In a second aspect, the present invention also provides an intelligent earphone processing method, comprising:
identifying the type of scene the intelligent earphone is currently in;
when the identified scene is a first-type scene, performing processing adapted to the first-type scene; and
when the identified scene is a second-type scene, performing processing adapted to the second-type scene;
wherein the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content.
Further, performing processing adapted to the first-type scene when the identified scene is a first-type scene, and performing processing adapted to the second-type scene when the identified scene is a second-type scene, specifically includes:
when the identified scene is a first-type scene, automatically recording the sound in the current scene, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically recording video of the visual imaging content in the current scene or automatically capturing individual images continuously, and, if sound content is also detected in the current scene, simultaneously recording that sound, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
In a third aspect, the present invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the intelligent earphone processing method described in the first aspect.
In a fourth aspect, the present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the intelligent earphone processing method described in the first aspect.
From the above technical solutions it can be seen that the intelligent earphone provided by the present invention comprises a scene recognition module and a processing module. The scene recognition module identifies the type of scene the intelligent earphone is currently in; the processing module performs processing adapted to a first-type scene when the identified scene is a first-type scene, and processing adapted to a second-type scene when the identified scene is a second-type scene. Here, the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content. The intelligent earphone can therefore automatically adapt its processing to the scene it is currently in: it recognizes the current scene automatically, starts audio recording automatically if the current scene is a conversation scene without visual imaging content, and starts video recording or photo capture together with audio recording if the current scene has visual imaging content. This makes the intelligent earphone more intelligent; for example, it can help a user who forgot to record a meeting or take notes and would otherwise miss key information. It should also be noted that the intelligent earphone provided by the present invention is particularly suitable for occasions such as meetings, discussions, training sessions, lectures, film viewing and concerts. Because it adapts its processing to changes in the external scene, it improves the user experience and strengthens the user's reliance on the intelligent earphone.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an intelligent earphone provided by an embodiment of the present invention;
Fig. 2 is another schematic structural diagram of the intelligent earphone provided by an embodiment of the present invention;
Fig. 3 is a flowchart of an intelligent earphone processing method provided by another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device provided by a further embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an intelligent earphone. Referring to Fig. 1, the intelligent earphone includes a scene recognition module 11 and a processing module 12, wherein:
the scene recognition module 11 identifies the type of scene the intelligent earphone is currently in;
the processing module 12 performs processing adapted to a first-type scene when the identified scene is a first-type scene, and processing adapted to a second-type scene when the identified scene is a second-type scene;
wherein the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content.
It should be noted that the scene recognition module 11 in this embodiment can automatically identify the type of scene the intelligent earphone is currently in. In this embodiment, at least two scene types can be recognized automatically. The first is a conversation scene without visual imaging content, such as a conference, a talk or a discussion; such scenes generally consist only of two or more people talking or discussing, with no visual imaging content. The second is a scene with visual imaging content, such as a training session, a lecture, a film screening or a concert; such scenes generally feature visual imaging content, which may be, for example, a slide presentation, a video clip, or a stage, screen or light show. A second-type scene may or may not have sound content: a film, a slide show with automatic narration, or a slide show accompanied by a live explanation has sound, whereas a silent slide handout or a text document shown page by page does not.
Correspondingly, the processing module 12 automatically performs the adapted processing according to the scene recognition result. For example, when the current scene is recognized as a conversation scene without visual imaging content, audio recording is started automatically; when the current scene is recognized as a scene with visual imaging content, video recording or photo capture is started automatically together with audio recording.
For example, suppose a user organizes a small discussion of a particular problem in a meeting room but forgets to turn on the meeting room's recording system or the recording function on a mobile phone. After the meeting ends, the user will be unable to faithfully reconstruct what each participant said. If the user wears the intelligent earphone provided by this embodiment, this problem is solved: the intelligent earphone automatically identifies the current scene as a first-type scene and starts recording without any human intervention, which greatly helps a user who forgot to record the meeting or take notes and would otherwise miss key information.
Similarly, when a user attends a training session with a slide handout, it is often inconvenient to photograph the slides, and it frequently happens that the user only remembers at the end of the session that important slides should have been photographed and saved. If the user wears the intelligent earphone provided by this embodiment, this problem is also solved: the intelligent earphone automatically identifies the current scene as a second-type scene and starts video recording or photo capture together with audio recording, without any human intervention, which is highly convenient for the user.
It should be noted that, to support automatic video recording or photo capture, the intelligent earphone is generally provided with a video or camera module, for example a miniature camera. In addition, the intelligent earphone may be connected to other wearable devices and trigger them to record video or take photos; for example, it may be connected to camera glasses and trigger the glasses to record or photograph.
It should also be noted that the application scenarios discussed in this embodiment are scenes in which recording or filming the voice and video content of a meeting has been agreed to; scenes involving privacy or confidentiality are outside the scope of this discussion.
From the above it can be seen that the intelligent earphone provided by this embodiment includes a scene recognition module and a processing module. The scene recognition module identifies the type of scene the intelligent earphone is currently in, and the processing module performs processing adapted to a first-type scene (a conversation scene without visual imaging content) or to a second-type scene (a scene with visual imaging content), as appropriate. The intelligent earphone therefore automatically adapts its processing to the current scene: it recognizes the scene automatically, records audio automatically in a conversation scene without visual imaging content, and records video or takes photos together with audio in a scene with visual imaging content. This makes the intelligent earphone more intelligent and, for example, helps a user who forgot to record a meeting or take notes and would otherwise miss key information. The intelligent earphone is particularly suitable for meetings, discussions, training sessions, lectures, film viewing, concerts and similar occasions; by adapting its processing to changes in the external scene, it improves the user experience and strengthens the user's reliance on the intelligent earphone.
In an optional implementation, the scene recognition module 11 is specifically configured to:
identify the type of scene the intelligent earphone is currently in according to environment image information collected by an image acquisition device on the intelligent earphone and sound information collected by a sound acquisition device on the intelligent earphone;
identify the current scene as a first-type scene when the environment image information indicates that no visual imaging content is present and the sound information indicates that a conversation between two or more people is taking place; and
identify the current scene as a second-type scene when the environment image information indicates that visual imaging content is present.
In this embodiment, when judging whether visual imaging content is present in the current scene, a 360° environment image collected by the image acquisition device on the intelligent earphone can be examined for features such as a slide presentation being played, video playback, or stage and screen lighting. If such features are found, the current scene is judged to contain visual imaging content; otherwise it is judged not to. The presence of these features can be determined by feature matching or by analysing image pixel intensity values.
In addition, in this embodiment, to avoid misrecognition when identifying a first-type scene, or to improve recognition accuracy, an additional check can be made once the condition "the environment image information indicates that no visual imaging content is present and the sound information indicates a conversation between two or more people" is satisfied: the environment image information is used to judge whether the current scene is indoors, or further whether it is a meeting room, and only then is the scene identified as a first-type scene. Formal discussions generally take place indoors or in meeting rooms, and the purpose of this check is to avoid classifying a casual chat between friends as a first-type scene.
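The following Python sketch illustrates the decision logic just described. The pixel-intensity heuristic, the function names and the thresholds are assumptions made for illustration only; the patent itself does not prescribe any particular values or implementation.

```python
import numpy as np


def looks_like_projection(gray_frame: np.ndarray,
                          brightness_threshold: float = 200.0,
                          area_fraction: float = 0.10) -> bool:
    """Rough pixel-intensity check for 'a bright screen or slide is in view'.

    gray_frame is a 2-D array of 0-255 grayscale values from the panoramic
    camera. The threshold and area fraction are illustrative assumptions.
    """
    bright = gray_frame >= brightness_threshold
    return bright.mean() >= area_fraction


def classify_scene(has_visual_content: bool,
                   speaker_count: int,
                   is_indoor: bool) -> str:
    """Map cues from the image and sound acquisition devices to a scene type.

    has_visual_content: True if the environment image shows slide playback,
                        video playback, stage/screen lighting, etc.
    speaker_count:      number of distinct speakers heard in the audio stream.
    is_indoor:          True if the image suggests an indoor or meeting-room
                        setting (the extra check against casual chats).
    """
    if has_visual_content:
        return "second_type"   # presentation scene: video / photo capture
    if speaker_count >= 2 and is_indoor:
        return "first_type"    # conversation scene: audio recording + transcript
    return "unclassified"      # no automatic processing is started


# Example: two speakers in a meeting room, no bright screen in the image.
frame = np.full((120, 160), 90, dtype=np.uint8)               # uniformly dim image
print(classify_scene(looks_like_projection(frame), 2, True))  # -> first_type
```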
Based on the above, in an optional implementation, the processing module 12 is specifically configured to:
when the identified scene is a first-type scene, automatically record the sound in the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically record video of the visual imaging content in the current scene or automatically capture individual images continuously, and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
Thus, in this embodiment, when the identified scene is a first-type scene, the sound in the current scene is recorded automatically, each utterance is marked with the speaker ID derived from the speaker's audio characteristics while recording, and the recording is synchronously converted into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred. The final output is, for example, one audio file and one text file, where the text file takes the form:
Zhang San: speech content (optionally with speech duration);
Li Si: speech content;
Wang Wu: speech content;
Zhao Liu: speech content;
Zhang San: speech content;
Wang Wu: speech content;
Sun Jiu: speech content.
It can be seen that this processing not only faithfully preserves each participant's spoken contribution to the meeting, but also yields, once the meeting is over, a text file that records the entire proceedings in the manner of meeting minutes, replaying each person's remarks in order and clearly presenting, in text form, each speaker's contributions and the interaction between speakers. This result can help the person responsible for summarizing the meeting to organize the material, or the file can be used directly as the minutes, eliminating a large amount of clerical work. A speech-duration field can also be added to the text file.
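The sketch below shows one way such a text file could be assembled once upstream speaker identification and speech-to-text have produced time-stamped segments. The "speaker ID: speech content (duration)" layout follows the example above; the data types, field names and file name are assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Utterance:
    speaker_id: str    # label assigned from the speaker's audio characteristics
    text: str          # speech-to-text result for this segment
    start_s: float     # when the utterance actually occurred, in seconds
    end_s: float


def write_meeting_transcript(utterances: List[Utterance], path: str,
                             include_duration: bool = True) -> None:
    """Store speaker IDs and speech content in the order they actually occurred."""
    ordered = sorted(utterances, key=lambda u: u.start_s)
    with open(path, "w", encoding="utf-8") as f:
        for u in ordered:
            suffix = f" ({u.end_s - u.start_s:.0f} s)" if include_duration else ""
            f.write(f"{u.speaker_id}: {u.text}{suffix}\n")


# Example matching the layout shown above.
write_meeting_transcript(
    [Utterance("Zhang San", "Let's review last week's action items.", 0.0, 6.5),
     Utterance("Li Si", "The prototype firmware is ready for testing.", 6.5, 12.0)],
    "meeting_minutes.txt")
```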
In addition, in this embodiment, when the identified scene is a second-type scene, it is possible to choose, as required, between automatically recording video of the visual imaging content in the current scene and automatically capturing individual images continuously. For example, when only images of the slides being shown are needed, individual images are captured automatically at a default photo interval; when the complete slide presentation is needed, the visual imaging content in the current scene is recorded as video.
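As a sketch of the "capture individual images at a default interval" branch, the snippet below grabs frames periodically from a camera through OpenCV. Treating the earphone's camera as a standard OpenCV capture device, and the interval, shot count and file names, are assumptions for illustration.

```python
import time

import cv2  # assumes the earphone's camera is reachable as a standard capture device


def capture_slides_at_interval(interval_s: float = 10.0,
                               shots: int = 5,
                               device_index: int = 0) -> None:
    """Automatic individual-image continuous acquisition at a default interval."""
    cap = cv2.VideoCapture(device_index)
    try:
        for i in range(shots):
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(f"slide_{i:03d}.jpg", frame)  # save the current slide view
            time.sleep(interval_s)
    finally:
        cap.release()


capture_slides_at_interval(interval_s=10.0, shots=3)
```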
In an optional implementation, referring to Fig. 2, the intelligent earphone further includes a first trigger module 13, a second trigger module 14 and a third trigger module 15.
The first trigger module 13 is configured to, after receiving a first trigger signal from the user, automatically record the sound of the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
The second trigger module 14 is configured to, after receiving a second trigger signal from the user, automatically record video of the visual imaging content in the current scene and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
The third trigger module 15 is configured to, each time a third trigger signal is received from the user, automatically capture a single image of the visual imaging in the current scene.
Thus, in this embodiment, the intelligent earphone can also perform processing under the user's trigger: in addition to the automatic scene recognition and automatic processing described in the above embodiments, it can perform processing on demand when triggered by the user.
For example, a user may want to choose freely which visual material in a training session is captured and which is not. In that case, whenever content worth saving appears, the user can trigger the third trigger module 15, causing the intelligent earphone to automatically capture a single image of the visual imaging in the current scene. The third trigger module 15 is typically a key arranged on the intelligent earphone, for example key 3; when the user wants to capture the slide currently being shown, pressing key 3 triggers the intelligent earphone to capture a single image of the visual imaging in the current scene.
In this embodiment, the first trigger module 13, the second trigger module 14 and the third trigger module 15 can be used when the automatic scene recognition function of the intelligent earphone is switched off. They can also serve as a simple and convenient remedy when the automatic scene recognition on the intelligent earphone malfunctions or recognizes the scene too late.
It should also be noted that the processing of the third trigger module 15 can run in parallel with the processing of the first trigger module 13 or the second trigger module 14. For example, while the slides being played are recorded as video, particularly important slide pages can also be photographed (if necessary, two separate cameras can be provided, one for video and one for photos). The benefit of this is that a complete video is obtained together with images of the most important slide pages, allowing the user to review either as they see fit.
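A minimal sketch of how the three trigger modules could be dispatched from key presses follows; the key numbering (key 3 for single-image capture follows the example above) and the placeholder actions are assumptions, not details fixed by the patent.

```python
from typing import Callable, Dict


def start_recording() -> None:
    print("audio recording with speaker-tagged transcript started")


def start_video() -> None:
    print("video recording started (audio recording added if speech is detected)")


def capture_single_image() -> None:
    print("single image captured from the current scene")


# Hypothetical mapping of physical keys on the earphone to the trigger modules.
TRIGGER_ACTIONS: Dict[int, Callable[[], None]] = {
    1: start_recording,        # first trigger module
    2: start_video,            # second trigger module
    3: capture_single_image,   # third trigger module: one shot per key press
}


def on_key_press(key: int) -> None:
    action = TRIGGER_ACTIONS.get(key)
    if action is not None:
        action()


on_key_press(3)   # e.g. the user presses key 3 while an important slide is shown
```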
In an optional implementation, the scene recognition module 11 is also configured to identify whether the type of scene the intelligent earphone is currently in is a third-type scene, the third-type scene being a sleep or rest scene.
Correspondingly, the processing module 12 is also configured to, when the identified scene is a third-type scene, detect whether the user is snoring and, if so, issue a snoring reminder through a vibration module or a music reminder module mounted on the intelligent earphone so that the user adjusts their sleeping position.
Thus, in this embodiment, the scene recognition module 11 can also recognize a sleep or rest scene; the earphone then detects whether the user is snoring and, when snoring is detected, issues a reminder through the vibration module or music reminder module mounted on the intelligent earphone so that the user adjusts their sleeping position, helping to safeguard the user's health.
It should be noted that a sleep or rest scene can be recognized from information such as the user's heartbeat and blood pressure, or by capturing images, for example to determine whether the user is in a sleeping position.
It should also be noted that snoring can be detected with a snore sensor or with any other sensor capable of detecting snoring; the present invention does not limit this.
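The snippet below stands in for the snore detection described above with a very simple energy heuristic over sleep-time microphone frames; a real product would use a dedicated snore sensor or classifier, as the text notes, and the threshold values here are assumptions.

```python
import numpy as np


def detect_snoring(frame_energies: np.ndarray,
                   energy_threshold: float = 0.02,
                   min_bursts: int = 3) -> bool:
    """Count loud, repeated bursts in successive short microphone frames."""
    return int((frame_energies > energy_threshold).sum()) >= min_bursts


def snore_reminder(frame_energies: np.ndarray) -> str:
    """Return the reminder the earphone would issue in the sleep/rest scene."""
    if detect_snoring(frame_energies):
        return "trigger vibration or music reminder so the user changes position"
    return "no action"


# Example: a few quiet frames followed by several loud bursts.
print(snore_reminder(np.array([0.001, 0.001, 0.05, 0.04, 0.06, 0.001])))
```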
In an optional implementation, the processing module 12 is also configured to, when the identified scene is a third-type scene, detect whether an earthquake or fire hazard exists in the current scene and, if so, issue a danger reminder through a vibration module or an alarm reminder module mounted on the intelligent earphone so that the user leaves the scene as soon as possible.
Thus, in this embodiment, when the current scene is a sleep or rest scene, the earphone also detects whether an earthquake or fire hazard exists and, if so, issues a danger reminder through the vibration module or alarm reminder module mounted on the intelligent earphone so that the user leaves the scene as soon as possible, protecting the user's safety.
It should be noted that an earthquake or fire hazard can be detected with a vibration sensor, a temperature sensor, or any other sensor capable of detecting an earthquake or fire; the present invention does not limit this. It should also be noted that the optional implementations given in this embodiment can be freely combined provided their logic and structure do not conflict; the present invention does not limit this.
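Similarly, the danger check could be sketched as a simple threshold test over vibration and temperature readings; the sensor fields and limit values below are illustrative assumptions only, since the patent leaves the choice of sensors open.

```python
from dataclasses import dataclass


@dataclass
class SensorReadings:
    vibration_g: float     # from a vibration/shock sensor
    temperature_c: float   # from a temperature sensor


def detect_danger(readings: SensorReadings,
                  vibration_limit_g: float = 0.3,
                  temperature_limit_c: float = 55.0) -> bool:
    """Flag a possible earthquake (strong vibration) or fire (abnormal heat)."""
    return (readings.vibration_g >= vibration_limit_g
            or readings.temperature_c >= temperature_limit_c)


if detect_danger(SensorReadings(vibration_g=0.5, temperature_c=24.0)):
    print("trigger vibration or alarm reminder so the user leaves the scene")
```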
Based on the same inventive concept, another embodiment of the present invention further provides an intelligent earphone processing method. Referring to Fig. 3, the method comprises the following steps:
Step 101: identify the type of scene the intelligent earphone is currently in.
Step 102: when the identified scene is a first-type scene, perform processing adapted to the first-type scene; and when the identified scene is a second-type scene, perform processing adapted to the second-type scene; wherein the first-type scene is a conversation scene without visual imaging content and the second-type scene is a scene with visual imaging content.
In an optional implementation, step 101 can be realized as follows:
identifying the type of scene the intelligent earphone is currently in according to environment image information collected by an image acquisition device on the intelligent earphone and sound information collected by a sound acquisition device on the intelligent earphone;
identifying the current scene as a first-type scene when the environment image information indicates that no visual imaging content is present and the sound information indicates that a conversation between two or more people is taking place; and
identifying the current scene as a second-type scene when the environment image information indicates that visual imaging content is present.
In an optional implementation, step 102 can be realized as follows:
when the identified scene is a first-type scene, automatically recording the sound in the current scene, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically recording video of the visual imaging content in the current scene or automatically capturing individual images continuously, and, if sound content is also detected in the current scene, simultaneously recording that sound, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
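Putting steps 101 and 102 together, the method can be viewed as a small dispatch over the recognized scene type; the sketch below lists the actions started in each branch. The action strings and the function name are assumptions made for illustration.

```python
from typing import List


def process_current_scene(scene_type: str, sound_present: bool) -> List[str]:
    """Top-level dispatch of steps 101/102: actions started for each scene type."""
    transcript_actions = ["record audio", "tag speaker IDs",
                          "convert recording to a time-ordered transcript"]
    if scene_type == "first_type":
        return transcript_actions
    if scene_type == "second_type":
        actions = ["record video or capture slides at an interval"]
        if sound_present:
            actions += transcript_actions
        return actions
    return []


print(process_current_scene("second_type", sound_present=True))
```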
In an optional implementation, the intelligent earphone processing method provided by this embodiment further includes:
after receiving a first trigger signal from the user, automatically recording the sound of the current scene, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred;
after receiving a second trigger signal from the user, automatically recording video of the visual imaging content in the current scene and, if sound content is also detected in the current scene, simultaneously recording that sound, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
each time a third trigger signal is received from the user, automatically capturing a single image of the visual imaging in the current scene.
In an optional implementation, the intelligent earphone processing method provided by this embodiment further includes:
identifying whether the type of scene the intelligent earphone is currently in is a third-type scene, the third-type scene being a sleep or rest scene, and, when the identified scene is a third-type scene, detecting whether the user is snoring and, if so, issuing a snoring reminder through a vibration module or a music reminder module mounted on the intelligent earphone so that the user adjusts their sleeping position.
In an optional implementation, the intelligent earphone processing method provided by this embodiment further includes:
when the identified scene is a third-type scene, detecting whether an earthquake or fire hazard exists in the current scene and, if so, issuing a danger reminder through a vibration module or an alarm reminder module mounted on the intelligent earphone so that the user leaves the scene as soon as possible.
The intelligent earphone processing method provided by this embodiment can be implemented with the intelligent earphone provided by the above embodiments; its working principle and beneficial effects are similar, and the details can be found in the description of the above embodiments, so they are not repeated here.
It should be noted that the optional implementations given in this embodiment can be freely combined provided their logic and structure do not conflict; the present invention does not limit this.
Based on the same inventive concept, a further embodiment of the present invention provides an electronic device. Referring to Fig. 4, the electronic device specifically includes a processor 301, a memory 302, a communication interface 303 and a bus 304, wherein the processor 301, the memory 302 and the communication interface 303 communicate with one another through the bus 304, and the communication interface 303 is used for information transmission between related devices such as modelling software and an intelligent manufacturing equipment module library.
The processor 301 is used to call a computer program in the memory 302; when the processor executes the computer program, it implements all the steps of the first embodiment above, for example the following steps:
Step 101: identify the type of scene the intelligent earphone is currently in.
Step 102: when the identified scene is a first-type scene, perform processing adapted to the first-type scene; and when the identified scene is a second-type scene, perform processing adapted to the second-type scene; wherein the first-type scene is a conversation scene without visual imaging content and the second-type scene is a scene with visual imaging content.
Based on the same inventive concept, a further embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements all the steps of the first embodiment above, for example the following steps:
Step 101: identify the type of scene the intelligent earphone is currently in.
Step 102: when the identified scene is a first-type scene, perform processing adapted to the first-type scene; and when the identified scene is a second-type scene, perform processing adapted to the second-type scene; wherein the first-type scene is a conversation scene without visual imaging content and the second-type scene is a scene with visual imaging content.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element qualified by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that includes the element.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An intelligent earphone, characterized by comprising:
a scene recognition module for identifying the type of scene the intelligent earphone is currently in; and
a processing module for performing processing adapted to a first-type scene when the identified scene is a first-type scene, and performing processing adapted to a second-type scene when the identified scene is a second-type scene;
wherein the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content.
2. The intelligent earphone according to claim 1, characterized in that the scene recognition module is specifically configured to:
identify the type of scene the intelligent earphone is currently in according to environment image information collected by an image acquisition device on the intelligent earphone and sound information collected by a sound acquisition device on the intelligent earphone;
identify the current scene as a first-type scene when the environment image information indicates that no visual imaging content is present and the sound information indicates that a conversation between two or more people is taking place; and
identify the current scene as a second-type scene when the environment image information indicates that visual imaging content is present.
3. The intelligent earphone according to claim 2, characterized in that the processing module is specifically configured to:
when the identified scene is a first-type scene, automatically record the sound in the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically record video of the visual imaging content in the current scene or automatically capture individual images continuously, and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
4. The intelligent earphone according to any one of claims 1 to 3, characterized by further comprising a first trigger module, a second trigger module and a third trigger module;
wherein the first trigger module is configured to, after receiving a first trigger signal from the user, automatically record the sound of the current scene, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred;
the second trigger module is configured to, after receiving a second trigger signal from the user, automatically record video of the visual imaging content in the current scene and, if sound content is also detected in the current scene, simultaneously record that sound, mark each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously convert the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
the third trigger module is configured to, each time a third trigger signal is received from the user, automatically capture a single image of the visual imaging in the current scene.
5. The intelligent earphone according to any one of claims 1 to 3, characterized in that the scene recognition module is also configured to identify whether the type of scene the intelligent earphone is currently in is a third-type scene, the third-type scene being a sleep or rest scene;
correspondingly, the processing module is also configured to, when the identified scene is a third-type scene, detect whether the user is snoring and, if so, issue a snoring reminder through a vibration module or a music reminder module mounted on the intelligent earphone so that the user adjusts their sleeping position.
6. The intelligent earphone according to claim 5, characterized in that the processing module is also configured to, when the identified scene is a third-type scene, detect whether an earthquake or fire hazard exists in the current scene and, if so, issue a danger reminder through a vibration module or an alarm reminder module mounted on the intelligent earphone so that the user leaves the scene as soon as possible.
7. An intelligent earphone processing method, characterized by comprising:
identifying the type of scene the intelligent earphone is currently in;
when the identified scene is a first-type scene, performing processing adapted to the first-type scene; and
when the identified scene is a second-type scene, performing processing adapted to the second-type scene;
wherein the first-type scene is a conversation scene without visual imaging content, and the second-type scene is a scene with visual imaging content.
8. The method according to claim 7, characterized in that performing processing adapted to the first-type scene when the identified scene is a first-type scene, and performing processing adapted to the second-type scene when the identified scene is a second-type scene, specifically includes:
when the identified scene is a first-type scene, automatically recording the sound in the current scene, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred; and
when the identified scene is a second-type scene, automatically recording video of the visual imaging content in the current scene or automatically capturing individual images continuously, and, if sound content is also detected in the current scene, simultaneously recording that sound, marking each utterance with the speaker ID derived from the speaker's audio characteristics while recording, and synchronously converting the recording into a text file in which speaker IDs and the corresponding speech content are stored in the order in which they actually occurred.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the intelligent earphone processing method according to claim 7 or 8.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the intelligent earphone processing method according to claim 7 or 8.
CN201811033439.9A 2018-09-05 2018-09-05 Intelligent earphone, intelligent earphone processing method, electronic device and storage medium Active CN109151642B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811033439.9A CN109151642B (en) 2018-09-05 2018-09-05 Intelligent earphone, intelligent earphone processing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN109151642A true CN109151642A (en) 2019-01-04
CN109151642B CN109151642B (en) 2019-12-24

Family

ID=64827097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811033439.9A Active CN109151642B (en) 2018-09-05 2018-09-05 Intelligent earphone, intelligent earphone processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN109151642B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009147752A (en) * 2007-12-14 2009-07-02 Sharp Corp Data broadcast compatible mobile view terminal
CN101552826A (en) * 2009-05-04 2009-10-07 中兴通讯股份有限公司 Videophone service automatic answering machine and device
CN201414159Y (en) * 2009-05-06 2010-02-24 珠海市东耀企业有限公司 Multimedia terminal machine
CN102169642A (en) * 2011-04-06 2011-08-31 李一波 Interactive virtual teacher system having intelligent error correction function
CN102411833A (en) * 2011-08-02 2012-04-11 杭州威威网络科技有限公司 Networking alarm apparatus based on audio identification
CN102982800A (en) * 2012-11-08 2013-03-20 鸿富锦精密工业(深圳)有限公司 Electronic device with audio video file video processing function and audio video file processing method
CN103309855A (en) * 2013-06-18 2013-09-18 江苏华音信息科技有限公司 Audio-video recording and broadcasting device capable of translating speeches and marking subtitles automatically in real time for Chinese and foreign languages
CN103956014A (en) * 2014-05-04 2014-07-30 福建创高安防技术股份有限公司 Remote image recognition antitheft method and system
CN104038717A (en) * 2014-06-26 2014-09-10 北京小鱼儿科技有限公司 Intelligent recording system
CN204069102U (en) * 2014-08-07 2014-12-31 深圳市微思客技术有限公司 Interactive bluetooth earphone and mobile terminal
CN105407379A (en) * 2014-08-26 2016-03-16 天脉聚源(北京)教育科技有限公司 Synchronous recording method for multiple media
WO2016148825A1 (en) * 2015-03-19 2016-09-22 Intel Corporation Acoustic camera based audio visual scene analysis
CN107527623A (en) * 2017-08-07 2017-12-29 广州视源电子科技股份有限公司 Screen transmission method, device, electronic equipment and computer-readable recording medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887533A (en) * 2019-04-10 2019-06-14 王俊芝 Multi-functional customer mobile terminal control system and method
CN110866432A (en) * 2019-04-10 2020-03-06 王俊芝 Multifunctional user mobile terminal control system
CN109887533B (en) * 2019-04-10 2020-04-24 郑州轻工业大学 Multifunctional user mobile terminal control system and method
CN112019960A (en) * 2019-05-28 2020-12-01 深圳市冠旭电子股份有限公司 Method for monitoring scenes by utilizing earphone, device and readable storage medium
CN110248265A (en) * 2019-05-31 2019-09-17 湖北工业大学 Headphone with danger early warning function
CN111935581A (en) * 2020-08-13 2020-11-13 长春市长光芯忆科技有限公司 Electronic memory earphone

Also Published As

Publication number Publication date
CN109151642B (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN109151642A (en) A kind of intelligent earphone, intelligent earphone processing method, electronic equipment and storage medium
WO2021018154A1 (en) Information representation method and apparatus
JP6574937B2 (en) COMMUNICATION SYSTEM, CONTROL METHOD, AND STORAGE MEDIUM
CN109729266A (en) A kind of image capturing method, terminal and computer readable storage medium
WO2014156388A1 (en) Information processing device, notification state control method, and program
US20130169680A1 (en) Social system and method used for bringing virtual social network into real life
CN110177296A (en) A kind of video broadcasting method and mobile terminal
CN110618933A (en) Performance analysis method and system, electronic device and storage medium
US20080255840A1 (en) Video Nametags
CN109190509A (en) A kind of personal identification method, device and computer readable storage medium
CN107005629A (en) Information processor, information processing method and program
KR20170012979A (en) Electronic device and method for sharing image content
CN108174236A (en) A kind of media file processing method, server and mobile terminal
CN109660728A (en) A kind of photographic method and device
CN108763475B (en) Recording method, recording device and terminal equipment
JPWO2015068440A1 (en) Information processing apparatus, control method, and program
EP2402839A2 (en) System and method for indexing content viewed on an electronic device
CN109451178A (en) Video broadcasting method and terminal
CN114845129B (en) Interaction method, device, terminal and storage medium in virtual space
CN108174081B (en) A kind of image pickup method and mobile terminal
US20180232384A1 (en) Methods and apparatus for information capture and presentation
CN108089830B (en) Song information display methods, device and mobile terminal
CN114466283A (en) Audio acquisition method and device, electronic equipment and peripheral component method
CN109743448A (en) Reminding method and terminal device
CN107948714B (en) Video broadcasting method, equipment and storage medium under more people's video-see scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Deng Di

Inventor after: Cheng Fang

Inventor before: Deng Di

CP01 Change in the name or title of a patent holder

Address after: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee after: Taiyi Yunjia (Beijing) Technology Co.,Ltd.

Address before: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee before: Yuntai Jinke (Beijing) Technology Co.,Ltd.

Address after: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee after: Taiyi Yunjing (Beijing) Technology Co.,Ltd.

Address before: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee before: Taiyi Yunjia (Beijing) Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee after: Yuntai Jinke (Beijing) Technology Co.,Ltd.

Address before: Room A-5524, Building 3, No. 20 Yong'an Road, Shilong Economic Development Zone, Mentougou District, Beijing 102300

Patentee before: BEIJING JINLIAN TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221229

Address after: 101100 3586, Floor 1, Building 3, No. 6, Guoxing Second Street, Tongzhou District, Beijing

Patentee after: Beijing Taiyi Digital Technology Co.,Ltd.

Address before: 100000 C5-05, F1, Building 19, No. 10, Langjiayuan, Jianguomenwai, Chaoyang District, Beijing

Patentee before: Taiyi Yunjing (Beijing) Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20231107

Address after: 903-76, 9th Floor, Building 17, Yard 30, Shixing Street, Shijingshan District, Beijing, 100000

Patentee after: Beijing Qinhai Technology Co.,Ltd.

Address before: 101100 3586, Floor 1, Building 3, No. 6, Guoxing Second Street, Tongzhou District, Beijing

Patentee before: Beijing Taiyi Digital Technology Co.,Ltd.