CN208507176U - A kind of video audio interactive system - Google Patents
A kind of video audio interactive system
- Publication number
- CN208507176U CN208507176U CN201821007083.7U CN201821007083U CN208507176U CN 208507176 U CN208507176 U CN 208507176U CN 201821007083 U CN201821007083 U CN 201821007083U CN 208507176 U CN208507176 U CN 208507176U
- Authority
- CN
- China
- Prior art keywords
- information
- control information
- audio
- user
- interactive system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Reverberation, Karaoke And Other Acoustics (AREA)
Abstract
A video and audio interactive system, comprising: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein the acquisition unit is configured to collect environmental information within a preset area; the processor is configured to output corresponding first control information and second control information based on the collected environmental information; each scene device is configured to receive the first control information and operate according to it; and the audio playback device is configured to receive the second control information and output songs according to it. Embodiments of the utility model adjust the control of devices in the area according to the emotional state of users, improving the experience of users within the preset area.
Description
Technical field
This document relates to, but is not limited to, multimedia technology, and in particular to a video and audio interactive system.
Background technique
With social progress and development, more and more users pay attention to leisure and stress-relieving activities, and their requirements for venues and equipment keep rising. Singing is one way for users to release their emotions; traditional multi-person karaoke rooms and single-person karaoke booths are currently the two main venues users choose for singing.
At present, the interaction model of song selection and karaoke machines tends to be monotonous for users and cannot provide different users with a tailored leisure experience; the user experience remains to be improved.
Utility model content
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the utility model provide a video and audio interactive system capable of improving the interactive experience of users in a preset area during leisure.
An embodiment of the utility model provides a video and audio interactive system, comprising: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein,
the acquisition unit is configured to collect environmental information within a preset area;
the processor is configured to output corresponding first control information and second control information based on the collected environmental information;
each scene device is configured to receive the first control information and operate according to it;
the audio playback device is configured to receive the second control information and output songs according to it.
Optionally, the video and audio interactive system further includes an action demonstration unit and a mechanical device; the action demonstration unit is configured to control the mechanical device to display preset dance movements corresponding to the song currently played by the audio playback device.
Optionally, the mechanical device includes a dancing robot.
Optionally, the video and audio interactive system further includes a singing comment unit configured to output singing comment information evaluating the song performed by the user.
Optionally, the acquisition unit includes a microphone and an image capture device.
Optionally, the microphone includes: a first microphone for collecting audio information of the users' background sound; and a second microphone for inputting the user's singing audio.
Optionally, the image capture device is a camera with a face recognition function.
Optionally, the scene device includes an electric light; the brightness of the electric light is adjustable, its color is tunable, or both.
Compared with the related art, the technical solution of the present application includes: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein the acquisition unit collects environmental information within a preset area; the processor outputs corresponding first and second control information based on the collected environmental information; each scene device receives the first control information and operates according to it; and the audio playback device receives the second control information and outputs songs according to it. Embodiments of the utility model adjust the control of devices in the area according to the emotional state of users, improving the experience of users within the preset area.
Other features and advantages of the utility model will be set forth in the following description and will in part become apparent from the description, or be understood by implementing the utility model. The objectives and other advantages of the utility model can be realized and obtained through the structures particularly pointed out in the specification, claims, and drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the technical solutions of the utility model and constitute a part of the specification; together with the embodiments of this application, they serve to explain the technical solutions of the utility model and do not limit them.
Fig. 1 is a structural block diagram of a video and audio interactive system according to an embodiment of the utility model;
Fig. 2 is a flowchart of an information processing method according to an embodiment of the utility model;
Fig. 3 is a structural block diagram of an information processing system according to an embodiment of the utility model.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the utility model clearer, embodiments of the utility model are described in detail below with reference to the accompanying drawings. It should be noted that, in the absence of conflict, the embodiments in this application and the features in the embodiments may be combined with one another in any manner.
The steps shown in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
Fig. 1 is a structural block diagram of a video and audio interactive system according to an embodiment of the utility model. As shown in Fig. 1, the system comprises: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein,
the acquisition unit is configured to collect environmental information within a preset area;
the processor is configured to output corresponding first control information and second control information based on the collected environmental information;
each scene device is configured to receive the first control information and operate according to it;
the audio playback device is configured to receive the second control information and output songs according to it.
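The dataflow above, one environmental input mapped to two control outputs, can be sketched as follows. This is an illustrative sketch only: the field names, the quantified mood value, and the threshold are assumptions for illustration, not part of the utility model.

```python
# Illustrative sketch of the claimed dataflow: the processor maps collected
# environmental information to first control information (for scene devices)
# and second control information (for the audio playback device).
# All field names and the threshold are assumptions for illustration.

def process_environment(env_info: dict) -> tuple[dict, dict]:
    mood = env_info.get("mood_score", 5.0)  # assumed quantified mood, 0-10
    first_control = {
        "light_brightness": "high" if mood >= 6 else "low",
        "light_color": "colorful" if mood >= 6 else "warm",
    }
    second_control = {"song_style": "energetic" if mood >= 6 else "soothing"}
    return first_control, second_control

first, second = process_environment({"mood_score": 8.0})
```

A cheerful room (high mood score) thus yields bright, colorful lighting for the scene devices and energetic song output, matching the behavior described for the processor.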
Optionally, the video and audio interactive system further includes an action demonstration unit and a mechanical device; the action demonstration unit is configured to control the mechanical device to display preset dance movements corresponding to the song currently played by the audio playback device.
Optionally, the mechanical device includes a dancing robot.
Optionally, the video and audio interactive system further includes a singing comment unit configured to output singing comment information evaluating the song performed by the user.
Optionally, the acquisition unit includes a microphone and an image capture device.
Optionally, the microphone includes: a first microphone for collecting audio information of the users' background sound; and a second microphone for inputting the user's singing audio.
Optionally, the image capture device is a camera with a face recognition function.
Optionally, the scene device includes an electric light; the brightness of the electric light is adjustable, its color is tunable, or both.
It should be noted that the scene devices of embodiments of the utility model may also include other equipment that can adjust the atmosphere of an entertainment venue, such as smoke-generating equipment and display screens.
Compared with the related art, the technical solution of the present application includes: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein the acquisition unit collects environmental information within a preset area; the processor outputs corresponding first and second control information based on the collected environmental information; each scene device receives the first control information and operates according to it; and the audio playback device receives the second control information and outputs songs according to it. Embodiments of the utility model adjust the control of devices in the area according to the emotional state of users, improving the experience of users within the preset area.
Fig. 2 is a flowchart of an information processing method according to an embodiment of the utility model. As shown in Fig. 2, the method comprises:
Step 201: obtaining environmental index information according to environmental information within a preset area;
wherein the environmental information includes one or more kinds of multimedia information, and the environmental index information is used to quantify the emotional state of users within the preset area.
It should be noted that embodiments of the utility model may collect the environmental information using the related art; the preset area may be an entertainment venue, including a karaoke booth.
Optionally, obtaining the environmental index information in an embodiment of the utility model includes:
obtaining a reference score corresponding to each kind of multimedia information according to the one or more kinds of multimedia information included in the environmental information; and
calculating the environmental index information according to the reference scores of the various kinds of multimedia information;
wherein the multimedia information includes one or more of the following: facial image information, dialog information, and audio information of the users' background sound.
Optionally, in an embodiment of the utility model, obtaining the reference scores corresponding to the various kinds of multimedia information includes reference scores obtained by one or more of the following analyses:
identifying the facial image information of each user from the environmental information; determining the facial emotion information of each user through a preset first database; converting the determined facial emotion information into corresponding first scores according to a preset first scoring strategy; and determining the reference score corresponding to the facial image information according to the first scores of some or all of the users;
extracting the dialog information of users from the environmental information; comparing the extracted dialog information with preset word-frequency data to determine the dialog emotion information of users within the area; and converting the dialog emotion information into a reference score corresponding to the dialog information according to a preset second scoring strategy;
extracting the audio information of the users' background sound from the environmental information; analyzing the audio attributes of the extracted audio information to determine audio reference information within the preset area; and converting the audio reference information into a reference score corresponding to the audio information of the users' background sound according to a preset third scoring strategy;
wherein the audio attributes include one or more of the following: decibel level and audio features.
It should be noted that the first database may be an existing database in the related art through which users' facial expressions can be analyzed; for example, it can determine whether a user is happy, sad, fearful, angry, surprised, or disgusted, and the expression types can be added or removed according to analysis needs. In an embodiment of the utility model, the first scoring strategy may assign a corresponding first score to each facial expression; for example, first scores may be assigned from high to low in order of the user's mood from good to bad. After obtaining the first scores, an embodiment of the utility model may average the first scores of the users to obtain the reference score corresponding to the facial image information. The word-frequency data of embodiments of the utility model are collected or obtained in advance and are used to judge whether a user's mood is good or bad; for example, when a user says things such as "let's celebrate our graduation", "I'm in a great mood today", or "we have to sing our hearts out", the mood can be judged to be happy. The dialog emotion information may be keywords in the dialog information; after each keyword is scored for mood through the word-frequency data and the preset second scoring strategy, the reference score corresponding to the dialog information can be obtained.
Analyzing the audio information of the users' background sound may include: when the audio attributes include the decibel level, converting the decibel level into the reference score corresponding to the audio information of the users' background sound via the third scoring strategy, according to a preset relationship between decibel level and user mood. When the audio attributes include audio features, sample information such as laughter, applause, and cheering may be stored in advance; the audio information of the users' background sound is extracted from the environmental information, and after audio features such as laughter, applause, and cheering are determined from the audio information, the preset third scoring strategy treats these features as associated with user emotion and converts them into the reference score corresponding to the audio information of the users' background sound. When the audio attributes include both the decibel level and audio features, corresponding weights may be set for them, and the reference score is calculated in the above manner after the weights of the decibel level and the audio features are determined.
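The first scoring strategy for facial expressions described above can be sketched as follows; the expression-to-score values are assumptions for illustration, since the utility model does not fix concrete numbers.

```python
# Sketch of the first scoring strategy: map each recognized facial expression
# to a first score (values assumed), then average over users to obtain the
# reference score corresponding to the facial image information.

FIRST_SCORES = {"happy": 9.0, "surprised": 7.0, "neutral": 5.0,
                "sad": 3.0, "fearful": 2.0, "angry": 2.0, "disgusted": 2.0}

def face_reference_score(expressions: list[str]) -> float:
    scores = [FIRST_SCORES.get(e, 5.0) for e in expressions]  # 5.0 = fallback
    return sum(scores) / len(scores)

print(face_reference_score(["happy", "happy", "sad"]))  # (9 + 9 + 3) / 3 = 7.0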
Optionally, calculating the environmental index information in an embodiment of the utility model includes:
setting corresponding weights for the reference scores of the various kinds of multimedia information according to a preset scoring strategy; and
calculating the environmental index information according to the reference score of each kind of multimedia information and the corresponding weight.
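A minimal sketch of this weighted combination, with weights assumed for illustration:

```python
# Sketch: combine per-modality reference scores into the environmental index
# via preset weights. The modality names and weight values are assumptions.

def environment_index(ref_scores: dict, weights: dict) -> float:
    return sum(ref_scores[k] * weights[k] for k in ref_scores)

idx = environment_index(
    {"face": 8.0, "dialog": 6.0, "background_audio": 7.0},
    {"face": 0.5, "dialog": 0.3, "background_audio": 0.2},
)
# 8*0.5 + 6*0.3 + 7*0.2 = 7.2 on the assumed 0-10 scale
```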
Step 202: determining, according to the obtained environmental index information, control information for controlling devices within the preset area.
Optionally, determining the control information for controlling the devices within the preset area in an embodiment of the utility model includes:
determining, according to preset mapping relationships, the control information corresponding to the calculated environmental index information for controlling the devices within the preset area;
wherein the mapping relationships include a one-to-one correspondence between each piece of environmental index information and a piece of set control information; and the control information includes one or more of the following: control information for adjusting the color of lights; control information for adjusting the brightness of lights; and control information for adjusting the songs output by the audio-visual playback device.
It should be noted that embodiments of the utility model may divide the environmental index information into preset grades and set a one-to-one mapping between the reference scores of each grade and the corresponding control information.
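Such a one-to-one grade-to-control mapping might look like the sketch below; the grade labels and control values are assumptions for illustration.

```python
# One-to-one mapping from index grade to control information (values assumed).
CONTROL_MAP = {
    1: {"light_color": "warm dim", "song_style": "soothing"},
    2: {"light_color": "neutral", "song_style": "popular"},
    3: {"light_color": "bright colorful", "song_style": "energetic dance"},
}

def control_info_for_grade(grade: int) -> dict:
    return CONTROL_MAP[grade]

print(control_info_for_grade(3)["song_style"])  # energetic dance
```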
Step 203: controlling the devices within the preset area according to the determined control information.
Optionally, the information processing method of an embodiment of the utility model further includes:
when it is detected that the audio playback device outputs a song, displaying preset dance movements corresponding to the song through a preset mechanical device.
Optionally, the information processing method further includes:
collecting audio data while a user sings a song;
analyzing and determining similarity information between the collected audio data and the song sung by the user;
generating singing comment information according to the similarity information determined by the analysis and the environmental index information;
wherein the singing comment information is output as text or voice.
It should be noted that embodiments of the utility model may analyze and determine the similarity information based on existing methods in the related art.
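The utility model leaves the similarity analysis to existing methods. One hedged sketch, comparing assumed per-feature values (for example volume, pitch, rhythm) with cosine similarity, could be:

```python
import math

# Sketch only: compare the user's audio features against the song's reference
# features with cosine similarity. Feature choice and values are assumptions;
# a real implementation would use an existing similarity method.

def feature_similarity(user: list[float], song: list[float]) -> float:
    dot = sum(u * s for u, s in zip(user, song))
    norm_u = math.sqrt(sum(u * u for u in user))
    norm_s = math.sqrt(sum(s * s for s in song))
    return dot / (norm_u * norm_s)

sim = feature_similarity([0.8, 0.6, 0.7], [0.8, 0.6, 0.7])
# identical feature vectors give similarity 1.0
```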
Compared with the related art, the technical solution of the present application includes: obtaining environmental index information according to environmental information within a preset area; determining, according to the obtained environmental index information, control information for controlling devices within the preset area; and controlling the devices within the preset area according to the determined control information; wherein the environmental information includes one or more kinds of multimedia information, and the environmental index information is used to quantify the emotional state of users within the preset area. Embodiments of the utility model adjust the control of devices in the area according to the emotional state of users, improving the experience of users within the preset area.
Fig. 3 is a structural block diagram of an information processing system according to an embodiment of the utility model. As shown in Fig. 3, the system comprises: an analysis unit, a determination unit, and a control unit; wherein,
the analysis unit is configured to obtain environmental index information according to environmental information within a preset area;
the determination unit is configured to determine, according to the obtained environmental index information, control information for controlling devices within the preset area;
the control unit is configured to control the devices within the preset area according to the determined control information;
wherein the environmental information includes one or more kinds of multimedia information, and the environmental index information is used to quantify the emotional state of users within the preset area.
Optionally, the analysis unit of an embodiment of the utility model is specifically configured to:
obtain a reference score corresponding to each kind of multimedia information according to the one or more kinds of multimedia information included in the environmental information; and
calculate the environmental index information according to the reference scores of the various kinds of multimedia information;
wherein the multimedia information includes one or more of the following: facial image information, dialog information, and audio information of the users' background sound.
Optionally, the analysis unit obtains the reference scores corresponding to the various kinds of multimedia information by:
identifying the facial image information of each user from the environmental information; determining the facial emotion information of each user through a preset first database; converting the determined facial emotion information into corresponding first scores according to a preset first scoring strategy; and determining the reference score corresponding to the facial image information according to the first scores of some or all of the users;
extracting the dialog information of users from the environmental information; comparing the extracted dialog information with preset word-frequency data to determine the dialog emotion information of users within the area; and converting the dialog emotion information into a reference score corresponding to the dialog information according to a preset second scoring strategy;
extracting the audio information of the users' background sound from the environmental information; analyzing the audio attributes of the extracted audio information to determine audio reference information within the preset area; and converting the audio reference information into a reference score corresponding to the audio information of the users' background sound according to a preset third scoring strategy;
wherein the audio attributes include one or more of the following: decibel level and audio features.
Optionally, the analysis unit of an embodiment of the utility model calculates the environmental index information by:
setting corresponding weights for the reference scores of the various kinds of multimedia information according to a preset scoring strategy; and
calculating the environmental index information according to the reference score of each kind of multimedia information and the corresponding weight.
Optionally, the determination unit of an embodiment of the utility model is specifically configured to:
determine, according to preset mapping relationships, the control information corresponding to the calculated environmental index information for controlling the devices within the preset area;
wherein the mapping relationships include a one-to-one correspondence between each piece of environmental index information and a piece of set control information; and the control information includes one or more of the following: control information for adjusting the color of lights; control information for adjusting the brightness of lights; and control information for adjusting the songs output by the audio-visual playback device.
Optionally, the information processing system of an embodiment of the utility model further includes an action demonstration unit configured to:
when it is detected that the audio playback device outputs a song, display preset dance movements corresponding to the song through a preset mechanical device.
Optionally, the information processing system further includes a comment unit configured to:
collect audio data while a user sings a song;
analyze and determine similarity information between the collected audio data and the song sung by the user; and
generate singing comment information according to the similarity information determined by the analysis and the environmental index information;
wherein the singing comment information is output as text or voice.
The method of embodiments of the utility model is described in detail below through an application example, which is used only to illustrate the utility model and not to limit its scope of protection.
Application example
The application example of the utility model provides an information processing system that can be applied in entertainment venues such as karaoke booths, and that improves the user's recreational experience by interacting with the user's emotions.
When a user enters the entertainment venue (equivalent to the preset area), environmental information is collected by preset sensors; the environmental information includes one or more kinds of multimedia information. In this application example, image information of users in the preset area and audio information within the preset area can be collected by a camera, and the number of users can be determined by analyzing the image information.
The application example can identify the facial image information of each user included in the environmental information; through a preset first database, the facial image of each user is analyzed to determine each user's facial emotion information; the determined facial emotion information is converted into corresponding first scores according to a preset first scoring strategy; and the reference score corresponding to the facial image information is determined according to the first scores of some or all of the users.
The application example can extract the dialog information of users from the environmental information; the extracted dialog information is compared with preset word-frequency data to determine the dialog emotion information of users within the area; and the dialog emotion information is converted into a reference score corresponding to the dialog information according to a preset second scoring strategy.
The application example can also extract the audio information of the users' background sound from the environmental information; the decibel level and features of the extracted audio information are analyzed to determine the audio reference information within the preset area; and the audio reference information is converted into a reference score corresponding to the audio information of the users' background sound according to a preset third scoring strategy.
It should be noted that the reference scores obtained above can, to a certain extent, reflect whether users' moods are good or bad. Based on prior analysis of how strongly each reference score correlates with user mood, a corresponding weight is set for each reference score; after a weighted accumulation, an overall reference score is obtained. The overall reference score is mainly used to evaluate the atmosphere of the entertainment venue and can, to a certain extent, indicate the happiness index of the scene.
The application example divides the reference score into three grades. Assuming the calculated reference score ranges from 0 to 10, scores of 0 to 3 may be assigned to the first grade, 3 to 6 to the second grade, and 6 to 10 to the third grade. Taking three grades as an example, a corresponding piece of control information is mapped to each grade. Assuming that a higher grade indicates a better user mood, the control information mapped to a higher grade can provide users with a brighter, more colorful, and livelier scene; for example, when the current grade is determined to be the third grade, the lights can be adjusted to bright, colorful colors, and the songs output by the audio-visual playback device can be adjusted to hot, passionate dance tracks.
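The 0-3 / 3-6 / 6-10 bucketing in this example can be sketched directly; the boundary handling is an assumption, since the example does not say which grade a score of exactly 3 or 6 falls into.

```python
def grade_of(score: float) -> int:
    # Buckets from the application example: 0-3 -> grade 1, 3-6 -> grade 2,
    # 6-10 -> grade 3. Half-open intervals are an assumption.
    if score < 3:
        return 1
    if score < 6:
        return 2
    return 3

print([grade_of(s) for s in (1.0, 4.5, 7.2)])  # [1, 2, 3]
```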
The system of the application example may also include a robot capable of dancing; according to the song that the audio playback device is detected to output, the robot performs a dance display using the preset dance movements corresponding to that song. After a song matched to the entertainment scene is output based on the reference score, the dance display by the robot can further improve the user experience.
Optionally, this application example can also record the audio data of the user singing a song; analyze the recorded audio to determine similarity information between the acquired audio data and the song the user sang, including but not limited to similarity in volume, pitch, timbre, and rhythm relative to the song's features; and generate singing comment information from the determined similarity information and the reference score. For example, after the similarity information is calculated, the reference score is weighted and then added to the similarity information to obtain the singing comment information. When the reference score is relatively high, that is, when the user's mood is pleasant, the resulting singing comment is correspondingly more favorable. The embodiment can output the singing comment information interactively by voice, improving the user experience. The comment wording can be chosen from classic phrases used in talent competitions; for example, when the user's singing is excellent and the current reference score is high, a high score can be output together with the spoken comment: "Do you know what stunning is? The song is stunning, and so is your singing!", further lifting the atmosphere after the performance. In an entertainment singing scene, this application example uses an emotional mode of interaction to build for the user an experience composed of interaction and emotional feedback, effectively reducing the extra setup information the user must provide when first entering the scene; thereafter, interaction with the user is mainly emotional, which has a positive effect on improving the user's mood in this scene.
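The comment generation above can be sketched as follows. The text only specifies "weight the reference score, then add the similarity"; the weight value, score ranges, and comment phrases here are illustrative assumptions.

```python
def singing_comment(similarity: float, reference_score: float,
                    weight: float = 0.5) -> str:
    """Combine the song-similarity score (assumed 0-10) with the weighted
    mood-based reference score (0-10) and pick a spoken comment.
    Thresholds and phrases are hypothetical examples."""
    combined = similarity + weight * reference_score
    if combined >= 12:
        return ("Do you know what stunning is? "
                "The song is stunning, and so is your singing!")
    if combined >= 8:
        return "Great performance, keep it up!"
    return "Nice try, sing it again with more feeling!"
```

Because the reference score enters the sum, the same singing similarity yields a more favorable comment when the detected mood (and hence the reference score) is high, matching the behavior described above.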
Those of ordinary skill in the art will appreciate that all or some of the steps in the above method can be completed by a program instructing related hardware (such as a processor), and the program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc. Optionally, all or some of the steps of the above embodiments can also be implemented using one or more integrated circuits. Correspondingly, each module/unit in the above embodiments can be implemented in the form of hardware, for example by an integrated circuit realizing its corresponding function, or in the form of a software function module, for example by a processor executing a program/instructions stored in a memory to realize its corresponding function. The utility model is not limited to any particular combination of hardware and software.
Although the embodiments disclosed by the utility model are described above, they are only embodiments used to facilitate understanding of the utility model and are not intended to limit it. Any person skilled in the field to which the utility model pertains may make modifications and variations in the form and details of implementation without departing from the spirit and scope disclosed by the utility model; however, the scope of patent protection of the utility model shall still be subject to the scope defined by the appended claims.
Claims (8)
1. A video-audio interactive system, characterized by comprising: an acquisition unit, a processor, one or more scene devices, and an audio playback device; wherein,
the acquisition unit is used for: acquiring environmental information in a preset area;
the processor is used for: outputting corresponding first control information and second control information based on the acquired environmental information;
the scene device is used for: receiving the first control information and working according to the first control information;
the audio playback device is used for: receiving the second control information and outputting songs according to the second control information.
2. The video-audio interactive system according to claim 1, characterized in that the video-audio interactive system further comprises an action display unit and a mechanical device, wherein:
the action display unit is used for: controlling the mechanical device to display preset dance moves corresponding to the song currently played by the audio playback device.
3. The video-audio interactive system according to claim 2, characterized in that the mechanical device comprises a dancing robot.
4. The video-audio interactive system according to claim 1, characterized in that the video-audio interactive system further comprises a singing comment unit, which is used for:
outputting singing comment information evaluating the song sung by the user.
5. The video-audio interactive system according to any one of claims 1 to 4, characterized in that the acquisition unit comprises: a microphone and an image acquisition device.
6. The video-audio interactive system according to claim 5, characterized in that the microphone comprises:
a first microphone for acquiring audio information of the user's ambient sound; and
a second microphone for inputting the user's singing audio.
7. The video-audio interactive system according to claim 5, characterized in that the image acquisition device is a camera with a face recognition function.
8. The video-audio interactive system according to claim 5, characterized in that the scene device comprises an electric light,
the electric light being adjustable in brightness, adjustable in color, or adjustable in both brightness and color.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201821007083.7U CN208507176U (en) | 2018-06-28 | 2018-06-28 | A kind of video audio interactive system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN208507176U true CN208507176U (en) | 2019-02-15 |
Family
ID=65282534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201821007083.7U Active CN208507176U (en) | 2018-06-28 | 2018-06-28 | A kind of video audio interactive system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN208507176U (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109893872A (en) * | 2019-04-18 | 2019-06-18 | 青海柏马教育科技有限公司 | A kind of shared dancing system |
CN115079876A (en) * | 2021-03-12 | 2022-09-20 | 北京字节跳动网络技术有限公司 | Interactive method, device, storage medium and computer program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Dimitropoulos et al. | Capturing the intangible an introduction to the i-Treasures project | |
Castellano et al. | Automated analysis of body movement in emotionally expressive piano performances | |
CN106383676B (en) | Instant photochromic rendering system for sound and application thereof | |
Friberg | A fuzzy analyzer of emotional expression in music performance and body motion | |
CN111145777A (en) | Virtual image display method and device, electronic equipment and storage medium | |
CN107274876A (en) | A kind of audition paints spectrometer | |
CN208507176U (en) | A kind of video audio interactive system | |
WO2022242706A1 (en) | Multimodal based reactive response generation | |
CN111554303A (en) | User identity recognition method and storage medium in song singing process | |
CN112799771A (en) | Playing method and device of dynamic wallpaper, electronic equipment and storage medium | |
WO2022041192A1 (en) | Voice message processing method and device, and instant messaging client | |
Brendel et al. | Building a System for Emotions Detection from Speech to Control an Affective Avatar. | |
WO2023087932A1 (en) | Virtual concert processing method and apparatus, and device, storage medium and program product | |
CN104754110A (en) | Machine voice conversation based emotion release method mobile phone | |
CN108875047A (en) | A kind of information processing method and system | |
Fabiani et al. | Systems for interactive control of computer generated music performance | |
CN112235183B (en) | Communication message processing method and device and instant communication client | |
CN111182409B (en) | Screen control method based on intelligent sound box, intelligent sound box and storage medium | |
CN110767204B (en) | Sound processing method, device and storage medium | |
Castellano et al. | User-centered control of audio and visual expressive feedback by full-body movements | |
WO2020154883A1 (en) | Speech information processing method and apparatus, and storage medium and electronic device | |
Fabiani et al. | Interactive sonification of expressive hand gestures on a handheld device | |
CN106649643B (en) | A kind of audio data processing method and its device | |
Hirai et al. | Latent topic similarity for music retrieval and its application to a system that supports DJ performance | |
CN112752142B (en) | Dubbing data processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | |