CN103531216A - Audio-video playing device and method - Google Patents

Audio-video playing device and method

Info

Publication number
CN103531216A
Authority
CN
China
Prior art keywords
mentioned
audio
play mode
voice data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210229674.XA
Other languages
Chinese (zh)
Inventor
白翼铭
李锦昇
徐瑞富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hannstar Display Corp
Original Assignee
Hannstar Display Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hannstar Display Corp filed Critical Hannstar Display Corp
Priority to CN201210229674.XA
Publication of CN103531216A
Legal status: Pending (current)

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses an audio-video playing device and method. The device comprises: a storage unit for storing a plurality of audio data; an image extraction unit for extracting an external image; a processing unit for obtaining an expression state of a user from the external image, deciding a play mode according to the expression state, and obtaining corresponding audio data and image data according to the play mode; a loudspeaker unit for outputting the corresponding audio data according to the play mode; and a display unit for displaying the corresponding image data according to the play mode. With the device and method of the invention, suitable music or related images can be provided according to the user's current use state and mood so as to relax body and mind, and related functional feedback can also be given in accordance with the user's habits.

Description

Audio-video playing device and method
Technical field
The present invention relates to an audio-video playing device, and more particularly to an audio-video playing device that plays audio-video content according to the user's state.
Background
Owing to the rapid evolution of semiconductor technology, more transistors can be accommodated in the same area, so the functions of electronic products are far stronger than before, while their size keeps trending toward lighter, thinner and smaller designs. Current televisions and displays not only offer higher resolution than before, providing clearer image quality and a better visual experience for the user, but are also equipped with better loudspeakers, even supporting 5.1-channel or Dolby audio output, so that a user can listen to high-quality music through the audio hardware of an LCD television. However, a conventional television or audio-video playing device merely lets the user select television programs or other audio-video data to play; such devices lack interaction with the user.
Summary of the invention
To overcome the defects of the prior art, an audio-video playing device according to an embodiment of the invention comprises: a storage unit, storing a plurality of audio data; an image extraction unit, receiving an external image; a processing unit, obtaining an expression state of a user from the external image, deciding a play mode according to the expression state, and obtaining the corresponding audio data and an image data according to the play mode; a loudspeaker unit, outputting the corresponding audio data according to the play mode; and a display unit, displaying the corresponding image data according to the play mode.
An audio-video playing method according to an embodiment of the invention, applied to an audio-video playing device, comprises: receiving an external image; obtaining an expression state of a user according to the external image; deciding a play mode according to the expression state; obtaining corresponding audio data and image data from a storage unit according to the play mode; outputting the corresponding audio data through a loudspeaker unit; and displaying the corresponding image data through a display unit.
The invention can provide suitable music or related images according to the user's different use states or moods, so as to relax body and mind, and can also give related functional feedback in accordance with the user's habits.
Brief description of the drawings
The disclosure can be more fully understood by reading the following detailed description in conjunction with the examples shown in the accompanying drawings, in which:
Fig. 1 shows a block diagram of the audio-video playing device according to an embodiment of the invention.
Fig. 2 shows an operational flowchart of an embodiment of the audio-video playing method for the audio-video playing device shown in Fig. 1.
The reference numerals are described as follows:
100: audio-video playing device;
110: image extraction unit;
120: storage unit;
130: processing unit;
140: loudspeaker unit;
150: display unit;
160: network communication unit.
Detailed description of the embodiments
The following description presents various embodiments by which the invention can be carried out. It is intended to illustrate the basic concepts of the invention and is not meant to be limiting; the scope of the invention is best defined by the appended claims.
A conventional television device only passively lets the user select programs to watch or films to play, and does not actively provide an audio-visual situation suited to the user's current emotional state. The invention provides an audio-video playing device that can offer suitable music or related images according to the user's different use states or moods, so as to relax body and mind, and that can also give related functional feedback in accordance with the user's habits.
Fig. 1 shows a block diagram of an audio-video playing device 100 according to an embodiment of the invention. In this embodiment, the audio-video playing device 100 comprises an image extraction unit 110, a storage unit 120, a processing unit 130, a loudspeaker unit 140, a display unit 150 and a network communication unit 160.
In some embodiments, while the user is watching a television program on the audio-video playing device 100, the image extraction unit 110 extracts an external image and sends it to the processing unit 130. The processing unit 130 then obtains the user's facial expression state and the information of the television program being watched from the extracted external image, takes the correspondence between the obtained facial expression state and the television program information as a sample, and stores the samples and their recognition results to build a recognition database, so that facial emotion recognition can be provided. Beyond basic facial postures, the facial posture features are further classified and detected, so that the user's current mood can be correctly assigned to one of the pre-built models, such as happy, sad or tired. Emotion recognition can mainly be carried out with a facial action coding system: action units are used to describe the movements of the face, each action unit representing the instantaneous muscle activity of a facial change, and the facial features in images and video are tracked accordingly; the different forms of expression are then classified using these action units. The facial actions in an image are decomposed into feature vectors, and the recognition result is the preset model in the recognition database whose feature vector is at the smallest distance from the input.
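For illustration only, the nearest-model matching step described above can be expressed as a minimal sketch; the emotion models, the length of the action-unit vectors and the labels are assumptions made for this sketch and are not part of the patent.

```python
import numpy as np

# Hypothetical pre-built emotion models: each is a reference feature vector
# of facial action-unit intensities stored in the recognition database.
EMOTION_MODELS = {
    "happy": np.array([0.9, 0.1, 0.8, 0.0]),
    "sad":   np.array([0.1, 0.8, 0.0, 0.7]),
    "tired": np.array([0.2, 0.3, 0.1, 0.9]),
}

def recognize_emotion(action_units: np.ndarray) -> str:
    """Return the emotion whose model vector is closest to the input vector."""
    distances = {name: np.linalg.norm(action_units - model)
                 for name, model in EMOTION_MODELS.items()}
    return min(distances, key=distances.get)

# Example: a frame whose action units resemble the "sad" model.
print(recognize_emotion(np.array([0.15, 0.75, 0.05, 0.65])))  # -> "sad"
```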
Therefore, once the processing unit 130 obtains the user's facial expression state, it can judge the user's current state against the recognition database and then take corresponding action. For example, when the processing unit 130 judges that the user is in a low mood, it can switch the play mode of the audio-video playing device 100 to a relaxation mode; when it judges that the user has fallen asleep, it can switch the play mode of the audio-video playing device 100 to a sleep mode. On the other hand, when the processing unit 130 judges that the user is not in a low mood, no mode switch is needed, and playback continues in the normal mode according to the user's settings and selections.
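As a hedged illustration, the mode decision described above can be written as a simple mapping from the recognized expression state to a play mode; the PlayMode enum and the state labels are assumptions for this sketch, not definitions taken from the patent.

```python
from enum import Enum

class PlayMode(Enum):
    NORMAL = "normal"
    RELAXATION = "relaxation"
    SLEEP = "sleep"

def decide_play_mode(expression_state: str) -> PlayMode:
    """Map the recognized expression state to a play mode (sketch only)."""
    if expression_state == "sad":      # user in a low mood -> relaxation mode
        return PlayMode.RELAXATION
    if expression_state == "asleep":   # user has fallen asleep -> sleep mode
        return PlayMode.SLEEP
    return PlayMode.NORMAL             # otherwise keep playing normally
```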
In some embodiments, when the processing unit 130 judges from the recognition database that the user is in a low mood, the processing unit 130 switches the audio-video playing device 100 from the normal play mode to the relaxation mode. The processing unit 130 then obtains the audio data corresponding to the relaxation mode from the storage unit 120. For example, the audio data in the storage unit 120 may be classified into a plurality of categories in advance; alternatively, when corresponding audio data needs to be obtained, the processing unit 130 analyzes the audio data in the storage unit 120 according to its frequency, pitch, rhythm and tonality, for instance by computing the Mel-frequency cepstral coefficients (MFCC) of a segment of the audio data to obtain relevant features for classification and recognition, and the classified audio data is assigned to the corresponding play modes, so that music matching the user's situation can be obtained. In other embodiments, the processing unit 130 may also download audio data corresponding to the relaxation mode from a network through the network communication unit 160, for example by connecting to a preset server or by searching, and store the obtained audio data in the storage unit 120. It should be appreciated that the obtained audio data may be multimedia data that includes image data.
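As a minimal sketch of the MFCC-based analysis mentioned above, the snippet below computes an average MFCC vector for one audio clip; it assumes the third-party librosa library and an arbitrary 30-second analysis window, neither of which is specified by the patent.

```python
import librosa
import numpy as np

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return a fixed-length MFCC feature vector for one audio clip."""
    y, sr = librosa.load(path, duration=30.0)            # analyze a ~30 s segment
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                              # average over time frames

# The resulting vectors could then be fed to any off-the-shelf classifier
# (e.g. nearest-centroid or an SVM) that labels clips as "relaxation",
# "sleep", "normal", etc., and the labels are mapped to play modes.
```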
After the processing unit 130 obtains the corresponding audio data, the loudspeaker unit 140 outputs the corresponding audio data for a predetermined period, for example 30 minutes. Meanwhile, during this predetermined period, the display unit 150 can show image data corresponding to the relaxation mode, or show image data such as photos stored in the storage unit 120. It should be noted that if the corresponding audio data output by the loudspeaker unit 140 is multimedia data that includes image data, the display unit 150 displays that image data accordingly. In addition, after the relaxation mode has lasted for the predetermined period, the processing unit 130 switches the play mode back to the original normal mode; that is, the loudspeaker unit 140 stops outputting the corresponding audio data, the display unit 150 stops displaying the image data corresponding to the relaxation mode, and the device returns to its original audio-video play mode. In other embodiments, the display unit 150 can also show information and settings related to playback, so that the user can confirm the current play mode and change the related settings (for example, the length of the predetermined period for outputting the corresponding audio data, whether the device powers off automatically after the play mode ends, whether the display unit 150 is turned off, and so on).
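A minimal sketch of the timed relaxation playback described above follows; the speaker, display and storage objects and their methods are hypothetical placeholders, and the 30-minute period is only the example value from the text.

```python
import threading

RELAXATION_PERIOD_S = 30 * 60      # assumed 30-minute predetermined period

def enter_relaxation_mode(speaker, display, storage):
    """Play relaxation audio/images for a fixed period, then revert (sketch)."""
    speaker.play(storage.get_audio("relaxation"))
    display.show(storage.get_images("relaxation"))

    def revert():                   # called when the predetermined period ends
        speaker.stop()
        display.stop()
        # ...switch the play mode back to the normal mode here...

    threading.Timer(RELAXATION_PERIOD_S, revert).start()
```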
In some embodiments, when the processing unit 130 judges from the recognition database that the user has fallen asleep, the processing unit 130 switches the audio-video playing device 100 from the normal play mode to the sleep mode. The processing unit 130 then obtains the audio data corresponding to the sleep mode from the storage unit 120, or downloads audio data corresponding to the sleep mode from a network through the network communication unit 160 and stores the audio data related to the sleep mode in the storage unit 120.
After the processing unit 130 obtains the audio data related to the sleep mode, the loudspeaker unit 140 outputs the corresponding audio data for a predetermined period, for example 20 minutes, while the display unit 150 is disabled to reduce power consumption. In addition, after the sleep mode has lasted for a predetermined period, the processing unit 130 powers off the audio-video playing device 100. Therefore, when the user has fallen asleep, the audio-video playing device 100 can actively provide suitable slow, relaxing music so that the user sleeps better, and can power off at a suitable time to avoid unnecessary energy consumption.
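Similarly, the sleep-mode behavior could be sketched as below; the object interfaces are hypothetical, the 20-minute period is the example value from the text, and the delay before power-off is an assumption, since the patent only states that the device powers off after a predetermined period.

```python
import threading

SLEEP_AUDIO_PERIOD_S = 20 * 60   # assumed 20-minute playback period
POWER_OFF_DELAY_S = 40 * 60      # assumed total delay before power-off

def enter_sleep_mode(speaker, display, storage, power):
    """Play sleep audio with the display disabled, then power off (sketch)."""
    display.disable()                           # reduce power consumption
    speaker.play(storage.get_audio("sleep"))
    threading.Timer(SLEEP_AUDIO_PERIOD_S, speaker.stop).start()
    threading.Timer(POWER_OFF_DELAY_S, power.off).start()
```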
The invention is further described below with reference to the flowchart. Fig. 2 shows an operational flowchart of an embodiment of the audio-video playing method for the audio-video playing device shown in Fig. 1. In step S202, the image extraction unit 110 extracts an external image to obtain an image of the user's expression and sends it to the processing unit 130. In step S204, the processing unit 130 compares the obtained image of the user's facial expression with the recognition database to judge the user's current state. When the processing unit 130 judges that the user is not in a low mood, the method proceeds to step S206, in which the play mode is kept in the normal mode and audio-video playback is carried out according to the user's current settings and selections. When the processing unit 130 judges that the user is in a low mood, the method proceeds to step S208, in which the play mode of the audio-video playing device 100 is switched to the relaxation mode. When the processing unit 130 judges that the user has fallen asleep, the method proceeds to step S214, in which the play mode of the audio-video playing device 100 is switched to the sleep mode.
In step S208, after switching to the relaxation mode, the processing unit 130 obtains the audio data corresponding to the relaxation mode from the storage unit 120, or downloads audio data corresponding to the relaxation mode from a network through the network communication unit 160 and stores the obtained audio data in the storage unit 120. The method then proceeds to step S210: for a predetermined period, the loudspeaker unit 140 outputs the audio data obtained by the processing unit 130, and the display unit 150 can simultaneously show the image data corresponding to the relaxation mode. After the loudspeaker unit 140 has output the obtained audio data for the predetermined period, the method proceeds to step S212, in which the processing unit 130 switches the play mode back to the original normal mode; that is, the loudspeaker unit 140 stops outputting the corresponding audio data, the display unit 150 stops displaying the image data corresponding to the relaxation mode, and the device returns to its original audio-video play mode.
After switching to the sleep mode in step S214, the processing unit 130 likewise obtains the audio data corresponding to the sleep mode from the storage unit 120, or downloads audio data corresponding to the sleep mode from a network through the network communication unit 160 and stores the audio data related to the sleep mode in the storage unit 120. The method then proceeds to step S216: for a predetermined period, the loudspeaker unit 140 outputs the audio data obtained by the processing unit 130, and the processing unit 130 disables (turns off) the display unit 150 to reduce power consumption. Then, after the sleep mode has lasted for a predetermined period, the method proceeds to step S218, in which the processing unit 130 powers off the audio-video playing device 100.
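Tying the flowchart steps together, one pass of the loop in Fig. 2 might look like the sketch below; the camera, recognizer and controller objects are assumptions made for illustration, not elements disclosed in the patent.

```python
def playback_loop(camera, recognizer, controller):
    """One pass of the flow in Fig. 2 (illustrative sketch, names assumed)."""
    image = camera.extract_image()              # step S202
    state = recognizer.classify(image)          # step S204
    if state == "sad":
        controller.enter_relaxation_mode()      # steps S208-S212
    elif state == "asleep":
        controller.enter_sleep_mode()           # steps S214-S218
    else:
        controller.keep_normal_mode()           # step S206
```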
Although the present invention has been described above in terms of preferred embodiments, the above disclosure is not intended to limit the invention. On the contrary, it is intended to cover various modifications and similar arrangements, as will be apparent to those of ordinary skill in the art. The scope of the appended claims should therefore be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (14)

1. An audio-video playing device, comprising:
a storage unit, storing a plurality of audio data;
an image extraction unit, extracting an external image;
a processing unit, judging an expression state of a user from the external image, deciding a play mode according to the expression state, and obtaining the corresponding audio data according to the play mode;
a loudspeaker unit, outputting the corresponding audio data according to the play mode; and
a display unit, for displaying an image data.
2. The audio-video playing device as claimed in claim 1, wherein when the processing unit judges that the expression state corresponds to a relaxation mode, the processing unit switches the play mode from a normal mode to the relaxation mode.
3. The audio-video playing device as claimed in claim 2, wherein when the play mode is the relaxation mode, the loudspeaker unit outputs the audio data corresponding to the relaxation mode during a predetermined period, the display unit displays the image data corresponding to the relaxation mode during the predetermined period, and the processing unit switches the play mode back to the normal mode when the predetermined period ends.
4. The audio-video playing device as claimed in claim 1, wherein when the processing unit judges that the expression state corresponds to a sleep mode, the processing unit switches the play mode from a normal mode to the sleep mode.
5. The audio-video playing device as claimed in claim 4, wherein when the play mode is the sleep mode, the loudspeaker unit outputs the audio data corresponding to the sleep mode during a predetermined period, the display unit is disabled, and the processing unit turns off the audio-video playing device when the predetermined period ends.
6. The audio-video playing device as claimed in claim 1, wherein the processing unit further decides the play mode according to a recognition database, and the recognition database is generated according to the correspondence between the expression state and a television information.
7. The audio-video playing device as claimed in claim 1, further comprising a network communication unit, downloading an external audio data corresponding to the play mode from a network and storing the external audio data in the storage unit, wherein the loudspeaker unit further outputs the external audio data.
8. An audio-video playing method, applied to an audio-video playing device, comprising:
extracting an external image;
judging an expression state of a user according to the external image;
deciding a play mode according to the expression state;
obtaining a corresponding audio data from a storage unit according to the play mode;
outputting the corresponding audio data through a loudspeaker unit; and
displaying an image data through a display unit.
9. The audio-video playing method as claimed in claim 8, further comprising: when the expression state corresponds to a relaxation mode, switching the play mode from a normal mode to the relaxation mode.
10. The audio-video playing method as claimed in claim 9, wherein when the play mode is the relaxation mode, the audio-video playing method further comprises:
outputting, through the loudspeaker unit, the audio data corresponding to the relaxation mode during a predetermined period;
displaying, through the display unit, the image data corresponding to the relaxation mode during the predetermined period; and
switching the play mode back to the normal mode when the predetermined period ends.
11. The audio-video playing method as claimed in claim 8, further comprising: when the expression state corresponds to a sleep mode, switching the play mode from a normal mode to the sleep mode.
12. The audio-video playing method as claimed in claim 11, wherein when the play mode is the sleep mode, the audio-video playing method further comprises:
outputting the audio data corresponding to the sleep mode during a predetermined period;
disabling the display unit; and
turning off the audio-video playing device when the predetermined period ends.
13. The audio-video playing method as claimed in claim 8, further comprising: deciding the play mode according to a recognition database, wherein the recognition database is generated according to the correspondence between the expression state and a television information.
14. The audio-video playing method as claimed in claim 8, further comprising:
downloading an external audio data corresponding to the play mode from a network;
storing the external audio data in the storage unit; and
outputting the external audio data through the loudspeaker unit.
CN201210229674.XA 2012-07-04 2012-07-04 Audio-video playing device and method Pending CN103531216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210229674.XA CN103531216A (en) 2012-07-04 2012-07-04 Audio-video playing device and method


Publications (1)

Publication Number Publication Date
CN103531216A true CN103531216A (en) 2014-01-22

Family

ID=49933166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210229674.XA Pending CN103531216A (en) 2012-07-04 2012-07-04 Audio-video playing device and method

Country Status (1)

Country Link
CN (1) CN103531216A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080019594A1 (en) * 2006-05-11 2008-01-24 Sony Corporation Image processing apparatus, image processing method, storage medium, and program
CN201289739Y (en) * 2008-11-18 2009-08-12 天津三星电子有限公司 Remote control video player capable of automatically recognizing expression
CN102467668A (en) * 2010-11-16 2012-05-23 鸿富锦精密工业(深圳)有限公司 Emotion detecting and soothing system and method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699769A (en) * 2015-02-28 2015-06-10 北京京东尚科信息技术有限公司 Interacting method based on facial expression recognition and equipment executing method
CN104851437A (en) * 2015-04-28 2015-08-19 广东欧珀移动通信有限公司 Song playing method and terminal
CN104851437B (en) * 2015-04-28 2018-05-01 广东欧珀移动通信有限公司 A kind of playback of songs method and terminal
CN106658178A (en) * 2017-01-03 2017-05-10 京东方科技集团股份有限公司 Display control device and display control method
CN106658178B (en) * 2017-01-03 2020-02-07 京东方科技集团股份有限公司 Display control device and control method thereof
US10918823B2 (en) 2017-01-03 2021-02-16 Boe Technology Group Co., Ltd. Display control method and device
CN106998499A (en) * 2017-04-28 2017-08-01 张青 It is capable of the intelligent TV set and its control system and control method of intelligent standby
CN109635616A (en) * 2017-10-09 2019-04-16 阿里巴巴集团控股有限公司 Interactive approach and equipment
WO2019072104A1 (en) * 2017-10-09 2019-04-18 阿里巴巴集团控股有限公司 Interaction method and device
EP3696648A4 (en) * 2017-10-09 2021-07-07 Alibaba Group Holding Limited Interaction method and device
CN109635616B (en) * 2017-10-09 2022-12-27 阿里巴巴集团控股有限公司 Interaction method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140122