CN109802968B - Conference speaking system - Google Patents


Info

Publication number
CN109802968B
CN109802968B · Application CN201910081572.XA
Authority
CN
China
Prior art keywords
data
speaking
speech
mode
intelligent
Prior art date
Legal status
Active
Application number
CN201910081572.XA
Other languages
Chinese (zh)
Other versions
CN109802968A (en)
Inventor
邓恒波
陈云明
Current Assignee
Shenzhen Photool Vision Co ltd
Original Assignee
Shenzhen Photool Vision Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Photool Vision Co ltd filed Critical Shenzhen Photool Vision Co ltd
Priority to CN201910081572.XA
Publication of CN109802968A
Application granted
Publication of CN109802968B
Status: Active

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a conference speaking system comprising an intelligent terminal connected to an intelligent display device. The intelligent terminal acquires the current speaking-state information of the intelligent display device and, when that state permits speaking, sends a speech application request to the display device. Once the display device confirms the request, the terminal generates speech data according to a preset speaking mode and sends it to the display device; the speech data comprises voice data, image data, or video data. The intelligent display device receives the speech data and plays or displays it. Embodiments of the invention address the prior-art problem that participants can speak only through a microphone, so the form of speaking is limited and a speaker's knowledge cannot be fully conveyed.

Description

Conference speaking system
Technical Field
The invention relates to the field of wireless communication, in particular to a conference speaking system.
Background
At present, when people attend a lecture in a university tiered classroom or listen to reports in a meeting room, participants speak through microphones. A participant can express an opinion on a problem only through speech, so the form of expression is limited, and a voice-only channel sometimes cannot fully convey the speaker's views.
Disclosure of Invention
The technical problem addressed by the invention is to provide a conference speaking system that supports speaking through voice, video, images, and other modes, solving the prior-art problems that participants can speak only through a microphone, the form of speaking is limited, and a speaker's knowledge cannot be fully conveyed.
An embodiment of the invention provides a conference speaking system, which comprises an intelligent terminal and an intelligent display device, wherein the intelligent terminal is connected with the intelligent display device;
the intelligent terminal is used for acquiring the current speaking state information of the intelligent display equipment and sending a speaking application request to the intelligent display equipment when the current speaking state information is that speaking is allowed; then when the intelligent display device confirms the speech application request, speech data are generated according to a preset speech mode, and the speech data are sent to the intelligent display device; wherein the speaking data comprises voice data, image data or video data;
the intelligent display device is further configured to receive the speech data, and play and display the speech data.
Further, the preset speaking mode comprises a voice speaking mode, a multimedia speaking mode or a mirror speaking mode; the multimedia speaking mode comprises a video speaking mode and an image speaking mode;
when the speech mode is a voice speech mode, the generating speech data according to a preset speech mode specifically includes:
the intelligent terminal acquires voice data of the speaker, and then performs data processing on the voice data to acquire the speaking data;
when the speech mode is a multimedia speech mode, the generating speech data according to a preset speech mode specifically includes:
if the multimedia speaking mode is the image speaking mode, the intelligent terminal starts a camera device to shoot to obtain a shot image, and data processing is carried out on the shot image to obtain speaking data;
if the multimedia speech mode is a video speech mode, starting a camera device for shooting while acquiring voice data of the speaker to acquire shot data, synthesizing the voice data and the shot data into video data, and processing the video data to acquire the speech data;
and when the speaking mode is a mirror image speaking mode, the intelligent terminal acquires the current display content of the intelligent terminal and generates the speaking data according to the current display content.
Further, the intelligent terminal obtains voice data of the speaker, and then performs data processing on the voice data to obtain the speech data, specifically:
and the intelligent terminal responds to a voice recording instruction, opens a microphone, records the voice of the speaker to obtain the voice data, performs howling suppression processing on the voice data, and performs audio compression and encapsulation to obtain the speech data.
Further, data processing is performed on the shot image to obtain utterance data, which specifically includes:
and carrying out compression coding on the shot image, and packaging according to a TCP (transmission control protocol) protocol to obtain the speaking data.
Further, the video data is compression-encoded and encapsulated according to the TCP protocol to obtain the speech data.
further, the intelligent terminal obtains current display content of the intelligent terminal, and generates the speech data according to the current display content, specifically:
if the current display content is an image, the intelligent terminal directly intercepts the current display content of the intelligent terminal to obtain image data, and then compression coding and packaging are carried out on the image data to generate the speaking data;
if the current display content is video data which is being played, the intelligent terminal firstly judges whether a microphone of the intelligent terminal is in an open state; if the microphone is in a closed state, directly performing compression coding and encapsulation on the video data to generate the speech data; and if the microphone is in an open state, extracting audio data collected by the microphone and an image sequence in the video data to synthesize second video data, and then performing compression coding and packaging on the second video data to generate the speech data.
Further, the intelligent display device is further configured to receive the speech data, and play and display the speech data, specifically:
after receiving the speech data, the intelligent display equipment judges the data type of the speech data;
if the speech data is video data, separating the audio data in the speech data from the image sequence, then sending the audio data to a power amplifier module so that the power amplifier module plays it, and sending the image sequence to a display module so that the display module displays the images;
if the speech data are voice data, sending the speech data to a power amplifier module so that the power amplifier module plays the speech data;
if the speaking data is image data, sending the speaking data to a display module so that the display module displays the speaking data;
the power amplifier module and the display module are both arranged in the intelligent display device.
By implementing the embodiment of the invention, the following beneficial effects are achieved:
the embodiment of the invention provides a conference speaking system which comprises an intelligent terminal and an intelligent display device, wherein the intelligent terminal firstly acquires the current speaking state of the intelligent display device and judges whether the intelligent display device is in a speaking-allowed state, if so, the intelligent terminal can send a speaking application request to the intelligent display device, after the intelligent display device receives and confirms the speaking application request, the intelligent terminal can start to send speaking data to the intelligent display device, and after the intelligent display device receives the speaking data, the speaking data is played and displayed, so that the whole speaking process is completed, and the speaking data can be not only voice data, but also image data and video data. The speaker can speak in various forms, and the problem that the existing microphone can speak in a single speaking form and is insufficient to express the knowledge of the speaker is solved.
Drawings
Fig. 1 is a flowchart of a conference speaking system according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a working process of an intelligent terminal in a conference speaking system according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a working process of an intelligent display device in a conference speaking system according to an embodiment of the present invention;
fig. 4 is a schematic connection diagram illustrating a mixed wired and wireless connection between terminals in a conference speaking system according to an embodiment of the present invention;
fig. 5 is a schematic connection diagram of terminals in a conference speaking system according to an embodiment of the present invention, where the terminals are connected in a pure wireless manner;
fig. 6 is a schematic connection diagram of terminals in a conference speaking system according to an embodiment of the present invention, where a hotspot connection mode is adopted between the terminals;
fig. 7 is a schematic diagram illustrating a connection between terminals in a conference speaking system according to an embodiment of the present invention in a wireless coverage manner;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a conference speaking system provided by an embodiment of the present invention includes an intelligent terminal and an intelligent display device, where the intelligent terminal is connected to the intelligent display device;
the intelligent terminal is used for acquiring the current speaking state information of the intelligent display equipment and sending a speaking application request to the intelligent display equipment when the current speaking state information is that speaking is allowed; then, when the intelligent display device confirms the request for applying for speaking, speaking data are generated according to a preset speaking mode, and the speaking data are sent to the intelligent display device; the speaking data comprises voice data, image data and video data;
and the intelligent display equipment is also used for receiving the speech data and playing and displaying the speech data.
It should be noted that the intelligent terminal may be, but is not limited to, a mobile phone, a computer, or a tablet computer, while the intelligent display device may be a large touch-control all-in-one machine, an intelligent projector, or an intelligent receiving host externally connected to a display device.
First, the intelligent display device sends instruction information, the "current speaking state information", to the intelligent terminal to indicate whether the display device currently allows speaking or not. After receiving this information, the intelligent terminal checks the display device's current speaking state; if the display device currently allows speaking, the terminal sends a speech application request to the display device, that is, it requests permission to send data to the device.
On receiving the speech application request, the intelligent display device may reject or accept (confirm) it. Once the display device confirms acceptance, the intelligent terminal may send speech data to it and thereby speak; after receiving the speech data, the display device plays and displays it. In a practical scenario, the display device may send its "current speaking state information" to multiple intelligent terminals, receive speech application requests fed back by several of them, and then select which terminal is to speak.
It should be added that when the intelligent display device confirms a speech application request from a particular intelligent terminal, it receives that terminal's speech data and changes its own current speaking state to not-allowed; other terminals then cannot send speech data to it. When the selected terminal finishes speaking, it feeds back a "speech end" instruction, on receipt of which the display device changes its state from not-allowed back to allowed. As stated, the speech data comprises voice data, image data, or video data.
In a preferred embodiment, both the intelligent display device and the intelligent terminal run an application program. On the display device, opening the application shows a software interface bearing an "allow speaking" button; when a user (the questioner's host) turns the "allow speaking" switch on, the display device's current speaking state is set to allowed, identified by status code "1", and the status code is sent to the intelligent terminals. A terminal that receives status code "1" knows the display device is currently accessible. The terminal's application interface provides an "apply to speak" control; when the speaker presses it, the terminal sends a speech application request to the display device, and on receiving the device's confirmation it establishes a speech channel with the display device, begins collecting speech data in the corresponding speaking mode, and speaks. Meanwhile the display device's speaking state changes to not-allowed, so no terminal other than the one currently speaking can access it. When the speaker releases the "apply to speak" button, the terminal ends the speech; the display device then changes its state back to allowed, permitting other terminals to access it and speak.
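The request/confirm flow above is effectively a small state machine on the display device. The following Python sketch illustrates it; the class and method names are invented for illustration, and only the status code "1" for speak-allowed comes from the patent.

```python
ALLOWED, BUSY = "1", "0"  # "1" = speak-allowed status code per the patent; "0" is assumed

class Display:
    """Illustrative model of the display device's speaking-state handshake."""

    def __init__(self):
        self.state = ALLOWED
        self.active = None  # terminal currently speaking, if any

    def status(self):
        # Broadcast to terminals as "current speaking state information".
        return self.state

    def request(self, terminal):
        """Accept a speech application request only when no one else is speaking."""
        if self.state != ALLOWED:
            return False
        self.state = BUSY
        self.active = terminal
        return True

    def end_speech(self, terminal):
        """The 'speech end' instruction restores the speak-allowed state."""
        if self.active == terminal:
            self.active = None
            self.state = ALLOWED

d = Display()
assert d.request("phone-A") is True
assert d.request("phone-B") is False   # rejected while A holds the channel
d.end_speech("phone-A")
assert d.request("phone-B") is True    # allowed again after "speech end"
```

The single `active` slot captures the patent's turn-taking rule: exactly one terminal may hold the speech channel at a time.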
It should be noted that image data as used herein means static, independent, discontinuous images, while video data means a continuous image sequence with audio. A continuous image sequence without audio also falls within the category of video data; its audio data is simply understood to be empty.
In a preferred embodiment, the preset speaking mode comprises a voice speaking mode, a multimedia speaking mode or a mirror speaking mode; the multimedia speaking mode comprises a video speaking mode and an image speaking mode;
when the speech mode is a voice speech mode, speech data are generated according to a preset speech mode, and the method specifically comprises the following steps:
the intelligent terminal acquires voice data of a speaker, and then performs data processing on the voice data to acquire speaking data;
when the speech mode is a multimedia speech mode, speech data are generated according to a preset speech mode, and the method specifically comprises the following steps:
if the multimedia speaking mode is the image speaking mode, the intelligent terminal starts a camera device to shoot to obtain a shot image, and data processing is carried out on the shot image to obtain speaking data;
if the multimedia speech mode is the video speech mode, the intelligent terminal starts a camera device to shoot while acquiring voice data of a speaker to obtain shot data, and after the voice data and the shot data are synthesized into video data, the video data are processed to obtain speech data;
when the speaking mode is the mirror image speaking mode, the intelligent terminal obtains the current display content of the intelligent terminal and generates speaking data according to the current display content.
In a preferred embodiment, the intelligent terminal obtains voice data of a speaker, and then performs data processing on the voice data to obtain utterance data, specifically:
the intelligent terminal responds to the voice recording instruction, a microphone is opened, the voice of a speaker is recorded, voice data are obtained, after howling suppression processing is carried out on the voice data, audio compression and packaging are carried out, and speaking data are obtained.
In a preferred embodiment, the data processing is performed on the captured image to obtain utterance data, specifically:
and carrying out compression coding on the shot image, and packaging according to a TCP (transmission control protocol) protocol to obtain speaking data.
In a preferred embodiment, the processing on the video data to obtain the speech data specifically includes:
the video data is compression-encoded and encapsulated according to the TCP protocol to obtain the speech data.
as shown in fig. 2, the following list a preferred example to specifically describe the workflow of the intelligent terminal:
the intelligent terminal has three speaking modes, namely a voice speaking mode, a multimedia speaking mode and a mirror image speaking mode, sends a speaking application request to the intelligent display device, responds to the operation of a user after the speaking application request passes, selects the speaking mode and generates speaking data, and the method specifically comprises the following steps:
when the selected speaking mode is the voice speaking mode, the intelligent terminal turns on a microphone carried by the intelligent terminal to collect voice data of a speaker, then sends the collected voice data to the squeal suppression module to eliminate self-oscillation so as to avoid squeal caused by the problem of the placement angle of the intelligent terminal and a sound box and normal sound signals after squeal suppression processing,
compressing through an AAC audio compression algorithm, packaging through a UDP protocol to generate speech data, finally performing wireless transmission through a Wi-Fi module of the intelligent terminal, ending speech, responding to the speech ending operation of a user, and closing a microphone.
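The voice pipeline ends with AAC compression and UDP encapsulation. A minimal sketch of the framing step follows; the 6-byte header layout (sequence number plus payload length) is an assumption for illustration, not a format specified by the patent, and the payload stands in for an already-compressed AAC frame.

```python
import struct

def packetize(payload: bytes, seq: int) -> bytes:
    # Illustrative header: 4-byte big-endian sequence number + 2-byte length.
    # Real AAC-over-UDP framing differs; this only shows the encapsulation idea.
    return struct.pack(">IH", seq, len(payload)) + payload

def parse(packet: bytes):
    # Inverse of packetize: recover sequence number and AAC payload.
    seq, length = struct.unpack(">IH", packet[:6])
    return seq, packet[6:6 + length]

pkt = packetize(b"aac-frame-bytes", seq=7)
assert parse(pkt) == (7, b"aac-frame-bytes")
```

A sequence number matters here because UDP, unlike the TCP path used for images and video, does not guarantee ordering.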
When the selected speech mode is the multimedia speech mode, the terminal further judges whether the image speech mode or the video speech mode is in use. In the image speech mode, the intelligent terminal opens its camera to capture an image, compression-encodes the captured image with an H.264 encoder, encapsulates it with the TCP protocol to generate the speech data, and finally transmits it wirelessly through its Wi-Fi module.
In the video speech mode, both the microphone and the camera of the intelligent terminal are opened; the microphone collects audio data and the camera collects shot data, which are synthesized into video data. The video data is compression-encoded with an H.264 encoder, encapsulated with the TCP protocol to generate the speech data, and finally transmitted wirelessly through the terminal's Wi-Fi module.
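The "synthesize video data" step merges the microphone's audio chunks and the camera's frames into one stream. A hedged sketch of that interleaving, using timestamps; the tuple-based container and the `mux` name are invented for the example and do not represent the patent's actual muxer.

```python
def mux(audio, frames):
    """Merge (timestamp_ms, payload) lists into one time-ordered A/V stream."""
    tagged = [(t, "A", p) for t, p in audio] + [(t, "V", p) for t, p in frames]
    # Sorting by timestamp interleaves audio and video in presentation order.
    return sorted(tagged)

stream = mux(audio=[(0, b"a0"), (40, b"a1")],
             frames=[(33, b"f0"), (66, b"f1")])
assert [kind for _, kind, _ in stream] == ["A", "V", "A", "V"]
```

In a real container (e.g. an MP4 produced before H.264 packaging) the same principle applies: samples from both tracks are ordered by presentation time.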
When the selected speaking mode is the mirror image speaking mode, the intelligent terminal generates speaking data according to the content displayed on the current screen, namely the current display content, specifically as follows:
if the current display content is a static image, the intelligent terminal firstly captures all image information of the screen, such as mouse information and layered display information on a computer screen. The intercepted image information, namely image data, is compressed and encoded through an H.264 compression encoder, then TCP protocol encapsulation is carried out, and finally wireless transmission is carried out through a Wi-Fi module of the intelligent terminal.
If the intelligent terminal is playing a video file (video data), it first judges whether its microphone is open. If the microphone is closed, the video data is directly compression-encoded with an H.264 encoder, encapsulated with the TCP protocol, and transmitted wirelessly through the Wi-Fi module. If the microphone is open, the user is presumably speaking, for example commenting on the content being played; in that case the terminal extracts the audio data collected by the microphone and the image sequence in the video data and synthesizes new video data, the second video data, in which the sound of the original video is removed and replaced with the microphone audio. The second video data is then compression-encoded with an H.264 encoder, encapsulated with the TCP protocol, and transmitted wirelessly through the Wi-Fi module.
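The mirror-mode decision above can be summarized as: image content is sent as-is, and for playing video the audio track depends on whether the microphone is open. A sketch with invented names (`mirror_payload` and the dict fields are illustrative, not from the patent):

```python
def mirror_payload(display_content, mic_open, mic_audio=None):
    """Decide what mirror mode sends, following the branching described above."""
    if display_content["type"] == "image":
        # Static screen content: capture and forward the image data directly.
        return {"kind": "image", "data": display_content["data"]}
    # Playing video: keep the original soundtrack only when the mic is closed;
    # otherwise replace it with the microphone audio ("second video data").
    audio = mic_audio if mic_open else display_content["audio"]
    return {"kind": "video", "frames": display_content["frames"], "audio": audio}

clip = {"type": "video", "frames": [b"f0"], "audio": b"original-sound"}
assert mirror_payload(clip, mic_open=False)["audio"] == b"original-sound"
assert mirror_payload(clip, mic_open=True, mic_audio=b"speech")["audio"] == b"speech"
```

The branch on `mic_open` is the key design choice: it lets a presenter talk over a muted clip without transmitting two competing audio tracks.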
In a preferred embodiment, the intelligent display device is further configured to receive the speech data, and play and display the speech data, specifically:
after receiving the speech data, the intelligent display equipment judges the data type of the speech data;
if the speech data is video data, separating the audio data in the speech data from the image sequence, then sending the audio data to the power amplifier module so that it plays the audio data, and sending the image sequence to the display module so that it displays the images;
if the speech data are voice data, the speech data are sent to the power amplifier module, so that the power amplifier module plays the speech data;
if the speaking data is image data, sending the speaking data to the display module so that the display module displays the speaking data;
the power amplifier module and the display module are both arranged in the intelligent display device.
As shown in fig. 3, the following describes the workflow of the smart display device in connection with a preferred embodiment:
after receiving the data sent by the intelligent terminal, the intelligent display device judges the type of the data,
if the received data is signaling data, the signaling data is directly transmitted to the signaling processing module, and corresponding operations are performed, it should be noted that the signaling data is not the utterance data mentioned herein, and the signaling data is some instruction data, such as the "request for utterance" mentioned herein.
If the received data is speech data, its type is judged further. If it is voice data, it is parsed according to the UDP protocol; the parsed audio data is then AAC-decompressed, and the decompressed audio is sent to the audio driver module, which outputs the audio signal and drives the power amplifier module to play it.
If the received data is image data, it is parsed according to the TCP protocol and then sent to the display module for display.
If the received data is video data, it is first parsed according to the TCP protocol; the resulting video stream is then separated into audio and video. The separated audio data is sent to the audio module, which forwards it to the power amplifier module for playback, and the separated image sequence is sent to the display module for display.
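The display-side routing described above is a dispatch on the data type. A minimal sketch; the function name, dict shape, and module keys are invented for illustration:

```python
def dispatch(speech):
    """Route received speech data to the amplifier and/or display modules."""
    kind = speech["kind"]
    if kind == "voice":
        return {"amplifier": speech["data"]}          # play audio only
    if kind == "image":
        return {"display": speech["data"]}            # show image only
    if kind == "video":
        # Separate the audio track from the image sequence and route each.
        return {"amplifier": speech["audio"], "display": speech["frames"]}
    raise ValueError(f"unknown speech data type: {kind}")

assert dispatch({"kind": "voice", "data": b"pcm"}) == {"amplifier": b"pcm"}
out = dispatch({"kind": "video", "audio": b"a", "frames": [b"f"]})
assert out == {"amplifier": b"a", "display": [b"f"]}
```

Only the video branch touches both output modules, mirroring the audio/video separation step the patent describes.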
As shown in fig. 4, the intelligent terminal and the intelligent display device may be connected in a wired and wireless hybrid connection manner, specifically:
the intelligent devices are connected with the router through Wi-Fi of the intelligent devices; the intelligent display device is connected with the router in a wired mode through a network cable. In this connection mode, the router may be installed and fixed on the smart display device as a whole, or may be an independent router.
As shown in fig. 5, the intelligent terminals and the intelligent display device may be connected in a purely wireless manner: multiple intelligent terminals connect to the router through Wi-Fi, and the intelligent display device likewise connects to the router wirelessly through Wi-Fi. In this form, the router is a separate device located some distance from the intelligent display device.
As shown in fig. 6, the intelligent terminal and the intelligent display device may be connected by using a hot spot connection method, specifically:
the intelligent display equipment is provided with a Wi-Fi hotspot, a main CPU of the intelligent display equipment can be directly communicated with the hotspot, and each mobile terminal is connected with the hotspot through Wi-Fi; in the connection mode, the hot spot is usually a network card of the intelligent display device, and is converted into a 'soft route' form through software to exist; the network card can be an onboard network card or an external network card inserted on a USB interface of the large intelligent display device through a USB interface.
As shown in fig. 7, the intelligent terminal and the intelligent display device may be connected in a wireless coverage manner, specifically:
this form is mainly applied to large conferences or teaching spaces. The intelligent display device is connected with the router 0 through a network cable. The router 0 is connected with other routers through LAN ports by wires to form a network full coverage in a larger area. And a plurality of intelligent terminals are connected with the nearby router, so that the strength of the Wi-Fi signal is enough. Therefore, all the devices and the routers are connected in a local area network, and real-time data transmission can be realized.
The foregoing describes preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and enhancements without departing from the principles of the invention, and such modifications and enhancements also fall within the scope of the invention.

Claims (5)

1. The conference speaking system is characterized by comprising an intelligent terminal and intelligent display equipment, wherein the intelligent terminal is connected with the intelligent display equipment;
the intelligent terminal is used for acquiring the current speaking state information of the intelligent display equipment and sending a speaking application request to the intelligent display equipment when the current speaking state information is that speaking is allowed; then when the intelligent display device confirms the speech application request, speech data are generated according to a preset speech mode, and the speech data are sent to the intelligent display device; wherein the speaking data comprises voice data, image data or video data; the preset speaking mode comprises a voice speaking mode, a multimedia speaking mode or a mirror image speaking mode; the multimedia speaking mode comprises a video speaking mode and an image speaking mode;
when the speech mode is a voice speech mode, the generating speech data according to a preset speech mode specifically includes: the intelligent terminal acquires voice data of a speaker, and then performs data processing on the voice data to acquire the speaking data;
when the speaking mode is the multimedia speaking mode, the generating of speaking data according to a preset speaking mode specifically comprises: if the multimedia speaking mode is the image speaking mode, the intelligent terminal starts a camera device to shoot, obtains a shot image, and performs data processing on the shot image to obtain the speaking data; if the multimedia speaking mode is the video speaking mode, the intelligent terminal starts the camera device to shoot while acquiring voice data of the speaker to obtain shot data, synthesizes the voice data and the shot data into video data, and then processes the video data to obtain the speaking data;
when the speaking mode is the mirror image speaking mode, the intelligent terminal acquires its current display content and generates the speaking data according to the current display content: if the current display content is an image, the intelligent terminal directly captures the current display content to obtain image data, and then performs compression coding and encapsulation on the image data to generate the speaking data; if the current display content is video data that is being played, the intelligent terminal first judges whether its microphone is in an open state; if the microphone is in a closed state, the video data is directly compression-coded and encapsulated to generate the speaking data; if the microphone is in an open state, the audio data collected by the microphone and the image sequence in the video data are extracted and synthesized into second video data, and the second video data is then compression-coded and encapsulated to generate the speaking data; and the intelligent display device is further configured to receive the speaking data and play and display the speaking data.
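The mirror-mode branching in claim 1 can be illustrated as follows. `compress_encode` and `encapsulate` are placeholder stages standing in for a real codec and packetizer (the byte tags are purely illustrative), and the function name is an assumption, not patent terminology.

```python
def compress_encode(data: bytes) -> bytes:
    # Placeholder for real compression coding (e.g. an H.264/JPEG encoder).
    return b"enc:" + data

def encapsulate(data: bytes) -> bytes:
    # Placeholder for transport encapsulation.
    return b"pkt:" + data

def mirror_mode_speaking_data(display_content: bytes, content_type: str,
                              mic_open: bool = False,
                              mic_audio: bytes = b"") -> bytes:
    if content_type == "image":
        # Screenshot path: capture, then encode and encapsulate directly.
        return encapsulate(compress_encode(display_content))
    if content_type == "video":
        if not mic_open:
            # Microphone closed: forward the playing video as-is.
            return encapsulate(compress_encode(display_content))
        # Microphone open: mux the mic audio with the video's image
        # sequence into "second video data" before encoding.
        second_video = mic_audio + b"|" + display_content
        return encapsulate(compress_encode(second_video))
    raise ValueError(content_type)
```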
2. The conference speaking system according to claim 1, wherein the intelligent terminal acquiring the voice data of the speaker and then performing data processing on the voice data to obtain the speaking data specifically comprises:
in response to a voice recording instruction, the intelligent terminal opens a microphone and records the speaker's voice to obtain the voice data, performs howling suppression on the voice data, and then performs audio compression and encapsulation to obtain the speaking data.
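The recording path of claim 2 amounts to three stages applied in order. A toy sketch follows, in which every stage is a labeled placeholder rather than real signal processing (actual howling suppression would, for example, apply notch filters at detected feedback frequencies):

```python
def suppress_howling(pcm: bytes) -> bytes:
    # Placeholder for acoustic-feedback (howling) suppression.
    return b"nohowl:" + pcm

def compress_audio(pcm: bytes) -> bytes:
    # Placeholder for an audio codec (e.g. AAC).
    return b"aac:" + pcm

def encapsulate(frame: bytes) -> bytes:
    # Placeholder for transport encapsulation.
    return b"pkt:" + frame

def make_voice_speaking_data(pcm: bytes) -> bytes:
    # Record -> howling suppression -> compression -> encapsulation.
    return encapsulate(compress_audio(suppress_howling(pcm)))
```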
3. The conference speaking system according to claim 1, wherein performing data processing on the shot image to obtain the speaking data specifically comprises:
performing compression coding on the shot image and encapsulating it according to the TCP (Transmission Control Protocol) to obtain the speaking data.
4. The conference speaking system according to claim 1, wherein processing the video data to obtain the speaking data specifically comprises:
performing compression coding on the video data and encapsulating it according to the TCP (Transmission Control Protocol) to obtain the speaking data.
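Claims 3 and 4 both encapsulate compressed frames for transport over TCP. Because TCP delivers an unstructured byte stream with no message boundaries, one common encapsulation (an assumption here, the patent does not specify a header layout) is to length-prefix each compressed frame so the receiver can reassemble frames from the stream:

```python
import struct

def pack_frame(encoded: bytes) -> bytes:
    # 4-byte big-endian length header, followed by the compressed payload.
    return struct.pack("!I", len(encoded)) + encoded

def unpack_frames(stream: bytes) -> list[bytes]:
    # Walk the byte stream, reading one length-prefixed frame at a time.
    frames, off = [], 0
    while off + 4 <= len(stream):
        (n,) = struct.unpack_from("!I", stream, off)
        frames.append(stream[off + 4 : off + 4 + n])
        off += 4 + n
    return frames
```

With this framing, several encoded images or video frames concatenated on the wire are recovered intact on the display side regardless of how TCP segments them.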
5. The conference speaking system according to claim 1, wherein the intelligent display device being further configured to receive the speaking data and play and display the speaking data specifically comprises:
after receiving the speaking data, the intelligent display device judges the data type of the speaking data;
if the speaking data is video data, the audio data and the image sequence in the speaking data are separated; the audio data is then sent to a power amplifier module so that the power amplifier module plays the audio data, and the image sequence is sent to a display module so that the display module displays the images;
if the speaking data is voice data, the speaking data is sent to the power amplifier module so that the power amplifier module plays the speaking data;
if the speaking data is image data, the speaking data is sent to the display module so that the display module displays the speaking data;
wherein the power amplifier module and the display module are both arranged in the intelligent display device.
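The display-side routing of claim 5 can be sketched as a dispatch on the data type; `amp` and `screen` below stand in for the power amplifier module and the display module, and the function name is illustrative only.

```python
def handle_speaking_data(kind: str, payload, amp, screen) -> None:
    """Route received speaking data to the amplifier and/or display."""
    if kind == "video":
        # Assume the audio and image sequence are already demuxed.
        audio, images = payload
        amp(audio)       # power amplifier module plays the audio
        screen(images)   # display module shows the image sequence
    elif kind == "voice":
        amp(payload)     # voice goes to the power amplifier only
    elif kind == "image":
        screen(payload)  # images go to the display module only
    else:
        raise ValueError(f"unknown speaking data type: {kind}")
```

In use, `amp` and `screen` would be callbacks into the device's audio and video pipelines; the sketch only fixes the routing decision.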
CN201910081572.XA 2019-01-28 2019-01-28 Conference speaking system Active CN109802968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910081572.XA CN109802968B (en) 2019-01-28 2019-01-28 Conference speaking system


Publications (2)

Publication Number Publication Date
CN109802968A CN109802968A (en) 2019-05-24
CN109802968B true CN109802968B (en) 2021-06-22

Family

ID=66560670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910081572.XA Active CN109802968B (en) 2019-01-28 2019-01-28 Conference speaking system

Country Status (1)

Country Link
CN (1) CN109802968B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112004046A (en) * 2019-05-27 2020-11-27 中兴通讯股份有限公司 Image processing method and device based on video conference
CN110348011A (en) * 2019-06-25 2019-10-18 武汉冠科智能科技有限公司 Paperless conference display object determination method, apparatus and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101312423A (en) * 2007-05-24 2008-11-26 中国科学院自动化研究所 Expert authority evaluation method and system based on network integrated discussion environment
CN101335869A (en) * 2008-03-26 2008-12-31 北京航空航天大学 Video conference system based on Soft-MCU
CN105933129A (en) * 2016-04-25 2016-09-07 四川联友电讯技术有限公司 Fragmented asynchronous conference system conference participant message sending method
CN106301811A (en) * 2015-05-19 2017-01-04 华为技术有限公司 Realize the method and device of multimedia conferencing
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 Video conference control method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6531443B2 (en) * 2015-03-17 2019-06-19 株式会社リコー Transmission system, method and program


Also Published As

Publication number Publication date
CN109802968A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN101720551B (en) Response of communicating peer generated by human gestures by utilizing a mobile phone
CN100459711C (en) Video compression method and video system using the method
EP2154885B1 (en) A caption display method and a video communication control device
US20080062252A1 (en) Apparatus and method for video mixing and computer readable medium
US20080151786A1 (en) Method and apparatus for hybrid audio-visual communication
JP2005536132A (en) A human / machine interface for the real-time broadcast and execution of multimedia files during a video conference without interrupting communication
CN108960158A (en) System and method for intelligent sign language translation
CN109802968B (en) Conference speaking system
US20220021980A1 (en) Terminal, audio cooperative reproduction system, and content display apparatus
JP2004304601A (en) Tv phone and its data transmitting/receiving method
CN112258912A (en) Network interactive teaching method, device, computer equipment and storage medium
JP2010239641A (en) Communication device, communication system, control program of communication device, and recording media-recording control program of communication device
CN111885412B (en) HDMI signal screen transmission method and wireless screen transmission device
CN105450970B (en) Information processing method and electronic equipment
EP2207311A1 (en) Voice communication device
CN101888522B (en) Network video conference device and method for carrying out network video conference
JP2008141348A (en) Communication apparatus
JP4680034B2 (en) COMMUNICATION DEVICE, COMMUNICATION SYSTEM, COMMUNICATION DEVICE CONTROL PROGRAM, AND RECORDING MEDIUM CONTAINING COMMUNICATION DEVICE CONTROL PROGRAM
JP4400598B2 (en) Call center system and control method for videophone communication
JP3031320B2 (en) Video conferencing equipment
JP4120440B2 (en) COMMUNICATION PROCESSING DEVICE, COMMUNICATION PROCESSING METHOD, AND COMPUTER PROGRAM
CN114840282A (en) Screen recording method and screen recording device of intelligent interactive tablet
WO2017219796A1 (en) Video service control method, mobile terminal, and service server
US11764984B2 (en) Teleconference method and teleconference system
KR101067952B1 (en) Managing System for less traffic in video communication and Method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant