CN115334053A - Method for realizing associated screen projection in cloud conference and related product - Google Patents
- Publication number
- CN115334053A (application CN202210919844.0A, also published as CN202210919844A)
- Authority
- CN
- China
- Prior art keywords
- cloud
- speaker
- concerned
- file
- content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Telephonic Communication Services (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The embodiments of this application provide a method for realizing associated screen projection in a cloud conference, and related products. The method includes the following steps: a cloud conference server receives a file to be shared by the speaker's cloud terminal in a cloud conference; the cloud conference server analyzes the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; the cloud conference server compresses the concerned content with a first compression algorithm to obtain a first compressed file, compresses the unconcerned content with a second compression algorithm to obtain a second compressed file, and shares both compressed files to the other cloud terminals in the cloud conference for screen projection. The technical scheme provided by this application has the advantage of improving the quality of cloud conferences.
Description
Technical Field
The application relates to the technical field of electronics and communication, in particular to a method for realizing associated screen projection in a cloud conference and a related product.
Background
A cloud conference is an efficient, convenient, and low-cost form of meeting based on cloud computing technology. Through a simple, easy-to-use internet interface, users can quickly and efficiently share voice, data files, and video with teams and clients around the world, while the cloud conference service provider handles the complex work of transmitting and processing the conference data.
In a cloud conference scenario, the volume of image data transmitted for the shared-file area can be excessive, which increases the latency of the conference image data, degrades the conference experience, and reduces user satisfaction.
Disclosure of Invention
The embodiment of the application discloses a method for realizing associated screen projection in a cloud conference and a related product.
In a first aspect, a method for implementing associated screen projection in a cloud conference is provided, where the method includes the following steps:
the cloud conference server receives a file to be shared by the speaker's cloud terminal in a cloud conference;
the cloud conference server analyzes the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with;
the cloud conference server compresses the concerned content with a first compression algorithm to obtain a first compressed file, compresses the unconcerned content with a second compression algorithm to obtain a second compressed file, and shares the first compressed file and the second compressed file to the other cloud terminals in the cloud conference for screen projection;
the second compression algorithm compresses more heavily (produces smaller output) than the first compression algorithm.
In a second aspect, a system for implementing associated screen projection in a cloud conference is provided, where the system includes:
a communication unit, configured to receive a file to be shared by the speaker's cloud terminal in a cloud conference;
a processing unit, configured to analyze the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; compress the concerned content with a first compression algorithm to obtain a first compressed file, compress the unconcerned content with a second compression algorithm to obtain a second compressed file, and share the first compressed file and the second compressed file to the other cloud terminals in the cloud conference for screen projection;
the second compression algorithm compresses more heavily (produces smaller output) than the first compression algorithm.
In a third aspect, there is provided an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of the first aspect.
In a fifth aspect, there is provided a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of an embodiment of the present application. The computer program product may be a software installation package.
In this application, the cloud conference server receives a file to be shared by the speaker's cloud terminal in a cloud conference; the cloud conference server analyzes the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; the cloud conference server compresses the concerned content with a first compression algorithm to obtain a first compressed file, compresses the unconcerned content with a second compression algorithm to obtain a second compressed file, and shares both compressed files to the other cloud terminals in the cloud conference. Because the content the speaker is not concerned with is compressed more heavily than the concerned content, the network bandwidth it occupies is reduced as far as possible without affecting the speaker's presentation. The smaller per-frame data volume lowers transmission delay and network traffic, which improves the smoothness of the cloud conference, the conference quality, and the user experience.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a schematic diagram of a framework of a cloud conferencing platform of the present application;
fig. 2 is a schematic flowchart of an implementation method of associated screen projection in a cloud conference provided by the present application;
fig. 3 is a schematic structural diagram of an implementation system for associated screen projection in a cloud conference, provided by the present application;
FIG. 4 is a schematic illustration of a split screen display provided herein;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" can mean: A alone, both A and B, or B alone. In addition, the character "/" in this document indicates an "or" relationship between the objects before and after it.
"Plurality" in the embodiments of this application means two or more. Descriptors such as "first" and "second" are used only to distinguish the objects being described; they imply neither an order nor a limit on the number of devices, and place no restriction on the embodiments of this application. The term "connect" refers to any connection manner that enables communication between devices, whether direct or indirect, and is not limited in the embodiments of this application.
Referring to fig. 1, fig. 1 is a schematic view of the framework of a cloud conference platform. As shown in fig. 1, the framework has a plurality of cloud terminals connected together through a cloud conference server. Each cloud terminal may specifically include a processor, a memory, a display screen, a communication circuit, an audio component, and a camera component; these components can be connected through a bus or in other ways, and this application does not limit the specific connection manner. A cloud terminal can connect to the cloud conference platform through a wired network or, of course, through the wireless network of a wireless communication system.
The wireless communication system may be: a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an Advanced Long Term Evolution (LTE-A) system, a New Radio (NR) system, an evolution of the NR system, LTE on unlicensed spectrum (LTE-U), NR on unlicensed spectrum (NR-U), a Universal Mobile Telecommunications System (UMTS), or another next-generation communication system.
Referring to fig. 2, fig. 2 provides a schematic flow diagram of a method for implementing associated screen projection in a cloud conference. The method shown in fig. 2 may be executed under the framework of the cloud conference platform shown in fig. 1; specifically, it may be executed by a cloud terminal of that platform, or by the cloud conference server. This embodiment takes the cloud conference server as an example, but in practice the method may also be executed by a cloud terminal. As shown in fig. 2, the method includes the following steps:
step S201, a cloud conference server receives a shared file to be shared by a cloud terminal of a speaker in a cloud conference;
the shared file may be a file in any format, including but not limited to: PPT, word, WPS, and the like. The receiving method may be performed in a wired manner or in a wireless manner, and a specific receiving method may be determined according to a connection manner between the cloud terminal and the cloud conference server, and the specific receiving method is not limited in the present application.
Step S202, the cloud conference server identifies the shared file to determine the part of the content concerned by the speaker and the part of the content not concerned by the speaker in the shared file;
the concerned partial content can be the content that the speaker is speaking or the content that the speaker needs to speak within a set time, and the content can be any one or any combination of characters, pictures and videos.
For example, the focused partial content may be a content that the speaker marks the content to be explained at this time, and may be a content in another format.
Step S203, the cloud conference server compresses the concerned content with a first compression algorithm to obtain a first compressed file, compresses the unconcerned content with a second compression algorithm to obtain a second compressed file, and shares the first compressed file and the second compressed file to the other (non-speaker) cloud terminals in the cloud conference for screen projection;
Illustratively, the second compression algorithm compresses more heavily (produces smaller output) than the first compression algorithm.
For example, the first and second compression algorithms may be the same algorithm run with different settings, with the compression ratio of the second higher than that of the first; they may also be two different algorithms. Any general-purpose compression algorithm can be used, and this application limits neither the specific form of the compression algorithm nor the specific compression procedure.
For example, if a 100 Mb (megabit) file is compressed to 50 Mb by the first compression algorithm, the second compression algorithm only needs to produce an output smaller than 50 Mb, say 40 Mb, to be considered the algorithm with the larger compression amount.
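As an illustrative sketch only (the patent does not specify a particular compressor), the two-tier scheme could be realized by running a general-purpose compressor at two different levels; here zlib is used, with the level numbers as assumed stand-ins for the first and second compression algorithms:

```python
import zlib

def compress_shared_file(focused: bytes, unfocused: bytes) -> tuple[bytes, bytes]:
    """Compress concerned content lightly and unconcerned content heavily.

    Levels 1 and 9 are illustrative stand-ins for the patent's first and
    second compression algorithms; the only requirement is that the second
    algorithm compresses more than the first.
    """
    first_compressed = zlib.compress(focused, level=1)     # first algorithm: fast, light
    second_compressed = zlib.compress(unfocused, level=9)  # second algorithm: higher ratio
    return first_compressed, second_compressed

data = b"conference slide content " * 1000
first, second = compress_shared_file(data, data)
# For the same input, the heavier setting should not produce a larger output.
assert len(second) <= len(first)
assert len(first) < len(data)
```

In a real deployment the two outputs would be sent to the viewer terminals along with which screen region each one belongs to.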
In summary, the cloud conference server receives a file to be shared by the speaker's cloud terminal in a cloud conference; it analyzes the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; it compresses the concerned content with a first compression algorithm to obtain a first compressed file, compresses the unconcerned content with a second compression algorithm to obtain a second compressed file, and shares both compressed files to the other cloud terminals in the cloud conference. Because the unconcerned content is compressed more heavily, the network bandwidth it occupies is reduced as far as possible without affecting the speaker's presentation; the smaller per-frame data volume lowers transmission delay and network traffic, improving the smoothness of the cloud conference, the conference quality, and the user experience.
For example, sharing the first compressed file and the second compressed file to the other cloud terminals in the cloud conference may specifically include:
the cloud conference server acquires network delay of other cloud terminals, and dynamically allocates transmission priorities of the first compressed file and the second compressed file at the other cloud terminals according to the network delay.
For example, dynamically allocating the transmission priorities of the first compressed file and the second compressed file at the other cloud terminals according to the network delay may specifically include:
The other cloud terminals are arranged in ascending order of network delay to obtain a first sequence, and their transmission priorities are set following that sequence: the terminal ranked first (lowest delay) has the highest transmission priority and is sent to first, and terminals ranked later have lower priorities and are sent to later.
For example, if the network delay of terminal 1 is 10 ms and the network delay of terminal 2 is 20 ms, the transmission priority of terminal 1 is higher than that of terminal 2.
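A minimal sketch of this ascending-delay ordering, assuming the per-terminal delays have already been measured (the terminal names are illustrative):

```python
def assign_transmission_priorities(delays_ms: dict[str, float]) -> list[str]:
    """Sort terminals by ascending network delay. An earlier position in the
    returned list means a higher transmission priority, i.e. that terminal
    receives the compressed files first."""
    return sorted(delays_ms, key=delays_ms.get)

order = assign_transmission_priorities({"terminal1": 10, "terminal2": 20, "terminal3": 5})
assert order == ["terminal3", "terminal1", "terminal2"]
```

Since Python's `sorted` is stable, terminals with identical measured delays keep their original relative order.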
For example, the cloud conference server analyzing the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with may specifically include:
The cloud conference server receives audio data collected by the speaker's cloud terminal, performs speech recognition on the audio data to obtain its text information, determines the content of the shared file that corresponds to the text information to be the concerned content, and determines the remaining content of the shared file to be the unconcerned content.
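The mapping from recognized text to concerned content could be sketched as below. The word-overlap matching is purely an illustrative assumption; the patent only states that the content corresponding to the text information is the concerned content:

```python
def find_concerned_content(pages: list[str], recognized_text: str) -> tuple[list[int], list[int]]:
    """Split the shared file's pages into concerned / unconcerned index sets
    by checking which pages share words with the recognized speech. The
    overlap heuristic is a hypothetical stand-in for whatever matching the
    server actually performs."""
    words = set(recognized_text.lower().split())
    concerned, unconcerned = [], []
    for idx, page in enumerate(pages):
        if words & set(page.lower().split()):  # any shared word -> concerned
            concerned.append(idx)
        else:
            unconcerned.append(idx)
    return concerned, unconcerned

pages = ["quarterly revenue grew", "team photo", "revenue forecast"]
c, u = find_concerned_content(pages, "now look at revenue")
assert c == [0, 2] and u == [1]
```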
For example, the speech recognition that turns the audio data into text information may use an LSTM-based method, which may specifically include:
forming from the audio data the LSTM input X_t for each time step (where t denotes the time index), and applying the following formulas to obtain the text information of the audio data.
The LSTM comprises a forget gate, an input gate, and an output gate, corresponding to three calculations whose formulas are as follows:
Forget gate: f_t = σ(h_{t-1} · X_t + b_f)

Input gate:

i_t = σ(h_{t-1} · X_t + b_i)

C'_t = tanh(h_{t-1} · X_t + b_c)

Output gate:

O_t = σ(h_{t-1} · X_t + b_o)

h_t = O_t · tanh(C_t)

where C_t = C_{t-1} · f_t + i_t · C'_t.

Above, b_f denotes the bias of f_t, a constant; similarly, b_i, b_c, and b_o denote the biases of the corresponding formulas, and O_t denotes the output at time t.
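The gate computations above can be sketched as follows. This is a scalar toy version that follows the patent's formulas literally (each gate sees h_{t-1}·X_t plus a bias); a practical LSTM would also use learned weight matrices, which these formulas omit:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t: float, h_prev: float, c_prev: float,
              b_f: float, b_i: float, b_c: float, b_o: float) -> tuple[float, float]:
    """One step of the simplified (scalar) LSTM as written in the patent.
    The scalar biases are illustrative placeholders for trained parameters."""
    f_t = sigmoid(h_prev * x_t + b_f)       # forget gate
    i_t = sigmoid(h_prev * x_t + b_i)       # input gate
    c_cand = math.tanh(h_prev * x_t + b_c)  # candidate cell state C'_t
    o_t = sigmoid(h_prev * x_t + b_o)       # output gate
    c_t = c_prev * f_t + i_t * c_cand       # new cell state C_t
    h_t = o_t * math.tanh(c_t)              # new hidden state / output h_t
    return h_t, c_t

h, c = 0.0, 0.0
for x in [0.5, -0.2, 0.8]:  # toy input sequence X_t
    h, c = lstm_step(x, h, c, b_f=0.1, b_i=0.1, b_c=0.1, b_o=0.1)
# Since o_t is in (0, 1) and tanh is in (-1, 1), h always stays in (-1, 1).
assert -1.0 < h < 1.0
```

A real recognizer would feed the sequence of h_t values to a classifier that emits text tokens; that stage is outside what the formulas here describe.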
For example, the method may further include:
obtaining the output result of the audio data at each time step and computing a confidence for each output result; finding any confidence lower than a first threshold; for the corresponding time step i, extracting h_{i-1} and the two confidences at times i+1 and i-1, and taking whichever of time i+1 and time i-1 has the higher confidence. If the higher confidence belongs to time i+1 (or time i-1), the input data of that time is combined with the input data of time i as new input data, and a new output result is computed with the formulas above; if the confidence of the new output result is higher than the average of the confidences at the two merged times, the new output result replaces the output results at those times.
By merging adjacent inputs in this way, the accuracy of the output results is improved, and thus the recognition accuracy of the audio data can be improved.
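A sketch of this merging heuristic, under the assumption that `rerun` stands in for re-running the recognition formulas on the combined input (the callback and the data shapes are illustrative, not from the patent):

```python
def merge_low_confidence(outputs: list[str], confs: list[float], inputs: list[float],
                         threshold: float, rerun) -> tuple[list[str], list[float]]:
    """For each time step whose confidence falls below `threshold`, pick the
    higher-confidence neighbor (i-1 or i+1), combine its input with step i's
    input, and re-recognize via `rerun`; keep the new result only if its
    confidence beats the average of the two merged steps."""
    outputs, confs = outputs[:], confs[:]
    for i in range(1, len(confs) - 1):  # boundary steps lack one neighbor
        if confs[i] >= threshold:
            continue
        j = i + 1 if confs[i + 1] >= confs[i - 1] else i - 1  # better neighbor
        new_out, new_conf = rerun(inputs[i] + inputs[j])
        if new_conf > (confs[i] + confs[j]) / 2:              # accept the merge
            outputs[i] = outputs[j] = new_out
            confs[i] = confs[j] = new_conf
    return outputs, confs

outs, cfs = merge_low_confidence(
    ["a", "b", "c"], [0.9, 0.4, 0.8], [1.0, 2.0, 3.0],
    threshold=0.5, rerun=lambda x: ("bc", 0.95))
# Step 1 (conf 0.4) merges with its better neighbor, step 0 (conf 0.9).
assert outs == ["bc", "bc", "c"]
```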
For example, sharing the first compressed file and the second compressed file to the other cloud terminals in the cloud conference for screen projection may specifically include:
sharing the first compressed file and the second compressed file to the other cloud terminals and instructing them to decompress the files into a first file and a second file and display both in split-screen mode, with the first file placed in the central area and the second file in the edge area.
As shown in fig. 4, suppose the first file has 2 pictures (picture 1 and picture 2) and the second file has 4 pictures (pictures 3, 4, 5, and 6); the 2 pictures can be placed on the two sides of the central area of the display, and the other 4 pictures on the 4 edges around the central area.
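The fig. 4 layout could be computed as in the following sketch; the slot names are assumed for illustration and are not taken from the patent:

```python
def split_screen_layout(first_pics: list[str], second_pics: list[str]) -> dict[str, str]:
    """Assign concerned pictures to central slots and unconcerned pictures to
    edge slots, mirroring the fig. 4 example (2 central, 4 edge pictures)."""
    central_slots = ["center-left", "center-right"]
    edge_slots = ["top", "bottom", "left", "right"]
    layout = {}
    for slot, pic in zip(central_slots, first_pics):   # first file: central area
        layout[slot] = pic
    for slot, pic in zip(edge_slots, second_pics):     # second file: edge area
        layout[slot] = pic
    return layout

layout = split_screen_layout(["pic1", "pic2"], ["pic3", "pic4", "pic5", "pic6"])
assert layout["center-left"] == "pic1" and layout["top"] == "pic3"
```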
For example, the method may further include:
if the cloud conference server receives indication information that a second cloud terminal (any one of the other cloud terminals) has clicked on the edge area, it determines from the indication information the identifier of a picture in the second file, compresses the picture so identified with the first compression algorithm, and sends the result to the second cloud terminal.
With this technical scheme, when any terminal pays attention to a particular picture, that picture can be retransmitted at lower compression, further improving the clarity of the picture being attended to.
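A sketch of this click-to-recompress flow, assuming an indication message that carries a picture identifier (the message fields and the use of zlib level 1 as the "first algorithm" are illustrative assumptions):

```python
import zlib

def handle_edge_click(indication: dict, second_file_pictures: dict[str, bytes]) -> bytes:
    """When a viewer terminal clicks a picture in the edge area, re-compress
    that picture with the lighter first algorithm (illustrated by zlib level 1)
    and return the payload to send back to that terminal."""
    picture_id = indication["picture_id"]   # identifier resolved from the click
    raw = second_file_pictures[picture_id]
    return zlib.compress(raw, level=1)      # first compression algorithm: light

pics = {"pic3": b"\x00" * 1024}
payload = handle_edge_click({"picture_id": "pic3", "terminal": "second"}, pics)
assert zlib.decompress(payload) == pics["pic3"]
```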
Referring to fig. 3, fig. 3 is a schematic structural diagram of an implementation system for associated screen projection in a cloud conference, where the system includes:
a communication unit 301, configured to receive a file to be shared by the speaker's cloud terminal in a cloud conference;
a processing unit 302, configured to analyze the shared file to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; compress the concerned content with a first compression algorithm to obtain a first compressed file, compress the unconcerned content with a second compression algorithm to obtain a second compressed file, and share the first compressed file and the second compressed file to the other cloud terminals in the cloud conference for screen projection;
the second compression algorithm compresses more heavily (produces smaller output) than the first compression algorithm.
In the system provided by this application, a file to be shared by the speaker's cloud terminal in a cloud conference is received; the shared file is analyzed to determine which part of its content the speaker is concerned with and which part the speaker is not concerned with; the concerned content is compressed with a first compression algorithm to obtain a first compressed file, the unconcerned content is compressed with a second compression algorithm to obtain a second compressed file, and both compressed files are shared to the other cloud terminals in the cloud conference. Because the unconcerned content is compressed more heavily, the network bandwidth it occupies is reduced as far as possible without affecting the speaker's presentation; the smaller per-frame data volume lowers transmission delay and network traffic, improving the smoothness of the cloud conference, the conference quality, and the user experience.
Optionally,
the processing unit is specifically configured to acquire network delays of other cloud terminals, and dynamically allocate transmission priorities of the first compressed file and the second compressed file at the other cloud terminals according to the network delays.
Optionally,
the processing unit is specifically configured to arrange the other cloud terminals in an ascending order of network delay to obtain a first sequence, and set the transmission priorities of the other cloud terminals according to the order of the first sequence.
Optionally,
the communication unit is further configured to receive audio data collected by the speaker's cloud terminal;
the processing unit is further configured to perform speech recognition on the audio data to obtain its text information, determine the content of the shared file that corresponds to the text information to be the concerned content, and determine the remaining content of the shared file to be the unconcerned content.
Optionally,
the processing unit is further configured to form from the audio data the LSTM input X_t for each time step (where t denotes the time index), and to apply the following formulas to obtain the text information of the audio data;
the LSTM comprises a forget gate, an input gate, and an output gate, corresponding to three calculations whose formulas are as follows:
Forget gate: f_t = σ(h_{t-1} · X_t + b_f)

Input gate:

i_t = σ(h_{t-1} · X_t + b_i)

C'_t = tanh(h_{t-1} · X_t + b_c)

Output gate:

O_t = σ(h_{t-1} · X_t + b_o)

h_t = O_t · tanh(C_t)

where C_t = C_{t-1} · f_t + i_t · C'_t.

Above, b_f denotes the bias of f_t, a constant; similarly, b_i, b_c, and b_o denote the biases of the corresponding formulas, and O_t denotes the output at time t.
For example, the method may further include:
obtaining the output result of the audio data at each time step and computing a confidence for each output result; finding any confidence lower than a first threshold; for the corresponding time step i, extracting h_{i-1} and the two confidences at times i+1 and i-1, and taking whichever of time i+1 and time i-1 has the higher confidence. If the higher confidence belongs to time i+1 (or time i-1), the input data of that time is combined with the input data of time i as new input data, and a new output result is computed with the formulas above; if the confidence of the new output result is higher than the average of the confidences at the two merged times, the new output result replaces the output results at those times.
It is understood that the above-mentioned apparatus includes corresponding hardware and/or software modules for performing the respective functions. In combination with the exemplary algorithm steps of the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may use different methods to implement the described functionality for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware. It should be noted that, the division of the modules in this embodiment is schematic, and is only one logic function division, and another division manner may be available in actual implementation.
It should be noted that, for all relevant content of each step in the above method embodiment, reference may be made to the functional description of the corresponding functional module; details are not described herein again.
In the case where an integrated unit is employed, the user equipment may comprise a processing module and a storage module. The processing module may be configured to control and manage actions of the user equipment, and may be configured, for example, to support the electronic device in performing the steps performed by the obtaining unit, the communication unit, and the processing unit. The storage module may be used to store program code and data for the electronic device.
The processing module may be a processor or a controller, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein. The processor may also be a combination of computing devices, for example, a combination of one or more microprocessors, a combination of a digital signal processor (DSP) and a microprocessor, or the like. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not form a structural limitation on the user equipment. In other embodiments of the present application, the user equipment may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
Referring to fig. 5, fig. 5 shows an electronic device 50 provided in an embodiment of the present application. The electronic device 50 includes a processor 501, a memory 502, a communication interface 503, and a display 504, where the processor 501, the memory 502, and the communication interface 503 are connected to one another through a bus, and the display 504 is used for displaying content. Specifically:
the memory 502 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), and the memory 502 is used for related computer programs and data. The communication interface 503 is used to receive and transmit data.
The processor 501 may be one or more Central Processing Units (CPUs), and in the case that the processor 501 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
In some embodiments, the processor 501 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) card interface, and/or a universal serial bus (USB) interface. The USB interface is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface can be used for connecting a charger to charge the user equipment, and can also be used for transmitting data between the user equipment and a peripheral device. The USB interface can also be used for connecting earphones and playing audio through the earphones.
If the electronic device 50 is a cloud conference server or a cloud device, such as a smart phone, a computer device, or a server, the processor 501 in the electronic device 50 is configured to read the computer program code stored in the memory 502, and perform the following operations:
receiving a shared file to be shared by a cloud terminal of a speaker in a cloud conference; identifying the shared file to determine the part of the content concerned by the speaker and the part of the content not concerned by the speaker in the shared file;
compressing the concerned partial content by a first compression algorithm to obtain a first compressed file, compressing the non-concerned partial content by a second compression algorithm to obtain a second compressed file, and sharing the first compressed file and the second compressed file with the other cloud terminals in the cloud conference for screen projection; the second compression algorithm has a higher compression ratio than the first compression algorithm.
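As a minimal sketch of this two-tier compression, zlib compression levels stand in for the patent's unnamed first and second algorithms; this level-to-algorithm mapping is an assumption for illustration only.

```python
import zlib

def compress_for_projection(attended: bytes, unattended: bytes) -> tuple:
    # First algorithm: light compression for the content the speaker is
    # focused on. Second algorithm: heavier compression (higher ratio)
    # for the content the speaker is not focused on.
    first_compressed = zlib.compress(attended, 1)
    second_compressed = zlib.compress(unattended, 9)
    return first_compressed, second_compressed
```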
Wherein the sharing of the first compressed file and the second compressed file to other cloud terminals in the cloud conference specifically includes:
acquiring the network delays of the other cloud terminals, and dynamically allocating, according to the network delays, the transmission priorities of the first compressed file and the second compressed file at the other cloud terminals.
The dynamically allocating the transmission priorities of the first compressed file and the second compressed file at other cloud terminals according to the network delay specifically includes:
arranging the other cloud terminals in ascending order of network delay to obtain a first sequence, and setting the transmission priorities of the other cloud terminals according to the order of the first sequence.
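This priority assignment can be sketched as follows; the terminal names and the numeric priority convention (0 = highest priority) are illustrative assumptions.

```python
def assign_priorities(delays_ms: dict) -> dict:
    # Arrange terminals in ascending order of network delay (the "first
    # sequence"), then assign transmission priority in that order, so the
    # lowest-delay terminal is served first.
    first_sequence = sorted(delays_ms, key=delays_ms.get)
    return {terminal: rank for rank, terminal in enumerate(first_sequence)}
```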
Wherein, the identifying the shared file to determine the part of the content concerned by the speaker and the part of the content not concerned by the speaker in the shared file specifically includes:
receiving audio data collected by the cloud terminal of the speaker, performing voice recognition on the audio data to determine text information of the audio data, determining, from the shared file, the content corresponding to the text information as the partial content concerned by the speaker, and determining the other content of the shared file as the partial content not concerned by the speaker.
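A sketch of the attended/unattended split, under the assumption that "content corresponding to the text information" means simple word overlap between the recognized speech and each paragraph; the patent does not fix the matching rule, so this matcher is illustrative.

```python
def split_shared_file(paragraphs, recognized_text):
    # Paragraphs that share at least one word with the speaker's recognized
    # speech are treated as the attended part; the rest are unattended.
    spoken_words = set(recognized_text.lower().split())
    attended, unattended = [], []
    for paragraph in paragraphs:
        if spoken_words & set(paragraph.lower().split()):
            attended.append(paragraph)
        else:
            unattended.append(paragraph)
    return attended, unattended
```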
For all relevant content of each scenario in the method embodiment, reference may be made to the functional description of the corresponding functional module; details are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a network device, the method flow shown in fig. 2 is implemented.
An embodiment of the present application further provides a computer program product, and when the computer program product runs on a terminal, the method flow shown in fig. 2 is implemented.
An embodiment of the present application also provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the embodiment shown in fig. 2.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It will be appreciated that the electronic device, in order to carry out the functions described above, may comprise corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules referred to are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division of the units is only one type of logical function division, and other divisions may be used in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other various media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include: a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Claims (10)
1. A method for realizing associated screen projection in a cloud conference is characterized by comprising the following steps:
the method comprises the steps that a cloud conference server receives a shared file to be shared by a cloud terminal of a speaker in a cloud conference;
the cloud conference server identifies the shared file to determine the part of the content concerned by the speaker and the part of the content not concerned by the speaker in the shared file;
the cloud conference server compresses the concerned partial content through a first compression algorithm to obtain a first compressed file, compresses the non-concerned partial content through a second compression algorithm to obtain a second compressed file, and shares the first compressed file and the second compressed file to other cloud terminals in the cloud conference for screen projection;
the second compression algorithm has a higher compression ratio than the first compression algorithm.
2. The method according to claim 1, wherein the sharing the first compressed file and the second compressed file to other cloud terminals in the cloud conference specifically comprises:
acquiring the network delays of the other cloud terminals, and dynamically allocating, according to the network delays, the transmission priorities of the first compressed file and the second compressed file at the other cloud terminals.
3. The method of claim 2, wherein dynamically assigning the transmission priorities of the first and second compressed files at other cloud terminals according to the network latency comprises:
arranging the other cloud terminals in ascending order of network delay to obtain a first sequence, and setting the transmission priorities of the other cloud terminals according to the order of the first sequence.
4. The method according to claim 1, wherein the identifying the shared file to determine the part of the content concerned by the speaker and the part of the content not concerned by the speaker in the shared file specifically comprises:
receiving audio data collected by the cloud terminal of the speaker, performing voice recognition on the audio data to determine text information of the audio data, determining, from the shared file, the content corresponding to the text information as the partial content concerned by the speaker, and determining the other content of the shared file as the partial content not concerned by the speaker.
5. A system for realizing associated screen projection in a cloud conference is characterized by comprising:
the communication unit is used for receiving a shared file to be shared by a cloud terminal of a speaker in a cloud conference;
the processing unit is used for identifying the shared file to determine the partial content concerned by the speaker and the partial content not concerned by the speaker in the shared file; compressing the concerned partial content through a first compression algorithm to obtain a first compressed file, compressing the non-concerned partial content through a second compression algorithm to obtain a second compressed file, and sharing the first compressed file and the second compressed file with the other cloud terminals in the cloud conference for screen projection;
the second compression algorithm has a higher compression ratio than the first compression algorithm.
6. The system of claim 5,
the processing unit is specifically configured to acquire network delays of other cloud terminals, and dynamically allocate transmission priorities of the first compressed file and the second compressed file at the other cloud terminals according to the network delays.
7. The system of claim 6,
the processing unit is specifically configured to arrange the other cloud terminals in an ascending order of network delay to obtain a first sequence, and set the transmission priorities of the other cloud terminals according to the sequence of the first sequence.
8. The system of claim 5,
the communication unit is also used for receiving audio data collected by a cloud terminal of a speaker;
the processing unit is further configured to perform voice recognition on the audio data to determine text information of the audio data, determine, from the shared file, the content corresponding to the text information as the partial content concerned by the speaker, and determine the other content of the shared file as the partial content not concerned by the speaker.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-4.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a user equipment, performs the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210919844.0A CN115334053B (en) | 2022-08-03 | 2022-08-03 | Method for realizing associated screen projection in cloud conference and related products |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115334053A true CN115334053A (en) | 2022-11-11 |
CN115334053B CN115334053B (en) | 2023-07-18 |
Family
ID=83919767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210919844.0A Active CN115334053B (en) | 2022-08-03 | 2022-08-03 | Method for realizing associated screen projection in cloud conference and related products |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115334053B (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200230505A1 (en) * | 2002-12-10 | 2020-07-23 | Sony Interactive Entertainment America Llc | Video Compression System and Method for Compensating for Bandwidth Limitations of a Communication Channel |
US20040111755A1 (en) * | 2002-12-10 | 2004-06-10 | Perlman Stephen G. | Apparatus and method for wireless video gaming |
US20100302405A1 (en) * | 2005-08-16 | 2010-12-02 | Konica Minolta Holdings, Inc. | Image sensing apparatus and image processing method |
CN102855059A (en) * | 2012-08-21 | 2013-01-02 | 东莞宇龙通信科技有限公司 | Terminal and information sharing method |
CN103561259A (en) * | 2013-07-10 | 2014-02-05 | 杭州云本科技有限公司 | Network conference visual quality automatic evaluation method for application sharing services |
CN105812714A (en) * | 2016-03-18 | 2016-07-27 | 浙江万朋教育科技股份有限公司 | Data compression method for shared PPT document pages |
CN109525802A (en) * | 2018-11-27 | 2019-03-26 | 平安科技(深圳)有限公司 | A kind of video stream transmission method and device |
CN111954051A (en) * | 2020-02-11 | 2020-11-17 | 华为技术有限公司 | Method, equipment and system for transmitting video and audio |
CN111954028A (en) * | 2020-10-19 | 2020-11-17 | 深圳乐播科技有限公司 | Screen projection method, device and equipment of audio data and storage medium |
CN113204687A (en) * | 2020-11-10 | 2021-08-03 | 摩赛恩科技(苏州)有限公司 | Automatic mass spectrum data uploading method and terminal equipment |
CN112422591A (en) * | 2021-01-25 | 2021-02-26 | 北京拓课网络科技有限公司 | Method and device for transmitting video stream data and electronic equipment |
CN114679437A (en) * | 2022-03-11 | 2022-06-28 | 阿里巴巴(中国)有限公司 | Teleconference method, data interaction method, device, and computer storage medium |
CN114816308A (en) * | 2022-06-28 | 2022-07-29 | 深圳乐播科技有限公司 | Information partition display method and related equipment |
CN114827134A (en) * | 2022-07-01 | 2022-07-29 | 深圳乐播科技有限公司 | Differentiated pushing method, related device and display method for cloud conference desktop |
Non-Patent Citations (3)
Title |
---|
ILKER HAMZAOGLU: "Low Power Digital Video Compression Hardware Design", 2016 International Conference on Design and Technology of Integrated Systems in Nanoscale Era (DTIS)
LIU Xiaoyu: "Research and Application of Network Video Conference ***", China Master's Theses Full-text Database (Information Science and Technology)
TIAN Juncheng; HU Kun; CAO Hongyu; SONG Zhanwei: "Development of a Cloud Push Projection *** for Mobile Terminal Image Information", Journal of Jilin University (Information Science Edition), no. 03
Also Published As
Publication number | Publication date |
---|---|
CN115334053B (en) | 2023-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200068635A1 (en) | Data-stream allocation method for link aggregation and related devices | |
WO2019042169A1 (en) | Resource allocation method and related products | |
WO2019042180A1 (en) | Resource allocation method and related product | |
US10165058B2 (en) | Dynamic local function binding apparatus and method | |
US9311920B2 (en) | Voice processing method, apparatus, and system | |
WO2019072208A1 (en) | Application running control method and device | |
US11182210B2 (en) | Method for resource allocation and terminal device | |
CN113766270B (en) | Video playing method, system, server, terminal equipment and electronic equipment | |
WO2019072180A1 (en) | Method and apparatus for allocating resources to application | |
CN114996168A (en) | Multi-device cooperative test method, test device and readable storage medium | |
CN112463391B (en) | Memory control method, memory control device, storage medium and electronic equipment | |
CN115119228A (en) | Method for reporting Channel State Information (CSI) and related product | |
CN113222806A (en) | Storage allocation method and system for big data | |
CN115334053B (en) | Method for realizing associated screen projection in cloud conference and related products | |
CN112965922A (en) | Method and system for enhancing AI performance of intelligent terminal | |
CN106330504B (en) | Method for realizing application and service controller | |
CN112165572A (en) | Image processing method, device, terminal and storage medium | |
US11012372B2 (en) | Electronic apparatus and method for control thereof | |
CN107426114B (en) | Resource allocation method and system | |
CN112887155B (en) | QoS (quality of service) associated information synchronization method and related product | |
CN115361569B (en) | Dynamic frame screen projection method in cloud conference and related products | |
CN111432384B (en) | Large-data-volume audio Bluetooth real-time transmission method for equipment with recording function | |
CN114641031A (en) | Method for reporting Channel State Information (CSI) and related product | |
CN112558885B (en) | Memory using method of functional mobile phone and related product | |
CN112203039A (en) | Processing method and device for online conference, electronic equipment and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||