CN105898180B - Method and terminal for optimizing video call display


Info

Publication number
CN105898180B
Authority
CN
China
Prior art keywords
image data
rate
video call
terminal
data
Prior art date
Legal status
Active
Application number
CN201610485270.5A
Other languages
Chinese (zh)
Other versions
CN105898180A (en)
Inventor
刘小猛
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201610485270.5A
Publication of CN105898180A
Application granted
Publication of CN105898180B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/015: High-definition television systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method and a terminal for optimizing video call display. The method comprises the following steps: acquiring rate information of the current network, the rate information comprising an uplink rate and a downlink rate; if the uplink rate is lower than a preset minimum threshold, filtering and collecting image data in the video call; and if the downlink rate is lower than a preset minimum threshold, adjusting the display window of the video call according to the real-time downlink rate. By reducing the collection of background sub-data, the invention reduces the amount of data that needs to be transmitted and thereby optimizes the image display of the video call; the display window of the video call is also adjusted according to the real-time downlink rate so that the displayed image does not look blurred because the picture is too large.

Description

Method and terminal for optimizing video call display
Technical Field
The invention relates to the technical field of video calls, in particular to a method and a terminal for optimizing video call display.
Background
With the continuous development of VoLTE technology, video calling is gradually becoming a main mode of communication between people. A video call is a session in which image data is transmitted and received in the form of LTE packets. In an LTE network, data is transmitted and received fast enough to fully meet the requirement of transmitting image data in real time; however, when a terminal is located in an area with a weak LTE signal, the transmission of LTE data packets may become slow, and may even fall below the basic requirement of a video call, so that the video call image appears blurred, choppy and unsmooth, or even breaks up on the screen. This is because, when the current LTE signal is poor, the data packets received by the terminal from the network may suffer from packet loss, a high error rate and the like, so that the transmitted image cannot fully reproduce the quality of the original image.
Disclosure of Invention
The invention mainly aims to provide a method and a terminal for optimizing video call display, so as to solve the problem that, when the LTE signal is poor, the transmitted image cannot fully reproduce the quality of the original image.
In order to achieve the above object, the present invention provides a method for optimizing video call display, which is characterized by comprising the steps of: acquiring rate information of a current network, wherein the rate information comprises an uplink rate and a downlink rate; if the uplink rate is lower than a preset minimum threshold, filtering and collecting image data in the video call; and if the downlink rate is lower than a preset minimum threshold, adjusting a display window of the video call according to the real-time downlink rate.
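For illustration only, the following sketch shows the rate-based branching described above. The 1 Mbps threshold (the example value used later in the description), the network and call objects and their method names are assumptions for illustration, not part of the claimed method.

```python
# Illustrative sketch of the rate-based branching; object names and the
# rate-probing methods are assumptions, not part of the claimed method.
MIN_RATE_MBPS = 1.0  # preset minimum threshold (example value from the description)

def optimize_video_call(network, call):
    uplink = network.get_uplink_rate_mbps()
    downlink = network.get_downlink_rate_mbps()
    if uplink < MIN_RATE_MBPS:
        # sending side: filter and collect the image data (keep the subject,
        # reduce the collection of background sub-data)
        call.filter_and_collect_image_data()
    if downlink < MIN_RATE_MBPS:
        # receiving side: adjust the display window according to the
        # real-time downlink rate
        call.adjust_display_window(downlink)
```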
Optionally, the image data includes main-body sub-data and background sub-data, and the filtering and collecting of the image data in the video call comprises reducing the collection of the background sub-data of the image data.
Optionally, after the filtering and collecting of the image data in the video call, the method further includes: performing lossless compression on the filtered and collected image data, using a maximum or moderate compression ratio according to the performance mode of the current terminal CPU; and increasing the transmission power of the current terminal and transmitting the losslessly compressed image data to the network.
Optionally, before the adjusting of the display window of the video call according to the real-time downlink rate, the method further includes: copying the acquired image data to a local cache, and compensating with the image data in the local cache when the current terminal verifies that received image data is erroneous.
Optionally, when the current network is a CA network, the method further includes: if the uplink rate is lower than a preset minimum threshold, copying the image data into at least two copies and sending them to a primary cell and at least one secondary cell respectively; and if the downlink rate is lower than a preset minimum threshold, receiving the image data from the primary cell and the at least one secondary cell for combined compensation.
Optionally, the current network is provided with a buffer for copying the image data.
In addition, to achieve the above object, the present invention further provides a terminal, including: a rate obtaining unit, configured to obtain rate information of a current network, where the rate information includes an uplink rate and a downlink rate; the image data processing unit is used for filtering and collecting image data in the video call when the uplink rate is lower than a preset minimum threshold; and the window adjusting unit is used for adjusting the display window of the video call according to the real-time downlink rate when the downlink rate is lower than a preset minimum threshold.
Optionally, the image data includes main-body sub-data and background sub-data, and the image data processing unit is configured to reduce the collection of the background sub-data in the image data.
Optionally, the terminal further comprises: an image compression unit, configured to perform lossless compression on the filtered and collected image data, using a maximum or moderate compression ratio according to the performance mode of the current terminal CPU; and a power increasing unit, configured to increase the transmission power of the current terminal and send the losslessly compressed image data to the network.
Optionally, the terminal further comprises: a local compensation unit, configured to copy the acquired image data to a local cache and, when the current terminal verifies that received image data is erroneous, compensate with the image data in the local cache; and a data copy compensation unit, configured to, when the current network is a CA network, copy the image data into at least two copies and send them to a primary cell and at least one secondary cell respectively if the uplink rate is lower than a preset minimum threshold, and to receive the image data from the primary cell and the at least one secondary cell for combined compensation if the downlink rate is lower than a preset minimum threshold.
According to the method and the terminal for optimizing the video call display, the image data in the video call are filtered and collected, so that redundant data are reduced as much as possible, and the video call display quality is improved.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention;
Fig. 2 is a diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic structural diagram of a terminal according to a first embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal according to a second embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal according to a third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a terminal according to a fourth embodiment of the present invention;
Fig. 7 is a flowchart illustrating a method for optimizing a video call display according to a fifth embodiment of the present invention;
Fig. 8 is a flowchart illustrating a method for optimizing a video call display according to a sixth embodiment of the present invention;
Fig. 9 is a flowchart illustrating a method for optimizing a video call display according to a seventh embodiment of the present invention;
Fig. 10 is a flowchart illustrating a method for optimizing a video call display according to an eighth embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 is a schematic hardware configuration of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an a/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. Elements of the mobile terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Also, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like. The broadcast receiving module 111 may receive a broadcast signal by using various types of broadcasting systems. In particular, the broadcast receiving module 111 may receive digital broadcasting by using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the forward link media (MediaFLO) data broadcasting system, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like. The broadcast receiving module 111 may be constructed to be suitable for various broadcasting systems that provide broadcast signals as well as the above-mentioned digital broadcasting systems. The broadcast signals and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless internet access technology to which the module relates may include WLAN (wireless LAN) (Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system). According to the current technology, the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, as will be described below in connection with a touch screen.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration, and when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user thereof. By providing such a tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data, and the multimedia module 1810 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, a slide-type mobile terminal among various types of mobile terminals, such as a folder-type, bar-type, swing-type, slide-type mobile terminal, and the like, will be described as an example for the sake of brevity. Accordingly, the present invention can be applied to any type of mobile terminal, and is not limited to a slide type mobile terminal.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
As shown in fig. 2, the terminal 100 may implement a video call through the LTE network 21, and may also implement a video call through a CA (carrier aggregation) network 22; generally, a CA network has a primary cell and at least one secondary cell.
Based on the above mobile terminal hardware structure and communication system, the present invention provides various embodiments of the method.
As shown in fig. 3, a first embodiment of the present invention proposes a terminal, including: a rate obtaining unit 11, configured to obtain rate information of the current network, where the rate information includes an uplink rate and a downlink rate; an image data processing unit 12, configured to filter and collect image data in a video call when the uplink rate is lower than a preset minimum threshold, for example 1 Mbps; and a window adjusting unit 13, configured to adjust the display window of the video call according to the real-time downlink rate when the downlink rate is lower than a preset minimum threshold, for example 1 Mbps.
Specifically, when the terminal is used as a sending end, the terminal obtains the current LTE uplink rate. If the current LTE uplink rate is lower than the minimum threshold of the image data transmission rate (1 Mbps), the image data processing unit 12 obtains image data through a camera, recognizes the image data, and identifies the main-body sub-data and the background sub-data in the image data according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, the image data processing unit 12 mainly collects the main-body sub-data in the image data and reduces the collection of the background sub-data.
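A minimal sketch of this "collect the subject, thin out the background" filtering, assuming a per-frame NumPy image and a boolean subject mask produced by some person or face detector; the mask source and the sampling step are illustrative assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def filter_frame(frame: np.ndarray, subject_mask: np.ndarray, keep_every: int = 4) -> np.ndarray:
    """Keep the main-body (portrait) pixels at full quality and sparsely sample the background.

    frame: H x W x 3 image; subject_mask: H x W boolean array, True where the portrait was
    recognised. How the mask is obtained (person/face detection) is outside this sketch,
    and keep_every is an arbitrary example of "reduced background collection".
    """
    filtered = np.zeros_like(frame)
    filtered[subject_mask] = frame[subject_mask]            # main-body sub-data, untouched
    coarse = np.zeros_like(frame)
    coarse[::keep_every, ::keep_every] = frame[::keep_every, ::keep_every]
    filtered[~subject_mask] = coarse[~subject_mask]         # background sub-data, heavily thinned
    return filtered
```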
When the terminal is used as a receiving end, the window adjusting unit 13 adjusts the display window of the video call according to the real-time downlink rate. When the terminal is in a complex signal environment, that is, when the current network is unstable and the downlink rate varies greatly, the window adjusting unit 13 can set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate.
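One way the window adjustment could look, as a sketch; the proportional scaling rule and the 0.25 floor are assumptions, since the text only states that the window size follows the real-time downlink rate below the threshold:

```python
def display_window_scale(downlink_mbps: float, min_rate_mbps: float = 1.0) -> float:
    """Return a scale factor for the video-call window (1.0 = the fixed default size)."""
    if downlink_mbps >= min_rate_mbps:
        return 1.0                                   # sufficient rate: fixed-size window
    # below the threshold: shrink roughly in proportion to the real-time downlink rate,
    # with a floor so the window never collapses entirely (floor value is an assumption)
    return max(0.25, downlink_mbps / min_rate_mbps)
```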
In this embodiment, when the terminal is used as a sending end, the image data processing unit collects the main-body sub-data in the image data and reduces the collection of background sub-data, so as to reduce the amount of data to be transmitted and thereby optimize the image display of the video call. When the terminal serves as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not look blurred because the picture is too large.
As shown in fig. 4, a terminal according to a second embodiment of the present invention includes: a rate obtaining unit 11, configured to obtain rate information of a current network, where the rate information includes an uplink rate and a downlink rate; the image data processing unit 12 is configured to filter and collect image data in a video call when the uplink rate is lower than a preset minimum threshold, for example, 1 Mbps; the window adjusting unit 13 is configured to adjust a display window of the video call according to the real-time downlink rate when the downlink rate is lower than a preset minimum threshold, for example, 1 Mbps.
Specifically, when the terminal is used as a sending end, the terminal obtains the current LTE uplink rate. If the current LTE uplink rate is lower than the minimum threshold of the image data transmission rate (1 Mbps), the image data processing unit 12 obtains image data through a camera, recognizes the image data, and identifies the main-body sub-data and the background sub-data in the image data according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, the image data processing unit 12 mainly collects the main-body sub-data in the image data and reduces the collection of the background sub-data. When the terminal is used as a receiving end, the window adjusting unit 13 adjusts the display window of the video call according to the real-time downlink rate. When the terminal is in a complex signal environment, that is, when the current network is unstable and the downlink rate varies greatly, the window adjusting unit 13 can set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate.
Different from the previous embodiment, the present embodiment further includes an image compression unit 14, configured to perform lossless compression on the filtered and collected image data using a maximum or moderate compression ratio according to the performance mode of the current terminal CPU, and a power increasing unit 15, configured to increase the transmission power of the current terminal and send the losslessly compressed image data to the network.
Specifically, when the terminal is used as the sending end, the image compression unit 14 compresses the processed image data at a higher ratio to further reduce the amount of data that needs to be transmitted. In this embodiment, a lossless compression method is adopted, that is, the image quality is unchanged, for example high-definition, high-compression-ratio coding schemes such as MPEG or AVI. The specific compression format may be selected among the lossless compression algorithms according to the performance mode of the current terminal CPU: if the CPU is in a high-performance state, the algorithm with the maximum compression ratio is adopted; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data in the video call is itself small, it does not need to be compressed. In addition, the power increasing unit 15 in this embodiment transmits the compressed image data to the current network with increased transmission power, so that the amount of data to be transmitted per unit time can be reduced, which further reduces the probability of packet loss and the error rate.
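As a sketch of the "maximum vs. moderate compression ratio by CPU performance mode" selection, with zlib used purely as a stand-in lossless codec (the patent names MPEG/AVI-style coding; the mode strings and levels below are assumptions):

```python
import zlib

def choose_compression_level(cpu_mode: str) -> int:
    # Map the terminal's CPU performance mode to a lossless compression effort level
    # (zlib's 0-9 scale is used only for illustration).
    return 9 if cpu_mode == "high_performance" else 6   # maximum vs. moderate ratio

def compress_frame(raw_frame_bytes: bytes, cpu_mode: str) -> bytes:
    # Lossless: zlib.decompress() of the result is bit-identical to the input frame.
    return zlib.compress(raw_frame_bytes, level=choose_compression_level(cpu_mode))
```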
In this embodiment, when the terminal is used as a sending end, the image data processing unit collects the main-body sub-data in the image data and reduces the collection of background sub-data, thereby reducing the amount of data to be transmitted; the data volume is further reduced by compressing the image data, so that the image display of the video call is optimized; and the probability of packet loss and the error rate are reduced by increasing the transmission power. When the terminal serves as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not look blurred because the picture is too large.
As shown in fig. 5, a terminal according to a third embodiment of the present invention includes: a rate obtaining unit 11, configured to obtain rate information of a current network, where the rate information includes an uplink rate and a downlink rate; the image data processing unit 12 is configured to filter and collect image data in a video call when the uplink rate is lower than a preset minimum threshold, for example, 1 Mbps; a window adjusting unit 13, configured to adjust a display window of the video call according to a real-time downlink rate when the downlink rate is lower than a preset minimum threshold, for example, 1 Mbps; the image compression unit 14 is used for performing lossless compression on the image data after filtering and collecting by adopting maximum or moderate compression ratio according to the performance mode of the current terminal CPU; a power increasing unit 15, configured to increase the transmission power of the current terminal, and send the lossless compressed image data to the network.
Specifically, when the terminal is used as a sending end, the terminal obtains the current LTE uplink rate. If the current LTE uplink rate is lower than the minimum threshold of the image data transmission rate (1 Mbps), the image data processing unit 12 obtains image data through a camera, recognizes the image data, and identifies the main-body sub-data and the background sub-data in the image data according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, the image data processing unit 12 mainly collects the main-body sub-data in the image data and reduces the collection of the background sub-data. The image compression unit 14 compresses the processed image data at a higher ratio to further reduce the amount of data that needs to be transmitted. In this embodiment, a lossless compression method is adopted, that is, the image quality is unchanged, for example high-definition, high-compression-ratio coding schemes such as MPEG or AVI. The specific compression format may be selected among the lossless compression algorithms according to the performance mode of the current terminal CPU: if the CPU is in a high-performance state, the algorithm with the maximum compression ratio is adopted; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data in the video call is itself small, it does not need to be compressed. In addition, the power increasing unit 15 in this embodiment transmits the compressed image data to the current network with increased transmission power, so that the amount of data to be transmitted per unit time can be reduced, which further reduces the probability of packet loss and the error rate.
When the terminal is used as a receiving end, the window adjusting unit 13 adjusts the display window of the video call according to the real-time downlink rate. When the terminal is in a complex signal environment, that is, when the current network is unstable and the downlink rate varies greatly, the window adjusting unit 13 can set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate.
Different from the previous embodiment, the present embodiment further includes a local compensation unit 16, configured to copy the acquired image data to a local cache when the terminal serves as a receiving end, and to compensate with the image data in the local cache when the current terminal verifies that received image data is erroneous. Specifically, the terminal checks each image data packet, and if there is an erroneous image data packet, the local compensation unit 16 compensates the displayed image data with the locally cached image data packets. The cached data packets, i.e. the image data in the local cache, are data whose verification integrity is higher than a certain proportion, buffered for about 5 seconds. Motion compensation is performed only for subsequently received data whose verification integrity is lower than the certain proportion, and the subsequent image data is predicted and compensated using the local image data; if the verification integrity of subsequently received data is higher than the certain proportion, that data replaces the original cached data.
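A compact sketch of this local-cache compensation, assuming a per-frame integrity score produced by the packet checks; the 5-second window comes from the text, while the 25 fps figure and the 0.9 integrity threshold are illustrative assumptions:

```python
from collections import deque

class LocalCompensator:
    def __init__(self, fps: int = 25, integrity_threshold: float = 0.9):
        self.cache = deque(maxlen=5 * fps)          # roughly 5 seconds of well-verified frames
        self.integrity_threshold = integrity_threshold

    def on_frame(self, frame, integrity: float):
        """Return the frame to display: the received frame if it verified well enough,
        otherwise the most recent well-verified frame from the local cache."""
        if integrity >= self.integrity_threshold:
            self.cache.append(frame)                # well-verified frame refreshes the cache
            return frame
        if self.cache:
            return self.cache[-1]                   # poorly verified frame: compensate from cache
        return frame                                # nothing cached yet: show it as-is
```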
In this embodiment, when the terminal is used as a sending end, the image data processing unit collects the main-body sub-data in the image data and reduces the collection of background sub-data, thereby reducing the amount of data to be transmitted; the data volume is further reduced by compressing the image data, so that the image display of the video call is optimized; and the probability of packet loss and the error rate are reduced by increasing the transmission power. When the terminal serves as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not look blurred because the picture is too large; the locally cached data is also used to compensate the displayed image, so as to improve the display effect.
As shown in fig. 6, a terminal according to a fourth embodiment of the present invention includes: a rate obtaining unit 11, configured to obtain rate information of the current network, where the rate information includes an uplink rate and a downlink rate; an image data processing unit 12, configured to filter and collect image data in a video call when the uplink rate is lower than a preset minimum threshold, for example 1 Mbps; a window adjusting unit 13, configured to adjust the display window of the video call according to the real-time downlink rate when the downlink rate is lower than a preset minimum threshold, for example 1 Mbps; an image compression unit 14, configured to perform lossless compression on the filtered and collected image data using a maximum or moderate compression ratio according to the performance mode of the current terminal CPU; a power increasing unit 15, configured to increase the transmission power of the current terminal and send the losslessly compressed image data to the network; and a local compensation unit 16, configured to copy the acquired image data to a local cache when the terminal serves as a receiving end, and to compensate with the image data in the local cache when the current terminal verifies that received image data is erroneous.
Specifically, when the terminal is used as a sending end, the terminal obtains the current LTE uplink rate. If the current LTE uplink rate is lower than the minimum threshold of the image data transmission rate (1 Mbps), the image data processing unit 12 obtains image data through a camera, recognizes the image data, and identifies the main-body sub-data and the background sub-data in the image data according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, the image data processing unit 12 mainly collects the main-body sub-data in the image data and reduces the collection of the background sub-data. The image compression unit 14 compresses the processed image data at a higher ratio to further reduce the amount of data that needs to be transmitted. In this embodiment, a lossless compression method is adopted, that is, the image quality is unchanged, for example high-definition, high-compression-ratio coding schemes such as MPEG or AVI. The specific compression format may be selected among the lossless compression algorithms according to the performance mode of the current terminal CPU: if the CPU is in a high-performance state, the algorithm with the maximum compression ratio is adopted; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data in the video call is itself small, it does not need to be compressed. In addition, the power increasing unit 15 in this embodiment transmits the compressed image data to the current network with increased transmission power, so that the amount of data to be transmitted per unit time can be reduced, which further reduces the probability of packet loss and the error rate.
When the terminal is used as a receiving end, the window adjusting unit 13 adjusts the display window of the video call according to the real-time downlink rate. When the terminal is in a complex signal environment, that is, when the current network is unstable and the downlink rate varies greatly, the window adjusting unit 13 can set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate. In addition, the terminal checks each received image data packet, and if there is an erroneous image data packet, the local compensation unit 16 compensates the displayed image data with the locally cached image data packets. The cached data packets, i.e. the image data in the local cache, are data whose verification integrity is higher than a certain proportion, buffered for about 5 seconds. Motion compensation is performed only for subsequently received data whose verification integrity is lower than the certain proportion, and the subsequent image data is predicted and compensated using the local image data; if the verification integrity of subsequently received data is higher than the certain proportion, that data replaces the original cached data.
Different from the previous embodiment, the present embodiment further includes a data copy compensation unit 17, configured to, when the current network is a CA network, copy the image data into at least two copies and send them to the primary cell and at least one secondary cell respectively if the uplink rate is lower than a preset minimum threshold, for example 1 Mbps, and to receive the image data from the primary cell and the at least one secondary cell for combined compensation if the downlink rate is lower than a preset minimum threshold, for example 1 Mbps. Specifically, after the data copy compensation unit copies and transmits the image data, data integration and compensation are performed on the VoLTE server, so that the packet loss rate and the bit error rate of the data can be further reduced. Because a current CA network generally has one primary cell and one secondary cell, the image data can be duplicated into two copies; subsequently, as the network continues to improve and the CA network supports a plurality of secondary cells, the data can be further divided into multiple paths for transmission, thereby reducing the influence of data loss on a single transmission path on the overall image display.
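A sketch of the duplicate-and-merge idea for a CA network; the cell objects, their send() method and the per-copy integrity flag are assumptions used only to illustrate sending identical copies over the primary and secondary cells and keeping the first intact copy on reception:

```python
from typing import Iterable, List, Optional, Tuple

def send_over_ca(packet: bytes, primary_cell, secondary_cells: Iterable) -> None:
    # Duplicate one image-data packet across the primary cell and every secondary cell.
    for cell in [primary_cell, *secondary_cells]:
        cell.send(packet)

def merge_copies(copies: List[Tuple[bytes, bool]]) -> Optional[bytes]:
    # Receiver side: keep the first copy whose integrity check passed.
    for data, passed_check in copies:
        if passed_check:
            return data
    return None  # every copy was damaged; fall back to local-cache compensation
```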
In this embodiment, when the terminal is used as a sending end, the image data processing unit collects the main-body sub-data in the image data and reduces the collection of background sub-data, thereby reducing the amount of data to be transmitted; the data volume is further reduced by compressing the image data, so that the image display of the video call is optimized; and the probability of packet loss and the error rate are reduced by increasing the transmission power. When the terminal serves as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not look blurred because the picture is too large; the locally cached data is also used to compensate the displayed image, so as to improve the display effect. When the current network is a CA network, the image data is copied into at least two copies that are sent to the primary cell and the at least one secondary cell respectively, so that the packet loss rate and the bit error rate of the data are further reduced.
It should be noted that, although not shown in the figures, in an implementation a buffer may also be set in the current network for copying the image data. In that case, when the terminal receives an image data packet, it can send a confirmation message to the network, and the network continues to send subsequent image data packets after receiving the confirmation message; when an image data packet expected by the terminal is lost, a message can be sent to the network to instruct the network to resend the lost data packet.
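A stop-and-wait style sketch of that acknowledge/retransmit exchange; the link object, its recv()/send() methods and the message tuples are assumptions, and a real implementation would likely pipeline packets rather than acknowledge them one at a time:

```python
def receive_with_ack(link, on_packet) -> None:
    # Runs for the duration of the call. The network-side buffer keeps copies of the
    # image data, so a lost or corrupted packet can simply be requested again.
    expected_seq = 0
    while True:
        seq, payload = link.recv()                  # blocks until the next packet (None payload on loss)
        if payload is None or seq != expected_seq:
            link.send(("RESEND", expected_seq))     # ask the network buffer to retransmit
            continue
        on_packet(payload)
        link.send(("ACK", seq))                     # the network sends the next packet after this ACK
        expected_seq = seq + 1
```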
As shown in fig. 7, the present invention further provides a method for optimizing a video call display, comprising the steps of:
s1, acquiring the speed information of the current network, wherein the speed information comprises an uplink speed and a downlink speed; specifically, if the current terminal is used as a sending end, the uplink rate of the current network is obtained; if the current terminal is used as a receiving end, the downlink rate of the current network is obtained;
if the obtained uplink rate of the current network is obtained, step S21 is entered, and it is determined whether the uplink rate is lower than a preset minimum threshold, for example, 1Mbps, if not, step S1 is returned to; if yes, filtering and collecting image data in the video call as shown in step S22;
if the obtained downlink rate of the current network is obtained, step S31 is entered, and it is determined whether the downlink rate is lower than a preset minimum threshold, for example, 1Mbps, if not, step S1 is returned to; if so, in step S33, the display window of the video call is adjusted according to the real-time downlink rate.
In step S22, image data may be obtained through a camera of the current terminal, the image data is recognized, and the main-body sub-data and the background sub-data in the image data are identified according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, filtering and collecting image data in a video call mainly means collecting the main-body sub-data in the image data and reducing the collection of the background sub-data.
In step S33, if the downlink rate varies greatly, the window adjusting unit 13 may set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate.
In this embodiment, when the terminal is used as a sending end, the main-body sub-data in the image data is collected and the collection of the background sub-data is reduced, so that the amount of data to be transmitted is reduced and the image display of the video call is optimized. When the terminal serves as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not look blurred because the picture is too large.
As shown in fig. 8, the present invention further provides a method for optimizing a video call display, comprising the steps of:
s1, acquiring the speed information of the current network, wherein the speed information comprises an uplink speed and a downlink speed; specifically, if the current terminal is used as a sending end, the uplink rate of the current network is obtained; if the current terminal is used as a receiving end, the downlink rate of the current network is obtained;
if the obtained uplink rate of the current network is obtained, step S21 is entered, and it is determined whether the uplink rate is lower than a preset minimum threshold, for example, 1Mbps, if not, step S1 is returned to; if yes, filtering and collecting image data in the video call as shown in step S22;
if the obtained downlink rate of the current network is obtained, step S31 is entered, and it is determined whether the downlink rate is lower than a preset minimum threshold, for example, 1Mbps, if not, step S1 is returned to; if so, in step S33, the display window of the video call is adjusted according to the real-time downlink rate.
In step S22, image data may be obtained through a camera of the current terminal, the image data is recognized, and the main-body sub-data and the background sub-data in the image data are identified according to the recognition result, where the main-body sub-data mainly refers to the data corresponding to the human image. When filtering and collecting the image data, the image data processing unit 12 reduces the light intake and reduces the acquisition of the background sub-data in the image data; that is, filtering and collecting image data in a video call mainly means collecting the main-body sub-data in the image data and reducing the collection of the background sub-data.
In step S33, if the downlink rate varies greatly, the window adjusting unit 13 may set a smaller video call window so that the displayed image does not look blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the fixed corresponding size. Generally, when the downlink rate is greater than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the fixed corresponding size; if the downlink rate is lower than the minimum threshold, for example 1 Mbps, the size of the display window is adjusted according to the downlink rate.
Unlike the previous embodiment, in the present embodiment, after step S22, the method further includes the steps of: s23, performing lossless compression on the image data after filtering and collecting by adopting maximum or moderate compression ratio according to the performance mode of the current terminal CPU; and S24, increasing the transmission power of the current terminal, and transmitting the image data after lossless compression to the network.
In step S23, different image compression methods have different compression coding ratios to the source data, and a certain data loss occurs during the compression process, in this embodiment, a lossless compression method is adopted, that is, the image quality of the image is not changed, such as MPEG, AVI and other high-definition high-compression-ratio compression coding algorithms, and the specific compression format may be in a lossless compression method according to the current terminal CPU performance mode, and if the image quality is in a high-performance state, the algorithm with the largest compression ratio is adopted in the lossless compression algorithm; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data itself in the video call is small, compression is not necessary. In step S24, the compressed image data is transmitted to the current network by increasing the transmission power, so that the data amount required to be transmitted per unit time can be reduced, and the probability of packet loss and the error rate can be further reduced.
In this embodiment, when the terminal acts as a sending end, the main subdata in the image data is collected and the collection of the background subdata is reduced, which reduces the amount of data to be transmitted; compressing the image data reduces this amount further, so the image display of the video call is optimized, and increasing the transmission power reduces the probability of packet loss and the error rate. When the terminal acts as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not appear blurred because the picture is too large.
As shown in fig. 9, the present invention further provides a method for optimizing a video call display, comprising the steps of:
S1, acquiring rate information of the current network, where the rate information includes an uplink rate and a downlink rate; specifically, if the current terminal acts as a sending end, the uplink rate of the current network is obtained; if the current terminal acts as a receiving end, the downlink rate of the current network is obtained;
if the uplink rate of the current network is obtained, the method proceeds to step S21, in which it is determined whether the uplink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S1; if so, the image data in the video call is filtered and collected, as shown in step S22; then, in step S23, lossless compression is performed on the filtered and collected image data with a maximum or moderate compression ratio according to the performance mode of the CPU of the current terminal; next, as shown in step S24, the transmission power of the current terminal is increased and the losslessly compressed image data is sent to the network.
If the downlink rate of the current network is obtained, the method proceeds to step S31, in which it is determined whether the downlink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S1; if so, in step S33 the display window of the video call is adjusted according to the real-time downlink rate.
In step S22, image data may be obtained through a camera of the current terminal, the image data is recognized, and the main subdata and the background subdata in the image data are distinguished according to the recognition result, where the main subdata mainly refers to the data corresponding to the human figure. When filtering and collecting the image data, the image data processing unit 12 reduces the amount of light received and reduces the acquisition of the background subdata in the image data; in other words, filtering and collecting the image data in the video call mainly means collecting the main subdata in the image data while reducing the collection of the background subdata. In step S23, different image compression methods apply different compression coding ratios to the source data, and some data loss may occur during compression. In this embodiment a lossless compression method is adopted, that is, one that does not change the image quality, such as MPEG, AVI and other high-definition, high-compression-ratio coding algorithms. The specific compression format may be chosen among lossless methods according to the performance mode of the CPU of the current terminal: if the CPU is in a high-performance state, the lossless algorithm with the largest compression ratio is adopted; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data in the video call is itself small, the audio need not be compressed. In step S24, the compressed image data is sent to the current network with increased transmission power; because the amount of data to be transmitted per unit time is reduced, the probability of packet loss and the error rate are further reduced.
In step S33, if the downlink rate fluctuates significantly, the window adjusting unit 13 may set a smaller video call window to prevent the displayed image from appearing blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the corresponding fixed size. In general, when the downlink rate is higher than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the corresponding fixed size; when the downlink rate is lower than the minimum threshold, the size of the display window is adjusted accordingly to match the downlink rate.
Unlike the previous embodiment, in the present embodiment the method further includes, before step S33, the following step: S32, copying the acquired image data to a local cache. In this way, when the current terminal detects erroneous image data during verification, the image data in the local cache can be used for compensation. Specifically, the terminal verifies each image data packet, and if an erroneous image data packet is found, the local compensation unit 16 compensates the displayed image data with the locally cached image data packets. The cached data packets, that is, the image data in the local cache, are packets whose verification integrity is higher than a certain proportion, retained for roughly 5 seconds. Motion compensation is performed only for subsequently received data whose verification integrity is lower than that proportion, in which case the subsequent image data is predicted and compensated from the local image data; if the verification integrity of the subsequently received data is higher than that proportion, it replaces the original cached data.
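The local cache and compensation of steps S32/S33 can be sketched as follows, with a CRC32 check standing in for the packet verification, a 5-second retention window as described above, and an assumed integrity threshold; the data structures and function names are illustrative.

    import time
    import zlib

    CACHE_SECONDS = 5.0        # cached packets are kept for roughly 5 seconds
    INTEGRITY_RATIO = 0.9      # assumed value for the "certain proportion" above

    cache = {}                 # frame_id -> (timestamp, payload)

    def on_packet(frame_id, payload, expected_crc, integrity):
        # integrity: fraction of this frame's packets that passed verification so far
        now = time.monotonic()
        for fid in [f for f, (t, _) in cache.items() if now - t > CACHE_SECONDS]:
            del cache[fid]                           # drop entries past the retention window
        if zlib.crc32(payload) == expected_crc and integrity >= INTEGRITY_RATIO:
            cache[frame_id] = (now, payload)         # good data replaces the cached copy
            return payload
        if cache:                                    # erroneous or incomplete data:
            latest = max(cache, key=lambda f: cache[f][0])
            return cache[latest][1]                  # compensate from the local cache
        return payload

    good = b"frame-1"
    assert on_packet(1, good, zlib.crc32(good), 1.0) == good
    assert on_packet(2, b"corrupted", 0, 0.5) == good   # bad packet compensated from cache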
In this embodiment, when the terminal acts as a sending end, the main subdata in the image data is collected and the collection of the background subdata is reduced, which reduces the amount of data to be transmitted; compressing the image data reduces this amount further, so the image display of the video call is optimized, and increasing the transmission power reduces the probability of packet loss and the error rate. When the terminal acts as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not appear blurred because the picture is too large, and the locally cached data is used to compensate the displayed image, improving the display effect.
As shown in figs. 9 and 10, the present invention further provides a method for optimizing a video call display, comprising the steps of:
S1, acquiring rate information of the current network, where the rate information includes an uplink rate and a downlink rate; specifically, if the current terminal acts as a sending end, the uplink rate of the current network is obtained; if the current terminal acts as a receiving end, the downlink rate of the current network is obtained;
if the uplink rate of the current network is obtained, the method proceeds to step S21, in which it is determined whether the uplink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S1; if so, the image data in the video call is filtered and collected, as shown in step S22; then, in step S23, lossless compression is performed on the filtered and collected image data with a maximum or moderate compression ratio according to the performance mode of the CPU of the current terminal; next, as shown in step S24, the transmission power of the current terminal is increased and the losslessly compressed image data is sent to the network.
If the downlink rate of the current network is obtained, the method proceeds to step S31, in which it is determined whether the downlink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S1; if so, in step S32 the acquired image data is copied to a local cache, and the method then proceeds to step S33, in which the display window of the video call is adjusted according to the real-time downlink rate.
In step S22, image data may be obtained through a camera of the current terminal, the image data is recognized, and the main subdata and the background subdata in the image data are distinguished according to the recognition result, where the main subdata mainly refers to the data corresponding to the human figure. When filtering and collecting the image data, the image data processing unit 12 reduces the amount of light received and reduces the acquisition of the background subdata in the image data; in other words, filtering and collecting the image data in the video call mainly means collecting the main subdata in the image data while reducing the collection of the background subdata. In step S23, different image compression methods apply different compression coding ratios to the source data, and some data loss may occur during compression. In this embodiment a lossless compression method is adopted, that is, one that does not change the image quality, such as MPEG, AVI and other high-definition, high-compression-ratio coding algorithms. The specific compression format may be chosen among lossless methods according to the performance mode of the CPU of the current terminal: if the CPU is in a high-performance state, the lossless algorithm with the largest compression ratio is adopted; if the performance is normal, an algorithm with a moderate compression ratio is adopted. It should be noted that, since the amount of audio data in the video call is itself small, the audio need not be compressed. In step S24, the compressed image data is sent to the current network with increased transmission power; because the amount of data to be transmitted per unit time is reduced, the probability of packet loss and the error rate are further reduced.
Step S32 is added so that, when the current terminal detects erroneous image data during verification, the image data in the local cache can be used for compensation. Specifically, the terminal verifies each image data packet, and if an erroneous image data packet is found, the local compensation unit 16 compensates the displayed image data with the locally cached image data packets. The cached data packets, that is, the image data in the local cache, are packets whose verification integrity is higher than a certain proportion, retained for roughly 5 seconds. Motion compensation is performed only for subsequently received data whose verification integrity is lower than that proportion, in which case the subsequent image data is predicted and compensated from the local image data; if the verification integrity of the subsequently received data is higher than that proportion, it replaces the original cached data. In step S33, if the downlink rate fluctuates significantly, the window adjusting unit 13 may set a smaller video call window to prevent the displayed image from appearing blurred because the picture is too large; if the downlink rate is stable, the window adjusting unit 13 displays the image using a window of the corresponding fixed size. In general, when the downlink rate is higher than the minimum threshold, for example 1 Mbps, the image is displayed using a window of the corresponding fixed size; when the downlink rate is lower than the minimum threshold, the size of the display window is adjusted accordingly to match the downlink rate.
As shown in fig. 10, unlike the previous embodiment, in this embodiment, if the current network is a CA (carrier aggregation) network, step S1 is embodied as S11, that is, rate information of the current CA network is obtained, where the rate information includes an uplink rate and a downlink rate. If the uplink rate of the current CA network is obtained, the method proceeds to step S21, in which it is determined whether the uplink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S11; if so, as shown in step S41, the image data is copied into at least two copies, which are sent to the primary cell and to at least one secondary cell respectively. If the downlink rate of the current CA network is obtained, the method proceeds to step S31, in which it is determined whether the downlink rate is lower than a preset minimum threshold, for example 1 Mbps; if not, the method returns to step S11; if so, as shown in step S51, the image data is received from the primary cell and from at least one secondary cell for merging and compensation. Specifically, in step S41, after the image data is copied and sent out, data integration and compensation are performed on the VOLTE server, which further reduces the packet loss rate and the error rate of the data. Because a current CA network generally has one primary cell and one secondary cell, the image data can be duplicated into two copies; as networks continue to improve and CA networks come to support multiple secondary cells, the data can be split across more paths for transmission, reducing the impact of data loss on any single path on the overall image display. In step S51, the terminal verifies the data packets received from the primary cell and the secondary cell, and if the packets received from one cell are found to be lost or to fail verification, they are compensated with the packets received from the other cell.
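Steps S41 and S51 can be sketched as duplicating each packet across the serving cells and, at the receiver, keeping whichever copy verifies; the lossy-path simulation, the cell names and the CRC32 check are assumptions made only for illustration.

    import random
    import zlib

    def send_duplicated(packet, cells):
        # Step S41: one copy of the image data per serving cell (primary plus secondaries).
        return {cell: bytes(packet) for cell in cells}

    def lossy_path(copy, loss_prob=0.3):
        # Toy stand-in for one radio path: the copy may simply be lost.
        return None if random.random() < loss_prob else copy

    def merge_received(copies, expected_crc):
        # Step S51: keep the first copy that arrives intact; the other path compensates.
        for cell, data in copies.items():
            if data is not None and zlib.crc32(data) == expected_crc:
                return data, cell
        return None, None          # every path failed; higher layers must recover

    packet = b"frame-0042"
    crc = zlib.crc32(packet)
    copies = {cell: lossy_path(copy)
              for cell, copy in send_duplicated(packet, ["PCell", "SCell1"]).items()}
    data, via = merge_received(copies, crc)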
In this embodiment, when the terminal acts as a sending end, the main subdata in the image data is collected and the collection of the background subdata is reduced, which reduces the amount of data to be transmitted; compressing the image data reduces this amount further, so the image display of the video call is optimized, and increasing the transmission power reduces the probability of packet loss and the error rate. When the terminal acts as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not appear blurred because the picture is too large, and the locally cached data is used to compensate the displayed image, improving the display effect. When the current network is a CA network, the image data is copied into at least two copies that are sent to the primary cell and to at least one secondary cell respectively, further reducing the packet loss rate and the bit error rate of the data.
In the method for optimizing video call display provided by the present invention, a cache region may also be set on the current network side for storing copies of the image data. In this way, when the terminal receives an image data packet, it sends a confirmation message to the network, and the network continues to send the subsequent image data packets only after receiving the confirmation; when an image data packet received by the terminal is lost, the terminal sends a message instructing the network to resend the lost data packet.
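A minimal sketch of the confirmation/retransmission scheme backed by the network-side cache region; the stop-and-wait loop, the retry limit and the toy link model are assumptions for illustration.

    def network_send(packets, deliver, max_retries=3):
        # The network side keeps a buffered copy of each image data packet and resends it
        # until the terminal's confirmation message arrives (or the retries run out).
        buffered = dict(enumerate(packets))        # the cache region holding the copies
        for seq, packet in buffered.items():
            for _ in range(1 + max_retries):
                if deliver(seq, packet):           # True models a received confirmation
                    break
            else:
                raise TimeoutError(f"packet {seq} was never acknowledged")

    # toy link that loses the first delivery attempt of every packet
    attempts = {}
    def flaky_link(seq, packet):
        attempts[seq] = attempts.get(seq, 0) + 1
        return attempts[seq] > 1

    network_send([b"p0", b"p1"], flaky_link)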
According to the method and the terminal for optimizing video call display provided by the present invention, when the terminal acts as a sending end, the main subdata in the image data is collected and the collection of the background subdata is reduced; the amount of data to be transmitted is further reduced by compressing the image data, so the image display of the video call is optimized, and increasing the transmission power reduces the probability of packet loss and the error rate. When the terminal acts as a receiving end, the display window of the video call is adjusted according to the real-time downlink rate, so that the displayed image does not appear blurred because the picture is too large, and the locally cached data is used to compensate the displayed image, improving the display effect. When the current network is a CA network, the image data is copied into at least two copies that are sent to the primary cell and to at least one secondary cell respectively, further reducing the packet loss rate and the bit error rate of the data.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A method for optimizing a video call display, comprising the steps of:
acquiring rate information of a current network, wherein the rate information comprises an uplink rate and a downlink rate;
if the uplink rate is lower than a preset minimum threshold, filtering and collecting image data in the video call;
if the downlink rate is lower than a preset minimum threshold, adjusting a display window of the video call according to the real-time downlink rate;
if the downlink rate is higher than a preset minimum threshold, displaying an image by adopting a window with a fixed corresponding size;
the image data comprises main body subdata and background subdata; the filtering and collecting of the image data in the video call comprises reducing the receiving of light and reducing the collection of background subdata of the image data.
2. The method for optimizing video call display according to claim 1, further comprising the step of, after filtering and collecting the image data in the video call:
performing lossless compression on the image data after filtering and collecting by adopting maximum or moderate compression ratio according to the performance mode of the current terminal CPU;
and increasing the transmission power of the current terminal and transmitting the lossless compressed image data to a network.
3. The method of optimizing a video call display of claim 1, further comprising, prior to said adjusting the display window of the video call based on the real-time downlink rate, the steps of: copying the acquired image data to a local cache, and compensating by using the image data in the local cache when the current terminal verifies the wrong image data.
4. The method of claim 1, wherein when the current network is a CA network, the method further comprises the steps of:
if the uplink rate is lower than a preset minimum threshold, copying the image data into at least two parts, and respectively sending the two parts to a main cell and at least one auxiliary cell;
and if the downlink rate is lower than a preset minimum threshold, receiving the image data from the main cell and at least one auxiliary cell for combination compensation.
5. The method of claim 1 or 4, wherein the current network is provided with a buffer for copying the image data.
6. A terminal, comprising:
a rate obtaining unit, configured to obtain rate information of a current network, where the rate information includes an uplink rate and a downlink rate;
the image data processing unit is used for filtering and collecting image data in the video call when the uplink rate is lower than a preset minimum threshold;
the window adjusting unit is used for adjusting a display window of the video call according to the real-time downlink rate when the downlink rate is lower than a preset minimum threshold;
the window adjusting unit is further configured to display an image by using a window with a fixed corresponding size when the downlink rate is higher than a preset minimum threshold;
the image data comprises main body subdata and background subdata; the image data processing unit is used for reducing the receiving of light and reducing the collection of background subdata of the image data.
7. The terminal of claim 6, further comprising:
the image compression unit is used for performing lossless compression on the image data after filtering and collecting by adopting maximum or moderate compression ratio according to the performance mode of the current terminal CPU;
and the power increasing unit is used for increasing the sending power of the current terminal and sending the image data after lossless compression to the network.
8. The terminal of claim 6, further comprising:
the local compensation unit is used for copying the acquired image data into a local cache, and compensating by using the image data in the local cache when the current terminal verifies the wrong image data;
a data copy compensation unit, configured to copy the image data into at least two copies if the uplink rate is lower than a preset minimum threshold when the current network is a CA network, and send the at least two copies to a primary cell and at least one secondary cell respectively;
and if the downlink rate is lower than a preset minimum threshold, receiving the image data from the main cell and at least one auxiliary cell for combination compensation.
CN201610485270.5A 2016-06-28 2016-06-28 Method and terminal for optimizing video call display Active CN105898180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610485270.5A CN105898180B (en) 2016-06-28 2016-06-28 Method and terminal for optimizing video call display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610485270.5A CN105898180B (en) 2016-06-28 2016-06-28 Method and terminal for optimizing video call display

Publications (2)

Publication Number Publication Date
CN105898180A CN105898180A (en) 2016-08-24
CN105898180B true CN105898180B (en) 2020-03-27

Family

ID=56719067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610485270.5A Active CN105898180B (en) 2016-06-28 2016-06-28 Method and terminal for optimizing video call display

Country Status (1)

Country Link
CN (1) CN105898180B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385616B (en) * 2018-12-28 2021-08-13 华为技术有限公司 Method and device for determining video transmission quality
CN111385463B (en) * 2018-12-29 2021-12-28 华为技术有限公司 Method for controlling camera, control device, network equipment and camera
CN112672090B (en) * 2020-12-17 2023-04-18 深圳随锐视听科技有限公司 Method for optimizing audio and video effects in cloud video conference

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101683291B1 (en) * 2010-05-14 2016-12-06 엘지전자 주식회사 Display apparatus and control method thereof
CN103064365A (en) * 2012-12-11 2013-04-24 南京理工大学 Agriculture comprehensive information remote wireless monitoring and pre-warning system
CN103916622B (en) * 2013-01-06 2020-06-23 联想(北京)有限公司 Call mode switching method and device
US9781385B2 (en) * 2013-03-15 2017-10-03 Blue Jeans Network User interfaces for presentation of audio/video streams
CN104519415B (en) * 2013-09-26 2018-04-03 成都鼎桥通信技术有限公司 Triggering method, audio frequency and video processing equipment and the network equipment adaptively adjusted
JP6505019B2 (en) * 2013-10-25 2019-04-24 シャープ株式会社 Terminal device and method in terminal device
CN105072345B (en) * 2015-08-25 2018-12-25 深圳市通则技术股份有限公司 Method for video coding and device
CN105578112B (en) * 2015-12-23 2020-06-09 南京秦淮紫云创益企业服务有限公司 Video call method and device

Also Published As

Publication number Publication date
CN105898180A (en) 2016-08-24


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant